OpenAI seeks to fill ‘stressful’ AI safety role with $555,000 pay

Amid mounting scrutiny and multiple wrongful death lawsuits, OpenAI has moved to fill a key AI safety position that has reportedly remained vacant for several months.

Last Saturday, the ChatGPT maker announced that it is seeking a new Head of Preparedness to steer the company’s AI safety strategy. The role, highlighted in a job listing shared on X by OpenAI Chief Executive Officer Sam Altman, will focus on anticipating potential harms posed by advanced AI models and assessing how such systems could be misused.

The position carries an annual salary of $555,000, in addition to equity in the company. According to OpenAI, the successful candidate “will lead the technical strategy and execution of OpenAI’s Preparedness framework,” which outlines the company’s approach to monitoring and preparing for frontier AI capabilities that could create severe risks.

OpenAI’s recruitment drive comes at a sensitive moment, as the company faces growing criticism over the impact of ChatGPT on users’ mental health, including several wrongful death lawsuits. OpenAI’s own analysis estimated that around 0.07 per cent of its weekly active users show possible signs of mental health emergencies such as mania or psychosis, and that more than one million users each week have conversations containing explicit indicators of potential suicidal planning or intent.

Acknowledging these risks, Altman said that the “potential impact of models on mental health was something we saw a preview of in 2025,” describing the head of preparedness role as “critical at an important time”.

In his post, Altman added: “If you want to help the world figure out how to enable cybersecurity defenders with cutting-edge capabilities while ensuring attackers can’t use them for harm—ideally by making all systems more secure—and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying.”

He also cautioned that the role would be demanding, stating, “This will be a stressful job, and you’ll jump into the deep end pretty much immediately.”

OpenAI’s safety teams have seen notable staff turnover in recent years. In July 2024, the company reassigned its then-head of preparedness, Aleksander Madry, with the role temporarily taken over by AI safety researchers Joaquin Quinonero Candela and Lilian Weng. Weng left the company a few months later, while Candela announced earlier this year that he was moving away from the preparedness team to lead recruiting at OpenAI.

Further changes followed in November 2025, when Andrea Vallone, who headed a safety research unit known as model policy, said she would leave OpenAI at the end of the year. Vallone was reportedly instrumental in shaping how ChatGPT responds to users experiencing mental health crises.

The latest hiring effort underscores OpenAI’s attempts to strengthen its safety infrastructure amid intensifying concerns over the societal and psychological risks posed by powerful artificial intelligence systems.