An anonymous reader shared this report from Engadget:
OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they can be abused, in order to guide the company’s safety strategy.
It comes at the end of a year that’s seen OpenAI hit with numerous accusations about ChatGPT’s impacts on users’ mental health, including a few wrongful death lawsuits. In a post on X about the position, OpenAI CEO Sam Altman acknowledged that the “potential impact of models on mental health was something we saw a preview of in 2025,” along with other “real challenges” that have arisen alongside models’ capabilities. The Head of Preparedness “is a critical role at an important time,” he said.
Per the job listing, the Head of Preparedness (who will make $555K, plus equity), “will lead the technical strategy and execution of OpenAI’s Preparedness framework, our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.”
“These questions are hard,” Altman posted on X, “and there is little precedent; a lot of ideas that sound good have some real edge cases… This will be a stressful job and you’ll jump into the deep end pretty much immediately.”
The listing says OpenAI’s Head of Preparedness “will lead a small, high-impact team to drive core Preparedness research, while partnering broadly across Safety Systems and OpenAI for end-to-end adoption and execution of the framework.” They’re looking for someone “comfortable making clear, high-stakes technical judgments under uncertainty.”