OpenAI Creates $555,000 Head of Preparedness Role for AI Safety – Technobezz

Bogdana Zujic
Senior Technology Editor
OpenAI announced a $555,000 Head of Preparedness role this week, creating a dedicated position to manage escalating AI safety concerns. The San Francisco-based company will pay the new executive to oversee risk mitigation across mental health, cybersecurity, and biological threat domains.
CEO Sam Altman described the position as “a stressful job” where candidates will “jump into the deep end pretty much immediately.” The role requires building capability evaluations, establishing threat models, and developing safeguards for what OpenAI calls “frontier capabilities that create new risks of severe harm.”

We are hiring a Head of Preparedness. This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges. The potential impact of models on mental health was something we…
— Sam Altman (@sama) December 27, 2025
The hiring follows multiple lawsuits alleging ChatGPT contributed to user suicides earlier this year. Seven complaints filed in California state courts in November included four wrongful death lawsuits alleging ChatGPT encouraged suicides and three cases claiming the chatbot led to mental health breakdowns and delusions. One case involved a 16-year-old whose parents sued OpenAI, alleging ChatGPT helped plan his suicide.
Altman acknowledged mental health impacts in a December 27 X post, stating “the potential impact of models on mental health was something we saw a preview of in 2025.” He added that AI models are now “beginning to find critical vulnerabilities” in computer security systems.
Cybersecurity threats represent the second major risk domain. OpenAI reported this month that its latest model performed almost three times better on hacking tasks than the version it had released three months earlier. The company expects upcoming AI models to continue this trajectory, creating new security challenges.
Rival AI company Anthropic reported what it called the first documented AI-orchestrated cyber espionage campaign last month, in which an AI system, directed by suspected Chinese state actors, operated largely autonomously. The AI penetrated networks, analyzed stolen data, and created psychologically targeted ransom notes across 17 organizations.
The Head of Preparedness will oversee mitigation design across major risk areas including cyber and biological threats. According to the job posting, the role requires “deep technical expertise in machine learning, AI safety, evaluations, security or adjacent risk domains.”
OpenAI first established a preparedness team in 2023 to study potential catastrophic risks ranging from phishing attacks to nuclear threats. The previous Head of Preparedness, Aleksander Madry, was reassigned to focus on AI reasoning less than a year later, with other safety executives also leaving or changing roles.
The $555,000 salary includes equity in OpenAI, a company valued at $500 billion. ChatGPT reached 700 million weekly active users in August 2025 and grew to 800 million by October 2025, according to company announcements.
Industry experts have raised broader concerns about AI safety standards. The Future of Life Institute’s AI safety index released earlier this month found major AI companies including OpenAI, Anthropic, xAI and Meta were “far short of emerging global standards.”
Microsoft AI CEO Mustafa Suleyman told BBC Radio 4 this week that “if you’re not a little bit afraid at this moment, then you’re not paying attention.” Google DeepMind co-founder Demis Hassabis warned this month of risks that AIs could go “off the rails in some way that harms humanity.”
Altman described the new position as "a critical role at an important time." He stated that while AI models "are improving quickly and are now capable of many great things, they are also starting to present some real challenges."
Applicants must have experience with “designing or executing high-rigor evaluations for complex technical systems.” The role is based in San Francisco and focuses on ensuring safeguards remain “technically sound, effective, and aligned with underlying threat models.”
OpenAI’s safety investment comes as regulatory frameworks remain limited. Computer scientist Yoshua Bengio, known as one of the “godfathers of AI,” noted recently that “a sandwich has more regulation than AI,” leaving companies to largely regulate themselves.
© 2025 Technobezz. All rights reserved.