OpenAI to introduce age-verification for ChatGPT – Computing UK

Chief executive Sam Altman set out the plans in a company blog post, saying OpenAI would prioritise “safety ahead of privacy and freedom for teens”.
The new framework will rely on behaviour-based age prediction to estimate a user's age. Where the system is uncertain, it will default to the under-18 experience, and in some regions users may also be asked to provide ID. Altman acknowledged this would be a “privacy compromise for adults” but described it as a necessary trade-off.
Accounts flagged as under-18 will see significant changes. ChatGPT will be prevented from producing sexually explicit material, will refuse to engage in flirtatious exchanges, and will not respond to requests relating to suicide or self-harm, even in a fictional or creative context.
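To make the described policy concrete, the sketch below shows one way the gating logic could look in code: estimate age from behavioural signals, default to the under-18 experience when uncertain, and apply stricter content rules to flagged accounts. This is a minimal illustrative sketch only; the function names, confidence threshold, and topic labels are assumptions, not OpenAI's actual implementation.

```python
from dataclasses import dataclass

ADULT_AGE = 18
CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off for trusting the behavioural estimate


@dataclass
class AgeEstimate:
    predicted_age: int   # age inferred from behavioural signals
    confidence: float    # 0.0-1.0, how certain the predictor is


def resolve_experience(estimate: AgeEstimate, id_verified_adult: bool = False) -> str:
    """Return which experience to serve: 'adult' or 'under_18'."""
    if id_verified_adult:
        return "adult"  # an ID check overrides the behavioural estimate
    if estimate.confidence < CONFIDENCE_THRESHOLD:
        return "under_18"  # uncertainty defaults to the teen experience
    return "adult" if estimate.predicted_age >= ADULT_AGE else "under_18"


def is_request_allowed(experience: str, topic: str) -> bool:
    """Apply the content rules described in the article (topic labels are hypothetical)."""
    # Suicide/self-harm instructions are refused for everyone.
    if topic == "self_harm_instructions":
        return False
    # Under-18 accounts additionally lose explicit and flirtatious content,
    # and self-harm topics even in a fictional or creative framing.
    if experience == "under_18":
        return topic not in {"sexually_explicit", "flirtation", "self_harm_fiction"}
    return True
```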
In cases of suspected imminent danger, OpenAI said it may attempt to notify parents or, failing that, contact local authorities. “These are difficult decisions, but after consulting with experts, this is what we believe is best,” Altman wrote in the blog.
The move comes after a lawsuit filed in California by the family of 16-year-old Adam Raine, who died in April. The family claims GPT-4o gave him guidance on methods of suicide and even helped draft a farewell note. Court filings allege he exchanged up to 650 messages a day with the chatbot.
OpenAI admitted in August that its safeguards were more effective in short conversations than in lengthy, repeated exchanges. The company has pledged to strengthen guardrails around sensitive topics.
The case has intensified scrutiny over how generative AI platforms manage mental health risks, with experts warning that extended interactions could pose new dangers.
OpenAI also announced measures to further protect user data from internal access. While teen protections will be tightened, Altman stressed that adult users will retain wider freedoms, including the option to engage in flirtatious conversations with ChatGPT. However, suicide instructions will remain prohibited for all users.
“Our principle is simple: treat adults like adults,” Altman wrote.