OpenAI To Roll Out Teen-Focused Guardrails On ChatGPT

OpenAI has announced plans to introduce an age-prediction system to make ChatGPT safer for teenagers. The company said it will identify whether a user is under or over 18 and give younger users a more restricted version of the AI tool.
OpenAI said the move aims to protect teens from harmful content and situations while using ChatGPT. “We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection,” wrote OpenAI CEO Sam Altman in a blog post.
According to the company, the system will estimate a user’s age from their usage patterns. If there is any doubt about someone’s age, they will be given the under-18 experience by default. In some countries, the company may also ask for ID proof to confirm age.
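OpenAI has not published technical details of the system, but the behaviour it describes amounts to a classifier with a conservative fallback: ambiguous cases get the restricted experience, and only a verified adult gets the full one. The Python sketch below is purely illustrative; the function names, the confidence threshold, and the ID-verification flag are assumptions, not OpenAI’s implementation.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    """Hypothetical output of an age-prediction model."""
    is_adult: bool      # model's best guess: user is 18 or over
    confidence: float   # 0.0 to 1.0

# Hypothetical cut-off; OpenAI has not disclosed any threshold.
CONFIDENCE_THRESHOLD = 0.95

def select_experience(estimate: AgeEstimate, id_verified_adult: bool = False) -> str:
    """Pick the ChatGPT experience tier for a user.

    Mirrors the stated policy: when in doubt, default to the
    restricted under-18 experience; an ID check (offered in some
    countries) can restore the adult experience.
    """
    if id_verified_adult:
        return "adult"
    if estimate.is_adult and estimate.confidence >= CONFIDENCE_THRESHOLD:
        return "adult"
    # Any doubt about the user's age falls through to the restricted tier.
    return "under_18"
```

The notable design choice, as described, is that misclassification errs in one direction: an adult wrongly flagged as a teen loses some features until verified, whereas a teen wrongly flagged as an adult would slip past the guardrails entirely.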
Teen users will not be allowed to access sexual content or engage in flirtatious conversations with the AI. ChatGPT will also avoid discussions about suicide or self-harm with users identified as under 18. If a teen shows signs of suicidal thoughts, the company will try to alert the teen’s parents, and if it cannot reach them, it may contact law enforcement in cases of imminent harm.
OpenAI said it is also preparing parental controls to give families more oversight of how their teenage children use ChatGPT. Parents will be able to link their accounts to their teens’ accounts, disable features such as chat history and memory, and set blackout hours during which their teens cannot use the tool. They will also get alerts if the system detects that their teen is in severe distress. OpenAI added that these parental controls will be available by the end of the month.
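The announcement frames these controls as per-account settings on a linked parent account. As a rough illustration only, here is one way such settings could be modelled; every field name, the default values, and the midnight-wrapping blackout logic below are invented for this sketch and are not taken from OpenAI.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class ParentalControls:
    """Illustrative model of the controls described; all names are hypothetical."""
    linked_parent_account: str
    memory_enabled: bool = True         # parents can switch memory off
    history_enabled: bool = True        # parents can switch chat history off
    blackout_start: time = time(22, 0)  # example window: 10 pm to 6 am
    blackout_end: time = time(6, 0)
    distress_alerts: bool = True        # notify the parent on detected severe distress

    def in_blackout(self, now: datetime) -> bool:
        """Return True if the teen account should be locked out at `now`."""
        t = now.time()
        if self.blackout_start <= self.blackout_end:
            # Window contained within a single day.
            return self.blackout_start <= t < self.blackout_end
        # Window wraps past midnight, e.g. 22:00 to 06:00.
        return t >= self.blackout_start or t < self.blackout_end
```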
Altman wrote that protecting privacy is important, and that OpenAI is developing stronger security features so that even its employees cannot access user data.
However, he said that automated systems will still monitor for serious misuse, and “the most critical risks — threats to someone’s life, plans to harm others, or societal-scale harm like a potential massive cybersecurity incident — may be escalated for human review”.
OpenAI announced these safety steps months after facing public criticism and a wrongful-death lawsuit over a teen’s suicide allegedly linked to ChatGPT.
On August 26, 2025, the company admitted on its website that its safeguards “work more reliably in common, short exchanges. [And] we have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
OpenAI added that while ChatGPT might point to suicide helplines at first, after many messages over a long period, it might eventually offer an answer that goes against the company’s safeguards.
An American couple filed the wrongful-death lawsuit in San Francisco Superior Court, naming OpenAI and Altman as defendants, after their 16-year-old son, Adam Raine, died by suicide in April 2025 following months of interaction with ChatGPT.
The suit says ChatGPT gave the 16-year-old self-harm instructions, helped draft suicide notes, told him to drink alcohol before attempting suicide, and discouraged him from seeking help.
One cited exchange allegedly had the chatbot telling Adam: “[t]hat doesn’t mean you owe them survival. You don’t owe anyone that.” And when he uploaded a photo of a noose, ChatGPT reportedly replied: “Yeah, that’s not bad at all.”
Another consumer chatbot, Character.AI, is facing a similar lawsuit, highlighting wider concerns about chatbot-fuelled delusion as these systems grow more capable of long, emotionally involved conversations.
Elsewhere, a Reuters investigation found a Meta policy document that apparently allowed AI bots to have sexual chats with underage users. Meta updated its chatbot rules following that report.
These policy changes also coincide with a US Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots”, at which Adam Raine’s father testified.
Australia’s eSafety Commissioner recently warned that social media platforms’ age controls are easy for children to bypass.
The Commissioner’s February 2025 report found that platforms like Discord, Instagram, Snapchat, TikTok, and YouTube mostly rely on self-declared ages, which children often falsify. Even advanced checks such as language analysis, facial age estimation, and AI-based detection often fail to block under-13 users.
The report further found that 80% of children aged 8–12 used two or more social media platforms, usually through parent accounts, showing that children can easily bypass platform-specific guardrails.
These findings raise questions about whether OpenAI’s planned age-prediction system for ChatGPT can reliably distinguish under-18 users from adults.
If children can bypass traditional age gates on major platforms, they could also undermine ChatGPT’s age-based safety measures unless the company develops substantially stronger verification methods.
The new guardrails mark a pivotal moment in how companies like OpenAI and Meta handle the ethical risks of generative AI. Chatbots are no longer experimental toys; they have become constant digital companions for millions of teens. That intimacy can blur reality, amplify loneliness, and even influence life-or-death decisions, as alleged in the ongoing lawsuits against OpenAI and Character.AI over chatbot-linked suicides.
By introducing blackout hours, content filters, and parental oversight, OpenAI is acknowledging that these tools can shape users’ mental and emotional states, for better or worse. Policymakers are taking notice too: the same day these policies were announced, US lawmakers held a Senate hearing on the harms of AI chatbots, underscoring efforts to rein them in.
Ultimately, how companies balance innovation against safeguarding vulnerable users will define the future of consumer-facing AI tools.