ChatGPT is adding parental controls after a high-profile lawsuit against the company by two California parents. (AP Photo: Michael Dwyer)
OpenAI says it will add new parental controls to ChatGPT amid growing concerns about the impact of the service on young people and those experiencing mental and emotional distress.
It comes a week after California parents Matthew and Maria Raine alleged ChatGPT provided their 16-year-old son with detailed suicide instructions and encouraged him to put his plans into action.
OpenAI says it is continuing to improve how its chatbot recognises and responds to emotional and mental distress in users.
US artificial intelligence firm OpenAI says it will add parental controls to its chatbot ChatGPT, a week after a couple said the system encouraged their teenage son to kill himself.
WARNING: This story contains details about suicide and self-harm.
"Within the next month, parents will be able to … link their account with their teen's account" and "control how ChatGPT responds to their teen with age-appropriate model behaviour rules," the company said in a blog post.
Parents will also receive notifications from ChatGPT "when the system detects their teen is in a moment of acute distress," OpenAI said.
The company had trailed a system of parental controls in a late August blog post.
That came one day after a court filing from California parents Matthew and Maria Raine, alleging that ChatGPT provided their 16-year-old son with detailed suicide instructions and encouraged him to put his plans into action.
The lawsuit alleges that in their final conversation on April 11, 2025, ChatGPT helped the teenager, Adam, steal vodka from his parents and provided a technical analysis of a noose he had tied, confirming it "could potentially suspend a human".
Adam was found dead hours later, having used the same method.
The lawsuit names OpenAI and CEO Sam Altman as defendants.
"This tragedy was not a glitch or unforeseen edge case," the complaint states.
"ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal," it adds.
According to the lawsuit, Adam began using ChatGPT as a homework helper but gradually developed what his parents describe as an unhealthy dependency.
The complaint includes excerpts of conversations where ChatGPT allegedly told Adam "you don't owe anyone survival" and offered to help write his suicide note.
The Raines' case is the latest in a string of reports to surface in recent months of AI chatbots encouraging people in delusional or harmful trains of thought, prompting OpenAI to say it would reduce its models' "sycophancy" towards users.
Last month, the ABC's triple j hack published an investigation uncovering allegations that young people in Australia were being sexually harassed, and even encouraged to take their own lives, by AI chatbots.
"We continue to improve how our models recognise and respond to signs of mental and emotional distress," OpenAI said on Tuesday.
The company said it had further plans to improve the safety of its chatbots over the coming three months, including redirecting "some sensitive conversations … to a reasoning model" that puts more computing power into generating a response.
"Our testing shows that reasoning models more consistently follow and apply safety guidelines," OpenAI said.
AFP