A new safety feature called ‘Trusted Contact’ comes amid growing concerns about AI pushing people towards mania and psychosis
OpenAI has launched a new safety feature that will send alerts to friends or family members when a ChatGPT user is suffering from a mental health emergency.
The opt-in Trusted Contact feature allows users to nominate someone to be notified if they begin discussing self-harm or suicide with the AI chatbot.
The new update comes amid growing concerns about artificial intelligence tools like ChatGPT pushing people towards mania, psychosis and death.
OpenAI revealed last year that 0.07 per cent of regular ChatGPT users displayed signs of “mental health emergencies related to psychosis or mania”.
With around 900 million weekly active users, that amounts to more than half a million people.
Another 0.15 per cent of users – or around 1.3 million people – reportedly expressed risk of self-harm or suicide.
The latest feature uses ChatGPT’s automated monitoring systems to detect serious safety concerns with user behaviour.
The chat history is then reviewed by a specially trained team who can determine whether a trusted contact should check in.
“Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress,” said Dr Arthur Evans, chief executive officer of the American Psychological Association.
“Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most.”
The Trusted Contacts feature builds on existing safety controls, which include providing information about local helplines when a ChatGPT user is believed to be suffering from a crisis.
“One of AI’s biggest promises is how it can foster authentic human-to-human connection and psychological safety,” said Dr Munmun De Choudhury, a professor of Interactive Computing at Georgia Tech and member of the Expert Council on Well-Being and AI.
“I am encouraged by ChatGPT’s Trusted Contacts feature, which offers a step forward to human empowerment, especially during moments of vulnerability.”
EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the US is available by calling or texting 988.