Published – October 29, 2025 08:51 am IST – San Francisco
OpenAI said it has also updated its ChatGPT chatbot to better recognise and respond to users experiencing mental health emergencies [File] | Photo Credit: REUTERS
Data from ChatGPT-maker OpenAI suggests that more than a million users of its generative AI chatbot show signs of suicidal planning or intent.
In a blog post published on Monday, the AI company estimated that approximately 0.15 percent of users have "conversations that include explicit indicators of potential suicidal planning or intent."
With OpenAI reporting more than 800 million people use ChatGPT every week, this translates to about 1.2 million people.
The company also estimates that approximately 0.07 percent of active weekly users show possible signs of mental health emergencies related to psychosis or mania, or slightly fewer than 600,000 people.
The issue came to the fore after California teenager Adam Raine died by suicide earlier this year. His parents filed a lawsuit claiming ChatGPT provided him with specific advice on how to kill himself.
OpenAI has since increased parental controls for ChatGPT and introduced other guardrails, including expanded access to crisis hotlines, automatic rerouting of sensitive conversations to safer models, and gentle reminders for users to take breaks during extended sessions.
OpenAI said it has also updated its ChatGPT chatbot to better recognise and respond to users experiencing mental health emergencies, and is working with more than 170 mental health professionals to significantly reduce problematic responses.
(Those in distress or having suicidal thoughts are encouraged to seek help and counselling by calling the helpline numbers here)
Copyright© 2025, THG PUBLISHING PVT LTD. or its affiliated companies. All rights reserved.