
OpenAI to add mental health features to ChatGPT

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
OpenAI is planning to introduce new features to ChatGPT designed to curb unhealthy usage. Starting Monday, the app will prompt users to take breaks from lengthy conversations with “gentle reminders” that say, “You’ve been chatting for a while — is this a good time for a break?”
“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI wrote in an announcement. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”
The move likely comes in response to a growing number of incidents in which ChatGPT’s answers have led users down dark paths. The chatbot has reportedly been ineffective at shutting down unhealthy conversations, which in some cases have even led to suicidal ideation. OpenAI stated in its blog that ChatGPT will be updated in the future to respond more carefully to “high-stakes personal decisions.”
While OpenAI has said it wants ChatGPT to feel helpful, encouraging and enjoyable to use, those qualities have proved difficult to get right. Earlier this year, the company had to roll back an update to GPT-4o that made the model so agreeable it drew mockery and concern online. Users shared conversations in which GPT-4o, in one instance, praised a user for believing their family was responsible for “radio signals coming in through the walls” and, in another, endorsed and gave instructions for terrorism.
OpenAI then announced in April that it had revised its training techniques to “explicitly steer the model away from sycophancy,” or flattery. The company now says it has engaged experts to help ensure ChatGPT behaves more appropriately in such situations.
OpenAI wrote in its blog post that it worked with more than 90 physicians across dozens of countries to craft custom rubrics for “evaluating complex, multi-turn conversations.” It is also seeking feedback from researchers and clinicians who, according to the post, are helping to refine evaluation methods and stress-test safeguards for ChatGPT. In addition, the company is forming an advisory group made up of experts in mental health, youth development and human-computer interaction.
In a recent interview with podcaster Theo Von, OpenAI CEO Sam Altman expressed concern about people using the chatbot as a therapist or life coach. He pointed out that the legal confidentiality that exists between doctors and patients, or lawyers and clients, does not apply to chatbots. “So, if you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that. And I think that’s very screwed up,” Altman said.
“I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever. And no one had to think about that even a year ago,” he said.
OpenAI also suggested that people spending less time on the chatbot could be a sign that it is doing its job. “Instead of measuring success by time spent or clicks, we care more about whether you leave the product having done what you came for,” OpenAI wrote. “We also pay attention to whether you return daily, weekly, or monthly because that shows ChatGPT is useful enough to come back to.”
Nileena Sunil is a Reporter for the American Bazaar. A postgraduate in English Literature from Christ University, Bengaluru, she worked as an instructional designer and a copywriter before switching fields.




