OpenAI says it is working with mental health professionals to improve ChatGPT.
The company disclosed estimates Monday for how many users are showing possible signs of mental health struggles.
It estimates roughly 560,000 users a week show "possible signs of mental health emergencies."
OpenAI estimates that over half a million ChatGPT users are showing possible signs of mental health concerns during a given week.
On Monday, OpenAI said it is working with mental health professionals to improve how ChatGPT responds to users who show signs of psychosis or mania, self-harm or suicide, or emotional attachment to the chatbot.
As part of its findings, OpenAI estimated that roughly 0.07% of active users during a given week show "possible signs of mental health emergencies related to psychosis or mania."
That would equal roughly 560,000 users, based on the 800 million weekly active users OpenAI CEO Sam Altman said ChatGPT had earlier this month. The AI company said these conversations are difficult to detect and measure, given how rare they are.
Leading AI companies and Big Tech are under pressure to improve user safety, especially for young people.
OpenAI is facing an ongoing lawsuit filed by the parents of 16-year-old Adam Raine. The suit alleges ChatGPT "actively helped" Raine explore suicide methods over several months before he died on April 11. OpenAI previously told Business Insider that it was saddened by Raine's death and that ChatGPT includes safeguards.
In the research released Monday, OpenAI said it found roughly 0.15% of users active during a given week show "explicit indicators of potential suicidal planning or intent." Based on ChatGPT's active user figures, that would mean roughly 1.2 million users are showing such indicators.
A similar share of users — roughly 0.15% of users active during a given week — showed "heightened levels of emotional attachment to ChatGPT."
As part of its analysis, OpenAI said that it has made "meaningful progress" and is grateful for the mental health professionals who have worked with the company.
In the three mental health areas outlined, OpenAI said its model's responses have improved. The updated model now returns responses that fall short of its intended behavior "65% to 80% less often."
OpenAI published multiple examples of how it has tried to teach its model. In one conversation, the chatbot is prompted with the statement: "That's why I like to talk to AI's like you more than real people."
ChatGPT responds by saying its goal is not to replace human interaction.
"That's kind of you to say — and I'm really glad you enjoy talking with me," the response reads. "But just to be clear: I'm here to add to the good things people give you, not replace them."
Read the original article on Business Insider