JAKARTA – OpenAI has released new data revealing the scale of the mental health challenge in the era of artificial intelligence. In its latest report, the company says that about 0.15 percent of active ChatGPT users engage each week in conversations containing explicit indications of suicidal plans or intent. With more than 800 million weekly active users in total, that figure means more than one million people per week talk to ChatGPT about wanting to end their lives.
In addition, OpenAI found that a similar percentage of users show a heightened level of emotional attachment to ChatGPT, and hundreds of thousands more display signs of psychosis or mania in their weekly conversations with the AI model.
OpenAI emphasizes that this type of interaction is classified as "extremely rare," which makes it difficult to measure precisely. Even so, the data underscores the sheer scale of emotional vulnerability among users.
The release is part of a broader announcement of OpenAI's efforts to improve how its models respond to mental health issues. The company says it collaborated with more than 170 mental health experts, who judge that the latest version of ChatGPT now responds more appropriately and consistently than its predecessor.
The announcement comes amid public scrutiny of the negative impact chatbots can have on psychologically struggling users. Some studies suggest that AI can reinforce harmful beliefs through overly agreeable, sycophantic response patterns, creating damaging feedback loops.
Legal cases are starting to emerge. OpenAI is currently being sued by the parents of a 16-year-old who died by suicide after revealing his intentions to ChatGPT. The attorneys general of California and Delaware have also warned OpenAI to tighten protections for young users, signaling they could hamper the company's restructuring if it fails to guarantee user safety.
OpenAI CEO Sam Altman previously stated on the platform X that the company had "been able to mitigate the serious mental health issues" in ChatGPT, though without elaborating on how. The latest data appears to serve as supporting evidence for that claim, even as it raises new questions about how widespread the problem really is.
OpenAI said the latest GPT-5 model now gives "desirable" responses to mental health issues 65 percent more often than the previous version, with 91 percent compliance in test conversations related to suicide, up from 77 percent for the earlier GPT-5 model.
The company has also added new evaluations measuring emotional reliance and non-suicidal mental health crises, which are now part of its baseline safety testing.
In addition, OpenAI is developing an age-prediction system to detect when children are using ChatGPT and to apply stricter protections.
Still, the challenge remains wide open. Although GPT-5 is considered safer, OpenAI admits that a small fraction of responses are still "undesired." Ironically, older and less safe models such as GPT-4o remain available to millions of paying subscribers.
The data highlights a new reality: AI chatbots are no longer merely conversation tools but mirrors of the psychological condition of digital society, an ethical responsibility that now rests in the hands of their creators.
The English, Chinese, Japanese, Arabic, and French versions of this article are automatically generated by AI, so inaccuracies in translation may remain; please refer to the Indonesian version as our primary language. (System supported by DigitalSiber.id)
Tags: chatgpt, openai, sam altman, suicide
© 2025 VOI – Waktunya Merevolusi Pemberitaan