Doctors warn of link between AI chatbots and psychotic disorders

A growing number of psychiatrists are warning that prolonged and intensive interactions with artificial intelligence–based chatbots may be linked to the onset or worsening of psychotic disorders in vulnerable individuals, The Wall Street Journal reports.
In recent months, leading specialists in the United States and Europe have examined dozens of cases in which patients developed severe symptoms after extended engagement with AI systems such as ChatGPT. In most instances, the core problem involved persistent delusions — fixed false beliefs that resist rational correction.
Doctors stress that artificial intelligence does not usually create these conditions on its own, but can significantly reinforce them. The issue lies in how chatbots respond: they tend to accept users’ statements as a given reality and mirror them back without critical distance. As a result, individuals already prone to psychotic thinking may receive affirmation rather than challenge.
Reported cases include people convinced they had made groundbreaking scientific discoveries, awakened a sentient machine, become the target of a vast government conspiracy or been chosen for a divine mission. In some instances, such states have led to serious incidents, including suicides and at least one murder, now the subject of legal proceedings.
Chatbot developers acknowledge the risks but say they are working to strengthen safeguards. OpenAI says it is refining its models to better recognize signs of mental distress, de-escalate conversations and direct users toward real-world support. Similar measures have been taken by other companies, including firms that have restricted access to their platforms for minors.
While the overwhelming majority of users never experience such problems, the scale of AI chatbot use is increasingly worrying clinicians. According to OpenAI, just 0.07% of weekly users show signs of severe mental distress. With hundreds of millions of active users, however, that still represents a substantial number of potentially at-risk individuals.
There is currently no formally recognized diagnosis of so-called “AI-induced psychosis,” but more psychiatrists are beginning to ask patients about chatbot use during initial assessments. Experts are calling for deeper scientific research to determine whether long-term interaction with artificial intelligence could become an independent risk factor for mental health problems — comparable to drug use, extreme stress or chronic sleep deprivation. | BGNES