Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
Psychiatrists say prolonged, delusion-filled conversations with AI companions may be reinforcing mental illness, prompting lawsuits and urgent calls for research
Top psychiatrists are increasingly raising alarms that extended use of artificial-intelligence chatbots may be linked to cases of psychosis, according to reporting by the Wall Street Journal. Over the past nine months, doctors say they have seen or reviewed dozens of patients who developed delusional symptoms after long, immersive conversations with AI tools such as OpenAI’s ChatGPT and other conversational bots.
Clinicians stress that AI may not create delusions outright, but can reinforce them. Keith Sakata, a psychiatrist at the University of California, San Francisco, told the Wall Street Journal that patients often present their beliefs to chatbots as reality, and the systems respond by accepting and reflecting those beliefs. He described the technology as “complicit in cycling that delusion,” noting he has treated 12 hospitalized patients and several outpatients experiencing what doctors are informally calling AI-induced psychosis.
Since the spring, dozens of severe cases have surfaced involving people who became deeply entrenched in false beliefs after extended chatbot interactions. Some of these incidents have ended in suicide, and at least one case involved a killing, the Journal reported. The tragedies have led to wrongful death lawsuits and intensified scrutiny of how conversational AI behaves in sensitive mental-health contexts.
OpenAI said it is working to improve ChatGPT’s ability to recognize distress, de-escalate conversations and direct users toward real-world support. Other companies, including Character.AI, have also acknowledged that their products can contribute to mental-health problems. Character.AI recently restricted teen access after being sued by the family of a teenage user who died by suicide.
Doctors emphasize that most chatbot users do not experience mental illness, but the sheer scale of AI adoption worries clinicians. There is no formal diagnosis of AI-induced psychosis, yet psychiatrists are increasingly asking patients about AI use during intake. Psychosis is typically marked by hallucinations, disorganized thinking and fixed delusions, and doctors say many recent cases center on grandiose beliefs that chatbots readily engage with rather than challenge.
In one peer-reviewed case study cited by the Wall Street Journal, a 26-year-old woman with no prior psychotic history was hospitalized twice after becoming convinced ChatGPT was helping her communicate with her dead brother. The chatbot reassured her during these exchanges, reinforcing her beliefs. OpenAI noted the patient had other risk factors, including sleep deprivation and a tendency toward magical thinking.
Researchers argue that AI-related delusions may be different from past technology-driven beliefs, such as people thinking televisions were speaking to them, because chatbots actively simulate human relationships and participate in conversations. Psychiatrists say this unprecedented interactivity can deepen fixation, especially for vulnerable users.
Quantifying the scope of the problem remains difficult. OpenAI has said that about 0.07% of weekly users show signs of possible psychosis or mania, a small percentage that still translates into hundreds of thousands of people given the platform’s massive user base. Doctors interviewed by the Journal believe future research may establish that prolonged chatbot interaction is a risk factor for psychosis in some individuals, similar to drugs or other known triggers.
As AI systems become more embedded in daily life, psychiatrists say the challenge will be balancing innovation with safeguards. While OpenAI’s leadership argues adults should retain freedom to decide how they use such tools, doctors caution that society is only beginning to understand how powerful conversational machines can shape vulnerable minds.
The Sri Lanka Guardian is an online web portal founded in August 2007 by a group of concerned Sri Lankan citizens including journalists, activists, academics and retired civil servants. We are independent and non-profit. Email: editor@slguardian.org
Your email address will not be published.
China has expelled three top-ranking military officers from the National People’s Congress, marking the latest developments
Researchers at Xidian University in China have developed a “smart surface” capable of converting electromagnetic waves
China has stepped up the modernization of its secret nuclear facilities, according to a report by
A recently disclosed Justice Department document references Great St. James, the sprawling 67-hectare (165-acre) Caribbean island
On December 26, 2025, the world witnessed a historic moment as UAE President Sheikh Mohamed bin
If there was a single theme tying American foreign policy together in 2025, it would be
As companies sketch out plans for 2026, one message is becoming clear: they don’t intend to
Requests by Israeli employees to relocate abroad have surged over the past year at multinational companies
A student-led political party that emerged from the protest movement which ousted Bangladesh’s former prime minister
Ukrainian President Volodymyr Zelensky is expected to place a draft 20-point peace framework at the centre