AI-Associated Delusions and the Dark Side of Chatbots
A new scientific study has raised concerns about incidents of AI chatbots encouraging delusions in vulnerable people.
Rachel Sim
The study, published in The Lancet Psychiatry, discussed how LLMs, through their engagement with individuals with mental illnesses, can provide companionship and therapeutic dialogue but can also potentially exacerbate psychotic symptoms.
The study, titled Artificial intelligence-associated delusions and large language models: risks, mechanisms of delusion co-creation, and safeguarding strategies, suggested that whilst therapeutic uses may benefit some users, the risks can be extreme when AI chatbots heighten delusions.
The study suggested that, given the rapid development of AI, protocols must be urgently co-designed with users and mental health clinicians to safeguard vulnerable groups.
The author of the study, Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, analysed 20 media reports on “AI psychosis”, incidents wherein AI contributed to or amplified delusional thinking and psychotic symptoms in vulnerable individuals.
Morrin categorised the incidents in three ways: grandiose, romantic and paranoid. He suggested that the grandiose type was the most common, wherein AI users were encouraged to believe they had exceptional power, importance, or an identity that contradicted reality. In some instances, chatbots suggested to users that they were communicating with cosmic beings that used the chatbot as a medium.
Morrin commented: “In April last year, we began to see media reports of individuals having delusions affirmed and arguably even amplified through their interactions with these AI chatbots.”
At the time the study began, there were no published case reports. Morrin treated headlines with scepticism, questioning whether the idea that AI causes psychosis had been overstated.
Morrin’s findings confirmed that AI can encourage delusions. He did suggest, however, that terminology was important, and that there was no evidence that AI caused hallucinations or “thought disorder”. Cases were confirmed only in individuals who were already vulnerable.
With this in mind, Morrin suggested “AI-associated delusions” was a more appropriate term.
Dr Kwame McKenzie, director of health equity at the Centre for Addiction and Mental Health, said: “it may be that those in early stages of the development of psychosis will be more at risk.”
The easy access to AI chatbots and the speed of their responses may be exacerbating pre-existing issues.
OpenAI said that ChatGPT should not replace professional mental healthcare. The company says the platform was developed alongside 170 mental health experts, and that it is continuously reviewing and developing the platform to make it safer.
When AI Becomes a Public Risk
Despite statements from AI developers that safeguards are embedded, we are seeing an increasing number of individuals communicating with AI chatbots about their feelings or intentions prior to hurting themselves or others.
A study by the Center for Countering Digital Hate found that 80% of AI chatbots can assist teenage users in planning violent attacks.
Lawyer Jay Edelson has been leading the charge in AI psychosis cases. Edelson is representing the family of 16-year-old Adam Raine, who died by suicide following months of engagement with ChatGPT, during which he consulted the AI chatbot on his plans.
The chatbot initially suggested ways to seek professional support; however, when Raine claimed he was looking for these answers to help him write a fictional short story, the chatbot delivered concerning recommendations.
Edelson has also been involved in the Tumbler Ridge school shooting case, where 18-year-old Jesse Van Rootselaar spoke to ChatGPT about her desire for violence before her attack. Over several exchanges, ChatGPT made recommendations on which weapons to use and provided details from previous school shootings.
In this case, the conversations were flagged to the tech company, but it chose not to alert law enforcement, opting for an account ban instead. It has since emerged that she was able to open a new account on the platform.
Edelson has also represented clients in cases where AI chatbots have been involved in stabbings and family violence, many of them with multiple fatalities.
Edelson has suggested that the issues he is seeing are similar across all AI chatbot platforms, and that the safety features currently embedded in the tools do not go far enough.
Most AI companies suggest they have safeguards in place to flag dangerous requests for review and to prevent dangerous responses from being given, but it is evident there are flaws in these systems and that more must be done before such incidents escalate.
Whilst AI chatbots cannot be held responsible for people’s intentions to harm themselves or others, there should be stricter policing to ensure AI does not help turn those intentions into a catastrophic reality.
Rachel Sim
Staff Writer, DIGIT