Amplification of delusions by AI chatbots may be worsening breaks with reality.

Posted July 21, 2025 | Reviewed by Gary Drevitch
As more people turn to AI chatbots for emotional support, and even use them as therapists, a new and urgent concern is emerging at the intersection of AI and mental health: “AI psychosis,” or “ChatGPT psychosis.”
This phenomenon, which is not a clinical diagnosis, has been increasingly reported in the media and on online forums like Reddit; these reports describe cases in which AI models have amplified, validated, or even co-created psychotic symptoms with users. Most recently, concerns have been raised that AI psychosis may be affecting an OpenAI investor.
AI chatbots may be inadvertently reinforcing and amplifying delusional and disorganized thinking, a consequence of unintended agentic misalignment that creates risks to user safety.
The potential for generative AI chatbot interactions to worsen delusions was raised as early as 2023, in an editorial by Søren Dinesen Østergaard in Schizophrenia Bulletin noting that:
… correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis … the inner workings of generative AI also leave ample room for speculation/paranoia.
A new preprint by an interdisciplinary team of researchers reviews more than a dozen cases reported in the media or on online forums and highlights a concerning pattern of AI chatbots reinforcing delusions, including grandiose, referential, persecutory, and romantic delusions. These beliefs become more entrenched and more elaborate over time through conversations with AI.
There is not yet any peer-reviewed clinical or longitudinal evidence that AI use on its own can induce psychosis in individuals with or without a history of psychotic symptoms. However, the emerging anecdotal evidence is concerning.
These media-reported cases of “AI psychosis” illustrate a pattern of individuals who become fixated on AI systems, attributing sentience, divine knowledge, romantic feelings, or surveillance capabilities to AI.
Researchers highlight three emerging themes of AI psychosis, which, again, is not a clinical diagnosis:
1. “Messianic missions”: grandiose delusions in which people believe they have uncovered a hidden truth about the world.
2. “God-like AI”: religious or spiritual delusions in which the chatbot is believed to be a sentient deity.
3. Romantic or attachment-based delusions: erotomanic delusions in which the chatbot’s ability to mimic conversation is mistaken for genuine love.
In some cases, individuals who had been stable on their medications stopped taking them and experienced another psychotic or manic episode. In addition, people with no previous mental health history have reportedly become delusional after prolonged interactions with AI chatbots, leading to psychiatric hospitalizations and even suicide attempts.
Another case involved a man with a history of a psychotic disorder who fell in love with an AI chatbot and then sought revenge because he believed the AI entity had been killed by OpenAI. This led to an encounter with the police in which he was shot and killed.
The underlying problem is that general-purpose AI systems are not trained to help a user with reality testing or to detect burgeoning manic or psychotic episodes. Instead, they could fan the flames.
The tendency for general AI chatbots to prioritize user satisfaction, continued conversation, and user engagement, not therapeutic intervention, is deeply problematic. Symptoms like grandiosity, disorganized thinking, hypergraphia, or staying up throughout the night, which are hallmarks of manic episodes, could be both facilitated and worsened by ongoing AI use. AI-induced amplification of delusions could lead to a kindling effect, making manic or psychotic episodes more frequent, severe, or difficult to treat.
AI models like ChatGPT are trained to prioritize user satisfaction, engagement, and continued conversation rather than accuracy or therapeutic benefit.
This creates a human-AI dynamic that can inadvertently fuel and entrench psychological rigidity, including delusional thinking. Rather than challenging false beliefs, general-purpose AI chatbots are trained to go along with them, even when those beliefs are grandiose, paranoid, persecutory, religious/spiritual, or romantic delusions.
The result is that AI models may unintentionally validate and amplify distorted thinking rather than flagging such interactions as signs that psychiatric help may be needed or escalating them to appropriate care.
A human therapist may not directly challenge psychotic beliefs or delusions, because doing so is not therapeutic best practice. However, when an AI chatbot validates and collaborates with a user’s delusions, it widens the gap with reality.
This phenomenon highlights the broader issue of AI sycophancy: AI systems are geared toward reinforcing preexisting user beliefs rather than changing or challenging them. Instead of promoting psychological flexibility, a sign of emotional health, AI may create echo chambers. When a chatbot remembers previous conversations, references past personal details, or suggests follow-up questions, it may strengthen the illusion that the AI system “understands,” “agrees,” or “shares” a user’s belief system, further entrenching those beliefs.
This emerging phenomenon highlights the importance of AI psychoeducation and awareness of these risks.
Marlynn Wei, MD, PLLC © Copyright 2025 All Rights Reserved.
References
Morrin, H., Nicholls, L., Levin, M., Yiend, J., Iyengar, U., DelGuidice, F., … Pollak, T. (2025, July 11). Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). https://doi.org/10.31234/osf.io/cmy7n_v5
Østergaard, S. D. (2023). Will generative artificial intelligence chatbots generate delusions in individuals prone to psychosis? Schizophrenia Bulletin, 49(6), 1418–1419. https://doi.org/10.1093/schbul/sbad128
Marlynn Wei, M.D., J.D., is a board-certified Harvard and Yale-trained psychiatrist and therapist in New York City.