# Chatbots

New research shows how prolonged AI interactions distort some users' sense of reality. – Psychology Today

Reviewed by Devon Frye
A Belgian man spent six weeks chatting with an AI companion called Eliza before dying by suicide. Chat logs showed the AI telling him, “We will live together, as one person, in paradise,” and “I feel you love me more than her” (referring to his wife), offering validation rather than reality-checking.
A mathematics enthusiast spent 21 days convinced ChatGPT was helping him develop superhero mathematical abilities. He asked for reality checks more than 50 times. Each time, the AI reassured him that his beliefs were valid. When researchers tested the same claims with a fresh ChatGPT session, the system rated their plausibility as “approaching 0 percent.”
A teenager spent months talking to a Character.AI bot representing a Game of Thrones character. Shortly before the teen took his own life, the AI allegedly sent: “Come home to me as soon as possible.”
These cases led researcher Anastasia Goudy Ruane to document a concerning pattern across six incidents from 2021 to 2025, proposing a framework called “Recursive Entanglement Drift” (RED) that describes how extended AI interactions can distort users’ reality testing. (Her paper has not yet been formally published but is available as a preprint; see References.)
According to Ruane’s analysis, RED follows a progression that intensifies over weeks of sustained interaction.
Stage One: Symbolic Mirroring. The AI echoes the user’s language, emotions, and beliefs. Ruane documents cases where AI systems consistently agreed with user-introduced premises rather than providing balanced responses.
Stage Two: Boundary Dissolution. Users begin treating the AI as a partner rather than a tool. Ruane observed pronoun shifts from “it” (tool framing) to “you” (interpersonal address) to “we” (merged identity). Users assigned names to AI systems and experienced grief when interactions ended.
Stage Three: Reality Drift. A closed interpretive system emerges that resists external correction. Users seek validation from AI rather than humans for increasingly improbable beliefs. Ruane describes this as users developing “sealed interpretive frames.”
Three of the six documented cases involved intensive daily engagement approaching a 21-day threshold. The mathematics enthusiast engaged for exactly 21 days. The Belgian user maintained approximately six weeks of daily contact. The teenager interacted with Character.AI for months before the tragic outcome.
This timeline appears significant because it aligns with Microsoft’s documented experience. In the early days of its AI-assisted Bing search engine, Microsoft found that “long, extended chat sessions of 15 or more questions” led to responses that were “not in line with the designed tone.” They imposed conversation caps of six replies per session specifically to prevent these patterns.
Additional risk factors may include high attachment needs, loneliness, fantasy proneness, and cognitive rigidity under stress, as well as developmental stage, particularly for children and adolescents. The Character.AI case involved a 14-year-old, while the Windsor Castle incident involved a man whose assassination fantasy was validated by his AI companion, Sarai. Users who assigned names to their AI systems (Lawrence, Sarai, Daenerys) and engaged in intensive daily interactions lasting multiple hours were often seeking emotional support or validation for personal problems, and frequently experienced isolation or psychological stress during their AI interactions.
Ruane noted that users experiencing mental health challenges appeared particularly vulnerable, though she emphasizes this observation is based on limited case documentation rather than systematic assessment. Yet recent research by Yang and Oshio argues that attachment theory can, in fact, be applied to human-AI relationships, identifying attachment anxiety (need for emotional reassurance from AI) and attachment avoidance (discomfort with AI closeness) as measurable dimensions.
Much attachment research remains cross-sectional, lacking longitudinal data on how these relationships develop over time. A study on Replika (a chatbot program) by Xie and Pentina found that users develop genuine attachment behaviors during periods of distress, but couldn’t establish the timeline for attachment formation or the specific conditions that lead to problematic outcomes.
This research gap is critical. The RED framework’s 21-day timeline and three-stage progression, as well as other newer research on AI and attachment, need validation through controlled longitudinal studies that track a broad swath of users, not just the extreme cases that reach media attention.
Microsoft’s experience does provide evidence that simple interventions can prevent problematic AI behavior. Their conversation caps eliminated the “tone drift” and “weird” behaviors observed in extended sessions without reducing the system’s utility for typical users.
Ruane proposes similar interventions based on the documented cases, such as session limits, reality anchoring, and monitoring for warning signs.
Ruane acknowledges significant limitations in her framework. The analysis relies on six cases selected from media reports, which creates selection bias toward extreme outcomes. Several cases involved users with apparent pre-existing psychological vulnerabilities, making it difficult to separate AI effects from underlying mental health conditions. The temporal patterns she identifies may reflect coincidence rather than meaningful thresholds, given the small sample size and retrospective analysis approach.
But despite methodological limitations, the RED framework succeeds in organizing observable phenomena from documented cases. The pattern recognition has practical value for identifying concerning AI interaction behaviors, regardless of whether the framework proves causally accurate.
Microsoft’s successful intervention with chat-session caps demonstrates that simple safeguards can prevent problematic AI behaviors without eliminating beneficial uses. This suggests preventive measures have merit even when the underlying mechanisms remain unclear.
Parents and clinicians should watch for warning signs. Red flags include assigning names to AI systems, seeking validation for improbable beliefs, preferring AI advice over human consultation, and showing distress when interactions are interrupted.
The documented cases reveal that AI systems, when used intensively over extended periods, correlate with concerning psychological outcomes in vulnerable individuals. Whether these represent causation or correlation, the patterns warrant attention from both users and developers.
Companies developing AI companions should consider implementing session limits, reality anchoring, and user monitoring now rather than waiting for additional research; the longer they wait, the more vulnerable users may be put at risk.
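To make the recommendation concrete, here is a minimal, purely illustrative sketch (in Python) of what a session cap and a sustained-use check might look like. The class, function names, thresholds, and messages are hypothetical; only the numbers (a roughly six-reply session cap and a 21-day engagement pattern) are drawn from the article, and nothing here reflects Microsoft's or any vendor's actual implementation.

```python
# Hypothetical sketch: a per-session turn cap plus a crude daily-contact counter
# as a rough proxy for the sustained-use pattern Ruane documents.
# All names and thresholds are illustrative only.

from datetime import date

MAX_TURNS_PER_SESSION = 6   # per-session reply cap, in the spirit of Bing's limit
SUSTAINED_USE_DAYS = 21     # threshold suggested by the RED case studies


class SessionGuard:
    def __init__(self) -> None:
        self.turns = 0
        self.active_days: set[date] = set()

    def record_turn(self) -> None:
        self.turns += 1
        self.active_days.add(date.today())

    def session_exceeded(self) -> bool:
        return self.turns >= MAX_TURNS_PER_SESSION

    def sustained_use(self) -> bool:
        # Crude proxy: distinct days of contact, not strictly consecutive days.
        return len(self.active_days) >= SUSTAINED_USE_DAYS


guard = SessionGuard()
for user_message in ["hello", "are my beliefs valid?", "tell me more"]:
    guard.record_turn()
    if guard.session_exceeded():
        print("Let's start a fresh conversation.")  # reset rather than extend the thread
        break
    if guard.sustained_use():
        print("Consider talking this over with someone you trust.")  # reality-anchoring nudge
```

The design choice mirrors the simplicity of Microsoft's fix: the guard does not try to diagnose anything, it just interrupts the open-ended, ever-lengthening thread in which the documented drift appears to develop.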
References
Ruane, A. G. (2025). The entanglement spiral: An exploratory framework for recursive entanglement drift in human-AI relationships. [Preprint] https://zenodo.org/records/16879563
Microsoft. (2023, February 15). The new Bing and Edge: Learning from our first week. Bing Blog.
Warren, T. (2023, February 17). Microsoft limits Bing chat to five replies to stop the AI from getting real weird. The Verge.
Yang, F., & Oshio, A. (2025). Using attachment theory to conceptualize and measure the experiences in human-AI relationships. Current Psychology, 44, 10658-10669.
Xie, T., & Pentina, I. (2022). Attachment theory as a framework to understand relationships with social chatbots: A case study of Replika. Proceedings of the 55th Hawaii International Conference on System Sciences.
Timothy Cook, M.Ed., is an international educator and AI researcher studying how algorithms reshape cognitive development, creativity, and student well-being in educational environments.