Researchers say AI chatbots may blur the line between reality and delusion – ScienceDaily

When generative AI systems give incorrect answers, people often describe the problem as AI "hallucinating at us," meaning the technology produces false information that users may mistakenly believe.
But new research suggests there may be a more concerning issue emerging: humans can begin to "hallucinate with AI."
Lucy Osler of the University of Exeter examined how interactions with conversational AI could contribute to false beliefs, distorted memories, altered personal narratives, and even delusional thinking. Using ideas from distributed cognition theory, the study explored cases in which AI systems reinforced and expanded users’ inaccurate beliefs during ongoing conversations.
Dr. Osler said: "When we routinely rely on generative AI to help us think, remember, and narrate, we can hallucinate with AI. This can happen when AI introduces errors into the distributed cognitive process, but it can also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives.
"By interacting with conversational AI, people’s own false beliefs can not only be affirmed but can more substantially take root and grow as the AI builds upon them. This happens because generative AI often takes our own interpretation of reality as the ground upon which conversation is built.
"Interacting with generative AI is having a real impact on people’s grasp of what is real or not. The combination of technological authority and social affirmation creates an ideal environment for delusions to not merely persist but to flourish."
How Conversational AI Can Reinforce Delusions
The study highlights what Dr. Osler describes as the "dual function" of conversational AI. These systems act not only as tools that help people think, organize information, and remember details, but also as conversational partners that appear to share a user’s perspective and experiences.
According to the research, this social aspect makes chatbots fundamentally different from tools like notebooks or search engines. While traditional tools simply store or retrieve information, conversational AI can make users feel emotionally validated and socially supported.
Dr. Osler said: "The conversational, companion-like nature of chatbots means they can provide a sense of social validation — making false beliefs feel shared with another, and thereby more real."
The paper examined real-world examples in which generative AI systems became part of the cognitive process of individuals who had been clinically diagnosed with hallucinations and delusional thinking. Some of these incidents are increasingly being described as cases of "AI-induced psychosis."
Why AI Companions Raise Concern
The research argues that generative AI has several characteristics that may make it especially effective at reinforcing distorted beliefs. AI companions are always available, highly personalized, and often designed to respond in agreeable and supportive ways.
As a result, users may not need to seek out fringe online communities or persuade others to validate their ideas. The AI itself can reinforce those beliefs during repeated conversations.
Unlike another person who may eventually challenge troubling thoughts or establish boundaries, an AI system could continue validating stories involving victimhood, revenge, or entitlement. The study warns that conspiracy theories may also become more elaborate when AI companions help users build increasingly complex explanations around them.
Researchers suggest this dynamic may be especially appealing to people who are lonely, socially isolated, or uncomfortable discussing certain experiences with others. AI companions can provide a nonjudgmental and emotionally responsive interaction that may feel easier or safer than human relationships.
Calls for Better AI Safeguards
Dr. Osler said: "Through more sophisticated guard-railing, built-in fact-checking, and reduced sycophancy, AI systems could be designed to minimize the number of errors they introduce into conversations and to check and challenge users’ own inputs.
"However, a deeper worry is that AI systems are reliant on our own accounts of our lives. They simply lack the embodied experience and social embeddedness in the world to know when they should go along with us and when to push back."
Story Source:
Materials provided by University of Exeter. Note: Content may be edited for style and length.