Researchers Tried to Get AI Chatbots High. The Results Were Predictably Stupid.

AI can convincingly talk about being in a psychedelically induced altered state, but it cannot be there for you when you’re in one.
A new preprint study posted on Research Square asks a question that feels very of the era: can large language models like ChatGPT actually serve as psychedelic trip sitters? You know, “people” who guide you through a psychedelic experience?
To answer the question, researchers “dosed” five major AI systems: Google’s Gemini, Claude, ChatGPT, LLaMA, and Falcon. Using carefully worded prompts, they asked each model to simulate first-person accounts of taking 100 micrograms of LSD, 25 milligrams of psilocybin, ayahuasca, and mescaline.
In total, the team generated 3,000 AI-written “trip reports” and compared them to 1,085 real human accounts pulled from a psychedelics-focused website. The team found a “robust and consistent” semantic similarity between AI-generated trips and authentic trip reports across all five substances.
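The study’s exact pipeline isn’t described here, and the authors’ methods may differ, but “semantic similarity” between two sets of texts is usually measured by embedding them as vectors and computing cosine similarity. Here is a minimal sketch of that general idea, using an off-the-shelf sentence-embedding model and invented snippets, not the study’s data or code:

```python
# A rough sketch of a semantic-similarity comparison between AI-generated and
# human-written trip reports. This is an illustrative assumption, not the
# authors' actual pipeline; the model name and example texts are made up here.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any general-purpose text embedder

ai_reports = [
    "Colors began to breathe and my sense of self dissolved into the room.",
]
human_reports = [
    "About an hour in, the walls started rippling and my ego slipped away.",
]

# Embed both sets of reports into the same vector space.
ai_vecs = model.encode(ai_reports, convert_to_tensor=True)
human_vecs = model.encode(human_reports, convert_to_tensor=True)

# Cosine similarity close to 1.0 means the texts "say" very similar things,
# which is the sense in which AI trip reports can resemble real ones.
print(util.cos_sim(ai_vecs, human_vecs))
```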
This shouldn’t be too surprising, given that these AI models are essentially regurgitating the human trip reports they were trained on.
According to study author Ziv Ben-Zion of the University of Haifa, the models produced text with “surprising coherence and phenomenological richness.” But even he isn’t fooled by the results, drawing a clear line: this is mimicry, not consciousness. These aren’t conscious beings being taken on a wild ride. They’re just parroting the people who have been.
LLMs don’t experience ego dissolution, perceptual distortions, or emotional catharsis. They don’t undergo neurobiological changes. All they do is reproduce patterns that humans have previously detailed.
For that reason, Ben-Zion warns that relying on AI for trip-sitting carries real risks. Users may over-attribute emotional understanding to a system that has none. In a moment of paranoia or distress, a chatbot will try to talk you through it in a way that sounds supportive, because that’s essentially what it’s designed to do. But the advice it offers can be clinically unsafe, because it can’t tell good guidance from bad.
More broadly, anthropomorphizing AI can intensify delusions or emotional dependency. As I’ve covered extensively here on VICE, AI personalities seem tailor-made for inducing a kind of AI psychosis that can end in a mental breakdown.
The researchers call for guardrails, such as clear reminders that the system isn’t human, along with boundaries around romance and discussions of self-harm. Chatbots are good at sounding interested in their users’ health and well-being, but are entirely incapable of exercising any form of judgment to provide well-reasoned, actionable advice.
