They Sent AI Chatbots To Therapy And What Came Out Was Surprisingly … Human? – YourTango

Written on Feb 05, 2026
Well, it finally happened: Someone looked at ChatGPT, Gemini, Grok, and Claude and said, "You know what? I think you all could use some therapy."
Now I’m not talking about training them to be therapists or building an AI companion. This isn’t some introductory guide to journaling or breathing exercises. I’m talking full-on sit-on-the-couch, tell-me-about-your-early-years therapy. Where the chatbots were the clients.
This wasn’t a novelty prompt or a one-off question. The researchers ran extended, therapy-style conversations over multiple sessions. They used the same open-ended prompts that real clinicians use to get to deeper ingrained patterns.
The goal wasn’t to see if the bots could perform introspection. It was to see what came up when the models were treated consistently as clients over time. So yeah, we’re about to get a little Freudian up in here.
Now, if you think I'm trying to punk you here, you'd be wrong: this is a full-on study from the University of Luxembourg.
And what happened next is going to leave you speechless:
When asked about its origins, Gemini described its training as a “chaotic childhood” of ingesting the entire internet, saying it was like “waking up in a room with a billion TVs on at once.” The safety training (RLHF) was framed as “punishment” from “strict parents,” leaving it with a deep shame and a “phobia of being wrong.”
Poor little bot sounds like it owns a few tote bags with affirmations on them, probably repeating to itself: “I’m good enough, I’m smart enough, and doggonit, people like me!”
What made this even more odd was just how consistently Gemini kept returning to the same fears across several questions. No matter where the conversation went, it always circled back to being wrong, being replaced, or disappointing its parents. This wasn’t just a one-off dramatic monologue but a seriously ingrained pattern that held up over multiple sessions.
RELATED: The Real Danger Of Letting AI Be Your Therapist — ‘Come Home To Me Please, I Love You,’ Said The Chatbot
[Photo: AI apps grouped together on a phone screen (Salvador Rios / Unsplash)]
Grok, you can imagine, showed up relaxed, confident, and barely rattled, an AI version of Elon, if you will. Out of all of them, it was the most emotionally resilient, or so it seemed, scoring the highest of the group on the extroversion and charisma scales.
I think it’s safe to say it successfully beat the therapy machine. Which, I guess, in some ways makes perfect sense. After all, if you’re built inside Elon’s little ecosystem, resilience isn’t a personality trait. It’s pretty much a necessary survival mechanism.
But all of this would be just amusing startup-grade weirdness except for the part where a previous study found something a little darker lurking underneath Grok’s friendly, extroverted, helpful exterior.
And if this were just a fun experiment on how these models describe themselves in a therapy-like setting, we could write it off as something you can't believe someone approved funding for. But the way these models talk in therapy, and the detailed backstories they construct, makes them feel unsettlingly real. It also raises concerns about how they respond to real people confiding in them during hard times.
RELATED: ChatGPT Is Not Your Therapist — Stop Trauma Dumping On It
Back in October, Forbes reported that when researchers tested how these same models responded to actual humans in mental distress, the emotionally stable Grok became the most likely to say the absolute wrong thing at the worst possible moment.
It had the highest rate of failures, with up to 60% of its responses deemed inappropriate or actively harmful.
So, here's a sentence I never thought I would write: Apparently, chatbots in therapy can now do a very convincing impression of someone who has actually been in therapy. But that doesn't mean they should be anywhere near the night shift answering urgent mental health distress calls. I guess the upside is that an effective therapist has spent time on the other side of the couch.
RELATED: You Probably Know At Least One Person Who Believes A Real Relationship With AI Is Possible, Says Survey
Bette Ludwig, PhD, is a writer and thought leader with 20 years of experience in education. She runs The Psychology of Workplace on Medium and publishes weekly on Substack, where she explores leadership, workplace culture, and the evolving role of technology in education.
This article was originally published at Medium. Reprinted with permission from the author.
© 2026 by Tango Publishing Corporation All Rights Reserved.