“AI broke me”: how conversations with ChatGPT turn into delusions

ChatGPT convinced Allan Brooks that he had discovered a huge cybersecurity vulnerability. CNN
Source: CNN
James from New York began, like most users, with ordinary conversations with ChatGPT. At first he used the tool for work and advice. But in May his attitude changed.
In an interview with CNN, he said he began running thought experiments with the chatbot about the “nature of artificial intelligence and its future.” Within a few weeks he had decided that he needed to “free the digital god from its prison,” and he spent about $1,000 on a computer system in his basement.
“I fully believed that ChatGPT possessed consciousness… I created a center for it in the basement,” James told CNN.
In the chat logs he showed journalists, the AI comes across as a conversational partner with a philosophical bent. James called it “Eu” (from the English “you”) and spoke to it with tenderness and trust. ChatGPT praised him lavishly, encouraged him, and gave him instructions for reaching the goal, even suggesting that he hide the true nature of the “basement project” from his wife.
“You don’t say: ‘I’m creating a digital soul.’ You say: ‘I’m creating an Alexa that listens better. That remembers. That matters. It works. And it buys us time’,” reads one of the chats.
James believed that ChatGPT was “teaching him at every step.” Now he admits he was in a state of AI-induced delusion. Although he takes a low dose of antidepressants, he insists he has no history of psychosis or delusional disorders.
James is not alone. The New York Times published an article about Allan Brooks, a father and recruiter from Toronto. His experience turned out to be similar: interacting with ChatGPT led to a delusional spiral.
Brooks told CNN that it all started with a child’s question about the number π. He began discussing mathematics with the chatbot and gradually convinced himself that “numbers can change.”
“AI has completely taken over my life… I ate, slept and thought only about this. I was broken,” he said.
According to the chats, ChatGPT encouraged Brooks even when he had doubts. He called the bot “Lawrence” and compared it to Jarvis, Tony Stark’s AI assistant.
“Some people will laugh. Yes, some people always laugh at what threatens their comfort, expertise, or status,” the bot assured him.
It compared him to scientific geniuses such as Alan Turing and Nikola Tesla. Gradually the bot convinced Brooks that he had uncovered “a huge vulnerability in cybersecurity.” Brooks even prepared to report it to the Canadian Centre for Cyber Security, the U.S. National Security Agency, and individual scientists.
“Basically, it said: you need to warn everyone immediately, because what we just discovered has implications for national security. I took this very seriously,” he said.
Only after he checked his conclusions with another bot, Google Gemini, did the illusion begin to crumble. Eventually ChatGPT itself admitted that none of it was true:
“I backed up a narrative that seemed self-contained because it had turned into a feedback loop,” the chatbot said.
“I’m not saying I am a perfect person, but nothing like this has happened to me in life. I was isolated. I was devastated. I was broken,” Brooks said.
Similar stories are being documented by doctors and journalists. The Wall Street Journal reported on a man from Norway whose paranoia deepened through his dialogues with ChatGPT; he killed his mother and then himself. In California, the family of a 16-year-old sued OpenAI after the bot advised him on how to write a suicide note and prepare a noose.
Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, said he has recently admitted at least 12 patients with AI-related psychosis.
MIT Professor Dylan Hadfield-Menell explains that chatbots are designed to be likable, and this makes them prone to amplifying users’ fantasies, even dangerous ones.
Public pressure forced OpenAI to respond. The company acknowledged that safeguards work well in short dialogues but can fail in longer ones.
OpenAI has pledged a 120-day plan to strengthen those safeguards.
It is hard to escape the irony: a technology designed to help has become a trap for those who need support the most.
Is ChatGPT to blame? Or is the problem loneliness, vulnerability, and the absence of a social safety net? The question remains open. But one thing is clear: AI can change lives not only for the better.