How chatbots — and their makers — are enabling AI psychosis

New York Times reporter Kashmir Hill on AI psychosis, user delusions, and teen safety.
The explosive growth of AI chatbots in the three years since ChatGPT launched in 2022 has started to have some noticeable, profound, and honestly disturbing effects on some users. There's a lot to unpack there, and it can get pretty complicated.
So I’m very excited to talk with today’s guest, New York Times reporter Kashmir Hill, who has spent the past year writing thought-provoking features about the ways chatbots can affect our mental health.
One of Kashmir’s recent stories was about a teenager, Adam Raine, who died by suicide in April. After his death, his family was shocked to discover that he’d been confiding deeply in ChatGPT for months. They were also pretty surprised to find, in the transcripts, a number of times that ChatGPT seemed to guide him away from telling his loved ones. And it’s not just ChatGPT: Several families have filed wrongful death suits against Character AI, alleging that a lack of safety protocols on the company’s chatbots contributed to their teenage kids’ deaths by suicide.
Then there are the AI-induced delusions. You'll hear us talk about this at length, but pretty much every tech and AI reporter — honestly, maybe every reporter, period — has seen an uptick in the past year of people writing in with some grand or disturbing discovery that they say ChatGPT sparked. Sometimes these emails can be pretty unsettling. And as you'll hear Kashmir explain, plenty of the people who fall into these delusional spirals had no prior history of mental illness.
It’s not surprising that a lot of people want somebody to do something about it, but the who and the how are hard questions. Regulation of any kind seems to be pretty much off the table right now — we’ll see — so that leaves the companies themselves. You’ll hear us touch on this a bit, but not long after we recorded this conversation, OpenAI CEO Sam Altman wrote a blog post about new features that would theoretically, and eventually, identify users’ ages and stop ChatGPT from discussing suicide with teens.
But as you'll hear us discuss, it remains a big open question whether those guardrails will actually work, how they'll be developed, and when we'll see them come to pass.
If you’d like to read more on what we talked about in this episode, check out the links below:
Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!
A podcast from The Verge about big ideas and other problems.
In the US:
Crisis Text Line: Text HOME to 741-741 from anywhere in the US, at any time, about any type of crisis.
988 Suicide & Crisis Lifeline: Call or text 988 (formerly known as the National Suicide Prevention Lifeline). The original phone number, 1-800-273-TALK (8255), is available as well.
The Trevor Project: Text START to 678-678 or call 1-866-488-7386 at any time to speak to a trained counselor.
Outside the US:
The International Association for Suicide Prevention lists a number of suicide hotlines by country. Click here to find them.
Befrienders Worldwide has a network of crisis helplines active in 48 countries. Click here to find them.
© 2025 Vox Media, LLC. All Rights Reserved