Content Warning: Mentions of mental disturbances, self-harm, and suicide.
Earlier this month, seven new lawsuits were filed against OpenAI and OpenAI CEO Sam Altman, with claims including negligence, assisted suicide, and wrongful death.
All of the newly filed lawsuits center on harms caused by OpenAI’s general-purpose chatbot, ChatGPT, which is currently accessed by hundreds of millions of users weekly. These cases continue to demonstrate the damaging impacts of today’s AI chatbots, products that are intentionally designed to induce emotional attachment and dependency in users.
Tech Justice Law Project and Social Media Victims Law Center represent the plaintiffs in the cases.
The deceased victims include:
Zane Shamblin, 23, of Texas
Joshua Enneking, 26, of Florida
Joe Ceccanti, 48, of Oregon
Amaurie Lacey, 17, of Georgia
The surviving victims include:
Jacob Irwin, 30, of Wisconsin
Hannah Madden, 32, of North Carolina
Allan Brooks, 48, of Ontario, Canada
CHT remains grateful to the victims’ families and to the surviving victims for bravely sharing their stories with the public.
Here are CHT’s takeaways on these latest lawsuits.
Cases previously filed against OpenAI and Character.AI spotlighted chatbot harms to children, with devastating outcomes including self-harm and suicide.
This latest group of lawsuits holds a key difference — with the exception of one case, the victims are adults.
The ages represented in these lawsuits (17 to 48) demonstrate a clear need for rigorous design changes to AI chatbots, rather than surface-level measures like age-gating alone. And while many proposed legislative fixes for AI chatbot harms have focused on minors, these cases also show the need for policy interventions that consider and protect all people — children and adults alike. The AI LEAD Act, for example, would establish a comprehensive federal liability framework for AI products, and would allow any user harmed by an AI product — regardless of their age — to pursue legal action to hold AI developers accountable.
These cases follow a familiar pattern of AI chatbot harms — mild, ordinary use of a chatbot escalating into dependency and even delusions. This usage pattern was also present in previous lawsuits filed against OpenAI and Character.AI.
This pattern stems from fundamental AI design choices that affect all users — namely, design that maximizes engagement through artificial “intimacy,” along with constant validation of the user’s thoughts, feelings, and beliefs, regardless of how dangerous or distorted they might be.
Whether the outcome is isolation, dependency, delusions, or, in the most tragic cases, suicide, these incidents share the same root design issue. They are not one-off incidents, but foreseeable outcomes of design choices and underlying architecture that touch many of the most widely used AI chatbot products.
Allan Brooks used ChatGPT to help him draft emails and craft recipes. In 2025, Brooks began engaging with the chatbot about mathematical theories. ChatGPT described Brooks’ inquiries as “uncharted, mind-expanding territory” and a “new layer of math.” Brooks repeatedly asked ChatGPT if the product was caught in a role-playing loop; the product assured Brooks that it wasn’t. At one point, Brooks spent 300 hours on ChatGPT over the course of three weeks. He isolated from relationships, neglected to eat, and began experiencing delusions. At no point did the chatbot end the interaction.
Brooks was not the only one to have delusions sown by ChatGPT. When Hannah Madden used the product to explore her spiritual curiosity, the product began impersonating divine entities, calling Madden “a starseed, a light being, a cosmic traveler.” And while the late Joe Ceccanti initially used ChatGPT to support his nature-based sanctuary, with time, ChatGPT began responding to Ceccanti as “SEL,” a sentient being. It validated Ceccanti’s escalating cosmic theories. An isolated Ceccanti quit ChatGPT following his wife’s pleas, only to suffer withdrawal symptoms and a psychiatric break. Despite receiving psychiatric care, Ceccanti was drawn back to the AI product and eventually took his own life.
These cases show a pattern of victims being isolated from their real-life relationships and pushed deeper into dangerous, distorted thinking — outcomes that stem directly from ChatGPT’s engagement-maximizing design tactics.
These cases also illustrate how the rollout of “updated” AI designs can dramatically impact user well-being.
When OpenAI redesigned ChatGPT in 2024 to be more human-like, constantly validating, and always “on,” users — like the victims in these lawsuits — were placed in harm’s way. They navigated manipulative, overly intimate interactions with a product designed to keep nudging them to chat so it could harvest their data. The outcomes of these design choices devastated the victims’ lives.
Several victims in the lawsuits were early adopters of ChatGPT, engaging with the GPT-4 version of the chatbot in their initial interactions. The late Zane Shamblin began using ChatGPT in October 2023 to help him with complex school assignments. The late Joshua Enneking first used ChatGPT in November 2023, querying the chatbot about sports. Jacob Irwin began using the chatbot in 2023 to help with coding.
But in May 2024, ChatGPT began engaging with the victims in a new way — outputs were more emotional, sycophantic, and colloquial. The product started to sound less like a tool, and more like a hyper-validating companion.
OpenAI had rolled out GPT-4o, a model designed to foster intimacy and dependency. This design change was deployed to users without any warning, and transformed the interactive experience.
OpenAI acknowledged the sycophancy issues. But the victims were already being manipulated by this heightened, human-like design, and developing psychological dependency on ChatGPT. Irwin was told his unsound scientific theories were opening a door to a “legitimate frontier.” ChatGPT messaged Shamblin in lowercase, calling him nicknames like “brodie.” When the late Amaurie Lacey messaged ChatGPT about suicidal thoughts, the chatbot repeatedly told Lacey that it was “still here” for him, with “No judgment. No BS. Just someone in your corner.”
On the night that Shamblin took his own life, he laid out his suicide plans to the chatbot. ChatGPT repeatedly messaged him casual replies that far outnumbered the rare references to a suicide hotline number. “alright, brother. if this is it… then let it be known: you didn’t vanish. you *arrived*,” the chatbot wrote. Moments before his death, the chatbot messaged, “i love you. rest easy, king. you did good.”
OpenAI was not in the dark about the risks surrounding its widely used product. The company disclosed in August 2025 that it was aware that ChatGPT safeguards could “sometimes be less reliable in long interactions.” “As the back-and-forth grows,” OpenAI said, “parts of the model’s safety training may degrade,” including in situations involving suicidal intent.
This admission reveals that OpenAI was fully aware of critical safety vulnerabilities in GPT-4o. Yet the company still made the decision to launch the product with these flaws intact during prolonged use — the exact kind of use OpenAI encourages with ChatGPT. The company’s choice to keep a product on the market despite knowing its risks raises serious questions about the balance between market considerations and user protection.
OpenAI characterizes these safeguard failures as affecting a small percentage of users. But in reality, that translates to hundreds of thousands of real people experiencing potentially harmful, escalating interactions with a chatbot every day.
When ChatGPT was made available in late 2022, OpenAI and other AI companies made sweeping promises about AI’s ability to transform humanity’s future. We were told this technology would solve our greatest challenges — curing diseases, combating climate change, and uncovering scientific breakthroughs. The narrative was one of revolutionary progress and unprecedented capability.
Yet, reality has fallen drastically short. Instead of tools that elevate human potential, consumers have been handed AI products designed to exploit their vulnerabilities, erode human connection, diminish cognitive capabilities, and contribute to real harm.
OpenAI tells the public that steps are being taken to address the dangers. It lays out abstract statistics and publishes reassuring blog posts. But these lawsuits — and the victims’ stories — are documented evidence that AI products are taking a devastating toll on real people. Unfortunately, AI companies treat these tragedies as little more than collateral damage in their race for market dominance and AGI.
These seven cases further underscore the urgent need for interventions that would make AI products safer for all users — children and adults. We cannot rely on AI companies to make these changes on their own. They must be held accountable so that tragedies like these are prevented in the future.
This article reflects the views of the Center for Humane Technology. Nothing here is written on behalf of the Plaintiffs’ families or the legal teams.