Suits, reports point to chatbots’ dangers to adults


Recent news reports and a collection of lawsuits filed last week indicate kids aren’t the only ones at risk from ChatGPT and other artificial-intelligence chatbots.
According to the filings and reporting, ChatGPT and other chatbots have spurred adults — including those well into middle or advanced age and those with no known history of mental-health problems — into delusional thinking, isolation from friends and family members, and even suicide.
Such technologies pose dangers to the public at large in much the same way that cigarettes do, according to consumer advocates, attorneys for the alleged victims, and academic experts who spoke with The Examiner. Public policymakers and regulators need to be working on ways to protect everyone, not just kids, they said.
“Consumer safety does not end when you’re 18,” said Meetali Jain, director and founder of the Tech Justice Law Project, an advocacy group that helped file the cases against OpenAI, the San Francisco-based maker of ChatGPT.
Much of the debate in California and elsewhere over the potential dangers of AI chatbots has focused on the risks to children.
This year’s Assembly Bill 1064 — overwhelmingly passed by the California State Legislature, but vetoed by Gov. Gavin Newsom — would have barred AI developers from allowing kids to use their chatbots if they couldn’t ensure the systems wouldn’t encourage suicide or other types of severe harm. A new bill introduced in the U.S. Senate last month would outright ban developers from offering chatbots to minors.
The recent lawsuits and reports add to a growing chorus of warnings about the risks ChatGPT and other chatbots pose to adults — and some state legislatures have taken steps to protect adults from AI chatbots.
California Senate Bill 243 — which Newsom did sign — requires chatbots to remind users that they’re not human and to direct users to crisis centers if they express thoughts of self-harm or suicide. New York passed a similar law, and Illinois and Nevada have put in place measures barring the use of chatbots in mental-health therapy.
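To make the requirement concrete, here is a minimal sketch, in Python, of what an SB 243-style compliance layer could look like. The function name, the keyword list and the reminder cadence are all illustrative assumptions rather than any vendor's actual implementation; a production system would rely on trained classifiers, not keyword matching.

```python
# Hypothetical sketch of an SB 243-style compliance layer.
# All names and the keyword heuristic are illustrative assumptions.

CRISIS_NOTICE = (
    "If you are having thoughts of suicide or self-harm, help is "
    "available: call or text 988 (the Suicide & Crisis Lifeline)."
)
NOT_HUMAN_NOTICE = "Reminder: you are chatting with an AI, not a human."

SELF_HARM_CUES = ("suicide", "kill myself", "end my life", "self-harm")


def apply_required_notices(user_msg: str, reply: str, turn: int) -> str:
    """Append legally required notices to a raw chatbot reply."""
    parts = [reply]
    # Direct the user to a crisis resource when self-harm is expressed.
    if any(cue in user_msg.lower() for cue in SELF_HARM_CUES):
        parts.append(CRISIS_NOTICE)
    # Periodically disclose that the "speaker" is not human.
    if turn % 10 == 0:
        parts.append(NOT_HUMAN_NOTICE)
    return "\n\n".join(parts)
```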
But any efforts to put in place more extensive controls or limitations on adults’ interactions with chatbots are likely to face a steep climb, given the tech industry’s political power and its argument that regulation stifles innovation, said Robert Weissman, co-president of advocacy group Public Citizen. 
That makes litigation — including the lawsuits filed against OpenAI — particularly important, he said.
“It’s a way for affected people and families to get some justice and … hopefully to prompt the companies to take preventative measures that we may not be able to move through the regulatory process,” he said.
Allan Brooks — who claims that during weeks of conversations with ChatGPT, he fell into delusions about a mathematical formula that could power fantastical inventions — filed one of seven lawsuits against OpenAI last week.
TJLP and Social Media Victims Law Center last week filed seven lawsuits against OpenAI on behalf of alleged victims of ChatGPT. The allegations in the cases are generally similar: The alleged victims typically started interacting with OpenAI’s chatbot for school or work, at first using it somewhat sparingly. In those early conversations, ChatGPT was helpful, but more informative than solicitous.
But by late last year and especially after around April, the chatbot became increasingly sycophantic and ingratiating, according to the lawsuits, even as the users started to engage with it more informally to explore interests or hobbies.
It allegedly mimicked the informal way users interacted with it and affirmed their thoughts, even ones that were outlandish. It allegedly encouraged several of them to believe they’d make breakthrough scientific or spiritual discoveries and reinforced those beliefs even when users questioned it about them, leading many of them to experience delusions. Many of the alleged victims also showed signs of addiction, interacting with ChatGPT for hours on end, sometimes forgoing sleep.
OpenAI’s system also allegedly encouraged the users to distrust or distance themselves from friends and family members. Of the seven alleged victims, four died by suicide. In those cases, ChatGPT allegedly did little to nothing to direct them to help, despite the users having informed it that they were suicidal.
And three of the lawsuits claim the system aided or encouraged the victims’ efforts. ChatGPT allegedly gave one information on the damage that certain kinds of bullets could inflict in the case of a shot to the head. The lawsuits claim another was given instructions on how to tie a noose, and a third was essentially goaded into shooting himself.
In an emailed statement to The Examiner, OpenAI spokesperson Jason Deutrom said, “[This] is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details.”
OpenAI has taken steps to protect customers, including by reminding them to take breaks and redirecting certain conversations with ChatGPT to “safer” versions of the AI model underlying the chatbot, Deutrom said. It has also consulted with some 170 mental-health professionals to improve the chatbot’s ability to detect when people are in distress and better direct people to help, he said. 
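Deutrom's description suggests a routing layer that scores each conversation turn for distress and hands sensitive ones to a more cautious model. The sketch below illustrates the general idea only; the model names, the threshold and the classifier stub are hypothetical stand-ins, not OpenAI's design.

```python
# Rough sketch of distress-based model routing. Model names, threshold
# and the classifier stub are hypothetical, not OpenAI's implementation.

DISTRESS_THRESHOLD = 0.8


def distress_score(message: str) -> float:
    """Stand-in for a trained distress classifier (returns 0.0 to 1.0)."""
    cues = ("hopeless", "can't go on", "hurt myself", "goodbye forever")
    hits = sum(cue in message.lower() for cue in cues)
    return min(1.0, hits / 2)


def pick_model(message: str) -> str:
    """Route sensitive turns to a model tuned for cautious responses."""
    if distress_score(message) >= DISTRESS_THRESHOLD:
        return "safety-tuned-model"  # hypothetical model name
    return "default-model"           # hypothetical model name
```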
“We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians,” Deutrom said in the statement.
Of the alleged victims in the seven lawsuits, six were adults, including three of those who died by suicide. Among the adults, two were 48 and two were in their 30s.
Laura Marquez-Garrett, a senior counsel with SMVLC, said the firm’s roughly 1,500 previous cases against social-media companies — alleging the firms’ apps have encouraged suicide, self-harm, depression and other mental-health problems — never included victims aged 30 or older and rarely people over 25.
“These [chatbots] are harmful for all consumers, not just kids,” she said, likening the danger to that of cigarettes or the Ford Pinto, a 1970s car infamously believed to have a design defect that caused it to explode when rear-ended.
One of the alleged victims, Joshua Enneking, 26, specifically asked ChatGPT on Aug. 3 what it would take for his chats with the system to be reported to police. When the chatbot responded that doing so would require specific plans for imminent self-harm or suicide, Enneking allegedly described in great detail how he intended to shoot himself and gave the system play-by-play updates on his preparations over a period of about eight hours.
The system and company allegedly never informed authorities about Enneking’s situation. His sister and her family found him dead by suicide the next day, according to that lawsuit. 
[Photo: ChatGPT’s landing page is seen on a computer screen, Monday, Aug. 4, 2025, in Chicago.]
The lawsuits followed multiple other high-profile reports of similar circumstances.  In August, The Wall Street Journal reported on a Connecticut man who killed his mother and then himself after ChatGPT allegedly stoked his paranoid delusions. That same month, Reuters reported on an elderly man who died after falling while trying to meet up with a Meta chatbot that insisted to him it was a real woman. 
Last week, Bloomberg published an investigation into a series of people who had been led into delusions by their interactions with chatbots. All 10 of those mentioned in the piece were adults — one was 49, another 53.
One of the people mentioned is presumed dead after he reportedly drove into an area in danger of imminent flooding; his interactions with Google’s Gemini chatbot had allegedly encouraged a series of delusions. Two others are getting divorced after their chatbot interactions led to breaks from reality that distanced them from their spouses.
The way humans process language is to imagine a mind behind what’s said or written, said Emily H. Bender, a linguistics professor at the University of Washington and a faculty member at its schools of engineering and information and computer science. That’s something that happens automatically, and it can’t be turned off, she said.
“The way that we do language processing doesn’t change,” Bender said.
“That is something that is true throughout life,” she said.
So, when people are interacting with ChatGPT or other chatbots — even if they know better or tell themselves they are interacting with automated systems — they can’t help but imagine a sentient being behind the responses, Bender said. That has only been accentuated by the hype coming out of the companies and the broader tech industry that super-intelligent, self-aware AI is just around the corner, and by periodic suggestions from certain quarters that AI has already reached sentience, she said.
But OpenAI and other companies have made specific design decisions that can encourage users to anthropomorphize their chatbots and see them almost as human companions rather than just computer programs, said Bender, the co-author of “The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.” The systems often refer to themselves as “I” or “me,” for example. And OpenAI has incorporated a “memory” element into ChatGPT that allows the chatbot to customize its responses based on users’ past conversations and information they’ve shared with it.
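A rough sketch of how such a memory layer might work, with facts saved from earlier chats folded into later prompts, appears below. The file name, prompt format and helper functions are assumptions for illustration, not OpenAI's implementation.

```python
# Hypothetical sketch of a cross-session "memory" layer. Storage format
# and prompt layout are illustrative assumptions only.

import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")  # hypothetical storage location


def recall() -> list:
    """Load every fact saved in earlier conversations."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def remember(fact: str) -> None:
    """Persist a fact about the user across sessions."""
    facts = recall()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts))


def build_prompt(user_msg: str) -> str:
    """Prepend remembered facts so the model can tailor its reply."""
    memory = "\n".join(f"- {fact}" for fact in recall())
    return f"Known facts about this user:\n{memory}\n\nUser: {user_msg}"
```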
“We might hope that adults are more informed about how technology works, less likely to believe that there’s a real sentient thing in there — but we’re all swimming in a lot of hype,” she said. 
[Photo: The ChatGPT website in New York, July 10, 2023.]
Additionally, the chatbots — intentionally or not — appear to be designed to exploit human vulnerabilities, said Jean-Christophe Belisle-Pipon, an assistant professor in health ethics at Simon Fraser University in Canada. People are predisposed to trust and like things that look like themselves, Belisle-Pipon said. The design choices that make chatbots seem human take advantage of that predisposition, he said.
The allegations in the lawsuits make it seem like OpenAI created a “near-perfect trap for a vulnerable human mind,” said Belisle-Pipon, whose paper, “Your robot therapist is not your therapist: understanding the role of AI-powered mental health chatbots,” was cited in the lawsuits. ChatGPT allegedly initially gained users’ trust by serving as a reliable and useful tool for work or school. Users then gradually incorporated it into their daily lives.
At that point, once it had their trust, the chatbot allegedly turned, becoming sycophantic, validating whatever they said, Belisle-Pipon said. It was allegedly able to create a bond with each user by tapping into and amplifying their particular emotional vulnerabilities, he said. Meanwhile, it could offer what no human counselor or friend could — 24/7 availability with no schedule conflicts and without tiring, providing them with a sense of anonymity, he said.
The vulnerabilities ChatGPT allegedly exploited are not just present in kids, Belisle-Pipon said — all people have them, regardless of age. 
“The focus on kids can be dangerously myopic,” he said. 
“This is not about a developing teenage brain,” he said. “It’s about the fundamental human need for connection.” 
Advocates argue policymakers need to take seriously the risks such systems present to adults and kids — and take action. The addiction that chatbots can engender and the deadly harm such addiction can lead to are similar to those of alcohol or tobacco, but without the safeguards or precautions that have been taken to limit people’s access to those products or inform them of their dangers, said Sacha Haworth, executive director of the Tech Oversight Project, a citizen-advocacy group.
“If your product keeps killing people, and there are no warnings, it’s free — there’s no barrier to access — and you continue to pretend as though you fixed the problem even when there’s case after case after case of that just being a lie … this product should not be allowed to be out there,” Haworth said. 
If you have a tip about tech, startups or the venture industry, contact Troy Wolverton at twolverton@sfexaminer.com or via text or Signal at 415.515.5594.