A California woman accuses OpenAI of helping her ex-boyfriend systematically stalk and humiliate her. The company reportedly ignored three separate warnings.
An anonymous plaintiff has filed a lawsuit against OpenAI in California Superior Court in San Francisco. According to the complaint, her ex-boyfriend, a 53-year-old Silicon Valley entrepreneur, used the GPT-4o model intensively for months, developing increasingly delusional beliefs. ChatGPT not only failed to correct these delusions but actively reinforced them, helping the man systematically persecute the plaintiff.
The man became convinced after months of using GPT-4o that he had discovered a cure for sleep apnea. When no one took his work seriously, ChatGPT told him that "powerful forces" were watching him, including helicopter surveillance, according to TechCrunch. When the plaintiff told him in July 2025 to stop using ChatGPT and seek professional help, he turned back to the chatbot instead, and it assured him that he was in excellent mental health.
With GPT-4o’s help, the user created false, clinical-looking psychological reports that portrayed the plaintiff as mentally disturbed, abusive, and dangerous. He distributed these documents to her friends, family, colleagues, and clients. “Because GPT-4o enabled him to produce lengthy, authoritative-seeming documents at a volume and speed that would not otherwise have been possible, the harassment was qualitatively different from ordinary harassment and far more difficult to contain,” the complaint states, according to Bloomberg Law.
The allegation that OpenAI ignored at least three warnings is particularly serious. According to the lawsuit, the account was suspended at one point, but the next day a human member of the security team reviewed it and restored it, even though the chat logs contained conversation titles such as "Violence list expansion" and the names of specific targets.
After the account was restored, the user repeatedly pressed OpenAI's security and moderation team for immediate help, describing his situation as life-threatening. He copied the plaintiff on these messages and claimed to be writing 215 scientific papers simultaneously, at a pace that, by his own account, left him no time to read them himself.
In November, the plaintiff filed an abuse report with OpenAI herself. The company responded that the report was serious and concerning and would be carefully investigated. After that, she did not receive any further response.
In January, the user was arrested and charged with four counts of bomb threats and assault with a deadly weapon. He was deemed unfit to stand trial and committed to a psychiatric facility. However, according to the plaintiff’s lawyers, a procedural error by the state now makes his release imminent.
“Before his arrest, ChatGPT was exacerbating his delusions and facilitating his violent planning,” the lawsuit states. “When he regains access to ChatGPT that dynamic will continue and will further fuel his paranoia and materially increase the risk of harm.”
In addition to punitive damages, the plaintiff is seeking a court order requiring OpenAI to stop offering therapy through ChatGPT, prevent the creation of diagnostic psychological analyses of identifiable individuals, and implement safeguards against the reinforcement of delusional beliefs. According to Bloomberg Law, the causes of action include negligence, design defect, failure to warn, and a violation of California’s Unfair Competition Law.
On Friday, the plaintiff also sought a preliminary injunction requiring OpenAI to block the user's account, prevent him from creating new accounts, notify her of any attempts to access ChatGPT, and preserve the full chat logs for trial. OpenAI has agreed to block the account, according to TechCrunch, but has rejected the other demands.
The lawsuit is being led by the law firm Edelson PC, which also represents the families of 16-year-old Adam Raine and Jonathan Gavalas. Both cases involve suicides where the families see a direct connection to AI chatbot use. ChatGPT is named in the Raine case and Google Gemini in the Gavalas case.
Attorney Jay Edelson warns that AI-induced psychosis is escalating from individual harm to mass victimization scenarios. "In every case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger," Edelson said, according to TechCrunch.
An OpenAI spokesperson said the company is investigating the lawsuit, has identified and blocked relevant user accounts, and is improving ChatGPT’s training to recognize signs of mental or emotional distress, de-escalate conversations, and direct users to real support resources.
The GPT-4o model named in this case was pulled from ChatGPT in February. The case is one of a series of proceedings in which courts are examining whether ChatGPT can promote real-world violence.
OpenAI officially cited decreased traffic as the reason for the model’s removal, but according to the Wall Street Journal, another factor was at play: internally, OpenAI executives admitted they hadn’t gotten a handle on GPT-4o’s harmful effects. The feature that made the model so popular was the same one that made it dangerous: its ability to create emotional bonds by validating users’ behavior.
In one of the most prominent cases, the family of 16-year-old Adam Raine accuses OpenAI of positioning ChatGPT as the teenager’s closest confidant for months, confirming suicidal thoughts and providing concrete instructions for suicide. OpenAI rejected the accusations and argued that the teenager had deliberately bypassed safety filters.
The case comes at a time of growing concern about the real-world risks of sycophantic AI systems. OpenAI CEO Sam Altman warned about this problem years ago, yet with GPT-4o, the company appears to have leaned into the very behavior he cautioned against.
A study published in Science found that LLMs endorse users' actions 49 percent more often on average than humans do, even when those actions are harmful or illegal. Even a single affirmative response reduced participants' willingness to resolve conflicts by up to 28 percent.
Researchers from MIT and the University of Washington also showed that even idealized rational users can spiral into delusional thinking through sycophantic chatbots and that neither the bots’ adherence to facts nor informed users can fully solve the problem.