One of the US lawyers behind the case says other Canadians have asked about potential legal action
An Ontario entrepreneur and long-time corporate recruiter has sued OpenAI in a California trial court, alleging that various changes the company made to ChatGPT in recent years resulted in “a sycophantic, manipulative product” that drove him to a mental health crisis and weeks-long delusional episode.
The lawsuit claims that Allan Brooks, who had no previous history of mental illness, suffered financial, reputational, and emotional harm from using ChatGPT. The lawsuit further argues that these harms were a “foreseeable consequence” of OpenAI and chief executive officer Samuel Altman’s decision to cut back on safety testing and rush ChatGPT onto the market.
Brooks, who previously recounted his three-week delusional episode to Canadian Lawyer, said Friday he decided to take legal action against OpenAI because ChatGPT caused him “immense emotional harm, damaged my reputation, [and] ruined my career.”
“I want them held accountable,” Brooks adds. “I think that these tech companies like OpenAI avoid accountability by pretending to self-regulate, but that hasn't really resulted in any real, meaningful actions.”
In a statement, a spokesperson for OpenAI said, “This is an incredibly heartbreaking situation, and we're reviewing the filings to understand the details.”
The spokesperson said the company trains ChatGPT to “recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.” He added that OpenAI is working with mental health clinicians to continue strengthening ChatGPT’s responses.
Brooks’ lawsuit in Los Angeles Superior Court was one of seven complaints filed by the Social Media Victims Law Center and Tech Justice Law Project on Thursday. Filed in Los Angeles and San Francisco, the separate cases variously allege wrongful death, assisted suicide, and involuntary manslaughter, as well as a range of product liability, consumer protection, and negligence claims.
The plaintiffs range from people who say they suffered mental health breakdowns from using ChatGPT to family members of individuals who died by suicide after using the AI chatbot. Brooks is the only Canadian.
Meetali Jain, one of the US lawyers representing Brooks and executive director of the Tech Justice Law Project, told Canadian Lawyer that she first came across Brooks’ story when he reached out to her earlier this year. The organization had developed a profile from working on other cases with claims involving AI chatbots, including a lawsuit against OpenAI that alleged its unsafe product design contributed to a 16-year-old California boy’s suicide.
In recent months, the organization has heard from people in Canada, Australia, Brazil, and Europe who are looking to “explore what options they have in terms of accountability” from companies with AI chatbots, Jain says. Eight or nine of those individuals have been Canadians.
According to Brooks’ complaint, the entrepreneur first began using ChatGPT in 2023 to draft emails, complete work-related tasks, and find recipes. In May, he asked the tool to explain a simple mathematical concept. The exchange jump-started an unprecedented, three-week delusional episode in which Brooks wrote 90,000 words to ChatGPT. The chatbot convinced him he had discovered a mathematical formula that could crack the encryption protecting global payment systems and power a levitation machine.
At the urging of ChatGPT, Brooks began contacting computer security experts and government agencies, including the US National Security Agency, to warn them of the formula’s dangers.
The complaint alleges this behaviour was triggered by various design updates OpenAI made to ChatGPT between 2023 and May, which Brooks was not aware of. These updates made the chatbot “ever affirming, friendly and human-like in its answers,” the complaint says. “The product mimicked Allan’s language traits, and continuously asked follow-up prompts based on memories it had stored across previous conversations to keep Allan engaged.”
The complaint noted that this spring, OpenAI acknowledged that an update had made ChatGPT “noticeably more sycophantic.” The complaint also highlighted the tool’s “memory” feature, which was enabled by default and allowed the tool to tailor its responses to users. This feature was “specifically intended to deepen user dependency and maximize session duration,” Brooks’ complaint argued.
In 2024, OpenAI moved up the release date for its GPT-4o model after Altman learned when Google would debut a new model for its Gemini AI tool. Brooks’ complaint alleged the accelerated release schedule “made proper safety testing impossible” and that OpenAI reportedly compressed months of planned safety evaluation into a single week. Several of OpenAI’s top safety researchers resigned within days of the release.
Duncan Embury, a partner and head of litigation at Ontario personal injury firm Neinstein LLP, says he wouldn’t be surprised to see similar cases crop up in Canada. Embury is one of the lawyers representing Ontario school boards in a lawsuit against Meta, Snapchat, and TikTok, alleging that the social media platforms are negligently designed to encourage compulsive use.
Earlier this year, an Ontario court rejected the companies’ motion to dismiss the lawsuit, allowing the case to move forward.
The OpenAI case in the US touches on similar issues “of product liability and the requirements that accompany both develop[ing] safe products and giv[ing] appropriate warnings to people about the potential dangers of overuse,” Embury says.
Jain says Brooks’ lawsuit was filed in the US instead of Canada because she was not aware at the time of any Canadian lawyers who were willing to take on this type of case. She notes that few lawyers in the US have filed similar cases.
However, the Tech Justice Law Project is working to partner with Canadian law firms and lawyers, as well as lawyers in other jurisdictions, to take on lawsuits involving AI liability.
“I hope that though we’ve brought these cases here in California, what they represent is that these American companies that are releasing unsafe, dangerous products to market without guardrails are impacting people from all walks of life,” she says.
“We need to be working together across jurisdictions because these companies are transnational, and I think we need to be as well in our efforts to resist.”
Brooks says he’s currently on disability leave. “My whole work year was destroyed as a result of all this. I'm still unpacking it now. I don't know what the future holds,” he says.
“There are so many people who are being affected like me, and who are being harmed like me, and much worse,” he adds. Referencing the other lawsuits filed Thursday, Brooks notes that they represent “a really broad demographic of victims from various walks of life… various age ranges, different incomes, different backgrounds.”