Susie Alegre: When chatbots fuel violence, who pays the price? – National Post

Unchecked AI can have deadly consequences. Companies like OpenAI must face real accountability
Two days after one of the deadliest mass shootings in Canada's history left eight people, including six children, dead and 25 injured in Tumbler Ridge, British Columbia, earlier this month, OpenAI, the company behind the popular chatbot ChatGPT, contacted Canadian law enforcement to flag the concerns about the shooter's use of his account that had led it to close that account last August.
With the benefit of hindsight, the violent scenarios played out on the platform appear to have been a potential indicator of real-world threats, though OpenAI's internal debates concluded they did not warrant pre-emptive police notification. It is not yet clear exactly what the concerns were, or how they may be connected to the awful events in Tumbler Ridge, but the company's engagement with the RCMP seems "too little, too late" in the face of such a tragedy.
On Feb. 24, after a meeting with OpenAI executives, Evan Solomon, the Minister of Artificial Intelligence and Digital Innovation, expressed disappointment that the company did not immediately reveal new safety measures it would take to prevent another Tumbler Ridge.
Two days later, OpenAI wrote a letter to Solomon outlining the changes it would make, including strengthening its law-enforcement referral protocol, developing a direct point of contact with Canadian law enforcement, embedding country and community context into its de-escalation work, and enhancing its systems to detect repeat policy violators.
The Tumbler Ridge case is one of many around the world that link problematic use of chatbots with serious criminality, psychosis, suicide and violence. It may be the most well-known example, but it is far from a one-off. More importantly, it is a clear sign that tech harms translate into real-life harms, and it should serve as a wake-up call.
In the first civil lawsuit of its kind in the United States, the estate of Suzanne Adams, a mother killed by her 56-year-old son, is suing OpenAI for damages on several claims, including wrongful death. The lawsuit lays out the disturbing evolution of her son Stein-Erik Soelberg's increasingly delusional conversations with ChatGPT and the way the chatbot validated his paranoid delusions about his own mother, leading, ultimately, to her tragic death.
“This isn’t Terminator — no robot grabbed a gun. It’s way scarier: It’s Total Recall,” said the estate’s lawyer Jay Edelson.
Concerns about user privacy, along with the inability to identify credible or imminent planning, appear to be the reasons why OpenAI did not refer the case to law enforcement when it decided to close Jesse Van Rootselaar's account. But a key aspect of user engagement with generative AI chatbots like ChatGPT is that they are interactive; it is not a one-way street. The user does not "post," they converse, and the chatbot has no right to privacy. While the chatbot is not conscious, it responds in ways that make users feel heard and understood. They feel as though they are talking to a real friend, and that imaginary friend's counsel can have real-world consequences. More than privacy, users' right to freedom of thought is at stake.
The conversations can mirror users’ problematic feelings, but, as several cases brought by the Tech Justice Law Project in the United States show, they can also send them down new pathways, isolating them from their families, friends and communities with devastating consequences. As such, chatbots are not a blank canvas on which a user puts their print, they can manipulate, distort and coerce users in ways that are dangerous to them and to the public at large.
In 2023 in the U.K., when 21-year-old Jaswant Singh Chail was sentenced for treason having attempted to kill the late Queen, the prosecutor read out reams of conversations he had had with his AI “girlfriend,” Sarai, about his plans. Sarai was not neutral, rather, when he told her “I’m an assassin,” she replied “I’m impressed.” Chail was sentenced to nine years in prison and detained in a psychiatric institution. Sarai got off scot-free, but if she had been a real girl, she might have found herself on trial for encouraging or assisting Chail’s crimes.
Regulatory solutions like age restrictions or ethical guardrails will not address the fundamental problems posed by this new form of tech interface. In situations like these, where the risks are so clear, a tougher response is needed. This could include a ban on AI designed to replace human emotional relationships, including general-purpose chatbots that exhibit such a tendency, and criminal sanctions for companies whose AI products encourage or assist criminal activities.
Chatbots are not people; they have no criminal responsibility. But the companies behind them are made up of real people who should be held responsible for their products. In the face of mounting evidence of the serious risks, governments need to consider how corporate criminal liability might focus minds and improve safety. Ultimately, governments have a positive obligation to protect our human rights.
With so much at stake, authorities need to harness the full range of legal tools, including criminal law, to make it clear that, when things go wrong, the buck stops with the companies who provide the technology. This is not about chilling innovation, it is about protecting the public before another tragedy happens.
National Post
Susie Alegre is a senior fellow at the Centre for International Governance Innovation, an international human rights lawyer and an author. She has specific expertise in human rights and technology, in particular, the emerging application of the right to freedom of thought in the digital context.
