Good morning.
This morning, representatives of OpenAI will meet with officials from the office of Canada’s AI minister to explain the company’s protocols for alerting authorities when a user’s interactions with its chatbot suggest a risk of serious harm to someone.
The meeting, which AI Minister Evan Solomon described Monday as a summoning, comes after several days of efforts on the part of OpenAI to defend its actions in the wake of the horrific school shooting in Tumbler Ridge two weeks ago.
It also comes as governments around the world, including Canada, consider how to regulate the powerful, fast-developing technology.
On Friday, the Wall Street Journal reported that while using ChatGPT in June, the shooter “described scenarios involving gun violence over several days,” which were flagged by an automated review system.
About a dozen employees debated taking action, with some interpreting the writings as a sign the user might commit real-world violence and urging leaders to alert Canadian law enforcement, the WSJ reported.
The company ultimately did not contact authorities, though it did suspend the shooter’s account.
Two weeks ago, Jesse Van Rootselaar killed her mother and half-brother at the home they all shared before heading to the school where she killed five young students and a teacher’s aide. The shooter then killed herself.
After the WSJ story was published, OpenAI issued a statement saying the company determined that the postings did not meet its threshold for referring to law enforcement. For that to have happened, the posts must indicate “an imminent and credible risk of serious physical harm to others.”
Last June, the statement said, the company did not identify “credible or imminent planning.”
The statement went on to say that overreporting to law enforcement risks causing “distress” to a young person and their family if officers show up unannounced.
ChatGPT is trained to discourage users who express an intent to harm others and “to avoid providing advice that could result in immediate physical harm to an individual,” the statement said.
The day after the shooting, OpenAI had a previously planned meeting with B.C. government officials to discuss the possibility of opening a satellite office in the province. The company did not mention its chatbot’s interactions with the shooter.
Instead, the next day, OpenAI reached out to the government asking for a contact with the RCMP. The Mounties are now including the posts as part of their probe into the shooting.
“From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia,” B.C. Premier David Eby said Monday, according to The Canadian Press. “I’m angry about that.”
In a statement Monday, OpenAI said it is supporting investigators in their work and that its senior leaders will discuss at Tuesday’s meeting the company’s “overall approach to safety, safeguards we have in place, and how we continuously work to strengthen them.”
Ottawa has backed away from legislation specifically focused on AI, but plans to introduce bills focusing on privacy and online harms.
Taylor Owen, an associate professor at McGill University and a member of the federal task force advising Ottawa on its upcoming AI strategy, has said that online-harms legislation should address AI platforms.
“AI systems pose significant risks,” he wrote in his submission to government last fall.
He noted that studies have shown chatbots “fail to respond appropriately to users experiencing mental health crises, reinforce cognitive distortions through mirroring language, and cultivate a false sense of emotional reciprocity.”
Jay Edelson is a U.S. lawyer representing two families who are suing OpenAI, alleging ChatGPT helped drive a 16-year-old to suicide and an adult man to murder his mother. The allegations have not been proven in court.
Mr. Edelson said OpenAI’s decision not to disclose what it knew earlier about the Tumbler Ridge shooter is alarming.
“We are very convinced that this is a widespread problem,” he said.
But Candice Alder, a B.C.-based psychotherapist and AI ethics consultant with Synthetica.io, cautioned against relying on AI platforms to become “informal extensions of law enforcement.”
Doing so risks compromising important Charter-protected rights like privacy and free expression, she said.
She added that AI platforms are not a replacement for professional mental health services and are not equipped to do clinical risk assessments.
Ms. Alder noted that in the Tumbler Ridge case, police and mental health professionals were already involved with the shooter. The shooter had been in psychiatric treatment, and weapons had been seized from the home.
“We should be cautious about retroactively shifting responsibility onto an AI platform when established legal and law enforcement mechanisms were already in motion,” Ms. Alder wrote.
This is the weekly British Columbia newsletter, written by B.C. Editor Wendy Cox.
© Copyright 2026 The Globe and Mail Inc. All rights reserved.