AI minister says meeting with OpenAI executives will not delve into details of Tumbler Ridge shooter’s posts – The Globe and Mail

Artificial Intelligence Minister Evan Solomon in Ottawa on Tuesday. Justin Tang/The Canadian Press
Artificial Intelligence Minister Evan Solomon says the details of the Tumbler Ridge shooter’s alarming interactions with an AI chat platform will not be discussed with the company’s safety executives at a meeting this evening, as the massacre is under investigation by the RCMP.
However, he plans to ask the tech company to explain its safety protocols, and specifically what it does to protect Canadians from harm.
Mr. Solomon said he expects OpenAI’s top safety representatives, whom he has summoned from the U.S. to Ottawa for a meeting, to disclose in detail their protocols for escalating worrying interactions to the police.
Canadian Identity Minister Marc Miller, whose department is working on an online harms bill expected to be introduced later this year, will also be at the meeting.
“I want them to tell us how they are going to protect children and Canadians,” Mr. Solomon told reporters in Ottawa, saying the Tumbler Ridge shooting was “one of the worst tragedies you could possibly imagine.”
Mr. Solomon called in OpenAI’s top safety executives to Ottawa after it was revealed that the tech giant had flagged, but not reported to authorities, troubling interactions between the Tumbler Ridge shooter and its ChatGPT chatbot months before the deadly attack.
He said the OpenAI safety officials would be expected to give him “more details about their safety protocols, their escalation thresholds and how they keep Canadians safe.”
He said he also wants to hear from them how they respond when they perceive a threat, including “what the technology does and what the human process does.”
But he said the details of the case – including troubling posts involving gun violence by the 18-year-old shooter Jesse Van Rootselaar made months before the mass killing – will not be discussed at the meeting because of the ongoing RCMP investigation.
He said he is discussing the matter with other ministers, including Mr. Miller, who in a recent interview with The Globe and Mail indicated that AI chatbots’ interactions with young and vulnerable people were likely to be addressed by the online harms bill currently being drafted.
Mr. Solomon said when it comes to legislation, “all options are on the table.”
Taylor Owen, founding director of McGill University’s Centre for Media, Technology and Democracy, and a member of the federal task force advising Ottawa on its forthcoming AI strategy, wrote to Mr. Solomon and Mr. Miller on Tuesday, warning that the failure to report the shooter’s posts exposes a gaping hole in Canadian regulations of AI chatbots.
“This tragedy has become another example of real-world harms caused by AI systems,” he wrote.
Mr. Owen said summoning OpenAI to explain its safety protocols was “the right instinct” by Mr. Solomon, but he said this should not have been necessary.
“Had Canada established an online safety regulator that included chatbots in its scope, the government would already know how these companies flag dangerous content, what their escalation thresholds are, how they handle cross-border referrals, and whether their systems are adequate,” he wrote.
“The consequences of inadequate chatbot safety protocols are felt in schools and in communities,” he added.
He said “consumer-facing AI chatbots fall outside of any existing Canadian regulatory framework.”
Design and safety decisions are made by American companies with direct consequences for Canadian users, Mr. Owen said, but “no Canadian regulator had knowledge of these protocols, nor the authority to scrutinize those decisions, ensure adherence to best practices, require mitigation measures, or enforce accountability before or after the fact.”
The government response should not be to require AI companies to monitor and report private conversations with chatbots to law enforcement, as this raises serious privacy concerns, he added.
“What is needed is a broader regulatory framework that addresses the upstream design decisions and safety architectures that allowed these situations to arise in the first place.”
He said Canada is a “clear laggard” when it comes to regulating online platforms. Britain’s Online Safety Act has been in force since 2025, and every EU member of the G7 is covered by the Digital Services Act.
OpenAI confirmed Friday that the shooter’s account was banned last June for violating the company’s usage policy, but said that her activity did not meet the company’s threshold for notifying law enforcement. A user’s messages to the chatbot would have to indicate an “imminent and credible risk of serious physical harm to others” for that threshold to be met, OpenAI said in a statement.
The shooter killed her mother and half-brother at the home they shared before heading to Tumbler Ridge Secondary School, where she killed five students and a teacher’s aide. The shooter, who had a history of mental health problems, then killed herself at the school as police responded to the scene.
The Wall Street Journal reported Friday that, while using ChatGPT in June, the shooter “described scenarios involving gun violence over several days,” which were flagged by OpenAI’s automated review system. About a dozen employees debated taking action, with some interpreting the writings as an indication of potential for real-world violence and urging leaders to alert Canadian law enforcement, the Journal reported. The company ultimately did not contact authorities.
“From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia,” B.C. Premier David Eby told reporters on Monday. “I’m angry about that.”
In a statement on Monday, OpenAI said it is supporting Mounties in their work, and that its senior leaders will discuss at Tuesday’s meeting in Ottawa the company’s “overall approach to safety, safeguards we have in place, and how we continuously work to strengthen them.”
With files from Andrea Woo
