Online harms bill needs framework for reporting threats in AI chats, experts say – The Globe and Mail

Hearts hang from a tree at the memorial for the victims killed in a mass shooting in Tumbler Ridge on Feb. 15.Christinne Muschi/The Canadian Press
Two experts in artificial-intelligence policy say the forthcoming federal online harms bill must address AI chatbots and create a framework for reporting credible threats, after it emerged that concerning content from the Tumbler Ridge shooter was flagged but not reported to police.
Earlier this month, the 18-year-old shooter killed five children and a teacher’s aide at her former B.C. school after killing her mother and half-brother at her home.
Her posts were flagged by OpenAI’s automatic screening systems, the company confirmed Friday, and her ChatGPT account had been suspended because of concerning content. But the company did not notify law enforcement in June because it did not identify “credible or imminent planning.”
The experts say the online harms bill should take action to address a lack of guidelines on when AI platforms should report violent content to police. The platforms should have to be transparent about their policies for mitigating risk, particularly to children, they say.
“Internal flagging without a clear, legally defined escalation path is insufficient,” said Helen Hayes, associate director of policy at McGill University’s Centre for Media, Technology and Democracy (MTD).
“If staff identify credible indicators of imminent harm, there should be a defined regulatory framework telling them what to do next, not just a discretionary corporate judgment call,” she said in an e-mail.
She said that requiring automatic reporting of all violent prompts could raise privacy and civil-liberties concerns and risk overreporting legitimate uses in journalism, fiction, research and therapy.
But, she said, “legislation could require platforms to maintain clear escalation protocols for credible threats,” as well as mandate transparency reporting on how often internal flags lead to external referrals. A regulator could be empowered with audit authority “to assess whether those protocols are robust and consistently applied,” she said.
Ms. Hayes, who on Sunday was leading a Gen(Z)AI Youth Assembly to discuss AI policy and regulation, said there is currently no defined reporting threshold or legal standard for when a “credible, imminent threat” must be escalated.
“That leaves high-stakes decisions in the hands of private risk teams,” she said.
She said chatbots and “consumer-facing GenAI systems” should be addressed in the online harms bill and could be integrated into a legal duty to act responsibly and to protect children online.
The Heritage Department is working on a new version of the online harms bill, which Canadian Identity Minister Marc Miller is expected to introduce later this year.
The previous bill, which failed to become law before the past election, did not include regulation of AI chatbots but proposed the creation of a digital regulator.
Federal Artificial Intelligence Minister Evan Solomon is also working on an AI strategy that is expected to include measures to protect children from being targeted by advertisers, as well as financial support for Canadian companies to help develop a sovereign AI industry.
“Like many Canadians, I am deeply disturbed by reports that concerning online activity from the suspect was not reported to law enforcement in a timely manner,” Mr. Solomon said in a statement on Saturday.
“Canadians expect online platforms, including OpenAI, to have robust safety protocols and escalation practices in place to protect online safety and ensure law enforcement are warned about potential violence.”
Taylor Owen, founding director of McGill’s MTD and a member of the federal task force advising Ottawa on its forthcoming AI strategy, told The Globe and Mail on Sunday that making AI platforms publish their strategies for mitigating risk and reporting harm, in particular toward children, would not only create transparency but could boost industry standards.
He said doing so would allow tech giants to “learn from each other” and improve their reporting practices.
The situation with the Tumbler Ridge shooter’s posts had shown that “you can’t separate AI strategy and online harms,” he said.
“Chatbots should be incorporated into the online harms framework,” he said. “If you are building a tool, you have a duty and responsibility to put in safety requirements for the public.”
© Copyright 2026 The Globe and Mail Inc. All rights reserved.