Ottawa warns of legislation if OpenAI doesn’t make changes after chat history raised red flags – The Globe and Mail

Justice Minister Sean Fraser, responding to questions from journalists before a caucus meeting on Wednesday, was among the ministers who met with OpenAI executives. Justin Tang/The Canadian Press
Justice Minister Sean Fraser indicated Wednesday that the government could bring in legislation regulating artificial intelligence to prevent failures to report threats to police, unless OpenAI, the company that failed to alert law enforcement about alarming posts by the Tumbler Ridge shooter, makes swift improvements to its protocols.
Speaking to reporters in Ottawa, Mr. Fraser, who was among the ministers to meet on Tuesday evening with executives from OpenAI, which operates the AI chatbot ChatGPT, said the government wants to see rapid proposals for improvements.
“The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented, and if they’re not forthcoming very quickly, the government’s going to be making changes,” he said.
David Eby calls on Ottawa to regulate when AI providers must report their users to police
OpenAI came under scrutiny after the Wall Street Journal reported last week that employees at the company wanted to warn law enforcement about the shooter’s interactions with ChatGPT, including descriptions of scenarios involving gun violence, but that they were rebuffed.
Prime Minister Mark Carney, who earlier this month visited Tumbler Ridge, B.C., where an 18-year-old shot five children and an educator at her former secondary school as well as her mother and half-brother before shooting herself, said he had sat with families and first responders and “saw the horrors of what happened.”
“Obviously, anything that anyone could have done to prevent that tragedy or future tragedies, must be done,” he told reporters in Ottawa Wednesday.
The Standing Senate Committee on Social Affairs, Science and Technology is preparing this week to examine the governance and security of AI, including chatbots.
The office of its chair, Senator Rosemary Moodie, said that questions on the content, use and security of AI chatbots – in light of the Tumbler Ridge tragedy – would be discussed, with a specific focus on governance and access to AI in Canada.
OpenAI did not mention Tumbler Ridge shooter’s posts in meeting with B.C. officials day after mass shooting: province
Canadian Identity Minister Marc Miller, who was also at the meeting with OpenAI, is currently working on an online harms bill to be introduced later this year.
Artificial Intelligence Minister Evan Solomon is also working on an AI strategy for the government.
Speaking to reporters Wednesday, Mr. Solomon reiterated his disappointment that the OpenAI executives had not brought proposals for improvements to Tuesday’s meeting, such as changes to the thresholds that must be met before alarming exchanges about violence with AI chatbots are reported to police.
“We are looking forward to some concrete proposals. We are disappointed that by the time they came up here, they did not have something more concrete to offer,” he said.
Asked if he was considering banning ChatGPT in Canada, Mr. Solomon replied, “I would say all options are on the table.”
Peter Wall, Mr. Solomon’s spokesperson, said afterwards in a text message that banning ChatGPT from Canada was definitely not an option being considered by the AI minister.
The discussions at the meeting focused on how an “imminent and credible risk” is identified by the tech company as a threshold for reporting alarming posts to police.
Mr. Solomon said there had been a failure in the measures taken, which had led to a terrible tragedy. He said he expects the company to come back with “hard proposals” and “concrete action” soon.
But Michael Geist, the University of Ottawa’s Canada Research Chair in internet law, said there needs to be greater transparency about the standards that AI companies apply for reporting to the police.
“The public should know how their content is monitored, the standards used for action such as account bans or police reporting, and data on how frequently these actions occur,” he said in a text message.
“The standard that OpenAI adopted – an ‘imminent and credible risk of serious physical harm to others’ – sounds reasonable since a high standard should be used before reporting to police. But whether that standard was met in this case depends on information that isn’t publicly available.”
Public Safety Minister Gary Anandasangaree said the meeting with OpenAI executives was a “critical first step with OpenAI.”
“There’s still a lot of unanswered questions, and there’s certainly a sense of frustration, and, frankly, a sense that tech companies overall are not doing enough to address the issues around information that they hold,” he told reporters.
Conservative frontbencher Michelle Rempel Garner told reporters she was “concerned about the government’s pace” on addressing issues posed by AI, saying her party would be willing to collaborate on “smart policy” and discussions on the topic.
With a report from Emily Haws
© Copyright 2026 The Globe and Mail Inc. All rights reserved.