OpenAI Flagged a Mass Shooter's Troubling Conversations With ChatGPT Before the Incident, Decided Not to Warn Police

A grim scoop from the Wall Street Journal: an automated review system at OpenAI flagged disturbing conversations that a future mass shooter was having with the company’s flagship AI ChatGPT — but, despite being urged by employees at the company to warn law enforcement, OpenAI leadership opted not to.
Eighteen-year-old Jesse Van Rootselaar ultimately killed eight people, including herself, and injured 25 more in British Columbia earlier this month, in a tragedy that shook Canada and the world. What we didn't know until today is that employees at OpenAI had been aware of Van Rootselaar for months, and had debated alerting authorities because of the alarming nature of her conversations with ChatGPT.
In the conversations with OpenAI's chatbot, according to sources at the company who spoke to the WSJ, Van Rootselaar "described scenarios involving gun violence." The sources say they recommended that the company warn local authorities, but that leadership decided against it.
An OpenAI spokesperson didn't dispute those claims, telling the newspaper that the company banned Van Rootselaar's account but decided that her interactions with ChatGPT didn't meet its internal criteria for escalating concerns about a user to police.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” the company said in a statement to the paper. The spokesperson also said that the company had reached out to assist Canadian police after the shooting took place.
We've known since last year that OpenAI scans users' conversations for signs that they're planning a violent crime, though it's not clear whether that monitoring has yet successfully headed off an attack.
Its decision to engage in that monitoring in the first place reflects an increasingly long list of incidents in which ChatGPT users have fallen into severe mental health crises after becoming obsessed with the bot, sometimes resulting in involuntary commitment or jail, as well as a growing number of suicides and murders that have led to numerous lawsuits.
In a sense, how to deal with threatening online conduct is a longstanding question that every social platform has grappled with. But AI raises difficult new questions, since chatbots can engage with users directly, sometimes even encouraging bad behavior or otherwise behaving inappropriately.
Like many mass shooters, Van Rootselaar left behind a complicated digital legacy — including on Roblox — that investigators are still wading through.
More on OpenAI: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking

I’m the executive editor at Futurism, assigning, editing, and reporting on everything from artificial intelligence and space exploration to the personalities shaping the tech sector.