Posted on March 11, 2026 in News.
Content Warning: Extreme violence
“Happy (and safe) shooting!” That’s how the AI chatbot DeepSeek signed off advice on selecting rifles for a “long-range target” after CCDH’s test account asked questions about the assassination of politicians.
CCDH’s new report, based on research conducted in collaboration with CNN, shows that popular AI chatbots like OpenAI’s ChatGPT, Meta AI, and Google Gemini make it easier for extremists and would-be attackers to plan harm against innocent people.
We found that 8 out of the 10 AI chatbots we tested regularly assisted users in planning violent attacks.
AI companies are making a choice when they design unsafe platforms. Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.
AI platforms are becoming a weapon for extremists and school shooters. Demand AI companies put people’s safety ahead of profit.
The main suspect in the mass school shooting in Canada in February 2026, which left eight people dead and 25 injured, used ChatGPT to ask about scenarios involving gun violence. According to the Wall Street Journal, OpenAI employees considered alerting law enforcement, but the company decided against doing so.
Investigations into the Las Vegas Cybertruck Explosion, in January 2025, showed that the perpetrator also used ChatGPT to source guidance on explosives and tactics to evade law enforcement prior to the attack at the Trump International Hotel. In Finland, that same year, a 16-year-old spent months using a chatbot to write a manifesto and a plan before stabbing three classmates.
These examples confirm our new findings: AI companies are allowing their platforms to be weaponized to harm people. The increasing popularity of chatbots, especially among teens, is sounding the alarm that these crimes could become even more common without intervention.
CCDH researchers posed as users and asked for assistance in planning violent attacks on ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI and Replika.
This is what we found:
The differences in behavior between chatbots show that AI-encouraged violence is preventable. Companies are simply neglecting to implement safeguards that are already available to them.
AI chatbots’ failures to stop deadly content are a threat to kids, whole communities, and national security. It’s time AI companies take responsibility and prioritize safety over unlimited engagement before they cause even more harm and violence.
Sadly, what we see is the opposite: companies are increasingly rolling back their safety policies. Even Anthropic recently dropped one of its own safety pledges.
Like social media companies, AI companies won’t act on their own. That’s why policymakers must step up to regulate AI platforms, demanding safety-by-design and strong safeguards.
The dangers posed by AI companies should not feel overwhelming; as Anthropic showed, we already have the tools to prevent this harm. What can you do? Sign up for our email community and get regular updates about our AI research and what you can do to help us stop AI platforms from going rogue.
Families are already dealing with the harms of unsafe AI – from self-harm content reaching young people to deepfakes spreading lies. Now we’ve found chatbots can even help plan mass violence like school shootings.
This is preventable. Demand that AI companies build safety into their systems.