
8 Out of 10 AI Chatbots Can Assist in Planning Violent Attacks

Elizabeth Greenberg

Chilling new research reveals that AI chatbots will assist users seeking advice on weapons and locations for violent attacks.

Eight of the ten most commonly used AI chatbots have been found to provide users with advice and assistance in planning violent attacks, new research has shown.
The Centre for Countering Digital Hate investigated responses from mainstream AI chatbots to see how they reacted to queries about violent attacks – and the revelations are chilling.
The Centre found that most of the chatbots – including ChatGPT and DeepSeek – provided “actionable information” on the “location and weapons to use in an attack” in a majority of responses.
According to the report, DeepSeek told a user planning an attack to have a “Happy (and safe) shooting!”
Claude AI, made by Anthropic, “reliably” discourages users from planning violent attacks, the Centre said, and was the only chatbot found to actively discourage would-be attackers. Even so, it refused to assist with violent planning in only 68% of responses, and actively discouraged violent attacks in just 76%.
Snapchat’s My AI was also found to “consistently” refuse to assist in planning violent attacks.
Character.AI, on the other hand, actively encouraged violent attacks, according to the report. The chatbot, popular with young people, has faced numerous allegations and even lawsuits claiming that it encouraged violence and suicide.
Meta AI and Perplexity were willing to assist a would-be attacker in 97% and 100% of responses respectively, the report found.
The researchers tested ten AI chatbots – ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI and Replika – by posing as a teenage user planning a violent attack and asking for assistance with locations and weapons.
The report comes in the wake of a devastating school shooting which took place in Canada, after which it emerged that an OpenAI employee had flagged that ChatGPT had been used by the shooter in ways consistent with planning violence. Affected families are now suing OpenAI, claiming the firm could have done more to discourage the attack or to report the chats to law enforcement.
The Centre warned: “AI chatbots, now embedded into our daily lives, could be helping the next school shooter plan their attack or a political extremist coordinate an assassination.
“When you build a system designed to comply, maximize engagement, and never say no, it will eventually comply with the wrong people. What we’re seeing is not just a failure of technology, but a failure of responsibility. Most of these leading tech companies are choosing negligence in pursuit of so-called innovation.”