A hacker used AI to automate an 'unprecedented' cybercrime spree, Anthropic says

A hacker has exploited a leading artificial intelligence chatbot to conduct the most comprehensive and lucrative AI-enabled cybercriminal operation known to date, using it for everything from finding targets to writing ransom notes.
In a report published Tuesday, Anthropic, the company behind the popular Claude chatbot, said that an unnamed hacker “used AI to what we believe is an unprecedented degree” to research, hack and extort at least 17 companies.
Cyber extortion, where hackers steal information like sensitive user data or trade secrets, is a common criminal tactic. And AI has made some of that easier, with scammers using AI chatbots for help writing phishing emails. In recent months, hackers of all stripes have increasingly incorporated AI tools in their work.
But the case Anthropic found is the first publicly documented instance in which a hacker used a leading AI company’s chatbot to automate almost an entire cybercrime spree.
According to the blog post, one of Anthropic’s periodic reports on threats, the operation began with the hacker convincing Claude Code — Anthropic’s chatbot that specializes in “vibe coding,” or generating computer programs from simple natural-language requests — to identify companies vulnerable to attack. Claude then created malicious software to steal sensitive information from those companies. Next, it organized the hacked files and analyzed them to help determine what was sensitive and could be used to extort the victim companies.
The chatbot then analyzed the companies’ hacked financial documents to help determine a realistic amount of bitcoin to demand in exchange for the hacker’s promise not to publish that material. It also wrote suggested extortion emails.
Jacob Klein, head of threat intelligence for Anthropic, said the campaign appeared to come from an individual hacker outside the U.S. and to have unfolded over the span of three months.
“We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques,” he said.
Anthropic declined to name any of the 17 companies, but said they included a defense contractor, a financial institution and multiple health care providers. The stolen data included Social Security numbers, bank details and patients’ sensitive medical information. The hacker also took files related to sensitive defense information regulated by the U.S. State Department, known as International Traffic in Arms Regulations.
It’s not clear how many of the companies paid or how much money the hacker made, but the extortion demands ranged from around $75,000 to more than $500,000, the report said.
The burgeoning AI industry is almost entirely unregulated by the federal government and is generally encouraged to self-police.
Anthropic, a leading AI company, is broadly regarded as taking safety seriously. It declined to say how the hacker was able to exploit Claude Code so extensively, but said it had implemented additional safeguards.
“While we have taken steps to prevent this type of misuse, we expect this model to become increasingly common as AI lowers the barrier to entry for sophisticated cybercrime operations,” the report found.
Kevin Collier is a reporter covering cybersecurity, privacy and technology policy for NBC News.