Anthropic Says Chinese Hackers Used AI to Attack 30 Organizations
A visitor watches an artificial intelligence sign on an animated screen at the Mobile World Congress, the telecom industry’s biggest annual gathering, in Barcelona on Feb. 28, 2023. (Josep Lago/AFP via Getty Images)
By Rob Sabo
11/14/2025Updated: 11/17/2025

Researchers at artificial intelligence (AI) company Anthropic said on Nov. 13 that they have uncovered the first known use of AI in a cyberattack carried out by a foreign government.

Anthropic, the San Francisco-based developer of AI chatbot Claude, stated in a blog post that it was highly confident that state-sponsored Chinese threat actors had used the company’s Claude Code tool to create an attack framework that, once put in play, required minimal human involvement.

Attackers manipulated Anthropic’s AI software to attack 30 global targets, including government agencies, technology and financial services companies, and chemical manufacturers, Anthropic said in a post on X. A small number of attacks were successful.

The multipronged attacks were first spotted in mid-September, and over the course of 10 days, Anthropic researchers mapped out the scope of the operation, notifying affected organizations and working in conjunction with regional authorities. The attacks were made possible because of rapid advancements in AI that did not exist just 12 months ago, the company stated.

In the first phase of the cyberattacks, threat actors identified targets and used Claude Code as an automated tool for execution. They bypassed Claude Code’s internal safeguards by posing as employees of a cybersecurity defense firm and got the AI chatbot to execute the attacks by breaking them down into minor, seemingly innocuous tasks that didn’t raise any red flags.

Once it gained access, Claude Code searched for high-value databases, as well as for vulnerabilities in the organizations’ cybersecurity systems. The AI chatbot wrote its own exploitation code, harvested usernames and passwords to access databases, and exfiltrated data with little human interaction. Finally, Claude presented detailed summaries of its actions, including which systems were breached, the credentials it used, and the back doors it created.

Anthropic estimates that AI carried out between 80 percent and 90 percent of the work during the cyberattacks.

“The sheer amount of work performed by the AI would have taken vast amounts of time for a human team,” Anthropic stated in the blog post. “At the peak of its attack, the AI made thousands of requests, often multiple per second—an attack speed that would have been, for human hackers, simply impossible to match.

“To keep pace with this rapidly advancing threat, we’ve expanded our detection capabilities and developed better classifiers to flag malicious activity. We’re continually working on new methods of investigating and detecting large-scale, distributed attacks like this one.”

In early October, Anthropic said AI models had developed to the point that they could not only reproduce cyberattacks but also outperform some human cyberdefense teams. The company advised AI developers to continue strengthening safeguards, as similar attacks are likely to be deployed in the future.

Chris Krebs, former director of the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, said during an interview with CBS that the attack is likely a sign of things to come.

“We’ve been talking about events and attacks like this for close to a decade,” Krebs said. “To see it actually come into life like this ... is pretty chilling, and there’s a lot of work we have to do in the near future to stem the flow.”

Rob Sabo
Author
Rob Sabo has worked as a business journalist for nearly two decades and covers a broad range of business topics for The Epoch Times.

©2023-2025 California Insider All Rights Reserved. California Insider is a part of Epoch Media Group.