Criminal Hackers Used AI to Find, Weaponize Software Flaw, Google Says
The Google logo outside the company’s offices in London on June 24, 2025. (Carlos Jasso/Reuters)
By Bill Pan
May 11, 2026 | Updated: May 11, 2026

Google said it has disrupted a criminal hacking group that appeared to use artificial intelligence to find and weaponize a previously unknown security hole, marking a moment that cybersecurity experts had long warned would come.

In a blog post on May 11, Google Threat Intelligence Group said it identified a threat actor attempting to use a “zero-day” exploit in a mass attack. A zero-day is a software or hardware vulnerability that is unknown to the developers and defenders, leaving them with zero days to fix it before attackers can take advantage of it.

“We have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability,” the Google team stated, noting that it was the first time they had seen a threat actor using AI in that way.

There were signs that AI helped write the malicious code. For instance, the code was structured in a way Google described as “highly characteristic” of AI-generated output, including “textbook” use of Python coding language and “detailed help menus” that are not typically seen in human-written attack tools.

The exploit would have allowed hackers to bypass two-factor authentication on “a popular open-source, web-based system administration tool,” though they still would have needed valid credentials such as usernames and passwords to succeed, according to the post.

The tech giant did not disclose further details, including when the thwarted attack would have taken place, whose systems the hackers were targeting, or which AI platform may have been used. Google said, however, that the hackers most likely did not use its own Gemini chatbot.

The report also said hackers tied to China and North Korea have shown “significant interest” in using AI for vulnerability discovery.

Security experts have warned for years that malicious hackers could eventually use AI models to comb through computer code, find undisclosed flaws, and turn them into powerful cyberweapons before defenders have time to respond. Until now, that concern had remained largely theoretical.

“We believe this is the tip of the iceberg,” John Hultquist, chief analyst at Google’s threat intelligence arm, wrote on X. “Other AI-developed [zero-day vulnerabilities] are probably out there.”

Hultquist said there are “good reasons” Google is not disclosing more details about this particular incident, and urged the public to focus instead on the broader implications.

“If criminals are doing it, then state actors with significant resources probably are too,” he wrote.

There have already been reported cases of AI-assisted, state-backed hacking. Anthropic, the company behind the Claude chatbot, stated in November 2025 that state-sponsored Chinese hackers had used its technology in an attempt to infiltrate the computer systems of about 30 companies and government agencies around the world. The infiltration succeeded in a few cases.

At the same time, AI models are becoming increasingly capable of finding previously unknown security flaws. In April, Anthropic announced Claude Mythos Preview, a cybersecurity-focused model that the company said had identified thousands of zero-day vulnerabilities “in every major operating system and every major web browser,” including some security holes that had existed for decades.

Anthropic said the model was so powerful that it would initially be shared only with a limited number of companies and government agencies in the United States and the United Kingdom.

OpenAI has also rolled out a specialized cybersecurity version of ChatGPT. It is similarly only available to “defenders responsible for securing critical infrastructure,” with the goal of using the model to find and patch cyber vulnerabilities and analyze malware.

Earlier this month, the Commerce Department’s Center for AI Standards and Innovation said it had signed agreements with Google, Microsoft, and SpaceX’s xAI to allow the department to evaluate their new AI models before public release. The agency stated that the agreements build on partnerships it made with OpenAI and Anthropic during the Biden administration in August 2024.

Bill Pan is an Epoch Times reporter covering education issues and New York news.