Google says it likely prevented a cyberattack in which hackers used AI to help develop an exploit for a zero-day vulnerability. According to the Google Threat Intelligence Group, the incident demonstrates how generative AI is increasingly shifting from a supporting tool to an active component of cyberattacks.
The researchers say this is the first time they have observed an exploit that AI likely helped develop. It targeted a vulnerability in a popular open-source web tool for system administration that allowed two-factor authentication to be bypassed, provided attackers already possessed valid login credentials.
According to Google, cybercriminals intended to use the vulnerability for a large-scale attack campaign. The company says it intervened together with the vendor before the exploit could be actively abused. Google has not disclosed which group was behind the attack, but it says it has no evidence that its own Gemini model was used.
According to Google, attackers are increasingly using AI for vulnerability research and exploit development. Groups linked to China and North Korea, in particular, are reportedly actively experimenting with AI models to detect software flaws.
According to Google, attackers prompt models to pose as security researchers or firmware experts in order to analyze embedded systems and protocols. They also feed models datasets of historical vulnerabilities to help them reason better about security flaws.
In addition, Google observes that attackers are deploying agentic tools to partially automate research and exploit validation. This shifts AI from a passive assistant to a system that independently executes parts of offensive workflows.
Malware is becoming more autonomous
The report also describes malware that uses AI for obfuscation and autonomous task execution. Some malware families generate superfluous code with no functional purpose in order to hinder detection. Other variants dynamically adapt their scripts or payloads to evade security software.
One example is PROMPTSPY, an Android backdoor that leverages Gemini functionality. According to Google, the malware can read a device’s user interface, send information to a model, and then receive instructions to perform actions, such as clicking or swiping on specific screen elements.
In addition to AI abuse, Google also observes that AI ecosystems are increasingly being targeted. Attackers focus on libraries, plug-ins, API connectors, and other components related to AI platforms.
The report references, among other things, attacks on software projects such as LiteLLM and BerriAI. Through supply-chain attacks, criminals attempted to gain access to cloud credentials, GitHub tokens, and other sensitive data. According to Google, such attacks can lead not only to ransomware or data theft but also to the misuse of internal AI systems.
The publication comes at a time when AI companies are speaking more openly about the security risks of powerful models. Anthropic recently postponed the rollout of its Mythos model due to concerns about misuse by criminals; the model is currently available only to a limited group of testers.
Google emphasizes that the same AI technology can also be used defensively. The company points to Big Sleep, an AI agent from Google DeepMind and Project Zero that searches for unknown vulnerabilities, and to CodeMender, an experimental system designed to automatically help repair vulnerabilities.