The latest analysis from the Google Threat Intelligence Group shows that malicious actors are no longer just exploring artificial intelligence but are actively integrating it into their operations.
In a post on the Google Cloud website, the team describes how attackers are experimenting with model distillation, AI-assisted phishing, and automated malware development.
The overview builds on the Google Threat Intelligence Group’s earlier report, Adversarial Misuse of Generative AI, the first systematic description of how state actors and cybercriminals use generative AI. Where that report mainly provided an inventory of misuse scenarios and patterns, this update focuses on how those techniques are being refined, on experiments with model distillation, and on the growing integration of AI into attack chains.
According to the researchers, a clear shift is visible. Whereas AI was previously seen primarily as a tool for defense or academic research, state actors and cybercriminals are now using generative models to work more efficiently. This ranges from analyzing targets and collecting open-source data to drafting convincing spear-phishing emails in multiple languages.
Cloning AI models via distillation attacks
An important part of the report revolves around distillation attacks. In these attacks, attackers attempt to reproduce or approximate an existing AI model by querying it repeatedly and systematically analyzing the responses. This allows the functionality of a commercial model to be replicated without direct access to the underlying technology. According to the researchers, this is not only a security risk but also a threat to intellectual property.
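To make the mechanism concrete, the sketch below shows classical knowledge distillation in PyTorch: a small student network is trained to reproduce the output distribution of a teacher it can only query. This is an illustrative toy, not code from the report; in the extraction scenario the researchers describe, the "teacher" would be a remote commercial model reached through its API, and harvested responses would take the place of the local teacher logits used here.

```python
# Minimal, self-contained sketch of knowledge distillation: a small "student"
# model is trained to imitate the output distribution of a larger "teacher".
# In the scenario the report describes, the teacher's outputs would be responses
# harvested from a commercial model's API rather than local logits, but the
# training principle is the same. Model sizes and data here are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, dim = 10, 64

teacher = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, num_classes))
student = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, num_classes))
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so it carries more signal

for step in range(200):
    queries = torch.randn(32, dim)            # stands in for probing queries
    with torch.no_grad():
        teacher_logits = teacher(queries)      # "responses" observed from the teacher

    student_logits = student(queries)

    # KL divergence between the softened teacher and student distributions:
    # the student is pushed to reproduce the teacher's output behavior.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final distillation loss: {loss.item():.4f}")
```

The temperature term softens the probability distribution so the student also learns from the teacher's near-misses rather than only its top answer, which is part of what makes large volumes of harvested responses so informative.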
In addition, the team notes that AI is increasingly being used to generate and modify code. This does not mean that fully autonomous malware campaigns are taking place, but it does mean that malware authors can produce variants more quickly and attempt to evade detection mechanisms. Models are used to rewrite scripts, analyze documentation, and interpret error messages. In this way, AI acts as a productivity enhancer for attackers.
At the same time, Google emphasizes that most of the activities are still experimental. Many actors are testing the limits of models, probing security mechanisms, and trying to circumvent restrictions. According to the report, this leads to a cat-and-mouse game between model providers and attackers, in which providers keep tightening detection and monitoring.
The researchers argue that AI is primarily a reinforcing factor within existing attack chains, rather than a new threat category. The use of artificial intelligence lowers the threshold for certain activities and increases scalability, but does not replace the traditional techniques and infrastructure that have been in use for years.
For organizations, this means they must adapt their security strategy to a threat landscape where AI is used on both sides. The report emphasizes that visibility, monitoring of AI service abuse, and threat intelligence sharing remain crucial. According to the researchers, the integration of AI into offensive operations will continue to deepen, keeping this topic high on the agenda in 2026.
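As one illustration of what monitoring AI service abuse can look like in practice, the sketch below flags API clients whose query volume and prompt diversity are more consistent with automated harvesting than with normal use. The log format, field names, thresholds, and heuristic are assumptions made for this example; they do not come from Google's report.

```python
# Illustrative sketch (not from the report): flag API clients whose query
# patterns resemble systematic model probing, e.g. a distillation attempt.
# The log format, thresholds, and heuristic are hypothetical assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LogEntry:
    client_id: str
    prompt: str

def flag_suspicious_clients(logs, volume_threshold=1000, diversity_threshold=0.9):
    """Flag clients combining high query volume with highly diverse prompts,
    a pattern more typical of automated harvesting than of normal application use."""
    volume = defaultdict(int)
    unique_prompts = defaultdict(set)
    for entry in logs:
        volume[entry.client_id] += 1
        unique_prompts[entry.client_id].add(entry.prompt)

    flagged = []
    for client, count in volume.items():
        diversity = len(unique_prompts[client]) / count  # share of never-repeated prompts
        if count >= volume_threshold and diversity >= diversity_threshold:
            flagged.append((client, count, round(diversity, 2)))
    return flagged

# Toy usage with synthetic log entries: one normal client, one harvesting client.
logs = [LogEntry("acme-app", "translate invoice")] * 50
logs += [LogEntry("probe-bot", f"systematic query {i}") for i in range(2000)]
print(flag_suspicious_clients(logs))
```

In a real deployment such signals would feed into existing rate limiting, abuse response, and threat intelligence sharing rather than acting as a standalone control.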