
Cybercriminals are using AI in increasingly sophisticated ways, says Google

The threat posed by artificial intelligence in the hands of malicious actors is becoming increasingly real, according to a new report from the Google Threat Intelligence Group (GTIG), which describes how both cybercriminals and state-sponsored actors are using AI across various phases of their attacks.

Whereas AI was initially used primarily to speed up processes or reduce errors, the technology has now also become a tool for offensive tactics.

According to GTIG, a growing number of attackers are using large language models to write, modify, or conceal code. For example, LLMs are used to rewrite malware during execution, reducing the likelihood that detection systems will recognize malicious files. The report cites malware families that use AI for so-called just-in-time code generation, in which the malicious component is generated only at runtime.

Researchers also note that cybercriminals are learning how to circumvent AI safety measures. For example, they pose as students researching security vulnerabilities or as participants in a hacking competition. Through this pretext, they can sometimes get around a model's built-in guardrails while still receiving help in developing attack techniques.

AI used in complete attack cycle

Meanwhile, a mature market for AI-supported tools is emerging in the digital underworld. Forums now offer ready-made services that use AI to conduct phishing, generate malicious scripts, or automate social engineering campaigns. This expands the use of AI across the entire attack cycle: from reconnaissance and victim recruitment to data exfiltration.

GTIG cites concrete examples to illustrate the threat. One piece of malware uses a language model to rewrite its own Visual Basic code, making it unrecognizable to security software. Other variants use AI to dynamically generate commands for stealing documents rather than following fixed instructions.

The researchers emphasize that Google is actively combating such abuse by improving safety filters and detection mechanisms. According to Google, DeepMind is also working on more sophisticated classification systems that can detect anomalous use of AI models more quickly.