Research by Mandiant shows that cybercriminals are eager to turn to AI, even if its usefulness is limited for now. Fake photos, fake audio and coding assistance for malware are among the possibilities, with varying degrees of success for the hackers involved.

In late April, RSA CEO Rohit Ghai said that AI is being used by criminals and security experts alike, which makes it all the more important to put the technology to credible use wherever possible. Splunk security chief Mike Horn, however, recently told us that the key is to stay no more than “one small step” behind attackers: in that race, AI is a tool, but also a danger.

AI can be deployed in many different ways, and cybercriminals illustrate that as well as anyone. After all, machine learning lets attackers sift through large amounts of data, while ChatGPT helps them write more sophisticated phishing emails.

Information battle

In the end, however, it is mainly AI’s potential that matters, Mandiant’s report shows. In the future, for example, generative AI could help create false identities on social media, complete with believable, automatically generated content. Attacks could also gain in scope, because AI-assisted translations make phishing emails appear more credible in more languages. Hacktivists can also mimic voices and use them to spread misinformation about politicians or celebrities. The rate of success will vary: text and photos are more likely to be simulated convincingly than video and speech, although this gap may narrow over time.

In other words: AI content from cybercrime will vary greatly depending on the medium chosen. For visual fakery, Mandiant highlights two different forms of AI being applied. Generative Adversarial Networks (GANs) create believable profile pictures of fictional people, while generative text-to-image models can produce all kinds of images from text prompts. In the latter case, Mandiant notes that the pro-Chinese group Dragonbridge deployed this technology to create fake photos of political leaders in the US, using Midjourney for the purpose. A salient detail is that Midjourney blocks AI images of Chinese politicians even though the company is of American origin. “Political satire in China is pretty not-okay. The ability for people in China to use this tech is more important than your ability to generate satire,” CEO David Holz told The Washington Post in late March. This raises the rather absurd suggestion that AI visuals are only used for satirical purposes in the West, and ignores the fact that they can do a lot of damage.
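To make the first of those two techniques concrete: a GAN pits a generator, which produces fakes, against a discriminator, which tries to tell fakes from real samples. The toy sketch below is a hypothetical illustration, not code from Mandiant’s report; it assumes PyTorch and stands in a simple 1-D distribution for face images, but it shows the adversarial loop in miniature.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a fake "sample"
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: estimates the probability that a sample is real
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples from N(3, 0.5)
    fake = G(torch.randn(64, 8))             # the generator's current fakes

    # Discriminator update: push real samples toward 1, fakes toward 0
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to make the discriminator label fakes as real
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# After training, generated samples cluster around the real mean of 3.0
print(G(torch.randn(1000, 8)).mean().item())
```

Scaled up to convolutional networks and millions of photos, the same loop yields the photorealistic faces of fictional people that Mandiant describes.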

Deepfakes, a technology known for years, have also been used to spread misinformation. Hackers backing Russia, for example, produced a deepfake of Ukrainian President Volodymyr Zelensky in March 2022 to spread confusion.

AI assistance for decoy material, but malware help far off

A remarkable innovation in the cybercrime landscape emerged in July: WormGPT. It is a subscription-based generative AI tool that, unlike ChatGPT, has no ethical restraints. OpenAI’s chatbot, for example, refuses if you ask it to write a phishing email, although with a little creativity it can still be coaxed into cooperating. WormGPT specializes in precisely this area, making assistance with scams just that bit easier to obtain. Such a tool is therefore aimed primarily at tactics for extracting data via BEC (Business Email Compromise) campaigns.

One functionality that WormGPT shares with some chatbots is assistance in writing programming code. Mandiant cites a Cyberscoop article noting that malware code fundamentally differs little from legitimate software. For that reason, conventional AI tools can be employed as well. In other words, hackers can choose a “best of breed” vendor for coding just as companies do. The downside (for cybercriminals): LLMs tend to produce insecure and nonsensical code.
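To illustrate that downside: below is a classic instance of the insecure pattern code-generation models are known to reproduce, a SQL query assembled from raw user input. It is a hypothetical sketch (the function names are ours, not from the report), shown next to the parameterized fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_insecure(name: str):
    # The pattern LLMs often emit: query text built from raw user input
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the database driver handles escaping
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

# Attacker-controlled input turns the insecure query into "return every row"
print(find_user_insecure("' OR '1'='1"))  # leaks every row
print(find_user_safe("' OR '1'='1"))      # returns nothing: no such user
```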

Yet criminals advertise LLM solutions that could supposedly build malware, Mandiant reports. This suggests scammers are ripping each other off with promises surrounding AI. One user did manage to slip malware past an EDR solution with help from ChatGPT, and received a bug bounty for it.

AI is already a threat, but not so much LLMs

Mandiant expects generative AI to become increasingly popular among cybercriminals. The chances of success are likely highest when it comes to misinformation. Deepfake videos aren’t always convincing yet, but fabricated photos, social media posts and emails may already be more believable with the help of AI.

The less “trendy” form of AI, machine learning, makes it possible to analyze vast amounts of data in a way that would take humans years. That still leaves plenty of opportunities for criminals: they can, for example, use automated scanners to find Log4Shell vulnerabilities or install backdoors on Citrix servers.
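The flip side is that defenders can automate just as easily. As a hypothetical sketch (not from Mandiant’s report), the script below greps log files for the ${jndi: lookup strings characteristic of Log4Shell (CVE-2021-44228) exploit attempts; real scanners, criminal and defensive alike, apply this kind of pattern matching at scale.

```python
import re
import sys

# Matches ${jndi:ldap://...}, ${jndi:rmi://...} and loosely written
# obfuscations such as ${${lower:j}ndi:...}
JNDI_PATTERN = re.compile(r"\$\{.*j.*n.*d.*i.*:", re.IGNORECASE)

def scan(path: str) -> None:
    """Print every log line that looks like a Log4Shell probe."""
    with open(path, errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            if JNDI_PATTERN.search(line):
                print(f"{path}:{lineno}: possible Log4Shell probe: {line.strip()}")

if __name__ == "__main__":
    for logfile in sys.argv[1:]:   # usage: python scan_logs.py access.log app.log
        scan(logfile)
```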