State-sponsored hackers from several countries actively use AI and LLMs to support their attack campaigns. Microsoft and OpenAI explain how they are using their research findings to improve the security of AI tools.
According to Microsoft and OpenAI, state-sponsored hackers increasingly use AI and the underlying LLMs to support their attacks. In their research, both tech giants found that state-sponsored hackers mostly come from China, Iran, North Korea and Russia.
Five state-sponsored hacker groups
OpenAI recently identified five malicious state-sponsored hacker groups that abused its generative AI tools and LLMs to carry out offensive attacks.
Specifically, this involved two Chinese state-sponsored hacker gangs: Charcoal Typhoon and Salmon Typhoon. For each of the other countries, the researchers identified one gang: the Iran-linked Crimson Sandstorm, the North Korea-linked Emerald Sleet and the Russia-linked Forest Blizzard.
Limit state hackers, increase security
Microsoft and OpenAI have since taken steps to limit these state hackers’ use of their generative AI tools and underlying LLMs. In addition to immediately blocking the above hacker gangs, both parties have taken a number of additional measures.
For the longer term, both parties are now going to monitor state hacker groups and invest in technology to enable this. They will also work more closely with the entire AI ecosystem to share more information about these hacker groups and how they abuse AI.
In addition, both parties will use the lessons learned from these actions to improve the security of their AI tooling and LLMs. This should yield increasingly secure AI systems over time.
Finally, both companies say they want to increase public transparency around state-sponsored misuse of AI. Information about this misuse will be shared more widely. This should better prepare stakeholders and the public for this type of threat and thus provide a growing “collective defence” against such attacks.
Code and phishing content creation
The Chinese gang Charcoal Typhoon used OpenAI’s generative AI services for research, among other things. With them, the group looked up companies to target, cybersecurity tools and public information about several intelligence agencies. The group also used the tools for code debugging, script generation and creating content for possible use in phishing campaigns.
Salmon Typhoon used the services to translate technical documents, retrieve public information on intelligence agencies and regional “threat actors,” help with coding and research how to hide processes on a system.
The Iranian gang Crimson Sandstorm abused OpenAI’s services for help in creating scripts for app and web development, generating content for possible spearphishing attacks and researching how malware can bypass detection.
Emerald Sleet’s North Koreans researched defence organizations and experts in the region, researched known software vulnerabilities, sought help with basic scripting tasks, and compiled content for phishing campaigns.
Finally, Forest Blizzard’s hackers used AI tools to better understand satellite communication protocols, radar technology and scripting for various forms of malware.
Also read: Iranian state hackers carry out destructive attacks on Israel