OpenAI has intensified its efforts to combat the misuse of artificial intelligence. In a new report, the company reveals that in recent months it has dismantled several international networks that were using its models for cyberattacks, scams, and political influence operations.
The analysis shows that malicious actors are becoming increasingly sophisticated in their use of AI, while OpenAI is simultaneously expanding its defenses. According to the report, threat actors are not using artificial intelligence to invent entirely new attack methods, but to accelerate existing tactics: they bolt OpenAI's models onto traditional attack chains to make phishing, malware development, and propaganda more efficient.
Russian groups were caught refining malware components, including remote-access Trojans and credential stealers. Korean-speaking hackers worked on command-and-control systems, while suspected Chinese-affiliated actors used AI to improve phishing campaigns targeting Taiwan, US universities, and political organizations. In all of these cases, OpenAI's built-in safety mechanisms blocked the overtly malicious requests.
Large scam industry in Asia and Africa
A notable section of the report covers the growth of organized scam networks in Cambodia, Myanmar, and Nigeria. These groups use ChatGPT not only to translate messages, but also to generate content, build convincing profiles, and set up fraudulent investment campaigns.
Some operators even asked the AI to strip out textual features that could betray their use of generative models. According to the researchers, the scam industry has grown large enough to account for a substantial share of Cambodia's economy.
Despite these attempts at abuse, AI appears to be used more often for defense than for offense. OpenAI's data shows that its models are used roughly three times more often to detect fraud than to facilitate it: millions of users turn to ChatGPT to analyze suspicious messages, investment proposals, and websites. This suggests that AI has also become an important tool in the fight against digital crime.
OpenAI also reports multiple attempts by users with suspected ties to authoritarian regimes to use AI for domestic surveillance and propaganda. Accounts that requested help with monitoring social media, profiling dissidents, or generating propaganda were immediately blocked.
Security expert Cory Kennedy of SecurityScorecard says the report shows how threat actors combine multiple AI models to scale up their operations, which in his view underscores the importance of collaboration among technology companies, governments, and civil society organizations. OpenAI stresses that the misuse of AI is not a temporary risk but a constantly evolving threat that demands ongoing vigilance.