Two LLM platforms, WormGPT 4 and KawaiiGPT, demonstrate that AI tools are becoming increasingly accessible to cybercriminals, according to research by Palo Alto Networks Unit 42. For as little as $50 a month, attackers can get advanced help with phishing, ransomware, and social engineering. The ethical boundaries that mainstream AI tools enforce have been removed entirely.
AI tools are powerful enough to build complex systems, but that same power makes them suitable for destroying systems, according to Unit 42. Researchers call this the “dual-use” dilemma, familiar from nuclear technology and biotechnology, and it now also applies to large language models. What helps defenders accelerate security responses also helps attackers scale up their operations.
Using LLMs for malicious tasks demands little skill yet has a wide reach. An attacker who speaks only one language can produce grammatically correct, convincing phishing emails in virtually any other language. Even coding knowledge is no longer a requirement, as attackers can “vibe code” their malware scripts. Sometimes this requires circumventing ethical LLM guardrails that are as well-intentioned as they are porous, but increasingly, malicious actors simply build their own tools that handle malicious tasks out of the box.
WormGPT 4: commercial platform for cybercrime
WormGPT first appeared in July 2023 as one of the first commercial malicious LLMs. It served as a warning shot rather than a serious threat in itself, as it had many limitations. The original version was built on the open-source GPT-J 6B model, fine-tuned on datasets of malware and exploits. After negative publicity, the developer shut down the project, but demand remained.
The current version, WormGPT 4, has a clear commercial strategy. Subscriptions cost $50 per month, $175 per year, or $220 for lifetime access. The platform has over 500 subscribers on Telegram and operates without any ethical restrictions. The tagline is simple: “AI without boundaries.”
The capabilities are geared toward practical cybercrime. The system generates convincing phishing emails without the language errors that are a hallmark of traditional attacks. It also provides ready-to-use PowerShell scripts for ransomware, complete with AES-256 encryption and optional data exfiltration via Tor. The interface is user-friendly: requesting a script to lock PDF files yields working code within seconds.
It is clear that such AI models are here to stay. Open-source LLMs can be adapted for unethical tasks and have been available for years at a level capable of causing real damage.
KawaiiGPT brings malware within reach
KawaiiGPT, identified in July 2025, lowers the barrier for malicious actors even further. The tool is freely available on GitHub and can be set up on most Linux systems within five minutes. That accessibility makes it attractive to beginners who previously had no access to such tools.
The interface uses a casual tone (“Owo! okay! here you go…”), but the output is dangerous. The system creates professional spear-phishing emails, Python scripts for lateral movement via SSH, and data exfiltration tools. The functionality is more than sufficient for basic attacks.
With 500 registered users and an active Telegram community of 180 members, KawaiiGPT is not a fringe phenomenon. The tool positions itself as a custom-built model rather than a simple jailbreak of public APIs. Whether that is true remains unclear, but the positioning lends it an air of authenticity among users.
What does this mean for defenders?
The emergence of these tools signals a shift in the threat landscape, according to Unit 42: “Advanced, unrestricted AI is no longer confined to the theoretical realm or to highly skilled nation-state actors.”
For security teams, this means that classic warning signs are no longer reliable. Poor grammar in phishing emails can be eliminated with a single prompt, and sloppy malware code is becoming rarer. The scale and speed of attacks are increasing while the skill required is decreasing.
The Unit 42 report assigns responsibility to three groups. Developers of foundation models must implement robust alignment techniques and stress testing. Governments must develop frameworks for auditing and best practices. Researchers must pursue international cooperation to disrupt the monetization of malicious LLM services.
The message is clear: the threat is not theoretical, but operational. Organizations that prepare for AI-enhanced attacks are better equipped to deal with what is to come.