GhostGPT: rogue chatbot aids cybercrime

A recently introduced AI chatbot called GhostGPT provides cybercriminals with a handy tool for developing malware.

Like earlier chatbots such as WormGPT, GhostGPT is an uncensored AI model, meaning it is tuned to bypass the usual security measures and ethical limitations of mainstream AI systems such as ChatGPT, Claude, Google Gemini and Microsoft Copilot.

Malicious parties can use GhostGPT to generate malicious code and to receive unfiltered responses to sensitive or malicious queries that mainstream AI systems would typically block, researchers at Abnormal Security write in a blog post.

GhostGPT is promoted for a range of malicious activities, including coding, malware creation and exploit development. It can also be used to write persuasive emails for business email compromise (BEC) fraud.

Fifty dollars for a week

Abnormal Security discovered GhostGPT in November on a Telegram channel. Since then, the rogue chatbot appears to have gained considerable traction among cybercriminals, an Abnormal researcher told Dark Reading. The creators offer the large language model at three price points: $50 for a week, $150 for a month and $300 for three months.

For that price, users get an uncensored AI model that promises quick answers and can be used without jailbreak prompts. The tool's author(s) also claim that GhostGPT keeps no user logs and records no user activity, which makes it attractive to those who want to hide their illegal activities.

Malicious chatbots a growing problem

Malicious AI chatbots such as GhostGPT are a new and growing problem for security organizations because they lower the barrier to entry for cybercriminals. The tools let anyone generate malicious code by entering just a few prompts.

Moreover, they enable individuals with some programming skills to expand their capabilities and improve their malware and exploit code. They largely eliminate the need to spend time and effort jailbreaking generative AI models for malicious use.

WormGPT, for example, appeared in July 2023 as one of the first AI models developed explicitly for malicious use. Since then, a handful of other models have followed, including WolfGPT, EscapeGPT and FraudGPT.

Most of these models gained little traction, partly because they did not live up to their promises, or because they were simply jailbroken versions of ChatGPT with wrappers added to make them look like new, standalone AI tools.

According to an Abnormal researcher, GhostGPT is not very different from other uncensored variants such as WormGPT and EscapeGPT, although the specific differences depend on which variant it is compared to.

EscapeGPT, for example, relies on jailbreak prompts to bypass restrictions, while WormGPT was a fully customized large language model (LLM) designed specifically for malicious purposes. It is unclear whether GhostGPT is a custom LLM or a jailbroken version of an existing model, because the author has not released this information. This lack of transparency makes it difficult to compare GhostGPT definitively with other variants.

Developer of GhostGPT unknown

GhostGPT’s growing popularity in underground circles also seems to have made its creator(s) more cautious. The author or seller of the chatbot has deactivated many accounts created to promote the tool and appears to have switched to private sales, according to the researcher. Sales threads on various cybercrime forums have also been closed, further concealing the identity of the creators. Currently, there is no definitive information on who is behind GhostGPT.