In the security world, there is much talk about artificial intelligence’s impact. But what is AI doing on the attackers’ side? How is it being deployed within security products designed to make companies more secure? Is it also possible to secure the AI used within organizations? We will address those questions on Techzine.
We will address each question in a separate article featuring experts from the field. They participated in a roundtable discussion at the beginning of Cybersecurity Awareness Month. The participants are: André Noordam of SentinelOne, Patrick de Jong of Palo Alto Networks, Daan Huybregts of Zscaler, Joost van Drenth of NetApp, Edwin Weijdema of Veeam, Pieter Molen of Trend Micro, Danlin Ou of Synology, Daniël Jansen of Tesorion and Younes Loumite of NinjaOne. The roundtable participants discuss AI as a cyber threat in this first article.
How big is the threat?
As the discussion begins, it quickly becomes clear that AI has a significant impact on how cybercriminals refine their campaigns. They know they can use the technology to increase the impact of their attacks, and they do so in several ways, primarily by aiming for more sophisticated and more frequent attacks. If hackers can use AI to mount better attacks and strike more often, companies should prepare for the worst.
Weijdema of Veeam sees the use of artificial intelligence in cyber attacks as only a minor threat for now. "We thought we would see many more AI-powered attacks when AI emerged. However, if you look at the data from incident response, only one percent is AI-driven. The rest is mostly social engineering," says Weijdema, noting that the wave of attacks has not been as devastating as initially feared.
Amplified social engineering
However, an attack may not register as a pure AI attack even when AI is actually being used. Consider phishing emails, which now look far more professional and credible than the spam-like messages of before. Such a development significantly increases a campaign's success rate, possibly by a factor of ten. Even if an attack is not entirely AI-driven, AI makes it much more effective.
NetApp’s Van Drenth agrees that the impact is not yet as great as predicted, but he expects that to change in the coming years. “The intelligence of attacks will increase, not only in social engineering but also in other campaigns. We have not yet seen the hockey stick effect, where the volume or complexity of attacks grows exponentially. Still, the expectations for the next few years are different.”
No one is spared
The sophistication of social engineering means that basically anyone can be a target. Previously, phishing campaigns targeted the masses, hoping that someone would bite despite the often low level of sophistication. Now, highly targeted emails can be created, with cybercriminals appearing to know their target’s exact situation. They simply gather publicly available information and use it to send a professional-looking email that is barely distinguishable from a legitimate message.
Several roundtable participants recognize this. Consider so-called CEO scams, where employees receive an e-mail, supposedly from their CEO, about urgent matters. These e-mails look genuine, with a convincing subject line and a carefully outlined emergency. Such an e-mail often asks the recipient to call a specific number, or sometimes even to transfer money. It is not inconceivable that an employee acts on those requests, unless they discover on their own that something is off.
According to Ou, a solid strategy to keep criminals away from data is more necessary now than ever. That applies to CEO fraud as well. With immutable backups retained for periods of 7 to 30 days, data remains safe. “No one can access it. You can’t even delete it. So it is really protected, even against ransomware,” says Ou.
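To make that idea of immutability concrete, here is a minimal sketch of how such a retention window is often enforced in practice, assuming an S3-compatible object store with Object Lock. The bucket name and the 30-day period are purely illustrative and are not taken from Ou's or Synology's actual configuration.

```python
# Illustrative sketch only: enforcing a 30-day immutable retention window on an
# S3-compatible backup bucket using Object Lock. Names and values are hypothetical.
import boto3

s3 = boto3.client("s3")

# Object Lock can only be enabled at bucket creation time.
s3.create_bucket(
    Bucket="backup-vault-example",
    ObjectLockEnabledForBucket=True,
)

# Default retention in COMPLIANCE mode: no one, not even an administrator,
# can delete or overwrite object versions during the 30-day window.
s3.put_object_lock_configuration(
    Bucket="backup-vault-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```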
A sudden feel for language
Noordam of SentinelOne also encountered a situation that demonstrates how good attacks have become. Sometimes an e-mail has to be checked several times to be sure it is phishing. “This summer, I booked a hotel,” Noordam begins the anecdote. “I received an e-mail from the hotel. It was perfectly written, and even the address was correct. The e-mail asked me to provide my credit card information again to confirm the booking. I found that suspicious. I called the hotel, and it turned out the e-mail was not legitimate. But 99 percent of people would have clicked on the link.”
With this, Noordam emphasizes that alongside targeted campaigns, the language itself has received a quality injection. Campaigns from years ago gave themselves away with flawed wording, but AI models have now evolved to the point that the emails appear to come from real people. AI is also getting better and better at writing messages in local languages; thanks to this continued development, some models write Dutch better than many humans do. Often, phishing emails can only be distinguished by recognizing suspicious calls to action.
Malicious code for everyone
The stronger campaigns of cybercriminals thus do create new risks for organizations. The question is whether your organization and its employees are sufficiently resilient against them. And even if they are, can other types of advances be stopped? After all, with the possibilities offered by AI, many more people can walk the path of cybercrime. Countless new opportunities have emerged that lower the barrier to setting up and running campaigns.
Jansen of Tesorion rightly notes that this creates opportunities for less skilled people. “Among other things, in the field of malware development. If you know how to use AI, you can do a lot more today than you could a decade ago. By asking the right questions and giving the right input, generative AI can code for you,” Jansen said.
Some discussion arises at the table on this point. A generative AI application can write code that looks good at first glance, but on closer inspection it often contains flawed pieces. That has to do with the model’s input. This is likely to improve in the coming years, however, and therein lies the danger: in the long run, the barrier to generating a devastating attack will become much lower.
Speed increases
Regardless of how flawed parts of the code may be, malware can now be produced much faster. Palo Alto Networks’ De Jong notes that the time required to develop ransomware has dropped dramatically in just a few years. Whereas in 2021 it took at least 12 hours, by 2026 it is expected to take only three. Again, the quality of the code can be questioned, but as with the lower barrier to entry, it potentially leads to more attacks.
De Jong therefore wonders what the impact of this accelerated ransomware development will be. “Suddenly, someone with no development experience can create ransomware within three hours. When you as a company then try to decrypt your files, it becomes very difficult because the software is full of bugs. You become a victim and have to negotiate to get your data decrypted. Then it turns out the software doesn’t work properly and the attacker lacks the necessary knowledge, which makes releasing the files difficult,” De Jong predicts.
Perfect storm
In this regard, most of the security experts at the table agree that we are still at the beginning and that the potential of AI in cyber attacks is far from fully exploited. Just as companies are now mostly experimenting with artificial intelligence in order to make the most of it in the years to come, so too are hackers. On underground forums, criminals are trading rogue AI services. These include chatbots that look much like ChatGPT but have the ethical restrictions removed. Such a chatbot no longer has any moral boundaries and willingly collaborates in developing malware.
Molen of Trend Micro sees a new business model emerging within the hacker community. Where ransomware-as-a-service appeared a few years ago, AI services that hackers can use as tools are now increasingly on offer. “These AI services support the execution of an attack. It is a trend within the community to use other people’s services, giving everyone their own specialty. It lowers costs and increases effectiveness,” Molen said. He adds: “In addition to using AI to support attacks, the use of AI also introduces security risks. One example is an employee putting confidential information in the prompt of a public generative AI service. That data is then effectively made public, and a data breach occurs.”
Unique insights
This brings us to the point that AI offers organizations both opportunities and risks, just as it does for hackers. The experts have already given examples of the new dangers, but there are other risks to consider. Employees create accounts for every new AI tool and then share data with them. Only one of those accounts needs to be hacked to give cybercriminals access to a mountain of new data they can use for their attacks.
In fact, according to Huybregts of Zscaler, hackers are delighted that companies are using open-source models within their development teams. Teams are deploying GPT-like models en masse to speed up the app development process. “It potentially gives malicious actors unique insights into the organization. After all, the AI always needs to know everything, which gives hackers opportunities to get at the right data. And I think hackers have unique revenue opportunities, for example with malware-as-a-service and red teaming,” Huybregts foresees.
Also an internal danger
The extent of AI’s external influence on attacks cannot be separated from existing internal threats. Loumite of NinjaOne is adamant about this: the internal danger is greater than the external one. A degree of awareness is certainly needed, because access to AI has expanded to every employee in a short period of time. But do employees know how to use AI responsibly? If they are not aware of what information is being processed and whether that fits within the organization’s rules, the potential productivity gains of AI may come at a hefty price. AI then unintentionally becomes an internal threat.
“One organization I work with used AI to solve a firewall problem,” Loumite illustrates with a real-life example. “The AI solved the problem perfectly: by completely removing the firewall. When the organization was hacked a month later, they discovered the firewall was gone. People had no idea what the AI had done. It comes down to understanding how to use AI safely and efficiently; otherwise, AI can be very dangerous.”
Translating knowledge about threats into defense
Understanding AI’s external and internal influences provides a good basis for the steps you can take. After all, if you want to build a stronger defense, knowledge about the threats is essential. Based on the points raised by the roundtable participants, we may conclude that AI is already a threat. Some parties see it as a minor threat, while others already see it as a greater danger.
Either way, it seems we are only at the beginning. Or, as the title of this article suggests, a fire that may yet grow into a wildfire. A wildfire that can hopefully be contained with the right preventive measures and quickly extinguished should it get out of control.
This was the first story in our series on AI and cybersecurity. In the next article, we will take a closer look at AI in security solutions.
Also read: The security platform: what is it and what does it deliver?