The drumbeat around AI in the world of cybersecurity has been in full swing in recent years. This market, too, is eager to get a piece of the pie now that AI in general has surged in popularity since the widespread introduction of GenAI. But what can organizations expect? And what is important to know? We asked Robert Tom, systems engineer at Fortinet Netherlands.
We were in San Francisco earlier this year at the annual RSA Conference. It wasn’t very difficult to pinpoint an overarching theme for the show as a whole: that was unmistakably AI. We have been in this industry for a while now, and AI has been a theme for quite a few years, but we have never seen it as prominent as it was this year. Every party present had something to say about it: how it is part of the security tooling that organizations can deploy, but also how it is used by attackers.
Looking at this development from a somewhat cynical perspective, you could say that all this attention has a primarily commercial angle. AI sells, so everybody jumps on the bandwagon. That will undoubtedly be part of the story. However, there is also no denying that developments, particularly since the widespread deployment of GenAI, have brought a lot of change to cybersecurity in general.
How is AI changing cybersecurity?
Exactly what these changes are and what role AI plays in them is not entirely clear. That in itself is not surprising, as the development of AI is very difficult to predict. As Tom puts it, “AI doesn’t develop in a linear fashion.”
Still, it is puzzling when one vendor talks about a tidal wave of AI-based attacks causing many casualties, while another sees virtually nothing of it yet in its own SOC and points mainly to basic hygiene, which is still often the biggest point of concern. The two do not necessarily exclude each other, of course, but it is confusing for organizations that want an update on the state of the market.
Tom takes a fairly down-to-earth look at the impact of AI on cyber attacks: “How do you know when an attacker is using AI? Perhaps deepfakes are an exception here but otherwise attribution is almost impossible. If I use AI to write malware, how do you know that was done with AI?”
One might further wonder how relevant it is to know whether AI played a role in creating a cyber attack. If AI primarily makes it easier and faster for attackers to write malware, that doesn’t necessarily say anything about its quality and thus the risk. You may face more attacks, but they need not be more dangerous than attacks with malware written by a human. For deepfakes, detecting the use of AI actually is important. Especially as they get better and more dangerous, it is critical that attribution can be established.
AI is not new, not even in cybersecurity
GenAI has ensured that virtually everybody everywhere talks about AI, including in cybersecurity. It is then often more or less conveniently forgotten that AI in itself is nothing new. Tom also wants to make that point with regard to cybersecurity: “AI and cybersecurity have been inextricably linked for more than ten years. AI is an umbrella term that has been used casually since the explosion of GenAI and mostly causes a lot of confusion. Without turning this into a history lesson: machine learning and neural networks have been widely usable since the early 2010s. That was also the moment of adoption within a lot of cybersecurity solutions, including Fortinet’s.”
However, Tom also sees that the relatively recent introduction of GenAI has caused a marked change in the market. Many security providers are equipping their solutions with GenAI capabilities, which means those solutions are becoming assistants to SOC analysts. It doesn’t stop there, however, he expects: “Looking a little further ahead, I expect that where we are now massively deploying AI as an agent/assistant, this will slowly change to AI that can eventually operate completely autonomously based on objectives.” This will also be necessary, we expect, because it will become increasingly difficult to accurately repel every attack with human effort alone. There are simply too many attacks for that.
What should organizations look out for when deploying AI?
It may be tempting for organizations to always opt for as much AI as possible in security solutions. However, Tom has his reservations. “AI is not a seal of quality,” he states clearly. AI still needs to be trained, and that requires data.
According to Tom, it is important first of all to be clear about what you want to achieve by deploying AI, and then to systematically test and assess the results. To illustrate, he uses the following analogy: “When you go out to eat in a restaurant, you don’t run into the kitchen to check which oven is being used before you eat. You judge the result: what ultimately ends up on your plate. That’s what counts.”
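To make that “judge the plate, not the kitchen” idea a bit more concrete, here is a minimal sketch, not tied to Fortinet or any specific product, of what outcome-based testing of an AI detection feature could look like: replay a small labelled set of events and score what the feature actually flags. The event IDs and labels are made up for the example.

```python
# Illustrative sketch: evaluate an AI detection feature by its output on labelled
# test data, rather than by how it works internally. All data here is hypothetical.

def evaluate_detections(alerts: set[str], malicious: set[str], benign: set[str]) -> dict:
    """Score what the feature flagged against known ground truth."""
    true_pos = len(alerts & malicious)      # real threats that were flagged
    false_pos = len(alerts & benign)        # benign events flagged anyway
    false_neg = len(malicious - alerts)     # real threats that slipped through

    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    return {"precision": precision, "recall": recall, "false_positives": false_pos}


# Hypothetical replay of a small labelled test set.
alerts = {"evt-2", "evt-3", "evt-7"}            # what the AI feature flagged
malicious = {"evt-2", "evt-3", "evt-5"}         # known-bad events in the replay
benign = {"evt-1", "evt-4", "evt-6", "evt-7"}   # known-good events

print(evaluate_detections(alerts, malicious, benign))
# precision and recall of roughly 0.67 each, with 1 false positive
```

The point is not the specific metrics, but that the judgment happens on results an organization can verify itself.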
Furthermore, it is good to realize that AI is not, and never has been, a magic solution to everything, Tom points out. While it is very powerful, it does not necessarily help organizations solve every problem. “We are not vulnerable because we are using AI” is not the right attitude, according to him.
Finally, Tom elaborates on the use of external AI models, for example the public LLMs that many of our readers are no doubt familiar with. “For business applications, it is important to have a good understanding of what happens to your data,” he indicates. That may sound like stating the obvious, but it is quite fundamental. The neural network behind an LLM learns from data, but never “forgets” that data afterwards. “It is important to understand that neural networks are not databases from which you can erase a few fields,” he states.
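One practical consequence of that observation, sketched below purely as an illustration and not as a Fortinet or vendor-specific mechanism, is screening prompts before they leave the organization: once data has been absorbed by an external model, there is no field to delete. The patterns here are deliberately simplistic placeholders.

```python
# Illustrative sketch: redact obviously sensitive data before a prompt is sent to a
# public LLM, because a model that learns from the input cannot later "forget" it
# the way a database row can be deleted. Patterns are simplified examples only.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b", re.I),  # email addresses
    re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),                          # card-like numbers
]

def redact(prompt: str) -> str:
    """Replace anything that looks sensitive before the prompt leaves the network."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact("Customer jan@example.com reported card 4111 1111 1111 1111 declined."))
# -> "Customer [REDACTED] reported card [REDACTED] declined."
```

In practice, organizations combine this kind of filtering with contractual guarantees about whether a provider trains on their input at all.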
How is Fortinet deploying AI?
In recent years, it seems to be mostly the young security companies that command all the attention when it comes to AI. However, somewhat older companies like Fortinet should not be underestimated in this regard either. In fact, many of those companies have been using AI for far longer than the younger ones have existed at all. This is also what Tom points out: “Fortinet has been at the forefront of innovation in AI and ML for more than a decade,” he states.
Furthermore, he also wants to make clear that “virtually every Fortinet product, whether it’s a firewall, endpoint, cloud or other solution,” is powered in real time by the output of AI. Finally, Fortinet also deploys AI locally, in products such as EDR, NDR, sandboxing, WAF “and many more,” according to him. The Security Fabric immediately springs to mind in this regard. “The use of AI in the Fortinet Security Fabric is very important and helps in detecting ‘zero-day’ threats and also in recovering from advanced attacks, among other things,” he indicates.
The AI we discuss above is all “old” AI. That is, the AI we knew before GenAI became commonplace. That does not make it inferior or less important than GenAI, by the way. Developments will certainly continue in this field of AI. Especially as we move toward autonomous platforms within cybersecurity, the various forms of AI will actually have to continue to develop together.
Fortinet has also already made considerable strides in the field of GenAI, Tom tells us. Improving ease of use, speed of action and fault tolerance are key. That in itself is not very surprising, because that is what GenAI adds in general, in solutions from other vendors too. At Fortinet, you’ll find it in the SIEM and SOAR products, assisting the SOC analyst. Fortinet’s scale, with more than 40 AI-driven solutions by now, Tom points out, makes the impact GenAI can have on the platform as a whole greater than it is for many other players in the market.
Is the AI-driven future autonomous?
Finally, we take a brief look at the future with Tom: the future of cybersecurity in general and of Fortinet in particular. We have already dropped the term “autonomous” a few times in this article. Complete autonomy in cybersecurity is another door that AI has the potential to open. At the conferences we attend throughout the year, it is also often a topic of conversation.
Tom obviously sees this as well: “On the one hand, it is precisely the use of AI in cyber attacks that ensures there is no longer time to react manually before damage has occurred. On the other hand, it is AI that ensures the response to so-called ‘AI-augmented attacks’ is quick and effective and prevents damage.” This is a cat-and-mouse game that attackers and defenders already play autonomously today. Automation, he says, is no longer optional, but simply necessary. That will only increase in the future.
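The core of that automated response can be illustrated with a minimal sketch: a high-confidence detection triggers containment at machine speed, while lower-confidence cases still go to an analyst. The names, threshold and scoring are our own hypothetical examples; real SOAR playbooks, Fortinet’s or anyone else’s, involve far more context and safeguards.

```python
# Minimal sketch of the idea behind automated response, with made-up names and values.
from dataclasses import dataclass

@dataclass
class Detection:
    host: str
    technique: str
    confidence: float  # 0.0 - 1.0, as reported by a hypothetical detection engine

AUTO_CONTAIN_THRESHOLD = 0.9  # above this, respond without waiting for a human

def respond(detection: Detection) -> str:
    """Decide between autonomous containment and escalation to an analyst."""
    if detection.confidence >= AUTO_CONTAIN_THRESHOLD:
        # In a real playbook this step would call an EDR or firewall API to isolate the host.
        return f"quarantine {detection.host} (automated, {detection.technique})"
    return f"escalate {detection.host} to SOC analyst for review"

print(respond(Detection("srv-db-01", "credential dumping", 0.97)))
print(respond(Detection("wks-142", "suspicious PowerShell", 0.55)))
```

The threshold is where human judgment remains essential: set it too low and automation causes its own outages, set it too high and the speed advantage disappears.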
However, autonomy also comes with a very important precondition: it is only possible if the components of an organization’s environment integrate well with each other. In theory, you could set this up using many separate point solutions, but a solid security platform should be able to do it better, and also make it easier. The entire market is moving in that direction. At Fortinet, this is the Security Fabric, which is essentially Fortinet’s security platform. With it, the company wants to protect environments in an integrated and automated way, using both GenAI and the AI it has been deploying and developing for more than a decade.
Also read: Modern organizations can no longer do without a security fabric