In the security world, there is much talk about artificial intelligence’s impact. But what is AI doing on the attackers’ side? How is it being deployed within security products designed to make companies more secure? Is it also possible to secure the AI used within organizations? Techzine takes a closer look.
We have addressed each question in a separate article featuring experts from the field. They previously participated in a Techzine roundtable discussion. The participants are: André Noordam of SentinelOne, Patrick de Jong of Palo Alto Networks, Daan Huybregts of Zscaler, Joost van Drenth of NetApp, Edwin Weijdema of Veeam, Pieter Molen of Trend Micro, Danlin Ou of Synology, Daniël Jansen of Tesorion and Younes Loumite of NinjaOne. This latest article focuses on the secure use of AI within the organization. What can companies do to deploy AI safely and effectively without running major risks?
Also read the first story in our AI and cybersecurity series, which gives an accurate picture of the current state of attacks, and our second story on the impact of AI on cybersecurity.
A double-edged sword
Organizations find AI highly attractive, especially now that several companies are demonstrating that they are reaping its benefits. Many are still struggling with where to deploy it, but it can be useful in customer support, for example, or in the simple daily tasks of business users. Moreover, the technology is expected to become even more valuable. Many companies are therefore choosing to implement AI quickly so as not to miss the boat. Ignoring the technology, or failing to understand it, can cause you to fall behind the competition: you miss out on the additional productivity and problem-solving it offers. On the cyber front, it can even make your business less secure, because you are not using the latest technology.
“The biggest mistake an organization can make is ignoring AI,” Trend Micro’s Molen points out. At the same time, he says that however logical embracing AI may seem, it comes with risks and challenges, just like any other innovation. Molen therefore figuratively compares artificial intelligence to a double-edged sword. Or, to extend the metaphor: “If you use the knife correctly it’s perfect for cutting cheese, but if you use it incorrectly, it can cause serious injury.” Molen’s comparison applies broadly, including to cybersecurity. Using AI correctly within a security strategy provides significant benefits, but artificial intelligence also has the potential to make attackers more efficient.
Confidence in AI tools
When you look at where the current implementation struggle comes from, it is primarily caused by the new capabilities. AI has been applied across the IT landscape for decades, but as complexity has increased, models can do far more. The outcomes are much better than before precisely because of that advanced nature. Even relatively small models consist of billions of parameters.
Veeam’s Weijdema acknowledges the complexity, which, as far as he is concerned, also requires a new approach. Indeed, he wonders how people can still understand models in detail. “Even a small AI model contains 6.4 billion lines. We need AI to understand AI,” Weijdema said. Because of this complexity, companies may use AI tools they cannot fully understand, increasing the risk of error and misuse. An additional challenge is the trust companies must place in AI systems without being able to verify how the tools work. Weijdema advocates a balanced approach: “Use AI but be careful. Trust, but verify.”
Ensuring proper digital hygiene
Building verification mechanisms and validation processes is partly a technological matter, but there is also a human factor: human oversight remains essential. How can you ensure that the output of models is correctly interpreted and, if necessary, corrected? This is especially true in security applications, where incorrect output can have disastrous consequences. Technical guardrails help, but employees are the last safety net for filtering out erroneous output.
At the table, André Noordam of SentinelOne notes that humans remain a weak link. If employees exhibit unsafe behavior, it is questionable whether investing in AI systems makes sense at all. Noordam: “You can have a very secure AI, but if employees use unsecured AI tools at home, they can bypass all security.” With this remark, Noordam illustrates that companies need to invest not only in technology but also in staff training and awareness.
Digital hygiene plays a key role here. Employees need to know the risks they face when sharing data and how to use AI tools safely. Without proper training, an employee may accidentally disclose sensitive data or use an unsecured AI tool, exposing the organization to cyberattacks.
The safety net: mix of tech and people
For Danlin Ou of Synology, it’s clear that we need to find the right balance. What can you do with AI, and where can technology set the boundary? And what responsibility lies with the employee? Ou points to the practical application in customer support. A chatbot can answer frequently asked questions in level 1 and level 2 support. Based on keywords, it quickly searches knowledge bases. A correct output usually follows, but it can also share irrelevant information. For now, it’s up to the employee to filter that out. But, Ou observes, plenty of control is possible at the technological level as well. “As with MFA, for example, we can apply pop-ups. Such a pop-up then shows that the user is sharing sensitive information. In the pop-up, he then verifies whether he wants to share the data,” Ou said.
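To make Ou’s pop-up idea more concrete, here is a minimal sketch in Python of how such a check could work: an outgoing prompt is scanned for patterns that look like sensitive data, and the user is asked to confirm before anything is shared. The patterns and the confirmation flow are illustrative assumptions, not Synology’s or any other vendor’s actual implementation.

```python
# Minimal sketch: warn the user before a prompt containing sensitive-looking
# data is sent to an external AI tool. Patterns are illustrative assumptions.
import re

SENSITIVE_PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def confirm_before_sending(prompt_text: str) -> bool:
    """Return True only if the prompt looks clean or the user explicitly confirms."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt_text)]
    if not findings:
        return True
    answer = input(f"This prompt appears to contain: {', '.join(findings)}. "
                   "Share it with the AI tool anyway? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    prompt = "Please summarize this invoice for client jan@example.com"
    if confirm_before_sending(prompt):
        print("Prompt forwarded to the AI tool.")
    else:
        print("Prompt blocked.")
```

In practice such a check would sit in a browser plugin or gateway rather than a script, but the principle is the same: the employee is shown what is about to leave the organization and makes the final call.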
Jansen of Tesorion also sees a vital role for human awareness in the safe application of AI within organizations. Technology can create protective barriers, but employees may unknowingly copy data to unsafe places or make the wrong choices. He notes that there will always be employees who lack sufficient knowledge about AI risks or do not realize the consequences of a small mistake. “If employees don’t know what they can do safely, the risk remains,” Jansen warns. He emphasizes that education and training on AI safety are essential before technological measures become effective.
Organizations can create a culture of safety for this purpose. AI is then seen as a tool to be used carefully. For this, you can adopt a step-by-step approach where companies start with employee awareness programs, followed by the implementation of technical guardrails. This gives organizations a solid foundation for integrating AI safely without employees unknowingly increasing data risks. Training and technology must work together to ensure a safe environment in which AI contributes positively.
Sandboxing as a solution
Creating awareness may be the priority here, but progress also needs to be made on the technology front. Loumite of NinjaOne sees merit in isolated, controlled sandbox environments in which data is used. Sandboxing prevents AI models from accessing sensitive information or spreading it to unauthorized parts of the organization. “For confidential data, you need sandboxing, and you need to make sure that data is used in an environment with the least possible impact,” Loumite said.
Such a protected environment also shields data from potential vulnerabilities. This type of security helps AI work quickly and securely, so that as a company you don’t fall behind in the AI race. “Because if you don’t use AI, you lose out on efficiency to competitors who do use it well,” Loumite observes.
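As an illustration of the sandboxing Loumite describes, the sketch below runs an AI job inside an isolated container with no network access and read-only data, so confidential input cannot leave the controlled environment. The image name, paths, and job are hypothetical placeholders; this is one possible way to realize the idea, not NinjaOne’s implementation.

```python
# Minimal sketch: run an AI workload in a locked-down container so confidential
# data stays inside the sandbox. Image name and paths are placeholders.
import subprocess

def run_ai_job_sandboxed(image: str, data_dir: str, command: list[str]) -> str:
    """Run `command` inside an isolated container and return its output."""
    docker_cmd = [
        "docker", "run", "--rm",
        "--network", "none",               # no outbound connections: nothing leaks
        "--read-only",                     # container filesystem cannot be modified
        "--memory", "4g", "--cpus", "2",   # cap resources to limit impact
        "-v", f"{data_dir}:/data:ro",      # confidential data mounted read-only
        image, *command,
    ]
    result = subprocess.run(docker_cmd, capture_output=True, text=True, check=True)
    return result.stdout

# Hypothetical usage: classify documents with a locally hosted model image.
# output = run_ai_job_sandboxed("internal/doc-classifier:latest",
#                               "/srv/confidential", ["python", "classify.py", "/data"])
```

The essence is that the model gets exactly the data it needs, read-only and without a route to the outside world, so a misbehaving tool has “the least possible impact.”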
Precision through AI support
Taking the pitfalls into account, many roundtable participants agree that plenty of benefits can be realized. So does Joost van Drenth of NetApp. He sees AI as ideally suited for applications where time savings directly increase efficiency, such as radiology. “AI cannot replace a radiologist, but it does function at the level of a well-trained professional, saving them up to 70 to 80 percent of their time,” Van Drenth explains. He sees this not only as an efficiency gain but also as a way to support specialized professionals in their tasks. AI allows professionals to handle more cases without compromising the quality of their work.
In addition, Van Drenth sees a similar trend in security and data management. After an incident in which data must be restored, IT teams may be confronted with hundreds of restore points that have to be checked and restored manually. This is where AI can provide gains through suggestions and an automated recovery process. “Instead of spending hours or days making manual decisions, AI can support us with automated recovery options that match what we would otherwise do ourselves,” Van Drenth said. This shows the power of AI in areas where speed and precision are crucial: AI supports people in solving everyday problems more efficiently, but they remain ultimately responsible for the decisions.
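To illustrate the kind of recovery support Van Drenth describes, the sketch below scores restore points and suggests the most recent one that looks clean, leaving the final decision to a human operator. The RestorePoint structure, the anomaly scores, and the threshold are illustrative assumptions, not NetApp’s actual mechanism.

```python
# Minimal sketch: suggest the newest restore point that looks clean instead of
# having an operator inspect hundreds of candidates by hand. All values are
# illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RestorePoint:
    taken_at: datetime
    anomaly_score: float   # e.g. from a model watching entropy or change-rate signals

def suggest_restore_point(points: list[RestorePoint],
                          threshold: float = 0.2) -> RestorePoint | None:
    """Return the newest restore point whose anomaly score is below the threshold."""
    clean = [p for p in points if p.anomaly_score < threshold]
    return max(clean, key=lambda p: p.taken_at) if clean else None

points = [
    RestorePoint(datetime(2024, 6, 1, 2, 0), 0.05),
    RestorePoint(datetime(2024, 6, 2, 2, 0), 0.07),
    RestorePoint(datetime(2024, 6, 3, 2, 0), 0.91),  # likely already compromised
]
candidate = suggest_restore_point(points)
print(f"Suggested restore point: {candidate.taken_at:%Y-%m-%d %H:%M}"
      if candidate else "No clean restore point found; escalate to an operator.")
```

The suggestion saves the hours of manual comparison, while the operator still approves the restore, matching the division of labor Van Drenth sketches.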
Where can we deploy it?
Deploying AI for protection purposes requires high accuracy and stability, especially in the context of cybersecurity. Palo Alto Networks’ De Jong points to its Precision AI system. Here there is no room for hallucinations: the model is trained on a rich set of security data, so it can detect and prevent threats while keeping the margin of error minimal. This contrasts with typical generative AI, where results can vary or be incorrect. That carries risks, particularly when AI is used for critical decisions. “Precision AI must be very accurate, and margins of error are not permissible,” says De Jong. He points out that for protection against targeted attacks, no form of anomalous output is acceptable.
However, De Jong does see potential for generative AI in less critical support tasks, provided organizations are aware of its limitations. “With generative AI applications, errors can occur, but this is manageable as long as companies deploy it primarily for tasks where creativity and flexibility are desired. Think of generating reports or supporting customer service,” De Jong concludes. As far as he is concerned, however, such applications must remain well separated from critical business processes; the quality and reliability of the output remain crucial considerations.
Continuity and compliance
In the security world, then, plenty seems possible with AI. The security landscape changes constantly, often by the minute, which calls for a new kind of monitoring. Huybregts of Zscaler therefore points out that AI can monitor environments continuously. “Where previously there was a single check every few months, AI can continuously test against new standards and regulations,” Huybregts says. This keeps security as up to date as possible without manual intervention. That becomes particularly important when companies exchange data between cloud environments and physical networks, where continuity is crucial.
AI can, therefore, be the solution for tracking these changes and taking immediate action when anomalies are spotted. As companies increasingly face automated environments and more complex attacks, artificial intelligence can help identify risks faster and keep organizations compliant. Continuous monitoring ensures that the organization complies with new standards, no matter how quickly the environment changes.
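To make the continuous checking Huybregts describes more tangible, here is a minimal sketch in which a set of rules is evaluated against the current configuration on every run, rather than once every few months. The rules and the configuration snapshot are hypothetical examples, not Zscaler’s implementation.

```python
# Minimal sketch: evaluate the current configuration against compliance rules
# continuously instead of during a periodic audit. Rules and config are
# illustrative assumptions.
from typing import Callable

COMPLIANCE_RULES: dict[str, Callable[[dict], bool]] = {
    "storage must be encrypted": lambda cfg: cfg.get("encryption_at_rest", False),
    "MFA required for admins":   lambda cfg: cfg.get("admin_mfa", False),
    "logs retained >= 90 days":  lambda cfg: cfg.get("log_retention_days", 0) >= 90,
}

def evaluate(config: dict) -> list[str]:
    """Return the names of all rules the current configuration violates."""
    return [name for name, check in COMPLIANCE_RULES.items() if not check(config)]

# In production this would run on a schedule or on every configuration change
# and raise an alert; here we evaluate a single example snapshot.
snapshot = {"encryption_at_rest": True, "admin_mfa": False, "log_retention_days": 30}
violations = evaluate(snapshot)
print("Compliant" if not violations else f"Violations found: {violations}")
```

The AI layer the experts describe would go further, interpreting new standards and proposing rules itself, but the continuous evaluation loop is the foundation.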
Finding balance
All in all, AI offers plenty of opportunities for businesses, but it also brings new risks. Applying AI safely requires a balanced approach focused on technological innovation, training, and awareness. As the experts clarified during the roundtable discussion, companies must embrace AI cautiously. The technology can do more harm than good without clear rules and guidelines.