Artificial intelligence is going to shake up security. RSA CEO Rohit Ghai stated at the RSA Conference that it is becoming necessary to give AI more and more responsibility for securing digital environments, particularly because IT security could otherwise be undone by that same technology.

At the annual RSA Conference, one can always hear about the latest developments in cybersecurity, but this time the security company expects a major transformation of the industry. Although new lines are drawn every year in the fight against cybercrime, the function of so-called identity tech has remained the same. Still, Ghai used three waves of technology to explain how security engineers have seen their job change.

First, the security industry went through the Internet wave, in which connectivity was naturally central. Then came the mobile-cloud wave, which magnified the importance of convenience: users today need to access the applications that define an organization on all kinds of devices and from all kinds of locations. Increasing digitalization has accelerated this change. Finally, there is the AI wave, which is currently rolling in.

Many hats

As a security company, it is critical to respond to the changing world of cyber environments. We see this in the wide variety of AI applications. Mark van Leeuwen, Palo Alto Networks' country manager for the Netherlands, already shed light on the value of artificial intelligence and machine learning in late 2022. Specifically, he said that the proper application of AI enables a Security Operations Center (SOC) to secure an organization more efficiently and with fewer people. How does RSA view this?

RSA wears many hats. At its conference, it talks about the landscape of cyberattacks in 2022, explains the value of AI for its products and cites ties to government agencies. It is not, in other words, an advisory body or nonprofit that can judge security measures from a neutral position. Still, it is a fair point that AI is going to change identity technology, requiring companies like RSA to be ready to protect organizations.

According to Rohit Ghai, this means a paradigm shift is coming. Multi-factor authentication (MFA) is not impervious to hacks, and the approach is ultimately quite simple: at every access attempt, a system asks itself a single question: should I grant access here? That is outdated, Ghai said. Instead, this yes/no question should be replaced with a "yes, because…" or a "no, not now". In short, security systems should grant access in a dynamic way. Why not simply keep humans at the controls instead of hard-to-grasp AI algorithms?
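To make the idea concrete, here is a minimal, hypothetical sketch of such a dynamic, context-aware access decision. The signals, weights and thresholds are invented for illustration and do not come from RSA; a real risk engine would weigh far more signals, likely with a learned model.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    """Signals a risk engine might weigh for one access attempt (illustrative)."""
    known_device: bool
    usual_location: bool
    off_hours: bool

def decide(ctx: AccessContext) -> str:
    """Return a dynamic, self-explaining decision instead of a bare yes/no.

    Hypothetical scoring: each risky signal adds weight, and the answer
    takes the form "yes, because..." or "no, not now: ...".
    """
    risk = 0
    reasons = []
    if not ctx.known_device:
        risk += 2
        reasons.append("unrecognized device")
    if not ctx.usual_location:
        risk += 1
        reasons.append("unusual location")
    if ctx.off_hours:
        risk += 1
        reasons.append("off-hours access")

    if risk == 0:
        return "yes, because device, location and time all match the usual pattern"
    if risk <= 2:
        return "yes, but step up verification: " + ", ".join(reasons)
    return "no, not now: " + ", ".join(reasons)
```

The point of the sketch is the shape of the output: the system grants or withholds access with a reason and a moment attached, rather than answering a static yes/no at every attempt.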

‘Good AI’

Not surprisingly, cybercriminals are licking their chops at AI. The applications are broad and can be quite innovative: phishing emails can sound more convincing with ChatGPT's help, and malware equipped with AI will be able to map out how a corporate network works faster. It is more a matter of how creative criminals are than of what their tools are theoretically capable of, because the possibilities of AI are hard to fathom. Especially since the technology is developing at a rapid pace.

Ghai showed with a simple example why a SOC cannot keep up with monitoring identity-tech attacks on its own. After all, it takes an organization an average of 277 days to contain a data breach: those attacked on New Year's Day will have things somewhat back in order by Oct. 4. Here, then, RSA sees a gap that AI can fill.
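The arithmetic checks out: counting New Year's Day itself as day one, day 277 of a non-leap year falls on October 4, as a quick check with Python's standard datetime module shows.

```python
from datetime import date, timedelta

# Day 1 is January 1 itself, so containment on day 277
# means 276 further days pass after the attack.
attacked = date(2023, 1, 1)  # any non-leap year works
contained = attacked + timedelta(days=276)

print(contained)                      # 2023-10-04
print(contained.timetuple().tm_yday)  # 277
```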

Cybercriminals on one side, an organization's security officers on the other: both groups will increasingly use AI in the future. According to Ghai, the job of legitimate engineers is to develop "Good AI" that fights "Bad AI". To accomplish this, artificial intelligence should be more than a "copilot", the assistant role currently attributed to AI in all kinds of applications. Think of helping put together computer code or using chatbots for creative input on writing work. Ghai expects humans to gradually retreat from this area, however. RSA and other security companies have an interest in ensuring that AI continues to be used for protective purposes.

What next?

A key question remains: what do we do in the here and now? We are on a development curve of AI, but we don't know where we are on that curve. That uncertainty fuels initiatives such as calls to pause AI development until there is global legislation on the issue. The reality is that such developments rarely let themselves be curbed: those who can profit from further innovation will do everything they can to accelerate that progress.

Yet the battleground has not yet moved entirely to AI. Most successful cyberattacks are the result of human error: think of overlooked security vulnerabilities that criminals exploit, or of "social engineering" such as fooling an employee by email. Common security methods such as MFA are a prominent target. RSA therefore argues that MFA is merely a good first line of defense, to be backed up by stronger methods as needed. The company speaks of an "identity lifecycle," in which the seams between identity solutions are vulnerable. While security experts worldwide work to plug every possible leak, to err remains human. Herein lies the opportunity for AI to flourish.

In conclusion, RSA has taken a look at the future and seen that AI is going to cause a paradigm shift. Where the Internet and cloud waves called for connectivity and ease of use, the AI wave will center on security. The key is to head cybercriminals off at the pass by letting artificial intelligence make choices about who gets access to a system, and, more importantly, why (not) and when (not).

Also read: SentinelOne deploys generative AI on cyber detection platform