
ReliaQuest brings autonomous cybersecurity a step closer with AI Agent


More than 15 years of ReliaQuest’s experience comes together in AI Agent, which handles everyday Tier 1 and Tier 2 tasks.

AI agents are everywhere these days. In the past few weeks alone, we saw them pop up at Salesforce, Workday and Google. Today ReliaQuest is adding one as well. This AI Agent, however, is focused on Security Operations. That is far more fundamental for organizations than agents that assist with customer or employee contact. After all, this is about the security and safety of the organization itself. Letting autonomously operating agents make decisions here is a big step for many organizations.

In an ideal world, AI agents like ReliaQuest’s would not be needed. In such a world there would be so few cyber threats that humans could handle them just fine. The infrastructure would also be simple, and it would be completely clear where to find the data needed to handle an alert quickly. We do not live in that world, however. The number of threats keeps growing rapidly, there is a shortage of security staff, and many organizations run quite complex infrastructures. Some degree of autonomy is therefore increasingly necessary, and that calls for reliable AI agents.

ReliaQuest AI Agent

With the addition of AI Agent to its GreyMatter platform, ReliaQuest says it is the first to introduce such an AI agent. Given the range of security solutions on the market, that is difficult for us to verify; we cannot rule out that a start-up somewhere in Israel is doing the same. SentinelOne seems to be doing something along similar lines with the addition of Purple AI to its Singularity Platform. The two are actually rather different, but there is also some overlap, as far as we can tell right now. And conceptually, both are trying to solve the same problem.

Whether or not ReliaQuest has released something completely new is not really that relevant. What matters most in this article is what the new AI Agent actually is. To find out, we spoke briefly with ReliaQuest’s Brian Foster, President of Product and Technical Operations at the company, which was founded in 2007.

Turning human expertise into AI

Foster cites the company’s 17 years of existence as a major reason it could develop the AI Agent. “Without the experience we have gained in that time, we could not have developed the AI Agent,” he states. AI Agent draws on all the expertise built up over that period, expertise built up by people.

At ReliaQuest, the result of all this expertise is called the Cyber Analytics Methodology, and this methodology underlies the AI Agent. Part of it is a so-called planner, which has access to a large number of small tools. Each tool “solves a finite problem,” in Foster’s words. That is, ReliaQuest has divided Security Operations into many small problems, each consisting of a fixed set of components. For each problem, a tool is available to the AI Agent.
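ReliaQuest has not published the internals of this planner, but the pattern Foster describes can be sketched roughly as follows. All class, tool and field names here are illustrative assumptions, not ReliaQuest’s actual implementation.

```python
# Hypothetical sketch of a planner-and-tools pattern: a planner breaks an
# alert into small, finite sub-problems and dispatches each to a dedicated
# tool. Names are invented for illustration.
from typing import Callable, Dict, List

class Planner:
    """Dispatches each step of a plan to a single-purpose tool."""

    def __init__(self) -> None:
        self.tools: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, tool: Callable[[dict], dict]) -> None:
        self.tools[name] = tool

    def handle_alert(self, alert: dict, plan: List[str]) -> List[dict]:
        # Each step solves one finite problem; results accumulate
        # into a report the agent can reason over afterwards.
        return [self.tools[step](alert) for step in plan]

# Example tools, each solving one narrow problem.
def lookup_similar_incidents(alert: dict) -> dict:
    return {"step": "similar_incidents", "matches": 3}

def check_threat_intel(alert: dict) -> dict:
    return {"step": "threat_intel", "known_ioc": False}

planner = Planner()
planner.register("similar_incidents", lookup_similar_incidents)
planner.register("threat_intel", check_threat_intel)

report = planner.handle_alert({"id": "A-1", "type": "phishing"},
                              ["similar_incidents", "threat_intel"])
print(report)
```

The point of the pattern is that each tool stays small and testable, while the planner decides which ones to chain together for a given alert.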

As an example of such a tool, Foster gives the search for artifacts of specific attacks. The AI Agent can search ReliaQuest’s GreyMatter platform for incidents similar to what it has found and draw conclusions from them. Incidentally, this search and inference does not rely solely on trendy technology such as GenAI and LLMs; it also uses basic queries. That makes sense, because AI does the same thing under the hood: when you submit a request to a GenAI tool, it translates that request into a specific query language.
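That translation from an analyst’s request to a structured query can be illustrated with a deliberately simple sketch. The keyword mapping and query syntax below are invented for the example; they are not GreyMatter’s actual query language.

```python
# Illustrative only: mapping words in an analyst's request onto clauses of
# a basic structured query. The syntax here is invented for the example.
def build_artifact_query(description: str) -> str:
    """Turn a free-text request into a simple AND-joined query string."""
    keyword_to_clause = {
        "phishing": "category:phishing",
        "powershell": "process:powershell.exe",
        "lateral": "tactic:lateral_movement",
    }
    clauses = [clause for word, clause in keyword_to_clause.items()
               if word in description.lower()]
    # Fall back to a match-all query when nothing maps.
    return " AND ".join(clauses) or "*"

q = build_artifact_query("Find phishing incidents involving PowerShell")
print(q)  # category:phishing AND process:powershell.exe
```

A real system would use an LLM or a parser rather than a keyword table, but the output is the same kind of thing: a plain query that ordinary search infrastructure can execute.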


ReliaQuest

ReliaQuest is somewhat difficult to put into a standard pigeonhole. It is a Security Operations platform and focuses exclusively on large organizations. It offers MDR services, but it does not call itself an MDR provider. Furthermore, the company does not use a central location where all security data must go before anything can be done with it. The data can remain in the security tooling that generates it; ReliaQuest handles it in a so-called federated manner. Organizations can continue to use their existing security tooling, so no huge new investments need to be made.

One of the main features that sets ReliaQuest apart from the rest, according to Foster, is that it is 100 percent transparent. That also goes for the MDR services the company offers. He says it makes ReliaQuest special in the market: “We are 100 percent transparent, not a black box like the others. You can see everything that goes on in the platform and you can intervene when you want to.” The same goes for the AI Agent discussed in the body of this article. It is completely transparent as well, something that is crucial for the combination of cybersecurity and AI. If anything, trust is even more important here than in other environments.

Lighten the load for SOC employees

Breaking down large queries into many small steps is the main differentiator of AI Agent’s agentic approach, according to Foster. Only then can an agent make decisions independently, because that is ultimately the intention. The analysis of the problems it finds must also be done by the agent itself. In other words, the AI Agent does not bother people with the analysis of something it has found; the idea is that it can do that just fine on its own. For this it uses not only analyses done in the past, but also external threat intelligence, for example. In addition, people at ReliaQuest constantly and actively search for new threats, so the AI Agent is not limited to detecting existing, known cases.

With the AI Agent, ReliaQuest wants to take Tier 1 and Tier 2 analyses out of the hands of SOC employees. They can get to work with the results after the AI Agent has done its job. Because this component is also completely transparent, employees can properly assess the AI Agent’s analyses and then take immediate action.

Autonomy is the future

Foster also expects the AI Agent to perform some automated actions early next year. That is where the security industry as a whole needs to move anyway: with only human SOC employees, it is not possible to keep organizations secure. That is easier said than done, Foster acknowledges: “Companies are nervous about automated actions.” However, he also sees the number of automated actions increasing by 200 percent per quarter, so that attitude among organizations seems to be shifting.

Of course, there will be cases where it will take a long time before an autonomous agent/assistant/analyst is allowed, and able, to perform them. It is even debatable whether that is desirable for every task. But plenty of everyday things can be automated just fine. As an example, Foster gives resetting the password of a user who has clicked on a phishing link. In principle that is not hugely disruptive, but doing it in an automated way lets you “take big steps,” he expects.
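The kind of low-risk automated action Foster mentions can be sketched as a simple playbook step. The event fields and the identity-provider client below are hypothetical; this is not ReliaQuest’s or any vendor’s actual API.

```python
# Sketch of an automated response rule: if a user clicked a phishing link,
# force a password reset. The IdP client is a stand-in for illustration.
def respond_to_phishing_click(event: dict, idp) -> str:
    """Automated playbook step: reset credentials on a phishing click."""
    if event.get("type") != "phishing_link_click":
        return "no_action"
    idp.force_password_reset(event["user"])  # hypothetical IdP call
    return f"password_reset:{event['user']}"

class FakeIdP:
    """Test double standing in for a real identity provider."""
    def __init__(self):
        self.resets = []
    def force_password_reset(self, user):
        self.resets.append(user)

idp = FakeIdP()
outcome = respond_to_phishing_click(
    {"type": "phishing_link_click", "user": "j.doe"}, idp)
print(outcome)  # password_reset:j.doe
```

The guard clause is the point: the rule only fires on one narrowly defined, low-impact event type, which is exactly what makes an action like this a candidate for automation.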

Big steps are needed if organizations and the security industry as a whole do not want to be inundated with attacks and alerts. Viewed in that light, ReliaQuest’s announcement is a very interesting one. No doubt many security players will follow, so the AI agent race seems to have begun in this part of the market as well. We are especially curious to see when the autonomous component will really make its appearance. Technically, a lot already seems possible. If organizations’ attitudes also fundamentally change, things could move fast. There are encouraging signs, but history teaches us that this could take a long time.