‘SOC of the future’ needs data management of the future (and today)

Splunk talks a lot about the ‘SOC of the future’, a radically different beast from the current SOC. One of the main reasons for this is the location of data within organizations: it is increasingly scattered across a variety of locations, from the edge to on-prem to the cloud. Proper management of that data may well be the key that enables a SOC of the future.

We’re attending .conf, Splunk’s annual event, this week. Naturally, there’s a lot of talk about Cisco’s recently completed acquisition of Splunk. We already published a meaty story last week about the first integrations between the two parties, so we won’t rehash that here.

However, there is certainly more to discuss this week than the Cisco-Splunk story. Splunk is also announcing some interesting new features and functionality for its own portfolio. A term Splunk uses a lot these days is ‘SOC of the future’, and that gets a lot of attention this week too. AI and GenAI naturally play a role in that, something we wrote an article about earlier. Another important step toward the SOC of the future is the integration of Cisco Talos into Splunk’s security products, such as Splunk Enterprise Security, Splunk SOAR and Splunk Attack Analyzer. With this integration, Splunk adds a large amount of threat intelligence to its products in one go.

However, what we want to talk about in this article is mainly the role that data plays in the SOC of the future. Its importance can hardly be overstated. Just as we regularly say that AI is primarily about data, the same can be said of cybersecurity. We are talking about securing the data, of course, but also about knowing where it is and accepting that it is no longer in a single location. In other words, data is on-prem, at the edge and in the cloud.

Not all data needs to go to a single location

Knowing and accepting this is step one; being able to respond to it effectively is step two. That is, it must be possible to do smart things with the data where it is. Continuously copying all data to a central location is not the answer in any case. That is not only very expensive (something Splunk was always pretty notorious for from a SIEM perspective, by the way), but also simply inefficient from an operational perspective.

In terms of dealing with distributed environments and data locations, Splunk has made some interesting strides. Federated Analytics is definitely one of them. In a nutshell, this means that the data does not have to come to the analytics; the analytics comes to the data. It is thus possible to analyze data without moving it. This works within the Splunk ecosystem today, but the idea is that it will also become possible outside of it. The first external environment where this is possible is Amazon Security Lake, and the ambition is to expand this further.
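
To make the push-down idea concrete, below is a minimal Python sketch of what ‘the analytics comes to the data’ means: the same query is shipped to each location and only the matching results travel back. All names and structures here are our own illustrative assumptions, not Splunk’s actual API.

```python
# Conceptual sketch of federated analytics: ship the query to the data,
# not the data to the query. Everything here is hypothetical.

from dataclasses import dataclass

@dataclass
class RemoteDataset:
    """Stands in for a data store that stays where it is (e.g. a security lake)."""
    name: str
    events: list  # in reality this would live remotely, not in local memory

    def run_query(self, predicate):
        # The filter executes at the data's location; only matching
        # results travel back to the analyst.
        return [e for e in self.events if predicate(e)]

# Two distributed locations, neither of which gets copied centrally.
edge = RemoteDataset("edge", [{"src": "10.0.0.4", "severity": "high"},
                              {"src": "10.0.0.9", "severity": "low"}])
lake = RemoteDataset("security_lake", [{"src": "172.16.1.2", "severity": "high"}])

# Federated query: fan the same predicate out, merge only the results.
high_severity = [hit for ds in (edge, lake)
                 for hit in ds.run_query(lambda e: e["severity"] == "high")]
print(high_severity)
```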

Data management plays an important role in the SOC of the future

Federated Analytics is ultimately an example of how organizations handle data, for whatever purpose. In other words, it is all about data management. That plays a very important role in advancing Splunk’s security offering toward being the platform for a SOC of the future.

In addition to Federated Analytics, however, Splunk has more in store in this area. These include the so-called Pipeline Builders, which are designed to let customers filter, mask, transform and enrich data. The goal is to make the data coming out the other end of the pipeline as lean and relevant as possible, simplifying processing and reducing costs. You don’t have to continuously run all the data through a heavy analytics engine, just the relevant data. That is what Splunk says it can offer with Pipeline Builders.
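
The sketch below illustrates what a pipeline of this kind does in principle: drop noise before it incurs cost, mask sensitive values, and enrich events with context before anything reaches the analytics engine. The stages, field names and lookup table are hypothetical examples, not Splunk’s actual pipeline language.

```python
# Illustrative filter/mask/enrich pipeline. All names are hypothetical;
# this is not Splunk's SPL2, just the concept in plain Python.

import re

ASSET_OWNERS = {"web-01": "ecommerce team"}  # hypothetical enrichment lookup

def filter_noise(event):
    # Drop low-value events early so they never reach the analytics engine.
    return event if event.get("severity") != "debug" else None

def mask_pii(event):
    # Mask anything that looks like a 16-digit card number.
    event["message"] = re.sub(r"\b\d{16}\b", "****", event["message"])
    return event

def enrich(event):
    # Add ownership context the SOC would otherwise have to look up manually.
    event["owner"] = ASSET_OWNERS.get(event["host"], "unknown")
    return event

def pipeline(events):
    for event in events:
        event = filter_noise(event)
        if event is None:
            continue
        yield enrich(mask_pii(event))

events = [
    {"host": "web-01", "severity": "high", "message": "card 4111111111111111 declined"},
    {"host": "web-01", "severity": "debug", "message": "heartbeat"},
]
print(list(pipeline(events)))  # only the masked, enriched high-severity event remains
```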

Pipeline Builders currently come in two flavors: the Edge Processor and the Ingest Processor. The Edge Processor is for customers who want or need to keep full control of their data; they set it up and run it entirely themselves. The Ingest Processor is a Pipeline Builder hosted and managed entirely by Splunk, aimed at customers who are fully in the cloud or have that ambition. The Ingest Processor should provide overarching data management for the Splunk Platform and the Splunk Observability Cloud.

We recently recorded a podcast with Tom Casey, SVP and GM, Products & Technology, in which we discussed the new Ingest Processor and the Edge Processor, among other things. We will publish it soon, so search for Techzine Talks on Tour in your favorite podcast app and check it out. We have already published about six episodes, so there may well be something that appeals to you already.

Edge Processor is generally available worldwide. Ingest Processor is coming to multiple regions in July 2024, and Federated Analytics will go into private preview in the same month.

Better data management can make the SOC of the future the SOC of the present

All in all, Splunk is taking some decent strides toward modernizing cybersecurity in general at organizations, and the SOC in particular. The ability to do all sorts of smart things with data before possibly sending anything to a SOC for further analysis is definitely interesting. It makes a SOC a lot more efficient.

However, we are now reaching a point where the boundaries of what is and is not a SOC are starting to blur somewhat. The analogy with the developments around what constitutes the perimeter in cybersecurity comes to mind. Are the two Processors that Splunk now has part of the SOC, or not? Is the SOC of the future a distributed SOC? There is something to be said for that. If the SOC of the future bears virtually no resemblance to the SOC of today, that will undoubtedly have implications for the rest of the organization as well. But that is something for another time. First, let’s make sure organizations get data management right.