
Juniper Networks is making great strides in AIOps and in data centers for AI workloads. Marvis Minis adds proactive resolution of network problems, Marvis Virtual Network Assistant (VNA) now extends to the data center, and Juniper Validated Designs for AI Data Centers bring the company’s AI data center offerings another step closer to customers.

Juniper has big news: it has shaken up its entire network stack and taken AI as the starting point in rebuilding it. The result is the AI-Native Networking Platform it is announcing today. This platform deploys a shared AI engine and Marvis VNA across campus, branch and data center. In doing so, the company dramatically simplifies the overall architecture, at least on paper. In part, you can also see it as the next (big) step in the rollout of Mist’s AI capabilities across Juniper’s portfolio.

AI-Native Networking Platform is more than a hollow phrase

To show right away that AI-Native Networking Platform is more than just a new name, Juniper is also announcing new components for the platform today. Here it is useful to distinguish between AI in the form of AIOps and what it takes to run AI workloads optimally in a data center. The AI-Native Networking Platform touches both.

Marvis Minis

First of all, there is the AIOps part. That, after all, is what Mist AI has become best known for. By deploying AI in Mist’s access points, administrators can solve problems much faster.

With Marvis Minis, Juniper adds an important component to this. Where Mist AI was mostly reactive before, it now also becomes proactive. Marvis Minis is a Digital Experience Twin, and it does exactly what you would expect a digital twin to do: it creates a copy of the configuration of the Mist access points and uses that copy to proactively test for problems. In this way, Marvis Minis emulates connectivity with the access points. It does not use sensors; it is integrated into the network itself.

This software is, of course, deployed from Mist’s cloud environment. The nice thing for admins is that they don’t have to do anything at all: it deploys itself. This testing of the digital experience of connecting devices does not happen continuously, by the way, as it does affect the performance of the access points and thus the network. The software itself picks a moment when it can run the tests, for example in the evening or at night. If Marvis Minis picks up a problem, it performs a traceroute and proactively tells you where the problem lies. This way, admins can resolve connectivity issues before they ever reach an end user.
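To make the idea a bit more concrete, here is a loose sketch in Python of what such a proactive check could look like in principle: run synthetic connectivity tests during a quiet window and fall back to a traceroute when one fails. The targets, ports and time window are made-up examples, and this is not Juniper’s implementation; it only illustrates the test-then-trace pattern described above.

```python
# Illustrative sketch only: a toy proactive connectivity probe in the spirit of
# what the article describes. Targets, ports and the off-peak window are
# invented examples, not anything Juniper ships. Assumes a Unix-like system
# with the `traceroute` utility installed.
import socket
import subprocess
from datetime import datetime

# Hypothetical services a client on the network would need to reach.
SYNTHETIC_TARGETS = [
    ("dns", "10.0.0.53", 53),
    ("corp-app", "intranet.example.com", 443),
]


def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Emulate a client connection attempt with a simple TCP handshake."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def run_probe_cycle() -> None:
    # Only probe during an assumed quiet window, since the tests are not meant
    # to run continuously and compete with real user traffic.
    if not 0 <= datetime.now().hour < 6:
        return
    for name, host, port in SYNTHETIC_TARGETS:
        if port_reachable(host, port):
            continue
        # On failure, trace the path to localise where the problem sits,
        # analogous to the traceroute step described above.
        trace = subprocess.run(
            ["traceroute", "-m", "15", host],
            capture_output=True, text=True, timeout=60,
        )
        print(f"[ALERT] {name} ({host}:{port}) unreachable\n{trace.stdout}")


if __name__ == "__main__":
    run_probe_cycle()
```

In the real product, of course, all of this happens from the access points and the Mist cloud without any admin involvement; the sketch only shows the general pattern.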

Marvis Minis may be able to solve all kinds of problems proactively, but of course there are limits. It will be especially useful for detecting configuration errors; proactively resolving roaming issues or sticky-AP issues is not something it can do.

Marvis VNA for data centers

The second component of the AI-Native Networking Platform that falls under AIOps is the extension of Marvis VNA to the data center. It was already available for campus and branch, the areas where Mist has traditionally been active. Now it is also available for the data center.

Based on what Juniper has learned from deploying Marvis VNA in campus and branch, this could have a big impact on the operational side of data centers. In campus and branch environments, Juniper has seen ticket volumes drop by more than 90 percent, and the company expects to achieve the same in the data center. That would be a huge win: given the large number of network devices typically active in a data center, it could translate into substantial savings.

Another good thing to mention here is that Marvis VNA is essentially vendor-agnostic, meaning it also works with hardware from other vendors. In practice there will be limitations; it can hardly be otherwise, since vendors almost always put some proprietary technology in their hardware. Still, problems with cabling, configurations and connectivity can now be detected and resolved with Marvis VNA across all network equipment in the data center. Finally, Marvis CI allows IT teams to ask questions in natural language and get answers drawn from product documentation and the knowledge base using generative AI.
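For readers wondering what such a documentation-driven Q&A flow involves, here is a deliberately simplified sketch of the retrieval half of the pattern. The documents and scoring are toy examples and say nothing about Marvis’ actual internals; in a real system the retrieved passages would be handed to a generative model, which phrases the conversational answer.

```python
# Conceptual sketch of retrieval-backed documentation Q&A. The snippets and the
# naive word-overlap scoring are toy examples for illustration only.
from collections import Counter

DOCS = {
    "cabling-guide": "Check for CRC errors and flapping links when a cable is suspect.",
    "evpn-config": "EVPN-VXLAN fabrics require matching VNI and route-target settings.",
}


def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documentation snippets by word overlap with the question."""
    q_words = Counter(question.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda kv: sum(q_words[w] for w in kv[1].lower().split()),
        reverse=True,
    )
    return [text for _, text in scored[:k]]


# A generative model would turn these retrieved snippets into a readable answer.
print(retrieve("Why do I see CRC errors on this link?"))
```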

The AI Data Center

The third and final component of Juniper’s new AI-Native Networking Platform is about how to build a data center for AI workloads. To that end, Juniper has added an extension to Apstra, the intent-based networking component of its offering. It lets you optimize things like data center designs and configurations for specific workloads.

With today’s announcement, Juniper adds faster and more efficient processing of AI/ML traffic. This traffic is fundamentally different from most other traffic that passes through a data center. For example, it primarily goes back and forth between GPUs, which requires high and consistent bandwidth, and it is RDMA-based rather than TCP-based. Furthermore, AI/ML traffic is very sensitive to packet loss and jitter, and nodes often operate in parallel. That means links can fill up quickly, as the rough calculation below illustrates.
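To give a feel for the volumes involved, here is a back-of-the-envelope calculation. All the numbers (gradient size, GPU count, link speed) are assumptions for illustration, not figures from Juniper; the formula is the standard per-GPU traffic of a ring all-reduce.

```python
# Back-of-the-envelope illustration: how quickly GPU-to-GPU collective traffic
# fills a link. A ring all-reduce moves roughly 2 * S * (N - 1) / N bytes per
# GPU per step, where S is the gradient size and N the number of GPUs.
# All numbers below are assumptions, chosen only to illustrate the scale.
GRADIENT_BYTES = 14e9          # e.g. ~7B parameters in fp16
NUM_GPUS = 8
LINK_GBPS = 400                # assumed per-GPU network link

bytes_per_gpu = 2 * GRADIENT_BYTES * (NUM_GPUS - 1) / NUM_GPUS
seconds_on_wire = bytes_per_gpu * 8 / (LINK_GBPS * 1e9)

print(f"{bytes_per_gpu / 1e9:.1f} GB per GPU per step, "
      f"~{seconds_on_wire:.2f} s of pure transfer on a {LINK_GBPS} Gbps link")
```

Roughly half a second of pure transfer per training step, repeated thousands of times and synchronized across all nodes, is why packet loss or jitter on any single link immediately slows down the whole job.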

What’s interesting here is that Juniper runs everything over Ethernet. We see that more and more these days. Ethernet seems to be winning the battle against InfiniBand. That’s good news for customers because Ethernet is a lot cheaper to implement, while by now it should be able to achieve the same performance.

Finally, to make building an AI data center as smooth as possible, Juniper today also introduces Juniper Validated Designs for AI data centers. In addition, new PTX routers and QFX switches specifically for these designs are arriving, all running under the Apstra umbrella. Worth mentioning is that some of the new models support 800GbE. It is also possible to design an AI data center with other Juniper hardware, at least according to the people at Juniper when we ask them about it; the specific hardware will differ depending on the project.

All in all, the introduction of the AI-Native Networking Platform and everything that goes with it is a big step forward for Juniper. With it, the company puts AI at the center of its future on all fronts. Today’s announcement also makes the proposed acquisition by HPE even more logical. After all, that company also continuously talks about an AI-native architecture, and Juniper’s announcement fits seamlessly into that.

Also read: HPE and Juniper take aim at Cisco with network fabric