Cisco doubles down on AI infrastructure with AI POD and new UCS server

Renewed focus on compute offers new opportunities

The AI-ready data center is one of Cisco's focus areas at the moment. That makes UCS, Cisco's compute platform, increasingly important to the company. Cisco is also coming out with plug-and-play AI offerings in the form of AI PODs: turnkey infrastructure for AI.

When looking at infrastructure for AI, compute is without a doubt the most important component, followed by the network. Storage trails those two at quite a distance, especially for relatively general-purpose AI applications (as opposed to training on huge clusters). The network component has traditionally been in good shape at Cisco, but the same has not always been true of compute in recent years. That changes today with the announcement, at the annual Partner Summit, of the new UCS C885A M8 servers and AI PODs.

Cisco UCS C885A M8 and AI PODs

The UCS C885A M8 servers are designed for large AI training and inferencing workloads. They are built on the Nvidia HGX platform (H100 and H200 GPUs). Each server also includes Nvidia SuperNICs and BlueField-3 DPUs to provide much-needed acceleration alongside the GPUs.

Where the UCS C885A M8 is a new server, the new AI PODs take things a step further. These are full AI infrastructure stacks (appliances) that Cisco builds for specific use cases and industries. They allow customers to purchase compute, network and storage for AI as a complete stack. The AI PODs are built according to Cisco Validated Design principles, which means Cisco stands behind their performance. They are meant as a starting point for customers to get going with AI, whatever the specific requirement. AI PODs use Nvidia AI Enterprise software.

Renewed focus on UCS/compute

The announcements Cisco is making today are not, in themselves, new to the market: other compute solutions for AI already exist, and full stacks for AI are not new either.

For Cisco, however, this is an important moment. It has been working hard in recent years to boost the compute piece, which had lagged somewhat, at least in our impression. This sentiment is shared by Jeremy Foster, the SVP and GM of Cisco Compute, whom we spoke to at the Cisco Partner Summit this week in Los Angeles. “Compute was doing well and we had a lot of focus on other things, so we went from one generation to another in terms of UCS,” he sums up.

A few years back, however, Cisco realized something had to be done and started investing in this business. Today’s announcements are a result of that, as is a new AMD-based server, and the company has put a lot of time and effort into further developing Intersight. That is now the primary way for Cisco customers to manage compute, he indicates.

When it comes to AI PODs, the emphasis is obviously very much on Nvidia, especially in the software Cisco provides with them. However, that does not mean customers cannot deploy AMD-based AI PODs, Foster assures us. Cisco just doesn’t have Validated Designs for those yet. Foster indicates that they will certainly come once there is sufficient demand and/or the supply becomes more mature.

Cisco has already proven that it can deliver complete stacks

For Cisco, offering complete AI stacks may at first glance seem out of its comfort zone. According to Foster, that is not the case. In fact, Cisco already has quite a bit of experience developing full stacks. Foster points to products such as FlexPod and FlashStack, which it co-developed with NetApp and Pure Storage, respectively. Developing an AI stack like the AI PODs announced today is really not much different, Foster says. The Nvidia part of the AI PODs is new, of course, but otherwise Cisco was able to carry over much of what it learned in developing those earlier stacks.

At the end of the day, today’s introduction of the new AI products means Cisco can compete better in the AI infrastructure market. “For partners, until now we didn’t offer them a good opportunity in this market because we didn’t have it in our portfolio. Now we do,” Foster points out. Cisco can now also offer partners a lot of support, so they can sell these offerings more effectively. Especially as there seems to be a growing need for more specialized, verticalized solutions, sometimes by industry, these new products and stacks for organizations’ AI infrastructure could work out well for Cisco.

With the AI PODs, Cisco is adding a new kind of AI stack to its portfolio. These “appliance-like” stacks will sit alongside the Cisco Nexus HyperFabric AI Clusters.