Cisco and Nvidia introduce N9100 switch for AI data centers

Cisco and Nvidia have announced important steps to accelerate AI infrastructure. The most notable is the Cisco N9100 switch, built on Nvidia Spectrum-X Ethernet silicon. In addition, Cisco is strengthening the Secure AI Factory with new security integrations and presenting the first AI-native wireless stack for 6G.

The announcements were made at Nvidia GTC Washington, D.C. The Cisco N9100 is the first data center switch based on Nvidia Spectrum-X Ethernet switch silicon. The switch will be available before the end of the year and will offer a choice between Cisco NX-OS and SONiC as the operating system. This should give neocloud and sovereign cloud customers more flexibility when building AI infrastructure.

With the N9100 as its foundation, Cisco is delivering an Nvidia Cloud Partner-compliant reference architecture. The Nexus data center switch portfolio, which spans Cisco Silicon One, cloud-scale ASICs, and now the Spectrum-X Ethernet switch silicon, thus gains a uniform operating model via Cisco Nexus Dashboard.

Cisco President Jeetu Patel says that the infrastructure for agentic AI applications requires new architectures. “The infrastructure that will power the agentic AI applications and innovation of the future requires new architectures designed to overcome today’s constraints in power, computing, and network performance,” Patel said.

Secure AI Factory gets additional features

The Cisco Secure AI Factory with Nvidia, announced in March 2025, is gaining new security and observability capabilities. Cisco AI Defense now integrates with Nvidia NeMo Guardrails to strengthen the security of AI applications.

The security solution is available for on-premises data-plane deployment, enabling security and AI teams to protect models and applications. Splunk Observability Cloud helps teams monitor the performance, quality, security, and cost of their AI application stack.

In terms of infrastructure, Cisco Isovalent has been validated for inference workloads on AI PODs, enabling high-performance Kubernetes networking. The new cloud-managed Cisco G200 Silicon One switch, which delivers high-density 800G Ethernet, can now be ordered as a deployment option in AI PODs.

Cisco UCS 880A M8 rack servers with Nvidia HGX B300 and Cisco UCS X-Series modular servers with Nvidia RTX PRO 6000 Blackwell Server Edition GPUs are also available as part of AI PODs. This supports high-performance GPU usage for various workloads, including generative AI fine-tuning and inference.

First US AI-RAN stack for mobile networks

Cisco, Nvidia, and telecom partners have developed the first US AI-RAN stack for mobile networks. This stack integrates sensing and communication, with multiple pre-6G applications being demonstrated at Nvidia GTC Washington, D.C.

This allows telecom providers to integrate AI into their mobile networks, starting with 5G Advanced services. At the same time, it lays the foundation for 6G. The stack combines Cisco’s user plane function and 5G core software with the Nvidia AI Aerial platform. This creates a foundation for physical AI and integrated sensing with high efficiency and security.

The stack is designed to help telecom providers transition to AI-driven networks. With the growing demand for connected devices such as AR glasses, connected cars, and robotics, wireless networks are being challenged to support billions of connections.

Expansion of the ecosystem

The ecosystem around the Cisco Secure AI Factory is expanding with new partners and solutions. Nvidia Run:ai software is now available through Cisco and partners, enabling intelligent AI workload and GPU orchestration.

Nutanix Kubernetes Platform has been added as a supported Kubernetes platform. Nutanix Unified Storage is a supported storage option, and Nutanix Enterprise AI is an interoperable software component that simplifies containerized inference services.

Cisco is collaborating with Nvidia on the Nvidia AI Factory for Government, a complete end-to-end reference design for AI workloads in highly regulated environments. This is intended to help governments securely implement AI technology.

The reference architecture for neocloud and sovereign cloud customers includes the recently introduced Cisco 8223 based on Silicon One P200 for scale-across networks, Nvidia BlueField-4 DPUs, and Nvidia ConnectX-9 SuperNICs.

Tip: Cisco doubles down on AI infrastructure with AI POD and new UCS server