Nvidia recently presented Spectrum-X, a networking platform designed to provide high-speed Ethernet connectivity for AI workloads across multiple cloud environments.
According to Nvidia, the platform will facilitate the development of multi-tenant hyperscale AI cloud environments by improving the efficiency of the underlying Ethernet networks and the way they move data.
Ultimately, Nvidia wants to make it easier for cloud providers that use Ethernet connectivity instead of InfiniBand to train and run generative AI models in the cloud.
The Spectrum-X platform combines Nvidia’s Ethernet switch technology, sourced from its Mellanox acquisition, with BlueField-3 DPUs. According to Nvidia, this combination can improve the performance and energy efficiency of AI workloads in a stack by a factor of 1.7.
The Spectrum-4 switches can be configured with 64 ports of 800 GbE or 128 ports of 400 GbE, and deliver a switching and routing throughput of 51.2 Tbps.
Attractive to cloud providers
In addition, Spectrum-X is interoperable with existing Ethernet stacks, which makes it more attractive for cloud providers to run AI workloads on their existing cloud infrastructure.
More specifically, cloud data centers can process data and deliver AI workloads more effectively without compromising the Ethernet-based multi-tenant model already in place.
Major infrastructure providers such as Dell Technologies, Lenovo and Supermicro have also embraced Nvidia Spectrum-X.
Nvidia is currently building a supercomputer in Israel, named Israel-1, that will serve as a test environment for Spectrum-X. It consists of Dell PowerEdge XE9680 servers, each equipped with eight Nvidia H100 GPUs, and should deliver eight exaflops of AI performance and 130 petaflops for scientific computing workloads.
Also read: Nvidia to demand a premium for AI chips