Dell revamps AI Factory and introduces Nvidia Blackwell Ultra servers

Dell Technologies is significantly expanding its AI Factory. The company is introducing new servers, storage solutions, and network infrastructure for AI workloads. Dell aims to accelerate AI implementation at every stage, from experimentation to production.

During Dell Technologies World in Las Vegas, the company is making a number of major announcements, as is tradition. First, it is expanding the Dell AI Factory with NVIDIA, which gains improved infrastructure and full-stack AI solutions. The second announcement covers broader enhancements across the entire Dell AI Factory portfolio, including new PCs, edge solutions, and servers for data centers.

Founder and CEO Michael Dell emphasizes the importance of these developments. He speaks of “a mission” to bring AI to all Dell customers. The technology must become more accessible, he says. Since its introduction in mid-2024, the AI Factory has become the vehicle for this at Dell.

Powerful new hardware

At the heart of this is the new generation of PowerEdge servers. Dell is today introducing the PowerEdge XE9780 and XE9785 servers in both air-cooled and liquid-cooled variants. These servers support up to 192 Nvidia Blackwell Ultra GPUs with direct-to-chip liquid cooling and can be expanded to 256 GPUs per Dell IR7000 rack. According to Dell, these platforms offer up to four times faster training for large language models compared to their predecessors. The liquid cooling hints at why: power draw is higher than ever before, with a single Blackwell Ultra (B300) GPU consuming 1,400 watts.
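To put those figures in perspective, a rough back-of-the-envelope calculation based on the numbers above (1,400 watts per GPU, 192 GPUs standard, 256 in an expanded IR7000 rack) shows why direct-to-chip liquid cooling becomes necessary; this is an illustrative sketch of GPU power alone, ignoring CPUs, networking, and cooling overhead:

```python
# Illustrative GPU-only power estimate per rack, using the article's figures.
WATTS_PER_GPU = 1_400  # quoted draw of a single Blackwell Ultra (B300) GPU

def rack_gpu_power_kw(gpu_count: int) -> float:
    """GPU-only power draw in kilowatts (excludes CPUs, fans, networking)."""
    return gpu_count * WATTS_PER_GPU / 1_000

print(rack_gpu_power_kw(192))  # 268.8 kW for the standard configuration
print(rack_gpu_power_kw(256))  # 358.4 kW for a fully expanded IR7000 rack
```

Well over a quarter of a megawatt per rack for the GPUs alone is far beyond what air cooling can realistically dissipate.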

Even so, energy efficiency is better than ever. The new PowerEdge XE9712 with the Nvidia GB300 NVL72 promises 50 times more AI reasoning inference output and 5 times higher throughput, aided in part by Dell's new PowerCool technology, which significantly improves energy efficiency.

The network infrastructure has also been updated with the PowerSwitch SN5600 and SN2201 Ethernet switches, part of the Nvidia Spectrum-X Ethernet platform. These switches deliver throughput of up to 800 gigabits per second, essential for data-intensive AI workloads.

Data management for AI

AI systems are only as good as the data they work with. Dell has therefore also improved its AI Data Platform. Dell ObjectScale will get a more compact software-defined system that supports large-scale AI implementations while reducing costs and space requirements in the data center.

In addition, Dell is introducing a new solution built with PowerScale, Project Lightning, and PowerEdge XE servers. Leveraging KV cache and Nvidia’s NIXL libraries, this solution is specifically designed for large-scale distributed inference workloads.

AI for everyone

Dell’s second announcement emphasizes that AI is not just for large data centers, although the scale is obviously much smaller here. Dell is introducing the Pro Max Plus, equipped with the Qualcomm AI 100 PC Inference Card, as the first mobile workstation with an enterprise-grade discrete NPU (neural processing unit). This NPU delivers around 870 AI TOPS, a far cry from the tens of TOPS that the first NPUs in AI PCs offered.

AI is now essential for many organizations, according to Dell. Its research shows that 75 percent of organizations consider AI crucial to their strategy, while 65 percent have already brought AI projects into production. However, challenges such as data quality, security, and high costs continue to slow progress. Given how loosely "AI" is defined, it is hard to pin down the exact importance of GenAI, agentic AI, and the latest innovations in the field; yet AI, in the broadest sense of the word, is clearly here to stay.

Jeff Clarke, COO of Dell Technologies, explains: “This year has been a series of innovations—and we’re not stopping. We’ve added more than 200 updates to the Dell AI Factory. With our latest AI developments, we’re helping companies of all sizes implement AI faster.”

Cost-effective alternative to the cloud

An interesting aspect of Dell’s approach is its focus on cost efficiency. The company claims that the Dell AI Factory approach is up to 62 percent more cost-effective for running large language models (LLMs) on-premises than in the public cloud. This is an important selling point at a time when companies are struggling with rising costs, which have proven far less manageable in the cloud than once thought.