In the latest AI benchmarks from machine learning consortium MLCommons, Nvidia hardware stands head and shoulders above the competition. However, Intel sees itself as a formidable opponent, offering better value for money with its Gaudi2 architecture. Moreover, it is uncertain whether Nvidia can meet the demand for its GPUs.

The eight so-called MLPerf tests cover various AI workloads, including training Google’s BERT-Large model and a portion of OpenAI’s GPT-3. Like all benchmarks, the tests merely indicate performance ratios between players in the market; they are not the definitive answer to what an organization should purchase. Still, Nvidia’s sheer performance cannot be denied.

Specifically, better performance means that customers complete an AI workload faster on Nvidia chips. The differences can be significant: in one of the benchmarks, the Nvidia-powered system completed a BERT test in eight seconds, while the Intel Habana Labs solution took two minutes.
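That gap works out to roughly a fifteenfold speedup. A minimal sketch of the arithmetic, using the rounded timings reported above:

```python
# Rough speedup estimate from the reported BERT benchmark timings.
nvidia_seconds = 8          # Nvidia-powered system
habana_seconds = 2 * 60     # Intel Habana Labs solution: two minutes

speedup = habana_seconds / nvidia_seconds
print(f"Nvidia finished roughly {speedup:.0f}x faster")  # ~15x
```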

Nuance and scale

It is important to put the performance in context. First, the MLCommons initiative depends on submissions from specific partnerships. The Nvidia submission, for example, was made in partnership with CoreWeave, which rents out Nvidia GPUs in the cloud. Those GPUs do not run without a system around them: behind the 3,584 H100 cards sit 896 Intel Xeons, four GPUs per CPU. So although Nvidia itself boasts about the impressive benchmark results, the CoreWeave solution is just as much an Intel-based system.

Intel’s Habana Labs was the only other AI chipmaker to participate in the benchmarks. Each submission differed subtly in specifications and configuration, so none of the results amounts to a drag race between two equivalent systems.

Intel AI chief Jordan Plawner told ZDNET that the performance difference between Habana and Nvidia is small enough that many organizations would not be too concerned about it. After all, if demand for AI chips remains high and Nvidia cannot deliver, many companies will be left empty-handed. Plawner therefore notes that these organizations may be more inclined to purchase Intel’s Gaudi2 architecture, which is said to be competitive with the Nvidia A100, the predecessor of the H100.

In addition, Intel promises to make strides through better software. Gaudi2 faced a “handicap” in its benchmark submissions: it used the BF16 data format, while Nvidia used FP8. The higher precision of the 16-bit format would have resulted in longer processing times. For September’s GPT-3 test, Intel expects to be on par with the leading H100 in terms of price-performance. Ultimately, that ratio will matter more to most companies than the raw benchmark numbers.
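To illustrate why the data format matters: the fewer mantissa bits a format has, the coarser its relative precision, but the narrower its values, which saves memory bandwidth and compute. The sketch below uses the standard bit layouts of BF16 and of FP8’s common E4M3 variant; which FP8 variant Nvidia actually used is an assumption here, not a detail from the submissions.

```python
# Relative precision implied by each format's explicit mantissa bits:
# machine epsilon eps = 2 ** -mantissa_bits.
formats = {
    "BF16 (1 sign, 8 exponent, 7 mantissa bits)": 7,
    "FP8 E4M3 (1 sign, 4 exponent, 3 mantissa bits)": 3,
}

for name, mantissa_bits in formats.items():
    eps = 2 ** -mantissa_bits
    print(f"{name}: relative precision ~ {eps:.4f}")

# BF16 resolves values ~16x more finely, but each number is twice as
# wide (16 vs 8 bits) -- the throughput cost behind the "handicap".
```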

Also read: Nvidia to demand a premium for AI chips