

Intel made a number of announcements during Supercomputing 2023 (SC23) in Denver, Colorado. In addition to sharing more details about its Aurora supercomputer, the company claimed to be the only vendor offering an AI chip competitive with Nvidia's products.

Intel is building the Aurora supercomputer in collaboration with the US-based Argonne National Laboratory. Having developed a GPT-3-style LLM with 1 trillion parameters, the two parties aim to make scientific breakthroughs. Ahead of SC23, Intel showed benchmarks in drug discovery, particle physics and molecular simulation that demonstrated the scalability of the architecture. Somewhat tellingly, Intel only compared against Nvidia's older A100 chip: on a card-per-card basis, Intel's solution is said to run its self-designed large language model 45 percent faster.


HPC and AI workloads

Intel also touted the performance of its Data Center GPU Max Series chips, this time against the Nvidia H100 in various HPC workloads. The GPU Max 1550 is stated to be on average 36 percent faster than Nvidia's top model. The H100 is currently the most coveted GPU in the world, with suggested prices of $10,000 each.

Intel previously stated that its Gaudi 2 accelerators were competitive with the H100 in terms of AI inferencing; it is now looking ahead to Gaudi 3, which will launch in 2024.

The company also previewed its fifth-generation Xeon processors, which do not yet have a clear release date. These HPC CPUs will have more cores and built-in Advanced Matrix Extensions, meaning the chips carry specialized AI hardware on board.