
Nvidia and Google score well in AI benchmark MLPerf


Nvidia and Google come out on top in MLPerf, the benchmark for large AI systems. MLPerf is a suite of tests that determines who is best at carrying out AI tasks, covering both hardware and software.

Nvidia’s A100 chip appears to be the best at various machine learning tasks, such as training a neural network. Nvidia also scores highest among commercially available systems. Among research projects, Google’s Tensor Processing Unit (TPU) performed best.

BERT neural network

Nvidia needed only 49 seconds to train a version of the BERT neural network, using 2,048 A100 chips. A commercially available Google machine in Google’s cloud service, using 16 TPU chips, took almost 57 minutes for the same task. One of Google’s research systems, with 4,096 TPU chips, trained BERT in just 23 seconds.

Several competitors took part in the test. Intel has special Xeon processors that are not commercially available yet but will be in the near future; however, Intel was too late to take part in the large benchmark. Huawei was also present with its Ascend 910 chip, for which no final scores are available yet.

MLPerf benchmark

In the world of AI, the second phase of machine learning, inference, is when trained networks are used to make real-time predictions. This is normally also part of the test, but those results will be published later this year because of COVID-19.
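For readers less familiar with the two phases, the sketch below illustrates the difference in a few lines of PyTorch. The toy model and random data are hypothetical and only serve to show the distinction; MLPerf itself benchmarks far heavier workloads, such as BERT on thousands of accelerators.

```python
import torch
import torch.nn as nn

# Hypothetical toy model, for illustration only.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Phase 1: training -- adjust the network's weights on labelled examples.
inputs = torch.randn(32, 10)
labels = torch.randint(0, 2, (32,))
optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()

# Phase 2: inference -- use the trained network to make predictions.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 10)).argmax(dim=1)
print(prediction)
```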

The benchmark also receives some criticism. Competitors Cerebras Systems and Graphcore in particular continue to stay away from the benchmark competition. They do not participate because they believe the way the test is set up is not representative of real workloads, ZDNet writes. According to both companies, the test is not what customers are interested in: they prefer to focus on real use cases rather than on a competition.