World’s fastest supercomputer breaks AI records

IBM Summit, the world’s fastest supercomputer, has performed the most demanding AI calculation to date. Nvidia, IBM and Google joined forces to break new records. The focus of the research is climate change.

First, some background on IBM Summit. The system consists of 4,608 servers, each housing two IBM Power9 CPUs with 22 computing cores apiece. In total, the supercomputer has over 200,000 processor cores and 10 petabytes of RAM at its disposal.

Each server also contains six Nvidia Tesla V100 GPUs, or 27,648 in total. The Tesla V100 is based on the Volta architecture and is designed for data centres. Thanks to its Tensor Cores, the chip is particularly well suited to machine learning. According to Nvidia, the GPUs deliver 95 percent of Summit’s computing power.
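The totals above follow directly from the per-server figures; a quick sanity check:

```python
# Sanity check of Summit's totals, using the per-server counts in the article.
servers = 4_608
cpus_per_server = 2
cores_per_cpu = 22
gpus_per_server = 6

cpu_cores = servers * cpus_per_server * cores_per_cpu
gpus = servers * gpus_per_server

print(cpu_cores)  # 202752 — "over 200,000 processor cores"
print(gpus)       # 27648 Tesla V100s in total
```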

Deep learning

Deep learning has never been combined with such powerful performance, Prabhat told Wired. Prabhat (he goes by a single name) heads the research group at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Lab.

The world’s most powerful supercomputer was given an AI task around climate change: detecting weather patterns, such as cyclones, in climate simulations. The dataset consists of 100 years of three-hourly forecasts of the Earth’s atmosphere.
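A century of three-hourly snapshots adds up quickly. A rough count, assuming 365-day years (leap days ignored, so this is an estimate, not a figure from the research):

```python
# Rough size of the dataset: 100 years of atmosphere snapshots, one every
# three hours. Assumes 365-day years, so leap days are ignored.
years = 100
snapshots_per_day = 24 // 3  # one forecast every three hours

total_snapshots = years * 365 * snapshots_per_day
print(total_snapshots)  # 292000 snapshots
```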

The Summit experiment is crucial for the future of AI and climate science. The project demonstrates the scientific potential of deep learning on supercomputers, which were previously used to simulate traditional physical and chemical processes: think of nuclear explosions, black holes or new materials.

TensorFlow

We didn’t know what would be possible at this scale, says Rajat Monga, Engineering Director at Google. He and his team helped adapt the open-source TensorFlow machine learning software to the gigantic scale at which Summit operates.

A classic deep learning setup uses data centres in which servers work together. Problems are split up, because the servers are only loosely connected to each other. That is not the case with supercomputers like Summit: their architecture differs radically, with ultra-fast interconnects that link thousands of processors into a single system.
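The pattern behind this kind of scaling is synchronous data-parallel training: every worker computes gradients on its own shard of the data, and a collective all-reduce averages them before the shared model is updated. The sketch below illustrates the idea in plain Python on a toy model; the worker count, the `gradient` function and the model `y = w * x` are invented for the example and are not Summit’s actual code.

```python
# Toy sketch of synchronous data-parallel training: each worker computes a
# gradient on its own data shard, then an all-reduce averages the gradients
# so every worker applies the same update. Real systems (e.g. TensorFlow on
# Summit) run this across thousands of GPUs over fast interconnects.

def gradient(w, shard):
    # Gradient of mean squared error for the toy model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(values):
    # Stand-in for the collective all-reduce that averages across workers.
    return sum(values) / len(values)

def train_step(w, shards, lr=0.01):
    grads = [gradient(w, s) for s in shards]  # computed in parallel in reality
    return w - lr * all_reduce_mean(grads)

# Data for y = 3x, split across four hypothetical workers.
data = [(x, 3 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = train_step(w, shards)
print(round(w, 2))  # converges toward 3.0
```

Because every worker applies the same averaged gradient, all copies of the model stay identical after each step; this is the design choice that makes the result independent of how the data is sharded.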

Nvidia engineers also had to lend a hand to ensure that the tens of thousands of graphics chips work together smoothly.

Applying deep learning to supercomputers is a new idea that comes at a good time for climate researchers, Michael Pritchard, professor at the University of California, told Wired. Because improvements in classic processors have slowed, supercomputers rely on more and more graphics chips. We have reached the point where classical growth is no longer feasible. With GPUs, we’re making another big leap forward.

Related: IBM unveils the world’s fastest supercomputer

This news article was automatically translated from Dutch to give Techzine.eu a head start. All news articles after September 1, 2019 are written in native English and NOT translated. All our background stories are written in native English as well. For more information read our launch article.