Google’s machine learning services get a boost from new TPU v4

Sundar Pichai, Google’s CEO, spent only a minute and 42 seconds on the company’s latest Tensor Processing Unit (TPU) when he delivered his keynote at the Google I/O virtual conference this week.

However, it may have been the most anticipated piece of news from the event. The new TPUs, powered by the company’s v4 chip, double the performance of Google’s TPU hardware compared to the previous TPU v3 chips.

Ten times better

The addition of v4 brings new power to machine learning in the cloud. During the keynote, which lasted about two hours, Pichai said that the company’s compute infrastructure drives and sustains its advances in AI and machine learning, adding that Tensor Processing Units are a big part of that effort. He then announced the next generation: TPU v4.

The new v4 TPU chips are connected together to form supercomputers, called pods. One v4 pod contains 4,096 v4 chips, and each pod has an interconnect with ten times the bandwidth of competing technologies.

What changes now?

The computing power that a pod of combined TPUs can deliver exceeds one exaflop, or 10^18 floating-point operations per second, according to Pichai. To illustrate that scale, he asked the audience to picture 10 million people on their laptops: the combined computing power of all those laptops would almost match one exaflop.
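The laptop analogy can be sanity-checked with back-of-the-envelope arithmetic. Note the per-laptop throughput below is an illustrative assumption (roughly 100 GFLOPS), not a figure from the keynote:

```python
# Back-of-the-envelope check of the exaflop analogy from the keynote.
EXAFLOP = 1e18            # one exaflop: 10^18 floating-point operations per second
LAPTOP_FLOPS = 1e11       # assumed per-laptop throughput (~100 GFLOPS, illustrative)
CHIPS_PER_POD = 4096      # v4 chips in one TPU pod, per the keynote

# How many such laptops it takes to reach one exaflop
laptops_needed = EXAFLOP / LAPTOP_FLOPS

# Implied per-chip throughput if one pod delivers about an exaflop
per_chip_flops = EXAFLOP / CHIPS_PER_POD

print(f"laptops needed: {laptops_needed:.0f}")            # 10000000 (10 million)
print(f"per-chip throughput: {per_chip_flops:.2e} FLOPS")
```

Under these assumptions the numbers line up: 10 million laptops at ~100 GFLOPS each reach one exaflop, and each of the 4,096 chips in a pod would contribute roughly 2.4 × 10^14 FLOPS.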

Pichai called it a historic milestone for Google and said that the new TPU v4 infrastructure, the fastest system ever deployed at the company, will be available to Google Cloud users later this year.