
Amazon Web Services (AWS) has introduced EC2 instances with Nvidia Tesla T4 GPUs, VentureBeat reports. The GPUs will be available to users in G4 instances in the coming weeks. The T4 will also be available through the Amazon Elastic Container Service for Kubernetes (EKS).

The new instances feature Nvidia T4 processors and are “really designed for machine learning and to help our customers shorten the time it takes to do inference at the edge of the network – where that response time really matters,” said Matt Garman, Vice President of Compute at AWS. In addition, the new instances are intended to reduce costs.

The new instances can deploy up to eight T4 GPUs at once in the cloud.
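
As a rough sketch of what launching such an instance could look like with the boto3 SDK: the instance type name and AMI ID below are placeholders, since AWS had not yet published the final G4 size names at the time of this announcement.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a single GPU-backed instance. The instance type and AMI ID
# are placeholders: substitute a G4 size and a Deep Learning AMI that
# are actually available in your region.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="g4dn.xlarge",        # illustrative G4 size name
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```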

T4

Nvidia’s Tesla T4 was presented in September 2018. The successor to the P4 is said to deliver five times the speech recognition performance and to be three times as fast at video analysis. The GPU is aimed at deep learning systems in data centers and has 2,560 CUDA cores and 320 Tensor cores, which according to Nvidia perform AI calculations up to 40 times faster than a traditional CPU.

In addition, the GPU uses the new Turing architecture that also powers the company’s RTX cards. The T4 has a memory bandwidth of 320 GB/s, and thanks to the card’s compact size it fits in almost any type of server. The T4 draws about 75 watts.
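
To give an idea of the kind of workload those Tensor cores target, below is a minimal half-precision inference sketch in PyTorch. PyTorch, torchvision and the ResNet-50 model are illustrative choices, not something the article mentions; any framework with FP16 support would exercise the T4 in a similar way.

```python
import torch
import torchvision.models as models

# Load a pretrained model and cast it to FP16 so the T4's Tensor cores
# can accelerate the underlying matrix multiplications.
model = models.resnet50(pretrained=True).half().cuda().eval()

# A dummy batch standing in for real input images (8 images, 3x224x224).
batch = torch.randn(8, 3, 224, 224, dtype=torch.float16, device="cuda")

with torch.no_grad():
    logits = model(batch)
print(logits.argmax(dim=1))
```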

In November last year, Google was the first cloud provider to make the T4 available on its platform, initially in alpha. It has been available in beta since January, in regions including the Netherlands. The T4 comes with 16 GB of memory and delivers up to 260 TOPS, which according to Google makes the GPU ideal for inference workloads. The T4 can also be used to train machine learning models.
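
For comparison, attaching a T4 to a Compute Engine VM in Google’s Dutch europe-west4 region might look roughly like this with the Google API client library for Python; the project ID, machine type and boot image are placeholder choices.

```python
from googleapiclient import discovery

# Credentials come from the environment (e.g. GOOGLE_APPLICATION_CREDENTIALS).
compute = discovery.build("compute", "v1")

project = "my-project"      # placeholder project ID
zone = "europe-west4-b"     # a Dutch zone where T4s were offered

body = {
    "name": "t4-demo",
    "machineType": f"zones/{zone}/machineTypes/n1-standard-4",
    "guestAccelerators": [{
        "acceleratorType": f"zones/{zone}/acceleratorTypes/nvidia-tesla-t4",
        "acceleratorCount": 1,
    }],
    # GPU VMs cannot live-migrate, so they must terminate on host maintenance.
    "scheduling": {"onHostMaintenance": "TERMINATE"},
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-11",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

operation = compute.instances().insert(project=project, zone=zone, body=body).execute()
print(operation["name"])
```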

Since the GPU’s debut, it has also been adopted in servers from companies such as Cisco, Dell EMC and HPE.
