
Built on the Hopper architecture, the all-new Nvidia H100 Tensor Core GPU brings a new level of performance, scalability and security to AI workloads.

Nvidia has launched its new H100 GPU. Built on the Hopper architecture and packing 80 billion transistors, the GPU promises excellent performance, scalability and security.

The H100 delivers faster and more efficient processing, making it suitable for the world’s most demanding AI applications. According to Nvidia, it can run certain workloads up to 40 times faster than traditional central processing units (CPUs). It can also handle multiple workloads simultaneously and in isolation thanks to its second-generation, secure Multi-Instance GPU (MIG) technology.
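As a rough illustration of what MIG partitioning looks like from the software side, the snippet below is a minimal sketch that queries MIG status through the NVML Python bindings (pynvml). The package, the presence of a MIG-capable GPU and driver, and the output format are assumptions for illustration, not details from Nvidia’s announcement.

```python
# Illustrative sketch only: report whether MIG (Multi-Instance GPU) mode is
# enabled on each detected GPU, using the NVML Python bindings (pynvml).
# Assumes the nvidia-ml-py package is installed and an NVIDIA driver is present.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
            max_instances = pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)
            enabled = current == pynvml.NVML_DEVICE_MIG_ENABLE
            print(f"GPU {i} ({name}): MIG enabled={enabled}, "
                  f"up to {max_instances} isolated instances")
        except pynvml.NVMLError:
            # GPUs without MIG support raise an NVML error on this query.
            print(f"GPU {i} ({name}): MIG not supported")
finally:
    pynvml.nvmlShutdown()
```

Actually creating or resizing MIG partitions is an administrative step done through the driver tooling rather than application code; the sketch only inspects the resulting configuration.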

“I see Hopper as ‘the new engine of AI factories’ that will drive significant advances in language-based AI, robotics, healthcare, and life sciences,” Nvidia Chief Executive Jensen Huang said at the GTC 2022 event. “Hopper’s Transformer Engine boosts performance up to an order of magnitude, putting large-scale AI and HPC within reach of companies and researchers.”

The world’s most notable IT companies

The world’s leading IT companies need GPUs that offer strong data protection, deliver fast performance in their systems and support emerging AI workloads.

Nvidia’s new GPU will be available in third-party systems from manufacturers including Supermicro, Giga-Byte Technology, Fujitsu, Lenovo, Dell, Cisco, HPE and Atos. Cloud providers such as Google Cloud, Microsoft Azure, Amazon Web Services and Oracle Cloud also plan to deploy the H100 in their cloud platforms for customers.

“We look forward to enabling the next generation of AI models on the latest H100 GPUs in Microsoft Azure,” said Nidhi Chappell, general manager of Azure AI Infrastructure. “With the advancements in Hopper architecture coupled with our investments in Azure AI supercomputing, we’ll be able to help accelerate the development of AI worldwide.”

Several academic and research institutions will also use the H100 in their supercomputers, including the University of Tsukuba, the Swiss National Supercomputing Centre, the Texas Advanced Computing Center, Los Alamos National Laboratory and the Barcelona Supercomputing Center.

Companies are also drawn to the GPU because Nvidia is bundling it with a five-year license for the Nvidia AI Enterprise software suite. This gives them access to AI frameworks and tools for building chatbots, vision AI, recommendation engines and more, helping them optimize their AI models.