Nvidia announced the general availability of Nvidia AI Enterprise, a software suite that allows companies to virtualize AI workloads on mainstream servers running VMware vSphere. Alongside the GA release, Nvidia also said it is partnering with Domino Data Lab to integrate Domino's MLOps platform on top of Nvidia AI Enterprise.
Nvidia AI Enterprise combines three components: the Nvidia RAPIDS suite of data science software libraries, the Nvidia Triton Inference Server (Nvidia's open-source inference-serving software), and deep learning frameworks such as TensorFlow and PyTorch. The suite packages these elements together with full enterprise-level support included.
Utilizing what’s already there
The suite is designed to run on mainstream servers from OEMs such as Lenovo, Dell, HP, and others. As Manuvir Das, head of Enterprise Computing at Nvidia, tells it, the idea is to use for AI the same servers that have already been racked and stacked in private clouds and enterprise data centers, with a modest amount of GPU capacity added to each server at "affordable, incremental costs."
Nvidia chose to partner with VMware to bring the suite to market because vSphere is the de facto standard virtualization platform of the enterprise data center.
Das outlined the opportunity Nvidia sees to accelerate AI adoption.
Adoption is in motion
Many businesses are already adopting AI for core strategic use cases. However, Das sees the larger opportunity in the everyday functions that every enterprise performs, regardless of its line of business, whether that is human resources or managing a sales team. That, in Nvidia's view, is where the big wave of AI adoption will come from. Early adopters of Nvidia AI Enterprise already number dozens of companies across a range of industries, including Cerence, which is using it to develop intelligent in-car assistants, as well as the University of Pisa.