Having thoroughly modernized VMware Cloud earlier this year, the company is now focused on building out an expanded AI offering. At VMware Explore in Barcelona, it released a raft of announcements in this area. Additional compute power, storage options and a series of partnerships give customers more flexibility to leverage the technology, whether on-prem, in the (public) cloud or within edge environments.

Firstly, VMware has made some significant changes to VMware Cloud Foundation, which has now been updated to version 5.1. This platform allows users to manage their infrastructure uniformly and supports all kinds of combinations of on-prem and cloud environments. Back in August, VMware touted a 33 percent decrease in upgrade time, in addition to better scalability.

Tip: VMware Cloud up to date again with NSX+, vSAN Max and better security

Now, flexibility seems to be taking centre stage. The company is doubling the GPU capacity per VM, bringing the total to 16. This makes running AI workloads a lot more feasible than before. In addition, storage can now be scaled independently of compute.

Private AI with Intel

Back in August, VMware announced a partnership with Nvidia called VMware Private AI Foundation. This integrated Nvidia’s software with VMware’s stack to train or customize AI models, among other things. Its aim: to allow companies to harness the potential of AI without compromising the security of enterprise data.

Read more: VMware and Nvidia launch Private AI Foundation: AI that keeps enterprise data secure

Now, Intel has also teamed up with VMware for a similar end product. That company’s AI software suite will now be integrated with VMware Cloud Foundation and make use of the chipmaker’s own Xeon processors with AI accelerators and Max Series GPUs. We should note that Nvidia remains the AI performance champion by a rather significant margin. However, Intel’s alternative provides some much-needed flexibility for VMware customers. As with the VMware-Nvidia partnership, users can prepare data, train models, fine-tune them and run inference on the Intel platform through VMware.

“For decades, Intel and VMware have delivered next-generation data center-to-cloud capabilities that enable customers to move faster, innovate more, and operate efficiently,” said Sandra Rivera, executive vice president and general manager of the Data Center and AI Group (DCAI) at Intel. “With the potential of artificial intelligence to unlock powerful new possibilities and improve the life of every person on the planet, Intel and VMware are well equipped to lead enterprises into this new era of AI, powered by silicon and software.”

Integration with IBM watsonx

The topic of Private AI is also key when it comes to VMware’s newly announced partnership with IBM. VMware Cloud Foundation, Red Hat OpenShift and IBM watsonx are now integrated solutions for shared customers. The latter is IBM’s platform for running secure AI workloads on enterprise data. The platform’s stated goal is as simple as it is enticing: to free employees from a variety of mundane tasks.

Tip: IBM marries AI platform with as much data as possible: what’s Watsonx?

Going forward, watsonx can also run on-prem via VMware, which promises the same functionality as the cloud-based SaaS alternative. This can be done using Red Hat’s enterprise-focused Kubernetes platform.

This collaboration between VMware and IBM was established through their shared Joint Innovation Lab, a team drawn from both companies that has been bringing new solutions to market since 2018. In the future, they hope to launch a “reference architecture” for training and refining AI models in hybrid cloud and on-prem environments. This should meet even highly restrictive compliance requirements, so that, for example, a financial institution can use AI at scale without worry.

In short: VMware has significantly strengthened its own AI offerings, aided by several newly established collaborations. Private AI, according to the company, is an “architectural approach” that is not dependent on a specific vendor. Multiple AI platforms are joining in, with “flexibility” being the magic word for customers throughout.