
HPE and Nvidia announced today at HPE Discover 2023 in Barcelona that they will jointly deliver an enterprise computing solution for generative AI. As part of the collaboration, the two companies are integrating their software stacks, allowing organizations to deploy AI at their own scale and for their specific purposes.

Both companies note that GenAI can serve a wide range of applications, from enterprise data search and process automation to content creation. All of these require rapid deployment and a high degree of adaptability, which is why HPE and Nvidia point customers to the combined hardware and software the two companies have developed.

For GenAI to be as useful as possible to a business, organizations need a great deal of control over the data it works with. Both parties therefore recommend that organizations start with a foundation model, after which proprietary data and fine-tuning tools tailor it to the specific use case. In this way, HPE and Nvidia hope to significantly reduce the effort it takes to deploy AI.
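To illustrate the workflow the companies describe, the sketch below shows in rough outline how a team might adapt an open foundation model to proprietary data with parameter-efficient fine-tuning. It uses the open-source Hugging Face and PEFT libraries rather than HPE's or Nvidia's own tooling; the model name, data file and hyperparameters are illustrative assumptions, not part of the announced offering.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"            # assumed open foundation model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token    # Llama has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Parameter-efficient fine-tuning (LoRA) keeps most of the base model frozen
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# "proprietary_docs.jsonl" is a hypothetical file of in-house text records
data = load_dataset("json", data_files="proprietary_docs.jsonl")["train"]
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # builds labels for causal LM
)
trainer.train()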

“Ready-out-of-the-box”

Nvidia CEO Jensen Huang says that companies will achieve “unprecedented productivity” thanks to the partnership. By choosing this solution, companies of all sizes can get started with a “ready-out-of-the-box” offering. The flagship application example remains AI chatbots.


On the hardware side, HPE uses its ProLiant Compute DL380a servers, which come equipped with Nvidia hardware for “hyperscale AI.” The solution clusters 16 of these servers, combining 64 Nvidia L40S GPUs to scale Meta’s Llama 2 70B model.
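As a rough illustration of what scaling a model of that size across multiple GPUs involves, the sketch below loads Llama 2 70B with tensor parallelism using the open-source vLLM library. This is not the software stack HPE and Nvidia ship, and the GPU count per server is an assumption for a single node.

from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-70b-chat-hf",
    tensor_parallel_size=4,   # shard the 70B weights across 4 GPUs on one server (assumed)
)
out = llm.generate(["Summarize our Q3 support tickets."],
                   SamplingParams(max_tokens=128, temperature=0.2))
print(out[0].outputs[0].text)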

With the HPE Machine Learning Development Software, developers gain access to an “AI studio,” a tool for prototyping and testing LLMs and ML models.

The offering will be available in early 2024.

Partnering with Nvidia in this way has been a trend this year, as seen recently with AWS and earlier with VMware, for example.