
HPE has added solutions to its portfolio that aim to cover the full range of enterprise AI needs. Using proprietary data for GenAI should become considerably easier as a result. In addition, HPE has pledged to support Nvidia's upcoming Blackwell GPUs across its portfolio.

The announcements were made during Nvidia's GTC 2024 event, where Nvidia presented its next generation of AI hardware. For HPE, the event was an opportunity to revisit a number of previously announced solutions and to further expand its partnership with Nvidia.

Reference architecture

A completely new addition is HPE Machine Learning Inference Software, available as a technology preview. The solution integrates with Nvidia NIM, which delivers foundation models via pre-built containers. Available immediately is HPE's reference architecture for RAG (Retrieval-Augmented Generation). It enables companies to run AI workloads that incorporate proprietary data in real time, without having to train an AI model beforehand. With RAG, an LLM can consult external sources in addition to the knowledge it acquired during training, and those sources can be updated continuously.

Tip: What is Retrieval-Augmented Generation?
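To illustrate the general idea in code: a RAG pipeline boils down to retrieving relevant snippets from a document store and feeding them to the model alongside the question. The sketch below is a generic, simplified example (it uses naive term matching instead of vector search and is not HPE's reference architecture or any Ezmeral/GreenLake API).

```python
# Minimal illustration of the RAG pattern: retrieve relevant snippets from a
# document store, then prepend them to the prompt so the model can answer
# using proprietary, up-to-date context.

from collections import Counter

# Hypothetical proprietary documents; in a real deployment these would come
# from an enterprise data store and typically be indexed as vector embeddings.
DOCUMENTS = [
    "Q3 revenue grew 12% year over year, driven by networking sales.",
    "The support portal migration is scheduled for the first week of May.",
    "Employees can request GPU capacity through the internal cloud portal.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive term overlap with the query (a stand-in for vector search)."""
    query_terms = Counter(query.lower().split())
    scored = [
        (sum(query_terms[term] for term in doc.lower().split()), doc)
        for doc in docs
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user question with retrieved context before sending it to an LLM."""
    context_block = "\n".join(f"- {snippet}" for snippet in context)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    question = "When does the support portal migration happen?"
    prompt = build_prompt(question, retrieve(question, DOCUMENTS))
    print(prompt)  # This augmented prompt would then be passed to the LLM of your choice.
```

Because the documents are fetched at query time, updating the underlying data is enough to change the model's answers; no retraining or fine-tuning is required.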

The solution integrates with data from HPE Ezmeral and files residing in HPE GreenLake. According to HPE, the reference architecture provides a blueprint for organizations to deploy their own chatbots, generators or copilots. For data preparation, AI training and inferencing workloads, it offers a "full spectrum of open-source tools and solutions", including HPE's own Ezmeral Unified Analytics and purpose-built AI software.

Looking to the future

HPE also used the occasion to highlight its existing AI offerings. At HPE Discover in Barcelona in November, it had already announced a revamped enterprise stack in partnership with Nvidia, which we covered earlier:

Tip: HPE and Nvidia unveil “enterprise-class, full-stack” GenAI solution