Dell AI Data Platform gets new storage options

Dell Technologies has announced improvements to its AI Data Platform intended to accelerate AI workloads. With new integrations for Nvidia hardware, faster object storage, and GPU-accelerated vector search, Dell aims to help organizations extract more value from distributed data.

The integration with Nvidia cuVS brings GPU-accelerated hybrid search to the Dell AI Data Platform. This combines keyword and vector search for faster and more efficient results. According to Dell, IT teams get a fully integrated solution to deploy GPU-powered search immediately.
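The general idea behind hybrid search — merging a keyword ranking with a vector-similarity ranking — can be sketched in a few lines. The following is a minimal toy illustration using reciprocal rank fusion, not Dell's or cuVS's actual implementation; all function names and the scoring scheme are illustrative assumptions.

```python
from math import sqrt

def keyword_scores(query, docs):
    # Toy keyword relevance: how many query terms appear in each doc.
    terms = query.lower().split()
    return {i: sum(t in d.lower() for t in terms) for i, d in enumerate(docs)}

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query, docs, query_vec, doc_vecs, k=60):
    # Fuse the keyword and vector rankings with reciprocal rank fusion:
    # each document earns 1 / (k + rank) from every ranking it appears in.
    kw = keyword_scores(query, docs)
    vec = {i: cosine(query_vec, v) for i, v in enumerate(doc_vecs)}
    fused = {}
    for ranking in (kw, vec):
        ordered = sorted(ranking, key=ranking.get, reverse=True)
        for rank, doc_id in enumerate(ordered):
            fused[doc_id] = fused.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(fused, key=fused.get, reverse=True)
```

In a production system the keyword side would be a BM25 index and the vector side a GPU-accelerated ANN search; the fusion step, however, stays this simple.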

Dell positions its AI Data Platform as a solution for companies struggling with siloed data. By decoupling data storage from processing, the company says, the platform eliminates bottlenecks and gives organizations the flexibility they need for AI workloads.

Faster storage for AI applications

PowerScale, Dell’s NAS platform, will be integrated with Nvidia GB200 and GB300 NVL72 systems. According to Dell, the PowerScale F710 has achieved Nvidia Cloud Partner certification and can scale to more than 16,000 GPUs. The platform is said to require as little as one-fifth the rack space, use 88 percent fewer network switches, and consume up to 72 percent less energy than comparable solutions.

ObjectScale, which Dell describes as the fastest object platform in the industry, is now also available as a software-defined option on PowerEdge servers. This new variant is said to be up to eight times faster than the previous generation of all-flash object storage.

RDMA support for improved performance

A notable addition is support for S3 over RDMA, which will be available as a tech preview in December. Dell claims this delivers up to 230 percent higher throughput, 80 percent lower latency, and 98 percent lower CPU usage compared to traditional S3 implementations.

For large-scale deployments, ObjectScale promises up to 19 percent higher throughput and 18 percent lower latency when processing small (10 KB) objects. Integration with the AWS S3 API is further expanded with bucket-level compression, designed to give developers and data scientists better tools for storing and moving large volumes of data.
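Bucket-level compression means the storage layer compresses objects transparently, so clients keep reading and writing plain bytes while the bucket stores less. The sketch below is a toy in-memory illustration of that idea, assuming nothing about Dell's actual implementation; the class and method names are hypothetical.

```python
import gzip

class CompressedBucket:
    """Toy in-memory object bucket that gzips data on put and
    decompresses on get -- the caller never sees compressed bytes."""

    def __init__(self):
        self._store = {}

    def put(self, key: str, data: bytes) -> None:
        # Compress transparently before storing.
        self._store[key] = gzip.compress(data)

    def get(self, key: str) -> bytes:
        # Decompress transparently on read.
        return gzip.decompress(self._store[key])

    def stored_size(self, key: str) -> int:
        # Bytes actually held on the "storage" side.
        return len(self._store[key])
```

For compressible payloads (logs, JSON, CSV), the stored size can be a small fraction of the logical object size, which is where the savings for moving large data volumes come from.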

Search and analysis engines

In addition to the storage updates, Dell is introducing new data engines developed in collaboration with partners such as Elastic and Starburst. The Data Search Engine, built with Elastic, is designed to make it easier to search data using natural language. The system integrates with MetadataIQ, which allows billions of files on PowerScale and ObjectScale to be searched.

Developers can use the engine to build retrieval-augmented generation (RAG) applications with the LangChain framework. The system indexes only files that have changed, which saves compute time while keeping vector databases up to date.
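The change-only indexing described above comes down to remembering a fingerprint per file and re-embedding only files whose fingerprint moved. A minimal sketch of that bookkeeping, using content hashes (the function names and in-memory representation are illustrative assumptions, not Dell's API):

```python
import hashlib

def content_hash(data: bytes) -> str:
    # Stable fingerprint of a file's contents.
    return hashlib.sha256(data).hexdigest()

def files_to_reindex(files, seen_hashes):
    """Given `files` (name -> bytes) and the hashes recorded on the
    previous run, return the names that need re-embedding plus the
    updated hash map to persist for next time."""
    changed, updated = [], dict(seen_hashes)
    for name, data in files.items():
        h = content_hash(data)
        if seen_hashes.get(name) != h:
            changed.append(name)   # new or modified since last run
            updated[name] = h
    return changed, updated
```

On the first run everything is "changed"; afterwards, only edited or newly added files are handed to the embedding pipeline, which is what keeps the vector database current without full re-indexing.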

The Data Analytics Engine, developed in collaboration with Starburst, allows queries across different sources, such as spreadsheets, databases, and cloud data warehouses. A new Agentic Layer is said to convert raw data into usable data products in seconds, automating documentation and generating insights using LLMs.

The various updates will be rolled out in phases. The PowerScale integration with Nvidia GB200 and GB300 NVL72 is now available, while ObjectScale S3 over RDMA will be released as a tech preview in December. The first release of the Data Analytics Engine Agentic Layer and the MCP Server will follow in February 2026. The Data Search Engine and Nvidia cuVS integration will appear in the first half of 2026.

Tip: What exactly is vector search and when should you use it?