NetApp launches EF50 and EF80 for AI and HPC workloads

The new high-performance storage systems are designed for AI training, HPC, and transactional databases in demanding environments. The EF50 and EF80 deliver over 110 GBps of read throughput and 55 GBps of write throughput, a 250 percent increase over the previous generation.

With a power efficiency of 63.7 GBps per kilowatt and 1.5 petabytes of storage in a 2U chassis, the hardware offers high rack density without introducing additional management complexity.
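The stated figures allow a quick back-of-the-envelope check of the implied power draw, assuming the 63.7 GBps-per-kilowatt rating refers to the peak read throughput:

```python
# Rough sanity check of the stated efficiency figure.
# Assumption: the 63.7 GBps/kW rating is measured against read throughput.
read_throughput_gbps = 110.0   # stated read throughput, GBps
efficiency_gbps_per_kw = 63.7  # stated power efficiency, GBps per kW

implied_power_kw = read_throughput_gbps / efficiency_gbps_per_kw
print(f"Implied power draw at peak read: {implied_power_kw:.2f} kW")
# Roughly 1.73 kW for the full 2U system under this assumption.
```

Under that assumption, a full 2U system would draw on the order of 1.7 kW at peak read throughput.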

“As businesses contend with ever-increasing data volumes and performance-intensive applications such as AI model training, AI inferencing and high-performance computing, they need infrastructure that delivers speed, scalability and efficiency without added complexity,” said Sandeep Singh, SVP and General Manager of Enterprise Storage at NetApp.

The EF50 and EF80 work with the Lustre and BeeGFS parallel file systems. This makes the systems immediately deployable as high-performance scratch space for HPC simulations, ensuring GPUs remain optimally utilized.

EF-Series in the broader NetApp portfolio

The EF-Series plays a different role than NetApp’s other product lines. While AFF and ASA systems offer unified file, block, and object storage, the EF-Series focuses exclusively on block storage for maximum performance. NetApp previously expanded its block storage portfolio with the ASA A20, A30, and A50. The EF-Series complements this for workloads where raw throughput is paramount.

In October 2025, NetApp introduced the NetApp AFX, a disaggregated storage solution designed for AI inference workloads. The new EF50 and EF80 target an additional category: high-throughput environments that do not require unified storage but do require maximum bandwidth for GPU clusters. Sovereign AI clouds are also among the listed applications.