
Nvidia reveals compact server for AI tasks

Nvidia has announced the DGX Station A100, a compact server equipped with four A100 GPUs and up to 320GB of HBM2e memory. The machine is intended as a local system for running machine learning workloads.

The server is compact: at 639mm high, 256mm wide and 518mm deep, it is not much larger than a big desktop case. Nvidia says it can be tucked away in any corner of an office or research lab and does not require a specially cooled server room, yet it can still be deployed as a shared server, including remote access for people working from home.

More VRAM

Customers can choose between two versions of the Nvidia A100 GPU. In addition to the existing version of the A100, which is equipped with 40GB of video memory, Nvidia has now also made an 80GB version available. By combining four of those cards in one machine, users have up to 320GB of video memory at their disposal.
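
To software, that aggregate figure appears as four separate devices rather than one 320GB pool. As a minimal sketch of how one might verify the per-GPU and total memory from Python, assuming a machine with PyTorch and CUDA support installed (this is an illustration, not something from Nvidia's announcement):

    import torch

    # Enumerate every visible CUDA device and sum up their memory.
    total_bytes = 0
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        total_bytes += props.total_memory
        print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.0f} GiB")

    # On a DGX Station A100 with four 80GB cards this should
    # report roughly 320GB in total.
    print(f"Total: {total_bytes / 2**30:.0f} GiB")

Spreading a model across all four cards still requires explicit multi-GPU techniques such as data or model parallelism; the memory is not automatically pooled.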

Specifications

The DGX Station A100 runs on an AMD Epyc 7742 processor with 64 Zen 2 cores. The system has 512GB of DDR4 memory, a 1.92TB NVMe SSD as the system drive and an additional 7.68TB U.2 SSD for storage. Two 10Gbit/s connections are available for networking. In total, the system can draw up to 1,500 watts of power.

Powerhouse for AI and ML

Nvidia is clearly marketing its A100 GPU as a powerhouse for artificial intelligence and machine learning. In October, the company bragged about its scores in the MLPerf benchmarks, claiming that a single DGX A100 server with eight A100 GPUs would be almost as fast as a thousand Intel-based servers.