Cisco Systems has built a server specifically for artificial intelligence: the UCS C480 ML M5, a four-rack-unit (4U) machine aimed at deep learning workloads. The server should become available later this year.

UCS servers combine compute resources with networking and storage capabilities, as well as management automation software. Inside the Cisco server are two Intel Xeon Scalable processors and eight Nvidia Tesla V100 GPUs.

According to Nvidia, a single V100 already delivers up to 47 times the performance of a traditional CPU on deep learning workloads. The V100 chips in Cisco’s server communicate with each other over NVLink, Nvidia’s high-bandwidth GPU interconnect developed specifically for such systems.

The device can also be equipped with up to 24 hard disks or flash drives; six of those slots support NVMe drives.

First on the market

The server holds considerable potential for Cisco. According to Chirag Dekate, a researcher at Gartner, no other hardware provider yet offers a server box with eight GPUs, which would make Cisco the first. Users of the system will gain the “ability to use deep learning, without introducing new diversity in their data centers”.

To ensure that the server works with the AI services of your choice, Cisco is entering a partnership with Hortonworks. This should enable the machine to run the latest 3.1 release of the Hadoop analytics platform, a version that supports popular deep learning frameworks.

There will also be support for Kubeflow, an open-source tool that lets TensorFlow run on the Kubernetes container orchestration engine. This allows companies to easily shift workloads between different types of environments.

This news article was automatically translated from Dutch to give Techzine.eu a head start. All news articles after September 1, 2019 are written in native English and NOT translated. All our background stories are written in native English as well. For more information read our launch article.