Intel and Baidu jointly develop Nervana chip for AI training


Intel is working on a Nervana chip dedicated to training AI algorithms, and is partnering with Chinese cloud giant Baidu on its development.

Intel is expanding its portfolio of Nervana chips. Nervana processors are chips designed specifically for AI algorithms, named after Nervana Systems, which Intel acquired in 2016. At the beginning of this year Intel launched the Nervana Neural Network Processor for Inference (NNP-I); now the chip giant has announced that a variant for training is in the making.

For the development of this NNP-T chip, Intel is partnering with Chinese cloud specialist Baidu. The chip will be optimized for Baidu's PaddlePaddle training framework, so hardware and software are being developed in close concert. The accelerator chip will also integrate seamlessly with the Xeon Scalable servers on which the training framework already runs today.

There is a difference between training an algorithm to do certain things (such as image recognition) and using that acquired knowledge afterwards in applications. During training, an algorithm has to process large amounts of labelled data, looking for patterns and characteristics that serve as a guide, for example, to distinguish cats from dogs in a photograph. Once an AI algorithm has learned the difference, it can be built into applications that require far less processing power. Such workloads no longer fall under training but under inference.
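The training-versus-inference split described above can be illustrated with a deliberately tiny sketch (this is not Intel or Baidu code, just a toy one-dimensional classifier): "training" scans every labelled example to fit a single threshold parameter, while "inference" afterwards is one cheap comparison using that learned parameter.

```python
# Toy illustration of the training/inference split.
# Training is the expensive phase: it touches every labelled sample.
# Inference is the cheap phase: it only uses the learned parameter.

def train(samples):
    """Scan all labelled (feature, class) pairs and pick the threshold
    that classifies the most samples correctly."""
    best_t, best_correct = 0.0, -1
    for t in sorted(x for x, _ in samples):
        correct = sum((x >= t) == bool(label) for x, label in samples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

def infer(threshold, x):
    """Apply the learned threshold to a new input: a single comparison."""
    return 1 if x >= threshold else 0

# Labelled training data: (feature, class)
data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
t = train(data)       # heavy: iterates over the whole dataset
print(infer(t, 0.8))  # light: one comparison
```

Real training workloads repeat passes like this over millions of samples and parameters, which is why they benefit from dedicated silicon such as the NNP-T, while the trained model can run on far more modest hardware.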

Shifting up a gear

AI workloads increasingly run on custom-built accelerator chips. These usually work in tandem with classic processors to carry out specific AI calculations as quickly and efficiently as possible. The field has become important enough that it is now economically viable to bring AI chips to market on a large scale.
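The "in tandem" arrangement can be sketched in a few lines (the registry and driver hook here are hypothetical, purely for illustration): the host CPU runs the program and falls back to its own implementation of a heavy kernel, but dispatches to an accelerator's version whenever a driver has registered one.

```python
# Illustrative sketch of CPU/accelerator tandem dispatch.
# ACCELERATORS is a hypothetical registry that a device driver would fill in;
# without an accelerator present, the plain CPU fallback is used.

def matmul_cpu(a, b):
    """Fallback: naive matrix multiply on the host CPU."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

ACCELERATORS = {}  # kernel name -> accelerated implementation

def matmul(a, b):
    """Dispatch: use the accelerator's kernel if one is registered,
    otherwise run the computation on the CPU."""
    kernel = ACCELERATORS.get("matmul", matmul_cpu)
    return kernel(a, b)

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```

Real frameworks such as PaddlePaddle do something analogous at much larger scale: the control flow stays on Xeon-class CPUs while the dense, parallel tensor operations are handed to dedicated hardware.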

Intel is neither the first nor the only player. Google has its own AI processors in the Tensor Processing Unit, Huawei has been putting AI hardware into its phones for some time, and MediaTek is working on AI chips for IoT applications. Nvidia, of course, also invests heavily in AI components: the whole accelerator trend started with classic graphics cards, whose large parallel computing capacity was used to assist CPUs.

This news article was automatically translated from Dutch to give a head start. All news articles after September 1, 2019 are written in native English and NOT translated. All our background stories are written in native English as well. For more information read our launch article.