
Google’s new technology accelerates training AI models

Data echoing, a new technique from Google, can dramatically accelerate the early stages of training AI models, reports VentureBeat. That is the finding of scientists at Google Brain, Google’s AI research division, in a recent paper.

AI accelerators such as Intel’s Nervana Neural Network Processor and Google’s Tensor Processing Units are designed to speed up AI training, but the early stages of the training pipeline don’t benefit from the boost this hardware provides. According to the researchers, this gap can be closed with data echoing, a way of reusing the output of an earlier stage of the pipeline. In theory, this results in optimal use of processor power.

Utilizing maximum computing power

Normally, a training pipeline reads the input data, decodes it and shuffles it. The data is then augmented with additional information and divided into batches. The model’s parameters are updated each time a batch is processed, so that the error of the AI system becomes smaller and smaller. With data echoing, however, output is reused after the data has been augmented but before the parameters are updated: computing power that would otherwise sit idle waiting for fresh data is used to repeat the update step on echoed data.
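As a rough illustration of the idea (a minimal sketch, not Google’s implementation; the `echo_factor` parameter, the `aug(...)` stand-in for augmentation and the pipeline shape are all assumptions for demonstration), each augmented example can be repeated a few times before batching, so the fast downstream update step sees more batches per slow upstream read/decode/augment pass:

```python
import random

def data_echoing(examples, echo_factor=2, batch_size=2, seed=0):
    """Sketch of data echoing: repeat each augmented example
    `echo_factor` times before batching, so the (fast) parameter-update
    step runs more often per (slow) upstream read/decode/augment pass."""
    rng = random.Random(seed)
    # Stand-in for the upstream stages: decode + augment each example.
    augmented = [f"aug({x})" for x in examples]
    # Echo: duplicate each augmented example echo_factor times.
    echoed = [x for x in augmented for _ in range(echo_factor)]
    # Shuffle so repeated copies don't appear back-to-back.
    rng.shuffle(echoed)
    # Emit batches; each batch would trigger one parameter update.
    return [echoed[i:i + batch_size]
            for i in range(0, len(echoed), batch_size)]

batches = data_echoing(["a", "b", "c"], echo_factor=2, batch_size=2)
# 3 fresh examples echoed twice -> 6 items -> 3 batches of 2,
# i.e. 3 parameter updates from a single upstream pass.
```

The key design point is where the echo happens: duplicating data after augmentation but before batching (as here) matches the placement described above, where upstream work is reused and only the cheap downstream steps are repeated.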

To test the effectiveness of data echoing, the Google researchers applied the technique to a range of AI tasks: two language-modelling tasks, two image-classification tasks and one object-detection task, with the models trained on open-source datasets. They then compared training time with and without data echoing. In all but one case, data echoing required less fresh training data to reach a given error rate. Thus, when the early stages of the training pipeline form a bottleneck, data echoing can be a way to put the otherwise idle capacity of the later stages to use.

This news article was automatically translated from Dutch to give Techzine.eu a head start. All news articles after September 1, 2019 are written in native English and NOT translated. All our background stories are written in native English as well. For more information read our launch article.