
A European research team has found a way to use light instead of electricity to power AI processors.

IBM researchers have developed a way to dramatically reduce latency in Artificial Intelligence (AI) systems by using light, rather than electricity, for ultra-fast computing.

The IBM team, along with scientists from the universities of Oxford, Muenster and Exeter, achieved this by using photonic integrated circuits that use light instead of electricity for computing.

The light-based tensor core could be used, among other applications, for autonomous vehicles.

A milestone in processing technology

IBM researcher Abu Sebastian described the milestone in a blog post this week.

“Our team, combined with scientists from the universities of Oxford, Muenster and Exeter as well as from IBM Research has developed a way to dramatically reduce latency in AI systems,” he wrote.

“We’ve done it using photonic integrated circuits that use light instead of electricity for computing. In a recent Nature paper, we detail our combination of photonic processing…demonstrating a photonic tensor core that can perform computations with unprecedented, ultra-low latency and compute density.”

Specifically, the IBM team has built a photonic tensor core. This is a type of processing core that performs sophisticated matrix math, and is particularly suitable for deep learning applications.
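To give a rough sense of what that matrix math looks like, the sketch below is a minimal, purely illustrative NumPy example of the multiply-accumulate workload a tensor core is built to speed up; the matrix sizes and values are invented for illustration and are not taken from IBM's paper.

```python
import numpy as np

# Illustrative only: one dense-layer step, the multiply-accumulate
# workload that tensor cores (electronic or photonic) accelerate.
weights = np.random.rand(4, 3)   # hypothetical 4x3 weight matrix
inputs = np.random.rand(3)       # hypothetical input vector

# Each output element is a sum of element-wise products:
# output[i] = sum_j weights[i, j] * inputs[j]
output = weights @ inputs
print(output)
```

On a conventional chip, each of those multiply-adds is executed electronically; the photonic approach performs them with light.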

IBM’s new photonic tensor core runs computations at an unprecedented processing speed, according to Sebastian. He claims it performs the key computational tasks behind AI models, such as deep neural networks for computer vision, in less than a microsecond.

Performing trillions of operations per second

The light-based tensor core was used to carry out an operation called convolution, which is useful for processing visual data such as images.

“We demonstrated a photonic tensor core that can perform a so-called convolution operation in a single time step,” Sebastian boasts.
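To illustrate what a convolution involves, the snippet below is a minimal NumPy sketch that slides a small filter over a toy image and computes a weighted sum at each position; the image, kernel, and sizes are arbitrary examples, not details from the Nature paper.

```python
import numpy as np

# Illustrative only: a 3x3 edge-detection kernel applied to a toy "image".
image = np.random.rand(5, 5)               # hypothetical 5x5 grayscale image
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])           # a common edge-detection filter

# Slide the kernel over the image; each output pixel is a weighted sum.
# (Deep-learning "convolution" is usually applied without flipping the kernel.)
out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
print(out)
```

Note that each output pixel here costs many sequential multiply-adds; the claim is that the photonic core completes the whole operation in a single time step.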

“An operation for a neural network usually involves simple addition or multiplication,” he explains. “One neural network can require billions of such operations to process one piece of data, for example an image. We use a measure called TOPS to assess the number of Operations Per Second, in Trillions, that a chip is able to process.”

Using the new light-based tensor core, the IBM team obtained a “whopping” processing speed of two trillion operations per second, or 2 TOPS.
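As a back-of-the-envelope illustration of how a TOPS figure falls out of such numbers, the arithmetic below uses invented values, not measurements from the paper.

```python
# Illustrative arithmetic only; the operation count and timing are made up.
ops_per_image = 4e9        # hypothetical: 4 billion multiply-adds per image
seconds_per_image = 2e-3   # hypothetical: 2 milliseconds per image

ops_per_second = ops_per_image / seconds_per_image  # 2e12 operations/second
tops = ops_per_second / 1e12                        # trillions of ops/second
print(f"{tops:.1f} TOPS")  # -> 2.0 TOPS
```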