Neuromorphic computers prove suitable for supercomputing

Research into alternative computer architectures is getting a new boost thanks to work by Sandia National Laboratories. Scientists are showing that neuromorphic computers, designed to mimic the human brain, are not only useful for AI, but also for complex computational problems that normally run on supercomputers.

This is reported by The Register. Neuromorphic computing differs fundamentally from the classic von Neumann architecture. Instead of a strict separation between memory and processing, these functions are closely intertwined. This limits data transport, a major source of energy consumption in modern computers. The human brain illustrates how efficient such an approach can be.

Until now, neuromorphic chips have mainly been used for neural networks and machine learning. The new research shifts the focus to numerical simulations, a core area within high-performance computing. To this end, the researchers developed software that makes existing mathematical methods suitable for neuromorphic hardware.

Central to this is an algorithm that applies the finite element method to spiking neuromorphic systems. This method is widely used in technical simulations, such as fluid dynamics, materials research, and electromagnetic models. Translating this approach to neuromorphic chips creates an alternative computing platform for simulations.
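To give a sense of what the finite element method computes, here is a minimal classical sketch: solving the 1D Poisson equation -u'' = 1 on [0, 1] with zero boundary values, using linear elements and a direct tridiagonal solve. This is a plain illustrative baseline of the method itself, not Sandia's spiking neuromorphic formulation; all names and parameters are this sketch's own.

```python
def solve_poisson_fem(n_elements=64):
    """Linear finite elements for -u'' = 1 on [0,1], u(0) = u(1) = 0.

    Assembles the standard tridiagonal stiffness system and solves it
    with the Thomas algorithm. Illustrative only.
    """
    n = n_elements
    h = 1.0 / n
    m = n - 1  # number of interior nodes
    # Stiffness matrix for linear elements: 2/h on the diagonal,
    # -1/h on the off-diagonals. Load vector: h per node for f = 1.
    a = [-1.0 / h] * m  # sub-diagonal
    b = [2.0 / h] * m   # diagonal
    c = [-1.0 / h] * m  # super-diagonal
    d = [h] * m         # right-hand side
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * m
    u[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    # Re-attach the zero boundary values.
    return [0.0] + u + [0.0]

u = solve_poisson_fem(64)
# Exact solution is u(x) = x(1 - x)/2, so u(0.5) = 0.125;
# linear elements are nodally exact for this 1D problem.
print(u[32])
```

The neuromorphic version described in the research maps this kind of sparse linear-algebra structure onto networks of spiking neurons, where matrix entries become connection weights; the sketch above only shows the conventional starting point.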

The experiments were conducted on systems built around Intel’s Loihi 2 neuromorphic chips. These chips are designed for massively parallel processing with low energy consumption. According to Sandia’s measurements, the systems deliver higher efficiency per watt than modern GPU architectures from suppliers such as Nvidia.

An important result is that performance scales well with the number of cores: as the core count increases, computing time decreases almost linearly. This indicates that this type of hardware may be suitable for large-scale parallel computations, provided that the software is tailored to it.
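"Almost linear" scaling is usually quantified via parallel efficiency, E = T1 / (p * Tp), where T1 is the single-core runtime and Tp the runtime on p cores; perfect linear scaling gives E = 1. The runtimes below are illustrative placeholders, not Sandia's measurements.

```python
def parallel_efficiency(t1, tp, p):
    """Parallel efficiency: 1.0 means perfectly linear scaling."""
    return t1 / (p * tp)

# Hypothetical runtimes (seconds) as the core count doubles.
runs = [(1, 100.0), (2, 52.0), (4, 27.0), (8, 14.0)]
for p, tp in runs:
    print(f"{p} cores: efficiency {parallel_efficiency(100.0, tp, p):.2f}")
```

An efficiency that stays close to 1.0 as cores are added, as in this made-up series, is what "computing time decreases almost linearly" amounts to in practice.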

Existing HPC software not directly usable

Programmability remains a major bottleneck in neuromorphic systems. Traditional HPC software is not directly usable and often requires new algorithms. According to the researchers, their approach lowers this threshold by allowing existing numerical models to be used with limited modifications.

The current results are primarily intended as a technical demonstration, not as a direct replacement for existing supercomputers. Nevertheless, the researchers see potential for further development, especially with a shift from digital to analog neuromorphic systems, which could further increase efficiency.

At the same time, the playing field remains in flux. In addition to neuromorphic computing, machine learning is also being explored as an accelerator for classical simulations. Whether neuromorphic hardware will ultimately play a dominant role alongside or instead of GPUs remains uncertain, but the research shows that the future of high-performance computing will likely consist of multiple specialized architectures.

For IT and infrastructure professionals, this emphasizes that energy efficiency and architecture choice are becoming increasingly important. Neuromorphic computers are thus moving from experimental technology to a serious platform for specific HPC and data center workloads.