The human brain consumes roughly 20 watts. Normally, processors can’t match that level of efficiency, but Innatera can. The secret? A “neuromorphic” chip—a semiconductor that functions much like a human brain does. What does that mean? And what’s to be gained from it?
Sumeet Kumar, CEO of Innatera, explains that the company was founded in 2018 as a spin-off from TU Delft. The team built on years of academic research into more efficient forms of computing, one of which involved mimicking the brain as closely as possible on a silicon chip. In 2018, Kumar’s team spotted a major gap in the market: while sensors were becoming ever smarter, companies were still relying mostly on Nvidia Jetson GPUs for the “intelligent edge.” In practical terms, that meant trying to pack as much data center power as possible into the smallest form factor, an ill fit for the power-constrained environments these devices end up in. Simply scaling down conventional computing, in the end, doesn’t work.
A different approach
By 2020, according to Kumar, the industry finally realized it needed an alternative strategy. Since then, Innatera’s development has accelerated, despite limited resources (as we’ll explore later). The company has now built six generations of processors to tackle the edge computing challenge, and the technology has reached the point where it’s genuinely ready for large-scale production.
What is edge compute?
‘Edge’ is a familiar IT term, but it can mean many different things. SUSE, for example, maintains a whole taxonomy of edge variants, ranging from telco towers (near edge) to retail PCs (far edge) and doorbell sensors (tiny edge). This article focuses on the last category: Innatera builds chips for IT environments with very limited access to power. From here on, we will simply call that application the edge.
Innatera has since grown into a company with around 80 employees spread across 15 countries. Its newly introduced product, the T1, is what the company calls a Spiking Neural Processor (SNP), announced at CES in January of last year. The T1 is always on and draws less than a milliwatt. While conventional processors have cores and threads, the heart of an SNP is made up of 500 “processing elements,” arranged in an “Analog Spiking Neuron Synapse Array.” With a bit of imagination, you could think of it as a tiny brain.
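For readers wondering what a ‘spiking’ processing element actually does, the textbook model is the leaky integrate-and-fire neuron: it accumulates incoming signal, leaks charge over time, and only fires when a threshold is crossed. The sketch below is a generic, digital illustration of that principle, not Innatera’s analog design, whose internals the company hasn’t published.

```python
# Minimal leaky integrate-and-fire (LIF) neuron in plain Python.
# A generic textbook model to illustrate spiking computation; it is NOT
# Innatera's analog implementation, which is not publicly documented.

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron over a sequence of input values.

    Returns a list of 0/1 values: 1 where the neuron fires a spike.
    """
    membrane = 0.0
    spikes = []
    for i in input_current:
        membrane = leak * membrane + i   # integrate input, leak charge
        if membrane >= threshold:        # threshold crossed: emit a spike
            spikes.append(1)
            membrane = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak, steady input produces only sparse spikes; a stronger input fires often.
print(lif_neuron([0.3] * 10))   # -> [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
print(lif_neuron([0.9] * 10))   # -> [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

The point of the model is that nothing downstream has to do any work until a spike occurs, which is where the claimed energy savings come from.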
The applications vary greatly, but they share one major limitation: there’s barely any energy available to perform these tasks with classical compute paradigms. Examples include presence detection for a smart doorbell, touch-free interaction with hardware, always-on audio, and heart rate monitoring. The obvious objection is that all of this has been possible for some time and doesn’t consume that much energy in the grand scheme of things. On top of that, connectivity in many regions is now strong enough that shifting these computations to the cloud is seamless.
That’s all true, but Kumar highlights several concerns. First, there’s a fundamental privacy issue: why expose data from smart doorbells, audio devices, and heart rhythms to potential breaches? Couldn’t it all be processed locally instead? Anyone who tries, however, runs into hard limits on how much compute these small devices can offer. Innatera promises to make simple calculations 10,000 times more efficient and run them locally, keeping data as secure as it would be on an offline PC. To top it off, this also relieves data centers of what would otherwise be an endless stream of edge data to process.
Neuromorphic architecture
On top of that, the architecture of this neural processor is well suited to simpler applications. “In fact, we’ve already built the first robust presence-sensing system that pairs a 60 GHz radar sensor with our neuromorphic AI,” says Kumar. The radar data shows a distinctive pattern when a person is present, and the T1 picks up on it immediately. It doesn’t mistake fluttering birds or leaves for people, a kind of false positive that needlessly consumes a lot of energy with conventional computing. Simply removing the need to filter out those false positives is already a huge benefit. According to Kumar, this example shows the practical value of neuromorphic chips. Looking ahead, he envisions a smart doorbell that only turns on its camera when someone is actually there; the T1 detects the signs of human presence without difficulty.
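To make that idea concrete, here is a minimal sketch of the gating pattern: a cheap, always-on detector scores each moment for human presence, and the power-hungry camera only wakes when the score is high. The names and thresholds are hypothetical; Innatera hasn’t published its actual radar pipeline.

```python
# Sketch of the gating pattern described above: a tiny always-on detector
# decides when to wake the power-hungry part of the system (here, a camera).
# All names and values are hypothetical, not Innatera's real pipeline.

def run_doorbell(presence_scores, wake_threshold=0.8):
    """presence_scores: a stream of 0..1 confidences from the always-on
    presence detector (radar plus spiking network, in Innatera's example).
    The camera is only powered for moments where presence is likely."""
    camera_on_moments = 0
    for score in presence_scores:
        if score >= wake_threshold:   # person detected: wake the camera
            camera_on_moments += 1
        # otherwise the camera, and its power draw, stays off
    return camera_on_moments

# Birds, leaves, and empty scenes score low, so the camera rarely wakes.
scores = [0.1, 0.2, 0.1, 0.05, 0.9, 0.95, 0.3, 0.1]
print(run_doorbell(scores), "of", len(scores), "moments woke the camera")
```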
Incidentally, a T1 SNP still includes some standard components. Each chip, for example, contains an “ultra low power” RISC-V CPU that manages data traffic. The T1 also uses spike encoders and decoders to translate between sensor signals and the neuron spikes that give a ‘spiking neural processor’ its name. This allows the processor to interpret otherwise indiscernible patterns, such as the neurons firing when the T1 spots someone’s presence or a change in a heart rhythm.
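Innatera doesn’t specify which encoding the T1’s spike encoders use, but a common and simple scheme is delta modulation: emit a spike only when the incoming signal changes by more than a set threshold. The sketch below illustrates that generic scheme purely as an example of what a spike encoder does.

```python
# Delta (change-based) spike encoding: one common way to turn a sampled
# sensor signal into a spike train. The article does not say which scheme
# the T1's encoders actually use; this is a generic illustration.

def delta_encode(samples, threshold=0.1):
    """Emit +1/-1 spikes when the signal rises/falls by more than
    `threshold` since the last spike; emit 0 otherwise."""
    spikes = []
    reference = samples[0]
    for s in samples[1:]:
        if s - reference >= threshold:
            spikes.append(+1)         # upward change: positive spike
            reference = s
        elif reference - s >= threshold:
            spikes.append(-1)         # downward change: negative spike
            reference = s
        else:
            spikes.append(0)          # no significant change: stay silent
    return spikes

# A flat signal produces no spikes at all, which is exactly why spike-based
# processing can stay idle (and cheap) when nothing is happening.
print(delta_encode([0.0, 0.0, 0.05, 0.3, 0.6, 0.6, 0.2]))  # -> [0, 0, 1, 1, 0, -1]
```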
Scratching the surface
Admittedly, detecting audio or human presence isn’t as spectacular as what the human brain can handle. But a T1 given more than a milliwatt and more than 500 neurons might be capable of impressive feats, scaling up as the power budget and ‘core count’ grow. Who knows what would be possible with a neuromorphic chip drawing the same 20 watts as a human brain? Going by the 10,000x efficiency figure cited earlier, we’d have the power of an Nvidia Blackwell B100 AI chip 10,000 times over, at a 35x reduction in wattage to boot. That figure is in the realm of fantasy as things stand, of course, but it may give pause to anyone currently planning to buy tens of thousands of Nvidia’s latest.
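As a back-of-envelope check on those numbers: the 10,000x efficiency factor is Innatera’s own claim from earlier in this article, while the roughly 700 watts assumed for the B100 below is its published TDP, an outside figure rather than something Kumar cited.

```python
# Back-of-envelope arithmetic behind the comparison above.
# Assumption: the Nvidia B100's TDP is roughly 700 W (its published spec,
# not a figure from the interview). The 10,000x efficiency factor cited
# earlier is Innatera's own claim and comes on top of this power saving.

BRAIN_WATTS = 20    # rough power draw of a human brain
B100_WATTS = 700    # approximate TDP of an Nvidia Blackwell B100

wattage_reduction = B100_WATTS / BRAIN_WATTS
print(f"Wattage reduction: {wattage_reduction:.0f}x")   # -> 35x
```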
“We’re only scratching the surface when it comes to neuromorphic computing,” Kumar says. Indeed, the technology also excels at predictive analytics, noise isolation, and even retraining its own behavior, he points out. Currently, most AI developers rely on pre-trained models that they deploy into the world. The next step, says the Innatera CEO, is to let AI learn as it goes.
Even so, neuromorphic AI on par with human cognition is still a distant goal. After all, we don’t fully understand how our own brains function to begin with. “We regularly mimic natural principles at the level of worms or insects,” Kumar explains. This aligns with the concept of liquid neural networks, a potential challenger, or even successor, to today’s transformer-based AI models. Still, even the equivalent of a simpler organism’s cognition holds enormous potential, Kumar believes, because organisms fundamentally think in similar ways regardless of brain size. Spiking neurons are evolution’s winning design, and it’s not far-fetched to imagine the same could one day be true in the digital realm.
Not much cash (needed)
We were also curious about Innatera’s operating model. Neuromorphic computing extends beyond any single company, and in that context the TU Delft spinoff looks fairly traditional. It’s not the first chip startup to emerge from an academic setting, and it has maintained close ties with the Belgian research institute imec and the Taiwanese chipmaker TSMC since its TU Delft days. The latter serves giants like Apple and Nvidia as well as smaller academic ventures, so Innatera’s team was already familiar with tape-outs (producing physical chips) at TSMC when it spun off in 2018.
Even after six generations, Innatera’s chips still use TSMC’s 28-nanometer node. For perspective, AMD’s most powerful GPU in 2011 used that same process. Meanwhile, industry leaders have moved on to 5, 4 and 3 nanometers, fitting more transistors on the same die size for higher performance. In the past, a firm like Innatera might have had to compete for 28-nanometer capacity with car manufacturers, who need simpler chips in large volumes, but even automakers are transitioning to newer process technologies. Today, 28 nanometers is considered a mature node: efficient, suited to simpler applications, and priced like a commodity.
TSMC itself seems to be learning from the new ideas and questions Innatera brings, even on this older process node. The startup clearly left an impression: during an official presentation in mid-2024, TSMC highlighted neuromorphic computing as a distinct and forward-looking application. Thanks to its unique approach, Innatera remains competitive at 28 nanometers, even against others working on smaller nodes.
Sticking with 28 nanometers also keeps costs down, which is just as well, since Innatera hasn’t raised the enormous sums of cash some of its peers have. Over the course of 2024, it secured nearly 20 million euros. Kumar explains: “What sets us apart in the chip industry is that U.S. companies often spend 100 to 200 million dollars to get their first chip into customers’ hands. We did it with just 5 million euros back in 2020.” In other words, efficiency isn’t just about computing; it applies to the budget as well.
European strength
Finally, let’s consider Innatera’s distinctively European character. Its innovations might not have happened, or might have happened elsewhere, if TU Delft hadn’t been such a magnet for international talent and expertise. Kumar emphasizes that deep-tech investments in Europe should build on this strength: invention and innovation. “Manufacturing is nice to have,” he says, “but you need an IP ecosystem as well.” Europe has long laid that foundation and can continue expanding it. As for TSMC’s current plan to build factories for older nodes in Europe, that may be less of a limitation than some believe.
Naturally, questions remain. Even if Innatera’s approach proves correct, larger players like Nvidia, Apple, AMD, and Intel have the resources to integrate neuromorphic computing into their own products. The industry is already focused on the energy costs of AI, and if the power consumption of edge devices becomes a widespread problem, major chipmakers could roll out their own solutions. Still, Innatera could gain a first-mover advantage if customers see the benefits of neuromorphic computing today. Makers of smart sensors will be able to order mass-produced T1s in the coming year, a serious commitment for Innatera. From here, this architecture may yet emerge as a real contender to classical ways of computing.