Innatera: Shaping the Future of Neuromorphic Computing for the Sensor Edge

From smart thermostats in our homes to the latest wearables on our wrists, sensors have become ubiquitous, generating a staggering amount of complex data.

Traditionally, this data has been sent to the cloud for processing. However, high latency and mounting costs are driving a shift toward processing sensor data at the edge, where it is collected.

Innatera leverages spiking neural networks to efficiently and quickly process sensor data on the edge. Their mission: to bring intelligence to a billion sensors by 2030. Founded in 2018 by Sumeet Kumar, Amir Zjajo, Rene van Leuken, and Uma Mahesh in Delft, Netherlands, Innatera raised a €15M Series A this spring from Invest-NL Deep Tech Fund, the EIC Fund, MIG Capital, Matterwave Ventures and Delft Enterprises.

Learn more about the future of neuromorphic computing for the sensor edge from our interview with the co-founder and CEO, Sumeet Kumar:

Why Did You Start Innatera?

We started Innatera as a spin-off from Delft University of Technology based on ten years of research, which covered two tracks: one focused on recreating how the brain functions and helping neuroscientists conduct experiments to better understand the human brain. The other track focused on building very energy-efficient microprocessors for mobile processing, e.g., processing videos on a mobile phone.

Around 2018, the European Commission funded several research projects, including two mega-projects dedicated to AI and next-generation compute, aimed, among other things, at achieving Level 3 autonomous driving. However, we soon discovered with our European semiconductor partners that there was a disconnect between the growing number of sensors, which constantly generate data, and the compute resources available on the edge to process all this data.

Sensors are everywhere: in every phone, car, and pretty much every device we use today. They produce large amounts of complex data that are typically pushed to the cloud for analysis. But this takes time and introduces latency, which has become the bottleneck in human-machine interactions. Sending large amounts of data is also prohibitively expensive, so you want to place AI close to the sensor. Yet on the sensor edge, battery capacity is limited, typically allowing only very basic processing.

Doing research at the intersection of neuroscience and energy-efficient processors, we thought there could be a match. We didn’t have a concrete product yet, but we could clearly see those developments as the starting point for a company to figure out how to build a product, which led us to found Innatera and develop spiking neural processors. 

What Are Spiking Neural Networks?

The human brain represents and processes information using discrete events called spikes: brief surges in electrical activity that occur when a neuron fires. If an event is important, a neuron spikes early; this is how correlations are formed, so the timing of spikes carries the information.

This allows the human brain to process information rapidly. When someone throws an object at you, it takes only about 100 ms for your brain to analyze the situation and put your muscles into motion to avoid the object. And even if the object hits you, you might have already raised your arm to deflect it.

Spiking neural networks mimic how the human brain works, using the timing of spikes to identify correlations, i.e., patterns, in data. Whereas traditional neural networks process information continuously, time plays a central role in spiking neural networks. This makes them computationally powerful, allowing them to process data quickly while being 100 times smaller than conventional neural networks.
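The idea of encoding information in spike timing can be illustrated with a toy leaky integrate-and-fire (LIF) neuron. This is a minimal sketch for intuition only; the threshold, leak factor, and discrete time steps are illustrative assumptions, not Innatera's actual neuron model.

```python
# Toy leaky integrate-and-fire (LIF) neuron: stronger input makes the
# neuron fire earlier, so salience is encoded in spike *timing*.
# Parameters are illustrative, not Innatera's circuit model.

def lif_spike_time(input_current, threshold=1.0, leak=0.9, steps=100):
    """Return the first time step at which the neuron spikes, or None."""
    v = 0.0  # membrane potential
    for t in range(steps):
        v = leak * v + input_current  # leak, then integrate the input
        if v >= threshold:            # threshold crossing -> spike
            return t
    return None

# A stronger stimulus produces an earlier spike.
print(lif_spike_time(0.5))  # weak input: spikes after a few steps
print(lif_spike_time(2.0))  # strong input: spikes immediately
```

The key point is that the output is not an activation value but a time: an early spike signals a salient input, which is how correlations between inputs can be detected quickly.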

We have developed the Spiking Neural Processor T1 to emulate spiking neural networks directly in hardware. At its core is a neuron-synapse array, in which analog electronic circuits vary their currents over time and thereby emulate the spiking of neurons. A RISC-V processor handles data acquisition from the sensor and supplies it to the spiking neural processor.

We have built the world’s first neuromorphic microcontroller to serve as an ultra-low-power AI chip next to a sensor, detecting patterns in the sensor data in real time. We can map the weights and biases of a neural network onto our chip and run inference at very low latency, well below a millisecond, with high energy efficiency at the milliwatt level.

How Did You Develop Your Spiking Neural Processor?

We built our brain-inspired processors starting with single neurons, connecting them, and giving them the ability to learn autonomously. The number of neurons grew steadily, and once we had a decent number of them and picked the right parameters, we could see the neurons synchronize and fire repeatedly, all at a constant frequency, similar to the brain waves of a human falling asleep. It was stunning to see this behavior reproduced by an electronic system.

What stood out was not only that our processors could learn on their own but also how quickly they recognized patterns even with relatively few neurons. In 2018, people in the industry were talking about neural networks with millions of neurons. At that time, the devices we had used for our research had thousands of spiking neurons, and we could still process data with similar performance. This difference in scale was mind-boggling and set the ball rolling on our side to build an actual product.  

Over time, more and more industry players realized that to perform AI at scale, they had to build more energy-efficient hardware; they couldn’t realize next-gen AI with existing computing hardware. Some had built chips specialized for AI applications, like Google with its Tensor Processing Units, but in reality, it was still costly and painful to deploy AI models on such customized chips. We asked ourselves: how could we build an even lower-power chip while making it easy for our customers to deploy AI models?

This is crucial for processing continuous sensor data streams. Typically, 95% of the data is irrelevant, and the remaining 5% contains the relevant patterns. You want to detect those patterns efficiently without burning power the rest of the time.

Most applications today require fixed functionality, so you train and deploy a neural network once and generally don’t have to change it. For some applications, retraining a neural network is important, but those are often not power-constrained. Our chips stand out where power is the constraint, and we can do semi-supervised or unsupervised learning to identify patterns in sensor data streams.

How Do You Train Spiking Neural Networks?

Training spiking neural networks used to be hard: the spikes are discontinuous, so you can’t use backpropagation straightforwardly. In the past, people tried to take conventional neural networks and map them to spiking neural networks. Today, we have better approaches using surrogate gradient descent, training spiking neural networks in an event-driven way, just as they run on our processors.
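The core trick behind surrogate gradients can be sketched without any framework: the spike function is a hard threshold whose true derivative is zero almost everywhere, so the backward pass substitutes a smooth surrogate derivative. The fast-sigmoid surrogate and the slope parameter below are common illustrative choices in the SNN literature, not necessarily what Innatera's toolchain uses.

```python
# Sketch of the surrogate-gradient idea. Forward: a hard threshold
# (spike or no spike). Backward: a smooth stand-in derivative so that
# learning signals can flow through the discontinuity.
# The fast-sigmoid surrogate here is one common illustrative choice.

def spike_forward(v, threshold=1.0):
    """Forward pass: hard threshold, emits 1.0 (spike) or 0.0."""
    return 1.0 if v >= threshold else 0.0

def spike_surrogate_grad(v, threshold=1.0, slope=10.0):
    """Backward pass: derivative of a fast sigmoid centred on the threshold."""
    x = slope * (v - threshold)
    return slope / (1.0 + abs(x)) ** 2

# Near the threshold the surrogate gradient is large, so weights that
# almost cause a spike still receive a learning signal; far from the
# threshold it decays smoothly instead of being exactly zero.
for v in (0.2, 0.95, 1.0, 1.8):
    print(v, spike_forward(v), round(spike_surrogate_grad(v), 4))
```

In a full framework, the forward function would be used as-is and the surrogate derivative would be registered as the custom backward rule, which is exactly what makes standard gradient-based training applicable to spiking networks.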

We have developed a complete software development kit, called Talamo, so you can build spiking neural networks yourself, even in PyTorch. Programming a new chip often requires learning a new language, but we have made ours compatible with the machine-learning frameworks you’re already familiar with. Once you train a model, you can deploy it onto our hardware without having to worry about any of the hardware details.

What Is Next for Innatera?

We have spent the past six years developing our processor and software based on our previous research, and we’re now at a point where our chips will enter volume production by the end of this year. We’ll continue to enhance future generations of the product, taking more inspiration from neuroscience. Our focus has already shifted toward the commercial side, and we’re working hard to get all the nuances right in how AI is deployed at the edge for various applications.

How Did You Evaluate Your Startup Idea?

At the start, we didn’t have a concrete product. After doing a lot of research, it was a mutual discovery process with our customers to find out how our research could solve actual business problems. We talked a lot with customers and focused on where we saw a lot of customer engagement and interest, as well as urgency. 

There are about four billion devices with sensors out there, and wherever you have a sensor, you will need a processor; roughly one processor serves four sensors. So the sensor market itself is massive, and even the markets for individual sensors, like microphones or image sensors, are huge. We think it’s inevitable that the edge-processing market will also become huge. Admittedly, the market is still very early, but even if we nail just one application, this will already be an exciting business.

What Advice Would You Give Fellow Deep Tech Founders?

As an entrepreneur, you will quickly realize that you operate in a highly uncertain environment. It’s okay not to know everything, especially if you’re operating in a new and emerging market. Become comfortable making decisions with little information to support them.

Some of the best qualities you can develop are grit and perseverance—you’ll definitely need them. It’s like when deciding on a PhD or getting married: find a good reason for why you’re doing it. When it’s dark in winter, and you’re feeling like crap, what gets you out of bed? It can be any reason like you want to be rich or a leader. That’s okay. But it has to be a strong reason for yourself.
