SpiNNcloud Systems: Shaping the Future of Large-Scale, Real-Time AI

As more data and more sophisticated data analysis methods have become available, machine learning has emerged as the dominant force in artificial intelligence research. However, neural networks remain a vague analogy rather than an actual implementation of the human brain – with artificial neurons feeding into each other and weights and biases controlling the flow of information.

But what if we could build and understand an actual brain? This is what the Human Brain Project set out to explore in 2013 – with more than one billion euros in EU research funding. One of its projects is SpiNNaker – building a cognitive computer, a ‘brain’ encoded in hardware design and electronic integrated circuits. Whereas the first version, SpiNNaker 1, was built in Manchester, the second version, SpiNNaker 2, has been developed since 2019 in Dresden – a highly parallel supercomputer built from chips with 152 ARM cores each, capable of running various types of neural networks – from deep neural networks to spiking neural networks and symbolic AI.

Founded by Christian Mayr, Christian Eichhorn, Matthias Lohrmann, and Hector Gonzalez, the startup SpiNNcloud Systems is on a mission to transfer the research findings from the SpiNNaker 2 project into applications – building an even larger SpiNNaker supercomputer aiming at 5 million ARM cores and 10x the simulation capacity. It will serve as an infrastructure for AI models in computer vision, robotics, or prosthetics, where fast reaction times, very low latency, and excellent energy efficiency are crucial.

Learn more about the future of large-scale, real-time AI from our interview with CEO Christian Eichhorn and Co-CTO Hector Gonzalez:

Why Did You Start SpiNNcloud Systems?

SpiNNcloud Systems is not your typical garage startup where founders quit their jobs and start rapid prototyping. It originated from a research project that has been ongoing for years, whose outcome is the world’s largest brain-like supercomputer – and for us, this is a unique opportunity to be part of something big that very few people can build.

Our goal is to build the infrastructure that other startups can build their AI business on top of. It will provide a unique backend to run your AI applications. And to implement this vision, Christian Mayr, professor at TU Dresden and head of the SpiNNaker 2 research project, brought together a team of entrepreneurs to transfer his research into practice through the startup SpiNNcloud Systems.

We all met Christian Mayr on different occasions: After a decade-long journey in research, Hector met him at a conference in Abu Dhabi. They talked about their work, and Hector decided to join his research group in Dresden and ultimately co-founded SpiNNcloud Systems.

Christian Eichhorn had already guided the tech transfer for a project on amorphous metals back in 2008 and has since gathered a decade of experience in operations. He was subsequently headhunted by Christian Mayr, fell in love with the technology – studying overnight and asking smart questions – and ultimately joined SpiNNcloud Systems as CEO.

How Do Spiking Neural Networks Work?

Like our brain, SpiNNaker is composed of many processing units – each chip carries 152 highly configurable ARM cores whose architecture can adapt to different use cases and stay competitive even after release. And it natively supports various types of neural networks, not only the classic deep neural networks but also symbolic AI and spiking neural networks.

Spiking neural networks are particularly interesting as they mimic the human brain more closely and incorporate the concept of time into their operations: Whereas standard neural networks are continuous, differentiable functions, spiking neural networks use discontinuous spike trains – series of spikes – to feed signals from one neuron to the next, just as the human brain does.
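
To make the difference concrete, here is a minimal, illustrative sketch of a leaky integrate-and-fire neuron – one of the most common spiking-neuron models – in plain Python/NumPy (not SpiNNaker code; all parameter values are made up):

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, simulated in discrete time.
# All parameters are illustrative; this is plain NumPy, not SpiNNaker code.
dt = 1.0          # time step in ms
tau = 20.0        # membrane time constant in ms
v_thresh = 1.0    # spike threshold
v_reset = 0.0     # reset potential after a spike

rng = np.random.default_rng(0)
input_current = rng.uniform(0.0, 0.12, size=200)  # random input over 200 ms

v = 0.0
spike_train = np.zeros_like(input_current)

for t, i_in in enumerate(input_current):
    # Leaky integration: the membrane potential decays and accumulates input.
    v += dt / tau * (-v) + i_in
    if v >= v_thresh:
        spike_train[t] = 1.0  # emit a discrete spike event
        v = v_reset           # reset, unlike a continuous activation function

print("spike times (ms):", np.nonzero(spike_train)[0])
```

The output of such a neuron is a sequence of discrete events in time rather than a continuous activation value – which is exactly what makes spiking networks both biologically plausible and hard to differentiate.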

But this makes them hard to train: spike trains are not differentiable, so standard backpropagation doesn’t work. Traditionally, you would either train a deep neural network first and then convert it to a spiking neural network, or replace the spike function with a differentiable approximation and train with surrogate gradients.
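
The surrogate-gradient trick can be sketched in a few lines (illustrative only, not the SpiNNaker toolchain): the forward pass keeps the true, non-differentiable spike function, while the backward pass pretends its derivative is a smooth surrogate.

```python
import numpy as np

def spike_forward(v, v_thresh=1.0):
    # Heaviside step: a spike is emitted if the membrane potential crosses threshold.
    return (v >= v_thresh).astype(float)

def spike_surrogate_grad(v, v_thresh=1.0, beta=10.0):
    # "Fast sigmoid" surrogate for the derivative of the step function,
    # used only in the backward pass (the true derivative is zero almost everywhere).
    x = beta * (v - v_thresh)
    return beta / (1.0 + np.abs(x)) ** 2

v = np.array([0.2, 0.9, 1.1, 1.6])    # example membrane potentials
spikes = spike_forward(v)              # used in the forward pass
grad_wrt_v = spike_surrogate_grad(v)   # used instead of the true derivative

print(spikes)       # [0. 0. 1. 1.]
print(grad_wrt_v)   # largest near the threshold, small far away from it
```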

However, last year two researchers from Heidelberg came up with another training method that makes it possible to compute exact gradients for spiking neural networks using adjoint equations and the implicit function theorem. It works like traditional backpropagation but with a different subroutine to calculate the gradients. Timo Wunderlich, one of the two researchers from Heidelberg, has implemented this method to train spiking neural networks on SpiNNaker 2, with the ARM cores parallelizing the processing of the input data and the gradient updates quite efficiently.
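
For the mathematically inclined, the general adjoint-method recipe behind such approaches looks roughly as follows – a textbook sketch of adjoint sensitivity analysis, not the exact equations of the Heidelberg method, which additionally handles the jumps of the adjoint variables at spike times:

```latex
% Textbook adjoint sensitivity analysis (illustrative sketch only).
\begin{align*}
\dot{x}(t) &= f(x(t), \theta), &
L &= \int_0^T \ell(x(t))\,\mathrm{d}t \\
\dot{\lambda}(t) &= -\Big(\tfrac{\partial f}{\partial x}\Big)^{\!\top}\lambda(t)
  - \Big(\tfrac{\partial \ell}{\partial x}\Big)^{\!\top}, &
\lambda(T) &= 0 \\
\frac{\mathrm{d}L}{\mathrm{d}\theta}
  &= \int_0^T \lambda(t)^{\top}\,\frac{\partial f}{\partial \theta}\,\mathrm{d}t
\end{align*}
```

The adjoint state is integrated backwards in time – conceptually the same backward sweep as in backpropagation – and the loss gradient is read off from it, which is why the method slots into the familiar training loop with a different gradient subroutine.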

How Did You Evaluate Your Startup Idea?

The SpiNNaker 2 research project has already rigorously benchmarked our chip technology: it works in an event-based manner, even for non-spiking networks, using only the cores that are needed right now. All the cores operate individually – if only three cores are needed to process some data, the remaining 149 can do something else or shut down completely. Energy consumption is a serious issue in times of climate crisis, and our architecture allows us to be very energy-efficient.
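
Purely as an illustration of the event-driven idea – a hypothetical scheduler in Python, not the actual SpiNNaker 2 software stack or API:

```python
from collections import defaultdict

# Hypothetical sketch: cores only wake up when events are routed to them;
# idle cores can be power-gated. Names and structure are made up for illustration.
NUM_CORES = 152

def route_events(events):
    """Group incoming events by the core responsible for them."""
    per_core = defaultdict(list)
    for core_id, payload in events:
        per_core[core_id].append(payload)
    return per_core

def handle_on_core(core_id, payloads):
    pass  # stand-in for real work: neuron updates, radar processing, etc.

def process_timestep(events):
    per_core = route_events(events)
    active = set(per_core)              # only these cores wake up
    idle = NUM_CORES - len(active)      # the rest can sleep or do other work
    for core_id, payloads in per_core.items():
        handle_on_core(core_id, payloads)
    print(f"{len(active)} cores active, {idle} cores idle/power-gated")

# Example: only three cores receive events in this timestep.
process_timestep([(3, "spike"), (17, "spike"), (42, "sensor-frame")])
```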

Many AI chip startups go after edge applications, and the edge market is getting quite crowded these days. While we could also go to the edge, our main focus is on large-scale systems and the edge-to-cloud continuum for real-time information processing with very low latency and very high energy efficiency.

Every startup needs to solve the chicken-and-egg problem: you need to show traction to attract funding, but you need funding to develop your technology and show traction. That’s why we’re getting started with pilot projects: We are cooperating with ABR to deploy their functional brain model (SPAUN), which no other machine can currently run in real time. SpiNNaker 2 is also part of ongoing industrial pilot projects, e.g. one with BMW for processing radar signals and one with Infineon for IoT applications. The 5-million-core system composed of SpiNNaker 2 chips is also part of the supercomputing infrastructure of Germany’s Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI).

What Advice Would You Give Fellow Deep Tech Founders?

Everything takes longer than expected – plan for it. Especially when you are dealing with research transfer, you need the stamina to negotiate the IP transfer contract with your university. And when you’re dealing with hardware, it’s more challenging than just doing software – you might have large capital expenditure upfront, while everyone wants to invest in scalable software solutions. Yet it’s never too early to talk to investors, get to know them, and build a relationship over time.

And you have to love what you’re doing. Otherwise, it would be crazy to start a startup – you are always on; there are no holidays, weekends, or vacations. But you’ve got to do what you love! And we’re all super excited to work on SpiNNcloud Systems!