Gemesys: Shaping the Future of AI Chips Inspired By The Human Brain
Neural networks are inspired by the human brain, but they remain firmly rooted in the world of zeros and ones and lack the efficiency of their biological counterparts. Training and operating these networks involve significant energy consumption, which is one of the major challenges to realizing genuine artificial intelligence (AI).
Maybe it’s time for a hardware update: Enter Gemesys, founded in 2021 by Dennis Michaelis, Enver Solan, and Moritz Schmidt. Their big goal is to develop an analog AI chip that mimics the information-processing mechanisms of the human brain, with the potential to be 20,000 times more energy-efficient than today’s graphics processors.
With €3M in funding and support from the German government, including the Federal Ministry for Economic Affairs and Climate Action, the EXIST Program, and Projektträger Jülich (Forschungszentrum Jülich GmbH), Gemesys is poised to reshape the future of AI.
Learn more about the future of AI chips inspired by the human brain from our interview with the co-founder and CEO, Dennis Michaelis:
Why Did You Start Gemesys?
When I finished my Ph.D. in electrical engineering, I asked myself: “What is the one thing I should focus my time on?” I love efficiency, so it had to be just one thing. AI seemed like the most interesting technology, even before the recent ChatGPT hype, since it has the potential to help humans build better tools.
The human brain is amazing in that it invented all the things we use every day, like buildings, cars, and computers. What sets humans apart is their ability to create tools, and AI is trying to imitate the best creator of tools that nature has ever produced and become the best tool to build better tools. From autonomous driving, potentially preventing millions of traffic deaths, to finding new vaccines and curing diseases, work on AI has the potential to improve a lot of things.
So, what is holding AI back? Our computers are digital—that is, they shuffle around zeros and ones to train and run neural networks by multiplying large matrices and vectors. On the one hand, they were designed to handle complex computations, but machine learning only requires very simple multiply-and-accumulate operations, really just addition and multiplication, but billions of them. On the other hand, a lot of data needs to be moved back and forth between memory and the processor, which is slow and energy-intensive. This is known as the von Neumann bottleneck. Consequently, most of the effort is spent on moving data, not manipulating it.
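To make that point concrete, here is a minimal sketch (illustrative only, not Gemesys code; the function name and sizes are placeholders) of how a single dense layer reduces to multiply-accumulate operations:

```python
# A minimal illustration: a dense neural-network layer boils down to
# multiply-accumulate (MAC) operations, billions of them at real scale.
import numpy as np

def dense_layer(weights, inputs):
    """Compute one layer's output using only multiplies and adds."""
    outputs = np.zeros(weights.shape[0])
    for i in range(weights.shape[0]):         # one output neuron per row
        acc = 0.0
        for j in range(weights.shape[1]):     # one MAC per weight
            acc += weights[i, j] * inputs[j]  # multiply, then accumulate
        outputs[i] = acc
    return outputs

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # 4 neurons, 8 inputs -> 32 MACs
x = rng.standard_normal(8)
print(dense_layer(W, x))          # matches W @ x
```

On a conventional chip, every one of those weights has to be fetched from memory before it can be multiplied, which is exactly where the von Neumann bottleneck bites.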
While machine-learning algorithms mimic the human brain through software, there’s still a huge gap on the hardware side. The human brain works nothing like a digital computer, and digital chips are nowhere near as efficient as the human brain.
Narrowing this gap intrigued me. My technical co-founder, Enver, and I began using the research from our PhDs, during which we shared an office, to design an AI chip that works more like the human brain. To me, building a brain-like AI chip is like building a colony on Mars—one of those great leaps for humanity that nobody has made and that really pushes technology beyond the limits of what is currently possible. We wanted to be part of this, so with Moritz joining as our third co-founder, we founded Gemesys as a startup.
How Does Your AI Chip Work?
Our goal is to mimic the basic mechanisms of the brain through hardware design. The brain runs on carbon, while electronic chips run on silicon, so we will not be replicating the brain exactly, but rather at an abstract level, much as an airplane mimics some aspects of a bird without the feathers and wing beats. Yet a plane can weigh a lot more and travel longer distances than a bird, making it superior to its biological template in certain respects. We expect the same to come true for AI chips, and once we reach what futurists call the technological singularity—loosely speaking, AI becoming smarter than humans—things will start to become really interesting.
To get there, we’re using novel electronic components called memristors to build a brain-like AI chip. A memristor is an electronic component made of a material that changes its resistivity when voltage is applied. It acts as a memory of the voltages that have been applied to it in the past. First postulated by Leon Chua in 1971, the memristor turned out to be another fundamental electronic component alongside resistors, capacitors, and inductors, and it enables a fundamentally new type of electronic circuit: self-organizing circuits.
When you excite a simple electronic circuit containing a memristor with a voltage, the circuit gives a response, and at the same time, the memristor changes its resistivity. So when you apply the same voltage again, you get a different response. The circuit has adapted.
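As a rough illustration (a toy model of my own, not Gemesys device physics), that adaptation can be pictured like this:

```python
# A toy model: the memristor's conductance drifts with every voltage pulse,
# so the same input applied later produces a different response.
class ToyMemristor:
    def __init__(self, conductance=1.0, plasticity=0.1):
        self.g = conductance          # current state, the "memory" of past voltages
        self.plasticity = plasticity  # how strongly a pulse changes the state

    def apply(self, voltage):
        current = self.g * voltage            # the circuit's response (Ohm's law)
        self.g += self.plasticity * voltage   # the element adapts
        return current

m = ToyMemristor()
print(m.apply(1.0))  # 1.0
print(m.apply(1.0))  # 1.1 -- same voltage, different response: the circuit adapted
```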
It is very similar to how synapses work when the human brain learns. Every time a neuron fires and a potential is forwarded over a synapse to another neuron, the synapse becomes stronger—analogous to how the resistivity of a memristor changes. Just as you could draw a circle around a single neuron in the brain, you can draw one around the physical area on our chip where a handful of memristors encode a neuron.
We use this self-organizing behavior to train neural networks. Instead of performing backpropagation, we apply voltages to the memristors to adapt their resistivity (which corresponds to the weights of the neural network) to the required task. That way, we’ll reach the same or better machine-learning performance while drastically reducing the energy consumption required for neural network training. While we focus on training first, we could also leverage our technology for inference.
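A heavily simplified sketch of that idea (my stand-in, not the actual training scheme): the weights live as device conductances and are nudged in place by corrective pulses, here approximated with a simple delta-rule update:

```python
# Sketch: weights are stored as conductances and updated in place, so no
# separate pass over digital memory is needed. A delta-rule update stands in
# for the physical voltage pulses.
import numpy as np

rng = np.random.default_rng(1)
conductances = rng.uniform(0.1, 1.0, size=3)   # the "weights", held on-device

def pulse(conductances, x, target, rate=0.05):
    """Apply a corrective pulse; the device state itself is the updated weight."""
    y = conductances @ x                       # analog multiply-accumulate
    error = target - y
    conductances += rate * error * x           # adaptation happens where the data sits
    return error

x = np.array([0.5, 1.0, -0.3])
for _ in range(50):
    err = pulse(conductances, x, target=0.7)
print(round(float(err), 4))  # the error shrinks toward zero over the pulses
```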
By working with threshold values instead of absolute resistivity values for our memristors, we can use lower precision for the neural network parameters, which makes the chips even more efficient and leads to lower memory requirements for inference and, thus, lower latency. When you work with 2-bit instead of 32-bit numbers, that’s 16 times less information to process per operation.
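A small numerical illustration of the idea (the thresholds here are placeholders I chose, not Gemesys parameters):

```python
# Thresholds map full-precision parameters onto four 2-bit levels, cutting the
# bits per parameter from 32 to 2 -- a factor of 16 less data to move.
import numpy as np

def quantize_2bit(weights, thresholds=(-0.5, 0.0, 0.5)):
    """Bucket each weight into one of four levels (2 bits) via thresholds."""
    return np.digitize(weights, thresholds).astype(np.uint8)  # values 0..3

w = np.array([-0.9, -0.2, 0.1, 0.8], dtype=np.float32)
print(quantize_2bit(w))   # [0 1 2 3]
print(32 / 2)             # 16.0 -> 16x fewer bits per parameter
```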
We have already mitigated a lot of the scientific risk through our PhD work; now we’re working on a breadboard prototype of our chip, with a first MVP due in a few months for the MNIST data set—a standard benchmark for testing the performance of machine learning models. Then our goal is to have the first integrated chip prototype by next year. Alongside the hardware prototypes, we also need to develop our own electronic design automation (EDA) software, since we can’t simply use state-of-the-art EDA tools: designing a chip with memristors requires a different kind of math.
We will start with smaller neural networks for edge applications and, over time, extend to bigger models and use them in data centers. Our big goal is that eventually every device, even portable devices, will be equipped with a Gemesys AI chip. Data should be owned by the people, so they should also have an AI chip to process it locally on their portable devices. We believe we can get there. Just as computers themselves once made it from the research lab to every household, AI chips will make it to every personal device.
How Did You Evaluate Your Startup Idea?
First, semiconductors are incredibly important, and not just because of the recent AI hype. Semiconductors are the currency of today. If you haven’t, read the book “Chip War.” You’ll learn about the history of semiconductors and understand their importance. It blew my mind that more transistors are manufactured every year than all other goods in the history of humanity combined.
Another important aspect is that we’re building a deep-tech startup, not a dating app. Dating apps are quite common, so they face a lot of market risk in whether customers will like them. Upfront, we face a lot of technology risk, but then we can do something that only we can do, and the market risk is low. If the technology works, it’s clear that customers will want more powerful AI chips and use AI everywhere. It will take more upfront investment and more time, more like a seven-year marathon than a two-week sprint, to have a full-fledged product in the market, but then we also have a unique and defensible value proposition.
Last but not least, AI today is not sustainable. The amount of energy consumed in data centers and the associated carbon emissions are mind-boggling. But in the future, if AI is developed correctly, we see a lot of potential for it to contribute to all 17 sustainable development goals (SDGs). That’s what we want to do at Gemesys.
What Advice Would You Give Fellow Deep Tech Founders?
Make sure that you know what the tech risks are and how you can create milestones to mitigate them. Aim to mitigate big risks early on and showcase that the market risk is actually low.
To gather the right talent for your company, practice articulating your vision until you become really good at it. It’s the glue that holds the team together—and your investors, too. As a deep tech startup, you won’t have serious revenue for a long time, so you need a vision about making the world a better place that moves people on an emotional level. Then you need to become a pro at communicating that vision.
Finally, some encouragement to get started with deep tech projects. We’re already really good at research in Germany and Europe, but we also need more entrepreneurs starting deep tech rather than e-commerce startups. Have the courage to engage with research, take it out of the lab, and build a great product. Successful deep-tech startups are still rare in Europe but are crucial to advancing technology based on European values and supporting European tech sovereignty.