Lumai: Shaping the Future of Optical Computing for Artificial Intelligence

In a world where machine learning pushes the boundaries of what’s possible, the thirst for more computing power has become insatiable. Yet the miniaturization of electronic components is approaching physical limits and delivering only diminishing returns.

With the recent surge in AI applications, traditional electronic processors are struggling to keep up. Optical computing may offer a way forward: by using light instead of electrons both to carry information and to perform calculations, it promises a new paradigm of fast, energy-efficient, high-bandwidth computing.

Lumai was founded by Tim Weil, Xianxin Guo, Alex Lvovsky, Thomas Barrett, and James Spall in 2022 to develop optical AI processors that use light and optical components to implement neural networks directly in hardware. What sets Lumai apart is its ground-breaking 3D optics approach, a unique technique that allows the company to perform giant matrix multiplications at breathtaking speeds. At its inception, Lumai raised a £1.75M seed round from IP Group plc and Runa Capital.

Learn more about the future of optical computing for artificial intelligence from our interview with James Spall, one of Lumai’s co-founders and a co-developer of its core technology.

Why Did You Start Lumai?

During my Ph.D. research, I was looking into new forms of computing, and I found optical computing for AI particularly fascinating. On the one hand, using light to perform calculations sounded like sci-fi; on the other hand, the benefits were clear, and the impact was much nearer-term and tangible than something like quantum computing.

At the same time, it was clear that the processing demands of AI were increasing at such a rate that new approaches were needed, and that optical compute could solve this problem. The timing was perfect: had we been a few years earlier, neither the technology nor the market demand would have been there, but now the growth of AI made the potential of our technology obvious. We founded Lumai to turn our many years of research into a product that could have a huge impact, developing incredibly fast and energy-efficient AI processors.

How Can Optical Computing Aid Artificial Intelligence?

We’re building a new type of computing hardware that can massively accelerate matrix multiplication – the mathematical operation at the core of almost all modern-day machine learning. We aim to create the fastest, most energy-efficient processor for AI inference.
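To make that concrete, a fully connected neural-network layer is essentially one such matrix multiplication. The sketch below is a minimal NumPy illustration of our own, not Lumai code: it runs a single layer’s forward pass and counts the multiply-accumulate (MAC) operations an accelerator would have to perform.

```python
import numpy as np

# A single fully connected layer: y = W @ x + b.
# For n_in inputs and n_out outputs, the matrix-vector product alone
# costs n_in * n_out multiply-accumulate (MAC) operations.
n_in, n_out = 4096, 4096
rng = np.random.default_rng(0)

W = rng.standard_normal((n_out, n_in))  # weight matrix
b = rng.standard_normal(n_out)          # bias vector
x = rng.standard_normal(n_in)           # input activations

y = W @ x + b  # the matrix multiplication dominates the layer's cost

print(f"MACs for this layer: {n_in * n_out:,}")  # 16,777,216
```

Stacks of such layers – and the attention blocks in transformers – are all built from the same operation, which is why accelerating matrix multiplication accelerates nearly everything.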

While digital processors shuffle around zeros and ones, analog processors – like the one we’re building – use continuous physical quantities for computation. This makes them particularly well suited to AI applications: you don’t need the arbitrarily high precision that zeros and ones can provide, and can instead operate at lower precision.

Still, we had to overcome two main challenges: managing the conversion from digital to analog signals and back, and meeting the minimum precision typically required for AI inference – equivalent to INT8 digital computing. The advantages over electronic processors, however, are outstanding. Optical clock speeds are much higher, the energy consumption is much lower, and the bandwidth can be much greater thanks to new degrees of freedom like wavelength multiplexing – using lasers with multiple wavelengths concurrently within the same processor for parallel computing.
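To give a rough feel for what “INT8-equivalent precision” means in practice, here is a minimal NumPy sketch of our own (not Lumai’s calibration scheme): it quantizes both operands of a matrix-vector product to signed 8-bit integers and measures how far the result drifts from full floating-point precision.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization to signed 8-bit integers."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)
x = rng.standard_normal(512).astype(np.float32)

Wq, w_scale = quantize_int8(W)
xq, x_scale = quantize_int8(x)

# Accumulate in int32 (as INT8 digital hardware does), then rescale.
y_int8 = (Wq.astype(np.int32) @ xq.astype(np.int32)) * (w_scale * x_scale)
y_fp32 = W @ x

rel_err = np.linalg.norm(y_int8 - y_fp32) / np.linalg.norm(y_fp32)
print(f"Relative error of the INT8 matmul: {rel_err:.4f}")  # on the order of 1%
```

For most inference workloads an error at this level is negligible, which is why INT8 has become a common precision target for inference accelerators.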

However, these benefits of optical computing are only unlocked using our unique 3D optics approach, i.e., using lasers that propagate in the 3D volume of space. We encode numbers in their intensity and implement summation operations by combining different laser beams together so that their intensities add up. Multiplication operations are implemented through a pixelated display, where each pixel has a different transmissivity – multiplying a laser’s original intensity by its transmissivity to yield the new intensity.
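As a toy model of that scheme – a simulation written purely for illustration, not Lumai’s actual optics – each matrix element becomes a pixel transmissivity, each input number becomes a beam intensity, and combining the transmitted beams along each row reproduces a matrix-vector product. Note that intensities and transmissivities are non-negative, so this sketch only covers non-negative numbers; signed values need additional encoding tricks.

```python
import numpy as np

# Toy simulation of intensity-based optical matrix-vector multiplication.
rng = np.random.default_rng(0)

n_out, n_in = 8, 8
T = rng.uniform(0.0, 1.0, size=(n_out, n_in))  # pixel transmissivities = matrix
x = rng.uniform(0.0, 1.0, size=n_in)           # input beam intensities = vector

# Each input beam is fanned out across one column of pixels; every pixel
# multiplies the incoming intensity by its transmissivity...
transmitted = T * x[np.newaxis, :]

# ...and the transmitted beams along each row are combined, so their
# intensities add up – exactly the summation in a matrix-vector product.
y_optical = transmitted.sum(axis=1)

assert np.allclose(y_optical, T @ x)  # matches the digital result
print(y_optical)
```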

While we speed up matrix multiplications optically, we still use electronics for everything else and to integrate with existing infrastructure. Converting optical and electronic signals back and forth consumes energy, so to make it worthwhile, we need to do as much processing in the optical domain as possible, which barely costs any energy in comparison. 
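A back-of-envelope calculation shows why scale makes the conversions worthwhile. The energy figures below are placeholders chosen for illustration only – they are not Lumai’s measured numbers – but the shape of the argument holds: conversions are paid per value at the edges of the processor, while optical operations grow with the square of the matrix size.

```python
# Amortizing conversion energy over an N x N matrix-vector product.
# Both per-operation energies are illustrative placeholders, not measurements.
E_CONVERT_PJ = 1.0        # assumed energy per DAC/ADC conversion (pJ)
E_OPTICAL_MAC_PJ = 0.001  # assumed energy per optical MAC (pJ)

for n in (64, 512, 4096):
    conversions = 2 * n  # n inputs converted in, n outputs converted out
    macs = n * n         # optical multiply-accumulates performed
    total_pj = conversions * E_CONVERT_PJ + macs * E_OPTICAL_MAC_PJ
    print(f"N={n:5d}: effective energy per MAC = {total_pj / macs:.4f} pJ")

# As N grows, the effective energy per MAC approaches the tiny optical cost:
# N=64 -> 0.0323 pJ, N=512 -> 0.0049 pJ, N=4096 -> 0.0015 pJ.
```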

The solution to this problem is scaling up the system with 3D optics, using the third spatial dimension to multiply huge matrices. In integrated photonic chips, precision typically diminishes as you scale up the chip, because light passes through each additional component in series and accumulates noise as it does so. In contrast, we scale the number of components – pixels in the display – into the third dimension, so that all beams of light are processed independently and in parallel. Processing millions of numbers simultaneously makes the electronic-optical conversions worthwhile whilst allowing the numerical precision to be maintained.
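The contrast can be seen in a small Monte Carlo experiment. The noise model below is our own simplification, with an arbitrary 1% per-component noise figure: in a serial pipeline every component a value passes through multiplies in a little extra noise, whereas in the parallel 3D layout each value traverses a single component no matter how large the matrix grows.

```python
import numpy as np

# Simplified noise model: each component multiplies a value by (1 + eps),
# with eps ~ N(0, sigma). The 1% sigma and the component counts are
# arbitrary choices for illustration.
rng = np.random.default_rng(0)
sigma = 0.01     # per-component relative noise
trials = 20_000

for k in (1, 16, 64, 256):
    # Serial: a value traverses k components, accumulating noise each time.
    serial = np.prod(1 + sigma * rng.standard_normal((trials, k)), axis=1)
    # Parallel (3D optics): one component per value, regardless of scale.
    parallel = 1 + sigma * rng.standard_normal(trials)
    print(f"k={k:4d}: serial error {serial.std():.4f}, "
          f"parallel error {parallel.std():.4f}")

# The serial error grows roughly as sigma * sqrt(k); the parallel error
# stays flat at sigma.
```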

Light is a great carrier of information and is used for data communications over ever-shorter distances. This is, for example, one way to address the von Neumann bottleneck and provide a fast way to transfer data between a processor and memory. In the future, the whole computing stack will be optical – processing, interconnecting, and networking – and we will be part of it!

How Did You Evaluate Your Startup Idea?

We started envisioning what the next decade in computing would look like. The headlines proclaimed that Moore’s Law was dead, but processors kept getting faster anyway. We questioned all assumptions: Is the rate of advancement of electronics really declining? Do people actually need faster AI processors? The answers to both turned out to be yes.

Semiconductor manufacturing has reached a point where further miniaturization yields diminishing returns and smaller speedups. AI workloads are still getting faster as people adopt ideas such as sparse matrices, smarter algorithms, and new data formats – but these deliver only incremental benefits. On the demand side, machine learning has turned out to be even bigger than many expected only a few years ago – quite literally, as models continue to grow in parameter count in order to get better. So, yes, we’ll need better and faster AI processors.

At Lumai, we’re mainly addressing data centers, as that’s where the fastest computing is needed. At the edge, other factors matter too, such as the limited power supply from a battery in a mobile device. Data centers don’t have that particular limitation – however, as processing requirements increase, they are hitting both thermal and power limits, to the point where they simply cannot draw more energy. Data centers are also aiming to become more energy efficient, and to achieve this alongside more computing power, it’s vital that they can harness the fastest processors with the smallest energy draw.

We are part of the Intel Ignite program, and this has been fantastic in helping to evaluate our technology and product. It’s a great opportunity to help us elucidate how our optical AI processors could fit into the rest of the digital stack. Working with the Intel Ignite team and getting their support in establishing partnerships with other ecosystem players has been tremendously helpful. 

What Advice Would You Give Fellow Deep Tech Founders?

Identify the skills that you don’t have personally and build a team with a range of skills to complement you. We’re fortunate to have such an amazing team with expertise on both the technical and commercial side, and we’re constantly looking to balance our team’s skills. 
