Q.ANT: Shaping the Future of Photonic Computing
As AI models grow, the limits of today’s computing infrastructure are becoming increasingly visible. Power consumption is rising faster than efficiency improvements, cooling already accounts for a significant share of data-center energy, and CMOS scaling is reaching both physical and economic boundaries. The industry faces a dual bottleneck: energy is becoming the dominant cost driver, and transistor-based systems can no longer scale at the pace required by modern AI.
Photonic computing offers an alternative. By using light instead of electricity to perform mathematical operations, photonic systems promise higher performance, lower energy use, and new forms of computation that are difficult to achieve with digital hardware. While many players in the field have shifted toward optical networking and interconnect applications and avoided full optical computing, a few continue to advance the more ambitious vision of photonic processors.
Q.ANT is one of those companies. Founded in 2018 by Dr. Michael Förtsch (CEO), the Stuttgart-based startup is developing photonic processors that compute natively in light. The company has raised more than $80 million from investors, including Cherry Ventures, imec.xpand, and UVC Partners.
What sets Q.ANT apart is its ability to reach 16-bit floating-point precision in a photonic circuit (a milestone essential for AI training and inference) and its deep expertise in tuning the manufacturing processes for Thin-Film Lithium Niobate. These capabilities allow the company to build extremely low-noise analog computing units that operate efficiently at scale.
Last week (mid-November 2025), Q.ANT announced its second-generation Native Processing Unit, NPU 2, featuring enhanced nonlinear processing capabilities and significant improvements in energy efficiency. Learn more about the future of photonic computing from our interview with Q.ANT founder and CEO, Dr. Michael Förtsch:
Note: The terms “photonic” and “optical” describe the same underlying technology, using light to transmit or process information, and are used interchangeably in this article.
What Key Experiences Shaped Your Background Before Starting Q.ANT?
I studied mathematics and physics and earned a PhD in quantum information processing. That academic foundation, combined with several years of industry experience, shaped my professional path.
Editor’s note: Quantum information processing uses the principles of quantum mechanics to encode, process, and manipulate information.
My interest in computing began much earlier, though. I like to say my career started at the age of four, when I received my first computer. I was one of those early digital natives who didn’t just want to use technology but understand how it works. I spent countless hours modifying and experimenting with it, sometimes to the frustration of my parents, as not every optimization went as planned.
Those early experiences taught me an important lesson: progress requires experimentation, and failure is part of the learning process. That mindset continues to guide how I think about science, technology, and innovation today.
How Is Q.ANT Advancing Photonic Computing for the AI Era?
In simple terms, we build servers similar to those you can buy from companies like NVIDIA or AMD. In contrast to those systems, Q.ANT’s technology, which we call Native Computing, operates entirely in the photonic domain. Our first product, the Q.ANT Native Processing Server (NPS), is the first commercial photonic processor. In complex AI workloads, it delivers up to 30 times the energy efficiency of conventional CMOS technologies, helping to reduce both operational costs and the environmental footprint of data centers. The system is fully compatible with existing computing infrastructure and can accelerate demanding workloads, including AI training, inference, physics simulations, and time-series analysis.
Editor’s note: CMOS stands for Complementary Metal-Oxide-Semiconductor, a standard technology used to build most modern microchips, especially in logic and memory devices.
With Native Computing, we aim to address three core challenges of computing:
(1) First, traditional CMOS-based architectures are reaching their physical and economic limits as AI workloads grow exponentially.
(2) Second, the energy demand of AI hardware continues to rise, creating both environmental and financial strain.
(3) Third, today’s chip manufacturing is concentrated in a few regions and dominated by a handful of players.
Q.ANT’s photonic chips can be manufactured using refurbished semiconductor fabrication lines, enabling cost-efficient and decentralized production around the world. We operate our own Thin-Film Lithium Niobate (TFLN) chip pilot line in collaboration with the Institute for Microelectronics Stuttgart (IMS CHIPS) and are already shipping the NPS to selected partners.
Editor’s note: Thin-film Lithium Niobate (TFLN) is a transparent, crystalline material with excellent electro-optic properties. It’s increasingly used in photonic chips for modulating and routing light at high speeds, and is more efficient than traditional silicon in many optical applications.
Why Do You Operate Your Own TFLN Pilot Line?
We operate our own pilot line to accelerate the development of our products. And, on a more technical side, to prevent cross-contamination. Silicon and TFLN cannot be processed in the same equipment, as the materials would contaminate each other. A dedicated line ensures the purity and reliability of our TFLN circuits.
How Is Q.ANT’s Approach Fundamentally Different From Traditional Computing?
The key difference is that our processors do not compute with electricity but with light. This choice has three major implications:
(1) The first is that we have an analog processor. This matters because it opens a richer mathematical space that lets us optimize the performance of the entire compute stack. A digital circuit flips transistors between 0 and 1; that is the math it operates on. We, on the other hand, can perform fully analog functions. And since the goal of the AI industry is to model nature, it makes sense to use a system that behaves more like nature itself. Nature is not digital. Even at the smallest scales, it does not take on zeros and ones but continuous, analog forms. There are continuous functions at work, many of which we do not yet fully understand. With this type of processor, we come closer to how the nature we want to model actually behaves, and can be up to 50 times faster than with digital circuits.
(2) The second difference is that light propagates freely. With electricity, you need to push current through a resistor, which requires energy. With light, you only need to ignite it, and it moves from A to B on its own. This makes our circuits far more energy efficient than those in the digital world.
(3) The third point is that light can be both high-performance and energy-saving, but it does not depend on the extremely small structures found in electronics. We can manufacture our circuits in foundries built in the 1990s. We do not need the latest 3- or 4-nanometer fabs, which are almost all located in Taiwan.
Editor’s note: An analog processor performs continuous mathematical operations using physical signals instead of binary values.
Editor’s note: A processor core is the computational unit inside a processor that performs calculations and runs instructions.
Editor’s note: A circuit is a defined layout of components and pathways that direct how signals move and interact on a chip.
Editor’s note: Transistors are tiny switches that control the flow of electrical signals in a circuit, while resistors are components that limit or regulate the flow of current.
Editor’s note: 3- or 4-nanometer fabs are semiconductor manufacturing facilities capable of producing chips at extremely small feature sizes.
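The difference between digital and analog computation described in point (1) can be sketched in a few lines. The snippet below is a purely illustrative model, not Q.ANT’s architecture: it treats a photonic core as a matrix-vector multiplier whose continuous output carries a small amount of physical noise, in contrast to an exact digital result. The noise level chosen here is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(W, x, noise_sigma=1e-3):
    """Illustrative model of an analog (e.g. photonic) matrix-vector
    product: the result is continuous and carries a small amount of
    physical noise, unlike an exact digital computation. The noise
    model is a stand-in, not a real device characterization."""
    exact = W @ x
    return exact + rng.normal(0.0, noise_sigma, size=exact.shape)

W = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

digital = W @ x                 # exact, bit-for-bit digital result
analog = analog_matvec(W, x)    # continuous result with analog noise

# The analog result tracks the digital one to within the noise floor.
print(np.max(np.abs(analog - digital)))
```

In practice, how useful such an analog result is depends on how low the circuit noise can be kept, which is why low-noise TFLN circuits matter for reaching 16-bit-equivalent precision.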
We call this the liberalization of compute. It breaks the dependency on a few dominant players and gives regions like Europe a new opportunity to become part of the future AI stack. Europe gave up its position in digital chip manufacturing decades ago, when we stopped investing in the next semiconductor nodes that would have kept us at the leading edge. As a result, all progress in foundries and circuit design continued elsewhere. In our opinion, Europe cannot catch up in advanced digital chip manufacturing. That opportunity has already passed. Photonic computing offers us the chance to embark on a new technological path where Europe can play a leading role from the outset.
What Is the Long-Term Product Vision You’re Building Toward?
With the second generation of our photonic processor, the Q.ANT NPU 2, we extended the architecture’s nonlinear processing capabilities. In this generation, nonlinear functions are implemented much more efficiently, enabling us to address more complex AI and scientific workloads. This includes areas like physical AI, advanced robotics, computer vision, industrial intelligence, physics-based simulation, and scientific discovery.
Editor’s note: Nonlinearity, in this context, refers to optical effects where a material’s response is not proportional to the input light, enabling functions such as modulation and frequency conversion.
The system is offered as a 19-inch server solution with an integrated x86 host processor and a Linux operating system, making it a natural fit for existing HPC and data center environments. It is directly deployable.
Editor’s note: A 19-inch server is a standard rack-mounted server format used in data centers, where equipment is designed to fit into 19-inch-wide racks.
Editor’s note: An x86 host processor is a processor based on the widely used x86 instruction set architecture found in most servers, desktops, and laptops.
We demonstrated the NPU 2 at Supercomputing 2025 and showed an image-based learning demo powered by our Photonic Algorithm Library, Q.PAL. In this demo, the photonic processor learned and classified images with fewer parameters and fewer operations than a CPU-based system. The idea was to show how photonic processors can accelerate real workloads inside conventional server architectures.
Within one year, we moved from simple digit recognition to full image classification and image learning. It shows how quickly photonic computing is progressing.
The new generation introduces enhanced analog units optimized specifically for nonlinear network models. These architectural updates build on the improvements mentioned earlier, enabling more efficient training and inference with reduced parameter counts and lower training depth. The server integrates multiple NPU 2 units, making photonic acceleration practical for real deployments.
For applications like manufacturing, logistics, or inspection, this makes a big difference. Photonic processors can execute nonlinear neural networks much more efficiently, which reduces energy consumption and allows more advanced computer-vision systems to run economically. Beyond that, photonics will support the next generation of AI architectures, especially hybrid models that combine statistical reasoning with physical modelling. This is relevant for areas like drug discovery, materials design, or adaptive optimization.
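To make the idea of a nonlinear optical function concrete, the sketch below uses an interferometric-style intensity response (sin²) as the activation in a tiny neural network. This is a generic illustration only: Q.ANT’s actual nonlinear units and the Q.PAL library are not public, so the transfer function, weights, and network shape here are all assumptions.

```python
import numpy as np

def optical_activation(x):
    """Illustrative nonlinear transfer function inspired by an
    interferometric intensity response (sin^2 of the input). It stands
    in for the nonlinear functions a photonic processor can apply
    natively; it is NOT Q.ANT's actual implementation."""
    return np.sin(x) ** 2

# A tiny two-layer network using the optical-style nonlinearity.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((8, 4)) * 0.5   # input layer weights
W2 = rng.standard_normal((1, 8)) * 0.5   # linear readout weights

def forward(x):
    h = optical_activation(W1 @ x)   # nonlinearity applied "in light"
    return W2 @ h                    # linear readout

y = forward(np.array([0.2, -0.1, 0.4, 0.3]))
print(y.shape)  # (1,)
```

The appeal of applying such functions optically is that the nonlinearity comes from the physics of the material rather than from extra digital operations, which is where the parameter and energy savings mentioned above would come from.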
Systems equipped with the NPU 2 are available to order now, with customer shipments planned for the first half of 2026.
How Scalable Is the Manufacturing of Your Photonic Processors?
The fabrication process for lithium niobate circuits closely mirrors that of CMOS manufacturing. It relies on standard semiconductor equipment rather than specialized tools, which makes scaling straightforward.
Process development follows the same sequence as in the silicon industry. You start with a wafer, apply a mask with your circuit layout, etch it into the material, remove the mask, and the circuit is complete. For the experts operating the machines, the workflow feels familiar. The process is demanding, but it is demanding in both worlds. The main differences lie in how the individual steps are tuned, and that is one of Q.ANT’s secret sauces.
Editor’s note: A wafer is a thin slice of semiconductor material on which circuits are fabricated.
Editor’s note: A mask is a patterned template used during lithography to transfer circuit layouts onto a wafer.
Photonic Processors Introduce Significant Packaging and Alignment Challenges. How Are You Approaching These Issues?
This industry is still in the early stages. As with any new technology, scaling feels like the Wild West. It is similar to the early CMOS industry in the 1960s, when many innovators tried to build circuits without shared standards. The CMOS industry today can rely on a highly structured value chain because those standards eventually emerged. In photonics, they do not exist yet, and it will take time until they do.
That is why companies like us have to cover large parts of the value chain ourselves. Packaging is one of those areas. For now, we need to solve how to attach optical fiber to the chip with as little loss as possible. But if you look further ahead, you would not want to rely on fiber coupling at all. In the context of optical computing, the goal is a monolithic circuit where light travels from A to B without ever leaving the material.
Editor’s note: Packaging, in this context, refers to assembling and protecting a chip, including connecting it to power, cooling, and external interfaces.
Editor’s note: Optical fiber is a thin, flexible strand of glass or plastic that guides light signals with low loss.
Editor’s note: A monolithic circuit integrates all optical elements into one continuous piece of material without leaving the substrate.
This is one of the reasons why lithium niobate is such a strong host material. It may not be the absolute best in each category. Some materials have a better electro-optical coefficient, others have stronger nonlinearity. But if you look at the combination of parameters you need to control a photonic circuit for computation, no other material performs consistently well across all of them. Every material interface in a compute stack introduces problems, forces compromises, and requires standards that do not exist yet. Staying within a single material is a significant advantage. With TFLN, this is possible.
Editor’s note: Electro-optical coefficient describes how efficiently a material changes its optical properties in response to an electric field.
This may also be why we progressed so quickly with this material stack. Once we could manufacture circuits in lithium niobate, we could run the entire optical process chain in a single monolithic block, integrating waveguides, modulators, nonlinear interaction zones, and more into a single continuous piece of material.
Editor’s note: Waveguides are structures on a chip that confine and guide light along controlled paths.
Editor’s note: Modulators control how light carries information. They adjust properties such as intensity or phase to encode data in optical signals used for communication or processing.
Editor’s note: Nonlinear interaction zones are regions in a photonic circuit where light undergoes nonlinear optical effects for processing tasks.
While Much of the Photonics Industry Is Shifting From Computing to Networking, You Still Pursue Full Photonic Computing. What Gives You Confidence?
Others stepped away from photonic computing because they failed to meet the minimum requirements for photonic circuits. One of our breakthroughs was understanding the level of numerical precision needed. We were the first company in the world to achieve 16-bit floating-point accuracy in a photonic circuit. This was a prerequisite for training and running AI networks with the necessary reliability.
Editor’s note: 16-bit floating-point accuracy represents numerical values with 16 bits of precision, enabling reliable computation for tasks such as AI training and inference.
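A quick way to see what 16-bit floating point implies numerically is NumPy’s float16 type: with 10 mantissa bits, relative differences smaller than roughly one part in a thousand are rounded away, which sets the precision floor an analog circuit has to match.

```python
import numpy as np

# float16: 1 sign bit, 5 exponent bits, 10 mantissa bits,
# giving about 3 decimal digits of precision.
eps = np.finfo(np.float16).eps   # smallest step above 1.0
print(eps)                       # ~0.001 (2**-10)

# Values closer to 1.0 than eps/2 collapse back onto 1.0.
a = np.float16(1.0) + np.float16(0.0004)
print(a == np.float16(1.0))      # True: the 0.0004 is rounded away
```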
We were also the first company to use TFLN for this type of processor, which allowed us to build extremely low-noise circuits.
Another important factor is persistence. Entering the compute ecosystem means competing with dominant players like NVIDIA, Intel, and AMD, all of which are leaders in digital circuits. When we announced two years ago at a Silicon Valley conference that we were building the first photonic processor, the reaction from most of the established digital industry was clear: “It is going to stay digital, and it is going to stay copper.”
Editor’s note: Bandwidth is the amount of data that can be transferred between systems per second. It affects how efficiently data flows between components, a key factor in data-intensive applications such as AI.
Focusing solely on photonic networking and interconnects is insufficient. If the processor itself does not improve, it does not matter how much data you push through it. Digital processors cannot advance much further. Moore’s observation that shrinking transistors yields better performance has held for decades, but you cannot miniaturize indefinitely.
Even without being a physicist, it is easy to understand that structures cannot shrink forever. We have now reached that limit with 3-nanometer technology. Beyond this, new effects appear that are hard to control, and the economics become unsustainable. A foundry capable of producing at the 3-nanometer node costs around $35 billion; at the 2-nanometer node, the investment is expected to reach $50 to $60 billion. Chips are getting more expensive, and the question is whether customers will accept those prices.
Something has to change. This is where we come in. We are not starting from the beginning. We are already waiting at the finishing line. The industry is now bringing photonic interconnects to digital circuits, and we are standing there, waving and saying, “Hello, we already have a photonic processor.”
How Will Developers Interface With Your Hardware?
That is straightforward in our case. The advantage of our approach holds as long as we stay in the optical domain. As soon as we return to the digital world, we are only as good as the digital ecosystem we inhabit. So we extend the optical compute space as far as possible, even across multiple chiplets.
But optics also has a limitation that is both a disadvantage and an advantage: light lacks memory. To store information, we have to return to digital memory. Once we are in digital memory, we are inside the classical compute stack. For us, it is natural for the interface to be based on memory, because that gives us access to all standardized interfaces.
We use PCI Express and x86-compatible memory controllers, which makes us Linux-compatible from the start. The server system we ship runs a standard Linux distribution with a modified kernel. From the user’s perspective, it behaves like a standard server. Developers can also work in familiar environments. Through the compiler structure we adapted for our systems, we are seamlessly connected to existing programming languages.
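As a rough illustration of such a memory-based interface, the sketch below emulates the pattern of exchanging operands and results through a shared memory region, using an anonymous mmap buffer in place of real device memory. The device path, memory layout, and protocol of Q.ANT’s actual driver are not public; everything here is a generic stand-in for how memory-mapped accelerators are commonly addressed.

```python
import mmap
import struct

# Stand-in for a device's memory-mapped region. On real hardware this
# would be an mmap of a device file exposed by the kernel driver, not
# an anonymous buffer.
region = mmap.mmap(-1, 4096)

# Host writes two operands into the shared region...
region.seek(0)
region.write(struct.pack("<2d", 3.0, 4.0))

# ...the accelerator would read them, compute, and write the result
# back. Here we emulate that step on the host.
region.seek(0)
a, b = struct.unpack("<2d", region.read(16))
region.seek(16)
region.write(struct.pack("<d", a * b))

# Host reads the result back from the same region.
region.seek(16)
(value,) = struct.unpack("<d", region.read(8))
print(value)  # 12.0
```

Because the exchange happens through ordinary memory, every standard interface on top of it (PCI Express, x86 memory controllers, the Linux driver stack) works unchanged, which is the point Dr. Förtsch makes above.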
Editor’s note: PCI Express and x86-compatible memory controllers are standard digital interfaces that connect processors to peripherals and system memory.
Editor’s note: A kernel is the core part of an operating system that manages hardware resources and system operations.
What Advice Would You Give to Fellow Deep Tech Founders?
Today is the best time to be a deep-tech founder in Europe, precisely because the continent missed the last generation of digital cash cows. That gives us freedom: freedom to build what does not exist yet, instead of copying. So here is the advice: Have an attitude. And if you feel that tug to build something radically new, take the leap and do it with full commitment. No ifs, no buts, no safety nets. Find the best people to join your journey. It’s risky and often hard, but it is also the best job in the world.
