InfinityQ: Shaping the Future of Probabilistic Computing for Optimization Problems

Whether it’s finding the optimal route for a last-mile delivery service or figuring out the train schedule after a storm hits – many real-life problems have a tremendous number of possible solutions, and it’s prohibitively expensive to find the best one.

In practice, people try to find an ‘optimal’ solution – the best one reachable with reasonable effort – which is why these are called ‘optimization problems.’ Yet such an optimal solution is rarely the true best, and finding better ones can have a lot of value, as it saves time, fuel, and, ultimately, carbon emissions.

InfinityQ was founded by Aurélie Hélouis in 2020 to use quantum-inspired optimization methods and probabilistic computing to find better solutions for tough optimization problems, with applications ranging from logistics and gaming to the pharmaceutical industry. In 2022, it raised a seed extension round, bringing its total funding to nearly $5M. The round was led by Hevella Capital, iGlobe Partners, and Westcott Investment Group, joined by existing investors Cato Stonex and Louis Vachon.

Learn more about the future of probabilistic computing for optimization problems from our interview with the cofounder and CTO, Saavan Patel:

Why Did You Start InfinityQ?

Like many founders in deep tech, my journey started in graduate school. As part of my Ph.D. thesis, I researched probabilistic computing and how it could be useful in solving optimization problems. So the question becomes, why did I pursue a Ph.D. in this domain?

Our world is inherently probabilistic – you don’t know when scientific breakthroughs will happen, when a competitor will release a product, or when a world-changing event like a pandemic will hit. Our computers today are inherently deterministic – the same input always yields the same output – and they struggle to deal with uncertainty.  

Looking for a better way to model real-world problems under uncertainty led me to dive into probabilistic computing, an emerging research domain. It explores how to handle such problems more efficiently – finding solutions with less computational effort, and thus energy, or even reaching better solutions altogether. After finishing my Ph.D., I joined InfinityQ as a cofounder and CTO, as it seemed like a good opportunity to keep working on probabilistic computing while having a real-world impact at the same time.

What is Probabilistic Computing?

Probabilistic computing is a computational approach that includes uncertainty directly in the hardware or software so that the system can perform calculations that involve ambiguity, incomplete information, or statistical inference.

Unlike traditional computing, which operates on deterministic bits that are either 0 or 1, probabilistic computing uses probabilistic bits, called “p-bits,” that can represent a spectrum of probabilities between 0 and 1. One can simulate them on off-the-shelf computing hardware, like GPUs or FPGAs, and that’s our starting point. Using the right algorithms, we can handle probabilistic workloads on the software side and benefit from GPUs improving every year: as long as Nvidia makes better GPUs, our solution gets better, too.
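To make the p-bit concrete, here is a minimal software sketch (my own illustration, not InfinityQ’s implementation). It follows the common convention in the p-bit literature: a tunable input biases the probability of reading out a 1 through a sigmoid function.

```python
import numpy as np

def sample_p_bit(bias: float, rng: np.random.Generator) -> int:
    """Sample one p-bit: the bias input tilts the probability of reading a 1.

    With zero bias the p-bit behaves like a fair coin; a large positive
    bias pins it near 1, a large negative bias pins it near 0.
    """
    p_one = 1.0 / (1.0 + np.exp(-bias))  # sigmoid activation
    return int(rng.random() < p_one)

rng = np.random.default_rng(42)
for bias in (-4.0, 0.0, 4.0):
    samples = [sample_p_bit(bias, rng) for _ in range(10_000)]
    print(f"bias {bias:+.1f}: fraction of 1s = {np.mean(samples):.3f}")
```

A network of coupled p-bits then produces samples from a joint probability distribution rather than a single deterministic answer.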

In the longer term, our ambition is to use probabilistic computing hardware. Deterministic hardware can only approximate being probabilistic. Thus, having inherently probabilistic microchips by hardware design would greatly improve our performance and change the idea of what a computer is – from flipping bits to producing samples from a distribution. 

Finally, probabilistic computing is inherently connected to quantum mechanics and quantum computing, and many probabilistic techniques, models, and inspirations come from the quantum world. That’s why we call our systems “quantum-inspired.” Quantum mechanics tells us that the physics of our world is not deterministic, and we can learn a lot from this interpretation of the world. On top of this, quantum computing is probabilistic computing in disguise!

When Richard Feynman first proposed the idea of quantum computing for simulating physics, he introduced probabilistic computing as a stepping stone on the way there – quantum computing is essentially probabilistic computing that also allows negative probabilities. Probabilistic methods thus sit somewhere between regular deterministic digital computing and quantum computing.
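Feynman’s “negative probabilities” are, more precisely, quantum amplitudes. A tiny comparison (my own illustration) shows the key difference: classical path probabilities can only add up, while amplitudes of opposite sign can cancel out:

```python
import numpy as np

# Two indistinguishable paths leading to the same outcome.
# Classical probabilistic computing: the path probabilities simply add.
path_probabilities = [0.5, 0.5]
print("classical:", sum(path_probabilities))    # 1.0

# Quantum computing: signed amplitudes add first and are squared afterwards,
# so paths with opposite signs can cancel each other out entirely.
amplitudes = [1 / np.sqrt(2), -1 / np.sqrt(2)]
print("quantum:  ", abs(sum(amplitudes)) ** 2)  # 0.0
```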

How Does Probabilistic Computing Help in Solving Optimization Problems?

Probabilistic computing is particularly useful for modeling and solving problems where information is incomplete, uncertain, or noisy. It allows computers to make decisions, perform computations, and draw conclusions when outcomes are not deterministic but are instead described by probability distributions.

Probabilistic computing will be particularly useful in four areas: quantum computing, machine learning, solving optimization problems, and understanding biological systems like the human brain. With InfinityQ, we’re focusing on using probabilistic computing to solve optimization problems better – we’re faster, more accurate, and able to tackle larger problems compared to established solvers. Also, our cloud platform TitanQ comes with the necessary tools to solve optimization problems without requiring a Ph.D.

Optimization problems are tough due to their vast array of potential solutions. One famous example is the Traveling Salesman Problem, frequently encountered by logistics companies. Imagine a salesman must visit clients in various cities, aiming to determine the most efficient route that visits each city exactly once and returns to the starting point. As the number of cities increases, the number of possible routes grows combinatorially – for n cities, there are (n - 1)!/2 distinct round trips. This explosion in possibilities makes finding the optimal solution increasingly difficult.
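A few lines of Python make that growth tangible:

```python
from math import factorial

# Distinct round trips through n cities (symmetric Traveling Salesman
# Problem): fix the starting city and ignore direction -> (n - 1)! / 2.
for n in (5, 10, 15, 20, 30):
    routes = factorial(n - 1) // 2
    print(f"{n:>2} cities: {routes:.3e} possible routes")
```

Already at 20 cities there are roughly 6 × 10^16 routes, far beyond what brute-force enumeration can check.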

We have demonstrated improvements of up to two orders of magnitude in solving such optimization problems compared to established tools like Gurobi. Our platform is very scalable and can solve problems with over 100,000 variables (e.g., cities in the Traveling Salesman Problem), which is larger than what other tools can handle today. By the end of this year, we may go up to a million variables. These super-large optimization problems are an untouched domain in optimization that we’ll be able to explore.

The intuition behind our approach is this: Think of stochastic gradient descent, a method commonly used in machine learning to find the optimal weights and biases for a neural network. It illustrates the two main ingredients for solving tough optimization problems: the gradient descent part, which follows the slope down to a local minimum, and the stochastic part, which injects randomness to escape local minima and explore the landscape in search of the best one.
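As a toy illustration of those two ingredients (my own sketch, not InfinityQ’s algorithm): on a double-well function, plain gradient descent stays trapped in whichever valley it starts in, while adding slowly annealed noise lets the search hop out of the shallow valley and settle into the deeper one.

```python
import numpy as np

# Double-well landscape: a shallow local minimum near x = +1 and a
# deeper global minimum near x = -1 (the 0.3 * x term tilts the wells).
def f(x):
    return (x**2 - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x**2 - 1.0) + 0.3

def descend(noise_scale: float, steps: int = 20_000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    x = 1.0  # start inside the wrong (shallow) well
    for t in range(steps):
        noise = noise_scale * (1.0 - t / steps)  # anneal the noise to zero
        x -= 0.01 * grad(x) + noise * rng.normal()
    return x

print(f"pure gradient descent: x = {descend(0.0):+.2f}")  # stuck near +1
print(f"noisy descent:         x = {descend(0.3):+.2f}")  # typically ends near -1
```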

Similarly, our approach includes multiple “walkers” that traverse the space of possible solutions for an optimization problem. Some focus on exploring large swaths of the space, while others focus on refining local minima, and the two kinds inform each other to find an optimal solution. What counts as ‘optimal’ depends on the use case, outside requirements, and what people are looking for. Generally, when your search efforts yield diminishing improvements, you know it’s time to stop.
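InfinityQ hasn’t published the details of its walker scheme, so the following is only a generic sketch of that explore/exploit split, combined with the diminishing-returns stopping rule just mentioned, on a standard test landscape (the Rastrigin function); all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x: np.ndarray) -> float:
    """Rastrigin function: a test landscape riddled with local minima;
    its global minimum is 0 at the origin."""
    return float(10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

dim, n_explorers, n_exploiters = 4, 8, 8
best_x = rng.uniform(-5, 5, dim)
best_f = objective(best_x)
history = [best_f]

for iteration in range(5_000):
    # Explorers: long-range jumps anywhere in the search box.
    candidates = [rng.uniform(-5, 5, dim) for _ in range(n_explorers)]
    # Exploiters: small refinements around the best point found so far;
    # this is where the two kinds of walkers inform each other.
    candidates += [best_x + rng.normal(0.0, 0.1, dim) for _ in range(n_exploiters)]
    for x in candidates:
        fx = objective(x)
        if fx < best_f:
            best_x, best_f = x, fx
    history.append(best_f)
    # Diminishing-returns stopping rule: negligible progress over the
    # last 500 iterations means it is time to stop.
    if len(history) > 500 and history[-501] - history[-1] < 1e-9:
        break

print(f"stopped after {iteration + 1} iterations, best value: {best_f:.4f}")
```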

Finally, it’s one thing to solve an optimization problem better and another to convince people that you can actually help them, i.e., to demonstrate the value of your solution. It’s about crossing the chasm between the technology existing and a solution to a business problem existing. It takes quite a bit of effort to show customers how to use our system and get the most out of it.

How Did You Evaluate Your Startup Idea?

We’re focusing initially on the logistics market. Even if you can shave off just a few minutes from a delivery route, this could lead to massive savings overall. The value of a better solution becomes evident rapidly. 

Also, logistics as an industry still uses a lot of legacy systems, sometimes even pen and paper or Excel sheets, to plan out supply chains and delivery routes. Switching from these to a cloud platform is a genuine leap forward for them, so it’s easy for us to create value. As we improve and flesh out our optimization product over time, we can also address other, more tech-savvy industries. 

We’re really excited to benchmark our optimization solutions for larger problems, with a million dense variables, by the summer – an order of magnitude larger than what people could previously solve. This will also open the doors to addressing more complex applications, going beyond just quadratic optimization problems.
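For context, ‘quadratic’ here typically means QUBO (quadratic unconstrained binary optimization), the native input format of most quantum-inspired solvers: minimize E(x) = xᵀQx over binary vectors x. Below is a toy instance solved by brute force purely for illustration; TitanQ’s actual API is not shown, and the matrix values are made up.

```python
import itertools
import numpy as np

# A QUBO asks for the binary vector x minimizing E(x) = x^T Q x.
# Toy instance: diagonal entries reward switching a variable on,
# off-diagonal entries penalize switching on two "neighboring" ones
# (this encodes maximum independent set on a 3-node path graph).
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

best_x, best_e = None, float("inf")
for bits in itertools.product([0, 1], repeat=3):  # brute force: 2^3 states
    x = np.array(bits)
    energy = float(x @ Q @ x)
    if energy < best_e:
        best_x, best_e = x, energy

print(f"optimal assignment: {best_x}, energy: {best_e}")  # [1 0 1], -2.0
```

At the scales quoted above, enumerating all 2^n states is hopeless, which is exactly where probabilistic samplers come in.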

What Advice Would You Give Fellow Deep Tech Founders?

Building a startup is like running a marathon, not a sprint. You’re signing up not just for a two-year journey but, if things work out, for a decade-long journey with your cofounders and investors. Make sure you’re taking care of yourself in the process so you can run the entire marathon, not just the first two kilometers.

Want to Know More?

Learn more about how InfinityQ solves giant optimization problems on their blog at https://www.infinityq.tech/bigger-faster-greener* and learn about their cloud platform from the F.A.Q. at https://www.infinityq.tech/faqs*


*Sponsored links – we greatly appreciate the support from InfinityQ
