Dotphoton: Shaping the Future of Raw Image Compression for Machine Learning

Image processing plays a pivotal role in machine learning, especially when it comes to computer vision for medical applications. However, the vast amount of measurement data behind each image creates enormous storage requirements.

While it may seem possible to store only the visual representation of an image or compress it using conventional techniques, doing so results in the loss of critical signal information—information machine learning models rely on to make decisions that can mean the difference between life and death for patients.

Dotphoton uses methods from quantum physics to discern signal from noise and compress scientific images without losing signal information, significantly reducing storage requirements. Founded by Eugenia Balysheva and Bruno Sanguinetti, it went through the Creative Destruction Lab and Venture Kick startup programs and received a grant from Innosuisse. In 2020, it raised a seven-figure round from Swiss private investors.

Learn more about the future of raw image compression for machine learning from our interview with the co-founder and CTO, Bruno Sanguinetti: 

Why Did You Start Dotphoton?

As a quantum physicist, I was fascinated by how quantum mechanics could solve real-life problems and translate research into products. 

Quantum physics is the only theory where randomness is inherently part of the theory, and you can actually define how random something is. Before, there was either information or no information. Quantum physics added a third category: the unknown. For the first time, a physics theory established that certain things cannot be known, even in principle, and thus appear random.

Before starting Dotphoton, I worked at ID Quantique, another startup using that inherent randomness to build quantum random number generators. As the company grew and started high-volume production of its devices, the pace of innovation naturally became slower as more focus on reliability and manufacturing was needed.

That’s when I realized machine learning had become a thing and was moving fast, an opportunity I didn’t want to miss. So I quit my job and, together with Eugenia, co-founded Dotphoton—using methods from quantum physics to compress scientific images.

How Does Raw Image Compression Work?

Every pixel in an image is based on a measurement that includes both the signal being measured and a random noise component.

If you buy a kilogram of potatoes, it’s a traceable quantity: someone has defined the kilogram, and you can benchmark against this standard. But for a microscopy image, there is usually no such traceability; no one tells you how much signal there is. We use methods from quantum physics to discern which parts are information and which are random noise, and to ensure that images are measurements rather than just pictures.

This is not trivial; distinguishing signal from noise is usually not possible. We get around this problem by calibrating the image sensors: we generate a precise, very low-noise signal in the lab and inject it into the sensor. Since the signal is known, you can subtract it from the measurement result, and what remains is the noise. Because modern sensors are very stable, the fundamental limits of quantum mechanics become the only limitation, and one calibration is valid for an entire sensor family. We started doing this with single-photon sources, which are very complicated quantum devices, but over time we found a way to get similarly good results with a simpler setup.
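The calibration idea can be sketched as a toy photon-transfer experiment: simulate a sensor with a known gain and read noise (both numbers invented for illustration), expose it to a series of stable illumination levels, and recover the gain from the linear relationship between per-pixel mean and variance. This is a simplified sketch, not Dotphoton's actual calibration procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sensor: converts photons to counts with a fixed gain and
# adds Gaussian read noise. These parameters are illustrative only.
GAIN = 2.0          # counts per photo-electron (assumed)
READ_NOISE = 3.0    # counts RMS (assumed)

def expose(mean_photons, shape=(256, 256)):
    """One frame from the simulated sensor under flat illumination."""
    photons = rng.poisson(mean_photons, size=shape)
    return GAIN * photons + rng.normal(0, READ_NOISE, size=shape)

# Photon-transfer calibration: inject a series of known, stable
# illumination levels and record the mean and variance at each level.
levels = [10, 50, 100, 500, 1000]
means, variances = [], []
for level in levels:
    frame_a, frame_b = expose(level), expose(level)
    means.append((frame_a.mean() + frame_b.mean()) / 2)
    # Differencing two frames cancels fixed-pattern noise;
    # var(a - b) equals twice the temporal noise variance.
    variances.append(np.var(frame_a - frame_b) / 2)

# For shot-noise-limited pixels, variance = gain * mean + read_noise^2,
# so a linear fit recovers the gain as the slope.
slope, intercept = np.polyfit(means, variances, 1)
print(f"estimated gain: {slope:.2f} counts/e-")
```

Once the gain and read noise are known, the expected noise at any signal level is predictable, which is exactly what makes the image a traceable measurement rather than just a picture.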

Nowadays, reliable, traceable image quality is important because cameras are the eyes of the machine-learning models that drive self-driving cars or give you a medical diagnosis. The first hurdle in using high-quality and traceable “raw” images is that they are large, and working with them is just too costly and slow. Therefore, our first product is an image compressor that retains the image’s quality and traceability but speeds up working with them by 800% and reduces their storage cost by nearly 90%. It works by blending a lossless compressor for the signal part and a lossy compressor for the noise part.
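One simple way to sketch the signal/noise split is noise-aware quantization: discard only the bits that the calibrated noise level makes statistically meaningless, then compress what remains losslessly. The toy codec below (synthetic data, zlib as the lossless stage, all parameters invented) illustrates the principle; it is not Dotphoton's actual compressor.

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)

# Toy raw "image" row: a smooth signal plus noise of a known scale,
# as would come from a calibrated sensor. Figures are illustrative.
NOISE_SIGMA = 4.0  # noise level assumed known from calibration
x = np.linspace(0, 2 * np.pi, 512)
signal = 1000 + 500 * np.sin(x)
raw = (signal + rng.normal(0, NOISE_SIGMA, size=x.shape)).astype(np.int32)

# Lossy stage for the noise: quantize with a step matched to the noise
# level, so only statistically meaningless bits are discarded.
step = int(NOISE_SIGMA)
quantized = np.round(raw / step).astype(np.int32)

# Lossless stage for the signal: entropy-code the quantized values.
compressed = zlib.compress(quantized.tobytes(), level=9)

reconstructed = quantized * step
max_error = int(np.abs(reconstructed - raw).max())  # bounded by step/2
ratio = raw.nbytes / len(compressed)
print(f"compression ratio: {ratio:.1f}:1, max error: {max_error} counts")
```

The reconstruction error stays below the sensor's own noise floor, so no signal information is lost, while the compressor no longer wastes bits encoding random noise exactly.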

Here we focus on two aspects: achieving the best physically possible compression, typically 8:1, and ensuring that it runs quickly and power-efficiently. For example, a 100 Gbps link uses approximately 100 W, whereas you can send the same data over a compressed 10 Gbps link, which uses 5 W for the link and 3 W for the compression. In a data center or in a car, this roughly tenfold power efficiency increase can make a big difference. A large part of our work also goes into ensuring the reliability of our system, as it has to run on satellites and in cars.
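The power figures in the example work out as follows (all numbers taken from the example above, not measured):

```python
# Back-of-the-envelope comparison: raw data on a fast link vs.
# compressed data on a slower link plus the compression cost.
RAW_LINK_POWER_W = 100.0        # 100 Gbps uncompressed link
COMPRESSED_LINK_POWER_W = 5.0   # 10 Gbps link carrying compressed data
COMPRESSION_POWER_W = 3.0       # power spent on compression itself

total_compressed_w = COMPRESSED_LINK_POWER_W + COMPRESSION_POWER_W
savings = RAW_LINK_POWER_W / total_compressed_w
print(f"power efficiency gain: {savings:.1f}x")  # → 12.5x
```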

This technology allows us and our customers to work at scale with images that are now treated as measurements, and to check the quality and reliability of those images before they are used in AI applications.

We can draw a parallel to electricity: someone had to define the volt and label electrical outlets so that you can safely plug a 230 V appliance into a 230 V outlet, and you know that you need a transformer if the voltages do not match.

We do the same for images: we specify them and check them before they are used in machine learning models, and if needed, we transform them to make them compatible with the required machine learning pipeline. Without our technology, you’d need to take care of all this yourself, and it’s tough; if you change one thing in your camera setup, you need to validate everything again. We want to be the interface between image acquisition and machine learning, bringing reliability to systems and allowing compatibility across systems so that if, for some reason, you need to change a camera in your system, you don’t have to re-develop and re-test your entire machine learning setup.

How Did You Evaluate Your Startup Idea?

From the previous startup I worked at, I knew it was beneficial to develop a product in a growing field—which we’re doing by building something with AI, cloud, and imaging. Each of these fields is growing by between 20% and 50% annually.

When we started, we asked ourselves why European startups are often less successful than US companies, so we went to the US to learn how they do it. They have broadly applicable ideas (e.g., everyone can find some use for Excel), and then they sit down with individual customers and check whether the idea works for them. The key is balancing broad applicability with specific solutions for individual customers. You know you have found such an idea when you get “traction.”

Until then, we kept the team as small, talented, and agile as possible so that we could move fast. For us, traction is when customers start calling every day and asking: “When is this ready? I need it now!” That is our current situation, and it is why we are growing the team. With lots of technical talent on the team, there is always the risk of technology coming before business. We noticed that successful companies put business first, so we made sure our CEO is non-technical and can prioritize business.

There are also a couple of upsides to being in Europe, such as more affordable skilled employees and excellent STEM education, so you can keep going with your deep tech for longer. This also gives us time to work with standardization bodies like the WHO, ITU, EMVA, and Quarep and make meaningful contributions to new standards, which will guide how things are done in data science and how image data is standardized and used in AI and machine learning applications.

What Advice Would You Give Fellow Deep Tech Founders?

I knew this before starting Dotphoton, but it’s hard to follow: Focus! Do one thing, figure out what your core market and value proposition are, and just do it. As a young startup, you get lots of stimuli and requests from potential partners, and it’s hard to say no when you’re small. But these distractions can slow you down and pull you away from the main goal, so you must stay focused.
