How to Evaluate a Quantum Processor’s Performance: In Conversation With Himadri Majumdar*
Not all quantum processors are created equal. While everyone aims to increase the fidelity of quantum states and lower the error rate, many other metrics also play an important role in building large-scale quantum computers and delivering commercial value.
SemiQon builds quantum processors by trapping electrons or holes in silicon and using their spins as qubits, an approach known as silicon spin qubits. Since its founding as a spinout from the Finnish research institute VTT in 2023 and our last interview on Future of Computing, SemiQon has successfully completed several fab runs and started shipping its first 4-qubit chips to research collaborators.
This article is part of a series of deep dives into SemiQon’s advancements, and we had the pleasure of speaking again with Himadri Majumdar, CEO and co-founder of SemiQon, about what quantum hardware manufacturers and end users should keep in mind when evaluating quantum processors:
What Is the Fidelity of Quantum States?
Fidelity is sort of the counterpart to the error rate, measuring how accurately a quantum state has been prepared, transformed, or measured. It quantifies how close two quantum states are, typically the ideal target state and the actual state produced by a quantum operation in the physical world. If you’re familiar with machine learning, think vector similarity, but for quantum states.
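For intuition, here is a minimal sketch in Python (using NumPy, with hypothetical state vectors) of how fidelity between two pure states reduces to the squared overlap of their state vectors:

```python
import numpy as np

# Two single-qubit pure states as normalized state vectors (hypothetical example).
ideal = np.array([1, 0], dtype=complex)                      # target state |0>
noisy_angle = 0.1                                            # small over-rotation in radians
actual = np.array([np.cos(noisy_angle / 2),
                   np.sin(noisy_angle / 2)], dtype=complex)  # slightly rotated state

# For pure states, fidelity is the squared magnitude of the inner product:
# F(|psi>, |phi>) = |<psi|phi>|^2, ranging from 0 (orthogonal) to 1 (identical).
fidelity = abs(np.vdot(ideal, actual)) ** 2
print(f"Fidelity: {fidelity:.6f}")  # ~0.9975 for a 0.1 rad over-rotation
```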
Fidelity is a critical metric for evaluating the reliability and accuracy of quantum computations. It’s used to demonstrate the capabilities of quantum processors, where achieving high fidelity is often regarded as the holy grail of quantum computing. Simply put, the higher the fidelity, the closer we are to practically useful quantum computing.
But practically, the important question is: How do we measure fidelity at scale when we have thousands, or eventually millions, of connected qubits? Individual qubits are not very useful unless they’re networked together, which opens up many more questions: How will the networking be implemented? How will we control and calibrate qubits individually? And how will the control electronics behave, and what latency will they introduce?
Fidelity is important, but we need to start speaking about fidelity at scale, with connected qubits. We need to consider quantum processors as a whole and look at holistic metrics—this will decide who will build large-scale, useful quantum computers.
What Other Metrics Are Important When Evaluating a Quantum Processor’s Performance?
Quantum hardware manufacturers focus a lot on what their machines are fundamentally capable of: What set of quantum computing operations, called quantum gates, can we implement? What’s our error rate? How often do we need to do error correction?
Most metrics today focus on single qubits or a few qubits, as quantum computers are still rather small. It still takes an unbelievable amount of time to measure and calibrate or tune qubits, which is possible for small qubit numbers but not at scale. We’ll need to evolve our measurement and calibration protocols for scaling. And we need measures that are industrially relevant and doable. So, besides fidelity at scale, there are various other metrics one should keep in mind, such as the ones that have been proposed in the literature:
- Computation speed: to compensate for limited coherence times, gate operations must be fast enough to complete complex computations before the qubits decohere
- Multiqubit networking: the more qubits that can be linked to one another to perform gate operations, the more readily they can implement quantum computing algorithms and the more powerful the resulting quantum computer will be
- Control over individual qubits at scale: as the number of qubits in a quantum computing system increases, control over the performance of individual qubits becomes increasingly complex
- Cooling and environmental control: for most qubit technologies, the required cooling equipment needs to scale in terms of both size and power
- Manufacturing: some qubit designs use existing production technology, while others require new manufacturing techniques. The production of full-scale quantum computers will eventually require automated manufacturing and testing of components
The quantum computing industry creates and adapts metrics as it develops further. For example, IBM introduced quantum volume as a metric for the computational power of a quantum processor, defined by the largest random circuit, with equal numbers of qubits and layers of operations, that a quantum computer can successfully run.
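As a rough sketch of the idea (not IBM’s full benchmarking protocol, which involves running random model circuits and checking heavy-output probabilities), quantum volume boils down to finding the largest “square” circuit size n (n qubits, n layers) the machine passes and reporting 2 to the power of n:

```python
# Illustrative sketch of the quantum volume idea (not IBM's full protocol).
# Assume we already ran square random circuits (width n = depth n) and recorded
# whether the machine passed the heavy-output test at each size (hypothetical results).
passed_square_circuit = {2: True, 3: True, 4: True, 5: False, 6: False}

largest_passing_n = max(n for n, passed in passed_square_circuit.items() if passed)
quantum_volume = 2 ** largest_passing_n
print(f"Quantum volume: {quantum_volume}")  # 16 in this hypothetical example
```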
Rather than just considering the number of qubits, it also accounts for factors such as their interconnectedness, error rates, and coherence times. But it doesn’t describe computational power across multiple quantum chips well. As manufacturers like IBM hit the ceiling on how many superconducting qubits they can pack into a single chip with a reasonably sized dilution refrigerator, they need to develop new metrics that better fit multicore quantum computers distributed across many dilution refrigerators.
Having all of these metrics can confuse end-users, the buyers of quantum computers or cloud users, as they only care about what solves their industry problems in a commercially viable way. That’s why quantum hardware manufacturers should focus not only on the fundamental metrics of their machines but also on what is relevant to solving practical problems and individual use cases.
When Should You Scale the Number of Qubits?
The billion-dollar question is when to transition from building small quantum machines to building large-scale ones. An important consideration here is how many actual, physical qubits you need to get one ideal, logical qubit. The quantum industry has come down from a ratio of 1,000 to 1 to about 100 to 1, while some players promise even a 12 to 1 ratio of physical to logical qubits.
The fewer physical qubits you must sacrifice to get a logical qubit, the better. However, some industry players seem to obsess over pushing the fidelity of their machines for small qubit numbers, while we think that even with 98% fidelity, you can make up for it with the volume of qubits. You don’t need six nines of fidelity if you get to the point where you have a sufficiently large number of physical qubits to implement logical qubits.
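To make the trade-off concrete, here is a back-of-the-envelope sketch (with a hypothetical target of 100 logical qubits) of how the physical-to-logical ratio translates into total physical qubit counts:

```python
# Hypothetical comparison: physical qubits needed for 100 logical qubits
# at different error-correction overheads (physical qubits per logical qubit).
logical_qubits_needed = 100

for ratio in (1000, 100, 12):
    physical = logical_qubits_needed * ratio
    print(f"{ratio:>4}:1 overhead -> {physical:>7,} physical qubits")
# 1000:1 -> 100,000    100:1 -> 10,000    12:1 -> 1,200
```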
Spin qubits are still early, but their moment to shine will come when it’s time to scale the number of interconnected qubits in a quantum processor. Hardware manufacturers will continue to demonstrate small machines with improved metrics for some time, while end users will keep asking: When will a quantum computer solve commercially relevant problems?
Ultimately, to solve commercially relevant problems, we’ll need large-scale quantum computers. We see silicon as the only platform where quantum processors can scale to billions of qubits. The semiconductor industry has demonstrated this before with transistors; we’ll make it happen again with qubits. And this will also allow us to package classical control electronics and the qubits on the same silicon chip.
How Does SemiQon Do Wafer-Scale Testing?
As the number of qubits grows, we’ll also need different measurement protocols to ensure uniform and high performance from chip to chip. The semiconductor industry has already established such standards, and we’re open to working with partner companies to do the same for the quantum industry. We can help each other create standard industry protocols—from testing single qubits to wafer-scale testing.
When a wafer comes from the fab, we get thousands of chips. We then do wafer probing at room temperature, measuring the performance of the quantum dots and their potential to form excellent qubits. That’s the first screening pass to eliminate malfunctioning chips.
Next, the ideal step would be to cool the wafer to cryogenic temperatures and perform the same characterization. At low temperatures, quantum phenomena, such as single-charge effects, can be observed and measured. Intel has demonstrated this capability in a recent scientific article, which is a significant step in the right direction. Our collaboration partner, the University of Basel, has also developed this capability independently.
Finally, we dice the samples to get individual chips, pick a random set of pre-screened chips, and perform benchmarks on metrics like charge noise, pinch-offs, or transistor behavior—quick, cryogenic measurements. We measure a statistically representative subset of all chips and then extrapolate from there, having enough confidence that the other chips will also be great.
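A minimal sketch of that extrapolation step, assuming a hypothetical pass/fail criterion per sampled chip and a simple binomial (normal-approximation) confidence interval; the numbers below are illustrative, not SemiQon data:

```python
import math

# Hypothetical benchmark results for a random subset of pre-screened chips:
# True = chip met the charge-noise / pinch-off / transistor-behavior criteria.
sample_results = [True] * 46 + [False] * 4   # 46 of 50 sampled chips passed
chips_on_wafer = 2000                        # hypothetical total chip count

n = len(sample_results)
pass_rate = sum(sample_results) / n

# 95% confidence interval via the normal approximation to the binomial distribution.
margin = 1.96 * math.sqrt(pass_rate * (1 - pass_rate) / n)
low, high = pass_rate - margin, pass_rate + margin

print(f"Sampled pass rate: {pass_rate:.1%} (95% CI {low:.1%} to {high:.1%})")
print(f"Estimated good chips on wafer: {int(low * chips_on_wafer)} to {int(high * chips_on_wafer)}")
```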
We then ship these as samples to collaborators, researchers who do the same measurements as we did, to reproduce the results. This takes time, and there’s no way to get around thoroughly testing the chips. Different labs have different processes and equipment, so if they can reproduce the same results despite these variations, we know our chips can work everywhere in the same way.
Thus, we’re creating a benchmark for quantum processor performance: by distributing our chips widely among collaborators, we ensure that many groups have measured them and have their performance available as a reference.
We have shipped chips already to researchers at the University of Basel, Seoul National University, National Taiwan University, and the University of Jyväskylä in Finland. And we’re going to send more samples and are thrilled to collaborate with more research groups.
This was the first of a series of conversations that we will have with SemiQon, and we will continue further in the next edition. Stay tuned!
*Sponsored post—we greatly appreciate the support from SemiQon
This development work at SemiQon was funded by the European Union’s Horizon EIC programme under grant agreement No. 101136793 (SCALLOP)