This is Part 2 in our Countdown to Q-Day series. Read Part 1 here.
A cryptographically relevant quantum computer, if realized, would be able to run Shor’s algorithm and crack modern public-key encryption in anywhere from a few hours to a day. However, the fragile nature of the quantum effects that make quantum computers so powerful for certain algorithms also creates a major challenge for scaling quantum systems. Thus, a cryptographically relevant quantum computer (CRQC) must be sufficiently fault tolerant.
Fault tolerance is the central requirement for any CRQC, because no physical quantum system today is reliable enough to run a long computation on its own. To scale beyond toy programs, a quantum computer must keep physical errors below a specific threshold and simultaneously apply quantum error correction fast enough to prevent errors from accumulating. Achieving this regime requires three things:
- a physical architecture with low baseline error rates
- an error correction code that can extract and correct errors efficiently
- a control system that can apply those corrections at high speed
Note that these aspects are not necessarily independent of each other: certain error correction codes work on some architectures but not others, and the requirements for the control plane may differ depending on which architecture, error correction code, and parameters are being used.
Fault tolerance is what converts inherently fragile physical qubits into stable logical qubits capable of running deep algorithms like Shor’s. Because no meaningful quantum computation is possible without it, fault-tolerant capability is the single most important predictor of real quantum progress.
A universal Turing machine is an abstract description of a modern computer. This description doesn’t specify how the computer should be implemented. For example, the computer that you are viewing this blog post on is an implementation of a Turing machine. But a Turing machine could also be built from vacuum tubes, bacterial cells, or even the outputs of an image compression algorithm.
In a similar vein, what we call a “quantum computer” is also an abstract concept, which can take a number of physical forms since nature is, fundamentally, quantum mechanical. The foundation of fault tolerance rests on the fidelity of the underlying architecture, and some quantum architectures have a lower baseline error rate for some/all operations (called gates) compared to others. “Noise” in this context means any unintended interaction between qubits that causes their quantum state to collapse or evolve in unintended ways.
The “chandelier” image that is often shown in popular news stories about quantum computing is a “superconducting” qubit architecture, because it uses superconducting electrical circuits cooled to extremely low temperatures (read: colder than the vacuum of space). Much of the physical structure of this “chandelier” is actually for refrigeration.
Superconducting architectures (1) have fast gate operations but also decohere quickly. They are also highly sensitive to perturbation: stray microwaves, tiny vibrations, and other sources of environmental noise can all disturb the qubits, which is why they require cryogenic temperatures. Moreover, connectivity between qubits is limited by locality, which can result in larger and more complex circuits.
Google and IBM are the most prominent examples of companies building a quantum computer based on a superconducting architecture.
On the other end of the spectrum from superconducting qubits are trapped ions (2), which are relatively stable but whose gate operations are slow. This runtime limitation presents its own challenge for scaling the architecture to cryptographically relevant size: even though trapped ions are stable at the qubit level, the slow gate operations make error correction cycles slower and more computationally expensive.
IonQ and Quantinuum are two prominent quantum computing companies building systems with trapped-ion architectures.
One up-and-coming architecture that has been advancing quickly is neutral atoms (3). Neutral atoms are physically similar to trapped ions, allowing all-to-all connectivity and good stability, but they introduce greater flexibility in realizing different types of gates. Cycle times are also faster, at the cost of slightly lower two-qubit gate fidelities and greater sensitivity to control errors.
Figure 1: Illustration of a quantum-classical interface
The next generation of quantum computing architectures being considered all feature low baseline error rates. One prospective approach championed by Microsoft (called Majorana qubits) promises a feature known as topological protection, which if realized would offer extremely low physical error rates.
That said, many of these new architectures are still quite nascent. So error correction, the topic of the next section, will likely be the primary way in which a cryptographically relevant quantum computer is realized in the short/medium-term.
Mitigating the physical errors introduced during a quantum computation, so that the result of the computation is meaningful, is the job of quantum error correction (QEC). QEC is an ongoing process that runs in repeated cycles alongside the computation, requiring both classical and quantum components. Error correction codes are often specialized to particular architectures because of their different physical characteristics, such as the degree of connectivity between qubits.
Error correction operations are typically divided into two categories:
- Error detection is the process of discovering when and where something has gone wrong in the quantum computation (“syndrome extraction”)
- Error correction applies classical decoding techniques to that syndrome to determine the most likely error and “feeds forward” instructions on how to correct it midstream (so the whole program doesn’t have to start over)
QEC codes provide the foundation of this technique. In classical computing, data can be read and copied trivially, and applying error correction to ensure fidelity is a relatively straightforward process. By contrast, the “no-cloning” property of quantum mechanics prohibits copying an arbitrary quantum state, and measuring qubits directly disturbs them, which makes error correction much more challenging.
A QEC code defines how logical information is spread across many physical qubits by using “stabilizers.” A stabilizer is a specific multi-qubit measurement defined by the code that checks whether a group of qubits is still in a valid encoded state. If the measurement outcome changes from its expected value, it signals that an error has occurred somewhere within that group of physical qubits. By distributing information in this way, the code allows the system to identify and repair errors on the logical level without ever directly measuring the underlying quantum state.
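To make syndrome extraction and decoding concrete, here is a minimal classical sketch using the three-qubit bit-flip repetition code, the simplest code with stabilizer-style parity checks. It is a toy illustration only: a real QEC cycle measures the stabilizers without reading the data qubits and must also handle phase errors, and the function names and 5% error probability below are arbitrary choices for the example.

```python
# Toy sketch (not a real quantum simulation): the classical skeleton of
# syndrome extraction and lookup-table decoding for a 3-qubit bit-flip
# repetition code, where logical 0 = 000 and logical 1 = 111.
import random

def extract_syndrome(bits):
    # Parities of neighboring pairs -- the classical analogue of measuring
    # the Z1Z2 and Z2Z3 stabilizers without reading the data directly.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Lookup-table decoder: each syndrome points at the single bit most likely
# to have flipped (assumes at most one error per round).
DECODER = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def qec_round(bits, p_error=0.05):
    # 1. Noise: each physical bit flips independently with probability p_error
    bits = [b ^ (random.random() < p_error) for b in bits]
    # 2. Detection: extract the syndrome
    syndrome = extract_syndrome(bits)
    # 3. Correction: decode the syndrome and feed the fix forward
    flip = DECODER[syndrome]
    if flip is not None:
        bits[flip] ^= 1
    return bits

print(qec_round([0, 0, 0]))  # almost always recovers [0, 0, 0]
```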
Of course, introducing more physical qubits that are themselves prone to error only helps if the way they are combined suppresses errors faster than it introduces them, so that the error rate of the overall system goes down.
This benchmark for error correction is known as operating “below threshold,” which means that the logical error rate decreases exponentially as the code grows. In other words, below-threshold error correction implies increasing returns to scale: adding more physical qubits to a quantum error-correcting code actually increases fidelity.
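To get a feel for what operating below threshold buys, the sketch below uses the scaling heuristic commonly quoted for surface codes, where the logical error rate falls off roughly as (p/p_th)^((d+1)/2) with code distance d. The prefactor and threshold values are illustrative placeholders, not figures from any particular experiment.

```python
# Below-threshold scaling heuristic for surface codes:
#   p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) / 2)
# The constants A = 0.1 and p_threshold = 1e-2 are placeholders for illustration.

def logical_error_rate(p_physical, distance, p_threshold=1e-2, prefactor=0.1):
    return prefactor * (p_physical / p_threshold) ** ((distance + 1) / 2)

# Below threshold (p_physical < p_threshold), each step up in code distance
# suppresses the logical error rate by a roughly constant factor.
for d in (3, 5, 7, 9, 11):
    print(f"distance {d}: logical error rate ~ {logical_error_rate(1e-3, d):.1e}")
```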
Google demonstrated below-threshold error correction for the first time in a notable result published earlier this year. In this work, they used a special kind of error-correcting code called a surface code. A surface code arranges physical qubits on a two-dimensional grid and protects information by repeatedly measuring local stabilizers that detect bit-flip and phase-flip errors. These stabilizer measurements produce a syndrome that can be decoded to identify and correct errors without collapsing the logical quantum state.
Trapped-ion and neutral-atom architectures often use a different kind of error-correcting code called a color code. Color codes are another family of topological QEC codes that arrange qubits on a lattice and use stabilizer measurements to detect and correct errors, much like surface codes. Their main advantage is that they leverage the any-to-any connectivity of these architectures, which simplifies error correction for a subset of gate operations, although this comes with lower error thresholds and more complex stabilizers that make them more sensitive to noise.
Quantum low-density parity-check codes, or qLDPC codes, are a newer class of QEC codes designed to reduce the number of physical qubits needed per logical qubit. They use sparse stabilizers that act on fewer qubits, which makes measurements easier, reduces error propagation, and lowers control complexity. If hardware fidelity is high enough, qLDPC codes can achieve comparable logical error rates while encoding many more logical qubits into the same number of physical qubits, so the overhead per logical qubit scales more efficiently than with surface codes. The trade-off is that they generally require longer-range connectivity between qubits and more sophisticated decoders.
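A back-of-the-envelope comparison shows why this matters for overhead. The surface-code count below uses the common rule of thumb of roughly 2d² physical qubits (data plus ancilla) per logical qubit, while the qLDPC parameters, a hypothetical code packing 12 logical qubits into 144 data qubits, are illustrative placeholders in the spirit of recently proposed codes rather than any published system’s numbers.

```python
# Rough, illustrative overhead comparison; all parameters are assumptions.

def surface_code_qubits_per_logical(distance):
    # ~d^2 data qubits plus ~d^2 measurement ancillas for one logical qubit
    return 2 * distance ** 2

def qldpc_qubits_per_logical(n_data, k_logical):
    # a qLDPC block shares its physical qubits across k logical qubits;
    # assume roughly one ancilla per data qubit for syndrome measurement
    return 2 * n_data // k_logical

print("surface code, d = 12:", surface_code_qubits_per_logical(12),
      "physical qubits per logical qubit")
print("hypothetical qLDPC [[144, 12]]:", qldpc_qubits_per_logical(144, 12),
      "physical qubits per logical qubit")
```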
With below-threshold QEC demonstrated earlier this year, the field has a unique opportunity to continue improving these codes and their implementations. The codes themselves could be refined or expanded to enable smaller stabilizers, greater efficiency, and universality across architectures and gate types. Better decoding and measurement algorithms could also reduce the latency of error correction cycles, lowering the resource overhead while retaining the same fidelity.
QEC is only as effective as the quantum-classical control plane that implements it. This interface, where decoding happens and corrections are fed forward into the quantum computation, is the topic of the next section.
The quantum-classical control plane is the interface through which a quantum computer detects errors, decodes them, and feeds forward corrections to guide future operations. Its performance determines how quickly stabilizer measurements can be processed, how fast corrections can be applied, and how efficiently the system can handle program logic such as branching or loops through selective mid-circuit measurement. (5)
A practical example of this approach is NVIDIA’s NVQlink, which uses a graphics processing unit (GPU) to perform syndrome decoding, mid-circuit measurement, and classical control flow with very low latency. Systems like this illustrate that scalable quantum computing depends as much on fast classical feedback as it does on high-fidelity qubits.
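The core constraint can be framed as a timing budget: if the classical decoder cannot process each round of syndrome data at least as fast as new rounds arrive, a backlog builds up and corrections arrive too late to feed forward. The sketch below makes this explicit; the microsecond figures are hypothetical placeholders, not measurements of any real system.

```python
# Control-plane timing budget sketch; the timings are assumed values.
SYNDROME_ROUND_US = 1.0   # assumed time per stabilizer-measurement round (microseconds)
DECODE_LATENCY_US = 0.8   # assumed classical decoding time per round (microseconds)

def syndrome_backlog(rounds, round_time_us, decode_time_us):
    # Undecoded-syndrome backlog (in rounds) after a number of QEC cycles.
    # If decoding is slower than syndrome generation, the backlog grows without
    # bound and feed-forward corrections arrive too late to be useful.
    return max(0.0, rounds * (decode_time_us - round_time_us) / round_time_us)

print("backlog after one million rounds:",
      syndrome_backlog(1_000_000, SYNDROME_ROUND_US, DECODE_LATENCY_US), "rounds")
```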
A final consideration is that not all quantum gates place equal demands on the control plane or on error correction. Certain gate types (like Clifford gates, which will be covered in the next post) are relatively easy to correct and can be handled efficiently by codes such as the surface code, but they do not provide computational power beyond what a classical computer can simulate. The operations that provide true quantum advantage, particularly those involving complex entangled states, tend to be harder to implement and require significantly more resources to correct.
Non-Clifford gates are therefore essential for universal quantum computation, but they come at the cost of increased circuit depth, higher error correction overheads, and greater demands on the quantum-classical control plane. The high cost of implementing and error-correcting these non-Clifford gates is the primary resource bottleneck for running Shor’s algorithm, and thus one of the most important remaining challenges in building a cryptographically relevant quantum computer.
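To see why these gates dominate the budget, consider a crude order-of-magnitude sketch: in the standard approach, each non-Clifford T gate consumes a distilled “magic state,” so the total T-count divided by the rate at which magic-state factories produce those states bounds the runtime. Every number in the snippet is a hypothetical placeholder chosen for illustration, not a published resource estimate for Shor’s algorithm.

```python
# Order-of-magnitude sketch of T-gate (magic state) throughput as a bottleneck.
# All figures below are hypothetical placeholders, not real estimates.
T_COUNT = 1e9                     # assumed number of T gates in the circuit
STATES_PER_FACTORY_PER_SEC = 1e4  # assumed magic-state output rate per factory
N_FACTORIES = 10                  # assumed number of parallel factories

runtime_seconds = T_COUNT / (STATES_PER_FACTORY_PER_SEC * N_FACTORIES)
print(f"T-gate-limited runtime: ~{runtime_seconds / 3600:.1f} hours")
```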
Figure 2: Illustration of a quantum-classical interface
As we covered in Part I, quantum mechanics enables a powerful new paradigm for computation because it leverages fundamental properties like superposition and entanglement. However, these features make the physical realization of a quantum computer inherently fragile, difficult to implement, and prone to errors.
Overcoming this inherent fragility to achieve a useful computational result is the goal of quantum error correction. While companies have built increasingly large quantum computers over the last decade in terms of the number of physical qubits, these systems are still mostly too noisy to compute anything meaningful.
Separating “signal from noise” is at the heart of the practical challenge of useful quantum computation. Error correction and its associated metrics are therefore the most important ones to track along the road to a CRQC. And over the past year, improvements in error correction have accelerated greatly, creating conditions under which a scalable, fault-tolerant quantum computer might be realized. (5)
With this foundation in QEC, the next post in the series will cover the logical layer of quantum computation: logical qubits, gate operations, and quantum vs. classical circuits.
1. https://postquantum.com/quantum-modalities/superconducting-qubits/
2. https://postquantum.com/quantum-modalities/trapped-ion-qubits/
3. https://postquantum.com/quantum-modalities/neutral-atom-quantum/
4. For a better understanding of error decoding and quantum control, check out the resources at https://www.riverlane.com/
5. See the 20x improvement cited in this paper from Google QAI: https://arxiv.org/abs/2505.15917