This is Part 3 in our Countdown to Q-Day series. Read Part 2 here.
Our previous post provided a general primer on fault-tolerant quantum computing. Here we will focus on logical qubits: the basic building blocks of logical quantum computation, and a practical requirement for running a quantum algorithm like Shor’s.
As a brief refresher, a physical qubit is the most fundamental resource in quantum computing, just like a classical bit is the basic unit of classical computation.
Much more so than a classical bit, a single physical qubit is fragile, noisy, and unable to support long or complex computations. In a quantum computer, physical qubits are constantly subjected to noise from the environment.
From a programming perspective, this is less than ideal, because when designing a quantum algorithm or circuit, we want to abstract away from the underlying physical architecture with logical operations or logical “gates” that reliably perform the operation we want.
We don’t often think about this in classical computing, because physical error rates are very low. So practically speaking, the physical implementation of a NOT gate is pretty close to its logical one (although it has required decades of semiconductor development to get us to this point).
But a quantum computer is more like a leaky ship: it only stays afloat if you can detect and patch leaks faster than they accumulate. So a continuous cycle of error detection, decoding, and correction is required to sustain a reliable “logical qubit”.
To mitigate inevitable errors in quantum computing, we “bundle” physical qubits together so they collectively behave as a more reliable logical qubit. The specific way qubits are grouped and managed depends on the underlying hardware architecture. Some approaches are more efficient than others, but none are perfect. All of them remain sensitive to noise and errors, which is why continuous error correction is required.
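To make the “bundling” idea concrete, here is a minimal classical sketch of the three-bit repetition code, the simplest ancestor of modern QEC codes. It is a toy model: real quantum codes must also handle phase errors and cannot read qubits directly, but the majority-vote logic carries over.

```python
import random

# Toy model: a 3-bit repetition code protecting one logical bit against
# bit flips. Real QEC must also handle phase errors and cannot read
# qubits directly, but the majority-vote idea is the same.

def run_trial(p: float) -> bool:
    """Return True if the logical bit survives one noise-and-correct cycle."""
    logical = 0
    block = [logical] * 3              # "bundle" three physical bits
    for i in range(3):
        if random.random() < p:        # each physical bit flips independently
            block[i] ^= 1
    decoded = 1 if sum(block) >= 2 else 0   # majority-vote decoding
    return decoded == logical

p, trials = 0.05, 100_000
failures = sum(not run_trial(p) for _ in range(trials))
# A logical failure needs at least 2 simultaneous flips, so it occurs with
# probability ~ 3*p**2: far below the physical rate p.
print(f"physical error rate: {p}")
print(f"logical error rate : {failures / trials:.4f} (theory ~ {3 * p**2:.4f})")
```

With p = 0.05, the logical failure rate lands near 3p² ≈ 0.0075, roughly seven times better than the raw physical rate. Real codes generalize this suppression, which leads to the formula below.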
The logical error rate is commonly approximated by the formula below:

$$p_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{\frac{d+1}{2}}$$

Here $p_L$ is the logical error rate, $p$ is the physical error rate, $p_{\mathrm{th}}$ is the fault-tolerance threshold, $d$ is the code distance, and $A$ is a constant prefactor.
A logical qubit can only exist when a fast, continuous error-correction cycle detects errors on physical qubits, decodes them classically, and feeds corrections back into the quantum system before errors accumulate. This is why quantum error correction (QEC) is so important.
If the QEC loop runs quickly and efficiently enough, the “leaks” in the hull of the ship are patched faster than they occur. Error correction outpaces error accumulation, and increasing the size of the QEC code causes the logical error rate to decrease (rather than increase) exponentially. This condition, where larger codes yield exponentially better logical fidelity, is known as operating below the fault-tolerance threshold.
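Here is a quick numerical sketch of that threshold behavior, using the formula above with illustrative values for A and p_th (these constants are assumptions for the example, not measurements from any real device):

```python
# Sketch of threshold behavior from the formula above, assuming
# illustrative constants A = 0.1 and p_th = 1e-2.
A, p_th = 0.1, 1e-2

for p in (1e-3, 1.5e-2):   # one physical rate below threshold, one above
    regime = "below" if p < p_th else "above"
    print(f"p = {p} ({regime} threshold):")
    for d in (3, 5, 7, 9):
        p_L = A * (p / p_th) ** ((d + 1) / 2)
        print(f"  d = {d}: p_L ~ {p_L:.1e}")
```

Below threshold, each step up in d buys an order of magnitude of suppression; above threshold, growing the code only makes things worse.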
In other words, operating below threshold enables us to effectively abstract away this physical layer and reliably run logical operations. Logical qubits are not physical objects but emergent entities maintained by QEC.
We can think of the variable d in the equation above as representing the “size” of the physical qubit arrangement that sustains a logical qubit. Higher values of d enable higher logical qubit fidelity (lower error rate), but also introduce additional cost in terms of physical qubits. Therefore, what we want is efficiency; that is, smaller codes that pack a bigger error-correction punch.
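As a rough sketch of that tradeoff, the snippet below combines the formula above with the standard rotated-surface-code count of 2d² − 1 physical qubits per logical qubit (the values of A, p, p_th, and the target rate are illustrative placeholders):

```python
# How big must d be to hit a target logical error rate, and what does it
# cost? Uses the formula above plus the rotated-surface-code qubit count;
# A, p, p_th, and target are illustrative, not from any real device.
A, p_th, p = 0.1, 1e-2, 1e-3
target = 1e-12                  # roughly what a long algorithm demands

d = 3
while A * (p / p_th) ** ((d + 1) / 2) > target:
    d += 2                      # code distance is conventionally odd
print(f"smallest d: {d}  ->  ~{2 * d * d - 1} physical qubits per logical qubit")
```

Under these assumptions, one logical qubit costs on the order of a thousand physical qubits, which is why code efficiency matters so much.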
The number of physical qubits required to create one logical qubit depends on several factors. Efficient measurement, error decoding, feed-forward, and control were discussed in our prior post covering QEC. Other important aspects are baseline fidelity, the choice of code, and connectivity; we’ll take these in turn.
Certain physical architectures have qubits with a higher “baseline” fidelity. In other words, some qubits are more prone to error than others. For example, trapped-ion qubits have higher 1-qubit (1Q) gate fidelity than superconducting qubits, as shown in the chart below.
*(Chart: 1-qubit gate fidelities across qubit modalities)*
Even more important than 1Q fidelity is 2-qubit (2Q) gate fidelity. Two-qubit gates create entanglement, the uniquely quantum property that enables quantum computers to run algorithms (like Shor’s) that a classical computer simply cannot.
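For readers who want to see this concretely, here is a minimal statevector sketch (plain NumPy, no quantum SDK) that builds a Bell pair from one 1Q gate and one 2Q gate; the correlated 00/11 amplitudes at the end are entanglement that no pair of classical bits can reproduce:

```python
import numpy as np

# Build the Bell state (|00> + |11>)/sqrt(2) with one 1Q gate and one
# 2Q gate, using plain statevector math.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # 1Q Hadamard gate
CNOT = np.array([[1, 0, 0, 0],                      # 2Q controlled-NOT gate
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

psi = np.zeros(4)
psi[0] = 1                                          # start in |00>
psi = np.kron(H, np.eye(2)) @ psi                   # Hadamard on qubit 0
psi = CNOT @ psi                                    # entangle the pair
print(psi.round(3))   # [0.707 0. 0. 0.707]: outcomes 00 and 11, perfectly correlated
```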
Even fidelities over 99% are not enough on their own, because errors compound with each operation. The higher the fidelity, the more operations one can perform before errors accumulate and need to be corrected. Before they can be corrected, though, they need to be detected.
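A back-of-the-envelope sketch, assuming independent errors per gate, shows how quickly this compounding bites:

```python
import math

# Assuming independent errors, the chance of an error-free run after
# n gates is roughly fidelity**n. How many gates until that chance
# drops to 50%?
for fidelity in (0.99, 0.999, 0.9999):
    n_half = math.log(0.5) / math.log(fidelity)
    print(f"fidelity {fidelity}: ~{n_half:,.0f} gates until coin-flip failure odds")
```

Published resource estimates put Shor’s algorithm against RSA-2048 at billions of logical operations, so no achievable raw fidelity comes close. That’s where codes come in.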
There are many different “codes” or ways to arrange qubits that enable detection of errors. Some of these are more efficient than others. For example, the image below shows the “surface code”, used by Google’s Willow to demonstrate below-threshold quantum computation.
*(Figure: the surface code, as used in Google’s Willow below-threshold demonstration)*
The most recent resource estimate for breaking RSA-2048 with 1 million physical qubits assumed surface codes for error correction. One unfortunate fact about surface codes (evident in the picture above) is that the physical qubit cost scales quadratically with code distance. Newer approaches, such as qLDPC codes, scale linearly in the ideal case. Using more efficient codes has the potential to reduce the cost of Shor’s algorithm by another order of magnitude.
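As a very rough illustration of that gap (all numbers below are placeholders, not figures from the cited estimate; the qLDPC encoding rate in particular is hypothetical):

```python
# Illustrative (not rigorous) comparison of encoding overhead. Surface
# codes cost O(d^2) physical qubits per logical qubit; good qLDPC codes
# aim for a constant encoding rate. All numbers here are placeholders.
logical_qubits = 1_400          # ballpark logical count for factoring RSA-2048
d = 21                          # example surface-code distance

surface_total = logical_qubits * (2 * d * d - 1)
qldpc_rate = 0.1                # hypothetical rate-1/10 qLDPC code
qldpc_total = int(logical_qubits / qldpc_rate)

print(f"surface code : ~{surface_total:,} physical qubits")
print(f"qLDPC (rate {qldpc_rate}): ~{qldpc_total:,} physical qubits")
```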
Some families of QEC codes are also better suited to certain physical modalities and error types than others.
A final aspect to consider (partially related to the prior two) is connectivity. Different physical architectures limit which qubits can interact with one another. For example, the cryogenic temperatures required for superconducting processors necessitate supercooled physical connections, evident in the picture below.
*(Figure: supercooled physical connections in a superconducting quantum processor)*
With limited connectivity between qubits, the resource overhead increases because there are fewer ways to arrange physical qubits into logical ones. And because quantum information cannot be cloned, it must instead be “swapped” between neighboring qubits to bring distant qubits together. These swap operations must be accounted for within the logical circuit, increasing the logical error rate or adding further overhead.
*(Figure: routing quantum information between non-adjacent qubits with swap operations)*
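A back-of-the-envelope sketch of that routing cost, assuming a nearest-neighbor grid, the standard decomposition of one SWAP into three CNOTs, and an illustrative CNOT fidelity:

```python
# Cost of routing on a nearest-neighbor grid: moving quantum information
# k sites costs k SWAPs, and each SWAP is typically compiled into 3 CNOTs.
# The fidelity below is illustrative, not from any real device.
cnot_fidelity = 0.999
distance_in_sites = 10                    # the two qubits sit 10 grid sites apart

cnots = 3 * distance_in_sites             # 1 SWAP = 3 CNOTs
effective = cnot_fidelity ** (cnots + 1)  # routing plus the gate we actually wanted
print(f"{distance_in_sites} SWAPs -> {cnots} extra CNOTs; "
      f"effective 2Q fidelity ~ {effective:.3f}")
```

Even with 99.9% CNOTs, a single long-range interaction drops to roughly 97% effective fidelity, which is why connectivity shows up directly in resource overhead.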
On the other hand, modalities based on neutral atoms and trapped ions are more flexible and permit “any-to-any” connectivity. However, this comes at the cost of slower runtimes and error-correction cycles.
Quantum computing was first theorized a half-century ago, and the advent of below-threshold operation and logical qubits over the past year is a true breakthrough. As this blog post has hopefully explained, harnessing the power of these quantum systems has required a huge amount of scientific and engineering effort. Now quantum computing enters its logical era, where practical quantum computation becomes possible and relevant for cryptographic systems such as Bitcoin.
In the next blog post, we’ll dive into the different types of logical quantum operations, their specific considerations and relative costs, and how those operations fit together to run an algorithm like Shor’s.