Imagine I hand you a ball and tell you it’s colorless. Not white. Not transparent. Colorless. The property “color” does not apply to this object.
You’d push back. Everything has a color. Maybe I mean it’s clear, or grey, or I’m being difficult. But I mean something specific: the ball does not have a color until you measure it. When you do, it becomes red or green—not reveals red or green, becomes red or green. Before measurement, asking “what color is it?” is like asking “what is the smell of Tuesday?” The question doesn’t apply.
This is what quantum mechanics actually claims about particles. Not that we don’t know the state. That the state doesn’t exist yet.
Now go further. Imagine I hand you two of these colorless balls and tell you they’re identical. Not similar. Not manufactured to the same spec. Identical—so completely that there is no fact of the matter about which one is in your left hand and which is in your right. They aren’t “ball A” and “ball B with unknown colors.” They aren’t “ball A” and “ball B” at all. There is no this one versus that one. There’s just... ball, twice. You’re not holding two objects. You’re holding the same indistinguishable thing in two places.
That’s where the metaphor starts to strain, because a ball is a thing you can point at, and pointing is exactly what quantum mechanics says you can’t do with identical particles. The objects don’t just lack properties. They lack individuality.
Most explanations get this wrong. The 2022 Nobel Prize in Physics went to Aspect, Clauser, and Zeilinger for experimental work proving that quantum entanglement is real and that local hidden variables—the intuition that particles carry definite properties we just can’t see—don’t work as an explanation. The Royal Swedish Academy published a diagram to explain this: a box produces a black ball and a white ball. In the hidden variables version, the balls were always black and white inside the box. In the quantum version, they become black and white upon measurement.
The problem with this diagram is that black and white are both definite states. They differ in value but not in kind. The hidden variables box and the quantum box look structurally identical: thing goes in, colored things come out. The entire conceptual leap—that the property doesn’t exist before measurement—gets flattened into “the balls were always in there, you just couldn’t see which was which.” Which is the hidden variables interpretation wearing a quantum costume.
The colorless ball forces a category violation. A colorless ball isn’t a ball with an unknown color. It’s a ball where color isn’t a meaningful attribute yet. And at a deeper level, it’s not even a ball you can distinguish from any other ball of its type. That’s the gap most people can’t cross. And it matters, because the question of whether superposition represents genuine computational parallelism—whether a quantum computer is really exploring an exponentially large space of possibilities—depends entirely on whether that colorless, indistinguishable something actually has more degrees of freedom than a ball that’s secretly red.
Whether that colorless ball genuinely has more degrees of freedom than a secretly red one isn’t a philosophical curiosity. It’s a $50 billion engineering bet. Every quantum computer ever built is a machine designed to exploit the difference. If the difference is smaller than the math says, so is the machine.
Which brings me to what I think of as five ceilings.
A caveat before I start: the mainstream view in physics is that Ceilings 1 and 2 are the real obstacles, and both are engineering problems that will yield to better hardware. Ceilings 3 through 5 represent increasingly minority positions—not crackpottery, but not consensus either. I find them worth taking seriously. You should know you’re getting a skeptical take, not a summary of where the field stands.
The quantum computing industry talks about one barrier to cryptographically relevant machines: engineering. Build better hardware, reduce error rates, scale up qubit counts, and you get a machine that runs Shor’s algorithm against RSA-2048. The timeline is debated. The destination is not.
I think there are actually five possible ceilings, and they’re nested. Each one is harder to test, harder to talk about, and more consequential than the last.
Ceiling 1: Error correction overhead
The known problem. You need roughly 1,000 to 10,000 physical qubits per logical qubit. To run Shor’s algorithm against RSA-2048, resource estimates land in the millions of physical qubits at current error rates; the widely cited Gidney and Ekerå analysis puts it at about 20 million noisy qubits running for roughly eight hours. We’re at roughly 1,000 noisy physical qubits today.
This is the ceiling the industry acknowledges and calls purely an engineering challenge. Maybe it is.
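To make the overhead arithmetic concrete, here is a back-of-envelope sketch in Python. It uses the standard rough surface-code scalings (physical qubits per logical qubit growing roughly as 2d^2 with code distance d, and the logical error rate falling roughly as 0.1 * (p / p_th)^((d+1)/2)); every input number is an assumption chosen for illustration, not a figure from the Gidney–Ekerå paper.

```python
# Back-of-envelope surface-code overhead. Every number here is an
# illustrative assumption, not a figure from any specific paper.

def physical_per_logical(distance: int) -> int:
    """Rough surface-code cost: ~d^2 data qubits plus ~d^2 measurement qubits."""
    return 2 * distance ** 2

def logical_error_rate(p_phys: float, p_threshold: float, distance: int) -> float:
    """Rough scaling: p_logical ~ 0.1 * (p_phys / p_threshold)^((d+1)/2)."""
    return 0.1 * (p_phys / p_threshold) ** ((distance + 1) // 2)

p_phys = 1e-3             # assumed physical error rate per operation
p_threshold = 1e-2        # assumed surface-code threshold (order of magnitude)
logical_qubits = 4000     # assumed logical qubits for a factoring-scale run
target = 1e-12            # assumed per-operation logical error budget

# Find the smallest odd code distance that meets the target.
d = 3
while logical_error_rate(p_phys, p_threshold, d) > target:
    d += 2

print(f"code distance: {d}")
print(f"physical qubits per logical qubit: ~{physical_per_logical(d)}")
print(f"total physical qubits: ~{logical_qubits * physical_per_logical(d):,}")
```

With these assumed inputs the sketch lands at a code distance in the low twenties and a few million physical qubits, which is the right neighborhood for published estimates; worsen the assumed error rate by a factor of a few and the total swings by millions.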
Ceiling 2: Correlated errors at scale
Current error correction models assume errors are mostly independent and random across qubits. If qubit 7 flips, that tells you nothing about whether qubit 8 will flip. The math depends on this.
But researchers have observed correlated error bursts that hit many qubits at once, triggered by events like ionizing radiation striking the chip and vibrations from cryogenic cooling systems. If error correlations increase nonlinearly as you add qubits—if the errors aren’t random but structured in ways current models don’t capture—then error correction thresholds may be fundamentally unreachable, not just hard to reach.
The difference matters. An engineering problem yields to funding and time. A threshold problem doesn’t.
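A toy calculation shows why the independence assumption carries so much weight. Take a block of physical qubits that fails only when more than t of them flip in the same round. Under independent errors that failure probability is binomial and falls off steeply; add a rare correlated burst that flips many qubits at once and the burst quickly dominates. This is a deliberately simplified sketch, not a simulation of any real code or decoder, and all the rates are assumed.

```python
# Toy model: independent errors vs. rare correlated bursts.
# A block of n qubits "fails" if more than t of them flip in one round.
# Simplified illustration only; not a real error-correction simulation.

from math import comb

def fail_independent(n: int, t: int, p: float) -> float:
    """P(more than t of n qubits flip), each flipping independently with prob p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1, n + 1))

def fail_with_bursts(n: int, t: int, p: float, burst_rate: float, burst_size: int) -> float:
    """Same block, but with probability burst_rate a correlated event flips
    burst_size qubits at once (assumed fatal whenever burst_size > t)."""
    burst_fails = 1.0 if burst_size > t else fail_independent(n - burst_size, t - burst_size, p)
    return (1 - burst_rate) * fail_independent(n, t, p) + burst_rate * burst_fails

n, t = 49, 3           # assumed block size and correctable flips per round
p = 1e-3               # assumed independent flip probability per qubit per round
burst_rate = 1e-6      # assumed: one correlated event per million rounds
burst_size = 10        # assumed: each event hits 10 qubits at once

print(f"independent errors only: {fail_independent(n, t, p):.1e}")
print(f"with rare bursts:        {fail_with_bursts(n, t, p, burst_rate, burst_size):.1e}")
```

With these assumed numbers, the rare bursts, not the independent flips, dominate the failure rate, and making the block bigger only improves the independent term. That is the qualitative worry: the part of the error budget that scaling fixes is not the part the bursts live in.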
Ceiling 3: Decoherence scaling
Coherence times—how long a qubit maintains its quantum state—range from microseconds to seconds depending on the platform. The standard assumption is that this is an environmental isolation problem. Better shielding, colder temperatures, cleaner materials.
But here’s what I find suggestive: every qubit platform hits decoherence through completely different physical mechanisms. Superconducting qubits lose coherence through electromagnetic noise. Trapped ions through stray electric fields. Photonic qubits through absorption and scattering. Silicon spin qubits through nuclear spin interactions. The timescales vary dramatically—microseconds for superconducting circuits, seconds for trapped ions—so I’m not claiming the limitations are identical. But every platform, regardless of the underlying physics, eventually loses coherence. The universal pattern is decoherence itself, even if the speeds differ.
Maybe that’s just what happens when you try to isolate delicate quantum states from a noisy universe—different platforms, different noise sources, same fundamental challenge. Or maybe decoherence reflects something about how superposition works at scale rather than how well we build isolating containers. If it’s the latter, there’s a wall that no amount of engineering touches. I lean toward the former explanation, but I don’t think we know yet.
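For a feel of what superposition at scale is up against, here is a small illustration using the simplest decoherence model: independent dephasing, where a single qubit’s coherence decays as exp(-t/T2) and the coherence of an n-qubit GHZ-style superposition decays roughly n times faster. The T2 and gate times below are assumed round numbers, and real devices and real algorithms are messier, but the scaling is the point.

```python
# Illustration: coherence of a large superposition under independent dephasing.
# T2 and gate-layer time are assumed round numbers, not any device's specs.

import math

T2 = 100e-6          # assumed single-qubit coherence time: 100 microseconds
layer_time = 100e-9  # assumed duration of one gate layer: 100 nanoseconds

def ghz_coherence(n_qubits: int, n_layers: int) -> float:
    """Remaining coherence of an n-qubit GHZ-like state after n_layers layers,
    assuming each qubit dephases independently: exp(-n * t / T2)."""
    t = n_layers * layer_time
    return math.exp(-n_qubits * t / T2)

for n in (1, 10, 100, 1000):
    print(f"{n:5d} qubits, 1000 gate layers: coherence ~ {ghz_coherence(n, 1000):.2g}")
```

Error correction exists precisely to beat this kind of scaling. The question Ceiling 3 raises is whether decoherence is only an isolation problem that correction and engineering can outrun, or something that tracks the size of the superposition itself.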
Ceiling 4: Computational space is smaller than the math says
Here’s where the colorless ball matters.
Shor’s algorithm assumes that a qubit in superposition is genuinely exploring both states simultaneously. Two entangled qubits explore four states. Three explore eight. Scale that to thousands and you get a computational space so vast that it can factor large numbers efficiently—something no known classical algorithm can do.
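The size of the space being claimed is easy to count, at least under the standard model where n qubits correspond to 2^n complex amplitudes:

```python
# The standard count: n qubits correspond to 2^n complex amplitudes.
# Memory figures assume 16 bytes per amplitude (double-precision complex).

for n in (10, 50, 100):
    amplitudes = 2 ** n
    terabytes = amplitudes * 16 / 1e12
    print(f"n = {n:3d}: 2^n = {amplitudes:.2e} amplitudes (~{terabytes:.2e} TB to store directly)")
```

By fifty qubits the direct representation is already beyond the memory of the largest supercomputers, and a hundred is hopeless. That is the space Shor’s algorithm is supposed to have at its disposal.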
But what if superposition is constrained by hidden structure? What if the colorless ball has fewer degrees of freedom than the math assumes—not zero, not classical, but less than fully quantum? The standard model of quantum computation treats qubits as distinguishable objects occupying a vast Hilbert space. But if the particles themselves resist individuation—if “qubit 7” and “qubit 8” aren’t as separable as the addressing scheme implies—then the computational space they span may be smaller than the math says. You’d see algorithms that work perfectly at small qubit counts but deliver diminishing returns at scale. Classical simulations would keep pace longer than quantum theory predicts.
In 2024, a team at the Flatiron Institute produced a result with exactly that flavor. They simulated IBM’s 127-qubit Eagle processor on a laptop, producing results more accurate than the quantum device itself. They used tensor network methods that exploited structure in the quantum state—structure that, if the device were genuinely roaming its full computational space, shouldn’t have been there to exploit.
The mainstream interpretation: IBM’s circuits weren’t utilizing the full computational space effectively. The device was noisy, the algorithms weren’t optimized, and the tensor network found shortcuts through the portion of Hilbert space that was actually being used. Fair enough. But there’s another reading: the structure the classical simulation exploited might not be a flaw in IBM’s implementation. It might be a feature of how quantum computation actually works at that scale. One result doesn’t prove a ceiling. But it’s a data point that should make you curious.
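To see how a laptop can compete at all, compare parameter counts. A full state vector for 127 qubits needs 2^127 amplitudes; a matrix product state with bond dimension chi needs only on the order of n * chi^2 tensor entries. The sketch below is a generic tensor-network counting argument with an assumed bond dimension; it is not the specific method or parameters from the Tindall et al. paper, which used a more elaborate tensor-network scheme.

```python
# Parameter counting: full state vector vs. a matrix product state (MPS).
# The bond dimension is an assumed, illustrative value.

n = 127       # qubits, as in IBM's Eagle processor
chi = 500     # assumed MPS bond dimension

full_amplitudes = 2 ** n
mps_parameters = 2 * n * chi ** 2   # ~n tensors of shape (chi, 2, chi)

print(f"full state vector: {full_amplitudes:.2e} amplitudes")
print(f"MPS with chi={chi}: {mps_parameters:.2e} parameters")
print(f"compression ratio: {full_amplitudes / mps_parameters:.1e}")
```

A state that a modest-bond-dimension tensor network can track is, operationally, not occupying much of its 2^127-dimensional space. Whether that is a statement about this noisy device and this circuit, or about large quantum computations in general, is exactly the question Ceiling 4 poses.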
Ceiling 5: The theory is incomplete
This is the most heretical position and the hardest to test. Quantum mechanics, as formulated, might be missing something. There may be structure underneath the probability distributions—not the simple hidden variables that Bell’s theorem ruled out, but something more subtle that constrains outcomes in ways the current formalism doesn’t predict.
Bell’s theorem proved that no local hidden variable theory can reproduce quantum predictions. That’s a strong result. But it doesn’t rule out nonlocal structure, retrocausal models, or constraints that emerge only at scale.
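The sharpest form of that result is the CHSH inequality: any local hidden variable model keeps a particular combination of measurement correlations at or below 2, while quantum mechanics predicts up to 2√2 ≈ 2.83 for an entangled pair, and experiments side with quantum mechanics. A quick check of the quantum prediction, using the textbook singlet-state correlation E(a, b) = -cos(a - b) and the standard angle choices:

```python
# CHSH value for a spin singlet at the standard measurement angles.
# Local hidden variable models obey |S| <= 2; quantum mechanics reaches 2*sqrt(2).

import math

def E(a: float, b: float) -> float:
    """Correlation of spin measurements along angles a and b on a singlet state."""
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2               # Alice's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"|S| = {S:.4f}   (classical bound 2, quantum bound {2 * math.sqrt(2):.4f})")
```

What the violation rules out is the local hidden variable story. It says nothing, one way or the other, about the nonlocal, retrocausal, or scale-dependent possibilities.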
And the foundations are still producing surprises. In late 2025, Blasiak and Markiewicz published a paper in npj Quantum Information showing that particle identity alone—the bare fact that electrons are indistinguishable—generates nonlocality for nearly all quantum states. Not engineered entanglement. Not carefully prepared Bell pairs. Just... identical particles existing. They also showed that the classical Bell test framework doesn’t even apply to identical particles, because you can’t label them and send them to separate labs. The standard toolkit for testing nonlocality breaks down precisely where nonlocality turns out to be most fundamental.
If we’re still discovering what the basic axioms from the 1920s actually imply, confidence that we’ve mapped the full computational consequences of quantum mechanics seems premature.
At small qubit counts, any missing structure wouldn’t matter—the probability distributions are accurate enough. At the scale required for cryptographically relevant computation, hidden structure could impose correlations or limits that make the theoretical speedup unrealizable.
Not because of engineering. Because of physics.
Einstein thought quantum mechanics was incomplete. The last 60 years of experimental results have shown he was wrong about the specific mechanism he proposed. They have not shown he was wrong about incompleteness itself.
What this means for cryptography
The implications cascade directly from which ceiling is real.
Ceilings 1–2 mean Shor’s algorithm is delayed but eventually arrives. Post-quantum migration is urgent. Every year you wait is a year of encrypted traffic vulnerable to harvest-now-decrypt-later collection.
Ceiling 3 means quantum computing might arrive for some applications but not others: perhaps optimization and simulation, which can tolerate shorter coherent runs, but not the sustained, error-corrected computation required to break 2048-bit keys. Migration is prudent but the timeline stretches.
Ceilings 4–5 mean Shor’s at cryptographic scale may never work. The multi-billion dollar quantum computing industry would still produce useful machines for other problems, but the specific threat to public-key cryptography would be a false alarm.
The uncomfortable observation: nobody with funding has an incentive to determine which ceiling is real. The quantum computing industry needs the threat to be genuine to justify investment. The post-quantum cryptography industry needs it to be genuine to sell solutions. National security agencies need the ambiguity to justify budgets in both directions. Academic researchers need the open question to fund continued work.
I run a post-quantum migration company. I’m telling you this anyway, because here’s the thing: you don’t buy fire insurance because you’re certain your house will burn. You buy it because you can’t afford to be wrong.
The rational response to genuine uncertainty isn’t to pick a ceiling and bet on it. It’s crypto-agility—building systems that can swap cryptographic primitives without architectural upheaval. If Shor’s arrives on schedule, you’re ready. If it arrives late or never, you’ve still modernized infrastructure that was overdue for replacement. The organizations tracking their migration trajectory today aren’t just hedging against quantum threats. They’re building the ability to respond to any cryptographic disruption, from quantum or otherwise.
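In code, crypto-agility is mostly an exercise in indirection: call sites never name an algorithm, and the primitive in use is a configuration choice behind a stable interface. The sketch below is a minimal illustration of that pattern in Python; the registry keys allude to real algorithm families (ECDH, ML-KEM), but the implementations are empty stubs and nothing here is real or secure cryptography.

```python
# Minimal sketch of crypto-agility: call sites depend on an interface and a
# registry, so swapping algorithms is a configuration change. Implementations
# here are placeholder stubs, not real cryptography.

from dataclasses import dataclass
from typing import Callable, Dict, Tuple
import os

@dataclass
class KeyEncapsulation:
    """Interface every key-encapsulation mechanism must satisfy."""
    name: str
    generate_keypair: Callable[[], Tuple[bytes, bytes]]   # -> (public, private)
    encapsulate: Callable[[bytes], Tuple[bytes, bytes]]   # public -> (ciphertext, shared secret)
    decapsulate: Callable[[bytes, bytes], bytes]          # (private, ciphertext) -> shared secret

REGISTRY: Dict[str, KeyEncapsulation] = {}

def register(kem: KeyEncapsulation) -> None:
    REGISTRY[kem.name] = kem

def stub_kem(name: str) -> KeyEncapsulation:
    """Placeholder: random bytes stand in for the real operations."""
    return KeyEncapsulation(
        name=name,
        generate_keypair=lambda: (os.urandom(32), os.urandom(32)),
        encapsulate=lambda public: (os.urandom(32), os.urandom(32)),
        decapsulate=lambda private, ciphertext: os.urandom(32),
    )

register(stub_kem("classical-ecdh-p256"))
register(stub_kem("pq-ml-kem-768"))

# The only line that changes when you migrate.
ACTIVE_KEM = "pq-ml-kem-768"

def establish_session_key(peer_public: bytes) -> bytes:
    """Call sites ask the registry; they never name an algorithm directly."""
    kem = REGISTRY[ACTIVE_KEM]
    ciphertext, shared_secret = kem.encapsulate(peer_public)
    # ...transmit ciphertext to the peer; use shared_secret locally...
    return shared_secret
```

The point of the pattern is that single ACTIVE_KEM switch: when a primitive needs replacing, whether for quantum reasons or an ordinary break, the change is configuration rather than surgery.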
Certainty is a luxury. Agility is a strategy.
pqprobe tracks post-quantum migration over time. A scan tells you where you are. A trend tells you whether you’ll finish.
Sources:
- The Nobel Prize in Physics 2022. NobelPrize.org. Nobel Prize Outreach AB 2022. nobelprize.org/prizes/physics/2022/summary/
- Tindall, J., Fishman, M., Stoudenmire, E.M., Sels, D. “Efficient tensor network simulation of IBM’s Eagle kicked Ising experiment.” PRX Quantum, 2024. DOI: 10.1103/PRXQuantum.5.010308
- Blasiak, P., Markiewicz, M. “Nonlocality of particle identity.” npj Quantum Information, 2025.
- Gidney, C. and Ekerå, M. “How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits.” Quantum, 2021. arXiv:1905.09749