
The Day the Clock Stopped
In October 2019, a Google paper landed in Nature with a claim that rattled the computing world: their 53-qubit Sycamore processor had completed a specific calculation in 200 seconds — a task they estimated would take the world's fastest classical supercomputer 10,000 years.
The number was contested. IBM replied within days that their Summit supercomputer could solve the same problem in 2.5 days given enough disk space. But arguing over the exact speedup missed the point. Something had shifted. Quantum computing had crossed from theoretical physics into experimental engineering, and every major technology company took notice.
Classical vs Quantum: What Actually Differs
A classical bit is a transistor — a tiny switch that is either on or off. Billions of these switches, flipping billions of times per second, give us modern computing. It is deterministic, binary, and extraordinarily well-understood.
A qubit is different at a fundamental level.
Superposition. A qubit can exist in a weighted combination of |0⟩ and |1⟩ simultaneously, described by two complex amplitudes α and β where |α|² + |β|² = 1. Only upon measurement does it resolve to a definite state.
Entanglement. Two qubits can be correlated such that measuring one instantly determines the outcome of measuring the other, regardless of the distance between them. This non-local correlation is a resource that classical computers cannot replicate.
Interference. Quantum algorithms deliberately manipulate phase so that wrong answers cancel (destructive interference) and correct answers reinforce (constructive interference). This, not raw parallelism, is how quantum computers achieve speedups.
The combination of these three properties is what makes certain computations exponentially faster on quantum hardware. Not all computations — most, in fact, gain nothing from quantumness. But for a specific class of problems including factoring, simulation, and search, the advantage can be staggering.
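All three properties can be seen directly in a small state-vector simulation. The sketch below uses plain NumPy rather than any quantum SDK, with hand-written Hadamard and CNOT matrices; it illustrates the mathematics above, not real hardware.

```python
import numpy as np

# Single-qubit basis state and Hadamard gate as plain numpy arrays.
ket0 = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Superposition: H|0> = (|0> + |1>)/sqrt(2), both outcomes equally likely.
plus = H @ ket0
print(np.abs(plus) ** 2)    # measurement probabilities |alpha|^2, |beta|^2

# Interference: applying H twice returns |0> with certainty, because the
# two paths to |1> carry opposite phases and cancel.
back = H @ plus
print(np.abs(back) ** 2)

# Entanglement: H on one qubit followed by CNOT produces the Bell state
# (|00> + |11>)/sqrt(2); the two measurements are perfectly correlated.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, ket0)
print(np.abs(bell) ** 2)    # only |00> and |11> have nonzero probability
```

The 50/50 outcome never appears in `bell`'s middle entries: |01⟩ and |10⟩ carry zero amplitude, which is the correlation described above.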
The Key Players
The race for scalable quantum hardware is genuinely global, with radically different physical approaches competing.
Google chose superconducting qubits — tiny circuits of niobium cooled to 15 millikelvin, colder than outer space. Their Sycamore chip and the newer Willow processor pursue scale through engineering precision. In 2024, Willow demonstrated below-threshold error correction: adding more physical qubits actually reduced the error rate, a milestone the field had chased for decades.
IBM takes a more open approach, offering cloud access to their quantum systems since 2016. Their Eagle, Osprey, and Condor chips track qubit count aggressively — Condor reached 1,121 physical qubits in 2023. IBM's bet is that software, tools, and ecosystem matter as much as raw hardware performance.
IonQ uses trapped ions rather than superconducting circuits. Individual ytterbium atoms are held in place by electromagnetic fields and manipulated with lasers. Gate operations are slower, but trapped ions achieve dramatically longer coherence times and naturally higher-fidelity operations.
Quantinuum (formerly Cambridge Quantum + Honeywell Quantum) also pursues trapped ions, claiming the highest quantum volume — a combined measure of qubit count, connectivity, and error rate — of any commercial system.
PsiQuantum is taking a different bet entirely, building photonic quantum computers using standard semiconductor fabs. They are not racing to demonstrate intermediate systems — they are betting that the only path to fault tolerance runs through photons and silicon fabrication at scale.
The Hardest Problem: Decoherence
Every qubit technology faces the same fundamental enemy: decoherence.
Quantum states are fragile. The superposition that gives a qubit its power is destroyed the moment the qubit interacts with its environment — a stray photon, a vibration, a fluctuating magnetic field. This collapse happens in microseconds for superconducting qubits, seconds for trapped ions. Either way, it is far too fast for long computations.
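A rough back-of-the-envelope model makes the constraint concrete. If phase coherence decays as exp(−t/T2), then the number of gates that fit within one coherence time bounds the useful circuit depth. The T2 and gate-time figures below are illustrative order-of-magnitude assumptions, not measured values for any specific device.

```python
import math

def coherence_remaining(t_us: float, t2_us: float) -> float:
    """Rough model: phase coherence decays as exp(-t / T2)."""
    return math.exp(-t_us / t2_us)

# Illustrative assumptions (times in microseconds):
#   superconducting qubit: T2 ~ 100 us, gate ~ 25 ns
#   trapped ion:           T2 ~ 1 s,    gate ~ 100 us
for name, t2_us, gate_us in [("superconducting", 100.0, 0.025),
                             ("trapped ion", 1_000_000.0, 100.0)]:
    # Gates that fit inside one T2 (coherence down to ~1/e by then):
    budget = int(t2_us / gate_us)
    print(f"{name}: roughly {budget} gates per coherence time")
```

Both platforms end up with a finite gate budget in the thousands: ions are vastly more stable per second, but their gates are also far slower, so neither escapes the problem outright.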
Why decoherence is hard
The very act of measuring a qubit to check for errors can destroy the quantum state you are trying to preserve. Quantum error correction works around this using redundancy — encoding one logical qubit across many physical qubits, monitoring for error syndromes without directly measuring the data qubits. The overhead is enormous: current estimates suggest thousands of physical qubits per logical qubit for fault-tolerant computation.
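The redundancy idea has a classical analogue that is easy to simulate: a three-bit repetition code decoded by majority vote. The Monte Carlo sketch below is purely classical, so it sidesteps the measurement problem entirely, but it shows why encoding helps: a code that corrects any single flip turns a physical error rate p into a logical rate of roughly 3p² − 2p³, which is much smaller when p is small.

```python
import random

def logical_error_rate(p: float, trials: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo estimate of the 3-bit repetition code's failure rate."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        # Encode logical 0 as (0, 0, 0); each bit flips independently
        # with probability p.
        bits = [1 if rng.random() < p else 0 for _ in range(3)]
        # Majority vote corrects any single flip; two or more flips
        # produce a logical error.
        if sum(bits) >= 2:
            failures += 1
    return failures / trials

p = 0.05
print(f"physical error rate {p}, logical error rate {logical_error_rate(p)}")
```

Quantum codes pay a far steeper price because they must detect both bit-flip and phase-flip errors without reading the data, which is where the thousands-to-one physical-to-logical overhead comes from.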
This is why the gap between "quantum supremacy" and "quantum utility" remains so large. Today's systems — called NISQ devices (Noisy Intermediate-Scale Quantum) — are too noisy and too small for the algorithms that would deliver the most value: Shor's factoring algorithm, large-scale quantum chemistry simulation, optimisation at industrial scale.
The path forward requires two things happening in parallel: more physical qubits, and lower error rates per operation. Google's Willow chip achieving below-threshold error correction in December 2024 was the first credible evidence that both can happen together.
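"Below threshold" has a precise meaning, which the standard surface-code scaling heuristic p_logical ≈ A · (p/p_th)^((d+1)/2) makes visible: when the physical error rate p sits below the threshold p_th, growing the code distance d suppresses logical errors exponentially; above it, adding qubits makes things worse. The constants A and p_th below are illustrative assumptions, not measured values for any real chip.

```python
def logical_rate(p: float, d: int, p_th: float = 1e-2, A: float = 0.1) -> float:
    """Surface-code scaling heuristic: p_logical ~ A * (p/p_th)^((d+1)/2)."""
    return A * (p / p_th) ** ((d + 1) / 2)

for d in (3, 5, 7):
    below = logical_rate(5e-3, d)   # physical error rate below threshold
    above = logical_rate(2e-2, d)   # physical error rate above threshold
    print(f"d={d}: below-threshold {below:.2e}, above-threshold {above:.2e}")
```

Running the loop shows the two regimes diverging: the below-threshold column shrinks with every increase in d while the above-threshold column grows, which is why crossing that line was the milestone the field had been chasing.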
What Comes Next
The honest answer is that no one knows the timeline. Experts' estimates for fault-tolerant quantum computing range from five years to never, depending on how optimistic they are about error rates and manufacturing yield.
What is clear: the architecture of computing is being fundamentally reconsidered. Classical supercomputers are not going away — quantum computers will almost certainly be co-processors, accelerating specific subroutines rather than replacing general-purpose hardware. The analogy often used is the GPU: a specialised accelerator that transformed certain workloads without replacing the CPU.
The race for quantum supremacy was always a proxy war for something larger — the question of whether human beings can engineer systems that exploit the full strangeness of quantum mechanics at scale. That question is still open.
By Quantum Wallah