With a new 105-qubit chip called Willow, Google extended the lifetime of quantum information.
Photo: Google Quantum AI.
Dear Commons Community,
In a long-awaited advance, researchers at Google have shown they can suppress errors in the finicky quantum bits critical to the promise of quantum computing. By spreading one “logical” qubit of information across multiple redundant physical qubits, they enabled it to survive longer than the fragile quantum state of any of the physical qubits, they report this week in Nature. The result was covered in Science.
“This result is what convinces me that we can actually build a big quantum computer that will work,” says Kevin Satzinger, a physicist with Google Quantum AI. Scott Aaronson, a theoretical computer scientist at the University of Texas at Austin, says the work “very clearly represents an exciting milestone for the field.” But he notes that researchers using other types of qubits are also closing in on practical error correction.
Unlike conventional bits, which can be set only to 0 or 1, a qubit can also be put in a weird 0-and-1 state. That property could enable a full-fledged quantum computer to solve certain problems that would overwhelm the best conventional supercomputer. For example, it could factor huge numbers and crack the encryption algorithms that until recently set the standards for protecting information on the internet. Today’s quantum computers can’t do anything like that, however, because their qubits can’t maintain their delicate two-way states long enough.
Google’s qubits are tiny superconducting circuits that slosh with current. A lower energy state represents 0 and a higher state represents 1. Microwaves can ease the circuit into one state or the other—or both at once. But the quantum state of a superconducting qubit persists for just a fraction of a millisecond before environmental noise scrambles it, causing, for example, 0 and 1 to flip.
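To get a feel for that timescale, here is a minimal back-of-the-envelope sketch in Python (my own illustration, with an assumed coherence time of roughly 100 microseconds rather than any measured Google figure) of how quickly an exponentially decaying qubit state fades:

```python
import math

# Illustrative only: assume the stored quantum state decays exponentially
# with a coherence time T of about 100 microseconds (an assumed value).
T = 100e-6  # seconds

for t_us in (10, 50, 100, 500, 1000):
    t = t_us * 1e-6
    survival = math.exp(-t / T)  # probability the state is still intact after time t
    print(f"after {t_us:4d} microseconds: survival ~ {survival:.3f}")
```

With these assumed numbers the state is mostly gone well before a single millisecond has passed, which is why error correction is needed at all.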
Ordinary computers can correct for errors by simply making copies of a bit. The computer takes the reading of the majority of the bits as the true state of the “logical” bit. By comparing pairs of bits, it can even deduce which ones flipped. In quantum mechanics, however, a “no-cloning” theorem forbids the copying of one qubit’s state on to another. And even if cloning were possible, the act of measuring a qubit’s precarious two-way state generally squashes it to be either 0 or 1.
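As a concrete picture of that classical scheme, here is a minimal sketch (my own illustration, not from the article) of a three-bit repetition code: the logical bit is read out by majority vote, and comparing pairs of copies pinpoints which one flipped:

```python
def encode(bit):
    """Copy one logical bit onto three physical bits."""
    return [bit, bit, bit]

def decode(bits):
    """Majority vote recovers the logical bit even if one copy flipped."""
    return 1 if sum(bits) >= 2 else 0

def locate_flip(bits):
    """Compare pairs of bits; the pattern of disagreements pinpoints a single flip."""
    s01 = bits[0] ^ bits[1]          # do copies 0 and 1 agree?
    s12 = bits[1] ^ bits[2]          # do copies 1 and 2 agree?
    return {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[(s01, s12)]

word = encode(1)                      # [1, 1, 1]
word[2] ^= 1                          # noise flips the third copy -> [1, 1, 0]
print(decode(word))                   # 1  (logical bit survives)
print(locate_flip(word))              # 2  (and the flipped copy is identified)
```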
To correct a qubit’s state without copying or measuring it, researchers first need to spread it across other qubits using a subtle quantum link called entanglement. To make a single logical qubit, for example, a qubit in the 0-and-1 state can be entangled with two others so that all three are 0 and, simultaneously, all three are 1. Researchers also entangle an “ancillary” qubit with each pair of “data” qubits to keep tabs on them. By measuring just the ancillas, researchers can detect whether any of the data qubits flip without touching them. In principle, they can then flip a disturbed data qubit back.
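The same logic can be sketched in a toy simulation. The snippet below (a NumPy illustration of the textbook three-qubit bit-flip code, not Google’s surface code) spreads one qubit’s amplitudes across three entangled qubits, extracts only the two pairwise parity checks an ancilla would report, and uses them to locate and undo a flip without ever measuring the encoded amplitudes themselves:

```python
import numpy as np

# Toy three-qubit bit-flip code, simulated as an 8-entry state vector.
# Basis ordering: |q2 q1 q0>  ->  index q2*4 + q1*2 + q0.
alpha, beta = 0.6, 0.8                 # arbitrary amplitudes of the encoded qubit
state = np.zeros(8, dtype=complex)
state[0b000] = alpha                   # alpha * |000>
state[0b111] = beta                    # beta  * |111>  (all three 0 and, simultaneously, all three 1)

def flip(state, qubit):
    """Apply a bit-flip (X) error to one physical qubit."""
    new = np.zeros_like(state)
    for idx, amp in enumerate(state):
        new[idx ^ (1 << qubit)] += amp
    return new

def parity(state, qa, qb):
    """Do qubits qa and qb agree (0) or disagree (1)?  This is what an ancilla
    entangled with the pair reports; it reveals nothing about alpha or beta."""
    disagree = sum(abs(amp) ** 2 for idx, amp in enumerate(state)
                   if ((idx >> qa) & 1) != ((idx >> qb) & 1))
    return int(round(disagree))        # deterministic here, since the error is a definite flip

state = flip(state, 1)                 # environmental noise flips data qubit 1

# Syndrome extraction: the two parity checks pinpoint the flipped qubit.
syndrome = (parity(state, 0, 1), parity(state, 1, 2))
flipped = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]
if flipped is not None:
    state = flip(state, flipped)       # flip it back

print(np.isclose(state[0b000], alpha), np.isclose(state[0b111], beta))  # True True
```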
In reality, the simplest error-correcting scheme requires a square grid of data qubits and interleaved ancillas. If the physical qubits are too flaky, the errors just proliferate. But if the physical qubits and their interactions are sufficiently clean, then expanding the array of qubits makes the state of the encoded logical qubit more robust. At some point the logical qubit passes the threshold at which it lasts longer than the state of the physical qubits, explains Michael Newman, a physicist at Google. “Threshold is basically a magic line in the sand where error correction goes from hurting to helping.”
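That threshold behavior has a simple quantitative signature. In textbook treatments, the logical error rate of a distance-d surface code scales roughly as A·(p/p_th)^((d+1)/2), where p is the physical error rate and p_th the threshold. The sketch below plugs in purely illustrative numbers (A, p, and p_th are my assumptions, not measured values) to show how growing the grid helps below threshold and hurts above it:

```python
# Purely illustrative numbers: p is the physical error rate per cycle,
# p_th the threshold, d the code distance, A a constant prefactor.
def logical_error(p, d, p_th=0.01, A=0.05):
    """Rough textbook scaling: below threshold, each step in d cuts the error rate."""
    return A * (p / p_th) ** ((d + 1) // 2)

for d in (3, 5, 7):                      # 9, 25, 49 data qubits correspond to d = 3, 5, 7
    good = logical_error(p=0.005, d=d)   # physical qubits cleaner than threshold
    bad = logical_error(p=0.020, d=d)    # physical qubits noisier than threshold
    print(f"d={d}: below threshold {good:.4f}   above threshold {bad:.4f}")
```

With these made-up numbers the logical error rate halves with each step in distance below threshold and doubles above it, which is the qualitative pattern the Google team reports.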
Google has now passed that line. Previously the researchers had shown that as the size of their logical qubit increased, its error rate edged down just slightly. Now, they have improved things so that, as the logical qubit expands from nine to 25 to 49 physical data qubits, the error rate falls by a factor of two at each step. The largest logical qubit has a lifetime of 291 microseconds, 2.4 times longer than any of the physical qubits. “This is indeed a very convincing demonstration of error suppression improving exponentially with the [grid] size,” says Barbara Terhal, a physicist at the Delft University of Technology. “Google is the first team to achieve this.”
That may be open to debate. In December 2023, researchers at Harvard University who use individual atoms as qubits showed they could reduce the error rate in a logical qubit by encoding it on bigger grids of atoms. In 2023, a team at Yale University demonstrated beyond-threshold error correction in an experiment in which the qubits were modes of microwaves in a hollow aluminum cylinder. But Google researchers did something unprecedented, says John Preskill, a theoretical physicist at the California Institute of Technology: They decoded the ancillary qubits repeatedly on the fly, which will be essential for using logical qubits in computation.
Now, they can try basic operations with several logical qubits, says Charina Chou, Google Quantum AI’s chief operating officer. “You can imagine having multiple smaller logical qubits instead of one bigger logical qubit, and testing out those interacting.”
But the team still has a long way to go to reach its goal of a 1-million-qubit, fully error-corrected machine, notes Irfan Siddiqi, a physicist at the University of California, Berkeley. And it could hit serious snags along the way. In the new work “the physics is great,” Siddiqi says. “But I wouldn’t buy stock just yet.”
We are getting a little closer to quantum computing.
Tony