Physicists take a key step in correcting quantum computer errors

Like a child learning math, scientists developing quantum computers—dream machines that could crack problems that would overwhelm any supercomputer—are learning to spot and correct their mistakes. In the latest step, a team has demonstrated a way to detect errors in the setting of a quantum bit—or qubit—that’s guaranteed not to make matters worse. Such “fault tolerance” is a necessary step toward the grand goal of maintaining finicky qubits so that they can be manipulated indefinitely.

“It looks like a real milestone,” says Scott Aaronson, a theoretical computer scientist at the University of Texas, Austin, who wasn’t involved with the work. “We knew that it was just a matter of time until someone did this.” However, John Martinis, an experimental physicist at the University of California, Santa Barbara, questions whether the authors of the new work are overstating what they’ve done. “It’s a very nice step,” he says. “But it’s just a step.”

A conventional computer manipulates tiny electrical switches, or bits, that can be set to either 0 or 1; a quantum computer employs qubits that can be set to 0 and 1 simultaneously. A qubit can be, for example, a tiny circuit of superconducting metal with two different energy states; or an individual ion spinning one way, the other, or both ways at once. Thanks to such both-ways-at-once states, a quantum computer can encode all of the potential solutions to certain problems as quantum waves sloshing through the qubits. Interference cancels out the wrong solutions and the right one emerges. Such techniques would enable a large quantum computer to quickly factor huge numbers, something that’s hard for an ordinary computer, and thus break encryption schemes used to protect information on the internet.

The slightest disturbance can mangle a qubit’s delicate state, however. Were a qubit like an ordinary bit, researchers could simply make redundant copies of it and count the majority to retain the proper state. If a copy does flip, then summing up various subsets of the bits—so-called parity checks—will reveal which one. But quantum theory forbids copying one qubit’s state onto another. Even worse, any attempt to measure a qubit to see whether it’s in the correct state makes it collapse to either 0 or 1.
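The classical version of this idea—redundant copies plus parity checks—can be sketched in a few lines. This is a minimal illustration of the three-bit repetition code, with helper names of my own choosing:

```python
# A 3-bit repetition code: copy the bit, use parity checks to find
# which copy (if any) flipped, and majority-vote to recover the value.

def encode(bit):
    """Make three redundant copies of a classical bit."""
    return [bit, bit, bit]

def parity_checks(bits):
    """Sum (XOR) neighboring pairs of copies; the pattern of
    disagreements pinpoints a single flipped copy."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    """Majority vote restores the original value after one flip."""
    return max(set(bits), key=bits.count)

word = encode(1)                 # [1, 1, 1]
word[2] ^= 1                     # noise flips the third copy -> [1, 1, 0]
assert parity_checks(word) == (0, 1)   # copies 1,2 agree; 2,3 disagree
assert correct(word) == 1              # majority vote recovers the bit
```

As the article explains, this strategy fails directly for qubits: the no-cloning theorem forbids the copying step, and reading out a copy would collapse its state.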

Researchers get around these problems by exploiting a quantum connection called entanglement, which allows them to spread the state of an initial “logical” qubit—the thing that will eventually perform the desired operation—among several physical qubits. So, for example, a 0-and-1 state of one qubit can be spread to three qubits in a state in which all three are 0 and simultaneously all three are 1. Researchers can then entangle more ancillary qubits with the group and, in the quantum equivalent of parity checks, measure the ancillary qubits to detect errors in the main qubits—without ever touching them.

In reality, the scheme is much more complicated, as developers must guard against two distinct types of errors, known as bit flips and phase flips. Still, scientists have been making progress. In June, researchers with Google, who use superconducting qubits, showed they could reduce the incidence of one type of error or the other—but not both at once—if they spread a logical qubit over as many as 11 physical ones with 10 ancillas.
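The two error types can be seen concretely on a single-qubit state vector (a, b) standing for a|0⟩ + b|1⟩—a hedged sketch using the standard Pauli matrices, not any particular team's code:

```python
# Bit flips vs. phase flips on a single qubit: a bit flip (Pauli X)
# swaps the amplitudes of |0> and |1>, while a phase flip (Pauli Z)
# negates the sign of the |1> term.
import numpy as np

X = np.array([[0, 1], [1, 0]])   # bit flip
Z = np.array([[1, 0], [0, -1]])  # phase flip

state = np.array([0.6, 0.8])             # a|0> + b|1>
assert np.allclose(X @ state, [0.8, 0.6])    # amplitudes swapped
assert np.allclose(Z @ state, [0.6, -0.8])   # relative sign flipped
```

A code that catches only X errors misses Z errors, and vice versa, which is why correcting both at once—as the new work does—is the harder task.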

Now, Laird Egan and Christopher Monroe, physicists at the University of Maryland (UMD), College Park, and colleagues have gone a step further and demonstrated a scheme that simultaneously corrects both types of flips—and, thus, any error. Their qubits consist of individual ytterbium ions trapped in an electromagnetic field on the surface of a chip. The team used nine ions to encode a single logical qubit and four more ancillary ones to keep tabs on the main ones.

Most important, the encoded logical qubit performed better than the physical ones on which it depends, at least in some ways. For example, the researchers succeeded in preparing either the logical 0 or the logical 1 state 99.67% of the time—better than the 99.54% for the individual qubits. “This is really the first time that the quality of the [logical] qubit is better than the components that encode it,” says Monroe, who is cofounder of IonQ, a company developing ion-based quantum computers.

However, Egan notes, the encoded qubit did not outshine the individual ions in every way. Instead, he says, the real advance is in demonstrating fault tolerance, which means the error-correcting machinery works in a way that doesn’t introduce more errors than it corrects. “Fault tolerance is really the design principle that prevents errors from spreading,” says Egan, now at IonQ.

Martinis questions that use of the term, however. To claim true fault-tolerant error correction, he says, researchers must do two other things. They must show that the errors in a logical qubit get exponentially smaller as the number of physical qubits increases. And they must show they can measure the ancillary qubits repeatedly to maintain the logical qubit, he says.

Egan agrees that those are the obvious next steps for the UMD and IonQ teams. He notes that reaching the stage where the encoded logical qubit outperforms the underlying physical qubits in all ways requires the latter to have a low enough error rate to begin with. “That will be a big result when it happens, and everybody is pushing for it,” Egan says. “But it hasn’t happened yet.”

source: sciencemag.org