Why haven’t we got useful quantum computers yet?

Four years after Google first demonstrated the supremacy of quantum computers over ordinary ones, why aren't these exotic machines being used for practical problems?

Quantum computers have long promised to solve certain problems faster than any ordinary, or classical, computer can. In fact, Google delivered on this promise in 2019, when it declared that its quantum computer had achieved quantum supremacy, performing a calculation impossible for the best classical computers of the day. As New Scientist said at the time, Google had “secured yet another place in the history books”.

Quantum computers are here, but aren’t yet particularly useful (Image: John D/Getty Images)


Yet the next chapter of the quantum revolution is struggling to be written. Since Google’s breakthrough in 2019, other groups have made similar claims, but in each case improved algorithms for classical computers have reasserted dominance over quantum machines, or at least threatened to. With this back and forth ongoing, will quantum computers ever pull ahead?


While Google’s original result appeared to be impossible for classical computers to replicate in any reasonable time frame, in 2022, researchers managed to come up with a new algorithm to do just that. Settling the question of quantum supremacy once and for all will depend on both the number of qubits, or quantum bits, used and the complexity with which they are programmed, referred to as circuit depth. Only when a computer scores high enough on both counts will the results be out of reach for any classical computing or algorithmic improvements.

“Eventually, the number of qubits will become large enough that no classical algorithm can catch up, but it’s unclear at which point that is – which is one thing that Google are trying to figure out,” says Bill Fefferman at the University of Chicago, Illinois.


Google’s original result demonstrated a task called random circuit sampling, which involves reading out the values of qubits after they have undergone a sequence of random operations and checking that the results are distributed the way quantum theory predicts. It used 54 superconducting qubits for 20 cycles – the number of rounds of random operations, which relates to the circuit depth.
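To make that concrete, here is a minimal sketch of random circuit sampling on a toy classical simulator, written in Python. It is an illustration under assumptions of our own – the qubit and cycle counts, the choice of gates and the brute-force statevector simulation are all made up for the example – not a description of Google’s hardware or software.

```python
# Toy random circuit sampling, simulated classically with numpy.
# Everything here (qubit count, gates, cycle structure) is an illustrative
# assumption, not Google's experiment.
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 5   # the real experiments used 53-70 qubits
n_cycles = 4   # the real experiments used 20-24 cycles

def random_single_qubit_gate():
    # Haar-random 2x2 unitary via QR decomposition of a random complex matrix
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    phases = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * phases

def apply_gate(state, gate, qubit, n):
    # Apply a single-qubit gate to one qubit of an n-qubit statevector
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [qubit]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

def apply_cz(state, q1, q2, n):
    # Controlled-Z entangling gate: flip the sign where both qubits read 1
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1
    return state.reshape(-1)

state = np.zeros(2 ** n_qubits, dtype=complex)
state[0] = 1.0
for _ in range(n_cycles):
    for q in range(n_qubits):
        state = apply_gate(state, random_single_qubit_gate(), q, n_qubits)
    for q in range(0, n_qubits - 1, 2):
        state = apply_cz(state, q, q + 1, n_qubits)

# Sample output bitstrings, as the quantum hardware itself would produce them
probs = np.abs(state) ** 2
probs /= probs.sum()
samples = rng.choice(2 ** n_qubits, size=10, p=probs)
print([format(int(s), f"0{n_qubits}b") for s in samples])
```

The memory and time this brute-force approach needs double with every added qubit, which is precisely why classical simulation becomes untenable long before 70 qubits.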


Increased complexity

In April this year, researchers from Google performed this same feat, but with 70 qubits for 24 cycles. Though the increase may seem modest, the jump in complexity is large and, the firm hopes, enough to make the classical-quantum gap more permanent. A calculation on its 70-qubit machine would now take the best supercomputers 47 years to replicate, Google claims.


At the moment, this stands as the best demonstration of quantum supremacy, yet to be bested by classical computers. But these 70 qubits aren’t perfect – they are plagued by “noise”, which makes it difficult to verify that the computer is fully taking advantage of its quantum nature and isn’t vulnerable to classical advancements. Researchers at Google are now working out how to prove and quantify that the computer is performing a truly quantum task, and how this noise affects that measurement.


So far, they have done this through a benchmark that uses a classical computer to predict the outputs for a quantum machine and then calculates the difference between the final answers. The larger the difference, the more complex the quantum system.
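One widely used form of this comparison is the linear cross-entropy benchmark, which scores the device’s samples against ideally simulated probabilities. The sketch below is a hedged illustration of that scoring formula, not Google’s actual benchmarking pipeline; it assumes the ideal probabilities can be computed classically, which is only possible for small circuits.

```python
# Hedged sketch of a linear cross-entropy benchmark (XEB). It assumes we can
# classically compute the ideal output probabilities - feasible only for
# small circuits, which is the crux of the whole debate.
import numpy as np

def linear_xeb(ideal_probs, sampled_bitstrings, n_qubits):
    # F_XEB = 2^n * mean(ideal probability of each observed bitstring) - 1
    # Around 1 for a perfect device, drifting towards 0 for pure noise
    # (uniformly random output).
    mean_p = np.mean([ideal_probs[b] for b in sampled_bitstrings])
    return (2 ** n_qubits) * mean_p - 1

# Reusing the toy simulator above:
# print(linear_xeb(probs, samples, n_qubits))
```

Sampling from the exact simulated distribution, as the toy above does, scores around 1 on average; a device swamped by noise drifts towards 0.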


But it was unclear how faithful this measure was to the true nature of the quantum computer, and at what point noise made it a useless measurement. Google and, in a separate result, Fefferman and his colleagues have pinpointed the exact level of noise at which this benchmark can still be used effectively for a quantum computer with a certain number of qubits. “It’s really important because it gives us a benchmark by which we can compare, in an apples-to-apples way, successive generations of these experiments,” says Fefferman.

Researchers at the University of Science and Technology of China (USTC) have also demonstrated quantum supremacy using 56 qubits of a superconducting quantum computer called Zuchongzhi – a similar kind of hardware to Google’s – and they are also working on an alternative quantum computing design that uses photons as qubits. This machine, called Jiuzhang, has demonstrated quantum advantage, but comes with some unique challenges.


Jiuzhang performs boson sampling, which measures a sample of photons that have bounced around a maze of mirrors and beam splitters. Above a certain number of photons, classical computers can’t accurately predict the outcomes of these measurements. Verifying that the measurements are truly quantum isn’t straightforward – in fact, a coherent way to do so doesn’t currently exist. “The theory to certify these machines is still largely an open question,” says Nicolás Quesada at Polytechnique Montréal in Canada.
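The classical difficulty traces back to linear algebra. In the original theoretical proposal for boson sampling, the probability of detecting a particular pattern of photons involves the permanent of a matrix describing the optical network (Jiuzhang’s Gaussian variant involves a close relative called the hafnian), and the best known algorithms for these quantities scale exponentially with photon number. The sketch below shows Ryser’s formula for the permanent as an illustration of that bottleneck; it is not the USTC team’s code, and the example matrix is made up.

```python
# Hedged sketch: the classically hard core of boson sampling is evaluating
# matrix permanents. This is Ryser's formula, with cost growing roughly as
# 2^n - an illustration, not the USTC team's code.
from itertools import combinations
import numpy as np

def permanent(A):
    # Ryser's formula: perm(A) = (-1)^n * sum over column subsets S of
    # (-1)^|S| * product over rows i of (sum of A[i, j] for j in S)
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

# For the all-ones n-by-n matrix the permanent is n! (here 3! = 6)
print(permanent(np.ones((3, 3))))
```

Each extra photon roughly doubles the work, which is why calculations that are easy for a handful of photons become hopeless for the dozens that Jiuzhang detects.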


Because of this, the researchers’ results are vulnerable to classical breakthroughs. USTC claimed its original Jiuzhang result would take 600 million years to verify classically, but in 2022 a group of researchers showed that it could instead be performed in several months, due to a loophole in how the photons were measured by detectors. In April, USTC fixed this loophole by using a new kind of photon detector and reaffirmed its quantum advantage – but without a coherent means to verify this advantage, classical improvements could still chip away at it, says Quesada.


Practical problems

The USTC team is focused on firming up its quantum advantage and understanding how its machines work, but no practical use has yet been found for that advantage, though not for lack of trying.

In February, researchers from USTC published a paper exploring how boson sampling might apply to graph problems, which are mathematical problems that can be practically useful for things like drug design and machine learning. “The way that we describe a quantum computer, the kind of mathematical framework that’s used to do that, is very similar to other interesting mathematical frameworks,” says Naomi Solomons at the University of Bristol, UK.
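One concrete example of the overlap Solomons describes: the photon-detection statistics of a Gaussian boson sampler are governed by a matrix function called the hafnian, and the hafnian of a graph’s adjacency matrix counts the graph’s perfect matchings – a quantity that crops up in graph problems. The sketch below illustrates only that mathematical link; the function and the example graph are made up for illustration and are not taken from the USTC paper.

```python
# Hedged illustration: the hafnian of a graph's adjacency matrix counts its
# perfect matchings, and hafnians also govern Gaussian boson sampling
# statistics. Made up for illustration; not the USTC paper's method.
import numpy as np

def hafnian(A):
    # Recursive definition: pair vertex 0 with each neighbour j in turn and
    # recurse on the graph with both vertices removed.
    n = A.shape[0]
    if n == 0:
        return 1
    if n % 2 == 1:
        return 0  # a graph with an odd number of vertices has no perfect matching
    total = 0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        total += A[0, j] * hafnian(A[np.ix_(keep, keep)])
    return total

# A 4-vertex cycle has exactly two perfect matchings
cycle4 = np.array([[0, 1, 0, 1],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 0, 1, 0]])
print(hafnian(cycle4))  # -> 2
```

This naive recursion, like the permanent above, blows up exponentially as graphs grow – which is exactly the kind of computation a photonic machine might, in principle, sample its way around.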


While the authors concluded that boson sampling could help perform certain graph problems much faster, they ran into the same verification problem as before and couldn’t rule out the possibility that classical algorithms might deliver the same performance boost.


Mapping real-world problems to quantum computers, and vice versa, will probably make up a large part of research and development in the coming years, says Jay Gambetta at IBM. “We can say that quantum processors are getting to this utility-scale size, but I don’t think we are doing enough as a community to work out what circuits we’re going to run – I think that problem is as hard as the others.”

Gambetta and his colleagues are part of four separate working groups with scientists in other fields, looking at how current quantum machines might be applied to problems in areas such as high-energy physics, materials, life sciences and finance. In July, the first results from the high-energy physics group, following discussions at CERN in Switzerland, were published. Specific problems, such as how particles bounce off each other and how particle pairs separate, were highlighted as having particular promise for quantum machines in the near future.


Instead of marking the point at which quantum computers can be said to finally have advantage over classical machines using benchmarks and mathematical proofs, it might make more sense to define it as when scientists in other fields choose to use quantum computers for their work, says Gambetta. “I think there will be many quantum advantages, but I think when it comes from someone that is not a quantum information scientist, that’s when I’m going to care,” he says.
