Defining and detecting quantum speedup

  1. Troels F. Rønnow
  2. Zhihui Wang
  3. Joshua Job
  4. Sergio Boixo
  5. Sergei V. Isakov
  6. David Wecker
  7. John M. Martinis
  8. Daniel A. Lidar
  9. Matthias Troyer
The development of small-scale digital and analog quantum devices raises the question of how to fairly assess and compare the computational power of classical and quantum devices, and of how to detect quantum speedup. Here we show how to define and measure quantum speedup in various scenarios, and how to avoid pitfalls that might mask or fake quantum speedup. We illustrate our discussion with data from a randomized benchmark test on a D-Wave Two device with up to 503 qubits. Comparing the performance of the device on random spin glass instances with limited precision to simulated classical and quantum annealers, we find no evidence of quantum speedup when the entire data set is considered, and obtain inconclusive results when comparing subsets of instances on an instance-by-instance basis. Our results for one particular benchmark do not rule out the possibility of speedup for other classes of problems and illustrate that quantum speedup is elusive and can depend on the question posed.
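
For readers who want a concrete picture of this kind of benchmarking logic, the sketch below (not the paper's code) computes a time-to-solution figure of merit from per-run success probabilities and compares how it scales with problem size for two solvers; the sizes, probabilities, single-run time and 99% target are all illustrative assumptions, not measured data.

```python
# Minimal sketch: comparing time-to-solution (TTS) scaling of two solvers,
# in the spirit of the speedup discussion above. All numbers are hypothetical.
import numpy as np

def time_to_solution(run_time, p_success, target=0.99):
    """Expected time to see the ground state at least once with probability
    `target`, given per-run success probability `p_success` and per-run time."""
    return run_time * np.log(1.0 - target) / np.log(1.0 - p_success)

def scaling_exponent(sizes, tts):
    """Slope of log(TTS) versus problem size; characterizes how cost grows."""
    slope, _ = np.polyfit(sizes, np.log(tts), 1)
    return slope

# Hypothetical benchmark: median per-run success probabilities at several sizes.
sizes = np.array([128, 200, 288, 392, 503])
p_classical = np.array([0.40, 0.25, 0.15, 0.08, 0.04])   # e.g. simulated annealing
p_quantum   = np.array([0.35, 0.22, 0.14, 0.08, 0.045])  # e.g. annealing device

tts_c = time_to_solution(run_time=1.0, p_success=p_classical)
tts_q = time_to_solution(run_time=1.0, p_success=p_quantum)

# A speedup claim should rest on how TTS *scales* with size, not on the ratio
# at a single size, which can be dominated by constant overheads.
print("classical scaling exponent:", scaling_exponent(sizes, tts_c))
print("quantum scaling exponent:  ", scaling_exponent(sizes, tts_q))
print("TTS ratio per size:", tts_c / tts_q)
```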

Error corrected quantum annealing with hundreds of qubits

  1. Kristen L. Pudenz
  2. Tameem Albash
  3. Daniel A. Lidar
Quantum information processing offers dramatic speedups, yet is famously susceptible to decoherence, the process whereby quantum superpositions decay into mutually exclusive classical alternatives, thus robbing quantum computers of their power. This has made the development of quantum error correction an essential and inescapable aspect of both theoretical and experimental quantum computing. So far little is known about protection against decoherence in the context of quantum annealing, a computational paradigm which aims to exploit ground state quantum dynamics to solve optimization problems more rapidly than is possible classically. Here we develop error correction for quantum annealing and provide an experimental demonstration using up to 344 superconducting flux qubits in processors which have recently been shown to physically implement programmable quantum annealing. We demonstrate a substantial improvement over the performance of the processors in the absence of error correction. These results pave a path toward large-scale noise-protected adiabatic quantum optimization devices.
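
As a rough illustration of how logical information can be recovered from redundantly encoded spins, the sketch below implements majority-vote decoding over groups of physical spins; the grouping, the sample readout and the helper name `majority_vote_decode` are assumptions made for illustration, not the processors' actual encoding or decoding pipeline.

```python
# Minimal decoding sketch, assuming each logical spin is encoded in several
# physical spins and recovered by majority vote over their readouts.
import numpy as np

def majority_vote_decode(physical_sample, groups):
    """Map a physical readout (array of +/-1 spins) to logical spins.

    `groups` lists, for each logical spin, the indices of its physical copies.
    """
    logical = []
    for idxs in groups:
        s = int(np.sign(np.sum(physical_sample[idxs])))
        logical.append(s if s != 0 else 1)  # break exact ties arbitrarily
    return np.array(logical)

# Example: 2 logical spins, each encoded in 3 physical spins (indices assumed).
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
sample = np.array([+1, +1, -1, -1, -1, -1])  # one flipped spin in the first group
print(majority_vote_decode(sample, groups))   # -> [ 1 -1 ]
```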

Quantum annealing with more than one hundred qubits

  1. Sergio Boixo
  2. Troels F. Rønnow
  3. Sergei V. Isakov
  4. Zhihui Wang
  5. David Wecker
  6. Daniel A. Lidar
  7. John M. Martinis
  8. Matthias Troyer
At a time when quantum effects begin to limit further miniaturisation of devices and the exponential performance gains promised by Moore's law, quantum technology is maturing to the point where quantum devices, such as quantum communication systems, quantum random number generators and quantum simulators, may be built with capabilities exceeding those of classical computers. A quantum annealer, in particular, finds solutions to hard optimisation problems by evolving a known initial configuration towards the ground state of a Hamiltonian that encodes an optimisation problem. Here, we present results from experiments on a 108-qubit D-Wave One device based on superconducting flux qubits. The correlations between the device and a simulated quantum annealer demonstrate that the device performs quantum annealing: unlike classical thermal annealing, it exhibits a bimodal separation of hard and easy problems, with small-gap avoided level crossings characterizing the hard problems. To assess the computational power of the quantum annealer we compare it to optimised classical algorithms. We discuss how quantum speedup could be detected on devices scaled to a larger number of qubits, where the limits of classical algorithms are reached.
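
To make the encoding idea concrete, the following sketch assumes the optimisation problem is expressed as an Ising energy over ±1 spins and runs a bare-bones simulated (classical thermal) annealing pass, the kind of baseline mentioned above; the random instance, sweep count and temperature schedule are placeholder assumptions, not the benchmark used in the paper.

```python
# Minimal sketch: an Ising energy E(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i
# over spins s_i = +/-1, minimised here by simulated (thermal) annealing.
import numpy as np

rng = np.random.default_rng(0)
n = 16
J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)  # random +/-1 couplings
h = np.zeros(n)

def energy(s):
    return s @ J @ s + h @ s

def simulated_annealing(sweeps=2000, beta_max=3.0):
    s = rng.choice([-1, 1], size=n)
    for t in range(sweeps):
        beta = beta_max * (t + 1) / sweeps                 # inverse-temperature ramp
        i = rng.integers(n)
        dE = -2 * s[i] * (J[i] @ s + J[:, i] @ s + h[i])   # cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):   # Metropolis acceptance
            s[i] = -s[i]
    return s, energy(s)

s_final, e_final = simulated_annealing()
print("final energy:", e_final)
```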

Experimental signature of programmable quantum annealing

  1. Sergio Boixo
  2. Tameem Albash
  3. Federico M. Spedalieri
  4. Nicholas Chancellor
  5. Daniel A. Lidar
Quantum annealing is a general strategy for solving difficult optimization problems with the aid of quantum adiabatic evolution. Both analytical and numerical evidence suggests that under idealized, closed system conditions, quantum annealing can outperform classical thermalization-based algorithms such as simulated annealing. Do engineered quantum annealing devices effectively perform classical thermalization when coupled to a decohering thermal environment? To address this we establish, using superconducting flux qubits with programmable spin-spin couplings, an experimental signature which is consistent with quantum annealing, and at the same time inconsistent with classical thermalization, in spite of a decoherence timescale which is orders of magnitude shorter than the adiabatic evolution time. This suggests that programmable quantum devices, scalable with current superconducting technology, implement quantum annealing with a surprising robustness against noise and imperfections.
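
One simple way to phrase such a test in code, under the assumption that a purely classical thermal process would equilibrate to a Boltzmann distribution over final states, is to compare observed state frequencies against Boltzmann weights for a small instance; the 4-spin chain, the inverse temperature and the "observed" counts below are illustrative assumptions only, not the experiment's actual signature or data.

```python
# Minimal sketch: compare observed final-state frequencies with the Boltzmann
# distribution exp(-beta*E) that simple classical thermalization would predict.
import itertools
import numpy as np

J = {(0, 1): -1.0, (1, 2): -1.0, (2, 3): -1.0}   # small ferromagnetic chain (assumed)
h = {0: 0.1}                                      # small bias on one spin (assumed)

def energy(s):
    return sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()) + \
           sum(hi * s[i] for i, hi in h.items())

states = list(itertools.product([-1, 1], repeat=4))
E = np.array([energy(s) for s in states])
beta = 2.0                                        # assumed inverse temperature
boltzmann = np.exp(-beta * E) / np.exp(-beta * E).sum()

observed = np.full(len(states), 10.0)             # placeholder counts, not real data
observed /= observed.sum()

# A large deviation between the two distributions is inconsistent with simple
# classical thermalization (agreement alone, of course, proves nothing).
print("max |observed - Boltzmann| =", np.max(np.abs(observed - boltzmann)))
```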