Recently the question of whether the D-Wave processors exhibit large-scale quantum behavior or can be described by a classical model has attracted significant interest. In this work we address this question by studying a 503-qubit D-Wave Two device as a "black box", i.e., by studying its input-output behavior. We examine three candidate classical models and one quantum model, and compare their predictions to experiments we have performed on the device using groups of up to 40 qubits. The candidate classical models are simulated annealing, spin dynamics, and a recently proposed hybrid O(2) rotor-Monte Carlo model, along with three modified versions thereof. The quantum model is an adiabatic Markovian master equation derived in the weak-coupling limit of an open quantum system. Our experiments realize an evolution from a transverse field to an Ising Hamiltonian, with a final-time degenerate ground state that splits into two types of states we call "isolated" and "clustered". We study the population ratio of the isolated and clustered states as a function of the overall energy scale of the Ising term, as well as the distance between the final state and the Gibbs state, and find that these are sensitive probes that distinguish the classical models from one another and from both the experimental data and the master equation. The classical models are all found to disagree with the data, while the master equation agrees with the experiment without fine-tuning, and predicts mixed-state entanglement at intermediate evolution times. This suggests that an open-system quantum dynamical description of the D-Wave device is well justified even in the presence of relevant thermal excitations and fast single-qubit decoherence.
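For concreteness, the transverse-field-to-Ising evolution referred to above has the standard annealing form (the schedules A(s) and B(s) are device-dependent and are not specified in this summary):

\[ H(s) = A(s)\, H_X + B(s)\, H_I, \qquad H_X = -\sum_i \sigma^x_i, \qquad H_I = \sum_i h_i \sigma^z_i + \sum_{i<j} J_{ij}\, \sigma^z_i \sigma^z_j, \qquad s = t/t_f, \]

with A(s) dominant at s = 0 and negligible at s = 1, and vice versa for B(s). One natural choice for the "distance between the final state and the Gibbs state" is the trace-norm distance

\[ D\bigl(\rho(t_f), \rho_G\bigr) = \tfrac{1}{2}\, \bigl\lVert \rho(t_f) - \rho_G \bigr\rVert_1, \qquad \rho_G = \frac{e^{-\beta H_I}}{\operatorname{Tr} e^{-\beta H_I}}, \]

though the particular distance measure used is not fixed by this summary.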
The development of small-scale digital and analog quantum devices raises the question of how to fairly assess and compare the computational power of classical and quantum devices, and of how to detect quantum speedup. Here we show how to define and measure quantum speedup in various scenarios, and how to avoid pitfalls that might mask or fake quantum speedup. We illustrate our discussion with data from a randomized benchmark test on a D-Wave Two device with up to 503 qubits. Comparing the performance of the device on random spin-glass instances with limited precision to simulated classical and quantum annealers, we find no evidence of quantum speedup when the entire data set is considered, and obtain inconclusive results when comparing subsets of instances on an instance-by-instance basis. Our results for one particular benchmark do not rule out the possibility of speedup for other classes of problems, and illustrate that quantum speedup is elusive and can depend on the question posed.
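One concrete way to make "quantum speedup" quantitative, in the spirit of the benchmarking described above (the symbols below are illustrative and not quoted from the paper), is to compare how the time-to-solution scales with problem size N:

\[ \mathrm{TTS}(N) = t_a \, \frac{\ln(1 - p_d)}{\ln\bigl(1 - p(N)\bigr)}, \qquad S(N) = \frac{\mathrm{TTS}_{\mathrm{classical}}(N)}{\mathrm{TTS}_{\mathrm{quantum}}(N)}, \]

where t_a is the duration of a single run, p(N) is the per-run success probability, and p_d is a target success probability (e.g. 0.99). A speedup is claimed only if S(N) grows with N; comparing at a single problem size, or with suboptimally tuned run times, is exactly the kind of pitfall that can mask or fake a speedup.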
Quantum information processing offers dramatic speedups, yet is famously susceptible to decoherence, the process whereby quantum superpositions decay into mutually exclusive classical alternatives, thus robbing quantum computers of their power. This has made the development of quantum error correction an essential and inescapable aspect of both theoretical and experimental quantum computing. So far little is known about protection against decoherence in the context of quantum annealing, a computational paradigm which aims to exploit ground state quantum dynamics to solve optimization problems more rapidly than is possible classically. Here we develop error correction for quantum annealing and provide an experimental demonstration using up to 344 superconducting flux qubits in processors which have recently been shown to physically implement programmable quantum annealing. We demonstrate a substantial improvement over the performance of the processors in the absence of error correction. These results pave a path toward large-scale noise-protected adiabatic quantum optimization devices.
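As an illustration of what error correction for quantum annealing can look like, the sketch below shows a penalty-based repetition encoding: each logical qubit i is represented by n physical data qubits i_1, ..., i_n that are ferromagnetically tied to an extra penalty qubit i_P, so that configurations in which the copies disagree pay an energy cost, and the readout is decoded by majority vote over the copies. The exact encoding, the value of n, and the penalty strength used on the 344-qubit processor are not specified in this summary.

\[ \bar{H}_I = \sum_i h_i \sum_{k=1}^{n} \sigma^z_{i_k} + \sum_{i<j} J_{ij} \sum_{k=1}^{n} \sigma^z_{i_k} \sigma^z_{j_k}, \qquad H_P = -\sum_i \sum_{k=1}^{n} \sigma^z_{i_k} \sigma^z_{i_P}, \qquad H_{\mathrm{encoded}} = \alpha\, \bar{H}_I + \gamma\, H_P. \]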
At a time when quantum effects are starting to pose limits to further miniaturisation of devices and to the exponential performance increase described by Moore's law, quantum technology is maturing to the point where quantum devices, such as quantum communication systems, quantum random number generators and quantum simulators, may be built with capabilities exceeding those of classical computers. A quantum annealer, in particular, finds solutions to hard optimisation problems by evolving a known initial configuration towards the ground state of a Hamiltonian that encodes the problem. Here, we present results from experiments on a 108-qubit D-Wave One device based on superconducting flux qubits. The correlations between the device and a simulated quantum annealer demonstrate that the device performs quantum annealing: unlike classical thermal annealing it exhibits a bimodal separation of hard and easy problems, with small-gap avoided level crossings characterizing the hard problems. To assess the computational power of the quantum annealer we compare it to optimised classical algorithms. We discuss how quantum speedup could be detected on devices scaled to a larger number of qubits where the limits of classical algorithms are reached.
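The link between small-gap avoided level crossings and hardness can be made explicit through the standard (heuristic) adiabatic condition for a closed system, a textbook statement rather than a result quoted from the paper: staying in the instantaneous ground state requires a total annealing time

\[ t_f \gg \frac{\max_{s \in [0,1]} \bigl|\langle 1(s) | \partial_s H(s) | 0(s) \rangle\bigr|}{\Delta_{\min}^2}, \]

where |0(s)> and |1(s)> are the instantaneous ground and first excited states and \Delta_{\min} is the minimum spectral gap between them. Instances whose avoided crossings have very small \Delta_{\min} therefore demand much longer annealing times, or equivalently succeed with low probability at fixed t_f.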
Quantum annealing is a general strategy for solving difficult optimization problems with the aid of quantum adiabatic evolution. Both analytical and numerical evidence suggests that, under idealized, closed-system conditions, quantum annealing can outperform classical thermalization-based algorithms such as simulated annealing. Do engineered quantum annealing devices effectively perform classical thermalization when coupled to a decohering thermal environment? To address this question we establish, using superconducting flux qubits with programmable spin-spin couplings, an experimental signature which is consistent with quantum annealing and, at the same time, inconsistent with classical thermalization, in spite of a decoherence timescale which is orders of magnitude shorter than the adiabatic evolution time. This suggests that programmable quantum devices, scalable with current superconducting technology, implement quantum annealing with a surprising robustness against noise and imperfections.
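For readers unfamiliar with the classical baseline named above, the following is a minimal, self-contained sketch of simulated annealing on a small random Ising spin-glass instance. The instance size, couplings, and temperature schedule are illustrative choices, not parameters taken from the experiments.

import numpy as np

def simulated_annealing(h, J, n_sweeps=1000, beta_start=0.1, beta_end=5.0, rng=None):
    """Single-spin-flip Metropolis annealing for the Ising energy
    E(s) = sum_i h[i]*s[i] + sum_{i<j} J[i,j]*s[i]*s[j], with s[i] in {-1, +1}.
    J is assumed symmetric with zero diagonal."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(h)
    spins = rng.choice([-1, 1], size=n)
    # Linear inverse-temperature schedule: hot (random) to cold (greedy).
    for beta in np.linspace(beta_start, beta_end, n_sweeps):
        for i in range(n):
            # Energy change from flipping spin i.
            delta_e = -2.0 * spins[i] * (h[i] + J[i] @ spins)
            if delta_e <= 0 or rng.random() < np.exp(-beta * delta_e):
                spins[i] = -spins[i]
    energy = h @ spins + 0.5 * spins @ J @ spins
    return spins, energy

# Illustrative 16-spin instance with random +/-1 couplings and no local fields.
rng = np.random.default_rng(0)
n = 16
upper = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)
J = upper + upper.T  # symmetric couplings, zero diagonal
h = np.zeros(n)
spins, energy = simulated_annealing(h, J, rng=rng)
print("energy found:", energy)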