Optimizing quantum gates towards the scale of logical qubits

  1. Paul V. Klimov,
  2. Andreas Bengtsson,
  3. Chris Quintana,
  4. Alexandre Bourassa,
  5. Sabrina Hong,
  6. Andrew Dunsworth,
  7. Kevin J. Satzinger,
  8. William P. Livingston,
  9. Volodymyr Sivak,
  10. Murphy Y. Niu,
  11. Trond I. Andersen,
  12. Yaxing Zhang,
  13. Desmond Chik,
  14. Zijun Chen,
  15. Charles Neill,
  16. Catherine Erickson,
  17. Alejandro Grajales Dau,
  18. Anthony Megrant,
  19. Pedram Roushan,
  20. Alexander N. Korotkov,
  21. Julian Kelly,
  22. Vadim Smelyanskiy,
  23. Yu Chen,
  24. and Hartmut Neven
A foundational assumption of quantum error correction theory is that quantum gates can be scaled to large processors without exceeding the error threshold for fault tolerance. Two major challenges that could become fundamental roadblocks are manufacturing high-performance quantum hardware and engineering a control system that can reach its performance limits. The control challenge of scaling quantum gates from small to large processors without degrading performance often maps to non-convex, high-constraint, and time-dependent control optimization over an exponentially expanding configuration space. Here we report on a control optimization strategy that can scalably overcome the complexity of such problems. We demonstrate it by choreographing the frequency trajectories of 68 frequency-tunable superconducting qubits to execute single- and two-qubit gates while mitigating computational errors. When combined with a comprehensive model of physical errors across our processor, the strategy suppresses physical error rates by ∼3.7× compared with the case of no optimization. Furthermore, it is projected to achieve a similar performance advantage on a distance-23 surface code logical qubit with 1057 physical qubits. Our control optimization strategy solves a generic scaling challenge in a way that can be adapted to other quantum algorithms, operations, and computing architectures.
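The optimizer itself is not reproduced in the abstract. Purely as a loose illustration of the kind of problem it addresses, the sketch below assigns idle frequencies to a small grid of frequency-tunable qubits by minimizing an invented cost that penalizes proximity to hypothetical TLS defect frequencies and insufficient detuning between neighbouring qubits. The numbers, the cost terms, and the greedy coordinate-descent search are all placeholders, not the paper's method.

```python
# Toy illustration (not the authors' optimizer): choose idle frequencies for a
# small grid of tunable qubits so that no qubit sits near a hypothetical TLS
# defect and neighbouring qubits stay detuned from each other.
import numpy as np

rng = np.random.default_rng(0)

N_SIDE = 4                                            # 4x4 grid of qubits
BAND = (5.8, 6.8)                                     # assumed idle band in GHz
TLS = rng.uniform(*BAND, size=(N_SIDE * N_SIDE, 2))   # two fake TLS per qubit

def neighbours(i):
    """Indices of nearest-neighbour qubits on the grid."""
    r, c = divmod(i, N_SIDE)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < N_SIDE and 0 <= cc < N_SIDE:
            yield rr * N_SIDE + cc

def cost(freqs):
    """Invented error proxy: Lorentzian-like penalty near TLS frequencies plus
    a penalty for small detuning between coupled qubits."""
    c = 0.0
    for i, f in enumerate(freqs):
        c += np.sum(0.01 / ((f - TLS[i]) ** 2 + 1e-4))      # TLS collisions
        for j in neighbours(i):
            c += 0.01 / ((f - freqs[j]) ** 2 + 1e-4)        # qubit-qubit collisions
    return c

# Simple coordinate descent over a discretized frequency grid.
grid = np.linspace(*BAND, 101)
best = rng.choice(grid, size=N_SIDE * N_SIDE)
for _ in range(20):                                   # sweeps over all qubits
    for i in rng.permutation(N_SIDE * N_SIDE):
        trials = best[None, :].repeat(len(grid), axis=0)
        trials[:, i] = grid
        best = trials[np.argmin([cost(t) for t in trials])]

print("final cost:", round(cost(best), 3))
```

In the setting the abstract describes, the search additionally spans gate trajectories rather than static idle points and must track time-dependent error mechanisms, which is what makes the full problem non-convex, high-constraint, and exponentially large.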

Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits

  1. Matt McEwen,
  2. Lara Faoro,
  3. Kunal Arya,
  4. Andrew Dunsworth,
  5. Trent Huang,
  6. Seon Kim,
  7. Brian Burkett,
  8. Austin Fowler,
  9. Frank Arute,
  10. Joseph C Bardin,
  11. Andreas Bengtsson,
  12. Alexander Bilmes,
  13. Bob B. Buckley,
  14. Nicholas Bushnell,
  15. Zijun Chen,
  16. Roberto Collins,
  17. Sean Demura,
  18. Alan R. Derk,
  19. Catherine Erickson,
  20. Marissa Giustina,
  21. Sean D. Harrington,
  22. Sabrina Hong,
  23. Evan Jeffrey,
  24. Julian Kelly,
  25. Paul V. Klimov,
  26. Fedor Kostritsa,
  27. Pavel Laptev,
  28. Aditya Locharla,
  29. Xiao Mi,
  30. Kevin C. Miao,
  31. Shirin Montazeri,
  32. Josh Mutus,
  33. Ofer Naaman,
  34. Matthew Neeley,
  35. Charles Neill,
  36. Alex Opremcak,
  37. Chris Quintana,
  38. Nicholas Redd,
  39. Pedram Roushan,
  40. Daniel Sank,
  41. Kevin J. Satzinger,
  42. Vladimir Shvarts,
  43. Theodore White,
  44. Z. Jamie Yao,
  45. Ping Yeh,
  46. Juhwan Yoo,
  47. Yu Chen,
  48. Vadim Smelyanskiy,
  49. John M. Martinis,
  50. Hartmut Neven,
  51. Anthony Megrant,
  52. Lev Ioffe,
  53. and Rami Barends
Scalable quantum computing can become a reality with error correction, provided coherent qubits can be constructed in large arrays. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, energetic impacts from cosmic rays and latent radioactivity violate both of these assumptions. An impinging particle ionizes the substrate, radiating high-energy phonons that induce a burst of quasiparticles, destroying qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices, but without a measurement technique able to resolve a single event in detail, its effect on large-scale algorithms, and on error correction in particular, remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales as in error correction, exposing the event’s evolution in time and its spread in space. Here, we directly observe high-energy rays impacting a large-scale quantum processor. We introduce a rapid space- and time-multiplexed measurement method and identify large bursts of quasiparticles that simultaneously and severely limit the energy coherence of all qubits, causing chip-wide failure. We track the events from their initial localised impact to high error rates across the chip. Our results provide direct insights into the scale and dynamics of these damaging error bursts in large-scale devices, and highlight the necessity of mitigation to enable quantum computing to scale.
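The measurement method is specific to the experiment. Purely as an illustration of the signature being looked for, the sketch below simulates per-qubit error outcomes over time, injects a decaying chip-wide burst, and flags time bins in which an anomalously large fraction of qubits fail simultaneously. All rates and array sizes are assumed, not measured.

```python
# Toy illustration (assumed numbers, not experimental data): flag time bins in
# which an anomalously large fraction of qubits error at once, the signature of
# a chip-wide quasiparticle burst.
import numpy as np

rng = np.random.default_rng(1)
n_qubits, n_bins = 26, 2000
p_background = 0.01                          # assumed baseline error probability

errors = rng.random((n_bins, n_qubits)) < p_background
burst_start = 700                            # injected burst, decaying over ~30 bins
for t in range(burst_start, burst_start + 30):
    p_burst = 0.6 * np.exp(-(t - burst_start) / 10.0)
    errors[t] |= rng.random(n_qubits) < p_burst

frac = errors.mean(axis=1)                   # fraction of qubits failing per bin
threshold = p_background + 5 * np.sqrt(p_background / n_qubits)
burst_bins = np.flatnonzero(frac > threshold)
print("suspected burst bins:", burst_bins[:10], "...")
```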

Learning Non-Markovian Quantum Noise from Moiré-Enhanced Swap Spectroscopy with Deep Evolutionary Algorithm

  1. Murphy Yuezhen Niu,
  2. Vadim Smelyanskiy,
  3. Paul Klimov,
  4. Sergio Boixo,
  5. Rami Barends,
  6. Julian Kelly,
  7. Yu Chen,
  8. Kunal Arya,
  9. Brian Burkett,
  10. Dave Bacon,
  11. Zijun Chen,
  12. Ben Chiaro,
  13. Roberto Collins,
  14. Andrew Dunsworth,
  15. Brooks Foxen,
  16. Austin Fowler,
  17. Craig Gidney,
  18. Marissa Giustina,
  19. Rob Graff,
  20. Trent Huang,
  21. Evan Jeffrey,
  22. David Landhuis,
  23. Erik Lucero,
  24. Anthony Megrant,
  25. Josh Mutus,
  26. Xiao Mi,
  27. Ofer Naaman,
  28. Matthew Neeley,
  29. Charles Neill,
  30. Chris Quintana,
  31. Pedram Roushan,
  32. John M. Martinis,
  33. and Hartmut Neven
Two-level-system (TLS) defects in amorphous dielectrics are a major source of noise and decoherence in solid-state qubits. Gate-dependent non-Markovian errors caused by TLS-qubit coupling are detrimental to fault-tolerant quantum computation and have not been rigorously treated in the existing literature. In this work, we derive the non-Markovian dynamics between TLS and qubits during a SWAP-like two-qubit gate and the associated average gate fidelity for frequency-tunable transmon qubits. This gate-dependent error model facilitates using qubits as sensors to simultaneously learn practical imperfections in both the qubit’s environment and control waveforms. We combine a state-of-the-art machine learning algorithm with Moiré-enhanced swap spectroscopy to achieve robust learning using noisy experimental data. Deep neural networks are used to represent the functional map from experimental data to TLS parameters and are trained through an evolutionary algorithm. Our method achieves the highest learning efficiency and robustness against experimental imperfections to date, representing an important step towards in situ quantum control optimization over environmental and control defects.
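Neither the network architecture nor the dataset is specified in the abstract. The sketch below shows the general idea in miniature: synthetic Lorentzian "spectroscopy" traces, a tiny fully connected network that maps each trace to two defect parameters, and a simple evolutionary-strategy update of the flat weight vector in place of backpropagation. It is a schematic of the training style, not the paper's model.

```python
# Generic sketch of training a small neural network with an evolutionary
# strategy (synthetic data; not the paper's model, dataset, or hyperparameters).
import numpy as np

rng = np.random.default_rng(2)
f_axis = np.linspace(-1.0, 1.0, 32)                   # normalized frequency axis

def simulate_trace(params):
    """Fake spectroscopy trace: a Lorentzian dip with a position and a width."""
    pos, width = params
    return 1.0 - 0.5 * width**2 / ((f_axis - pos) ** 2 + width**2)

def mlp(weights, x):
    """Tiny 32-16-2 network; `weights` is one flat parameter vector."""
    w1 = weights[:32 * 16].reshape(32, 16)
    b1 = weights[32 * 16:32 * 16 + 16]
    w2 = weights[32 * 16 + 16:32 * 16 + 16 + 16 * 2].reshape(16, 2)
    b2 = weights[-2:]
    return np.tanh(x @ w1 + b1) @ w2 + b2

n_params = 32 * 16 + 16 + 16 * 2 + 2
theta = 0.1 * rng.standard_normal(n_params)

# Training set: random (position, width) pairs and their noisy traces.
true_params = np.column_stack([rng.uniform(-0.8, 0.8, 256),
                               rng.uniform(0.05, 0.3, 256)])
traces = np.array([simulate_trace(p) for p in true_params])
traces += 0.02 * rng.standard_normal(traces.shape)    # measurement noise

def fitness(w):
    return -np.mean((mlp(w, traces) - true_params) ** 2)   # negative MSE

print("initial fitness:", round(fitness(theta), 4))

# Simple natural-evolution-style update from a perturbed population.
sigma, lr, pop = 0.05, 0.02, 64
for gen in range(200):
    eps = rng.standard_normal((pop, n_params))
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    shaped = (scores - scores.mean()) / (scores.std() + 1e-8)  # fitness shaping
    theta += lr / (pop * sigma) * eps.T @ shaped

print("final fitness:", round(fitness(theta), 4))
```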

A 28nm Bulk-CMOS 4-to-8GHz <2mW Cryogenic Pulse Modulator for Scalable Quantum Computing

  1. Joseph C Bardin,
  2. Evan Jeffrey,
  3. Erik Lucero,
  4. Trent Huang,
  5. Ofer Naaman,
  6. Rami Barends,
  7. Ted White,
  8. Marissa Giustina,
  9. Daniel Sank,
  10. Pedram Roushan,
  11. Kunal Arya,
  12. Benjamin Chiaro,
  13. Julian Kelly,
  14. Jimmy Chen,
  15. Brian Burkett,
  16. Yu Chen,
  17. Andrew Dunsworth,
  18. Austin Fowler,
  19. Brooks Foxen,
  20. Craig Gidney,
  21. Rob Graff,
  22. Paul Klimov,
  23. Josh Mutus,
  24. Matthew McEwen,
  25. Anthony Megrant,
  26. Matthew Neeley,
  27. Charles Neill,
  28. Chris Quintana,
  29. Amit Vainsencher,
  30. Hartmut Neven,
  31. and John Martinis
Future quantum computing systems will require cryogenic integrated circuits to control and measure millions of qubits. In this paper, we report the design and characterization of a
prototype cryogenic CMOS integrated circuit that has been optimized for the control of transmon qubits. The circuit has been integrated into a quantum measurement setup and its performance has been validated through multiple quantum control experiments.

Characterizing Quantum Supremacy in Near-Term Devices

  1. Sergio Boixo,
  2. Sergei V. Isakov,
  3. Vadim N. Smelyanskiy,
  4. Ryan Babbush,
  5. Nan Ding,
  6. Zhang Jiang,
  7. John M. Martinis,
  8. and Hartmut Neven
A critical question for the field of quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of state-of-the-art classical computers, achieving so-called quantum supremacy. We study the task of sampling from the output distributions of (pseudo-)random quantum circuits, a natural task for benchmarking quantum computers. Crucially, sampling this distribution classically requires a direct numerical simulation of the circuit, with computational cost exponential in the number of qubits. This requirement is typical of chaotic systems. We extend previous results in computational complexity to argue more formally that this sampling task must take exponential time on a classical computer. We study the convergence to the chaotic regime using extensive supercomputer simulations, modeling circuits with up to 42 qubits, the largest quantum circuits simulated to date for a computational task that approaches quantum supremacy. We argue that while chaotic states are extremely sensitive to errors, quantum supremacy can be achieved in the near term with approximately fifty superconducting qubits. We introduce cross entropy as a useful benchmark of quantum circuits that approximates the circuit fidelity. We show that the cross entropy can be efficiently measured when circuit simulations are available. Beyond the classically tractable regime, the cross entropy can be extrapolated and compared with theoretical estimates of circuit fidelity to define a practical quantum supremacy test.
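One way to make the cross-entropy benchmark concrete: given ideal output probabilities from simulation and bitstrings sampled from the device, a fidelity estimate follows from the mean log-probability of the observed samples relative to the Porter-Thomas baseline ln(2^n) + γ (Euler's constant). The sketch below uses that log form together with a toy depolarizing noise model; conventions and normalizations vary across the literature, so treat it as schematic rather than the paper's exact estimator.

```python
# Schematic cross-entropy fidelity estimate from samples and ideal
# probabilities (toy model; normalization conventions vary in the literature).
import numpy as np

rng = np.random.default_rng(3)
n_qubits = 12
dim = 2 ** n_qubits

# Stand-in for the ideal output probabilities of a random circuit:
# Porter-Thomas (exponentially distributed) weights, normalized to one.
p_ideal = rng.exponential(size=dim)
p_ideal /= p_ideal.sum()

def xeb_alpha(samples, p_ideal):
    """Cross-entropy difference: ln(D) + gamma + mean(ln p_ideal(x_i))."""
    gamma = 0.5772156649015329                 # Euler-Mascheroni constant
    return np.log(len(p_ideal)) + gamma + np.mean(np.log(p_ideal[samples]))

# A noisy device outputs an ideal-distribution sample with probability F and a
# uniformly random bitstring otherwise (simple depolarizing picture).
F_true = 0.7
n_samples = 200_000
from_ideal = rng.random(n_samples) < F_true
samples = np.where(from_ideal,
                   rng.choice(dim, size=n_samples, p=p_ideal),
                   rng.integers(0, dim, size=n_samples))

print("estimated fidelity:", round(xeb_alpha(samples, p_ideal), 3))
```

In this toy picture the estimate comes out close to F_true, which illustrates why the cross entropy can serve as a fidelity proxy once ideal probabilities are available from simulation.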

Tunable inductive coupling of superconducting qubits in the strongly nonlinear regime

  1. Dvir Kafri,
  2. Chris Quintana,
  3. Yu Chen,
  4. Alireza Shabani,
  5. John M. Martinis,
  6. and Hartmut Neven
For a variety of superconducting qubits, tunable interactions are achieved through mutual inductive coupling to a coupler circuit containing a nonlinear Josephson element. In this paper
we derive the general interaction mediated by such a circuit under the Born-Oppenheimer approximation. This interaction naturally decomposes into a classical part with origin in the classical circuit equations and a quantum part associated with the zero-point energy of the coupler. Our result is non-perturbative in the qubit-coupler coupling strengths and circuit nonlinearities, leading to significant departures from previous treatments in the nonlinear or strong coupling regimes. Specifically, it displays no divergences for large coupler nonlinearities, and it can predict k-body and non-stoquastic interactions that are absent in linear theories. Our analysis provides explicit and efficiently computable series for any term in the interaction Hamiltonian and can be applied to any superconducting qubit type.
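The derivation itself is the subject of the paper and is not reproduced here. Schematically, a Born-Oppenheimer treatment of a coupler holds the slow qubit variables fixed, minimizes over the fast coupler variable, and retains the coupler's zero-point energy, as in the illustrative form below (the notation and the harmonic zero-point expression are assumptions for this sketch, not taken from the paper).

```latex
% Schematic Born-Oppenheimer form of a coupler-mediated interaction
% (illustrative notation): qubit variables \varphi_1,\varphi_2 are slow,
% the coupler variable \varphi_c is fast, C_c is the coupler capacitance.
\begin{aligned}
  U_{\mathrm{eff}}(\varphi_1,\varphi_2)
    &\simeq \underbrace{\min_{\varphi_c} U(\varphi_1,\varphi_2,\varphi_c)}_{\text{classical circuit solution}}
      \;+\; \underbrace{\tfrac{1}{2}\,\hbar\,\omega_c(\varphi_1,\varphi_2)}_{\text{coupler zero-point energy}},
  \\
  \omega_c(\varphi_1,\varphi_2)
    &= \sqrt{\frac{1}{C_c}\,
       \frac{\partial^2 U}{\partial \varphi_c^2}\bigg|_{\varphi_c=\varphi_c^{\min}}}\,.
\end{aligned}
```

The first term corresponds to the classical part the abstract attributes to the circuit equations, and the second, flux-dependent zero-point term to the quantum contribution of the coupler.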

What is the Computational Value of Finite Range Tunneling?

  1. Vasil S. Denchev,
  2. Sergio Boixo,
  3. Sergei V. Isakov,
  4. Nan Ding,
  5. Ryan Babbush,
  6. Vadim Smelyanskiy,
  7. John Martinis,
  8. and Hartmut Neven
Quantum annealing (QA) has been proposed as a quantum-enhanced optimization heuristic exploiting tunneling. Here, we demonstrate how finite range tunneling can provide considerable computational advantage. For a crafted problem designed to have tall and narrow energy barriers separating local minima, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to Simulated Annealing (SA). For instances with 945 variables, this results in a time to 99% success probability that is ∼10^8 times shorter than that of SA running on a single processor core. We also compared physical QA with Quantum Monte Carlo (QMC), an algorithm that emulates quantum tunneling on classical processors. We observe a substantial constant overhead against physical QA: D-Wave 2X runs up to ∼10^8 times faster than an optimized implementation of QMC on a single core. To investigate whether finite range tunneling will also confer an advantage for problems of practical interest, we conduct numerical studies on binary optimization problems that cannot yet be represented on quantum hardware. For random instances of the number partitioning problem, we find numerically that QMC, as well as other algorithms designed to simulate QA, scale better than SA and better than the best known classical algorithms for this problem. We discuss the implications of these findings for the design of next-generation quantum annealers.
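The headline numbers rest on the time-to-99%-success metric. A minimal sketch of how such a metric is typically computed from a single-run success probability and a per-run time is below; this is the generic repetition formula, not the paper's exact analysis pipeline.

```python
# Time to 99% success from a single-run success probability p and a per-run
# time t_run: the number of independent repetitions needed so that at least one
# run succeeds with the target probability (generic formula, illustrative only).
import math

def time_to_target(p_success, t_run_seconds, target=0.99):
    """Smallest R with 1 - (1 - p)^R >= target, times the per-run time."""
    if p_success >= target:
        return t_run_seconds                  # a single run already suffices
    repetitions = math.ceil(math.log(1.0 - target) / math.log(1.0 - p_success))
    return repetitions * t_run_seconds

# Made-up example numbers: a 20 microsecond anneal succeeding 0.1% of the time
# versus a 1 ms classical sweep succeeding 80% of the time.
print(time_to_target(0.001, 20e-6))   # ~0.092 s
print(time_to_target(0.80, 1e-3))     # ~0.003 s
```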

Artificial Quantum Thermal Bath

  1. Alireza Shabani,
  2. and Hartmut Neven
Temperature determines the relative probability of observing a physical system in an energy state when that system is energetically in equilibrium with its environment. In this manuscript, we present a theory for engineering the temperature of a quantum system to differ from its ambient temperature, which is essentially an analog version of the quantum Metropolis algorithm. We define criteria for an engineered quantum bath that, when coupled to a quantum system with Hamiltonian H, drives the system to the equilibrium state e^{-H/T}/Tr(e^{-H/T}) with a tunable parameter T. For a system of superconducting qubits, we propose a circuit-QED approximate realization of such an engineered thermal bath consisting of driven lossy resonators. We consider an artificial thermal bath as a simulator for many-body physics or as a controllable temperature knob for a hybrid quantum-thermal annealer.
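As a concrete rendering of the target state, the sketch below builds the Gibbs state e^{-H/T}/Tr(e^{-H/T}) for a small example Hamiltonian and checks how the ground-state population shifts with the tunable temperature T. Units take k_B = 1, and the Hamiltonian is an arbitrary two-qubit example, not one from the paper.

```python
# Target equilibrium state of the engineered bath: rho(T) = e^{-H/T} / Tr(e^{-H/T}).
# The Hamiltonian below is an arbitrary two-qubit example (k_B = 1).
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

H = -np.kron(Z, Z) - 0.5 * (np.kron(X, I2) + np.kron(I2, X))   # toy example

def gibbs_state(H, T):
    """rho(T) built from the eigendecomposition of H."""
    evals, evecs = np.linalg.eigh(H)
    w = np.exp(-(evals - evals.min()) / T)    # shift for numerical stability
    w /= w.sum()
    return (evecs * w) @ evecs.T              # sum_j w_j |v_j><v_j|

evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]
for T in (0.1, 0.5, 2.0):
    pop = ground @ gibbs_state(H, T) @ ground
    print(f"T = {T}: ground-state population = {pop:.3f}")
```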

Computational Role of Multiqubit Tunneling in a Quantum Annealer

  1. Sergio Boixo,
  2. Vadim N. Smelyanskiy,
  3. Alireza Shabani,
  4. Sergei V. Isakov,
  5. Mark Dykman,
  6. Vasil S. Denchev,
  7. Mohammad Amin,
  8. Anatoly Smirnov,
  9. Masoud Mohseni,
  10. and Hartmut Neven
Quantum tunneling, a phenomenon in which a quantum state traverses energy barriers above the energy of the state itself, has been hypothesized as an advantageous physical resource for
optimization. Here we show that multiqubit tunneling plays a computational role in a currently available, albeit noisy, programmable quantum annealer. We develop a non-perturbative theory of open quantum dynamics under realistic noise characteristics predicting the rate of many-body dissipative quantum tunneling. We devise a computational primitive with 16 qubits where quantum evolutions enable tunneling to the global minimum while the corresponding classical paths are trapped in a false minimum. Furthermore, we experimentally demonstrate that quantum tunneling can outperform thermal hopping along classical paths for problems with up to 200 qubits containing the computational primitive. Our results indicate that many-body quantum phenomena could be used for finding better solutions to hard optimization problems.
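The 16-qubit primitive is defined precisely in the paper; the toy below is only meant to show the shape of such a problem. It builds two small ferromagnetic clusters with biased local fields and weak antiferromagnetic inter-cluster bonds, shrunk to four spins per cluster so the landscape can be enumerated exhaustively, and lists the global minimum alongside the shallower false minima that trap single-spin-flip (classical hopping) dynamics. All couplings and fields here are invented for illustration.

```python
# Toy two-cluster Ising primitive (invented couplings and fields, not the
# paper's instance): strong ferromagnetic bonds inside each cluster, weak
# antiferromagnetic bonds between clusters, biased local fields.
import numpy as np
from itertools import product

n = 4                                    # spins per cluster (small enough to enumerate)
J_intra, J_inter = -1.0, 0.2             # FM inside clusters, weak AFM between
h = np.array([-0.30] * n + [0.44] * n)   # biased local fields (invented)

J = np.zeros((2 * n, 2 * n))
for c in range(2):                       # all-to-all FM bonds inside each cluster
    for i in range(n):
        for j in range(i + 1, n):
            J[c * n + i, c * n + j] = J_intra
for i in range(n):                       # all-to-all weak AFM bonds between clusters
    for j in range(n):
        J[i, n + j] = J_inter

def energy(s):
    return float(s @ J @ s + h @ s)

def is_local_min(s):
    """No single spin flip lowers the energy: where classical hopping gets stuck."""
    e = energy(s)
    for k in range(2 * n):
        t = s.copy()
        t[k] *= -1
        if energy(t) < e:
            return False
    return True

states = [np.array(s) for s in product((-1, 1), repeat=2 * n)]
minima = sorted((energy(s), tuple(int(x) for x in s)) for s in states if is_local_min(s))
print("global minimum:", minima[0])
print("false minima  :", minima[1:])
```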

Computational Role of Collective Tunneling in a Quantum Annealer

  1. Sergio Boixo,
  2. Vadim N. Smelyanskiy,
  3. Alireza Shabani,
  4. Sergei V. Isakov,
  5. Mark Dykman,
  6. Vasil S. Denchev,
  7. Mohammad Amin,
  8. Anatoly Smirnov,
  9. Masoud Mohseni,
  10. and Hartmut Neven
Quantum tunneling is a phenomenon in which a quantum state traverses energy barriers above the energy of the state itself. Tunneling has been hypothesized as an advantageous physical resource for optimization. Here we present the first experimental evidence of a computational role of multiqubit quantum tunneling in the evolution of a programmable quantum annealer. We develop a theoretical model based on a NIBA Quantum Master Equation to describe the multiqubit dissipative tunneling effects under the complex noise characteristics of such quantum devices. We start by considering a computational primitive, the simplest non-convex optimization problem consisting of just one global and one local minimum. The quantum evolutions enable tunneling to the global minimum while the corresponding classical paths are trapped in a false minimum. In our study the non-convex potentials are realized by frustrated networks of qubit clusters with strong intra-cluster coupling. We show that the collective effect of the quantum environment is suppressed in the "critical" phase during the evolution where quantum tunneling "decides" the right path to solution. In a later stage dissipation facilitates the multiqubit tunneling leading to the solution state. The predictions of the model accurately describe the experimental data from the D-Wave Two quantum annealer at NASA Ames. In our computational primitive the temperature dependence of the probability of success in the quantum model is opposite to that of the classical paths with thermal hopping. Specifically, we provide an analysis of an optimization problem with sixteen qubits, demonstrating eight-qubit tunneling that increases success probabilities. Furthermore, we report results for larger problems with up to 200 qubits that contain the primitive as subproblems.