Optimizing quantum gates towards the scale of logical qubits

  1. Paul V. Klimov,
  2. Andreas Bengtsson,
  3. Chris Quintana,
  4. Alexandre Bourassa,
  5. Sabrina Hong,
  6. Andrew Dunsworth,
  7. Kevin J. Satzinger,
  8. William P. Livingston,
  9. Volodymyr Sivak,
  10. Murphy Y. Niu,
  11. Trond I. Andersen,
  12. Yaxing Zhang,
  13. Desmond Chik,
  14. Zijun Chen,
  15. Charles Neill,
  16. Catherine Erickson,
  17. Alejandro Grajales Dau,
  18. Anthony Megrant,
  19. Pedram Roushan,
  20. Alexander N. Korotkov,
  21. Julian Kelly,
  22. Vadim Smelyanskiy,
  23. Yu Chen,
  24. and Hartmut Neven
A foundational assumption of quantum error correction theory is that quantum gates can be scaled to large processors without exceeding the error threshold for fault tolerance. Two major challenges that could become fundamental roadblocks are manufacturing high-performance quantum hardware and engineering a control system that can reach its performance limits. The control challenge of scaling quantum gates from small to large processors without degrading performance often maps to non-convex, high-constraint, and time-dependent control optimization over an exponentially expanding configuration space. Here we report on a control optimization strategy that can scalably overcome the complexity of such problems. We demonstrate it by choreographing the frequency trajectories of 68 frequency-tunable superconducting qubits to execute single- and two-qubit gates while mitigating computational errors. When combined with a comprehensive model of physical errors across our processor, the strategy suppresses physical error rates by ∼3.7× compared with the case of no optimization. Furthermore, it is projected to achieve a similar performance advantage on a distance-23 surface code logical qubit with 1057 physical qubits. Our control optimization strategy solves a generic scaling challenge in a way that can be adapted to other quantum algorithms, operations, and computing architectures.
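The optimization problem sketched above can be illustrated with a toy model. The snippet below is a minimal sketch under assumed names and an assumed cost function (it is not the error model or optimizer used in the paper): qubits on a 1D chain each pick an idle frequency from a discrete grid, neighboring qubits with nearby frequencies incur a "collision" penalty, and a greedy coordinate-descent sweep re-optimizes one qubit at a time to sidestep the exponentially large joint configuration space.

```python
import random

# Toy model (illustrative assumptions, not the paper's error model):
# each qubit on a 1D chain picks an idle frequency from a discrete set;
# neighboring qubits with nearby frequencies suffer "collision" errors.

N_QUBITS = 12
CANDIDATES = [5.0 + 0.05 * k for k in range(16)]  # GHz grid

def pair_error(f1, f2):
    """Penalty that grows sharply as two coupled qubits approach degeneracy."""
    detuning = abs(f1 - f2)
    return 1.0 / (1.0 + (detuning / 0.03) ** 2)

def total_error(freqs):
    return sum(pair_error(freqs[i], freqs[i + 1]) for i in range(len(freqs) - 1))

def greedy_sweep(freqs, sweeps=5):
    """Coordinate descent: re-optimize one qubit at a time, holding the rest
    fixed. This avoids searching the exponential joint configuration space,
    at the cost of possibly converging to a local optimum."""
    freqs = list(freqs)
    for _ in range(sweeps):
        for i in range(len(freqs)):
            freqs[i] = min(
                CANDIDATES,
                key=lambda f: total_error(freqs[:i] + [f] + freqs[i + 1:]),
            )
    return freqs

random.seed(0)
initial = [random.choice(CANDIDATES) for _ in range(N_QUBITS)]
optimized = greedy_sweep(initial)
print(total_error(initial), total_error(optimized))
```

The exponential blow-up is visible in the numbers: even this toy chain has 16¹² ≈ 2.8 × 10¹⁴ joint configurations, while the sweep evaluates only 16 × 12 × 5 = 960 candidates.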

Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits

  1. Matt McEwen,
  2. Lara Faoro,
  3. Kunal Arya,
  4. Andrew Dunsworth,
  5. Trent Huang,
  6. Seon Kim,
  7. Brian Burkett,
  8. Austin Fowler,
  9. Frank Arute,
  10. Joseph C. Bardin,
  11. Andreas Bengtsson,
  12. Alexander Bilmes,
  13. Bob B. Buckley,
  14. Nicholas Bushnell,
  15. Zijun Chen,
  16. Roberto Collins,
  17. Sean Demura,
  18. Alan R. Derk,
  19. Catherine Erickson,
  20. Marissa Giustina,
  21. Sean D. Harrington,
  22. Sabrina Hong,
  23. Evan Jeffrey,
  24. Julian Kelly,
  25. Paul V. Klimov,
  26. Fedor Kostritsa,
  27. Pavel Laptev,
  28. Aditya Locharla,
  29. Xiao Mi,
  30. Kevin C. Miao,
  31. Shirin Montazeri,
  32. Josh Mutus,
  33. Ofer Naaman,
  34. Matthew Neeley,
  35. Charles Neill,
  36. Alex Opremcak,
  37. Chris Quintana,
  38. Nicholas Redd,
  39. Pedram Roushan,
  40. Daniel Sank,
  41. Kevin J. Satzinger,
  42. Vladimir Shvarts,
  43. Theodore White,
  44. Z. Jamie Yao,
  45. Ping Yeh,
  46. Juhwan Yoo,
  47. Yu Chen,
  48. Vadim Smelyanskiy,
  49. John M. Martinis,
  50. Hartmut Neven,
  51. Anthony Megrant,
  52. Lev Ioffe,
  53. and Rami Barends
Scalable quantum computing can become a reality with error correction, provided coherent qubits can be constructed in large arrays. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, energetic impacts from cosmic rays and latent radioactivity violate both of these assumptions. An impinging particle ionizes the substrate, radiating high-energy phonons that induce a burst of quasiparticles, destroying qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices, but, lacking a measurement technique able to resolve a single event in detail, its effect on large-scale algorithms, and on error correction in particular, remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales as in error correction, exposing the event’s evolution in time and spread in space. Here, we directly observe high-energy rays impacting a large-scale quantum processor. We introduce a rapid space- and time-multiplexed measurement method and identify large bursts of quasiparticles that simultaneously and severely limit the energy coherence of all qubits, causing chip-wide failure. We track the events from their initial localised impact to high error rates across the chip. Our results provide direct insights into the scale and dynamics of these damaging error bursts in large-scale devices, and highlight the necessity of mitigation to enable quantum computing to scale.
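The premise that uncorrelated errors are exponentially suppressed, and that correlated bursts break this, can be made concrete with the standard surface-code scaling heuristic. The sketch below uses the common relation p_L ≈ A·(p/p_th)^⌈d/2⌉ with illustrative constants (A, p_th, and the rates are assumptions, not values from the paper):

```python
# Standard surface-code scaling heuristic (constants are illustrative):
# logical error rate p_L ~ A * (p / p_th)^((d + 1) // 2), valid only when
# physical errors are small (p < p_th) and statistically independent.

def logical_error_rate(p, d, p_th=0.01, A=0.1):
    return A * (p / p_th) ** ((d + 1) // 2)

# Uncorrelated regime: suppression improves exponentially with distance d.
for d in (3, 7, 11):
    print(d, logical_error_rate(p=1e-3, d=d))

# A quasiparticle burst raises p on every qubit at once; while p exceeds
# p_th, the base of the exponent is > 1, so increasing d makes things
# worse rather than better and the code fails chip-wide.
print(logical_error_rate(p=5e-2, d=11))
```

The last line illustrates the breakdown: with p above threshold, the heuristic returns a value far greater than 1, signalling that the independence assumption underpinning exponential suppression no longer holds during a burst.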

What is the Computational Value of Finite Range Tunneling?

  1. Vasil S. Denchev,
  2. Sergio Boixo,
  3. Sergei V. Isakov,
  4. Nan Ding,
  5. Ryan Babbush,
  6. Vadim Smelyanskiy,
  7. John Martinis,
  8. and Hartmut Neven
Quantum annealing (QA) has been proposed as a quantum enhanced optimization heuristic exploiting tunneling. Here, we demonstrate how finite range tunneling can provide considerable computational advantage. For a crafted problem designed to have tall and narrow energy barriers separating local minima, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to Simulated Annealing (SA). For instances with 945 variables this results in a time-to-99%-success-probability that is ∼10⁸ times faster than SA running on a single processor core. We also compared physical QA with Quantum Monte Carlo (QMC), an algorithm that emulates quantum tunneling on classical processors. We observe a substantial constant overhead against physical QA: D-Wave 2X runs up to ∼10⁸ times faster than an optimized implementation of QMC on a single core. To investigate whether finite range tunneling will also confer an advantage for problems of practical interest, we conduct numerical studies on binary optimization problems that cannot yet be represented on quantum hardware. For random instances of the number partitioning problem, we find numerically that QMC, as well as other algorithms designed to simulate QA, scale better than SA and better than the best known classical algorithms for this problem. We discuss the implications of these findings for the design of next generation quantum annealers.
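The time-to-99%-success-probability metric used in comparisons like the one above is standard in annealing benchmarks: repeat a fast, unreliable run until the cumulative success probability reaches the target. A minimal sketch of the formula (the numeric inputs below are illustrative assumptions, not measurements from the paper):

```python
import math

def time_to_success(t_run, p_success, target=0.99):
    """Expected wall-clock time to reach `target` cumulative success
    probability by repeating a run that succeeds with probability
    p_success per attempt: n = log(1 - target) / log(1 - p_success)."""
    if p_success >= target:
        return t_run
    n_runs = math.log(1.0 - target) / math.log(1.0 - p_success)
    return t_run * n_runs

# Illustrative numbers only: a fast annealer with low per-run success
# can still beat a slow but reliable classical solver on this metric.
tts_qa = time_to_success(t_run=20e-6, p_success=1e-3)  # many cheap repeats
tts_sa = time_to_success(t_run=1.0, p_success=0.5)     # few expensive runs
print(tts_qa, tts_sa, tts_sa / tts_qa)
```

Because the metric multiplies per-run time by the number of required repetitions, a hardware speedup in t_run translates directly into the headline ratio, which is why fair comparisons fix both the success target and the hardware (single processor core) for the classical baseline.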