Optimizing quantum gates towards the scale of logical qubits

  1. Paul V. Klimov,
  2. Andreas Bengtsson,
  3. Chris Quintana,
  4. Alexandre Bourassa,
  5. Sabrina Hong,
  6. Andrew Dunsworth,
  7. Kevin J. Satzinger,
  8. William P. Livingston,
  9. Volodymyr Sivak,
  10. Murphy Y. Niu,
  11. Trond I. Andersen,
  12. Yaxing Zhang,
  13. Desmond Chik,
  14. Zijun Chen,
  15. Charles Neill,
  16. Catherine Erickson,
  17. Alejandro Grajales Dau,
  18. Anthony Megrant,
  19. Pedram Roushan,
  20. Alexander N. Korotkov,
  21. Julian Kelly,
  22. Vadim Smelyanskiy,
  23. Yu Chen,
  24. and Hartmut Neven
A foundational assumption of quantum error correction theory is that quantum gates can be scaled to large processors without exceeding the error threshold for fault tolerance. Two major
challenges that could become fundamental roadblocks are manufacturing high-performance quantum hardware and engineering a control system that can reach its performance limits. The control challenge of scaling quantum gates from small to large processors without degrading performance often maps to non-convex, high-constraint, and time-dependent control optimization over an exponentially expanding configuration space. Here we report on a control optimization strategy that can scalably overcome the complexity of such problems. We demonstrate it by choreographing the frequency trajectories of 68 frequency-tunable superconducting qubits to execute single- and two-qubit gates while mitigating computational errors. When combined with a comprehensive model of physical errors across our processor, the strategy suppresses physical error rates by ∼3.7× compared with the case of no optimization. Furthermore, it is projected to achieve a similar performance advantage on a distance-23 surface code logical qubit with 1057 physical qubits. Our control optimization strategy solves a generic scaling challenge in a way that can be adapted to other quantum algorithms, operations, and computing architectures.
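
To make the flavor of this optimization problem concrete, the following is a minimal, hypothetical sketch in Python (not the authors' algorithm): it assigns idle frequencies to qubits on a small toy grid by greedy coordinate descent over a scalar cost that penalizes near-resonant coupled neighbours and proximity to made-up defect frequencies. Qubit counts, frequency ranges, and defect lists are all illustrative.

    # Toy sketch (not the paper's optimizer): greedy coordinate descent over a
    # discrete frequency configuration space for a small grid of coupled qubits.
    import itertools
    import random

    random.seed(0)

    GRID = 4                                              # 4x4 toy processor
    QUBITS = list(itertools.product(range(GRID), range(GRID)))
    QUBIT_SET = set(QUBITS)
    CANDIDATES = [6.0 + 0.01 * k for k in range(51)]      # 6.00-6.50 GHz, 10 MHz steps
    DEFECTS = {q: random.sample(CANDIDATES, 2) for q in QUBITS}   # hypothetical defect modes

    def neighbours(q):
        r, c = q
        deltas = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        return [(r + dr, c + dc) for dr, dc in deltas if (r + dr, c + dc) in QUBIT_SET]

    def cost(freqs):
        """Penalize qubits sitting near defect modes or near coupled neighbours."""
        total = 0.0
        for q, f in freqs.items():
            total += sum(1.0 / (1e-3 + abs(f - d)) for d in DEFECTS[q])
            total += sum(0.5 / (1e-3 + abs(f - freqs[n])) for n in neighbours(q))
        return total

    # Re-optimize one qubit at a time, holding the rest of the configuration fixed.
    freqs = {q: random.choice(CANDIDATES) for q in QUBITS}
    for _ in range(20):
        for q in QUBITS:
            freqs[q] = min(CANDIDATES, key=lambda f: cost({**freqs, q: f}))

    print(f"final cost: {cost(freqs):.2f}")

Real processors add time-dependent trajectories, pulse constraints, and many more error channels, which is what makes the scaled-up version of this problem hard.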

Effects of Laser-Annealing on Fixed-Frequency Superconducting Qubits

  1. Hyunseong Kim,
  2. Christian Jünger,
  3. Alexis Morvan,
  4. Edward S. Barnard,
  5. William P. Livingston,
  6. M. Virginia P. Altoé,
  7. Yosep Kim,
  8. Chengyu Song,
  9. Larry Chen,
  10. John Mark Kreikebaum,
  11. D. Frank Ogletree,
  12. David I. Santiago,
  13. and Irfan Siddiqi
As superconducting quantum processors increase in complexity, techniques to overcome constraints on frequency crowding are needed. The recently developed technique of laser-annealing provides
an effective post-fabrication method to adjust the frequency of superconducting qubits. Here, we present an automated laser-annealing apparatus based on conventional microscopy components and demonstrate preservation of highly coherent transmons. In one case, we observe a two-fold increase in coherence after laser-annealing and perform noise spectroscopy on this qubit to investigate the change in defect features, in particular two-level system defects. Finally, we present a local heating model as well as demonstrate aging stability for laser-annealing on the wafer scale. Our work constitutes an important first step towards both understanding the underlying physical mechanism and scaling up laser-annealing of superconducting qubits.
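
For context on why annealing the junction shifts the qubit frequency: under the standard transmon and Ambegaokar-Baratoff relations, the qubit frequency scales roughly as the inverse square root of the junction's normal-state resistance, so a small fractional increase in resistance lowers the frequency by about half that fraction. The short Python sketch below illustrates this scaling; the gap, charging energy, and starting resistance are illustrative values, not numbers from the paper.

    # Illustrative scaling only (parameters are not from the paper): how an
    # increase in junction normal-state resistance R_n shifts a transmon
    # frequency, using the Ambegaokar-Baratoff and standard transmon relations.
    from scipy.constants import h, hbar, e, pi

    DELTA = 180e-6 * e      # assumed Al superconducting gap, ~180 ueV
    E_C = h * 200e6         # assumed charging energy, E_C/h = 200 MHz

    def f01(R_n):
        """Approximate |0>-|1> transition frequency for junction resistance R_n."""
        E_J = hbar * pi * DELTA / (4 * e**2 * R_n)    # E_J from Ambegaokar-Baratoff
        return ((8 * E_J * E_C) ** 0.5 - E_C) / h     # f01 ~ (sqrt(8*EJ*EC) - EC)/h

    R0 = 8e3                                          # assumed 8 kOhm junction
    for shift in (0.00, 0.01, 0.02, 0.05):            # fractional increases in R_n
        print(f"dR/R = {shift:4.0%}   f01 = {f01(R0 * (1 + shift)) / 1e9:.4f} GHz")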

Machine Learning for Continuous Quantum Error Correction on Superconducting Qubits

  1. Ian Convy,
  2. Haoran Liao,
  3. Song Zhang,
  4. Sahil Patel,
  5. William P. Livingston,
  6. Ho Nam Nguyen,
  7. Irfan Siddiqi,
  8. and K. Birgitta Whaley
We propose a machine learning algorithm for continuous quantum error correction that uses a recurrent neural network to identify bit-flip errors from continuous noisy
syndrome measurements. The algorithm is designed to operate on measurement signals deviating from the ideal behavior in which the mean value corresponds to a code syndrome value and the measurement has white noise. We analyze continuous measurements taken from a superconducting architecture using three transmon qubits to identify three significant practical examples of non-ideal behavior, namely auto-correlation at short temporal lags, transient syndrome dynamics after each bit-flip, and drift in the steady-state syndrome values over the course of many experiments. Based on these real-world imperfections, we generate synthetic measurement signals from which to train the recurrent neural network, and then test its proficiency when implementing active error correction, comparing this with a traditional double threshold scheme and a discrete Bayesian classifier. The results show that our machine learning protocol is able to outperform the double threshold protocol across all tests, achieving a final state fidelity comparable to the discrete Bayesian classifier.
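
As a point of reference for the baseline mentioned above, the following is a minimal sketch of one way a double-threshold (hysteresis) detector can be applied to a continuous, noisy syndrome signal; the signal model, filter, and thresholds are illustrative assumptions and do not reproduce the paper's scheme or its noise parameters.

    # Hypothetical double-threshold baseline on a synthetic syndrome signal.
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic syndrome: mean +1 until a bit flip at t = 600, then mean -1,
    # observed through white Gaussian noise (no transients or drift modeled).
    T = 1200
    true_syndrome = np.where(np.arange(T) < 600, 1.0, -1.0)
    signal = true_syndrome + 2.0 * rng.standard_normal(T)

    # Exponential moving-average filter before thresholding.
    alpha, filtered = 0.05, np.zeros(T)
    for t in range(1, T):
        filtered[t] = (1 - alpha) * filtered[t - 1] + alpha * signal[t]

    # Double threshold: switch the declared syndrome only when the filtered
    # signal crosses +0.5 (declare +1) or -0.5 (declare -1); in between, hold.
    declared, state = np.zeros(T), 1.0
    for t in range(T):
        if filtered[t] > 0.5:
            state = 1.0
        elif filtered[t] < -0.5:
            state = -1.0
        declared[t] = state

    print(f"bit flip at t=600 first declared at t={int(np.argmax(declared < 0))}")

A recurrent network trained on signals carrying the measured auto-correlation, transients, and drift can, in principle, learn a better decision rule than this fixed hysteresis filter, which is the comparison the abstract describes.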

Experimental demonstration of continuous quantum error correction

  1. William P. Livingston,
  2. Machiel S. Blok,
  3. Emmanuel Flurin,
  4. Justin Dressel,
  5. Andrew N. Jordan,
  6. and Irfan Siddiqi
The storage and processing of quantum information are susceptible to external noise, resulting in computational errors that are inherently continuous. A powerful method to suppress these
effects is to use quantum error correction. Typically, quantum error correction is executed in discrete rounds where errors are digitized and detected by projective multi-qubit parity measurements. These stabilizer measurements are traditionally realized with entangling gates and projective measurement on ancillary qubits to complete a round of error correction. However, their gate structure makes them vulnerable to errors occurring at specific times in the code and errors on the ancilla qubits. Here we use direct parity measurements to implement a continuous quantum bit-flip correction code in a resource-efficient manner, eliminating entangling gates, ancilla qubits, and their associated errors. The continuous measurements are monitored by an FPGA controller that actively corrects errors as they are detected. Using this method, we achieve an average bit-flip detection efficiency of up to 91%. Furthermore, we use the protocol to increase the relaxation time of the protected logical qubit by a factor of 2.7 over the relaxation times of the bare comprising qubits. Our results showcase resource-efficient stabilizer measurements in a multi-qubit architecture and demonstrate how continuous error correction codes can address challenges in realizing a fault-tolerant system.
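
For readers unfamiliar with the underlying code, the logic driven by the continuously monitored parities is the standard three-qubit bit-flip code lookup: two parities (Z1Z2 and Z2Z3) uniquely identify which single qubit flipped. The Python sketch below shows that lookup together with a simple digitization step; the thresholds and the wait-on-ambiguity rule are illustrative and do not describe the paper's FPGA controller.

    # Standard three-qubit bit-flip code syndrome table, with an illustrative
    # digitization of two continuously monitored parity signals.
    CORRECTION = {
        (+1, +1): None,   # no error detected
        (-1, +1): 1,      # flip qubit 1
        (-1, -1): 2,      # flip qubit 2
        (+1, -1): 3,      # flip qubit 3
    }

    def decide(parity_12, parity_23, threshold=0.5):
        """Digitize two filtered parity signals and pick a correction (or wait)."""
        def digitize(x):
            if x > threshold:
                return +1
            if x < -threshold:
                return -1
            return 0              # ambiguous: signal is between the thresholds
        syndrome = (digitize(parity_12), digitize(parity_23))
        return CORRECTION.get(syndrome)   # None -> take no action this cycle

    # Example cycle: Z1Z2 clearly flipped, Z2Z3 clearly unflipped -> correct qubit 1.
    print(decide(-0.9, +0.8))   # prints 1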

Implementation of a canonical phase measurement with quantum feedback

  1. Leigh S. Martin,
  2. William P. Livingston,
  3. Shay Hacohen-Gourgy,
  4. Howard M. Wiseman,
  5. and Irfan Siddiqi
Much of modern metrology and communication technology encodes information in electromagnetic waves, typically as an amplitude or phase. While current hardware can perform near-ideal
measurements of photon number or field amplitude, to date no device exists that can even in principle perform an ideal phase measurement. In this work, we implement a single-shot canonical phase measurement on a one-photon wave packet, which surpasses the current standard of heterodyne detection and is optimal for single-shot phase estimation. By applying quantum feedback to a Josephson parametric amplifier, our system adaptively changes its measurement basis during photon arrival and allows us to validate the detector’s performance by tracking the quantum state of the photon source. These results provide an important capability for optical quantum computing, and demonstrate that quantum feedback can both enhance the precision of a detector and enable it to measure new classes of physical observables.
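
The feedback idea, stripped to its essentials, is that the measurement basis is steered in real time by the information already gathered. The toy Python sketch below is not a simulation of the experiment or of a Josephson parametric amplifier; it only illustrates that basic loop with a generic interference-style measurement model and a grid-based Bayesian update, where each measurement is taken in quadrature with the current phase estimate.

    # Toy adaptive phase estimation (illustrative model, not the experiment).
    import numpy as np

    rng = np.random.default_rng(7)
    true_phase = 1.1                                 # unknown phase (radians)
    grid = np.linspace(-np.pi, np.pi, 721)
    posterior = np.ones_like(grid) / grid.size

    def p_plus(phi, theta, visibility=0.8):
        """P(+1 outcome) for a measurement at basis angle theta, given phase phi."""
        return 0.5 * (1 + visibility * np.cos(phi - theta))

    for _ in range(200):
        # Feedback: measure in quadrature with the current best estimate, where
        # the outcome probability is most sensitive to small phase errors.
        estimate = grid[np.argmax(posterior)]
        theta = estimate + np.pi / 2
        # Draw an outcome from the true phase, then update the posterior.
        outcome = +1 if rng.random() < p_plus(true_phase, theta) else -1
        likelihood = p_plus(grid, theta) if outcome == +1 else 1 - p_plus(grid, theta)
        posterior *= likelihood
        posterior /= posterior.sum()

    print(f"true phase {true_phase:.3f}, estimate {grid[np.argmax(posterior)]:.3f}")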