Laser-annealing Josephson junctions for yielding scaled-up superconducting quantum processors

  1. Jared B. Hertzberg,
  2. Eric J. Zhang,
  3. Sami Rosenblatt,
  4. Easwar Magesan,
  5. John A. Smolin,
  6. Jeng-Bang Yau,
  7. Vivek P. Adiga,
  8. Martin Sandberg,
  9. Markus Brink,
  10. Jerry M. Chow,
  11. and Jason S. Orcutt
As superconducting quantum circuits scale to larger sizes, frequency crowding becomes a formidable problem. Here we present a solution for this problem in fixed-frequency qubit architectures. By systematically adjusting qubit frequencies post-fabrication, we show a nearly ten-fold improvement in the precision of setting qubit frequencies. To assess scalability, we identify the types of "frequency collisions" that will impair a transmon qubit and cross-resonance gate architecture. Using statistical modeling, we compute the probability of evading all such conditions as a function of qubit frequency precision. We find that without post-fabrication tuning, the probability of finding a workable lattice quickly approaches zero. However, with the demonstrated precision it is possible to find collision-free lattices with favorable yield. These techniques and models are currently employed in available quantum systems and will be indispensable as systems continue to scale to larger sizes.
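The yield argument can be made concrete with a small Monte Carlo: sample qubit frequencies around their design targets with a Gaussian spread sigma and count how often the lattice evades every collision window. The windows, lattice, target frequencies, and spreads below are illustrative assumptions for a sketch, not the conditions enumerated in the paper.

```python
# Monte Carlo sketch of collision-free lattice yield vs. frequency precision.
# All numerical choices here are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def collision_free(freqs, edges, anharm=0.33):
    """Check two hypothetical collision windows on each coupled pair (GHz)."""
    for i, j in edges:
        d = abs(freqs[i] - freqs[j])
        if d < 0.017:                 # near-degenerate 0-1 transitions
            return False
        if abs(d - anharm) < 0.030:   # 0-1 colliding with neighbor's 1-2
            return False
    return True

def yield_estimate(sigma_ghz, n_trials=2000):
    targets = np.array([5.00, 5.07, 5.14, 5.07])   # staggered targets (GHz)
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # square plaquette
    hits = sum(collision_free(targets + rng.normal(0, sigma_ghz, 4), edges)
               for _ in range(n_trials))
    return hits / n_trials

for sigma_mhz in (130, 50, 15):   # illustrative pre- and post-tuning spreads
    print(f"sigma = {sigma_mhz:3d} MHz -> lattice yield ~ "
          f"{yield_estimate(sigma_mhz / 1e3):.2f}")
```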

Three Qubit Randomized Benchmarking

  1. David C. McKay,
  2. Sarah Sheldon,
  3. John A. Smolin,
  4. Jerry M. Chow,
  5. and Jay M. Gambetta
As quantum circuits increase in size, it is critical to establish scalable multiqubit fidelity metrics. Here we investigate three-qubit randomized benchmarking (RB) with fixed-frequency transmon qubits coupled to a common bus with pairwise microwave-activated interactions (cross-resonance). We measure, for the first time, a three-qubit error per Clifford of 0.106 for all-to-all gate connectivity and 0.207 for linear gate connectivity. Furthermore, by introducing mixed-dimensionality simultaneous RB (simultaneous one- and two-qubit RB), we show that the three-qubit errors can be predicted from the one- and two-qubit errors. However, by introducing certain coherent errors to the gates we can increase the three-qubit error to 0.302, an increase that is not predicted by a proportionate increase in the one- and two-qubit errors from simultaneous RB. This demonstrates three-qubit RB as a unique multiqubit metric.
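For reference, errors per Clifford like those quoted here come from fitting the standard RB exponential decay of sequence fidelity, F(m) = A p^m + B, and converting via r = (1 - p)(1 - 2^{-n}). A minimal sketch with synthetic data (the model and conversion are standard RB; the data points are invented to reproduce r = 0.106):

```python
# Standard RB fit: sequence fidelity decays as A*p^m + B; error per
# Clifford is r = (1 - p)(1 - 2^-n). Data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

n_qubits = 3
decay = lambda m, A, p, B: A * p**m + B

m = np.array([1, 2, 4, 8, 16, 32], dtype=float)
p_true = 1 - 0.106 / (1 - 2.0**-n_qubits)   # decay implied by r = 0.106
f = 0.875 * p_true**m + 0.125               # synthetic sequence fidelities

(A, p, B), _ = curve_fit(decay, m, f, p0=[0.9, 0.9, 0.1])
r = (1 - p) * (1 - 2.0**-n_qubits)
print(f"error per Clifford r = {r:.3f}")    # recovers ~0.106
```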

Demonstration of quantum advantage in machine learning

  1. D. Ristè,
  2. Marcus P. da Silva,
  3. Colm A. Ryan,
  4. Andrew W. Cross,
  5. John A. Smolin,
  6. Jay M. Gambetta,
  7. Jerry M. Chow,
  8. and Blake R. Johnson
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems including nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, and the attainable quantum advantage is modest. Here we solve an oracle-based problem, known as learning parity with noise, using a five-qubit superconducting processor. Running classical and quantum algorithms on the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a quantum advantage already emerges in existing noisy systems.
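To see why the classical query count blows up with noise, consider the simplest classical strategy: query the parity oracle repeatedly and take majority votes, which by a Chernoff bound needs on the order of 1/(1/2 - eps)^2 queries per bit at noise rate eps. A toy sketch (the oracle interface and solver here are illustrative stand-ins, not the experiment's):

```python
# Toy classical baseline for learning parity with noise via majority voting.
# The query count grows sharply as the noise rate eps approaches 1/2.
import numpy as np

rng = np.random.default_rng(1)

def noisy_parity_oracle(x, s, eps):
    """Return the parity x.s mod 2, flipped with probability eps."""
    return (int(np.dot(x, s)) % 2) ^ int(rng.random() < eps)

def classical_queries_to_learn(s, eps, fail=0.01):
    n = len(s)
    # Chernoff bound: votes per bit so every majority is right w.h.p.
    votes = int(np.ceil(np.log(2 * n / fail) / (2 * (0.5 - eps) ** 2)))
    guess = []
    for i in range(n):
        e = np.zeros(n, dtype=int)
        e[i] = 1                     # probe one bit of s at a time
        ones = sum(noisy_parity_oracle(e, s, eps) for _ in range(votes))
        guess.append(int(ones > votes / 2))
    return n * votes, np.array_equal(guess, s)

s = np.array([1, 0, 1, 1, 0])        # hidden parity string (5 bits)
for eps in (0.05, 0.2, 0.4):
    q, ok = classical_queries_to_learn(s, eps)
    print(f"eps = {eps:.2f}: {q} queries, recovered = {ok}")
```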

Comment on "Distinguishing Classical and Quantum Models for the D-Wave Device"

  1. Seung Woo Shin,
  2. Graeme Smith,
  3. John A. Smolin,
  4. and Umesh Vazirani
The SSSV model is a simple classical model that achieves excellent correlation with published experimental data on the D-Wave machine's behavior on random instances of its native problem, thus raising questions about how "quantum" the D-Wave machine is at large scales. In response, a recent preprint by Vinci et al. proposes a particular set of instances on which the D-Wave machine behaves differently from the SSSV model. In this short note, we explain how a simple modeling of systematic errors in the machine allows the SSSV model to reproduce the behavior reported in the experiments of Vinci et al.

How "Quantum" is the D-Wave Machine?

  1. Seung Woo Shin,
  2. Graeme Smith,
  3. John A. Smolin,
  4. and Umesh Vazirani
Recently there has been intense interest in claims about the performance of the D-Wave machine. Scientifically the most interesting aspect was the claim in Boixo et al., based on extensive experiments, that the D-Wave machine exhibits large-scale quantum behavior. Their conclusion was based on the strong correlation of the input-output behavior of the D-Wave machine with a quantum model called simulated quantum annealing, in contrast to its poor correlation with two classical models: simulated annealing and classical spin dynamics. In this paper, we outline a simple new classical model, and show that on the same data it yields correlations with the D-Wave input-output behavior that are at least as good as those of simulated quantum annealing. Based on these results, we conclude that classical models for the D-Wave machine are not ruled out. Further analysis of the new model provides additional algorithmic insights into the nature of the problems being solved by the D-Wave machine.
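A minimal sketch of the kind of model described, assuming the commonly cited SSSV form: each qubit is replaced by a classical rotor with angle theta_i, annealed by Metropolis updates under H(t) = -A(t) sum_i sin(theta_i) + B(t) [sum_i h_i cos(theta_i) + sum_{i<j} J_ij cos(theta_i) cos(theta_j)]. The schedule, temperature, and test instance below are placeholder choices.

```python
# SSSV-style classical rotor anneal with Metropolis updates (sketch).
import numpy as np

rng = np.random.default_rng(2)

def sssv_anneal(h, J, sweeps=1000, beta=2.0):
    """Anneal classical rotors; J must be symmetric with zero diagonal."""
    n = len(h)
    theta = rng.uniform(0, np.pi, n)
    for t in range(sweeps):
        s = t / (sweeps - 1)
        A, B = 1.0 - s, s                        # linear schedule (assumption)
        for i in rng.permutation(n):
            new = rng.uniform(0, np.pi)
            field = h[i] + J[i] @ np.cos(theta)  # local longitudinal field
            dE = (-A * (np.sin(new) - np.sin(theta[i]))
                  + B * field * (np.cos(new) - np.cos(theta[i])))
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                theta[i] = new
    return np.sign(np.cos(theta)).astype(int)    # project rotors to +/-1 spins

# Smoke test: an antiferromagnetic chain should anneal to alternating spins
n = 6
J = np.zeros((n, n))
for i in range(n - 1):
    J[i, i + 1] = J[i + 1, i] = 1.0
print(sssv_anneal(np.zeros(n), J))
```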

Self-Consistent Quantum Process Tomography

  1. Seth T. Merkel,
  2. Jay M. Gambetta,
  3. John A. Smolin,
  4. S. Poletto,
  5. A. D. Córcoles,
  6. B. R. Johnson,
  7. Colm A. Ryan,
  8. and M. Steffen
Quantum process tomography is a necessary tool for verifying quantum gates and diagnosing faults in architectures and gate design. We show that the standard approach to process tomography is grossly inaccurate in the case where the states and measurement operators used to interrogate the system are generated by gates that have some systematic error, a situation all but unavoidable in any practical setting. These errors in tomography cannot be fully corrected through oversampling or by performing a larger set of experiments. We present an alternative method for tomography that reconstructs an entire library of gates in a self-consistent manner. The essential ingredient is to define a likelihood function that assumes nothing about the gates used for preparation and measurement. In order to make the resulting optimization tractable we linearize about the target gates, a reasonable approximation when benchmarking a quantum computer as opposed to probing a black-box function.
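The flavor of the method can be seen in a toy version: treat every gate's Pauli transfer matrix as a free parameter, predict each sequence outcome from products of those matrices, and fit all gates jointly, linearizing about the targets. This single-qubit sketch with synthetic data holds the preparation and measurement ideal for brevity (the paper fits them too), and the fitted parameters are only defined up to a gauge.

```python
# Toy joint (self-consistent) gate fit in the Pauli-transfer-matrix picture.
import numpy as np
from itertools import product
from scipy.optimize import least_squares

def rot_ptm(axis, angle):
    """PTM of a single-qubit rotation, basis (I, X, Y, Z)."""
    c, s = np.cos(angle), np.sin(angle)
    G = np.eye(4)
    if axis == 'x':
        G[2, 2] = G[3, 3] = c; G[2, 3] = -s; G[3, 2] = s
    else:
        G[1, 1] = G[3, 3] = c; G[1, 3] = s; G[3, 1] = -s
    return G

ideal = [np.eye(4), rot_ptm('x', np.pi / 2), rot_ptm('y', np.pi / 2)]
rho = np.array([1.0, 0, 0, 1]) / np.sqrt(2)   # |0><0| (held ideal here)
E = rho.copy()                                # measure |0> (held ideal here)
seqs = list(product(range(3), repeat=3))      # all length-3 gate sequences

def predict(gates, seq):
    v = rho
    for k in seq:
        v = gates[k] @ v
    return float(E @ v)

# "True" gates carry small coherent over-rotations the fit must recover
true = [np.eye(4), rot_ptm('x', np.pi / 2 + 0.05),
        rot_ptm('y', np.pi / 2 - 0.03)]
data = np.array([predict(true, s) for s in seqs])

def residuals(x):
    # Linearize each gate about its target: G_k = G_k_ideal (I + eps_k)
    gates = [ideal[k] @ (np.eye(4) + x[16 * k:16 * (k + 1)].reshape(4, 4))
             for k in range(3)]
    return np.array([predict(gates, s) for s in seqs]) - data

fit = least_squares(residuals, np.zeros(48))
print("residual norm after joint fit:", np.linalg.norm(fit.fun))
```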

Process verification of two-qubit quantum gates by randomized benchmarking

  1. A. D. Córcoles,
  2. Jay M. Gambetta,
  3. Jerry M. Chow,
  4. John A. Smolin,
  5. Matthew Ware,
  6. J. D. Strand,
  7. B. L. T. Plourde,
  8. and M. Steffen
We implement a complete randomized benchmarking protocol on a system of two superconducting qubits. The protocol consists of randomizing over gates in the Clifford group, which experimentally are generated via an improved two-qubit cross-resonance gate implementation and single-qubit unitaries. From this we extract an optimal average error per Clifford of 0.0936. We also perform an interleaved experiment, alternating our optimal two-qubit gate with random two-qubit Clifford gates, to obtain a two-qubit gate error of 0.0653. We compare these values with a two-qubit gate error of ~0.12 obtained from quantum process tomography, which is likely limited by state preparation and measurement errors.
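As a consistency check, the standard interleaved-RB relation r_gate = (1 - p_int/p_ref)(d - 1)/d with d = 4 ties the two quoted numbers together; the decay constants below are inferred from the reported errors rather than taken from raw data:

```python
# Back-of-envelope check of the two-qubit interleaved-RB relations (d = 4):
# r = (1 - p)(d - 1)/d for the reference run, and
# r_gate = (1 - p_int/p_ref)(d - 1)/d for the interleaved run.
d = 4
r_ref, r_gate = 0.0936, 0.0653              # reported errors per Clifford/gate

p_ref = 1 - r_ref * d / (d - 1)             # reference depolarizing parameter
p_int = p_ref * (1 - r_gate * d / (d - 1))  # implied interleaved decay

print(f"p_ref = {p_ref:.4f}, p_int = {p_int:.4f}")
recovered = (1 - p_int / p_ref) * (d - 1) / d
print(f"recovered gate error = {recovered:.4f}")   # ~0.0653
```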

Characterization of addressability by simultaneous randomized benchmarking

  1. Jay M. Gambetta,
  2. A. D. Corcoles,
  3. S. T. Merkel,
  4. B. R. Johnson,
  5. John A. Smolin,
  6. Jerry M. Chow,
  7. Colm A. Ryan,
  8. Chad Rigetti,
  9. S. Poletto,
  10. Thomas A. Ohki,
  11. Mark B. Ketchen,
  12. and M. Steffen
The control and handling of errors arising from cross-talk and unwanted interactions in multi-qubit systems is an important issue in quantum information processing architectures. We introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of performing randomized benchmarking on each qubit individually and then simultaneously, and the amount of addressability is related to the difference of the average gate fidelities of those experiments. We present results on two similar samples with different amounts of cross-talk and unwanted interactions, which agree with predictions based on simple models for the amount of residual coupling.
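A minimal sketch of the figure of merit, assuming invented decay constants: convert each fitted RB decay p into an average gate fidelity and compare the isolated and simultaneous runs.

```python
# Simultaneous-RB addressability sketch: the difference of average gate
# fidelities between isolated and simultaneous runs. Decays are invented.
d = 2                                   # single-qubit Hilbert-space dimension
fid = lambda p: p + (1 - p) / d         # average fidelity from RB decay p

p_alone, p_simult = 0.9960, 0.9938      # hypothetical fitted decays, qubit 1
delta = fid(p_alone) - fid(p_simult)    # addressability error from crosstalk
print(f"F_alone = {fid(p_alone):.4f}, F_simult = {fid(p_simult):.4f}, "
      f"delta = {delta:.4f}")
```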

Superconducting qubit in a waveguide cavity with a coherence time approaching 0.1 ms

  1. Chad Rigetti,
  2. Stefano Poletto,
  3. Jay M. Gambetta,
  4. B. L. T. Plourde,
  5. Jerry M. Chow,
  6. A. D. Corcoles,
  7. John A. Smolin,
  8. Seth T. Merkel,
  9. J. R. Rozen,
  10. George A. Keefe,
  11. Mary B. Rothwell,
  12. Mark B. Ketchen,
  13. and M. Steffen
We report a superconducting artificial atom with an observed quantum coherence time of T2* = 95 μs and energy relaxation time T1 = 70 μs. The system consists of a single Josephson junction transmon qubit embedded in an otherwise empty copper waveguide cavity whose lowest eigenmode is dispersively coupled to the qubit transition. We attribute the factor of four increase in the coherence quality factor relative to previous reports to device modifications aimed at reducing qubit dephasing from residual cavity photons. This simple device holds great promise as a robust and easily produced artificial quantum system whose intrinsic coherence properties are sufficient to allow tests of quantum error correction.
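As a rough sanity check on the quality-factor claim, the coherence quality factor is Q ≈ 2π f_q T2; the qubit frequency below is an assumed, typical transmon value, not the paper's reported frequency:

```python
# Rough coherence quality factor Q = 2*pi*f_q*T2 (f_q is an assumed value).
import math

f_q = 3.5e9        # assumed transmon frequency (Hz), illustrative only
T2 = 95e-6         # reported coherence time (s)
print(f"Q ~ {2 * math.pi * f_q * T2:.2e}")   # on the order of 2e6
```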