Experimental demonstration of fault-tolerant state preparation with superconducting qubits

  1. Maika Takita,
  2. Andrew W. Cross,
  3. A. D. Córcoles,
  4. Jerry M. Chow,
  5. and Jay M. Gambetta
Robust quantum computation requires encoding delicate quantum information into degrees of freedom that are hard for the environment to change. Quantum encodings have been demonstrated
in many physical systems by observing and correcting storage errors, but applications require not just storing information; we must accurately compute even with faulty operations. The theory of fault-tolerant quantum computing illuminates a way forward by providing a foundation and collection of techniques for limiting the spread of errors. Here we implement one of the smallest quantum codes in a five-qubit superconducting transmon device and demonstrate fault-tolerant state preparation. We characterize the resulting codewords through quantum process tomography and study the free evolution of the logical observables. Our results are consistent with fault-tolerant state preparation in a protected qubit subspace.

Experimental demonstration of a resonator-induced phase gate in a multi-qubit circuit QED system

  1. Hanhee Paik,
  2. A. Mezzacapo,
  3. Martin Sandberg,
  4. D. T. McClure,
  5. B. Abdo,
  6. A. D. Córcoles,
  7. O. Dial,
  8. D. F. Bogorin,
  9. B. L. T. Plourde,
  10. M. Steffen,
  11. A. W. Cross,
  12. J. M. Gambetta,
  13. and Jerry M. Chow
The resonator-induced phase (RIP) gate is a multi-qubit entangling gate that allows a high degree of flexibility in qubit frequencies, making it attractive for quantum operations in
large-scale architectures. We experimentally realize the RIP gate with four superconducting qubits in a three-dimensional (3D) circuit quantum electrodynamics architecture, demonstrating high-fidelity controlled-Z (CZ) gates in pair subspaces between all possible pairs of qubits from two different four-qubit devices. These qubits span a wide range of frequency detunings, up to 1.8 GHz. We further show a dynamical multi-qubit refocusing scheme that isolates the two-qubit interactions, and combine the resulting gates to generate a four-qubit Greenberger-Horne-Zeilinger state.

Characterizing a Four-Qubit Planar Lattice for Arbitrary Error Detection

  1. Jerry M. Chow,
  2. Srikanth J. Srinivasan,
  3. Easwar Magesan,
  4. A. D. Córcoles,
  5. David W. Abraham,
  6. Jay M. Gambetta,
  7. and Matthias Steffen
Quantum error correction will be a necessary component in realizing scalable quantum computers with physical qubits. Theoretically, it is possible to perform arbitrarily long computations
if the error rate is below a threshold value. The two-dimensional surface code permits relatively high fault-tolerant thresholds, at the ~1% level, and requires only a latticed network of qubits with nearest-neighbor interactions. Superconducting qubits have continued to improve steadily in coherence, gate, and readout fidelities, becoming a leading candidate for implementation in larger quantum networks. Here we describe characterization experiments and calibration of a system of four superconducting qubits arranged in a planar lattice amenable to the surface code. We give insights into the particular qubit design and compare simulated with experimentally determined parameters. Single- and two-qubit gate tune-up procedures are described, and results for simultaneously benchmarking pairs of two-qubit gates are presented. All of these controls are eventually used in an arbitrary error detection protocol described in separate work [Córcoles et al., Nature Communications 6, 2015].

Machine learning for discriminating quantum measurement trajectories and improving readout

  1. Easwar Magesan,
  2. Jay M. Gambetta,
  3. A. D. Córcoles,
  4. and Jerry M. Chow
High-fidelity measurements are important for the physical implementation of quantum information protocols. Current methods for classifying measurement trajectories in superconducting
qubit systems produce fidelities that are systematically lower than those predicted by experimental parameters. Here, we place current classification methods within the framework of machine learning algorithms and improve on them by investigating more sophisticated ML approaches. We find that non-linear algorithms and clustering methods produce significantly higher assignment fidelities that help close the gap to the fidelity achievable under ideal noise conditions. Clustering methods group trajectories into natural subsets within the data, which allows for the diagnosis of specific systematic errors. We find large clusters in the data associated with relaxation processes and show these are the main source of discrepancy between our experimental and achievable fidelities. These error diagnosis techniques help provide a concrete path forward to improve qubit measurements.
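The assignment-fidelity gap described here can be illustrated with a toy simulation (not the authors' code; all means, noise levels, and the 10% relaxation fraction below are hypothetical): integrated IQ shots form two Gaussian clusters, a linear nearest-mean discriminator separates them, and a third cluster of relaxation events drags the fidelity well below the Gaussian-noise limit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Hypothetical boxcar-integrated IQ means for |0> and |1>, with Gaussian noise
mu0, mu1 = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
sigma = 0.3

shots0 = mu0 + sigma * rng.standard_normal((n, 2))
shots1 = mu1 + sigma * rng.standard_normal((n, 2))
# Model T1 relaxation during readout: ~10% of |1> shots land near the |0> blob
decayed = rng.random(n) < 0.10
shots1[decayed] = mu0 + sigma * rng.standard_normal((decayed.sum(), 2))

def classify(shots):
    # Linear nearest-mean discriminator: assign each shot to the closer centroid
    return (np.linalg.norm(shots - mu1, axis=1)
            < np.linalg.norm(shots - mu0, axis=1)).astype(int)

f0 = np.mean(classify(shots0) == 0)          # near-ideal: limited by Gaussian overlap
f1 = np.mean(classify(shots1) == 1)          # degraded by the relaxation cluster
assignment_fidelity = 0.5 * (f0 + f1)
```

In this toy model the relaxation cluster, not the Gaussian noise, dominates the infidelity, mirroring the diagnosis in the abstract.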

Detecting arbitrary quantum errors via stabilizer measurements on a sublattice of the surface code

  1. A. D. Córcoles,
  2. Easwar Magesan,
  3. Srikanth J. Srinivasan,
  4. Andrew W. Cross,
  5. M. Steffen,
  6. Jay M. Gambetta,
  7. and Jerry M. Chow
To build a fault-tolerant quantum computer, it is necessary to implement a quantum error correcting code. Such codes rely on the ability to extract information about the quantum error
syndrome while not destroying the quantum information encoded in the system. Stabilizer codes are attractive solutions to this problem, as they are analogous to classical linear codes, have simple and easily computed encoding networks, and allow efficient syndrome extraction. In these codes, syndrome extraction is performed via multi-qubit stabilizer measurements, which are bit and phase parity checks up to local operations. Previously, stabilizer codes have been realized in nuclei, trapped ions, and superconducting qubits. However, these implementations lack the ability to perform fault-tolerant syndrome extraction, which continues to be a challenge for all physical quantum computing systems. Here we experimentally demonstrate a key step towards solving this problem by using a two-by-two lattice of superconducting qubits to perform syndrome extraction and arbitrary error detection via simultaneous quantum non-demolition stabilizer measurements. This lattice represents a primitive tile for the surface code, which is a promising stabilizer code for scalable quantum computing. Furthermore, we show the preservation of an entangled state in the presence of an arbitrary applied error through high-fidelity syndrome measurement. Our results bolster the promise of employing lattices of superconducting qubits for larger-scale fault-tolerant quantum computing.
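The bit- and phase-parity idea behind stabilizer measurements can be seen in a two-qubit toy example (a state-vector sketch, not the surface-code experiment itself): on a Bell state, the ZZ and XX stabilizer eigenvalues jointly identify which single-qubit Pauli error occurred.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
ZZ, XX = np.kron(Z, Z), np.kron(X, X)

def syndrome(error_on_q0):
    """Apply a single-qubit error to qubit 0 and return the (ZZ, XX) eigenvalues."""
    psi = np.kron(error_on_q0, I2) @ bell
    return (int(np.rint(np.real(psi.conj() @ ZZ @ psi))),
            int(np.rint(np.real(psi.conj() @ XX @ psi))))

# Each error class flips a distinct pattern of parity checks:
assert syndrome(I2) == (1, 1)    # no error
assert syndrome(X) == (-1, 1)    # bit flip trips the ZZ (bit-parity) check
assert syndrome(Z) == (1, -1)    # phase flip trips the XX (phase-parity) check
assert syndrome(Y) == (-1, -1)   # Y error trips both
```

The experiment performs these checks with ancilla qubits and quantum non-demolition measurements rather than by inspecting the state vector, but the syndrome table is the same.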

Improved superconducting qubit coherence using titanium nitride

  1. J. Chang,
  2. M. R. Vissers,
  3. A. D. Córcoles,
  4. M. Sandberg,
  5. J. Gao,
  6. David W. Abraham,
  7. Jerry M. Chow,
  8. Jay M. Gambetta,
  9. M. B. Rothwell,
  10. G. A. Keefe,
  11. Matthias Steffen,
  12. and D. P. Pappas
We demonstrate enhanced relaxation and dephasing times of transmon qubits, up to ~60 μs, by fabricating the interdigitated shunting capacitors using titanium nitride (TiN). Compared
to lift-off aluminum deposited simultaneously with the Josephson junction, this represents as much as a six-fold improvement and provides evidence that previous planar transmon coherence times were limited by surface losses from two-level system (TLS) defects residing at or near interfaces. Concurrently, we observe an anomalous temperature-dependent frequency shift of TiN resonators that is inconsistent with the predicted TLS model.

Self-Consistent Quantum Process Tomography

  1. Seth T. Merkel,
  2. Jay M. Gambetta,
  3. John A. Smolin,
  4. S. Poletto,
  5. A. D. Córcoles,
  6. B. R. Johnson,
  7. Colm A. Ryan,
  8. and M. Steffen
Quantum process tomography is a necessary tool for verifying quantum gates and diagnosing faults in architectures and gate design. We show that the standard approach to process tomography
is grossly inaccurate when the states and measurement operators used to interrogate the system are generated by gates that have some systematic error, a situation all but unavoidable in any practical setting. These errors in tomography cannot be fully corrected through oversampling or by performing a larger set of experiments. We present an alternative method for tomography that reconstructs an entire library of gates in a self-consistent manner. The essential ingredient is to define a likelihood function that assumes nothing about the gates used for preparation and measurement. In order to make the resulting optimization tractable, we linearize about the target, a reasonable approximation when benchmarking a quantum computer as opposed to probing a black-box function.

Process verification of two-qubit quantum gates by randomized benchmarking

  1. A. D. Córcoles,
  2. Jay M. Gambetta,
  3. Jerry M. Chow,
  4. John A. Smolin,
  5. Matthew Ware,
  6. J. D. Strand,
  7. B. L. T. Plourde,
  8. and M. Steffen
We implement a complete randomized benchmarking protocol on a system of two superconducting qubits. The protocol consists of randomizing over gates in the Clifford group, which experimentally
are generated via an improved two-qubit cross-resonance gate implementation and single-qubit unitaries. From this we extract an optimal average error per Clifford of 0.0936. We also perform an interleaved experiment, alternating our optimal two-qubit gate with random two-qubit Clifford gates, to obtain a two-qubit gate error of 0.0653. We compare these values with a two-qubit gate error of ~0.12 obtained from quantum process tomography, which is likely limited by state preparation and measurement errors.
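The quoted numbers follow from the standard RB decay model F(m) = A·p^m + B, with the interleaved gate error obtained from the ratio of the interleaved and reference decay constants. A minimal sketch (the decay parameters below are illustrative values chosen to reproduce error rates of the quoted magnitude, not the experimental data):

```python
import numpy as np

d = 4                            # two-qubit Hilbert space dimension
A, B = 0.75, 0.25                # SPAM coefficients of the depolarizing model
p_ref, p_int = 0.8752, 0.7990    # illustrative reference/interleaved decay parameters

m = np.arange(1, 60)             # Clifford sequence lengths
F_ref = A * p_ref**m + B         # reference RB decay curve
# Recover p from the curve with a log-linear fit (real data would need a
# nonlinear fit with A and B left free).
slope, _ = np.polyfit(m, np.log(F_ref - B), 1)
p_fit = np.exp(slope)

# Average error per Clifford from the reference decay
r_clifford = (d - 1) / d * (1 - p_fit)
# Interleaved-gate error from the ratio of interleaved to reference decays
r_gate = (d - 1) / d * (1 - p_int / p_ref)
```

With these parameters, r_clifford and r_gate come out near 0.0936 and 0.0653, the same scale as the reported values.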

Characterization of addressability by simultaneous randomized benchmarking

  1. Jay M. Gambetta,
  2. A. D. Córcoles,
  3. S. T. Merkel,
  4. B. R. Johnson,
  5. John A. Smolin,
  6. Jerry M. Chow,
  7. Colm A. Ryan,
  8. Chad Rigetti,
  9. S. Poletto,
  10. Thomas A. Ohki,
  11. Mark B. Ketchen,
  12. and M. Steffen
The control and handling of errors arising from cross-talk and unwanted interactions in multi-qubit systems is an important issue in quantum information processing architectures. We
introduce a benchmarking protocol that provides information about the amount of addressability present in the system and implement it on coupled superconducting qubits. The protocol consists of performing randomized benchmarking on each qubit individually and then simultaneously, and the amount of addressability is related to the difference between the average gate fidelities of those experiments. We present results on two similar samples with different amounts of cross-talk and unwanted interactions, which agree with predictions based on simple models for the amount of residual coupling.
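In this protocol the addressability figure is simply the drop in each qubit's average gate fidelity when its neighbor is benchmarked at the same time. A minimal sketch with made-up fidelities (the qubit labels and all numbers below are hypothetical):

```python
# Hypothetical average gate fidelities from individual and simultaneous RB
F_individual = {"Q1": 0.9980, "Q2": 0.9975}
F_simultaneous = {"Q1": 0.9952, "Q2": 0.9960}

# Addressability error per qubit: fidelity lost to cross-talk and unwanted
# interactions when the neighboring qubit is driven simultaneously
delta = {q: F_individual[q] - F_simultaneous[q] for q in F_individual}
# delta["Q1"] ~ 2.8e-3 here; zero would indicate perfect addressability
```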

Superconducting qubit in waveguide cavity with coherence time approaching 0.1 ms

  1. Chad Rigetti,
  2. Stefano Poletto,
  3. Jay M. Gambetta,
  4. B. L. T. Plourde,
  5. Jerry M. Chow,
  6. A. D. Córcoles,
  7. John A. Smolin,
  8. Seth T. Merkel,
  9. J. R. Rozen,
  10. George A. Keefe,
  11. Mary B. Rothwell,
  12. Mark B. Ketchen,
  13. and M. Steffen
We report a superconducting artificial atom with an observed quantum coherence time of T2* = 95 μs and energy relaxation time T1 = 70 μs. The system consists of a single Josephson junction
transmon qubit embedded in an otherwise empty copper waveguide cavity whose lowest eigenmode is dispersively coupled to the qubit transition. We attribute the factor of four increase in the coherence quality factor relative to previous reports to device modifications aimed at reducing qubit dephasing from residual cavity photons. This simple device holds great promise as a robust and easily produced artificial quantum system whose intrinsic coherence properties are sufficient to allow tests of quantum error correction.
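The coherence quality factor referenced here is Q = 2π·f_q·T2. A quick sketch with an assumed qubit frequency (the abstract does not state one; 5 GHz is merely typical for 3D transmons) shows the reported T2* corresponds to Q of order a few million:

```python
import math

f_q = 5.0e9      # assumed qubit transition frequency in Hz (not given in the abstract)
T2_star = 95e-6  # reported dephasing time
T1 = 70e-6       # reported relaxation time

Q_coherence = 2 * math.pi * f_q * T2_star   # ~3e6 for these numbers
T2_limit = 2 * T1                           # T2 is bounded above by 2*T1 = 140 us
```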