Methods for Measuring Magnetic Flux Crosstalk Between Tunable Transmons

  1. Deanna M. Abrams,
  2. Nicolas Didier,
  3. Shane A. Caldwell,
  4. Blake R. Johnson,
  5. and Colm A. Ryan
In the gate model of quantum computing, a program is typically decomposed into a sequence of 1- and 2-qubit gates that are realized as control pulses acting on the system. A key requirement for a scalable control system is that the qubits are addressable – that control pulses act only on the targeted qubits. The presence of control crosstalk makes this addressability requirement difficult to meet. In order to provide metrics that can drive requirements for decreasing crosstalk, we present three measurements that directly quantify the DC and AC flux crosstalk present between tunable transmons, with sensitivities as fine as 0.001%. We develop the theory to connect AC flux crosstalk measures to the infidelity of a parametrically activated two-qubit gate. We employ quantum process tomography in the presence of crosstalk to provide an empirical study of the effects of crosstalk on two-qubit gate fidelity.
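As a sketch of what the DC measurement reports, a flux-crosstalk matrix can be assembled from the measured slopes of each qubit's frequency response to each bias line. The numpy fragment below illustrates only that bookkeeping; the slope values, units, and normalization convention are placeholders, not numbers or code from the paper.

```python
import numpy as np

# Rows: qubits; columns: flux-bias lines. Entries are hypothetical measured
# slopes d(qubit frequency)/d(line bias) in, say, MHz per mA.
slopes = np.array([
    [12.0,   0.006,  0.001],
    [ 0.004, 11.5,   0.009],
    [ 0.002,  0.007, 12.3 ],
])

# Normalize each row by its diagonal (intended) response: C[i, j] is then the
# fractional response of qubit i to line j, i.e. the crosstalk ratio.
crosstalk = slopes / np.diag(slopes)[:, None]

print(np.round(100 * crosstalk, 4))   # in percent; off-diagonals ~0.01-0.1%
```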

Assessing the Influence of Broadband Instrumentation Noise on Parametrically Modulated Superconducting Qubits

  1. E. Schuyler Fried,
  2. Prasahnt Sivarajah,
  3. Nicolas Didier,
  4. Eyob A. Sete,
  5. Marcus P. da Silva,
  6. Blake R. Johnson,
  7. and Colm A. Ryan
With superconducting transmon qubits — a promising platform for quantum information processing — two-qubit gates can be performed using AC signals to modulate a tunable transmon’s frequency via magnetic flux through its SQUID loop. However, frequency tunability introduces an additional dephasing mechanism from magnetic fluctuations. In this work, we experimentally study the contribution of instrumentation noise to flux instability and the resulting error rate of parametrically activated two-qubit gates. Specifically, we measure the qubit coherence time under flux modulation while injecting broadband noise through the flux control channel. We model the noise’s effect using a dephasing rate model that matches well to the measured rates, and use it to prescribe a noise floor required to achieve a desired two-qubit gate infidelity. Finally, we demonstrate that low-pass filtering the AC signal used to drive two-qubit gates between the first and second harmonic frequencies can reduce qubit sensitivity to flux noise at the AC sweet spot (ACSS), confirming an earlier theoretical prediction. The framework we present to determine the instrumentation noise floor required for high-fidelity entangling two-qubit gates should be extensible to other quantum information processing systems.
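For scale, the kind of noise-floor prescription described above can be sketched with the standard first-order white-noise dephasing relation, Γφ ≈ ½ (∂ω01/∂Φ)² S_Φ. This is a generic textbook estimate used as a stand-in for the paper's full model; all parameter values are placeholders.

```python
import numpy as np

# Standard first-order estimate for dephasing from white flux noise (a
# stand-in for the paper's model): Gamma_phi ~ 0.5 * (d omega01/d Phi)^2 * S_Phi.
df_dPhi = 1.0e9            # flux sensitivity, Hz per Phi0 (placeholder)
S_Phi   = (1.0e-9)**2      # flux-noise PSD, Phi0^2/Hz (placeholder noise floor)

gamma_phi = 0.5 * (2 * np.pi * df_dPhi)**2 * S_Phi    # dephasing rate, 1/s
t_gate = 200e-9                                       # two-qubit gate duration, s

# Crude contribution of dephasing during the gate to its infidelity.
infidelity = gamma_phi * t_gate
print(f"Gamma_phi ~ {gamma_phi:.2g} /s -> gate infidelity ~ {infidelity:.1e}")
```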

Experimental demonstration of Pauli-frame randomization on a superconducting qubit

  1. Matthew Ware,
  2. Guilhem Ribeill,
  3. Diego Ristè,
  4. Colm A. Ryan,
  5. Blake Johnson,
  6. and Marcus P. da Silva
The realization of quantum computing’s promise despite noisy, imperfect qubits relies, at its core, on the ability to scale cheaply through error correction and fault-tolerance. While fault-tolerance requires relatively mild assumptions about the nature of the errors, the overhead associated with coherent and non-Markovian errors can be orders of magnitude larger than the overhead associated with purely stochastic Markovian errors. One proposal, known as Pauli frame randomization, addresses this challenge by randomizing the circuits so that the errors are rendered incoherent, while the computation remains unaffected. Similarly, randomization can suppress couplings to slow degrees of freedom associated with non-Markovian evolution. Here we demonstrate the implementation of circuit randomization in a superconducting circuit system, exploiting a flexible programming and control infrastructure to achieve this with low effort. We use high-accuracy gate-set tomography to demonstrate that without randomization the natural errors experienced by our experiment have coherent character, and that with randomization these errors are rendered incoherent. We also demonstrate that randomization suppresses signatures of non-Markovian evolution to statistically insignificant levels. This demonstrates how noise models can be shaped into more benign forms for improved performance.
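The identity that makes Pauli frame randomization computation-preserving is small enough to verify directly: a random Pauli P before a Clifford gate G is undone by the compensating Pauli G P G† after it, so the net unitary is unchanged while errors between the frames are twirled. A minimal single-qubit numpy check, with H standing in for G (illustrative, not the experiment's implementation):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # a sample Clifford gate

for P in (I, X, Y, Z):
    # Frame correction: conjugating a Pauli by a Clifford yields another
    # Pauli (up to phase), so the correction is itself a cheap gate.
    P_corr = H @ P @ H.conj().T
    U = P_corr @ H @ P          # randomized circuit: P, then H, then correction
    assert np.allclose(U, H)    # identical to plain H; errors, however, are twirled
print("all four Pauli frames reproduce H exactly")
```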

Unsupervised Machine Learning on a Hybrid Quantum Computer

  1. J. S. Otterbach,
  2. R. Manenti,
  3. N. Alidoust,
  4. A. Bestwick,
  5. M. Block,
  6. B. Bloom,
  7. S. Caldwell,
  8. N. Didier,
  9. E. Schuyler Fried,
  10. S. Hong,
  11. P. Karalekas,
  12. C. B. Osborn,
  13. A. Papageorge,
  14. E. C. Peterson,
  15. G. Prawiroatmodjo,
  16. N. Rubin,
  17. Colm A. Ryan,
  18. D. Scarabelli,
  19. M. Scheer,
  20. E. A. Sete,
  21. P. Sivarajah,
  22. Robert S. Smith,
  23. A. Staley,
  24. N. Tezak,
  25. W. J. Zeng,
  26. A. Hudson,
  27. Blake R. Johnson,
  28. M. Reagor,
  29. M. P. da Silva,
  30. and C. Rigetti
Machine learning techniques have led to broad adoption of a statistical model of computing. The statistical distributions natively available on quantum processors are a superset of those available classically. Harnessing this attribute has the potential to accelerate or otherwise improve machine learning relative to purely classical performance. A key challenge toward that goal is learning to hybridize classical computing resources and traditional learning techniques with the emerging capabilities of general purpose quantum processors. Here, we demonstrate such hybridization by training a 19-qubit gate model processor to solve a clustering problem, a foundational challenge in unsupervised learning. We use the quantum approximate optimization algorithm in conjunction with a gradient-free Bayesian optimization to train the quantum machine. This quantum/classical hybrid algorithm shows robustness to realistic noise, and we find evidence that classical optimization can be used to train around both coherent and incoherent imperfections.
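A toy version of the pipeline conveys the structure: clustering is cast as MaxCut, a parameterized circuit's energy ⟨C⟩ is evaluated, and a gradient-free optimizer tunes the angles. The sketch below simulates p=1 QAOA on a 4-node graph with random search standing in for the Bayesian optimizer; the graph and parameters are invented for illustration.

```python
import numpy as np
from itertools import product

# Toy clustering instance: split 4 points into 2 groups by solving MaxCut
# on a weighted distance graph (a stand-in for the 19-qubit experiment).
edges = [(0, 1, 1.0), (1, 2, 0.8), (2, 3, 1.0), (3, 0, 0.7), (0, 2, 0.2)]
n = 4

# Diagonal of the MaxCut cost operator over all 2^n bitstrings.
bits = np.array(list(product([0, 1], repeat=n)))
cost = sum(w * (bits[:, i] != bits[:, j]) for i, j, w in edges).astype(float)

def qaoa_expectation(gamma, beta):
    """p=1 QAOA energy <C> via dense state-vector simulation."""
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)   # |+...+>
    psi = psi * np.exp(-1j * gamma * cost)            # cost layer (diagonal)
    for q in range(n):                                # mixer: Rx(2*beta) per qubit
        psi = psi.reshape(2**q, 2, -1)
        a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
        c, s = np.cos(beta), -1j * np.sin(beta)
        psi[:, 0, :], psi[:, 1, :] = c * a + s * b, s * a + c * b
        psi = psi.reshape(-1)
    return float(np.real(np.vdot(psi, cost * psi)))

# Gradient-free outer loop (random search as a stand-in for Bayesian optimization).
rng = np.random.default_rng(1)
best_val, best_ang = -np.inf, None
for _ in range(500):
    ang = rng.uniform(0, np.pi, 2)
    val = qaoa_expectation(*ang)
    if val > best_val:
        best_val, best_ang = val, ang
print("best angles:", np.round(best_ang, 3), "-> <C> =", round(best_val, 3))
```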

Efficient quantum microwave-to-optical conversion using electro-optic nanophotonic coupled-resonators

  1. Mohammad Soltani,
  2. Mian Zhang,
  3. Colm A. Ryan,
  4. Guilhem J. Ribeill,
  5. Cheng Wang,
  6. and Marko Loncar
We propose a low-noise, triply-resonant, electro-optic (EO) scheme for quantum microwave-to-optical conversion based on coupled nanophotonic resonators integrated with a superconducting qubit. Our optical system features a split resonance – a doublet – with a tunable frequency splitting that matches the microwave resonance frequency of the superconducting qubit. This is in contrast to conventional approaches where large optical resonators with free spectral range comparable to the qubit microwave frequency are used. In our system, EO mixing between the optical pump coupled into the low-frequency doublet mode and a resonant microwave photon results in an up-converted optical photon on resonance with the high-frequency doublet mode. Importantly, the down-conversion process, which is the source of noise, is suppressed in our scheme as the coupled-resonator system does not support modes at that frequency. Our device has at least an order of magnitude smaller footprint than the conventional devices, resulting in large overlap between optical and microwave fields and a large photon conversion rate (g/2π) in the range of ∼5-15 kHz. Owing to the large g factor and the doubly resonant nature of our device, microwave-to-optical frequency conversion can be achieved with optical pump powers in the range of tens of microwatts, even with moderate values for optical Q (∼10⁶) and microwave Q (∼10⁴). The performance metrics of our device, with substantial improvement over the previous EO-based approaches, promise a scalable quantum microwave-to-optical conversion and networking of superconducting processors via optical fiber communication.
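The quoted pump powers can be sanity-checked with standard cavity electro-optics relations (cooperativity C = 4g²n_pump/(κ_opt κ_mw)); the following arithmetic uses mid-range values from the abstract and generic textbook formulas, not expressions taken from the paper.

```python
import numpy as np

h = 6.626e-34                      # Planck constant, J*s
f_opt, Q_opt = 193e12, 1e6         # ~1550 nm pump; optical Q ~ 10^6
f_mw,  Q_mw  = 5e9,   1e4          # microwave mode; Q ~ 10^4
g = 10e3                           # vacuum EO coupling g/2pi, Hz (mid-range of 5-15 kHz)

kappa_opt = f_opt / Q_opt          # linewidths in Hz
kappa_mw  = f_mw / Q_mw

# Pump photon number for unit cooperativity, C = 4 g^2 n / (kappa_opt * kappa_mw).
n_pump = kappa_opt * kappa_mw / (4 * g**2)

# Rough input power for that circulating photon number in a near-critically
# coupled cavity: P ~ n * (h f) * kappa, with kappa in rad/s.
P_pump = n_pump * h * f_opt * (2 * np.pi * kappa_opt)
print(f"n_pump ~ {n_pump:.1e} photons, pump power ~ {P_pump * 1e6:.0f} uW")
```

Under these assumptions the estimate lands near 37 µW, consistent with the stated tens of microwatts.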

Hardware for Dynamic Quantum Computing

  1. Colm A. Ryan,
  2. Blake R. Johnson,
  3. Diego Ristè,
  4. Brian Donovan,
  5. and Thomas A. Ohki
We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal-processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data-taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow within a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of FPGAs to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.
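The readout path described above reduces, schematically, to demodulate, integrate, threshold. The toy model below mimics that chain on synthetic records; the frequencies, noise level, and threshold are invented and bear no relation to the actual gateware parameters.

```python
import numpy as np

fs, f_if, T = 1e9, 50e6, 2e-6                 # sample rate, IF frequency, record length
t = np.arange(0, T, 1 / fs)
kernel = np.exp(-1j * 2 * np.pi * f_if * t)   # demodulation/integration kernel

def record(state, rng):
    """One simulated shot: the qubit state flips the IF phase by pi."""
    sign = 1.0 if state == 0 else -1.0
    return sign * np.cos(2 * np.pi * f_if * t) + 0.5 * rng.standard_normal(t.size)

def assign(r):
    iq = np.sum(r * kernel) / t.size   # integrate against the kernel
    return int(iq.real < 0.0)          # threshold into a state assignment

rng = np.random.default_rng(2)
states = rng.integers(0, 2, 500)
accuracy = np.mean([assign(record(s, rng)) == s for s in states])
print(f"assignment accuracy: {accuracy:.1%}")
```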

Demonstration of quantum advantage in machine learning

  1. D. Ristè,
  2. Marcus P. da Silva,
  3. Colm A. Ryan,
  4. Andrew W. Cross,
  5. John A. Smolin,
  6. Jay M. Gambetta,
  7. Jerry M. Chow,
  8. and Blake R. Johnson
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, and the attainable quantum advantage is modest. Here we solve an oracle-based problem, known as learning parity with noise, using a five-qubit superconducting processor. Running classical and quantum algorithms on the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a quantum advantage already emerges in existing noisy systems.
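The classical side of the comparison is easy to sketch: against a noisy parity oracle, a classical learner must repeat queries and majority-vote, and the required repetitions grow with the error rate. A toy model (not the paper's experiment or its quantum counterpart):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p_noise = 5, 0.2                      # problem size, oracle error rate
secret = rng.integers(0, 2, n)

def oracle(x):
    """Noisy parity oracle: returns <x, secret> mod 2, flipped w.p. p_noise."""
    return (x @ secret + (rng.random() < p_noise)) % 2

def solve(queries_per_bit):
    """Query each basis vector repeatedly and majority-vote each secret bit."""
    guess = np.zeros(n, dtype=int)
    for i in range(n):
        e = np.eye(n, dtype=int)[i]
        votes = [oracle(e) for _ in range(queries_per_bit)]
        guess[i] = int(np.mean(votes) > 0.5)
    return np.array_equal(guess, secret)

for k in (1, 5, 25):
    success = np.mean([solve(k) for _ in range(100)])
    print(f"{k:>2} queries/bit -> success rate {success:.2f}")
```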

Demonstration of Robust Quantum Gate Tomography via Randomized Benchmarking

  1. Blake R. Johnson,
  2. Marcus P. da Silva,
  3. Colm A. Ryan,
  4. Shelby Kimmel,
  5. Jerry M. Chow,
  6. and Thomas A. Ohki
Typical quantum gate tomography protocols struggle with a self-consistency problem: the gate operation cannot be reconstructed without knowledge of the initial state and final measurement, but such knowledge cannot be obtained without well-characterized gates. A recently proposed technique, known as randomized benchmarking tomography (RBT), sidesteps this self-consistency problem by designing experiments to be insensitive to preparation and measurement imperfections. We implement this proposal in a superconducting qubit system, using a number of experimental improvements, including implementing each element of the Clifford group as a single 'atomic' pulse and using custom control hardware to enable protocols with large overheads. We show a robust reconstruction of several single-qubit quantum gates, including a unitary outside the Clifford group. We demonstrate that RBT yields physical gate reconstructions that are consistent with fidelities obtained by randomized benchmarking.
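RBT builds on the standard randomized benchmarking primitive: survival probability decays as A·pᵐ + B with sequence length m, and the decay constant p yields an average gate fidelity insensitive to preparation and measurement errors. A minimal fit on synthetic data (illustrative parameters only):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
m = np.array([2, 4, 8, 16, 32, 64, 128], dtype=float)   # sequence lengths
p_true = 0.995
survival = 0.5 * p_true**m + 0.5 + 0.01 * rng.standard_normal(m.size)

def model(m, A, p, B):
    return A * p**m + B

popt, _ = curve_fit(model, m, survival, p0=[0.5, 0.99, 0.5])
A_fit, p_fit, B_fit = popt

d = 2                                      # single-qubit Hilbert-space dimension
avg_fidelity = 1 - (1 - p_fit) * (d - 1) / d   # standard RB relation
print(f"decay p = {p_fit:.4f} -> average gate fidelity {avg_fidelity:.5f}")
```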

Implementing a strand of a scalable fault-tolerant quantum computing fabric

  1. Jerry M. Chow,
  2. Jay M. Gambetta,
  3. Easwar Magesan,
  4. Srikanth J. Srinivasan,
  5. Andrew W. Cross,
  6. David W. Abraham,
  7. Nicholas A. Masluk,
  8. B. R. Johnson,
  9. Colm A. Ryan,
  10. and M. Steffen
Quantum error correction (QEC) is an essential step towards realising scalable quantum computers. Theoretically, it is possible to achieve arbitrarily long protection of quantum information from corruption due to decoherence or imperfect controls, so long as the error rate is below a threshold value. The two-dimensional surface code (SC) is a fault-tolerant error correction protocol that has garnered considerable attention for actual physical implementations, due to its relatively high error threshold of ~1% and its restriction to planar lattices with nearest-neighbour interactions. Here we show a necessary element for SC error correction: high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. The experiment is performed on a sub-section of the SC lattice with three superconducting transmon qubits, in which two independent outer code qubits are joined to a central syndrome qubit via two linking bus resonators. With all-microwave high-fidelity single- and two-qubit nearest-neighbour entangling gates, we demonstrate entanglement distributed across the entire sub-section by generating a three-qubit Greenberger-Horne-Zeilinger (GHZ) state with fidelity ~94%. Then, via high-fidelity measurement of the syndrome qubit, we deterministically entangle the otherwise uncoupled outer code qubits, in either an even or odd parity Bell state, conditioned on the syndrome state. Finally, to fully characterize this parity readout, we develop a new measurement tomography protocol to obtain a fidelity metric (90% and 91%). Our results reveal a straightforward path for expanding superconducting circuits towards larger networks for the SC and eventually a primitive logical qubit implementation.
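The parity-detection step has a compact ideal-circuit description: two CNOTs copy the joint ZZ parity of the code qubits onto the syndrome qubit, whose measurement then projects the code qubits onto an even- or odd-parity Bell state. A noise-free state-vector check (a sketch, not the transmon implementation):

```python
import numpy as np

def cnot(control, target, n=3):
    """Permutation matrix for CNOT on an n-qubit register (qubit 0 = MSB)."""
    U = np.zeros((2**n, 2**n))
    for i in range(2**n):
        b = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        b[target] ^= b[control]
        j = sum(bit << (n - 1 - q) for q, bit in enumerate(b))
        U[j, i] = 1.0
    return U

plus = np.array([1, 1]) / np.sqrt(2)
psi = np.kron(np.kron(plus, plus), [1, 0])     # |+>|+>|0>, syndrome qubit last
psi = cnot(1, 2) @ cnot(0, 2) @ psi            # copy ZZ parity onto the syndrome

for outcome in (0, 1):
    branch = psi.reshape(4, 2)[:, outcome]     # syndrome measurement branch
    branch = branch / np.linalg.norm(branch)
    print(f"syndrome={outcome}: code-qubit state {np.round(branch, 3)}")
# syndrome=0 -> (|00>+|11>)/sqrt(2); syndrome=1 -> (|01>+|10>)/sqrt(2)
```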

Tomography via Correlation of Noisy Measurement Records

  1. Colm A. Ryan,
  2. Blake R. Johnson,
  3. Jay M. Gambetta,
  4. Jerry M. Chow,
  5. Marcus P. da Silva,
  6. Oliver E. Dial,
  7. and Thomas A. Ohki
We present methods and results of shot-by-shot correlation of noisy measurements to extract entangled state and process tomography in a superconducting qubit architecture. We show that averaging continuous values, rather than counting discrete thresholded values, is a valid tomographic strategy and is in fact the better choice in the low signal-to-noise regime. We show that the effort to measure N-body correlations from individual measurements scales exponentially with N, but with sufficient signal-to-noise the approach remains viable for few-body correlations. We provide a new protocol to optimally account for the transient behavior of pulsed measurements. Despite single-shot measurement fidelity that is less than perfect, we demonstrate appropriate processing to extract and verify entangled states and processes.
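The central claim, that averaging shot-by-shot products of continuous records outperforms thresholding at low signal-to-noise, can be illustrated with a Gaussian single-shot model. The numbers below are synthetic; the point is that, without rescaling by readout fidelity, the thresholded estimator is strongly biased toward zero while the averaged product is unbiased.

```python
import numpy as np

rng = np.random.default_rng(5)
shots, sigma = 20000, 3.0                 # low signal-to-noise: sigma >> 1

# Bell state (|00>+|11>)/sqrt(2): Z outcomes are perfectly correlated +/-1 pairs.
z = rng.choice([-1, 1], size=shots)
r1 = z + sigma * rng.standard_normal(shots)   # continuous record, qubit 1
r2 = z + sigma * rng.standard_normal(shots)   # continuous record, qubit 2

zz_continuous = np.mean(r1 * r2)                   # unbiased: E[r1*r2] = <ZZ> = 1
zz_threshold = np.mean(np.sign(r1) * np.sign(r2))  # biased at low SNR

print(f"<ZZ> from averaged products:  {zz_continuous:+.3f}")
print(f"<ZZ> from thresholded shots:  {zz_threshold:+.3f}")
```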