Designing quantum systems with the measurement speed and accuracy needed for quantum error correction using superconducting qubits requires iterative design and test informed by accurate models and characterization tools. We introduce a single protocol, with few prerequisite calibrations, which measures the dispersive shift, resonator linewidth, and drive power used in the dispersive readout of superconducting qubits. We find that the resonator linewidth is poorly controlled, with a factor of 2 between the maximum and minimum measured values, and is likely to require focused attention in future quantum error correction experiments. We also introduce a protocol for measuring the readout system efficiency using the same power levels as are used in typical qubit readout, and without the need to measure the qubit coherence. We routinely run these protocols on chips with tens of qubits, driven by automation software with little human interaction. Using the extracted system parameters, we find that a model based on those parameters predicts the readout signal-to-noise ratio to within 10% over a device with 54 qubits.
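As a rough illustration of how such a parameter-based prediction can work, the sketch below (a minimal, illustrative calculation, not the paper's model) estimates the readout signal-to-noise ratio from the dispersive shift, resonator linewidth, mean photon number, readout efficiency, and integration time using the standard steady-state dispersive-readout expressions; the exact SNR prefactor convention varies between references.

```python
import numpy as np

def dispersive_snr(chi, kappa, n_bar, eta, tau):
    """Estimate readout SNR from dispersive-readout system parameters.

    chi   : dispersive shift (rad/s), half the ground/excited resonator detuning
    kappa : resonator linewidth (rad/s)
    n_bar : mean intra-resonator photon number during readout
    eta   : readout-chain quantum efficiency (0..1)
    tau   : integration time (s)

    Uses the steady-state coherent-state separation
    |alpha_e - alpha_g| = 2*chi*sqrt(n_bar) / sqrt(kappa**2/4 + chi**2)
    and SNR ~ |alpha_e - alpha_g| * sqrt(eta * kappa * tau).
    Prefactor conventions differ between references; illustrative only.
    """
    separation = 2 * chi * np.sqrt(n_bar) / np.sqrt(kappa**2 / 4 + chi**2)
    return separation * np.sqrt(eta * kappa * tau)

# Example: chi/2pi = 1 MHz, kappa/2pi = 5 MHz, 10 photons, eta = 0.5, 500 ns
print(dispersive_snr(2*np.pi*1e6, 2*np.pi*5e6, 10, 0.5, 500e-9))
```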
A foundational assumption of quantum error correction theory is that quantum gates can be scaled to large processors without exceeding the error threshold for fault tolerance. Two major challenges that could become fundamental roadblocks are manufacturing high-performance quantum hardware and engineering a control system that can reach its performance limits. The control challenge of scaling quantum gates from small to large processors without degrading performance often maps to non-convex, high-constraint, and time-dependent control optimization over an exponentially expanding configuration space. Here we report on a control optimization strategy that can scalably overcome the complexity of such problems. We demonstrate it by choreographing the frequency trajectories of 68 frequency-tunable superconducting qubits to execute single- and two-qubit gates while mitigating computational errors. When combined with a comprehensive model of physical errors across our processor, the strategy suppresses physical error rates by ∼3.7× compared with the case of no optimization. Furthermore, it is projected to achieve a similar performance advantage on a distance-23 surface code logical qubit with 1057 physical qubits. Our control optimization strategy solves a generic scaling challenge in a way that can be adapted to other quantum algorithms, operations, and computing architectures.
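The kind of frequency-assignment problem described above can be illustrated with a toy example. The sketch below is not the authors' optimizer; it is a minimal greedy coordinate-descent search over invented idle frequencies that penalizes proximity to defect frequencies and small detunings between coupled neighbors, just to show the structure of the cost landscape being optimized.

```python
import random

# Toy frequency-assignment problem (not the authors' optimizer): choose an
# idle frequency for each qubit from a discrete set, penalizing proximity to
# known defect frequencies and small detunings between coupled neighbors.
# All numbers are invented for illustration.
QUBITS = range(6)
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
CANDIDATES = [5.8, 5.9, 6.0, 6.1, 6.2]      # GHz
DEFECTS = {0: [5.9], 3: [6.1]}              # TLS-like defect frequencies

def cost(assignment):
    c = 0.0
    for q, f in assignment.items():
        c += sum(1.0 / (abs(f - d) + 1e-3) for d in DEFECTS.get(q, []))
    for a, b in EDGES:
        c += 1.0 / (abs(assignment[a] - assignment[b]) + 1e-3)
    return c

# Greedy coordinate descent: repeatedly re-optimize one qubit at a time.
random.seed(0)
assignment = {q: random.choice(CANDIDATES) for q in QUBITS}
for _ in range(20):
    for q in QUBITS:
        assignment[q] = min(CANDIDATES, key=lambda f: cost({**assignment, q: f}))
print(assignment, round(cost(assignment), 2))
```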
Measurement is an essential component of quantum algorithms, and for superconducting qubits it is often the most error prone. Here, we demonstrate model-based readout optimization achieving low measurement errors while avoiding detrimental side effects. For simultaneous and mid-circuit measurements across 17 qubits, we observe 1.5% error per qubit with a 500 ns end-to-end duration and minimal excess reset error from residual resonator photons. We also suppress measurement-induced state transitions, achieving a leakage rate limited by natural heating. This technique can scale to hundreds of qubits and be used to enhance the performance of error-correcting codes and near-term applications.
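A toy version of the trade-off behind model-based readout optimization is sketched below (an assumed error model with our own parameter names, not the authors' model): total measurement error combines a discrimination error from finite SNR, qubit decay during the readout window, and a crude penalty once the photon number exceeds an assumed measurement-induced-transition threshold.

```python
import numpy as np
from math import erfc

def total_error(n_bar, tau, chi, kappa, eta, T1, n_crit):
    """Toy readout error model: discrimination + decay + leakage penalty."""
    sep = 2 * chi * np.sqrt(n_bar) / np.sqrt(kappa**2 / 4 + chi**2)
    snr = sep * np.sqrt(eta * kappa * tau)
    e_sep = 0.5 * erfc(snr / 2)                 # Gaussian-overlap error
    e_t1 = 1 - np.exp(-tau / T1)                # qubit decay during readout
    e_mist = 0.0 if n_bar < n_crit else 0.05    # crude transition penalty
    return e_sep + e_t1 + e_mist

# Scan photon number and duration, then pick the lowest modeled error.
n_bars = np.linspace(1, 40, 40)
taus = np.linspace(100e-9, 1e-6, 40)
errs = np.array([[total_error(n, t, 2*np.pi*1e6, 2*np.pi*5e6, 0.5, 20e-6, 25)
                  for t in taus] for n in n_bars])
i, j = np.unravel_index(np.argmin(errs), errs.shape)
print(f"best n_bar = {n_bars[i]:.1f}, tau = {taus[j]*1e9:.0f} ns")
```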
Superconducting qubits typically use a dispersive readout scheme, where a resonator is coupled to a qubit such that its frequency is qubit-state dependent. Measurement is performed by driving the resonator, where the transmitted resonator field yields information about the resonator frequency and thus the qubit state. Ideally, we could use arbitrarily strong resonator drives to achieve a target signal-to-noise ratio in the shortest possible time. However, experiments have shown that when the average resonator photon number exceeds a certain threshold, the qubit is excited out of its computational subspace, which we refer to as a measurement-induced state transition. These transitions degrade readout fidelity, and constitute leakage, which precludes further operation of the qubit in, for example, error correction. Here we study these transitions using a transmon qubit by experimentally measuring their dependence on qubit frequency, average photon number, and qubit state, in the regime where the resonator frequency is lower than the qubit frequency. We observe signatures of resonant transitions between levels in the coupled qubit-resonator system that exhibit noisy behavior when measured repeatedly in time. We provide a semi-classical model of these transitions based on the rotating wave approximation and use it to predict the onset of state transitions in our experiments. Our results suggest the transmon is excited to levels near the top of its cosine potential following a state transition, where the charge dispersion of higher transmon levels explains the observed noisy behavior of state transitions. Moreover, occupation in these higher energy levels poses a major challenge for fast qubit reset.
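The role of charge dispersion in the observed noisy behavior can be seen from the standard transmon Hamiltonian. The sketch below (illustrative parameters, not the device values) diagonalizes H = 4E_C(n − n_g)² − E_J cos φ in the charge basis and shows that the charge dispersion of higher levels grows by orders of magnitude, consistent with the noisy, slowly fluctuating transition features described above.

```python
import numpy as np

def transmon_levels(EJ, EC, ng, ncut=40):
    """Eigenenergies of H = 4*EC*(n - ng)^2 - (EJ/2)(|n><n+1| + h.c.)
    in the charge basis, truncated at |n| <= ncut. Energies are in the
    same units as EJ and EC (here GHz)."""
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4 * EC * (n - ng) ** 2)
    H -= EJ / 2 * (np.eye(len(n), k=1) + np.eye(len(n), k=-1))
    return np.linalg.eigvalsh(H)

EJ, EC = 15.0, 0.2   # illustrative transmon parameters in GHz
for k in range(10):
    # Charge dispersion: energy difference between the ng = 0.5 and ng = 0
    # charge-offset extremes for level k.
    disp = abs(transmon_levels(EJ, EC, 0.5)[k] - transmon_levels(EJ, EC, 0.0)[k])
    print(f"level {k}: charge dispersion ~ {disp:.3g} GHz")
```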
We demonstrate a high dynamic range Josephson parametric amplifier (JPA) in which the active nonlinear element is implemented using an array of rf-SQUIDs. The device is matched to the 50 Ω environment with a Klopfenstein-taper impedance transformer and achieves a bandwidth of 250-300 MHz, with input saturation powers up to -95 dBm at 20 dB gain. A 54-qubit Sycamore processor was used to benchmark these devices, providing a calibration for readout power, an estimate of amplifier added noise, and a platform for comparison against standard impedance matched parametric amplifiers with a single dc-SQUID. We find that the high power rf-SQUID array design has no adverse effect on system noise, readout fidelity, or qubit dephasing, and we estimate an upper bound on amplifier added noise at 1.6 times the quantum limit. Lastly, amplifiers with this design show no degradation in readout fidelity due to gain compression, which can occur in multi-tone multiplexed readout with traditional JPAs.
Scalable quantum computing can become a reality with error correction, provided coherent qubits can be constructed in large arrays. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, energetic impacts from cosmic rays and latent radioactivity violate both of these assumptions. An impinging particle ionizes the substrate, radiating high energy phonons that induce a burst of quasiparticles, destroying qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices, but lacking a measurement technique able to resolve a single event in detail, the effect on large scale algorithms and error correction in particular remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales as in error correction, exposing the event’s evolution in time and spread in space. Here, we directly observe high-energy rays impacting a large-scale quantum processor. We introduce a rapid space and time-multiplexed measurement method and identify large bursts of quasiparticles that simultaneously and severely limit the energy coherence of all qubits, causing chip-wide failure. We track the events from their initial localized impact to high error rates across the chip. Our results provide direct insights into the scale and dynamics of these damaging error bursts in large-scale devices, and highlight the necessity of mitigation to enable quantum computing to scale.
Two-level-system (TLS) defects in amorphous dielectrics are a major source of noise and decoherence in solid-state qubits. Gate-dependent non-Markovian errors caused by TLS-qubit coupling are detrimental to fault-tolerant quantum computation and have not been rigorously treated in the existing literature. In this work, we derive the non-Markovian dynamics between TLS and qubits during a SWAP-like two-qubit gate and the associated average gate fidelity for frequency-tunable transmon qubits. This gate-dependent error model facilitates using qubits as sensors to simultaneously learn practical imperfections in both the qubit’s environment and control waveforms. We combine a state-of-the-art machine learning algorithm with Moiré-enhanced swap spectroscopy to achieve robust learning using noisy experimental data. Deep neural networks are used to represent the functional map from experimental data to TLS parameters and are trained through an evolutionary algorithm. Our method achieves the highest learning efficiency and robustness against experimental imperfections to date, representing an important step towards in situ quantum control optimization over environmental and control defects.
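As a minimal illustration of training a network through an evolutionary algorithm (not the authors' architecture, data, or search strategy), the sketch below fits a tiny numpy MLP to a synthetic map from a spectroscopy-like trace to two TLS parameters using a simple mutate-and-select loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: map a 16-point "spectroscopy trace" to two TLS
# parameters (a frequency and a coupling). Real data would come from
# experiment; this Lorentzian form is purely illustrative.
def fake_trace(params):
    f_tls, g = params
    x = np.linspace(0, 1, 16)
    return g / ((x - f_tls) ** 2 + g ** 2)

train_params = rng.uniform(0.2, 0.8, size=(200, 2))
train_traces = np.array([fake_trace(p) for p in train_params])

def mlp(weights, x):
    W1, b1, W2, b2 = weights
    return np.tanh(x @ W1 + b1) @ W2 + b2

def init_weights():
    return [rng.normal(0, 0.1, (16, 32)), np.zeros(32),
            rng.normal(0, 0.1, (32, 2)), np.zeros(2)]

def loss(weights):
    pred = mlp(weights, train_traces)
    return np.mean((pred - train_params) ** 2)

# Simple (1 + lambda) evolutionary strategy: mutate the weights with Gaussian
# noise and keep the best candidate each generation.
best = init_weights()
best_loss = loss(best)
for gen in range(200):
    for _ in range(8):
        cand = [w + rng.normal(0, 0.02, w.shape) for w in best]
        l = loss(cand)
        if l < best_loss:
            best, best_loss = cand, l
print(f"final training loss: {best_loss:.4f}")
```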
We develop a high-speed on-chip flux measurement using a capacitively shunted SQUID as an embedded cryogenic transducer and apply this technique to the qualification of a near-term scalable printed circuit board (PCB) package for frequency-tunable superconducting qubits. The transducer is a flux-tunable LC resonator where applied flux changes the resonant frequency. We apply a microwave tone to probe this frequency and use a time-domain homodyne measurement to extract the reflected phase as a function of flux applied to the SQUID. The transducer response bandwidth is 2.6 GHz with a maximum gain of 1200°/Φ0, allowing us to study the settling amplitude to better than 0.1%. We use this technique to characterize on-chip bias line routing and a variety of PCB-based packages and demonstrate that step response settling can vary by orders of magnitude in both settling time and amplitude depending on whether normal or superconducting materials are used. By plating copper PCBs in aluminum we measure a step response consistent with the packaging used for existing high-fidelity qubits.
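A minimal sketch of how the quoted transducer gain can be used in practice (our own helper names and a synthetic step response, not the paper's analysis code): convert the reflected homodyne phase to flux and find the time after which the response stays within 0.1% of the step amplitude.

```python
import numpy as np

GAIN_DEG_PER_PHI0 = 1200.0   # transducer gain quoted above

def phase_to_flux(phase_deg):
    """Convert reflected homodyne phase (degrees) to flux in units of Phi_0,
    assuming operation on the linear part of the transducer response."""
    return np.asarray(phase_deg) / GAIN_DEG_PER_PHI0

def settling_time(t, phase_deg, tol=1e-3):
    """Last time the flux is outside `tol` (as a fraction of the step
    amplitude) of its final value."""
    flux = phase_to_flux(phase_deg)
    final, step = flux[-1], abs(flux[-1] - flux[0])
    outside = np.nonzero(np.abs(flux - final) > tol * step)[0]
    return t[outside[-1]] if outside.size else t[0]

# Synthetic step response with a fast edge and a slow 1% settling tail.
t = np.linspace(0, 2e-6, 2000)
phase = 120 * (1 - 0.99 * np.exp(-t / 50e-9) - 0.01 * np.exp(-t / 500e-9))
print(f"settles to 0.1% of the step after {settling_time(t, phase)*1e9:.0f} ns")
```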
Complex integrated circuits require multiple wiring layers. In complementary metal-oxide-semiconductor (CMOS) processing, these layers are robustly separated by amorphous dielectrics. These dielectrics would dominate energy loss in superconducting integrated circuits. Here we demonstrate a procedure that capitalizes on the structural benefits of inter-layer dielectrics during fabrication and mitigates the added loss. We separate and support multiple wiring layers throughout fabrication using SiO2 scaffolding, then remove it post-fabrication. This technique is compatible with foundry-level processing and can be generalized to make many different forms of low-loss multi-layer wiring. We use this technique to create freestanding aluminum vacuum-gap crossovers (airbridges). We characterize the added capacitive loss of these airbridges by connecting ground planes over microwave-frequency λ/4 coplanar waveguide resonators and measuring resonator loss. We measure a low-power resonator loss of ∼3.9×10⁻⁸ per bridge, which is 100 times lower than that of dielectric-supported bridges. We further characterize these airbridges as crossovers, control line jumpers, and as part of a coupling network in gmon and fluxmon qubits. We measure qubit characteristic lifetimes (T1’s) in excess of 30 μs in gmon devices.
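The per-bridge loss extraction can be illustrated with a short fit (the Qi values below are made up, chosen to land near the quoted ∼3.9×10⁻⁸ figure): measure the internal quality factor for resonators spanning different numbers of bridges and fit 1/Qi versus bridge count; the slope is the added loss per bridge.

```python
import numpy as np

# Made-up measurements: low-power internal quality factors for resonators
# carrying different numbers of airbridges.
n_bridges = np.array([0, 5, 10, 20, 40])
Qi = np.array([2.0e6, 1.45e6, 1.12e6, 7.8e5, 4.9e5])

# 1/Qi is (to first order) linear in bridge count: slope = loss per bridge,
# intercept = background resonator loss.
loss_per_bridge, background = np.polyfit(n_bridges, 1 / Qi, 1)
print(f"loss per bridge ~ {loss_per_bridge:.2e}, background ~ {background:.2e}")
```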
We present a fabrication process for fully superconducting interconnects compatible with superconducting qubit technology. These interconnects allow for the 3D integration of quantum circuits without introducing lossy amorphous dielectrics. They are composed of indium bumps several microns tall, separated from an aluminum base layer by titanium nitride, which serves as a diffusion barrier. We measure the whole structure to be superconducting (transition temperature of 1.1 K), limited by the aluminum. These interconnects have an average critical current of 26.8 mA, and mechanical shear and thermal cycle testing indicate that these devices are mechanically robust. Our process provides a method that reliably yields superconducting interconnects suitable for use with superconducting qubits.