A foundational assumption of quantum error correction theory is that quantum gates can be scaled to large processors without exceeding the error threshold for fault tolerance. Two major challenges that could become fundamental roadblocks are manufacturing high-performance quantum hardware and engineering a control system that can reach its performance limits. The control challenge of scaling quantum gates from small to large processors without degrading performance often maps to non-convex, high-constraint, and time-dependent control optimization over an exponentially expanding configuration space. Here we report on a control optimization strategy that can scalably overcome the complexity of such problems. We demonstrate it by choreographing the frequency trajectories of 68 frequency-tunable superconducting qubits to execute single- and two-qubit gates while mitigating computational errors. When combined with a comprehensive model of physical errors across our processor, the strategy suppresses physical error rates by ∼3.7× compared with the case of no optimization. Furthermore, it is projected to achieve a similar performance advantage on a distance-23 surface code logical qubit with 1057 physical qubits. Our control optimization strategy solves a generic scaling challenge in a way that can be adapted to other quantum algorithms, operations, and computing architectures.
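To make the scaling problem concrete, the sketch below sets up a toy version of such a frequency-placement optimization: idle frequencies on a small grid of coupled qubits are chosen to avoid "collision" penalties, with random restarts to cope with the non-convex landscape. The cost model, grid size, and optimizer are illustrative assumptions, not the paper's calibrated error model; the final lines merely verify the abstract's qubit count for a distance-23 rotated surface code (2d² − 1 = 1057).

```python
# Minimal sketch of frequency-placement optimization for tunable qubits.
# The cost model and optimizer here are illustrative assumptions, not the
# paper's actual error model built from measured physical error mechanisms.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

N_SIDE = 4                       # 4x4 toy grid of tunable qubits
F_MIN, F_MAX = 6.0, 7.0          # allowed idle-frequency band (GHz)
COLLISION_WIDTH = 0.1            # penalty width for near-degenerate neighbors (GHz)

def neighbors(n):
    """Nearest-neighbor pairs on an n x n square lattice."""
    pairs = []
    for r in range(n):
        for c in range(n):
            i = r * n + c
            if c + 1 < n:
                pairs.append((i, i + 1))
            if r + 1 < n:
                pairs.append((i, i + n))
    return pairs

PAIRS = neighbors(N_SIDE)

def cost(freqs):
    """Toy error cost: Gaussian 'collision' penalty when coupled qubits
    idle too close in frequency, plus a weak pull toward band center."""
    c = 0.0
    for i, j in PAIRS:
        c += np.exp(-((freqs[i] - freqs[j]) / COLLISION_WIDTH) ** 2)
    c += 0.01 * np.sum((freqs - 6.5) ** 2)   # stay near band center
    return c

# Non-convex landscape: restart a local optimizer from random seeds and
# keep the best solution, a common simple strategy for such problems.
best = None
for _ in range(20):
    x0 = rng.uniform(F_MIN, F_MAX, N_SIDE**2)
    res = minimize(cost, x0, bounds=[(F_MIN, F_MAX)] * N_SIDE**2)
    if best is None or res.fun < best.fun:
        best = res

print(f"best collision cost: {best.fun:.4f}")

# Sanity check of the abstract's qubit count: a distance-d rotated surface
# code uses d^2 data qubits plus d^2 - 1 measure qubits.
d = 23
assert 2 * d**2 - 1 == 1057
```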
As superconducting quantum processors increase in complexity, techniques to overcome constraints on frequency crowding are needed. The recently developed technique of laser annealing provides an effective post-fabrication method to adjust the frequency of superconducting qubits. Here, we present an automated laser-annealing apparatus based on conventional microscopy components and demonstrate preservation of highly coherent transmons. In one case, we observe a two-fold increase in coherence after laser annealing and perform noise spectroscopy on this qubit to investigate the change in defect features, in particular two-level-system defects. Finally, we present a local heating model and demonstrate aging stability for laser annealing on the wafer scale. Our work constitutes an important first step towards both understanding the underlying physical mechanism and scaling up laser annealing of superconducting qubits.
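As a rough illustration of the kind of noise-spectroscopy analysis mentioned above, the sketch below fits a Lorentzian-plus-white-noise model to a synthetic frequency-noise spectrum, a standard signature of a single dominant two-level-system defect. The model, parameters, and data are assumptions for illustration; they are not the paper's measured spectra.

```python
# Illustrative sketch only: fitting a Lorentzian-plus-white-noise model to a
# frequency-noise power spectral density, a common way to characterize a
# coupled two-level-system (TLS) defect. The data here are synthetic; the
# paper's actual spectroscopy and model may differ.
import numpy as np
from scipy.optimize import curve_fit

def psd_model(f, s_tls, f_c, s_white):
    """Lorentzian TLS feature with corner frequency f_c plus a white floor."""
    return s_tls / (1.0 + (f / f_c) ** 2) + s_white

# Synthetic PSD: TLS feature with a 1 kHz corner over a white background.
f = np.logspace(1, 5, 200)                       # 10 Hz .. 100 kHz
true = psd_model(f, s_tls=1e-3, f_c=1e3, s_white=1e-6)
rng = np.random.default_rng(1)
data = true * rng.normal(1.0, 0.1, f.size)       # 10% multiplicative noise

popt, _ = curve_fit(psd_model, f, data, p0=[1e-4, 5e2, 1e-7])
print("fitted [S_TLS, f_c (Hz), S_white]:", popt)
```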
We propose a machine learning algorithm for continuous quantum error correction that is based on the use of a recurrent neural network to identify bit-flip errors from continuous noisy syndrome measurements. The algorithm is designed to operate on measurement signals deviating from the ideal behavior, in which the mean value corresponds to a code syndrome value and the measurement has white noise. We analyze continuous measurements taken from a superconducting architecture using three transmon qubits to identify three significant practical examples of non-ideal behavior, namely auto-correlation at short temporal lags, transient syndrome dynamics after each bit-flip, and drift in the steady-state syndrome values over the course of many experiments. Based on these real-world imperfections, we generate synthetic measurement signals from which to train the recurrent neural network, and then test its proficiency when implementing active error correction, comparing it with a traditional double-threshold scheme and a discrete Bayesian classifier. The results show that our machine learning protocol is able to outperform the double-threshold protocol across all tests, achieving a final state fidelity comparable to that of the discrete Bayesian classifier.
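A minimal sketch of this pipeline, under assumed signal and network parameters, might look as follows: synthetic syndrome traces carrying the three listed imperfections (auto-correlated noise, a post-flip transient, and run-to-run drift) are generated, and a small recurrent network is trained to flag the bit-flip state at each time step. The architecture and hyperparameters are illustrative choices, not the paper's.

```python
# Minimal sketch of the described pipeline: generate synthetic continuous
# syndrome traces with the three listed imperfections, then train a
# recurrent network to flag the bit-flip state at each time step.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
T = 200  # time steps per trace

def synth_trace():
    """One syndrome trace and its per-step bit-flip labels."""
    flip_t = rng.integers(T // 4, 3 * T // 4)          # single flip per trace
    labels = (np.arange(T) >= flip_t).astype(np.float32)
    mean = 1.0 - 2.0 * labels                          # syndrome +1 -> -1
    # transient syndrome dynamics just after the flip
    mean[flip_t:] += np.exp(-np.arange(T - flip_t) / 5.0)
    mean += rng.normal(0, 0.1)                         # run-to-run drift
    noise = np.zeros(T)                                # AR(1) correlated noise
    for t in range(1, T):
        noise[t] = 0.5 * noise[t - 1] + rng.normal(0, 0.4)
    return (mean + noise).astype(np.float32), labels

X, Y = zip(*(synth_trace() for _ in range(512)))
X = torch.tensor(np.stack(X)).unsqueeze(-1)            # (batch, T, 1)
Y = torch.tensor(np.stack(Y))                          # (batch, T)

class SyndromeRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        h, _ = self.gru(x)
        return self.head(h).squeeze(-1)                # per-step flip logit

model = SyndromeRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```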
The storage and processing of quantum information are susceptible to external noise, resulting in computational errors that are inherently continuous. A powerful method to suppress these effects is quantum error correction. Typically, quantum error correction is executed in discrete rounds, where errors are digitized and detected by projective multi-qubit parity measurements. These stabilizer measurements are traditionally realized with entangling gates and projective measurement on ancillary qubits to complete a round of error correction. However, their gate structure makes them vulnerable to errors occurring at specific times in the code and to errors on the ancilla qubits. Here we use direct parity measurements to implement a continuous quantum bit-flip correction code in a resource-efficient manner, eliminating entangling gates, ancilla qubits, and their associated errors. The continuous measurements are monitored by an FPGA controller that actively corrects errors as they are detected. Using this method, we achieve an average bit-flip detection efficiency of up to 91%. Furthermore, we use the protocol to increase the relaxation time of the protected logical qubit by a factor of 2.7 over the relaxation times of its constituent bare qubits. Our results showcase resource-efficient stabilizer measurements in a multi-qubit architecture and demonstrate how continuous error correction codes can address challenges in realizing a fault-tolerant system.
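For intuition, here is a schematic software stand-in for the feedback loop described (in the experiment this logic runs in real time on the FPGA): two continuous parity records for a three-qubit bit-flip code are low-pass filtered, and a threshold crossing triggers the corresponding X correction. The filter, thresholds, and signal model are illustrative assumptions.

```python
# Schematic software version of the feedback loop: filter two continuous
# parity signals (Z1Z2, Z2Z3) and apply an X correction when a filtered
# signal crosses a threshold. All parameters here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, tau = 2000, 50                        # samples; exponential filter time

# Synthetic records: parity Z1Z2 flips at t=800 (a flip on qubit 1),
# buried in white measurement noise.
parity = {"Z1Z2": np.ones(T), "Z2Z3": np.ones(T)}
parity["Z1Z2"][800:] = -1.0
records = {k: v + rng.normal(0, 2.0, T) for k, v in parity.items()}

def exp_filter(x, tau):
    """Causal exponential moving average, the kind of filter an FPGA
    can apply sample by sample."""
    y, alpha = np.zeros_like(x), 1.0 / tau
    for t in range(1, len(x)):
        y[t] = (1 - alpha) * y[t - 1] + alpha * x[t]
    return y

filt = {k: exp_filter(v, tau) for k, v in records.items()}
THRESH = -0.5
for t in range(T):
    s12, s23 = filt["Z1Z2"][t] < THRESH, filt["Z2Z3"][t] < THRESH
    if s12 or s23:
        # Decode which data qubit flipped from the syndrome pattern.
        qubit = 1 if (s12 and not s23) else (3 if (s23 and not s12) else 2)
        print(f"t={t}: syndrome fired, apply X on qubit {qubit}")
        break
```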
Much of modern metrology and communication technology encodes information in electromagnetic waves, typically as an amplitude or phase. While current hardware can perform near-ideal measurements of photon number or field amplitude, to date no device exists that can, even in principle, perform an ideal phase measurement. In this work, we implement a single-shot canonical phase measurement on a one-photon wave packet, which surpasses the current standard of heterodyne detection and is optimal for single-shot phase estimation. By applying quantum feedback to a Josephson parametric amplifier, our system adaptively changes its measurement basis during photon arrival and allows us to validate the detector’s performance by tracking the quantum state of the photon source. These results provide an important capability for optical quantum computing and demonstrate that quantum feedback can both enhance the precision of a detector and enable it to measure new classes of physical observables.
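As a toy analogue of the adaptive idea, the sketch below runs Bayesian phase estimation in which each new measurement basis is steered by the current posterior, echoing how the feedback loop rotates the measurement basis during photon arrival. The discrete qubit-readout model used here is a stand-in assumption, not the paper's continuous parametric-amplifier record.

```python
# Toy illustration of adaptive phase measurement: Bayesian estimation where
# each measurement basis is chosen from the current posterior. The model
# (projective readout of (|0> + e^{i phi}|1>)/sqrt(2) along a rotated
# equatorial axis) is a stand-in assumption, not the paper's JPA feedback.
import numpy as np

rng = np.random.default_rng(0)
TRUE_PHI = 1.2
grid = np.linspace(-np.pi, np.pi, 1000)     # posterior support over phi
posterior = np.ones_like(grid) / grid.size

for shot in range(30):
    # Feedback: measure along the axis offset by pi/2 from the current
    # estimate, where the outcome is most sensitive to the phase.
    est = np.angle(np.sum(posterior * np.exp(1j * grid)))
    theta = est + np.pi / 2
    p1 = 0.5 * (1 + np.cos(TRUE_PHI - theta))   # Born rule for outcome +1
    outcome = rng.random() < p1
    like = 0.5 * (1 + np.cos(grid - theta))
    posterior *= like if outcome else (1 - like)
    posterior /= posterior.sum()

est = np.angle(np.sum(posterior * np.exp(1j * grid)))
print(f"true phase {TRUE_PHI:.3f}, estimate {est:.3f}")
```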