A foundational assumption of quantum error correction theory is that quantum gates can be scaled to large processors without exceeding the error threshold for fault tolerance. Two major challenges that could become fundamental roadblocks are manufacturing high-performance quantum hardware and engineering a control system that can reach its performance limits. The control challenge of scaling quantum gates from small to large processors without degrading performance often maps to non-convex, high-constraint, and time-dependent control optimization over an exponentially expanding configuration space. Here we report on a control optimization strategy that can scalably overcome the complexity of such problems. We demonstrate it by choreographing the frequency trajectories of 68 frequency-tunable superconducting qubits to execute single- and two-qubit gates while mitigating computational errors. When combined with a comprehensive model of physical errors across our processor, the strategy suppresses physical error rates by ∼3.7× compared with the case of no optimization. Furthermore, it is projected to achieve a similar performance advantage on a distance-23 surface code logical qubit with 1057 physical qubits. Our control optimization strategy solves a generic scaling challenge in a way that can be adapted to other quantum algorithms, operations, and computing architectures.
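As a quick consistency check of the qubit count quoted above, the distance-23 figure follows from the standard rotated surface-code layout of d² data qubits plus d²−1 measure qubits. The short sketch below is a minimal illustration assuming that layout; it simply reproduces the 1057 number.

```python
# Consistency check of the distance-23 figure, assuming the standard rotated
# surface-code layout: d^2 data qubits plus d^2 - 1 measure qubits.

def surface_code_qubits(distance: int) -> int:
    """Total physical qubits in a rotated surface code of the given distance."""
    data = distance ** 2
    measure = distance ** 2 - 1
    return data + measure

assert surface_code_qubits(23) == 1057  # matches the distance-23 logical qubit above
```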
Scalable quantum computing can become a reality with error correction, provided coherent qubits can be constructed in large arrays. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, energetic impacts from cosmic rays and latent radioactivity violate both of these assumptions. An impinging particle ionizes the substrate, radiating high-energy phonons that induce a burst of quasiparticles, destroying qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices, but without a measurement technique able to resolve a single event in detail, its effect on large-scale algorithms, and on error correction in particular, remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales as in error correction, exposing the event’s evolution in time and spread in space. Here, we directly observe high-energy rays impacting a large-scale quantum processor. We introduce a rapid space- and time-multiplexed measurement method and identify large bursts of quasiparticles that simultaneously and severely limit the energy coherence of all qubits, causing chip-wide failure. We track the events from their initial localised impact to high error rates across the chip. Our results provide direct insights into the scale and dynamics of these damaging error bursts in large-scale devices, and highlight the necessity of mitigation to enable quantum computing to scale.
Two-level-system (TLS) defects in amorphous dielectrics are a major source of noise and decoherence in solid-state qubits. Gate-dependent non-Markovian errors caused by TLS-qubit coupling are detrimental to fault-tolerant quantum computation and have not been rigorously treated in the existing literature. In this work, we derive the non-Markovian dynamics between TLS and qubits during a SWAP-like two-qubit gate and the associated average gate fidelity for frequency-tunable transmon qubits. This gate-dependent error model facilitates using qubits as sensors to simultaneously learn practical imperfections in both the qubit’s environment and control waveforms. We combine a state-of-the-art machine learning algorithm with Moiré-enhanced swap spectroscopy to achieve robust learning using noisy experimental data. Deep neural networks are used to represent the functional map from experimental data to TLS parameters and are trained through an evolutionary algorithm. Our method achieves the highest learning efficiency and robustness against experimental imperfections to date, representing an important step towards in-situ quantum control optimization over environmental and control defects.
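The learning pipeline described above maps measured swap-spectroscopy data to TLS parameters with a neural network whose weights are found by an evolutionary search. The sketch below is a minimal, hypothetical illustration of that idea with placeholder data shapes and a simple truncation-selection evolution strategy; it is not the paper’s architecture, spectroscopy model, or training algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a flattened spectroscopy image in, a few TLS parameters out
# (e.g. defect frequency, coupling strength, decoherence rate). Placeholders only.
N_IN, N_HIDDEN, N_OUT = 256, 64, 3

def unpack(theta):
    """Split a flat parameter vector into the two weight matrices."""
    w1 = theta[: N_IN * N_HIDDEN].reshape(N_IN, N_HIDDEN)
    w2 = theta[N_IN * N_HIDDEN:].reshape(N_HIDDEN, N_OUT)
    return w1, w2

def predict(theta, x):
    w1, w2 = unpack(theta)
    return np.tanh(x @ w1) @ w2

def fitness(theta, x, y):
    """Negative mean-squared error on (noisy) training data."""
    return -np.mean((predict(theta, x) - y) ** 2)

# Placeholder training set standing in for simulated/measured spectroscopy data.
x_train = rng.normal(size=(500, N_IN))
y_train = rng.normal(size=(500, N_OUT))

dim = N_IN * N_HIDDEN + N_HIDDEN * N_OUT
population = rng.normal(scale=0.1, size=(64, dim))

for generation in range(50):
    scores = np.array([fitness(p, x_train, y_train) for p in population])
    parents = population[np.argsort(scores)[-8:]]                        # keep the best 8
    children = parents[rng.integers(0, 8, size=56)]                      # resample parents
    children = children + rng.normal(scale=0.02, size=children.shape)    # mutate
    population = np.vstack([parents, children])
```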
Future quantum computing systems will require cryogenic integrated circuits to control and measure millions of qubits. In this paper, we report the design and characterization of a prototype cryogenic CMOS integrated circuit that has been optimized for the control of transmon qubits. The circuit has been integrated into a quantum measurement setup and its performance has been validated through multiple quantum control experiments.
A critical question for the field of quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of state-of-the-art classical computers, achieving so-called quantum supremacy. We study the task of sampling from the output distributions of (pseudo-)random quantum circuits, a natural task for benchmarking quantum computers. Crucially, sampling this distribution classically requires a direct numerical simulation of the circuit, with computational cost exponential in the number of qubits. This requirement is typical of chaotic systems. We extend previous results in computational complexity to argue more formally that this sampling task must take exponential time in a classical computer. We study the convergence to the chaotic regime using extensive supercomputer simulations, modeling circuits with up to 42 qubits – the largest quantum circuits simulated to date for a computational task that approaches quantum supremacy. We argue that while chaotic states are extremely sensitive to errors, quantum supremacy can be achieved in the near-term with approximately fifty superconducting qubits. We introduce cross entropy as a useful benchmark of quantum circuits which approximates the circuit fidelity. We show that the cross entropy can be efficiently measured when circuit simulations are available. Beyond the classically tractable regime, the cross entropy can be extrapolated and compared with theoretical estimates of circuit fidelity to define a practical quantum supremacy test.
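The cross-entropy benchmark described above compares measured bitstrings against ideal output probabilities obtained from classical simulation. The sketch below is a minimal illustration using the linear form of the estimator, a commonly used variant; the exact definition and normalization in the paper may differ, and the toy distribution here is purely illustrative.

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs: np.ndarray, samples: np.ndarray) -> float:
    """Linear cross-entropy fidelity estimate.

    ideal_probs: length-2**n array of simulated output probabilities p_U(x).
    samples:     integer bitstrings sampled from the device.

    Returns ~1 for samples drawn from p_U(x) and ~0 for uniformly random
    samples (e.g. a fully depolarized device).
    """
    dim = ideal_probs.size
    return dim * np.mean(ideal_probs[samples]) - 1.0

# Toy example: 10 "qubits", samples drawn from the ideal distribution itself.
rng = np.random.default_rng(1)
p = rng.exponential(size=2 ** 10)        # Porter-Thomas-like toy distribution
p /= p.sum()
samples = rng.choice(p.size, size=5000, p=p)
print(linear_xeb_fidelity(p, samples))    # close to 1 for these ideal samples
```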
For a variety of superconducting qubits, tunable interactions are achieved through mutual inductive coupling to a coupler circuit containing a nonlinear Josephson element. In this paper we derive the general interaction mediated by such a circuit under the Born-Oppenheimer approximation. This interaction naturally decomposes into a classical part with origin in the classical circuit equations and a quantum part associated with the zero-point energy of the coupler. Our result is non-perturbative in the qubit-coupler coupling strengths and circuit nonlinearities, leading to significant departures from previous treatments in the nonlinear or strong coupling regimes. Specifically, it displays no divergences for large coupler nonlinearities, and it can predict k-body and non-stoquastic interactions that are absent in linear theories. Our analysis provides explicit and efficiently computable series for any term in the interaction Hamiltonian and can be applied to any superconducting qubit type.
Quantum annealing (QA) has been proposed as a quantum enhanced optimization heuristic exploiting tunneling. Here, we demonstrate how finite range tunneling can provide considerable computational advantage. For a crafted problem designed to have tall and narrow energy barriers separating local minima, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to Simulated Annealing (SA). For instances with 945 variables this results in a time-to-99%-success-probability that is ∼10^8 times faster than SA running on a single processor core. We also compared physical QA with Quantum Monte Carlo (QMC), an algorithm that emulates quantum tunneling on classical processors. We observe a substantial constant overhead against physical QA: D-Wave 2X runs up to ∼10^8 times faster than an optimized implementation of QMC on a single core. To investigate whether finite range tunneling will also confer an advantage for problems of practical interest, we conduct numerical studies on binary optimization problems that cannot yet be represented on quantum hardware. For random instances of the number partitioning problem, we find numerically that QMC, as well as other algorithms designed to simulate QA, scale better than SA and better than the best known classical algorithms for this problem. We discuss the implications of these findings for the design of next generation quantum annealers.
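The time-to-99%-success-probability metric quoted above folds the per-run anneal time together with the measured per-run success probability. Below is a minimal sketch of the standard definition (the number of repetitions needed to reach 99% cumulative success); the paper's bootstrapping over instances and annealing schedules is not reproduced here.

```python
import math

def time_to_solution(t_run_s: float, p_success: float, target: float = 0.99) -> float:
    """Expected total runtime to reach the target cumulative success probability.

    Repeating a run with per-run success probability p_success, the number of
    repetitions needed is log(1 - target) / log(1 - p_success).
    """
    if p_success >= target:
        return t_run_s
    repetitions = math.log(1.0 - target) / math.log(1.0 - p_success)
    return t_run_s * repetitions

# Example: 20-microsecond anneals with a 1% per-run success probability.
print(time_to_solution(20e-6, 0.01))   # ~9.2 ms to reach 99% confidence
```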
Temperature determines the relative probability of observing a physical system in an energy state when that system is energetically in equilibrium with its environment. In this manuscript, we present a theory for engineering the temperature of a quantum system different from its ambient temperature, which is essentially an analog version of the quantum Metropolis algorithm. We define criteria for an engineered quantum bath that, when coupled to a quantum system with Hamiltonian H, drives the system to the equilibrium state e^{−H/T}/Tr(e^{−H/T}) with a tunable parameter T. For a system of superconducting qubits, we propose a circuit-QED approximate realization of such an engineered thermal bath consisting of driven lossy resonators. We consider an artificial thermal bath as a simulator for many-body physics or a controllable temperature knob for a hybrid quantum-thermal annealer.
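For reference, the target equilibrium state above is the Gibbs state of H at temperature T. The following is a minimal numerical sketch (units with k_B = 1, matching the e^{−H/T} form in the abstract).

```python
import numpy as np
from scipy.linalg import expm

def gibbs_state(hamiltonian: np.ndarray, temperature: float) -> np.ndarray:
    """Density matrix e^{-H/T} / Tr(e^{-H/T}) (k_B = 1)."""
    rho = expm(-hamiltonian / temperature)
    return rho / np.trace(rho)

# Single qubit with level splitting 1: populations follow the Boltzmann ratio e^{-1/T}.
H = np.diag([0.0, 1.0])
rho = gibbs_state(H, temperature=0.5)
print(np.real(np.diag(rho)))   # ground-state population dominates at low T
```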
Quantum tunneling, a phenomenon in which a quantum state traverses energy barriers above the energy of the state itself, has been hypothesized as an advantageous physical resource for optimization. Here we show that multiqubit tunneling plays a computational role in a currently available, albeit noisy, programmable quantum annealer. We develop a non-perturbative theory of open quantum dynamics under realistic noise characteristics predicting the rate of many-body dissipative quantum tunneling. We devise a computational primitive with 16 qubits where quantum evolutions enable tunneling to the global minimum while the corresponding classical paths are trapped in a false minimum. Furthermore, we experimentally demonstrate that quantum tunneling can outperform thermal hopping along classical paths for problems with up to 200 qubits containing the computational primitive. Our results indicate that many-body quantum phenomena could be used for finding better solutions to hard optimization problems.
Quantum tunneling is a phenomenon in which a quantum state traverses energy barriers above the energy of the state itself. Tunneling has been hypothesized as an advantageous physical resource for optimization. Here we present the first experimental evidence of a computational role of multiqubit quantum tunneling in the evolution of a programmable quantum annealer. We develop a theoretical model based on a NIBA Quantum Master Equation to describe the multiqubit dissipative tunneling effects under the complex noise characteristics of such quantum devices. We start by considering a computational primitive, the simplest non-convex optimization problem consisting of just one global and one local minimum. The quantum evolutions enable tunneling to the global minimum while the corresponding classical paths are trapped in a false minimum. In our study the non-convex potentials are realized by frustrated networks of qubit clusters with strong intra-cluster coupling. We show that the collective effect of the quantum environment is suppressed in the "critical" phase during the evolution where quantum tunneling "decides" the right path to solution. In a later stage dissipation facilitates the multiqubit tunneling leading to the solution state. The predictions of the model accurately describe the experimental data from the D-Wave Two quantum annealer at NASA Ames. In our computational primitive the temperature dependence of the probability of success in the quantum model is opposite to that of the classical paths with thermal hopping. Specifically, we provide an analysis of an optimization problem with sixteen qubits, demonstrating eight qubit tunneling that increases success probabilities. Furthermore, we report results for larger problems with up to 200 qubits that contain the primitive as subproblems.