We present a method to optimize qubit control parameters during error detection that is compatible with large-scale qubit arrays. We demonstrate our method by optimizing single- and two-qubit gates in parallel on a nine-qubit system. Additionally, we show how parameter drift can be compensated for during computation by inserting a frequency drift and using our method to remove it. We remove both drift on a single qubit and independent drifts on all qubits simultaneously. We believe this method will be useful in keeping error rates low on all physical qubits throughout the course of a computation. Our method is O(1) scalable to systems of arbitrary size, providing a path towards controlling the large numbers of qubits needed for a fault-tolerant quantum computer.
A major challenge in quantum computing is to solve general problems with limited physical hardware. Here, we implement digitized adiabatic quantum computing, combining the generality of the adiabatic algorithm with the universality of the digital approach, using a superconducting circuit with nine qubits. We probe the adiabatic evolutions, and quantify the success of the algorithm for random spin problems. We find that the system can approximate the solutions to both frustrated Ising problems and problems with more complex interactions, with comparable performance. The presented approach is compatible with small-scale systems as well as future error-corrected quantum computers.
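The digitized adiabatic idea above can be illustrated numerically: a continuous anneal from a driver Hamiltonian to a problem Hamiltonian is broken into discrete Trotter steps, each implementable as gates. The sketch below is not the nine-qubit experiment; it is a minimal two-spin Ising example with an assumed linear annealing schedule and illustrative parameter values.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Problem Hamiltonian: antiferromagnetic Ising coupling on two spins
H_P = np.kron(Z, Z)
# Driver Hamiltonian: transverse field on both spins
H_D = -(np.kron(X, I2) + np.kron(I2, X))

n_steps, T = 50, 10.0          # Trotter steps and total anneal time (illustrative)
dt = T / n_steps
# Start in the ground state of H_D: |++>
psi = np.full(4, 0.5, dtype=complex)

for k in range(n_steps):
    s = (k + 1) / n_steps       # linear schedule s: 0 -> 1
    # One digitized step: split-operator (Trotter) update of H(s) = (1-s)H_D + s*H_P
    psi = expm(-1j * dt * s * H_P) @ expm(-1j * dt * (1 - s) * H_D) @ psi

# Ground space of H_P is spanned by |01> and |10>
p_ground = abs(psi[1])**2 + abs(psi[2])**2
print(f"ground-state population: {p_ground:.3f}")
```

With a slow enough anneal and fine enough Trotterization, the final state concentrates in the ground space of the problem Hamiltonian; shortening `T` or coarsening `n_steps` degrades the success probability, which is the trade-off the abstract's experiment probes.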
Recent progress in quantum information has led to the start of several large national and industrial efforts to build a quantum computer. Researchers are now working to overcome many scientific and technological challenges. The program's biggest obstacle, a potential showstopper for the entire effort, is the need for high-fidelity qubit operations in a scalable architecture. This challenge arises from the fundamental fragility of quantum information, which can only be overcome with quantum error correction. In a fault-tolerant quantum computer the qubits and their logic interactions must have errors below a threshold: scaling up with more and more qubits then brings the net error probability down to the ~10⁻¹⁸ level needed for running complex algorithms. Reducing error requires solving problems in physics, control, materials and fabrication, which differ for every implementation. I explain here the common key driver for continued improvement: the metrology of qubit errors.
Leakage errors occur when a quantum system leaves the two-level qubit subspace. Reducing these errors is critically important for quantum error correction to be viable. To quantify leakage errors, we use randomized benchmarking in conjunction with measurement of the leakage population. We characterize single-qubit gates in a superconducting qubit, and by refining our use of Derivative Reduction by Adiabatic Gate (DRAG) pulse shaping along with detuning of the pulses, we obtain gate errors consistently below 10⁻³ and leakage rates at the 10⁻⁵ level. With the control optimized, we find that a significant portion of the remaining leakage is due to incoherent heating of the qubit.
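Randomized benchmarking, as used above, extracts an average error per gate by fitting the exponential decay of sequence fidelity versus sequence length. The sketch below fits the standard single-qubit RB model to synthetic data; the error rate, SPAM constants, noise level, and sequence lengths are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Assumed (illustrative) values, not measured ones
error_per_gate = 8e-4          # depolarizing error per Clifford
A, B = 0.5, 0.5                # SPAM-dependent fit constants

def decay(m, p, A, B):
    # Standard RB model: sequence fidelity vs. number of Cliffords m
    return A * p**m + B

p_true = 1 - 2 * error_per_gate        # depolarizing parameter for d = 2
lengths = np.array([1, 25, 50, 100, 200, 400, 800])
# Synthetic "measured" survival probabilities with shot noise
data = decay(lengths, p_true, A, B) + rng.normal(0, 1e-3, lengths.size)

(p_fit, A_fit, B_fit), _ = curve_fit(decay, lengths, data, p0=[0.999, 0.5, 0.5])
r_fit = (1 - p_fit) / 2                # average error per gate, r = (1-p)(d-1)/d
print(f"fitted error per gate: {r_fit:.2e}")
```

Because SPAM errors only enter through the constants `A` and `B`, the fitted decay parameter `p` isolates the gate error itself; the leakage measurement in the abstract augments this with a separate fit to the population outside the qubit subspace.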
Current quantum computing architectures lack the size and fidelity required for universal fault-tolerant operation, limiting the practical implementation of key quantum algorithms to all but the smallest problem sizes. In this work we propose an alternative method for general-purpose quantum computation that is ideally suited for such "prethreshold" superconducting hardware. Computations are performed in the n-dimensional single-excitation subspace (SES) of a system of n tunably coupled superconducting qubits. The approach is not scalable, but allows many operations in the unitary group SU(n) to be implemented by a single application of the Hamiltonian, bypassing the need to decompose a desired unitary into elementary gates. This feature makes large, nontrivial quantum computations possible within the available coherence time. We show how to use a programmable SES chip to perform fast amplitude amplification and phase estimation, two versatile quantum subalgorithms. We also show that an SES processor is well suited for Hamiltonian simulation, specifically simulation of the Schrödinger equation with a real but otherwise arbitrary n×n Hamiltonian matrix. We discuss the utility and practicality of such a universal quantum simulator, and propose its application to the study of realistic atomic and molecular collisions.
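The SES idea above can be sketched classically: with the n-dimensional state carried by which of the n qubits holds the single excitation, one programmed Hamiltonian "pulse" of duration t implements the full n×n unitary U = exp(−iHt) in one step, with no gate decomposition. Below is a minimal numerical illustration with an arbitrary real symmetric Hamiltonian; the matrix entries and evolution time are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import expm

# A real, symmetric 4x4 Hamiltonian with arbitrary illustrative entries
n = 4
rng = np.random.default_rng(1)
H = rng.normal(size=(n, n))
H = (H + H.T) / 2               # symmetrize

# One SES "pulse" of duration t implements U = exp(-iHt) directly
t = 1.0
U = expm(-1j * H * t)

psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                   # excitation starts on qubit 0
psi_t = U @ psi0                # Schrodinger evolution of the n-level state
print(np.round(np.abs(psi_t)**2, 3))
```

The point of the SES architecture is that this single matrix exponential is realized physically by one application of the tunable coupling Hamiltonian, rather than by compiling `U` into a long elementary-gate sequence.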
The readout fidelity of superconducting transmon and Xmon qubits is partially limited by qubit energy relaxation through the resonator into the transmission line, also known as the Purcell effect. One way to suppress this energy relaxation is to employ a filter which impedes microwave propagation at the qubit frequency. We present semiclassical and quantum analyses for the bandpass Purcell filter realized by E. Jeffrey et al. [Phys. Rev. Lett. 112, 190504 (2014)]. For typical experimental parameters, the bandpass filter suppresses the qubit relaxation rate by up to two orders of magnitude while maintaining the same measurement rate. We also show that in the presence of a microwave drive the qubit relaxation rate further decreases with increasing drive strength.
Since the inception of quantum mechanics, its validity as a complete description of reality has been challenged due to predictions that defy classical intuition. For many years it was unclear whether predictions like entanglement and projective measurement represented real phenomena or artifacts of an incomplete model. Bell inequalities (BI) provided the first quantitative test to distinguish between quantum entanglement and a yet undiscovered classical hidden variable theory. The Leggett-Garg inequality (LGI) provides a similar test for projective measurement, and more recently has been adapted to include variable-strength measurements to study the process of measurement itself. Here we probe the intersection of both entanglement and measurement through the lens of the hybrid Bell-Leggett-Garg inequality (BLGI). By correlating data from ancilla-based weak measurements and direct projective measurements, we quantify for the first time the effect of measurement strength on entanglement collapse. Violation of the BLGI, which we achieve only at the weakest measurement strengths, offers compelling evidence of the completeness of quantum mechanics while avoiding several loopholes common to previous experimental tests. This uniquely quantum result significantly constrains the nature of any possible classical theory of reality. Additionally, we demonstrate that with sufficient scale and fidelity, a universal quantum processor can be used to study richer fundamental physics.
Josephson parametric amplifiers have become a critical tool in superconducting device physics due to their high gain and quantum-limited noise. Traveling wave parametric amplifiers (TWPAs) promise similar noise performance while allowing for significant increases in both bandwidth and dynamic range. We present a TWPA device based on an LC-ladder transmission line of Josephson junctions and parallel-plate capacitors using low-loss amorphous silicon dielectric. Crucially, we have inserted λ/4 resonators at regular intervals along the transmission line in order to maintain the phase-matching condition between pump, signal, and idler and increase gain. We achieve an average gain of 12 dB across a 4 GHz span, along with an average saturation power of −92 dBm, with noise approaching the quantum limit.
Simulating quantum physics with a device which itself is quantum mechanical, a notion Richard Feynman originated, would be an unparalleled computational resource. However, the universal quantum simulation of fermionic systems is daunting due to their particle statistics, and Feynman left as an open question whether it could be done, because of the need for non-local control. Here, we implement fermionic interactions with digital techniques in a superconducting circuit. Focusing on the Hubbard model, we perform time evolution with constant interactions as well as a dynamic phase transition with up to four fermionic modes encoded in four qubits. The implemented digital approach is universal and allows for the efficient simulation of fermions in arbitrary spatial dimensions. We use in excess of 300 single-qubit and two-qubit gates, and reach global fidelities which are limited by gate errors. This demonstration highlights the feasibility of the digital approach and opens a viable route towards analog-digital quantum simulation of interacting fermions and bosons in large-scale solid-state systems.
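Encoding fermionic modes in qubits, as above, requires a mapping that preserves anticommutation; the standard choice is the Jordan-Wigner transformation, in which each fermionic operator carries a string of Pauli-Z operators. The sketch below verifies, for two adjacent modes, that the Hubbard hopping term maps exactly onto a two-qubit Pauli expression (conventions as written are one common choice).

```python
import numpy as np

# Single-qubit operators (basis |0>, |1>)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
a = np.array([[0, 1], [0, 0]], dtype=complex)   # annihilation on one mode

# Jordan-Wigner encoding of two fermionic modes in two qubits:
# c1 = a (x) I,  c2 = Z (x) a  (the Z string enforces anticommutation)
c1 = np.kron(a, I2)
c2 = np.kron(Z, a)

# Hopping term between adjacent sites: c1^dag c2 + c2^dag c1
hopping = c1.conj().T @ c2 + c2.conj().T @ c1
# The same term written with qubit (Pauli) operators
qubit_form = (np.kron(X, X) + np.kron(Y, Y)) / 2

assert np.allclose(hopping, qubit_form)
assert np.allclose(c1 @ c2 + c2 @ c1, 0)        # {c1, c2} = 0
print("Jordan-Wigner mapping verified")
```

For non-adjacent modes the Z strings grow, which is why the abstract emphasizes that the digital approach still simulates fermions efficiently in arbitrary spatial dimensions: the strings add only polynomial gate overhead.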
Quantum computing becomes viable when a quantum state can be preserved from environmentally induced error. If quantum bits (qubits) are sufficiently reliable, errors are sparse and quantum error correction (QEC) is capable of identifying and correcting them. Adding more qubits improves the preservation by guaranteeing that increasingly larger clusters of errors will not cause logical failure, a key requirement for large-scale systems. Using QEC to extend the qubit lifetime remains one of the outstanding experimental challenges in quantum computing. Here, we report the protection of classical states from environmental bit-flip errors and demonstrate the suppression of these errors with increasing system size. We use a linear array of nine qubits, which is a natural precursor of the two-dimensional surface code QEC scheme, and track errors as they occur by repeatedly performing projective quantum non-demolition (QND) parity measurements. Relative to a single physical qubit, we reduce the failure rate in retrieving an input state by a factor of 2.7 for five qubits and a factor of 8.5 for nine qubits after eight cycles. Additionally, we tomographically verify preservation of the non-classical Greenberger-Horne-Zeilinger (GHZ) state. The successful suppression of environmentally induced errors strongly motivates further research into the many exciting challenges associated with building a large-scale superconducting quantum computer.
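The size-dependent suppression reported above has a simple classical core: a repetition code fails only when a majority of its qubits flip, so the logical failure rate falls rapidly with qubit number once the per-qubit error is below threshold. The Monte Carlo sketch below is that classical analogue only, not the experimental QND parity-measurement protocol, and the per-qubit flip probability is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def logical_failure_rate(n_qubits, p_flip, n_trials=100_000):
    # Classical bit-flip repetition code: each of n_qubits copies of the
    # input bit flips independently with probability p_flip; decoding is
    # a majority vote over the noisy copies.
    flips = rng.random((n_trials, n_qubits)) < p_flip
    return np.mean(flips.sum(axis=1) > n_qubits // 2)

p = 0.05   # illustrative per-qubit flip probability, not a measured value
for n in (1, 5, 9):
    print(f"n={n}: failure rate ~ {logical_failure_rate(n, p):.4f}")
```

The qualitative behavior matches the experiment's trend: going from one to five to nine qubits suppresses the failure rate by successively larger factors, because a logical error now requires an increasingly large cluster of simultaneous physical errors.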