Quantum error correction will be a necessary component of any scalable quantum computer built from physical qubits. Theoretically, it is possible to perform arbitrarily long computations if the error rate is below a threshold value. The two-dimensional surface code permits relatively high fault-tolerant thresholds at the ~1% level, and requires only a latticed network of qubits with nearest-neighbor interactions. Superconducting qubits have continued to improve steadily in coherence times and in gate and readout fidelities, becoming a leading candidate for implementation in larger quantum networks. Here we describe characterization experiments and calibration of a system of four superconducting qubits arranged in a planar lattice, amenable to the surface code. We give insights into the particular qubit design and compare simulated with experimentally determined device parameters. Single- and two-qubit gate tune-up procedures are described, and results for simultaneously benchmarking pairs of two-qubit gates are given. All of these controls are eventually used in an arbitrary error detection protocol described in separate work [Córcoles et al., Nature Communications 6, 6979 (2015)].
The ability to perform fast, high-fidelity readout of quantum bits (qubits) is essential for the goal of building a quantum computer. However, the parameters of a superconducting qubit device necessary to achieve this typically enhance qubit relaxation by spontaneous emission through the measurement channel. Here we design a broadband filter using impedance engineering to allow photons to leave the resonator at the cavity frequency but not at the qubit frequency. This broadband filter is implemented in both on-chip and off-chip configurations.
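To make the impedance-engineering idea concrete, the standard circuit-QED relations below (textbook results with illustrative symbols, not this device's parameters) show why shaping the admittance seen by the qubit decouples relaxation from readout speed:

```latex
% Relaxation of a qubit with shunt capacitance C_q into an environment of
% admittance Y(\omega), evaluated at the qubit frequency \omega_q:
\Gamma_1 = \frac{\operatorname{Re}\,Y(\omega_q)}{C_q} .
% Without a filter, the readout resonator (linewidth \kappa, coupling g,
% qubit-cavity detuning \Delta) sets the Purcell emission rate
\Gamma_{\mathrm{Purcell}} \approx \kappa \left(\frac{g}{\Delta}\right)^{2} .
```

A filter that makes Re Y(ω) small at the qubit frequency while leaving it large at the cavity frequency therefore suppresses spontaneous emission without slowing the measurement.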
Typical quantum gate tomography protocols struggle with a self-consistency problem: the gate operation cannot be reconstructed without knowledge of the initial state and final measurement, but such knowledge cannot be obtained without well-characterized gates. A recently proposed technique, known as randomized benchmarking tomography (RBT), sidesteps this self-consistency problem by designing experiments to be insensitive to preparation and measurement imperfections. We implement this proposal in a superconducting qubit system, using a number of experimental improvements, including the implementation of each element of the Clifford group as a single 'atomic' pulse and custom control hardware that enables protocols with large overheads. We show a robust reconstruction of several single-qubit quantum gates, including a unitary outside the Clifford group. We demonstrate that RBT yields physical gate reconstructions that are consistent with fidelities obtained by randomized benchmarking.
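For readers unfamiliar with the randomized benchmarking primitive underlying RBT, the sketch below (simulated data, not the paper's analysis code) fits the standard zeroth-order RB decay and converts the decay parameter to an average Clifford fidelity:

```python
# Minimal RB fit: survival probability vs Clifford sequence length m follows
# F(m) = A * p**m + B; the decay p gives the average gate fidelity.
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, B, p):
    """Zeroth-order randomized benchmarking model."""
    return A * p**m + B

# hypothetical data: sequence lengths and noisy survival probabilities
rng = np.random.default_rng(0)
m = np.array([2, 4, 8, 16, 32, 64, 128, 256])
survival = 0.5 * 0.998**m + 0.5 + rng.normal(0, 0.005, m.size)

(A, B, p), _ = curve_fit(rb_decay, m, survival, p0=(0.5, 0.5, 0.99))
d = 2                                      # single-qubit Hilbert-space dimension
avg_fidelity = 1 - (1 - p) * (d - 1) / d   # standard depolarizing conversion
print(f"decay p = {p:.5f}, average Clifford fidelity = {avg_fidelity:.5f}")
```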
With improved gate calibrations reducing unitary errors, we achieve a benchmarked single-qubit gate fidelity of 99.95% with superconducting qubits in a circuit quantum electrodynamics system. We present a method for distinguishing between unitary and non-unitary errors in quantum gates by interleaving repetitions of a target gate within a randomized benchmarking sequence. The benchmarking fidelity decays quadratically with the number of interleaved gates for unitary errors but linearly for non-unitary errors, allowing us to separate systematic coherent errors from decoherent effects. With this protocol we show that the fidelity of our gates is not limited by unitary errors, but by another drive-activated source of decoherence, such as amplitude fluctuations.
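The quadratic-versus-linear signature follows from how the two error types accumulate; a toy model (illustrative per-gate error scales, not the experiment's) makes this explicit:

```python
# n repetitions of a gate with either a purely coherent over-rotation
# (angle theta per gate) or a purely depolarizing error (probability r per gate).
import numpy as np

theta, r = 2e-3, 1e-6        # per-gate error scales, assumed for illustration
n = np.arange(0, 501)

# coherent errors add in amplitude: total rotation n*theta, so the
# sequence infidelity ~ (n*theta/2)**2 grows quadratically in n
infid_unitary = np.sin(n * theta / 2) ** 2

# incoherent errors add in probability: infidelity grows linearly in n
infid_depol = 1 - (1 - r) ** n

# doubling n roughly quadruples the coherent error but only doubles
# the incoherent one:
print(infid_unitary[[100, 200, 400]] / infid_unitary[100])  # ~ [1, 4, 16]
print(infid_depol[[100, 200, 400]] / infid_depol[100])      # ~ [1, 2, 4]
```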
Physical implementations of qubits can be extremely sensitive to environmental coupling, which can result in decoherence. While efforts are made to protect qubits, coupling to the environment is necessary to measure and manipulate the state of the qubit. As such, the goal of having long qubit energy relaxation times is in competition with that of achieving high-fidelity qubit control and measurement. Here we propose a method that integrates filtering techniques for preserving superconducting qubit lifetimes together with the dispersive coupling of the qubit to a microwave resonator for control and measurement. The result is a compact circuit that protects qubits from spontaneous loss to the environment, while retaining the ability to perform fast, high-fidelity readout. Importantly, we show that the device operates in a regime attainable with current experimental parameters, and we provide a specific example for superconducting qubits in circuit quantum electrodynamics.
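A rough numerical estimate (typical circuit-QED numbers chosen for illustration, not the proposed device's parameters) quantifies the tension between fast readout and Purcell-limited lifetime that such a design addresses:

```python
import numpy as np

g = 2 * np.pi * 100e6       # qubit-cavity coupling (rad/s), assumed
delta = 2 * np.pi * 1.0e9   # qubit-cavity detuning (rad/s), assumed
kappa = 2 * np.pi * 5e6     # cavity linewidth for fast readout (rad/s), assumed

gamma_purcell = kappa * (g / delta) ** 2   # unfiltered spontaneous-emission rate
t1_limit = 1 / gamma_purcell
print(f"Purcell-limited T1 without a filter: {t1_limit * 1e6:.1f} us")

# a filter suppressing the environmental density of states at the qubit
# frequency by a factor S extends the limit to ~ S * t1_limit
for S in (10, 100, 1000):
    print(f"suppression x{S:>4}: T1 limit ~ {S * t1_limit * 1e6:.0f} us")
```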
Using a circuit QED device, we demonstrate a simple qubit measurement pulse shape that yields fast ring-up and ring-down of the readout resonator regardless of the qubit state. The pulse differs from a square pulse only by the inclusion of additional constant-amplitude segments designed to effect a rapid transition from one steady-state population to another. Using a Ramsey experiment performed shortly after the measurement pulse to quantify the residual population, we find that, compared to a square pulse followed by a delay, this pulse shape reduces the timescale for cavity ring-down by more than twice the cavity time constant. At low drive powers, this performance is achieved using pulse parameters calculated from a linear cavity model; at higher powers, empirical optimization of the pulse parameters leads to similar performance.
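The linear-cavity intuition can be sketched in a few lines (simplified on-resonance model with an assumed linewidth and segment length, not the paper's calibration): a brief counter-amplitude segment, with amplitude computed from the model, empties the cavity much faster than passive ring-down:

```python
# On-resonance linear cavity model: d(alpha)/dt = -(kappa/2)*alpha + eps(t).
# Compare passive ring-down after a square pulse with an added
# counter-amplitude segment chosen to return the field to vacuum.
import numpy as np

kappa = 2 * np.pi * 5e6                  # cavity linewidth (rad/s), assumed
dt = 1e-10
t = np.arange(0, 2e-6, dt)

def evolve(eps):
    alpha = np.zeros(t.size, dtype=complex)
    for i in range(t.size - 1):
        alpha[i + 1] = alpha[i] + dt * (-0.5 * kappa * alpha[i] + eps[i])
    return alpha

t_off, tau = 1.0e-6, 50e-9               # pulse end, counter-segment length
eps_square = np.where(t < t_off, 1.0, 0.0)

# counter-drive amplitude from the linear model (unit drive during the pulse)
x = np.exp(-0.5 * kappa * tau)
eps_shaped = eps_square.copy()
eps_shaped[(t >= t_off) & (t < t_off + tau)] = -x / (1 - x)

n_sq = np.abs(evolve(eps_square)) ** 2
n_sh = np.abs(evolve(eps_shaped)) ** 2
i = t.searchsorted(t_off + 3 * tau)      # shortly after the segment ends
print(f"residual photon fraction: square {n_sq[i] / n_sq.max():.1e}, "
      f"shaped {n_sh[i] / n_sh.max():.1e}")
```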
High-fidelity measurements are important for the physical implementation of quantum information protocols. Current methods for classifying measurement trajectories in superconducting qubit systems produce fidelities that are systematically lower than those predicted by experimental parameters. Here, we place current classification methods within the framework of machine learning (ML) algorithms and improve on them by investigating more sophisticated ML approaches. We find that non-linear algorithms and clustering methods produce significantly higher assignment fidelities that help close the gap to the fidelity achievable under ideal noise conditions. Clustering methods group trajectories into natural subsets within the data, which allows for the diagnosis of specific systematic errors. We find large clusters in the data associated with relaxation processes and show these are the main source of discrepancy between our experimental and achievable fidelities. These error diagnosis techniques help provide a concrete path forward to improve qubit measurements.
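A minimal sketch of the idea (simulated integrated (I, Q) shots and off-the-shelf scikit-learn models, not the paper's pipeline): a linear discriminator sets a baseline assignment fidelity, while unsupervised clustering exposes relaxation events as a distinct cluster between the qubit-state blobs:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n, sigma = 5000, 0.3
ground = rng.normal([-1.0, 0.0], sigma, (n, 2))
excited = rng.normal([+1.0, 0.0], sigma, (n, 2))

# 10% of excited shots decay partway through the record: their integrated
# I value lands between the two blobs, smeared along the I axis
n_rel = n // 10
frac = rng.uniform(0.1, 0.9, n_rel)      # fraction of the record before decay
excited[:n_rel, 0] = (2 * frac - 1) + rng.normal(0, sigma, n_rel)

X = np.vstack([ground, excited])
y = np.repeat([0, 1], n)

# linear boundary: the usual baseline discriminator
lda = LinearDiscriminantAnalysis().fit(X, y)
print(f"linear assignment fidelity: {lda.score(X, y):.4f}")

# unsupervised clustering exposes the relaxation events as a third cluster
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)
for k in range(3):
    sel = labels == k
    print(f"cluster {k}: {sel.mean():.3f} of shots, mean I = {X[sel, 0].mean():+.2f}")
```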
To build a fault-tolerant quantum computer, it is necessary to implement a quantum error correcting code. Such codes rely on the ability to extract information about the quantum error syndrome while not destroying the quantum information encoded in the system. Stabilizer codes are attractive solutions to this problem, as they are analogous to classical linear codes, have simple and easily computed encoding networks, and allow efficient syndrome extraction. In these codes, syndrome extraction is performed via multi-qubit stabilizer measurements, which are bit and phase parity checks up to local operations. Previously, stabilizer codes have been realized in nuclear spins, trapped ions, and superconducting qubits. However, these implementations lack the ability to perform fault-tolerant syndrome extraction, which continues to be a challenge for all physical quantum computing systems. Here we experimentally demonstrate a key step towards solving this problem by using a two-by-two lattice of superconducting qubits to perform syndrome extraction and arbitrary error detection via simultaneous quantum non-demolition stabilizer measurements. This lattice represents a primitive tile for the surface code, which is a promising stabilizer code for scalable quantum computing. Furthermore, we successfully show the preservation of an entangled state in the presence of an arbitrary applied error through high-fidelity syndrome measurement. Our results bolster the promise of employing lattices of superconducting qubits for larger-scale fault-tolerant quantum computing.
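A toy statevector illustration of the parity-check primitive (conceptual only, not the experiment's hardware-level sequence): a ZZ stabilizer measurement maps the parity of two code qubits onto a syndrome ancilla via two CNOTs, flagging a bit-flip error without destroying the encoded state:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(n, c, t):
    """CNOT on an n-qubit register (qubit 0 leftmost), control c, target t."""
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    return (kron(*[P0 if q == c else I2 for q in range(n)])
            + kron(*[P1 if q == c else (X if q == t else I2) for q in range(n)]))

def zz_syndrome(error_on_q0=False):
    bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # even-parity code state
    psi = np.kron(bell, [1.0, 0.0])                     # ancilla (qubit 2) in |0>
    if error_on_q0:
        psi = kron(X, I2, I2) @ psi                     # inject a bit-flip error
    psi = cnot(3, 1, 2) @ (cnot(3, 0, 2) @ psi)         # ZZ check onto ancilla
    return np.sum(np.abs(psi[1::2]) ** 2)               # P(ancilla reads 1)

print(f"no error: P(odd syndrome) = {zz_syndrome():.3f}")       # -> 0.000
print(f"X on q0:  P(odd syndrome) = {zz_syndrome(True):.3f}")   # -> 1.000
```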
Quantum error correction (QEC) is an essential step towards realising scalable quantum computers. Theoretically, it is possible to achieve arbitrarily long protection of quantum information from corruption due to decoherence or imperfect controls, so long as the error rate is below a threshold value. The two-dimensional surface code (SC) is a fault-tolerant error correction protocol that has garnered considerable attention for actual physical implementations, due to its relatively high error threshold of ~1% and its restriction to planar lattices with nearest-neighbour interactions. Here we show a necessary element for SC error correction: high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. The experiment is performed on a sub-section of the SC lattice with three superconducting transmon qubits, in which two independent outer code qubits are joined to a central syndrome qubit via two linking bus resonators. With all-microwave high-fidelity single- and two-qubit nearest-neighbour entangling gates, we demonstrate entanglement distributed across the entire sub-section by generating a three-qubit Greenberger-Horne-Zeilinger (GHZ) state with fidelity ~94%. Then, via high-fidelity measurement of the syndrome qubit, we deterministically entangle the otherwise uncoupled outer code qubits, in either an even or odd parity Bell state, conditioned on the syndrome state. Finally, to fully characterize this parity readout, we develop a new measurement tomography protocol to obtain a fidelity metric (90% and 91%). Our results reveal a straightforward path for expanding superconducting circuits towards larger networks for the SC and eventually a primitive logical qubit implementation.
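The deterministic entanglement-by-measurement step follows from standard stabilizer bookkeeping (shown schematically below, not the paper's derivation): the separable state of the two code qubits is an equal superposition of the two ZZ-parity sectors, so a projective parity measurement leaves them in a Bell state of definite parity:

```latex
|{+}\rangle|{+}\rangle
  \;=\; \frac{1}{\sqrt{2}}\,
        \underbrace{\frac{|00\rangle + |11\rangle}{\sqrt{2}}}_{Z_1 Z_2 = +1}
  \;+\; \frac{1}{\sqrt{2}}\,
        \underbrace{\frac{|01\rangle + |10\rangle}{\sqrt{2}}}_{Z_1 Z_2 = -1} .
```

Each syndrome outcome occurs with probability 1/2, and the outcome label tells the experimenter which Bell state the code qubits now share.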
We present methods and results of shot-by-shot correlation of noisy measurements to extract entangled state and process tomography in a superconducting qubit architecture. We show that averaging continuous values, rather than counting discrete thresholded values, is a valid tomographic strategy and is in fact the better choice in the low signal-to-noise regime. We show that the effort to measure N-body correlations from individual measurements scales exponentially with N, but with sufficient signal-to-noise the approach remains viable for few-body correlations. We provide a new protocol to optimally account for the transient behavior of pulsed measurements. Despite imperfect single-shot measurement fidelity, we demonstrate appropriate processing to extract and verify entangled states and processes.
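A simplified illustration of why continuous averaging wins at low signal-to-noise (simulated shots with an assumed Gaussian noise model; the thresholded estimate is shown without readout-fidelity rescaling, as a naive counting strategy would compute it):

```python
# Estimate the two-qubit correlator <ZZ> from noisy single shots, either by
# averaging products of calibrated continuous voltages or by thresholding
# each shot first.
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 200_000, 2.0                 # shots, noise in units of the signal

# perfectly correlated pair: z = +1 or -1 on both qubits, equal probability
z = rng.choice([-1.0, 1.0], size=n)
v1 = z + sigma * rng.normal(size=n)     # calibrated voltages with mean = z
v2 = z + sigma * rng.normal(size=n)

zz_continuous = np.mean(v1 * v2)                     # unbiased: -> ~ +1.0
zz_threshold = np.mean(np.sign(v1) * np.sign(v2))    # shrunk toward 0 by noise
print(f"<ZZ> from continuous averaging: {zz_continuous:+.3f}")
print(f"<ZZ> from thresholded shots:    {zz_threshold:+.3f}")
```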