We present a method for distinguishing between unitary and non-unitary errors in quantum gates by interleaving repetitions of a target gate within a randomized benchmarking sequence. The benchmarking fidelity decays quadratically with the number of interleaved gates for unitary errors but linearly for non-unitary errors, allowing us to separate systematic coherent errors from decoherent effects. With improved gate calibrations reducing unitary errors, we achieve a benchmarked single-qubit gate fidelity of 99.95% with superconducting qubits in a circuit quantum electrodynamics system. With this protocol we show that the fidelity of the gates is not limited by unitary errors, but by another drive-activated source of decoherence such as amplitude fluctuations.
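The quadratic-versus-linear signature described above can be illustrated with a toy model (a sketch under simplifying assumptions, not the paper's analysis; `eps` and `p` are hypothetical error strengths): a coherent over-rotation by a small angle ε per gate composes unitarily, so n repetitions accumulate an infidelity ≈ (nε/2)², while a depolarizing channel of strength p per gate accumulates ≈ np.

```python
import numpy as np

# Toy model of interleaved-gate error accumulation (illustrative only;
# eps and p are hypothetical error strengths, not measured values).
eps = 0.01   # coherent over-rotation angle per gate (rad)
p = 1e-4     # depolarizing probability per gate

def coherent_infidelity(n):
    # n over-rotations compose to a single rotation by n*eps,
    # giving state infidelity sin^2(n*eps/2) ~ (n*eps/2)^2.
    return np.sin(n * eps / 2) ** 2

def depolarizing_infidelity(n):
    # each gate depolarizes independently: infidelity 1-(1-p)^n ~ n*p.
    return 1 - (1 - p) ** n

# Doubling the number of interleaved gates quadruples the coherent
# infidelity but only doubles the incoherent one -- the signature
# used to separate the two error types.
r_coh = coherent_infidelity(20) / coherent_infidelity(10)
r_dep = depolarizing_infidelity(20) / depolarizing_infidelity(10)
print(r_coh, r_dep)
```

Fitting these two scalings to measured decay curves is what lets a unitary-error floor be distinguished from decoherence.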
Physical implementations of qubits can be extremely sensitive to environmental coupling, which can result in decoherence. While efforts are made to protect qubits from their environment, some coupling is necessary to measure and manipulate the state of the qubit. As such, the goal of having long qubit energy relaxation times is in competition with that of achieving high-fidelity qubit control and measurement. Here we propose a method that integrates filtering techniques for preserving superconducting qubit lifetimes with the dispersive coupling of the qubit to a microwave resonator for control and measurement. The result is a compact circuit that protects qubits from spontaneous loss to the environment, while also retaining the ability to perform fast, high-fidelity readout. Importantly, we show the device operates in a regime that is attainable with current experimental parameters and provide a specific example for superconducting qubits in circuit quantum electrodynamics.
Using a circuit QED device, we demonstrate a simple qubit measurement pulse shape that yields fast ring-up and ring-down of the readout resonator regardless of the qubit state. The pulse differs from a square pulse only by the inclusion of additional constant-amplitude segments designed to effect a rapid transition from one steady-state population to another. Using a Ramsey experiment performed shortly after the measurement pulse to quantify the residual population, we find that, compared to a square pulse followed by a delay, this pulse shape reduces the timescale for cavity ring-down by more than twice the cavity time constant. At low drive powers, this performance is achieved using pulse parameters calculated from a linear cavity model; at higher powers, empirical optimization of the pulse parameters leads to similar performance.
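A minimal single-segment version of this idea can be sketched for a resonant linear cavity (an illustration under simplifying assumptions: zero detuning and a single qubit state, not the experiment's two-segment pulse): with field equation da/dt = −(κ/2)a + ε(t), a constant counter-drive amplitude can be chosen in closed form so that the field reaches exactly zero after the segment.

```python
import numpy as np

# Linear, resonant cavity model: da/dt = -(kappa/2) a + eps(t).
# All parameters are illustrative (kappa = 1 sets the time unit).
kappa, eps0 = 1.0, 1.0
a0 = 2 * eps0 / kappa          # steady-state field under the square pulse
tau = 0.5                      # duration of the ring-down segment

# Choose the counter-drive amplitude so that a(tau) = 0 exactly:
# a(tau) = a0*d + (2*eps1/kappa)*(1 - d), with d = exp(-kappa*tau/2).
d = np.exp(-kappa * tau / 2)
eps1 = -(kappa / 2) * a0 * d / (1 - d)

# Verify by direct Euler integration of the field equation.
a, dt = a0, 1e-5
for _ in range(int(tau / dt)):
    a += dt * (-(kappa / 2) * a + eps1)

passive = a0 * d               # field left after free decay for the same time
print(abs(a), passive)
```

The shaped segment empties the cavity in a fixed time τ, whereas free decay over the same interval leaves a substantial residual field; the experiment's pulse generalizes this to the two qubit-state-dependent resonator responses.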
The resonator-induced phase gate is a multi-qubit controlled-phase gate for fixed-frequency superconducting qubits. Through off-resonant driving of a bus resonator, statically coupled qubits acquire a state-dependent phase. However, photon loss leads to dephasing during the gate, and any residual entanglement between the resonator and qubits after the gate leads to decoherence. Here we consider how to shape the drive pulse to minimize these unwanted effects. First, we review how the gate's entangling and dephasing rates depend on the system parameters and validate closed-form solutions against direct numerical solution of a master equation. Next, we propose spline pulse shapes that reduce residual qubit-bus entanglement, are robust to imprecise knowledge of the resonator shift, and can be shortened by using higher-degree polynomials. Finally, we present a procedure that optimizes over the subspace of pulses that leave the resonator unpopulated, which finds shaped drive pulses that further reduce the gate duration. Assuming realistic parameters, we exhibit shaped pulses that have the potential to realize ~212 ns spline pulse gates and ~120 ns optimized gates with ~6 x 10^-4 average gate infidelity. These examples do not represent fundamental limits of the gate, and in principle even shorter gates may be achievable.
High-fidelity measurements are important for the physical implementation of quantum information protocols. Current methods for classifying measurement trajectories in superconducting qubit systems produce fidelities that are systematically lower than those predicted by experimental parameters. Here, we place current classification methods within the framework of machine learning algorithms and improve on them by investigating more sophisticated machine learning approaches. We find that non-linear algorithms and clustering methods produce significantly higher assignment fidelities that help close the gap to the fidelity achievable under ideal noise conditions. Clustering methods group trajectories into natural subsets within the data, which allows for the diagnosis of specific systematic errors. We find large clusters in the data associated with relaxation processes and show these are the main source of discrepancy between our experimental and achievable fidelities. These error diagnosis techniques help provide a concrete path forward to improve qubit measurements.
Quantum codes excel at correcting local noise but fail to correct leakage faults that excite qubits to states outside the computational space. Aliferis and Terhal have shown that an accuracy threshold exists for leakage faults using gadgets called leakage reduction units (LRUs). However, these gadgets reduce the accuracy threshold and can increase overhead and experimental complexity, and these costs have not been thoroughly understood. Our work explores a variety of techniques for leakage-resilient, fault-tolerant error correction in the context of topological codes. Our contributions are threefold. First, we develop a leakage model that differs in critical details from earlier models. Second, we use Monte Carlo simulations to survey several syndrome extraction circuits. Third, given the capability to perform three-outcome measurements, we present a dramatically improved syndrome processing algorithm. Our simulation results show that simple circuits with one extra CNOT per qubit and no additional ancillas reduce the accuracy threshold by less than a factor of 4 when leakage and depolarizing noise rates are comparable. This factor improves to 2 when the decoder uses three-outcome measurements. Finally, when the physical error rate is less than 2 x 10^-4, placing LRUs after every gate may achieve the lowest logical error rates of all the circuits we considered. We expect the closely related planar and rotated codes to exhibit the same accuracy thresholds, and that the ideas may generalize naturally to other topological codes.
To build a fault-tolerant quantum computer, it is necessary to implement a quantum error correcting code. Such codes rely on the ability to extract information about the quantum error syndrome while not destroying the quantum information encoded in the system. Stabilizer codes are attractive solutions to this problem, as they are analogous to classical linear codes, have simple and easily computed encoding networks, and allow efficient syndrome extraction. In these codes, syndrome extraction is performed via multi-qubit stabilizer measurements, which are bit and phase parity checks up to local operations. Previously, stabilizer codes have been realized in nuclei, trapped ions, and superconducting qubits. However, these implementations lack the ability to perform fault-tolerant syndrome extraction, which continues to be a challenge for all physical quantum computing systems. Here we experimentally demonstrate a key step towards this goal by using a two-by-two lattice of superconducting qubits to perform syndrome extraction and arbitrary error detection via simultaneous quantum non-demolition stabilizer measurements. This lattice represents a primitive tile for the surface code, which is a promising stabilizer code for scalable quantum computing. Furthermore, we successfully show the preservation of an entangled state in the presence of an arbitrary applied error through high-fidelity syndrome measurement. Our results bolster the promise of employing lattices of superconducting qubits for larger-scale fault-tolerant quantum computing.
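The core operation here, a quantum non-demolition parity check that preserves an entangled state, can be sketched with a three-qubit statevector simulation (a generic textbook ZZ-check circuit, not the experiment's pulse-level implementation): two CNOTs copy the joint parity of the data qubits onto an ancilla, which always reads 0 for an even-parity Bell state, leaving that state intact.

```python
import numpy as np

# Qubit ordering: data d1, data d2, ancilla a; basis index = 4*d1 + 2*d2 + a.
def cnot(psi, control, target, n=3):
    """Apply CNOT via basis-index bit manipulation (big-endian qubit order)."""
    out = psi.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:
            out[i] = psi[i ^ (1 << (n - 1 - target))]
    return out

# Even-parity Bell state on the data qubits, ancilla in |0>.
psi = np.zeros(8, dtype=complex)
psi[0b000] = psi[0b110] = 1 / np.sqrt(2)

# ZZ parity check: CNOT(d1 -> a), CNOT(d2 -> a) write d1 XOR d2 onto the ancilla.
psi = cnot(psi, 0, 2)
psi = cnot(psi, 1, 2)

# Probability that the ancilla reads 0 (even parity).
p_even = sum(abs(psi[i]) ** 2 for i in range(8) if i % 2 == 0)
print(p_even)

# The data-qubit state is untouched: amplitudes remain on |00> and |11>.
print(abs(psi[0b000]) ** 2, abs(psi[0b110]) ** 2)
```

Because the Bell state is an eigenstate of ZZ, the measurement outcome is deterministic and the state survives the check; an applied bit-flip error would instead flip the ancilla outcome, which is the error-detection signal.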
We analyze the Purcell relaxation rate of a superconducting qubit coupled to a resonator, which is coupled to a transmission line and pumped by an external microwave drive. Considering the typical regime of the qubit measurement, we focus on the case when the qubit frequency is significantly detuned from the resonator frequency. Surprisingly, the Purcell rate decreases when the strength of the microwave drive is increased. This suppression becomes significant in the nonlinear regime. In the presence of the microwave drive, the loss of photons to the transmission line also causes excitation of the qubit; however, the excitation rate is typically much smaller than the relaxation rate. Our analysis also applies to the more general case of a two-level quantum system coupled to a cavity.
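For reference, the standard undriven dispersive-limit expression that this drive-dependent suppression modifies is (with g the qubit-resonator coupling, κ the resonator linewidth, and Δ the qubit-resonator detuning):

```latex
\Gamma_{\mathrm{Purcell}} \simeq \kappa \,\frac{g^{2}}{\Delta^{2}},
\qquad \Delta = \omega_{q} - \omega_{r}, \quad |\Delta| \gg g .
```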
Quantum error correction (QEC) is an essential step towards realising scalable quantum computers. Theoretically, it is possible to achieve arbitrarily long protection of quantum information from corruption due to decoherence or imperfect controls, so long as the error rate is below a threshold value. The two-dimensional surface code (SC) is a fault-tolerant error correction protocol that has garnered considerable attention for actual physical implementations, due to its relatively high error threshold of ~1% and its restriction to planar lattices with nearest-neighbour interactions. Here we show a necessary element for SC error correction: high-fidelity parity detection of two code qubits via measurement of a third syndrome qubit. The experiment is performed on a sub-section of the SC lattice with three superconducting transmon qubits, in which two independent outer code qubits are joined to a central syndrome qubit via two linking bus resonators. With all-microwave high-fidelity single- and two-qubit nearest-neighbour entangling gates, we demonstrate entanglement distributed across the entire sub-section by generating a three-qubit Greenberger-Horne-Zeilinger (GHZ) state with fidelity ~94%. Then, via high-fidelity measurement of the syndrome qubit, we deterministically entangle the otherwise un-coupled outer code qubits, in either an even or odd parity Bell state, conditioned on the syndrome state. Finally, to fully characterize this parity readout, we develop a new measurement tomography protocol to obtain a fidelity metric (90% and 91%). Our results reveal a straightforward path for expanding superconducting circuits towards larger networks for the SC and eventually a primitive logical qubit implementation.
We present methods and results of shot-by-shot correlation of noisy measurements to extract entangled state and process tomography in a superconducting qubit architecture. We show that averaging continuous values, rather than counting discrete thresholded values, is a valid tomographic strategy and is in fact the better choice in the low signal-to-noise regime. We show that the effort to measure N-body correlations from individual measurements scales exponentially with N, but with sufficient signal-to-noise the approach remains viable for few-body correlations. We provide a new protocol to optimally account for the transient behavior of pulsed measurements. Despite imperfect single-shot measurement fidelity, we demonstrate appropriate processing to extract and verify entangled states and processes.
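The claim that averaging continuous values beats thresholding at low signal-to-noise can be illustrated with a toy simulation (hypothetical numbers: true ⟨Z⟩ = 0.5, ±1 outcomes buried in Gaussian noise with σ = 2). Thresholding overlapping distributions biases the estimate toward zero by a factor 2Φ(1/σ) − 1, while the continuous average remains unbiased:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-shot record: outcome +1 or -1 (true <Z> = 0.5) plus Gaussian
# noise with sigma = 2, i.e. a deeply overlapped, low-SNR regime.
n, sigma, p_plus = 200_000, 2.0, 0.75          # <Z> = 2*p_plus - 1 = 0.5
z = np.where(rng.random(n) < p_plus, 1.0, -1.0)
shots = z + rng.normal(0, sigma, n)

# Strategy 1: average the continuous values directly (unbiased).
est_continuous = shots.mean()

# Strategy 2: threshold each shot at 0, then count (biased toward 0,
# since each label is wrong with probability 1 - Phi(1/sigma)).
est_threshold = 2 * (shots > 0).mean() - 1

print(est_continuous, est_threshold)
```

The continuous average lands near the true value of 0.5, while the thresholded estimate is pulled strongly toward zero; calibrating out the threshold bias is possible when the noise model is known, but averaging avoids the issue entirely.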