We present and demonstrate a general 3-step method for extracting the quantum efficiency of dispersive qubit readout in circuit QED. We use active depletion of post-measurement photons and optimal integration weight functions on two quadratures to maximize the signal-to-noise ratio of non-steady-state homodyne measurement. We derive analytically and demonstrate experimentally that the method robustly extracts the quantum efficiency for arbitrary readout conditions in the linear regime. We use the method to optimally bias a Josephson traveling-wave parametric amplifier and to quantify the different noise contributions in the readout amplification chain.
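The optimal integration weights on the two quadratures can be sketched with a standard matched filter: the weight at each time point is the difference between the averaged transients for the qubit in |0> and |1>. The sketch below is a minimal toy model, not the experimental pipeline; the transient shapes, noise level, and array shapes are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_shots = 200, 2000

# Toy averaged homodyne transients (I and Q quadratures) for the qubit
# prepared in |0> and |1>. The shapes here are purely illustrative.
t = np.linspace(0, 1, n_t)
mean0 = np.stack([0.0 * t, 0.0 * t])                          # (2, n_t) for |0>
mean1 = np.stack([np.sin(np.pi * t), 1 - np.cos(np.pi * t)])  # (2, n_t) for |1>

# Matched-filter weights: the difference of the averaged transients,
# defined independently on each quadrature.
weights = mean1 - mean0

def integrate(shots, weights):
    """Weighted integration of (n_shots, 2, n_t) records into one value per shot."""
    return np.einsum('sqt,qt->s', shots, weights)

# Simulated single-shot records: mean transient plus white noise.
shots0 = mean0 + rng.normal(0.0, 1.0, size=(n_shots, 2, n_t))
shots1 = mean1 + rng.normal(0.0, 1.0, size=(n_shots, 2, n_t))

s0, s1 = integrate(shots0, weights), integrate(shots1, weights)
snr = abs(s1.mean() - s0.mean()) / np.sqrt(0.5 * (s0.var() + s1.var()))
```

For white noise these weights maximize the separation between the two integrated distributions; the quantum efficiency is then extracted by comparing the achieved signal-to-noise ratio with the measurement-induced dephasing of the qubit, as described in the abstract.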
We present a scalable scheme for executing the error-correction cycle of a monolithic surface-code fabric composed of fast-flux-tuneable transmon qubits with nearest-neighbor coupling. An eight-qubit unit cell forms the basis for repeating both the quantum hardware and coherent control, enabling spatial multiplexing. This control uses three fixed frequencies for all single-qubit gates and a unique frequency detuning pattern for each qubit in the cell. By pipelining the interaction and readout steps of ancilla-based X- and Z-type stabilizer measurements, we can engineer detuning patterns that avoid all second-order transmon-transmon interactions except those exploited in controlled-phase gates, regardless of fabric size. Our scheme is applicable to defect-based and planar logical qubits, including lattice surgery.
A critical ingredient for realizing large-scale quantum information processors will be the ability to make economical use of qubit control hardware. We demonstrate an extensible strategy for reusing control hardware on same-frequency transmon qubits in a circuit QED chip with surface-code-compatible connectivity. A vector switch matrix enables selective broadcasting of input pulses to multiple transmons with individual tailoring of pulse quadratures for each, as required to minimize the effects of leakage on weakly anharmonic qubits. Using randomized benchmarking, we compare multiple broadcasting strategies that each pass the surface-code error threshold for single-qubit gates. In particular, we introduce a selective-broadcasting control strategy using five pulse primitives, which allows independent, simultaneous Clifford gates on arbitrary numbers of qubits.
Quantum data is susceptible to decoherence induced by the environment and to errors in the hardware processing it. A future fault-tolerant quantum computer will use quantum error correction (QEC) to actively protect against both. In the smallest QEC codes, the information in one logical qubit is encoded in a two-dimensional subspace of a larger Hilbert space of multiple physical qubits. For each code, a set of non-demolition multi-qubit measurements, termed stabilizers, can discretize and signal physical qubit errors without collapsing the encoded information. Experimental demonstrations of QEC to date, using nuclear magnetic resonance, trapped ions, photons, superconducting qubits, and NV centers in diamond, have circumvented stabilizers at the cost of decoding at the end of a QEC cycle. This decoding leaves the quantum information vulnerable to physical qubit errors until re-encoding, violating a basic requirement for fault tolerance. Using a five-qubit superconducting processor, we realize the two parity measurements comprising the stabilizers of the three-qubit repetition code protecting one logical qubit from physical bit-flip errors. We construct these stabilizers as parallelized indirect measurements using ancillary qubits, and evidence their non-demolition character by generating three-qubit entanglement from superposition states. We demonstrate stabilizer-based quantum error detection (QED) by subjecting a logical qubit to coherent and incoherent bit-flip errors on its constituent physical qubits. While increased physical qubit coherence times and shorter QED blocks are required to actively safeguard quantum information, this demonstration is a critical step toward larger codes based on multiple parity measurements.
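The error-discretizing role of the two parity measurements can be illustrated classically: the stabilizers of the three-qubit repetition code are Z0Z1 and Z1Z2, and each single bit-flip on a data qubit produces a distinct pair of parity outcomes without revealing the encoded logical state. The following minimal sketch (function name and bit-flip encoding are hypothetical) shows that syndrome map.

```python
# Stabilizers of the bit-flip repetition code on data qubits (D0, D1, D2):
# S1 = Z0 Z1 and S2 = Z1 Z2. Their +/-1 eigenvalues signal bit-flip errors
# without distinguishing the logical |0_L> and |1_L> states.
def syndrome(bitflips):
    """Map a pattern of physical bit-flips, e.g. [1, 0, 0], to (S1, S2) outcomes."""
    s1 = (-1) ** (bitflips[0] ^ bitflips[1])
    s2 = (-1) ** (bitflips[1] ^ bitflips[2])
    return s1, s2

# Each single bit-flip yields a unique syndrome; no error yields (+1, +1).
for flips, expected in [([0, 0, 0], (1, 1)), ([1, 0, 0], (-1, 1)),
                        ([0, 1, 0], (-1, -1)), ([0, 0, 1], (1, -1))]:
    assert syndrome(flips) == expected
```

Note that flipping all three qubits also returns (+1, +1): the code detects single bit-flips but, like any distance-3 repetition code, cannot detect the logical bit-flip itself.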
Quantum process tomography is a necessary tool for verifying quantum gates
and diagnosing faults in architectures and gate design. We show that the
standard approach of process tomography is grossly inaccurate in the case where
the states and measurement operators used to interrogate the system are
generated by gates that have some systematic error, a situation all but
unavoidable in any practical setting. These errors in tomography cannot be
fully corrected through oversampling or by performing a larger set of
experiments. We present an alternative method for tomography to reconstruct an
entire library of gates in a self-consistent manner. The essential ingredient
is to define a likelihood function that assumes nothing about the gates used
for preparation and measurement. In order to make the resulting optimization
tractable we linearize about the target, a reasonable approximation when
benchmarking a quantum computer as opposed to probing a black-box function.
The control and handling of errors arising from cross-talk and unwanted
interactions in multi-qubit systems is an important issue in quantum
information processing architectures. We introduce a benchmarking protocol that
provides information about the amount of addressability present in the system
and implement it on coupled superconducting qubits. The protocol consists of
performing randomized benchmarking on each qubit individually and then simultaneously, and
the amount of addressability is related to the difference of the average gate
fidelities of those experiments. We present the results on two similar samples
with different amounts of cross-talk and unwanted interactions, which agree
with predictions based on simple models for the amount of residual coupling.
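The addressability comparison described above reduces to simple arithmetic once the two randomized-benchmarking decays are fit: each decay constant maps to an average gate fidelity, and the drop from the individual to the simultaneous experiment quantifies cross-talk. The numbers below are hypothetical placeholders, not values from the paper.

```python
# Addressability estimate from individual vs. simultaneous randomized
# benchmarking. The RB decay constants here are illustrative only.
def avg_fidelity(p, d=2):
    """Average gate fidelity from an RB depolarizing parameter p (d=2 for one qubit)."""
    return p + (1 - p) / d

p_individual, p_simultaneous = 0.995, 0.990   # hypothetical RB decay constants
f_ind = avg_fidelity(p_individual)
f_sim = avg_fidelity(p_simultaneous)
delta = f_ind - f_sim   # nonzero delta signals cross-talk / reduced addressability
```

A vanishing `delta` indicates that driving the neighboring qubit does not degrade the gate, i.e. full addressability; a larger `delta` on one sample than another is consistent with stronger residual coupling, as the abstract reports.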