Crosstalk is a leading source of failure in multiqubit quantum information processors. It can arise from a wide range of disparate physical phenomena, and can introduce subtle correlations in the errors experienced by a device. Several hardware characterization protocols are able to detect the presence of crosstalk, but few provide sufficient information to distinguish various crosstalk errors from one another. In this article we describe how gate set tomography, a protocol for detailed characterization of quantum operations, can be used to identify and characterize crosstalk errors in quantum information processors. We demonstrate our methods on a two-qubit trapped-ion processor and a two-qubit subsystem of a superconducting transmon processor.
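As a toy illustration of what "detecting crosstalk" means operationally (a much simpler, model-free check than the gate set tomography analysis described above), one can test whether qubit A's outcome statistics depend on what qubit B is doing. All counts below are hypothetical placeholders:

```python
# Minimal, model-free crosstalk check (not the GST-based analysis of the paper):
# compare the outcome distribution of a fixed circuit on qubit A when qubit B is
# idle versus when qubit B is driven. A statistically significant shift is
# evidence of crosstalk. The counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

counts_B_idle   = np.array([4890, 110])   # qubit A outcomes (0, 1), qubit B idle
counts_B_driven = np.array([4710, 290])   # same circuit, qubit B driven

table = np.vstack([counts_B_idle, counts_B_driven])
chi2, p_value, dof, _ = chi2_contingency(table)

tvd = 0.5 * np.abs(counts_B_idle / counts_B_idle.sum()
                   - counts_B_driven / counts_B_driven.sum()).sum()
print(f"TVD between contexts: {tvd:.4f}, chi^2 p-value: {p_value:.2e}")
# A small p-value flags context dependence; gate set tomography goes further by
# reconstructing the process matrices and attributing the error to specific gates.
```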
As the number of qubits in nascent quantum processing units increases, the connectorized RF (radio frequency) analog circuits used in first-generation experiments become exceedingly complex. The physical size, cost, and electrical failure rate all become limiting factors in the extensibility of control systems. We have developed a series of compact RF mixing boards to address this challenge by integrating I/Q quadrature mixing, IF (intermediate frequency)/LO (local oscillator)/RF power level adjustments, and DC (direct current) bias fine tuning on a 40 mm × 80 mm 4-layer PCB (printed circuit board) with EMI (electromagnetic interference) shielding. The RF mixing module is designed to work with RF and LO frequencies between 2.5 and 8.5 GHz. The typical image rejection and adjacent channel isolation are measured to be ∼27 dBc and ∼50 dB. By scanning the drive phase in a loopback test, the module's short-term amplitude and phase stability are typically measured to be 5×10⁻⁴ (Vpp/Vmean) and 1×10⁻³ radian (pk-pk). The operation of the RF mixing board was validated by integrating it into the room-temperature control system of a superconducting quantum processor and executing randomized benchmarking characterization of single- and two-qubit gates. We measured a single-qubit process infidelity of 0.0020±0.0001 and a two-qubit process infidelity of 0.052±0.004.
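For context, the image rejection of an I/Q mixer is set by the amplitude and phase imbalance between the I and Q paths. The sketch below uses the standard single-sideband suppression formula; the example imbalance values are assumptions chosen only to land near the ∼27 dBc figure quoted above, not measured properties of the board:

```python
# Back-of-the-envelope estimate (not from the paper) of I/Q mixer image rejection
# from amplitude and quadrature-phase imbalance.
import numpy as np

def image_rejection_dBc(amp_ratio, phase_err_rad):
    """Image-to-signal power ratio for an I/Q mixer with gain ratio `amp_ratio`
    (Q relative to I) and quadrature phase error `phase_err_rad`."""
    num = amp_ratio**2 - 2 * amp_ratio * np.cos(phase_err_rad) + 1
    den = amp_ratio**2 + 2 * amp_ratio * np.cos(phase_err_rad) + 1
    return 10 * np.log10(num / den)   # negative number, in dBc

# Assumed imbalances: 2% amplitude mismatch and ~5 degrees of phase error.
print(image_rejection_dBc(1.02, np.deg2rad(5.0)))   # ≈ -27 dBc
```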
As quantum information processors grow in quantum bit (qubit) count and functionality, the control and measurement system becomes a limiting factor to large-scale extensibility. To tackle this challenge and keep pace with rapidly evolving classical control requirements, full control-stack access is essential to system-level optimization. We design a modular FPGA (field-programmable gate array) based system called QubiC to control and measure a superconducting quantum processing unit. The system includes room-temperature electronics hardware, FPGA gateware, and engineering software. A prototype hardware module is assembled from several commercial off-the-shelf evaluation boards and in-house developed circuit boards. Gateware and software are designed to implement basic qubit control and measurement protocols. System functionality and performance are demonstrated by performing qubit chip characterization, gate optimization, and randomized benchmarking sequences on a superconducting quantum processor operating at the Advanced Quantum Testbed at Lawrence Berkeley National Laboratory. The single-qubit and two-qubit process fidelities are measured to be 0.9980±0.0001 and 0.948±0.004 by randomized benchmarking. With fast circuit sequence loading capability, QubiC performs randomized compiling experiments efficiently and improves the feasibility of executing more complex algorithms.
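To illustrate how such process fidelities are extracted from randomized benchmarking data, the sketch below fits the standard exponential decay A·pᵐ + B and converts the depolarizing parameter p to a process (entanglement) infidelity via (1 − p)(d² − 1)/d². This is a generic illustration, not QubiC code, and the sequence lengths and survival probabilities are made-up numbers:

```python
# Illustrative RB analysis: fit the survival-probability decay and convert the
# depolarizing parameter to a process infidelity (single-qubit case, d = 2).
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, B, p):
    return A * p**m + B

# Hypothetical sequence lengths and average survival probabilities.
lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
survival = np.array([0.996, 0.992, 0.984, 0.969, 0.940, 0.887, 0.799, 0.679])

(A, B, p), _ = curve_fit(rb_decay, lengths, survival,
                         p0=[0.5, 0.5, 0.99], maxfev=10000)

d = 2                                           # Hilbert-space dimension
proc_infidelity = (1 - p) * (d**2 - 1) / d**2   # process (entanglement) infidelity
print(f"p = {p:.5f}, process infidelity ≈ {proc_infidelity:.4f}")
```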
Quantum sensing and computation can be realized with superconducting microwave circuits. Qubits are engineered quantum systems of capacitors and inductors with non-linear Josephson junctions. They operate in the single-excitation quantum regime, with photons of 27 μeV at 6.5 GHz. Quantum coherence is fundamentally limited by materials defects, in particular atomic-scale parasitic two-level systems (TLS) in amorphous dielectrics at circuit interfaces [1]. The electric fields driving oscillating charges in quantum circuits resonantly couple to TLS, producing phase noise and dissipation. We use coplanar niobium-on-silicon superconducting resonators to probe decoherence in quantum circuits. By selectively modifying interface dielectrics, we show that most TLS losses come from the silicon surface oxide, and most non-TLS losses are distributed throughout the niobium surface oxide. Through post-fabrication interface modification we reduced TLS losses by 85% and non-TLS losses by 72%, obtaining record single-photon resonator quality factors above 5 million and approaching a regime where non-TLS losses are dominant.
[1] Müller, C., Cole, J. H. & Lisenfeld, J. Towards understanding two-level-systems in amorphous solids: insights from quantum circuits. Rep. Prog. Phys. 82, 124501 (2019).
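Two quick numerical checks of the resonator abstract above: the single-photon energy at 6.5 GHz, and a toy loss budget 1/Q = δ_TLS + δ_other showing how the reported 85% / 72% loss reductions translate into quality factor. The starting loss values are hypothetical, chosen only for illustration:

```python
# Photon-energy check and a toy internal-loss budget for a superconducting
# resonator (hypothetical pre-treatment loss tangents).
import scipy.constants as sc

f = 6.5e9                                   # resonator frequency (Hz)
E_ueV = sc.h * f / sc.e * 1e6               # photon energy in micro-eV
print(f"Photon energy at 6.5 GHz: {E_ueV:.1f} ueV")      # ≈ 26.9 ueV

delta_TLS, delta_other = 8e-7, 3e-7         # assumed single-photon loss tangents
Q_before = 1 / (delta_TLS + delta_other)
Q_after = 1 / (0.15 * delta_TLS + 0.28 * delta_other)    # 85% / 72% reductions
print(f"Q_i before: {Q_before:.2e}, after: {Q_after:.2e}")   # ~9e5 -> ~5e6
```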
The successful implementation of algorithms on quantum processors relies on the accurate control of quantum bits (qubits) to perform logic gate operations. In this era of noisy intermediate-scale quantum (NISQ) computing, systematic miscalibrations, drift, and crosstalk in the control of qubits can lead to a coherent form of error which has no classical analog. Coherent errors severely limit the performance of quantum algorithms in an unpredictable manner, and mitigating their impact is necessary for realizing reliable quantum computations. Moreover, the average error rates measured by randomized benchmarking and related protocols are not sensitive to the full impact of coherent errors, and therefore do not reliably predict the global performance of quantum algorithms, leaving us unprepared to validate the accuracy of future large-scale quantum computations. Randomized compiling is a protocol designed to overcome these performance limitations by converting coherent errors into stochastic noise, dramatically reducing unpredictable errors in quantum algorithms and enabling accurate predictions of algorithmic performance from error rates measured via cycle benchmarking. In this work, we demonstrate significant performance gains under randomized compiling for the four-qubit quantum Fourier transform algorithm and for random circuits of variable depth on a superconducting quantum processor. Additionally, we accurately predict algorithm performance using experimentally measured error rates. Our results demonstrate that randomized compiling can be utilized to maximally leverage and predict the capabilities of modern-day noisy quantum processors, paving the way forward for scalable quantum computing.
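The mechanism by which randomized compiling converts coherent errors into stochastic noise can be checked numerically in a few lines. The sketch below twirls a single-qubit coherent Z over-rotation over the Pauli group and verifies that the averaged channel is a dephasing (Pauli) channel with p_Z = sin²(ε/2). This illustrates only the twirling step, not the full protocol, which also compiles the random Paulis into neighboring easy-gate cycles:

```python
# Numerical check: Pauli-twirling a coherent Z over-rotation yields a stochastic
# dephasing channel (single-qubit illustration, not the full randomized
# compiling protocol).
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

eps = 0.1                                             # coherent over-rotation angle
U = np.cos(eps / 2) * I2 - 1j * np.sin(eps / 2) * Z   # error unitary exp(-i*eps*Z/2)

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+| test state

# Pauli twirl of the error channel: average P^dag U P rho P^dag U^dag P over P.
twirled = sum(P.conj().T @ U @ P @ rho @ P.conj().T @ U.conj().T @ P
              for P in (I2, X, Y, Z)) / 4

# Expected result: dephasing channel with p_Z = sin^2(eps/2).
pZ = np.sin(eps / 2) ** 2
dephased = (1 - pZ) * rho + pZ * Z @ rho @ Z
print(np.allclose(twirled, dephased))   # True
```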
Detecting traveling photons is an essential primitive for many quantum information processing tasks. We introduce a single-photon detector design operating in the microwave domain, based on a weakly nonlinear metamaterial where the nonlinearity is provided by a large number of Josephson junctions. The combination of weak nonlinearity and large spatial extent circumvents well-known obstacles limiting approaches based on a localized Kerr medium. Using numerical many-body simulations we show that the single-photon detection fidelity increases with the length of the metamaterial to approach one at experimentally realistic lengths. A remarkable feature of the detector is that the metamaterial approach allows for a large detection bandwidth. In stark contrast to conventional photon detectors operating in the optical domain, the photon is not destroyed by the detection and the photon wavepacket is minimally disturbed. The detector design we introduce offers new possibilities for quantum information processing, quantum optics and metrology in the microwave frequency domain.
Much of modern metrology and communication technology encodes information in electromagnetic waves, typically as an amplitude or phase. While current hardware can perform near-ideal measurements of photon number or field amplitude, to date no device exists that can even in principle perform an ideal phase measurement. In this work, we implement a single-shot canonical phase measurement on a one-photon wave packet, which surpasses the current standard of heterodyne detection and is optimal for single-shot phase estimation. By applying quantum feedback to a Josephson parametric amplifier, our system adaptively changes its measurement basis during photon arrival and allows us to validate the detector’s performance by tracking the quantum state of the photon source. These results provide an important capability for optical quantum computing, and demonstrate that quantum feedback can both enhance the precision of a detector and enable it to measure new classes of physical observables.
At its core, quantum mechanics is a theory developed to describe fundamental observations in the spectroscopy of solids and gases. Despite these practical roots, however, quantum theory is infamous for being highly counterintuitive, largely due to its intrinsically probabilistic nature. Neural networks have recently emerged as a powerful tool that can extract non-trivial correlations in vast datasets. They routinely outperform state-of-the-art techniques in language translation, medical diagnosis and image recognition. It remains to be seen if neural networks can be trained to predict stochastic quantum evolution without a priori specifying the rules of quantum theory. Here, we demonstrate that a recurrent neural network can be trained in real time to infer the individual quantum trajectories associated with the evolution of a superconducting qubit under unitary evolution, decoherence and continuous measurement from raw observations only. The network extracts the system Hamiltonian, measurement operators and physical parameters. It is also able to perform tomography of an unknown initial state without any prior calibration. This method has the potential to greatly simplify and enhance tasks in quantum systems such as noise characterization, parameter estimation, feedback and optimization of quantum control.
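To make the setup concrete, a recurrent network for this kind of task might map a raw homodyne record to a Bloch-vector trajectory and be trained against final projective readouts. The sketch below is not the authors' architecture; the GRU size, training signal, and all data shapes are illustrative assumptions:

```python
# Hedged sketch of an RNN for quantum-trajectory inference: a GRU reads a
# measurement record and outputs a Bloch vector per time step; training targets
# are final projective readouts along a tomography axis. All data are placeholders.
import torch
import torch.nn as nn

class TrajectoryRNN(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)         # (x, y, z) Bloch components

    def forward(self, records):                  # records: (batch, time, 1)
        h, _ = self.gru(records)
        return torch.tanh(self.head(h))          # components bounded in (-1, 1)

model = TrajectoryRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder batch: 64 records of 200 time steps, a random tomography axis per
# record, and a binary final readout outcome (1 -> +1 eigenstate, 0 -> -1).
records = torch.randn(64, 200, 1)
axes = torch.nn.functional.normalize(torch.randn(64, 3), dim=1)
outcomes = torch.randint(0, 2, (64,)).float()

opt.zero_grad()
bloch_final = model(records)[:, -1, :]                       # Bloch vector at t_final
p_plus = 0.5 * (1 + (bloch_final * axes).sum(dim=1))         # Born-rule probability
loss = nn.functional.binary_cross_entropy(p_plus.clamp(1e-6, 1 - 1e-6), outcomes)
loss.backward()
opt.step()
```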
We consider the effect of phase backaction on the correlator ⟨I(t)I(t+τ)⟩ for the output signal I(t) from continuous measurement of a qubit. We demonstrate that the interplay between informational and phase backactions in the presence of Rabi oscillations can lead to the correlator becoming larger than 1, even though |⟨I⟩|≤1. The correlators can be calculated using the generalized "collapse recipe", which we validate using the quantum Bayesian formalism. The recipe can be further generalized to the case of multi-time correlators and an arbitrary number of detectors measuring non-commuting qubit observables. The theory agrees well with experimental results for continuous measurement of a transmon qubit. The experimental correlator exceeds the bound of 1 for a sufficiently large angle between the amplified and informational quadratures, which causes the phase backaction. The demonstrated effect can be used to calibrate the quadrature misalignment.
We consider multi-time correlators for output signals from linear detectors, continuously measuring several qubit observables at the same time. Using the quantum Bayesian formalism, we show that for unital (symmetric) evolution in the absence of phase backaction, an N-time correlator can be expressed as a product of two-time correlators when N is even. For odd N, there is a similar factorization, which also includes a single-time average. Theoretical predictions agree well with experimental results for two detectors, which simultaneously measure non-commuting qubit observables.
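The stated factorization can be written out explicitly. The pairing of neighboring, time-ordered arguments below is our reading of the abstract and should be checked against the paper's conventions:

```latex
% Explicit form of the factorization stated above, assuming time-ordered
% arguments t_1 \le t_2 \le \dots \le t_N and pairing of neighboring times.
\begin{align*}
  K_N(t_1,\dots,t_N) &\equiv \big\langle I(t_1)\, I(t_2) \cdots I(t_N) \big\rangle, \\
  N~\text{even:}\quad K_N &= K_2(t_1,t_2)\, K_2(t_3,t_4) \cdots K_2(t_{N-1},t_N), \\
  N~\text{odd:}\quad  K_N &= \langle I(t_1) \rangle\, K_2(t_2,t_3) \cdots K_2(t_{N-1},t_N).
\end{align*}
```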