In the gate model of quantum computing, a program is typically decomposed into a sequence of 1- and 2-qubit gates that are realized as control pulses acting on the system. A key requirement for a scalable control system is that the qubits are addressable – that control pulses act only on the targeted qubits. The presence of control crosstalk makes this addressability requirement difficult to meet. In order to provide metrics that can drive requirements for decreasing crosstalk, we present three measurements that directly quantify the DC and AC flux crosstalk present between tunable transmons, with sensitivities as fine as 0.001%. We develop the theory to connect AC flux crosstalk measures to the infidelity of a parametrically activated two-qubit gate. We employ quantum process tomography in the presence of crosstalk to provide an empirical study of the effects of crosstalk on two-qubit gate fidelity.
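The kind of DC flux crosstalk characterization described above can be illustrated with a small simulation: flux at each SQUID is modeled as a linear map of the applied bias currents, and the off-diagonal slopes of that map, relative to the diagonal, give percent-level crosstalk coefficients. Everything below (the matrix values, noise level, and fitting procedure) is an illustrative assumption, not the measurement protocol or numbers from the work itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed linear model: Phi = M @ I, where off-diagonal elements of M
# (relative to the diagonal) are the DC flux crosstalk coefficients.
M_true = np.array([[1.0, 3e-3],
                   [-1e-3, 1.0]])

def measured_flux(currents):
    # Stand-in for a spectroscopy-based flux estimate with small readout noise.
    return M_true @ currents + 1e-5 * rng.normal(size=2)

# Sweep one bias line at a time and fit the slope of each qubit's flux response.
amps = np.linspace(-1, 1, 21)
M_est = np.zeros((2, 2))
for j in range(2):
    responses = np.array([measured_flux(a * np.eye(2)[j]) for a in amps])
    for i in range(2):
        M_est[i, j] = np.polyfit(amps, responses[:, i], 1)[0]

crosstalk_pct = 100 * M_est[0, 1] / M_est[0, 0]
print(f"estimated crosstalk (qubit 0 from line 1): {crosstalk_pct:.3f} %")
```

With a sweep of 21 bias points and the assumed noise level, the fitted off-diagonal slope resolves crosstalk well below the 1% level, consistent with the idea that sub-0.01% sensitivity is a matter of averaging and sweep range.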
With superconducting transmon qubits — a promising platform for quantum information processing — two-qubit gates can be performed using AC signals to modulate a tunable transmon’s frequency via magnetic flux through its SQUID loop. However, frequency tunability introduces an additional dephasing mechanism from magnetic fluctuations. In this work, we experimentally study the contribution of instrumentation noise to flux instability and the resulting error rate of parametrically activated two-qubit gates. Specifically, we measure the qubit coherence time under flux modulation while injecting broadband noise through the flux control channel. We model the noise’s effect using a dephasing rate model that matches the measured rates well, and use it to prescribe a noise floor required to achieve a desired two-qubit gate infidelity. Finally, we demonstrate that low-pass filtering the AC signal used to drive two-qubit gates between the first and second harmonic frequencies can reduce qubit sensitivity to flux noise at the AC sweet spot (ACSS), confirming an earlier theoretical prediction. The framework we present to determine instrumentation noise floors required for high entangling two-qubit gate fidelity should be extensible to other quantum information processing systems.
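The logic of prescribing a noise floor from a dephasing rate model can be sketched with a back-of-envelope calculation. The relation used here — white flux noise of single-sided PSD S_Φ driving pure dephasing at Γφ ≈ (∂ω/∂Φ)² S_Φ / 2, with a dephasing-limited gate error of roughly 1 − F ≈ Γφ · t_gate — is a generic first-order model, and the flux sensitivity, gate time, and error budget are assumed example numbers, not values from the work above.

```python
import math

def dephasing_rate(dfdphi_hz_per_phi0, s_phi):
    """White-noise pure-dephasing rate (simple first-order model).

    dfdphi_hz_per_phi0: flux sensitivity in Hz per flux quantum.
    s_phi: single-sided flux-noise PSD in Phi0^2/Hz.
    """
    domega = 2 * math.pi * dfdphi_hz_per_phi0   # rad/s per Phi0
    return 0.5 * domega**2 * s_phi              # dephasing rate in 1/s

def max_noise_psd(target_infidelity, t_gate, dfdphi_hz_per_phi0):
    """Invert 1 - F ~ Gamma_phi * t_gate for the allowed flux-noise PSD."""
    gamma_max = target_infidelity / t_gate
    domega = 2 * math.pi * dfdphi_hz_per_phi0
    return 2 * gamma_max / domega**2

# Assumed example: 1 GHz/Phi0 flux sensitivity, 200 ns gate, 0.1% error budget.
s_max = max_noise_psd(1e-3, 200e-9, 1e9)
print(f"allowed flux-noise PSD ~ {s_max:.2e} Phi0^2/Hz")
```

The point of the inversion is the quadratic dependence on flux sensitivity: halving ∂ω/∂Φ (e.g., by operating nearer a sweet spot) relaxes the required noise floor by a factor of four.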
Cross-resonance interactions are a promising way to implement all-microwave two-qubit gates with fixed-frequency qubits. In this work, we study the dependence of the cross-resonance interaction rate on qubit-qubit detuning and compare with a model that includes the higher levels of a transmon system. To carry out this study we employ two transmon qubits – one fixed-frequency and the other flux-tunable – to allow us to vary the detuning between qubits. We find that the interaction closely follows a three-level model of the transmon, thus confirming the presence of an optimal regime for cross-resonance gates.
In state-of-the-art quantum computing platforms, including superconducting qubits and trapped ions, imperfections in the 2-qubit entangling gates are the dominant contributors of error to system-wide performance. Recently, a novel 2-qubit parametric gate was proposed and demonstrated with superconducting transmon qubits. This gate is activated through RF modulation of the transmon frequency and can be operated at an amplitude where the performance is first-order insensitive to flux noise. In this work we experimentally validate the existence of this AC sweet spot and demonstrate its dependence on white noise power from room temperature electronics. With these factors in place, we measure coherence-limited entangling-gate fidelities as high as 99.2 ± 0.15%.
Machine learning techniques have led to broad adoption of a statistical model of computing. The statistical distributions natively available on quantum processors are a superset of those available classically. Harnessing this attribute has the potential to accelerate or otherwise improve machine learning relative to purely classical performance. A key challenge toward that goal is learning to hybridize classical computing resources and traditional learning techniques with the emerging capabilities of general purpose quantum processors. Here, we demonstrate such hybridization by training a 19-qubit gate model processor to solve a clustering problem, a foundational challenge in unsupervised learning. We use the quantum approximate optimization algorithm in conjunction with a gradient-free Bayesian optimization to train the quantum machine. This quantum/classical hybrid algorithm shows robustness to realistic noise, and we find evidence that classical optimization can be used to train around both coherent and incoherent imperfections.
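The hybrid loop described above — a quantum circuit evaluated inside a gradient-free classical optimizer — can be sketched in miniature. The toy below casts clustering as MaxCut on a 4-node graph, simulates a depth-1 QAOA statevector exactly, and trains it with plain random search standing in for the Bayesian optimizer; graph, depth, and optimizer are all illustrative assumptions, not the 19-qubit experiment.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy clustering instance: split a 4-node graph into two clusters (MaxCut).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
dim = 2 ** n

def cut_value(z):
    return sum(z[i] != z[j] for i, j in edges)

# Diagonal cost operator: cut value of every computational basis state.
costs = np.array([cut_value([(x >> k) & 1 for k in range(n)]) for x in range(dim)])

def qaoa_expectation(gamma, beta):
    """Exact p = 1 QAOA statevector simulation for the MaxCut cost above."""
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)   # |+...+> start
    psi = np.exp(-1j * gamma * costs) * psi               # cost unitary (diagonal)
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])   # e^{-i beta X}
    mixer = np.array([[1.0]])
    for _ in range(n):
        mixer = np.kron(mixer, rx)                        # mixer on every qubit
    psi = mixer @ psi
    return float(np.real(psi.conj() @ (costs * psi)))

# Gradient-free training loop: random search as a lightweight stand-in for
# the Bayesian optimizer used on the real processor.
best = max(((qaoa_expectation(g, b), g, b)
            for g, b in rng.uniform(0, np.pi, size=(200, 2))),
           key=lambda t: t[0])
print(f"best <C> = {best[0]:.3f} (max cut = {costs.max()})")
```

The trained expectation value should beat the uniform-superposition baseline of 2.5 (half the edges cut on average), which is the sense in which the classical outer loop "trains" the quantum sampler.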
We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits’ coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal-processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data-taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering-information distribution that is capable of arbitrary control flow within a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of FPGAs to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, and the attainable quantum advantage is modest. Here we solve an oracle-based problem, known as learning parity with noise, using a five-qubit superconducting processor. Running classical and quantum algorithms on the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a quantum advantage already emerges in existing noisy systems.
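The flavor of the query-count gap can be shown with a small simulation. A classical learner must query a noisy parity oracle many times per bit and majority-vote away the label noise, while a (here idealized, noiseless) quantum learner using phase kickback recovers the whole hidden string from a single oracle call, Bernstein-Vazirani style. The problem size, noise rate, and vote count below are assumptions for illustration, and the statevector simulation neglects the hardware noise that the actual experiment studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
s = rng.integers(0, 2, size=n)                   # hidden parity string

# --- classical learner: query basis vectors, majority-vote out label noise.
eta, votes = 0.2, 41                             # oracle bit-flip rate, repetitions
def noisy_parity(x):
    return (int(np.dot(s, x)) % 2) ^ int(rng.random() < eta)

classical_queries = 0
s_classical = []
for i in range(n):
    e = np.zeros(n, dtype=int)
    e[i] = 1
    tally = sum(noisy_parity(e) for _ in range(votes))
    classical_queries += votes
    s_classical.append(int(tally > votes / 2))

# --- quantum learner (noiseless statevector sketch): one phase-oracle query.
# With the ancilla in |->, the oracle acts as |x> -> (-1)^{s.x}|x>; Hadamards
# before and after map |0...0> straight to |s>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = np.array([[1.0]])
for _ in range(n):
    Hn = np.kron(Hn, H)

s_int = int("".join(str(int(b)) for b in s), 2)
phases = np.array([(-1) ** bin(x & s_int).count("1") for x in range(2 ** n)])
e0 = np.zeros(2 ** n)
e0[0] = 1.0
psi = Hn @ (phases * (Hn @ e0))                  # H, oracle, H
s_quantum = int(np.argmax(np.abs(psi)))
quantum_queries = 1

print(f"classical: {classical_queries} queries, quantum: {quantum_queries} query")
```

Even in this toy, the classical query count grows with both the problem size and the noise rate (through the required vote count), while the noiseless quantum learner's count does not — the qualitative shape of the gap reported above.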
Typical quantum gate tomography protocols struggle with a self-consistency problem: the gate operation cannot be reconstructed without knowledge of the initial state and final measurement, but such knowledge cannot be obtained without well-characterized gates. A recently proposed technique, known as randomized benchmarking tomography (RBT), sidesteps this self-consistency problem by designing experiments to be insensitive to preparation and measurement imperfections. We implement this proposal in a superconducting qubit system, using a number of experimental improvements, including implementing each element of the Clifford group as a single ‘atomic’ pulse and using custom control hardware to enable large-overhead protocols. We show a robust reconstruction of several single-qubit quantum gates, including a unitary outside the Clifford group. We demonstrate that RBT yields physical gate reconstructions that are consistent with fidelities obtained by randomized benchmarking.
We present methods and results of shot-by-shot correlation of noisy measurements to extract entangled state and process tomography in a superconducting qubit architecture. We show that averaging continuous values, rather than counting discrete thresholded values, is a valid tomographic strategy and is in fact the better choice in the low signal-to-noise regime. We show that the effort to measure N-body correlations from individual measurements scales exponentially with N, but with sufficient signal-to-noise the approach remains viable for few-body correlations. We provide a new protocol to optimally account for the transient behavior of pulsed measurements. Despite imperfect single-shot measurement fidelity, we demonstrate appropriate processing to extract and verify entangled states and processes.
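The continuous-vs-thresholded comparison can be reproduced in a toy readout model: each shot yields a heterodyne voltage centered on ±1 depending on the qubit state, buried in Gaussian noise much larger than the state separation. The noise level, shot counts, and target ⟨Z⟩ below are assumed illustration values, not the experiment's parameters.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Assumed low-SNR readout model: state means at +/-1, noise sigma = 3.
sigma, true_z = 3.0, 0.6
repeats, shots = 1000, 2000

p_up = (1 + true_z) / 2
states = rng.choice([1.0, -1.0], size=(repeats, shots), p=[p_up, 1 - p_up])
volts = states + sigma * rng.normal(size=(repeats, shots))

# Strategy 1: average the raw continuous values. Because the state means are
# +/-1 here, E[v] = <Z> directly, with per-shot variance ~ sigma^2.
z_avg = volts.mean(axis=1)

# Strategy 2: threshold each shot at 0, then divide by (2p - 1), where p is
# the single-shot assignment fidelity, to keep the estimator unbiased.
p_correct = 0.5 * (1 + math.erf(1 / (sigma * math.sqrt(2))))
z_thr = np.sign(volts).mean(axis=1) / (2 * p_correct - 1)

print(f"continuous:  {z_avg.mean():+.3f} +/- {z_avg.std():.3f}")
print(f"thresholded: {z_thr.mean():+.3f} +/- {z_thr.std():.3f}")
```

Both estimators converge to the true ⟨Z⟩, but in this low-SNR regime the thresholded estimator carries the larger statistical spread — the sense in which averaging continuous values is the better tomographic choice when single-shot fidelity is poor.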