As superconducting quantum circuits scale to larger sizes, frequency crowding becomes a formidable problem. Here we present a solution to this problem in fixed-frequency qubit architectures. By systematically adjusting qubit frequencies post-fabrication, we show a nearly ten-fold improvement in the precision of setting qubit frequencies. To assess scalability, we identify the types of "frequency collisions" that will impair a transmon qubit and cross-resonance gate architecture. Using statistical modeling, we compute the probability of evading all such conditions as a function of qubit frequency precision. We find that without post-fabrication tuning, the probability of finding a workable lattice quickly approaches zero. However, with the demonstrated precision it is possible to find collision-free lattices with favorable yield. These techniques and models are currently employed in available quantum systems and will be indispensable as systems continue to scale to larger sizes.
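
A minimal Monte Carlo sketch of the kind of yield estimate described above (an illustration, not the authors' model): sample each qubit's frequency from a Gaussian around its design target and count the fraction of sampled lattices in which no coupled pair falls inside an assumed collision window. The 4-qubit ring, the 50 MHz window, and the two precision values are hypothetical placeholders.

    import numpy as np

    def lattice_yield(targets_ghz, edges, sigma_ghz, window_ghz=0.05,
                      trials=10000, rng=None):
        """Fraction of sampled lattices with no nearest-neighbor 'collision',
        modeled here (as a stand-in for the full set of collision conditions)
        as two coupled qubits landing within window_ghz of each other."""
        rng = np.random.default_rng() if rng is None else rng
        targets = np.asarray(targets_ghz, dtype=float)
        good = 0
        for _ in range(trials):
            f = targets + rng.normal(0.0, sigma_ghz, size=targets.shape)
            if all(abs(f[i] - f[j]) > window_ghz for i, j in edges):
                good += 1
        return good / trials

    # Hypothetical 4-qubit ring with staggered targets; compare an as-fabricated
    # frequency spread against a ten-fold tighter post-tuning spread.
    targets = [5.00, 5.10, 5.00, 5.10]
    ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(lattice_yield(targets, ring, sigma_ghz=0.100))  # loose precision
    print(lattice_yield(targets, ring, sigma_ghz=0.010))  # ten-fold tighter
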
As quantum circuits increase in size, it is critical to establish scalable multiqubit fidelity metrics. Here we investigate three-qubit randomized benchmarking (RB) with fixed-frequency transmon qubits coupled to a common bus with pairwise microwave-activated interactions (cross-resonance). We measure, for the first time, a three-qubit error per Clifford of 0.106 for all-to-all gate connectivity and 0.207 for linear gate connectivity. Furthermore, by introducing mixed-dimensionality simultaneous RB (simultaneous one- and two-qubit RB), we show that the three-qubit errors can be predicted from the one- and two-qubit errors. However, by introducing certain coherent errors to the gates we can increase the three-qubit error to 0.302, an increase that is not predicted by a proportionate increase in the one- and two-qubit errors from simultaneous RB. This demonstrates three-qubit RB as a unique multiqubit metric.
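
For context, a sketch of the standard RB fitting step that produces an error per Clifford (the generic procedure, not this work's analysis code): fit the averaged sequence fidelity to A*p**m + B and convert the decay constant p into r = (d - 1)/d * (1 - p), with d = 2**n (d = 8 for three qubits). The data below are synthetic.

    import numpy as np
    from scipy.optimize import curve_fit

    def rb_error_per_clifford(seq_lengths, avg_fidelity, n_qubits):
        """Fit F(m) = A * p**m + B and return r = (d - 1)/d * (1 - p)."""
        d = 2 ** n_qubits
        model = lambda m, A, p, B: A * p ** m + B
        (A, p, B), _ = curve_fit(model, seq_lengths, avg_fidelity,
                                 p0=[1 - 1 / d, 0.9, 1 / d])
        return (d - 1) / d * (1 - p)

    # Synthetic three-qubit decay with p = 0.88 (purely illustrative numbers).
    m = np.arange(1, 30)
    F = 0.85 * 0.88 ** m + 1 / 8
    print(rb_error_per_clifford(m, F, n_qubits=3))  # ~0.105 for this decay
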
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, and the attainable quantum advantage is modest. Here we solve an oracle-based problem, known as learning parity with noise, using a five-qubit superconducting processor. Running classical and quantum algorithms on the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a quantum advantage already emerges in existing noisy systems.
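
As a point of reference for the query-count comparison, here is a crude classical baseline for learning parity with noise (an illustration of why the classical query count grows with noise and problem size, not the classical strategy benchmarked in the paper): query each basis vector repeatedly and majority-vote the noisy answers.

    import numpy as np

    def classical_lpn_run(n, noise, target_success=0.99, rng=None):
        """Learn a hidden n-bit parity string through an oracle whose answer
        bit is flipped with probability `noise`, by majority-voting repeated
        queries on each basis vector. Returns (queries_used, success)."""
        rng = np.random.default_rng() if rng is None else rng
        secret = rng.integers(0, 2, size=n)
        # Chernoff-style repetition count so each majority vote is reliable;
        # the constant is illustrative rather than tight.
        reps = int(np.ceil(np.log(n / (1 - target_success))
                           / (2 * (0.5 - noise) ** 2)))
        guess = np.zeros(n, dtype=int)
        for i in range(n):
            answers = (secret[i] + (rng.random(reps) < noise)) % 2
            guess[i] = int(answers.sum() * 2 > reps)
        return n * reps, bool(np.array_equal(guess, secret))

    print(classical_lpn_run(n=5, noise=0.2))   # e.g. (175, True)
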
The SSSV model is a simple classical model that achieves excellent correlation with published experimental data on the D-Wave machine's behavior on random instances of its native problem, thus raising questions about how "quantum" the D-Wave machine is at large scales. In response, a recent preprint by Vinci et al. proposes a particular set of instances on which the D-Wave machine behaves differently from the SSSV model. In this short note, we explain how a simple modeling of systematic errors in the machine allows the SSSV model to reproduce the behavior reported in the experiments of Vinci et al.
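
A minimal sketch of the kind of systematic-error modeling referred to above (the noise scales and the per-programming-cycle redraw are assumptions for illustration): the programmed fields and couplings handed to the classical model are perturbed by Gaussian offsets before each anneal.

    import numpy as np

    def perturb_instance(h, J, sigma_h=0.05, sigma_J=0.035, rng=None):
        """Return (h_eff, J_eff): the programmed fields h (dict qubit -> h_i)
        and couplings J (dict (i, j) -> J_ij) with Gaussian control offsets
        added, as one simple model of the machine's systematic errors."""
        rng = np.random.default_rng() if rng is None else rng
        h_eff = {i: hi + rng.normal(0.0, sigma_h) for i, hi in h.items()}
        J_eff = {e: Je + rng.normal(0.0, sigma_J) for e, Je in J.items()}
        return h_eff, J_eff

Each perturbed instance would then be run through a classical model of the kind sketched after the next abstract.
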
Recently there has been intense interest in claims about the performance of the D-Wave machine. Scientifically the most interesting aspect was the claim in Boixo et al., based on extensive experiments, that the D-Wave machine exhibits large-scale quantum behavior. Their conclusion was based on the strong correlation of the input-output behavior of the D-Wave machine with a quantum model called simulated quantum annealing, in contrast to its poor correlation with two classical models: simulated annealing and classical spin dynamics. In this paper, we outline a simple new classical model, and show that on the same data it yields correlations with the D-Wave input-output behavior that are at least as good as those of simulated quantum annealing. Based on these results, we conclude that classical models for the D-Wave machine are not ruled out. Further analysis of the new model provides additional algorithmic insights into the nature of the problems being solved by the D-Wave machine.
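
A compact sketch of an O(2)-rotor model of this type (schedule, temperature, and update scheme are illustrative choices, not the paper's exact implementation): each qubit is an angle theta_i, a linear annealing schedule trades a transverse term for the Ising energy, angles are updated by Metropolis moves, and the final spins are read out as sign(cos theta_i).

    import numpy as np

    def rotor_anneal(h, J, n, steps=500, beta=2.0, rng=None):
        """Classical O(2)-rotor annealer with energy
        E(s) = -(1 - s) * sum_i sin(th_i)
               - s * (sum_i h_i cos(th_i) + sum_(i,j) J_ij cos(th_i) cos(th_j)),
        swept from s = 0 to s = 1 with Metropolis angle updates."""
        rng = np.random.default_rng() if rng is None else rng
        theta = np.full(n, np.pi / 2)          # start along the transverse field
        hv = np.array([h.get(i, 0.0) for i in range(n)])

        def energy(th, s):
            c = np.cos(th)
            ising = hv @ c + sum(Jij * c[i] * c[j] for (i, j), Jij in J.items())
            return -(1 - s) * np.sin(th).sum() - s * ising

        for s in np.linspace(0.0, 1.0, steps):
            for _ in range(n):                 # one sweep per schedule point
                i = rng.integers(n)
                old, e0 = theta[i], energy(theta, s)
                theta[i] = rng.uniform(0.0, np.pi)
                dE = energy(theta, s) - e0
                if dE > 0 and rng.random() >= np.exp(-beta * dE):
                    theta[i] = old             # reject the move
        return np.sign(np.cos(theta)).astype(int)

    # Toy instance: two ferromagnetically coupled rotors with a small bias field.
    print(rotor_anneal(h={0: 0.1}, J={(0, 1): 1.0}, n=2))
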
Quantum process tomography is a necessary tool for verifying quantum gates
and diagnosing faults in architectures and gate design. We show that the
standard approach of process tomography is grossly inaccurate in the case where
the states and measurement operators used to interrogate the system are
generated by gates that have some systematic error, a situation all but
unavoidable in any practical setting. These errors in tomography cannot be
fully corrected through oversampling or by performing a larger set of
experiments. We present an alternative method for tomography to reconstruct an
entire library of gates in a self-consistent manner. The essential ingredient
is to define a likelihood function that assumes nothing about the gates used
for preparation and measurement. In order to make the resulting optimization
tractable we linearize about the target, a reasonable approximation when
benchmarking a quantum computer as opposed to probing a black-box function.
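
To make the self-consistency idea concrete, here is a schematic cost function in the spirit of the abstract (a sketch of the idea, not the paper's estimator): every measured frequency is predicted from the same parameterized gate set that supplies the preparation, process, and measurement rotations, and all gates are fit jointly, so none is assumed ideal.

    def predicted_prob(rho0, E0, gate_seq, gates):
        """Outcome probability Tr[E0 G_k ... G_1 rho0] with every operation,
        including preparation and measurement rotations, drawn from `gates`
        (a dict of label -> Pauli-transfer matrix as a numpy array), applied
        to the vectorized state rho0 and contracted with the effect E0."""
        v = rho0
        for label in gate_seq:
            v = gates[label] @ v
        return float(E0 @ v)

    def gate_set_cost(gates, rho0, E0, counts):
        """Least-squares stand-in for a self-consistent likelihood: `counts`
        maps a tuple of gate labels (the full experiment) to an observed
        frequency; minimizing over all gate parameters at once avoids
        assuming ideal preparation or measurement gates."""
        return sum((predicted_prob(rho0, E0, seq, gates) - f) ** 2
                   for seq, f in counts.items())
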
We implement a complete randomized benchmarking protocol on a system of two
superconducting qubits. The protocol consists of randomizing over gates in the
Clifford group, which are experimentally generated via an improved two-qubit
cross-resonance gate implementation and single-qubit unitaries. From this we
extract an optimal average error per Clifford of 0.0936. We also perform an
interleaved experiment, alternating our optimal two-qubit gate with random
two-qubit Clifford gates, to obtain a two-qubit gate error of 0.0653. We
compare these values with a two-qubit gate error of ~0.12 obtained from quantum
process tomography, which is likely limited by state preparation and
measurement errors.
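
The conversion from the two RB decay constants to the quoted two-qubit gate error follows the standard interleaved-RB estimate; the numbers below are placeholders to show the arithmetic, not the measured decay constants.

    def interleaved_gate_error(p_ref, p_int, n_qubits=2):
        """Standard interleaved-RB estimate of the interleaved gate's error:
        r_gate = (d - 1) / d * (1 - p_int / p_ref), with d = 2**n_qubits."""
        d = 2 ** n_qubits
        return (d - 1) / d * (1 - p_int / p_ref)

    # Placeholder decay constants, chosen only to illustrate the conversion.
    print(interleaved_gate_error(p_ref=0.875, p_int=0.800))  # ~0.064
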
The control and handling of errors arising from cross-talk and unwanted
interactions in multi-qubit systems is an important issue in quantum
information processing architectures. We introduce a benchmarking protocol that
provides information about the amount of addressability present in the system
and implement it on coupled superconducting qubits. The protocol consists of
performing randomized benchmarking on each qubit individually and then simultaneously, and
the amount of addressability is related to the difference of the average gate
fidelities of those experiments. We present the results on two similar samples
with different amounts of cross-talk and unwanted interactions, which agree
with predictions based on simple models for the amount of residual coupling.
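
One simple way to report the quantity the protocol probes (a rough reading of the metric; the paper's precise definition is in terms of the difference of the average gate fidelities): compare each qubit's error per gate when benchmarked alone against its value when the other qubit is driven simultaneously. The numbers below are illustrative.

    def addressability_shift(r_individual, r_simultaneous):
        """Shift in a qubit's average error per gate caused by simultaneously
        driving its neighbor; larger values indicate more cross-talk and
        unwanted interactions."""
        return r_simultaneous - r_individual

    # Illustrative values only.
    print(addressability_shift(r_individual=0.002, r_simultaneous=0.005))  # 0.003
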
We report a superconducting artificial atom with an observed quantum
coherence time of T2* = 95 μs and energy relaxation time T1 = 70 μs. The system
consists of a single Josephson junction transmon qubit embedded in an otherwise
empty copper waveguide cavity whose lowest eigenmode is dispersively coupled to
the qubit transition. We attribute the factor of four increase in the coherence
quality factor relative to previous reports to device modifications aimed at
reducing qubit dephasing from residual cavity photons. This simple device holds
great promise as a robust and easily produced artificial quantum system whose
intrinsic coherence properties are sufficient to allow tests of quantum error
correction.
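
For the coherence quality factor comparison, the relevant conversion is Q = omega * T2 = 2*pi*f*T2; the 5 GHz transition frequency below is an assumed, illustrative value (the abstract does not quote it).

    import math

    def coherence_quality_factor(f_qubit_hz, t2_s):
        """Coherence quality factor Q = omega * T2 = 2 * pi * f * T2."""
        return 2 * math.pi * f_qubit_hz * t2_s

    # Assuming a ~5 GHz qubit transition (illustrative), T2* = 95 us gives
    # Q on the order of a few million.
    print(f"{coherence_quality_factor(5e9, 95e-6):.2e}")  # ~3.0e+06
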