Implementing fast and high-fidelity quantum operations using open-loop quantum optimal control relies on having an accurate model of the quantum dynamics. Any deviations between this model and the complete dynamics of the device, such as the presence of spurious modes or pulse distortions, can degrade the performance of optimal controls in practice. Here, we propose an experimentally simple approach to realize optimal quantum controls tailored to the device parameters and environment, while simultaneously characterizing this quantum system. Concretely, we use physics-inspired machine learning to infer an accurate model of the dynamics from experimentally available data and then optimize our experimental controls on this trained model. We show the power and feasibility of this approach by optimizing arbitrary single-qubit operations on a superconducting transmon qubit, using detailed numerical simulations. We demonstrate that this framework produces an accurate description of the device dynamics under arbitrary controls, together with the precise pulses achieving arbitrary single-qubit gates with a high fidelity of about 99.99%.
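The model-learning-then-optimize loop can be illustrated in miniature. The sketch below (not the paper's method; all parameter values are hypothetical) fits a single physics-inspired model parameter, a Rabi frequency, to noisy simulated measurement data, then uses the trained model to derive a control, here a pi-pulse duration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated "experimental" excited-state populations under a resonant drive.
# True Rabi frequency (unknown to the fitting step): 2*pi * 5 MHz.
rng = np.random.default_rng(0)
omega_true = 2 * np.pi * 5e6            # rad/s
t = np.linspace(0, 500e-9, 51)          # pulse durations up to 500 ns
p_exc = np.sin(omega_true * t / 2) ** 2 + rng.normal(0, 0.01, t.size)

# Physics-inspired model: closed-form Rabi oscillation with one parameter.
def rabi_model(t, omega):
    return np.sin(omega * t / 2) ** 2

(omega_fit,), _ = curve_fit(rabi_model, t, p_exc, p0=[2 * np.pi * 4e6])

# Use the trained model to derive a control: a pi pulse flips the qubit.
t_pi = np.pi / omega_fit
print(f"fitted Rabi frequency: {omega_fit / (2 * np.pi) / 1e6:.3f} MHz")
print(f"pi-pulse duration:     {t_pi * 1e9:.1f} ns")
```

In the paper's setting the learned model is far richer (arbitrary controls, spurious modes, pulse distortions), but the structure is the same: fit a parameterized physical model to device data, then optimize pulses against the fitted model rather than the nominal one.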
We present a framework that combines the adjoint state method with reverse-time back-propagation to solve otherwise prohibitively large open-system quantum control problems. Our approach enables the optimization of arbitrary cost functions with fully general controls applied on large open quantum systems described by a Lindblad master equation. It is scalable, computationally efficient, and has a low memory footprint. We apply this framework to optimize two inherently dissipative operations in superconducting qubits which lag behind unitary operations in fidelity and duration: the dispersive readout and all-microwave reset of a transmon qubit. Our results show that, given a fixed set of system parameters, shaping the control pulses can yield twofold improvements in the fidelity and duration of both of these operations compared to standard strategies. Our approach can readily be applied to optimize quantum controls in a vast range of applications such as reservoir engineering, autonomous quantum error correction, and leakage-reduction units.
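The adjoint state method computes the gradient of a terminal cost with one forward and one backward sweep, independent of the number of control parameters. A minimal sketch on a toy linear system (standing in for a vectorized Lindblad equation; matrices and control are hypothetical), using the discrete adjoint of an explicit-Euler scheme:

```python
import numpy as np

# Toy linear system standing in for a (vectorized) Lindblad equation:
# dx/dt = (A + u*B) x, with a single scalar control amplitude u.
A = np.array([[-0.1, 1.0], [-1.0, -0.1]])   # hypothetical drift generator
B = np.array([[0.0, 0.5], [-0.5, 0.0]])     # hypothetical control generator
x0 = np.array([1.0, 0.0])
x_target = np.array([0.0, 1.0])
T, N = 2.0, 2000
dt = T / N

def forward(u):
    """Explicit-Euler forward evolution; returns the state trajectory."""
    xs = [x0]
    M = A + u * B
    for _ in range(N):
        xs.append(xs[-1] + dt * (M @ xs[-1]))
    return np.array(xs)

def cost_and_grad(u):
    """Cost C = 0.5*||x(T)-x*||^2 and dC/du via the discrete adjoint."""
    xs = forward(u)
    M = A + u * B
    lam = xs[-1] - x_target            # adjoint condition at final time T
    grad = 0.0
    for k in range(N - 1, -1, -1):     # sweep backward in time
        grad += dt * lam @ (B @ xs[k])
        lam = lam + dt * (M.T @ lam)
    return 0.5 * np.sum((xs[-1] - x_target) ** 2), grad
```

This sketch stores the forward trajectory for clarity; the low memory footprint claimed in the abstract comes from instead reconstructing the state in reverse time during the backward sweep, so that only the final state needs to be kept.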
Quantum computers hold the promise of solving computational problems which are intractable using conventional methods. For fault-tolerant operation, quantum computers must correct errors occurring due to unavoidable decoherence and limited control accuracy. Here, we demonstrate quantum error correction using the surface code, which is known for its exceptionally high tolerance to errors. Using 17 physical qubits in a superconducting circuit, we encode quantum information in a distance-three logical qubit, building on recent distance-two error detection experiments. In an error correction cycle taking only 1.1 μs, we demonstrate the preservation of four cardinal states of the logical qubit. Repeatedly executing the cycle, we measure and decode both bit- and phase-flip error syndromes using a minimum-weight perfect-matching algorithm in an error-model-free approach and apply corrections in postprocessing. We find a low error probability of 3% per cycle when rejecting experimental runs in which leakage is detected. The measured characteristics of our device agree well with a numerical model. Our demonstration of repeated, fast and high-performance quantum error correction cycles, together with recent advances in ion traps, support our understanding that fault-tolerant quantum computation will be practically realizable.
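The syndrome-to-correction step can be seen in miniature on a three-qubit bit-flip repetition code. The surface code experiment uses minimum-weight perfect matching over many cycles; the lookup table below is only the distance-three, single-round analogue of that idea, choosing the lowest-weight error consistent with each syndrome:

```python
# Three-qubit bit-flip repetition code: stabilizers Z1Z2 and Z2Z3.
# Each syndrome bit flags a parity mismatch between neighboring qubits.
def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Minimum-weight correction for each of the four possible syndromes.
CORRECTION = {
    (0, 0): (0, 0, 0),  # no error detected
    (1, 0): (1, 0, 0),  # most likely: flip on qubit 1
    (1, 1): (0, 1, 0),  # most likely: flip on qubit 2
    (0, 1): (0, 0, 1),  # most likely: flip on qubit 3
}

def decode(bits):
    corr = CORRECTION[syndrome(bits)]
    return tuple(b ^ c for b, c in zip(bits, corr))

# Any single bit-flip on a logical |000> state is corrected.
print(decode((0, 1, 0)))  # -> (0, 0, 0)
```

A distance-three code corrects any single error of the type it protects against; the surface code extends this to both bit- and phase-flips, with matching replacing the lookup table once the syndrome history grows.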
The controls enacting logical operations on quantum systems are described by time-dependent Hamiltonians that often include rapid oscillations. In order to accurately capture the resulting time dynamics in numerical simulations, a very small integration time step is required, which can severely impact the simulation run-time. Here, we introduce a semi-analytic method based on the Dyson expansion that allows us to time-evolve driven quantum systems much faster than standard numerical integrators. This solver, which we name Dysolve, efficiently captures the effect of the highly oscillatory terms in the system Hamiltonian, significantly reducing the simulation’s run time as well as its sensitivity to the time-step size. Furthermore, this solver provides the exact derivative of the time-evolution operator with respect to the drive amplitudes. This key feature allows for optimal control in the limit of strong drives and goes beyond common pulse-optimization approaches that rely on rotating-wave approximations. As an illustration of our method, we show results of the optimization of a two-qubit gate using transmon qubits in the circuit QED architecture.
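The underlying Dyson-series idea can be sketched directly, though this is not the Dysolve algorithm itself (which evaluates the oscillatory integrals semi-analytically; the Hamiltonian and parameters below are hypothetical). A propagator step built from the series truncated at second order already tracks a brute-force integration over a short interval:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Driven two-level system: H(t) = (omega_q/2) sz + eps*cos(omega_d t) sx.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
omega_q, omega_d, eps = 5.0, 5.0, 0.3   # hypothetical units

def H(t):
    return 0.5 * omega_q * sz + eps * np.cos(omega_d * t) * sx

def dyson_step(t0, dt, n=200):
    """Propagator over [t0, t0+dt] from the Dyson series up to 2nd order,
    with the nested time-ordered integrals done on a midpoint grid."""
    ts = t0 + dt * (np.arange(n) + 0.5) / n
    h = dt / n
    U = np.eye(2, dtype=complex)
    # first order: -i * integral of H(t1) dt1
    U += -1j * h * sum(H(t) for t in ts)
    # second order: (-i)^2 * integral over t1 < t2 of H(t2) H(t1)
    acc = np.zeros((2, 2), dtype=complex)      # running integral of H
    second = np.zeros((2, 2), dtype=complex)
    for t2 in ts:
        second += h * (H(t2) @ acc)
        acc += h * H(t2)
    U += -second
    return U

# Reference: direct numerical integration of i dU/dt = H(t) U.
def exact_step(t0, dt):
    def rhs(t, y):
        return (-1j * H(t) @ y.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (t0, t0 + dt), np.eye(2, dtype=complex).ravel(),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

U_dyson, U_exact = dyson_step(0.0, 0.05), exact_step(0.0, 0.05)
print(np.max(np.abs(U_dyson - U_exact)))   # small truncation error
```

Because each series term is polynomial in the drive amplitude, differentiating the propagator with respect to that amplitude is exact term by term, which is the feature the abstract exploits for optimal control under strong drives.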