Quantum error mitigation: intermediate-term schemes for handling quantum errors
Can realistic physical priors help in learning quantum states, unitaries and Hamiltonians?
Quantum error mitigation has been proposed as a means to combat unwanted and unavoidable errors in near-term quantum computing using no or few additional quantum resources, in contrast to fault-tolerant schemes that come with heavy overheads. Error mitigation has been successfully applied to reduce noise in near-term applications. In this work, however, we identify strong limitations to the degree to which quantum noise can be effectively ‘undone’ for larger system sizes. We set up a framework that rigorously captures large classes of error mitigation schemes in use today. The core of our argument combines fundamental limits of statistical inference with a construction of families of random circuits that are highly sensitive to noise.
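As a point of reference for the kind of scheme such a framework covers, here is a minimal classical sketch of zero-noise extrapolation, one of the most common mitigation techniques. It assumes a toy exponential-decay noise model and a polynomial (Richardson-style) fit; all names and parameters are illustrative, and this is not the construction used in the argument above.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_expectation(noise_scale, shots, ideal=1.0, decay=0.05):
    """Toy model: the measured expectation value decays exponentially with the
    (artificially stretched) noise scale, plus finite-shot statistical noise."""
    mean = ideal * np.exp(-decay * noise_scale)
    return mean + rng.normal(0.0, 1.0 / np.sqrt(shots))

# Zero-noise extrapolation: estimate the observable at several stretched noise
# levels, fit a curve, and extrapolate back to the zero-noise point.
scales = np.array([1.0, 2.0, 3.0])
shots = 10_000
estimates = np.array([noisy_expectation(s, shots) for s in scales])

coeffs = np.polyfit(scales, estimates, deg=2)   # Richardson-style polynomial fit
mitigated = np.polyval(coeffs, 0.0)

print(f"raw estimate at scale 1:         {estimates[0]:.4f}")
print(f"mitigated (zero-noise) estimate: {mitigated:.4f}")
```

Even in this toy picture the statistical issue is visible: the signal being extrapolated shrinks as circuits grow deeper and noisier, so resolving it demands ever more measurement shots, which is in the spirit of the limitation identified above.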
Random classical codes have good error-correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory.
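To make the classical starting point concrete, the sketch below generates LPN samples: a hidden secret s, uniformly random vectors a, and noisy labels b = ⟨a, s⟩ ⊕ e with Bernoulli noise. The parameter names are illustrative; recovering s from many such samples is exactly the decoding task described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def lpn_samples(n, m, eta):
    """Generate m LPN samples over GF(2) with secret length n and noise rate eta.

    Each sample is a row of A together with b = <a, s> + e (mod 2), where
    e ~ Bernoulli(eta). Recovering s from (A, b) amounts to decoding a random
    linear code in the presence of noise."""
    s = rng.integers(0, 2, size=n)         # hidden secret
    A = rng.integers(0, 2, size=(m, n))    # uniformly random vectors a
    e = (rng.random(m) < eta).astype(int)  # Bernoulli(eta) noise
    b = (A @ s + e) % 2                    # noisy labels
    return A, b, s

A, b, s = lpn_samples(n=32, m=256, eta=0.1)
print(A.shape, b.shape)  # (256, 32) (256,)
```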
Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise, to low constant noise rates, and even higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, so it is at least as hard as its classical counterpart. Second, we prove a worst-case to average-case reduction for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to computational limitations on learning from quantum data.
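For intuition about the noise model, the sketch below samples i.i.d. local depolarizing Pauli errors and computes their syndrome against a fixed toy stabilizer code (the 3-qubit bit-flip code). This is only a caricature: in LSN the stabilizer code itself is drawn at random, and the decoder receives a corrupted quantum code state rather than a classical error string.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_depolarizing_error(n, p):
    """Each qubit independently suffers I with prob 1-p and X, Y, Z with prob p/3."""
    return rng.choice(["I", "X", "Y", "Z"], size=n, p=[1 - p, p / 3, p / 3, p / 3])

def anticommutes(p1, p2):
    """Two single-qubit Paulis anticommute iff they differ and neither is the identity."""
    return p1 != "I" and p2 != "I" and p1 != p2

def syndrome(error, stabilizer_generators):
    """One syndrome bit per generator: 1 iff the generator anticommutes with the error."""
    return [
        sum(anticommutes(e, g) for e, g in zip(error, gen)) % 2
        for gen in stabilizer_generators
    ]

# Toy stand-in for a stabilizer code: the 3-qubit bit-flip code.
generators = ["ZZI", "IZZ"]

error = local_depolarizing_error(n=3, p=0.1)
print("sampled Pauli error:", "".join(error))
print("syndrome:           ", syndrome(error, generators))
```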
While quantum state tomography is notoriously hard, most states hold little interest for practically-minded tomographers. Given that states and unitaries appearing in Nature are of bounded gate complexity, it is natural to ask if efficient learning becomes possible. In this work, we prove that to learn a state generated by a quantum circuit with G two-qubit gates to a small trace distance, a sample complexity scaling linearly in G is necessary and sufficient. We also prove that the optimal query complexity to learn a unitary generated by G gates to a small average-case error scales linearly in G. While sample-efficient learning can be achieved, we show that under reasonable cryptographic conjectures, the computational complexity for learning states and unitaries of gate complexity G must scale exponentially in G. We illustrate how these results establish fundamental limitations on the expressivity of quantum machine learning models and provide new perspectives on no-free-lunch theorems in unitary learning. Together, our results answer how the complexity of learning quantum states and unitaries relates to the complexity of creating these states and unitaries.
In which we study the learnability of the output distributions of local quantum circuits, and reach surprising conclusions for quantum circuit Born machines, a cornerstone of quantum machine learning.
In which we study the same setting, but with a focus on learning from Statistical Queries (SQ) instead of from samples.
In which we show information-theoretic implications between quantum learning models, with applications to shadow tomography on specific classes of quantum states.
Quantum chaos is a quantum many-body phenomenon that is associated with a number of intricate properties, such as level repulsion in energy spectra or distinct scalings of out-of-time ordered correlation functions. In this work, we introduce a novel class of "pseudochaotic" quantum Hamiltonians that fundamentally challenges the conventional understanding of quantum chaos and its relationship to computational complexity. Our ensemble is computationally indistinguishable from the Gaussian unitary ensemble (GUE) of strongly interacting Hamiltonians, widely considered to be a quintessential model for quantum chaos. Surprisingly, despite this effective indistinguishability, our Hamiltonians lack all conventional signatures of chaos: they exhibit Poissonian level statistics, low operator complexity, and weak scrambling properties. This stark contrast between efficient computational indistinguishability and traditional chaos indicators calls into question fundamental assumptions about the nature of quantum chaos. Furthermore, we give an efficient quantum algorithm to simulate Hamiltonians from our ensemble, even though simulating Hamiltonians from the true GUE is known to require exponential time. Our work establishes fundamental limitations on Hamiltonian learning and testing protocols and derives stronger bounds on entanglement and magic state distillation. These results reveal a surprising separation between computational and information-theoretic perspectives on quantum chaos, opening new avenues for research at the intersection of quantum chaos, computational complexity, and quantum information. Above all, it challenges conventional notions of what it fundamentally means to actually observe complex quantum systems.
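The level-statistics diagnostic mentioned above can be made concrete with the consecutive-gap ratio, which distinguishes level repulsion (GUE, mean ratio about 0.60) from its absence (Poissonian spectra, about 0.39) without any spectral unfolding. The sketch below uses dense random matrices purely for illustration; it is not the construction of the pseudochaotic ensemble.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_gap_ratio(energies):
    """Mean of r_k = min(s_k, s_{k+1}) / max(s_k, s_{k+1}) over consecutive level spacings s_k."""
    s = np.diff(np.sort(energies))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

d = 1000

# GUE: Hermitian matrix with Gaussian entries -> level repulsion, <r> ~ 0.60.
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
gue_levels = np.linalg.eigvalsh((M + M.conj().T) / 2)

# Uncorrelated levels -> Poissonian statistics, no repulsion, <r> ~ 0.39.
poisson_levels = rng.uniform(0.0, 1.0, size=d)

print(f"GUE mean gap ratio:     {mean_gap_ratio(gue_levels):.3f}")
print(f"Poisson mean gap ratio: {mean_gap_ratio(poisson_levels):.3f}")
```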
Notions of so-called magic quantify how non-classical quantum states are in a precise sense: low values of magic preclude quantum advantage; they also play a key role in quantum error correction. In this work, we introduce the phenomenon of ‘pseudomagic’, wherein certain ensembles of quantum states with low magic are computationally indistinguishable from quantum states with high magic. Previously, such computational indistinguishability has been studied with respect to entanglement, by introducing the notion of pseudoentanglement. However, we show that pseudomagic neither follows from pseudoentanglement nor implies it. In terms of applications, pseudomagic sheds new light on the theory of quantum chaos: it reveals the existence of states that, although built from non-chaotic unitaries, cannot be distinguished from random chaotic states by any physical observer. Further applications include new lower bounds on state synthesis problems and property testing protocols, as well as implications for quantum cryptography. Our results have the conceptual implication that magic is a ‘hide-able’ property of quantum states: some states have a lot more magic than meets the (computationally bounded) eye. From the physics perspective, our work advocates the mindset that the only physical properties that can be measured in a laboratory are those that are efficiently computationally detectable.
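As one concrete (and widely used) way to quantify magic, though not necessarily the measure used in this work, the stabilizer 2-Rényi entropy of a pure state can be computed from Pauli expectation values; the single-qubit sketch below contrasts a stabilizer state with a T state.

```python
import numpy as np

# Single-qubit Pauli matrices.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def stabilizer_renyi_2(psi):
    """Stabilizer 2-Renyi entropy M_2 = -log2( sum_P <psi|P|psi>^4 / d ) for one qubit (d = 2).

    M_2 = 0 exactly for stabilizer states; M_2 > 0 signals magic."""
    d = 2
    expectations = [np.real(np.vdot(psi, P @ psi)) for P in (I, X, Y, Z)]
    return -np.log2(sum(e**4 for e in expectations) / d)

zero = np.array([1, 0], dtype=complex)                                 # stabilizer state |0>
t = np.array([1, np.exp(1j * np.pi / 4)], dtype=complex) / np.sqrt(2)  # magic T state

print(f"M_2(|0>) = {stabilizer_renyi_2(zero):.3f}")  # 0.000
print(f"M_2(|T>) = {stabilizer_renyi_2(t):.3f}")     # ~0.415
```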
In which we propose a constant-depth quantum circuit to estimate quantities like Tr(ϱ₁ϱ₂⋯ϱₘ), using ideas from Shor error-correcting codes. This brings such estimation tasks closer to the capabilities of near-term quantum processors.
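For reference, the quantity being estimated is simply the trace of an ordered product of density matrices; the small numpy check below computes it directly for random toy states. It shows what the circuit estimates, not the constant-depth circuit itself.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_density_matrix(d):
    """Random full-rank density matrix: rho = G G^dag / Tr(G G^dag) with G a Ginibre matrix."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

# Tr(rho_1 rho_2 ... rho_m) for m single-qubit states; complex in general for m >= 3.
m, d = 4, 2
rhos = [random_density_matrix(d) for _ in range(m)]
print("Tr(rho_1 ... rho_m) =", np.trace(np.linalg.multi_dot(rhos)))
```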
In which we use the Quantum Singular Value Transform to devise an algorithm to implement these two vaunted theoretical tools, useful for approximately reversing a quantum channel and for near-optimal state discrimination.
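For background on the state-discrimination side, a standard near-optimal construction is the pretty good measurement, with POVM elements E_i = ρ^{-1/2} p_i ρ_i ρ^{-1/2} for an ensemble {p_i, ρ_i} and ρ = Σ_i p_i ρ_i. Whether or not this is exactly one of the two tools referred to above, the sketch below builds it classically with numpy; it is not the QSVT-based implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_density_matrix(d):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

def inv_sqrt(rho, tol=1e-12):
    """(Pseudo-)inverse square root of a positive semidefinite matrix."""
    vals, vecs = np.linalg.eigh(rho)
    inv = np.where(vals > tol, 1.0 / np.sqrt(np.clip(vals, tol, None)), 0.0)
    return (vecs * inv) @ vecs.conj().T

def pretty_good_measurement(priors, states):
    """POVM elements E_i = rho^{-1/2} p_i rho_i rho^{-1/2}, where rho = sum_i p_i rho_i."""
    rho = sum(p * r for p, r in zip(priors, states))
    S = inv_sqrt(rho)
    return [S @ (p * r) @ S for p, r in zip(priors, states)]

# Success probability of the PGM on two equiprobable random qubit states.
priors = [0.5, 0.5]
states = [random_density_matrix(2) for _ in range(2)]
povm = pretty_good_measurement(priors, states)
p_success = sum(p * np.real(np.trace(E @ r)) for p, E, r in zip(priors, povm, states))
print(f"PGM success probability: {p_success:.3f}")
```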
In which we show that superquantum correlations can yield an advantage over quantum and classical correlations for communication over an interference channel!
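For context on what ‘superquantum’ means here, the canonical example is the Popescu–Rohrlich (PR) box, a no-signalling correlation that reaches the algebraic CHSH maximum of 4, beyond the quantum Tsirelson bound of 2√2. The sketch below only verifies that value; it does not reproduce the interference-channel protocol.

```python
from itertools import product

# PR-box correlations: P(a, b | x, y) = 1/2 whenever a XOR b = x AND y, else 0.
def pr_box(a, b, x, y):
    return 0.5 if (a ^ b) == (x & y) else 0.0

# CHSH value S = sum_{x,y} (-1)^{x*y} E(x, y), with correlators
# E(x, y) = sum_{a,b} (-1)^{a XOR b} P(a, b | x, y).
S = 0.0
for x, y in product((0, 1), repeat=2):
    E = sum((-1) ** (a ^ b) * pr_box(a, b, x, y) for a, b in product((0, 1), repeat=2))
    S += (-1) ** (x * y) * E

print(f"PR-box CHSH value: {S}  (classical bound 2, quantum bound ~2.83)")
```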
Shannon established that for fully classical channels, feedback does not increase capacity. Quantumly, though, we have seen examples where feedback can help. We introduce measures that bound just how much feedback can increase the classical capacity.