We introduce Hamiltonian Decoded Quantum Interferometry (HDQI), a quantum algorithm that utilizes coherent Bell measurements and the symplectic representation of the Pauli group to reduce Gibbs sampling and Hamiltonian optimization to classical decoding. For a signed Pauli Hamiltonian 𝐻 and any degree-ℓ polynomial 𝒫, HDQI prepares a purification of the density matrix 𝜌_𝒫(𝐻) = 𝒫^2(𝐻)/Tr[𝒫^2(𝐻)] by solving a combination of two tasks: decoding ℓ errors on a classical code defined by 𝐻, and preparing a pilot state that encodes the anti-commutation structure of 𝐻. Choosing 𝒫(𝑥) to approximate exp(−𝛽𝑥/2) yields Gibbs states at inverse temperature 𝛽; other choices of 𝒫 prepare approximate ground states, microcanonical ensembles, and other spectral filters. The decoding problem inherits structural properties of 𝐻; in particular, local Hamiltonians map to LDPC codes. Preparing the pilot state is always efficient for commuting Hamiltonians, but highly non-trivial for non-commuting Hamiltonians. Nevertheless, we prove that this state admits an efficient matrix product state representation for Pauli Hamiltonians whose anti-commutation graph decomposes into connected components of logarithmic size.
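As a one-line sanity check of the Gibbs-sampling claim (restated here for concreteness, not quoted from the paper): if 𝒫 approximates the square-root Boltzmann factor, then

\[
\mathcal{P}(x) \approx e^{-\beta x/2}
\;\Longrightarrow\;
\rho_{\mathcal{P}}(H) \;=\; \frac{\mathcal{P}^{2}(H)}{\operatorname{Tr}\!\left[\mathcal{P}^{2}(H)\right]}
\;\approx\; \frac{e^{-\beta H}}{\operatorname{Tr}\!\left[e^{-\beta H}\right]},
\]

which is exactly the Gibbs state at inverse temperature 𝛽. The degree ℓ of the approximating polynomial (for instance, a truncated Taylor or Chebyshev expansion of exp(−𝛽𝑥/2)) is what sets the number of errors the classical decoder has to handle.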
HDQI efficiently prepares Gibbs states at arbitrary temperatures for a class of physically motivated commuting Hamiltonians – including the toric code, color code, and Haah’s cubic code – but we also develop a matching efficient classical algorithm for this task, thereby delineating the boundary of efficient classical simulation. For a non-commuting semiclassical spin glass and commuting stabilizer Hamiltonians with quantum defects, HDQI provably prepares Gibbs states up to a constant inverse-temperature threshold using polynomial quantum resources and quasi-polynomial classical pre-processing. These results position HDQI as a versatile new algorithmic primitive and the first extension of Regev’s reduction to non-abelian groups.
Random classical codes have good error correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory.
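For orientation, here is a minimal sketch of how LPN instances are generated (standard textbook formulation; the dimensions, noise rate, and variable names are illustrative, not taken from any particular construction):

import numpy as np

def lpn_samples(n, m, eta, rng=None):
    # Generate m LPN samples (a_i, b_i) with b_i = <a_i, s> + e_i (mod 2),
    # where s is a hidden secret in {0,1}^n and e_i ~ Bernoulli(eta).
    rng = np.random.default_rng() if rng is None else rng
    s = rng.integers(0, 2, size=n)            # hidden secret
    A = rng.integers(0, 2, size=(m, n))       # uniformly random linear queries
    e = (rng.random(m) < eta).astype(int)     # Bernoulli(eta) label noise
    b = (A @ s + e) % 2                       # noisy parities
    return A, b, s                            # the solver sees (A, b) and must recover s

Recovering s from (A, b) is precisely the noisy decoding task described above.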
Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise to low constant noise rates, and even to higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, which suggests that it is at least as hard as its classical counterpart. Second, we prove a worst-case to average-case reduction for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to the computational limitations of learning from quantum data.
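For reference, the local depolarizing noise in question acts independently on each qubit as the standard single-qubit channel (one common convention; the noise rate p is the parameter varied across the regimes above),

\[
\mathcal{D}_p(\rho) \;=\; (1-p)\,\rho \;+\; \frac{p}{3}\left(X\rho X + Y\rho Y + Z\rho Z\right),
\]

so that LSN is, roughly, the task of decoding a random stabilizer codeword after every qubit has passed through \(\mathcal{D}_p\).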
In which we propose a constant-depth quantum circuit to estimate quantities like Tr(ϱ₁ϱ₂⋯ϱₘ), using ideas from Shor error-correcting codes. This brings such estimates closer to the capabilities of near-term quantum processors.
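The quantity being estimated is the standard multi-copy overlap, which satisfies the well-known cyclic-shift identity (background fact, not the paper's construction):

\[
\operatorname{Tr}\!\left[\,C_m\,(\varrho_1 \otimes \varrho_2 \otimes \cdots \otimes \varrho_m)\,\right]
\;=\;
\operatorname{Tr}\!\left(\varrho_1 \varrho_2 \cdots \varrho_m\right),
\qquad
C_m\,|i_1, i_2, \ldots, i_m\rangle = |i_2, \ldots, i_m, i_1\rangle .
\]

The textbook estimator is a Hadamard test on a controlled C_m; the point of the construction above is to obtain the same quantity with a circuit of constant depth.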
In which we use the Quantum Singular Value Transform to devise an algorithm that implements these two vaunted theoretical tools, useful for approximately reversing a quantum channel and for near-optimal state discrimination.
In a variety of physically relevant settings for learning from quantum data, designing protocols that extract information in a computationally efficient manner remains largely an art, and there are important cases where we believe this to be impossible, that is, where there is an information-computation gap. While there is a large array of tools in the classical literature for giving evidence for average-case hardness of statistical inference problems, the corresponding tools in the quantum literature are far more limited. One such framework in the classical literature, the low-degree method, makes predictions about the hardness of inference problems based on the failure of estimators given by low-degree polynomials. In this work, we extend this framework to the quantum setting.
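For readers unfamiliar with the classical framework being extended: given a planted distribution P and a null distribution Q over the observed data, the central quantity is the degree-D advantage (standard definition, reproduced here for orientation),

\[
\mathrm{Adv}_{\le D}(P, Q)
\;=\;
\max_{f:\ \deg f \le D}\;
\frac{\mathbb{E}_{x \sim P}\!\left[f(x)\right]}{\sqrt{\mathbb{E}_{x \sim Q}\!\left[f(x)^2\right]}},
\]

and the low-degree heuristic predicts computational hardness of distinguishing P from Q whenever this advantage stays bounded for D growing polylogarithmically in the instance size.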
We establish a general connection between state designs and low-degree hardness. We use this to obtain the first information-computation gaps for learning Gibbs states of random, sparse, non-local Hamiltonians. We also use it to prove hardness for learning random shallow quantum circuit states in a challenging model where states can be measured in adaptively chosen bases. To our knowledge, the ability to model adaptivity within the low-degree framework was open even in classical settings. In addition, we obtain a low-degree hardness result for quantum error mitigation against strategies with single-qubit measurements.
We define a new quantum generalization of the planted biclique problem and identify the threshold at which this problem becomes computationally hard for protocols that perform local measurements. Interestingly, the complexity landscape for this problem shifts when going from local measurements to more entangled single-copy measurements.
We show average-case hardness for the "standard" variant of Learning Stabilizers with Noise and for agnostically learning product states.
While quantum state tomography is notoriously hard, most states hold little interest to practically minded tomographers. Given that states and unitaries appearing in Nature are of bounded gate complexity, it is natural to ask if efficient learning becomes possible. In this work, we prove that to learn a state generated by a quantum circuit with G two-qubit gates to a small trace distance, a sample complexity scaling linearly in G is necessary and sufficient. We also prove that the optimal query complexity to learn a unitary generated by G gates to a small average-case error scales linearly in G. While sample-efficient learning can be achieved, we show that under reasonable cryptographic conjectures, the computational complexity for learning states and unitaries of gate complexity G must scale exponentially in G. We illustrate how these results establish fundamental limitations on the expressivity of quantum machine learning models and provide new perspectives on no-free-lunch theorems in unitary learning. Together, our results answer how the complexity of learning quantum states and unitaries relates to the complexity of creating these states and unitaries.
In which we study the learnability of the output distributions of local quantum circuits, and find surprising conclusions for quantum circuit Born machines, a cornerstone of quantum machine learning.
In which we study the same setting, but with a focus on learning from Statistical Queries (SQ) instead of samples.
In which we show information-theoretic implications between quantum learning models, with applications to shadow tomography on specific classes of quantum states.
Quantum chaos is a quantum many-body phenomenon that is associated with a number of intricate properties, such as level repulsion in energy spectra or distinct scalings of out-of-time ordered correlation functions. In this work, we introduce a novel class of "pseudochaotic" quantum Hamiltonians that fundamentally challenges the conventional understanding of quantum chaos and its relationship to computational complexity. Our ensemble is computationally indistinguishable from the Gaussian unitary ensemble (GUE) of strongly-interacting Hamiltonians, widely considered to be a quintessential model for quantum chaos. Surprisingly, despite this effective indistinguishability, our Hamiltonians lack all conventional signatures of chaos: they exhibit Poissonian level statistics, low operator complexity, and weak scrambling properties. This stark contrast between efficient computational indistinguishability and traditional chaos indicators calls into question fundamental assumptions about the nature of quantum chaos. We furthermore give an efficient quantum algorithm to simulate Hamiltonians from our ensemble, even though simulating Hamiltonians from the true GUE is known to require exponential time. Our work establishes fundamental limitations on Hamiltonian learning and testing protocols and derives stronger bounds on entanglement and magic state distillation. These results reveal a surprising separation between computational and information-theoretic perspectives on quantum chaos, opening new avenues for research at the intersection of quantum chaos, computational complexity, and quantum information. Above all, this challenges conventional notions of what it fundamentally means to observe complex quantum systems.
Notions of so-called magic quantify how non-classical quantum states are in a precise sense: low values of magic preclude quantum advantage; they also play a key role in quantum error correction. In this work, we introduce the phenomenon of ‘pseudomagic’ – wherein certain ensembles of quantum states with low magic are computationally indistinguishable from quantum states with high magic. Previously, such computational indistinguishability has been studied with respect to entanglement, by introducing the notion of pseudoentanglement. However, we show that pseudomagic neither follows from pseudoentanglement, nor implies it. In terms of applications, pseudomagic sheds new light on the theory of quantum chaos: it reveals the existence of states that, although built from non-chaotic unitaries, cannot be distinguished from random chaotic states by any physical observer. Further applications include new lower bounds on state synthesis problems and property testing protocols, as well as implications for quantum cryptography. Our results have the conceptual implication that magic is a ‘hide-able’ property of quantum states: some states have a lot more magic than meets the (computationally bounded) eye. From a physics perspective, they advocate the mindset that the only physical properties that can be measured in a laboratory are those that are efficiently computationally detectable.
Quantum error mitigation has been proposed as a means to combat unwanted and unavoidable errors in near-term quantum computing using no or few additional quantum resources, in contrast to fault-tolerant schemes that come with heavy overheads. Error mitigation has been successfully applied to reduce noise in near-term applications. In this work, however, we identify strong limitations to the degree to which quantum noise can be effectively 'undone' for larger system sizes. We set up a framework that rigorously captures large classes of error mitigation schemes in use today. The core of our argument combines fundamental limits of statistical inference with a construction of families of random circuits that are highly sensitive to noise.
In which we show that superquantum correlations can yield an advantage over quantum and classical correlations for communication over an interference channel!
Shannon established that feedback does not increase the capacity of a classical memoryless channel. Quantumly, though, we have seen examples where feedback can help. We introduce measures that bound just how much feedback can increase the classical capacity.