
Accepted Papers
PC’s Long Plenaries
- Group Order is in QCMA. François Le Gall (Nagoya University); Harumichi Nishimura (Nagoya University); Dhara Thakkar (Nagoya University). Abstract: In this work, we show that verifying the order of a finite group given as a black box is in the complexity class QCMA. This solves an open problem asked by Watrous in 2000 in his seminal paper on quantum proofs and directly implies that the Group Non-Membership problem is also in the class QCMA, which further proves a conjecture proposed by Aaronson and Kuperberg in 2006. Our techniques also give improved quantum upper bounds on the complexity of many other group-theoretical problems, such as group isomorphism in black-box groups.
- merged with #441:
  Representations of f-Divergences and their role in Quantum Hypothesis Testing. Salman Beigi (Institute for Research in Fundamental Sciences (IPM)); Hao-Chung Cheng (National Taiwan University); Christoph Hirche (Leibniz University Hannover); Po-Chieh Liu (National Taiwan University); Marco Tomamichel (National University of Singapore). Abstract: Divergences lie at the core of information-theoretic applications. A recently introduced family of f-divergences, defined via an integral representation, has exhibited remarkable properties --- for instance, for the study of contraction coefficients. However, many familiar properties of their classical analogues have remained elusive. In this work, we develop alternative representations of the quantum f-divergences by leveraging the recently established quantum layer-cake theorem. These new formulations enable us to establish several key properties, including monotonicity and connections to other divergences. As our main application, we show how these representations unify and streamline various proofs in quantum hypothesis testing, yielding tighter achievability bounds through conceptually simple arguments that apply across different error regimes.
  Quantum channel coding with a few code lengths. Hao-Chung Cheng (National Taiwan University); Po-Chieh Liu (National Taiwan University). Abstract: A central problem in quantum channel coding is determining the code length needed to achieve a given error tolerance ε. We improve the previous second-order bound with quadratic dependence O(1/ε^2) to a logarithmic dependence O(log 1/ε) for small ε. This is achieved by proving a one-shot random coding bound for classical--quantum channel coding, a conjecture of Burnashev and Holevo (1998). Optimizing the input distribution, our result yields the optimal error exponent for classical--quantum channels at rates above the critical rate, even in infinite-dimensional Hilbert spaces. Furthermore, these results apply to classical communication over general quantum channels, with or without entanglement assistance. A key technical tool is our new operator layer-cake theorem. This theorem shows that a form of the pretty-good measurement is equivalent to a randomized Holevo–Helstrom measurement, providing an operational explanation of why the pretty-good measurement is pretty good.
- Multi-qubit Toffoli with exponentially fewer T gates. David Gosset (University of Waterloo); Robin Kothari (Google Quantum AI); Chenyi Zhang (Stanford University & Google Quantum AI). Abstract: Prior work of Beverland et al. has shown that any exact Clifford+T implementation of the n-qubit Toffoli gate must use at least n T gates. Here we show how to get away with exponentially fewer T gates, at the cost of incurring a tiny 1/poly(n) error that can be neglected in most practical situations. More precisely, the n-qubit Toffoli gate can be implemented to within error ϵ in the diamond distance by a randomly chosen Clifford+T circuit with at most O(log(1/ϵ)) T gates. We also give a matching Ω(log(1/ϵ)) lower bound that establishes optimality, and we show that any purely unitary implementation achieving even constant error must use Ω(n) T gates. We also extend our sampling technique to implement other Boolean functions. Finally, we describe upper and lower bounds on the T-count of Boolean functions in terms of non-adaptive parity decision tree complexity and its randomized analogue.
(Short) Plenaries
- Few Single-Qubit Measurements Suffice to Certify Any Quantum State. Meghal Gupta (UC Berkeley); William He (Carnegie Mellon University); Ryan O'Donnell (Carnegie Mellon University). Abstract: A fundamental task in quantum information science is \emph{state certification}: testing whether a lab-prepared $n$-qubit state is close to a given hypothesis state. In this work, we show that \emph{every} pure hypothesis state can be certified using only $O(n^2)$ single-qubit measurements applied to $O(n)$ copies of the lab state. Prior to our work, it was not known whether even subexponentially many single-qubit measurements could suffice to certify arbitrary states. This resolves the main open question of Huang, Preskill, and Soleimanifar (FOCS 2024, QIP 2024). Our algorithm also showcases the power of \emph{adaptive measurements}: within each copy of the lab state, previous measurement outcomes dictate how subsequent qubit measurements are made. We show that the adaptivity is necessary, by proving an exponential lower bound on the number of copies needed for any nonadaptive single-qubit measurement algorithm.
- A Constant Rate Quantum Computer on a Line. Craig Gidney (Google Quantum AI); Thiago Bergamaschi (UC Berkeley). Abstract: We prove by construction that the Bravyi-Poulin-Terhal bound on the spatial density of stabilizer codes does not generalize to stabilizer circuits. To do so, we construct a fault tolerant quantum computer with a coding rate above 5% and quasi-polylog time overhead, out of a line of qubits with nearest-neighbor connectivity, and prove it has a threshold. The construction is based on modifications to the tower of Hamming codes of Yamasaki and Koashi (Nature Physics, 2024), with operators measured using a variant of Shor’s measurement gadget.
- Constructive counterexamples to the additivity of minimum output Rényi entropy of quantum channels for all p>1. Harm Derksen (Northeastern University); Benjamin Lovitz (Concordia University). Abstract: We present explicit quantum channels with strictly sub-additive minimum output Rényi entropy for all p>1, improving upon prior constructions which handled p>2. Our example is provided by explicit constructions of linear subspaces with high geometric measure of entanglement. Our construction applies in both the bipartite and multipartite settings. As further applications, we use our construction to find entanglement witnesses with many highly negative eigenvalues, and to construct highly entangled mixed quantum states.
- Quantum precomputation: how to parallelize cascade circuits and the Moore–Nilsson conjecture is false. Adam Bene Watts (University of Calgary); Charles R. Chen (University of California, San Diego); J. William Helton (University of California, San Diego); Joseph Slote (University of Washington). Abstract: In their seminal work on quantum circuit complexity, Moore and Nilsson conjectured that unitaries of a deceptively simple form—controlled-unitary "staircases"—require circuits of minimum depth Ω(n). If true, this lower bound would answer a quantum-native analogue of the famous NC ≠ P conjecture. We settle the Moore–Nilsson conjecture in the negative by compressing all circuits in the class to depth O(log n), which is the best possible. The parallelization is exact, ancilla-free, and can be computed in poly(n) time. We also parallelize Moore–Nilsson circuits to the best-possible depth of O(√n) in circuits restricted to 2D connectivity. The main ingredient in these proofs is a blockwise precomputation technique that is in some sense a quantum analogue of the method introduced by Arlazarov, Dinic, Kronrod, and Faradzev in classical dynamic programming, sometimes called the "Four-Russians speedup." We also show that this quantum precomputation technique can be applied to more-general classes of circuits. As examples, we construct both exact and approximate parallelizations which give polynomial depth reductions for cascades of controlled log(n)-qubit unitaries.
- merged with #764:
  Constant-Overhead Addressable Gates via Single-Shot Code Switching. Louis Golowich (UC Berkeley & IBM Quantum); Kathleen (Katie) Chang (Yale University & IBM Quantum); Guanyu Zhu (IBM Quantum). Abstract: It is a major challenge to perform addressable logical operations on constant-rate quantum LDPC (qLDPC) codes. Indeed, the overhead of targeting specific logical qubits represents a crucial bottleneck in many quantum fault-tolerance schemes. We introduce a protocol for performing fully addressable logical $CNOT$ or Hadamard gates with constant quantum space-time overhead, on a family of constant-rate and polynomial-distance qLDPC codes. Specifically, our gadgets perform Hadamard on any chosen logical qubit within a code block, and $CNOT$ between any pair of logical qubits, either within a block or across blocks. We also construct constant-overhead gadgets for highly parallel logical operations, including a large class of permutations of logical qubits. Prior protocols for such operations required polynomial space-time overhead with respect to the distance, or else relied on codes with certain symmetries that lack known asymptotic constructions. Our codes are given by tensor (i.e. hypergraph) products of classical codes constructed from lossless expander graphs. To address individual logical qubits, we develop a constant-overhead code-switching procedure between 2- and 3-dimensional product codes, which generalizes Bombin’s dimensional jump (arXiv:1412.5079). We provide rigorous fault-tolerance proofs for our gadgets, and specifically prove a constant threshold under locally stochastic noise. Along the way, we develop a small-set flip decoder for high-dimensional product codes from lossless expanders. Our techniques yield additional interesting consequences, such as single-shot state preparation of 2-dimensional product codes with constant space-time overhead.
  Single-Shot, Universal Protocols via Code Switching. Michael Gullans (NIST & U. Maryland); Yifan Hong (University of Maryland, College Park); Min-Hsiu Hsieh (Hon Hai Research Institute); Ting-Chun Lin (UC San Diego & Hon Hai Research Institute); Shi Jie Samuel Tan (University of Maryland, College Park). Abstract: Code switching is a powerful technique in quantum error correction that allows one to leverage the complementary strengths of different codes to achieve fault-tolerant universal quantum computation. However, existing code-switching protocols, which encapsulate recent generalized lattice surgery approaches, often either require many rounds of measurements to ensure fault-tolerance or suffer from low code rates. We present a single-shot, universal protocol that uses code switching between high-rate quantum codes to perform fault-tolerant quantum computation. To the best of our knowledge, our work contains the first universal fault-tolerant quantum computation protocol that achieves what we term single-shot universality, characterized by (i) single-shot error correction, (ii) single-shot state preparation, as well as (iii) logical gates and logical measurements with constant-depth circuits. We achieve this by showing how to perform single-shot code switching between high-rate homological product codes by developing a generalization of Bombin's dimensional jump for color codes and Hillmann et al.'s single-shot lattice surgery for higher-dimensional topological codes. We introduce a vastly simpler recipe to construct 3D homological product codes with transversal CCZ gates that grants immense flexibility in the choice of expander graphs and local codes, allowing us to expand the search space for codes with good parameters and interesting logical gates. Our work opens an alternative path towards universal fault-tolerant quantum computation with low space-time overhead by circumventing the need for magic state distillation.
- Compressed Permutation Oracles. Joseph Carolan (University of Maryland). Abstract: The analysis of quantum algorithms which query random, invertible permutations has been a long-standing challenge in cryptography. Many techniques which apply to random oracles fail, or are not known to generalize to this setting. As a result, foundational cryptographic constructions involving permutations often lack quantum security proofs. With the aim of closing this gap, we develop and prove soundness of a compressed permutation oracle. Our construction shares many of the attractive features of Zhandry's original compressed function oracle: the purification is a small list of input-output pairs which meaningfully reflect an algorithm's knowledge of the oracle. We then apply this framework to show that the seven-round balanced Feistel construction is a strong quantum PRP, resolving an open question of Zhandry (2012). We further re-prove essentially all known quantum query lower bounds in the random permutation model, notably the collision and preimage resistance of both Sponge and Davies-Meyer, hardness of double-sided zero search and sparse predicate search, and give new lower bounds for cycle finding and the one-more problem.
- Strong random unitaries and fast scrambling. Thomas Schuster (Caltech, Google); Fermi Ma (UC Berkeley, NYU); Alex Lombardi (Princeton); Fernando Brandao (AWS, Caltech); Hsin-Yuan Huang (Caltech, Google). Abstract: Understanding how fast physical systems can resemble Haar-random unitaries is a fundamental question in physics. Many experiments of interest in quantum gravity and many-body physics, including the butterfly effect in quantum information scrambling and the Hayden-Preskill thought experiment, involve queries to a random unitary~$U$ alongside its inverse~$U^\dagger$, conjugate~$U^*$, and transpose~$U^T$. However, conventional notions of approximate unitary designs and pseudorandom unitaries (PRUs) fail to capture these experiments. In this work, we introduce and construct strong unitary designs and strong PRUs that remain robust under all such queries. Our constructions achieve the optimal circuit depth of $\mathcal{O}(\log n)$ for systems of $n$ qubits. We further show that strong unitary designs can form in circuit depth $\mathcal{O}(\log^2 n)$ in circuits composed of independent two-qubit Haar-random gates, and that strong PRUs can form in circuit depth $\mathrm{poly}(\log n)$ in circuits with no ancilla qubits. Our results provide an operational proof of the fast scrambling conjecture from black hole physics: every observable feature of the fastest scrambling quantum systems reproduces Haar-random behavior at logarithmic times.
Regular Contributed Talks
(in order of submission)
- Uncloneable Encryption from Decoupling. Archishna Bhattacharyya (University of Ottawa); Eric Culf (University of Waterloo). Abstract: We show for the first time that uncloneable encryption exists with no computational assumptions, with security inverse-polynomial in the security parameter. We use properties of a monogamy-of-entanglement game associated with the Haar measure encryption to guarantee that any state that succeeds with high probability cannot be close to maximally-entangled between the referee and either of the players, whence we can apply the decoupling principle to show that either player becomes completely uncorrelated, and therefore cannot win significantly better than random guessing.
- A Quantum Approach For Reducing Communications in Classical Secure Computations with Long Outputs. Jiayu Zhang (Zhongguancun Laboratory). Abstract: How could quantum cryptography help us achieve what is not achievable in classical cryptography? In this work we study the classical cryptographic problem where two parties would like to perform secure computations with long outputs. As a basic primitive and example, we first consider the following problem, which we call secure function sampling with long outputs: suppose $f:\{0,1\}^n\rightarrow \{0,1\}^m$ is a public, efficient classical function, where $m$ is big; Alice would like to sample $x$ from its domain and send $f(x)$ to Bob; what Bob knows should be no more than $f(x)$ even if he behaves maliciously. Classical cryptography, like FHE and succinct arguments [Gen09,Kil92,HW15], allows us to achieve this task within communication complexity $O(n+m)$; could we achieve this task with communication complexity independent of $m$? In this work, we first design a quantum cryptographic protocol that achieves secure function sampling with approximate security, within $O(n)$ communication (omitting the dependency on the security parameter and error tolerance). We also prove the classical impossibility using techniques in [HW15], which means that our protocol indeed achieves a type of quantum advantage. Building on the secure function sampling protocol, we further construct protocols for general secure two-party computations [Yao86,GB01] with approximate security, with communication complexity only depending on the input length and the targeted security. In terms of the assumptions, we construct protocols for these problems assuming only the existence of collapsing hash functions [Unr16]; what's more, we also construct a classical-channel protocol for these problems additionally assuming the existence of noisy trapdoor claw-free functions [BCMVV,BKVV].
- Fault Tolerance by Construction. Benjamin Rodatz (University of Oxford); Boldizsár Poór (University of Oxford); Maximilian Rüsch (University of Oxford); Aleks Kissinger (University of Oxford). Abstract: A key challenge in fault-tolerant quantum computing is synthesising and optimising circuits in a noisy environment, as traditional techniques often fail to account for the effect of noise on circuits. In this work, we propose a framework for designing fault-tolerant quantum circuits that are correct by construction. The framework starts with idealised specifications of fault-tolerant gadgets and refines them using provably sound basic transformations. To reason about manipulating circuits while preserving their error correction properties, we define fault equivalence; two circuits are considered fault-equivalent if all undetectable faults on one circuit have a corresponding fault on the other. This guarantees that the effect of undetectable faults on both circuits is the same. We argue that fault equivalence is a concept that is already implicitly present in the literature. Many problems, such as state preparation and syndrome extraction, can be naturally expressed as finding an implementable circuit that is fault-equivalent to an idealised specification. To utilise fault equivalence in a computationally tractable manner, we adapt the ZX calculus, a diagrammatic language for quantum computing. We restrict its rewrite system to not only preserve the underlying linear map but also fault equivalence, i.e. the circuit's behaviour under noise. We show that this rewriting system is complete, meaning that two circuits are fault-equivalent if and only if one can be transformed into the other using fault-equivalent ZX rewrites. Enabled by our framework, we verify, optimise and synthesise new and efficient circuits for syndrome extraction and cat state preparation. We anticipate that fault equivalence can capture and unify different approaches in fault-tolerant quantum computing, paving the way for an end-to-end circuit compilation framework.
- Parallel Repetition for Post-Quantum Arguments. Andrew Huang (Massachusetts Institute of Technology); Yael Tauman Kalai (Massachusetts Institute of Technology). Abstract: In this work, we prove that parallel repetition of public-coin interactive arguments reduces the soundness error at an exponential rate even in the post-quantum setting. Moreover, we generalize this result to hold for threshold verifiers, where the parallel repeated verifier accepts if and only if at least t of the executions are accepted (for some threshold t). Prior to this work, these results were known only when the cheating prover is assumed to be classical. We also prove a similar result for three-message private-coin arguments. Previously, Bostanci, Qian, Spooner, and Yuen (STOC 2024) proved such a parallel repetition result in the more general setting of quantum protocols, where the communication may be quantum. We consider only protocols where the verifier is classical, but obtain a simpler analysis, and for the more general setting of threshold verifiers.
- Quantum circuit lower bounds in the magic hierarchy. Natalie Parham (Columbia University). Abstract: We introduce the \emph{magic hierarchy}, a quantum circuit model that alternates between arbitrary-sized Clifford circuits and constant-depth circuits with two-qubit gates ($\textsf{QNC}^0$). This model unifies existing circuit models, such as $\textsf{QAC}^0_f$ and models with adaptive intermediate measurements. Despite its generality, we are able to prove nontrivial lower bounds. We prove new lower bounds in the first level of the hierarchy, showing that certain explicit quantum states cannot be approximately prepared by circuits consisting of a Clifford circuit followed by $\textsf{QNC}^0$. These states include ground states of some topologically ordered Hamiltonians and nonstabilizer quantum codes. Our techniques exploit the rigid structure of stabilizer codes and introduce an infectiousness property: if even a single state in a high distance code can be approximately prepared by one of these circuits, then the entire subspace must lie close to a perturbed stabilizer code. We also show that proving state preparation lower bounds beyond a certain level of the hierarchy would imply \emph{classical} circuit lower bounds beyond the reach of current techniques in complexity theory. More broadly, our techniques go beyond lightcone-based methods and highlight how the magic hierarchy provides a natural framework for connecting circuit complexity, condensed matter, and Hamiltonian complexity.
- Unified Framework for Quantum Code Embedding. Andrew C. Yuan (University of Maryland, College Park). Abstract: Given a Calderbank-Shor-Steane (CSS) code, it is sometimes necessary to modify the code by adding an arbitrary number of physical qubits and parity checks. Motivations may include concatenating codes, embedding low-density parity check (LDPC) codes into finite-dimensional Euclidean space, or reducing the weights of parity checks. During this embedding, it is essential that the modified code possesses an isomorphic set of logical qubits as the original code. However, despite numerous explicit constructions, the conditions under which such a property holds are not known in general. Therefore, using the language of homological algebra, we provide a unified framework that guarantees a natural isomorphism between the output and input codes. In particular, we explicitly show how previous works fit into our framework.
- High-Temperature Fermionic Gibbs States are Mixtures of Gaussian States. Akshar Ramkumar (California Institute of Technology); Yiyi Cai (California Institute of Technology); Yu Tong (Duke University); Jiaqing Jiang (California Institute of Technology & University of California, Berkeley). Abstract: Efficient simulation of a quantum system generally relies on structural properties of the quantum state. Motivated by the recent results by Bakshi et al. on the sudden death of entanglement in high-temperature Gibbs states of quantum spin systems, we study the high-temperature Gibbs states of bounded-degree local fermionic Hamiltonians, which include the special case of geometrically local fermionic systems. We prove that at a sufficiently high temperature that is independent of the system size, the Gibbs state is a probabilistic mixture of fermionic Gaussian states. This forms the basis of an efficient classical algorithm to prepare the Gibbs state by sampling from a distribution of fermionic Gaussian states. As a contrasting example, we show that high-temperature Gibbs states of the Sachdev-Ye-Kitaev (SYK) model are not convex mixtures of Gaussian states.
- On the Cryptographic Foundations of Interactive Quantum Advantage. Kabir Tomer (University of Illinois Urbana-Champaign); Mark Zhandry (Stanford University). Abstract: In this work, we study the hardness required to achieve proofs of quantumness (PoQ), which in turn capture (potentially interactive) quantum advantage. A ``trivial'' PoQ is to simply assume an average-case hard problem for classical computers that is easy for quantum computers. However, there is much interest in ``non-trivial'' PoQ that actually rely on quantum hardness assumptions, as these are often a starting point for more sophisticated protocols such as classical verification of quantum computation (CVQC). We show several lower bounds for the hardness required to achieve non-trivial PoQ, in particular showing that they likely require cryptographic hardness, with different types of cryptographic hardness being required for different variations of non-trivial PoQ. In particular, our results help explain the challenges in using lattices to build publicly verifiable PoQ and its various extensions such as CVQC.
- Catalytic z-rotations in constant T-depth. Isaac Kim (UC Davis). Abstract: We show that the $T$-depth of any single-qubit $z$-rotation can be reduced to $3$ if a certain catalyst state is available. To achieve an $\epsilon$-approximation, it suffices to have a catalyst state of size polynomial in $\log(1/\epsilon)$. This implies that $\mathsf{QNC}^0_f/\mathsf{qpoly}$ admits a finite universal gate set consisting of Clifford+$T$. In particular, there are catalytic constant $T$-depth circuits that approximate multi-qubit Toffoli, adder, and quantum Fourier transform arbitrarily well. We also show that the catalyst state can be prepared in time polynomial in $\log (1/\epsilon)$.
- Causal decompositions of 1D quantum cellular automata. Augustin Vanrietvelde (Télécom Paris -- Institut Polytechnique de Paris); Octave Mestoudjian (Université Paris-Saclay); Pablo Arrighi (Inria Saclay). Abstract: Understanding quantum theory's causal structure stands out as a major matter, since it radically departs from classical notions of causality. We present advances in the research program of causal decompositions, which investigates the existence of an equivalence between the causal and the compositional structures of unitary channels. Our results concern one-dimensional Quantum Cellular Automata (1D QCAs), i.e.\ unitary channels over a line of N quantum systems (with or without periodic boundary conditions) that feature a causality radius r: a given input cannot causally influence outputs at a distance more than r. We prove that, for N ≥ 4r + 1, 1D QCAs all admit causal decompositions: a unitary channel is a 1D QCA if and only if it can be decomposed into a unitary routed circuit of nearest-neighbour interactions, in which its causal structure is compositionally obvious. This provides the first constructive form of 1D QCAs with causality radius one or more, fully elucidating their structure. In addition, we show that this decomposition can be taken to be translation-invariant for the case of translation-invariant QCAs. Our proof of these results makes use of innovative algebraic techniques, leveraging a new framework for capturing partitions into non-factor sub-C* algebras.
- Fourier Spectrum of Noisy Quantum Algorithms. Uma Girish (Columbia University). Abstract: Quantum computing promises exponential speedups for certain problems, yet fully universal quantum computers remain out of reach and near-term devices are inherently noisy. Motivated by this, we study noisy quantum algorithms and the landscape between BQP and BPP. We build on a powerful technique to differentiate quantum and classical algorithms called the level-$\ell$ Fourier growth (the sum of absolute values of Fourier coefficients of sets of size $\ell$) and show that it can also be used to differentiate quantum algorithms based on the types of resources used. We show that noise acting on a quantum algorithm dampens its Fourier growth in ways intricately linked to the type of noise. Concretely, we study noisy models of quantum computation where highly mixed states are prevalent, namely: DQC_k algorithms, where k qubits are clean and the rest are maximally mixed, and 1/2-BQP algorithms, where the initial state is maximally mixed, but the algorithm is given knowledge of the initial state at the end of the computation. We establish upper bounds on the Fourier growth of DQC_k, 1/2-BQP and BQP algorithms and leverage the differences between these bounds to derive oracle separations between these models. In particular, we show that the 2-Forrelation and 3-Forrelation problems require $N^{\Omega(1)}$ queries in the DQC_1 and 1/2-BQP models respectively. Our results are proved using a new matrix decomposition lemma that might be of independent interest.
- Efficient and simple Gibbs sampling state preparation of the 2D toric code via duality to classical Ising chains. Pablo Páez Velasco (Universidad Complutense de Madrid / ICMAT); Niclas Schilling (University of Tübingen); Samuel O. Scalet (University of Cambridge / IBM); Frank Verstraete (University of Cambridge / Ghent University); Ángela Capel (University of Cambridge / University of Tübingen). Abstract: We introduce the notion of polynomial-depth duality transformations, which relates two sets of operator algebras through a conjugation by a poly-depth quantum circuit, and make use of this to construct efficient Gibbs samplers for a variety of interesting quantum Hamiltonians as they are poly-depth dual to classical Hamiltonians. This is for example the case for the 2D toric code, which is demonstrated to be poly-depth dual to two decoupled classical Ising spin chains for any system size, and we give evidence that such dualities hold for a wide class of stabilizer Hamiltonians. Additionally, we extend the above notion of duality to Lindbladians in order to show that mixing times and other quantities such as the spectral gap or the modified logarithmic Sobolev inequality are preserved under duality.
- Random Unitaries in Constant (Quantum) Time. Benjamin Foxman (Yale University); Natalie Parham (Columbia University); Francisca Vasconcelos (UC Berkeley); Henry Yuen (Columbia University). Abstract: Random unitaries are a central object of study in quantum information, with applications to quantum computation, quantum many-body physics, and quantum cryptography. Recent work has constructed unitary designs and pseudorandom unitaries (PRUs) using $\Theta(\log \log n)$-depth unitary circuits with two-qubit gates. In this work, we show that unitary designs and PRUs can be efficiently constructed in several well-studied models of \emph{constant-time} quantum computation (i.e., the time complexity on the quantum computer is independent of the system size). These models are constant-depth circuits augmented with certain nonlocal operations, such as (a) many-qubit TOFFOLI gates, (b) many-qubit FANOUT gates, or (c) mid-circuit measurements with classical feedforward control. Recent advances in quantum computing hardware suggest experimental feasibility of these models in the near future. Our results demonstrate that unitary designs and PRUs can be constructed in much weaker circuit models than previously thought. Furthermore, our construction of PRUs in constant-depth with many-qubit TOFFOLI gates shows that, under cryptographic assumptions, there is no polynomial-time learning algorithm for the circuit class $\mathsf{QAC^0}$. Finally, our results suggest a new approach towards proving that PARITY is not computable in $\mathsf{QAC^0}$, a long-standing question in quantum complexity theory.
- Can effective descriptions of bosonic systems be considered complete?Francesco Arzani (ENS - INRIA); Robert Booth (University of Oxford); Ulysse Chabaud (ENS - INRIA)[abstract]Abstract: Bosonic statistics give rise to remarkable phenomena, from the Hong-Ou-Mandel effect to Bose-Einstein condensation, with applications spanning fundamental science to quantum technologies. Modelling bosonic systems relies heavily on effective descriptions: typically, truncating their infinite-dimensional state space or restricting their dynamics to a simple class of Hamiltonians, such as polynomials of canonical operators. However, many natural bosonic Hamiltonians do not belong to these simple classes, and some quantum effects harnessed by bosonic computers inherently require infinite-dimensional spaces. Can we trust results obtained with such simplifying assumptions to capture real effects? We solve this outstanding problem, showing that these effective descriptions do correctly capture the physics of bosonic systems. Our technical contributions are twofold: first, we prove that any physical bosonic unitary evolution can be accurately approximated by a finite-dimensional unitary evolution; second, we show that any finite-dimensional unitary evolution can be generated exactly by a bosonic Hamiltonian that is a polynomial of canonical operators. Beyond their fundamental significance, our results have implications for classical and quantum simulations of bosonic systems, provide universal methods for engineering bosonic quantum states and Hamiltonians, show that polynomial Hamiltonians generate universal gate sets for quantum computing over bosonic modes, and lead to a bosonic Solovay-Kitaev theorem.
- Tile codesVincent Steffan (IQM Germany); Shin Ho Choe (IQM Germany); Nikolas P. Breuckmann (Breuqmann Ltd.); Francisco Revson Fernandes Pereira (IQM Germany); Jens Niklas Eberhardt (Johannes Gutenberg-Universität Mainz); Zijian Liang (Peking University); Yu-An Chen (Peking University)[abstract]Abstract: We introduce tile codes, a simple yet powerful way of constructing quantum codes that are local on a planar 2D-lattice. Tile codes generalize the usual surface code by allowing for a bit more flexibility in terms of locality and stabilizer weight. Our construction does not compromise on the fact that the codes are local on a lattice with open boundary conditions. Despite its simplicity, we use our construction to find codes with parameters [[288,8,12]] using weight-6 stabilizers and [[288,8,14]] using weight-8 stabilizers, outperforming all previously known constructions in this direction. Allowing for a slightly higher non-locality, we find a [[512,18,19]] code using weight-8 stabilizers, which outperforms the rotated surface code by a factor of more than 12. Our approach provides a unified framework for understanding the structure of codes that are local on a 2D planar lattice and offers a systematic way to explore the space of possible code parameters. In particular, due to its simplicity, the construction naturally accommodates various types of boundary conditions and stabilizer configurations, making it a versatile tool for quantum error correction code design.
- Unfolded distillation: very low-cost magic state preparation for biased-noise qubitsDiego Ruiz (Inria - Alice&Bob); Jérémie Guillaud (Alice&Bob); Christophe Vuillot (Alice&Bob); Mazyar Mirrahimi (Inria)[abstract]Abstract: Magic state distillation enables universal fault-tolerant quantum computation by implementing non-Clifford gates via the preparation of high-fidelity magic states. However, it comes at the cost of substantial logical-level overhead in both space and time. In this work, we propose a very low-cost magic state distillation scheme for biased-noise qubits. By leveraging the noise bias, our scheme enables the preparation of a magic state with a logical error rate of 3×10^(−7), using only 53 qubits and 5.5 error correction rounds, under a noise bias of η ≳ 5×10^6 and a phase-flip noise rate of 0.1%. This reduces the circuit volume by more than one order of magnitude relative to magic state cultivation for unbiased-noise qubits and by more than two orders of magnitude relative to standard magic state distillation. Moreover, our scheme provides three key advantages over previous proposals for biased-noise qubits. First, it only requires nearest-neighbor two-qubit gates on a 2D lattice. Second, the logical fidelity remains nearly identical even at a more modest noise bias of η ≳ 80, at the cost of a slightly increased circuit volume. Third, the scheme remains effective even at high physical phase-flip rates, in contrast to previously proposed approaches whose circuit volume grows exponentially with the error rate. Our construction is based on unfolding the X stabilizer group of the Hadamard 3D quantum Reed-Muller code in 2D, enabling distillation at the physical level rather than the logical level, and is therefore referred to as unfolded distillation.
- Obfuscation of Unitary Quantum ProgramsMiryam Mi-Ying Huang (University of Southern California); Er-Cheng Tang (University of Washington)[abstract]Abstract: Program obfuscation aims to hide the inner workings of a program while preserving its functionality. In the quantum setting, recent works have obtained obfuscation schemes for specialized classes of quantum circuits. For instance, Bartusek, Brakerski, and Vaikuntanathan (STOC 2024) constructed a quantum state obfuscation scheme, which supports the obfuscation of quantum programs represented as quantum states for pseudo-deterministic quantum programs with classical inputs and outputs in the classical oracle model. In this work, we improve upon existing results by constructing the first quantum state obfuscation scheme for unitary (or approximately unitary) quantum programs supporting quantum inputs and outputs in the classical oracle model. At the core of our obfuscation scheme are two novel ingredients: a functional quantum authentication scheme that allows key holders to learn specific functions of the authenticated quantum state with simulation-based security, and a compiler that represents an arbitrary quantum circuit as a projective linear-plus-measurement quantum program described by a sequence of non-adaptive Clifford gates interleaved with adaptive and compatible measurements.
- Optimal quantum simulation of linear non-unitary dynamicsGuang Hao Low (Google); Rolando D. Somma (Google)[abstract]Abstract: We present a quantum algorithm for simulating the time evolution generated by any bounded, time-dependent operator $-A$ with non-positive logarithmic norm, thereby serving as a natural generalization of the Hamiltonian simulation problem. Our method generalizes the recent Linear-Combination-of-Hamiltonian-Simulation (LCHS) framework. In instances where $A$ is time-independent, we provide a block-encoding of the evolution operator $e^{-At}$ with $\mathcal{O}\big(t\log\frac{1}{\epsilon}\big)$ queries to the block-encoding oracle for $A$. We also show how the normalized evolved state can be prepared with $\mathcal{O}(1/\|e^{-At}\ket{\vec{u}_0}\|)$ queries to the oracle that prepares the normalized initial state $\ket{\vec{u}_0}$. These complexities are optimal in all parameters and improve the error scaling over prior results. Furthermore, we show that any improvement of our approach exceeding a constant factor of approximately 3 is infeasible. For general time-dependent operators $A$, we also prove that a uniform trapezoidal rule on our LCHS construction yields exponential convergence, leading to simplified quantum circuits with improved gate complexity compared to prior nonuniform-quadrature methods.
- merged with #377:Quantum Lifting for Invertible Permutations and Ideal CiphersAlexandru Cojocaru (University of Edinburgh); Minki Hhan (University of Texas at Austin); Qipeng Liu (UC San Diego); Takashi Yamakawa (NTT Social Informatics Laboratories); Aaram Yun (Ewha Womans University)[abstract]Abstract: In this work, we derive the first lifting theorems for establishing security in the quantum random permutation and ideal cipher models. These theorems relate the success probability of an arbitrary quantum adversary to that of a classical algorithm making only a small number of classical queries. By applying these lifting theorems, we improve previous results and obtain new quantum query complexity bounds and post-quantum security results. Notably, we derive tight bounds for the quantum hardness of the double-sided zero search game and establish the post-quantum security for the preimage resistance, one-wayness, and multi-collision resistance of constant-round sponge, as well as the collision resistance of the Davies-Meyer construction.The Sponge is Quantum IndifferentiableGorjan Alagic (University of Maryland and NIST); Joseph Carolan (University of Maryland, College Park); Christian Majenz (Technical University of Denmark); Saliha Tokat (Technical University of Denmark)[abstract]Abstract: The sponge is a cryptographic construction that turns a public permutation into a hash function. When instantiated with the Keccak permutation, the sponge forms the NIST SHA-3 standard. SHA-3 is a core component of most post-quantum public-key cryptography schemes slated for worldwide adoption. While one can consider many security properties for the sponge, the ultimate one is indifferentiability from a random oracle, or simply indifferentiability. The sponge was proved indifferentiable against classical adversaries by Bertoni et al. in 2008. 
Despite significant efforts in the years since, little is known about sponge security against quantum adversaries, even for simple properties like preimage or collision resistance beyond a single round. This is primarily due to the lack of a satisfactory quantum analog of the lazy sampling technique for permutations. In this work, we develop a specialized technique that overcomes this barrier in the case of the sponge. We prove that the sponge is in fact indifferentiable from a random oracle against quantum adversaries. Our result establishes that the domain extension technique behind SHA-3 is secure in the post-quantum setting. Our indifferentiability bound for the sponge is a loose O(poly(q)2^(−min(r,c)/4)), but we also give bounds on preimage and collision resistance that are tighter.
- merged with #517:Quantitative Quantum Soundness for Bipartite Compiled Bell Games via the Sequential NPA HierarchyXiangling Xu (Inria Paris-Saclay); Igor Klep (University of Ljubljana, Faculty of Mathematics and Physics); Connor Paddock (University of Ottawa); Marc-Olivier Renou (Inria Paris-Saclay, CPHT, Ecole polytechnique, Institut Polytechnique de Paris, Palaiseau); Simon Schmidt (Ruhr University Bochum); Lucas Tendick (Inria Paris-Saclay); Yuming Zhao (University of Copenhagen)[abstract]Abstract: Compiling Bell games under cryptographic assumptions replaces the need for physical separation, allowing nonlocality to be probed with a single untrusted device. While Kalai et al. (STOC'23) showed that this compilation preserves quantum advantages, its quantitative quantum soundness has remained an open problem. We address this gap with two primary contributions. First, we establish the first quantitative quantum soundness bounds for every bipartite compiled Bell game whose optimal quantum strategy is finite-dimensional: any polynomial-time prover's score in the compiled game is negligibly close to the game's ideal quantum value. More generally, for all bipartite games we show that the compiled score cannot significantly exceed the bounds given by a newly formalized convergent sequential Navascués-Pironio-Acín (NPA) hierarchy. Second, we provide a full characterization of this sequential NPA hierarchy, establishing it as a robust numerical tool that is of independent interest. 
Finally, for games without finite-dimensional optimal strategies, we explore the necessity of NPA approximation error for quantitatively bounding their compiled scores, linking these considerations to the complexity conjecture $\mathrm{MIP}^{\mathrm{co}}=\mathrm{coRE}$ and open challenges such as quantum homomorphic encryption correctness for "weakly commuting" quantum registers.A convergent sum-of-squares hierarchy for compiled nonlocal gamesDavid Cui (MIT); Chirag Falor (MIT/Citadel Securities); Anand Natarajan (MIT); Tina Zhang (MIT)[abstract]Abstract: We continue the line of work initiated by Kalai et al. (STOC '23), studying "compiled" nonlocal games played between a classical verifier and a single quantum prover, with cryptography simulating the spatial separation between the players. The central open question in this area is to understand the soundness of this compiler against quantum strategies, and apart from results for specific games, all that is known is the recent "qualitative" result of Kulpe et al. (STOC '25) showing that the success probability of a quantum prover in the compiled game is bounded by the game's quantum commuting-operator value in the limit as the cryptographic security parameter goes to infinity. In this work, we make progress towards a quantitative understanding of quantum soundness for general games, by giving a concrete framework to bound the quantum value of compiled nonlocal games. Building on the result of Kulpe et al. together with the notion of "nice" sum-of-squares certificates, introduced by Natarajan and Zhang (FOCS '23) to bound the value of the compiled CHSH game, we extend the niceness framework and construct a hierarchy of semidefinite programs that searches exclusively over nice certificates. We show that this hierarchy converges to the optimal quantum value of the game. Additionally, we present a transformation to make any degree-1 sum-of-squares certificate nice. 
This approach provides a systematic method to reproduce all known bounds for special classes of games together with Kulpe et al.'s bound for general games from the same framework.
- Universal quantum computing in two dimensions without getting tied in knotsJulio Magdalena de la Fuente (Freie Universität Berlin); Margarita Davydova (Caltech); Andreas Bauer (MIT); Mark Webster (University College London); Dominic Williamson (University of Sydney); Benjamin Brown (IBM)[abstract]Abstract: We show how to perform scalable fault-tolerant non-Clifford gates in two dimensions by introducing domain walls between the surface code and a non-Abelian topological code whose codespace is stabilized by Clifford operators. We formulate a path integral framework which provides both a macroscopic picture for different logical gates as well as a way to derive the associated microscopic circuits. We present explicit protocols and planar non-Clifford circuits that implement non-Clifford logic gates on both surface codes as well as color codes on different geometries. The logical action of the protocol is determined by the spacetime geometry, using the same bulk circuit, composed of simple 2D local circuits of similar complexity to commonly used stabilizer-readout circuits. We present fault-tolerant schemes for logical Clifford measurements as well as diagonal unitary gates in the third level of the Clifford hierarchy, such as the T, CS, and CCZ gates. We also show an equivalence between our approach and prior proposals where a 2D array of qubits reproduces the action of a transversal gate in a 3D stabilizer code over time, thus establishing a new connection between 3D codes and 2D non-Abelian topological phases. We prove a threshold theorem for our protocols under local stochastic circuit noise using a just-in-time decoder to correct the non-Abelian code.
- merged with #303:Complexity of mixed Schatten norms of quantum mapsJan Kochanowski (Institut Polytechnique de Paris, INRIA Saclay); Omar Fawzi (ENS Lyon, INRIA Lyon); Cambyse Rouzé (Institut Polytechnique de Paris, INRIA Saclay)[abstract]Abstract: We study the complexity of computing the mixed Schatten $\|\Phi\|_{q\to p}$ norms of linear maps $\Phi$ between matrix spaces. When $\Phi$ is completely positive, we show that $\| \Phi \|_{q \to p}$ can be computed efficiently when $q \geq p$. The regime $q \geq p$ is known as the non-hypercontractive regime and is also known to be easy for the mixed vector norms $\ell_{q} \to \ell_{p}$ [Boyd, 1974]. However, even for entanglement-breaking completely-positive trace-preserving maps $\Phi$, we show that computing $\| \Phi \|_{1 \to p}$ is NP-complete when $p>1$. Moving beyond the completely-positive case and considering $\Phi$ to be a difference of entanglement-breaking completely-positive trace-preserving maps, we prove that computing $\| \Phi \|^+_{1 \to 1}$ is NP-complete. In contrast, for the completely-bounded (cb) case, we describe a polynomial-time algorithm to compute $\|\Phi\|_{cb,1\to p}$ and $\|\Phi\|^+_{cb,1\to p}$ for any linear map $\Phi$ and $p\geq1$.Computational aspects of the trace norm contraction coefficientIdris Delsol (Inria, ENS Lyon); Omar Fawzi (Inria, ENS Lyon); Jan Kochanowski (Inria, Télécom Paris - LTCI, Institut Polytechnique de Paris); Akshay Ramachandran (Inria, ENS Lyon)[abstract]Abstract: We show that approximating the trace norm contraction coefficient of a quantum channel within a constant factor is NP-hard. Equivalently, this shows that determining the optimal success probability for encoding a bit in a quantum system undergoing noise is NP-hard. This contrasts with the classical analogue of this problem, which can clearly be solved efficiently. Our hardness results also hold for deciding if the contraction coefficient is equal to 1. 
As a consequence, we show that deciding if a non-commutative graph has an independence number of at least 2 is NP-hard. In addition, we establish a converging hierarchy of semidefinite programming upper bounds on the contraction coefficient.
- Distributed Quantum Advantage for Local ProblemsAlkida Balliu (Gran Sasso Science Institute); Sebastian Brandt (CISPA Helmholtz Center for Information Security); Filippo Casagrande (Gran Sasso Science Institute); Xavier Coiteux-Roy (University of Calgary / Technical University of Munich); Francesco d'Amore (Gran Sasso Science Institute); Barbara Keller (Aalto University); Massimo Equi (Aalto University); François Le Gall (Nagoya University); Henrik Lievonen (Aalto University); Augusto Modanese (Aalto University); Dennis Olivetti (Gran Sasso Science Institute); Marc-Olivier Renou (Inria Paris-Saclay / Ecole Polytechnique); Jukka Suomela (Aalto University); Gustav Schmid (University of Freiburg); Lucas Tendick (Inria Paris-Saclay / Ecole Polytechnique); Isadora Veeren (Inria Paris-Saclay / Ecole Polytechnique)[abstract]Abstract: We present the first \emph{local} problems that show a super-constant separation between the classical randomized LOCAL model of distributed computing and its quantum counterpart. By prior work, such a separation was known only for an artificial graph problem with an inherently \emph{global} definition [Le Gall et al.\ 2019]. This submission consists of two works: \begin{itemize} \item The first work\footnote{{\bf Distributed Quantum Advantage for Local Problems} by Balliu, Brandt, Coiteux-Roy, d'Amore, Equi, Le Gall, Lievonen, Modanese, Olivetti, Renou, Suomela, Tendick and Veeren. \emph{Proceedings of the 57th ACM Symposium on Theory of Computing (STOC 2025)}, pp. 451-462, 2025.} presents a problem that we call \emph{iterated GHZ}, which is defined using only local constraints. We show that in graphs of maximum degree $\Delta$, any classical (deterministic or randomized) LOCAL model algorithm will require $\Omega(\Delta)$ rounds to solve the iterated GHZ problem, while the problem can be solved in $1$ round in quantum-LOCAL. 
\item The second work\footnote{{\bf Distributed Quantum Advantage in Locally Checkable Labeling Problems} by Balliu, Casagrande, d'Amore, Equi, Keller, Lievonen, Olivetti, Schmid and Suomela. arXiv:2504.05191, 2025.} shows the first known example of a local problem \emph{over a constant-degree graph} that admits asymptotic distributed quantum advantage in the LOCAL model of distributed computing: our problem can be solved in $O(\log n)$ communication rounds in the quantum-LOCAL model, but it requires $\Omega(\log n\cdot\log^{0.99}\log n)$ communication rounds in the classical randomized-LOCAL model. \end{itemize}
- Symmetric localizable multiparty quantum measurementsJef Pauwels (Université de Genève); Cyril Branciard (Université Grenoble Alpes); Alejandro Pozas-Kerstjens (Université de Genève); Nicolas Gisin (Université de Genève)[abstract]Abstract: We construct and classify entangled measurements with tetrahedral symmetry as group-covariant orbits of a fiducial state. For this class, localizability follows from the Clifford level of a diagonal phase in the fiducial, yielding an explicit hierarchy and efficient constructions. The framework recovers the two-qubit Elegant Joint Measurement (EJM) uniquely and extends it to multiqubit EJMs with identical local geometry but inequivalent global entanglement classes for $n \ge 3$. A continuous family interpolating between the Bell measurement and the EJM realizes an infinite set of localizable tetrahedral bases.
- Trading Mathematical for Physical Simplicity: Bialgebraic Structures in Matrix Product Operator SymmetriesYuhan Liu (Max Planck Institute of Quantum Optics); Andras Molnar (University of Vienna); Xiao-Qi Sun (Max Planck Institute of Quantum Optics); Frank Verstraete (University of Cambridge, Ghent University); Kohtaro Kato (Nagoya University); Laurens Lootens (University of Cambridge)[abstract]Abstract: Despite recent advances in the lattice representation theory of (generalized) symmetries, many simple quantum spin chains of physical interest are not included in the rigid framework of fusion categories and weak Hopf algebras. We demonstrate that this problem can be overcome by relaxing the requirements on the underlying algebraic structure, and show that general matrix product operator symmetries are described by a pre-bialgebra. As a guiding example, we focus on the anomalous $\mathbb Z_2$ symmetry of the XX model, which manifests the mixed anomaly between its $U(1)$ momentum and winding symmetry. We show how this anomaly is embedded into the non-semisimple corepresentation category, providing a novel mechanism for realizing such anomalous symmetries on the lattice. Additionally, the representation category which describes the renormalization properties is semisimple and semi-monoidal, which provides a new class of mixed state renormalization fixed points. Finally, we show that up to a quantum channel, this anomalous $\mathbb Z_2$ symmetry is equivalent to a more conventional MPO symmetry obtained on the boundary of a double semion model. In this way, our work provides a bridge between well-understood topological defect symmetries and those that arise in more realistic models.
- Entanglement sharing schemesAlex May (Perimeter Institute); Zahra Khanian (Perimeter Institute); Dongjin Lee (Perimeter Institute); Debbie Leung (University of Waterloo, Perimeter Institute); Zhi Li (IBM Quantum, National Research Council of Canada, Perimeter Institute); Takato Mori (Perimeter Institute, Rikkyo University); Stanley Miao (Perimeter Institute, Institute for Quantum Computing); Farzin Salek (Institute for Quantum Computing, Freie Universität Berlin); Jinmin Yi (Perimeter Institute); Beni Yoshida (Perimeter Institute)[abstract]Abstract: We ask how quantum correlations can be distributed among many subsystems. To address this, we define entanglement sharing schemes (ESS) where certain pairs of subsystems allow entanglement to be recovered via local operations, while other pairs must not. ESS schemes come in two variants, one where the partner system with which entanglement should be prepared is known, and one where it is not. In the case of known partners, we fully characterize the access structures realizable for ESS when using stabilizer states, construct efficient schemes for threshold access structures, and give a conjecture for the access structures realizable with general states. In the unknown partner case, we again give a complete characterization in the stabilizer setting, additionally give a complete characterization of the case where there are no restrictions on unauthorized pairs, and we prove a set of necessary conditions on general schemes which we conjecture are also sufficient. Finally, we give an application of the theory of entanglement sharing to resolve an open problem related to the distribution of entanglement in response to time-sensitive requests in quantum networks.
- Sum of Squares Spectral AmplificationRobbie King (Google & UC Berkeley); Guang Hao Low (Google); Dominic Berry (Macquarie University); Qiushi Han (University of Illinois Urbana-Champaign); Eugene DePrince (Florida State University); Alec White (Google); Ryan Babbush (Google); Rolando Somma (Google); Nick Rubin (Google)[abstract]Abstract: We present sum-of-squares spectral amplification (SOSSA), a framework for improving quantum simulation relevant to low-energy problems. We show how SOSSA can be applied to problems like energy and phase estimation and provide fast quantum algorithms for these problems that significantly improve over prior art. We analyze the performance of SOSSA on the Sachdev-Ye-Kitaev model, a representative strongly correlated system, and demonstrate asymptotic speedups over generic simulation methods by a factor of the square root of the system size. We then apply SOSSA to electronic structure problems in quantum chemistry, yielding a factor of 4 to 195 speedup over the state of the art in ground-state energy estimation for models of Iron-Sulfur complexes and a CO2-fixation catalyst.
- merged with #384:Quantitative quantum soundness for all multipartite compiled nonlocal gamesXiangling Xu (Inria Paris-Saclay); Matilde Baroni (Sorbonne Université, CNRS, LIP6); Igor Klep (University of Ljubljana, Faculty of Mathematics and Physics); Dominik Leichtle (School of Informatics, University of Edinburgh); Marc-Olivier Renou (Inria Paris-Saclay, CPHT, Ecole polytechnique, Institut Polytechnique de Paris, Palaiseau); Ivan Šupić (Université Grenoble Alpes, CNRS, Grenoble INP, LIG); Lucas Tendick (Inria Paris-Saclay)[abstract]Abstract: Compiled nonlocal games transfer the power of Bell-type multi-prover tests into a single-device setting by replacing spatial separation with cryptography. Concretely, the KLVY compiler (STOC'23) maps any multi-prover game to an interactive single-prover protocol, using quantum homomorphic encryption. A crucial security property of such compilers is quantum soundness, which ensures a dishonest quantum prover cannot exceed the original game's quantum value. For practical cryptographic implementations, this soundness must be quantitative, providing concrete bounds, rather than merely asymptotic. While quantitative quantum soundness has been established for the KLVY compiler in the bipartite case, it has only been shown asymptotically for multipartite games. This is a significant gap, as multipartite nonlocality exhibits phenomena with no bipartite analogue, and the difficulty of enforcing space-like separation makes single-device compilation especially compelling. This work closes this gap by showing the quantitative quantum soundness of the KLVY compiler for all multipartite nonlocal games. On the way, we introduce an NPA-like hierarchy for quantum instruments and prove its completeness, thereby characterizing correlations from non-signaling sequential strategies. 
We further develop novel geometric arguments for the decomposition of sequential strategies into their signaling and non-signaling parts, which might be of independent interest.Bounding the asymptotic quantum value of all multipartite compiled non-local gamesMatilde Baroni (Sorbonne Université, CNRS, LIP6); Dominik Leichtle (School of Informatics, University of Edinburgh); Siniša Janković (Faculty of Physics, University of Belgrade); Ivan Šupić (Université Grenoble Alpes, CNRS, Grenoble INP, LIG)[abstract]Abstract: Non-local games are a powerful tool to distinguish between correlations possible in classical and quantum worlds. Kalai et al. (STOC'23) proposed a compiler that converts multipartite non-local games into interactive protocols with a single prover, relying on cryptographic tools to remove the assumption of physical separation of the players. While quantum completeness and classical soundness of the construction have been established for all multipartite games, quantum soundness is known only in the special case of bipartite games. In this paper, we prove that Kalai \emph{et al.}'s compiler indeed achieves quantum soundness for all multipartite compiled non-local games, by showing that any correlations that can be generated in the asymptotic case correspond to quantum commuting strategies. Our proof uses techniques from the theory of operator algebras, and relies on a characterisation of sequential operationally no-signalling strategies as quantum commuting operator strategies in the multipartite case, thereby generalising several previous results. On the way, we construct universal C*-algebras of sequential PVMs and prove a new chain rule for Radon-Nikodym derivatives of completely positive maps on C*-algebras which may be of independent interest.
- Gap-preserving reductions and RE-completeness of independent set gamesLaura Mančinska (University of Copenhagen); Pieter Spaas (University of Copenhagen); Taro Spirig (University of Copenhagen); Matthijs Vernooij (TU Delft)[abstract]Abstract: In complexity theory, gap-preserving reductions play a crucial role in studying hardness of approximation and in analyzing the relative complexity of multiprover interactive proof systems. In the quantum setting, multiprover interactive proof systems with entangled provers correspond to gapped promise problems for nonlocal games, and the recent result MIP*=RE shows that these are in general undecidable. However, the relative complexity of problems within MIP* is still not well-understood, as establishing gap-preserving reductions in the quantum setting presents new challenges. In this paper, we introduce a framework to study such reductions and use it to establish MIP*-completeness of the gapped promise problem for the natural class of independent set games. In such a game, the goal is to determine whether a given graph contains an independent set of a specified size. We construct families of independent set games with constant question size for which the gapped promise problem is undecidable. In contrast, the same problem is decidable in polynomial time in the classical setting. To carry out our reduction, we establish a new stability theorem, which could be of independent interest, allowing us to perturb families of almost PVMs to genuine PVMs.
- merged with #267: Entanglement theory with limited computational resources. Lorenzo Leone (Freie Universität Berlin); Jacopo Rizzo (Freie Universität Berlin); Jens Eisert (Freie Universität Berlin); Sofiene Jerbi (Freie Universität Berlin)[abstract]Abstract: The precise quantification of the limits to manipulating quantum resources lies at the core of quantum information theory. However, standard information-theoretic analyses do not consider the actual computational complexity involved in performing certain tasks. Here, we address this issue within the realm of entanglement theory, finding that accounting for computational efficiency substantially changes what can be achieved using entangled resources. We consider two key figures of merit: the computational distillable entanglement and the computational entanglement cost. These measures quantify the optimal rates of entangled bits that can be extracted from or used to dilute many identical copies of n-qubit bipartite pure states, using computationally efficient local operations and classical communication. We demonstrate that computational entanglement measures diverge significantly from their information-theoretic counterparts. While the information-theoretic distillable entanglement is determined by the von Neumann entropy of the reduced state, we show that the min-entropy governs the computationally efficient setting. On the other hand, computationally efficient entanglement dilution requires maximal consumption of entangled bits, even for nearly unentangled states. Furthermore, in the worst-case scenario, even when an efficient description of the state exists and is fully known, one gains no advantage over state-agnostic protocols. Our findings establish sample-complexity bounds for measuring and testing the von Neumann entropy, fundamental limitations on efficient state compression, and efficient local tomography protocols.
merged with #387: Quantum Computational Entropies. Noam Avidan (Weizmann Institute of Science); Thomas A. Hahn (Weizmann Institute of Science); Rotem Arnon (Weizmann Institute of Science); Joseph M. Renes (Institute for Theoretical Physics, ETH Zurich)[abstract]Abstract: Quantum information theory, particularly its entropic formulations, has made remarkable strides in characterizing quantum systems and tasks. However, a critical dimension remains underexplored: computational efficiency. While classical computational entropies integrate complexity and feasibility into information measures, analogous concepts have yet to be rigorously developed in the quantum setting. In this joint submission, we advance a quantum computational information theory through two complementary works. The first introduces the quantum computational unpredictability entropy, a natural generalization of the min-entropy for classical-quantum states and of the classical unpredictability entropy, which quantifies the guessing probability of classical randomness using quantum side information and bounded computational power. The second work extends this to the fully quantum setting by defining fully quantum computational min- and max-entropies. The computational min-entropy generalizes unpredictability entropy and retains essential properties, including data processing and a fully quantum leakage chain rule, and it satisfies a novel purification chain rule. The computational max-entropy is defined via a canonical duality relation and captures a notion of efficient entanglement distillation under bounded quantum circuits. With the introduction of these computational entropies and their analysis, this work marks a critical step toward a quantum information theory that incorporates computational elements.
Computational relative entropy. Johannes Jakob Meyer (Dahlem Center for Complex Quantum Systems, Freie Universität Berlin); Asad Raza (Dahlem Center for Complex Quantum Systems, Freie Universität Berlin); Jacopo Rizzo (Dahlem Center for Complex Quantum Systems, Freie Universität Berlin); Lorenzo Leone (Dahlem Center for Complex Quantum Systems, Freie Universität Berlin); Sofiene Jerbi (Dahlem Center for Complex Quantum Systems, Freie Universität Berlin); Jens Eisert (Dahlem Center for Complex Quantum Systems, Freie Universität Berlin)[abstract]Abstract: Our capacity to process information depends on the computational power at our disposal. When it is unconstrained, information theory captures our ability to distinguish states or communicate messages with unrivaled beauty and elegance. For computationally bounded observers the situation is quite different: they can, for example, be fooled into believing that distributions are more random than they actually are. Existing mathematical approaches in computational information theory largely follow the single-shot paradigm, which, while operationally meaningful, yields complicated statements and is difficult to build intuition for. In our work, we take a new direction in computational quantum information theory that captures the essence of complexity-constrained information theory while retaining the look and feel of the unbounded asymptotic theory. As our foundational quantity, we define the computational relative entropy as the optimal error exponent in asymmetric hypothesis testing when restricted to polynomially many copies and quantum gates, defined in a mathematically rigorous way. Building on this foundation, we prove a computational analogue of Stein's lemma, establish computational versions of fundamental inequalities like Pinsker's bound, and demonstrate a computational smoothing property showing that computationally indistinguishable states yield equivalent information measures. We derive a computational entropy that operationally characterizes optimal compression rates for quantum states under computational limitations and show that our quantities apply to computational entanglement theory, proving a computational version of the Rains bound. Our framework reveals striking separations between computational and unbounded information measures, including quantum-classical gaps that arise from cryptographic assumptions, demonstrating that computational constraints fundamentally alter the information-theoretic landscape and open new research directions at the intersection of quantum information, complexity theory, and cryptography.
- MIPco=coREJunqiao Lin (CWI & Qusoft)[abstract]Abstract: In 2020, a landmark result by Ji, Natarajan, Vidick, Wright, and Yuen showed that MIP∗, the class of languages that can be decided by a classical verifier interacting with multiple computationally unbounded provers sharing entanglement in the tensor product model, is equal to RE. We show that the class MIPco, a complexity class defined similarly to MIP∗ except with provers sharing the commuting operator model of entanglement instead, is equal to the class coRE. This shows that giving the provers two different models of entanglement leads to two completely different computational powers for interactive proof systems. Our proof builds upon the compression theorem used in the proof of MIP∗ = RE, and we use the tracially embeddable strategies framework to show that the same compression procedure in MIP∗ = RE also has the same desired property in the commuting operator setting. We also give a more streamlined proof of the compression theorem for non-local games by incorporating the synchronous framework used by Mousavi et al. [STOC 2022], as well as the improved Pauli basis test introduced by de la Salle [ArXiv:2204.07084]. We introduce a new equivalence condition for RE/coRE-complete problems, which we call the weakly compressible condition. We show that both MIP∗ and MIPco satisfy this condition through the compression theorem, and thereby establish that the uncomputability for MIP∗ and MIPco can be proved under a unified framework (despite these two complexity classes being different). Notably, this approach also gives an alternative proof of the MIP∗ = RE theorem, which does not rely on the preservation of the entanglement bound. In addition to non-local games, this new condition could also potentially be applicable to other decision problems.
- Information-Computation Gaps in Quantum Learning via Low-Degree LikelihoodSitan Chen (Harvard University); Weiyuan Gong (Harvard University); Jonas Haferkamp (Saarland University); Yihui Quek (EPFL)[abstract]Abstract: In a variety of physically relevant settings for learning from quantum data, there is an established recipe for measuring polynomially many copies of that data such that the resulting measurement readouts contain enough information to reconstruct the underlying system. Yet designing protocols that can computationally efficiently extract that information remains largely an art, and there are important cases where we believe this to be impossible, that is, where there is an information-computation gap. While there is a large array of tools in the classical literature for giving evidence for average-case hardness of statistical inference problems, the corresponding tools in the quantum literature are far more limited. One such framework in the classical literature, the low-degree method, makes predictions about hardness of inference problems based on the failure of estimators given by low-degree polynomials. In this work, we extend this framework to the quantum setting and show a number of new information-computation gaps for quantum learning. We establish a general connection between state designs and low-degree hardness. We use this to obtain the first information-computation gaps for learning Gibbs states of random, sparse, non-local Hamiltonians. We also use it to prove hardness for learning random shallow quantum circuit states in a challenging model where states can be measured in adaptively chosen bases. To our knowledge, the ability to model adaptivity within the low-degree framework was open even in classical settings. In addition, we also obtain a low-degree hardness result for quantum error mitigation against strategies with single-qubit measurements. 
We define a new quantum generalization of the planted biclique problem and identify the threshold at which this problem becomes computationally hard for protocols that perform local measurements. Interestingly, the complexity landscape for this problem shifts when going from local measurements to more entangled single-copy measurements. We show average-case hardness for the ``standard'' variant of Learning Stabilizers with Noise and for agnostically learning product states.
- The firewall paradox is Wigner's friend paradoxLadina Hausmann (ETH Zürich); Renato Renner (ETH Zurich)[abstract]Abstract: In Wigner's friend-type experiments, unlike in standard QIP setups, the participating agents are modelled as quantum systems. Recent versions of such experiments have revealed that the usual rules for combining information held by different agents are inconsistent with quantum theory. Here, we show that this insight is relevant to paradoxes in quantum gravity, such as the black hole firewall paradox. This is because their structure is similar to Wigner's friend experiments: they rely on combining knowledge of multiple agents, some of whom enter the black hole, which is treated as a quantum system.
- Continuous-Variable Quantum MacWilliams IdentitiesAnsgar G. Burchards (Freie Universität Berlin)[abstract]Abstract: We derive bounds on general continuous-variable quantum error correcting codes against the displacement noise channel. The bounds limit the distances attainable by codes and also apply in an approximate setting. Our main result is a quantum analogue of the classical Cohn-Elkies bound on sphere packing densities attainable in Euclidean space. We further derive a quantum version of Levenshtein's sphere packing bound and argue that Gottesman--Kitaev--Preskill (GKP) codes based on the $E_8$ and Leech lattices achieve optimal distances. The main technical tool is a continuous-variable version of the quantum MacWilliams identities, which we introduce. The identities relate a pair of weight distributions which can be obtained for any two trace-class operators. General properties of these weight distributions are discussed, along with several examples.
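For orientation, the classical prototype of such identities (background only, not the continuous-variable version introduced in this paper) is the binary MacWilliams identity, relating the weight enumerator of a linear code $C \subseteq \mathbb{F}_2^n$ to that of its dual:

```latex
W_C(x, y) = \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)},
\qquad
W_{C^\perp}(x, y) = \frac{1}{|C|}\, W_C(x + y,\; x - y).
```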
- Efficient Learning Implies Quantum GlassinessEric Anschuetz (Caltech)[abstract]Abstract: We show a relation between quantum learning theory and algorithmic hardness. We demonstrate that finding near-ground states of certain sparse disordered quantum systems is average-case hard for Lipschitz quantum algorithms if there exists an efficient, local learning algorithm---such as the classical shadows algorithm---for estimating the energy of a state of the system. A corollary of our result is that many standard quantum algorithms fail to find near-ground states of these systems, including short-time Lindbladian dynamics, short-time quantum annealing, phase estimation, and shallow-depth variational quantum algorithms. Our results are unconditional. To achieve this, we introduce a topological property of quantum systems that we call the quantum overlap gap property (QOGP). This property is only satisfied by systems with an efficient local learning algorithm for the energy. We prove that systems which exhibit this topological property in their low-energy space are intractable for quantum algorithms whose outputs are stable under perturbations to their inputs. We then prove that the QOGP is satisfied for a sparsified variant of the quantum p-spin model, giving the first known algorithmic hardness-of-approximation result for quantum algorithms in finding the ground state of a non-stoquastic, noncommuting quantum system. Our resulting lower bound for quantum algorithms optimizing this model using Lindbladian evolution matches (up to constant factors) the best-known time lower bound for classical Langevin dynamics optimizing classical p-spin models. For this reason we suspect that finding ground states of typical quantum p-spin models using quantum algorithms is, in practice, as intractable as the classical p-spin model is for classical algorithms. 
Conversely, we show that the Sachdev--Ye--Kitaev (SYK) model does not exhibit the QOGP, consistent with previous evidence that the model is rapidly mixing at low temperatures.
- A Meta-Complexity Characterization of Minimal Quantum CryptographyBruno Cavalar (University of Oxford); Boyang Chen (Tsinghua University); Andrea Coladangelo (University of Washington); Matthew Gray (University of Oxford); Zihan Hu (EPFL); Zhengfeng Ji (Tsinghua University); Xingjian Li (Tsinghua University)[abstract]Abstract: We give a meta-complexity characterization of EFI pairs, which are considered the “minimal” primitive in quantum cryptography (due to their equivalence to quantum commitments and for being implied by almost all other known quantum cryptographic primitives). More precisely, we show that the existence of EFI pairs is equivalent to the following: there exists a non-uniformly samplable distribution over pure states such that the problem of estimating a certain Kolmogorov-like complexity measure is hard given a single copy. The complexity measure that we consider is a smoothed version of the algorithmic entropy notion introduced by Gács [Gác01]. A key technical step in our proof, which may be of independent interest, is to show that the existence of EFI pairs is equivalent to the existence of non-uniform single-copy secure pseudorandom state generators (nu 1-PRS). As a corollary, we get an alternative, arguably simpler, construction of a universal EFI pair.
- On One-Shot Signatures, Quantum vs Classical Binding, and Obfuscating PermutationsOmri Shmueli (NTT Research); Mark Zhandry (NTT Research & Stanford University)[abstract]Abstract: One-shot signatures (OSS) were defined by Amos, Georgiou, Kiayias, and Zhandry (STOC'20). These allow for signing exactly one message, after which the signing key self-destructs, preventing a second message from ever being signed. While such an object is impossible classically, Amos et al observe that OSS may be possible using quantum signing keys by leveraging the no-cloning principle. OSS has since become an important conceptual tool with many applications in decentralized settings and for quantum cryptography with classical communication. OSS are also closely related to separations between classical-binding and collapse-binding for post-quantum hashing and commitments. Unfortunately, the only known OSS construction due to Amos et al. was only justified in a classical oracle model, and moreover their justification was ultimately found to contain a fatal bug. Thus, the existence of OSS, even in a classical idealized model, has remained open. We give the first standard-model OSS, with provable security assuming (sub-exponential) indistinguishability obfuscation (iO) and LWE. This also gives the first standard-model separation between classical and collapse-binding post-quantum commitments/hashing, solving a decade-old open problem. Along the way, we also give the first construction with unconditional security relative to a classical oracle. To achieve our standard-model construction, we develop a notion of permutable pseudorandom permutations (permutable PRPs), and show how they are useful for translating oracle proofs involving random permutations into obfuscation-based proofs. 
In particular, obfuscating permutable PRPs gives a trapdoor one-way permutation that is \emph{full-domain}, solving another decade-old problem of constructing this object from (sub-exponential) iO and one-way functions.
- Tight and self-testing multipartite quantum Bell inequalities from the renormalization groupPaolo Abiuso (IQOQI - Vienna); Julian Fischer (Johannes Kepler University Linz); Miguel Navascues (IQOQI - Vienna)[abstract]Abstract: Attempts to understand multipartite quantum nonlocality are thwarted by the difficulty of devising quantum Bell inequalities (QBI) for systems composed of more than a few separate parties. In this work, we introduce the notion of ``tight connectors", a class of tensors which, if contracted according to some simple rules, result in tight QBIs. The new inequalities are saturated by tensor network states, whose structure mimics the corresponding network of connectors. Some tight connectors are furthermore ``fully self-testing'', which implies that the QBI they generate through contractions can only be maximized with such a tensor network state and specific measurement operators (modulo local isometries). We provide large analytic families of tight, fully self-testing connectors that allow the generation, via contraction, of N-partite QBIs with a ratio between the maximum quantum and classical values that grows exponentially with N. In turn, our method provides a modular recipe for the generation of extremal quantum behaviours and associated tight bounds in arbitrarily large systems.
- Parent Lindbladians for Matrix Product Density Operators. Yuhan Liu (Max Planck Institute of Quantum Optics); Alberto Ruiz-de-Alarcon (Universidad Complutense de Madrid, CUNEF Universidad); Georgios Styliaris (Max Planck Institute of Quantum Optics); Xiao-Qi Sun (Max Planck Institute of Quantum Optics); David Perez-Garcia (Universidad Complutense de Madrid, Instituto de Ciencias Matematicas); Ignacio Cirac (Max Planck Institute of Quantum Optics)[abstract]Abstract: Understanding quantum phases of matter is a fundamental goal in physics. For pure states, the representatives of phases are the ground states of locally interacting Hamiltonians, which are also renormalization fixed points (RFPs). These RFP states are exactly described by tensor networks. Extending this framework to mixed states, matrix product density operators (MPDOs) which are RFPs are believed to encapsulate mixed-state phases of matter in one dimension, where non-trivial topological phases have already been shown to exist. However, to better motivate the physical relevance of those states, and in particular the physical relevance of the recently found non-trivial phases, it remains an open question whether such MPDO RFPs can be realized as steady states of local Lindbladians. In this work, we resolve this question by analytically constructing parent Lindbladians for MPDO RFPs. These Lindbladians are local, frustration-free, and exhibit minimal steady-state degeneracy. Interestingly, we find that parent Lindbladians possess a rich structure that distinguishes them from their Hamiltonian counterparts. In particular, we uncover an intriguing connection between the non-commutativity of the Lindbladian terms and the fact that the corresponding MPDO RFP belongs to a non-trivial phase.
- Cyclic quantum causal modelling with a graph separation theoremCarla Ferradini (Institute for Theoretical Physics, ETH Zurich); Victor Gitton (Institute for Theoretical Physics, ETH Zurich); V. Vilasini (Inria University Grenoble Alpes)[abstract]Abstract: Causal modelling frameworks link observable correlations to causal explanations, which is a crucial aspect of science. These models represent causal relationships through directed graphs, with vertices and edges denoting systems and transformations within a theory. Most studies focus on acyclic causal graphs, where well-defined probability rules and powerful graph-theoretic properties like the d-separation theorem apply. However, understanding complex feedback processes and exotic fundamental scenarios with causal loops requires cyclic causal models, where such results do not generally hold. While progress has been made in classical cyclic causal models, challenges remain in uniquely fixing probability distributions and identifying graph-separation properties applicable in general cyclic models. In cyclic quantum scenarios, existing frameworks have focussed on a subset of possible cyclic causal scenarios, with graph-separation properties yet unexplored. This work proposes a framework applicable to all consistent quantum and classical cyclic causal models on finite-dimensional systems. We address these challenges by introducing a robust probability rule and a novel graph-separation property, p-separation, which we prove to be sound and complete for all such models. Our approach maps cyclic causal models to acyclic ones with post-selection, leveraging the post-selected quantum teleportation protocol. We characterize these protocols and their success probabilities along the way. We also establish connections between this formalism and other classical and quantum frameworks to inform a more unified perspective on causality. 
This provides a foundation for more general cyclic causal discovery algorithms and to systematically extend open problems and techniques from acyclic informational networks (e.g., certification of non-classicality) to cyclic causal structures and networks.
- Cloning Games, Black Holes and CryptographyAlexander Poremba (Boston University); Seyoon Ragavan (MIT); Vinod Vaikuntanathan (MIT)[abstract]Abstract: In this work, we introduce a new toolkit for analyzing \emph{cloning games}, a notion that captures stronger and more quantitative versions of the celebrated quantum no-cloning theorem. This framework allows us to analyze a new cloning game based on \emph{binary phase states}. Our results provide evidence that these games may be able to overcome important limitations of previous candidates based on BB84 states and subspace coset states: in a model where the adversaries are restricted to making a single oracle query, we show that the binary phase variant is $t$-copy secure when $t=o(n/\log n)$. Moreover, for constant $t$, we obtain the \emph{first} optimal bounds of $O(2^{-n})$, asymptotically matching the value attained by a trivial adversarial strategy. We also show a worst-case to average-case reduction which allows us to show the same quantitative results for the new and natural notion of \emph{Haar cloning games}. Our analytic toolkit, which we believe will find further applications, is based on binary subtypes and uses novel bounds on the operator norms of block-wise tensor products of matrices. To illustrate the effectiveness of these new techniques, we present two applications: first, in black-hole physics, where our asymptotically optimal bound offers quantitative insights into information scrambling in idealized models of black holes; and second, in unclonable cryptography, where we (a) construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, and (b) propose and provide evidence for the security of multi-copy unclonable encryption schemes.
- The abelian state hidden subgroup problem: Learning stabilizer groups and beyondMarcel Hinsche (Freie Universität Berlin); Jose Carrasco (Freie Universität Berlin); Jens Eisert (Freie Universität Berlin)[abstract]Abstract: Identifying the symmetry properties of quantum states is a central theme in quantum information theory and quantum many-body physics. In this work, we investigate quantum learning problems in which the goal is to identify a hidden symmetry of an unknown quantum state. Building on the recent formulation of the state hidden subgroup problem (StateHSP), we focus on abelian groups and develop an efficient quantum algorithm that learns any hidden symmetry subgroup using a generalized form of Fourier sampling. We showcase the versatility of the approach in three concrete applications: These are learning (i) qubit and qudit stabilizer groups, (ii) cuts along which a state is unentangled, and (iii) hidden translation symmetries. Through these applications, we reveal that well-known quantum learning primitives, such as Bell sampling and Bell difference sampling, are in fact special cases of Fourier sampling. Our results highlight the broad potential of the StateHSP framework for symmetry-based quantum learning tasks.
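As a toy illustration of why generalized Fourier sampling reveals a hidden symmetry subgroup (this is a classical computation of the sampling distribution, not the paper's algorithm; the helper name `fourier_sample_support` is ours): for the uniform superposition over a subgroup H of Z_2^n, applying a Hadamard to every qubit produces outcomes supported exactly on the dual subgroup H^perp.

```python
from itertools import product

def fourier_sample_support(subgroup, n):
    """Support of the Fourier-sampling outcome distribution for the
    uniform superposition over `subgroup` <= Z_2^n.

    After H^{tensor n}, the amplitude at outcome y is proportional to
    sum_{h in H} (-1)^{y . h}, which vanishes unless y lies in H^perp."""
    support = set()
    for y in product((0, 1), repeat=n):
        amp = sum((-1) ** (sum(yi * hi for yi, hi in zip(y, h)) % 2)
                  for h in subgroup)
        if amp != 0:
            support.add(y)
    return support

# H = {000, 011}; Fourier sampling only ever returns strings orthogonal
# to every element of H, i.e. the dual group H^perp.
H = {(0, 0, 0), (0, 1, 1)}
print(sorted(fourier_sample_support(H, 3)))
```

Sampling enough outcomes and solving the resulting linear system over Z_2 recovers H^perp, and hence H; Bell (difference) sampling arises as the special case of an appropriately chosen abelian group.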
- The Jacobi Factoring Circuit: Quantum Factoring in Near-Linear Gates and Sublinear SpaceGregory D. Kahanamoku-Meyer (MIT); Seyoon Ragavan (MIT); Vinod Vaikuntanathan (MIT); Katherine Van Kirk (Harvard)[abstract]Abstract: We present a compact quantum circuit for factoring a large class of integers, including some whose classical hardness is expected to be equivalent to RSA (but not including RSA integers themselves). Most notably, we factor $n$-bit integers of the form $P^2 Q$ with $\log Q = \Theta(n^a)$ for $a \in (2/3, 1)$ in space and depth sublinear in n (specifically, $\widetilde{O}(\log Q)$) using $\widetilde{O}(n)$ quantum gates; for these integers, no known classical algorithms exploit the relatively small size of $Q$ to run asymptotically faster than general-purpose factoring algorithms. To our knowledge, this is the first polynomial-time circuit to achieve sublinear qubit count for a classically-hard factoring problem. We thus believe that factoring such numbers has potential to be the most concretely efficient classically-verifiable proof of quantumness currently known. Our circuit builds on the quantum algorithm for squarefree decomposition discovered by Li, Peng, Du, and Suter (Nature Scientific Reports 2012), which relies on computing the Jacobi symbol in quantum superposition. The technical core of our contribution is a new space-efficient quantum algorithm to compute the Jacobi symbol of $A$ mod $B$, in the regime where $B$ is classical and much larger than $A$. Our circuit for computing the Jacobi symbol generalizes to related problems such as computing the greatest common divisor and modular inverses, and thus could be of independent interest.
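For reference, the arithmetic primitive at the heart of this circuit is the Jacobi symbol. A sketch of the standard classical binary algorithm (a textbook routine, shown only for context; it is not the paper's space-efficient quantum implementation):

```python
def jacobi(a, b):
    """Jacobi symbol (a/b) for odd positive b, via repeated use of
    the factor-of-two rule and quadratic reciprocity."""
    assert b > 0 and b % 2 == 1
    a %= b
    result = 1
    while a != 0:
        while a % 2 == 0:          # pull out factors of two
            a //= 2
            if b % 8 in (3, 5):    # (2/b) = -1 iff b = 3, 5 (mod 8)
                result = -result
        a, b = b, a                # quadratic reciprocity flip
        if a % 4 == 3 and b % 4 == 3:
            result = -result
        a %= b
    return result if b == 1 else 0  # 0 when gcd(a, b) > 1

print(jacobi(7, 15))   # -1: 7 is a non-residue pattern mod 15
```

The quantum circuit evaluates this symbol coherently with the numerator in superposition, in the regime where the classical modulus B is much larger than A.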
- Quantum Advantage from Sampling Shallow Circuits: Beyond Hardness of MarginalsDaniel Grier (UC San Diego); Daniel M. Kane (UC San Diego); Jackson Morris (UC San Diego); Anthony Ostuni (UC San Diego); Kewen Wu (Institute for Advanced Study)[abstract]Abstract: We construct a family of distributions $\{\mathcal{D}_n\}_n$ with $\mathcal{D}_n$ over $\{0,1\}^n$ and a family of depth-$7$ quantum circuits $\{C_n\}_n$ such that $\mathcal{D}_n$ is produced exactly by $C_n$ with the all zeros state as input, yet any constant-depth classical circuit with bounded fan-in gates evaluated on any binary product distribution has total variation distance $1 - e^{-\Omega(n)}$ from $\mathcal{D}_n$. Moreover, the quantum circuits we construct are geometrically local and use a relatively standard gate set: Hadamard, controlled-phase, CNOT, and Toffoli gates. All previous separations of this type suffer from some undesirable constraint on the classical circuit model or the quantum circuits witnessing the separation. Our family of distributions is inspired by the Parity Halving Problem of Watts, Kothari, Schaeffer, and Tal (STOC, 2019), which built on the work of Bravyi, Gosset, and K\"onig (Science, 2018) to separate shallow quantum and classical circuits for relational problems.
- Heisenberg-limited Hamiltonian learning of continuous variable systems via engineered dissipation. Tim Möbus (University of Tübingen); Andreas Bluhm (Univ. Grenoble Alpes, CNRS); Tuvia Gefen (The Hebrew University of Jerusalem); Yu Tong (Duke University); Albert H. Werner (University of Copenhagen); Cambyse Rouzé (INRIA)[abstract]Abstract: Discrete and continuous variables often require different treatments in learning tasks. Identifying the Hamiltonian governing the evolution of a quantum system is a fundamental task in quantum learning theory. While previous works mostly focused on quantum spin systems, where quantum states can be seen as superpositions of discrete bit-strings, relatively little is known about Hamiltonian learning for continuous-variable quantum systems. In this work, we focus on learning the Hamiltonian of a bosonic quantum system, a common type of continuous-variable quantum system. This learning task involves an infinite-dimensional Hilbert space and unbounded operators, making mathematically rigorous treatments challenging. We introduce an analytic framework to study the effects of strong dissipation in such systems, enabling a rigorous analysis of cat qubit stabilization via engineered dissipation. This framework also supports the development of Heisenberg-limited algorithms for learning general bosonic Hamiltonians with higher-order terms of the creation and annihilation operators. Notably, our scheme requires a total Hamiltonian evolution time that scales only logarithmically with the number of modes and inversely with the precision of the reconstructed coefficients. On a theoretical level, we derive a new quantitative adiabatic approximation estimate for general Lindbladian evolutions with unbounded generators. Finally, we discuss possible experimental implementations.
- Quantum Computing Enhanced SensingRichard R. Allen (MIT); Francisco Machado (ITAMP, Harvard University); Isaac L. Chuang (MIT); Robert Huang (Caltech and Google); Soonwon Choi (MIT)[abstract]Abstract: Quantum computing and quantum sensing represent two distinct frontiers of quantum information science. In this work, we harness quantum computing to solve a fundamental and practically important sensing problem: the detection of weak oscillating fields with unknown strength and frequency. We present a quantum computing enhanced sensing protocol that outperforms all existing approaches. Furthermore, we prove our approach is optimal by establishing the Grover-Heisenberg limit — a fundamental lower bound on the minimum sensing time. The key idea is to robustly digitize the continuous, analog signal into a discrete operation, which is then integrated into a quantum algorithm. Our metrological gain originates from quantum computation, distinguishing our protocol from conventional sensing approaches. Indeed, we prove that broad classes of protocols based on quantum Fisher information, finite-lifetime quantum memory, or classical signal processing are strictly less powerful. Our protocol is compatible with multiple experimental platforms. We propose and analyze a proof-of-principle experiment using nitrogen-vacancy centers, where meaningful improvements are achievable using current technology. This work establishes quantum computation as a powerful new resource for advancing sensing capabilities.
- Two bases suffice for QMA1-completenessHenry Ma (MIT); Anand Natarajan (MIT)[abstract]Abstract: We introduce a basis-restricted variant of the Quantum-k-Sat problem, in which each term in the input Hamiltonian is required to be diagonal in either the standard or Hadamard basis. Our main result is that the Quantum-6-Sat problem with this basis restriction is already QMA1-complete, defined with respect to a natural gateset. Our construction is based on the Feynman-Kitaev circuit-to-Hamiltonian construction, with a modified clock encoding that interleaves two clocks in the standard and Hadamard bases. In light of the central role played by CSS codes and the uncertainty principle in the proof of the NLTS theorem of Anshu, Breuckmann, and Nirkhe (STOC ’23), we hope that the CSS-like structure of our Hamiltonians will make them useful for progress towards a quantum PCP theorem.
- The NPA hierarchy does not always attain the commuting operator valueMarco Fanizza (Inria de Saclay, IPP); Larissa Kroell (Department of Pure Mathematics, University of Waterloo); Arthur Mehta (Department of Mathematics and Statistics, University of Ottawa); Connor Paddock (Department of Mathematics and Statistics, University of Ottawa); Denis Rochette (Department of Mathematics and Statistics, University of Ottawa); William Slofstra (Institute for Quantum Computing and Department of Pure Mathematics, University of Waterloo); Yuming Zhao (QMATH, Department of Mathematical Sciences, University of Copenhagen)[abstract]Abstract: We show that it is undecidable to determine whether the commuting operator value of a nonlocal game is strictly greater than 1/2. As a corollary, there is a boolean constraint system (BCS) nonlocal game for which the value of the Navascués, Pironio, and Acín (NPA) hierarchy does not attain the commuting operator value at any finite level. Our contribution involves establishing a computable mapping from Turing machines to BCS nonlocal games in which the halting property of the machine is encoded as a decision problem for the commuting operator value of the game. Our techniques are algebraic and distinct from those used to establish MIP*=RE. As a first step, we construct a mapping from Turing machines to elements of the tensor product of free algebras, showing that deciding positivity of those elements is coRE-hard. As a second step, we extend this mapping to further realize these elements as game polynomials for BCS games.
- Efficient Quantum Optimization via Dynamical SimulationAhmet Burak Catli (University of Toronto); Sophia Simon (University of Toronto); Nathan Wiebe (University of Toronto)[abstract]Abstract: We provide several quantum algorithms for continuous optimization that do not require gradient estimation. Instead, we encode the optimization problem into the dynamics of a physical system and coherently simulate the time evolution. Our first two algorithms can find local optima of a differentiable function $f: \mathbb{R}^N \rightarrow \mathbb{R}$ by simulating either classical or quantum dynamics with friction via a time-dependent Hamiltonian. We show that for the benchmark problem of optimizing a locally quadratic objective function, these methods require a total of $O(N^2\kappa^2/h_x^2\epsilon)$ queries to a phase oracle to find an $\epsilon$-approximate local optimum, where $\kappa$ is the condition number of the Hessian matrix and $h_x$ is the discretization spacing. In contrast, we show that methods based on gradient descent require $O(N^{3/2}(1/\epsilon)^{\kappa \log(3)/4})$ queries. This corresponds to an exponential separation between the query upper bounds for the benchmark problem. Our third algorithm can find the global optimum of $f$ by preparing a classical low-temperature thermal state via simulation of the classical Liouvillian operator associated with the Nosé Hamiltonian. We use results from the quantum thermodynamics literature to bound the thermalization time for the discrete system. Additionally, we analyze barren plateau effects that commonly plague quantum optimization algorithms and observe that our approach is vastly less sensitive to this problem than standard gradient-based optimization. Our results suggest that these dynamical optimization approaches may be far more scalable for future quantum machine learning, optimization and variational experiments than was widely believed.
- Adversarially robust quantum state learning and testingMaryam Aliakbarpour (Rice University); Nai-Hui Chia (Rice University); Vladimir Braverman (Johns Hopkins University); Yuhan Liu (Rice University)[abstract]Abstract: Quantum state learning is a fundamental problem in physics and computer science. As near-term quantum devices are error-prone, it is important to design error-resistant algorithms. Apart from device errors, other unexpected factors could also affect the algorithm, such as careless human read-out error, or even a malicious hacker deliberately altering the measurement results. Thus, we want our algorithm to work even in the worst case, when things go against us. We consider the practical setting of single-copy measurements and propose the $\gamma$-adversarial corruption model where an imaginary adversary can arbitrarily change a $\gamma$-fraction of the measurement outcomes. This is stronger than the $\gamma$-bounded SPAM noise model, where the post-measurement state changes by at most $\gamma$ in trace distance. Under our stronger model of corruption, we design an algorithm using non-adaptive measurements that can learn an unknown rank-$r$ state up to $\tilde{O}(\gamma\sqrt{r})$ in trace distance, provided that the number of copies is sufficiently large. We further prove an information-theoretic lower bound of $\Omega(\gamma\sqrt{r})$ for non-adaptive measurements, demonstrating the optimality of our algorithm. Our upper and lower bounds also hold for quantum state testing, where the goal is to test whether an unknown state is equal to a given state or far from it. Our results are intriguingly optimistic and pessimistic at the same time. For general states, the error is dimension-dependent and $\gamma\sqrt{d}$ in the worst case, meaning that corrupting only a very small fraction ($1/\sqrt{d}$) of the outcomes could totally destroy any non-adaptive learning algorithm.
However, for constant-rank states that are useful in many quantum algorithms, it is possible to achieve dimension-independent error, even in the worst-case adversarial setting.
- Fast quantum computation with all-to-all HamiltoniansChao Yin (Stanford University)[abstract]Abstract: All-to-all interactions arise naturally in many areas of theoretical physics and across diverse experimental quantum platforms, motivating a systematic study of their information-processing power. Assuming each pair of qubits interacts with $\mathcal{O}(1)$ strength, time-dependent all-to-all Hamiltonians can simulate arbitrary all-to-all quantum circuits, performing quantum computation in time proportional to the circuit depth. In this work, we show that this naive correspondence is far from optimal: all-to-all Hamiltonians can process information on much faster timescales. Our first main result establishes that any two-qubit gate can be simulated by all-to-all Hamiltonians on $N$ qubits in time $\mathrm{O}(1/N)$ (up to factor $N^{\delta}$ with an arbitrarily small constant $\delta>0$), with polynomially small error $1/\mathrm{poly}(N)$. Immediate consequences include: 1) Certain $\mathrm{O}(N)$-qubit unitaries, such as the multiply-controlled Toffoli gate, can be realized in $\mathrm{O}(1/N)$ time. 2) Globally entangled states such as the GHZ and W states of $\mathrm{O}(N)$ qubits can be prepared in $\mathrm{O}(1/N)$ time. 3) Any depth-$D$ circuit on $N_{\rm d}$ qubits can be simulated in arbitrarily short time $T=\mathrm{O}(DN_{\rm d}/N)$, inversely proportional to the space overhead $N/N_{\rm d}$. 4) The existing Lieb-Robinson bound for strong power-law interactions $H_{ij}\sim r_{ij}^{-\gamma}$ in spatial dimension $\mathsf{d}>\gamma$ is tight, requiring time $T=\Omega(N^{\frac{\gamma}{\mathsf{d}}-1})$ for information to propagate. Our second main result shows that any depth-$D$ quantum circuit can be simulated in time $T=\mathrm{O}(D/\sqrt{N})$ with constant space overhead, with error $1/\mathrm{poly}(N)$ for almost all inputs. 
This “optimistic” simulation may suffice for practical purposes: for instance, building on prior work, we demonstrate that Shor’s factoring algorithm can be implemented by $\mathrm{O}(\sqrt{N})$-time Hamiltonian evolution with constant space overhead. The techniques underlying our results depart fundamentally from the existing literature on parallelizing commuting gates: We rely crucially on non-commuting Hamiltonians and draw on diverse ideas from semiclassical quantum mechanics, (spin) squeezing, Mølmer–Sørensen gates, and spin-wave physics.
- Quantum Codes with Addressable and Transversal Non-Clifford GatesZhiyang (Sunny) He (MIT); Vinod Vaikuntanathan (MIT); Adam Wills (MIT); Rachel Yun Zhang (MIT)[abstract]Abstract: The development of quantum codes with good error correction parameters and useful sets of transversal gates is an area of major interest in quantum error correction. Abundant prior works have studied transversal gates which are restricted to acting on all logical qubits simultaneously. In this work, we study codes that support transversal gates which induce addressable logical gates, i.e., the logical gates act only on logical qubits of our choice. As we consider scaling from low-rate to high-rate codes, the study and design of low-overhead, addressable logical operations presents an important problem for both theoretical and practical purposes. In this work, we construct the first quantum codes to support transversally addressable non-Clifford gates. Concretely, given any three logical qubits across one or multiple codeblocks, one can execute the logical CCZ on those qubits via a depth-one physical circuit of CCZ gates. We present a simple, explicit construction based on Reed-Solomon codes that is nearly asymptotically good, and a more involved, asymptotically good construction based on transitive, iso-orthogonal algebraic geometry codes. We go on to develop a powerful theory of quantum codes supporting a rich class of transversally addressable gates in the Clifford hierarchy, going far beyond just the CCZ gate. We call this framework addressable orthogonality, and show that it can be used to construct asymptotically good quantum codes supporting an arbitrary product of multiply-controlled Z gates transversally and addressably, enabling major adaptivity to particular algorithms. Our constructions mark the first quantum codes to support any multi-qubit gate transversally and addressably. 
Accordingly, our results have major implications for the general addressability problem in error correction. This is a merged submission based on arXiv:2502.01864 and arXiv:2507.05392.
- Optimal Distillation of Qubit ClocksSujay Kazi (Duke University); Iman Marvian (Duke University)[abstract]Abstract: The problem of coherence distillation asks: given multiple copies of a coherent input state, produce a coherent output state with much less noise (that is, much closer to a pure coherent state), in such a way that the time evolution of the output state matches the time evolution of the input state. We thoroughly address this problem for the case of qubit states. Principally, we compute the distillation channel that is asymptotically optimal in the limit of a large number of input qubit states and compute the resulting infidelity (one minus fidelity) between the output qubit state and the desired pure state. We show that this infidelity is closely related to a resource known as purity of coherence, a quantity obtained from the Right Logarithmic Derivative (RLD) Fisher information metric. We thus demonstrate an operational meaning for purity of coherence as the quantity whose monotonicity on time-translation invariant channels sets the strongest asymptotic bound on the capability of coherence distillation for qubit states. For a special choice of input and output state, we additionally investigate a number of extensions of this primary result. In particular, we present numerical schemes to produce the exact optimal distillation protocol for a fixed value of $N$, extend the computation of the optimal protocol and the minimum infidelity to the $1/N^2$ and $1/N^3$ orders, estimate the amount of purity of coherence that is dissipated by the optimal distillation protocol, and compute the mathematical behavior of the protocol more precisely in the low-noise and high-noise limits.
- Quantum Circuit Complexity of Matrix-Product UnitariesGeorgios Styliaris (Max Planck Institute of Quantum Optics); Rahul Trivedi (Max Planck Institute of Quantum Optics); J. Ignacio Cirac (Max Planck Institute of Quantum Optics)[abstract]Abstract: Matrix-product unitaries (MPUs) are many-body unitary operators that, as a consequence of their tensor-network structure, preserve the entanglement area law in 1D systems. However, it is unknown how to implement an MPU as a quantum circuit since the individual tensors describing the MPU are not unitary. In this paper, we show that a large class of MPUs can be implemented with a polynomial-depth quantum circuit. For an $N$-site MPU built from a repeated bulk tensor with open boundary, we explicitly construct a quantum circuit of polynomial depth $T = O(N^{\alpha})$ realizing the MPU, where the constant $\alpha$ depends only on the bulk and boundary tensor and not the system size $N$. We show that this class includes nontrivial unitaries that generate long-range entanglement and, in particular, contains a large class of unitaries constructed from representations of $C^*$-weak Hopf algebras. Furthermore, we also adapt our construction to nonuniform translationally-varying MPUs and show that they can be implemented by a circuit of depth $O(N^{\beta} \, \mathrm{poly}\, D)$ where $\beta \le 1 + \log_2 \sqrt{D}/ s_{\min}$, with $D$ being the bond dimension and $s_{\min}$ is the smallest nonzero Schmidt value of the normalized Choi state corresponding to the MPU.
- Efficient Learning Algorithms for Structured Bosonic and Fermionic Unitary OperatorsMarco Fanizza (Inria de Saclay, IPP); Vishnu Iyer (University of Texas at Austin); Junseo Lee (Seoul National University and Norma Inc.); Antonio A. Mele (Freie Universität Berlin); Francesco A. Mele (Scuola Normale Superiore di Pisa)[abstract]Abstract: The field of quantum learning theory has advanced rapidly in recent years, at the intersection of quantum information science, statistical learning, and computational complexity. A key task in this area is quantum process tomography, which seeks to learn unitary transformations of quantum states efficiently. Efficient process tomography would be highly valuable: for instance, learning an unknown natural process could enable its efficient implementation and simulation on a quantum computer. However, learning arbitrary unitary operators is generally prohibitively expensive, with several sample- and time-complexity lower bounds showing the task is intractable. Thus, work typically focuses on more structured classes of operators when computational efficiency is desired. Two especially important such classes are bosonic and fermionic Gaussian unitaries. These operators have compact parametrizations, rich algebraic structure, and enough expressiveness to capture many relevant physical processes. As a result, they are ubiquitous in quantum information theory. In this work, we advance the learning theory of bosonic and fermionic unitaries in two ways: (1) We give the first time-efficient algorithm to learn bosonic Gaussian unitaries. The complexity of the algorithm scales polynomially in the number of modes, a total photon number bound (which is critical in defining an energy-constrained distance measure), and a squeezing parameter which captures how much the operator increases the mean energy of a vacuum state. (2) We give a first-of-its-kind algorithm to learn fermionic unitaries prepared with at most t non-Gaussian gates. 
Our algorithm scales polynomially in the number of modes and exponentially in t, and we argue that this scaling is optimal up to polynomial factors. Both algorithms produce an output whose distance to the input unitary is small in the worst-case (diamond) distance. Our results are organized into two separate manuscripts: one is arXiv:2504.11318 (Mildly-Interacting Fermionic Unitaries are Efficiently Learnable), and the other will be released on arXiv within a month (Efficient Learning of Bosonic Gaussian Unitary Channels).
- Fast and Error-Correctable Quantum RAMFrancesco Cesa (IQOQI Innsbruck); Hannes Pichler (IQOQI Innsbruck); Hannes Bernien (University of Chicago / IQOQI Innsbruck)[abstract]Abstract: Quantum devices can process data in a fundamentally different way than classical computers. To leverage this potential, many algorithms require the aid of a quantum Random Access Memory (QRAM), i.e. a module capable of efficiently loading datasets (both classical and quantum) onto the quantum processor. However, a realisation of this fundamental building block is still outstanding, since existing proposals require prohibitively many resources for reliable implementations, or are not compatible with current architectures. Moreover, present approaches cannot be scaled up, as they do not allow for efficient quantum error-correction. Here we develop a QRAM design that enables fast and robust QRAM calls, naturally allows for fault-tolerant and error-corrected operation, and can be integrated on present hardware. Our proposal employs a special quantum resource state that is consumed during the QRAM call: we discuss how it can be assembled and processed efficiently in a dedicated module. Concretely, we provide detailed blueprints and quantitative estimations for modern neutral-atom processors, demonstrating that high-fidelity QRAM queries can be implemented at rates compatible with the fault-tolerant computational clock-time. Our work places a long-missing, fundamental component of quantum computers within reach of currently available technology; this opens the door to algorithms featuring practical quantum advantage, including search or oracular problems, quantum chemistry and machine learning.
- Non-iid hypothesis testing: from classical to quantumGiacomo De Palma (Università di Bologna); Marco Fanizza (Inria de Saclay, IPP); Ryan O'Donnell (Carnegie Mellon University); Connor Mowry (University of Illinois Urbana-Champaign)[abstract]Abstract: We study hypothesis testing (aka state certification) in the \emph{non-identically distributed} setting. A recent work (Garg et~al.~2023) considered the classical case, in which one is given (independent) samples from $T$ unknown probability distributions $p_1, \dots, p_T$ on $[d] = \{1, 2, \dots, d\}$, and one wishes to accept/reject the hypothesis that their average $p_{\textnormal{avg}}$ equals a known hypothesis distribution~$q$. Garg et al.~showed that if one has just $c = 2$ samples from each $p_i$, and provided $T \gg \frac{\sqrt{d}}{\varepsilon^2} + \frac{1}{\varepsilon^4}$, one can (whp) distinguish $p_{\textnormal{avg}} = q$ from $d_{\mathrm{TV}}(p_{\textnormal{avg}}, q) > \varepsilon$. This nearly matches the optimal result for the classical iid setting (namely, $T \gg \frac{\sqrt{d}}{\varepsilon^2}$). Besides optimally improving this result (and generalizing to tolerant testing with more stringent distance measures), we study the analogous problem of hypothesis testing for non-identical \emph{quantum} states. Here we uncover an unexpected phenomenon: for any $d$-dimensional hypothesis state~$\sigma$, and given just a \emph{single} copy ($c = 1$) of each state $\rho_1, \dots, \rho_T$, one can distinguish $\rho_{\textnormal{avg}} = \sigma$ from $d_{\mathrm{tr}}(\rho_{\textnormal{avg}}, \sigma) > \varepsilon$ provided $T \gg d/\varepsilon^2$. (Again, we generalize to tolerant testing with more stringent distance measures.) This matches the optimal result for the iid case, which is surprising because doing this with $c = 1$ is provably impossible in the classical case. A technical tool we introduce may be of independent interest: an Efron--Stein inequality, and more generally an Efron--Stein decomposition, in the quantum setting.
- On quantum to classical comparison for Davies generatorsJoao Basso (UC Berkeley); Shirshendu Ganguly (UC Berkeley); Alistair Sinclair (UC Berkeley); Nikhil Srivastava (UC Berkeley); Zachary Stier (UC Berkeley); Thuy-Duong Vuong (UC Berkeley)[abstract]Abstract: Despite extensive study, our understanding of quantum Markov chains remains far less complete than that of their classical counterparts. [Temme'13] observed that the Davies Lindbladian, a well-studied model of quantum Markov dynamics, contains an embedded classical Markov generator, raising the natural question of how the convergence properties of the quantum and classical dynamics compare. While [Temme'13] showed that the spectral gap of the Davies Lindbladian can be exponentially smaller than that of the embedded classical generator for certain highly structured Hamiltonians, we show that if the spectrum of the Hamiltonian does not contain long arithmetic progressions, then the two spectral gaps must be comparable. As a consequence, we prove that for a large class of Hamiltonians, including those obtained by perturbing a fixed Hamiltonian with a generic external field, the quantum spectral gap remains within a constant factor of the classical spectral gap. Our result aligns with physical intuition and enables the application of classical Markov chain techniques to the quantum setting. The proof is based on showing that any ``off-diagonal'' eigenvector of the Davies generator can be used to construct an observable which commutes with the Hamiltonian and has a Lindbladian Rayleigh quotient which is comparably small. Thus, a spectral gap for such observables implies a spectral gap for the full Davies generator.
- Efficient implementation of sequential quantum processes with group symmetryDmitry Grinko (University of Amsterdam and QuSoft); Satoshi Yoshida (The University of Tokyo); Mio Murao (The University of Tokyo); Maris Ozols (University of Amsterdam and QuSoft)[abstract]Abstract: Symmetry plays a crucial role in the design and analysis of quantum protocols. In this work, we show a canonical circuit decomposition of a quantum comb with $G\times H$ symmetry for compact groups $G$ and $H$ using the corresponding Clebsch--Gordan transforms. By using this circuit decomposition, we propose a parametrized quantum comb with group symmetry, and derive the optimal quantum comb which transforms an unknown unitary operation $U\in \mathrm{SU}(d)$ to its inverse $U^\dagger$ or transpose $U^\mathsf{T}$. From numerics, we find a deterministic and exact unitary transposition protocol for $d=3$ with $7$ queries to $U$, improving on the protocol shown in [Y.-A. Chen et al., arXiv:2403.04704], which requires $13$ queries to $U$. We also provide the simulation of random unitaries for any compact group $G$ using the compressed oracle, which can be implemented efficiently for the unitary group. The precision of our simulation for the unitary group is improved over that of the path-recording oracle introduced in [F. Ma and H.-Y. Huang, arXiv:2410.10116].
- Long-range nonstabilizerness and quantum codes, phases, and complexityFuchuan Wei (Tsinghua University); Zi-Wen Liu (Tsinghua University)[abstract]Abstract: Understanding nonstabilizerness (aka quantum magic) in many-body quantum systems, particularly its interplay with entanglement, represents an important quest in quantum computation and many-body physics. Drawing motivation from the study of quantum phases of matter and entanglement, we develop a systematic and rigorous theory of the notion of long-range magic (LRM)---nonstabilizerness that cannot be (approximately) erased by shallow local unitary circuits. By establishing connections to the theory of fault-tolerant logical gates, we show the emergence of LRM state families from quantum error-correcting codes. Then, denoting phases whose ground states all exhibit LRM as LRM phases, we prove concrete conditions under which a topological order constitutes an LRM phase, with prominent examples including certain non-Abelian topological orders. Finally, from the computational complexity perspective, we discuss the intrinsic quantumness of long-range magic from e.g. preparation and learning perspectives, and formulate a "no low-energy trivial magic" (NLTM) conjecture that has key motivation in the quantum PCP context for which our LRM results suggest a promising route. We also show how correlation functions can serve as diagnostics for LRM, demonstrating certain LRM state families by correlation properties. Our concepts and results admit nontrivial extensions to approximate (robust) versions and settings without geometric locality. This work leverages and sheds new light on the interplay between quantum resources, error correction and fault tolerance, many-body physics, and complexity theory.
- Universal work extraction in quantum thermodynamicsKaito Watanabe (University of Tokyo); Ryuji Takagi (University of Tokyo)[abstract]Abstract: Evaluating the maximum amount of work extractable from a nanoscale quantum system is one of the central problems in quantum thermodynamics. Previous works identified the free energy of the input state as the optimal rate of extractable work under a crucial assumption: experimenters know the description of the given quantum state, which significantly restricts the settings in which these results apply. Here, we show that this optimal extractable work can be achieved without knowing the input states at all, removing the aforementioned fundamental operational restrictions. We achieve this by presenting a universal work extraction protocol, whose description does not depend on the input state but which nevertheless extracts work quantified by the free energy of the unknown input state. Remarkably, our result partially encompasses the case of infinite-dimensional systems, for which the optimal extractable work has not been known even in the standard state-aware setting. Our results clarify that, in spite of the crucial difference between the state-aware and state-agnostic scenarios in accomplishing information-theoretic tasks, whether we are in possession of information on the given state does not influence the optimal performance of asymptotic work extraction.
- Extractors: QLDPC Architectures for Efficient Pauli-Based ComputationZhiyang (Sunny) He (MIT); Alexander Cowtan (University of Oxford); Dominic J. Williamson (IBM); Theodore J. Yoder (IBM)[abstract]Abstract: In pursuit of large-scale fault-tolerant quantum computation, quantum low-density parity-check (QLDPC) codes have been established as promising candidates for low-overhead memory when compared to conventional approaches based on surface codes. Performing fault-tolerant logical computation on QLDPC memory, however, has been a long-standing challenge in theory and in practice. In this work, we propose a new primitive, which we call an extractor system, that can augment any QLDPC memory into a computational block well-suited for Pauli-based computation. In particular, any logical Pauli operator supported on the memory can be fault-tolerantly measured in one logical cycle, consisting of $O(d)$ physical syndrome measurement cycles, without rearranging qubit connectivity. We further propose a fixed-connectivity, LDPC architecture built by connecting many extractor-augmented computational (EAC) blocks with bridge systems. When combined with any user-defined source of high-fidelity $|T\rangle$ states, our architecture can implement universal quantum circuits via parallel logical measurements, such that all single-block Clifford gates are compiled away. The size of an extractor on an $n$-qubit code is $\tilde{O}(n)$, where the precise overhead has immense room for practical optimizations.
- An Improved Quantum Algorithm for 3-Tuple Lattice SievingLynn Engelberts (QuSoft and CWI); Yanlin Chen (University of Maryland); Amin Shiraz Gilani (QuICS and University of Maryland); Maya-Iggy van Hoof (Ruhr University Bochum); Stacey Jeffery (QuSoft, CWI and University of Amsterdam); Ronald de Wolf (QuSoft, CWI and University of Amsterdam)[abstract]Abstract: The assumed hardness of the Shortest Vector Problem in high-dimensional lattices is one of the cornerstones of post-quantum cryptography. The fastest known heuristic attacks on SVP are via so-called sieving methods. While these still take exponential time in the dimension $d$, they are significantly faster than non-heuristic approaches and their heuristic assumptions are verified by extensive experiments. $k$-Tuple sieving is an iterative method where each iteration takes as input a large number of lattice vectors of a certain norm, and produces an equal number of lattice vectors of slightly smaller norm, by taking sums and differences of $k$ of the input vectors. Iterating these ``sieving steps'' sufficiently many times produces a short lattice vector. The fastest attacks (both classical and quantum) are for $k=2$, but taking larger $k$ reduces the amount of memory required for the attack. In this paper we improve the quantum time complexity of 3-tuple sieving from $2^{0.3098 d}$ to $2^{0.2846 d}$, using a two-level amplitude amplification aided by a preprocessing step that associates the given lattice vectors with nearby ``center points'' to focus the search on the neighborhoods of these center points. Our algorithm uses $2^{0.1887d}$ classical bits and QCRAM bits, and $2^{o(d)}$ qubits. This is the fastest known quantum algorithm for SVP when total memory is limited to $2^{0.1887d}$.
- A complete theory for the Clifford commutant and its applicationsLennart Bittel (Freie Universität Berlin); Jens Eisert (Freie Universität Berlin); Lorenzo Leone (Freie Universität Berlin); Antonio A. Mele (Freie Universität Berlin); Salvatore F.E. Oliviero (Scuola Normale Superiore di Pisa)[abstract]Abstract: The Clifford group plays a central role in quantum information science. It is the building block for many error-correcting schemes and matches the first three moments of the Haar measure over the unitary group—a property that is essential for a broad range of quantum algorithms, with applications in pseudorandomness, learning theory, benchmarking, and entanglement distillation. At the heart of understanding many properties of the Clifford group lies the Clifford commutant: the set of operators that commute with $k$-fold tensor powers of Clifford unitaries. Previous understanding of this commutant has been limited to relatively small values of $k$, constrained by the number of qubits $n$. In this work, we develop a complete theory of the Clifford commutant. Our first result provides an explicit orthogonal basis for the commutant and computes its dimension for arbitrary $n$ and $k$. We also introduce an alternative and easy-to-manipulate basis formed by isotropic sums of Pauli operators. We show that this basis is generated by products of permutations— which generate the unitary group commutant— and at most three other operators. Additionally, we develop a \emph{graphical calculus} allowing a diagrammatic manipulation of elements of this basis. These results enable a wealth of applications: among others, we characterize all \emph{measurable} magic measures and identify optimal strategies for stabilizer property testing, whose success probability also offers an operational interpretation to stabilizer entropies. Finally, we show that these results also generalize to multi-qudit systems with prime local dimension. 
This submission merges two of our recent works: one presenting a complete theory of the Clifford commutant with applications, and one focused on showcasing a major application to state $k$-design convergence.
- Tight relations and equivalences between smooth relative entropiesBartosz Regula (RIKEN); Ludovico Lami (Scuola Normale Superiore); Nilanjana Datta (University of Cambridge)[abstract]Abstract: The precise one-shot characterisation of operational tasks in classical and quantum information theory relies on different forms of smooth entropic quantities. A particularly important connection is between the hypothesis testing relative entropy and the smoothed max-relative entropy, which together govern many operational settings. We first strengthen this connection into a type of equivalence: we show that the hypothesis testing relative entropy is equivalent to a variant of the smooth max-relative entropy based on the information spectrum divergence, which can be alternatively understood as a measured smooth max-relative entropy. Furthermore, we improve a fundamental lemma due to Datta and Renner that connects the different variants of the smoothed max-relative entropy, introducing a modified proof technique based on matrix geometric means and a tightened gentle measurement lemma. We use the unveiled connections and tools to strictly improve on previously known one-shot bounds and duality relations between the smooth max-relative entropy and the hypothesis testing relative entropy, sharpening also bounds that connect the max-relative entropy with Rényi divergences.
- Efficient Non-Adaptive Quantum Algorithms for Tolerant Junta TestingZongbo Bao (CWI and QuSoft); Yuxuan Liu (Institute of Computing Technology, Chinese Academy of Sciences); Penghui Yao (Nanjing University); Zekun Ye (Fuzhou University); Jialin Zhang (Institute of Computing Technology, Chinese Academy of Sciences)[abstract]Abstract: We consider the problem of deciding whether an $n$-qubit unitary (or $n$-bit Boolean function) is $\varepsilon_1$-close to some $k$-junta or $\varepsilon_2$-far from every $k$-junta, where $k$-junta unitaries act non-trivially on at most $k$ qubits and as the identity on the rest, and $k$-junta Boolean functions depend on at most $k$ variables. For constants $\varepsilon_1,\varepsilon_2$ such that $0 < \varepsilon_1 < \varepsilon_2 < 1$, we show the following. 1. A non-adaptive $O(k\log k)$-query tolerant $(\varepsilon_1,\varepsilon_2)$-tester for $k$-junta unitaries when $2\sqrt{2}\varepsilon_1 < \varepsilon_2$. 2. A non-adaptive tolerant $(\varepsilon_1,\varepsilon_2)$-tester for Boolean functions with $O(k \log k)$ quantum queries when $4\varepsilon_1 < \varepsilon_2$. 3. A $2^{\widetilde{O}(k)}$-query tolerant $(\varepsilon_1,\varepsilon_2)$-tester for $k$-junta unitaries for any $\varepsilon_1,\varepsilon_2$. The first algorithm provides an exponential improvement over the best-known quantum algorithms [CLL24, ADG25]. The second algorithm shows an exponential quantum advantage over any non-adaptive classical algorithm [CDL+25]. The third tester gives the first tolerant junta unitary testing result for an arbitrary gap. In addition, we adapt the first two quantum algorithms to be implemented using only single-qubit operations, thereby enhancing experimental feasibility, with a slightly more stringent requirement on the parameter gap.
- Universal classical-quantum channel resolvability and private channel codingTakaya Matsuura (RIKEN); Masahito Hayashi (The Chinese University of Hong Kong); Min-Hsiu Hsieh (Hon Hai Research Institute)[abstract]Abstract: We study the construction of fully universal private channel coding protocols for classical-quantum channels. While earlier schemes achieved universal decoding, they relied on random encoders, preventing complete universality. We close this gap by showing that spectral expansion of a graph associated with a codebook guarantees universal channel resolvability: if the graph has a large spectral gap, the output state induced by the codewords is asymptotically indistinguishable from the target state, independent of the channel. This yields the first deterministic, channel-independent resolvability coding in the quantum regime. Combining this with universal channel coding, we construct a fully universal private coding protocol that achieves standard private information rates, highlighting the role of expander graphs in secure quantum communication.
- Universal Fault Tolerance with Non-Transversal Clifford GatesBenjamin Anker (University of New Mexico); Milad Marvian (University of New Mexico)[abstract]Abstract: It has previously been shown that fault-tolerant syndrome extraction can be enabled with flag gadgets. In essence, this relies upon the fact that syndrome extraction circuits are Clifford circuits. In this work we extend the previous framework, which produces flag gadgets for syndrome extraction, to one that flags any Clifford circuit. The construction we present allows a Clifford circuit containing n two-qubit gates, acting on physical qubits of a code of distance d, to be made fault tolerant to distance d using O(d^2 log(nd^2 log n)) ancilla qubits and O(nd^2 log(nd^2 log n)) extra CNOTs. This framework opens new pathways to universal fault tolerance, for instance by allowing T gates to be implemented transversally while (a subset of) Clifford gates are fault-tolerantly implemented using flags, or by creating higher-reliability magic states using a flagged preparation circuit.
- Strong converse exponent of channel interconversionAadil Oufkir (RWTH Aachen University); Yongsheng Yao (RWTH Aachen University); Mario Berta (RWTH Aachen University)[abstract]Abstract: In their seminal work, Bennett et al. [IEEE Trans. Inf. Theory (2002)] showed that, with sufficient shared randomness, one noisy channel can simulate another at a rate equal to the ratio of their capacities. We establish that when coding above this channel interconversion capacity, the exact strong converse exponent is characterized by a simple optimization involving the difference of the corresponding Rényi channel capacities with Hölder dual parameters. We extend this result to the entanglement-assisted interconversion of classical-quantum channels, showing that the strong converse exponent is likewise determined by differences of sandwiched Rényi channel capacities. The converse bound is obtained by relaxing to non-signaling assisted codes and applying Hölder duality together with the data processing inequality for Rényi divergences. Achievability is proven by concatenating refined channel coding and simulation protocols that go beyond first-order capacities, achieving exponentially small conversion errors.
- Quantum advantage from soft decodersAndre Chailloux (Inria de Paris); Jean-Pierre Tillich (Inria de Paris)[abstract]Abstract: In recent years, Regev's reduction has been used as a quantum algorithmic tool for providing a quantum advantage for variants of the decoding problem. Following this line of work, the authors of [JSW+24] recently introduced a quantum algorithm called ``decoded quantum interferometry'' that solves several optimization problems in polynomial time. They study in particular the Optimal Polynomial Interpolation (OPI) problem, which can be seen as a decoding problem on Reed-Solomon codes. In this work, we provide strong improvements for some instantiations of the OPI problem. The most notable improvements are for the ISIS_\infty problem (originating from lattice-based cryptography) on Reed-Solomon codes, but we also study different constraints for OPI. Our results provide natural and convincing decoding problems for which we believe there is a quantum advantage. Our proof techniques involve the use of a soft decoder for Reed-Solomon codes, namely the decoding algorithm of Koetter and Vardy. In order to use this decoder in the setting of Regev's reduction, we provide a novel generic reduction from a syndrome decoding problem to a coset sampling problem, yielding a powerful and simple-to-use theorem which generalizes previous work and is of independent interest. We also provide an extensive study of OPI using the Koetter-Vardy algorithm.
- Infinite temperature at zero energyMatteo Ippoliti (The University of Texas at Austin); David M. Long (Stanford University)[abstract]Abstract: We construct a family of static, geometrically local Hamiltonians that inherit eigenstate properties of periodically-driven (Floquet) systems. Our construction is a variation of the Feynman-Kitaev clock - a well-known mapping between quantum circuits and local Hamiltonians - where the clock register is given periodic boundary conditions. Assuming the eigenstate thermalization hypothesis (ETH) holds for the input circuit, our construction yields Hamiltonians whose eigenstates have properties characteristic of infinite temperature, like volume-law entanglement entropy, across the whole spectrum - including the ground state. We then construct a family of exactly solvable Floquet quantum circuits whose eigenstates are shown to obey the ETH at infinite temperature. Combining the two constructions yields a new family of local Hamiltonians with provably volume-law-entangled ground states, and the first such construction where the volume law holds for all contiguous subsystems.
- Derandomised tensor product gap amplification for quantum HamiltoniansThiago Bergamaschi (UC Berkeley); Tony Metger (ETH Zurich); Thomas Vidick (Weizmann Institute and EPFL); Tina Zhang (MIT)[abstract]Abstract: The quantum PCP conjecture asks whether it is QMA-hard to distinguish between high- and low- energy Hamiltonians even when the gap between "high" and "low" energy is large (constant). A natural proof strategy is gap amplification: start from the fact that high- and low-energy Hamiltonians are hard to distinguish if the gap is small (inverse polynomial) [KSV02] and amplify the Hamiltonians to increase the energy gap while preserving hardness. Such a gap amplification procedure is at the heart of Dinur’s proof of the classical PCP theorem [Din07]. In this work, following Dinur’s model, we introduce a new quantum gap amplification procedure for Hamiltonians which uses random walks on expander graphs to derandomise (subsample the terms of) the tensor product amplification of a Hamiltonian. Curiously, our analysis relies on a new technique inspired by quantum de Finetti theorems, which have previously been used to rule out certain approaches to the quantum PCP conjecture [BH13].
- Quantized Markov chain couplings that prepare QsamplesKristan Temme (IBM Quantum); Pawel Wocjan (IBM Quantum)[abstract]Abstract: We present a novel approach to quantizing Markov chains. The approach is based on the Markov chain coupling method, which is frequently used to prove fast mixing. Given a particular coupling, e.g., a grand coupling, we construct a completely positive and trace preserving map. This quantum map has a unique fixed point, which corresponds to the quantum sample (qsample) of the classical Markov chain's stationary distribution. We show that the convergence time of the quantum map is directly related to the coupling time of the Markov chain coupling.
- A distillation-teleportation protocol for fault-tolerant QRAMAlexander M. Dalzell (AWS); András Gilyén (Rényi Institute); Connor T. Hann (AWS); Sam McArdle (AWS); Grant Salton (AWS); Quynh T. Nguyen (Harvard); Aleksander Kubica (Yale); Fernando G.S.L. Brandao (AWS)[abstract]Abstract: We present a protocol for fault-tolerantly implementing the logical quantum random access memory (QRAM) operation, given access to a specialized, noisy QRAM device. For coherently accessing classical memories of size 2^n, our protocol consumes only poly(n) fault-tolerant quantum resources (logical gates, logical qubits, quantum error correction cycles, etc.), avoiding the need to perform active error correction on all Ω(2^n) components of the QRAM device. This is the first rigorous conceptual demonstration that a specialized, noisy QRAM device could be useful for implementing a fault-tolerant quantum algorithm. In fact, the fidelity of the device can be as low as 1/poly(n). The protocol queries the noisy QRAM device poly(n) times to prepare a sequence of n-qubit QRAM resource states, which are moved to a general-purpose poly(n)-size processor to be encoded into a QEC code, distilled, and fault-tolerantly teleported into the computation. To aid this protocol, we develop a new gate-efficient streaming version of quantum purity amplification that matches the optimal sample complexity in a wide range of parameters and is therefore of independent interest. The exponential reduction in fault-tolerant quantum resources comes at the expense of an exponential quantity of purely classical complexity---each of the n iterations of the protocol requires adaptively updating the 2^n-size classical dataset and providing the noisy QRAM device with access to the updated dataset at the next iteration. 
We show that this classical operation can be parallelized to poly(n) classical circuit depth, but only in a model where classical sparse matrix-vector multiplication for 2^n-dimensional vectors can be as well. While our protocol demonstrates that QRAM is more compatible with fault-tolerant quantum computation than previously thought, the need for significant classical computational complexity exposes potentially fundamental limitations to realizing a truly poly(n)-cost fault-tolerant QRAM.
- Partial trace relations beyond normal matricesPablo Costa Rico (Technical University of Munich); Michael M. Wolf (Technical University of Munich)[abstract]Abstract: We investigate the relationship between partial traces and their dilations for general complex matrices, focusing on two main aspects: the existence of (joint) dilations and norm inequalities relating partial traces and their dilations. Throughout our analysis, we pay particular attention to rank constraints. We find that every pair of matrices of equal size and trace admits dilations of any rank larger than one. We generalize Audenaert's subadditivity inequality to encompass general matrices, multiple tensor factors, and different norms. A central ingredient for this is a novel majorization relation for Kronecker sums. As an application, we extend the interval of Werner states in which they are provably 2-undistillable in any dimension $d\geq 4$. We also construct new Schmidt-number witnesses and k-positive maps.
- Quantum Gibbs states are locally MarkovChi-Fang (Anthony) Chen (UC Berkeley); Cambyse Rouzé (INRIA)[abstract]Abstract: The Markov property entails the conditional independence structure inherent in Gibbs distributions for general classical Hamiltonians, a feature that plays a crucial role in inference, mixing time analysis, and algorithm design. However, much less is known about quantum Gibbs states. In this work, we show that for any Hamiltonian with a bounded interaction degree (e.g., D-dimensional lattices), the quantum Gibbs state is locally Markov at arbitrary temperature, meaning there exists a quasi-local recovery map for every local region. Notably, this recovery map is obtained by applying a detailed-balanced Lindbladian with jumps acting on the region. Consequently, we prove that (i) the conditional mutual information (CMI) for a shielded small region decays exponentially with the shielding distance, and (ii) under the assumption of uniform clustering of correlations, Gibbs states of general non-commuting Hamiltonians on $D$-dimensional lattices can be prepared by a quantum circuit of depth $e^{\mathcal{O}(\log^D(n/\epsilon))}$. Our proofs introduce a regularization scheme for imaginary-time-evolved operators at arbitrarily low temperatures and reveal a connection between the Dirichlet form, a dynamic quantity, and the commutator in the KMS inner product, a static quantity. We believe these tools pave the way for tackling further challenges in quantum thermodynamics and mixing times, particularly in low-temperature regimes.
- Improving quantum communication rates with permutation-invariant codesSujeet Bhalerao (University of Illinois Urbana-Champaign); Felix Leditzky (University of Illinois Urbana-Champaign)[abstract]Abstract: In this work we improve the quantum communication rates of various quantum channels of interest using permutation-invariant quantum codes. We focus in particular on parametrized families of quantum channels and aim to improve bounds on their quantum capacity threshold, defined as the lowest noise level at which the quantum capacity of the channel family vanishes. These thresholds are important quantities as they mark the noise level up to which faithful quantum communication is theoretically possible. Our method exploits the fact that independent and identically distributed quantum channels preserve any permutation symmetry present at the input. The resulting symmetric output states can be described succinctly using the representation theory of the symmetric and general linear groups, which we use to derive an efficient algorithm for computing the channel coherent information of a permutation-invariant code. Our approach allows us to evaluate coherent information values for a large number of channel copies, e.g., at least 100 channel copies for qubit channels. We apply this method to various physically relevant channel models, including general Pauli channels, the dephrasure channel, the generalized amplitude damping channel, and the damping-dephasing channel. For each channel family we obtain improved lower bounds on their quantum capacities. For example, for the 2-Pauli and BB84 channel families we significantly improve the best known quantum capacity thresholds derived in [Fern, Whaley 2008]. These threshold improvements are achieved using a repetition code-like input state with non-orthogonal code states, which we further analyze in our representation-theoretic framework.
- Evidence that the Quantum Approximate Optimization Algorithm Optimizes the Sherrington-Kirkpatrick Model Efficiently in the Average CaseSami Boulebnane (JPMorganChase); Abid A. Khan (JPMorganChase); Minzhao Liu (JPMorganChase); Jeffrey Larson (Argonne National Laboratory); Dylan Herman (JPMorganChase); Ruslan Shaydulin (JPMorganChase); Marco Pistoia (JPMorganChase)[abstract]Abstract: The Sherrington-Kirkpatrick (SK) model serves as a foundational framework for understanding disordered systems. The Quantum Approximate Optimization Algorithm (QAOA) is a quantum optimization algorithm whose performance monotonically improves with its depth $p$. In this work, we introduce a new equivalence between the task of evaluating the energy of QAOA applied to the SK model in the infinite-size limit and the task of simulating a spin-boson system, which we show can be done with modest cost using matrix product states. Using this equivalence, we optimize QAOA parameters and provide numerical evidence that QAOA obtains a $(1-\epsilon)$ approximation to the optimal energy with circuit depth $\mathcal{O}(n/\epsilon^{\infiniteSizeLimitOneOverEta})$ in the average case, with $\varepsilon\lesssim\infiniteSizeLastpError\%$ at $p=\infiniteSizeLastp$. We then use these optimized QAOA parameters to evaluate the QAOA energy for finite-sized instances with up to $30$ qubits and find convergence to the ground state consistent with the infinite-size limit prediction. Our results provide strong numerical evidence that QAOA can efficiently approximate the ground state of the SK model in the average case.
- Efficiently learning depth-3 circuits via quantum agnostic boostingSrinivasan Arunachalam (IBM Quantum, Almaden Research Center); Arkopal Dutt (IBM Quantum, Cambridge); Alexandru Gheorghiu (IBM Quantum, Cambridge); Michael de Oliveira (International Iberian Nanotechnology Laboratory)[abstract]Abstract: We initiate the study of \emph{quantum agnostic learning} of phase states with respect to a function class $\mathcal{C}\subseteq \{c:\{0,1\}^n\rightarrow \{0,1\}\}$: given copies of an unknown $n$-qubit state $|\psi\rangle$ which has fidelity $\textsf{opt}$ with a phase state $|\phi_c\rangle=\frac{1}{\sqrt{2^n}}\sum_{x\in \{0,1\}^n}(-1)^{c(x)}|x\rangle$ for some $c\in \mathcal{C}$, output $|\phi\rangle$ which has fidelity $|\langle \phi | \psi \rangle|^2 \geq \textsf{opt}-\varepsilon$. To this end, we give agnostic learning protocols for the following classes: \begin{enumerate} \item Size-$t$ decision trees, with a protocol running in time $\textsf{poly}(n,t,1/\varepsilon)$. This also implies $k$-juntas can be agnostically learned in time $\textsf{poly}(n,2^k,1/\varepsilon)$. \item $s$-term DNF formulas in near-polynomial time $\textsf{poly}(n,(s/\varepsilon)^{\log \log s/\varepsilon})$. \end{enumerate} Our main technical contribution is a \emph{quantum agnostic boosting} protocol which converts a ``weak'' agnostic learner (which outputs a \emph{parity state} $|\phi\rangle$ such that $|\langle \phi|\psi\rangle|^2\geq \textsf{opt}/\textsf{poly}(n)$) into a ``strong'' learner (which outputs a sum of parity states $|\phi'\rangle$ such that $|\langle \phi'|\psi\rangle|^2\geq \textsf{opt} - \varepsilon$). Using quantum agnostic boosting, we obtain the first ``near'' polynomial-time $n^{O(\log \log n)}$ algorithm for learning $\textsf{poly}(n)$-sized depth-$3$ circuits (consisting of $\textsf{AND}$, $\textsf{OR}$, $\textsf{NOT}$ gates) in the uniform quantum $\textsf{PAC}$ model using quantum examples. 
Classically, efficiently learning depth-$3$ circuits (and even depth-$2$ circuits) in the uniform $\textsf{PAC}$ model has been a longstanding open question in computational learning theory. Our work nearly settles this question when the learner is given quantum examples.
- Exponential improvements to the average-case hardness of BosonSamplingIshaun Datta (Stanford University); Adam Bouland (Stanford University); Bill Fefferman (University of Chicago); Felipe Hernandez (Penn State)[abstract]Abstract: BosonSampling and Random Circuit Sampling are important both as a theoretical tool for separating quantum and classical computation, and as an experimental means of demonstrating quantum speedups. Prior works have shown that average-case hardness of sampling follows from certain unproven conjectures about the hardness of computing output probabilities, such as the Permanent-of-Gaussians Conjecture (PGC), which states that $e^{-n\log{n}-n-O(\log n)}$ additive-error estimates to the output probability of most random BosonSampling experiments are $\#P$-hard. However, prior works have only shown weaker average-case hardness results that do not imply sampling hardness. Proving these conjectures has become a central question in quantum complexity. In this work, we show that $e^{-n\log n-n-O(n^\delta)}$ additive-error estimates to output probabilities of most random BosonSampling experiments are $\#P$-hard for any $\delta>0$, exponentially improving on prior work. In the process, we circumvent all known barrier results for proving PGC. The remaining hurdle to prove PGC is now “merely” to show that the $O(n^\delta)$ in the exponent can be improved to $O(\log n).$ We also obtain an analogous result for Random Circuit Sampling. We then show, for the first time, a hardness of average-case classical sampling result for BosonSampling, under an anticoncentration conjecture. Specifically, we prove the impossibility of multiplicative-error sampling from random BosonSampling experiments with probability $1-2^{-\tilde{O}(N^{1/3})}$ for input size $N$, unless the Polynomial Hierarchy collapses. This exponentially improves upon the state-of-the-art. 
To do this, we introduce new proof techniques which tolerate exponential loss in the worst-to-average-case reduction. This opens the possibility to show the hardness of average-case sampling without ever proving PGC.
- Composable logical gate error in approximate quantum error correctionLukas Brenner (Technical University of Munich); Beatriz Dias (Technical University of Munich); Robert König (Technical University of Munich)[abstract]Abstract: To quantify the accuracy of logical gates in approximate quantum error correction, we introduce the {\em composable logical gate error}. This quantity accounts for both deviation from the target gate and leakage out of the code space. It is subadditive under gate composition, enabling simple circuit analysis, and can be bounded using matrix elements of physical unitaries between (approximate) logical basis states. As a case study, we analyze the composable logical gate error of linear-optics implementations of Paulis and Cliffords in approximate Gottesman-Kitaev-Preskill (GKP) codes. We find that the logical gate error for implementations of Pauli gates depends linearly on the squeezing parameter. This means that their accuracy increases monotonically with the amount of squeezing. In contrast, implementations of some Clifford gates retain a constant logical gate error even in the limit of infinite squeezing. This highlights that results derived for ideal GKP codes do not always translate to physically realistic approximate codes. We propose a way of sidestepping this no-go result in hybrid qubit-oscillator systems with Gaussian, multi-qubit, and qubit-controlled Gaussian unitaries. Specifically, we give implementations of logical gates using two oscillators and three qubits, whose logical gate error is bounded by a linear function of the squeezing parameter and scales polynomially with the number of encoded qubits.
- Quantum oracles, weak and strongEwin Tang (UC Berkeley); John Wright (UC Berkeley); Mark Zhandry (Stanford University & NTT Research)[abstract]Abstract: What does it mean to have access to a subroutine? In quantum computing, the usual answer is that, if we can perform a circuit implementing a unitary $U$, we can also implement its variants $U^\dagger$, controlled-$U$, and controlled-$U^\dagger$ equally as efficiently. These four operations are the "usual" suite of oracles we mean when we model access to a subroutine. We interrogate these choices and investigate how changing our access model to $U$ affects the tractability of its computational problems. This submission is a collection of three preprints on possible models of oracle access. 1. In the most limited setting, we only have access to $U$: this is the natural model for metrology, sensing, learning, and other experimental contexts. We show that, unlike in the usual model, the generic Grover speedups for search and estimation, ubiquitous throughout quantum algorithms, cannot be achieved in this setting. 2. In the usual model as well as in general, we show that controlled unitary oracles have a surprisingly limited role. We prove they are not helpful for a large class of problems: problems which are invariant up to global phase, i.e., problems about $U$ as a unitary channel. 3. In the full setting, we are given a circuit implementing $U$: this model is natural for algorithms on scalable quantum computers. We show that the usual suite of oracles is not enough to characterize the strength of this model, by giving a natural problem which cannot be solved efficiently without queries to $U^*$ or $U^T$---which can be implemented from any circuit for $U$. Our proofs are simple and use two main techniques: the compressed oracle method and a lifting principle which we call the "acorn trick".
- The Compressed Oracle is a Worthy (Multiplicative) AdversaryStacey Jeffery (QuSoft, CWI, University of Amsterdam); Sebastian Zur (IRIF, CNRS)[abstract]Abstract: The compressed oracle technique, introduced in the context of quantum cryptanalysis, is the latest method for proving quantum query lower bounds, and has had an impressive number of applications since its introduction, due in part to the ease of importing classical lower bound intuition into the quantum setting via this method. Previously, the main quantum query lower bound methods were the polynomial method, the adversary method, and the multiplicative adversary method, and their relative powers were well understood. In this work, we situate the compressed oracle technique within this established landscape, by showing that it is a special case of the multiplicative adversary method. To accomplish this, we introduce a simplified restriction of the multiplicative adversary method, the MLADV method, that remains powerful enough to capture the polynomial method and exhibit a strong direct product theorem, but is much simpler to reason about. We show that the compressed oracle technique is also captured by the MLADV method. This might make the MLADV method a promising direction in the current quest to extend the compressed oracle technique to non-product distributions.
- Dequantization and Hardness of Spectral Sum EstimationRoman Edenhofer (Université Paris Cité, CNRS, IRIF); Atsuya Hasegawa (Nagoya University); François Le Gall (Nagoya University)[abstract]Abstract: In this work, we present new dequantization and hardness results for estimating spectral sums of matrices, such as the log-determinant. Recent quantum algorithms have demonstrated that the logarithm of the determinant of sparse, well-conditioned, positive matrices can be approximated to $\varepsilon$-relative accuracy in time polylogarithmic in the dimension~$N$, specifically in time~$\mathrm{poly}(\mathrm{log}(N), s, \kappa, 1/\varepsilon)$, where $s$ is the sparsity and $\kappa$ the condition number of the input matrix. We provide a simple dequantization of these techniques that preserves the polylogarithmic dependence on the dimension, at the cost of worse scaling in the sparsity, condition number, and accuracy. Our classical algorithm runs in time~$\mathrm{poly}\mathrm{log}(N)\cdot s^{O(\sqrt{\kappa}\log \kappa/\varepsilon)}$, which constitutes an exponential improvement over previous classical algorithms in certain parameter regimes. In addition, we prove $\mathsf{DQC}1$-completeness for estimating specific spectral sums, such as the trace of the inverse and the trace of matrix powers, with parameter scalings analogous to those of the quantum algorithms for the log-determinant in the case of log-local Hamiltonians. The latter solves a main open problem posed by Cade and Montanaro (TQC 2018) and is closely related to the complexity of Schatten $p$-norm estimation. It further shows that the parameter scalings of known quantum algorithms are not achievable classically, assuming~$\mathsf{BPP}\subsetneq\mathsf{DQC}1$. Furthermore, we consider a different input model where, instead of a classical description of a sparse matrix, we are given a block-encoding of it, and analyze the complexity of spectral sum estimation in this model. 
We obtain $\mathsf{DQC}1$-completeness of estimating $\mathrm{tr}[f(A)]$ in this model in a very general way, in particular whenever $f$ and $f^{-1}$ are Lipschitz continuous with bounded Lipschitz constants. We conclude our work with $\mathsf{BQP}$-hardness and $\mathsf{PP}$-completeness results on the estimation of the log-determinant.
- Topological Quantum Spin Glass Order in qLDPC codesBenedikt Placke (University of Oxford); Tibor Rakovszky (Budapest University of Technology and Economics); Nikolas P. Breuckmann (University of Bristol); Vedika Khemani (Stanford University)[abstract]Abstract: Ordered phases of matter have close connections to computation. Two prominent examples are spin glass order, with wide-ranging applications in machine learning and optimization, and topological order, closely related to quantum error correction. Here, we introduce the concept of topological quantum spin glass (TQSG) order which marries these two notions, exhibiting both the complex energy landscapes of spin glasses, and the quantum memory and long-range entanglement characteristic of topologically ordered systems. Using techniques from coding theory and a quantum generalization of Gibbs state decompositions, we show that TQSG order is the low-temperature phase of various quantum low density parity check codes on expander graphs, including hypergraph and balanced product codes. Our work introduces a topological analog of spin glasses that preserves quantum information via a physically distinct mechanism, opening new avenues for both quantum statistical mechanics and quantum computer science.
- Tour de gross: A modular quantum computer based on bivariate bicycle codesEddie Schoute (IBM Quantum); Theodore J. Yoder (IBM Quantum); Patrick Rall (IBM Quantum); Emily Pritchett (IBM Quantum); Jay Gambetta (IBM Quantum); Andrew W. Cross (IBM Quantum); Malcolm Carroll (IBM Quantum); Michael E. Beverland (IBM Quantum)[abstract]Abstract: We present the bicycle architecture, a modular quantum computing framework based on high-rate, low-overhead quantum LDPC codes identified in prior work. For two specific bivariate bicycle codes with distances 12 and 18, we construct explicit fault-tolerant logical instruction sets and estimate the logical error rate of the instructions under circuit noise. We develop a compilation strategy adapted to the constraints of the bicycle architecture, enabling large-scale universal quantum circuit execution. Integrating these components, we perform end-to-end resource estimates demonstrating that an order of magnitude larger logical circuits can be implemented with a given number of physical qubits on the bicycle architecture than on surface code architectures. We anticipate further improvements through advances in code constructions, circuit designs, and compilation techniques.
- All pure multipartite entangled states of qubits can be self-tested up to complex conjugationIvan Šupić (Université Grenoble Alpes, CNRS, Grenoble INP, LIG); Maria Balanzo Juando (Université Libre de Bruxelles); Andrea Coladangelo (Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, USA); Remigiusz Augusiak (Center for Theoretical Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland); Antonio Acin (ICFO – Institut de Ciències Fotòniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Spain)[abstract]Abstract: Device-independent self-testing refers to the certification of quantum states based entirely on the correlations exhibited by measurements on separate subsystems. The fact that such a certification is possible at all is remarkable in its own right, and is intimately connected to the violation of Bell's inequalities by entangled quantum systems. In the bipartite case, self-testing of states has been completely characterized, up to local isometries, as there exist protocols that self-test arbitrary pure states of any local dimension. Despite the growing interest in device-independent certification protocols, an analogous result in the general multipartite case has remained elusive. In this work, we give a complete characterization of the qubit case, showing that any multipartite entangled state of qubits can be self-tested.
- An Area Law for Metastable StatesThiago Bergamaschi (UC Berkeley); Chi-Fang (Anthony) Chen (UC Berkeley); Umesh Vazirani (UC Berkeley)[abstract]Abstract: Statistical mechanics assumes that a quantum many-body system at low temperature can be described by its Gibbs state. However, many complex quantum systems only exist as metastable states of dissipative open system dynamics, which substantially deviate from true thermal equilibrium. Why, then, should the predictions of thermal equilibrium--such as the area law--be so unreasonably effective in explaining low-temperature phenomena? In this work, we model metastable states as approximate stationary states of a quasi-local, KMS-detailed-balanced master equation representing Markovian system-bath interaction. We show that all metastable states exhibit universal structures that parallel true quantum Gibbs states: an area law of mutual information and a local Markov property. The more metastable the states are, the larger the regions to which these structural results apply. Behind our structural results lies a systematic framework encompassing sharp equivalences between local minima of free energy, a non-commutative Fisher information, as well as approximate detailed-balance and Kubo-Martin-Schwinger conditions, ultimately building towards a quantitative theory of thermal metastability.
- An Algorithmic Polynomial Freiman-Ruzsa Theorem via Stabilizer LearningSrinivasan Arunachalam (IBM Quantum); Jop Briet (CWI); Davi Castro-Silva (University of Cambridge); Arkopal Dutt (IBM Quantum); Tom Gur (University of Cambridge)[abstract]Abstract: In a recent breakthrough in additive combinatorics, Gowers, Green, Manners, and Tao (Annals of Mathematics, 2025) resolved the polynomial Freiman-Ruzsa conjecture. Here, we algorithmize their main result by dequantizing the stabilizer learning algorithm of Chen et al. [QIP'25].
- Quantum Spin Chains Thermalize at All TemperaturesThiago Bergamaschi (UC Berkeley); Chi-Fang (Anthony) Chen (UC Berkeley)[abstract]Abstract: It is shown that every one-dimensional Hamiltonian with short-range interacting spins admits a quantum Gibbs sampler [CKG23] with a system-size independent spectral gap at all finite temperatures. Consequently, their Gibbs states can be prepared in polylogarithmic depth, and satisfy exponential clustering of correlations, generalizing [Ara69].
- Learning quantum Gibbs states locally and efficientlyChi-Fang (Anthony) Chen (UC Berkeley); Anurag Anshu (Harvard University); Quynh Nguyen (Harvard University)[abstract]Abstract: Learning the Hamiltonian underlying a quantum many-body system in thermal equilibrium is a fundamental task in quantum learning theory and experimental sciences. To learn the Gibbs state of local Hamiltonians at any inverse temperature $\beta$, the state-of-the-art provable algorithms fall short of the optimal sample and computational complexity, in sharp contrast with the locality and simplicity in the classical cases. In this work, we present a learning algorithm that learns each local term of an $n$-qubit $D$-dimensional Hamiltonian to an additive error $\epsilon$ with sample complexity $\tilde{O}( \frac{e^{\mathrm{poly}(\beta)}}{\beta^2\epsilon^2}) \log(n)$. The protocol uses parallelizable local quantum measurements that act within bounded regions of the lattice and near-linear-time classical post-processing. Thus, our complexity is near optimal with respect to $n,\epsilon$ and is polynomially tight with respect to $\beta$. We also give a learning algorithm for Hamiltonians with bounded interaction degree with sample and time complexities of similar scaling on $n$ but worse on $\beta, \epsilon$. At the heart of our algorithm is the interplay between locality, the Kubo-Martin-Schwinger condition, and the operator Fourier transform at arbitrary temperatures.
- Local transformations of bipartite entanglement are rigidCan Bostanci (Columbia University); Tony Metger (ETH Zurich); Henry Yuen (Columbia University)[abstract]Abstract: Uhlmann’s theorem is a fundamental result in quantum information theory that quantifies the optimal overlap between two bipartite pure states after applying local unitary operations (called Uhlmann transformations). We show that optimal Uhlmann transformations are rigid – in other words, they must be unique up to some well-characterized degrees of freedom. This rigidity is also robust: Uhlmann transformations achieving near-optimal overlaps must be close to the unique optimal transformation (again, up to well-characterized degrees of freedom). We describe two applications of our robust rigidity theorem: (a) we obtain better interactive proofs for synthesizing Uhlmann transformations and (b) we obtain a simple, alternative proof of the Gowers-Hatami theorem on the stability of approximate representations of finite groups.
- Lieb-Robinson bounds with exponential-in-volume tailsBen McDonough (The University of Colorado, Boulder); Chao Yin (Stanford University); Andrew Lucas (The University of Colorado, Boulder); Carolyn Zhang (Harvard University)[abstract]Abstract: Lieb-Robinson bounds demonstrate the emergence of locality in many-body quantum systems. Intuitively, Lieb-Robinson bounds state that with local or exponentially decaying interactions, the correlation that can be built up between two sites separated by distance $r$ after a time $t$ decays as $\exp(vt-r)$, where $v$ is the emergent Lieb-Robinson velocity. In many problems, it is important to also capture how much of an operator grows to act on $r^d$ sites in $d$ spatial dimensions. Perturbation theory and cluster expansion methods suggest that at short times, these volume-filling operators are suppressed as $\exp(-r^d)$. We confirm this intuition, showing that for $r > vt$, the volume-filling operator is suppressed by $\exp(-(r-vt)^d/(vt)^{d-1})$. This closes a conceptual and practical gap between the cluster expansion and the Lieb-Robinson bound. We then present two very different applications of this new bound. Firstly, we obtain improved bounds on the classical computational resources necessary to simulate many-body dynamics with error tolerance $\epsilon$ for any finite time $t$: as $\epsilon$ becomes sufficiently small, only $\epsilon^{-\mathrm{O}(t^{d-1})}$ resources are needed. A protocol that likely saturates this bound is given. Secondly, we prove that disorder operators have volume-law suppression near the "solvable (Ising) point" in quantum phases with spontaneous symmetry breaking, which implies a new diagnostic for distinguishing many-body phases of quantum matter.
- Better completeness for QMAScott Aaronson (University of Texas at Austin); Stacey Jeffery (Centrum Wiskunde & Informatica); Freek Witteveen (Centrum Wiskunde & Informatica)[abstract]Abstract: A long-standing open problem in quantum complexity theory is whether QMA has perfect completeness, i.e. whether any QMA verifier can be made to have completeness $c=1$. Previous constructions have yielded a completeness parameter exponentially close to 1. We improve this to doubly-exponentially close to 1. Additionally, we show that QMA has perfect completeness if one allows the verifier an infinite-dimensional (witness) space. We show that this can be achieved using a gate set for which the ability to use an infinite-dimensional space does not increase the computational power of QMA. We also show that when using a finite-dimensional space of polynomially many qubits, a completeness doubly-exponentially close to 1 is optimal among black-box constructions. We show that the soundness can at most be made exponentially small using black-box reductions.
- Is it Gaussian? Testing bosonic quantum statesFilippo Girardi (Scuola Normale Superiore); Freek Witteveen (QuSoft and CWI); Francesco Anna Mele (Scuola Normale Superiore); Lennart Bittel (Dahlem Center for Complex Quantum Systems, Freie Universität Berlin); Salvatore Francesco Emanuele Oliviero (Scuola Normale Superiore); David Gross (Institute for Theoretical Physics, University of Cologne); Michael Walter (Ruhr University Bochum and University of Amsterdam)[abstract]Abstract: Gaussian states are widely regarded as the most important class of continuous-variable (CV) quantum states, as they naturally arise in physical systems and play a key role in quantum technologies. This motivates a fundamental question: given copies of an unknown CV state, how can we efficiently test whether it is Gaussian? We address this problem from the perspective of representation theory and quantum learning theory, characterizing the sample complexity of Gaussianity testing as a function of the number of modes. For pure states, we prove that just a constant number of copies is sufficient to decide whether the state is exactly Gaussian. We then extend this to the tolerant setting, showing that a polynomial number of copies suffices to distinguish states that are close to Gaussian from those that are far. In contrast, we establish that testing Gaussianity of general mixed states necessarily requires exponentially many copies, thereby identifying a fundamental limitation in testing CV systems. Our approach relies on rotation-invariant symmetries of Gaussian states together with the recently introduced toolbox of CV trace-distance bounds.
- Umlaut informationFilippo Girardi (Scuola Normale Superiore); Aadil Oufkir (RWTH Aachen University); Bartosz Regula (RIKEN Center for Quantum Computing); Marco Tomamichel (National University of Singapore); Mario Berta (RWTH Aachen University); Ludovico Lami (Scuola Normale Superiore)[abstract]Abstract: We study the quantum umlaut information, a correlation measure defined for bipartite quantum states as a reversed variant of the quantum mutual information. We show that it has an operational interpretation as the asymptotic error exponent in the hypothesis testing task of deciding whether a given bipartite state is product or not. We generalise the umlaut information to quantum channels, where it also extends the notion of `oveloh information' [Nuradha et al., arXiv:2404.16101]. We prove that channel umlaut information is additive for classical-quantum channels, while we observe additivity violations for fully quantum channels. Inspired by recent results in entanglement theory, we then show as our main result that the regularised umlaut information constitutes a fundamental measure of the quality of classical information transmission over a quantum channel - as opposed to the capacity, which quantifies the quantity of information that can be sent. This interpretation applies to coding assisted by activated non-signalling correlations, and the channel umlaut information is in general larger than the corresponding expression for unassisted communication as obtained by Dalai for the classical-quantum case [IEEE Trans. Inf. Theory 59, 8027 (2013)]. In the classical unassisted setting, the channel umlaut information has a further operational interpretation as the zero-rate error exponent of list decoding in the large list limit. Combined with prior works on non-signalling--assisted zero-error channel capacities, our findings imply a dichotomy between the settings of zero-rate error exponents and zero-error communication. 
While our results are single-letter only for classical-quantum channels, we also give a single-letter bound for fully quantum channels in terms of the `geometric' version of umlaut information.
- Quantum generalizations of Glauber and Metropolis dynamicsChi-Fang Chen (Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, CA, USA; University of California, Berkeley, CA, USA; Massachusetts Institute of Technology, Cambridge, MA, USA); Csaba Czabán (HUN-REN Alfréd Rényi Institute of Mathematics, Budapest, Hungary); Joao F. Doriguello (HUN-REN Alfréd Rényi Institute of Mathematics, Budapest, Hungary); András Gilyén (HUN-REN Alfréd Rényi Institute of Mathematics, Budapest, Hungary); Balázs Kabella (HUN-REN Alfréd Rényi Institute of Mathematics, Budapest, Hungary); Michael J. Kastoryano (AWS Center for Quantum Computing, Pasadena, CA & University of Copenhagen, Denmark); József Mák (HUN-REN Wigner Research Centre for Physics, Budapest, Hungary); Zoltán Zimborás (HUN-REN Wigner Research Centre for Physics, Budapest, Hungary & University of Helsinki, Finland)[abstract]Abstract: Markov Chain Monte Carlo (MCMC) methods are an essential tool in classical algorithm design. In particular, the Metropolis sampling algorithm and Glauber dynamics have drastically advanced our understanding of material properties, reaction dynamics, phase transitions, and thermodynamics. Recently, there has been a new wave of quantum MCMC algorithms that draws inspiration from the cooling process in Nature to design continuous-time Quantum Markov chains (i.e., Lindbladians) satisfying (approximate) detailed balance. Nevertheless, the quantum analog of detailed balance, which has been central to classical Markov chain design and analysis, has posed a challenge to quantum algorithm design and has only recently been achieved exactly and (quasi)-locally for an efficiently implementable Lindbladian by [CKG23]. The construction of [CKG23] provably leads to an efficient Gibbs state preparation method in the high-temperature regime. However, proving fast mixing for low temperatures remains an open problem, apart from some (almost) integrable systems. 
Here we introduce (i) a new continuous-time Lindbladian construction that also leads to quasi-local and detailed-balanced dynamics, and (ii) show that it is fast mixing for high-temperature lattice Hamiltonians. The new construction's major advantage is that it does not increase the number of Kraus operators, which is particularly helpful for numerical studies. We exploit the resulting low Kraus rank through a (iii) novel custom variant of density matrix renormalization group (DMRG) for superoperators to provide numerical evidence for various 1D models (Transverse-field Ising, Heisenberg XXZ) that the Gibbs sampler is mixing fast. We also introduce (iv) new detailed-balanced discrete-time quantum channel variants of all existing continuous-time detailed-balanced Lindbladian constructions, (v) show that they are also fast mixing at high temperatures, and (vi) provide some preliminary resource estimates for their implementation confirming their algorithmic efficiency.
- Pauli tomography at your fingertipsJayadev Acharya (Cornell University); Abhilash Dharmavarapu (Cornell University); Yuhan Liu (Rice University); Nengkun Yu (SUNY Stony Brook)[abstract]Abstract: We prove that to learn an $N$-qubit state with $\varepsilon$ error in trace distance, $\tilde{\Theta}(\frac{10^N}{\varepsilon^2})$ copies are necessary and sufficient using Pauli measurements, where $\tilde{\Theta}$ hides a $\sqrt{N}$ factor. The lower bound holds under adaptivity. Thus, we nearly settle the worst-case copy complexity of Pauli tomography, which has been a long-standing problem. Our main technical contribution is a novel lower bound framework for adaptive single-copy state tomography with measurement constraints. Our method allows measurement-dependent hard instances for tighter lower bounds, and characterizes the hardness of learning using the "measurement information channel". The power of our framework extends beyond Pauli measurements: We prove that Pauli measurements are near-optimal among single-qubit measurements, and further prove tight lower bounds for adaptive $k$-outcome measurements.
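Pauli tomography is easiest to see in the single-qubit case, where a state is fully determined by three Pauli expectation values via $\rho = \frac{1}{2}(I + \langle X\rangle X + \langle Y\rangle Y + \langle Z\rangle Z)$. The sketch below is our own minimal illustration of this textbook special case (all function names are ours), not the paper's $N$-qubit protocol or lower-bound framework:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

rng = np.random.default_rng(0)

def sample_pauli_expectation(rho, P, shots):
    """Estimate tr(rho P) by simulating projective measurements of P."""
    evals, evecs = np.linalg.eigh(P)
    probs = np.real([evecs[:, k].conj() @ rho @ evecs[:, k] for k in range(2)])
    probs = np.clip(probs, 0, None)
    probs /= probs.sum()
    return rng.choice(evals, size=shots, p=probs).mean()

def tomography_1q(rho, shots=200_000):
    """Reconstruct a qubit state from estimated Pauli expectations:
    rho = (I + <X> X + <Y> Y + <Z> Z) / 2."""
    ex, ey, ez = (sample_pauli_expectation(rho, P, shots) for P in (X, Y, Z))
    return (I2 + ex * X + ey * Y + ez * Z) / 2

# Example: the state |+i><+i| with a little white noise mixed in
rho = 0.9 * np.array([[0.5, -0.5j], [0.5j, 0.5]]) + 0.1 * I2 / 2
rho_hat = tomography_1q(rho)
err = np.abs(np.linalg.eigvalsh(rho_hat - rho)).sum() / 2  # trace distance
```

With $2\times 10^5$ shots per Pauli, the trace-distance error is on the order of $10^{-3}$, consistent with the $1/\varepsilon^2$ shot scaling in the copy-complexity statement above.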
- Average-case quantum complexity from glassinessAlexander Zlokapa (MIT); Bobak T. Kiani (Bowdoin College); Eric R. Anschuetz (Caltech)[abstract]Abstract: We provide a framework for average-case quantum complexity by showing that glassiness obstructs a natural family of quantum algorithms. Glassiness --- a phenomenon in physics characterized by a disordered, slow-mixing phase --- is known to imply hardness for stable classical algorithms; for example, constant-time Langevin dynamics or message-passing fail for random $k$-SAT and max-cut problems in a glassy parameter regime. We present comparable results in the quantum setting with the following contributions. (i) Quantum optimal transport view of glassiness: we show that the standard notion of quantum glassiness in physics implies that the Gibbs state is decomposed into clusters extensively separated in quantum Wasserstein distance. We prove this implies lower bounds on the quantum Wasserstein complexity of channels from non-glassy to glassy states. (ii) Structural argument for hardness: we define stable quantum algorithms in terms of Lipschitz temperature dependence. We prove that constant-time local Lindbladian evolution and shallow variational algorithms are stable and hence fail to capture the clustered geometry of the Gibbs state, yielding a geometrically interpretable algorithmic obstruction. Contrary to prior Lindbladian runtime lower bounds that only apply to evolution from worst-case initial states, our results hold even when starting from the maximally mixed state. At a technical level, our techniques (based on channel complexity) differ significantly from classical probabilistic approaches due to the sign problem in the absence of a known eigenbasis. This allows our average-case hardness results to apply to non-commuting, non-stoquastic quantum Hamiltonians. 
As an example, we show the average-case hardness of random 3-local Hamiltonians: the ensemble of all 3-local Pauli strings with independent Gaussian coefficients. To obtain this result, we compute the full replica symmetry breaking solution of the general $p$-local Pauli Hamiltonian ensemble via the replica trick, a non-rigorous but widely used method in statistical physics. The system's phase diagram is richer than its classical (Ising $p$-spin) and fermionic (SYK) analogues, which either always or never have a glassy phase; instead, the Pauli ensemble has a glassy phase only below some constant value of $p$, confirming the phase diagram predicted by prior finite-size numerical analyses.
- Less is More: On Copy Complexity in Quantum CryptographyPrabhanjan Ananth (UCSB); Eli Goldin (NYU)[abstract]Abstract: Quantum cryptographic definitions are often sensitive to the number of copies of the cryptographic states received by the adversary. Making definitional changes to the number of copies accessible to an adversary can drastically affect various aspects including the computational hardness, feasibility, and applicability of the resulting cryptographic scheme. This phenomenon appears in many places in quantum cryptography, including the notions of quantum pseudorandomness and unclonable cryptography. To address this, we present a generic approach to boost single-copy security to multi-copy security and apply this approach to many settings. As a consequence, we obtain the following new results: • One-copy stretch pseudorandom state generators (under mild assumptions) imply the existence of t-copy stretch pseudorandom state generators, for any fixed polynomial t. • One-query pseudorandom unitaries with short keys (under mild assumptions) imply the existence of t-query pseudorandom unitaries with short keys, for any fixed polynomial t. • Assuming indistinguishability obfuscation and other standard cryptographic assumptions, there exist identical-copy secure unclonable primitives such as public-key quantum money and quantum copy-protection.
- Quantum simulation of a noisy classical nonlinear dynamicsSergey Bravyi (IBM Quantum); Sergiy Zhuk (IBM Quantum); Mykhaylo Zayats (IBM Quantum); Robert Manson-Sawko (IBM Research Europe)[abstract]Abstract: We present an end-to-end quantum algorithm with provable performance guarantees for simulating a large class of classical nonlinear dynamical systems. The considered dynamical systems are described by stochastic dissipative differential equations with a quadratic nonlinearity satisfying certain sparsity and divergence-free conditions. Our algorithm approximates the expected value of any sparse low-degree polynomial evaluated at the solution of the classical system. The runtime scales poly-logarithmically with the system size and polynomially with the evolution time, inverse error tolerance, and parameters quantifying sparsity, dissipation, and nonlinearity strength. The considered simulation problem is shown to be BQP-complete, providing strong evidence for a quantum advantage. We benchmark the quantum algorithm via numerical experiments by simulating a vortex flow in the 2D Navier-Stokes equation.
- Classical Simulations of Low Magic Quantum DynamicsKemal Aziz (Rutgers University); Haining Pan (Rutgers University); Michael Gullans (NIST & University of Maryland); Jedediah Pixley (Rutgers University & Flatiron Institute)[abstract]Abstract: We develop classical simulation algorithms for adaptive quantum circuits that produce states with low levels of "magic" (i.e., non-stabilizerness). These algorithms are particularly well-suited to circuits with high rates of Pauli measurements, such as those encountered in quantum error correction and monitored quantum circuits. The measurements serve to limit the buildup of magic induced by non-Clifford operations arising from generic noise processes or unitary gates, respectively. Our algorithms also allow a systematic truncation procedure to achieve approximate simulation. To benchmark our approach, we study the dynamics of all-to-all monitored quantum circuits with a sub-extensive rate of T-gates per unit of circuit depth, where we can simulate previously inaccessible system sizes and depths. We characterize measurement-induced phase transitions in the output wavefunction, including in the entanglement, purification, and magic. We outline the utility of our algorithms to simulate dynamics with low magic and high entanglement, complementary to the leading matrix-product state approaches.
- Fast Simulation of Fermions with Reconfigurable QubitsNishad Maskara (MIT); Marcin Kalinowski (Harvard); Daniel Gonzalez-Cuadra (CSIC); Mikhail Lukin (Harvard)[abstract]Abstract: Performing large-scale, accurate quantum simulations of many-fermion systems is a central challenge in quantum science, with applications in chemistry, materials, and high-energy physics. Despite significant progress, realizing generic fermionic algorithms with qubit systems incurs significant space-time overhead, scaling as $O(N)$ for $N$ fermionic modes. Here we present a method for faster fermionic simulation with asymptotic space-time overhead of $O(\log(N))$ in the worst case, and $O(1)$ for circuits with additional structure, including important subroutines like the fermionic fast Fourier transform. This exponential reduction is achieved by using reconfigurable quantum systems with non-local connectivity, mid-circuit measurement, and classical feedforward, to generate dynamical fermion-to-qubit mappings. We apply this technique to achieve efficient compilation for key simulation tasks, including Hamiltonian simulation of the sparse Sachdev–Ye–Kitaev model and periodic materials, as well as free-fermion state-preparation. Moreover, we show that the algorithms themselves can be adapted to use only the $O(1)$-overhead structures to further reduce resource overhead. These techniques can lower gate counts by orders of magnitude for practical system sizes and are natively compatible with error corrected computation, making them ideal for early fault-tolerant quantum devices. Our results tightly bound the computational gap between fermionic and qubit models and open new directions in quantum simulation algorithm design and implementation.
- Quantum matrix arithmetics with Hamiltonian evolutionChristopher Kang (The University of Chicago); Yuan Su (Microsoft)[abstract]Abstract: The efficient implementation of matrix arithmetic operations underpins the speedups of many quantum algorithms. We present a suite of methods to do matrix arithmetics---with the result encoded in the off-diagonal blocks of a Hamiltonian---using Hamiltonian evolutions of input operators. We show how to maintain this Hamiltonian block encoding, so that matrix operations can be composed one after another, and the entire computation takes $\leq 2$ ancilla qubits. We achieve this for matrix multiplication, matrix addition, matrix inversion, Hermitian conjugation, fractional scaling, integer scaling, complex phase scaling, as well as singular value transformation for both odd and even polynomials. We also develop an overlap estimation algorithm to extract classical properties of Hamiltonian block encoded operators, analogous to the well-known Hadamard test, at no extra qubit cost. Our Hamiltonian matrix multiplication uses the Lie group commutator product formula and its higher-order generalizations due to Childs and Wiebe. Our Hamiltonian singular value transformation employs a dominated polynomial approximation, wherein the approximation holds over the domain of interest, while the polynomial is upper bounded by the target function on the entire unit interval. When applied to quantum simulation, our methods inherit the commutator scaling of product formulas, while leveraging the power of matrix arithmetics to reduce the cost of each step. To illustrate this feature, we present a circuit for simulating a class of sum-of-squares Hamiltonians, covering systems studied recently in quantum chemistry. Our simulation attains a commutator scaling in step count, while the gate cost per step remains comparable to that of more advanced algorithms. We achieve this with $1$ ancilla qubit.
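The Lie group commutator product formula mentioned in the abstract above can be checked numerically in its simplest form: $e^{sA}e^{sB}e^{-sA}e^{-sB} = e^{s^2[A,B]} + O(s^3)$. The snippet below is our own sanity check of that formula with small dense matrices, not the paper's Hamiltonian block-encoding construction:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(n):
    """Random Hermitian matrix with O(1) entries."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def group_commutator(A, B, s):
    """e^{sA} e^{sB} e^{-sA} e^{-sB}, which approximates e^{s^2 [A,B]}."""
    return expm(s * A) @ expm(s * B) @ expm(-s * A) @ expm(-s * B)

A, B = random_hermitian(4), random_hermitian(4)
comm = A @ B - B @ A  # anti-Hermitian for Hermitian A, B

def error(s):
    """Spectral-norm deviation from the commutator exponential."""
    return np.linalg.norm(group_commutator(A, B, s) - expm(s**2 * comm), 2)

# Third-order error: halving s should shrink the error superquadratically.
e1, e2 = error(1e-2), error(5e-3)
```

Halving $s$ shrinks the deviation by roughly $2^3$, confirming the third-order error that makes the formula useful for implementing products of (block-encoded) operators via Hamiltonian evolutions.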
- The Complexity of Thermalization in Finite Quantum SystemsDhruv Devulapalli (University of Maryland, College Park); Timothy Connor Mooney (University of Maryland, College Park); James Watson (University of Maryland, College Park, Google Quantum AI)[abstract]Abstract: Thermalization is the process through which a physical system evolves toward a state of thermal equilibrium. Determining whether or not a physical system will thermalize from an initial state has been a key question in condensed matter physics. Closely related questions are determining whether observables in these systems relax to stationary values, and what those values are. Using tools from computational complexity theory, we demonstrate that given a Hamiltonian on a finite-sized system, determining whether or not it thermalizes or relaxes to a given stationary value is computationally intractable, even for a quantum computer. In particular, we show that the problem of determining whether an observable of a finite-sized quantum system relaxes to a given value is PSPACE-complete, and so no efficient algorithm for determining the value is expected to exist. Further, we show the existence of Hamiltonians for which the problem of determining whether the system thermalizes to the Gibbs expectation value is PSPACE-complete. We also show that the related problem of determining whether the system thermalizes to the microcanonical expectation value is contained in PSPACE and is PSPACE-hard under quantum polynomial time reductions. In light of recent results demonstrating undecidability of thermalization in the thermodynamic limit, our work shows that the intractability of the problem is due to inherent difficulties in many-body physics rather than particularities of infinite systems.
- A log-depth in-place quantum Fourier transform that rarely needs ancillasGregory D. Kahanamoku-Meyer (MIT); John Blue (MIT); Thiago Bergamaschi (UC Berkeley); Craig Gidney (Google); Isaac Chuang (MIT)[abstract]Abstract: When designing quantum circuits for a given unitary, it can be much cheaper to achieve a good approximation on most inputs than on all inputs. In this work we formalize this idea, and propose that such "optimistic quantum circuits" are often sufficient in the context of larger quantum algorithms. For the rare algorithm in which a subroutine needs to be a good approximation on all inputs, we provide a reduction which transforms optimistic circuits into general ones. Applying these ideas, we build an optimistic circuit for the in-place quantum Fourier transform (QFT). Our circuit has depth O(log(n/ϵ)) for tunable error parameter ϵ, uses n total qubits, i.e. no ancillas, is local for input qubits arranged in 1D, and is measurement-free. The circuit's error is bounded by ϵ on all input states except an ϵ-sized fraction of the Hilbert space. The circuit is also rather simple and thus may be practically useful. Combined with recent QFT-based fast arithmetic constructions, the optimistic QFT yields factoring circuits of nearly linear depth using only 2n + O(n/log n) total qubits. Additionally, we apply our reduction technique to yield an approximate QFT with well-controlled error on all inputs; it is the first to achieve the asymptotically optimal depth of O(log (n/ϵ)) with a sublinear number of ancilla qubits. The reduction uses long-range gates but no measurements.
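For reference, the exact in-place QFT that such approximate constructions target is the textbook circuit of Hadamards and controlled phase rotations followed by a qubit-order reversal, which has $O(n^2)$ gates before any approximation. The snippet below is our own illustration of that baseline circuit (unrelated to the paper's optimistic construction), verified against the DFT matrix:

```python
import numpy as np

def qft_circuit(n):
    """Textbook in-place QFT: H and controlled-phase gates, then bit reversal.
    Qubit 0 is the most significant bit."""
    dim = 2 ** n
    U = np.eye(dim, dtype=complex)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    def single(gate, q):
        ops = [np.eye(2)] * n
        ops[q] = gate
        full = ops[0]
        for op in ops[1:]:
            full = np.kron(full, op)
        return full

    def cphase(ctrl, targ, theta):
        full = np.eye(dim, dtype=complex)
        for b in range(dim):
            bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
            if bits[ctrl] and bits[targ]:
                full[b, b] = np.exp(1j * theta)
        return full

    for j in range(n):
        U = single(H, j) @ U
        for k in range(j + 1, n):
            U = cphase(k, j, np.pi / 2 ** (k - j)) @ U

    # final qubit-order (bit) reversal
    perm = np.zeros((dim, dim))
    for b in range(dim):
        rev = int(format(b, f'0{n}b')[::-1], 2)
        perm[rev, b] = 1
    return perm @ U

n = 3
dim = 2 ** n
dft = np.array([[np.exp(2j * np.pi * j * k / dim) for k in range(dim)]
                for j in range(dim)]) / np.sqrt(dim)
err = np.linalg.norm(qft_circuit(n) - dft)
```

The depth and ancilla savings claimed above are relative to approximations of exactly this unitary.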
- A New Quantum Linear System Algorithm Beyond the Condition Number and Its Application to Solving Multivariate Polynomial SystemsJianqiang Li (Rice University)[abstract]Abstract: Given a matrix $A$ of dimension $M\times N$ and a vector $\vec{b}$, the quantum linear system (QLS) problem asks for the preparation of a quantum state $\ket{\vec{y}}$ proportional to the solution of $A\vec{y} = \vec{b}$. Existing QLS algorithms typically have runtimes that scale linearly with the condition number $\kappa(A)$, the sparsity of $A$, and logarithmically with the inverse precision. However, these algorithms often overlook structural properties of the input vector $\vec{b}$, despite the fact that its alignment with the eigenspaces of $A$ can significantly affect performance. In this work, we present a new QLS algorithm that explicitly leverages the structure of the right-hand side vector $\vec{b}$. Let the sparsity of a matrix be defined as the maximum number of nonzero entries in any row or column. The runtime of our algorithm depends polynomially on the sparsity $s$ of the augmented matrix $H = [A , -\vec{b}]$, the inverse precision, the $\ell_2$ norm of the solution $\vec{y} = A^+ \vec{b}$, and a new instance-dependent parameter $$ ET = \sum_{i=1}^M p_i^2 \cdot d_i, $$ where $\vec{p} = (AA^{\top})^+ \vec{b}$, and $d_i$ denotes the squared $\ell_2$-norm of the $i$-th row of $H$. To further reduce the runtime for certain applications, we introduce a structure-aware rescaling technique tailored to the solution $\vec{y} = A^+ \vec{b}$. Unlike \emph{left} preconditioning methods, which transform the system to $DA\vec{z} = D\vec{b}$, our approach applies a \emph{right} rescaling matrix, reformulating the linear system as $A D \vec{z} = \vec{b}$. This combination of an instance-aware QLS algorithm and a rescaling strategy reopens the possibilities for achieving superpolynomial quantum speedups in various domains, such as nonlinear differential equations and polynomial systems. 
As an application, we develop a new quantum algorithm for solving multivariate polynomial systems in regimes where previous QLS-based methods fail. Our results yield a new end-to-end algorithmic framework, grounded in our new QLS algorithm, that applies to a broad class of problems. In particular, we apply our approach to the maximum independent set (MIS) problem, formulated as a special case of a polynomial system. Given a graph, the MIS problem asks for finding the largest subset of vertices such that no two vertices in the subset share an edge. We provide a detailed runtime analysis and show that, under certain conditions, our quantum algorithm for the MIS problem runs in polynomial time. While no classical algorithms have been developed under these conditions, a promising feature of our quantum algorithm is that its runtime explicitly depends on the structure of the independent sets in the input graph.
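For concreteness, the MIS problem targeted above can be pinned down in a few lines of brute force. This is purely to fix the definition (exponential time, and in no way the quantum algorithm of the paper):

```python
from itertools import combinations

def max_independent_set(n, edges):
    """Brute-force maximum independent set on vertices 0..n-1.
    Exponential time; for illustrating the definition only."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):          # try largest subsets first
        for subset in combinations(range(n), size):
            independent = all(frozenset((u, v)) not in edge_set
                              for u, v in combinations(subset, 2))
            if independent:
                return set(subset)
    return set()

# 5-cycle: the maximum independent set has size 2
mis = max_independent_set(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```

The paper's point is that the runtime of the quantum algorithm depends on the structure of such independent sets in the input graph, not merely on $n$.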
- Quantum algorithms for Uhlmann transformationTakeru Utsumi (The University of Tokyo); Yoshifumi Nakata (Kyoto University); Qisheng Wang (The University of Edinburgh); Ryuji Takagi (The University of Tokyo)[abstract]Abstract: Uhlmann's theorem is a central result in quantum information theory, associating the closeness of two quantum states with that of their purifications. This theorem characterizes the fundamental task of transforming a quantum state into another state via local operations on its subsystem. The optimal transformation for this task is called the Uhlmann transformation, which has broad applications in various fields; however, its quantum circuit implementation and computational cost have remained unclear. In this work, we fill this gap by proposing quantum query and sample algorithms that realize the Uhlmann transformation in the form of quantum circuits. These algorithms achieve exponential improvements in computational costs, including query and sample complexities, over naive approaches based on state measurements such as quantum state tomography, under certain computational models. We apply our algorithms to the square root fidelity estimation task and particularly show that our approach attains a better query complexity than the prior state-of-the-art. Furthermore, we discuss applications to several information-theoretic tasks, specifically, entanglement transmission, quantum state merging, and algorithmic implementation of the Petz recovery map, providing a comprehensive evaluation of the computational costs.
- Quantum error correction beyond SU(2): spin, permutation-invariant, and bosonic codes from convex geometryArda Aydin (University of Maryland); Victor Albert (NIST and U. Maryland); Alexander Barg (University of Maryland)[abstract]Abstract: We study relationships between permutation-invariant, bosonic Fock-state, and spin codes, which arise in different physical systems but exhibit close mathematical affinity. We show that, starting with classical $\ell_1$ codes, it is possible to construct qudit permutationally invariant (PI) codes of arbitrary dimension, spin codes, and Fock state codes, called collectively SU(q) codes. To maintain control of the code parameters in this transition, we rely on a classic result from convex geometry known as Tverberg's theorem. Constructing $\ell_1$ codes based on combinatorial patterns called Sidon sets and utilizing their Tverberg partitions, we obtain new families of SU(q) codes with distance that scales almost linearly with the code length N. This improves upon the existing designs for all three code families and yields a conceptually new framework for constructing spin codes. We further present explicit constructions of SU(2) codes with shorter length or lower total spin/excitation than the known codes with similar parameters, new bosonic codes with exotic Gaussian gates, as well as examples of some short codes with distance larger than the known constructions.
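A Sidon set, the combinatorial pattern underlying the construction above, is a set of integers whose pairwise sums are all distinct. A minimal sketch of the defining check (the function name is ours, not from the paper):

```python
from itertools import combinations_with_replacement

def is_sidon(s):
    """A Sidon set has all pairwise sums a + b (with a <= b) distinct."""
    sums = [a + b for a, b in combinations_with_replacement(sorted(s), 2)]
    return len(sums) == len(set(sums))

print(is_sidon({1, 2, 5, 11}))  # → True: a classic Sidon set
print(is_sidon({1, 2, 3, 4}))   # → False: 1 + 4 = 2 + 3
```

How Tverberg partitions of such sets then control the SU(q) code parameters is the technical content of the paper itself.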
- Efficient Quantum Hermite TransformSiddhartha Jain (UT Austin, Google Quantum AI); Vishnu Iyer (UT Austin); Rolando Somma (Google Quantum AI); Ning Bao (Northeastern, Brookhaven National Laboratory); Stephen Jordan (Google Quantum AI)[abstract]Abstract: We present a new primitive for quantum algorithms that implements a discrete Hermite transform efficiently, in time that depends logarithmically on both the dimension and the inverse of the allowable error. This transform, which maps basis states to states whose amplitudes are proportional to the Hermite functions, can be interpreted as the Gaussian analogue of the Fourier transform. Our algorithm is based on a method to exponentially fast-forward the evolution of the quantum harmonic oscillator, which significantly improves over prior art. We apply this Hermite transform to give examples of provable quantum query advantage in property testing and learning. In particular, we show how to efficiently test the property of being close to low-degree in the Hermite basis when inputs are sampled from the Gaussian distribution, and how to solve a Gaussian analogue of the Goldreich-Levin learning task efficiently. We also comment on other potential uses of this transform for simulating time dynamics of quantum systems in the continuum.
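The Hermite functions referenced above are the orthonormal eigenfunctions of the quantum harmonic oscillator; classically they can be tabulated by a standard stable three-term recurrence. A small illustration of the functions themselves, not of the quantum algorithm (names are ours):

```python
import numpy as np

def hermite_functions(n_max, x):
    """Orthonormal Hermite functions psi_0..psi_{n_max} on grid x, via the
    stable recurrence
      psi_{n+1} = sqrt(2/(n+1)) * x * psi_n - sqrt(n/(n+1)) * psi_{n-1},
    starting from psi_0 = pi^{-1/4} exp(-x^2/2)."""
    psi = np.zeros((n_max + 1, len(x)))
    psi[0] = np.pi ** -0.25 * np.exp(-x**2 / 2)
    if n_max >= 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for n in range(1, n_max):
        psi[n + 1] = (np.sqrt(2.0 / (n + 1)) * x * psi[n]
                      - np.sqrt(n / (n + 1)) * psi[n - 1])
    return psi

x = np.linspace(-10, 10, 4001)
psi = hermite_functions(5, x)
gram = psi @ psi.T * (x[1] - x[0])   # discretized inner products
print(np.allclose(gram, np.eye(6), atol=1e-6))  # → True: orthonormal
```

The quantum primitive prepares states whose amplitudes follow these functions with cost logarithmic in the grid size, which is exponentially cheaper than this classical tabulation.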
- merged with #732:Optimal lower bounds for quantum state tomographyThilo Scharnhorst (UC Berkeley); Jack Spilecki (UC Berkeley); John Wright (UC Berkeley)[abstract]Abstract: We show that $n = \Omega(rd/\epsilon^2)$ copies are necessary to learn a rank $r$ mixed state $\rho \in \C^{d \times d}$ up to error~$\epsilon$ in trace distance. This matches the upper bound of $n = O(rd/\epsilon^2)$ from~\cite{OW16} and therefore settles the sample complexity of mixed state tomography. We prove this lower bound by studying a special case of full state tomography that we refer to as \emph{projector tomography}, in which $\rho$ is promised to be of the form $\rho = P/r$, where $P \in \C^{d \times d}$ is a rank $r$ projector. A key technical ingredient in our proof, which may be of independent interest, is a reduction which converts any algorithm for projector tomography which learns to error $\epsilon$ in trace distance to an algorithm which learns to error $O(\epsilon)$ in the more stringent Bures distance.The debiased Keyl's algorithm: a new unbiased estimator for full state tomographyAngelos Pelecanos (UC Berkeley); Jack Spilecki (UC Berkeley); John Wright (UC Berkeley)[abstract]Abstract: In the problem of quantum state tomography, one is given $n$ copies of an unknown rank-$r$ mixed state $\rho \in \C^{d \times d}$ and asked to produce an estimator $\widehat{\rho}$ of $\rho$ which is close to it. One of the most basic and useful properties a statistical estimator can have is that of being \emph{unbiased}, meaning that $\widehat{\rho}$ is equal to the true state $\rho$ in expectation. Unfortunately, of the three estimators for tomography which are known to be either sample-optimal or almost sample-optimal, namely Keyl's algorithm~\cite{Key06} and the two tomography algorithms from~\cite{HHJ+16}, none of them produce unbiased estimators. This situation has recently been highlighted as a bottleneck in several important tomographic applications~\cite{CLL24a,CLL24b}. 
In this work, we present the \emph{debiased Keyl's algorithm}, the first estimator for full state tomography which is both unbiased and sample-optimal. Our algorithm is the result of taking Keyl's algorithm and carefully modifying it in order to correct for its bias. In addition, we derive an explicit formula for the second moment of our estimator which is simple and easy to use. Using it, we show the following applications. \begin{itemize} \item[$\circ$] We show that $n = O(rd/\epsilon^2)$ copies are sufficient to learn a rank-$r$ mixed state $\rho \in \C^{d \times d}$ to trace distance error $\epsilon$. This gives a new proof of the main result of~\cite{OW16} and matches the lower bound of~\cite{SSW25}. \item[$\circ$] We then improve this result and show that $n = O(rd/\epsilon^2)$ copies are sufficient to learn to error $\epsilon$ in the more challenging Bures distance. This is optimal due to the lower bound of~\cite{Yue23} and improves on the best known bound of $n = \widetilde{O}(rd/\epsilon^2)$ from prior work~\cite{HHJ+16,OW17a,FO24}. \item[$\circ$] We consider the scenario of full state tomography with limited entanglement, in which one is given $n$ copies of $\rho$ but only allowed to perform entangled measurements across $k$ of them at a time. We show that \begin{equation*} n =O\left(\max \left(\frac{d^3}{\sqrt{k}\epsilon^2}, \frac{d^2}{\epsilon^2} \right) \right) \end{equation*} copies of $\rho$ suffice to learn $\rho$ to trace distance error $\epsilon$ in this scenario. This improves on the prior work of \cite{CLL24a} and matches their lower bound which holds for $k \leq 1/\epsilon^c$, where $c$ is a constant. \item[$\circ$] In the problem of shadow tomography, one is given $m$ observables $O_1, \ldots, O_m$, and the goal is to estimate each quantity $\tr(O_i \cdot \rho)$ up to $\epsilon$ error. 
If $\Vert O_i \Vert_{\infty} \leq 1$ for all $i$, we show that $O(\log(m)/\epsilon^2)$ copies are sufficient to solve this task in the ``high accuracy regime'' when $\epsilon = O(1/d)$. This improves a result of~\cite{CLL24b}, who showed this bound holds when $\epsilon = O(1/d^{12})$. More generally, we show that if $\tr(O_i^2) \leq F$ for all $i$, then \begin{equation*} n = O\Big(\log(m) \cdot \Big(\min\Big\{\frac{\sqrt{r F}}{\epsilon}, \frac{F^{2/3}}{\epsilon^{4/3}}\Big\} + \frac{1}{\epsilon^2}\Big)\Big) \end{equation*} copies suffice to solve this task. This improves on a result of~\cite{GLM24}, who showed a bound of $n = O(\log(m) \cdot \sqrt{rF}/\epsilon^2)$, and disproves a conjecture of~\cite{CLL24b}. \item[$\circ$] Our final application is to the field of quantum metrology. In quantum metrology, one is provided copies of a mixed state $\rho_{\theta}$ parameterized by a vector $\theta \in \R^n$, and the goal is to estimate the parameter $\theta$. An algorithm is \emph{locally unbiased} at a parameter $\theta^*$ if, roughly, it is unbiased given $\rho_{\theta}$ when $\theta$ is in a small neighborhood around $\theta^*$. The quantum Cramér–Rao bound states that the mean squared error matrix (MSEM) of any locally unbiased algorithm is lower bounded by the inverse of the Quantum Fisher Information (QFI) matrix. We give a locally unbiased algorithm whose MSEM is \emph{upper bounded} by twice the inverse of the QFI matrix in the asymptotic limit of large $n$, which is optimal. \end{itemize}
- Fast-forwardable Lindbladians imply quantum phase estimationZhong-Xia Shang (University of Hong Kong); Naixu Guo (National University of Singapore); Patrick Rebentrost (National University of Singapore); Alán Aspuru-Guzik (University of Toronto); Tongyang Li (Peking University); Qi Zhao (University of Hong Kong)[abstract]Abstract: Quantum phase estimation (QPE) and Lindbladian dynamics are both foundational in quantum information science and central to quantum algorithm design. In this work, we bridge these two concepts: certain simple Lindbladian processes can be adapted to perform QPE-type tasks. However, unlike QPE, which achieves Heisenberg-limit scaling, these Lindbladian evolutions are restricted to standard quantum limit complexity. This indicates that, different from Hamiltonian dynamics, the natural dissipative evolution speed of such Lindbladians does not saturate the fundamental quantum limit, thereby suggesting the potential for quadratic fast-forwarding. We confirm this by presenting a quantum algorithm that simulates these Lindbladians for time $t$ within an error $\varepsilon$ using $\mathcal{O}\left(\sqrt{t\log(\varepsilon^{-1})}\right)$ cost. This, to our knowledge, is the first example of Lindbladian fast-forwarding, which shares a fundamentally different mechanism from the fast-forwarding examples of Hamiltonian dynamics. As a bonus, this fast-forwarded simulation naturally serves as a new Heisenberg-limit QPE algorithm. Therefore, our work explicitly bridges the transition from the standard quantum limit to the Heisenberg limit with the fast-forwarding of dissipative dynamics. We also adapt our fast-forwarding algorithm for efficient Gibbs state preparation and demonstrate a counter-intuitive implication: it permits a quadratically accelerated decoherence effect under arbitrary Pauli noise.
- Polynomial-time tolerant testing stabilizer statesSrinivasan Arunachalam (IBM Quantum); Arkopal Dutt (IBM Quantum)[abstract]Abstract: We consider the following task: suppose an algorithm is given copies of an unknown $n$-qubit quantum state $\ket{\psi}$ promised that either $(i)$ $\ket{\psi}$ is $\varepsilon_1$-close to a stabilizer state in fidelity or $(ii)$ $\ket{\psi}$ is $\varepsilon_2$-far from all stabilizer states; decide which is the case. We show that for every $\varepsilon_1>0$ and $\varepsilon_2\leq \varepsilon_1^C$, there is a $\poly(1/\varepsilon_1)$-sample and $n\cdot \poly(1/\varepsilon_1)$-time algorithm that decides which is the case (where $C>1$ is a universal constant). Our proof includes a new definition of Gowers norm for quantum states, an inverse theorem for the Gowers-$3$ norm of quantum states and new bounds on stabilizer covering for structured subsets of Paulis using results in additive combinatorics.
- Approximate Quantum Error Correction with 1D Log-Depth CircuitsGuoding Liu (Tsinghua University); Zhenyu Du (Tsinghua University); Zi-Wen Liu (Tsinghua University); Xiongfeng Ma (Tsinghua University)[abstract]Abstract: Efficient and high-performance quantum error correction is essential for achieving fault-tolerant quantum computing. Low-depth random circuits offer a promising approach to identifying effective and practical encoding strategies. In this work, we rigorously prove through information-theoretic analysis that one-dimensional logarithmic-depth random Clifford encoding circuits can achieve high quantum error correction performance. We demonstrate that these random codes typically exhibit good approximate quantum error correction capability by proving that their encoding rate achieves the hashing bound for Pauli noise and the channel capacity for erasure errors. We show that the error correction inaccuracy decays once a threshold of logarithmic depth is exceeded, resulting in negligible recovery errors. This threshold is shown to be lower than that of the simple separate block encoding, and the decay rate is higher. We further establish that these codes are optimal by proving that logarithmic depth is necessary to maintain a constant encoding rate and high error correction performance. To prove our results, we propose new decoupling theorems for one-dimensional low-depth circuits. These results also imply strong decoupling and rapid thermalization properties in low-depth random circuits and have potential applications in quantum information science and physics.
- Layer codes as partially self-correcting quantum memoriesShouzhen Gu (Yale University); Libor Caha (Technical University of Munich); Shin Ho Choe (IQM Quantum Computers); Zhiyang He (Massachusetts Institute of Technology); Aleksander Kubica (Yale University); Eugene Tang (Northeastern University)[abstract]Abstract: We investigate layer codes, a family of three-dimensional stabilizer codes that can achieve optimal scaling of code parameters and a polynomial energy barrier, as candidates for self-correcting quantum memories. First, we introduce two decoding algorithms for layer codes with provable guarantees for local stochastic and adversarial noise, respectively. We then prove that layer codes are partially self-correcting quantum memories. With memory times scaling exponentially in the linear size of the system, layer codes outperform the previously demonstrated subexponential scaling of the welded solid code. Notably, we argue that partial self-correction without the requirement of efficient decoding is more common than expected, as it arises from a diverging energy barrier. This draws a sharp distinction between partially self-correcting systems and partially self-correcting memories. Another novel aspect of our work is an analysis of layer codes constructed from random Calderbank–Shor–Steane codes. We show that these random layer codes have optimal scaling (up to logarithmic corrections) of code parameters and a polynomial energy barrier. Finally, we present numerical studies of their memory times and report behavior consistent with partial self-correction.
- Inverse Nonlinear Fast Fourier Transform: Closing A Chapter in Quantum Signal ProcessingHongkang Ni (Stanford University); Rahul Sarkar (University of California, Berkeley); Lexing Ying (Stanford University); Lin Lin (University of California, Berkeley)[abstract]Abstract: The nonlinear Fourier transform (NLFT) extends the classical Fourier transform by replacing addition with matrix multiplication. While the NLFT on $\mathrm{SU}(1,1)$ has been widely studied, its $\mathrm{SU}(2)$ variant has only recently attracted attention due to emerging applications in quantum signal processing (QSP) and quantum singular value transformation (QSVT). In this paper, we investigate the inverse NLFT on $\mathrm{SU}(2)$ and establish the numerical stability of the layer stripping algorithm for the first time under suitable conditions. Furthermore, we develop a fast and numerically stable algorithm, called inverse nonlinear fast Fourier transform, for performing inverse NLFT with near-linear complexity. This algorithm is applicable to computing phase factors for both QSP and the generalized QSP (GQSP).
- Batched high-rate logical operations for quantum LDPC codesQian Xu (California Institute of Technology); Hengyun Zhou (QuEra Computing Inc.); Dolev Bluvstein (Harvard University); Madelyn Cain (Harvard University); Marcin Kalinowski (Harvard University); John Preskill (California Institute of Technology); Mikhail D. Lukin (Harvard University); Nishad Maskara (Harvard University)[abstract]Abstract: High-rate quantum LDPC (qLDPC) codes reduce space overhead by densely packing many logical qubits into a single block of physical qubits. Here we extend such savings to computation by constructing batched fault-tolerant operations that apply the same logical gate across many code blocks in parallel. By leveraging shared physical resources to execute many logical operations in parallel, these operations realize high rates in space-time and significantly reduce computational costs. For arbitrary CSS qLDPC codes, we build batched gadgets with constant space-time overhead for (i) single-shot error correction and state preparation, (ii) code switching, and (iii) addressable Clifford gates. Using these batched gadgets we also construct parallel non-Clifford gates with low space-time cost. We outline principles for designing parallel quantum algorithms optimized for a batched architecture, and show in particular how lattice Hamiltonian dynamical simulations can be compiled efficiently. We also propose a near-term–friendly implementation using new self-dual Bivariate-Bicycle codes with high encoding rates (∼ 1/10), transversal Clifford gates, and global T gates, enabling Hamiltonian simulations with a lower space-time cost than analogous surface-code protocols and low-rate qLDPC protocols. These results open new paths toward scalable quantum computation via co-design of parallel quantum algorithms and high-rate fault-tolerant protocols.
- A Dobrushin condition for quantum Markov chains: Rapid mixing and conditional mutual information at high temperatureAinesh Bakshi (MIT); Allen Liu (MIT); Ankur Moitra (MIT); Ewin Tang (UC Berkeley)[abstract]Abstract: A central challenge of quantum physics is to understand the structural properties of many-body systems, both in equilibrium and out of equilibrium. For classical systems, we have a unified perspective which connects structural properties of systems at thermal equilibrium to the Markov chain dynamics which mix to them. We lack such a perspective for quantum systems: many of the most fundamental ideas of the modern classical theory are notably absent from our quantum toolkit. We develop a theory which brings the broad scope and flexibility of the classical theory to quantum Gibbs states at high temperature. At its core is a natural quantum analogue of Dobrushin’s condition; whenever this condition holds, a concise path-coupling argument proves rapid mixing for the corresponding Markovian evolution. The same machinery bridges dynamic and structural properties: rapid mixing yields exponential decay of CMI without restrictions on the size of the probed subsystems, resolving a central question in the theory of open quantum systems. Our key technical insight is an optimal transport viewpoint which couples the quantum dynamics to a linear differential equation, enabling precise control over how local deviations from equilibrium propagate to distant sites.
- Free mutual information and ergodicity in operator spaceShreya Vardhan (Caltech); Jinzhao Wang (Stanford University)[abstract]Abstract: We introduce a quantity called the free mutual information (FMI), adapted from concepts in free probability theory, as a new physical measure of quantum chaos. This quantity captures the spreading of a time-evolved operator in the space of all possible operators in the Hilbert space, which is doubly exponential in the number of degrees of freedom. It thus provides a finer notion of operator spreading than the well-understood phenomenon of operator growth in physical space. We derive two central results which apply in any physical system: first, an explicit ``Coulomb gas'' formula for the FMI in terms of the eigenvalues of the product operator $A(t)B$; and second, a general relation expressing the FMI as a weighted sum of all higher-point out-of-time-ordered correlators (OTOCs). This second result provides a precise information-theoretic interpretation for the higher-point OTOCs as collectively quantifying operator ergodicity and the approach to freeness. This physical interpretation is particularly useful in light of recent progress in experimentally measuring higher-point OTOCs. We identify universal behaviours of the FMI and higher-point OTOCs in a variety of chaotic systems, including random unitary circuits and chaotic spin chains, which indicate that spreading in the doubly exponential operator space is a generic feature of quantum many-body chaos. At the same time, the non-generic behavior of the FMI in various non-chaotic systems, including certain unitary designs, shows that there are cases where an operator spreads in physical space but remains localized in operator space. The FMI is thus a sharper diagnostic of chaos than the standard 4-point OTOC.
- merged with #824:An infinite hierarchy of multi-copy quantum learning tasksJan Nöller (TU Darmstadt); Viet Tran (JKU Linz); Mariami Gachechiladze (TU Darmstadt); Richard Kueng (JKU Linz)[abstract]Abstract: Learning properties of quantum states from measurement data is a fundamental challenge in quantum information. The sample complexity of such tasks depends crucially on the measurement primitive. While shadow tomography achieves sample-efficient learning by allowing entangling measurements across many copies, it requires prohibitively deep circuits. At the other extreme, two-copy measurements already yield exponential advantages over single-copy strategies in tasks such as Pauli tomography. In this work we show that such sharp separations extend far beyond the two-copy regime: for every prime k we construct explicit learning tasks of degree k, which are exponentially hard with (k − 1)-copy measurements but efficiently solvable with k-copy measurements. Our protocols are not only sample-efficient but also realizable with shallow circuits. Extending further, we show that such finite-degree tasks exist for all square-free integers k, pointing toward a general principle underlying their existence. Together, our results reveal an infinite hierarchy of multi-copy learning problems, uncovering new phase transitions in sample complexity and underscoring the role of reliable quantum memory as a key resource for exponential quantum advantage.Exponential Advantage from One More Replica in Estimating Nonlinear Properties of Quantum StatesQi Ye (Tsinghua University); Dong-Ling Deng (Tsinghua University); Zhenhuan Liu (Tsinghua University)[abstract]Abstract: Estimating nonlinear properties of quantum states, such as $\mathrm{tr}(\rho^{k} O)$, is fundamental in quantum information but challenging due to the intrinsic linearity of quantum mechanics. 
Typical approaches such as the generalized swap test rely on joint access to $k$ replicas to convert nonlinear functions in $\rho$ into linear functions in $\rho^{\otimes k}$. In this work, we prove that this conversion is not only sufficient but also necessary: any protocol that can only perform $(k-1)$-replica joint measurements requires exponentially many samples to estimate $\mathrm{tr}(\rho^{k}O)$ with nonzero $\mathrm{tr}(O)$, while $k$-replica joint measurements allow efficient estimation. This establishes, for the first time, an exponential separation between $(k-1)$- and $k$-replica protocols for general $k$, thereby defining a fine-grained hierarchy for replica quantum advantage and solving an open question in the literature. The workhorse of our proofs is a general indistinguishability principle showing that any ensemble assembled from Haar-random states is hard to distinguish from its average. We also leverage this principle in spectrum testing, proving that $k$-replica joint measurements are also necessary to efficiently distinguish two spectra that match on all moments up to degree $k-1$. Our work draws sharp boundaries on the power of joint measurements, shedding light on resource-complexity tradeoffs in quantum learning theory.
- Parallel Spooky Pebble Games and Regev's Factoring AlgorithmGregory D. Kahanamoku-Meyer (MIT); Seyoon Ragavan (MIT); Katherine Van Kirk (Harvard)[abstract]Abstract: We define and study \emph{parallel spooky pebble games} in the context of carrying out an inherently sequential computation on a quantum computer, where reversibility is a concern. Previous work by Blocki, Holman, and Lee (TCC 2022) and by Gidney studied the benefits offered by parallelism and spookiness, respectively; we show that these two resources can enable even more efficient guarantees when utilized in tandem. Specifically, we show that with parallelism and spookiness, a line graph of length $\ell$ can be pebbled in time $2\ell$ (which is exactly optimal) and space $\leq 2.47\log \ell$. We show that these techniques can be applied to Regev's factoring algorithm (Journal of the ACM 2025) to obtain significant concrete improvements on previous implementations of the underlying arithmetic (Ragavan and Vaikuntanathan, CRYPTO 2024). For example, we can factor 4096-bit integers $N$ using a mod $N$ multiplication depth of 328, which outperforms the 680 required of previous variants of Regev and the 444 reported by Eker{\aa} and G{\"a}rtner (IACR Communications in Cryptology 2025) for Shor's algorithm. We believe our pebbling techniques will find applications in quantum cryptanalysis beyond integer factorization.
- Fault-tolerant protocols through spacetime concatenationYichen Xu (Cornell); Arpit Dua (Virginia Tech)[abstract]Abstract: We introduce a framework called spacetime concatenation for fault-tolerant compilation of syndrome extraction circuits of stabilizer codes. This framework enables efficient compilation of syndrome extraction circuits into dynamical codes through structured gadget layouts and encoding matrices, facilitating low-weight measurements while preserving logical information. The framework uses conditions that are sufficient for fault-tolerance of the dynamical code, including not measuring logical operators and preserving the spacetime distance. It goes beyond dynamical weight-reduction methods, which do not explicitly produce Floquet codes with nontrivial logical automorphisms. Using this fault-tolerant framework, we reproduce known Floquet codes such as the honeycomb Floquet code and the ruby lattice Floquet color code with nontrivial automorphisms and a fault-tolerant planar variant of the Floquet toric code. We also construct new explicit examples of dynamical codes, including a dynamical bivariate bicycle code and a dynamical Haah code. Beyond constructing examples, we define a restricted equivalence relation which can be used to discuss classification and resource trade-offs of dynamical codes. Lastly, we demonstrate the adaptability of our framework to arbitrary qubit layouts and fabrication defects, making it well-suited for a hardware-first design approach.
- Hamiltonians and random unitariesLaura Cui (Caltech); Liang Mao (Tsinghua University and Caltech); Fernando Brandao (AWS Center for Quantum Computing and Caltech); Hsin-Yuan Huang (Caltech and Google); Thomas Schuster (Caltech and Google)[abstract]Abstract: Haar-random unitaries are fundamental mathematical tools for understanding quantum many-body dynamics, yet they fail to obey basic physical constraints imposed by Hamiltonians. In this work, we explore how to reconcile random unitary models with physical constraints imposed by Hamiltonians, addressing the central question: Can we efficiently generate random unitaries while obeying Hamiltonian constraints? First, we consider the role of energy conservation in the setting where the Hamiltonian $H$ is completely known. There, we show that energy-conserving pseudorandom unitaries (PRUs) exist for random local commuting Hamiltonians, assuming quantum-secure one-way functions exist. However, we also prove that energy-conserving PRUs do not exist for some local translation-invariant Hamiltonians, even in one-dimensional systems. Furthermore, we show that determining whether energy-conserving PRUs exist for a family of Hamiltonians is undecidable. Second, we consider Hamiltonian time dynamics itself when there is incomplete knowledge of $H$. In this setting, we prove that random unitaries $e^{-iHt}$ generated from any ensemble of constant-local Hamiltonians $H$ cannot form approximate unitary designs or PRUs. This barrier vanishes when we relax locality: we construct an ensemble of polylog-local Hamiltonians $H$ that generates short-time dynamics which form both a unitary design and a PRU. Our results reveal fundamental computational barriers emerging from energy conservation constraints, highlighting the tension between common models of ergodicity and the structure of physical dynamics.
- Haar random codes attain the quantum Hamming bound, approximatelyFermi Ma (UC Berkeley); Xinyu Tan (MIT); John Wright (UC Berkeley)[abstract]Abstract: We study the error correcting properties of Haar random codes, in which a $K$-dimensional code space $\mathcal{C} \subseteq \C^N$ is chosen at random from the Haar distribution. Our main result is that Haar random codes can approximately correct errors up to the quantum Hamming bound, meaning that a set of $m$ Pauli errors can be approximately corrected so long as $mK \ll N$. This is the strongest bound known for any family of quantum error correcting codes (QECs), and continues a line of work showing that approximate QECs can significantly outperform exact QECs [LNCY97,CGS05,BGG24]. Our proof relies on a recent matrix concentration result of Bandeira, Boedihardjo, and van Handel [BBV23].
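For a rough sense of the $mK \ll N$ regime (our own back-of-the-envelope parameter counting, not taken from the paper): with $N = 2^n$, $K = 2^k$, and $m$ the number of $n$-qubit Pauli errors of weight at most $t$, namely $m = \sum_{j \le t} \binom{n}{j} 3^j$, one can tabulate the largest weight that fits under the bound:

```python
from math import comb

def pauli_errors_up_to_weight(n, t):
    """Number of n-qubit Pauli errors of weight <= t
    (3 nontrivial single-qubit Paulis per affected site)."""
    return sum(comb(n, j) * 3**j for j in range(t + 1))

def max_correctable_weight(n, k):
    """Largest t with m * K < N, i.e. all Pauli errors up to weight t
    fit under the (approximate) quantum Hamming bound m * 2^k < 2^n."""
    t = 0
    while pauli_errors_up_to_weight(n, t + 1) * 2**k < 2**n:
        t += 1
    return t

for n in (50, 100, 200):
    print(n, max_correctable_weight(n, k=1))
```

The paper's point is that Haar random codes approximately achieve this counting bound, whereas exact correction is subject to stricter limits.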
- Mechanisms for Quantum Advantage in Global Optimization of Nonconvex FunctionsDylan Herman (JPMorganChase); Guneykan Ozgul (JPMorganChase); Anuj Apte (JPMorganChase); Junhyung Lyle Kim (JPMorganChase); Anupam Prakash (JPMorganChase); Jiayu Shen (JPMorganChase); Shouvanik Chakrabarti (JPMorganChase)[abstract]Abstract: We introduce new theoretical mechanisms for quantum speedup in the global optimization of nonconvex functions, broadening the scope of quantum advantage beyond traditional tunneling-based explanations. By establishing a rigorous correspondence between the spectral properties of Schrödinger operators and classical Langevin diffusion, we identify regimes where a real-space adiabatic algorithm (RsAA) can significantly outperform classical stochastic gradient-based methods. Leveraging this connection and novel non-asymptotic versions of well-known semi-classical results, we provide polynomial (in the dimension $d$) runtime bounds for RsAA on rotated block-separable functions, and offer theoretical evidence that black-box classical algorithms require exponential time for optimization in these settings, thereby generalizing and formalizing prior work by Leng et al. (arXiv:2311.00811). While we can design certain specialized, structure-aware classical algorithms that can efficiently optimize these functions, we further show that the quantum ground state exhibits remarkable robustness to nontrivial perturbations. Leveraging recent advances in the study of the hypercontractivity of Schrödinger operators, we construct new families of nonconvex functions for which RsAA achieves polynomial-time optimization, whereas both off-the-shelf and structure-aware classical algorithms incur exponential computational costs. These results provide new insight into quantum-classical separations in nonconvex optimization and highlight tractable and general pathways to advantage in this setting.
- Quantum Relative Entropy Decay Composition Yields Shallow, Unstructured k-DesignsNicholas Laracuente (Indiana University)[abstract]Abstract: A major line of questions in quantum information and computing asks how quickly locally random circuits converge to resemble global randomness. In particular, approximate k-designs are random unitary ensembles that resemble Haar-random unitaries up to their first k moments. It was recently shown that on n qudits, random circuits with slightly structured architectures converge to k-designs in depth O(log n), even on one-dimensional connectivity. It has however remained open whether the same shallow depth applies more generally across random circuit architectures and connectivities, or if the structure is truly necessary. We recall the study of exponential relative entropy decay, another topic with a long history in quantum information theory. We show that a constant number of layers of a parallel random circuit on a family of architectures including one-dimensional `brickwork' has O(1/log n) per-layer multiplicative entropy decay. We further show that on general connectivity graphs of bounded degree, randomly placed gates achieve O(1/(n log n))-decay (consistent with log n depth convergence). Both of these results imply that random circuit ensembles with O(polylog(n)) average depth achieve k-designs in diamond norm. Hence our results address the question of whether extra structure is truly necessary for log-depth convergence. Furthermore, the relative entropy recombination techniques might be of independent interest.
- Average-Case Hardness and Reducibility of Decoding Quantum Stabilizer CodesAndrey Khesin (University of Oxford); Jonathan Lu (MIT); Alexander Poremba (Boston University); Yihui Quek (EPFL); Akshar Ramkumar (Caltech); Peter Shor (MIT); Vinod Vaikuntanathan (MIT)[abstract]Abstract: Random classical linear codes are widely believed to be hard to decode, exponentially so at constant coding rate. If the rate vanishes asymptotically sufficiently rapidly, slightly sub-exponential decoding algorithms are known. By contrast, the complexity of decoding a random quantum stabilizer code has remained an open question for quite some time. This work closes the gap in our understanding of the algorithmic hardness of decoding random quantum versus random classical codes. We prove that decoding a random stabilizer code with even a single logical qubit is at least as hard as decoding a random classical code at constant rate—the maximally hard regime. This result suggests that the easiest random quantum decoding problem is at least as hard as the hardest random classical decoding problem, and shows that any sub-exponential algorithm decoding a typical stabilizer code, at any coding rate, would immediately imply a breakthrough in cryptography. More generally, we also characterize many other complexity-theoretic properties of stabilizer codes. While classical decoding admits a random self-reduction, we prove significant barriers for the existence of random self-reductions in the quantum case. This result follows from new bounds on Clifford entropies and Pauli mixing times, which may be of independent interest. As a complementary result, we demonstrate various other self-reductions which are in fact achievable, such as between search and decision. 
Our work also demonstrates how quantum phenomena, such as degeneracy, force several reasonable definitions of stabilizer decoding—all of which are classically identical—to have distinct or non-trivially equivalent complexity.
- Nonlocality of Quantum States Can be TransitiveKai-Siang Chen (National Cheng Kung University, Taiwan); Gelo Noel M. Tabia (Hon Hai (Foxconn) Research Institute, Taiwan); Chung-Yun Hsieh (University of Bristol, United Kingdom); Yu-Chun Yin (National Yang Ming Chiao Tung University, Taiwan); Yeong-Cherng Liang (National Cheng Kung University, Taiwan)[abstract]Abstract: As a striking manifestation of quantum entanglement, nonlocality has long played a pivotal role in shaping our understanding of the quantum world. When considering a Bell test involving three parties, we may even find a remarkable situation where the nonlocality in two bipartite subsystems forces the remaining bipartite subsystem to exhibit nonlocality. This intriguing effect, dubbed nonlocality transitivity, was first identified in the non-quantum non-signaling world in 2011. However, whether such transitivity could manifest within quantum theory has remained unresolved—until now. Here, we provide the first affirmative answer to this open problem at the level of quantum states, thereby showing that there exists a quantum-realizable notion of nonlocality transitivity. Specifically, by leveraging the possibility of Bell-inequality violation by tensoring, we analytically construct a pair of nonlocal bipartite states such that simultaneously realizing them in a tripartite system induces nonlocality in the remaining bipartite subsystem. En route to showing this, we also prove that multiple copies of the W-state marginals uniquely determine the global compatible state, thus establishing another instance when the parts determine the whole. Surprisingly, the nonlocality transitivity of quantum states also occurs among the reduced states of Haar-random three-qutrit pure states. We further show that the transitivity of quantum steering can already be demonstrated with the marginals of a three-qubit W state, showing again another noteworthy difference between the two forms of quantum correlations. 
Finally, we present a simple method to construct quantum states and correlations that are nonlocal in all their non-unipartite marginals, which may be of independent interest.
- A Quantum Time-Space Tradeoff for Directed st-ConnectivityStacey Jeffery (CWI, QuSoft & University of Amsterdam); Galina Pass (QuSoft & University of Amsterdam)[abstract]Abstract: Directed $st$-connectivity (DSTCON) is the problem of deciding if there exists a directed path between a pair of distinguished vertices $s$ and $t$ in an input directed graph. This problem appears in many algorithmic applications, and is also a fundamental problem in complexity theory, due to its ${\sf NL}$-completeness. We show that for any $S\geq \log^2(n)$, there is a quantum algorithm for DSTCON using space $S$ and time $T\leq 2^{\frac{1}{2}\log(n)\log(n/S)+o(\log^2(n))}$, which is an (up to quadratic) improvement over the best classical algorithm for any $S=o(\sqrt{n})$. Of the $S$ total space used by our algorithm, only $O(\log^2(n))$ is quantum space -- the rest is classical. This effectively means that we can trade off classical space for quantum time.
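To make the stated tradeoff concrete, the sketch below (an illustrative calculation, not from the paper) evaluates the leading term of the quantum time bound above with base-2 logarithms, dropping the $o(\log^2(n))$ correction in the exponent:

```python
import math

def quantum_time_bound(n, S):
    """Leading term of the stated quantum time bound for DSTCON:
    T <= 2^{(1/2) * log(n) * log(n/S)}, logs base 2,
    with the o(log^2 n) correction in the exponent dropped.
    The result is stated for space budgets S >= log^2(n)."""
    assert S >= math.log2(n) ** 2, "bound stated only for S >= log^2(n)"
    exponent = 0.5 * math.log2(n) * math.log2(n / S)
    return 2 ** exponent

# Example: n = 2^20 vertices, space budget S = 2^10 (>= log^2 n = 400):
# exponent = 0.5 * 20 * 10 = 100, i.e. T <= 2^100.
print(quantum_time_bound(2**20, 2**10) == 2**100)
# Quadrupling the space exponent shrinks the time exponent accordingly:
print(quantum_time_bound(2**20, 2**16) == 2**40)
```

As the second call shows, increasing the (mostly classical) space budget $S$ directly shrinks the exponent of the time bound, which is the tradeoff the abstract describes.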
- On the optimization of quantum divergencesGereon Kossmann (RWTH Aachen University); René Schwonnek (Leibniz University Hannover); Mario Berta (RWTH Aachen University); Mark M. Wilde (School of Electrical and Computer Engineering, Cornell University)[abstract]Abstract: Many fundamental quantities in quantum information processing are instances of quantum divergences - functionals on quantum states that satisfy natural axioms grounded in information-theoretic principles. Recently, a new class of divergences - the f-divergences - has gained prominence in quantum information theory and received operational interpretations, while being long established in the classical setting. Furthermore, Frenkel showed that the Umegaki relative entropy is a special case of a quantum f-divergence for the function f(x) = x log x; building on this, Hirche et al. introduced a parameterized family of f-divergences that, in appropriate regimes, recovers the sandwiched and Petz relative entropies as regularizations. Taken together, these results reveal a tight link between the best-understood quantum divergences - the Umegaki, Petz, and sandwiched relative entropies - on a technical level and the general class of f-divergences, thereby strongly motivating a program, already initiated by Cheng et al., that connects f-divergences to concrete quantum information tasks. In this contribution, we develop a variational formulation that approximates general quantum f-divergences to arbitrary precision. These approximations yield (i) efficient evaluation of the quantum relative entropy of channels, which already serves as a core numerical method in quantum many-body physics, and (ii) computation of asymptotic key rates in DIQKD, in particular in routed Bell scenarios with two switches.
- Universal tradeoff relations between resource cost and irreversibility of channels: General-resource Wigner-Araki-Yanase theorems and beyondHiroyasu Tajima (Kyushu University); Koji Yamaguchi (Kyushu University); Ryuji Takagi (The University of Tokyo); Yui Kuramochi (Kyushu University)[abstract]Abstract: Quantum technologies offer exceptional—sometimes almost magical—speed and performance, yet every quantum process costs physical resources. Designing next-generation quantum devices therefore depends on answering the following question: which resources, and in what amount, are required to implement a desired quantum process? Casting the problem in the language of quantum resource theories, we prove a universal cost-irreversibility tradeoff: the lower the irreversibility of a quantum process, the greater the required resource cost for its realization. The tradeoff law holds for a broad range of resources—energy, magic, asymmetry, coherence, athermality, and others—yielding lower bounds on the resource cost of any quantum channel. Its broad scope positions this result as a foundation for deriving the following key results: (1) we show a universal relation between the energetic cost and the irreversibility of arbitrary channels, encompassing the energy-error tradeoff for any measurement or unitary gate; (2) we extend the energy-error tradeoff to free energy and work costs; (3) we extend the Wigner-Araki-Yanase theorem, which is the universal limitation on measurements under conservation laws, to a wide class of resource theories: the probability of failure in distinguishing resourceful states via a measurement is inversely proportional to its resource cost; (4) we prove that infinitely many resource-non-increasing operations in fact require an infinite implementation cost.
These findings reveal a universal relationship between quantumness and irreversibility, providing a first step toward a general theory that explains when---and how---quantumness can suppress irreversibility.
- Gluing Random Unitaries with InversesPrabhanjan Ananth (UCSB); John Bostanci (Columbia University); Aditya Gulati (UCSB); Yao-Ting Lin (UCSB)[abstract]Abstract: The gluing theorem for random unitaries [Schuster, Haferkamp, Huang, QIP 2025] has found numerous applications, including designing low-depth random unitaries [Schuster, Haferkamp, Huang, QIP 2025], random unitaries in QAC0 [Foxman, Parham, Vasconcelos, Yuen'25], and generically shortening the key length of pseudorandom unitaries [Ananth, Bostanci, Gulati, Lin EUROCRYPT'25]. We present an alternative to the gluing lemma of [Schuster, Haferkamp, Huang, QIP 2025] for combining Haar-random unitaries that is secure against adversaries with inverse query access to the joined unitary. As a consequence, we show for the first time that strong pseudorandom unitaries can generically have their length extended, and can be constructed using only O(n^(1/c)) bits of randomness, for any constant c, if strong pseudorandom unitaries exist.
- Breaking the Treewidth Barrier in Quantum Circuit Simulation with Decision DiagramsBin Cheng (National University of Singapore); Ziyuan Wang (Tsinghua University); Longxiang Yuan (Tsinghua University); Ruixuan Deng (Tsinghua University); Jianxin Chen (Tsinghua University); Zhengfeng Ji (Tsinghua University)[abstract]Abstract: Classical simulation of quantum circuits is a critical tool for validating quantum hardware and probing the boundary between classical and quantum computational power. Existing state-of-the-art methods, notably tensor network approaches, have computational costs governed by the treewidth of the underlying circuit graph, making circuits with large treewidth intractable. This work rigorously analyzes FeynmanDD, a decision diagram-based simulation method proposed in CAV 2025 by a subset of the authors, and shows that the size of the multi-terminal decision diagram used in FeynmanDD is exponential in the linear rank-width of the circuit graph. As linear rank-width can be substantially smaller than treewidth and exceeds the treewidth by at most a logarithmic factor, our analysis demonstrates that FeynmanDD outperforms all tensor network-based methods for certain circuit families. We also show that the method remains efficient if we use the Solovay-Kitaev algorithm to expand arbitrary single-qubit gates into sequences of Hadamard and T gates, essentially removing the gate-set restriction imposed by the method.
- When Quantum Nonlocality Does Not Play Dice & No Bound Randomness in Quantum NonlocalityRavishankar Ramanathan (School of Computing and Data Science, The University of Hong Kong, Pokfulam Road, Hong Kong); Yuan Liu (School of Computing and Data Science, The University of Hong Kong, Pokfulam Road, Hong Kong); Yutian Wu (School of Computing and Data Science, The University of Hong Kong, Pokfulam Road, Hong Kong); Stefano Pironio (Laboratoire d'Information Quantique, CP224, Université libre de Bruxelles, 1050 Brussels, Belgium)[abstract]Abstract: Violations of Bell inequalities are often regarded as evidence that quantum nonlocality guarantees intrinsic randomness, effectively playing the role of “dice” at the heart of device-independent (DI) cryptographic protocols. Yet the precise connection between nonlocality and randomness is more nuanced. We first show that there exist nontrivial Bell inequalities that are maximally violated by quantum correlations while certifying no randomness for any fixed input pair, rendering them ineffective for a large class of standard DI schemes. Moreover, we construct maximally nonlocal quantum correlations that remain deterministic for every fixed input pair, in the sense that for any chosen inputs they can be decomposed into strategies with fixed outputs. Conversely, we show that when all input pairs are used for randomness generation, any amount of quantum nonlocality suffices to certify randomness, implying that no form of bound randomness exists in quantum nonlocality: every nonlocal behavior can be useful for DI randomness generation under an appropriately designed protocol. Building on this, we introduce the average guessing probability over all inputs, in contrast to the hitherto considered fixed-input guessing probability, as a faithful and monotonic quantifier of nonlocality.
Using this measure, we prove that, contrary to recent findings in PRL 134, 090201, the detection efficiency threshold for certifying randomness is never lower than that required for detecting nonlocality. Finally, we analytically compute the average guessing probability by a quantum adversary in the standard CHSH test and show how this leads to improved generation rates in state-of-the-art amplification protocols. Together, our results precisely delineate the limits of determinism compatible with quantum nonlocality and establish average guessing probability as the correct operational bridge between nonlocality and DI randomness.
- Quartic quantum speedups for community detectionAlexander Schmidhuber (MIT); Alexander Zlokapa (MIT)[abstract]Abstract: Community detection is a foundational problem in data science concerned with detecting clustering structure in datasets. A natural extension is hypergraph community detection, which aims to capture higher-order correlations beyond pairwise interactions. In this work, we develop a quantum algorithm for hypergraph community detection and show that it achieves a quartic (power-of-four) speedup over the best known classical algorithm, along with superpolynomial savings in space. The speedup is based on a quantized version of the Kikuchi method and arises from the efficient preparation of a guiding state that has enhanced overlap with the solution space. To support our main result, we prove matching low-coordinate degree function lower bounds, a generalization of sum-of-squares and low-degree likelihood ratio lower bounds, which establishes that the classical baseline algorithm over which we achieve a quartic quantum speedup is (nearly) optimal. Our work constitutes a general framework for identifying when our guiding state–based approach, rooted in the Kikuchi hierarchy, can yield super-quadratic quantum speedups. We suggest that a quantity known as marginal order, which captures the presence of a tradeoff between signal-to-noise ratio and computational cost, effectively determines the existence of quantum Kikuchi-guided algorithms.
- Hardness of recognizing phases of matterThomas Schuster (Caltech, Google); Dominik Kufel (Harvard); Norman Y. Yao (Harvard); Hsin-Yuan Huang (Caltech and Google)[abstract]Abstract: We prove that recognizing the phase of matter of an unknown quantum state is quantum computationally hard. More specifically, we show that the worst-case runtime of any phase recognition algorithm must grow exponentially in the correlation length $\xi$ of the state. This exponential growth renders the problem practically infeasible even for moderate constant values of the correlation length $\xi$, and leads to super-polynomial quantum computational time in the system size $n$ whenever $\xi = \omega(\log n)$. Our results apply to a substantial portion of all known phases of matter, including symmetry-breaking phases and symmetry-protected topological phases for any discrete on-site symmetry group in any spatial dimension. To establish this hardness, we extend the study of pseudorandom unitaries to quantum systems with symmetries. We prove that symmetric pseudorandom unitaries exist under standard cryptographic conjectures, and can be constructed in extremely low circuit depths for any discrete on-site group. We also provide extensions of our results to systems with translation invariance and purely classical phases of matter. A key technical limitation is that the locality of the parent Hamiltonian of the states we consider is linear in $\xi$; removing this constraint remains an important open question.
- Hamiltonian Decoded Quantum InterferometryAlexander Schmidhuber (MIT, Google); Jonathan Z Lu (MIT); Stephen Jordan (Google); Alexander Poremba (MIT, Boston University); Noah Shutty (Google); Yihui Quek (MIT, EPFL)[abstract]Abstract: We introduce Hamiltonian Decoded Quantum Interferometry (HDQI), a quantum algorithm that utilizes coherent Bell measurements and the symplectic representation of the Pauli group to reduce Gibbs sampling and Hamiltonian optimization to classical decoding. For a signed Pauli Hamiltonian $H$ and any degree-$\ell$ polynomial $\mathcal{P}$, HDQI prepares a purification of the density matrix $$\rho_{\mathcal{P}}(H) = \mathcal{P}^2(H)/\mathrm{Tr}[\mathcal{P}^2(H)]$$ by solving a combination of two tasks: decoding $\ell$ errors on a classical code defined by $H$, and preparing a pilot state that encodes the anti-commutation structure of $H$. Choosing $\mathcal{P}(x)$ to approximate $\exp(-\beta x/2)$ yields Gibbs states at inverse temperature $\beta$; other choices of $\mathcal{P}$ prepare approximate ground states, microcanonical ensembles, and other spectral filters. The decoding problem inherits structural properties of $H$; in particular, local Hamiltonians map to LDPC codes. Preparing the pilot state is always efficient for commuting Hamiltonians, but highly non-trivial for non-commuting Hamiltonians. Nevertheless, we prove that this state admits an efficient matrix product state representation for a class of nearly commuting Pauli Hamiltonians whose anti-commutation graph decomposes into connected components of logarithmic size. We show that HDQI efficiently prepares Gibbs states at arbitrary temperatures for a class of physically motivated commuting Hamiltonians -- including the toric code, color code, and Haah's cubic code -- but also develop a matching efficient classical algorithm for this task, thereby delineating the boundary of efficient classical simulation.
For a non-commuting semiclassical spin glass and commuting stabilizer code Hamiltonians with quantum defects, HDQI provably prepares Gibbs states up to a constant inverse-temperature threshold using polynomial quantum resources and quasi-polynomial classical preprocessing. These results position HDQI as a versatile new algorithmic primitive, connecting quantum state preparation to classical decoding.
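As a toy illustration of the target state in the abstract above (not of the HDQI algorithm itself, and on an assumed two-qubit example), the snippet below builds the density matrix P(H)^2 / Tr[P(H)^2] for the tiny commuting Pauli Hamiltonian H = Z⊗I + I⊗Z and checks that the choice P(x) = exp(-βx/2) reproduces the Gibbs state exactly:

```python
import numpy as np

# Assumed toy example: H = Z (x) I + I (x) Z is diagonal with
# eigenvalues {2, 0, 0, -2}, so applying a function P to H means
# applying it elementwise to the diagonal.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
H = np.kron(Z, I2) + np.kron(I2, Z)

beta = 1.3
# P(H) for P(x) = exp(-beta*x/2), applied to the eigenvalues of H.
P_H = np.diag(np.exp(-beta * np.diag(H) / 2))

# Target state: rho_P(H) = P(H)^2 / Tr[P(H)^2].
rho = P_H @ P_H
rho /= np.trace(rho)

# Gibbs state exp(-beta*H) / Tr[exp(-beta*H)] (H diagonal, so elementwise exp).
gibbs = np.diag(np.exp(-beta * np.diag(H)))
gibbs /= np.trace(gibbs)

print(np.allclose(rho, gibbs))  # True: P(x) = exp(-beta*x/2) gives the Gibbs state
```

Since P(H)^2 = exp(-βH) for this choice of P, the normalized filter state coincides with the Gibbs state, which is the "Choosing P(x) to approximate exp(-βx/2)" remark in the abstract made concrete for a diagonal toy Hamiltonian.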
- Optimising quantum data hidingFrancesco Anna Mele (Scuola Normale Superiore of Pisa); Ludovico Lami (Scuola Normale Superiore of Pisa)[abstract]Abstract: Quantum data hiding refers to the existence of pairs of bipartite quantum states that are (almost) perfectly distinguishable with global measurements, yet close to indistinguishable when only measurements implementable with local operations and classical communication are allowed. Remarkably, data hiding states can also be chosen to be separable, meaning that secrets can be hidden without using any entanglement yet remain almost irretrievable without entanglement—this is sometimes called 'nonlocality without entanglement'. Essentially two families of data hiding states were known prior to this work: Werner states and random states. Hiding Werner states can be made either separable or globally perfectly orthogonal, but not both—separability comes at the price of orthogonality being only approximate. Random states can hide many more bits, but they are typically entangled and again only approximately orthogonal. In this paper, we present an explicit construction of novel group-symmetric data hiding states that are simultaneously separable, perfectly orthogonal, and even invariant under partial transpose, thus exhibiting the phenomenon of nonlocality without entanglement to the utmost extent. Our analysis leverages novel applications of numerical analysis tools to study convex optimisation problems in quantum information theory, potentially offering technical insights that extend beyond this work.
List of Accepted Posters
(in order of submission)