
Tomography & Learning 3
contributed
Thu, 29 Jan 2026, 13:00 - 14:30
- Adversarially robust quantum state learning and testing. Maryam Aliakbarpour (Rice University); Nai-Hui Chia (Rice University); Vladimir Braverman (Johns Hopkins University); Yuhan Liu (Rice University).
Abstract: Quantum state learning is a fundamental problem in physics and computer science. Since near-term quantum devices are error-prone, it is important to design error-resistant algorithms. Beyond device errors, other unexpected factors can also affect an algorithm, such as careless human read-out errors or even a malicious hacker deliberately altering the measurement results. We therefore want algorithms that work even in the worst case. We consider the practical setting of single-copy measurements and propose the $\gamma$-adversarial corruption model, in which an imaginary adversary can arbitrarily change a $\gamma$-fraction of the measurement outcomes. This is stronger than the $\gamma$-bounded SPAM noise model, where the post-measurement state changes by at most $\gamma$ in trace distance. Under this stronger corruption model, we design an algorithm using non-adaptive measurements that learns an unknown rank-$r$ state up to error $\tilde{O}(\gamma\sqrt{r})$ in trace distance, provided the number of copies is sufficiently large. We further prove an information-theoretic lower bound of $\Omega(\gamma\sqrt{r})$ for non-adaptive measurements, demonstrating the optimality of our algorithm. Our upper and lower bounds also hold for quantum state testing, where the goal is to test whether an unknown state equals a given state or is far from it. Our results are intriguingly optimistic and pessimistic at the same time. For general states, the error is dimension-dependent, $\gamma\sqrt{d}$ in the worst case, meaning that corrupting only a very small ($1/\sqrt{d}$) fraction of the outcomes can completely defeat any non-adaptive learning algorithm.
However, for constant-rank states that are useful in many quantum algorithms, it is possible to achieve dimension-independent error, even in the worst-case adversarial setting.
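As a toy illustration of the corruption model (a minimal sketch, not the paper's algorithm; all parameter choices are ours), the following runs single-qubit Pauli tomography by linear inversion while an adversary flips a gamma-fraction of outcomes, and reports the resulting trace-distance error, which lands on the order of gamma:

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 100_000, 0.05
rho = np.array([[1, 0], [0, 0]], dtype=complex)  # true state |0><0|

paulis = [
    np.array([[0, 1], [1, 0]], dtype=complex),    # X
    np.array([[0, -1j], [1j, 0]], dtype=complex), # Y
    np.array([[1, 0], [0, -1]], dtype=complex),   # Z
]

est = np.eye(2, dtype=complex) / 2
for P in paulis:
    p_plus = (1 + np.real(np.trace(rho @ P))) / 2   # Pr[outcome +1]
    outcomes = np.where(rng.random(n) < p_plus, 1, -1)
    outcomes[rng.random(n) < gamma] *= -1           # adversary flips gamma-fraction
    est += (outcomes.mean() / 2) * P                # linear-inversion estimate

# Trace distance = half the sum of |eigenvalues| of the Hermitian difference
err = 0.5 * np.abs(np.linalg.eigvalsh(rho - est)).sum()
print(f"trace-distance error ~ {err:.3f} (gamma = {gamma})")
```

Flipping outcomes independently of their value shrinks each estimated expectation by a factor of $(1-2\gamma)$, so the reconstruction error scales linearly with the corruption rate, consistent with the dimension-independent behavior the abstract describes for low-rank states.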
- Merged with: Optimal lower bounds for quantum state tomography. Thilo Scharnhorst (UC Berkeley); Jack Spilecki (UC Berkeley); John Wright (UC Berkeley).
Abstract: We show that $n = \Omega(rd/\epsilon^2)$ copies are necessary to learn a rank-$r$ mixed state $\rho \in \mathbb{C}^{d \times d}$ up to error $\epsilon$ in trace distance. This matches the upper bound of $n = O(rd/\epsilon^2)$ from [OW16] and therefore settles the sample complexity of mixed state tomography. We prove this lower bound by studying a special case of full state tomography that we refer to as projector tomography, in which $\rho$ is promised to be of the form $\rho = P/r$, where $P \in \mathbb{C}^{d \times d}$ is a rank-$r$ projector. A key technical ingredient in our proof, which may be of independent interest, is a reduction that converts any algorithm for projector tomography learning to error $\epsilon$ in trace distance into an algorithm learning to error $O(\epsilon)$ in the more stringent Bures distance.
- Merged with: The debiased Keyl's algorithm: a new unbiased estimator for full state tomography. Angelos Pelecanos (UC Berkeley); Jack Spilecki (UC Berkeley); John Wright (UC Berkeley).
Abstract: In the problem of quantum state tomography, one is given $n$ copies of an unknown rank-$r$ mixed state $\rho \in \mathbb{C}^{d \times d}$ and asked to produce an estimator $\widehat{\rho}$ of $\rho$ which is close to it. One of the most basic and useful properties a statistical estimator can have is being unbiased, meaning that $\widehat{\rho}$ equals the true state $\rho$ in expectation. Unfortunately, of the three estimators for tomography known to be sample-optimal or almost sample-optimal, namely Keyl's algorithm [Key06] and the two tomography algorithms from [HHJ+16], none produces an unbiased estimator. This situation has recently been highlighted as a bottleneck in several important tomographic applications [CLL24a, CLL24b].
In this work, we present the debiased Keyl's algorithm, the first estimator for full state tomography that is both unbiased and sample-optimal. Our algorithm takes Keyl's algorithm and carefully modifies it to correct for its bias. In addition, we derive an explicit formula for the second moment of our estimator which is simple and easy to use. Using it, we show the following applications.
  - We show that $n = O(rd/\epsilon^2)$ copies are sufficient to learn a rank-$r$ mixed state $\rho \in \mathbb{C}^{d \times d}$ to trace distance error $\epsilon$. This gives a new proof of the main result of [OW16] and matches the lower bound of [SSW25].
  - We then improve this result and show that $n = O(rd/\epsilon^2)$ copies are sufficient to learn to error $\epsilon$ in the more challenging Bures distance. This is optimal due to the lower bound of [Yue23] and improves on the best known bound of $n = \widetilde{O}(rd/\epsilon^2)$ from prior work [HHJ+16, OW17a, FO24].
  - We consider full state tomography with limited entanglement, in which one is given $n$ copies of $\rho$ but may perform entangled measurements across only $k$ of them at a time. We show that $n = O\left(\max\left(d^3/(\sqrt{k}\,\epsilon^2),\; d^2/\epsilon^2\right)\right)$ copies of $\rho$ suffice to learn $\rho$ to trace distance error $\epsilon$ in this scenario. This improves on the prior work of [CLL24a] and matches their lower bound, which holds for $k \leq 1/\epsilon^c$, where $c$ is a constant.
  - In the problem of shadow tomography, one is given $m$ observables $O_1, \ldots, O_m$, and the goal is to estimate each quantity $\mathrm{tr}(O_i \rho)$ up to error $\epsilon$. If $\|O_i\|_{\infty} \leq 1$ for all $i$, we show that $O(\log(m)/\epsilon^2)$ copies are sufficient to solve this task in the "high accuracy regime" $\epsilon = O(1/d)$. This improves a result of [CLL24b], who showed this bound holds when $\epsilon = O(1/d^{12})$. More generally, we show that if $\mathrm{tr}(O_i^2) \leq F$ for all $i$, then $n = O\big(\log(m) \cdot \big(\min\big\{\sqrt{rF}/\epsilon,\; F^{2/3}/\epsilon^{4/3}\big\} + 1/\epsilon^2\big)\big)$ copies suffice to solve this task. This improves on a result of [GLM24], who showed a bound of $n = O(\log(m)\cdot\sqrt{rF}/\epsilon^2)$, and disproves a conjecture of [CLL24b].
  - Our final application is to quantum metrology. Here one is provided copies of a mixed state $\rho_{\theta}$ parameterized by a vector $\theta \in \mathbb{R}^n$, and the goal is to estimate the parameter $\theta$. An algorithm is locally unbiased at a parameter $\theta^*$ if, roughly, it is unbiased given $\rho_{\theta}$ when $\theta$ is in a small neighborhood of $\theta^*$. The quantum Cramér–Rao bound states that the mean squared error matrix (MSEM) of any locally unbiased algorithm is lower bounded by the inverse of the Quantum Fisher Information (QFI) matrix. We give a locally unbiased algorithm whose MSEM is upper bounded by twice the inverse of the QFI matrix in the asymptotic limit of large $n$, which is optimal.
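For readers who want to compare the regimes in these bounds numerically, here is a small illustrative sketch (our own helper names; all big-O constants suppressed) evaluating the quoted copy-count scalings:

```python
import math

def copies_trace(r, d, eps):
    """Theta(r d / eps^2) scaling for rank-r tomography in trace distance."""
    return r * d / eps**2

def copies_limited_entanglement(d, k, eps):
    """max(d^3 / (sqrt(k) eps^2), d^2 / eps^2) scaling with k-copy measurements."""
    return max(d**3 / (math.sqrt(k) * eps**2), d**2 / eps**2)

def copies_shadow(m, r, F, eps):
    """log(m) * (min(sqrt(rF)/eps, F^{2/3}/eps^{4/3}) + 1/eps^2) scaling."""
    return math.log(m) * (min(math.sqrt(r * F) / eps,
                              F ** (2 / 3) / eps ** (4 / 3)) + 1 / eps**2)

# Example: d = 16, rank r = 2, accuracy eps = 0.1
print(copies_trace(2, 16, 0.1))                 # 3200.0
print(copies_limited_entanglement(16, 4, 0.1))  # d^3 term dominates for small k
print(copies_shadow(100, 2, 16, 0.1))
```

Plugging in numbers like this makes the crossover visible: for small $k$ the $d^3/(\sqrt{k}\epsilon^2)$ term dominates the limited-entanglement bound, and in the shadow-tomography bound the $\sqrt{rF}/\epsilon$ term wins at moderate accuracy while $F^{2/3}/\epsilon^{4/3}$ takes over as $\epsilon$ shrinks.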
- Merged with: An infinite hierarchy of multi-copy quantum learning tasks. Jan Nöller (TU Darmstadt); Viet Tran (JKU Linz); Mariami Gachechiladze (TU Darmstadt); Richard Kueng (JKU Linz).
Abstract: Learning properties of quantum states from measurement data is a fundamental challenge in quantum information. The sample complexity of such tasks depends crucially on the measurement primitive. While shadow tomography achieves sample-efficient learning by allowing entangling measurements across many copies, it requires prohibitively deep circuits. At the other extreme, two-copy measurements already yield exponential advantages over single-copy strategies in tasks such as Pauli tomography. In this work we show that such sharp separations extend far beyond the two-copy regime: for every prime $k$ we construct explicit learning tasks of degree $k$ which are exponentially hard with $(k-1)$-copy measurements but efficiently solvable with $k$-copy measurements. Our protocols are not only sample-efficient but also realizable with shallow circuits. Extending further, we show that such finite-degree tasks exist for all square-free integers $k$, pointing toward a general principle underlying their existence. Together, our results reveal an infinite hierarchy of multi-copy learning problems, uncovering new phase transitions in sample complexity and underscoring the role of reliable quantum memory as a key resource for exponential quantum advantage.
- Merged with: Exponential Advantage from One More Replica in Estimating Nonlinear Properties of Quantum States. Qi Ye (Tsinghua University); Dong-Ling Deng (Tsinghua University); Zhenhuan Liu (Tsinghua University).
Abstract: Estimating nonlinear properties of quantum states, such as $\mathrm{tr}(\rho^{k} O)$, is fundamental in quantum information but challenging due to the intrinsic linearity of quantum mechanics.
Typical approaches such as the generalized swap test rely on joint access to $k$ replicas to convert nonlinear functions of $\rho$ into linear functions of $\rho^{\otimes k}$. In this work, we prove that this conversion is not only sufficient but also necessary: any protocol restricted to $(k-1)$-replica joint measurements requires exponentially many samples to estimate $\mathrm{tr}(\rho^{k}O)$ with nonzero $\mathrm{tr}(O)$, while $k$-replica joint measurements allow efficient estimation. This establishes, for the first time, an exponential separation between $(k-1)$- and $k$-replica protocols for general $k$, thereby defining a fine-grained hierarchy for replica quantum advantage and solving an open question in the literature. The workhorse of our proofs is a general indistinguishability principle showing that any ensemble assembled from Haar-random states is hard to distinguish from its average. We also leverage this principle in spectrum testing, proving that $k$-replica joint measurements are necessary to efficiently distinguish two spectra that match on all moments up to degree $k-1$. Our work draws sharp boundaries on the power of joint measurements, shedding light on resource-complexity tradeoffs in quantum learning theory.
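The linearization the abstract refers to rests on the standard cyclic-shift identity $\mathrm{tr}(W_k\, \rho^{\otimes k}) = \mathrm{tr}(\rho^k)$, where $W_k$ permutes the $k$ replicas cyclically. A minimal numerical sketch (our own construction, for illustration only) verifies this identity for a random qubit state with $k = 3$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 2, 3

# Random density matrix rho = A A† / tr(A A†)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Cyclic-shift operator W on (C^d)^{⊗k}: W|i1,...,ik> = |ik,i1,...,i_{k-1}>
dim = d ** k
W = np.zeros((dim, dim))
for col in range(dim):
    digits = np.unravel_index(col, (d,) * k)
    shifted = (digits[-1],) + digits[:-1]
    W[np.ravel_multi_index(shifted, (d,) * k), col] = 1

rho_k = rho
for _ in range(k - 1):
    rho_k = np.kron(rho_k, rho)        # build rho^{⊗k}

lhs = np.trace(W @ rho_k)                        # k-replica joint observable
rhs = np.trace(np.linalg.matrix_power(rho, k))   # nonlinear functional tr(rho^k)
print(abs(lhs - rhs))                            # ~0: the identity holds
```

This is exactly the sense in which a degree-$k$ nonlinear functional of $\rho$ becomes a linear observable on $k$ jointly measured replicas; the abstract's lower bound shows that no $(k-1)$-replica protocol can simulate this efficiently.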