We study the complexity of learning quantum states in various models with respect to the stabilizer formalism and obtain the following results:

- We prove that $\Omega(n)$ $T$-gates are necessary for any Clifford+$T$ circuit to prepare computationally pseudorandom quantum states, an exponential improvement over the previously known bound. This bound is asymptotically tight if linear-time quantum-secure pseudorandom functions exist.
- Given an $n$-qubit pure quantum state $|\psi\rangle$ that has fidelity at least $\tau$ with some stabilizer state, we give an algorithm that outputs a succinct description of a stabilizer state that witnesses fidelity at least $\tau - \varepsilon$. The algorithm uses $O(n/(\varepsilon^2\tau^4))$ samples and $\exp\left(O(n/\tau^4)\right) / \varepsilon^2$ time. For constant $\tau$, this algorithm estimates stabilizer fidelity substantially faster than the na\"ive $\exp(O(n^2))$-time brute-force search over all stabilizer states.
- We improve the soundness analysis of the stabilizer state property testing algorithm due to Gross, Nezami, and Walter [Commun. Math. Phys. 385 (2021)]. As an application, we exhibit a tolerant property testing algorithm for stabilizer states.

The underlying algorithmic primitive in all of our results is Bell difference sampling. To prove our results, we establish and/or strengthen connections between Bell difference sampling, symplectic Fourier analysis, and graph theory.
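To make the primitive concrete, here is a minimal NumPy sketch of the distribution that Bell difference sampling produces, for a small $n$-qubit state vector. It relies on the standard fact that Bell difference sampling returns $x \in \mathbb{F}_2^{2n}$ with probability $q(x) = \sum_y p_\psi(y)\, p_\psi(x \oplus y)$, where $p_\psi(x) = 2^{-n} |\langle\psi| W_x |\psi\rangle|^2$ is the characteristic distribution. The code simulates this distribution classically rather than implementing the two-copy Bell measurement circuit, and all function names are ours:

```python
import itertools
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def characteristic_distribution(psi):
    """p(a, b) = |<psi| X^a Z^b |psi>|^2 / 2^n over (a, b) in F_2^{2n}.

    The Weyl phase i^{a.b} drops out of the absolute value, so plain
    X^a Z^b suffices.
    """
    n = int(np.log2(len(psi)))
    p = {}
    for a in itertools.product((0, 1), repeat=n):
        for b in itertools.product((0, 1), repeat=n):
            W = np.array([[1.0 + 0j]])
            for ai, bi in zip(a, b):
                W = np.kron(W, (X if ai else I2) @ (Z if bi else I2))
            p[a + b] = abs(np.vdot(psi, W @ psi)) ** 2 / 2 ** n
    return p

def bell_difference_samples(psi, num_samples, rng=None):
    """Draw from q(x) = sum_y p(y) p(x XOR y), the Bell difference distribution."""
    rng = rng or np.random.default_rng()
    p = characteristic_distribution(psi)
    keys = list(p)
    q = dict.fromkeys(keys, 0.0)
    for x in keys:
        for y in keys:
            q[tuple(xi ^ yi for xi, yi in zip(x, y))] += p[x] * p[y]
    probs = np.array([q[k] for k in keys])
    idx = rng.choice(len(keys), size=num_samples, p=probs / probs.sum())
    return [keys[i] for i in idx]
```

For a stabilizer state, the samples land on a Lagrangian subspace of $\mathbb{F}_2^{2n}$; deviations from this behavior are what the testing and fidelity-estimation results exploit.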
Quantum span program algorithms for function evaluation sometimes have reduced query complexity when promised that the input has a certain structure. We design a modified span program algorithm to show these improvements persist even without a promise ahead of time, and we extend this approach to the more general problem of state conversion. As an application, we prove exponential and superpolynomial quantum advantages in average query complexity for several search problems, generalizing Montanaro's Search with Advice [Montanaro, TQC 2010].
We study a class of orbit recovery problems in which we observe independent copies of an unknown element of $\mathbb{R}^p$, each linearly acted upon by a random element of some group (such as $\mathbb{Z}/p$ or $\mathrm{SO}(3)$) and then corrupted by additive Gaussian noise. We prove matching upper and lower bounds on the number of samples required to approximately recover the group orbit of this unknown element with high probability. These bounds, based on quantitative techniques in invariant theory, give a precise correspondence between the statistical difficulty of the estimation problem and algebraic properties of the group. Furthermore, we give computer-assisted procedures to certify these properties that are computationally efficient in many cases of interest. The model is motivated by geometric problems in signal processing, computer vision, and structural biology, and applies to the reconstruction problem in cryo-electron microscopy (cryo-EM), a problem of significant practical interest. Our results allow us to verify (for a given problem size) that if cryo-EM images are corrupted by noise with variance $\sigma^2$, the number of images required to recover the molecular structure scales as $\sigma^6$. We match this bound with a novel (albeit computationally expensive) algorithm for ab initio reconstruction in cryo-EM, based on invariant features of degree at most 3. We further discuss how to recover multiple molecular structures from mixed (or heterogeneous) cryo-EM samples.
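For intuition, here is a sketch of degree-$\le 3$ invariant features in the simplest instance of the model, multi-reference alignment over $\mathbb{Z}/p$ (cyclic shifts), standing in for the $\mathrm{SO}(3)$ action in cryo-EM. The DFT mean, power spectrum, and bispectrum are shift-invariant, and their sample averages can be debiased against the Gaussian noise; inverting the bispectrum to recover the signal, the harder step, is not shown. All names and the interface are ours:

```python
import numpy as np

def mra_invariants(Y, sigma):
    """Degree <= 3 shift-invariant moments from noisy cyclically-shifted samples.

    Y is an (N, p) array whose rows are y_i = np.roll(x, s_i) + sigma * noise
    with i.i.d. standard Gaussian noise. Returns estimates of the mean of x,
    the power spectrum |x_hat|^2, and the bispectrum of x.
    """
    N, p = Y.shape
    F = np.fft.fft(Y, axis=1)                       # per-sample DFTs
    mean_hat = F[:, 0].mean().real / p              # invariant under shifts
    power = (np.abs(F) ** 2).mean(axis=0) - p * sigma ** 2  # E|noise_hat_k|^2 = p * sigma^2
    k = np.arange(p)
    idx = (k[:, None] + k[None, :]) % p
    B = np.zeros((p, p), dtype=complex)
    for f in F:                                     # B[k, l] ~ E[y_k y_l conj(y_{k+l})]
        B += f[:, None] * f[None, :] * np.conj(f[idx])
    B /= N
    # The shift phases cancel in B, so it estimates x_k x_l conj(x_{k+l});
    # the Gaussian noise biases B only on the axes k = 0, l = 0, k + l = 0 (mod p).
    return mean_hat, power, B
```

The $\sigma^6$ sample complexity reflects that the third moment is the highest-degree invariant needed: its empirical average concentrates at rate $\sigma^6/N$ in the high-noise regime.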
We provide new false discovery proportion (FDP) confidence envelopes in several multiple testing settings relevant to modern high-dimensional data methods. We revisit the scenarios considered in the recent work of \cite{katsevich2020simultaneous} (top-$k$, preordered -- including knockoffs -- and online) with a particular emphasis on obtaining FDP bounds that have both non-asymptotic coverage and asymptotic consistency, i.e., that converge below the desired level $\alpha$ when applied to a classical $\alpha$-level false discovery rate (FDR) controlling procedure. In this way, we derive new bounds that improve on existing ones, both theoretically and practically, and that are suitable for situations where at least a moderate number of rejections is expected. These improvements are illustrated with numerical experiments and real data examples. The improvement is particularly significant in the knockoff setting, which demonstrates the method's practical impact. As side results, we introduce a new confidence envelope for the empirical cumulative distribution function of i.i.d. uniform variables, and we provide new power results in sparse cases, both of independent interest.
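As a point of reference for what such an envelope buys, here is a crude baseline built from the classical one-sided Dvoretzky-Kiefer-Wolfowitz (DKW) inequality rather than the paper's new envelope: if the null p-values are i.i.d. uniform, then with probability at least $1-\alpha$, simultaneously over all thresholds $t$, the number of false discoveries at threshold $t$ is at most $nt + \sqrt{n\log(1/\alpha)/2}$. A sketch, with the function name ours:

```python
import numpy as np

def fdp_envelope_dkw(pvals, alpha=0.05):
    """Crude simultaneous FDP upper bounds over all p-value thresholds.

    Applies the one-sided DKW inequality to the empirical CDF of the
    (at most n) uniform null p-values: with prob >= 1 - alpha,
    simultaneously for all t,
        #{null p_i <= t} <= n * t + sqrt(n * log(1/alpha) / 2).
    This is only a baseline; the envelopes in the paper are tighter.
    """
    n = len(pvals)
    slack = np.sqrt(n * np.log(1 / alpha) / 2)
    order = np.sort(pvals)
    rejections = np.arange(1, n + 1)          # rejecting all p <= order[k-1] gives k rejections
    false_disc_bound = n * order + slack      # simultaneous bound on false discoveries
    return np.minimum(1.0, false_disc_bound / rejections)
```

The $\sqrt{n}$ additive slack is what makes such generic envelopes loose when only a moderate number of rejections is made, which is precisely the regime the new bounds target.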
We present a new explicit construction of one-sided bipartite lossless expanders of constant degree, with arbitrary constant ratio between the sizes of the two vertex sets. Our construction is simpler to state and analyze than the prior construction of Capalbo, Reingold, Vadhan, and Wigderson (2002). We construct our lossless expanders by imposing the structure of a constant-sized lossless expander "gadget" within the neighborhoods of a large bipartite spectral expander; similar constructions were previously used to obtain the weaker notion of unique-neighbor expansion. Our analysis consists simply of elementary counting arguments and an application of the expander mixing lemma.
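Since the gadget has constant size, its lossless expansion can be certified by exhaustive search. Below is a minimal brute-force checker of the standard definition (every small left set $S$ must satisfy $|N(S)| \ge (1-\varepsilon)d|S|$); the function name and interface are ours:

```python
from itertools import combinations

def is_lossless_expander(adj, d, delta, eps):
    """Brute-force check of (delta, 1 - eps)-lossless vertex expansion.

    adj maps each left vertex to its list of d right neighbors. Checks
    that every left set S with |S| <= delta * |L| satisfies
    |N(S)| >= (1 - eps) * d * |S|. Exponential time, so this is only
    viable for a constant-sized gadget.
    """
    left = list(adj)
    for k in range(1, int(delta * len(left)) + 1):
        for S in combinations(left, k):
            neighbors = set().union(*(adj[v] for v in S))
            if len(neighbors) < (1 - eps) * d * k:
                return False
    return True
```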
As a signal recovery algorithm, compressed sensing is particularly useful when the data is of low complexity and samples are scarce, which matches perfectly with the task of quantum phase estimation (QPE). In this work we present a new Heisenberg-limited QPE algorithm for early quantum computers based on compressed sensing. More specifically, given many copies of a suitable initial state and queries to certain unitary operators, our algorithm is able to recover the frequency with a total runtime of $\mathcal{O}(\epsilon^{-1}\text{poly}\log(\epsilon^{-1}))$, where $\epsilon$ is the accuracy. Moreover, the maximal runtime satisfies $T_{\max}\epsilon \ll \pi$, which is comparable to state-of-the-art algorithms, and our algorithm is also robust against a certain amount of sampling noise. We also consider the more general quantum eigenvalue estimation problem (QEEP) and show numerically that off-grid compressed sensing is a strong candidate for solving the QEEP.
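To illustrate the compressed-sensing step in its simplest on-grid form (the paper works with off-grid compressed sensing, and in QPE the time samples would come from measurements such as Hadamard tests rather than being handed to us directly), here is orthogonal matching pursuit recovering a few dominant frequencies from a small number of time samples of $s(t) = \sum_j c_j e^{-i\lambda_j t}$. A toy sketch, with all names ours:

```python
import numpy as np

def omp_frequencies(times, samples, grid, sparsity):
    """Toy on-grid orthogonal matching pursuit for sparse frequency recovery.

    Model: samples[m] ~ sum_j c_j * exp(-1j * lam_j * times[m]) with few
    terms; grid is a vector of candidate frequencies. Returns the selected
    frequencies and their fitted complex amplitudes.
    """
    A = np.exp(-1j * np.outer(times, grid))          # sensing matrix
    A_unit = A / np.linalg.norm(A, axis=0)
    residual, support = samples.astype(complex), []
    for _ in range(sparsity):
        scores = np.abs(A_unit.conj().T @ residual)  # correlate with residual
        scores[support] = -1.0                       # never re-pick a column
        support.append(int(scores.argmax()))
        coef, *_ = np.linalg.lstsq(A[:, support], samples, rcond=None)
        residual = samples - A[:, support] @ coef
    return grid[support], coef
```

In the Heisenberg-limited regime one would draw the sample times randomly up to a maximal evolution time $T_{\max} \sim 1/\epsilon$, so that few samples at long times pin down the frequency to accuracy $\epsilon$.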
We consider leader election in clique networks, where $n$ nodes are connected by point-to-point communication links. For the synchronous clique under simultaneous wake-up, i.e., where all nodes start executing the algorithm in round $1$, we show a tradeoff between the number of messages and the running time. More specifically, we show that any deterministic algorithm with a message complexity of $n f(n)$ requires $\Omega\left(\frac{\log n}{\log f(n)+1}\right)$ rounds, for $f(n) = \Omega(\log n)$. Our result holds even if the node IDs are chosen from a relatively small set of size $\Theta(n\log n)$, as we are able to avoid using Ramsey's theorem. We also give an upper bound that improves over the previously best tradeoff. Our second contribution for the synchronous clique under simultaneous wake-up is to show that $\Omega(n\log n)$ is in fact a lower bound on the message complexity that holds for any deterministic algorithm with a termination time of $T(n)$. We complement this result by giving a simple deterministic algorithm that achieves leader election in sublinear time while sending only $o(n\log n)$ messages, if the ID space is of at most linear size. We also show that Las Vegas algorithms (that never fail) require $\Theta(n)$ messages. For the synchronous clique under adversarial wake-up, we show that $\Omega(n^{3/2})$ is a tight lower bound for randomized $2$-round algorithms. Finally, we turn our attention to the asynchronous clique: assuming adversarial wake-up, we give a randomized algorithm that achieves a message complexity of $O(n^{1 + 1/k})$ and an asynchronous time complexity of $k+8$. For simultaneous wake-up, we translate the deterministic tradeoff algorithm of Afek and Gafni to the asynchronous model, thus partially answering an open problem they pose.
This study demonstrates the existence of a testable condition for the identification of the causal effect of a treatment on an outcome in observational data, which relies on two sets of variables: observed covariates to be controlled for and a suspected instrument. Under a causal structure commonly found in empirical applications, the testable conditional independence of the suspected instrument and the outcome, given the treatment and the covariates, has two implications. First, the instrument is valid, i.e., it does not directly affect the outcome (other than through the treatment) and is unconfounded conditional on the covariates. Second, the treatment is unconfounded conditional on the covariates, so that the treatment effect is identified. We suggest tests of this conditional independence based on machine learning methods that account for covariates in a data-driven way, and we investigate their asymptotic behavior and finite-sample performance in a simulation study. We also apply our testing approach to evaluating the impact of fertility on female labor supply, using the sibling sex ratio of the first two children as the suspected instrument; by and large, the results point to a violation of our testable implication for the moderate set of socio-economic covariates considered.
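One concrete instantiation of such an ML-based test of $Z \perp Y \mid (D, X)$, in the spirit of the generalized covariance measure of Shah and Peters (2020) rather than necessarily the paper's exact statistic: regress both the suspected instrument and the outcome on the treatment and covariates, then test whether the product of the residuals has mean zero. A sketch (in practice one would add sample splitting or cross-fitting to avoid overfitting bias):

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor

def gcm_style_test(z, y, d, X, seed=0):
    """Residual-covariance test of Z _||_ Y | (D, X).

    Fits E[Z | D, X] and E[Y | D, X] with random forests and tests whether
    the product of the residuals has mean zero; under conditional
    independence (and regularity conditions) the statistic is roughly
    standard normal. In-sample residuals are used here for brevity; in
    practice one would cross-fit.
    """
    W = np.column_stack([d, X])
    rz = z - RandomForestRegressor(random_state=seed).fit(W, z).predict(W)
    ry = y - RandomForestRegressor(random_state=seed).fit(W, y).predict(W)
    prod = rz * ry
    stat = np.sqrt(len(prod)) * prod.mean() / prod.std()
    return stat, 2 * stats.norm.sf(abs(stat))    # statistic, two-sided p-value
```

A small p-value rejects the conditional independence and hence casts doubt on joint validity of the instrument and unconfoundedness of the treatment.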
We consider the problem of estimating the causal effect of a treatment on an outcome in linear structural causal models (SCMs) with latent confounders when we have access to a single proxy variable. Several methods (such as the difference-in-differences (DiD) estimator and negative outcome control) have been proposed for this setting in the literature. However, these approaches require either restrictive assumptions on the data-generating model or access to at least two proxy variables. We propose a method to estimate the causal effect using cross moments between the treatment, the outcome, and the proxy variable. In particular, we show that the causal effect can be identified with simple arithmetic operations on the cross moments if the latent confounder in the linear SCM is non-Gaussian. In this setting, the DiD estimator provides an unbiased estimate only in the special case where the latent confounder has exactly the same direct causal effects on the outcomes in the pre-treatment and post-treatment phases. This corresponds to the common trend assumption in DiD, which we effectively relax. Additionally, we provide an impossibility result showing that the causal effect cannot be identified if the observational distribution over the treatment, the outcome, and the proxy is jointly Gaussian. Our experiments on both synthetic and real-world datasets showcase the effectiveness of the proposed approach in estimating the causal effect.
In spite of the large literature on reinforcement learning (RL) algorithms for partially observable Markov decision processes (POMDPs), a complete theoretical understanding is still lacking. In a partially observable setting, the history of data available to the agent increases over time, so most practical algorithms either truncate the history to a finite window or compress it using a recurrent neural network, leading to an agent state that is non-Markovian. In this paper, it is shown that, in spite of the lack of the Markov property, recurrent Q-learning (RQL) converges in the tabular setting. Moreover, it is shown that the quality of the converged limit depends on the quality of the representation, which is quantified in terms of what is known as an approximate information state (AIS). Based on this characterization of the approximation error, a variant of RQL with AIS losses is presented. This variant performs better than a strong baseline for RQL that does not use AIS losses. It is demonstrated that there is a strong correlation between the performance of RQL over time and the loss associated with the AIS representation.
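A minimal tabular sketch of the RQL setup: the agent state is updated by a fixed recurrent map $z_{t+1} = \phi(z_t, a_t, o_{t+1})$ (e.g., a hash of a finite observation window) and is generally non-Markovian, yet the Q-table over agent states is updated exactly as in ordinary Q-learning. The environment interface is an assumption of ours, and the AIS losses are omitted:

```python
import numpy as np

def tabular_rql(env, phi, n_states, n_actions, episodes=500,
                alpha=0.1, gamma=0.99, eps=0.1, seed=0):
    """Tabular recurrent Q-learning with a fixed agent-state update phi.

    phi(z, a, o) -> z' compresses the history (e.g. a hashed window of
    recent observations) into {0, ..., n_states - 1}; the resulting agent
    state is generally non-Markovian. env is assumed to expose
    reset() -> obs and step(a) -> (obs, reward, done). AIS losses are
    not included in this sketch.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        obs, done = env.reset(), False
        z = phi(0, 0, obs)                           # initial agent state
        while not done:
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[z].argmax())
            obs, r, done = env.step(a)
            z_next = phi(z, a, obs)
            target = r + (0.0 if done else gamma * Q[z_next].max())
            Q[z, a] += alpha * (target - Q[z, a])    # standard Q-learning update
            z = z_next
    return Q
```

The convergence result says the iterates settle despite the non-Markovian state; how good the limit is depends on how well $\phi$ approximates an information state.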
Let $G=(V,E)$ be an $n$-vertex connected graph of maximum degree $\Delta$. Given access to $V$ and an oracle that, given two vertices $u,v\in V$, returns the shortest path distance between $u$ and $v$, how many queries are needed to reconstruct $E$? We give a simple deterministic algorithm that reconstructs trees using $\Delta n\log_\Delta n+(\Delta+2)n$ distance queries and show that even randomised algorithms need at least $\frac1{100} \Delta n\log_\Delta n$ queries in expectation. The best previous lower bound was an information-theoretic bound of $\Omega(n\log n/\log \log n)$. Our lower bound also extends to related query models, including distance queries for phylogenetic trees, membership queries for learning partitions, and path queries in directed trees. We extend our deterministic algorithm to reconstruct graphs without induced cycles of length at least $k$ using $O_{\Delta,k}(n\log n)$ queries, which covers various graph classes of interest such as chordal graphs, permutation graphs, and AT-free graphs. Since the previously best known randomised algorithm for chordal graphs uses $O_{\Delta}(n\log^2 n)$ queries in expectation, we both get rid of the randomness and obtain the optimal dependence on $n$ for chordal graphs and various other graph classes. Finally, we build on an algorithm of Kannan, Mathieu, and Zhou [ICALP, 2015] to give a randomised algorithm for reconstructing graphs of treelength $k$ using $O_{\Delta,k}(n\log^2n)$ queries in expectation.
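For contrast with the query counts above, here is the naive baseline: root the tree anywhere, bucket vertices by depth (one query each), and find each vertex's parent by scanning the level above, for $O(n^2)$ queries in the worst case. The algorithm in the paper replaces that scan with a more careful search to get down to roughly $\Delta n \log_\Delta n$ queries. A sketch with a callable distance oracle, interface ours:

```python
def reconstruct_tree(vertices, dist):
    """Naive tree reconstruction from a distance oracle with O(n^2) queries.

    Roots the tree at vertices[0], buckets vertices by depth (one query
    each), then finds each vertex's parent by scanning the level above
    for the unique u with dist(u, v) == 1.
    """
    root = vertices[0]
    depth = {v: dist(root, v) for v in vertices}
    levels = {}
    for v in vertices:
        levels.setdefault(depth[v], []).append(v)
    edges = []
    for d in sorted(levels):
        if d == 0:
            continue
        for v in levels[d]:
            parent = next(u for u in levels[d - 1] if dist(u, v) == 1)
            edges.append((parent, v))
    return edges
```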