Quantum circuit compilation comprises many computationally hard reasoning tasks that nonetheless lie inside $\#\mathbf{P}$ and its decision counterpart $\mathbf{PP}$. The classical simulation of general quantum circuits is a core example. We show for the first time that strong simulation of universal quantum circuits can be efficiently tackled through weighted model counting by providing a linear encoding of Clifford+T circuits. To achieve this, we exploit the stabilizer formalism of Knill, Gottesman, and Aaronson and the fact that stabilizer states form a basis for density operators. With an open-source simulator implementation, we demonstrate empirically that model counting often outperforms state-of-the-art simulation techniques based on the ZX-calculus and decision diagrams. Our work paves the way to apply the existing array of powerful classical reasoning tools to realize efficient quantum circuit compilation, one of the obstacles on the road towards quantum supremacy.
Given some binary matrix $M$, suppose we are presented with the collection of its rows and columns in independent arbitrary orderings. From this information, are we able to recover the unique original orderings and matrix? We present an algorithm that identifies whether there is a unique ordering associated with a set of rows and columns, and outputs either the unique correct orderings for the rows and columns or the full collection of all valid orderings and valid matrices. We show that there is a constant $c > 0$ such that the algorithm terminates in $O(n^2)$ time with high probability and in expectation for random $n \times n$ binary matrices with i.i.d.\ Bernoulli$(p)$ entries $(m_{ij})_{i,j=1}^n$ such that $\frac{c\log^2(n)}{n(\log\log(n))^2} \leq p \leq \frac{1}{2}$.
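To make the reconstruction question concrete, here is a minimal brute-force Python sketch (far less efficient than the algorithm above, with illustrative names such as valid_assemblies): it enumerates row orderings and keeps those whose induced columns match the given column multiset, so an instance with more than one surviving assembly cannot be uniquely recovered.

    from itertools import permutations

    def valid_assemblies(rows, cols):
        """Row orderings of `rows` whose induced columns match `cols` as a multiset."""
        col_multiset = sorted(cols)
        return [perm for perm in permutations(rows)
                if sorted(zip(*perm)) == col_multiset]

    rows = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]  # rows, presented in arbitrary order
    cols = [(1, 1, 0), (0, 1, 1), (1, 0, 1)]  # columns, presented in arbitrary order
    print(len(valid_assemblies(rows, cols)))  # > 1: this instance is not uniquely recoverable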
We study pseudorandomness properties of permutations on $\{0,1\}^n$ computed by random circuits made from reversible $3$-bit gates (permutations on $\{0,1\}^3$). Our main result is that a random circuit of depth $n \cdot \tilde{O}(k^2)$, with each layer consisting of $\approx n/3$ random gates in a fixed nearest-neighbor architecture, yields almost $k$-wise independent permutations. The main technical component is showing that the Markov chain on $k$-tuples of $n$-bit strings induced by a single random $3$-bit nearest-neighbor gate has spectral gap at least $1/(n \cdot \tilde{O}(k))$. This improves on the original work of Gowers [Gowers96], who showed a gap of $1/\mathrm{poly}(n,k)$ for one random gate (with non-neighboring inputs), and on subsequent work [HMMR05, BH08] that improved the gap to $\Omega(1/(n^2 k))$ in the same setting. From the perspective of cryptography, our result can be seen as a particularly simple and practical block cipher construction that gives provable statistical security against attackers with access to $k$~input-output pairs after only a few rounds. We also show that the Luby--Rackoff construction of pseudorandom permutations from pseudorandom functions can be implemented with reversible circuits. From this, we make progress on the complexity of the Minimum Reversible Circuit Size Problem (MRCSP), showing that block ciphers of fixed polynomial size are computationally secure against arbitrary polynomial-time adversaries, assuming the existence of one-way functions (OWFs).
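As a concrete (and deliberately simplistic) illustration of the construction, the following Python sketch applies layers of uniformly random $3$-bit gates to neighboring wire triples in a brickwork pattern; the exact fixed nearest-neighbor architecture analyzed in the paper may differ, so treat the layering below as an assumption.

    import random

    def random_gate():
        """A uniformly random permutation of {0,1}^3, acting on the integers 0..7."""
        perm = list(range(8))
        random.shuffle(perm)
        return perm

    def apply_gate(bits, i, perm):
        """Apply a 3-bit gate to the neighboring positions i, i+1, i+2."""
        x = (bits[i] << 2) | (bits[i + 1] << 1) | bits[i + 2]
        y = perm[x]
        bits[i], bits[i + 1], bits[i + 2] = (y >> 2) & 1, (y >> 1) & 1, y & 1

    def run_random_circuit(bits, depth):
        """Apply `depth` layers of roughly n/3 random nearest-neighbor 3-bit gates."""
        n = len(bits)
        for layer in range(depth):
            offset = layer % 3  # shift the brickwork so every neighboring triple is eventually used
            for i in range(offset, n - 2, 3):
                apply_gate(bits, i, random_gate())
        return bits

    print(run_random_circuit([0] * 12 + [1] * 12, depth=50))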
We investigate the consequences of two Lip$(\gamma)$ functions, in the sense of Stein, being close throughout a subset of their domain. A particular consequence of our results is the following. Given $K_0 > \varepsilon > 0$ and $\gamma > \eta > 0$, there is a constant $\delta = \delta(\gamma,\eta,\varepsilon,K_0) > 0$ for which the following is true. Let $\Sigma \subset \mathbb{R}^d$ be closed and $f, h : \Sigma \to \mathbb{R}$ be Lip$(\gamma)$ functions whose Lip$(\gamma)$ norms are both bounded above by $K_0$. Suppose $B \subset \Sigma$ is closed and that $f$ and $h$ coincide throughout $B$. Then, over the set of points in $\Sigma$ whose distance to $B$ is at most $\delta$, the Lip$(\eta)$ norm of the difference $f-h$ is bounded above by $\varepsilon$. More generally, we establish that this phenomenon remains valid in a less restrictive Banach space setting under the weaker hypothesis that the two Lip$(\gamma)$ functions $f$ and $h$ are only close in a pointwise sense throughout the closed subset $B$. We require only that the subset $\Sigma$ be closed; in particular, the case that $\Sigma$ is finite is covered by our results. The restriction that $\eta < \gamma$ is sharp in the sense that our result is false for $\eta := \gamma$.
Consider a connected graph $G$ and let $T$ be a spanning tree of $G$. Every edge $e \in G-T$ induces a unique cycle in $T \cup \{e\}$. The intersection of two distinct such cycles is the set of edges of $T$ that belong to both cycles. We consider the problem of finding a spanning tree for which the number of pairs of such cycles with non-empty intersection is as small as possible.
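A short Python sketch of the objective, assuming the networkx library: the fundamental cycle of a non-tree edge $(u,v)$ uses exactly the tree edges on the $u$-$v$ path in $T$, so the quantity to be minimized is the number of pairs of non-tree edges whose tree paths share an edge.

    from itertools import combinations
    import networkx as nx

    def nonempty_intersections(G, T):
        """Number of pairs of fundamental cycles (w.r.t. spanning tree T) sharing a tree edge."""
        def tree_path_edges(u, v):
            path = nx.shortest_path(T, u, v)
            return {frozenset(e) for e in zip(path, path[1:])}

        non_tree = [e for e in G.edges if not T.has_edge(*e)]
        cycles = [tree_path_edges(u, v) for u, v in non_tree]
        return sum(1 for p, q in combinations(cycles, 2) if p & q)

    G = nx.complete_graph(4)
    T = nx.minimum_spanning_tree(G)  # any spanning tree; the problem asks for one minimizing the count
    print(nonempty_intersections(G, T))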
Given a simple weighted directed graph $G = (V, E, \omega)$ on $n$ vertices as well as two designated terminals $s, t\in V$, our goal is to compute, for every pair of presumed failed edges $f_1, f_2\in E$, the shortest path from $s$ to $t$ avoiding both $f_1$ and $f_2$. This is a natural generalization of the classical replacement paths problem, which considers only single edge failures. This dual failure replacement paths problem was recently studied by Vassilevska Williams, Woldeghebriel and Xu [FOCS 2022], who designed a cubic time algorithm for general weighted digraphs which is conditionally optimal; in the same paper, for unweighted graphs where $\omega \equiv 1$, the authors presented an algebraic algorithm with runtime $\tilde{O}(n^{2.9146})$, as well as a conditional lower bound of $n^{8/3-o(1)}$ against combinatorial algorithms. However, their work left open whether fast matrix multiplication is necessary for a subcubic runtime in unweighted digraphs. As our primary result, we present the first truly subcubic combinatorial algorithm for dual failure replacement paths in unweighted digraphs; its runtime is $\tilde{O}(n^{3-1/18})$. We also study algebraic algorithms for digraphs with small integer edge weights from $\{-M, -M+1, \ldots, M-1, M\}$. As our secondary result, we obtain a runtime of $\tilde{O}(Mn^{2.8716})$, which is faster than the previous bound of $\tilde{O}(M^{2/3}n^{2.9144} + Mn^{2.8716})$ from [Vassilevska Williams, Woldeghebriel and Xu, 2022].
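For reference, a naive baseline (a Python sketch assuming networkx and nonnegative edge weights) simply reruns a shortest-path computation with each pair of edges removed; it is useful only for checking correctness on small instances and is nowhere near the runtimes discussed above.

    from itertools import combinations
    import networkx as nx

    def dual_failure_distances(G, s, t):
        """s-t distance in G after removing each pair of presumed failed edges."""
        answers = {}
        for f1, f2 in combinations(list(G.edges), 2):
            H = G.copy()
            H.remove_edges_from([f1, f2])
            try:
                answers[(f1, f2)] = nx.shortest_path_length(H, s, t, weight="weight")
            except nx.NetworkXNoPath:
                answers[(f1, f2)] = float("inf")
        return answers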
An $(m,n,R)$-de Bruijn covering array (dBCA) is a doubly periodic $M \times N$ array over an alphabet of size $q$ such that the set of all its $m \times n$ windows forms a covering code with radius $R$. An upper bound on the smallest area of an $(m,n,R)$-dBCA is provided using a probabilistic technique similar to the one used to bound the length of a de Bruijn covering sequence. A folding technique to construct a dBCA from a de Bruijn covering sequence or a de Bruijn covering sequences code is presented. Several new constructions that yield shorter de Bruijn covering sequences and $(m,n,R)$-dBCAs with smaller areas are also provided. These constructions are mainly based on sequences derived from cyclic codes, self-dual sequences, primitive polynomials, an interleaving technique, folding, and mutual shifts of sequences with the same covering radius. Finally, constructions of de Bruijn covering sequences codes are also discussed.
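The defining property is easy to state operationally; the following Python sketch is a brute-force checker (feasible only when $q^{mn}$ is tiny): every $m \times n$ word over the alphabet must lie within Hamming distance $R$ of some $m \times n$ window of the doubly periodic array.

    from itertools import product

    def is_dbca(A, m, n, R, q):
        """Brute-force test of the (m, n, R)-dBCA property for a doubly periodic array A."""
        M, N = len(A), len(A[0])
        windows = {tuple(A[(i + r) % M][(j + c) % N] for r in range(m) for c in range(n))
                   for i in range(M) for j in range(N)}
        return all(any(sum(a != b for a, b in zip(word, w)) <= R for w in windows)
                   for word in product(range(q), repeat=m * n))

    print(is_dbca([[0, 1], [1, 0]], m=1, n=1, R=0, q=2))  # True: both symbols occur as 1x1 windows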
We give a non-adaptive algorithm that makes $2^{\tilde{O}(\sqrt{k\log(1/(\varepsilon_2 - \varepsilon_1))})}$ queries to a Boolean function $f:\{\pm 1\}^n \rightarrow \{\pm 1\}$ and distinguishes between $f$ being $\varepsilon_1$-close to some $k$-junta and $\varepsilon_2$-far from every $k$-junta. At the heart of our algorithm is a local mean estimation procedure for Boolean functions that may be of independent interest. We complement our upper bound with a matching lower bound, improving a recent lower bound obtained by Chen et al. We thus obtain the first tight bounds for a natural property of Boolean functions in the tolerant testing model.
Modern compression methods can summarize a target distribution $\mathbb{P}$ more succinctly than i.i.d. sampling but require access to a low-bias input sequence, such as a Markov chain converging quickly to $\mathbb{P}$. We introduce a new suite of compression methods suitable for biased input sequences. Given $n$ points targeting the wrong distribution and quadratic time, Stein Kernel Thinning (SKT) returns $\sqrt{n}$ equal-weighted points with $\widetilde{O}(n^{-1/2})$ maximum mean discrepancy (MMD) to $\mathbb{P}$. For larger-scale compression tasks, Low-rank SKT achieves the same feat in sub-quadratic time using an adaptive low-rank debiasing procedure that may be of independent interest. For downstream tasks that support simplex or constant-preserving weights, Stein Recombination and Stein Cholesky achieve even greater parsimony, matching the guarantees of SKT with as few as $\operatorname{poly-log}(n)$ weighted points. Underlying these advances are new guarantees for the quality of simplex-weighted coresets, the spectral decay of kernel matrices, and the covering numbers of Stein kernel Hilbert spaces. In our experiments, our techniques provide succinct and accurate posterior summaries while overcoming biases due to burn-in, approximate Markov chain Monte Carlo, and tempering.
Given an increasing sequence of integers $x_1,\ldots,x_n$ from a universe $\{0,\ldots,u-1\}$, the monotone minimal perfect hash function (MMPHF) for this sequence is a data structure that answers the following rank queries: $\mathrm{rank}(x) = i$ if $x = x_i$, for $i\in \{1,\ldots,n\}$, and $\mathrm{rank}(x)$ is arbitrary otherwise. Assadi, Farach-Colton, and Kuszmaul (SODA 2023) recently proved a lower bound of $\Omega(n \min\{\log\log\log u, \log n\})$ bits of space for any MMPHF, provided $u \ge n 2^{2^{\sqrt{\log\log n}}}$; this bound is tight, since there is a data structure for MMPHF that attains this space bound (and answers queries in $O(\log u)$ time). In this paper, we close the remaining gap by proving that, for $u \ge (1+\epsilon)n$, where $\epsilon > 0$ is any constant, the tight lower bound is $\Omega(n \min\{\log\log\log \frac{u}{n}, \log n\})$, which is also attainable; we observe that, for all reasonable cases when $n < u < (1+\epsilon)n$, known facts imply tight bounds, which virtually settles the problem. Along the way, we substantially simplify the proof of Assadi et al., replacing a part of their heavy combinatorial machinery with trivial observations. However, an important part of the proof still remains complicated. This part of our paper repeats arguments of Assadi et al. and is not novel. Nevertheless, we include it for completeness, offering a somewhat different perspective on these arguments.
It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
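A minimal PyTorch sketch of the Outlier Exposure objective for multi-class classification, assuming a standard classifier model; the weight $\lambda = 0.5$ and the uniform-distribution target for auxiliary outliers are common choices rather than values fixed by the abstract.

    import torch.nn.functional as F

    def oe_loss(model, x_in, y_in, x_out, lam=0.5):
        """Cross-entropy on in-distribution data plus a term pushing outlier predictions toward uniform."""
        in_dist_loss = F.cross_entropy(model(x_in), y_in)
        # Cross-entropy to the uniform distribution equals the negative mean log-softmax over classes.
        uniform_loss = -F.log_softmax(model(x_out), dim=1).mean()
        return in_dist_loss + lam * uniform_loss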