In this work, we establish lower bounds on the size of Clifford circuits that measure a family of commuting Pauli operators. Our bounds depend on the interplay between a pair of graphs: the Tanner graph of the set of measured Pauli operators, and the connectivity graph which represents the qubit connections required to implement the circuit. For local-expander quantum codes, which are promising for low-overhead quantum error correction, we prove that any syndrome extraction circuit implemented with local Clifford gates in a 2D square patch of $N$ qubits has depth at least $\Omega(n/\sqrt{N})$, where $n$ is the code length. Then, we propose two families of quantum circuits saturating this bound. First, we construct 2D local syndrome extraction circuits for quantum LDPC codes with bounded depth using only $O(n^2)$ ancilla qubits. Second, we design a family of 2D local syndrome extraction circuits for hypergraph product codes using $O(n)$ ancilla qubits with depth $O(\sqrt{n})$. Finally, we use circuit noise simulations to compare the performance of a family of hypergraph product codes using this last family of 2D syndrome extraction circuits with a syndrome extraction circuit implemented with fully connected qubits. While there is a threshold of about $10^{-3}$ for the fully connected implementation, we observe no threshold for the 2D local implementation despite simulating error rates as low as $10^{-6}$. This suggests that quantum LDPC codes are impractical with 2D local quantum hardware. We believe that our proof technique is of independent interest and could find other applications. Our bounds on circuit sizes are derived from two ingredients: a lower bound on the amount of correlation that must be created between two subsets of the circuit's qubits, and an upper bound on the amount of correlation introduced by each circuit gate; together, these yield a lower bound on the circuit size.
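To make the final counting argument concrete (a schematic sketch in our own notation; $C_{\min}$ and $c_{\max}$ do not appear in the abstract): if at least $C_{\min}$ units of correlation must be created between the two subsets of qubits, and each gate can introduce at most $c_{\max}$ units, then any implementing circuit satisfies
\[
\#\{\text{gates}\} \;\ge\; \frac{C_{\min}}{c_{\max}},
\]
and dividing by the maximum number of gates that can act in parallel in one layer turns this size bound into a depth bound of the form $\Omega(n/\sqrt{N})$.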
Existing quantum compilers focus on mapping a logical quantum circuit to a quantum device and its native quantum gates. Only simple circuit identities are used to optimize the quantum circuit during the compilation process. This approach misses more complex circuit identities, which could be used to optimize the quantum circuit further. We propose Quanto, the first quantum optimizer that automatically generates circuit identities. Quanto takes as input a gate set and generates provably correct circuit identities for that gate set. Quanto's automatic generation of circuit identities covers single-qubit and two-qubit gates, which leads to a new database of circuit identities, some of which are, to the best of our knowledge, novel. In addition to generating new circuit identities, Quanto's optimizer applies them to quantum circuits and finds optimized quantum circuits that have not been discovered by other quantum compilers, including IBM's Qiskit and Cambridge Quantum Computing's Tket. Quanto's database of circuit identities could be applied to improve existing quantum compilers, and Quanto can be used to generate identity databases for new gate sets.
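To make the generate-and-verify idea concrete, here is a minimal sketch in the spirit of the approach (not Quanto's actual implementation; the single-qubit gate set {H, S, T} and the depth cutoff are our illustrative assumptions):

import itertools
import numpy as np

# Illustrative single-qubit gate set (an assumption, not Quanto's).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])
GATES = {"H": H, "S": S, "T": T}

def unitary_of(seq):
    # Multiply the gates of the sequence into a single unitary.
    U = np.eye(2, dtype=complex)
    for name in seq:
        U = GATES[name] @ U
    return U

def equal_up_to_phase(U, V, tol=1e-9):
    # U = e^{i phi} V  iff  |tr(U^dagger V)| equals the dimension.
    return abs(abs(np.trace(U.conj().T @ V)) - 2) < tol

# Group all short sequences by the unitary they implement; any two
# sequences in the same group form a provably correct circuit identity.
reps = {}
for depth in range(1, 5):
    for seq in itertools.product(GATES, repeat=depth):
        U = unitary_of(seq)
        for known, V in reps.items():
            if equal_up_to_phase(U, V):
                print(" ".join(seq), "==", known)  # found an identity
                break
        else:
            reps[" ".join(seq)] = U

The same loop applies to two-qubit gate sets with $4 \times 4$ unitaries; per the abstract, Quanto additionally stores the generated identities in a reusable database.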
We construct reversible Boolean circuits efficiently simulating reversible Turing machines. Both the circuits and the simulation proof are rather simple. Then we give a fairly straightforward generalization of the circuits and the simulation proof to the quantum case.
Recently, minimal linear codes have been extensively studied due to their applications in secret sharing schemes, two-party computation, and so on. Constructing minimal linear codes violating the Ashikhmin-Barg condition, and then determining their weight distributions, are problems of ongoing interest in coding theory and cryptography. In this paper, based on exponential sums, Krawtchouk polynomials, and a function defined on special sets of vectors in $\mathbb{F}_3^m$, we present two new classes of minimal ternary linear codes violating the Ashikhmin-Barg condition, and then determine their complete weight enumerators. In particular, the minimum distance of one of these classes of codes is better than that of the codes constructed in \cite{Heng-Ding-Zhou}.
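For context (a standard fact, not restated in the abstract): the Ashikhmin-Barg condition says that a linear code over $\mathbb{F}_q$ is minimal whenever
\[
\frac{w_{\min}}{w_{\max}} > \frac{q-1}{q},
\]
where $w_{\min}$ and $w_{\max}$ denote the minimum and maximum nonzero Hamming weights of the code. For the ternary codes considered here ($q=3$), violating the condition therefore means exhibiting minimal codes with $w_{\min}/w_{\max} \le 2/3$.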
We study the mechanism design problem of selling $k$ items to unit-demand buyers with private valuations for the items. A buyer either participates directly in the auction or is represented by an intermediary, who represents a subset of buyers. Our goal is to design robust mechanisms that are independent of the demand structure (i.e., how the buyers are partitioned across intermediaries) and perform well under a wide variety of possible contracts between intermediaries and buyers. We first study the case of $k$ identical items where each buyer draws its private valuation for an item i.i.d. from a known $\lambda$-regular distribution. We construct a robust mechanism that, independent of the demand structure and under certain conditions on the contracts between intermediaries and buyers, obtains a constant factor of the revenue that the mechanism designer could obtain had she known the buyers' valuations. In other words, our mechanism's expected revenue achieves a constant factor of the optimal welfare, regardless of the demand structure. Our mechanism is a simple posted-price mechanism that sets a take-it-or-leave-it per-item price that depends on $k$ and the total number of buyers, but not on the demand structure or the downstream contracts. Next, we generalize our result to the case when the items are not identical. We assume that the item valuations are separable. For this case, we design a mechanism that obtains at least a constant fraction of the optimal welfare by using a menu of posted prices. This mechanism is also independent of the demand structure, but makes a stronger assumption on the contracts between intermediaries and buyers, namely that each intermediary prefers outcomes with a higher sum of utilities of the subset of buyers it represents.
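As a minimal illustration of the mechanism format for identical items (a hedged sketch: the abstract does not spell out the price rule, so the price is left as a free parameter depending on $k$ and the number of buyers):

def posted_price_auction(valuations, k, price):
    # Sell up to k identical items at a take-it-or-leave-it per-item price.
    # valuations: one private value per buyer (directly participating or
    # represented by an intermediary); the mechanism never inspects the
    # demand structure or the downstream contracts.
    winners = []
    for buyer, value in enumerate(valuations):
        if value >= price and len(winners) < k:
            winners.append(buyer)  # buyer accepts the posted price
    return winners, price * len(winners)

# Example: 5 buyers, 2 items, per-item price 0.7.
print(posted_price_auction([0.9, 0.3, 0.8, 0.75, 0.1], k=2, price=0.7))
# -> ([0, 2], 1.4)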
We propose a method for finding approximate compilations of quantum circuits, based on techniques from policy gradient reinforcement learning. The choice of a stochastic policy allows us to rephrase the optimization problem in terms of probability distributions rather than variational parameters. This implies that searching for the optimal configuration is done by optimizing over the distribution parameters rather than over the circuit's free angles. The upshot is that we can always compute a gradient, provided that the policy is differentiable. We show numerically that this approach is more competitive than gradient-free methods, even in the presence of depolarizing noise, and argue analytically why this is the case. Another interesting feature of this approach to variational compilation is that it does not need a separate register and long-range interactions to estimate the end-point fidelity. We expect these techniques to be relevant for training variational circuits in other contexts.
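The following is a minimal numerical sketch of the idea under simplifying assumptions of ours: a Gaussian policy over three circuit angles with fixed width, and a toy stand-in for the compilation fidelity.

import numpy as np

rng = np.random.default_rng(0)

def fidelity(angles):
    # Toy stand-in for the (possibly noisy) compilation objective;
    # it peaks when every angle equals pi/2.
    return np.exp(-np.sum((angles - np.pi / 2) ** 2))

mu = np.zeros(3)      # distribution parameters we optimize
sigma = np.ones(3)    # policy width, held fixed for brevity
lr, batch = 0.1, 64

for step in range(200):
    samples = rng.normal(mu, sigma, size=(batch, 3))
    rewards = np.array([fidelity(a) for a in samples])
    baseline = rewards.mean()  # simple variance reduction
    # REINFORCE: score function of the Gaussian policy, weighted by the
    # centered reward; the gradient exists because the policy is
    # differentiable, even though `fidelity` is treated as a black box.
    score = (samples - mu) / sigma**2
    mu += lr * ((rewards - baseline)[:, None] * score).mean(axis=0)

print("learned angles:", mu)  # approaches pi/2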
We study the problem of efficiently computing on encoded data. More specifically, we study the question of low-bandwidth computation of functions $F:\mathbb{F}^k \to \mathbb{F}$ of some data $x \in \mathbb{F}^k$, given access to an encoding $c \in \mathbb{F}^n$ of $x$ under an error correcting code. In our model -- relevant in distributed storage, distributed computation and secret sharing -- each symbol of $c$ is held by a different party, and we aim to minimize the total amount of information downloaded from each party in order to compute $F(x)$. Special cases of this problem have arisen in several domains, and we believe that it is fruitful to study this problem in generality. Our main result is a low-bandwidth scheme to compute linear functions for Reed-Solomon codes, even in the presence of erasures. More precisely, let $\epsilon > 0$ and let $\mathcal{C}: \mathbb{F}^k \to \mathbb{F}^n$ be a full-length Reed-Solomon code of rate $1 - \epsilon$ over a field $\mathbb{F}$ with constant characteristic. For any $\gamma \in [0, \epsilon)$, our scheme can compute any linear function $F(x)$ given access to any $(1 - \gamma)$-fraction of the symbols of $\mathcal{C}(x)$, with download bandwidth $O(n/(\epsilon - \gamma))$ bits. In contrast, the naive scheme that involves reconstructing the data $x$ and then computing $F(x)$ uses $\Theta(n \log n)$ bits. Our scheme has applications in distributed storage, coded computation, and homomorphic secret sharing.
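For intuition on the naive baseline (a back-of-the-envelope sketch using only facts stated above): a full-length code requires $|\mathbb{F}| \ge n$, so each downloaded symbol costs $\log |\mathbb{F}| = \Omega(\log n)$ bits, and reconstructing $x$ requires $k = (1-\epsilon)n$ symbols:
\[
\underbrace{(1-\epsilon)\,n \cdot \log|\mathbb{F}|}_{\text{naive: recover } x \text{, then compute } F(x)} = \Theta(n \log n) \text{ bits,} \quad\text{versus}\quad O\!\left(\frac{n}{\epsilon-\gamma}\right) \text{ bits,}
\]
i.e., the scheme downloads an amortized $O(1/(\epsilon-\gamma))$ bits per party instead of a full field symbol.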
Motivated by applications to topological data analysis, we give an efficient algorithm for computing a (minimal) presentation of a bigraded $K[x,y]$-module $M$, where $K$ is a field. The algorithm takes as input a short chain complex of free modules $X\xrightarrow{f} Y \xrightarrow{g} Z$ such that $M\cong \ker{g}/\mathrm{im}{f}$. It runs in time $O(|X|^3+|Y|^3+|Z|^3)$ and requires $O(|X|^2+|Y|^2+|Z|^2)$ memory, where $|\cdot |$ denotes the rank. Given the presentation computed by our algorithm, the bigraded Betti numbers of $M$ are readily computed. Our approach is based on a simple matrix reduction algorithm, slight variants of which compute kernels of morphisms between free modules, minimal generating sets, and Gr\"obner bases. Our algorithm for computing minimal presentations has been implemented in RIVET, a software tool for the visualization and analysis of two-parameter persistent homology. In experiments on topological data analysis problems, our implementation outperforms the standard computational commutative algebra packages Singular and Macaulay2 by a wide margin.
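For intuition, here is a minimal sketch of the standard column-reduction step that algorithms of this kind build on, shown in its plain one-parameter form over $\mathbb{F}_2$ with sparse columns as Python sets (the paper's variant additionally tracks the bigrading):

def low(col):
    # Row index of the lowest nonzero entry of a column (-1 if zero).
    return max(col) if col else -1

def reduce_columns(columns):
    # columns: list of sets of row indices, i.e., sparse F_2 columns.
    pivot_owner = {}  # lowest-one row index -> index of owning column
    for j in range(len(columns)):
        while columns[j] and low(columns[j]) in pivot_owner:
            # XOR in the earlier column that owns the same pivot row.
            columns[j] ^= columns[pivot_owner[low(columns[j])]]
        if columns[j]:
            pivot_owner[low(columns[j])] = j
    return columns

# Example: two columns share their lowest one in row 2.
print(reduce_columns([{0, 2}, {1, 2}]))  # -> [{0, 2}, {0, 1}]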
We consider the problem of untangling a given (non-planar) straight-line circular drawing $\delta_G$ of an outerplanar graph $G=(V, E)$ into a planar straight-line circular drawing by shifting a minimum number of vertices to a new position on the circle. For an outerplanar graph $G$, such a crossing-free circular drawing always exists, and we define the circular shifting number shift$(\delta_G)$ as the minimum number of vertices that must be shifted in order to resolve all crossings of $\delta_G$. We show that the problem Circular Untangling, asking whether shift$(\delta_G) \le K$ for a given integer $K$, is NP-complete. For $n$-vertex outerplanar graphs, we obtain a tight upper bound of shift$(\delta_G) \le n - \lfloor\sqrt{n-2}\rfloor - 2$. Based on these results, we study Circular Untangling for almost-planar circular drawings, in which a single edge is involved in all the crossings. In this case, we provide a tight upper bound of shift$(\delta_G) \le \lfloor \frac{n}{2} \rfloor - 1$ and present a constructive polynomial-time algorithm to compute the circular shifting number of almost-planar drawings.
We introduce the following variant of the VC-dimension. Given $S \subseteq \{0, 1\}^n$ and a positive integer $d$, we define $\mathbb{U}_d(S)$ to be the size of the largest subset $I \subseteq [n]$ such that the projection of $S$ on every subset of $I$ of size $d$ is the $d$-dimensional cube. We show that determining the largest cardinality of a set with a given $\mathbb{U}_d$ dimension is equivalent to a Tur\'an-type problem related to the total number of cliques in a $d$-uniform hypergraph. This allows us to beat the Sauer--Shelah lemma for this notion of dimension. We use this to obtain several results on $\Sigma_3^k$-circuits, i.e., depth-$3$ circuits with top gate OR and bottom fan-in at most $k$:
* A tight relationship between the number of satisfying assignments of a $2$-CNF and the dimension of the largest projection accepted by it, thus improving Paturi, Saks, and Zane (Comput. Complex. '00).
* Improved $\Sigma_3^3$-circuit lower bounds for affine dispersers for sublinear dimension. Moreover, we pose a purely hypergraph-theoretic conjecture under which we get further improvement.
* Progress towards settling the $\Sigma_3^2$ complexity of the inner product function and all degree-$2$ polynomials over $\mathbb{F}_2$ in general. The question of determining the $\Sigma_3^3$ complexity of IP was recently posed by Golovnev, Kulikov, and Williams (ITCS'21).
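A toy example of the definition (ours, for illustration): let $S = \{000, 011, 101, 110\} \subseteq \{0,1\}^3$ be the even-weight strings. The projection of $S$ onto any two coordinates is all of $\{0,1\}^2$, so $\mathbb{U}_2(S) = 3$, even though $S$ contains only $4$ of the $8$ points of the cube; determining how large $S$ can be for a given $\mathbb{U}_d$ dimension is precisely the Tur\'an-type question above.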
Minwise hashing (MinHash) is a classical method for efficiently estimating the Jaccard similarity of massive binary (0/1) data. To generate $K$ hash values for each data vector, the standard theory of MinHash requires $K$ independent permutations. Interestingly, recent work on "circulant MinHash" (C-MinHash) has shown that merely two permutations are needed. The first permutation breaks the structure of the data and the second permutation is re-used $K$ times in a circulant manner. Surprisingly, the estimation variance of C-MinHash is proved to be strictly smaller than that of the original MinHash. More recent work further demonstrates that, in practice, only one permutation is needed. Note that C-MinHash is different from the well-known work on "One Permutation Hashing (OPH)" published in NIPS'12. OPH and its variants using different "densification" schemes are popular alternatives to the standard MinHash. The densification step is necessary in order to deal with empty bins, which occur in One Permutation Hashing. In this paper, we propose to incorporate the essential ideas of C-MinHash to improve the accuracy of One Permutation Hashing. Specifically, we develop a new densification method for OPH, which achieves the smallest estimation variance among all existing densification schemes for OPH. Our proposed method is named C-OPH (Circulant OPH). After the initial permutation (which breaks the existing structure of the data), C-OPH only needs a "shorter" permutation of length $D/K$ (instead of $D$), where $D$ is the original data dimension and $K$ is the total number of bins in OPH. This short permutation is re-used across the $K$ bins in a circulant shifting manner. It can be shown that the estimation variance of the resulting Jaccard similarity estimator is strictly smaller than that of the existing (densified) OPH methods.
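The following sketch shows our reading of the C-OPH hashing step (hedged: the helper names and the exact convention for the per-bin hash value are ours, and the densification of empty bins is not shown):

import numpy as np

def c_oph(x, K, init_perm, short_perm):
    # x: binary vector of length D; K bins of length L = D // K.
    # init_perm: permutation of length D (breaks the data's structure).
    # short_perm: shared permutation of length L, circulantly shifted
    # per bin instead of drawing K independent permutations.
    D = len(x)
    L = D // K
    x = np.asarray(x)[np.asarray(init_perm)]
    hashes = []
    for b in range(K):
        bin_part = x[b * L:(b + 1) * L]
        permuted = bin_part[np.roll(short_perm, b)]  # circulant shift
        nz = np.flatnonzero(permuted)
        hashes.append(int(nz[0]) if nz.size else None)  # None = empty bin
    return hashes

rng = np.random.default_rng(0)
D, K = 12, 3
x = (rng.random(D) < 0.4).astype(int)
print(c_oph(x, K, rng.permutation(D), rng.permutation(D // K)))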