We prove that any $n$-qubit unitary transformation can be implemented (i) approximately in time $\tilde O\big(2^{n/2}\big)$ with query access to an appropriate classical oracle, and (ii) exactly by a circuit of depth $\tilde O\big(2^{n/2}\big)$ with one- and two-qubit gates and $2^{O(n)}$ ancillae. The proofs involve similar reductions to Grover search. The proof of (ii) also involves a linear-depth construction of arbitrary quantum states using one- and two-qubit gates (in fact, this can be improved to constant depth with the addition of fanout and generalized Toffoli gates), which may be of independent interest. We also prove a matching $\Omega\big(2^{n/2}\big)$ lower bound for (i) and (ii) for a certain class of implementations.
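Both reductions invoke Grover search as a black box. For background, here is a minimal dense simulation of that primitive (a NumPy sketch, not the paper's construction), finding a marked item among $N = 2^n$ candidates with roughly $(\pi/4)\sqrt{N}$ oracle calls:
\begin{verbatim}
import numpy as np

# Minimal dense simulation of the Grover-search primitive that both
# reductions rely on (illustrative background, not the paper's circuit).
def grover(n_qubits, marked):
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))         # uniform superposition
    for _ in range(int(np.pi / 4 * np.sqrt(N))):
        state[marked] *= -1                    # oracle: phase-flip marked item
        state = 2 * state.mean() - state       # diffusion: inversion about mean
    return int(np.argmax(np.abs(state) ** 2))  # most likely measurement outcome

assert grover(8, marked=137) == 137            # about (pi/4)*sqrt(256) queries
\end{verbatim}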
Given a rectangle $R$ with area $A$ and a set of areas $L=\{A_1,\ldots,A_n\}$ with $\sum_{i=1}^n A_i = A$, we consider the problem of partitioning $R$ into $n$ sub-regions $R_1,\ldots,R_n$ with areas $A_1,\ldots,A_n$ such that the total perimeter of all sub-regions is minimized. The goal is to create square-like sub-regions, which are often more desirable. We propose a divide-and-conquer algorithm for this problem that finds factor-$1.2$ approximate solutions in $\mathcal{O}(n\log n)$ time.
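For intuition, here is a guillotine-style skeleton of such a divide-and-conquer recursion (a sketch only; the actual algorithm chooses its splits more carefully to certify the $1.2$ factor): halve the area list, cut the rectangle across its longer side in proportion to the two half-sums, and recurse.
\begin{verbatim}
# Guillotine-style divide-and-conquer skeleton; `partition` is a
# hypothetical helper name, not the paper's algorithm.
def partition(x, y, w, h, areas):
    if len(areas) == 1:
        return [(x, y, w, h)]
    mid = len(areas) // 2
    left, right = areas[:mid], areas[mid:]
    f = sum(left) / (sum(left) + sum(right))   # fraction for the left half
    if w >= h:                                 # cut perpendicular to the longer side
        return (partition(x, y, w * f, h, left) +
                partition(x + w * f, y, w * (1 - f), h, right))
    return (partition(x, y, w, h * f, left) +
            partition(x, y + h * f, w, h * (1 - f), right))

rects = partition(0, 0, 4.0, 3.0, [3.0, 3.0, 2.0, 4.0])  # areas sum to 12 = 4*3
\end{verbatim}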
We provide a framework to prove convergence rates for discretizations of kinetic Langevin dynamics for $M$-$\nabla$Lipschitz, $m$-log-concave densities. Our approach provides convergence rates of $\mathcal{O}(m/M)$, with explicit stepsize restrictions that are of the same order as the stability threshold for Gaussian targets and are valid for a large interval of the friction parameter. We apply this methodology to various integration methods which are popular in the molecular dynamics and machine learning communities. Finally, we introduce the property ``$\gamma$-limit convergent'' (GLC) to characterise underdamped Langevin schemes that converge to overdamped dynamics in the high-friction limit and have stepsize restrictions that are independent of the friction parameter; we show that this property is not generic by exhibiting methods from both the class and its complement.
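As a concrete example of the kind of integrator covered, here is one step of the BAOAB scheme, which is popular in molecular dynamics (a hedged sketch assuming unit mass and temperature; not the paper's analysis):
\begin{verbatim}
import numpy as np

# One step of the BAOAB discretization of kinetic Langevin dynamics
#   dX = V dt,   dV = -grad U(X) dt - gamma V dt + sqrt(2 gamma) dW.
def baoab_step(x, v, grad_U, h, gamma, rng):
    v = v - 0.5 * h * grad_U(x)                   # B: half kick
    x = x + 0.5 * h * v                           # A: half drift
    c = np.exp(-gamma * h)                        # O: exact OU update
    v = c * v + np.sqrt(1 - c * c) * rng.standard_normal(np.shape(x))
    x = x + 0.5 * h * v                           # A: half drift
    v = v - 0.5 * h * grad_U(x)                   # B: half kick
    return x, v

# Standard Gaussian target U(x) = |x|^2 / 2, i.e. m = M = 1.
rng = np.random.default_rng(0)
x, v = np.ones(3), np.zeros(3)
for _ in range(10000):
    x, v = baoab_step(x, v, lambda z: z, h=0.1, gamma=1.0, rng=rng)
\end{verbatim}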
\textsc{Edge Triangle Packing} and \textsc{Edge Triangle Covering} are dual problems that have been extensively studied in parameterized complexity. Given a graph $G$ and an integer $k$, \textsc{Edge Triangle Packing} asks whether $G$ contains at least $k$ edge-disjoint triangles, while \textsc{Edge Triangle Covering} asks whether there is a set of at most $k$ edges that intersects all triangles of $G$. Previous research has shown that \textsc{Edge Triangle Packing} admits a kernel of $(3+\epsilon)k$ vertices and \textsc{Edge Triangle Covering} a kernel of $6k$ vertices. In this paper, we show that both problems admit kernels of $3k$ vertices, improving all previous results. A significant contribution of our work is a novel discharging method for analyzing kernel size, which shows potential for the analysis of other kernel algorithms.
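For intuition, kernelization arguments for \textsc{Edge Triangle Packing} typically start from a maximal edge-disjoint packing; a naive greedy sketch of that object (illustrative only, not the $3k$-vertex kernel):
\begin{verbatim}
from itertools import combinations

# Naive maximal edge-disjoint triangle packing: the greedy object that
# kernelization arguments usually start from.
def greedy_triangle_packing(vertices, edges):
    free = {frozenset(e) for e in edges}          # edges not yet used
    packing = []
    for u, v, w in combinations(sorted(vertices), 3):
        tri = [frozenset((u, v)), frozenset((v, w)), frozenset((u, w))]
        if all(e in free for e in tri):
            packing.append((u, v, w))             # pack triangle uvw
            free.difference_update(tri)
    return packing

# K4 contains four triangles, but any two of them share an edge.
print(greedy_triangle_packing(range(4), combinations(range(4), 2)))
\end{verbatim}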
Originating in Girard's linear logic, Ehrhard and Regnier's Taylor expansion of $\lambda$-terms has been broadly used as a tool to approximate the terms of several variants of the $\lambda$-calculus. Many results arise from a Commutation theorem relating the normal form of the Taylor expansion of a term to its B\"ohm tree. This led us to consider extending the formalism to the infinitary $\lambda$-calculus, since the $\Lambda_{\infty}^{001}$ version of this calculus has B\"ohm trees as normal forms and seems the ideal framework in which to reformulate the Commutation theorem. We give a (co-)inductive presentation of $\Lambda_{\infty}^{001}$. We define a Taylor expansion on this calculus and show that infinitary $\beta$-reduction can be simulated through it. The target language is the usual resource calculus; in particular, resource reduction remains finite, confluent and terminating. Finally, we establish the generalised Commutation theorem and use our results to provide simple proofs of some normalisation and confluence properties of the infinitary $\lambda$-calculus.
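For orientation, in the qualitative (coefficient-free) presentation, the Taylor expansion $\mathcal{T}$ maps a finite $\lambda$-term to the set of its resource-term approximants via the standard clauses below, where $[t_1,\dots,t_n]$ denotes a finite multiset of resource terms; the paper extends this map to the terms of $\Lambda_{\infty}^{001}$:
\[
\mathcal{T}(x) = \{x\}, \qquad
\mathcal{T}(\lambda x.M) = \{\lambda x.\,s \mid s \in \mathcal{T}(M)\}, \qquad
\mathcal{T}(M\,N) = \{\, s\,[t_1,\dots,t_n] \mid n \geq 0,\ s \in \mathcal{T}(M),\ t_1,\dots,t_n \in \mathcal{T}(N) \,\}.
\]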
We study the \textsc{$\alpha$-Fixed Cardinality Graph Partitioning ($\alpha$-FCGP)} problem, the generic local graph partitioning problem introduced by Bonnet et al. [Algorithmica 2015]. In this problem, we are given a graph $G$, two integers $k$ and $p$, and a real number $0\leq\alpha\leq 1$; the question is whether there is a set $S\subseteq V(G)$ of size $k$ whose coverage $cov_{\alpha}(S)$ is at least $p$ (or at most $p$ for the minimization version). The coverage function $cov_{\alpha}(\cdot)$ counts edges with exactly one endpoint in $S$ with weight $\alpha$ and edges with both endpoints in $S$ with weight $1 - \alpha$. $\alpha$-FCGP generalizes a number of fundamental graph problems, such as \textsc{Densest $k$-Subgraph}, \textsc{Max $k$-Vertex Cover}, and \textsc{Max $(k,n-k)$-Cut}. A natural question in the study of $\alpha$-FCGP is whether the algorithmic results known for its special cases, like \textsc{Max $k$-Vertex Cover}, can be extended to the general setting. One simple but powerful method for obtaining parameterized approximations [Manurangsi, SOSA 2019] and subexponential algorithms [Fomin et al., IPL 2011] for \textsc{Max $k$-Vertex Cover} is based on greedy vertex degree orderings. The main insight of our work is that greedy vertex degree orderings can be used to design fixed-parameter approximation schemes (FPT-AS) for every $\alpha > 0$, as well as subexponential-time algorithms on apex-minor-free graphs for the maximization version with $\alpha > 1/3$ and the minimization version with $\alpha < 1/3$.
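A minimal sketch of the two ingredients, with hypothetical helper names: $cov_{\alpha}$ charges boundary edges weight $\alpha$ and internal edges weight $1-\alpha$, and the greedy candidates are prefixes of a degree-sorted vertex ordering.
\begin{verbatim}
def cov_alpha(S, edges, alpha):
    # weight alpha per boundary edge, 1 - alpha per internal edge
    S = set(S)
    return sum(alpha if (u in S) != (v in S) else 1 - alpha
               for u, v in edges if u in S or v in S)

def greedy_prefix(vertices, edges, k):
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(vertices, key=lambda v: -deg[v])[:k]   # k highest degrees

edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
S = greedy_prefix(range(4), edges, k=2)
print(S, cov_alpha(S, edges, alpha=0.5))                 # [0, 1] 2.0
\end{verbatim}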
A cut-down de Bruijn sequence is a cyclic string of length $L$, where $1 \leq L \leq k^n$, such that every substring of length $n$ appears at most once. Etzion [Theor. Comp. Sci. 44 (1986)] gives an algorithm to construct binary cut-down de Bruijn sequences that requires $o(n)$ simple $n$-bit operations per symbol generated. In this paper, we simplify the algorithm and improve the running time to $\mathcal{O}(n)$ per symbol generated, using $\mathcal{O}(n)$ space. We then provide the first successor-rule approach for constructing a binary cut-down de Bruijn sequence by leveraging recent ranking algorithms for fixed-density Lyndon words. Finally, we develop an algorithm to generate cut-down de Bruijn sequences for $k>2$ that runs in $\mathcal{O}(n)$ time per symbol using $\mathcal{O}(n)$ space after some initialization. While our $k$-ary algorithm is based on our simplified version of Etzion's binary algorithm, a number of non-trivial adaptations are required to generalize to larger alphabets.
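For background, the classic FKM construction concatenates the Lyndon words over $\{0,\dots,k-1\}$ whose length divides $n$ to produce the full lexicographically least de Bruijn sequence; the cut-down problem must instead stop at an arbitrary length $L$. A standard sketch of the full construction:
\begin{verbatim}
# Classic FKM construction of the full de Bruijn sequence (background
# only; the paper's contribution is the cut-down, length-L variant).
def de_bruijn(k, n):
    a, seq = [0] * (n + 1), []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])        # emit one Lyndon word
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

s = de_bruijn(2, 4)   # length 16: every 4-bit window appears exactly once
\end{verbatim}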
A 2-packing set for an undirected graph $G=(V,E)$ is a subset $\mathcal{S} \subseteq V$ such that no two vertices $v_1,v_2 \in \mathcal{S}$ have a common neighbor. Finding a 2-packing set of maximum cardinality is an NP-hard problem. We develop a new approach to solve this problem on arbitrary graphs using its close relation to the independent set problem. Our algorithm, red2pack, combines new data reduction rules specific to the 2-packing set problem with a graph transformation. Our experiments show that we outperform the state-of-the-art for arbitrary graphs in terms of solution quality and compute solutions multiple orders of magnitude faster than previously possible. For example, we solve 63% of our graphs to optimality in less than a second, while the competing method for arbitrary graphs solves only 5% of the graphs in the data set to optimality even with a 10-hour time limit. Moreover, our approach can solve a wide range of large instances that had previously been unsolved.
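One natural reading of the graph transformation, sketched under that assumption (not necessarily red2pack's exact reduction): two vertices conflict exactly when they share a neighbor, so the 2-packing sets of $G$ are the independent sets of the conflict graph.
\begin{verbatim}
import networkx as nx

def conflict_graph(G):
    H = nx.Graph()
    H.add_nodes_from(G)
    for w in G:                          # every common neighbor w ...
        for u in G[w]:
            for v in G[w]:
                if u < v:
                    H.add_edge(u, v)     # ... forbids picking both u and v
    return H

G = nx.path_graph(5)                     # 0-1-2-3-4
H = conflict_graph(G)
# maximum independent set of H = maximum clique of its complement
print(max(nx.find_cliques(nx.complement(H)), key=len))
\end{verbatim}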
A code $C \colon \{0,1\}^k \to \{0,1\}^n$ is a $q$-locally decodable code ($q$-LDC) if one can recover any chosen bit $b_i$ of the message $b \in \{0,1\}^k$ with good confidence by randomly querying the encoding $x := C(b)$ on at most $q$ coordinates. Existing constructions of $2$-LDCs achieve $n = \exp(O(k))$, and lower bounds show that this is in fact tight. However, when $q = 3$, far less is known: the best constructions achieve $n = \exp(k^{o(1)})$, while the best known results only show a quadratic lower bound $n \geq \tilde{\Omega}(k^2)$ on the blocklength. In this paper, we prove a near-cubic lower bound of $n \geq \tilde{\Omega}(k^3)$ on the blocklength of $3$-query LDCs. This improves the best prior lower bound by a polynomial factor in $k$. Our proof relies on a new connection between LDCs and refuting constraint satisfaction problems with limited randomness. Our quantitative improvement builds on the new techniques for refuting semirandom instances of CSPs developed in [GKM22, HKM23] and, in particular, relies on bounding the spectral norm of appropriate Kikuchi matrices.
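For context, the tight $2$-query regime is witnessed by the Hadamard code, whose $n = 2^k$ coordinates are all the parities $\langle a, b\rangle$ and whose decoder reads just two positions, since $\langle a,b\rangle \oplus \langle a \oplus e_i, b\rangle = b_i$. A minimal noise-free sketch:
\begin{verbatim}
import random

def parity(a, b):                         # <a, b> over GF(2)
    return sum(b[i] for i in range(len(b)) if (a >> i) & 1) % 2

def hadamard_encode(b):                   # n = 2^k coordinates
    return [parity(a, b) for a in range(2 ** len(b))]

def decode_bit(x, k, i):
    a = random.randrange(2 ** k)          # uniformly random query position
    return x[a] ^ x[a ^ (1 << i)]         # two queries recover b_i

b = [1, 0, 1]
x = hadamard_encode(b)
assert all(decode_bit(x, 3, i) == b[i] for i in range(3))
\end{verbatim}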
The goal of the trace reconstruction problem is to recover a string $x\in\{0,1\}^n$ given many independent {\em traces} of $x$, where a trace is a subsequence obtained by deleting each bit of $x$ independently with some given probability $p\in [0,1)$. A recent result of Chase (STOC 2021) shows how $x$ can be determined (in exponential time) from $\exp(\widetilde{O}(n^{1/5}))$ traces, which is the state-of-the-art bound on the sample complexity of trace reconstruction. In this paper we consider two kinds of algorithms for the trace reconstruction problem. Our first, and technically more involved, result shows that any $k$-mer-based algorithm for trace reconstruction must use $\exp(\Omega(n^{1/5}))$ traces, under the assumption that the estimator requires $\mathrm{poly}(2^k, 1/\varepsilon)$ traces, thus establishing the optimality of this number of traces. The analysis behind this result also shows that the technique used by Chase (STOC 2021) is essentially tight, and hence new techniques are needed to improve the worst-case upper bound. Our second, simpler, result concerns the performance of the Maximum Likelihood Estimator (MLE), which picks the source string with the maximum likelihood of generating the observed traces. We show that the MLE uses a nearly optimal number of traces, i.e., within a factor of $n$ of the number of samples needed by an optimal algorithm, and show that this factor-of-$n$ loss may be necessary in general ``model estimation'' settings.
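A brute-force sketch of the MLE for tiny $n$: a trace $t$ arises from $x$ with probability $\mathrm{occ}(t,x)\,p^{n-|t|}(1-p)^{|t|}$, where $\mathrm{occ}$ counts subsequence occurrences, and the $p$-factors agree across all length-$n$ candidates, so the MLE simply maximizes the product of occurrence counts.
\begin{verbatim}
from itertools import product
import math, random

def occ(t, x):                            # subsequence-occurrence DP
    dp = [1] + [0] * len(t)
    for c in x:
        for j in range(len(t) - 1, -1, -1):
            if t[j] == c:
                dp[j + 1] += dp[j]
    return dp[len(t)]

def trace(x, p, rng):                     # delete each bit with prob. p
    return ''.join(c for c in x if rng.random() >= p)

def mle(traces, n):                       # brute force over all candidates
    return max((''.join(s) for s in product('01', repeat=n)),
               key=lambda x: sum(math.log(max(occ(t, x), 1e-300))
                                 for t in traces))

rng = random.Random(0)
x = '10110100'
traces = [trace(x, 0.2, rng) for _ in range(200)]
print(mle(traces, len(x)))                # typically recovers x
\end{verbatim}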
The Shift Equivalence Testing (SET) problem for polynomials is to decide whether two polynomials $p(x_1, \ldots, x_m)$ and $q(x_1, \ldots, x_m)$ satisfy $p(x_1 + a_1, \ldots, x_m + a_m) = q(x_1, \ldots, x_m)$ for some $a_1, \ldots, a_m$ in the coefficient field. SET is one of the basic computational problems in computer algebra and algebraic complexity theory; it was reduced by Dvir, Oliveira and Shpilka in 2014 to the Polynomial Identity Testing (PIT) problem. This paper presents a general scheme for designing algorithms that solve the SET problem, which includes the Dvir-Oliveira-Shpilka algorithm as a special case. With our algorithms for the SET problem over the integers, we give complete solutions to two challenging problems in the symbolic summation of multivariate rational functions, namely the rational summability problem and the existence problem of telescopers for multivariate rational functions. Our approach is based on the structure of isotropy groups of polynomials, introduced by Sato in the 1960s. Our results can be used to detect the applicability of the Wilf-Zeilberger method to multivariate rational functions.
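A brute-force illustration of the decision problem itself on small instances, using SymPy (the paper's scheme via isotropy groups is far more structured): introduce one shift unknown per variable, expand $p(x_1+a_1,\ldots,x_m+a_m)-q$, and solve the system formed by its coefficients.
\begin{verbatim}
import sympy as sp

def shift_equivalent(p, q, xs):
    a = sp.symbols(f'a0:{len(xs)}')       # one shift unknown per variable
    shifted = p.subs({x: x + ai for x, ai in zip(xs, a)}, simultaneous=True)
    eqs = sp.Poly(sp.expand(shifted - q), *xs).coeffs()
    return sp.solve(eqs, a, dict=True)    # nonempty iff shift equivalent

x, y = sp.symbols('x y')
p = x**2 + y**2
q = (x - 1)**2 + (y + 2)**2
print(shift_equivalent(p, q, (x, y)))     # [{a0: -1, a1: 2}]
\end{verbatim}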