We show that there exist infinitely many $n \in \mathbb{Z}^+$ such that for any constant $\epsilon > 0$, any deterministic algorithm to solve $k$-\textsf{SAT} for $k \geq 3$ must perform at least $(2^{k-\frac{3}{2}-\epsilon})^{\frac{n}{k+1}}$ operations, where $n$ is the number of variables in the $k$-\textsf{SAT} instance.
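For concreteness (our own instantiation, not part of the original statement), at $k = 3$ the bound specializes to
\[
\left(2^{3-\frac{3}{2}-\epsilon}\right)^{\frac{n}{4}} = 2^{\left(\frac{3}{2}-\epsilon\right)\frac{n}{4}} \approx 2^{0.375\, n} \quad \text{as } \epsilon \to 0,
\]
an exponential lower bound with an explicit constant in the exponent.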
The {\it inversion} of a set $X$ of vertices in a digraph $D$ consists of reversing the direction of all arcs of $D\langle X\rangle$. We study $sinv'_k(D)$ (resp. $sinv_k(D)$), the minimum number of inversions needed to transform $D$ into a $k$-arc-strong (resp. $k$-strong) digraph, and $sinv'_k(n) = \max\{sinv'_k(D) \mid D~\mbox{is a $2k$-edge-connected digraph of order $n$}\}$. We show: (i) $\frac{1}{2} \log (n - k+1) \leq sinv'_k(n) \leq \log n + 4k -3$; (ii) for any fixed positive integers $k$ and $t$, deciding whether a given oriented graph $\vec{G}$ satisfies $sinv'_k(\vec{G}) \leq t$ (resp. $sinv_k(\vec{G}) \leq t$) is NP-complete; (iii) if $T$ is a tournament of order at least $2k+1$, then $sinv'_k(T) \leq sinv_k(T) \leq 2k$, and $sinv'_k(T) \leq \frac{4}{3}k+o(k)$; (iv) $\frac{1}{2}\log(2k+1) \leq sinv'_k(T) \leq sinv_k(T)$ for some tournament $T$ of order $2k+1$; (v) if $T$ is a tournament of order at least $19k-2$ (resp. $11k-2$), then $sinv'_k(T) \leq sinv_k(T) \leq 1$ (resp. $sinv_k(T) \leq 3$); (vi) for every $\epsilon>0$, there exists $C$ such that $sinv'_k(T) \leq sinv_k(T) \leq C$ for every tournament $T$ on at least $2k+1 + \epsilon k$ vertices.
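To make the inversion operation concrete, here is a minimal Python sketch (our illustration, not from the paper): a digraph is represented as a set of arcs, and inverting a vertex set $X$ reverses exactly the arcs with both endpoints in $X$.

```python
def invert(arcs, X):
    """Reverse each arc of D<X>, i.e. each arc with both endpoints in X.

    arcs: set of (u, v) pairs, each representing a directed edge u -> v.
    X:    iterable of vertices to invert.
    Arcs with at most one endpoint in X are left untouched.
    """
    X = set(X)
    return {(v, u) if u in X and v in X else (u, v) for (u, v) in arcs}

# Inverting {1, 2} in the directed triangle 1 -> 2 -> 3 -> 1
# reverses only the arc (1, 2):
print(invert({(1, 2), (2, 3), (3, 1)}, {1, 2}))  # {(2, 1), (2, 3), (3, 1)}
```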
Moving horizon estimation (MHE) is a widely studied state-estimation approach used in several practical applications. In the MHE problem, the state estimates are obtained by solving an approximated nonlinear optimization problem. However, this optimization step is known to be computationally expensive. Given this limitation, this paper investigates the idea of iteratively preconditioned gradient-descent (IPG) for solving the MHE problem, with the aim of improved performance over existing solution techniques. To our knowledge, this paper is the first to use preconditioning to reduce the computational cost and accelerate the crucial optimization step of MHE. We present a convergence guarantee of the proposed iterative approach for a class of MHE problems, and additionally derive sufficient conditions for the MHE problem to be convex. Finally, the proposed method is implemented on a unicycle localization example. The simulation results demonstrate that the proposed approach achieves better accuracy with reduced computational cost.
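As a rough illustration of the IPG idea (a minimal sketch under our own assumptions, not the paper's exact update rule), one can interleave a Richardson-type iteration that drives a preconditioner toward the inverse Hessian with preconditioned gradient steps, here on a quadratic cost:

```python
import numpy as np

def ipg_quadratic(A, b, steps=200):
    """Iteratively preconditioned gradient descent on f(x) = 0.5 x'Ax - b'x.

    The preconditioner K is driven toward A^{-1} by the Richardson-type
    update K <- K - alpha (A K - I), while x takes preconditioned
    gradient steps x <- x - K (A x - b).
    """
    n = A.shape[0]
    alpha = 1.0 / np.linalg.norm(A, 2)       # ensures rho(I - alpha A) < 1
    x, K = np.zeros(n), alpha * np.eye(n)
    for _ in range(steps):
        K = K - alpha * (A @ K - np.eye(n))  # preconditioner update
        x = x - K @ (A @ x - b)              # preconditioned gradient step
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 0.1 * np.eye(5)   # an ill-conditioned SPD "Hessian"
b = rng.standard_normal(5)
print(np.linalg.norm(ipg_quadratic(A, b) - np.linalg.solve(A, b)))  # ~0
```

Once $K \approx A^{-1}$, each step is close to a Newton step, which is the source of the acceleration over plain gradient descent.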
State transformation problems such as compressing quantum information or breaking quantum commitments are fundamental quantum tasks. However, their computational difficulty cannot easily be characterized using traditional complexity theory, which focuses on tasks with classical inputs and outputs. To study the complexity of such state transformation tasks, we introduce a framework for unitary synthesis problems, including notions of reductions and unitary complexity classes. We use this framework to study the complexity of transforming one entangled state into another via local operations. We formalize this as the Uhlmann Transformation Problem, an algorithmic version of Uhlmann's theorem. Then, we prove structural results relating the complexity of the Uhlmann Transformation Problem, polynomial space quantum computation, and zero knowledge protocols. The Uhlmann Transformation Problem allows us to characterize the complexity of a variety of tasks in quantum information processing, including decoding noisy quantum channels, breaking falsifiable quantum cryptographic assumptions, implementing optimal prover strategies in quantum interactive proofs, and decoding the Hawking radiation of black holes. Our framework for unitary complexity thus provides new avenues for studying the computational complexity of many natural quantum information processing tasks.
Structural graph parameters play an important role in parameterized complexity, including in kernelization. Notably, vertex cover, neighborhood diversity, twin-cover, and modular-width have been studied extensively in recent years. However, there are many fundamental problems whose preprocessing complexity is not fully understood under these parameters. Indeed, the existence of polynomial kernels or polynomial Turing kernels for famous problems such as Clique, Chromatic Number, and Steiner Tree has only been established for a subset of structural parameters. In this work, we use several techniques to obtain a complete preprocessing complexity landscape for over a dozen fundamental algorithmic problems.
In the MAXSAT problem, we are given a set $V$ of $m$ variables and a collection $C$ of $n$ clauses over $V$, and we seek a truth assignment that maximizes the number of satisfied clauses. This problem is $\textit{NP}$-hard even for its restricted version, the 2-MAXSAT problem, in which every clause contains at most 2 literals. In this paper, we present an efficient algorithm for this problem, whose worst-case time complexity is bounded by $O((nm)^2(\log_2 nm)^{\log_2 nm})$. This shows that the 2-MAXSAT problem can be solved in quasi-polynomial time.
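For reference, here is a brute-force baseline that merely pins down the problem being solved (our own illustration; it is not the paper's algorithm):

```python
from itertools import product

def count_satisfied(clauses, assignment):
    """Number of clauses with at least one true literal.

    clauses: list of tuples of nonzero ints; literal +i means variable i,
             -i means its negation (at most 2 literals per clause).
    assignment: dict mapping each variable index to True/False.
    """
    return sum(
        any(assignment[abs(l)] == (l > 0) for l in clause)
        for clause in clauses
    )

def brute_force_2maxsat(clauses, num_vars):
    """Exhaustive O(2^m) reference solver; only usable on tiny instances."""
    best = 0
    for bits in product([False, True], repeat=num_vars):
        assignment = dict(enumerate(bits, start=1))
        best = max(best, count_satisfied(clauses, assignment))
    return best

# (x1 or x2), (not x1 or x2), (not x2): at most 2 clauses are satisfiable.
print(brute_force_2maxsat([(1, 2), (-1, 2), (-2,)], 2))  # 2
```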
In stochastic zeroth-order optimization, a problem of practical relevance is understanding how to fully exploit the local geometry of the underlying objective function. We consider a fundamental setting in which the objective function is quadratic, and provide the first tight characterization of the optimal Hessian-dependent sample complexity. Our contribution is twofold. First, from an information-theoretic point of view, we prove tight lower bounds on Hessian-dependent complexities by introducing a concept called energy allocation, which captures the interaction between the searching algorithm and the geometry of objective functions. A matching upper bound is obtained by solving for the optimal energy spectrum. Second, algorithmically, we show the existence of a Hessian-independent algorithm that universally achieves the asymptotically optimal sample complexities for all Hessian instances. The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions, which we handle via a truncation method.
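To fix ideas (a generic sketch under our own assumptions, not the paper's algorithm), a zeroth-order method sees only noisy function values; below, a standard two-point gradient estimator is combined with truncation of the raw observations, the usual device for handling heavy-tailed noise:

```python
import numpy as np

def zo_gradient(f, x, delta=0.1, clip=50.0, rng=None):
    """Two-point zeroth-order gradient estimate with truncated observations.

    f:     noisy value oracle, f(x) -> float.
    delta: finite-difference radius (unbiased here since f is quadratic).
    clip:  truncation threshold taming heavy-tailed observation noise.
    """
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                     # uniform direction on sphere
    yp = np.clip(f(x + delta * u), -clip, clip)
    ym = np.clip(f(x - delta * u), -clip, clip)
    return (yp - ym) / (2 * delta) * len(x) * u

# Quadratic objective with heavy-tailed (Student-t, df=2) value noise.
rng = np.random.default_rng(1)
H = np.diag([1.0, 10.0])                       # the (unknown) Hessian
f = lambda x: 0.5 * x @ H @ x + 0.1 * rng.standard_t(df=2)
x = np.array([2.0, 2.0])
for _ in range(2000):
    x -= 1e-2 * zo_gradient(f, x, rng=rng)
print(x)  # hovers near the minimizer at the origin, up to a noise floor
```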
The approximate stabilizer rank of a quantum state is the minimum number of terms in any approximate decomposition of that state into stabilizer states. Bravyi and Gosset showed that the approximate stabilizer rank of a so-called "magic" state like $|T\rangle^{\otimes n}$ is, up to polynomial factors, an upper bound on the number of classical operations required to simulate an arbitrary quantum circuit with Clifford gates and $n$ $T$ gates. As a result, an exponential lower bound on this quantity seems inevitable. Despite this intuition, several attempts using various techniques did not yield better than a linear lower bound on the "exact" rank of $|T\rangle^{\otimes n}$, meaning the minimal size of a decomposition that exactly produces the state. For the "approximate" rank, which is more realistically related to the cost of simulating quantum circuits, no lower bound better than $\tilde \Omega(\sqrt n)$ was known. In this paper, we improve the lower bound on the approximate rank to $\tilde \Omega (n^2)$ for a wide range of approximation parameters. An immediate corollary of our result is the existence of polynomial-time computable functions which require a super-linear number of terms in any decomposition into exponentials of quadratic forms over $\mathbb{F}_2$, resolving a question in [Wil18]. Our approach is based on a strong lower bound on the approximate rank of a quantum state sampled from the Haar measure, a step-by-step analysis of the approximate rank of a magic-state teleportation protocol used to sample from the Haar measure, and a result about trading Clifford operations for $T$ gates by [LKS18].
The algebraic degree is an important parameter of Boolean functions used in cryptography. When a function in a large number of variables is not given explicitly in algebraic normal form, it might not be feasible to compute its degree exactly. Instead, one can try to estimate the degree using probabilistic tests. We propose a probabilistic test for deciding whether the algebraic degree of a Boolean function $f$ is below a certain value $k$. The test involves picking an affine space of dimension $k$ and testing whether the values of $f$ on that space sum to zero. If $\deg(f)<k$, then $f$ will always pass the test; otherwise it will sometimes pass and sometimes fail, depending on which affine space was chosen. The probability of failing the proposed test is closely related to the number of monomials of degree $k$ in a polynomial $g$, averaged over all the polynomials $g$ which are affine equivalent to $f$. We initiate the study of the probability of failing the proposed ``$\deg(f)<k$'' test. We show that in the particular case when the degree of $f$ is actually equal to $k$, this probability lies in the interval $(0.288788, 0.5]$, and therefore a small number of runs of the test is sufficient to give, with very high probability, the correct answer. Exact values of this probability for all the polynomials in 8 variables were computed using the representatives listed by Hou and by Langevin and Leander.
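The test itself is simple to state in code. Below is a minimal sketch (our own illustration of the procedure described above): sample a random affine subspace of dimension $k$ and XOR the values of $f$ over its $2^k$ points; a nonzero sum witnesses $\deg(f) \geq k$.

```python
import random
from itertools import product

def gf2_rank(vectors):
    """Rank over GF(2) of a list of integer bitmasks (echelon reduction)."""
    basis = {}  # highest set bit -> basis vector
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v
                break
            v ^= basis[h]
    return len(basis)

def passes_deg_below_k_test(f, n, k, rng=random):
    """One run of the probabilistic 'deg(f) < k' test.

    f: Boolean function on n-bit integers, returning 0 or 1.
    Returns True iff f sums to zero on a random affine k-space.
    """
    while True:  # draw k linearly independent directions
        dirs = [rng.randrange(1, 1 << n) for _ in range(k)]
        if gf2_rank(dirs) == k:
            break
    offset, total = rng.randrange(1 << n), 0
    for coeffs in product([0, 1], repeat=k):   # all 2^k points of the space
        x = offset
        for c, d in zip(coeffs, dirs):
            if c:
                x ^= d
        total ^= f(x)
    return total == 0

# f = x1*x2*x3 on 4 variables has degree exactly 3, so the k = 3 test
# fails on some affine spaces (with probability in (0.288788, 0.5]).
f = lambda x: x & (x >> 1) & (x >> 2) & 1
runs = [passes_deg_below_k_test(f, 4, 3) for _ in range(20)]
print(runs.count(False), "of 20 runs failed, witnessing deg(f) >= 3")
```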
We give improved and almost optimal testers for several classes of Boolean functions on $n$ inputs that have concise representations, in both the uniform and distribution-free models. These classes include $k$-juntas, $k$-linear functions, $s$-term DNFs, $s$-term monotone DNFs, $r$-DNFs, decision lists, $r$-decision lists, size-$s$ decision trees, size-$s$ Boolean formulas, size-$s$ branching programs, $s$-sparse polynomials over the binary field, and functions with Fourier degree at most $d$. The method can be extended to several other classes of functions over any domain that can be approximated by functions with a small number of relevant variables.
Partial differential equations (PDEs) are ubiquitous in science and engineering. Prior quantum algorithms for solving the system of linear algebraic equations obtained by discretizing a PDE have a computational complexity that scales at least linearly with the condition number $\kappa$ of the matrices involved in the computation. For many practical applications, $\kappa$ scales polynomially with the size $N$ of the matrices, rendering a polynomial-in-$N$ complexity for these algorithms. Here we present a quantum algorithm with a complexity that is polylogarithmic in $N$ and independent of $\kappa$ for a large class of PDEs. Our algorithm generates a quantum state that enables extracting features of the solution. Central to our methodology is the use of a wavelet basis as an auxiliary system of coordinates in which, after a simple diagonal preconditioning, the condition number of the associated matrices becomes independent of $N$. We present numerical simulations showing the effect of the wavelet preconditioner for several differential equations. Our work could provide a practical way to boost the performance of quantum-simulation algorithms where standard methods are used for discretization.
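As a classical toy illustration of the preconditioning step (our own sketch; it uses the Haar wavelet for simplicity, whereas fully $N$-independent condition numbers require suitable higher-order wavelets), one can compare the condition number of a 1D Laplacian before and after diagonal rescaling in a wavelet basis:

```python
import numpy as np

def haar_matrix(N):
    """Orthonormal Haar wavelet transform matrix; N must be a power of 2."""
    H = np.array([[1.0]])
    while H.shape[0] < N:
        n = H.shape[0]
        top = np.kron(H, [1.0, 1.0])            # coarse averages
        bot = np.kron(np.eye(n), [1.0, -1.0])   # fine details
        H = np.vstack([top, bot]) / np.sqrt(2.0)
    return H

N = 256
# 1D Poisson stiffness matrix (Dirichlet); cond(A) grows like N^2.
A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
W = haar_matrix(N)
Aw = W @ A @ W.T                       # the Laplacian in the wavelet basis
d = 1.0 / np.sqrt(np.diag(Aw))         # simple diagonal preconditioner
Ap = d[:, None] * Aw * d[None, :]      # D^{-1/2} Aw D^{-1/2}
print(np.linalg.cond(A), np.linalg.cond(Ap))   # the second is far smaller
```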