The Strong Exponential Time Hypothesis (SETH) asserts that for every $\varepsilon>0$ there exists $k$ such that $k$-SAT requires time $(2-\varepsilon)^n$. The field of fine-grained complexity has leveraged SETH to prove quite tight conditional lower bounds for dozens of problems in various domains and complexity classes, including Edit Distance, Graph Diameter, Hitting Set, Independent Set, and Orthogonal Vectors. Yet, it has been repeatedly asked in the literature whether SETH-hardness results can be proven for other fundamental problems such as Hamiltonian Path, Independent Set, Chromatic Number, MAX-$k$-SAT, and Set Cover. In this paper, we show that fine-grained reductions implying even $\lambda^n$-hardness of these problems from SETH, for any $\lambda>1$, would imply new circuit lower bounds: super-linear lower bounds for Boolean series-parallel circuits or polynomial lower bounds for arithmetic circuits (each of which is a four-decade-old open question). We also extend this barrier result to the class of parameterized problems. Namely, for every $\lambda>1$ we conditionally rule out fine-grained reductions implying SETH-based lower bounds of $\lambda^k$ for a number of problems parameterized by the solution size $k$. Our main technical tool is a new concept called polynomial formulations. In particular, we show that many problems can be represented by relatively succinct low-degree polynomials, and that any problem with such a representation cannot be proven SETH-hard (without proving new circuit lower bounds).
We show that any nonzero polynomial in the ideal generated by the $r \times r$ minors of an $n \times n$ matrix $X$ can be used to efficiently approximate the determinant. For any nonzero polynomial $f$ in this ideal, we construct a small depth-three $f$-oracle circuit that approximates the determinant of size $\Theta(r^{1/3})$ in the sense of border complexity. For many classes of algebraic circuits, this implies that every nonzero polynomial in the ideal generated by $r \times r$ minors is at least as hard to approximately compute as the determinant of size $\Theta(r^{1/3})$. We also prove an analogous result for the Pfaffian of a $2n \times 2n$ skew-symmetric matrix and the ideal generated by Pfaffians of $2r \times 2r$ principal submatrices. This answers a recent question of Grochow about complexity in polynomial ideals in the setting of border complexity. We give several applications of our result, two of which are highlighted below. $\bullet$ We prove super-polynomial lower bounds for Ideal Proof System refutations computed by low-depth circuits. This extends the recent breakthrough low-depth circuit lower bounds of Limaye, Srinivasan, and Tavenas to the setting of proof complexity. For many natural circuit classes, we show that the approximative proof complexity of our hard instance is governed by the approximative circuit complexity of the determinant. $\bullet$ We construct new hitting set generators for polynomial-size low-depth circuits. For any $\varepsilon > 0$, we construct generators with seed length $O(n^\varepsilon)$ that attain a near-optimal tradeoff between their seed length and degree, and are computable by low-depth circuits of near-linear size (with respect to the size of their output). This matches the seed length of the generators recently obtained by Limaye, Srinivasan, and Tavenas, but improves on the generator's degree and circuit complexity.
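For concreteness, membership in this ideal has the following standard description (notation ours, not quoted from the paper): a polynomial $f$ lies in the ideal generated by the $r \times r$ minors of $X$ exactly when it admits a representation
$$ f \;=\; \sum_{S,T} g_{S,T} \cdot \det(X_{S,T}), $$
where $S$ and $T$ range over the $r$-element subsets of rows and columns, $X_{S,T}$ denotes the corresponding $r \times r$ submatrix, and the cofactors $g_{S,T}$ are arbitrary polynomials. The result above says that any nonzero $f$ of this form is, up to border complexity, at least as hard to compute as a determinant of dimension $\Theta(r^{1/3})$.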
We consider the maximum weight $b$-matching problem in the random-order semi-streaming model. Assuming all weights are small integers drawn from $[1,W]$, we present a $2 - \frac{1}{2W} + \varepsilon$ approximation algorithm, using $O(\max(|M_G|, n) \cdot \mathrm{poly}(\log m, W, 1/\varepsilon))$ memory, where $|M_G|$ denotes the cardinality of the optimal matching. Our result generalizes that of Bernstein [Bernstein, 2015], which achieves a $3/2 + \varepsilon$ approximation for the maximum cardinality simple matching. When $W$ is small, our result also improves upon that of Gamlath et al. [Gamlath et al., 2019], which obtains a $2 - \delta$ approximation (for some small constant $\delta \sim 10^{-17}$) for the maximum weight simple matching. In particular, for the weighted $b$-matching problem, ours is the first result beating the approximation ratio of $2$. Our technique hinges on a generalized weighted version of edge-degree constrained subgraphs, originally developed by Bernstein and Stein [Bernstein and Stein, 2015]. Such a subgraph has bounded vertex degree (hence uses only a small number of edges) and can be computed easily. The fact that it contains a $2 - \frac{1}{2W} + \varepsilon$ approximation of the maximum weight matching is proved using the classical K\H{o}nig--Egerv\'ary duality theorem.
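As background for the construction above, the following is a minimal sketch of the standard unweighted edge-degree constrained subgraph (EDCS) conditions of Bernstein and Stein, which the paper generalizes to the weighted setting; the function name and the use of networkx are our illustrative choices:

```python
import networkx as nx

def is_edcs(G, H, beta, lam):
    """Check the two (unweighted) EDCS conditions for a subgraph H of G:
    (i)  every edge (u, v) of H satisfies deg_H(u) + deg_H(v) <= beta;
    (ii) every edge (u, v) of G missing from H satisfies
         deg_H(u) + deg_H(v) >= beta * (1 - lam)."""
    deg = dict(H.degree())
    for u, v in H.edges():
        if deg[u] + deg[v] > beta:
            return False
    for u, v in G.edges():
        if not H.has_edge(u, v) and deg.get(u, 0) + deg.get(v, 0) < beta * (1 - lam):
            return False
    return True
```

Condition (i) bounds the vertex degrees of $H$ (so $H$ is sparse), while condition (ii) forces $H$ to retain enough edge-degree everywhere to contain a good approximate matching.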
Trajectory optimization is an efficient approach for solving optimal control problems for complex robotic systems. It relies on two key components: first, the transcription into a sparse nonlinear program, and second, the corresponding solver to iteratively compute its solution. On the one hand, differential dynamic programming (DDP) provides an efficient approach to transcribe the optimal control problem into a finite-dimensional problem while optimally exploiting the sparsity induced by time. On the other hand, augmented Lagrangian methods make it possible to formulate efficient algorithms with advanced constraint-satisfaction strategies. In this paper, we propose to combine these two approaches into an efficient optimal control algorithm that handles both equality and inequality constraints. Based on the augmented Lagrangian literature, we first derive a generic primal-dual augmented Lagrangian strategy for nonlinear problems with equality and inequality constraints. We then apply it to the dynamic programming principle to solve the value-greedy optimization problems inherent in the backward pass of DDP, and combine it with a dedicated globalization strategy, resulting in a Newton-like algorithm for solving constrained trajectory optimization problems. Contrary to previous attempts at formulating an augmented Lagrangian version of DDP, our approach exhibits adequate convergence properties without any switch in strategies. We empirically demonstrate its interest with several case studies from the robotics literature.
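For readers unfamiliar with the augmented Lagrangian machinery, here is a minimal sketch of the generic method-of-multipliers outer loop for problems with equality constraints $c(x)=0$ and inequality constraints $g(x) \leq 0$; this is textbook background that the paper's primal-dual DDP variant builds on, not the algorithm itself, and the solver choice is illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, g, x0, rho=10.0, iters=20):
    """Textbook method of multipliers: minimize f(x) s.t. c(x) = 0, g(x) <= 0.
    c and g must return 1-D numpy arrays."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(c(x)))  # multipliers for equality constraints
    mu = np.zeros(len(g(x)))   # multipliers for inequality constraints (>= 0)
    for _ in range(iters):
        def L(z):
            ce, ci = c(z), g(z)
            # Rockafellar's augmented Lagrangian for inequalities uses
            # max(0, mu + rho * g)^2; it is smooth in z and keeps mu >= 0.
            return (f(z) + lam @ ce + 0.5 * rho * ce @ ce
                    + (0.5 / rho) * ((np.maximum(0.0, mu + rho * ci)) ** 2
                                     - mu ** 2).sum())
        x = minimize(L, x).x                   # inner unconstrained solve
        lam = lam + rho * c(x)                 # first-order dual update
        mu = np.maximum(0.0, mu + rho * g(x))  # projected dual update
    return x, lam, mu
```

In the paper, the inner minimization is instead carried out by the structure-exploiting backward and forward passes of DDP, with primal and dual variables updated jointly.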
We study the constrained reinforcement learning problem, in which an agent aims to maximize the expected cumulative reward subject to a constraint on the expected total value of a utility function. In contrast to existing model-based approaches or model-free methods accompanied by a `simulator', we aim to develop the first model-free, simulator-free algorithm that achieves a sublinear regret and a sublinear constraint violation even in large-scale systems. To this end, we consider the episodic constrained Markov decision processes with linear function approximation, where the transition dynamics and the reward function can be represented as a linear function of some known feature mapping. We show that $\tilde{\mathcal{O}}(\sqrt{d^3H^3T})$ regret and $\tilde{\mathcal{O}}(\sqrt{d^3H^3T})$ constraint violation bounds can be achieved, where $d$ is the dimension of the feature mapping, $H$ is the length of the episode, and $T$ is the total number of steps. Our bounds are attained without explicitly estimating the unknown transition model or requiring a simulator, and they depend on the state space only through the dimension of the feature mapping. Hence our bounds hold even when the number of states goes to infinity. Our main results are achieved via novel adaptations of the standard LSVI-UCB algorithm. In particular, we first introduce primal-dual optimization into the LSVI-UCB algorithm to balance regret and constraint violation. More importantly, we replace the standard greedy selection with respect to the state-action function in LSVI-UCB with a soft-max policy. This turns out to be key in establishing uniform concentration for the constrained case via its approximation-smoothness trade-off. We also show that one can achieve zero constraint violation while maintaining the same order of regret with respect to $T$.
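To make the last algorithmic change concrete, here is a minimal sketch contrasting greedy action selection with a soft-max policy over the estimated state-action values; the temperature parameter alpha is our illustrative notation:

```python
import numpy as np

def greedy_action(q_values):
    # Standard LSVI-UCB selection: the action maximizing the Q-estimate.
    return int(np.argmax(q_values))

def softmax_action(q_values, alpha=10.0, rng=None):
    # Smooth replacement: sample with probability proportional to
    # exp(alpha * Q). Larger alpha approaches greedy behavior, while the
    # smoothness in Q is what the uniform-concentration argument exploits.
    rng = rng or np.random.default_rng()
    z = alpha * (np.asarray(q_values) - np.max(q_values))  # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(p), p=p))
```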
This work is devoted to the construction of a new type of interval: functional intervals. These intervals are built on the idea of expanding boundaries from numbers to functions. Functional intervals have shown themselves to be promising for further study and use, since they have richer algebraic properties than classical intervals. In this work, a linear functional arithmetic of one variable is constructed. This arithmetic is applied to standard problems of interval analysis: minimizing a function over an interval and finding the zeros of a function on an interval. Numerical experiments with linear functional arithmetic showed a high order of convergence and faster-growing speedups of the algorithms when intervals of the new type are used, even though the computations did not use information about the derivative of the function. The work also presents a modification of minimization algorithms for functions of several variables, based on rational functional intervals of several variables. This improved the speedup of the algorithms, but only up to a certain number of unknowns.
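Since the construction itself is only described informally above, the following sketch shows one way a linear functional interval of one variable could be realized, with linear lower and upper bound functions in place of constant endpoints; the class and its operations are our illustrative reading, not the paper's exact definitions:

```python
from dataclasses import dataclass

@dataclass
class LinInterval:
    """Functional interval on a domain [x_lo, x_hi]: the bounds are linear
    functions lo(x) = a0 + a1*x and hi(x) = b0 + b1*x, so the enclosure
    varies with x instead of being a fixed pair of numbers."""
    a0: float
    a1: float
    b0: float
    b1: float
    x_lo: float
    x_hi: float

    def __add__(self, other):
        # Sum of two enclosures over the same domain: bounds add coefficient-wise.
        return LinInterval(self.a0 + other.a0, self.a1 + other.a1,
                           self.b0 + other.b0, self.b1 + other.b1,
                           self.x_lo, self.x_hi)

    def to_classical(self):
        # Collapse to an ordinary interval by evaluating the linear bounds at
        # the domain endpoints (linear functions attain extrema at endpoints).
        lo = min(self.a0 + self.a1 * self.x_lo, self.a0 + self.a1 * self.x_hi)
        hi = max(self.b0 + self.b1 * self.x_lo, self.b0 + self.b1 * self.x_hi)
        return lo, hi
```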
We consider the problem of finding the matching map between two sets of $d$-dimensional vectors from noisy observations, where the second set contains outliers. The matching map is then an injection, which can be consistently estimated only if the vectors of the second set are well separated. Our main result shows that, in the high-dimensional setting, the detection region of the unknown injection can be characterized by the sets of vectors for which the inlier-inlier distance is of order at least $d^{1/4}$ and the inlier-outlier distance is of order at least $d^{1/2}$. These rates are achieved by the matching estimator minimizing the sum of logarithms of distances between matched pairs of points. We also prove lower bounds establishing the optimality of these rates. Finally, we report results of numerical experiments on both synthetic and real-world data that illustrate our theoretical results and provide further insight into the properties of the estimators studied in this work.
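The estimator above has a direct computational form: with $n$ inliers and $m \geq n$ candidate matches, it is an optimal assignment under log-distance costs, solvable in polynomial time. A minimal sketch using scipy's rectangular assignment solver (the small constant eps guarding against log(0) is our addition):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def estimate_matching(X, Y, eps=1e-12):
    """Estimate the injection from the n rows of X into the m >= n rows of Y
    by minimizing the sum of logarithms of matched distances."""
    cost = np.log(cdist(X, Y) + eps)          # cost[i, j] = log ||X_i - Y_j||
    rows, cols = linear_sum_assignment(cost)  # optimal injection since n <= m
    return dict(zip(rows.tolist(), cols.tolist()))
```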
Spatially inhomogeneous functions, which may be smooth in some regions and rough in other regions, are modelled naturally in a Bayesian manner using so-called Besov priors, which are given by random wavelet expansions with Laplace-distributed coefficients. This paper studies theoretical guarantees for such prior measures -- specifically, we examine their frequentist posterior contraction rates in the setting of non-linear inverse problems with Gaussian white noise. Our results are first derived under a general local Lipschitz assumption on the forward map. We then verify this assumption for two non-linear inverse problems arising from elliptic partial differential equations: the Darcy flow model from geophysics, and a model for the Schr\"odinger equation appearing in tomography. In the course of the proofs, we also obtain novel concentration inequalities for penalized least squares estimators with $\ell^1$ wavelet penalty, which have a natural interpretation as maximum a posteriori (MAP) estimators. The true parameter is assumed to belong to some spatially inhomogeneous Besov class $B^{\alpha}_{11}$, $\alpha>0$. In a setting with direct observations, we complement these upper bounds with a lower bound on the rate of contraction for arbitrary Gaussian priors. An immediate consequence of our results is that while Laplace priors can achieve minimax-optimal rates over $B^{\alpha}_{11}$-classes, Gaussian priors are limited to a polynomially slower contraction rate. This gives an information-theoretic justification for the intuition that Laplace priors are more compatible with the $\ell^1$ regularity structure of the underlying parameter.
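For orientation, a draw from a Besov $B^{\alpha}_{11}$ prior in one spatial dimension has the typical form of a random wavelet series (a standard construction; the normalization below is assumed rather than quoted from the paper):
$$ f \;=\; \sum_{l=0}^{\infty} \sum_{r=1}^{2^{l}} 2^{-l(\alpha - 1/2)}\, \xi_{lr}\, \psi_{lr}, \qquad \xi_{lr} \overset{\text{i.i.d.}}{\sim} \mathrm{Laplace}(1), $$
where $(\psi_{lr})$ is an orthonormal wavelet basis; replacing the Laplace variables by standard Gaussians produces the Gaussian priors against which the contraction lower bound is stated.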
An $r$-quasiplanar graph is a graph drawn in the plane with no $r$ pairwise crossing edges. Let $s \geq 3$ be an integer and $r=2^s$. We prove that there is a constant $C$ such that every $r$-quasiplanar graph with $n \geq r$ vertices has at most $n\left(Cs^{-1}\log n\right)^{2s-4}$ edges. A graph whose vertices are continuous curves in the plane, two being connected by an edge if and only if they intersect, is called a string graph. We show that for every $\epsilon>0$, there exists $\delta>0$ such that every string graph with $n$ vertices whose chromatic number is at least $n^{\epsilon}$ contains a clique of size at least $n^{\delta}$. A clique of this size, or a coloring using fewer than $n^{\epsilon}$ colors, can be found by a polynomial-time algorithm in terms of the size of the geometric representation of the set of strings. In the process, we use, generalize, and strengthen previous results of Lee, Tomon, and others. All of our theorems are related to geometric variants of the following classical graph-theoretic problem of Erd\H{o}s, Gallai, and Rogers: given a $K_r$-free graph on $n$ vertices and an integer $s<r$, at least how many vertices can we find such that the subgraph induced by them is $K_s$-free?
We prove new lower bounds for statistical estimation tasks under the constraint of $(\varepsilon, \delta)$-differential privacy. First, we provide tight lower bounds for private covariance estimation of Gaussian distributions. We show that estimating the covariance matrix in Frobenius norm requires $\Omega(d^2)$ samples, and in spectral norm requires $\Omega(d^{3/2})$ samples, both matching upper bounds up to logarithmic factors. We prove these bounds via our main technical contribution, a broad generalization of the fingerprinting method to exponential families. Additionally, using the private Assouad method of Acharya, Sun, and Zhang, we show a tight $\Omega(d/(\alpha^2 \varepsilon))$ lower bound for estimating the mean of a distribution with bounded covariance to $\alpha$-error in $\ell_2$-distance. Previously known lower bounds for all these problems were either polynomially weaker or held under the stricter condition of $(\varepsilon,0)$-differential privacy.
A new approach to calculating the finite Fourier transform is suggested in this study. The conceptual basis of the method, designed specifically for this study, is that the underlying series is updated with an appropriate modification and purification; this update serves as the starting premise of the investigation. For the approach to succeed, the series must be purified and organized to the point where it can be considered adequate. The properties of the series were established by selecting a suitable application of the Fourier series, applying it, and analyzing it in relation to the finite Fourier transform. The results of this study provide a better understanding of the characteristics of this series.