Given a sound first-order p-time theory $T$ capable of formalizing the syntax of first-order logic, we define a p-time function $g_T$ that stretches all inputs by one bit, and we use its properties to show that $T$ must be incomplete. We leave it as an open problem whether for some $T$ the range of $g_T$ intersects all infinite NP sets (i.e. whether it is a proof complexity generator hard for all proof systems). A propositional version of the construction shows that at least one of the following three statements is true:
- there is no p-optimal propositional proof system (this is equivalent to the non-existence of a time-optimal propositional proof search algorithm),
- $E \not\subseteq P/poly$,
- there exists a function $h$ that stretches all inputs by one bit, is computable in sub-exponential time, and whose range $Rng(h)$ intersects all infinite NP sets.
A code of length $n$ is said to be (combinatorially) $(\rho,L)$-list decodable if the Hamming ball of radius $\rho n$ around any vector in the ambient space does not contain more than $L$ codewords. We study a recently introduced class of higher order MDS codes, which are closely related (via duality) to codes that achieve a generalized Singleton bound for list decodability. For some $\ell\geq 1$, higher order MDS codes of length $n$, dimension $k$, and order $\ell$ are denoted as $(n,k)$-MDS($\ell$) codes. We present a number of results on the structure of these codes, identifying the `extend-ability' of their parameters in various scenarios. Specifically, for some parameter regimes, we identify conditions under which $(n_1,k_1)$-MDS($\ell_1$) codes can be obtained from $(n_2,k_2)$-MDS($\ell_2$) codes, via various techniques. We believe that these results will aid in efficient constructions of higher order MDS codes. We also obtain a new field size upper bound for the existence of such codes, which arguably improves over the best known existing bound in some parameter regimes.
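The combinatorial definition above can be checked directly on small codes by examining every Hamming ball. The following sketch (names such as `is_list_decodable` are chosen here for illustration; this is unrelated to the higher order MDS machinery) brute-forces the definition over the whole ambient space, which is of course exponential in $n$:

```python
from itertools import product

def hamming(u, v):
    """Hamming distance between two equal-length tuples."""
    return sum(a != b for a, b in zip(u, v))

def is_list_decodable(codewords, rho, L, q=2):
    """Brute-force check of combinatorial (rho, L)-list decodability:
    no Hamming ball of radius rho*n may contain more than L codewords."""
    n = len(next(iter(codewords)))
    radius = rho * n
    for center in product(range(q), repeat=n):
        inside = sum(1 for c in codewords if hamming(center, c) <= radius)
        if inside > L:
            return False
    return True

# The binary repetition code of length 4: any ball of radius 1 contains
# at most one codeword, so it is (1/4, 1)-list decodable, but not (1/2, 1).
rep = {(0, 0, 0, 0), (1, 1, 1, 1)}
```

For example, `is_list_decodable(rep, 0.25, 1)` holds, while radius $n/2$ already places a center equidistant from both codewords.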
We consider the following decision problems: given a finite, rational Markov chain, source and target states, and a rational threshold, does there exist an $n$ such that the probability of reaching the target from the source in $n$ steps is equal to the threshold (resp. crosses the threshold)? These problems are known to be equivalent to the Skolem (resp. Positivity) problems for Linear Recurrence Sequences (LRS). These are number-theoretic problems whose decidability has been open for decades. We present a short, self-contained, and elementary reduction from LRS to Markov Chains that improves the state of the art as follows: (a) We reduce to ergodic Markov Chains, a class that is widely used in Model Checking. (b) We reduce LRS to Markov Chains of significantly lower order than before. We thus get sharper hardness results for a more ubiquitous class of Markov Chains. Immediate applications include problems in modeling biological systems, and regular automata-based counting problems.
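The quantity the decision problem asks about is simply the $(source, target)$ entry of $P^n$. A minimal sketch (the function name is ours) computes these $n$-step probabilities exactly with rational arithmetic, so that equality with a rational threshold is meaningful; deciding whether *some* $n$ achieves the threshold is exactly the hard part the abstract refers to:

```python
from fractions import Fraction

def step_probabilities(P, source, target, max_n):
    """Exact n-step reachability probabilities (P^n)[source][target]
    for n = 1..max_n, using rational arithmetic (no rounding)."""
    size = len(P)
    row = [Fraction(int(i == source)) for i in range(size)]
    out = []
    for _ in range(max_n):
        row = [sum(row[i] * P[i][j] for i in range(size)) for j in range(size)]
        out.append(row[target])
    return out

# Two-state chain with an absorbing target: the n-step probability
# of being at state 1 is 1 - (1/2)^n.
half = Fraction(1, 2)
P = [[half, half], [Fraction(0), Fraction(1)]]
probs = step_probabilities(P, 0, 1, 3)
```

Here the threshold $3/4$ is hit exactly at $n = 2$; for a general chain and threshold, no algorithm is known to decide whether such an $n$ exists.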
A random algebraic graph is defined by a group ${G}$ with a "uniform" distribution over it and a connection $\sigma:{G}\longrightarrow [0,1]$ with expectation $p,$ satisfying $\sigma({g}) = \sigma({g}^{-1}).$ The random graph $\mathsf{RAG}(n,{G},p,\sigma)$ with vertex set $[n]$ is formed as follows. First, $n$ independent latent vectors ${x}_1, \ldots, {x}_n$ are sampled uniformly from ${G}.$ Then, vertices $i,j$ are connected with probability $\sigma({x}_i{x}_j^{-1}).$ This model captures random geometric graphs with latent space the unit sphere and the hypercube, certain regimes of the stochastic block model, and random subgraphs of Cayley graphs. The main question of interest to the current paper is: when is a random algebraic graph statistically and/or computationally distinguishable from $\mathsf{G}(n,p)$? Our results fall into two main categories. 1) Geometric. We focus on the case ${G} =\{\pm1\}^d$ and use Fourier-analytic tools. For hard threshold connections, we match [LMSY22b] for $p = \omega(1/n)$ and for connections that are $\frac{1}{r\sqrt{d}}$-Lipschitz we extend the results of [LR21b] when $d = \Omega(n\log n)$ to the non-monotone setting. 2) Algebraic. We provide evidence for an exponential statistical-computational gap. Consider any finite group ${G}$ and let $A\subseteq {G}$ be a set of elements formed by including each set of the form $\{{g}, {g}^{-1}\}$ independently with probability $1/2.$ Let $\Gamma_n({G},A)$ be the distribution of random graphs formed by taking a uniformly random induced subgraph of size $n$ of the Cayley graph $\Gamma({G},A).$ Then, $\Gamma_n({G}, A)$ and $\mathsf{G}(n,1/2)$ are statistically indistinguishable with high probability over $A$ if and only if $\log |{G}| \gtrsim n.$ However, low-degree polynomial tests fail to distinguish $\Gamma_n({G}, A)$ and $\mathsf{G}(n,1/2)$ with high probability over $A$ when $\log |{G}| = \log^{\Omega(1)}n.$
Let a polytope $P$ be defined by a system $A x \leq b$. We consider the problem of counting the number of integer points inside $P$, assuming that $P$ is $\Delta$-modular, where a polytope is called $\Delta$-modular if all rank sub-determinants of $A$ are bounded by $\Delta$ in absolute value. We present a new FPT-algorithm, parameterized by $\Delta$ and by the maximal number of vertices of $P$, where the maximum is taken over all r.h.s. vectors $b$. We show that our algorithm is more efficient for $\Delta$-modular problems than the approach of A. Barvinok et al. Unlike that approach, we do not directly compute the short rational generating function for $P \cap \mathbb{Z}^n$, which is commonly used for the considered problem. Instead, we use the dynamic programming principle to compute a particular representation of it in the form of an exponential series that depends on a single variable. In particular, we do not rely on Barvinok's unimodular sign decomposition technique at all. Using our new complexity bound, we consider different special cases that may be of independent interest. For example, we give FPT-algorithms for counting the number of integer points in $\Delta$-modular simplices and similar polytopes that have $n + O(1)$ facets. As a special case, for any fixed $m$, we give an FPT-algorithm to count solutions of the unbounded $m$-dimensional $\Delta$-modular subset-sum problem.
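The quantity being computed, $|P \cap \mathbb{Z}^n|$, is easy to state even though computing it efficiently is the whole point. A naive enumeration over a bounding box (exponential in the dimension, and in no way the FPT algorithm of the abstract) pins down the definition:

```python
from itertools import product

def count_integer_points(A, b, box):
    """Naive count of integer points x with A x <= b inside the given box,
    a list of (lo, hi) coordinate ranges.  Exponential-time; it only
    illustrates the quantity that the FPT algorithm computes efficiently."""
    count = 0
    for x in product(*(range(lo, hi + 1) for lo, hi in box)):
        if all(sum(row[i] * x[i] for i in range(len(x))) <= bi
               for row, bi in zip(A, b)):
            count += 1
    return count

# The simplex x >= 0, y >= 0, x + y <= 2 contains 6 integer points.
A = [[-1, 0], [0, -1], [1, 1]]
b = [0, 0, 2]
```

Here every sub-determinant of $A$ is $\pm 1$ or $0$, so the simplex is $1$-modular (totally unimodular, in fact).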
Many recent developments in causal inference, and functional estimation problems more generally, have been motivated by the fact that classical one-step (first-order) debiasing methods, or their more recent sample-split double machine-learning avatars, can outperform plugin estimators under surprisingly weak conditions. These first-order corrections improve on plugin estimators in a black-box fashion, and consequently are often used in conjunction with powerful off-the-shelf estimation methods. These first-order methods are however provably suboptimal in a minimax sense for functional estimation when the nuisance functions live in Hölder-type function spaces. This suboptimality of first-order debiasing has motivated the development of "higher-order" debiasing methods. The resulting estimators are, in some cases, provably optimal over Hölder-type spaces, but both the estimators that are minimax-optimal and their analyses are crucially tied to properties of the underlying function space. In this paper we investigate the fundamental limits of structure-agnostic functional estimation, where relatively weak conditions are placed on the underlying nuisance functions. We show that there is a strong sense in which existing first-order methods are optimal. We achieve this goal by providing a formalization of the problem of functional estimation with black-box nuisance function estimates, and deriving minimax lower bounds for this problem. Our results highlight some clear tradeoffs in functional estimation: if we wish to remain agnostic to the underlying nuisance function spaces, impose only high-level rate conditions, and maintain compatibility with black-box nuisance estimators, then first-order methods are optimal. When we have an understanding of the structure of the underlying nuisance functions, then carefully constructed higher-order estimators can outperform first-order estimators.
We revisit a natural variant of geometric set cover, called minimum-membership geometric set cover (MMGSC). In this problem, the input consists of a set $S$ of points and a set $\mathcal{R}$ of geometric objects, and the goal is to find a subset $\mathcal{R}^*\subseteq\mathcal{R}$ to cover all points in $S$ such that the \textit{membership} of $S$ with respect to $\mathcal{R}^*$, denoted by $\mathsf{memb}(S,\mathcal{R}^*)$, is minimized, where $\mathsf{memb}(S,\mathcal{R}^*)=\max_{p\in S}|\{R\in\mathcal{R}^*: p\in R\}|$. We achieve the following two main results. * We give the first polynomial-time constant-approximation algorithm for MMGSC with unit squares. This answers a question left open since the work of Erlebach and van Leeuwen [SODA'08], who gave a constant-approximation algorithm with running time $n^{O(\mathsf{opt})}$ where $\mathsf{opt}$ is the optimum of the problem (i.e., the minimum membership). * We give the first polynomial-time approximation scheme (PTAS) for MMGSC with halfplanes. Prior to this work, it was even unknown whether the problem can be approximated with a factor of $o(\log n)$ in polynomial time, while it is well-known that the minimum-size set cover problem with halfplanes can be solved in polynomial time. We also consider a problem closely related to MMGSC, called minimum-ply geometric set cover (MPGSC), in which the goal is to find $\mathcal{R}^*\subseteq\mathcal{R}$ to cover $S$ such that the ply of $\mathcal{R}^*$ is minimized, where the ply is defined as the maximum number of objects in $\mathcal{R}^*$ which have a nonempty common intersection. Very recently, Durocher et al. gave the first constant-approximation algorithm for MPGSC with unit squares which runs in $O(n^{12})$ time. We give a significantly simpler constant-approximation algorithm with near-linear running time.
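The objective $\mathsf{memb}(S,\mathcal{R}^*)$ is straightforward to evaluate for a given cover. A small sketch for unit squares (function names are ours; squares are represented by their lower-left corners and treated as closed, purely for illustration):

```python
def membership(points, squares):
    """memb(S, R*): the maximum, over points p in S, of the number of
    chosen unit squares containing p.  Each square is given by its
    lower-left corner (ax, ay) and is taken to be closed on all sides."""
    def covers(sq, p):
        (ax, ay), (px, py) = sq, p
        return ax <= px <= ax + 1 and ay <= py <= ay + 1
    return max(sum(1 for sq in squares if covers(sq, p)) for p in points)
```

Minimizing this maximum over all feasible subcollections, rather than minimizing $|\mathcal{R}^*|$, is what distinguishes MMGSC from classical set cover.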
The union-closed sets conjecture states that in any nonempty union-closed family $\mathcal{F}$ of subsets of a finite set, there exists an element contained in at least a proportion $1/2$ of the sets of $\mathcal{F}$. Using the information-theoretic method, Gilmer \cite{gilmer2022constant} recently showed that there exists an element contained in at least a proportion $0.01$ of the sets of such $\mathcal{F}$. He conjectured that his technique can be pushed to the constant $\frac{3-\sqrt{5}}{2}$, which was subsequently confirmed by several researchers \cite{sawin2022improved,chase2022approximate,alweiss2022improved,pebody2022extension}. Furthermore, Sawin \cite{sawin2022improved} showed that Gilmer's technique can be improved to obtain a bound better than $\frac{3-\sqrt{5}}{2}$, but this new bound was not explicitly given by Sawin. This paper further improves Gilmer's technique to derive new bounds in optimization form for the union-closed sets conjecture. These bounds include Sawin's improvement as a special case. By providing cardinality bounds on auxiliary random variables, we make Sawin's improvement computable, and then evaluate it numerically, which yields a bound around $0.38234$, slightly better than $\frac{3-\sqrt{5}}{2}\approx0.38197$.
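The two quantities in the conjecture's statement are easy to compute for explicit families. A small sketch (function names are ours) checks union-closedness and finds the best element frequency:

```python
def union_closed(family):
    """Check that a family (a set of frozensets) is union-closed."""
    return all(a | b in family for a in family for b in family)

def best_element_fraction(family):
    """Largest fraction of sets in the family containing some one element.
    The conjecture asserts this is >= 1/2 whenever the (nonempty) family
    contains a nonempty set and is union-closed."""
    elements = set().union(*family)
    return max(sum(1 for s in family if e in s) for e in elements) / len(family)

F = {frozenset({1}), frozenset({2}), frozenset({1, 2})}
```

For this $\mathcal{F}$, both elements appear in $2/3$ of the sets, comfortably above the conjectured $1/2$ and above the $0.38234$ bound discussed above.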
Motivated by an application from geodesy, we introduce a novel clustering problem which is a $k$-center (or $k$-diameter) problem with a side constraint. For the side constraint, we are given an undirected connectivity graph $G$ on the input points, and a clustering is now only feasible if every cluster induces a connected subgraph in $G$. We call the resulting problems the connected $k$-center problem and the connected $k$-diameter problem. We prove several results on the complexity and approximability of these problems. Our main result is an $O(\log^2{k})$-approximation algorithm for the connected $k$-center and the connected $k$-diameter problem. For Euclidean metrics and metrics with constant doubling dimension, the approximation factor of this algorithm improves to $O(1)$. We also consider the special cases that the connectivity graph is a line or a tree. For the line we give optimal polynomial-time algorithms, and for the case that the connectivity graph is a tree, we either give an optimal polynomial-time algorithm or a $2$-approximation algorithm for all variants of our model. We complement our upper bounds with several lower bounds.
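The side constraint is a simple feasibility condition that can be checked cluster by cluster. A sketch (names are ours) verifies it by BFS inside each cluster:

```python
from collections import deque

def clusters_connected(clusters, adjacency):
    """Feasibility check for the connectivity side constraint: every
    cluster must induce a connected subgraph of the connectivity graph.
    adjacency maps each vertex to its neighbors."""
    for cluster in clusters:
        cluster = set(cluster)
        if not cluster:
            continue
        start = next(iter(cluster))
        seen, queue = {start}, deque([start])
        while queue:
            v = queue.popleft()
            for w in adjacency.get(v, ()):
                if w in cluster and w not in seen:
                    seen.add(w)
                    queue.append(w)
        if seen != cluster:       # some cluster vertex unreachable inside it
            return False
    return True
```

On the path $0 - 1 - 2 - 3$, the clustering $\{0,1\},\{2,3\}$ is feasible while $\{0,2\},\{1,3\}$ is not, even though both may have small radius in the underlying metric.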
It is not difficult to think of applications that can be modelled as graph problems in which placing some facility or commodity at a vertex has some positive or negative effect on the values of all vertices out to some distance, and we want to be able to quickly calculate the cumulative effect on any vertex's value at any time, or to list the most beneficial or most detrimental effects on a vertex. In this paper we show how, given an edge-weighted graph with constant-size separators, we can support the following operations in time polylogarithmic in the number of vertices and the number of facilities placed on the vertices, where distances between vertices are measured with respect to the edge weights:
- Add(v, f, w, d) places a facility f of weight w and effect radius d onto vertex v.
- Remove(v, f) removes from v a facility f previously placed there by Add.
- Sum(v) or Sum(v, d) returns the total weight of all facilities affecting v or, with a distance parameter d, the total weight of all facilities whose effect region intersects the ``circle'' with radius d around v.
- Top(v, k) or Top(v, k, d) returns the k facilities of greatest weight that affect v or, with a distance parameter d, whose effect region intersects the ``circle'' with radius d around v.
The weights of the facilities and the operation that Sum uses to ``sum'' them must form a semigroup. For Top queries, the weights must be drawn from a total order.
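The semantics of these operations can be pinned down by a naive reference implementation that scans all facilities and computes distances with Dijkstra's algorithm on every query (the class and method names are ours; the paper's separator-based structure answers the same queries in polylogarithmic time, which this sketch makes no attempt at):

```python
import heapq

class NaiveFacilities:
    """Naive reference semantics for Add/Remove/Sum/Top: every query scans
    all facilities and recomputes shortest-path distances from scratch."""
    def __init__(self, adjacency):
        self.adjacency = adjacency        # {v: [(u, edge_weight), ...]}
        self.facilities = {}              # facility id f -> (v, w, d)

    def add(self, v, f, w, d):
        self.facilities[f] = (v, w, d)    # weight w, effect radius d, at v

    def remove(self, v, f):
        del self.facilities[f]

    def _dist(self, s, t):
        """Dijkstra distance from s to t under the edge weights."""
        dist, heap = {s: 0}, [(0, s)]
        while heap:
            du, u = heapq.heappop(heap)
            if u == t:
                return du
            if du > dist[u]:
                continue
            for u2, w in self.adjacency.get(u, ()):
                nd = du + w
                if nd < dist.get(u2, float("inf")):
                    dist[u2] = nd
                    heapq.heappush(heap, (nd, u2))
        return float("inf")

    def sum(self, v):
        """Total weight of all facilities whose effect radius reaches v
        (here the semigroup operation is ordinary addition)."""
        return sum(w for (u, w, d) in self.facilities.values()
                   if self._dist(u, v) <= d)

    def top(self, v, k):
        """Weights of the k heaviest facilities affecting v."""
        hits = sorted((w for (u, w, d) in self.facilities.values()
                       if self._dist(u, v) <= d), reverse=True)
        return hits[:k]
```

Addition over numeric weights is just one admissible semigroup; max or min would serve equally well for Sum.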
We study the query version of the approximate heavy hitter and quantile problems. In the former problem, the input is a parameter $\varepsilon$ and a set $P$ of $n$ points in $\mathbb{R}^d$ where each point is assigned a color from a set $C$, and we want to build a structure s.t. given any geometric range $\gamma$, we can efficiently find a list of approximate heavy hitters in $\gamma\cap P$, i.e., colors that appear at least $\varepsilon |\gamma \cap P|$ times in $\gamma \cap P$, as well as their frequencies with an additive error of $\varepsilon |\gamma \cap P|$. In the latter problem, each point is assigned a weight from a totally ordered universe and the query must output a sequence $S$ of $1+1/\varepsilon$ weights s.t. the $i$-th weight in $S$ has approximate rank $i\varepsilon|\gamma\cap P|$, meaning, rank $i\varepsilon|\gamma\cap P|$ up to an additive error of $\varepsilon|\gamma\cap P|$. Previously, optimal results were only known in 1D [WY11], but a few sub-optimal methods were available in higher dimensions [AW17, ACH+12]. We study the problems for 3D halfspace and dominance queries. We consider the real RAM model with integer registers of size $w=\Theta(\log n)$ bits. For dominance queries, we show optimal solutions for both heavy hitter and quantile problems: using linear space, we can answer both queries in time $O(\log n + 1/\varepsilon)$. Note that as the output size is $\frac{1}{\varepsilon}$, after investing the initial $O(\log n)$ searching time, our structure takes on average $O(1)$ time to find a heavy hitter or a quantile! For more general halfspace heavy hitter queries, the same optimal query time can be achieved by increasing the space by an extra $\log_w\frac{1}{\varepsilon}$ (resp. $\log\log_w\frac{1}{\varepsilon}$) factor in 3D (resp. 2D). By spending extra $\log^{O(1)}\frac{1}{\varepsilon}$ factors in time and space, we can also support quantile queries.
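The target of the approximation is the exact heavy hitter set of a query range, which is trivial to compute once the range's points are listed (the function name is ours; the whole difficulty addressed above is answering this from a precomputed structure without scanning $\gamma \cap P$):

```python
from collections import Counter

def exact_heavy_hitters(colors_in_range, eps):
    """Exact heavy hitters of a query range: the colors occurring at least
    eps * |range| times among the points of the range, with their counts.
    The data structures above approximate these counts to within an
    additive eps * |range| without enumerating the range."""
    counts = Counter(colors_in_range)
    threshold = eps * len(colors_in_range)
    return {c: k for c, k in counts.items() if k >= threshold}
```

Note that at most $1/\varepsilon$ colors can pass the threshold, which is why $O(1/\varepsilon)$ output time per query is the natural target.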