A set $D \subseteq V$ of vertices of a graph $G=(V, E)$ is a dominating set of $G$ if each vertex $v\in V\setminus D$ is adjacent to at least one vertex in $D,$ whereas a set $D_2\subseteq V$ is a $2$-dominating (double dominating) set of $G$ if each vertex $v\in V \setminus D_2$ is adjacent to at least two vertices in $D_2.$ A graph $G$ is a $DD_2$-graph if there exists a pair $(D, D_2)$ of a dominating set and a $2$-dominating set of $G$ that are disjoint. In this paper, we solve some open problems posed by M.~Miotk, J.~Topp, and P.~{\.Z}yli{\'n}ski (Disjoint dominating and 2-dominating sets in graphs, Discrete Optimization, 35:100553, 2020) by giving approximation algorithms for the problem of determining a minimal spanning $DD_2$-graph of minimum size (Min-$DD_2$) with an approximation ratio of $3$; for the problem of determining a minimal spanning $DD_2$-graph of maximum size (Max-$DD_2$) with an approximation ratio of $3$; and for the problem of adding a minimum number of edges to a graph $G$ to make it a $DD_2$-graph (Min-to-$DD_2$) with an $O(\log n)$ approximation ratio. Furthermore, we prove that Min-$DD_2$ and Max-$DD_2$ are APX-complete for graphs with maximum degree $4$. We also show that Min-$DD_2$ and Max-$DD_2$ are approximable within factors of $1.8$ and $1.5$, respectively, for any $3$-regular graph. Finally, we prove an inapproximability result for Max-Min-to-$DD_2$ on bipartite graphs: this problem cannot be approximated within $n^{\frac{1}{6}-\varepsilon}$ for any $\varepsilon >0,$ unless P$=$NP.
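To make the defining pair concrete, here is a minimal Python sketch (ours, not from the paper) that checks whether a given pair $(D, D_2)$ witnesses the $DD_2$ property; the adjacency-dict representation and function names are illustrative.

```python
# Check that (D, D2) is a disjoint pair of a dominating set and a
# 2-dominating set, per the definitions above.

def is_dominating(adj, D):
    # every vertex outside D needs at least one neighbor in D
    return all(any(u in D for u in adj[v]) for v in adj if v not in D)

def is_2_dominating(adj, D2):
    # every vertex outside D2 needs at least two neighbors in D2
    return all(sum(u in D2 for u in adj[v]) >= 2 for v in adj if v not in D2)

def is_dd2_pair(adj, D, D2):
    return D.isdisjoint(D2) and is_dominating(adj, D) and is_2_dominating(adj, D2)

# Example: the 4-cycle 0-1-2-3-0 is a DD2-graph via D = {0, 2}, D2 = {1, 3}.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert is_dd2_pair(C4, {0, 2}, {1, 3})
```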
The random walk $d$-ary cuckoo hashing algorithm was defined by Fotakis, Pagh, Sanders, and Spirakis to generalize and improve upon the standard cuckoo hashing algorithm of Pagh and Rodler. Random walk $d$-ary cuckoo hashing has low space overhead, guaranteed fast access, and insertion times that are fast in practice. In this paper, we give a theoretical insertion time bound for this algorithm. More precisely, for every $d\ge 3$ hashes, let $c_d^*$ be the sharp threshold for the load factor at which a valid assignment of $cm$ objects to a hash table of size $m$ exists with high probability. We show that for any $d\ge 4$ hashes and load factor $c<c_d^*$, the expectation of the random walk insertion time is $O(1)$, that is, a constant depending only on $d$ and $c$ but not on $m$.
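As a point of reference for the algorithm being analyzed, here is a hedged Python sketch of random-walk $d$-ary cuckoo insertion; the hash choice via a seeded `random.Random` and the step cap are toy stand-ins, not the paper's construction.

```python
import random

def candidates(key, d, m):
    # toy stand-in for d independent hash functions into a table of size m
    rng = random.Random(key)
    return [rng.randrange(m) for _ in range(d)]

def insert(table, key, d, max_steps=10_000):
    # random-walk insertion: take a free candidate slot if one exists,
    # otherwise evict a uniformly random candidate and re-insert it
    for _ in range(max_steps):
        slots = candidates(key, d, len(table))
        for s in slots:
            if table[s] is None:
                table[s] = key
                return True
        s = random.choice(slots)
        table[s], key = key, table[s]   # displaced key continues the walk
    return False                        # step cap exceeded (rare below c_d^*)
```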
Given a metric space $(X,d_X)$, a $(\beta,s,\Delta)$-sparse cover is a collection of clusters $\mathcal{C}\subseteq P(X)$, each of diameter at most $\Delta$, such that for every point $x\in X$, the ball $B_X(x,\frac\Delta\beta)$ is fully contained in some cluster $C\in \mathcal{C}$, and $x$ belongs to at most $s$ clusters in $\mathcal{C}$. Our main contribution is to show that the shortest-path metric of every $K_r$-minor-free graph admits an $(O(r),O(r^2),\Delta)$-sparse cover and, for every $\epsilon>0$, a $(4+\epsilon,O(\frac1\epsilon)^r,\Delta)$-sparse cover (for arbitrary $\Delta>0$). We then use these sparse covers to show that every $K_r$-minor-free graph embeds into $\ell_\infty^{\tilde{O}(\frac1\epsilon)^{r+1}\cdot\log n}$ with distortion $3+\epsilon$ (resp.\ into $\ell_\infty^{\tilde{O}(r^2)\cdot\log n}$ with distortion $O(r)$). Further, we provide applications of these sparse covers to padded decompositions, sparse partitions, universal TSP / Steiner tree, oblivious buy-at-bulk, name-independent routing, and path-reporting distance oracles.
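As a toy illustration of the definition (our example, not from the paper), the real line admits a simple sparse cover by overlapping intervals:
\[
\mathcal{C}=\Bigl\{\,C_j=\bigl[\tfrac{j\Delta}{2},\ \tfrac{j\Delta}{2}+\Delta\bigr] : j\in\mathbb{Z}\Bigr\}
\]
is a $(4,3,\Delta)$-sparse cover of $(\mathbb{R},|\cdot|)$: each interval has diameter $\Delta$; each $x$ lies in at most three intervals, since $x\in C_j$ forces $j\in[\frac{2x}{\Delta}-2,\frac{2x}{\Delta}]$; and $B(x,\frac\Delta4)\subseteq C_j$ whenever $\frac{j\Delta}{2}\in[x-\frac{3\Delta}{4},\,x-\frac{\Delta}{4}]$, an interval of length $\frac\Delta2$ that always contains a multiple of $\frac\Delta2$.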
Given a metric space $(V, d)$ along with an integer $k$, the $k$-Median problem asks to open $k$ centers $C \subseteq V$ to minimize $\sum_{v \in V} d(v, C)$, where $d(v, C) := \min_{c \in C} d(v, c)$. While the best-known approximation ratio of $2.613$ holds for the more general supplier version, where an additional set $F \subseteq V$ is given with the restriction $C \subseteq F$, the best-known hardness results for these two versions are $1+1/e \approx 1.36$ and $1+2/e \approx 1.73$, respectively, both via the same reduction from Max $k$-Coverage. We prove the following two results separating the two versions. First, we give a parameterized $1.546$-approximation algorithm that runs in time $f(k)\,n^{O(1)}$. Since $1+2/e$ is known to be the optimal approximation ratio for the supplier version in the parameterized setting, this result separates the original $k$-Median from the supplier version. Next, we prove a $1.416$-hardness for polynomial-time algorithms assuming the Unique Games Conjecture. This is achieved via a new fine-grained hardness result for Max $k$-Coverage with small set sizes. Our upper bound and lower bound are derived from almost the same expression, with the only difference coming from the well-known separation between the powers of LP and SDP on (hypergraph) vertex cover.
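The objective itself is simple to state in code; the following minimal Python sketch (ours, with an illustrative toy metric) evaluates the $k$-Median cost of a candidate center set.

```python
def k_median_cost(points, centers, d):
    # each point pays its distance to the closest open center
    return sum(min(d(v, c) for c in centers) for v in points)

# toy usage on the real line with d(x, y) = |x - y| and k = 2 centers
pts = [0.0, 1.0, 2.0, 10.0, 11.0]
print(k_median_cost(pts, [1.0, 10.5], d=lambda x, y: abs(x - y)))  # 3.0
```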
In this note we give proofs for results, originally stated in Richardson & Robins (2014), "ACE Bounds; SEMS with Equilibrium Conditions," arXiv:1410.0470, relating to the Instrumental Variable (IV) model with binary response $Y$ and binary treatment $X$, but with an instrument $Z$ that takes $K$ states.
In this work we investigate the Weihrauch degree of the problem $\mathsf{DS}$ of finding an infinite descending sequence through a given ill-founded linear order, a degree it shares with the problem $\mathsf{BS}$ of finding a bad sequence through a given non-well quasi-order. We show that $\mathsf{DS}$, despite being hard to solve (it has computable inputs with no hyperarithmetic solution), is rather weak in terms of uniform computational strength. To make the latter precise, we introduce the notion of the deterministic part of a Weihrauch degree. We then generalize $\mathsf{DS}$ and $\mathsf{BS}$ by considering $\boldsymbol{\Gamma}$-presented orders, where $\boldsymbol{\Gamma}$ is a Borel pointclass or one of $\boldsymbol{\Delta}^1_1$, $\boldsymbol{\Sigma}^1_1$, $\boldsymbol{\Pi}^1_1$. We study the resulting $\mathsf{DS}$- and $\mathsf{BS}$-hierarchies of problems in comparison with the (effective) Baire hierarchy and show that they do not collapse at any finite level.
Let $\hat\Sigma=\frac{1}{n}\sum_{i=1}^n X_i\otimes X_i$ denote the sample covariance operator of centered i.i.d.~observations $X_1,\dots,X_n$ in a real separable Hilbert space, and let $\Sigma=\mathbb{E}(X_1\otimes X_1)$. The focus of this paper is to understand how well the bootstrap can approximate the distribution of the operator norm error $\sqrt n\|\hat\Sigma-\Sigma\|_{\text{op}}$, in settings where the eigenvalues of $\Sigma$ decay as $\lambda_j(\Sigma)\asymp j^{-2\beta}$ for some fixed parameter $\beta>1/2$. Our main result shows that the bootstrap can approximate the distribution of $\sqrt n\|\hat\Sigma-\Sigma\|_{\text{op}}$ at a rate of order $n^{-\frac{\beta-1/2}{2\beta+4+\epsilon}}$ with respect to the Kolmogorov metric, for any fixed $\epsilon>0$. In particular, this shows that the bootstrap can achieve near $n^{-1/2}$ rates in the regime of large $\beta$ -- which substantially improves on previous near $n^{-1/6}$ rates in the same regime. In addition to obtaining faster rates, our analysis leverages a fundamentally different perspective based on coordinate-free techniques. Moreover, our result holds in greater generality, and we propose a model that is compatible with both elliptical and Mar\v{c}enko-Pastur models in high-dimensional Euclidean spaces, which may be of independent interest.
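For intuition, here is a hedged finite-dimensional NumPy sketch of the nonparametric bootstrap for the operator-norm error statistic; the resampling scheme and names are illustrative stand-ins, not the paper's exact construction.

```python
import numpy as np

def bootstrap_opnorm(X, B=500, seed=0):
    # X: (n, p) array of centered observations; a finite-dimensional
    # stand-in for the Hilbert-space setting above
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    S_hat = X.T @ X / n                      # sample covariance
    stats = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)     # resample rows with replacement
        Xb = X[idx]
        S_b = Xb.T @ Xb / n
        # bootstrap analogue of sqrt(n) * ||Sigma_hat - Sigma||_op
        stats[b] = np.sqrt(n) * np.linalg.norm(S_b - S_hat, ord=2)
    return stats                             # e.g., np.quantile(stats, 0.95)
```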
We provide a framework to analyze the convergence of discretized kinetic Langevin dynamics for $M$-$\nabla$Lipschitz, $m$-convex potentials. Our approach gives convergence rates of $\mathcal{O}(m/M)$, with explicit stepsize restrictions, which are of the same order as the stability threshold for Gaussian targets and are valid for a large interval of the friction parameter. We apply this methodology to various integration schemes which are popular in the molecular dynamics and machine learning communities. Finally, we introduce the property "$\gamma$-limit convergent" (GLC) to characterize underdamped Langevin schemes that converge to overdamped dynamics in the high-friction limit and have stepsize restrictions that are independent of the friction parameter; we show that this property is not generic by exhibiting methods from both the class and its complement. We further provide asymptotic bias estimates for the BAOAB scheme, which remain accurate in the high-friction limit, by comparison with a modified stochastic dynamics that preserves the invariant measure.
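For concreteness, here is a minimal NumPy sketch of one BAOAB step for the unit-mass, unit-temperature kinetic Langevin equation $dq = p\,dt$, $dp = -\nabla U(q)\,dt - \gamma p\,dt + \sqrt{2\gamma}\,dW$; the splitting is the standard one, but the parameter names are ours.

```python
import numpy as np

def baoab_step(q, p, grad_U, h, gamma, rng):
    p = p - 0.5 * h * grad_U(q)     # B: half-step momentum kick
    q = q + 0.5 * h * p             # A: half-step position drift
    c = np.exp(-gamma * h)          # O: exact Ornstein-Uhlenbeck flow in p
    p = c * p + np.sqrt(1.0 - c * c) * rng.standard_normal(np.shape(p))
    q = q + 0.5 * h * p             # A: half-step position drift
    p = p - 0.5 * h * grad_U(q)     # B: half-step momentum kick
    return q, p
```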
This paper shows that computing $k$-CLIQUE on $n$-vertex graphs requires the AND of at least $2^{n/4k}$ monotone, constant-depth, polynomial-size circuits, for sufficiently large values of $k$. The proof relies on a new monotone, one-sided switching lemma designed for cliques.
For an odd prime $p$, we say $f(X) \in {\mathbb F}_p[X]$ computes square roots in $\mathbb F_p$ if, for all nonzero perfect squares $a \in \mathbb F_p$, we have $f(a)^2 = a$. When $p \equiv 3 \mod 4$, it is well known that $f(X) = X^{(p+1)/4}$ computes square roots. This degree is surprisingly low (and in fact lowest possible), since we have specified $(p-1)/2$ evaluations (up to sign) of the polynomial $f(X)$. On the other hand, for $p \equiv 1 \mod 4$ there was previously no nontrivial bound known on the lowest degree of a polynomial computing square roots in $\mathbb F_p$; it could have been anywhere between $\frac{p}{4}$ and $\frac{p}{2}$. We show that for all $p \equiv 1 \mod 4$, any polynomial computing square roots in $\mathbb F_p$ has degree at least $p/3$. Our main new ingredient is a general lemma which may be of independent interest: powers of a low-degree polynomial cannot have too many consecutive zero coefficients. The proof method also yields a robust version: any polynomial that computes square roots for 99\% of the squares also has degree almost $p/3$. In the other direction, a result of Agou, Del\'eglise, and Nicolas (Designs, Codes and Cryptography, 2003) shows that for infinitely many $p \equiv 1 \mod 4$, the degree of a polynomial computing square roots can be as small as $3p/8$.
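The $p \equiv 3 \mod 4$ case mentioned above is a one-line computation; this small Python check (ours) verifies that $a^{(p+1)/4}$ squares back to $a$ on every nonzero square modulo a toy prime.

```python
p = 23                                    # 23 ≡ 3 (mod 4)
for a in range(1, p):
    if pow(a, (p - 1) // 2, p) == 1:      # a is a nonzero square mod p (Euler)
        r = pow(a, (p + 1) // 4, p)       # f(a) = a^{(p+1)/4}
        assert r * r % p == a             # f(a)^2 = a
```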
We introduce a new class of algorithms for finding a short vector in lattices defined by codes of co-dimension $k$ over $\mathbb{Z}_P^d$, where $P$ is prime. The co-dimension $1$ case is solved by exploiting the packing properties of the projections mod $P$ of an initial set of non-lattice vectors onto a single dual codeword. The technical tools we introduce are sorting of the projections followed by single-step pairwise Euclidean reduction of the projections, resulting in monotonic convergence of the positive-valued projections to zero. The length of the vectors grows by a geometric factor in each iteration. For fixed $P$ and $d$, and large enough user-defined input sets, we show that it is possible to minimize the number of iterations, and thus the overall length expansion factor, to obtain a short lattice vector. Thus we obtain a novel approach for controlling the output length, which resolves an open problem posed by Noah Stephens-Davidowitz (the possibility of an approximation scheme for the shortest-vector problem (SVP) which does not reduce to near-exact SVP). In our approach, one may obtain short vectors even when the lattice dimension is quite large, e.g., 8000. For fixed $P$, the algorithm yields shorter vectors for larger $d$. We additionally present a number of extensions and generalizations of our fundamental co-dimension $1$ method. These include a method for obtaining many different lattice vectors by multiplying the dual codeword by an integer and then reducing modulo $P$; a co-dimension $k$ generalization; a large input set generalization; and finally, a "block" generalization, which involves the replacement of pairwise (Euclidean) reduction by a $k$-party (non-Euclidean) reduction. The $k$-block generalization of our algorithm constitutes a class of polynomial-time algorithms indexed by $k\geq 2$, which yield successively improved approximations for the shortest-vector problem.
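A heavily hedged toy of the co-dimension-$1$ step described above (our reconstruction from the abstract, not the authors' code): vectors are tracked by their projections $\langle w, x\rangle \bmod P$ onto the dual codeword $w$; sorting and single-step adjacent differencing drives a projection to zero, at which point the corresponding vector lies in the lattice $\{x : \langle w, x\rangle \equiv 0 \bmod P\}$.

```python
def toy_codim1(w, P, vecs, max_iters=100):
    # pair each vector with its projection <w, x> mod P
    proj = lambda x: sum(wi * xi for wi, xi in zip(w, x)) % P
    pool = [(proj(x), x) for x in vecs]
    for _ in range(max_iters):
        pool.sort(key=lambda t: t[0])            # sort by projection
        if pool[0][0] == 0:                      # zero projection: lattice vector
            return pool[0][1]
        nxt = [pool[0]]
        for (a, x), (b, y) in zip(pool[1:], pool):
            # single-step pairwise reduction of adjacent projections:
            # a >= b after sorting, so the difference has projection a - b
            nxt.append((a - b, [xi - yi for xi, yi in zip(x, y)]))
        pool = nxt
    return None                                  # no lattice vector found
```

Note that with more input vectors than residues, two equal projections must appear by pigeonhole, so some difference reaches projection zero, mirroring the monotone convergence of the positive projections; the geometric growth of vector lengths per iteration is also visible in the repeated differencing.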