A \emph{covering array} is an $N \times k$ array of elements from a $v$-ary alphabet such that every $N \times t$ subarray contains each of the $v^t$ tuples of length $t$ over the alphabet at least $\lambda$ times; this is denoted $\CA_\lambda(N; t, k, v)$. Covering arrays have applications in the testing of large-scale complex systems; in systems that are nondeterministic, increasing $\lambda$ gives greater confidence in the system's correctness. The \emph{covering array number}, $\CAN_\lambda(t,k,v)$, is the smallest number of rows for which a covering array on the other parameters exists. For general $\lambda$, only a few nontrivial bounds are known; asymptotically the smallest is $\log k + \lambda \log \log k + o(\lambda)$ when $v$ and $t$ are fixed, and it has been conjectured that the $\log \log k$ term can be removed. First, we affirm the conjecture by deriving an asymptotically optimal bound for $\CAN_\lambda(t,k,v)$ for general $\lambda$ and constant $v, t$ using the Stein--Lov\'asz--Johnson paradigm. Second, we improve upon the constants of this method using the Lov\'asz local lemma. Third, when $\lambda=2$, we extend a two-stage paradigm of Sarkar and Colbourn that improves on the general bound and often produces better bounds than other methods achieve even for $\lambda=1$. Fourth, we extend this two-stage paradigm to general $\lambda$, in part via graph coloring, to obtain an even stronger upper bound. Finally, we determine a bound on how large $\lambda$ can be when the number of rows is fixed.
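To make the definition concrete, here is a minimal sketch (ours, not from the paper) that brute-force checks the covering property; the example array and parameters are illustrative.

```python
# A brute-force check that an N x k array over a v-ary alphabet is a
# CA_lambda(N; t, k, v): every N x t column projection must contain each
# of the v^t length-t tuples at least lambda times.
from itertools import combinations, product
from collections import Counter

def is_covering_array(A, t, v, lam=1):
    k = len(A[0])
    for cols in combinations(range(k), t):
        counts = Counter(tuple(row[c] for c in cols) for row in A)
        if any(counts[tup] < lam for tup in product(range(v), repeat=t)):
            return False
    return True

# Example: 4 rows suffice for a CA_1(4; 2, 3, 2).
A = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_covering_array(A, t=2, v=2, lam=1))  # True
print(is_covering_array(A, t=2, v=2, lam=2))  # False: each pair appears once
```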
The Gromov--Hausdorff distance measures the difference in shape between compact metric spaces, and computing it is a notoriously difficult combinatorial optimization problem. We introduce a quadratic relaxation of it over a convex polytope whose solutions provably deliver the Gromov--Hausdorff distance. The optimality guarantee is enabled by the fact that the search space of our approach is not constrained to a generalization of bijections, unlike in other relaxations such as the Gromov--Wasserstein distance. We suggest solving the relaxation with the Frank--Wolfe algorithm, whose iterations take $O(n^3)$ time, and numerically demonstrate its performance on metric spaces of hundreds of points. In particular, we obtain a new upper bound on the Gromov--Hausdorff distance between the unit circle and the unit hemisphere equipped with the Euclidean metric. Our approach is implemented as the Python package dGH.
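As a hedged illustration of the optimization workhorse (not the paper's polytope or objective, and not the dGH API), here is a generic Frank--Wolfe loop minimizing a convex quadratic over the probability simplex, where the linear minimization oracle is a single vertex lookup.

```python
# Generic Frank-Wolfe: minimize x^T Q x over the probability simplex.
# The simplex stands in for the paper's polytope; only the feasible set
# and objective would change for the actual relaxation.
import numpy as np

def frank_wolfe(Q, n_iters=200):
    n = Q.shape[0]
    x = np.full(n, 1.0 / n)                  # feasible starting point
    for k in range(n_iters):
        grad = (Q + Q.T) @ x                 # gradient of x^T Q x
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0             # LP oracle: best simplex vertex
        gamma = 2.0 / (k + 2.0)              # standard step size
        x = (1 - gamma) * x + gamma * s      # convex combination stays feasible
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
print(frank_wolfe(M @ M.T))                  # PSD Q => convex problem
```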
We consider the problem of deriving upper bounds on the parameters of sum-rank-metric codes, with a focus on their dimension and block length. The sum-rank metric is a combination of the Hamming and the rank metric, and most of the available techniques to investigate it seem unable to fully capture its hybrid nature. In this paper, we introduce a new approach based on sum-rank-metric graphs, in which the vertices are tuples of matrices over a finite field, and two such tuples are connected when their sum-rank distance is equal to one. We establish various structural properties of sum-rank-metric graphs and combine them with eigenvalue techniques to obtain bounds on the cardinality of sum-rank-metric codes. The bounds we derive improve on the best known bounds for several choices of the parameters. While our bounds are explicit only for small values of the minimum distance, they clearly indicate that spectral theory is able to capture the nature of the sum-rank metric better than the currently available methods. They also allow us to establish new non-existence results for (possibly nonlinear) MSRD codes.
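For concreteness, the sum-rank distance between two tuples of matrices is the sum of the ranks of the blockwise differences; a small sketch over $\mathbb{F}_2$ (ours, for illustration) follows.

```python
# Sum-rank distance over GF(2): sum of ranks of the blockwise differences.
# Ranks must be computed over GF(2) (numpy's matrix_rank works over the
# reals), so we row-reduce modulo 2 by hand.
import numpy as np

def rank_gf2(M):
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        pivots = np.nonzero(M[r:, c])[0]
        if pivots.size == 0:
            continue
        M[[r, r + pivots[0]]] = M[[r + pivots[0], r]]      # move pivot up
        mask = (M[:, c] == 1) & (np.arange(M.shape[0]) != r)
        M[mask] ^= M[r]                                    # clear column c
        r += 1
        if r == M.shape[0]:
            break
    return r

def sum_rank_distance(X, Y):
    return sum(rank_gf2((A - B) % 2) for A, B in zip(X, Y))

X = [np.eye(2, dtype=int), np.zeros((2, 2), dtype=int)]
Y = [np.eye(2, dtype=int), np.ones((2, 2), dtype=int)]
print(sum_rank_distance(X, Y))   # 0 + rank(all-ones) = 1: adjacent vertices
```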
In this paper we propose a method to approximate the Gaussian function on ${\mathbb R}$ by a short cosine sum. We extend the differential approximation method proposed in [4,39] to approximate $\mathrm{e}^{-t^{2}/2\sigma}$ in the weighted space $L_2({\mathbb R}, \mathrm{e}^{-t^{2}/2\rho})$ where $\sigma, \rho > 0$. We prove that the optimal frequency parameters $\lambda_1, \ldots, \lambda_{N}$ for this method in the approximation problem $\min\limits_{\lambda_{1},\ldots, \lambda_{N}, \gamma_{1}, \ldots, \gamma_{N}}\|\mathrm{e}^{-\cdot^{2}/2\sigma} - \sum\limits_{j=1}^{N} \gamma_{j} \, {\mathrm e}^{\lambda_{j} \cdot}\|_{L_{2}({\mathbb R}, \mathrm{e}^{-t^{2}/2\rho})}$ are zeros of a scaled Hermite polynomial. This observation leads us to a numerically stable approximation method with a low computational cost of $O(N^{3})$ operations. Furthermore, we derive a direct algorithm to solve this approximation problem based on a matrix pencil method for a specially structured matrix, whose entries are determined by hypergeometric functions. For the weighted $L_{2}$-norm, we prove that the approximation error decays exponentially with respect to the length $N$ of the sum. An exponentially decaying error in the (unweighted) $L_{2}$-norm is achieved using a truncated cosine sum.
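The following toy sketch (ours; the Hermite scaling is a placeholder, not the paper's) illustrates the idea: take frequencies from Hermite polynomial zeros and fit the cosine coefficients by discretized least squares.

```python
# Illustrative only: approximate exp(-t^2/2) by a short cosine sum whose
# frequencies are the positive zeros of a Hermite polynomial (here the
# unscaled degree-2N physicists' Hermite polynomial; the paper derives the
# proper scaling), with coefficients fitted by least squares on a grid.
import numpy as np
from numpy.polynomial.hermite import hermroots

N = 6
t = np.linspace(-8, 8, 4001)
target = np.exp(-t**2 / 2)

roots = hermroots([0] * (2 * N) + [1])    # zeros of H_{2N}
freqs = roots[roots > 0]                  # the N positive zeros as frequencies

V = np.cos(np.outer(t, freqs))            # design matrix of cosines
gamma, *_ = np.linalg.lstsq(V, target, rcond=None)
err = np.max(np.abs(V @ gamma - target))
print(f"N = {N}, max abs error on the grid: {err:.2e}")
```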
In this paper, we study a general low-rank matrix recovery problem with linear measurements corrupted by some noise. The objective is to understand under what conditions on the restricted isometry property (RIP) of the problem, local search methods can find the ground truth with a small error. By analyzing the landscape of the non-convex problem, we first propose a global guarantee on the maximum distance between an arbitrary local minimizer and the ground truth under the assumption that the RIP constant is smaller than $1/2$. We show that this distance shrinks to zero as the noise intensity decreases. Our new guarantee is sharp in terms of the RIP constant and is much stronger than the existing results. We then present a local guarantee for problems with an arbitrary RIP constant, which states that any local minimizer is either close to the ground truth or far away from it. Next, we prove the strict saddle property, which guarantees the global convergence of the perturbed gradient descent method in polynomial time. The developed results demonstrate how the noise intensity and the RIP constant affect the landscape of the problem.
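As a toy illustration of the non-convex formulation (ours; a noiseless instance with Gaussian measurements, with no claim that it meets the paper's RIP conditions), plain gradient descent on the factored objective already recovers the ground truth:

```python
# Matrix sensing via the factored objective f(U) = 0.5*||A(UU^T) - b||^2.
# Toy noiseless instance; gradient descent from a small random init.
import numpy as np

rng = np.random.default_rng(1)
n, r, m = 8, 2, 120
U_star = rng.standard_normal((n, r)) / np.sqrt(n)
M_star = U_star @ U_star.T                       # rank-r ground truth
A = rng.standard_normal((m, n, n)) / np.sqrt(m)  # measurement matrices
A = (A + A.transpose(0, 2, 1)) / 2               # symmetrize for clean grads
b = np.einsum('mij,ij->m', A, M_star)            # noiseless measurements

U = 0.1 * rng.standard_normal((n, r))
for _ in range(2000):
    R = np.einsum('mij,ij->m', A, U @ U.T) - b   # residuals
    G = 2 * np.einsum('m,mij->ij', R, A) @ U     # gradient wrt U
    U -= 0.02 * G
print(np.linalg.norm(U @ U.T - M_star) / np.linalg.norm(M_star))
```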
We study the fundamental problem of fairly allocating a set of indivisible goods among $n$ agents with additive valuations using the desirable fairness notion of maximin share (MMS). MMS is the most popular share-based notion, in which an agent finds an allocation fair to her if she receives goods worth at least her MMS value. An allocation is called MMS if all agents receive at least their MMS value. Since MMS allocations need not exist when $n > 2$, a series of works showed the existence of approximate MMS allocations with the current best factor of $\frac{3}{4} + O(\frac{1}{n})$. However, a simple example in [DFL82, BEF21, AGST23] showed the limitations of existing approaches and proved that they cannot improve this factor to $\frac{3}{4} + \Omega(1)$. In this paper, we bypass these barriers to show the existence of $(\frac{3}{4} + \frac{3}{3836})$-MMS allocations by developing new reduction rules and analysis techniques.
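To ground the definition, an agent's MMS value is the best minimum-bundle value over all partitions of the goods into $n$ bundles; a brute-force sketch (ours, feasible only for small instances) is below.

```python
# Brute-force MMS value of one agent with additive valuations: maximize,
# over all partitions of the goods into n bundles, the least bundle value.
from itertools import product

def mms_value(values, n):
    best = 0
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0] * n
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        best = max(best, min(bundles))
    return best

# Goods worth 4, 4, 2, 5, 3 split among n = 3 bundles: the total is 18,
# but no partition makes every bundle worth 6, so the MMS value is 5
# (e.g., {5}, {4, 2}, {4, 3}).
print(mms_value([4, 4, 2, 5, 3], n=3))  # 5
```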
We study the query complexity of geodesically convex (g-convex) optimization on a manifold. To isolate the effect of the manifold's curvature, we primarily focus on hyperbolic spaces. In a variety of settings (smooth or not; strongly g-convex or not; high- or low-dimensional), known upper bounds worsen with curvature. It is natural to ask whether this is warranted, or an artifact. For many such settings, we propose a first set of lower bounds which indeed confirm that (negative) curvature is detrimental to complexity. To do so, we build on recent lower bounds (Hamilton and Moitra, 2021; Criscitiello and Boumal, 2022) for the particular case of smooth, strongly g-convex optimization. Using a number of techniques, we also secure lower bounds which capture dependence on the condition number and the optimality gap, which earlier bounds did not. We suspect these bounds are not optimal. We conjecture optimal ones, and support them with a matching lower bound for a class of algorithms which includes subgradient descent, and a lower bound for a related game. Lastly, to pinpoint the difficulty of proving lower bounds, we study how negative curvature influences (and sometimes obstructs) interpolation with g-convex functions.
In this paper, we present a low-diameter decomposition algorithm in the LOCAL model of distributed computing that succeeds with probability $1 - 1/\mathrm{poly}(n)$. Specifically, we show how to compute an $\left(\epsilon, O\left(\frac{\log n}{\epsilon}\right)\right)$ low-diameter decomposition in $O\left(\frac{\log^3(1/\epsilon)\log n}{\epsilon}\right)$ rounds. Further developing our techniques, we show new distributed algorithms for approximating general packing and covering integer linear programs in the LOCAL model. For packing problems, our algorithm finds a $(1-\epsilon)$-approximate solution in $O\left(\frac{\log^3 (1/\epsilon) \log n}{\epsilon}\right)$ rounds with probability $1 - 1/\mathrm{poly}(n)$. For covering problems, our algorithm finds a $(1+\epsilon)$-approximate solution in $O\left(\frac{\left(\log \log n + \log (1/\epsilon)\right)^3 \log n}{\epsilon}\right)$ rounds with probability $1 - 1/\mathrm{poly}(n)$. These results improve upon the previous $O\left(\frac{\log^3 n}{\epsilon}\right)$-round algorithm by Ghaffari, Kuhn, and Maus [STOC 2017], which is based on network decompositions. Our algorithms are near-optimal for many fundamental combinatorial graph optimization problems in the LOCAL model, such as minimum vertex cover and minimum dominating set, as their $(1\pm \epsilon)$-approximate solutions require $\Omega\left(\frac{\log n}{\epsilon}\right)$ rounds to compute.
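For intuition about the object itself (not the paper's distributed algorithm), a classic sequential ball-carving sketch produces an $(\epsilon, O(\frac{\log n}{\epsilon}))$ decomposition: carving BFS balls with geometric radii cuts each edge with probability $O(\epsilon)$, and radii are $O(\frac{\log n}{\epsilon})$ with high probability.

```python
# Sequential ball-carving low-diameter decomposition (classic technique,
# not the distributed LOCAL algorithm of the paper). Each BFS level is
# the last with probability eps, giving a geometric radius.
import random
from collections import deque

def ldd(adj, eps, seed=0):
    rng = random.Random(seed)
    cluster = [-1] * len(adj)
    cid = 0
    for s in range(len(adj)):
        if cluster[s] != -1:
            continue
        radius = 0
        while rng.random() > eps:          # geometric(eps) radius
            radius += 1
        cluster[s] = cid
        frontier = deque([(s, 0)])
        while frontier:                    # BFS over unclustered vertices
            u, d = frontier.popleft()
            if d == radius:
                continue
            for v in adj[u]:
                if cluster[v] == -1:
                    cluster[v] = cid
                    frontier.append((v, d + 1))
        cid += 1
    return cluster

adj = [[(i - 1) % 12, (i + 1) % 12] for i in range(12)]   # 12-cycle
print(ldd(adj, eps=0.3))
```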
In this paper, we study the weighted $k$-server problem on the uniform metric in both the offline and online settings. We start with the offline setting. In contrast to the (unweighted) $k$-server problem, which has a polynomial-time solution using min-cost flows, there are strong computational lower bounds for the weighted $k$-server problem, even on the uniform metric. Specifically, we show that, assuming the unique games conjecture, there are no polynomial-time algorithms with a sub-polynomial approximation factor, even if we use $c$-resource augmentation for $c < 2$. Furthermore, if we consider the natural LP relaxation of the problem, then obtaining a bounded integrality gap requires resource augmentation of at least $\ell$, where $\ell$ is the number of distinct server weights. We complement these results by obtaining a constant-approximation algorithm via LP rounding, with a resource augmentation of $(2+\epsilon)\ell$ for any constant $\epsilon > 0$. In the online setting, an $\exp(k)$ lower bound is known for the competitive ratio of any randomized algorithm for the weighted $k$-server problem on the uniform metric. In contrast, we show that $2\ell$-resource augmentation can bring the competitive ratio down by an exponential factor to only $O(\ell^2 \log \ell)$. Our online algorithm uses the two-stage approach of first obtaining a fractional solution using the online primal-dual framework, and then rounding it online.
While most theoretical run time analyses of discrete randomized search heuristics have focused on finite search spaces, we consider the search space $\mathbb{Z}^n$. This is a further generalization of the search space of multi-valued decision variables $\{0,\ldots,r-1\}^n$. We consider as fitness functions the distance to the (unique) non-zero optimum $a$ (based on the $L_1$-metric) and the $(1+1)$~EA, which mutates by applying a step operator to each component that is chosen to be varied. For steps of $\pm 1$, we show that the expected optimization time is $\Theta(n \cdot (|a|_{\infty} + \log(|a|_H)))$. In particular, the time is linear in the maximum value of the optimum $a$. Employing a different step operator which chooses a step size from a distribution so heavy-tailed that the expectation is infinite, we get an optimization time of $O(n \cdot \log^2 (|a|_1) \cdot \left(\log (\log (|a|_1))\right)^{1 + \epsilon})$. Furthermore, we show that RLS with step size adaptation achieves an optimization time of $\Theta(n \cdot \log(|a|_1))$. We conclude with an empirical analysis that also compares the above algorithms with a variant of CMA-ES for discrete search spaces.
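A minimal sketch of the two step operators (ours; a simplified RLS-style loop varying one component per iteration, purely for illustration):

```python
# Minimize the L1 distance to a target a over Z^n with a local search that
# varies one uniformly random component per step. The +/-1 operator moves
# by one; the heavy-tailed operator draws X with P(X >= s) = 1/s, so E[X]
# is infinite.
import random

def run(a, step, max_iters=10**6, seed=0):
    rng = random.Random(seed)
    n = len(a)
    x = [0] * n
    f = sum(abs(xi - ai) for xi, ai in zip(x, a))
    for t in range(1, max_iters + 1):
        i = rng.randrange(n)
        y = x[:]
        y[i] += rng.choice([-1, 1]) * step(rng)
        g = sum(abs(yi - ai) for yi, ai in zip(y, a))
        if g <= f:                                   # accept if not worse
            x, f = y, g
        if f == 0:
            return t
    return max_iters

plus_minus_one = lambda rng: 1
heavy_tail = lambda rng: int(1 / (1 - rng.random()))  # P(X >= s) = 1/s

a = [50] * 10
print(run(a, plus_minus_one, seed=1))   # roughly linear in |a|_infty
print(run(a, heavy_tail, seed=1))       # large jumps help reach distant optima
```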
In this work, we give a statistical characterization of the $\gamma$-regret for arbitrary structured bandit problems, that is, the regret which arises when comparing against a benchmark that is $\gamma$ times the optimal solution. The $\gamma$-regret emerges in structured bandit problems over a function class $\mathcal{F}$ where finding an exact optimum of $f \in \mathcal{F}$ is intractable. Our characterization is given in terms of the $\gamma$-DEC, a statistical complexity parameter for the class $\mathcal{F}$, which is a modification of the constrained Decision-Estimation Coefficient (DEC) of Foster et al., 2023 (and closely related to the original offset DEC of Foster et al., 2021). Our lower bound shows that the $\gamma$-DEC is a fundamental limit for any model class $\mathcal{F}$: for any algorithm, there exists some $f \in \mathcal{F}$ for which the $\gamma$-regret of that algorithm scales (nearly) with the $\gamma$-DEC of $\mathcal{F}$. We provide an upper bound showing that there exists an algorithm attaining a nearly matching $\gamma$-regret. Since applying the prior results on the DEC to the $\gamma$-regret case poses significant challenges, both our lower and upper bounds require novel techniques and a new algorithm.