In the metric distortion problem there is a set of candidates $C$ and voters $V$ in the same metric space. The goal is to select a candidate minimizing the social cost: the sum of distances of the selected candidate from all the voters. The challenge arises from the algorithm receiving only ordinal input (each voter's ranking of the candidates), while the objective function is cardinal, determined by the underlying metric. The distortion of an algorithm is its worst-case approximation factor with respect to the optimal social cost. A key concept here is the $(p,q)$-veto core, with $p\in \Delta(V)$ and $q\in \Delta(C)$ being normalized weight vectors representing voters' veto power and candidates' support, respectively. The $(p,q)$-veto core corresponds to the set of winners of a specific class of deterministic algorithms. Notably, the optimal distortion of $3$ is obtained from this class by selecting veto core candidates using uniform $p$ and $q$ proportional to candidates' plurality scores. Bounding the distortion of other algorithms from this class is an open problem. Our contribution is twofold. First, we establish upper bounds on the distortion of candidates from the $(p,q)$-veto core for arbitrary weight vectors $p$ and $q$. Second, we revisit the metric distortion problem through the \emph{learning-augmented} framework, which equips the algorithm with a (machine-learned) prediction of the optimal candidate. The quality of this prediction is unknown, and the goal is to optimize the algorithm's performance under accurate predictions (consistency), while simultaneously providing worst-case guarantees under arbitrarily inaccurate predictions (robustness). We propose an algorithm that chooses candidates from the $(p,q)$-veto core using a prediction-guided $q$ vector and, leveraging our distortion bounds, we prove that this algorithm achieves the optimal robustness-consistency trade-off.
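As a minimal illustration of the cardinal objective behind the ordinal input (not the veto-core computation itself), here is a Python sketch of the social cost, the per-instance approximation ratio, and the plurality-proportional $q$ vector mentioned above; the function names and the distance-matrix representation are hypothetical.

```python
import numpy as np

def social_cost(dist, c):
    """Sum of distances from candidate c to all voters.
    dist[v, c] = metric distance between voter v and candidate c."""
    return dist[:, c].sum()

def approx_ratio(dist, chosen):
    """Approximation ratio of the chosen candidate on this instance
    (distortion is the worst case of this ratio over all metrics
    consistent with the ordinal input)."""
    costs = dist.sum(axis=0)          # social cost of every candidate
    return social_cost(dist, chosen) / costs.min()

def plurality_q(rankings, num_candidates):
    """q proportional to plurality scores, as in the distortion-3 rule;
    rankings[v] lists candidates in voter v's preference order."""
    scores = np.zeros(num_candidates)
    for r in rankings:
        scores[r[0]] += 1
    return scores / scores.sum()
```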
Dispersion by mobile agents is a well-studied problem in the literature on computing by mobile robots. In this problem, $l$ robots placed arbitrarily on the nodes of an $n$-node network must relocate autonomously so that each node contains at most $\lceil \frac{l}{n}\rceil$ robots. In particular, when $l\le n$, each node of the network contains at most one robot. Recently, in NETYS'23, Kaur et al. introduced a variant of dispersion called \emph{Distance-2-Dispersion}, in which $l$ robots have to solve dispersion with the extra condition that no two adjacent nodes contain robots. In this work, we generalize Dispersion and Distance-2-Dispersion by introducing another variant called \emph{Distance-$k$-Dispersion (D-$k$-D)}. Here the robots have to disperse on a network in such a way that the shortest distance between any pair of robots is at least $k$, and there exists at least one pair of robots at distance exactly $k$. Note that $k=1$ gives standard dispersion and $k=2$ gives D-$2$-D. We study this variant on a dynamic ring (a 1-interval connected ring) for rooted initial configurations. We prove that a fully synchronous scheduler is necessary to solve this problem and provide an algorithm that solves D-$k$-D in $\Theta(n)$ rounds under a fully synchronous scheduler; the presented algorithm is therefore also time-optimal. To the best of our knowledge, this is the first work that considers this specific variant.
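To make the D-$k$-D condition concrete, here is a sketch of a verifier on a static $n$-node ring (the dynamic 1-interval connected setting and the dispersion algorithm itself are beyond this snippet); the names are illustrative.

```python
from itertools import combinations

def is_distance_k_dispersed(positions, n, k):
    """D-k-D on an n-node ring: every pair of robots is at ring distance
    >= k, and some pair is at distance exactly k."""
    dists = [min((a - b) % n, (b - a) % n)
             for a, b in combinations(positions, 2)]
    return bool(dists) and min(dists) == k

# Example: 3 robots on a 9-node ring, pairwise distance exactly 3.
assert is_distance_k_dispersed([0, 3, 6], n=9, k=3)
```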
A subset $S$ of vertices in a graph $G=(V, E)$ is a dominating set if each vertex in $V(G)\setminus S$ is adjacent to at least one vertex in $S$. In 2013, Chellali et al. introduced the concept of a $[1,j]$-dominating set by restricting the number of neighbors in $S$ that a vertex outside $S$ may have. A set $D \subseteq V$ of a graph $G = (V, E)$ is called a $[1,j]$-dominating set of $G$ if every vertex not in $D$ has at least one neighbor and at most $j$ neighbors in $D$. The Minimum $[1,j]$-Domination problem asks for a $[1,j]$-dominating set $D$ of minimum size. Given a positive integer $k$ and a graph $G = (V, E)$, the $[1,j]$-Domination Decision problem is to decide whether $G$ has a $[1,j]$-dominating set of cardinality at most $k$. A polynomial-time algorithm was obtained on split graphs for constant $j$, in contrast to the Dominating Set problem, which is NP-hard on split graphs. This result motivates us to investigate the effect of the restriction $j$ on the complexity of the $[1,j]$-domination problem on various classes of graphs. Although for $j\geq 3$ it has been proved that the minimum size of a classical dominating set equals that of a minimum $[1,j]$-dominating set in interval graphs, the complexity of finding a minimum $[1,2]$-dominating set in interval graphs has remained open. In this paper, we propose a polynomial-time algorithm for computing a minimum $[1,2]$-dominating set on interval graphs via dynamic programming. On the negative side, we show that the minimum $[1,2]$-dominating set problem on circle graphs is NP-complete.
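The defining condition is easy to check: below is a small verifier for the $[1,j]$-domination property, a sketch assuming a networkx graph. It checks a candidate set $D$; it is not the dynamic program proposed in the paper.

```python
import networkx as nx

def is_1j_dominating(G, D, j):
    """Every vertex outside D must have between 1 and j neighbors in D."""
    D = set(D)
    return all(1 <= sum(u in D for u in G.neighbors(v)) <= j
               for v in G if v not in D)

# Example: on the path a-b-c-d, {b, d} is a [1,2]-dominating set.
P = nx.path_graph(["a", "b", "c", "d"])
assert is_1j_dominating(P, {"b", "d"}, j=2)
```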
The mutation strength adaptation properties of a multi-recombinative $(\mu/\mu_I, \lambda)$-ES are studied for isotropic mutations. To this end, standard implementations of cumulative step-size adaptation (CSA) and mutative self-adaptation ($\sigma$SA) are investigated experimentally and theoretically, assuming large population sizes ($\mu$) in relation to the search space dimensionality ($N$). The adaptation is characterized in terms of the scale-invariant mutation strength on the sphere, in relation to its maximum achievable value for positive progress. Standard CSA variants show notably different adaptation properties and progress rates on the sphere, becoming slower or faster as $\mu$ or $N$ are varied; this is shown by investigating common choices for the cumulation and damping parameters. Standard $\sigma$SA variants (with default learning parameter settings) can achieve faster adaptation and larger progress rates than CSA. However, we show that self-adaptation negatively affects the attainable progress rate levels. Furthermore, differences regarding the adaptation and stability of $\sigma$SA with log-normal and normal mutation sampling are elaborated.
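For orientation, the following is a toy sketch of a $(\mu/\mu_I,\lambda)$-ES with a standard CSA update on the sphere; the cumulation and damping parameter choices shown are common illustrative defaults, standing in for the specific variants compared in the paper.

```python
import numpy as np

def csa_es_sphere(N, mu, lam, sigma0, iters, seed=0):
    """Toy (mu/mu_I, lambda)-ES with CSA on the sphere f(x) = ||x||^2."""
    rng = np.random.default_rng(seed)
    c = np.sqrt(mu / (N + mu))   # cumulation parameter (one common choice)
    d = 1 + np.sqrt(mu / N)      # damping parameter (illustrative)
    chi_N = np.sqrt(N)           # large-N approximation of E||N(0, I)||
    x, s, sigma = rng.normal(size=N), np.zeros(N), sigma0
    for _ in range(iters):
        z = rng.normal(size=(lam, N))                # lambda mutations
        f = ((x + sigma * z) ** 2).sum(axis=1)
        z_sel = z[np.argsort(f)[:mu]].mean(axis=0)   # intermediate recomb.
        x = x + sigma * z_sel
        # evolution path; sqrt(mu) renormalizes the recombined step
        s = (1 - c) * s + np.sqrt(c * (2 - c) * mu) * z_sel
        sigma *= np.exp((np.linalg.norm(s) / chi_N - 1) * c / d)
    return x, sigma
```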
We explore a novel variant of the classical prophet inequality problem, where the values of a sequence of items are drawn i.i.d. from some distribution and an online decision maker must select one item irrevocably. We establish that the competitive ratio between the expected performance of the online decision maker and that of a prophet who uses the average of the top $\ell$ items must be greater than $\ell/c_{\ell}$, with $c_{\ell}$ the solution to an integral equation. We prove that this lower bound is larger than $1-1/(\exp(\ell)-1)$, which implies that the bound converges exponentially fast to $1$ as $\ell$ grows. In particular, the bound for $\ell=2$ is $2/c_{2} \approx 0.966$, much closer to $1$ than the classical bound of $0.745$ for $\ell=1$. Additionally, the proposed algorithm extends to a more general scenario where the decision maker is permitted to select $k$ items. This subsumes the $k$ multi-unit i.i.d. prophet problem, provides the current best asymptotic guarantees, and yields a broader understanding of the more general framework. Finally, we prove a nearly tight competitive ratio when only static threshold policies are allowed.
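A quick Monte Carlo sketch of a static threshold policy against the top-$\ell$ prophet benchmark for $\ell=2$; the uniform distribution and the threshold value are arbitrary illustrations, not the paper's optimized policy.

```python
import numpy as np

def first_above(sample, tau):
    """Static threshold policy: accept the first value exceeding tau
    (reward 0 if nothing is accepted)."""
    hits = sample[sample > tau]
    return hits[0] if hits.size else 0.0

# Empirical ratio vs. the average of the top 2 values, i.i.d. U[0,1].
rng = np.random.default_rng(1)
n, trials, tau = 50, 20_000, 0.9   # tau is an arbitrary choice here
alg = bench = 0.0
for _ in range(trials):
    s = rng.random(n)
    alg += first_above(s, tau)
    bench += np.sort(s)[-2:].mean()
print(alg / bench)
```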
A set of probabilistic forecasts is calibrated if each prediction of the forecaster closely approximates the empirical distribution of outcomes on the subset of timesteps where that prediction was made. We study the fundamental problem of online calibrated forecasting of binary sequences, which was initially studied by Foster & Vohra (1998). They derived an algorithm with $O(T^{2/3})$ calibration error after $T$ time steps, and showed a lower bound of $\Omega(T^{1/2})$. These bounds remained stagnant for two decades, until Qiao & Valiant (2021) improved the lower bound to $\Omega(T^{0.528})$ by introducing a combinatorial game called sign preservation and showing that lower bounds for this game imply lower bounds for calibration. In this paper, we give the first improvement to the $O(T^{2/3})$ upper bound on calibration error of Foster & Vohra. We do this by introducing a variant of Qiao & Valiant's game that we call sign preservation with reuse (SPR). We prove that the relationship between SPR and calibrated forecasting is bidirectional: not only do lower bounds for SPR translate into lower bounds for calibration, but algorithms for SPR also translate into new algorithms for calibrated forecasting. We then give an improved \emph{upper bound} for the SPR game, which implies, via our equivalence, a forecasting algorithm with calibration error $O(T^{2/3 - \varepsilon})$ for some $\varepsilon > 0$, improving Foster & Vohra's upper bound for the first time. Using similar ideas, we then prove a slightly stronger lower bound than that of Qiao & Valiant, namely $\Omega(T^{0.54389})$. Our lower bound is obtained by an oblivious adversary, marking the first $\omega(T^{1/2})$ calibration lower bound for oblivious adversaries.
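As a reference point, this sketch computes one standard notion of calibration error for binary sequences, the unnormalized $\ell_1$ error summed over distinct predictions; the function name is illustrative.

```python
from collections import defaultdict

def l1_calibration_error(forecasts, outcomes):
    """Sum over distinct predictions p of | sum_{t: f_t = p} (x_t - p) |,
    for binary outcomes x_t in {0, 1}."""
    bias = defaultdict(float)
    for p, x in zip(forecasts, outcomes):
        bias[p] += x - p
    return sum(abs(b) for b in bias.values())

# A forecaster predicting 0.5 on a balanced sequence is well calibrated:
assert l1_calibration_error([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0]) == 0.0
```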
Multi-Agent Path Finding (MAPF) is NP-hard to solve optimally, even on graphs, suggesting that no polynomial-time algorithm can compute exact optimal solutions. This raises a natural question: how close to optimal can polynomial-time algorithms get? Whereas algorithms for computing constant-factor optimal solutions have been developed, the constant factor is generally very large, limiting their application potential. In this work, among other breakthroughs, we propose the first low-polynomial-time MAPF algorithms delivering $1$-$1.5$ (resp., $1$-$1.67$) asymptotic makespan optimality guarantees for 2D (resp., 3D) grids for random instances at a very high $1/3$ agent density, with high probability. Moreover, when regularly distributed obstacles are introduced, our methods experience no performance degradation, and they generalize to support $100\%$ agent density. Regardless of the dimensionality and density, our high-quality methods are enabled by a unique hierarchical integration of two key building blocks. At the higher level, we apply the labeled Grid Rearrangement Algorithm (RTA), capable of performing efficient reconfiguration on grids through row/column shuffles. At the lower level, we devise novel methods that efficiently simulate the row/column shuffles returned by RTA. Our implementations of RTA-based algorithms are highly effective in extensive numerical evaluations, demonstrating excellent scalability compared to other SOTA methods. For example, in 3D settings, RTA-based algorithms readily scale to grids with over $370,000$ vertices and over $120,000$ agents and consistently achieve conservative makespan optimality approaching $1.5$, as predicted by our theoretical analysis.
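Makespan optimality guarantees of this kind are measured against the standard Manhattan-distance lower bound, sketched below; this is only the benchmark computation, not the RTA pipeline.

```python
def makespan_lower_bound(starts, goals):
    """On a grid, each agent needs at least its Manhattan distance in
    steps, so the max over agents lower-bounds any solution's makespan."""
    return max(sum(abs(s - g) for s, g in zip(st, gl))
               for st, gl in zip(starts, goals))

def optimality_ratio(achieved_makespan, starts, goals):
    """Upper bound on how far a solution is from makespan-optimal."""
    return achieved_makespan / makespan_lower_bound(starts, goals)

# Two agents on a 2D grid; a 7-step solution is at most 7/5-optimal.
print(optimality_ratio(7, [(0, 0), (3, 1)], [(2, 3), (0, 0)]))
```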
Standard interpolatory subdivision schemes and their underlying interpolating refinable functions are of interest in CAGD, numerical PDEs, and approximation theory. Generalizing these notions, we introduce and study $n_s$-step interpolatory $M$-subdivision schemes and their interpolating $M$-refinable functions with $n_s\in \mathbb{N} \cup\{\infty\}$ and a dilation factor $M\in \mathbb{N}\backslash\{1\}$. We completely characterize $\mathscr{C}^m$-convergence and smoothness of $n_s$-step interpolatory subdivision schemes and their interpolating $M$-refinable functions in terms of their masks. Inspired by $n_s$-step interpolatory stationary subdivision schemes, we further introduce the notion of $r$-mask quasi-stationary subdivision schemes, and then we characterize their $\mathscr{C}^m$-convergence and smoothness properties using only their masks. Moreover, combining $n_s$-step interpolatory subdivision schemes with $r$-mask quasi-stationary subdivision schemes, we can obtain $r n_s$-step interpolatory subdivision schemes. Examples and construction procedures of convergent $n_s$-step interpolatory $M$-subdivision schemes are provided to illustrate our results with dilation factors $M=2,3,4$. In addition, for the dyadic dilation $M=2$ and $r=2,3$, using $r$ masks with only two-ring stencils, we provide examples of $\mathscr{C}^r$-convergent $r$-step interpolatory $r$-mask quasi-stationary dyadic subdivision schemes.
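For contrast with the generalized $n_s$-step notion, here is the classical one-step case: a sketch of the Dubuc-Deslauriers 4-point interpolatory dyadic scheme ($M=2$), with periodic boundary handling chosen for brevity.

```python
def dd4_step(v):
    """One step of the Dubuc-Deslauriers 4-point interpolatory scheme:
    old values survive at even indices (interpolation), and new odd-index
    values use the mask (-1, 9, 9, -1)/16."""
    n = len(v)
    out = []
    for i in range(n):
        out.append(v[i])                          # keep v[i]: interpolatory
        out.append((-v[i - 1] + 9 * v[i] + 9 * v[(i + 1) % n]
                    - v[(i + 2) % n]) / 16)       # new midpoint value
    return out

data = [0.0, 1.0, 0.0, -1.0]
for _ in range(3):
    data = dd4_step(data)   # refine three times toward the C^1 limit curve
```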
In the parameterized $k$-clique problem, or $k$-Clique for short, we are given a graph $G$ and a parameter $k\ge 1$. The goal is to decide whether there exist $k$ vertices in $G$ that induce a complete subgraph (i.e., a $k$-clique). This problem plays a central role in the theory of parameterized intractability as one of the first W[1]-complete problems. Existing research has shown that even an FPT-approximation algorithm for $k$-Clique with arbitrary ratio does not exist, assuming the Gap-Exponential-Time Hypothesis (Gap-ETH) [Chalermsook et al., FOCS'17 and SICOMP]. However, whether this inapproximability result can be based on the standard assumption $\mathrm{W}[1]\ne \mathrm{FPT}$ remains unclear. The recent breakthrough of Bingkai Lin [STOC'21] and the subsequent works by Karthik C.S. and Khot [CCC'22] and by Lin, Ren, Sun, and Wang [ICALP'22] give a technique that bypasses Gap-ETH, leading to inapproximability ratios of $O(1)$ under $\mathrm{W}[1]$-hardness (the first two works) and $k^{o(1)}$ under ETH (the latter). All work along this line follows the framework developed by Lin, which starts from the $k$-vector-sum problem and requires some involved algebraic techniques. This paper presents an alternative framework for proving the W[1]-hardness of the $k^{o(1)}$-FPT-inapproximability of $k$-Clique. Using this framework, we obtain a gap-producing self-reduction of $k$-Clique without any intermediate algebraic problem. More precisely, we reduce from $(k,k-1)$-Gap Clique to $(q^k, q^{k-1})$-Gap Clique, for any function $q$ depending only on the parameter $k$, thus implying the $k^{o(1)}$-inapproximability result when $q$ is sufficiently large. Our proof is relatively simple and mostly combinatorial. At the core of our construction is a novel encoding of $k$-element subsets stemming from the theory of "network coding" and a "Sidon set" representation of a graph.
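The problem itself is easy to state in code: a brute-force $k$-clique check whose $O(n^k)$ running time hints at why the parameterized problem is the canonical W[1]-complete problem; the adjacency representation is an arbitrary choice.

```python
from itertools import combinations

def has_k_clique(adj, k):
    """Brute-force k-clique check; adj maps each vertex to its set of
    neighbors. Tries all O(n^k) vertex subsets of size k."""
    return any(all(v in adj[u] for u, v in combinations(S, 2))
               for S in combinations(adj, k))

# Triangle plus a pendant vertex: has a 3-clique but no 4-clique.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
assert has_k_clique(adj, 3) and not has_k_clique(adj, 4)
```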
The Johnson-Lindenstrauss (JL) Lemma introduced the concept of dimension reduction via a random linear map, which has become a fundamental technique in many computational settings. For a set of $n$ points in $\mathbb{R}^d$ and any fixed $\epsilon>0$, it reduces the dimension $d$ to $O(\log n)$ while preserving, with high probability, all the pairwise Euclidean distances within factor $1+\epsilon$. Perhaps surprisingly, the target dimension can be lower if one only wishes to preserve the optimal value of a certain problem on the pointset, e.g., Euclidean max-cut or $k$-means. However, for some notorious problems, like diameter (aka furthest pair), dimension reduction via the JL map to below $O(\log n)$ does not preserve the optimal value within factor $1+\epsilon$. We propose to focus on another regime, of \emph{moderate dimension reduction}, where a problem's value is preserved within factor $\alpha>1$ using target dimension $\tfrac{\log n}{\mathrm{poly}(\alpha)}$. We establish the viability of this approach and show that the famous $k$-center problem is $\alpha$-approximated when reducing to dimension $O(\tfrac{\log n}{\alpha^2}+\log k)$. Along the way, we address the diameter problem via the special case $k=1$. Our result extends to several important variants of $k$-center (with outliers, capacities, or fairness constraints), and the bound improves further with the input's doubling dimension. While our $\mathrm{poly}(\alpha)$-factor improvement in the dimension may seem small, it actually has significant implications for streaming algorithms, and easily yields an algorithm for $k$-center in dynamic geometric streams that achieves $O(\alpha)$-approximation using space $\mathrm{poly}(k d n^{1/\alpha^2})$. This is the first algorithm to beat $O(n)$ space in high dimension $d$, as all previous algorithms require space at least $\exp(d)$. Furthermore, it extends to the $k$-center variants mentioned above.
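A minimal sketch of a JL-style Gaussian projection together with the diameter objective (the $k=1$ case of $k$-center); the target dimension below is an arbitrary illustration of the moderate regime, not the paper's bound.

```python
import numpy as np

def jl_project(X, m, seed=0):
    """Project n points in R^d down to R^m with a random Gaussian map,
    scaled so squared norms are preserved in expectation."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(X.shape[1], m)) / np.sqrt(m)
    return X @ G

def diameter(X):
    """Largest pairwise Euclidean distance (k-center with k = 1)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.sqrt(sq.max())

X = np.random.default_rng(1).normal(size=(100, 500))
Y = jl_project(X, m=20)            # aggressive "moderate" reduction
print(diameter(X), diameter(Y))    # the value is preserved only coarsely
```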
Estimating the parameters of functional ARMA, GARCH, and invertible processes requires estimating lagged covariance and cross-covariance operators of Cartesian product Hilbert space-valued processes. Asymptotic results have been derived in recent years, either in less general settings or under a strict condition. This article derives upper bounds on the estimation errors for such operators based on the mild condition of $L^p$-$m$-approximability, for each lag, Cartesian power(s), and sample size, where the two processes may take values in different spaces in the context of lagged cross-covariance operators. Implications of our results for eigenelements, parameters in functional AR(MA) models, and other general situations are also discussed.
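A sketch of the basic empirical estimator in the single-series case, with curves discretized on a common grid; the cross-covariance version for processes in two different spaces would take two series. The names are illustrative.

```python
import numpy as np

def lagged_cov_operator(X, h):
    """Empirical lag-h autocovariance operator of a function-valued time
    series, discretized: X has shape (n, d), row t being the curve X_t
    evaluated at d grid points. Returns a d x d kernel matrix estimating
    E[(X_0 - mu) (X_h - mu)^T], with the conventional 1/n normalization."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)            # center the observations
    return Xc[:n - h].T @ Xc[h:] / n   # sum over pairs (t, t + h)
```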