We study the query complexity of the metric Steiner Tree problem, where we are given an $n \times n$ metric on a set $V$ of vertices along with a set $T \subseteq V$ of $k$ terminals, and the goal is to find a tree of minimum cost that contains all terminals in $T$. The query complexity for the related minimum spanning tree (MST) problem is well-understood: for any fixed $\varepsilon > 0$, one can estimate the MST cost to within a $(1+\varepsilon)$-factor using only $\tilde{O}(n)$ queries, and this is known to be tight. This implies that a $(2 + \varepsilon)$-approximate estimate of Steiner Tree cost can be obtained with $\tilde{O}(k)$ queries by simply applying the MST cost estimation algorithm on the metric induced by the terminals. Our first result shows that any (randomized) algorithm that estimates the Steiner Tree cost to within a $(5/3 - \varepsilon)$-factor requires $\Omega(n^2)$ queries, even if $k$ is a constant. This lower bound is in sharp contrast to an upper bound of $O(nk)$ queries for computing a $(5/3)$-approximate Steiner Tree, which follows from previous work by Du and Zelikovsky. Our second main result, and the main technical contribution of this work, is a sublinear query algorithm for estimating the Steiner Tree cost to within a strictly better-than-$2$ factor, with query complexity $\tilde{O}(n^{12/7} + n^{6/7}\cdot k)=\tilde{O}(n^{13/7})=o(n^2)$. We complement this result by showing an $\tilde{\Omega}(n + k^{6/5})$ query lower bound for any algorithm that estimates Steiner Tree cost to a strictly better than $2$ factor. Thus $\tilde{\Omega}(n^{6/5})$ queries are needed to just beat $2$-approximation when $k = \Omega(n)$, in sharp contrast to MST cost estimation, where a $(1+o(1))$-approximate estimate of cost is achievable with only $\tilde{O}(n)$ queries.
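The $(2+\varepsilon)$ upper bound above rests on the classic fact that the MST of the terminal-induced metric costs at most twice the optimal Steiner tree (shortcut an Euler tour of the doubled optimal tree). A minimal sketch of that reduction, using an exact Prim's MST rather than the sublinear cost estimator; the function name and interface are illustrative, not from the paper:

```python
def steiner_2approx_cost(dist, terminals):
    """2-approximation for metric Steiner Tree cost: compute the MST of
    the metric restricted to the terminals (Prim's algorithm).

    dist: full n x n metric as a list of lists; terminals: vertex indices.
    """
    ts = list(terminals)
    best = {t: dist[ts[0]][t] for t in ts[1:]}  # cheapest edge into the tree
    cost = 0.0
    while best:
        u = min(best, key=best.get)   # closest terminal not yet in the tree
        cost += best.pop(u)
        for v in best:                # relax remaining terminals through u
            best[v] = min(best[v], dist[u][v])
    return cost
```

On a star metric with a non-terminal center at distance 1 from three terminals (pairwise terminal distance 2), the optimal Steiner tree costs 3 while the terminal MST costs 4, illustrating the factor-2 gap.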
Let $G$ be a graph representing a network, with vertices $V(G)$ representing physical locations and edges $E(G)$ representing informational connectivity. A \emph{locating-dominating (LD)} set $S \subseteq V(G)$ is a subset of vertices representing detectors capable of sensing an "intruder" at precisely their location or somewhere in their open neighborhood -- an LD set must be capable of locating an intruder anywhere in the graph. We explore three types of fault-tolerant LD sets: \emph{redundant LD} sets, which allow a detector to be removed, \emph{error-detecting LD} sets, which allow at most one false negative, and \emph{error-correcting LD} sets, which allow at most one error (false positive or negative). In particular, we determine lower and upper bounds for the minimum density of these three fault-tolerant locating-dominating sets in the \emph{infinite king grid}.
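The locating condition can be phrased concretely: every non-detector vertex must be dominated by at least one detector, and no two non-detector vertices may be seen by exactly the same set of detectors. A small checker sketch under that standard definition (plain adjacency dicts; vertex labels and the function name are illustrative):

```python
def is_locating_dominating(adj, S):
    """Check whether S is a locating-dominating set of the graph.

    adj: dict mapping each vertex to the set of its neighbors.
    Each vertex v outside S must have a nonempty, unique signature
    N(v) & S -- the detectors that would sense an intruder at v.
    """
    S = set(S)
    seen = set()
    for v in adj:
        if v in S:
            continue
        sig = frozenset(adj[v] & S)
        if not sig or sig in seen:   # undominated, or indistinguishable
            return False
        seen.add(sig)
    return True
```

For the path 0-1-2-3, the set {1, 3} is locating-dominating (vertex 0 sees {1}, vertex 2 sees {1, 3}), while {1} alone is not, since vertices 0 and 2 would produce the same signature.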
In this paper, we introduce and study the following question. Let $\mathcal G$ be a family of graphs and let $k\geq 3$ be an integer. What is the largest value $f_k(n)$ such that every $n$-vertex graph in $\mathcal G$ has an induced subgraph with degree at most $k$ and with $f_k(n)$ vertices? Similar questions, in which one seeks a large induced forest, or a large induced linear forest, or a large induced $d$-degenerate graph, rather than a large induced graph of bounded degree, have been studied for decades and have given rise to some of the most fascinating and elusive conjectures in Graph Theory. We tackle our problem when $\mathcal G$ is the class of outerplanar graphs, the class of planar graphs, or the class of graphs with maximum degree at most $d>k$. In all cases, we provide upper and lower bounds on the value of $f_k(n)$. For example, we prove that every $n$-vertex planar graph has an induced subgraph with degree at most $3$ and with $\frac{5n}{13}>0.384n$ vertices, and that there exist $n$-vertex planar graphs whose largest induced subgraph with degree at most $3$ has $\frac{4n}{7}+O(1)<0.572n+O(1)$ vertices.
We propose a method for estimating a log-concave density on $\mathbb R^d$ from samples, under the assumption that there exists an orthogonal transformation that makes the components of the random vector independent. While log-concave density estimation is hard both computationally and statistically, the independent components assumption alleviates both issues, while still maintaining a large non-parametric class. We prove that under mild conditions, at most $\tilde{\mathcal{O}}(\epsilon^{-4})$ samples (suppressing constants and log factors) suffice for our proposed estimator to be within $\epsilon$ of the original density in squared Hellinger distance. On the computational front, while the usual log-concave maximum likelihood estimate can be obtained via a finite-dimensional convex program, it is slow to compute -- especially in higher dimensions. We demonstrate through numerical experiments that our estimator can be computed efficiently, making it more practical to use.
We study the classic Text-to-Pattern Hamming Distances problem: given a pattern $P$ of length $m$ and a text $T$ of length $n$, both over a polynomial-size alphabet, compute the Hamming distance between $P$ and $T[i\, .\, . \, i+m-1]$ for every shift $i$, under the standard Word-RAM model with $\Theta(\log n)$-bit words.

- We provide an $O(n\sqrt{m})$ time Las Vegas randomized algorithm for this problem, beating the decades-old $O(n \sqrt{m \log m})$ running time [Abrahamson, SICOMP 1987]. We also obtain a deterministic algorithm, with a slightly higher $O(n\sqrt{m}(\log m\log\log m)^{1/4})$ running time. Our randomized algorithm extends to the $k$-bounded setting, with running time $O\big(n+\frac{nk}{\sqrt{m}}\big)$, removing all the extra logarithmic factors from earlier algorithms [Gawrychowski and Uzna\'{n}ski, ICALP 2018; Chan, Golan, Kociumaka, Kopelowitz and Porat, STOC 2020].

- For the $(1+\epsilon)$-approximate version of Text-to-Pattern Hamming Distances, we give an $\tilde{O}(\epsilon^{-0.93}n)$ time Monte Carlo randomized algorithm, beating the previous $\tilde{O}(\epsilon^{-1}n)$ running time [Kopelowitz and Porat, FOCS 2015; Kopelowitz and Porat, SOSA 2018].

Our approximation algorithm exploits a connection with $3$SUM, and uses a combination of Fredman's trick, equality matrix product, and random sampling; in particular, we obtain new results on approximate counting versions of $3$SUM and Exact Triangle, which may be of independent interest. Our exact algorithms use a novel combination of hashing, bit-packed FFT, and recursion; in particular, we obtain a faster algorithm for computing the sumset of two integer sets, in the regime when the universe size is close to quadratic in the number of elements. We also prove a fine-grained equivalence between the exact Text-to-Pattern Hamming Distances problem and a range-restricted, counting version of $3$SUM.
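As a point of reference for the problem statement, here is the trivial $O(nm)$ baseline that the algorithms above improve upon (illustrative only; not from the paper):

```python
def text_to_pattern_hamming(T, P):
    """Naive O(nm) baseline: the Hamming distance between pattern P and
    each length-m window T[i..i+m-1] of the text T."""
    n, m = len(T), len(P)
    return [sum(a != b for a, b in zip(T[i:i + m], P))
            for i in range(n - m + 1)]
```

For example, with text "abcab" and pattern "ab", the four windows "ab", "bc", "ca", "ab" give distances [0, 2, 2, 0].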
In a graph bisection problem, we are given a graph $G$ with two equally-sized unlabeled communities, and the goal is to recover the vertices in these communities. A popular heuristic, known as spectral clustering, is to output an estimated community assignment based on the eigenvector corresponding to the second smallest eigenvalue of the Laplacian of $G$. Spectral algorithms can be shown to provably recover the cluster structure for graphs generated from certain probabilistic models, such as the Stochastic Block Model (SBM). However, spectral clustering is known to be non-robust to model mis-specification. Techniques based on semidefinite programming have been shown to be more robust, but they incur significant computational overheads. In this work, we study the robustness of spectral algorithms against semirandom adversaries. Informally, a semirandom adversary is allowed to ``helpfully'' change the specification of the model in a way that is consistent with the ground-truth solution. Our semirandom adversaries in particular are allowed to add edges inside clusters or increase the probability that an edge appears inside a cluster. Semirandom adversaries are a useful tool to determine the extent to which an algorithm has overfit to statistical assumptions on the input. On the positive side, we identify classes of semirandom adversaries under which spectral bisection using the \emph{unnormalized} Laplacian is strongly consistent, i.e., it exactly recovers the planted partitioning. On the negative side, we show that in these classes spectral bisection with the \emph{normalized} Laplacian outputs a partitioning that makes a classification mistake on a constant fraction of the vertices. Finally, we present numerical experiments that complement our theoretical findings.
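The spectral bisection heuristic with the unnormalized Laplacian can be sketched in a few lines. This is a toy version for intuition, assuming a dense numpy adjacency matrix; the analyses above of course concern SBM-scale random graphs:

```python
import numpy as np

def spectral_bisection(A):
    """Bisect a graph by the sign of the eigenvector for the second
    smallest eigenvalue (the Fiedler vector) of the unnormalized
    Laplacian L = D - A.

    A: symmetric 0/1 adjacency matrix (numpy array).
    Returns a boolean label per vertex.
    """
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]
    return fiedler >= 0
```

On a "dumbbell" of two triangles joined by a single edge, the sign pattern of the Fiedler vector recovers the two triangles exactly.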
We study the discrete quantum walk on a regular graph $X$ that assigns negative identity coins to marked vertices $S$ and Grover coins to the unmarked ones. We find combinatorial bases for the eigenspaces of the transition matrix, and derive a formula for the average vertex mixing matrix $\AMM$. We then find bounds for entries in $\AMM$, and study when these bounds are tight. In particular, the average probabilities between marked vertices are lower bounded by a matrix determined by the induced subgraph $X[S]$, the vertex-deleted subgraph $X\backslash S$, and the edge-deleted subgraph $X-E(S)$. We show this bound is achieved if and only if the marked vertices have walk-equitable neighborhoods in the vertex-deleted subgraph. Finally, for quantum walks attaining this bound, we determine when $\AMM[S,S]$ is symmetric, positive semidefinite or uniform.
We consider temporal numeric planning problems $\Pi$ expressed in PDDL2.1 level 3, and show how to produce SMT formulas $(i)$ whose models correspond to valid plans of $\Pi$, and $(ii)$ that extend the recently proposed planning with patterns approach from the numeric to the temporal case. We prove the correctness and completeness of the approach and show that it performs very well on 10 domains with required concurrency.
This paper presents an algorithmic method for generating random orthogonal matrices \(A\) that satisfy the property \(A^t S A = S\), where \(S\) is a fixed real invertible symmetric or skew-symmetric matrix. This method is significant as it generalizes the procedures for generating orthogonal matrices to matrices that fix a general symmetric or skew-symmetric bilinear form. These include matrices that belong to groups such as the symplectic group, Lorentz group, Poincar\'e group, and more generally the indefinite orthogonal group, to name a few. These classes of matrices play crucial roles in diverse fields such as theoretical physics, where they are used to describe symmetries and conservation laws, as well as in computational geometry, numerical analysis, and number theory, where they are integral to the study of quadratic forms and modular forms. The implementation of our algorithms can be accomplished using standard linear algebra libraries.
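An important caveat: the paper treats general invertible symmetric or skew-symmetric $S$, while the sketch below covers only the easy symmetric positive-definite special case. There, a Cholesky factor $S = LL^{t}$ conjugates an ordinary Haar-random orthogonal $Q$ into a form-preserving $A = L^{-t} Q L^{t}$, since $A^t S A = L Q^t L^{-1} (LL^t) L^{-t} Q L^t = LL^t = S$. The function name and interface are ours, not the paper's:

```python
import numpy as np

def random_form_preserving(S, rng=None):
    """Sample a random matrix A with A.T @ S @ A = S, assuming S is
    symmetric positive definite.

    With S = L L^t (Cholesky) and Q Haar-random orthogonal,
    A = L^{-t} Q L^t preserves the bilinear form defined by S.
    """
    rng = np.random.default_rng(rng)
    n = S.shape[0]
    L = np.linalg.cholesky(S)
    # QR of a Gaussian matrix, with column signs fixed so that Q is
    # distributed according to Haar measure on the orthogonal group.
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    Q = Q * np.sign(np.diag(R))
    return np.linalg.solve(L.T, Q @ L.T)   # L^{-t} (Q L^t)
```

Extending this beyond the positive-definite case (e.g., to indefinite or skew-symmetric forms) requires the more general factorizations developed in the paper.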
Let $P$ be a set of points in the plane and let $m$ be an integer. The goal of the Max Cover by Unit Disks problem is to place $m$ unit disks whose union covers the maximum number of points from~$P$. We are interested in the dynamic version of the Max Cover by Unit Disks problem, where the points in $P$ appear and disappear over time, and the algorithm must maintain a set \cDalg of $m$ disks whose union covers many points. A dynamic algorithm for this problem is a $k$-stable $\alpha$-approximation algorithm when it makes at most $k$ changes to \cDalg upon each update to the set $P$ and the number of covered points at time $t$ is always at least $\alpha \cdot \opt(t)$, where $\opt(t)$ is the maximum number of points that can be covered by $m$ disks at time $t$. We show that for any constant $\varepsilon>0$, there is a $k_{\varepsilon}$-stable $(1-\varepsilon)$-approximation algorithm for the dynamic Max Cover by Unit Disks problem, where $k_{\varepsilon}=O(1/\varepsilon^3)$. This improves the stability of $\Theta(1/\varepsilon^4)$ that can be obtained by combining results of Chaplick, De, Ravsky, and Spoerhase (ESA 2018) and De~Berg, Sadhukhan, and Spieksma (APPROX 2023). Our result extends to other fat similarly-sized objects used in the covering, such as arbitrarily-oriented unit squares, or arbitrarily-oriented fat ellipses of fixed diameter. We complement the above result by showing that the restriction to fat objects is necessary to obtain a SAS (stable approximation scheme). To this end, we study the Max Cover by Unit Segments problem, where the goal is to place $m$ unit-length segments whose union covers the maximum number of points from $P$. We show that there is a constant $\varepsilon^* > 0$ such that any $k$-stable $(1 - \varepsilon^*)$-approximation algorithm must have $k=\Omega(m)$, even when the point set never has more than four collinear points.
Neural machine translation (NMT) is a deep learning based approach for machine translation, which yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and in-domain monolingual corpora, is therefore very important for domain-specific translation. In this paper, we give a comprehensive survey of the state-of-the-art domain adaptation techniques for NMT.