We introduce the natural notion of a matching frame in a $2$-dimensional string. A matching frame in a $2$-dimensional $n\times m$ string $M$ is a rectangle such that the strings written on the horizontal sides of the rectangle are identical, and so are the strings written on the vertical sides. Formally, a matching frame in $M$ is a tuple $(u,d,\ell,r)$ such that $M[u][\ell ..r] = M[d][\ell ..r]$ and $M[u..d][\ell] = M[u..d][r]$. In this paper, we present an algorithm for finding a maximum-perimeter matching frame in a matrix $M$ in $\tilde{O}(n^{2.5})$ time (assuming $n \ge m$). Additionally, for every constant $\epsilon> 0$ we present a near-linear-time $(1-\epsilon)$-approximation algorithm for the maximum perimeter of a matching frame. In developing these algorithms, we introduce new technical elements and uncover structural properties that we believe will be of independent interest to the community.
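To make the definition concrete, here is a naive brute-force search for a maximum-perimeter matching frame. This is an illustrative sketch only, far slower than the paper's $\tilde{O}(n^{2.5})$ algorithm, and the perimeter convention $2((d-u)+(r-\ell))$ is our assumption:

```python
def is_matching_frame(M, u, d, l, r):
    """(u, d, l, r) is a matching frame iff the top and bottom rows of the
    rectangle agree, and so do its left and right columns."""
    top = M[u][l:r + 1]
    bottom = M[d][l:r + 1]
    left = [M[i][l] for i in range(u, d + 1)]
    right = [M[i][r] for i in range(u, d + 1)]
    return top == bottom and left == right

def max_perimeter_frame(M):
    """Try every rectangle (u < d, l < r) and keep the best matching frame.
    Brute force for illustration of the definition, not an efficient method."""
    n, m = len(M), len(M[0])
    best, best_per = None, -1
    for u in range(n):
        for d in range(u + 1, n):
            for l in range(m):
                for r in range(l + 1, m):
                    if is_matching_frame(M, u, d, l, r):
                        per = 2 * ((d - u) + (r - l))
                        if per > best_per:
                            best_per, best = per, (u, d, l, r)
    return best, best_per
```

For example, in the $3\times 3$ string `["aba", "bcb", "aba"]`, the full boundary is a matching frame: the top and bottom rows both read `aba`, and the left and right columns both read `aba`.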
A random $m\times n$ matrix $S$ is an oblivious subspace embedding (OSE) with parameters $\epsilon>0$, $\delta\in(0,1/3)$ and $d\leq m\leq n$, if for any $d$-dimensional subspace $W\subseteq \mathbb{R}^n$, $P\big(\,\forall_{x\in W}\ (1+\epsilon)^{-1}\|x\|\leq\|Sx\|\leq (1+\epsilon)\|x\|\,\big)\geq 1-\delta.$ It is known that the embedding dimension of an OSE must satisfy $m\geq d$, and for any $\theta > 0$, a Gaussian embedding matrix with $m\geq (1+\theta) d$ is an OSE with $\epsilon = O_\theta(1)$. However, such an optimal embedding dimension is not known for other embeddings. Of particular interest are sparse OSEs, having $s\ll m$ non-zeros per column, with applications to problems such as least squares regression and low-rank approximation. We show that, given any $\theta > 0$, an $m\times n$ random matrix $S$ with $m\geq (1+\theta)d$ consisting of randomly sparsified $\pm1/\sqrt{s}$ entries and having $s= O(\log^4(d))$ non-zeros per column, is an oblivious subspace embedding with $\epsilon = O_{\theta}(1)$. Our result addresses the main open question posed by Nelson and Nguyen (FOCS 2013), who conjectured that sparse OSEs can achieve $m=O(d)$ embedding dimension, and it improves on $m=O(d\log(d))$ shown by Cohen (SODA 2016). We use this to construct the first oblivious subspace embedding with $O(d)$ embedding dimension that can be applied faster than current matrix multiplication time, and to obtain an optimal single-pass algorithm for least squares regression. We further extend our results to construct even sparser non-oblivious embeddings, leading to the first subspace embedding with low distortion $\epsilon=o(1)$ and optimal embedding dimension $m=O(d/\epsilon^2)$ that can be applied in current matrix multiplication time.
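A sketch of the kind of sparse sign matrix described above (each column holds $s$ non-zeros equal to $\pm 1/\sqrt{s}$ in distinct random rows); the exact distribution analyzed in the paper may differ, so this is only an assumption-laden illustration of the shape of the construction:

```python
import random

def sparse_sign_embedding(m, n, s, seed=0):
    """Build an m x n matrix with exactly s non-zeros per column, each placed
    in a distinct uniformly random row and set to +/- 1/sqrt(s) with a random
    sign. Column sparsity s << m is what makes applying S to a vector fast."""
    rng = random.Random(seed)
    S = [[0.0] * n for _ in range(m)]
    val = 1.0 / s ** 0.5
    for j in range(n):
        for i in rng.sample(range(m), s):  # s distinct rows for column j
            S[i][j] = val if rng.random() < 0.5 else -val
    return S
```

Applying such an $S$ to a vector costs $O(sn)$ rather than $O(mn)$, which is the source of the speedups for least squares regression mentioned above.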
We derive novel and sharp high-dimensional Berry--Esseen bounds for the sum of $m$-dependent random vectors over the class of hyper-rectangles, exhibiting only a poly-logarithmic dependence on the dimension. Our results hold under minimal assumptions, such as non-degenerate covariances and finite third moments, and yield a sample complexity of order $\sqrt{m/n}$, aside from logarithmic terms, matching the optimal rates established in the univariate case. When specialized to sums of independent non-degenerate random vectors, we obtain sharp rates under the weakest possible conditions. On the technical side, we develop an inductive relationship between anti-concentration inequalities and Berry--Esseen bounds, inspired by the classical Lindeberg swapping method and the concentration inequality approach for dependent data, that may be of independent interest.
We present algorithms that compute the terminal configurations for sandpile instances in $O(n \log n)$ time on trees and $O(n)$ time on paths, where $n$ is the number of vertices. The Abelian Sandpile model is a well-known model used in exploring self-organized criticality. Despite a large amount of work on other aspects of sandpiles, there have been limited results in efficiently computing the terminal state, known as the sandpile prediction problem. Our algorithms improve the previous best runtimes of $O(n \log^5 n)$ on trees [Ramachandran-Schild SODA '17] and $O(n \log n)$ on paths [Moore-Nilsson '99]. To do so, we move beyond the simulation of individual events by directly computing the number of firings for each vertex. The computation is accelerated using splittable binary search trees. In addition, we give algorithms running in $O(n)$ time on cliques and $O(n \log^2 n)$ time on pseudotrees. Towards solving on general graphs, we provide a reduction that transforms the prediction problem on an arbitrary graph into problems on its subgraphs separated by any vertex set $P$. The reduction gives a time complexity of $O(\log^{|P|} n \cdot T)$ where $T$ denotes the total time for solving on each subgraph. We also give algorithms that work well with this reduction scheme.
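For contrast with the direct firing-count computation above, here is the naive event-by-event simulation on a path (with implicit sinks beyond both endpoints, our modeling assumption): a vertex topples when it holds at least $2$ chips, sending one chip to each neighbour, and chips pushed past the ends are lost. By the abelian property the terminal configuration is independent of the toppling order.

```python
from collections import deque

def terminal_path(chips):
    """Naive sandpile simulation on a path. Each interior toppling moves one
    chip left and one chip right; chips falling off either end vanish into
    the sinks. Returns the (order-independent) stable terminal configuration."""
    n = len(chips)
    chips = list(chips)
    queue = deque(i for i in range(n) if chips[i] >= 2)
    while queue:
        i = queue.popleft()
        while chips[i] >= 2:          # drain vertex i completely
            chips[i] -= 2
            for j in (i - 1, i + 1):
                if 0 <= j < n:
                    chips[j] += 1
                    if chips[j] == 2:  # j just became unstable
                        queue.append(j)
    return chips
```

This simulation can take time proportional to the total number of firings, which is exactly the cost the paper's $O(n)$-time path algorithm avoids.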
Chan, Har-Peled, and Jones [SICOMP 2020] developed locality-sensitive orderings (LSOs) for Euclidean space. A $(\tau,\rho)$-LSO is a collection $\Sigma$ of orderings such that for every $x,y\in\mathbb{R}^d$ there is an ordering $\sigma\in\Sigma$, where all the points between $x$ and $y$ w.r.t. $\sigma$ are in the $\rho$-neighborhood of either $x$ or $y$. In essence, LSOs allow one to reduce problems to the $1$-dimensional line. Later, Filtser and Le [STOC 2022] developed LSOs for doubling metrics, general metric spaces, and minor-free graphs. For Euclidean and doubling spaces, the number of orderings in the LSO is exponential in the dimension, which made them mainly useful in the low-dimensional regime. In this paper, we develop new LSOs for Euclidean, $\ell_p$, and doubling spaces that allow us to trade larger stretch for a much smaller number of orderings. We then use our new LSOs (as well as the previous ones) to construct path reporting low hop spanners, fault tolerant spanners, reliable spanners, and light spanners for different metric spaces. While many nearest neighbor search (NNS) data structures were constructed for metric spaces with implicit distance representations (where the distance between two metric points can be computed using their names, e.g. Euclidean space), for other spaces almost nothing is known. In this paper we initiate the study of the labeled NNS problem, where one is allowed to artificially assign labels (short names) to metric points. We use LSOs to construct efficient labeled NNS data structures in this model.
The \emph{$f$-fault-tolerant connectivity labeling} ($f$-FTC labeling) is a scheme of assigning each vertex and edge a small-size label such that one can determine the connectivity of two vertices $s$ and $t$ under the presence of at most $f$ faulty edges only from the labels of $s$, $t$, and the faulty edges. This paper presents a new deterministic $f$-FTC labeling scheme attaining $O(f^2 \mathrm{polylog}(n))$-bit label size and a polynomial construction time, which settles the open problem left by Dory and Parter [PODC'21]. The key ingredient of our construction is to develop a deterministic counterpart of the graph sketch technique by Ahn, Guha, and McGregor [SODA'12], via a natural connection with the theory of error-correcting codes. This technique removes one major obstacle in de-randomizing the Dory-Parter scheme. The whole scheme is obtained by combining this technique with a new deterministic graph sparsification algorithm derived from the seminal $\epsilon$-net theory, which is also of independent interest. As byproducts, our result yields the first deterministic fault-tolerant approximate distance labeling scheme with a non-trivial performance guarantee and an improved deterministic fault-tolerant compact routing scheme. We believe that our new technique is potentially useful in the future exploration of more efficient FTC labeling schemes and other related applications based on graph sketches.
Subgraph and homomorphism counting are fundamental algorithmic problems. Given a constant-sized pattern graph $H$ and a large input graph $G$, we wish to count the number of $H$-homomorphisms/subgraphs in $G$. Given the massive sizes of real-world graphs and the practical importance of counting problems, we focus on when (near) linear time algorithms are possible. The seminal work of Chiba-Nishizeki (SICOMP 1985) shows that for bounded degeneracy graphs $G$, clique and $4$-cycle counting can be done in linear time. Recent works (Bera et al, SODA 2021, JACM 2022) show a dichotomy theorem characterizing the patterns $H$ for which $H$-homomorphism counting is possible in linear time, for bounded degeneracy inputs $G$. At the other end, Ne\v{s}et\v{r}il and Ossona de Mendez used their deep theory of "sparsity" to define bounded expansion graphs. They prove that, for all $H$, $H$-homomorphism counting can be done in linear time for bounded expansion inputs. What lies between? For a specific $H$, can we characterize input classes where $H$-homomorphism counting is possible in linear time? We discover a hierarchy of dichotomy theorems that precisely answer the above questions. We show the existence of an infinite sequence of graph classes $\mathcal{G}_0 \supseteq \mathcal{G}_1 \supseteq \cdots \supseteq \mathcal{G}_\infty$ where $\mathcal{G}_0$ is the class of bounded degeneracy graphs, and $\mathcal{G}_\infty$ is the class of bounded expansion graphs. Fix any constant-sized pattern graph $H$. Let $LICL(H)$ denote the length of the longest induced cycle in $H$. We prove the following. If $LICL(H) < 3(r+2)$, then $H$-homomorphisms can be counted in linear time for inputs in $\mathcal{G}_r$. If $LICL(H) \geq 3(r+2)$, then $H$-homomorphism counting on inputs from $\mathcal{G}_r$ takes $\Omega(m^{1+\gamma})$ time for some constant $\gamma > 0$. We prove similar dichotomy theorems for subgraph counting.
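As a reminder of the object being counted, here is a brute-force homomorphism counter (exponential in $|V(H)|$; the dichotomies above concern when this count can instead be obtained in linear time in $|G|$):

```python
from itertools import product

def count_homomorphisms(H_edges, H_n, G_adj):
    """Count maps phi from the pattern vertices {0, ..., H_n - 1} to V(G)
    such that every edge (u, v) of H maps to an edge of G. G_adj is a dict
    mapping each vertex of G to the set of its neighbours."""
    G_vertices = list(G_adj)
    count = 0
    for phi in product(G_vertices, repeat=H_n):
        if all(phi[v] in G_adj[phi[u]] for u, v in H_edges):
            count += 1
    return count
```

For example, the triangle has $3! = 6$ homomorphisms into $K_3$ (every injective map works, and no non-injective map can, since adjacent pattern vertices need adjacent images).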
We give near-optimal algorithms for computing an ellipsoidal rounding of a convex polytope whose vertices are given in a stream. The approximation factor is linear in the dimension (as in John's theorem) and loses only a logarithmic factor in the aspect ratio of the polytope. Our algorithms are nearly optimal in two senses: first, their runtimes nearly match those of the most efficient known algorithms for the offline version of the problem. Second, their approximation factors nearly match a lower bound we show against a natural class of geometric streaming algorithms. In contrast to existing works in the streaming setting that compute ellipsoidal roundings only for centrally symmetric convex polytopes, our algorithms apply to general convex polytopes. We also show how to use our algorithms to construct coresets from a stream of points that approximately preserve both the ellipsoidal rounding and the convex hull of the original set of points.
We consider the problem of variational Bayesian inference in a latent variable model where a (possibly complex) observed stochastic process is governed by the solution of a latent stochastic differential equation (SDE). Motivated by the challenges, such as efficient gradient computation, that arise when trying to learn an (almost arbitrary) latent neural SDE from large-scale data, we take a step back and study a specific subclass instead. In our case, the SDE evolves on a homogeneous latent space and is induced by stochastic dynamics of the corresponding (matrix) Lie group. In learning problems, SDEs on the unit $n$-sphere are arguably the most relevant incarnation of this setup. Notably, for variational inference, the sphere not only facilitates using a truly uninformative prior SDE, but we also obtain a particularly simple and intuitive expression for the Kullback-Leibler divergence between the approximate posterior and prior process in the evidence lower bound. Experiments demonstrate that a latent SDE of the proposed type can be learned efficiently by means of an existing one-step geometric Euler-Maruyama scheme. Despite restricting ourselves to a less diverse class of SDEs, we achieve competitive or even state-of-the-art performance on various time series interpolation and classification benchmarks.
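A minimal sketch of a geometric Euler-Maruyama step on the unit sphere, in the spirit of the one-step scheme mentioned above (this is a generic tangent-projection-plus-exponential-map update, not necessarily the paper's exact formulation): the Euclidean increment is projected onto the tangent space at the current state, then the exponential map of the sphere is applied, so the iterate stays exactly on the manifold.

```python
import math
import random

def sphere_em_step(x, drift, dt, sigma, rng):
    """One geometric Euler-Maruyama step on the unit sphere. The increment
    drift*dt + sigma*sqrt(dt)*xi is projected onto the tangent space at x
    (v <- v - <v, x> x), then moved along the geodesic via the sphere's
    exponential map: cos(|v|) x + sin(|v|) v/|v|."""
    noise = [sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0) for _ in x]
    v = [d * dt + w for d, w in zip(drift, noise)]
    dot = sum(vi * xi for vi, xi in zip(v, x))
    v = [vi - dot * xi for vi, xi in zip(v, x)]   # tangent projection
    nv = math.sqrt(sum(vi * vi for vi in v))
    if nv < 1e-15:
        return list(x)
    return [math.cos(nv) * xi + math.sin(nv) * vi / nv
            for xi, vi in zip(x, v)]
```

Because the update is an exact geodesic step, the state remains on the sphere up to floating-point error, with no renormalization needed.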
The orthogonality dimension of a graph $G$ over $\mathbb{R}$ is the smallest integer $k$ for which one can assign a nonzero $k$-dimensional real vector to each vertex of $G$, such that every two adjacent vertices receive orthogonal vectors. We prove that for every sufficiently large integer $k$, it is $\mathsf{NP}$-hard to decide whether the orthogonality dimension of a given graph over $\mathbb{R}$ is at most $k$ or at least $2^{(1-o(1)) \cdot k/2}$. We further prove such hardness results for the orthogonality dimension over finite fields as well as for the closely related minrank parameter, which is motivated by the index coding problem in information theory. This in particular implies that it is $\mathsf{NP}$-hard to approximate these graph quantities to within any constant factor. Previously, the hardness of approximation was known to hold either assuming certain variants of the Unique Games Conjecture or for approximation factors smaller than $3/2$. The proofs involve the concept of line digraphs and bounds on their orthogonality dimension and on the minrank of their complement.
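The definition above can be checked directly for a candidate assignment; here is a small verifier (an illustration of the definition only, since deciding the orthogonality dimension itself is what the hardness result concerns):

```python
def is_orthogonal_representation(adj, vectors, tol=1e-9):
    """Check that every vertex gets a nonzero vector and every pair of
    adjacent vertices gets orthogonal vectors. The orthogonality dimension
    of the graph is the least k for which such vectors exist in R^k."""
    for vec in vectors.values():
        if all(abs(c) < tol for c in vec):   # zero vectors are forbidden
            return False
    for u in adj:
        for v in adj[u]:
            dot = sum(a * b for a, b in zip(vectors[u], vectors[v]))
            if abs(dot) > tol:
                return False
    return True
```

For instance, the complete graph $K_k$ has orthogonality dimension exactly $k$: the standard basis vectors give a valid assignment in $\mathbb{R}^k$, and $k$ pairwise orthogonal nonzero vectors cannot fit in fewer dimensions.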
We study the emptiness and $\lambda$-reachability problems for unary and binary Probabilistic Finite Automata (PFA) and characterise the complexity of these problems in terms of the degree of ambiguity of the automaton and the size of its alphabet. Our main result is that emptiness and $\lambda$-reachability are solvable in EXPTIME for polynomially ambiguous unary PFA and if, in addition, the transition matrix is binary, we show they are in NP. In contrast to the Skolem-hardness of the $\lambda$-reachability and emptiness problems for exponentially ambiguous unary PFA, we show that these problems are NP-hard even for finitely ambiguous unary PFA. For binary polynomially ambiguous PFA with fixed and commuting transition matrices, we prove NP-hardness of the $\lambda$-reachability (dimension 9), nonstrict emptiness (dimension 37) and strict emptiness (dimension 40) problems.
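For reference, the quantity underlying both problems is the acceptance probability of a word: the initial distribution is pushed through the (row-stochastic) transition matrix of each letter, and the mass on final states is summed. Emptiness asks whether some word is accepted with probability exceeding the threshold $\lambda$ (strictly or non-strictly), and $\lambda$-reachability asks whether some word hits $\lambda$ exactly. A minimal sketch:

```python
def acceptance_probability(pi, matrices, final, word):
    """Compute pi * A_{w_1} * ... * A_{w_k} restricted to the final states.
    pi is the initial distribution, matrices maps each letter to its
    row-stochastic transition matrix, final is the set of accepting states."""
    dist = list(pi)
    for a in word:
        M = matrices[a]
        dist = [sum(dist[i] * M[i][j] for i in range(len(dist)))
                for j in range(len(M[0]))]
    return sum(dist[q] for q in final)
```

For a unary PFA the alphabet has a single letter, so the reachable values are exactly $\pi A^k \eta$ for $k \ge 0$, which is what ties these problems to questions about powers of a single matrix.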