We study the Fr\'echet queries problem. It is a data structure problem, where we are given a set $S$ of $n$ polygonal curves and a distance threshold $\rho$. The data structure should support queries with a polygonal curve $q$ for the elements of $S$ whose continuous Fr\'echet distance to $q$ is at most $\rho$. Afshani and Driemel studied this problem in 2018 for two-dimensional polygonal curves and gave upper and lower bounds on the space-query time tradeoff. We study the case that the ambient space of the curves is one-dimensional and show an intimate connection to the well-studied rectangle stabbing problem. Here, we are given a set of hyperrectangles as input, and a query with a point $q$ should return all input hyperrectangles that contain $q$. Using known data structures for rectangle stabbing or orthogonal range searching, this directly leads to a data structure with $\mathcal{O}(n \log^{t-1} n)$ storage and $\mathcal{O}(\log^{t-1} n+k)$ query time, where $k$ denotes the output size and $t$ can be chosen as the maximum number of vertices of either (a) the stored curves or (b) the query curves. The resulting bounds improve upon the bounds by Afshani and Driemel in both storage and query time. In addition, we show that known lower bounds for rectangle stabbing and orthogonal range reporting with dimension parameter $d = \lfloor t/2 \rfloor$ can be applied to our problem via reduction.
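For intuition, the following is a minimal sketch of the rectangle-stabbing query that the reduction above targets. The brute-force structure and names are illustrative only; the polylogarithmic bounds quoted above require tree-based structures (e.g., layered range trees), and the mapping from curves to hyperrectangles is the technical content of the paper, not reproduced here.

```python
from typing import List, Tuple

Rect = List[Tuple[float, float]]  # one (lo, hi) interval per dimension

def stabbing_query(rects: List[Rect], q: List[float]) -> List[int]:
    """Return indices of all hyperrectangles that contain the point q."""
    hits = []
    for idx, rect in enumerate(rects):
        if all(lo <= x <= hi for (lo, hi), x in zip(rect, q)):
            hits.append(idx)
    return hits

# Example: two 2-dimensional rectangles; the query point (1.5, 0.5) stabs only the first.
rects = [[(1.0, 2.0), (0.0, 1.0)], [(3.0, 4.0), (0.0, 1.0)]]
print(stabbing_query(rects, [1.5, 0.5]))  # -> [0]
```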
This is an expository note explaining how the geometric notions of local connectedness and properness are related to the $\Sigma$-type and $\Pi$-type constructors of dependent type theory.
We study the Generalized Red-Blue Annulus Cover problem for two sets of points, red ($R$) and blue ($B$), where each point $p \in R\cup B$ is associated with a positive penalty ${\cal P}(p)$. The red points have non-covering penalties, and the blue points have covering penalties. The objective is to compute a circular annulus ${\cal A}$ that minimizes ${\cal P}(R^{out}) + {\cal P}(B^{in})$, where $R^{out} \subseteq R$ is the set of red points not covered by ${\cal A}$ and $B^{in} \subseteq B$ is the set of blue points covered by ${\cal A}$. We also study another version of this problem, in which the circular annulus must cover all the red points in $R$ while covering the minimum number of blue points in $B$, in two dimensions. We design polynomial-time algorithms for all such circular annulus problems.
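As a concrete illustration of the objective, the sketch below evaluates ${\cal P}(R^{out}) + {\cal P}(B^{in})$ for a single candidate circular annulus. The point format and parameters are assumptions for illustration; the paper's contribution is the polynomial-time search over candidate annuli, which is not shown here.

```python
import math

def annulus_cost(red, blue, center, r_in, r_out):
    """red/blue: lists of ((x, y), penalty); annulus = {p : r_in <= |p - center| <= r_out}."""
    def inside(p):
        d = math.dist(p, center)
        return r_in <= d <= r_out
    uncovered_red = sum(pen for p, pen in red if not inside(p))   # P(R^out)
    covered_blue = sum(pen for p, pen in blue if inside(p))       # P(B^in)
    return uncovered_red + covered_blue

red = [((0.0, 2.0), 1.0), ((3.0, 0.0), 2.0)]
blue = [((0.0, 0.0), 5.0), ((2.5, 0.0), 1.5)]
print(annulus_cost(red, blue, center=(0.0, 0.0), r_in=1.0, r_out=3.5))  # -> 1.5
```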
In this work we study multi-server queues on a Euclidean space. Consider $N$ servers distributed uniformly in $[0,1]^d$. Customers (users) arrive at the servers according to independent Poisson processes of intensity $\lambda$. However, each customer probabilistically decides whether to join the queue they arrived at or to move to one of the nearest neighbouring servers. The strategy followed by the customers affects the load on the servers in the long run. In this paper, we are interested in characterizing the fraction of servers that bear a larger load compared to the case where the users follow no strategy, i.e., they join the queue they arrive at. These are called overloaded servers. In the one-dimensional case ($d=1$), we evaluate the expected fraction of overloaded servers for any finite $N$ when the users follow probabilistic nearest-neighbour shift strategies. Additionally, for servers distributed in a $d$-dimensional space, we provide expressions for the fraction of overloaded servers in the system as the total number of servers $N\rightarrow \infty$. Numerical experiments are provided to support our claims. Typical applications of our results include electric vehicles queueing at charging stations, and queues in airports or supermarkets.
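The following Monte Carlo sketch ($d=1$) estimates the fraction of overloaded servers under one concrete instance of a nearest-neighbour shift strategy: each arriving customer stays with probability $1-p$ and otherwise moves to its server's nearest neighbour. The strategy, the estimator, and all parameters below are illustrative assumptions; the finite-$N$ and asymptotic formulas are derived in the paper.

```python
import random

def overloaded_fraction(N, p, trials=2000):
    total = 0.0
    for _ in range(trials):
        x = sorted(random.random() for _ in range(N))
        rate = [1.0 - p] * N  # retained fraction of each server's own (unit) arrival rate
        for i in range(N):
            # nearest neighbour on the line; endpoints have only one choice
            if i == 0:
                j = 1
            elif i == N - 1:
                j = N - 2
            else:
                j = i - 1 if x[i] - x[i - 1] < x[i + 1] - x[i] else i + 1
            rate[j] += p  # shifted arrivals land at the nearest neighbour
        total += sum(r > 1.0 for r in rate) / N  # overloaded: rate above the no-strategy level
    return total / trials

print(overloaded_fraction(N=50, p=0.3))
```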
More than 40 years ago, Schroeppel and Shamir presented an algorithm that solves the Subset Sum problem for $n$ integers in time $O^*(2^{0.5n})$ and space $O^*(2^{0.25n})$. The time upper bound remains unbeaten, but the space upper bound has been improved to $O^*(2^{0.249999n})$ in a recent breakthrough paper by Nederlof and W\k{e}grzycki (STOC 2021). Their algorithm is a clever combination of a number of previously known techniques with a new reduction and a new algorithm for the Orthogonal Vectors problem. In this paper, we give two new algorithms for Subset Sum. We start by presenting an Arthur--Merlin algorithm: upon receiving the verifier's randomness, the prover sends an $n/4$-bit long proof to the verifier, who checks it in (deterministic) time and space $O^*(2^{n/4})$. The simplicity of this algorithm has a number of interesting consequences: it can be parallelized easily; also, by enumerating all possible proofs, one recovers the upper bounds on time and space for Subset Sum proved by Schroeppel and Shamir in 1979. As is the case with the previously known algorithms for Subset Sum, our algorithm follows from an algorithm for $4$-SUM: we prove that, using the verifier's coin tosses, the prover can prepare a $\log_2 n$-bit long proof verifiable in time $\tilde{O}(n)$. Another interesting consequence of this result is the following fine-grained lower bound: assuming that $4$-SUM cannot be solved in time $O(n^{2-\varepsilon})$ for any $\varepsilon>0$, Circuit SAT cannot be solved in time $O(g2^{(1-\varepsilon)n})$ for any $\varepsilon>0$, where $g$ denotes the number of gates of the circuit. Then, we improve the space bound by Nederlof and W\k{e}grzycki to $O^*(2^{0.246n})$ and also simplify their algorithm and its analysis. We achieve this space bound by further filtering sets of subsets using a random prime number. This allows us to reduce an instance of Subset Sum to a larger number of instances of smaller size.
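For readers unfamiliar with this line of work, the sketch below shows the classical meet-in-the-middle baseline for Subset Sum (time and space $O^*(2^{n/2})$), which is the textbook starting point that the results above refine. It is not the Arthur--Merlin protocol or the $O^*(2^{0.246n})$-space algorithm itself.

```python
from itertools import combinations

def subset_sums(items):
    """Map each achievable sum to one witnessing sub-multiset."""
    sums = {0: ()}
    for r in range(1, len(items) + 1):
        for comb in combinations(items, r):
            sums.setdefault(sum(comb), comb)
    return sums

def subset_sum(nums, target):
    """Meet in the middle: split, enumerate both halves, match complementary sums."""
    left, right = nums[: len(nums) // 2], nums[len(nums) // 2 :]
    left_sums, right_sums = subset_sums(left), subset_sums(right)
    for s, comb in left_sums.items():
        if target - s in right_sums:
            return comb + right_sums[target - s]
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # e.g. (4, 5)
```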
Given a graph $G$ with $n$ nodes and two nodes $u,v\in G$, the {\em CoSimRank} value $s(u,v)$ quantifies the similarity between $u$ and $v$ based on graph topology. Compared to SimRank, CoSimRank has been shown to be more accurate and effective in many real-world applications, including synonym expansion, lexicon extraction, and entity relatedness in knowledge graphs. The computation of all pairwise CoSimRanks in $G$ is highly expensive and challenging. Existing solutions all focus on devising approximate algorithms for the computation of all pairwise CoSimRanks. To attain a desired absolute accuracy guarantee $\epsilon$, the state-of-the-art approximate algorithm for computing all pairwise CoSimRanks requires $O(n^3\log_2(\ln(\frac{1}{\epsilon})))$ time, which is prohibitively expensive even when $\epsilon$ is large. In this paper, we propose \rsim, a fast randomized algorithm for computing all pairwise CoSimRank values. The basic idea of \rsim is to approximate the $n\times n$ matrix multiplications in CoSimRank computation via random projection. Theoretically, \rsim runs in $O(\frac{n^2\ln(n)}{\epsilon^2}\ln(\frac{1}{\epsilon}))$ time and, with high probability, ensures an absolute error of at most $\epsilon$ in each CoSimRank value in $G$. Extensive experiments on six real graphs demonstrate that \rsim is orders of magnitude faster than the state of the art. In particular, on a million-edge Twitter graph, \rsim answers the $\epsilon$-approximate ($\epsilon=0.1$) all pairwise CoSimRank query within 4 hours on a single commodity server, while existing solutions fail to terminate within a day.
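The sketch below illustrates the generic random-projection primitive referred to above: an $n \times n$ matrix product $AB$ is approximated by projecting through a random $d$-dimensional sketch, which costs $O(n^2 d)$ instead of $O(n^3)$. This shows the primitive only; the actual \rsim algorithm, its iteration over CoSimRank's power series, and its error analysis are in the paper, and all names and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 300, 200                      # d controls the accuracy/time trade-off

A = rng.standard_normal((n, n)) / n
B = rng.standard_normal((n, n)) / n

S = rng.choice([-1.0, 1.0], size=(d, n)) / np.sqrt(d)   # random sign projection, E[S.T @ S] = I
approx = (A @ S.T) @ (S @ B)          # O(n^2 d) work
exact = A @ B                         # O(n^3) work

print(f"max absolute error: {np.max(np.abs(approx - exact)):.2e}")
```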
We study lower bounds on adaptive sensing algorithms for recovering low-rank matrices using linear measurements. Given an $n \times n$ matrix $A$, a general linear measurement $S(A)$, for an $n \times n$ matrix $S$, is just the inner product of $S$ and $A$, each treated as an $n^2$-dimensional vector. By performing as few linear measurements as possible on a rank-$r$ matrix $A$, we hope to construct a matrix $\hat{A}$ that satisfies $\|A - \hat{A}\|_F^2 \le c\|A\|_F^2$ for a small constant $c$. It is commonly assumed that when measuring $A$ with $S$, the response is corrupted by an independent Gaussian random variable of mean $0$ and variance $\sigma^2$. Cand\`es and Plan studied non-adaptive algorithms for low-rank matrix recovery using random linear measurements. At a certain noise level, it is known that their non-adaptive algorithms need to perform $\Omega(n^2)$ measurements, which amounts to reading the entire matrix. An important question is whether adaptivity helps in decreasing the overall number of measurements. We show that any adaptive algorithm that uses $k$ linear measurements in each round and outputs an approximation to the underlying matrix with probability $\ge 9/10$ must run for $t = \Omega(\log(n^2/k)/\log\log n)$ rounds. In particular, any adaptive algorithm that uses $n^{2-\beta}$ linear measurements in each round must run for $\Omega(\log n/\log\log n)$ rounds to compute a reconstruction with probability $\ge 9/10$. Hence, any adaptive algorithm with $o(\log n/\log\log n)$ rounds must use $\Omega(n^2)$ linear measurements in total. Our techniques also readily extend to lower bounds on adaptive algorithms for tensor recovery, and to measurement-versus-rounds trade-offs for many sensing problems in numerical linear algebra, such as spectral norm low-rank approximation, Frobenius norm low-rank approximation, singular vector approximation, and more.
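For concreteness, the following is a minimal sketch of the measurement model assumed above: one linear measurement of $A$ with matrix $S$ returns the entrywise inner product corrupted by Gaussian noise. The adaptive algorithms being lower-bounded choose each $S$ based on previous responses; nothing of the lower-bound argument itself is reproduced here, and all names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def measure(A, S, sigma):
    """Noisy linear measurement <S, A> + N(0, sigma^2)."""
    return float(np.sum(S * A)) + sigma * rng.standard_normal()

n, r, sigma = 8, 2, 0.1
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # a rank-r matrix
S = rng.standard_normal((n, n))                                # one measurement matrix
print(measure(A, S, sigma))
```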
Fr\'echet means are a conceptually appealing generalization of the Euclidean expectation to general metric spaces. We explore how well Fr\'echet means can be estimated from independent and identically distributed samples and uncover a fundamental limitation: in the vicinity of a probability distribution $P$ with nonunique means, independently of the sample size, it is not possible to uniformly estimate Fr\'echet means below a precision determined by the diameter of the set of Fr\'echet means of $P$. Implications were previously identified for empirical plug-in estimators as part of the phenomenon of \emph{finite sample smeariness}. Our findings thus confirm inevitable statistical challenges in the estimation of Fr\'echet means on metric spaces that admit distributions with nonunique means. Illustrating the relevance of our lower bound, examples of extrinsic, intrinsic, Procrustes, diffusion and Wasserstein means showcase either deteriorating constants or slow convergence rates of empirical Fr\'echet means for samples near the regime of nonunique means.
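For context, the standard definition (not specific to this paper's setting) is that a Fr\'echet mean of a distribution $P$ on a metric space $(M,d)$ is any minimizer of the expected squared distance,
\[
\operatorname{FM}(P) \;=\; \operatorname*{arg\,min}_{x \in M} \int_M d^2(x, y)\, \mathrm{d}P(y),
\]
and the lower bound above concerns distributions $P$ for which this minimizing set is not a singleton, so that it has positive diameter.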
We study the bi-sparse blind deconvolution problem: from the knowledge of $h*(Qb)$, where $Q$ is some linear operator, recover $h$ and $b$, both of which are assumed to be sparse. The approach rests upon lifting the problem to a linear one and then applying the hierarchical sparsity framework. In particular, the efficient HiHTP algorithm is proposed for performing the recovery. Then, under a random model on the matrix $Q$, it is theoretically shown that an $s$-sparse $h \in \mathbb{K}^\mu$ and a $\sigma$-sparse $b \in \mathbb{K}^n$ can be recovered with high probability when $\mu \succcurlyeq s\log(s)^2\log(\mu)\log(\mu n) + s\sigma \log(n)$.
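The sketch below shows only the hierarchical $(s,\sigma)$-thresholding step that HiHTP-type methods repeat on the lifted matrix $X \approx h b^{\mathsf T}$: keep the $\sigma$ largest entries in every row, then keep the $s$ rows with the largest remaining energy. This is the standard hierarchical projection, not the full HiHTP iteration or the paper's analysis, and the example data are illustrative.

```python
import numpy as np

def hierarchical_threshold(X, s, sigma):
    """Project X onto matrices with at most s nonzero rows, each sigma-sparse."""
    X = np.asarray(X, dtype=float)
    out = np.zeros_like(X)
    for i, row in enumerate(X):
        keep = np.argsort(np.abs(row))[-sigma:]      # sigma largest entries per row
        out[i, keep] = row[keep]
    row_energy = np.sum(out ** 2, axis=1)
    top_rows = np.argsort(row_energy)[-s:]           # s rows with the largest energy
    mask = np.zeros(X.shape[0], dtype=bool)
    mask[top_rows] = True
    out[~mask] = 0.0
    return out

rng = np.random.default_rng(2)
X = np.outer([0, 2.0, 0, -1.0], [0, 0, 3.0, 0.5]) + 0.05 * rng.standard_normal((4, 4))
print(hierarchical_threshold(X, s=2, sigma=2))
```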
Given two sets $\mathit{R}$ and $\mathit{B}$ of at most $\mathit{n}$ points in the plane, we present efficient algorithms to find a two-line linear classifier that best separates the "red" points in $\mathit{R}$ from the "blue" points in $B$ and is robust to outliers. More precisely, we find a region $\mathit{W}_\mathit{B}$ bounded by two lines, i.e., a halfplane, strip, wedge, or double wedge, that contains (most of) the blue points in $\mathit{B}$ and few red points. Our running times vary between the optimal $O(n\log n)$ and $O(n^4)$, depending on the type of region $\mathit{W}_\mathit{B}$ and on whether we wish to minimize only red outliers, only blue outliers, or both.
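The brute-force sketch below evaluates the quantity these algorithms optimize for one candidate region $\mathit{W}_\mathit{B}$ (here a strip between two parallel lines): the red points inside it and the blue points outside it. The efficient algorithms in the paper search over candidate regions far faster than this naive check; the region parametrization and data are illustrative assumptions.

```python
def strip_outliers(red, blue, a, b, c_lo, c_hi):
    """Strip W_B = {(x, y) : c_lo <= a*x + b*y <= c_hi}; return (red inside, blue outside)."""
    inside = lambda p: c_lo <= a * p[0] + b * p[1] <= c_hi
    red_outliers = sum(1 for p in red if inside(p))        # red points inside W_B
    blue_outliers = sum(1 for p in blue if not inside(p))  # blue points outside W_B
    return red_outliers, blue_outliers

red = [(0.0, 3.0), (1.0, -2.0)]
blue = [(0.5, 0.2), (1.0, 0.9), (2.0, 4.0)]
print(strip_outliers(red, blue, a=0.0, b=1.0, c_lo=0.0, c_hi=1.0))  # -> (0, 1)
```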
We study a variant of Collaborative PAC Learning, in which we aim to learn an accurate classifier for each of $n$ data distributions while minimizing the total number of samples drawn from them. Unlike in the usual collaborative learning setup, it is not assumed that there exists a single classifier that is simultaneously accurate for all distributions. We show that, when the data distributions satisfy a weaker realizability assumption, sample-efficient learning is still feasible. We give a learning algorithm based on Empirical Risk Minimization (ERM) on a natural augmentation of the hypothesis class, and the analysis relies on an upper bound on the VC dimension of this augmented class. In terms of computational efficiency, we show that ERM on the augmented hypothesis class is NP-hard, which gives evidence against the existence of computationally efficient learners in general. On the positive side, for two special cases, we give learners that are both sample-efficient and computationally efficient.
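The toy sketch below conveys why joint ERM of this flavour is combinatorial: pick $k$ hypotheses from a finite base class and assign one to each of the $n$ distributions so that the total empirical error is minimized. The paper's actual augmented class, its VC-dimension bound, and the NP-hardness construction are not reproduced here; the hypothesis class, data format, and brute-force search are assumptions for illustration only.

```python
from itertools import combinations

def joint_erm(samples_per_task, hypotheses, k):
    """samples_per_task: list of lists of (x, y); hypotheses: callables x -> y."""
    def task_error(h, samples):
        return sum(1 for x, y in samples if h(x) != y)
    best = None
    for subset in combinations(hypotheses, k):
        # each task is served by its best hypothesis within the chosen k
        cost = sum(min(task_error(h, s) for h in subset) for s in samples_per_task)
        if best is None or cost < best[0]:
            best = (cost, subset)
    return best

# Two threshold classifiers over the reals; two tasks, each realizable by one of them.
hypotheses = [lambda x: int(x > 0), lambda x: int(x > 5)]
tasks = [[(-1, 0), (1, 1)], [(4, 0), (6, 1)]]
print(joint_erm(tasks, hypotheses, k=2)[0])  # -> 0 total empirical error
```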