We consider the problem of privately clustering a dataset in $\mathbb{R}^d$ that undergoes both insertions and deletions of points. Specifically, we give an $\varepsilon$-differentially private clustering mechanism for the $k$-means objective under continual observation. This is the first approximation algorithm for this problem whose additive error depends only logarithmically on the number $T$ of updates; the multiplicative error is almost the same as in the non-private setting. To achieve this, we show how to perform dimension reduction under continual observation and combine it with a differentially private greedy approximation algorithm for $k$-means. We also partially extend our results to the $k$-median problem.
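For orientation, the objective being approximated and the shape of such a guarantee can be written as follows; this is a generic statement of the $k$-means cost and of a multiplicative-plus-additive error bound, not the exact parameters proved in the paper:
\[
\mathrm{cost}_k(X) \;=\; \min_{C \subset \mathbb{R}^d,\ |C| = k} \; \sum_{x \in X} \min_{c \in C} \lVert x - c \rVert_2^2 ,
\]
and an $\varepsilon$-differentially private mechanism under continual observation releases, after each of the $T$ updates, centers whose cost on the current dataset $X_t$ is at most $w \cdot \mathrm{cost}_k(X_t) + \eta$, where $w$ is close to the best non-private approximation ratio and the additive term $\eta$ grows only polylogarithmically in $T$.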
In the $t$-online-erasure model in property testing, an adversary is allowed to erase $t$ values of the queried function for each query the tester makes. This model was recently formulated by Kalemaj, Raskhodnikova, and Varma, who showed that linearity and quadraticity of functions can be tested with $O_t(1)$ queries: $O(\log t)$ for linearity and $2^{2^{O(t)}}$ for quadraticity. They asked whether the more general property of low-degreeness can be tested in the online-erasure model, whether better testers exist for quadraticity, and whether similar results hold when ``erasures'' are replaced with ``corruptions''. We show that, in the $t$-online-erasure model, for a prime power $q$, given query access to a function $f: \mathbb{F}_q^n \to \mathbb{F}_q$, one can distinguish with $\mathrm{poly}(\log^{d+q}(t)/\delta)$ queries between the case that $f$ has degree at most $d$ and the case that $f$ is $\delta$-far from every degree-$d$ function (with respect to the fractional Hamming distance). This answers the aforementioned questions and brings the query complexity nearly in line with that of low-degree testing in the classical property testing model. Our results are based on the observation that the property of low-degreeness admits a large and versatile family of query-efficient testers. Our tester operates by querying a uniformly random, sufficiently large set of points in a large enough affine subspace, and finding a tester for low-degreeness that only uses queries from that set of points. We believe this tester may find other applications to algorithms in the online-erasure model or related models, and may be of independent interest.
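As background for readers unfamiliar with low-degree testing, the classical characterization over $\mathbb{F}_2$ (the AKKLR test) illustrates the kind of local identity such testers check; the setting above is the more general $\mathbb{F}_q$ case, where the queried affine subspace has dimension depending on $d$ and $q$:
\[
\deg(f) \le d \quad \Longleftrightarrow \quad \sum_{S \subseteq [d+1]} f\Big(x + \sum_{i \in S} y_i\Big) \;=\; 0 \quad \text{for all } x, y_1, \dots, y_{d+1} \in \mathbb{F}_2^n ,
\]
so a tester can pick $x, y_1, \dots, y_{d+1}$ uniformly at random and evaluate $f$ on the $2^{d+1}$ points of the spanned affine subspace.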
The goal of the trace reconstruction problem is to recover a string $x\in\{0,1\}^n$ given many independent {\em traces} of $x$, where a trace is a subsequence of $x$ obtained by deleting each bit of $x$ independently with some given probability $p\in [0,1)$. A recent result of Chase (STOC 2021) shows how $x$ can be determined (in exponential time) from $\exp(\widetilde{O}(n^{1/5}))$ traces; this is the state-of-the-art bound on the sample complexity of trace reconstruction. In this paper we consider two kinds of algorithms for the trace reconstruction problem. Our first, and technically more involved, result shows that any $k$-mer-based algorithm for trace reconstruction must use $\exp(\Omega(n^{1/5}))$ traces, under the assumption that the estimator requires $\mathrm{poly}(2^k, 1/\varepsilon)$ traces, thus establishing the optimality of this number of traces. The analysis also shows that the technique used by Chase (STOC 2021) is essentially tight, and hence new techniques are needed to improve the worst-case upper bound. Our second, simpler, result considers the performance of the Maximum Likelihood Estimator (MLE), which outputs the source string with the highest likelihood of generating the observed traces. We show that the MLE uses a nearly optimal number of traces, i.e., within a factor of $n$ of the number of samples needed by an optimal algorithm, and we show that this factor-of-$n$ loss may be necessary in general ``model estimation'' settings.
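As an illustration of the estimator analyzed in the second result, here is a minimal brute-force sketch of the MLE for the i.i.d. deletion channel (exhaustive over all $2^n$ candidates, so only usable for tiny instances); the function names are ours, not the paper's.
\begin{verbatim}
import math
from itertools import product

def subsequence_count(x: str, t: str) -> int:
    """Number of ways t occurs as a subsequence of x (dynamic program)."""
    dp = [0] * (len(t) + 1)
    dp[0] = 1
    for c in x:
        for j in range(len(t), 0, -1):
            if t[j - 1] == c:
                dp[j] += dp[j - 1]
    return dp[-1]

def log_likelihood(x: str, traces: list[str], p: float) -> float:
    """log P(traces | source x), deletion probability p (assumes 0 < p < 1)."""
    total = 0.0
    for t in traces:
        ways = subsequence_count(x, t)
        if ways == 0:
            return float("-inf")
        # Each retained-position set has probability p^(#deleted) * (1-p)^(#kept).
        total += (math.log(ways)
                  + (len(x) - len(t)) * math.log(p)
                  + len(t) * math.log(1 - p))
    return total

def mle_reconstruct(traces: list[str], n: int, p: float) -> str:
    """Exhaustive MLE over all 2^n candidate source strings."""
    candidates = ("".join(bits) for bits in product("01", repeat=n))
    return max(candidates, key=lambda x: log_likelihood(x, traces, p))
\end{verbatim}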
We present an $\widetilde{O}(m\sqrt{\tau}+n\tau)$-time algorithm for finding a minimum-cost flow in graphs with $n$ vertices and $m$ edges, given a tree decomposition of width $\tau$ and polynomially bounded integer costs and capacities. This improves upon the current best algorithms for general linear programs of bounded treewidth, which run in $\widetilde{O}(m \tau^{(\omega+1)/2})$ time, by [Dong-Lee-Ye, 21] and [Gu-Song, 22], where $\omega \approx 2.37$ is the matrix multiplication exponent. Our approach leverages recent advances in structured linear program solvers and robust interior-point methods. As a corollary, for any graph $G$ with $n$ vertices, $m$ edges, and treewidth $\tau$, we obtain an $\widetilde{O}(\tau^3 \cdot m)$-time algorithm to compute a tree decomposition of $G$ of width $O(\tau \cdot \log n)$.
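For context, the problem is solved as the standard minimum-cost flow linear program; schematically (this is background, not the paper's precise formulation),
\[
\min_{f} \; c^{\top} f \quad \text{s.t.} \quad B f = b, \qquad 0 \le f \le u ,
\]
where $B \in \{-1,0,1\}^{n \times m}$ is the vertex-edge incidence matrix of $G$, $b$ encodes the demands, and $u$ the capacities. The constraint matrix inherits its structure from $G$, so its treewidth is governed by the treewidth $\tau$ of $G$, which is what the structured interior-point solver exploits.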
We study local filters for the Lipschitz property of real-valued functions $f: V \to [0,r]$, where the Lipschitz property is defined with respect to an arbitrary undirected graph $G=(V,E)$. We give nearly optimal local Lipschitz filters with respect to both the $\ell_1$ and the $\ell_0$ distance. Previous work only considered unbounded-range functions over $[n]^d$: Jha and Raskhodnikova (SICOMP '13) gave an algorithm for such functions with lookup complexity exponential in $d$, which Awasthi et al.\ (ACM Trans. Comput. Theory) showed is necessary in that setting. By considering the natural class of functions whose range is bounded in $[0,r]$, we circumvent this lower bound and achieve running time $(d^r\log n)^{O(\log r)}$ for the $\ell_1$-respecting filter and $d^{O(r)}\,\mathrm{polylog}\, n$ for the $\ell_0$-respecting filter for functions over $[n]^d$. Furthermore, we show that our algorithms are nearly optimal in terms of the dependence on $r$ for the domain $\{0,1\}^d$, an important special case of $[n]^d$. In addition, our lower bound resolves an open question of Awasthi et al., removing one of the conditions needed for their lower bound for general ranges. We prove our lower bound via a reduction from distribution-free Lipschitz testing. Finally, we provide two applications of our local filters. First, they can be used in conjunction with the Laplace mechanism for differential privacy to obtain filter mechanisms for privately releasing outputs of black-box functions, even in the presence of malicious clients. Second, we use them to obtain the first tolerant testers for the Lipschitz property.
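A rough sketch of the first application, assuming the filter exposes a point query to a Lipschitz function that agrees with $f$ whenever $f$ is already Lipschitz; the names and noise scale are illustrative placeholders, not the paper's mechanism.
\begin{verbatim}
import numpy as np

def private_release(lipschitz_filter, x, epsilon):
    """Release the filtered value of f at x via the Laplace mechanism.

    lipschitz_filter(x) is assumed to implement a local Lipschitz filter:
    its outputs always respect the Lipschitz property over the graph G,
    so values change by at most 1 between neighboring inputs and Laplace
    noise of scale 1/epsilon gives epsilon-differential privacy.
    """
    return lipschitz_filter(x) + np.random.laplace(scale=1.0 / epsilon)
\end{verbatim}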
I study how the shadow prices of a linear program that allocates an endowment of $n\beta \in \mathbb{R}^{m}$ resources to $n$ customers behave as $n \rightarrow \infty$. I show that the shadow prices (i) satisfy a concentration-of-measure inequality, (ii) converge to a multivariate normal under central-limit-theorem scaling, and (iii) have a variance that decreases like $\Theta(1/n)$. I use these results to prove that the expected regret in \cites{Li2019b} online linear program is $\Theta(\log n)$, both when the distribution of customer variables is known upfront and when it must be learned on the fly. I thus tighten \citeauthors{Li2019b} upper bound from $O(\log n \log \log n)$ to $O(\log n)$, and extend \cites{Lueker1995} $\Omega(\log n)$ lower bound to the multi-dimensional setting. I illustrate my new techniques with a simple analysis of \cites{Arlotto2019} multisecretary problem.
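For concreteness, an illustrative form of such an allocation linear program (the notation here is schematic, not necessarily the exact program studied) is
\[
\max_{0 \le x \le 1} \; \sum_{j=1}^{n} r_j x_j \quad \text{s.t.} \quad \sum_{j=1}^{n} a_j x_j \le n\beta ,
\]
where customer $j$ brings reward $r_j$ and resource consumption $a_j \in \mathbb{R}^m$; the shadow prices are the optimal dual variables $y^* \in \mathbb{R}^m_{+}$ attached to the $m$ resource constraints, and the results above describe how $y^*$ concentrates and fluctuates as $n \to \infty$.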
We study several polygonal curve problems under the Fr\'{e}chet distance via algebraic geometric methods. Let $\mathbb{X}_m^d$ and $\mathbb{X}_k^d$ be the spaces of all polygonal curves of $m$ and $k$ vertices in $\mathbb{R}^d$, respectively. We assume that $k \leq m$. Let $\mathcal{R}^d_{k,m}$ be the set of ranges in $\mathbb{X}_m^d$ for all possible metric balls of polygonal curves in $\mathbb{X}_k^d$ under the Fr\'{e}chet distance. We prove a nearly optimal bound of $O(dk\log (km))$ on the VC dimension of the range space $(\mathbb{X}_m^d,\mathcal{R}_{k,m}^d)$, improving on the previous $O(d^2k^2\log(dkm))$ upper bound and approaching the current $\Omega(dk\log k)$ lower bound. Our upper bound also holds for the weak Fr\'{e}chet distance. We also obtain exact solutions, hitherto unknown, for curve simplification, range searching, nearest neighbor search, and distance oracles.
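For reference, the Fr\'{e}chet distance underlying these range spaces is
\[
d_F(P, Q) \;=\; \inf_{\alpha, \beta} \; \max_{t \in [0,1]} \; \bigl\lVert P(\alpha(t)) - Q(\beta(t)) \bigr\rVert ,
\]
where $\alpha, \beta : [0,1] \to [0,1]$ range over continuous non-decreasing surjections and $P, Q$ are viewed as continuous curves tracing their vertices. A range in $\mathcal{R}^d_{k,m}$ is then a metric ball $\{ Q \in \mathbb{X}_m^d : d_F(P, Q) \le r \}$ for some center curve $P \in \mathbb{X}_k^d$ and radius $r \ge 0$.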
We prove that, given a polyhedron $\mathcal P$ in $\mathbb{R}^3$, every point in $\mathbb R^3$ that does not see any vertex of $\mathcal P$ must see eight or more edges of $\mathcal P$, and this bound is tight. More generally, this remains true if $\mathcal P$ is any finite arrangement of internally disjoint polygons in $\mathbb{R}^3$. We also prove that every point in $\mathbb{R}^3$ can see six or more edges of $\mathcal{P}$ (possibly only the endpoints of some of these edges) and every point in the interior of $\mathcal{P}$ can see a positive portion of at least six edges of $\mathcal{P}$. These bounds are also tight.
Safe and optimal controller synthesis for switched-controlled hybrid systems, which combine differential equations and discrete changes of the system's state, is known to be notoriously hard. Reinforcement learning has been leveraged to construct near-optimal controllers, but their behavior is not guaranteed to be safe, even when it is encouraged by reward engineering. One way of imposing safety on a learned controller is to use a shield, which is correct by design. However, obtaining a shield for non-linear and hybrid environments is itself intractable. In this paper, we propose constructing a shield using the so-called barbaric method, where an approximate finite representation of an underlying partition-based two-player safety game is extracted via systematically picked samples of the true transition function. While hard safety guarantees are out of reach, we experimentally demonstrate strong statistical safety guarantees with a prototype implementation and UPPAAL STRATEGO. Furthermore, we study the impact of the synthesized shield when applied either as a pre-shield (applied before learning a controller) or as a post-shield (applied only after learning a controller), and we experimentally demonstrate the superiority of the pre-shielding approach. We apply our technique to a range of case studies, including two industrial examples, and further study post-optimization of the post-shielding approach.
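A minimal sketch of how a post-shield wraps a learned policy, assuming the shield has been precomputed as a map from partition cells to the actions allowed by the safety game; all names are illustrative and this is not the UPPAAL STRATEGO interface.
\begin{verbatim}
def shielded_action(shield, abstract, policy, state):
    """Post-shield: use the learned policy's action unless the shield forbids it.

    shield:   dict mapping an abstract partition cell to the set of actions
              deemed safe in the two-player safety game.
    abstract: function mapping a concrete state to its partition cell.
    policy:   the learned (possibly unsafe) controller.
    """
    allowed = shield[abstract(state)]
    proposed = policy(state)
    if proposed in allowed:
        return proposed
    # Otherwise override with an arbitrary allowed action (e.g. the first one).
    return next(iter(allowed))
\end{verbatim}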
For $1\le p \le \infty$, the Fr\'echet $p$-mean of a probability measure on a metric space is an important notion of central tendency that generalizes the usual notions in the real line of mean ($p=2$) and median ($p=1$). In this work we prove a collection of limit theorems for Fr\'echet means and related objects, which, in general, constitute a sequence of random closed sets. On the one hand, we show that many limit theorems (a strong law of large numbers, an ergodic theorem, and a large deviations principle) can be simply descended from analogous theorems on the space of probability measures via purely topological considerations. On the other hand, we provide the first sufficient conditions for the strong law of large numbers to hold in a $T_2$ topology (in particular, the Fell topology), and we show that this condition is necessary in some special cases. We also discuss statistical and computational implications of the results herein.
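For reference, the Fr\'echet $p$-mean of a probability measure $\mu$ on a metric space $(X,d)$ is, for $1 \le p < \infty$, the set of minimizers
\[
F_p(\mu) \;=\; \operatorname*{arg\,min}_{y \in X} \; \int_X d(x, y)^p \, \mu(\mathrm{d}x) ,
\]
which need not be a single point, whence the set-valued (random closed set) viewpoint above; for $p = \infty$ one minimizes the essential supremum of $x \mapsto d(x,y)$ under $\mu$. On the real line with $p=2$ this recovers the mean, and with $p=1$ the median.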
Shannon proved that almost all Boolean functions require a circuit of size $\Theta(2^n/n)$. We prove a quantum analog of this classical result. Unlike in the classical case, the number of quantum circuits of any fixed size that we allow is uncountably infinite. Our main tool is a classical result in real algebraic geometry bounding the number of realizable sign conditions of any finite set of real polynomials in many variables.
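For comparison, the classical counting argument being generalized can be summarized, with only schematic constants, as
\[
\#\{\text{circuits of size } s\} \;\le\; 2^{O(s \log (s+n))} \;\ll\; 2^{2^n} \;=\; \#\{f : \{0,1\}^n \to \{0,1\}\}
\quad \Longrightarrow \quad s \;=\; \Omega\!\left(\tfrac{2^n}{n}\right)
\]
for almost all $f$. In the quantum setting the circuit parameters are continuous, so counting circuits is replaced by bounding the number of realizable sign conditions of polynomial systems.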