Given $n$-vertex simple graphs $X$ and $Y$, the friends-and-strangers graph $\mathsf{FS}(X, Y)$ has as its vertices all $n!$ bijections from $V(X)$ to $V(Y)$, with bijections $\sigma, \tau$ adjacent if and only if they differ precisely on two adjacent vertices of $X$ whose images are adjacent in $Y$. We consider the setting where $X$ and $Y$ are both edge-subgraphs of $K_{r,r}$: due to a parity obstruction, $\mathsf{FS}(X,Y)$ is always disconnected in this setting. Sharpening a result of Bangachev, we show that if $X$ and $Y$ have minimum degrees $\delta(X)$ and $\delta(Y)$ satisfying $\delta(X) + \delta(Y) \geq \lfloor 3r/2 \rfloor + 1$, then $\mathsf{FS}(X,Y)$ has exactly two connected components. This proves that the cutoff for $\mathsf{FS}(X,Y)$ to avoid isolated vertices coincides with the cutoff for $\mathsf{FS}(X,Y)$ to have exactly two connected components. We also consider a probabilistic setup in which we fix $Y$ to be $K_{r,r}$ but generate $X$ randomly by including each edge of $K_{r,r}$ independently with probability $p$. Invoking a result of Zhu, we exhibit a phase transition phenomenon with threshold function $(\log r)/r$: below the threshold, $\mathsf{FS}(X,Y)$ has more than two connected components with high probability, while above the threshold, $\mathsf{FS}(X,Y)$ has exactly two connected components with high probability. Altogether, our results settle a conjecture and completely answer two problems of Alon, Defant, and Kravitz.
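To make the adjacency rule concrete, here is a minimal brute-force sketch (assuming the networkx library; the example graphs are illustrative, and the construction enumerates all $n!$ bijections, so it is feasible only for very small graphs):

```python
# Build FS(X, Y) by brute force and count its connected components.
# Vertices of X must be labeled 0..n-1 so that a bijection can be stored as a
# tuple sigma with sigma[a] = image in V(Y) of vertex a of X.
from itertools import permutations
import networkx as nx

def friends_and_strangers(X, Y):
    perms = [tuple(p) for p in permutations(sorted(Y.nodes))]
    FS = nx.Graph()
    FS.add_nodes_from(perms)
    for sigma in perms:
        for a, b in X.edges:                    # swap along an edge of X ...
            if Y.has_edge(sigma[a], sigma[b]):  # ... if the images are adjacent in Y
                tau = list(sigma)
                tau[a], tau[b] = tau[b], tau[a]
                FS.add_edge(sigma, tuple(tau))
    return FS

# X = Y = K_{2,2}: both are edge-subgraphs of K_{r,r} with r = 2, so the
# parity obstruction forces at least two components.
K22 = nx.complete_bipartite_graph(2, 2)
print(nx.number_connected_components(friends_and_strangers(K22, K22)))
```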
The Delaunay filtration $\mathcal{D}_{\bullet}(X)$ of a point cloud $X\subset \mathbb{R}^d$ is a central tool of computational topology. Its use is justified by the topological equivalence of $\mathcal{D}_{\bullet}(X)$ and the offset (i.e., union-of-balls) filtration of $X$. Given a function $\gamma: X \to \mathbb{R}$, we introduce a Delaunay bifiltration $\mathcal{DC}_{\bullet}(\gamma)$ that satisfies an analogous topological equivalence, ensuring that $\mathcal{DC}_{\bullet}(\gamma)$ topologically encodes the offset filtrations of all sublevel sets of $\gamma$, as well as the topological relations between them. $\mathcal{DC}_{\bullet}(\gamma)$ is of size $O(|X|^{\lceil\frac{d+1}{2}\rceil})$, which for $d$ odd matches the worst-case size of $\mathcal{D}_{\bullet}(X)$. Adapting the Bowyer-Watson algorithm for computing Delaunay triangulations, we give a simple, practical algorithm to compute $\mathcal{DC}_{\bullet}(\gamma)$ in time $O(|X|^{\lceil \frac{d}{2}\rceil +1})$. Our implementation, based on CGAL, computes $\mathcal{DC}_{\bullet}(\gamma)$ with modest overhead compared to computing $\mathcal{D}_{\bullet}(X)$, and handles tens of thousands of points in $\mathbb{R}^3$ within seconds.
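As a point of comparison, one can spell out what $\mathcal{DC}_{\bullet}(\gamma)$ is required to encode: for each sublevel set of $\gamma$, the offset filtration of the surviving points, which the Delaunay (alpha) filtration captures up to topological equivalence. A naive baseline simply recomputes one alpha complex per threshold; the sketch below uses the gudhi library (an assumption on our part; the paper's implementation is based on CGAL) and is illustrative only:

```python
# Brute-force baseline, NOT the paper's DC(gamma) construction: one alpha
# (Delaunay) filtration per sublevel set of gamma. DC(gamma) encodes all of
# these, and the maps between them, in a single bifiltration.
import numpy as np
import gudhi

def sublevel_alpha_filtrations(points, gamma_values):
    """points: (n, d) array; gamma_values: (n,) array with gamma(x_i)."""
    trees = {}
    for t in sorted(set(gamma_values)):
        sub = points[gamma_values <= t]
        if len(sub) < points.shape[1] + 1:   # skip degenerate sublevel sets
            continue
        st = gudhi.AlphaComplex(points=sub).create_simplex_tree()
        trees[t] = st          # simplices with their alpha filtration values
    return trees

rng = np.random.default_rng(0)
pts = rng.random((200, 3))
gam = pts[:, 0]                # toy gamma: the first coordinate
trees = sublevel_alpha_filtrations(pts, gam)
print(sum(st.num_simplices() for st in trees.values()))
```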
We study optimization problems in a metric space $(\mathcal{X},d)$ where we can compute distances in two ways: via a ``strong'' oracle that returns exact distances $d(x,y)$, and a ``weak'' oracle that returns distances $\tilde{d}(x,y)$ which may be arbitrarily corrupted with some probability. This model captures the increasingly common trade-off between an expensive, accurate similarity model (e.g., a large-scale embedding model) and a cheaper but less accurate one. Hence, the goal is to make as few queries to the strong oracle as possible. We consider both so-called ``point queries'', where the strong oracle is queried on a set of points $S \subset \mathcal{X}$ and returns $d(x,y)$ for all $x,y \in S$, and ``edge queries'', where it is queried for individual distances $d(x,y)$. Our main contributions are optimal algorithms and lower bounds for clustering and Minimum Spanning Tree (MST) in this model. For $k$-centers, $k$-median, and $k$-means, we give constant-factor approximation algorithms with only $\tilde{O}(k)$ strong oracle point queries, and prove that $\Omega(k)$ queries are required for any bounded approximation. For edge queries, our upper and lower bounds are both $\tilde{\Theta}(k^2)$. Surprisingly, for the MST problem we give an $O(\sqrt{\log n})$-approximation algorithm using no strong oracle queries at all, and a matching $\Omega(\sqrt{\log n})$ lower bound. We empirically evaluate our algorithms and show that their quality is comparable to that of baseline algorithms given all true distances, while querying the strong oracle on only a small fraction ($<1\%$) of points.
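The query model itself is easy to simulate. The following toy sketch (illustrative only, not the paper's algorithms; the corruption model, parameters, and helper names are all made up) runs Gonzalez's classical greedy $k$-center heuristic on weak distances and then spends a single strong point query on the chosen centers:

```python
# Toy illustration of the two-oracle model: a weak oracle returns corrupted
# distances with probability q; a strong point query on a set S reveals all
# exact pairwise distances within S.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((500, 8))

def strong(i, j):                       # exact metric
    return np.linalg.norm(X[i] - X[j])

def weak(i, j, q=0.2):                  # arbitrarily corrupted w.p. q
    return rng.exponential(1.0) if rng.random() < q else strong(i, j)

def greedy_kcenter_weak(n, k):
    """Gonzalez's greedy 2-approximation, run on weak distances only."""
    centers = [0]
    dist = np.array([weak(0, j) for j in range(n)])
    dist[0] = 0.0
    for _ in range(k - 1):
        c = int(dist.argmax())          # farthest point under weak distances
        centers.append(c)
        dist = np.minimum(dist, [weak(c, j) for j in range(n)])
        dist[c] = 0.0
    return centers

S = greedy_kcenter_weak(len(X), k=10)
exact = {(i, j): strong(i, j) for i in S for j in S if i < j}  # one point query on S
print(S, len(exact))
```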
Given a conjunctive query $Q$ and a database $\mathbf{D}$, direct access to the answers of $Q$ over $\mathbf{D}$ is the operation of returning, given an index $j$, the $j^{\mathsf{th}}$ answer with respect to some fixed order on the answers. While this problem is $\#\mathsf{P}$-hard in general with respect to combined complexity, many conjunctive queries have an underlying structure that allows direct access to their answers, for some lexicographical ordering, in polylogarithmic time in the size of the database after a polynomial-time precomputation. Previous work has precisely characterised the tractable classes and given fine-grained lower bounds on the precomputation time needed, depending on the structure of the query. In this paper, we generalise these tractability results to the case of signed conjunctive queries, that is, conjunctive queries that may contain negated atoms. Our technique is based on a class of circuits that can represent relational data. We first show that this class supports tractable direct access after a polynomial-time preprocessing. We then give bounds on the size of the circuit needed to represent the answer set of a signed conjunctive query, depending on its structure. Combined, these two results allow us to prove the tractability of direct access for a large class of conjunctive queries. On the one hand, we recover the known tractable classes from the literature in the case of positive conjunctive queries. On the other hand, we generalise and unify known tractability results about negative conjunctive queries -- that is, queries having only negated atoms. In particular, we show that the class of $\beta$-acyclic negative conjunctive queries and the class of bounded nest-set width negative conjunctive queries admit tractable direct access.
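A toy instance makes the notion of direct access concrete (illustrative only; real instances use the circuit machinery above): for the cross-product query $Q(x,y) = R(x) \wedge S(y)$, sorting each relation once is the precomputation, after which the $j^{\mathsf{th}}$ answer in lexicographic order is recovered in constant time by divmod.

```python
# Direct access for Q(x, y) = R(x) and S(y): precompute sorted relations,
# then answer index queries without materializing the answer set.
R = sorted([3, 1, 7])        # unary relation R
S = sorted(["b", "a", "c"])  # unary relation S

def direct_access(j):
    """Return the j-th answer (0-indexed) of Q in the lex order on (x, y)."""
    qx, qy = divmod(j, len(S))
    return (R[qx], S[qy])

print([direct_access(j) for j in range(len(R) * len(S))])
```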
We revisit the moving least squares (MLS) approximation scheme on the sphere $\mathbb S^{d-1} \subset \mathbb R^d$, where $d>1$. It is well known that using the spherical harmonics up to degree $L \in \mathbb N$ as the ansatz space yields the approximation order $\mathcal O \left( h^{L+1} \right)$ for functions in $\mathcal C^{L+1}(\mathbb S^{d-1})$, where $h$ denotes the fill distance of the sampling nodes. In this paper we show that the dimension of the ansatz space can be almost halved by including only the spherical harmonics of even degree, or only those of odd degree, up to $L$, while preserving the same order of approximation. Numerical experiments indicate that using the reduced ansatz space is essential for the numerical stability of the MLS approximation scheme as $h \to 0$. Finally, we compare our approach with an MLS approximation scheme that uses polynomials on the tangent space as the ansatz space.
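To quantify the savings on $\mathbb S^2$ (i.e., $d=3$), recall that the spherical harmonics of degree $\ell$ span a $(2\ell+1)$-dimensional space, so the full ansatz space up to degree $L$ has dimension $(L+1)^2$. The quick computation below (illustrative) compares this against the even-only and odd-only counts:

```python
# Ansatz-space dimensions on S^2: full space up to degree L versus the
# parity-reduced spaces, each roughly half the size.
def dim_full(L):
    return sum(2 * ell + 1 for ell in range(L + 1))   # equals (L + 1)**2

def dim_parity(L, parity):                            # parity: 0 even, 1 odd
    return sum(2 * ell + 1 for ell in range(L + 1) if ell % 2 == parity)

for L in (2, 5, 10):
    print(L, dim_full(L), dim_parity(L, 0), dim_parity(L, 1))
```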
Posterior sampling, i.e., using the exponential mechanism to sample from the posterior distribution, provides $\varepsilon$-pure differential privacy (DP) guarantees and does not suffer from the potentially unbounded privacy breach introduced by $(\varepsilon,\delta)$-approximate DP. In practice, however, one needs to apply approximate sampling methods such as Markov chain Monte Carlo (MCMC), thus re-introducing the unappealing $\delta$-approximation error into the privacy guarantees. To bridge this gap, we propose the Approximate SAmple Perturbation (abbr. ASAP) algorithm, which perturbs an MCMC sample with noise proportional to its Wasserstein-infinity ($W_\infty$) distance from a reference distribution that satisfies pure DP or pure Gaussian DP (i.e., $\delta=0$). We then leverage a Metropolis-Hastings algorithm to generate the sample and prove that the algorithm converges in $W_\infty$ distance. We show that by combining our new techniques with a careful localization step, we obtain the first nearly linear-time algorithm that achieves the optimal rates in the DP-ERM problem with strongly convex and smooth losses.
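The perturbation step can be sketched schematically. The sketch below is an assumption-laden illustration, not the paper's exact calibration: it assumes a known bound `w_inf` on the $W_\infty$ distance to the reference distribution and a known $\ell_1$-sensitivity, and uses a Laplace mechanism whose scale is inflated by the $W_\infty$ shift.

```python
# Schematic ASAP-style release (hypothetical calibration): under the coupling
# characterization of W-infinity, the approximate sample can be shifted by at
# most w_inf from a reference sample, so the effective sensitivity grows by
# 2 * w_inf and the Laplace scale is inflated accordingly.
import numpy as np

rng = np.random.default_rng(0)

def asap_release(approx_sample, sensitivity, w_inf, eps):
    scale = (sensitivity + 2.0 * w_inf) / eps   # hypothetical calibration
    return approx_sample + rng.laplace(0.0, scale, size=np.shape(approx_sample))

theta_mcmc = np.array([0.31, -1.20])            # stand-in for an MCMC sample
print(asap_release(theta_mcmc, sensitivity=0.5, w_inf=0.01, eps=1.0))
```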
In 2016, a breakthrough result of Chechik and Wulff-Nilsen [SODA '16] established that every $n$-node graph $G$ has a $(1+\varepsilon)(2k-1)$-spanner of lightness $O_{\varepsilon}(n^{1/k})$, and recent follow-up work by Le and Solomon [STOC '23] generalized the proof strategy and improved the dependence on $\varepsilon$. We give a new proof of this result with the improved $\varepsilon$-dependence. Our proof is a direct analysis of the often-studied greedy spanner, and can be viewed as an extension of the folklore Moore bounds used to analyze spanner sparsity.
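For reference, the greedy spanner analyzed here is the classical algorithm below, in its textbook form (sketched with networkx as an assumed dependency; the running time is incidental to the analysis):

```python
# Greedy (2k-1)-spanner: scan edges by nondecreasing weight and keep an edge
# only if the current spanner distance between its endpoints exceeds
# (2k-1) times the edge's weight.
import networkx as nx

def greedy_spanner(G, k):
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    for u, v, w in sorted(G.edges(data="weight"), key=lambda e: e[2]):
        try:
            d = nx.dijkstra_path_length(H, u, v)
        except nx.NetworkXNoPath:
            d = float("inf")
        if d > (2 * k - 1) * w:
            H.add_edge(u, v, weight=w)
    return H

G = nx.gnm_random_graph(50, 300, seed=0)
for u, v in G.edges:
    G[u][v]["weight"] = 1.0 + (u * v) % 7       # arbitrary weights for the demo
H = greedy_spanner(G, k=2)
print(G.number_of_edges(), H.number_of_edges())
```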
A cubic hypermatrix of order $d$ can be considered as the structure matrix of a tensor with covariant order $r$ and contravariant order $s=d-r$. Corresponding to this matrix expression of the hypermatrix, we propose an eigenvector $x$ with respect to an eigenvalue $\lambda$, called a universal eigenvector and eigenvalue of the hypermatrix. According to the action of tensors, if $x$ is decomposable, it is called a universal hyper-eigenvector (UH-eigenvector). In particular, if all decomposed components are equal, $x$ is called a universal diagonal hyper-eigenvector (UDH-eigenvector); this notion covers most existing definitions of eigenvalues/eigenvectors of hypermatrices. Using the semi-tensor product (STP) of matrices, we investigate the properties of universal eigenvalues/eigenvectors and develop algorithms to compute them for hypermatrices, with particular attention to UDH-eigenvalues/eigenvectors. Some numerical examples are presented to illustrate that the proposed technique is universal and efficient.
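Conventions for tensor eigenpairs vary across the literature; the toy check below (illustrative only, and not the STP-based algorithms of the paper) verifies one diagonal-type eigen-equation, $A x^{2} = \lambda x$ for an order-3 cubic hypermatrix, by contracting with numpy:

```python
# Numeric check of A x^2 = lambda x for an order-3 cubic hypermatrix A:
# contract A with x in its last two slots and test proportionality to x.
import numpy as np

def apply_last_slots(A, x):
    return np.einsum("ijk,j,k->i", A, x, x)

A = np.zeros((2, 2, 2))
A[0, 0, 0] = A[1, 1, 1] = 1.0             # a diagonal hypermatrix
x = np.array([1.0, 0.0])
y = apply_last_slots(A, x)
lam = (y @ x) / (x @ x)                   # Rayleigh-style estimate of lambda
print(lam, np.allclose(y, lam * x))       # x is an eigenvector with lambda = 1
```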
Constructing a similarity graph from a set $X$ of data points in $\mathbb{R}^d$ is the first step of many modern clustering algorithms. However, typical constructions of a similarity graph have high time complexity and a quadratic space dependency with respect to $|X|$. We address this limitation and present a new algorithmic framework that constructs a sparse approximation of the fully connected similarity graph while preserving its cluster structure. Our algorithm is based on the kernel density estimation problem and is applicable to arbitrary kernel functions. We compare our algorithm with the well-known implementations from the scikit-learn and FAISS libraries, and find that it significantly outperforms both on a variety of datasets.
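To see the quadratic bottleneck concretely, compare a dense RBF similarity matrix with a standard sparse $k$-NN stand-in (a baseline contrast using scikit-learn, not the paper's KDE-based construction):

```python
# Dense similarity graph (quadratic in |X|) versus a sparse k-NN graph,
# a common stand-in that often preserves cluster structure.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.neighbors import kneighbors_graph

X = np.random.default_rng(0).random((1000, 10))
dense = rbf_kernel(X, gamma=1.0)                        # 10^6 entries
sparse = kneighbors_graph(X, n_neighbors=10, mode="distance")
print(dense.nbytes, sparse.nnz)                         # quadratic vs near-linear
```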
We study the classic Text-to-Pattern Hamming Distances problem: given a pattern $P$ of length $m$ and a text $T$ of length $n$, both over a polynomial-size alphabet, compute the Hamming distance between $P$ and $T[i\,.\,.\,i+m-1]$ for every shift $i$, under the standard Word-RAM model with $\Theta(\log n)$-bit words.
- We provide an $O(n\sqrt{m})$ time Las Vegas randomized algorithm for this problem, beating the decades-old $O(n \sqrt{m \log m})$ running time [Abrahamson, SICOMP 1987]. We also obtain a deterministic algorithm, with a slightly higher $O(n\sqrt{m}(\log m\log\log m)^{1/4})$ running time. Our randomized algorithm extends to the $k$-bounded setting, with running time $O\big(n+\frac{nk}{\sqrt{m}}\big)$, removing all the extra logarithmic factors from earlier algorithms [Gawrychowski and Uzna\'{n}ski, ICALP 2018; Chan, Golan, Kociumaka, Kopelowitz and Porat, STOC 2020].
- For the $(1+\epsilon)$-approximate version of Text-to-Pattern Hamming Distances, we give an $\tilde{O}(\epsilon^{-0.93}n)$ time Monte Carlo randomized algorithm, beating the previous $\tilde{O}(\epsilon^{-1}n)$ running time [Kopelowitz and Porat, FOCS 2015; Kopelowitz and Porat, SOSA 2018].
Our approximation algorithm exploits a connection with $3$SUM, and uses a combination of Fredman's trick, equality matrix product, and random sampling; in particular, we obtain new results on approximate counting versions of $3$SUM and Exact Triangle, which may be of independent interest. Our exact algorithms use a novel combination of hashing, bit-packed FFT, and recursion; in particular, we obtain a faster algorithm for computing the sumset of two integer sets, in the regime when the universe size is close to quadratic in the number of elements. We also prove a fine-grained equivalence between the exact Text-to-Pattern Hamming Distances problem and a range-restricted, counting version of $3$SUM.
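As background, the folklore FFT baseline already fixes notation for the problem (a minimal numpy sketch, illustrative only; it runs in $O(|\Sigma_P| \cdot n \log n)$ and implements none of the new bounds above):

```python
# For each symbol c appearing in P, correlate the indicator of c in T with
# the indicator of c in P; summing the match counts over symbols and
# subtracting from m gives all sliding-window Hamming distances.
import numpy as np

def text_to_pattern_hamming(T, P):
    n, m = len(T), len(P)
    matches = np.zeros(n - m + 1)
    for c in set(P):
        t = np.array([1.0 if x == c else 0.0 for x in T])
        p = np.array([1.0 if x == c else 0.0 for x in reversed(P)])
        corr = np.convolve(t, p)              # correlation via reversed pattern
        matches += corr[m - 1 : n]            # alignments fully inside T
    return m - np.rint(matches).astype(int)   # mismatches = m - matches

print(text_to_pattern_hamming("abracadabra", "abra"))
```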
A universal partial cycle (or upcycle) for $\mathcal{A}^n$ is a cyclic sequence that covers each word of length $n$ over the alphabet $\mathcal{A}$ exactly once -- like a De Bruijn cycle, except that we also allow a wildcard symbol $\mathord{\diamond}$ that can represent any letter of $\mathcal{A}$. Chen et al. in 2017 and Goeckner et al. in 2018 showed that the existence and structure of upcycles are highly constrained, unlike those of De Bruijn cycles, which exist for any alphabet size and word length. Moreover, it was not known whether any upcycles existed for $n \ge 5$. We present several examples of upcycles over both binary and non-binary alphabets for $n = 8$. We generalize two graph-theoretic representations of De Bruijn cycles to upcycles. We then introduce novel approaches to constructing new upcycles from old ones. Notably, given any upcycle for an alphabet of size $a$, we show how to construct an upcycle for an alphabet of size $ak$ for any $k \in \mathbb{N}$, so each example generates an infinite family of upcycles. We also define folds and lifts of upcycles, which relate upcycles with differing densities of $\mathord{\diamond}$ characters. In particular, we show that every upcycle lifts to a De Bruijn cycle. Our constructions rely on a different generalization of De Bruijn cycles known as perfect necklaces, and we introduce several new examples of perfect necklaces. We extend the definitions of certain pseudorandomness properties to partial words and determine which are satisfied by all upcycles, then draw a conclusion about linear feedback shift registers. Finally, we prove new nonexistence results based on the word length $n$, alphabet size, and $\mathord{\diamond}$ density.
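Since the definition is purely combinatorial, candidate upcycles can be verified directly (a brute-force checker, feasible only for small $n$; the sanity check below uses the fact that a De Bruijn cycle is simply an upcycle with no $\mathord{\diamond}$ symbols):

```python
# A cyclic word over the alphabet plus a wildcard is an upcycle for A^n iff
# every word of length n is covered exactly once, where a window containing
# wildcards covers every word obtained by substituting letters for each one.
from itertools import product

def is_upcycle(cycle, alphabet, n, wildcard="*"):
    counts = {w: 0 for w in product(alphabet, repeat=n)}
    L = len(cycle)
    for i in range(L):                        # all cyclic windows of length n
        window = [cycle[(i + j) % L] for j in range(n)]
        slots = [alphabet if ch == wildcard else ch for ch in window]
        for w in product(*slots):
            counts[w] += 1
    return all(v == 1 for v in counts.values())

# Sanity check: a De Bruijn cycle for {0,1}^3 is an upcycle with no wildcards.
print(is_upcycle("00010111", "01", 3))        # True
```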