A 2-packing set for an undirected graph $G=(V,E)$ is a subset $\mathcal{S} \subset V$ such that any two vertices $v_1,v_2 \in \mathcal{S}$ have no common neighbors. Finding a 2-packing set of maximum cardinality is an NP-hard problem. We develop a new approach to solve this problem on arbitrary graphs, exploiting its close relation to the independent set problem. To this end, our algorithm red2pack combines new data reduction rules specific to the 2-packing set problem with a graph transformation. Our experiments show that we outperform the state-of-the-art for arbitrary graphs with respect to solution quality and also compute solutions multiple orders of magnitude faster than previously possible. For example, we solve 63% of the graphs in the tested data set to optimality in less than a second, while the competitor for arbitrary graphs can only solve 5% of these graphs to optimality even with a 10-hour time limit. Moreover, our approach can solve a wide range of large instances that had previously been unsolved.
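A minimal sketch of the graph transformation behind this reduction (assuming the standard convention that a 2-packing set has pairwise distance greater than two): a 2-packing set of $G$ is exactly an independent set of the square graph $G^2$, after which any independent set solver can be applied. The brute-force solver below is for tiny illustrative instances only; it is not red2pack's data reductions or its exact solver.

```python
# Sketch of the square-graph transformation (not red2pack itself): a
# 2-packing set of G equals an independent set of G^2, the graph that
# connects every pair of vertices at distance <= 2 in G.
import itertools
import networkx as nx

def square_graph(G):
    """Return G^2: same vertex set, edges between vertices at distance <= 2."""
    G2 = nx.Graph()
    G2.add_nodes_from(G.nodes)
    for u in G.nodes:
        reach = set(G.neighbors(u))                 # distance-1 vertices
        for v in list(reach):
            reach |= set(G.neighbors(v))            # distance-2 vertices
        reach.discard(u)
        G2.add_edges_from((u, w) for w in reach)
    return G2

def max_2_packing_brute_force(G):
    """Exact maximum 2-packing via maximum independent set of G^2
    (exponential time; for tiny examples only)."""
    G2 = square_graph(G)
    for r in range(G2.number_of_nodes(), 0, -1):
        for cand in itertools.combinations(G2.nodes, r):
            if not any(G2.has_edge(a, b)
                       for a, b in itertools.combinations(cand, 2)):
                return set(cand)
    return set()

print(max_2_packing_brute_force(nx.path_graph(7)))   # {0, 3, 6}
```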
In this paper we consider the problem of estimating the $f$-moment ($\sum_{v\in [n]} (f(\mathbf{x}(v))-f(0))$) of a dynamic vector $\mathbf{x}\in \mathbb{G}^n$ over some abelian group $(\mathbb{G},+)$, where $\|f\|_\infty$ is bounded. We propose a simple sketch and a new estimation framework based on the \emph{Fourier transform} of $f$. That is, we decompose $f$ into a linear combination of homomorphisms $f_1,f_2,\ldots$ from $(\mathbb{G},+)$ to $(\mathbb{C},\times)$, estimate the $f_k$-moment for each $f_k$, and synthesize them to obtain an estimate of the $f$-moment. Our estimators are asymptotically unbiased and have variance asymptotic to $\|\mathbf{x}\|_0^2 (\|f\|_\infty^2 m^{-1} + \|\hat{f}\|_1^2 m^{-2})$, where the size of the sketch is $O(m\log n\log|\mathbb{G}|)$ bits. When $\mathbb{G}=\mathbb{Z}$ this problem can also be solved using off-the-shelf $\ell_0$-samplers with space $O(m\log^2 n)$ bits, an approach that does not obviously generalize to finite groups. As a concrete benchmark, we extend Ganguly, Garofalakis, and Rastogi's singleton-detector-based sampler to work over $\mathbb{G}$ using $O(m\log n\log|\mathbb{G}|\log(m\log n))$ bits. We give experimental evidence that the Fourier-based estimation framework is significantly more accurate than sampling-based approaches at the same memory footprint.
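To make the decomposition concrete, here is a small numerical sketch for the finite group $\mathbb{G}=\mathbb{Z}_q$. It only verifies the Fourier identity on an exactly stored vector; it is not the paper's $m$-bucket sketch, whose point is that each character, being a homomorphism, can be applied to compressed group sums.

```python
# Fourier decomposition of f over Z_q: f(x) = sum_k fhat[k] * chi_k(x) with
# characters chi_k(x) = exp(2*pi*i*k*x/q), each a homomorphism (Z_q,+)->(C,*).
# The f-moment then equals sum_k fhat[k] * (sum_v chi_k(x(v)) - n).
import numpy as np

q, n = 8, 5
rng = np.random.default_rng(0)
f = rng.integers(0, 4, size=q).astype(float)      # an arbitrary f: Z_q -> R
fhat = np.fft.fft(f) / q                          # f[x] = sum_k fhat[k] e^{2 pi i k x / q}

x = np.zeros(n, dtype=int)                        # the dynamic vector
for v, delta in [(0, 3), (1, 5), (0, 2), (4, 7)]: # turnstile updates x(v) += delta
    x[v] = (x[v] + delta) % q

ks = np.arange(q)
char_sums = np.exp(2j * np.pi * np.outer(ks, x) / q).sum(axis=1)  # sum_v chi_k(x(v))
moment_fourier = np.sum(fhat * (char_sums - n)).real
moment_direct = np.sum(f[x] - f[0])
print(moment_fourier, moment_direct)              # agree up to floating-point error
```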
We study a general factor analysis framework where the $n$-by-$p$ data matrix is assumed to follow a general exponential family distribution entry-wise. While this model framework has been proposed before, we further relax its distributional assumption by using a quasi-likelihood setup. By parameterizing the mean-variance relationship on data entries, we additionally introduce a dispersion parameter and entry-wise weights to model large variations and missing values. The resulting model is thus not only robust to distribution misspecification but also more flexible, being able to capture non-Gaussian covariance structures of the data matrix. Our main focus is on efficient computational approaches for performing the factor analysis. Previous modeling frameworks rely on simulated maximum likelihood (SML) to find the factorization solution, but this method was shown to lead to asymptotic bias when the simulated sample size grows slower than the square root of the sample size $n$, which limits its practical use for data matrices with large $n$. Borrowing from expectation-maximization (EM) and stochastic gradient descent (SGD), we investigate three estimation procedures based on iterative factorization updates. Our proposed procedures are free of this asymptotic bias and scale well to large matrix factorizations, with estimation error of order $O(1/p)$. To support our findings, we conduct simulation experiments and discuss applications in three case studies.
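As a purely illustrative sketch of the kind of iterative factorization update considered here (not the paper's three estimators), the following fits a weighted quasi-likelihood factorization of a binary data matrix with a logit link, where zero weights mark missing entries; the helper names, step sizes, and full-batch gradient (rather than stochastic minibatches) are illustrative choices.

```python
# Weighted quasi-likelihood matrix factorization sketch: binary entries,
# logit link, W[i, j] = 0 marks a missing entry. Each step ascends the
# quasi-score built from the weighted residuals W * (X - Mu).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def quasi_factorize(X, W, k, steps=500, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((p, k))
    for _ in range(steps):
        Mu = sigmoid(U @ V.T)            # entry-wise means
        R = W * (X - Mu)                 # weighted quasi-score residuals
        U += lr * (R @ V) / p            # full-batch gradient steps; an SGD
        V += lr * (R.T @ U) / n          # variant would use row/column batches
    return U, V

rng = np.random.default_rng(1)
U0, V0 = rng.standard_normal((100, 2)), rng.standard_normal((40, 2))
X = (rng.random((100, 40)) < sigmoid(U0 @ V0.T)).astype(float)
W = (rng.random(X.shape) > 0.1).astype(float)        # ~10% entries missing
U, V = quasi_factorize(X, W, k=2)
print(np.mean(W * np.abs(X - sigmoid(U @ V.T))))     # weighted fit error
```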
For a locally finite set $A \subseteq \mathbb{R}^d$, the $k$-th Brillouin zone of $a \in A$ is the region of points $x \in \mathbb{R}^d$ for which $\|x-a\|$ is the $k$-th smallest among the Euclidean distances between $x$ and the points in $A$. If $A$ is a lattice, the $k$-th Brillouin zones of the points in $A$ are translates of each other, and they tile space. Depending on the value of $k$, they express medium- or long-range order in the set. We study fundamental geometric and combinatorial properties of Brillouin zones, focusing on the integer lattice and its perturbations. Our results include the stability of a Brillouin zone under perturbations, a linear upper bound on the number of chambers in a zone for lattices in $\mathbb{R}^2$, and the convergence of the maximum volume of a chamber to zero for the integer lattice.
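For intuition, here is a small numerical sketch (illustrative only, with an ad hoc finite window) that computes the index $k$ of the Brillouin zone of the origin containing a query point, for the integer lattice $\mathbb{Z}^2$.

```python
# Which Brillouin zone of the origin contains x? For the integer lattice Z^2,
# x lies in the k-th zone of a = (0,0) iff ||x - a|| is the k-th smallest
# distance from x to lattice points (ties ignored; the finite window must be
# large enough to contain every lattice point closer to x than the origin is).
import numpy as np

def zone_index(x, window=6):
    xs = np.arange(-window, window + 1)
    lattice = np.array([(i, j) for i in xs for j in xs], dtype=float)
    x = np.asarray(x, dtype=float)
    d = np.linalg.norm(lattice - x, axis=1)
    return int(np.sum(d < np.linalg.norm(x))) + 1   # strictly closer points, plus one

print(zone_index((0.1, 0.2)))   # 1: the origin is the nearest lattice point
print(zone_index((0.6, 0.1)))   # 2: (1, 0) is closer, the origin comes second
```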
Consider a matroid in which every element is labeled with an integer. We are interested in finding a base whose label sum is congruent to $g \pmod m$. We show that this problem can be solved in $\tilde{O}(2^{4m} n r^{5/6})$ time for a matroid with $n$ elements and rank $r$, when $m$ is either the product of two primes or a prime power. The algorithm can be generalized to all moduli and, in fact, to all abelian groups, provided that a classic additive-combinatorics conjecture of Schrijver and Seymour holds true. We also discuss the optimization version of the problem.
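For concreteness, here is a brute-force (exponential-time) illustration of the problem on a graphic matroid, where the elements are the edges of a graph and the bases are its spanning trees; the helper names and the union-find forest check are just for this toy example and have nothing to do with the paper's algorithm.

```python
# Brute-force illustration of the congruency-constrained base problem on a
# graphic matroid: find a spanning tree whose edge-label sum is g (mod m).
# Edges are the matroid elements; independence = forest (union-find check).
from itertools import combinations

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def is_forest(n, edge_subset):
    parent = list(range(n))
    for u, v, _ in edge_subset:
        ru, rv = find(parent, u), find(parent, v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def congruent_base(n, edges, g, m):
    """Return a spanning tree (base) with label sum = g (mod m), or None."""
    r = n - 1                      # rank of the graphic matroid (graph assumed connected)
    for cand in combinations(edges, r):
        if is_forest(n, cand) and sum(lab for _, _, lab in cand) % m == g:
            return cand
    return None

# labeled edges (u, v, label) of the complete graph K4
edges = [(0, 1, 1), (0, 2, 2), (0, 3, 3), (1, 2, 4), (1, 3, 5), (2, 3, 6)]
print(congruent_base(4, edges, g=2, m=5))   # a spanning tree with label sum = 2 (mod 5)
```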
We show that any bounded integral function $f : A \times B \to \{0,1, \dots, \Delta\}$ with rank $r$ has deterministic communication complexity $\Delta^{O(\Delta)} \cdot \sqrt{r} \cdot \log r$, where the rank of $f$ is defined to be the rank of the $A \times B$ matrix whose entries are the function values. As a corollary, we show that any $n$-dimensional polytope that admits a slack matrix with entries from $\{0,1,\dots,\Delta\}$ has extension complexity at most $\exp(\Delta^{O(\Delta)} \cdot \sqrt{n} \cdot \log n)$.
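As a tiny illustration of the rank notion used here, one can tabulate $f$ as an $|A| \times |B|$ matrix and take its linear-algebraic rank; the example below (an arbitrary choice: $f(a,b)$ counts the common 1-bits of two 3-bit strings, so $\Delta = 3$) has rank 3.

```python
# "Rank of f": build the |A| x |B| matrix of function values and take its
# linear-algebraic rank. Here f(a, b) = popcount(a & b) on 3-bit inputs,
# which decomposes into 3 rank-one bit matrices and hence has rank 3.
import numpy as np

A = B = range(8)                       # all 3-bit inputs
M = np.array([[bin(a & b).count("1") for b in B] for a in A])
print(np.linalg.matrix_rank(M))        # 3
```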
We investigate a Bayesian $k$-armed bandit problem in the \emph{many-armed} regime, where $k \geq \sqrt{T}$ and $T$ represents the time horizon. First, in line with recent literature on many-armed bandit problems, we observe that subsampling plays a key role in designing optimal algorithms: the conventional UCB algorithm is sub-optimal, whereas a subsampled UCB (SS-UCB), which selects $\Theta(\sqrt{T})$ arms for execution under the UCB framework, achieves rate-optimality. However, despite SS-UCB's theoretical promise of optimal regret, it empirically underperforms a greedy algorithm that consistently chooses the empirically best arm. This observation extends to contextual settings in simulations with real-world data. Our findings suggest a new form of \emph{free exploration} that benefits greedy algorithms in the many-armed context and is fundamentally linked to a tail event concerning the prior distribution of arm rewards. This notion is distinct from the free exploration recently discussed in the contextual bandit literature, which stems from covariate variation. Expanding upon these insights, we establish that the subsampled greedy approach not only achieves rate-optimality for Bernoulli bandits within the many-armed regime but also attains sublinear regret for broader distributions. Collectively, our research indicates that in the many-armed regime, practitioners may find greater value in adopting greedy algorithms.
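A minimal sketch of the subsampled greedy idea discussed here; the constants, the uniform prior, and the single forced pull per subsampled arm are illustrative choices, not the paper's exact specification.

```python
# Subsampled greedy (SS-Greedy) sketch for Bernoulli bandits: keep only
# ~sqrt(T) randomly chosen arms, pull each once, then always pull the arm
# with the highest empirical mean among the subsampled arms.
import numpy as np

def ss_greedy(mu, T, rng):
    k = len(mu)
    m = min(k, int(np.ceil(np.sqrt(T))))                  # subsample ~sqrt(T) arms
    arms = rng.choice(k, size=m, replace=False)
    pulls, wins = np.zeros(m), np.zeros(m)
    best, regret = mu.max(), 0.0
    for t in range(T):
        i = t if t < m else int(np.argmax(wins / pulls))  # greedy after one pull each
        a = arms[i]
        wins[i] += rng.random() < mu[a]                   # Bernoulli reward
        pulls[i] += 1
        regret += best - mu[a]
    return regret

rng = np.random.default_rng(0)
T = 10_000
mu = rng.uniform(0, 1, size=4 * int(np.sqrt(T)))   # many-armed: k >= sqrt(T)
print(ss_greedy(mu, T, rng))                        # cumulative regret
```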
We introduce $r$-loopy Weisfeiler-Leman ($r$-$\ell$WL), a novel hierarchy of graph isomorphism tests, and a corresponding GNN framework, $r$-$\ell$MPNN, that can count cycles up to length $r + 2$. Most notably, we show that $r$-$\ell$WL can count homomorphisms of cactus graphs. This strictly extends classical 1-WL, which can only count homomorphisms of trees; moreover, this counting power is incomparable to that of $k$-WL for any fixed $k$. We empirically validate the expressive and counting power of the proposed $r$-$\ell$MPNN on several synthetic datasets and demonstrate state-of-the-art predictive performance on various real-world datasets. The code is available at https://github.com/RPaolino/loopy.
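For readers reproducing cycle-counting experiments of this kind, ground-truth cycle homomorphism counts can be generated with the standard identity $\mathrm{hom}(C_k, G) = \mathrm{tr}(A^k)$; the snippet below only computes such targets and is not the $r$-$\ell$MPNN model itself.

```python
# Ground-truth homomorphism counts of cycles, usable as regression targets
# when testing cycle-counting power: hom(C_k, G) = trace(A^k), where A is
# the adjacency matrix of G (closed walks of length k).
import numpy as np
import networkx as nx

def cycle_hom_counts(G, max_len):
    A = nx.to_numpy_array(G)
    Ak = np.eye(len(A))
    counts = {}
    for k in range(1, max_len + 1):
        Ak = Ak @ A
        counts[k] = int(round(np.trace(Ak)))   # hom(C_k, G)
    return counts

print(cycle_hom_counts(nx.cycle_graph(6), 6))
```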
Given an intractable distribution $p$, the problem of variational inference (VI) is to compute the best approximation $q$ from some more tractable family $\mathcal{Q}$. Most commonly the approximation is found by minimizing a Kullback-Leibler (KL) divergence. However, there exist other valid choices of divergences, and when $\mathcal{Q}$ does not contain~$p$, each divergence champions a different solution. We analyze how the choice of divergence affects the outcome of VI when a Gaussian with a dense covariance matrix is approximated by a Gaussian with a diagonal covariance matrix. In this setting we show that different divergences can be \textit{ordered} by the amount that their variational approximations misestimate various measures of uncertainty, such as the variance, precision, and entropy. We also derive an impossibility theorem showing that no two of these measures can be simultaneously matched by a factorized approximation; hence, the choice of divergence informs which measure, if any, is correctly estimated. Our analysis covers the KL divergence, the R\'enyi divergences, and a score-based divergence that compares $\nabla\log p$ and $\nabla\log q$. We empirically evaluate whether these orderings hold when VI is used to approximate non-Gaussian distributions.
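In the Gaussian setting analyzed here, the two most familiar divergences already have closed forms that illustrate the ordering: minimizing the reverse KL divergence $\mathrm{KL}(q\|p)$ over diagonal Gaussians sets each variance to $1/(\Sigma^{-1})_{ii}$ (matching the diagonal of the precision matrix and hence underestimating the marginal variances), while minimizing the forward KL divergence $\mathrm{KL}(p\|q)$ sets each variance to $\Sigma_{ii}$ (matching the marginal variances). A short numerical check:

```python
# Closed-form diagonal-Gaussian approximations to a dense Gaussian p = N(0, Sigma):
# reverse KL (KL(q || p)) gives variances 1 / diag(Sigma^{-1}) (underestimates),
# forward KL (KL(p || q)) gives variances diag(Sigma) (the true marginals).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 0.5 * np.eye(4)                      # dense covariance of p

var_reverse_kl = 1.0 / np.diag(np.linalg.inv(Sigma))   # argmin_q KL(q || p)
var_forward_kl = np.diag(Sigma)                        # argmin_q KL(p || q)

print(var_reverse_kl)    # elementwise <= the true marginal variances
print(var_forward_kl)    # the true marginal variances
```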
A $\textit{resolving set}$ $R$ in a graph $G$ is a set of vertices such that every vertex of $G$ is uniquely identified by its distances to the vertices of $R$. Introduced in the 1970s, this concept has since been extensively studied from both the combinatorial and the algorithmic points of view. We propose a generalization of the concept of resolving sets to temporal graphs, i.e., graphs whose edge sets change over discrete time-steps. In this setting, the $\textit{temporal distance}$ from $u$ to $v$ is the earliest possible time-step at which a journey leaving $u$ (a path whose edges have strictly increasing time-steps) reaches $v$, i.e., the first time-step at which $v$ could receive a message broadcast from $u$. A $\textit{temporal resolving set}$ of a temporal graph $\mathcal{G}$ is a subset $R$ of its vertices such that every vertex of $\mathcal{G}$ is uniquely identified by its temporal distances from the vertices of $R$. We study the problem of finding a minimum-size temporal resolving set and show that it is NP-complete even on very restricted graph classes and with strong constraints on the time-steps: temporal complete graphs where every edge appears in either time-step 1 or 2, temporal trees where every edge appears in at most two consecutive time-steps, and even temporal subdivided stars where every edge appears in at most two (not necessarily consecutive) time-steps. On the other hand, we give polynomial-time algorithms for temporal paths and temporal stars where every edge appears in exactly one time-step, and we give a combinatorial analysis and algorithms for several temporal graph classes where the edges appear in periodic time-steps.
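A short sketch of these definitions (the helper names and the convention that a vertex is at temporal distance 0 from itself are our illustrative choices): earliest-arrival temporal distances under strictly increasing time-steps, and a check of whether a vertex subset is a temporal resolving set.

```python
# Temporal distances (earliest arrival with strictly increasing edge
# time-steps) and a temporal-resolving-set check, for an undirected
# temporal graph given as a list of (u, v, t) edge appearances.

def temporal_distances(n, temporal_edges, source):
    """Earliest arrival time from source to every vertex (None if unreachable)."""
    INF = float("inf")
    arrival = [INF] * n
    arrival[source] = 0
    for u, v, t in sorted(temporal_edges, key=lambda e: e[2]):
        # an edge at time t can extend any journey that arrived strictly before t
        if arrival[u] < t and t < arrival[v]:
            arrival[v] = t
        if arrival[v] < t and t < arrival[u]:
            arrival[u] = t
    return [a if a < INF else None for a in arrival]

def is_temporal_resolving_set(n, temporal_edges, R):
    """R resolves the graph iff the temporal distance vectors are pairwise distinct."""
    dists = [temporal_distances(n, temporal_edges, r) for r in R]
    vectors = {tuple(d[v] for d in dists) for v in range(n)}
    return len(vectors) == n

# toy temporal path 0-1-2-3 whose edges appear at time-steps 1, 2, 3
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3)]
print(is_temporal_resolving_set(4, edges, R=[0]))   # True: distances 0, 1, 2, 3
```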
We study the sample complexity of learning an $\varepsilon$-optimal policy in an average-reward Markov decision process (MDP) under a generative model. We establish the complexity bound $\widetilde{O}\left(SA\frac{H}{\varepsilon^2} \right)$, where $H$ is the span of the bias function of the optimal policy and $SA$ is the cardinality of the state-action space. Our result is the first that is minimax optimal (up to log factors) in all parameters $S,A,H$ and $\varepsilon$, improving on existing work that either assumes uniformly bounded mixing times for all policies or has suboptimal dependence on the parameters. Our result is based on reducing the average-reward MDP to a discounted MDP. To establish the optimality of this reduction, we develop improved bounds for $\gamma$-discounted MDPs, showing that $\widetilde{O}\left(SA\frac{H}{(1-\gamma)^2\varepsilon^2} \right)$ samples suffice to learn an $\varepsilon$-optimal policy in weakly communicating MDPs under the regime that $\gamma \geq 1 - \frac{1}{H}$, circumventing the well-known lower bound of $\widetilde{\Omega}\left(SA\frac{1}{(1-\gamma)^3\varepsilon^2} \right)$ for general $\gamma$-discounted MDPs. Our analysis develops upper bounds on certain instance-dependent variance parameters in terms of the span parameter. These bounds are tighter than those based on the mixing time or diameter of the MDP and may be of broader use.
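To illustrate the reduction in its simplest form (the sample sizes, the value-iteration solver, and the toy MDP are placeholders, not the paper's procedure or its sample-complexity constants): draw generative samples for each state-action pair, build the empirical model, and solve it as a discounted MDP with $\gamma = 1 - 1/H$.

```python
# Sketch of the average-to-discounted reduction: N generative samples per
# (s, a) give an empirical model, which is solved by value iteration with
# discount gamma = 1 - 1/H; the greedy policy is returned.
import numpy as np

def reduction_solve(P, R, H, N, iters=2000, seed=0):
    """P: true transitions (S, A, S); R: rewards (S, A); returns a policy (S,)."""
    rng = np.random.default_rng(seed)
    S, A, _ = P.shape
    P_hat = np.zeros_like(P)                  # empirical transition model
    for s in range(S):
        for a in range(A):
            samples = rng.choice(S, size=N, p=P[s, a])
            P_hat[s, a] = np.bincount(samples, minlength=S) / N
    gamma = 1.0 - 1.0 / H                     # discount tied to the span parameter H
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * P_hat @ V             # (S, A) state-action values
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# tiny 2-state, 2-action example with placeholder dynamics
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 0.5]])
print(reduction_solve(P, R, H=5, N=200))
```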