The Weisfeiler-Leman procedure is a widely used technique for graph isomorphism testing that works by iteratively computing an isomorphism-invariant coloring of vertex tuples. Meanwhile, a fundamental tool in structural graph theory, which is often exploited in approaches to tackle the graph isomorphism problem, is the decomposition into 2- and 3-connected components. We prove that the 2-dimensional Weisfeiler-Leman algorithm implicitly computes the decomposition of a graph into its 3-connected components. This implies that the dimension of the algorithm needed to distinguish two given non-isomorphic graphs is at most the dimension required to distinguish non-isomorphic 3-connected components of the graphs (assuming dimension at least 2). To obtain our decomposition result, we show that, for $k \geq 2$, the $k$-dimensional algorithm distinguishes $k$-separators, i.e., $k$-tuples of vertices that separate the graph, from other vertex $k$-tuples. As a byproduct, we also obtain insights about the connectivity of constituent graphs of association schemes. As an application of these results, we show a new upper bound of $k$ on the Weisfeiler-Leman dimension of the class of graphs of treewidth at most $k$. Using a construction by Cai, F\"urer, and Immerman, we also provide a new lower bound that is asymptotically tight up to a factor of 2.
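To make the refinement step concrete, the following minimal Python sketch runs the 2-dimensional Weisfeiler-Leman refinement on ordered vertex pairs until the colouring stabilises. It is an illustration of the classical procedure only; the function name `two_dim_wl` and the stabilisation check are our own, not the authors' implementation.

```python
from itertools import product

def two_dim_wl(vertices, edges):
    """Illustrative 2-dimensional Weisfeiler-Leman refinement on ordered vertex pairs.

    `vertices` is a list of hashable vertex names and `edges` a collection of
    2-element pairs; this is a sketch, not the authors' code.
    """
    adjacent = {frozenset(e) for e in edges}

    # Initial colouring: atomic type of each ordered pair.
    colour = {}
    for u, v in product(vertices, repeat=2):
        if u == v:
            colour[(u, v)] = 0      # diagonal pair
        elif frozenset((u, v)) in adjacent:
            colour[(u, v)] = 1      # edge
        else:
            colour[(u, v)] = 2      # non-edge

    while True:
        refined = {}
        for u, v in product(vertices, repeat=2):
            # Multiset of colour pairs obtained by routing through every vertex w.
            signature = tuple(sorted((colour[(u, w)], colour[(w, v)]) for w in vertices))
            refined[(u, v)] = (colour[(u, v)], signature)
        # Rename colours canonically so the next round again works with small integers.
        names = {c: i for i, c in enumerate(sorted(set(refined.values())))}
        new_colour = {p: names[c] for p, c in refined.items()}
        if len(set(new_colour.values())) == len(set(colour.values())):
            return new_colour       # the colouring is stable
        colour = new_colour
```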
Given a graph $G$ of degree $k$ over $n$ vertices, we consider the problem of computing a near-maximum cut or a near-minimum bisection in polynomial time. For graphs of girth $L$, we develop a local message passing algorithm whose complexity is $O(nkL)$ and that achieves near-optimal cut values among all $L$-local algorithms. Focusing on max-cut, the algorithm constructs a cut of value $nk/4+ n\mathsf{P}_\star\sqrt{k/4}+\mathsf{err}(n,k,L)$, where $\mathsf{P}_\star\approx 0.763166$ is the value of the Parisi formula from spin glass theory, and $\mathsf{err}(n,k,L)=o_n(n)+no_k(\sqrt{k})+n \sqrt{k}\, o_L(1)$ (subscripts indicate the asymptotic variables). Our result generalizes to locally treelike graphs, i.e., graphs whose girth becomes $L$ after removing a small fraction of vertices. Earlier work established that, for random $k$-regular graphs, the typical max-cut value is $nk/4+ n\mathsf{P}_\star\sqrt{k/4}+o_n(n)+no_k(\sqrt{k})$. Therefore our algorithm is nearly optimal on such graphs. An immediate corollary is that random regular graphs have nearly minimum max-cut and nearly maximum min-bisection among all regular locally treelike graphs. This can be viewed as a combinatorial version of the near-Ramanujan property of random regular graphs.
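For reference, the leading-order target value is easy to evaluate directly. The small sketch below (with illustrative helper names of our own choosing) simply instantiates the formula quoted above and counts cut edges; it is not the message-passing algorithm itself.

```python
import math

P_STAR = 0.763166  # Parisi constant P* quoted in the abstract

def cut_value(edges, sigma):
    """Number of edges cut by the +/-1 assignment `sigma` (a dict vertex -> +/-1)."""
    return sum(1 for u, v in edges if sigma[u] != sigma[v])

def predicted_max_cut(n, k):
    """Leading-order prediction n*k/4 + n*P*_star*sqrt(k/4), ignoring the err(n, k, L) term."""
    return n * k / 4 + n * P_STAR * math.sqrt(k / 4)
```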
For a graph class $\mathcal{C}$, the $\mathcal{C}$-Edge-Deletion problem asks, for a given graph $G$, to delete the minimum number of edges from $G$ in order to obtain a graph in $\mathcal{C}$. We study the $\mathcal{C}$-Edge-Deletion problem for $\mathcal{C}$ the class of permutation graphs, interval graphs, and other related graph classes. It follows from Courcelle's Theorem that these problems are fixed-parameter tractable when parameterized by treewidth. In this paper, we present concrete FPT algorithms for these problems. By giving explicit algorithms and analyzing them in detail, we obtain algorithms that are significantly faster than those obtained by using Courcelle's theorem.
We consider the energy complexity of the leader election problem in the single-hop radio network model, where each device has a unique identifier in $\{1, 2, \ldots, N\}$. Energy is a scarce resource for small battery-powered devices. For such devices, most of the energy is often spent on communication, not on computation. To approximate the actual energy cost, the energy complexity of an algorithm is defined as the maximum over all devices of the number of time slots where the device transmits or listens. Much progress has been made in understanding the energy complexity of leader election in radio networks, but very little is known about the trade-off between time and energy. $\textbf{Time-energy trade-off:}$ For any $k \geq \log \log N$, we show that a leader among at most $n$ devices can be elected deterministically in $O(k \cdot n^{1+\epsilon}) + O(k \cdot N^{1/k})$ time and $O(k)$ energy if each device can simultaneously transmit and listen, where $\epsilon > 0$ is any small constant. This improves upon the previous $O(N)$-time $O(\log \log N)$-energy algorithm by Chang et al. [STOC 2017]. We provide lower bounds to show that the time-energy trade-off of our algorithm is near-optimal. $\textbf{Dense instances:}$ For the dense instances where the number of devices is $n = \Theta(N)$, we design a deterministic leader election algorithm using only $O(1)$ energy. This improves upon the $O(\log^* N)$-energy algorithm by Jurdzi\'{n}ski et al. [PODC 2002] and the $O(\alpha(N))$-energy algorithm by Chang et al. [STOC 2017]. More specifically, we show that the optimal deterministic energy complexity of leader election is $\Theta\left(\max\left\{1, \log \frac{N}{n}\right\}\right)$ if the devices cannot simultaneously transmit and listen, and it is $\Theta\left(\max\left\{1, \log \log \frac{N}{n}\right\}\right)$ if they can.
In this paper we use the theory of computing to study fractal dimensions of projections in Euclidean spaces. A fundamental result in fractal geometry is Marstrand's projection theorem, which shows that for every analytic set $E$, for almost every line $L$, the Hausdorff dimension of the orthogonal projection of $E$ onto $L$ is maximal. We use Kolmogorov complexity to give two new results on the Hausdorff and packing dimensions of orthogonal projections onto lines. The first shows that the conclusion of Marstrand's theorem holds whenever the Hausdorff and packing dimensions agree on the set $E$, even if $E$ is not analytic. Our second result gives a lower bound on the packing dimension of projections of arbitrary sets. Finally, we give a new proof of Marstrand's theorem using the theory of computing.
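For concreteness, "maximal" in the planar case of Marstrand's theorem means
$$\dim_H\big(\mathrm{proj}_L E\big) \;=\; \min\{\dim_H E,\ 1\} \quad \text{for almost every line } L \text{ through the origin},$$
since a projection onto a line can never have dimension exceeding either $\dim_H E$ or $1$.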
The vector-balancing problem is a fundamental problem in discrepancy theory: given $T$ vectors in $[-1,1]^n$, find a signing $\sigma(a) \in \{\pm 1\}$ of each vector $a$ to minimize the discrepancy $\| \sum_{a} \sigma(a) \cdot a \|_{\infty}$. This problem has been extensively studied in the static/offline setting. In this paper we initiate its study in the fully-dynamic setting with recourse: the algorithm sees a stream of $T$ insertions and deletions of vectors, and at each time must maintain a low-discrepancy signing while also minimizing the amortized recourse (the number of times any vector changes its sign) per update. For general vectors, we give algorithms that almost match Spencer's $O(\sqrt{n})$ offline discrepancy bound, with $O(n \cdot \mathrm{poly}\log T)$ amortized recourse per update. The crucial idea is to compute a basic feasible solution to the linear relaxation in a distributed and recursive manner, which helps find a low-discrepancy signing. To bound the recourse we argue that only a small part of the instance needs to be recomputed at each update. Since vector balancing has also been greatly studied for sparse vectors, we then give algorithms for low-discrepancy edge orientation, where we dynamically maintain signings for 2-sparse vectors. Equivalently, this can be seen as orienting a dynamic set of edges of an $n$-vertex graph to minimize the absolute difference between in- and out-degrees at any vertex. We present a deterministic algorithm with $O(\mathrm{poly}\log n)$ discrepancy and $O(\mathrm{poly}\log n)$ amortized recourse. The core ideas are to dynamically maintain an expander decomposition with low recourse and then to show that, as the expanders change over time, a natural local-search algorithm converges quickly (i.e., with low recourse) to a low-discrepancy solution. We also give strong lower bounds for local-search discrepancy minimization algorithms.
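The quantities being maintained are simple to state in code. The illustrative helpers below (names are ours, not the paper's) compute the $\ell_\infty$ discrepancy of a signing and its edge-orientation specialisation for 2-sparse vectors $e_u - e_v$.

```python
def signed_discrepancy(vectors, signs):
    """l-infinity discrepancy || sum_a sigma(a) * a ||_inf of a +/-1 signing."""
    n = len(vectors[0])
    running = [0.0] * n
    for a, s in zip(vectors, signs):
        for i, a_i in enumerate(a):
            running[i] += s * a_i
    return max(abs(x) for x in running)

def orientation_discrepancy(edges, signs, n):
    """Edge-orientation view: edge (u, v) with sign +1 is oriented u -> v.

    Returns the maximum |in-degree - out-degree| over the n vertices, i.e. the
    discrepancy of the corresponding 2-sparse vectors e_u - e_v.
    """
    imbalance = [0] * n
    for (u, v), s in zip(edges, signs):
        imbalance[u] += s
        imbalance[v] -= s
    return max(abs(x) for x in imbalance)
```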
Given a graph $G = (V,E)$, a threshold function $t : V \rightarrow \mathbb{N}$, and an integer $k$, we study the Harmless Set problem, where the goal is to find a subset of vertices $S \subseteq V$ of size at least $k$ such that every vertex $v \in V$ has fewer than $t(v)$ neighbors in $S$. We enhance our understanding of the problem from the viewpoint of parameterized complexity, focusing on parameters that measure structural properties of the input instance. We show that the problem is W[1]-hard parameterized by a wide range of fairly restrictive structural parameters such as the feedback vertex set number, pathwidth, treedepth, and even the size of a minimum vertex deletion set into graphs of pathwidth and treedepth at most three. We also show that the Harmless Set problem with majority thresholds is W[1]-hard when parameterized by the treewidth of the input graph. We prove that the Harmless Set problem can be solved in polynomial time on graphs of bounded cliquewidth. On the positive side, we obtain fixed-parameter algorithms for the problem with respect to neighbourhood diversity and twin cover. We show that the problem parameterized by the solution size is fixed-parameter tractable on planar graphs. We thereby resolve two open questions posed by C. Bazgan and M. Chopin (2014) concerning the complexity of Harmless Set parameterized by the treewidth of the input graph and on planar graphs with respect to the solution size.
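The defining condition is straightforward to verify; the small checker below is our own illustration of the definition (not an algorithm from the paper) and tests whether a candidate set $S$ is harmless.

```python
def is_harmless(adj, t, S):
    """Return True iff every vertex v has fewer than t[v] neighbours inside S.

    `adj` maps each vertex to its set of neighbours; `t` maps vertices to thresholds.
    """
    S = set(S)
    return all(len(adj[v] & S) < t[v] for v in adj)
```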
We prove constructively that the maximum possible number of minimal connected dominating sets in a connected undirected graph of order $n$ is in $\Omega(1.489^n)$. This improves the previously known lower bound of $\Omega(1.4422^n)$ and reduces the gap between lower and upper bounds for input-sensitive enumeration of minimal connected dominating sets in general graphs as well as some special graph classes.
We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
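As an illustration of Shapley-value scoring for a subgraph, here is a generic permutation-sampling Monte Carlo estimator. The callable `model(graph, kept_nodes)` is a hypothetical interface (the prediction score with all other nodes masked out), and the sketch is not the SubgraphX approximation scheme.

```python
import random

def mc_shapley(model, graph, candidate, other_players, num_samples=100):
    """Monte Carlo (random-permutation) estimate of the Shapley value of `candidate`.

    Treats the candidate subgraph as a single player and `other_players` as the
    remaining coalition units (e.g. the other nodes of the graph).
    """
    others = list(other_players)
    total = 0.0
    for _ in range(num_samples):
        random.shuffle(others)
        cut = random.randint(0, len(others))   # candidate's position in a random order
        coalition = others[:cut]               # players preceding the candidate
        total += model(graph, coalition + list(candidate)) - model(graph, coalition)
    return total / num_samples
```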
The pairwise interaction paradigm of graph machine learning has predominantly governed the modelling of relational systems. However, graphs alone cannot capture the multi-level interactions present in many complex systems, and the expressive power of such schemes has been proven to be limited. To overcome these limitations, we propose Message Passing Simplicial Networks (MPSNs), a class of models that perform message passing on simplicial complexes (SCs) - topological objects generalising graphs to higher dimensions. To theoretically analyse the expressivity of our model we introduce a Simplicial Weisfeiler-Lehman (SWL) colouring procedure for distinguishing non-isomorphic SCs. We relate the power of SWL to the problem of distinguishing non-isomorphic graphs and show that SWL and MPSNs are strictly more powerful than the WL test and no less powerful than the 3-WL test. We deepen the analysis by comparing our model with traditional graph neural networks with ReLU activations in terms of the number of linear regions of the functions they can represent. We empirically support our theoretical claims by showing that MPSNs can distinguish challenging strongly regular graphs on which GNNs fail and that, when equipped with orientation-equivariant layers, they improve classification accuracy on oriented SCs compared to a GNN baseline. Additionally, we implement a library for message passing on simplicial complexes, which we envision releasing in due course.
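To convey the flavour of the colouring procedure, the sketch below performs one simplified SWL-style refinement round using only boundary faces and co-faces. The exact SWL update in the paper (and the adjacency structures it aggregates over) may differ, so this should be read purely as an illustration.

```python
from itertools import combinations

def swl_round(simplices, colour):
    """One simplified SWL-style refinement round on a simplicial complex.

    `simplices` is a set of frozensets closed under taking faces; `colour` maps
    every simplex to an integer colour.  Each simplex is refined by the multisets
    of colours of its boundary faces and of its co-faces one dimension up.
    """
    refined = {}
    for s in simplices:
        boundary = [frozenset(f) for f in combinations(s, len(s) - 1)] if len(s) > 1 else []
        cofaces = [t for t in simplices if len(t) == len(s) + 1 and s < t]
        refined[s] = (colour[s],
                      tuple(sorted(colour[f] for f in boundary)),
                      tuple(sorted(colour[t] for t in cofaces)))
    # Rename canonically so the output colours are small integers again.
    names = {c: i for i, c in enumerate(sorted(set(refined.values())))}
    return {s: names[refined[s]] for s in simplices}
```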
In this paper, we study from a theoretical perspective how powerful graph neural networks (GNNs) can be for learning approximation algorithms for combinatorial problems. To this end, we first establish a new class of GNNs that can solve a strictly wider variety of problems than existing GNNs. Then, we bridge the gap between GNN theory and the theory of distributed local algorithms to show that the most powerful GNN can learn approximation algorithms for the minimum dominating set problem and the minimum vertex cover problem with certain approximation ratios, and that no GNN can achieve better ratios. This paper is the first to elucidate the approximation ratios of GNNs for combinatorial problems. Furthermore, we prove that adding a coloring or weak-coloring to each node feature improves these approximation ratios. This indicates that preprocessing and feature engineering theoretically strengthen model capabilities.
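As a concrete instance of such preprocessing, one could attach a greedy vertex colouring to each node's feature vector. The sketch below is our own illustration of the idea and not necessarily the coloring scheme analysed in the paper.

```python
def greedy_colouring(adj):
    """Greedy vertex colouring; the colour index can be appended (e.g. one-hot) to each node feature.

    `adj` maps each vertex to its set of neighbours; the fixed iteration order
    stands in for an arbitrary vertex ordering.
    """
    colour = {}
    for v in adj:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour
```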