A vertex set $D$ in a finite undirected graph $G$ is an {\em efficient dominating set} (\emph{e.d.s.}\ for short) of $G$ if every vertex of $G$ is dominated by exactly one vertex of $D$. The \emph{Efficient Domination} (ED) problem, which asks for the existence of an e.d.s.\ in $G$, is known to be \NP-complete for chordal bipartite graphs and for $P_7$-free graphs, and even for very restricted $H$-free bipartite graph classes such as $K_{1,4}$-free bipartite graphs and $C_4$-free bipartite graphs, while it is solvable in polynomial time for $P_8$-free bipartite graphs, for $S_{1,3,3}$-free bipartite graphs, and for $S_{1,1,5}$-free bipartite graphs. Here we show that ED can be solved in polynomial time for $(S_{1,2,5},S_{3,3,3})$-free chordal bipartite graphs.
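As a self-contained illustration of the e.d.s.\ definition, the following brute-force sketch (in Python, using networkx; the function names are our own, and this is not the polynomial-time algorithm claimed above) checks the property and searches for an e.d.s.\ on small graphs.
\begin{verbatim}
from itertools import combinations
import networkx as nx

def is_efficient_dominating_set(G, D):
    # D is an e.d.s. iff every vertex of G lies in the closed
    # neighbourhood of exactly one vertex of D.
    D = set(D)
    return all(len(({u} | set(G.neighbors(u))) & D) == 1 for u in G.nodes)

def find_eds_brute_force(G):
    # Exhaustive search over all vertex subsets (small graphs only).
    nodes = list(G.nodes)
    for size in range(1, len(nodes) + 1):
        for D in combinations(nodes, size):
            if is_efficient_dominating_set(G, D):
                return set(D)
    return None

P4 = nx.path_graph(4)             # 0 - 1 - 2 - 3
print(find_eds_brute_force(P4))   # {0, 3}
\end{verbatim}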
We consider a graph-structured change point problem in which we observe a random vector with piecewise constant but unknown mean and whose independent, sub-Gaussian coordinates correspond to the $n$ nodes of a fixed graph. We are interested in the localisation task of recovering the partition of the nodes associated with the constancy regions of the mean vector. When the partition $\mathcal{S}$ consists of only two elements, we characterise the difficulty of the localisation problem in terms of four key parameters: the maximal noise variance $\sigma^2$, the size $\Delta$ of the smaller element of the partition, the magnitude $\kappa$ of the difference in the signal values across contiguous elements of the partition, and the sum of the effective resistance edge weights $|\partial_r(\mathcal{S})|$ of the corresponding cut -- a graph-theoretic quantity measuring the size of the partition boundary. In particular, we demonstrate an information-theoretic lower bound implying that, in the low signal-to-noise ratio regime $\kappa^2 \Delta \sigma^{-2} |\partial_r(\mathcal{S})|^{-1} \lesssim 1$, no consistent estimator of the true partition exists. On the other hand, when $\kappa^2 \Delta \sigma^{-2} |\partial_r(\mathcal{S})|^{-1} \gtrsim \zeta_n \log\{r(|E|)\}$, where $r(|E|)$ is the sum of the effective resistance edge weights over all edges and $\zeta_n$ is any sequence diverging in $n$, we show that a polynomial-time, approximate $\ell_0$-penalised least squares estimator delivers a localisation error -- measured by the symmetric difference between the true and estimated partition -- of order $ \kappa^{-2} \sigma^2 |\partial_r(\mathcal{S})| \log\{r(|E|)\}$. Aside from the $\log\{r(|E|)\}$ term, this rate is minimax optimal. Finally, we discuss the localisation error for more general partitions of unknown size.
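A minimal numerical sketch of the boundary quantity $|\partial_r(\mathcal{S})|$, assuming it is the sum of the effective resistances of the edges crossing the cut and that $\Delta$ is the size of the smaller side; both readings of the abstract, and the grid example below, are ours.
\begin{verbatim}
import numpy as np
import networkx as nx

def effective_resistances(G):
    # r(u, v) = L+[u, u] + L+[v, v] - 2 L+[u, v], with L+ the Laplacian pseudoinverse.
    nodes = list(G.nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    Lp = np.linalg.pinv(L)
    return {(u, v): Lp[idx[u], idx[u]] + Lp[idx[v], idx[v]] - 2 * Lp[idx[u], idx[v]]
            for u, v in G.edges}

def resistance_boundary(G, S):
    # |\partial_r(S)|: effective resistances summed over edges crossing the cut.
    S, r = set(S), effective_resistances(G)
    return sum(w for (u, v), w in r.items() if (u in S) != (v in S))

G = nx.grid_2d_graph(5, 5)
S = [(i, j) for i in range(5) for j in range(2)]   # left two columns
kappa, sigma, Delta = 1.0, 0.5, len(S)
print(kappa**2 * Delta / (sigma**2 * resistance_boundary(G, S)))  # the SNR quantity
\end{verbatim}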
A homomorphism from a graph $G$ to a graph $H$ is an edge-preserving mapping from $V(G)$ to $V(H)$. Let $H$ be a fixed graph, possibly with loops. In the list homomorphism problem, denoted by \textsc{LHom}($H$), the instance is a graph $G$ in which every vertex is equipped with a subset of $V(H)$, called its list. We ask whether there exists a homomorphism from $G$ to $H$ such that every vertex of $G$ is mapped to a vertex from its list. We study the complexity of the \textsc{LHom}($H$) problem in intersection graphs of various geometric objects. In particular, we are interested in the question for which graphs $H$ and for which types of geometric objects the \textsc{LHom}($H$) problem can be solved in time subexponential in the number of vertices of the instance. We fully resolve this question for string graphs, i.e., intersection graphs of continuous curves in the plane. Quite surprisingly, it turns out that the dichotomy exactly coincides with the analogous dichotomy for graphs excluding a fixed path as an induced subgraph [Okrasa, Rz\k{a}\.zewski, STACS 2021]. Then we turn our attention to subclasses of string graphs, defined as intersections of fat objects. We observe that the (non)existence of subexponential-time algorithms in such classes is closely related to the size $\mathrm{mrc}(H)$ of a maximum reflexive clique in $H$, i.e., the maximum number of pairwise adjacent vertices, each of which has a loop. We study the maximum value of $\mathrm{mrc}(H)$ that guarantees the existence of a subexponential-time algorithm for \textsc{LHom}($H$) in intersection graphs of (i) convex fat objects, (ii) fat similarly-sized objects, and (iii) disks. In the first two cases we obtain optimal results, by giving matching algorithms and lower bounds. Finally, we discuss possible extensions of our results to weighted generalizations of \textsc{LHom}($H$).
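For concreteness, here is a brute-force sketch of the \textsc{LHom}($H$) definition itself (exponential time, tiny instances only; the use of networkx and the example are our own choices, not the subexponential-time algorithms discussed above).
\begin{verbatim}
from itertools import product
import networkx as nx

def list_homomorphism(G, H, lists):
    # Search for phi: V(G) -> V(H) with phi(v) in lists[v] for every v, and
    # phi(u)phi(v) an edge of H for every edge uv of G (H may carry loops).
    order = list(G.nodes)
    for images in product(*(lists[v] for v in order)):
        phi = dict(zip(order, images))
        if all(H.has_edge(phi[u], phi[v]) for u, v in G.edges):
            return phi
    return None

G, H = nx.cycle_graph(5), nx.complete_graph(3)
lists = {v: list(H.nodes) for v in G.nodes}
print(list_homomorphism(G, H, lists))   # a proper 3-colouring of C5
\end{verbatim}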
The \emph{Product Structure Theorem} for planar graphs (Dujmovi\'c et al., \emph{JACM}, \textbf{67}(4):22) states that any planar graph is contained in the strong product of a planar $3$-tree, a path, and a $3$-cycle. We give a simple linear-time algorithm for finding this decomposition as well as several related decompositions. This improves on the previous $O(n\log n)$ time algorithm (Morin, \emph{Algorithmica}, \textbf{85}(5):1544--1558).
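For illustration only, the three factors named in the theorem can be assembled directly; the particular planar $3$-tree and path length below are arbitrary choices of ours and have nothing to do with the linear-time algorithm.
\begin{verbatim}
import networkx as nx

# A small planar 3-tree: K4 plus one vertex stacked inside the face {1, 2, 3}.
H = nx.complete_graph(4)
H.add_edges_from([(4, 1), (4, 2), (4, 3)])
P = nx.path_graph(4)        # a path
C3 = nx.cycle_graph(3)      # a 3-cycle

product = nx.strong_product(nx.strong_product(H, P), C3)
print(product.number_of_nodes(), product.number_of_edges())
\end{verbatim}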
We propose a model for online graph problems where algorithms are given access to an oracle that predicts (e.g., based on past data) the degrees of nodes in the graph. Within this model, we study the classic problem of online bipartite matching, and a natural greedy matching algorithm called MinPredictedDegree, which uses predictions of the degrees of offline nodes. For the bipartite version of a stochastic graph model due to Chung, Lu, and Vu, where the expected values of the offline degrees are known and used as predictions, we show that MinPredictedDegree stochastically dominates any other online algorithm, i.e., it is optimal for graphs drawn from this model. Since the "symmetric" version of the model, where all online nodes are identical, is a special case of the well-studied "known i.i.d. model", it follows that the competitive ratio of MinPredictedDegree on such inputs is at least 0.7299. For the special case of graphs with power law degree distributions, we show that MinPredictedDegree frequently produces matchings almost as large as the true maximum matching on such graphs. We complement these results with an extensive empirical evaluation showing that MinPredictedDegree compares favorably to state-of-the-art online algorithms for online matching.
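A sketch of the greedy rule as we read it from the abstract: each arriving online node is matched to its currently unmatched offline neighbour of smallest predicted degree. The data format and names below are our own.
\begin{verbatim}
def min_predicted_degree_matching(online_arrivals, predicted_degree):
    # online_arrivals: iterable of (online_node, list_of_offline_neighbours)
    # predicted_degree: dict from offline nodes to predicted degrees
    matched_offline, matching = set(), {}
    for u, neighbours in online_arrivals:
        free = [v for v in neighbours if v not in matched_offline]
        if free:
            v = min(free, key=lambda x: predicted_degree[x])
            matching[u] = v
            matched_offline.add(v)
    return matching

# Toy run: preferring the low-degree offline node a keeps b free for u2.
arrivals = [("u1", ["a", "b"]), ("u2", ["b"])]
print(min_predicted_degree_matching(arrivals, {"a": 1, "b": 3}))
# {'u1': 'a', 'u2': 'b'}
\end{verbatim}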
Listing dense subgraphs in large graphs is a key task in a variety of network analysis applications, such as community detection. The clique, as the densest model, has been widely investigated. However, in practice, communities rarely form cliques, for various reasons, e.g., data noise. Therefore, the $k$-plex -- a graph in which each vertex is adjacent to all but at most $k$ vertices -- was introduced as a relaxed version of the clique. Often, to better model cohesive communities, an emphasis is placed on connected $k$-plexes with small $k$. In this paper, we continue the research line of listing all maximal $k$-plexes and maximal $k$-plexes of prescribed size. Our first contribution is the algorithm \emph{ListPlex}, which lists all maximal $k$-plexes in $O^*(\gamma^D)$ time for each constant $k$, where $\gamma$ is a value related to $k$ but strictly smaller than 2, and $D$ is the degeneracy of the graph, which is far smaller than the number of vertices $n$ in real-world graphs. Compared to the trivial bound of $2^n$, the improvement is significant, and our bound is better than all previously known results. In practice, we further use several techniques to accelerate the listing of $k$-plexes of a given size, such as structure-based pruning rules, cache-efficient data structures, and parallel techniques. All these together result in a very practical algorithm. Empirical results show that our approach outperforms the state-of-the-art solutions by up to orders of magnitude.
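A checker for the $k$-plex condition used above (each vertex of $S$ misses at most $k$ vertices of $S$, itself included, which is the usual convention); the networkx-based code is our own sketch, not part of ListPlex.
\begin{verbatim}
import networkx as nx

def is_k_plex(G, S, k):
    # S induces a k-plex if, in G[S], every vertex has degree >= |S| - k.
    S = set(S)
    H = G.subgraph(S)
    return all(H.degree(v) >= len(S) - k for v in S)

def is_connected_k_plex(G, S, k):
    return is_k_plex(G, S, k) and nx.is_connected(G.subgraph(S))

# A 4-cycle is a 2-plex (each vertex misses one other) but not a 1-plex (clique).
C4 = nx.cycle_graph(4)
print(is_k_plex(C4, C4.nodes, 1), is_k_plex(C4, C4.nodes, 2))  # False True
\end{verbatim}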
Beame et al. [ITCS 2018 & TALG 2021] introduced the Bipartite Independent Set (BIS) and Independent Set (IS) oracles, which give access to an unknown, simple, unweighted and undirected graph, and used them to solve the edge estimation problem. The introduction of these oracles set off a series of works in a short span of time that either solved open questions posed by Beame et al. or generalized their work, as in Dell and Lapinskas [STOC 2018], Dell, Lapinskas and Meeks [SODA 2020], Bhattacharya et al. [ISAAC 2019 & Theory Comput. Syst. 2021], and Chen et al. [SODA 2020]. Edge estimation can be done with polylogarithmically many BIS queries, while with IS queries it requires a sub-linear, but more than polylogarithmic, number of queries. Chen et al. improved Beame et al.'s upper bound for edge estimation using IS queries and also showed an almost matching lower bound. In their introductory work, Beame et al. asked a few open questions, one of which was to estimate structures of higher order than edges, such as triangles and cliques, using BIS queries. Motivated by this question, we completely resolve the query complexity of estimating triangles using the BIS oracle. In doing so, we prove a lower bound for an even stronger query oracle, called the Edge Emptiness (EE) oracle, recently introduced by Assadi, Chakrabarty and Khanna [ESA 2021] to test graph connectivity.
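A simulated BIS oracle for experimentation, under our reading that a query consists of two disjoint vertex subsets and the answer reveals only whether any edge of the hidden graph crosses between them.
\begin{verbatim}
import networkx as nx

def bis_oracle(G, A, B):
    # Return True iff no edge of G has one endpoint in A and the other in B,
    # i.e., A and B span a bipartite independent set (our reading of BIS).
    A, B = set(A), set(B)
    assert A.isdisjoint(B)
    return not any(G.has_edge(a, b) for a in A for b in B)

# Estimation algorithms see the graph only through such answers; on the path 0-1-2-3:
P = nx.path_graph(4)
print(bis_oracle(P, {0, 1}, {2, 3}))  # False: edge (1, 2) crosses
print(bis_oracle(P, {0}, {2, 3}))     # True: no edge from 0 to {2, 3}
\end{verbatim}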
We consider the following problem: for a given graph $G$ and two integers $k$ and $d$, can we apply a fixed graph operation at most $k$ times in order to reduce a given graph parameter $\pi$ by at least $d$? We show that this problem is NP-hard when the parameter is the independence number and the graph operation is vertex deletion or edge contraction, even for fixed $d=1$ and when restricted to chordal graphs. We also give a polynomial-time algorithm for bipartite graphs when the operation is edge contraction, the parameter is the independence number, and $d$ is fixed. Further, we complete the complexity dichotomy on $H$-free graphs when the parameter is the clique number and the operation is edge contraction by showing that this problem is NP-hard in $(C_3+P_1)$-free graphs even for fixed $d=1$. Our results answer several open questions stated in [Diner et al., Theoretical Computer Science, 746, pp. 49--72 (2018)].
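A brute-force sketch of the contraction variant of the problem, for tiny graphs only; the helper names are ours and independence numbers are computed exhaustively.
\begin{verbatim}
from itertools import combinations
import networkx as nx

def independence_number(G):
    # Exact independence number by exhaustive search (exponential time).
    nodes = list(G.nodes)
    for size in range(len(nodes), 0, -1):
        for S in combinations(nodes, size):
            if G.subgraph(S).number_of_edges() == 0:
                return size
    return 0

def can_reduce_by_contractions(G, k, d):
    # Can at most k edge contractions lower the independence number by >= d?
    target = independence_number(G) - d
    def rec(H, budget):
        if independence_number(H) <= target:
            return True
        if budget == 0:
            return False
        return any(rec(nx.contracted_edge(H, e, self_loops=False), budget - 1)
                   for e in H.edges)
    return rec(G, k)

# On P3 (independence number 2), one contraction yields P2 (independence number 1).
print(can_reduce_by_contractions(nx.path_graph(3), 1, 1))  # True
\end{verbatim}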
We consider the problem of partitioning a graph into a non-fixed number of non-overlapping subgraphs of maximum density. The density of a partition is the sum of the densities of the subgraphs, where the density of a subgraph is half its average degree, that is, the ratio of its number of edges to its number of vertices. This problem, called Dense Graph Partition, is known to be NP-hard on general graphs, polynomial-time solvable on trees, and polynomial-time 2-approximable. In this paper we study the restriction of Dense Graph Partition to particular sparse and dense graph classes. In particular, we prove that it is NP-hard on dense bipartite graphs as well as on cubic graphs. For dense graphs on $n$ vertices, it is polynomial-time solvable on graphs with minimum degree $n-3$ and NP-hard on $(n-4)$-regular graphs. We prove that it is polynomial-time $4/3$-approximable on cubic graphs and admits an efficient polynomial-time approximation scheme on graphs of minimum degree $n-t$ for any constant $t\geq 4$.
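The objective in code form, assuming the density of a part is $|E|/|V|$ of the induced subgraph as stated above; the toy example is ours.
\begin{verbatim}
import networkx as nx

def partition_density(G, parts):
    # Sum over non-empty parts of |E(G[P])| / |P|.
    total = 0.0
    for P in parts:
        H = G.subgraph(P)
        total += H.number_of_edges() / H.number_of_nodes()
    return total

# On K4, keeping everything in one part beats splitting it:
K4 = nx.complete_graph(4)
print(partition_density(K4, [list(K4.nodes)]))   # 6/4 = 1.5
print(partition_density(K4, [[0, 1], [2, 3]]))   # 1/2 + 1/2 = 1.0
\end{verbatim}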
A cut sparsifier is a reweighted subgraph that maintains the weights of the cuts of the original graph up to a multiplicative factor of $(1\pm\epsilon)$. This paper considers computing cut sparsifiers of size $O(n\log (n)/\epsilon^2)$ for weighted graphs. Our algorithm computes such a sparsifier in time $O(m\cdot\min(\alpha(n)\log(m/n),\log (n)))$, both for graphs with polynomially bounded and unbounded integer weights, where $\alpha(\cdot)$ is the functional inverse of Ackermann's function. This improves upon the state of the art by Bencz\'ur and Karger (SICOMP 2015), which takes $O(m\log^2 (n))$ time. For unbounded weights, this directly gives the best known result for cut sparsification. Together with preprocessing by an algorithm of Fung et al. (SICOMP 2019), this also gives the best known result for polynomially-weighted graphs. Consequently, this implies the fastest approximate min-cut algorithm, both for graphs with polynomial and unbounded weights. In particular, we show that it is possible to adapt the state-of-the-art algorithm of Fung et al. for unweighted graphs to weighted graphs, by letting the partial maximum spanning forest (MSF) packing take the place of the Nagamochi-Ibaraki (NI) forest packing. MSF packings have previously been used by Abraham et al. (FOCS 2016) in the dynamic setting, and are defined as follows: an $M$-partial MSF packing of $G$ is a set $\mathcal{F}=\{F_1, \dots, F_M\}$, where $F_i$ is a maximum spanning forest in $G\setminus \bigcup_{j=1}^{i-1}F_j$. Our method for computing (a sufficient estimate of) the MSF packing is the bottleneck in the running time of our sparsification algorithm.
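A direct sketch of the $M$-partial MSF packing definition, using networkx's Kruskal-based maximum spanning forests; this illustrates the definition only, not the estimation procedure referred to above.
\begin{verbatim}
import networkx as nx

def partial_msf_packing(G, M):
    # F_i is a maximum-weight spanning forest of G with F_1, ..., F_{i-1} removed.
    H = G.copy()
    packing = []
    for _ in range(M):
        F = nx.Graph()
        F.add_nodes_from(H.nodes)
        F.add_edges_from(nx.maximum_spanning_edges(H, data=True))
        packing.append(F)
        H.remove_edges_from(F.edges)
    return packing

# Weighted toy graph: two forests already exhaust a 4-cycle with a chord.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 3), (1, 2, 2), (2, 3, 5), (3, 0, 1), (0, 2, 4)])
for i, F in enumerate(partial_msf_packing(G, 2), start=1):
    print(i, sorted(F.edges))
\end{verbatim}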
We present a constant-round algorithm in the massively parallel computation (MPC) model for evaluating a natural join where every input relation has two attributes. Our algorithm achieves a load of $\tilde{O}(m/p^{1/\rho})$ where $m$ is the total size of the input relations, $p$ is the number of machines, $\rho$ is the join's fractional edge covering number, and $\tilde{O}(\cdot)$ hides a polylogarithmic factor. The load matches a known lower bound up to a polylogarithmic factor. At the core of the proposed algorithm is a new theorem (which we name {\em the isolated cartesian product theorem}) that provides fresh insight into the problem's mathematical structure. Our result implies that the {\em subgraph enumeration problem}, where the goal is to report all the occurrences of a constant-sized subgraph pattern, can be settled optimally (up to a polylogarithmic factor) in the MPC model.
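A worked instance of the load bound: the fractional edge covering number $\rho$ is the optimum of the standard covering LP, and for the triangle join it equals $3/2$; the scipy-based computation and the values of $m$ and $p$ below are illustrative only, with polylogarithmic factors suppressed.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def fractional_edge_cover_number(relations, attributes):
    # Minimise sum_R x_R subject to: for every attribute a,
    # the x_R over relations R containing a sum to at least 1, with x_R >= 0.
    A_ub = -np.array([[1.0 if a in R else 0.0 for R in relations] for a in attributes])
    res = linprog(c=np.ones(len(relations)), A_ub=A_ub,
                  b_ub=-np.ones(len(attributes)), bounds=(0, None))
    return res.fun

# Triangle join R(A,B), S(B,C), T(A,C): rho = 1.5, so with m tuples and p machines
# the claimed load is roughly m / p^(1/1.5) = m / p^(2/3).
relations = [{"A", "B"}, {"B", "C"}, {"A", "C"}]
rho = fractional_edge_cover_number(relations, ["A", "B", "C"])
m, p = 10**6, 64
print(rho, m / p ** (1 / rho))
\end{verbatim}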