For a graph $G=(V,E)$, a subset $D$ of the vertex set $V$ is a dominating set of $G$ if every vertex not in $D$ is adjacent to at least one vertex of $D$. A dominating set $D$ of a graph $G$ with no isolated vertices is called a paired dominating set (PD-set) if $G[D]$, the subgraph induced by $D$ in $G$, has a perfect matching. The Min-PD problem asks to compute a PD-set of minimum cardinality. The decision version of the Min-PD problem remains NP-complete even when $G$ belongs to restricted graph classes such as bipartite graphs and chordal graphs. On the positive side, the problem is efficiently solvable for many graph classes, including interval graphs, strongly chordal graphs, and permutation graphs. In this paper, we study the complexity of the problem in AT-free graphs and planar graphs. The class of AT-free graphs contains cocomparability graphs, permutation graphs, trapezoid graphs, and interval graphs as subclasses. We propose a polynomial-time algorithm to compute a minimum PD-set in AT-free graphs. In addition, we present a linear-time $2$-approximation algorithm for the problem in AT-free graphs. Further, we prove that the decision version of the problem is NP-complete for planar graphs, which answers an open question asked by Lin et al. (Theor. Comput. Sci., 591 (2015): 99-105 and Algorithmica, 82 (2020): 2809-2840).
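As a quick illustration of these definitions, the following brute-force checker (our own sketch, with names of our choosing; feasible only for tiny graphs) verifies both conditions of a PD-set: domination, and a perfect matching in the induced subgraph $G[D]$.

```python
def is_paired_dominating_set(n, edges, D):
    """Check whether D is a paired dominating set of the simple graph on 0..n-1."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    # Domination: every vertex outside D must have a neighbour in D.
    if any(not (adj[v] & D) for v in set(range(n)) - D):
        return False
    # Perfect matching in G[D], found by brute-force pairing.
    def can_match(rest):
        if not rest:
            return True
        u = min(rest)
        return any(can_match(rest - {u, w}) for w in adj[u] & rest)
    return len(D) % 2 == 0 and can_match(set(D))

# On the 4-cycle 0-1-2-3-0, {0, 1} is a PD-set, but {0, 2} is not:
# it dominates everything, yet G[{0, 2}] has no edge, hence no matching.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
```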
We revisit the complexity of the well-studied notion of Additively Separable Hedonic Games (ASHGs). Such games model a basic clustering or coalition formation scenario in which selfish agents are represented by the vertices of an edge-weighted digraph $G=(V,E)$, and the weight of an arc $uv$ denotes the utility $u$ gains by being in the same coalition as $v$. We focus on (arguably) the most basic stability question about such a game: given a graph, does a Nash stable solution exist, and can we find it efficiently? We study the (parameterized) complexity of ASHG stability when the underlying graph has treewidth $t$ and maximum degree $\Delta$. The current best FPT algorithm for this case was claimed by Peters [AAAI 2016], with time complexity roughly $2^{O(\Delta^5t)}$. We present an algorithm with parameter dependence $(\Delta t)^{O(\Delta t)}$, significantly improving upon the parameter dependence on $\Delta$ given by Peters, albeit with a slightly worse dependence on $t$. Our main result is that this slight performance deterioration with respect to $t$ is actually completely justified: we observe that the previously claimed algorithm is incorrect, and that in fact no algorithm can achieve dependence $t^{o(t)}$ for bounded-degree graphs, unless the ETH fails. This, together with corresponding bounds that we provide on the dependence on $\Delta$ and on the joint parameter, establishes that our algorithm is essentially optimal for both parameters, under the ETH. We then revisit the parameterization by treewidth alone and resolve a question also posed by Peters by showing that Nash Stability remains strongly NP-hard on stars under additive preferences. Nevertheless, we also discover an island of mild tractability: we show that Connected Nash Stability is solvable in pseudo-polynomial time for constant $t$, though with an XP dependence on $t$ which, as we establish, cannot be avoided.
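For concreteness, a coalition structure is Nash stable if no single agent can increase its additive utility by moving to another existing coalition or by forming a new singleton coalition. A small checker along these lines (our own sketch, not the paper's algorithm):

```python
def is_nash_stable(weights, partition):
    """weights[u][v]: utility u gains from sharing a coalition with v (0 if absent)."""
    def util(u, coalition):
        return sum(weights.get(u, {}).get(v, 0) for v in coalition if v != u)
    for i, C in enumerate(partition):
        for u in C:
            here = util(u, C)
            if here < 0:  # u would prefer to leave and be alone (utility 0)
                return False
            # u must not prefer joining any other existing coalition
            if any(util(u, D) > here for j, D in enumerate(partition) if j != i):
                return False
    return True

# Two agents with mutual positive utility are stable together, unstable apart.
w = {0: {1: 5}, 1: {0: 5}, 2: {0: -2}}
```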
A triangle in a hypergraph $\mathcal{H}$ is a set of three distinct edges $e, f, g\in\mathcal{H}$ and three distinct vertices $u, v, w\in V(\mathcal{H})$ such that $\{u, v\}\subseteq e$, $\{v, w\}\subseteq f$, $\{w, u\}\subseteq g$ and $\{u, v, w\}\cap e\cap f\cap g=\emptyset$. Johansson proved in 1996 that $\chi(G)=\mathcal{O}(\Delta/\log\Delta)$ for any triangle-free graph $G$ with maximum degree $\Delta$. Cooper and Mubayi later generalized Johansson's theorem to all rank $3$ hypergraphs. In this paper we provide a common generalization of both these results for all hypergraphs, showing that if $\mathcal{H}$ is a rank $k$, triangle-free hypergraph, then the list chromatic number \[ \chi_{\ell}(\mathcal{H})\leq \mathcal{O}\left(\max_{2\leq \ell \leq k} \left\{\left( \frac{\Delta_{\ell}}{\log \Delta_{\ell}} \right)^{\frac{1}{\ell-1}} \right\}\right), \] where $\Delta_{\ell}$ is the maximum $\ell$-degree of $\mathcal{H}$. The result is sharp apart from the constant. Moreover, our result implies, generalizes, and improves several earlier results on the chromatic number and the independence number of hypergraphs, while its proof is based on a different approach than prior works in hypergraphs (and therefore provides alternative proofs to them). In particular, as an application, we establish a bound on the chromatic number of sparse hypergraphs in which each vertex is contained in few triangles, and thus extend results of Alon, Krivelevich, and Sudakov, and of Cooper and Mubayi from hypergraphs of rank 2 and 3, respectively, to all hypergraphs.
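The triangle definition above can be checked directly by brute force; the sketch below (ours, practical only for very small hypergraphs with distinct edges) enumerates ordered edge triples and candidate vertices:

```python
from itertools import permutations

def has_triangle(edges):
    """Brute-force test of the hypergraph-triangle definition; edges are distinct sets."""
    sets = [frozenset(e) for e in edges]
    for i, j, k in permutations(range(len(sets)), 3):
        e, f, g = sets[i], sets[j], sets[k]
        core = e & f & g  # the common intersection that u, v, w must avoid
        for u in e & g:          # {u, v} subset of e and {w, u} subset of g
            for v in e & f:      # {v, w} subset of f
                for w in f & g:
                    if len({u, v, w}) == 3 and not ({u, v, w} & core):
                        return True
    return False
```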
We consider classes of arbitrary (finite or infinite) graphs of bounded shrub-depth, specifically the classes $\mathrm{TM}_r(d)$ of arbitrary graphs that have tree models of height $d$ and $r$ labels. We show that the graphs of $\mathrm{TM}_r(d)$ are $\mathrm{MSO}$-pseudo-finite relative to the class $\mathrm{TM}^{\text{f}}_r(d)$ of finite graphs of $\mathrm{TM}_r(d)$; that is, that every $\mathrm{MSO}$ sentence true in a graph of $\mathrm{TM}_r(d)$ is also true in a graph of $\mathrm{TM}^{\text{f}}_r(d)$. We also show that $\mathrm{TM}_r(d)$ is closed under ultraproducts and ultraroots. These results have two consequences. The first is that the index of the $\mathrm{MSO}[m]$-equivalence relation on graphs of $\mathrm{TM}_r(d)$ is bounded by a $(d+1)$-fold exponential in $m$. The second is that $\mathrm{TM}_r(d)$ is exactly the class of all graphs that are $\mathrm{MSO}$-pseudo-finite relative to $\mathrm{TM}^{\text{f}}_r(d)$.
Given a directed graph $G$ and integers $k$ and $l$, a D-core is the maximal subgraph $H \subseteq G$ such that every vertex of $H$ has in-degree at least $k$ and out-degree at least $l$. For a directed graph $G$, the D-core decomposition problem asks to compute the non-empty D-cores for all possible values of $k$ and $l$. In the literature, several \emph{peeling-based} algorithms have been proposed for D-core decomposition. However, these algorithms work sequentially and require global graph information during processing, so they are mainly designed for \emph{centralized} settings and cannot handle large-scale graphs efficiently in distributed settings. Motivated by this, we study the \emph{distributed} D-core decomposition problem in this paper. We start by defining a concept called \emph{anchored coreness}, based on which we propose a new H-index-based algorithm for distributed D-core decomposition. Furthermore, we devise a novel concept, namely \emph{skyline coreness}, and show that the D-core decomposition problem is equivalent to computing the skyline corenesses of all vertices. We design an efficient D-index to compute the skyline corenesses in a distributed manner. We implement the proposed algorithms under both vertex-centric and block-centric distributed graph processing frameworks. Moreover, we theoretically analyze the algorithm and message complexities. Extensive experiments on large real-world graphs with billions of edges demonstrate the efficiency of the proposed algorithms in terms of both running time and communication overhead.
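For a single pair $(k,l)$, the centralized peeling step underlying these algorithms is simple: repeatedly delete any vertex whose in-degree drops below $k$ or whose out-degree drops below $l$. A minimal sketch (our own code, not the paper's distributed algorithms):

```python
def d_core(n, arcs, k, l):
    """Peel the digraph on 0..n-1 down to its (k, l) D-core; arcs are (u, v) pairs."""
    ind, outd = [0] * n, [0] * n
    out_adj = [[] for _ in range(n)]
    in_adj = [[] for _ in range(n)]
    for u, v in arcs:
        out_adj[u].append(v)
        in_adj[v].append(u)
        outd[u] += 1
        ind[v] += 1
    alive = [True] * n
    queue = [v for v in range(n) if ind[v] < k or outd[v] < l]
    while queue:
        u = queue.pop()
        if not alive[u]:
            continue  # already peeled
        alive[u] = False
        for v in out_adj[u]:  # removing u lowers in-degrees of its out-neighbours
            if alive[v]:
                ind[v] -= 1
                if ind[v] < k:
                    queue.append(v)
        for v in in_adj[u]:   # ... and out-degrees of its in-neighbours
            if alive[v]:
                outd[v] -= 1
                if outd[v] < l:
                    queue.append(v)
    return {v for v in range(n) if alive[v]}
```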
We give an efficient perfect sampling algorithm for weighted, connected induced subgraphs (or graphlets) of rooted, bounded-degree graphs under a vertex-percolation subcriticality condition. We show that this subcriticality condition is optimal in the sense that the problem of (approximately) sampling weighted rooted graphlets becomes impossible for infinite graphs and intractable for finite graphs if the condition does not hold. We apply our rooted graphlet sampling algorithm as a subroutine to give a fast perfect sampling algorithm for polymer models and a fast perfect sampling algorithm for weighted non-rooted graphlets in finite graphs, two widely studied yet very different problems. We apply this polymer model algorithm to give improved sampling algorithms for spin systems at low temperatures on expander graphs and other structured families of graphs: under the least restrictive conditions known, we give near-linear-time algorithms, while previous algorithms in these regimes required large polynomial running times.
We introduce the model of growing graphs, a model of dynamic networks in which nodes can generate new nodes, thus expanding the network. This motivates the algorithmic problem of constructing a target graph $G$, starting from a single node. To properly model this, we assume that every node $u$ can generate at most one node $v$ in every round (or time slot). Every newly generated node $v$ can activate edges with other nodes only at the time of its birth, provided that these nodes are within a small distance $d$ from $v$. We show that the most interesting case is when $d=2$. As we prove, in order to construct a target graph $G$ in a small number of time slots, we might need to pay for auxiliary edges (the ``excess edges''), which are eventually removed. This creates a trade-off between the number of time slots and the number of excess edges required to construct a target graph. In this paper, we deal with the following algorithmic question: given a target graph $G$ of $n$ nodes, can $G$ be constructed in at most $k$ time slots and with at most $\ell$ excess edges? On the positive side, we provide polynomial-time algorithms that efficiently construct fundamental graph families, such as lines, stars, trees, and planar graphs. In particular, we show that trees can be constructed in a poly-logarithmic number of slots with linearly many excess edges, while planar graphs can be constructed in a logarithmic number of slots with $O(n\log n)$ excess edges. We also give a polynomial-time algorithm for deciding whether a graph can be constructed in $\log n$ slots with $\ell = 0$ excess edges. On the negative side, we prove that the problem of determining the minimum number of slots required for a graph to be constructed with zero excess edges (i) is NP-complete and (ii) for any $\varepsilon>0$, cannot be approximated within a factor of $n^{1-\varepsilon}$, unless P=NP.
In this paper we study temporal design problems of undirected temporally connected graphs. The basic setting of these optimization problems is as follows: given an undirected graph $G$, what is the smallest number $|\lambda|$ of time-labels that we need to add to the edges of $G$ such that the resulting temporal graph $(G,\lambda)$ is temporally connected? As we prove, this basic problem, called MINIMUM LABELING, can be optimally solved in polynomial time, thus resolving an open question. The situation becomes more complicated, however, if we strengthen, or even slightly relax, the requirement of temporal connectivity of $(G,\lambda)$. One way to strengthen the temporal connectivity requirement is to upper-bound the allowed age (i.e., maximum label) of the obtained temporal graph $(G,\lambda)$. On the other hand, we can relax temporal connectivity by only requiring that there exists a temporal path between any pair of ``important'' vertices which lie in a subset $R\subseteq V$, which we call the terminals. This relaxed problem, called MINIMUM STEINER LABELING, resembles STEINER TREE in static (i.e., non-temporal) graphs; however, as it turns out, STEINER TREE is not a special case of MINIMUM STEINER LABELING. We prove that MINIMUM STEINER LABELING is NP-hard and fixed-parameter tractable (FPT) with respect to the number $|R|$ of terminals. Moreover, we prove that adding the age restriction makes the above problems strictly harder (unless P=NP or W[1]=FPT). More specifically, we prove that the age-restricted version of MINIMUM LABELING becomes NP-complete on undirected graphs, while the age-restricted version of MINIMUM STEINER LABELING becomes W[1]-hard with respect to the number $|R|$ of terminals.
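To fix ideas, $(G,\lambda)$ is temporally connected when every ordered pair of vertices is joined by a temporal path, i.e., a path whose time-labels strictly increase along its edges (we assume this standard convention and positive integer labels). A naive fixpoint checker, ours for illustration only:

```python
def temporally_connected(n, labeled_edges):
    """labeled_edges: (u, v, t) triples, one per time-label on an undirected edge."""
    def earliest_arrivals(s):
        arrival = {s: 0}  # labels are >= 1, so any label can extend the start
        changed = True
        while changed:  # relax until earliest arrival times stabilise
            changed = False
            for u, v, t in labeled_edges:
                for a, b in ((u, v), (v, u)):
                    # extend a temporal path arriving at a before time t
                    if a in arrival and arrival[a] < t and arrival.get(b, t + 1) > t:
                        arrival[b] = t
                        changed = True
        return arrival
    return all(len(earliest_arrivals(s)) == n for s in range(n))
```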
An embedding of a graph in a book, called a book embedding, consists of a linear ordering of its vertices along the spine of the book and an assignment of its edges to the pages of the book, so that no two edges on the same page cross. The book thickness of a graph is the minimum number of pages over all its book embeddings. For planar graphs, a fundamental result is due to Yannakakis, who proposed an algorithm to compute embeddings of planar graphs in books with four pages. Our main contribution is a technique that generalizes this result to a much wider family of nonplanar graphs, which is characterized by a biconnected skeleton of crossing-free edges whose faces have bounded degree. Notably, this family includes all 1-planar, all optimal 2-planar, and all $k$-map graphs (for bounded $k$) as subgraphs. We prove that this family of graphs has bounded book thickness, and as a corollary, we obtain the first constant upper bound for the book thickness of optimal 2-planar and $k$-map graphs.
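Given the spine order, two edges assigned to the same page cross exactly when their endpoints interleave along the spine. A small validity checker for a candidate book embedding (our own sketch, with names of our choosing):

```python
def valid_book_embedding(order, pages, edges):
    """pages[i] is the page of edges[i]; order lists the vertices along the spine."""
    pos = {v: i for i, v in enumerate(order)}
    spans = [tuple(sorted((pos[u], pos[v]))) for u, v in edges]
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            if pages[i] != pages[j]:
                continue  # edges on different pages never cross
            (a, b), (c, d) = spans[i], spans[j]
            if a < c < b < d or c < a < d < b:  # endpoints interleave on the spine
                return False
    return True
```

For example, the two diagonals of a 4-cycle interleave in the natural spine order, so they need two pages.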
We study the problem of sampling almost uniform proper $q$-colourings in $k$-uniform simple hypergraphs with maximum degree $\Delta$. For any $\delta > 0$, if $k \geq \frac{20(1+\delta)}{\delta}$ and $q \geq 100\Delta^{\frac{2+\delta}{k-4/\delta-4}}$, the running time of our algorithm is $\tilde{O}(\mathrm{poly}(\Delta k)\cdot n^{1.01})$, where $n$ is the number of vertices. Our result requires fewer colours than previous results for general hypergraphs (Jain, Pham, and Vuong, 2021; He, Sun, and Wu, 2021), and, unlike the work of Frieze and Anastos (2017), does not require $\Omega(\log n)$ colours.
In this paper, we introduce the \emph{interval query problem} on cube-free median graphs. Let $G$ be a cube-free median graph and let $\mathcal{S}$ be a commutative semigroup. For each vertex $v$ in $G$, we are given an element $p(v)$ in $\mathcal{S}$. For each query, we are given two vertices $u,v$ in $G$ and asked to calculate the sum of $p(z)$ over all vertices $z$ belonging to a shortest $u$-$v$ path. This is a common generalization of range query problems on trees and grids. We provide an algorithm that answers each interval query in $O(\log^2 n)$ time. The required data structure is constructed in $O(n\log^3 n)$ time and $O(n\log^2 n)$ space. To obtain our algorithm, we introduce a new technique, named the \emph{stairs decomposition}, which decomposes an interval of a cube-free median graph into simpler substructures.
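When $G$ is a tree (one of the special cases this problem generalizes), the interval between $u$ and $v$ is just the unique $u$-$v$ path, and a query can be answered naively by walking parent pointers. A simple baseline sketch (ours, not the paper's $O(\log^2 n)$ data structure):

```python
from collections import deque

def path_query(n, edges, p, u, v, op=lambda a, b: a + b):
    """Combine p[z], via a commutative semigroup op, over the u-v path in a tree."""
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    parent = [-1] * n
    q, seen = deque([u]), {u}  # BFS rooted at u fills the parent pointers
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                parent[y] = x
                q.append(y)
    total = p[v]
    while v != u:  # walk v up towards the root u, combining values
        v = parent[v]
        total = op(total, p[v])
    return total
```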