An efficient implicit representation of an $n$-vertex graph $G$ in a family $\mathcal{F}$ of graphs assigns to each vertex of $G$ a binary code of length $O(\log n)$ so that the adjacency between every pair of vertices can be determined solely as a function of their codes. This function may depend on the family but not on the individual graph. Every family of graphs admitting such a representation contains at most $2^{O(n\log n)}$ graphs on $n$ vertices, and thus has at most factorial speed of growth. The Implicit Graph Conjecture states that, conversely, every hereditary graph family with at most factorial speed of growth admits an efficient implicit representation. We refute this conjecture by establishing the existence of hereditary graph families with factorial speed of growth that require codes of length $n^{\Omega(1)}$.
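To make the definition concrete, here is a minimal sketch (my own illustration, not taken from the paper) of the classical implicit representation of interval graphs: each vertex's code packs its interval's two endpoints into $O(\log n)$ bits, and a decoder shared by the whole family determines adjacency from the codes alone.

    # A decoder shared by the whole family of interval graphs: adjacency is
    # a function of the two codes alone, never of the individual graph.
    def encode(interval, bits):
        """Code of a vertex: its interval's endpoints in 2*bits binary digits."""
        lo, hi = interval
        return format(lo, f"0{bits}b") + format(hi, f"0{bits}b")

    def adjacent(code_u, code_v):
        """Two intervals overlap iff neither ends before the other begins."""
        bits = len(code_u) // 2
        lo_u, hi_u = int(code_u[:bits], 2), int(code_u[bits:], 2)
        lo_v, hi_v = int(code_v[:bits], 2), int(code_v[bits:], 2)
        return lo_u <= hi_v and lo_v <= hi_u

    intervals = [(0, 2), (1, 4), (3, 5)]             # a 3-vertex interval graph
    codes = [encode(iv, bits=3) for iv in intervals]
    print(adjacent(codes[0], codes[1]))              # True: (0,2) meets (1,4)
    print(adjacent(codes[0], codes[2]))              # False: (0,2) misses (3,5)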
In this paper we study the problem of coloring a unit interval graph that changes dynamically. In our model the unit intervals are added or removed one at a time and have to be colored immediately, so that no two overlapping intervals share the same color. After each update only a limited number of intervals may be recolored; this limit on the number of recolorings per update is called the recourse budget. We show that if the graph remains $k$-colorable at all times and the updates consist of insertions only, then we can achieve an amortized recourse budget of $O(k^7 \log n)$ while maintaining a proper coloring with $k$ colors. This is an exponential improvement over the result in [Bosek et al., Recoloring Interval Graphs with Limited Recourse Budget. SWAT 2020] in terms of both $k$ and $n$. We complement this result with a lower bound of $\Omega(n)$ on the amortized recourse budget in the fully dynamic setting. Our incremental algorithm can be implemented efficiently. As a byproduct of independent interest, we include a new result on coloring proper circular arc graphs. Let $L$ be the maximum number of arcs intersecting in one point for some set of unit circular arcs $\mathcal{A}$. We show that if there is a set $\mathcal{A}'$ of non-intersecting unit arcs of size $L^2-1$ such that $\mathcal{A} \cup \mathcal{A}'$ does not contain $L+1$ arcs intersecting in one point, then it is possible to color $\mathcal{A}$ with $L$ colors. This complements the work on unit circular arc coloring, which gives sufficient conditions for coloring $\mathcal{A}$ with $L+1$ colors or more.
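The following toy sketch illustrates the dynamic model only, not the paper's algorithm: unit intervals arrive one at a time and must immediately receive a color differing from all overlapping intervals. The naive greedy below never recolors old intervals, but it can exceed $k$ colors; the paper's contribution is staying within $k$ colors while recoloring only $O(k^7 \log n)$ intervals per insertion, amortized.

    # Intervals are unit length and identified here by their left endpoints.
    colors = {}  # left endpoint -> color

    def overlap(a, b):
        return a < b + 1 and b < a + 1   # [a, a+1) and [b, b+1) intersect

    def insert(x):
        used = {c for y, c in colors.items() if overlap(x, y)}
        colors[x] = min(c for c in range(len(used) + 1) if c not in used)

    for x in [0.0, 0.5, 2.0, 1.2]:
        insert(x)
    print(colors)  # {0.0: 0, 0.5: 1, 2.0: 0, 1.2: 2}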
We present a constant-round algorithm in the massively parallel computation (MPC) model for evaluating a natural join where every input relation has two attributes. Our algorithm achieves a load of $\tilde{O}(m/p^{1/\rho})$ where $m$ is the total size of the input relations, $p$ is the number of machines, $\rho$ is the join's fractional edge covering number, and $\tilde{O}(\cdot)$ hides a polylogarithmic factor. The load matches a known lower bound up to a polylogarithmic factor. At the core of the proposed algorithm is a new theorem (which we name {\em the isolated cartesian product theorem}) that provides fresh insight into the problem's mathematical structure. Our result implies that the {\em subgraph enumeration problem}, where the goal is to report all the occurrences of a constant-sized subgraph pattern, can be settled optimally (up to a polylogarithmic factor) in the MPC model.
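As a worked example of the exponent in the load bound, the fractional edge covering number $\rho$ is the optimum of a small linear program; for the triangle join $R(A,B) \bowtie S(B,C) \bowtie T(C,A)$ it equals $3/2$, giving load $\tilde{O}(m/p^{2/3})$. The sketch below (assuming SciPy is available) computes $\rho$ directly.

    # rho = min sum_e x_e  s.t.  sum over relations e containing each
    # attribute of x_e >= 1, with x_e >= 0.
    from scipy.optimize import linprog

    attributes = ["A", "B", "C"]
    relations = [("A", "B"), ("B", "C"), ("C", "A")]
    # linprog minimizes c.x subject to A_ub.x <= b_ub, so negate the
    # covering constraints.
    A_ub = [[-1 if a in rel else 0 for rel in relations] for a in attributes]
    b_ub = [-1] * len(attributes)
    res = linprog(c=[1] * len(relations), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    print(res.fun)  # 1.5: each relation gets weight 1/2, so rho = 3/2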
$\newcommand{\NP}{\mathsf{NP}}\newcommand{\GapSVP}{\textrm{GapSVP}}$We give a simple proof that the (approximate, decisional) Shortest Vector Problem is $\NP$-hard under a randomized reduction. Specifically, we show that for any $p \geq 1$ and any constant $\gamma < 2^{1/p}$, the $\gamma$-approximate problem in the $\ell_p$ norm ($\gamma$-$\GapSVP_p$) is not in $\mathsf{RP}$ unless $\NP \subseteq \mathsf{RP}$. Our proof follows an approach pioneered by Ajtai (STOC 1998), and strengthened by Micciancio (FOCS 1998 and SICOMP 2000), for showing hardness of $\gamma$-$\GapSVP_p$ using locally dense lattices. We construct such lattices simply by applying "Construction A" to Reed-Solomon codes with suitable parameters, and prove their local density via an elementary argument originally used in the context of Craig lattices. As in all known $\NP$-hardness results for $\GapSVP_p$ with $p < \infty$, our reduction uses randomness. Indeed, it is a notorious open problem to prove $\NP$-hardness via a deterministic reduction. To this end, we additionally discuss potential directions and associated challenges for derandomizing our reduction. In particular, we show that a close deterministic analogue of our local density construction would improve on the state-of-the-art explicit Reed-Solomon list-decoding lower bounds of Guruswami and Rudra (STOC 2005 and IEEE Trans. Inf. Theory 2006). As a related contribution of independent interest, we also give a polynomial-time algorithm for decoding $n$-dimensional "Construction A Reed-Solomon lattices" (with different parameters than those used in our hardness proof) to a distance within an $O(\sqrt{\log n})$ factor of Minkowski's bound. This asymptotically matches the best known distance for decoding near Minkowski's bound, due to Mook and Peikert (IEEE Trans. Inf. Theory 2022), whose work we build on with a somewhat simpler construction and analysis.
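For intuition, here is a toy sketch of "Construction A" with parameters far smaller than those in the paper: a linear code $C \subseteq \mathbb{F}_q^n$ is lifted to the lattice $\{x \in \mathbb{Z}^n : x \bmod q \in C\}$, here with a small Reed-Solomon code over $\mathbb{F}_5$.

    # Construction A over a tiny Reed-Solomon code: evaluations of degree-<2
    # polynomials over F_5 at the points 1, 2, 3, 4.
    q, k, points = 5, 2, [1, 2, 3, 4]

    def rs_codeword(coeffs):
        return [sum(c * pow(a, i, q) for i, c in enumerate(coeffs)) % q
                for a in points]

    codewords = {tuple(rs_codeword((c0, c1))) for c0 in range(q) for c1 in range(q)}

    def in_lattice(x):  # x is in the lattice iff x mod q is a codeword
        return tuple(xi % q for xi in x) in codewords

    c = rs_codeword((1, 3))
    print(in_lattice(c))                       # True: codewords lift directly
    print(in_lattice([xi + q for xi in c]))    # True: closed under shifts by q
    print(in_lattice([1, 0, 0, 0]))            # False: not a codeword mod q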
We revisit the complexity of the well-studied notion of Additively Separable Hedonic Games (ASHGs). Such games model a basic clustering or coalition formation scenario in which selfish agents are represented by the vertices of an edge-weighted digraph $G=(V,E)$, and the weight of an arc $uv$ denotes the utility $u$ gains by being in the same coalition as $v$. We focus on (arguably) the most basic stability question about such a game: given a graph, does a Nash stable solution exist and can we find it efficiently? We study the (parameterized) complexity of ASHG stability when the underlying graph has treewidth $t$ and maximum degree $\Delta$. The current best FPT algorithm for this case was claimed by Peters [AAAI 2016], with time complexity roughly $2^{O(\Delta^5 t)}$. We present an algorithm with parameter dependence $(\Delta t)^{O(\Delta t)}$, significantly improving upon the parameter dependence on $\Delta$ given by Peters, albeit with a slightly worse dependence on $t$. Our main result is that this slight performance deterioration with respect to $t$ is actually completely justified: we observe that the previously claimed algorithm is incorrect, and that in fact no algorithm can achieve dependence $t^{o(t)}$ for bounded-degree graphs, unless the ETH fails. This, together with corresponding bounds that we provide on the dependence on $\Delta$ and on the joint parameter, establishes that our algorithm is essentially optimal for both parameters, under the ETH. We then revisit the parameterization by treewidth alone and resolve a question also posed by Peters by showing that Nash Stability remains strongly NP-hard on stars under additive preferences. Nevertheless, we also discover an island of mild tractability: we show that Connected Nash Stability is solvable in pseudo-polynomial time for constant $t$, though with an XP dependence on $t$ which, as we establish, cannot be avoided.
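As a sketch of the stability notion being decided (my own formalization of the standard definitions, not the paper's algorithm): an agent's utility is the total weight of its outgoing arcs into its own coalition, and a partition is Nash stable if no agent strictly prefers moving to another coalition or going alone.

    def utility(agent, coalition, w):
        """Total weight of arcs from agent to its coalition mates."""
        return sum(w.get((agent, v), 0) for v in coalition if v != agent)

    def nash_stable(partition, agents, w):
        for u in agents:
            home = next(c for c in partition if u in c)
            # Best deviation: join another coalition, or 0 for going alone.
            best = max([utility(u, c, w) for c in partition if c is not home] + [0])
            if best > utility(u, home, w):
                return False
        return True

    w = {("a", "b"): 2, ("b", "a"): 1, ("c", "a"): 3}  # arc weights of the digraph
    print(nash_stable([{"a", "b"}, {"c"}], "abc", w))  # False: c wants to join {a,b}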
Recently, codes in the sum-rank metric have attracted attention due to several applications, e.g., in multishot network coding, distributed storage, and quantum-resistant cryptography. The sum-rank analogs of Reed-Solomon and Gabidulin codes are linearized Reed-Solomon codes. We show how to construct $h$-folded linearized Reed-Solomon (FLRS) codes and derive an interpolation-based decoding scheme that is capable of correcting sum-rank errors beyond the unique decoding radius. The presented decoder can be used for either list or probabilistic unique decoding and requires at most $\mathcal{O}(sn^2)$ operations in $\mathbb{F}_{q^m}$, where $s \leq h$ is an interpolation parameter and $n$ denotes the length of the unfolded code. We derive a heuristic upper bound on the failure probability of the probabilistic unique decoder and verify the results via Monte Carlo simulations.
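For concreteness, the folding operation itself is simple (the interpolation-based decoder is not sketched here): an $h$-folded code reads each length-$n$ codeword as $n/h$ columns of height $h$.

    def fold(codeword, h):
        """Group a length-n codeword into n/h columns of height h."""
        assert len(codeword) % h == 0
        return [tuple(codeword[i:i + h]) for i in range(0, len(codeword), h)]

    c = [0, 1, 2, 3, 4, 5]       # stand-ins for symbols of F_{q^m}
    print(fold(c, h=2))          # [(0, 1), (2, 3), (4, 5)]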
A triangle in a hypergraph $\mathcal{H}$ is a set of three distinct edges $e, f, g\in\mathcal{H}$ and three distinct vertices $u, v, w\in V(\mathcal{H})$ such that $\{u, v\}\subseteq e$, $\{v, w\}\subseteq f$, $\{w, u\}\subseteq g$ and $\{u, v, w\}\cap e\cap f\cap g=\emptyset$. Johansson proved in 1996 that $\chi(G)=\mathcal{O}(\Delta/\log\Delta)$ for any triangle-free graph $G$ with maximum degree $\Delta$. Cooper and Mubayi later generalized Johansson's theorem to all rank $3$ hypergraphs. In this paper we provide a common generalization of both these results to all hypergraphs, showing that if $\mathcal{H}$ is a rank $k$, triangle-free hypergraph, then its list chromatic number satisfies \[ \chi_{\ell}(\mathcal{H})\leq \mathcal{O}\left(\max_{2\leq \ell \leq k} \left\{\left( \frac{\Delta_{\ell}}{\log \Delta_{\ell}} \right)^{\frac{1}{\ell-1}} \right\}\right), \] where $\Delta_{\ell}$ is the maximum $\ell$-degree of $\mathcal{H}$. The result is sharp apart from the constant. Moreover, our result implies, generalizes, and improves several earlier results on the chromatic number and also the independence number of hypergraphs, while its proof is based on a different approach from prior work on hypergraphs (and therefore provides alternative proofs of those results). In particular, as an application, we establish a bound on the chromatic number of sparse hypergraphs in which each vertex is contained in few triangles, and thus extend results of Alon, Krivelevich and Sudakov, and of Cooper and Mubayi, from hypergraphs of rank 2 and 3, respectively, to all hypergraphs.
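The triangle definition above transcribes directly into a checker; the second call below shows a "tight" configuration that is not a triangle because a vertex of $\{u,v,w\}$ lies in all three edges.

    def is_triangle(e, f, g, u, v, w):
        e, f, g = set(e), set(f), set(g)
        return (len({u, v, w}) == 3 and e != f and f != g and e != g
                and {u, v} <= e and {v, w} <= f and {w, u} <= g
                and not ({u, v, w} & e & f & g))

    print(is_triangle({1, 2, 5}, {2, 3, 6}, {3, 1, 7}, 1, 2, 3))  # True
    print(is_triangle({1, 2, 3}, {2, 3, 4}, {3, 1, 5}, 1, 2, 3))  # False: 3 in e, f, g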
We introduce the model of growing graphs, a model of dynamic networks in which nodes can generate new nodes, thus expanding the network. This motivates the algorithmic problem of constructing a target graph $G$, starting from a single node. To properly model this, we assume that every node $u$ can generate at most one node $v$ in every round (or time slot). Every newly generated node $v$ can activate edges with other nodes only at the time of its birth, provided that these nodes are up to a small distance $d$ away from $v$. We show that the most interesting case is $d=2$. As we prove, in order to achieve the construction of a target graph $G$ in a small number of time slots, we might need to pay for auxiliary edges (the ``excess edges''), which will eventually be removed. This creates a trade-off between the number of time slots and the number of excess edges required to construct a target graph. In this paper, we deal with the following algorithmic question: given a target graph $G$ of $n$ nodes, can $G$ be constructed in at most $k$ time slots and with at most $\ell$ excess edges? On the positive side, we provide polynomial-time algorithms that efficiently construct fundamental graph families, such as lines, stars, trees, and planar graphs. In particular, we show that trees can be constructed in a poly-logarithmic number of slots with linearly many excess edges, while planar graphs can be constructed in a logarithmic number of slots with $O(n\log n)$ excess edges. We also give a polynomial-time algorithm for deciding whether a graph can be constructed in $\log n$ slots with $\ell = 0$ excess edges. On the negative side, we prove that the problem of determining the minimum number of slots required for a graph to be constructed with zero excess edges (i) is NP-complete and (ii) for any $\varepsilon>0$, cannot be approximated within a factor of $n^{1-\varepsilon}$, unless P=NP.
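The toy simulation below reflects one reading of the model (the paper's precise rules may differ): to grow a star with center $0$, every node generates a newborn each slot, the newborn activates an edge to the center, which lies within distance $d=2$ via its generator, and any birth connection to a non-center generator is counted as an excess edge to be removed later. Doubling gives $\lceil \log_2 n \rceil$ slots.

    def grow_star(n):
        nodes, excess, slots = [0], 0, 0
        while len(nodes) < n:
            slots += 1
            for u in list(nodes):            # every node generates once per slot
                if len(nodes) == n:
                    break
                v = len(nodes)
                nodes.append(v)              # v is born and activates edge {v, 0}
                if u != 0:
                    excess += 1              # birth edge {v, u} is an excess edge
        return slots, excess

    print(grow_star(8))  # (3, 4): ceil(log2 8) slots, 4 excess edges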
In this paper we study temporal design problems of undirected temporally connected graphs. The basic setting of these optimization problems is as follows: given an undirected graph $G$, what is the smallest number $|\lambda|$ of time-labels that we need to add to the edges of $G$ such that the resulting temporal graph $(G,\lambda)$ is temporally connected? As we prove, this basic problem, called MINIMUM LABELING, can be optimally solved in polynomial time, thus resolving an open question. The situation becomes more complicated, however, if we strengthen, or even slightly relax, the requirement of temporal connectivity of $(G,\lambda)$. One way to strengthen the temporal connectivity requirement is to upper-bound the allowed age (i.e., maximum label) of the obtained temporal graph $(G,\lambda)$. On the other hand, we can relax temporal connectivity by only requiring that there exists a temporal path between every pair of ``important'' vertices which lie in a subset $R\subseteq V$, which we call the terminals. This relaxed problem, called MINIMUM STEINER LABELING, resembles the problem STEINER TREE in static (i.e., non-temporal) graphs; however, as it turns out, STEINER TREE is not a special case of MINIMUM STEINER LABELING. We prove that MINIMUM STEINER LABELING is NP-hard and in FPT with respect to the number $|R|$ of terminals. Moreover, we prove that adding the age restriction makes the above problems strictly harder (unless P=NP or W[1]=FPT). More specifically, we prove that the age-restricted version of MINIMUM LABELING becomes NP-complete on undirected graphs, while the age-restricted version of MINIMUM STEINER LABELING becomes W[1]-hard with respect to the number $|R|$ of terminals.
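A minimal checker for the central object above, assuming a temporal path must traverse edges with strictly increasing time-labels (each edge of $(G,\lambda)$ may carry several labels):

    def temporally_connected(vertices, lam):
        def reaches_all(s):
            inf = float("inf")
            earliest = {v: inf for v in vertices}   # earliest arrival from s
            earliest[s] = 0
            changed = True
            while changed:                          # relax until a fixed point
                changed = False
                for (u, v), labels in lam.items():
                    for a, b in ((u, v), (v, u)):   # edges are undirected
                        t = min((l for l in labels if l > earliest[a]), default=None)
                        if t is not None and t < earliest[b]:
                            earliest[b], changed = t, True
            return all(earliest[v] < inf for v in vertices)
        return all(reaches_all(s) for s in vertices)

    print(temporally_connected("abc", {("a", "b"): {1}, ("b", "c"): {2}}))  # True
    print(temporally_connected("abc", {("a", "b"): {2}, ("b", "c"): {1}}))  # False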
We initiate a systematic study of algorithms that are both differentially private and run in sublinear time for several problems in which the goal is to estimate natural graph parameters. Our main result is a differentially private $(1+\rho)$-approximation algorithm for the problem of computing the average degree of a graph, for every $\rho>0$. The running time of the algorithm is roughly the same as that of its non-private version proposed by Goldreich and Ron (Sublinear Algorithms, 2005). We also obtain the first differentially private sublinear-time approximation algorithms for the maximum matching size and the minimum vertex cover size of a graph. An overarching technique we employ is the notion of coupled global sensitivity of randomized algorithms. Related variants of this notion of sensitivity have been used in the literature in ad hoc ways. Here we formalize the notion and develop it as a unifying framework for the privacy analysis of randomized approximation algorithms.
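A minimal sketch of the standard step such a sensitivity bound feeds into (not the paper's estimator): once a randomized algorithm's coupled global sensitivity is bounded by $s$, adding Laplace noise of scale $s/\epsilon$ to its output yields $\epsilon$-differential privacy.

    import math, random

    def laplace_mechanism(value, sensitivity, epsilon):
        """Perturb value with Laplace(sensitivity/epsilon) noise; a Laplace
        variate is the difference of two independent Exp(1) variates."""
        scale = sensitivity / epsilon
        e1 = -math.log(1.0 - random.random())
        e2 = -math.log(1.0 - random.random())
        return value + scale * (e1 - e2)

    estimate = 4.2  # hypothetical output of a sublinear average-degree estimator
    print(laplace_mechanism(estimate, sensitivity=1.0, epsilon=0.5))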
In 2017, Krenn reported that certain problems related to perfect matchings and colourings of graphs emerge from the study of the constructability of general quantum states using modern photonic technologies. He realized that proving that the \emph{weighted matching index} of a graph, a parameter defined in terms of perfect matchings and colourings of the graph, is at most 2 could lead to exciting insights on the potential of resources of quantum interference. Motivated by this, he conjectured that the weighted matching index of any graph is at most 2. The first result on this conjecture was by Bogdanov, who proved that the \emph{(unweighted) matching index} of graphs (non-isomorphic to $K_4$) is at most 2, thus classifying graphs non-isomorphic to $K_4$ into Type 0, Type 1 and Type 2. By definition, the weighted matching index of Type 0 graphs is 0. We give a structural characterization of Type 2 graphs, using which we settle Krenn's conjecture for this class. Using this characterization, we also provide a simple $O(|V||E|)$-time algorithm to find the unweighted matching index of any graph. In view of our work, Krenn's conjecture remains to be proved only for Type 1 graphs. We give upper bounds on the weighted matching index in terms of connectivity parameters for such graphs. Using these bounds, for a slightly simplified version, we settle Krenn's conjecture for the class of graphs with vertex connectivity at most 2 and the class of graphs with maximum degree at most 4. Krenn has been publicizing his conjecture in various ways since 2017; he has even declared a reward for its resolution. We hope that this article will popularize the problem among computer scientists.
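Under the definitions commonly used in this line of work (the abstract does not restate them), an edge colouring witnesses a large matching index when every perfect matching of the graph is monochromatic; the brute-force helper below checks this, and the $C_4$ example with opposite edges sharing a colour witnesses an unweighted matching index of at least 2. (It returns True vacuously when no perfect matching exists.)

    def perfect_matchings(vertices, edges):
        """Yield every perfect matching of the graph, as a frozenset of edges."""
        if not vertices:
            yield frozenset()
            return
        u = min(vertices)
        for e in edges:
            if u in e:
                rest = vertices - set(e)
                disjoint = [f for f in edges if not set(f) & set(e)]
                for m in perfect_matchings(rest, disjoint):
                    yield m | {e}

    def all_matchings_monochromatic(vertices, colouring):
        return all(len({colouring[e] for e in m}) == 1
                   for m in perfect_matchings(set(vertices), list(colouring)))

    # C4 with opposite edges sharing a colour: both perfect matchings are
    # monochromatic.
    col = {(1, 2): "r", (3, 4): "r", (2, 3): "b", (4, 1): "b"}
    print(all_matchings_monochromatic([1, 2, 3, 4], col))  # True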