In the \textsc{Waypoint Routing Problem} one is given an undirected capacitated and weighted graph $G$, a source-destination pair $s,t\in V(G)$, and a set $W\subseteq V(G)$ of \emph{waypoints}. The task is to find a walk that starts at the source vertex $s$, visits all waypoints in any order, ends at the destination vertex $t$, respects edge capacities (that is, traverses each edge at most as many times as its capacity allows), and minimizes the cost, computed as the sum of the costs of the traversed edges with multiplicities. We study the problem on graphs of bounded treewidth and present a new algorithm running in $2^{O(\mathrm{tw})}\cdot n$ time, significantly improving upon the previously known algorithms. We also show that this running time is optimal for the problem under the Exponential Time Hypothesis.
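To make the feasibility conditions concrete, here is a minimal Python sketch (the representation and names are ours, purely illustrative) that checks whether a candidate walk is feasible and, if so, computes its cost:

```python
from collections import Counter

def walk_cost(walk, capacity, cost, s, t, waypoints):
    """Check a candidate walk for the Waypoint Routing Problem and
    return its total cost, or None if the walk is infeasible.

    walk     -- list of vertices [v0, v1, ..., vk]
    capacity -- dict mapping frozenset({u, v}) -> edge capacity
    cost     -- dict mapping frozenset({u, v}) -> edge cost
    """
    if not walk or walk[0] != s or walk[-1] != t:
        return None                      # must start at s and end at t
    if not waypoints <= set(walk):
        return None                      # every waypoint must be visited
    uses = Counter(frozenset(e) for e in zip(walk, walk[1:]))
    if any(uses[e] > capacity.get(e, 0) for e in uses):
        return None                      # each edge used at most capacity-many times
    # the cost of an edge is paid once per traversal (with multiplicity)
    return sum(uses[e] * cost[e] for e in uses)

# toy instance: path s-w-t, with w the only waypoint
cap = {frozenset({'s', 'w'}): 2, frozenset({'w', 't'}): 1}
cst = {frozenset({'s', 'w'}): 3, frozenset({'w', 't'}): 5}
print(walk_cost(['s', 'w', 't'], cap, cst, 's', 't', {'w'}))  # 8
```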
In the density estimation model, we investigate the problem of constructing adaptive honest confidence sets with radius measured in the Wasserstein distance $W_p$, $p\geq1$, for densities with unknown regularity measured on a Besov scale. As sampling domains, we focus on the $d$-dimensional torus $\mathbb{T}^d$, in which case $1\leq p\leq 2$, and on $\mathbb{R}^d$, for which $p=1$. We identify necessary and sufficient conditions for the existence of adaptive confidence sets with diameters of the order of the regularity-dependent $W_p$-minimax estimation rate. Interestingly, the possibility of such adaptation of the diameter depends on the dimension of the underlying space. In low dimensions, $d\leq 4$, adaptation to any regularity is possible. In higher dimensions, adaptation is possible if and only if the underlying regularities belong to some interval of width at least $d/(d-4)$. This contrasts with the usual $L_p$-theory, where, independently of the dimension, adaptation requires the regularities to lie in a small fixed-width window. For configurations in which these adaptive sets exist, we explicitly construct confidence regions via the method of risk estimation, centred at adaptive estimators. These are the first results in a statistical approach to adaptive uncertainty quantification with Wasserstein distances. Our analysis and methods extend more broadly to weak losses such as Sobolev norm distances with negative smoothness indices.
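Schematically, and in our notation rather than the paper's, a confidence set obtained by risk estimation is centred at an adaptive estimator $\hat f_n$ and calibrated by an estimator $\hat R_n$ of its $W_p$-risk:

```latex
% schematic form of a risk-estimation confidence set (our notation)
C_n \;=\; \bigl\{\, f : W_p\bigl(f,\hat f_n\bigr) \,\le\, \hat R_n + z_\alpha\,\sigma_n \,\bigr\},
\qquad
\inf_{f_0\in\mathcal{F}} \Pr_{f_0}\!\bigl(f_0\in C_n\bigr) \,\ge\, 1-\alpha,
```

where $z_\alpha\,\sigma_n$ allows for the fluctuations of $\hat R_n$, the infimum (honesty) runs over the union of Besov bodies under consideration, and adaptivity asks that the diameter of $C_n$ match the regularity-dependent minimax rate.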
Softwarization and virtualization are key concepts for emerging industries that require ultra-low latency. This is only possible if computing resources, traditionally centralized at the core of communication networks, are moved closer to the user, to the network edge. However, the realization of Edge Computing (EC) in the sixth generation (6G) of mobile networks requires efficient resource allocation mechanisms for the placement of Virtual Network Functions (VNFs). Machine learning (ML) methods, and more specifically Reinforcement Learning (RL), are a promising approach to solve this problem. The main contributions of this work are twofold: first, we obtain a theoretical performance bound for VNF placement in EC-enabled 6G networks by formulating the problem mathematically as a finite Markov Decision Process (MDP) and solving it with a dynamic programming method called Policy Iteration (PI). Second, we develop a practical solution to the problem using RL, treating it with Q-Learning, which considers both computational and communication resources when placing VNFs in the network. The simulation results under different settings of the system parameters show that the performance of the Q-Learning approach is close to that of the optimal PI algorithm (without its restrictive assumptions on service statistics). This is particularly interesting when EC resources are scarce and efficient management of these resources is required.
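At its core, the second contribution relies on the standard tabular Q-Learning update; the sketch below is a generic rendition (state and action encodings, reward, and hyperparameters are placeholders, not the paper's model):

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1    # learning rate, discount, exploration
Q = defaultdict(float)               # Q[(state, action)], implicitly zero

def choose_action(state, actions):
    """Epsilon-greedy choice of a placement action (e.g. which edge node
    hosts the requested VNF)."""
    if random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_actions):
    """Standard Q-Learning step after observing the outcome of a placement;
    the reward model (latency, rejection penalty, ...) is a design choice."""
    best_next = max(Q[(next_state, a)] for a in next_actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

In a VNF-placement reading, a state would encode the current resource occupancy together with the pending request, an action the node chosen to host the VNF, and the reward the resulting quality of the placement.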
The metric dimension $\dim(G)$ of a graph $G$ is the minimum cardinality of a subset $S$ of vertices of $G$ such that each vertex of $G$ is uniquely determined by its distances to $S$. It is well known that the metric dimension of a graph can be drastically increased by the modification of a single edge. Our main result shows that the increase in metric dimension caused by edge additions can be amortized: if the graph consists of a spanning tree $T$ plus $c$ edges, then the metric dimension of $G$ is at most the metric dimension of $T$ plus $6c$. We then use this result to prove a weakening of a conjecture of Eroh et al. The zero forcing number $Z(G)$ of $G$ is the minimum cardinality of a subset $S$ of black vertices (all other vertices being colored white) such that all vertices are turned black after finitely many applications of the following rule: a white vertex is turned black if it is the only white neighbor of a black vertex. Eroh et al. conjectured that, for any graph $G$, $\dim(G)\leq Z(G) + c(G)$, where $c(G)$ is the number of edges that have to be removed from $G$ to obtain a forest. They proved the conjecture for trees and unicyclic graphs. We prove a weaker version: $\dim(G)\leq Z(G)+6c(G)$ holds for any graph. We also prove that the conjecture holds for graphs with edge-disjoint cycles, substantially generalizing the unicyclic result of Eroh et al.
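The forcing rule is straightforward to simulate; a small Python sketch (ours, for illustration) computes the closure of a black set under the rule:

```python
def zero_forcing_closure(adj, black):
    """Repeatedly apply the rule: a black vertex with exactly one white
    neighbour forces that neighbour to turn black. Returns the closure."""
    black = set(black)
    changed = True
    while changed:
        changed = False
        for u in list(black):
            white = [v for v in adj[u] if v not in black]
            if len(white) == 1:
                black.add(white[0])
                changed = True
    return black

# the 4-cycle 0-1-2-3-0: one black vertex forces nothing,
# two adjacent black vertices force everything, so Z(C_4) = 2
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(zero_forcing_closure(adj, {0}))      # {0}
print(zero_forcing_closure(adj, {0, 1}))   # {0, 1, 2, 3}
```

A set $S$ is a zero forcing set exactly when its closure is all of $V(G)$.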
In the localization game on a graph, the goal is to find a fixed but unknown target node $v^\star$ with as few distance queries as possible. In the $j$-th step of the game, the player queries a single node $v_j$ and receives, as an answer, the distance between $v_j$ and $v^\star$. The sequential metric dimension (SMD) is the minimal number of queries that the player needs to identify the target with absolute certainty, no matter where the target is. The term SMD originates from the related notion of metric dimension (MD), which can be defined in the same way as the SMD, except that the player's queries are non-adaptive. In this work, we extend the results of \cite{bollobas2012metric} on the MD of Erd\H{o}s-R\'enyi graphs to the SMD. We find that, in connected Erd\H{o}s-R\'enyi graphs, the MD and the SMD are a constant factor apart. For the lower bound, we present a clean analysis combining tools developed for the MD with a novel coupling argument. For the upper bound, we show that a strategy that greedily minimizes the number of candidate targets in each step uses an asymptotically optimal number of queries in Erd\H{o}s-R\'enyi graphs. Connections with source localization, binary search on graphs, and the birthday problem are discussed.
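The greedy strategy from the upper bound admits a compact brute-force rendition; the sketch below (our code, for unweighted graphs given as adjacency dicts) picks the query that minimises the worst-case number of remaining candidates:

```python
from collections import deque

def bfs_dist(adj, src):
    """Single-source distances in an unweighted connected graph."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def greedy_query(adj, candidates):
    """Query the node minimising the worst-case number of candidate targets
    still consistent with the answer -- the greedy rule from the upper bound."""
    def worst_case(q):
        d = bfs_dist(adj, q)
        counts = {}
        for c in candidates:
            counts[d[c]] = counts.get(d[c], 0) + 1
        return max(counts.values())
    return min(adj, key=worst_case)
```

Playing the game then alternates `greedy_query` with filtering `candidates` down to those at the reported distance, until a single candidate remains.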
Given an $n$-vertex planar embedded digraph $G$ with non-negative edge weights and a face $f$ of $G$, Klein presented a data structure with $O(n\log n)$ space and preprocessing time which can answer any query $(u,v)$ for the shortest path distance in $G$ from $u$ to $v$ or from $v$ to $u$ in $O(\log n)$ time, provided $u$ is on $f$. This data structure is a key tool in a number of state-of-the-art algorithms and data structures for planar graphs. Klein's data structure relies on dynamic trees and the persistence technique, as well as a highly non-trivial interaction between primal shortest path trees and their duals. We present an alternative data structure whose construction follows a completely different and, in our opinion, very simple divide-and-conquer approach that relies solely on Single-Source Shortest Path computations and contractions in the primal graph. Our space and preprocessing time bound is $O(n\log |f|)$ and our query time is $O(\log |f|)$, an improvement over Klein's data structure when $f$ has small size.
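For contrast, the naive alternative simply runs a Dijkstra computation from (and, on the reversed digraph, to) every vertex of $f$, paying $\Theta(|f|)$ SSSP runs in preprocessing and $\Theta(|f|\cdot n)$ space for $O(1)$-time queries; a sketch of this baseline (not the divide-and-conquer construction itself):

```python
import heapq

def dijkstra(adj, src):
    """adj: dict u -> list of (v, w) directed arcs; returns distances from src."""
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def naive_face_oracle(adj, radj, face):
    """Precompute dist(u, .) and dist(., u) for every u on the face:
    the Theta(|f|)-SSSP baseline that the O(n log |f|) structure improves on.
    radj is adj with all arcs reversed."""
    out_d = {u: dijkstra(adj, u) for u in face}    # distances from u
    in_d  = {u: dijkstra(radj, u) for u in face}   # distances to u
    return lambda u, v: (out_d[u].get(v), in_d[u].get(v))
```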
We revisit Min-Mean-Cycle, the classical problem of finding a cycle in a weighted directed graph with minimum mean weight. Despite an extensive algorithmic literature, previous work falls short of a near-linear runtime in the number of edges $m$. We propose an approximation algorithm that, for graphs with polylogarithmic diameter, achieves a near-linear runtime. In particular, this is the first algorithm whose runtime scales in the number of vertices $n$ as $\tilde{O}(n^2)$ for the complete graph. Moreover, unconditionally on the diameter, the algorithm uses only $O(n)$ memory beyond reading the input, making it "memory-optimal". Our approach is based on solving a linear programming relaxation using entropic regularization, which reduces the problem to Matrix Balancing -- \`a la the popular reduction of Optimal Transport to Matrix Scaling. The algorithm is practical and simple to implement.
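For calibration, the classical exact baseline is Karp's $O(nm)$ dynamic program, sketched below (this is the textbook algorithm, not the paper's approximation scheme):

```python
def min_mean_cycle(n, edges):
    """Karp's classical O(nm) exact algorithm for Min-Mean-Cycle.
    edges: list of (u, v, w) directed arcs on vertices 0..n-1;
    assumes every vertex is reachable from vertex 0."""
    INF = float('inf')
    # d[k][v] = minimum weight of a walk with exactly k edges from 0 to v
    d = [[INF] * n for _ in range(n + 1)]
    d[0][0] = 0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] < INF:
                d[k][v] = min(d[k][v], d[k - 1][u] + w)
    # min mean = min over v of max over k of (d[n][v] - d[k][v]) / (n - k)
    best = INF
    for v in range(n):
        if d[n][v] < INF:
            best = min(best, max((d[n][v] - d[k][v]) / (n - k)
                                 for k in range(n) if d[k][v] < INF))
    return best   # INF if the reachable subgraph is acyclic

print(min_mean_cycle(3, [(0, 1, 1), (1, 2, 3), (2, 0, 2)]))  # 2.0, the 3-cycle's mean
```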
For a graph class $\mathcal{C}$, the $\mathcal{C}$-Edge-Deletion problem asks, for a given graph $G$, to delete the minimum number of edges from $G$ in order to obtain a graph in $\mathcal{C}$. We study the $\mathcal{C}$-Edge-Deletion problem where $\mathcal{C}$ is the class of permutation graphs, interval graphs, or another related graph class. It follows from Courcelle's Theorem that these problems are fixed-parameter tractable when parameterized by treewidth. In this paper, we present concrete FPT algorithms for these problems. By giving explicit algorithms and analyzing them in detail, we obtain algorithms that are significantly faster than those obtained via Courcelle's Theorem.
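To fix the problem definition, $\mathcal{C}$-Edge-Deletion with a membership oracle admits a trivial exponential-time rendition (the predicate `in_class` is a placeholder for a recognition algorithm of the class at hand; nothing like the FPT algorithms of the paper):

```python
from itertools import combinations

def c_edge_deletion(edges, in_class):
    """Smallest set of edges whose removal puts the graph in the class
    recognised by in_class(remaining_edges). Exponential brute force."""
    edges = list(edges)
    for k in range(len(edges) + 1):                    # try 0, 1, 2, ... deletions
        for removed in combinations(range(len(edges)), k):
            rest = [e for i, e in enumerate(edges) if i not in removed]
            if in_class(rest):
                return [edges[i] for i in removed]
    return None

# e.g. deletion distance to triangle-free graphs; edges as sorted pairs (u, v)
tri_free = lambda es: not any({(a, b), (b, c), (a, c)} <= set(es)
                              for a, b, c in combinations(range(3), 3))
print(c_edge_deletion([(0, 1), (1, 2), (0, 2)], tri_free))  # one deleted edge suffices
```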
The induced odd cycle packing number $\mathrm{iocp}(G)$ of a graph $G$ is the maximum integer $k$ such that $G$ contains an induced subgraph consisting of $k$ pairwise vertex-disjoint odd cycles. Motivated by applications to geometric graphs, Bonamy et al.~\cite{indoc} proved that graphs of bounded induced odd cycle packing number, bounded VC dimension, and independence number linear in the number of vertices admit a randomized EPTAS for the independence number. We show that the assumption of bounded VC dimension is not necessary, exhibiting a randomized algorithm that, for any integers $k\ge 0$ and $t\ge 1$ and any $n$-vertex graph $G$ of induced odd cycle packing number at most $k$, returns in time $O_{k,t}(n^{k+4})$ an independent set of $G$ whose size is at least $\alpha(G)-n/t$ with high probability. In addition, we present $\chi$-boundedness results for graphs with bounded induced odd cycle packing number and use them to design a QPTAS for the independence number assuming only bounded induced odd cycle packing number.
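For small graphs the parameter is easy to compute directly from its definition; an exponential brute-force sketch (illustrative only, unrelated to the paper's algorithms):

```python
from itertools import combinations

def iocp(n, adj):
    """Induced odd cycle packing number of a tiny graph by brute force:
    the largest k such that some vertex subset induces k disjoint odd cycles.
    adj: dict v -> set of neighbours, vertices 0..n-1."""
    best = 0
    for r in range(3, n + 1):
        for S in combinations(range(n), r):
            S = set(S)
            # the induced subgraph must be 2-regular (a disjoint union of cycles)
            if any(len(adj[v] & S) != 2 for v in S):
                continue
            seen, comps, ok = set(), 0, True
            for v in S:                       # walk each cycle, check odd length
                if v in seen:
                    continue
                length, prev, cur = 0, None, v
                while True:
                    seen.add(cur)
                    length += 1
                    nxt = next(u for u in adj[cur] & S if u != prev)
                    prev, cur = cur, nxt
                    if cur == v:
                        break
                if length % 2 == 0:
                    ok = False
                    break
                comps += 1
            if ok:
                best = max(best, comps)
    return best

# K4 contains an induced triangle but no two disjoint odd cycles
print(iocp(4, {v: {u for u in range(4) if u != v} for v in range(4)}))  # 1
```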
Graphs, which describe pairwise relations between objects, are essential representations for many real-world data such as social networks. In recent years, graph neural networks, which extend neural network models to graph data, have attracted increasing attention. They have been applied to advance many graph-related tasks, such as reasoning about the dynamics of physical systems, graph classification, and node classification. Most existing graph neural network models have been designed for static graphs, while many real-world graphs are inherently dynamic. For example, social networks naturally evolve as new users join and new relations are created. Current graph neural network models cannot utilize this dynamic information. However, dynamic information has been shown to enhance the performance of many graph analytical tasks, such as community detection and link prediction. Hence, it is necessary to design dedicated graph neural networks for dynamic graphs. In this paper, we propose DGNN, a new {\bf D}ynamic {\bf G}raph {\bf N}eural {\bf N}etwork model, which can model dynamic information as the graph evolves. In particular, the proposed framework continually updates node information by coherently capturing the sequential information of edges, the time intervals between edges, and information propagation. Experimental results on various dynamic graphs demonstrate the effectiveness of the proposed framework.
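The following numpy toy illustrates the flavour of such updates: on each timestamped edge $(u,v,t)$, endpoint states are refreshed from the interacting neighbour and damped by the time elapsed since the node's last event. The gating and the (untrained) weights are a stand-in of ours, not the DGNN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                          # embedding dimension (placeholder)
W_self = rng.normal(size=(D, D)) / np.sqrt(D)  # untrained weights, for shape only
W_msg  = rng.normal(size=(D, D)) / np.sqrt(D)
emb, last_t = {}, {}                           # per-node state and last-event time

def observe_edge(u, v, t):
    """On a new timestamped edge (u, v, t), refresh both endpoint embeddings
    from the interacting neighbour, damping each node's own state by the time
    elapsed since its last event (a toy stand-in for sequential/interval
    modelling; the real DGNN update is learned, not this fixed gate)."""
    for a in (u, v):
        emb.setdefault(a, rng.normal(size=D))
        last_t.setdefault(a, t)
    new = {}
    for a, b in ((u, v), (v, u)):
        decay = np.exp(-(t - last_t[a]))       # older state contributes less
        new[a] = np.tanh(decay * (W_self @ emb[a]) + W_msg @ emb[b])
    emb.update(new)
    last_t[u] = last_t[v] = t

for u, v, t in [(0, 1, 0.0), (1, 2, 0.5), (0, 2, 2.0)]:
    observe_edge(u, v, t)
```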
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
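The smoothing device behind DRS is classical Gaussian smoothing, $f_\gamma(x)=\mathbb{E}\,f(x+\gamma Z)$ with $Z\sim\mathcal{N}(0,I_d)$; below is a minimal Monte-Carlo sketch of the associated gradient estimator (our illustration of the generic technique, not the full decentralized algorithm):

```python
import numpy as np

def smoothed_grad(f, x, gamma=0.1, samples=5000, rng=None):
    """Monte-Carlo gradient of the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma*Z)], Z ~ N(0, I), via the identity
    grad f_gamma(x) = E[(f(x + gamma*Z) - f(x)) * Z] / gamma
    (subtracting f(x) is a standard variance-reducing control variate)."""
    rng = rng or np.random.default_rng(0)
    Z = rng.standard_normal((samples, x.size))
    vals = np.array([f(x + gamma * z) - f(x) for z in Z])
    return (vals[:, None] * Z).mean(axis=0) / gamma

# a non-smooth convex test function: f(x) = ||x||_1
f = lambda x: np.abs(x).sum()
print(smoothed_grad(f, np.array([1.0, -2.0, 0.0])))  # ~ [1, -1, 0]
```

The smoothed objective is differentiable even when $f$ is not, which is what lets DRS trade a controlled bias (driven by $\gamma$) for gradient-based progress.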