We show a flow-augmentation algorithm in directed graphs: there exists a polynomial-time algorithm that, given a directed graph $G$, two vertices $s,t \in V(G)$, and an integer $k$, randomly adds to $G$ a number of arcs such that, for every minimal $st$-cut $Z$ in $G$ of size at most $k$, with probability $2^{-\mathrm{poly}(k)}$ the set $Z$ becomes a minimum $st$-cut in the resulting graph. The directed flow-augmentation tool allows us to prove fixed-parameter tractability of a number of problems, parameterized by the cardinality of the deletion set, whose parameterized complexity status had been repeatedly posed as an open problem: (1) Chain SAT, defined by Chitnis, Egri, and Marx [ESA'13, Algorithmica'17], and (2) a number of weighted variants of classic directed cut problems, such as Weighted $st$-Cut, Weighted Directed Feedback Vertex Set, and Weighted Almost 2-SAT. By proving that Chain SAT is FPT, we confirm a conjecture of Chitnis, Egri, and Marx that, for any graph $H$, if the List $H$-Coloring problem is polynomial-time solvable, then the corresponding vertex-deletion problem is fixed-parameter tractable.
We study two "above guarantee" versions of the classical Longest Path problem on undirected and directed graphs and obtain the following results. In the first variant of Longest Path that we study, called Longest Detour, the task is to decide whether a graph has an (s,t)-path of length at least dist_G(s,t)+k (where dist_G(s,t) denotes the length of a shortest path from s to t). Bez\'akov\'a et al. proved that on undirected graphs the problem is fixed-parameter tractable (FPT) by providing an algorithm of running time 2^{O (k)} n. Further, they left the parameterized complexity of the problem on directed graphs open. Our first main result establishes a connection between Longest Detour on directed graphs and 3-Disjoint Paths on directed graphs. Using these new insights, we design a 2^{O(k)} n^{O(1)} time algorithm for the problem on directed planar graphs. Further, the new approach yields a significantly faster FPT algorithm on undirected graphs. In the second variant of Longest Path, namely Longest Path Above Diameter, the task is to decide whether the graph has a path of length at least diam(G)+k (diam(G) denotes the length of a longest shortest path in a graph G). We obtain dichotomy results about Longest Path Above Diameter on undirected and directed graphs. For (un)directed graphs, Longest Path Above Diameter is NP-complete even for k=1. However, if the input undirected graph is 2-connected, then the problem is FPT. On the other hand, for 2-connected directed graphs, we show that Longest Path Above Diameter is solvable in polynomial time for each k\in{1,\dots, 4} and is NP-complete for every k\geq 5. The parameterized complexity of Longest Path Above Diameter on general directed graphs remains an interesting open problem.
For a graph class ${\cal H}$, the graph parameters elimination distance to ${\cal H}$ (denoted by ${\bf ed}_{\cal H}$) [Bulian and Dawar, Algorithmica, 2016], and ${\cal H}$-treewidth (denoted by ${\bf tw}_{\cal H}$) [Eiben et al. JCSS, 2021] aim to minimize the treedepth and treewidth, respectively, of the "torso" of the graph induced on a modulator to the graph class ${\cal H}$. Here, the torso of a vertex set $S$ in a graph $G$ is the graph with vertex set $S$ and an edge between two vertices $u, v \in S$ if there is a path between $u$ and $v$ in $G$ whose internal vertices all lie outside $S$. In this paper, we show that from the perspective of (non-uniform) fixed-parameter tractability (FPT), the three parameters ${\bf ed}_{\cal H}$, ${\bf tw}_{\cal H}$, and the vertex-deletion distance to ${\cal H}$ give equally powerful parameterizations for every hereditary graph class ${\cal H}$ that satisfies mild additional conditions. In fact, we show that for every hereditary graph class ${\cal H}$ satisfying mild additional conditions, with the exception of ${\bf tw}_{\cal H}$ parameterized by ${\bf ed}_{\cal H}$, for every pair of these parameters, computing one parameterized by itself or any of the others is FPT-equivalent to the standard vertex-deletion (to ${\cal H}$) problem. As an example, we prove that an FPT algorithm for the vertex-deletion problem implies a non-uniform FPT algorithm for computing ${\bf ed}_{\cal H}$ and ${\bf tw}_{\cal H}$. Since non-uniform FPT algorithms are somewhat unsatisfactory, we further prove that if ${\cal H}$ is hereditary, union-closed, CMSO-definable, and (a) the canonical equivalence relation (or any refinement thereof) for membership in the class can be efficiently computed, or (b) the class admits a "strong irrelevant vertex rule", then there exists a uniform FPT algorithm for ${\bf ed}_{\cal H}$.
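Since the torso operation is central to both parameters, here is a small illustrative sketch (for undirected graphs, using networkx; the function name is ours) that computes the torso of a vertex set directly from the definition:

```python
import networkx as nx
from itertools import combinations

def torso(G, S):
    """Torso of vertex set S in undirected graph G: vertex set S, with an
    edge u-v whenever some u-v path has all internal vertices outside S.
    Equivalently: u-v is a direct edge, or u and v both have neighbors in
    a common connected component of G - S."""
    S = set(S)
    T = nx.Graph()
    T.add_nodes_from(S)
    H = G.subgraph(set(G) - S)  # the part of G outside S
    comp_id = {v: i for i, c in enumerate(nx.connected_components(H)) for v in c}
    for u, v in combinations(S, 2):
        if G.has_edge(u, v) or (
            {comp_id[w] for w in G[u] if w not in S}
            & {comp_id[w] for w in G[v] if w not in S}
        ):
            T.add_edge(u, v)
    return T
```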
We study reward design strategies for incentivizing a reinforcement learning agent to adopt a policy from a set of admissible policies. The goal of the reward designer is to modify the underlying reward function cost-efficiently while ensuring that any approximately optimal deterministic policy under the new reward function is admissible and performs well under the original reward function. This problem can be viewed as a dual to the problem of optimal reward poisoning attacks: instead of forcing an agent to adopt a specific policy, the reward designer incentivizes an agent to avoid taking actions that are inadmissible in certain states. Perhaps surprisingly, and in contrast to the problem of optimal reward poisoning attacks, we first show that the reward design problem for admissible policy teaching is computationally challenging, and it is NP-hard to find an approximately optimal reward modification. We then proceed by formulating a surrogate problem whose optimal solution approximates the optimal solution to the reward design problem in our setting, but is more amenable to optimization techniques and analysis. For this surrogate problem, we present characterization results that provide bounds on the value of the optimal solution. Finally, we design a local search algorithm to solve the surrogate problem and showcase its utility using simulation-based experiments.
We consider applications that involve solving a large number of instances of projecting points onto polytopes. Guided by theoretical and empirical analysis, we show that when these instances follow certain structures, a large majority of the projections lie on vertices of the polytopes. To perform these projections efficiently, we derive a vertex-oriented incremental algorithm that projects a point onto an arbitrary polytope, and we give specialized algorithms for projection onto the simplex and onto polytopes obtained by cutting the unit box with planes. Such settings are especially useful in web-scale applications such as optimal matching or allocation problems. Several such problems in internet marketplaces (e-commerce, ride-sharing, food delivery, professional services, advertising, etc.) can be formulated as Linear Programs (LPs) with polytope constraints that require a projection step in the overall optimization process. We show that in very recent work of this kind, the polytope projection is the most expensive step, and that our efficient projection algorithms yield substantial performance improvements.
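As one concrete instance, here is a minimal NumPy sketch of the classical sort-based Euclidean projection onto the probability simplex (a standard textbook algorithm, not necessarily the exact specialized routine of this work; the function name is ours):

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1},
    via the classical sort-and-threshold method: find the largest
    prefix of the sorted coordinates that stays positive after a
    uniform shift, then clip the rest to zero."""
    n = v.shape[0]
    u = np.sort(v)[::-1]                       # coordinates, descending
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, n + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)       # shift enforcing sum(x) = 1
    return np.maximum(v + theta, 0.0)

# Example: project_to_simplex(np.array([2.0, 0.0])) -> array([1., 0.]),
# a projection landing on a vertex of the simplex.
```

Note that the example output lands on a vertex, consistent with the observation that, under the structures studied here, most projections do.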
We present an index theory of equilibria for extensive form games. This requires developing an index theory for games where the strategy sets of players are general polytopes and their payoff functions are multiaffine in the product of these polytopes. Such polytopes arise from identifying (topologically) equivalent mixed strategies of a normal form game.
We consider zero-sum games on infinite graphs, with objectives specified as sets of infinite words over some alphabet of colors. A well-studied class of objectives is that of $\omega$-regular objectives, due to its relation to many natural problems in theoretical computer science. We focus on the strategy complexity question: given an objective, how much memory does each player require to play as well as possible? A classical result is that finite-memory strategies suffice for both players when the objective is $\omega$-regular. We show a converse of that statement: when both players can play optimally with a chromatic finite-memory structure (i.e., one whose updates can only observe colors) in all infinite game graphs, then the objective must be $\omega$-regular. This provides a game-theoretic characterization of $\omega$-regular objectives, and the characterization can help in obtaining memory bounds. Moreover, a by-product of our characterization is a new one-to-two-player lift: to show that chromatic finite-memory structures suffice to play optimally in two-player games on infinite graphs, it suffices to show it in the simpler case of one-player games on infinite graphs. We illustrate our results with the family of discounted-sum objectives, for which $\omega$-regularity depends on the value of some parameters.
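To make the notion concrete, here is a minimal sketch of a chromatic finite-memory structure as a deterministic automaton over colors (our own illustrative encoding; the key point is that the update function never sees a vertex of the game graph, only colors):

```python
from dataclasses import dataclass
from typing import Callable, Hashable, Iterable

@dataclass
class ChromaticMemory:
    """A chromatic finite-memory structure: a deterministic finite
    automaton over the alphabet of colors. Its update reads only the
    color of the edge just traversed, never the underlying vertex."""
    initial: Hashable
    update: Callable[[Hashable, Hashable], Hashable]  # (state, color) -> state

    def run(self, colors: Iterable[Hashable]) -> Hashable:
        state = self.initial
        for c in colors:
            state = self.update(state, c)
        return state

# Example: one bit of memory tracking the parity of occurrences of color 'a'.
parity = ChromaticMemory(0, lambda q, c: q ^ (c == "a"))
assert parity.run("abab") == 0
```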
Let $\kappa(s,t)$ denote the maximum number of internally disjoint $st$-paths in an undirected graph $G$. We consider designing a data structure that includes a list of cuts and answers the following query: given $s,t \in V$, determine whether $\kappa(s,t) \leq k$, and if so, return a pointer to an $st$-cut of size $\leq k$ (or to a minimum $st$-cut) in the list. A trivial data structure that includes a list of $n(n-1)/2$ cuts and requires $\Theta(kn^2)$ space can answer each query in $O(1)$ time. We obtain the following results. In the case when $G$ is $k$-connected, we show that $n$ cuts suffice, and that these cuts can be partitioned into $2k+1$ laminar families. Thus, using $O(kn)$ space, we can answer each min-cut query in $O(1)$ time, slightly improving and substantially simplifying a recent result of Pettie and Yin. We then extend this data structure to subset $k$-connectivity. In the general case we show that $(2k+1)n$ cuts suffice to return an $st$-cut of size $\leq k$, and that a list of size $k(k+2)n$ contains a minimum $st$-cut for every $s,t \in V$. Combining our subset $k$-connectivity data structure with the data structure of Hsu and Lu for checking $k$-connectivity, we give an $O(k^2 n)$-space data structure that returns an $st$-cut of size $\leq k$ in $O(\log k)$ time, while $O(k^3 n)$ space enables us to return a minimum $st$-cut.
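For intuition, here is a hedged sketch of the trivial baseline described above, built with networkx vertex cuts (all names are illustrative; the structures in this work are far more compact):

```python
import networkx as nx
from itertools import combinations

def build_cut_oracle(G, k):
    """Trivial O(k n^2)-space oracle: precompute, for every non-adjacent
    pair s,t with kappa(s,t) <= k, a pointer into a shared list of
    minimum vertex st-cuts. Queries then take O(1) time."""
    cuts, table = [], {}
    for s, t in combinations(G.nodes, 2):
        if G.has_edge(s, t):
            continue  # adjacent pairs have no vertex st-cut
        cut = nx.minimum_node_cut(G, s, t)
        if len(cut) <= k:
            cuts.append(frozenset(cut))
            table[frozenset((s, t))] = len(cuts) - 1
    return cuts, table

def query(cuts, table, s, t):
    """Return a minimum st-cut of size <= k, or None if kappa(s,t) > k."""
    idx = table.get(frozenset((s, t)))
    return None if idx is None else cuts[idx]
```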
For a set of points $P \subseteq \mathbb{R}^2$ and a family of regions $\mathcal{F}$, a \emph{local $t$-spanner} of $P$ is a sparse graph $G$ over $P$ such that, for any region $\mathsf{r} \in \mathcal{F}$, the subgraph restricted to $\mathsf{r}$, denoted by $G \cap \mathsf{r} = G_{P \cap \mathsf{r}}$, is a $t$-spanner for all the points of $\mathsf{r} \cap P$. We present algorithms for the construction of local spanners with respect to several families of regions, such as homothets of a convex region. Unfortunately, the number of edges in the resulting graph depends logarithmically on the spread of the input point set. We prove that this dependency cannot be removed, thus settling an open problem raised by Abam and Borouny. We also show improved constructions (with no dependency on the spread) of local spanners for fat triangles and regular $k$-gons. In particular, this improves over the known construction for axis-parallel squares. We also study a somewhat weaker notion of local spanner, where one is allowed to shrink the region slightly. Any spanner is a weak local spanner if the shrinking is proportional to the diameter. Surprisingly, we show a near-linear-size construction of a weak local spanner for axis-parallel rectangles, where the shrinkage is \emph{multiplicative}.
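To make the definition concrete, here is a naive checker for the local-spanner condition (our own illustrative sketch: regions are given as membership predicates, and edges of $G$ are assumed to carry their Euclidean lengths as weights):

```python
import numpy as np
import networkx as nx
from itertools import combinations

def is_local_spanner(points, G, regions, t):
    """Check the local t-spanner condition from the definition: for every
    region (a membership predicate over points), the subgraph of G induced
    on the points inside the region must be a t-spanner of those points."""
    for contains in regions:
        idx = [i for i, p in enumerate(points) if contains(p)]
        H = G.subgraph(idx)
        dists = dict(nx.all_pairs_dijkstra_path_length(H, weight="weight"))
        for i, j in combinations(idx, 2):
            euclid = float(np.linalg.norm(points[i] - points[j]))
            if dists.get(i, {}).get(j, float("inf")) > t * euclid:
                return False
    return True
```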
In recent years, object detection has experienced impressive progress. Despite these improvements, there is still a significant gap between the performance of detectors on small and large objects. We analyze the current state-of-the-art model, Mask-RCNN, on a challenging dataset, MS COCO. We show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture that this is due to two factors: (1) only a few images contain small objects, and (2) even within the images that contain them, small objects do not appear often enough. We thus propose to oversample images with small objects and to augment each of those images by copy-pasting small objects many times. This allows us to trade off the quality of the detector on large objects against that on small objects. We evaluate different pasting augmentation strategies, and ultimately achieve a 9.7\% relative improvement on instance segmentation and a 7.1\% relative improvement on object detection of small objects, compared to the current state-of-the-art method on MS COCO.
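The following is a simplified, illustrative sketch of the copy-paste strategy (not the authors' exact pipeline; the function name and parameters such as max_area and the rejection-sampled placement are our own assumptions):

```python
import numpy as np

def copy_paste_small_objects(image, masks, max_area=32 * 32, n_copies=3, seed=0):
    """Illustrative copy-paste augmentation: paste every small object
    (mask area < max_area) n_copies times at random locations that do not
    overlap existing objects. Returns the augmented image and the masks
    of the pasted copies."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    occupied = np.zeros((h, w), dtype=bool)
    for m in masks:
        occupied |= m
    out, pasted = image.copy(), []
    for m in (m for m in masks if m.sum() < max_area):
        ys, xs = np.nonzero(m)
        y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
        patch, pmask = image[y0:y1, x0:x1], m[y0:y1, x0:x1]
        ph, pw = pmask.shape
        if ph >= h or pw >= w:
            continue
        for _ in range(n_copies):
            for _ in range(50):  # rejection-sample a non-overlapping spot
                ty, tx = rng.integers(0, h - ph), rng.integers(0, w - pw)
                if not occupied[ty:ty + ph, tx:tx + pw][pmask].any():
                    out[ty:ty + ph, tx:tx + pw][pmask] = patch[pmask]
                    new = np.zeros((h, w), dtype=bool)
                    new[ty:ty + ph, tx:tx + pw] = pmask
                    occupied |= new
                    pasted.append(new)
                    break
    return out, pasted
```

A real pipeline would also update the annotation records (boxes, labels) for each pasted mask; this sketch only covers the pixel-level operation.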
The Normalized Cut (NCut) objective function, widely used in data clustering and image segmentation, quantifies the cost of graph partitioning in a way that favors balanced clusters or segments, assigning them lower cost than unbalanced partitionings. However, this bias is so strong that NCut avoids singleton partitions entirely, even when vertices are very weakly connected to the rest of the graph. Motivated by the B\"uhler-Hein family of balanced cut costs, we propose the family of Compassionately Conservative Balanced (CCB) Cut costs, which are indexed by a parameter that can be used to strike a compromise between the desire to avoid too many singleton partitions and the notion that all partitions should be balanced. We show that CCB-Cut minimization can be relaxed into an orthogonally constrained $\ell_{\tau}$-minimization problem that, for one particular index value, coincides with the problem of computing Piecewise Flat Embeddings (PFE), and we present an algorithm for solving the relaxed problem by iteratively minimizing a sequence of reweighted Rayleigh quotients (IRRQ). Using images from the BSDS500 database, we show that image segmentation based on CCB-Cut minimization provides better accuracy with respect to ground truth and greater variability in region size than NCut-based image segmentation.
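Under our reading of the abstract, one IRRQ-style iteration might look like the following schematic sketch (the exact reweighting, normalization, and constraint handling in the paper will differ; all names are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def irrq_embedding(W, tau=1.0, iters=20, eps=1e-8, seed=0):
    """Schematic iteratively reweighted Rayleigh quotient (IRRQ) loop:
    approximately minimize sum_ij W_ij |f_i - f_j|^tau under an
    orthogonality constraint, by repeatedly solving a generalized
    eigenvalue problem whose quadratic edge weights
    W_ij * |f_i - f_j|^(tau - 2) majorize the l_tau objective."""
    n = W.shape[0]
    f = np.random.default_rng(seed).standard_normal(n)
    for _ in range(iters):
        diff = np.maximum(np.abs(f[:, None] - f[None, :]), eps)
        R = W * diff ** (tau - 2.0)        # reweighted (majorizing) weights
        np.fill_diagonal(R, 0.0)
        D = np.diag(R.sum(axis=1))
        L = D - R                          # reweighted graph Laplacian
        vals, vecs = eigh(L, D + eps * np.eye(n))
        f = vecs[:, 1]                     # Fiedler-like second eigenvector
    return f
```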