\emph{$K$-best enumeration}, which asks to output the $k$ best solutions without duplication, plays an important role in data analysis across many fields. In such fields, data are typically represented by graphs, and thus subgraph enumeration has received much attention. However, $k$-best enumeration tends to be intractable since, in many cases, finding even one optimum solution is \NP-hard. To overcome this difficulty, we combine $k$-best enumeration with a recently proposed concept for enumeration algorithms called \emph{approximation enumeration algorithms}. As our main result, we propose an $\alpha$-approximation algorithm for minimal connected edge dominating sets that outputs $k$ minimal solutions of cardinality at most $\alpha\cdot\overline{\rm OPT}$, where $\overline{\rm OPT}$ is the cardinality of a mini\emph{mum} solution that is \emph{not} output by the algorithm, and $\alpha$ is a constant. Moreover, our algorithm runs with $O(nm^2\Delta)$ delay, where $n$, $m$, and $\Delta$ are the number of vertices, the number of edges, and the maximum degree of the input graph, respectively.
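To make the enumerated objects concrete, the following minimal Python sketch (a hypothetical brute-force checker, not the paper's algorithm; edges are given as pairs of vertex labels) tests whether an edge subset $D$ is a connected edge dominating set and whether it is inclusion-wise minimal:
\begin{verbatim}
from itertools import combinations

def is_connected_eds(edges, D):
    """Check that D dominates every edge of the graph and induces a
    connected subgraph (edges are pairs of vertex labels)."""
    D = set(D)
    if not D:
        return False
    cover = {v for e in D for v in e}
    # Edge domination: every graph edge shares an endpoint with some edge in D.
    if any(u not in cover and v not in cover for (u, v) in edges):
        return False
    # Connectivity of the subgraph formed by the edges in D.
    seen, stack = set(), [next(iter(cover))]
    while stack:
        x = stack.pop()
        if x in seen:
            continue
        seen.add(x)
        stack += [w for (u, v) in D for w in (u, v) if x in (u, v) and w != x]
    return seen == cover

def is_minimal_connected_eds(edges, D):
    """Minimal: D works, but no proper subset of D does (brute force)."""
    D = set(D)
    return is_connected_eds(edges, D) and not any(
        is_connected_eds(edges, set(S))
        for r in range(1, len(D))
        for S in combinations(D, r))
\end{verbatim}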
The basic goal of survivable network design is to build cheap networks that guarantee the connectivity of certain pairs of nodes despite the failure of a few edges or nodes. A celebrated result by Jain [Combinatorica'01] provides a 2-approximation for a wide class of these problems. However, nothing better is known even for very basic special cases, raising the natural question of whether any improved approximation factor is possible at all. In this paper we address one of the most basic problems in this family for which 2 is still the best-known approximation factor, the Forest Augmentation Problem (FAP): given an undirected unweighted graph (which w.l.o.g. is a forest) and a collection of extra edges (links), compute a minimum-cardinality subset of links whose addition to the graph makes it 2-edge-connected. Several better-than-2 approximation algorithms are known for the special case where the input graph is a tree, a.k.a. the Tree Augmentation Problem (TAP). Recently this was achieved also for the weighted version of TAP and for the $k$-edge-connectivity generalization of TAP. These results heavily exploit the fact that the input graph is connected, a condition that does not hold in FAP. In this paper we breach the 2-approximation barrier for FAP. Our result is based on two main ingredients. First, we describe a reduction to the Path Augmentation Problem (PAP), the special case of FAP where the input graph is a collection of disjoint paths. Our reduction is not approximation preserving; however, it is sufficiently accurate to improve on a factor-2 approximation. Second, we present a better-than-2 approximation algorithm for PAP, an open problem on its own. Here we exploit a novel notion of implicit credits, which might turn out to be helpful in future related work.
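As a concrete rendering of the FAP feasibility condition, here is a minimal Python sketch (a hypothetical brute-force checker, not part of the paper's method) that tests whether adding a chosen set of links to a forest on vertices $0,\ldots,n-1$ yields a 2-edge-connected graph, i.e. a connected graph that stays connected after deleting any single edge:
\begin{verbatim}
def connected(n, edges):
    """DFS connectivity check for an undirected graph on vertices 0..n-1."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = set(), [0]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack += adj[x]
    return len(seen) == n

def is_two_edge_connected(n, forest_edges, links):
    """2-edge-connected: connected, and still connected after deleting
    any single edge (i.e. the graph has no bridges)."""
    edges = list(forest_edges) + list(links)
    if not connected(n, edges):
        return False
    return all(connected(n, edges[:i] + edges[i + 1:]) for i in range(len(edges)))
\end{verbatim}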
We study reinforcement learning for two-player zero-sum Markov games with simultaneous moves in the finite-horizon setting, where the transition kernel of the underlying Markov games can be parameterized by a linear function over the current state, both players' actions, and the next state. In particular, we assume that we can control both players and aim to find the Nash Equilibrium by minimizing the duality gap. We propose an algorithm, Nash-UCRL, based on the principle of "Optimism in the Face of Uncertainty". Our algorithm only needs to find a Coarse Correlated Equilibrium (CCE), which can be computed efficiently. Specifically, we show that Nash-UCRL provably achieves an $\tilde{O}(dH\sqrt{T})$ regret, where $d$ is the dimension of the linear function class, $H$ is the length of the game, and $T$ is the total number of steps in the game. To assess the optimality of our algorithm, we also prove an $\tilde{\Omega}(dH\sqrt{T})$ lower bound on the regret. Our upper bound matches the lower bound up to logarithmic factors, which suggests the optimality of our algorithm.
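For context, a standard way to quantify a policy pair $(\pi,\nu)$ in such zero-sum games (the notion assumed here; the paper may state it in an equivalent form) is the duality gap
\[
\mathrm{Gap}(\pi,\nu) \;=\; \max_{\pi'} V_1^{\pi',\nu}(s_1) \;-\; \min_{\nu'} V_1^{\pi,\nu'}(s_1),
\]
which is nonnegative and equals zero exactly at a Nash equilibrium; the regret over $K$ episodes is then the cumulative gap $\sum_{k=1}^{K}\mathrm{Gap}(\pi^k,\nu^k)$ of the policy pairs actually played.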
We consider the problem of enumerating optimal solutions for two hypergraph $k$-partitioning problems -- namely, Hypergraph-$k$-Cut and Minmax-Hypergraph-$k$-Partition. The input in hypergraph $k$-partitioning problems is a hypergraph $G=(V, E)$ with positive hyperedge costs along with a fixed positive integer $k$. The goal is to find a partition of $V$ into $k$ non-empty parts $(V_1, V_2, \ldots, V_k)$ -- known as a $k$-partition -- so as to minimize an objective of interest. 1. If the objective of interest is the maximum cut value of the parts, then the problem is known as Minmax-Hypergraph-$k$-Partition. A subset of hyperedges is a minmax-$k$-cut-set if it is the subset of hyperedges crossing an optimum $k$-partition for Minmax-Hypergraph-$k$-Partition. 2. If the objective of interest is the total cost of hyperedges crossing the $k$-partition, then the problem is known as Hypergraph-$k$-Cut. A subset of hyperedges is a min-$k$-cut-set if it is the subset of hyperedges crossing an optimum $k$-partition for Hypergraph-$k$-Cut. We give the first polynomial bound on the number of minmax-$k$-cut-sets and a polynomial-time algorithm to enumerate all of them in hypergraphs for every fixed $k$. Our technique is strong enough to also enable an $n^{O(k)}p$-time deterministic algorithm to enumerate all min-$k$-cut-sets in hypergraphs, thus improving on the previously known $n^{O(k^2)}p$-time deterministic algorithm, where $n$ is the number of vertices and $p$ is the size of the hypergraph. The correctness analysis of our enumeration approach relies on a structural result that is a strong and unifying generalization of known structural results for Hypergraph-$k$-Cut and Minmax-Hypergraph-$k$-Partition. We believe that our structural result is likely to be of independent interest in the theory of hypergraphs (and graphs).
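To illustrate the two objectives, the following minimal Python sketch (a hypothetical helper, not the enumeration algorithm) evaluates a given $k$-partition under both Hypergraph-$k$-Cut and Minmax-Hypergraph-$k$-Partition, with hyperedges given as vertex sets and a parallel list of costs:
\begin{verbatim}
def partition_objectives(hyperedges, costs, parts):
    """hyperedges: list of vertex sets; costs: parallel list of positive costs;
    parts: the k-partition, a list of disjoint non-empty vertex sets.
    Returns (Hypergraph-k-Cut value, Minmax-Hypergraph-k-Partition value)."""
    def crosses(e, P):
        # e crosses part P if it meets both P and its complement.
        return bool(e & P) and bool(e - P)
    total_cut = sum(c for e, c in zip(hyperedges, costs)
                    if any(crosses(e, P) for P in parts))
    minmax = max(sum(c for e, c in zip(hyperedges, costs) if crosses(e, P))
                 for P in parts)
    return total_cut, minmax

# Tiny example: a 3-vertex hypergraph with two hyperedges and a 2-partition.
print(partition_objectives([{0, 1}, {0, 1, 2}], [1.0, 2.0], [{0, 1}, {2}]))
\end{verbatim}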
While algorithms for planar graphs have received a lot of attention, few papers have focused on the additional power that one gets from assuming that an embedding of the graph is available. While in the classic sequential setting this assumption gives no additional power (as a planar graph can be embedded in linear time), we show that this is far from being the case in other settings. We assume that the embedding is straight-line, but our methods also generalize to non-straight-line embeddings. Specifically, we focus on sublinear-time computation and massively parallel computation (MPC). Our main technical contribution is a sublinear-time algorithm for computing a relaxed version of an $r$-division. We then show how this can be used to estimate Lipschitz additive graph parameters, including, for example, the maximum matching, the maximum independent set, and the minimum dominating set. We also show how this can be used to solve some property-testing problems with respect to the vertex edit distance. In the second part of our paper, we give an MPC algorithm that computes an $r$-division of the input graph. We show how this can be used to solve various classical graph problems with $O(n^{2/3+\epsilon})$ space per machine for some $\epsilon>0$ in $O(1)$ rounds, including, for example, approximate shortest paths and minimum spanning tree. Our results also imply an improved MPC algorithm for the Euclidean minimum spanning tree.
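For reference, the classical notion being relaxed (the standard definition from the planar-graph literature, assumed here) is the following: an $r$-division of an $n$-vertex planar graph is a decomposition into $O(n/r)$ pieces such that
\[
|V(P)| \le r \qquad\text{and}\qquad |\partial P| = O(\sqrt{r}) \qquad \text{for every piece } P,
\]
where $\partial P$ denotes the boundary vertices of $P$, i.e. the vertices it shares with other pieces.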
The Schrijver graph $S(n,k)$ is defined for integers $n$ and $k$ with $n \geq 2k$ as the graph whose vertices are all the $k$-subsets of $\{1,2,\ldots,n\}$ that do not include two consecutive elements modulo $n$, where two such sets are adjacent if they are disjoint. A result of Schrijver asserts that the chromatic number of $S(n,k)$ is $n-2k+2$ (Nieuw Arch. Wiskd., 1978). In the computational Schrijver problem, we are given access to a coloring of the vertices of $S(n,k)$ with $n-2k+1$ colors, and the goal is to find a monochromatic edge. The Schrijver problem is known to be complete in the complexity class $\mathsf{PPA}$. We prove that it can be solved by a randomized algorithm with running time $n^{O(1)} \cdot k^{O(k)}$, hence it is fixed-parameter tractable with respect to the parameter $k$.
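For small parameters the objects involved can be generated directly; the following minimal Python sketch (exponential brute force, in contrast to the paper's fixed-parameter algorithm; function names are illustrative) builds the vertex set of $S(n,k)$ and searches an explicitly given coloring for a monochromatic edge:
\begin{verbatim}
from itertools import combinations

def schrijver_vertices(n, k):
    """k-subsets of {1,...,n} with no two elements consecutive modulo n."""
    return [S for S in combinations(range(1, n + 1), k)
            if all((x % n) + 1 not in S for x in S)]

def monochromatic_edge(n, k, coloring):
    """Brute-force search for two disjoint (hence adjacent) vertices of S(n, k)
    with the same color; `coloring` maps each vertex (a tuple) to a color.
    By Schrijver's theorem such an edge exists whenever fewer than
    n - 2k + 2 colors are used."""
    for A, B in combinations(schrijver_vertices(n, k), 2):
        if not set(A) & set(B) and coloring[A] == coloring[B]:
            return A, B
    return None

# Example: S(6, 2) has chromatic number 4, so any coloring with at most
# 3 colors (here a single color) admits a monochromatic edge.
V = schrijver_vertices(6, 2)
print(monochromatic_edge(6, 2, {v: 0 for v in V}))
\end{verbatim}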
Given a set $P$ of $n$ points in the plane, the $k$-center problem is to find $k$ congruent disks of minimum possible radius such that their union covers all the points in $P$. The $2$-center problem is a special case of the $k$-center problem that has been extensively studied in the recent past \cite{CAHN,HT,SH}. In this paper, we consider a generalized version of the $2$-center problem called the \textit{proximity connected} $2$-center (PCTC) problem. In this problem, we are also given a parameter $\delta\geq 0$, and we require that the distance between the centers of the disks be at most $\delta$. Note that when $\delta=0$, the PCTC problem reduces to the $1$-center (minimum enclosing disk) problem, and when $\delta$ tends to infinity, it reduces to the $2$-center problem. The PCTC problem first appeared in the context of wireless networks in 1992 \cite{ACN0}, but obtaining a nontrivial deterministic algorithm for the problem remained open. In this paper, we resolve this open problem by providing a deterministic $O(n^2\log n)$-time algorithm for the problem.
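The feasibility predicate underlying the PCTC problem is simple to state; the following minimal Python sketch (a hypothetical helper; the algorithmic difficulty lies in minimizing the radius, which this sketch does not attempt) checks whether two candidate centers with a given radius cover $P$ while respecting the proximity constraint:
\begin{verbatim}
from math import dist

def pctc_feasible(points, c1, c2, r, delta):
    """True iff the two disks of radius r centered at c1 and c2 cover every
    point of P and the centers are within distance delta of each other."""
    if dist(c1, c2) > delta:
        return False
    return all(dist(p, c1) <= r or dist(p, c2) <= r for p in points)

# Example: four points covered by two unit disks whose centers are 2 apart.
P = [(0.0, 0.0), (0.5, 0.5), (2.0, 0.0), (2.5, -0.5)]
print(pctc_feasible(P, (0.0, 0.0), (2.0, 0.0), 1.0, 2.0))   # True
\end{verbatim}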
The number of down-steps between pairs of up-steps in $k_t$-Dyck paths, a generalization of Dyck paths consisting of steps $\{(1, k), (1, -1)\}$ such that the path stays (weakly) above the line $y=-t$, is studied. Results are proved bijectively and by means of generating functions, and lead to several interesting identities as well as links to other combinatorial structures. In particular, there is a connection between $k_t$-Dyck paths and perforation patterns for punctured convolutional codes (binary matrices) used in coding theory. Surprisingly, upon restriction to usual Dyck paths this yields a new combinatorial interpretation of Catalan numbers.
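The following minimal Python sketch counts $k_t$-Dyck paths by direct recursion (assuming the standard convention that such a path starts at the origin, uses steps $(1,k)$ and $(1,-1)$, and stays weakly above $y=-t$; with an equal total rise and fall it necessarily returns to height $0$). For $k=1$ and $t=0$ it reproduces the Catalan numbers, which count ordinary Dyck paths:
\begin{verbatim}
from functools import lru_cache

def count_kt_dyck_paths(k, t, up_steps):
    """Count paths with `up_steps` steps (1, k) and k*up_steps steps (1, -1)
    that start at height 0 and never go below y = -t; since the step heights
    sum to zero, every such path also ends at height 0."""
    @lru_cache(maxsize=None)
    def rec(ups, downs, h):
        if ups == 0 and downs == 0:
            return 1
        count = 0
        if ups > 0:
            count += rec(ups - 1, downs, h + k)
        if downs > 0 and h - 1 >= -t:
            count += rec(ups, downs - 1, h - 1)
        return count
    return rec(up_steps, k * up_steps, 0)

# k = 1, t = 0 gives ordinary Dyck paths, counted by the Catalan numbers.
print([count_kt_dyck_paths(1, 0, n) for n in range(6)])   # [1, 1, 2, 5, 14, 42]
\end{verbatim}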
This paper makes the first attempt to apply the newly developed upwind GFDM to the meshless solution of two-phase porous flow equations. In the presented method, a node cloud is used to flexibly discretize the computational domain, instead of relying on complicated mesh generation. By combining moving least squares approximation with local Taylor expansion, spatial derivatives of the oil-phase pressure at a node are approximated by generalized difference operators in the local influence domain of the node. By introducing a first-order upwind scheme for the phase relative permeability and incorporating the discrete boundary conditions, fully implicit GFDM-based nonlinear discrete equations of the immiscible two-phase porous flow are obtained and solved by a nonlinear solver based on Newton iteration with automatic differentiation, which avoids the additional computational cost and possible computational instability of a sequentially coupled scheme. Two numerical examples are implemented to test the computational performance of the presented method. A detailed error analysis identifies two sources of calculation error and roughly estimates the convergence order, showing that the low-order error of GFDM makes its convergence order lower than that of the FDM when the node spacing is small. The analysis also shows that the symmetry or uniformity of the node collocation in the node influence domain has a significant effect on the accuracy of the generalized difference operators, and that the radius of the node influence domain should be small to achieve high calculation accuracy; this is a significant difference between the studied hyperbolic two-phase porous flow problem and the elliptic problems to which GFDM has previously been applied.
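To illustrate the upwind ingredient in isolation, the following minimal Python sketch (a generic upstream-weighting helper with illustrative names; the paper's actual discretization couples this choice with the GFDM generalized difference operators) evaluates the phase relative permeability at the upstream node of a connection based on the sign of the pressure difference:
\begin{verbatim}
def upwind_phase_mobility(p_i, p_j, s_i, s_j, rel_perm, viscosity):
    """First-order upwind (upstream-weighted) phase mobility on the connection
    between nodes i and j: the relative permeability is evaluated at the
    saturation of the upstream node, chosen by the sign of the pressure
    difference (illustrative helper only)."""
    s_upstream = s_i if p_i >= p_j else s_j
    return rel_perm(s_upstream) / viscosity

# Example with a simple quadratic relative-permeability curve (illustrative).
krw = lambda s: s ** 2
print(upwind_phase_mobility(2.0e7, 1.5e7, 0.6, 0.3, krw, 1.0e-3))   # 0.36 / 1e-3
\end{verbatim}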
In the storied Colonel Blotto game, two colonels allocate $a$ and $b$ troops, respectively, across $k$ distinct battlefields. A colonel wins a battlefield if they assign more troops to it than their opponent, and each colonel seeks to maximize their total number of victories. Despite the problem's formulation in 1921, the first polynomial-time algorithm to compute Nash equilibrium (NE) strategies for this game was discovered only quite recently. In 2016, the authors of \citep{ahmadinejad_dehghani_hajiaghayi_lucier_mahini_seddighin_2019} formulated a breakthrough algorithm to compute NE strategies for the Colonel Blotto game\footnote{To the best of our knowledge, the algorithm from \citep{ahmadinejad_dehghani_hajiaghayi_lucier_mahini_seddighin_2019} has computational complexity $O(k^{14}\max\{a,b\}^{13})$.}, receiving substantial media coverage (e.g., \citep{Insider}, \citep{NSF}, \citep{ScienceDaily}). In this work, we present the first known $\epsilon$-approximation algorithm to compute NE strategies in the two-player Colonel Blotto game with runtime $\widetilde{O}(\epsilon^{-4} k^8 \max\{a,b\}^2)$ for arbitrary settings of these parameters. Moreover, this algorithm computes approximate coarse correlated equilibrium strategies in the multiplayer (continuous and discrete) Colonel Blotto game (when there are $\ell > 2$ colonels) with runtime $\widetilde{O}(\ell \epsilon^{-4} k^8 n^2 + \ell^2 \epsilon^{-2} k^3 n (n+k))$, where $n$ is the maximum troop count. Before this work, no polynomial-time algorithm was known to compute exact or approximate equilibrium strategies (in any sense) for multiplayer Colonel Blotto with arbitrary parameters. Our algorithm computes these approximate equilibria via a novel (to the author's knowledge) sampling technique with which we implicitly perform multiplicative weights updates over the exponentially many strategies available to each player.
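For concreteness, the following minimal Python sketch (a hypothetical helper, not the equilibrium-computation algorithm) evaluates the outcome of a pair of pure strategies in the two-player game; how ties on a battlefield are scored varies between formulations, and here they count for neither colonel:
\begin{verbatim}
def blotto_payoffs(alloc_a, alloc_b):
    """Battlefields won by each colonel for pure strategies alloc_a and
    alloc_b (lists of troop counts per battlefield, summing to a and b);
    ties count for neither colonel in this sketch."""
    wins_a = sum(1 for x, y in zip(alloc_a, alloc_b) if x > y)
    wins_b = sum(1 for x, y in zip(alloc_a, alloc_b) if y > x)
    return wins_a, wins_b

# k = 3 battlefields, a = 5 troops against b = 4 troops.
print(blotto_payoffs([3, 1, 1], [2, 2, 0]))   # (2, 1)
\end{verbatim}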
We consider smooth optimization problems with a Hermitian positive semi-definite fixed-rank constraint, where a quotient geometry with three Riemannian metrics $g^i(\cdot, \cdot)$ $(i=1,2,3)$ is used to represent this constraint. Taking the nonlinear conjugate gradient method (CG) as an example, we show that CG on the quotient geometry with metric $g^1$ is equivalent to CG in the factor-based optimization framework, which is often called the Burer--Monteiro approach. We also show that CG on the quotient geometry with metric $g^3$ is equivalent to CG on the commonly used embedded geometry. We call two CG methods equivalent if they produce an identical sequence of iterates $\{X_k\}$. In addition, we show that if the limit point of the sequence $\{X_k\}$ generated by an algorithm has lower rank, that is, each $X_k\in \mathbb{C}^{n\times n}$ $(k = 1, 2, \ldots)$ has rank $p$ while the limit point $X_*$ has rank $r < p$, then the condition number of the Riemannian Hessian with metric $g^1$ can be unbounded, whereas those of the other two metrics stay bounded. Numerical experiments show that the Burer--Monteiro CG method has a slower local convergence rate when the limit point has reduced rank, compared to CG on the quotient geometry under the other two metrics. This slower convergence rate can thus be attributed to the large condition number of the Hessian near a minimizer.
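To make the factor-based viewpoint concrete, the following minimal numpy sketch (plain gradient descent in the factor $Y$, not the Riemannian CG methods compared in the paper; all names and the test objective are illustrative) minimizes $\|YY^{\mathsf H}-A\|_F^2$ so that $X=YY^{\mathsf H}$ is Hermitian PSD with rank at most $p$:
\begin{verbatim}
import numpy as np

def burer_monteiro_descent(A, p, steps=20000, lr=1e-3, seed=0):
    """Gradient descent on f(Y) = ||Y Y^H - A||_F^2 over Y in C^{n x p},
    so that X = Y Y^H is Hermitian PSD with rank at most p."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Y = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
    for _ in range(steps):
        R = Y @ Y.conj().T - A        # Hermitian residual
        Y = Y - lr * 4 * R @ Y        # descent step in the factor Y
    return Y @ Y.conj().T

# Recover a random rank-2 Hermitian PSD matrix; the residual should become small.
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 2)) + 1j * rng.standard_normal((6, 2))
A = B @ B.conj().T
print(np.linalg.norm(burer_monteiro_descent(A, p=2) - A))
\end{verbatim}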