We study the problem of finding a minimum homology basis, that is, a shortest set of cycles that generates the $1$-dimensional homology classes with $\mathbb{Z}_2$ coefficients in a given simplicial complex $K$. This problem has been extensively studied in the last few years. For general complexes, the current best deterministic algorithm, by Dey et al., runs in $O(N^\omega + N^2 g)$ time, where $N$ denotes the number of simplices in $K$, $g$ denotes the rank of the $1$-homology group of $K$, and $\omega$ denotes the exponent of matrix multiplication. In this paper, we present two conceptually simple randomized algorithms that compute a minimum homology basis of a general simplicial complex $K$. The first algorithm runs in $\tilde{O}(m^\omega)$ time, where $m$ denotes the number of edges in $K$, whereas the second algorithm runs in $O(m^\omega + N m^{\omega-1})$ time. We also study the problem of finding a minimum cycle basis in an undirected graph $G$ with $n$ vertices and $m$ edges. The best known algorithm for this problem runs in $O(m^\omega)$ time. Our algorithm, which has a simpler high-level description, but is slightly more expensive, runs in $\tilde{O}(m^\omega)$ time.
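For intuition, the common linear-algebraic core of such algorithms is that, over $\mathbb{Z}_2$, every cycle is a 0/1 vector indexed by the edges, and a basis is an independent set of such vectors of minimum total weight. The sketch below is illustrative only and is not the algorithm of this paper (the stated bounds rely on fast matrix multiplication rather than naive elimination, and for a complex one would additionally quotient by boundaries of 2-simplices, which is omitted here): it greedily keeps weight-sorted candidate cycles whose edge-incidence vectors increase the GF(2) rank.

```python
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of a list of equal-length 0/1 vectors (naive elimination)."""
    A = np.array(rows, dtype=np.uint8) % 2
    rank = 0
    for col in range(A.shape[1]):
        pivot = next((r for r in range(rank, A.shape[0]) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]      # move pivot row into place
        for r in range(A.shape[0]):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                  # eliminate the pivot column
        rank += 1
    return rank

def greedy_cycle_basis(candidate_cycles, dim):
    """Keep candidate cycles (0/1 edge-incidence vectors, assumed sorted by
    increasing weight) whose addition raises the GF(2) rank, until `dim`
    independent cycles have been collected."""
    basis = []
    for vec in candidate_cycles:
        if len(basis) == dim:
            break
        if gf2_rank(basis + [vec]) > len(basis):
            basis.append(vec)
    return basis

# Toy example: two triangles sharing an edge, five edges indexed 0..4.
c1 = np.array([1, 1, 1, 0, 0], dtype=np.uint8)   # first triangle
c2 = np.array([0, 0, 1, 1, 1], dtype=np.uint8)   # second triangle
c3 = c1 ^ c2                                     # dependent: their symmetric difference
print(greedy_cycle_basis([c1, c2, c3], dim=2))   # keeps c1 and c2 only
```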
We study the imbalance problem on complete bipartite graphs. The imbalance problem is a graph layout problem and is known to be NP-complete. Graph layout problems find applications in the optimization of networks for parallel computer architectures, VLSI circuit design, information retrieval, numerical analysis, computational biology, graph theory, scheduling, and archaeology. In this paper, we give characterizations of the optimal solutions of the imbalance problem on complete bipartite graphs. Using these characterizations, we can solve the imbalance problem on complete bipartite graphs in $\mathcal{O}(\log(|V|) \cdot \log(\log(|V|)))$ time when the cardinalities of the two parts are given, and verify whether a given solution is optimal in $\mathcal{O}(|V|)$ time. We also introduce a restricted form of proper interval bipartite graphs on which the imbalance problem is solvable in $\mathcal{O}(c \cdot \log(|V|) \cdot \log(\log(|V|)))$ time, where $c = \mathcal{O}(|V|)$, by using the aforementioned characterizations.
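For concreteness, in a vertex ordering the imbalance of a vertex is the absolute difference between its numbers of neighbours placed to its left and to its right, and the objective is the sum of these values over all vertices. The snippet below only illustrates this objective on a small complete bipartite instance by brute force; the characterizations and the stated running times are the contribution of the paper and are not reproduced here.

```python
from itertools import permutations

def imbalance(adj, layout):
    """Total imbalance of the vertex ordering `layout` for the graph `adj`
    (dict: vertex -> set of neighbours): sum over all vertices of
    |#neighbours placed to the left - #neighbours placed to the right|."""
    pos = {v: i for i, v in enumerate(layout)}
    total = 0
    for v, nbrs in adj.items():
        left = sum(1 for u in nbrs if pos[u] < pos[v])
        total += abs(left - (len(nbrs) - left))
    return total

# Complete bipartite graph K_{2,3} with parts {a1, a2} and {b1, b2, b3}.
A, B = ["a1", "a2"], ["b1", "b2", "b3"]
adj = {v: set(B) for v in A}
adj.update({v: set(A) for v in B})

best = min(permutations(adj), key=lambda p: imbalance(adj, p))
print(imbalance(adj, best), best)   # brute-force optimum for this tiny instance
```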
An $s{\operatorname{-}}t$ minimum cut in a graph corresponds to a minimum weight subset of edges whose removal disconnects vertices $s$ and $t$. Finding such a cut is a classic problem that is dual to that of finding a maximum flow from $s$ to $t$. In this work we describe a quantum algorithm for the minimum $s{\operatorname{-}}t$ cut problem on undirected graphs. For an undirected graph $G$ with $n$ vertices, $m$ edges, and integral edge weights bounded by $W$, the algorithm computes with high probability the weight of a minimum $s{\operatorname{-}}t$ cut in time $\widetilde O(\sqrt{m} n^{5/6} W^{1/3} + n^{5/3} W^{2/3})$, given adjacency list access to $G$. For simple graphs this bound is always $\widetilde O(n^{11/6})$, even in the dense case when $m = \Omega(n^2)$. In contrast, a classical randomized algorithm must make $\Omega(m)$ queries to the adjacency list of a simple graph $G$ even to decide whether $s$ and $t$ are connected.
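For context, the classical route to the same quantity goes through the max-flow/min-cut duality mentioned above; the short sketch below (a toy graph of my choosing, using networkx) computes a minimum $s$-$t$ cut classically and is in no way a rendition of the quantum algorithm.

```python
import networkx as nx

# Classical baseline only: this does not reproduce the quantum algorithm.
# Each undirected edge is modeled by two opposite arcs of equal capacity,
# and the minimum s-t cut is read off from the max flow via strong duality.
G = nx.DiGraph()
for u, v, w in [("s", "a", 3), ("s", "b", 2), ("a", "b", 1),
                ("a", "t", 2), ("b", "t", 3)]:
    G.add_edge(u, v, capacity=w)
    G.add_edge(v, u, capacity=w)

cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
assert cut_value == nx.maximum_flow_value(G, "s", "t")  # min cut = max flow
print(cut_value, S, T)
```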
We establish frame inequalities for signals in Paley--Wiener spaces on two specific families of graphs consisting of combinations of cubes and cycles. The frame elements are localizations to cubes, regarded as clusters in the graphs, of vertex functions that are eigenvectors of certain spatio-spectral limiting operators on graph signals.
A (unit) disk graph is the intersection graph of closed (unit) disks in the plane. Almost three decades ago, an elegant polynomial-time algorithm was found for \textsc{Maximum Clique} on unit disk graphs [Clark, Colbourn, Johnson; Discrete Mathematics '90]. Since then, it has been an intriguing open question whether or not tractability can be extended to general disk graphs. We show that the disjoint union of two odd cycles is never the complement of a disk graph nor of a unit (3-dimensional) ball graph. From that fact and existing results, we derive a simple QPTAS and a subexponential algorithm running in time $2^{\tilde{O}(n^{2/3})}$ for \textsc{Maximum Clique} on disk and unit ball graphs. We then obtain a randomized EPTAS for computing the independence number on graphs having no disjoint union of two odd cycles as an induced subgraph, bounded VC-dimension, and linear independence number. This, in combination with our structural results, yields a randomized EPTAS for \textsc{Maximum Clique} on disk and unit ball graphs. \textsc{Maximum Clique} on unit ball graphs is equivalent to finding, given a collection of points in $\mathbb R^3$, a maximum subset of points with diameter at most some fixed value. In stark contrast, \textsc{Maximum Clique} on ball graphs and unit $4$-dimensional ball graphs, as well as on intersection graphs of filled ellipses (even ones close to unit disks) or filled triangles, is unlikely to admit such algorithms. Indeed, we show that, for all those problems, there is a constant approximation ratio that cannot be attained even in time $2^{n^{1-\varepsilon}}$, unless the Exponential Time Hypothesis fails.
We study the query version of the constrained minimum link path problem between two points inside a simple polygon $P$ with $n$ vertices, where the path must contain at least one point visible from a query point. The method is based on partitioning $P$ into a number of faces of equal link distance from a point, called a link-based shortest path map (SPM). First, we solve this problem for two given points $s$, $t$ and a query point $q$. Then, the proposed solution is extended to the general case of three arbitrary query points $s$, $t$, and $q$. For the former case, we propose an algorithm with $O(n)$ preprocessing time. Extending this approach to the latter case, we develop an algorithm with $O(n^3)$ preprocessing time. In both cases, the link distance of a $q$-visible path between $s$ and $t$ can be reported in $O(\log n)$ time and the path itself in $O(m+\log n)$ time, where $m$ is the number of links.
Subexponential parameterized algorithms are known for a wide range of natural problems on planar graphs, but the techniques are usually highly problem specific. The goal of this paper is to introduce a framework for obtaining $n^{O(\sqrt{k})}$ time algorithms for a family of graph modification problems that includes problems that can be seen as generalized cycle hitting problems. Our starting point is the Node Unique Label Cover problem (that is, given a CSP instance where each constraint is a permutation of values on two variables, the task is to delete $k$ variables to make the instance satisfiable). We introduce a variant of the problem where $k$ vertices have to be deleted such that every 2-connected component of the remaining instance is satisfiable. Then we extend the problem with cardinality constraints that restrict the number of times a certain value can be used (globally or within a 2-connected component of the solution). We show that there is an $n^{O(\sqrt{k})}$ time algorithm on planar graphs for any problem that can be formulated this way, which includes a large number of well-studied problems, for example, Odd Cycle Transversal, Subset Feedback Vertex Set, Group Feedback Vertex Set, Subset Group Feedback Vertex Set, Vertex Multiway Cut, and Component Order Connectivity. For those problems that admit appropriate (quasi)polynomial kernels (that increase the parameter only linearly and preserve planarity), our results immediately imply $2^{O(\sqrt{k}\cdot\operatorname{polylog}(k))}n^{O(1)}$ time parameterized algorithms on planar graphs. In particular, we use or adapt known kernelization results to obtain $2^{O(\sqrt{k}\cdot \operatorname{polylog}(k))} n^{O(1)}$ time (randomized) algorithms for Vertex Multiway Cut, Group Feedback Vertex Set, and Subset Feedback Vertex Set.
We revisit the problem of finding optimal strategies for deterministic Markov Decision Processes (DMDPs), and the closely related problem of testing feasibility of systems of $m$ linear inequalities on $n$ real variables with at most two variables per inequality (2VPI). We give a randomized trade-off algorithm solving both problems and running in $\tilde{O}(nmh+(n/h)^3)$ time using $\tilde{O}(n^2/h+m)$ space for any parameter $h\in [1,n]$. In particular, using subquadratic space we get $\tilde{O}(nm+n^{3/2}m^{3/4})$ running time, which improves by a polynomial factor upon all the known upper bounds for non-dense instances with $m=O(n^{2-\epsilon})$. Moreover, using linear space we match the randomized $\tilde{O}(nm+n^3)$ time bound of Cohen and Megiddo [SICOMP'94] that required $\tilde{\Theta}(n^2+m)$ space. Additionally, we show a new algorithm for the Discounted All-Pairs Shortest Paths problem, introduced by Madani et al. [TALG'10], which extends DMDPs with optional end vertices. For the case of uniform discount factors, we give a deterministic algorithm running in $\tilde{O}(n^{3/2}m^{3/4})$ time, which improves significantly upon the randomized bound $\tilde{O}(n^2\sqrt{m})$ of Madani et al.
Linear temporal logic (LTL) is a specification language for finite sequences (called traces) widely used in program verification, motion planning in robotics, process mining, and many other areas. We consider the problem of learning LTL formulas for classifying traces; despite growing interest from the research community, existing solutions suffer from two limitations: they do not scale beyond small formulas, and they may exhaust computational resources without returning any result. We introduce a new algorithm addressing both issues: our algorithm is able to construct formulas an order of magnitude larger than previous methods, and it is anytime, meaning that in most cases it successfully outputs a formula, albeit possibly not of minimal size. We evaluate the performance of our algorithm using an open-source implementation against publicly available benchmarks.
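To make the classification task concrete, the sketch below evaluates a formula of a small LTL fragment on finite traces (sets of atomic propositions per time step). It only illustrates what it means for a formula to separate positive from negative traces; it has nothing to do with the learning algorithm itself, and the formula and traces are made up for the example.

```python
def holds(formula, trace, i=0):
    """Finite-trace semantics for a small LTL fragment (illustrative only).
    A trace is a list of sets of atomic propositions."""
    if isinstance(formula, str):                      # atomic proposition
        return i < len(trace) and formula in trace[i]
    op, *args = formula
    if op == "!":
        return not holds(args[0], trace, i)
    if op == "&":
        return holds(args[0], trace, i) and holds(args[1], trace, i)
    if op == "->":
        return (not holds(args[0], trace, i)) or holds(args[1], trace, i)
    if op == "X":                                     # next
        return i + 1 < len(trace) and holds(args[0], trace, i + 1)
    if op == "F":                                     # eventually
        return any(holds(args[0], trace, j) for j in range(i, len(trace)))
    if op == "G":                                     # always
        return all(holds(args[0], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op}")

# "every request p is eventually followed by a grant q"
phi = ("G", ("->", "p", ("F", "q")))
positive = [{"p"}, set(), {"q"}]
negative = [{"p"}, set(), set()]
print(holds(phi, positive), holds(phi, negative))     # True False
```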
A general quantum circuit can be simulated classically in exponential time. If it has a planar layout, then a tensor-network contraction algorithm due to Markov and Shi has a runtime exponential in the square root of its size, or more generally exponential in the treewidth of the underlying graph. Separately, Gottesman and Knill showed that if all gates are restricted to be Clifford, then there is a polynomial time simulation. We combine these two ideas and show that treewidth and planarity can be exploited to improve Clifford circuit simulation. Our main result is a classical algorithm with runtime scaling asymptotically as $n^{\omega/2}<n^{1.19}$ which samples from the output distribution obtained by measuring all $n$ qubits of a planar graph state in given Pauli bases. Here $\omega$ is the matrix multiplication exponent. We also provide a classical algorithm with the same asymptotic runtime which samples from the output distribution of any constant-depth Clifford circuit in a planar geometry. Our work improves upon known classical algorithms, which have cubic runtime. A key ingredient is a mapping which, given a tree decomposition of some graph $G$, produces a Clifford circuit with a structure that mirrors the tree decomposition and which emulates measurement of the corresponding graph state. We provide a classical simulation of this circuit with the runtime stated above for planar graphs, and with runtime $O(nt^{\omega-1})$ otherwise, where $t$ is the width of the tree decomposition. Our algorithm incorporates two subroutines which may be of independent interest. The first is a matrix-multiplication-time version of the Gottesman-Knill simulation of multi-qubit measurement on stabilizer states. The second is a new classical algorithm for solving symmetric linear systems over $\mathbb{F}_2$ in a planar geometry, extending previous works that only applied to non-singular linear systems in the analogous setting.
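As a point of reference for that last subroutine, the snippet below solves a (possibly singular) linear system over $\mathbb{F}_2$ by generic dense Gaussian elimination on a small symmetric example of my choosing; the paper's algorithm instead exploits the planar and symmetric structure to beat this cubic baseline.

```python
import numpy as np

def solve_gf2(A, b):
    """Solve A x = b over GF(2) by dense Gaussian elimination, returning one
    solution (free variables set to 0) or None if the system is inconsistent.
    Generic O(n^3) baseline, not the structured solver from the paper."""
    A = np.array(A, dtype=np.uint8) % 2
    b = np.array(b, dtype=np.uint8) % 2
    n_rows, n_cols = A.shape
    aug = np.hstack([A, b.reshape(-1, 1)])        # augmented matrix [A | b]
    pivots, row = [], 0
    for col in range(n_cols):
        pivot = next((r for r in range(row, n_rows) if aug[r, col]), None)
        if pivot is None:
            continue
        aug[[row, pivot]] = aug[[pivot, row]]     # bring pivot row up
        for r in range(n_rows):
            if r != row and aug[r, col]:
                aug[r] ^= aug[row]                # clear the pivot column
        pivots.append(col)
        row += 1
    if any(aug[r, -1] for r in range(row, n_rows)):   # 0 = 1 -> inconsistent
        return None
    x = np.zeros(n_cols, dtype=np.uint8)
    for r, col in enumerate(pivots):
        x[col] = aug[r, -1]
    return x

A = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]   # symmetric and singular over GF(2)
b = [1, 1, 0]
print(solve_gf2(A, b))                  # one solution, e.g. [1 0 0]
```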
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
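As a toy illustration of the dual approach (not the paper's exact scheme; the ring network and the quadratic local objectives are arbitrary choices for the example), the sketch below runs Nesterov's accelerated gradient on the dual of the edge-consensus formulation of $\min_{\mathbf{x}} \sum_i f_i(\mathbf{x})$; each node only needs its own data and the dual variables of its incident edges, which is what makes the method distributed.

```python
import numpy as np

# Decentralized consensus sketch: minimize F(x) = sum_i (x - a_i)^2 / 2 by
# accelerated gradient ascent on the dual of
#   min_{x_1..x_m} sum_i (x_i - a_i)^2 / 2   s.t.  x_i = x_j on every edge.
rng = np.random.default_rng(0)
a = rng.normal(size=5)                              # local data a_i at each node
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]    # ring network (assumed)

A = np.zeros((len(edges), len(a)))                  # edge-node incidence matrix
for k, (i, j) in enumerate(edges):
    A[k, i], A[k, j] = 1.0, -1.0

def primal_from_dual(lam):
    # x_i(lambda) = argmin_x f_i(x) + (A^T lambda)_i x = a_i - (A^T lambda)_i;
    # each node only touches the dual variables of its incident edges.
    return a - A.T @ lam

L = np.linalg.norm(A @ A.T, 2)                      # Lipschitz constant of the dual gradient
lam = y = np.zeros(len(edges))
t = 1.0
for _ in range(200):                                # Nesterov-accelerated dual ascent
    grad = A @ primal_from_dual(y)                  # consensus violation per edge
    lam_next = y + grad / L
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = lam_next + (t - 1) / t_next * (lam_next - lam)
    lam, t = lam_next, t_next

print(primal_from_dual(lam), a.mean())              # all entries approach the average
```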