This paper addresses the problem of determining all optimal integer solutions of a linear integer network flow problem, which we call the all optimal integer flow (AOF) problem. We derive an $O(F(m+n) + mn + M)$ time algorithm to determine all $F$ optimal integer flows in a directed network with $n$ nodes and $m$ arcs, where $M$ is the best time needed to find one minimum cost flow. We remark that stopping Hamacher's well-known method for the determination of the $K$ best integer flows at the first sub-optimal flow results in an algorithm with a running time of $O(Fm(n \log n + m) + M)$ for solving the AOF problem. Our improvement is essentially made possible by replacing the shortest path sub-problem with a more efficient way to determine a so-called proper zero-cost cycle, using a modified depth-first search technique. As a byproduct, our analysis yields an enhanced algorithm to determine the $K$ best integer flows that runs in $O(Kn^3 + M)$ time. Besides, we give lower and upper bounds for the number of all optimal integer and feasible integer solutions. Our bounds are based on the fact that any optimal solution can be obtained by an initial optimal tree solution plus a conical combination of incidence vectors of all induced cycles with bounded coefficients.
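To make the cycle-search step concrete, the sketch below finds a directed cycle in the zero-reduced-cost residual network of an optimal flow using a plain iterative DFS; pushing one unit of flow around such a cycle yields another optimal integer flow. This is an illustration only, with hypothetical names: the paper's algorithm additionally restricts attention to "proper" cycles, which this sketch does not.

```python
from collections import defaultdict

def zero_cost_residual(arcs, flow, cap):
    """Residual arcs among the zero-reduced-cost arcs of an optimal flow.

    arcs: dict (u, v) -> reduced cost (assumed already computed from optimal
    node potentials); flow and cap are dicts keyed by the same arc pairs.
    """
    adj = defaultdict(list)
    for (u, v), rc in arcs.items():
        if rc != 0:
            continue                      # only zero-cost arcs can lie on a zero-cost cycle
        if flow[(u, v)] < cap[(u, v)]:
            adj[u].append(v)              # forward residual arc
        if flow[(u, v)] > 0:
            adj[v].append(u)              # backward residual arc
    return adj

def find_directed_cycle(adj, nodes):
    """Iterative DFS returning one directed cycle as a node list, or None."""
    color, parent = {v: 0 for v in nodes}, {}   # 0 = new, 1 = on stack, 2 = done
    for s in nodes:
        if color[s]:
            continue
        color[s] = 1
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if color.get(v, 0) == 0:
                    color[v], parent[v] = 1, u
                    stack.append((v, iter(adj[v])))
                    break
                if color[v] == 1:         # back edge: v -> ... -> u -> v closes a cycle
                    cycle, w = [u], u
                    while w != v:
                        w = parent[w]
                        cycle.append(w)
                    cycle.reverse()       # now reads v, ..., u
                    return cycle
            else:
                color[u] = 2
                stack.pop()
    return None
```

Pushing one unit around the returned cycle produces a different flow of the same cost; enumerating flows via such cycles is what drives the $O(F(m+n) + mn + M)$ bound.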
We present a new local-search algorithm for the $k$-median clustering problem. We show that local optima for this algorithm give a $(2.836+\epsilon)$-approximation; our result improves upon the $(3+\epsilon)$-approximate local-search algorithm of Arya et al. [STOC 01]. Moreover, a computer-aided analysis of a natural extension suggests that this approach may lead to an improvement over the best-known approximation guarantee for the problem. The new ingredient in our algorithm is the use of a potential function based on both the closest and second-closest facilities to each client. Specifically, the potential is the sum, over all clients, of the distance of the client to its closest facility, plus (a small constant times) the truncated distance to its second-closest facility. We move from one solution to another only if the latter can be obtained by swapping a constant number of facilities and has a smaller potential than the former. This refined potential allows us to avoid the bad local optima exhibited by Arya et al. for the local-search algorithm based only on the cost of the solution.
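A minimal single-swap version of this scheme might look as follows. It is a sketch only: the constants `beta` and `trunc` are placeholders for the carefully tuned values in the paper's analysis, and the paper allows swapping a constant number of facilities rather than just one.

```python
def local_search_kmedian(clients, candidates, k, dist, beta=0.1, trunc=3.0):
    """Single-swap local search guided by the refined potential (assumes k >= 2).

    beta and trunc are illustrative placeholders for the paper's constants.
    """
    def potential(open_facs):
        total = 0.0
        for c in clients:
            d1, d2 = sorted(dist(c, f) for f in open_facs)[:2]
            # cost to the closest facility, plus a small multiple of the
            # *truncated* distance to the second-closest facility
            total += d1 + beta * min(d2, trunc * d1)
        return total

    open_facs = list(candidates[:k])            # arbitrary initial solution
    improved = True
    while improved:
        improved = False
        for i in range(k):
            for f_in in candidates:
                if f_in in open_facs:
                    continue
                trial = open_facs[:i] + [f_in] + open_facs[i + 1:]
                if potential(trial) < potential(open_facs) - 1e-9:
                    open_facs, improved = trial, True
    return open_facs
```

Moves are accepted based on the potential rather than the plain cost, which is exactly what rules out the bad local optima mentioned above.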
We consider $k$-means clustering in the online no-substitution setting, where one must decide whether to take each data point $x_t$ as a center immediately upon streaming it and cannot remove centers once taken. Our work is focused on the \emph{arbitrary-order} assumption, where there are no restrictions on how the points $X$ are ordered or generated. Algorithms in this setting are evaluated with respect to their approximation ratio compared to the optimal clustering cost, the number of centers they select, and their memory usage. Recently, Bhattacharjee and Moshkovitz (2020) defined a parameter, $Lower_{\alpha, k}(X)$, that governs the minimum number of centers any $\alpha$-approximation clustering algorithm, allowed any amount of memory, must take given input $X$. To complement their result, we give the first algorithm that takes $\tilde{O}(Lower_{\alpha,k}(X))$ centers (hiding factors of $k$ and $\log n$) while simultaneously achieving a constant approximation and using $\tilde{O}(k)$ memory in addition to the memory required to save the centers. Our algorithm shows that, in the no-substitution setting, it is possible to take an order-optimal number of centers while using little additional memory.
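For intuition, the simplest no-substitution rule one can write down is a fixed-radius threshold rule. It is emphatically not the algorithm of the abstract (which adapts to $Lower_{\alpha,k}(X)$ rather than using a fixed radius $r$), but it makes the irrevocability constraint visible: a center, once taken, is never removed.

```python
import math

def threshold_centers(stream, r):
    """Take x_t as a center iff it is farther than r from every current center.

    Illustrative only; points are coordinate tuples and r is a fixed radius,
    whereas the abstract's algorithm uses an adaptive, Lower-sensitive rule.
    """
    centers = []
    for x in stream:
        if all(math.dist(x, c) > r for c in centers):
            centers.append(x)     # irrevocable: no substitution allowed later
    return centers
```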
Frequency estimation is one of the most fundamental problems in streaming algorithms. Given a stream $S$ of elements from some universe $U=\{1 \ldots n\}$, the goal is to compute, in a single pass, a short sketch of $S$ so that for any element $i \in U$, one can estimate the number $x_i$ of times $i$ occurs in $S$ based on the sketch alone. Two state-of-the-art solutions to this problem are the Count-Min and Count-Sketch algorithms. The frequency estimator $\tilde{x}$ produced by Count-Min, using $O(1/\varepsilon \cdot \log n)$ dimensions, guarantees that $\|\tilde{x}-x\|_{\infty} \le \varepsilon \|x\|_1$ with high probability, and $\tilde{x} \ge x$ holds deterministically. Also, Count-Min works under the assumption that $x \ge 0$. On the other hand, Count-Sketch, using $O(1/\varepsilon^2 \cdot \log n)$ dimensions, guarantees that $\|\tilde{x}-x\|_{\infty} \le \varepsilon \|x\|_2$ with high probability. A natural question is whether it is possible to design a best-of-both-worlds sketching method: one with error guarantees depending on the $\ell_2$ norm and space comparable to Count-Sketch, which (like Count-Min) also has the no-underestimation property. Our main set of results shows that the answer to the above question is negative. We show this in two incomparable computational models: linear sketching and streaming algorithms. We also study the complementary problem, where the sketch is required to not overestimate, i.e., $\tilde{x} \le x$ should always hold.
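As a reference point, here is a standard Count-Min sketch with exactly the guarantees quoted above: width $O(1/\varepsilon)$ and depth $O(\log n)$ give the $\ell_1$ error bound, and estimates never underestimate on non-negative streams. The salted built-in `hash` is a simplification standing in for proper pairwise-independent hash functions.

```python
import random

class CountMin:
    """Count-Min sketch; estimates never underestimate for non-negative streams."""

    def __init__(self, width, depth, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.tables = [[0] * width for _ in range(depth)]
        # one random salt per row; a stand-in for pairwise-independent hashing
        self.salts = [rng.randrange(1, 2**61 - 1) for _ in range(depth)]

    def _hash(self, item, j):
        return hash((self.salts[j], item)) % self.width

    def update(self, item, count=1):
        for j in range(self.depth):
            self.tables[j][self._hash(item, j)] += count

    def estimate(self, item):
        # each row can only over-count (collisions add, never subtract),
        # so min over rows still satisfies estimate >= true count when x >= 0
        return min(self.tables[j][self._hash(item, j)]
                   for j in range(self.depth))
```

The negative results above say, roughly, that no sketch can keep this deterministic one-sided guarantee while shrinking the error from $\varepsilon\|x\|_1$ to $\varepsilon\|x\|_2$ in Count-Sketch-like space.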
We consider an elliptic linear-quadratic parameter estimation problem with a finite number of parameters. An adaptive finite element method driven by an a posteriori error estimator for the error in the parameters is presented. Unlike prior results in the literature, our estimator, which is composed of standard energy error residual estimators for the state equation and suitable co-state problems, reflects the faster convergence of the parameter error compared to the (co-)state variables. We show optimal convergence rates for our method; in particular, and unlike prior works, we prove that the estimator decreases with a rate that is the sum of the best approximation rates of the state and co-state variables. Experiments confirm that our method matches the convergence rate of the parameter error.
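The abstract gives no pseudocode, but adaptive FEM loops of this kind iterate solve, estimate, mark, refine. A standard marking step (hypothetical here, not the paper's parameter-error estimator) is Dörfler bulk marking on the element-wise error indicators:

```python
def dorfler_mark(eta, theta=0.5):
    """Doerfler (bulk) marking: a smallest set of elements whose squared
    indicators sum to at least theta times the total squared estimator.

    eta is a list of per-element error indicators; theta in (0, 1) is the
    bulk parameter.  Illustrative building block only.
    """
    order = sorted(range(len(eta)), key=lambda i: -eta[i])
    total = sum(e * e for e in eta)
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += eta[i] ** 2
        if acc >= theta * total:
            break
    return marked
```

In the method described above, the indicators would combine the residual estimators of the state and co-state problems.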
Branchwidth determines how graphs, and, more generally, arbitrary connectivity functions (basically, symmetric and submodular set functions) can be decomposed into a tree-like structure by specific cuts. We develop a general framework for designing fixed-parameter tractable (FPT) 2-approximation algorithms for branchwidth of connectivity functions. The first ingredient of our framework is combinatorial. We prove a structural theorem establishing that either a sequence of particular refinement operations can decrease the width of a branch decomposition, or the width of the decomposition is already within a factor of 2 of the optimum. The second ingredient is an efficient implementation of the refinement operations for branch decompositions that support efficient dynamic programming. We present two concrete applications of our general framework. $\bullet$ An algorithm that, for a given $n$-vertex graph $G$ and integer $k$, in time $2^{2^{O(k)}} n^2$ either constructs a rank decomposition of $G$ of width at most $2k$ or concludes that the rankwidth of $G$ is more than $k$. It also yields a $(2^{2k+1}-1)$-approximation algorithm for cliquewidth within the same time complexity, which, in turn, improves to $f(k)n^2$ the running times of various algorithms on graphs of cliquewidth $k$. Breaking the "cubic barrier" for rankwidth and cliquewidth was an open problem in the area. $\bullet$ An algorithm that, for a given $n$-vertex graph $G$ and integer $k$, in time $2^{O(k)} n$ either constructs a branch decomposition of $G$ of width at most $2k$ or concludes that the branchwidth of $G$ is more than $k$. This improves over the 3-approximation that follows from the recent treewidth 2-approximation of Korhonen [FOCS 2021].
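The "connectivity function" abstraction can be grounded in a small example: the edge-cut function of a graph is the canonical symmetric and submodular case, and both axioms can be checked by brute force on a tiny instance. This sketch is illustrative only, not part of the paper's framework.

```python
from itertools import combinations

def cut(edges, S):
    """Edge-cut function: number of edges with exactly one endpoint in S."""
    return sum(1 for u, v in edges if (u in S) != (v in S))

def is_symmetric_submodular(edges, V):
    """Brute-force check of the two connectivity-function axioms:
    f(S) = f(V \\ S)  and  f(S) + f(T) >= f(S | T) + f(S & T)."""
    V = frozenset(V)
    subsets = [frozenset(c) for r in range(len(V) + 1)
               for c in combinations(sorted(V), r)]
    f = {S: cut(edges, S) for S in subsets}
    symmetric = all(f[S] == f[V - S] for S in subsets)
    submodular = all(f[S] + f[T] >= f[S | T] + f[S & T]
                     for S in subsets for T in subsets)
    return symmetric and submodular

# sanity check on a 4-cycle: the edge-cut function satisfies both axioms
assert is_symmetric_submodular([(0, 1), (1, 2), (2, 3), (3, 0)], {0, 1, 2, 3})
```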
We consider a stochastic bandit problem with a possibly infinite number of arms. We write $p^*$ for the proportion of optimal arms and $\Delta$ for the minimal mean-gap between optimal and sub-optimal arms. We characterize the optimal learning rates both in the cumulative regret setting and in the best-arm identification setting in terms of the problem parameters $T$ (the budget), $p^*$, and $\Delta$. For the objective of minimizing the cumulative regret, we provide a lower bound of order $\Omega(\log(T)/(p^*\Delta))$ and a UCB-style algorithm with a matching upper bound up to a factor of $\log(1/\Delta)$. Our algorithm needs $p^*$ to calibrate its parameters, and we prove that this knowledge is necessary, since adapting to $p^*$ in this setting is impossible. For best-arm identification we also provide a lower bound of order $\Omega(\exp(-cT\Delta^2 p^*))$ on the probability of outputting a sub-optimal arm, where $c>0$ is an absolute constant. We also provide an elimination algorithm with an upper bound matching the lower bound up to a factor of order $\log(T)$ in the exponential, and that needs neither $p^*$ nor $\Delta$ as a parameter. Our results apply directly to the three related problems of competing against the $j$-th best arm, identifying an $\epsilon$-good arm, and finding an arm with mean larger than a quantile of a known order.
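For illustration, a UCB-style strategy for this regime can run classic UCB1 on a random subsample of arms whose size is calibrated with $p^*$ (on the order of $\log(T)/p^*$ arms, so that an optimal arm is sampled with high probability). The sketch below is a hypothetical stand-in for the abstract's algorithm; `reservoir` and the calibration are assumptions for the example.

```python
import math

def ucb_on_subsample(reservoir, p_star, T):
    """UCB1 on a random subsample drawn from an infinite arm reservoir.

    reservoir() returns a fresh random arm; an arm is a callable producing
    i.i.d. rewards in [0, 1].  Subsample size log(T)/p_star is the
    calibration the abstract says requires knowledge of p*.
    """
    n = max(1, math.ceil(math.log(T) / p_star))
    arms = [reservoir() for _ in range(n)]
    counts, sums, total = [0] * n, [0.0] * n, 0.0
    for t in range(1, T + 1):
        if t <= n:
            a = t - 1                       # play every sampled arm once
        else:                               # then maximize the UCB index
            a = max(range(n), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * math.log(t) / counts[i]))
        r = arms[a]()
        counts[a] += 1
        sums[a] += r
        total += r
    return total
```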
In the minimum $k$-cut problem, we want to find the minimum number of edges whose deletion breaks the input graph into at least $k$ connected components. The classic algorithm of Karger and Stein runs in $\tilde O(n^{2k-2})$ time, and recent, exciting developments have improved the running time to $O(n^k)$. For general, weighted graphs, this is tight assuming popular hardness conjectures. In this work, we show that, perhaps surprisingly, $O(n^k)$ is not the right answer for simple, unweighted graphs. We design an algorithm that runs in time $O(n^{(1-\epsilon)k})$, where $\epsilon>0$ is an absolute constant, breaking the natural $n^k$ barrier. This establishes a separation between the unweighted and weighted cases of the problem.
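For context, the Karger-Stein line of work referenced above starts from random edge contraction. A minimal Monte-Carlo sketch for unweighted graphs follows; it is illustrative background only, far from the $O(n^{(1-\epsilon)k})$ algorithm.

```python
import random

def contract_to_k(edges, n, k, rng):
    """One Karger-style run: contract uniformly random edges (skipping those
    already inside one supernode) until k supernodes remain, then count the
    surviving crossing edges.  Assumes a connected graph on nodes 0..n-1."""
    parent = list(range(n))
    def find(x):                            # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    components = n
    while components > k:
        u, v = edges[rng.randrange(len(edges))]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

def min_k_cut_monte_carlo(edges, n, k, trials=2000, seed=0):
    """Best cut over independent runs; the number of trials needed is what
    the ~O(n^(2k-2)) success-probability analysis bounds."""
    rng = random.Random(seed)
    return min(contract_to_k(edges, n, k, rng) for _ in range(trials))
```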
Online allocation problems with resource constraints are central problems in revenue management and online advertising. In these problems, requests arrive sequentially during a finite horizon and, for each request, a decision maker needs to choose an action that consumes a certain amount of resources and generates reward. The objective is to maximize cumulative rewards subject to a constraint on the total consumption of resources. In this paper, we consider a data-driven setting in which the reward and resource consumption of each request are generated using an input model that is unknown to the decision maker. We design a general class of algorithms that attain good performance in various input models without knowing which type of input they are facing. In particular, our algorithms are asymptotically optimal under independent and identically distributed inputs as well as various non-stationary stochastic input models, and they attain an asymptotically optimal fixed competitive ratio when the input is adversarial. Our algorithms operate in the Lagrangian dual space: they maintain a dual multiplier for each resource that is updated using online mirror descent. By choosing the reference function accordingly, we recover the dual sub-gradient descent and dual multiplicative weights update algorithms. The resulting algorithms are simple, fast, and do not require convexity in the revenue function, consumption function, or action space, in contrast to existing methods for online allocation problems. We discuss applications to network revenue management, online bidding in repeated auctions with budget constraints, online proportional matching with high entropy, and personalized assortment optimization with limited inventory.
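A minimal single-resource instance of this dual scheme, using the squared-Euclidean reference function (for which, as the abstract notes, mirror descent reduces to projected dual subgradient descent): the names `rho`, `eta`, and the input encoding below are illustrative assumptions, not the paper's interface.

```python
def dual_descent_allocation(requests, rho, eta):
    """Dual subgradient descent for online allocation with one resource.

    rho is the per-period budget B/T and eta the step size.  Each request is
    a list of (reward, cost) action pairs; skipping a request is always
    allowed and is encoded as the action (0, 0).
    """
    mu = 0.0                                   # dual price of the resource
    total_reward, spent = 0.0, 0.0
    budget = rho * len(requests)
    for actions in requests:
        # greedy action against the current dual price (note: no convexity
        # of rewards, costs, or the action set is needed for this argmax)
        reward, cost = max(actions + [(0.0, 0.0)],
                           key=lambda rc: rc[0] - mu * rc[1])
        if spent + cost <= budget:             # hard resource constraint
            total_reward += reward
            spent += cost
        else:
            cost = 0.0                         # request effectively skipped
        # subgradient step: raise the price when consuming faster than rho
        mu = max(0.0, mu + eta * (cost - rho))
    return total_reward
```

Replacing the additive update with a multiplicative one (e.g. scaling `mu` by `exp(eta * (cost - rho))`) gives the multiplicative-weights variant mentioned in the abstract.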
In genome rearrangements, the mutational event transposition swaps two adjacent blocks of genes in one chromosome. The Transposition Distance Problem (TDP) aims to find the minimum number of transpositions required to transform one chromosome into another, both represented as permutations. The TDP can be reduced to the problem of Sorting by Transpositions (SBT). SBT is $\mathcal{NP}$-hard, and the best approximation algorithm, with a ratio of $1.375$, was proposed by Elias and Hartman. Their algorithm employs simplification, a technique used to transform an input permutation $\pi$ into a simple permutation $\hat{\pi}$, presumably easier to handle. The permutation $\hat{\pi}$ is obtained by inserting new symbols into $\pi$ in such a way that the lower bound on the transposition distance of $\pi$ is preserved for $\hat{\pi}$. The simplification is guaranteed to preserve the lower bound, but not the transposition distance itself. In this paper, we first show that the algorithm of Elias and Hartman (the EH algorithm) may require one extra transposition above the approximation ratio of $1.375$, depending on how the input permutation is simplified. Next, using an algebraic approach, we propose a new upper bound for the transposition distance and a new $1.375$-approximation algorithm to solve SBT that skips simplification and ensures the approximation ratio of $1.375$ for all of $S_n$. We implemented our algorithm and the EH algorithm. Regarding the implementation of the EH algorithm, two issues needed to be fixed. We tested both algorithms against all permutations of size $n$, $2\leq n \leq 12$. The results show that the EH algorithm exceeds the approximation ratio of $1.375$ for permutations of size greater than $7$. Finally, we investigate the performance of both implementations on longer permutations, of maximum length $500$.
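To fix notation, the transposition operation and a classic breakpoint-style lower bound (the kind of bound simplification must preserve) can be written in a few lines. This is background machinery, not the paper's algebraic bound.

```python
import math

def transposition(pi, i, j, k):
    """Exchange the adjacent blocks pi[i:j] and pi[j:k] (0-based, i < j < k)."""
    return pi[:i] + pi[j:k] + pi[i:j] + pi[k:]

def breakpoint_lower_bound(pi):
    """Each transposition removes at most three breakpoints, so
    ceil(b(pi) / 3) lower-bounds the transposition distance."""
    ext = [0] + list(pi) + [len(pi) + 1]          # frame with 0 and n + 1
    b = sum(1 for a, c in zip(ext, ext[1:]) if c - a != 1)
    return math.ceil(b / 3)

# one transposition sorts (3, 1, 2): move block [3] past block [1, 2]
assert transposition([3, 1, 2], 0, 1, 3) == [1, 2, 3]
assert breakpoint_lower_bound([3, 1, 2]) == 1
```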
We study the $c$-approximate near neighbor problem under the continuous Fr\'echet distance: Given a set of $n$ polygonal curves with $m$ vertices, a radius $\delta > 0$, and a parameter $k \leq m$, we want to preprocess the curves into a data structure that, given a query curve $q$ with $k$ vertices, either returns an input curve with Fr\'echet distance at most $c\cdot \delta$ to $q$, or returns that there exists no input curve with Fr\'echet distance at most $\delta$ to $q$. We focus on the case where the input and the queries are one-dimensional polygonal curves -- also called time series -- and we give a comprehensive analysis for this case. We obtain new upper bounds that provide different tradeoffs between approximation factor, preprocessing time, and query time. Our data structures improve upon the state of the art in several ways. We show that for any $0 < \varepsilon \leq 1$ an approximation factor of $(1+\varepsilon)$ can be achieved within the same asymptotic time bounds as the previously best result for $(2+\varepsilon)$. Moreover, we show that an approximation factor of $(2+\varepsilon)$ can be obtained by using preprocessing time and space $O(nm)$, which is linear in the input size, and query time in $O(\frac{1}{\varepsilon})^{k+2}$, where the previously best result used preprocessing time in $n \cdot O(\frac{m}{\varepsilon k})^k$ and query time in $O(1)^k$. We complement our upper bounds with matching conditional lower bounds based on the Orthogonal Vectors Hypothesis. Interestingly, some of our lower bounds already hold for any super-constant value of $k$. This is achieved by proving hardness of a one-sided sparse version of the Orthogonal Vectors problem as an intermediate problem, which we believe to be of independent interest.
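As a baseline for what these data structures must beat, the distance itself is computable by the classic quadratic dynamic program, and a naive near-neighbor query is a linear scan over all $n$ curves. The sketch below uses the discrete Fréchet distance, which upper-bounds the continuous distance the abstract's results concern; `linear_scan_query` and its parameters are illustrative.

```python
def discrete_frechet(P, Q):
    """Classic O(|P| |Q|) dynamic program for the discrete Frechet distance
    of one-dimensional curves (time series), given as lists of floats."""
    INF = float("inf")
    n, m = len(P), len(Q)
    D = [[INF] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = abs(P[i] - Q[j])               # 1-d ground distance
            if i == 0 and j == 0:
                reach = 0.0
            elif i == 0:
                reach = D[0][j - 1]
            elif j == 0:
                reach = D[i - 1][0]
            else:
                reach = min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
            D[i][j] = max(d, reach)
    return D[n - 1][m - 1]

def linear_scan_query(curves, q, c, delta):
    """Naive c-approximate near-neighbor query: O(n m k) per query, which is
    exactly the cost the preprocessing/query tradeoffs above improve upon."""
    for curve in curves:
        if discrete_frechet(curve, q) <= c * delta:
            return curve
    return None
```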