The maximum independent set problem is one of the most important problems in graph algorithms and has been extensively studied in the line of research on the worst-case analysis of exact algorithms for NP-hard problems. In the weighted version, each vertex of the graph is associated with a weight and the goal is to find an independent set of maximum total vertex weight. In this paper, we design several reduction rules and a fast exact algorithm for the maximum weighted independent set problem, and analyze the algorithm's running time with the measure-and-conquer technique. Our algorithm works on general weighted graphs and achieves good running time bounds on sparse graphs. If the graph has average degree at most 3, our algorithm runs in $O^*(1.1443^n)$ time and polynomial space, improving the previous running time bounds for the problem on cubic graphs using polynomial space.
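For intuition only, here is a minimal Python sketch of the generic branch-and-reduce pattern that such exact algorithms refine: one trivial reduction (take isolated vertices) plus branching on a maximum-degree vertex. It does not implement the paper's reduction rules or its measure-and-conquer analysis, and it carries no nontrivial running time guarantee.

```python
# Toy branch-and-reduce for Maximum Weight Independent Set (illustrative only;
# not the paper's reduction rules or measure-and-conquer analysis).

def mwis(adj, w):
    """adj: dict vertex -> set of neighbours, w: dict vertex -> weight."""
    if not adj:
        return 0
    # Reduction: a vertex with no neighbours is always taken.
    for v, nbrs in adj.items():
        if not nbrs:
            rest = {u: adj[u] - {v} for u in adj if u != v}
            return w[v] + mwis(rest, w)
    # Branch on a maximum-degree vertex: either exclude it or include it.
    v = max(adj, key=lambda u: len(adj[u]))
    # Case 1: exclude v.
    without_v = {u: adj[u] - {v} for u in adj if u != v}
    best = mwis(without_v, w)
    # Case 2: include v, removing its closed neighbourhood.
    removed = adj[v] | {v}
    with_v = {u: adj[u] - removed for u in adj if u not in removed}
    best = max(best, w[v] + mwis(with_v, w))
    return best

# Example: a path a-b-c with weights 2, 3, 2 -> optimum {a, c} of weight 4.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(mwis(adj, {'a': 2, 'b': 3, 'c': 2}))  # 4
```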
We study quantum algorithms for several fundamental string problems, including Longest Common Substring, Lexicographically Minimal String Rotation, and Longest Square Substring. These problems have been widely studied in the stringology literature since the 1970s, and are known to be solvable by near-linear time classical algorithms. In this work, we give quantum algorithms for these problems with near-optimal query complexities and time complexities. Specifically, we show that:
- Longest Common Substring can be solved by a quantum algorithm in $\tilde O(n^{2/3})$ time, improving upon the recent $\tilde O(n^{5/6})$-time algorithm by Le Gall and Seddighin (2020). Our algorithm uses the MNRS quantum walk framework, together with a careful combination of string synchronizing sets (Kempa and Kociumaka, 2019) and generalized difference covers.
- Lexicographically Minimal String Rotation can be solved by a quantum algorithm in $n^{1/2 + o(1)}$ time, improving upon the recent $\tilde O(n^{3/4})$-time algorithm by Wang and Ying (2020). We design our algorithm by first giving a new classical divide-and-conquer algorithm in near-linear time based on exclusion rules, and then speeding it up quadratically using nested Grover search and quantum minimum finding.
- Longest Square Substring can be solved by a quantum algorithm in $\tilde O(\sqrt{n})$ time. Our algorithm is an adaptation of the algorithm by Le Gall and Seddighin (2020) for the Longest Palindromic Substring problem, but uses additional techniques to overcome the difficulty that binary search no longer applies.
Our techniques naturally extend to other related string problems, such as Longest Repeated Substring, Longest Lyndon Substring, and Minimal Suffix.
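As a point of reference for the second result, the classical linear-time baseline for Lexicographically Minimal String Rotation can be as simple as the standard two-pointer scan below (a Booth-style algorithm); the quantum algorithm described above is a different construction entirely, and this snippet only shows the classical problem being sped up.

```python
# Classical O(n) baseline for Lexicographically Minimal String Rotation
# (standard two-pointer "smallest rotation" scan, in the spirit of Booth's algorithm).

def least_rotation_start(s):
    """Return the start index of the lexicographically smallest rotation of s."""
    n = len(s)
    i, j, k = 0, 1, 0              # i, j: candidate start positions; k: matched length
    while i < n and j < n and k < n:
        a, b = s[(i + k) % n], s[(j + k) % n]
        if a == b:
            k += 1
        else:
            if a > b:
                i = i + k + 1      # rotations starting in [i, i+k] cannot be minimal
            else:
                j = j + k + 1
            if i == j:
                j += 1
            k = 0
    return min(i, j)

s = "bba"
start = least_rotation_start(s)
print(s[start:] + s[:start])       # "abb"
```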
Given an alphabet size $m\in\mathbb{N}$, thought of as a constant, and $\vec{k} = (k_1,\ldots,k_m)$ whose entries sum up to $n$, the $\vec{k}$-multi-slice is the set of vectors $x\in [m]^n$ in which each symbol $i\in [m]$ appears precisely $k_i$ times. We show an invariance principle relating low-degree functions over the multi-slice to functions over the product space $([m]^n,\mu^n)$ in which $\mu(i) = k_i/n$. This answers a question raised by Filmus et al. As applications of the invariance principle, we show:
1. An analogue of the "dictatorship test implies computational hardness" paradigm for problems with perfect completeness, for a certain class of dictatorship tests. Our computational hardness is proved assuming a recent strengthening of the Unique-Games Conjecture, called the Rich $2$-to-$1$ Games Conjecture. Using this analogue, we show that, assuming the Rich $2$-to-$1$ Games Conjecture, (a) there is an $r$-ary CSP $\mathcal{P}_r$ for which it is NP-hard to distinguish satisfiable instances of the CSP from instances that are at most $\frac{2r+1}{2^r} + o(1)$ satisfiable, and (b) it is NP-hard to distinguish $3$-colorable graphs from graphs in which every independent set has fractional size $o(1)$.
2. A reduction of the problem of studying expectations of products of functions on the multi-slice to studying expectations of products of functions on correlated product spaces. In particular, we are able to deduce analogues of the Gaussian bounds from \cite{MosselGaussian} for the multi-slice.
3. In a companion paper, we show further applications of our invariance principle in extremal combinatorics, and more specifically to proving removal lemmas for a wide family of hypergraphs $H$ called $\zeta$-forests, a natural extension of the well-studied case of matchings.
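Informally, and omitting the precise low-degree and smoothness hypotheses stated in the paper, the invariance principle compares quantities of the form
\[
\mathop{\mathbb{E}}_{x \sim \mathcal{U}_{\vec{k}}}\bigl[\Psi(f(x))\bigr]
\quad\text{and}\quad
\mathop{\mathbb{E}}_{x \sim \mu^{n}}\bigl[\Psi(f(x))\bigr],
\qquad
\mathcal{U}_{\vec{k}} = \bigl\{x \in [m]^n : |\{j : x_j = i\}| = k_i \text{ for all } i \in [m]\bigr\},\ \ \mu(i) = k_i/n,
\]
for low-degree functions $f$ and suitable test functions $\Psi$, showing that the two are close.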
Can every connected graph on $n$ vertices burn in $\lceil \sqrt{n} \rceil $ steps? While this conjecture remains open, we prove that it is asymptotically true when the graph is much larger than its \emph{growth}, which is the maximal distance of a vertex to a well-chosen path in the graph. In fact, we prove that the conjecture for graphs of bounded growth boils down to a finite number of cases. Through an improved (but still weaker) bound for all trees, we argue that the conjecture almost holds for all graphs with minimum degree at least $3$ and holds for all large enough graphs with minimum degree at least $4$. Previously, the best bound of this kind required the minimum degree to be at least $23$.
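To make the conjecture concrete: a graph on $n$ vertices burns in $b$ rounds exactly when there are sources $x_1,\dots,x_b$ such that every vertex is within distance $b-i$ of some $x_i$. The brute-force checker below (plain Python, exponential in $b$) only illustrates this definition on tiny graphs; it has nothing to do with the growth-based arguments of the paper.

```python
# Brute-force check of whether a small graph burns in b rounds.
from itertools import permutations
from collections import deque

def distances_from(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def burns_in(adj, b):
    verts = list(adj)
    dist = {v: distances_from(adj, v) for v in verts}
    for sources in permutations(verts, min(b, len(verts))):
        covered = set()
        for i, x in enumerate(sources, start=1):   # x_i is lit in round i
            covered |= {v for v, d in dist[x].items() if d <= b - i}
        if len(covered) == len(verts):
            return True
    return False

# Path on 4 vertices: it burns in ceil(sqrt(4)) = 2 rounds.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(burns_in(path4, 2))   # True
```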
In this paper, we study the problem of designing prefix-free encoding schemes having minimum average code length that can be decoded efficiently under a decode cost model that captures memory hierarchy induced cost functions. We also study a special case of this problem that is closely related to the length-limited Huffman coding (LLHC) problem; we call it the {\em soft-length-limited Huffman coding} problem. In this version, there is a penalty associated with each of the $n$ characters of the alphabet whose encoding exceeds a specified bound $D$ ($\leq n$), where the penalty increases linearly with the length of the encoding beyond $D$. The goal is to find a prefix-free encoding having minimum average code length and total penalty within a pre-specified bound ${\cal P}$. This generalizes the LLHC problem. We present an algorithm that solves this problem in $O(nD)$ time. We then study a further generalization in which the penalty function and the objective function can both be arbitrary monotonically non-decreasing functions of the codeword length, and provide dynamic-programming-based exact and PTAS algorithms for this setting.
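For contrast with the constrained variants studied here, classical (unconstrained) Huffman coding already achieves minimum average code length when there is no length bound, no penalty, and no decode-cost model; the short sketch below is that baseline only, not the $O(nD)$ algorithm of the paper.

```python
# Classic (unconstrained) Huffman coding: repeatedly merge the two lowest-frequency subtrees.
import heapq
from itertools import count

def huffman_code(freqs):
    """freqs: dict symbol -> frequency. Returns dict symbol -> codeword (bit string)."""
    tie = count()   # tie-breaker so heapq never has to compare code dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))
```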
For the misspecified linear Markov decision process (MLMDP) model of Jin et al. [2020], we propose an algorithm with three desirable properties. (P1) Its regret after $K$ episodes scales as $K \max \{ \varepsilon_{\text{mis}}, \varepsilon_{\text{tol}} \}$, where $\varepsilon_{\text{mis}}$ is the degree of misspecification and $\varepsilon_{\text{tol}}$ is a user-specified error tolerance. (P2) Its space and per-episode time complexities remain bounded as $K \rightarrow \infty$. (P3) It does not require $\varepsilon_{\text{mis}}$ as input. To our knowledge, this is the first algorithm satisfying all three properties. For concrete choices of $\varepsilon_{\text{tol}}$, we also improve existing regret bounds (up to log factors) while achieving either (P2) or (P3) (existing algorithms satisfy neither). At a high level, our algorithm generalizes (to MLMDPs) and refines the Sup-Lin-UCB algorithm, which Takemura et al. [2021] recently showed satisfies (P3) for contextual bandits. We also provide an intuitive interpretation of their result, which informs the design of our algorithm.
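As background, the ridge-regression-plus-optimism step that SupLinUCB layers its phased eliminations on (and that the paper generalizes to MLMDPs) looks as follows. This is a plain LinUCB-style sketch on a synthetic linear bandit, with made-up hyperparameters; it is not the proposed algorithm.

```python
# Minimal LinUCB-style loop on a synthetic linear contextual bandit (illustration only).
import numpy as np

rng = np.random.default_rng(0)
d, K, T, alpha, lam = 3, 5, 2000, 1.0, 1.0
theta_star = np.array([1.0, -0.5, 0.3])              # unknown to the learner

A = lam * np.eye(d)                                  # ridge-regularized Gram matrix
b = np.zeros(d)                                      # accumulated feature-weighted rewards
regret = 0.0
for t in range(T):
    X = rng.normal(size=(K, d))                      # contexts for this round
    theta_hat = np.linalg.solve(A, b)                # ridge-regression estimate
    A_inv = np.linalg.inv(A)
    bonus = alpha * np.sqrt(np.einsum('kd,de,ke->k', X, A_inv, X))
    a = int(np.argmax(X @ theta_hat + bonus))        # optimistic action choice
    reward = X[a] @ theta_star + 0.1 * rng.normal()  # noisy linear reward
    A += np.outer(X[a], X[a])
    b += reward * X[a]
    regret += (X @ theta_star).max() - X[a] @ theta_star
print(f"cumulative regret after {T} rounds: {regret:.1f}")
```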
In this article, we focus on extending the notion of lattice linearity to self-stabilizing programs. Lattice linearity allows a node to execute its actions with old information about the state of other nodes and still preserve correctness. It increases the concurrency of the program execution by eliminating the need for synchronization among its nodes. We develop the extension, denoted as eventually lattice linear algorithms, through the example of the service-demand based minimal dominating set (SDDS) problem, a generalization of the dominating set problem; the resulting algorithm converges in $2n$ moves. We then show that the same approach applies to various other problems, including minimal vertex cover, maximal independent set, and graph coloring.
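The combinatorial object underlying SDDS is a minimal dominating set: a dominating set from which no vertex can be removed without losing domination. The sequential sketch below only illustrates that notion; it is not the distributed, self-stabilizing, lattice-linear algorithm described above.

```python
# Sequential computation of a *minimal* (not minimum) dominating set, for illustration only.

def is_dominating(adj, S):
    return all(v in S or any(u in S for u in adj[v]) for v in adj)

def minimal_dominating_set(adj):
    S = set(adj)                                       # start with every vertex in the set
    for v in sorted(adj, key=lambda u: len(adj[u])):   # try to drop low-degree vertices first
        if is_dominating(adj, S - {v}):
            S.remove(v)
    return S

# Star K_{1,3}: the centre 0 dominates everything, so {0} is returned.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(minimal_dominating_set(star))                    # {0}
```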
The most important computational problem on lattices is the Shortest Vector Problem (SVP). In this paper, we present new algorithms that improve the state of the art for provable classical and quantum algorithms for SVP. We present the following results.
$\bullet$ A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. For any positive integer $4\leq q\leq \sqrt{n}$, our algorithm takes $q^{13n+o(n)}$ time and requires $\mathrm{poly}(n)\cdot q^{16n/q^2}$ memory. This tradeoff, which ranges from enumeration ($q=\sqrt{n}$) to sieving ($q$ constant), is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling above the smoothing parameter.
$\bullet$ A quantum algorithm for SVP that runs in time $2^{0.953n+o(n)}$ and requires $2^{0.5n+o(n)}$ classical memory and $\mathrm{poly}(n)$ qubits. In the Quantum Random Access Memory (QRAM) model, this algorithm takes only $2^{0.873n+o(n)}$ time and requires a QRAM of size $2^{0.1604n+o(n)}$, $\mathrm{poly}(n)$ qubits, and $2^{0.5n}$ classical space. This improves over the previously fastest classical (and also fastest quantum) algorithm, due to [ADRS15], which has time and space complexity $2^{n+o(n)}$.
$\bullet$ A classical algorithm for SVP that runs in $2^{1.741n+o(n)}$ time and $2^{0.5n+o(n)}$ space. This improves over an algorithm of [CCL18] that has the same space complexity.
The time complexities of our classical and quantum algorithms are obtained using a known upper bound on a quantity related to the lattice kissing number, namely $2^{0.402n}$. We conjecture that for most lattices this quantity is $2^{o(n)}$. Assuming that this is the case, our classical algorithm runs in time $2^{1.292n+o(n)}$, our quantum algorithm runs in time $2^{0.750n+o(n)}$, and our quantum algorithm in the QRAM model runs in time $2^{0.667n+o(n)}$.
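For readers unfamiliar with SVP, the brute-force sketch below simply enumerates small integer combinations of the basis vectors and returns the shortest nonzero one. Real enumeration prunes this search and sieving works very differently; neither the tradeoff nor the quantum algorithms above are reflected here.

```python
# Brute-force SVP in tiny dimension, just to make the problem concrete (exponential time).
import itertools
import numpy as np

def shortest_vector_bruteforce(B, radius=3):
    """B: basis matrix with basis vectors as rows; coefficients searched in [-radius, radius]."""
    n = B.shape[0]
    best_vec, best_norm = None, float("inf")
    for coeffs in itertools.product(range(-radius, radius + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue                          # the zero vector does not count
        v = np.array(coeffs) @ B
        norm = float(np.linalg.norm(v))
        if norm < best_norm:
            best_vec, best_norm = v, norm
    return best_vec, best_norm

B = np.array([[1, 0, 0], [4, 5, 0], [7, 8, 9]])
print(shortest_vector_bruteforce(B))          # norm 1.0, attained by +-(1, 0, 0)
```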
We provide an algorithm requiring only $O(N^2)$ time to compute the maximum weight independent set of interval filament graphs. This also implies an $O(N^4)$-time algorithm to compute the maximum weight induced matching of interval filament graphs. Both algorithms significantly improve upon the previous best complexities for these problems: the best known algorithms for maximum weight independent set and maximum weight induced matching previously required $O(N^3)$ and $O(N^6)$ time, respectively.
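On plain interval graphs (a much simpler subclass than interval filament graphs), maximum weight independent set reduces to weighted interval scheduling and is solvable by the textbook dynamic program sketched below; the $O(N^2)$ algorithm for interval filament graphs is substantially more involved.

```python
# Weighted interval scheduling = MWIS on an interval graph (textbook DP, O(N log N)).
import bisect

def mwis_intervals(intervals):
    """intervals: list of (start, end, weight) with half-open [start, end). Returns max total weight."""
    intervals = sorted(intervals, key=lambda t: t[1])      # sort by right endpoint
    ends = [e for _, e, _ in intervals]
    dp = [0] * (len(intervals) + 1)
    for i, (s, e, w) in enumerate(intervals, start=1):
        j = bisect.bisect_right(ends, s, 0, i - 1)         # intervals ending at or before s
        dp[i] = max(dp[i - 1], dp[j] + w)                  # skip i, or take i plus a compatible prefix
    return dp[-1]

print(mwis_intervals([(0, 3, 4), (2, 5, 2), (4, 7, 4), (6, 9, 7)]))   # 11: take (0,3) and (6,9)
```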
The min-plus product of two $n\times n$ matrices is a fundamental problem in algorithms research. It is known to be equivalent to APSP, and in general no truly subcubic algorithm for it is known. In this paper, we focus on the min-plus product on a special class of matrices, called $\delta$-bounded-difference matrices, in which the difference between any two adjacent entries is bounded by $\delta=O(1)$. Our algorithm runs in randomized time $O(n^{2.779})$, using the fast rectangular matrix multiplication algorithm of [Le Gall \& Urrutia 18]; this is better than $\tilde{O}(n^{2+\omega/3})=O(n^{2.791})$ (with $\omega<2.373$ [Alman \& V.V. Williams 20]) and improves the previous bound of $\tilde{O}(n^{2.824})$ [Bringmann et al. 16]. In the ideal case $\omega=2$, our complexity is $\tilde{O}(n^{2+2/3})$, improving upon Bringmann et al.'s bound of $\tilde{O}(n^{2.755})$.
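For reference, the naive cubic min-plus product that all of these bounds improve upon (in the bounded-difference setting) is just the following; the exponents 2.779, 2.791, 2.824, etc. all sit strictly below this trivial $n^3$.

```python
# Naive cubic min-plus product: C[i][j] = min_k (A[i][k] + B[k][j]).
import numpy as np

def min_plus(A, B):
    n, m = A.shape
    m2, p = B.shape
    assert m == m2
    C = np.full((n, p), np.inf)
    for k in range(m):
        # Update all (i, j) pairs with the k-th "middle" index at once.
        C = np.minimum(C, A[:, k][:, None] + B[k, :][None, :])
    return C

A = np.array([[0., 2.], [1., 0.]])
B = np.array([[0., 3.], [4., 0.]])
print(min_plus(A, B))   # [[0. 2.] [1. 0.]] -- e.g. C[0][1] = min(0+3, 2+0) = 2
```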
Consider a system of $m$ polynomial equations $\{p_i(x) = b_i\}_{i \leq m}$ of degree $D\geq 2$ in the $n$-dimensional variable $x \in \mathbb{R}^n$, where each coefficient of every $p_i$ and each $b_i$ is chosen independently at random from some continuous distribution. We study the basic question of determining the smallest $m$ -- the algorithmic threshold -- for which efficient algorithms can find refutations (i.e., certificates of unsatisfiability) for such systems. This setting generalizes problems such as refuting random SAT instances, low-rank matrix sensing, and certifying pseudo-randomness of Goldreich's candidate generators and their generalizations. We show that for every $d \in \mathbb{N}$, the $(n+m)^{O(d)}$-time canonical sum-of-squares (SoS) relaxation refutes such a system with high probability whenever $m \geq O(n) \cdot (\frac{n}{d})^{D-1}$. We prove a lower bound in the restricted low-degree polynomial model of computation, which suggests that this trade-off between SoS degree and the number of equations is nearly tight for all $d$. We also confirm the predictions of this lower bound in a limited setting by showing a lower bound on the canonical degree-$4$ sum-of-squares relaxation for refuting random quadratic polynomials. Together, our results provide evidence for an algorithmic threshold for the problem at $m \gtrsim \widetilde{O}(n) \cdot n^{(1-\delta)(D-1)}$ for $2^{n^{\delta}}$-time algorithms, for all $\delta$.
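To make the input model concrete for $D = 2$, the snippet below samples a random quadratic system $\{p_i(x) = b_i\}$ with i.i.d. Gaussian coefficients and right-hand sides (one continuous distribution among the many the statement allows); the SoS refutation itself is a large semidefinite program and is not shown here.

```python
# Sampling a random quadratic system p_i(x) = <x, Q_i x> + <c_i, x> = b_i (illustration of the input model only).
import numpy as np

def sample_random_quadratic_system(n, m, seed=0):
    rng = np.random.default_rng(seed)
    Q = rng.normal(size=(m, n, n))     # quadratic coefficients of each p_i
    c = rng.normal(size=(m, n))        # linear coefficients of each p_i
    b = rng.normal(size=m)             # right-hand sides b_i
    def residuals(x):                  # p_i(x) - b_i for all i
        return np.einsum('ijk,j,k->i', Q, x, x) + c @ x - b
    return Q, c, b, residuals

Q, c, b, residuals = sample_random_quadratic_system(n=10, m=40)
print(np.abs(residuals(np.zeros(10))).min())   # at x = 0 the residuals are just -b_i
```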