Given a Binary Decision Diagram $B$ of a Boolean function $\varphi$ in $n$ variables, it is well known that all $\varphi$-models can be enumerated in output-polynomial time, and in a compressed way (using don't-care symbols). We show that all $N$ $\varphi$-models of fixed Hamming weight $k$ can likewise be enumerated in time polynomial in $n$, $|B|$ and $N$. Furthermore, novel wildcards again enable a compressed enumeration of these models.
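For reference, the following Python sketch enumerates the weight-$k$ models by a straightforward top-down recursion. It assumes a quasi-reduced diagram in which every root-to-terminal path tests $x_0,\ldots,x_{n-1}$ in order (internal nodes are tuples \texttt{(i, low, high)}, terminals are the strings \texttt{"0"} and \texttt{"1"}); it does not attempt the wildcard compression described above.

\begin{verbatim}
def weight_k_models(root, n, k):
    """Yield all weight-k models as 0/1 tuples of length n.

    Assumed node layout (not the paper's): internal nodes are tuples
    (i, low, high) testing variable x_i, terminals are "0" and "1",
    and every path tests x_0, ..., x_{n-1} in order.
    """
    def rec(node, i, k):
        if node == "0" or k < 0 or k > n - i:
            return                            # dead branch, prune
        if node == "1":                       # here i == n and k == 0
            yield ()
            return
        _, low, high = node
        for tail in rec(low, i + 1, k):       # set x_i = 0
            yield (0,) + tail
        for tail in rec(high, i + 1, k - 1):  # set x_i = 1
            yield (1,) + tail
    yield from rec(root, 0, k)

# Example: the function x_0 XOR x_1 has exactly two models of weight 1.
xor_bdd = (0, (1, "0", "1"), (1, "1", "0"))
assert sorted(weight_k_models(xor_bdd, 2, 1)) == [(0, 1), (1, 0)]
\end{verbatim}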
We present polynomial-time SDP-based algorithms for the following problem: for fixed $k \leq \ell$, given a real number $\epsilon>0$ and a graph $G$ that admits a $k$-colouring with a $\rho$-fraction of the edges coloured properly, return an $\ell$-colouring of $G$ with an $(\alpha \rho - \epsilon)$-fraction of the edges coloured properly, in time polynomial in the size of $G$ and in $1/\epsilon$. Our algorithms are based on those of Frieze and Jerrum [Algorithmica'97] and of Karger, Motwani and Sudan [JACM'98]. For $k = 2, \ell = 3$, our algorithm achieves an approximation ratio $\alpha = 1$, which is the best possible. When $k$ is fixed and $\ell$ grows large, our algorithm achieves an approximation ratio of $\alpha = 1 - o(1 / \ell)$. When $k, \ell$ are both large, our algorithm achieves an approximation ratio of $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell) - O(1 / k^2)$; if we fix $d = \ell - k$ and allow $k, \ell$ to grow large, this is $\alpha = 1 - 1 / \ell + 2 \ln \ell / k \ell - o(\ln \ell / k \ell)$. By extending the results of Khot, Kindler, Mossel and O'Donnell [SICOMP'07] to the promise setting, we show that for large $k$ and $\ell$, assuming Khot's Unique Games Conjecture (UGC), it is NP-hard to achieve an approximation ratio $\alpha$ greater than $1 - 1 / \ell + 2 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided that $\ell$ is bounded by a function that is $o(\exp(\sqrt[3]{k}))$. For the case where $d = \ell - k$ is fixed, this bound matches the performance of our algorithm up to $o(\ln \ell / k \ell)$. Furthermore, by extending the results of Guruswami and Sinop [ToC'13] to the promise setting, we prove that it is NP-hard to achieve an approximation ratio greater than $1 - 1 / \ell + 8 \ln \ell / k \ell + o(\ln \ell / k \ell)$, provided again that $\ell$ is bounded as before (but this time without assuming the UGC).
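For intuition only, here is a hedged Python sketch of the random-projection rounding step used in Frieze--Jerrum-style MAX-$k$-CUT algorithms, on which the algorithms above build: given unit vectors $v_i$ produced by an SDP relaxation (the relaxation itself, and the refinements of the paper, are not reproduced here), each vertex is assigned to the colour whose random Gaussian direction it correlates with most.

\begin{verbatim}
import numpy as np

def round_to_colouring(V, ell, rng=None):
    """Frieze--Jerrum style rounding (sketch only; the SDP solve is assumed
    done): V is an n x dim array of unit vectors from an SDP relaxation,
    and the output is an array of colours in {0, ..., ell-1}."""
    rng = np.random.default_rng() if rng is None else rng
    G = rng.standard_normal((ell, V.shape[1]))  # one random direction per colour
    return np.argmax(V @ G.T, axis=1)

def properly_coloured_fraction(edges, colours):
    """Fraction of edges whose endpoints receive different colours."""
    good = sum(1 for u, v in edges if colours[u] != colours[v])
    return good / max(1, len(edges))
\end{verbatim}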
A graph is called a $(k,\rho)$-graph iff every node can reach $\rho$ of its nearest neighbors in at most $k$ hops. This property has proved useful in the analysis and design of parallel shortest-path algorithms. Any graph can be transformed into a $(k,\rho)$-graph by adding shortcuts. Formally, the $(k,\rho)$-Minimum-Shortcut problem asks for a suitable shortcut set of minimum cardinality. We show that the $(k,\rho)$-Minimum-Shortcut problem is NP-complete in the practical regime of $k \ge 3$ and $\rho = \Theta(n^\epsilon)$ for $\epsilon > 0$. With a related construction, we bound from below the approximation factor of known heuristics for the $(k,\rho)$-Minimum-Shortcut problem and propose algorithmic countermeasures that improve the approximation quality. Further, we describe an integer linear program (ILP) that solves the $(k,\rho)$-Minimum-Shortcut problem optimally. Finally, we compare the practical performance and quality of all algorithms in an empirical study.
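The sketch below spells out one plausible reading of the $(k,\rho)$-property on a weighted graph (this reading, and the adjacency-list format, are assumptions made for illustration, not the paper's formal definition): for every node, each of its $\rho$ nearest nodes by shortest-path distance must be reachable at that distance by a path with at most $k$ edges.

\begin{verbatim}
import heapq

INF = float("inf")

# Hedged sketch: one reading of the (k, rho)-property; the paper's
# formal definition may differ.  adj maps node -> [(neighbour, weight)].

def dijkstra(adj, src):
    """Unrestricted shortest-path distances from src."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, INF):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, INF):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def hop_bounded_distances(adj, src, k):
    """Smallest distance reachable with at most k edges (Bellman-Ford style)."""
    dist = {src: 0.0}
    frontier = {src: 0.0}
    for _ in range(k):
        improved = {}
        for u, du in frontier.items():
            for v, w in adj[u]:
                if du + w < min(dist.get(v, INF), improved.get(v, INF)):
                    improved[v] = du + w
        dist.update(improved)
        frontier = improved
    return dist

def is_k_rho_graph(adj, k, rho):
    """Check the assumed (k, rho)-property for every node."""
    for u in adj:
        full = dijkstra(adj, u)
        nearest = sorted((d, v) for v, d in full.items() if v != u)[:rho]
        limited = hop_bounded_distances(adj, u, k)
        if any(limited.get(v, INF) > d for d, v in nearest):
            return False
    return True
\end{verbatim}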
At STOC 2002, Eiter, Gottlob, and Makino presented a technique called ordered generation that yields an $n^{O(d)}$-delay algorithm listing all minimal transversals of an $n$-vertex hypergraph of degeneracy $d$. Recently, at IWOCA 2019, Conte, Kant\'e, Marino, and Uno asked whether this XP-delay algorithm parameterized by $d$ could be made FPT-delay for a weaker notion of degeneracy, or even when parameterized by the maximum degree $\Delta$, i.e., whether it can be turned into an algorithm with delay $f(\Delta)\cdot n^{O(1)}$ for some computable function $f$. Moreover, as a first step toward answering that question, they noted that they could not achieve these time bounds even for the particular case of minimal dominating set enumeration. In this paper, using ordered generation, we show that an FPT-delay algorithm can be devised for minimal transversal enumeration parameterized by the maximum degree and the dimension, giving a positive and more general answer to the latter question.
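For concreteness, the following Python sketch lists minimal transversals by Berge's classical edge-by-edge method; it merely fixes the object being enumerated and is not the ordered-generation or FPT-delay algorithm discussed above.

\begin{verbatim}
# Not the ordered-generation algorithm of the paper: Berge's sequential
# method for listing all minimal transversals (minimal hitting sets).

def minimal_transversals(edges):
    """edges: iterable of sets of vertices.  Returns a list of frozensets."""
    trans = [frozenset()]
    for e in edges:
        candidates = set()
        for t in trans:
            if t & e:                     # t already hits the new edge
                candidates.add(t)
            else:
                for v in e:               # otherwise extend t by one vertex of e
                    candidates.add(t | {v})
        # keep only inclusion-minimal candidates
        trans = [t for t in candidates
                 if not any(s < t for s in candidates)]
    return trans

# Example: edges {1,2} and {2,3} have minimal transversals {2} and {1,3}.
assert set(minimal_transversals([{1, 2}, {2, 3}])) == \
    {frozenset({2}), frozenset({1, 3})}
\end{verbatim}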
We consider an inverse problem for a finite graph $(X,E)$ where we are given a subset of vertices $B\subset X$ and the distances $d_{(X,E)}(b_1,b_2)$ for all vertices $b_1,b_2\in B$. The distance of points $x_1,x_2\in X$ is defined as the minimal number of edges needed to connect the two vertices, so all edges have length 1. The inverse problem is a discrete version of the boundary rigidity problem in Riemannian geometry or the inverse travel time problem in geophysics. We show that this problem has a unique solution under certain conditions and develop quantum computing methods to solve it. We prove the following uniqueness result: when $(X,E)$ is a tree and $B$ is the set of leaves of the tree, the graph $(X,E)$ can be uniquely determined within the class of all graphs having a fixed number of vertices. We present a quantum computing algorithm which produces a graph $(X,E)$, or one of the graphs compatible with the data, which has a given number of vertices and the required distances between vertices in $B$. To this end we develop an algorithm that takes in a qubit representation of a graph and combines it with Grover's search algorithm. The algorithm can be implemented using only $O(|X|^2)$ qubits, the same order as the number of entries in the adjacency matrix of $(X,E)$. It also achieves a quadratic improvement in computational cost compared to standard classical algorithms. Finally, we consider applications in the theory of computation, and show that a slight modification of the above inverse problem is NP-complete: all NP-problems can be reduced to the discrete inverse problem we consider.
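The forward map of this inverse problem is easy to state in code; the sketch below (with an assumed adjacency-list input) computes the given data, namely the distance matrix restricted to $B$, via breadth-first search. Recovering $(X,E)$ from this matrix is the hard direction studied in the paper and is not attempted here.

\begin{verbatim}
from collections import deque

# Forward map only: compute the data d(b1, b2) for b1, b2 in B.
# Since all edges have length 1, BFS gives the distances.

def bfs_distances(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def boundary_distance_matrix(adj, B):
    data = {}
    for b in B:
        dist = bfs_distances(adj, b)
        for b2 in B:
            data[(b, b2)] = dist.get(b2)   # None if b2 is unreachable
    return data

# Example: the path a - b - c with boundary B = {a, c} gives d(a, c) = 2.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
assert boundary_distance_matrix(adj, ["a", "c"])[("a", "c")] == 2
\end{verbatim}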
We propose and study a new multilevel method for the numerical approximation of a Gibbs distribution $\pi$ on $\mathbb{R}^d$, based on (overdamped) Langevin diffusions. This method, inspired by \cite{mainPPlangevin} and \cite{giles_szpruch_invariant}, relies on a multilevel occupation measure, \textit{i.e.} on an appropriate combination of $R$ occupation measures of (constant-step) Euler schemes with respective steps $\gamma_r = \gamma_0 2^{-r}$, $r=0,\ldots,R$. We first state a quantitative result under general assumptions which guarantees an \textit{$\varepsilon$-approximation} (in an $L^2$-sense) with a cost of the order $\varepsilon^{-2}$, or $\varepsilon^{-2}|\log \varepsilon|^3$ under less contractive assumptions. We then apply it to overdamped Langevin diffusions with strongly convex potential $U:\mathbb{R}^d\rightarrow\mathbb{R}$ and obtain an \textit{$\varepsilon$-complexity} of the order ${\cal O}(d\varepsilon^{-2}\log^3(d\varepsilon^{-2}))$, or ${\cal O}(d\varepsilon^{-2})$ under additional assumptions on $U$. More precisely, up to universal constants, an appropriate choice of the parameters leads to a cost controlled by ${(\bar{\lambda}_U\vee 1)^2}{\underline{\lambda}_U^{-3}} d\varepsilon^{-2}$ (where $\bar{\lambda}_U$ and $\underline{\lambda}_U$ respectively denote the supremum of the largest eigenvalue and the infimum of the smallest eigenvalue of $D^2U$). We finally complete these theoretical results with numerical illustrations, including comparisons to other algorithms in Bayesian learning and an opening toward the non-strongly convex setting.
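A hedged Python sketch of the estimator's structure is given below: the multilevel occupation measure combines a coarse time average with level-wise corrections computed from Euler schemes of the overdamped Langevin diffusion $dX_t = -\nabla U(X_t)\,dt + \sqrt{2}\,dW_t$ driven by shared Gaussian increments. The coupling, the level budgets \texttt{n\_levels} and the parameter values are illustrative assumptions, not the tuned choices of the paper.

\begin{verbatim}
import numpy as np

# Illustrative multilevel occupation-measure estimator for E_pi[f],
# pi proportional to exp(-U), with steps gamma_r = gamma0 * 2**(-r).
# The coupling and level budgets below are assumptions, not the paper's.

def euler_occupation(grad_U, f, x0, gamma, n_steps, rng):
    """Time average of f along one constant-step Euler trajectory."""
    x, acc = np.array(x0, float), 0.0
    for _ in range(n_steps):
        x = x - gamma * grad_U(x) + np.sqrt(2 * gamma) * rng.standard_normal(x.shape)
        acc += f(x)
    return acc / n_steps

def coupled_correction(grad_U, f, x0, gamma_coarse, n_coarse, rng):
    """nu_fine - nu_coarse with both chains driven by the same Gaussians."""
    xc, xf = np.array(x0, float), np.array(x0, float)
    gf = gamma_coarse / 2
    acc_c, acc_f = 0.0, 0.0
    for _ in range(n_coarse):
        dW1 = np.sqrt(2 * gf) * rng.standard_normal(xc.shape)
        dW2 = np.sqrt(2 * gf) * rng.standard_normal(xc.shape)
        xf = xf - gf * grad_U(xf) + dW1                     # two fine half-steps
        acc_f += f(xf)
        xf = xf - gf * grad_U(xf) + dW2
        acc_f += f(xf)
        xc = xc - gamma_coarse * grad_U(xc) + dW1 + dW2     # one coarse step
        acc_c += f(xc)
    return acc_f / (2 * n_coarse) - acc_c / n_coarse

def multilevel_estimator(grad_U, f, x0, gamma0, R, n_levels, seed=0):
    rng = np.random.default_rng(seed)
    est = euler_occupation(grad_U, f, x0, gamma0, n_levels[0], rng)
    for r in range(1, R + 1):
        est += coupled_correction(grad_U, f, x0, gamma0 * 2 ** -(r - 1),
                                  n_levels[r], rng)
    return est

# Example (1-d standard Gaussian, U(x) = x^2/2): estimate E[X^2] = 1.
est = multilevel_estimator(lambda x: x, lambda x: (x ** 2).sum(),
                           np.zeros(1), gamma0=0.5, R=3,
                           n_levels=[4000, 2000, 1000, 500])
\end{verbatim}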
Krylov methods rely on iterated matrix-vector products $A^k u_j$ for an $n\times n$ matrix $A$ and vectors $u_1,\ldots,u_m$. The space spanned by all iterates $A^k u_j$ admits a particular basis -- the \emph{maximal Krylov basis} -- which consists of iterates of the first vector, $u_1, Au_1, A^2u_1,\ldots$, until reaching linear dependency, and then, similarly, of iterates of the subsequent vectors until a basis is obtained. Finding minimal polynomials and Frobenius normal forms is closely related to computing maximal Krylov bases. The fastest way to produce these bases was, until this paper, Keller-Gehrig's 1985 algorithm, whose complexity bound $O(n^\omega \log(n))$ comes from repeated squarings of $A$ and logarithmically many Gaussian eliminations. Here $\omega>2$ is a feasible exponent for matrix multiplication over the base field. We present an algorithm computing the maximal Krylov basis in $O(n^\omega\log\log(n))$ field operations when $m \in O(n)$, and even in $O(n^\omega)$ as soon as $m\in O(n/\log(n)^c)$ for some fixed real $c>0$. As a consequence, we show that the Frobenius normal form together with a transformation matrix can be computed deterministically in $O(n^\omega \log\log(n)^2)$, and therefore matrix exponentiation~$A^k$ can be performed within the latter complexity if $\log(k) \in O(n^{\omega-1-\varepsilon})$ for some $\varepsilon>0$. A key idea for these improvements is to rely on fast algorithms for $m\times m$ polynomial matrices of average degree $n/m$, involving high-order lifting and minimal kernel bases.
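To fix the definition, here is a naive numerical sketch (nothing like the fast algorithm above): iterate $u_1, Au_1, A^2u_1, \ldots$ until a linear dependency appears, then continue with the next vector, and so on. An exact-arithmetic variant would replace the least-squares rank test by Gaussian elimination over the base field; the tolerance below is an assumption of the floating-point setting.

\begin{verbatim}
import numpy as np

# Naive construction of a maximal Krylov basis (definition only; this is
# not the fast polynomial-matrix algorithm of the paper).

def maximal_krylov_basis(A, U, tol=1e-10):
    """A: n x n array; U: iterable of starting vectors u_1, ..., u_m."""
    n = A.shape[0]
    basis = np.zeros((n, 0))
    for u in U:
        v = np.array(u, float)
        while basis.shape[1] < n:
            # check whether v is (numerically) in the span of the basis
            if basis.shape[1] > 0:
                resid = v - basis @ np.linalg.lstsq(basis, v, rcond=None)[0]
            else:
                resid = v
            if np.linalg.norm(resid) <= tol * max(1.0, np.linalg.norm(v)):
                break                     # linear dependency: next vector
            basis = np.column_stack([basis, v])
            v = A @ v
    return basis
\end{verbatim}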
We study the weak recovery problem on the $r$-uniform hypergraph stochastic block model ($r$-HSBM) with two balanced communities. In this model, $n$ vertices are randomly divided into two communities, and size-$r$ hyperedges are added randomly depending on whether all vertices in the hyperedge are in the same community. The goal of the weak recovery problem is to recover a non-trivial fraction of the communities given the hypergraph. Previously, Pal and Zhu (2021) established that weak recovery is always possible above a natural threshold called the Kesten-Stigum (KS) threshold. Gu and Polyanskiy (2023) proved that the KS threshold is tight if $r\le 4$ or the expected degree $d$ is small. It remained open whether the KS threshold is tight for $r\ge 5$ and large $d$. In this paper, we determine the tightness of the KS threshold for any fixed $r$ and large $d$. We prove that for $r\le 6$ and $d$ large enough, the KS threshold is tight. This shows that there is no information-computation gap in this regime. This partially confirms a conjecture of Angelini et al. (2015). For $r\ge 7$, we prove that for $d$ large enough, the KS threshold is not tight, providing more evidence supporting the existence of an information-computation gap in this regime. Furthermore, we establish asymptotic bounds on the weak recovery threshold for fixed $r$ and large $d$. We also obtain a number of results regarding the closely related broadcasting on hypertrees (BOHT) model, including the asymptotics of the reconstruction threshold for $r\ge 7$ and impossibility of robust reconstruction at criticality.
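For illustration, a hedged sampler of the model is given below; the parametrisation with probabilities \texttt{p\_in} for monochromatic and \texttt{p\_out} for non-monochromatic hyperedges is an assumption (the paper works in a sparse regime parametrised by the expected degree $d$), and the exhaustive loop over all $\binom{n}{r}$ candidate hyperedges is only meant for small instances.

\begin{verbatim}
import numpy as np
from itertools import combinations

def sample_r_hsbm(n, r, p_in, p_out, seed=0):
    """Sample labels and hyperedges of an r-HSBM with two balanced
    communities (n must be even).  Assumed parametrisation: monochromatic
    hyperedges are kept with probability p_in, all others with p_out."""
    rng = np.random.default_rng(seed)
    labels = rng.permutation(np.repeat([0, 1], n // 2))
    edges = []
    for e in combinations(range(n), r):
        monochromatic = len({labels[v] for v in e}) == 1
        if rng.random() < (p_in if monochromatic else p_out):
            edges.append(e)
    return labels, edges

# Weak recovery asks for a labelling that beats random guessing, i.e. a
# partition nontrivially correlated with `labels` given only `edges`.
\end{verbatim}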
We study the notion of a $k$-stabilizer universal quantum state, that is, an $n$-qubit quantum state such that it is possible to induce any stabilizer state on any $k$ qubits using only local operations and classical communication. These states generalize the notion of $k$-pairable states introduced by Bravyi et al., and can be studied from a combinatorial perspective using graph states and $k$-vertex-minor universal graphs. First, we demonstrate the existence of $k$-stabilizer universal graph states that are optimal in size, with $n=\Theta(k^2)$ qubits. We also provide parameters for which a random graph state on $\Theta(k^2)$ qubits is $k$-stabilizer universal with high probability. Our second contribution consists of two explicit constructions of $k$-stabilizer universal graph states on $n = O(k^4)$ qubits. Both rely on the incidence graph of the projective plane over a finite field $\mathbb{F}_q$. This provides a major improvement over the previously known explicit construction of $k$-pairable graph states with $n = O(2^{3k})$ qubits, bringing forth a new and potentially powerful family of multipartite quantum resources.
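The combinatorial object behind both explicit constructions can be written down directly; the sketch below builds the point-line incidence graph of the projective plane $PG(2,q)$ for a prime $q$ (for general prime powers one would work over $GF(q)$ rather than $\mathbb{Z}/q\mathbb{Z}$). How the corresponding graph state achieves $k$-stabilizer universality is the content of the paper, not of this sketch.

\begin{verbatim}
from itertools import product

# Illustration only: the incidence graph of PG(2, q) for a prime q.

def projective_points(q):
    """Canonical representatives of the points of PG(2, q), q prime:
    nonzero vectors of (Z/qZ)^3 with first nonzero coordinate equal to 1."""
    points = []
    for v in product(range(q), repeat=3):
        if v == (0, 0, 0):
            continue
        i = next(j for j, x in enumerate(v) if x != 0)
        inv = pow(v[i], -1, q)                # inverse mod q (q prime)
        w = tuple((x * inv) % q for x in v)
        if w not in points:
            points.append(w)
    return points

def incidence_graph(q):
    """Bipartite incidence graph of PG(2, q): by duality, lines are indexed
    by the same representatives, and p lies on l iff <p, l> = 0 mod q."""
    pts = projective_points(q)
    edges = [(("P", p), ("L", l)) for p in pts for l in pts
             if sum(a * b for a, b in zip(p, l)) % q == 0]
    return pts, edges

# Sanity check for q = 3: q^2 + q + 1 = 13 points, each line has q + 1 = 4 points.
pts, edges = incidence_graph(3)
assert len(pts) == 13 and len(edges) == 13 * 4
\end{verbatim}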
Given integers $n > k > 0$ and a set of integers $L \subset [0, k-1]$, an $L$-system is a family of sets $\mathcal{F} \subset \binom{[n]}{k}$ such that $|F \cap F'| \in L$ for distinct $F, F'\in \mathcal{F}$. $L$-systems correspond to independent sets in a certain generalized Johnson graph $G(n, k, L)$, so that determining the maximum size of an $L$-system is equivalent to finding the independence number of the graph $G(n, k, L)$. The Lov\'asz number $\vartheta(G)$ is a semidefinite programming approximation of the independence number of a graph $G$. In this paper, we determine the order of magnitude of $\vartheta(G(n, k, L))$ for any generalized Johnson graph with $k$ and $L$ fixed and $n\rightarrow \infty$. As an application of this theorem, we give an explicit construction of a graph $G$ on $n$ vertices with a large gap between the Lov\'asz number and the Shannon capacity $c(G)$. Specifically, we prove that for any $\epsilon > 0$ and infinitely many $n$, there is a generalized Johnson graph $G$ on $n$ vertices with ratio $\vartheta(G)/c(G) = \Omega(n^{1-\epsilon})$, which greatly improves on the best known explicit construction.
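For small parameters the graph in question is easy to build explicitly; the sketch below constructs $G(n,k,L)$ with the edge rule implied by the above (two $k$-sets are adjacent exactly when their intersection size is \emph{not} in $L$, so that $L$-systems are its independent sets). Computing $\vartheta$ then requires an SDP solver and is omitted here.

\begin{verbatim}
from itertools import combinations

def generalized_johnson_graph(n, k, L):
    """Vertices: all k-subsets of {0, ..., n-1}.  Edges (rule implied by the
    abstract): pairs whose intersection size is not in L, so that
    independent sets are exactly the L-systems."""
    L = set(L)
    vertices = [frozenset(c) for c in combinations(range(n), k)]
    edges = [(F, G) for i, F in enumerate(vertices)
             for G in vertices[i + 1:] if len(F & G) not in L]
    return vertices, edges

# Example: G(5, 2, {0}) joins 2-sets sharing exactly one element; it is the
# Johnson graph J(5, 2), i.e. the complement of the Petersen graph.
V, E = generalized_johnson_graph(5, 2, {0})
assert len(V) == 10 and len(E) == 30
\end{verbatim}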
Classical Krylov subspace projection methods for the solution of a linear problem $Ax = b$ output an approximate solution $\widetilde{x}\simeq x$. Recently, it has been recognized that projection methods can be understood from a statistical perspective. These probabilistic projection methods return a distribution $p(\widetilde{x})$ in place of a point estimate $\widetilde{x}$. The resulting uncertainty, codified as a distribution, can, in theory, be meaningfully combined with other uncertainties, propagated through computational pipelines, and used in the framework of probabilistic decision theory. The problem we address is that current probabilistic projection methods lead to poorly calibrated posterior distributions. We improve the covariance matrix from previous works so that it does not contain such undesirable objects as $A^{-1}$ or $A^{-1}A^{-T}$, yields nontrivial uncertainty, and reproduces an arbitrary projection method as the mean of the posterior distribution. We also propose a variant that is numerically inexpensive in the case where the uncertainty is calibrated a priori. Since it usually is not, we put forward a practical way to calibrate uncertainty that performs reasonably well, albeit at the expense of roughly doubling the numerical cost of the underlying projection method.
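To make the setting concrete, here is a hedged sketch of a generic probabilistic projection method (a textbook Gaussian-conditioning construction, not the calibrated covariance proposed in the paper): place a Gaussian prior on $x$ and condition on the projected residual $S^\top(b - Ax) = 0$ for a chosen search matrix $S$.

\begin{verbatim}
import numpy as np

# Generic probabilistic projection method (illustration; not the paper's
# calibrated covariance): prior x ~ N(x0, Sigma), noiseless observation
# S^T A x = S^T b, Gaussian conditioning gives the posterior below.

def probabilistic_projection_solve(A, b, S, x0, Sigma):
    """Return posterior mean and covariance of x given S^T A x = S^T b."""
    M = S.T @ A                           # m x n observation operator
    G = M @ Sigma @ M.T                   # m x m Gram matrix
    K = Sigma @ M.T @ np.linalg.solve(G, np.eye(G.shape[0]))   # gain
    mean = x0 + K @ (S.T @ (b - A @ x0))
    cov = Sigma - K @ M @ Sigma
    return mean, cov

# Example: with S spanning the full space the posterior mean is A^{-1} b
# and the posterior covariance collapses to zero.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A @ A.T + 4 * np.eye(4)
b = rng.standard_normal(4)
mean, cov = probabilistic_projection_solve(A, b, np.eye(4),
                                           np.zeros(4), np.eye(4))
assert np.allclose(mean, np.linalg.solve(A, b)) and np.allclose(cov, 0)
\end{verbatim}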