Consider the following NP problem DOUBLE CLIQUE (abbr.: CLIQ$_{2}$): given a natural number $k>2$ and a pair of disjoint subgraphs of a fixed graph $G$, decide whether each subgraph in question contains a $k$-clique. We prove that CLIQ$_{2}$ cannot be solved in polynomial time by a deterministic TM. Clearly this is equivalent to the analogous claim about the well-known monotone problem CLIQUE (abbr.: CLIQ), which implies $\mathbf{P}\neq \mathbf{NP}$. Our proof of the polynomial-time unsolvability of CLIQ$_{2}$ upgrades the well-known "monotone" proof for CLIQ. Note, however, that CLIQ$_{2}$ is not monotone, and it appears to be more complex than merely iterated CLIQ, as the required subgraphs are mutually dependent (see also Remark 26 in the text).
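To make the decision problem concrete, here is a brute-force checker for CLIQ$_{2}$, treating each of the two disjoint subgraphs as the subgraph induced by a vertex subset. The representation and function names are ours; the exponential running time is, of course, exactly what the abstract argues cannot be avoided by a deterministic polynomial-time algorithm.

```python
from itertools import combinations

def has_k_clique(vertices, adj, k):
    """Brute force: does the subgraph induced by `vertices` (edges taken
    from the adjacency-set dict `adj`) contain a k-clique?"""
    return any(all(v in adj[u] for u, v in combinations(cand, 2))
               for cand in combinations(vertices, k))

def double_clique(part_a, part_b, adj, k):
    """CLIQ_2: do BOTH disjoint vertex subsets induce a k-clique?"""
    assert set(part_a).isdisjoint(part_b)
    return has_k_clique(part_a, adj, k) and has_k_clique(part_b, adj, k)

# Example: two disjoint triangles inside a 6-vertex graph, k = 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
       3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
print(double_clique([0, 1, 2], [3, 4, 5], adj, k=3))  # True
```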
We prove a bound of $O(k(n+m)\log^{d-1} n)$ on the number of incidences between $n$ points and $m$ axis-parallel boxes in $\mathbb{R}^d$, if no $k$ boxes contain $k$ common points. That is, the incidence graph between the points and the boxes does not contain $K_{k,k}$ as a subgraph. This new bound improves over previous work by a factor of $\log^d n$, for $d>2$. We also study other variants of the problem. For halfspaces, using shallow cuttings, we get a near-linear bound in two and three dimensions. Finally, we present a near-linear bound for the case of shapes in the plane with low union complexity (e.g., fat triangles).
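For concreteness, the quantity being bounded can be computed naively as follows (the representation and names are ours; the paper's contribution is the combinatorial bound, not this counting routine).

```python
def incidences(points, boxes):
    """Count point-box incidences: `points` are d-tuples, `boxes` are
    (lo, hi) pairs of d-tuples describing axis-parallel boxes."""
    return sum(all(l <= x <= h for x, l, h in zip(p, lo, hi))
               for p in points for lo, hi in boxes)

# Two points and two unit boxes in the plane; each point lies in one box.
pts = [(0.5, 0.5), (2.5, 2.5)]
bxs = [((0, 0), (1, 1)), ((2, 2), (3, 3))]
print(incidences(pts, bxs))  # 2
```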
In this paper, we introduce a novel family of iterative algorithms which carry out $\alpha$-divergence minimisation in a Variational Inference context. They do so by ensuring a systematic decrease at each step in the $\alpha$-divergence between the variational and the posterior distributions. In its most general form, the variational distribution is a mixture model and our framework allows us to simultaneously optimise the weights and component parameters of this mixture model. Notably, our approach permits us to build on various methods previously proposed for $\alpha$-divergence minimisation, such as Gradient or Power Descent schemes, and we also shed new light on an integrated Expectation Maximization algorithm. Lastly, we provide empirical evidence that our methodology yields improved results on several multimodal target distributions.
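As a point of reference, a generic Monte Carlo $\alpha$-divergence descent for a single Gaussian variational component can be sketched as follows. This is our own toy surrogate, not the paper's mixture-model scheme: the target, the choice $\alpha=0.5$, and all names are illustrative assumptions. It uses the fact that, for $0<\alpha<1$, minimising $D_{\alpha}(q\,\|\,p)$ amounts to maximising $\mathbb{E}_q[(\tilde p/q)^{1-\alpha}]$, where $\tilde p$ is the unnormalised posterior.

```python
import torch

alpha = 0.5                      # assumes 0 < alpha < 1
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

def log_ptilde(x):               # unnormalised bimodal target (illustrative)
    return torch.logsumexp(torch.stack([-0.5 * (x - 3) ** 2,
                                        -0.5 * (x + 3) ** 2]), dim=0)

for _ in range(2000):
    q = torch.distributions.Normal(mu, log_sigma.exp())
    x = q.rsample((256,))        # reparameterised samples from q
    # Monte Carlo estimate of -E_q[(ptilde/q)^(1-alpha)]; the unknown
    # normalising constant only rescales this objective.
    loss = -torch.exp((1 - alpha) * (log_ptilde(x) - q.log_prob(x))).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```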
We give a nearly-linear time reduction that encodes any linear program as a 2-commodity flow problem with only a small blow-up in size. Under mild assumptions similar to those employed by modern fast solvers for linear programs, our reduction causes only a polylogarithmic multiplicative increase in the size of the program and runs in nearly-linear time. Our reduction applies to high-accuracy approximation algorithms and exact algorithms. Given an approximate solution to the 2-commodity flow problem, we can extract a solution to the linear program in linear time with only a polynomial factor increase in the error. This implies that any algorithm that solves the 2-commodity flow problem can solve linear programs in essentially the same time. Given a directed graph with edge capacities and two source-sink pairs, the goal of the 2-commodity flow problem is to maximize the sum of the flows routed between the two source-sink pairs subject to edge capacities and flow conservation. A 2-commodity flow can be directly written as a linear program, and thus we establish a nearly-tight equivalence between these two classes of problems. Our proof follows the outline of Itai's polynomial-time reduction of a linear program to a 2-commodity flow problem (JACM'78). Itai's reduction shows that exactly solving 2-commodity flow and exactly solving linear programming are polynomial-time equivalent. We improve Itai's reduction to nearly preserve the problem representation size in each step. In addition, we establish an error bound for approximately solving each intermediate problem in the reduction, and show that the accumulated error is polynomially bounded. We remark that our reduction does not run in strongly polynomial time and that it is open whether 2-commodity flow and linear programming are equivalent in strongly polynomial time.
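For reference, the 2-commodity flow problem described above can be written directly as a linear program; one standard way to do so (in our notation, which may differ from the paper's in inessential details) is
\[
\begin{aligned}
\max_{f^1,\,f^2 \ge 0}\quad & \sum_{e \in \delta^{+}(s_1)} f^1_e - \sum_{e \in \delta^{-}(s_1)} f^1_e + \sum_{e \in \delta^{+}(s_2)} f^2_e - \sum_{e \in \delta^{-}(s_2)} f^2_e\\
\text{subject to}\quad & f^1_e + f^2_e \le c_e \quad \text{for every edge } e,\\
& \sum_{e \in \delta^{+}(v)} f^i_e = \sum_{e \in \delta^{-}(v)} f^i_e \quad \text{for } i\in\{1,2\} \text{ and every vertex } v \notin \{s_i, t_i\},
\end{aligned}
\]
where $\delta^{+}(v)$ and $\delta^{-}(v)$ denote the edges leaving and entering $v$, $c_e$ is the capacity of edge $e$, and $f^i_e$ is the amount of commodity $i$ routed on $e$. The nontrivial direction established in the paper is the converse reduction, from an arbitrary linear program to a problem of this form.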
This paper addresses the problem of determining all optimal integer solutions of a linear integer network flow problem, which we call the all optimal integer flow (AOF) problem. We derive an $O(F(m + n) + mn + M)$ time algorithm to determine all $F$ optimal integer flows in a directed network with $n$ nodes and $m$ arcs, where $M$ is the best time needed to find one minimum cost flow. We remark that stopping Hamacher's well-known method for the determination of the $K$ best integer flows at the first sub-optimal flow results in an algorithm with a running time of $O(Fm(n \log n + m) + M)$ for solving the AOF problem. Our improvement is essentially made possible by replacing the shortest path sub-problem with a more efficient way to determine a so-called proper zero-cost cycle using a modified depth-first search technique. As a byproduct, our analysis yields an enhanced algorithm to determine the $K$ best integer flows that runs in $O(Kn^3 + M)$ time. Besides, we give lower and upper bounds on the number of optimal integer and of feasible integer solutions. Our bounds are based on the fact that any optimal solution can be obtained as an initial optimal tree solution plus a conical combination of incidence vectors of induced cycles with bounded coefficients.
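The core subroutine is the detection of a zero-cost cycle among the residual arcs of zero reduced cost; a plain DFS-based cycle search over such arcs might look as follows. This is a simplified sketch with our own names, omitting the bookkeeping that makes the cycle "proper" in the paper's sense.

```python
def find_zero_cost_cycle(nodes, zero_arcs):
    """DFS for a directed cycle in the subgraph of residual arcs with zero
    reduced cost, given as an adjacency dict {u: [v, ...]}.  Returns the
    cycle as a list of nodes, or None if the subgraph is acyclic."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in nodes}
    parent = {}

    def dfs(u):
        colour[u] = GREY
        for v in zero_arcs.get(u, []):
            if colour[v] == GREY:          # back arc u -> v closes a cycle
                cycle, w = [u], u
                while w != v:
                    w = parent[w]
                    cycle.append(w)
                return cycle[::-1]         # v -> ... -> u, arc u -> v closes it
            if colour[v] == WHITE:
                parent[v] = u
                found = dfs(v)
                if found:
                    return found
        colour[u] = BLACK
        return None

    for s in nodes:
        if colour[s] == WHITE:
            cycle = dfs(s)
            if cycle:
                return cycle
    return None

# Residual arcs of zero reduced cost containing the cycle 1 -> 2 -> 3 -> 1.
print(find_zero_cost_cycle([1, 2, 3, 4], {1: [2], 2: [3], 3: [1], 4: [1]}))
```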
Crystal Structure Prediction (csp) is one of the central and most challenging problems in materials science and computational chemistry. In csp, the goal is to find a configuration of ions in 3D space that yields the lowest potential energy. Finding an efficient procedure to solve this complex optimisation question is a well-known open problem. Due to the exponentially large search space, the problem has been referred to in several materials-science papers as "NP-Hard and very challenging" without a formal proof. This paper fills a gap in the literature by providing the first set of formally proven NP-Hardness results for a variant of csp with various realistic constraints. In particular, we focus on the problem of removal: the goal is to find a substructure with minimal potential energy, obtained by removing a subset of the ions. Our main contributions are NP-Hardness results for the csp removal problem, new embeddings of combinatorial graph problems into geometric settings, and a more systematic exploration of the energy function to reveal the complexity of csp. In a wider context, our results contribute to the analysis of computational problems for weighted graphs embedded into three-dimensional Euclidean space.
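As an illustration of the removal problem only, here is a brute-force sketch with a simplified pairwise Coulomb-style energy. The energy function, the data, and all names below are our own stand-ins for the interatomic potentials considered in the paper, and the exponential-time search is exactly what the NP-Hardness results suggest cannot be avoided in general.

```python
from itertools import combinations

def coulomb_energy(ions):
    """Pairwise Coulomb-style energy of a set of ions given as
    (charge, (x, y, z)) tuples; a simplified stand-in potential."""
    total = 0.0
    for (q1, p1), (q2, p2) in combinations(ions, 2):
        r = sum((a - b) ** 2 for a, b in zip(p1, p2)) ** 0.5
        total += q1 * q2 / r
    return total

def best_removal(ions, keep):
    """Brute-force removal: keep exactly `keep` ions so that the remaining
    substructure has minimal energy (exponential in len(ions))."""
    return min(combinations(ions, keep), key=coulomb_energy)

ions = [(+1, (0, 0, 0)), (-1, (1, 0, 0)), (+1, (0, 1, 0)), (-1, (3, 3, 3))]
print(coulomb_energy(best_removal(ions, keep=2)))  # -1.0 for the +/- pair
```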
We prove a linearity theorem for an extension of linear logic with addition and multiplication by a scalar: the proofs of some propositions in this logic are linear in the algebraic sense. This work is part of a wider research program that aims at defining a logic whose proof language is a quantum programming language.
Given a graph $G = (V,E)$, a threshold function $t~ :~ V \rightarrow \mathbb{N}$ and an integer $k$, we study the Harmless Set problem, where the goal is to find a subset of vertices $S \subseteq V$ of size at least $k$ such that every vertex $v\in V$ has fewer than $t(v)$ neighbors in $S$. We enhance our understanding of the problem from the viewpoint of parameterized complexity. Our focus lies on parameters that measure the structural properties of the input instance. We show that the problem is W[1]-hard parameterized by a wide range of fairly restrictive structural parameters such as the feedback vertex set number, pathwidth, treedepth, and even the size of a minimum vertex deletion set into graphs of pathwidth and treedepth at most three. On dense graphs, we show that the problem is W[1]-hard parameterized by the cluster vertex deletion number. We also show that the Harmless Set problem with majority thresholds is W[1]-hard when parameterized by the treewidth of the input graph. We prove that the Harmless Set problem can be solved in polynomial time on graphs of bounded cliquewidth. On the positive side, we obtain fixed-parameter algorithms for the problem with respect to the neighbourhood diversity, twin cover and vertex integrity of the input graph. We show that the problem parameterized by the solution size is fixed-parameter tractable on planar graphs. We thereby resolve two open questions stated by C. Bazgan and M. Chopin (2014) concerning the complexity of {\sc Harmless Set} parameterized by the treewidth of the input graph and on planar graphs with respect to the solution size.
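As a concrete illustration of the definition, checking the Harmless Set condition is easy (the representation and names below are ours); the difficulty addressed in the paper lies in finding a large such set.

```python
def is_harmless(graph, t, S):
    """Harmless Set condition: every vertex v of the graph has strictly
    fewer than t[v] neighbours inside S.  `graph` maps each vertex to its
    set of neighbours."""
    S = set(S)
    return all(len(graph[v] & S) < t[v] for v in graph)

# Path 0-1-2-3 with thresholds 1 at the endpoints and 2 in the middle:
# {0, 3} is harmless, {1, 2} is not (vertex 0 would have 1 >= t(0) neighbours).
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
t = {0: 1, 1: 2, 2: 2, 3: 1}
print(is_harmless(path, t, {0, 3}), is_harmless(path, t, {1, 2}))  # True False
```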
We study the complexity of proving that a sparse random regular graph on an odd number of vertices does not have a perfect matching, and related problems involving each vertex being matched some pre-specified number of times. We show that this requires proofs of degree $\Omega(n / \log n)$ in the Polynomial Calculus (over fields of characteristic $\ne 2$) and Sum-of-Squares proof systems, and exponential size in the bounded-depth Frege proof system. This resolves a question by Razborov asking whether the Lov\'asz-Schrijver proof system requires $n^\delta$ rounds to refute these formulas for some $\delta > 0$. The results are obtained by a worst-case to average-case reduction of these formulas relying on a topological embedding theorem which may be of independent interest.
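For concreteness, a standard polynomial encoding of the perfect matching principle refuted here (our notation; the paper may differ in inessential details) introduces a variable $x_e$ for every edge $e$ of $G=(V,E)$ and consists of the axioms
\[
\sum_{e \ni v} x_e - 1 = 0 \quad \text{for every } v \in V, \qquad x_e^2 - x_e = 0 \quad \text{for every } e \in E,
\]
whose $\{0,1\}$ solutions are exactly the perfect matchings of $G$, so the system is contradictory when $|V|$ is odd; the variants in which each vertex must be matched a pre-specified number of times replace the constant $1$ by a prescribed value $b_v$.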
We initiate the study of Boolean function analysis on high-dimensional expanders. We give a random-walk based definition of high-dimensional expansion, which coincides with the earlier definition in terms of two-sided link expanders. Using this definition, we describe an analog of the Fourier expansion and the Fourier levels of the Boolean hypercube for simplicial complexes. Our analog is a decomposition into approximate eigenspaces of random walks associated with the simplicial complexes. Our random-walk definition and the decomposition have the additional advantage that they extend to the more general setting of posets, encompassing both high-dimensional expanders and the Grassmann poset, which appears in recent work on the unique games conjecture. We then use this decomposition to extend the Friedgut-Kalai-Naor theorem to high-dimensional expanders. Our results demonstrate that a constant-degree high-dimensional expander can sometimes serve as a sparse model for the Boolean slice or hypercube, and quite possibly additional results from Boolean function analysis can be carried over to this sparse model. Therefore, this model can be viewed as a derandomization of the Boolean slice, containing only $|X(k-1)|=O(n)$ points in contrast to $\binom{n}{k}$ points in the $k$-slice (which consists of all $n$-bit strings with exactly $k$ ones).
Eigendecomposition of symmetric matrices is at the heart of many computer vision algorithms. However, the derivatives of the eigenvectors tend to be numerically unstable, whether using the SVD to compute them analytically or using the Power Iteration (PI) method to approximate them. This instability arises in the presence of eigenvalues that are close to each other. This makes integrating eigendecomposition into deep networks difficult and often results in poor convergence, particularly when dealing with large matrices. While this can be mitigated by partitioning the data into small arbitrary groups, doing so has no theoretical basis and makes it impossible to exploit the full power of eigendecomposition. In previous work, we mitigated this using SVD during the forward pass and PI to compute the gradients during the backward pass. However, the iterative deflation procedure required to compute multiple eigenvectors using PI tends to accumulate errors and yield inaccurate gradients. Here, we show that the Taylor expansion of the SVD gradient is theoretically equivalent to the gradient obtained using PI, yet it does not rely on an iterative process in practice and thus yields more accurate gradients. We demonstrate the benefits of this increased accuracy for image classification and style transfer.
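To see where the instability comes from, here is a NumPy sketch of the standard analytic backward pass for a symmetric eigendecomposition, reduced to its eigenvector term (the eigenvalue term and symmetrisation are omitted, and the names are ours). The $1/(\lambda_j - \lambda_i)$ factors blow up exactly when two eigenvalues approach each other, which is the instability that motivates the approach described above.

```python
import numpy as np

def eigh_backward(eigvals, eigvecs, grad_eigvecs):
    """Eigenvector part of the analytic gradient of a loss w.r.t. a
    symmetric matrix A = V diag(lam) V^T.  The 1/(lam_j - lam_i) factors
    are the source of the numerical instability discussed above."""
    lam, V = eigvals, eigvecs
    F = 1.0 / (lam[None, :] - lam[:, None] + np.eye(len(lam)))  # avoid /0 on diag
    np.fill_diagonal(F, 0.0)
    return V @ (F * (V.T @ grad_eigvecs)) @ V.T

rng = np.random.default_rng(0)
for gap in (1e-1, 1e-6):                     # shrink the eigenvalue gap
    lam = np.array([1.0, 1.0 + gap, 2.0])
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    A = Q @ np.diag(lam) @ Q.T
    w, V = np.linalg.eigh(A)
    g = eigh_backward(w, V, grad_eigvecs=np.ones((3, 3)))
    print(gap, np.abs(g).max())              # gradient magnitude explodes
```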