Given a graph $G$ and an integer $b$, Bandwidth asks whether there exists a bijection $\pi$ from $V(G)$ to $\{1, \ldots, |V(G)|\}$ such that $\max_{\{u, v \} \in E(G)} | \pi(u) - \pi(v) | \leq b$. This is a classical NP-complete problem, known to remain NP-complete even on very restricted classes of graphs, such as trees of maximum degree 3 and caterpillars of hair length 3. In the realm of parameterized complexity, these results imply that the problem remains NP-hard on graphs of bounded pathwidth, while it is additionally known to be W[1]-hard when parameterized by the treedepth of the input graph. In contrast, the problem does become FPT when parameterized by the vertex cover number of the input graph. In this paper, we make progress towards the parameterized (in)tractability of Bandwidth. We first show that it is FPT when parameterized by the cluster vertex deletion number cvd plus the clique number $\omega$ of the input graph, thus generalizing the previously mentioned result for vertex cover. On the other hand, we show that Bandwidth is W[1]-hard when parameterized only by cvd. Our results generalize earlier work and narrow some of the complexity gaps.
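To illustrate the definition, the following brute-force check of the bandwidth condition enumerates all layouts; it is exponential in $|V(G)|$ and is only a sketch of the problem statement, not one of the parameterized algorithms discussed here. The function name and graph encoding are illustrative.

\begin{verbatim}
from itertools import permutations

def has_bandwidth_at_most(vertices, edges, b):
    """Brute-force test: does some layout pi achieve max |pi(u) - pi(v)| <= b?"""
    for layout in permutations(range(1, len(vertices) + 1)):
        pi = dict(zip(vertices, layout))
        if all(abs(pi[u] - pi[v]) <= b for u, v in edges):
            return True
    return False

# Example: the 4-cycle has bandwidth 2.
print(has_bandwidth_at_most(["a", "b", "c", "d"],
                            [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")], 2))
\end{verbatim}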
We show that the \textsc{Maximum Weight Independent Set} problem (\textsc{MWIS}) can be solved in quasi-polynomial time on $H$-free graphs (graphs excluding a fixed graph $H$ as an induced subgraph) for every $H$ whose every connected component is a path or a subdivided claw (i.e., a tree with at most three leaves). This completes the dichotomy of the complexity of \textsc{MWIS} in $\mathcal{F}$-free graphs for any finite set $\mathcal{F}$ of graphs into NP-hard cases and cases solvable in quasi-polynomial time, and corroborates the conjecture that the cases not known to be NP-hard are actually polynomial-time solvable. The key graph-theoretic ingredient in our result is as follows. Fix an integer $t \geq 1$. Let $S_{t,t,t}$ be the graph created from three paths on $t$ edges by identifying one endpoint of each path into a single vertex. We show that, given a graph $G$, one can in polynomial time find either an induced $S_{t,t,t}$ in $G$, or a balanced separator consisting of $\mathcal{O}(\log |V(G)|)$ vertex neighborhoods in $G$, or an extended strip decomposition of $G$ (a decomposition almost as useful for recursion for \textsc{MWIS} as a partition into connected components) with each particle of weight multiplicatively smaller than the weight of $G$. This is a strengthening of a result of Majewski et al.\ [ICALP~2022] which provided such an extended strip decomposition only after the deletion of $\mathcal{O}(\log |V(G)|)$ vertex neighborhoods. To reach the final result, we employ an involved branching strategy that relies on the structural lemma presented above.
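To make the graph $S_{t,t,t}$ concrete, the short sketch below builds its edge list by gluing three paths with $t$ edges at a common endpoint; the helper name and vertex labels are illustrative and the code plays no role in the algorithm itself.

\begin{verbatim}
def s_ttt_edges(t):
    """Edge list of S_{t,t,t}: three paths with t edges glued at a common endpoint 0.

    Vertices: 0 is the center; leg i (i = 0, 1, 2) uses vertices (i, 1), ..., (i, t).
    """
    edges = []
    for leg in range(3):
        prev = 0
        for step in range(1, t + 1):
            cur = (leg, step)
            edges.append((prev, cur))
            prev = cur
    return edges

# S_{1,1,1} is the claw K_{1,3}: three edges incident to the center.
print(s_ttt_edges(1))
\end{verbatim}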
A matrix $\Phi \in \mathbb{R}^{Q \times N}$ satisfies the restricted isometry property (RIP) if $\|\Phi x\|_2^2$ is approximately equal to $\|x\|_2^2$ for all $k$-sparse vectors $x$. We give a construction of RIP matrices with the optimal $Q = O(k \log(N/k))$ rows using $O(k\log(N/k)\log(k))$ bits of randomness. The main technical ingredient is an extension of the Hanson-Wright inequality to $\epsilon$-biased distributions.
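As a quick empirical sanity check of the RIP condition (using a fully random sign matrix rather than the derandomized construction of this paper), one can verify that $\|\Phi x\|_2^2 \approx \|x\|_2^2$ for random $k$-sparse vectors; the sketch below assumes NumPy and illustrative parameter values.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, k = 1024, 10
Q = 4 * k * int(np.log(N / k))                            # rows on the order of k log(N/k)
Phi = rng.choice([-1.0, 1.0], size=(Q, N)) / np.sqrt(Q)   # fully random signs, not derandomized

for _ in range(5):
    x = np.zeros(N)
    support = rng.choice(N, size=k, replace=False)
    x[support] = rng.standard_normal(k)                   # a random k-sparse vector
    ratio = np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2
    print(f"||Phi x||^2 / ||x||^2 = {ratio:.3f}")          # should be close to 1
\end{verbatim}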
In this paper, we consider the following two problems: (i) Deletion Blocker($\alpha$) where we are given an undirected graph $G=(V,E)$ and two integers $k,d\geq 1$ and ask whether there exists a subset of vertices $S\subseteq V$ with $|S|\leq k$ such that $\alpha(G-S) \leq \alpha(G)-d$, that is, the independence number of $G$ decreases by at least $d$ after removing the vertices of $S$; (ii) Transversal($\alpha$) where we are given an undirected graph $G=(V,E)$ and two integers $k,d\geq 1$ and ask whether there exists a subset of vertices $S\subseteq V$ with $|S|\leq k$ such that for every maximum independent set $I$ we have $|I\cap S| \geq d$. We show that both problems are polynomial-time solvable in the class of co-comparability graphs by reducing them to the well-known Vertex Cut problem. Our results generalize a result of [Chang et al., Maximum clique transversals, Lecture Notes in Computer Science 2204, pp. 32-43, WG 2001] and a recent result of [Hoang et al., Assistance and interdiction problems on interval graphs, Discrete Applied Mathematics 340, pp. 153-170, 2023].
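For very small instances, the definition of Deletion Blocker($\alpha$) can be verified directly by brute force; the sketch below (exponential time, with illustrative names and graph encoding) computes $\alpha$ by enumeration and then searches over all deletion sets of size at most $k$.

\begin{verbatim}
from itertools import combinations

def alpha(vertices, edges):
    """Independence number by brute force."""
    edge_set = {frozenset(e) for e in edges}
    for r in range(len(vertices), 0, -1):
        for cand in combinations(vertices, r):
            if all(frozenset(p) not in edge_set for p in combinations(cand, 2)):
                return r
    return 0

def deletion_blocker(vertices, edges, k, d):
    """Is there S with |S| <= k and alpha(G - S) <= alpha(G) - d?"""
    target = alpha(vertices, edges) - d
    for r in range(k + 1):
        for S in combinations(vertices, r):
            rest = [v for v in vertices if v not in S]
            kept = [e for e in edges if e[0] in rest and e[1] in rest]
            if alpha(rest, kept) <= target:
                return True
    return False

# The path on 4 vertices has alpha = 2; no single deletion lowers it,
# so (k, d) = (1, 1) is a no-instance and False is printed.
print(deletion_blocker([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)], 1, 1))
\end{verbatim}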
We provide a simple and natural solution to the problem of generating all $2^n \cdot n!$ signed permutations of $[n] = \{1,2,\ldots,n\}$. Our solution provides a pleasing generalization of the most famous ordering of permutations: plain changes (the Steinhaus-Johnson-Trotter algorithm). In plain changes, the $n!$ permutations of $[n]$ are ordered so that successive permutations differ by swapping a pair of adjacent symbols, and the order is often visualized as a weaving pattern involving $n$ ropes. Here we model a signed permutation using $n$ ribbons with two distinct sides, and each successive configuration is created by twisting (i.e., swapping and turning over) two neighboring ribbons or a single ribbon. By greedily prioritizing $2$-twists of the largest symbol before $1$-twists of the largest symbol, we create a signed version of plain changes' memorable zig-zag pattern. We provide a loopless algorithm (i.e., worst-case $\mathcal{O}(1)$-time per object) by extending the well-known mixed-radix Gray code algorithm.
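The objects being generated can be enumerated directly with the standard library, although not in the twist-based Gray code order constructed in this paper; the sketch below simply lists all $2^n \cdot n!$ signed permutations of $[n]$.

\begin{verbatim}
from itertools import permutations, product

def signed_permutations(n):
    """Yield all 2^n * n! signed permutations of [n] = {1, ..., n}.

    Note: plain enumeration, not the twist-based Gray code order in which
    successive configurations differ by a 1-twist or a 2-twist.
    """
    for perm in permutations(range(1, n + 1)):
        for signs in product((1, -1), repeat=n):
            yield tuple(s * p for s, p in zip(signs, perm))

print(len(list(signed_permutations(3))))   # 48 = 2^3 * 3!
\end{verbatim}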
We study matrix-matrix multiplication of two matrices, $A$ and $B$, each of size $n \times n$. This operation results in a matrix $C$ of size $n\times n$. Our goal is to produce $C$ as efficiently as possible given a cache: a limited, one-dimensional set of data values that we can work with to perform elementary operations (additions, multiplications, etc.). That is, we attempt to reuse the maximum amount of data from $A$, $B$ and $C$ during our computation (or, equivalently, to utilize data in the fast-access cache as often as possible). Firstly, we introduce the matrix-matrix multiplication algorithm. Secondly, we present a standard two-memory model to simulate the architecture of a computer, and we explain the LRU (Least Recently Used) cache policy (which is standard in most computers). Thirdly, we introduce a basic Cache Simulator model, which has $\mathcal{O}(M)$ time complexity in the cache size $M$ (so we are limited to small values of $M$). Then we discuss and model the LFU (Least Frequently Used) cache policy and the explicit cache control policy. Finally, we introduce the main result of this paper, the $\mathcal{O}(1)$ Cache Simulator, and use it to compare, experimentally, the savings of time, energy, and communication achieved by the ideal cache-efficient algorithm for matrix-matrix multiplication. The Cache Simulator tracks the amount of data movement that occurs between the main memory and the cache of the computer. One of the findings of this project is that, in some cases, there is a significant discrepancy in communication values between an LRU cache algorithm and explicit cache control. We propose to alleviate this problem by ``tricking'' the LRU cache algorithm: we update the timestamp of the data we want to keep in cache (namely, the entries of matrix $C$). This gives us the benefits of an explicit cache policy while remaining within the LRU paradigm (a realistic policy on a CPU).
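A minimal version of the data-movement accounting described above can be written with an ordered dictionary acting as the LRU cache; the sketch below counts main-memory transfers for the naive triple-loop multiplication, and the repeated access to $C(i,j)$ keeps it recent, mimicking the timestamp-update trick. The class and function names, and the simplification of one matrix entry per cache line, are assumptions of this sketch.

\begin{verbatim}
from collections import OrderedDict

class LRUCacheSim:
    """Counts transfers between main memory and an LRU cache of `capacity` entries."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # key -> None, ordered by recency of use
        self.misses = 0

    def access(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)          # hit: refresh recency (the "timestamp" update)
        else:
            self.misses += 1                     # miss: one transfer from main memory
            self.cache[key] = None
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict the least recently used entry

def naive_matmul_traffic(n, capacity):
    sim = LRUCacheSim(capacity)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                sim.access(("A", i, k))
                sim.access(("B", k, j))
                sim.access(("C", i, j))          # keeping C(i, j) "hot" in the cache
    return sim.misses

print(naive_matmul_traffic(n=32, capacity=256))
\end{verbatim}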
We investigate the problem of approximating an incomplete preference relation $\succsim$ on a finite set by a complete preference relation. We aim to obtain this approximation in such a way that the choices made on the basis of the two preferences, one incomplete, the other complete, have the smallest possible discrepancy in the aggregate. To this end, we use the top-difference metric on preferences, and define a best complete approximation of $\succsim$ as a complete preference relation nearest to $\succsim$ relative to this metric. We prove that such an approximation must be a maximal completion of $\succsim$, and that it is, in fact, any one completion of $\succsim$ with the largest index. Finally, we use these results to provide a sufficient condition for the best complete approximation of a preference to be its canonical completion. This leads to closed-form solutions to the best approximation problem in the case of several incomplete preference relations of interest.
We prove an NP upper bound on a theory of integer-indexed integer-valued arrays that extends combinatory array logic with an ordering relation on the index set and the ability to express sums of elements. We compare our fragment with seven other fragments in the literature in terms of their expressiveness and computational complexity.
The matrix semigroup membership problem asks, given square matrices $M,M_1,\ldots,M_k$ of the same dimension, whether $M$ lies in the semigroup generated by $M_1,\ldots,M_k$. It is classical that this problem is undecidable in general but decidable in case $M_1,\ldots,M_k$ commute. In this paper we consider the problem of whether, given $M_1,\ldots,M_k$, the semigroup generated by $M_1,\ldots,M_k$ contains a nonnegative matrix. We show that in case $M_1,\ldots,M_k$ commute, this problem is decidable subject to Schanuel's Conjecture. We show also that the problem is undecidable if the commutativity assumption is dropped. A key lemma in our decidability result is a procedure to determine, given a matrix $M$, whether the sequence of matrices $(M^n)_{n\geq 0}$ is ultimately nonnegative. This answers a problem posed by S. Akshay (arXiv:2205.09190). The latter result is in stark contrast to the notorious fact that it is not known how to determine effectively whether for any specific matrix index $(i,j)$ the sequence $((M^n)_{i,j})_{n\geq 0}$ is ultimately nonnegative (which is a formulation of the Ultimate Positivity Problem for linear recurrence sequences).
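The property addressed by this lemma can at least be probed numerically: the sketch below checks entrywise nonnegativity of $M^n$ up to a finite horizon. This is a heuristic illustration with NumPy, not the decision procedure of the paper, since no finite horizon can certify ultimate nonnegativity.

\begin{verbatim}
import numpy as np

def ultimately_nonnegative_up_to(M, horizon=64):
    """Heuristic check: return the smallest n such that M^m >= 0 entrywise for all
    n <= m <= horizon, or None if M^horizon still has a negative entry.
    Illustration only: a finite horizon cannot certify ultimate nonnegativity."""
    M = np.asarray(M, dtype=float)
    power = np.eye(M.shape[0])
    last_negative = 0
    for n in range(1, horizon + 1):
        power = power @ M
        if (power < 0).any():
            last_negative = n
    return None if last_negative == horizon else last_negative + 1

# Eigenvalues 2 and -3/2; M itself has a negative entry, but M^n >= 0 for all n >= 2.
M = [[-1/3, 7/6],
     [7/3, 5/6]]
print(ultimately_nonnegative_up_to(M))   # prints 2
\end{verbatim}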
In this paper, we investigate the problem of deciding whether two random databases $\mathsf{X}\in\mathcal{X}^{n\times d}$ and $\mathsf{Y}\in\mathcal{Y}^{n\times d}$ are statistically dependent or not. This is formulated as a hypothesis testing problem, where under the null hypothesis, these two databases are statistically independent, while under the alternative, there exists an unknown row permutation $\sigma$, such that $\mathsf{X}$ and $\mathsf{Y}^\sigma$, a permuted version of $\mathsf{Y}$, are statistically dependent with some known joint distribution, but have the same marginal distributions as the null. We characterize the thresholds at which optimal testing is information-theoretically impossible and possible, as a function of $n$, $d$, and some spectral properties of the generative distributions of the databases. For example, we prove that if a certain function of the eigenvalues of the likelihood function and $d$ is below a certain threshold, as $d\to\infty$, then weak detection (performing slightly better than random guessing) is statistically impossible, no matter what the value of $n$ is. This threshold mimics the performance of an efficient test that thresholds a centered version of the log-likelihood function of the observed matrices. We also analyze the case where $d$ is fixed, for which we derive strong (vanishing error) and weak detection lower and upper bounds.
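To make the testing problem concrete, the sketch below generates a toy instance with Gaussian marginals: under the alternative, the rows of $\mathsf{Y}$ are correlated with the rows of $\mathsf{X}$ and then hidden by a random row permutation. The correlation level, the parameter values, and the simple sum-of-inner-products statistic are all illustrative assumptions; the statistic is a stand-in for, not the same as, the centered log-likelihood test analyzed here.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, d, rho = 200, 100, 0.6

def sample(dependent):
    """Toy instance: under the alternative, rows of Y are correlated with rows of X
    and then hidden by a random row permutation; under the null, Y is independent."""
    X = rng.standard_normal((n, d))
    if dependent:
        Y = rho * X + np.sqrt(1 - rho ** 2) * rng.standard_normal((n, d))
        Y = Y[rng.permutation(n)]
    else:
        Y = rng.standard_normal((n, d))
    return X, Y

def statistic(X, Y):
    # Illustrative permutation-invariant statistic (not the paper's test): the normalized
    # sum of all pairwise row inner products, which is near rho under the alternative
    # and near 0 under the null.
    return float(np.sum(X @ Y.T)) / (n * d)

for dep in (False, True):
    X, Y = sample(dep)
    print("dependent  " if dep else "independent", round(statistic(X, Y), 3))
\end{verbatim}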
In this paper, we develop sixth-order hybrid finite difference methods (FDMs) for the elliptic interface problem $-\nabla \cdot( a\nabla u)=f$ in $\Omega\backslash \Gamma$, where $\Gamma$ is a smooth interface inside $\Omega$. The variable scalar coefficient $a>0$ and source $f$ are possibly discontinuous across $\Gamma$. The hybrid FDMs utilize a $9$-point compact stencil at interior regular points of the grid and a $13$-point stencil at irregular points near $\Gamma$. For interior regular points away from $\Gamma$, we obtain a sixth-order $9$-point compact FDM satisfying the sign and sum conditions for ensuring the M-matrix property. We also derive sixth-order compact ($4$-point for corners and $6$-point for edges) FDMs satisfying the sign and sum conditions for the M-matrix property at any boundary point subject to (mixed) Dirichlet/Neumann/Robin boundary conditions. Thus, for the elliptic problem without interface (i.e., when $\Gamma$ is empty), our compact FDM has the M-matrix property for any mesh size $h>0$ and, consequently, satisfies the discrete maximum principle, which guarantees the theoretical sixth-order convergence. For irregular points near $\Gamma$, we propose fifth-order $13$-point FDMs, whose stencil coefficients can be effectively calculated by recursively solving several small linear systems. Theoretically, the proposed high-order FDMs use high-order (partial) derivatives of the coefficient $a$, the source term $f$, the interface curve $\Gamma$, the two jump functions along $\Gamma$, and the functions on $\partial \Omega$. Numerically, we always use function values to approximate all required high-order (partial) derivatives in our hybrid FDMs without losing accuracy. Our numerical experiments confirm the sixth-order convergence in the $l_{\infty}$ norm of the proposed hybrid FDMs for the elliptic interface problem.
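For orientation only, the sketch below assembles a standard second-order $5$-point finite difference discretization of $-\nabla\cdot(a\nabla u)=f$ on a uniform grid with homogeneous Dirichlet data and no interface, and checks it against a manufactured solution. It is not the sixth-order hybrid scheme of this paper; the coefficient, the exact solution, and all names are illustrative assumptions.

\begin{verbatim}
import numpy as np

def solve_varcoef_dirichlet(N):
    """Second-order 5-point FDM for -div(a grad u) = f on the unit square with u = 0
    on the boundary, using flux differences with a evaluated at cell midpoints.
    Baseline illustration only (not the sixth-order hybrid scheme)."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    a = lambda X, Y: 1.0 + X + Y
    u_exact = lambda X, Y: np.sin(np.pi * X) * np.sin(np.pi * Y)
    # f = -div(a grad u) for the manufactured solution above.
    f = lambda X, Y: (2 * np.pi ** 2 * (1 + X + Y) * np.sin(np.pi * X) * np.sin(np.pi * Y)
                      - np.pi * np.cos(np.pi * X) * np.sin(np.pi * Y)
                      - np.pi * np.sin(np.pi * X) * np.cos(np.pi * Y))

    m = N - 1                                  # interior points per direction
    idx = lambda i, j: (i - 1) * m + (j - 1)
    A = np.zeros((m * m, m * m))
    b = np.zeros(m * m)
    for i in range(1, N):
        for j in range(1, N):
            xi, yj = x[i], x[j]
            aE = a(xi + h / 2, yj); aW = a(xi - h / 2, yj)
            aN = a(xi, yj + h / 2); aS = a(xi, yj - h / 2)
            row = idx(i, j)
            A[row, row] = (aE + aW + aN + aS) / h ** 2
            if i + 1 < N: A[row, idx(i + 1, j)] = -aE / h ** 2
            if i - 1 > 0: A[row, idx(i - 1, j)] = -aW / h ** 2
            if j + 1 < N: A[row, idx(i, j + 1)] = -aN / h ** 2
            if j - 1 > 0: A[row, idx(i, j - 1)] = -aS / h ** 2
            b[row] = f(xi, yj)                 # boundary values are zero, so no correction
    u = np.linalg.solve(A, b)
    X, Y = np.meshgrid(x[1:N], x[1:N], indexing="ij")
    return np.max(np.abs(u - u_exact(X, Y).ravel()))

for N in (16, 32):
    print(N, solve_varcoef_dirichlet(N))       # error should drop by roughly a factor of 4
\end{verbatim}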