The concept of the \textit{relative fractional packing number} between two graphs $G$ and $H$, initially introduced in arXiv:2307.06155 [math.CO], serves as an upper bound for the ratio of the zero-error Shannon capacities of these graphs. It is defined as \begin{equation*} \sup\limits_{W} \frac{\alpha(G \boxtimes W)}{\alpha(H \boxtimes W)}, \end{equation*} where the supremum is taken over all graphs $W$, $\alpha$ denotes the independence number, and $\boxtimes$ denotes the strong product of graphs. This article establishes several theorems on the computation of this number. Specifically, we address the NP-hardness of computing it and the complexity of approximating it. Furthermore, we formulate a conjecture giving necessary and sufficient conditions for this number to be less than one, and we verify the conjecture for specific graph families. Additionally, we present related concepts and introduce a generalized version of the independence number that offers further insight into the relative fractional packing number.
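For intuition, both quantities in the ratio can be evaluated directly on small graphs. The following sketch (assuming the Python library networkx; the choice of $G$, $H$, and the small set of candidate graphs $W$ is purely illustrative) computes $\alpha(G \boxtimes W)/\alpha(H \boxtimes W)$ for a few candidates $W$; since the true quantity is a supremum over all graphs $W$, this yields only a lower bound on it.
\begin{verbatim}
# Illustrative sketch: evaluate alpha(G x W) / alpha(H x W) for a few
# small candidate graphs W. The relative fractional packing number is a
# supremum over ALL graphs W, so this gives only a lower bound on it.
import networkx as nx

def independence_number(G):
    # alpha(G) equals the clique number of the complement graph.
    clique, _ = nx.max_weight_clique(nx.complement(G), weight=None)
    return len(clique)

def packing_ratio(G, H, W):
    return (independence_number(nx.strong_product(G, W))
            / independence_number(nx.strong_product(H, W)))

G, H = nx.cycle_graph(5), nx.complete_graph(3)
candidates = [nx.empty_graph(1), nx.path_graph(3), nx.cycle_graph(5)]
print(max(packing_ratio(G, H, W) for W in candidates))
\end{verbatim}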
We revisit the popular \emph{delayed deterministic finite automaton} (\ddfa{}) compression algorithm introduced by Kumar~et~al.~[SIGCOMM 2006] for compressing deterministic finite automata (DFAs) used in intrusion detection systems. This compression scheme exploits similarities among the outgoing transition sets of states to achieve strong compression while maintaining high matching throughput. Unfortunately, the \ddfa{} algorithm and its later variants require at least quadratic compression time, since they compare all pairs of states to compute an optimal compression. This is too slow, and in some cases even infeasible, for the collections of regular expressions in modern intrusion detection systems, which produce DFAs with millions of states. Our main result is a simple, general framework, based on locality-sensitive hashing, for constructing an approximation of the optimal \ddfa{} in near-linear time. We apply our approach to the original \ddfa{} compression algorithm and two important variants, and we experimentally evaluate our algorithms on DFAs from widely used modern intrusion detection systems. Overall, our new algorithms compress up to an order of magnitude faster than existing solutions, with little or no loss in compression size. Consequently, our algorithms are significantly more scalable and can handle larger collections of regular expressions than previous solutions.
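To convey the algorithmic idea, the sketch below shows one standard way locality-sensitive hashing can replace all-pairs comparison: states are bucketed by a MinHash-style signature of their outgoing transition sets, so only states likely to share many transitions are compared exactly. This is a minimal illustration of the technique, not the paper's exact construction; the DFA representation (a dict from state to its transition function) is an assumption made for the example.
\begin{verbatim}
# Minimal MinHash-style sketch of the LSH idea: bucket DFA states whose
# outgoing transition sets are similar, so candidate pairs for default
# transitions are found without comparing all O(n^2) state pairs.
import random
from collections import defaultdict

def minhash_signature(transitions, hash_seeds):
    # transitions: dict symbol -> successor; treat it as a set of
    # (symbol, successor) pairs and keep the minimum hash per seed.
    items = list(transitions.items())
    return tuple(min(hash((seed, it)) for it in items) for seed in hash_seeds)

def candidate_pairs(dfa, num_hashes=4):
    # dfa: dict state -> {symbol: successor}. States colliding on a full
    # signature likely share many transitions (high Jaccard similarity),
    # so only those pairs need to be compared exactly.
    seeds = [random.random() for _ in range(num_hashes)]
    buckets = defaultdict(list)
    for state, trans in dfa.items():
        buckets[minhash_signature(trans, seeds)].append(state)
    for group in buckets.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                yield group[i], group[j]
\end{verbatim}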
We present an $O(n^{3/2})$-time algorithm for the \emph{shortest (diagonal) flip path problem} for \emph{lattice} triangulations with $n$ points, improving on previous $O(n^2)$-time algorithms. For a large, natural class of inputs, our bound is tight in the sense that our algorithm runs in time linear in the number of flips in the output flip path. Our results rely on an independently interesting structural elucidation of shortest flip paths as the linear orderings of a unique partially ordered set, called a \emph{minimum flip plan}, constructed by a novel use of Farey sequences from elementary number theory. Flip paths between general (not necessarily lattice) triangulations have been studied in the combinatorial setting for nearly a century. In the Euclidean geometric setting, finding a shortest flip path between two triangulations is NP-complete. However, for lattice triangulations, which are studied as spin systems, $O(n^2)$-time algorithms to find shortest flip paths are known. These algorithms, as well as ours, apply to \emph{constrained} flip paths, which ensure that a set of \emph{constraint} edges is present in every triangulation along the path. We also discuss implications for determining simultaneously flippable edges, i.e., for finding optimal simultaneous flip paths between lattice triangulations, and for counting lattice triangulations.
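As background on the number-theoretic tool (and not on how the paper deploys it), the Farey sequence $F_n$ of all reduced fractions in $[0,1]$ with denominator at most $n$ can be enumerated with the classical next-term recurrence:
\begin{verbatim}
# Classical enumeration of the Farey sequence F_n (reduced fractions in
# [0,1] with denominator <= n, in increasing order), shown only as
# background for the number-theoretic tool the paper employs.
def farey(n):
    a, b, c, d = 0, 1, 1, n
    yield (a, b)
    while c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        yield (a, b)

print(list(farey(5)))  # F_5: (0,1), (1,5), (1,4), ..., (1,1)
\end{verbatim}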
It is a long-standing open question to construct a classical oracle relative to which BQP/qpoly $\neq$ BQP/poly or QMA $\neq$ QCMA. In this paper, we construct classically-accessible classical oracles relative to which BQP/qpoly $\neq$ BQP/poly and QMA $\neq$ QCMA. Here, classically-accessible classical oracles are oracles that can be accessed only classically, even by quantum algorithms. Using a similar technique, we also give an alternative proof of the separation of QMA and QCMA relative to a distributional quantumly-accessible classical oracle, which was recently shown by Natarajan and Nirkhe.
Let $G$ be a planar graph and let $I_s$ and $I_t$ be two independent sets in $G$, each of size $k$. We begin with a ``token'' on each vertex of $I_s$ and seek to move all tokens to $I_t$ by repeated ``token jumps'': removing a single token from one vertex and placing it on another vertex. We require that each intermediate arrangement of tokens again specifies an independent set of size $k$. Given $G$, $I_s$, and $I_t$, we ask whether there exists a sequence of token jumps that transforms $I_s$ into $I_t$. When $k$ is part of the input, this problem is known to be PSPACE-complete. However, it was shown by Ito, Kami\'nski, and Ono to be fixed-parameter tractable: when $k$ is fixed, the problem can be solved in time polynomial in the order of $G$. Here we strengthen the upper bound on the running time in terms of $k$ by showing that the problem has a kernel of size linear in $k$. More precisely, we transform an arbitrary input problem on a planar graph into an equivalent problem on a (planar) graph of order $O(k)$.
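For concreteness, the reachability question itself can be decided by brute force on small instances, e.g., by a breadth-first search over all independent sets of size $k$; the search space is exponential in general, which is precisely why a kernel of size $O(k)$ matters. A minimal sketch, assuming the graph is given as an adjacency dictionary:
\begin{verbatim}
# Brute-force decision procedure for Token Jumping, for small instances
# only: BFS over size-k independent sets, where one "jump" moves a
# single token to any vertex that keeps the set independent.
from itertools import combinations
from collections import deque

def token_jumping(adj, I_s, I_t):
    # adj: dict vertex -> set of neighbors; |I_s| = |I_t| = k.
    def independent(S):
        return all(v not in adj[u] for u, v in combinations(S, 2))

    vertices, start, goal = set(adj), frozenset(I_s), frozenset(I_t)
    seen, queue = {start}, deque([start])
    while queue:
        S = queue.popleft()
        if S == goal:
            return True
        for u in S:                       # remove one token...
            for v in vertices - S:        # ...and place it elsewhere
                T = (S - {u}) | {v}
                if T not in seen and independent(T):
                    seen.add(T)
                    queue.append(T)
    return False
\end{verbatim}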
Let $G$ be an unlabeled planar and simple $n$-vertex graph. Unlabeled graphs are graphs whose label information is either not given or is lost during the construction of data structures. We present a succinct encoding of $G$ that supports induced-minor operations, i.e., edge contractions and vertex deletions. Any sequence of such operations is processed in $O(n)$ time in the word-RAM model. At all times the encoding provides constant-time (per element output) neighborhood access and degree queries. Optional hash tables extend the encoding with constant expected-time adjacency queries and edge deletions (thus, all minor operations are supported), such that any number of edge deletions is processed in $O(n)$ expected time. Constructing the encoding requires $O(n)$ bits and $O(n)$ time. The encoding requires $\mathcal{H}(n) + o(n)$ bits of space, where $\mathcal{H}(n)$ is the entropy of encoding a planar graph with $n$ vertices. Our data structure is based on a recent result of Holm et al. [ESA 2017], who presented a linear-time contraction data structure that maintains parallel edges and works for labeled graphs, but uses $\Theta(n \log n)$ bits of space. We combine the techniques of Holm et al. with novel ideas and the succinct encoding of Blelloch and Farzan [CPM 2010] for arbitrary separable graphs. Our result partially answers the question raised by Blelloch and Farzan of whether their encoding can be modified to support modifications of the graph. As a simple application of our encoding, we present a linear-time outerplanarity-testing algorithm that uses $O(n)$ bits of space.
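As a toy illustration of the supported interface (and deliberately non-succinct: it stores explicit adjacency sets and a union-find, far from the $\mathcal{H}(n) + o(n)$-bit machinery of the paper), consider:
\begin{verbatim}
# Toy (non-succinct) illustration of the contraction interface: a
# union-find tracks which original vertices have been merged, and a
# neighborhood is the merged vertices' adjacency, resolved via find().
class ContractibleGraph:
    def __init__(self, adj):
        self.adj = {v: set(ns) for v, ns in adj.items()}
        self.parent = {v: v for v in adj}

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def contract(self, u, v):
        # Merge the endpoint with the smaller adjacency set into the
        # other one, so merges stay cheap.
        u, v = self.find(u), self.find(v)
        if u == v:
            return
        if len(self.adj[u]) < len(self.adj[v]):
            u, v = v, u
        self.parent[v] = u
        self.adj[u] |= self.adj.pop(v)
        self.adj[u] -= {u, v}  # no self-loops

    def neighbors(self, v):
        r = self.find(v)
        return {self.find(w) for w in self.adj[r]} - {r}
\end{verbatim}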
For a complexity class $C$ and language $L$, a constructive separation of $L \notin C$ gives an efficient algorithm (also called a refuter) to find counterexamples (bad inputs) for every $C$-algorithm attempting to decide $L$. We study the questions: Which lower bounds can be made constructive? What are the consequences of constructive separations? We build a case that ``constructiveness'' serves as a dividing line between many weak lower bounds we know how to prove and strong lower bounds against $P$, $ZPP$, and $BPP$. Put another way, constructiveness is the opposite of a complexity barrier: it is a property we want lower bounds to have. Our results fall into three broad categories. 1. Our first set of results shows that, for many well-known lower bounds against streaming algorithms, one-tape Turing machines, and query complexity, as well as lower bounds for the Minimum Circuit Size Problem, making these lower bounds constructive would imply breakthrough separations, ranging from $EXP \neq BPP$ to even $P \neq NP$. 2. Our second set of results shows that for most major open problems in lower bounds against $P$, $ZPP$, and $BPP$, including $P \neq NP$, $P \neq PSPACE$, $P \neq PP$, $ZPP \neq EXP$, and $BPP \neq NEXP$, any proof of the separation would further imply a constructive separation. Our results generalize earlier results for $P \neq NP$ [Gutfreund, Shaltiel, and Ta-Shma, CCC 2005] and $BPP \neq NEXP$ [Dolev, Fandina, and Gutfreund, CIAC 2013]. 3. Our third set of results shows that certain complexity separations cannot be made constructive. We observe that for all super-polynomially growing functions $t$, there are, unconditionally, no constructive separations for detecting high $t$-time Kolmogorov complexity (a task known not to be in $P$) from any complexity class.
Classical numerical schemes have traditionally been employed to solve partial differential equations (PDEs) computationally. Recently, neural network-based methods have emerged; however, methods such as physics-informed neural networks (PINNs) and neural operators still exhibit deficiencies in robustness and generalization. To address these issues, numerous studies have integrated classical numerical frameworks with machine learning techniques, incorporating neural networks into parts of traditional numerical methods. In this study, we focus on hyperbolic conservation laws and replace traditional numerical fluxes with neural operators. To this end, we develop loss functions inspired by established numerical schemes for conservation laws and approximate numerical fluxes using Fourier neural operators (FNOs). Our experiments demonstrate that our approach combines the strengths of traditional numerical schemes and FNOs, outperforming standard FNO methods in several respects: our method is robust, resolution-invariant, and feasible as a data-driven method. In particular, it makes continuous predictions over time and exhibits superior generalization on out-of-distribution (OOD) samples, both of which are challenges for existing neural operator methods.
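To make the setting concrete, the sketch below shows the conservative finite-volume update into which a learned flux would be substituted. Here the interface flux is the classical Rusanov (local Lax-Friedrichs) flux for Burgers' equation; in the approach described above, this hand-crafted flux function is the component replaced by a trained FNO. The CFL number and grid are illustrative choices of this example.
\begin{verbatim}
# Sketch of a conservative finite-volume update for a 1D conservation
# law. numerical_flux is the classical Rusanov flux for Burgers'
# equation; a trained Fourier neural operator would be substituted for
# this function in the hybrid approach.
import numpy as np

def physical_flux(u):
    return 0.5 * u**2  # Burgers' equation: u_t + (u^2/2)_x = 0

def numerical_flux(u_left, u_right, alpha):
    return (0.5 * (physical_flux(u_left) + physical_flux(u_right))
            - 0.5 * alpha * (u_right - u_left))

def step(u, dx):
    alpha = np.abs(u).max()            # max wave speed
    dt = 0.4 * dx / alpha              # CFL condition (0.4 illustrative)
    up = np.roll(u, -1)                # periodic right neighbors
    F_right = numerical_flux(u, up, alpha)  # flux at i+1/2
    F_left = np.roll(F_right, 1)            # flux at i-1/2
    return u - dt / dx * (F_right - F_left), dt

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x)
u, dt = step(u, x[1] - x[0])
\end{verbatim}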
We propose a new framework to design and analyze accelerated methods for solving general monotone equation (ME) problems $F(x)=0$. Traditional approaches include generalized steepest descent methods and inexact Newton-type methods. If $F$ is uniformly monotone and twice differentiable, both achieve local convergence rates, and the latter are additionally globally convergent thanks to line search and hyperplane projection; however, no global rate is known for either. Variational inequality methods can be applied to yield a global rate expressed in terms of $\|F(x)\|$, but these results are restricted to first-order methods and Lipschitz continuous operators, and it has been unclear how to obtain global acceleration using high-order Lipschitz continuity. This paper takes a continuous-time perspective in which accelerated methods are viewed as discretizations of dynamical systems. Our contribution is to propose accelerated rescaled gradient systems and prove that they are equivalent to closed-loop control systems. Based on this connection, we establish properties of the solution trajectories. Moreover, we provide a unified algorithmic framework obtained from discretization of our system, which together with two approximation subroutines yields both existing high-order methods and new first-order methods. We prove that the $p^{th}$-order method achieves a global rate of $O(k^{-p/2})$ in terms of $\|F(x)\|$ if $F$ is $p^{th}$-order Lipschitz continuous, and that the first-order method achieves the same rate if $F$ is $p^{th}$-order strongly Lipschitz continuous. If $F$ is strongly monotone, the restarted versions achieve local convergence with order $p$ when $p \geq 2$. Our discrete-time analysis is largely motivated by the continuous-time analysis and demonstrates the fundamental role that rescaled gradients play in global acceleration for solving ME problems.
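As a simple illustration of the continuous-time viewpoint (not of the paper's accelerated closed-loop systems), one can forward-Euler discretize a non-accelerated rescaled gradient flow $\dot{x} = -F(x)/\|F(x)\|^{s}$; the exponent $s=(p-2)/(p-1)$ is borrowed from the rescaled-gradient literature for minimization and is an assumption of this sketch:
\begin{verbatim}
# Illustration of the continuous-time viewpoint only: forward-Euler
# discretization of a rescaled gradient flow dx/dt = -F(x)/||F(x)||^s.
# The exponent s = (p-2)/(p-1) is an assumption borrowed from rescaled
# gradient flows for minimization; the paper's accelerated closed-loop
# systems are more elaborate than this non-accelerated sketch.
import numpy as np

def rescaled_gradient_flow(F, x0, p=2, step=1e-2, iters=1000, tol=1e-8):
    s = (p - 2) / (p - 1)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Fx = F(x)
        norm = np.linalg.norm(Fx)
        if norm < tol:
            break
        x = x - step * Fx / norm**s
    return x

# Example: a strongly monotone operator F(x) = Ax + b, A positive definite.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, -1.0])
x_star = rescaled_gradient_flow(lambda x: A @ x + b, np.zeros(2), p=3)
\end{verbatim}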
We extend Petkov\v{s}ek's algorithm for computing hypergeometric solutions of scalar difference equations to the case of difference systems $\tau(Y) = M Y$, with $M \in {\rm GL}_n(C(x))$, where $\tau$ is the shift operator. Hypergeometric solutions are solutions of the form $\gamma P$, where $P \in C(x)^n$ and $\gamma$ is a hypergeometric term over $C(x)$, i.e., $\tau(\gamma)/\gamma \in C(x)$. Our contributions concern the efficient computation of a set of candidates for $\tau(\gamma)/\gamma$, which we write as $\lambda = c\frac{A}{B}$ with monic $A, B \in C[x]$ and $c \in C^*$. Factors of the denominators of $M^{-1}$ and $M$ give candidates for $A$ and $B$, while another algorithm is needed for $c$. We use the super-reduction algorithm to compute candidates for $c$, as well as other ingredients that reduce the list of candidates for $A/B$. To further reduce the number of candidates $A/B$, we bound the so-called type of $A/B$ by bounding local types. Our algorithm has been implemented in Maple, and experiments show that our implementation can handle systems of high dimension, which is useful for factoring operators.
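For readers unfamiliar with the scalar case being generalized: Petkov\v{s}ek's algorithm is implemented in standard computer algebra systems, e.g., SymPy's rsolve, whose use is sketched below as background (this is the scalar algorithm, not the systems algorithm of this paper).
\begin{verbatim}
# Background on the scalar case: Petkovsek's algorithm finds
# hypergeometric solutions of scalar recurrences, and SymPy's rsolve
# implements it. This is not the systems algorithm of the paper.
from sympy import Function, rsolve, symbols

n = symbols('n', integer=True)
y = Function('y')

# y(n+1) = (n+1) y(n) has the hypergeometric solution y(n) = C0 * n!,
# since tau(gamma)/gamma = n+1 is rational in n -- the defining
# property of a hypergeometric term.
print(rsolve(y(n + 1) - (n + 1) * y(n), y(n)))  # C0*factorial(n)
\end{verbatim}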
2D industrial anomaly detection has been widely studied; however, multimodal industrial anomaly detection based on 3D point clouds and RGB images still has many untouched areas. Existing multimodal industrial anomaly detection methods directly concatenate the multimodal features, which leads to strong interference between features and harms detection performance. In this paper, we propose Multi-3D-Memory (M3DM), a novel multimodal anomaly detection method with a hybrid fusion scheme: first, we design unsupervised feature fusion with patch-wise contrastive learning to encourage interaction between features of different modalities; second, we use decision-layer fusion with multiple memory banks to avoid loss of information, together with additional novelty classifiers that make the final decision. We further propose a point feature alignment operation to better align the point cloud and RGB features. Extensive experiments show that our multimodal industrial anomaly detection model outperforms state-of-the-art (SOTA) methods in both detection and segmentation precision on the MVTec-3D AD dataset. Code is available at https://github.com/nomewang/M3DM.
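As an illustration of what a patch-wise contrastive objective between aligned point-cloud and RGB patch features can look like, the sketch below uses a generic symmetric InfoNCE loss; the feature dimensions, the temperature, and the loss form itself are assumptions of this example rather than the exact loss of M3DM.
\begin{verbatim}
# Generic InfoNCE-style patch-wise contrastive loss between aligned
# point-cloud and RGB patch features, as one way to realize a
# patch-wise contrastive fusion objective. Shapes and the temperature
# are illustrative; this is not necessarily the paper's exact loss.
import torch
import torch.nn.functional as F

def patchwise_contrastive_loss(pc_feats, rgb_feats, temperature=0.07):
    # pc_feats, rgb_feats: (num_patches, dim); row i of each tensor
    # describes the same spatial patch in the two modalities.
    pc = F.normalize(pc_feats, dim=1)
    rgb = F.normalize(rgb_feats, dim=1)
    logits = pc @ rgb.t() / temperature      # pairwise similarities
    targets = torch.arange(pc.size(0))       # patch i matches patch i
    # Symmetric cross-entropy: each patch must pick its counterpart.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = patchwise_contrastive_loss(torch.randn(64, 128), torch.randn(64, 128))
\end{verbatim}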