We study the following Independent Stable Set problem. Let G be an undirected graph and M = (V(G),I) be a matroid whose elements are the vertices of G. For an integer k\geq 1, the task is to decide whether G contains a set S\subseteq V(G) of size at least k which is independent (stable) in G and independent in M. This problem generalizes several well-studied algorithmic problems, including Rainbow Independent Set, Rainbow Matching, and Bipartite Matching with Separation. We show the following.
- When the matroid M is represented by an independence oracle, then for any computable function f, no algorithm can solve Independent Stable Set using f(k)\cdot n^{o(k)} calls to the oracle.
- On the other hand, when the graph G has degeneracy d, the problem is solvable in time O((d+1)^k n), and hence is FPT parameterized by d+k. Moreover, when the degeneracy d is a constant (not part of the input), the problem admits a kernel polynomial in k. More precisely, we prove that for every integer d\geq 0, the problem admits a kernelization algorithm that in time n^{O(d)} outputs an equivalent framework with a graph on dk^{O(d)} vertices. A lower bound complements this when d is part of the input: Independent Stable Set does not admit a polynomial kernel when parameterized by k+d unless NP\subseteq coNP/poly. This lower bound holds even when M is a partition matroid.
- Another set of results concerns the scenario where the graph G is chordal. In this case, our computational lower bound rules out an FPT algorithm when the input matroid is given by an independence oracle. However, we demonstrate that Independent Stable Set can be solved in 2^{O(k)}||M||^{O(1)} time when M is a linear matroid given by its representation. In the same setting, Independent Stable Set does not admit a polynomial kernel when parameterized by k unless NP\subseteq coNP/poly.
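For intuition, here is a minimal sketch of the (d+1)^k-style branching on a d-degenerate graph, assuming the matroid is available as an independence-oracle callable; this illustrates the branching idea and is not necessarily the paper's exact algorithm. A minimum-degree vertex v has at most d neighbors, and a matroid exchange argument shows that whenever a solution exists, some solution intersects the closed neighborhood N[v], so branching over the at most d+1 vertices of N[v] is sound.

```python
# Hedged sketch: branch over the closed neighborhood of a minimum-degree
# vertex (size <= d+1 in a d-degenerate graph); recursion depth is at most
# k, giving roughly (d+1)^k branches. `matroid_indep` is an assumed oracle.
def find_set(G, k, matroid_indep, chosen=frozenset()):
    """G: dict vertex -> set of neighbors; returns a solution set or None."""
    if len(chosen) >= k:
        return chosen
    if not G:
        return None
    v = min(G, key=lambda u: len(G[u]))      # min-degree vertex, degree <= d
    for w in [v] + sorted(G[v]):             # at most d+1 branches over N[v]
        if not matroid_indep(chosen | {w}):
            continue                         # w would break matroid independence
        removed = G[w] | {w}                 # taking w excludes its neighbors
        rest = {u: G[u] - removed for u in G if u not in removed}
        result = find_set(rest, k, matroid_indep, chosen | {w})
        if result is not None:
            return result
    return None

# Toy run: path a-b-c with a "free" matroid (every set independent), k = 2.
G = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(find_set(G, 2, lambda s: True))        # frozenset({'a', 'c'})
```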
Nonnegative Matrix Factorization (NMF) is the problem of approximating a given nonnegative matrix M by the product of two low-rank nonnegative matrices W and H. Traditionally, NMF is tackled by optimizing a specific objective function that evaluates the quality of the approximation, most often based on the Frobenius norm. In this study, we argue that the Frobenius norm, as a "point-to-point" distance, may not always be appropriate. Because nonnegative combinations generate a polyhedral cone, this conic perspective of NMF does not naturally align with conventional point-to-point distance measures. Hence, a ray-to-ray chordal distance is proposed as an alternative way of measuring the discrepancy between M and WH. This measure is related to the Euclidean distance on the unit sphere, motivating us to employ nonsmooth manifold optimization approaches. We apply Riemannian optimization techniques to solve chordal-NMF by casting it on a manifold. Unlike existing works on Riemannian optimization, which require the manifold to be smooth, the nonnegativity constraint in chordal-NMF yields a nonsmooth manifold. We propose a Riemannian Multiplicative Update (RMU) that preserves the convergence properties of Riemannian gradient descent without requiring the smoothness condition on the manifold. We showcase the effectiveness of chordal-NMF on synthetic datasets as well as real-world multispectral images.
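For concreteness, one natural way to formalize such a ray-to-ray measure (the paper's exact definition may differ) compares normalized columns on the unit sphere:

\[ d_c(\mathbf{m}, \widehat{\mathbf{m}}) \;=\; \left\| \frac{\mathbf{m}}{\|\mathbf{m}\|_2} - \frac{\widehat{\mathbf{m}}}{\|\widehat{\mathbf{m}}\|_2} \right\|_2 \;=\; 2\sin\!\left(\frac{\theta}{2}\right), \]

where \mathbf{m} is a column of M, \widehat{\mathbf{m}} is the corresponding column of WH, and \theta is the angle between the two rays. The value is the chord length between the two normalized points on the unit sphere, hence "chordal": it vanishes exactly when the rays coincide, irrespective of scaling, which is what the conic view calls for.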
Prompts play a crucial role in guiding the responses of Large Language Models (LLMs). However, the intricate role of individual tokens in prompts, known as input saliency, in shaping the responses remains largely underexplored. Existing saliency methods either misalign with LLM generation objectives or rely heavily on linearity assumptions, leading to potential inaccuracies. To address this, we propose Token Distribution Dynamics (TDD), a simple yet effective approach to unveil and manipulate the role of prompts in generating LLM outputs. TDD leverages the robust interpreting capabilities of the language model head (LM head) to assess input saliency. It projects input tokens into the embedding space and then estimates their significance based on distribution dynamics over the vocabulary. We introduce three TDD variants: forward, backward, and bidirectional, each offering unique insights into token relevance. Extensive experiments reveal that TDD surpasses state-of-the-art baselines by a large margin in elucidating the causal relationships between prompts and LLM outputs. Beyond mere interpretation, we apply TDD to two prompt manipulation tasks for controlled text generation: zero-shot toxic language suppression and sentiment steering. Empirical results underscore TDD's proficiency in identifying both toxic and sentiment cues in prompts, subsequently mitigating toxicity or modulating sentiment in the generated content.
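A minimal, hypothetical sketch of the "forward" flavor of this idea: reuse the LM head to turn every prompt position into a distribution over the vocabulary, then score each token by the probability mass it already places on the model's eventual prediction. TDD's exact scoring rule is defined in the paper; this only illustrates the projection step.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The movie was surprisingly", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0]            # (seq_len, vocab): the LM head
    probs = torch.softmax(logits, dim=-1)    # applied at every prompt position
    target = logits[-1].argmax()             # model's next-token prediction
    saliency = probs[:, target]              # per-token proxy saliency score

for t, s in zip(tok.convert_ids_to_tokens(ids[0].tolist()), saliency.tolist()):
    print(f"{t!r:>16}  {s:.4f}")
```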
The Influence Maximization (IM) problem is a well-known NP-hard combinatorial problem over graphs whose goal is to find the set of nodes that maximizes the spread of influence through the network. Among the various methods for solving the IM problem, evolutionary algorithms (EAs) have been shown to be particularly effective. While the literature on the topic is ample, only a few attempts have been made at solving the IM problem over higher-order networks, namely extensions of standard graphs that can capture interactions involving more than two nodes. Hypergraphs are a valuable tool for modeling complex interaction networks in various domains; however, they require rethinking several graph-based problems, including IM. In this work, we propose a multi-objective EA for the IM problem over hypergraphs that leverages smart initialization and hypergraph-aware mutation. Whereas existing methods rely on greedy or heuristic strategies, to the best of our knowledge this is the first attempt at applying EAs to this problem. Our results over nine real-world datasets and three propagation models, compared with five baseline algorithms, reveal that our method achieves state-of-the-art results in most cases in terms of hypervolume and solution diversity.
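To illustrate the flavor of a hypergraph-aware mutation, the sketch below replaces one seed node with a node that shares a hyperedge with it, so offspring stay within influence-relevant neighborhoods. The data structures and the operator itself are illustrative assumptions, not the paper's exact implementation.

```python
import random

# Hypothetical hypergraph-aware mutation: swap a random seed for a
# co-member of one of its hyperedges instead of an arbitrary node.
def hypergraph_mutation(seed_set, hyperedges, rng=random):
    seeds = list(seed_set)
    i = rng.randrange(len(seeds))
    victim = seeds[i]
    # Candidate replacements: co-members of any hyperedge containing victim.
    candidates = {v for e in hyperedges if victim in e for v in e} - set(seeds)
    if candidates:
        seeds[i] = rng.choice(sorted(candidates))
    return set(seeds)

# Example: 5 nodes, 3 hyperedges, mutate a 2-node seed set.
H = [{0, 1, 2}, {2, 3}, {3, 4}]
print(hypergraph_mutation({0, 3}, H))
```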
In the quest for accurate and interpretable AI models, eXplainable AI (XAI) has become crucial. Fuzzy Cognitive Maps (FCMs) stand out as an advanced XAI method because of their ability to synergistically combine and exploit both expert knowledge and data-driven insights, providing transparency and intrinsic interpretability. This letter introduces and investigates the "Total Causal Effect Calculation for FCMs" (TCEC-FCM) algorithm, an innovative approach that, for the first time, enables the efficient calculation of total causal effects among concepts in large-scale FCMs by leveraging binary search and graph traversal techniques, thereby overcoming the challenge of exhaustive causal path exploration that hinders existing methods. We evaluate the proposed method on various synthetic FCMs, demonstrating TCEC-FCM's superior performance over exhaustive methods and marking a significant advancement in causal effect analysis within FCMs, thus broadening their usability for modern complex XAI applications.
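For contrast, the exhaustive baseline that TCEC-FCM is designed to avoid can be sketched as follows: enumerate all simple directed paths from a source concept to a target concept and aggregate per-path effects. Here we take the product of edge weights along a path and sum over paths; the aggregation convention actually used for FCM total effects may differ.

```python
# Naive exhaustive total-effect computation over all simple directed paths.
# weights: dict mapping (u, v) -> causal edge weight in [-1, 1].
def total_effect_exhaustive(weights, src, dst, path=None):
    if path is None:
        path = [src]
    if src == dst:
        effect = 1.0
        for u, v in zip(path, path[1:]):     # product of weights along path
            effect *= weights[(u, v)]
        return effect
    total = 0.0
    for (u, v) in weights:
        if u == src and v not in path:       # extend only to unvisited nodes
            total += total_effect_exhaustive(weights, v, dst, path + [v])
    return total

W = {("A", "B"): 0.5, ("B", "C"): 0.8, ("A", "C"): -0.2}
print(total_effect_exhaustive(W, "A", "C"))  # 0.5*0.8 + (-0.2) = 0.2
```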
Recently, Accattoli introduced the Exponential Substitution Calculus (ESC), given by untyped proof terms for Intuitionistic Multiplicative Exponential Linear Logic (IMELL) endowed with rewriting rules at a distance for cut elimination. He also introduced a new cut elimination strategy, dubbed the good strategy, and showed that its number of steps is a time cost model with polynomial overhead for ESC/IMELL, the first such cost model. Here, we refine Accattoli's result by introducing an abstract machine for ESC and proving that it implements the good strategy and computes cut-free terms/proofs within a linear overhead.
Global Navigation Satellite System (GNSS) receivers provide ubiquitous and precise position, navigation, and time (PNT) to a wide gamut of civilian and tactical infrastructures and devices. Due to the low received power of GNSS signals, even low-power radio-frequency interference (RFI) sources are a serious threat to GNSS integrity and availability. Consequently, RFI source localization is paramount yet hard, especially over large areas. Methods based on multi-rotor unmanned aerial vehicles (UAVs) exist but are often limited by hovering time and require specific antennas and detectors. In comparison, fixed-wing planes allow longer missions but are more complex to operate and deploy. A vertical take-off and landing (VTOL) UAV combines the positive aspects of both platforms: high maneuverability, long mission time, and, together with highly integrated control systems, simple operation and deployment. Building upon the flexibility of such a platform, we propose a method that combines advanced flight dynamics with high-performance consumer receivers to detect interference over large areas, with minimal operator interaction. The proposed system can detect multiple interference sources and map their area of influence, gaining situational awareness of poor GNSS quality or denied environments. Furthermore, it can estimate the relative heading and position of the interference source to within tens of meters. The proposed method is validated with real-life measurements, successfully mapping two interference-affected areas and exposing radio equipment causing involuntary in-band interference.
Relevance evaluation of a query and a passage is essential in Information Retrieval (IR). Recently, numerous studies have been conducted on tasks related to relevance judgment using Large Language Models (LLMs) such as GPT-4, demonstrating significant improvements. However, the efficacy of LLMs is considerably influenced by the design of the prompt. The purpose of this paper is to identify which specific terms in prompts positively or negatively impact relevance evaluation with LLMs. We employed two types of prompts: those used in previous research and those generated automatically by LLMs. By comparing the performance of these prompts in both few-shot and zero-shot settings, we analyze the influence of specific terms in the prompts. We have observed two main findings from our study. First, we discovered that prompts using the term "answer" lead to more effective relevance evaluations than those using "relevant". This indicates that a more direct approach, focusing on answering the query, tends to enhance performance. Second, we noted the importance of appropriately balancing the scope of relevance. While the term "relevant" can extend the scope too broadly, resulting in less precise evaluations, an optimal balance in defining relevance is crucial for accurate assessments. The inclusion of few-shot examples helps define this balance more precisely: by providing clearer contexts for the term "relevance", few-shot examples help refine the relevance criteria. In conclusion, our study highlights the significance of carefully selecting terms in prompts for relevance evaluation with LLMs.
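As an illustration of the kind of wording contrast studied, consider two hypothetical prompt templates that differ only in framing; the exact prompts used in the experiments may differ.

```python
# Hypothetical templates contrasting "answer"-framed vs. "relevant"-framed
# relevance judgments; the wording is illustrative, not the paper's prompts.
PROMPT_ANSWER = (
    "Does the following passage answer the query?\n"
    "Query: {query}\nPassage: {passage}\nRespond with Yes or No."
)
PROMPT_RELEVANT = (
    "Is the following passage relevant to the query?\n"
    "Query: {query}\nPassage: {passage}\nRespond with Yes or No."
)

print(PROMPT_ANSWER.format(query="What is the capital of France?",
                           passage="Paris is the capital of France."))
```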
Existing knowledge graph (KG) embedding models have primarily focused on static KGs. However, real-world KGs do not remain static; rather, they evolve and grow in tandem with the development of KG applications. Consequently, new facts and previously unseen entities and relations continually emerge, necessitating an embedding model that can quickly learn and transfer new knowledge as the KG grows. Motivated by this, we delve into an expanding field of KG embedding in this paper, i.e., lifelong KG embedding. We consider knowledge transfer and retention when learning on growing snapshots of a KG, without having to learn embeddings from scratch. The proposed model includes a masked KG autoencoder for embedding learning and update, an embedding transfer strategy to inject the learned knowledge into the new entity and relation embeddings, and an embedding regularization method to avoid catastrophic forgetting. To investigate the impacts of different aspects of KG growth, we construct four datasets to evaluate the performance of lifelong KG embedding. Experimental results show that the proposed model outperforms the state-of-the-art inductive and lifelong embedding baselines.
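A minimal sketch of what an embedding-transfer step could look like, assuming new entities are initialized from the learned embeddings of the known entities they co-occur with in new facts; the mean-pooling rule here is an illustrative assumption, not the paper's exact strategy.

```python
import torch

# Initialize an unseen entity's embedding from its known neighbors'
# embeddings (mean pooling), falling back to small noise when isolated.
def transfer_init(neighbor_ids, entity_emb):
    # entity_emb: (num_known_entities, dim) table from the previous snapshot
    if neighbor_ids:
        return entity_emb[torch.tensor(neighbor_ids)].mean(dim=0)
    return 0.02 * torch.randn(entity_emb.size(1))

known = torch.randn(100, 64)              # previously learned embeddings
new_entity = transfer_init([3, 17, 42], known)
print(new_entity.shape)                   # torch.Size([64])
```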
Graph Neural Networks (GNNs) have received considerable attention for learning on graph-structured data across a wide variety of tasks. The well-designed propagation mechanism, which has been demonstrated to be effective, is the most fundamental part of GNNs. Although most GNNs basically follow a message-passing scheme, little effort has been made to discover and analyze their essential relations. In this paper, we establish a surprising connection between different propagation mechanisms and a unified optimization problem, showing that despite the proliferation of various GNNs, their proposed propagation mechanisms are in fact optimal solutions of a feature fitting function over a wide class of graph kernels with a graph regularization term. Our proposed unified optimization framework, summarizing the commonalities between several of the most representative GNNs, not only provides a macroscopic view for surveying the relations between different GNNs, but also opens up new opportunities for flexibly designing new GNNs. With the proposed framework, we discover that existing works usually utilize naive graph convolutional kernels for the feature fitting function, and we further develop two novel objective functions with adjustable graph kernels exhibiting low-pass or high-pass filtering capabilities, respectively. Moreover, we provide convergence proofs and expressive power comparisons for the proposed models. Extensive experiments on benchmark datasets clearly show that the proposed GNNs not only outperform state-of-the-art methods but also effectively alleviate over-smoothing, further verifying the feasibility of designing GNNs with our unified optimization framework.
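A representative special case (not necessarily the paper's exact formulation) makes the connection concrete: propagating input features H can be read as solving

\[ \min_{Z} \; \|Z - H\|_F^2 \;+\; \lambda\, \operatorname{tr}\!\left(Z^{\top} \tilde{L} Z\right), \qquad Z^{*} = (I + \lambda \tilde{L})^{-1} H, \]

where the first term is the feature fitting function, the trace term is the graph regularization with (normalized) graph Laplacian \tilde{L}, and iterative schemes such as personalized-PageRank-style propagation can be viewed as approximating the closed-form solution Z^{*}.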
Automatic KB completion for commonsense knowledge graphs (e.g., ATOMIC and ConceptNet) poses unique challenges compared to the much-studied conventional knowledge bases (e.g., Freebase). Commonsense knowledge graphs use free-form text to represent nodes, resulting in orders of magnitude more nodes than conventional KBs (18x more nodes in ATOMIC compared to Freebase (FB15K-237)). Importantly, this implies significantly sparser graph structures, a major challenge for existing KB completion methods, which assume densely connected graphs over a relatively small set of nodes. In this paper, we present novel KB completion models that address these challenges by exploiting the structural and semantic context of nodes. Specifically, we investigate two key ideas: (1) learning from local graph structure, using graph convolutional networks and automatic graph densification, and (2) transfer learning from pre-trained language models to knowledge graphs for enhanced contextual representation of knowledge. We describe our method for incorporating information from both these sources in a joint model and provide the first empirical results for KB completion on ATOMIC and evaluation with ranking metrics on ConceptNet. Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency. Further analysis of model predictions sheds light on the types of commonsense knowledge that language models capture well.
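A minimal sketch of what automatic graph densification could look like, assuming synthetic edges are added between free-form nodes whose text embeddings are highly similar; the embeddings, similarity measure, and threshold are illustrative placeholders, not the paper's procedure.

```python
import numpy as np

# Add "similarity" edges between nodes whose embedding cosine similarity
# exceeds a threshold, to densify a sparse commonsense graph for the GCN.
def densify(node_embs, threshold=0.9):
    normed = node_embs / np.linalg.norm(node_embs, axis=1, keepdims=True)
    sim = normed @ normed.T                     # pairwise cosine similarities
    n = len(sim)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sim[i, j] >= threshold]

embs = np.random.rand(50, 32)                   # placeholder node embeddings
print(len(densify(embs, threshold=0.85)))       # number of synthetic edges
```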