The escalating risk of collisions and the accumulation of space debris in Low Earth Orbit (LEO) have become a critical concern due to the ever-increasing number of spacecraft. Addressing this crisis, especially when dealing with non-cooperative and unidentified space debris, is of paramount importance. This paper contributes to efforts to enable autonomous swarms of small chaser satellites for target geometry determination and safe flight trajectory planning for proximity operations in LEO. Our research explores the on-orbit use of the You Only Look Once v5 (YOLOv5) object detection model, trained to detect satellite components. While this model has shown promise, its inherent lack of interpretability hinders human understanding, a critical aspect of validating algorithms for use in safety-critical missions. To analyze its decision processes, we introduce Probabilistic Explanations for Entropic Knowledge extraction (PEEK), a method that applies information-theoretic analysis to the latent representations within the hidden layers of the model. Through both synthetic and hardware-in-the-loop experiments, PEEK illuminates the decision-making processes of the model, helping identify its strengths, limitations, and biases.
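To make the idea concrete, the following is a minimal sketch of PEEK-style entropy probing of hidden-layer activations; a toy convolutional backbone stands in for YOLOv5, and the layer choice, histogram bin count, and per-channel entropy estimator are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of information-theoretic probing of hidden activations.
# A toy convolutional backbone stands in for YOLOv5; the probed layer,
# bin count, and per-channel entropy are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def shannon_entropy(values: np.ndarray, bins: int = 64) -> float:
    """Shannon entropy (in bits) of a flat activation sample, via histogram."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

backbone = nn.Sequential(                 # stand-in for a detector backbone
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.SiLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.SiLU(),
)

cache = {}
backbone[2].register_forward_hook(        # probe the second conv layer
    lambda mod, inp, out: cache.update(latent=out.detach())
)

img = torch.rand(1, 3, 640, 640)          # stand-in for a satellite image
with torch.no_grad():
    backbone(img)

latent = cache["latent"][0]               # (channels, H, W)
per_channel = [shannon_entropy(c.numpy().ravel()) for c in latent]
print(f"mean per-channel activation entropy: {np.mean(per_channel):.2f} bits")
```

Comparing such entropy profiles across layers, inputs, and object classes is one way the latent structure of a detector can be surfaced for human inspection.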
Image compression constitutes a significant challenge in the era of information explosion. Recent studies have demonstrated that learning-based image compression methods outperform traditional codecs. However, an inherent challenge associated with these methods lies in their lack of interpretability. Following an analysis of the varying degrees of compression degradation across different frequency bands, we propose an end-to-end optimized image compression model facilitated by a frequency-oriented transform. The proposed model consists of four components: spatial sampling, frequency-oriented transform, entropy estimation, and frequency-aware fusion. The frequency-oriented transform separates the original image signal into distinct frequency bands, aligning with human-interpretable concepts. Leveraging the non-overlapping hypothesis, the model enables scalable coding through the selective transmission of arbitrary frequency components. Extensive experiments demonstrate that our model outperforms all traditional codecs, including the next-generation standard H.266/VVC, on the MS-SSIM metric. Moreover, visual analysis tasks (i.e., object detection and semantic segmentation) are conducted to verify that the proposed compression method preserves semantic fidelity in addition to signal-level precision.
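As a rough illustration of the decomposition idea (not the paper's learned transform), the sketch below splits an image into non-overlapping frequency bands using fixed FFT radial masks; because the masks partition the spectrum, the bands sum back to the original image, and any subset yields a coarser but decodable reconstruction, mirroring the scalable-coding behaviour.

```python
# Illustrative frequency-oriented split of a grayscale image into
# non-overlapping bands via FFT radial masks. The paper's learned transform
# is replaced here by a fixed Fourier decomposition; cutoffs are arbitrary.
import numpy as np

def split_into_bands(img: np.ndarray, cutoffs=(0.1, 0.3)) -> list[np.ndarray]:
    """Split a grayscale image into low/mid/high frequency bands."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy / (h / 2), xx / (w / 2))   # normalized frequency
    edges = (0.0, *cutoffs, np.inf)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (radius >= lo) & (radius < hi)        # non-overlapping annulus
        bands.append(np.fft.ifft2(np.fft.ifftshift(spec * mask)).real)
    return bands

img = np.random.rand(64, 64)
low, mid, high = split_into_bands(img)
# Non-overlapping masks => the bands sum back to the original signal,
# and any subset gives a coarser, still-decodable reconstruction.
assert np.allclose(low + mid + high, img, atol=1e-8)
print("low-band-only mean absolute error:", np.abs(img - low).mean())
```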
Information extraction techniques, including named entity recognition (NER) and relation extraction (RE), are crucial in many domains, supporting the effort to make sense of vast amounts of unstructured text data by identifying and connecting relevant information. Such techniques can assist researchers in extracting valuable insights. In this paper, we introduce the Entity-aware Masking for Biomedical Relation Extraction (EMBRE) method, as applied in the context of the BioRED challenge Task 1, in which human-annotated entities are provided as input. Specifically, we integrate entity knowledge into a deep neural network by pretraining the backbone model with an entity masking objective. We randomly mask named entities in each instance and let the model identify the masked entity along with its type. In this way, the model learns more specific knowledge and more robust representations. Then, we use the pretrained model as our backbone to encode language representations and feed these representations into two multilayer perceptrons (MLPs) to predict the logits for relation and novelty, respectively. The experimental results demonstrate that our proposed method improves the performance of entity pair, relation, and novelty extraction over our baseline.
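A hedged sketch of the entity-masking objective follows: entity spans are replaced with mask tokens, and the model is asked to recover both the surface tokens and the entity type. The whitespace tokenization, masking rate, and example entities are illustrative; EMBRE's actual preprocessing may differ.

```python
# Sketch of the entity-masking pretraining objective: named-entity spans are
# replaced by mask tokens, and the model must recover both the surface tokens
# and the entity type. The tiny example and masking rate are illustrative.
import random

MASK = "[MASK]"

def mask_entities(tokens, entities, mask_prob=0.5, seed=0):
    """entities: list of (start, end, type) spans over `tokens` (end exclusive).
    Returns masked tokens plus (position, original_token, entity_type) labels."""
    rng = random.Random(seed)
    masked, labels = list(tokens), []
    for start, end, ent_type in entities:
        if rng.random() < mask_prob:          # mask a random subset of entities
            for i in range(start, end):
                labels.append((i, masked[i], ent_type))
                masked[i] = MASK
    return masked, labels

tokens = "mutations in BRCA1 increase breast cancer risk".split()
entities = [(2, 3, "GeneOrGeneProduct"), (4, 6, "DiseaseOrPhenotypicFeature")]
masked, labels = mask_entities(tokens, entities, mask_prob=1.0)
print(masked)  # ['mutations', 'in', '[MASK]', 'increase', '[MASK]', '[MASK]', 'risk']
print(labels)  # targets: token identity and entity type at each masked position
```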
Simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) is a cutting-edge concept for sixth-generation (6G) wireless networks. In this letter, we propose a novel system that incorporates STAR-RIS with simultaneous wireless information and power transfer (SWIPT) using rate-splitting multiple access (RSMA). The proposed system facilitates communication from a multi-antenna base station (BS) to single-antenna users in a downlink transmission. The BS concurrently sends energy and information signals to multiple energy harvesting receivers (EHRs) and information data receivers (IDRs) with the support of a deployed STAR-RIS. Furthermore, a multi-objective optimization is introduced to strike a balance between the users' sum rate and the total harvested energy. To this end, an optimization problem is formulated to optimize the energy/information beamforming vectors at the BS, the phase shifts at the STAR-RIS, and the common message rate. Subsequently, we employ a meta deep deterministic policy gradient (Meta-DDPG) approach to solve this complex problem. Simulation results validate that the proposed algorithm significantly enhances both the data rate and the harvested energy in comparison to conventional DDPG.
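To ground the formulation, here is a speculative sketch of how a DDPG-style agent's flat action vector could be mapped onto the three groups of optimization variables (BS beamformers, STAR-RIS phase shifts, common-message rate) and scored with a weighted-sum reward; all dimensions, the power normalization, and the equal weighting are assumptions, not the letter's exact design.

```python
# Minimal sketch: decode a flat DDPG action into beamformers, STAR-RIS
# transmission/reflection phase shifts, and a common-message rate, then score
# the two objectives with a weighted sum. Dimensions are illustrative.
import numpy as np

M, K, N = 4, 4, 16                 # BS antennas, users, STAR-RIS elements
ACT_DIM = 2 * M * K + 2 * N + 1    # complex beamformers + T/R phases + rate

def decode_action(a: np.ndarray):
    """Split a flat action in [-1, 1]^ACT_DIM into the physical variables."""
    w = (a[:M * K] + 1j * a[M * K:2 * M * K]).reshape(M, K)  # beamformers
    w /= np.linalg.norm(w)                 # enforce a total power budget
    phases = np.pi * a[2 * M * K:2 * M * K + 2 * N]          # T & R phases
    theta_t, theta_r = np.exp(1j * phases[:N]), np.exp(1j * phases[N:])
    common_rate = np.clip(a[-1], 0, 1)     # normalized common-message rate
    return w, theta_t, theta_r, common_rate

def reward(sum_rate: float, harvested: float, weight: float = 0.5) -> float:
    """Weighted-sum scalarization of the two conflicting objectives."""
    return weight * sum_rate + (1 - weight) * harvested

a = np.random.uniform(-1, 1, ACT_DIM)
w, theta_t, theta_r, c = decode_action(a)
print(w.shape, theta_t.shape, c)   # (4, 4) (16,) <rate in [0, 1]>
```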
Text reuse is a methodological element of fundamental importance in humanities research: pieces of text that reappear across different documents, verbatim or paraphrased, provide invaluable information about the historical spread and evolution of ideas. Large modern digitized corpora enable the joint analysis of text collections that span entire centuries and the detection of large-scale patterns that are impossible to detect with traditional small-scale analysis. For this opportunity to materialize, it is necessary to develop efficient data science systems that perform the corresponding analysis tasks. In this paper, we share insights from ReceptionReader, a system for analyzing text reuse in large historical corpora. The system is built upon billions of instances of text reuse from large digitized corpora of 18th-century texts. Its main functionality is to perform downstream text reuse analysis tasks, such as finding reuses that stem from a given article or identifying the most reused quotes from a set of documents, with each task expressed as a database query. For the purposes of the paper, we discuss the related design choices, including various database normalization levels and query execution frameworks, such as distributed data processing (Apache Spark), an indexed row store engine (MariaDB Aria), and a compressed column store engine (MariaDB Columnstore). Moreover, we present an extensive evaluation with various metrics of interest (latency, storage size, and computing costs) for varying workloads, and we offer insights from the trade-offs we observed and the choices that emerged as optimal in our setting. In summary, our results show that (1) for the workloads that are most relevant to text-reuse analysis, the MariaDB Aria engine emerges as the overall optimal choice, and (2) big data processing (Apache Spark) is irreplaceable for all processing stages of the system's pipeline.
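To make the query-centric design concrete, the sketch below expresses the "most reused quotes from a set of documents" task as SQL over an in-memory SQLite table with a hypothetical one-row-per-reuse schema; ReceptionReader's actual schema, engines (MariaDB Aria/Columnstore), and data differ.

```python
# Hedged sketch of one text-reuse analysis task expressed as a database
# query. The schema and rows are invented; the production system runs
# comparable queries against MariaDB Aria/Columnstore at a much larger scale.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE reuse (
    quote_id    INTEGER,   -- cluster of near-identical reused passages
    source_doc  TEXT,      -- document the passage originates from
    target_doc  TEXT       -- document that reuses it
);
INSERT INTO reuse VALUES
    (1, 'locke_essay', 'review_1761'), (1, 'locke_essay', 'pamphlet_1774'),
    (1, 'locke_essay', 'sermon_1790'), (2, 'hume_treatise', 'review_1761');
""")

# Top reused quotes originating from a given set of documents.
rows = conn.execute("""
    SELECT quote_id, COUNT(*) AS n_reuses
    FROM reuse
    WHERE source_doc IN ('locke_essay', 'hume_treatise')
    GROUP BY quote_id
    ORDER BY n_reuses DESC
    LIMIT 10
""").fetchall()
print(rows)   # [(1, 3), (2, 1)]
```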
Energy theft, characterized by manipulating energy consumption readings to reduce payments, poses a dual threat: it causes financial losses for grid operators and undermines the performance of smart grids. Effective Energy Theft Detection (ETD) methods are crucial for mitigating these risks by identifying such fraudulent activities in their early stages. However, the majority of current ETD methods rely on supervised learning, which is hindered by the difficulty of labelling data and the risk of overfitting to known attacks. To address these challenges, several unsupervised ETD methods have been proposed, which focus on learning the normal patterns of honest users, specifically by reconstructing the input. However, our investigation reveals a limitation of current unsupervised ETD methods: they can only detect anomalous behaviour in users exhibiting regular patterns, while users with high-variance behaviour pose a challenge. In response, this paper introduces a Denoising Diffusion Probabilistic Model (DDPM)-based ETD approach, which achieves strong performance on high-variance smart grid data by incorporating additional attributes correlated with energy consumption. The proposed method improves the average ETD performance on high-variance smart grid data from below 0.5 to over 0.9 in terms of AUC. On the other hand, our experimental findings indicate that while state-of-the-art ETD methods based on reconstruction error can identify attacks for the majority of users, they prove ineffective in detecting attacks on certain users. To address this, we propose a novel ensemble approach that considers both reconstruction error and forecasting error, enhancing the robustness of the ETD methodology. The proposed ensemble method improves the average ETD performance on the stealthiest attacks from nearly 0 to 0.5 in terms of 5%-TPR.
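The ensemble idea can be sketched in a few lines: normalize the two error types onto a common scale and average them, so that users poorly served by one detector are still covered by the other. The rank normalization and equal weighting below are illustrative choices, not necessarily the paper's.

```python
# Sketch of the proposed ensemble idea: combine a reconstruction-error score
# with a forecasting-error score so that attacks missed by one detector can
# still be caught by the other. Normalization and weights are illustrative.
import numpy as np

def rank_normalize(scores: np.ndarray) -> np.ndarray:
    """Map raw errors to [0, 1] by rank, making the two scales comparable."""
    order = scores.argsort().argsort()
    return order / (len(scores) - 1)

def ensemble_score(recon_err: np.ndarray, forecast_err: np.ndarray) -> np.ndarray:
    return 0.5 * rank_normalize(recon_err) + 0.5 * rank_normalize(forecast_err)

rng = np.random.default_rng(1)
recon_err = rng.gamma(2.0, 1.0, size=1000)     # e.g. DDPM reconstruction error
forecast_err = rng.gamma(2.0, 1.0, size=1000)  # e.g. one-step forecasting error
scores = ensemble_score(recon_err, forecast_err)
flagged = np.argsort(scores)[-50:]             # report the top-scoring users
print(f"{len(flagged)} users flagged for inspection")
```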
Time-Triggered Ethernet (TTEthernet) has been widely applied in many scenarios, such as the industrial internet, automotive electronics, and aerospace, and offline routing and scheduling for TTEthernet has been extensively investigated. However, predetermined routes and schedules cannot meet the demands of some agile scenarios, such as smart factories, autonomous driving, and satellite network switching, where transmission requests frequently join and leave the network. Thus, we study the online joint routing and scheduling problem for TTEthernet, where balancing the efficiency and effectiveness of routing and scheduling in an online environment is quite challenging. To ensure high-quality and fast routing and scheduling, we first design a time-slot expanded graph (TSEG) to model the available resources of TTEthernet over time. The fine-grained representation of the TSEG allows us to select a time slot by selecting an edge, thus transforming the scheduling problem into a simple routing problem. Next, we design a dynamic weighting method for each edge in the TSEG and further propose an algorithm to co-optimize routing and scheduling. Compared to existing methods, our scheme enhances TTEthernet throughput by co-optimizing routing and scheduling to eliminate potential conflicts among flow requests. Extensive simulation results show that our scheme runs more than 400 times faster than a standard solution (i.e., an ILP solver), while the number of scheduled flow requests is within only 2% of the optimum. Moreover, compared to existing schemes, our method improves the number of successfully scheduled flows by more than 18%.
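The TSEG construction can be illustrated with a toy example: each physical node is replicated once per time slot, an edge from (u, t) to (v, t+1) exists only if link (u, v) is free in slot t, and a weighted shortest path then fixes the route and the slot assignment simultaneously. The topology, weights, and busy-slot set below are invented for illustration.

```python
# Toy time-slot expanded graph (TSEG): choosing an edge simultaneously picks
# a physical link and a time slot, so scheduling reduces to routing.
import networkx as nx

LINKS = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")]  # physical topology
SLOTS = 4
busy = {(("A", "B"), 0), (("B", "C"), 1)}  # slots already taken by TT flows

tseg = nx.DiGraph()
for t in range(SLOTS - 1):
    for u, v in LINKS:
        if ((u, v), t) not in busy:        # link free in slot t
            for a, b in ((u, v), (v, u)):  # full-duplex, both directions
                tseg.add_edge((a, t), (b, t + 1), weight=1.0)
    # Waiting in an egress queue for one slot is also a scheduling decision.
    for n in {x for link in LINKS for x in link}:
        tseg.add_edge((n, t), (n, t + 1), weight=0.1)

path = nx.shortest_path(tseg, ("A", 0), ("C", 2), weight="weight")
print(path)   # [('A', 0), ('D', 1), ('C', 2)] -- routes around the busy slots
```

In the paper's scheme the edge weights would be set dynamically to steer new flows away from congested links and slots; fixed weights are used here only to keep the example small.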
Functional magnetic resonance imaging (fMRI) plays a crucial role in neuroimaging, enabling the exploration of brain activity through complex-valued signals. These signals, composed of magnitude and phase, offer a rich source of information for understanding brain function. Traditional fMRI analyses have largely focused on magnitude information, often overlooking the potential insights offered by phase data. In this paper, we propose a novel fully Bayesian model designed for analyzing single-subject complex-valued fMRI (cv-fMRI) data. Our model, which we refer to as the CV-M&P model, is distinctive in its comprehensive use of both the magnitude and phase information in fMRI signals, allowing for independent prediction of different types of activation maps. We incorporate Gaussian Markov random fields (GMRFs) to capture spatial correlations within the data, and we employ image partitioning and parallel computation to enhance computational efficiency. Our model is rigorously tested through simulation studies and then applied to a real dataset from a unilateral finger-tapping experiment. The results demonstrate the model's effectiveness in accurately identifying brain regions activated in response to specific tasks, distinguishing between magnitude and phase activation.
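Two of the ingredients above admit a compact sketch: separating a complex-valued voxel time series into magnitude and phase components, and building a first-order GMRF precision matrix over a 2-D slice to encode spatial correlation. The grid size, the Laplacian-plus-ridge parameterization, and the per-block partitioning are assumptions for illustration, not the CV-M&P specification.

```python
# Sketch of two CV-M&P ingredients: magnitude/phase separation of cv-fMRI
# signals and a first-order GMRF precision matrix over a 2-D slice.
import numpy as np
import scipy.sparse as sp

# Complex-valued fMRI signal -> magnitude and phase components.
rng = np.random.default_rng(0)
y = rng.standard_normal((10, 10, 50)) + 1j * rng.standard_normal((10, 10, 50))
magnitude, phase = np.abs(y), np.angle(y)    # modeled separately downstream

def gmrf_precision(h: int, w: int, kappa: float = 1.0) -> sp.csr_matrix:
    """Precision matrix Q = kappa*I + (D - A) on a 4-neighbour lattice,
    where A is the adjacency matrix and D - A the graph Laplacian."""
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols = [], []
    for a, b in [(idx[:-1, :], idx[1:, :]), (idx[:, :-1], idx[:, 1:])]:
        rows += [a.ravel(), b.ravel()]       # both directions of each edge
        cols += [b.ravel(), a.ravel()]
    data = np.ones(sum(len(r) for r in rows))
    A = sp.csr_matrix((data, (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n, n))
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
    return (kappa * sp.identity(n) + D - A).tocsr()

Q = gmrf_precision(10, 10)
# Giving each partition block its own sparse Q lets blocks be processed in
# parallel, which is the computational trick mentioned in the abstract.
print(Q.shape, Q.nnz)
```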
Document representation is at the core of many NLP tasks involving machine understanding. A general representation learned in an unsupervised manner preserves generality and can be used for various applications. In practice, sentiment analysis (SA) has been a challenging task that is regarded as deeply semantics-related and is often used to assess general representations. Existing methods for unsupervised document representation learning can be separated into two families: sequential ones, which explicitly take the ordering of words into consideration, and non-sequential ones, which do not. However, both families suffer from their own weaknesses. In this paper, we propose a model that overcomes the difficulties encountered by both families of methods. Experiments show that our model outperforms state-of-the-art methods on popular SA datasets and on a fine-grained aspect-based SA task by a large margin.
Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems, ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g., evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs significantly outperform previous approaches while being more computationally efficient. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of the different components of our framework and identify the best configuration, which achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
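The memory mechanism can be sketched as follows: on each timed interaction, both endpoints' memory vectors are updated from a message through a recurrent cell. Message aggregation, the time encoder, and the graph-attention embedding module of the full TGN are omitted, and all dimensions are illustrative.

```python
# Minimal sketch of a TGN-style memory update on a stream of timed events.
# Only the per-event memory update is shown; the full model also aggregates
# messages, encodes time, and computes embeddings with graph attention.
import torch
import torch.nn as nn

N_NODES, MEM_DIM, FEAT_DIM = 100, 32, 8
MSG_DIM = 2 * MEM_DIM + FEAT_DIM + 1      # [mem_self, mem_other, feat, delta_t]

memory = torch.zeros(N_NODES, MEM_DIM)    # one memory vector per node
last_update = torch.zeros(N_NODES)        # timestamp of each node's last event
updater = nn.GRUCell(MSG_DIM, MEM_DIM)    # the memory update module

def process_event(src: int, dst: int, t: float, feat: torch.Tensor):
    """Update both endpoints' memories from one (src, dst, t, features) event."""
    msgs = {}
    for node, other in ((src, dst), (dst, src)):
        delta_t = torch.tensor([t - last_update[node].item()])
        # Messages use the pre-update memories of both endpoints.
        msgs[node] = torch.cat([memory[node], memory[other], feat, delta_t])
    for node in (src, dst):
        memory[node] = updater(msgs[node].unsqueeze(0),
                               memory[node].unsqueeze(0))[0]
        last_update[node] = t

events = [(1, 2, 0.5, torch.rand(FEAT_DIM)), (2, 7, 1.2, torch.rand(FEAT_DIM))]
with torch.no_grad():
    for src, dst, t, feat in events:
        process_event(src, dst, t, feat)
print(memory[2].abs().sum())   # node 2's memory reflects both of its events
```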
Multi-relation question answering is a challenging task, as it requires elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model, the Interpretable Reasoning Network, that employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the currently parsed results; uses the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis.
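The hop-by-hop loop can be rendered schematically as below; the attention form, the way the question representation is updated, and all module shapes are guesses for illustration rather than the paper's exact architecture.

```python
# Schematic of a hop-by-hop reasoning loop: at each hop the model attends to
# part of the question, predicts a relation, and uses it to update both the
# question representation and the reasoning state. Shapes are illustrative.
import torch
import torch.nn as nn

D, N_REL, HOPS, N_TOK = 64, 200, 3, 12

attend = nn.Linear(2 * D, 1)       # scores question tokens given the state
rel_clf = nn.Linear(D, N_REL)      # predicts a relation from the read vector
rel_emb = nn.Embedding(N_REL, D)   # embedding of the predicted relation
state_upd = nn.GRUCell(D, D)       # advances the reasoning state

question = torch.rand(N_TOK, D)    # token representations of the question
state = torch.zeros(1, D)

for hop in range(HOPS):
    # 1) Decide which part of the question to analyze at this hop.
    scores = attend(torch.cat([question, state.expand(N_TOK, D)], dim=-1))
    alpha = torch.softmax(scores, dim=0)           # inspectable attention map
    read = (alpha * question).sum(dim=0, keepdim=True)
    # 2) Predict the relation corresponding to the current parsed result.
    rel = rel_clf(read).argmax(dim=-1)
    # 3) Use the relation to update the question and the reasoning state.
    question = question - alpha * rel_emb(rel)     # "consume" matched words
    state = state_upd(rel_emb(rel), state)
    print(f"hop {hop}: relation {rel.item()}")     # traceable intermediate step
```

Logging the attention map and predicted relation at each hop is what makes the intermediate predictions traceable for failure diagnosis.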