This contribution introduces a concept that employs Large Language Models (LLMs) and a chatbot interface to enhance SPARQL query generation for ontologies, thereby facilitating intuitive access to formalized knowledge. The system converts natural language user inquiries into accurate SPARQL queries that are restricted to the factual content of the ontology, effectively preventing misinformation or fabrication by the LLM. To improve the quality and precision of the results, additional textual information from established domain-specific standards is integrated into the ontology, providing precise descriptions of its concepts and relationships. An experimental study assesses the accuracy of the generated SPARQL queries, revealing significant benefits of using LLMs for querying ontologies and highlighting areas for future research.
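A minimal sketch of such a pipeline, assuming rdflib for the ontology; the `ask_llm` placeholder, the prompt, the schema summary, and the ontology file are illustrative assumptions, not the authors' implementation:

```python
# Sketch only: LLM writes the SPARQL, the graph answers it.
from rdflib import Graph

def ask_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API returning raw SPARQL text."""
    raise NotImplementedError("wire up your LLM provider here")

def nl_to_sparql(question: str, graph: Graph) -> str:
    # Show the LLM the ontology's vocabulary (including rdfs:comment texts
    # drawn from domain standards) so it grounds the query in real terms.
    schema = "\n".join(
        f"{s.n3()} {p.n3()} {o.n3()}"
        for s, p, o in graph.triples((None, None, None))
    )[:4000]  # truncate to fit the context window
    prompt = (
        "Translate the question into a SPARQL SELECT query over this "
        f"ontology. Use only the terms below.\n{schema}\n\nQ: {question}"
    )
    return ask_llm(prompt)

graph = Graph().parse("ontology.ttl")   # hypothetical ontology file
query = nl_to_sparql("Which pumps have a rated pressure above 10 bar?", graph)
for row in graph.query(query):          # answers come from the graph itself,
    print(row)                          # never from the LLM's free text
```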
Current Semi-Supervised Object Detection (SSOD) methods enhance detector performance by leveraging large amounts of unlabeled data, assuming that both labeled and unlabeled data share the same label space. However, in open-set scenarios, the unlabeled dataset contains both in-distribution (ID) classes and out-of-distribution (OOD) classes. Applying semi-supervised detectors in such settings can lead to misclassifying OOD classes as ID classes. To alleviate this issue, we propose a simple yet effective method, termed Collaborative Feature-Logits Detector (CFL-Detector). Specifically, we introduce a feature-level clustering method using contrastive loss to clarify vector boundaries in the feature space and highlight class differences. Additionally, by optimizing the logits-level uncertainty classification loss, the model enhances its ability to effectively distinguish between ID and OOD classes. Extensive experiments demonstrate that our method achieves state-of-the-art performance compared to existing methods.
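As a rough illustration of the two ingredients (standard formulations, not the paper's exact losses), a supervised contrastive loss over ROI features and an energy-style uncertainty score over logits might look like:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(feats, labels, tau=0.1):
    """Pull same-class ROI features together and push classes apart;
    an illustrative stand-in for feature-level clustering."""
    z = F.normalize(feats, dim=1)
    sim = z @ z.T / tau                                    # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # log-softmax over all other samples, averaged over positive pairs
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, -1e9), dim=1, keepdim=True)
    return -(log_prob * pos).sum() / pos.sum().clamp(min=1)

def energy_score(logits):
    """Logits-level uncertainty: low energy suggests a confident ID
    prediction; high energy flags a likely OOD proposal."""
    return -torch.logsumexp(logits, dim=1)
```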
We present a novel fast bipartitioned hybrid adder (FBHA) that utilizes carry-select and carry-lookahead logic. The proposed FBHA is an accurate adder whose more significant part and less significant part are joined by a carry signal. In an N-bit FBHA, the K-bit less significant part is realized using carry-lookahead adder logic, and the (N-K)-bit more significant part is realized using carry-select adder logic. A 32-bit addition was considered as the example operation for this work. Many 32-bit adders, ranging from the slow ripple carry adder to the fast parallel-prefix Kogge-Stone adder, as well as the proposed adder, were synthesized using a 28-nm CMOS standard cell library and their design metrics were compared. A well-optimized FBHA achieved significant optimizations in design metrics compared to its high-speed adder counterparts; some examples follow: (a) 19.8% reduction in delay compared to a carry-lookahead adder; (b) 19.8% reduction in delay, 24.4% reduction in area, and 19.4% reduction in power compared to a carry-select adder; (c) 45.6% reduction in delay and 13.5% reduction in power compared to a conditional sum adder; and (d) 46.5% reduction in area and 29.3% reduction in power compared to the Kogge-Stone adder.
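A bit-level Python model makes the split concrete (the width K and the test values are illustrative): the lower K bits use the carry-lookahead recurrence, while the upper bits are computed for both possible incoming carries and then selected:

```python
def fbha_add(a, b, n=32, k=8):
    """Functional model of the described bipartition (not a gate netlist)."""
    al, bl = a & ((1 << k) - 1), b & ((1 << k) - 1)
    ah, bh = a >> k, b >> k
    # Less significant part: carry-lookahead from generate/propagate signals.
    g = [((al >> i) & 1) & ((bl >> i) & 1) for i in range(k)]
    p = [((al >> i) & 1) ^ ((bl >> i) & 1) for i in range(k)]
    c, sum_lo = 0, 0
    for i in range(k):
        sum_lo |= (p[i] ^ c) << i
        c = g[i] | (p[i] & c)     # lookahead recurrence c_{i+1} = g_i + p_i*c_i
    # More significant part: carry-select. Both candidate sums exist before
    # the lower carry arrives; the real carry just picks one of them.
    hi0 = (ah + bh) & ((1 << (n - k)) - 1)
    hi1 = (ah + bh + 1) & ((1 << (n - k)) - 1)
    return ((hi1 if c else hi0) << k) | sum_lo

assert fbha_add(0xFFFF_FFFF, 1) == 0                       # wraps at 32 bits
assert fbha_add(123456789, 987654321) == 123456789 + 987654321
```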
This paper presents a unified framework for constructing Approximate Message Passing (AMP) algorithms for rotationally-invariant models. By employing a general iterative algorithm template and reducing it to long-memory Orthogonal AMP (OAMP), we systematically derive the correct Onsager terms of AMP algorithms. This approach allows us to rederive an AMP algorithm introduced by Fan and Opper et al., while shedding new light on the role of free cumulants of the spectral law. The free cumulants arise naturally from a recursive centering operation, potentially of independent interest beyond the scope of AMP. To illustrate the flexibility of our framework, we introduce two novel AMP variants and apply them to estimation in spiked models.
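To ground the idea, here is a toy AMP iteration for the special case of a GOE (Wigner) matrix in a spiked model, where the Onsager correction collapses to the single term b_t = <f'(x_t)>; for general rotationally-invariant ensembles the correction involves all free cumulants of the spectral law, which is the regime the paper's framework addresses. The denoiser, spike strength, and informative initialization below are illustrative choices:

```python
import numpy as np

def amp_wigner(A, f, f_prime, x0, iters=20):
    """AMP iterate x_{t+1} = A f(x_t) - b_t f(x_{t-1}), b_t = <f'(x_t)>.
    Valid for Wigner A only; a simplified illustration, not the full scheme."""
    x_prev = np.zeros_like(x0)
    x = x0.copy()
    for _ in range(iters):
        b = f_prime(x).mean()                         # Onsager coefficient
        x, x_prev = A @ f(x) - b * f(x_prev), x
    return x

# Toy spiked model A = (lam/n) v v^T + W with W ~ GOE and a tanh denoiser.
rng = np.random.default_rng(0)
n, lam = 2000, 2.0
v = rng.choice([-1.0, 1.0], size=n)
W = rng.normal(size=(n, n)) / np.sqrt(n)
W = (W + W.T) / np.sqrt(2)
A = lam * np.outer(v, v) / n + W
x = amp_wigner(A, np.tanh, lambda u: 1 - np.tanh(u) ** 2,
               x0=v + rng.normal(size=n))             # informative init for the demo
print("overlap:", abs(np.tanh(x) @ v) / (np.linalg.norm(np.tanh(x)) * np.sqrt(n)))
```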
Training an accurate classifier for an EEG-based brain-computer interface (BCI) requires EEG data from a large number of users, yet protecting their data privacy is a critical consideration. Federated learning (FL) is a promising solution to this challenge. This paper proposes Federated classification with local Batch-specific batch normalization and Sharpness-aware minimization (FedBS) for privacy protection in EEG-based motor imagery (MI) classification. FedBS utilizes local batch-specific batch normalization to reduce data discrepancies among different clients, and a sharpness-aware minimization optimizer in local training to improve model generalization. Experiments on three public MI datasets using three popular deep learning models demonstrated that FedBS outperformed six state-of-the-art FL approaches. Remarkably, it also outperformed centralized training, which does not consider privacy protection at all. In summary, FedBS protects user EEG data privacy, enabling multiple BCI users to participate in large-scale machine learning model training, which in turn improves BCI decoding accuracy.
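For intuition, one sharpness-aware minimization step in its standard form (the batch-specific batch normalization component, which keeps BN statistics local to each client, is omitted here) can be sketched in PyTorch as:

```python
import torch

def sam_step(model, loss_fn, batch, base_opt, rho=0.05):
    """One SAM step: perturb weights toward the locally sharpest direction,
    then descend using the gradient taken at the perturbed point."""
    x, y = batch
    loss_fn(model(x), y).backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    scale = rho / (torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12)
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = p.grad * scale
            p.add_(e)                    # climb to w + e
            eps.append((p, e))
    model.zero_grad()
    loss_fn(model(x), y).backward()      # gradient at the perturbed weights
    with torch.no_grad():
        for p, e in eps:
            p.sub_(e)                    # restore the original weights
    base_opt.step()                      # descend with the sharpness-aware grad
    base_opt.zero_grad()
```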
Despite the success of the O-RAN Alliance in developing a set of interoperable interfaces, the development of unique Radio Access Network (RAN) deployments remains challenging. This is especially true for military communications, where deployments are highly specialized with limited volume. The construction and maintenance of the RAN, which is a real-time embedded system, is an ill-defined NP problem requiring teams of system engineers with specialized knowledge of the hardware platform. In this paper, we introduce a RAN Domain Specific Language (RDSL(TM)) to formally describe use cases, constraints, and multi-vendor hardware/software abstraction to allow automation of RAN construction. In this DSL, system requirements are declarative, and performance constraints are guaranteed by construction using an automated system solver. Using our RAN system solver platform, Gabriel(TM), we show how a system engineer can confidently modify RAN functionality without knowledge of the underlying hardware. We show benefits for specific system requirements when compared to the manually optimized, default configuration of Intel FlexRAN(TM), and conclude that DSL/automation-driven construction of the RAN can lead to significant power and latency benefits when the deployment constraints are tuned for a specific case. We give examples of how constraints and requirements can be formatted in a "Kubernetes style" YAML format, which allows the use of other tools, such as Ansible, to integrate the generation of these requirements into higher-level automation flows such as Service Management and Orchestration (SMO).
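A hypothetical sketch of what such a "Kubernetes style" requirement could look like; every field name below is invented for illustration, as the abstract does not show the actual RDSL schema:

```yaml
# Illustrative only -- not the real RDSL format.
apiVersion: rdsl.example/v1
kind: RanDeployment
metadata:
  name: tactical-cell-01
spec:
  useCase: low-latency-uplink
  constraints:
    maxLatencyMs: 2          # guaranteed by construction via the solver
    powerBudgetWatts: 120
  hardware:
    abstraction: multi-vendor  # solver maps the spec onto available platforms
```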
Large multimodal models (LMMs) combine unimodal encoders and large language models (LLMs) to perform multimodal tasks. Despite recent advancements towards the interpretability of these models, the internal representations of LMMs remain largely a mystery. In this paper, we present a novel framework for the interpretation of LMMs. We propose a dictionary learning based approach applied to the representations of tokens; the elements of the learned dictionary correspond to our proposed concepts. We show that these concepts are semantically well grounded in both vision and text, and we therefore refer to them as "multi-modal concepts". We qualitatively and quantitatively evaluate the learnt concepts, and show that the extracted multimodal concepts are useful for interpreting the representations of test samples. Finally, we evaluate the disentanglement between different concepts and the quality of grounding concepts visually and textually. Our code is publicly available at //github.com/mshukor/xl-vlms
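As a rough stand-in for this pipeline (sklearn's DictionaryLearning in place of the authors' method; the shapes and hyperparameters are assumptions), dictionary learning over token representations looks like:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Factor token representations into a small dictionary whose atoms play
# the role of "multi-modal concepts". Illustrative data, not LMM activations.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(500, 768))      # e.g. hidden states for one token type

dl = DictionaryLearning(n_components=20, transform_algorithm="lasso_lars",
                        positive_code=True, random_state=0)
codes = dl.fit_transform(tokens)          # (500, 20): concept activations per sample
concepts = dl.components_                 # (20, 768): learned concept directions

# A concept can then be grounded by inspecting the samples (images / words)
# whose codes activate it most strongly.
top = np.argsort(codes[:, 0])[::-1][:5]
print("samples most activating concept 0:", top)
```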
This work explores systems where source updates require multiple sequential processing steps. We model and analyze the Age of Information (AoI) performance of various system designs under both parallel and series server setups. In parallel setups, each processor executes all computation steps with multiple processors working in parallel, while in series setups, each processor performs a specific step in sequence. In practice, processing faster is better in terms of age, but it also consumes more power. We identify the occurrence of wasted power in these setups, which arises when processing efforts do not lead to a reduction in age. This happens when a fresher update finishes first in parallel setups, or when a server preempts processing due to a fresher update from the preceding server in series setups. To address this age-power trade-off, we formulate and solve an optimization problem to determine the optimal service rates for each processing step under a given power budget. We focus on a special case where updates require two computational steps.
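The trade-off can be illustrated with a deliberately simplified two-step model; the age and power expressions below (age as the sum of mean step delays, power quadratic in the service rate) are stand-in assumptions, not the paper's analysis:

```python
import numpy as np
from scipy.optimize import minimize

P_BUDGET = 10.0  # assumed total power budget

def avg_age(mu):
    """Toy proxy for the average AoI of a two-step pipeline."""
    return 1.0 / mu[0] + 1.0 / mu[1]

res = minimize(
    avg_age,
    x0=np.array([1.0, 1.0]),
    bounds=[(1e-3, None)] * 2,
    # power budget: require P - sum(mu^2) >= 0
    constraints=[{"type": "ineq",
                  "fun": lambda mu: P_BUDGET - np.sum(mu ** 2)}],
)
print("optimal service rates:", res.x)   # symmetric model -> equal power split
```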
Efficient inference in high-dimensional models is a central challenge in machine learning. We introduce the Gaussian Ensemble Belief Propagation (GEnBP) algorithm, which combines the strengths of the Ensemble Kalman Filter (EnKF) and Gaussian Belief Propagation (GaBP) to address this challenge. GEnBP updates ensembles of prior samples into posterior samples by passing low-rank local messages over the edges of a graphical model, enabling efficient handling of high-dimensional states, parameters, and complex, noisy, black-box generation processes. By utilizing local message passing within a graphical model structure, GEnBP effectively manages complex dependency structures and remains computationally efficient even when the ensemble size is much smaller than the inference dimension -- a common scenario in spatiotemporal modeling, image processing, and physical model inversion. We demonstrate that GEnBP can be applied to various problem structures, including data assimilation, system identification, and hierarchical models, and show through experiments that it outperforms existing belief propagation methods in terms of accuracy and computational efficiency. Supporting code is available at //github.com/danmackinlay/GEnBP
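The EnKF ingredient, conditioning an ensemble on an observation seen through a black-box map, is the kind of low-rank Gaussian update GEnBP passes along graph edges. A single-edge sketch under assumed shapes, with no message scheduling (see the repository for the actual implementation):

```python
import numpy as np

def ensemble_update(X, y, h, noise_cov):
    """EnKF-style conditioning of a prior ensemble X (dim, n_ens) on an
    observation y, using only ensemble statistics (hence low rank)."""
    n = X.shape[1]
    Y = np.column_stack([h(X[:, i]) for i in range(n)])   # pushforward ensemble
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    C_xy = Xc @ Yc.T / (n - 1)                            # cross-covariance
    C_yy = Yc @ Yc.T / (n - 1) + noise_cov
    K = np.linalg.solve(C_yy, C_xy.T).T                   # ensemble Kalman gain
    # Perturbed-observation form: each member sees a noisy copy of y.
    rng = np.random.default_rng(0)
    obs = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), noise_cov, size=n).T
    return X + K @ (obs - Y)

X = np.random.default_rng(1).normal(size=(50, 8))         # dim 50, 8 members
X_post = ensemble_update(X, y=np.ones(5), h=lambda x: x[:5],
                         noise_cov=0.1 * np.eye(5))
```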
Convolutional neural networks (CNNs) have shown dramatic improvements in single image super-resolution (SISR) by using large-scale external samples. Despite their remarkable performance based on external datasets, they cannot exploit internal information within a specific image. Another problem is that they are applicable only to the specific data conditions under which they were supervised; for instance, the low-resolution (LR) image should be a "bicubic" downsampled, noise-free image of a high-resolution (HR) one. To address both issues, zero-shot super-resolution (ZSSR) has been proposed for flexible internal learning. However, it requires thousands of gradient updates, i.e., a long inference time. In this paper, we present Meta-Transfer Learning for Zero-Shot Super-Resolution (MZSR), which leverages ZSSR. Precisely, it is based on finding a generic initial parameter that is suitable for internal learning. Thus, we can exploit both external and internal information, where a single gradient update can yield quite considerable results (see Figure 1). With our method, the network can quickly adapt to a given image condition. In this respect, our method can be applied to a large spectrum of image conditions within a fast adaptation process.
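In the same spirit (not the authors' code; the downscaling kernel, loss, and learning rate are illustrative), a single-gradient-step internal adaptation starting from a meta-learned initialization could look like:

```python
import torch
import torch.nn.functional as F

def adapt_one_step(model, lr_img, scale=2, step_size=0.02):
    """One self-supervised update on the test image itself: downscale the LR
    input to an 'LR son', train the net to map LR son -> LR, starting from
    the meta-learned initialization. lr_img is a (1, C, H, W) tensor."""
    lr_son = F.interpolate(lr_img, scale_factor=1 / scale,
                           mode="bicubic", align_corners=False)
    pred = model(F.interpolate(lr_son, size=lr_img.shape[-2:],
                               mode="bicubic", align_corners=False))
    F.l1_loss(pred, lr_img).backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= step_size * p.grad      # the single gradient update
        model.zero_grad()
    # Afterwards, model(bicubic-upsampled lr_img) gives the SR estimate.
    return model
```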
Manually labeling objects by tracing their boundaries is a laborious process. Prior work proposed Polygon-RNN, which produces polygonal annotations in a recurrent manner using a CNN-RNN architecture, allowing interactive correction via humans in the loop. We propose a new framework, Curve-GCN, that alleviates the sequential nature of Polygon-RNN by predicting all vertices simultaneously using a Graph Convolutional Network (GCN). Our model is trained end-to-end. It supports object annotation by either polygons or splines, facilitating labeling efficiency for both line-based and curved objects. We show that Curve-GCN outperforms all existing approaches in automatic mode, including the powerful PSP-DeepLab, and is significantly more efficient in interactive mode than Polygon-RNN++. Our model runs at 29.3 ms in automatic mode and 2.6 ms in interactive mode, making it 10x and 100x faster than Polygon-RNN++.
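A toy version of the simultaneous-vertex idea (layer sizes and the random features are illustrative): one graph-convolution step over the polygon's cycle graph, stacked and followed by a per-vertex offset head:

```python
import torch
import torch.nn as nn

class PolyGCNLayer(nn.Module):
    """One GCN step over polygon vertices on a cycle graph: each vertex
    aggregates its two neighbours, and all vertices update simultaneously."""
    def __init__(self, dim):
        super().__init__()
        self.w_self = nn.Linear(dim, dim)
        self.w_nbr = nn.Linear(dim, dim)

    def forward(self, feats):                    # feats: (n_vertices, dim)
        nbrs = torch.roll(feats, 1, 0) + torch.roll(feats, -1, 0)
        return torch.relu(self.w_self(feats) + self.w_nbr(nbrs / 2))

# Stack layers, then regress per-vertex (dx, dy) offsets in a single pass --
# no sequential decoding, unlike Polygon-RNN.
dim, n_vertices = 64, 40
gcn = nn.Sequential(PolyGCNLayer(dim), PolyGCNLayer(dim))
offsets = nn.Linear(dim, 2)(gcn(torch.randn(n_vertices, dim)))
print(offsets.shape)   # torch.Size([40, 2])
```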