Out-of-distribution (OOD) detection is critical when deploying machine learning models in the real world. Outlier exposure methods, which incorporate auxiliary outlier data in the training process, can drastically improve OOD detection performance compared to approaches without advanced training strategies. We introduce Hopfield Boosting, a boosting approach that leverages modern Hopfield energy (MHE) to sharpen the decision boundary between in-distribution and OOD data. Hopfield Boosting encourages the model to concentrate on hard-to-distinguish auxiliary outlier examples that lie close to the decision boundary between in-distribution and auxiliary outlier data. Our method achieves a new state-of-the-art in OOD detection with outlier exposure, improving the FPR95 metric from 2.28 to 0.92 on CIFAR-10 and from 11.76 to 7.94 on CIFAR-100.
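For intuition, here is a minimal sketch of an MHE-based OOD score that contrasts a query's energy with respect to stored in-distribution patterns against its energy with respect to stored auxiliary outliers; the pattern matrices, the inverse temperature `beta`, and the exact scoring rule are illustrative assumptions, not the paper's training objective.

```python
import numpy as np

def log_sum_exp(z, beta):
    """Numerically stable log-sum-exp with inverse temperature beta."""
    z = beta * z
    m = z.max()
    return (m + np.log(np.exp(z - m).sum())) / beta

def mhe_ood_score(query, id_patterns, aux_patterns, beta=4.0):
    """Illustrative OOD score contrasting modern Hopfield energies:
    a query much closer to stored in-distribution patterns than to stored
    auxiliary outliers gets a low (more in-distribution) score.
    query: (d,); id_patterns / aux_patterns: (n, d); all L2-normalized."""
    e_id = -log_sum_exp(id_patterns @ query, beta)    # MHE w.r.t. ID patterns
    e_aux = -log_sum_exp(aux_patterns @ query, beta)  # MHE w.r.t. outliers
    return e_id - e_aux  # larger -> more OOD-like
```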
With the increasing complexity and scope of software systems, their dependability is crucial. The analysis of log data recorded during system execution can enable engineers to automatically predict failures at run time. Several Machine Learning (ML) techniques, including traditional ML and Deep Learning (DL), have been proposed to automate such tasks. However, current empirical studies are limited in terms of covering all main DL types -- Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), and Transformer -- as well as examining them on a wide range of diverse datasets. In this paper, we aim to address these issues by systematically investigating the combination of log data embedding strategies and DL types for failure prediction. To that end, we propose a modular architecture to accommodate various configurations of embedding strategies and DL-based encoders. To further investigate how dataset characteristics such as dataset size and failure percentage affect model accuracy, we synthesised 360 datasets, with varying characteristics, for three distinct system behavioral models, based on a systematic and automated generation approach. Using the F1 score metric, our results show that the best overall performing configuration is a CNN-based encoder with Logkey2vec. Additionally, we provide specific dataset conditions, namely a dataset size >350 or a failure percentage >7.5%, under which this configuration demonstrates high accuracy for failure prediction.
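As a rough illustration of that best-performing configuration, the sketch below pairs a learned per-log-key embedding (in the spirit of Logkey2vec) with a 1D-CNN encoder over log sequences; the layer sizes, kernel widths, and single-logit failure head are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LogCNNClassifier(nn.Module):
    """A per-log-key embedding (in the spirit of Logkey2vec) followed by a
    1D-CNN encoder and a binary failure-prediction head."""
    def __init__(self, vocab_size, embed_dim=32, n_filters=64,
                 kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # one vector per log key
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, n_filters, k) for k in kernel_sizes])
        self.head = nn.Linear(n_filters * len(kernel_sizes), 1)

    def forward(self, keys):  # keys: (batch, seq_len) tensor of log-key ids
        x = self.embed(keys).transpose(1, 2)  # (batch, embed_dim, seq_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.head(torch.cat(pooled, dim=1))  # failure logit per sequence
```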
Probabilistic Hoare logic (PHL) is an extension of Hoare logic and is specifically useful for verifying randomized programs. It allows researchers to formally reason about the behavior of programs with stochastic elements, ensuring that the desired probabilistic properties are upheld. The relative completeness of satisfaction-based PHL has been an open problem ever since the birth of the first PHL in 1979. More specifically, no satisfaction-based PHL with While-loops has been proven to be relatively complete yet. This paper solves this problem by establishing a new PHL with While-loops and proving its relative completeness. The programming language of our PHL is expressively equivalent to those of existing PHL systems but is considerably more convenient for showing completeness. The weakest preterm for the While-loop command reveals how the loop changes the probabilistic properties of computer states, considering both execution branches that halt and infinite runs. We prove the relative completeness of our PHL in two steps. We first establish a semantics and proof system of Hoare triples with probabilistic programs and deterministic assertions. Then, by utilizing the weakest precondition of deterministic assertions, we construct the weakest preterm calculus of probabilistic expressions. The relative completeness of our PHL is then obtained as a consequence of the weakest preterm calculus.
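For readers unfamiliar with PHL, an illustrative satisfaction-based triple for a fair probabilistic choice, written in generic notation rather than the paper's exact syntax, might look as follows.

```latex
% Illustrative satisfaction-based triple (generic notation): after a fair
% probabilistic choice between x := 0 and x := 1, the resulting state
% distribution assigns probability 1/2 to x = 0.
\[
  \{\, \mathrm{true} \,\}\quad
  x := 0 \;\oplus_{1/2}\; x := 1\quad
  \{\, \Pr[x = 0] = \tfrac{1}{2} \,\}
\]
```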
Random partition distributions are a crucial tool for model-based clustering. This study advances random partition modeling in the context of functional spatial data, focusing on the challenges posed by hourly population data across various regions and dates. We propose an extended generalized Dirichlet process, named the similarity-based generalized Dirichlet process (SGDP), to address the limitations of simple random partition distributions (e.g., those induced by the Dirichlet process), such as an overabundance of clusters. The model prevents the production of excess clusters while incorporating pairwise similarity information to ensure accurate and meaningful grouping. The theoretical properties of the SGDP are studied. Then, SGDP-based random partition is applied to a real-world dataset of hourly population flow in $500\,\text{m} \times 500\,\text{m}$ meshes in central Tokyo. In this empirical context, our method excels at detecting meaningful patterns in the data while accounting for spatial nuances. The results underscore the adaptability and utility of the method, revealing intricate spatiotemporal dynamics. The proposed SGDP will significantly contribute to urban planning, transportation, and policy-making and will be a helpful tool for understanding population dynamics and their implications.
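To convey the idea of similarity-informed partitioning, here is a generic sketch of a similarity-weighted allocation step in a Gibbs-style sampler; the summed-similarity weights and the concentration parameter `alpha` are illustrative stand-ins, not the SGDP's exact predictive rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity_weighted_assignment(i, labels, sim, alpha=1.0):
    """One Gibbs-style allocation step: observation i joins an existing
    cluster with probability proportional to its summed pairwise similarity
    to that cluster's current members, or opens a new cluster with weight
    alpha. sim: (n, n) nonnegative similarity matrix; labels[:i] holds the
    current assignments of the first i observations."""
    clusters = sorted(set(labels[:i]))
    weights = [sim[i, [j for j in range(i) if labels[j] == c]].sum()
               for c in clusters]
    weights.append(alpha)  # weight for opening a new cluster
    p = np.asarray(weights, dtype=float)
    new_label = max(clusters, default=-1) + 1
    return rng.choice(clusters + [new_label], p=p / p.sum())
```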
Transfer function design is crucial in volume rendering, as it directly influences the visual representation and interpretation of volumetric data. However, creating effective transfer functions that align with users' visual objectives is often challenging due to the complex parameter space and the semantic gap between transfer function values and features of interest within the volume. In this work, we propose a novel approach that leverages recent advancements in language-vision models to bridge this semantic gap. By employing a fully differentiable rendering pipeline and an image-based loss function guided by language descriptions, our method generates transfer functions that yield volume-rendered images closely matching the user's intent. We demonstrate the effectiveness of our approach in creating meaningful transfer functions from simple descriptions, empowering users to intuitively express their desired visual outcomes with minimal effort. This advancement streamlines the transfer function design process and makes volume rendering more accessible to a wider range of users.
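A minimal sketch of such an optimization loop, assuming hypothetical `render` (a differentiable volume renderer) and `clip_score` (a language-vision image-text similarity) callables supplied by the surrounding pipeline:

```python
import torch

def optimize_transfer_function(volume, prompt, render, clip_score,
                               n_steps=200, lr=1e-2):
    """Gradient ascent on a language-guided image loss with respect to
    transfer-function parameters. `render` and `clip_score` are assumed
    differentiable callables; the RGBA-per-bin parameterization is an
    illustrative choice."""
    tf_params = torch.randn(256, 4, requires_grad=True)  # RGBA per scalar bin
    optimizer = torch.optim.Adam([tf_params], lr=lr)
    for _ in range(n_steps):
        image = render(volume, torch.sigmoid(tf_params))  # values in [0, 1]
        loss = -clip_score(image, prompt)  # maximize image-text similarity
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.sigmoid(tf_params).detach()
```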
The label-scarcity problem is the main challenge hindering the wide application of deep learning systems to automatic cardiovascular diseases (CVDs) detection using electrocardiography (ECG). Tuning pre-trained models alleviates this problem by transferring knowledge learned from large datasets to small downstream datasets. However, bottlenecks in computational efficiency and CVDs detection performance limit its clinical application: it is difficult to improve detection performance without significantly sacrificing computational efficiency. Here, we propose FastECG, a semi-supervised learning paradigm for robust and computation-efficient CVDs detection using ECG. It enables robust adaptation of pre-trained models on downstream datasets with limited supervision and high computational efficiency. First, a random-deactivation technique is developed to achieve robust and fast low-rank adaptation of pre-trained weights. Subsequently, we propose a one-shot rank-allocation module to determine the optimal ranks for the update matrices of the pre-trained weights. Finally, a lightweight semi-supervised learning pipeline is introduced to enhance model performance by leveraging labeled and unlabeled data with high computational efficiency. Extensive experiments on four downstream ECG datasets demonstrate that FastECG not only outperforms state-of-the-art methods in multi-label CVDs detection but also requires a smaller GPU memory footprint, less training time, and less parameter storage. As such, this paradigm provides an effective solution for achieving high computational efficiency and robust detection performance when applying pre-trained models in clinical settings under limited supervision.
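The sketch below shows one plausible reading of random-deactivation low-rank adaptation: a frozen pre-trained linear layer plus a LoRA-style update branch that is stochastically skipped during training; the rank, drop probability, and initialization are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class RandomDeactivationLoRA(nn.Module):
    """A frozen pre-trained linear layer plus a low-rank update branch that
    is randomly deactivated during training (one plausible reading of the
    random-deactivation technique; not the paper's exact formulation)."""
    def __init__(self, base: nn.Linear, rank=4, p_drop=0.5):
        super().__init__()
        self.base = base
        for w in self.base.parameters():
            w.requires_grad = False  # keep pre-trained weights frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.p_drop = p_drop

    def forward(self, x):
        y = self.base(x)
        if self.training and torch.rand(()) < self.p_drop:
            return y  # low-rank branch deactivated for this forward pass
        return y + x @ self.A.T @ self.B.T
```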
The enduring value of the Vickrey-Clarke-Groves (VCG) mechanism has been highlighted by its adoption in Facebook's ad auctions. We study its utility in the collaborative virtual goods production (CVGP) game, which finds application in realms such as federated learning and crowdsourcing, where bidders take on the role of suppliers rather than consumers. We introduce the Procurement-VCG (PVCG) sharing rule into existing VCG mechanisms so that they can handle the capacity limits and continuous strategy space characteristic of the reverse-auction setting in CVGP games. Our main theoretical contribution is a proof that PVCG is the first mechanism in the CVGP game context to simultaneously achieve truthfulness, Pareto efficiency, individual rationality, and weak budget balance. These properties suggest the potential for Pareto-efficient production in a digital planned economy. Moreover, to compute the PVCG payments in a noisy economic environment, we propose the Report-Interpolation-Maximization (RIM) method. RIM learns the optimal procurement level and the PVCG payments through iterative interactions with suppliers.
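As background intuition for externality-based pricing, a generic Clarke-pivot payment in a procurement setting is shown below; PVCG's actual sharing rule additionally handles capacity limits and continuous strategy spaces, so this formula should not be read as the PVCG rule itself.

```latex
% Generic Clarke-pivot payment to supplier i in a procurement setting:
% i's payment equals the welfare attained with i's participation minus the
% optimal welfare attainable without i.
\[
  p_i \;=\; \Big( v(x^\ast) - \sum_{j \neq i} c_j(x^\ast_j) \Big)
  \;-\; \max_{x_{-i}} \Big( v(x_{-i}) - \sum_{j \neq i} c_j(x_j) \Big),
\]
% where v(.) is the buyer's value for the aggregate production and c_j(.)
% are the suppliers' reported cost functions.
```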
Diffusion models, which are trained to match the distribution of the training dataset, have become the de facto approach for generating visual data. In addition, we often want to control generation to fulfill desired properties, such as alignment with a text description, which can be specified with a black-box reward function. Prior works fine-tune pretrained diffusion models to achieve this goal through reinforcement learning-based algorithms. Nonetheless, they suffer from issues including slow credit assignment and low quality in their generated samples. In this work, we explore techniques that do not directly maximize the reward but rather generate high-reward images with relatively high probability -- a natural scenario for the framework of generative flow networks (GFlowNets). To this end, we propose the Diffusion Alignment with GFlowNet (DAG) algorithm to post-train diffusion models with black-box property functions. Extensive experiments on Stable Diffusion and various reward specifications corroborate that our method can effectively align large-scale text-to-image diffusion models with given reward information.
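For concreteness, the standard GFlowNet trajectory-balance objective over one sampled denoising trajectory is sketched below; it illustrates matching a reward rather than directly maximizing it, though we do not claim this is DAG's exact objective.

```python
import torch

def trajectory_balance_loss(log_z, log_pf, log_pb, log_reward):
    """Standard GFlowNet trajectory-balance loss for one trajectory.
    log_z: learned scalar log-partition estimate; log_pf / log_pb: (T,)
    per-step forward / backward log-probabilities along the (denoising)
    trajectory; log_reward: log r(x) of the terminal sample."""
    return (log_z + log_pf.sum() - log_pb.sum() - log_reward).pow(2)
```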
Agent-based modeling and simulation has evolved into a powerful tool for modeling complex systems, offering insights into emergent behaviors and interactions among diverse agents. Integrating large language models into agent-based modeling and simulation presents a promising avenue for enhancing simulation capabilities. This paper surveys the landscape of utilizing large language models in agent-based modeling and simulation, examining their challenges and promising future directions. Since this is an interdisciplinary field, we first introduce the background of agent-based modeling and simulation and of large language model-empowered agents. We then discuss the motivation for applying large language models to agent-based simulation and systematically analyze the challenges in environment perception, human alignment, action generation, and evaluation. Most importantly, we provide a comprehensive overview of recent work on large language model-empowered agent-based modeling and simulation across multiple scenarios, which can be divided into four domains -- cyber, physical, social, and hybrid -- covering simulation of both real-world and virtual environments. Finally, since this area is new and quickly evolving, we discuss open problems and promising future directions.
Out-of-distribution (OOD) detection is critical to ensuring the reliability and safety of machine learning systems. For instance, in autonomous driving, we would like the driving system to issue an alert and hand over control to humans when it detects unusual scenes or objects that it has never seen before and cannot make a safe decision about. This problem first emerged in 2017 and has since received increasing attention from the research community, leading to a plethora of methods, ranging from classification-based to density-based to distance-based ones. Meanwhile, several other problems are closely related to OOD detection in terms of motivation and methodology. These include anomaly detection (AD), novelty detection (ND), open set recognition (OSR), and outlier detection (OD). Despite having different definitions and problem settings, these problems often confuse readers and practitioners, and, as a result, some existing studies misuse terminology. In this survey, we first present a generic framework called generalized OOD detection, which encompasses the five aforementioned problems, i.e., AD, ND, OSR, OOD detection, and OD. Under our framework, these five problems can be seen as special cases or sub-tasks, and are easier to distinguish. We then conduct a thorough review of each of the five areas, summarizing their recent technical developments. We conclude this survey with open challenges and potential research directions.
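As a concrete anchor for the classification-based family, the canonical 2017 baseline is the maximum softmax probability (MSP) score of Hendrycks and Gimpel:

```python
import torch.nn.functional as F

def msp_score(logits):
    """Maximum softmax probability (Hendrycks & Gimpel, 2017), the canonical
    classification-based baseline: a lower maximum class probability
    suggests the input is more likely out-of-distribution."""
    return F.softmax(logits, dim=-1).max(dim=-1).values
```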
Recently, ensembles have been applied to deep metric learning to yield state-of-the-art results. Deep metric learning aims to learn deep neural networks for feature embeddings whose pairwise distances satisfy a given constraint. In deep metric learning, an ensemble averages the distances learned by multiple learners. For the ensemble to be effective, the learners should be diverse in their feature embeddings. To this end, we propose an attention-based ensemble, which uses multiple attention masks so that each learner can attend to different parts of the object. We also propose a divergence loss, which encourages diversity among the learners. The proposed method is applied to the standard benchmarks of deep metric learning, and experimental results show that it outperforms the state-of-the-art methods by a significant margin on image retrieval tasks.
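One simple way to realize a divergence loss is a hinge on pairwise distances between the learners' embeddings of the same input, as sketched below; the margin and the exact functional form are illustrative assumptions, not necessarily the paper's loss.

```python
import torch
import torch.nn.functional as F

def divergence_loss(embeddings, margin=1.0):
    """Hinge-based divergence loss: for each input, push apart the embeddings
    produced by different learners so that their attention masks specialize.
    embeddings: (n_learners, batch, dim), assumed L2-normalized."""
    n = embeddings.shape[0]
    loss = embeddings.new_zeros(())
    for i in range(n):
        for j in range(i + 1, n):
            d = (embeddings[i] - embeddings[j]).norm(dim=1)
            loss = loss + F.relu(margin - d).mean()  # hinge on pairwise distance
    return loss / max(1, n * (n - 1) // 2)
```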