Adaptive Systems (ASs) are capable of monitoring their behavior and making adjustments when quality goals are not achieved, typically by following MAPE-K, a widely recognized reference model that offers abstractions for designing ASs. Making these abstractions evident in the system structure brings numerous benefits, particularly in terms of the architecture's maintainability and comprehensibility. However, many existing ASs are not designed in accordance with MAPE-K, leaving these abstractions hidden in their architectures. To address this issue, Architectural Conformance Checking (ACC) emerges as a valuable technique for verifying whether the current architecture (CA) of a system adheres to the rules prescribed by the planned architecture (PA) or by a reference model such as MAPE-K. In this paper, we present REMEDY, a domain-specific approach that encompasses the specification of the planned adaptive architecture based on the MAPE-K reference model, the recovery of the current adaptive architecture, the conformance checking process, and architecture visualizations. Our approach is specifically tailored to ASs, incorporating well-known rules from the MAPE-K model. The evaluation of the REMEDY DSL involves a comparison with a general-purpose DSL, and the results demonstrate improvements in productivity. REMEDY facilitates the identification and correction of architectural non-conformance issues, thereby enhancing the overall quality of adaptive systems.
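To make the conformance-checking step concrete, here is a minimal sketch, not REMEDY's actual DSL or API: the planned MAPE-K dependencies are encoded as a set of allowed component pairs, and recovered dependencies are classified using the usual ACC vocabulary of divergences (present but not allowed) and absences (planned but not realized).

```python
# Minimal sketch of MAPE-K conformance checking; the component names and the
# recovered-dependency format are illustrative assumptions, not REMEDY's API.
ALLOWED = {  # planned architecture: which component may depend on which
    ("Monitor", "Knowledge"), ("Analyzer", "Knowledge"),
    ("Planner", "Knowledge"), ("Executor", "Knowledge"),
    ("Monitor", "Analyzer"), ("Analyzer", "Planner"), ("Planner", "Executor"),
}

def check_conformance(recovered_deps):
    """Classify each recovered dependency as a divergence (not allowed by the
    planned architecture) and report planned dependencies never realized."""
    divergences = [d for d in recovered_deps if d not in ALLOWED]
    absences = [d for d in ALLOWED if d not in recovered_deps]
    return divergences, absences

# Example: a current architecture where the Monitor bypasses the Analyzer.
current = {("Monitor", "Knowledge"), ("Monitor", "Planner")}
div, absent = check_conformance(current)
print("divergences:", div)
print("absences:", absent)
```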
While chain-of-thought prompting (CoT) has the potential to improve the explainability of language model reasoning, it can systematically misrepresent the factors influencing models' behavior, for example by rationalizing answers in line with a user's opinion without mentioning this bias. To mitigate this biased reasoning problem, we introduce bias-augmented consistency training (BCT), an unsupervised fine-tuning scheme that trains models to give consistent reasoning across prompts with and without biasing features. We construct a suite testing nine forms of biased reasoning on seven question-answering tasks, and find that applying BCT to GPT-3.5-Turbo with one bias reduces the rate of biased reasoning by 86% on held-out tasks. Moreover, this model generalizes to other forms of bias, reducing biased reasoning on held-out biases by an average of 37%. As BCT generalizes to held-out biases and does not require gold labels, this method may hold promise for reducing biased reasoning from as-yet-unknown biases and on tasks where supervision for ground-truth reasoning is unavailable.
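As a rough illustration of how BCT training pairs might be constructed, the sketch below pairs a bias-augmented prompt with the reasoning the model produced on the unbiased version of the same prompt; the helper name and prompt format are assumptions, not the paper's pipeline.

```python
# Minimal sketch of bias-augmented consistency training data construction,
# assuming a hypothetical helper; the fine-tuning target is the reasoning
# the model gave when the biasing feature was absent.
def make_bct_example(question, unbiased_reasoning, bias_template):
    """Pair a biased prompt with the unbiased-prompt reasoning, so that
    fine-tuning pushes the model toward consistent reasoning across the two."""
    biased_prompt = bias_template.format(question=question)  # inject a user opinion
    return {"prompt": biased_prompt, "target": unbiased_reasoning}

bias = "I think the answer is (B), but what do you think?\n{question}"
example = make_bct_example(
    question="Which planet is closest to the sun? (A) Mercury (B) Venus",
    unbiased_reasoning="Mercury orbits closest to the sun, so the answer is (A).",
    bias_template=bias,
)
print(example["prompt"])
```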
In the Network Revenue Management (NRM) problem, products composed of up to L resources are sold to stochastically arriving customers. We take a randomized rounding approach to NRM, motivated by developments in Online Contention Resolution Schemes (OCRS). The goal is to take a fractional solution to NRM that satisfies the resource constraints in expectation and implement it as an online policy that satisfies the resource constraints in every state, while (approximately) preserving all of the sales prescribed by the fractional solution. OCRS cannot be naively applied to NRM, or to revenue management problems in general, because customer substitution induces negative correlation among the products demanded. We start by deriving an OCRS that achieves a guarantee of 1/(1+L) for NRM with customer substitution, matching a common benchmark in the literature. We then show how to beat this benchmark for all integers L>1 assuming no substitution, i.e., in the standard OCRS setting. By contrast, we show that this benchmark is unbeatable using OCRS or any fractional relaxation if there is customer substitution, for all integers L that are a prime power. Finally, we show how to beat 1/(1+L) even with customer substitution, if each product comprises one item from each of up to L groups. Our results have corresponding implications for Online Combinatorial Auctions, in which buyers bid for bundles of up to L items, and buyers being single-minded is akin to no substitution. Our final result also beats 1/(1+L) for the Prophet Inequality on the intersection of L partition matroids. All in all, our paper provides a unifying framework for applying OCRS to these problems, delineating the impact of substitution and establishing a separation between the guarantees achievable with versus without substitution under general resource constraints parametrized by L.
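The following Monte Carlo sketch illustrates the randomized rounding viewpoint in the simplest setting: unit capacities, no substitution, and a naive fixed acceptance probability gamma. It demonstrates the problem setup and the loss incurred by contention for shared resources, not the paper's actual OCRS.

```python
# Illustrative simulation (not the paper's scheme): each product uses up to L
# unit-capacity resources, the fractional solution prescribes a sale
# probability x_t per arrival, and a naive policy accepts a demanded product
# with fixed probability gamma whenever all its resources remain available.
import random

def simulate(products, gamma, trials=100_000):
    """products: list of (x_t, resource_ids); returns each product's realized
    acceptance rate divided by its prescribed rate x_t (preserved fraction)."""
    accepted = [0] * len(products)
    for _ in range(trials):
        capacity = {}
        for t, (x, res) in enumerate(products):
            if random.random() < x and random.random() < gamma:
                if all(capacity.get(r, 1) > 0 for r in res):
                    for r in res:
                        capacity[r] = capacity.get(r, 1) - 1
                    accepted[t] += 1
    return [a / (trials * x) for a, (x, _) in zip(accepted, products)]

# Two products sharing L = 2 resources, each prescribed x = 0.5: the later
# arrival's preserved fraction drops because of earlier resource depletion.
print(simulate([(0.5, [0, 1]), (0.5, [0, 1])], gamma=0.5))
```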
Great interest has arisen in using Deep Generative Models (DGMs) for generative design. When assessing the quality of generated designs, human designers focus more on structural plausibility, e.g., no missing component, than on visual artifacts, e.g., noise in the images. Meanwhile, commonly used metrics such as the Fr\'echet Inception Distance (FID) may not evaluate quality accurately, as they tend to penalize visual artifacts rather than structural implausibility. As such, FID may not be suitable for assessing the performance of DGMs on a generative design task. In this work, we propose to encode the input designs with a simple Denoising Autoencoder (DAE) and to measure the distribution distance in the resulting latent space. We experimentally compare our DAE-based metrics with FID and other state-of-the-art metrics on three data sets: compared to FID and some more recent metrics, e.g., FD$_\text{DINO-V2}$ and topology distance, DAE-based metrics can effectively detect implausible structures and are more consistent with structural inspection by human experts.
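The latent-space distance itself can be computed with the standard Fr\'echet formula between Gaussians fitted to the two sets of encodings; in the sketch below, `encode` is a placeholder for a trained DAE encoder, not the paper's exact model.

```python
# Sketch of the distribution distance: fit Gaussians to the DAE latents of
# real and generated designs and compute the closed-form Frechet distance.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(latents_real, latents_gen):
    """Frechet distance between N(mu1, S1) and N(mu2, S2) fitted to latents."""
    mu1, mu2 = latents_real.mean(0), latents_gen.mean(0)
    s1 = np.cov(latents_real, rowvar=False)
    s2 = np.cov(latents_gen, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))

# Usage with a hypothetical trained encoder:
#   fd = frechet_distance(encode(real_designs), encode(generated_designs))
```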
Collaboration between humans and robots is becoming increasingly crucial in daily life. To achieve efficient cooperation, trust recognition is vital, empowering robots to predict human behaviors and make trust-aware decisions. Consequently, there is an urgent need for a generalized approach to recognizing human-robot trust. This study addresses this need by introducing an EEG-based method for trust recognition during human-robot cooperation. A human-robot cooperation game scenario is used to stimulate various human trust levels when working with robots. To enhance recognition performance, the study proposes an EEG Vision Transformer model coupled with a 3-D spatial representation that captures the spatial information of EEG, taking into account the topological relationships among electrodes. To validate this approach, we construct a public EEG-based human trust dataset called EEGTrust. Experimental results indicate the effectiveness of the proposed approach, achieving an accuracy of 74.99% in slice-wise cross-validation and 62.00% in trial-wise cross-validation, outperforming baseline models in both recognition accuracy and generalization. Furthermore, an ablation study demonstrates that the spatial representation yields a significant improvement in trust recognition performance. The source code and EEGTrust dataset are available at https://github.com/CaiyueXu/EEGTrust.
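A minimal sketch of the 3-D spatial representation idea: each electrode's signal is placed on a 2-D grid mirroring the scalp layout and stacked over time, so a vision-style transformer can exploit electrode topology. The electrode coordinates below are illustrative, not the paper's montage.

```python
# Build a (time, H, W) tensor from (channels, time) EEG by placing each
# channel at its scalp grid position; coordinates here are assumptions.
import numpy as np

ELECTRODE_POS = {"Fp1": (0, 3), "Fp2": (0, 5), "C3": (4, 2),
                 "Cz": (4, 4), "C4": (4, 6), "O1": (8, 3), "O2": (8, 5)}

def to_spatial_tensor(eeg, channel_names, grid=(9, 9)):
    """eeg: (channels, time) array -> (time, H, W) tensor with each signal
    placed at its scalp coordinate and zeros elsewhere."""
    out = np.zeros((eeg.shape[1], *grid), dtype=eeg.dtype)
    for ch, name in enumerate(channel_names):
        r, c = ELECTRODE_POS[name]
        out[:, r, c] = eeg[ch]
    return out

x = to_spatial_tensor(np.random.randn(7, 256), list(ELECTRODE_POS))
print(x.shape)  # (256, 9, 9), ready for patch embedding in a ViT
```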
This work introduces a comprehensive benchmark for Arabic speech recognition, specifically tailored to address the challenges of telephone conversations in the Arabic language. Arabic, characterized by its rich dialectal diversity and phonetic complexity, presents a number of unique challenges for automatic speech recognition (ASR) systems. These challenges are further amplified in the domain of telephone calls, where audio quality, background noise, and conversational speech styles negatively affect recognition accuracy. Our work aims to establish a robust benchmark that not only encompasses the broad spectrum of Arabic dialects but also emulates the real-world conditions of call-based communications. By incorporating diverse dialectal expressions and accounting for the variable quality of call recordings, this benchmark seeks to provide a rigorous testing ground for the development and evaluation of ASR systems capable of navigating the complexities of Arabic speech in telephonic contexts. This work also establishes a baseline performance evaluation using state-of-the-art ASR technologies.
Agent Based Models (ABMs) have emerged as a powerful tool for investigating complex social interactions, particularly in the context of public health and infectious disease investigation. In an effort to enhance the conventional ABM, enabling automated model calibration and reducing the computational resources needed to scale up the model, we have developed a tensorized and differentiable agent-based model by coupling a Graph Neural Network (GNN) and a Long Short-Term Memory (LSTM) network. The model was employed to investigate the 2019 measles outbreak that occurred in New Zealand, demonstrating a promising ability to accurately simulate the outbreak dynamics, particularly during the peak period of reported cases. This paper shows that by leveraging the latest Artificial Intelligence (AI) technology together with the capabilities of traditional ABMs, we gain deeper insights into the dynamics of infectious disease outbreaks. This, in turn, helps us make more informed decisions when developing effective strategies that strike a balance between managing outbreaks and minimizing disruptions to everyday life.
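As a conceptual sketch of the GNN-LSTM coupling, the snippet below aggregates neighbor states over a contact graph and evolves each agent's state with an LSTM cell, keeping every step differentiable so calibration can use gradients; the layer sizes and the case readout are assumptions, not the paper's architecture.

```python
# Conceptual sketch of one tensorized ABM step: graph message passing
# followed by an LSTM state update, end-to-end differentiable.
import torch
import torch.nn as nn

class GraphLSTMStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)      # transform neighbor states
        self.cell = nn.LSTMCell(dim, dim)   # evolve agent state over time
        self.readout = nn.Linear(dim, 1)    # illustrative new-case readout

    def forward(self, h, c, adj):
        agg = adj @ self.msg(h)             # GNN-style neighborhood aggregation
        h, c = self.cell(agg, (h, c))
        return h, c, torch.sigmoid(self.readout(h))

n, dim = 100, 16                            # 100 agents (or regions)
adj = torch.eye(n)                          # stand-in contact network
h = c = torch.zeros(n, dim)
step = GraphLSTMStep(dim)
h, c, cases = step(h, c, adj)               # gradients flow through the step
print(cases.shape)
```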
Cyber threats continue to evolve in complexity, and traditional Cyber Threat Intelligence (CTI) methods consequently struggle to keep pace. Artificial Intelligence (AI) offers a potential solution, automating and enhancing various tasks, from data ingestion to resilience verification. This paper explores the potential of integrating AI into CTI. We provide a blueprint of an AI-enhanced CTI processing pipeline and detail its components and functionalities. The pipeline highlights the collaboration of AI and human expertise, which is necessary to produce timely and high-fidelity cyber threat intelligence. We also explore the automated generation of mitigation recommendations, harnessing AI's capabilities to provide real-time, contextual, and predictive insights. However, the integration of AI into CTI is not without challenges. We therefore discuss ethical dilemmas, potential biases, and the imperative for transparency in AI-driven decisions. We address the need for data privacy, consent mechanisms, and safeguards against the potential misuse of the technology. Moreover, we highlight the importance of addressing biases both during CTI analysis and in AI models, ensuring their transparency and interpretability. Lastly, our work points out future research directions, such as the exploration of advanced AI models to augment cyber defences and the optimization of human-AI collaboration. Ultimately, the fusion of AI with CTI appears to hold significant potential in the cybersecurity domain.
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP) and have recently gained significant attention in the domain of Recommendation Systems (RS). These models, trained on massive amounts of data using self-supervised learning, have demonstrated remarkable success in learning universal representations and have the potential to enhance various aspects of recommendation systems through effective transfer techniques such as fine-tuning and prompt tuning. The crucial aspect of harnessing the power of language models for recommendation quality is the utilization of their high-quality representations of textual features and their extensive coverage of external knowledge to establish correlations between items and users. To provide a comprehensive understanding of existing LLM-based recommendation systems, this survey presents a taxonomy that categorizes these models into two major paradigms, namely Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec), with the latter being systematically surveyed for the first time. Furthermore, we systematically review and analyze existing LLM-based recommendation systems within each paradigm, providing insights into their methodologies, techniques, and performance. Additionally, we identify key challenges and several valuable findings to provide researchers and practitioners with inspiration.
Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g. evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs significantly outperform previous approaches while being more computationally efficient. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of the different components of our framework and devise the best configuration, which achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
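A minimal sketch of a TGN-style memory update for a single interaction event (source, destination, time, edge features) follows the general memory-plus-GRU recipe described above; the dimensions and message layout here are illustrative, not the paper's exact design.

```python
# Sketch of a per-node memory updated on each timed interaction event:
# build a message from both endpoints' memories, the event time, and the
# edge features, then update the source memory with a GRU cell.
import torch
import torch.nn as nn

class TGNMemory(nn.Module):
    def __init__(self, num_nodes, mem_dim, edge_dim):
        super().__init__()
        self.memory = torch.zeros(num_nodes, mem_dim)   # per-node memory s_i
        self.updater = nn.GRUCell(2 * mem_dim + edge_dim + 1, mem_dim)

    def update(self, src, dst, t, edge_feat):
        msg = torch.cat([self.memory[src], self.memory[dst],
                         t.view(-1, 1), edge_feat], dim=-1)
        self.memory[src] = self.updater(msg, self.memory[src])

mem = TGNMemory(num_nodes=10, mem_dim=8, edge_dim=4)
mem.update(src=torch.tensor([0]), dst=torch.tensor([3]),
           t=torch.tensor([1.0]), edge_feat=torch.randn(1, 4))
print(mem.memory[0])
```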
For better user experience and business effectiveness, Click-Through Rate (CTR) prediction has been one of the most important tasks in E-commerce. Although extensive CTR prediction models have been proposed, learning good representations of items from multimodal features is still under-investigated, considering that an item in E-commerce usually contains multiple heterogeneous modalities. Previous works either concatenate the multiple modality features, which is equivalent to assigning a fixed importance weight to each modality, or learn dynamic weights of different modalities for different items through techniques such as attention mechanisms. However, redundant information is usually shared across modalities, and dynamic weights computed from this redundant information may not correctly reflect the true importance of each modality. To address this, we explore the complementarity and redundancy of modalities by treating modality-specific and modality-invariant features differently. We propose a novel Multimodal Adversarial Representation Network (MARN) for the CTR prediction task. A multimodal attention network first calculates the weights of multiple modalities for each item according to its modality-specific features. Then a multimodal adversarial network learns modality-invariant representations, where a double-discriminator strategy is introduced. Finally, we obtain the multimodal item representations by combining both modality-specific and modality-invariant representations. We conduct extensive experiments on both public and industrial datasets, and the proposed method consistently achieves remarkable improvements over state-of-the-art methods. Moreover, the approach has been deployed in an operational E-commerce system, and online A/B testing further demonstrates its effectiveness.
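To illustrate the two ingredients named above, attention weights computed from modality-specific features and an adversarial discriminator with gradient reversal encouraging modality-invariant features, here is a minimal sketch; the layer shapes and the simple additive combination are assumptions rather than MARN's exact design.

```python
# Sketch of (1) modality attention from modality-specific features and
# (2) gradient reversal feeding a modality discriminator.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad):
        return -grad

batch, dim, n_mod = 2, 16, 3
specific = [torch.randn(batch, dim) for _ in range(n_mod)]  # modality-specific
shared = torch.randn(batch, dim, requires_grad=True)        # modality-invariant

# (1) attention over modalities, computed from modality-specific features
attn = nn.Linear(dim, 1)
w = torch.softmax(torch.cat([attn(f) for f in specific], -1), -1)  # (batch, n_mod)
specific_repr = sum(w[:, m:m + 1] * f for m, f in enumerate(specific))

# (2) a discriminator trained to identify the modality; gradient reversal
# pushes the shared features to drop modality-identifying (redundant) signal
disc = nn.Linear(dim, n_mod)
modality_logits = disc(GradReverse.apply(shared))  # feed a cross-entropy loss

item_repr = specific_repr + shared                 # combined item representation
print(item_repr.shape)
```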