This paper presents an extended version of the SPADE platform, which aims to empower intelligent agent systems with normative reasoning and value alignment capabilities. Normative reasoning involves evaluating social norms and their impact on decision-making, while value alignment ensures agents' actions are in line with desired principles and ethical guidelines. The extended platform equips agents with normative awareness and reasoning capabilities based on deontic logic, allowing them to assess the appropriateness of their actions and make informed decisions. By integrating normative reasoning and value alignment, the platform enhances agents' social intelligence and promotes responsible and ethical behaviors in complex environments.
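As a minimal illustration of deontic-style norm checking, the sketch below tests whether an action is permitted given a set of prohibitions; the `Norm` class and all names are hypothetical and do not correspond to the platform's actual API.

```python
# Minimal, platform-agnostic sketch of a deontic permission check.
# Norm and all names are illustrative, not SPADE APIs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    modality: str                    # "obligation" or "prohibition"
    target: str                      # action name the norm refers to
    applies: Callable[[dict], bool]  # does the norm apply in this context?

def is_permitted(action: str, context: dict, norms: list) -> bool:
    """An action is permitted unless an applicable prohibition forbids it."""
    return not any(
        n.modality == "prohibition" and n.target == action and n.applies(context)
        for n in norms
    )

norms = [Norm("prohibition", "enter", lambda ctx: ctx["zone"] == "restricted")]
print(is_permitted("enter", {"zone": "restricted"}, norms))  # False
print(is_permitted("enter", {"zone": "public"}, norms))      # True
```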
Detecting undesired process behavior is one of the main tasks of process mining, and various conformance-checking techniques have been developed to this end. These techniques typically require a normative process model as input, specifically designed for the processes to be analyzed. Such models are rarely available, though, and their creation involves considerable manual effort. However, reference process models serve as best-practice templates for organizational processes in a plethora of domains, containing valuable knowledge about general behavioral relations in well-engineered processes. These general models can thus mitigate the need for dedicated models by providing a basis to check for undesired behavior. Still, finding a perfectly matching reference model for a real-life event log is unrealistic because organizational needs can vary, despite similarities in process execution. Furthermore, event logs may encompass behavior related to different reference models, making traditional conformance checking impractical, as it requires aligning process executions to individual models. To still use reference models for conformance checking, we propose a framework for mining declarative best-practice constraints from a reference model collection, automatically selecting constraints that are relevant for a given event log, and checking for best-practice violations. We demonstrate the capability of our framework to detect best-practice violations through an evaluation based on real-world process model collections and event logs.
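As a minimal illustration of checking one declarative constraint against an event log trace, the sketch below evaluates a Declare-style Response constraint; the event names are illustrative, and the framework described above mines such constraints from reference models rather than hand-coding them.

```python
# Minimal sketch of checking a Declare-style Response(a, b) constraint:
# every occurrence of a must eventually be followed by b in the trace.
def violates_response(trace: list, a: str, b: str) -> bool:
    pending = False
    for event in trace:
        if event == a:
            pending = True        # an 'a' now awaits a later 'b'
        elif event == b:
            pending = False       # the pending 'a' is satisfied
    return pending                # an unsatisfied 'a' at trace end is a violation

trace = ["create order", "approve order", "ship goods"]
print(violates_response(trace, "approve order", "send invoice"))  # True
```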
This study presents a comprehensive approach for optimizing the acquisition, utilization, and maintenance of ABLVR vascular robots in healthcare settings. Medical robotics, particularly in vascular treatments, necessitates precise resource allocation and optimization due to the complex nature of robot and operator maintenance. Traditional heuristic methods, though intuitive, often fail to achieve global optimization. To address these challenges, this research introduces a novel strategy, combining mathematical modeling, a hybrid genetic algorithm, and ARIMA time series forecasting. Considering the dynamic healthcare environment, our approach includes a robust resource allocation model for robotic vessels and operators. We incorporate the unique requirements of the adaptive learning process for operators and the maintenance needs of robotic components. The hybrid genetic algorithm, integrating simulated annealing and greedy approaches, efficiently solves the optimization problem. Additionally, ARIMA time series forecasting predicts the demand for vascular robots, further enhancing the adaptability of our strategy. Experimental results demonstrate the superiority of our approach in terms of optimization quality, transparency, and convergence speed compared with other state-of-the-art methods.
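The following sketch shows, under strong simplifying assumptions, how a genetic algorithm can embed a simulated-annealing acceptance rule; the toy bit-string objective stands in for the far richer robot/operator allocation model above, and all parameters are illustrative.

```python
# Minimal sketch of a hybrid GA with a simulated-annealing acceptance step.
import math, random

def cost(x):                         # toy objective: minimize the number of ones
    return sum(x)

def crossover(p1, p2):               # one-point crossover
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(x, rate=0.1):             # independent bit flips
    return [b ^ (random.random() < rate) for b in x]

def hybrid_ga(n_bits=20, pop=20, gens=300, t0=1.0, cooling=0.99):
    population = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    t = t0
    for _ in range(gens):
        p1, p2 = random.sample(population, 2)
        child = mutate(crossover(p1, p2))
        worst = max(population, key=cost)
        d = cost(child) - cost(worst)
        # SA-style acceptance: worse children still survive with prob e^(-d/t)
        if d <= 0 or random.random() < math.exp(-d / t):
            population[population.index(worst)] = child
        t *= cooling                 # cool the temperature each generation
    return min(population, key=cost)

print(cost(hybrid_ga()))             # typically at or near 0
```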
To efficiently find an optimal parameter combination in a large-scale problem, a key step is to convert the parameters into variables available on actual machines. Specifically, quadratic unconstrained binary optimization problems are solved with the help of machine learning, e.g., factorization machines with annealing, which converts raw parameters into binary variables. This work investigates how the convergence speed and the accuracy depend on the binary labeling method, which can influence the shape of the cost function and thus the probability of being trapped in a local minimum. Using the traveling salesman problem as an example, we propose and evaluate Gray labeling, which correlates the Hamming distance between binary labels with the traveling distance. Through numerical simulations of the traveling salesman problem with up to 15 cities at a limited number of iterations, Gray labeling shows lower rates of trapping in local minima and shorter traveling distances than natural labeling.
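The key property behind Gray labeling can be checked in a few lines: consecutive integers receive codes at Hamming distance 1, whereas natural binary labels can flip many bits at once (e.g., 3 -> 4 flips three bits). This minimal sketch demonstrates that property in isolation, not the paper's full TSP encoding.

```python
# Gray code: consecutive integers differ in exactly one bit, so Hamming
# distance between labels tracks closeness of the underlying values.
def to_gray(n: int) -> int:
    return n ^ (n >> 1)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

for i in range(7):
    print(f"{i}->{i+1}: natural Hamming {hamming(i, i + 1)}, "
          f"Gray Hamming {hamming(to_gray(i), to_gray(i + 1))}")
# Gray Hamming is always 1; natural labels spike at power-of-two boundaries.
```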
With the advent of 6G technology, the demand for efficient and intelligent systems in industrial applications has surged, driving the need for advanced solutions in target localization. Utilizing swarm robots to locate unknown targets involves navigating increasingly complex environments. Digital Twinning (DT) offers a robust solution by creating a virtual replica of the physical world, which enhances the swarm's navigation capabilities. Our framework leverages DT and integrates Swarm Intelligence to store physical map information in the cloud, enabling robots to efficiently locate unknown targets. The simulation results demonstrate that the DT framework, augmented by Swarm Intelligence, significantly improves target location efficiency in obstacle-rich environments compared to traditional methods. This research underscores the potential of combining DT and Swarm Intelligence to advance the field of robotic navigation and target localization in complex industrial settings.
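As a rough illustration of swarm-intelligence target search, the sketch below runs a basic particle swarm toward a known target position; the cloud-backed Digital Twin map sharing described above is abstracted into the shared global best, and all names and parameters are assumptions.

```python
# Minimal particle-swarm sketch of swarm target search; DT map sharing is
# abstracted into the shared global best. All parameters are illustrative.
import random

def distance2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def pso_search(target, n_robots=10, steps=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(n_robots)]
    vel = [[0.0, 0.0] for _ in range(n_robots)]
    best = [p[:] for p in pos]                               # personal bests
    gbest = min(pos, key=lambda p: distance2(p, target))[:]  # shared ("cloud") best
    for _ in range(steps):
        for i in range(n_robots):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (best[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if distance2(pos[i], target) < distance2(best[i], target):
                best[i] = pos[i][:]
            if distance2(pos[i], target) < distance2(gbest, target):
                gbest = pos[i][:]
    return gbest

print(pso_search(target=(42.0, 17.0)))  # converges near the target
```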
This paper presents a framework that enables robots to automatically recover from assumption violations of high-level specifications during task execution. In contrast to previous methods relying on user intervention to impose additional assumptions for failure recovery, our approach leverages synthesis-based repair to suggest new robot skills that, when implemented, repair the task. Our approach detects violations of environment safety assumptions during the task execution, relaxes the assumptions to admit observed environment behaviors, and acquires new robot skills for task completion. We demonstrate our approach with a Hello Robot Stretch in a factory-like scenario.
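A heavily simplified sketch of the assumption-relaxation step: here a safety assumption is modeled as a set of allowed environment transitions, and an observed, disallowed transition is admitted once detected; the synthesis-based repair and skill suggestion described above are not modeled.

```python
# Minimal sketch: relax a violated environment safety assumption by
# admitting the observed transition. Names and states are illustrative.
allowed = {("door_closed", "door_closed"), ("door_closed", "door_open"),
           ("door_open", "door_open")}

def relax_on_violation(assumption: set, observed: tuple) -> set:
    """Detect a transition outside the assumption and admit it."""
    if observed not in assumption:
        print(f"violation detected: {observed}; relaxing assumption")
        return assumption | {observed}
    return assumption

# The environment closes the door unexpectedly during execution:
allowed = relax_on_violation(allowed, ("door_open", "door_closed"))
```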
This paper presents a systematic methodology for the discretization and reduction of a class of one-dimensional Partial Differential Equations (PDEs) with inputs and outputs collocated at the spatial boundaries. The class of systems we consider is known as Boundary-Controlled Port-Hamiltonian Systems (BC-PHSs) and covers a wide range of hyperbolic PDEs with various types of boundary inputs and outputs. This is, for instance, the case for waves and beams with Neumann, Dirichlet, or mixed boundary conditions. Based on a Partitioned Finite Element Method (PFEM), we develop a numerical scheme for the structure-preserving spatial discretization of one-dimensional BC-PHSs. We show that if the initial PDE is passive (or impedance energy preserving), so is the discretized model. In addition, since the discretized model, or Full Order Model (FOM), can be of large dimension, we recall the standard Loewner framework for Model Order Reduction (MOR) using frequency-domain interpolation. We recall the main steps to produce a Reduced Order Model (ROM) that approximates the FOM in a given range of frequencies, and we summarize the steps to follow to obtain a ROM that preserves the passive structure as well. Finally, we provide a constructive way to build a projector that allows one to recover the physical meaning of the state variables from the ROM to the FOM. We use the one-dimensional wave equation and the Timoshenko beam as examples to show the versatility of the proposed approach.
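For concreteness, the one-dimensional wave equation admits a standard BC-PHS formulation (the notation below is a textbook version and may differ from the paper's):

```latex
% 1D wave equation \rho\, w_{tt} = (T\, w_x)_x in port-Hamiltonian form,
% with strain x_1 = w_x and momentum density x_2 = \rho\, w_t:
\frac{\partial}{\partial t}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} 0 & \partial_x \\ \partial_x & 0 \end{pmatrix}
\begin{pmatrix} T\, x_1 \\ x_2/\rho \end{pmatrix},
\qquad
H = \frac{1}{2}\int_0^L \Bigl( T\, x_1^2 + \frac{x_2^2}{\rho} \Bigr)\, dx,
\qquad
\frac{dH}{dt} = \Bigl[\, T\, x_1 \,\frac{x_2}{\rho} \,\Bigr]_0^L .
```

The boundary term in the power balance identifies the collocated boundary port (stress times velocity), which is precisely the structure a passivity-preserving discretization must retain.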
Measuring an overall autonomy score for a robotic system requires combining a set of relevant aspects and features of the system that may be measured in different units, be qualitative, and/or be discordant. In this paper, we build upon an existing non-contextual autonomy framework that measures and combines the Autonomy Level and the Component Performance of a system into an overall autonomy score. We examine several methods of combining features, showing how some methods yield different rankings of the same data, and we employ the weighted product method to resolve this issue. Furthermore, we introduce the non-contextual autonomy coordinate and represent the overall autonomy of a system with an autonomy distance. We apply our method to a set of seven Unmanned Aerial Systems (UAS) and obtain their absolute autonomy scores as well as their relative scores with respect to the best system.
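A minimal sketch of the weighted product method mentioned above: each feature is raised to its weight and the results are multiplied, so rankings are invariant to per-feature unit rescaling, which is what resolves ranking disagreements among other combination methods. Feature names, values, and weights below are illustrative.

```python
# Weighted product method: score = prod(feature_j ** weight_j).
# Multiplying any one feature by a constant rescales every system's score
# by the same factor, so the ranking is unchanged (unlike a weighted sum).
import math

def weighted_product(features: dict, weights: dict) -> float:
    return math.prod(v ** weights[k] for k, v in features.items())

weights = {"autonomy_level": 0.6, "component_performance": 0.4}
uas_a = {"autonomy_level": 3.0, "component_performance": 0.80}
uas_b = {"autonomy_level": 2.0, "component_performance": 0.95}
print(weighted_product(uas_a, weights) > weighted_product(uas_b, weights))  # True
```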
In the post-deep-learning era, the Transformer architecture has demonstrated powerful performance across pre-trained big models and various downstream tasks. However, the enormous computational demands of this architecture have deterred many researchers. To further reduce the complexity of attention models, numerous efforts have been made to design more efficient methods. Among them, the State Space Model (SSM), as a possible replacement for the self-attention-based Transformer model, has drawn more and more attention in recent years. In this paper, we give the first comprehensive review of these works and also provide experimental comparisons and analysis to better demonstrate the features and advantages of SSMs. Specifically, we first give a detailed description of the principles to help readers quickly grasp the key ideas of SSMs. After that, we dive into reviews of existing SSMs and their various applications, including natural language processing, computer vision, graphs, multi-modal and multi-media data, point clouds/event streams, time series data, and other domains. In addition, we give statistical comparisons and analyses of these models, which we hope will help readers understand the effectiveness of different structures on various tasks. We then propose possible research directions to better promote the development of SSM theory and applications. More related works will be continuously updated at the following GitHub repository: https://github.com/Event-AHU/Mamba_State_Space_Model_Paper_List.
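At the core of SSM layers is a discretized linear state-space recurrence; the minimal sketch below shows the time-invariant case (models such as S4 and Mamba add structured parameterizations and, in Mamba's case, input-dependent selectivity on top of this), with random placeholder matrices.

```python
# Minimal discretized linear SSM recurrence:
#   h_t = A h_{t-1} + B u_t ;  y_t = C h_t
import numpy as np

def ssm_scan(A, B, C, u):
    """Sequential scan of a linear time-invariant SSM over input sequence u."""
    h = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        h = A @ h + B * u_t   # state update: O(n) memory regardless of length
        ys.append(C @ h)      # readout
    return np.array(ys)

rng = np.random.default_rng(0)
n = 4
A = 0.9 * np.eye(n)           # stable state matrix (placeholder)
B, C = rng.standard_normal(n), rng.standard_normal(n)
print(ssm_scan(A, B, C, u=rng.standard_normal(16)).shape)  # (16,)
```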
Many tasks in natural language processing can be viewed as multi-label classification problems. However, most existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all labels, which completely ignores the complexity of and dependencies among different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta-learner to jointly learn the training policies and prediction policies for different labels. The training policies are used to train the classifier with the cross-entropy loss function, and the prediction policies are then applied at prediction time. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method obtains more accurate multi-label classification results.
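To make the fixed-versus-learned prediction-policy distinction concrete, the sketch below tunes a separate threshold per label on held-out data by grid search; this is a simple stand-in for the meta-learned prediction policies described above, and all data are toy values.

```python
# Minimal sketch of per-label prediction policies: instead of one fixed 0.5
# threshold, pick for each label the threshold maximizing validation F1.
import numpy as np

def tune_thresholds(probs, labels, grid=np.linspace(0.1, 0.9, 17)):
    n_labels = probs.shape[1]
    thresholds = np.full(n_labels, 0.5)
    for j in range(n_labels):
        best_f1 = -1.0
        for t in grid:
            pred = probs[:, j] >= t
            tp = np.sum(pred & (labels[:, j] == 1))
            fp = np.sum(pred & (labels[:, j] == 0))
            fn = np.sum(~pred & (labels[:, j] == 1))
            f1 = 2 * tp / (2 * tp + fp + fn + 1e-9)
            if f1 > best_f1:
                best_f1, thresholds[j] = f1, t
    return thresholds

probs = np.array([[0.3, 0.8], [0.6, 0.2], [0.4, 0.9]])   # toy predicted scores
labels = np.array([[1, 1], [1, 0], [0, 1]])              # toy gold labels
print(tune_thresholds(probs, labels))                    # one threshold per label
```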