Systematic enumeration and identification of unique 3D spatial topologies of complex engineering systems (such as automotive cooling systems, electric power trains, satellites, and aero-engines) are essential to navigating these expansive design spaces with the goal of identifying new spatial configurations that can satisfy challenging system requirements. However, efficient navigation through discrete 3D spatial topology (ST) options is a very challenging problem due to its combinatorial nature and can quickly exceed human cognitive abilities at even moderate complexity levels. This article presents a new, efficient, and scalable design framework that leverages mathematical spatial graph theory to represent, enumerate, and identify distinctive 3D topological classes for a generic 3D engineering system, given its system architecture (SA) -- its components and their interconnections. First, spatial graph diagrams (SGDs) are generated for a given SA from zero to a specified maximum number of interconnect crossings. Then, corresponding Yamada polynomials are generated for all the planar SGDs. SGDs are categorized into topological classes, each of which shares a unique Yamada polynomial. Finally, within each topological class, 3D geometric models are generated using SGDs with different numbers of interconnect crossings. Selected case studies are presented to illustrate the different features of our proposed framework, including an industrial engineering design application: ST enumeration of a 3D automotive fuel cell cooling system (AFCS). Design guidelines are also provided to aid practicing engineers in applying this framework to different types of real-world problems, such as configuration design and spatial packaging optimization.
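As a rough illustration of the classification step described above, the following Python sketch groups enumerated SGDs by their Yamada polynomial so that each dictionary key corresponds to one topological class; `enumerate_sgds` and `yamada_polynomial` are hypothetical placeholders for the framework's enumeration and invariant-computation routines, which are not shown here.

```python
# Minimal sketch of the classification step; enumerate_sgds() and
# yamada_polynomial() are hypothetical helpers standing in for the
# paper's enumeration and invariant computations.
from collections import defaultdict

def classify_by_yamada(system_architecture, max_crossings):
    """Group spatial graph diagrams (SGDs) into topological classes,
    where each class shares a unique Yamada polynomial."""
    classes = defaultdict(list)
    for n in range(max_crossings + 1):
        for sgd in enumerate_sgds(system_architecture, crossings=n):  # hypothetical
            invariant = yamada_polynomial(sgd)                        # hypothetical
            classes[invariant].append(sgd)
    return classes  # one entry per distinct topological class
```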
Research on the dynamics of robotic manipulators provides promising support for model-based control. In general, rigorous first-principles-based dynamics modeling and accurate identification of mechanism parameters are critical to achieving high precision in model-based control, while data-driven model reconstruction provides an alternative to this process. Taking the activation level of the data as an indicator, this paper classifies the collected robotic manipulator data by means of the K-means clustering algorithm. Using fundamental prior knowledge, we identify the dynamical properties underlying each class of data separately. Afterwards, the sparse identification of nonlinear dynamics (SINDy) method is used to reconstruct the dynamics model of the robotic manipulator step by step, according to the activation level of the classified data. The simulation results show that the proposed method not only reduces the complexity of the basis function library, enabling the application of the SINDy method to multi-degree-of-freedom robotic manipulators, but also decreases the influence of data noise on the regression results. Finally, dynamic control based on the reconstructed model is deployed on the experimental platform, and the experimental results prove the effectiveness of the proposed method.
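A minimal sketch of the cluster-then-identify pipeline is shown below, using scikit-learn for K-means and the pysindy library for sparse regression. The trajectory file name and the gradient-magnitude "activation" indicator are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch: cluster manipulator data by an assumed activation indicator,
# then fit a sparse dynamics model (SINDy) per cluster.
import numpy as np
from sklearn.cluster import KMeans
import pysindy as ps

dt = 0.01
X = np.load("manipulator_states.npy")             # hypothetical trajectory data
x_dot = np.gradient(X, dt, axis=0)                # precomputed derivatives
activation = np.abs(x_dot)                        # assumed activation indicator
labels = KMeans(n_clusters=3, n_init=10).fit_predict(activation)

# Fit a sparse model per cluster, stepping through activation levels.
for k in range(3):
    model = ps.SINDy(optimizer=ps.STLSQ(threshold=0.1),
                     feature_library=ps.PolynomialLibrary(degree=2))
    model.fit(X[labels == k], x_dot=x_dot[labels == k])
    model.print()
```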
In order for robots to safely navigate in unseen scenarios using learning-based methods, it is important to accurately detect out-of-training-distribution (OoD) situations online. Recently, Gaussian process state-space models (GPSSMs) have proven useful for discriminating unexpected observations by comparing them against probabilistic predictions. However, the model's capability to correctly distinguish between in- and out-of-training-distribution observations hinges on the accuracy of these predictions, which is primarily affected by the class of functions the GPSSM kernel can represent. In this paper, we propose (i) a novel approach to embed existing domain knowledge in the kernel and (ii) an OoD online runtime monitor based on receding-horizon predictions. Domain knowledge is assumed to be given as a dataset collected either in simulation or using a nominal model. Numerical results show that the informed kernel yields better regression quality with smaller datasets, as compared to standard kernel choices. We demonstrate the effectiveness of the OoD monitor on a real quadruped navigating an indoor setting, where it reliably classifies previously unseen terrains.
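A minimal sketch of a receding-horizon OoD monitor of the kind described above follows. It assumes a trained GPSSM exposing a hypothetical `predict(state) -> (mean, std)` interface and flags OoD behaviour when the normalized prediction error exceeds a sigma threshold; the interface and threshold rule are illustrative assumptions.

```python
# Sketch of a receding-horizon OoD runtime monitor over one-step
# GPSSM predictive distributions; gpssm.predict() is hypothetical.
import numpy as np

def ood_monitor(gpssm, observations, horizon=10, threshold=3.0):
    """Return True if any observation in the horizon falls outside the
    model's predictive band by more than `threshold` standard deviations."""
    state = observations[0]
    for obs in observations[1:horizon + 1]:
        mean, std = gpssm.predict(state)           # predictive distribution
        z = np.abs(obs - mean) / np.maximum(std, 1e-8)
        if np.any(z > threshold):
            return True                            # out-of-distribution
        state = obs                                # recede: condition on new data
    return False
```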
Differentiable digital signal processing (DDSP) techniques, including methods for audio synthesis, have gained attention in recent years and lend themselves to interpretability in the parameter space. However, current differentiable synthesis methods have not explicitly sought to model the transient portion of signals, which is important for percussive sounds. In this work, we present a unified synthesis framework aiming to address transient generation and percussive synthesis within a DDSP framework. To this end, we propose a model for percussive synthesis that builds on sinusoidal modeling synthesis and incorporates a modulated temporal convolutional network for transient generation. We use a modified sinusoidal peak-picking algorithm to generate time-varying non-harmonic sinusoids and pair it with differentiable noise and transient encoders that are jointly trained to reconstruct drumset sounds. We compute a set of reconstruction metrics on a large dataset of acoustic and electronic percussion samples, showing that our method leads to improved onset signal reconstruction for membranophone percussion instruments.
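To illustrate the sinusoidal-modeling component that such a DDSP model builds on, here is a minimal differentiable sinusoidal synthesizer in PyTorch. It covers only the sinusoidal branch (noise and transient branches omitted) and assumes envelopes are given at sample rate for simplicity.

```python
# Minimal differentiable sinusoidal synth: sum time-varying sinusoids
# whose phase is the cumulative integral of the frequency envelope.
import torch

def sinusoidal_synth(freqs, amps, sr=44100):
    """freqs, amps: (batch, n_sinusoids, n_samples) frequency (Hz) and
    amplitude envelopes; returns (batch, n_samples) audio."""
    phase = 2 * torch.pi * torch.cumsum(freqs / sr, dim=-1)
    return (amps * torch.sin(phase)).sum(dim=1)
```

Because every operation is differentiable, gradients from a reconstruction loss can flow back into the encoders that predict `freqs` and `amps`.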
Developing high-performing dialogue systems benefits from the automatic identification of undesirable behaviors in system responses. However, detecting such behaviors remains challenging, as it draws on a breadth of general knowledge and understanding of conversational practices. Although recent research has focused on building specialized classifiers for detecting specific dialogue behaviors, the behavior coverage is still incomplete and there is a lack of testing on real-world human-bot interactions. This paper investigates the ability of a state-of-the-art large language model (LLM), ChatGPT-3.5, to perform dialogue behavior detection for nine categories in real human-bot dialogues. We aim to assess whether ChatGPT can match specialized models and approximate human performance, thereby reducing the cost of behavior detection tasks. Our findings reveal that neither specialized models nor ChatGPT have yet achieved satisfactory results for this task, falling short of human performance. Nevertheless, ChatGPT shows promising potential and often outperforms specialized detection models. We conclude with an in-depth examination of the prevalent shortcomings of ChatGPT, offering guidance for future research to enhance LLM capabilities.
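As a rough illustration of how an LLM can be queried for a single behavior category, consider the zero-shot prompt sketch below. The category definition and the `query_llm` function are hypothetical stand-ins, not the paper's exact prompt or API.

```python
# Illustrative zero-shot behavior-detection prompt; query_llm() is a
# hypothetical wrapper around a ChatGPT API call.
PROMPT = """You are given a human-bot dialogue.
Behavior: self-contradiction (the bot contradicts something it said earlier).
Dialogue context:
{context}
Bot response:
{response}
Does the bot response exhibit this behavior? Answer Yes or No."""

def detect_behavior(context, response):
    answer = query_llm(PROMPT.format(context=context, response=response))  # hypothetical
    return answer.strip().lower().startswith("yes")
```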
In a microservices-based system, reliability and availability are key components in guaranteeing a best-in-class experience for consumers. One of the key advantages of microservices architecture is the ability to independently deploy services, providing maximum change flexibility. However, this introduces extra complexity in managing the risk associated with every change: any mutation of a service might cause the whole system to fail. In this research, we propose an algorithm that enables development teams to determine the risk associated with each change to any of the microservices in the system.
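For intuition only, a per-change risk score might combine change size, downstream blast radius, and deployment history, as in the purely hypothetical sketch below; the factors and weights are illustrative assumptions, not the algorithm proposed in the paper.

```python
# Hypothetical per-change risk score for a microservice deployment.
def change_risk(lines_changed, dependents, recent_failure_rate,
                w_size=0.3, w_blast=0.5, w_history=0.2):
    """Combine normalized change size, blast radius (number of dependent
    services), and historical failure rate into a [0, 1] risk score."""
    size = min(lines_changed / 500.0, 1.0)    # normalize change size
    blast = min(dependents / 20.0, 1.0)       # normalize dependency fan-in
    return w_size * size + w_blast * blast + w_history * recent_failure_rate
```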
An emerging branch of control theory specialises in certificate learning, concerning the specification of a desired (possibly complex) system behaviour for an autonomous or control model, which is then analytically verified by means of a function-based proof. However, the synthesis of controllers abiding by these complex requirements is in general a non-trivial task and may elude the most expert control engineers. This results in a need for automatic techniques that are able to design controllers and to analyse a wide range of elaborate specifications. In this paper, we provide a general framework to encode system specifications and define corresponding certificates, and we present an automated approach to formally synthesise controllers and certificates. Our approach contributes to the broad field of safe learning for control, exploiting the flexibility of neural networks to provide candidate control and certificate functions, whilst using SMT-solvers to offer a formal guarantee of correctness. We test our framework by developing a prototype software tool, and assess its efficacy at verification via control and certificate synthesis over a large and varied suite of benchmarks.
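The learner half of such a counterexample-guided loop can be sketched as follows: a neural network is trained on sampled states to satisfy Lyapunov-style certificate conditions, after which an SMT solver would formally check them. The dynamics, architecture, and losses below are illustrative assumptions, and the SMT verification step is only noted in comments.

```python
# Sketch: train a neural Lyapunov candidate V(x) on samples for an
# assumed closed-loop dynamics f; formal SMT verification omitted.
import torch

f = lambda x: -x + 0.1 * x**3                 # assumed closed-loop dynamics
V = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                        torch.nn.Linear(16, 1))
opt = torch.optim.Adam(V.parameters(), lr=1e-3)

for _ in range(2000):
    x = torch.randn(256, 2, requires_grad=True)      # sampled states
    v = V(x)
    grad_v = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    vdot = (grad_v * f(x)).sum(dim=1, keepdim=True)  # Lie derivative of V
    # Hinge losses encouraging V > 0 and dV/dt < 0 away from the origin.
    loss = torch.relu(-v + 0.01).mean() + torch.relu(vdot + 0.01).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# In the full framework, an SMT solver would now check the certificate
# conditions formally and return counterexamples to augment the samples.
```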
This work focuses on developing a data-driven framework using Koopman operator theory for system identification and linearization of nonlinear systems for control. Our proposed method presents a deep learning framework with recursive learning. The resulting linear system is controlled using a linear quadratic regulator (LQR). An illustrative example using a pendulum system is presented with simulations on noisy data. We show that our proposed method is trained more efficiently and is more accurate than an autoencoder baseline.
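A minimal sketch of the underlying deep Koopman idea follows: an encoder lifts the state, a linear operator advances it, and the loss enforces linearity in the lifted space. The architecture and loss terms are generic assumptions, not the paper's exact recursive-learning scheme.

```python
# Sketch of a deep Koopman model: encoder/decoder plus a linear
# operator K trained to make the lifted dynamics linear.
import torch

n_x, n_z = 2, 8                                    # state / lifted dimensions
enc = torch.nn.Sequential(torch.nn.Linear(n_x, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, n_z))
dec = torch.nn.Sequential(torch.nn.Linear(n_z, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, n_x))
K = torch.nn.Linear(n_z, n_z, bias=False)          # linear Koopman operator
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters(), *K.parameters()])

def loss_fn(x_t, x_next):
    z_t, z_next = enc(x_t), enc(x_next)
    pred = torch.nn.functional.mse_loss(K(z_t), z_next)   # linearity in z
    recon = torch.nn.functional.mse_loss(dec(z_t), x_t)   # decoder consistency
    return pred + recon

# Once trained, K.weight defines a linear system z+ = K z, to which
# standard LQR design can be applied.
```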
To facilitate high-quality interaction during the regular use of computing systems, it is essential that the user interface (UI) deliver content and components in an appropriate manner. Although extended reality (XR) is emerging as a new computing platform, we still have a limited understanding of how best to design and present interactive content to users in such immersive environments. Adaptive UIs offer a promising approach for optimal presentation in XR as the user's environment, tasks, capabilities, and preferences vary under changing context. In this position paper, we present a design framework for adapting various characteristics of content presented in XR. We frame these as five considerations that need to be taken into account for adaptive XR UIs: What?, How Much?, Where?, How?, and When?. With this framework, we review the literature on UI design and adaptation, reflecting on approaches adopted or developed in the past, to identify current gaps and challenges as well as opportunities for applying such approaches in XR. Using our framework, future work could identify and develop novel computational approaches for achieving successful adaptive user interfaces in such immersive environments.
Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, there is an increasing number of applications in power systems where data are collected from non-Euclidean domains and represented as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is presented. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and research trends concerning the application of GNNs in power systems are discussed.
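To make the message-passing pattern shared by these GNN variants concrete, here is a minimal graph convolutional layer in PyTorch. The normalization scheme is a common simplification (row-normalized adjacency with self-loops) rather than any specific surveyed architecture, and it is not tied to a particular power-system task.

```python
# Minimal graph convolutional layer: aggregate neighbor features over a
# normalized adjacency, then apply a shared linear transform.
import torch

class GCNLayer(torch.nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, X, A):
        # A: dense adjacency with self-loops, shape (n_nodes, n_nodes);
        # X: node features, shape (n_nodes, in_dim).
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((A / deg) @ X))
```

In a power-system setting, nodes would typically be buses and edges transmission lines, with node features such as voltages and injections.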
We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.
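A rough sketch of what a shared span representation can look like follows, assuming contextualized token embeddings (e.g., from a BiLSTM). The endpoint-plus-mean concatenation is a common choice for span-based models, not necessarily SciIE's exact formulation.

```python
# Sketch: build a span representation that all three task heads
# (entities, relations, coreference) consume.
import torch

def span_representation(token_embs, start, end):
    """token_embs: (seq_len, dim) contextualized embeddings. Concatenate
    the two endpoint embeddings with the mean over the span."""
    inside = token_embs[start:end + 1].mean(dim=0)
    return torch.cat([token_embs[start], token_embs[end], inside])
```

Sharing this representation across tasks is what lets coreference links propagate information that benefits entity and relation predictions.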