The bond graph is a unified graphical approach for describing the dynamics of complex engineering and physical systems and is widely adopted in a variety of domains, such as electrical, mechanical, medical, thermal and fluid mechanics. Traditionally, these dynamics are analyzed using paper-and-pencil proof methods or computer-based techniques. However, both of these techniques suffer from inherent limitations, such as proneness to human error, approximation of results and enormous computational requirements. Thus, they cannot be trusted for the bond graph based dynamical analysis of systems in safety-critical domains like robotics and medicine. Formal methods, in particular higher-order-logic theorem proving, can overcome the shortcomings of these traditional methods and provide an accurate analysis of such systems; indeed, theorem proving has been widely used for analyzing the dynamics of engineering and physical systems. In this paper, we propose to use higher-order-logic theorem proving for the bond graph based analysis of physical systems. In particular, we provide a formalization of bond graphs, which mainly includes functions that allow the conversion of a bond graph to its corresponding mathematical model (state-space model) and the verification of its various properties, such as stability. To illustrate the practical effectiveness of our approach, we present the formal stability analysis of a prosthetic mechatronic hand using the HOL Light theorem prover. Moreover, to help non-experts in HOL, we encode our formally verified stability theorems in MATLAB to perform the stability analysis of an anthropomorphic prosthetic mechatronic hand.
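To make the conversion step concrete, here is a minimal numerical sketch (not the paper's HOL Light formalization): a series mass-spring-damper, a textbook one-junction bond graph, is mapped to its state-space matrix, whose eigenvalues decide stability. The element values are hypothetical.

```python
# Illustrative sketch only: a conventional (unverified) Python analogue of the
# bond-graph-to-state-space step that the paper formalizes in HOL Light.
import numpy as np

# A series mass-spring-damper, a standard one-junction bond graph
# (inertia I = m, compliance C = 1/k, resistance R = b); values hypothetical.
m, k, b = 1.0, 4.0, 0.5

# State x = [p, q] (momentum of the I element, displacement of the C element);
# the one-junction's effort balance yields dx/dt = A x.
A = np.array([[-b / m, -k],
              [1.0 / m, 0.0]])

# Stability of the linear state-space model: all eigenvalues of A
# must have negative real part.
eigvals = np.linalg.eigvals(A)
print("eigenvalues:", eigvals)
print("asymptotically stable:", np.all(eigvals.real < 0))
```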
Overdetermined systems of integral equations of the first kind appear in many applications. When the right-hand side is discretized, the resulting finite-data problem is ill-posed and admits infinitely many solutions. We propose a numerical method to compute the minimal-norm solution in the presence of boundary constraints. The algorithm stems from the Riesz representation theorem and operates in a reproducing kernel Hilbert space. Since the resulting linear system is strongly ill-conditioned, we construct a regularization method depending on a discrete parameter, based on the expansion of the minimal-norm solution in terms of the singular functions of the integral operator defining the problem. Two techniques are tested for the automatic determination of the regularization parameter: the discrepancy principle and the L-curve method. Numerical results on two artificial test problems demonstrate the excellent performance of the proposed method. Finally, a model typical of geophysical applications, which reproduces the readings of a frequency-domain electromagnetic induction device, is investigated. The results show that the new method is extremely effective when the sought solution is smooth, and that it still gives significant information on the solution even when the solution is non-smooth.
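As an illustration of regularization by a discrete parameter (a plain truncated-SVD analogue, not the authors' RKHS construction), the sketch below discretizes a first-kind equation with a hypothetical smooth kernel and picks the truncation index by the discrepancy principle.

```python
# Minimal numerical sketch: truncated-SVD regularization of a discretized
# first-kind integral equation, with the truncation index chosen by the
# discrepancy principle. Kernel and solution are hypothetical.
import numpy as np

n = 64
s = np.linspace(0, 1, n)
K = np.exp(-(s[:, None] - s[None, :]) ** 2) / n   # smooth kernel -> ill-conditioned
f_true = np.sin(np.pi * s)                        # hypothetical smooth solution
rng = np.random.default_rng(0)
noise = 1e-3 * rng.standard_normal(n)
g_noisy = K @ f_true + noise

U, sv, Vt = np.linalg.svd(K)
beta = U.T @ g_noisy

# Discrepancy principle: smallest truncation level p such that
# ||K f_p - g_noisy|| <= tau * ||noise||.
tau, delta = 1.1, np.linalg.norm(noise)
for p in range(1, n + 1):
    f_p = Vt[:p].T @ (beta[:p] / sv[:p])          # truncated SVD solution
    if np.linalg.norm(K @ f_p - g_noisy) <= tau * delta:
        break
print(f"truncation index p = {p}, relative error = "
      f"{np.linalg.norm(f_p - f_true) / np.linalg.norm(f_true):.3f}")
```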
One of the main features of interest in analysing the light curves of stars is the underlying periodic behaviour. The corresponding observations are a complex type of time series, with unequally spaced time points, sometimes accompanied by varying measures of accuracy. The main tools for analysing this type of data are periodogram-like functions, constructed so that peaks indicate the presence of a potential period. In this paper, we explore a particular periodogram for irregularly observed time series data, similar to that of Thieler et al. (2013). We identify the potential periods at the appropriate peaks and, more importantly, with a quantifiable uncertainty. Our approach is shown to generalise easily to non-parametric methods, including a weighted Gaussian process regression periodogram, and we also extend it to correlated background noise. The proposed method for period detection relies on a test based on quadratic forms with normally distributed components. We implement the saddlepoint approximation as a faster and more accurate alternative to the simulation-based methods that are currently used. The power analysis of the testing methodology is reported, together with applications to light curves from the Hunting Outbursting Young Stars citizen science project.
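For intuition, the following sketch computes a weighted least-squares periodogram for unequally spaced, heteroscedastic data. It is in the spirit of the periodograms discussed above but is not the paper's exact statistic, and all data are synthetic.

```python
# Sketch of a weighted least-squares periodogram for irregular sampling:
# at each trial frequency, fit a sinusoid by weighted least squares and
# record the fractional drop in weighted residual sum of squares.
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 200))        # irregular observation times
sigma = rng.uniform(0.1, 0.3, t.size)        # per-point measurement accuracies
y = np.sin(2 * np.pi * t / 7.3) + sigma * rng.standard_normal(t.size)
w = 1.0 / sigma**2

def ls_periodogram(freqs):
    out = np.empty(freqs.size)
    ybar = np.sum(w * y) / np.sum(w)
    rss0 = np.sum(w * (y - ybar) ** 2)       # null model: constant only
    for i, f in enumerate(freqs):
        X = np.column_stack([np.ones_like(t),
                             np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        beta, *_ = np.linalg.lstsq(X * np.sqrt(w)[:, None],
                                   y * np.sqrt(w), rcond=None)
        rss1 = np.sum(w * (y - X @ beta) ** 2)
        out[i] = (rss0 - rss1) / rss0        # peaks near candidate periods
    return out

freqs = np.linspace(0.01, 0.5, 2000)
power = ls_periodogram(freqs)
print("detected period:", 1.0 / freqs[np.argmax(power)])
```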
A shallow semantical embedding for public announcement logic with relativized common knowledge is presented. This embedding enables the first-time automation of this logic with off-the-shelf theorem provers for classical higher-order logic. It is demonstrated (i) how meta-theoretical studies can be automated this way, and (ii) how non-trivial reasoning in the target logic (public announcement logic), required e.g. to obtain a convincing encoding and automation of the wise men puzzle, can be realized. Key to the presented semantical embedding is that evaluation domains are modeled explicitly and treated as an additional parameter in the encodings of the constituents of the embedded target logic; in previous related works, e.g. on the embedding of normal modal logics, evaluation domains were implicitly shared between meta-logic and target logic. The work presented in this article constitutes an important addition to the pluralist LogiKEy knowledge engineering methodology, which enables experimentation with logics and their combinations, with general and domain knowledge, and with concrete use cases -- all at the same time.
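The explicit-domain idea can be illustrated extensionally in ordinary code. The sketch below is a plain Python model checker, not the higher-order-logic embedding itself: formulas are evaluated relative to an explicit evaluation domain, and a public announcement simply shrinks that domain. The worlds, valuation and accessibility relation are hypothetical.

```python
# Toy public announcement logic semantics with an explicit evaluation domain,
# mirroring (in plain Python) the explicit-domain idea of the embedding.
from itertools import product

worlds = {0, 1, 2}
val = {"p": {0, 1}, "q": {1, 2}}                     # hypothetical valuation
R = {(u, v) for u, v in product(worlds, repeat=2)}   # one agent, full relation

def holds(phi, w, dom):
    """Evaluate formula phi at world w, relative to evaluation domain dom."""
    op = phi[0]
    if op == "atom":
        return w in val[phi[1]]
    if op == "not":
        return not holds(phi[1], w, dom)
    if op == "and":
        return holds(phi[1], w, dom) and holds(phi[2], w, dom)
    if op == "K":                             # knowledge: true in all accessible
        return all(holds(phi[1], v, dom)      # worlds *inside the domain*
                   for v in dom if (w, v) in R)
    if op == "announce":                      # [phi!]psi: shrink the domain
        newdom = {v for v in dom if holds(phi[1], v, dom)}
        return (w not in newdom) or holds(phi[2], w, newdom)

# After announcing p, the agent knows p (checked at world 1):
f = ("announce", ("atom", "p"), ("K", ("atom", "p")))
print(holds(f, 1, worlds))                    # True
```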
We introduce a numerical technique for controlling the location and stability properties of Hopf bifurcations in dynamical systems. The algorithm consists of solving an optimization problem constrained by an extended system of nonlinear partial differential equations that characterizes Hopf bifurcation points. The flexibility and robustness of the method allow us to advance or delay a Hopf bifurcation to a target value of the bifurcation parameter, as well as to control the oscillation frequency with respect to a parameter of the system or the shape of the domain on which solutions are defined. Numerical applications are presented for systems arising from biology and fluid dynamics, such as the FitzHugh-Nagumo model, the Ginzburg-Landau equation, the Rayleigh-B\'enard convection problem, and the Navier-Stokes equations, where control of the location and oscillation frequency of periodic solutions is of high interest.
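As a toy analogue of locating a Hopf point (simple bisection on an ODE parameter, not the paper's PDE-constrained optimization), the sketch below finds the value of the forcing parameter I at which the FitzHugh-Nagumo equilibrium loses stability, i.e. where the Jacobian's eigenvalues cross the imaginary axis.

```python
# Illustrative sketch: locate a Hopf point of the FitzHugh-Nagumo ODE by
# bisecting on the parameter I until the equilibrium's Jacobian trace vanishes.
import numpy as np
from scipy.optimize import brentq

eps, a, b = 0.08, 0.7, 0.8    # standard FitzHugh-Nagumo constants

def equilibrium_v(I):
    # v-nullcline intersected with w = (v + a) / b; the map is monotone in v.
    g = lambda v: v - v**3 / 3 - (v + a) / b + I
    return brentq(g, -3, 3)

def real_part_margin(I):
    """Jacobian trace at equilibrium; zero trace (with positive determinant)
    signals a Hopf bifurcation for a planar system."""
    v = equilibrium_v(I)
    return (1 - v**2) - eps * b

I_hopf = brentq(real_part_margin, 0.0, 1.0)
v = equilibrium_v(I_hopf)
J = np.array([[1 - v**2, -1.0], [eps, -eps * b]])
print("Hopf at I =", round(I_hopf, 4), "eigenvalues:", np.linalg.eigvals(J))
```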
We study FO+, a fragment of first-order logic on finite words, where monadic predicates can only appear positively. We show that there is an FO-definable language that is monotone in monadic predicates but not definable in FO+. This provides a simple proof that Lyndon's preservation theorem fails on finite structures. We lift this example language to finite graphs, thereby providing a new result of independent interest for FO-definable graph classes: negation might be needed even when the class is closed under addition of edges. We finally show that given a regular language of finite words, it is undecidable whether it is definable in FO+.
Statistical relational AI and probabilistic logic programming have so far mostly focused on discrete probabilistic models. The reason for this is that one needs to provide constructs to succinctly model the independencies in such models, and also to provide efficient inference. Three types of independencies are important to represent and exploit for scalable inference in hybrid models: conditional independencies, elegantly modeled in Bayesian networks; context-specific independencies, naturally represented by logical rules; and independencies amongst attributes of related objects in relational models, succinctly expressed by combining rules. This paper introduces a hybrid probabilistic logic programming language, DC#, which integrates the syntax and semantics of distributional clauses with the modeling principles of Bayesian logic programs, and which represents the three types of independencies qualitatively. More importantly, we also introduce FO-CS-LW, a scalable inference algorithm for DC#. FO-CS-LW is a first-order extension of context-specific likelihood weighting (CS-LW), a novel sampling method that exploits conditional independencies and context-specific independencies in ground models. FO-CS-LW upgrades CS-LW to the first-order case with unification and combining rules.
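For contrast with the structure-exploiting samplers above, here is a sketch of plain likelihood weighting on a tiny hybrid model that contains a context-specific dependency; CS-LW and FO-CS-LW improve on this baseline by exploiting exactly such structure. The model and evidence are hypothetical.

```python
# Baseline sketch: ordinary likelihood weighting on a tiny hybrid model.
# The child y depends on the root x only in the context x > 0.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y_obs = 1.5                      # observed evidence
N = 100_000

xs = rng.standard_normal(N)      # sample the root variable from its prior
mu = np.where(xs > 0, xs, -1.0)  # context-specific mean for y
sd = np.where(xs > 0, 1.0, 0.5)
weights = norm.pdf(y_obs, loc=mu, scale=sd)   # weight by evidence likelihood

print("E[x | y=1.5] =", np.sum(weights * xs) / np.sum(weights))
```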
We present a hybrid sampling-surrogate approach for reducing the computational expense of uncertainty quantification in nonlinear dynamical systems. Our motivation is to enable rapid uncertainty quantification in complex mechanical systems such as automotive propulsion systems. Our approach builds upon ideas from multifidelity uncertainty quantification to leverage the benefits of both sampling and surrogate modeling, while mitigating their downsides. In particular, the surrogate model is selected to exploit problem structure, such as smoothness, and offers a highly correlated information source to the original nonlinear dynamical system. We utilize an intrusive generalized Polynomial Chaos surrogate because it avoids any statistical errors in its construction and provides analytic estimates of output statistics. We then leverage a Monte Carlo-based control variate technique to correct the bias caused by the surrogate approximation error. The primary theoretical contribution of this work is the analysis and solution of an estimator design problem that optimally balances the computational effort spent adapting the surrogate against that spent sampling the original expensive nonlinear system. While previous works have similarly combined surrogates and sampling, to the best of our knowledge this work is the first to provide a rigorous analysis of estimator design. We deploy our approach on multiple examples stemming from the simulation of mechanical automotive propulsion system models. We show that, in some cases, the estimator achieves orders-of-magnitude reduction in the mean squared error of statistics estimation at costs comparable to those of purely sampling or purely surrogate approaches.
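The control-variate correction itself is compact. The sketch below combines a stand-in "expensive" model with a correlated surrogate whose mean is available in closed form, as a gPC surrogate's would be; both functions are hypothetical stand-ins, not an automotive model.

```python
# Minimal sketch of the Monte Carlo control-variate estimator: correct the
# sample mean of f using a correlated surrogate g with known mean E[g(X)].
import numpy as np

rng = np.random.default_rng(0)
mu_x = 0.5
x = rng.normal(mu_x, 1.0, 2000)               # a small budget of samples

f = lambda t: np.sin(t) + 0.05 * t**3         # "expensive" model (stand-in)
g = lambda t: t - t**3 / 6                    # correlated surrogate (stand-in)
g_mean = mu_x - (mu_x**3 + 3 * mu_x) / 6      # E[g(X)] in closed form, X ~ N(mu_x, 1)

fx, gx = f(x), g(x)
C = np.cov(fx, gx)
alpha = C[0, 1] / C[1, 1]                     # optimal control-variate coefficient
cv_est = fx.mean() - alpha * (gx.mean() - g_mean)
print("plain MC:", fx.mean(), " control variate:", cv_est)
```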
Our motivation stems from current medical research aiming at personalized treatment through a molecular-based approach. The broad goal is to develop a more precise and targeted decision-making process, relative to traditional treatments based primarily on clinical diagnoses. Specifically, we consider patients affected by Acute Myeloid Leukemia (AML), a hematological cancer characterized by uncontrolled proliferation of hematopoietic stem cells in the bone marrow. Because AML responds poorly to chemotherapeutic treatments, the development of targeted therapies is essential to improve patients' prospects. In particular, the dataset we analyze contains the levels of proteins involved in cell cycle regulation and linked to the progression of the disease. We analyze treatment effects within a causal framework represented by a Directed Acyclic Graph (DAG) model, whose vertices are the protein levels in the network. A major obstacle in implementing this program, however, is individual heterogeneity. We address this issue through a Dirichlet Process (DP) mixture of Gaussian DAG-models in which both the graphical structure and the associated model parameters are regarded as uncertain. Our procedure determines a clustering structure of the units reflecting the underlying heterogeneity, and produces subject-specific estimates of causal effects based on Bayesian model averaging. With reference to the AML dataset, we identify different effects of protein regulation among individuals; moreover, our method clusters patients into groups that exhibit only mild similarities with traditional categories based on morphological features.
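To illustrate just the clustering mechanism, the sketch below draws a partition of units from a Dirichlet-process (Chinese restaurant process) prior; the paper's model additionally equips each cluster with its own Gaussian DAG-model and performs full posterior inference.

```python
# Generative sketch of the DP clustering prior via the Chinese restaurant
# process: unit i joins an existing cluster with probability proportional to
# its size, or opens a new cluster with probability proportional to alpha.
import numpy as np

def crp_partition(n, alpha, rng):
    """Sample cluster labels for n units from a CRP with concentration alpha."""
    labels = [0]
    for i in range(1, n):
        counts = np.bincount(labels)
        probs = np.append(counts, alpha) / (i + alpha)  # existing tables vs. new
        labels.append(rng.choice(len(probs), p=probs))
    return np.array(labels)

rng = np.random.default_rng(0)
z = crp_partition(n=50, alpha=1.0, rng=rng)
print("number of clusters:", z.max() + 1, " sizes:", np.bincount(z))
```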
Large-scale knowledge graph embedding has attracted much attention from both academia and industry in the field of Artificial Intelligence. However, most existing methods concentrate solely on the fact triples contained in the given knowledge graph. Since logic rules provide a flexible and declarative language for expressing rich background knowledge, it is natural to integrate them into knowledge graph embedding, so as to transfer human knowledge to entity and relation embeddings and strengthen the learning process. In this paper, we propose a novel logic rule-enhanced method which can be easily integrated with any translation-based knowledge graph embedding model, such as TransE. We first introduce a method to automatically mine logic rules and their confidences from the triples. Then, to place both triples and mined logic rules within the same semantic space, all triples in the knowledge graph are represented in first-order logic. Finally, we define several operations on the first-order logic and minimize a global loss over both the mined logic rules and the triples transformed into first-order logic. We conduct extensive experiments on link prediction and triple classification on three datasets: WN18, FB166, and FB15K. The experiments show that the rule-enhanced method can significantly improve the performance of several baselines. The highlight of our model is that filtered Hits@1, a pivotal metric in the knowledge inference task, improves significantly (by up to 700%).
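Conceptually, the joint objective couples a translation-based triple loss with soft penalties derived from the mined rules. The sketch below shows one such coupling for a hypothetical rule r1(x, y) => r2(x, y) with mined confidence c; it illustrates the idea rather than the paper's exact loss.

```python
# Conceptual sketch: a TransE triple score plus a soft rule penalty that
# encourages embeddings to respect a mined rule r1 => r2 (names hypothetical).
import numpy as np

rng = np.random.default_rng(0)
dim, n_ent = 16, 100
E = rng.normal(0, 0.1, (n_ent, dim))          # entity embeddings
R = {"r1": rng.normal(0, 0.1, dim), "r2": rng.normal(0, 0.1, dim)}

def triple_score(h, r, t):
    """TransE score: smaller ||h + r - t|| means more plausible."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def rule_penalty(conf):
    # If r1 implies r2, then h + r1 ~ t should force h + r2 ~ t, so pull
    # the two relation vectors together, weighted by the mined confidence.
    return conf * np.linalg.norm(R["r1"] - R["r2"]) ** 2

loss = triple_score(0, "r1", 1) + 0.5 * rule_penalty(conf=0.9)
print("joint loss:", loss)
```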
Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict "{\em what drugs are likely to target proteins involved with both diseases X and Y?}" -- a query that requires reasoning about all possible proteins that {\em might} interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries -- a flexible but tractable subset of first-order logic -- on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.
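A toy version of the pipeline: relations act as geometric operations (here plain translations), conjunction is an intersection-like aggregation (here a simple mean rather than a learned operator), and answers are the entities nearest to the final query embedding. All embeddings and relation names below are random, hypothetical stand-ins.

```python
# Toy sketch of conjunctive query embedding: follow edges by translation,
# combine branches by an intersection-like mean, answer by nearest neighbors.
import numpy as np

rng = np.random.default_rng(0)
dim, n_ent = 8, 50
E = rng.normal(size=(n_ent, dim))            # entity embeddings (stand-ins)
rel = {"targets": rng.normal(size=dim),      # hypothetical relation operators
       "assoc": rng.normal(size=dim)}

def project(q, r):
    """Follow one edge type from the current query embedding."""
    return q + rel[r]

def intersect(*qs):
    """Conjunction of query branches; a learned operator in the real model."""
    return np.mean(qs, axis=0)

# "Which drugs target proteins associated with both disease X and disease Y?"
qX = project(E[10], "assoc")                 # proteins associated with disease X
qY = project(E[20], "assoc")                 # proteins associated with disease Y
q = project(intersect(qX, qY), "targets")    # drugs targeting those proteins
answers = np.argsort(np.linalg.norm(E - q, axis=1))[:5]
print("top-5 candidate entities:", answers)
```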