Model-Based Systems Engineering (MBSE) is a development approach that aims to build correct-by-construction systems, provided that clear, unambiguous, and complete models are used to describe them throughout the design process. The approach is supported by several engineering tools that automate the development steps, for example the production of code, documentation, test cases, and more. TASTE [1] is a pragmatic MBSE toolset supported by ESA that encapsulates several technologies to design a system (data modelling, architecture modelling, behaviour modelling/implementation), to automatically generate the binary application(s), and to validate them. One topic left open in TASTE is the formal verification of a system design with respect to specified properties. In this paper we describe our approach, based on the IF model-checker [4], to enable the formal verification of properties on TASTE designs. The approach is currently under development in the ESA MoC4Space project.
This paper relates findings of our own research in the domain of co-design tools, in terms of their ethical aspects and the opportunities they offer for inspiration and for HCI education. We give an overview of a number of selected general-purpose HCI/design tools as well as domain-specific tools for the Internet of Things. These tools are often card-based and are suitable not only for workshops with co-designers but also for internal workshops with students, to include these aspects in the build-up of their expertise, sometimes even in a playful way.
Cyber-physical systems (CPSs) typically consist of a wide set of integrated, heterogeneous components; consequently, most of their critical failures relate to the interoperability of such components. Unfortunately, most CPS test automation techniques are preliminary, and industry still heavily relies on manual testing. With potentially incomplete, manually-generated test suites, it is of paramount importance to assess their quality. Though mutation analysis has been demonstrated to be an effective means of assessing test suite quality in some specific contexts, we lack approaches for CPSs. Indeed, existing approaches do not target interoperability problems and cannot be executed in the presence of black-box or simulated components, a typical situation with CPSs. In this paper, we introduce data-driven mutation analysis, an approach that assesses test suite quality by verifying whether it detects interoperability faults simulated by mutating the data exchanged by software components. To this end, we describe a data-driven mutation analysis technique (DaMAT) that automatically alters the data exchanged through data buffers. Our technique is driven by fault models in tabular form in which engineers specify how to mutate data items by selecting and configuring a set of mutation operators. We have evaluated DaMAT with CPSs in the space domain; specifically, the test suites for the software systems of a microsatellite and of nanosatellites launched into orbit last year. Our results show that the approach effectively detects test suite shortcomings, is not affected by equivalent and redundant mutants, and entails acceptable costs.
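To make the idea of a tabular fault model concrete, here is a minimal Python sketch, not the actual DaMAT implementation: the field offsets, operator names, and parameters are all hypothetical. Each fault-model row selects a field in an exchanged data buffer and a configured mutation operator.

```python
import struct

# Hypothetical tabular fault model: each row selects a field in the data
# buffer and a configured mutation operator, in the spirit of DaMAT.
FAULT_MODEL = [
    # (byte offset, struct format, operator, parameter)
    (0, "f", "VAT", 0.5),   # push the value above a threshold of 0.5
    (4, "f", "NEG", None),  # negate the exchanged value
    (8, "i", "FIX", 0),     # stuck-at: replace with a fixed value
]

def mutate_buffer(buf: bytearray, row_idx: int) -> bytearray:
    """Apply one fault-model row to a copy of the exchanged buffer."""
    offset, fmt, op, param = FAULT_MODEL[row_idx]
    out = bytearray(buf)
    (value,) = struct.unpack_from(fmt, out, offset)
    if op == "VAT":
        value = param + abs(value) + 1e-6  # force value above threshold
    elif op == "NEG":
        value = -value
    elif op == "FIX":
        value = param
    struct.pack_into(fmt, out, offset, value)
    return out
```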
Software measurement is an essential management tool for developing robust and maintainable software systems. Software metrics can be used to control the inherent complexities in software design. To ensure that the components of the software are testable, the testability attribute is used, which is a sub-characteristic of the software's maintainability as well as of its quality assurance. This study investigates the relationship between static code and test metrics on the one hand, and testability and test case effectiveness on the other. The study answers three research questions. The results of the analysis showed that size and complexity metrics are suitable for predicting the testability of object-oriented classes.
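A minimal sketch of the kind of analysis such a study might run, assuming hypothetical per-class metric data (LOC, WMC, CBO) and a mutation score as the effectiveness measure; the study's actual dataset and statistical tests are not shown here.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical metrics table: one row per class, with static code metrics
# (LOC, WMC, CBO) and an effectiveness score for its test cases.
df = pd.DataFrame({
    "LOC": [120, 45, 300, 80],
    "WMC": [15, 4, 40, 9],
    "CBO": [7, 2, 12, 3],
    "mutation_score": [0.55, 0.90, 0.35, 0.80],
})

# Rank correlation between each metric and test effectiveness,
# mirroring the kind of relationship the study investigates.
for metric in ("LOC", "WMC", "CBO"):
    rho, p = spearmanr(df[metric], df["mutation_score"])
    print(f"{metric}: rho={rho:.2f}, p={p:.3f}")
```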
Modern wireless cellular networks use massive multiple-input multiple-output (MIMO) technology. This technology involves operating an antenna array at a base station that simultaneously serves multiple mobile devices, which also use multiple antennas on their side. For this, various precoding and detection techniques are used, allowing each user to receive the signal intended for it from the base station. There is an important class of linear precoding called Regularized Zero-Forcing (RZF). In this work, we propose Adaptive RZF (ARZF), with a special kind of regularization matrix that has different coefficients for each layer of multi-antenna users. These regularization coefficients are defined by explicit formulas based on the singular value decompositions (SVDs) of the user channel matrices. We study the optimization problem solved by the proposed algorithm and its connection to other possible problem statements. We also compare the proposed algorithm with state-of-the-art linear precoding algorithms in simulations with the Quadriga channel model. The proposed approach provides a significant increase in quality at the same computation cost as the reference methods.
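For illustration, a minimal NumPy sketch of RZF precoding with a per-layer regularization matrix. The closed-form ARZF coefficients come from the paper's formulas; the SVD-based values below are placeholders only, as is the power normalization.

```python
import numpy as np

def rzf_precoder(H: np.ndarray, reg: np.ndarray) -> np.ndarray:
    """RZF precoder W = H^H (H H^H + diag(reg))^{-1}.

    H   : (num_user_layers, num_bs_antennas) aggregate channel matrix
    reg : per-layer regularization coefficients (ARZF uses distinct values)
    """
    G = H @ H.conj().T + np.diag(reg)
    W = H.conj().T @ np.linalg.inv(G)
    # Normalize columns to satisfy a per-layer power constraint (illustrative).
    return W / np.linalg.norm(W, axis=0, keepdims=True)

def per_layer_reg(user_channels, noise_power):
    """Placeholder per-layer coefficients from each user's channel SVD.

    The paper derives explicit formulas; here we just scale the noise power
    by the squared ratio of smallest to largest singular value.
    """
    coeffs = []
    for Hk in user_channels:  # Hk: (user antennas, bs antennas)
        s = np.linalg.svd(Hk, compute_uv=False)
        coeffs.extend([noise_power * (s[-1] / s[0]) ** 2] * Hk.shape[0])
    return np.array(coeffs)
```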
Digital mathematical libraries assemble the knowledge of years of mathematical research. Numerous disciplines (e.g., physics, engineering, pure and applied mathematics) rely heavily on the findings gathered in such compendia. Likewise, modern research applications rely more and more on computational solutions, which are often calculated and verified by computer algebra systems. Hence, the correctness, accuracy, and reliability of both digital mathematical libraries and computer algebra systems are crucial attributes for modern research. In this paper, we present a novel approach to verify a digital mathematical library and two computer algebra systems against one another by converting mathematical expressions from one system to the other. We use our previously developed conversion tool (referred to as LaCASt) to translate formulae from the NIST Digital Library of Mathematical Functions to the computer algebra systems Maple and Mathematica. The contributions of our presented work are as follows: (1) we present the most comprehensive verification of computer algebra systems and digital mathematical libraries against one another; (2) we significantly enhance the performance of the underlying translator in terms of coverage and accuracy; and (3) we provide open access to translations for Maple and Mathematica of the formulae in the NIST Digital Library of Mathematical Functions.
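One common way to cross-check a translated formula is to numerically evaluate both sides at sample points and compare. Below is a minimal sketch of that idea using SymPy as a stand-in for Maple or Mathematica; the paper's actual evaluation pipeline, identity set, and tolerances are not shown here.

```python
import random
import sympy as sp

# Translated identity (a trivial stand-in for a DLMF formula).
z = sp.symbols("z")
lhs = sp.sin(z) ** 2 + sp.cos(z) ** 2
rhs = sp.Integer(1)

def numerically_verified(lhs, rhs, var, trials=10, tol=1e-10):
    """Substitute random complex sample points and compare both sides."""
    for _ in range(trials):
        point = complex(random.uniform(-5, 5), random.uniform(-5, 5))
        diff = complex((lhs - rhs).evalf(subs={var: point}))
        if abs(diff) > tol:
            return False
    return True

print(numerically_verified(lhs, rhs, z))
```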
In model extraction attacks, adversaries can steal a machine learning model exposed via a public API by repeatedly querying it and adjusting their own model based on the obtained predictions. To prevent model stealing, existing defenses focus on detecting malicious queries or on truncating or distorting outputs, thus necessarily introducing a tradeoff between robustness and model utility for legitimate users. Instead, we propose to impede model extraction by requiring users to complete a proof-of-work before they can read the model's predictions. This deters attackers by greatly increasing (even up to 100x) the computational effort needed to leverage query access for model extraction. Since we calibrate the effort required to complete the proof-of-work to each query, this introduces only a slight overhead for regular users (up to 2x). To achieve this, our calibration applies tools from differential privacy to measure the information revealed by a query. Our method requires no modification of the victim model and can be applied by machine learning practitioners to guard their publicly exposed models against being easily stolen.
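A minimal sketch of the mechanism, assuming a hashcash-style puzzle and a hypothetical `calibrated_difficulty` mapping from a DP-style privacy cost to puzzle hardness; the paper's actual calibration is more involved.

```python
import hashlib
import os

def pow_challenge(difficulty_bits: int):
    """Issue a hashcash-style puzzle whose cost grows with difficulty."""
    return os.urandom(16), difficulty_bits

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce whose hash falls below the difficulty target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def calibrated_difficulty(privacy_cost: float, base_bits: int = 8) -> int:
    # Hypothetical calibration: queries that reveal more information (as
    # measured by a DP-style privacy cost) get exponentially harder puzzles.
    return base_bits + int(privacy_cost)
```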
This paper presents a transfer learning approach that enables fast and efficient adaptation of Recurrent Neural Network (RNN) models of dynamical systems. A nominal RNN model is first identified using available measurements. The system dynamics are then assumed to change, leading to an unacceptable degradation of the nominal model's performance on the perturbed system. To cope with the mismatch, the model is augmented with an additive correction term trained on fresh data from the new dynamic regime. The correction term is learned through a Jacobian Feature Regression (JFR) method defined in terms of the features spanned by the model's Jacobian with respect to its nominal parameters. A non-parametric view of the approach is also proposed, which extends recent work on Gaussian Processes with the Neural Tangent Kernel (NTK-GP) to the RNN case (RNTK-GP). This can be more efficient for very large networks or when only a few data points are available. Implementation aspects for fast and efficient computation of the correction term, as well as for the initial state estimation of the RNN model, are described. Numerical examples show the effectiveness of the proposed methodology in the presence of significant system variations.
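A minimal PyTorch sketch of the Jacobian Feature Regression idea: the features are the gradients of the nominal model's output with respect to its parameters, and the additive correction is a ridge regression on those features. The model architecture, the data, and the regularization value are placeholders, and the efficient computation described in the paper is not reproduced here.

```python
import torch

torch.manual_seed(0)
rnn = torch.nn.GRU(input_size=1, hidden_size=8, batch_first=True)
head = torch.nn.Linear(8, 1)
params = list(rnn.parameters()) + list(head.parameters())

def nominal_output(u):                    # u: (batch, time, 1)
    y, _ = rnn(u)
    return head(y[:, -1, :]).squeeze(-1)  # (batch,)

def jacobian_features(u):
    rows = []
    for k in range(u.shape[0]):           # per-sample gradient = feature row
        g = torch.autograd.grad(nominal_output(u[k:k + 1]).squeeze(), params)
        rows.append(torch.cat([gi.flatten() for gi in g]))
    return torch.stack(rows)              # (batch, n_params)

# Fresh data from the perturbed regime (placeholders).
u_new = torch.randn(32, 20, 1)
residual = torch.randn(32)                # measured minus nominal output

Phi = jacobian_features(u_new)
lam = 1e-3                                # ridge regularization (assumed)
theta = torch.linalg.solve(
    Phi.T @ Phi + lam * torch.eye(Phi.shape[1]), Phi.T @ residual)
with torch.no_grad():
    y_corrected = nominal_output(u_new) + Phi @ theta
```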
A method of Sequential Log-Convex Programming (SLCP) is constructed that exploits the log-convex structure present in many engineering design problems. The mathematical structure of Geometric Programming (GP) is combined with the ability of Sequential Quadratic Programming (SQP) to accommodate a wide range of objective and constraint functions, resulting in a practical algorithm that can be adopted with little to no modification of existing design practices. Three test problems are considered to demonstrate the SLCP algorithm, comparing it with SQP and the modified Logspace Sequential Quadratic Programming (LSQP). In these cases, SLCP shows up to a 77% reduction in the number of iterations compared to SQP, and an 11% reduction compared to LSQP. The airfoil analysis code XFOIL is integrated into one of the case studies to show how SLCP can be used to evolve the fidelity of design problems that have initially been modeled as GP-compatible. Finally, a methodology for design based on GP and SLCP is briefly discussed.
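At the core of the log-convex idea is the change of variables x = exp(y), under which posynomial objectives and constraints become convex. A toy sketch of that transformation (not the full SLCP algorithm) using SciPy, on an assumed GP-compatible example problem:

```python
import numpy as np
from scipy.optimize import minimize

# Toy GP-compatible problem: minimize x0/x1 + x1 subject to x0*x1 >= 2.
# Under x = exp(y) the objective is convex in y and the constraint is affine.
def obj(y):
    x = np.exp(y)
    return x[0] / x[1] + x[1]

def log_constraint(y):
    # x0 * x1 >= 2  <=>  y0 + y1 - log(2) >= 0
    return y[0] + y[1] - np.log(2.0)

res = minimize(obj, x0=np.zeros(2),
               constraints=[{"type": "ineq", "fun": log_constraint}])
print("optimal x:", np.exp(res.x))
```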
This technical report relates the Biochemical Space Language (BCSL) to Multiset Rewriting Systems (MRS). For a BCSL model, the semantics are defined in terms of transition systems, while for an MRS, they are defined in terms of a set of runs. In this report, we relate BCSL to MRS by first showing how a transition system is related to a set of runs, and then showing that for every BCSL model an MRS can be constructed such that both represent the same set of runs. The motivation for this step is to establish BCSL in the context of a more general rewriting framework and to benefit from properties already shown for such systems. Finally, we show that regulations defined for MRS can consequently be used in BCSL models.
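For readers unfamiliar with MRS, a minimal sketch of a multiset rewriting step and a run, with hypothetical species and rules; BCSL's richer rule structure is not modeled here.

```python
from collections import Counter

# Hypothetical rules over a multiset of species: (lhs consumed, rhs produced).
rules = [
    (Counter({"A": 1, "B": 1}), Counter({"C": 1})),  # A + B -> C
    (Counter({"C": 2}), Counter({"A": 1})),          # 2C -> A
]

def applicable(state: Counter, lhs: Counter) -> bool:
    return all(state[s] >= n for s, n in lhs.items())

def apply_rule(state: Counter, lhs: Counter, rhs: Counter) -> Counter:
    return (state - lhs) + rhs

# One run: the sequence of states obtained by repeatedly firing the first
# applicable rule, starting from an initial multiset.
state = Counter({"A": 2, "B": 2})
run = [dict(state)]
for _ in range(4):
    for lhs, rhs in rules:
        if applicable(state, lhs):
            state = apply_rule(state, lhs, rhs)
            run.append(dict(state))
            break
print(run)
```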
Internet of Things (IoT) infrastructure within the physical library environment is the basis for an integrative, hybrid approach to digital resource recommenders. The IoT infrastructure provides mobile, dynamic wayfinding support for items in the collection, including features for location-based recommendations. The evaluation and analysis herein clarified the nature of users' requests for recommendations based on their location, and described the subject areas of the library for which users request recommendations. The results indicated that users of IoT-based recommendations are interested in a broad distribution of subjects, with a short-head distribution in this collection in American and English Literature. A long-tail finding showed a diversity of topics recommended to users in the library book stacks with IoT-powered recommendations.