The use of domain-specific modeling for the development of complex (cyber-physical) systems is gaining increasing acceptance in the industrial environment. Domain-specific modeling allows complex systems and data to be abstracted for more efficient system design, development, validation, and configuration. However, no existing (meta-)modeling framework can so far be used with reasonable effort in certified software, neither for the development of systems nor for the execution of system functions. To use (development) artifacts from domain-specific modeling in safety-critical processes or systems, their correctness must be ensured either by subsequent (manual) verification or by the use of (pre-)qualified software. Existing meta-languages often contain modeling elements that are difficult or impossible to implement in a qualifiable manner, leading to a high subsequent manual certification effort. The aim of this work is therefore to develop a (meta-)modeling framework that can be used in certified software. This can significantly reduce the development effort for safety-critical systems and allows the full advantages of domain-specific modeling to be exploited. The framework components considered in this PhD thesis include: (1) an essential meta-language, (2) a qualifiable runtime environment, and (3) a suitable persistence layer. The essential (meta-)modeling language is mainly based on the UML standard, but is enhanced with multi-level modeling concepts such as deep instantiation. To support a possible qualification, the meta-language is implemented in Ada SPARK, a highly restrictive but formally verifiable programming language.
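Deep instantiation is only named in this abstract, not defined; as a rough illustration of the multi-level modeling idea (in Python rather than Ada SPARK, and not the framework's actual API), a clabject carrying a potency value can be instantiated across several modeling levels, with each step lowering the potency until it reaches zero:

```python
# Illustrative sketch of deep instantiation via potency (hypothetical, not the
# thesis framework's API): a clabject can be instantiated as long as its
# potency is greater than zero; each instantiation lowers the potency by one.

class Clabject:
    def __init__(self, name, potency, level=0, attributes=None):
        self.name = name
        self.potency = potency          # how many more levels it can be instantiated
        self.level = level              # modeling level (0 = topmost meta-level)
        self.attributes = dict(attributes or {})

    def instantiate(self, name, **attribute_values):
        if self.potency == 0:
            raise ValueError(f"{self.name} has potency 0 and cannot be instantiated")
        attrs = {**self.attributes, **attribute_values}
        return Clabject(name, self.potency - 1, self.level + 1, attrs)


# Potency 2: 'ProductType' spans two instantiation levels.
product_type = Clabject("ProductType", potency=2, attributes={"tax_rate": None})
book = product_type.instantiate("Book", tax_rate=0.07)    # level 1, potency 1
moby_dick = book.instantiate("MobyDick")                   # level 2, potency 0
```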
This work describes the setup of an advanced technical infrastructure for collaborative software development (a collaborative development environment, CDE) in large, distributed projects based on GitLab. We present its customization and extension, additional features and processes such as code review, continuous automated testing, and DevOps practices, as well as sustainable life-cycle management including long-term preservation and citable publishing of software releases along with relevant metadata. The environment is currently used for developing the open cardiac simulation software openCARP, and an evaluation showcases its capability and utility for collaboration and coordination in sizeable, heterogeneous teams. As such, it could be a suitable and sustainable infrastructure solution for a wide range of research software projects.
ConsumerCheck is an open-source data analysis software package tailored to the analysis of sensory and consumer data. Since some of the implemented methods, such as PCA, PLSR, and PCR, are generic, data from other domains may also be analysed with ConsumerCheck. The software comes with a graphical user interface and thus provides non-statisticians and users without programming skills free access to a number of widely used analysis methods within the field of sensory and consumer science. Computational results are presented in plots that are easily generated from the tree controls within the graphical user interface. Since the construction of conjoint analysis models is not always straightforward, ConsumerCheck provides three predefined model structures of different complexity. ConsumerCheck is an ongoing research project, and the objective is to implement further statistical methods over time.
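ConsumerCheck itself is operated through its GUI, but the kind of generic analysis it exposes (here PCA) can be sketched with scikit-learn on an arbitrary data matrix; the data below are synthetic and the code is illustrative only, not ConsumerCheck's implementation:

```python
# Minimal sketch of the kind of generic analysis ConsumerCheck exposes (PCA),
# shown here with scikit-learn on a synthetic data matrix; not ConsumerCheck code.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))             # e.g., 30 products rated on 8 sensory attributes

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores[:5])                         # first five samples in the PC1/PC2 score space
```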
Software analytics is a data-driven approach to decision making that allows software practitioners to leverage valuable insights from data about software to achieve higher development process productivity and to improve different aspects of software quality. In previous work, a set of patterns for adopting a lean software analytics process was identified through a literature review. This paper presents two patterns to add to the original set, forming a pattern language for adopting software analytics practices that aims to inform the decision-making activities of software practitioners. The writing of these two patterns was informed by the solutions employed in two case studies on software analytics practices, and the patterns were further validated by searching for their occurrence in the literature. The pattern Broad-Spectrum Diagnostic proposes conducting broader analyses based on common metrics when the team does not yet have the expertise to understand the kinds of problems that software analytics can help to solve, and the pattern Embedded Improvements suggests adding improvement tasks as part of other routine activities.
With the growing technological advances in autonomous driving, the transport industry and research community seek to determine the impact that autonomous vehicles (AVs) will have on consumers, as well as to identify the different factors that will influence their use. Most of the research performed so far relies on laboratory-controlled conditions using driving simulators, as they offer a safe environment for testing advanced driving assistance systems (ADAS). In this study, we analyze the behavior of drivers who are placed in control of an automated vehicle in a real-life driving environment. The vehicle is equipped with advanced autonomy, making driver control unnecessary in many scenarios, although a driver take-over is possible and sometimes required. In doing so, we aim to determine the impact of such a system on the driver and their driving performance. To this end, road users' behavior is analyzed from naturalistic driving data, focusing on awareness and diagnosis of the road situation. Results showed that road features determined the level of visual attention and trust in the automation, and that the activities performed during automated driving affected the reaction time to take over control of the vehicle.
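The abstract does not detail how take-over reaction time is derived; a simple, hypothetical way to compute it from timestamped naturalistic driving logs is to pair each take-over request with the first subsequent manual-control event (the event names and schema below are assumptions for illustration, not the study's pipeline):

```python
# Hypothetical sketch: compute take-over reaction times from a timestamped event log
# (not the study's actual pipeline or data schema).
import pandas as pd

events = pd.DataFrame({
    "t": [10.0, 11.8, 45.0, 47.9],
    "event": ["takeover_request", "manual_control", "takeover_request", "manual_control"],
})

requests = events.loc[events.event == "takeover_request", "t"].to_numpy()
responses = events.loc[events.event == "manual_control", "t"].to_numpy()
reaction_times = responses - requests      # assumes one response per request, in order
print(reaction_times)                       # e.g., [1.8 2.9] seconds
```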
This paper explores the environmental impact of the super-linear growth trends of AI from a holistic perspective, spanning Data, Algorithms, and System Hardware. We characterize the carbon footprint of AI computing by examining the model development cycle across industry-scale machine learning use cases while also considering the life cycle of system hardware. Taking a step further, we capture both the operational and the manufacturing carbon footprint of AI computing and present an end-to-end analysis of what and how hardware-software design and at-scale optimization can help reduce the overall carbon footprint of AI. Based on industry experience and lessons learned, we share the key challenges and chart out important development directions across the many dimensions of AI. We hope the key messages and insights presented in this paper can inspire the community to advance the field of AI in an environmentally responsible manner.
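As a back-of-the-envelope illustration of the operational-versus-manufacturing split (not the paper's model, and with made-up numbers), operational carbon can be estimated from energy use, datacenter overhead (PUE), and grid carbon intensity, while manufacturing (embodied) carbon is amortized over hardware lifetime:

```python
# Back-of-the-envelope sketch of operational vs. embodied carbon for a training job.
# All numbers are illustrative placeholders, not figures from the paper.

def operational_kgco2e(gpu_hours, watts_per_gpu, pue, grid_kgco2e_per_kwh):
    energy_kwh = gpu_hours * watts_per_gpu / 1000.0 * pue
    return energy_kwh * grid_kgco2e_per_kwh

def embodied_kgco2e(num_gpus, kgco2e_per_gpu, job_hours, lifetime_hours):
    # Amortize each device's manufacturing footprint over its service lifetime.
    return num_gpus * kgco2e_per_gpu * (job_hours / lifetime_hours)

job_hours, num_gpus = 72.0, 64
op = operational_kgco2e(gpu_hours=job_hours * num_gpus, watts_per_gpu=300,
                        pue=1.1, grid_kgco2e_per_kwh=0.4)
emb = embodied_kgco2e(num_gpus, kgco2e_per_gpu=150.0,
                      job_hours=job_hours, lifetime_hours=4 * 365 * 24)
print(f"operational: {op:.0f} kgCO2e, embodied share: {emb:.1f} kgCO2e")
```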
The unique developmental and operational characteristics of ML components, as well as their inherent uncertainty, demand that robust engineering principles be applied to ensure their quality. We aim to determine how software systems can be (re-)architected to enable robust integration of ML components. Towards this goal, we conducted a mixed-methods empirical study consisting of (i) a systematic literature review to identify the challenges and their solutions in software architecture for ML, (ii) semi-structured interviews with practitioners to qualitatively complement the initial findings, and (iii) a survey to quantitatively validate the challenges and their solutions. We compiled and validated twenty challenges and solutions for (re-)architecting systems with ML components. Our results indicate, for example, that traditional software architecture challenges (e.g., component coupling) also play an important role when using ML components, alongside new ML-specific challenges (e.g., the need for continuous retraining). Moreover, the results indicate that ML-heightened decision drivers, such as privacy, play a marginal role compared to traditional decision drivers, such as scalability. Using the survey, we were able to establish a link between architectural solutions and software quality attributes, which enabled us to provide twenty architectural tactics used to satisfy individual quality requirements of systems with ML components. Altogether, the results of the study can be interpreted as an empirical framework that supports the process of (re-)architecting software systems with ML components.
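One of the traditional challenges mentioned, component coupling, is commonly addressed by hiding the ML component behind a narrow interface so it can be retrained or replaced independently; the sketch below is a generic illustration of that idea, not a tactic quoted verbatim from the study:

```python
# Illustrative decoupling tactic (generic, not taken verbatim from the study):
# the rest of the system depends only on a small prediction interface, so the
# ML component can be retrained or swapped without touching its consumers.
from typing import Protocol, Sequence


class Scorer(Protocol):
    def score(self, features: Sequence[float]) -> float: ...


class HeuristicScorer:
    """Non-ML fallback that satisfies the same interface."""
    def score(self, features: Sequence[float]) -> float:
        return sum(features) / len(features)


class ModelScorer:
    """Wraps a trained model; retraining only changes what is loaded here."""
    def __init__(self, model):
        self._model = model
    def score(self, features: Sequence[float]) -> float:
        return float(self._model.predict([list(features)])[0])


def approve_request(features: Sequence[float], scorer: Scorer) -> bool:
    return scorer.score(features) > 0.5    # business logic is unaware of the model


print(approve_request([0.2, 0.9, 0.7], HeuristicScorer()))
```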
Robots are being designed to communicate with people in various public and domestic venues in a helpful, discreet way. Here, we use a speculative approach to shed light on a new concept of robot steganography (RS): a robot could seek to help vulnerable populations by discreetly warning them of potential threats. We first identify some potentially useful scenarios for RS related to safety and security -- concerns that are estimated to cost the world trillions of dollars each year -- with a focus on two kinds of robots, an autonomous vehicle (AV) and a socially assistive humanoid robot (SAR). Next, we propose that existing, powerful, computer-based steganography (CS) approaches can be adopted with little effort in new contexts (SARs), while also pointing out potential benefits of human-like steganography (HS): although less efficient and robust than CS, HS represents a currently unused form of RS that could also be used to avoid requiring computers or to evade detection by more technically advanced adversaries. This analysis also introduces some unique challenges of RS that arise from message generation, indirect perception, and effects of perspective. For this, we explore some related theoretical and practical concerns for selecting carrier signals and generating messages, also making available some code and a video demo. Finally, we report on checking the current feasibility of the RS concept via a simplified user study, confirming that messages can be hidden in a robot's behaviors. The immediate implication is that RS could help to improve people's lives and mitigate some costly problems -- suggesting the usefulness of further discussion, ideation, and consideration by designers.
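The paper's own code is referenced but not reproduced here; as a toy illustration of what a behavioral carrier signal could look like (a hypothetical scheme, not the authors' method), a bit string could be encoded in the durations of small pauses between a robot's gestures:

```python
# Toy sketch of hiding a bit string in pause durations between gestures
# (hypothetical carrier, not the scheme from the paper).

SHORT, LONG = 0.4, 0.8   # seconds; '0' -> short pause, '1' -> long pause

def encode(bits: str) -> list[float]:
    return [LONG if b == "1" else SHORT for b in bits]

def decode(pauses: list[float]) -> str:
    threshold = (SHORT + LONG) / 2
    return "".join("1" if p > threshold else "0" for p in pauses)

message_bits = "1011001"
pauses = encode(message_bits)             # a motion planner would insert these pauses
assert decode(pauses) == message_bits
print(pauses)
```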
Finely tuning MPI applications and understanding the influence of key parameters (number of processes, granularity, collective operation algorithms, virtual topology, and process placement) is critical to obtain good performance on supercomputers. With the high consumption of running applications at scale, doing so solely to optimize their performance is particularly costly. Having inexpensive but faithful predictions of expected performance could be a great help for researchers and system administrators. The methodology we propose decouples the complexity of the platform, which is captured through statistical models of the performance of its main components (MPI communications, BLAS operations), from the complexity of adaptive applications by emulating the application and skipping regular non-MPI parts of the code. We demonstrate the capability of our method with High-Performance Linpack (HPL), the benchmark used to rank supercomputers in the TOP500, which requires careful tuning. We briefly present (1) how the open-source version of HPL can be slightly modified to allow a fast emulation on a single commodity server at the scale of a supercomputer. Then we present (2) an extensive (in)validation study that compares simulation with real experiments and demonstrates our ability to consistently predict the performance of HPL within a few percent. This study allows us to identify the main modeling pitfalls (e.g., spatial and temporal node variability or network heterogeneity and irregular behavior) that need to be considered. Last, we show (3) how our "surrogate" allows studying several subtle HPL parameter optimization problems while accounting for uncertainty on the platform.
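The statistical models of the main components are not detailed in the abstract; as a minimal, hypothetical example of the kind of model involved, the duration of a dgemm call can be fitted as a function of its dimensions from timing samples and then predicted during emulation instead of actually computing the product:

```python
# Minimal, hypothetical sketch of a statistical performance model for one
# component (dgemm): fit duration as a function of the flop count from timing
# samples, then predict instead of computing (not the authors' actual model).
import numpy as np

# (m, n, k, measured_seconds) samples, e.g., collected on one node of the platform.
samples = np.array([
    (512, 512, 512, 0.004),
    (1024, 1024, 1024, 0.030),
    (2048, 2048, 2048, 0.240),
    (4096, 4096, 4096, 1.900),
])
flops = 2.0 * samples[:, 0] * samples[:, 1] * samples[:, 2]
coeffs = np.polyfit(flops, samples[:, 3], deg=1)    # duration ~ a * flops + b

def predicted_dgemm_seconds(m, n, k):
    return float(np.polyval(coeffs, 2.0 * m * n * k))

print(predicted_dgemm_seconds(3000, 3000, 3000))
```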
Currently, most social robots interact with their surroundings and with humans through sensors that are integral parts of the robots, which limits the usability of the sensors, human-robot interaction, and interchangeability. A wearable sensor garment that fits many robots is therefore needed in many applications. This article presents an affordable wearable sensor vest and an open-source software architecture with the Internet of Things (IoT) for social humanoid robots. The vest consists of touch, temperature, gesture, distance, and vision sensors, and a wireless communication module. The IoT feature allows the robot to interact with humans locally and over the Internet. The designed architecture works for any social robot that has a general-purpose graphics processing unit (GPGPU), I2C/SPI buses, an Internet connection, and the Robot Operating System (ROS). The modular design of this architecture enables developers to easily add, remove, or update complex behaviors. The proposed software architecture provides IoT technology, GPGPU nodes, I2C and SPI bus managers, audio-visual interaction nodes (speech-to-text, text-to-speech, and image understanding), and isolation between behavior nodes and other nodes. The proposed IoT solution consists of related nodes in the robot, a RESTful web service, and user interfaces. We used the HTTP protocol as a means of two-way communication with the social robot over the Internet. Developers can easily edit or add nodes in the C, C++, and Python programming languages. Our architecture can be used for designing more sophisticated behaviors for social humanoid robots.
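As a minimal illustration of the node-based structure described (with a hypothetical topic name and message type, not the project's actual node), a ROS 1 Python node could publish a vest temperature reading for behavior nodes to consume:

```python
#!/usr/bin/env python
# Minimal ROS 1 sketch of a sensor node for the vest (hypothetical topic name
# and message type; not the actual node from the described architecture).
import random
import rospy
from std_msgs.msg import Float32

def main():
    rospy.init_node("vest_temperature_node")
    pub = rospy.Publisher("/vest/temperature", Float32, queue_size=10)
    rate = rospy.Rate(1)                              # publish once per second
    while not rospy.is_shutdown():
        reading = 36.5 + random.uniform(-0.5, 0.5)    # stand-in for an I2C sensor read
        pub.publish(Float32(data=reading))
        rate.sleep()

if __name__ == "__main__":
    main()
```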
Object-oriented programming (OOP) is one of the most popular paradigms used for building software systems. However, despite its industrial and academic popularity, OOP still lacks a formal apparatus similar to the lambda calculus on which functional programming is based. There have been a number of attempts to formalize OOP, but none of them has managed to cover all the features available in modern OO programming languages, such as C++ or Java. We have made yet another attempt and created phi-calculus. We have also created EOLANG (also called EO), an experimental programming language based on phi-calculus.
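Phi-calculus itself is not defined in this abstract; as a very loose approximation of its object-as-set-of-attributes view (a Python illustration, not the formal calculus and not EO syntax), an object can be modeled as a mapping from attribute names to other objects, with unknown attributes delegated to a decoratee:

```python
# Very loose Python approximation of the object model behind phi-calculus
# (not the formal calculus and not EO syntax): an object is a set of named
# attributes, and unknown attributes are delegated to a decoratee.

class Obj:
    def __init__(self, attributes=None, decoratee=None):
        self.attributes = dict(attributes or {})
        self.decoratee = decoratee

    def attr(self, name):
        if name in self.attributes:
            return self.attributes[name]
        if self.decoratee is not None:
            return self.decoratee.attr(name)       # delegate, as in decoration
        raise AttributeError(name)


base = Obj({"greeting": "hello"})
decorated = Obj({"name": "world"}, decoratee=base)
print(decorated.attr("greeting"), decorated.attr("name"))   # hello world
```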