
Rigorous development processes aim to be effective in developing critical systems, especially when failures can have catastrophic consequences for humans and the environment. Such processes generally rely on formal methods, which, thanks to their mathematical foundation, can guarantee precise models and the assurance of properties. However, they are rarely adopted in practice. In this paper, we report our experience in using the Abstract State Machine formal method and the ASMETA framework to develop a prototype of the control software of the MVM (Mechanical Ventilator Milano), a mechanical lung ventilator that was designed, successfully certified, and deployed during the COVID-19 pandemic. Due to time constraints and a lack of skills, no formal method was applied in the MVM project. Here, however, we assess the feasibility of developing (part of) the ventilator using a formal method-based approach. Our development process starts from a high-level formal specification of the system that describes the MVM's main operation modes. Then, through a sequence of refined models, all the other requirements are captured, up to a level at which a C++ implementation of a prototype of the MVM controller is automatically generated from the model and tested. Along the process, at each refinement level, different model validation and verification activities are performed, and each refined model is proved to be a correct refinement of the previous one. By means of the MVM case study, we evaluate the effectiveness and usability of our formal approach.
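For readers unfamiliar with the ASM style, here is a minimal Python sketch of the kind of top-level state machine the first refinement level might capture. PCV and PSV are documented MVM ventilation modes, but the events, guards, and class names below are our own illustrative assumptions, not the paper's AsmetaL models or its generated C++.

```python
from enum import Enum, auto

class Mode(Enum):
    STARTUP = auto()
    VENTILATION_OFF = auto()
    PCV = auto()  # Pressure Controlled Ventilation
    PSV = auto()  # Pressure Supported Ventilation

class MVMController:
    """Illustrative abstract state machine over the main operation modes.

    The guards below are simplified assumptions; a real refinement level
    would add timing, alarms, and valve control, as the paper describes.
    """
    def __init__(self) -> None:
        self.mode = Mode.STARTUP
        self.self_test_passed = False

    def step(self, event: str) -> Mode:
        if self.mode is Mode.STARTUP and event == "selfTestDone":
            self.self_test_passed = True
            self.mode = Mode.VENTILATION_OFF
        elif self.mode is Mode.VENTILATION_OFF and self.self_test_passed:
            if event == "startPCV":
                self.mode = Mode.PCV
            elif event == "startPSV":
                self.mode = Mode.PSV
        elif self.mode in (Mode.PCV, Mode.PSV) and event == "stop":
            self.mode = Mode.VENTILATION_OFF
        return self.mode

ctrl = MVMController()
for ev in ["selfTestDone", "startPCV", "stop"]:
    print(ev, "->", ctrl.step(ev).name)
```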

Related content

Online planning of whole-body motions for legged robots is challenging due to the inherent nonlinearity in the robot dynamics. In this work, we propose a nonlinear MPC framework, BiConMP, which can generate whole-body trajectories online by efficiently exploiting the structure of the robot dynamics. BiConMP is used to generate various cyclic gaits on a real quadruped robot, and its performance is evaluated on different terrains, in countering unforeseen pushes, and in transitioning online between different gaits. Further, we present the ability of BiConMP to generate non-trivial acyclic whole-body dynamic motions on the robot. Finally, we report and discuss an extensive empirical analysis of the effects of planning horizon and frequency on the nonlinear MPC framework.
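As rough intuition for how such a framework runs online, here is a minimal Python sketch of a generic receding-horizon MPC loop. The toy double-integrator system and the proportional "solver" are placeholders standing in for the robot dynamics and the kino-dynamic optimization BiConMP actually performs.

```python
import numpy as np

def receding_horizon(x0, solve_ocp, step, n_steps=50, horizon=10):
    """Generic MPC loop: re-solve from the measured state, apply only
    the first control, then replan (illustrative, not the BiConMP solver)."""
    x = x0
    for _ in range(n_steps):
        u_seq = solve_ocp(x, horizon)  # optimal-control solve over the horizon
        x = step(x, u_seq[0])          # execute first control; discard the rest
    return x

# Toy stand-in: a double integrator with a stabilizing linear "plan".
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
solve = lambda x, h: [-np.array([[2.0, 1.5]]) @ x] * h
step = lambda x, u: A @ x + B @ u

print(receding_horizon(np.array([[1.0], [0.0]]), solve, step))  # driven near 0
```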

Various stakeholders with different backgrounds are involved in Smart City projects. These stakeholders define the project goals, e.g., based on participative approaches, market research, or innovation management processes. To realize these goals, complex technical solutions often must be designed and implemented. In practice, however, it is difficult to synchronize the technical design and implementation phase with the definition of moving Smart City goals. We hypothesize that this is due to the lack of a common language for the different stakeholder groups and the technical disciplines. We address this problem with scenario-based requirements engineering techniques. In particular, we use scenarios at different levels of abstraction and formalization that are connected end-to-end by appropriate methods and tools. This enables fast feedback loops that iteratively align technical requirements, stakeholder expectations, and Smart City goals. We demonstrate the applicability of our approach in a case study with different industry partners.

Without a specific functional context, non-functional requirements can only be approached as cross-cutting concerns and treated uniformly across all features of an application. This neglects, however, the heterogeneity of non-functional requirements, which arises from stakeholder interests and the distinct functional scopes of software systems, both of which influence how these non-functional requirements have to be satisfied. Earlier studies showed that the different types and objectives of non-functional requirements result in either vague or unbalanced specifications. We propose a task analytic approach for eliciting and modeling user tasks in order to capture the interests stakeholders pursue in the software product. Stakeholder interests are structurally related to user tasks, and each interest can be specified individually as a constraint on a specific user task. These constraints give DevOps teams important guidance on how a stakeholder's interest can be sufficiently satisfied across the software lifecycle. We propose a structured approach that intertwines task-oriented functional requirements with non-functional stakeholder interests in order to specify constraints at the level of user tasks. We also present the results of a case study with domain experts, which reveals that our task modeling and interest-tailoring method increases the comprehensibility of non-functional requirements as well as of their impact on the functional requirements, i.e., the users' tasks.
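A minimal sketch of the structural relation the abstract describes, with stakeholder interests attached as per-task constraints. The class and field names are our assumptions for illustration, not the authors' metamodel.

```python
from dataclasses import dataclass, field

@dataclass
class InterestConstraint:
    stakeholder: str  # who holds the interest
    quality: str      # e.g. "response time", "auditability"
    criterion: str    # measurable acceptance criterion for this task

@dataclass
class UserTask:
    name: str
    description: str
    constraints: list[InterestConstraint] = field(default_factory=list)

# Each interest is specified per task, instead of as one uniform
# cross-cutting NFR over the whole application.
checkout = UserTask(
    "Check out cart",
    "Customer completes a purchase",
    [InterestConstraint("Operations", "response time", "p95 latency < 500 ms"),
     InterestConstraint("Compliance", "auditability", "every payment attempt is logged")],
)
print(checkout)
```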

Software non-functional requirements (NFRs) address a multitude of objectives, expectations, and even liabilities that must be considered during development and operation. Typically, these non-functional requirements originate from different domains, and their concrete scope, notion, and demarcation from functional requirements is often ambiguous. In this study, we seek to categorize and analyze relevant work related to software engineering in a DevOps context in order to clarify the focus areas, themes, and objectives underlying non-functional requirements and to identify future research directions in this field. We conducted a systematic mapping study of 142 selected primary studies, extracted the focus areas, and synthesized the themes and objectives of the described NFRs. In order to examine non-engineering-focused studies related to non-functional requirements in DevOps, we conducted a backward snowballing step and additionally included 17 primary studies. Our analysis revealed 7 recurrent focus areas and 41 themes that characterize NFRs in DevOps, along with typical objectives for these themes. Overall, the focus areas and themes of NFRs in DevOps are very diverse and reflect the different perspectives required to align software engineering with technical quality, business, compliance, and organizational considerations. The lack of methodological support for specifying, measuring, and evaluating the fulfillment of these NFRs in DevOps-driven projects offers ample opportunities for future research in this field. In particular, there is a need for empirically validated approaches for operationalizing non-engineering-focused objectives of software.

Enterprise applications can be automatically generated from a sophisticated OO design model using a model-driven approach. The design model contains information about how to decompose the system into components, how to encapsulate the system operations into classes, and how the objects of those classes collaborate to fulfill the functionality of the system operations. However, the effort required to build the design model from a validated requirements model is not proportional to the return. In practice, it is very desirable to have an approach that can automatically generate standardized enterprise applications directly from validated requirements models. In this paper, we propose an approach named RM2EA, which reaches this goal based on a contract-based requirements model. We demonstrate the proposed approach through 8 case studies. The evaluation shows that the quality and efficiency of the generated applications are equal to or even better than those of applications implemented by developers: first, we demonstrate that a popular type of enterprise application (i.e., a Jakarta EE application) can be successfully generated by customizing and improving the set of rules; second, RM2EA generates more readable and maintainable code; third, the enterprise applications generated by RM2EA achieve better performance test results. Overall, the result is satisfactory, and the implementation of the proposed approach has the potential to be further enhanced and applied to software development in industry.
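To make "contract-based requirements model" concrete, here is a minimal sketch of a system operation specified as a pre/postcondition contract. The notation and names are illustrative assumptions, not RM2EA's actual input format or generation rules.

```python
from dataclasses import dataclass

@dataclass
class Contract:
    """A system operation as a requirements-level contract (illustrative;
    not RM2EA's concrete metamodel)."""
    operation: str
    precondition: str   # condition the caller must establish
    postcondition: str  # state change the operation guarantees

# From a validated set of such contracts, a rule-based generator can
# derive controller classes, entity classes, and their collaborations.
borrow_book = Contract(
    operation="borrowBook(userId, bookId)",
    precondition="book.copies > 0 and user.borrowed < user.limit",
    postcondition="book.copies -= 1 and user.borrowed += 1",
)
print(borrow_book)
```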

Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may yield attractive results given infinite samples and certain model assumptions, they are usually less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best score. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy, and our final output is the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows a flexible score function under the acyclicity constraint.
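The reward the abstract describes combines a predefined score with two penalty terms enforcing acyclicity. Below is a sketch using the trace-of-matrix-exponential acyclicity measure; the penalty weights, signs, and the stand-in score are illustrative assumptions, not the paper's exact values.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(adj: np.ndarray) -> float:
    """NOTEARS-style measure: zero iff the weighted graph is a DAG."""
    d = adj.shape[0]
    return float(np.trace(expm(adj * adj)) - d)

def reward(adj: np.ndarray, score: float, lam1=1.0, lam2=1.0) -> float:
    """Negative of (predefined score + two acyclicity penalties).

    `score` stands in for, e.g., a BIC score of the graph given the data;
    an indicator penalty fires whenever the graph is not yet a DAG, and a
    smooth penalty scales with how cyclic it is.
    """
    h = acyclicity(adj)
    return -(score + lam1 * float(h > 0) + lam2 * h)

# A 3-node chain (a DAG) incurs no penalty; adding a back-edge does.
chain = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
cyclic = chain + np.array([[0, 0, 0], [0, 0, 0], [1, 0, 0]], dtype=float)
print(reward(chain, score=10.0), reward(cyclic, score=10.0))
```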

Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, seeking explanations that address trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields, and use these past efforts to generate a set of explanation types that we believe reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system designers generate and prioritize requirements and further help produce explanations better aligned with users' and situational needs.

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
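As a schematic of the quantities the definition relates (not the paper's exact formula, which weighs tasks by generalization difficulty and value), intelligence as skill-acquisition efficiency can be caricatured as follows:

```latex
% Schematic only: a caricature of "skill-acquisition efficiency",
% not Chollet's full Algorithmic Information Theory formulation.
\[
  \mathrm{Intelligence}(S,\ \mathrm{scope}) \;\propto\;
  \operatorname*{avg}_{T \,\in\, \mathrm{scope}}
  \frac{\mathrm{SkillAchieved}(S, T)}
       {\mathrm{Priors}(S, T) + \mathrm{Experience}(S, T)}
\]
```

The point the fraction makes is the abstract's central argument: the same measured skill counts for less the more priors or training experience were spent to obtain it.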

Large pre-trained neural networks such as BERT have had great recent success in NLP, motivating a growing body of research investigating what aspects of language they are able to learn from unlabeled data. Most recent analysis has focused on model outputs (e.g., language model surprisal) or internal vector representations (e.g., probing classifiers). Complementary to these works, we propose methods for analyzing the attention mechanisms of pre-trained models and apply them to BERT. BERT's attention heads exhibit patterns such as attending to delimiter tokens, specific positional offsets, or broadly attending over the whole sentence, with heads in the same layer often exhibiting similar behaviors. We further show that certain attention heads correspond well to linguistic notions of syntax and coreference. For example, we find heads that attend to the direct objects of verbs, determiners of nouns, objects of prepositions, and coreferent mentions with remarkably high accuracy. Lastly, we propose an attention-based probing classifier and use it to further demonstrate that substantial syntactic information is captured in BERT's attention.
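Attention maps of the kind analyzed here can be extracted in a few lines; the sketch below assumes the HuggingFace Transformers library and PyTorch, which the abstract itself does not prescribe.

```python
import torch
from transformers import BertModel, BertTokenizer

# Request per-head attention weights alongside the usual outputs.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shaped
# (batch, num_heads, seq_len, seq_len); row i is how token i attends.
layer0 = outputs.attentions[0]
print(layer0.shape)               # e.g. torch.Size([1, 12, 9, 9])
cls_to_sep = layer0[0, :, 0, -1]  # each head's [CLS] -> [SEP] weight,
print(cls_to_sep)                 # one instance of delimiter attention
```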

Collecting training data from the physical world is usually time-consuming and even dangerous for fragile robots, and thus, recent advances in robot learning advocate the use of simulators as the training platform. Unfortunately, the reality gap between synthetic and real visual data prohibits direct migration of models trained in virtual worlds to the real world. This paper proposes a modular architecture for tackling the virtual-to-real problem. The proposed architecture separates the learning model into a perception module and a control policy module, and uses semantic image segmentation as the meta representation relating these two modules. The perception module translates the perceived RGB image to a semantic image segmentation. The control policy module is implemented as a deep reinforcement learning agent, which performs actions based on the translated image segmentation. Our architecture is evaluated in an obstacle avoidance task and a target following task. Experimental results show that our architecture significantly outperforms all of the baseline methods in both virtual and real environments, and demonstrates a faster learning curve than the baselines. We also present a detailed analysis of a variety of variant configurations and validate the transferability of our modular architecture.
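A minimal sketch of the modular pipeline follows. The placeholders stand in for the trained segmentation network and the RL policy; the sketch only illustrates how semantic segmentation decouples the two modules.

```python
import numpy as np

class PerceptionModule:
    """Stand-in for the RGB -> semantic segmentation network."""
    def segment(self, rgb: np.ndarray) -> np.ndarray:
        # Placeholder: the real module returns per-pixel class labels.
        return np.zeros(rgb.shape[:2], dtype=np.int64)

class ControlPolicy:
    """Stand-in for the deep RL agent acting on segmentation maps."""
    def act(self, seg: np.ndarray) -> int:
        return 0  # placeholder action index

def control_step(rgb, perception, policy):
    # Segmentation is the meta representation: the policy never sees raw
    # (domain-specific) pixels, which is what eases virtual-to-real transfer.
    seg = perception.segment(rgb)
    return policy.act(seg)

action = control_step(np.zeros((64, 64, 3), dtype=np.uint8),
                      PerceptionModule(), ControlPolicy())
print(action)
```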
