Context: A tertiary study can be performed to identify related reviews on a topic of interest. However, elaborating an appropriate and effective search string to detect secondary studies is challenging for Software Engineering (SE) researchers. Objective: The main goal of this study is to propose a suitable search string to detect secondary studies in SE, addressing issues such as the number of applied terms, relevance, recall, and precision. Method: We analyzed seven tertiary studies from two perspectives: (1) structure -- the strings' terms used to detect secondary studies; and (2) search fields -- where the search is applied, e.g., titles alone, abstracts alone, or titles and abstracts together. We validated our string through a two-step process. First, we evaluated its capability to retrieve secondary studies over a set of 1537 secondary studies included in 24 tertiary studies in SE. Second, we evaluated its general capacity to retrieve secondary studies through an automated search on the Scopus digital library. Results: Our string retrieved over 90% of the included secondary studies (recall) with a high overall precision of almost 60%. Conclusion: A suitable search string for finding secondary studies in SE contains the terms "systematic review", "literature review", "systematic mapping", "mapping study" and "systematic map".
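As an illustration, the reported terms could be assembled into a title/abstract query as follows; the Scopus field code and the grouping shown here are assumptions for the sketch, not necessarily the authors' exact published string:

```python
# Hypothetical assembly of the secondary-study search string reported above.
TERMS = [
    "systematic review",
    "literature review",
    "systematic mapping",
    "mapping study",
    "systematic map",
]

# Scopus-style query searching titles and abstracts together (field code assumed).
query = "TITLE-ABS(" + " OR ".join(f'"{t}"' for t in TERMS) + ")"
print(query)
# TITLE-ABS("systematic review" OR "literature review" OR ...)
```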
The Finite State Machine (FSM) is a popular modeling notation for various systems, especially software and electronic ones. Test paths can be automatically generated from the system model by a suitable algorithm to test such systems. This paper presents a strategy that generates test paths and allows them to start and end only in defined states of the finite state machine. The strategy also simultaneously supports generating test paths whose length lies within a given range. For this purpose, alternative system models, test coverage criteria, and a set of algorithms are developed. The strategy is compared with the best alternative, based on the reduction of the test set generated by the established N-switch coverage approach, on a mix of 171 industrial and artificially generated problem instances. The proposed strategy outperforms the compared variant with a smaller number of test path steps; the extent varies from no difference to a two-and-a-half-fold difference, depending on the test coverage criterion used and the preferred test path length range. Moreover, the proposed technique detected up to 30% more simple artificial defects inserted into experimental SUT models per test step than the compared alternative technique. The proposed strategy is well applicable in situations where the possible start and end states of test paths need to be reflected and, concurrently, the length of the test paths has to be within a defined range.
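A minimal sketch of the constraint the strategy enforces, assuming a simple breadth-first enumeration (the paper's actual models, coverage criteria, and algorithms are more elaborate); the FSM, the designated start/end states, and the [lo, hi] bounds below are illustrative:

```python
# Enumerate test paths in an FSM that must start and end in designated states
# and whose number of transitions lies in [lo, hi]. Illustrative only.
from collections import deque

def test_paths(transitions, starts, ends, lo, hi):
    """transitions: dict mapping a state to its successor states (FSM edges).
    Returns paths (state sequences) whose transition count is in [lo, hi]."""
    paths = []
    for s in starts:
        queue = deque([[s]])
        while queue:
            path = queue.popleft()
            steps = len(path) - 1
            if steps > hi:          # too long: prune this branch
                continue
            if lo <= steps <= hi and path[-1] in ends:
                paths.append(path)  # valid start state, end state, and length
            for nxt in transitions.get(path[-1], []):
                queue.append(path + [nxt])
    return paths

fsm = {"A": ["B"], "B": ["C", "A"], "C": []}
print(test_paths(fsm, starts={"A"}, ends={"C"}, lo=2, hi=4))
# [['A', 'B', 'C'], ['A', 'B', 'A', 'B', 'C']]
```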
General purpose optimization techniques can be used to solve many problems in engineering computations, although their cost is often prohibitive when the number of degrees of freedom is very large. We describe a multilevel approach to speed up the computation of the solution of a large-scale optimization problem by a given optimization technique. By embedding the problem within Harten's Multiresolution Framework (MRF), we set up a procedure that leads to the desired solution after the computation of a finite sequence of sub-optimal solutions, each of which solves an auxiliary optimization problem involving a smaller number of variables. For convex optimization problems with smooth solutions, we prove that the distance between the optimal solution and each sub-optimal approximation is related to the accuracy of the interpolation technique used within the MRF, and we analyze its relation to the performance of the proposed algorithm. Several numerical experiments confirm that our technique provides a computationally efficient strategy that allows the end user to treat both the optimizer and the objective function as black boxes throughout the optimization process.
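A minimal sketch of the multilevel idea under simplifying assumptions (1-D variables on nested dyadic grids, linear interpolation standing in for the MRF prediction operator, SciPy's L-BFGS-B as the black-box optimizer); this is an illustration of coarse-to-fine warm starting, not the paper's exact procedure:

```python
# Coarse-to-fine optimization: solve on a small grid, interpolate the result
# to the next finer grid as a warm start, and repeat up to the finest level.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Black-box objective: a smooth convex example for demonstration.
    return np.sum((x - np.sin(np.linspace(0, np.pi, x.size))) ** 2)

def prolong(x_coarse, n_fine):
    # Interpolate a coarse-level solution onto a finer grid.
    t_c = np.linspace(0, 1, x_coarse.size)
    t_f = np.linspace(0, 1, n_fine)
    return np.interp(t_f, t_c, x_coarse)

def multilevel_minimize(f, n_finest=129, n_coarsest=9):
    sizes, n = [], n_finest
    while n >= n_coarsest:          # nested dyadic levels: 129, 65, 33, 17, 9
        sizes.append(n)
        n = (n - 1) // 2 + 1
    x = np.zeros(sizes[-1])         # start on the coarsest level
    for n in reversed(sizes):
        x = prolong(x, n)           # warm start from the previous level
        x = minimize(f, x, method="L-BFGS-B").x
    return x

x_star = multilevel_minimize(objective)
```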
Manufacturing companies face challenges when it comes to quickly adapting their production control to fluctuating demands or changing requirements. Control approaches that encapsulate production functions as services have been shown to be promising for increasing the flexibility of Cyber-Physical Production Systems. However, a remaining challenge of such approaches is finding a production plan based on the provided functionalities for a demanded product, especially when there is no direct (i.e., syntactic) match between demanded and provided functions. While there is a variety of approaches to production planning, flexible production poses specific requirements that are not covered by existing research. In this contribution, we first capture these requirements for flexible production environments. Afterwards, we give an overview of current Artificial Intelligence approaches that can be utilized to overcome the aforementioned challenges. For this purpose, we focus on planning algorithms, but also consider models of production systems that can act as inputs to these algorithms. Approaches from symbolic AI planning as well as approaches based on Machine Learning are discussed and eventually compared against the requirements. Based on this comparison, a research agenda is derived.
This paper discusses a new approach to the fundamental problem of learning optimal Q-functions. In this approach, optimal Q-functions are formulated as saddle points of a nonlinear Lagrangian function derived from the classic Bellman optimality equation. The paper shows that the Lagrangian enjoys strong duality in spite of its nonlinearity, which paves the way to a general Lagrangian method for Q-function learning. As a demonstration, the paper develops an imitation learning algorithm based on the duality theory and applies it to a state-of-the-art machine translation benchmark. The paper then demonstrates a symmetry-breaking phenomenon regarding the optimality of the Lagrangian saddle points, which justifies a largely overlooked direction in developing the Lagrangian method.
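As a hedged illustration (not necessarily the paper's exact construction), the classic Bellman optimality equation and one possible Lagrangian built from it can be written as follows, where $\mu$ is an assumed fixed positive weighting over state-action pairs and $\lambda$ collects the multipliers:

```latex
% Illustrative sketch only: the Bellman optimality equation and one possible
% Lagrangian relaxation of it; the paper's exact form may differ.
\begin{align*}
  Q^*(s,a) &= r(s,a)
    + \gamma\, \mathbb{E}_{s' \sim P(\cdot\mid s,a)}\Big[\max_{a'} Q^*(s',a')\Big],\\
  L(Q,\lambda) &= \sum_{s,a} \mu(s,a)\, Q(s,a)
    + \sum_{s,a} \lambda(s,a)\Big(r(s,a)
    + \gamma\, \mathbb{E}_{s'}\big[\max_{a'} Q(s',a')\big] - Q(s,a)\Big),
    \qquad \lambda \ge 0.
\end{align*}
```

Under this reading, $Q^*$ is sought as a saddle point $\min_Q \max_{\lambda \ge 0} L(Q,\lambda)$; the $\max_{a'}$ inside the expectation is what makes the Lagrangian nonlinear in $Q$.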
Citations in science are studied from several perspectives. On the one hand, approaches such as scientometrics and the science of science take a more quantitative perspective. In this chapter I briefly review some of the literature on citations, citation distributions, and models of citations. On the other hand, these citations feature prominently in the literature that deals with research evaluation and the role of metrics and indicators in that process. Here I briefly review part of the discussion in research evaluation, which also touches on how citations relate to peer review. Finally, I try to integrate the two literatures with the aim of clarifying what I believe each can learn from the other. The fundamental problem in research evaluation is that research quality is unobservable. This has consequences for the conclusions we can draw from quantitative studies of citations and citation models. The term "indicators" is a relevant concept in this context, which I try to clarify. Causality is important for properly understanding indicators, especially when they are used in practice: when we act on indicators, we enter causal territory. Even when an indicator might have been valid, through its very use, the consequences of its use may invalidate it. By combining citation models with proper causal reasoning and acknowledging the fundamental problem of unobservable research quality, we may hope to make progress.
Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage long-tail risks from AI systems, including speculative long-term risks. While keeping in mind the potential benefits of AI, there is some concern that building ever more intelligent and powerful AI systems could eventually result in systems that are more powerful than us; some say this is like playing with fire and speculate that it could create existential risks (x-risks). To add precision and ground these discussions, we provide a guide for how to analyze AI x-risk, which consists of three parts. First, we review how systems can be made safer today, drawing on time-tested concepts from hazard analysis and systems safety that have been designed to steer large processes in safer directions. Next, we discuss strategies for having long-term impacts on the safety of future systems. Finally, we discuss a crucial concept for making AI systems safer: improving the balance between safety and general capabilities. We hope this document and the presented concepts and tools serve as a useful guide for understanding how to analyze AI x-risk.
Existing adversarial example research focuses on perturbations digitally inserted on top of existing natural image datasets. This construction of adversarial examples is not realistic, because it may be difficult, or even impossible, for an attacker to deploy such an attack in the real world due to sensing and environmental effects. To better understand adversarial examples against cyber-physical systems, we propose approximating the real world through simulation. In this paper we describe our synthetic dataset generation tool, which enables scalable collection of a synthetic dataset with realistic adversarial examples. We use the CARLA simulator to collect such a dataset and demonstrate simulated attacks that undergo the same environmental transforms and processing as real-world images. Our tools have been used to collect datasets that help evaluate the efficacy of adversarial examples, and can be found at https://github.com/carla-simulator/carla/pull/4992.
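For orientation, a minimal data-collection loop with CARLA's standard Python API might look as follows; this sketch assumes a running CARLA server on localhost:2000 and is not the authors' actual tool:

```python
# Spawn a vehicle with an attached RGB camera and save frames to disk. Any
# perturbation placed in the simulated scene passes through the same
# rendering and sensing pipeline as the rest of the image.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

bp_lib = world.get_blueprint_library()
vehicle_bp = bp_lib.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)

camera_bp = bp_lib.find("sensor.camera.rgb")
camera = world.spawn_actor(
    camera_bp,
    carla.Transform(carla.Location(x=1.5, z=2.4)),  # mounted above the hood
    attach_to=vehicle,
)
camera.listen(lambda image: image.save_to_disk(f"out/{image.frame:06d}.png"))
```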
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their applications in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that are able to lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand in order to enable numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems and applications. We start by introducing popular model compression methods, including pruning, factorization, and quantization, as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video, and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce efficient deep learning system design from both software and hardware perspectives.
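As a concrete example of one compression method mentioned above, here is a minimal sketch of global magnitude pruning using PyTorch's built-in pruning utilities; the toy model and the 60% sparsity level are illustrative choices:

```python
# Global magnitude pruning: zero out the smallest-magnitude weights across
# all linear layers of a small example network.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Collect (module, parameter-name) pairs and prune 60% of weights globally.
params = [(m, "weight") for m in model if isinstance(m, nn.Linear)]
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.6)

# Verify the resulting sparsity across the pruned layers.
zeros = sum(int((m.weight == 0).sum()) for m, _ in params)
total = sum(m.weight.numel() for m, _ in params)
print(f"sparsity: {zeros / total:.1%}")  # ~60.0%
```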
We describe ACE0, a lightweight platform for evaluating the suitability and viability of AI methods for behaviour discovery in multi-agent simulations. Specifically, ACE0 was designed to explore AI methods for multi-agent simulations used in operations research studies related to new technologies such as autonomous aircraft. Simulation environments used in production are often high-fidelity and complex, require significant domain knowledge, and as a result have high R&D costs. Minimal and lightweight simulation environments can help researchers and engineers evaluate the viability of new AI technologies for behaviour discovery in a more agile and potentially more cost-effective manner. In this paper we describe the motivation for the development of ACE0. We provide a technical overview of the system architecture, describe a case study of behaviour discovery in the aerospace domain, and provide a qualitative evaluation of the system. The evaluation includes a brief description of collaborative research projects with academic partners exploring different AI behaviour discovery methods.
Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years. Its development in the past two decades can be regarded as an epitome of computer vision history. If we think of today's object detection as a technical aesthetics under the power of deep learning, then turning back the clock 20 years we would witness the wisdom of the cold-weapon era. This paper extensively reviews 400+ papers on object detection in the light of its technical evolution, spanning over a quarter-century (from the 1990s to 2019). A number of topics are covered in this paper, including the milestone detectors in history, detection datasets, metrics, fundamental building blocks of the detection system, speed-up techniques, and the recent state-of-the-art detection methods. This paper also reviews some important detection applications, such as pedestrian detection, face detection, and text detection, and makes an in-depth analysis of their challenges as well as technical improvements in recent years.
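For readers new to the area, two of the fundamental building blocks such surveys cover can be sketched in a few lines: intersection-over-union (IoU) as the standard matching metric, and greedy non-maximum suppression (NMS) for removing duplicate detections; the 0.5 threshold below is a common illustrative default:

```python
# IoU and greedy NMS over axis-aligned boxes in (x1, y1, x2, y2) format.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop overlapping ones, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        order = order[1:][[iou(boxes[i], boxes[j]) < thresh for j in order[1:]]]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # -> [0, 2]
```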