As increasingly complex software permeates critical domains such as autonomous driving, the way these systems are engineered needs to be rethought. Autonomous driving is expected to gradually take over all critical driving functions, which adds to the complexity of certifying autonomous driving systems. In response, certification authorities have already started introducing strategies for the certification of autonomous vehicles and their software. Even with these new approaches, however, certification procedures do not fully keep pace with the dynamism and unpredictability of future autonomous systems, and thus may not guarantee compliance with all the requirements imposed on such systems. In this paper, we identify a number of issues with the proposed certification strategies that may have a substantial impact on the certified systems. For instance, we highlight the lack of adequate treatment of software changes in constantly evolving systems and the limited support for the cooperation among systems needed to manage coordinated maneuvers. Other shortcomings concern the narrow focus of the awarded certification, which neglects aspects such as the ethical behavior of autonomous software systems. The contribution of this paper is threefold. First, we discuss the motivation for modifying the current certification processes for autonomous driving systems. Second, we analyze how current international standards used in certification processes address requirements derived from dynamic software ecosystems and from autonomous systems themselves. Third, we outline a concept for incorporating the missing aspects into the certification procedure.
Turing's 1950 paper introduced the famed "imitation game", a test originally proposed to capture the notion of machine intelligence. Over the years, the Turing test has attracted considerable interest, resulting in several variants as well as heated discussion and controversy. Here we sidestep the question of whether a particular machine can be labeled intelligent, or can be said to match human capabilities in a given context. Instead, inspired by Turing, we draw attention to the seemingly simpler challenge of determining whether one is interacting with a human or with a machine in the context of everyday life. We are interested in reflecting upon the importance of this Human-or-Machine question and the use one may make of a reliable answer to it. Whereas Turing's original test is widely considered to be more of a thought experiment, the Human-or-Machine question as discussed here has obvious practical significance. And while the jury is still out on whether machines can mimic human behavior with high fidelity in everyday contexts, we argue that near-term exploration of the issues raised here can contribute to development methods for computerized systems and may also improve our understanding of human behavior in general.
The emergence of new applications brings multi-class traffic with diverse quality of service (QoS) demands to wide area networks (WANs), which motivates research in traffic engineering (TE). In recent years, novel centralized TE schemes have employed heuristic or machine-learning techniques to orchestrate resources in closed systems, such as datacenter networks. However, these schemes suffer from long delivery delays and high control overhead when applied to general WANs. Semi-centralized TE schemes have been proposed to address these drawbacks, providing lower delay and control overhead, yet they still experience performance degradation when dealing with volatile traffic. To provide low-delay service while maintaining high network utility, we propose an asynchronous multi-class traffic management scheme, AMTM. We first establish an asynchronous TE paradigm in which distributed nodes make instant traffic-control decisions based on link prices. To manage varying traffic and control delivery time, we propose state-based iteration strategies for link pricing under different scenarios and investigate their convergence. Furthermore, we present a system design and corresponding algorithms. Simulation results indicate that AMTM outperforms existing schemes in terms of both delay reduction and scalability. In addition, AMTM achieves 12-20$\%$ higher network utility than the semi-centralized scheme and comes within 2-7$\%$ of the theoretical optimum.
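To make the idea of distributed, price-based traffic control concrete, the following is a minimal sketch of a textbook dual-decomposition link-pricing iteration, not the AMTM algorithm itself: each source picks the rate that maximizes its utility minus the price of its path, and each link adjusts its price from its local load only. The topology, step size, capacities, and log-utility choice are illustrative assumptions.

```python
import numpy as np

# Illustrative dual-decomposition link-pricing loop (a classic network
# utility maximization scheme, not AMTM). Sources maximize log-utility
# minus path price; links update prices from their own load, so control
# decisions are made in a fully distributed way.

routes = np.array([[1, 0],      # source 0 uses link 0
                   [1, 1],      # source 1 uses links 0 and 1
                   [0, 1]])     # source 2 uses link 1
capacity = np.array([10.0, 8.0])   # assumed link capacities
prices = np.ones(2)                 # link prices (dual variables)
step = 0.01                         # price update step size

for _ in range(2000):
    path_price = routes @ prices
    # Optimal rate for U(x) = log(x) is x = 1 / (path price); clipped for safety
    rates = np.clip(1.0 / np.maximum(path_price, 1e-3), 0.0, 20.0)
    load = routes.T @ rates
    # Projected gradient ascent on the dual: raise the price of overloaded links
    prices = np.maximum(prices + step * (load - capacity), 0.0)

print("rates:", rates.round(2), "link prices:", prices.round(3))
```

In such schemes the price update needs no global coordination, which is the property that keeps control overhead low; the open question the abstract addresses is how to keep this responsive when traffic is volatile and updates arrive asynchronously.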
Empathy is widely studied in many disciplines, such as philosophy, sociology, psychology, and health care. The ability to empathise with software end-users appears to be a vital skill for software developers, because engineering successful software systems involves not only interacting effectively with users but also understanding their true needs. Empathy has the potential to support both. It is a predominant human aspect that can be used to comprehend the decisions, feelings, emotions, and actions of users. However, to date empathy has been under-researched in the software engineering (SE) context. In this position paper, we present our exploration of key empathy models from different disciplines and our analysis of their adequacy for application in SE. While there is no evidence of empathy models that are readily applicable to SE, we believe these models can be adapted and applied in the SE context to assist software engineers in increasing their empathy for diverse end-user needs. We present a preliminary taxonomy of empathy derived from careful consideration of the most popular empathy models from different disciplines. We encourage future research on empathy in SE, as we believe it is an important human aspect that can significantly influence the relationship between developers and end-users.
Surveillance capitalism is a concept that describes the practice of collecting and analyzing massive amounts of user data for the purpose of targeted advertising and other forms of monetization. The phenomenon has become increasingly prevalent in recent years, with tech companies like Google and Facebook using users' personal information to deliver personalized content and advertisements. Another example of surveillance capitalism is the use of military technology to collect and analyze data for national security purposes. In this context, surveillance capitalism involves the use of technologies such as facial recognition and social media monitoring to gather information on individuals and groups deemed potential threats to national security; this information is then used to inform military operations and decision-making. This paper critically analyzes the phenomenon of surveillance capitalism from two ethical perspectives: utilitarianism, a consequentialist theory that judges actions by their ability to bring about the greatest amount of happiness or pleasure for the greatest number of people, and Kantian deontology, a non-consequentialist theory that emphasizes the importance of individual autonomy, freedom, and dignity. On one side, the utilitarian framework highlights how Information Technology (IT) and the features it provides offer, at first sight, predominantly positive perceptions to the majority of people: happiness, entertainment, and pleasure. On the other side, the Kantian deontological framework focuses mostly on the freedom and free will of the individual. This aspect is particularly relevant to the granting of permissions to access data in exchange for services and to the degree of influence that manipulation performed under surveillance capitalism can exert.
As Autonomous Systems (AS) become more ubiquitous in society, more responsible for our safety, and our interactions with them more frequent, it is essential that they are trustworthy. Assessing the trustworthiness of AS is a mandatory challenge for the verification and development community. This will require appropriate standards and suitable metrics that can serve to objectively and comparatively judge the trustworthiness of AS across the broad range of current and future applications. The meta-expression `trustworthiness' is examined in the context of AS, capturing the relevant qualities that comprise this term in the literature. Recent developments in standards and frameworks that support assurance of autonomous systems are reviewed. A list of key challenges is identified for the community, and we present an outline of a process that can be used as a trustworthiness assessment framework for AS.
Peter Andrews proposed, in 1971, the problem of finding an analog of the Skolem theorem for Simple Type Theory. A first idea led to a naive rule that works only for Simple Type Theory with the axiom of choice, and the general case was only solved, more than ten years later, by Dale Miller. More recently, together with Th{\'e}r{\`e}se Hardin and Claude Kirchner, we have proposed a new way to prove analogs of Miller's theorem for different, but equivalent, formulations of Simple Type Theory. In this paper, which does not contain new technical results, I try to show that the history of the skolemization problem and of its various solutions is an illustration of a tension between two points of view on Simple Type Theory: the logical and the theoretical points of view.
We provide a psychometrically grounded exposition of bias and fairness as applied to a typical machine learning pipeline for affective computing. We build on an interpersonal communication framework to elucidate how to identify sources of bias that may arise in the process of inferring human emotions and other psychological constructs from observed behavior. Various methods and metrics for measuring fairness and bias are discussed, along with their implications within the United States legal context. We illustrate how to measure some types of bias and fairness in a case study involving automatic personality and hireability inference from multimodal data collected in video interviews for mock job applications. We encourage affective computing researchers and practitioners to incorporate bias and fairness considerations into their research processes and products, and to consider their role, agency, and responsibility in promoting equitable and just systems.
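As a concrete illustration of the kind of group-fairness measurement discussed above, the sketch below computes two common quantities on synthetic hire/no-hire decisions: the adverse impact ratio (the "four-fifths rule" often referenced in the US hiring context) and the demographic parity difference. The data, group labels, and variable names are purely hypothetical and not drawn from the paper's case study.

```python
import numpy as np

# Illustrative sketch (synthetic data, hypothetical names) of two group-fairness
# measures for binary hiring decisions: the adverse impact ratio and the
# demographic parity difference, both computed from group-wise selection rates.

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                       # two demographic groups
hired = rng.binomial(1, np.where(group == 0, 0.35, 0.25))   # model decisions

rate_g0 = hired[group == 0].mean()   # selection rate for group 0
rate_g1 = hired[group == 1].mean()   # selection rate for group 1

adverse_impact_ratio = min(rate_g0, rate_g1) / max(rate_g0, rate_g1)
demographic_parity_diff = abs(rate_g0 - rate_g1)

print(f"selection rates: {rate_g0:.2f} vs {rate_g1:.2f}")
print(f"adverse impact ratio: {adverse_impact_ratio:.2f}  (< 0.80 commonly flags concern)")
print(f"demographic parity difference: {demographic_parity_diff:.2f}")
```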
Decision-making algorithms are being used in important decisions, such as who should be enrolled in health care programs and who should be hired. Even though these systems are currently deployed in high-stakes scenarios, many of them cannot explain their decisions. This limitation has prompted the Explainable Artificial Intelligence (XAI) initiative, which aims to make algorithms explainable in order to comply with legal requirements, promote trust, and maintain accountability. This paper questions whether, and to what extent, explainability can help solve the responsibility issues posed by autonomous AI systems. We suggest that XAI systems providing post-hoc explanations could come to be seen as blameworthy agents, obscuring the responsibility of developers in the decision-making process. Furthermore, we argue that XAI could result in incorrect attributions of responsibility to vulnerable stakeholders, such as those who are subjected to algorithmic decisions (e.g., patients), due to a misguided perception that they have control over explainable algorithms. This conflict between explainability and accountability can be exacerbated if designers choose to use algorithms and patients as moral and legal scapegoats. We conclude with a set of recommendations for how to approach this tension in the socio-technical process of algorithmic decision-making and a defense of hard regulation to prevent designers from escaping responsibility.
The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, remains a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches and, ultimately, to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.
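One common way the integration and conformity ideas are realized in practice is to add a penalty to the training loss whenever predictions violate a known domain constraint. The sketch below illustrates this on a toy regressor for braking distance, with the prior knowledge that distance is non-negative and grows with speed; the model, data, constraint, and penalty weight are all illustrative assumptions, not a method from the surveyed literature.

```python
import torch

# Minimal sketch of knowledge conformity as a soft constraint: a toy braking
# distance regressor is fit to synthetic data, while penalty terms discourage
# predictions that are negative or that decrease as speed increases.

torch.manual_seed(0)
speed = torch.rand(256, 1) * 30.0                        # m/s, synthetic
distance = 0.05 * speed**2 + torch.randn(256, 1) * 0.5   # noisy ground truth

model = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(2000):
    pred = model(speed)
    data_loss = torch.mean((pred - distance) ** 2)

    # Knowledge terms, evaluated on a grid covering the operating range:
    # non-negativity and monotonicity of distance with respect to speed.
    grid = torch.linspace(0.0, 30.0, 64).unsqueeze(1)
    grid_pred = model(grid)
    nonneg_penalty = torch.relu(-grid_pred).mean()
    monotone_penalty = torch.relu(grid_pred[:-1] - grid_pred[1:]).mean()

    loss = data_loss + 1.0 * (nonneg_penalty + monotone_penalty)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final data loss: {data_loss.item():.3f}")
```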
Autonomous driving has achieved significant milestones in research and development over the last decade. There is increasing interest in the field, as the deployment of self-operating vehicles on roads promises safer and more ecologically friendly transportation systems. With the rise of computationally powerful artificial intelligence (AI) techniques, autonomous vehicles can sense their environment with high precision, make safe real-time decisions, and operate more reliably without human intervention. However, intelligent decision-making in autonomous cars is not generally understandable by humans in the current state of the art, and this deficiency hinders the technology from being socially acceptable. Hence, aside from making safe real-time decisions, the AI systems of autonomous vehicles also need to explain how these decisions are constructed in order to be regulatory compliant across many jurisdictions. Our study sheds comprehensive light on developing explainable artificial intelligence (XAI) approaches for autonomous vehicles. In particular, we make the following contributions. First, we provide a thorough overview of the present gaps with respect to explanations in the state-of-the-art autonomous vehicle industry. Second, we present a taxonomy of explanations and explanation receivers in this field. Third, we propose a framework for an architecture of end-to-end autonomous driving systems and justify the role of XAI in both debugging and regulating such systems. Finally, as future research directions, we provide a field guide on XAI approaches for autonomous driving that can improve operational safety and transparency towards achieving public approval by regulators, manufacturers, and all engaged stakeholders.