Humankind faces many existential threats, but has limited resources to mitigate them. Choosing how and when to deploy those resources is, therefore, a fateful decision. Here, I analyze the priority of allocating resources to mitigate the risk from superintelligences. Part I observes that a superintelligence unconnected to the outside world (de-efferented) carries no threat, and that any threat from a harmful superintelligence derives from the peripheral systems to which it is connected, e.g., nuclear weapons, biotechnology, etc. Because existentially threatening peripheral systems already exist and are controlled by humans, the initial effects of a superintelligence would merely add to the existing human-derived risk. This additive risk can be quantified and, under specific assumptions, is shown to decrease with the square of the number of humans having the capability to collapse civilization. Part II proposes that biotechnology ranks high in risk among peripheral systems because, by all indications, many humans already have the technological capability to engineer harmful microbes with pandemic spread. Progress in biomedicine and computing will proliferate this threat. ``Savant'' software that is not generally superintelligent will underpin much of this progress, thereby becoming the software responsible for the highest and most imminent existential risk -- ahead of the hypothetical risk from superintelligences. The analysis concludes that resources should be preferentially applied to mitigating the risk from peripheral systems and savant software. Concerns about superintelligence are at most secondary, and possibly superfluous.
The transportation sector faces growing problems: accidents, poor traffic flow, and pollution. Intelligent Transportation Systems (ITS) that use external infrastructure can tackle these problems. To the best of our knowledge, no current systematic review of the existing solutions exists. To fill this knowledge gap, this paper provides an overview of existing ITS that use external infrastructure. Furthermore, this paper identifies research questions that have not yet been adequately answered. To this end, we performed a literature review of documents describing existing ITS solutions from 2009 until today. We categorized the results according to their technology level and analyzed their properties. In this way, we made the various ITS comparable and highlighted past developments as well as current trends. Following this method, we analyzed more than 346 papers, which include 40 test-bed projects. In summary, current ITS can deliver highly accurate information about individual road users in traffic situations in real time. However, further research in ITS should focus on more reliable perception of traffic using modern sensors, plug-and-play mechanisms, and secure, decentralized real-time distribution of large amounts of data. By addressing these topics, the development of Intelligent Transportation Systems will be on the right track toward a comprehensive roll-out.
Test flakiness is a major testing concern. Flaky tests manifest non-deterministic outcomes that cripple continuous integration and lead developers to investigate false alerts. Industrial reports indicate that, at scale, the accrual of flaky tests erodes trust in test suites and entails significant computational cost. To alleviate this, practitioners must identify flaky tests and investigate their impact. To shed light on such mitigation mechanisms, we interview 14 practitioners with the aim of identifying (i) the sources of flakiness within the testing ecosystem, (ii) the impacts of flakiness, (iii) the measures adopted by practitioners when addressing flakiness, and (iv) the automation opportunities for these measures. Our analysis shows that, besides the tests and code, flakiness stems from interactions between the system components, the testing infrastructure, and external factors. We also highlight the impact of flakiness on testing practices and product quality and show that the adoption of guidelines together with a stable infrastructure are key measures in mitigating the problem.
Intelligent Personal Assistants (IPAs) like Amazon Alexa, Apple Siri, and Google Assistant are increasingly becoming a part of our everyday lives. As IPAs become ubiquitous and their applications expand, users turn to them not just for routine tasks, but also for intelligent conversation. In this study, we measure the emotional intelligence (EI) displayed by IPAs in the English and Hindi languages; to our knowledge, this is a pioneering effort in probing the emotional intelligence of IPAs in Indian languages. We pose utterances that convey the Sadness or Humor emotion and evaluate the IPA responses. We build on previous research to propose a quantitative and qualitative evaluation scheme encompassing new criteria from social science perspectives (display of empathy, wit, understanding) and IPA-specific features (voice modulation, search redirects). We find that the EI displayed by Google Assistant in Hindi is comparable to the EI displayed in English, with the assistant employing both voice modulation and emojis in text. However, we also find that IPAs are unable to understand and respond intelligently to all queries, sometimes even offering counter-productive and problematic responses. Our experiment offers evidence and directions to augment the potential for EI in IPAs.
Policymakers face the broader challenge of how to view AI capabilities today and where society stands in terms of those capabilities. This paper surveys AI capabilities and tackles this very issue, exploring it in the context of political security in digital societies. We introduce a Matrix of Machine Influence to frame and navigate the adversarial applications of AI, and further extend the ideas of Information Management to better understand the deployment of contemporary AI systems as part of a complex information system. Providing a comprehensive review of man-machine interactions in our networked society and political systems, we suggest that better regulation and management of information systems can more optimally offset the risks of AI and utilise the emerging capabilities which these systems offer to policymakers and political institutions across the world. We hope this long essay will stimulate further debate and discussion of these ideas, and prove to be a useful contribution towards governing the future of AI.
"What is a system?" is one of those questions that remains unclear to most people. A system is an assemblage of interacting, interrelated, and interdependent components forming a complex and integrated whole with an unambiguous and common goal. This paper emphasizes that all components of a complex system are interrelated and interdependent in some way, and that the behavior of the system depends on these interdependencies. The health care system, as portrayed in this article, is widespread and complex. It encompasses not only hospitals but also governing bodies like the FDA, and technologies such as AI, biomedical devices, cloud computing, and many more. The interactions between all these components govern the behavior and existence of the overall healthcare system. In this paper, we focus on the interaction of artificial intelligence, care providers, and policymakers, and analyze, using a systems thinking approach, their impact on clinical decision making.
Meta-learning, or learning to learn, has gained renewed interest in recent years within the artificial intelligence community. However, meta-learning is highly prevalent in nature, has deep roots in cognitive science and psychology, and is currently studied in various forms within neuroscience. The aim of this review is to recast previous lines of research in the study of biological intelligence through the lens of meta-learning, placing these works into a common framework. More recent points of interaction between AI and neuroscience will be discussed, as well as interesting new directions that arise under this perspective.
Edge intelligence refers to a set of connected systems and devices that perform data collection, caching, processing, and analysis based on artificial intelligence, in locations close to where the data are captured. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although it emerged only recently, spanning the period from 2011 to now, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
There is a resurgent interest in developing intelligent open-domain dialog systems due to the availability of large amounts of conversational data and the recent progress in neural approaches to conversational AI. Unlike traditional task-oriented bots, an open-domain dialog system aims to establish long-term connections with users by satisfying the human need for communication, affection, and social belonging. This paper reviews recent work on neural approaches devoted to addressing three challenges in developing such systems: semantics, consistency, and interactiveness. Semantics requires a dialog system to not only understand the content of the dialog but also identify the user's social needs during the conversation. Consistency requires the system to demonstrate a consistent personality to win users' trust and gain their long-term confidence. Interactiveness refers to the system's ability to generate interpersonal responses to achieve particular social goals such as entertainment, conforming, and task completion. The works we select to present here are based on our unique views and are by no means complete. Nevertheless, we hope that the discussion will inspire new research in developing more intelligent dialog systems.
Privacy is a major good for users of personalized services such as recommender systems. When applied to the field of health informatics, users' privacy concerns may be amplified, but the potential utility of such services is also high. Despite the availability of technologies such as k-anonymity, differential privacy, privacy-aware recommendation, and personalized privacy trade-offs, little research has been conducted on users' willingness to share health data for use in such systems. In two conjoint-decision studies (sample size n=521), we investigate the importance and utility of privacy-preserving techniques (k-anonymity and differential privacy) related to the sharing of personal health data. Users were asked to pick a preferred sharing scenario depending on the recipient of the data, the benefit of sharing the data, the type of data, and the parameterized privacy level. Users disagreed with sharing data for commercial purposes regarding mental illnesses and with high de-anonymization risks, but showed little concern when data is used for scientific purposes and relates to physical illnesses. Suggestions for the development of health recommender systems are derived from these findings.