This paper presents SimAEN, an agent-based simulation whose purpose is to assist public health officials in understanding and controlling Automated Exposure Notification (AEN). SimAEN models a population of interacting individuals, or 'agents', in which COVID-19 is spreading. These individuals interact with a public health system that includes AEN and Manual Contact Tracing (MCT). These interactions influence when individuals enter and leave quarantine, affecting the spread of the simulated disease. Over 70 user-configurable parameters influence the outcome of SimAEN's simulations. These parameters allow the user to tailor SimAEN to a specific public health jurisdiction and to test the effects of various interventions, including different sensitivity settings of AEN.
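To make the mechanism concrete, the sketch below shows the kind of per-step agent loop such a simulation runs. It is a heavily simplified, hypothetical illustration: the parameter names (p_transmit, p_aen_detect, p_quarantine_comply) are invented for this example and stand in for a few of SimAEN's 70-plus configurable parameters.

```python
import random

def simulation_step(agents, p_transmit=0.05, p_aen_detect=0.6,
                    p_quarantine_comply=0.8):
    """One step of a toy agent-based AEN model (illustrative only)."""
    for agent in agents:
        if not agent["infected"] or agent["quarantined"]:
            continue
        # Each infectious, non-quarantined agent meets a few random contacts.
        contacts = random.sample(agents, k=min(5, len(agents)))
        for contact in contacts:
            if random.random() < p_transmit:
                contact["infected"] = True
            # AEN may flag the exposure; a flagged contact may then comply
            # with the notification and enter quarantine.
            if random.random() < p_aen_detect and \
               random.random() < p_quarantine_comply:
                contact["quarantined"] = True
```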
Passengers in (and drivers of) level 3-5 autonomous personal mobility vehicles (APMVs) and cars can perform non-driving tasks, such as reading books and using smartphones, while the vehicle is in motion. It has been pointed out that such activities may increase motion sickness. Many studies have been conducted to develop countermeasures, among which various computational motion sickness models have been proposed. Many of these are based on subjective vertical conflict (SVC) theory, which attributes motion sickness to the conflict between the vertical direction sensed by the human sensory organs and that expected by the central nervous system. Such models are expected to be applicable to autonomous driving scenarios. However, no current computational model can integrate visual vertical (VV) information with vestibular sensations. We propose a 6 DoF SVC-VV model, which adds a visually perceived vertical block to a conventional six-degrees-of-freedom SVC model in order to predict VV directions from image data simulating the visual input of a human; to this end, a simple image-based VV estimation method is proposed. To validate the proposed model, this paper focuses on describing the fact that motion sickness increases when a passenger reads a book while riding an APMV, assuming that the VV plays an important role. In a static experiment, we demonstrate that the VV estimated by the proposed method accurately describes the direction of gravitational acceleration, with a low mean absolute deviation. In addition, the results of a driving experiment using an APMV demonstrate that the proposed 6 DoF SVC-VV model can describe the increased motion sickness experienced when the VV and gravitational acceleration directions differ.
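The paper's actual VV estimator is not reproduced here, but the following minimal sketch illustrates one simple way an image-based vertical estimate can be obtained. It rests on a hypothetical assumption: that the dominant edge structure in a typical scene (door frames, shelves, lines of text) is aligned with gravity, so the strongest magnitude-weighted gradient orientation approximates the visual vertical.

```python
import numpy as np

def estimate_visual_vertical(gray_image: np.ndarray) -> float:
    """Estimate a visual-vertical angle (radians, mod pi) from a grayscale
    image. Illustrative sketch only, not the paper's method."""
    gy, gx = np.gradient(gray_image.astype(float))
    magnitude = np.hypot(gx, gy)
    # Gradient direction is perpendicular to the edge it crosses, so
    # horizontal scene structure (shelves, floor lines) yields gradients
    # pointing along the vertical.
    angles = np.mod(np.arctan2(gy, gx), np.pi)
    # Magnitude-weighted orientation histogram over [0, pi).
    bins = np.linspace(0.0, np.pi, 181)
    hist, _ = np.histogram(angles, bins=bins, weights=magnitude)
    i = np.argmax(hist)
    return 0.5 * (bins[i] + bins[i + 1])  # center of the dominant bin
```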
Policymakers face the broader challenge of how to view AI capabilities today and where society stands in terms of those capabilities. This paper surveys AI capabilities and tackles this very issue, exploring it in the context of political security in digitally networked societies. We extend the ideas of Information Management to better understand contemporary AI systems as part of a larger and more complex information system. Comprehensively reviewing AI capabilities and contemporary human-machine interactions, we undertake conceptual development to suggest that better information management could allow states to more optimally offset the risks of AI-enabled influence and better utilise the emerging capabilities these systems have to offer to policymakers and political institutions across the world. We hope this essay will stimulate further debate and discussion of these ideas, and prove a useful contribution towards governing the future of AI.
The concept of differential privacy has widely penetrated academia and industry, with its formal guarantee of individual privacy leading to compliance with privacy legislation such as the GDPR. However, there is a lack of understanding of the tools capable of achieving differential privacy, and it is not clear what to expect from existing differential privacy tools when implementing privacy protection. This obstacle limits the further adoption of private applications. This paper reviews and evaluates the state-of-the-art open-source differential privacy tools across domains, using various evaluation categories and privacy settings. In particular, we look into the performance of three differential privacy tools for machine learning, two for statistical queries, and four for synthetic data generation. We test all the tools on both continuous and categorical data and quantify their performance under different privacy budgets and data sizes with respect to utility loss and system overhead. The accumulated evaluation results reveal several patterns that users can follow to optimally configure the tools, and provide preliminary guidelines on tool selection under different criteria. Finally, we openly release our evaluation code repository, a framework that users can reuse to further evaluate the studied tools and beyond. We anticipate this work will provide comprehensive insight into the performance of the existing dominant privacy tools, and a concrete reference for the potentially large developer community building private applications, thus narrowing the gap between conceptual differential privacy and private functionality development.
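For readers unfamiliar with what such tools implement under the hood, the canonical building block is the Laplace mechanism. The minimal sketch below answers a counting query with epsilon-differential privacy; the dataset and predicate are invented for illustration.

```python
import numpy as np

def laplace_count(data, predicate, epsilon: float) -> float:
    """Differentially private counting query via the Laplace mechanism.

    A counting query has L1 sensitivity 1 (adding or removing one record
    changes the count by at most 1), so noise of scale 1/epsilon gives
    epsilon-differential privacy.
    """
    true_count = sum(1 for row in data if predicate(row))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon (a tighter privacy budget) means a noisier answer,
# which is exactly the utility-loss trade-off the evaluation quantifies.
ages = [23, 35, 41, 29, 52, 61]
print(laplace_count(ages, lambda a: a >= 40, epsilon=0.5))
```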
Cooperative driving systems, such as platooning, rely on communication and information exchange to create situational awareness for each agent. The design and performance of control components are therefore tightly coupled with the performance of communication components. The information flow between vehicles can significantly affect the dynamics of a platoon. Therefore, both the performance and the stability of a platoon depend not only on the vehicles' controllers but also on the Information Flow Topology (IFT). The IFT can limit certain platoon properties, e.g., stability and scalability. Cellular Vehicle-to-Everything (C-V2X) has emerged as one of the main communication technologies to support connected and automated vehicle applications. As a result of packet loss, wireless channels create random link interruptions and changes in network topology. In this paper, we model the communication links between vehicles with a first-order Markov model to capture the prevalent time correlations of each link. These models enable performance evaluation through better approximation of communication links during system design stages. Our approach is to use data from experiments to model the Inter-Packet Gap (IPG) using Markov chains and to derive transition probability matrices for consecutive IPG states. Training data is collected from high-fidelity simulations using models derived from empirical data for a variety of vehicle densities and communication rates. Utilizing the IPG models, we analyze the mean-square stability of a platoon of vehicles with the standard consensus protocol tuned for ideal communication, and compare the degradation in performance across scenarios.
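As an illustration of the modeling step described above, the sketch below estimates a first-order Markov transition matrix from a sequence of already-discretized IPG states. The discretization scheme and state count are assumptions of this example, not taken from the paper.

```python
import numpy as np

def fit_ipg_markov_chain(ipg_states: list[int], n_states: int) -> np.ndarray:
    """Estimate a first-order Markov transition matrix from a sequence of
    discretized Inter-Packet Gap (IPG) states.

    P[i, j] ~= Pr(next state = j | current state = i), estimated from
    counts of consecutive state pairs in the observed sequence.
    """
    counts = np.zeros((n_states, n_states))
    for current, nxt in zip(ipg_states[:-1], ipg_states[1:]):
        counts[current, nxt] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no observations are left as all-zero rather than divided.
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Example: 3 IPG states (e.g., short / medium / long gaps).
P = fit_ipg_markov_chain([0, 0, 1, 2, 1, 0, 0, 1], n_states=3)
```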
Cyber-human interaction is a broad term encompassing the range of interactions that humans can have with technology. While human interaction with fixed and mobile computers is well understood, the world is on the cusp of ubiquitous and sustained interactions between humans and robots. While robotic systems are intertwined with computing and computing technologies, the word robot here describes technologies that can physically affect, and in turn be affected by, their environments, which include humans. This chapter delves into issues of cyber-human interaction from the perspective of humans interacting with a subset of robots known as assistive robots. Assistive robots are designed to assist individuals with mobility or capacity limitations in completing everyday activities, commonly called instrumental activities of daily living. These range from household chores and eating or drinking to any activity for which a user may need the daily assistance of a caregiver. One common type of assistive robot is the wheelchair-mounted robotic arm, a device designed to attach to a user's wheelchair so that they can complete their activities independently. In short, these devices have sensors that allow them to sense and process their environment, with varying levels of autonomy, to perform actions that benefit and improve the well-being of people with capability limitations or disabilities. While human-robot interaction is a popular research topic, little research has been dedicated to individuals with limitations. In this chapter, we provide an overview of assistive robotic devices, discuss common methods of user interaction, and argue the need for an adaptive compensation framework to support potential users in regaining their functional capabilities.
Identifying personalized interventions for an individual is an important task. Recent work has shown that interventions that do not consider the demographic background of individual consumers can, in fact, produce the reverse effect, strengthening opposition to electric vehicles. In this work, we focus on methods for personalizing interventions based on an individual's demographics in order to shift consumer preferences to be more positive towards Battery Electric Vehicles (BEVs). One constraint in building models to suggest preference-shifting interventions is that each intervention can influence the effectiveness of later interventions. This, in turn, requires many subjects to evaluate the effectiveness of each possible intervention. To address this, we propose to identify personalized factors influencing BEV adoption, such as barriers and motivators. We present a method for predicting these factors and show that it performs better than always predicting the most frequent factors. We then present a Reinforcement Learning (RL) model that learns the most effective interventions, and compare the number of subjects required by each approach.
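The paper's RL formulation is not reproduced here. As a simplified, hypothetical illustration of learning which intervention works best per demographic context, the sketch below uses an epsilon-greedy bandit; a full RL treatment would also carry the history of prior interventions in the state, since earlier interventions affect the effectiveness of later ones.

```python
import random
from collections import defaultdict

class InterventionBandit:
    """Epsilon-greedy sketch (illustrative only): learn the mean observed
    preference shift for each (demographic context, intervention) pair."""

    def __init__(self, interventions, epsilon=0.1):
        self.interventions = interventions
        self.epsilon = epsilon
        self.value = defaultdict(float)  # (context, intervention) -> mean reward
        self.count = defaultdict(int)

    def choose(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.interventions)  # explore
        return max(self.interventions,
                   key=lambda a: self.value[(context, a)])  # exploit

    def update(self, context, intervention, preference_shift):
        key = (context, intervention)
        self.count[key] += 1
        # Incremental running mean of the observed preference shift.
        self.value[key] += (preference_shift - self.value[key]) / self.count[key]
```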
Counterfactual explanations are usually generated through heuristics that are sensitive to the search's initial conditions. The absence of guarantees on performance and robustness hinders trustworthiness. In this paper, we take a disciplined approach towards counterfactual explanations for tree ensembles. We advocate for a model-based search aiming at "optimal" explanations and propose efficient mixed-integer programming approaches. We show that isolation forests can be modeled within our framework to focus the search on plausible explanations with a low outlier score. We provide comprehensive coverage of additional constraints that model important objectives, heterogeneous data types, and structural constraints on the feature space, along with resource and actionability restrictions. Our experimental analyses demonstrate that the proposed search approach requires a computational effort that is orders of magnitude smaller than previous mathematical programming algorithms. It scales up to large data sets and tree ensembles, where it provides, within seconds, systematic explanations grounded on well-defined models solved to optimality.
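The paper's full formulation for tree ensembles is richer than can be shown here, but the toy sketch below conveys the core idea of a model-based search: binary variables select tree branches via big-M constraints, the prediction is required to flip, and the solver returns the counterfactual of minimal L1 distance. The feature names, thresholds, and two decision stumps are invented for illustration, and the sketch uses the pulp modeling library rather than the paper's solver setup.

```python
import pulp

# Toy instance we want to flip: two features, two decision stumps voting
# -1 (left branch) or +1 (right branch); current prediction is "reject".
x0 = {"income": 2.0, "tenure": 1.0}
thresholds = {"income": 5.0, "tenure": 3.0}

prob = pulp.LpProblem("counterfactual", pulp.LpMinimize)
x = {f: pulp.LpVariable(f, lowBound=0, upBound=10) for f in x0}
d = {f: pulp.LpVariable(f"d_{f}", lowBound=0) for f in x0}    # |x - x0|
z = {f: pulp.LpVariable(f"z_{f}", cat="Binary") for f in x0}  # 1 = right branch

M = 10.0  # big-M constant covering the feature range
for f, t in thresholds.items():
    prob += x[f] >= t - M * (1 - z[f])  # z=1 forces x above the split
    prob += x[f] <= t + M * z[f]        # z=0 forces x below the split

# Require the ensemble's summed vote to become non-negative ("accept").
prob += pulp.lpSum(2 * z[f] - 1 for f in x0) >= 0

# Linearize the L1 distance and minimize it.
for f in x0:
    prob += d[f] >= x[f] - x0[f]
    prob += d[f] >= x0[f] - x[f]
prob += pulp.lpSum(d.values())

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({f: x[f].value() for f in x0})  # minimal change that flips the vote
```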
Imitation learning aims to extract knowledge from the demonstrations of human experts or artificially created agents in order to replicate their behaviors. Its success has been demonstrated in areas such as video games, autonomous driving, robotic simulations, and object manipulation. However, this replication process can be problematic: performance is highly dependent on demonstration quality, and most trained agents perform well only in task-specific environments. In this survey, we provide a systematic review of imitation learning. We first introduce background knowledge, covering the field's development history and preliminaries, followed by different taxonomies within imitation learning and key milestones of the field. We then detail challenges in learning strategies and present research opportunities in learning policies from suboptimal demonstrations, voice instructions, and other associated optimization schemes.
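To ground the terminology, the most basic instance of imitation learning is behavioral cloning: fit a policy by supervised learning on expert (state, action) pairs. The minimal sketch below trains a softmax policy with plain gradient descent; it is an illustrative baseline, not any specific method surveyed in the paper.

```python
import numpy as np

def behavioral_cloning(states, actions, n_actions, lr=0.1, epochs=200):
    """Fit a linear softmax policy to expert (state, action) pairs by
    minimizing cross-entropy. states: (N, D) array; actions: (N,) ints."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(states.shape[1], n_actions))
    for _ in range(epochs):
        logits = states @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        # Gradient of cross-entropy w.r.t. logits: probs - one_hot(actions).
        probs[np.arange(len(actions)), actions] -= 1.0
        W -= lr * states.T @ probs / len(actions)
    return W  # act greedily: argmax over state @ W
```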
Click-through rate (CTR) estimation serves as a core function module in various personalized online services, including online advertising, recommender systems, and web search. Since 2015, the success of deep learning has begun to benefit CTR estimation performance, and deep CTR models are now widely deployed on many industrial platforms. In this survey, we provide a comprehensive review of deep learning models for CTR estimation tasks. First, we review the transition from shallow to deep CTR models and explain why going deep is a necessary trend of development. Second, we concentrate on the explicit feature interaction learning modules of deep CTR models. Then, as an important perspective on large platforms with abundant user histories, deep behavior models are discussed. Moreover, recently emerged automated methods for deep CTR architecture design are presented. Finally, we summarize the survey and discuss the future prospects of this field.
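As a concrete example of explicit feature interaction learning, many deep CTR architectures (DeepFM-style models, for instance) build on the factorization machine's second-order term, which scores every feature pair through latent embeddings yet can be computed in O(nk) time via a standard reformulation. A minimal NumPy sketch:

```python
import numpy as np

def fm_interaction(x, V):
    """Second-order factorization-machine term sum_{i<j} <v_i, v_j> x_i x_j.

    x: (batch, n) feature values; V: (n, k) latent embeddings.
    Uses the identity 0.5 * [(sum_i v_i x_i)^2 - sum_i (v_i x_i)^2],
    evaluated per latent factor, to avoid the O(n^2) pairwise loop.
    """
    square_of_sum = (x @ V) ** 2          # (batch, k)
    sum_of_squares = (x ** 2) @ (V ** 2)  # (batch, k)
    return 0.5 * np.sum(square_of_sum - sum_of_squares, axis=-1)
```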
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
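Schematically (and only schematically: the paper's actual formulation is defined over curricula and is more involved), the definition casts the intelligence of a system S over a scope of tasks as skill-acquisition efficiency, with skill weighted by generalization difficulty and divided by the priors and experience that produced it. The rendering below is a paraphrase with illustrative symbol names, not the paper's exact notation:

```latex
% Schematic paraphrase of "intelligence as skill-acquisition efficiency".
I_{S,\mathrm{scope}} \;\propto\; \operatorname*{Avg}_{T \in \mathrm{scope}}
  \left[ \frac{\mathrm{GeneralizationDifficulty}_{T} \cdot \mathrm{Skill}_{S,T}}
              {\mathrm{Priors}_{S,T} + \mathrm{Experience}_{S,T}} \right]
```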