Increasing urbanization and increasingly stringent sustainability goals threaten the operational efficiency of current transportation systems and confront cities with complex choices that will have a lasting impact on future generations. At the same time, the rise of private, profit-maximizing Mobility Service Providers that leverage public resources, such as ride-hailing companies, complicates current regulation schemes. This calls for tools to study such complex socio-technical problems. In this paper, we provide a game-theoretic framework to study interactions between stakeholders of the mobility ecosystem, modeling regulatory aspects such as taxes and public transport prices, as well as operational matters for Mobility Service Providers such as pricing strategy, fleet sizing, and vehicle design. Our framework is modular and can readily accommodate different types of Mobility Service Providers, actions of municipalities, and low-level models of customers' choices in the mobility system. Through both an analytical and a numerical case study for the city of Berlin, Germany, we showcase the ability of our framework to compute equilibria of the problem, to study fundamental tradeoffs, and to inform stakeholders and policy makers about the effects of interventions. Among other results, we show tradeoffs between customer satisfaction, environmental impact, and public revenue, as well as the impact of strategic decisions on these metrics.
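To illustrate the kind of equilibrium computation such a framework performs, the following is a minimal toy sketch (not the paper's model): a municipality sets a per-ride tax and a single Mobility Service Provider sets a ride price, and the two best-respond to each other until a fixed point is reached. All demand and cost parameters are hypothetical.

```python
# Toy two-player game between a municipality (tax) and a provider (price),
# solved by best-response iteration. Parameters are illustrative assumptions.
import numpy as np

def demand(price, tax):
    """Toy linear demand for rides as a function of the total fare (price + tax)."""
    return max(0.0, 100.0 - 8.0 * (price + tax))

def provider_best_price(tax, prices=np.linspace(0.0, 12.0, 241)):
    """Provider maximizes profit = (price - operating cost) * demand."""
    cost = 2.0
    profits = [(p - cost) * demand(p, tax) for p in prices]
    return prices[int(np.argmax(profits))]

def city_best_tax(price, taxes=np.linspace(0.0, 6.0, 121)):
    """Municipality trades off tax revenue against ridership-related welfare."""
    welfare = [t * demand(price, t) + 0.5 * demand(price, t) for t in taxes]
    return taxes[int(np.argmax(welfare))]

# Best-response iteration until an (approximate) fixed point.
tax, price = 1.0, 5.0
for _ in range(50):
    new_price = provider_best_price(tax)
    new_tax = city_best_tax(new_price)
    if abs(new_price - price) < 1e-6 and abs(new_tax - tax) < 1e-6:
        break
    price, tax = new_price, new_tax

print(f"equilibrium price ~ {price:.2f}, tax ~ {tax:.2f}, demand ~ {demand(price, tax):.1f}")
```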
Whether a population of decision-making individuals will reach a state of satisfactory decisions is a fundamental problem in the study of collective behavior. In the framework of evolutionary game theory and by means of potential functions, researchers have established equilibrium convergence under different update rules, including best-response and imitation, by imposing certain conditions on agents' utility functions. Using the proposed potential functions, they have then been able to control these populations towards a desired equilibrium. Nevertheless, finding a potential function is often daunting, if not nearly impossible. We introduce the so-called coordinating agent, which tends to switch to a decision only if at least one other agent has done so. We prove that any population of coordinating agents, a coordinating population, almost surely equilibrates. It turns out that some binary network games that were proven to equilibrate using potential functions are coordinating, and that some coloring problems can be solved using this notion. We additionally show that any mixed network of agents following best-response, imitation, or rational imitation, and associated with coordination payoff matrices, is coordinating and hence equilibrates. As a second contribution, we provide an incentive-based control algorithm that leads coordinating populations to a desired equilibrium. The algorithm iteratively maximizes the ratio of the number of agents choosing the desired decision to the provided incentive. It performs near-optimally and matches specialized algorithms proposed for best-response and imitation, yet it does not require a potential function. Therefore, this control algorithm can be readily applied in general situations where no potential function has yet been found for a given decision-making population.
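The selection rule described in the abstract (maximize switchers per unit of incentive) can be sketched as follows. This is a toy illustration only: the network, the heterogeneous thresholds, and the coordination-style update rule are assumptions, not the paper's model.

```python
# Minimal sketch of an incentive-based control loop: each round, offer the
# incentive level that maximizes the number of agents switching to the desired
# decision per unit of incentive paid. All modeling choices are illustrative.
import random

random.seed(0)
n = 50
neighbors = [[j for j in range(n) if j != i and random.random() < 0.1] for i in range(n)]
thresholds = [random.uniform(0.8, 1.6) for _ in range(n)]  # heterogeneous resistance
states = [0] * n  # 0 = status quo, 1 = desired decision

def would_switch(i, incentive):
    """Toy coordination rule: switch if incentive plus the fraction of neighbors
    already at the desired decision exceeds the agent's personal threshold."""
    peers = neighbors[i] or [i]
    support = sum(states[j] for j in peers) / len(peers)
    return states[i] == 0 and incentive + support >= thresholds[i]

budget = 5.0
while budget > 0 and sum(states) < n:
    candidates = [0.2, 0.4, 0.6, 0.8, 1.0]
    # Pick the incentive with the best switchers-per-cost ratio.
    best = max(candidates, key=lambda c: sum(would_switch(i, c) for i in range(n)) / c)
    switchers = [i for i in range(n) if would_switch(i, best)]
    if not switchers:
        break
    for i in switchers:
        states[i] = 1
    budget -= best

print(f"agents at desired decision: {sum(states)}/{n}, remaining budget: {budget:.1f}")
```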
Biometric recognition is a widely adopted technology that supports different kinds of applications, ranging from security and access control to law enforcement. However, such systems raise serious privacy and data protection concerns. Misuse of data, compromising the privacy of individuals, and/or unauthorized processing of data may be irreversible and could have severe consequences for the individual's rights to privacy and data protection. This is partly due to the lack of methods and guidance for the integration of data protection and privacy by design in the system development process. In this paper, we present an example of privacy and data protection best practices to provide more guidance for data controllers and developers on how to comply with the legal obligation for data protection. These privacy and data protection best practices and considerations are based on the lessons learned from the SMart mobILity at the European land borders (SMILE) project.
Unsignalized intersection driving is challenging for automated vehicles. For safe and efficient performance, the diverse and dynamic behaviors of interacting vehicles must be considered. Based on a game-theoretic framework, a human-like payoff design methodology is proposed for automated decision-making at unsignalized intersections. Prospect Theory is introduced to map the objective collision risk to subjective driver payoffs, and driving style can be quantified as a tradeoff between safety and speed. To account for the dynamics of the interaction, a probabilistic model is further introduced to describe the acceleration tendency of drivers. Simulation results show that the proposed decision algorithm can describe the dynamic process of two-vehicle interaction in limit cases. Statistics from simulations of uniformly sampled cases indicate that the success rate of safe interaction reaches 98%, while speed efficiency can also be guaranteed. The proposed approach is further applied and validated in four-vehicle interaction scenarios at a four-arm intersection.
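As a rough illustration of the risk-to-payoff mapping idea (not the paper's exact payoff design), the sketch below uses a standard Prospect Theory value and probability-weighting form to turn an objective collision probability into a subjective payoff, blended with a speed reward through a driving-style weight. All parameter values are assumptions.

```python
# Illustrative Prospect Theory mapping from collision risk to subjective payoff.
import math

def pt_weight(p, gamma=0.61):
    """Probability weighting: small risks over-weighted, large risks under-weighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def pt_value(x, alpha=0.88, lam=2.25):
    """Value function: concave for gains, convex and steeper for losses."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def subjective_payoff(collision_prob, speed_gain, style=0.5):
    """style in [0, 1]: 0 = fully safety-oriented, 1 = fully speed-oriented."""
    risk_term = pt_weight(collision_prob) * pt_value(-1.0)  # perceived loss from a crash
    speed_term = pt_value(speed_gain)                       # perceived gain from passing first
    return (1 - style) * risk_term + style * speed_term

for style in (0.2, 0.5, 0.8):
    print(style, round(subjective_payoff(collision_prob=0.1, speed_gain=0.6, style=style), 3))
```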
Understanding decision-making in dynamic and complex settings is challenging yet essential for preventing, mitigating, and responding to adverse events (e.g., disasters, financial crises). Simulation games have shown promise for advancing our understanding of decision-making in such settings. However, an open question remains on how to extract useful information from these games. We contribute an approach to model human-simulation interaction by leveraging existing methods to characterize: (1) system states of dynamic simulation environments (with Principal Component Analysis), (2) behavioral responses from human interaction with the simulation (with Hidden Markov Models), and (3) behavioral responses across system states (with Sequence Analysis). We demonstrate this approach with our game simulating drug shortages in a supply chain context. Results from our experimental study with 135 participants show different player types (hoarders, reactors, followers), how behavior changes in different system states, and how sharing information impacts behavior. We discuss how our findings challenge existing literature.
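A rough sketch of the three-stage pipeline named in the abstract, run on synthetic data: PCA to characterize simulation system states, a Gaussian HMM to segment a player's behavioral responses, and a simple cross-tabulation of behavioral modes across system states as a stand-in for sequence analysis. It assumes scikit-learn and hmmlearn are available; the feature names and dimensions are made up and not from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm

rng = np.random.default_rng(0)
T = 300
system_features = rng.normal(size=(T, 6))  # e.g., inventory, backlog, demand signals
player_actions = rng.normal(size=(T, 2))   # e.g., order quantity, order frequency

# (1) Low-dimensional system state, discretized by the sign of the first component.
pca = PCA(n_components=2).fit(system_features)
system_state = (pca.transform(system_features)[:, 0] > 0).astype(int)

# (2) Hidden behavioral modes inferred from player actions.
model = hmm.GaussianHMM(n_components=3, covariance_type="full", random_state=0)
model.fit(player_actions)
behavior_state = model.predict(player_actions)

# (3) How behavioral modes distribute across system states.
table = np.zeros((2, 3), dtype=int)
for s, b in zip(system_state, behavior_state):
    table[s, b] += 1
print(table)
```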
We present Project IRL (In Real Life), a suite of five mobile apps we created to explore novel ways of supporting in-person social interactions with augmented reality. In recent years, the tone of public discourse surrounding digital technology has become increasingly critical, and technology's influence on the way people relate to each other has been blamed for making people feel "alone together," diverting their attention from truly engaging with one another when they interact in person. Motivated by this challenge, we focus on an under-explored design space: playful co-located interactions. We evaluated the apps through a deployment study that involved interviews and participant observations with 101 people. We synthesized the results into a series of design guidelines that focus on four themes: (1) device arrangement (e.g., are people sharing one phone, or does each person have their own?), (2) enablers (e.g., should the activity focus on an object, body part, or pet?), (3) affordances of modifying reality (i.e., features of the technology that enhance its potential to encourage various aspects of social interaction), and (4) co-located play (i.e., using technology to make in-person play engaging and inviting). We conclude by presenting our design guidelines for future work on embodied social AR.
The Internet of Behaviors (IoB) puts human behavior at the core of engineering intelligent connected systems. IoB links the digital world to human behavior to establish human-driven design, development, and adaptation processes. This paper defines the novel concept through an IoB model created as a collective effort involving the software engineering, human-computer interaction, social science, and cognitive science communities. The model is based on an exploratory study that synthesizes a state-of-the-art analysis and expert interviews. The architecture of a real Industry 4.0 manufacturing infrastructure helps to explain the IoB model and its application. The conceptual model was used to successfully implement a socio-technical infrastructure for a crowd monitoring and queue management system for the Uffizi Galleries, Florence, Italy. The experiment, which started in the fall of 2016 and became operational in the fall of 2018, used a data-driven approach to feed the system with real-time sensory data and incorporated prediction models of visitors' mobility behavior. The system's main objective was to capture human behavior, model it, and build a mechanism that considers changes, adapts in real time, and continuously learns from repetitive behaviors. In addition to the conceptual model and the real-life evaluation, this paper provides recommendations from experts and outlines future directions for IoB to become a significant technological advancement in the coming years.
In the Internet of Things (IoT) era, vehicles and other intelligent components of an intelligent transportation system (ITS) are connected, forming Vehicular Networks (VNs) that provide efficient and secure traffic management and ubiquitous access to various applications. However, as the number of nodes in an ITS increases, it is challenging to satisfy a varied and large number of service requests with different Quality of Service and security requirements in highly dynamic VNs. Intelligent nodes in VNs can compete or cooperate for limited network resources to achieve either individual or group objectives. Game Theory (GT), a theoretical framework designed for strategic interactions among rational decision-makers sharing scarce resources, can be used to model and analyze the individual or group behaviors of communicating entities in VNs. This paper primarily surveys recent developments of GT in solving various challenges of VNs. The survey starts with an introduction to the background of VNs. A review of the GT models studied in VNs follows, including their basic concepts, classifications, and applicable vehicular issues. After discussing the requirements of VNs and the motivation for using GT, a comprehensive literature review on GT applications in dealing with the challenges of current VNs is provided. Furthermore, recent contributions of GT to VNs integrated with diverse emerging 5G technologies are surveyed. Finally, the lessons learned are given, and several key research challenges and possible solutions for applying GT in VNs are outlined.
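As a small, self-contained example of the kind of game-theoretic modeling surveyed here (not taken from the survey itself), consider two vehicles deciding whether to relay each other's packets; the hypothetical payoff matrix below has the structure of a prisoner's dilemma, and the helper checks which strategy profiles are pure Nash equilibria.

```python
# Toy 2x2 packet-relaying game between two vehicles; payoffs are hypothetical.
RELAY, DEFECT = 0, 1
# payoff[a][b] = (payoff to vehicle 1, payoff to vehicle 2)
payoff = [
    [(3, 3), (0, 4)],  # v1 relays:  both relay / v2 defects
    [(4, 0), (1, 1)],  # v1 defects: v2 relays / both defect
]

def is_nash(a, b):
    """A profile is a pure Nash equilibrium if no vehicle can gain by deviating alone."""
    u1, u2 = payoff[a][b]
    best1 = all(payoff[a2][b][0] <= u1 for a2 in (RELAY, DEFECT))
    best2 = all(payoff[a][b2][1] <= u2 for b2 in (RELAY, DEFECT))
    return best1 and best2

equilibria = [(a, b) for a in (RELAY, DEFECT) for b in (RELAY, DEFECT) if is_nash(a, b)]
print("pure Nash equilibria (0=relay, 1=defect):", equilibria)  # expect [(1, 1)]
```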
Spatio-temporal constraints coupled with social constructs have the potential to lend a fluid predictability to human mobility patterns. Accordingly, predictability in human mobility is non-monotonic and varies according to this spatio-socio-temporal context. Here, we propose that predictability in human mobility is a {\em state} and not a static trait of individuals. First, we show that time (of the week) explains people's whereabouts better than the sequences of locations they visit. Then, we show that predictability depends not only on time but also on the type of activity an individual is engaged in, thus establishing the importance of context in human mobility.
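The contrast between time-based and sequence-based explanations can be illustrated with conditional entropies on a toy trace. The synthetic mobility trace, binning, and estimator below are assumptions for illustration only, not the paper's data or method.

```python
# Compare H(location | hour of week) with H(location | previous location)
# on a synthetic weekly mobility trace.
import math
import random
from collections import Counter, defaultdict

random.seed(0)
hours_per_week = 168
trace = []
for week in range(8):
    for h in range(hours_per_week):
        if h % 24 < 8:
            base = "home"
        elif h % 24 < 18 and h < 120:
            base = "work"
        else:
            base = "leisure"
        loc = base if random.random() < 0.85 else random.choice(["home", "work", "leisure", "gym"])
        trace.append((h, loc))

def entropy(counter):
    n = sum(counter.values())
    return -sum((c / n) * math.log2(c / n) for c in counter.values())

def conditional_entropy(pairs):
    """Average entropy of the location distribution within each context bin."""
    by_context = defaultdict(Counter)
    for context, loc in pairs:
        by_context[context][loc] += 1
    total = len(pairs)
    return sum(sum(c.values()) / total * entropy(c) for c in by_context.values())

time_pairs = trace  # context = hour of week
seq_pairs = [(trace[i - 1][1], trace[i][1]) for i in range(1, len(trace))]  # context = previous location

print("H(location | hour of week)      =", round(conditional_entropy(time_pairs), 3))
print("H(location | previous location) =", round(conditional_entropy(seq_pairs), 3))
```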
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to follow an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skill for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
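The following is only a schematic rendering of "intelligence as skill-acquisition efficiency" consistent with the ingredients listed in the abstract (scope, generalization difficulty, priors, experience); it is not the paper's exact formal definition, and the symbols are chosen here for illustration.

```latex
% Schematic sketch, not the paper's formula: over a scope of tasks \mathcal{T},
% intelligence is proportional to achieved skill S_T, weighted by the
% generalization difficulty GD_T, per unit of priors P plus experience E_T.
I \;\propto\; \frac{1}{|\mathcal{T}|} \sum_{T \in \mathcal{T}} \frac{GD_T \cdot S_T}{P + E_T}
```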
This paper introduces a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy for human-robot interaction without the use of external sensors or human supervision. The robot learns to focus its attention on groups of people from its own audio-visual experiences, independently of the number of people, their positions, and their physical appearance. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network using a simulated environment that mimics realistic scenarios involving speaking and silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust with respect to parameter estimation, i.e., the parameter values yielded by the method do not have a decisive impact on performance. The best results are obtained when audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step forward towards the autonomous learning of socially acceptable gaze behavior.
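For intuition, a minimal sketch of a recurrent Q-network for discrete gaze actions trained with a one-step Q-learning target on simulated observations is shown below. It assumes PyTorch; the architecture sizes, observation encoding, action set, and reward are illustrative assumptions, not the paper's implementation.

```python
# Toy recurrent Q-network for gaze control (e.g., pan left/right/up/down/stay).
import torch
import torch.nn as nn

N_ACTIONS, OBS_DIM, HIDDEN = 5, 16, 32

class RecurrentQNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(OBS_DIM, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, N_ACTIONS)

    def forward(self, obs_seq, h=None):
        out, h = self.gru(obs_seq, h)
        return self.head(out), h  # Q-values for every step of the sequence

net = RecurrentQNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.1

for episode in range(20):
    # Fake episode of audio-visual features and rewards from a simulated scene.
    obs = torch.randn(1, 30, OBS_DIM)
    q, _ = net(obs)
    # Epsilon-greedy actions, used here only to generate a toy training signal.
    actions = torch.where(torch.rand(1, 30) < eps,
                          torch.randint(0, N_ACTIONS, (1, 30)),
                          q.argmax(dim=-1))
    rewards = torch.randn(1, 30)  # stand-in for a "people kept in view" reward

    q_taken = q.gather(-1, actions.unsqueeze(-1)).squeeze(-1)[:, :-1]
    with torch.no_grad():
        target = rewards[:, :-1] + gamma * q[:, 1:].max(dim=-1).values
    loss = nn.functional.mse_loss(q_taken, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```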