Airline disruption management traditionally addresses three problem dimensions: aircraft scheduling, crew scheduling, and passenger scheduling, in that order. However, current efforts have at most addressed the first two dimensions concurrently, and do not account for the propagative effects that uncertain scheduling outcomes in one dimension can have on another. In addition, existing approaches to airline disruption management rely on human specialists who decide on the corrective actions needed for airline schedule disruptions on the day of operation. Human specialists, however, are limited in their ability to process the copious amounts of information required to make robust decisions that simultaneously address all problem dimensions during disruption management. There is therefore a need to augment the decision-making capabilities of a human specialist with quantitative and qualitative tools that can rationalize the complex interactions among all dimensions of airline disruption management and provide objective insights to the specialists in the airline operations control center. To that end, we discuss and demonstrate an agnostic and systematic paradigm for enabling expeditious, simultaneously-integrated recovery of all problem dimensions during airline disruption management, through an intelligent multi-agent system that employs principles from artificial intelligence and distributed ledger technology. Results indicate that our paradigm for simultaneously-integrated recovery is effective when all the flights in the airline route network are disrupted.
Smart cities are revolutionizing transportation infrastructure through the integration of technology. However, ensuring that the various components of a transportation system operate as expected and in a safe manner is a great challenge. In this work, we propose the use of formal methods to specify and reason about a traffic network's complex properties. Formal methods provide a flexible tool for defining the safe operation of a traffic network by capturing non-conforming behavior, exploring the possible states of the traffic scene, and detecting inconsistencies within it. We therefore develop specification-based monitoring for the analysis of traffic networks using the formal language Signal Temporal Logic (STL). We develop monitors that identify safety-related behavior such as conforming to speed limits and maintaining appropriate headway. The framework is tested using a calibrated micro-simulated highway scenario, and offline specification-based monitoring is applied to individual vehicle trajectories to determine whether they violate or satisfy the defined safety specifications. Statistical analysis of the outputs shows that our approach can differentiate violating from conforming vehicle trajectories based on the defined specifications. This work can be utilized by traffic management centers to study traffic stream properties, identify possible hazards, and provide valuable feedback for automating traffic monitoring systems.
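To make the monitoring idea concrete, the sketch below computes offline STL-style robustness for two specifications of the kind described above: "always (speed <= v_max)" and "always (headway >= h_min)". A positive robustness value means the trajectory satisfies the specification with a margin; a negative value means a violation. The thresholds and sampled signals are illustrative assumptions, not values from the paper.

```python
def always(robustness_signal):
    """Robustness of G(phi): the worst-case (minimum) robustness of phi over time."""
    return min(robustness_signal)

def speed_limit_robustness(speeds, v_max=30.0):
    # pointwise robustness of (speed <= v_max) is v_max - speed
    return always([v_max - v for v in speeds])

def headway_robustness(headways, h_min=2.0):
    # pointwise robustness of (headway >= h_min) is headway - h_min
    return always([h - h_min for h in headways])

speeds = [27.1, 28.4, 31.2, 29.8]      # sampled speeds (m/s) of one trajectory
headways = [2.8, 2.4, 2.1, 1.9]        # sampled headways (s)
print(speed_limit_robustness(speeds))  # -1.2 -> speed specification violated
print(headway_robustness(headways))    # -0.1 -> headway specification violated
```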
This paper addresses the problem of efficiently learning an equilibrium in general-sum Markov games through decentralized multi-agent reinforcement learning. Given the fundamental difficulty of computing a Nash equilibrium (NE), we instead aim to find a coarse correlated equilibrium (CCE), a solution concept that generalizes NE by allowing possible correlations among the agents' strategies. We propose an algorithm in which each agent independently runs optimistic V-learning (a variant of Q-learning) to efficiently explore the unknown environment, while using a stabilized online mirror descent (OMD) subroutine for policy updates. We show that the agents can find an $\epsilon$-approximate CCE in at most $\widetilde{O}(H^6 S A / \epsilon^2)$ episodes, where $S$ is the number of states, $A$ is the size of the largest individual action space, and $H$ is the length of an episode. This appears to be the first sample complexity result for learning in generic general-sum Markov games. Our results rely on a novel investigation of an anytime high-probability regret bound for OMD with a dynamic learning rate and weighted regret, which may be of independent interest. A key feature of our algorithm is that it is fully \emph{decentralized}, in the sense that each agent has access only to its local information and is completely oblivious to the presence of others. As a result, our algorithm readily scales up to an arbitrary number of agents without suffering an exponential dependence on the number of agents.
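For intuition, the sketch below shows the standard exponentiated-gradient form of an OMD step on the probability simplex, the kind of policy-update subroutine each agent could run; the paper's stabilization, weighting, and dynamic learning rate are deliberately omitted, so this is a minimal illustration rather than the authors' algorithm.

```python
import numpy as np

def omd_update(policy: np.ndarray, loss: np.ndarray, lr: float) -> np.ndarray:
    """One entropic-OMD (multiplicative-weights) step on the simplex."""
    weights = policy * np.exp(-lr * loss)
    return weights / weights.sum()          # re-normalize to a distribution

policy = np.ones(4) / 4                     # uniform initial policy over 4 actions
estimated_loss = np.array([1.0, 0.2, 0.5, 0.8])
policy = omd_update(policy, estimated_loss, lr=0.5)
print(policy)                               # mass shifts toward low-loss actions
```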
For AI technology to fulfill its full promise, we must have effective means to ensure Responsible AI behavior and to curtail potential irresponsible use, e.g., in the areas of privacy protection, human autonomy, robustness, and the prevention of bias and discrimination in automated decision making. Recent literature in the field has identified serious shortcomings of narrowly technology-focused and formalism-oriented research and has proposed an interdisciplinary approach that brings the social context into the scope of study. In this paper, we take a sociotechnical approach and propose a more expansive framework for thinking about Responsible AI challenges in both their technical and social contexts. Effective solutions need to bridge the gap between a technical system and the social system into which it will be deployed. To this end, we propose human agency and regulation as the main mechanisms of intervention, and a decentralized computational infrastructure, or a set of public utilities, as the computational means to bridge this gap. A decentralized infrastructure is uniquely suited to meeting this challenge, enabling technical solutions and social institutions to achieve Responsible AI goals in a mutually reinforcing dynamic. Our approach is novel in its sociotechnical framing and in its aim of tackling structural issues that cannot be solved within the narrow confines of AI technical research. We then explore possible features of the proposed infrastructure and discuss how it may help solve example problems recently studied in the field.
Wireless traffic prediction is essential for cellular networks to realize intelligent network operations, such as load-aware resource management and predictive control. Existing prediction approaches usually adopt centralized training architectures and require the transfer of huge amounts of traffic data, which may raise delay and privacy concerns in certain scenarios. In this work, we propose a novel wireless traffic prediction framework named \textit{Dual Attention-Based Federated Learning} (FedDA), in which a high-quality prediction model is trained collaboratively by multiple edge clients. To capture the various wireless traffic patterns while keeping raw data local, FedDA first groups the clients into different clusters using a small augmentation dataset. Then, a quasi-global model is trained and shared among clients as prior knowledge, with the aim of addressing the statistical heterogeneity challenge confronting federated learning. To construct the global model, we further propose a dual attention scheme that aggregates the intra- and inter-cluster models, instead of simply averaging the weights of local models. We conduct extensive experiments on two real-world wireless traffic datasets, and the results show that FedDA outperforms state-of-the-art methods, with average mean squared error performance gains of up to 10\% and 30\% on the two datasets, respectively.
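The abstract does not spell out the attention computation, so the sketch below only illustrates the general idea of attention-weighted model aggregation as an alternative to plain FedAvg: client models are scored against the quasi-global model and the scores are turned into aggregation weights via a softmax. The distance-based scoring is an assumption for illustration, not the paper's exact dual-attention scheme.

```python
import numpy as np

def attention_aggregate(client_models, quasi_global, temperature=1.0):
    # score each client by (negative) parameter distance to the quasi-global model
    scores = np.array([-np.linalg.norm(m - quasi_global) for m in client_models])
    weights = np.exp(scores / temperature)
    weights /= weights.sum()                 # softmax over clients
    # attention-weighted average instead of a plain unweighted mean
    return sum(w * m for w, m in zip(weights, client_models))

clients = [np.random.randn(10) for _ in range(5)]  # flattened client weights
quasi_global = np.mean(clients, axis=0)
new_global = attention_aggregate(clients, quasi_global)
```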
In a global economy with many competing participants, licensing and tracking of 3D-printed parts is desirable, if not mandatory, for many use cases. We investigate a blockchain-based approach, as blockchains provide many attractive features, such as a decentralized architecture and strong security assurances. An often neglected aspect of product life-cycle management is the confidentiality of transactions, which hides valuable business information from competitors. To solve the combined problem of trust and confidentiality, we present a confidential licensing and tracking system that works on any publicly verifiable, token-based blockchain supporting tokens of different types, representing licenses or attributes of parts. Together with the secure integration of a unique eID into each part, our system provides an efficient, immutable, and authenticated transaction log that scales to thousands of transactions per second. With our confidential Token-Based License Management system (cTLM), large industries such as automotive or aviation can license and trace all parts confidentially.
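As a hypothetical sketch of the typed-token idea, the snippet below binds license tokens to part eIDs on an append-only log. The class and field names are illustrative assumptions, and the confidentiality layer (e.g., commitments or encrypted payloads) that cTLM would add is omitted entirely.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    token_type: str   # e.g. "LICENSE" or "ATTRIBUTE"
    part_eid: str     # unique eID embedded in the physical part
    issuer: str
    data: str         # license terms or attribute value

class Ledger:
    """Append-only log of typed tokens, queried by part eID."""
    def __init__(self):
        self.log: list[Token] = []

    def issue(self, token: Token) -> None:
        self.log.append(token)

    def may_print(self, part_eid: str) -> bool:
        # a part may only be produced if a license token exists for its eID
        return any(t.token_type == "LICENSE" and t.part_eid == part_eid
                   for t in self.log)

ledger = Ledger()
ledger.issue(Token("LICENSE", "eid-42", "OEM", "batch of 100 units"))
assert ledger.may_print("eid-42") and not ledger.may_print("eid-7")
```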
A fundamental aim of the healthcare sector is to incorporate different technologies to observe and keep track of patients' clinical parameters in day-to-day life. Remote patient monitoring applications are becoming popular because they facilitate economical healthcare services. Managing the data gathered through these applications also requires due attention. Although cloud-facilitated healthcare applications offer a variety of solutions for storing patient records and delivering the required data to all stakeholders, they are affected by security issues, longer response times, and reduced system availability. To overcome these challenges, an intelligent IoT-based distributed framework for deploying remote healthcare services is proposed in this chapter. In the proposed model, the various entities of the system are interconnected using IoT, and a Distributed Database Management System is used to provide secure and fast data availability to patients and healthcare workers. Blockchain is used to ensure the security of patient medical records. The proposed model comprises intelligent analysis of the clinical records fetched from the Distributed Database Management System and secured with blockchain. The model is tested with real clinical data, and the results are discussed in detail.
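To illustrate how a blockchain can secure records in such a distributed store, the sketch below hash-chains appended records so that tampering with any stored record invalidates every subsequent block hash. This is a minimal illustration of the principle, not the chapter's implementation; field names and record contents are assumptions.

```python
import hashlib, json, time

def block_hash(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"timestamp": time.time(), "record": record, "prev_hash": prev}
    chain.append({**body, "hash": block_hash(body)})

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False                      # record was tampered with
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # chain linkage is broken
    return True

chain = []
append_record(chain, {"patient_id": "P001", "heart_rate": 72})
append_record(chain, {"patient_id": "P001", "heart_rate": 75})
assert verify(chain)
```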
Emerging mobility systems such as connected and automated vehicles (CAVs) provide an intriguing opportunity for more accessible, safe, and efficient transportation. CAVs are expected to significantly improve safety by eliminating the human factor and to improve transportation efficiency by allowing users to monitor network conditions and make better operating decisions. However, CAVs could alter users' tendency-to-travel, leading to higher traffic demand than expected and thus causing rebound effects (e.g., increased vehicle-miles traveled). In this chapter, we focus on tackling the social factors that could drive an emerging mobility system to unsustainable congestion levels. We propose a mobility market that models the economic interactions of travelers in a smart city network with road and public transit infrastructure. Using techniques from mechanism design, we introduce appropriate monetary incentives (e.g., tolls, fares, fees) and show how a mobility system consisting of selfish travelers, each seeking to travel either with a CAV or by public transit, can be made socially efficient. Furthermore, the proposed mobility market ensures that travelers always report their true travel preferences and always benefit from participating in the market. Lastly, we show that the market generates enough revenue to potentially cover its operating costs.
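The abstract does not name the mechanism, so the sketch below uses a generic VCG mechanism to illustrate how monetary incentives can make truthful reporting optimal: each traveler pays the externality it imposes on the others. The capacity constraint, option names, and reported values are illustrative assumptions.

```python
from itertools import product

OPTIONS = ("cav", "transit")

def feasible(alloc, cav_capacity=1):
    # illustrative constraint: limited CAV capacity creates real externalities
    return sum(a == "cav" for a in alloc) <= cav_capacity

def welfare(alloc, reports):
    return sum(r[a] for r, a in zip(reports, alloc))

def optimum(reports):
    allocs = [a for a in product(OPTIONS, repeat=len(reports)) if feasible(a)]
    return max(allocs, key=lambda a: welfare(a, reports))

def vcg(reports):
    alloc = optimum(reports)                 # maximize reported total value
    payments = []
    for i in range(len(reports)):
        others = reports[:i] + reports[i + 1:]
        # payment = harm i imposes on the others: their best welfare without i
        # minus their realized welfare with i present
        without_i = welfare(optimum(others), others)
        with_i = welfare(alloc, reports) - reports[i][alloc[i]]
        payments.append(without_i - with_i)
    return alloc, payments

# two travelers reporting values for riding a CAV vs. taking transit
reports = [{"cav": 5.0, "transit": 2.0}, {"cav": 3.0, "transit": 2.5}]
print(vcg(reports))  # (('cav', 'transit'), [0.5, 0.0])
```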
In decentralized machine learning, workers compute model updates on their local data. Because the workers communicate only with a few neighbors, without central coordination, these updates propagate progressively over the network. This paradigm enables distributed training on networks without all-to-all connectivity, helping to protect data privacy as well as to reduce the communication cost of distributed training in data centers. A key challenge, particularly in decentralized deep learning, remains handling the differences between the workers' local data distributions. To tackle this challenge, we introduce the RelaySum mechanism for information propagation in decentralized learning. RelaySum uses spanning trees to distribute information exactly uniformly across all workers, with finite delays that depend on the distance between nodes. In contrast, the typical gossip averaging mechanism distributes information uniformly only asymptotically, while using the same communication volume per step as RelaySum. We prove that RelaySGD, which builds on this mechanism, is independent of data heterogeneity and scales to many workers, enabling highly accurate decentralized deep learning on heterogeneous data. Our code is available at //github.com/epfml/relaysgd.
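The toy sketch below illustrates the relay idea on the simplest spanning tree, a chain: each node forwards its own value plus everything received from the opposite side, so every value reaches every node exactly once, with delay equal to the graph distance. It relays static scalars for clarity; RelaySGD applies the same mechanism to model updates at every training step, and this is an illustration, not the authors' implementation.

```python
n = 5
values = [float(i + 1) for i in range(n)]   # each worker's local value

# from_left[i]:  (sum, count) last received by node i from neighbor i-1
# from_right[i]: (sum, count) last received by node i from neighbor i+1
from_left = [(0.0, 0)] * n
from_right = [(0.0, 0)] * n

for _ in range(n - 1):                      # n-1 relay steps suffice on a chain
    new_left = [(0.0, 0)] * n
    new_right = [(0.0, 0)] * n
    for i in range(n):
        if i + 1 < n:                       # relay rightward: own value + all from the left
            s, c = from_left[i]
            new_left[i + 1] = (s + values[i], c + 1)
        if i - 1 >= 0:                      # relay leftward: own value + all from the right
            s, c = from_right[i]
            new_right[i - 1] = (s + values[i], c + 1)
    from_left, from_right = new_left, new_right

for i in range(n):
    s = values[i] + from_left[i][0] + from_right[i][0]
    c = 1 + from_left[i][1] + from_right[i][1]
    print(f"node {i}: {s / c}")             # every node recovers the exact mean 3.0
```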
Minimum-time navigation within constrained and dynamic environments is of special relevance in robotics. Seeking time-optimality while guaranteeing the integrity of time-varying spatial bounds is an appealing trade-off for agile vehicles such as quadrotors. State-of-the-art approaches either assume the bounds to be static and generate time-optimal trajectories offline, or compromise time-optimality for constraint satisfaction. Leveraging nonlinear model predictive control and a path-parametric reformulation of the quadrotor model, we present a real-time controller that approximates time-optimal behavior while remaining within dynamic corridors. The efficacy of the approach is evaluated in simulation, showing that the controller is capable of performing extremely aggressive maneuvers as well as stop-and-go and backward motions.
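The abstract does not state the exact optimal control problem, but a schematic path-parametric (contouring-control-style) objective of the kind alluded to might read:

\[
\min_{u(\cdot)} \; \int_{0}^{T} \Big( -\rho\,\dot{s}(t) + \big\lVert p(t) - \gamma\big(s(t)\big) \big\rVert_{Q}^{2} \Big)\, dt
\quad \text{s.t.} \quad \dot{x} = f(x, u), \;\; p(t) \in \mathcal{C}(t),
\]

where $s$ is the path parameter augmenting the quadrotor state $x$, $\gamma$ the reference path, $p$ the position, $\rho > 0$ a weight that rewards progress along the path, and $\mathcal{C}(t)$ the time-varying corridor. All symbols here are illustrative assumptions, not the paper's formulation.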
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to follow an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skill for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
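In schematic form, and as a simplification of the paper's Algorithmic-Information-Theoretic definition rather than its exact formula, intelligence as skill-acquisition efficiency can be rendered as:

\[
I_{\text{system}} \;\propto\; \underset{T \,\in\, \text{scope}}{\operatorname{Avg}} \left[ \omega_{T} \cdot \frac{\text{GD}_{T}}{P_{T} + E_{T}} \right],
\]

where, for each task $T$ in the scope, $\text{GD}_T$ is the generalization difficulty, $P_T$ the priors the system brings, $E_T$ the experience it consumes, and $\omega_T$ a task weight. The intuition: the same achieved skill counts for more intelligence when it is acquired across harder generalization gaps with fewer priors and less experience.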