Traditionally, traffic incident management (TIM) programs coordinate the deployment of emergency resources to immediate incident requests without accounting for the interdependencies arising from how incidents evolve in the environment. Ignoring these interdependencies when making current deployment decisions is shortsighted, and the resulting naive deployment strategy can significantly worsen the overall incident delay impact on the network. The interdependencies associated with incident evolution, including those between incident occurrences, and those between resource availability for near-future requests and the anticipated duration of the immediate incident request, should be considered through a look-ahead model when making current-stage deployment decisions. This study develops a new proactive framework based on the distributed constraint optimization problem (DCOP) to address these limitations, overcoming conventional TIM models that cannot accommodate such dependencies. Furthermore, the optimization objective is formulated to incorporate Unmanned Aerial Vehicles (UAVs). The UAVs' role in TIM includes exploring uncertain traffic conditions, detecting unexpected events, and augmenting information from roadway traffic sensors. Robustness analysis of our model across multiple TIM scenarios shows satisfactory performance using local search exploration heuristics. Overall, our model achieves a significant reduction in total incident delay compared to conventional TIM models. With UAV support, we demonstrate a further decrease in overall incident delay through shorter response times of emergency vehicles and a reduction in the uncertainty associated with the estimated incident delay impact.
This work-in-progress report presents both the design and partial evaluation of distributed execution indexing, a technique for microservice applications that precisely identifies dynamic instances of inter-service remote procedure calls (RPCs). Such an indexing scheme is critical for request-level fault injection techniques, which aim to automatically find failure-handling bugs in microservice applications. Distributed execution indexes enable granular specification of request-level faults, while also establishing a correspondence between inter-service RPCs across multiple executions, as is required to perform a systematic search of the fault space. In this paper, we formally define the general concept of a distributed execution index, which can be parameterized on different ways of identifying an RPC within a single service. We identify an instantiation that maintains precision in the presence of a variety of program structure complexities such as loops, function indirection, and concurrency with scheduling nondeterminism. We demonstrate that this particular instantiation addresses gaps in the state of the art in request-level fault injection and show that existing schemes are all special cases of distributed execution indexing. We discuss the implementation challenges and provide an implementation of distributed execution indexing as an extension of Filibuster, a resilience testing tool for Java microservice applications that supports fault injection for gRPC and HTTP.
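To make the concept concrete, below is a minimal Python sketch of one way a distributed execution index could be represented: the path of per-service RPC identifiers from the entry request to the current call, with a counter that disambiguates repeated invocations of the same call site. The field names, call-site strings, and counting discipline are illustrative assumptions, not the paper's formal definition or Filibuster's actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RpcComponent:
    service: str      # service issuing the RPC
    call_site: str    # hypothetical per-service identifier (e.g., file:line)
    invocation: int   # disambiguates repeated invocations of the same call site

@dataclass(frozen=True)
class DistributedExecutionIndex:
    components: tuple = ()

    def extend(self, service: str, call_site: str, counts: dict) -> "DistributedExecutionIndex":
        # Bump the per-call-site counter so two calls from the same site
        # (e.g., inside a loop or retry logic) receive distinct indexes.
        key = (self.components, service, call_site)
        counts[key] = counts.get(key, 0) + 1
        return DistributedExecutionIndex(
            self.components + (RpcComponent(service, call_site, counts[key]),)
        )

# Example: service A calls B twice from the same call site, then B calls C downstream.
counts = {}
root = DistributedExecutionIndex()
first = root.extend("A", "checkout:42", counts)
second = root.extend("A", "checkout:42", counts)    # same site, distinct index
downstream = first.extend("B", "inventory:17", counts)
assert first != second and len(downstream.components) == 2
```

Under this sketch, a fault injected at a given index targets exactly one dynamic RPC instance, and the same index can be matched against a later execution to explore a different point in the fault space.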
Indoor motion planning addresses the problem of navigating an agent through a cluttered environment. Although substantial work has been done in this field, existing methods often fail to strike the optimal balance between computationally inexpensive online path planning and optimality of the resulting path. In addition, these works often prove optimality only for single-start, single-goal worlds. To address these challenges, we present a multiple-waypoint path planner and controller stack for navigation in unknown indoor environments, where waypoints include the goal along with the intermediary points the robot must traverse before reaching it. Our approach makes use of a global planner (to find the next best waypoint at any instant), a local planner (to plan the path to a specific waypoint), and an adaptive Model Predictive Control strategy (for robust system control and faster maneuvers). We evaluate our algorithm on a set of randomly generated obstacle maps, intermediate waypoints, and start-goal pairs; the results indicate a significant reduction in computational cost together with high accuracy and robust control.
Population protocols are a relatively novel computational model in which very resource-limited anonymous agents interact in pairs with the goal of computing predicates. We consider the probabilistic version of this model, which naturally allows us to consider a setup in which a small probability of an incorrect output is tolerated. The main focus of this thesis is the question of confident leader election, an extension of the regular leader election problem with the extra requirement that the eventual leader detect its own uniqueness. Having a confident leader allows a population protocol to determine when its computation has converged. This behaviour of the model is highly beneficial and was shown to be feasible when the original model is extended in various ways. We show that, starting from a configuration with all agents in the same state, it takes a number of interactions linear in the population size for a probabilistic population protocol to have a non-zero fraction of agents in every reachable state. This leads us to the conclusion that confident leader election is out of reach even in the probabilistic version of the model.
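For illustration, the following is a minimal Python sketch of the probabilistic scheduler assumed throughout: at each step a pair of distinct agents is chosen uniformly at random and updated by the protocol's transition function. The example transition (two leader candidates meeting and one being demoted) is a simplification for illustration only, not one of the protocols analyzed in the thesis.

```python
import random

def simulate(n: int, steps: int) -> list:
    # State "L" = leader candidate, "F" = follower; every agent starts as "L".
    agents = ["L"] * n
    for _ in range(steps):
        i, j = random.sample(range(n), 2)   # uniformly random interacting pair
        if agents[i] == "L" and agents[j] == "L":
            agents[j] = "F"                 # one of the two candidates is demoted
    return agents

if __name__ == "__main__":
    n = 1000
    population = simulate(n, steps=10 * n)  # a linear number of interactions
    print("leader candidates remaining:", population.count("L"))
```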
Fairness testing aims at mitigating unintended discrimination in the decision-making process of data-driven AI systems. Individual discrimination may occur when an AI model makes different decisions for two distinct individuals who are distinguishable solely according to protected attributes, such as age and race. Such instances reveal biased AI behaviour and are called Individual Discriminatory Instances (IDIs). In this paper, we propose an approach for selecting the initial seeds used to generate IDIs for fairness testing. Previous studies mainly used random initial seeds to this end. However, this phase is crucial, as these seeds form the basis of the follow-up IDI generation. We dub our proposed seed selection approach I&D. It generates a large number of initial IDIs exhibiting great diversity, aiming at improving the overall performance of fairness testing. Our empirical study reveals that I&D is able to produce a larger number of IDIs than four state-of-the-art seed generation approaches, generating 1.68X more IDIs on average. Moreover, we compare the use of I&D to train machine learning models and find that using I&D reduces the number of remaining IDIs by 29% when compared to the state of the art, thus indicating that I&D is effective for improving model fairness.
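As a concrete illustration of what constitutes an IDI, the following minimal Python sketch checks whether an input becomes an Individual Discriminatory Instance when only a protected attribute is altered; the feature layout, model interface, and helper name are hypothetical and are not part of I&D itself.

```python
import numpy as np

def is_idi(model, x, protected_idx, protected_values):
    """Return True if changing only the protected attribute flips the decision."""
    x = np.asarray(x)
    base = model.predict(x.reshape(1, -1))[0]
    for value in protected_values:
        if value == x[protected_idx]:
            continue
        counterpart = x.copy()
        counterpart[protected_idx] = value     # differ only on the protected attribute
        if model.predict(counterpart.reshape(1, -1))[0] != base:
            return True                        # (x, counterpart) is an IDI
    return False
```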
Quadruped robots are usually equipped with additional arms for manipulation, which negatively impacts price and weight. On the other hand, the requirements of legged locomotion mean that the legs of such robots often possess the torque and precision needed to perform manipulation. In this paper, we present a novel design for a small-scale quadruped robot equipped with two leg-mounted manipulators inspired by crustacean chelipeds and knuckle-walker forelimbs. By making use of the actuators already present in the legs, we can achieve manipulation using only 3 additional motors per limb. The design enables the use of small and inexpensive actuators relative to the leg motors, further reducing cost and weight. The impact on the leg's moment of inertia is small thanks to an integrated cable/pulley system. As we show in a suite of tele-operation experiments, the robot is capable of performing single- and dual-limb manipulation, as well as transitioning between manipulation modes. The proposed design performs similarly to an additional arm while weighing and costing 5 times less per manipulator, and it enables the completion of tasks requiring two manipulators.
Autonomous racing rewards agents that react to opponents' behaviors with agile maneuvers that make progress along the track, while penalizing both over-aggressive and over-conservative agents. Understanding the intent of other agents is crucial to deploying autonomous systems in adversarial multi-agent environments. Current approaches either oversimplify the discretization of the agents' action space or fail to recognize the long-term effect of actions and become myopic. Our work focuses on addressing these two challenges. First, we propose a novel dimension reduction method that encapsulates diverse agent behaviors while conserving the continuity of agent actions. Second, we formulate the two-agent racing game as a regret minimization problem and provide a solution for tractable counterfactual regret minimization with a regret prediction model. Finally, we validate our findings experimentally on scaled autonomous vehicles. We demonstrate that the proposed game-theoretic planner, using agent characterization with the objective space, significantly improves the win rate against different opponents, and that the improvement transfers to unseen opponents in an unseen environment.
Background: Instrumental variables (IVs) can be used to provide evidence as to whether a treatment X has a causal effect on an outcome Y. Even if the instrument Z satisfies the three core IV assumptions of relevance, independence, and the exclusion restriction, further assumptions are required to identify the average causal effect (ACE) of X on Y. Sufficient assumptions for this include: homogeneity in the causal effect of X on Y; homogeneity in the association of Z with X; and no effect modification (NEM). Methods: We describe the NO Simultaneous Heterogeneity (NOSH) assumption, which requires the heterogeneity in the X-Y causal effect to be mean independent of (i.e., uncorrelated with) both Z and the heterogeneity in the Z-X association. This holds, for example, if there are no common modifiers of the X-Y effect and the Z-X association, and the X-Y effect is additive and linear. We illustrate NOSH using simulations and by re-examining selected published studies. Results: When NOSH holds, the Wald estimand equals the ACE even if both homogeneity assumptions and NEM (which we demonstrate to be special cases of, and therefore stronger than, NOSH) are violated. Conclusions: NOSH is sufficient for identifying the ACE using IVs. Since NOSH is weaker than existing assumptions for ACE identification, identifying the ACE may be more plausible than previously anticipated.
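For reference, with a binary instrument Z the Wald estimand referred to above takes its standard form (standard IV notation, not quoted from the paper):
\[
\beta_{\mathrm{Wald}}
  = \frac{\mathrm{E}[Y \mid Z = 1] - \mathrm{E}[Y \mid Z = 0]}
         {\mathrm{E}[X \mid Z = 1] - \mathrm{E}[X \mid Z = 0]},
\]
and the result above states that under NOSH this ratio equals the ACE of X on Y.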
Bid optimization for online advertising from a single advertiser's perspective has been thoroughly investigated in both academic research and industrial practice. However, existing work typically assumes that competitors do not change their bids, i.e., that the winning price is fixed, leading to poor performance of the derived solution. Although a few studies use multi-agent reinforcement learning to set up a cooperative game, they still suffer from the following drawbacks: (1) they fail to avoid collusion solutions, in which all the advertisers involved in an auction collude to bid an extremely low price on purpose; (2) previous works cannot handle the underlying complex bidding environment well, leading to poor model convergence. This problem is amplified when handling the multiple objectives of advertisers, which are practical demands but were not considered by previous work. In this paper, we propose a novel multi-objective cooperative bid optimization formulation called Multi-Agent Cooperative bidding Games (MACG). MACG sets up a carefully designed multi-objective optimization framework in which different objectives of advertisers are incorporated. A global objective to maximize the overall profit of all advertisements is added in order to encourage better cooperation and also to protect self-bidding advertisers. To avoid collusion, we also introduce an extra platform revenue constraint. We analyze the optimal functional form of the bidding formula theoretically and design a policy network accordingly to generate auction-level bids. Then we design an efficient multi-agent evolutionary strategy for model optimization. Offline experiments and online A/B tests conducted on the Taobao platform indicate that both the single advertiser's objective and global profit are significantly improved compared to state-of-the-art methods.
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
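As a sketch of how the class-balanced weighting described above can be computed, the snippet below evaluates the effective number of samples per class and weights each class by its inverse; the normalization choice and the function name are illustrative assumptions.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    n = np.asarray(samples_per_class, dtype=np.float64)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)   # (1 - beta^n) / (1 - beta)
    weights = 1.0 / effective_num
    # Normalize so the weights sum to the number of classes (one common convention).
    return weights * len(n) / weights.sum()

# Example: a long-tailed setup, from a 10,000-sample head class down to a 10-sample tail.
print(class_balanced_weights([10000, 1000, 100, 10]))
```

These per-class weights can then multiply any standard loss (e.g., cross-entropy) to obtain a class-balanced loss.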
Object detection is an important and challenging problem in computer vision. Although the past decade has witnessed major advances in object detection in natural scenes, such successes have been slow to transfer to aerial imagery, not only because of the huge variation in the scale, orientation, and shape of the object instances on the earth's surface, but also due to the scarcity of well-annotated datasets of objects in aerial scenes. To advance object detection research in Earth Vision, also known as Earth Observation and Remote Sensing, we introduce a large-scale Dataset for Object deTection in Aerial images (DOTA). To this end, we collect $2806$ aerial images from different sensors and platforms. Each image is approximately 4000-by-4000 pixels in size and contains objects exhibiting a wide variety of scales, orientations, and shapes. These DOTA images are then annotated by experts in aerial image interpretation using $15$ common object categories. The fully annotated DOTA images contain $188,282$ instances, each of which is labeled by an arbitrary (8 d.o.f.) quadrilateral. To build a baseline for object detection in Earth Vision, we evaluate state-of-the-art object detection algorithms on DOTA. Experiments demonstrate that DOTA well represents real Earth Vision applications and is quite challenging.