This paper addresses the challenge of generating optimal vehicle flow at the macroscopic level. Although several studies have focused on optimizing vehicle flow, little attention has been given to ensuring that it can be practically achieved. To overcome this issue, we propose a route-recovery and eco-driving strategy for connected and automated vehicles (CAVs) that guarantees optimal flow generation. Our approach first identifies the optimal vehicle flow that minimizes total travel time, given the constant travel demands in urban areas. We then develop a heuristic route-recovery algorithm to assign routes to CAVs that satisfy all travel demands while maintaining the optimal flow. Our method enables CAVs to arrive at each road segment at the desired arrival times determined by their assigned routes and the desired flow. In addition, we present an efficient coordination framework to minimize the energy consumption of CAVs and to prevent collisions while crossing intersections. The proposed method can effectively generate optimal vehicle flow and potentially reduce travel time and energy consumption in urban areas.
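To make the first step concrete, here is a sketch of the kind of macroscopic flow optimization the abstract describes, written as a standard system-optimal traffic assignment; the multicommodity decomposition, the link travel-time functions $t_e$, and all symbols below are illustrative assumptions rather than the paper's exact formulation:

\[
\begin{aligned}
\min_{x \ge 0} \quad & \sum_{e \in E} x_e \, t_e(x_e) && \text{(total travel time over all links $e$)}\\
\text{s.t.} \quad & \sum_{e \in \delta^+(v)} x_e^{(d)} - \sum_{e \in \delta^-(v)} x_e^{(d)} = b_v^{(d)} && \forall v \in V,\ d \in D \quad \text{(flow conservation per demand)},\\
& x_e = \sum_{d \in D} x_e^{(d)} && \forall e \in E \quad \text{(link flow aggregates demands)},
\end{aligned}
\]

where $D$ indexes the constant travel demands and $b_v^{(d)}$ encodes the origin and destination of demand $d$. The route-recovery step then decomposes the optimal link flows $x_e$ into per-vehicle routes.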
In this work we consider a new family of algorithms for sequential prediction, Hierarchical Partitioning Forecasters (HPFs). Our goal is to provide properties that are appealing both theoretically (regret guarantees against a powerful model class) and practically (empirical performance comparable to deep networks). We build upon three principles: hierarchically partitioning the feature space into sub-spaces, blending forecasters specialized to each sub-space, and learning HPFs via local online learning applied to these individual forecasters. Following these principles allows us to obtain regret guarantees where Constant Partitioning Forecasters (CPFs) serve as the competitor. A CPF partitions the feature space into sub-spaces and predicts with a fixed forecaster per sub-space. Fixing a hierarchical partition $\mathcal H$ and considering any CPF whose partition can be constructed from elements of $\mathcal H$, we provide two guarantees: first, a generic one that unveils how local online learning determines the regret of learning the entire HPF online; second, a concrete instance that considers HPFs with linear forecasters (LHPF) and exp-concave losses, where we obtain $O(k \log T)$ regret for sequences of length $T$, where $k$ is a measure of complexity for the competing CPF. Finally, we provide experiments that compare LHPF to various baselines, including state-of-the-art deep learning models, on precipitation nowcasting. Our results indicate that LHPF is competitive in various settings.
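A minimal sketch of the three principles (hierarchical partitioning, blending, local online learning) on a 1-D feature; the binary tree, the squared loss, the exponential-weights blending, and all hyperparameters are illustrative assumptions, not the paper's exact LHPF construction:

```python
import numpy as np

class Node:
    """One cell of the hierarchical partition with its own linear forecaster."""
    def __init__(self, lo, hi, depth, max_depth):
        self.w = np.zeros(2)        # local linear forecaster: w @ [x, 1]
        self.log_weight = 0.0       # blending weight (log domain)
        self.children = None
        if depth < max_depth:
            self.mid = (lo + hi) / 2
            self.children = (Node(lo, self.mid, depth + 1, max_depth),
                             Node(self.mid, hi, depth + 1, max_depth))

def path(node, x):
    """Nodes on the root-to-leaf path whose cells contain feature x."""
    while True:
        yield node
        if node.children is None:
            return
        node = node.children[0] if x < node.mid else node.children[1]

def predict(root, x):
    """Blend the path forecasters with exponential weights on past losses."""
    nodes = list(path(root, x))
    preds = np.array([n.w @ np.array([x, 1.0]) for n in nodes])
    logw = np.array([n.log_weight for n in nodes])
    p = np.exp(logw - logw.max()); p /= p.sum()
    return preds, nodes, p @ preds

def update(nodes, preds, x, y, lr=0.1, eta=1.0):
    """Local online learning: each path forecaster updates on its own loss."""
    for n, yhat in zip(nodes, preds):
        n.log_weight -= eta * (yhat - y) ** 2
        n.w -= lr * 2 * (yhat - y) * np.array([x, 1.0])

root = Node(0.0, 1.0, 0, max_depth=4)
rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.random(); y = np.sin(6 * x)   # toy piecewise-smooth target
    preds, nodes, yhat = predict(root, x)
    update(nodes, preds, x, y)
```

A CPF competitor corresponds to freezing one forecaster per cell of some partition built from the tree's cells; the blending weights let the HPF track whichever such partition performs best.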
This paper aims to obtain, by means of integral transforms, analytical approximations in short times of solutions to boundary value problems for the one-dimensional reaction-diffusion equation with constant coefficients. The general form of the equation is considered on a bounded generic interval, and the three classical types of boundary conditions, i.e., Dirichlet, Neumann, and mixed boundary conditions, are treated in a unified way. The Fourier and Laplace integral transforms are successively applied, and an exact solution is obtained in the Laplace domain. This operational solution is proven to be the exact Laplace transform of the infinite series obtained by the Fourier decomposition method and presented in the literature as solutions to this type of problem. On the basis of this unified operational solution, four cases are distinguished in which innovative formulas expressing consistent analytical approximations in the short-time limit are derived with respect to the behavior of the solution at the boundaries. Compared to the infinite series solutions, the analytical approximations may open new perspectives and applications, among which can be noted the improvement of numerical efficiency in simulations of one-dimensional moving boundary problems, such as in Stefan models.
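For concreteness, one standard constant-coefficient form of the problem and the effect of the Laplace transform in time; the sign conventions and the Robin-type unified boundary conditions below are assumptions consistent with, but not copied from, the paper:

\[
\partial_t u = D\,\partial_{xx} u - k\,u, \quad x \in (a,b),\ t>0, \qquad
\alpha_i\, u + \beta_i\, \partial_x u = g_i(t) \ \text{ at } x = a, b,
\]

where choosing $(\alpha_i,\beta_i)$ as $(1,0)$, $(0,1)$, or one of each recovers Dirichlet, Neumann, and mixed conditions. Taking the Laplace transform $\hat u(x,s) = \int_0^\infty e^{-st} u(x,t)\,dt$ turns the PDE into the linear ODE in $x$

\[
D\,\hat u''(x,s) - (s + k)\,\hat u(x,s) = -u(x,0),
\]

which is solvable exactly in the Laplace domain; the short-time behavior of $u$ then corresponds to the large-$s$ expansion of $\hat u$.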
In a network of reinforced stochastic processes, for certain values of the parameters, all the agents' inclinations synchronize and converge almost surely toward a certain random variable. The present work aims at clarifying when the agents can asymptotically polarize, i.e., when the common limit inclination can take the extreme values, 0 or 1, with probability zero, strictly positive probability, or probability one. Moreover, we present a suitable technique for estimating this probability, which, along with the theoretical results, is framed in the general setting of a class of martingales taking values in [0,1].
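A worked illustration of the martingale framing the abstract invokes (the notation here is generic, not the paper's): write $Z_n$ for the common inclination process, so that

\[
Z_n \in [0,1], \qquad \mathbb E[Z_{n+1} \mid \mathcal F_n] = Z_n
\quad \Longrightarrow \quad Z_n \xrightarrow{\ \text{a.s.}\ } Z_\infty
\]

by the martingale convergence theorem, and polarization asks whether $\mathbb P(Z_\infty \in \{0,1\})$ is zero, strictly positive, or one. For instance, if $Z_\infty \in \{0,1\}$ almost surely, then bounded convergence gives $\mathbb E[Z_\infty] = \mathbb E[Z_0]$ and hence $\mathbb P(Z_\infty = 1) = \mathbb E[Z_0]$, so in that extreme case the initial inclination pins down the polarization probabilities.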
In recent years, learning-based control in robotics has gained significant attention due to its capability to address complex tasks in real-world environments. With the advances in machine learning algorithms and computational capabilities, this approach is becoming increasingly important for solving challenging control problems in robotics by learning unknown or partially known robot dynamics. Active exploration, in which a robot directs itself to states that yield the highest information gain, is essential for efficient data collection and for minimizing human supervision. Similarly, uncertainty-aware deployment has been a growing concern in robotic control, as uncertain actions informed by the learned model can lead to unstable motions or failure. However, active exploration and uncertainty-aware deployment have been studied independently, and there is limited literature that seamlessly integrates them. This paper presents a unified model-based reinforcement learning framework that bridges these two tasks in the robotics control domain. Our framework uses a probabilistic ensemble neural network for dynamics learning, allowing the quantification of epistemic uncertainty via the Jensen-Rényi divergence. The two opposing tasks of exploration and deployment are optimized through state-of-the-art sampling-based model predictive control (MPC), resulting in efficient collection of training data and successful avoidance of uncertain state-action spaces. We conduct experiments on both autonomous vehicles and wheeled robots, showing promising results for both exploration and deployment.
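A sketch of how epistemic uncertainty can be scored with the Jensen-Rényi divergence of an equal-weight Gaussian ensemble, using the closed form available at order $\alpha = 2$; the order, the diagonal-Gaussian assumption, and the toy numbers are illustrative and not necessarily the paper's exact estimator:

```python
import numpy as np

def renyi2_gaussian(sigma2):
    """Order-2 Renyi entropy of N(mu, diag(sigma2)): -log integral of p(x)^2."""
    d = len(sigma2)
    return 0.5 * d * np.log(4 * np.pi) + 0.5 * np.log(sigma2).sum()

def gauss_at(x, mu, sigma2):
    """Density N(x; mu, diag(sigma2)); pairwise overlaps of ensemble members."""
    q = ((x - mu) ** 2 / sigma2).sum()
    return np.exp(-0.5 * q) / np.sqrt((2 * np.pi * sigma2).prod())

def jrd2(mus, sigma2s):
    """Jensen-Renyi divergence (order 2) of an equal-weight Gaussian mixture:
    H2(mixture) - mean_i H2(component i). Larger => more ensemble disagreement."""
    M = len(mus)
    overlap = sum(gauss_at(mus[i], mus[j], sigma2s[i] + sigma2s[j])
                  for i in range(M) for j in range(M))
    h2_mixture = -np.log(overlap / M ** 2)
    return h2_mixture - np.mean([renyi2_gaussian(s2) for s2 in sigma2s])

# Five ensemble heads predicting a 3-D next state: agreeing heads => low JRD.
rng = np.random.default_rng(0)
agree = [rng.normal(0, 0.01, 3) for _ in range(5)]
disagree = [rng.normal(0, 1.0, 3) for _ in range(5)]
s2 = [np.full(3, 0.1) for _ in range(5)]
print(jrd2(agree, s2), jrd2(disagree, s2))
```

An exploration objective would reward sampled MPC trajectories with high JRD, while uncertainty-aware deployment would penalize them.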
Robots with the ability to balance time against the thoroughness of search have the potential to provide time-critical assistance in applications such as search and rescue. Current advances in ergodic coverage-based search methods have enabled robots to completely explore and search an area in a fixed amount of time. However, optimizing time against the quality of autonomous ergodic search has yet to be demonstrated. In this paper, we investigate solutions to the time-optimal ergodic search problem for fast and adaptive robotic search and exploration. We pose the problem as a minimum time problem with an ergodic inequality constraint whose upper bound regulates and balances the granularity of search against time. Solutions to the problem are presented analytically using Pontryagin's conditions of optimality and demonstrated numerically through a direct transcription optimization approach. We show the efficacy of the approach in generating time-optimal ergodic search trajectories in simulation and with drone experiments in a cluttered environment. Obstacle avoidance is shown to be readily integrated into our formulation, and we perform ablation studies that investigate how the problem parameters affect the optimized time and the sensitivity of the search trajectories.
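In symbols, the problem statement from the abstract (the notation for the dynamics and the ergodic metric is an assumption):

\[
\min_{u(\cdot),\, t_f} \ t_f
\qquad \text{s.t.} \quad \dot x(t) = f\big(x(t), u(t)\big), \qquad
\mathcal E\big(x(\cdot)\big) \le \varepsilon,
\]

where $\mathcal E$ measures the ergodicity of the trajectory with respect to the target information density, and the upper bound $\varepsilon$ is the knob that trades search granularity against time: tightening $\varepsilon$ forces more thorough coverage at the cost of longer optimal times.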
We introduce a dynamic mechanism design problem in which the designer wants to offer for sale an item to an agent, and another item to the same agent at some point in the future. The agent's joint distribution of valuations for the two items is known, and the agent knows the valuation for the current item (but not for the one in the future). The designer seeks to maximize expected revenue, and the auction must be deterministic, truthful, and ex post individually rational. The optimum mechanism involves a protocol whereby the seller elicits the buyer's current valuation, and based on the bid makes two take-it-or-leave-it offers, one for now and one for the future. We show that finding the optimum deterministic mechanism in this situation - arguably the simplest meaningful dynamic mechanism design problem imaginable - is NP-hard. We also prove several positive results, among them a polynomial linear programming-based algorithm for the optimum randomized auction (even for many bidders and periods), and we show strong separations in revenue between non-adaptive, adaptive, and randomized auctions, even when the valuations in the two periods are uncorrelated. Finally, for the same problem in an environment in which contracts cannot be enforced, and thus perfection of equilibrium is necessary, we show that the optimum randomized mechanism requires multiple rounds of cheap talk-like interactions.
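To illustrate the object being optimized, here is a toy brute-force search over deterministic mechanisms of exactly the shape the abstract describes: the seller elicits the first-period valuation and answers with two take-it-or-leave-it prices. The discrete valuation grid, the joint distribution, and the exhaustive enumeration are illustrative assumptions (the point of the NP-hardness result is precisely that such enumeration does not scale):

```python
import itertools
import numpy as np

# Valuation grid and a toy joint distribution P[i, j] = Pr(v1 = V[i], v2 = V[j]).
V = [1, 2, 3]
P = np.array([[0.2, 0.1, 0.1],
              [0.1, 0.2, 0.0],
              [0.1, 0.0, 0.2]])

def buyer_report(i, menu):
    """Report maximizing the buyer's utility when the true first value is V[i]:
    sure surplus on item 1 plus expected surplus on item 2 given v1."""
    def util(r):
        p1, p2 = menu[r]
        u1 = max(V[i] - p1, 0.0)
        u2 = sum(P[i, j] * max(V[j] - p2, 0.0) for j in range(len(V))) / P[i].sum()
        return u1 + u2
    return max(range(len(menu)), key=util)

best_rev, best_menu = -1.0, None
# A menu maps each possible report to a pair (price now, price in the future).
for menu in itertools.product(itertools.product(V, V), repeat=len(V)):
    rev = 0.0
    for i in range(len(V)):                      # buyer's true first-period type
        p1, p2 = menu[buyer_report(i, menu)]     # offers triggered by best report
        rev += P[i].sum() * p1 * (V[i] >= p1)    # item 1 sells iff v1 >= p1
        rev += sum(P[i, j] * p2 * (V[j] >= p2) for j in range(len(V)))
    if rev > best_rev:
        best_rev, best_menu = rev, menu
print(best_rev, best_menu)
```

Even in this tiny instance the search is over all price menus, and correlation between $v_1$ and $v_2$ is what lets an adaptive second offer extract more revenue than a fixed one.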
The problem of designing distributed optimization algorithms that are resilient to Byzantine adversaries has received significant attention. For the Byzantine-resilient distributed optimization problem, the goal is to (approximately) minimize the average of the local cost functions held by the regular (non-adversarial) agents in the network. In this paper, we provide a general algorithmic framework for Byzantine-resilient distributed optimization which includes some state-of-the-art algorithms as special cases. We analyze the convergence of algorithms within the framework, and derive a geometric rate of convergence of all regular agents to a ball around the optimal solution (whose size we characterize). Furthermore, we show that approximate consensus can be achieved geometrically fast under some minimal conditions. Our analysis provides insights into the relationships among the convergence region, the distance between the regular agents' values, the step-size, and the properties of the agents' functions for Byzantine-resilient distributed optimization.
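A minimal sketch of one algorithm in the spirit of this framework: each regular agent filters received values with a coordinate-wise trimmed mean and then takes a local gradient step. The quadratic local costs, the complete communication graph, and this specific filter are illustrative assumptions, not the paper's exact update rules:

```python
import numpy as np

def trimmed_mean(values, f):
    """Coordinate-wise trimmed mean: drop the f largest and f smallest entries."""
    s = np.sort(values, axis=0)
    return s[f:len(values) - f].mean(axis=0)

rng = np.random.default_rng(1)
n, f, dim, step = 7, 1, 2, 0.1
targets = rng.normal(size=(n, dim))     # agent i's cost: ||x - targets[i]||^2
x = rng.normal(size=(n, dim))           # agents' current estimates

for _ in range(200):
    msgs = x.copy()
    msgs[0] = rng.normal(scale=100, size=dim)   # agent 0 broadcasts garbage
    for i in range(1, n):                        # regular agents
        blend = trimmed_mean(msgs, f)            # filter out extreme values
        grad = 2 * (x[i] - targets[i])           # local gradient
        x[i] = blend - step * grad

# Regular agents end up in a ball around a minimizer of the regular agents'
# average cost, despite the Byzantine agent, illustrating the convergence region.
print(x[1:])
```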
In large-scale systems, there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as limits on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, distribution also increases the resource cost of communication and synchronisation, which is itself difficult to scale. In this paper we present four algorithms to solve these problems. The combination of these algorithms enables each agent to improve its task-allocation strategy through reinforcement learning, while changing how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, limiting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery than no-knowledge-retention approaches when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
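A sketch of the central mechanism, reduced to a single agent choosing which peer to allocate a subtask to: value estimates are learned online, and the exploration rate shrinks as the agent's confidence in its current strategy grows. The bandit reduction, the confidence proxy, and all constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_peers = 8
true_quality = rng.random(n_peers)    # hidden capability of each peer
q = np.zeros(n_peers)                 # learned value of allocating to each peer
counts = np.zeros(n_peers)

for t in range(1, 3001):
    # Confidence proxy: margin of the best-looking peer over the average.
    confidence = q.max() - q.mean()
    eps = float(np.clip(0.5 - confidence, 0.05, 0.5))  # explore less when confident
    a = rng.integers(n_peers) if rng.random() < eps else int(q.argmax())
    reward = true_quality[a] + 0.1 * rng.normal()      # noisy task outcome
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]                # incremental mean update
```

Retaining `q` across connectivity disruptions is what the recovery comparison refers to: an agent that keeps its learned values resumes near-optimal allocation far faster than one that restarts from scratch.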
The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case, and many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
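The claim that residual networks are discretisations can be made concrete in a few lines: one explicit Euler step of $\dot y = f(y)$ is exactly a residual block $y + h f(y)$. The small MLP vector field below is an arbitrary illustrative choice:

```python
import numpy as np

def f(y, W1, W2):
    """A small MLP vector field dy/dt = f(y)."""
    return W2 @ np.tanh(W1 @ y)

rng = np.random.default_rng(0)
d, hidden, h = 4, 16, 0.1
W1 = rng.normal(size=(hidden, d)) / 4
W2 = rng.normal(size=(d, hidden)) / 4

y = rng.normal(size=d)
for _ in range(10):          # 10 residual blocks == 10 Euler steps
    y = y + h * f(y, W1, W2)  # residual connection == Euler update
print(y)
```

Letting the step size $h \to 0$ (with depth growing correspondingly) recovers the continuous neural ODE, which is what unlocks adaptive solvers, irregular-data handling, and the memory-efficient adjoint backpropagation the thesis surveys.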
Bid optimization for online advertising from a single advertiser's perspective has been thoroughly investigated in both academic research and industrial practice. However, existing work typically assumes that competitors do not change their bids, i.e., that the winning price is fixed, leading to poor performance of the derived solution. Although a few studies use multi-agent reinforcement learning to set up a cooperative game, they still suffer from the following drawbacks: (1) they fail to avoid collusion solutions, in which all the advertisers involved in an auction collude to bid an extremely low price on purpose; (2) previous works cannot handle the underlying complex bidding environment well, leading to poor model convergence. This problem can be amplified when handling the multiple objectives of advertisers, which are practical demands but are not considered by previous work. In this paper, we propose a novel multi-objective cooperative bid optimization formulation called Multi-Agent Cooperative bidding Games (MACG). MACG sets up a carefully designed multi-objective optimization framework in which the different objectives of advertisers are incorporated. A global objective to maximize the overall profit of all advertisements is added in order to encourage better cooperation and also to protect self-bidding advertisers. To avoid collusion, we also introduce an extra platform-revenue constraint. We analyze the optimal functional form of the bidding formula theoretically and design a policy network accordingly to generate auction-level bids. We then design an efficient multi-agent evolutionary strategy for model optimization. Offline experiments and online A/B tests conducted on the Taobao platform indicate that both the single advertiser's objective and the global profit are significantly improved compared to state-of-the-art methods.
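An illustrative-only sketch combining the two ingredients named in the abstract: a global objective (total advertiser profit) with a platform-revenue constraint, optimized by a simple evolutionary strategy over bid-policy parameters. The second-price single-slot auction, the linear bid policy, the penalty form, and all constants are stand-in assumptions, not MACG's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_adv, n_auctions = 4, 200
values = rng.uniform(0.5, 1.5, size=(n_adv, n_auctions))  # per-impression values

def objective(theta):
    """Global profit minus a penalty when platform revenue falls too low."""
    bids = theta[:, None] * values               # linear bid policy per advertiser
    winner = bids.argmax(0)
    price = np.sort(bids, 0)[-2]                 # second-highest bid per auction
    profit = (values[winner, np.arange(n_auctions)] - price).sum()
    revenue = price.sum()
    return profit - 10.0 * max(0.0, 50.0 - revenue)  # revenue floor deters collusion

theta = np.ones(n_adv)
sigma, lr = 0.1, 0.05
for _ in range(300):                             # basic evolutionary-strategy loop
    noise = sigma * rng.normal(size=(16, n_adv))
    scores = np.array([objective(np.clip(theta + z, 0.1, 3.0)) for z in noise])
    grad_est = (noise * (scores - scores.mean())[:, None]).mean(0) / sigma
    theta = np.clip(theta + lr * grad_est, 0.1, 3.0)
print(theta)
```

Without the revenue penalty, the global objective alone would push all bid multipliers toward the collusive low-price solution the abstract warns about; the constraint is what keeps cooperation from collapsing into collusion.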