An informative measurement is the most efficient way to gain information about an unknown state. We give a first-principles derivation of a general-purpose dynamic programming algorithm that returns a sequence of informative measurements by sequentially maximizing the entropy of possible measurement outcomes. An autonomous agent or robot can use this algorithm to decide where best to measure next, planning a path that corresponds to an optimal sequence of informative measurements. The algorithm applies to states and controls that are continuous or discrete, and to agent dynamics that are either stochastic or deterministic, including Markov decision processes. Recent results from approximate dynamic programming and reinforcement learning, including online approximations such as rollout and Monte Carlo tree search, allow an agent or robot to solve the measurement task in real time. The resulting near-optimal solutions include non-myopic paths and measurement sequences that can generally outperform, sometimes substantially, commonly used greedy heuristics such as maximizing the entropy of each measurement outcome. This is demonstrated for a global search problem, where online planning with an extended local search is found to halve the number of measurements in the search.
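To make the greedy baseline concrete, below is a minimal sketch (ours, not the paper's code) for a discrete state: `belief` is the current distribution over states and each measurement is described by a likelihood array $p(y\mid s)$. The greedy heuristic picks the measurement whose predicted outcome distribution has maximum entropy; the paper's dynamic programming solution replaces this one-step argmax with a lookahead over future measurements.

```python
import numpy as np

def outcome_entropy(belief, likelihood):
    """Entropy of the predicted outcome distribution p(y) = sum_s p(y|s) b(s).
    `likelihood` has shape (n_states, n_outcomes)."""
    p_y = likelihood.T @ belief
    p_y = p_y[p_y > 0]
    return float(-np.sum(p_y * np.log(p_y)))

def greedy_measurement(belief, likelihoods):
    """Greedy heuristic: pick the measurement whose predicted outcome
    distribution has maximum entropy under the current belief."""
    return int(np.argmax([outcome_entropy(belief, L) for L in likelihoods]))

def bayes_update(belief, likelihood, outcome):
    """Posterior belief over states after observing `outcome`."""
    post = belief * likelihood[:, outcome]
    return post / post.sum()
```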
This paper presents an efficient motion planning approach for autonomous navigation in complex environments. To address the challenges posed by large-scale, partially unknown complex environments, we design a three-layer motion planning framework comprising global path planning, local path optimization, and time-optimal velocity planning. Compared with existing approaches, the novelty of this work is twofold: 1) a novel heuristic-guided pruning strategy for motion primitives is proposed and fully integrated into the state lattice-based global path planner to further improve the computational efficiency of the graph search, and 2) a new soft-constrained local path optimization approach is proposed, in which the sparse-banded system structure of the underlying optimization problem is fully exploited to solve the problem efficiently. We validate the safety, smoothness, flexibility, and efficiency of our approach in various complex simulation scenarios and challenging real-world tasks. Compared with a recent quintic B\'{e}zier curve-based state space sampling approach, computational efficiency improves by 66.21% in the global planning stage and the motion efficiency of the robot improves by 22.87%. We name the proposed motion planning framework E$\mathrm{^3}$MoP, where the 3 indicates both that the framework has three layers and that each of its three stages is efficient.
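The abstract does not spell out the pruning rule, so the following is only a plausible sketch of heuristic-guided primitive pruning in a state-lattice expansion: discard successors whose optimistic cost bound $g + c + h$ already exceeds a tolerance around the best $f$-value seen so far. The `Primitive` type, the `eps` rule, and all names are hypothetical stand-ins, not the paper's method.

```python
from dataclasses import dataclass

@dataclass
class Primitive:
    dx: float       # displacement of the lattice primitive
    dy: float
    cost: float     # traversal cost of the primitive

def h(state, goal):
    """Admissible heuristic: Euclidean distance to the goal."""
    return ((goal[0] - state[0]) ** 2 + (goal[1] - state[1]) ** 2) ** 0.5

def prune_primitives(state, g, primitives, goal, f_best, eps=1.2):
    """Hypothetical rule: keep only primitives whose optimistic total cost
    g + c + h stays within eps * f_best, shrinking the branching factor
    of the state-lattice graph search."""
    kept = []
    for p in primitives:
        end = (state[0] + p.dx, state[1] + p.dy)
        if g + p.cost + h(end, goal) <= eps * f_best:
            kept.append((p, end))
    return kept
```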
Softwarization and virtualization are key concepts for emerging industries that require ultra-low latency. This is only possible if computing resources, traditionally centralized at the core of communication networks, are moved closer to the user, to the network edge. However, realizing Edge Computing (EC) in the sixth generation (6G) of mobile networks requires efficient resource allocation mechanisms for the placement of Virtual Network Functions (VNFs). Machine learning (ML) methods, and more specifically Reinforcement Learning (RL), are a promising approach to this problem. The main contributions of this work are twofold: first, we obtain a theoretical performance bound for VNF placement in EC-enabled 6G networks by formulating the problem mathematically as a finite Markov Decision Process (MDP) and solving it with a dynamic programming method called Policy Iteration (PI). Second, we develop a practical solution using RL, treating the problem with Q-Learning, which accounts for both computational and communication resources when placing VNFs in the network. Simulation results under different settings of the system parameters show that the performance of the Q-Learning approach is close to that of the optimal PI algorithm, without PI's restrictive assumptions on service statistics. This is particularly interesting when EC resources are scarce and efficient management of these resources is required.
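As a reference point for the learning side, here is a minimal tabular Q-Learning sketch. The `env` interface (`reset()` returning a state, `step(action)` returning next state, reward, and a done flag) and the hyperparameters are our assumptions; the paper's contribution lies in the MDP encoding of VNF placement covering both computational and communication resources, not in this generic update.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=5000,
               alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-Learning with an epsilon-greedy behavior policy."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Explore with probability eps, otherwise act greedily.
            a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
            s2, r, done = env.step(a)
            target = r + (0.0 if done else gamma * Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q
```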
The concept of Nash equilibrium illuminates the structure of rational behavior in multi-agent settings. However, the concept is only as useful as our ability to compute it efficiently. We introduce Cut-and-Play, an algorithm for computing Nash equilibria in non-cooperative simultaneous games where each player's objective is linear in their own variables and bilinear in the other players' variables. Drawing on the rich theory of integer programming, we alternate between constructing (i) increasingly tight outer approximations of the convex hull of each player's feasible set, via branching and cutting plane methods, and (ii) increasingly accurate inner approximations of these hulls, by finding extreme points and rays of the convex hulls. In particular, we prove the correctness of our algorithm when these convex hulls are polyhedra. Our algorithm lets us leverage mixed integer programming technology to compute equilibria for a large class of games. Further, we integrate existing cutting plane families into the algorithm, significantly speeding up equilibrium computation. We present extensive computational results for Integer Programming Games and simultaneous games among bilevel leaders. In both cases, our framework outperforms the state of the art in computing time and solution quality.
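In symbols (notation ours), each player $p$ solves a problem of the form

$$\max_{x^p \in \mathcal{X}^p} \; (c^p)^\top x^p + \sum_{q \neq p} (x^q)^\top C^{pq}\, x^p,$$

where $\mathcal{X}^p$ is a mixed-integer feasible set: the objective is linear in the player's own variables $x^p$ and bilinear in each opponent's variables $x^q$. Cut-and-Play alternately tightens outer approximations of $\operatorname{conv}(\mathcal{X}^p)$ with cuts and branching, and refines inner approximations built from extreme points and rays of these hulls.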
This paper presents a distributed algorithm applicable to a wide range of practical multi-robot applications. In such applications, the user-defined objectives of the mission can be cast as a general optimization problem without explicit guidelines for the subtasks assigned to each robot. Owing to the unknown environment, unknown robot dynamics, sensor nonlinearities, and similar factors, the analytic form of the optimization cost function is not available a priori, so standard gradient-descent-like algorithms are not applicable to these problems. To tackle this, we introduce a new algorithm that carefully designs each robot's subcost function, the optimization of which accomplishes the overall team objective. Building on this transformation, we propose a distributed methodology based on the cognitive-based adaptive optimization (CAO) algorithm that approximates the evolution of each robot's cost function and adequately optimizes its decision variables (robot actions). This is achieved by learning online only the problem-specific characteristics that affect the accomplishment of the mission objectives. The overall low-complexity algorithm can straightforwardly incorporate any kind of operational constraint, is fault tolerant, and can appropriately handle time-varying cost functions. A cornerstone of this approach is that it shares the convergence characteristics of block coordinate descent algorithms. The proposed algorithm is evaluated in three heterogeneous simulation set-ups under multiple scenarios, against both general-purpose and problem-specific algorithms. Source code is available at \url{//github.com/athakapo/A-distributed-plug-n-play-algorithm-for-multi-robot-applications}.
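To convey the flavor of the CAO machinery, here is a minimal single-robot sketch (ours, not the paper's implementation): fit a surrogate of the unknown cost from past (action, measured cost) pairs, then move to the best of several random perturbations of the current action. The quadratic feature map and the candidate-perturbation rule are illustrative assumptions.

```python
import numpy as np

def features(x):
    """Quadratic polynomial features for the local cost surrogate."""
    iu = np.triu_indices(len(x))
    return np.concatenate(([1.0], x, np.outer(x, x)[iu]))

def cao_step(x, X_hist, J_hist, rng, n_cand=50, sigma=0.05):
    """One CAO-style update: least-squares fit of the measured costs,
    then pick the best of n_cand random perturbations of x under the
    fitted surrogate (no analytic gradient of the true cost is needed)."""
    Phi = np.array([features(xi) for xi in X_hist])
    theta, *_ = np.linalg.lstsq(Phi, np.asarray(J_hist), rcond=None)
    cand = x + sigma * rng.standard_normal((n_cand, len(x)))
    scores = np.array([features(c) @ theta for c in cand])
    return cand[int(scores.argmin())]
```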
In this paper we obtain error bounds for fully discrete approximations of infinite horizon problems via the dynamic programming approach. It is well known that, for a time discretization with positive step size $h$, an error bound of size $h$ can be proved for the difference between the value function (the viscosity solution of the Hamilton-Jacobi-Bellman equation corresponding to the infinite horizon problem) and the value function of the discrete-time problem. However, when a spatial discretization based on elements of size $k$ is also included, the error bound found in the literature for the difference between the value functions of the continuous problem and the fully discrete problem is only of size $O(k/h)$. In this paper we revisit the error bound of the fully discrete method and prove, under assumptions similar to those of the time-discrete case, that the error of the fully discrete case is in fact $O(h+k)$, giving first order in both time and space. This error bound matches the numerical experiments of many papers in the literature, in which the $1/h$ behaviour from the bound $O(k/h)$ has not been observed.
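Schematically (notation ours), with $v$ the value function, $v_h$ the time-discrete approximation, and $v_h^k$ the fully discrete one, the bounds read

$$\|v - v_h\|_\infty \le C\,h, \qquad \|v - v_h^k\|_\infty \le C\Big(h + \frac{k}{h}\Big) \ \text{(previous literature)}, \qquad \|v - v_h^k\|_\infty \le C\,(h + k) \ \text{(this paper)}.$$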
We revisit Min-Mean-Cycle, the classical problem of finding a cycle of minimum mean weight in a weighted directed graph. Despite an extensive algorithmic literature, previous work falls short of a near-linear runtime in the number of edges $m$. We propose an approximation algorithm that, for graphs with polylogarithmic diameter, achieves a near-linear runtime. In particular, this is the first algorithm whose runtime for the complete graph scales in the number of vertices $n$ as $\tilde{O}(n^2)$. Moreover, regardless of the diameter, the algorithm uses only $O(n)$ memory beyond reading the input, making it "memory-optimal". Our approach is based on solving a linear programming relaxation using entropic regularization, which reduces the problem to Matrix Balancing, \`a la the popular reduction of Optimal Transport to Matrix Scaling. The algorithm is practical and simple to implement.
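Schematically (notation ours), the quantity in question and the standard LP relaxation underlying the approach are

$$\mu^{*} \;=\; \min_{\text{cycles } C} \frac{w(C)}{|C|} \;=\; \min\Big\{ \sum_{(i,j)\in E} w_{ij} f_{ij} \;:\; f \ge 0,\ \ \sum_{(i,j)\in E} f_{ij} = 1,\ \ \sum_{j} f_{ij} = \sum_{j} f_{ji}\ \ \forall i \Big\},$$

i.e., a minimum-cost circulation normalized to unit total flow; entropic regularization of this LP is what reduces the problem to Matrix Balancing.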
This paper presents a hybrid motion planning strategy that combines a deep generative network with a conventional motion planning method. Existing planning methods such as A* and Hybrid A* are widely used in path planning tasks because of their ability to determine feasible paths even in complex environments; however, they are limited in terms of efficiency. To overcome these limitations, we introduce a path planning algorithm based on a neural network, namely the neural Hybrid A*. We propose using a conditional variational autoencoder (CVAE) to guide the search algorithm, exploiting the CVAE's ability to learn information about the planning space from information about the parking environment. A non-uniform expansion strategy is employed, based on a distribution of feasible trajectories learned from demonstrations. The proposed method effectively learns representations of a given state and improves the performance of the search algorithm.
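A sketch of what non-uniform expansion might look like (the names and the `decode(z, cond)` interface are our assumptions, not the paper's code): rather than enumerating a fixed uniform set of steering angles, draw candidate controls from the CVAE decoder conditioned on the current state and an encoding of the parking environment.

```python
import numpy as np

def sample_expansions(decode, state, env_code, latent_dim, n=8, seed=None):
    """Non-uniform node expansion guided by a learned generative model:
    `decode(z, cond)` is an assumed CVAE decoder returning one candidate
    control given a latent sample z and a conditioning vector."""
    rng = np.random.default_rng(seed)
    cond = np.concatenate([np.asarray(state), np.asarray(env_code)])
    return [decode(rng.standard_normal(latent_dim), cond) for _ in range(n)]
```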
Many important real-world problems have action spaces that are high-dimensional, continuous, or both, making full enumeration of all possible actions infeasible. Instead, only small subsets of actions can be sampled for the purposes of policy evaluation and improvement. In this paper, we propose a general framework for reasoning in a principled way about policy evaluation and improvement over such sampled action subsets. This sample-based policy iteration framework can, in principle, be applied to any reinforcement learning algorithm based on policy iteration. Concretely, we propose Sampled MuZero, an extension of the MuZero algorithm that is able to learn in domains with arbitrarily complex action spaces by planning over sampled actions. We demonstrate this approach on the classical board game of Go and on two continuous control benchmark domains: DeepMind Control Suite and Real-World RL Suite.
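As a toy illustration of improvement over a sampled subset (a simplification, not Sampled MuZero's exact operator, which uses visit counts and an importance-corrected improvement): sample $K$ actions from the prior policy, evaluate only those, and renormalize the improved policy over the evaluated subset.

```python
import numpy as np

def sampled_improvement(prior, q_values, k, rng, temperature=1.0):
    """Policy improvement restricted to a sampled action subset:
    actions never drawn keep zero probability in the improved policy."""
    idx = rng.choice(len(prior), size=k, replace=True, p=prior)
    idx = np.unique(idx)                           # the evaluated subset
    logits = np.log(prior[idx]) + q_values[idx] / temperature
    w = np.exp(logits - logits.max())              # stable softmax
    improved = np.zeros_like(prior)
    improved[idx] = w / w.sum()
    return improved
```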
Retrosynthetic planning is a critical task in organic chemistry that identifies a series of reactions leading to the synthesis of a target product. The vast number of possible chemical transformations makes the search space enormous, and retrosynthetic planning is challenging even for experienced chemists. Moreover, existing methods either require expensive return estimation by rollout, which has high variance, or optimize for search speed rather than solution quality. In this paper, we propose Retro*, a neural-based A*-like algorithm that finds high-quality synthetic routes efficiently. It maintains the search as an AND-OR tree and learns a neural search bias from off-policy data. Guided by this neural network, it then performs best-first search efficiently during new planning episodes. Experiments on benchmark USPTO datasets show that our proposed method outperforms the existing state of the art in both success rate and solution quality, while also being more efficient.
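A simplified skeleton of best-first search with a learned bias (Retro* itself works on an AND-OR tree over molecules and reactions; this OR-tree sketch with assumed `expand`, `value_fn`, and `is_goal` interfaces only shows the A*-like priority $g(n) + V(n)$):

```python
import heapq

def best_first_search(root, expand, value_fn, is_goal, max_iter=10000):
    """Best-first search ordered by accumulated cost plus a learned
    estimate of the remaining cost. `expand(node)` yields
    (child, edge_cost) pairs."""
    frontier = [(value_fn(root), 0.0, 0, root)]
    tie = 1                        # tie-breaker so nodes are never compared
    for _ in range(max_iter):
        if not frontier:
            return None
        _, g, _, node = heapq.heappop(frontier)
        if is_goal(node):
            return node
        for child, cost in expand(node):
            heapq.heappush(frontier, (g + cost + value_fn(child), g + cost, tie, child))
            tie += 1
    return None
```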
We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve it with computational and sample complexity lower than that of solving the original overall problem, in which one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity compared with existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.