A novel approach to efficiently treat pure-state equality constraints in optimal control problems (OCPs) using a Riccati recursion algorithm is proposed. The proposed method transforms a pure-state equality constraint into a mixed state-control constraint such that the constraint is expressed by variables at a certain previous time stage. It is shown that if the solution satisfies the second-order sufficient conditions of the OCP with the transformed mixed state-control constraints, it is a local minimum of the OCP with the original pure-state constraints. A Riccati recursion algorithm is derived to solve the OCP using the transformed constraints with time complexity linear in the number of grid points of the horizon, in contrast to a previous approach that scales cubically with respect to the total dimension of the pure-state equality constraints. Numerical experiments on the whole-body optimal control of quadrupedal gaits that involve pure-state equality constraints owing to contact switches demonstrate the effectiveness of the proposed method over existing approaches.
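For readers unfamiliar with the backward Riccati sweep the abstract builds on, the following is a minimal sketch of the standard unconstrained recursion for a scalar discrete-time LQR problem. It is illustrative only: the symbols (a, b, q, r, horizon N) and the scalar setting are assumptions, not the paper's constrained algorithm.

```python
def riccati_lqr(a, b, q, r, N):
    """Backward Riccati recursion for the scalar LQR problem
    x_{t+1} = a x_t + b u_t, cost sum_t (q x_t^2 + r u_t^2).
    One backward sweep gives all feedback gains, hence O(N) in
    the horizon length -- the linear complexity the abstract refers to."""
    p = q            # terminal cost-to-go weight
    gains = []
    for _ in range(N):
        k = (b * p * a) / (r + b * p * b)  # optimal gain at this stage
        p = q + a * p * (a - b * k)        # Riccati update of cost-to-go
        gains.append(k)
    gains.reverse()                        # index gains forward in time
    return gains
```

With a = b = q = r = 1 the recursion converges to the well-known stationary gain (sqrt(5) - 1)/2.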
We develop a computationally tractable method for estimating the optimal map between two distributions over $\mathbb{R}^d$ with rigorous finite-sample guarantees. Leveraging an entropic version of Brenier's theorem, we show that our estimator -- the barycentric projection of the optimal entropic plan -- is easy to compute using Sinkhorn's algorithm. As a result, unlike current approaches for map estimation, which are slow to evaluate when the dimension or number of samples is large, our approach is parallelizable and extremely efficient even for massive data sets. Under smoothness assumptions on the optimal map, we show that our estimator enjoys comparable statistical performance to other estimators in the literature, but with much lower computational cost. We showcase the efficacy of our proposed estimator through numerical examples. Our proofs are based on a modified duality principle for entropic optimal transport and on a method for approximating optimal entropic plans due to Pal (2019).
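A minimal pure-Python sketch of the two standard ingredients this abstract combines: Sinkhorn iterations for the entropic plan between two empirical distributions, followed by the barycentric projection as the map estimate. The one-dimensional setting, uniform weights, and parameter values are illustrative assumptions, not the paper's implementation.

```python
import math

def sinkhorn_barycentric(x, y, eps=0.1, iters=200):
    """Entropic OT between uniform empirical measures on samples x and y
    (lists of floats) via Sinkhorn scaling, then the barycentric
    projection T(x_i) = sum_j pi_ij y_j / sum_j pi_ij as the map estimate."""
    n, m = len(x), len(y)
    # Gibbs kernel K_ij = exp(-c(x_i, y_j)/eps) with squared-distance cost
    K = [[math.exp(-(xi - yj) ** 2 / eps) for yj in y] for xi in x]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):  # alternating marginal-matching scalings
        u = [(1.0 / n) / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [(1.0 / m) / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # barycentric projection: conditional mean of y under the entropic plan
    T = []
    for i in range(n):
        row = [u[i] * K[i][j] * v[j] for j in range(m)]
        T.append(sum(p * yj for p, yj in zip(row, y)) / sum(row))
    return T
```

For two well-separated pairs of points the estimated map is close to the monotone matching, as expected from Brenier's theorem in one dimension.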
We present an a posteriori error estimate based on equilibrated stress reconstructions for the finite element approximation of a unilateral contact problem with weak enforcement of the contact conditions. We start by proving a guaranteed upper bound for the dual norm of the residual. This norm is shown to control the natural energy norm up to a boundary term, which can be removed under a saturation assumption. The basic estimate is then refined to distinguish the different components of the error, and is used as a starting point to design an algorithm including adaptive stopping criteria for the nonlinear solver and automatic tuning of a regularization parameter. We then discuss a practical way of computing the stress reconstruction based on the Arnold-Falk-Winther finite elements. Finally, after briefly discussing the efficiency of our estimators, we showcase their performance on a panel of numerical tests.
An informative measurement is the most efficient way to gain information about an unknown state. We give a first-principles derivation of a general-purpose dynamic programming algorithm that returns a sequence of informative measurements by sequentially maximizing the entropy of possible measurement outcomes. This algorithm can be used by an autonomous agent or robot to decide where best to measure next, planning a path corresponding to an optimal sequence of informative measurements. This algorithm is applicable to states and controls that are continuous or discrete, and to agent dynamics that are either stochastic or deterministic, including Markov decision processes. Recent results from approximate dynamic programming and reinforcement learning, including on-line approximations such as rollout and Monte Carlo tree search, allow an agent or robot to solve the measurement task in real-time. The resulting near-optimal solutions include non-myopic paths and measurement sequences that can generally outperform, sometimes substantially, commonly-used greedy heuristics such as maximizing the entropy of each measurement outcome. This is demonstrated for a global search problem, where on-line planning with an extended local search is found to reduce the number of measurements in the search by half.
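The greedy heuristic the abstract compares against — pick the single measurement whose outcome distribution has maximal entropy — can be sketched in a few lines. This is a toy illustration of the baseline, not the paper's dynamic programming algorithm, which plans whole measurement sequences; the dict-based representation is an assumption.

```python
import math

def entropy(dist):
    """Shannon entropy of a dict outcome -> probability."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def best_measurement(prior, measurements):
    """Greedy one-step choice: pick the measurement whose induced outcome
    distribution has maximal entropy. `prior`: dict state -> probability;
    `measurements`: dict name -> (dict state -> observed outcome)."""
    def outcome_dist(obs):
        d = {}
        for s, p in prior.items():
            d[obs[s]] = d.get(obs[s], 0.0) + p
        return d
    return max(measurements,
               key=lambda name: entropy(outcome_dist(measurements[name])))
```

Over a uniform prior on four states, a measurement that splits the states evenly (outcome entropy log 2) is preferred to an uneven 1-vs-3 split.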
Navigation of mobile robots within crowded environments is an essential task in various use cases, such as delivery, health care, or logistics. Deep Reinforcement Learning (DRL) emerged as an alternative method to replace overly conservative approaches and promises more efficient and flexible navigation. However, Deep Reinforcement Learning is limited to local navigation due to its myopic nature. Previous works proposed various ways to combine Deep Reinforcement Learning with conventional methods, but a common problem is the complexity of highly dynamic environments due to the unpredictability of humans and other objects within the environment. In this paper, we propose a hierarchical waypoint generator, which considers moving obstacles and thus generates safer and more robust waypoints for Deep-Reinforcement-Learning-based local planners. To this end, we utilize Delaunay Triangulation to encode obstacles and incorporate an extended hybrid A-Star approach to efficiently search for an optimal solution in the time-state space. We compare our waypoint generator against two baseline approaches and outperform them in terms of safety, efficiency, and robustness.
We consider the control of McKean-Vlasov dynamics whose coefficients have mean field interactions in the state and control. We show that for a class of linear-convex mean field control problems, the unique optimal open-loop control admits the optimal 1/2-H\"{o}lder regularity in time. Consequently, we prove that the value function can be approximated by one with piecewise constant controls and discrete-time state processes arising from Euler-Maruyama time stepping, up to an order 1/2 error, and the optimal control can be approximated up to an order 1/4 error. These results are novel even for the case without mean field interaction.
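The Euler-Maruyama time stepping with piecewise-constant controls mentioned in the abstract can be illustrated for a generic controlled SDE dX = b(X, u) dt + sigma dW. This is a minimal sketch of the standard scheme only; the function signatures, the scalar state, and the constant diffusion coefficient are assumptions, not the paper's mean-field setting.

```python
import random

def euler_maruyama(x0, control, drift, sigma, T=1.0, steps=100, seed=0):
    """Simulate dX = drift(X, u) dt + sigma dW on [0, T] with a
    piecewise-constant control u = control(t_k) held over each step."""
    rng = random.Random(seed)
    dt = T / steps
    x = x0
    for k in range(steps):
        u = control(k * dt)                       # control frozen on the step
        x += drift(x, u) * dt + sigma * rng.gauss(0.0, dt ** 0.5)
    return x
```

As a sanity check, with zero noise and drift equal to a constant control of 1, the scheme integrates dX = dt exactly, returning x0 + T.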
Soft robots are made of compliant and deformable materials and can perform tasks challenging for conventional rigid robots. The inherent compliance of soft robots makes them more suitable and adaptable for interactions with humans and the environment. However, this advantage comes at a cost: their continuum nature makes it challenging to develop robust model-based control strategies. Specifically, an adaptive control approach addressing this challenge has not yet been applied to physical soft robotic arms. This work presents a reformulation of dynamics for a soft continuum manipulator using the Euler-Lagrange method. The proposed model eliminates the simplifying assumption made in previous works and provides a more accurate description of the robot's inertia. Based on our model, we introduce a task-space adaptive control scheme. This controller is robust against model parameter uncertainties and unknown input disturbances. The controller is implemented on a physical soft continuum arm. A series of experiments were carried out to validate the effectiveness of the controller in task-space trajectory tracking under different payloads. The controller outperforms the state-of-the-art method in terms of both accuracy and robustness. Moreover, the proposed model-based control design is flexible and can be generalized to any continuum robotic arm with an arbitrary number of continuum segments.
We present a perception constrained visual predictive control (PCVPC) algorithm for quadrotors to enable aggressive flights without using any position information. Our framework leverages nonlinear model predictive control (NMPC) to formulate a constrained image-based visual servoing (IBVS) problem. The quadrotor dynamics, image dynamics, actuation constraints, and visibility constraints are taken into account to handle quadrotor maneuvers with high agility. Two main challenges of applying IBVS to agile drones are considered: (i) high sensitivity of depths to intense orientation changes, and (ii) conflict between the visual servoing objective and action objective due to the underactuated nature. To deal with the first challenge, we parameterize a visual feature by a bearing vector and a distance, by which the depth will no longer be involved in the image dynamics. Meanwhile, we settle the conflict problem by compensating for the rotation in the future visual servoing cost using the predicted orientations of the quadrotor. Our approach in simulation shows that (i) it can work without any position information, (ii) it can achieve a maximum reference speed of 9 m/s in trajectory tracking without losing the target, and (iii) it can reach a landmark, e.g., a gate in drone racing, from varied initial configurations.
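The bearing-vector-plus-distance parameterization the abstract describes amounts to splitting a 3-D feature point into its direction (a unit vector) and its range. A minimal sketch, assuming a point given in the camera frame; this illustrates the parameterization only, not the paper's image dynamics or NMPC formulation.

```python
import math

def bearing_and_distance(p):
    """Split a 3-D feature point p = (x, y, z) in the camera frame into a
    unit bearing vector and a scalar distance, so that depth no longer
    appears explicitly in the feature representation."""
    d = math.sqrt(sum(c * c for c in p))
    bearing = tuple(c / d for c in p)
    return bearing, d
```

For example, the point (3, 0, 4) has distance 5 and bearing (0.6, 0, 0.8).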
Heat Equation Driven Area Coverage (HEDAC) is a state-of-the-art multi-agent ergodic motion control method guided by the gradient of a potential field. A finite element method is hereby implemented to obtain a solution of the Helmholtz partial differential equation, which models the potential field for surveying motion control. This allows us to survey arbitrarily shaped domains and to include obstacles in an elegant and robust manner intrinsic to HEDAC's fundamental idea. For a simple kinematic motion, the obstacle and boundary avoidance constraints are successfully handled by directing the agent motion with the gradient of the potential. However, including additional constraints, such as the minimal clearance distance from stationary and moving obstacles and the minimal path curvature radius, requires further alterations of the control algorithm. We introduce a relatively simple yet robust approach for handling these constraints by formulating a straightforward optimization problem based on collision-free escape-route maneuvers. This approach provides a guaranteed collision avoidance mechanism, while being computationally inexpensive as a result of the optimization problem partitioning. The proposed motion control is evaluated in simulations of three realistic surveying scenarios, showing the effectiveness of the surveying and the robustness of the control algorithm. Furthermore, potential maneuvering difficulties due to improperly defined surveying scenarios are highlighted and we provide guidelines on how to overcome them. The results are promising and indicate real-world applicability of the proposed constrained multi-agent motion control for autonomous surveying and potentially other HEDAC utilizations.
Persistently monitoring a region under localization and communication constraints is a challenging problem. In this paper, we consider a heterogeneous robotic system consisting of two types of agents -- anchor agents that have accurate localization capability, and auxiliary agents that have low localization accuracy. The auxiliary agents must be within the communication range of an anchor, directly or indirectly, to localize themselves. The objective of the robotic team is to minimize the uncertainty in the environment through persistent monitoring. We propose a multi-agent deep reinforcement learning (MADRL) based architecture with graph attention called Graph Localized Proximal Policy Optimization (GALLOP), which incorporates the localization and communication constraints of the agents along with the persistent monitoring objective to determine motion policies for each agent. We evaluate the performance of GALLOP on three different custom-built environments. The results show the agents are able to learn a stable policy and outperform greedy and random search baseline approaches.
We study constrained reinforcement learning (CRL) from a novel perspective by setting constraints directly on state density functions, rather than on the value functions considered by previous works. State density has a clear physical and mathematical interpretation, and is able to express a wide variety of constraints such as resource limits and safety requirements. Density constraints can also avoid the time-consuming process of designing and tuning cost functions required by value function-based constraints to encode system specifications. We leverage the duality between density functions and Q functions to develop an effective algorithm that solves the density constrained RL problem optimally while guaranteeing that the constraints are satisfied. We prove that the proposed algorithm converges to a near-optimal solution with a bounded error even when the policy update is imperfect. We use a set of comprehensive experiments to demonstrate the advantages of our approach over state-of-the-art CRL methods, with a wide range of density constrained tasks as well as standard CRL benchmarks such as Safety-Gym.
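To make the notion of a state-density constraint concrete, the following toy sketch estimates an empirical state visitation density from rollouts and checks it against per-state caps. This only illustrates what the constraint expresses; it is not the paper's dual algorithm, and the undiscounted, discrete-state setting is an assumption.

```python
def state_density(trajectories):
    """Empirical state visitation density over discrete states:
    fraction of all visited time steps spent in each state."""
    counts, total = {}, 0
    for traj in trajectories:
        for s in traj:
            counts[s] = counts.get(s, 0) + 1
            total += 1
    return {s: c / total for s, c in counts.items()}

def violated_states(density, limits):
    """Return the states whose density exceeds its cap, i.e. where the
    constraint rho(s) <= limits[s] fails (e.g. a safety or resource cap)."""
    return [s for s, cap in limits.items() if density.get(s, 0.0) > cap]
```

For instance, two rollouts [0, 1, 1] and [1, 2, 0] put half of all visits in state 1, violating a cap of 0.4 on that state.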