In this paper, we tackle the problem of flying a quadrotor using time-optimal control policies that can be replanned online when the environment changes or when unknown disturbances are encountered. This problem is challenging because time-optimal trajectories that consider the full quadrotor dynamics are computationally expensive to generate (on the order of minutes or even hours). We introduce a sampling-based method for efficient generation of time-optimal paths of a point-mass model. These paths are then tracked using a Model Predictive Contouring Control approach that considers the full quadrotor dynamics and the single-rotor thrust limits. Our combined approach runs in real time and is the first time-optimal method able to adapt to changes on the fly. We showcase its adaptation capabilities by flying a quadrotor at more than 60 km/h on a racing track with moving gates. Additionally, we show that our online replanning approach can cope with strong disturbances caused by winds of up to 68 km/h.
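To make the point-mass idea concrete, the following minimal Python sketch computes a rest-to-rest minimum-time segment under a per-axis acceleration bound; it is only a toy illustration under a box constraint, not the paper's sampling-based planner, and all names are assumptions.

```python
import numpy as np

def rest_to_rest_min_time(d, a_max):
    """Minimum time for a 1D point mass to cover a displacement d, starting
    and ending at rest, under bang-bang acceleration with |a| <= a_max."""
    return 2.0 * np.sqrt(abs(d) / a_max)

def segment_min_time(p0, p1, a_max):
    """With an independent per-axis bound |a_i| <= a_max, the slowest axis
    sets the rest-to-rest segment time (faster axes use less acceleration)."""
    return max(rest_to_rest_min_time(d, a_max) for d in np.subtract(p1, p0))

# Example: a 10 m horizontal / 3 m vertical hop with a_max = 20 m/s^2 per axis
print(segment_min_time([0.0, 0.0, 0.0], [10.0, 0.0, 3.0], 20.0))
```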
Indoor motion planning addresses the problem of navigating an agent through a cluttered environment. Considerable work has been done in this field, but existing methods often fail to strike a balance between computationally inexpensive online path planning and optimality of the resulting path. Moreover, these works often prove optimality only for single-start, single-goal worlds. To address these challenges, we present a multiple-waypoint path planner and controller stack for navigation in unknown indoor environments, where waypoints include the goal along with the intermediate points that the robot must traverse before reaching the goal. Our approach makes use of a global planner (to find the next best waypoint at any instant), a local planner (to plan the path to a specific waypoint), and an adaptive Model Predictive Control strategy (for robust system control and faster maneuvers). We evaluate our algorithm on a set of randomly generated obstacle maps, intermediate waypoints, and start-goal pairs; the results indicate a significant reduction in computational cost together with high accuracy and robust control.
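As a stand-in for the "next best waypoint" role of the global planner, the sketch below picks the unvisited waypoint with the smallest straight-line detour cost; this heuristic and all names are illustrative assumptions, not the paper's planner.

```python
import numpy as np

def next_best_waypoint(position, waypoints, goal, visited):
    """Greedy global-planner stand-in: among unvisited waypoints, choose the
    one minimizing the straight-line cost position -> waypoint -> goal."""
    best, best_cost = None, np.inf
    for i, w in enumerate(waypoints):
        if i in visited:
            continue
        cost = (np.linalg.norm(np.subtract(w, position))
                + np.linalg.norm(np.subtract(goal, w)))
        if cost < best_cost:
            best, best_cost = i, cost
    return best
```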
We consider the problem of learning the optimal threshold policy for control problems. Threshold policies make control decisions by evaluating whether an element of the system state exceeds a certain threshold, whose value is determined by the other elements of the system state. By leveraging the monotone property of threshold policies, we prove that their policy gradients have a surprisingly simple expression. We use this simple expression to build an off-policy actor-critic algorithm for learning the optimal threshold policy. Simulation results show that our algorithm significantly outperforms other reinforcement learning algorithms because it exploits the monotone property. In addition, we show that the Whittle index, a powerful tool for restless multi-armed bandit problems, is equivalent to the optimal threshold policy for an alternative problem. This observation leads to a simple algorithm that finds the Whittle index by learning the optimal threshold policy in the alternative problem. Simulation results show that our algorithm learns the Whittle index much faster than several recent studies that learn the Whittle index through indirect means.
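The following sketch illustrates the structure of a threshold policy and why its gradient is simple: with a sigmoid relaxation of the step and a linear threshold (both placeholder assumptions, not the paper's exact parameterization or derivation), the gradient of the log-probability is just a scaled copy of the context.

```python
import numpy as np

def act_prob(x, context, theta):
    """Smoothed threshold policy: probability of the 'active' action when the
    scalar state component x exceeds a context-dependent threshold.
    The linear threshold theta^T context is a placeholder parameterization."""
    return 1.0 / (1.0 + np.exp(-(x - np.dot(theta, context))))

def grad_log_prob(x, context, theta, action):
    """Gradient of log pi(action | x, context) w.r.t. theta. Because theta
    enters only through the margin x - theta^T context, the gradient reduces
    to (p - action) * context, mirroring the kind of simple expression an
    actor-critic update can exploit (here for the sigmoid relaxation)."""
    p = act_prob(x, context, theta)
    return (p - action) * np.asarray(context, dtype=float)
```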
The significant presence of demand charges in electric bills motivates large-load customers to utilize energy storage to reduce the peak procurement from the grid. We herein study the problem of energy storage allocation for peak minimization, under the online setting where irrevocable decisions are sequentially made without knowing future demands. The problem is uniquely challenging due to (i) the coupling of online decisions across time imposed by the inventory constraints and (ii) the noncumulative nature of the peak procurement. We apply the CR-Pursuit framework and address the challenges unique to our minimization problem to design an online algorithm achieving the optimal competitive ratio (CR) among all online algorithms. We show that the optimal CR can be computed in polynomial time by solving a linear number of linear-fractional problems. More importantly, we generalize our approach to develop an \emph{anytime-optimal} online algorithm that achieves the best possible CR at any epoch, given the inputs and online decisions so far. The algorithm retains the optimal worst-case performance and attains adaptive average-case performance. Trace-driven simulations show that our algorithm can decrease the peak demand by an extra 19% compared to baseline alternatives under typical settings.
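To convey the flavor of a CR-Pursuit-style rule, the sketch below caps the grid draw at pi times the offline-optimal peak of the demands revealed so far, discharging storage to cover any excess and recharging when there is headroom. The inventory handling is deliberately simplified and feasibility is not guaranteed here; names and the model are assumptions, not the paper's algorithm.

```python
def cr_pursuit_step(demand, opt_peak_so_far, pi, soc, capacity):
    """One step of a simplified CR-Pursuit-style online rule for peak shaving."""
    target = pi * opt_peak_so_far                             # pursued peak level
    discharge = min(max(demand - target, 0.0), soc)           # cover excess from storage
    charge = min(max(target - demand, 0.0), capacity - soc)   # refill using spare headroom
    grid = demand - discharge + charge                        # actual procurement this slot
    return grid, soc - discharge + charge                     # procurement, updated state of charge
```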
Performing highly agile dynamic motions, such as jumping or running on uneven stepping stones, has remained a challenging problem in legged robot locomotion. This paper presents a framework that combines trajectory optimization and model predictive control (MPC) to perform robust and consecutive jumping on stepping stones. In our approach, we first utilize trajectory optimization based on the full nonlinear dynamics of the robot to generate periodic jumping trajectories for various jumping distances. An MPC-based jumping controller is then designed to realize smooth jumping transitions, enabling the robot to achieve continuous jumps on stepping stones. Thanks to the incorporation of MPC as a real-time feedback controller, the proposed framework is also shown to be robust to uneven platforms with unknown height perturbations and to model uncertainty in the robot dynamics.
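The sketch below shows the bare structure of a linear tracking MPC used as a real-time feedback layer; it omits the robot's jumping dynamics and contact schedule entirely, and the matrices, weights, and horizon are assumptions for illustration only.

```python
import numpy as np
import cvxpy as cp

def tracking_mpc(A, B, x0, x_ref, u_max, horizon=10):
    """Generic linear tracking MPC: follow a reference trajectory x_ref of
    shape (nx, horizon + 1) subject to x_{k+1} = A x_k + B u_k and |u| <= u_max."""
    nx, nu = B.shape
    X = cp.Variable((nx, horizon + 1))
    U = cp.Variable((nu, horizon))
    cost, constraints = 0, [X[:, 0] == x0]
    for k in range(horizon):
        cost += cp.sum_squares(X[:, k + 1] - x_ref[:, k + 1]) \
                + 0.01 * cp.sum_squares(U[:, k])
        constraints += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k],
                        cp.norm(U[:, k], "inf") <= u_max]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return U.value[:, 0]   # apply only the first input, then re-solve at the next step
```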
Assessing goodness-of-fit is challenging because theoretically there is no uniformly most powerful test, whereas in practice the question `what would be a preferable default test?' is important to applied statisticians. To address this so-called omnibus testing problem, this paper considers the class of reweighted Anderson-Darling tests and makes two contributions. The first contribution is a geometric understanding of the problem, obtained by establishing an explicit one-to-one correspondence between the weights and the focal directions in which distributions under the alternative hypothesis deviate from those under the null. It is argued that the weights producing the test statistic with minimum variance can serve as a general-purpose test. In addition, this default or optimal-weights-based test is found to be practically equivalent to the Zhang test, which is commonly perceived as powerful. The second contribution is a set of new large-sample results. It is shown that, like the Anderson-Darling statistic, the minimum-variance test statistic under the null is distributed as a weighted sum of infinitely many independent squared standard normal random variables. These theoretical results are shown to be useful for large-sample approximations. The paper concludes with a few remarks, including how the present approach can be extended to create new multinomial goodness-of-fit tests.
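For readers unfamiliar with the reweighted Anderson-Darling family, the sketch below evaluates a generic weighted statistic numerically; the classical weight w(u) = 1/(u(1-u)) is used as a sanity check, and the paper's minimum-variance weight is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def weighted_ad_statistic(x, cdf, weight, grid_size=10_000):
    """Reweighted Anderson-Darling-type statistic
        A_w^2 = n * integral_0^1 (F_n(F0^{-1}(u)) - u)^2 w(u) du,
    approximated by a Riemann sum on a grid in (0, 1). Here `cdf` is the null
    CDF F0 and `weight` is w; w(u) = 1/(u(1-u)) recovers the classical test."""
    u = np.sort(cdf(np.asarray(x)))                    # probability-transformed sample
    n = len(u)
    grid = (np.arange(grid_size) + 0.5) / grid_size
    F_n = np.searchsorted(u, grid, side="right") / n   # empirical CDF in u-space
    return n * np.mean((F_n - grid) ** 2 * weight(grid))

# Sanity check against a standard normal null
sample = np.random.default_rng(0).normal(size=200)
print(weighted_ad_statistic(sample, norm.cdf, lambda u: 1.0 / (u * (1.0 - u))))
```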
The hand-eye calibration problem is an important application problem in robotics research. Based on the 2-norm of dual quaternion vectors, we propose a new dual quaternion optimization method for the hand-eye calibration problem. The dual quaternion optimization problem is decomposed into two quaternion optimization subproblems. The first subproblem governs the rotation of the robot hand and can be solved efficiently by eigenvalue decomposition or singular value decomposition. If the optimal value of the first subproblem is zero, then the system is rotationwise noiseless, i.e., there exists a ``perfect'' robot hand motion that meets all the testing poses exactly in rotation. In this case, we apply a regularization technique to solve the second subproblem and minimize the distance of the translation. Otherwise, we apply a patching technique, and the second quaternion optimization subproblem reduces to a quadratically constrained quadratic program. In this way, we give a complete description of the solution set of hand-eye calibration problems, which is new in the hand-eye calibration literature. Numerical results are presented to show the efficiency of the proposed method.
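The sketch below shows one standard way a rotation subproblem of this kind can be solved by eigenvalue decomposition: the residual q_A ⊗ q - q ⊗ q_B is linear in q, so the minimizer over unit quaternions is the eigenvector of the smallest eigenvalue of an accumulated 4x4 matrix. The quaternion convention and residual form are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def left_mat(q):
    """4x4 matrix L(q) with L(q) p = q ⊗ p (quaternions stored as [w, x, y, z])."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def right_mat(q):
    """4x4 matrix R(q) with R(q) p = p ⊗ q."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def rotation_subproblem(hand_quats, eye_quats):
    """Minimize sum_i ||q_Ai ⊗ q - q ⊗ q_Bi||^2 over unit quaternions q.
    A zero optimal value indicates a rotationwise-noiseless system."""
    M = sum((left_mat(a) - right_mat(b)).T @ (left_mat(a) - right_mat(b))
            for a, b in zip(hand_quats, eye_quats))
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 0], vals[0]   # optimal rotation quaternion, optimal value
```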
We describe the orchestration of a decentralized swarm of rotary-wing UAV relays that augments the coverage and service capabilities of a terrestrial base station. Our goal is to minimize the time-averaged service latencies involved in handling transmission requests from ground users under Poisson arrivals, subject to an average UAV power constraint. Equipped with rate adaptation to efficiently leverage air-to-ground channel stochastics, we first derive the optimal control policy for a single relay via a semi-Markov decision process formulation, with competitive swarm optimization for UAV trajectory design. We then detail a multiscale decomposition of this construction: outer decisions on radial wait velocities and end positions optimize the expected long-term delay-power trade-off, while inner decisions on angular wait velocities, service schedules, and UAV trajectories greedily minimize the instantaneous delay-power costs. Next, generalizing to UAV swarms via replication and consensus-driven command-and-control, this policy is embedded with spread-maximization and conflict-resolution heuristics. We demonstrate that our framework offers superior performance in terms of average service latency and average per-UAV power consumption: 11x faster data payload delivery relative to static UAV-relay deployments and 2x faster than a deep-Q-network solution; remarkably, a single relay under our scheme outperforms three relays under a joint successive convex approximation policy by 62%.
This paper describes a resilient navigation and planning system used in the Indy Autonomous Challenge (IAC), a competition in which full-scale race cars run autonomously on the Indianapolis Motor Speedway (IMS) at speeds of up to 290 km/h (180 mph). Race cars experience severe vibrations, especially at high speeds, and these vibrations can degrade standard localization algorithms based on precision GPS-aided inertial measurement units. Degraded localization can lead to serious problems, including collisions. We therefore propose a resilient navigation system that enables a race car to stay within the track in the event of localization failures. Our navigation system uses a multi-sensor fusion-based Kalman filter and detects degradation of the navigation solution using probabilistic approaches to compute optimal measurement values for the correction step of the filter. In addition, we propose an optimal path planning algorithm for obstacle avoidance. In this challenge, static obstacles are placed on the track, and the vehicle is required to avoid them with minimal time loss. By taking the original optimal racing line, the obstacles, and the vehicle dynamics into account, we propose a road-graph-based path planning algorithm to ensure that our race car performs efficient obstacle avoidance. The proposed localization system was successfully validated, demonstrating its capability to prevent localization failures in the event of faulty GPS measurements during the historic first autonomous race at IMS. Owing to our robust navigation and planning algorithm, we finished the race as one of the top four teams, while the remaining five teams failed to finish due to collisions or out-of-track violations.
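A common way to guard a Kalman filter's correction step against faulty GPS fixes is chi-square gating on the innovation; the sketch below shows that generic mechanism only, with an assumed gate probability and rejection strategy that are not necessarily the authors' exact scheme.

```python
import numpy as np
from scipy.stats import chi2

def gated_kalman_update(x, P, z, H, R, gate_prob=0.997):
    """Kalman correction step with a chi-square innovation gate: a measurement
    whose Mahalanobis distance exceeds the gate is flagged and skipped."""
    y = z - H @ x                                   # innovation
    S = H @ P @ H.T + R                             # innovation covariance
    d2 = float(y @ np.linalg.solve(S, y))           # squared Mahalanobis distance
    if d2 > chi2.ppf(gate_prob, df=len(z)):
        return x, P, False                          # measurement rejected; no update
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new, True
```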
We introduce two new tools to assess the validity of statistical distributions. These tools are based on components derived from a new statistical quantity, the \emph{comparison curve}. The first tool is a graphical representation of these components on a \emph{bar plot} (B plot), which can provide a detailed appraisal of the validity of the statistical model, in particular when supplemented by acceptance regions related to the model. The knowledge gained from this representation can sometimes suggest an existing \emph{goodness-of-fit} test to supplement this visual assessment with a control of the type I error. Otherwise, an adaptive test may be preferable, and the second tool is the combination of these components into a powerful $\chi^2$-type goodness-of-fit test. Because the number of these components can be large, we introduce a new selection rule to decide, in a data-driven fashion, how many of them to take into consideration. In a simulation study, our goodness-of-fit tests are shown to be competitive in power with the best solutions that have been recommended both when the model is fully specified and when some parameters must be estimated. Practical examples show how to use these tools to derive principled information about where the model departs from the data.
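To illustrate the general idea of combining components into a $\chi^2$-type statistic with a data-driven choice of their number, the sketch below follows the classical data-driven smooth-test recipe (orthonormal Legendre components plus a BIC-style penalty); the components and the selection rule are assumptions for illustration, not the comparison-curve components or the paper's rule.

```python
import numpy as np
from scipy.special import eval_sh_legendre

def data_driven_smooth_test(u, max_components=10):
    """Chi-square-type statistic built from orthonormal components of the
    probability-transformed sample u = F0(x), with a penalized, data-driven
    choice of how many components to combine."""
    n = len(u)
    # orthonormal shifted Legendre components, j = 1..max_components
    v = np.array([np.sqrt(2 * j + 1) * eval_sh_legendre(j, u).sum() / np.sqrt(n)
                  for j in range(1, max_components + 1)])
    cumulative = np.cumsum(v ** 2)                      # T_K for K = 1..max_components
    penalty = np.arange(1, max_components + 1) * np.log(n)
    K_hat = int(np.argmax(cumulative - penalty)) + 1    # selected number of components
    return cumulative[K_hat - 1], K_hat                 # test statistic and K-hat
```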
Interpretability methods are developed to understand the working mechanisms of black-box models, which is crucial to their responsible deployment. Fulfilling this goal requires both that the explanations generated by these methods are correct and that people can easily and reliably understand them. While the former has been addressed in prior work, the latter is often overlooked, resulting in informal model understanding derived from a handful of local explanations. In this paper, we introduce explanation summary (ExSum), a mathematical framework for quantifying model understanding, and propose metrics for its quality assessment. In two domains, ExSum highlights various limitations of current practice, helps develop accurate model understanding, and reveals easily overlooked properties of the model. We also connect understandability to other properties of explanations such as human alignment, robustness, and counterfactual minimality and plausibility.