Reliable and efficient trajectory generation methods are a fundamental need for autonomous dynamical systems of tomorrow. The goal of this article is to provide a comprehensive tutorial of three major convex optimization-based trajectory generation methods: lossless convexification (LCvx), and two sequential convex programming algorithms known as SCvx and GuSTO. In this article, trajectory generation is the computation of a dynamically feasible state and control signal that satisfies a set of constraints while optimizing key mission objectives. The trajectory generation problem is almost always nonconvex, which typically means that it is not readily amenable to efficient and reliable solution onboard an autonomous vehicle. The three algorithms that we discuss use problem reformulation and a systematic algorithmic strategy to nonetheless solve nonconvex trajectory generation tasks through the use of a convex optimizer. The theoretical guarantees and computational speed offered by convex optimization have made the algorithms popular in both research and industry circles. To date, the list of applications includes rocket landing, spacecraft hypersonic reentry, spacecraft rendezvous and docking, aerial motion planning for fixed-wing and quadrotor vehicles, robot motion planning, and more. Among these applications are high-profile rocket flights conducted by organizations like NASA, Masten Space Systems, SpaceX, and Blue Origin. This article aims to give the reader the tools and understanding necessary to work with each algorithm, and to know what each method can and cannot do. A publicly available source code repository supports the provided numerical examples. By the end of the article, the reader should be ready to use the methods, to extend them, and to contribute to their many exciting modern applications.
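As a minimal illustration of the kind of discretized convex subproblem that these methods ultimately pass to a convex optimizer, the following Python sketch poses a minimum-effort trajectory problem for a double-integrator model in cvxpy; the model, horizon, and bounds are illustrative assumptions and are not taken from the article or its code repository.

import cvxpy as cp
import numpy as np

N, dt = 30, 0.1                          # discretization nodes and time step (assumed)
x = cp.Variable((N + 1, 2))              # state trajectory: [position, velocity]
u = cp.Variable(N)                       # control trajectory: acceleration

constraints = [x[0] == np.array([0.0, 0.0]),   # initial state
               x[N] == np.array([1.0, 0.0])]   # terminal state
for k in range(N):
    constraints += [x[k + 1, 0] == x[k, 0] + dt * x[k, 1],   # position dynamics
                    x[k + 1, 1] == x[k, 1] + dt * u[k],      # velocity dynamics
                    cp.abs(u[k]) <= 2.0]                     # control bound

problem = cp.Problem(cp.Minimize(cp.sum_squares(u)), constraints)
problem.solve()
print("optimal control effort:", problem.value)

Roughly speaking, LCvx relaxes the nonconvex constraints so that a single convex problem of this general form recovers the optimal solution, while SCvx and GuSTO solve a sequence of such convex subproblems obtained by local approximation of the nonconvex problem.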
In this paper we combine k-means and k-means-type algorithms with a hill-climbing algorithm, applied in stages, to solve the joint stratification and sample allocation problem. This is a combinatorial optimisation problem in which we search for the optimal stratification in the set of all possible stratifications of basic strata, where each stratification is a solution whose quality is measured by its cost. The problem is intractable for larger sets, and evaluating the cost of each solution is expensive. A number of heuristic algorithms have already been developed to find acceptable solutions in reasonable computation times. However, the heuristics of these algorithms need to be trained in order to optimise performance on each instance. We compare the above multi-stage combination of algorithms with three recent algorithms and report the solution costs, evaluation times and training times. The multi-stage combinations generally compare well with the recent algorithms for both atomic and continuous strata, and they give the survey designer a wider choice of algorithms.
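A rough Python sketch of the multi-stage idea, with a toy stand-in cost function (the paper's actual objective, the sample allocation cost under precision constraints, is not reproduced): k-means provides an initial grouping of basic strata, which a simple hill-climbing pass then refines.

import numpy as np
from sklearn.cluster import KMeans

def stratification_cost(labels, X):
    # Placeholder cost: sum of within-stratum standard deviations (illustrative only).
    return sum(X[labels == s].std() for s in np.unique(labels))

def hill_climb(labels, X, iters=1000, rng=np.random.default_rng(0)):
    best = stratification_cost(labels, X)
    for _ in range(iters):
        cand = labels.copy()
        i = rng.integers(len(cand))
        cand[i] = rng.choice(np.unique(labels))      # move one basic stratum to another stratum
        c = stratification_cost(cand, X)
        if c < best:
            labels, best = cand, c
    return labels, best

X = np.random.default_rng(1).normal(size=500)        # toy measurements for the basic strata
init = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X.reshape(-1, 1))
labels, cost = hill_climb(init, X)
print("final cost:", cost)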
Based on a novel generalized second law of thermodynamics, we demonstrate that feedback control can extract more net work than control without measurement. The generalized second law asserts that the total entropy production of a closed system is bounded below by the dissipation of correlations, which we simply call co-dissipation. Accordingly, co-dissipation is entropy production that cannot be converted to work. For control without measurement, co-dissipation is caused by the loss of internal correlations among subsystems. Feedback control, on the other hand, can make the co-dissipation vanish, so it can in principle extract work from the internal correlation loss, which is its fundamental advantage. Moreover, co-dissipation shares the character of heat dissipation as irreversible, useless entropy production. Hence, the generalized second law implies that the system's entropy production is bounded by the sum of two types of dissipation: heat and correlation. The generalized second law is derived by summing the entropy productions of all subsystems. We develop a technique for this computation in which the entropy productions are summed in parallel with a sequence of graphs representing the dependencies among subsystems. Furthermore, the positivity of co-dissipation is guaranteed by a purely information-theoretic fact, the data processing inequality, which may shed light on the relation between thermodynamics and information theory.
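Schematically, and with notation assumed here rather than taken from the paper, the generalized second law described above reads

\sigma_{\mathrm{tot}} \;\geq\; \sigma_{\mathrm{co}} \;\geq\; 0,

where $\sigma_{\mathrm{tot}}$ is the total entropy production of the closed system and $\sigma_{\mathrm{co}}$ is the co-dissipation, the entropy production associated with the loss of correlations among subsystems; the second inequality is the positivity guaranteed by the data processing inequality.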
JavaScript implementations are tested for conformance to the ECMAScript standard using a large hand-written test suite. Not only is this a tedious approach, it also relies solely on the natural-language specification for differentiating behaviors, while hidden implementation details can also affect behavior and introduce divergences. We propose to generate conformance tests through dynamic symbolic execution of polyfills, drop-in replacements for newer JavaScript language features that are not yet widely supported. We then run these generated tests against multiple implementations of JavaScript, using a majority vote to identify the correct behavior. To facilitate test generation for polyfill code, we introduce a model for structured symbolic inputs that is suited to the dynamic nature of JavaScript. In our evaluation, we found 17 divergences in the widely used core-js polyfill and were able to increase branch coverage in interpreter code by up to 15%. Because polyfills are typically written even before standardization, our approach makes it possible to maintain and extend standardization test suites with reduced effort.
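The differential-testing step described above can be sketched as follows in Python; the engine binaries, the test file name, and the voting details are illustrative assumptions rather than the paper's tooling.

import subprocess
from collections import Counter

ENGINES = ["node", "d8", "js"]   # assumed JavaScript engine binaries on PATH

def run_test(engine, test_file):
    try:
        proc = subprocess.run([engine, test_file], capture_output=True,
                              text=True, timeout=10)
        return proc.stdout.strip()
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return None

def majority_behavior(test_file):
    results = {engine: run_test(engine, test_file) for engine in ENGINES}
    votes = Counter(v for v in results.values() if v is not None)
    if not votes:
        return None, list(results)
    majority, _ = votes.most_common(1)[0]
    divergent = [e for e, v in results.items() if v != majority]
    return majority, divergent

behavior, outliers = majority_behavior("generated_test.js")
print("majority output:", behavior)
print("engines diverging from the majority:", outliers)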
Actor-critic (AC) algorithms are known for their efficacy and high performance in solving reinforcement learning problems, but they also suffer from low sampling efficiency. AC-based policy optimization is iterative and needs to frequently access the agent-environment system to evaluate and update the policy by rolling out the policy, collecting rewards and states (i.e., samples), and learning from them; it ultimately requires a huge number of samples to learn an optimal policy. To improve sampling efficiency, we propose a strategy that optimizes the training dataset so that it contains significantly fewer samples than are collected by the AC process. The dataset optimization consists of a best-episode-only operation, a policy parameter-fitness model, and a genetic algorithm module. The policy network trained on the optimized dataset exhibits superior performance compared to many contemporary AC algorithms in controlling autonomous dynamical systems. Evaluation on standard benchmarks shows that the method improves sampling efficiency, converges to optima faster, and is more data-efficient than its counterparts.
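A minimal Python sketch of the best-episode-only operation mentioned above (the policy parameter-fitness model and the genetic algorithm module are not shown, and the data layout and keep fraction are assumptions):

def best_episodes_only(episodes, keep_fraction=0.1):
    """episodes: list of dicts with keys 'states', 'actions', 'return' (assumed layout)."""
    ranked = sorted(episodes, key=lambda ep: ep["return"], reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:n_keep]               # keep only the highest-return episodes

# toy usage
toy = [{"states": [], "actions": [], "return": r} for r in (3.0, 9.5, 1.2, 7.8)]
print([ep["return"] for ep in best_episodes_only(toy, keep_fraction=0.5)])   # -> [9.5, 7.8]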
Kinodynamic motion planning for non-holonomic mobile robots is a challenging problem that lacks a universal solution. One computationally efficient way to solve it is to build a geometric path first and then transform this path into a kinematically feasible one. Gradient-informed Path Smoothing (GRIPS) is a recently introduced method for such a transformation. GRIPS iteratively deforms the path and adds/deletes waypoints while trying to connect each consecutive pair of them via a provided steering function that respects the kinematic constraints. The algorithm is relatively fast but, unfortunately, does not provide any guarantee that it will succeed; in practice, it often fails to produce feasible trajectories for car-like robots with a large turning radius. In this work, we introduce a range of modifications aimed at increasing the success rate of GRIPS for car-like robots. The main enhancement is an additional step that heuristically samples waypoints along the bottleneck parts of the geometric path (such as sharp turns). The experimental evaluation provides clear evidence that the success rate of the suggested algorithm is up to 40% higher than that of the original GRIPS and reaches 90%, while its runtime is lower.
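A minimal Python sketch of the waypoint-sampling idea, assuming a simple turn-angle heuristic; the angle threshold and the midpoint insertion pattern are illustrative and not the paper's exact procedure.

import math

def heading(p, q):
    return math.atan2(q[1] - p[1], q[0] - p[0])

def densify_sharp_turns(path, angle_thresh=math.radians(45)):
    out = [path[0]]
    for i in range(1, len(path) - 1):
        turn = abs(heading(path[i - 1], path[i]) - heading(path[i], path[i + 1]))
        turn = min(turn, 2 * math.pi - turn)          # wrap the turn angle to [0, pi]
        if turn > angle_thresh:
            # insert midpoints on both sides of the sharp waypoint
            out.append(tuple((a + b) / 2 for a, b in zip(path[i - 1], path[i])))
            out.append(path[i])
            out.append(tuple((a + b) / 2 for a, b in zip(path[i], path[i + 1])))
        else:
            out.append(path[i])
    out.append(path[-1])
    return out

path = [(0, 0), (1, 0), (1, 1), (2, 1)]               # toy path with two 90-degree turns
print(densify_sharp_turns(path))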
Motion planning under uncertainty is of significant importance for safety-critical systems such as autonomous vehicles. Such systems must satisfy necessary constraints (e.g., collision avoidance) under uncertainties coming from either disturbed system dynamics or noisy sensor measurements. However, existing motion planning methods cannot efficiently find robust optimal solutions in general nonlinear and non-convex settings. In this paper, we formulate this problem as chance-constrained Gaussian belief space planning and propose the constrained iterative Linear Quadratic Gaussian (CILQG) algorithm as a real-time solution. In this algorithm, we iteratively calculate a Gaussian approximation of the belief and transform the chance constraints. We evaluate the effectiveness of our method in simulations of autonomous driving planning tasks with static and dynamic obstacles. Results show that CILQG handles uncertainties more appropriately and has a shorter computation time than baseline methods.
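For illustration, under a Gaussian approximation of the belief, $x \sim \mathcal{N}(\mu, \Sigma)$, a linear chance constraint admits a standard deterministic reformulation of the following kind (the notation is assumed here and the paper's exact transformation may differ):

\Pr\!\left(a^{\top} x \leq b\right) \geq 1-\epsilon
\quad\Longleftrightarrow\quad
a^{\top}\mu + \Phi^{-1}(1-\epsilon)\,\sqrt{a^{\top}\Sigma\, a} \leq b,

where $\Phi^{-1}$ is the standard normal quantile function; the square-root term tightens the constraint on the mean in proportion to the belief covariance.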
The study of trajectories produced by the motion of particles, objects or animals is often a core task in many research fields such as biology or robotics. There are many challenges in the process, from the building of trajectories from raw sensor data (images, inertial measurement data, etc.) to the recognition of key statistical patterns that may emerge in groups of trajectories. This work introduces a software library that addresses the problem as a whole, covering a full pipeline to process raw data and produce the analytics typically demanded by scientific reports. Unlike other trajectory analysis software, our library treats the subject in a rather abstract way, allowing its use across several fields. We validate the software by reproducing key results from several original research articles, providing an example script in each case. Our aim is for this software to provide researchers who have limited experience in programming or computer vision with an easy-to-handle toolbox for approaching trajectory data.
This paper seeks to develop a deeper understanding of the fundamental properties of neural text generation models. The study of artifacts that emerge in machine-generated text as a result of modeling choices is a nascent research area, and the extent to which these artifacts surface in generated text has not previously been well studied. In the spirit of better understanding generative text models and their artifacts, we propose the new task of distinguishing which of several variants of a given model generated a piece of text, and we conduct an extensive suite of diagnostic tests to observe whether modeling choices (e.g., sampling methods, top-$k$ probabilities, model architectures, etc.) leave detectable artifacts in the text they generate. Our key finding, which is backed by a rigorous set of experiments, is that such artifacts are present and that different modeling choices can be inferred by observing the generated text alone. This suggests that neural text generators may be more sensitive to various modeling choices than previously thought.
Inferring the most likely configuration for a subset of variables of a joint distribution given the remaining ones - which we refer to as co-generation - is an important challenge that is computationally demanding for all but the simplest settings. This task has received a considerable amount of attention, particularly for classical ways of modeling distributions such as structured prediction. In contrast, almost nothing is known about this task for recently proposed techniques for modeling high-dimensional distributions, particularly generative adversarial nets (GANs). Therefore, in this paper, we study the challenges that arise in co-generation with GANs. To address these challenges, we develop a co-generation algorithm based on annealed importance sampling and Hamiltonian Monte Carlo. The presented approach significantly outperforms classical gradient-based methods on a synthetic dataset and on the CelebA and LSUN datasets.
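For context, the classical gradient-based baseline that the proposed approach is compared against can be sketched in Python as follows; the toy generator, the observation mask, and the optimizer settings are assumptions.

import torch

G = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(),
                        torch.nn.Linear(32, 16))           # toy stand-in "pretrained" generator
x_obs = torch.randn(16)                                    # a full sample, used here for illustration
mask = torch.zeros(16, dtype=torch.bool); mask[:8] = True  # dimensions assumed to be observed

z = torch.randn(8, requires_grad=True)                     # latent code to be inferred
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((G(z)[mask] - x_obs[mask]) ** 2).mean()        # match the observed coordinates only
    loss.backward()
    opt.step()

x_completed = G(z).detach()                                # inferred full configuration
print("reconstruction error on observed part:", loss.item())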
Many resource allocation problems in the cloud can be described as a basic Virtual Network Embedding Problem (VNEP): finding mappings of request graphs (describing the workloads) onto a substrate graph (describing the physical infrastructure). In the offline setting, the two natural objectives are profit maximization, i.e., embedding a maximal number of request graphs subject to the resource constraints, and cost minimization, i.e., embedding all requests at minimal overall cost. The VNEP can be seen as a generalization of classic routing and call admission problems in which requests are arbitrary graphs whose communication endpoints are not fixed. Due to its applications, the problem has been studied intensively in the networking community; however, the underlying algorithmic problem is not well understood. This paper presents the first fixed-parameter tractable approximation algorithms for the VNEP. Our algorithms are based on randomized rounding. Due to the flexible mapping options and the arbitrary request graph topologies, we show that a novel linear programming formulation is required: only this formulation, which accounts for the structure of the request graphs, enables the computation of convex combinations of valid mappings. Accordingly, to capture the structure of request graphs, we introduce the graph-theoretic notions of extraction orders and extraction width, and we show that the runtime of our algorithms is exponential in the request graphs' maximal extraction width. Hence, for request graphs of fixed extraction width, we obtain the first polynomial-time approximations. Studying the new notion of extraction orders, we show that (i) computing extraction orders of minimal width is NP-hard and (ii) computing decomposable LP solutions is in general NP-hard, even when request graphs are restricted to planar ones.