Real-time scheduling theory assists developers of embedded systems in verifying that the timing constraints of critical software tasks can feasibly be met on a given hardware platform. Fundamental problems in the theory are often formulated as searches for fixed points of functions and are solved by fixed-point iterations. These fixed-point methods are widely used because they are simple to understand, simple to implement, and seem to work well in practice. The same problems can also be formulated as integer programs and solved with algorithms based on linear programming, cutting planes, and related theories. However, such algorithms are harder to understand and implement than fixed-point iterations. In this research, we show that ideas like linear programming duality and cutting planes can be used to develop algorithms that are as easy to implement as existing fixed-point iteration schemes but have better convergence properties. We evaluate the algorithms on synthetically generated problem instances to demonstrate that the new algorithms are faster than the existing ones.
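To make the baseline concrete, here is a minimal sketch of the classical fixed-point iteration for worst-case response-time analysis under fixed-priority preemptive scheduling; the task set and the implicit-deadline bail-out are illustrative, not taken from the abstract above.

```python
import math

def response_time(C, T, i, limit=10_000):
    """Worst-case response time of task i (tasks 0..i-1 have higher
    priority) via the classical fixed-point iteration
        R <- C[i] + sum_j ceil(R / T[j]) * C[j]."""
    R = C[i]
    for _ in range(limit):
        nxt = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if nxt == R:        # fixed point reached
            return R
        if nxt > T[i]:      # exceeds the period: deemed infeasible here
            return None
        R = nxt
    return None

# Illustrative task set: execution times C and periods T, highest priority first.
C, T = [1, 2, 3], [4, 10, 20]
print([response_time(C, T, i) for i in range(3)])  # [1, 3, 7]
```

Iterations of this kind converge monotonically but can be slow near the fixed point, which is the convergence behavior the duality- and cutting-plane-based alternatives aim to improve.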
We study the expressibility and learnability of convex optimization solution functions and their multi-layer architectural extension. The main results are: \emph{(1)} the class of solution functions of linear programming (LP) and quadratic programming (QP) is a universal approximant for the $C^k$ smooth model class or a restricted Sobolev space, and we characterize the rate-distortion, \emph{(2)} the approximation power is investigated from the viewpoint of regression error, where information about the target function is provided in terms of data observations, \emph{(3)} compositionality in the form of a deep architecture with optimization as a layer is shown to reconstruct some basic functions used in numerical analysis without error, which implies that \emph{(4)} a substantial reduction in rate-distortion can be achieved with a universal network architecture, and \emph{(5)} we discuss statistical bounds on empirical covering numbers for LP/QP, as well as for a generic (possibly nonconvex) optimization problem, by exploiting tame geometry. Our results provide the \emph{first rigorous analysis of the approximation and learning-theoretic properties of solution functions} with implications for algorithmic design and performance guarantees.
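As a minimal illustration of the object of study, the sketch below (using the cvxpy library; the specific QP is an illustrative toy choice) evaluates the solution function of a one-dimensional parametric QP, which turns out to be the piecewise-linear clamp map, one of the nonsmooth functions such solution classes can represent exactly.

```python
import cvxpy as cp
import numpy as np

# Solution function of a parametric QP: for input t, return
#   x*(t) = argmin_x (x - t)^2  s.t.  0 <= x <= 1,
# i.e. the projection of t onto [0, 1] (a clamp).
x = cp.Variable(1)
t = cp.Parameter(1)
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - t)), [x >= 0, x <= 1])

def qp_solution(val):
    t.value = np.array([val])
    prob.solve()
    return float(x.value[0])

print([round(qp_solution(v), 3) for v in (-0.5, 0.3, 1.7)])  # [0.0, 0.3, 1.0]
```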
In this work we analyze the inverse problem of recovering the space-dependent potential coefficient in an elliptic/parabolic problem from distributed observation. We establish novel (weighted) conditional stability estimates under very mild conditions on the problem data. Then we provide an error analysis of a standard reconstruction scheme based on the output least-squares formulation with Tikhonov regularization (by an $H^1$-seminorm penalty), which is then discretized by the Galerkin finite element method with continuous piecewise linear finite elements in space (and the backward Euler method in time for parabolic problems). We present a detailed analysis of the discrete scheme and provide convergence rates in a weighted $L^2(\Omega)$ norm for the discrete approximations with respect to the exact potential. The error bounds depend explicitly on the noise level, the regularization parameter, and the discretization parameter(s). Under suitable conditions, we also derive error estimates in the standard $L^2(\Omega)$ and interior $L^2$ norms. The analysis employs sharp a priori error estimates and nonstandard test functions. Several numerical experiments are given to complement the theoretical analysis.
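The sketch below is a simplified one-dimensional analogue of the reconstruction scheme (finite differences in place of the Galerkin FEM, a generic quasi-Newton optimizer, and illustrative noise and regularization levels): recover $q$ in $-u'' + qu = f$ from a noisy observation of $u$ by minimizing the output least-squares functional with an $H^1$-seminorm penalty.

```python
import numpy as np
from scipy.optimize import minimize

n, h = 49, 1.0 / 50
xs = np.linspace(h, 1 - h, n)
f = np.ones(n)
# Finite-difference stencil for -u'' with homogeneous Dirichlet conditions.
L = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def solve_forward(q):
    return np.linalg.solve(L + np.diag(q), f)

q_true = 1.0 + np.sin(np.pi * xs)
rng = np.random.default_rng(0)
z = solve_forward(q_true) + 1e-4 * rng.standard_normal(n)  # noisy data

alpha = 1e-8  # regularization parameter; in theory tied to the noise level
def J(q):
    r = solve_forward(q) - z
    dq = np.diff(q) / h
    return 0.5 * h * (r @ r) + 0.5 * alpha * h * (dq @ dq)

q_hat = minimize(J, np.ones(n), method="L-BFGS-B").x
print(np.max(np.abs(q_hat - q_true)))  # reconstruction error
```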
With the aim of further enabling the exploitation of impacts in robotic manipulation, a control framework is presented that directly tackles the challenges posed by tracking control of robotic manipulators tasked to perform nominally simultaneous impacts associated with multiple contact points. To this end, we extend the framework of reference spreading, which uses an extended ante- and post-impact reference coherent with a rigid impact map, determined under the assumption of an inelastic simultaneous impact. In practice, the robot will not reside exactly on the reference at the impact moment; as a result, a sequence of impacts at the different contact points will typically occur. Our new approach extends reference spreading in this context via the introduction of an additional interim control mode. In this mode, a torque command is still based on the ante-impact reference with the goal of reaching the target contact state, but velocity feedback is disabled, since it can be harmful during the rapid velocity changes of the impact sequence. With an eye towards real implementation, the approach is formulated using a quadratic programming (QP) control framework and is validated using numerical simulations both on a rigid robot model and on a realistic robot model with flexible joints.
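A minimal sketch of the interim-mode idea, reduced to a plain computed-torque law (the mass matrix M, nonlinear terms h, gains, and mode handling are placeholders; the actual framework embeds this in a QP with additional constraints):

```python
import numpy as np

def control_torque(M, h, q, dq, q_ref, dq_ref, ddq_ref, mode,
                   Kp=100.0, Kd=20.0):
    """Computed-torque tracking of the (ante-impact) reference.
    In the interim mode the velocity error is zeroed, since velocity
    feedback is unreliable during the rapid jumps of an impact sequence."""
    e, de = q_ref - q, dq_ref - dq
    if mode == "interim":
        de = np.zeros_like(de)
    return M @ (ddq_ref + Kp * e + Kd * de) + h
```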
Many applications, such as system identification, classification of time series, direct and inverse problems in partial differential equations, and uncertainty quantification, lead to the question of approximation of a non-linear operator between metric spaces $\mathfrak{X}$ and $\mathfrak{Y}$. We study the problem of determining the degree of approximation of such operators on a compact subset $K_\mathfrak{X}\subset \mathfrak{X}$ using a finite amount of information. If $\mathcal{F}: K_\mathfrak{X}\to K_\mathfrak{Y}$, a well-established strategy to approximate $\mathcal{F}(F)$ for some $F\in K_\mathfrak{X}$ is to encode $F$ (respectively, $\mathcal{F}(F)$) in terms of a finite number $d$ (respectively $m$) of real numbers. Together with appropriate reconstruction algorithms (decoders), the problem reduces to the approximation of $m$ functions on a compact subset of a high dimensional Euclidean space $\mathbb{R}^d$, equivalently, the unit sphere $\mathbb{S}^d$ embedded in $\mathbb{R}^{d+1}$. The problem is challenging because $d$, $m$, as well as the complexity of the approximation on $\mathbb{S}^d$ are all large, and it is necessary to estimate the accuracy while keeping track of the inter-dependence of all the approximations involved. In this paper, we establish constructive methods to do this efficiently; i.e., with the constants involved in the estimates on the approximation on $\mathbb{S}^d$ being $\mathcal{O}(d^{1/6})$. We study different smoothness classes for the operators, and also propose a method for approximation of $\mathcal{F}(F)$ using only information in a small neighborhood of $F$, resulting in an effective reduction in the number of parameters involved.
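The encode/approximate/decode strategy can be sketched on a toy operator (the antiderivative map, with point samples as encoders and plain ridge regression as the approximator; all of these choices are illustrative stand-ins for the constructions analyzed in the paper):

```python
import numpy as np

d, m, n_train = 20, 20, 500
xs = np.linspace(0, 1, d)
rng = np.random.default_rng(0)

def random_input():          # random functions F from a small family
    a, b, c = rng.standard_normal(3)
    return a * np.sin(np.pi * xs) + b * xs + c

def apply_operator(F_vals):  # the operator: F |-> trapezoidal antiderivative
    steps = (F_vals[1:] + F_vals[:-1]) / 2 * (xs[1] - xs[0])
    return np.concatenate(([0.0], np.cumsum(steps)))[:m]

X = np.array([random_input() for _ in range(n_train)])    # d-number encodings
Y = np.array([apply_operator(F) for F in X])              # m-number encodings
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(d), X.T @ Y)  # ridge regression

F_test = random_input()
print(np.max(np.abs(F_test @ W - apply_operator(F_test))))  # small error
```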
The emerging modular vehicle (MV) technology enables vehicles to physically connect/disconnect with each other and thus travel in platoons for lower energy consumption. Moreover, a platoon of MVs can be regarded as a new bus-like platform with expanded on-board carrying capacity, providing larger service throughput according to the demand density. This innovative concept might resolve the mismatch between fixed vehicle capacity and the temporal-spatial variations of demand in current transportation systems. To obtain the optimal assignments and routes for the operation of MVs, a mixed integer linear programming (MILP) model is formulated to minimize the weighted total cost of vehicle travel cost and passenger service time. The temporal and spatial synchronization of vehicle platoons and passenger en-route transfers are determined and optimized by the MILP model while constructing the paths. Heuristic algorithms based on large neighborhood search are developed to solve the modular dial-a-ride problem (MDARP) for practical scenarios. A set of small-scale synthetic numerical experiments is used to evaluate the optimality gap and computation time between our proposed MILP model and the heuristic algorithms. Large-scale experiments are conducted on the Anaheim network with 378 candidate join/split nodes to further explore the potential and identify the ideal operation scenarios of MVs. The results show that the innovative MV technology can save up to 52.0% in vehicle travel cost, 35.6% in passenger service time, and 29.4% in total cost against existing on-demand mobility services. Results suggest that MVs benefit most from platooning by serving enclave pairs as a hub-and-spoke service.
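A toy MILP with the same weighted objective (vehicle travel cost plus passenger service time), written with the PuLP library; the data and the assignment-only structure are invented for illustration, whereas the full MDARP model additionally encodes routing, platoon synchronization, and en-route transfers:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

vehicles, requests = range(2), range(3)
travel = [[4, 7, 3], [5, 2, 6]]      # illustrative travel costs
service = [[8, 9, 6], [7, 5, 9]]     # illustrative service times
w_travel, w_service, cap = 1.0, 0.5, 2

prob = LpProblem("toy_mdarp", LpMinimize)
x = {(v, r): LpVariable(f"x_{v}_{r}", cat=LpBinary)
     for v in vehicles for r in requests}
# Weighted total cost: travel cost + passenger service time.
prob += lpSum(x[v, r] * (w_travel * travel[v][r] + w_service * service[v][r])
              for v in vehicles for r in requests)
for r in requests:                   # each request served exactly once
    prob += lpSum(x[v, r] for v in vehicles) == 1
for v in vehicles:                   # vehicle (platoon) capacity
    prob += lpSum(x[v, r] for r in requests) <= cap

prob.solve()
print({k: v.value() for k, v in x.items()})
```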
We present several new techniques for evolving code through sequences of mutations. Among these are (1) a method of local scoring that assigns a score to each expression in a program, allowing us to more precisely identify buggy code, (2) suppose-expressions, which act as an intermediate step to evolving if-conditionals, and (3) cyclic evolution, in which we evolve programs through phases of expansion and reduction. To demonstrate their merits, we provide a basic proof-of-concept implementation which we show evolves correct code for several functions manipulating integers and lists, including some that are intractable for existing Genetic Programming techniques.
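A hedged sketch of the local-scoring idea (the expression encoding, AST walk, and scoring rule below are illustrative choices, not necessarily the paper's): every subexpression is credited with the outcomes of the tests during which it was evaluated, so persistently low-scoring expressions become natural mutation targets. In this tiny straight-line program every node is exercised by every test and all scores coincide; with conditionals, traces differ across tests and the scores differentiate.

```python
def eval_expr(expr, env, trace):
    trace.append(id(expr))           # record that this expression ran
    if isinstance(expr, str):
        return env[expr]
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    vals = [eval_expr(a, env, trace) for a in args]
    return {"+": lambda a, b: a + b, "*": lambda a, b: a * b}[op](*vals)

def local_scores(program, tests):
    hits, passes = {}, {}
    for env, expected in tests:
        trace = []
        ok = eval_expr(program, env, trace) == expected
        for node in set(trace):
            hits[node] = hits.get(node, 0) + 1
            passes[node] = passes.get(node, 0) + ok
    return {node: passes[node] / hits[node] for node in hits}

prog = ("+", "x", ("*", "x", 2))     # computes 3x
tests = [({"x": 1}, 3), ({"x": 2}, 6), ({"x": 3}, 10)]  # last test fails
print(local_scores(prog, tests))     # every node scores 2/3 here
```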
In video surveillance as well as automotive applications, so-called fisheye cameras are often employed to capture a very wide angle of view. As such cameras depend on projections quite different from the classical perspective projection, the resulting fisheye image and video data correspondingly exhibit non-rectilinear image characteristics. Typical image and video processing algorithms, however, are not designed for these fisheye characteristics. To enable the development and evaluation of algorithms specifically adapted to fisheye images and videos, a corresponding test data set is introduced in this paper. The first of these sequences were generated during the authors' own work on motion estimation for fisheye videos, and further sequences have gradually been added to create a more extensive collection. The data set now comprises synthetically generated fisheye sequences, ranging from simple patterns to more complex scenes, as well as fisheye video sequences captured with an actual fisheye camera. For the synthetic sequences, exact information on the lens employed is available, thus facilitating both verification and evaluation of any adapted algorithms. For the real-world sequences, we provide calibration data as well as the settings used during acquisition. The sequences are freely available via www.lms.lnt.de/fisheyedataset/.
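To illustrate how fisheye projections differ from the perspective model (the equidistant model below is one common fisheye mapping, chosen for illustration; the dataset's actual lens parameters come with its calibration data):

```python
import numpy as np

def project(point, f=300.0, model="equidistant"):
    """Map a 3D camera-frame point to image coordinates.
    Perspective: r = f * tan(theta);  equidistant fisheye: r = f * theta,
    where theta is the angle between the ray and the optical axis."""
    X, Y, Z = point
    theta = np.arctan2(np.hypot(X, Y), Z)
    r = f * theta if model == "equidistant" else f * np.tan(theta)
    phi = np.arctan2(Y, X)
    return r * np.cos(phi), r * np.sin(phi)

p = (1.0, 0.5, 1.0)
print(project(p, model="equidistant"))  # radius stays bounded at wide angles
print(project(p, model="perspective"))  # radius diverges as theta -> 90 deg
```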
Reinforcement learning (RL) problems over general state and action spaces are notoriously challenging. In contrast to the tabular setting, one cannot enumerate all the states and then iteratively update the policy for each state. This prevents the application of many well-studied RL methods, especially those with provable convergence guarantees. In this paper, we first present a substantial generalization of the recently developed policy mirror descent method to deal with general state and action spaces. We introduce new approaches to incorporate function approximation into this method, so that we do not need to use explicit policy parameterization at all. Moreover, we present a novel policy dual averaging method for which possibly simpler function approximation techniques can be applied. We establish linear convergence rates to global optimality or sublinear convergence to stationarity for these methods applied to different classes of RL problems under exact policy evaluation. We then define proper notions of the approximation errors for policy evaluation and investigate their impact on the convergence of these methods applied to general-state RL problems with either finite-action or continuous-action spaces. To the best of our knowledge, the development of these algorithmic frameworks as well as their convergence analysis appears to be new in the literature.
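For orientation, here is the base policy mirror descent update in the finite state-action case with a KL Bregman divergence, where the prox step has a closed-form exponentiated update (the cost convention, stepsize, and random Q-values are illustrative; the paper's contribution is precisely to move beyond this enumerable setting):

```python
import numpy as np

def pmd_step(pi, Q, eta):
    """One PMD step: pi_{k+1}(a|s) ∝ pi_k(a|s) * exp(-eta * Q_k(s,a))
    (cost convention: smaller Q is better)."""
    new = pi * np.exp(-eta * Q)
    return new / new.sum(axis=1, keepdims=True)

S, A = 4, 3
pi = np.full((S, A), 1.0 / A)                 # uniform initial policy
Q = np.random.default_rng(0).random((S, A))   # stand-in Q-estimates
for _ in range(50):
    pi = pmd_step(pi, Q, eta=0.5)
print(pi.argmax(axis=1), Q.argmin(axis=1))    # policy concentrates on argmin Q
```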
Mixed-Integer Linear Programming (MILP) plays an important role across a range of scientific disciplines and within areas of strategic importance to society. MILP problems, however, suffer from combinatorial complexity: because of integer decision variables, the number of possible solutions increases super-linearly with problem size, leading to a drastic increase in computational effort. To efficiently solve MILP problems, a "price-based" decomposition and coordination approach is developed to exploit (1) the super-linear reduction of complexity upon decomposition and (2) the geometric convergence potential inherent in Polyak's stepsizing formula, enabling the fastest coordination possible to obtain near-optimal solutions in a computationally efficient manner. Unlike previous methods, which set stepsizes heuristically by adjusting hyperparameters, the key novelty here is that stepsizes are obtained in a purely decision-based way: a novel "auxiliary" constraint satisfaction problem is solved, from which the appropriate stepsizes are inferred. Testing results for large-scale Generalized Assignment Problems (GAP) demonstrate that for the majority of instances, certifiably optimal solutions are obtained. For stochastic job-shop scheduling as well as for pharmaceutical scheduling, computational results demonstrate a two-orders-of-magnitude speedup compared to Branch-and-Cut (B&C). The new method has a major impact on the efficient resolution of complex Mixed-Integer Programming (MIP) problems arising within a variety of scientific fields.
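For reference, Polyak's stepsizing formula on which the coordination step builds: maximizing a concave dual function $q$ by subgradient ascent with $s_k = \gamma\,(q^* - q(\lambda_k)) / \|g_k\|^2$, $0 < \gamma < 2$. The toy dual below stands in for the Lagrangian dual of a decomposed MILP, and the optimum $q^*$ is assumed known here, whereas the paper's decision-based scheme infers suitable stepsizes without it.

```python
import numpy as np

def q(lam):                 # toy concave "dual function" with optimum q* = 0
    return -np.sum(np.abs(lam - 3.0))

def subgrad(lam):
    return -np.sign(lam - 3.0)

lam, gamma, q_star = np.zeros(2), 1.0, 0.0
for _ in range(30):
    g = subgrad(lam)
    s = gamma * (q_star - q(lam)) / (g @ g + 1e-12)  # Polyak stepsize
    lam = lam + s * g
print(lam, q(lam))          # reaches the maximizer [3, 3] geometrically fast
```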
There are standard solvers for tackling discrete optimization problems. In practice, however, it is uncommon to apply them directly to the large input space typical of this class of problems. Rather, the input is preprocessed to look for simplifications and to extract the core subset of the problem space, which is called the kernel. This preprocessing procedure is known in the context of parameterized complexity theory as kernelization. In this thesis, I implement parallel versions of some kernelization algorithms and evaluate their performance. The performance of kernelization algorithms is measured either by the size of the output kernel or by the time it takes to compute the kernel. Sometimes the kernel is the same as the original input, so it is desirable to know this as soon as possible. The problem scope is limited to a particular type of discrete optimization problem: a version of the k-clique problem in which the nodes of the given graph are pre-colored legally using k colors. The final evaluation shows that my parallel implementations achieve over 50% improvement in efficiency for at least one of these algorithms, not only in terms of speed but also in producing a smaller kernel.
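One simple reduction rule of the kind such kernelization algorithms iterate (an illustrative rule and graph encoding, not necessarily one from the thesis): in a k-colored graph, a vertex with no neighbor in some other color class cannot belong to a clique containing one vertex of each color, so it can be deleted; repeating until nothing changes leaves the kernel.

```python
def kernelize(adj, color, k):
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            neighbor_colors = {color[u] for u in adj[v] if u in alive}
            if any(c != color[v] and c not in neighbor_colors
                   for c in range(k)):
                alive.discard(v)   # v cannot be in any rainbow k-clique
                changed = True
    return alive

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
color = {0: 0, 1: 1, 2: 2, 3: 1}
print(kernelize(adj, color, 3))    # {0, 1, 2}; vertex 3 is removed
```

The per-vertex check within a pass depends only on the current graph, which is what makes rules like this amenable to parallelization.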