Floor planning is an important and difficult task in architecture. When planning office buildings, rooms that belong to the same organisational unit should be placed close to each other. This leads to the following NP-hard mathematical optimization problem. Given the outline of each floor, a list of room sizes, and, for each room, the unit to which it belongs, the aim is to compute floor plans such that each room is placed on some floor and the total distance between the rooms within each unit is minimized. The problem can be formulated as an integer linear program (ILP). Commercial ILP solvers exist, but due to the difficulty of the problem, only small to medium instances can be solved to (near-)optimality. For solving larger instances, we propose to split the problem into two subproblems: floor assignment and planning single floors. We formulate both subproblems as ILPs and solve realistic problem instances. Our experimental study shows that this decomposition reduces the computation time considerably. In the cases where we were able to compute the global optimum, the solution cost of the combined approach increased only slightly.
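To make the decomposition concrete, below is a minimal sketch of the floor-assignment subproblem as an ILP, assuming the open-source PuLP modelling library; the objective used here (the number of floors each unit is spread over) is a simplified proxy for the paper's intra-unit distance cost, and the problem data are invented for illustration.

```python
# Minimal floor-assignment ILP sketch (PuLP); proxy objective: keep each
# organisational unit on as few floors as possible.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

rooms = {"r1": ("u1", 20), "r2": ("u1", 30), "r3": ("u2", 25)}  # room -> (unit, size)
floors = {"f1": 50, "f2": 50}                                    # floor -> capacity
units = {u for u, _ in rooms.values()}

prob = LpProblem("floor_assignment", LpMinimize)
x = {(r, f): LpVariable(f"x_{r}_{f}", cat=LpBinary) for r in rooms for f in floors}
y = {(u, f): LpVariable(f"y_{u}_{f}", cat=LpBinary) for u in units for f in floors}

# Each room is placed on exactly one floor.
for r in rooms:
    prob += lpSum(x[r, f] for f in floors) == 1
# Floor capacities are respected.
for f, cap in floors.items():
    prob += lpSum(rooms[r][1] * x[r, f] for r in rooms) <= cap
# y[u, f] = 1 whenever unit u occupies floor f.
for r, (u, _) in rooms.items():
    for f in floors:
        prob += x[r, f] <= y[u, f]

# Objective: minimize the total number of (unit, floor) pairs in use.
prob += lpSum(y.values())
prob.solve()
```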
We present a detailed proof of the fact that the inverse of the Ackermann function is computable in linear time.
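For concreteness, one standard two-argument definition of the Ackermann function, and the inverse usually considered in such results, are given below; the exact variant used in the paper may differ.
\[
A(m,n) =
\begin{cases}
n+1 & \text{if } m = 0,\\
A(m-1,\,1) & \text{if } m > 0 \text{ and } n = 0,\\
A(m-1,\,A(m,\,n-1)) & \text{otherwise,}
\end{cases}
\qquad
\alpha(n) = \min\{\,k \ge 1 : A(k,k) \ge n\,\}.
\]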
In this paper, we investigate the problem of a last-mile delivery service that selects up to $N$ available vehicles to deliver $M$ packages from a centralized depot to $M$ delivery locations. The objective of the last-mile delivery service is to jointly maximize customer satisfaction (minimize delivery time) and minimize operating cost (minimize total travel time) by selecting the optimal number of vehicles to perform the deliveries. We model this as an assignment (vehicles to packages) and path planning (determining the delivery order and route) problem, which is equivalent to the NP-hard multiple traveling salesperson problem. We propose a scalable heuristic algorithm, which sacrifices some optimality to achieve a reasonable computational cost for a large number of packages. The algorithm combines hierarchical clustering with a greedy search. To validate our approach, we compare the results of our simulation to experiments in a $1$:$25$ scale robotic testbed for future mobility systems.
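The cluster-then-route idea behind the heuristic can be sketched as follows, assuming NumPy and SciPy; the linkage criterion, the cluster count, and the greedy nearest-neighbor ordering are illustrative choices, not necessarily the paper's exact algorithm.

```python
# Sketch: hierarchically cluster delivery locations into one group per
# vehicle, then order each cluster greedily by nearest neighbor.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def plan_routes(depot, locations, n_vehicles):
    Z = linkage(locations, method="ward")              # hierarchical clustering
    labels = fcluster(Z, t=n_vehicles, criterion="maxclust")
    routes = []
    for k in range(1, n_vehicles + 1):
        idx = list(np.flatnonzero(labels == k))
        route, pos = [], depot
        while idx:  # greedy nearest-neighbor ordering within the cluster
            j = min(idx, key=lambda i: np.linalg.norm(locations[i] - pos))
            idx.remove(j)
            route.append(j)
            pos = locations[j]
        routes.append(route)
    return routes

routes = plan_routes(np.zeros(2), np.random.rand(30, 2), n_vehicles=4)
```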
Chance constrained optimization problems allow one to model problems where constraints involving stochastic components should only be violated with a small probability. Evolutionary algorithms have recently been applied to this scenario and shown to achieve high quality results. With this paper, we contribute to the theoretical understanding of evolutionary algorithms for chance constrained optimization. We study the scenario of stochastic components that are independent and normally distributed. By generalizing results for the class of linear functions to the sum of transformed linear functions, we show that the (1+1)~EA can optimize the chance constrained setting without additional constraints in time O(n log n). However, we show that imposing an additional uniform constraint already leads to local optima even for very restricted scenarios, and to an exponential optimization time for the (1+1)~EA. We therefore propose a multi-objective formulation of the problem which trades off the expected cost and its variance. We show that multi-objective evolutionary algorithms are highly effective when using this formulation and obtain a set of solutions that contains an optimal solution for any possible confidence level imposed on the constraint. Furthermore, we show that this approach can also be used to compute a set of optimal solutions for the chance constrained minimum spanning tree problem.
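For independent, normally distributed components, the chance constraint can be rewritten deterministically: requiring cost at most $B$ with probability $\alpha$ becomes $\mu^\top x + K_\alpha \sqrt{(\sigma^2)^\top x} \le B$, where $K_\alpha$ is the $\alpha$-quantile of the standard normal. A minimal (1+1)~EA sketch on this quantile fitness is given below, assuming NumPy and SciPy; the problem data and the cardinality constraint are illustrative assumptions.

```python
# (1+1) EA on the alpha-quantile fitness mu^T x + K_alpha * sqrt(sigma2^T x)
# for binary x (x_i^2 = x_i, so the variance term is linear in x).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, m = 40, 10
mu, sigma2 = rng.uniform(1, 2, n), rng.uniform(0.1, 1.0, n)
K = norm.ppf(0.95)  # confidence level alpha = 0.95

def fitness(x):
    if x.sum() != m:                      # simple penalty for the cardinality constraint
        return 1e9 + abs(x.sum() - m)
    return mu @ x + K * np.sqrt(sigma2 @ x)

x = np.zeros(n, dtype=int)
x[:m] = 1
for _ in range(20000):
    flips = rng.random(n) < 1.0 / n       # standard bit-flip mutation
    y = np.where(flips, 1 - x, x)
    if fitness(y) <= fitness(x):          # accept if not worse
        x = y
```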
Consider the task of estimating a third-order $n \times n \times n$ tensor from noisy observations of randomly chosen entries in the sparse regime. We introduce a similarity-based collaborative filtering algorithm for sparse tensor estimation and argue that it achieves a sample complexity that nearly matches the conjectured computationally efficient lower bound on the sample complexity for the setting of low-rank tensors. Our algorithm uses the matrix obtained from the flattened tensor to compute similarity, and estimates the tensor entries using a nearest neighbor estimator. We prove that the algorithm recovers a low rank tensor with maximum entry-wise error (MEE) and mean-squared error (MSE) decaying to $0$ as long as each entry is observed independently with probability $p = \Omega(n^{-3/2 + \kappa})$ for any arbitrarily small $\kappa > 0$. More generally, we establish robustness of the estimator, showing that when arbitrary noise bounded by $\epsilon \geq 0$ is added to each observation, the estimation error with respect to MEE and MSE degrades by ${\sf poly}(\epsilon)$. Consequently, even if the tensor does not have finite rank but can be approximated within $\epsilon \geq 0$ by a finite rank tensor, the estimation error converges to ${\sf poly}(\epsilon)$. Our analysis sheds light on the conjectured sample complexity lower bound, showing that it matches the connectivity threshold of the graph used by our algorithm for estimating similarity between coordinates.
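A hedged sketch of the estimator's two steps, flattening followed by nearest-neighbor averaging, is shown below in NumPy; the similarity statistic used here (the mean product over commonly observed columns) is a simplified stand-in for the paper's construction.

```python
# Sketch: flatten the tensor to an n x n^2 matrix, compare rows of the
# observed entries for similarity, and average the nearest rows' observations.
import numpy as np

def estimate(T_obs, mask, n_neighbors=5):
    n = T_obs.shape[0]
    M = T_obs.reshape(n, -1)            # mode-1 flattening: n x n^2
    W = mask.reshape(n, -1).astype(float)
    # Similarity between coordinates u, v: mean product of values over
    # columns observed in both rows (a proxy for the paper's statistic).
    overlap = W @ W.T
    sim = (M * W) @ (M * W).T / np.maximum(overlap, 1)
    T_hat = np.zeros_like(T_obs, dtype=float)
    for u in range(n):
        nbrs = np.argsort(-sim[u])[:n_neighbors]       # most similar rows
        num = (M[nbrs] * W[nbrs]).sum(axis=0)
        den = np.maximum(W[nbrs].sum(axis=0), 1)
        T_hat[u] = (num / den).reshape(n, n)           # nearest-neighbor average
    return T_hat
```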
We perform a systematic exploration of the principle of Space Utilization Optimization (SUO) as a heuristic for planning better individual paths in a decoupled multi-robot path planner, with applications to both one-shot and life-long multi-robot path planning problems. We show that the decentralized heuristic set, SU-I, preserves single-path optimality and significantly reduces the congestion that naturally arises when many paths are planned without coordination. Integrating SU-I into complete planners brings dramatic reductions in computation time due to the significantly reduced number of conflicts, and leads to sizable solution optimality gains in diverse evaluation scenarios with medium and large maps, for both one-shot and life-long problem settings.
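One way to read the "preserves single-path optimality" property is that space-utilization information only breaks ties among equally short paths. The grid-based sketch below illustrates this reading with a Dijkstra search whose priority is (path length, accumulated usage); the scheme is our illustrative rendition, not the paper's exact SU-I heuristic.

```python
# Sketch: earlier paths deposit "usage" on cells; later searches prefer
# less-used cells, but only as a tie-breaker, so path lengths stay optimal.
import heapq

def plan(grid, start, goal, usage):
    H, W = len(grid), len(grid[0])
    pq, seen = [(0, 0, start, [start])], set()   # (length, usage, cell, path)
    while pq:
        dist, used, cell, path = heapq.heappop(pq)
        if cell == goal:
            for c in path:                        # record this path's footprint
                usage[c] = usage.get(c, 0) + 1
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < H and 0 <= nc < W and grid[nr][nc] == 0:
                heapq.heappush(pq, (dist + 1, used + usage.get((nr, nc), 0),
                                    (nr, nc), path + [(nr, nc)]))
    return None
```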
In this paper we deal with a practical problem that arises in military situations. The problem is to plan a path for one, or more, agents to reach a target without being detected by enemy sensors. Agents are not passive; rather, they can initiate actions which aid evasion. They can knock out, i.e. completely disable, sensors. They can also confuse sensors, thereby reducing sensor detection probabilities. Agent actions are path dependent and time limited. By path dependent we mean that an agent needs to be sufficiently close to a sensor to knock it out. By time limited we mean that a limit is imposed on how long a sensor remains knocked out or confused before it reverts to its original operating state. The approach adopted discretises the continuous space in which agents move. This enables the problem to be formulated as a zero-one integer program with linear constraints. The advantage of representing the problem in this manner is that powerful commercial optimisation software packages exist to solve it to proven global optimality. A heuristic for the problem based on successive shortest paths is also presented. Computational results are presented for a number of randomly generated test problems.
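The discretisation step can be illustrated as follows: the movement space becomes a grid graph, each cell carries a detection probability, and a shortest path under edge weights $-\log(1-p)$ maximises the probability of evading detection. The sketch below, in plain Python, omits the knockout and confusion actions for brevity and assumes the goal is reachable.

```python
# Sketch: Dijkstra on a detection-probability grid; minimizing the sum of
# -log(1 - p) maximizes the product of per-cell evasion probabilities.
import heapq, math

def safest_path(detect_prob, start, goal):
    H, W = len(detect_prob), len(detect_prob[0])
    dist, parent = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), math.inf):
            continue
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < H and 0 <= nc < W:
                w = -math.log(max(1.0 - detect_prob[nr][nc], 1e-12))
                if d + w < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = d + w
                    parent[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (d + w, (nr, nc)))
    path, node = [], goal                 # reconstruct (assumes goal reached)
    while node != start:
        path.append(node)
        node = parent[node]
    return [start] + path[::-1]
```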
Imitation learning seeks to circumvent the difficulty of designing proper reward functions for training agents by utilizing expert behavior. With environments modeled as Markov Decision Processes (MDPs), most existing imitation algorithms are contingent on the availability of expert demonstrations in the same MDP as the one in which a new imitation policy is to be learned. In this paper, we study the problem of how to imitate tasks when there are discrepancies between the expert and agent MDPs. These discrepancies across domains could include differing dynamics, viewpoint, or morphology; we present a novel framework to learn correspondences across such domains. Importantly, in contrast to prior works, we use unpaired and unaligned trajectories containing only states in the expert domain to learn this correspondence. To do so, we utilize a cycle-consistency constraint on both the state space and a domain-agnostic latent space. In addition, we enforce consistency on the temporal positions of states via a normalized position estimator function to align the trajectories across the two domains. Once this correspondence is found, we can directly transfer the demonstrations from one domain to the other and use them for imitation. Experiments across a wide variety of challenging domains demonstrate the efficacy of our approach.
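A minimal sketch of the cycle-consistency and temporal-alignment losses is given below, assuming PyTorch; the mapping networks F and G, their sizes, and the loss weighting are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch: F maps expert states to agent states, G maps back; train with a
# cycle-consistency loss on unpaired batches plus a normalized-position loss.
import torch
import torch.nn as nn

state_dim_x, state_dim_y = 8, 6  # illustrative state dimensions
F = nn.Sequential(nn.Linear(state_dim_x, 64), nn.ReLU(), nn.Linear(64, state_dim_y))
G = nn.Sequential(nn.Linear(state_dim_y, 64), nn.ReLU(), nn.Linear(64, state_dim_x))
pos = nn.Sequential(nn.Linear(state_dim_y, 32), nn.ReLU(), nn.Linear(32, 1))

opt = torch.optim.Adam([*F.parameters(), *G.parameters(), *pos.parameters()], lr=1e-3)

def step(x_batch, y_batch, t_batch):
    # Cycle consistency: G(F(x)) ~ x and F(G(y)) ~ y on unpaired batches.
    cycle = ((G(F(x_batch)) - x_batch) ** 2).mean() + \
            ((F(G(y_batch)) - y_batch) ** 2).mean()
    # Temporal alignment: translated states should keep their normalized
    # position t in [0, 1] along the trajectory.
    temporal = ((pos(F(x_batch)).squeeze(-1) - t_batch) ** 2).mean()
    loss = cycle + 0.1 * temporal
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example: step(torch.randn(32, state_dim_x), torch.randn(32, state_dim_y), torch.rand(32))
```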
We present Neural A*, a novel data-driven search method for path planning problems. Despite the recent increasing attention to data-driven path planning, a machine learning approach to search-based planning is still challenging due to the discrete nature of search algorithms. In this work, we reformulate a canonical A* search algorithm to be differentiable and couple it with a convolutional encoder to form an end-to-end trainable neural network planner. Neural A* solves a path planning problem by encoding a problem instance to a guidance map and then performing the differentiable A* search with the guidance map. By learning to match the search results with ground-truth paths provided by experts, Neural A* can produce a path consistent with the ground truth accurately and efficiently. Our extensive experiments confirmed that Neural A* outperformed state-of-the-art data-driven planners in terms of the search optimality and efficiency trade-off, and furthermore, successfully predicted realistic human trajectories by directly performing search-based planning on natural image inputs.
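One common way to make the node-selection step of A* differentiable is to replace the discrete argmin over the open list with a temperature-controlled softmax, so gradients from the path-matching loss can reach the learned guidance map. The sketch below illustrates this relaxation in PyTorch; it is not the paper's exact reformulation.

```python
# Sketch: soft node selection for a differentiable A*-style search.
import torch

def soft_select(f_scores, open_mask, tau=0.1):
    # f_scores: per-node f = g + h values shaped like the map; open_mask is
    # 1 on open-list nodes. Softmax of -f / tau approximates the argmin.
    neg_inf = torch.finfo(f_scores.dtype).min
    masked = torch.where(open_mask.bool(), -f_scores / tau,
                         torch.full_like(f_scores, neg_inf))
    return torch.softmax(masked.flatten(), dim=0).view_as(f_scores)
```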
In two-phase image segmentation, convex relaxation has allowed global minimisers to be computed for a variety of data fitting terms, and many efficient approaches exist to compute a solution quickly. However, we consider whether the nature of the data fitting in this formulation allows reasonable assumptions to be made about the solution that can improve the computational performance further. In particular, we employ a well known dual formulation of this problem and solve the corresponding equations in a restricted domain. We present experimental results that explore the dependence of the solution on this restriction and quantify improvements in the computational performance. This approach extends simply to analogous methods and could provide an efficient alternative for problems of this type.
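For intuition, below is a sketch of one projected-gradient update on the dual variable, restricted to a band of pixels, in NumPy; the sign conventions, step size, and restriction rule are illustrative and may differ from the paper's exact scheme.

```python
# Sketch: Chambolle-type dual update for two-phase segmentation, frozen
# outside a restricted band (mask) instead of running on the full domain.
import numpy as np

def dual_step(p, f, mask, lam=1.0, tau=0.25):
    # Divergence of the dual field p (shape (2, H, W)).
    div_p = np.zeros_like(f)
    div_p[:-1, :] += p[0, :-1, :]
    div_p[1:, :] -= p[0, :-1, :]
    div_p[:, :-1] += p[1, :, :-1]
    div_p[:, 1:] -= p[1, :, :-1]
    u = div_p - f / lam                       # primal estimate from the dual
    gx = np.diff(u, axis=0, append=u[-1:, :]) # forward differences
    gy = np.diff(u, axis=1, append=u[:, -1:])
    p_new = p + tau * np.stack([gx, gy])      # gradient step on the dual
    norm = np.maximum(1.0, np.sqrt(p_new[0] ** 2 + p_new[1] ** 2))
    p_new /= norm                             # project onto |p| <= 1
    return np.where(mask, p_new, p)           # update only inside the band
```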
The Normalized Cut (NCut) objective function, widely used in data clustering and image segmentation, quantifies the cost of graph partitioning in a way that favors balanced partitions: balanced clusters or segments receive lower values than unbalanced ones. However, this bias is so strong that NCut avoids singleton partitions entirely, even when vertices are very weakly connected to the rest of the graph. Motivated by the B\"uhler-Hein family of balanced cut costs, we propose the family of Compassionately Conservative Balanced (CCB) Cut costs, which are indexed by a parameter that can be used to strike a compromise between the desire to avoid too many singleton partitions and the notion that all partitions should be balanced. We show that CCB-Cut minimization can be relaxed into an orthogonally constrained $\ell_{\tau}$-minimization problem that coincides with the problem of computing Piecewise Flat Embeddings (PFE) for one particular index value, and we present an algorithm for solving the relaxed problem by iteratively minimizing a sequence of reweighted Rayleigh quotients (IRRQ). Using images from the BSDS500 database, we show that image segmentation based on CCB-Cut minimization provides better accuracy with respect to ground truth and greater variability in region size than NCut-based image segmentation.
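A hedged sketch of the IRRQ outer loop is given below in NumPy: each iteration reweights the graph's edges according to the current embedding's differences (an IRLS-style step for the $\ell_{\tau}$ cost) and re-solves a weighted eigenproblem. The reweighting exponent and the use of a single Fiedler-style eigenvector are illustrative simplifications.

```python
# Sketch: iteratively reweighted Rayleigh quotient minimization on a dense
# affinity matrix W0 (symmetric, nonnegative).
import numpy as np

def irrq(W0, tau=0.7, iters=10, eps=1e-6):
    rng = np.random.default_rng(0)
    e = rng.standard_normal(W0.shape[0])         # initial embedding
    for _ in range(iters):
        diff = np.abs(e[:, None] - e[None, :])
        Wr = W0 * (diff + eps) ** (tau - 2)      # IRLS-style edge reweighting
        L = np.diag(Wr.sum(axis=1)) - Wr         # weighted graph Laplacian
        vals, vecs = np.linalg.eigh(L)
        e = vecs[:, 1]                           # second-smallest eigenvector
    return e
```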