We propose new query applications of the well-known randomized incremental construction of the Trapezoidal Search DAG (TSD) on a set of $n$ line segments in the plane, where queries may be arbitrary axis-aligned windows. We show that our algorithm reports the $m$ trapezoids intersected by the query in $\mathcal{O}(m+\log n)$ expected time, regardless of the spatial location of the segment set and the query. When the query is a {\em vertical segment}, the query time bound reduces to $\mathcal{O}(k +\log n)$, where $k$ is the number of intersected segments. This improves on the query and space bounds of the well-known Segment Tree based approach, which has to date been the theoretical bottleneck for optimal query time. When the set of segments is a connected planar subdivision, the method extends easily to an algorithm that reports the $k$ segments intersecting an axis-aligned query window in $\mathcal{O}(k + \log n)$ expected time. Our publicly available implementation handles degeneracies exactly, including overlapping segments and multi-intersections. Experiments show that the method is practical and provides more reliable query times than R-trees and the segment tree based data structure on real-world and synthetic data sets.
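A minimal sketch of the reporting idea, assuming a trapezoidal search DAG that supports point location and stores adjacency between trapezoids; the `locate`, `neighbors`, and `intersects` members are hypothetical placeholders, not the paper's actual interface:

```python
from collections import deque

def window_query(tsd, window):
    """window = (xmin, ymin, xmax, ymax); tsd.locate(point) returns the trapezoid containing point."""
    xmin, ymin, xmax, ymax = window
    start = tsd.locate((xmin, ymin))            # O(log n) expected point location on one corner
    seen, reported, queue = {start}, [], deque([start])
    while queue:
        t = queue.popleft()
        reported.append(t)                      # every dequeued trapezoid intersects the window
        for nb in t.neighbors:                  # trapezoids sharing a vertical side with t
            if nb not in seen and nb.intersects(window):
                seen.add(nb)
                queue.append(nb)
    return reported                             # m reported trapezoids, O(m + log n) expected
```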
We investigate the problem of approximating the matrix function $f(A)$ by $r(A)$, where $f$ is a Markov function, $r$ a rational interpolant of $f$, and $A$ a symmetric Toeplitz matrix. In a first step, we obtain a new upper bound for the relative interpolation error $1-r/f$ on the spectral interval of $A$. By minimizing this upper bound over all interpolation points, we obtain a new, simple and sharp a priori bound for the relative interpolation error. We then consider three different approaches for representing and computing the rational interpolant $r$. Theoretical and numerical evidence is given that each of these methods achieves high precision for a scalar argument, even in finite precision arithmetic. Finally, we investigate the problem of efficiently evaluating $r(A)$, where it turns out that the relative error for a matrix argument is only small if $r$ is evaluated via a partial fraction decomposition following Antoulas and Mayo. An important role is played by a new stopping criterion which ensures that the degree of $r$ leading to a small error is found automatically, even in finite precision arithmetic.
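A minimal NumPy sketch of evaluating $r(A)b$ from a partial fraction representation $r(z) = c_0 + \sum_j c_j/(z - p_j)$, as suggested above for matrix arguments; the poles and residues are assumed already computed, and a dense shifted solve is used here for simplicity rather than a Toeplitz-structured one:

```python
import numpy as np

def eval_partial_fraction(A, b, c0, residues, poles):
    """Evaluate r(A) b for r(z) = c0 + sum_j residues[j] / (z - poles[j])."""
    n = A.shape[0]
    I = np.eye(n)
    y = c0 * b
    for cj, pj in zip(residues, poles):
        y = y + cj * np.linalg.solve(A - pj * I, b)   # shifted solve (A - pj I)^{-1} b
    return y
```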
The Wiener-Hopf equations are a Toeplitz system of linear equations that arise naturally in several applications in time series, including the update and prediction steps of the stationary Kalman filter equations and the prediction of bivariate time series. The celebrated Wiener-Hopf technique is usually used to solve these equations and is based on a comparison of coefficients in a Fourier series expansion. However, the statistical interpretation of both the method and its solution remains opaque. The purpose of this note is to revisit the (discrete) Wiener-Hopf equations and obtain an alternative solution that is more aligned with classical techniques in time series analysis. Specifically, we propose a solution to the Wiener-Hopf equations that combines linear prediction with deconvolution. The Wiener-Hopf solution requires the spectral factorization of the underlying spectral density function. For ease of evaluation it is often assumed that the spectral density is rational, which yields a computationally tractable solution but introduces an approximation error when the underlying spectral density is not a rational function. We use the proposed solution together with Baxter's inequality to derive an error bound for the rational spectral density approximation.
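As an illustration of the Toeplitz structure mentioned above (not of the paper's prediction/deconvolution solution), a finite truncation of such a system can be solved with a Levinson-type Toeplitz solver; the autocovariance sequence below is a hypothetical AR(1)-like example:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def solve_truncated_wiener_hopf(autocov, rhs):
    """Solve T x = rhs where T[i, j] = autocov[|i - j|] (symmetric Toeplitz)."""
    return solve_toeplitz(autocov, rhs)

# Hypothetical autocovariance c(k) = 0.6**k, purely for illustration.
n = 8
autocov = 0.6 ** np.arange(n)
rhs = 0.6 ** np.arange(1, n + 1)      # e.g. a one-step-ahead prediction right-hand side
coeffs = solve_truncated_wiener_hopf(autocov, rhs)
```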
For optimal power flow problems with chance constraints, a particularly effective method is based on a fixed-point iteration applied to a sequence of deterministic power flow problems. However, the convergence of such an approach is not guaranteed a priori. This article analyses the convergence conditions for this fixed-point approach and reports numerical experiments, including on large IEEE networks.
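A schematic sketch of such a fixed-point loop, assuming hypothetical helpers `init_margins`, `solve_deterministic_opf`, and `compute_margins` for the uncertainty-driven constraint tightenings; it is not this article's code:

```python
def chance_constrained_opf(network, tol=1e-6, max_iter=50):
    margins = init_margins(network)                  # hypothetical: initial constraint tightenings
    for _ in range(max_iter):
        x = solve_deterministic_opf(network, margins)       # deterministic OPF with tightened limits
        new_margins = compute_margins(network, x)           # update tightenings from the new operating point
        if max(abs(new_margins[k] - margins[k]) for k in margins) < tol:
            return x                                 # fixed point reached
        margins = new_margins
    raise RuntimeError("fixed-point iteration did not converge")
```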
In this paper, we study a non-local approximation of the time-dependent (local) Eikonal equation with Dirichlet-type boundary conditions, where the kernel in the non-local problem is properly scaled. Based on the theory of viscosity solutions, we prove existence and uniqueness of the viscosity solutions of both the local and non-local problems, as well as regularity properties of these solutions in time and space. We then derive error bounds between the solution of the non-local problem and that of the local one, both for the continuous-time problem and for its backward Euler time discretization. We then turn to studying continuum limits of non-local problems defined on random weighted graphs with $n$ vertices. In particular, we establish that if the kernel scale parameter decreases at an appropriate rate as $n$ grows, then, almost surely, the solution of the problem on graphs converges uniformly to the viscosity solution of the local problem as the time step vanishes and the number of vertices $n$ grows large.
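As a generic illustration of the backward Euler discretization referred to above (the paper's non-local operator and Dirichlet data are more specific), the local time-dependent Eikonal equation $\partial_t u + |\nabla u| = 0$ with time step $\Delta t$ becomes
\[
\frac{u^{n+1} - u^n}{\Delta t} + |\nabla u^{n+1}| = 0,
\]
so each time step amounts to solving a stationary Eikonal-type problem with the previous iterate $u^n$ as data.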
Local search has been demonstrated to be an efficient approach for both Partial MaxSAT (PMS) and Weighted PMS (WPMS), jointly denoted as (W)PMS, two practical generalizations of the classical combinatorial problem MaxSAT. In this work, we observe that most (W)PMS local search solvers flip a single variable per iteration. Such a mechanism may lead to relatively low-quality local optima and may limit the diversity of search directions available for escaping them. To this end, we propose a general strategy called farsighted probabilistic sampling (FPS) that replaces the single-flip mechanism in order to boost (W)PMS local search algorithms. FPS considers the benefit of flipping a pair of variables in succession, so as to find higher-quality local optima. Moreover, FPS offers an effective way to escape local optima by flipping whichever is better between the best sampled single variable and the best sampled variable pair. Extensive experiments demonstrate that our proposed FPS strategy significantly improves state-of-the-art (W)PMS local search solvers, and that FPS generalizes well to various (Max)SAT local search solvers.
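A minimal sketch of one sampling-based move of this kind; `score(assignment, flips)` is a hypothetical stand-in for the solver's gain of flipping the given variables (e.g. the decrease in weighted unsatisfied clauses), not the FPS implementation itself:

```python
import random

def fps_step(assignment, variables, score, n_single=10, n_pair=10):
    """Sample single flips and pair flips, then apply whichever candidate scores best."""
    candidates = [(v,) for v in random.sample(variables, n_single)]
    candidates += [tuple(random.sample(variables, 2)) for _ in range(n_pair)]
    best = max(candidates, key=lambda flips: score(assignment, flips))
    for v in best:                              # apply the single flip or the pair of flips
        assignment[v] = not assignment[v]
    return assignment
```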
Lattice-Boltzmann methods are known for their simplicity, efficiency and ease of parallelization, usually relying on uniform Cartesian meshes with a strong bond between spatial and temporal discretization. This complicates the crucial issue of reducing the computational cost and the memory footprint by automatically coarsening the grid wherever a fine mesh is unnecessary, while still ensuring the overall quality of the numerical solution through error control. This work provides a possible answer to this question by connecting, for the first time, the field of lattice-Boltzmann methods (LBM) to the adaptive multiresolution (MR) approach based on wavelets. To this end, we employ an MR multi-scale transform to adapt the mesh as the solution evolves in time according to its local regularity. The collision phase is not affected, owing to its inherently local nature and because we do not modify the speed of sound, contrary to most LBM/Adaptive Mesh Refinement (AMR) strategies proposed in the literature; the original structure of any LBM scheme is thus preserved. Besides, an original use of the MR framework allows the scheme to resolve the proper physics by efficiently controlling the accuracy of the transport phase. We carefully test our method to demonstrate its adaptability to a wide family of existing lattice-Boltzmann schemes, treating both hyperbolic and parabolic systems of equations, and thus being less problem-dependent than AMR approaches, which struggle to guarantee effective error control. We also show that the method yields very efficient compression rates, and hence reduced computational cost, for solutions involving localized structures with loss of regularity, while guaranteeing precise control of the approximation error introduced by the spatial adaptation of the grid. The numerical strategy is implemented on an open-source platform called SAMURAI with a dedicated data structure relying on set algebra.
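A minimal sketch of the multiresolution criterion for mesh adaptation, assuming the simplest (Haar-like) prediction on a one-dimensional dyadic grid: the "detail" of a fine cell is the difference between its value and a prediction from the coarser level, and cells with small details can be represented on the coarse level. This illustrates the wavelet-based criterion only, not the SAMURAI data structure or the LBM coupling:

```python
import numpy as np

def keep_fine_cells(u_fine, eps):
    """u_fine: values on 2N fine cells; returns a boolean mask of fine cells to keep refined."""
    u_coarse = 0.5 * (u_fine[0::2] + u_fine[1::2])    # projection onto the coarse level
    prediction = np.repeat(u_coarse, 2)               # Haar-like prediction back to the fine level
    details = np.abs(u_fine - prediction)             # local regularity indicator
    keep = details.reshape(-1, 2).max(axis=1) > eps   # per-coarse-cell refinement decision
    return np.repeat(keep, 2)
```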
In warehouses, order picking is known to be the most labor-intensive and costly task, and the employees performing it account for a large part of warehouse performance. Hence, many approaches exist that optimize the order picking process based on diverse economic criteria. However, most of these approaches focus on a single economic objective at a time and disregard ergonomic criteria. Further, the influence of the placement of the items to be picked is underestimated, and accordingly too little attention is paid to the interdependence of these two problems. In this work, we aim at optimizing the storage assignment and the order picking problem within a mezzanine warehouse with regard to their reciprocal influence. We propose a customized version of the Non-dominated Sorting Genetic Algorithm II (NSGA-II) for optimizing the storage assignment problem as well as an Ant Colony Optimization (ACO) algorithm for optimizing the order picking problem. Both algorithms incorporate multiple economic and ergonomic constraints simultaneously. Furthermore, the algorithms incorporate knowledge about the interdependence between both problems, aiming to improve the overall warehouse performance. Our evaluation shows that the proposed algorithms return better storage assignments and order-picking routes than commonly used techniques with respect to the following quality indicators for comparing Pareto fronts: Coverage, Generational Distance, Euclidean Distance, Pareto Front Size, and Inverted Generational Distance. Additionally, the evaluation of the interaction of both algorithms shows better performance when the two are combined.
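A small sketch of the Pareto-dominance notion underlying NSGA-II and the quality indicators listed above (minimization of all objectives assumed); it is a generic illustration, not the customized algorithms of this work:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """solutions: list of objective vectors; returns the non-dominated subset."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```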
Neural Architecture Search (NAS) was first proposed to achieve state-of-the-art performance through the discovery of new architecture patterns without human intervention. An over-reliance on expert knowledge in the search-space design has, however, led to increased performance (local optima) without significant architectural breakthroughs, thus preventing truly novel solutions from being reached. In this work we 1) are the first to investigate casting NAS as a problem of finding the optimal network generator, and 2) propose a new, hierarchical, graph-based search space capable of representing an extremely large variety of network types while requiring only a few continuous hyper-parameters. This greatly reduces the dimensionality of the problem, enabling the effective use of Bayesian Optimisation as a search strategy. At the same time, we expand the range of valid architectures, motivating a multi-objective learning approach. We demonstrate the effectiveness of this strategy on six benchmark datasets and show that our search space generates extremely lightweight yet highly competitive models.
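A hedged sketch of how a low-dimensional, continuous generator parameterization enables off-the-shelf Bayesian optimisation; `gp_minimize` from scikit-optimize is used here only as a generic stand-in, and `sample_networks_from_generator` and `train_and_evaluate` are hypothetical placeholders for producing architectures from the hyper-parameters and scoring them:

```python
from skopt import gp_minimize   # generic Bayesian optimisation stand-in

def objective(theta):
    nets = sample_networks_from_generator(theta)            # hypothetical: theta -> candidate networks
    return -max(train_and_evaluate(net) for net in nets)    # negate accuracy for minimisation

bounds = [(0.0, 1.0)] * 3          # a handful of continuous generator hyper-parameters
result = gp_minimize(objective, bounds, n_calls=50, random_state=0)
best_theta = result.x
```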
Querying graph-structured data is a fundamental operation that enables important applications including knowledge graph search, social network analysis, and cyber-network security. However, the growing size of real-world data graphs poses severe challenges for graph databases in meeting the response-time requirements of these applications. Planning the computational steps of query processing - Query Planning - is central to addressing these challenges. In this paper, we study the problem of learning to speed up query planning in graph databases, with the goal of improving the computational efficiency of query processing via training queries. We present a Learning to Plan (L2P) framework that is applicable to a large class of query reasoners that follow the Threshold Algorithm (TA) approach. First, we define a generic search space over candidate query plans, and identify target search trajectories (query plans) corresponding to the training queries by performing an expensive search. Subsequently, we learn greedy search control knowledge to imitate the search behavior of the target query plans. We provide a concrete instantiation of our L2P framework for STAR, a state-of-the-art graph query reasoner. Our experiments on benchmark knowledge graphs including DBpedia, YAGO, and Freebase show that, using the query plans generated by the learned search control knowledge, we can significantly improve the speed of STAR with negligible loss in accuracy.
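A schematic sketch of the imitation step: the expensive search records, for each state along a target query plan, which candidate action it chose, and a classifier is trained to reproduce those choices and then applied greedily at query time. The trajectory format and `featurize` are hypothetical placeholders, and a plain logistic regression stands in for whatever learner the framework actually uses:

```python
from sklearn.linear_model import LogisticRegression

def learn_search_control(trajectories, featurize):
    """trajectories: for each training query, a list of (state, candidate_actions, chosen_index)
    recorded along the target query plan; featurize(state, action) is a hypothetical feature map."""
    X, y = [], []
    for trajectory in trajectories:
        for state, candidates, chosen in trajectory:
            for i, action in enumerate(candidates):
                X.append(featurize(state, action))
                y.append(1 if i == chosen else 0)        # imitate the expensive search's choice
    return LogisticRegression(max_iter=1000).fit(X, y)   # at query time: greedily pick the
                                                         # highest-scoring candidate action
```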
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is strongly convex and smooth, only strongly convex, only smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
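A schematic of the dual approach mentioned above, under the common modelling assumption that the affine communication constraints are written $A\mathbf{x}=0$ (for instance $A = W^{1/2}\otimes I$ for a gossip matrix $W$): the dual function is $D(\lambda) = \min_{\mathbf{x}} \{F(\mathbf{x}) + \lambda^\top A\mathbf{x}\}$, whose gradient is $\nabla D(\lambda) = A\,\mathbf{x}^\star(\lambda)$ with $\mathbf{x}^\star(\lambda)$ the minimizer, and Nesterov's accelerated ascent iterates
\[
\lambda_{k+1} = y_k + \eta\, A\,\mathbf{x}^\star(y_k), \qquad y_{k+1} = \lambda_{k+1} + \beta_k\,(\lambda_{k+1} - \lambda_k),
\]
with step size $\eta$ and momentum $\beta_k$; since multiplication by $A$ only involves neighbor-to-neighbor communication, each iteration can be carried out in a distributed manner.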