In this paper we deal with a complex real-world scheduling problem closely related to the well-known Resource-Constrained Project Scheduling Problem (RCPSP). The problem concerns industrial test laboratories in which a large number of tests have to be performed by qualified personnel using specialised equipment, while respecting deadlines and other constraints. We present different constraint programming (CP) models and search strategies for this problem. Furthermore, we propose a Very Large Neighborhood Search (VLNS) approach based on our CP methods. Our models are evaluated using CP solvers and a MIP solver, both on real-world test laboratory data and on a set of generated instances of different sizes based on the real-world data. Further, we compare the exact approaches with VLNS and a Simulated Annealing heuristic. We find feasible solutions for all instances and optimal solutions for several of them, and we show that VLNS improves upon the results of the other approaches.
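To make the interval-based CP formulation concrete, here is a minimal sketch of such a model with deadlines and a cumulative resource, written with Google OR-Tools CP-SAT; the task data, single pooled resource, and makespan objective are illustrative assumptions, not the authors' actual model.

```python
# Minimal sketch of a CP model for a test-laboratory scheduling problem,
# in the spirit of the abstract (NOT the authors' actual model).
from ortools.sat.python import cp_model

tasks = [(3, 10), (2, 8), (4, 12)]  # hypothetical (duration, deadline) pairs
num_machines = 2                    # personnel/equipment simplified to one resource
horizon = max(d for _, d in tasks)

model = cp_model.CpModel()
intervals, ends = [], []
for i, (dur, deadline) in enumerate(tasks):
    start = model.NewIntVar(0, horizon, f"start_{i}")
    end = model.NewIntVar(0, horizon, f"end_{i}")
    intervals.append(model.NewIntervalVar(start, dur, end, f"task_{i}"))
    model.Add(end <= deadline)      # deadline constraint
    ends.append(end)

# At most num_machines tests run concurrently (cumulative resource).
model.AddCumulative(intervals, [1] * len(tasks), num_machines)

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("makespan:", solver.Value(makespan))
```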
The kinetic theory provides a good basis for developing numerical methods for multiscale gas flows covering a wide range of flow regimes. A particular challenge for kinetic schemes is whether they can capture the correct hydrodynamic behaviors of the system in the continuum regime (i.e., as the Knudsen number $\epsilon \ll 1$) without enforcing kinetic-scale resolution. At the current stage, the main approach to analyzing this property is the asymptotic preserving (AP) concept, which aims to show whether the kinetic scheme reduces to a solver for the hydrodynamic equations as $\epsilon \to 0$. However, under the AP framework the detailed asymptotic properties of kinetic schemes are indistinguishable when $\epsilon$ is small but finite. In order to distinguish different characteristics of kinetic schemes, in this paper we introduce the concept of unified preserving (UP), which aims at assessing the asymptotic orders (in terms of $\epsilon$) of a kinetic scheme by employing the modified equation approach and Chapman-Enskog analysis. It is shown that the UP properties of a kinetic scheme generally depend on the spatial/temporal accuracy and, more closely, on the interconnections among the three scales (kinetic scale, numerical scale, and hydrodynamic scale). Specifically, the numerical resolution and the specific discretization determine the numerical flow behaviors of the scheme in different regimes, especially in the near-continuum limit. As two examples, the UP analysis is applied to the discrete unified gas-kinetic scheme (DUGKS) and a second-order implicit-explicit Runge-Kutta (IMEX-RK) scheme to evaluate their asymptotic behaviors in the continuum limit.
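For reference, the Chapman-Enskog analysis mentioned above expands the distribution function in powers of the Knudsen number (shown here in its standard textbook form, not necessarily the paper's notation):

$$ f = f^{(0)} + \epsilon f^{(1)} + \epsilon^{2} f^{(2)} + \cdots $$

Truncating at $f^{(0)}$ recovers the Euler equations, while retaining the $O(\epsilon)$ term yields the Navier-Stokes equations; UP analysis asks at which order of this hierarchy a discrete scheme remains consistent, given its numerical resolution.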
Serverless computing has seen rapid growth due to the ease of use and cost efficiency it provides. However, function scheduling, a critical component of serverless systems, has been overlooked. In this paper, we take a first-principles approach toward designing a scheduler that caters to the unique characteristics of serverless functions as seen in real-world deployments. We first create a taxonomy of scheduling policies along three dimensions. Next, we use simulation to explore the scheduling-policy space for the function characteristics in a 14-day trace of Azure functions, and conclude that frequently used features such as late binding and random load balancing are sub-optimal for common execution-time distributions and load ranges. We use these insights to design Hermes, a scheduler for serverless functions with three key characteristics. First, to avoid head-of-line blocking caused by the high variability of function execution times, Hermes uses a combination of early binding and processor sharing for scheduling at individual worker machines. Second, Hermes uses a hybrid load-balancing approach that improves consolidation at low load while employing least-loaded balancing at high load to retain high performance. Third, Hermes is both load- and locality-aware, reducing the number of cold starts compared to purely load-based policies. We implement Hermes for Apache OpenWhisk and demonstrate that, for the function patterns observed both in the Azure trace and in other real-world traces, it achieves up to 85% lower function slowdown and 60% higher throughput than existing policies.
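The hybrid load-balancing rule can be sketched as follows; the threshold, the load representation, and the selection logic here are hypothetical illustrations of the stated policy, not Hermes internals.

```python
# Illustrative sketch of a hybrid load-balancing rule: consolidate at low
# load, switch to least-loaded balancing at high load. The 0.7 threshold
# and the (worker_id, load) representation are hypothetical.
def pick_worker(workers, load_threshold=0.7):
    """workers: list of (worker_id, current_load in [0, 1])."""
    avg_load = sum(load for _, load in workers) / len(workers)
    if avg_load < load_threshold:
        # Low load: pack onto the most-loaded worker that still has
        # capacity, improving locality and reducing cold starts.
        candidates = [w for w in workers if w[1] < 1.0]
        return max(candidates, key=lambda w: w[1])[0]
    # High load: classic least-loaded balancing to retain performance.
    return min(workers, key=lambda w: w[1])[0]

print(pick_worker([("w1", 0.2), ("w2", 0.5), ("w3", 0.9)]))  # consolidates onto w3
```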
We study an EM algorithm for estimating product-term regression models with missing data. In the likelihood tradition, such problems have thus far been addressed only by an EM algorithm that relies on full numerical integration. However, we show that under most missing-data patterns this problem can be solved analytically, and numerical approximations are needed only under specific conditions. We therefore propose a hybrid EM algorithm, which uses analytic solutions when they are available and approximate solutions only when needed. We describe the theoretical framework of our algorithm and present two numerical experiments using both simulated and real data. We show that our algorithm yields more accurate estimates than the existing full-numerical-integration method. We conclude with a discussion of applications, extensions, and topics for further research.
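As a toy illustration of when an analytic E-step suffices (a deliberately simple Gaussian example, not the paper's hybrid algorithm for product-term regressions):

```python
# Toy EM with an analytic E-step: estimate the means of a bivariate normal
# with known covariance when some second coordinates are missing.
# E[y | x] has a closed form, so no numerical integration is needed here.
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])          # known covariance
data = rng.multivariate_normal([1.0, -1.0], cov, size=500)
miss = rng.random(500) < 0.3                       # 30% of y-values missing
y_obs = np.where(miss, np.nan, data[:, 1])
x = data[:, 0]

mu = np.zeros(2)                                   # initial guess
for _ in range(50):
    # E-step: impute missing y by its analytic conditional expectation.
    slope = cov[0, 1] / cov[0, 0]
    y_fill = np.where(miss, mu[1] + slope * (x - mu[0]), y_obs)
    # M-step: update the means from the completed data.
    mu = np.array([x.mean(), y_fill.mean()])
print("estimated means:", np.round(mu, 2))         # close to (1, -1)
```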
We consider the non-preemptive scheduling problem on identical machines in which there is a parameter B, and each machine can process up to B different jobs during every unit-length time interval; for example, with B = 2 a machine may work on at most two distinct jobs within any unit time slot. The objective we consider is makespan minimization, and we develop an EPTAS for this problem. Prior to our work, a PTAS was known only for the case of one machine with constant values of B, and even the case of non-constant values of B on one machine was not known to admit a PTAS.
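To illustrate the machine model with a toy computation (unit-size jobs only; this is unrelated to the EPTAS itself): each unit interval can serve at most B jobs per machine, which gives the simple packing bound below.

```python
# Toy makespan for unit-size jobs under the B-constraint (illustrative only):
# every unit-length interval serves at most num_machines * B distinct jobs.
def greedy_makespan(num_jobs, num_machines, B):
    per_slot = num_machines * B
    full, rem = divmod(num_jobs, per_slot)
    return full + (1 if rem else 0)

# 10 unit jobs, 2 machines, B = 3: capacity 6 per slot -> makespan 2.
print(greedy_makespan(10, 2, 3))
```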
The framework of database repairs and consistent query answers is a principled approach to managing inconsistent databases. We describe the first system able to compute the consistent answers of general aggregation queries with the COUNT(A), COUNT(*), SUM(A), MIN(A), and MAX(A) operators, with or without grouping constructs. Our system reduces the problem to optimization versions of Boolean satisfiability (SAT) and then leverages powerful SAT solvers. We carry out an extensive set of experiments on both synthetic and real-world data that demonstrate the usefulness and scalability of this approach.
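Under the standard range semantics for aggregation queries, the consistent answer is the interval of values the query takes across all repairs. A brute-force toy makes the semantics concrete (feasible only for tiny instances; the system described here uses SAT-based optimization instead):

```python
# Toy range consistent answer for SUM(A) over all repairs of a table that
# violates a key constraint: rows sharing a key conflict, and each repair
# keeps exactly one row per key. Illustrative only.
from itertools import product

rows = [("k1", 5), ("k1", 9), ("k2", 3)]   # (key, A) -- key k1 is duplicated
groups = {}
for key, a in rows:
    groups.setdefault(key, []).append(a)

sums = [sum(choice) for choice in product(*groups.values())]
print("consistent answer for SUM(A):", (min(sums), max(sums)))  # (8, 12)
```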
This paper investigates the problem of online statistical inference of model parameters in stochastic optimization problems via the Kiefer-Wolfowitz algorithm with random search directions. We first present the asymptotic distribution of the Polyak-Ruppert-averaging type Kiefer-Wolfowitz (AKW) estimators, whose asymptotic covariance matrices depend on the function-value query complexity and the distribution of the search directions. The distributional result reflects the trade-off between statistical efficiency and function-query complexity. We further analyze the choice of random search directions that minimizes the asymptotic covariance matrix, and conclude that the optimal search direction depends on the optimality criterion, i.e., on which summary statistic of the Fisher information matrix is targeted. Based on the asymptotic distribution result, we conduct online statistical inference by providing two procedures for constructing valid confidence intervals. Numerical experiments verify our theoretical results and demonstrate the practical effectiveness of the procedures.
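A minimal sketch of a Kiefer-Wolfowitz iteration with a two-point function-value query, Gaussian search directions, and Polyak-Ruppert averaging follows; the quadratic objective, noise level, and step-size schedule are illustrative assumptions, not the paper's exact setup.

```python
# Two-query Kiefer-Wolfowitz with random search directions and a running
# Polyak-Ruppert average (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def f(x):  # noisy zeroth-order oracle for a quadratic with optimum at 1
    return np.sum((x - 1.0) ** 2) + 0.1 * rng.standard_normal()

d, T = 5, 20000
x, avg = np.zeros(d), np.zeros(d)
for t in range(1, T + 1):
    eta, h = 0.5 / t, t ** -0.25          # step size and finite-difference spacing
    v = rng.standard_normal(d)            # random search direction
    g = (f(x + h * v) - f(x - h * v)) / (2 * h) * v   # two-query gradient estimate
    x = x - eta * g
    avg += (x - avg) / t                  # Polyak-Ruppert running average
print(np.round(avg, 2))                   # approaches the optimum at (1, ..., 1)
```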
Image-to-image translation (I2I) aims to transfer images from a source domain to a target domain while preserving the content representations. I2I has drawn increasing attention and made tremendous progress in recent years because of its wide range of applications in computer vision and image processing, such as image synthesis, segmentation, style transfer, restoration, and pose estimation. In this paper, we provide an overview of the I2I works developed in recent years. We analyze the key techniques of existing I2I works and clarify the main progress the community has made. Additionally, we elaborate on the effect of I2I on the research and industry communities and point out remaining challenges in related fields.
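As one representative example of the key techniques such a survey covers, many unpaired I2I methods rely on a CycleGAN-style cycle-consistency loss that requires translations to be approximately invertible:

$$ \mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x}\big[\lVert F(G(x)) - x \rVert_1\big] + \mathbb{E}_{y}\big[\lVert G(F(y)) - y \rVert_1\big], $$

where $G$ maps the source domain to the target domain and $F$ maps back; the loss penalizes content lost in a round trip.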
While existing work in robust deep learning has focused on small pixel-level $\ell_p$ norm-based perturbations, these may not account for perturbations encountered in many real-world settings. In such cases, although test data might not be available, broad specifications of the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not i.i.d. with, but deviates from, the training domain. While this deviation may not be known exactly, its broad characterization is specified a priori in terms of attributes. We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space, without access to data from the test domain. Our adversarial training solves a min-max optimization problem: the inner maximization generates adversarial perturbations, and the outer minimization finds model parameters by optimizing the loss on the adversarial perturbations produced by the inner maximization. We demonstrate the applicability of our approach to three types of naturally occurring perturbations: object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of such perturbations, and we demonstrate its usefulness by showing the robustness gains of deep neural networks trained with our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
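The min-max structure can be sketched schematically; in this sketch the attribute space is a grid of rotation angles searched exhaustively, and `model`, `loss_fn`, `rotate`, and `train_step` are hypothetical stand-ins, not the paper's method.

```python
# Schematic min-max training over an attribute space (illustrative only).
# Inner step: search rotation angles for the worst-case perturbation.
# Outer step: train on the resulting adversarial samples.
import numpy as np

ANGLES = np.linspace(-30, 30, 13)      # attribute space: degree of rotation

def worst_case_batch(model, loss_fn, rotate, x, y):
    # Inner maximization: pick the angle that maximizes the batch loss.
    losses = [loss_fn(model(rotate(x, a)), y) for a in ANGLES]
    return rotate(x, ANGLES[int(np.argmax(losses))])

def adversarial_epoch(model, loss_fn, rotate, train_step, batches):
    for x, y in batches:
        x_adv = worst_case_batch(model, loss_fn, rotate, x, y)
        train_step(model, x_adv, y)    # outer minimization on adversarial inputs
```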
The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress in the past couple of decades. However, most work has been devoted to point sets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space, which appear in a wide range of applications, from road segments to time series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves that generalizes both the discrete Fr\'echet and Dynamic Time Warping distances, the two most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. The query time complexity of our algorithms is comparable to, or significantly improves upon, that of existing methods; our algorithms are especially efficient when the length of the curves is bounded.
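The randomized-projection building block can be sketched in isolation (a plain Johnson-Lindenstrauss-style projection on points, not the paper's full curve data structure):

```python
# Gaussian random projection for approximate nearest neighbor search:
# distances are approximately preserved in the lower-dimensional sketch.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 128, 16
points = rng.standard_normal((n, d))
proj = rng.standard_normal((d, k)) / np.sqrt(k)   # JL-style projection
sketch = points @ proj                            # precomputed sketches

query = rng.standard_normal(d)
q = query @ proj
approx_nn = int(np.argmin(np.linalg.norm(sketch - q, axis=1)))
exact_nn = int(np.argmin(np.linalg.norm(points - query, axis=1)))
print(approx_nn, exact_nn)   # frequently agree, since distances are preserved
```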
We report an evaluation of the effectiveness of existing knowledge base embedding models for relation prediction and relation extraction on a wide range of benchmarks. We also introduce a new benchmark, much larger and more complex than previous ones, to help validate the effectiveness on both tasks. The results demonstrate that knowledge base embedding models are generally effective for relation prediction, but with the existing strategies they are unable to improve the state-of-the-art neural relation extraction model; we also point out limitations of existing methods.
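For concreteness, relation prediction with a translation-based embedding model such as TransE (one common family of KB embedding models; not necessarily the exact models evaluated here) scores candidate relations as follows:

```python
# Toy TransE-style relation prediction: rank relations r by how well
# head + r ~ tail holds in embedding space. Embeddings here are random
# placeholders; in practice they are learned from the knowledge base.
import numpy as np

rng = np.random.default_rng(0)
dim, num_relations = 50, 8
head, tail = rng.standard_normal(dim), rng.standard_normal(dim)
relations = rng.standard_normal((num_relations, dim))

scores = -np.linalg.norm(head[None, :] + relations - tail[None, :], axis=1)
print("predicted relation:", int(np.argmax(scores)))
```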