Renting servers in the cloud is a generalization of the bin packing problem, motivated by job allocation to servers in cloud computing applications. Jobs arrive in an online manner and need to be assigned to servers; their duration and size are known at the time of arrival. There is an infinite supply of identical servers, each having one unit of computational capacity per unit of time. A server can be rented at any time and continues to be rented until all jobs assigned to it finish. The cost of an assignment is the sum of the durations of the rental periods of all servers. The goal is to assign jobs to servers so as to minimize the overall cost while satisfying the server capacity constraints. We focus on analyzing two natural algorithms, NextFit and FirstFit, for the case of jobs of equal duration. It is known that the competitive ratios of NextFit and FirstFit are at most 3 and 4, respectively, for this case. We prove a tight bound of 2 on the competitive ratio of NextFit. For FirstFit, we establish a lower bound of 2.519 on the competitive ratio, even when jobs have only two distinct arrival times. For the case when jobs have arrival times 0 and 1 and duration 2, we show a lower bound of 1.89 and an upper bound of 2 on the strict competitive ratio of FirstFit. Finally, using the weight function technique, we obtain stronger results for the case of uniform servers.
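For intuition, here is a minimal Python sketch of one natural reading of NextFit in this setting, assuming jobs arrive as (arrival_time, size) pairs sorted by arrival and all share the same duration; the data layout is illustrative, not the paper's pseudocode.

```python
def next_fit(jobs, duration, capacity=1.0):
    """NextFit for cloud server renting with equal-duration jobs.

    jobs: list of (arrival_time, size) pairs, sorted by arrival time.
    A server's load at any instant is the total size of its jobs still
    running; it must never exceed `capacity`. A server is rented from
    the arrival of its first job until its last job finishes.
    Returns the total rental cost (sum of rental-period lengths).
    """
    servers = []       # each server: list of (arrival, size) jobs
    current = None     # the single "open" server NextFit considers
    for arrival, size in jobs:
        fits = False
        if current is not None:
            # load on the open server from jobs still active at `arrival`
            load = sum(s for a, s in current if a + duration > arrival)
            fits = load + size <= capacity
        if not fits:   # close the open server, rent a fresh one
            current = []
            servers.append(current)
        current.append((arrival, size))
    # rental period of a server: first arrival to last finish time
    return sum(max(a for a, _ in srv) + duration - min(a for a, _ in srv)
               for srv in servers)
```

FirstFit would instead scan all currently rented servers in the order they were opened and place the job on the first one with enough residual capacity, which is what makes its analysis considerably harder.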
We study a fundamental question concerning adversarial noise models in statistical problems where the algorithm receives i.i.d. draws from a distribution $\mathcal{D}$. The definitions of these adversaries specify the type of allowable corruptions (noise model) as well as when these corruptions can be made (adaptivity); the latter differentiates between oblivious adversaries that can only corrupt the distribution $\mathcal{D}$ and adaptive adversaries that can have their corruptions depend on the specific sample $S$ that is drawn from $\mathcal{D}$. In this work, we investigate whether oblivious adversaries are effectively equivalent to adaptive adversaries, across all noise models studied in the literature. Specifically, can the behavior of an algorithm $\mathcal{A}$ in the presence of oblivious adversaries always be well-approximated by that of an algorithm $\mathcal{A}'$ in the presence of adaptive adversaries? Our first result shows that this is indeed the case for the broad class of statistical query algorithms, under all reasonable noise models. We then show that in the specific case of additive noise, this equivalence holds for all algorithms. Finally, we map out an approach towards proving this statement in its fullest generality, for all algorithms and under all reasonable noise models.
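To make the distinction between the two adversary types concrete, here is an illustrative Python sketch for the additive-noise case; the helper samplers `sample_d`, `sample_q`, and the strategy `make_noise` are hypothetical stand-ins for the clean distribution $\mathcal{D}$, a noise distribution, and the adversary's (possibly sample-dependent) choice of injected points.

```python
import random

def oblivious_additive(sample_d, sample_q, m, eta):
    """Oblivious additive-noise adversary: corrupts the *distribution*.
    Each point is drawn from the noise distribution Q with probability
    eta, independently of the realized clean sample."""
    return [sample_q() if random.random() < eta else sample_d()
            for _ in range(m)]

def adaptive_additive(sample_d, make_noise, m, eta):
    """Adaptive additive-noise adversary: first sees the clean sample S
    drawn from D, then appends noise points that may depend on S."""
    clean = [sample_d() for _ in range(m)]
    k = int(eta * m)              # number of injected points
    noise = make_noise(clean, k)  # adversary inspects S before acting
    return clean + noise
```

The question studied in the paper is whether the extra power of the second adversary, namely its view of the realized sample, actually helps against any algorithm.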
A homomorphic secret sharing (HSS) scheme is a secret sharing scheme that supports evaluating functions on shared secrets by means of a local mapping from input shares to output shares. We initiate the study of the download rate of HSS, namely, the achievable ratio between the length of the output shares and the output length when amortized over $\ell$ function evaluations. We obtain the following results.
* In the case of linear information-theoretic HSS schemes for degree-$d$ multivariate polynomials, we characterize the optimal download rate in terms of the optimal minimal distance of a linear code with related parameters. We further show that for sufficiently large $\ell$ (polynomial in all problem parameters), the optimal rate can be realized using Shamir's scheme, even with secrets over $\mathbb{F}_2$.
* We present a general rate-amplification technique for HSS that improves the download rate at the cost of requiring more shares. As a corollary, we get high-rate variants of computationally secure HSS schemes and efficient private information retrieval protocols from the literature.
* We show that, in some cases, one can beat the best download rate of linear HSS by allowing nonlinear output reconstruction and $2^{-\Omega(\ell)}$ error probability.
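Since the first result realizes the optimal rate with Shamir's scheme, a minimal Python sketch of Shamir sharing over a toy prime field may serve as a reminder; the field and parameters are illustrative only.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime, large enough for toy secrets

def shamir_share(secret, n, t):
    """Split `secret` into n shares; any t+1 of them reconstruct it
    (the shares are evaluations of a random degree-t polynomial)."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation at 0 recovers the secret from t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

The homomorphic aspect comes from the fact that sums and pointwise products of share polynomials are shares of the sum and product of the secrets, so parties can locally compute output shares of low-degree polynomials, at the cost of a higher reconstruction degree; the download rate then measures how long these output shares must be.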
Edge computing brings several advantages, such as reduced latency, increased bandwidth, and improved locality of traffic. One aspect that is not sufficiently understood is the impact of the different communication latencies experienced across the edge-cloud continuum on the energy consumption of clients. We studied how a request-response communication scheme is influenced by different placements of the server when communication is based on LTE. Results show that, by carefully selecting the operational parameters, a significant amount of energy can be saved.
We present and analyze a cut finite element method for the weak imposition of the Neumann boundary conditions of the Darcy problem. The Raviart-Thomas mixed element on both triangular and quadrilateral meshes is considered. Our method is based on the Nitsche formulation studied in [10.1515/jnma-2021-0042] and can be considered a first attempt at extending it to the unfitted case. The key feature is the addition of two ghost penalty operators that stabilize the velocity and pressure fields. We rigorously prove that our stabilized formulation is well-posed and derive a priori error estimates for the velocity and pressure fields. We also establish an upper bound on the condition number of the stiffness matrix. Numerical examples corroborating the theory are included.
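For readers unfamiliar with ghost penalties, a standard template for such a stabilization term is the normal-derivative-jump penalty (schematic only; the paper's velocity and pressure operators for Raviart-Thomas elements may differ in detail):
\[
g_h(u_h, v_h) \;=\; \sum_{F \in \mathcal{F}_G} \sum_{k=0}^{p} \gamma_k\, h_F^{2k+1} \int_F [\![\partial_n^k u_h]\!]\,[\![\partial_n^k v_h]\!]\,\mathrm{d}s,
\]
where $\mathcal{F}_G$ is the set of faces of elements cut by the boundary, $[\![\cdot]\!]$ denotes the jump across a face, and $\partial_n^k$ is the $k$-th normal derivative. Adding such terms extends coercivity from the physical domain to the whole active mesh, which is what yields well-posedness and a condition-number bound that do not degrade with how the boundary cuts the cells.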
In this paper we consider the Recoverable Traveling Salesman Problem (Recoverable TSP). Here the task is to find two tours simultaneously, such that the intersection of the tours has at least a given minimum size, while the sum of travel distances with respect to two different distance metrics is minimized. Building upon the classic double-tree method, we derive a 4-approximation algorithm for the Recoverable TSP. We also show that if the required size of the intersection of the tours is constant, a 2-approximation guarantee can be achieved, even if more than two tours need to be constructed. We discuss consequences for approximability results in the more general area of recoverable robust optimization.
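As background, the classic double-tree method that the 4-approximation builds upon can be sketched in a few lines of Python; Euclidean points are used for concreteness, and any metric distance function would work.

```python
import math

def double_tree_tour(points):
    """Double-tree heuristic for metric TSP: build an MST, double its
    edges (implicitly), and shortcut the resulting Euler walk via a
    DFS preorder. Under the triangle inequality the tour costs at
    most twice the optimum."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    # Prim's algorithm for the MST
    in_tree = [False] * n
    parent = [None] * n
    key = [math.inf] * n
    key[0] = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: key[i])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist(u, v) < key[v]:
                key[v], parent[v] = dist(u, v), u
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    # DFS preorder = shortcut Euler tour of the doubled MST
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour + [0]  # close the cycle
```

The recoverable variant must additionally coordinate the two tours so that their intersection is large enough while each is short under its own metric, which is where the paper departs from the classic analysis.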
We study the $k$-server problem with time windows. In this problem, each request $i$ arrives at some point $v_i$ of an $n$-point metric space at time $b_i$ and comes with a deadline $e_i$. One of the $k$ servers must be moved to $v_i$ at some time in the interval $[b_i, e_i]$ to satisfy this request. We give an online algorithm for this problem with a competitive ratio of ${\rm polylog}(n,\Delta)$, where $\Delta$ is the aspect ratio of the metric space. Prior to our work, the best competitive ratio known for this problem was $O(k \cdot {\rm polylog}(n))$, given by Azar et al. (STOC 2017). Our algorithm is based on a new covering linear program relaxation for $k$-server on HSTs. This LP naturally corresponds to the min-cost flow formulation of $k$-server and easily extends to the case of time windows. We give an online algorithm for obtaining a feasible fractional solution to this LP, and a primal-dual analysis framework to account for the cost of the solution. Together, they yield a new $k$-server algorithm with polylogarithmic competitive ratio that extends to the time-windows case as well. Our principal technical contribution lies in viewing the covering LP as yielding a {\em truncated} covering LP at each internal node of the tree, which allows us to keep account of server movements across subtrees. We hope that this LP relaxation and the accompanying algorithm and analysis will be a useful tool for addressing $k$-server and related problems.
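Schematically (this is the generic template, not the specific relaxation constructed in the paper), a covering LP and its packing dual have the form
\[
\min\ c^{\top} x \ \ \text{s.t.}\ \ A x \ge \mathbf{1},\ x \ge 0,
\qquad
\max\ \mathbf{1}^{\top} y \ \ \text{s.t.}\ \ A^{\top} y \le c,\ y \ge 0,
\]
with $A \ge 0$ and $c \ge 0$. A primal-dual analysis charges every increase in the online primal cost to a comparable increase in the dual objective, and weak duality then bounds the online cost against the optimum; the paper instantiates this template, with truncated versions at each internal node of the HST, to track server movements across subtrees.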
In the last decades, the classical Vehicle Routing Problem (VRP), i.e., assigning a set of orders to vehicles and planning their routes, has been intensively researched. Since even the assignment of orders to vehicles and the planning of their routes is an NP-hard problem, algorithms applied in practice often fail to take into account the constraints and restrictions that apply in real-world applications, the so-called rich VRP (rVRP), and are limited to single aspects of the problem. In this work, we incorporate the main real-world constraints and requirements. We propose a two-stage strategy and a Timeline algorithm for time windows and pause times, and apply a Genetic Algorithm (GA) and Ant Colony Optimization (ACO) individually to the problem to find optimal solutions. Our evaluation on eight different problem instances against four state-of-the-art algorithms shows that our approach handles all given constraints in a reasonable time.
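To give a flavor of the GA component, here is a toy, single-vehicle sketch with standard operators; it is not the paper's encoding, and the Timeline handling of time windows and pause times as well as the ACO variant are not reproduced here.

```python
import random

def route_cost(route, dist):
    """Length of an open route given a distance matrix."""
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def genetic_vrp(dist, pop_size=50, generations=500, mut_rate=0.2):
    """Minimal GA on customer permutations: tournament selection,
    order crossover (OX), and swap mutation."""
    n = len(dist)
    def ox(p1, p2):
        # keep a slice of p1, fill remaining positions in p2's order
        a, b = sorted(random.sample(range(n), 2))
        hole = set(p1[a:b])
        rest = [g for g in p2 if g not in hole]
        return rest[:a] + p1[a:b] + rest[a:]
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1 = min(random.sample(pop, 3), key=lambda r: route_cost(r, dist))
            p2 = min(random.sample(pop, 3), key=lambda r: route_cost(r, dist))
            child = ox(p1, p2)
            if random.random() < mut_rate:
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda r: route_cost(r, dist))
```

In an rVRP setting, the fitness function would additionally penalize violations of time windows, pause times, and vehicle capacities, which is exactly the role the paper assigns to its two-stage strategy and Timeline algorithm.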
Containerized services allocated in mobile edge clouds create the opportunity for large-scale, real-time applications to achieve low-latency responses. Meanwhile, live container migration has been introduced to support dynamic resource management and user mobility. However, with the expansion of network topology scale and the increasing number of migration requests, the existing multiple-migration planning and scheduling algorithms designed for cloud data centers do not suit large-scale edge computing scenarios. The user-mobility-induced live migrations in edge computing require near-real-time scheduling. Therefore, in this paper, using a Software-Defined Networking (SDN) controller, we model the resource competition among live migrations as a dynamic resource dependency graph. We propose an iterative Maximal Independent Set (MIS)-based algorithm for planning and scheduling multiple migrations. Using real-world mobility traces of taxis and telecom base station coordinates, the evaluation results indicate that our solution can efficiently schedule multiple live container migrations in large-scale edge computing environments. It improves the processing time by a factor of 3000 compared with the state-of-the-art migration planning algorithm for clouds while providing guaranteed migration performance for time-critical migrations.
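The core planning idea can be illustrated with a greedy Python sketch; the names and the conflict-graph encoding are illustrative, and the paper's SDN-based planner additionally accounts for bandwidth sharing and time-critical migrations.

```python
def mis_schedule(migrations, conflicts):
    """Iterative MIS-based scheduling: in each round, greedily pick a
    maximal set of pairwise non-conflicting migrations (an independent
    set in the resource-dependency graph), run them concurrently, and
    repeat on the remainder. `conflicts` maps each migration to the
    set of migrations it shares resources with."""
    remaining = set(migrations)
    rounds = []
    while remaining:
        chosen = set()
        # greedy maximal independent set on the remaining subgraph
        for m in sorted(remaining):
            if conflicts[m].isdisjoint(chosen):
                chosen.add(m)
        rounds.append(chosen)
        remaining -= chosen
    return rounds

# e.g. migrations 'a'..'d' where a-b and b-c compete for bandwidth:
# mis_schedule(['a', 'b', 'c', 'd'],
#              {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}, 'd': set()})
# -> [{'a', 'c', 'd'}, {'b'}]
```

Because the dependency graph is dynamic (migrations finish and new requests arrive), the independent sets are recomputed iteratively rather than once up front.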
We prove that the number of tangencies between the members of two families, each of which consists of $n$ pairwise disjoint curves, can be as large as $\Omega(n^{4/3})$. If the families are doubly-grounded, this is sharp. We also show that if the curves are required to be $x$-monotone, then the maximum number of tangencies is $\Theta(n\log n)$, which improves a result by Pach, Suk, and Treml.
In this paper, an interference-aware path planning scheme for a network of cellular-connected unmanned aerial vehicles (UAVs) is proposed. In particular, each UAV aims at achieving a tradeoff between maximizing energy efficiency and minimizing both the wireless latency and the interference level caused on the ground network along its path. The problem is cast as a dynamic game among UAVs. To solve this game, a deep reinforcement learning algorithm based on echo state network (ESN) cells is proposed. The introduced deep ESN architecture is trained to allow each UAV to map each observation of the network state to an action, with the goal of minimizing a sequence of time-dependent utility functions. Each UAV uses the ESN to learn its optimal path, transmission power level, and cell association vector at different locations along its path. The proposed algorithm is shown to reach a subgame perfect Nash equilibrium (SPNE) upon convergence. Moreover, upper and lower bounds on the altitude of the UAVs are derived, thus reducing the computational complexity of the proposed algorithm. Simulation results show that the proposed scheme achieves better wireless latency per UAV and rate per ground user (UE) while requiring a number of steps comparable to that of a heuristic baseline that moves each UAV along the shortest distance towards its destination. The results also show that the optimal altitude of the UAVs varies with the ground network density and the UE data rate requirements, and plays a vital role in minimizing the interference level on the ground UEs as well as the wireless transmission delay of the UAV.
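For reference, the basic echo state network update underlying such architectures is the standard leaky-integrator formulation below, with illustrative sizes; the paper stacks several such reservoirs into a deep ESN and trains a readout that maps network observations to path, power, and association actions.

```python
import numpy as np

def esn_step(x, u, W, W_in, leak=0.3):
    """One leaky-integrator echo state network update:
    x' = (1 - leak) * x + leak * tanh(W @ x + W_in @ u).
    The reservoir W is fixed and random (scaled so its spectral
    radius stays below 1, giving the echo state property); only a
    linear readout on the state x is trained."""
    return (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)

# a toy reservoir with the echo state property (spectral radius 0.9)
rng = np.random.default_rng(0)
n_res, n_in = 100, 5
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal((n_res, n_in)) * 0.1
x = np.zeros(n_res)
x = esn_step(x, rng.standard_normal(n_in), W, W_in)
```

The appeal of ESNs in this setting is that the recurrent part requires no backpropagation through time, keeping per-UAV training cheap enough for online adaptation.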