
The Function-as-a-Service (FaaS) paradigm is a promising computing model for fog environments comprising both cloud and edge nodes, as compute requests can be scheduled across the entire fog continuum in a fine-grained manner. When the request rate exceeds capacity limits at the resource-constrained edge, some functions need to be offloaded towards the cloud. In this paper, we present an auction-inspired approach in which application developers bid on resources while fog nodes decide locally which functions to execute and which to offload in order to maximize revenue. Unlike many current approaches to function placement in the fog, our approach works in an online and decentralized manner. We also present our proof-of-concept prototype AuctionWhisk, which illustrates how such an approach can be implemented in a real FaaS platform. Through a number of simulation runs and system experiments, we show that revenue for overloaded nodes can be maximized without dropping function requests.
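
A minimal sketch of the kind of local, revenue-maximizing decision rule such an auction implies is given below. The `Bid` structure, the greedy price-per-resource ordering, and all numbers are illustrative assumptions, not AuctionWhisk's actual mechanism:

```python
# Hypothetical sketch: a fog node greedily accepts the most profitable bids
# per unit of resource and offloads the rest towards the cloud.
from dataclasses import dataclass

@dataclass
class Bid:
    function_id: str
    price: float       # what the developer offers to pay for one execution
    cpu_demand: float  # resources the function needs on this node

def schedule(bids: list[Bid], capacity: float) -> tuple[list[Bid], list[Bid]]:
    """Accept bids in order of revenue density so scarce edge capacity
    earns the most; anything that does not fit is offloaded, not dropped."""
    accepted, offloaded = [], []
    for bid in sorted(bids, key=lambda b: b.price / b.cpu_demand, reverse=True):
        if bid.cpu_demand <= capacity:
            accepted.append(bid)
            capacity -= bid.cpu_demand
        else:
            offloaded.append(bid)  # forwarded to the next node / cloud
    return accepted, offloaded

accepted, offloaded = schedule(
    [Bid("f1", 0.5, 1.0), Bid("f2", 0.9, 2.0), Bid("f3", 0.2, 0.5)],
    capacity=2.0,
)
print([b.function_id for b in accepted])   # ['f1', 'f3']
print([b.function_id for b in offloaded])  # ['f2']
```

Sorting by revenue density is a standard greedy heuristic for knapsack-like admission; the key property matching the abstract is that rejected requests are offloaded rather than dropped.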

Related Content

We propose throughput- and cost-optimal job scheduling algorithms for cloud computing platforms offering Infrastructure as a Service. We first consider online migration and propose job scheduling algorithms that minimize job migration and server running costs, initially assuming that each job's size is known on arrival. We characterize the optimal cost subject to system stability and develop an algorithm based on the drift-plus-penalty framework that approaches this optimal cost arbitrarily closely; specifically, the algorithm yields a trade-off between delay and cost. We then relax the job-size knowledge assumption and give an algorithm that relies only on the service already provided to each job, and we show that it achieves order-wise the same cost as the job-size-based algorithm. Finally, we consider offline job migration, which incurs migration delays, and again present throughput-optimal algorithms that minimize server running cost. We illustrate the performance of the proposed algorithms and compare them to existing algorithms via simulation.
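
The drift-plus-penalty rule can be illustrated on a single toy queue: serve when the backlog-weighted service gain outweighs the V-scaled running cost. Everything below (one server, the arrival process, the parameter values) is a hypothetical simplification of the paper's algorithms, which additionally handle migration across servers:

```python
# Toy drift-plus-penalty control for one server (illustrative only).
import random

def drift_plus_penalty_decision(queue_len: float, service_rate: float,
                                running_cost: float, V: float) -> bool:
    """Serve this slot iff the backlog reduction (drift) outweighs the
    V-weighted running cost (penalty); larger V trades delay for cost."""
    return queue_len * service_rate > V * running_cost

random.seed(0)
Q, V, mu, cost, total_cost = 0.0, 10.0, 1.5, 1.0, 0.0
for _ in range(1000):
    if drift_plus_penalty_decision(Q, mu, cost, V):
        Q = max(Q - mu, 0.0)   # serve and pay the running cost
        total_cost += cost
    Q += random.random()       # mean arrival rate 0.5 < mu, so stabilizable
print(f"final backlog {Q:.1f}, total running cost {total_cost:.0f}")
```

Raising V in this toy makes the server wait for a larger backlog before switching on, lowering cost at the price of delay, which is exactly the trade-off the abstract describes.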

Serverless computing is a virtualisation-related paradigm that promises to simplify application management and to address the field's remaining challenges: scaling down and ease of use. The implied cost reduction, coupled with simplified management of the underlying applications, is expected to further push the adoption of virtualisation-based solutions, including cloud-computing and telco-cloud solutions. However, in this quest for efficiency, security is not ranked among the top priorities, partly because of the (misleading) belief that solutions developed for virtualised environments can be applied as-is to this new paradigm. Unfortunately, this is not the case, owing to the idiosyncratic features of serverless computing highlighted in this paper. We review current serverless architectures, abstract and categorise their founding principles, and analyse them in depth from the point of view of security, referring to the principles and practices of the cybersecurity domain. In particular, we show the security shortcomings of the analysed serverless architectural paradigms, point to possible countermeasures, and highlight a few research directions.

We consider the cache-aided multiple-input single-output (MISO) broadcast channel, which consists of a server with $L$ antennas and $K$ single-antenna users, where the server contains $N$ files of equal length and each user is equipped with a local cache of size $M$ files. Each user requests an arbitrary file from the library. The objective is to design a coded caching scheme, based on uncoded placement and one-shot linear delivery, that achieves the maximum worst-case sum Degrees-of-Freedom (sum-DoF) with low subpacketization. Previously proposed schemes for this setting incur either subpacketization exponential in $K$ or specific conditions on the system parameters $L$, $K$, $M$ and $N$. In this paper, we propose a new combinatorial structure called multiple-antenna placement delivery array (MAPDA). Based on MAPDA and Latin squares, the first proposed scheme achieves the sum-DoF $L+\frac{KM}{N}$ with subpacketization $K$ when $\frac{KM}{N}+L=K$. Subsequently, for the general case, we propose a transformation approach that constructs an MAPDA from any $g$-regular PDA (a class of PDA in which each integer appears $g$ times in the array) for the original shared-link coded caching problem. When the original PDA is the seminal coded caching scheme proposed by Maddah-Ali and Niesen, the resulting scheme achieves the sum-DoF $L+\frac{KM}{N}$ with lower subpacketization than existing schemes. The work extends to multiple independent single-antenna transmitters (servers), corresponding to the cache-aided interference channel proposed by Naderializadeh et al., and to the scenario of transmitters equipped with multiple antennas.
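
For concreteness, here is a worked instance (with illustrative numbers, not taken from the paper) of the parameter condition under which the first scheme applies:

```latex
% With K = 6 users, L = 3 antennas, and cache ratio M/N = 1/2:
\[
  \frac{KM}{N} + L \;=\; 6 \cdot \tfrac{1}{2} + 3 \;=\; 6 \;=\; K,
\]
% so the Latin-square construction applies and achieves sum-DoF
\[
  L + \frac{KM}{N} \;=\; 3 + 3 \;=\; 6,
\]
% with subpacketization only K = 6, in contrast to schemes whose
% subpacketization grows exponentially in K.
```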

With the growth of 5G, Internet of Things (IoT), edge computing and cloud computing technologies, the infrastructure (compute and network) available to emerging applications (AR/VR, autonomous driving, Industry 4.0, etc.) has become quite complex. There are multiple tiers of computing (IoT devices, near edge, far edge, cloud, etc.) connected by different types of networking technologies (LAN, LTE, 5G, MAN, WAN, etc.). Deploying and managing applications in such an environment is challenging. In this paper, we propose ROMA, which performs resource orchestration for microservices-based 5G applications in a dynamic, heterogeneous, multi-tiered compute and network fabric. We assume that only application-level requirements are known, and that the detailed requirements of the individual microservices in the application are not specified. As part of our solution, ROMA identifies and leverages the coupling relationship between compute and network usage for the various microservices and solves an optimization problem to determine how each microservice should be deployed in the complex, multi-tiered compute and network fabric so that the end-to-end application requirements are optimally met. We implemented two real-world 5G applications in the video surveillance and intelligent transportation system (ITS) domains. Through extensive experiments, we show that ROMA saves up to 90%, 55%, and 44% compute and up to 80%, 95%, and 75% network bandwidth for the surveillance (watchlist) and transportation (person and car detection) applications, respectively. These savings are achieved while honoring the application performance requirements, relative to an alternative scheme that ignores the resource coupling relationships and employs a static, overprovisioned resource allocation strategy.
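
A toy sketch of the underlying idea follows: pick a tier per microservice so that compute and bandwidth costs are balanced against the end-to-end latency budget. The tier table, cost model, and greedy strategy are hypothetical stand-ins for ROMA's actual optimization:

```python
# Hypothetical tier selection for one microservice chain (illustrative only).
TIERS = {  # per-tier unit compute cost, bandwidth cost, added latency (ms)
    "edge":  {"cpu_cost": 3.0, "bw_cost": 0.5, "latency": 5},
    "cloud": {"cpu_cost": 1.0, "bw_cost": 2.0, "latency": 50},
}

def place(services, latency_budget_ms):
    """Greedy per-service choice: cheapest tier whose cumulative latency
    still fits the end-to-end application budget."""
    placement, latency = {}, 0
    for name, cpu, mbps in services:
        options = sorted(
            TIERS.items(),
            key=lambda kv: cpu * kv[1]["cpu_cost"] + mbps * kv[1]["bw_cost"],
        )
        for tier, spec in options:
            if latency + spec["latency"] <= latency_budget_ms:
                placement[name], latency = tier, latency + spec["latency"]
                break
        else:
            raise ValueError(f"no feasible tier for {name}")
    return placement

# (service, cpu units, Mbps to next hop): decoding is network-heavy, so it
# stays at the edge, while compute-heavy detection moves to the cloud.
print(place([("decode", 1, 20), ("detect", 8, 2)], latency_budget_ms=60))
# {'decode': 'edge', 'detect': 'cloud'}
```

The point the sketch makes is the coupling: where a network-heavy stage runs determines how much traffic crosses expensive links, so compute and bandwidth cannot be optimized independently.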

In cross-device Federated Learning (FL), clients with low computational power train a common machine learning model by exchanging parameter updates instead of potentially private data. Federated Dropout (FD) is a technique that improves the communication efficiency of an FL session by selecting a subset of model variables to update in each training round. However, FD yields considerably lower accuracy and higher convergence time than standard FL. In this paper, we leverage coding theory to enhance FD by allowing a different sub-model to be used at each client. We also show that by carefully tuning the server learning rate hyperparameter, we can achieve higher training speed and reach a final accuracy up to that of the no-dropout case. For the EMNIST dataset, our mechanism achieves 99.6% of the final accuracy of the no-dropout case while requiring 2.43x less bandwidth to reach this accuracy level.
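
The core idea, different sub-models per client plus masked server-side aggregation with a tunable server learning rate, can be sketched as follows. The staggered-window mask generator is a hypothetical stand-in for the paper's coding-theoretic construction:

```python
# Hypothetical sketch of per-client sub-model dropout and masked aggregation.
import numpy as np

def client_masks(n_params: int, n_clients: int, keep: float, seed: int = 0):
    """Give each client a different coordinate subset so that, across
    clients, every parameter is still trained in (almost) every round."""
    rng = np.random.default_rng(seed)
    k = int(n_params * keep)
    starts = rng.integers(0, n_params, size=n_clients)
    # Staggered contiguous windows: a crude stand-in for a code design.
    return [np.arange(s, s + k) % n_params for s in starts]

def aggregate(global_w, client_updates, masks, server_lr: float = 1.0):
    """Average each coordinate over the clients that actually trained it;
    server_lr is the tunable server learning rate from the abstract."""
    delta, counts = np.zeros_like(global_w), np.zeros_like(global_w)
    for upd, m in zip(client_updates, masks):
        delta[m] += upd
        counts[m] += 1
    return global_w + server_lr * delta / np.maximum(counts, 1)

w = np.zeros(10)
masks = client_masks(n_params=10, n_clients=3, keep=0.5)
updates = [np.ones(5) * (i + 1) for i in range(3)]  # each client sends 5 values
print(aggregate(w, updates, masks))
```

Because each client only uploads its masked coordinates, per-round traffic shrinks by the keep ratio, which is where the bandwidth savings reported in the abstract come from.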

Given a graph $G = (V,E)$, a threshold function $t~:~V \rightarrow \mathbb{N}$ and an integer $k$, we study the Harmless Set problem, where the goal is to find a subset of vertices $S \subseteq V$ of size at least $k$ such that every vertex $v\in V$ has fewer than $t(v)$ neighbors in $S$. We enhance our understanding of the problem from the viewpoint of parameterized complexity, focusing on parameters that measure the structural properties of the input instance. We show that the problem is W[1]-hard parameterized by a wide range of fairly restrictive structural parameters such as the feedback vertex set number, pathwidth, treedepth, and even the size of a minimum vertex deletion set into graphs of pathwidth and treedepth at most three. On dense graphs, we show that the problem is W[1]-hard parameterized by the cluster vertex deletion number. We also show that the Harmless Set problem with majority thresholds is W[1]-hard when parameterized by the treewidth of the input graph. We prove that the Harmless Set problem can be solved in polynomial time on graphs of bounded cliquewidth. On the positive side, we obtain fixed-parameter algorithms for the problem with respect to neighbourhood diversity, twin cover, and vertex integrity of the input graph. We show that the problem parameterized by the solution size is fixed-parameter tractable on planar graphs. We thereby resolve two open questions stated by C. Bazgan and M. Chopin (2014) concerning the complexity of Harmless Set parameterized by the treewidth of the input graph and on planar graphs with respect to the solution size.
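
For readers new to the problem, a brute-force solver on a toy instance makes the definition concrete; this is purely illustrative and unrelated to the parameterized algorithms studied in the paper:

```python
# Brute-force Harmless Set on a tiny graph (exponential; toy instances only).
from itertools import combinations

def is_harmless(adj, t, S):
    """S is harmless iff every vertex v has fewer than t[v] neighbours in S."""
    return all(sum(u in S for u in adj[v]) < t[v] for v in adj)

def max_harmless_set(adj, t):
    """Try subsets from largest to smallest and return the first harmless one."""
    vertices = list(adj)
    for size in range(len(vertices), -1, -1):
        for S in map(set, combinations(vertices, size)):
            if is_harmless(adj, t, S):
                return S
    return set()

# Path a - b - c; every vertex tolerates at most one selected neighbour.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
t = {"a": 2, "b": 2, "c": 2}
print(max_harmless_set(adj, t))  # {'a', 'b'}: selecting all three would give
                                 # b two selected neighbours, violating t(b).
```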

In multichannel signal processing with distributed sensors, choosing the optimal subset of observed sensor signals to exploit is crucial in order to maximize algorithmic performance and reduce computational load, ideally both at the same time. In the acoustic domain, signal cross-correlation is a natural choice to quantify the usefulness of microphone signals, i.e., microphone utility, for array processing, but its estimation requires that the uncoded signals be synchronized and transmitted between nodes. In resource-constrained environments like acoustic sensor networks, low data transmission rates often make transmitting all observed signals to a centralized location infeasible, thus discouraging direct estimation of signal cross-correlation. Instead, we employ characteristic features of the recorded signals to estimate the usefulness of individual microphone signals. In this contribution, we provide a comprehensive analysis of model-based microphone utility estimation approaches that use signal features and, as an alternative, propose machine learning-based estimation methods that identify optimal sensor signal utility features. The performance of both approaches is validated experimentally using both simulated and recorded acoustic data, covering a variety of realistic and practically relevant acoustic scenarios including moving and static sources.
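
A minimal sketch of the feature-based direction: compute cheap per-channel statistics locally and score them with a learned linear head, so no raw audio has to be transmitted. The specific features, model, and data below are hypothetical illustrations, not the paper's feature set:

```python
# Hypothetical feature-based microphone utility scoring (illustrative only).
import numpy as np

def channel_features(x: np.ndarray, frame: int = 512) -> np.ndarray:
    """Cheap per-microphone features that avoid transmitting raw audio:
    frame-power statistics as a crude proxy for signal activity and SNR."""
    frames = x[: len(x) // frame * frame].reshape(-1, frame)
    power = frames.var(axis=1)
    noise_floor = np.percentile(power, 10)  # quietest frames ~ noise
    return np.array([
        power.mean(),
        power.std(),
        10 * np.log10(power.max() / (noise_floor + 1e-12)),  # dynamic range
    ])

def utility(x: np.ndarray, weights: np.ndarray) -> float:
    """Linear scoring head; in a learned variant, the weights would be fit
    against the true cross-correlation-based utility as a training target."""
    return float(weights @ channel_features(x))

rng = np.random.default_rng(1)
near = rng.normal(0, 1.0, 16000) * np.hanning(16000)  # strong, modulated
far = rng.normal(0, 0.05, 16000)                      # mostly sensor noise
w = np.array([1.0, 1.0, 0.1])
print(utility(near, w) > utility(far, w))             # near mic ranks higher
```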

Serverless computing platforms currently rely on basic pricing schemes that are static and do not reflect customer feedback. This leads to significant inefficiencies from a total utility perspective. As one of the fastest-growing cloud services, serverless computing provides an opportunity to better serve both users and providers through the incorporation of market-based strategies for pricing and resource allocation. With the help of utility functions to model the delay-sensitivity of customers, we propose a novel scheduler to allocate resources for serverless computing. The resulting resource allocation scheme is optimal in the sense that it maximizes the aggregate utility of all users across the system, thus maximizing social welfare. Our approach gives rise to a natural dynamic pricing scheme that is obtained by solving an optimization problem in its dual form. We further develop feedback mechanisms that allow the cloud provider to converge to optimal resource allocation, even when the users' utilities are private and unknown to the service provider. Simulations show that our approach can track market demand and achieve significantly higher social welfare (or, equivalently, cost savings for customers) compared to existing schemes.
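
The dual-based pricing loop can be sketched in a few lines: the provider adjusts the price against excess demand, and each user responds with its privately optimal demand. The logarithmic utilities and parameter values below are illustrative assumptions, not the paper's general scheme:

```python
# Hypothetical dual-ascent dynamic pricing for shared serverless capacity.
def market_price(weights, capacity, steps=500, lr=0.01, p=1.0):
    """Provider raises the price when requested capacity exceeds supply and
    lowers it otherwise; users never reveal their utility parameters."""
    for _ in range(steps):
        # With U_i(x) = w_i * log(x), the utility-maximizing demand at price
        # p solves U_i'(x) = p, i.e. x_i = w_i / p (computed user-side).
        demand = [w / p for w in weights]
        p = max(p + lr * (sum(demand) - capacity), 1e-6)
    return p, [w / p for w in weights]

p, demand = market_price(weights=[1.0, 2.0, 4.0], capacity=10.0)
print(f"price {p:.3f}, allocations {[f'{d:.2f}' for d in demand]}")
# Equilibrium: sum(w_i)/p = capacity, so p -> 0.7 and allocations sum to 10.
```

This is the standard dual-decomposition pattern the abstract alludes to: the price is the dual variable of the capacity constraint, and the feedback loop converges to the welfare-maximizing allocation without the provider knowing the users' utilities.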

There is growing interest in object detection for advanced driver assistance systems, autonomous robots, and vehicles. To enable such innovative systems, we need faster object detection. In this work, we investigate the trade-off between accuracy and speed with domain-specific approximations, i.e., category-aware image size scaling and proposal scaling, for two state-of-the-art deep learning-based object detection meta-architectures. We study the effectiveness of applying approximation both statically and dynamically to understand their potential and applicability. By conducting experiments on the ImageNet VID dataset, we show that domain-specific approximation has great potential to improve system speed without deteriorating the accuracy of object detectors, achieving up to a 7.5x speedup with dynamic domain-specific approximation. Finally, we present our insights toward harvesting domain-specific approximation and devise a proof-of-concept runtime, AutoFocus, that exploits dynamic domain-specific approximation.
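
A minimal sketch of what a dynamic, category-aware scaling policy could look like: downscale the next frame only as far as the smallest tracked object stays detectable for its category. The pixel thresholds and the policy itself are hypothetical, not AutoFocus's actual runtime logic:

```python
# Hypothetical category-aware dynamic input scaling (illustrative only).
# Assumed minimum pixel sizes at which each category is still detected reliably.
MIN_OBJECT_PIXELS = {"car": 32, "person": 48, "bicycle": 40}

def pick_scale(prev_detections, floor=0.25):
    """Shrink the next frame while the smallest tracked object stays above
    its category's reliable-detection size; fall back to full resolution
    when nothing was detected (the object may have become too small)."""
    if not prev_detections:
        return 1.0
    shrink = min(box_pixels / MIN_OBJECT_PIXELS[cat]
                 for cat, box_pixels in prev_detections)
    return max(min(1.0 / shrink, 1.0), floor)

# Last frame: a 96 px car and a 120 px person. The person is the binding
# constraint (120/48 = 2.5x headroom), so the next frame runs at 0.4 scale.
print(pick_scale([("car", 96), ("person", 120)]))  # 0.4
```

Processing at 0.4 scale means roughly 6x fewer pixels, which is the kind of dynamic saving the abstract's speedup numbers come from.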

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
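
The smoothing idea behind DRS can be illustrated with a single-machine toy: estimate a gradient of the Gaussian-smoothed surrogate by sampling, then run plain gradient descent on a non-smooth objective. The estimator below is a standard zeroth-order construction and only a hypothetical stand-in for the distributed algorithm:

```python
# Toy Gaussian-smoothing gradient estimator (single machine, illustrative).
import numpy as np

def smoothed_grad(f, x, gamma=0.1, samples=64, rng=None):
    """Estimate the gradient of the smoothed surrogate
    f_gamma(x) = E[f(x + gamma * Z)], Z ~ N(0, I); f may be non-smooth."""
    rng = rng or np.random.default_rng(0)
    Z = rng.standard_normal((samples, x.size))
    fx = f(x)
    return np.mean([(f(x + gamma * z) - fx) / gamma * z for z in Z], axis=0)

f = lambda x: np.abs(x).sum()   # non-smooth convex objective
x = np.full(4, 5.0)
rng = np.random.default_rng(0)
for _ in range(200):            # plain gradient descent on the surrogate
    x -= 0.1 * smoothed_grad(f, x, rng=rng)
print(np.round(x, 2))           # coordinates end up close to the minimizer 0
```

Smoothing turns the non-smooth objective into a differentiable one at the cost of a controlled bias in `gamma`, which is the mechanism behind the $d^{1/4}$-suboptimality result for DRS.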
