
Serverless computing has seen rapid growth due to the ease-of-use and cost-efficiency it provides. However, function scheduling, a critical component of serverless systems, has been overlooked. In this paper, we take a first-principles approach toward designing a scheduler that caters to the unique characteristics of serverless functions as seen in real-world deployments. We first create a taxonomy of scheduling policies along three dimensions. Next, we use simulation to explore the scheduling policy space for the function characteristics in a 14-day trace of Azure Functions and conclude that frequently used features such as late binding and random load balancing are sub-optimal for common execution time distributions and load ranges. We use these insights to design Hermes, a scheduler for serverless functions with three key characteristics. First, to avoid head-of-line blocking due to high function execution time variability, Hermes uses a combination of early binding and processor sharing for scheduling at individual worker machines. Second, Hermes uses a hybrid load balancing approach that improves consolidation at low load while employing least-loaded balancing at high load to retain high performance. Third, Hermes is both load- and locality-aware, reducing the number of cold starts compared to pure load-based policies. We implement Hermes for Apache OpenWhisk and demonstrate that, for the function patterns observed in the Azure trace and in other real-world traces, it achieves up to 85% lower function slowdown and 60% higher throughput compared to existing policies.
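
To make the hybrid policy concrete, the following Python sketch (not Hermes' actual implementation; the load threshold, worker capacities, and warm-container tracking are illustrative assumptions) consolidates invocations onto already-busy workers while cluster load is low, switches to least-loaded placement at high load, and prefers workers that hold a warm container to stay locality-aware.

```python
# Hypothetical sketch of a hybrid, locality-aware load balancer in the spirit of
# the policy described above; thresholds and scoring are illustrative only.
import random

class Worker:
    def __init__(self, wid, capacity):
        self.wid = wid
        self.capacity = capacity          # concurrent invocations it can absorb
        self.running = 0                  # current in-flight invocations
        self.warm_functions = set()       # functions with a warm container here

    def load(self):
        return self.running / self.capacity

def pick_worker(workers, func_name, high_load_threshold=0.7):
    cluster_load = sum(w.running for w in workers) / sum(w.capacity for w in workers)
    # Prefer warm, non-saturated workers to avoid cold starts in either regime.
    warm = [w for w in workers if func_name in w.warm_functions and w.load() < 1.0]
    if warm:
        return min(warm, key=Worker.load)
    if cluster_load < high_load_threshold:
        # Low load: consolidate onto the busiest non-saturated worker.
        candidates = [w for w in workers if w.load() < 1.0]
        return max(candidates, key=Worker.load) if candidates else random.choice(workers)
    # High load: fall back to least-loaded placement to protect latency.
    return min(workers, key=Worker.load)

# Example: route one invocation and commit the placement immediately (early binding).
workers = [Worker(i, capacity=4) for i in range(4)]
workers[2].warm_functions.add("thumbnail")
target = pick_worker(workers, "thumbnail")
target.running += 1
target.warm_functions.add("thumbnail")
print("scheduled on worker", target.wid)
```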

Related Content

For distributed protocols involving many servers, assuming that they do not collude with each other makes some secrecy problems solvable and reduces overheads and computational hardness assumptions in others. While the non-collusion assumption is pervasive among privacy-preserving systems, it remains highly susceptible to covert, undetectable collusion among computing parties. This work stems from the observation that if the number of available computing parties is much higher than the number of parties required to perform a secure computation, collusion attempts could be deterred. We focus on the standard problem of multi-server private information retrieval (PIR), which inherently assumes that servers do not collude. For PIR application scenarios, such as those for blockchain light clients, where the available servers are plentiful, a single server's deviating action is not tremendously beneficial to itself. We can make deviations undesirable through small amounts of rewards and penalties, thus raising the bar for collusion significantly. For any given multi-server 1-private PIR (i.e., a base PIR scheme constructed assuming no pairwise collusion), we provide a collusion mitigation mechanism. We first define a two-stage sequential game that captures how rational servers interact with each other during collusion, then determine the payment rules such that the game realizes the unique sequential equilibrium: a non-collusion outcome. We also offer privacy protection for an extended period beyond the time the query executions happen, and guarantee user compensation in case of a reported privacy breach. Overall, we conjecture that this incentive structure for collusion mitigation can help relax the strong non-collusion assumptions across a variety of multi-party computation tasks.
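
For context, the sketch below shows the classic two-server, 1-private, XOR-based PIR query that schemes of this kind build on: privacy holds only if the two servers do not collude, which is exactly the assumption the payment mechanism above is designed to protect. It illustrates the base primitive, not the paper's game-theoretic construction.

```python
# Minimal sketch of a classic 2-server, 1-private PIR query (XOR-based):
# as long as the two servers do not collude, neither learns the queried index.
import secrets

def client_queries(n, index):
    # Random index subset for server 1; server 2 gets the same subset with `index` flipped.
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = q1.copy()
    q2[index] ^= 1
    return q1, q2

def server_answer(database_bits, query):
    ans = 0
    for bit, selected in zip(database_bits, query):
        if selected:
            ans ^= bit
    return ans

def client_reconstruct(a1, a2):
    # The two subset-XORs differ only at the queried position.
    return a1 ^ a2

db = [1, 0, 1, 1, 0, 0, 1, 0]          # toy database of 1-bit records
q1, q2 = client_queries(len(db), index=3)
bit = client_reconstruct(server_answer(db, q1), server_answer(db, q2))
assert bit == db[3]
print("retrieved bit:", bit)
```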

This paper analyzes wireless network control for remote estimation of linear time-invariant (LTI) dynamical systems under various Hybrid Automatic Repeat Request (HARQ) based packet retransmission schemes. In conventional HARQ, packet reliability increases gradually with additional packets; however, each retransmission maximally increases the Age of Information (AoI). A slight increase in AoI can cause severe degradation in mean squared error (MSE) performance. We optimize standard HARQ schemes by allowing partial retransmissions to increase the packet reliability gradually while limiting the AoI growth. In incremental redundancy HARQ (IR-HARQ), we utilize a shorter time for retransmission, which improves the MSE performance by enabling the early arrival of fresh status updates. In Chase combining HARQ (CC-HARQ), since the packet length remains fixed, we propose sending the retransmission of an old update together with a new update in a single time slot using non-orthogonal signaling. Non-orthogonal retransmissions increase the packet reliability without delaying the fresh updates. Using a Markov decision process formulation, we find the optimal policies of the proposed HARQ-based schemes, and provide both static and dynamic policy optimization techniques to improve the MSE performance. The simulation results show that the proposed schemes achieve better long-term average and packet-level MSE performance than the standard HARQ schemes.
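
The coupling between AoI and estimation quality can be illustrated with a small calculation: for an LTI plant x_{t+1} = A x_t + w_t with process-noise covariance W, if the freshest delivered estimate is `aoi` slots old, the open-loop error covariance is roughly the sum of A^k W (A^k)^T for k = 0..aoi-1. The sketch below (a simplified model with assumed A and W, not the paper's exact setup) shows how quickly the MSE grows with each extra slot of age.

```python
# Illustrative MSE-vs-AoI computation for remote estimation of an LTI system
# x_{t+1} = A x_t + w_t with process-noise covariance W (toy parameters):
#   MSE(aoi) = trace( sum_{k=0}^{aoi-1} A^k W (A^k)^T )
import numpy as np

def mse_vs_aoi(A, W, max_aoi):
    mse = []
    P = np.zeros_like(W)                 # accumulated error covariance
    Ak = np.eye(A.shape[0])              # A^k, starting from k = 0
    for _ in range(max_aoi):
        P = P + Ak @ W @ Ak.T
        mse.append(np.trace(P))
        Ak = A @ Ak
    return mse

A = np.array([[1.0, 0.2], [0.0, 1.1]])   # mildly unstable toy plant
W = 0.1 * np.eye(2)
for aoi, m in enumerate(mse_vs_aoi(A, W, 5), start=1):
    print(f"AoI = {aoi}: MSE = {m:.3f}")
```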

Digital signatures are widely used for providing security of communications. At the same time, the security of currently deployed digital signature protocols is based on unproven computational assumptions. An efficient way to ensure unconditional (information-theoretic) security of communication is to use quantum key distribution (QKD), whose security is based on the laws of quantum mechanics. In this work, we develop an unconditionally secure signature scheme that guarantees authenticity and transferability of arbitrary-length messages in a QKD network. In the proposed setup, the QKD network consists of two subnetworks: (i) an internal network that includes the signer and has a limit on the number of malicious nodes, and (ii) an external network with no assumptions on the number of malicious nodes. A consequence of the absence of the trust assumption in the external subnetwork is the necessity of assistance from internal subnetwork recipients for the verification of message-signature pairs by external subnetwork recipients. We provide a comprehensive security analysis of the developed scheme, perform an optimization of the scheme parameters with respect to the secret key consumption, and demonstrate that the developed scheme is compatible with the capabilities of currently available QKD devices.
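
As a flavor of the information-theoretic building blocks that QKD keys are typically spent on, here is a toy one-time authentication tag based on an affine universal hash over a prime field. It is a generic Wegman-Carter-style primitive, not the paper's signature scheme; the field size and message handling are illustrative assumptions.

```python
# Toy information-theoretically secure one-time MAC over a prime field.
# The key (a, b) must be uniformly random and used for exactly one message.
import secrets

P = (1 << 127) - 1                      # a Mersenne prime used as the field modulus

def keygen():
    # One-time key (a, b); it must never be reused for a second message.
    return secrets.randbelow(P - 1) + 1, secrets.randbelow(P)

def tag(key, message: bytes) -> int:
    a, b = key
    m = int.from_bytes(message, "big")  # messages kept short enough to stay below P
    return (a * m + b) % P              # affine universal hash -> unconditional security

def verify(key, message: bytes, t: int) -> bool:
    return tag(key, message) == t

key = keygen()
t = tag(key, b"pay Bob 3 units")
assert verify(key, b"pay Bob 3 units", t)
assert not verify(key, b"pay Eve 3 units", t)   # any modification changes the tag
print("one-time tag verified")
```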

The impact of device- and circuit-level effects in mixed-signal Resistive Random Access Memory (RRAM) accelerators typically manifests as performance degradation of Deep Learning (DL) algorithms, but the degree of impact varies based on algorithmic features. These include network architecture, capacity, weight distribution, and the type of inter-layer connections. Techniques are continuously emerging to efficiently train sparse neural networks, which may additionally involve activation sparsity, quantization, and memristive noise. In this paper, we present an extended Design Space Exploration (DSE) methodology to quantify the benefits and limitations of dense and sparse mapping schemes for a variety of network architectures. While sparse connectivity reduces power consumption and is often optimized for extracting localized features, its performance on tiled RRAM arrays may be more susceptible to noise due to under-parameterization when compared to dense mapping schemes. Moreover, we present a case study quantifying and formalizing the trade-offs between typical non-idealities introduced into 1-Transistor-1-Resistor (1T1R) tiled memristive architectures and the size of modular crossbar tiles, using the CIFAR-10 dataset.
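
A minimal sketch of the kind of noise-injection experiment such a DSE runs: perturb the programmed (non-zero) weights of a dense and a pruned layer with a device-noise model and compare the relative output error. The Gaussian noise model, its magnitude, and the 80% pruning ratio are placeholder assumptions, not the paper's calibrated RRAM model.

```python
# Compare output perturbation of dense vs. sparse weight mappings under an
# assumed additive Gaussian device-noise model (illustrative values only).
import numpy as np

rng = np.random.default_rng(0)

def relative_output_error(weights, x, noise_std=0.05):
    # Perturb only programmed (non-zero) cells; pruned devices are never written.
    noise = rng.normal(0.0, noise_std, size=weights.shape) * (weights != 0)
    baseline = weights @ x
    return np.linalg.norm((weights + noise) @ x - baseline) / np.linalg.norm(baseline)

x = rng.normal(size=256)
dense = 0.1 * rng.normal(size=(128, 256))

sparse = dense.copy()
sparse[rng.random(dense.shape) < 0.8] = 0.0     # prune 80% of the connections

print("relative output error, dense mapping :", round(relative_output_error(dense, x), 4))
print("relative output error, sparse mapping:", round(relative_output_error(sparse, x), 4))
```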

Deep neural networks (DNNs) exploit many layers and a large number of parameters to achieve excellent performance. The training process of DNN models generally handles large-scale input data with many sparse features, which incurs high Input/Output (IO) cost, while some layers are compute-intensive. The training process generally exploits distributed computing resources to reduce training time. In addition, heterogeneous computing resources, e.g., CPUs and GPUs of multiple types, are available for the distributed training process. Thus, the scheduling of multiple layers onto diverse computing resources is critical for the training process. To efficiently train a DNN model using heterogeneous computing resources, we propose a distributed framework, Paddle-Heterogeneous Parameter Server (Paddle-HeterPS), composed of a distributed architecture and a Reinforcement Learning (RL)-based scheduling method. The advantages of Paddle-HeterPS are three-fold compared with existing frameworks. First, Paddle-HeterPS enables an efficient training process for diverse workloads with heterogeneous computing resources. Second, Paddle-HeterPS exploits an RL-based method to efficiently schedule the workload of each layer to appropriate computing resources to minimize the cost while satisfying throughput constraints. Third, Paddle-HeterPS manages data storage and data communication among distributed computing resources. We carry out extensive experiments to show that Paddle-HeterPS significantly outperforms state-of-the-art approaches in terms of throughput (14.5 times higher) and monetary cost (312.3% smaller). The code of the framework is publicly available at https://github.com/PaddlePaddle/Paddle.
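
The scheduling problem can be pictured with a toy per-layer placement model: IO-bound layers are cheap on CPUs, compute-bound layers only meet the throughput target on GPUs, and the scheduler picks the cheapest feasible resource for each layer. The prices, timings, and greedy rule below are illustrative assumptions, not Paddle-HeterPS's RL-based scheduler.

```python
# Toy per-layer placement: choose, for each layer, the cheapest resource whose
# processing time still meets a throughput target (made-up numbers).
layers = [
    {"name": "embedding", "io_bound": True},
    {"name": "fc1",       "io_bound": False},
    {"name": "fc2",       "io_bound": False},
]
resources = {
    "cpu": {"price_per_hour": 0.40, "time": {True: 1.0, False: 6.0}},   # good for IO-heavy layers
    "gpu": {"price_per_hour": 2.50, "time": {True: 0.9, False: 0.5}},   # good for compute-heavy layers
}

def place(layers, resources, max_time_per_layer=2.0):
    plan, total_cost = [], 0.0
    for layer in layers:
        feasible = [
            (r["price_per_hour"] * r["time"][layer["io_bound"]], name)
            for name, r in resources.items()
            if r["time"][layer["io_bound"]] <= max_time_per_layer   # throughput constraint
        ]
        cost, choice = min(feasible)                                # cheapest feasible resource
        plan.append((layer["name"], choice))
        total_cost += cost
    return plan, total_cost

plan, cost = place(layers, resources)
print(plan, "estimated cost:", round(cost, 2))
```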

We consider the problem of service hosting, where an application provider can dynamically rent edge computing resources and serve user requests from the edge to deliver a better quality of service. A key novelty of this work is that we allow the service to be hosted partially at the edge, which enables a fraction of user queries to be served by the edge. We model the total cost for (partially) hosting a service at the edge as a combination of the latency in serving requests, the bandwidth consumption, and the time-varying cost of renting edge resources. We propose an online policy called $\alpha$-RetroRenting ($\alpha$-RR), which dynamically determines the fraction of the service to be hosted at the edge in any time slot, based on the history of request arrivals and the rent cost sequence. As our main result, we derive an upper bound on $\alpha$-RR's competitive ratio with respect to the offline optimal policy that knows the entire request arrival and rent cost sequence in advance. We conduct extensive numerical evaluations to compare the performance of $\alpha$-RR with various benchmarks for synthetic and trace-based request arrival and rent cost processes, and find several parameter regimes where $\alpha$-RR's ability to store the service partially greatly improves cost-efficiency.
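
The trade-off that $\alpha$-RR navigates can be summarized in a small per-slot cost model: latency improves with the hosted fraction, bandwidth is paid for the portion forwarded to the cloud, and rent is paid for the hosted fraction. The constants below are hypothetical, and this is only the cost being traded off, not the $\alpha$-RR policy or its competitive analysis.

```python
# Minimal per-slot cost model for hosting a fraction `alpha` of the service at
# the edge (hypothetical constants; not the alpha-RR policy itself).
def slot_cost(alpha, requests, rent_cost,
              edge_latency=1.0, cloud_latency=5.0, bandwidth_cost=0.2):
    latency = requests * (alpha * edge_latency + (1 - alpha) * cloud_latency)
    bandwidth = requests * (1 - alpha) * bandwidth_cost      # traffic forwarded to the cloud
    rent = alpha * rent_cost                                 # rent paid only for the hosted fraction
    return latency + bandwidth + rent

# Compare a few static fractions for one slot; an online policy like alpha-RR
# would instead adapt alpha slot by slot from the observed history.
for alpha in (0.0, 0.5, 1.0):
    print(f"alpha = {alpha:.1f}: cost = {slot_cost(alpha, requests=20, rent_cost=30):.1f}")
```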

As an integral part of the decentralized finance (DeFi) ecosystem, decentralized exchanges (DEX) with automated market maker (AMM) protocols have gained massive traction with the recently revived interest in blockchain and distributed ledger technology (DLT) in general. Instead of matching the buy and sell sides, AMMs employ a peer-to-pool method and determine asset price algorithmically through a so-called conservation function. To facilitate the improvement and development of AMM-based DEX, we create the first systematization of knowledge in this area. We first establish a general AMM framework describing the economics and formalizing the system's state-space representation. We then employ our framework to systematically compare the top AMM protocols' mechanics, illustrating their conservation functions, as well as slippage and divergence loss functions. We further discuss security and privacy concerns, how they are enabled by AMM-based DEX's inherent properties, and explore mitigating solutions. Finally, we conduct a comprehensive literature review on related work covering both DeFi and conventional market microstructure.
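
As a concrete example of a conservation function and the quantities mentioned above, the sketch below uses the constant-product rule x*y = k (the Uniswap-v2-style AMM) to compute a swap's output, the resulting slippage, and the divergence (impermanent) loss of a liquidity provider after a price move; the pool sizes and fee are illustrative.

```python
# Worked example of the constant-product conservation function x * y = k,
# with the resulting slippage and divergence (impermanent) loss.
def swap_out(x_reserve, y_reserve, dx, fee=0.003):
    """Amount of Y received for depositing dx of X, keeping x * y constant."""
    dx_after_fee = dx * (1 - fee)
    return y_reserve - (x_reserve * y_reserve) / (x_reserve + dx_after_fee)

def lp_value_change(price_ratio):
    """Relative value of an LP position vs. simply holding, after the price of X
    in terms of Y changes by `price_ratio` (negative = divergence loss)."""
    return 2 * price_ratio ** 0.5 / (1 + price_ratio) - 1

x, y = 1_000.0, 1_000.0                  # pool reserves; spot price = y / x = 1
dx = 100.0
dy = swap_out(x, y, dx)
spot_price = y / x
execution_price = dy / dx
print(f"received {dy:.2f} Y, slippage = {1 - execution_price / spot_price:.2%}")
print(f"LP value change vs. holding if price doubles: {lp_value_change(2.0):.2%}")
```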

The utilization of cloud environments to deploy scientific workflow applications is an emerging trend in the scientific community. In this area, the main issue is the scheduling of workflows, which is known to be an NP-complete problem. Apart from respecting user-defined deadlines and budgets, energy consumption is a major concern for cloud providers in implementing the scheduling strategy. The types and the number of virtual machines (VMs) used are decisive in handling those issues, and their determination is highly influenced by the structure of the workflow. In this paper, we propose two workflow scheduling algorithms that take advantage of the structural properties of the workflows. The first algorithm, Structure-based Multi-objective Workflow Scheduling with an Optimal instance type (SMWSO), introduces a new approach to determine the optimal instance type along with the optimal number of VMs to be provisioned. We also consider the use of heterogeneous VMs in Structure-based Multi-objective Workflow Scheduling with Heterogeneous instance types (SMWSH), to highlight the algorithm's strength in a heterogeneous environment. The simulation results show that our proposal achieves better energy efficiency in 80% of workflow/workload scenarios and saves more than 50% overall energy compared to a recent state-of-the-art algorithm.
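
A back-of-the-envelope version of the instance-type decision looks as follows: for each type, compute how many VMs are needed to finish a given amount of work before the deadline and the resulting energy, then keep the cheapest in energy terms. The speeds and power figures are made up, and unlike SMWSO/SMWSH this ignores workflow structure, budget, and mixing heterogeneous VM types.

```python
# Toy selection of an instance type and VM count under a deadline, minimizing
# energy (hypothetical speed/power numbers; workflow structure is ignored).
import math

instance_types = {
    "small":  {"speed": 1.0, "power_watts": 100},
    "medium": {"speed": 2.0, "power_watts": 180},
    "large":  {"speed": 4.0, "power_watts": 400},
}

def pick_instance(total_work, deadline_s):
    best = None
    for name, spec in instance_types.items():
        n_vms = math.ceil(total_work / (spec["speed"] * deadline_s))  # VMs needed to meet the deadline
        runtime = total_work / (spec["speed"] * n_vms)
        energy_j = n_vms * spec["power_watts"] * runtime
        if best is None or energy_j < best[0]:
            best = (energy_j, name, n_vms)
    return best

energy, name, n_vms = pick_instance(total_work=10_000, deadline_s=600)
print(f"{n_vms} x {name}, estimated energy = {energy / 1000:.1f} kJ")
```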

Current electricity networks were not initially designed for a high integration of variable generation technologies. They suffer significant losses due to the combustion of fossil fuels and the long-distance transmission and distribution of power across the network. Recently, prosumers, who are both consumers and producers, have emerged with the increasing affordability of investing in domestic solar systems. Prosumers may trade within their communities to better manage their demand and supply, as well as provide social and economic benefits. In this paper, we explore the use of blockchain technologies and auction mechanisms to facilitate autonomous peer-to-peer energy trading within microgrids. We design two frameworks that utilize the smart contract functionality in Ethereum and employ the continuous double auction and the uniform-price double-sided auction mechanisms, respectively. We validate our design by conducting A/B tests to compare the performance of the two frameworks on a real-world dataset. The key characteristics of the two frameworks and several cost analyses are presented for comparison. Our results demonstrate that a P2P trading platform that integrates blockchain technologies and agent-based systems is a promising complement to the current centralized energy grid. We also identify a number of limitations, alternative solutions, and directions for future work.
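
To illustrate the second mechanism, the sketch below clears a uniform-price double-sided auction: bids are sorted from highest to lowest, asks from lowest to highest, units are matched while the marginal bid still covers the marginal ask, and all trades settle at one price taken here as the midpoint of the last matched pair. In the framework described above this clearing logic would live in an Ethereum smart contract; the Python version and the sample orders are illustrative.

```python
# Uniform-price double-sided auction clearing (toy off-chain version).
def clear_uniform_price(bids, asks):
    """bids/asks: lists of (price, quantity). Returns (clearing_price, traded_qty)."""
    bids = sorted(bids, key=lambda b: -b[0])     # buyers, highest price first
    asks = sorted(asks, key=lambda a: a[0])      # sellers, lowest price first
    i = j = 0
    bid_left = ask_left = 0.0
    traded = 0.0
    marginal_bid = marginal_ask = None
    while True:
        if bid_left == 0:
            if i == len(bids):
                break
            bid_left = bids[i][1]
            i += 1
        if ask_left == 0:
            if j == len(asks):
                break
            ask_left = asks[j][1]
            j += 1
        bid_price, ask_price = bids[i - 1][0], asks[j - 1][0]
        if bid_price < ask_price:                # remaining orders cannot trade profitably
            break
        q = min(bid_left, ask_left)
        traded += q
        bid_left -= q
        ask_left -= q
        marginal_bid, marginal_ask = bid_price, ask_price   # last pair actually matched
    if traded == 0:
        return None, 0.0
    return (marginal_bid + marginal_ask) / 2, traded        # one price for all trades

price, qty = clear_uniform_price(
    bids=[(0.30, 5), (0.25, 3), (0.10, 4)],      # price per kWh, quantity in kWh
    asks=[(0.05, 4), (0.20, 6), (0.40, 2)],
)
print(f"clearing price = {price} per kWh, energy traded = {qty} kWh")
```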

The development of safety applications for Connected Automated Vehicles requires testing in many different scenarios. However, the recreation of test scenarios for evaluating safety applications is a very challenging task. This is mainly due to the randomness in communication, the difficulty in recreating vehicle movements precisely, and safety concerns for certain scenarios. We propose to develop a standalone Remote Vehicle Emulator that can reproduce V2V messages of remote vehicles from simulations or previous tests. This is expected to accelerate the development cycle significantly. The Remote Vehicle Emulator is a unique and easily configurable emulation-cum-simulation setup that allows realistic and safe Software-in-the-Loop (SIL) testing of connected vehicle applications. It will help in tailoring numerous test scenarios, expediting algorithm development and validation, and increasing the probability of finding failure modes. This, in turn, will help improve the quality of safety applications while saving testing time and reducing cost.
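
A minimal sketch of the replay idea: read previously recorded remote-vehicle messages and re-emit them over a local socket with their original relative timing, so the application under test receives them as if live. The CSV log format, field names, and UDP port below are hypothetical assumptions, not the emulator's actual interface.

```python
# Replay logged remote-vehicle messages with their original relative timing.
# Assumed CSV columns: timestamp_s, vehicle_id, lat, lon, speed (hypothetical).
import csv, json, socket, time

def replay(log_path, host="127.0.0.1", port=4550):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(log_path, newline="") as f:
        rows = [(float(r["timestamp_s"]), r) for r in csv.DictReader(f)]
    if not rows:
        return
    start_wall, start_log = time.monotonic(), rows[0][0]
    for ts, row in rows:
        # Sleep until this message's original offset from the first message.
        delay = (ts - start_log) - (time.monotonic() - start_wall)
        if delay > 0:
            time.sleep(delay)
        sock.sendto(json.dumps(row).encode(), (host, port))

if __name__ == "__main__":
    replay("remote_vehicle_trace.csv")
```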
