
Workflow scheduling is a long-studied problem in parallel and distributed computing (PDC), aiming to utilize compute resources efficiently so as to meet users' service requirements. Recently proposed scheduling methods leverage the low response times of edge computing platforms to optimize application Quality of Service (QoS). However, scheduling workflow applications in mobile edge-cloud systems is challenging due to computational heterogeneity, the changing latencies of mobile devices and the volatile nature of workload resource requirements. To overcome these difficulties, it is essential, yet challenging, to develop a long-sighted optimization scheme that efficiently models the QoS objectives. In this work, we propose MCDS: Monte Carlo Learning using Deep Surrogate Models, to efficiently schedule workflow applications in mobile edge-cloud computing systems. MCDS is an Artificial Intelligence (AI) based scheduling approach that uses a tree-based search strategy and a deep neural network-based surrogate model to estimate the long-term QoS impact of immediate actions, enabling robust optimization of scheduling decisions. Experiments on physical and simulated edge-cloud testbeds show that MCDS improves over state-of-the-art methods in terms of energy consumption, response time, SLA violations and cost by at least 6.13, 4.56, 45.09 and 30.71 percent, respectively.
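
To make the idea concrete, here is a minimal sketch (not the authors' implementation) of Monte Carlo lookahead guided by a surrogate QoS model: each candidate placement of the next task is scored by averaging surrogate estimates over random rollouts of future placements. All names (Host, surrogate_qos, choose_host) and the hand-written surrogate heuristic are hypothetical; in MCDS the surrogate would be a trained deep neural network.

```python
# Minimal sketch (not the MCDS implementation) of Monte Carlo lookahead with
# a surrogate QoS model.  Host, surrogate_qos and choose_host are hypothetical.
import copy
import random

class Host:
    def __init__(self, capacity, load=0.0):
        self.capacity = capacity  # CPU capacity of the host
        self.load = load          # load already scheduled on it

def surrogate_qos(hosts):
    # Stand-in for the deep surrogate model: in MCDS this is a trained neural
    # network; here a heuristic rewards balanced, non-overloaded hosts.
    overload = sum(max(0.0, h.load - h.capacity) for h in hosts)
    spread = max(h.load for h in hosts) - min(h.load for h in hosts)
    return -(10.0 * overload + spread)

def rollout(hosts, future_tasks):
    # Randomly place the remaining tasks and score the resulting state.
    sim = copy.deepcopy(hosts)
    for demand in future_tasks:
        random.choice(sim).load += demand
    return surrogate_qos(sim)

def choose_host(hosts, task_demand, future_tasks, n_rollouts=20):
    # Try each immediate placement and estimate its long-term QoS impact by
    # averaging surrogate scores over random rollouts of the future tasks.
    best_idx, best_score = None, float("-inf")
    for idx in range(len(hosts)):
        trial = copy.deepcopy(hosts)
        trial[idx].load += task_demand
        score = sum(rollout(trial, future_tasks) for _ in range(n_rollouts)) / n_rollouts
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx

if __name__ == "__main__":
    hosts = [Host(4.0), Host(2.0), Host(8.0)]
    print("place next task on host", choose_host(hosts, 1.5, future_tasks=[1.0, 0.5, 2.0]))
```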

Related Content

Cell-free massive multiple-input multiple-output (MIMO) and intelligent reflecting surfaces (IRSs) are considered prospective multiple-antenna technologies for beyond-fifth-generation (5G) networks. IRS-aided cell-free MIMO systems combine both technologies and can further improve performance at low cost and energy consumption. Prior works focused on instantaneous performance metrics and relied on alternating optimization algorithms, which impose huge computational complexity and signaling overhead. To address these challenges, we propose a novel two-step algorithm that determines long-term passive beamformers at the IRSs using statistical channel state information (S-CSI), together with short-term active precoders and long-term power allocation at the access points (APs), to maximize the minimum achievable rate. Simulation results verify that the proposed scheme outperforms benchmark schemes and brings a significant performance gain to IRS-aided cell-free MIMO systems.
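
As a rough illustration of the long-term power allocation step, the sketch below bisects over a common rate target under a heavily simplified single-AP, noise-limited model (no interference, no IRS phase optimization); the channel gains, bandwidth, noise and power budget are made-up values, not the paper's setup.

```python
# Rough sketch of max-min long-term power allocation via bisection, under a
# simplified single-AP, noise-limited model (no interference, no IRS phases).
# Gains, bandwidth, noise and power budget are illustrative values.

def power_for_rate(rate, gain, noise=1e-12, bandwidth=1e6):
    # Power needed so that bandwidth * log2(1 + p * gain / noise) >= rate.
    return (2 ** (rate / bandwidth) - 1) * noise / gain

def max_min_rate(gains, total_power, tol=1e-3):
    # Bisect on the common rate target t: t is feasible iff the powers that
    # give every user rate t fit within the total power budget.
    lo, hi = 0.0, 1e8
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(power_for_rate(mid, g) for g in gains) <= total_power:
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    gains = [1e-9, 3e-9, 5e-10]   # long-term (statistical) channel gains
    print("max-min rate [bit/s]:", round(max_min_rate(gains, total_power=1.0)))
```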

Mobile Crowdsensing (MCS) systems are vulnerable to various attacks because they are built on non-dedicated, ubiquitous sensing devices. Machine learning (ML)-based approaches are widely investigated to build attack detection systems and ensure the security of MCS systems. However, adversaries that aim to clog the sensing front-end and the MCS back-end increasingly leverage intelligent techniques, making it challenging for MCS platforms and service providers to develop appropriate detection frameworks against these attacks. Generative Adversarial Networks (GANs) have been applied to generate synthetic samples that are extremely similar to real ones, deceiving classifiers so that the synthetic samples are indistinguishable from the originals. Previous works suggest that GAN-based attacks are more devastating than empirically designed attack samples and result in low detection rates at the MCS platform. With this in mind, this paper aims to detect intelligently designed illegitimate sensing service requests by integrating a GAN-based model. To this end, we propose a two-level cascading classifier that combines the GAN discriminator with a binary classifier to prevent adversarial fake tasks. Through simulations, we compare our results to a single-level binary classifier, and the numerical results show that the proposed approach raises the Adversarial Attack Detection Rate (AADR) from $0\%$ to $97.5\%$ for KNN/NB and from $45.9\%$ to $100\%$ for Decision Tree. Meanwhile, with the two-level classifier, the Original Attack Detection Rate (OADR) improves for all three binary classifiers, e.g., for NB from $26.1\%$ to $61.5\%$.
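
A toy sketch of the two-level cascade is given below: level one stands in for the GAN discriminator (here a logistic regression trained to separate real from synthetic requests), level two is a binary legitimate-vs-attack classifier, and a request is flagged if either level raises an alarm. The synthetic data and model choices are illustrative only, not the paper's dataset or trained GAN.

```python
# Toy sketch of a two-level cascading detector: level 1 mimics the GAN
# discriminator (real vs. synthetic requests), level 2 is a binary
# legitimate-vs-attack classifier.  Data and models are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n, d = 1000, 8
X_real = rng.normal(0.0, 1.0, (n, d))                    # real task features
y_real = (X_real[:, 0] + X_real[:, 1] > 1).astype(int)   # 1 = attack task
X_fake = rng.normal(0.3, 1.2, (n, d))                    # GAN-style synthetic tasks

# Level 1: discriminator trained to tell real from synthetic requests.
disc = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_real, X_fake]), np.r_[np.zeros(n), np.ones(n)])

# Level 2: binary attack classifier trained on real data only.
clf = GaussianNB().fit(X_real, y_real)

def detect(X):
    # Flag a request as illegitimate if either level raises an alarm.
    synthetic = disc.predict(X).astype(bool)
    attack = clf.predict(X).astype(bool)
    return synthetic | attack

X_test = np.vstack([rng.normal(0.0, 1.0, (50, d)), rng.normal(0.3, 1.2, (50, d))])
print("flagged:", int(detect(X_test).sum()), "of", len(X_test))
```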

After the advent of the Internet of Things and 5G networks, edge computing became the center of attention. Tasks demanding high computation are generally offloaded to the cloud, since the edge is resource-limited. The Edge Cloud is a promising platform to which devices can offload delay-sensitive workloads. In this regard, scheduling holds great importance for offloading decisions in Edge Cloud collaboration. The ultimate objectives of scheduling are improving the quality of experience, minimizing latency, and increasing performance. A substantial body of work on scheduling has been produced in the past. In this paper, we survey proposed scheduling strategies in the context of edge cloud computing along various aspects such as advantages and drawbacks, QoS parameters, and fault tolerance. We also evaluate which approaches are feasible under which circumstances. We first classify all the algorithms into heuristics and meta-heuristics, and then further subcategorize the algorithms in each class based on their extracted attributes. We hope that this survey will be helpful in the development of new scheduling techniques. Issues, challenges, and future directions are also examined.

This paper considers a mobile edge computing-enabled cell-free massive MIMO wireless network. An optimization problem for the joint allocation of uplink powers and remote computational resources is formulated, aimed at minimizing the total uplink power consumption under latency constraints while simultaneously maximizing the minimum spectral efficiency (SE) throughout the network. Since the considered problem is non-convex, an iterative algorithm based on sequential convex programming is devised. A detailed performance comparison between the proposed distributed architecture and its co-located counterpart, based on a multi-cell massive MIMO deployment, is provided. Numerical results reveal the natural suitability of cell-free massive MIMO for supporting computation-offloading applications, with benefits in terms of users' transmit power and energy consumption, the experienced offloading latency, and the total amount of allocated remote computational resources.
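
The coupling that makes the joint problem interesting can be seen in a simplified single-user sketch: the more CPU cycles the edge server grants, the more of the latency budget is left for the upload, and the lower the transmit power needed. The parameters below are illustrative, and the code does not reproduce the paper's sequential convex programming algorithm.

```python
# Simplified sketch of the power/computation trade-off behind the joint
# allocation problem: a larger share of edge CPU cycles leaves more time for
# the upload, which lowers the required transmit power.  Values are made up.
import math

B = 1e6          # bandwidth [Hz]
g = 1e-10        # uplink channel gain
N0 = 1e-13       # noise power [W]
D = 1e6          # task size to upload [bits]
C = 5e8          # required CPU cycles at the edge server
L_max = 0.5      # end-to-end latency budget [s]

def min_uplink_power(f_alloc):
    # Smallest transmit power meeting the latency budget when the edge
    # server grants f_alloc CPU cycles/s to this user.
    t_comp = C / f_alloc
    t_up = L_max - t_comp
    if t_up <= 0:
        return math.inf                      # no time left for the upload
    rate = D / t_up                          # required uplink rate [bit/s]
    return (2 ** (rate / B) - 1) * N0 / g    # invert Shannon capacity

for f in (1.5e9, 3e9, 6e9):
    print(f"f = {f:.1e} cycles/s -> p = {min_uplink_power(f):.4f} W")
```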

We consider a mobile edge computing scenario where a number of devices want to perform a linear inference $\boldsymbol{W}\boldsymbol{x}$ on some local data $\boldsymbol{x}$ given a network-side matrix $\boldsymbol{W}$. The computation is performed at the network edge over a number of edge servers. We propose a coding scheme that provides information-theoretic privacy against $z$ colluding (honest-but-curious) edge servers, while minimizing the overall latency (comprising upload, computation, download, and decoding latency) in the presence of straggling servers. The proposed scheme exploits Shamir's secret sharing to yield data privacy and straggler mitigation, combined with replication to provide spatial diversity for the download. We also propose two variants of the scheme that further reduce latency. For a considered scenario with $9$ edge servers, the proposed scheme reduces the latency by $8\%$ compared to the nonprivate scheme recently introduced by Zhang and Simeone, while providing privacy against an honest-but-curious edge server.
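
A minimal sketch of the secret-sharing ingredient is shown below: the user hides $\boldsymbol{x}$ in Shamir shares so that any $z$ colluding servers learn nothing, each server applies the public matrix to its share, and any $z+1$ responses suffice to reconstruct $\boldsymbol{W}\boldsymbol{x}$, which is where the straggler tolerance comes from. The field size, dimensions and number of servers are made up, and the replication and download-optimization parts of the scheme are not modeled.

```python
# Minimal sketch of the Shamir-secret-sharing ingredient: the user hides x
# among n edge servers (privacy against z colluding servers); each server
# computes W @ share, and any z+1 responses reconstruct W @ x, so up to
# n-z-1 stragglers can be ignored.  Field size and dimensions are made up.
import random

P = 2_147_483_647  # prime field modulus

def share_vector(x, n, z):
    # Return n Shamir shares of vector x with privacy threshold z.
    coeffs = [[random.randrange(P) for _ in x] for _ in range(z)]
    alphas = list(range(1, n + 1))            # distinct evaluation points
    shares = []
    for a in alphas:
        s = [(xi + sum(c[i] * pow(a, j + 1, P) for j, c in enumerate(coeffs))) % P
             for i, xi in enumerate(x)]
        shares.append(s)
    return alphas, shares

def server_compute(W, share):
    # Each edge server applies the public matrix W to its share (mod P).
    return [sum(wij * sj for wij, sj in zip(row, share)) % P for row in W]

def reconstruct(points):
    # Lagrange-interpolate the responses at 0 to recover W @ x (mod P).
    out = [0] * len(points[0][1])
    for a, y in points:
        lam = 1
        for b, _ in points:
            if b != a:
                lam = lam * (-b) * pow(a - b, -1, P) % P
        out = [(o + lam * yi) % P for o, yi in zip(out, y)]
    return out

if __name__ == "__main__":
    W = [[1, 2, 3], [4, 5, 6]]
    x = [7, 8, 9]
    alphas, shares = share_vector(x, n=5, z=2)
    replies = [(a, server_compute(W, s)) for a, s in zip(alphas, shares)]
    fastest = replies[:3]                     # any z+1 = 3 replies suffice
    print(reconstruct(fastest))               # -> [50, 122] == W @ x
```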

The emerging edge computing (EC) paradigm promises to provide low latency and ubiquitous computation to numerous mobile and Internet of Things (IoT) devices at the network edge. How to efficiently allocate geographically distributed, heterogeneous edge resources to a variety of services is a challenging task. While this problem has been studied extensively in recent years, most previous work has largely ignored the preferences of the services when making edge resource allocation decisions. To this end, this paper introduces a novel bilevel optimization model, which explicitly takes service preferences into consideration, to study the interaction between an EC platform and multiple services. The platform manages a set of edge nodes (ENs) and acts as the leader, while the services are the followers. Given the service placement and resource pricing decisions of the leader, each service decides how to optimally divide its workload among the ENs. The proposed framework not only maximizes the profit of the platform but also minimizes the cost of every service. When there is a single EN, we derive a simple analytic solution for the underlying problem. For the general case with multiple ENs and multiple services, we present a Karush-Kuhn-Tucker (KKT)-based solution and a duality-based solution, combined with a series of linearizations, to solve the bilevel problem. Extensive numerical results illustrate the efficacy of the proposed model.
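
For intuition, the follower's subproblem alone (a service splitting its workload across ENs, given the platform's prices and capacities) can be written as a small linear program; the sketch below uses scipy.optimize.linprog with made-up prices, delay penalties and capacities, and does not reproduce the paper's KKT or duality reformulation of the full bilevel problem.

```python
# Sketch of the follower subproblem only: given the platform's resource
# prices and the ENs' capacities, a service splits its workload to minimize
# its own cost.  Numbers are illustrative; the bilevel machinery is omitted.
import numpy as np
from scipy.optimize import linprog

demand = 100.0                              # total workload of the service
price = np.array([0.8, 1.0, 1.3])           # price per unit workload at each EN
delay_penalty = np.array([0.5, 0.2, 0.1])   # service's delay cost per unit
capacity = np.array([60.0, 50.0, 40.0])     # EN capacity offered to this service

c = price + delay_penalty                   # per-unit cost seen by the service
res = linprog(c,
              A_eq=np.ones((1, 3)), b_eq=[demand],    # serve all workload
              bounds=[(0, cap) for cap in capacity])  # respect capacities
print("workload split:", res.x, "total cost:", res.fun)
```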

The need to develop systems that exploit multi- and many-core architectures while reducing wasteful heat generation is of utmost importance in compute-intensive applications. We propose an energy-conscious approach to multicore scheduling, non-preemptive dynamic window (NPDW) scheduling, that achieves effective load and temperature balancing over chip multiprocessors. NPDW uses dynamic time windows to accumulate tasks and finds an optimal stable matching between the accumulated tasks and the available processor cores using a modified Gale-Shapley algorithm. Metrics of window and matching performance are defined to create a dynamic window heuristic that determines the next window size based on the current and previous window sizes. Based on the derived formulation and experimental results, we show that our NPDW scheduler distributes the computational and thermal load across the processors in a multicore environment better than baseline schedulers. We believe that in multicore compute applications requiring temperature- and energy-conscious system design, our scheduler can be employed to effectively disperse system load and prevent excess core heating.
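
The matching step can be illustrated with the textbook Gale-Shapley algorithm, shown below as a stand-in for NPDW's modified version: tasks propose to cores in order of preference (cooler, less-loaded cores first) and each core keeps the lighter of competing tasks. The preference rules and numbers are assumptions for illustration.

```python
# Textbook Gale-Shapley matching between accumulated tasks and cores, as a
# stand-in for NPDW's modified version.  Preference rules and the example
# numbers below are assumptions for illustration only.

def gale_shapley(task_prefs, core_prefs):
    # task_prefs[t] and core_prefs[c] are preference-ordered lists.
    rank = {c: {t: i for i, t in enumerate(p)} for c, p in core_prefs.items()}
    free = list(task_prefs)                   # tasks still proposing
    next_choice = {t: 0 for t in task_prefs}  # next core each task proposes to
    match = {}                                # core -> task
    while free:
        t = free.pop()
        c = task_prefs[t][next_choice[t]]
        next_choice[t] += 1
        if c not in match:
            match[c] = t
        elif rank[c][t] < rank[c][match[c]]:  # core prefers the new task
            free.append(match[c])
            match[c] = t
        else:
            free.append(t)
    return {t: c for c, t in match.items()}

cores = {"c0": {"load": 0.7, "temp": 62}, "c1": {"load": 0.3, "temp": 55}}
tasks = {"t0": 2.0, "t1": 1.0}                # task -> expected load
# Tasks prefer cooler, lightly loaded cores; cores prefer lighter tasks.
task_prefs = {t: sorted(cores, key=lambda c: (cores[c]["temp"], cores[c]["load"]))
              for t in tasks}
core_prefs = {c: sorted(tasks, key=lambda t: tasks[t]) for c in cores}
print(gale_shapley(task_prefs, core_prefs))   # e.g. {'t1': 'c1', 't0': 'c0'}
```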

Age of Information (AoI), which measures the time elapsed since the generation of the last received packet at the destination, is a new metric for real-time status-update tracking applications. In this paper, we consider a status-update system in which a source node samples updates and sends them to an edge server over a delay channel. The received updates are processed by the server, which has an infinite buffer, and then delivered to a destination. The channel can send only one update at a time, and the server can likewise process only one at a time. The source node applies a generate-at-will model according to the state of the channel, the edge server, and the buffer. We aim to minimize the average AoI with independent and identically distributed (i.i.d.) transmission times and processing times. We consider three online scheduling policies. The first is the optimal long-wait policy, under which the source node transmits a new packet only after the previous one is delivered. Second, we propose a peak age threshold policy, under which the source node determines the sending time based on the estimated peak age of information (PAoI). Finally, we improve the peak age threshold policy by introducing a postponed plan that reduces the waiting time in the buffer. The AoI performance under these policies is illustrated by numerical results with different parameters.
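
As a sanity check of the long-wait policy, the sketch below simulates the time-average AoI when the source generates a new update only upon delivery of the previous one, collapsing transmission and processing into a single i.i.d. delay; the buffer dynamics and the two threshold policies from the paper are not modeled.

```python
# Simulation of the time-average AoI under the long-wait policy: the source
# generates a new update only when the previous one has been delivered, and
# transmission plus processing is collapsed into a single i.i.d. delay.
# The buffer and the threshold policies from the paper are not modeled.
import random

def average_aoi(n=200_000, mean_tx=1.0, mean_proc=0.5, seed=1):
    random.seed(seed)

    def draw_delay():
        return random.expovariate(1 / mean_tx) + random.expovariate(1 / mean_proc)

    prev_delay = draw_delay()     # delay of the first delivered update
    area = elapsed = 0.0
    for _ in range(n):
        delay = draw_delay()
        # Between consecutive deliveries the age grows linearly from
        # prev_delay to prev_delay + delay, then resets to delay.
        area += prev_delay * delay + 0.5 * delay * delay
        elapsed += delay
        prev_delay = delay
    return area / elapsed

print("average AoI ~", round(average_aoi(), 3))
```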

Augmented Reality (AR) is an emerging field ripe for experimentation, especially when it comes to developing the kinds of applications and experiences that will drive mass adoption of the technology. While we are not aware of any current consumer product that realizes a wearable, wide Field of View (FoV) AR Head Mounted Display (HMD), such devices will certainly come. For these sophisticated, likely high-cost hardware products to succeed, it is important that they provide a high-quality user experience. To that end, we prototyped four experimental applications for the wide-FoV displays that will likely exist in the future. Given current AR HMD limitations, we used an AR simulator built on web technology and VR headsets to demonstrate these applications, allowing users and designers to peer into the future.

Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization of edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.
