
Edge computing brings several advantages, such as reduced latency, increased bandwidth, and improved traffic locality. One aspect that is not sufficiently understood is the impact that the different communication latencies experienced across the edge-cloud continuum have on the energy consumption of clients. We studied how a request-response communication scheme is affected by different placements of the server when communication is based on LTE. Results show that, by carefully selecting the operational parameters, a significant amount of energy can be saved.
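A rough back-of-the-envelope model illustrates why server placement matters for client energy: the LTE radio stays in a high-power state while the client waits for the response, and it then lingers in a tail state before returning to idle. The power levels, tail duration, and payload time below are illustrative assumptions, not values measured in the study.

```python
# Illustrative sketch: client-side radio energy for one LTE request-response cycle.
# Power levels and tail time are assumed placeholder values, not measured figures.

ACTIVE_POWER_W = 1.2   # assumed radio power while transmitting/receiving or waiting
TAIL_POWER_W = 0.8     # assumed power during the LTE RRC tail after the exchange
TAIL_TIME_S = 2.0      # assumed tail duration before the radio returns to idle

def request_energy_joules(rtt_s: float, payload_time_s: float = 0.05) -> float:
    """Energy spent by the client radio for one request-response exchange."""
    active_time = payload_time_s + rtt_s          # send request, wait, receive reply
    return ACTIVE_POWER_W * active_time + TAIL_POWER_W * TAIL_TIME_S

# Edge placement (short RTT) vs. remote cloud placement (long RTT).
for label, rtt in [("edge, 10 ms RTT", 0.010), ("cloud, 120 ms RTT", 0.120)]:
    print(f"{label}: {request_energy_joules(rtt):.3f} J per exchange")
```

Under this model the round-trip time enters the client's energy budget linearly, so a closer server placement translates directly into energy savings.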

Related content

Edge computing is a distributed computing architecture that moves the processing of applications, data, and services from central network nodes to nodes at the logical edge of the network [1]. Edge computing decomposes large services originally handled entirely by central nodes into smaller, more manageable parts that are distributed to edge nodes for processing. Because edge nodes are closer to user terminal devices, data can be processed and transmitted faster, reducing latency. Under this architecture, data analysis and knowledge generation take place closer to the data source, making it better suited to handling big data.

Computational task offloading based on edge computing can address the performance bottleneck faced by traditional cloud-based systems for the industrial Internet of Things (IIoT). To further optimize computing efficiency and resource allocation, collaborative offloading has been put forward to enable offloading from edge devices to IIoT terminal devices. However, incentive mechanisms that encourage participants to take over tasks from others are still lacking. To counter this situation, this paper proposes a distributed computational resource trading strategy that considers multiple preferences of IIoT users. Unlike most existing works, the objective of our trading strategy comprehensively considers the different degrees of satisfaction with task delay, energy consumption, price, and user reputation of both requesters and collaborators. Our system uses blockchain to enhance decentralization, security, and automation. Compared with a trading method based on the classical double-auction matching mechanism, our method results in more tasks being offloaded and executed, and the trading results are more favorable to collaborators with higher reputation scores.
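As a toy illustration of preference-aware trading, the sketch below scores each collaborator by a weighted combination of delay, energy, price, and reputation and then matches requests greedily. The `satisfaction` function, its weights, and the greedy rule are hypothetical stand-ins chosen for illustration; they are not the paper's actual mechanism, which additionally relies on blockchain-based automation.

```python
# Minimal sketch of multi-preference matching between task requesters and
# collaborators. The weights, scoring form, and greedy matching are assumptions
# for illustration; the paper's actual trading strategy may differ.

def satisfaction(delay_s, energy_j, price, reputation, weights=(0.3, 0.2, 0.2, 0.3)):
    """Composite satisfaction: lower delay/energy/price and higher reputation are better."""
    w_d, w_e, w_p, w_r = weights
    return w_r * reputation - (w_d * delay_s + w_e * energy_j + w_p * price)

def match(requests, collaborators):
    """Greedy matching: each request goes to the unmatched collaborator it values most."""
    matched, used = [], set()
    for req in requests:
        best, best_score = None, float("-inf")
        for col in collaborators:
            if col["id"] in used:
                continue
            score = satisfaction(col["delay"], col["energy"], col["price"], col["reputation"])
            if score > best_score:
                best, best_score = col, score
        if best is not None:
            used.add(best["id"])
            matched.append((req["id"], best["id"], best_score))
    return matched

collaborators = [
    {"id": "c1", "delay": 0.8, "energy": 1.0, "price": 2.0, "reputation": 4.5},
    {"id": "c2", "delay": 0.5, "energy": 1.2, "price": 2.5, "reputation": 3.0},
]
print(match([{"id": "r1"}, {"id": "r2"}], collaborators))
```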

The next generation of multibeam satellites opens up a new way to design satellite communication channels, with full flexibility in bandwidth, transmit power, and beam coverage management. In this paper, we exploit the flexible multibeam satellite capabilities and the geographical distribution of users to improve the performance of satellite-assisted edge caching systems. Our aim is to jointly optimize the bandwidth allocation across beams and the caching decisions at the edge nodes to address two important problems: (i) cache feeding time minimization and (ii) cache hit maximization. To tackle the non-convexity of the joint optimization problem, we transform the original problem into a difference-of-convex (DC) form, which is then solved by the proposed iterative algorithm, whose convergence to at least a local optimum is theoretically guaranteed. Furthermore, the effectiveness of the proposed design is evaluated under the realistic beam coverage of the SES-14 satellite and the MovieLens data set. Numerical results show that our proposed joint design can reduce the cache feeding time by 50% and increase the cache hit ratio (CHR) by 10% to 20% compared to existing solutions. Finally, we examine the impact of multi-spot-beam and multi-carrier wide-beam configurations on the joint design and discuss potential research directions.
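The coupling between bandwidth allocation and cache feeding time can be illustrated with a much simpler model than the paper's DC formulation: the feeding time is limited by the beam that takes longest to fill its cache, so shifting bandwidth toward heavily loaded beams shortens it. The per-beam spectral efficiencies, cache sizes, and brute-force split search below are illustrative assumptions.

```python
# Illustrative sketch of the coupling between beam bandwidth allocation and cache
# feeding time. The link model (fixed spectral efficiency per beam) and the numbers
# are assumptions; the paper solves the joint problem with a DC-programming algorithm.

SPECTRAL_EFF = {1: 2.0, 2: 1.5}          # assumed bits/s/Hz per beam
CACHE_BITS   = {1: 4e9, 2: 9e9}          # bits that must be pushed to each beam's edge cache
TOTAL_BW_HZ  = 500e6                     # total bandwidth to split across the two beams

def feeding_time(bw_split):
    """Feeding time is limited by the slowest beam."""
    return max(CACHE_BITS[b] / (bw_split[b] * SPECTRAL_EFF[b]) for b in bw_split)

# Compare an equal split against a simple search over splits in 10 MHz steps.
equal = {1: TOTAL_BW_HZ / 2, 2: TOTAL_BW_HZ / 2}
best = min(
    ({1: w, 2: TOTAL_BW_HZ - w} for w in (i * 10e6 for i in range(1, 50))),
    key=feeding_time,
)
print(f"equal split: {feeding_time(equal):.1f} s, tuned split: {feeding_time(best):.1f} s")
```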

With the wide penetration of smart robots in multifarious fields, the Simultaneous Localization and Mapping (SLAM) technique in robotics has attracted growing attention in the community. Yet collaborative SLAM across multiple robots remains challenging due to the contradiction between the intensive computation of SLAM and the limited computing capability of robots. While traditional solutions resort to powerful cloud servers acting as an external computation provider, we show by real-world measurements that the significant communication overhead of data offloading prevents their practicability in real deployments. To tackle these challenges, this paper brings the emerging edge computing paradigm into multi-robot SLAM and proposes RecSLAM, a multi-robot laser SLAM system that focuses on accelerating the map construction process under a robot-edge-cloud architecture. In contrast to conventional multi-robot SLAM, which generates maps on the robots and merges them entirely on the cloud, RecSLAM develops a hierarchical map fusion technique that directs the robots' raw data to edge servers for real-time fusion and then sends the results to the cloud for global merging. To optimize the overall pipeline, an efficient multi-robot SLAM collaborative processing framework is introduced to adaptively optimize robot-to-edge offloading tailored to heterogeneous edge resource conditions, while ensuring workload balancing among the edge servers. Extensive evaluations show that RecSLAM can achieve up to a 39% reduction in processing latency over the state of the art. Besides, a proof-of-concept prototype is developed and deployed in real scenes to demonstrate its effectiveness.
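The sketch below illustrates the flavor of robot-to-edge workload balancing with a simple greedy assignment over heterogeneous edge capacities. The capacities, workload figures, and greedy rule are illustrative assumptions and not RecSLAM's actual collaborative processing framework.

```python
# Minimal sketch of robot-to-edge assignment with load balancing, loosely inspired
# by the offloading problem described above. Edge capacities, robot workloads, and
# the greedy rule are illustrative assumptions, not the RecSLAM algorithm itself.

def assign_robots(robot_loads, edge_capacities):
    """Greedily send each robot's scan-fusion workload to the least-utilized edge server."""
    utilization = {e: 0.0 for e in edge_capacities}
    assignment = {}
    # Place heavier workloads first so they land on the emptiest servers.
    for robot, load in sorted(robot_loads.items(), key=lambda kv: -kv[1]):
        target = min(utilization, key=lambda e: (utilization[e] + load) / edge_capacities[e])
        assignment[robot] = target
        utilization[target] += load
    return assignment, {e: utilization[e] / edge_capacities[e] for e in edge_capacities}

robots = {"r1": 30.0, "r2": 22.0, "r3": 18.0, "r4": 10.0}   # scans/s to fuse (assumed)
edges = {"edge-A": 60.0, "edge-B": 40.0}                     # fusion throughput capacity (assumed)
print(assign_robots(robots, edges))
```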

Realizing edge intelligence consists of sensing, communication, training, and inference stages. Conventionally, the sensing and communication stages are executed sequentially, which results in an excessive amount of time spent on dataset generation and uploading. This paper proposes to accelerate edge intelligence via integrated sensing and communication (ISAC). As such, the sensing and communication stages are merged so as to make the best use of the wireless signals for the dual purpose of dataset generation and uploading. However, ISAC also introduces additional interference between the sensing and communication functionalities. To address this challenge, this paper proposes a classification-error minimization formulation to design the ISAC beamforming and time allocation. The globally optimal solution is derived via a rank-1-guaranteed semidefinite relaxation, and a performance analysis is carried out to quantify the ISAC gain over conventional edge intelligence. Simulation results are provided to verify the effectiveness of the proposed ISAC-assisted edge intelligence system. Interestingly, we find that ISAC is always beneficial when the duration of generating a sample is longer than the duration of uploading it; otherwise, the ISAC gain can vanish or even be negative. Nevertheless, we derive a sufficient condition under which a positive ISAC gain is feasible.
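The intuition for merging the two stages can be seen with a toy timing model in which sensing and uploading of successive samples overlap perfectly; the sensing-communication interference that can erase the gain is deliberately not modeled here. All durations below are assumed values for illustration.

```python
# Toy illustration of why the relative durations of sample generation (sensing) and
# sample uploading matter. The perfect-overlap model below is a simplification for
# intuition; the paper's actual design optimizes ISAC beamforming and time allocation.

def sequential_time(n_samples, t_sense, t_upload):
    """Conventional pipeline: sense the whole dataset, then upload it."""
    return n_samples * t_sense + n_samples * t_upload

def isac_time(n_samples, t_sense, t_upload):
    """Idealized ISAC: sensing and uploading of successive samples fully overlap."""
    return n_samples * max(t_sense, t_upload) + min(t_sense, t_upload)

for t_sense, t_upload in [(4e-3, 1e-3), (1e-3, 4e-3)]:
    seq = sequential_time(1000, t_sense, t_upload)
    ovl = isac_time(1000, t_sense, t_upload)
    print(f"sense {t_sense*1e3:.0f} ms / upload {t_upload*1e3:.0f} ms: "
          f"sequential {seq:.2f} s vs overlapped {ovl:.2f} s")
```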

In this paper, we investigate the performance of an RIS-aided wireless communication system that is subject to outdated channel state information and may operate in both the near- and far-field regions. In particular, we consider two RIS deployment strategies: (i) centralized deployment, where all the reflecting elements are installed on a single RIS, and (ii) distributed deployment, where the same number of reflecting elements is spread over multiple RISs. For both strategies, we derive accurate closed-form approximations for the ergodic capacity and introduce tight upper and lower bounds on it to obtain useful design insights. From this analysis, we show that increasing the transmit power, the Rician K-factor, the accuracy of the channel state information, and the number of reflecting elements improves the system performance. Moreover, we prove that the centralized RIS-aided deployment can achieve a higher ergodic capacity than the distributed RIS-aided deployment when the RIS is located near the base station or near the user. In other setups, on the other hand, we prove that the distributed deployment outperforms the centralized deployment. Finally, the analytical results are verified by Monte Carlo simulations.
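A small Monte Carlo experiment of the kind used for verification can be sketched for a single-antenna link aided by one RIS: the RIS phases are aligned to an outdated channel estimate whose correlation with the true channel is controlled by a parameter rho. The Rician channel model, the outdated-CSI model, and all numerical values are simplifying assumptions rather than the paper's exact setup.

```python
# Monte Carlo sketch of the ergodic capacity of a single-antenna RIS-aided link.
# Rician factor, element count, SNR, and the simple outdated-CSI model (phases set
# from a correlated channel estimate) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def rician(n, k_factor):
    los = np.ones(n, dtype=complex)
    nlos = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return np.sqrt(k_factor / (k_factor + 1)) * los + np.sqrt(1 / (k_factor + 1)) * nlos

def ergodic_capacity(n_elems, snr_lin, k_factor, rho, trials=5000):
    """rho is the correlation between the outdated channel estimate and the true channel."""
    caps = np.empty(trials)
    for t in range(trials):
        g = rician(n_elems, k_factor)           # BS -> RIS links
        r = rician(n_elems, k_factor)           # RIS -> user links
        h_true = g * r                          # cascaded per-element channel
        err = (rng.standard_normal(n_elems) + 1j * rng.standard_normal(n_elems)) / np.sqrt(2)
        h_old = rho * h_true + np.sqrt(1 - rho**2) * np.abs(h_true).mean() * err
        phases = np.exp(-1j * np.angle(h_old))  # RIS aligns phases to the outdated estimate
        caps[t] = np.log2(1 + snr_lin * np.abs(np.sum(h_true * phases))**2)
    return caps.mean()

for rho in (1.0, 0.9, 0.7):
    print(f"rho={rho}: {ergodic_capacity(64, 10**(-0.5), k_factor=3.0, rho=rho):.2f} bit/s/Hz")
```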

As a revolutionary paradigm for controlling wireless channels, the reconfigurable intelligent surface (RIS) has emerged as a candidate technology for future 6G networks. However, due to the multiplicative fading effect, existing passive RISs achieve only a negligible capacity gain in many scenarios with strong direct links. In this paper, the concept of active RISs is proposed to overcome this fundamental limitation. Unlike existing passive RISs, which reflect signals without amplification, active RISs can actively amplify the reflected signals by integrating amplifiers into their elements. To characterize the signal amplification and account for the noise introduced by the active components, we develop a signal model for active RISs, which is validated through experimental measurements on a fabricated active RIS element. Based on the developed signal model, we further analyze the asymptotic performance of active RISs to reveal their notable capacity gain for wireless communications. Finally, we formulate the sum-rate maximization problem for an active-RIS-aided multiple-input multiple-output (MIMO) system and propose a joint transmit beamforming and reflect precoding algorithm to solve it. Simulation results show that, in a typical wireless system, existing passive RISs realize only a negligible sum-rate gain of 3%, whereas the proposed active RISs achieve a significant sum-rate gain of 108%, thus overcoming the multiplicative fading effect.
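The multiplicative fading effect and the benefit of amplification can be illustrated with a crude single-element SNR comparison: in the passive case the cascaded path is so weak that it barely adds anything to a strong direct link, while an active element amplifies the reflection at the cost of injecting extra noise. Every gain, noise figure, and the single-element abstraction below are assumptions for illustration, not the paper's measured signal model.

```python
# Toy comparison of passive and active RIS SNR under the multiplicative fading effect.
# All gains, noise figures, and the single-element abstraction are assumptions for
# illustration; the paper derives a full signal model and precoding design.

import numpy as np

P_TX = 1.0            # transmit power (W)
DIRECT_GAIN = 1e-8    # assumed path gain of the strong direct link
CASCADE_GAIN = 1e-13  # assumed BS->RIS->user cascaded path gain (product of two links)
NOISE = 1e-12         # receiver noise power (W)
RIS_NOISE = 1e-12     # extra noise injected by active amplification (W)

def snr_passive():
    # Passive RIS: the reflected power is scaled only by the tiny cascaded gain.
    return P_TX * (np.sqrt(DIRECT_GAIN) + np.sqrt(CASCADE_GAIN))**2 / NOISE

def snr_active(amp_gain=1e4):
    # Active RIS: the reflected path is amplified, but amplification noise is added.
    signal = P_TX * (np.sqrt(DIRECT_GAIN) + np.sqrt(amp_gain * CASCADE_GAIN))**2
    # Assumed RIS->user attenuation of 1e-5 also applies to the injected noise.
    return signal / (NOISE + amp_gain * RIS_NOISE * 1e-5)

print(f"direct only : {10*np.log10(P_TX*DIRECT_GAIN/NOISE):.1f} dB")
print(f"passive RIS : {10*np.log10(snr_passive()):.1f} dB")
print(f"active RIS  : {10*np.log10(snr_active()):.1f} dB")
```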

This letter studies a vertical federated edge learning (FEEL) system for collaborative object/human motion recognition that exploits distributed integrated sensing and communication (ISAC). In this system, distributed edge devices first send wireless signals to sense the targeted objects/humans and then exchange intermediate computed vectors (instead of raw sensing data) for collaborative recognition while preserving data privacy. To boost spectrum and hardware utilization efficiency for FEEL, we exploit ISAC for both target sensing and data exchange by employing dedicated frequency-modulated continuous-wave (FMCW) signals at each edge device. Under this setup, we propose a vertical FEEL framework for recognition based on the collected multi-view wireless sensing data. In this framework, each edge device owns an individual local L-model that transforms its sensing data into an intermediate vector of relatively low dimension, which is then transmitted to a coordinating edge device that produces the final output via a common downstream S-model. Considering a human motion recognition task, experimental results show that our vertical FEEL based approach achieves recognition accuracy of up to 98%, an improvement of up to 8% over the benchmarks, including on-device training and horizontal FEEL.
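A minimal sketch of the vertical split at inference time, assuming simple one-layer models and random weights: each device's L-model compresses its sensing view into a short embedding, and only these embeddings travel to the coordinator, where the S-model produces the class probabilities. The layer sizes and the numpy implementation are illustrative assumptions.

```python
# Minimal numpy sketch of the vertical split described above: each edge device runs a
# local L-model that compresses its sensing view into a low-dimensional vector, and a
# coordinating device concatenates the vectors and runs a shared S-model head.
# The layer sizes, random weights, and softmax head are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def l_model(params, sensing_view):
    """Per-device L-model: one linear layer + ReLU producing an intermediate vector."""
    return np.maximum(params["W"] @ sensing_view + params["b"], 0.0)

def s_model(params, intermediate_vectors):
    """Downstream S-model at the coordinator: classify the concatenated vectors."""
    z = np.concatenate(intermediate_vectors)
    logits = params["W"] @ z + params["b"]
    return np.exp(logits) / np.exp(logits).sum()      # softmax over motion classes

N_DEVICES, VIEW_DIM, EMB_DIM, N_CLASSES = 3, 256, 16, 6
l_params = [{"W": rng.standard_normal((EMB_DIM, VIEW_DIM)) * 0.05,
             "b": np.zeros(EMB_DIM)} for _ in range(N_DEVICES)]
s_params = {"W": rng.standard_normal((N_CLASSES, N_DEVICES * EMB_DIM)) * 0.05,
            "b": np.zeros(N_CLASSES)}

views = [rng.standard_normal(VIEW_DIM) for _ in range(N_DEVICES)]   # FMCW-derived features
embeddings = [l_model(p, v) for p, v in zip(l_params, views)]        # only these are exchanged
print(s_model(s_params, embeddings))
```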

Edge intelligence refers to a set of connected systems and devices that perform data collection, caching, processing, and analysis based on artificial intelligence in locations close to where the data is captured. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although it emerged only recently, spanning the period from 2011 to the present, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, and so on. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.

Federated learning has emerged as a promising approach for paving the last mile of artificial intelligence, owing to its great potential for solving the data isolation problem in large-scale machine learning. In particular, considering the heterogeneity of practical edge computing systems, asynchronous edge-cloud collaborative federated learning can further improve learning efficiency by significantly reducing the straggler effect. Although no raw data is shared, the open architecture and extensive collaboration of asynchronous federated learning (AFL) still give malicious participants ample opportunity to infer other parties' training data, leading to serious privacy concerns. To achieve a rigorous privacy guarantee with high utility, we investigate how to secure asynchronous edge-cloud collaborative federated learning with differential privacy (DP), focusing on the impact of DP on the model convergence of AFL. Formally, we give the first analysis of the model convergence of AFL under DP and propose a multi-stage adjustable private algorithm (MAPA) to improve the trade-off between model utility and privacy by dynamically adjusting both the noise scale and the learning rate. Through extensive simulations and real-world experiments with an edge-cloud testbed, we demonstrate that MAPA significantly improves both the model accuracy and the convergence speed with a sufficient privacy guarantee.
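The core idea of stage-wise adjustment can be sketched with a toy differentially private SGD loop in which later stages use a smaller noise scale and learning rate. The clipping bound, schedules, and toy objective below are illustrative assumptions and not MAPA's exact algorithm or its privacy accounting.

```python
# Minimal sketch of stage-wise adjustment: earlier training stages use a larger noise
# scale (and larger learning rate), later stages reduce both. The schedules, clipping
# bound, and update rule are illustrative assumptions, not MAPA's exact algorithm.

import numpy as np

rng = np.random.default_rng(2)

def dp_update(weights, grad, clip_norm, noise_scale, lr):
    """Clip the gradient, add Gaussian noise, and take one (asynchronous) SGD step."""
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / (norm + 1e-12))
    noisy = grad + rng.normal(0.0, noise_scale * clip_norm, size=grad.shape)
    return weights - lr * noisy

weights = np.zeros(10)
stages = [  # (rounds, noise multiplier, learning rate) per stage -- assumed schedule
    (100, 1.0, 0.10),
    (100, 0.6, 0.05),
    (100, 0.3, 0.01),
]
for rounds, noise_scale, lr in stages:
    for _ in range(rounds):
        grad = weights - np.ones(10)            # toy quadratic objective ||w - 1||^2 / 2
        weights = dp_update(weights, grad, clip_norm=1.0, noise_scale=noise_scale, lr=lr)
print(weights.round(2))
```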

Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference. Emerging DNN hardware accelerators are beginning to support flexible bitwidths (1-8 bits) to further improve computation efficiency, which raises a great challenge: finding the optimal bitwidth for each layer requires domain experts to explore a vast design space, trading off accuracy, latency, power, and model size, which is both time-consuming and sub-optimal. Conventional quantization algorithms ignore the differences between hardware architectures and quantize all layers in a uniform way. In this paper, we introduce the Hardware-Aware Automated Quantization (HAQ) framework, which leverages reinforcement learning to automatically determine the quantization policy, and we take the hardware accelerator's feedback into account in the design loop. Rather than relying on proxy signals such as FLOPs and model size, we employ a hardware simulator to generate direct feedback signals to the RL agent. Compared with conventional methods, our framework is fully automated and can specialize the quantization policy for different neural network and hardware architectures. Our framework reduces latency by 1.4-1.95x and energy consumption by 1.9x with negligible loss of accuracy compared with fixed-bitwidth (8-bit) quantization. Our framework reveals that the optimal policies on different hardware architectures (i.e., edge and cloud architectures) under different resource constraints (i.e., latency, power, and model size) are drastically different. We interpret the implications of the different quantization policies, which offer insights for both neural network architecture design and hardware architecture design.
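A minimal sketch of what a per-layer bitwidth policy does once it has been chosen: each layer's weights are quantized uniformly and symmetrically to its assigned bitwidth, and lower bitwidths trade accuracy for smaller, faster models. The example policy and tensors are illustrative; HAQ itself searches these bitwidths with an RL agent driven by hardware feedback.

```python
# Minimal sketch of applying a per-layer bitwidth policy with uniform symmetric
# quantization. The example policy and tensors are illustrative; HAQ searches the
# per-layer bitwidths with a reinforcement-learning agent and hardware feedback.

import numpy as np

def quantize(weights, bits):
    """Uniform symmetric quantization of a weight tensor to the given bitwidth."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(weights).max() / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return q * scale                            # dequantized values used for inference

rng = np.random.default_rng(3)
layers = {"conv1": rng.standard_normal((3, 3, 16)), "fc": rng.standard_normal((128, 10))}
policy = {"conv1": 6, "fc": 4}                  # example per-layer bitwidths (1-8 bits)

for name, w in layers.items():
    err = np.abs(quantize(w, policy[name]) - w).mean()
    print(f"{name}: {policy[name]}-bit, mean abs error {err:.4f}")
```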
