
Multi-access Edge Computing (MEC) is expected to act as the enabler for the integration of 5G (and future 6G) communication technologies with cloud-computing-based capabilities at the edge of the network. This will enable low-latency and context-aware applications for users of such mobile networks. In this paper we describe the implementation of a MEC model for the Simu5G simulator and illustrate how to configure the environment to evaluate MEC applications in both simulation and real-time emulation modes.

Related content

Several IoT communication protocols have been proposed at the application layer, including MQTT, CoAP and REST HTTP, with the latter being the protocol of choice for software developers due to its compatibility with existing systems. We present a theoretical model of the expected occupancy of the REST HTTP client buffer in IoT devices under lossy wireless conditions, and validate the study experimentally. The results show that increasing the buffer size in IoT devices does not always improve performance in lossy environments, demonstrating the importance of benchmarking the buffer size in IoT systems that deploy REST HTTP.
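The buffering effect described above can be illustrated with a toy discrete-time simulation (a sketch for intuition only, not the paper's analytical model): one message arrives per tick, the buffered head-of-line message is retransmitted until an attempt succeeds, and messages arriving to a full buffer are dropped.

```python
import random

def simulate_buffer(n_msgs=10000, loss_p=0.3, buffer_cap=8, seed=1):
    """Toy client-buffer model under a lossy link (illustrative only).

    Each tick one new message arrives; the head-of-line message is
    (re)transmitted and succeeds with probability 1 - loss_p; messages
    arriving to a full buffer are dropped.
    Returns (delivered, dropped) counts.
    """
    random.seed(seed)
    buffered = delivered = dropped = 0
    for _ in range(n_msgs):
        if buffered < buffer_cap:
            buffered += 1          # accept the new message
        else:
            dropped += 1           # buffer full: message lost
        if buffered and random.random() > loss_p:
            buffered -= 1          # transmission attempt succeeded
            delivered += 1
    return delivered, dropped
```

With a 30% loss rate the sustainable service rate falls below the arrival rate, so the drop count barely changes when the buffer is enlarged, matching the observation that a bigger buffer does not always help.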

SDN and NFV have recently changed the way we operate networks. By decoupling control- and data-plane operations and virtualising their components, they have opened up new frontiers towards reducing network ownership costs and improving usability and efficiency. Recently, their applicability has moved towards public telecommunications networks, with concepts such as the cloud Central Office (cloud-CO) pioneering their use in access and metro networks: an idea that has quickly attracted the interest of network operators. By merging mobile, residential and enterprise services into a common framework, built around commoditised data-centre architectures, future embodiments of this CO virtualisation concept could achieve significant capital and operational cost savings, while providing a customised network experience to high-capacity, low-latency future applications. This tutorial provides an overview of the frameworks and architectures behind current network disaggregation trends that are leading to the virtualisation/cloudification of central offices. It also provides insight into the virtualisation of the access-metro network, showcasing new software functionalities such as virtual Dynamic Bandwidth Allocation (DBA) mechanisms for Passive Optical Networks (PONs). In addition, we explore how it can bring together different network technologies to enable convergence of mobile and optical access networks and pave the way for the integration of disaggregated ROADM networks. Finally, this paper discusses some of the open challenges towards realising networks capable of delivering guaranteed performance while sharing resources across multiple operators and services.

The demand for large-scale deep learning is increasing, and distributed training is the current mainstream solution. Ring AllReduce is widely used as a decentralized data-parallel algorithm. However, in a heterogeneous environment each worker processes the same amount of data, so faster workers waste considerable time waiting for slower ones; the algorithm cannot adapt well to heterogeneous clusters, and resources go underused. In this paper, we design and implement a static allocation algorithm: the dataset is partitioned across workers by hand, and samples are drawn proportionally for training, thereby speeding up training in a heterogeneous environment. We verify the convergence of the network model and the effect on training speed under this algorithm, both on a single machine with multiple GPUs and on multiple machines with multiple GPUs. Building on this feasibility, we propose a self-adaptive allocation algorithm that allows each machine to find the amount of data suited to its current environment. The self-adaptive allocation algorithm reduces training time by roughly one-third to one-half compared to equal-proportion allocation. To better show the applicability of the algorithm in heterogeneous clusters, we replace a poorly performing worker with a well-performing one, or add a poorly performing worker to the cluster. Experimental results show that training time decreases as overall performance improves, indicating that resources are fully used. Furthermore, this algorithm is not only suitable for straggler problems but for most heterogeneous situations, and it can be used as a plug-in for AllReduce and its variant algorithms.
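The proportional split at the heart of such a scheme can be sketched as follows (a hypothetical helper, not the paper's implementation): each worker receives a share of the samples proportional to its measured throughput, so all workers finish a training step at roughly the same time.

```python
def allocate_samples(total, throughputs):
    """Split `total` samples across workers proportionally to measured
    throughput (e.g. samples/sec), so that per-step compute time is
    roughly equal.  Leftover samples from integer rounding go to the
    workers with the largest fractional shares."""
    s = sum(throughputs)
    raw = [total * t / s for t in throughputs]
    base = [int(x) for x in raw]               # floor of each share
    rem = total - sum(base)                    # samples still unassigned
    # hand out the remainder by largest fractional part
    order = sorted(range(len(raw)),
                   key=lambda i: raw[i] - base[i], reverse=True)
    for i in order[:rem]:
        base[i] += 1
    return base
```

A worker that is twice as fast simply receives twice as many samples; the self-adaptive variant described above would re-measure throughput and re-run this split as conditions change.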

The MPI standard has long included one-sided communication abstractions through the MPI Remote Memory Access (RMA) interface. Unfortunately, the MPI RMA chapter in the 4.0 version of the MPI standard still contains both well-known and lesser-known shortcomings for both implementations and users, which lead to potentially non-optimal usage patterns. In this paper, we identify a set of issues and propose ways for applications to better express anticipated usage of RMA routines, allowing the MPI implementation to better adapt to the application's needs. In order to increase the flexibility of the RMA interface, we add the capability to duplicate windows, allowing access to the same resources encapsulated by a window using different configurations. In the same vein, we introduce the concept of MPI memory handles, meant to provide lifetime guarantees on memory attached to dynamic windows, removing the overhead currently present in using dynamically exposed memory. We show that our extensions provide improved accumulate latencies, reduced overheads for multi-threaded flushes, and zero-overhead use of dynamic memory windows.

The evolution of connected and automated vehicle (CAV) technology is boosting the development of innovative solutions for the sixth generation (6G) of Vehicle-to-Everything (V2X) networks. Lower-frequency networks provide the control signalling for millimeter-wave (mmW) and sub-THz beam-based 6G communications. For CAVs, the mmW/sub-THz bands offer a huge amount of bandwidth (>1 GHz) and high data rates (>10 Gbit/s), enhancing the safety of CAV applications. However, high-frequency propagation is impaired by severe path loss, and line-of-sight (LoS) propagation can easily be blocked. Static and dynamic blockage (e.g., by non-connected vehicles) heavily affects V2X links; thus, in a multi-vehicle setting, knowledge of a LoS (or visibility) map is mandatory for stable connections and proactive beam pointing, which may involve relays whenever necessary. In this paper, we design a criterion for dynamic LoS-map estimation and propose a novel framework for selecting relays of opportunity to enable high-quality, stable V2X links. Relay selection is based on cooperative sensing to cope with LoS blockage. The LoS map is dynamically estimated on top of a static map of the environment by merging perceptive sensor data to achieve cooperative awareness of the surrounding scenario. We consider multiple relay-selection architectures based on centralized and decentralized strategies. A 3GPP standard-compliant simulation framework is adopted to reproduce real-world urban vehicular environments and vehicle mobility patterns.
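A minimal version of a LoS test over an occupancy grid (an illustrative sketch under assumed grid semantics, not the paper's sensor-fusion pipeline) samples points along the transmitter-receiver segment and declares the link blocked if any sample falls in an occupied cell; repeating this for every vehicle pair yields a simple LoS map.

```python
def line_of_sight(grid, a, b, samples=200):
    """Illustrative LoS check on a 2D occupancy grid.

    grid[y][x] == 1 marks an occupied cell (e.g. a building or a
    non-connected vehicle); a and b are (x, y) cell coordinates.
    Points are sampled uniformly along the segment a-b; the link is
    blocked if any sample lands in an occupied cell."""
    (x0, y0), (x1, y1) = a, b
    for k in range(samples + 1):
        t = k / samples
        x = int(x0 + t * (x1 - x0))
        y = int(y0 + t * (y1 - y0))
        if grid[y][x]:
            return False           # segment intersects an obstacle
    return True
```

In the dynamic setting described above, the occupied cells would be refreshed from the fused perceptive-sensor data before each evaluation, and links reported as blocked would trigger relay selection.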

Cooperative driving systems, such as platooning, rely on communication and information exchange to create situational awareness for each agent. Design and performance of control components are therefore tightly coupled with communication component performance. The information flow between vehicles can significantly affect the dynamics of a platoon. Therefore, both the performance and the stability of a platoon depend not only on the vehicles' controllers but also on the Information Flow Topology (IFT). The IFT can limit certain platoon properties, such as stability and scalability. Cellular Vehicle-to-Everything (C-V2X) has emerged as one of the main communication technologies to support connected and automated vehicle applications. As a result of packet loss, wireless channels create random link interruptions and changes in network topology. In this paper, we model the communication links between vehicles with a first-order Markov model to capture the prevalent time correlations for each link. These models enable performance evaluation through better approximation of communication links during system design stages. Our approach is to use data from experiments to model the Inter-Packet Gap (IPG) using Markov chains and derive transition probability matrices for consecutive IPG states. Training data is collected from high-fidelity simulations using models derived from empirical data for a variety of vehicle densities and communication rates. Utilizing the IPG models, we analyze the mean-square stability of a platoon of vehicles with the standard consensus protocol tuned for ideal communication, and compare the degradation in performance across scenarios.
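The IPG modelling step can be sketched as follows (a simplified illustration, assuming the IPG values have already been quantized into discrete states): count consecutive state pairs in the observed sequence and normalize each row into transition probabilities.

```python
def transition_matrix(states, n_states):
    """Estimate first-order Markov transition probabilities from a
    sequence of quantized Inter-Packet Gap (IPG) states.

    states: list of ints in [0, n_states); returns a row-stochastic
    matrix P where P[i][j] estimates Pr(next state = j | current = i).
    Rows with no observations fall back to a uniform distribution."""
    counts = [[0] * n_states for _ in range(n_states)]
    for s, t in zip(states, states[1:]):   # consecutive IPG state pairs
        counts[s][t] += 1
    P = []
    for row in counts:
        total = sum(row)
        P.append([c / total if total else 1.0 / n_states for c in row])
    return P
```

Fitting one such matrix per vehicle density and communication rate gives the link models used in the stability analysis.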

The synthpop package for R (www.synthpop.org.uk) provides tools to allow data custodians to create synthetic versions of confidential microdata that can be distributed with fewer restrictions than the original. The synthesis can be customized to ensure that relationships evident in the real data are reproduced in the synthetic data. A number of measures have been proposed to assess this aspect, commonly known as the utility of the synthetic data. We show that all these measures, including those calculated from tabulations, can be derived from a propensity score model. The measures are reviewed and compared, and the relations between them illustrated. All the measures compared are highly correlated, and some are shown to be identical. The method used to define the propensity score model matters more than the choice of measure. These measures and methods are incorporated into utility modules in the synthpop package that include methods to visualize the results, providing immediate feedback that allows the person creating the synthetic data to improve its quality. The utility functions were originally designed for synthetic data objects of class synds, created by the synthpop functions syn() or syn.strata(), but they can now also be used to compare one or more synthesised data sets with the original records, where the records are R data frames or lists of data frames.
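The propensity-score idea can be illustrated with a one-feature toy (a hedged sketch in Python rather than R, and not the synthpop implementation): stack original and synthetic rows, fit a classifier to tell them apart, and summarize how far the predicted propensities stray from the synthetic fraction c; this is the pMSE-style utility, with values near 0 indicating the two data sets are hard to distinguish.

```python
import math

def pmse(original, synthetic, epochs=500, lr=0.1):
    """Toy propensity-score utility for one numeric feature.

    Fits a logistic regression (by plain gradient descent) to predict
    whether a row is synthetic, then returns
        pMSE = mean((p_i - c)^2),  c = n_syn / (n_orig + n_syn).
    Near 0 when original and synthetic distributions match."""
    x = list(original) + list(synthetic)
    y = [0] * len(original) + [1] * len(synthetic)
    n = len(x)
    c = sum(y) / n
    w = b = 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(w * xi + b)))
            gw += (p - yi) * xi
            gb += p - yi
        w -= lr * gw / n
        b -= lr * gb / n
    ps = [1.0 / (1.0 + math.exp(-(w * xi + b))) for xi in x]
    return sum((p - c) ** 2 for p in ps) / n
```

The real measures use richer propensity models (and, as noted above, the modelling choice matters more than the measure), but the mechanics are the same.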

Due to the huge surge in traffic from IoT devices and applications, mobile networks require a paradigm shift to handle such demand. Under 5G economics, these networks should provide virtualized, multi-vendor, intelligent systems that can scale and efficiently optimize the investment in the underlying infrastructure. Market stakeholders have therefore proposed the Open Radio Access Network (O-RAN) as one solution to improve network performance, agility, and the time-to-market of new applications. O-RAN harnesses the power of artificial intelligence, cloud computing, and new network technologies (NFV and SDN) to allow operators to manage their infrastructure cost-efficiently. It is therefore necessary to address O-RAN performance and availability challenges autonomously while maintaining quality of service. In this work, we propose an optimized deployment strategy for the virtualized O-RAN units in the O-Cloud that minimizes the network's outage while complying with performance and operational requirements. The model's evaluation yields an optimal deployment strategy that maximizes the network's overall availability and adheres to O-RAN-specific requirements.
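As an illustration of the availability-driven placement problem (a hypothetical greedy sketch, not the paper's optimization model), each virtualized unit can be assigned to the most-available O-Cloud site with spare capacity, with end-to-end availability taken as the product of the chosen hosts' availabilities (units assumed in series):

```python
def place_units(units, sites):
    """Greedy availability-first placement of virtualized units.

    units: list of unit names (e.g. hypothetical "CU", "DU" instances).
    sites: list of dicts {"avail": float in (0, 1], "cap": int}, mutated
    in place as capacity is consumed.
    Returns ([(unit, host_availability), ...], overall_availability)."""
    placement = []
    overall = 1.0
    for u in units:
        best = max((s for s in sites if s["cap"] > 0),
                   key=lambda s: s["avail"], default=None)
        if best is None:
            raise ValueError("insufficient O-Cloud capacity")
        best["cap"] -= 1
        placement.append((u, best["avail"]))
        overall *= best["avail"]    # series chain: all units must be up
    return placement, overall
```

A real formulation would add the O-RAN-specific latency and affinity constraints mentioned above, typically as an integer program rather than a greedy pass.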

Smart services based on the Internet of Everything (IoE) are gaining considerable popularity due to the ever-increasing demands on wireless networks. This calls for next-generation communication systems with enhanced capabilities. Although 5G networks show great potential to support numerous IoE-based services, they are not adequate to meet the complete requirements of new smart applications. There is therefore increased demand for envisioning 6G wireless communication systems to overcome the major limitations of existing 5G networks. Moreover, incorporating artificial intelligence in 6G will provide solutions for highly complex network-optimization problems. Furthermore, to add further value to future 6G networks, researchers are investigating new technologies, such as THz and quantum communications. Future 6G wireless communications must support massive data-driven applications and an increasing number of users. This paper presents recent advances in 6G wireless networks, including the evolution from 1G to 5G communications, research trends for 6G, enabling technologies, and state-of-the-art 6G projects.

Data assimilation techniques are widely used to predict complex dynamical systems with uncertainties, based on time-series observation data. Modelling of error covariance matrices is an important element in data assimilation algorithms and can considerably impact forecasting accuracy. The estimation of these covariances, which usually relies on empirical assumptions and physical constraints, is often imprecise and computationally expensive, especially for systems of large dimension. In this work, we propose a data-driven approach based on long short-term memory (LSTM) recurrent neural networks (RNNs) to improve both the accuracy and the efficiency of observation covariance specification in data assimilation for dynamical systems. Learning the covariance matrix from observed/simulated time-series data, the proposed approach does not require any knowledge or assumption about the prior error distribution, unlike classical posterior tuning methods. We compare the novel approach with two state-of-the-art covariance tuning algorithms, namely DI01 and D05, first in a Lorenz dynamical system and then in a 2D shallow-water twin-experiment framework with different covariance parameterizations using ensemble assimilation. The novel method shows significant advantages in observation covariance specification, assimilation accuracy and computational efficiency.
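To see why the observation-error covariance matters, consider the scalar analysis step of a Kalman-type assimilation scheme (a textbook sketch of the update the learned covariance feeds into, not the paper's LSTM method): the covariance R directly sets how much weight the observation receives.

```python
def kalman_update(xb, B, y, R, H=1.0):
    """One scalar data-assimilation analysis step.

    xb: background (forecast) state, B: background-error variance,
    y: observation, R: observation-error variance, H: observation
    operator.  A larger R pulls the analysis towards the background;
    a smaller R pulls it towards the observation."""
    K = B * H / (H * B * H + R)    # Kalman gain
    xa = xb + K * (y - H * xb)     # analysis state
    Pa = (1.0 - K * H) * B         # analysis-error variance
    return xa, Pa
```

A misspecified R therefore skews every analysis, which is exactly why learning it from time-series data, as proposed above, can improve assimilation accuracy.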
