
The Internet of Things (IoT) comprises a heterogeneous mix of smart devices that vary widely in size, usage, energy capacity, and computational power. IoT devices are typically connected to the Cloud via Fog nodes for fast processing and response times. In the rush to deploy devices quickly and maximize market share, manufacturers often treat security as an afterthought. Well-known security concerns in the IoT include data confidentiality, device authentication, location privacy, and device integrity. We believe that the majority of security schemes proposed to date are too heavyweight to be of practical value for the IoT. In this paper we propose a lightweight encryption scheme loosely based on the classic one-time pad, using hash functions for the generation and management of keys. Our scheme imposes minimal computational and storage requirements on network nodes, making it a viable candidate for encrypting data transmitted by IoT devices in the Fog.
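The abstract does not specify the construction, but the general idea of a one-time-pad-style cipher whose pad is derived from a hash chain can be sketched as follows. This is a minimal illustration, not the authors' scheme; the seed handling and keystream derivation are our own assumptions:

```python
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by iterating a hash over a shared seed."""
    out, block = b"", seed
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor_encrypt(message: bytes, seed: bytes) -> bytes:
    """One-time-pad-style XOR of the message with the hash-derived keystream."""
    ks = keystream(seed, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

# XOR with the same keystream decrypts; only hashing and XOR are needed,
# which keeps the computational footprint small on constrained nodes.
ct = xor_encrypt(b"sensor reading: 21.5C", b"shared-secret")
pt = xor_encrypt(ct, b"shared-secret")
```

In a real deployment the seed would have to be refreshed per message (reusing a pad breaks its security), which is presumably where the paper's key-management component comes in.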

Related content


To ensure the performance of online service systems, their status is closely monitored via various software and system metrics. Performance anomalies represent performance degradation issues (e.g., slow responses) of the service systems. When performing anomaly detection over the metrics, existing methods often lack interpretability, which is vital for engineers and analysts to take remediation actions. Moreover, they are unable to effectively accommodate ever-changing services in an online fashion. To address these limitations, in this paper we propose ADSketch, an interpretable and adaptive performance anomaly detection approach based on pattern sketching. ADSketch achieves interpretability by identifying groups of anomalous metric patterns, which represent particular types of performance issues. The underlying issues can then be immediately recognized if similar patterns emerge again. In addition, an adaptive learning algorithm is designed to embrace unprecedented patterns induced by service updates or user behavior changes. The proposed approach is evaluated with public data as well as industrial data collected from a representative online service system in Huawei Cloud. The experimental results show that ADSketch outperforms state-of-the-art approaches by a significant margin, and demonstrate the effectiveness of the online algorithm in new pattern discovery. Furthermore, our approach has been successfully deployed in industrial practice.
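The core idea of matching metric windows against a library of learned anomalous patterns can be illustrated with a toy sketch. The distance metric, threshold, and update rule below are our own simplifications, not ADSketch's actual algorithm:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PatternSketch:
    """Toy pattern-based detector: a metric window is anomalous if it is far
    from every known normal pattern; anomalous windows are stored so that a
    recurring issue can be matched to a previously seen pattern (interpretability)."""

    def __init__(self, normal_patterns, threshold):
        self.normal = list(normal_patterns)
        self.anomalous = []          # learned anomalous patterns (adaptive part)
        self.threshold = threshold

    def detect(self, window):
        if min(euclidean(window, p) for p in self.normal) <= self.threshold:
            return None              # matches normal behavior
        for i, p in enumerate(self.anomalous):
            if euclidean(window, p) <= self.threshold:
                return i             # recurring issue: known anomaly pattern i
        self.anomalous.append(window)
        return len(self.anomalous) - 1   # new type of performance issue

det = PatternSketch(normal_patterns=[[1, 1, 1], [2, 2, 2]], threshold=1.0)
det.detect([1, 1, 2])   # close to a normal pattern -> None
det.detect([9, 9, 9])   # far from all normal patterns -> new anomaly pattern 0
det.detect([9, 9, 8])   # recurs and matches pattern 0 -> 0
```

Returning the index of the matched anomalous pattern is what makes the output interpretable: the same index always corresponds to the same type of performance issue.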

Multi-access edge computing (MEC) is a key enabler for reducing the latency of vehicular networks. Due to vehicle mobility, requested services (e.g., infotainment services) must frequently be migrated across different MEC servers to guarantee their stringent quality-of-service requirements. In this paper, we study the problem of service migration in a MEC-enabled vehicular network in order to minimize the total service latency and migration cost. This problem is formulated as a nonlinear integer program and is linearized to help obtain the optimal solution using off-the-shelf solvers. Then, to obtain an efficient solution, it is modeled as a multi-agent Markov decision process and solved by leveraging the deep Q-learning (DQL) algorithm. The proposed DQL scheme performs proactive service migration while ensuring service continuity under high mobility constraints. Finally, simulation results show that the proposed DQL scheme achieves close-to-optimal performance.
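The migration trade-off (serve close to the vehicle vs. pay a migration cost) can be captured even in tabular Q-learning on a toy line topology. The state space, reward weights, and mobility model below are our own assumptions for illustration; the paper uses a deep multi-agent formulation:

```python
import random

# Toy setting: vehicle position in {0,1,2}; the service is hosted on one of
# three co-located MEC servers {0,1,2}. Latency grows with the distance
# between vehicle and hosting server; changing server incurs a migration cost.
LATENCY_W, MIGRATION_COST = 1.0, 0.5

def reward(pos, old_server, new_server):
    cost = LATENCY_W * abs(pos - new_server)
    if new_server != old_server:
        cost += MIGRATION_COST
    return -cost

random.seed(0)
ACTIONS = [0, 1, 2]                     # which server hosts the service next
Q = {(p, s): {a: 0.0 for a in ACTIONS} for p in ACTIONS for s in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1
state = (0, 0)
for _ in range(5000):
    pos, server = state
    q = Q[state]
    a = random.choice(ACTIONS) if random.random() < eps else max(q, key=q.get)
    r = reward(pos, server, a)
    next_pos = random.choice([max(pos - 1, 0), min(pos + 1, 2)])  # random walk
    nxt = (next_pos, a)
    q[a] += alpha * (r + gamma * max(Q[nxt].values()) - q[a])
    state = nxt
```

After training, the greedy policy tends to keep the service on a server near the vehicle, migrating only when the latency saving outweighs the migration cost.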

Fog computing is emerging as a new paradigm for dealing with latency-sensitive applications by moving data processing and analysis close to the data source. Due to the heterogeneity of devices in the fog, it is important to devise novel solutions that take into account the diverse physical resources available in each device to efficiently and dynamically distribute the processing. In this paper, we propose a resource representation scheme that exposes the resources of each device through Mobile Edge Computing Application Programming Interfaces (MEC APIs) in order to optimize resource allocation by the supervising entity in the fog. We then formulate the resource allocation problem using Lyapunov optimization and discuss the impact of our proposed approach on latency. Simulation results show that our proposed approach can minimize latency and improve the performance of the system.
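Lyapunov optimization is typically applied through a drift-plus-penalty rule: each time slot, choose the control action that minimizes a weighted penalty (e.g., energy) minus the backlog-weighted service rate. The abstract gives no concrete formulation, so the queue model and cost function below are purely illustrative assumptions:

```python
import random

random.seed(1)
V = 10.0                          # weight on the penalty (energy) vs. backlog
Q = 0.0                           # data backlog queue (units of work)
energy = lambda b: 0.2 * b * b    # convex energy cost of serving b units/slot

for _ in range(1000):
    arrivals = random.uniform(0, 2)          # work arriving this slot
    # Drift-plus-penalty: pick the service rate b that minimizes
    # V * energy(b) - Q * b, i.e., spend energy only when backlog justifies it.
    b = min(range(5), key=lambda x: V * energy(x) - Q * x)
    Q = max(Q + arrivals - b, 0.0)
```

A larger V trades queue backlog (and hence latency) for lower energy; the standard Lyapunov result is that this greedy per-slot rule keeps the queue stable while approaching the optimal penalty.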

Fog computing has emerged as a new paradigm in mobile network communications, aiming to equip the edge of the network with computing and storage capabilities to deal with the huge amount of data and processing generated by users' devices and sensors. Optimizing the assignment of users to fogs, however, is still an open issue. In this paper, we formulate the users-to-fogs association problem as a matching game with minimum and maximum quota constraints, and propose a Multi-Stage Deferred Acceptance (MSDA) algorithm in order to balance the use of fog resources and offer a better response time for users. Simulation results show that, compared to a baseline matching of users, the proposed model achieves lower delays for users.
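The building block of such matching games is the deferred-acceptance procedure with capacity (maximum-quota) constraints. The sketch below shows only this classic single-stage core, with made-up preferences; the paper's multi-stage variant and minimum quotas are not modeled:

```python
def deferred_acceptance(user_prefs, fog_rank, quota):
    """Users propose to fogs in preference order; each fog tentatively keeps
    its best proposers up to its quota and rejects the rest."""
    free = list(user_prefs)                    # users still unmatched
    next_choice = {u: 0 for u in user_prefs}   # next fog each user will try
    matched = {f: [] for f in quota}
    while free:
        u = free.pop(0)
        if next_choice[u] >= len(user_prefs[u]):
            continue                           # user exhausted all fogs
        f = user_prefs[u][next_choice[u]]
        next_choice[u] += 1
        matched[f].append(u)
        matched[f].sort(key=fog_rank[f].index)     # fog's own ranking
        if len(matched[f]) > quota[f]:
            free.append(matched[f].pop())          # reject the worst proposer
    return matched

prefs = {"u1": ["f1", "f2"], "u2": ["f1", "f2"], "u3": ["f1", "f2"]}
rank = {"f1": ["u1", "u2", "u3"], "f2": ["u3", "u1", "u2"]}
result = deferred_acceptance(prefs, rank, {"f1": 2, "f2": 2})
# f1 keeps its two best proposers (u1, u2); u3 is deferred to f2.
```

Deferred acceptance yields a stable matching: no user-fog pair would both prefer each other over their assigned match, which is what makes it attractive for balancing fog load.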

5G wireless networks have the potential to revolutionize future technologies. 5G is expected to meet the demands of diverse vertical applications with diverse requirements, including high traffic volume, massive connectivity, high quality of service, and low latency. To fulfill such requirements in 5G and beyond, new emerging technologies such as SDN, NFV, MEC, and cloud computing are being deployed. However, these technologies raise several issues regarding transparency, decentralization, and reliability. Furthermore, 5G networks are expected to connect many heterogeneous devices and machines, which will raise several security concerns regarding users' confidentiality, data privacy, and trustworthiness. To work seamlessly and securely in such scenarios, future 5G networks need to deploy smarter and more efficient security functions. Motivated by the aforementioned issues, blockchain has been proposed by researchers to overcome 5G's issues because of its capacity to ensure transparency, data reliability, trustworthiness, and immutability in a distributed environment. Indeed, blockchain has gained momentum as a novel technology that gives rise to a plethora of new decentralized technologies. In this chapter, we discuss the integration of blockchain with 5G networks and beyond. We then present how blockchain applications in 5G networks and beyond could facilitate enabling various services at the edge and the core.

Security in fifth generation (5G) networks has become one of the prime concerns in the telecommunication industry. 5G security challenges stem from the fact that 5G networks involve multiple stakeholders with different security requirements and measures. Deficiencies in security management between these stakeholders can lead to security attacks. Therefore, security solutions should be conceived for the safe deployment of different 5G verticals (e.g., Industry 4.0, the Internet of Things (IoT), etc.). The interdependencies among 5G and fully connected systems, such as IoT, entail some standard security requirements, namely integrity, availability, and confidentiality. In this article, we propose a hierarchical architecture for securing 5G-enabled IoT networks, and a security model for the prediction and detection of False Data Injection Attacks (FDIA) and Distributed Denial of Service (DDoS) attacks. The proposed security model is based on a Markov stochastic process, which is used to observe the behavior of each network device, and employs a range-based behavior sifting policy. Simulation results demonstrate the effectiveness of the proposed architecture and model in detecting and predicting FDIA and DDoS attacks in the context of 5G-enabled IoT.
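The Markov-process idea can be illustrated by fitting a transition matrix to a device's normal state sequence and flagging observation windows whose transitions are unlikely under that chain. The states, data, and threshold below are invented for illustration and are not the paper's model:

```python
from collections import defaultdict

def fit_transitions(sequence, states):
    """Estimate a Markov transition matrix from an observed state sequence."""
    counts = {s: defaultdict(int) for s in states}
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return {s: {t: counts[s][t] / max(sum(counts[s].values()), 1)
                for t in states}
            for s in states}

def window_likelihood(P, window):
    """Average transition probability of a window under the learned chain."""
    probs = [P[a][b] for a, b in zip(window, window[1:])]
    return sum(probs) / len(probs)

states = ["idle", "tx", "rx"]
normal = ["idle", "tx", "rx"] * 3 + ["idle"]   # a device's normal cycle
P = fit_transitions(normal, states)

ok = window_likelihood(P, ["idle", "tx", "rx", "idle"])   # normal behavior
bad = window_likelihood(P, ["tx", "tx", "tx", "tx"])      # flood-like behavior
flagged = bad < 0.5     # range-based sifting: below the accepted range
```

A DDoS flood or injected false data would manifest as transitions the chain assigns near-zero probability, pushing the window outside the accepted range.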

Federated Edge Learning (FEEL) involves the collaborative training of machine learning models among edge devices, orchestrated by a server in a wireless edge network. Due to frequent model updates, FEEL must be adapted to the limited communication bandwidth, the scarce energy of edge devices, and the statistical heterogeneity of edge devices' data distributions. Therefore, careful scheduling of a subset of devices for training and uploading models is necessary. In contrast to previous work on FEEL, where the data aspects are under-explored, we place data properties at the heart of the proposed scheduling algorithm. To this end, we propose a new scheduling scheme for non-independent and identically distributed (non-IID) and unbalanced datasets in FEEL. As data is the key component of the learning, we propose a new set of considerations for data characteristics in wireless scheduling algorithms for FEEL. In fact, the data collected by the devices depends on the local environment and usage pattern, so the datasets vary in size and distribution among the devices. In the proposed algorithm, we consider both data and resource perspectives. In addition to minimizing the completion time of FEEL as well as the transmission energy of the participating devices, the algorithm prioritizes devices with rich and diverse datasets. We first define a general framework for data-aware scheduling and the main axes and requirements for diversity evaluation. Then, we discuss diversity aspects and some exploitable techniques and metrics. Next, we formulate the problem and present our FEEL scheduling algorithm. Evaluations in different scenarios show that our proposed FEEL scheduling algorithm can help achieve high accuracy in a few rounds at a reduced cost.
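One simple way to realize "prioritize devices with rich and diverse datasets while accounting for resource cost" is to score each device by dataset size weighted by label entropy, discounted by its upload energy cost, and select the top-k. The scoring formula and weights below are our own illustrative choices, not the paper's algorithm:

```python
import math

def label_entropy(label_counts):
    """Shannon entropy of a device's label distribution (diversity proxy)."""
    total = sum(label_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in label_counts.values() if c > 0)

def schedule(devices, k, w_data=1.0, w_energy=0.5):
    """Rank devices by dataset size x label diversity, discounted by the
    estimated energy cost of uploading, and pick the top k."""
    def score(d):
        return w_data * d["size"] * (1 + label_entropy(d["labels"])) \
               - w_energy * d["energy_cost"]
    return sorted(devices, key=score, reverse=True)[:k]

devices = [
    {"id": "a", "size": 100, "labels": {0: 50, 1: 50}, "energy_cost": 10},
    {"id": "b", "size": 100, "labels": {0: 100},       "energy_cost": 10},
    {"id": "c", "size": 20,  "labels": {0: 10, 1: 10}, "energy_cost": 1},
]
chosen = [d["id"] for d in schedule(devices, k=2)]
# Device "a" outranks "b": same size and cost, but a balanced (non-IID-
# mitigating) label distribution doubles its diversity term.
```

Under non-IID data, favoring high-entropy devices tends to give the aggregated model a more representative gradient signal per round, which is the intuition behind data-aware scheduling.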

In recent years, mobile devices have developed rapidly, gaining stronger computational capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now be run on mobile devices. To take advantage of the resources available on mobile devices and preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and uploads only computation results, instead of original data, to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers, but also protects users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey provides a clear overview of mobile distributed machine learning and offers guidelines for applying it to real applications.

Smart services are an important element of smart cities and Internet of Things (IoT) ecosystems, where the intelligence behind the services is obtained and improved through sensory data. Providing a large amount of training data is not always feasible; therefore, we need to consider alternative ways that incorporate unlabeled data as well. In recent years, deep reinforcement learning (DRL) has achieved great success in several application domains. It is well suited to IoT and smart city scenarios where auto-generated data can be partially labeled by users' feedback for training purposes. In this paper, we propose a semi-supervised deep reinforcement learning model that fits smart city applications, as it consumes both labeled and unlabeled data to improve the performance and accuracy of the learning agent. The model utilizes Variational Autoencoders (VAE) as the inference engine for generalizing optimal policies. To the best of our knowledge, the proposed model is the first investigation that extends deep reinforcement learning to the semi-supervised paradigm. As a case study of smart city applications, we focus on smart buildings and apply the proposed model to the problem of indoor localization based on BLE signal strength. Indoor localization is a key component of smart city services, since people spend significant time in indoor environments. Our model learns the best action policies that lead to a close estimation of the target locations, with an improvement of 23% in terms of distance to the target and at least 67% more received rewards compared to the supervised DRL model.
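The semi-supervised ingredient can be sketched in a deliberately tiny form: a tabular agent moves on a 1-D line toward an unknown target, receiving a true (labeled) reward only on the fraction of steps where user feedback exists, and a surrogate reward derived from a signal model otherwise. This toy replaces the paper's VAE inference with a hand-coded signal function and is purely illustrative:

```python
import random

TARGET = 7                          # true (latent) indoor location on a 1-D grid

def signal(pos):
    """Toy BLE signal-strength model, peaking at the target location."""
    return 1.0 / (1.0 + abs(pos - TARGET))

def labeled_reward(pos):            # explicit user feedback (labeled data)
    return -abs(pos - TARGET)

def proxy_reward(pos):              # surrogate reward from the signal model,
    return -abs(signal(pos) - 1.0)  # used when no label is available

random.seed(2)
Q = {p: {a: 0.0 for a in (-1, 1)} for p in range(10)}
pos = 0
for _ in range(3000):
    acts = Q[pos]
    a = random.choice((-1, 1)) if random.random() < 0.2 else max(acts, key=acts.get)
    nxt = min(max(pos + a, 0), 9)
    labeled = random.random() < 0.3     # only ~30% of steps carry user feedback
    r = labeled_reward(nxt) if labeled else proxy_reward(nxt)
    Q[pos][a] += 0.3 * (r + 0.9 * max(Q[nxt].values()) - Q[pos][a])
    pos = nxt
```

Both reward sources peak at the target, so unlabeled steps still push the policy in the right direction; in the paper that alignment is learned by the VAE rather than assumed.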

Network Virtualization is one of the most promising technologies for future networking, and is considered a critical IT resource that connects distributed, virtualized Cloud Computing services and different components such as storage, servers, and applications. Network Virtualization allows multiple virtual networks to coexist on the same shared physical infrastructure simultaneously. One of the crucial challenges in Network Virtualization is Virtual Network Embedding, which provides a method to allocate physical substrate resources to virtual network requests. In this paper, we investigate Virtual Network Embedding strategies and related issues in resource allocation for an Internet Provider (InP) to efficiently embed virtual networks requested by Virtual Network Operators (VNOs) who share the infrastructure provided by the InP. To achieve that goal, we design a heuristic Virtual Network Embedding algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that our proposed scheme significantly improves the performance of Virtual Network Embedding by enhancing the long-term average revenue as well as the acceptance ratio and resource utilization of virtual network requests, compared to prior algorithms.
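The node-mapping half of Virtual Network Embedding can be sketched as a greedy heuristic: place virtual nodes, in decreasing order of CPU demand, onto the substrate node with the most remaining capacity. This shows only node mapping with invented capacities; the paper's algorithm additionally embeds links simultaneously:

```python
def embed(virtual_nodes, substrate):
    """Greedy node mapping: assign each virtual node (largest CPU demand first)
    to the unused substrate node with the most remaining CPU."""
    cpu = dict(substrate)                 # remaining CPU per substrate node
    mapping = {}
    for v, demand in sorted(virtual_nodes.items(), key=lambda kv: -kv[1]):
        candidates = [s for s in cpu
                      if s not in mapping.values() and cpu[s] >= demand]
        if not candidates:
            return None                   # request rejected: no feasible node
        s = max(candidates, key=cpu.get)
        mapping[v] = s
        cpu[s] -= demand
    return mapping

substrate = {"s1": 10, "s2": 6, "s3": 4}   # substrate node -> available CPU
request = {"v1": 5, "v2": 4}               # virtual node -> CPU demand
m = embed(request, substrate)
# v1 (demand 5) goes to s1 (most CPU), v2 (demand 4) to s2.
```

The acceptance ratio mentioned in the abstract corresponds to the fraction of requests for which such a feasible mapping (here, a non-None result) exists under the heuristic.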
