
The Internet of Things (IoT) comprises a heterogeneous mix of smart devices that vary widely in size, usage, energy capacity, and computational power. IoT devices are typically connected to the Cloud via Fog nodes for fast processing and response times. In the rush to deploy devices quickly and to maximize market share, manufacturers often treat security as an afterthought. Well-known security concerns in the IoT include data confidentiality, device authentication, location privacy, and device integrity. We believe that the majority of security schemes proposed to date are too heavyweight to be of practical value for the IoT. In this paper we propose a lightweight encryption scheme loosely based on the classic one-time pad, using hash functions for the generation and management of keys. Our scheme imposes minimal computational and storage requirements on the network nodes, which makes it a viable candidate for the encryption of data transmitted by IoT devices in the Fog.
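The abstract leaves the construction unspecified; the minimal sketch below, assuming SHA-256 and a pre-shared seed (both our choices, not necessarily the paper's), shows how a hash function can stretch a shared secret into a per-message pad in the spirit of the one-time pad.

```python
import hashlib

def keystream(seed: bytes, counter: int, length: int) -> bytes:
    """Derive a pseudo one-time pad by hashing seed || counter || block index."""
    out = b""
    block = 0
    while len(out) < length:
        out += hashlib.sha256(
            seed + counter.to_bytes(8, "big") + block.to_bytes(4, "big")
        ).digest()
        block += 1
    return out[:length]

def encrypt(seed: bytes, counter: int, plaintext: bytes) -> bytes:
    """XOR the message with a freshly derived pad; decryption is identical."""
    pad = keystream(seed, counter, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, pad))

msg = b"sensor reading: 23.5C"
ct = encrypt(b"shared-secret-seed", 1, msg)
assert encrypt(b"shared-secret-seed", 1, ct) == msg  # XOR is its own inverse
```

Because each pad is derived from a fresh counter value, no pad is ever reused; unlike the true one-time pad, security here reduces to that of the hash function.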

Related Content

Integration, the VLSI Journal. Publisher: Elsevier.

5G wireless networks have the potential to revolutionize future technologies. 5G is expected to meet the demands of diverse vertical applications with diverse requirements, including high traffic volume, massive connectivity, high quality of service, and low latency. To fulfill such requirements in 5G and beyond, emerging technologies such as SDN, NFV, MEC, and CC are being deployed. However, these technologies raise several issues regarding transparency, decentralization, and reliability. Furthermore, 5G networks are expected to connect many heterogeneous devices and machines, which raises several security concerns regarding users' confidentiality, data privacy, and trustworthiness. To work seamlessly and securely in such scenarios, future 5G networks need to deploy smarter and more efficient security functions. Motivated by these issues, researchers have proposed blockchain to overcome 5G's limitations, given its capacity to ensure transparency, data reliability, trustworthiness, and immutability in a distributed environment. Indeed, blockchain has gained momentum as a novel technology that gives rise to a plethora of new decentralized technologies. In this chapter, we discuss the integration of blockchain with 5G networks and beyond. We then present how blockchain applications in 5G networks and beyond could facilitate enabling various services at the edge and the core.

Blockchain is a distributed technology that allows trust to be established among unreliable users who interact and transact with each other. While blockchain technology has mainly been used for cryptocurrency, it has emerged as an enabling technology for establishing trust in the realm of the Internet of Things (IoT). Nevertheless, a naive use of blockchain for the IoT leads to high delays and extensive computational power. In this paper, we propose a blockchain architecture dedicated to supply chains comprising different distributed IoT entities. We propose a lightweight consensus for this architecture, called LC4IoT. The consensus is evaluated through extensive simulations. The results show that the proposed consensus requires low computational power, storage capability, and latency.
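LC4IoT's rules are not given in the abstract. As a stand-in to illustrate why a permissioned, lightweight consensus avoids mining cost, here is a toy round-robin proposer rotation among known supply-chain entities; the rotation rule and entity names are hypothetical, not the LC4IoT protocol.

```python
import hashlib
import json

class LightweightChain:
    """Toy permissioned chain: known validators take turns proposing
    blocks (round-robin), so there is no mining and no heavy hashing."""

    def __init__(self, validators):
        self.validators = validators  # known supply-chain entities
        self.chain = [{"height": 0, "prev": "0" * 64,
                       "data": "genesis", "proposer": None}]

    def expected_proposer(self, height):
        # Turn order is a simple function of block height.
        return self.validators[height % len(self.validators)]

    def append(self, proposer, data):
        height = len(self.chain)
        if proposer != self.expected_proposer(height):
            raise ValueError("not this validator's turn")
        prev = hashlib.sha256(
            json.dumps(self.chain[-1], sort_keys=True).encode()
        ).hexdigest()
        self.chain.append({"height": height, "prev": prev,
                           "data": data, "proposer": proposer})

chain = LightweightChain(["farm", "carrier", "warehouse"])
chain.append("carrier", "pallet 42 picked up")  # height 1 -> validators[1]
```

The only per-block work is one hash to link the chain, which is why such designs suit constrained IoT hardware better than proof-of-work.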

Recently, blockchain has gained momentum as a novel technology that gives rise to a plethora of new decentralized applications (e.g., in the Internet of Things (IoT)). However, its integration with the IoT still faces several problems (e.g., scalability, flexibility). Provisioning resources for a large number of connected IoT devices requires a scalable and flexible blockchain. To address these issues, we propose a scalable and trustworthy blockchain (STB) architecture suitable for the IoT, which uses blockchain sharding and oracles to establish trust among unreliable IoT devices in a fully distributed and trustworthy manner. In particular, we design a peer-to-peer oracle network that ensures data reliability, scalability, flexibility, and trustworthiness. Furthermore, we introduce a new lightweight consensus algorithm that scales the blockchain dramatically while ensuring interoperability among blockchain participants. The results show that our proposed STB architecture achieves flexibility, efficiency, and scalability, making it a promising solution for the IoT context.
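To give a concrete flavor of sharding, the snippet below shows the common hash-based shard-assignment pattern: each device is deterministically mapped to one shard, so validation load splits across shards. The shard count and mapping rule are illustrative assumptions, not the STB design.

```python
import hashlib

NUM_SHARDS = 4  # illustrative; a real deployment would size this dynamically

def shard_of(device_id: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministically map a device to a shard by hashing its ID."""
    digest = hashlib.sha256(device_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

for d in (f"sensor-{i}" for i in range(10)):
    print(d, "-> shard", shard_of(d))
```

Because the mapping is stateless and deterministic, any node can recompute it locally, which keeps cross-shard coordination to a minimum.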

The Industrial Internet of Things (IIoT) has sparked key revolutions in several leading industries, such as energy, agriculture, mining, transportation, and healthcare. Owing to its high capacity and fast transmission speed, 5G plays a pivotal role in enhancing industrial procedures, practices, and guidelines, such as crowdsourcing, cloud outsourcing, and platform subcontracting. Spatial crowdsourcing (SC) servers (such as those operated by DiDi, MeiTuan, and Uber) assign tasks based on workers' location information. However, SC servers are often untrustworthy and threaten to reveal workers' privacy. In this paper, we introduce Geo-MOEA (Multi-Objective Evolutionary Algorithm), a framework that protects the location privacy of workers on SC platforms in 5G environments. We propose an adaptive regionalized obfuscation mechanism with inference error bounds based on geo-indistinguishability (a strong notion of differential privacy), which is suitable for large-scale location data and task allocation. Workers report locally generated pseudo-locations instead of their actual locations. Further, to optimize the trade-off between SC service availability and privacy protection, we utilize the MOEA to improve the global applicability of the mechanism in 5G environments. Finally, by simulating the location scenario, the visual experimental results show that the mechanism not only protects location privacy but also achieves the desired high availability of services.
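The adaptive regionalized mechanism itself is beyond the abstract, but the geo-indistinguishability foundation it builds on is the standard planar Laplace mechanism, sketched below. The radius of planar Laplace noise with density $\frac{\varepsilon^2}{2\pi} e^{-\varepsilon r}$ is Gamma(2, 1/ε)-distributed, which gives a simple sampler; coordinates are assumed to be in a planar projection (e.g., metres).

```python
import numpy as np

def planar_laplace_noise(eps: float) -> tuple:
    """Sample 2-D noise with density eps^2/(2*pi) * exp(-eps * r):
    uniform angle, Gamma(2, 1/eps)-distributed radius."""
    theta = np.random.uniform(0.0, 2.0 * np.pi)
    r = np.random.gamma(shape=2.0, scale=1.0 / eps)
    return r * np.cos(theta), r * np.sin(theta)

def obfuscate(x: float, y: float, eps: float) -> tuple:
    """Report a pseudo-location instead of the worker's true location."""
    dx, dy = planar_laplace_noise(eps)
    return x + dx, y + dy

# Worker at planar coordinates (metres); smaller eps => stronger privacy.
print(obfuscate(1250.0, 340.0, eps=0.01))
```

The mechanism guarantees that any two locations within distance d of each other produce statistically similar pseudo-locations, up to a factor of $e^{\varepsilon d}$.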

Federated Edge Learning (FEEL) involves the collaborative training of machine learning models among edge devices, with the orchestration of a server in a wireless edge network. Due to frequent model updates, FEEL must be adapted to the limited communication bandwidth, the scarce energy of edge devices, and the statistical heterogeneity of edge devices' data distributions. Therefore, careful scheduling of a subset of devices for training and uploading models is necessary. In contrast to previous work in FEEL, where the data aspects are under-explored, we place data properties at the heart of the proposed scheduling algorithm. To this end, we propose a new scheduling scheme for non-independent and identically distributed (non-IID) and unbalanced datasets in FEEL. As data is the key component of the learning, we propose a new set of considerations for data characteristics in wireless scheduling algorithms in FEEL. In fact, the data collected by the devices depends on the local environment and usage pattern, so the datasets vary in size and distribution among the devices. The proposed algorithm considers both data and resource perspectives: in addition to minimizing the completion time of FEEL and the transmission energy of the participating devices, it prioritizes devices with rich and diverse datasets. We first define a general framework for data-aware scheduling, along with the main axes and requirements for diversity evaluation. Then, we discuss diversity aspects and some exploitable techniques and metrics. Next, we formulate the problem and present our FEEL scheduling algorithm. Evaluations in different scenarios show that our proposed FEEL scheduling algorithm helps achieve high accuracy in few rounds at reduced cost.
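The paper's exact diversity metrics are not reproduced here; the sketch below illustrates one plausible data-aware ranking, mixing normalized dataset size with label entropy as a diversity proxy. The score and the mixing weight are assumptions for illustration; a real FEEL scheduler would also fold in channel state and energy.

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy of a device's label histogram (diversity proxy,
    unnormalised here for brevity)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def schedule(devices, k, alpha=0.5):
    """Rank devices by a convex mix of normalised dataset size and label
    diversity, then pick the top-k for this training round."""
    max_size = max(len(d["labels"]) for d in devices.values())
    def score(name):
        d = devices[name]
        return (alpha * len(d["labels"]) / max_size
                + (1 - alpha) * label_entropy(d["labels"]))
    return sorted(devices, key=score, reverse=True)[:k]

devices = {
    "phone-A": {"labels": [0, 0, 0, 0, 1]},           # large but skewed
    "phone-B": {"labels": [0, 1, 2, 3]},              # small but diverse
    "phone-C": {"labels": [0, 1, 0, 1, 2, 2, 3, 3]},  # large and diverse
}
print(schedule(devices, k=2))  # favours phone-C, then phone-B
```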

Federated Edge Learning (FEEL) has emerged as a leading technique for privacy-preserving distributed training in wireless edge networks, where edge devices collaboratively train machine learning (ML) models with the orchestration of a server. However, due to frequent communication, FEEL must be adapted to the limited communication bandwidth. Furthermore, the statistical heterogeneity of local datasets' distributions and the uncertainty about data quality pose important challenges to the training's convergence. Therefore, meticulous selection of the participating devices and a corresponding bandwidth allocation are necessary. In this paper, we propose a data-quality-based scheduling (DQS) algorithm for FEEL. DQS prioritizes reliable devices with rich and diverse datasets. We define the different components of the learning algorithm and the data-quality evaluation. Then, we formulate the device-selection and bandwidth-allocation problem. Finally, we present our DQS algorithm for FEEL and evaluate it in different data-poisoning scenarios.
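As a hypothetical sketch of the selection step only: given per-device quality scores (which DQS would derive from reliability and diversity; that scoring is not shown here) and per-device uplink costs, a greedy quality-per-bandwidth rule can pick participants under a bandwidth budget.

```python
def dqs_select(devices, bandwidth_budget):
    """Greedy knapsack-style selection: favour devices with the best
    quality-per-bandwidth ratio until the uplink budget is spent."""
    ranked = sorted(devices, key=lambda d: d["quality"] / d["bandwidth"],
                    reverse=True)
    chosen, used = [], 0.0
    for d in ranked:
        if used + d["bandwidth"] <= bandwidth_budget:
            chosen.append(d["name"])
            used += d["bandwidth"]
    return chosen

devices = [
    {"name": "dev-1", "quality": 0.9, "bandwidth": 2.0},  # reliable, diverse data
    {"name": "dev-2", "quality": 0.2, "bandwidth": 1.0},  # suspected poisoned data
    {"name": "dev-3", "quality": 0.7, "bandwidth": 1.5},
]
print(dqs_select(devices, bandwidth_budget=4.0))  # ['dev-3', 'dev-1']
```

A low quality score, as would follow from detected poisoning, naturally pushes a device out of the selection without a hard blacklist.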

With the growth of 5G, Internet of Things (IoT), edge computing, and cloud computing technologies, the infrastructure (compute and network) available to emerging applications (AR/VR, autonomous driving, Industry 4.0, etc.) has become quite complex. There are multiple tiers of computing (IoT devices, near edge, far edge, cloud, etc.) connected by different types of networking technologies (LAN, LTE, 5G, MAN, WAN, etc.). Deploying and managing applications in such an environment is quite challenging. In this paper, we propose ROMA, which performs resource orchestration for microservices-based 5G applications in a dynamic, heterogeneous, multi-tiered compute and network fabric. We assume that only application-level requirements are known and that the detailed requirements of the individual microservices in the application are not specified. As part of our solution, ROMA identifies and leverages the coupling relationship between compute and network usage for the various microservices and solves an optimization problem to determine how each microservice should be deployed in the complex, multi-tiered compute and network fabric so that the end-to-end application requirements are optimally met. We implemented two real-world 5G applications in the video surveillance and intelligent transportation system (ITS) domains. Through extensive experiments, we show that ROMA saves up to 90%, 55%, and 44% compute and up to 80%, 95%, and 75% network bandwidth for the surveillance (watchlist) and transportation (person and car detection) applications, respectively. This improvement is achieved while honoring the application performance requirements, and it is measured against an alternative scheme that employs a static, overprovisioned resource allocation strategy that ignores the resource coupling relationships.
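ROMA's optimization formulation is not given in the abstract; the toy brute force below conveys the core trade-off it navigates, placing a chain of microservices across tiers to minimize compute cost under an end-to-end latency budget. All prices and latencies are made-up illustrative numbers, and real fabrics have more than two tiers.

```python
from itertools import product

# Illustrative numbers only: per-unit compute price on each tier and the
# one-way latency (ms) between the tiers hosting consecutive services.
COMPUTE_COST = {"edge": 3.0, "cloud": 1.0}
LINK_LATENCY = {("edge", "edge"): 1, ("cloud", "cloud"): 1,
                ("edge", "cloud"): 25, ("cloud", "edge"): 25}

def place(chain, latency_budget):
    """Brute-force the cheapest tier assignment for a microservice chain
    whose end-to-end latency stays within budget."""
    best = None
    for assign in product(COMPUTE_COST, repeat=len(chain)):
        cost = sum(COMPUTE_COST[t] * demand
                   for t, demand in zip(assign, chain))
        latency = sum(LINK_LATENCY[(a, b)]
                      for a, b in zip(assign, assign[1:]))
        if latency <= latency_budget and (best is None or cost < best[0]):
            best = (cost, assign)
    return best

# Three microservices with compute demands 2, 5, 1 (arbitrary units).
print(place([2, 5, 1], latency_budget=30))
```

Tightening the latency budget forces services toward the (more expensive) edge, which is exactly the compute/network coupling an orchestrator must reason about.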

In this paper, we design an efficient deep convolutional neural network (CNN) to improve and predict the performance of energy harvesting (EH) short-packet communications in multi-hop cognitive Internet-of-Things (IoT) networks. Specifically, we propose a Sum-EH scheme that allows IoT nodes to harvest energy from either a power beacon or primary transmitters to improve not only packet transmissions but also energy harvesting capabilities. We then build a novel deep CNN framework with feature enhancement-collection blocks based on the proposed Sum-EH scheme to simultaneously estimate the block error rate (BLER) and throughput with high accuracy and low execution time. Simulation results show that the proposed CNN framework almost exactly reproduces the BLER and throughput of the Sum-EH scheme while considerably reducing computational complexity, suggesting its suitability for real-time IoT systems under complex scenarios. Moreover, the designed CNN model achieves a root-mean-square error (RMSE) of $1.33\times10^{-2}$ on the considered dataset, the lowest RMSE compared to the deep neural network and state-of-the-art machine learning approaches.
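The feature enhancement-collection blocks are specific to the paper and not reproduced here; as a generic stand-in, a small 1-D CNN regressor with two outputs (BLER and throughput) might look like the following PyTorch sketch. The input features (e.g., SNR, hop count, blocklength, EH ratio) and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class SumEHPredictor(nn.Module):
    """Small 1-D CNN mapping a vector of system parameters to two
    regression targets: [predicted BLER, predicted throughput]."""

    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * n_features, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, x):                # x: (batch, n_features)
        return self.net(x.unsqueeze(1))  # add the channel dimension

model = SumEHPredictor()
params = torch.rand(4, 8)                # a batch of 4 parameter vectors
print(model(params).shape)               # torch.Size([4, 2])
```

Once trained on simulated (parameters, BLER, throughput) pairs, one forward pass replaces an expensive analytical or Monte-Carlo evaluation, which is the source of the claimed complexity reduction.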

Being accurate, efficient, and compact is essential for a facial landmark detector in practical use. To address these three concerns simultaneously, this paper investigates a neat model with promising detection accuracy in wild environments (e.g., unconstrained pose, expression, lighting, and occlusion conditions) and super real-time speed on a mobile device. More concretely, we customize an end-to-end single-stage network associated with acceleration techniques. During the training phase, rotation information is estimated for each sample to geometrically regularize landmark localization; it is NOT involved in the testing phase. A novel loss is designed that, besides providing the geometric regularization, mitigates the issue of data imbalance by adjusting the weights of samples in different states, such as large pose, extreme lighting, and occlusion, in the training set. Extensive experiments demonstrate the efficacy of our design and reveal its superior performance over state-of-the-art alternatives on widely adopted challenging benchmarks, i.e., 300W (including iBUG, LFPW, AFW, HELEN, and XM2VTS) and AFLW. Our model can be merely 2.1 MB in size and reach over 140 fps per face on a mobile phone (Qualcomm ARM 845 processor) with high precision, making it attractive for large-scale and real-time applications. We have made our practical system based on the PFLD 0.25X model publicly available at \url{//sites.google.com/view/xjguo/fld} to encourage comparisons and improvements from the community.
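The loss couples a per-sample geometric term (Euler-angle error) with per-sample attribute weights that up-weight rare states; the sketch below re-creates that idea in PyTorch. The exact weighting scheme is simplified relative to the paper, and the attribute weights are assumed to be supplied per sample.

```python
import torch

def pfld_style_loss(pred_landmarks, gt_landmarks,
                    pred_angles, gt_angles, attr_weights):
    """Landmark L2 error per sample, scaled by (a) the deviation of the
    estimated 3-D Euler angles from ground truth (geometric
    regularisation) and (b) per-sample attribute weights that up-weight
    rare states (large pose, occlusion, ...).
    Shapes: landmarks (B, L, 2), angles (B, 3), attr_weights (B,)."""
    angle_term = torch.sum(1.0 - torch.cos(pred_angles - gt_angles), dim=1)
    l2 = torch.sum((pred_landmarks - gt_landmarks) ** 2, dim=(1, 2))
    return torch.mean(attr_weights * angle_term * l2)

B, L = 4, 68  # batch of 4 faces, 68 landmarks each
loss = pfld_style_loss(torch.rand(B, L, 2), torch.rand(B, L, 2),
                       torch.rand(B, 3), torch.rand(B, 3), torch.ones(B))
print(loss.item())
```

Since the angle branch only shapes the training loss, it can be dropped at inference time, which is how the model stays small and fast on device.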

Network virtualization is one of the most promising technologies for future networking; it is considered a critical IT resource that connects distributed, virtualized cloud computing services and components such as storage, servers, and applications. Network virtualization allows multiple virtual networks to coexist simultaneously on the same shared physical infrastructure. One of the crucial problems in network virtualization is virtual network embedding (VNE), which provides a method to allocate physical substrate resources to virtual network requests. In this paper, we investigate VNE strategies and related resource-allocation issues for an Internet Provider (InP) to efficiently embed the virtual networks requested by the Virtual Network Operators (VNOs) who share the infrastructure provided by the InP. To achieve that goal, we design a heuristic VNE algorithm that simultaneously embeds the virtual nodes and virtual links of each virtual network request onto the physical infrastructure. Through extensive simulations, we demonstrate that our proposed scheme significantly improves the performance of VNE by enhancing the long-term average revenue as well as the acceptance ratio and resource utilization of virtual network requests compared to prior algorithms.
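The heuristic itself is not detailed in the abstract; the toy below (using networkx, with hypothetical CPU demands) illustrates coordinated embedding in its simplest form: greedy node mapping by spare CPU followed by shortest-path link mapping. A real VNE algorithm would also track link bandwidth and optimize revenue.

```python
import networkx as nx

def greedy_vne(substrate, vnr):
    """Toy coordinated embedding: map each virtual node to the substrate
    node with the most spare CPU, then route each virtual link over the
    shortest substrate path between the chosen hosts."""
    cpu = dict(substrate.nodes(data="cpu"))
    node_map = {}
    for v, demand in vnr.nodes(data="cpu"):
        host = max((n for n in cpu
                    if n not in node_map.values() and cpu[n] >= demand),
                   key=lambda n: cpu[n], default=None)
        if host is None:
            return None  # insufficient resources: reject the request
        node_map[v] = host
        cpu[host] -= demand
    link_map = {(a, b): nx.shortest_path(substrate, node_map[a], node_map[b])
                for a, b in vnr.edges}
    return node_map, link_map

substrate = nx.Graph()
substrate.add_nodes_from([(0, {"cpu": 10}), (1, {"cpu": 8}), (2, {"cpu": 5})])
substrate.add_edges_from([(0, 1), (1, 2)])
vnr = nx.Graph()
vnr.add_nodes_from([("a", {"cpu": 4}), ("b", {"cpu": 3})])
vnr.add_edge("a", "b")
print(greedy_vne(substrate, vnr))
```

Rejected requests are what the acceptance-ratio metric in the abstract counts, and the gap between greedy and coordinated solutions is where heuristic design pays off.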
