
Device-to-device (D2D) communications is expected to be a critical enabler of distributed computing in edge networks at scale. A key challenge in providing this capability is the requirement for judicious management of the heterogeneous communication and computation resources that exist at the edge to meet processing needs. In this paper, we develop an optimization methodology that considers the network topology jointly with device and network resource allocation to minimize total D2D overhead, which we quantify in terms of time and energy required for task processing. Variables in our model include task assignment, CPU allocation, subchannel selection, and beamforming design for multiple-input multiple-output (MIMO) wireless devices. We propose two methods to solve the resulting non-convex mixed integer program: semi-exhaustive search optimization, which represents a "best-effort" at obtaining the optimal solution, and efficient alternate optimization, which is more computationally efficient. As a component of these two methods, we develop a novel coordinated beamforming algorithm which we show obtains the optimal beamformer for a common receiver characteristic. Through numerical experiments, we find that our methodology yields substantial improvements in network overhead compared with local computation and partially optimized methods, which validates our joint optimization approach. Further, we find that the efficient alternate optimization scales well with the number of nodes, and thus can be a practical solution for D2D computing in large networks.
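
As a rough illustration of the joint-optimization structure described above, the following Python sketch shows a generic block-coordinate (alternating) loop over the four variable groups named in the abstract. All function names and the overhead model are placeholders, not the paper's actual subproblem solvers.

```python
# Hypothetical sketch of an alternating (block-coordinate) optimization loop.
# The update_* callables are placeholders for the per-block subproblem solvers
# (task assignment, CPU allocation, subchannel selection, coordinated beamforming).

def total_overhead(state):
    """Weighted sum of task-processing time and energy (placeholder model)."""
    return state["time"] + state["lambda"] * state["energy"]

def alternate_optimization(state, update_blocks, max_iters=50, tol=1e-4):
    prev = total_overhead(state)
    for _ in range(max_iters):
        for update in update_blocks:      # e.g. [update_assignment, update_cpu,
            state = update(state)         #       update_subchannel, update_beamformers]
        cur = total_overhead(state)
        if abs(prev - cur) <= tol * max(abs(prev), 1.0):
            break                         # stop once the overhead stops improving
        prev = cur
    return state
```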

Related Content

The Internet of Things (IoT) brings connectivity to a massive number of devices that demand energy-efficient solutions to deal with limited battery capacities, uplink-dominant traffic, and channel impairments. In this work, we explore the use of Unmanned Aerial Vehicles (UAVs) equipped with configurable antennas as a flexible solution for serving low-power IoT networks. We formulate an optimization problem to set the position and antenna beamwidth of the UAV, and the transmit power of the IoT devices, subject to average-Signal-to-average-Interference-plus-Noise Ratio ($\bar{\text{S}}\overline{\text{IN}}\text{R}$) Quality of Service (QoS) constraints. We minimize the worst-case average energy consumption of the IoT devices, thus targeting the fairest allocation of the energy resources. The problem is non-convex and highly non-linear; therefore, we re-formulate it as a series of three geometric programs that can be solved iteratively. Results reveal the benefits of planning the network compared to a random deployment in terms of reducing the worst-case average energy consumption. Furthermore, we show that the target $\bar{\text{S}}\overline{\text{IN}}\text{R}$ is limited by the number of IoT devices, and we highlight the dominant impact of the UAV hovering height when serving wider areas. Our proposed algorithm outperforms other optimization benchmarks in terms of minimizing the average energy consumption at the most energy-demanding IoT device, and in convergence time.
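
For illustration, the snippet below sets up a toy geometric program of the kind the iterative reformulation above relies on, using CVXPY's log-log convex (`gp=True`) mode: it minimizes the worst-case transmit energy subject to per-device QoS constraints. The channel gains, noise power, SINR target, and transmission duration are assumed values, and the UAV position and beamwidth variables are omitted.

```python
# A toy geometric program: minimize the worst-case per-device transmit energy
# under monomial QoS constraints. Numbers are illustrative assumptions.
import cvxpy as cp
import numpy as np

g = np.array([1.2, 0.8, 0.5])      # assumed average channel gains per IoT device
noise = 1e-3                        # assumed noise power
snr_min = 2.0                       # assumed QoS target
T = 0.1                             # assumed transmission duration (s)

p = cp.Variable(len(g), pos=True)   # transmit powers (positive, as GP requires)
t = cp.Variable(pos=True)           # epigraph variable for the worst-case energy

constraints = [T * p <= t]          # energy of each device bounded by t
constraints += [snr_min * noise / (g[i] * p[i]) <= 1   # per-device QoS constraint
                for i in range(len(g))]

prob = cp.Problem(cp.Minimize(t), constraints)
prob.solve(gp=True)                 # solve as a geometric program
print(p.value, t.value)
```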

One of the most important aspects of moving toward next-generation networks such as 5G/6G is enabling network slicing in an efficient manner. The most challenging issues are the uncertainties in computation and communication demand. Because slices arrive to the network at different times and their lifespans vary, the solution must react dynamically to online slice requests. The joint problem of online admission control and resource allocation, accounting for energy consumption, is formulated mathematically as a Binary Linear Program (BLP), in which the $\Gamma$-robustness concept is exploited to handle uncertainty in Virtual Link (VL) bandwidths and Virtual Network Function (VNF) workloads. An optimal algorithm is then proposed; however, it cannot solve real-world, large-scale instances in a reasonable amount of time. To find near-optimal solutions efficiently, a new heuristic algorithm is developed. The assessment results indicate that the heuristic is effective in increasing the number of accepted requests, decreasing power consumption, and providing adjustable tolerance to the uncertainties in VNF workloads and VL traffic, considered separately. With respect to the acceptance ratio and power consumption, which constitute the two important components of the objective function, the heuristic has optimality gaps of about 7% and 10%, respectively, while being about 30X faster than the optimal algorithm.
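
The toy model below sketches a binary linear program for joint admission control and placement in the spirit of the formulation above, using PuLP. The slice demands, node capacities, and power weight are illustrative assumptions, and the $\Gamma$-robust uncertainty terms are omitted for brevity.

```python
# Toy BLP: admit slices and place their workloads on capacitated nodes,
# maximizing accepted requests minus a power-consumption penalty.
import pulp

slices = ["s1", "s2", "s3"]
nodes = ["n1", "n2"]
demand = {"s1": 4, "s2": 3, "s3": 5}        # assumed VNF workloads
capacity = {"n1": 6, "n2": 5}               # assumed node capacities
power_per_unit = 0.2                         # assumed power cost per workload unit

prob = pulp.LpProblem("slice_admission", pulp.LpMaximize)
admit = pulp.LpVariable.dicts("admit", slices, cat="Binary")
place = pulp.LpVariable.dicts("place", [(s, n) for s in slices for n in nodes],
                              cat="Binary")

# Objective: accepted requests minus power consumption of placed workloads.
prob += (pulp.lpSum(admit[s] for s in slices)
         - power_per_unit * pulp.lpSum(demand[s] * place[(s, n)]
                                       for s in slices for n in nodes))

# An admitted slice is placed on exactly one node; a rejected slice on none.
for s in slices:
    prob += pulp.lpSum(place[(s, n)] for n in nodes) == admit[s]

# Node capacity constraints.
for n in nodes:
    prob += pulp.lpSum(demand[s] * place[(s, n)] for s in slices) <= capacity[n]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print({s: int(admit[s].value()) for s in slices})
```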

Scheduling device-to-device (D2D) links to avoid excessive interference is critical to the success of wireless D2D communications. Most traditional scheduling schemes consider only the maximum throughput or fairness of the system and ignore the freshness of information. In this paper, we propose a novel D2D link scheduling scheme that jointly optimizes the age of information (AoI) and throughput when D2D links transmit packets under the last-come-first-serve policy with packet replacement (LCFS-PR). The scheme is motivated by the fact that maximum-throughput scheduling may reduce the activation probability of links with poor channel conditions, which results in poor AoI performance. Specifically, we derive the expression of the overall average AoI and throughput of the network under spatio-temporally interfering queue dynamics with a mean-field assumption. Moreover, a neural network structure is proposed to learn the mapping from geographic location to the optimal scheduling parameters under a stationary randomized policy, so that once the neural network is trained, scheduling decisions can be made without estimating the channel state information (CSI). To overcome the problem that the implicit loss function cannot be back-propagated, we derive a numerical solution for its gradient. Finally, numerical results reveal that the performance of the deep learning approach is close to that of a locally optimal algorithm with higher computational complexity. The trade-off curve between AoI and throughput is also obtained, showing that the AoI tends to infinity as throughput is maximized.
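
As an illustration of the learning component, the following PyTorch sketch defines a small network that maps a D2D link's geometric features to an activation probability for a stationary randomized policy. The feature choice and layer sizes are assumptions; the paper's numerically derived gradient for the implicit AoI/throughput loss is not reproduced here.

```python
# A minimal policy network: link geometry in, scheduling probability out.
import torch
import torch.nn as nn

class SchedulingPolicyNet(nn.Module):
    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # activation probability in (0, 1)
        )

    def forward(self, link_features):
        # link_features: e.g. [link length, local transmitter density, mean interferer distance]
        return self.net(link_features)

policy = SchedulingPolicyNet()
features = torch.tensor([[35.0, 0.02, 120.0]])    # assumed feature values
print(policy(features))                            # per-link scheduling probability
```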

This paper considers the design of optimal resource allocation policies in wireless communication systems which are generically modeled as a functional optimization problem with stochastic constraints. These optimization problems have the structure of a learning problem in which the statistical loss appears as a constraint, motivating the development of learning methodologies to attempt their solution. To handle stochastic constraints, training is undertaken in the dual domain. It is shown that this can be done with small loss of optimality when using near-universal learning parameterizations. In particular, since deep neural networks (DNN) are near-universal their use is advocated and explored. DNNs are trained here with a model-free primal-dual method that simultaneously learns a DNN parametrization of the resource allocation policy and optimizes the primal and dual variables. Numerical simulations demonstrate the strong performance of the proposed approach on a number of common wireless resource allocation problems.
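
The snippet below is a schematic model-free primal-dual loop of the kind described above, written in PyTorch: a DNN parameterizes a power-allocation policy, the primal step ascends the Lagrangian, and the dual variable is updated with the average constraint violation. The utility, channel sampling, and step sizes are illustrative assumptions rather than the paper's exact formulation.

```python
# Schematic primal-dual training of a DNN resource-allocation policy with an
# average-power constraint. All numerical choices are placeholders.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 4), nn.Softplus())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
dual = torch.zeros(1)                    # multiplier for the average-power constraint
p_max, dual_lr = 4.0, 1e-2               # assumed power budget and dual step size

for step in range(1000):
    h = torch.rand(64, 4)                # sampled channel states (model-free: samples only)
    p = policy(h)                        # allocated powers per user
    utility = torch.log1p(h * p).sum(dim=1).mean()     # assumed sum-rate-like utility
    constraint = p.sum(dim=1).mean() - p_max           # average total power minus budget

    lagrangian = utility - dual.item() * constraint    # primal ascent on the Lagrangian
    opt.zero_grad()
    (-lagrangian).backward()
    opt.step()

    with torch.no_grad():                # dual (sub)gradient ascent, projected to >= 0
        dual = torch.clamp(dual + dual_lr * constraint.detach(), min=0.0)
```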

Federated learning has been explored as a promising solution for training at the edge, where end devices collaborate to train models without sharing data with other entities. Since the execution of these learning models occurs at the edge, where resources are limited, new solutions must be developed. In this paper, we describe recent work on resource management at the edge and explore the challenges and future directions for running federated learning there. Several problems of this management, such as resource discovery, deployment, load balancing, migration, and energy efficiency, are discussed in the paper.

The rapid development of virtual network architectures makes wide deployment of wireless networks possible. With the growing presence of the artificial intelligence (AI) industry in daily life, efficient resource allocation in wireless networks has become a pressing problem, especially when network users request wireless network resources from different management domains. From the perspective of virtual network embedding (VNE), this paper designs and implements a multi-objective optimization VNE algorithm for wireless network resource allocation. Resource allocation in a virtual network is essentially the problem of allocating underlying resources to virtual network requests (VNRs). According to the proposed objective formula, we jointly optimize mapping cost, network delay, and VNR acceptance rate. VNE is completed in two stages: node mapping and link mapping. In the experiment and simulation stage, the cross-domain VNE algorithm proposed in this paper is compared with other VNE algorithms and performs best on the above three indicators, which demonstrates its effectiveness for wireless network resource allocation.
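
For context, the sketch below implements a generic greedy two-stage embedding with networkx: node mapping by residual CPU, then link mapping onto bandwidth-feasible shortest paths. It is a common VNE baseline shown only to make the two-stage structure concrete; it is not the multi-objective algorithm proposed in the paper.

```python
# Greedy two-stage VNE baseline (rollback on failure omitted for brevity).
import networkx as nx

def embed_request(substrate, vnr):
    """substrate/vnr: nx.Graph with 'cpu' on nodes and 'bw' on edges."""
    node_map, used = {}, set()
    # Stage 1: map virtual nodes (largest CPU demand first) to the freest substrate nodes.
    for v, vdata in sorted(vnr.nodes(data=True), key=lambda x: -x[1]["cpu"]):
        candidates = [n for n, d in substrate.nodes(data=True)
                      if d["cpu"] >= vdata["cpu"] and n not in used]
        if not candidates:
            return None                                   # node mapping failed
        best = max(candidates, key=lambda n: substrate.nodes[n]["cpu"])
        node_map[v] = best
        used.add(best)
        substrate.nodes[best]["cpu"] -= vdata["cpu"]

    # Stage 2: map each virtual link onto a bandwidth-feasible shortest substrate path.
    link_map = {}
    for u, v, edata in vnr.edges(data=True):
        feasible = nx.subgraph_view(
            substrate,
            filter_edge=lambda a, b: substrate[a][b]["bw"] >= edata["bw"])
        try:
            path = nx.shortest_path(feasible, node_map[u], node_map[v])
        except nx.NetworkXNoPath:
            return None                                   # link mapping failed
        for a, b in zip(path, path[1:]):
            substrate[a][b]["bw"] -= edata["bw"]
        link_map[(u, v)] = path
    return node_map, link_map
```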

The traditional Internet has encountered a bottleneck in allocating network resources for emerging technology needs. Network virtualization (NV) is regarded as a future network architecture, and the virtual network embedding (VNE) algorithms that support it show great potential for solving resource allocation problems. Combined with efficient machine learning (ML) algorithms, a neural network model close to the substrate network environment can be constructed to train a reinforcement learning agent. This paper proposes a two-stage VNE algorithm based on deep reinforcement learning (DRL), TS-DRL-VNE, to address the tendency of existing heuristic algorithms to converge to local optima. To address the problem that existing ML-based VNE algorithms often ignore the importance of substrate network representation and training mode, a DRL VNE algorithm based on a full attribute matrix (FAM-DRL-VNE) is proposed. To address the problem that existing VNE algorithms often ignore underlying resource changes between virtual network requests, a DRL VNE algorithm based on matrix perturbation theory (MPT-DRL-VNE) is proposed. Experimental results show that the above algorithms outperform other algorithms.
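
As a small illustration of the substrate-representation idea, the snippet below assembles a per-node attribute matrix that could serve as a DRL state. The chosen attributes (CPU, degree, adjacent bandwidth) are common VNE features and are assumptions here, not necessarily the paper's full attribute matrix.

```python
# Build a normalized node-attribute matrix from a substrate graph.
import numpy as np
import networkx as nx

def attribute_matrix(substrate):
    """substrate: nx.Graph with 'cpu' on nodes and 'bw' on edges."""
    rows = []
    for n in substrate.nodes:
        cpu = substrate.nodes[n]["cpu"]
        degree = substrate.degree[n]
        adj_bw = sum(substrate[n][m]["bw"] for m in substrate[n])
        rows.append([cpu, degree, adj_bw])
    m = np.array(rows, dtype=float)
    return m / m.max(axis=0)          # column-normalize so features share a scale
```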

The space-air-ground integrated network (SAGIN) is a new wireless networking paradigm. Effective management of SAGIN resources is a prerequisite for high-reliability communication. However, the storage capacity of the space-air network segment is extremely limited, and the aerial servers do not have sufficient storage resources to centrally accommodate the information uploaded by each edge server, which raises the problem of how to coordinate the storage resources of SAGIN. This paper proposes a SAGIN storage resource management algorithm based on distributed deep reinforcement learning (DRL). The resource management process is modeled as a Markov decision process. In each edge physical domain, we extract the network attributes represented by storage resources for the agent to build a training environment, thereby realizing distributed training. In addition, we propose a SAGIN resource management framework based on distributed DRL. Simulation results show that the agent trains effectively: compared with other algorithms, the resource allocation revenue and user request acceptance rate of the proposed algorithm are increased by about 18.15% and 8.35%, respectively. Moreover, the proposed algorithm is flexible in dealing with changes in resource conditions.
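
The skeleton below sketches, in a gym-style interface, the kind of per-domain training environment that a distributed DRL framework might build from each edge domain's storage attributes. The state and reward definitions are illustrative assumptions.

```python
# Minimal per-domain storage-allocation environment for DRL training.
import numpy as np

class EdgeStorageEnv:
    def __init__(self, node_storage):
        self.capacity = np.asarray(node_storage, dtype=float)  # per-node storage capacity
        self.free = self.capacity.copy()

    def reset(self):
        self.free = self.capacity.copy()
        return self._state()

    def _state(self):
        # Network attributes exposed to the agent: residual storage fraction per node.
        return self.free / self.capacity

    def step(self, action, demand):
        """action: index of the node chosen to host a storage request of size `demand`."""
        if self.free[action] >= demand:
            self.free[action] -= demand
            reward = demand                 # revenue proportional to accepted storage
        else:
            reward = -1.0                   # penalty for an infeasible placement
        done = bool((self.free < 1e-9).all())
        return self._state(), reward, done, {}
```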

The development of Intelligent Cyber-Physical Systems (ICPSs) in virtual network environments faces severe challenges. On the one hand, Internet of Things (IoT) deployments built on ICPSs require substantial and well-managed network resources. On the other hand, ICPSs face severe network security problems. Integrating ICPSs with network virtualization (NV) can provide more efficient network resource support and stronger security guarantees for IoT users. Motivated by these two problems, we propose a virtual network embedding (VNE) algorithm with computing, storage, and security constraints to ensure the rationality and security of resource allocation in ICPSs. In particular, we use a reinforcement learning (RL) method to improve algorithm performance. We extract the important attribute characteristics of the underlying network to form the training environment of the RL agent. Through training, the agent derives the optimal node embedding strategy, so as to meet the requirements of ICPSs for resource management and security. The embedding of virtual links is based on a breadth-first search (BFS) strategy. The result is a comprehensive two-stage RL-VNE algorithm that jointly considers computing, storage, and security constraints. Finally, we design a large number of simulation experiments based on typical VNE performance indicators; the results illustrate the effectiveness of the algorithm for ICPS applications.
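
To make the link-embedding stage concrete, the sketch below performs a breadth-first search for a substrate path that meets a bandwidth demand and a node security level, consistent with the BFS strategy mentioned above. The attribute names and the toy topology are assumptions for illustration.

```python
# BFS virtual-link embedding: return the first hop-minimal substrate path whose
# links have enough bandwidth and whose nodes meet the requested security level.
from collections import deque

def bfs_embed_link(adj, node_sec, src, dst, bw_demand, sec_level):
    """adj: {node: [(neighbor, link_bw), ...]}, node_sec: {node: security level}."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        last = path[-1]
        if last == dst:
            return path                                   # feasible path found
        for nxt, bw in adj[last]:
            if nxt in visited or bw < bw_demand or node_sec[nxt] < sec_level:
                continue
            visited.add(nxt)
            queue.append(path + [nxt])
    return None                                           # no feasible path exists

# Toy substrate: node "c" is too insecure for a level-2 request, so no path is found.
adj = {"a": [("b", 10), ("c", 3)], "b": [("a", 10), ("c", 8)], "c": [("a", 3), ("b", 8)]}
sec = {"a": 2, "b": 2, "c": 1}
print(bfs_embed_link(adj, sec, "a", "c", bw_demand=5, sec_level=2))  # -> None
```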

The training of deep residual neural networks (ResNets) with backpropagation has a memory cost that increases linearly with respect to the depth of the network. A way to circumvent this issue is to use reversible architectures. In this paper, we propose to change the forward rule of a ResNet by adding a momentum term. The resulting networks, momentum residual neural networks (Momentum ResNets), are invertible. Unlike previous invertible architectures, they can be used as a drop-in replacement for any existing ResNet block. We show that Momentum ResNets can be interpreted in the infinitesimal step size regime as second-order ordinary differential equations (ODEs) and exactly characterize how adding momentum progressively increases the representation capabilities of Momentum ResNets. Our analysis reveals that Momentum ResNets can learn any linear mapping up to a multiplicative factor, while ResNets cannot. In a learning to optimize setting, where convergence to a fixed point is required, we show theoretically and empirically that our method succeeds while existing invertible architectures fail. We show on CIFAR and ImageNet that Momentum ResNets have the same accuracy as ResNets, while having a much smaller memory footprint, and show that pre-trained Momentum ResNets are promising for fine-tuning models.
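
The sketch below implements the momentum forward rule described above and its algebraic inverse in PyTorch, which is what allows activations to be recomputed instead of stored. The residual function and the momentum coefficient are illustrative choices.

```python
# Momentum ResNet block: v <- gamma*v + (1-gamma)*f(x), x <- x + v, with exact inverse.
import torch
import torch.nn as nn

class MomentumBlock(nn.Module):
    def __init__(self, dim, gamma=0.9):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.gamma = gamma

    def forward(self, x, v):
        v = self.gamma * v + (1 - self.gamma) * self.f(x)
        x = x + v
        return x, v

    def inverse(self, x, v):
        # Exactly undo the forward rule, so activations can be recomputed on the fly.
        x_prev = x - v
        v_prev = (v - (1 - self.gamma) * self.f(x_prev)) / self.gamma
        return x_prev, v_prev

block = MomentumBlock(8)
x0, v0 = torch.randn(2, 8), torch.zeros(2, 8)
x1, v1 = block(x0, v0)
x0_rec, v0_rec = block.inverse(x1, v1)
print(torch.allclose(x0, x0_rec, atol=1e-6), torch.allclose(v0, v0_rec, atol=1e-5))
```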
