
Communication protocols are the languages used by network nodes. Before a user equipment (UE) can exchange data with a base station (BS), it must first negotiate the conditions and parameters for that transmission. This negotiation is supported by signaling messages at all layers of the protocol stack. Each year, the mobile communications industry defines and standardizes these messages, which are designed by humans during lengthy technical (and often political) debates. Following this standardization effort, the development phase begins, wherein the industry interprets and implements the resulting standards. But is this massive development undertaking the only way to implement a given protocol? We address the question of whether radios can learn a predefined target protocol as an intermediate step towards evolving their own. Furthermore, we train cellular radios to develop an emergent channel access policy that performs optimally under the constraints of the target protocol. We show that multi-agent reinforcement learning (MARL) and learning-to-communicate (L2C) techniques achieve this goal with gains over expert systems. Finally, we provide insight into the transferability of these results to scenarios never seen during training.
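As a minimal illustration of the MARL ingredient only (not the paper's framework), the sketch below has two independent Q-learning agents share a slotted collision channel and learn when to transmit; the state, reward shaping, and all constants are illustrative assumptions.

```python
# A minimal sketch, assuming a slotted collision channel: two independent
# Q-learning agents learn a channel access policy. Not the paper's system.
import random

N_AGENTS, N_EPISODES, EPS, ALPHA, GAMMA = 2, 20000, 0.1, 0.1, 0.9
# State: own buffer occupancy (0 empty, 1 full); actions: 0 wait, 1 send.
q = [[[0.0, 0.0] for _ in range(2)] for _ in range(N_AGENTS)]

for _ in range(N_EPISODES):
    buf = [1] * N_AGENTS                       # each agent holds one packet
    for _ in range(4):                         # a short frame of 4 slots
        states = list(buf)
        acts = [0] * N_AGENTS
        for i in range(N_AGENTS):
            if buf[i]:                         # empty buffers never send
                if random.random() < EPS:
                    acts[i] = random.randrange(2)
                else:
                    acts[i] = int(q[i][1][1] > q[i][1][0])
        senders = [i for i in range(N_AGENTS) if acts[i]]
        if len(senders) == 1:
            buf[senders[0]] = 0                # collision-free delivery
        for i in range(N_AGENTS):
            # Reward: +1 for a successful send, -1 for a collision, else 0.
            r = 1.0 if (acts[i] and len(senders) == 1) else \
                (-1.0 if acts[i] else 0.0)
            s, a, s2 = states[i], acts[i], buf[i]
            q[i][s][a] += ALPHA * (r + GAMMA * max(q[i][s2]) - q[i][s][a])

print("learned send policies:", [int(q[i][1][1] > q[i][1][0]) for i in range(N_AGENTS)])
```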

Related content

We present a framework for operating a self-adaptive RIS inside a fading, rich-scattering wireless environment. We model the rich-scattering wireless channel as being doubly parametrized by (i) the RIS and (ii) dynamic perturbers (moving objects, etc.). Within each coherence time, the self-adaptive RIS first estimates the status of the dynamic perturbers (e.g., their orientations and locations) based on measurements of an auxiliary wireless channel. Second, using a learned surrogate forward model of the mapping from RIS configuration and perturber status to the wireless channel, it computes an optimized RIS configuration that achieves a desired functionality. We demonstrate our technique using a physics-based end-to-end model of RIS-parametrized communication with adjustable fading (PhysFad) for the example objective of maximizing the received signal strength indicator. Our results present a route toward the convergence of RIS-empowered localization and sensing with RIS-empowered channel shaping, beyond the simple case of operation in free space without fading.
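To make the second step concrete, here is a minimal sketch of surrogate-driven RIS optimization by random search; the surrogate function, the element count, and the perturber features below are stand-in assumptions, not PhysFad or the authors' model.

```python
# Illustrative sketch: given a learned surrogate f(ris_config, perturber_status)
# -> RSSI, pick the RIS configuration maximizing RSSI by random search.
import numpy as np

rng = np.random.default_rng(0)
N_ELEMENTS = 64                       # number of 1-bit RIS elements (assumed)
W = rng.normal(size=N_ELEMENTS)       # fixed weights of a toy "surrogate"

def surrogate_rssi(config, perturber_status):
    # Stand-in for a trained forward model of the rich-scattering channel.
    return float(np.tanh(config @ W + 0.1 * perturber_status.sum()))

def optimize_ris(perturber_status, n_candidates=2000):
    best_cfg, best_val = None, -np.inf
    for _ in range(n_candidates):
        cfg = rng.integers(0, 2, N_ELEMENTS)   # random 1-bit configuration
        val = surrogate_rssi(cfg, perturber_status)
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

status = rng.normal(size=3)           # estimated perturber orientation/location
cfg, rssi = optimize_ris(status)
print("best surrogate RSSI:", rssi)
```

In practice a differentiable surrogate would allow gradient-based configuration search instead of random sampling; random search just keeps the sketch self-contained.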

Machine learning algorithms have recently been considered for many tasks in wireless communications. Previously, we proposed the use of a deep fully convolutional neural network (CNN) for receiver processing and showed that it provides considerable performance gains. In this study, we focus on machine learning algorithms for the transmitter. In particular, we consider beamforming and propose a CNN which, for a given uplink channel estimate as input, outputs downlink channel information to be used for beamforming. The CNN is trained in a supervised manner considering both uplink and downlink transmissions, with a loss function based on UE receiver performance. The main task of the neural network is to predict the channel evolution between the uplink and downlink slots, but it can also learn to handle inefficiencies and errors in the whole chain, including the actual beamforming phase. Numerical experiments demonstrate the improved beamforming performance.
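The sketch below is a toy version of this idea, with assumed tensor shapes and a matched-filter-gain loss standing in for the paper's UE-receiver-performance loss; it is not the authors' architecture.

```python
# A toy sketch, assuming (real, imag) channel tensors of shape
# (batch, 2, antennas, subcarriers): a small CNN maps an uplink channel
# estimate to a downlink channel prediction, trained with a
# beamforming-quality loss instead of plain MSE.
import torch
import torch.nn as nn

class UplinkToDownlinkCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, h_ul):
        return self.net(h_ul)

def beamforming_loss(h_dl_pred, h_dl_true):
    # Negative matched-filter gain: rewards predictions aligned with the
    # true downlink channel (a stand-in for a receiver-performance loss).
    pred = torch.view_as_complex(h_dl_pred.permute(0, 2, 3, 1).contiguous())
    true = torch.view_as_complex(h_dl_true.permute(0, 2, 3, 1).contiguous())
    w = pred / (pred.abs() + 1e-9)               # unit-modulus beamformer
    gain = (w.conj() * true).sum(dim=1).abs() ** 2
    return -gain.mean()

model = UplinkToDownlinkCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
h_ul = torch.randn(8, 2, 16, 64)                 # batch of uplink estimates
h_dl = torch.randn(8, 2, 16, 64)                 # supervised downlink targets
loss = beamforming_loss(model(h_ul), h_dl)
loss.backward(); opt.step()
print(float(loss))
```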

In this work, we consider a remote monitoring scenario in which multiple sensors share a wireless channel to deliver their status updates to a process monitor via an access point (AP). Moreover, we consider that the sensors randomly arrive at and depart from the network as they become active and inactive. The goal of the sensors is to devise a medium access strategy that collectively minimizes the long-term mean network age of information (AoI) of their respective processes at the remote monitor. For this purpose, we propose specific modifications to ALOHA-QT, a distributed medium access algorithm that employs a policy tree (PT) and reinforcement learning (RL) to achieve high throughput. We provide an upper bound on the mean network AoI for the proposed algorithm, along with pointers for selecting its key parameter. The results reveal that the proposed algorithm reduces mean network AoI by more than 50 percent relative to state-of-the-art stationary randomized policies while successfully adjusting to a changing number of active users in the network. The algorithm needs less memory and computation than ALOHA-QT while performing better in terms of AoI.
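For intuition on the metric and the baseline, the sketch below simulates mean network AoI under a stationary randomized policy (each sensor transmits with probability p per slot), the kind of policy the proposed algorithm is compared against; it is not ALOHA-QT itself, and the one-slot delivery delay is ignored for simplicity.

```python
# Minimal AoI simulation, assuming a collision channel: an update is
# delivered only when exactly one sensor transmits in a slot.
import random

def mean_network_aoi(n_sensors=10, p=0.1, n_slots=50_000, seed=1):
    random.seed(seed)
    age = [0] * n_sensors      # AoI of each sensor's process at the monitor
    total = 0.0
    for _ in range(n_slots):
        senders = [i for i in range(n_sensors) if random.random() < p]
        for i in range(n_sensors):
            # Age resets on a collision-free delivery, else grows by one slot.
            age[i] = 0 if senders == [i] else age[i] + 1
        total += sum(age) / n_sensors
    return total / n_slots

# p = 1/N is the throughput-optimal symmetric choice for slotted ALOHA.
print(mean_network_aoi(p=0.1))
```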

In scalable machine learning systems, model training is often parallelized over multiple nodes that run without tight synchronization. Most analysis results for the related asynchronous algorithms use an upper bound on the information delays in the system to determine learning rates. Not only are such bounds hard to obtain in advance, but they also result in unnecessarily slow convergence. In this paper, we show that it is possible to use learning rates that depend on the actual time-varying delays in the system. We develop general convergence results for delay-adaptive asynchronous iterations and specialize these to proximal incremental gradient descent and block-coordinate descent algorithms. For each of these methods, we demonstrate how delays can be measured on-line, present delay-adaptive step-size policies, and illustrate their theoretical and practical advantages over the state-of-the-art.
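The core idea can be shown in a few lines: scale each step by the measured staleness of the gradient being applied rather than by a worst-case delay bound. The specific rule eta_0 / (1 + delay) below is an assumed example of a delay-adaptive policy, and the delays are simulated; this is a sketch of the mechanism, not the paper's algorithms.

```python
# Sketch of delay-adaptive step sizes in a simulated asynchronous
# incremental gradient loop on a least-squares toy problem.
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 5)), rng.normal(size=50)
x = np.zeros(5)
ETA0 = 0.01

versions = []                  # iterate history, to evaluate stale gradients
for t in range(2000):
    versions.append(x.copy())
    delay = int(rng.integers(0, 10))           # measured, time-varying delay
    x_stale = versions[max(0, t - delay)]      # gradient uses an old iterate
    g = A.T @ (A @ x_stale - b) / len(b)
    eta = ETA0 / (1 + delay)                   # delay-adaptive step size
    x -= eta * g

print("residual:", np.linalg.norm(A @ x - b))
```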

Federated edge learning (FEEL) is a promising distributed machine learning (ML) framework to drive edge intelligence applications. However, due to dynamic wireless environments and the resource limitations of edge devices, communication becomes a major bottleneck. In this work, we propose time-correlated sparsification with hybrid aggregation (TCS-H) for communication-efficient FEEL, which jointly exploits the power of model compression and over-the-air computation. By exploiting the temporal correlations among model parameters, we construct a global sparsification mask, which is identical across devices and thus enables efficient model aggregation over-the-air. Each device further constructs a local sparse vector to explore its own important parameters, which are aggregated via digital communication with orthogonal multiple access. We further design device scheduling and power allocation algorithms for TCS-H. Experimental results show that, under limited communication resources, TCS-H can achieve significantly higher accuracy compared to conventional top-K sparsification with orthogonal model aggregation, with both i.i.d. and non-i.i.d. data distributions.
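The sketch below illustrates the two-mask structure with assumed details: a shared global mask derived from the previous round's global update (a proxy for temporal correlation) for the analog over-the-air part, plus a per-device local top-k of the remainder for the digital part. Scheduling, power allocation, and channel effects are omitted.

```python
# Illustrative TCS-H-style sparsification; mask construction is an assumption.
import numpy as np

def global_mask(prev_global_update, k):
    # Parameters that changed most last round are assumed to stay important,
    # so every device sparsifies with the same index set.
    idx = np.argsort(-np.abs(prev_global_update))[:k]
    mask = np.zeros_like(prev_global_update, dtype=bool)
    mask[idx] = True
    return mask

def local_topk(residual, k):
    idx = np.argsort(-np.abs(residual))[:k]
    out = np.zeros_like(residual)
    out[idx] = residual[idx]
    return out

rng = np.random.default_rng(0)
d, k_g, k_l = 1000, 100, 20
prev_update = rng.normal(size=d)
g_mask = global_mask(prev_update, k_g)

device_updates = [rng.normal(size=d) for _ in range(5)]
# Analog over-the-air part: entries under the shared mask sum "in the air".
ota_sum = np.sum([u * g_mask for u in device_updates], axis=0)
# Digital part: each device's own top-k of what the global mask missed.
digital = [local_topk(u * ~g_mask, k_l) for u in device_updates]
aggregate = ota_sum + np.sum(digital, axis=0)
print("nonzeros in aggregate:", np.count_nonzero(aggregate))
```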

Achieving low end-to-end latency for computation offloading in Mobile Edge Computing (MEC) systems remains a critical design problem. This is because offloading computational tasks via MEC servers entails the use of uplink (UL) and downlink (DL) radio links that are usually assumed to be coupled to a single base station (BS). For heterogeneous networks, however, a new architectural paradigm in which the UL and DL are not associated with the same BS has been proposed and shown to improve network throughput thanks to better UL performance. Motivated by such gains, and using standard results from stochastic geometry, we formulate the offloading latency of an MEC-based scheme with decoupled UL/DL association (decoupled access) and compare its performance to the conventional coupled-access scheme. Despite the backhaul delay required for communication between the two serving BSs in the UL and DL, the offloading scheme with decoupled access still provides considerably lower offloading latency than the conventional offloading scheme with coupled access.
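A back-of-the-envelope comparison shows why this can hold even with a backhaul penalty; all rates, sizes, and delays below are assumed numbers chosen for illustration, not results from the paper.

```python
# Offloading latency = UL transfer + execution + (backhaul) + DL transfer.
def offload_latency(bits, r_ul, r_dl, t_exec, t_backhaul=0.0):
    return bits / r_ul + t_exec + t_backhaul + bits / r_dl

TASK_BITS = 2e6          # task and result size in bits (assumed symmetric)
T_EXEC = 0.010           # MEC execution time, seconds (assumed)

# Coupled: both links to the macro BS; UL rate suffers for a cell-edge UE.
coupled = offload_latency(TASK_BITS, r_ul=5e6, r_dl=50e6, t_exec=T_EXEC)
# Decoupled: UL to a nearby small cell (faster), DL from the macro BS,
# plus a backhaul delay between the two serving BSs.
decoupled = offload_latency(TASK_BITS, r_ul=20e6, r_dl=50e6,
                            t_exec=T_EXEC, t_backhaul=0.005)
print(f"coupled: {coupled*1e3:.1f} ms, decoupled: {decoupled*1e3:.1f} ms")
```

With these numbers the UL gain dwarfs the 5 ms backhaul hop, which is the qualitative effect the stochastic-geometry analysis quantifies.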

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
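"Tuning to criticality" can be seen numerically in a few lines. The sketch below, with toy widths and depths, propagates a signal through a random ReLU MLP: at the critical initialization variance sigma_w^2 = 2 the mean squared preactivation stays roughly constant with depth, while below or above it the signal vanishes or explodes.

```python
# Signal propagation through a deep random ReLU MLP at different
# initialization variances; sizes are illustrative.
import numpy as np

def preactivation_norms(sigma_w2, width=500, depth=50, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.normal(size=width)
    norms = []
    for _ in range(depth):
        W = rng.normal(scale=np.sqrt(sigma_w2 / width), size=(width, width))
        z = W @ np.maximum(z, 0.0)       # ReLU activation
        norms.append(np.mean(z ** 2))
    return norms

for s2 in (1.5, 2.0, 2.5):
    print(f"sigma_w^2={s2}: layer-{50} mean(z^2) = {preactivation_norms(s2)[-1]:.3e}")
```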

Multi-task learning (MTL) aims to improve the generalization of several related tasks by learning them jointly. As a comparison, in addition to the joint training scheme, modern meta-learning allows unseen tasks with limited labels during the test phase, in the hope of fast adaptation over them. Despite the subtle difference between MTL and meta-learning in the problem formulation, both learning paradigms share the same insight that the shared structure between existing training tasks could lead to better generalization and adaptation. In this paper, we take one important step further to understand the close connection between these two learning paradigms, through both theoretical analysis and empirical investigation. Theoretically, we first demonstrate that MTL shares the same optimization formulation with a class of gradient-based meta-learning (GBML) algorithms. We then prove that for over-parameterized neural networks with sufficient depth, the learned predictive functions of MTL and GBML are close. In particular, this result implies that the predictions given by these two models are similar over the same unseen task. Empirically, we corroborate our theoretical findings by showing that, with proper implementation, MTL is competitive against state-of-the-art GBML algorithms on a set of few-shot image classification benchmarks. Since existing GBML algorithms often involve costly second-order bi-level optimization, our first-order MTL method is an order of magnitude faster on large-scale datasets such as mini-ImageNet. We believe this work could help bridge the gap between these two learning paradigms, and provide a computationally efficient alternative to GBML that also supports fast task adaptation.
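The sketch below is a schematic of the first-order MTL recipe, with an assumed architecture: a shared encoder trained jointly over tasks with per-task heads, and adaptation to an unseen few-shot task by fitting a fresh head on the frozen encoder. It is not the paper's exact setup and uses random stand-in data.

```python
# Joint multi-task training of a shared encoder, then head-only adaptation.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
heads = nn.ModuleList([nn.Linear(64, 5) for _ in range(4)])  # 4 training tasks
opt = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()),
                       lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                       # joint MTL training loop
    opt.zero_grad()
    loss = 0.0
    for head in heads:
        x = torch.randn(16, 32)               # stand-in for a task minibatch
        y = torch.randint(0, 5, (16,))
        loss = loss + loss_fn(head(encoder(x)), y)
    loss.backward(); opt.step()

# Fast adaptation to an unseen few-shot task: freeze the encoder, fit a head.
new_head = nn.Linear(64, 5)
head_opt = torch.optim.SGD(new_head.parameters(), lr=1e-2)
x_s, y_s = torch.randn(25, 32), torch.randint(0, 5, (25,))   # support set
for _ in range(50):
    head_opt.zero_grad()
    l = loss_fn(new_head(encoder(x_s).detach()), y_s)
    l.backward(); head_opt.step()
```

Note the contrast with GBML: there is no inner-loop differentiation, so no second-order bi-level optimization is needed.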

Policy gradient methods are widely used in reinforcement learning to search for better policies in a parameterized policy space. They perform gradient search in the policy space and are known to converge slowly. Nesterov developed an accelerated gradient method for convex optimization problems, which has recently been extended to non-convex and stochastic optimization. We apply Nesterov's acceleration to policy gradient search in the well-known actor-critic algorithm and prove convergence using the ODE method. We test the algorithm on a scheduling problem in which an incoming job is assigned to one of four queues based on the queue lengths. Experimental results show that the algorithm with Nesterov's acceleration performs significantly better than the one without it. To the best of our knowledge, this is the first time Nesterov's acceleration has been used with an actor-critic algorithm.
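The accelerated actor update can be sketched in isolation. Below, `policy_grad` is a hypothetical stand-in for the critic-driven policy gradient estimate (here the gradient of a quadratic surrogate objective plus noise); the point is the look-ahead evaluation characteristic of Nesterov's method, not the full actor-critic algorithm.

```python
# Nesterov-accelerated gradient ascent on the actor parameters.
import numpy as np

rng = np.random.default_rng(0)

def policy_grad(theta):
    # Hypothetical stand-in: ascent direction for J(theta) = -||theta - 1||^2 / 2.
    return -(theta - 1.0) + 0.1 * rng.normal(size=theta.shape)

theta = np.zeros(4)          # actor parameters
v = np.zeros_like(theta)     # momentum buffer
ETA, MU = 0.05, 0.9

for _ in range(200):
    # Nesterov: evaluate the gradient at the look-ahead point theta + MU * v.
    g = policy_grad(theta + MU * v)
    v = MU * v + ETA * g
    theta = theta + v        # gradient *ascent* on expected return

print("actor parameters:", np.round(theta, 3))
```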

Querying graph-structured data is a fundamental operation that enables important applications, including knowledge graph search, social network analysis, and cyber-network security. However, the growing size of real-world data graphs poses severe challenges for graph databases to meet the response-time requirements of these applications. Planning the computational steps of query processing, known as query planning, is central to addressing these challenges. In this paper, we study the problem of learning to speed up query planning in graph databases, with the goal of improving the computational efficiency of query processing via training queries. We present a Learning to Plan (L2P) framework that is applicable to a large class of query reasoners that follow the Threshold Algorithm (TA) approach. First, we define a generic search space over candidate query plans and identify target search trajectories (query plans) corresponding to the training queries by performing an expensive search. Subsequently, we learn greedy search control knowledge to imitate the search behavior of the target query plans. We provide a concrete instantiation of our L2P framework for STAR, a state-of-the-art graph query reasoner. Our experiments on benchmark knowledge graphs including DBpedia, YAGO, and Freebase show that, using the query plans generated by the learned search control knowledge, we can significantly improve the speed of STAR with negligible loss in accuracy.
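A toy sketch of the greedy-search-control idea follows; every data structure here is hypothetical (the scorer weights, the plan-step features, and the step names), and the scorer would in reality be trained by imitating the expensive target plans.

```python
# Greedy plan construction guided by a learned (here: random) linear scorer.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=6)                    # weights of a learned linear scorer

def features(plan, step):
    # Hypothetical plan/step features (selectivity, index availability, ...).
    return rng.normal(size=6)

def greedy_plan(candidate_steps, horizon=4):
    plan = []
    remaining = list(candidate_steps)
    for _ in range(min(horizon, len(remaining))):
        scores = [W @ features(plan, s) for s in remaining]
        best = int(np.argmax(scores))
        plan.append(remaining.pop(best))  # greedily extend the plan
    return plan

print(greedy_plan(["scan_A", "join_AB", "filter_x", "scan_B"]))
```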
