
We consider the analysis and design of distributed wireless networks wherein remote radio heads (RRHs) coordinate transmissions to serve multiple users on the same resource block (RB). Specifically, we analyze two possible multiple-input multiple-output wireless fronthaul solutions: multicast and zero-forcing (ZF) beamforming. We develop a statistical model for the fronthaul rate and, coupling it with an analysis of the user access rate, optimize the placement of the RRHs. This model allows us to formulate the location optimization problem with a statistical constraint on fronthaul outage. Our results are cautionary, showing that the fronthaul requires considerable bandwidth to enable joint service to users. This requirement can be relaxed by serving a small number of users on the same RB. Additionally, we show that, for a fixed total number of antennas, it is prudent to concentrate the antennas on a few RRHs under the multicast fronthaul, whereas it is better to distribute them across more RRHs under the ZF beamforming fronthaul. For the parameters chosen, a ZF beamforming fronthaul improves the typical access rate by approximately 8% compared to multicast. Crucially, our work quantifies the effect of these fronthaul solutions and provides an effective tool for the design of distributed networks.
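
To make the ZF beamforming option concrete, the following is a minimal sketch of a zero-forcing precoder; the antenna and user counts (M, K) are illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def zf_precoder(H):
    """Zero-forcing precoder: pseudo-inverse of the channel, with columns
    normalized to unit power. H has shape (K users, M transmit antennas)."""
    W = np.linalg.pinv(H)                    # (M, K)
    return W / np.linalg.norm(W, axis=0)     # one unit-power beam per user

M, K = 8, 3                                  # illustrative values only
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
W = zf_precoder(H)
print(np.round(np.abs(H @ W), 3))            # ~diagonal: interference nulled
```

With K ≤ M and a full-rank channel, the effective matrix H W is diagonal, which is what lets several users share the same RB without mutual interference.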

Related Content

Modern wireless cellular networks use massive multiple-input multiple-output (MIMO) technology, in which a base station equipped with an antenna array simultaneously serves multiple mobile devices, each of which may itself have multiple antennas. Various precoding and detection techniques allow each user to receive the signal intended for it from the base station. Regularized Zero-Forcing (RZF) is an important class of linear precoding. In this work, we propose Adaptive RZF (ARZF), which uses a special regularization matrix with different coefficients for each layer of multi-antenna users. These regularization coefficients are given by explicit formulas based on singular value decompositions (SVDs) of the user channel matrices. We study the optimization problem solved by the proposed algorithm and its connection to other possible problem statements. We also compare the proposed algorithm with state-of-the-art linear precoding algorithms in simulations with the Quadriga channel model. The proposed approach provides a significant increase in quality at the same computation time as the reference methods.
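
For reference, the standard RZF precoder that ARZF generalizes can be sketched as below; ARZF would replace the scalar regularizer with per-layer coefficients derived from the SVDs of the user channel matrices, which is not reproduced here:

```python
import numpy as np

def rzf_precoder(H, alpha):
    """Regularized zero-forcing: W = H^H (H H^H + alpha*I)^{-1}.
    H has shape (K user streams, M base-station antennas); the scalar
    alpha trades interference suppression against noise enhancement."""
    K = H.shape[0]
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
```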

This paper presents an algorithm for iterative joint channel parameter (carrier phase, Doppler shift, and Doppler rate) estimation and decoding of transmissions over channels affected by Doppler shift and Doppler rate, using a distributed receiver. The algorithm is derived by applying the sum-product algorithm (SPA) to a factor graph representing the joint a posteriori distribution of the information symbols and channel parameters given the channel output. We present two methods for dealing with intractable messages of the sum-product algorithm. In the first approach, we use particle filtering with sequential importance sampling (SIS) to estimate the unknown parameters, and we also propose a method for fine-tuning the particles to improve convergence. In the second approach, we approximate our model with a random-walk phase model, followed by a phase tracking algorithm and polynomial regression to estimate the unknown parameters. We derive the weighted Bayesian Cramér-Rao bounds (WBCRBs) for joint carrier phase, Doppler shift, and Doppler rate estimation, which take into account the prior distribution of the estimated parameters and are accurate lower bounds for all considered signal-to-noise ratio (SNR) values. Numerical results for the bit error rate (BER) and the mean-square error (MSE) of parameter estimation suggest that phase tracking with the random-walk model slightly outperforms particle filtering. However, particle filtering has a lower computational cost than the random-walk-model-based method.
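
As an illustration of the SIS step, the sketch below scores particles over (phase, Doppler shift, Doppler rate) against pilot observations under a quadratic phase model; the parameterization and the pilot-aided Gaussian likelihood are assumptions for illustration, not the paper's exact message schedule:

```python
import numpy as np

def sis_weights(y, s, particles, sigma2):
    """One SIS weight update. Each particle is a tuple (phi, w, a) for the
    phase trajectory theta_k = phi + w*k + 0.5*a*k**2; y are received
    samples, s known pilot symbols, sigma2 the complex noise variance."""
    k = np.arange(len(y))
    logw = np.empty(len(particles))
    for i, (phi, w, a) in enumerate(particles):
        theta = phi + w * k + 0.5 * a * k**2
        logw[i] = -np.sum(np.abs(y - s * np.exp(1j * theta)) ** 2) / sigma2
    w_unnorm = np.exp(logw - logw.max())     # stabilized exponentiation
    return w_unnorm / w_unnorm.sum()         # normalized importance weights
```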

We consider the task of learning causal structures from data stored on multiple machines and propose a novel structure learning method, distributed annealing on regularized likelihood score (DARLS), to solve this problem. We model causal structures by a directed acyclic graph parameterized with generalized linear models, so that our method is applicable to various types of data. To obtain a high-scoring causal graph, DARLS simulates an annealing process to search over the space of topological sorts, where the optimal graphical structure compatible with a sort is found by a distributed optimization method. This distributed optimization relies on multiple rounds of communication between local and central machines to estimate the optimal structure. We establish its convergence to a global optimizer of the overall score computed on all data across local machines. To the best of our knowledge, DARLS is the first distributed method for learning causal graphs with such theoretical guarantees. In extensive simulation studies, DARLS shows competitive performance against existing methods on distributed data and achieves structure learning accuracy and test-data likelihood comparable to those of competing methods applied to data pooled across all local machines. In a real-world application modeling protein-DNA binding networks with distributed ChIP-Sequencing data, DARLS also exhibits higher predictive power than other methods, demonstrating its advantage for estimating causal networks from distributed data.
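
A minimal sketch of the annealing search over topological sorts is given below; in DARLS the `score` callable would itself be evaluated by the distributed optimization across local machines, which is only stubbed here:

```python
import numpy as np

rng = np.random.default_rng(1)

def anneal_order(score, n, iters=1000, t0=1.0):
    """Simulated annealing over topological sorts (permutations of n nodes).
    `score(order)` returns the regularized likelihood of the best DAG
    compatible with `order`; higher is better."""
    order = rng.permutation(n)
    cur_s = best_s = score(order)
    best = order.copy()
    for t in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        cand = order.copy()
        cand[i], cand[j] = cand[j], cand[i]          # swap two positions
        s, temp = score(cand), t0 / (1 + t)
        if s > cur_s or rng.random() < np.exp((s - cur_s) / temp):
            order, cur_s = cand, s                   # accept the move
            if s > best_s:
                best, best_s = cand.copy(), s
    return best, best_s
```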

In this paper, we study a distributed learning problem in which communication is constrained to a constant number of bits. Specifically, we consider the distributed hypothesis testing (DHT) problem, where two distributed nodes are each constrained to transmit a constant number of bits to a central decoder. In this setting, we show that to achieve the optimal error exponents, it suffices to consider the empirical distributions of the observed data sequences and encode them into the transmitted bits. With such a coding strategy, we develop a geometric approach in the space of distributions and establish an inner bound on the error exponent region. In particular, we derive the optimal achievable error exponents and coding schemes for the following cases: (i) both nodes can transmit $\log_2 3$ bits; (ii) one node can transmit $1$ bit, and the other node is unconstrained; (iii) the observations at the two nodes are conditionally independent given one hypothesis. Furthermore, we provide several numerical examples illustrating the theoretical results. Our results provide theoretical guidance for designing practical distributed learning rules, and the developed approach also reveals new potential for establishing error exponents for DHT under more general communication constraints.
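
The coding strategy can be illustrated with a small sketch: each node computes the empirical distribution (type) of its sequence and transmits only the index of the nearest codeword. A three-codeword codebook corresponds to the $\log_2 3$-bit case, though the codebook and the L1 proximity rule below are illustrative choices, not the paper's optimal partition:

```python
import numpy as np

def type_message(x, alphabet, codebook):
    """Encode the empirical distribution (type) of the array x as the
    index of the nearest codeword; len(codebook) == 3 uses log2(3) bits."""
    p = np.array([(x == a).mean() for a in alphabet])   # empirical type
    dists = [np.abs(p - q).sum() for q in codebook]     # illustrative L1 rule
    return int(np.argmin(dists))
```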

In wireless networks, optimization problems generally have complex constraints and are usually solved with traditional optimization methods, which have high computational complexity and must be re-executed whenever the network environment changes. In this paper, to overcome these shortcomings, we design an unsupervised deep unrolling framework based on projected gradient descent, the unrolled PGD network (UPGDNet), to solve a family of constrained optimization problems. The constraints are divided into two categories according to the coupling relations among optimization variables and the convexity of the constraints: convex constraints that do not couple the optimization variables, and convex or non-convex constraints that do. The first category is projected directly onto the feasible region, while the second category is projected onto the feasible region using a neural network. Finally, an unrolled sum-rate maximization network (USRMNet) is designed on top of UPGDNet to solve the weighted sum-rate maximization problem for a multiuser ultra-reliable low-latency communication system. Numerical results show that USRMNet achieves comparable performance with low computational complexity and acceptable generalization ability with respect to the user distribution.
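
Schematically, one unrolled PGD layer alternates a gradient step with the two kinds of projection described above; in UPGDNet the step size and `proj_net` would be learned, so the sketch below only fixes the data flow, not the trained components:

```python
def upgd_layer(x, grad, step, proj_simple, proj_net):
    """One unrolled PGD iteration: gradient step, closed-form projection
    for decoupled convex constraints, then a learned projection (a neural
    network in UPGDNet) for coupled or non-convex constraints."""
    x = x - step * grad(x)   # gradient step on the objective
    x = proj_simple(x)       # e.g., clipping onto a box or simplex
    x = proj_net(x)          # learned mapping onto the coupled feasible set
    return x
```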

This paper presents a novel approach to conducting highly efficient federated learning (FL) over a massive wireless edge network, where an edge server and numerous mobile devices (clients) jointly learn a global model without transporting the huge amount of data collected by the mobile devices to the edge server. The proposed approach, referred to as spatio-temporal FL (STFL), jointly exploits the spatial and temporal correlations between the learning updates from different mobile devices scheduled to join STFL in various training epochs. The STFL model not only captures the realistic intermittent learning behavior between the edge server and the mobile devices due to data delivery outages, but also features a mechanism for compensating for lost learning updates to mitigate the impact of intermittent learning. We propose an analytical framework for STFL and employ it to study the learning capability of STFL via its convergence performance. In particular, we assess the impact of data delivery outages, intermittent learning mitigation, and the statistical heterogeneity of datasets on the convergence of STFL. The results provide crucial insights into the design and analysis of STFL-based wireless networks.
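
One plausible reading of the compensation mechanism is sketched below: when a client's update is lost to a delivery outage, its most recent successfully delivered update stands in for it, exploiting temporal correlation. The exact weighting in STFL may differ:

```python
import numpy as np

def stfl_aggregate(global_w, updates, delivered, last_good):
    """Aggregate client model updates under delivery outages. `last_good`
    caches each client's most recent delivered update across epochs."""
    used = []
    for i, u in enumerate(updates):
        if delivered[i]:
            last_good[i] = u            # fresh update arrived at the server
        if last_good[i] is not None:
            used.append(last_good[i])   # compensate lost updates with cache
    if not used:
        return global_w
    return global_w + np.mean(used, axis=0)
```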

Distributed Artificial Intelligence (DAI) is regarded as one of the most promising techniques for providing intelligent services to multiple clients under strict privacy protection regulations. With DAI, training on raw data is carried out locally, while the trained outputs, e.g., model parameters, from multiple local clients are sent back to a central server for aggregation. Recently, to improve practicality, DAI has been studied in conjunction with wireless communication networks, incorporating the various random effects introduced by wireless channels. However, because of the complex and case-dependent nature of wireless channels, a generic simulator for applying DAI in wireless communication networks is still lacking. To accelerate the development of DAI in wireless communication networks, we propose a generic system design in this paper, together with an associated simulator that can be configured according to the wireless channels and system-level settings. Details of the system design and an analysis of the impact of wireless environments are provided to facilitate further implementations and updates. We employ a series of experiments to verify the effectiveness and efficiency of the proposed system design and to reveal its superior scalability.
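
A simulator of this kind reduces, at its core, to drawing channel realizations per client and deciding whether each upload survives; the sketch below uses Rayleigh fading and an SNR decoding threshold, with all values illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def delivery_mask(n_clients, mean_snr_db=10.0, thresh_db=5.0):
    """Return a boolean mask of which client uploads are delivered: each
    link sees Rayleigh fading (|h|^2 exponential), and an update survives
    only if its instantaneous SNR clears the decoding threshold."""
    gain = rng.exponential(scale=1.0, size=n_clients)   # |h|^2
    snr_db = mean_snr_db + 10.0 * np.log10(gain)
    return snr_db >= thresh_db
```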

The aim of this work is to develop a fully distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method exploits the meaningful relational structure of the input data, which are collected by a set of agents communicating over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. We then propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion for designing the communication topology between agents so as to match the graph describing the data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
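
To illustrate the distributed inference step, the sketch below evaluates one GCN layer from a single agent's point of view, after it has received the boundary features held by its neighbors; the partitioning and variable names are assumptions for illustration:

```python
import numpy as np

def local_gcn_layer(X_local, X_neigh, A_rows, W):
    """One GCN layer at one agent: ReLU(A_hat @ X @ W) restricted to the
    agent's own nodes. X_neigh holds features communicated by neighboring
    agents; A_rows are this agent's rows of the normalized adjacency."""
    X_all = np.vstack([X_local, X_neigh])    # local + received feature rows
    return np.maximum(A_rows @ X_all @ W, 0.0)
```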

In recent years, mobile devices have developed rapidly, gaining stronger computation capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now run on mobile devices. To take advantage of the resources available on mobile devices and preserve users' privacy, mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and uploads only the computation results, rather than the original data, to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers, but also protects users' sensitive information. Another benefit is reduced bandwidth usage, as various kinds of local data can participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey provides a clear overview of mobile distributed machine learning and offers guidelines for applying it to real applications.
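
The architecture described above fits in a few lines: the device runs gradient steps on its own data, and only the resulting parameters travel to the server, which averages them. The least-squares objective below is a placeholder for whatever sub-problem the device actually solves:

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One on-device gradient step for a least-squares sub-problem; only
    the updated parameters w (never X, y) are uploaded afterwards."""
    return w - lr * (X.T @ (X @ w - y)) / len(y)

def server_aggregate(client_ws):
    """Server-side aggregation: average the parameters received from the
    participating devices (FedAvg-style)."""
    return np.mean(client_ws, axis=0)
```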

In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improving the condition numbers.
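
As a sketch of the reformulation behind this result (with $W$ denoting the interaction matrix, e.g., a graph Laplacian; notation assumed), the network restriction can be written as an affine consensus constraint:

```latex
\min_{x_1,\dots,x_m \in \mathbb{R}^d} \ \sum_{i=1}^{m} f_i(x_i)
\qquad \text{subject to} \qquad \sqrt{W}\,\mathbf{x} = 0,
```

For a connected graph, the kernel of $W$ contains exactly the consensus vectors, so the constraint enforces agreement among agents. Running Nesterov's accelerated gradient on the Lagrangian dual then only ever multiplies by $W$, i.e., it exchanges values between neighbors, which is how the spectral gap of $W$ enters the rate.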
