
This paper considers a cell-free massive multiple-input multiple-output (MIMO) system that consists of a large number of geographically distributed access points (APs) serving multiple users via coherent joint transmission. The downlink performance of the system is evaluated, with maximum ratio and regularized zero-forcing precoding, under two optimization objectives for power allocation: sum spectral efficiency (SE) maximization and proportional fairness. We present iterative centralized algorithms for solving these problems. Aiming at a less computationally complex, distributed, and scalable solution, we train a deep neural network (DNN) to approximate the same network-wide power allocation. Instead of training our DNN to mimic the actual optimization procedure, we use a heuristic power allocation, based on large-scale fading (LSF) parameters, as the pre-processed input to the DNN. We train the DNN to refine the heuristic scheme, thereby providing higher SE, using only local information at each AP. Another distributed DNN that exploits side information assumed to be available at the central processing unit is designed for improved performance. Further, we develop a clustered DNN model where the LSF parameters of a small number of APs, forming a cluster within a relatively large network, are used to jointly approximate the power coefficients of the cluster.
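
As a concrete illustration of the refinement step, the following minimal sketch trains a small fully connected network that maps local LSF coefficients plus a heuristic allocation to refined power coefficients. It is written in PyTorch; the layer sizes, the LSF-proportional heuristic, and the synthetic training targets are illustrative assumptions, not the paper's exact design.

```python
# Toy sketch (not the paper's exact architecture): a per-AP DNN that takes
# large-scale fading (LSF) coefficients and a heuristic power allocation as
# pre-processed input and outputs refined power coefficients.
import torch
import torch.nn as nn

K = 10  # users considered by this AP (illustrative)
model = nn.Sequential(
    nn.Linear(2 * K, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, K), nn.Softplus()      # non-negative power coefficients
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def heuristic_power(beta):
    """LSF-proportional heuristic: power proportional to the LSF coefficient."""
    return beta / beta.sum(dim=1, keepdim=True)

# Synthetic data standing in for LSF samples and for centrally optimized
# target allocations (obtained offline in the paper).
beta = torch.rand(1024, K)                      # local LSF coefficients
target = heuristic_power(beta) ** 0.8           # placeholder "optimized" targets
target = target / target.sum(dim=1, keepdim=True)

for epoch in range(200):
    x = torch.cat([beta, heuristic_power(beta)], dim=1)   # LSF + heuristic input
    p = model(x)
    p = p / p.sum(dim=1, keepdim=True)          # normalize to the AP power budget
    loss = loss_fn(p, target)
    opt.zero_grad(); loss.backward(); opt.step()
```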

Related content

In this paper, we consider a Linear Program (LP)-based online resource allocation problem where a decision maker accepts or rejects incoming customer requests irrevocably in order to maximize expected revenue given limited resources. At each time, a new order/customer/bid is revealed with a request for some resource(s) and a reward. We consider a stochastic setting where all the orders are i.i.d. sampled from an unknown distribution. This formulation covers many classic applications such as the canonical (quantity-based) network revenue management problem and the Adwords problem. Specifically, we study the objective of providing fairness guarantees while maintaining low regret. Our definition of fairness is that a fair online algorithm should treat similar agents/customers similarly, and the decisions made for similar individuals should be consistent over time. We define a fair offline solution as the analytic center of the offline optimal solution set, and propose a fair algorithm that uses an interior-point LP solver and dynamically detects unfair resource spending. Our algorithm can control cumulative unfairness (the cumulative deviation of the online solutions from the offline fair solution) at the order of $O(\log(T))$, while keeping the regret bounded in terms of $T$. Our approach does not formulate the fairness requirement as a constraint in the optimization instance; instead, we address the problem from the perspective of algorithm design. We obtain the desired fairness guarantee without imposing any fairness constraint, and our regret result is strong in that we evaluate the regret against the original objective value.
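
For intuition on the fair offline benchmark, the sketch below computes the analytic center of a polytope $\{x : Ax \le b\}$ by damped Newton steps on the log-barrier. The constraint data and the strictly feasible starting point are illustrative; the paper's algorithm additionally operates online and detects unfair resource spending with an interior-point LP solver.

```python
# Minimal sketch: analytic center of {x : A x <= b}, which the paper uses
# (applied to the offline optimal solution set) as the "fair" offline solution.
import numpy as np

def analytic_center(A, b, x0, iters=50, tol=1e-9):
    """Damped Newton on the log-barrier phi(x) = -sum_i log(b_i - a_i^T x)."""
    x = x0.astype(float)
    for _ in range(iters):
        s = b - A @ x                        # slacks, must stay positive
        g = A.T @ (1.0 / s)                  # gradient of the barrier
        H = A.T @ np.diag(1.0 / s**2) @ A    # Hessian of the barrier
        dx = np.linalg.solve(H, -g)          # Newton direction
        t = 1.0
        while np.any(b - A @ (x + t * dx) <= 0):   # backtrack to stay feasible
            t *= 0.5
        x = x + t * dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: the analytic center of the unit box [0, 1]^2 is its midpoint.
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([1., 0., 1., 0.])
print(analytic_center(A, b, x0=np.array([0.3, 0.7])))    # ~ [0.5, 0.5]
```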

Message-passing algorithms based on the Belief Propagation (BP) equations constitute a well-known distributed computational scheme. It is exact on tree-like graphical models and has also proven to be effective in many problems defined on graphs with loops (from inference to optimization, from signal processing to clustering). The BP-based scheme is fundamentally different from stochastic gradient descent (SGD), on which the current success of deep networks is based. In this paper, we present and adapt to mini-batch training on GPUs a family of BP-based message-passing algorithms with a reinforcement field that biases distributions towards locally entropic solutions. These algorithms are capable of training multi-layer neural networks with discrete weights and activations with performance comparable to SGD-inspired heuristics (BinaryNet) and are naturally well-adapted to continual learning. Furthermore, using these algorithms to estimate the marginals of the weights allows us to make approximate Bayesian predictions that have higher accuracy than point-wise solutions.
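
To make the reinforcement mechanism concrete, here is a toy sketch of sum-product BP with an annealed reinforcement field on a chain of binary spins. The graph, couplings, and annealing schedule are illustrative assumptions; the paper's contribution is scaling this idea to mini-batch training of multi-layer networks with discrete weights on GPUs.

```python
# Toy reinforced BP on a chain of +/-1 spins: standard sum-product messages
# plus a reinforcement field proportional to the current magnetizations.
import numpy as np

def f(J, H):
    """Cavity field passed along an edge (sum-product for binary spins)."""
    return np.arctanh(np.tanh(J) * np.tanh(H))

N = 8
rng = np.random.default_rng(0)
J = rng.normal(size=N - 1)           # couplings on edges (i, i+1)
h = rng.normal(scale=0.1, size=N)    # local external fields
u_r = np.zeros(N - 1)                # message spin i   -> spin i+1
u_l = np.zeros(N - 1)                # message spin i+1 -> spin i
m = np.zeros(N)                      # current marginal magnetizations
gamma = 0.0                          # reinforcement strength

for t in range(200):
    h_eff = h + gamma * m            # reinforcement biases each spin toward itself
    for i in range(N - 1):           # left-to-right messages
        u_r[i] = f(J[i], h_eff[i] + (u_r[i - 1] if i > 0 else 0.0))
    for i in range(N - 2, -1, -1):   # right-to-left messages
        u_l[i] = f(J[i], h_eff[i + 1] + (u_l[i + 1] if i + 1 < N - 1 else 0.0))
    for i in range(N):               # marginals from all incoming messages
        m[i] = np.tanh(h_eff[i]
                       + (u_r[i - 1] if i > 0 else 0.0)
                       + (u_l[i] if i < N - 1 else 0.0))
    gamma = min(1.0, gamma + 0.01)   # anneal the reinforcement upward

print(np.sign(m))                    # a candidate spin configuration
```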

In this paper we study the maximum degree of interaction which may emerge in distributed systems. It is assumed that a distributed system is represented by a graph of nodes interacting over edges. Each node has some amount of data. The intensity of interaction over an edge is proportional to the product of the amounts of data at the two nodes it connects. We seek the maximum sum of interactions over all edges. This model can be extended to other interacting entities. For bipartite graphs and odd-length cycles we prove that the greatest degree of interaction emerges when all the data is concentrated in an arbitrary pair of neighboring nodes. Equal partitioning of the load is shown to be optimal for complete graphs. Finally, we show that in general graphs maximum interaction is achieved by distributing the data equally among the nodes of a largest clique in the graph. We also relate this to a 1965 result of Motzkin and Straus for the maximum-interaction objective.
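
For reference, the Motzkin–Straus result invoked here can be stated as follows. Normalizing the total amount of data to one and writing $\omega(G)$ for the clique number of the graph $G=(V,E)$,
$$\max_{x \ge 0,\ \sum_{i \in V} x_i = 1} \;\; \sum_{(i,j) \in E} x_i x_j \;=\; \frac{1}{2}\left(1 - \frac{1}{\omega(G)}\right),$$
and the maximum is attained by spreading the load uniformly over the vertices of a maximum clique, which is exactly the structure of the optimal distribution described above.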

The stringent reliability and processing-latency requirements of ultra-reliable low-latency communication (URLLC) traffic make the design of linear massive multiple-input multiple-output (M-MIMO) receivers very challenging. Recently, Bayesian concepts have been used to increase the detection reliability of minimum mean square error (MMSE) linear receivers. However, the processing latency is a major concern due to the exponential complexity of the matrix inversion operations in MMSE schemes. This paper proposes an iterative M-MIMO receiver that combines a Bayesian concept with a parallel interference cancellation (PIC) scheme, referred to as a linear Bayesian learning (LBL) receiver. PIC has linear complexity as it uses a combination of maximum ratio combining (MRC) and decision statistic combining (DSC) schemes to avoid matrix inversion operations. Simulation results show that the bit-error-rate (BER) and processing-latency performance of the proposed receiver outperforms that of the MMSE and best Bayesian-based receivers by at least $2$ dB and a factor of $19$, respectively, for various M-MIMO system configurations.
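
The following minimal sketch illustrates the inversion-free building block, namely MRC detection combined with parallel interference cancellation; the dimensions and the QPSK symbols are illustrative, and the Bayesian learning and DSC components of the proposed LBL receiver are omitted.

```python
# MRC + parallel interference cancellation (PIC): no matrix inversion needed.
import numpy as np

rng = np.random.default_rng(1)
M, K, iters = 64, 8, 5                          # antennas, users, PIC iterations
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
x = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)   # QPSK
y = H @ x + 0.05 * (rng.normal(size=M) + 1j * rng.normal(size=M))

norms = np.sum(np.abs(H) ** 2, axis=0)          # per-user channel energies
x_hat = (H.conj().T @ y) / norms                # initial MRC estimate, no inverse

for _ in range(iters):
    x_new = np.empty(K, dtype=complex)
    for k in range(K):
        interference = H @ x_hat - H[:, k] * x_hat[k]    # other users' contribution
        y_clean = y - interference                        # cancel it in parallel
        x_new[k] = H[:, k].conj() @ y_clean / norms[k]    # MRC on the cleaned signal
    x_hat = x_new

print(np.round(x_hat, 2))                       # close to the transmitted QPSK symbols
```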

The paper describes an online deep learning algorithm for adaptive modulation and coding in massive MIMO. The algorithm is based on a fully connected neural network, which is initially trained on the output of a traditional algorithm and then incrementally retrained using the service feedback on its own output. We show the advantage of our solution over the state-of-the-art Q-Learning approach. We provide system-level simulation results to support this conclusion in various scenarios with different channel characteristics and different user speeds. Compared with traditional outer-loop link adaptation (OLLA), our algorithm shows a 10\% to 20\% improvement in user throughput in the full-buffer case of continuous traffic. This is a very valuable result that allows us to significantly improve the quality of wireless MIMO communications.
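
A hedged sketch of the two-phase procedure (pretraining on a traditional rule, then incremental retraining from link feedback) is given below. The input features, the stand-in baseline rule, and the feedback-based relabeling are illustrative assumptions, not the paper's system model.

```python
# Phase 1: imitate a traditional MCS-selection rule. Phase 2: keep refining
# the network online from simulated ACK/NACK-style feedback on its own output.
import torch
import torch.nn as nn

N_MCS = 16
net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, N_MCS))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def baseline_mcs(feat):
    """Stand-in for the traditional algorithm: threshold a CQI-like feature."""
    return (feat[:, 0] * (N_MCS - 1)).long().clamp(0, N_MCS - 1)

feat = torch.rand(4096, 4)                       # pretraining features
for _ in range(100):
    loss = ce(net(feat), baseline_mcs(feat))
    opt.zero_grad(); loss.backward(); opt.step()

for step in range(1000):                         # online incremental retraining
    f = torch.rand(32, 4)
    mcs = net(f).argmax(dim=1)                   # decisions actually served
    ack = (f[:, 0] > mcs.float() / N_MCS)        # simulated service feedback
    target = torch.where(ack, (mcs + 1).clamp(max=N_MCS - 1),
                              (mcs - 1).clamp(min=0))
    loss = ce(net(f), target)                    # nudge toward feedback-corrected labels
    opt.zero_grad(); loss.backward(); opt.step()
```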

The paper studies the multi-user precoding problem as a non-convex optimization problem for wireless MIMO systems. In our work, we approximate the target spectral efficiency function with a novel, computationally simpler function. Then, we reduce the precoding problem to an unconstrained optimization task using a special differential projection method and solve it by the quasi-Newton L-BFGS iterative procedure to achieve gains in capacity. We test the proposed approach in several scenarios generated using Quadriga, an open-source tool for generating realistic radio channel impulse responses. Our method shows monotonic improvement over heuristic methods with reasonable computation time. The proposed L-BFGS optimization scheme is novel in this area and shows a significant advantage over the standard approaches. The proposed method has a simple implementation and can serve as a good reference for other heuristic algorithms in this field.
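
The sketch below shows the general flavor of the approach using SciPy's L-BFGS routine on a sum-SE objective, with the power constraint handled by a simple normalization inside the objective as a stand-in for the differential projection; the channel sizes and noise power are illustrative, and no Quadriga channels are used here.

```python
# Unconstrained sum-SE maximization over a complex precoder via L-BFGS.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M, K, P, sigma2 = 8, 4, 1.0, 0.1                 # antennas, users, power, noise
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

def unpack(v):
    """Real optimization vector -> complex M x K precoder scaled to power P."""
    W = (v[:M * K] + 1j * v[M * K:]).reshape(M, K)
    return W * np.sqrt(P) / np.linalg.norm(W)

def neg_sum_se(v):
    W = unpack(v)
    G = np.abs(H @ W) ** 2                       # |h_k^T w_j|^2 gains
    sig = np.diag(G)                             # desired-signal powers
    intf = G.sum(axis=1) - sig                   # inter-user interference
    return -np.sum(np.log2(1.0 + sig / (intf + sigma2)))

v0 = rng.normal(size=2 * M * K)
res = minimize(neg_sum_se, v0, method="L-BFGS-B")  # gradients approximated numerically
print("sum SE:", -res.fun, "bit/s/Hz")
```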

Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were given a loan in the past, but not those who were not. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions, used in popular fairness algorithms, can or cannot be recovered from the training data. Our theoretical results imply that many of these algorithms cannot guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm decentralizes the decision-making process and still achieves performance similar to the optimal algorithm that requires centralization and non-recoverable distributions.

Model-based methods for recommender systems have been studied extensively in recent years. In systems with a large corpus, however, the computational cost of using the learnt model to predict all user-item preferences is tremendous, which makes full-corpus retrieval extremely difficult. To overcome this computational barrier, models such as matrix factorization resort to an inner-product form (i.e., they model user-item preference as the inner product of user and item latent factors) and to indexes that facilitate efficient approximate k-nearest-neighbor search. However, it remains challenging to incorporate more expressive interaction forms between user and item features, e.g., interactions through deep neural networks, because of the calculation cost. In this paper, we focus on the problem of introducing arbitrary advanced models to recommender systems with a large corpus. We propose a novel tree-based method which can provide logarithmic complexity w.r.t. the corpus size even with more expressive models such as deep neural networks. Our main idea is to predict user interests from coarse to fine by traversing tree nodes in a top-down fashion and making decisions for each user-node pair. We also show that the tree structure can be jointly learnt towards better compatibility with users' interest distribution and hence facilitate both training and prediction. Experimental evaluations on two large-scale real-world datasets show that the proposed method significantly outperforms traditional methods. Online A/B test results on the Taobao display advertising platform also demonstrate the effectiveness of the proposed method in production environments.
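
The retrieval side of this idea can be sketched as a top-down beam search over a complete binary tree, as below; the dot-product scorer is used purely for brevity (the point of the paper is that it can be an arbitrary deep model), and the tree and node embeddings are illustrative assumptions.

```python
# Coarse-to-fine retrieval: at each level, score the user only against the
# children of the kept nodes, so the number of model calls is O(beam * depth).
import numpy as np

rng = np.random.default_rng(0)
depth, beam, dim = 10, 4, 16
n_nodes = 2 ** (depth + 1) - 1                   # complete binary tree; leaves = items
node_emb = rng.normal(size=(n_nodes, dim))       # stand-in learned node embeddings
user = rng.normal(size=dim)

def score(user_vec, node_ids):
    """Stand-in user-node preference model (replace with a deep network)."""
    return node_emb[node_ids] @ user_vec

frontier = np.array([0])                         # start at the root
for level in range(depth):
    children = np.concatenate([2 * frontier + 1, 2 * frontier + 2])
    s = score(user, children)
    frontier = children[np.argsort(-s)[:beam]]   # keep the best `beam` nodes

top_items = frontier - (2 ** depth - 1)          # leaf node id -> item id
print("retrieved items:", top_items)
```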

Latent Dirichlet Allocation (LDA) is a popular topic model. Given that the input corpus of LDA algorithms consists of millions to billions of tokens, the LDA training process is very time-consuming, which may prevent the use of LDA in many scenarios, e.g., online services. GPUs have benefited modern machine learning algorithms and big data analysis as they can provide high memory bandwidth and computation power. Therefore, many frameworks, e.g., TensorFlow, Caffe, and CNTK, support the use of GPUs to accelerate popular data-intensive machine learning algorithms. However, we observe that existing LDA solutions on GPUs are not satisfactory. In this paper, we present CuLDA_CGS, a GPU-based efficient and scalable approach to accelerate large-scale LDA problems. CuLDA_CGS is designed to efficiently solve LDA problems at high throughput. To this end, we first carefully design the workload partition and synchronization mechanism to exploit the benefits of multiple GPUs. Then, we offload the LDA sampling process to each individual GPU by optimizing it from the sampling-algorithm, parallelization, and data-compression perspectives. Evaluations show that, compared with state-of-the-art LDA solutions, CuLDA_CGS outperforms them by a large margin (up to 7.3X) on a single GPU and achieves an additional 3.0X speedup on 4 GPUs. The source code is publicly available at //github.com/cuMF/CuLDA_CGS.
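
For context, the per-token update that CGS-based systems such as CuLDA_CGS parallelize is the standard collapsed Gibbs sampling step, sketched below on a toy corpus; none of the GPU workload-partitioning, synchronization, or data-compression techniques appear here.

```python
# Plain collapsed Gibbs sampling for LDA on a toy corpus (single-threaded).
import numpy as np

rng = np.random.default_rng(0)
K, V, alpha, beta = 4, 50, 0.1, 0.01              # topics, vocab size, priors
docs = [rng.integers(0, V, size=20) for _ in range(10)]

z = [rng.integers(0, K, size=len(d)) for d in docs]          # topic assignments
ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
for d, (doc, zd) in enumerate(zip(docs, z)):
    for w, k in zip(doc, zd):
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for sweep in range(50):
    for d, (doc, zd) in enumerate(zip(docs, z)):
        for n, w in enumerate(doc):
            k = zd[n]                             # remove the current assignment
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())      # resample the token's topic
            zd[n] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
```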

In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors) with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions to the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
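
A common way to write these affine communication constraints (assuming a gossip/Laplacian-type interaction matrix $W$ whose null space is the consensus subspace) is
$$\min_{x_1,\dots,x_m}\ \sum_{i=1}^{m} f_i(x_i) \quad \text{subject to} \quad W\,[x_1,\dots,x_m]^{\top} = 0,$$
so that feasibility forces $x_1 = \dots = x_m$; running Nesterov's accelerated gradient on the dual of this constrained problem then yields a distributed scheme, and the spectral gap of $W$ controls the additional cost mentioned above.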
