
Downlink joint transmission by a cluster of remote radio heads (RRHs) is an essential technique for enhancing throughput in future cellular networks. This method requires global channel state information (CSI) at the processing unit that designs the joint precoder, so a large amount of CSI must be shared between the RRHs and that unit. This paper makes two contributions. The first is a new upper bound on the rate loss, which implies a lower bound on the achievable rate, obtained by a cluster of RRHs that employ joint zero-forcing (ZF) with incomplete CSI. The second contribution, which follows from insights provided by the bound, is a new CSI sharing scheme that drastically reduces the large overhead associated with acquiring global CSI for joint transmission. In a nutshell, each RRH applies a local precoding matrix that creates low-dimensional effective channels, which can be quantized more accurately with fewer bits, thereby reducing the overhead of sharing CSI. In addition to reducing the CSI-sharing overhead, this scheme reduces the data rate that must be delivered to each RRH in the cluster.
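A minimal sketch of the core effect the abstract describes: joint ZF nulls inter-user interference under perfect global CSI, while quantized CSI leaves residual interference, which is what a rate-loss bound must account for. The rounding step below is a crude stand-in for limited-feedback quantization, not the paper's scheme; all dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 4  # total antennas across the RRH cluster, single-antenna users
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

def zf_precoder(H):
    """Joint zero-forcing precoder with unit-power columns."""
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    return W / np.linalg.norm(W, axis=0, keepdims=True)

# With perfect global CSI, ZF nulls all inter-user interference.
W = zf_precoder(H)
G = H @ W
leak_perfect = np.max(np.abs(G - np.diag(np.diag(G))))

# With quantized CSI (crude per-entry rounding as a stand-in for
# limited-feedback quantization), residual interference remains.
H_q = np.round(H * 4) / 4
G_q = H @ zf_precoder(H_q)
leak_quantized = np.max(np.abs(G_q - np.diag(np.diag(G_q))))
print(leak_perfect, leak_quantized)  # ~0 vs. clearly nonzero
```

The gap between the two leakage values is exactly the interference term that grows as fewer feedback bits are spent per channel dimension.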

Related content

A long line of research on the fixed parameter tractability of integer programming culminated in showing that integer programs with n variables and a constraint matrix with dual tree-depth d and largest entry D are solvable in time g(d,D)·poly(n) for some function g. However, the dual tree-depth of a constraint matrix is not preserved by row operations, i.e., a given integer program can be equivalent to another whose constraint matrix has smaller dual tree-depth, and thus dual tree-depth does not reflect the program's geometric structure. We prove that the minimum dual tree-depth over all row-equivalent matrices is equal to the branch-depth of the matroid defined by the columns of the matrix. We design a fixed parameter algorithm for computing the branch-depth of matroids represented over a finite field and a fixed parameter algorithm for computing a row-equivalent matrix with minimum dual tree-depth. Finally, we use these results to obtain an algorithm for integer programming running in time g(d*,D)·poly(n), where d* is the branch-depth of the constraint matrix; the branch-depth cannot be replaced by the more permissive notion of branch-width.

Modern wireless cellular networks use massive multiple-input multiple-output (MIMO) technology. This technology involves operating an antenna array at a base station that simultaneously serves multiple mobile devices, which also use multiple antennas on their side. For this, various precoding and detection techniques are used, allowing each user to receive the signal intended for it from the base station. An important class of linear precoding is Regularized Zero-Forcing (RZF). In this work, we propose Adaptive RZF (ARZF) with a special kind of regularization matrix that has different coefficients for each layer of multi-antenna users. These regularization coefficients are defined by explicit formulas based on singular value decompositions (SVDs) of the user channel matrices. We study the optimization problem solved by the proposed algorithm and its connection to other possible problem statements. We also compare the proposed algorithm with state-of-the-art linear precoding algorithms in simulations with the Quadriga channel model. The proposed approach provides a significant increase in quality with the same computation time as the reference methods.
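To make the contrast concrete, here is a hedged sketch of the structural difference between classical RZF (one scalar regularizer) and an ARZF-style diagonal regularizer whose per-user coefficients depend on that user's singular values. The specific coefficient formula below is illustrative only; the paper derives its own.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, noise_var = 16, 4, 0.1  # BS antennas, single-antenna users, noise power
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Classical RZF: one scalar regularizer shared by all users.
W_rzf = H.conj().T @ np.linalg.inv(H @ H.conj().T + K * noise_var * np.eye(K))

# ARZF-style variant (illustrative): a diagonal regularization matrix with
# per-user coefficients driven by each user's singular value(s), so weaker
# channels are regularized more heavily.  The paper's exact formulas differ;
# this only shows the structure of a layer-dependent regularizer.
s = np.array([np.linalg.svd(H[k:k + 1], compute_uv=False)[0] for k in range(K)])
D = np.diag(noise_var / s ** 2)
W_arzf = H.conj().T @ np.linalg.inv(H @ H.conj().T + K * D)
print(W_rzf.shape, np.allclose(W_rzf, W_arzf))  # (16, 4) False
```

Because the regularizer stays K x K, the adaptive variant has essentially the same inversion cost as plain RZF, which is consistent with the abstract's claim of equal computation time.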

We introduce vector optimization problems with stochastic bandit feedback, which extend the best arm identification problem to vector-valued rewards. We consider $K$ designs with multi-dimensional mean reward vectors, which are partially ordered according to a polyhedral ordering cone $C$. This generalizes the concept of the Pareto set in multi-objective optimization and allows different sets of decision-maker preferences to be encoded by $C$. Different from prior work, we define approximations of the Pareto set based on direction-free covering and gap notions. We study the setting where an evaluation of each design yields a noisy observation of the mean reward vector. Under a subgaussian noise assumption, we investigate the sample complexity of the naïve elimination algorithm in an ($\epsilon,\delta$)-PAC setting, where the goal is to identify an ($\epsilon,\delta$)-PAC Pareto set with the minimum number of design evaluations. In order to characterize the difficulty of learning the Pareto set, we introduce the concept of ordering complexity, i.e., geometric conditions on the deviations of empirical reward vectors from their means under which the Pareto front can be approximated accurately. We show how to compute the ordering complexity of any polyhedral ordering cone. We run experiments to verify our theoretical results and illustrate how $C$ and the sampling budget affect the Pareto set, the returned ($\epsilon,\delta$)-PAC Pareto set, and the success of identification.
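The cone-induced order can be sketched in a few lines: a design is dominated if some other design's mean reward exceeds it in the direction of the cone. With the identity matrix as the cone description this reduces to the usual componentwise Pareto order; a wider cone orders more pairs and can shrink the Pareto set. The matrices and reward vectors below are illustrative.

```python
import numpy as np

# Pareto set of K designs under a polyhedral ordering cone C = {z : A z >= 0}.
# With A = I this is the usual componentwise (multi-objective) order;
# other choices of A encode other decision-maker preferences.
def pareto_set(mu, A):
    """mu: (K, d) mean reward vectors.  Returns indices of non-dominated designs."""
    K = mu.shape[0]
    def dominates(j, i):
        z = mu[j] - mu[i]
        return bool(np.all(A @ z >= 0) and np.any(z != 0))
    return [i for i in range(K) if not any(dominates(j, i) for j in range(K) if j != i)]

mu = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.2, 0.2]])
pareto_orthant = pareto_set(mu, np.eye(2))        # componentwise order
# A wider cone C = {z : z1 >= 0, z1 + z2 >= 0} orders more pairs.
A_wide = np.array([[1.0, 0.0], [1.0, 1.0]])
pareto_wide = pareto_set(mu, A_wide)
print(pareto_orthant, pareto_wide)  # [0, 1, 2] [0]
```

This is the noiseless version of the problem; the paper's difficulty lies in identifying this set from noisy evaluations, which is what the ordering complexity quantifies.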

In this paper, the problem of training federated learning (FL) algorithms over a realistic wireless network is studied. In particular, in the considered model, wireless users execute an FL algorithm while training their local FL models using their own data and transmitting the trained local FL models to a base station (BS) that generates a global FL model and sends it back to the users. Since all training parameters are transmitted over wireless links, the quality of the training is affected by wireless factors such as packet errors and the availability of wireless resources. Meanwhile, due to the limited wireless bandwidth, the BS must select an appropriate subset of users to execute the FL algorithm so as to build an accurate global FL model. This joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm. To address this problem, a closed-form expression for the expected convergence rate of the FL algorithm is first derived to quantify the impact of wireless factors on FL. Then, based on the expected convergence rate, the optimal transmit power for each user is derived under a given user selection and uplink resource block (RB) allocation scheme. Finally, the user selection and uplink RB allocation are optimized so as to minimize the FL loss function. Simulation results show that the proposed joint federated learning and communication framework can reduce the FL loss function value by up to 10% and 16%, respectively, compared to: 1) an optimal user selection algorithm with random resource allocation and 2) a standard FL algorithm with random user selection and resource allocation.
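The wireless effect the paper models can be sketched with a FedAvg-style aggregation step in which each selected user's update is lost with some packet-error probability and the BS averages whatever arrives, weighted by local dataset size. This is a toy illustration of the system model, not the paper's optimization; all names and numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# FedAvg-style aggregation over an unreliable uplink (illustrative).
# Each user's local model is lost with a packet-error probability that,
# in the paper, depends on transmit power and RB allocation; the BS
# averages the received models, weighted by local dataset sizes.
def aggregate(local_models, n_samples, err_prob):
    received = [k for k in range(len(local_models)) if rng.random() > err_prob[k]]
    if not received:
        return None  # nothing arrived this round; global model is not updated
    w = np.array([n_samples[k] for k in received], dtype=float)
    w /= w.sum()
    return sum(wi * local_models[k] for wi, k in zip(w, received))

local = [np.full(3, float(k)) for k in range(4)]   # toy "models" 0,1,2,3
n = [10, 20, 30, 40]                                # local dataset sizes
g = aggregate(local, n, err_prob=[0.0, 0.0, 0.0, 0.0])  # error-free links
print(g)  # sample-weighted average of all four models
```

With nonzero error probabilities the average is taken over a random subset, which is precisely the distortion that the derived convergence rate quantifies and the power/RB optimization tries to minimize.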

In this paper, a novel full-duplex non-coherent (FD-NC) transmission scheme is developed for massive multiple-input multiple-output (mMIMO) systems using analog beamforming (ABF). We propose to use a structured Grassmannian constellation for non-coherent communications, which does not require channel estimation. Then, we design the transmit and receive ABF via the slowly time-varying angle-of-departure (AoD) and angle-of-arrival (AoA) information, respectively. The ABF design targets maximizing the intended signal power while suppressing the strong self-interference (SI) that occurs in FD transmission. Moreover, the proposed ABF technique needs only a single transmit and receive RF chain to support large antenna arrays, thus reducing hardware cost and complexity in mMIMO systems. It is shown that the proposed FD-NC offers a great improvement in bit error rate (BER) in comparison to both half-duplex non-coherent (HD-NC) and HD coherent schemes. We also observe that the proposed FD-NC both reduces the error floor resulting from the residual SI in FD transmission and provides lower BER compared to FD coherent transmission.
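The key idea behind non-coherent Grassmannian signaling is that an unknown fading coefficient scales, but cannot rotate, the transmitted subspace, so a receiver can detect without any channel estimate. The toy below uses a random unit-vector codebook and a GLRT-style energy detector; the paper's structured constellation and FD aspects are not modeled, and the noiseless received block is for clarity only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy non-coherent detection on the Grassmannian (illustrative; the paper
# uses a structured constellation, not a random one).  Each codeword is a
# unit-norm T x 1 block; the receiver picks the codeword whose subspace
# captures the most received energy -- no channel estimate is needed.
T, L = 8, 16
codebook = []
for _ in range(L):
    v = rng.standard_normal((T, 1)) + 1j * rng.standard_normal((T, 1))
    codebook.append(v / np.linalg.norm(v))

def detect(Y):
    # GLRT detector: maximize the energy of Y projected onto each codeword.
    return int(np.argmax([np.linalg.norm(X.conj().T @ Y) for X in codebook]))

sent = 5
h = rng.standard_normal((1, 1)) + 1j * rng.standard_normal((1, 1))  # unknown fading
Y = codebook[sent] @ h  # noiseless received block; channel never estimated
print(detect(Y) == sent)  # the unknown channel only scales the subspace
```

In the paper's setting the residual SI adds to the noise term, which is why the ABF design that suppresses SI directly lowers the non-coherent error floor.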

To support the unprecedented growth of Internet of Things (IoT) applications and the access demands of a tremendous number of IoT devices, two new technologies have recently emerged to overcome the shortage of spectrum resources. The first, known as integrated sensing and communication (ISAC), aims to share spectrum bandwidth between radar sensing and data communication. The second, called over-the-air computation (AirComp), enables simultaneous transmission and computation of data from multiple IoT devices in the same frequency band. The promising performance of ISAC and AirComp motivates the current work on developing a framework that combines the merits of both, called integrated sensing and AirComp (ISAA). Two schemes, namely the shared and separated schemes, are designed to support multiple-input multiple-output (MIMO) ISAA. The performance metrics of radar sensing and AirComp are the mean square errors of the estimated target response matrix and the received computation results, respectively. The design challenge of MIMO ISAA lies in the joint optimization of the radar sensing and data transmission beamformers at the IoT devices and the data aggregation beamformer at the server, which results in a complex non-convex problem. To solve this problem, an algorithmic solution based on semidefinite relaxation is proposed. The results reveal that the beamformer at each sensor must support dual-functional signals in the shared scheme, while dedicated beamformers for sensing and AirComp are needed to mitigate the mutual interference between the two functionalities in the separated scheme. The use case of target location estimation based on ISAA is demonstrated in simulation to show the performance gains.
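The AirComp half of the framework rests on a simple superposition identity, sketched below for scalar channels: if each device pre-scales its reading by the inverse of its channel, the server receives the desired sum directly from the superposed waveform. This is the textbook channel-inversion form, not the paper's MIMO beamforming design; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# AirComp in one line (illustrative): K devices transmit simultaneously,
# each pre-scaling its reading by the inverse of its scalar channel, so
# the server receives the desired sum -- computation "over the air".
K = 5
h = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # device channels
x = rng.standard_normal(K)                                  # sensor readings
b = 1.0 / h                                                 # channel-inversion precoders
y = np.sum(h * b * x)                                       # superposed reception (noiseless)
print(np.isclose(y.real, x.sum()))  # the server obtains sum(x) directly
```

In the MIMO ISAA setting the scalar inversion becomes a joint beamformer design, and in the shared scheme the same transmit beamformer must simultaneously shape the radar waveform, which is what makes the optimization non-convex.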

Field of view (FoV) prediction is critical in 360-degree video multicast, which is a key component of emerging Virtual Reality (VR) and Augmented Reality (AR) applications. Most current prediction methods combining saliency detection and FoV information neither take into account that the distortion of projected 360-degree videos can invalidate the weight sharing of traditional convolutional networks, nor adequately consider the difficulty of obtaining complete multi-user FoV information, both of which degrade prediction performance. This paper proposes a spherical convolution-empowered FoV prediction method: a multi-source prediction framework combining salient features extracted from 360-degree video with limited FoV feedback information. A spherical convolutional neural network (CNN) is used instead of a traditional two-dimensional CNN to eliminate the weight sharing failure caused by video projection distortion. Specifically, salient spatiotemporal features are extracted through a spherical convolution-based saliency detection model, after which the limited feedback FoV information is modeled as a time series using a spherical convolution-empowered gated recurrent unit network. Finally, the extracted salient video features are combined to predict future user FoVs. The experimental results show that the proposed method outperforms other prediction methods.
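The projection-distortion point can be quantified in a few lines: on an equirectangular grid, a fixed pixel neighborhood covers a solid angle proportional to the cosine of its latitude, so the same 2-D convolution kernel sees very different spherical footprints at the equator and near the poles. The grid size below is arbitrary.

```python
import numpy as np

# Equirectangular projection distortion (why plain 2-D convolution
# struggles on 360-degree video): a fixed pixel neighborhood covers a
# solid angle proportional to cos(latitude), shrinking toward the poles.
H = 6  # rows of an equirectangular grid, from south pole to north pole
lat = (np.arange(H) + 0.5) / H * np.pi - np.pi / 2   # row-center latitudes
rel_area = np.cos(lat)                                # relative spherical footprint
print(np.round(rel_area, 2))  # near-equator rows ~1.0, polar rows much smaller
```

Spherical convolutions sidestep this by defining the kernel on the sphere itself, so the effective receptive field no longer depends on latitude, which is the weight-sharing repair the abstract refers to.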

The use of a large excess of service antennas brings a variety of performance benefits to distributed MIMO C-RAN, but the corresponding high fronthaul data loads can be problematic in practical systems with limited fronthaul capacity. In this work we propose the use of lossy dimension reduction, applied locally at each remote radio head (RRH), to reduce this fronthaul traffic. We first consider the uplink, and the case where each RRH applies a linear dimension reduction filter to its multi-antenna received signal vector. It is shown that under a joint mutual information criterion, the optimal dimension reduction filters are given by a variant of the conditional Karhunen-Loeve transform, with a stationary point found using block coordinate ascent. These filters are then modified such that each RRH can calculate its own dimension reduction filter in a decentralised manner, using knowledge only of its own instantaneous channel and the network's slow fading coefficients. We then show that in TDD systems these dimension reduction filters can be re-used as part of a two-stage reduced-dimension downlink precoding scheme. Analysis and numerical results demonstrate that the proposed approach can significantly reduce both uplink and downlink fronthaul traffic whilst incurring very little loss in MIMO performance.
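A hedged sketch of the local dimension-reduction idea: project the N-antenna received vector onto the dominant eigenvectors of the local received-signal covariance, so only d streams cross the fronthaul. This eigenvector projection is a simplified stand-in for the conditional Karhunen-Loeve variant derived in the paper; dimensions and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Local linear dimension reduction at one RRH (illustrative): keep the
# top-d eigenvectors of the received signal covariance R = H H^H + s2 I,
# retaining most of the signal energy while forwarding only d streams.
N, K, d, s2 = 16, 4, 4, 0.1   # antennas, users, kept dimensions, noise power
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
R = H @ H.conj().T + s2 * np.eye(N)
eigval, eigvec = np.linalg.eigh(R)        # eigenvalues in ascending order
F = eigvec[:, -d:].conj().T               # d x N dimension reduction filter
# Fraction of received-signal energy kept by the d-dimensional projection:
kept = eigval[-d:].sum() / eigval.sum()
print(round(float(kept), 3))  # close to 1: only K=4 strong signal directions exist
```

Note that the filter depends only on the RRH's own channel and the noise level, which is the decentralisation property the abstract emphasises; the fronthaul load drops from N to d complex samples per channel use.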

Energy efficiency (EE) plays a key role in future wireless communication networks, and high EE is easier to achieve in the low-SNR regime. In this paper, a new high-EE scheme is proposed for a MIMO wireless communication system operating in the low-SNR regime, based on two-dimensional resource allocation. First, we define the high-EE region based on the relationship between the transmit power and the SNR. Meeting the constraint of the high-EE region requires both the frequency and space dimensions. Rather than analysing them separately, we treat frequency and space as a joint resource and propose a two-dimensional allocation scheme. Furthermore, since communicating in the high-EE region may degrade communication quality, we add a quality-of-service (QoS) constraint and derive the corresponding EE performance based on the effective capacity. We also derive an approximate expression that simplifies the EE analysis. Finally, numerical results demonstrate the effectiveness of the proposed scheme.
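The effective-capacity notion used above has a compact Monte Carlo form, sketched here under illustrative assumptions (Rayleigh fading, unit bandwidth and block length folded into the QoS exponent): EC(theta) = -(1/theta) ln E[exp(-theta R)], where R is the instantaneous rate. As theta shrinks, EC approaches the ergodic capacity; a stricter QoS exponent lowers it.

```python
import numpy as np

rng = np.random.default_rng(6)

# Effective capacity via Monte Carlo (illustrative): the maximum constant
# arrival rate a fading link can support under QoS exponent theta.
#   EC(theta) = -(1/theta) * ln E[ exp(-theta * R) ],
# with R = log2(1 + SNR * |h|^2) the instantaneous rate.
snr = 1.0                                      # modest SNR, as in the low-SNR setting
h2 = rng.exponential(size=200_000)             # Rayleigh fading channel power
R = np.log2(1 + snr * h2)
def eff_cap(theta):
    return -np.log(np.mean(np.exp(-theta * R))) / theta
print(eff_cap(0.01), eff_cap(5.0))  # stricter QoS exponent => smaller EC
```

This is why adding the QoS constraint changes the achievable EE: the numerator of the EE expression becomes the effective capacity rather than the Shannon rate, and it shrinks as the delay requirement tightens.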

One of the difficulties of conversion rate (CVR) prediction is that conversions can be delayed, taking place long after the clicks. This delayed feedback poses a challenge: fresh data are beneficial to continuous training but may not have complete label information at the time they are ingested into the training pipeline. To balance model freshness and label certainty, previous methods set a short waiting window or even do not wait for the conversion signal at all. If a conversion happens outside the waiting window, the sample is duplicated and ingested into the training pipeline with a positive label. However, these methods have some issues. First, they assume the observed feature distribution remains the same as the actual distribution, but this assumption does not hold due to the ingestion of duplicated samples. Second, the certainty of the conversion action comes only from the positives, and positives are scarce because conversions are sparse in commercial systems. These issues induce bias when modeling delayed feedback. In this paper, we propose the DElayed FEedback modeling with Real negatives (DEFER) method to address these issues. The proposed method ingests real negative samples into the training pipeline. The ingestion of real negatives ensures that the observed feature distribution is equivalent to the actual distribution, thus reducing the bias, and also brings more certainty about the conversion signal. To correct the distribution shift, DEFER employs importance sampling to weight the loss function. Experimental results on industrial datasets validate the superiority of DEFER. DEFER has been deployed in the display advertising system of Alibaba, achieving over 6.0% CVR improvement in several scenarios. The code and data in this paper are open-sourced at //github.com/gusuperstar/defer.git.
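The importance-sampling correction can be sketched as a weighted logistic loss: when the training stream's distribution q differs from the serving distribution p (because duplicated or delayed samples were ingested), each example's loss is scaled by w = p/q so the expected gradient matches the unbiased objective. The weights and predictions below are made up for illustration; DEFER derives its weights from the observed conversion-delay statistics.

```python
import numpy as np

# Importance-weighted logistic loss (illustrative sketch of the
# distribution-shift correction; the weights here are hypothetical).
def weighted_logloss(p_pred, y, w):
    eps = 1e-12  # numerical guard against log(0)
    ll = -(y * np.log(p_pred + eps) + (1 - y) * np.log(1 - p_pred + eps))
    return float(np.mean(w * ll))

p_pred = np.array([0.9, 0.2, 0.7, 0.1])   # model's predicted CVR
y      = np.array([1.0, 0.0, 1.0, 0.0])   # observed (possibly delayed) labels
w_unif = np.ones(4)                        # no correction
w_is   = np.array([1.2, 0.8, 1.2, 0.8])   # hypothetical importance weights
loss_unif = weighted_logloss(p_pred, y, w_unif)
loss_is = weighted_logloss(p_pred, y, w_is)
print(loss_unif, loss_is)
```

The point of ingesting real negatives is that the ratio p/q stays close to one and is easy to estimate, whereas the duplicate-positive schemes distort q in a way that is hard to correct.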
