
In this paper, we present the delay-constrained performance analysis of a multi-antenna-assisted multiuser non-orthogonal multiple access (NOMA) based spectrum sharing system over Rayleigh fading channels. We derive analytical expressions for the sum effective rate (ER) for the downlink NOMA system under a peak interference constraint. In particular, we show the effect of the availability of different levels of channel state information (instantaneous and statistical) on the system performance. We also show the effect of different parameters of interest, including the peak tolerable interference power, the delay exponent, the number of antennas and the number of users, on the sum ER of the system under consideration. An excellent agreement between simulation and theoretical results confirms the accuracy of the analysis.
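Concretely, the effective rate in such delay-constrained analyses is the log-moment generating function of the instantaneous rate. A Monte Carlo sketch for a single Rayleigh link (illustrative SNR and delay exponents; not the paper's multiuser NOMA expressions under the peak interference constraint) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_rate(snr_db, theta, n_samples=200_000):
    """Monte Carlo estimate of the effective rate of a single Rayleigh
    fading link: ER(theta) = -(1/theta) * ln E[exp(-theta * R)], where
    R = log2(1 + SNR * |h|^2) and theta is the delay QoS exponent.
    """
    snr = 10 ** (snr_db / 10)
    gain = rng.exponential(1.0, n_samples)   # |h|^2 under Rayleigh fading
    rate = np.log2(1 + snr * gain)           # instantaneous rate (bit/s/Hz)
    return -np.log(np.mean(np.exp(-theta * rate))) / theta

# A loose delay constraint (small theta) approaches the ergodic rate;
# a strict one (large theta) sacrifices rate for delay guarantees.
er_loose = effective_rate(10, 0.01)
er_strict = effective_rate(10, 5.0)
```

The gap between `er_loose` and `er_strict` is exactly the delay-exponent effect the abstract refers to.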

Related content


5G technology allows heterogeneous services to coexist in the same physical network. On the radio access network (RAN), spectrum slicing of the shared radio resources is a critical task to guarantee the performance of each service. In this paper, we analyze a downlink scenario in which a base station (BS) must serve two types of traffic, enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC). Due to the nature of low-latency traffic, the BS knows the channel state information (CSI) of the eMBB users only. In this setting, we study the power minimization problem under orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA) schemes. We analyze the impact of resource sharing, showing that knowledge of the eMBB CSI can also be exploited in resource allocation for the URLLC users. Based on this analysis, we propose two algorithms: a feasible approach and a block coordinate descent (BCD) approach. We show that BCD is optimal for the URLLC power allocation. The numerical results show that NOMA leads to lower power consumption than OMA, except when the URLLC user is very close to the BS; in that case, the optimal approach depends on the channel condition of the eMBB user. Even when the OMA paradigm attains the best performance, its gap to NOMA is negligible, demonstrating NOMA's ability to exploit the shared resources to reduce power consumption in every condition.
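A minimal sketch of the block-structured power allocation, assuming a two-user downlink NOMA toy model with unit noise power and perfect SIC at the strong (eMBB) user — the function names and rate targets are illustrative, not the paper's exact URLLC formulation:

```python
def noma_power_bcd(g_embb, g_urllc, r_embb, r_urllc, iters=20):
    """Block coordinate descent for two-user downlink NOMA power
    minimization under per-user rate targets (bit/s/Hz).

    Toy model: unit noise power, perfect SIC at the eMBB (strong)
    user, URLLC (weak) user treats the eMBB signal as noise.
    """
    pe, pu = 1.0, 1.0
    for _ in range(iters):
        # Block 1: eMBB power (interference-free after SIC).
        pe = (2 ** r_embb - 1) / g_embb
        # Block 2: URLLC power, given the current eMBB power.
        pu = (2 ** r_urllc - 1) * (pe * g_urllc + 1) / g_urllc
    return pe, pu

pe, pu = noma_power_bcd(g_embb=2.0, g_urllc=0.5, r_embb=2.0, r_urllc=0.5)
```

Each block has a closed-form minimizer given the other, which is the structure that makes a BCD approach natural here.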

Spectrum slicing of the shared radio resources is a critical task in 5G networks with heterogeneous services, through which each service obtains performance guarantees. In this paper, we consider a setup in which a base station (BS) must serve two types of traffic in the downlink, enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC). Two resource allocation strategies are compared: non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA). A framework for power minimization is presented in which the BS knows the channel state information (CSI) of the eMBB users only. Nevertheless, due to resource sharing, it is shown that this knowledge can also be used to the benefit of the URLLC users. The numerical results show that NOMA leads to lower power consumption than OMA for every simulation parameter under test.

In this article, a wavelet-OFDM-based non-orthogonal multiple access (NOMA) system combined with massive MIMO (mMIMO) is proposed for 6G networks. For mMIMO transmissions, the proposed system enhances performance by utilizing wavelets to compensate for channel impairments on the transmitted signal. Performance measures include spectral efficiency, symbol error rate (SER), and peak-to-average power ratio (PAPR). Simulation results show that the proposed system outperforms conventional OFDM-based NOMA systems.
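As a reference point for one of the metrics, the PAPR of a conventional OFDM symbol can be measured as below (a hypothetical QPSK, 256-subcarrier setup; a wavelet-based transmultiplexer would replace the IFFT stage):

```python
import numpy as np

rng = np.random.default_rng(1)

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# One QPSK-loaded OFDM symbol, scaled to unit average power.
n_sc = 256
qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
ofdm_sym = np.fft.ifft(qpsk) * np.sqrt(n_sc)
```

Random QPSK loading over 256 subcarriers typically lands around 9–11 dB of PAPR, which is the baseline a wavelet-based design would be compared against.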

This paper describes an online deep learning algorithm for adaptive modulation and coding in massive MIMO. The algorithm is based on a fully connected neural network, which is initially trained on the output of a traditional algorithm and then incrementally retrained using the service feedback on its own decisions. We show the advantage of our solution over the state-of-the-art Q-learning approach, and provide system-level simulation results to support this conclusion in various scenarios with different channel characteristics and user speeds. Compared with traditional outer-loop link adaptation (OLLA), our algorithm shows a 10% to 20% improvement in user throughput in the full-buffer case of continuous traffic, significantly improving the quality of wireless MIMO communications.
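The train-then-incrementally-retrain loop can be caricatured with a much simpler learner; the toy below uses a per-MCS logistic model updated from ACK/NACK feedback (the SINR feature, MCS table, and learning rate are assumptions standing in for the paper's fully connected network):

```python
import numpy as np

class OnlineMcsSelector:
    """Per-MCS logistic model of success probability vs. a channel
    feature (here a scalar SINR), retrained incrementally from
    ACK/NACK feedback on its own past decisions.
    """
    def __init__(self, n_mcs, lr=0.1):
        self.w = np.zeros((n_mcs, 2))   # per-MCS weights on [sinr, 1]
        self.lr = lr

    def _p_ok(self, mcs, sinr):
        z = self.w[mcs] @ np.array([sinr, 1.0])
        return 1 / (1 + np.exp(-z))

    def pick(self, sinr, rates):
        # Choose the MCS maximizing expected throughput rate * P(success).
        scores = [r * self._p_ok(m, sinr) for m, r in enumerate(rates)]
        return int(np.argmax(scores))

    def feedback(self, mcs, sinr, ack):
        # One SGD step on the cross-entropy of the observed ACK/NACK.
        grad = (self._p_ok(mcs, sinr) - ack) * np.array([sinr, 1.0])
        self.w[mcs] -= self.lr * grad
```

The same pick/feedback cycle is what the abstract's "incrementally retrained by the service feedback" refers to, just with a deep network in place of the logistic model.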

Non-orthogonal multiple access (NOMA) has been identified as one of the promising technologies to enhance spectral efficiency and throughput in 5G-and-beyond cellular networks. Meanwhile, coordinated multi-point (CoMP) transmission improves coverage for cell-edge users. Thus, CoMP and NOMA can be used together to improve the overall coverage and throughput of cell-edge users. However, user grouping and pairing for CoMP-NOMA-based cellular networks have not been suitably addressed in the existing literature. Motivated by this, we propose two user grouping and pairing schemes for a CoMP-NOMA-based system. Both schemes are compared in terms of overall throughput and coverage. Numerical results are presented for various densities of users, base stations, and CoMP thresholds. Moreover, the results are compared with purely OMA-based, NOMA-only, and CoMP-only benchmark systems. We show through simulation results that the proposed schemes offer a trade-off between throughput and coverage compared to a purely NOMA- or CoMP-based system.
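One common baseline for such grouping is near-far pairing by channel gain; a sketch is below (this heuristic is only a stand-in — the paper's two schemes additionally account for CoMP thresholds and BS densities):

```python
import numpy as np

def pair_users(gains):
    """Near-far NOMA pairing: sort users by channel gain and pair the
    strongest with the weakest, second strongest with second weakest,
    and so on. Returns (weak, strong) index pairs.
    """
    order = np.argsort(gains)   # weakest ... strongest
    n = len(order)
    return [(int(order[i]), int(order[n - 1 - i])) for i in range(n // 2)]

pairs = pair_users(np.array([0.2, 1.5, 0.9, 3.1]))
```

Pairing dissimilar gains keeps the power-domain separation that SIC in NOMA relies on.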

This paper presents a new joint radar and communication technique based on the classical stepped frequency radar waveform. The randomization in the waveform, which is achieved by using permutations of the sequence of frequency tones, is utilized for data transmission. A new signaling scheme is proposed in which the mapping between incoming data and waveforms is performed based on an efficient combinatorial transform called the Lehmer code. Considering optimum maximum likelihood (ML) detection, the union bound and the nearest-neighbour approximation on the communication block error probability are derived for communication over an additive white Gaussian noise (AWGN) channel. The results are further extended to the Rician fading channel model, of which the Rayleigh fading channel model is presented as a special case. Furthermore, an efficient communication receiver implementation is discussed based on the Hungarian algorithm, which achieves optimum performance with much less operational complexity compared to an exhaustive search. From the radar perspective, two key analytical tools, namely the ambiguity function (AF) and the Fisher information matrix, are derived. Furthermore, accurate approximations to the Cramér-Rao lower bounds (CRLBs) on the delay and Doppler estimation errors are derived, based on which the range and velocity estimation accuracy of the waveform is analysed. Numerical examples are used to highlight the accuracy of the analysis and to illustrate the performance of the proposed waveform.
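The Lehmer-code mapping between data integers and tone permutations can be sketched directly (a generic factorial-number-system implementation, not the paper's full transmitter chain):

```python
from math import factorial

def index_to_permutation(k, n):
    """Map an integer k in [0, n!) to the k-th permutation of range(n)
    via its Lehmer code (factorial-number-system digits). In the joint
    radar-communication setting, the permutation orders the frequency
    tones of the stepped-frequency waveform.
    """
    elems = list(range(n))
    perm = []
    for i in range(n - 1, -1, -1):
        digit, k = divmod(k, factorial(i))
        perm.append(elems.pop(digit))
    return perm

def permutation_to_index(perm):
    """Inverse map: recover the data integer from a detected permutation."""
    elems = sorted(perm)
    k = 0
    for i, p in enumerate(perm):
        j = elems.index(p)
        k += j * factorial(len(perm) - 1 - i)
        elems.pop(j)
    return k
```

With n tones the scheme carries log2(n!) bits per waveform, which is why permutation-based signaling scales well with the number of frequency steps.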

In this paper, we investigate covert communication over millimeter-wave (mmWave) frequencies. In particular, a mmWave transmitter, referred to as Alice, attempts to reliably communicate to a receiver, referred to as Bob, while hiding the existence of the communication from a warden, referred to as Willie. In this regard, operating over the mmWave bands not only increases covertness thanks to directional beams, but also increases the transmission data rates given the much larger available bandwidth, and enables ultra-low-form-factor transceivers due to the shorter wavelengths compared to the conventional radio frequency (RF) counterpart. We first assume that the transmitter Alice employs two independent antenna arrays, one of which forms a directive beam for data transmission to Bob. The other antenna array is used by Alice to generate a beam toward Willie as a jamming signal, with its transmit power varied independently across transmission blocks in order to achieve the desired covertness. For this dual-beam setup, we characterize Willie's detection error rate with the optimal detector, together with a closed-form expression for its expected value from Alice's perspective. We then derive the closed-form expression for the outage probability of the Alice-Bob link, which enables characterizing the optimal covert rate that can be achieved using the proposed setup. We further obtain tractable forms for the ergodic capacity of the Alice-Bob link involving only one-dimensional integrals that can be computed in closed form for most ranges of the channel parameters. Finally, we highlight how the results can be extended to more practical scenarios, particularly to cases where perfect information about the location of the passive warden is not available.
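Willie's detection problem can be illustrated with a radiometer toy model: under each hypothesis he averages received power and thresholds it, and his minimum total error P(false alarm) + P(missed detection) shrinks as Alice's SNR at Willie grows. The sketch below is AWGN-only with unit noise power (no fading, dual beams, or randomized jamming, all of which the paper's analysis includes):

```python
import numpy as np

rng = np.random.default_rng(3)

def willie_min_error(snr_alice, n_obs=100, trials=4000):
    """Monte Carlo radiometer at Willie: average received power over
    n_obs samples per block, compare against a threshold, and report
    the minimum over thresholds of P(false alarm) + P(missed detection).
    """
    noise_only = rng.normal(size=(trials, n_obs)) ** 2                       # H0
    with_tx = (np.sqrt(snr_alice) + rng.normal(size=(trials, n_obs))) ** 2   # H1
    t0 = noise_only.mean(axis=1)
    t1 = with_tx.mean(axis=1)
    thresholds = np.linspace(0.5, 3.0, 200)
    total_err = [(t0 > t).mean() + (t1 <= t).mean() for t in thresholds]
    return min(total_err)   # near 1 means Willie can do no better than guessing
```

Covertness corresponds to keeping this minimum total error close to 1, which is what Alice's randomized jamming power is designed to enforce.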

In this paper, we focus on the throughput of random access with power-domain non-orthogonal multiple access (NOMA) and derive bounds on it. In particular, we demonstrate that the throughput expression derived in [1] is an upper bound, and derive a new lower bound as a closed-form expression. This expression allows us to find the traffic intensity that maximizes the lower bound, which is shown to be the square root of the number of power levels in NOMA. Furthermore, with this expression, for a large number of power levels we obtain the asymptotic maximum throughput, which grows with the square root of the number of power levels.

In this letter, we introduce over-the-air computation into the communication design of federated multi-task learning (FMTL), and propose an over-the-air federated multi-task learning (OA-FMTL) framework, where multiple learning tasks deployed on edge devices share a non-orthogonal fading channel under the coordination of an edge server (ES). Specifically, the model updates for all the tasks are transmitted and superimposed concurrently over a non-orthogonal uplink fading channel, and the model aggregations of all the tasks are reconstructed at the ES through a modified version of the turbo compressed sensing algorithm (Turbo-CS) that overcomes inter-task interference. Both convergence analysis and numerical results show that the OA-FMTL framework can significantly improve the system efficiency in terms of reducing the number of channel uses without causing substantial learning performance degradation.
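The superposition step can be sketched with simple channel-inversion precoding (a toy stand-in: the paper instead transmits compressed sparse updates and reconstructs them at the ES with a modified Turbo-CS that handles inter-task interference):

```python
import numpy as np

rng = np.random.default_rng(4)

def ota_aggregate(updates, gains, noise_std=0.05):
    """Analog over-the-air aggregation: each device pre-scales its model
    update by its channel inverse, so the superimposed uplink signal
    received by the edge server equals the sum of updates plus noise.
    """
    tx = [u / h for u, h in zip(updates, gains)]          # per-device precoding
    y = sum(h * x for h, x in zip(gains, tx))             # superposition in the air
    y = y + noise_std * rng.standard_normal(y.shape)      # receiver noise
    return y / len(updates)                               # noisy average of updates

agg = ota_aggregate([np.ones(8), 3 * np.ones(8)], gains=[0.5, 2.0])
```

The key efficiency gain is that all devices use the channel simultaneously: one channel use yields the aggregate, rather than one transmission per device.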

Multi-task learning (MTL) aims to improve the generalization of several related tasks by learning them jointly. As a comparison, in addition to the joint training scheme, modern meta-learning allows unseen tasks with limited labels during the test phase, in the hope of fast adaptation over them. Despite the subtle difference between MTL and meta-learning in the problem formulation, both learning paradigms share the same insight that the shared structure between existing training tasks could lead to better generalization and adaptation. In this paper, we take one important step further to understand the close connection between these two learning paradigms, through both theoretical analysis and empirical investigation. Theoretically, we first demonstrate that MTL shares the same optimization formulation with a class of gradient-based meta-learning (GBML) algorithms. We then prove that for over-parameterized neural networks with sufficient depth, the learned predictive functions of MTL and GBML are close. In particular, this result implies that the predictions given by these two models are similar over the same unseen task. Empirically, we corroborate our theoretical findings by showing that, with proper implementation, MTL is competitive against state-of-the-art GBML algorithms on a set of few-shot image classification benchmarks. Since existing GBML algorithms often involve costly second-order bi-level optimization, our first-order MTL method is an order of magnitude faster on large-scale datasets such as mini-ImageNet. We believe this work could help bridge the gap between these two learning paradigms, and provide a computationally efficient alternative to GBML that also supports fast task adaptation.
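The hard-parameter-sharing view of MTL that the comparison rests on can be written down in a few lines; the linear toy below (illustrative dimensions and tasks, not the paper's deep-network or GBML analysis) trains one shared layer plus per-task heads on a summed loss:

```python
import numpy as np

rng = np.random.default_rng(5)

def mtl_loss(X, Ys, B, W):
    """Summed per-task mean squared error of the shared model."""
    return sum(float(np.mean((X @ B @ W[t] - y) ** 2)) for t, y in enumerate(Ys))

def train_mtl(X, Ys, d_hidden=8, lr=0.05, steps=800):
    """Hard-parameter-sharing MTL: a shared linear layer B and one head
    w_t per task, trained jointly by gradient descent on the summed loss.
    """
    n, d = X.shape
    B = rng.normal(scale=0.1, size=(d, d_hidden))            # shared representation
    W = rng.normal(scale=0.1, size=(len(Ys), d_hidden))      # per-task heads
    for _ in range(steps):
        H = X @ B
        for t, y in enumerate(Ys):
            err = H @ W[t] - y
            W[t] -= lr * H.T @ err / n                       # head update
            B -= lr * np.outer(X.T @ err, W[t]) / n          # shared-layer update
    return B, W

X = rng.normal(size=(100, 4))
Ys = [X[:, 0] + X[:, 1], X[:, 2] - X[:, 3]]                  # two related tasks
B, W = train_mtl(X, Ys)
```

Because both tasks pull on the same B, the learned representation is shaped by their shared structure — the mechanism that the paper argues also underlies gradient-based meta-learning.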
