
In this article, a wavelet-OFDM-based non-orthogonal multiple access (NOMA) scheme combined with a massive MIMO system is proposed for 6G networks. For mMIMO transmissions, the proposed system can enhance performance by using wavelets to compensate for channel impairments on the transmitted signal. Performance measures include spectral efficiency, symbol error rate (SER), and peak-to-average power ratio (PAPR). Simulation results show that the proposed system outperforms conventional OFDM-based NOMA systems.
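As a rough illustration of the PAPR metric mentioned above, the following sketch compares a conventional IFFT-based OFDM symbol with a multicarrier symbol synthesized by an inverse discrete wavelet transform (using PyWavelets with a Haar wavelet). All parameters (256 carriers, QPSK, three decomposition levels) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
N = 256                                   # subcarriers / wavelet coefficients
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Conventional OFDM: map the symbols onto subcarriers with an IFFT.
ofdm_time = np.fft.ifft(qpsk) * np.sqrt(N)

# Wavelet-based multicarrier: treat the symbols as DWT coefficients and
# synthesize the time-domain signal with an inverse DWT (Haar, 3 levels).
template = pywt.wavedec(np.zeros(N), 'haar', level=3)   # only used for shapes
coeffs, idx = [], 0
for c in template:
    coeffs.append(qpsk[idx:idx + len(c)])
    idx += len(c)
wofdm_time = pywt.waverec([c.real for c in coeffs], 'haar') \
             + 1j * pywt.waverec([c.imag for c in coeffs], 'haar')

print(f"OFDM PAPR        : {papr_db(ofdm_time):.2f} dB")
print(f"wavelet-OFDM PAPR: {papr_db(wofdm_time):.2f} dB")
```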

Related Content

With the explosive growth of data and wireless devices, federated learning (FL) has emerged as a promising technology for large-scale intelligent systems. Utilizing the analog superposition of electromagnetic waves, over-the-air computation is an appealing approach to reduce the communication burden of FL model aggregation. However, with the urgent demand for intelligent systems, training multiple tasks with over-the-air computation further aggravates the scarcity of communication resources. This issue can be alleviated to some extent by training multiple tasks simultaneously with shared communication resources, but doing so inevitably introduces inter-task interference. In this paper, we study over-the-air multi-task FL (OA-MTFL) over the multiple-input multiple-output (MIMO) interference channel. We propose a novel model aggregation method that aligns the local gradients of different devices, which alleviates the straggler problem that arises widely in over-the-air computation due to channel heterogeneity. We establish a unified communication-computation analysis framework for the proposed OA-MTFL scheme by considering the spatial correlation between devices, and formulate an optimization problem for transceiver beamforming design and device selection. We develop an algorithm based on alternating optimization (AO) and fractional programming (FP) to solve this problem, which effectively mitigates the impact of inter-task interference on the FL performance. We show that, owing to the new model aggregation method, device selection is no longer essential to our scheme, thereby avoiding the heavy computational burden of implementing device selection. Numerical results demonstrate the correctness of the analysis and the outstanding performance of the proposed scheme.
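The toy sketch below (scalar channels, numpy) is not the paper's MIMO transceiver design; it only illustrates why aligning local gradients matters in over-the-air aggregation: without pre-equalization, devices with strong channels dominate the analog sum, whereas channel-inversion alignment recovers an unbiased average at the cost of being limited by the weakest (straggler) device. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 8, 1000                       # devices, gradient dimension
grads = rng.normal(size=(K, d))      # local gradients (illustrative)
h = rng.rayleigh(scale=1.0, size=K)  # heterogeneous channel gains
sigma = 0.05                         # receiver noise standard deviation

target = grads.mean(axis=0)          # ideal federated average

# (a) Naive superposition: each device transmits its gradient as-is,
#     so the analog sum weights devices by their channel gains.
y_naive = (h[:, None] * grads).sum(axis=0) + sigma * rng.normal(size=d)
est_naive = y_naive / h.sum()

# (b) Aligned superposition: channel-inversion scaling (eta / h_k) at each
#     device is undone by the channel h_k, so every device contributes equally.
eta = h.min()                        # scale fixed by the weakest (straggler) channel
y_aligned = ((eta / h)[:, None] * h[:, None] * grads).sum(axis=0) \
            + sigma * rng.normal(size=d)
est_aligned = y_aligned / (K * eta)

print("MSE, naive superposition  :", np.mean((est_naive - target) ** 2))
print("MSE, aligned superposition:", np.mean((est_aligned - target) ** 2))
```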

In this paper, we study a general multi-cluster wireless powered communication network (WPCN) with user cooperation under the harvest-then-transmit (HTT) protocol, in which the hybrid access point (HAP) as well as each user is equipped with multiple antennas. In the downlink phase of HTT, the HAP employs beamforming to transfer energy to the users. In the uplink phase, users in each cluster transmit their signals to the HAP and to their cluster heads (CHs). Afterward, the CHs first relay the signals of their cluster users and then transmit their own information signals to the HAP. The aim is to design the energy beamforming (EB) matrix, the transmit covariance matrices of the users, and the time allocation between the energy transfer and cooperation phases in order to optimize the max-min and sum throughputs of the network. The corresponding maximization problems are non-convex and NP-hard in general. We devise an iterative algorithm based on alternating optimization (AO), and the minorization-maximization (MM) technique is then used to handle the non-convex sub-problems with respect to (w.r.t.) the EB and covariance matrices in each iteration. We recast the resulting sub-problems as a convex second-order cone program (SOCP) and a quadratically constrained quadratic program (QCQP) for the max-min and sum throughput maximization problems, respectively. We also consider imperfect channel state information (CSI) at the HAP and CHs as well as non-linearity in the energy harvesting (EH) circuits. Numerical examples show that the proposed cooperative method can effectively improve the achievable throughput of multi-cluster wireless powered communication networks under various setups.
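As a minimal picture of the harvest-then-transmit timing trade-off only (the paper's multi-antenna EB/MM/SOCP design is not reproduced here), the sketch below harvests energy during a downlink slot of duration tau0 and spends it on uplink transmission, with a simple grid search over tau0 for the max-min throughput. Channel gains, harvesting efficiency, and the equal uplink time split are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 4
h_dl = rng.exponential(size=K)        # downlink (energy) channel power gains
h_ul = rng.exponential(size=K)        # uplink (information) channel power gains
P_hap, eta, N0 = 10.0, 0.6, 1.0       # HAP transmit power, EH efficiency, noise

def min_throughput(tau0):
    """Minimum user throughput when the remaining time is split equally."""
    tau_k = (1.0 - tau0) / K
    energy = eta * P_hap * tau0 * h_dl                    # harvested energy per user
    rate = tau_k * np.log2(1.0 + energy * h_ul / (tau_k * N0))
    return rate.min()

grid = np.linspace(0.01, 0.99, 99)
best = max(grid, key=min_throughput)
print(f"best tau0 = {best:.2f}, min throughput = {min_throughput(best):.3f} bits/Hz")
```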

The reconfigurable intelligent surface (RIS) has attracted a surge of research interest recently due to its promising outlook in 5G-and-beyond wireless networks. With the assistance of an RIS, the wireless propagation environment is no longer static and can be customized to support diverse service requirements. In this paper, we approach rate maximization problems in RIS-aided wireless networks by considering the beamforming and reflecting designs jointly. Three representative design problems from different system settings are investigated under a proposed unified algorithmic framework based on the block minorization-maximization (BMM) method. Extensions and generalizations of the proposed framework to other related problems are further presented. The merits of the proposed algorithms are demonstrated through numerical simulations in comparison with state-of-the-art methods.
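The sketch below is not the BMM framework itself; it only shows the joint beamforming/reflecting structure being optimized in the simplest single-user baseline, where matched (MRT) transmit beamforming and RIS phases that co-phase the cascaded paths with the direct link are known to be rate-optimal. Dimensions, channel draws, and noise power are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 8, 64                                   # BS antennas, RIS elements
h_d = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)          # BS -> user
G = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(2)  # BS -> RIS
h_r = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)          # RIS -> user
sigma2 = 0.1

def rate(theta):
    """Achievable rate for given RIS phase shifts with MRT beamforming."""
    h_eff = h_d + G.conj().T @ (np.exp(1j * theta) * h_r)   # composite channel
    w = h_eff / np.linalg.norm(h_eff)                        # unit-power MRT beamformer
    return np.log2(1.0 + np.abs(h_eff.conj() @ w) ** 2 / sigma2)

# One alternating step: fix an initial beamformer, then choose each RIS phase so
# that its cascaded path adds coherently with the direct link.
h_eff0 = h_d + G.conj().T @ h_r                # composite channel for theta = 0
w0 = h_eff0 / np.linalg.norm(h_eff0)
cascade = h_r.conj() * (G @ w0)                # n-th cascaded term h_{r,n}^* [G w0]_n
theta_opt = np.angle(cascade) - np.angle(h_d.conj() @ w0)

print(f"rate, random phases : {rate(rng.uniform(0, 2*np.pi, N)):.3f} bits/s/Hz")
print(f"rate, aligned phases: {rate(theta_opt):.3f} bits/s/Hz")
```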

Cell-free massive MIMO is one of the key technologies for future wireless communications, in which users are simultaneously and jointly served by all access points (APs). In this paper, we investigate the minimum mean square error (MMSE) estimation of effective channel coefficients in cell-free massive MIMO systems with massive connectivity. To facilitate the theoretical analysis, only single measurement vector (SMV) based MMSE estimation is considered, i.e., the MMSE estimation is performed based on the received pilot signals at each AP separately. Inspired by the decoupling principle of replica-symmetric postulated MMSE estimation of sparse signal vectors with independent and identically distributed (i.i.d.) non-zero components, we develop the corresponding decoupling principle for SMV based MMSE estimation of sparse signal vectors with independent and non-identically distributed (i.n.i.d.) non-zero components, which plays a key role in the theoretical analysis of SMV based MMSE estimation of the effective channel coefficients in cell-free massive MIMO systems with massive connectivity. Subsequently, based on the obtained decoupling principle, the likelihood ratio test, and the optimal fusion rule, we perform user activity detection based on the received pilot signals at only one AP, or via cooperation among the entire set of APs for centralized or distributed detection. Through theoretical analysis, we show that the error probabilities of both centralized and distributed detection tend to zero as the number of APs tends to infinity while the asymptotic ratio between the numbers of users and pilots is kept constant. We also investigate the asymptotic behavior of oracle estimation in cell-free massive MIMO systems with massive connectivity via random matrix theory.
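As a concrete picture of what a decoupling principle buys, the sketch below implements the scalar MMSE estimator for an i.i.d. Bernoulli-Gaussian coefficient observed in AWGN, together with the per-coefficient activity posterior that supports threshold (likelihood-ratio-style) detection. This is an illustrative i.i.d. simplification, not the paper's i.n.i.d. analysis, and all parameter values are assumptions.

```python
import numpy as np

def bg_mmse(y, p_active, var_x, var_n):
    """Posterior mean E[x | y] for y = x + n, n ~ N(0, var_n),
    x ~ p_active * N(0, var_x) + (1 - p_active) * delta_0."""
    # Likelihoods under the two hypotheses (active / inactive); the common
    # 1/sqrt(2*pi) factor cancels in the normalized posterior below.
    lik_on = np.exp(-y ** 2 / (2 * (var_x + var_n))) / np.sqrt(var_x + var_n)
    lik_off = np.exp(-y ** 2 / (2 * var_n)) / np.sqrt(var_n)
    post_on = p_active * lik_on / (p_active * lik_on + (1 - p_active) * lik_off)
    # Conditional on "active", the estimator is the linear Wiener shrinkage.
    return post_on * (var_x / (var_x + var_n)) * y, post_on

rng = np.random.default_rng(4)
p, vx, vn, n = 0.1, 1.0, 0.01, 10000
x = rng.binomial(1, p, n) * rng.normal(0.0, np.sqrt(vx), n)
y = x + rng.normal(0.0, np.sqrt(vn), n)
x_hat, act_prob = bg_mmse(y, p, vx, vn)
print("MMSE estimate MSE:", np.mean((x_hat - x) ** 2))
print("activity detection error (threshold 0.5):",
      np.mean((act_prob > 0.5) != (x != 0)))
```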

Orthogonal frequency division multiplexing (OFDM) has been widely applied in current communication systems. Artificial intelligence (AI)-aided OFDM receivers have recently been brought to the forefront to replace and improve upon traditional OFDM receivers. In this study, we first compare two AI-aided OFDM receivers, namely a data-driven fully connected deep neural network and the model-driven ComNet, through extensive simulation and real-time video transmission using a 5G rapid prototyping system for an over-the-air (OTA) test. We find a performance gap between the simulation and the OTA test caused by the discrepancy between the channel model used for offline training and the real environment. To address this issue, we develop a novel online-training system, called the SwitchNet receiver. This receiver has a flexible and extendable architecture and can adapt to real channels by training only a few parameters online. The OTA test shows that the AI-aided OFDM receivers, especially the SwitchNet receiver, are robust to real environments and promising for future communication systems. We discuss potential challenges and future research directions inspired by this initial study.
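The snippet below is a toy stand-in for the "train only a few parameters online" idea, not the actual SwitchNet architecture: two frozen equalizer branches (pretrained offline for different channel models, here just scaled identity matrices) are blended by a single weight alpha, and only alpha is adapted on freshly received pilots by gradient descent. All matrices and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 64
# Two frozen equalizer "branches" pretrained offline for two channel models.
W_a = np.eye(d) * 0.9
W_b = np.eye(d) * 1.3
H_real = np.eye(d) * 0.8                 # true (unknown) channel, illustrative

pilots = rng.normal(size=(200, d))       # known transmitted pilots
received = pilots @ H_real               # pilots after the real channel

alpha, lr = 0.5, 0.5                     # alpha is the only parameter trained online
for _ in range(200):
    out = received @ (alpha * W_a + (1 - alpha) * W_b)
    err = out - pilots
    # d(MSE)/d(alpha): chain rule through the convex combination of branches.
    grad = 2 * np.mean(err * (received @ (W_a - W_b)))
    alpha -= lr * grad

print(f"learned blend weight alpha = {alpha:.3f}")   # ideal value here is 0.125
```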

This paper analyzes how the distortion created by hardware impairments in a multiple-antenna base station affects the uplink spectral efficiency (SE), with a focus on Massive MIMO. This distortion is correlated across the antennas, but it has often been approximated as uncorrelated to facilitate (tractable) SE analysis. To determine when this approximation is accurate, we first uncover basic properties of distortion correlation. Then, we separately analyze the distortion correlation caused by third-order non-linearities and by quantization. Finally, we study the SE numerically and show that the distortion correlation can be safely neglected in Massive MIMO when there are sufficiently many users. Under i.i.d. Rayleigh fading and equal signal-to-noise ratios (SNRs), this occurs for more than five transmitting users. Other channel models and SNR variations have only a minor impact on the accuracy. We also demonstrate the importance of taking the distortion characteristics into account in the receive combining.
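A small Monte Carlo sketch of the effect under study: with a memoryless third-order non-linearity applied per antenna, the distortion at two antennas is fully correlated when a single user transmits and becomes progressively less correlated as more users with independent channels are added. The non-linearity model, scaling, and sample sizes are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(6)
M, T = 2, 200_000                 # antennas, signal samples

def distortion_corr(K):
    """Correlation magnitude of third-order distortion across two antennas."""
    H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
    S = (rng.normal(size=(K, T)) + 1j * rng.normal(size=(K, T))) / np.sqrt(2 * K)
    U = H @ S                                   # undistorted received signal
    D = U * np.abs(U) ** 2                      # memoryless third-order distortion
    c = np.mean(D[0] * np.conj(D[1]))
    return np.abs(c) / np.sqrt(np.mean(np.abs(D[0]) ** 2) * np.mean(np.abs(D[1]) ** 2))

for K in (1, 2, 5, 20):
    print(f"K = {K:2d} users: |distortion correlation| = {distortion_corr(K):.3f}")
```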

In this paper, we derive asymptotic expressions for the ergodic capacity of the multiple-input multiple-output (MIMO) keyhole channel at low SNR under independent and identically distributed (i.i.d.) Nakagami-$m$ fading, with perfect channel state information available at both the transmitter (CSI-T) and the receiver (CSI-R). We show that the low-SNR capacity of this keyhole channel scales as $\frac{\mathrm{SNR}}{4} \log^2 \left(1/{\mathrm{SNR}}\right)$. With this asymptotic low-SNR capacity formula, we obtain the surprising result that, contrary to popular belief, the capacity of the MIMO fading channel at low SNR increases in the presence of the keyhole degenerate condition. Additionally, we show that a simple one-bit CSI-T based On-Off power scheme achieves this low-SNR capacity; surprisingly, it is robust against both moderate and severe fading for a wide range of low SNR values. These results also extend to the Rayleigh keyhole MIMO channel as a special case.
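A quick numerical sketch of the stated scaling (a natural-logarithm convention is assumed here), evaluated against the AWGN capacity as a reference point; it only illustrates how the $\log^2(1/\mathrm{SNR})$ factor dominates at low SNR.

```python
import numpy as np

# Asymptotic low-SNR capacity from the abstract, (SNR/4) * log^2(1/SNR),
# compared against the AWGN capacity log(1 + SNR) as a simple reference.
for snr_db in (-30, -25, -20, -15, -10):
    snr = 10 ** (snr_db / 10)
    c_keyhole = (snr / 4) * np.log(1 / snr) ** 2
    c_awgn = np.log(1 + snr)
    print(f"SNR = {snr_db:4d} dB: keyhole asymptote {c_keyhole:.5f} nats, "
          f"AWGN {c_awgn:.5f} nats, ratio {c_keyhole / c_awgn:.2f}")
```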

Optical wireless communication (OWC) over atmospheric turbulence and pointing errors is a well-studied topic, but there is limited research on signal fading due to random fog in outdoor terrestrial wireless communications. In this paper, we analyze the performance of decode-and-forward (DF) relaying under the combined effect of random fog, pointing errors, and atmospheric turbulence with a negligible line-of-sight (LOS) direct link. We consider a generalized model for the end-to-end channel with independent and not identically distributed (i.n.i.d.) pointing errors, random fog with a Gamma-distributed attenuation coefficient, double generalized gamma (DGG) atmospheric turbulence, and an asymmetrical distance between the source and destination. We derive the density and distribution functions of the signal-to-noise ratio (SNR) under the combined effect of random fog, pointing errors, and atmospheric turbulence (the FPT channel), and the distribution function of the combined channel with random fog and pointing errors (the FP channel). Using the derived statistical results, we present analytical expressions for the outage probability, average SNR, ergodic rate, and average bit error rate (BER) of both the FP and FPT channels in terms of OWC system parameters. We also develop simplified and asymptotic analyses to provide analytical insight into the system behavior under various practically relevant scenarios. We demonstrate the mutual effects of channel impairments and pointing errors on the OWC performance, and show that the relaying system provides significant performance improvement compared with direct transmission, especially when pointing errors and fog become more pronounced.
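The simplified Monte Carlo sketch below captures only the DF relaying structure (end-to-end SNR limited by the weaker hop) and the idea of combining fog, turbulence, and pointing-error losses; the distributions used are placeholder stand-ins, not the paper's Gamma/DGG/i.n.i.d. models, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
snr0 = 1e4                                     # average electrical SNR per hop

def hop_gain(size, dist_km):
    """Illustrative composite optical gain: Gamma-distributed fog attenuation
    coefficient (dB/km), lognormal turbulence, and a Beta-like pointing loss."""
    fog_db_per_km = rng.gamma(shape=2.0, scale=2.0, size=size)
    fog = 10 ** (-fog_db_per_km * dist_km / 10)
    turb = rng.lognormal(mean=-0.1, sigma=0.45, size=size)   # stand-in for DGG
    point = rng.beta(4.0, 1.0, size=size)                    # stand-in pointing loss
    return fog * turb * point

g1 = hop_gain(n, dist_km=0.6)                  # source -> relay
g2 = hop_gain(n, dist_km=1.0)                  # relay -> destination (asymmetric)
snr_e2e = snr0 * np.minimum(g1, g2) ** 2       # DF: weaker hop dominates (IM/DD)

gamma_th = 10.0
print("outage probability:", np.mean(snr_e2e < gamma_th))
```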

Retrosynthetic planning is a fundamental problem in chemistry: finding a pathway of reactions to synthesize a target molecule. Recently, search algorithms have shown promising results for solving this problem by using deep neural networks (DNNs) to expand candidate solutions, i.e., to add new reactions to reaction pathways. However, existing works along this line are suboptimal; the retrosynthetic planning problem requires the reaction pathways to be (a) represented by real-world reactions and (b) executable using "building block" molecules, yet the DNNs expand reaction pathways without fully incorporating these requirements. Motivated by this, we propose an end-to-end framework for directly training the DNNs towards generating reaction pathways with the desired properties. Our main idea is a self-improving procedure that trains the model to imitate successful trajectories found by itself. We also propose a novel reaction augmentation scheme based on a forward reaction model. Our experiments demonstrate that our scheme significantly improves the success rate of retrosynthetic planning from 86.84% to 96.32% while maintaining the performance of the DNN in predicting valid reactions.
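The snippet below is only a toy analogue of the self-improving loop (integer factorization standing in for retro-reactions, small primes for building blocks); it shows the structure of searching with the current policy and imitating successful trajectories, not the paper's DNN training or reaction augmentation. All names and scoring rules are hypothetical.

```python
import random

random.seed(0)
BUILDING_BLOCKS = {2, 3, 5, 7}          # toy "purchasable" molecules

def expansions(n):
    """Toy retro-step: all ways to decompose n into two factors."""
    return [(a, n // a) for a in range(2, int(n ** 0.5) + 1) if n % a == 0]

def search(target, policy, depth=10):
    """Greedy depth-limited search guided by policy scores; returns the list
    of chosen expansions if every leaf ends in a building block, else None."""
    frontier, traj = [target], []
    for _ in range(depth):
        frontier = [m for m in frontier if m not in BUILDING_BLOCKS]
        if not frontier:
            return traj                              # success
        m = frontier.pop()
        cands = expansions(m)
        if not cands:
            return None                              # dead end (e.g. large prime)
        choice = max(cands, key=lambda c: policy.get((m, c), random.random()))
        traj.append((m, choice))
        frontier.extend(choice)
    return None

# Self-improvement: reward expansions that appeared in successful trajectories,
# so later searches imitate routes the model itself found to be executable.
policy = {}
for rnd in range(3):
    solved = 0
    for target in range(20, 200):
        traj = search(target, policy)
        if traj is not None:
            solved += 1
            for step in traj:
                policy[step] = policy.get(step, 0.0) + 1.0
    print(f"round {rnd}: solved {solved} / 180 toy targets")
```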

Importance sampling is one of the most widely used variance reduction strategies in Monte Carlo rendering. In this paper, we propose a novel importance sampling technique that uses a neural network to learn how to sample from a desired density represented by a set of samples. Our approach considers an existing Monte Carlo rendering algorithm as a black box. During a scene-dependent training phase, we learn to generate samples with a desired density in the primary sample space of the rendering algorithm using maximum likelihood estimation. We leverage a recent neural network architecture that was designed to represent real-valued non-volume preserving ('Real NVP') transformations in high dimensional spaces. We use Real NVP to non-linearly warp primary sample space and obtain desired densities. In addition, Real NVP efficiently computes the determinant of the Jacobian of the warp, which is required to implement the change of integration variables implied by the warp. A main advantage of our approach is that it is agnostic of underlying light transport effects, and can be combined with many existing rendering techniques by treating them as a black box. We show that our approach leads to effective variance reduction in several practical scenarios.
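As a minimal illustration of the machinery described, here is one untrained Real NVP affine coupling layer in numpy: half of the primary-sample-space coordinates pass through unchanged, the other half are warped by scale and translation functions of the first half, the inverse is exact, and the log-determinant of the Jacobian is just the sum of the log-scales. The tiny random "networks" and dimensions are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
D = 4                                     # primary sample space dimension

# Tiny stand-ins for the scale/translation networks (random weights, untrained).
Ws, bs = 0.1 * rng.normal(size=(D // 2, D // 2)), np.zeros(D // 2)
Wt, bt = 0.1 * rng.normal(size=(D // 2, D // 2)), np.zeros(D // 2)

def coupling_forward(x):
    """Affine coupling layer: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1)."""
    x1, x2 = x[:, :D // 2], x[:, D // 2:]
    s = np.tanh(x1 @ Ws + bs)             # log-scale, kept bounded for stability
    t = x1 @ Wt + bt
    y = np.concatenate([x1, x2 * np.exp(s) + t], axis=1)
    log_det_jac = s.sum(axis=1)           # cheap Jacobian determinant of the warp
    return y, log_det_jac

def coupling_inverse(y):
    """Exact inverse: x2 = (y2 - t(y1)) * exp(-s(y1))."""
    y1, y2 = y[:, :D // 2], y[:, D // 2:]
    s = np.tanh(y1 @ Ws + bs)
    t = y1 @ Wt + bt
    return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=1)

u = rng.uniform(size=(5, D))              # primary-sample-space points
warped, logdet = coupling_forward(u)
print("max inversion error  :", np.abs(coupling_inverse(warped) - u).max())
print("log|det J| per sample:", logdet)
```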
