
Reconfigurable Intelligent Surfaces (RIS) are planar structures connected to electronic circuitry that can steer electromagnetic signals in a controlled manner, substantially improving signal quality and the effective data rate. While the benefits of RIS-assisted wireless communications have been investigated for various scenarios, aspects of network design such as coverage and optimal RIS placement often require complex optimization and numerical simulation, since the achievable effective rate is difficult to predict. The problem becomes even harder in the presence of phase estimation errors or location uncertainty, which can cause substantial performance degradation if neglected. Considering randomly distributed receivers within a ring-shaped RIS-assisted wireless network, this paper investigates the effective rate while taking the above-mentioned impairments into account. Furthermore, exact closed-form expressions for the effective rate are derived in terms of Meijer's $G$-function, which (i) reveal that location and phase estimation uncertainty should be carefully accounted for when deploying RIS in wireless networks; and (ii) facilitate future network design and performance prediction.
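The paper's closed-form rate expressions are stated in terms of Meijer's $G$-function. As a minimal illustration (not the paper's actual expressions), the standard identity $\ln(1+x) = G^{1,2}_{2,2}\bigl(x \,\big|\, {1,1 \atop 1,0}\bigr)$, a common building block in ergodic-rate derivations of the form $\mathbb{E}[\ln(1+\mathrm{SNR})]$, can be checked numerically with mpmath:

```python
import math
from mpmath import meijerg

def log1p_meijerg(x):
    # ln(1+x) as a Meijer G-function: G^{1,2}_{2,2}(x | (1,1); (1,0)).
    # meijerg takes a_s = [[a_1..a_n], [a_{n+1}..a_p]] and
    # b_s = [[b_1..b_m], [b_{m+1}..b_q]].
    return float(meijerg([[1, 1], []], [[1], [0]], x).real)

for x in (0.5, 1.0, 10.0):
    print(f"x={x:5.1f}  G-form={log1p_meijerg(x):.6f}  ln(1+x)={math.log1p(x):.6f}")
```

Closed forms like this are useful precisely because they replace Monte Carlo rate simulations with a single special-function evaluation.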

Related Content

Intelligent reflecting surface (IRS) is a promising technology for beyond 5G wireless communications. In fully passive IRS-assisted systems, channel estimation is challenging and should be carried out only at the base station or at the terminals since the elements of the IRS are incapable of processing signals. In this letter, we formulate a tensor-based semi-blind receiver that solves the joint channel and symbol estimation problem in an IRS-assisted multi-user multiple-input multiple-output system. The proposed approach relies on a generalized PARATUCK tensor model of the signals reflected by the IRS, based on a two-stage closed-form semi-blind receiver using Khatri-Rao and Kronecker factorizations. Simulation results demonstrate the superior performance of the proposed semi-blind receiver, in terms of the normalized mean squared error and symbol error rate, as well as a lower computational complexity, compared to recently proposed parallel factor analysis-based receivers.
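The full PARATUCK-based receiver is beyond a short sketch, but its core closed-form building block can be illustrated: recovering two factor matrices from their column-wise Khatri-Rao product by a rank-one SVD per column. The code below is a generic sketch of that factorization step, not the paper's complete two-stage receiver:

```python
import numpy as np

rng = np.random.default_rng(0)

def khatri_rao(A, B):
    # Column-wise Kronecker product: column r is kron(A[:, r], B[:, r]).
    I, R = A.shape
    J, _ = B.shape
    return np.einsum("ir,jr->ijr", A, B).reshape(I * J, R)

def khatri_rao_factorize(X, I, J):
    # Recover A (I x R) and B (J x R) from X = khatri_rao(A, B), up to a
    # per-column scaling ambiguity, via a rank-one SVD of each reshaped column.
    R = X.shape[1]
    A_hat = np.zeros((I, R))
    B_hat = np.zeros((J, R))
    for r in range(R):
        M = X[:, r].reshape(I, J)          # equals outer(A[:, r], B[:, r])
        U, s, Vt = np.linalg.svd(M)
        A_hat[:, r] = np.sqrt(s[0]) * U[:, 0]
        B_hat[:, r] = np.sqrt(s[0]) * Vt[0, :]
    return A_hat, B_hat

A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 3))
X = khatri_rao(A, B)
A_hat, B_hat = khatri_rao_factorize(X, 4, 5)
print(np.max(np.abs(khatri_rao(A_hat, B_hat) - X)))   # near machine precision
```

The scaling/sign ambiguity per column is inherent; in semi-blind receivers it is typically resolved with a few known pilot symbols.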

Many descent methods for multiobjective optimization problems have been developed in recent years. In 2000, the steepest descent method was proposed for differentiable multiobjective optimization problems; afterward, the proximal gradient method, which can solve composite problems, was also considered. However, accelerated versions have not been sufficiently studied. In this paper, we propose a multiobjective accelerated proximal gradient algorithm, in which we solve subproblems with terms that appear only in the multiobjective case. We also establish the proposed method's global convergence rate of $O(1/k^2)$ under reasonable assumptions, using a merit function to measure complexity. Moreover, we present an efficient way to solve the subproblem via its dual, and we confirm the validity of the proposed method through preliminary numerical experiments.
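The paper's algorithm handles multiple objectives through a specialized subproblem; for intuition only, here is the single-objective accelerated proximal gradient method (FISTA) that it generalizes, applied to a toy $\ell_1$-regularized least-squares problem. This is a generic sketch, not the multiobjective algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient of f

def obj(x):
    # Composite objective: smooth f + nonsmooth g.
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(10)
y = x.copy()
t = 1.0
history = [obj(x)]
for _ in range(200):
    # Proximal gradient step at the extrapolated point y.
    x_new = soft_threshold(y - (A.T @ (A @ y - b)) / L, lam / L)
    # Nesterov momentum update: this extrapolation yields the O(1/k^2) rate.
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = x_new + ((t - 1) / t_new) * (x_new - x)
    x, t = x_new, t_new
    history.append(obj(x))
print(history[0], history[-1])
```

In the multiobjective setting, the single proximal step is replaced by a subproblem coupling all objectives, which the paper proposes to solve via its dual.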

This work studies an experimental design problem where the values of a predictor variable, denoted by $x$, are to be determined with the goal of estimating a function $m(x)$, which is observed with noise. A linear model is fitted to $m(x)$ but it is not assumed that the model is correctly specified. It follows that the quantity of interest is the best linear approximation of $m(x)$, which is denoted by $\ell(x)$. It is shown that in this framework the ordinary least squares estimator typically leads to an inconsistent estimation of $\ell(x)$, and rather weighted least squares should be considered. An asymptotic minimax criterion is formulated for this estimator, and a design that minimizes the criterion is constructed. An important feature of this problem is that the $x$'s should be random, rather than fixed. Otherwise, the minimax risk is infinite. It is shown that the optimal random minimax design is different from its deterministic counterpart, which was studied previously, and a simulation study indicates that it generally performs better when $m(x)$ is a quadratic or a cubic function. Another finding is that when the variance of the noise goes to infinity, the random and deterministic minimax designs coincide. The results are illustrated for polynomial regression models and the general case is also discussed.
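A toy sketch (not the paper's minimax design) of the central point: under misspecification, weighting least squares by (target density)/(design density) recovers the best linear approximation $\ell(x)$. For $m(x) = x^2$ on $[0,1]$ with a uniform target, the best linear $L^2$ approximation is $\ell(x) = x - 1/6$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Design density p(x) = 0.5 + x on [0, 1]; sample by inverting the CDF
# F(x) = 0.5 x + 0.5 x^2, i.e. x = (-1 + sqrt(1 + 8u)) / 2.
u = rng.random(n)
x = (-1.0 + np.sqrt(1.0 + 8.0 * u)) / 2.0

# Misspecified truth: m(x) = x^2, observed with noise; a line is fitted.
y = x ** 2 + 0.1 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])
w = 1.0 / (0.5 + x)        # weight = target (uniform) density / design density

# Weighted least squares: solves (X' W X) beta = X' W y.
beta_wls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
print(beta_wls)            # approx (-1/6, 1), i.e. the target line x - 1/6
```

Unweighted OLS on the same data instead converges to the best linear fit under the *design* density, which differs from $\ell(x)$ whenever $m$ is nonlinear.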

We present a framework for operating a self-adaptive RIS inside a fading rich-scattering wireless environment. We model the rich-scattering wireless channel as being doubly parametrized by (i) the RIS, and (ii) dynamic perturbers (moving objects, etc.). Within each coherence time, the self-adaptive RIS first estimates the status of the dynamic perturbers (e.g., their orientations and locations) based on measurements with an auxiliary wireless channel. Second, using a learned surrogate forward model of the mapping from RIS configuration and perturber status to wireless channel, an optimized RIS configuration achieving a desired functionality is obtained. We demonstrate our technique using a physics-based end-to-end model of RIS-parametrized communication with adjustable fading (PhysFad) for the example objective of maximizing the received signal strength indicator. Our results present a route toward convergence of RIS-empowered localization and sensing with RIS-empowered channel shaping beyond the simple case of operation in free space without fading.

The reconfigurable intelligent surface (RIS) technology is a promising enabler for millimeter wave (mmWave) wireless communications, as it can potentially provide spectral efficiency comparable to conventional massive multiple-input multiple-output (MIMO) but with significantly lower hardware complexity. In this paper, we focus on the estimation and projection of the uplink RIS-aided massive MIMO channel, which can be time-varying. We propose to let the user equipments (UEs) transmit Zadoff-Chu (ZC) sequences and let the base station (BS) conduct maximum likelihood (ML) estimation of the uplink channel. The proposed scheme is computationally efficient: it uses ZC sequences to decouple the estimation of the frequency and time offsets, and it uses the space-alternating generalized expectation-maximization (SAGE) method to decompose the high-dimensional multipath estimation problem into multiple lower-dimensional per-path problems. Owing to the estimation of the Doppler frequency offsets, the time-varying channel state can be projected, which can significantly lower the pilot overhead for channel estimation. Numerical simulations verify the effectiveness of the proposed scheme.
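The full ML/SAGE estimator is beyond a short sketch, but the properties that make ZC pilots attractive are easy to demonstrate: constant amplitude (low transmit PAPR) and an ideal, impulse-like cyclic autocorrelation, which is what lets timing and frequency offsets be decoupled. A minimal check:

```python
import numpy as np

def zadoff_chu(u, N):
    # Root-u Zadoff-Chu sequence of odd length N, with gcd(u, N) = 1.
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

N, u = 63, 25                      # gcd(25, 63) = 1
x = zadoff_chu(u, N)

# Constant amplitude: every sample lies on the unit circle.
print(np.max(np.abs(np.abs(x) - 1.0)))

# Ideal cyclic autocorrelation: a single peak at lag 0, zero elsewhere.
R = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(x)))
print(np.abs(R[0]), np.max(np.abs(R[1:])))
```

A correlator at the receiver therefore sees one sharp peak whose position gives the time offset, while the residual phase ramp across repeated pilots reveals the frequency offset.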

Machine learning algorithms have recently been considered for many tasks in the field of wireless communications. Previously, we have proposed the use of a deep fully convolutional neural network (CNN) for receiver processing and shown it to provide considerable performance gains. In this study, we focus on machine learning algorithms for the transmitter. In particular, we consider beamforming and propose a CNN which, for a given uplink channel estimate as input, outputs downlink channel information to be used for beamforming. The CNN is trained in a supervised manner considering both uplink and downlink transmissions with a loss function that is based on UE receiver performance. The main task of the neural network is to predict the channel evolution between uplink and downlink slots, but it can also learn to handle inefficiencies and errors in the whole chain, including the actual beamforming phase. The provided numerical experiments demonstrate the improved beamforming performance.

Reconfigurable intelligent surface (RIS) is considered an extraordinarily promising technology for solving the blockage problem of millimeter wave (mmWave) communications, owing to its capability of establishing a reconfigurable wireless propagation environment. In this paper, we focus on an RIS-assisted mmWave communication network consisting of multiple base stations (BSs) serving a set of user equipments (UEs). Considering the BS-RIS-UE association problem, which determines which BS and UEs each RIS should assist, we jointly optimize the BS-RIS-UE association and the passive beamforming at the RIS to maximize the sum-rate of the system. To solve this intractable non-convex problem, we propose a soft actor-critic (SAC) deep reinforcement learning (DRL)-based joint beamforming and BS-RIS-UE association design algorithm, which can learn the best policy by interacting with the environment using little prior information, and which avoids local optima by incorporating the maximization of the policy's information entropy. Simulation results demonstrate that the proposed SAC-DRL algorithm achieves significant performance gains over benchmark schemes.

Ensemble methods based on subsampling, such as random forests, are popular in applications due to their high predictive accuracy. Existing literature views a random forest prediction as an infinite-order incomplete U-statistic to quantify its uncertainty. However, these methods focus on a small subsampling size of each tree, which is theoretically valid but practically limited. This paper develops an unbiased variance estimator based on incomplete U-statistics, which allows the tree size to be comparable with the overall sample size, making statistical inference possible in a broader range of real applications. Simulation results demonstrate that our estimators enjoy lower bias and more accurate confidence interval coverage without additional computational costs. We also propose a local smoothing procedure to reduce the variation of our estimator, which shows improved numerical performance when the number of trees is relatively small. Further, we investigate the ratio consistency of our proposed variance estimator under specific scenarios. In particular, we develop a new "double U-statistic" formulation to analyze the Hoeffding decomposition of the estimator's variance.
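The object the paper studies can be illustrated with a minimal example (this is not the paper's variance estimator): an incomplete U-statistic averages a kernel over $B$ randomly drawn subsamples instead of all $\binom{n}{k}$ of them, exactly as a forest averages over $B$ subsampled trees rather than every possible tree:

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)
x = rng.random(40)
# Kernel h(x_i, x_j) = |x_i - x_j| (Gini mean difference), order k = 2.

# Complete U-statistic: average over all C(40, 2) = 780 pairs.
pairs = list(itertools.combinations(range(40), 2))
u_complete = np.mean([abs(x[i] - x[j]) for i, j in pairs])

# Incomplete U-statistic: average over B randomly drawn pairs, analogous
# to a forest built from B subsampled trees instead of all subsamples.
B = 50_000
idx = rng.integers(0, 40, size=(B, 2))
mask = idx[:, 0] != idx[:, 1]            # discard degenerate pairs
u_incomplete = np.mean(np.abs(x[idx[mask, 0]] - x[idx[mask, 1]]))
print(u_complete, u_incomplete)
```

The paper's contribution concerns estimating the *variance* of such statistics when the subsample (tree) size is comparable to $n$, a regime where the classical small-subsample theory breaks down.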

Due to their increasing spread, confidence in neural network predictions has become more and more important. However, basic neural networks do not deliver certainty estimates and may suffer from over- or under-confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge in this field. A comprehensive introduction to the most crucial sources of uncertainty is given, together with their separation into reducible model uncertainty and irreducible data uncertainty. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation approaches is introduced, and different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty and approaches for the calibration of neural networks, and give an overview of existing baselines and implementations. Examples from the wide spectrum of challenges in different fields give an idea of the needs and challenges regarding uncertainty in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given.
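A minimal numpy sketch (not from the survey) of the ensemble idea it reviews: model (epistemic) uncertainty estimated as the disagreement between ensemble members trained on bootstrap resamples. This disagreement grows away from the training data, while the data (aleatoric) noise level stays fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data on [0, 1]: y = sin(2*pi*x) + aleatoric noise (sd = 0.1).
n = 100
x = rng.random(n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

# Ensemble: linear models fitted on bootstrap resamples of the data.
ensemble = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    X = np.column_stack([np.ones(n), x[idx]])
    ensemble.append(np.linalg.lstsq(X, y[idx], rcond=None)[0])
ensemble = np.array(ensemble)            # (200, 2): intercept and slope

def epistemic_std(x_star):
    # Spread of member predictions = model uncertainty at x_star.
    preds = ensemble @ np.array([1.0, x_star])
    return preds.std()

print(epistemic_std(0.5), epistemic_std(3.0))   # larger far from the data
```

Deep ensembles apply the same recipe with neural networks instead of lines; Bayesian neural networks and test-time augmentation obtain the predictive spread from posterior samples or input perturbations instead of retrained members.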

Click-through rate (CTR) estimation serves as a core function module in various personalized online services, including online advertising, recommender systems, and web search. Since 2015, the success of deep learning has benefited CTR estimation performance, and deep CTR models are now widely deployed on many industrial platforms. In this survey, we provide a comprehensive review of deep learning models for CTR estimation tasks. First, we review the transition from shallow to deep CTR models and explain why going deep is a necessary trend of development. Second, we concentrate on the explicit feature-interaction learning modules of deep CTR models. Then, as an important perspective on large platforms with abundant user histories, deep behavior models are discussed. Moreover, the recently emerged automated methods for deep CTR architecture design are presented. Finally, we summarize the survey and discuss the future prospects of this field.
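As a concrete instance of the explicit feature-interaction modules such surveys cover, the classic factorization machine (FM) second-order term can be computed in $O(kd)$ instead of the naive $O(kd^2)$ pairwise sum. A generic sketch, not tied to any specific deep CTR model:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4                      # d features, k-dimensional latent factors
x = rng.random(d)                # feature vector (e.g., dense or one-hot)
V = rng.standard_normal((d, k))  # one latent embedding v_i per feature

# Naive pairwise interactions: sum_{i<j} <v_i, v_j> x_i x_j   -- O(k d^2)
naive = sum(V[i] @ V[j] * x[i] * x[j]
            for i in range(d) for j in range(i + 1, d))

# FM reformulation: 0.5 * sum_f [ (sum_i V_if x_i)^2
#                                 - sum_i V_if^2 x_i^2 ]      -- O(k d)
fast = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))

print(naive, fast)               # identical up to floating point
```

Deep CTR models such as DeepFM embed this term alongside an MLP; later interaction modules generalize the inner product to attention or cross layers.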
