
The multi-user Holographic Multiple-Input Multiple-Output Surface (MU-HMIMOS) paradigm, which is capable of realizing large continuous apertures with minimal power consumption, has recently been considered as an energy-efficient solution for future wireless networks, offering increased flexibility in shaping electromagnetic wave propagation according to the desired communication, localization, and sensing objectives. Tractable channel modeling of MU-HMIMOS systems is one of the most critical challenges, mainly due to the coupling effect induced by the excessively large number of closely spaced patch antennas. In this paper, we address this challenge for downlink multi-user communications and model the electromagnetic channel in the wavenumber domain using the Fourier plane-wave representation. Based on the proposed channel model, we devise maximum-ratio transmission and Zero-Forcing (ZF) precoding schemes that capitalize on the sampled channel variance, which depends on the number and spacing of the patch antennas in MU-HMIMOS, and present their analytical spectral efficiency performance. Moreover, we propose a low-complexity ZF precoding scheme that leverages a Neumann series expansion to replace the matrix inversion, since direct matrix inversion is practically infeasible when the number of patch antennas is extremely large. Our extensive simulation results showcase the impact of the number of patch antennas and their spacing on the spectral efficiency of the considered systems. It is shown that more patch antennas and larger spacing improve performance due to the decreased correlation among the patches.
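The Neumann-series substitute for the Gram-matrix inverse in ZF precoding can be sketched in a few lines. This is only an illustrative toy (the dimensions, the diagonal preconditioner, and the truncation order are assumptions, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 64                      # users, patch antennas (illustrative sizes)
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2 * N)

G = H @ H.conj().T                # K x K Gram matrix whose inverse ZF needs
D_inv = np.diag(1.0 / np.diag(G)) # diagonal preconditioner

# Truncated Neumann series: G^{-1} ~= sum_{k=0}^{L} M^k D^{-1}, M = I - D^{-1} G
L = 8
M = np.eye(K) - D_inv @ G
G_inv_approx = np.zeros_like(G)
term = np.eye(K, dtype=complex)
for _ in range(L + 1):
    G_inv_approx += term @ D_inv
    term = term @ M

W_zf = H.conj().T @ np.linalg.inv(G)   # exact ZF precoder for comparison
W_neu = H.conj().T @ G_inv_approx      # Neumann-series approximation

err = np.linalg.norm(W_zf - W_neu) / np.linalg.norm(W_zf)
print(f"relative precoder error after {L + 1} terms: {err:.2e}")
```

The series converges because, with many patch antennas per user, the Gram matrix is diagonally dominant, so the iteration matrix has spectral radius below one; each extra term then costs only a K x K matrix multiply instead of a cubic-cost inversion.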

Related Content

Massive multiple-input multiple-output (MIMO) is believed to deliver unprecedented spectral efficiency gains for 5G and beyond. However, a practical challenge arises during its commercial deployment, known as the "curse of mobility": the performance of massive MIMO drops alarmingly as user velocity increases. In this paper, we tackle this problem in frequency division duplex (FDD) massive MIMO with a novel Channel State Information (CSI) acquisition framework. A joint angle-delay-Doppler (JADD) wideband beamformer is proposed for channel training. Our idea consists in exploiting the partial channel reciprocity of FDD and the angle-delay-Doppler channel structure. More precisely, the base station (BS) estimates the angle-delay-Doppler information of the uplink (UL) channel from UL pilots using the Matrix Pencil method. It then computes the wideband JADD beamformers according to the extracted parameters. Afterwards, the user estimates and feeds back a few scalar coefficients with which the BS reconstructs the predicted downlink (DL) channel. Asymptotic analysis shows that the CSI prediction error converges to zero as the number of BS antennas and the bandwidth increase. Numerical results with an industrial channel model demonstrate that our framework adapts well to high speeds (350 km/h), large CSI delays (10 ms), and channel sample noise.
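The Matrix Pencil step at the heart of the parameter extraction can be illustrated on a noiseless toy signal. The mode frequencies, amplitudes, and sizes below are made up for the sketch; the paper's estimator operates on actual UL pilot measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
z_true = np.exp(1j * 2 * np.pi * np.array([0.12, 0.31, 0.47]))  # toy "Doppler" modes
amps = np.array([1.0, 0.7, 0.4])
N, p = 40, 3                                   # samples, model order

n = np.arange(N)
x = (amps[None, :] * z_true[None, :] ** n[:, None]).sum(axis=1)

# Hankel data matrix with pencil parameter Lp
Lp = N // 2
Y = np.array([x[i:i + Lp + 1] for i in range(N - Lp)])  # (N-Lp) x (Lp+1)
Y1, Y2 = Y[:, :-1], Y[:, 1:]

# Nonzero eigenvalues of pinv(Y1) @ Y2 are the exponential modes z_i
vals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
est = vals[np.argsort(np.abs(np.abs(vals) - 1.0))[:p]]  # keep modes on the unit circle
freqs = np.sort(np.angle(est) / (2 * np.pi) % 1.0)
print("estimated normalized frequencies:", np.round(freqs, 3))
```

With noisy pilots one would first denoise via a rank-p truncated SVD of Y before forming the pencil, which is the standard robust variant of the method.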

In this study, we generalize the problem of sampling a scalar Gauss-Markov process, namely the Ornstein-Uhlenbeck (OU) process, where the samples are sent to a remote estimator that makes a causal estimate of the observed real-time signal. In recent years, this problem has been solved for stable OU processes. We present solutions for the optimal sampling policy, which exhibits a smaller estimation error, for both the stable and unstable cases of the OU process, along with the special case in which the OU process reduces to a Wiener process. The obtained optimal sampling policy is a threshold policy, although the thresholds differ across the three cases. We then consider additive noise on the samples when the sampling decision is made beforehand, so that the estimator must use noisy samples to estimate the current signal value. The resulting mean-square error (MSE) contains an additional noise-induced term, which we solve for to obtain a performance upper bound and to motivate further investigation into optimal sampling strategies that minimize the estimation error under noisy observations. Numerical results show the performance degradation caused by the additive noise.
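A threshold sampling policy of this kind is easy to simulate. The sketch below uses illustrative parameters and a fixed threshold (the paper derives the optimal one); between samples the estimator holds the exponentially decayed last sample, which is the MMSE predictor for a stable OU process:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, sigma, dt, T = 0.5, 1.0, 1e-3, 50.0   # OU: dX = -theta*X dt + sigma dW
beta = 0.4                                   # sampling threshold (illustrative)

x, last, t_since = 0.0, 0.0, 0.0
samples, sq_err = 0, 0.0
for _ in range(int(T / dt)):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    t_since += dt
    x_hat = last * np.exp(-theta * t_since)  # MMSE estimate between samples
    if abs(x - x_hat) >= beta:               # threshold policy: sample now
        last, t_since = x, 0.0
        x_hat = x
        samples += 1
    sq_err += (x - x_hat) ** 2 * dt

print(f"samples: {samples}, time-averaged MSE: {sq_err / T:.3f}")
```

Because the estimation error is reset whenever it reaches the threshold, the time-averaged MSE stays below the threshold squared, and lowering the threshold trades more samples for less error.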

Non-homogeneous Poisson processes are used in a wide range of scientific disciplines, ranging from the environmental sciences to the health sciences. Often, the central object of interest in a point process is the underlying intensity function. Here, we present a general model for the intensity function of a non-homogeneous Poisson process using measure transport. The model is built from a flexible bijective mapping that maps from the underlying intensity function of interest to a simpler reference intensity function. We enforce bijectivity by modeling the map as a composition of multiple simple bijective maps, and show that the model exhibits an important approximation property. Estimation of the flexible mapping is accomplished within an optimization framework, wherein computations are efficiently done using recent technological advances in deep learning and a graphics processing unit. Although we find that intensity function estimates obtained with our method are not necessarily superior to those obtained using conventional methods, the modeling representation brings with it other advantages such as facilitated point process simulation and uncertainty quantification. Modeling point processes in higher dimensions is also facilitated using our approach. We illustrate the use of our model on both simulated data, and a real data set containing the locations of seismic events near Fiji since 1964.
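The measure-transport idea can be checked in one dimension with a hand-picked map (the paper learns a deep composition of bijections; the quadratic map here is purely a toy assumption). An increasing bijection T pushes a unit-rate reference Poisson process onto a process with intensity T'(x), since the transported cumulative intensity is exactly T(x):

```python
import numpy as np

rng = np.random.default_rng(8)
n_rep, x_max = 20_000, 2.0   # Monte-Carlo replicates, domain (0, x_max]

# Toy transport map T(x) = x**2, so the target intensity is T'(x) = 2x.
counts = np.zeros(n_rep)
for r in range(n_rep):
    n = rng.poisson(x_max ** 2)                # unit-rate points on (0, T(x_max)]
    u = rng.uniform(0, x_max ** 2, n)
    pts = np.sqrt(u)                           # pull back through T^{-1}(u) = sqrt(u)
    counts[r] = np.sum(pts <= 1.0)             # events falling in (0, 1]

print(f"mean count on (0,1]: {counts.mean():.3f}  (theory: T(1) = 1.0)")
```

The same change-of-variables logic also gives cheap simulation from a fitted intensity, one of the practical advantages the abstract mentions.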

The system-level performance of multi-gateway downlink long-range (LoRa) networks is investigated in the present paper. Specifically, we first compute the active probability of a channel and the selection probability of an active end-device (ED) in closed form. We then derive the coverage probability (Pcov) and the area spectral efficiency (ASE) under the impact of capture effects and different spreading factor (SF) allocation schemes. Our findings show that both the Pcov and the ASE of the considered networks can be enhanced significantly by increasing both the duty cycle and the transmit power. Finally, Monte-Carlo simulations verify the accuracy of the proposed mathematical frameworks.
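The validation pattern of checking a closed-form coverage probability against Monte-Carlo simulation can be sketched for the simplest possible link (single ED, Rayleigh fading, no interference; all parameter values are illustrative and the paper's full model with capture effects is far richer):

```python
import numpy as np

rng = np.random.default_rng(3)
d, eta, snr0, gamma_th = 2.0, 3.5, 100.0, 1.0   # distance, path loss, ref. SNR, threshold
trials = 200_000

h2 = rng.exponential(1.0, trials)               # Rayleigh power fading
snr = snr0 * h2 * d ** -eta
pcov_mc = np.mean(snr > gamma_th)               # Monte-Carlo coverage estimate
pcov_cf = np.exp(-gamma_th * d ** eta / snr0)   # closed form: P(h2 > g*d^eta/snr0)
print(f"Monte-Carlo: {pcov_mc:.4f}, closed form: {pcov_cf:.4f}")
```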

Bayesian estimation of the short-time spectral amplitude (STSA) is one of the most predominant approaches for the enhancement of noise-corrupted speech. The performance of these estimators is usually significantly improved when a perceptually relevant cost function is considered. On the other hand, recent progress in phase-based speech signal processing has shown that phase-only enhancement based on spectral phase estimation methods can also jointly improve perceived speech quality and intelligibility, even in low-SNR conditions. In this paper, to take advantage of both a perceptually motivated cost function involving the STSAs of the estimated and true clean speech and prior spectral phase information, we derive a phase-aware Bayesian STSA estimator. The parameters of the cost function are chosen based on characteristics of the human auditory system, namely the dynamic compressive nonlinearity of the cochlea, the theory of perceived loudness, and the simultaneous masking properties of the ear. This parameter selection scheme yields more noise reduction while limiting speech distortion. The derived STSA estimator is optimal in the MMSE sense if the prior phase information is available. In practice, however, typically only an estimate of the clean speech phase can be obtained via the various spectral phase estimation techniques developed over the last few years. In a blind setup, we evaluate the proposed Bayesian STSA estimator with different standard phase estimation methods from the literature. Experimental results show that the proposed estimator achieves substantial performance improvements over traditional phase-blind approaches.
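A scalar analogue conveys the Bayesian-estimation core, though not the perceptual cost function or the phase-awareness of the actual estimator: for a Gaussian prior, the MMSE estimate reduces to the Wiener gain xi/(1+xi), with xi the a-priori SNR. Everything below is a toy assumption, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(9)
sigma_x2, sigma_n2, n = 1.0, 0.5, 100_000    # signal power, noise power, samples
x = rng.normal(0, np.sqrt(sigma_x2), n)      # "clean" coefficients
y = x + rng.normal(0, np.sqrt(sigma_n2), n)  # noisy observations

xi = sigma_x2 / sigma_n2                     # a-priori SNR
x_hat = (xi / (1 + xi)) * y                  # MMSE (Wiener) estimate

mse_wiener = np.mean((x_hat - x) ** 2)       # theory: sx2*sn2/(sx2+sn2) = 1/3
mse_noisy = np.mean((y - x) ** 2)            # theory: sn2 = 0.5
print(f"MSE  Wiener: {mse_wiener:.3f}  noisy: {mse_noisy:.3f}")
```

Perceptually weighted STSA estimators replace the quadratic cost with compressive, loudness-motivated costs, which shifts the gain curve but keeps this same conditional-mean structure.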

In applications of remote sensing, estimation, and control, timely communication is not always ensured by high-rate communication. This work proposes distributed age-efficient transmission policies for random access channels with $M$ transmitters. In the first part of this work, we analyze the age performance of stationary randomized policies by relating the problem of finding age to the absorption time of a related Markov chain. In the second part of this work, we propose the notion of \emph{age-gain} of a packet to quantify how much the packet will reduce the instantaneous age of information at the receiver side upon successful delivery. We then utilize this notion to propose a transmission policy in which transmitters act in a distributed manner based on the age-gain of their available packets. In particular, each transmitter sends its latest packet only if its corresponding age-gain is beyond a certain threshold which could be computed adaptively using the collision feedback or found as a fixed value analytically in advance. Both methods improve age of information significantly compared to the state of the art. In the limit of large $M$, we prove that when the arrival rate is small (below $\frac{1}{eM}$), slotted ALOHA-type algorithms are asymptotically optimal. As the arrival rate increases beyond $\frac{1}{eM}$, while age increases under slotted ALOHA, it decreases significantly under the proposed age-based policies. For arrival rates $\theta$, $\theta=\frac{1}{o(M)}$, the proposed algorithms provide a multiplicative factor of at least two compared to the minimum age under slotted ALOHA (minimum over all arrival rates). We conclude that, as opposed to the common practice, it is beneficial to increase the sampling rate (and hence the arrival rate) and transmit packets selectively based on their age-gain.
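The benefit of age-gain thresholding over plain slotted ALOHA shows up even in a heavily simplified simulation. The model below assumes generate-at-will sources (a fresh packet every slot, so the age-gain of a packet equals the receiver-side age), a fixed Theta(M) threshold, and uniform transmit probabilities; all of these are simplifying assumptions rather than the paper's adaptive policy:

```python
import numpy as np

rng = np.random.default_rng(4)
M, slots, thresh = 20, 100_000, 40   # sources, horizon, age-gain threshold (~2M)

def run(policy):
    age = np.ones(M)                 # receiver-side age per source
    total = 0.0
    for _ in range(slots):
        if policy == "aloha":
            tx = rng.random(M) < 1.0 / M          # plain slotted ALOHA
        else:
            eligible = age >= thresh              # only high age-gain sources compete
            k = max(int(eligible.sum()), 1)
            tx = eligible & (rng.random(M) < 1.0 / k)
        if tx.sum() == 1:                         # success only without collision
            age[np.argmax(tx)] = 0
        age += 1
        total += age.mean()
    return total / slots

a_aloha, a_thresh = run("aloha"), run("thresh")
print(f"avg age  slotted ALOHA: {a_aloha:.1f}   age-gain threshold: {a_thresh:.1f}")
```

Thresholding thins the set of contenders, so collisions become rare exactly when an update would be most valuable, which is the mechanism behind the multiplicative age improvement claimed above.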

Modern wireless channels are increasingly dense and mobile, making them highly non-stationary. The time-varying distribution and the existence of joint interference across multiple degrees of freedom (e.g., users, antennas, frequency and symbols) in such channels render conventional precoding sub-optimal in practice, and have led to historically poor characterization of their statistics. The core of our work is the derivation of a high-order generalization of Mercer's Theorem to decompose the non-stationary channel into constituent fading sub-channels (2-D eigenfunctions) that are jointly orthogonal across its degrees of freedom. Consequently, transmitting these eigenfunctions with optimally derived coefficients mitigates any interference across these dimensions and forms the foundation of the proposed joint spatio-temporal precoding. The precoded symbols directly reconstruct the data symbols at the receiver upon demodulation, thereby significantly reducing its computational burden by alleviating the need for any complementary decoding. These eigenfunctions are paramount to extracting the second-order channel statistics, and therefore completely characterize the underlying channel. Theory and simulations show that such precoding leads to ${>}10^4{\times}$ BER improvement (at 20dB) over existing methods for non-stationary channels.
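For a stationary matrix channel, the analogous eigen-decomposition is the SVD, and precoding along the right singular vectors already exhibits the key property claimed above: the sub-channels carry the symbols without cross-interference. This simple stationary analogue (not the paper's high-order Mercer decomposition) can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(5)
Nr, Nt = 4, 4
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))

U, s, Vh = np.linalg.svd(H)                      # H = U diag(s) Vh
symbols = (rng.integers(0, 2, Nt) * 2 - 1).astype(complex)  # BPSK symbols

x = Vh.conj().T @ symbols        # precode along right singular vectors
y = U.conj().T @ (H @ x)         # combine along left singular vectors
recovered = y / s                # remove per-sub-channel gain -> symbols, exactly

print(np.round(recovered.real))
```

In the non-stationary setting the decomposition must additionally be joint across time, frequency, and users, which is what the generalized Mercer expansion provides.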

Optical wireless communication (OWC) has the potential to provide the high communication speeds needed to support the massive Internet usage expected in the near future. In OWC, optical access points (APs) are deployed on the ceiling to serve multiple users. In this context, efficient multiple access schemes are required to share the resources among users and manage multi-user interference. Recently, non-orthogonal multiple access (NOMA) has been studied as a way to serve multiple users simultaneously using the same resources while allocating a different power level to each user. Despite the acceptable performance of NOMA, users might experience high packet loss due to the noise that results from successive interference cancellation (SIC). In this work, random linear network coding (RLNC) is proposed to enhance the performance of NOMA in an optical wireless network where users are divided into multicast groups, each containing users whose channel gains differ only slightly. Moreover, a fixed power allocation (FPA) strategy is adopted among these groups to avoid complexity. The performance of the proposed scheme is evaluated in terms of total packet success probability. The results show that the proposed scheme is better suited to the considered network than benchmark schemes such as traditional NOMA and orthogonal transmission. Moreover, the total packet success probability is highly affected by the power level allocated to each group in all scenarios.
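The power-domain NOMA-with-SIC mechanism the abstract builds on can be sketched for two users. The gains, fixed power split, and noise level are illustrative assumptions; the point is only to show the decode-subtract-decode order at the strong user:

```python
import numpy as np

rng = np.random.default_rng(7)
P, a_weak, a_strong = 1.0, 0.8, 0.2      # fixed power allocation (FPA), toy values
g_strong = 1.0                           # strong user's channel gain
noise = 0.01

s_w = rng.choice([-1.0, 1.0], 1000)      # BPSK symbols for the weak user
s_s = rng.choice([-1.0, 1.0], 1000)      # BPSK symbols for the strong user
tx = np.sqrt(a_weak * P) * s_w + np.sqrt(a_strong * P) * s_s  # superposed signal

y = np.sqrt(g_strong) * tx + np.sqrt(noise) * rng.standard_normal(1000)
# SIC at the strong user: decode the high-power (weak-user) symbol first...
s_w_hat = np.sign(y)
# ...subtract its contribution, then decode the own symbol from the residual.
resid = y - np.sqrt(g_strong * a_weak * P) * s_w_hat
s_s_hat = np.sign(resid)

print("weak-symbol errors:", int(np.sum(s_w_hat != s_w)),
      "strong-symbol errors:", int(np.sum(s_s_hat != s_s)))
```

Any decoding error in the first stage propagates into the residual, which is precisely the SIC-induced loss that the proposed RLNC layer is meant to absorb.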

Neural implicit representations, which encode a surface as the level set of a neural network applied to spatial coordinates, have proven to be remarkably effective for optimizing, compressing, and generating 3D geometry. Although these representations are easy to fit, it is not clear how to best evaluate geometric queries on the shape, such as intersecting against a ray or finding a closest point. The predominant approach is to encourage the network to have a signed distance property. However, this property typically holds only approximately, leading to robustness issues, and holds only at the conclusion of training, inhibiting the use of queries in loss functions. Instead, this work presents a new approach to perform queries directly on general neural implicit functions for a wide range of existing architectures. Our key tool is the application of range analysis to neural networks, using automatic arithmetic rules to bound the output of a network over a region; we conduct a study of range analysis on neural networks, and identify variants of affine arithmetic which are highly effective. We use the resulting bounds to develop geometric queries including ray casting, intersection testing, constructing spatial hierarchies, fast mesh extraction, closest-point evaluation, evaluating bulk properties, and more. Our queries can be efficiently evaluated on GPUs, and offer concrete accuracy guarantees even on randomly-initialized networks, enabling their use in training objectives and beyond. We also show a preliminary application to inverse rendering.
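A minimal instance of range analysis on a neural network uses plain interval arithmetic, a coarser cousin of the affine-arithmetic variants the work identifies as most effective (the tiny MLP and query box below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)

def interval_linear(lo, hi, W, b):
    # Exact interval bound for an affine layer: split W into +/- parts.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

W1, b1 = rng.standard_normal((16, 3)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

def mlp(x):
    return W2 @ np.maximum(W1 @ x + b1, 0) + b2

lo, hi = np.full(3, -0.5), np.full(3, 0.5)     # query region (a box)
l1, h1 = interval_linear(lo, hi, W1, b1)
l1, h1 = np.maximum(l1, 0), np.maximum(h1, 0)  # ReLU is monotone, apply to bounds
out_lo, out_hi = interval_linear(l1, h1, W2, b2)

# Soundness check: every point of the box must land inside the certified range.
xs = rng.uniform(-0.5, 0.5, (1000, 3))
vals = np.array([mlp(x)[0] for x in xs])
print(f"bound: [{out_lo[0]:.2f}, {out_hi[0]:.2f}], "
      f"empirical: [{vals.min():.2f}, {vals.max():.2f}]")
```

Sound output bounds like these are what let queries such as ray casting or spatial-hierarchy construction conservatively prune regions where the implicit surface cannot lie, even for randomly initialized networks; affine arithmetic tightens the same guarantee by tracking correlations between layers.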

"Auto-Sizing the Transformer Network: Improving Speed, Efficiency, and Performance for Low-Resource Machine Translation" — K. Murray, J. Kinnison, T. Q. Nguyen, W. Scheirer, D. Chiang (University of Notre Dame), 2019
