
Owing to its capability of enhancing spectral efficiency (SE), faster-than-Nyquist (FTN) signaling is a promising approach for wireless communication systems. This paper investigates doubly-selective (i.e., time- and frequency-selective) channel estimation and data detection for FTN signaling. We consider the intersymbol interference (ISI) resulting from both the FTN signaling and the frequency-selective channel and adopt an efficient frame structure with reduced overhead. We propose a novel channel estimation technique for FTN signaling based on the least sum of squared errors (LSSE) approach to estimate the complex channel coefficients at the pilot locations within the frame. In particular, we find the optimal pilot sequence that minimizes the mean square error (MSE) of the channel estimation. To address the time-selective nature of the channel, we use low-complexity linear interpolation to track the complex channel coefficients at the data symbol locations within the frame. To detect the data symbols of FTN signaling, we adopt a turbo equalization technique based on a linear soft-input soft-output (SISO) minimum mean square error (MMSE) equalizer. Simulation results show that the MSE of the proposed FTN signaling channel estimation employing the designed optimal pilot sequence is lower than that of its counterpart designed for conventional Nyquist transmission. The bit error rate (BER) of FTN signaling employing the proposed optimal pilot sequence also improves over FTN signaling employing the conventional Nyquist pilot sequence. Additionally, for the same SE, the proposed FTN signaling channel estimation employing the designed optimal pilot sequence outperforms competing techniques from the literature.
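
As a rough illustration of the two estimation steps described above, the sketch below (Python/NumPy, with a hypothetical pilot convolution matrix as input) computes an LSSE estimate of the channel taps over a pilot block and then linearly interpolates a complex coefficient between pilot positions; the actual FTN frame structure, ISI modeling, and turbo equalization are not reproduced here.

```python
import numpy as np

def lsse_channel_estimate(y_pilot, pilot_matrix):
    """LSSE estimate of the channel taps over one pilot block:
    h_hat = argmin ||y - A h||^2 = (A^H A)^{-1} A^H y, where A is the
    (assumed known) convolution matrix built from the pilot sequence and
    the FTN/pulse-shaping ISI taps."""
    A = pilot_matrix
    return np.linalg.solve(A.conj().T @ A, A.conj().T @ y_pilot)

def track_channel(pilot_positions, pilot_estimates, data_positions):
    """Low-complexity linear interpolation of one complex channel
    coefficient from the pilot positions to the data-symbol positions
    (apply per tap for a multi-tap channel)."""
    real = np.interp(data_positions, pilot_positions, np.real(pilot_estimates))
    imag = np.interp(data_positions, pilot_positions, np.imag(pilot_estimates))
    return real + 1j * imag
```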

Related content

We study channel estimation for a beyond-diagonal reconfigurable intelligent surface (BD-RIS) aided multiple-input single-output system. We first describe the channel estimation strategy based on the least squares (LS) method, derive the mean square error (MSE) of the LS estimator, and formulate the BD-RIS design problem that minimizes the estimation MSE subject to the unique constraints induced by group-connected architectures of BD-RIS. We then propose an efficient BD-RIS design that is theoretically guaranteed to achieve the MSE lower bound. Finally, we provide simulation results to verify the effectiveness of the proposed channel estimation scheme.
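
A minimal sketch of the LS estimation step, assuming the training observations have already been stacked into a linear model y = Φh + n; the BD-RIS group-connected constraints and the optimized training design from the paper are not modeled.

```python
import numpy as np

def ls_estimate_and_mse(Phi, y, noise_var):
    """Least-squares channel estimate for the stacked training model
    y = Phi @ h + n, together with its theoretical MSE
    sigma^2 * trace((Phi^H Phi)^{-1})."""
    G = Phi.conj().T @ Phi
    h_hat = np.linalg.solve(G, Phi.conj().T @ y)
    mse = noise_var * np.trace(np.linalg.inv(G)).real
    return h_hat, mse
```

Minimizing the trace term over the admissible training matrices is exactly the design problem the abstract refers to.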

In this paper, we investigate uplink signal detection approaches in cell-free massive MIMO systems with unmanned aerial vehicles (UAVs) serving as aerial access points (APs). The ground users are equipped with multiple antennas, and the ground-to-air propagation channels are subject to correlated Rician fading. To overcome the huge signaling overhead of fully centralized detection, we propose a two-layer distributed uplink detection scheme, where the uplink signals are first detected at the AP-UAVs using the minimum mean-squared error (MMSE) detector based on local channel state information (CSI), and then collected and weighted-combined at the CPU-UAV to obtain the refined detection. Using operator-valued free probability theory, we obtain asymptotic expressions for the combining weights, which depend only on the statistical CSI and show excellent accuracy. Based on the proposed distributed scheme, we further investigate the impact of different distributed deployments on the achieved spectral efficiency (SE). Numerical results show that in urban and dense urban environments, it is more beneficial to deploy more AP-UAVs to achieve higher SE. In a suburban environment, on the other hand, there exists an optimal ratio between the number of deployed UAVs and the number of antennas per UAV that maximizes the SE.
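
The two-layer idea can be illustrated with the following hedged sketch: each AP-UAV applies a local MMSE filter using only its own CSI, and the CPU-UAV forms a weighted sum of the local soft estimates. The combining weights are treated as given inputs; deriving them from statistical CSI via operator-valued free probability is the paper's contribution and is not reproduced here.

```python
import numpy as np

def local_mmse_detect(H_l, y_l, noise_var):
    """Local MMSE soft estimate of the transmitted symbol vector at one
    AP-UAV, using its local channel matrix H_l and received vector y_l."""
    K = H_l.shape[1]
    A = H_l.conj().T @ H_l + noise_var * np.eye(K)
    return np.linalg.solve(A, H_l.conj().T @ y_l)

def weighted_combine(local_estimates, weights):
    """CPU-UAV layer: weighted combination of the per-AP soft estimates."""
    return sum(w * x for w, x in zip(weights, local_estimates))
```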

We study a fundamental problem in optimization under uncertainty. There are $n$ boxes; each box $i$ contains a hidden reward $x_i$. Rewards are drawn i.i.d. from an unknown distribution $\mathcal{D}$. For each box $i$, we see $y_i$, an unbiased estimate of its reward, which is drawn from a Normal distribution with known standard deviation $\sigma_i$ (and an unknown mean $x_i$). Our task is to select a single box, with the goal of maximizing our reward. This problem captures a wide range of applications, e.g. ad auctions, where the hidden reward is the click-through rate of an ad. Previous work in this model [BKMR12] proves that the naive policy, which selects the box with the largest estimate $y_i$, is suboptimal, and suggests a linear policy, which selects the box $i$ with the largest $y_i - c \cdot \sigma_i$, for some $c > 0$. However, no formal guarantees are given about the performance of either policy (e.g., whether their expected reward is within some factor of the optimal policy's reward). In this work, we prove that both the naive policy and the linear policy are arbitrarily bad compared to the optimal policy, even when $\mathcal{D}$ is well-behaved, e.g. has monotone hazard rate (MHR), and even under a "small tail" condition, which requires that not too many boxes have arbitrarily large noise. On the flip side, we propose a simple threshold policy that gives a constant approximation to the reward of a prophet (who knows the realized values $x_1, \dots, x_n$) under the same "small tail" condition. We prove that when this condition is not satisfied, even an optimal clairvoyant policy (that knows $\mathcal{D}$) cannot get a constant approximation to the prophet, even for MHR distributions, implying that our threshold policy is optimal against the prophet benchmark, up to constants.
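
The three policies compared above can be written down in a few lines. The naive and linear rules follow directly from the abstract; the threshold rule below is only a plausible reading of "threshold policy" (discard boxes whose noise level exceeds a cutoff, then act naively), so its details should be taken as an assumption rather than the paper's exact rule.

```python
import numpy as np

def naive_policy(y, sigma):
    """Pick the box with the largest noisy estimate y_i."""
    return int(np.argmax(y))

def linear_policy(y, sigma, c=1.0):
    """Pick the box maximizing y_i - c * sigma_i, for some c > 0."""
    return int(np.argmax(y - c * sigma))

def threshold_policy(y, sigma, sigma_max):
    """Illustrative threshold rule: ignore boxes noisier than sigma_max,
    then pick the largest estimate among the remaining boxes."""
    eligible = np.where(sigma <= sigma_max)[0]
    if eligible.size == 0:
        return int(np.argmin(sigma))
    return int(eligible[np.argmax(y[eligible])])
```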

In this paper, a technique for the Berlekamp-Massey (BM) algorithm is provided to reduce decoding latency and save decoding power through early termination, i.e., early-stopped checking. We track the consecutive zero discrepancies during the decoding iterations and stop the decoding process early once enough of them have been observed. This technique trades a small probability of decoding failure for reduced decoding latency. We analyze the proposed technique by considering the weight distribution of BCH codes and estimating bounds on the undetected error probability, i.e., the probability of an erroneous early stop. The proposed method is effective in numerical results: the probability of decoding failure is lower than $10^{-119}$ when decoding BCH codes of length 16383. Furthermore, we compare the complexity of the conventional early-termination method with that of the proposed approach for decoding long BCH codes; the proposed approach reduces the complexity of the conventional approach by up to 80\%. Finally, FPGA testing on a USB device validates the reliability of the proposed method.
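
To make the early-stopping idea concrete, here is a sketch of the Berlekamp-Massey iteration with a counter of consecutive zero discrepancies. It works over GF(2) (plain LFSR synthesis) to stay self-contained; BCH decoding operates on syndromes in GF(2^m), and the stopping threshold `zero_run_limit` is an illustrative parameter, not the value analyzed in the paper.

```python
def berlekamp_massey_early_stop(s, zero_run_limit=4):
    """Binary Berlekamp-Massey with early-stopped checking: abort once
    `zero_run_limit` consecutive discrepancies are zero. `s` is a list of
    bits (0/1). Returns the connection polynomial and the LFSR length."""
    n = len(s)
    C = [1] + [0] * n      # current connection polynomial
    B = [1] + [0] * n      # previous connection polynomial
    L, m = 0, 1
    zero_run = 0
    for i in range(n):
        # discrepancy d = s[i] + sum_{j=1..L} C[j] * s[i-j]  (mod 2)
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:
            zero_run += 1
            m += 1
            if zero_run >= zero_run_limit:   # early termination
                break
        else:
            zero_run = 0
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            if 2 * L <= i:
                L, B, m = i + 1 - L, T, 1
            else:
                m += 1
    return C[:L + 1], L
```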

In hearing aid applications, an important objective is to accurately estimate the direction of arrival (DOA) of multiple speakers in noisy and reverberant environments. Recently, we proposed a binaural DOA estimation method, where the DOAs of the speakers are estimated by selecting the directions for which the so-called Hermitian angle spectrum between the estimated relative transfer function (RTF) vector and a database of prototype anechoic RTF vectors is maximized. The RTF vector is estimated using the covariance whitening (CW) method, which requires a computationally complex generalized eigenvalue decomposition. The spatial spectrum is obtained by only considering frequencies where it is likely that one speaker dominates over the other speakers, noise and reverberation. In this contribution, we exploit the availability of an external microphone that is spatially separated from the hearing aid microphones and consider a low-complexity RTF vector estimation method that assumes a low spatial coherence between the undesired components in the external microphone and the hearing aid microphones. Using recordings of two speakers and diffuse-like babble noise in acoustic environments with mild reverberation and low signal-to-noise ratio, simulation results show that the proposed method yields DOA estimation performance comparable to the CW method at a lower computational complexity.
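
A hedged sketch of the two ingredients: a low-complexity RTF estimate that cross-correlates the hearing aid channels with the external microphone (relying on the assumed low coherence of the undesired components), and a cosine-of-Hermitian-angle score against a database of prototype RTF vectors. The exact normalizations and frequency-selection rule used in the paper are not reproduced.

```python
import numpy as np

def estimate_rtf_with_external_mic(Y_ha, y_ext, ref_idx=0):
    """Rough low-complexity RTF estimate: Y_ha is (mics, frames) for the
    hearing aid channels, y_ext is (frames,) for the external microphone.
    Cross-correlating each channel with the external one suppresses the
    undesired components (assumed incoherent across devices); normalizing
    by the reference channel yields an RTF-like vector."""
    cross = Y_ha @ y_ext.conj()
    return cross / cross[ref_idx]

def hermitian_angle_scores(rtf_est, prototypes):
    """Cosine-of-Hermitian-angle similarity between the estimated RTF
    vector and each prototype anechoic RTF vector; the DOA estimate is
    the direction whose prototype scores highest."""
    def cos_hermitian(a, b):
        return np.abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.array([cos_hermitian(rtf_est, p) for p in prototypes])
```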

Due to the increasing complexity of technical systems, accurate first-principles models can often not be obtained. Supervised machine learning can mitigate this issue by inferring models from measurement data. Gaussian process (GP) regression is particularly well suited for this purpose due to its high data efficiency and its explicit uncertainty representation, which allows the derivation of prediction error bounds. These error bounds have been exploited to show tracking accuracy guarantees for a variety of control approaches, but their direct dependency on the training data is generally unclear. We address this issue by deriving a Bayesian prediction error bound for GP regression, which we show to decay with the growth of a novel, kernel-based measure of data density. Based on the prediction error bound, we prove time-varying tracking accuracy guarantees for learned GP models used as feedback compensation of unknown nonlinearities, and show that the tracking error vanishes as the data density increases. This enables us to develop an episodic approach for learning Gaussian process models such that an arbitrary tracking accuracy can be guaranteed. The effectiveness of the derived theory is demonstrated in several simulations.
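
For readers unfamiliar with the setting, the following sketch shows standard GP regression with a squared-exponential kernel and an error bound of the generic form |f(x) - mean(x)| <= beta * std(x). The paper's specific Bayesian bound, and the kernel-based data-density measure that drives its decay, are not reproduced; the fixed `beta` below is only a placeholder.

```python
import numpy as np

def gp_posterior(X_train, y_train, X_test,
                 lengthscale=1.0, signal_var=1.0, noise_var=1e-2):
    """GP regression posterior mean and variance with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

    K = k(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_star = k(X_test, X_train)
    mean = K_star @ np.linalg.solve(K, y_train)
    var = k(X_test, X_test).diagonal() - np.einsum(
        "ij,ji->i", K_star, np.linalg.solve(K, K_star.T))
    return mean, np.maximum(var, 0.0)

def error_bound(var, beta=2.0):
    """Generic high-probability bound |f(x) - mean(x)| <= beta * std(x)."""
    return beta * np.sqrt(var)
```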

Coded distributed computing (CDC) was introduced to greatly reduce the communication load for MapReduce computing systems. Such a system has $K$ nodes, $N$ input files, and $Q$ Reduce functions. Each input file is mapped by $r$ nodes and each Reduce function is computed by $s$ nodes. The architecture must allow for coding techniques that achieve the maximum multicast gain. Some CDC schemes that achieve optimal communication load have been proposed before. The parameters $N$ and $Q$ in those schemes, however, grow too fast with respect to $K$ to be of great practical value. To improve the situation, researchers have come up with some asymptotically optimal cascaded CDC schemes with $s+r=K$ from symmetric designs. In this paper, we propose new asymptotically optimal cascaded CDC schemes. Akin to known schemes, ours have $r+s=K$ and make use of symmetric designs as construction tools. Unlike previous schemes, ours have much smaller communication loads, given the same set of parameters $K$, $r$, $N$, and $Q$. We also expand the construction tools to include almost difference sets. Using them, we have managed to construct a new asymptotically optimal cascaded CDC scheme.

The logistic regression model is one of the most popular data generation models for noisy binary classification problems. In this work, we study the sample complexity of estimating the parameters of the logistic regression model up to a given $\ell_2$ error, in terms of the dimension and the inverse temperature, with standard normal covariates. The inverse temperature controls the signal-to-noise ratio of the data generation process. While both generalization bounds and the asymptotic performance of the maximum-likelihood estimator for logistic regression are well studied, the non-asymptotic sample complexity that shows the dependence on the error and the inverse temperature for parameter estimation is absent from previous analyses. We show that the sample complexity curve has two change points (or critical points) in terms of the inverse temperature, clearly separating the low, moderate, and high temperature regimes.
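
A small Monte-Carlo sketch of the estimation problem: standard normal covariates, labels drawn from the logistic model with a separate inverse-temperature parameter, and a plain gradient-ascent MLE whose $\ell_2$ error can be tracked against the sample size. The unit-norm parameterization and the optimizer settings are assumptions of the sketch, not the paper's setup.

```python
import numpy as np

def simulate_logistic_mle_error(d=20, n=2000, inv_temp=1.0,
                                steps=500, lr=0.1, seed=0):
    """Generate data from the logistic model with inverse temperature
    `inv_temp`, fit the parameter by gradient ascent on the log-likelihood,
    and return the l2 estimation error."""
    rng = np.random.default_rng(seed)
    theta_star = rng.standard_normal(d)
    theta_star /= np.linalg.norm(theta_star)          # unit-norm signal (assumed)
    X = rng.standard_normal((n, d))
    p = 1.0 / (1.0 + np.exp(-inv_temp * X @ theta_star))
    y = (rng.random(n) < p).astype(float)             # labels in {0, 1}

    theta = np.zeros(d)
    for _ in range(steps):
        p_hat = 1.0 / (1.0 + np.exp(-inv_temp * X @ theta))
        grad = inv_temp * X.T @ (y - p_hat) / n       # log-likelihood gradient
        theta += lr * grad
    return np.linalg.norm(theta - theta_star)
```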

We consider the problem of uncertainty quantification in change point regressions, where the signal can be piecewise polynomial of arbitrary but fixed degree. That is, we seek disjoint intervals which, uniformly at a given confidence level, must each contain a change point location. We propose a procedure based on performing local tests at a number of scales and locations on a sparse grid, which adapts to the choice of grid in the sense that by choosing a sparser grid one explicitly pays a lower price for multiple testing. The procedure is fast, as its computational complexity is always of the order $\mathcal{O}(n \log(n))$ where $n$ is the length of the data, and optimal in the sense that, under certain mild conditions, every change point is detected with high probability and the widths of the returned intervals match the minimax localisation rates for the associated change point problem up to log factors. A detailed simulation study shows that our procedure is competitive against state-of-the-art algorithms for similar problems. Our procedure is implemented in the R package ChangePointInference, which is available via //github.com/gaviosha/ChangePointInference.
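
To convey the flavour of "local tests on a sparse grid", the sketch below handles only the piecewise-constant (degree-0) case with a CUSUM statistic, scans dyadic-length intervals whose endpoints sit on a coarse grid, and flags intervals exceeding a fixed threshold. Proper threshold calibration, higher polynomial degrees, and the reduction to disjoint intervals are all omitted.

```python
import numpy as np

def cusum(x):
    """Maximum absolute CUSUM statistic over an interval (degree-0 case)."""
    n = len(x)
    if n < 2:
        return 0.0
    s = np.cumsum(x)
    t = np.arange(1, n)
    stat = np.abs(s[:-1] - t / n * s[-1]) * np.sqrt(n / (t * (n - t)))
    return stat.max()

def significant_intervals(x, sigma, grid_step=8, thresh=3.0):
    """Scan dyadic-length intervals whose endpoints lie on a sparse grid;
    return those whose normalized CUSUM statistic exceeds `thresh`, each
    claimed to contain a change point."""
    n = len(x)
    found = []
    length = 2 * grid_step
    while length <= n:
        for start in range(0, n - length + 1, grid_step):
            if cusum(x[start:start + length]) / sigma > thresh:
                found.append((start, start + length))
        length *= 2
    return found
```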

Doubly-selective channel estimation is a key element in ensuring communication reliability in wireless systems. Due to the impact of multi-path propagation and Doppler interference in dynamic environments, doubly-selective channel estimation becomes challenging. Conventional symbol-by-symbol (SBS) and frame-by-frame (FBF) channel estimation schemes suffer performance degradation in high-mobility scenarios due to the limited number of training pilots. Recently, deep learning (DL) has been utilized for doubly-selective channel estimation, where long short-term memory (LSTM) and convolutional neural network (CNN) networks are employed for SBS and FBF estimation, respectively. However, their usage is not optimal, since LSTM suffers from the long-term memory problem, whereas CNN-based estimators require high complexity. We overcome these issues by proposing optimized recurrent neural network (RNN)-based channel estimation schemes, where gated recurrent unit (GRU) and Bi-GRU units are used for SBS and FBF channel estimation, respectively. The proposed estimators are based on the average correlation of the channel in different mobility scenarios, and several performance-complexity trade-offs are provided. Moreover, the performance of several RNN networks is analyzed. The performance superiority of the proposed estimators over recently proposed DL-based SBS and FBF estimators is demonstrated for different scenarios, while recording a significant reduction in complexity.
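
A minimal PyTorch sketch of the GRU/Bi-GRU estimator structure described above: a recurrent layer processes per-symbol input features and a linear head outputs refined channel estimates. The input features, layer sizes, and the bidirectional choice for FBF estimation are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GRUChannelEstimator(nn.Module):
    """Sketch of an RNN-based estimator: a GRU (or Bi-GRU) maps a sequence
    of per-symbol features (e.g. stacked real/imaginary parts of received
    pilots and an initial estimate) to refined channel estimates."""
    def __init__(self, in_dim, hidden_dim, out_dim, bidirectional=False):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden_dim, batch_first=True,
                          bidirectional=bidirectional)  # Bi-GRU for FBF
        factor = 2 if bidirectional else 1
        self.head = nn.Linear(factor * hidden_dim, out_dim)

    def forward(self, x):                 # x: (batch, symbols, in_dim)
        h, _ = self.gru(x)
        return self.head(h)               # (batch, symbols, out_dim)
```

Training such a model against least-squares or true channel targets with an MSE loss is the usual recipe; the exact features and dimensions used in the paper are not reproduced here.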
