
This paper introduces a Compressed Sensing (CS) estimation scheme for Orthogonal Time Frequency Space (OTFS) channels with sparse multipath. The OTFS waveform represents signals in a two-dimensional Delay-Doppler (DD) orthonormal basis. The proposed model does not require the assumption that the delays are integer multiples of the sampling period. The analysis shows that non-integer delay and Doppler shifts in the channel cannot be accurately modelled by integer approximations. An Orthogonal Matching Pursuit with Binary-division Refinement (OMPBR) estimation algorithm is proposed. The proposed estimator finds the best channel approximation over a continuous DD dictionary without integer approximations, which significantly reduces the normalized mean squared error of the estimate at reasonable computational complexity.
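To make the matching-pursuit-with-refinement idea concrete, the following is a minimal, hypothetical Python sketch (not the authors' implementation): the atom definition, grid sizes, and refinement depth are illustrative assumptions, but the structure — a coarse grid search followed by binary-division (bisection) refinement of the selected delay-Doppler point, with a standard least-squares residual update — matches the OMPBR description above.

```python
import numpy as np

def atom(tau, nu, n_delay=32, n_doppler=16):
    """Hypothetical unit-norm delay-Doppler atom: a sampled complex exponential
    parameterized by a (possibly fractional) delay tau and Doppler nu."""
    d = np.exp(-2j * np.pi * tau * np.arange(n_delay) / n_delay)
    v = np.exp(2j * np.pi * nu * np.arange(n_doppler) / n_doppler)
    a = np.kron(d, v)
    return a / np.linalg.norm(a)

def ompbr(y, n_paths, grid_tau, grid_nu, n_refine=6):
    """OMP with binary-division refinement: pick the best coarse grid point,
    then bisect around it to approximate a continuous (tau, nu) estimate."""
    residual = y.copy()
    params, atoms = [], []
    for _ in range(n_paths):
        # coarse search on the integer grid
        scores = [(abs(np.vdot(atom(t, f), residual)), t, f)
                  for t in grid_tau for f in grid_nu]
        _, t_best, f_best = max(scores)
        # binary-division refinement around the coarse maximum
        step_t, step_f = 0.5, 0.5
        for _ in range(n_refine):
            cands = [(t_best + dt, f_best + df)
                     for dt in (-step_t, 0.0, step_t)
                     for df in (-step_f, 0.0, step_f)]
            t_best, f_best = max(cands, key=lambda p: abs(np.vdot(atom(*p), residual)))
            step_t, step_f = step_t / 2, step_f / 2
        params.append((t_best, f_best))
        atoms.append(atom(t_best, f_best))
        # least-squares update of all path gains and the residual (standard OMP step)
        A = np.column_stack(atoms)
        gains, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ gains
    return params, gains
```

Because the refinement halves the search step at each pass, the per-path cost grows only logarithmically with the desired delay-Doppler resolution, which is where the claimed complexity advantage over a densely oversampled integer grid comes from.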

Related content

This paper considers the problem of measure estimation under the barycentric coding model (BCM), in which an unknown measure is assumed to belong to the set of Wasserstein-2 barycenters of a finite set of known measures. Estimating a measure under this model is equivalent to estimating the unknown barycentric coordinates. We provide novel geometrical, statistical, and computational insights for measure estimation under the BCM, consisting of three main results. Our first main result leverages the Riemannian geometry of Wasserstein-2 space to provide a procedure for recovering the barycentric coordinates as the solution to a quadratic optimization problem assuming access to the true reference measures. The essential geometric insight is that the parameters of this quadratic problem are determined by inner products between the optimal displacement maps from the given measure to the reference measures defining the BCM. Our second main result then establishes an algorithm for solving for the coordinates in the BCM when all the measures are observed empirically via i.i.d. samples. We prove precise rates of convergence for this algorithm -- determined by the smoothness of the underlying measures and their dimensionality -- thereby guaranteeing its statistical consistency. Finally, we demonstrate the utility of the BCM and associated estimation procedures in three application areas: (i) covariance estimation for Gaussian measures; (ii) image processing; and (iii) natural language processing.
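Assuming, as the abstract suggests, that the quadratic problem takes the form of minimizing a quadratic in the coordinate vector over the probability simplex, with the quadratic's matrix built from pairwise inner products of displacement maps, the recovery step reduces to a small constrained optimization. The sketch below is a generic solver for that simplex-constrained quadratic program; the construction of the matrix itself (from optimal transport maps) is not reproduced here and the objective form is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def barycentric_coordinates(A):
    """Solve min_{lambda in simplex} lambda^T A lambda.

    A is assumed to collect pairwise inner products between optimal displacement
    maps from the observed measure to the reference measures; its exact
    construction follows the paper and is not reproduced here."""
    k = A.shape[0]
    x0 = np.full(k, 1.0 / k)
    res = minimize(
        lambda lam: lam @ A @ lam,
        x0,
        jac=lambda lam: 2 * A @ lam,
        bounds=[(0.0, 1.0)] * k,
        constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

# toy usage with a synthetic positive semidefinite matrix standing in for the
# displacement-map Gram matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
print(barycentric_coordinates(M @ M.T))
```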

Optimal $k$-thresholding algorithms are a class of sparse signal recovery algorithms that overcome the shortcomings of traditional hard thresholding algorithms caused by the oscillation of the residual function. In this paper, a novel convergence analysis for optimal $k$-thresholding algorithms is established, which reveals the data-time tradeoffs of these algorithms. Both the analysis and numerical results demonstrate that when the number of measurements is small, the algorithms cannot converge; when the number of measurements is suitably large, the number of iterations required for successful recovery has a negative correlation with the number of measurements, and the algorithms can achieve linear convergence. Furthermore, the main theorems indicate that the number of measurements required for successful recovery is on the order of $k \log({n}/{k})$, where $n$ is the dimension of the target signal.
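The measurement scaling claimed above can be illustrated with a toy sparse-recovery experiment. The sketch below deliberately uses plain iterative hard thresholding as a simpler stand-in for the optimal $k$-thresholding family (which optimizes the support selection rather than keeping the $k$ largest magnitudes); the constant in front of $k \log(n/k)$ is chosen purely for illustration.

```python
import numpy as np

def iterative_hard_thresholding(A, y, k, n_iter=300, step=None):
    """Plain IHT, used only as a simpler stand-in for optimal k-thresholding:
    it keeps the k largest-magnitude entries of each gradient step instead of
    optimizing the selection."""
    n = A.shape[1]
    x = np.zeros(n)
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    for _ in range(n_iter):
        g = x + step * A.T @ (y - A @ x)
        idx = np.argsort(np.abs(g))[-k:]
        x = np.zeros(n)
        x[idx] = g[idx]
    return x

# illustrate the m ~ k log(n/k) measurement scaling mentioned in the abstract
rng = np.random.default_rng(1)
n, k = 1000, 10
m = int(4 * k * np.log(n / k))                   # constant chosen for illustration
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                   # noiseless measurements
x_hat = iterative_hard_thresholding(A, y, k)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```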

In this paper, we are interested in the performance of a variable-length stop-feedback (VLSF) code with $m$ optimal decoding times for the binary-input additive white Gaussian noise (BI-AWGN) channel. We first develop tight approximations to the tail probability of the length-$n$ cumulative information density. Building on the work of Yavas \emph{et al.}, we formulate the problem of minimizing the upper bound on average blocklength subject to the error probability, minimum gap, and integer constraints. For this integer program, we show that for a given error constraint, a VLSF code that decodes after every symbol attains the maximum achievable rate. We also present a greedy algorithm that yields possibly suboptimal integer decoding times. By allowing a positive real-valued decoding time, we develop the gap-constrained sequential differential optimization (SDO) procedure. Numerical evaluation shows that the gap-constrained SDO can provide a good estimate of the achievable rate of VLSF codes with $m$ optimal decoding times and that a finite $m$ suffices to attain Polyanskiy's bound for VLSF codes with $m = \infty$.
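The central quantity here, the tail of the length-$n$ cumulative information density, is easy to probe numerically even before any analytical approximation. The sketch below is a crude Monte Carlo estimate for the BI-AWGN channel with equiprobable $\pm 1$ inputs (the all-ones codeword can be assumed by symmetry); the blocklength, SNR, and threshold are arbitrary illustration values, not taken from the paper.

```python
import numpy as np

def info_density_samples(n, snr_db, n_trials=20000, rng=np.random.default_rng(2)):
    """Monte Carlo draws of the length-n cumulative information density for the
    BI-AWGN channel with equiprobable +/-1 inputs."""
    sigma2 = 10 ** (-snr_db / 10)
    y = 1.0 + np.sqrt(sigma2) * rng.standard_normal((n_trials, n))
    # per-symbol information density: i(x=+1; y) = log2( 2 / (1 + exp(-2 y / sigma^2)) )
    i = 1.0 - np.log2(1.0 + np.exp(-2.0 * y / sigma2))
    return i.sum(axis=1)

# crude Monte Carlo estimate of the tail P(sum_i i_i < gamma), which the paper
# approximates analytically
s = info_density_samples(n=128, snr_db=0.0)
gamma = 50.0
print((s < gamma).mean())
```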

The optical fiber multiple-input multiple-output (MIMO) channel with intensity modulation and direct detection (IM/DD) per spatial path is treated. The spatial dimensions represent the multiple modes employed for transmission, and the cross-talk between them originates in the multiplexers and demultiplexers, which are polarization dependent and thus time-varying. The upper bounds from free-space IM/DD MIMO channels are adapted to the fiber case, and the constellation-constrained capacity is constructively estimated using the Blahut-Arimoto algorithm. An autoencoder is then proposed to optimize a practical MIMO transmission in terms of pre-coder and detector assuming channel distribution knowledge at the transmitter. The pre-coders are shown to be robust to changes in the channel.
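For readers unfamiliar with the constructive capacity estimate mentioned above, the following is a generic Blahut-Arimoto sketch for a discrete memoryless channel. The paper applies the algorithm to the constellation-constrained fiber IM/DD MIMO channel; the binary symmetric channel used here is only a toy stand-in to show the iteration.

```python
import numpy as np

def blahut_arimoto(P, n_iter=500):
    """Standard Blahut-Arimoto iteration for the capacity of a discrete
    memoryless channel with transition matrix P[x, y] = P(y | x)."""
    n_x = P.shape[0]
    p = np.full(n_x, 1.0 / n_x)                  # input distribution
    for _ in range(n_iter):
        q = p @ P                                # induced output distribution
        # d[x] = D( P(.|x) || q ), the per-input relative entropy
        d = np.sum(P * np.log(np.where(P > 0, P / q, 1.0)), axis=1)
        p = p * np.exp(d)
        p /= p.sum()
    q = p @ P
    d = np.sum(P * np.log(np.where(P > 0, P / q, 1.0)), axis=1)
    return np.sum(p * d) / np.log(2), p          # capacity in bits, optimizing input

# toy usage: binary symmetric channel with crossover 0.1 (capacity ~ 0.531 bit)
P_bsc = np.array([[0.9, 0.1], [0.1, 0.9]])
print(blahut_arimoto(P_bsc))
```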

Downlink precoding is considered for multi-path multi-user multiple-input single-output (MU-MISO) channels where the base station uses orthogonal frequency-division multiplexing and low-resolution signaling. A quantized coordinate minimization (QCM) algorithm is proposed and its performance is compared to other precoding algorithms, including squared infinity-norm relaxation (SQUID), multi-antenna greedy iterative quantization (MAGIQ), and maximum safety margin precoding. MAGIQ and QCM achieve the highest information rates, and QCM has the lowest complexity measured in the number of multiplications. The information rates are computed for pilot-aided channel estimation and a blind detector that performs joint data and channel estimation. Bit error rates for a 5G low-density parity-check code confirm the information-theoretic calculations. Simulations with imperfect channel knowledge at the transmitter show that the performance of QCM and SQUID degrades similarly to zero-forcing precoding with high-resolution quantizers.
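The sketch below illustrates only the generic idea behind quantized coordinate minimization: cyclically re-optimizing one transmit coordinate at a time over a small low-resolution alphabet so as to reduce a squared-error precoding cost. The cost function, alphabet, and dimensions are toy assumptions; the paper's QCM operates on the OFDM MU-MISO downlink and differs in details.

```python
import numpy as np

def quantized_coordinate_minimization(H, s, alphabet, n_sweeps=20):
    """Coordinate-wise minimization of ||s - H x||^2 with each entry of x
    restricted to a low-resolution alphabet (a generic sketch of the idea)."""
    n = H.shape[1]
    x = np.full(n, alphabet[0], dtype=complex)
    r = s - H @ x                                # running residual
    for _ in range(n_sweeps):
        for i in range(n):
            r_i = r + H[:, i] * x[i]             # residual with coordinate i removed
            # pick the alphabet symbol minimizing the residual norm for coordinate i
            costs = [np.linalg.norm(r_i - H[:, i] * a) for a in alphabet]
            x[i] = alphabet[int(np.argmin(costs))]
            r = r_i - H[:, i] * x[i]
    return x

# toy usage: QPSK-like 1-bit-per-dimension transmit alphabet
rng = np.random.default_rng(3)
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
s = rng.standard_normal(4) + 1j * rng.standard_normal(4)
alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = quantized_coordinate_minimization(H, s, alphabet)
print(np.linalg.norm(s - H @ x))
```

Each coordinate update needs only an alphabet-sized scan against a cached residual, which is consistent with the abstract's claim that QCM has the lowest multiplication count among the compared precoders.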

This paper investigates achievability results for interference channel coding. It is clarified that the rate-splitting technique is unnecessary to achieve the Han-Kobayashi and Jiang-Xin-Garg inner regions. Codes are constructed using sparse matrices (with logarithmic column degree) and constrained-random-number generators. By extending the problem, a possible extension of the known inner regions is established.

Statistical divergences (SDs), which quantify the dissimilarity between probability distributions, are a basic constituent of statistical inference and machine learning. A modern method for estimating those divergences relies on parametrizing an empirical variational form by a neural network (NN) and optimizing over parameter space. Such neural estimators are abundantly used in practice, but corresponding performance guarantees are partial and call for further exploration. We establish non-asymptotic absolute error bounds for a neural estimator realized by a shallow NN, focusing on four popular $\mathsf{f}$-divergences -- Kullback-Leibler, chi-squared, squared Hellinger, and total variation. Our analysis relies on non-asymptotic function approximation theorems and tools from empirical process theory to bound the two sources of error involved: function approximation and empirical estimation. The bounds characterize the effective error in terms of NN size and the number of samples, and reveal scaling rates that ensure consistency. For compactly supported distributions, we further show that neural estimators of the first three divergences above with appropriate NN growth-rate are minimax rate-optimal, achieving the parametric convergence rate.
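As a concrete instance of the neural estimators discussed above, the sketch below estimates the Kullback-Leibler divergence via the Donsker-Varadhan variational form, $\mathsf{KL}(P\|Q) = \sup_f \mathbb{E}_P[f] - \log \mathbb{E}_Q[e^f]$, with $f$ parametrized by a shallow (one hidden layer, tanh) network. This is a generic illustration under those assumptions, not the specific estimators or NN sizes analyzed in the paper; the optimizer and sample sizes are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def shallow_net(theta, x, hidden=8):
    """A shallow one-hidden-layer tanh network f_theta: R -> R."""
    w1 = theta[:hidden]
    b1 = theta[hidden:2 * hidden]
    w2 = theta[2 * hidden:3 * hidden]
    b2 = theta[-1]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def neural_kl_estimate(xp, xq, hidden=8, seed=0):
    """Donsker-Varadhan neural estimator of KL(P || Q):
    maximize E_P[f] - log E_Q[exp f] over the shallow network's parameters."""
    rng = np.random.default_rng(seed)
    theta0 = 0.1 * rng.standard_normal(3 * hidden + 1)

    def neg_dv(theta):
        fp = shallow_net(theta, xp, hidden)
        fq = shallow_net(theta, xq, hidden)
        return -(fp.mean() - np.log(np.mean(np.exp(fq))))

    res = minimize(neg_dv, theta0, method="L-BFGS-B")
    return -res.fun

# toy usage: KL( N(1,1) || N(0,1) ) = 0.5
rng = np.random.default_rng(1)
print(neural_kl_estimate(rng.normal(1, 1, 2000), rng.normal(0, 1, 2000)))
```

The two error sources named in the abstract are visible here: the shallow network limits how well $f$ can approximate the true log-likelihood ratio (function approximation), and the finite samples make both expectations noisy (empirical estimation).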

We consider the matrix least squares problem of the form $\| \mathbf{A} \mathbf{X}-\mathbf{B} \|_F^2$ where the design matrix $\mathbf{A} \in \mathbb{R}^{N \times r}$ is tall and skinny with $N \gg r$. We propose to create a sketched version $\| \tilde{\mathbf{A}}\mathbf{X}-\tilde{\mathbf{B}} \|_F^2$ where the sketched matrices $\tilde{\mathbf{A}}$ and $\tilde{\mathbf{B}}$ contain weighted subsets of the rows of $\mathbf{A}$ and $\mathbf{B}$, respectively. The subset of rows is determined via random sampling based on leverage score estimates for each row. We say that the sketched problem is $\epsilon$-accurate if its solution $\tilde{\mathbf{X}}_{\text{opt}} = \operatorname{argmin}_{\mathbf{X}} \| \tilde{\mathbf{A}}\mathbf{X}-\tilde{\mathbf{B}} \|_F^2$ satisfies $\|\mathbf{A}\tilde{\mathbf{X}}_{\text{opt}}-\mathbf{B} \|_F^2 \leq (1+\epsilon) \min_{\mathbf{X}} \| \mathbf{A}\mathbf{X}-\mathbf{B} \|_F^2$ with high probability. We prove that the number of samples required for an $\epsilon$-accurate solution is $O(r/(\beta \epsilon))$ where $\beta \in (0,1]$ is a measure of the quality of the leverage score estimates.
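A minimal sketch of the sampling scheme follows. It uses exact leverage scores computed from a thin QR factorization (which corresponds to $\beta = 1$ in the bound above); in practice one would substitute cheaper leverage score estimates, trading $\beta < 1$ for speed. The dimensions and noise level in the usage example are arbitrary.

```python
import numpy as np

def leverage_score_sketch(A, B, n_samples, rng=np.random.default_rng(0)):
    """Sample rows of (A, B) with probability proportional to the leverage
    scores of A and reweight so the sketched least-squares problem is unbiased."""
    Q, _ = np.linalg.qr(A)                       # thin QR; leverage scores are squared row norms of Q
    lev = np.sum(Q ** 2, axis=1)
    probs = lev / lev.sum()
    idx = rng.choice(A.shape[0], size=n_samples, replace=True, p=probs)
    w = 1.0 / np.sqrt(n_samples * probs[idx])    # importance-sampling reweighting
    return w[:, None] * A[idx], w[:, None] * B[idx]

# toy usage: compare the sketched solution's residual to the full least-squares residual
rng = np.random.default_rng(4)
N, r, c = 20000, 20, 5
A = rng.standard_normal((N, r))
B = A @ rng.standard_normal((r, c)) + 0.1 * rng.standard_normal((N, c))
A_s, B_s = leverage_score_sketch(A, B, n_samples=400)
X_full, *_ = np.linalg.lstsq(A, B, rcond=None)
X_skch, *_ = np.linalg.lstsq(A_s, B_s, rcond=None)
print(np.linalg.norm(A @ X_skch - B) ** 2 / np.linalg.norm(A @ X_full - B) ** 2)
```

The printed ratio is the $(1+\epsilon)$ factor from the definition of $\epsilon$-accuracy, evaluated on one random draw.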

Most existing neural video compression methods adopt the predictive coding framework, which first generates a predicted frame and then encodes its residue with respect to the current frame. However, in terms of compression ratio, predictive coding is only a sub-optimal solution because it relies on a simple subtraction operation to remove redundancy across frames. In this paper, we propose a deep contextual video compression framework to enable a paradigm shift from predictive coding to conditional coding. In particular, we try to answer the following questions: how to define, use, and learn the condition in a deep video compression framework. To tap the potential of conditional coding, we propose using the feature-domain context as the condition. This enables us to leverage the high-dimensional context to carry rich information to both the encoder and the decoder, which helps reconstruct high-frequency content for higher video quality. Our framework is also extensible, in that the condition can be flexibly designed. Experiments show that our method significantly outperforms previous state-of-the-art (SOTA) deep video compression methods. When compared with x265 using the veryslow preset, we achieve 26.0% bitrate savings for 1080p standard test videos.
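The contrast between the two paradigms can be reduced to a toy sketch: in residual coding the network sees only the pixel-domain difference, while in conditional coding both encoder and decoder are fed a learned feature-domain context. The modules below are illustrative assumptions (single convolutions, arbitrary channel counts), not the authors' architecture; they only show where the condition enters.

```python
import torch
import torch.nn as nn

class ResidualCodingStep(nn.Module):
    """Predictive coding: encode the residue x_t - x_pred (simple subtraction)."""
    def __init__(self, ch=64):
        super().__init__()
        self.enc = nn.Conv2d(3, ch, 3, padding=1)
        self.dec = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x_t, x_pred):
        latent = self.enc(x_t - x_pred)          # redundancy removed by subtraction only
        return x_pred + self.dec(latent)

class ConditionalCodingStep(nn.Module):
    """Conditional coding sketch: encoder and decoder both consume a learned
    feature-domain context instead of an explicit pixel-domain prediction."""
    def __init__(self, ch=64):
        super().__init__()
        self.context = nn.Conv2d(3, ch, 3, padding=1)       # context from the reference frame
        self.enc = nn.Conv2d(3 + ch, ch, 3, padding=1)
        self.dec = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, x_t, x_ref):
        ctx = self.context(x_ref)
        latent = self.enc(torch.cat([x_t, ctx], dim=1))      # encoder conditioned on context
        return self.dec(torch.cat([latent, ctx], dim=1))     # decoder conditioned on context

x_t = torch.randn(1, 3, 64, 64)
x_ref = torch.randn(1, 3, 64, 64)
print(ConditionalCodingStep()(x_t, x_ref).shape)
```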

In this paper we introduce a covariance framework for the analysis of EEG and MEG data that takes into account observed temporal stationarity on small time scales and trial-to-trial variations. We formulate a model for the covariance matrix, which is a Kronecker product of three components that correspond to space, time and epochs/trials, and consider maximum likelihood estimation of the unknown parameter values. An iterative algorithm that finds approximations of the maximum likelihood estimates is proposed. We perform a simulation study to assess the performance of the estimator and investigate the influence of different assumptions about the covariance factors on the estimated covariance matrix and on its components. In addition, we illustrate our method on real EEG and MEG data sets. The proposed covariance model is applicable in a variety of cases where spontaneous EEG or MEG acts as a source of noise and realistic noise covariance estimates are needed for accurate dipole localization, such as in evoked activity studies, or where the properties of spontaneous EEG or MEG are themselves the topic of interest, such as in combined EEG/fMRI experiments in which the correlation between EEG and fMRI signals is investigated.
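To illustrate the flavor of the iterative maximum likelihood estimation, the sketch below runs the classical two-factor "flip-flop" algorithm for a space-by-time Kronecker covariance under a matrix-normal model. The paper's model adds a third (epoch/trial) factor and its own approximations, so this is a reduced, generic version of the alternating-maximization idea only; all dimensions and the synthetic data are illustrative.

```python
import numpy as np

def flip_flop_kronecker(X, n_iter=20):
    """Flip-flop ML estimation of a two-factor Kronecker covariance
    (space x time) from trials X[i] of shape (p_space, q_time)."""
    n, p, q = X.shape
    S_space, S_time = np.eye(p), np.eye(q)
    for _ in range(n_iter):
        inv_time = np.linalg.inv(S_time)
        S_space = sum(Xi @ inv_time @ Xi.T for Xi in X) / (n * q)
        inv_space = np.linalg.inv(S_space)
        S_time = sum(Xi.T @ inv_space @ Xi for Xi in X) / (n * p)
    return S_space, S_time

# toy usage: simulate matrix-normal trials with known Kronecker factors
rng = np.random.default_rng(5)
p, q, n = 5, 8, 200
A = rng.standard_normal((p, p)); S_space_true = A @ A.T + p * np.eye(p)
B = rng.standard_normal((q, q)); S_time_true = B @ B.T + q * np.eye(q)
L_s, L_t = np.linalg.cholesky(S_space_true), np.linalg.cholesky(S_time_true)
X = np.einsum('ij,njk,lk->nil', L_s, rng.standard_normal((n, p, q)), L_t)
S_space_hat, S_time_hat = flip_flop_kronecker(X)
# the individual factors are identifiable only up to a scale swap, so compare
# the full Kronecker covariance instead
err = (np.linalg.norm(np.kron(S_space_hat, S_time_hat) - np.kron(S_space_true, S_time_true))
       / np.linalg.norm(np.kron(S_space_true, S_time_true)))
print(err)
```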
