
We show that Gallager's ensemble of Low-Density Parity Check (LDPC) codes achieves list-decoding capacity with high probability. These are the first graph-based codes shown to have this property. This result opens up a potential avenue towards truly linear-time list-decodable codes that achieve list-decoding capacity. Our result on list decoding follows from a much more general result: any $\textit{local}$ property satisfied with high probability by a random linear code is also satisfied with high probability by a random LDPC code from Gallager's distribution. Local properties are properties characterized by the exclusion of small sets of codewords, and include list-decodability, list-recoverability and average-radius list-decodability. In order to prove our results on LDPC codes, we establish sharp thresholds for when local properties are satisfied by a random linear code. More precisely, we show that for any local property $\mathcal{P}$, there is some $R^*$ so that random linear codes of rate slightly less than $R^*$ satisfy $\mathcal{P}$ with high probability, while random linear codes of rate slightly more than $R^*$, with high probability, do not. We also give a characterization of the threshold rate $R^*$.
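
For concreteness, the prototypical local property is $(p,L)$-list-decodability, whose standard definition (notation ours) forbids a small cluster of codewords: a code $\mathcal{C} \subseteq \mathbb{F}_q^n$ is $(p,L)$-list-decodable if \begin{align*} \bigl|\{c \in \mathcal{C} : d_H(c, y) \leq pn\}\bigr| \leq L \quad \text{for all } y \in \mathbb{F}_q^n, \end{align*} so a violation is witnessed by a set of $L+1$ codewords lying in a common Hamming ball of radius $pn$, matching the exclusion-of-small-sets template above.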

Related content

We study classical and quantum LDPC codes of constant rate obtained by the lifted product construction over non-abelian groups. We show that the obtained families of quantum LDPC codes are asymptotically good, which proves the qLDPC conjecture. Moreover, we show that the produced classical LDPC codes are also asymptotically good and locally testable with constant query and soundness parameters, which proves a well-known conjecture in the field of locally testable codes.

The decoding performance of product codes (PCs) and staircase codes (SCCs) based on iterative bounded-distance decoding (iBDD) can be improved with the aid of a moderate amount of soft information while maintaining a low decoding complexity. One promising approach is error-and-erasure (EaE) decoding, whose performance can be reliably estimated with density evolution (DE). However, the extrinsic message passing (EMP) decoder required by the DE analysis entails much higher complexity than the simple intrinsic message passing (IMP) decoder. In this paper, we simplify the EMP decoding algorithm over the EaE channel for two commonly used EaE decoders by deriving the EMP decoding result from the IMP decoder output and a few additional logical operations based on the algebraic structure of the component codes and the EaE decoding rule. Simulation results show that the number of BDD steps is reduced to a level comparable with IMP decoding. Furthermore, we propose a heuristic modification of the EMP decoder that reduces the complexity further. In numerical simulations, the modified decoder yields up to a 0.25 dB improvement in decoding performance over standard EMP decoding.

The tensor rank of some Gabidulin codes of small dimension is investigated. In particular, we determine the tensor rank of any rank-metric code equivalent to an $8$-dimensional $\mathbb{F}_q$-linear generalized Gabidulin code in $\mathbb{F}_{q}^{4\times4}$, showing that such a code is never of minimum tensor rank. In this way, we identify the first infinite family of Gabidulin codes that are not of minimum tensor rank.
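
For reference, the underlying notion (standard definition, notation ours): the tensor rank of a $k$-dimensional code $\mathcal{C} \subseteq \mathbb{F}_q^{n \times m}$ is the least number of rank-one matrices whose span contains it, \begin{align*} \operatorname{trk}(\mathcal{C}) = \min\bigl\{R : \mathcal{C} \subseteq \langle b_1 c_1^{\top}, \ldots, b_R c_R^{\top}\rangle_{\mathbb{F}_q},\; b_i \in \mathbb{F}_q^{n},\, c_i \in \mathbb{F}_q^{m}\bigr\}, \end{align*} i.e., the smallest $R$ such that every codeword is an $\mathbb{F}_q$-linear combination of $R$ fixed rank-one matrices; a code is said to have minimum tensor rank when this quantity attains the known lower bound for its parameters.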

This paper explores list decoding of convolutional and polar codes for short messages such as those found in the 5G physical broadcast channel. A cyclic redundancy check (CRC) is used to select a codeword from a list of likely codewords. One example in the 5G standard encodes a 32-bit message with a 24-bit CRC and a 512-bit polar code with additional bits added by repetition to achieve a very low rate of 32/864. This paper shows that optimizing the CRC length improves the $E_b/N_0$ performance of this polar code, where $E_b/N_0$ is the ratio of the energy per data bit to the noise power spectral density. Furthermore, even better $E_b/N_0$ performance is achieved by replacing the polar code with a tail-biting convolutional code (TBCC) with a distance-spectrum-optimal (DSO) CRC. This paper identifies the optimal CRC length to minimize the frame error rate (FER) of a rate-1/5 TBCC at a specific value of $E_b/N_0$. We also show that this optimized TBCC/CRC can attain the same excellent $E_b/N_0$ performance with the very low rate of 32/864 of the 5G polar code, where the low rate is achieved through repetition. We show that the proposed TBCC/CRC concatenated code outperforms the PBCH polar code described in the 5G standard both in terms of FER and decoding run time. We also explore the tradeoff between undetected error rate and erasure rate as the CRC size varies.
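
To make the selection step concrete, here is a minimal Python sketch of generic CRC-aided list selection (the candidate list is assumed sorted from most to least likely; the generator polynomial below is a placeholder, not one of the paper's DSO CRCs):

\begin{verbatim}
def crc_remainder(bits, poly):
    """Remainder of GF(2) polynomial division; bits/poly are 0/1 lists,
    MSB first, with poly[0] == 1."""
    w = len(poly) - 1
    bits = bits + [0] * w              # append CRC-width zeros
    for i in range(len(bits) - w):
        if bits[i]:
            for j, p in enumerate(poly):
                bits[i + j] ^= p
    return bits[-w:]

def crc_select(candidates, poly):
    """Return the most likely candidate whose trailing CRC checks;
    each candidate is message bits followed by CRC bits."""
    w = len(poly) - 1
    for cand in candidates:
        if crc_remainder(cand[:-w], poly) == cand[-w:]:
            return cand
    return None                        # erasure: no candidate passes

# Example: CRC-3 with generator x^3 + x + 1
# msg = [1, 0, 1, 1]; codeword = msg + crc_remainder(msg, [1, 0, 1, 1])
\end{verbatim}

Returning None corresponds to declaring an erasure, which is exactly the undetected-error versus erasure tradeoff studied above as the CRC size varies.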

We prove a lower bound on the probability of Shor's order-finding algorithm successfully recovering the order $r$ in a single run. The bound implies that by performing two limited searches in the classical post-processing part of the algorithm, a high success probability can be guaranteed, for any $r$, without re-running the quantum part or increasing the exponent length compared to Shor. Asymptotically, in the limit as $r$ tends to infinity, the probability of successfully recovering $r$ in a single run tends to one. Already for moderate $r$, a high success probability exceeding, e.g., $1 - 10^{-4}$ can be guaranteed. As corollaries, we prove analogous results for the probability of completely factoring any integer $N$ in a single run of the order-finding algorithm.
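
The classical post-processing referred to here is continued-fraction based; the following Python sketch shows the standard single-run recovery step (the paper's two limited searches, which boost the guarantee, are not reproduced):

\begin{verbatim}
def convergent_denominators(num, den):
    """Denominators of the continued-fraction convergents of num/den."""
    k2, k1 = 1, 0                      # k_{-2}, k_{-1}
    dens = []
    while den:
        a, num, den = num // den, den, num % den
        k2, k1 = k1, a * k1 + k2       # k_n = a_n * k_{n-1} + k_{n-2}
        dens.append(k1)
    return dens

def recover_order(j, m, a, N):
    """Given a measurement j with j / 2**m close to k / r, try each
    convergent denominator as a candidate order of a modulo N."""
    for r in convergent_denominators(j, 2 ** m):
        if 0 < r < N and pow(a, r, N) == 1:
            return r
    return None                        # single run failed; Shor re-runs
\end{verbatim}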

We consider a coded compressed sensing approach for unsourced random access and replace the outer tree code proposed by Amalladinne et al. with a list-recoverable code capable of correcting $t$ errors. A finite-length random coding bound for such codes is derived. Numerical experiments in the single-antenna quasi-static Rayleigh fading MAC show that the transition to list-recoverable codes correcting $t$ errors improves the performance of the coded compressed sensing scheme by 7-10 dB compared to the tree-code-based scheme. We propose two practical constructions of outer codes. The first is a modification of the tree code: it uses the same code structure, the key difference being a decoder capable of correcting up to $t$ errors. The second is based on Reed-Solomon codes and the Guruswami-Sudan list decoding algorithm. The first scheme provides energy efficiency very close to the random coding bound when the decoding complexity is unbounded, but for practical parameters the second scheme is better and improves on the tree-code-based scheme when the number of active users is less than 200.

The recently proposed statistical finite element (statFEM) approach synthesises measurement data with finite element models and allows for making predictions about the true system response. We provide a probabilistic error analysis for a prototypical statFEM setup based on a Gaussian process prior under the assumption that the noisy measurement data are generated by a deterministic true system response function that satisfies a second-order elliptic partial differential equation for an unknown true source term. In certain cases, properties such as the smoothness of the source term may be misspecified by the Gaussian process model. The error estimates we derive are for the expectation with respect to the measurement noise of the $L^2$-norm of the difference between the true system response and the mean of the statFEM posterior. The estimates imply polynomial rates of convergence in the numbers of measurement points and finite element basis functions and depend on the Sobolev smoothness of the true source term and the Gaussian process model. A numerical example for Poisson's equation is used to illustrate these theoretical results.
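
For orientation, a minimal sketch of a prototypical setup of this kind (our notation; the exact boundary conditions and covariance choices vary): the true response $u^{\dagger}$ solves a second-order elliptic problem with unknown source $f^{\dagger}$, and the data are noisy point evaluations, \begin{align*} -\nabla \cdot (\kappa \nabla u^{\dagger}) = f^{\dagger} \ \text{in } D, \qquad u^{\dagger} = 0 \ \text{on } \partial D, \qquad y_i = u^{\dagger}(x_i) + \varepsilon_i, \quad \varepsilon_i \sim \mathcal{N}(0, \sigma^2), \end{align*} with a Gaussian process prior on the source term and the PDE discretized by finite elements; misspecification then means that the smoothness of $f^{\dagger}$ differs from what the prior covariance encodes.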

We determine the exact minimax rate of a Gaussian sequence model under bounded convex constraints, purely in terms of the local geometry of the given constraint set $K$. Our main result shows that the minimax risk (up to constant factors) under the squared $L_2$ loss is given by $\epsilon^{*2} \wedge \operatorname{diam}(K)^2$ with \begin{align*} \epsilon^* = \sup \bigg\{\epsilon : \frac{\epsilon^2}{\sigma^2} \leq \log M^{\operatorname{loc}}(\epsilon)\bigg\}, \end{align*} where $\log M^{\operatorname{loc}}(\epsilon)$ denotes the local entropy of the set $K$, and $\sigma^2$ is the variance of the noise. We utilize our abstract result to re-derive known minimax rates for some special sets $K$ such as hyperrectangles, ellipses, and more generally quadratically convex orthosymmetric sets. Finally, we extend our results to the unbounded case with known $\sigma^2$ to show that the minimax rate in that case is $\epsilon^{*2}$.
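
Here the local entropy measures how many well-separated points $K$ contains near its worst-case location at scale $\epsilon$; one standard formulation (constants may differ from the paper's exact definition) is \begin{align*} M^{\operatorname{loc}}(\epsilon) = \sup_{\theta \in K} M\bigl(B(\theta, \epsilon) \cap K, \epsilon/2\bigr), \end{align*} where $M(A, \delta)$ denotes the $\delta$-packing number of $A$, so $\epsilon^*$ is the scale at which the noise can no longer distinguish the locally packed candidates.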

The recent explosive interest in transformers has suggested their potential to become powerful "universal" models for computer vision tasks, such as classification, detection, and segmentation. But how much further can transformers go: are they ready to take on some of the more notoriously difficult vision tasks, e.g., generative adversarial networks (GANs)? Driven by that curiosity, we conduct the first pilot study in building a GAN \textbf{completely free of convolutions}, using only pure transformer-based architectures. Our vanilla GAN architecture, dubbed \textbf{TransGAN}, consists of a memory-friendly transformer-based generator that progressively increases feature resolution while decreasing embedding dimension, and a patch-level discriminator that is also transformer-based. We then demonstrate that TransGAN benefits notably from data augmentation (more than standard GANs), a multi-task co-training strategy for the generator, and a locally initialized self-attention that emphasizes the neighborhood smoothness of natural images. Equipped with those findings, TransGAN scales up effectively to bigger models and high-resolution image datasets, and our best architecture achieves highly competitive performance compared to current state-of-the-art GANs built on convolutional backbones. Specifically, TransGAN sets a \textbf{new state-of-the-art} IS of 10.10 and FID of 25.32 on STL-10, and reaches a competitive IS of 8.64 and FID of 11.89 on CIFAR-10, as well as an FID of 12.23 on CelebA $64\times64$. We conclude with a discussion of the current limitations and future potential of TransGAN. The code is available at \url{//github.com/VITA-Group/TransGAN}.
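
As an illustration of the "increase resolution, decrease embedding dimension" idea, here is a hypothetical PyTorch sketch of one generator stage using pixel-shuffle upsampling (our simplification for exposition, not the released TransGAN code):

\begin{verbatim}
import torch
import torch.nn as nn

class UpStage(nn.Module):
    """One generator stage: transformer blocks over the token grid,
    then a pixel-shuffle upsample that doubles spatial resolution
    and quarters the embedding dimension."""
    def __init__(self, dim, heads=4, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.up = nn.PixelShuffle(2)   # (B, C, H, W) -> (B, C/4, 2H, 2W)

    def forward(self, x, h, w):        # x: (B, H*W, C)
        x = self.blocks(x)
        x = x.transpose(1, 2).reshape(x.size(0), -1, h, w)
        x = self.up(x)                 # double H and W, quarter C
        return x.flatten(2).transpose(1, 2), 2 * h, 2 * w

# e.g. an 8x8 token grid with dim 64 -> 16x16 tokens with dim 16
x, h, w = UpStage(64)(torch.randn(2, 64, 64), 8, 8)
\end{verbatim}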

The Transformer is widely used in natural language processing tasks. To train a Transformer, however, one usually needs a carefully designed learning-rate warm-up stage, which is shown to be crucial to the final performance but slows down optimization and requires more hyper-parameter tuning. In this paper, we first study theoretically why the learning-rate warm-up stage is essential and show that the location of layer normalization matters. Specifically, we prove with mean field theory that at initialization, for the originally designed Post-LN Transformer, which places the layer normalization between the residual blocks, the expected gradients of the parameters near the output layer are large. Using a large learning rate on those gradients therefore makes training unstable, and the warm-up stage is practically helpful for avoiding this problem. On the other hand, our theory also shows that if the layer normalization is put inside the residual blocks (the recently proposed Pre-LN Transformer), the gradients are well-behaved at initialization. This motivates us to remove the warm-up stage for the training of Pre-LN Transformers. Our experiments show that Pre-LN Transformers without the warm-up stage reach results comparable to baselines while requiring significantly less training time and hyper-parameter tuning across a wide range of applications.
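
The architectural difference at issue is only where LayerNorm sits relative to the residual connection; a minimal PyTorch sketch (our illustration of the standard definitions):

\begin{verbatim}
import torch
import torch.nn as nn

class PostLN(nn.Module):
    """Original design: normalize after the residual addition."""
    def __init__(self, d, heads=8, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d))
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, x):
        x = self.ln1(x + self.attn(x, x, x)[0])  # residual, then LN
        return self.ln2(x + self.ff(x))

class PreLN(PostLN):
    """Pre-LN: normalize inside the residual branch; gradients are
    well-behaved at initialization, so warm-up can be removed."""
    def forward(self, x):
        h = self.ln1(x)
        x = x + self.attn(h, h, h)[0]            # LN, then residual
        return x + self.ff(self.ln2(x))
\end{verbatim}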
