
In a recent paper, Brakensiek, Gopi and Makam introduced higher order MDS codes as a generalization of MDS codes. An order-$\ell$ MDS code, denoted by $\operatorname{MDS}(\ell)$, has the property that any $\ell$ subspaces formed from columns of its generator matrix intersect as minimally as possible. An independent work by Roth defined a different notion of higher order MDS codes as those achieving a generalized Singleton bound for list decoding. In this work, we show that these two notions of higher order MDS codes are (nearly) equivalent. We also show that generic Reed-Solomon codes are $\operatorname{MDS}(\ell)$ for all $\ell$, relying crucially on the GM-MDS theorem, which shows that generator matrices of generic Reed-Solomon codes achieve any possible zero pattern. As a corollary, this implies that generic Reed-Solomon codes achieve list-decoding capacity. More concretely, we show that, with high probability, a random Reed-Solomon code of rate $R$ over an exponentially large field is list decodable from radius $1-R-\epsilon$ with list size at most $\frac{1-R-\epsilon}{\epsilon}$, resolving a conjecture of Shangguan and Tamo.
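As a small illustration of the objects involved, the following sketch (my own, not from the paper) builds the Vandermonde generator matrix of a Reed-Solomon code with random evaluation points over a large prime field and verifies the classical MDS property, i.e. that every $k$ columns are linearly independent; the field size, length and dimension are illustrative choices.

```python
import itertools
import random

# A minimal sketch: empirically check the classical MDS property of a random
# Reed-Solomon code over a prime field, i.e. that every k columns of its
# Vandermonde generator matrix are linearly independent.

P = 2**13 - 1  # a Mersenne prime, standing in for the "exponentially large" field

def det_mod_p(mat, p):
    """Determinant of a square matrix over F_p via Gaussian elimination."""
    mat = [row[:] for row in mat]
    n, det = len(mat), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if mat[r][col] % p), None)
        if pivot is None:
            return 0
        if pivot != col:
            mat[col], mat[pivot] = mat[pivot], mat[col]
            det = -det
        det = det * mat[col][col] % p
        inv = pow(mat[col][col], p - 2, p)
        for r in range(col + 1, n):
            f = mat[r][col] * inv % p
            for c in range(col, n):
                mat[r][c] = (mat[r][c] - f * mat[col][c]) % p
    return det % p

def rs_generator(alphas, k, p):
    """k x n Vandermonde generator matrix of an RS code with the given points."""
    return [[pow(a, i, p) for a in alphas] for i in range(k)]

n, k = 10, 4
alphas = random.sample(range(P), n)  # random distinct evaluation points
G = rs_generator(alphas, k, P)
is_mds = all(
    det_mod_p([[G[r][c] for c in cols] for r in range(k)], P) != 0
    for cols in itertools.combinations(range(n), k)
)
print(f"random RS code of length {n}, dimension {k} is MDS: {is_mds}")
```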

Related content

Quantum error correction is fundamentally important for quantum information processing and computation. Quantum error correction codes have been studied and constructed since the pioneering work of Shor and Steane. Optimal (called MDS) $q$-ary quantum codes attaining the quantum Singleton bound have been constructed only for very restricted lengths $n \leq q^2+1$. Entanglement-assisted quantum error correction (EAQEC) codes were proposed to use a pre-shared maximally entangled state to enhance error-correction capability. Recently there have been many constructions of MDS EAQEC codes attaining the quantum Singleton bound, again for very restricted lengths. In this paper we construct MDS EAQEC $[[n, k, d, c]]_q$ codes for arbitrary $n$ satisfying $n \leq q^2+1$ and arbitrary distance $d\leq \frac{n+2}{2}$. It is proved that for any given length $n$ satisfying $O(q^2)=n \leq q^2+1$ and any given distance $O(q^2)=d \leq \frac{n+2}{2}$, there exist at least $O(q^2)$ MDS EAQEC $[[n, k, d, c]]_q$ codes with different $c$ parameters. Our results show that there are many more MDS entanglement-assisted quantum codes than MDS quantum codes that consume no maximally entangled state. This is natural from the physical point of view.
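For concreteness, here is a minimal parameter checker, assuming the entanglement-assisted quantum Singleton bound in the form $2(d-1) \leq n-k+c$ (valid for $d \leq \frac{n+2}{2}$); it also illustrates why many values of $c$ can coexist for a fixed length and distance.

```python
# A minimal sketch (assumption: the entanglement-assisted quantum Singleton
# bound in the form 2(d - 1) <= n - k + c, valid for d <= (n + 2) / 2).
# An [[n, k, d, c]]_q code is "MDS" here when the bound is met with equality.

def is_mds_eaqec(n: int, k: int, d: int, c: int) -> bool:
    """Check whether [[n, k, d, c]] parameters meet the EA Singleton bound with equality."""
    if not (0 <= c <= n and 1 <= d and 2 * d <= n + 2):
        raise ValueError("parameters outside the regime the bound covers")
    return 2 * (d - 1) == n - k + c

# Example: fixing n and d, every unit of pre-shared entanglement c buys one
# extra logical qudit k, which is why many different c values can coexist.
n, d = 20, 8
for c in range(0, 7):
    k = n + c - 2 * (d - 1)
    print(f"[[{n}, {k}, {d}, {c}]] meets the bound: {is_mds_eaqec(n, k, d, c)}")
```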

Adapting Deep Learning (DL) techniques to automate non-trivial coding activities, such as code documentation and defect detection, has been intensively studied recently. Learning to predict code changes is one of the popular and essential investigations. Prior studies have shown that DL techniques such as Neural Machine Translation (NMT) can benefit meaningful code changes, including bug fixing and code refactoring. However, NMT models may encounter a bottleneck when modeling long sequences and are thus limited in accurately predicting code changes. In this work, we design a Transformer-based approach, considering that the Transformer has proven effective in capturing long-term dependencies. Specifically, we propose a novel model named DTrans. To better incorporate the local structure of code, i.e., statement-level information in this paper, DTrans is designed with dynamically relative position encoding in the multi-head attention of the Transformer. Experiments on benchmark datasets demonstrate that DTrans can generate patches more accurately than the state-of-the-art methods, improving performance by 5.45\%-46.57\% in terms of the exact match metric on different datasets. Moreover, DTrans can locate the lines to change with 1.75\%-24.21\% higher accuracy than the existing methods.
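The sketch below is not DTrans itself but the generic mechanism it builds on: a single attention head whose logits are augmented with a learned relative-position bias (in the spirit of Shaw et al.); all shapes and names are illustrative assumptions.

```python
import numpy as np

# A minimal single-head sketch (not DTrans): attention logits augmented with a
# learned relative-position bias, the generic mechanism that a dynamically
# relative position encoding builds on.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relative_attention(q, k, v, rel_bias, max_dist):
    """q, k, v: (seq, dim); rel_bias: (2*max_dist + 1,) learned scalars."""
    seq, dim = q.shape
    logits = q @ k.T / np.sqrt(dim)                     # content term
    idx = np.arange(seq)
    rel = np.clip(idx[:, None] - idx[None, :], -max_dist, max_dist)
    logits = logits + rel_bias[rel + max_dist]          # position term
    return softmax(logits) @ v

rng = np.random.default_rng(0)
seq, dim, max_dist = 6, 8, 3
q, k, v = (rng.standard_normal((seq, dim)) for _ in range(3))
bias = rng.standard_normal(2 * max_dist + 1) * 0.1
print(relative_attention(q, k, v, bias, max_dist).shape)  # (6, 8)
```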

Image captioning is the process of automatically generating a description of an image in natural language. Image captioning is one of the significant challenges in image understanding since it requires not only recognizing salient objects in the image but also their attributes and the way they interact. The system must then generate a syntactically and semantically correct caption that describes the image content in natural language. With the significant progress in deep learning models and their ability to effectively encode large sets of images and generate correct sentences, several neural-based captioning approaches have been proposed recently, each trying to achieve better accuracy and caption quality. This paper introduces an encoder-decoder-based image captioning system in which the encoder extracts spatial features from the image using ResNet-101. This stage is followed by a refining model, which uses an attention-on-attention mechanism to extract the visual features of the target image objects and then determine their interactions. The decoder consists of an attention-based recurrent module and a reflective attention module, which collaboratively apply attention to the visual and textual features to enhance the decoder's ability to model long-term sequential dependencies. Extensive experiments performed on Flickr30K show the effectiveness of the proposed approach and the high quality of the generated captions.
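As a rough sketch of the refining step, the following implements a generic attention-on-attention gate in the style of Huang et al. (2019), which the described module builds on; the weights and dimensions are illustrative assumptions rather than the paper's exact layers.

```python
import numpy as np

# A minimal sketch of an attention-on-attention (AoA) gate: combine a query
# with its attended vector into a candidate information vector, then gate it.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aoa(query, attended, W_i, b_i, W_g, b_g):
    """Combine a query with its attended vector, then gate the result."""
    x = np.concatenate([query, attended])   # (2 * dim,)
    info = W_i @ x + b_i                    # candidate information vector
    gate = sigmoid(W_g @ x + b_g)           # how much of it to let through
    return gate * info

rng = np.random.default_rng(1)
dim = 16
query, attended = rng.standard_normal(dim), rng.standard_normal(dim)
W_i, W_g = rng.standard_normal((dim, 2 * dim)), rng.standard_normal((dim, 2 * dim))
b_i, b_g = np.zeros(dim), np.zeros(dim)
print(aoa(query, attended, W_i, b_i, W_g, b_g).shape)  # (16,)
```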

Any reinforcement learning system must be able to identify which past events contributed to observed outcomes, a problem known as credit assignment. A common solution to this problem is to use an eligibility trace to assign credit to a recency-weighted set of experienced events. However, in many realistic tasks, the recently experienced events are only one of the many possible sequences of events that could have preceded the current outcome. This suggests that reinforcement learning can be made more efficient by allowing credit assignment to any viable preceding state, rather than only those most recently experienced. Accordingly, we examine ``Predecessor Features'', the fully bootstrapped version of van Hasselt's ``Expected Trace'', an algorithm that achieves this richer form of credit assignment. By maintaining a representation that approximates the expected sum of past occupancies, the algorithm allows temporal difference (TD) errors to be propagated accurately to a larger number of predecessor states than conventional methods, greatly improving learning speed. The algorithm also extends naturally from tabular state representations to feature representations, allowing for increased performance on a wide range of environments. We demonstrate several use cases for Predecessor Features and compare its performance with other approaches.
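The following is a simplified tabular sketch of this idea under my own assumptions (it is not the paper's exact algorithm): an expected-trace table $Z$ is learned with a bootstrapped update, and each TD error is then propagated through $Z$ so that all likely predecessors receive credit at once.

```python
import numpy as np

# A simplified tabular sketch (my assumptions, not the paper's algorithm):
# learn Z[s] ~ E[eligibility trace | s] with a bootstrapped update, then
# propagate each TD error through Z so that all likely predecessors of the
# current state receive credit at once.

n_states, gamma, lam = 5, 0.9, 0.9
alpha, beta = 0.1, 0.1
V = np.zeros(n_states)               # state values
Z = np.zeros((n_states, n_states))   # Z[s] ~ expected discounted past occupancy

def onehot(s):
    v = np.zeros(n_states); v[s] = 1.0
    return v

def step(s_prev, s, r, s_next, terminal):
    # Bootstrapped predecessor update: this state's expected trace is itself
    # plus a discounted copy of the previous state's expected trace.
    target = onehot(s) + (gamma * lam * Z[s_prev] if s_prev is not None else 0.0)
    Z[s] += beta * (target - Z[s])
    # TD(0) error, credited to every expected predecessor via Z[s].
    delta = r + (0.0 if terminal else gamma * V[s_next]) - V[s]
    V[:] += alpha * delta * Z[s]

# Tiny chain 0 -> 1 -> ... -> 4 with reward only at the end.
for _ in range(200):
    prev = None
    for s in range(n_states):
        terminal = s == n_states - 1
        step(prev, s, 1.0 if terminal else 0.0, min(s + 1, n_states - 1), terminal)
        prev = s
print(np.round(V, 2))  # values rise toward the rewarding end of the chain
```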

The sorting operation is one of the main bottlenecks of successive-cancellation list (SCL) decoding. This paper introduces an improvement to SCL decoding for polar and pre-transformed polar codes that reduces the number of sorting operations without degrading the code's error-correction performance. For SCL decoding with an optimal metric function, we show that, on average, the correct branch's bit-metric value must equal the bit-channel capacity, while the average bit-metric value of a wrong branch can be at most zero. This implies that a wrong path's partial path metric deviates from the partial sum of the bit-channel capacities. For relatively reliable bit-channels, the bit metric of a wrong branch becomes a very large negative number, which enables us to detect and prune such paths. We prove that, for a threshold lower than the bit-channel cutoff rate, the probability of pruning the correct path decreases exponentially with the given threshold. Based on these findings, we present a pruning technique, and the experimental results demonstrate a substantial decrease in the number of sorting operations required for SCL decoding. In the stack algorithm, a similar technique is used to significantly reduce the average number of paths in the stack.
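A minimal sketch of the resulting pruning rule, with illustrative numbers rather than the paper's exact metric: a path whose partial path metric falls more than a threshold below the running sum of bit-channel capacities is discarded before it reaches the sorter.

```python
# A minimal sketch of the pruning rule (illustrative values, not the paper's
# exact metric): a path whose partial path metric falls too far below the
# running sum of bit-channel capacities is discarded before it is sorted.

def should_prune(path_metric: float, capacity_sum: float, threshold: float) -> bool:
    """Prune when the metric deviates from the capacity sum by more than threshold."""
    return path_metric < capacity_sum - threshold

# Toy trace over 6 decoded bits: the correct path's bit metrics hover near the
# bit-channel capacities, while a wrong branch picks up a large negative metric
# on a reliable bit-channel and is caught immediately.
capacities   = [0.50, 0.80, 0.95, 0.99, 0.99, 1.00]
correct_bits = [0.48, 0.79, 0.94, 0.98, 0.99, 0.99]
wrong_bits   = [0.48, 0.79, -7.2, 0.98, 0.99, 0.99]
threshold = 2.0

for name, bits in [("correct", correct_bits), ("wrong", wrong_bits)]:
    metric = cap = 0.0
    for b, c in zip(bits, capacities):
        metric, cap = metric + b, cap + c
        if should_prune(metric, cap, threshold):
            print(f"{name} path pruned after accumulating metric {metric:.2f}")
            break
    else:
        print(f"{name} path survives with metric {metric:.2f}")
```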

MDS codes and self-dual codes are important families of classical codes in coding theory. It is therefore of interest to investigate MDS self-dual codes. The existence of MDS self-dual codes over the finite field $\mathbb{F}_q$ is completely settled when $q$ is even. In this paper, for finite fields of odd characteristic, we construct some new classes of MDS self-dual codes over $\mathbb{F}_q$ from (extended) generalized Reed-Solomon codes.
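As a toy illustration (a brute-force search of my own, not the paper's construction), the sketch below looks for column multipliers that make a small generalized Reed-Solomon code self-dual by testing $G G^T = 0$ over $\mathbb{F}_q$ with $k = n/2$.

```python
import itertools

# A minimal brute-force sketch (not the paper's construction): search for
# column multipliers v making a small generalized Reed-Solomon code self-dual
# over F_q, by testing G G^T = 0 with k = n / 2. The rows of G are linearly
# independent (generalized Vandermonde), so self-orthogonal + k = n/2 means
# self-dual.

q, n, k = 13, 4, 2
alphas = [1, 2, 3, 4]  # distinct evaluation points in F_13

def grs_generator(alphas, v, k, q):
    return [[v[j] * pow(alphas[j], i, q) % q for j in range(len(alphas))]
            for i in range(k)]

def is_self_orthogonal(G, q):
    return all(sum(gi * gj for gi, gj in zip(row_a, row_b)) % q == 0
               for row_a in G for row_b in G)

hits = []
for v in itertools.product(range(1, q), repeat=n):  # nonzero multipliers
    G = grs_generator(alphas, v, k, q)
    if is_self_orthogonal(G, q):
        hits.append(v)
print(f"{len(hits)} multiplier choices give a self-dual [{n}, {k}] GRS code over F_{q}")
```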

In this work, we consider $q$-ary signature codes of length $k$ and size $n$ for a noisy adder multiple-access channel. A signature code in this model has the property that any subset of codewords can be uniquely reconstructed from any vector obtained from the sum (over the integers) of these codewords. We show that there exists an algorithm to construct a signature code of length $k = \frac{2n\log{3}}{(1-2\tau)\left(\log{n} + (q-1)\log{\frac{\pi}{2}}\right)} +\mathcal{O}\left(\frac{n}{\log{n}(q+\log{n})}\right)$ capable of correcting $\tau k$ errors at the channel output, where $0\le \tau < \frac{q-1}{2q}$. Furthermore, we present an explicit construction of signature codes with polynomial complexity that can correct up to $\left( \frac{q-1}{8q} - \epsilon\right)k$ errors for a codeword length $k = \mathcal{O} \left ( \frac{n}{\log \log n} \right )$, where $\epsilon$ is a small non-negative number. Moreover, we prove several non-existence results (converse bounds) for $q$-ary signature codes enabling error correction.
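The defining property can be checked by brute force at toy scale; the sketch below does so for the noiseless case $\tau = 0$, verifying that all subset sums over the integers are distinct. Parameters are illustrative.

```python
import itertools
import random

# A minimal sketch of the defining property (brute force for tiny parameters,
# noiseless case tau = 0): a signature code must let any subset of codewords
# be recovered from the integer sum of its members, i.e. all subset sums over
# the integers must be distinct.

def all_subset_sums_distinct(code):
    """code: list of q-ary codewords (tuples). Checks sums over the integers."""
    seen = set()
    for r in range(len(code) + 1):
        for subset in itertools.combinations(range(len(code)), r):
            total = tuple(sum(code[i][j] for i in subset)
                          for j in range(len(code[0])))
            if total in seen:
                return False
            seen.add(total)
    return True

# Size n = 4, length k = 8, alphabet q = 3; random codes of this length are
# usually signature codes at this tiny scale.
random.seed(2)
code = [tuple(random.randrange(3) for _ in range(8)) for _ in range(4)]
print("is a signature code:", all_subset_sums_distinct(code))
```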

Decoding algorithms for Reed--Solomon (RS) codes are of great interest for both practical and theoretical reasons. In this paper, an efficient algorithm, called the modular approach (MA), is devised for solving the Welch--Berlekamp (WB) key equation. By taking the MA as the key equation solver, we propose a new decoding algorithm for systematic RS codes. For $(n,k)$ RS codes, where $n$ is the code length and $k$ is the code dimension, the proposed decoding algorithm has both the best asymptotic computational complexity $O(n\log(n-k) + (n-k)\log^2(n-k))$ and the smallest constant factor achieved to date. By comparing the number of field operations required, we show that when decoding practical RS codes, the new algorithm is significantly superior to the existing methods in terms of computational complexity. When decoding the $(4096, 3584)$ RS code defined over $\mathbb{F}_{2^{12}}$, the new algorithm is 10 times faster than a conventional syndrome-based method. Furthermore, the new algorithm has a regular architecture and is thus suitable for hardware implementation.
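For contrast, here is a textbook Welch--Berlekamp decoder implemented with plain Gaussian elimination over a toy field; it solves the same key equation, but far less efficiently than the paper's modular approach. Field, code and error pattern are illustrative.

```python
# A minimal textbook Welch-Berlekamp decoder via plain Gaussian elimination
# (illustrative only; the paper's modular approach solves the same key
# equation far more efficiently).

P = 929  # small prime field

def solve_mod_p(A, b, p):
    """Solve A x = b over F_p; free variables are set to 0."""
    m, ncols = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, m) if M[i][c] % p), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)
        M[r] = [v * inv % p for v in M[r]]
        for i in range(m):
            if i != r and M[i][c]:
                M[i] = [(vi - M[i][c] * vr) % p for vi, vr in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    x = [0] * ncols
    for i, c in enumerate(pivots):
        x[c] = M[i][ncols]
    return x

def poly_div(num, den, p):
    """Exact division of polynomials over F_p (coefficients low to high)."""
    num, q = num[:], [0] * (len(num) - len(den) + 1)
    inv = pow(den[-1], p - 2, p)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] * inv % p
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - q[i] * d) % p
    assert not any(num), "division not exact: too many errors"
    return q

def wb_decode(alphas, y, k, p):
    n = len(alphas)
    t = (n - k) // 2
    # Unknowns: N_0..N_{k+t-1}, E_0..E_{t-1}, with E monic of degree t.
    # Equation at each point a: N(a) - y * (E_0 + ... + E_{t-1} a^{t-1}) = y * a^t.
    A = [[pow(a, j, p) for j in range(k + t)] +
         [(-yi * pow(a, j, p)) % p for j in range(t)]
         for a, yi in zip(alphas, y)]
    b = [yi * pow(a, t, p) % p for a, yi in zip(alphas, y)]
    x = solve_mod_p(A, b, p)
    N, E = x[:k + t], x[k + t:] + [1]
    return poly_div(N, E, p)[:k]

# Toy (10, 4) RS code: encode, corrupt 3 positions, decode.
alphas = list(range(1, 11))
msg = [17, 42, 0, 5]
codeword = [sum(m * pow(a, i, P) for i, m in enumerate(msg)) % P for a in alphas]
received = codeword[:]
for pos, junk in [(0, 500), (4, 7), (9, 123)]:
    received[pos] = junk
print(wb_decode(alphas, received, 4, P) == msg)  # True
```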

Existing approaches to image captioning usually generate the sentence word by word from left to right, conditioned only on local context, namely the given image and the previously generated words. Many studies have sought to make use of global information during decoding, e.g., through iterative refinement. However, it remains under-explored how to effectively and efficiently incorporate future context. To address this issue, and inspired by the fact that Non-Autoregressive Image Captioning (NAIC) can leverage two-sided relations via a modified mask operation, we aim to graft this advance onto the conventional Autoregressive Image Captioning (AIC) model while maintaining inference efficiency, without extra time cost. Specifically, the AIC and NAIC models are first trained jointly with a shared visual encoder, forcing the visual encoder to contain sufficient and valid future context; the AIC model is then encouraged to capture the causal dynamics of cross-layer interchanging from the NAIC model on its unconfident words, following a teacher-student paradigm optimized with a distribution calibration training objective. Empirical evidence demonstrates that our proposed approach clearly surpasses the state-of-the-art baselines in both automatic metrics and human evaluations on the MS COCO benchmark. The source code is available at: //github.com/feizc/Future-Caption.
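A minimal sketch of the confidence-masked distillation objective as I read it (the shapes and the confidence rule are assumptions): the AIC student matches the NAIC teacher's token distributions only where the student is unconfident.

```python
import numpy as np

# A minimal sketch (shapes and confidence rule are my assumptions): the AIC
# student mimics the NAIC teacher's distribution only on tokens where the
# student is unconfident.

def masked_kl(student_logits, teacher_logits, confidence_threshold=0.5):
    """Token-level KL(teacher || student), applied only to unconfident tokens."""
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    p_s, p_t = softmax(student_logits), softmax(teacher_logits)
    unconfident = p_s.max(axis=-1) < confidence_threshold   # (seq,)
    kl = (p_t * (np.log(p_t + 1e-9) - np.log(p_s + 1e-9))).sum(axis=-1)
    return (kl * unconfident).sum() / max(unconfident.sum(), 1)

rng = np.random.default_rng(3)
seq_len, vocab = 12, 50
student = rng.standard_normal((seq_len, vocab))
teacher = rng.standard_normal((seq_len, vocab))
print(f"distillation loss on unconfident words: {masked_kl(student, teacher):.3f}")
```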

Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to efficiently capture precise long-range dependency coupling between output and input. Recent studies have shown the potential of the Transformer to increase prediction capacity. However, there are several severe issues with the Transformer that prevent it from being directly applicable to LSTF, such as quadratic time complexity, high memory usage, and an inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient Transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a $ProbSparse$ self-attention mechanism, which achieves $O(L \log L)$ time complexity and memory usage and has comparable performance on sequences' dependency alignment; (ii) self-attention distilling, which highlights dominating attention by halving cascading layer input and efficiently handles extremely long input sequences; (iii) a generative-style decoder, which, while conceptually simple, predicts long time-series sequences in one forward operation rather than step by step, drastically improving the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.
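The following is a simplified single-head sketch of ProbSparse-style self-attention; unlike Informer's actual mechanism, the sparsity score here is computed against all keys rather than a sampled subset, so it illustrates the selection rule but not the $O(L \log L)$ complexity.

```python
import numpy as np

# A simplified single-head sketch of ProbSparse-style self-attention: only
# the u most "active" queries attend; the rest fall back to mean(V). Unlike
# Informer, the sparsity score is computed against all keys, not a sample.

def probsparse_attention(Q, K, V, u):
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                        # (L, L)
    # Sparsity measurement: max-mean gap of each query's score distribution.
    sparsity = scores.max(axis=1) - scores.mean(axis=1)  # (L,)
    top = np.argsort(sparsity)[-u:]                      # top-u active queries
    out = np.tile(V.mean(axis=0), (L, 1))                # lazy queries -> mean(V)
    w = np.exp(scores[top] - scores[top].max(axis=1, keepdims=True))
    out[top] = (w / w.sum(axis=1, keepdims=True)) @ V    # active queries attend
    return out

rng = np.random.default_rng(4)
L, d = 32, 16
Q, K, V = (rng.standard_normal((L, d)) for _ in range(3))
u = max(1, int(np.ceil(np.log(L))))   # u = O(log L) active queries
print(probsparse_attention(Q, K, V, u).shape)  # (32, 16)
```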
