
List versions of recursive decoding are known to approach maximum-likelihood (ML) performance for short-length Reed-Muller (RM) codes. The recursive decoder employs the Plotkin construction to split the original code into two shorter RM codes, decoding the lower-rate constituent code first. In this paper, we consider non-iterative soft-input decoders for RM codes that, unlike recursive decoding, start with the higher-rate constituent code. Although the error-rate performance of these algorithms is limited on its own, it can be significantly improved by applying permutations from the automorphism group of the code, with the permutations selected on the fly from the channel's soft information. Simulation results show that, enhanced by this permutation-selection technique, the proposed algorithms come close to the automorphism-based recursive decoding algorithm of similar complexity for short RM codes (length $\leq 256$), while outperforming it for longer RM codes. In particular, it is demonstrated that the proposed algorithms achieve near-ML performance with reasonable complexity for RM codes of length $2^m$ and order $m - 3$.
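The Plotkin split at the heart of both decoding orders can be illustrated in a few lines. This is a minimal noiseless sketch over GF(2) (the function names are ours, not the paper's):

```python
def plotkin_combine(u, v):
    """Plotkin construction over GF(2): c = (u | u+v)."""
    return u + [a ^ b for a, b in zip(u, v)]

def plotkin_split(c):
    """Recover the constituent words from a noiseless Plotkin codeword:
    u is the left half, v is the XOR of the two halves.  Recursive
    decoding estimates the lower-rate v first; the decoders in this
    paper start from the higher-rate u instead."""
    n = len(c) // 2
    u = c[:n]
    v = [a ^ b for a, b in zip(c[:n], c[n:])]
    return u, v

u, v = [0, 1, 0, 1], [1, 1, 0, 0]
c = plotkin_combine(u, v)   # one length-8 word from two length-4 halves
```

With channel noise, each half is only a soft estimate, which is where the on-the-fly permutation selection comes in.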

Related content

In this work, a sequential decoder for convolutional codes over channels prone to insertion, deletion, and substitution errors is described and analyzed. The decoder expands the code trellis by introducing a new channel state variable, called the drift state, as proposed by Davey and MacKay. A decoding metric suitable for sequential decoding on that trellis is derived in a manner that generalizes the original Fano metric. In low-noise environments, this approach reduces decoding complexity by roughly two orders of magnitude compared to Viterbi's algorithm, albeit at relatively higher frame error rates. An analytical method to determine the computational cutoff rate is also suggested. The analysis is supported by numerical evaluations of frame error rate and computational complexity, compared against optimal Viterbi decoding.

We show NP-completeness for various problems about the existence of arithmetic expression trees. Given a set of operations, a set of inputs, and a target value, does there exist an expression tree over those inputs and operations that evaluates to the target? We consider the variant in which the structure of the tree is also given, and the variant in which no parentheses are allowed in the expression.
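A brute-force search makes the decision problem concrete. This sketch enumerates all expression trees over a multiset of leaves; its exponential running time is what the NP-completeness result suggests one should expect in general:

```python
from itertools import combinations
import operator

def reachable(values, ops):
    """Set of values computable by some expression tree whose leaves
    are exactly `values` (a multiset) and whose internal nodes apply
    binary operations drawn from `ops`."""
    if len(values) == 1:
        return {values[0]}
    out = set()
    idx = range(len(values))
    for k in range(1, len(values)):           # leaves sent to the left subtree
        for left_idx in combinations(idx, k):
            left = [values[i] for i in left_idx]
            right = [values[i] for i in idx if i not in left_idx]
            for a in reachable(left, ops):
                for b in reachable(right, ops):
                    for op in ops:
                        out.add(op(a, b))
    return out

ops = [operator.add, operator.mul]
# e.g. (3 + 4) * 2 = 14 is reachable from leaves {2, 3, 4}; 100 is not
```

The target-existence question is then just a membership test in `reachable(inputs, ops)`.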

A toric code, introduced by Hansen to extend the Reed-Solomon code as a $k$-dimensional subspace of $\mathbb{F}_q^n$, is determined by a toric variety or its associated integral convex polytope $P \subseteq [0,q-2]^n$, where $k=|P \cap \mathbb{Z}^n|$ (the number of integer lattice points of $P$). There are two relevant parameters that determine the quality of a code: the information rate, which measures how much information is contained in a single bit of each codeword; and the relative minimum distance, which measures how many errors can be corrected relative to how many bits each codeword has. Soprunov and Soprunova defined a good infinite family of codes to be a sequence of codes of unbounded polytope dimension such that neither the corresponding information rates nor relative minimum distances go to 0 in the limit. We examine different ways of constructing families of codes by considering polytope operations such as the join and direct sum. In doing so, we give conditions under which no good family can exist and strong evidence that there is no such good family of codes.
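For a concrete instance of $k = |P \cap \mathbb{Z}^n|$: the dilated standard simplex $d\Delta_n$ has $\binom{d+n}{n}$ lattice points, which a brute-force count confirms (a toy check of the dimension formula, not a computation from the paper):

```python
from itertools import product
from math import comb

def simplex_lattice_points(n, d):
    """Brute-force count of integer points in the dilated simplex
    d*Delta_n = {x in R^n : x_i >= 0, sum(x) <= d}; this count is the
    dimension k of the associated toric code."""
    return sum(1 for x in product(range(d + 1), repeat=n)
               if sum(x) <= d)

# n = 2, d = 3: the 10 monomials of degree <= 3 in two variables,
# matching comb(3 + 2, 2) = 10
```

Operations such as the join and direct sum act on the polytope, and hence change both $k$ and the minimum distance of the resulting code.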

In this paper, we are interested in the performance of a variable-length stop-feedback (VLSF) code with $m$ optimal decoding times for the binary-input additive white Gaussian noise (BI-AWGN) channel. We first develop tight approximations on the tail probability of length-$n$ cumulative information density. Building on the work of Yavas \emph{et al.}, we formulate the problem of minimizing the upper bound on average blocklength subject to the error probability, minimum gap, and integer constraints. For this integer program, we show that for a given error constraint, a VLSF code that decodes after every symbol attains the maximum achievable rate. We also present a greedy algorithm that yields possibly suboptimal integer decoding times. By allowing a positive real-valued decoding time, we develop the gap-constrained sequential differential optimization (SDO) procedure. Numerical evaluation shows that the gap-constrained SDO can provide a good estimate on achievable rate of VLSF codes with $m$ optimal decoding times and that a finite $m$ suffices to attain Polyanskiy's bound for VLSF codes with $m = \infty$.

Gradient coding is a coding-theoretic framework to provide robustness against slow or unresponsive machines, known as stragglers, in distributed machine learning applications. Recently, Kadhe et al. proposed a gradient code based on a combinatorial design, called a balanced incomplete block design (BIBD), which is shown to outperform many existing gradient codes in worst-case adversarial straggling scenarios. However, the parameters for which such BIBD constructions exist are very limited. In this paper, we aim to overcome such limitations and construct gradient codes which exist for a wide range of system parameters while retaining the superior performance of BIBD gradient codes. Two such constructions are proposed: one based on a probabilistic construction that relaxes the stringent BIBD gradient code constraints, and the other based on taking the Kronecker product of existing gradient codes. The proposed gradient codes allow flexible choices of system parameters while retaining comparable error performance.
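The Kronecker-product construction can be sketched directly on the 0/1 assignment matrices of gradient codes (rows index workers, columns index data partitions; a 1 means the worker computes that partition's partial gradient). The toy matrices below are illustrative, not from the paper:

```python
def kron(B1, B2):
    """Kronecker product of two assignment matrices given as lists of
    rows; entry ((i,p),(j,q)) of the product is B1[i][j] * B2[p][q]."""
    return [[a * b for a in row1 for b in row2]
            for row1 in B1 for row2 in B2]

# a 3-worker and a 2-worker toy assignment
B1 = [[1, 1, 0],
      [0, 1, 1],
      [1, 0, 1]]
B2 = [[1, 0],
      [1, 1]]
B = kron(B1, B2)   # a code for 3*2 = 6 workers over 3*2 = 6 partitions
```

Each worker in the product code inherits its assignment from one worker of each factor code, which is how the construction multiplies the admissible parameter ranges.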

This paper is concerned with list decoding of $2$-interleaved binary alternant codes. The principle of the proposed algorithm is based on a combination of a list decoding algorithm for (interleaved) Reed-Solomon codes and an algorithm for (non-interleaved) alternant codes. The decoding radius exceeds the binary Johnson radius and the newly derived upper bound on the returned list size scales polynomially in the code parameters. Further, we provide simulation results on the probability of successful decoding by the proposed algorithm.

This paper studies the adversarial torn-paper channel. This problem is motivated by applications in DNA data storage, where the DNA strands that carry the information may break into smaller pieces that are received out of order. Our model extends the previously studied probabilistic setting to the worst case. We develop code constructions for any channel parameters for which a non-vanishing asymptotic rate is possible, and show that our constructions achieve the optimal asymptotic rate while allowing efficient encoding and decoding. Finally, we extend our results to related settings, including multi-strand storage, the presence of substitution errors, and incomplete coverage.
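A toy model of the channel clarifies the setting: the stored string is cut into consecutive pieces that arrive unordered. Random shuffling here stands in for the adversary, and the function name is ours:

```python
import random

def torn_paper_channel(x, piece_lens, seed=0):
    """Cut the codeword x into consecutive pieces of the given lengths
    and return them in shuffled order, as in the torn-paper model."""
    assert sum(piece_lens) == len(x)
    pieces, i = [], 0
    for length in piece_lens:
        pieces.append(x[i:i + length])
        i += length
    random.Random(seed).shuffle(pieces)
    return pieces

received = torn_paper_channel("0110100111", [3, 3, 4])
# the decoder must reassemble the message from the unordered pieces
```

In the adversarial version studied here, both the cut positions and the ordering are worst-case rather than random.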

In this paper, we present an explicit construction of list-decodable codes correcting a single deletion and a single substitution, with list size two and redundancy $3\log n + 4$, where $n$ is the block length of the code. Our construction has lower redundancy than the best known explicit construction by Gabrys et al. (arXiv 2021), whose redundancy is $4\log n + O(1)$.
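The redundancy gap versus the earlier construction grows with the block length: at $n = 2^{10}$, the new code spends $3 \cdot 10 + 4 = 34$ redundant bits against roughly $4 \cdot 10 = 40$. A one-line check (the $O(1)$ constant of the earlier code is unspecified in the abstract and set to 0 here):

```python
from math import log2

def redundancy_this(n):
    """Redundancy 3*log n + 4 of the proposed construction."""
    return 3 * log2(n) + 4

def redundancy_gabrys(n, c=0):
    """Redundancy 4*log n + O(1) of Gabrys et al.; the additive
    constant c is an assumption, not stated in the abstract."""
    return 4 * log2(n) + c
```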

Named entity recognition (NER) is a well-studied task in natural language processing. Traditional NER research deals only with flat entities and ignores nested entities. Span-based methods treat entity recognition as a span classification task. Although these methods have the innate ability to handle nested NER, they suffer from high computational cost, neglect of boundary information, under-utilization of spans that partially match entities, and difficulty recognizing long entities. To tackle these issues, we propose a two-stage entity identifier. First, we generate span proposals by filtering and boundary regression on seed spans to locate the entities; then we label the boundary-adjusted span proposals with their corresponding categories. Our method effectively utilizes the boundary information of entities and partially matched spans during training. Through boundary regression, entities of any length can in principle be covered, which improves the ability to recognize long entities. In addition, many low-quality seed spans are filtered out in the first stage, which reduces the time complexity of inference. Experiments on nested NER datasets demonstrate that our proposed method outperforms previous state-of-the-art models.
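The two-stage pipeline can be sketched with placeholder scorers. All three callables below stand in for learned models, and the names are our assumptions, not the paper's API:

```python
def two_stage_ner(tokens, span_score, boundary_adjust, classify,
                  max_len=4, threshold=0.5):
    """Stage 1: filter seed spans and regress their boundaries into
    proposals; stage 2: label each adjusted proposal.  Spans are
    half-open (start, end) token-index pairs."""
    seeds = [(i, j) for i in range(len(tokens))
             for j in range(i + 1, min(i + max_len, len(tokens)) + 1)]
    proposals = [boundary_adjust(s) for s in seeds
                 if span_score(s) >= threshold]    # stage 1: filter + regress
    return [(s, classify(s)) for s in proposals]   # stage 2: label
```

Filtering in stage 1 is what shrinks the quadratic set of candidate spans before the (more expensive) classification pass, and the boundary regression is what lets a short seed span cover a long entity.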

Deep reinforcement learning (RL) algorithms have shown an impressive ability to learn complex control policies in high-dimensional environments. However, despite the ever-increasing performance on popular benchmarks such as the Arcade Learning Environment (ALE), policies learned by deep RL algorithms often struggle to generalize when evaluated in remarkably similar environments. In this paper, we assess the generalization capabilities of DQN, one of the most traditional deep RL algorithms in the field. We provide evidence suggesting that DQN overspecializes to the training environment. We comprehensively evaluate the impact of traditional regularization methods, $\ell_2$-regularization and dropout, and of reusing the learned representations to improve the generalization capabilities of DQN. We perform this study using different game modes of Atari 2600 games, a recently introduced modification for the ALE which supports slight variations of the Atari 2600 games traditionally used for benchmarking. Despite regularization being largely underutilized in deep RL, we show that it can, in fact, help DQN learn more general features. These features can then be reused and fine-tuned on similar tasks, considerably improving the sample efficiency of DQN.
