
High-rate product codes (PCs) and staircase codes (SCs) are ubiquitous in high-speed optical communication, achieving near-capacity performance on the binary symmetric channel. Their success is largely due to highly efficient, low-complexity iterative decoding algorithms. In this paper, we extend the density evolution (DE) analysis for PCs and SCs to a channel with ternary output and ternary message passing, where the third symbol marks an erasure. We investigate the performance of a standard error-and-erasure decoder and of its simplification using DE. The proposed analysis can be used to find component code configurations and quantizer levels for the channel output. We also show how the use of even-weight BCH subcodes as component codes can improve the decoding performance at high rates. The DE results are verified by Monte-Carlo simulations, which show that ternary decoding yields additional coding gains of up to 0.6 dB at only a small increase in complexity compared to traditional binary message passing.
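As a concrete illustration, the following is a minimal sketch of the kind of ternary channel-output quantizer such an analysis assumes: a soft BPSK observation is mapped to 0, 1, or an erasure depending on a threshold $T$. The threshold value and names here are illustrative; in the paper the quantizer levels would be chosen via DE.

```python
import numpy as np

def ternary_quantize(y, T=0.3):
    """Map noisy BPSK observations y (+1 -> bit 0, -1 -> bit 1) to
    ternary symbols: 0, 1, or -1 (erasure) when |y| < T.
    T is an illustrative value; DE would pick it in practice."""
    return np.where(y >= T, 0, np.where(y <= -T, 1, -1))

# Example: all-zero codeword (+1 BPSK) over an AWGN channel
rng = np.random.default_rng(0)
y = 1.0 + rng.normal(0, 0.5, size=10)
print(ternary_quantize(y))
```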

Related Content

Decoding sequences that stem from multiple transmissions of a codeword over an insertion, deletion, and substitution channel is a critical component of efficient deoxyribonucleic acid (DNA) data storage systems. In this paper, we consider a concatenated coding scheme with an outer nonbinary low-density parity-check code or a polar code and either an inner convolutional code or a time-varying block code. We propose two novel decoding algorithms for inference from multiple received sequences, both combining the inner code and the channel into a joint hidden Markov model to infer symbolwise a posteriori probabilities (APPs). The first decoder computes the exact APPs by jointly decoding the received sequences, whereas the second decoder approximates the APPs by combining the results of separately decoded received sequences. Using the proposed algorithms, we evaluate the performance of decoding multiple received sequences by means of achievable information rates and Monte-Carlo simulations. We show significant performance gains compared to decoding a single received sequence. In addition, we improve the performance of the aforementioned coding scheme by optimizing both the inner and outer codes.
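A minimal sketch of the second decoder's combining step, under the assumption that per-sequence symbolwise APPs are fused by an elementwise product and renormalization (treating the decoded sequences as independent); the function name and array shapes are illustrative, not the paper's exact formulation:

```python
import numpy as np

def combine_separate_apps(apps):
    """Approximate joint symbolwise APPs from independently decoded
    received sequences.
    apps: (num_sequences, seq_len, alphabet_size), where
          apps[r, i, :] is the posterior over symbol i from sequence r.
    Combines in the log domain (independence assumption) and
    renormalizes at each symbol position."""
    log_apps = np.log(np.clip(apps, 1e-30, None)).sum(axis=0)
    log_apps -= log_apps.max(axis=-1, keepdims=True)  # numerical stability
    combined = np.exp(log_apps)
    return combined / combined.sum(axis=-1, keepdims=True)

# Two noisy posteriors over a binary symbol at 3 positions
a = np.array([[[0.9, 0.1], [0.6, 0.4], [0.5, 0.5]],
              [[0.8, 0.2], [0.3, 0.7], [0.5, 0.5]]])
print(combine_separate_apps(a))
```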

This paper proposes a passive WiFi indoor localization scheme. Instead of using WiFi signals received by mobile devices as fingerprints, we use signals received by routers to locate the mobile carrier; consequently, no software installation on the mobile device is required. To resolve the data-insufficiency problem, flow-control signals such as request to send (RTS) and clear to send (CTS) are utilized. In our model, the received signal strength indicator (RSSI) and channel state information (CSI) are used as fingerprints for several algorithms, including deterministic, probabilistic, and neural-network localization algorithms. We further investigate the performance of these localization algorithms through extensive on-site experiments with various models of phones at hundreds of testing locations. We demonstrate that our passive scheme achieves an average localization error of 0.8 m when the phone is actively transmitting data frames and 1.5 m when it is not.
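To make the deterministic variant concrete, here is a minimal k-nearest-neighbor fingerprint matcher over RSSI vectors; the database layout and names are illustrative placeholders rather than the paper's actual pipeline:

```python
import numpy as np

def knn_localize(rssi_query, fp_db, positions, k=3):
    """Deterministic fingerprint localization: find the k reference
    points whose stored RSSI vectors are closest in Euclidean distance
    and return their average position.
    rssi_query: (n_routers,) RSSI vector of the target
    fp_db:      (n_points, n_routers) stored fingerprints
    positions:  (n_points, 2) reference coordinates in meters"""
    d = np.linalg.norm(fp_db - rssi_query, axis=1)
    nearest = np.argsort(d)[:k]
    return positions[nearest].mean(axis=0)

fp_db = np.array([[-40, -60], [-55, -45], [-70, -30]])
positions = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])
print(knn_localize(np.array([-50, -50]), fp_db, positions, k=2))
```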

Massive grant-free multiple access is a valuable research topic for next-generation multiple access, since it significantly reduces the control-signaling overhead and transmission latency. This paper constructs a novel uniquely-decodable multi-amplitude sequence (UDAS) set for grant-free multiple-access systems, which can provide high spectrum efficiency (SE) without additional redundancy and realize low-complexity active user detection (AUD). We first propose a UDAS-based multi-dimensional bit-interleaved coded modulation (MD-BICM) transmitter. We then give a detailed definition of UDAS and provide three conditions for constructing a UDAS set. Next, two kinds of UDAS sets are constructed based on cyclic and quasi-cyclic matrix modes, and some important features of the cyclic/quasi-cyclic UDAS sets are derived. We also present an AUD algorithm based on a statistic of the UDAS features (SoF-AUD), as well as a joint multiuser detection and improved message-passing iterative decoding algorithm for the proposed system. Finally, the active user error rate (AUER) and Shannon limits of the proposed system are derived in detail. Simulation results show that the AUER of our proposed system can reach an extremely low value ($10^{-5}$) when $E_b/N_0$ is 0 dB and the length of the transmit block is larger than a given value (e.g., 784). Meanwhile, the SE of our proposed system is comparable to that of well-designed non-orthogonal multiple-access (NOMA) codebooks, verifying its validity and flexibility.
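Assuming "uniquely decodable" means that every pattern of active users produces a distinct superimposed signal (the usual adder-channel notion), a brute-force membership check might look like the following sketch; this interpretation and all names are ours, not the paper's:

```python
from itertools import combinations
import numpy as np

def is_uniquely_decodable(seqs):
    """Brute-force check that every subset of active users yields a
    distinct superimposed (elementwise-sum) signal, taken here as the
    defining property of a UDAS set.
    seqs: (num_users, seq_len) integer amplitude sequences."""
    seen = set()
    for r in range(len(seqs) + 1):
        for subset in combinations(range(len(seqs)), r):
            s = tuple(seqs[list(subset)].sum(axis=0))
            if s in seen:
                return False
            seen.add(s)
    return True

seqs = np.array([[1, 2], [2, 1]])
print(is_uniquely_decodable(seqs))  # True: all 4 subset sums differ
```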

Combinatorial designs provide an interesting source of optimization problems. Among them, permutation codes are particularly interesting given their applications in powerline communications, flash memories, and block ciphers. This paper addresses the design of permutation codes by evolutionary algorithms (EAs), developing an iterative approach. Starting from a single random permutation, new permutations satisfying the minimum-distance constraint are incrementally added to the code by using a permutation-based EA. We evaluate our approach with four different fitness functions targeting the minimum-distance requirement at different levels of detail and with two different policies concerning code expansion and pruning. We compare the results achieved by our EA approach to those of a simple random search, remarking that neither method scales well with the problem size.
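The incremental construction is easy to sketch; below, plain random candidate generation stands in for the permutation-based EA, and all names and parameter values are illustrative:

```python
import random

def hamming(p, q):
    """Hamming distance between two permutations: positions that differ."""
    return sum(a != b for a, b in zip(p, q))

def grow_code(n, d_min, attempts=20000, seed=1):
    """Incrementally grow a permutation code of length n with minimum
    Hamming distance d_min, starting from one random permutation and
    keeping candidates that satisfy the distance constraint (random
    search stands in for the permutation-based EA here)."""
    rng = random.Random(seed)
    code = [tuple(rng.sample(range(n), n))]
    for _ in range(attempts):
        cand = tuple(rng.sample(range(n), n))
        if all(hamming(cand, c) >= d_min for c in code):
            code.append(cand)
    return code

code = grow_code(n=6, d_min=5)
print(len(code), "permutations found")
```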

List versions of recursive decoding are known to approach maximum-likelihood (ML) performance for short-length Reed-Muller (RM) codes. The recursive decoder employs the Plotkin construction to split the original code into two shorter RM codes, with the lower-rate RM code being decoded first. In this paper, we consider non-iterative soft-input decoders for RM codes that, unlike recursive decoding, start decoding with a higher-rate constituent code. Although the error-rate performance of these algorithms is limited, it can be significantly improved by applying permutations from the automorphism group of the code, with the permutations selected on the fly based on soft information from the channel. Simulation results show that the error-rate performance of the proposed algorithms, enhanced by a permutation-selection technique, is close to that of the automorphism-based recursive decoding algorithm with similar complexity for short-length $(\leq 256)$ RM codes, while our decoders perform better for longer RM codes. In particular, we demonstrate that the proposed algorithms achieve near-ML performance for RM codes of length $2^m$ and order $m - 3$ with reasonable complexity.
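For reference, the Plotkin construction the recursive decoder relies on splits a codeword as $c = (u \,|\, u + v)$ with $u$ from the higher-rate and $v$ from the lower-rate constituent code; a minimal GF(2) sketch:

```python
import numpy as np

def plotkin_join(u, v):
    """Plotkin construction over GF(2): build c = (u | u + v)."""
    return np.concatenate([u, (u + v) % 2])

def plotkin_split(c):
    """Inverse: recover (u, v) from c = (u | u + v) over GF(2),
    since v = (u + v) - u mod 2."""
    half = len(c) // 2
    u = c[:half]
    v = (c[half:] - u) % 2
    return u, v

u = np.array([0, 1, 1, 0]); v = np.array([1, 1, 0, 0])
c = plotkin_join(u, v)
print(c, plotkin_split(c))
```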

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
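As a toy numerical illustration of the criticality idea, one can push a random signal through a deep tanh network and watch how the activation variance behaves for weight variances below, at, and above the tanh critical point $C_W = 1$ (with zero bias); the network sizes below are arbitrary choices for the demonstration:

```python
import numpy as np

def propagate_variance(sigma_w2, depth=50, width=1000, seed=0):
    """Push a random input through a deep tanh MLP and track the
    per-layer activation variance. Near the tanh critical point
    (weight variance C_W = 1, zero bias) the signal avoids the
    exponential decay seen below criticality."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0, 0.1, size=width)
    variances = []
    for _ in range(depth):
        W = rng.normal(0, np.sqrt(sigma_w2 / width), size=(width, width))
        x = np.tanh(W @ x)
        variances.append(x.var())
    return variances

for cw in (0.5, 1.0, 2.0):
    v = propagate_variance(cw)
    print(f"C_W={cw}: layer-50 activation variance = {v[-1]:.2e}")
```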

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation or neither of these. These findings cast doubts on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
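The ODE link itself is easy to state: with step size $h = 1/\text{depth}$ and weights varying smoothly with layer index, the residual update is an explicit Euler step. A minimal sketch, where the smooth weight profile is an assumption made for illustration (precisely the kind of assumption the paper scrutinizes):

```python
import numpy as np

def resnet_forward(x, weights, h):
    """Residual forward pass x_{k+1} = x_k + h * f(W_k x_k).
    With h = 1/depth and W_k sampled from a smooth function of
    k/depth, this is an explicit Euler scheme for the neural ODE
    dx/dt = f(W(t) x)."""
    for W in weights:
        x = x + h * np.tanh(W @ x)
    return x

rng = np.random.default_rng(0)
dim, depth = 4, 100
# Smooth weight profile W(t) sampled at t = k/depth (illustrative)
W0, W1 = rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim))
weights = [W0 + (k / depth) * (W1 - W0) for k in range(depth)]
print(resnet_forward(np.ones(dim), weights, h=1.0 / depth))
```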

In this paper, we investigate the challenges of using reinforcement learning agents for question answering over knowledge graphs in real-world applications. We examine the performance metrics used by state-of-the-art systems and determine that they are inadequate for such settings. More specifically, they do not correctly evaluate systems in situations where no answer is available, so agents optimized for these metrics are poor at modeling confidence. We introduce a simple new performance metric for evaluating question-answering agents that is more representative of practical usage conditions, and optimize for this metric by extending the binary reward structure used in prior work to a ternary reward structure that also rewards an agent for not answering a question rather than giving an incorrect answer. We show that this can drastically improve the precision of answered questions while declining to answer only a limited number of questions that were previously answered correctly. Employing a supervised learning strategy using depth-first-search paths to bootstrap the reinforcement learning algorithm further improves performance.
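A hedged sketch of what such a ternary reward might look like; the specific reward values and the convention that abstention is encoded as `None` are our illustrative choices, not the paper's exact shaping:

```python
def ternary_reward(answer, gold, r_correct=1.0, r_abstain=0.2, r_wrong=-1.0):
    """Ternary reward for a KGQA agent: abstaining ("no answer") is
    rewarded more than a wrong answer but less than a correct one.
    The numeric values are illustrative placeholders."""
    if answer is None:  # agent chose not to answer
        return r_abstain
    return r_correct if answer == gold else r_wrong

print(ternary_reward("Paris", "Paris"))  #  1.0
print(ternary_reward(None, "Paris"))     #  0.2
print(ternary_reward("Rome", "Paris"))   # -1.0
```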

Classification tasks are usually analysed and improved through new model architectures or hyperparameter optimisation, but the underlying properties of datasets are discovered only on an ad-hoc basis as errors occur. However, understanding the properties of the data is crucial in perfecting models. In this paper we analyse exactly which characteristics of a dataset best determine how difficult that dataset is for the task of text classification. We then propose an intuitive measure of difficulty for text classification datasets which is simple and fast to calculate. We show that this measure generalises to unseen data by comparing it to state-of-the-art datasets and results. This measure can be used to analyse the precise source of errors in a dataset and allows fast estimation of how difficult a dataset is to learn. We searched for this measure by training 12 classical and neural-network-based models on 78 real-world datasets and then using a genetic algorithm to discover the best measure of difficulty. Our difficulty-calculating code (https://github.com/Wluper/edm) and datasets (https://data.wluper.com) are publicly available.
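For intuition, a toy stand-in for such a difficulty score is sketched below, combining two plausible ingredients (class imbalance and lexical sparsity); the paper's actual measure is the one discovered by the genetic algorithm and will differ:

```python
import math
from collections import Counter

def difficulty_sketch(texts, labels):
    """Toy difficulty score with two illustrative components:
    (1) class imbalance, as 1 - normalized label entropy, and
    (2) lexical sparsity, as the distinct-word ratio.
    Higher values suggest a harder dataset under this toy model."""
    counts = Counter(labels)
    n = len(labels)
    entropy = -sum(c / n * math.log(c / n) for c in counts.values())
    max_entropy = math.log(len(counts)) if len(counts) > 1 else 1.0
    imbalance = 1.0 - entropy / max_entropy
    words = [w for t in texts for w in t.split()]
    sparsity = len(set(words)) / max(len(words), 1)
    return imbalance + sparsity

texts = ["great movie", "terrible film", "great film"]
labels = ["pos", "neg", "pos"]
print(round(difficulty_sketch(texts, labels), 3))
```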

This paper addresses the problem of formally verifying desirable properties of neural networks, i.e., obtaining provable guarantees that neural networks satisfy specifications relating their inputs and outputs (robustness to bounded-norm adversarial perturbations, for example). Most previous work on this topic was limited in its applicability by the size of the network, the network architecture, and the complexity of the properties to be verified. In contrast, our framework applies to a general class of activation functions and specifications on neural network inputs and outputs. We formulate verification as an optimization problem (seeking the largest violation of the specification) and solve a Lagrangian relaxation of the optimization problem to obtain an upper bound on the worst-case violation of the specification being verified. Our approach is anytime, i.e., it can be stopped at any time and a valid bound on the maximum violation can be obtained. We develop specialized verification algorithms with provable tightness guarantees under special assumptions and demonstrate the practical significance of our general verification approach on a variety of verification tasks.
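The anytime property follows from weak duality: any feasible multiplier yields a valid upper bound on the maximum violation. The toy sketch below replaces the network constraints with a single linear constraint so the inner maximization has a closed form; everything here is an illustrative stand-in, not the paper's formulation:

```python
import numpy as np

def dual_bound(lam, c, a, b, l, u):
    """Lagrangian dual value g(lam) = max_{x in [l,u]} (c - lam*a)@x + lam*b.
    By weak duality this upper-bounds max c@x s.t. a@x <= b, x in [l,u]
    for ANY lam >= 0: stop the dual optimization whenever you like and
    keep a valid bound (the anytime property)."""
    coef = c - lam * a
    x_star = np.where(coef > 0, u, l)  # box-constrained max in closed form
    return coef @ x_star + lam * b, x_star

# Toy "verification": bound the worst-case violation c@x subject to
# x in [0,1]^3 and a@x <= b (a stand-in for network + specification).
c = np.array([1.0, 2.0, -0.5]); a = np.array([1.0, 1.0, 1.0]); b = 1.5
l = np.zeros(3); u = np.ones(3)
lam, step = 0.0, 0.2
for _ in range(50):  # subgradient descent on the multiplier
    _, x_star = dual_bound(lam, c, a, b, l, u)
    lam = max(0.0, lam - step * (b - a @ x_star))
print("valid upper bound on violation:", dual_bound(lam, c, a, b, l, u)[0])
```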
