Polar codes are normally designed based on the reliability of the sub-channels of the polarized vector channel. Various methods, of diverse complexity and accuracy, are available for evaluating sub-channel reliability. However, designing polar codes solely on the basis of sub-channel reliability may result in poor Hamming distance properties. In this work, we propose a different approach to designing the information set for polar codes and PAC codes, where the objective is to reduce the number of minimum-weight codewords (a.k.a. the error coefficient) of a code designed for maximum reliability. This approach is based on a coset-wise characterization of the rows of the polar transform $\mathbf{G}_N$ involved in the formation of the minimum-weight codewords. Our analysis capitalizes on properties of the polar transform expressed in terms of its row and column indices. Numerical results show that the designed codes outperform PAC codes and CRC-Polar codes at practical block error rates of $10^{-2}$--$10^{-3}$. Furthermore, a by-product of the combinatorial properties analyzed in this paper is an alternative method for enumerating the minimum-weight codewords.
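A well-known combinatorial property underlying such row-index analyses is that the Hamming weight of row $i$ of the polar transform $\mathbf{G}_N = F^{\otimes n}$, with $F = \left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$, equals $2^{\mathrm{wt}(i)}$, where $\mathrm{wt}(i)$ is the number of ones in the binary expansion of $i$. A minimal numerical check of this property (illustrative only; this is not the paper's enumeration method):

```python
import numpy as np

# Build G_N = F^{(kron) n} for N = 2^n with the polar kernel F.
F = np.array([[1, 0], [1, 1]])
n = 4
G = F
for _ in range(n - 1):
    G = np.kron(G, F)

# Row i of G_N has Hamming weight 2^wt(i), wt(i) = popcount of i.
for i in range(2 ** n):
    assert G[i].sum() == 2 ** bin(i).count("1")
```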
We establish a complete classification of binary group codes with complementary duals for a finite group and explicitly determine the number of linear complementary dual (LCD) cyclic group codes by using cyclotomic cosets. The dimension and the minimum distance for LCD group codes are explored. Finally, we find a connection between LCD MDS group codes and maximal ideals.
We present a fundamentally new regularization method for the solution of the Fredholm integral equation of the first kind, in which we incorporate solutions corresponding to a range of Tikhonov regularizers into the end result. This method identifies solutions within a much larger function space, spanned by this set of regularized solutions, than is available to conventional regularization methods. Each of these solutions is regularized to a different extent. In effect, we combine the stability of solutions with greater degrees of regularization with the resolution of those that are less regularized. In contrast, current methods involve selection of a single, or in some cases several, regularization parameters that define an optimal degree of regularization. Because the identified solution lies within the span of a set of differently regularized solutions, we call this method \textit{span of regularizations}, or SpanReg. We demonstrate the performance of SpanReg through a non-negative least squares analysis employing a Gaussian basis, and demonstrate the improved recovery of bimodal Gaussian distribution functions as compared to conventional methods. We also demonstrate that this method exhibits decreased dependence of the end result on the optimality of regularization parameter selection. We further illustrate the method with an application to myelin water fraction mapping in the human brain from experimental magnetic resonance imaging relaxometry data. We expect SpanReg to be widely applicable as an effective new method for regularization of inverse problems.
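The span idea can be sketched on a toy discretized problem $A x = b$. This is a hedged illustration only: the matrix $A$, the Gaussian-kernel setup, and the final combination step are assumptions for the sketch, and the paper's non-negative least squares combination is replaced here by plain unconstrained least squares for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ill-posed problem A x = b: a smoothing (Gaussian) kernel, chosen
# purely for illustration.
n = 50
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01)
x_true = np.exp(-((t - 0.3) ** 2) / 0.002) + np.exp(-((t - 0.7) ** 2) / 0.002)
b = A @ x_true + 1e-3 * rng.normal(size=n)

# Tikhonov solutions over a range of regularization parameters lambda;
# each column of X is regularized to a different extent.
lams = np.logspace(-6, 0, 7)
X = np.column_stack(
    [np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b) for lam in lams]
)

# Search for a solution within the span of these regularized solutions
# (unconstrained least squares here; a stand-in for the NNLS step).
c, *_ = np.linalg.lstsq(A @ X, b, rcond=None)
x_span = X @ c

res_span = np.linalg.norm(A @ x_span - b)
res_each = [np.linalg.norm(A @ X[:, j] - b) for j in range(len(lams))]
# The span solution can fit no worse than any single regularized solution,
# since each of them lies in the search space.
assert res_span <= min(res_each) + 1e-9
```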
The decoding performance of product codes (PCs) and staircase codes (SCCs) based on iterative bounded-distance decoding (iBDD) can be improved with the aid of a moderate amount of soft information, maintaining a low decoding complexity. One promising approach is error-and-erasure (EaE) decoding, whose performance can be reliably estimated with density evolution (DE). However, the extrinsic message passing (EMP) decoder required by the DE analysis entails a much higher complexity than the simple intrinsic message passing (IMP) decoder. In this paper, we simplify the EMP decoding algorithm over the EaE channel for two commonly used EaE decoders by deriving the EMP decoding results from the IMP decoder output and a few additional logical operations based on the algebraic structure of the component codes and the EaE decoding rule. Simulation results show that the number of BDD steps is reduced to a level comparable with that of IMP. Furthermore, we propose a heuristic modification of the EMP decoder that reduces the complexity further. In numerical simulations, the modified decoder yields up to a 0.25 dB improvement over standard EMP decoding.
The tensor rank of some Gabidulin codes of small dimension is investigated. In particular, we determine the tensor rank of any rank-metric code equivalent to an $8$-dimensional $\mathbb{F}_q$-linear generalized Gabidulin code in $\mathbb{F}_{q}^{4\times4}$. This shows that such a code never has minimum tensor rank. In this way, we identify the first infinite family of Gabidulin codes that are not of minimum tensor rank.
This paper explores list decoding of convolutional and polar codes for short messages such as those found in the 5G physical broadcast channel. A cyclic redundancy check (CRC) is used to select a codeword from a list of likely codewords. One example in the 5G standard encodes a 32-bit message with a 24-bit CRC and a 512-bit polar code with additional bits added by repetition to achieve a very low rate of 32/864. This paper shows that optimizing the CRC length improves the $E_b/N_0$ performance of this polar code, where $E_b/N_0$ is the ratio of the energy per data bit to the noise power spectral density. Furthermore, even better $E_b/N_0$ performance is achieved by replacing the polar code with a tail-biting convolutional code (TBCC) with a distance-spectrum-optimal (DSO) CRC. This paper identifies the optimal CRC length to minimize the frame error rate (FER) of a rate-1/5 TBCC at a specific value of $E_b/N_0$. We also show that this optimized TBCC/CRC can attain the same excellent $E_b/N_0$ performance with the very low rate of 32/864 of the 5G polar code, where the low rate is achieved through repetition. We show that the proposed TBCC/CRC concatenated code outperforms the PBCH polar code described in the 5G standard both in terms of FER and decoding run time. We also explore the tradeoff between undetected error rate and erasure rate as the CRC size varies.
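The CRC-aided selection step common to all of these list decoders can be sketched as follows. This is a generic illustration, not the 5G PBCH encoder: the 3-bit polynomial $x^3 + x + 1$ and the 4-bit message are toy choices, and the candidate list stands in for the output of a list decoder ordered by likelihood.

```python
def crc_bits(msg, poly):
    """Remainder of msg(x) * x^(deg poly) modulo poly(x) over GF(2).

    msg and poly are bit lists; poly includes its leading 1 coefficient.
    """
    r = len(poly) - 1
    reg = list(msg) + [0] * r
    for i in range(len(msg)):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return reg[-r:]

def select_codeword(candidates, k, poly):
    """Return the most likely candidate whose CRC checks; None = erasure."""
    for word in candidates:
        if crc_bits(word[:k], poly) == word[k:]:
            return word
    return None

# Toy example: 4-bit message, 3-bit CRC with polynomial x^3 + x + 1.
poly = [1, 0, 1, 1]
msg = [1, 1, 0, 1]
codeword = msg + crc_bits(msg, poly)
# A higher-ranked but corrupted candidate fails the CRC check, so the
# correct codeword further down the list is selected instead.
corrupted = [1 - codeword[0]] + codeword[1:]
assert select_codeword([corrupted, codeword], k=4, poly=poly) == codeword
```

Returning `None` when no candidate passes models the erasure outcome whose rate is traded off against the undetected error rate as the CRC size varies.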
As a parametric polynomial curve family, B\'ezier curves are widely used in safe and smooth motion design of intelligent robotic systems from flying drones to autonomous vehicles to robotic manipulators. In such motion planning settings, the critical features of high-order B\'ezier curves such as curve length, distance-to-collision, maximum curvature/velocity/acceleration are either numerically computed at a high computational cost or inexactly approximated by discrete samples. To address these issues, in this paper we present a novel computationally efficient approach for adaptive approximation of high-order B\'ezier curves by multiple low-order B\'ezier segments at any desired level of accuracy that is specified in terms of a B\'ezier metric. Accordingly, we introduce a new B\'ezier degree reduction method, called parameterwise matching reduction, that approximates B\'ezier curves more accurately compared to the standard least squares and Taylor reduction methods. We also propose a new B\'ezier metric, called the maximum control-point distance, that can be computed analytically, has a strong equivalence relation with other existing B\'ezier metrics, and defines a geometric relative bound between B\'ezier curves. We provide extensive numerical evidence to demonstrate the effectiveness of our proposed B\'ezier approximation approach. As a rule of thumb, based on the degree-one matching reduction error, we conclude that an $n^\text{th}$-order B\'ezier curve can be accurately approximated by $3(n-1)$ quadratic and $6(n-1)$ linear B\'ezier segments, which is fundamental for B\'ezier discretization.
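A control-point-based metric of this kind can be sketched by elevating both curves to a common degree and taking the largest pointwise control-point distance. This is one plausible reading offered for illustration, under the standard Bezier degree-elevation formula; it is not claimed to be the paper's exact definition.

```python
import numpy as np

def elevate(P):
    """One step of Bezier degree elevation (degree n -> n+1), which
    changes the control polygon but leaves the curve itself unchanged."""
    P = np.asarray(P, float)
    n = len(P) - 1
    Q = np.zeros((n + 2, P.shape[1]))
    Q[0], Q[-1] = P[0], P[-1]
    for i in range(1, n + 1):
        Q[i] = (i / (n + 1)) * P[i - 1] + (1 - i / (n + 1)) * P[i]
    return Q

def max_control_point_distance(P, Q):
    """Elevate both curves to a common degree, then take the maximum
    Euclidean distance between corresponding control points."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    while len(P) < len(Q):
        P = elevate(P)
    while len(Q) < len(P):
        Q = elevate(Q)
    return np.max(np.linalg.norm(P - Q, axis=1))

P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 3.0], [4.0, 0.0]])  # a cubic
assert max_control_point_distance(P, elevate(P)) < 1e-12  # same curve
assert max_control_point_distance(P, P + np.array([1.0, 0.0])) == 1.0
```

By the convex-hull property of Bezier curves, this quantity upper-bounds the maximum parameterwise distance between the two curves, which is what makes a control-point metric useful as a bound.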
We prove upper and lower bounds on the minimal spherical dispersion, improving upon previous estimates obtained by Rote and Tichy [Spherical dispersion with an application to polygonal approximation of curves, Anz. \"Osterreich. Akad. Wiss. Math.-Natur. Kl. 132 (1995), 3--10]. In particular, we see that the inverse $N(\varepsilon,d)$ of the minimal spherical dispersion is, for fixed $\varepsilon>0$, linear in the dimension $d$ of the ambient space. We also derive upper and lower bounds on the expected dispersion for points chosen independently and uniformly at random from the Euclidean unit sphere. In terms of the corresponding inverse $\widetilde{N}(\varepsilon,d)$, our bounds are optimal with respect to the dependence on $\varepsilon$.
The extreme, or maximum, age of information (AoI) is analytically studied for wireless communication systems. In particular, we consider a wirelessly powered single-antenna source node and a receiver (connected to the power grid) equipped with multiple antennas, operating over independent Rayleigh-faded channels. Via extreme value theory and its corresponding statistical machinery, we demonstrate that the extreme AoI converges to the Gumbel distribution, whose parameters are obtained in simple closed-form expressions. Capitalizing on this result, the risk of extreme AoI realizations is analytically evaluated in terms of several relevant performance metrics, and some useful engineering insights are drawn.
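The Gumbel limit invoked here is an instance of the Fisher--Tippett--Gnedenko theorem. A generic numerical illustration (not the paper's channel model): suitably centred maxima of i.i.d. exponential delays approach a standard Gumbel distribution, whose mean is the Euler--Mascheroni constant.

```python
import numpy as np

rng = np.random.default_rng(1)

# Maxima of n i.i.d. Exp(1) variables, centred by log n, converge in
# distribution to the standard Gumbel law as n grows.
n, trials = 500, 10000
maxima = rng.exponential(size=(trials, n)).max(axis=1) - np.log(n)

# The standard Gumbel distribution has mean equal to Euler-Mascheroni.
gamma = 0.5772156649
assert abs(maxima.mean() - gamma) < 0.05
```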
For neural networks (NNs) with rectified linear unit (ReLU) or binary activation functions, we show that their training can be accomplished in a reduced parameter space. Specifically, the weights in each neuron can be trained on the unit sphere, as opposed to the entire space, and the threshold can be trained in a bounded interval, as opposed to the real line. We show that the NNs in the reduced parameter space are mathematically equivalent to standard NNs with parameters in the whole space. The reduced parameter space should facilitate the optimization procedure for network training, as the search space becomes (much) smaller. We demonstrate the improved training performance with numerical examples.
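For the ReLU case, the equivalence rests on positive homogeneity: $\mathrm{ReLU}(cz) = c\,\mathrm{ReLU}(z)$ for $c > 0$, so any neuron with weights $w$ and threshold $b$ matches one with unit-norm weights $w/\|w\|$, threshold $b/\|w\|$, and the scale $\|w\|$ absorbed into the outgoing weight. A minimal sketch of this check (a single neuron; not the paper's full construction):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# A single ReLU neuron with arbitrary weights w and threshold b.
w = rng.normal(size=5)
b = rng.normal()
x = rng.normal(size=(100, 5))

# Positive homogeneity of ReLU: relu(c*z) = c*relu(z) for c > 0, so the
# neuron equals one with unit-norm weights and a rescaled threshold,
# with the scale c = ||w|| absorbed into the outgoing weight.
c = np.linalg.norm(w)
y_standard = relu(x @ w + b)
y_reduced = c * relu(x @ (w / c) + b / c)

assert np.allclose(y_standard, y_reduced)
```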
The Residual Networks of Residual Networks (RoR) architecture exhibits excellent performance in image classification, but sharply increasing the number of feature-map channels makes the transmission of characteristic information incoherent, which loses some of the information relevant to classification prediction and limits classification performance. In this paper, a Pyramidal RoR network model is proposed by analysing the performance characteristics of RoR and combining it with PyramidNet. First, building on RoR, we design the Pyramidal RoR network model, in which the number of channels increases gradually. Second, we analyse the effect of different residual block structures on performance and choose the structure that best favours classification performance. Finally, to further optimize Pyramidal RoR networks, drop-path is used to avoid over-fitting and save training time. Image classification experiments were performed on the CIFAR-10/100 and SVHN datasets, achieving the lowest classification error rates to date: 2.96%, 16.40%, and 1.59%, respectively. The experiments show that the Pyramidal RoR optimization method improves network performance across different datasets and effectively suppresses the vanishing-gradient problem in DCNN training.