In this paper, we introduce a new encoding and decoding structure for enhancing the reliability and performance of polar codes, specifically at low error rates. We achieve this by concatenating two polar codes in series to create robust error-correcting codes. The primary objective is to optimize the behavior of the individual elementary codes within the polar codes. In this structure, we incorporate interleaving, a technique that rearranges bits to maximize the separation between originally neighboring symbols; this rearrangement converts error bursts into errors distributed across the entire sequence. To evaluate performance, we model a communication system with seven components: an information source, a channel encoder, a modulator, a channel, a demodulator, a channel decoder, and a destination. This work focuses on evaluating the bit error rate (BER) of the codes for different block lengths and code rates, and we compare the BER performance of the proposed method against that of standalone polar codes.
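For concreteness, here is a minimal sketch of the kind of row-column block interleaver described above, assuming a simple write-by-rows/read-by-columns rectangular layout; the function names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def block_interleave(bits, n_rows, n_cols):
    """Write bits row-by-row into an n_rows x n_cols array, read column-by-column.

    Adjacent input bits end up n_rows positions apart, so a burst of channel
    errors is spread across the whole block after de-interleaving."""
    assert len(bits) == n_rows * n_cols
    return np.asarray(bits).reshape(n_rows, n_cols).T.flatten()

def block_deinterleave(bits, n_rows, n_cols):
    """Invert block_interleave."""
    return np.asarray(bits).reshape(n_cols, n_rows).T.flatten()

# Usage: a burst of 3 channel errors becomes 3 isolated errors.
rng = np.random.default_rng(0)
tx = rng.integers(0, 2, 24)
sent = block_interleave(tx, 4, 6)
sent[5:8] ^= 1                      # burst error on the channel
rx = block_deinterleave(sent, 4, 6)
print(np.flatnonzero(rx != tx))     # -> [ 7 13 19]: the burst is spread out
```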
In this paper, we formulate and analyse a symmetric low-regularity integrator for solving the nonlinear Klein-Gordon equation in $d$-dimensional space with $d=1,2,3$. The integrator is constructed from a two-step trigonometric method and has a simple form. Error estimates are rigorously established, showing that the integrator achieves second-order accuracy in time in the energy space under the regularity requirement $H^{1+\frac{d}{4}}\times H^{\frac{d}{4}}$. Moreover, the time symmetry of the scheme ensures good long-time energy conservation, which is rigorously proved using the technique of modulated Fourier expansions. A numerical test is presented, and the results demonstrate the advantages of the new integrator over several existing methods.
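For orientation, the classical Gautschi-type two-step trigonometric scheme for the abstract semilinear problem $u''+\Omega^2 u = f(u)$ (the form the Klein-Gordon equation takes after spatial discretisation) reads
\[
u^{n+1} - 2\cos(\tau\Omega)\,u^{n} + u^{n-1} = \tau^{2}\,\mathrm{sinc}^{2}\!\Big(\frac{\tau\Omega}{2}\Big)\, f(u^{n}), \qquad \mathrm{sinc}(x)=\frac{\sin x}{x},
\]
with time step $\tau$; the scheme is symmetric under the exchange $n+1 \leftrightarrow n-1$. This is only the standard template from which such methods are built, not the authors' low-regularity integrator, whose filter functions are tailored to the $H^{1+\frac{d}{4}}\times H^{\frac{d}{4}}$ regularity requirement.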
In this paper, we study recurrent neural networks in the presence of pairwise learning rules. We are specifically interested in how the attractor landscapes of such networks are altered as a function of the strength and nature (Hebbian vs. anti-Hebbian) of learning, which may have a bearing on the ability of such rules to mediate large-scale optimization problems. Through formal analysis, we show that a transition from Hebbian to anti-Hebbian learning brings about a pitchfork bifurcation that destroys convexity in the network attractor landscape. In larger-scale settings, this implies that anti-Hebbian plasticity will bring about multiple stable equilibria, and such effects may be outsized at interconnection or `choke' points. Furthermore, attractor landscapes are more sensitive to slower learning rates than to faster ones. These results provide insight into the types of objective functions that can be encoded via different pairwise plasticity rules.
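A toy two-neuron simulation (ours, not the paper's formal model) illustrates the pitchfork scenario: as the effective recurrent gain $w$ crosses a critical value, the single attractor at the origin gives way to two stable equilibria, destroying convexity of the landscape.

```python
import numpy as np

def attractors(w, n_trials=20, t_end=50.0, dt=0.01, seed=1):
    """Integrate the symmetric two-neuron rate network
        dx/dt = -x + w*tanh(y),  dy/dt = -y + w*tanh(x)
    from random initial states and collect the distinct fixed points reached."""
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(n_trials):
        z = rng.uniform(-2, 2, size=2)
        for _ in range(int(t_end / dt)):            # forward Euler integration
            z = z + dt * (-z + w * np.tanh(z[::-1]))
        if not any(np.allclose(z, f, atol=1e-3) for f in found):
            found.append(z.copy())
    return found

for w in (0.5, 1.5):            # below / above the pitchfork at |w| = 1
    print(f"w = {w}: {len(attractors(w))} attractor(s)")
# Below the bifurcation a single attractor at the origin survives (convex
# landscape); above it, two symmetric stable equilibria coexist.
```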
In this paper we revisit the classical problem of classification, but impose privacy constraints. Under such constraints, the raw data $(X_1,Y_1),\ldots,(X_n,Y_n)$ cannot be directly observed, and all classifiers are functions of the randomised outcome of a suitable local differential privacy mechanism. The statistician is free to choose the form of this privacy mechanism; here we add Laplace-distributed noise to a discretisation of the location of each feature vector $X_i$ and to its label $Y_i$. The classification rule is the privatised version of the well-studied partitioning classification rule. In addition to the standard Lipschitz and margin conditions, a novel characteristic is introduced, by means of which the exact rate of convergence of the classification error probability is calculated, both for non-private and for private data.
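A minimal sketch of the Laplace-noise release step, assuming features in $[0,1)^d$, labels in $\{-1,1\}$, and an even split of the privacy budget between the two releases (all of these, and the helper names, are our illustrative choices, not the paper's exact mechanism):

```python
import numpy as np

def privatise(X, y, eps, n_bins=10, rng=None):
    """Local DP sketch: each user releases a Laplace-noised one-hot encoding
    of the grid cell containing X_i, plus a noised label, instead of raw data.

    The budget eps is split evenly between the two releases.  Each one-hot
    vector has L1 sensitivity 2 (two coordinates change between users), and
    the label in {-1, 1} also has sensitivity 2, so each Laplace scale is
    sensitivity / (eps/2) = 4/eps."""
    rng = rng or np.random.default_rng(0)
    cells = np.clip((X * n_bins).astype(int), 0, n_bins - 1)
    flat = np.ravel_multi_index(cells.T, (n_bins,) * X.shape[1])
    onehot = np.eye(n_bins ** X.shape[1])[flat]
    Z = onehot + rng.laplace(scale=2 / (eps / 2), size=onehot.shape)
    T = y + rng.laplace(scale=2 / (eps / 2), size=y.shape)
    return Z, T

rng = np.random.default_rng(1)
X = rng.uniform(size=(1000, 2))
y = rng.choice([-1.0, 1.0], size=1000)
Z, T = privatise(X, y, eps=2.0)
# The statistician then classifies a point by the sign of the aggregated
# noisy labels in its cell -- the privatised partitioning rule in spirit.
```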
This paper presents the first application of the direct parametrisation method for invariant manifolds to a fully coupled multiphysics problem involving the nonlinear vibrations of deformable structures subjected to an electrostatic field. The proposed formulation is intended for model order reduction of electrostatically actuated resonating Micro-Electro-Mechanical Systems (MEMS). The continuous problem is first rewritten in a form that can be directly handled by the parametrisation method, which relies upon automated asymptotic expansions. To this end, a new mixed, fully Lagrangian formulation containing only explicit polynomial nonlinearities is proposed and then discretised within the framework of finite element procedures. Validation is performed on the classical parallel-plate configuration, comparing formulations that use either the general framework or an approximation of the electrostatic field suited to the selected geometric configuration. Reduced-order models built from these formulations are also compared to full-order simulations computed with a time-integration approach. Numerical results show remarkable performance in terms of both accuracy and the wealth of nonlinear effects that can be accounted for. In particular, the transition from hardening to softening behaviour of the primary resonance as the constant voltage component of the electric actuation increases is recovered. Secondary resonances leading to superharmonic and parametric resonances are also investigated with the reduced-order model.
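As a toy illustration of why the parallel-plate configuration is amenable to polynomial treatment, the textbook electrostatic force $F=\varepsilon A V^2/(2(g-x)^2)$ can be expanded symbolically; the symbols below are illustrative, and this is not the paper's mixed Lagrangian formulation.

```python
import sympy as sp

x, g, V, epsA = sp.symbols('x g V epsA', positive=True)
# Textbook parallel-plate electrostatic force on a plate displaced by x
# from a gap g, with epsA = epsilon_0 * plate_area.
F = epsA * V**2 / (2 * (g - x)**2)

# Taylor expansion in x: the field contributes negative-stiffness
# (softening) terms that compete with mechanical hardening -- the origin of
# the hardening-to-softening transition as the DC voltage grows.
print(sp.series(F, x, 0, 4))
# -> epsA*V**2/(2*g**2) + epsA*V**2*x/g**3 + 3*epsA*V**2*x**2/(2*g**4) + ...
```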
It was recently shown [7, 9] that "properly built" linear and polyhedral estimates nearly attain minimax accuracy bounds in the problem of recovering an unknown signal from noisy observations of linear images of the signal when the signal set is an ellitope. However, the design of nearly optimal estimates relies upon solving semidefinite optimization problems with matrix variables, which puts the synthesis of such estimates beyond the reach of standard Interior Point algorithms of semidefinite optimization, even for recovery problems of moderate size. Our goal is to develop First Order Optimization algorithms for the computationally efficient design of linear and polyhedral estimates. In this paper we (a) explain how to eliminate matrix variables, thus dramatically reducing the design dimension when passing from Interior Point to First Order optimization algorithms, and (b) develop and analyse a dedicated algorithm of the latter type -- the Composite Truncated Level method.
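For readers unfamiliar with the setting, the sketch below shows a generic subgradient template for nonsmooth convex problems of the form $\min_x \max_i f_i(x)$, the kind that remain once matrix variables are eliminated. It is only a baseline first-order method on a toy instance of our own; the Composite Truncated Level method developed in the paper is a substantially more refined bundle-type scheme.

```python
import numpy as np

def subgradient_minimax(fs, grads, x0, steps=500):
    """Generic subgradient sketch for min_x max_i f_i(x).

    fs/grads: lists of convex functions and their gradients.  A baseline
    first-order template only, far simpler than the paper's method."""
    x, best = x0.copy(), np.inf
    for t in range(1, steps + 1):
        vals = [f(x) for f in fs]
        i = int(np.argmax(vals))            # active piece of the max
        best = min(best, vals[i])
        x -= grads[i](x) / np.sqrt(t)       # diminishing step size
    return best

# Toy instance: max of two quadratics in R^2, optimal value 1 at the origin.
fs = [lambda x: (x[0] - 1) ** 2 + x[1] ** 2,
      lambda x: (x[0] + 1) ** 2 + x[1] ** 2]
grads = [lambda x: np.array([2 * (x[0] - 1), 2 * x[1]]),
         lambda x: np.array([2 * (x[0] + 1), 2 * x[1]])]
print(subgradient_minimax(fs, grads, np.array([3.0, 2.0])))  # approaches 1.0
```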
This work studies nonparametric Bayesian estimation of the intensity function of an inhomogeneous Poisson point process in the important case where the intensity depends on covariates, based on the observation of a single realisation of the point pattern over a large area. It is shown how the presence of covariates makes it possible to borrow information from far-away locations in the observation window, enabling consistent inference in the growing-domain asymptotics. In particular, optimal posterior contraction rates under both global and pointwise loss functions are derived. The rates in global loss are obtained under conditions on the prior distribution resembling those in the well-established theory of Bayesian nonparametrics, here combined with concentration inequalities for functionals of stationary processes to control certain random covariate-dependent loss functions appearing in the analysis. The local rates are derived via an ad hoc study that builds on recent advances in the theory of P\'olya tree priors, extended to the present multivariate setting with a novel construction that exploits the random geometry induced by the covariates.
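To make the covariate-dependent setting concrete, here is a standard Lewis-Shedler thinning sketch for simulating such a process on a rectangular window; the covariate field and the link $\lambda(Z)=5e^{Z}$ are our illustrative choices, not the paper's model.

```python
import numpy as np

def sample_poisson_process(lam_of_z, z_field, window, lam_max, rng):
    """Lewis-Shedler thinning on a rectangular window W = [0,T1] x [0,T2].

    lam_of_z : intensity as a function of the covariate value, lambda(Z(x))
    z_field  : covariate field, location -> Z(x)
    lam_max  : upper bound on the intensity, needed for thinning"""
    T1, T2 = window
    n = rng.poisson(lam_max * T1 * T2)                 # dominating process
    pts = rng.uniform(0, 1, size=(n, 2)) * (T1, T2)
    keep = rng.uniform(0, lam_max, n) < lam_of_z(z_field(pts))
    return pts[keep]

rng = np.random.default_rng(0)
z = lambda p: np.sin(p[:, 0]) * np.cos(p[:, 1])        # smooth covariate field
lam = lambda zv: 5.0 * np.exp(zv)                      # lambda(Z) = 5 e^Z
pts = sample_poisson_process(lam, z, (10.0, 10.0), 5.0 * np.e, rng)
print(len(pts), "points observed on the 10 x 10 window")
```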
In recent years many efforts have been devoted to finding bidiagonal factorizations of nonsingular totally positive matrices, since their accurate computation makes it possible to solve several important algebraic problems numerically with great precision, even for large ill-conditioned matrices. In this framework, the present work provides the factorization of the collocation matrices of Newton bases -- of relevance when considering the Lagrange interpolation problem -- together with an algorithm that computes it numerically to high relative accuracy. This further allows one to determine the coefficients of the interpolating polynomial and to compute the singular values and the inverse of the collocation matrix. Conditions that guarantee high relative accuracy for these methods and, in the former case, for the classical recursion formula of divided differences are determined. Numerical errors due to imprecise computer arithmetic or perturbed input data in the computation of the factorization are analyzed. Finally, numerical experiments illustrate the accuracy and effectiveness of the proposed methods on several algebraic problems, in stark contrast with traditional approaches.
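The classical recursion formula of divided differences mentioned above is easy to state in code; the sketch below (our own, with illustrative names) computes the Newton coefficients and evaluates the interpolating polynomial. The bidiagonal factorization and the high-relative-accuracy algorithms are the paper's contribution and are not reproduced here.

```python
import numpy as np

def divided_differences(x, y):
    """Classical recursion f[x_i,...,x_{i+k}] = (f[x_{i+1},...,x_{i+k}]
    - f[x_i,...,x_{i+k-1}]) / (x_{i+k} - x_i).  Returns the coefficients of
    the interpolant in the Newton basis 1, (t-x_0), (t-x_0)(t-x_1), ..."""
    c = np.array(y, dtype=float)
    for k in range(1, len(x)):
        c[k:] = (c[k:] - c[k - 1:-1]) / (x[k:] - x[:-k])
    return c

def newton_eval(c, x, t):
    """Evaluate the Newton form with Horner-type nesting."""
    p = np.full_like(np.asarray(t, dtype=float), c[-1])
    for k in range(len(c) - 2, -1, -1):
        p = p * (t - x[k]) + c[k]
    return p

xs = np.array([0.0, 1.0, 2.0, 4.0])
cs = divided_differences(xs, xs ** 3)
print(newton_eval(cs, xs, 3.0))   # cubic data reproduced exactly: 27.0
```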
In this paper, we study the stability and convergence of a fully discrete finite difference scheme for the initial value problem associated with the Korteweg-de Vries (KdV) equation. We employ the Crank-Nicolson method for temporal discretization and establish that the scheme is $L^2$-conservative. The convergence analysis reveals that, by exploiting Kato's inherent local smoothing effect, the proposed scheme converges to a classical solution for sufficiently regular initial data $u_0 \in H^{3}(\mathbb{R})$ and to a weak solution in $L^2(0,T;L^2_{\text{loc}}(\mathbb{R}))$ for non-smooth initial data $u_0 \in L^2(\mathbb{R})$. Optimal convergence rates in both time and space are derived for the devised scheme. The theoretical results are supported by several numerical illustrations.
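To fix ideas, the sketch below implements a simplified semi-implicit relative of such a scheme on a periodic grid: Crank-Nicolson on the dispersive term $u_{xxx}$ and an explicit step for $uu_x$. Unlike the fully Crank-Nicolson scheme analysed in the paper, this variant is not exactly $L^2$-conservative; all names and parameters are illustrative.

```python
import numpy as np
from scipy.linalg import circulant, lu_factor, lu_solve

def make_kdv_stepper(N, L, dt):
    """Semi-implicit step for u_t + u u_x + u_xxx = 0 on a periodic grid:
    Crank-Nicolson on the dispersive term, explicit on the nonlinearity."""
    h = L / N
    c1 = np.zeros(N)
    c1[1], c1[-1] = -1, 1                             # central 1st derivative
    D1 = circulant(c1) / (2 * h)
    c3 = np.zeros(N)
    c3[2], c3[1], c3[-1], c3[-2] = -1, 2, -2, 1       # central 3rd derivative
    D3 = circulant(c3) / (2 * h ** 3)
    A = lu_factor(np.eye(N) + 0.5 * dt * D3)          # CN left-hand side
    def step(u):
        rhs = u - 0.5 * dt * (D3 @ u) - dt * u * (D1 @ u)
        return lu_solve(A, rhs)
    return step

N, L, dt = 256, 40.0, 1e-3
x = np.linspace(0, L, N, endpoint=False) - L / 2
u = 3.0 / np.cosh(0.5 * x) ** 2           # speed-1 soliton initial datum
step = make_kdv_stepper(N, L, dt)
m0 = np.linalg.norm(u)
for _ in range(1000):                     # advance to t = 1
    u = step(u)
print("relative L2 drift:", abs(np.linalg.norm(u) - m0) / m0)
```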
This paper studies the asymptotic spectral properties of the sample covariance matrix for high-dimensional compositional data, including the limiting spectral distribution, the limits of the extreme eigenvalues, and the central limit theorem for linear spectral statistics. All asymptotic results are derived in the high-dimensional regime where the data dimension grows to infinity proportionally to the sample size. The findings reveal that the limiting spectral distribution is the well-known Marchenko-Pastur law. The largest (respectively, smallest non-zero) eigenvalue converges almost surely to the right (respectively, left) endpoint of the support of the limiting spectral distribution. Moreover, the linear spectral statistics exhibit a Gaussian limit. Simulation experiments demonstrate the accuracy of the theoretical results.
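A quick empirical check of the Marchenko-Pastur prediction can be run in a few lines; the normalisation below (centring and rescaling by $p$) is our illustrative choice and may differ from the paper's.

```python
import numpy as np

# Compositional rows: i.i.d. positive entries normalised to sum to one.
rng = np.random.default_rng(0)
n, p = 2000, 1000                     # sample size and dimension, y = p/n = 0.5
W = rng.exponential(size=(n, p))
X = W / W.sum(axis=1, keepdims=True)  # each row lies on the simplex
X = (X - X.mean(axis=0)) * p          # centre and rescale (illustrative)
eigs = np.linalg.eigvalsh(X.T @ X / n)

y = p / n
print("MP support edges:", (1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2)
# The simplex constraint forces one exactly-zero eigenvalue; the nonzero
# spectrum should approximately fill the MP support [(1-sqrt(y))^2, (1+sqrt(y))^2].
print("nonzero spectrum:", eigs[eigs > 1e-8].min(), eigs.max())
```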
In this paper we develop a novel neural network model for predicting the implied volatility surface, taking prior financial domain knowledge into account. A new activation function that incorporates the volatility smile is proposed and used for the hidden nodes that process the underlying asset price. In addition, financial conditions, such as the absence of arbitrage, the boundary conditions, and the asymptotic slope, are embedded into the loss function. This is one of the first studies to propose a methodological framework that incorporates prior financial domain knowledge into neural network architecture design and model training. The proposed model outperforms the benchmark models on S&P 500 index option data over 20 years. More importantly, the domain knowledge is satisfied empirically, showing that the model is consistent with existing financial theories and conditions related to the implied volatility surface.
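As a rough illustration of the two ingredients named above, here is a hedged sketch with hypothetical function names: a convex, smile-shaped activation and a loss augmented with a finite-difference hinge penalty for the butterfly (call-price convexity) no-arbitrage condition. The paper's actual activation and its boundary and asymptotic-slope terms are more elaborate.

```python
import numpy as np

def smile_activation(m):
    """Hypothetical smile-shaped hidden-unit activation in log-moneyness m:
    convex with a minimum at the money, mimicking the empirical smile.
    Purely illustrative; the paper's activation may differ."""
    return np.sqrt(1.0 + m ** 2)

def penalised_loss(pred_call, target_call, strikes, lam=1.0):
    """MSE plus a hinge penalty on violations of call-price convexity in
    strike (the butterfly no-arbitrage condition), via finite differences."""
    mse = np.mean((pred_call - target_call) ** 2)
    d2 = np.diff(pred_call, 2) / np.diff(strikes)[:-1] ** 2  # uniform grid assumed
    return mse + lam * np.mean(np.maximum(0.0, -d2))
```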