Perfect error correcting codes allow for optimal transmission of information while guaranteeing error correction. For this reason, proving their existence has been a classical problem in both pure mathematics and information theory. Indeed, the classification of the parameters of $e$-error correcting perfect codes over $q$-ary alphabets was a very active topic of research in the late 20th century. As a result, all parameters of perfect $e$-error correcting codes were found for $e \ge 3$, and it was conjectured that no perfect $2$-error correcting codes exist over any $q$-ary alphabet with $q > 3$. In the 1970s, this was proved for $q$ a prime power, for $q = 2^r3^s$, and for only $7$ other values of $q$. Almost $50$ years later, it is surprising to note that there have been no new results in this regard, and the classification of $2$-error correcting codes over non-prime-power alphabets remains an open problem. In this paper, we use techniques from the resolution of the generalised Ramanujan--Nagell equation and from modern computational number theory to show that perfect $2$-error correcting codes do not exist for $172$ new values of $q$ which are not prime powers, substantially increasing the number of classified cases. In addition, we prove that, for any fixed value of $q$, there can be at most finitely many perfect $2$-error correcting codes over an alphabet of size $q$.
We present a fault-tolerant [[8, 1, 3]] non-CSS quantum error correcting code and study its logical error rates. We take the unitary encoding procedure for stabilizer codes given by Gottesman and modify it to suit a class of non-CSS codes. We consider two types of noise models for this study, namely depolarising noise and anisotropic noise, to depict the logical error rates obtained in decoding. We adopt the bare-ancilla method presented by Brown et al. to reorder the measurement sequence in the syndrome extraction step, and upgrade it to obtain higher pseudo-thresholds and lower leading-order terms of the logical error rates.
We propose a new full discretization of Biot's equations in poroelasticity. The construction is driven by the inf-sup theory we recently developed. It builds upon the four-field formulation of the equations obtained by introducing the total pressure and the total fluid content. We discretize in space with Lagrange finite elements and in time with the backward Euler scheme. We establish inf-sup stability and quasi-optimality of the proposed discretization, with constants that are robust with respect to all material parameters. We further construct an interpolant showing how the error decays for smooth solutions.
In this paper, we propose a novel shape optimization approach for the source identification of elliptic equations. This identification problem arises from two application backgrounds: actuator placement in PDE-constrained optimal control and the regularized least-squares formulation of source identification. The optimization problem seeks both the source strength and its support. By eliminating the variable associated with the source strength, we reduce the problem to a shape optimization problem for a coupled elliptic system, known as the first-order optimality system. As a model problem, we derive the shape derivative for the regularized least-squares formulation of the inverse source problem and propose a gradient descent shape optimization algorithm, implemented using the level-set method. Several numerical experiments are presented to demonstrate the efficiency of the proposed algorithm.
Generalized linear models (GLMs) arguably represent the standard approach for statistical regression beyond the Gaussian likelihood scenario. When Bayesian formulations are employed, the general absence of a tractable posterior distribution has motivated the development of deterministic approximations, which are typically more scalable than sampling techniques. Among them, expectation propagation (EP) has shown remarkable accuracy, usually higher than that of many variational Bayes solutions. However, the higher computational cost of EP has raised concerns about its practical feasibility, especially in high-dimensional settings. We address these concerns by deriving a novel efficient formulation of EP for GLMs whose cost scales linearly in the number of covariates $p$. This reduces the state-of-the-art $O(p^2 n)$ per-iteration computational cost of the EP routine for GLMs to $O(p\, n \min\{p,n\})$, with $n$ being the sample size. We also show that, for binary models and log-linear GLMs, approximate predictive means can be obtained at no additional cost. To preserve efficient moment matching for count data, we propose employing a combination of log-normal Laplace transform approximations, avoiding numerical integration. These novel results open the possibility of employing EP in settings that were previously believed to be practically infeasible. Improvements over state-of-the-art approaches are illustrated on both simulated and real data. The efficient EP implementation is available at //github.com/niccoloanceschi/EPglm.
Our goal is to highlight some of the deep links between numerical splitting methods and control theory. We consider evolution equations of the form $\dot{x} = f_0(x) + f_1(x)$, where $f_0$ encodes a non-reversible dynamic, so that one is interested in schemes only involving forward flows of $f_0$. In this context, a splitting method can be interpreted as a trajectory of the control-affine system $\dot{x}(t)=f_0(x(t))+u(t)f_1(x(t))$, associated with a control~$u$ which is a finite sum of Dirac masses. The general goal is then to find a control such that the flow of $f_0 + u(t) f_1$ is as close as possible to the flow of $f_0+f_1$. Using this interpretation and classical tools from control theory, we revisit well-known results concerning numerical splitting methods, and we prove a handful of new ones, with an emphasis on splittings with additional positivity conditions on the coefficients. First, we show that there exist numerical schemes of any arbitrary order involving only forward flows of $f_0$ if one allows complex coefficients for the flows of $f_1$. Equivalently, for complex-valued controls, we prove that the Lie algebra rank condition is equivalent to the small-time local controllability of a system. Second, for real-valued coefficients, we show that the well-known order restrictions are linked with so-called "bad" Lie brackets from control theory, which are known to yield obstructions to small-time local controllability. We use our recent basis of the free Lie algebra to precisely identify the conditions under which high-order methods exist.
The paper considers standard iterative methods for solving the generalized Stokes problem arising from the time and space approximation of the time-dependent incompressible Navier--Stokes equations. Various preconditioning techniques are considered (Cahouet--Chabard and augmented Lagrangian), and we investigate whether these methods can compete with traditional pressure-correction and velocity-correction methods in terms of CPU time per degree of freedom and per time step. Numerical tests on fine unstructured meshes (68 million degrees of freedom) demonstrate convergence rates that are independent of the mesh size and improve with the Reynolds number. Three conclusions are drawn: (1) Although very good parallel scalability is observed for the augmented Lagrangian method, thorough tests on large problems reveal that the overall CPU time per degree of freedom and per time step is best for the standard Cahouet--Chabard preconditioner. (2) Whether one solves the pressure Schur complement problem or the fully coupled system at once makes no significant difference in terms of CPU time per degree of freedom and per time step. (3) All the methods tested in the paper, whether matrix-free or not, are on average 30 times slower than traditional pressure-correction and velocity-correction methods. Hence, although all these methods are very efficient for solving steady-state problems, they are not yet competitive for solving time-dependent problems.
A key characteristic of anomalous sub-diffusion equations is that the solution exhibits an algebraic decay rate over long time intervals, which is often referred to as Mittag-Leffler type stability. For a class of power-nonlinear sub-diffusion models with variable coefficients, we prove that their solutions are Mittag-Leffler stable when the source functions satisfy natural decay assumptions. That is, the solutions have the decay rate $\|u(t)\|_{L^{s}(\Omega)}=O\left( t^{-(\alpha+\beta)/\gamma} \right)$ as $t\rightarrow\infty$, where $\alpha$, $\gamma$ are positive constants, $\beta\in(-\alpha,\infty)$ and $s\in (1,\infty)$. We then develop a structure-preserving algorithm for this type of model. For the complete-monotonicity-preserving ($\mathcal{CM}$-preserving) schemes developed by Li and Wang (Commun. Math. Sci., 19(5):1301-1336, 2021), we prove that they satisfy the discrete comparison principle for time-fractional differential equations with variable coefficients. Then, by carefully constructing fine discrete supersolutions and subsolutions, we obtain the optimal long-time decay rate of the numerical solution, $\|u_{n}\|_{L^{s}(\Omega)}=O\left( t_n^{-(\alpha+\beta)/\gamma} \right)$ as $t_{n}\rightarrow\infty$, which fully agrees with that of the theoretical solution. Finally, we validate the analysis through numerical experiments.
The use of variable grid BDF methods for parabolic equations leads to structures that are called variable (coefficient) Toeplitz. Here, we consider a more general class of matrix-sequences and we prove that they belong to the maximal $*$-algebra of generalized locally Toeplitz (GLT) matrix-sequences. Then, we identify the associated GLT symbols in the general setting and in the specific case, by providing in both cases a spectral and singular value analysis. More specifically, we use the GLT tools in order to study the asymptotic behaviour of the eigenvalues and singular values of the considered BDF matrix-sequences, in connection with the given non-uniform grids. Numerical examples, visualizations, and open problems end the present work.
The purpose of this technical note is to summarize the relationship between the marginal variance and correlation length of a Gaussian random field with Mat\'ern covariance and the coefficients of the corresponding partial-differential-equation (PDE)-based precision operator.
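The relationship such a note summarizes can be sketched, for reference, in the standard Whittle--Matérn SPDE setting of Lindgren, Rue and Lindström (2011); the notation below is generic and not necessarily that of the note itself:

```latex
% Whittle--Mat\'ern SPDE (generic notation): a Gaussian field $x(s)$ on
% $\mathbb{R}^d$ with Mat\'ern covariance of smoothness $\nu$ solves
\[
  (\kappa^2 - \Delta)^{\alpha/2} \bigl( \tau \, x(s) \bigr) = \mathcal{W}(s),
  \qquad \alpha = \nu + d/2,
\]
% where $\mathcal{W}$ is Gaussian white noise. Its marginal variance and
% (range-type) correlation length are
\[
  \sigma^2 = \frac{\Gamma(\nu)}{\Gamma(\nu + d/2)\,(4\pi)^{d/2}\,\kappa^{2\nu}\,\tau^2},
  \qquad
  \rho = \frac{\sqrt{8\nu}}{\kappa},
\]
% so prescribing $(\sigma^2, \rho)$ determines the precision-operator
% coefficients via $\kappa = \sqrt{8\nu}/\rho$ and the formula for $\tau^2$.
```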
A functional nonlinear regression approach, incorporating time information in the covariates, is proposed for the analysis of temporally strongly correlated manifold map data sequences. Specifically, the functional regression parameters are supported on a connected and compact two--point homogeneous space. The Generalized Least--Squares (GLS) parameter estimator is computed in the linearized model, whose error term displays manifold-scale-varying Long Range Dependence (LRD). The performance of the theoretical and plug--in nonlinear regression predictors is illustrated by simulations on the sphere, in terms of the empirical mean of the computed spherical functional absolute errors. In the case where the second--order structure of the functional error term in the linearized model is unknown, it is estimated by minimum contrast in the functional spectral domain. The linear case is illustrated in the Supplementary Material, revealing the effect of the slow decay in time of the trace norms of the covariance operator family of the LRD regression error term. The purely spatial statistical analysis of atmospheric pressure at high cloud bottom and downward solar radiation flux in Alegria et al. (2021) is extended to the spatiotemporal context, illustrating the numerical results on a generated synthetic data set.