Recursion formulas for mixed moments of three fundamental random matrix ensembles are derived. The possibility of such recursive formulas is closely related to properties of polygon gluings studied by Akhmedov and Shakirov. The proofs, however, are written so that they do not rely on such polygon models and can be understood without a background in combinatorics.
This article presents a new polynomial-parameterized sigmoid called SIGTRON, which is an extended asymmetric sigmoid with Perceptron, and its companion convex model, the SIGTRON-imbalanced classification (SIC) model, which employs a virtual SIGTRON-induced convex loss function. In contrast to the conventional $\pi$-weighted cost-sensitive learning model, the SIC model does not have an external $\pi$-weight on the loss function; instead, it has internal parameters in the virtual SIGTRON-induced loss function. As a consequence, when the given training dataset is close to the well-balanced condition, we show that the proposed SIC model is more adaptive to variations of the dataset, such as inconsistency of the scale-class-imbalance ratio between the training and test datasets. This adaptation is achieved by creating a skewed hyperplane equation. Additionally, we present a quasi-Newton optimization (L-BFGS) framework for the virtual convex loss by developing an interval-based bisection line search. Empirically, we have observed that the proposed approach outperforms the $\pi$-weighted convex focal loss and the balanced classifiers of LIBLINEAR (logistic regression, SVM, and L2SVM) in terms of test classification accuracy on $51$ two-class and $67$ multi-class datasets. In binary classification problems where the scale-class-imbalance ratio of the training dataset is not significant but the inconsistency exists, the group of SIC models with the best test accuracy for each dataset (TOP$1$) outperforms LIBSVM (C-SVC with an RBF kernel), a well-known kernel-based classifier.
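The idea of an interval-based bisection line search can be illustrated, in heavily simplified form, by bisecting on the sign of the directional derivative of a convex one-dimensional restriction. The objective, descent direction, and tolerances below are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def bisection_line_search(phi_grad, t_lo=0.0, t_hi=1.0, tol=1e-10, max_iter=100):
    """Find a step t with phi'(t) ~ 0 by bisecting on the derivative sign.

    Assumes phi is convex with phi'(t_lo) < 0; the upper end of the
    interval is doubled until the derivative changes sign.
    """
    while phi_grad(t_hi) < 0:          # expand the bracket if needed
        t_hi *= 2.0
    for _ in range(max_iter):
        t_mid = 0.5 * (t_lo + t_hi)
        g = phi_grad(t_mid)
        if abs(g) < tol or (t_hi - t_lo) < tol:
            return t_mid
        if g < 0:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

# toy convex objective f(x) = ||x - 1||^2 along the steepest-descent direction
x0 = np.array([3.0, -2.0])
d = -2.0 * (x0 - 1.0)                                  # d = -grad f(x0)
phi_grad = lambda t: d @ (2.0 * (x0 + t * d - 1.0))    # d/dt f(x0 + t*d)
t_star = bisection_line_search(phi_grad)               # exact minimizer is t = 0.5
```

For this quadratic, the exact line minimizer is $t=0.5$, which the bisection recovers to the requested tolerance.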
Exact travelling wave solutions of the two-dimensional stochastic Allen-Cahn equation with multiplicative noise are obtained through the hyperbolic tangent (tanh) method. This technique restricts the solutions to travelling wave profiles by representing them as a finite tanh power series. This study focuses on how multiplicative noise affects the dynamics of these travelling waves, in particular the occurrence of wave propagation failure due to high levels of noise.
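The mechanics of the tanh method can be sketched on the deterministic 1-D Allen-Cahn equation (a standard textbook case, not the stochastic 2-D setting of the paper): substitute a finite tanh series into the stationary equation, collect powers of $T=\tanh(\mu\xi)$, and solve for the coefficients. This recovers the classical kink $U(\xi)=\tanh(\xi/\sqrt{2})$.

```python
import sympy as sp

T, a1, mu = sp.symbols('T a1 mu', positive=True)

# tanh-method ansatz U(xi) = a1*tanh(mu*xi); writing T = tanh(mu*xi),
# d/dxi acts on functions of T as mu*(1 - T**2)*d/dT
U = a1 * T
dU = sp.diff(U, T) * mu * (1 - T**2)
d2U = sp.diff(dU, T) * mu * (1 - T**2)

# stationary 1-D Allen-Cahn equation: U'' + U - U^3 = 0
residual = sp.expand(d2U + U - U**3)

# the residual is a polynomial in T; every coefficient must vanish
eqs = sp.Poly(residual, T).coeffs()
sol = sp.solve(eqs, [a1, mu], dict=True)
print(sol)   # a1 = 1, mu = sqrt(2)/2, i.e. the kink U = tanh(xi/sqrt(2))
```

The positivity assumption on the symbols selects one of the two symmetric kink branches.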
We propose a method for computing the Lyapunov exponents of renewal equations (delay equations of Volterra type) and of coupled systems of renewal and delay differential equations. The method consists in reformulating the delay equation as an abstract differential equation, reducing the latter to a system of ordinary differential equations via pseudospectral collocation, and applying the standard discrete QR method. The effectiveness of the method is shown experimentally, and a MATLAB implementation is provided.
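The final ingredient, the discrete QR method, can be sketched on a plain linear ODE system with known exponents; the pseudospectral reduction of the renewal equation is omitted here, and the test matrix is an illustrative assumption. An orthonormal frame is propagated, re-orthonormalized by QR each step, and the logs of the diagonal of $R$ are averaged.

```python
import numpy as np

def lyapunov_exponents_qr(A, T=50.0, h=0.01):
    """Discrete QR method for the linear system y' = A y."""
    n = A.shape[0]
    Q = np.eye(n)
    steps = int(T / h)
    log_r = np.zeros(n)
    rhs = lambda Y: A @ Y
    for _ in range(steps):
        # one RK4 step for the matrix ODE Y' = A Y
        k1 = rhs(Q)
        k2 = rhs(Q + 0.5 * h * k1)
        k3 = rhs(Q + 0.5 * h * k2)
        k4 = rhs(Q + h * k3)
        Y = Q + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        Q, R = np.linalg.qr(Y)          # re-orthonormalize the frame
        log_r += np.log(np.abs(np.diag(R)))
    return log_r / (steps * h)

# non-normal test matrix with eigenvalues -1 and -2,
# so the Lyapunov exponents are -1 and -2
A = np.array([[-1.0, 5.0], [0.0, -2.0]])
print(lyapunov_exponents_qr(A))   # approximately [-1., -2.]
```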
Magnetization dynamics in ferromagnetic materials is modeled by the Landau-Lifshitz (LL) equation, a nonlinear system of partial differential equations. Among the numerical approaches, semi-implicit schemes are widely used in micromagnetics simulations, owing to a good compromise between accuracy and efficiency: at each time step, only a linear system needs to be solved, and a projection is then applied to preserve the length of magnetization. However, this linear system has variable coefficients and a non-symmetric structure, so an efficient linear solver is highly desired. Moreover, it has been realized that efficient solvers are only available for linear systems with constant, symmetric, and positive definite (SPD) structure, even when the damping parameter becomes large. In this work, based on implicit-explicit Runge-Kutta (IMEX-RK) time discretization, we introduce an artificial damping term, which is treated implicitly, while the remaining terms are treated explicitly. This strategy leads to a semi-implicit scheme with the following properties: (1) only a few linear systems with constant coefficients and SPD structure need to be solved at each time step; (2) it works for the LL equation with an arbitrary damping parameter; (3) high-order accuracy can be obtained with high-order IMEX-RK time discretizations. Numerically, second-order and third-order IMEX-RK methods are designed in both 1-D and 3-D domains. A comparison with the backward differentiation formula scheme is undertaken in terms of accuracy and efficiency. The robustness of both numerical methods is tested on the first benchmark problem from the National Institute of Standards and Technology. A linearized stability estimate and an optimal-rate convergence analysis are provided for an alternate IMEX-RK2 numerical scheme as well.
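The stabilization idea, solving the same constant-coefficient implicit system every step while treating the variable-coefficient remainder explicitly, can be sketched on a toy 1-D variable-coefficient diffusion equation with a first-order IMEX Euler step. This is an illustrative assumption, not the LL equation or the paper's IMEX-RK schemes; the artificial damping strength `kappa` plays the role of the artificial damping term.

```python
import numpy as np

# Toy problem: u_t = d(x) u_xx with periodic boundary conditions.
# Treat an artificial constant-coefficient term kappa*u_xx implicitly and the
# remainder (d(x) - kappa)*u_xx explicitly, so every step solves the same
# constant-coefficient (here FFT-diagonal) system.
N, L = 128, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi      # angular wavenumbers
d = 1.0 + 0.5 * np.sin(x)                       # variable diffusion coefficient
kappa = d.max()                                 # artificial damping strength

def lap(u):                                     # periodic Laplacian via FFT
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

def imex_euler_step(u, dt):
    # explicit: (d(x) - kappa) u_xx ; implicit: kappa u_xx (constant-coefficient)
    rhs = u + dt * (d - kappa) * lap(u)
    return np.real(np.fft.ifft(np.fft.fft(rhs) / (1.0 + dt * kappa * k ** 2)))

u = np.sin(x)
dt = 0.1                                        # far beyond the explicit limit
for _ in range(100):
    u = imex_euler_step(u, dt)
```

Despite the time step being far larger than the explicit stability limit for this grid, the iteration remains stable and the solution decays, which is the point of the implicit artificial term.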
The Galerkin method is often employed for numerical integration of evolutionary equations, such as the Navier-Stokes equation or the magnetic induction equation. Application of the method requires solving an equation of the form $P(Av-f)=0$ at each time step, where $v$ is an element of a finite-dimensional space $V$ with a basis satisfying the boundary conditions, $P$ is the orthogonal projection onto this space and $A$ is a linear operator. Usually the coefficients of $v$ expanded in the basis are found by calculating the matrix of $PA$ acting on $V$ and solving the respective system of linear equations. For physically realistic boundary conditions (such as the no-slip boundary conditions for the velocity, or those for the magnetic field with a dielectric outside the fluid volume) the basis is often not orthogonal, and solving the problem can be computationally demanding. We propose an algorithm that can reduce the computational cost of such a problem. Suppose there exists a space $W$ that contains $V$, the difference between the dimensions of $W$ and $V$ is small relative to the dimension of $V$, and solving the problem $P(Aw-f)=0$, where $w$ is an element of $W$, requires fewer operations than solving the original problem. The equation $P(Av-f)=0$ is then solved in two steps: we solve the problem $P(Aw-f)=0$ in $W$, find a correction $h=v-w$ that belongs to a complement of $V$ in $W$, and obtain the solution $v=w+h$. When the dimension of the complement is small, the proposed algorithm is more efficient than the traditional one.
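The two-step structure can be checked on a small dense example: since $V\subset W$, the residual $v-w$ lies in the kernel of the map $e\mapsto B^{\mathsf T}Ae$ (with $B$ a basis of $V$), so the correction can be found in a low-dimensional complement. The spaces and operator below are random illustrative choices, and no cost saving is modeled (in this dense toy the solve in $W$ is not actually cheaper).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 7                      # dim W = 8, dim V = 7 (complement dim 1)
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)      # SPD operator on W = R^n
f = rng.standard_normal(n)
B = rng.standard_normal((n, m))  # non-orthogonal basis of V (columns)

# direct Galerkin solve in V: find v = B c with B^T (A v - f) = 0
v_direct = B @ np.linalg.solve(B.T @ A @ B, B.T @ f)

# step 1: solve the (assumed cheaper) problem in the larger space W = R^n
w = np.linalg.solve(A, f)

# step 2: v - w lies in ker(B^T A), a 1-dimensional complement of V in W;
# take a basis vector of that kernel from the SVD ...
_, _, Vt = np.linalg.svd(B.T @ A)
kvec = Vt[-1]                    # null vector of the 7x8 matrix B^T A

# ... and determine c, alpha from B c - alpha*kvec = w, so that v = w + alpha*kvec
sol = np.linalg.solve(np.column_stack([B, -kvec]), w)
v_twostep = w + sol[-1] * kvec

print(np.max(np.abs(v_twostep - v_direct)))   # agreement to machine precision
```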
Long quantum codes using projective Reed-Muller codes are constructed. We obtain asymmetric and symmetric quantum codes by using the CSS construction and the Hermitian construction, respectively. Quantum codes obtained from projective Reed-Muller codes usually require entanglement assistance, but we show that sometimes we can avoid this requirement by considering monomially equivalent codes. Moreover, we also provide some constructions of quantum codes from subfield subcodes of projective Reed-Muller codes.
In this paper, we formulate and analyse a symmetric low-regularity integrator for solving the nonlinear Klein-Gordon equation in $d$-dimensional space with $d=1,2,3$. The integrator is constructed based on a two-step trigonometric method and has a simple form. Error estimates are rigorously presented to show that the integrator achieves second-order accuracy in time in the energy space under the regularity requirement $H^{1+\frac{d}{4}}\times H^{\frac{d}{4}}$. Moreover, the time symmetry of the scheme ensures good long-time energy conservation, which is rigorously proved by the technique of modulated Fourier expansions. A numerical test is presented, and the results demonstrate the superiority of the new integrator over some existing methods.
A posteriori reduced-order models, e.g. proper orthogonal decomposition, are essential to affordably tackle realistic parametric problems. They rely on a trustworthy training set, that is, a family of full-order solutions (snapshots) representative of all possible outcomes of the parametric problem. Having such a rich collection of snapshots is, in many cases, not computationally viable. A strategy for data augmentation, designed for parametric laminar incompressible flows, is proposed to enrich poorly populated training sets. The goal is to include in the new, artificial snapshots emerging features, not present in the original basis, that enhance the quality of the reduced-order solution. The methodologies devised exploit basic physical principles, such as mass and momentum conservation, to produce physically relevant, artificial snapshots at a fraction of the cost of additional full-order solutions. Interestingly, the numerical results show that the ideas exploiting only mass conservation (i.e., incompressibility) do not produce significant added value with respect to standard linear combinations of snapshots. Conversely, accounting for the linearized momentum balance via the Oseen equation does improve the quality of the resulting approximation and is therefore an effective data augmentation strategy in the framework of viscous incompressible laminar flows.
A Cahn-Hilliard-Allen-Cahn phase-field model coupled with a heat transfer equation, particularly with full non-diagonal mobility matrices, is studied. After reformulating the problem with respect to the inverse of the temperature, we propose and analyse a structure-preserving approximation for the semi-discretisation in space and then a fully discrete approximation using conforming finite elements and time-stepping methods. We prove the structure-preserving property and discrete stability using relative entropy methods for the semi-discrete and fully discrete cases. The theoretical results are illustrated by numerical experiments.
Partitioned neural network functions are used to approximate the solution of partial differential equations. The problem domain is partitioned into non-overlapping subdomains, and the partitioned neural network functions are defined on these subdomains, so that each neural network function approximates the solution in its own subdomain. To obtain a convergent neural network solution, certain continuity conditions on the partitioned neural network functions across the subdomain interfaces need to be included in the loss function that is used to train the parameters of the neural network functions. In our work, by introducing suitable interface values, the loss function is reformulated into a sum of localized loss functions, and each localized loss function is used to train the corresponding local neural network parameters. In addition, to accelerate the convergence of the neural network solution, the localized loss function is enriched with an augmented Lagrangian term, where the interface condition and the boundary condition are enforced as constraints on the local solutions by using Lagrange multipliers. The local neural network parameters and the Lagrange multipliers are then found by optimizing the localized loss function. To take advantage of the localized loss functions for parallel computation, an iterative algorithm is also proposed. The training performance and convergence of the proposed algorithms are numerically studied for various test examples.
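The augmented Lagrangian mechanism in the localized loss (a quadratic penalty plus a multiplier term enforcing the interface and boundary constraints) can be illustrated on a scalar toy problem; this is an assumption for illustration, not the paper's neural-network setup.

```python
# Toy: minimize f(x) = (x - 2)^2 subject to the "interface" constraint
# c(x) = x - 1 = 0.  Each outer iteration minimizes the augmented Lagrangian
#   L(x) = f(x) + lam*c(x) + (mu/2)*c(x)^2
# exactly (it is quadratic in x), then updates the multiplier lam += mu*c(x).
mu, lam, x = 10.0, 0.0, 0.0
for _ in range(50):
    # stationarity of L: 2(x - 2) + lam + mu(x - 1) = 0
    x = (4.0 - lam + mu) / (2.0 + mu)
    lam += mu * (x - 1.0)

print(x, lam)   # x -> 1 (constraint satisfied), lam -> 2 (the KKT multiplier)
```

The multiplier update drives the constraint violation to zero without sending the penalty parameter `mu` to infinity, which is the advantage of the augmented Lagrangian over a pure penalty term in the loss.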