This paper presents a convolution tensor decomposition based model reduction method for solving the Allen-Cahn equation. The Allen-Cahn equation is commonly used to characterize phase separation or the motion of anti-phase boundaries in materials. Solving it is time-consuming when high-resolution meshes and long-time integration are involved. To address these issues, the convolution tensor decomposition method is developed in conjunction with a stabilized semi-implicit scheme for time integration. The development enables a powerful computational framework for high-resolution solutions of Allen-Cahn problems and allows the use of relatively large time increments without violating the discrete energy law. To further improve the efficiency and robustness of the method, an adaptive algorithm is also proposed. Numerical examples confirm the efficiency of the method in both 2D and 3D problems. Orders-of-magnitude speedups over the finite element method were obtained for high-resolution problems. The proposed computational framework opens numerous opportunities for simulating complex microstructure formation in materials on large-volume, high-resolution meshes at a deeply reduced computational cost.
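To make the time-stepping concrete, the sketch below shows a standard stabilized semi-implicit step for the Allen-Cahn equation $u_t = \epsilon^2 \Delta u - (u^3 - u)$ on a periodic 2D grid, solved in Fourier space. It is a generic baseline, not the paper's convolution tensor decomposition solver, and the grid size, stabilization constant $S$, and time step are illustrative.

```python
import numpy as np

# Minimal 2D Allen-Cahn solver with a stabilized semi-implicit step:
# u_t = eps^2 * Laplacian(u) - (u^3 - u), periodic BCs, Fourier spectral in space.
# The stabilization term S*(u^{n+1} - u^n) is treated implicitly to allow larger time steps.
N, L, eps, dt, S = 128, 2 * np.pi, 0.05, 0.1, 2.0

x = np.linspace(0, L, N, endpoint=False)
u = 0.05 * np.random.randn(N, N)              # small random initial condition

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
lap = -(KX**2 + KY**2)                        # Fourier symbol of the Laplacian

denom = 1.0 + dt * S - dt * eps**2 * lap      # implicit factor (1 + dt*S - dt*eps^2*Lap)
for _ in range(200):
    rhs = np.fft.fft2(u + dt * S * u - dt * (u**3 - u))  # explicit nonlinear + stabilization
    u = np.real(np.fft.ifft2(rhs / denom))
```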
This paper presents a data-driven finite volume method for solving 1D and 2D hyperbolic partial differential equations. The work builds upon prior research on data-driven finite-difference approximations of smooth solutions of scalar conservation laws, in which the optimal coefficients of neural networks approximating spatial derivatives are learned from accurate but expensive solutions of these equations. We extend this approach to flux-limited finite volume schemes for hyperbolic scalar conservation laws and systems of conservation laws. We also train the discretization to efficiently capture discontinuous solutions with shock and contact waves, and to handle the application of boundary conditions. The learning procedure of the data-driven model is extended through the definition of a new loss function, paddings, and an adequate database. These new ingredients guarantee computational stability, preserve the accuracy of fine-grid solutions, and enhance overall performance. Numerical experiments using test cases from the literature in both one and two space dimensions demonstrate that the learned model accurately reproduces fine-grid results on very coarse meshes.
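For reference, the following is a minimal conventional flux-limited finite volume scheme (minmod/MUSCL reconstruction with a Rusanov flux) for the 1D Burgers equation. It illustrates the class of schemes being learned, not the paper's trained discretization, and all names and parameters are illustrative.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def burgers_step(u, dx, dt):
    """One forward-Euler MUSCL step for u_t + (u^2/2)_x = 0 with periodic BCs
    and a Rusanov (local Lax-Friedrichs) numerical flux.  Forward Euler is used
    for brevity; an SSP Runge-Kutta integrator would normally be preferred."""
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))   # limited slopes
    uL = u + 0.5 * s                      # left state at interface i+1/2
    uR = np.roll(u - 0.5 * s, -1)         # right state at interface i+1/2
    f = lambda q: 0.5 * q**2
    a = np.maximum(np.abs(uL), np.abs(uR))               # local wave speed
    flux = 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)   # Rusanov flux at i+1/2
    return u - dt / dx * (flux - np.roll(flux, 1))

# usage: a sine wave steepening into a shock
N = 200; dx = 2 * np.pi / N
x = dx * np.arange(N)
u = np.sin(x)
for _ in range(300):
    u = burgers_step(u, dx, dt=0.4 * dx / max(1e-12, np.abs(u).max()))
```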
In this contribution, we compare the numerical solutions of high-order asymptotic equivalent partial differential equations with the results of a lattice Boltzmann scheme for an inhomogeneous advection problem in one spatial dimension. We first derive a family of equivalent partial differential equations at various orders, and we compare the lattice Boltzmann experimental results with a spectral approximation of these differential equations. For an unsteady situation, we show that initializing the microscopic moments to a sufficiently high order plays a crucial role in observing an asymptotic error consistent with the order of approximation. For the stationary long-time limit, we observe that the measured asymptotic error converges with a reduced order of precision compared to the one suggested by the asymptotic analysis.
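As an illustration of the kind of scheme under study, here is a minimal D1Q2 lattice Boltzmann sketch for homogeneous 1D linear advection with a simple first-order (equilibrium) initialization of the microscopic moments. The paper's inhomogeneous advection problem and higher-order initializations are not reproduced, and all parameters are illustrative.

```python
import numpy as np

# D1Q2 lattice Boltzmann sketch for u_t + c u_x = 0 with periodic BCs.
N, c, lam, omega, steps = 200, 0.5, 1.0, 1.8, 400   # lam = lattice velocity dx/dt

x = np.linspace(0, 1, N, endpoint=False)
u = np.exp(-200 * (x - 0.5) ** 2)                   # initial macroscopic field

# equilibrium distributions for the two discrete velocities +lam and -lam
feq = lambda u: (0.5 * (1 + c / lam) * u, 0.5 * (1 - c / lam) * u)
fp, fm = feq(u)                                     # first-order (equilibrium) initialization

for _ in range(steps):
    u = fp + fm                                     # macroscopic moment (conserved density)
    ep, em = feq(u)
    fp += omega * (ep - fp)                         # BGK collision
    fm += omega * (em - fm)
    fp = np.roll(fp, 1)                             # stream +lam: shift one cell to the right
    fm = np.roll(fm, -1)                            # stream -lam: shift one cell to the left

u = fp + fm
```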
This paper concerns the mathematical analysis of diffusion models in machine learning. The drift term of the backward sampling process is represented as a conditional expectation involving the data distribution and the forward diffusion. The training process aims to find such a drift function by minimizing the mean-squared residual associated with the conditional expectation. Using small-time approximations of the Green's function of the forward diffusion, we show that the analytical mean drift function in DDPM and the score function in SGM asymptotically blow up in the final stages of the sampling process for singular data distributions, such as those concentrated on lower-dimensional manifolds, and are therefore difficult to approximate by a network. To overcome this difficulty, we derive a new target function and an associated loss, which remain bounded even for singular data distributions. We validate the theoretical findings with several numerical examples.
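A worked one-point example (not taken from the paper) makes the blow-up mechanism plain: if the data distribution is a Dirac mass at $x_0$ and the forward process is the usual variance-preserving diffusion with mean coefficient $\alpha_t$ and noise level $\sigma_t$, then
$$
p_t(x) = \mathcal{N}\!\left(x;\,\alpha_t x_0,\,\sigma_t^2 I\right),
\qquad
\nabla_x \log p_t(x) = -\frac{x - \alpha_t x_0}{\sigma_t^2},
$$
so for typical samples $x \sim p_t$, for which $|x - \alpha_t x_0|$ is of order $\sigma_t$, the score has magnitude of order $\sigma_t^{-1}$ and diverges as $\sigma_t \to 0$ at the end of the reverse-time sampling.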
A finite element method is introduced to track interface evolution governed by the level set equation. The method solves for the level set indicator function in a narrow band around the interface. An extension procedure, which is essential for a narrow band level set method, is introduced based on a finite element $L^2$- or $H^1$-projection combined with the ghost-penalty method. This procedure is formulated as a linear variational problem in a narrow band around the surface, making it computationally efficient and suitable for rigorous error analysis. The extension method is combined with a discontinuous Galerkin space discretization and a BDF time-stepping scheme. The paper analyzes the stability and accuracy of the extension procedure and evaluates the performance of the resulting narrow band finite element method for the level set equation through numerical experiments.
We present weak approximation schemes of any order for the Heston model, obtained using the method developed by Alfonsi and Bally (2021). This method consists in combining approximation schemes computed on different random grids to increase the order of convergence. We apply it with either the Ninomiya-Victoir scheme (2008) or a second-order scheme that samples the volatility component exactly, and we show rigorously that any order of convergence can then be achieved. We give numerical illustrations on financial examples that validate the theoretical order of convergence. We also present promising numerical results for the multifactor/rough Heston model and hint at applications to other models, including the Bates model and the double Heston model.
We consider the linear second-order differential equation $y'' = f$ with zero Dirichlet boundary conditions. At the continuous level this problem is solvable using the Green's function, and this technique has a counterpart at the discrete level. The discrete solution is represented via the application of a matrix -- the Green matrix -- to the discretised right-hand side, and we propose an algorithm for the fast construction of the Green matrix. In particular, we discretise the original problem using the spectral collocation method based on the Chebyshev--Gauss--Lobatto points, and using the discrete cosine transform we show that the corresponding Green matrix is fast to construct even for a large number of collocation points/high polynomial degree. Furthermore, we show that the action of the discrete solution operator (Green matrix) on the corresponding right-hand side can be implemented in a matrix-free fashion.
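As a point of reference (not the paper's fast algorithm), the discrete Green matrix can be formed naively by inverting the interior block of the squared Chebyshev differentiation matrix. The sketch below uses the standard Chebyshev-Gauss-Lobatto construction (as in Trefethen's "Spectral Methods in MATLAB") and costs $O(n^3)$, whereas the paper's DCT-based approach avoids this dense inversion.

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto points and first-derivative differentiation matrix."""
    if n == 0:
        return np.array([[0.0]]), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

n = 32
D, x = cheb(n)
D2 = (D @ D)[1:-1, 1:-1]      # second-derivative matrix on interior points (zero Dirichlet BCs)
G = np.linalg.inv(D2)         # dense "Green matrix": maps f-values to y-values
f = np.exp(4 * x[1:-1])       # sample right-hand side
y = G @ f                     # discrete solution at the interior collocation points
```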
We develop and analyze stochastic inexact Gauss-Newton methods for nonlinear least-squares problems and for nonlinear systems of equations. Random models are formed using suitable sampling strategies for the matrices involved in the deterministic models. The analysis of the expected number of iterations needed in the worst case to achieve a desired level of accuracy in the first-order optimality condition provides guidelines for applying sampling and enforcing, with a fixed probability, a suitable accuracy in the random approximations. Results of the numerical validation of the algorithms are presented.
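A much-simplified sketch of a subsampled Gauss-Newton iteration for $\min_x \frac{1}{2m}\sum_i r_i(x)^2$ is given below. It omits the inexactness control, adaptive accuracy requirements, and probabilistic guarantees analyzed in the paper, and the callables `residual` and `jacobian` are hypothetical.

```python
import numpy as np

def stochastic_gauss_newton(residual, jacobian, x0, m, batch=32, iters=50,
                            reg=1e-4, step=1.0, rng=np.random.default_rng(0)):
    """Subsampled Gauss-Newton sketch.  `residual(x, idx)` and `jacobian(x, idx)`
    return the residual vector and Jacobian restricted to the sampled indices idx."""
    x = x0.copy()
    for _ in range(iters):
        idx = rng.choice(m, size=batch, replace=False)   # random model via sampling
        r, J = residual(x, idx), jacobian(x, idx)
        # regularized Gauss-Newton step computed on the sampled model
        d = np.linalg.solve(J.T @ J + reg * np.eye(x.size), -J.T @ r)
        x += step * d
    return x
```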
We consider the problem of estimating the error when solving a system of differential algebraic equations. Richardson extrapolation is a classical technique that can be used to judge when computational errors are irrelevant and to estimate the discretization error. We have simulated molecular dynamics with constraints using the GROMACS library and found that the output is not always amenable to Richardson extrapolation. We derive and illustrate Richardson extrapolation using a variety of numerical experiments, and we identify two necessary conditions that are not always satisfied by the GROMACS library.
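For concreteness, classical Richardson extrapolation works as follows: if $A(h) = A + C h^p + o(h^p)$, then comparing results at step sizes $h$ and $h/2$ yields both an error estimate and a higher-order value. The snippet below is a generic illustration (composite trapezoidal rule, $p = 2$), unrelated to the GROMACS experiments.

```python
import numpy as np

def richardson(a_h, a_h2, p):
    """Given A(h) and A(h/2) for a method with leading error of order p,
    return the extrapolated value and the estimated error of A(h/2)."""
    err_h2 = (a_h2 - a_h) / (2**p - 1)
    return a_h2 + err_h2, err_h2

def trapezoid(y, dx):
    """Composite trapezoidal rule with uniform spacing."""
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# usage: integrate x^2 on [0, 1] (exact value 1/3) at two resolutions
f = lambda x: x**2
coarse = trapezoid(f(np.linspace(0, 1, 5)), dx=0.25)
fine = trapezoid(f(np.linspace(0, 1, 9)), dx=0.125)
extrapolated, err_estimate = richardson(coarse, fine, p=2)
```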
Many articles have recently been devoted to Mahler equations, partly because of their links with other branches of mathematics such as automata theory. Hahn series (a generalization of Puiseux series allowing arbitrary exponents of the indeterminate as long as the set supporting them is well-ordered) play a central role in the theory of Mahler equations. In this paper, we address the following fundamental question: is there an algorithm to compute the Hahn series solutions of a given linear Mahler equation? What makes this question interesting is that the Hahn series appearing in this context can have complicated supports with infinitely many accumulation points. Our (positive) answer to this question involves, among other things, the construction of a computable well-ordered receptacle for the supports of the potential Hahn series solutions.
This paper proposes an efficient and stable quasi-interpolation based method for numerically computing the Helmholtz-Hodge decomposition of a vector field. To this end, we first explicitly construct a matrix kernel in a general form from polyharmonic splines such that it includes divergence-free, curl-free, and harmonic matrix kernels as special cases. Then we apply the matrix kernel to vector decomposition via the convolution technique together with the Helmholtz-Hodge decomposition. More precisely, we show that if we convolve a vector field with a scaled divergence-free (curl-free) matrix kernel, then the resulting divergence-free (curl-free) convolution sequence converges to the corresponding divergence-free (curl-free) part of the Helmholtz-Hodge decomposition of the field. Finally, by discretizing the convolution sequence via a quadrature rule, we construct a family of (divergence-free/curl-free) quasi-interpolants for the Helmholtz-Hodge decomposition (defined both in the whole space and over a bounded domain). The corresponding error estimates derived in the paper show that our quasi-interpolation based method yields convergent approximants to both the vector field and its Helmholtz-Hodge decomposition.
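For comparison only, the snippet below performs a standard FFT-based Helmholtz decomposition of a periodic 2D vector field via the Leray projection. It is not the paper's polyharmonic-spline quasi-interpolation construction, and the sample field is made up for illustration.

```python
import numpy as np

# FFT-based Helmholtz decomposition of a periodic 2D vector field (u, v).
N = 64
k = np.fft.fftfreq(N) * N             # integer wavenumbers on a 2*pi-periodic box
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                        # avoid division by zero for the mean mode

x = np.linspace(0, 2 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(X) * np.sin(Y) + np.sin(X)     # sample field with both parts present
v = -np.sin(X) * np.cos(Y) + np.cos(Y)

uh, vh = np.fft.fft2(u), np.fft.fft2(v)
div_h = KX * uh + KY * vh                 # (the i-factors cancel in the projection)
u_df = np.real(np.fft.ifft2(uh - KX * div_h / K2))   # divergence-free part
v_df = np.real(np.fft.ifft2(vh - KY * div_h / K2))
u_cf, v_cf = u - u_df, v - v_df                      # curl-free (gradient) part
```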