This paper presents the error analysis of numerical methods on graded meshes for stochastic Volterra equations with weakly singular kernels. We first prove a novel regularity estimate for the exact solution by analyzing the associated convolution structure. This reveals that the exact solution exhibits an initial singularity, in the sense that its H\"older continuity exponent on any neighborhood of $t=0$ is lower than that on every compact subset of $(0,T]$. Motivated by this initial singularity, we then construct the Euler--Maruyama method, the fast Euler--Maruyama method, and the Milstein method on graded meshes. By establishing pointwise-in-time error estimates, we identify the mesh grading exponents that attain the optimal uniform-in-time convergence orders, which improve on those of the uniform-mesh case. Numerical experiments are finally reported to confirm the sharpness of the theoretical findings.
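To make the graded-mesh idea concrete, the following minimal sketch (an illustration only, not the paper's implementation; the grading exponent $r$ and power-law node formula $t_n = T(n/N)^r$ are the standard construction assumed here) shows how nodes cluster near $t=0$, where the solution loses H\"older regularity:

```python
import numpy as np

def graded_mesh(T, N, r):
    """Graded mesh t_n = T * (n/N)**r on [0, T].

    r = 1 recovers the uniform mesh; r > 1 clusters nodes near t = 0,
    compensating for the initial singularity of the exact solution.
    """
    n = np.arange(N + 1)
    return T * (n / N) ** r

mesh = graded_mesh(T=1.0, N=8, r=2.0)
# The first step is much smaller than the last one, so a time-stepping
# scheme (e.g. Euler--Maruyama) takes finer steps where regularity is low.
```

A time integrator on this mesh simply uses the nonuniform increments `mesh[n+1] - mesh[n]` in place of a fixed stepsize.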
In this paper, we consider an inverse space-dependent source problem for a time-fractional diffusion equation. To deal with the ill-posedness of the problem, we transform it into an optimal control problem with total variation (TV) regularization. In contrast to the classical Tikhonov model with $L^2$ penalty terms, the inclusion of a TV term proves advantageous in reconstructing solutions that exhibit discontinuities or piecewise constancy. The control problem is approximated by a fully discrete scheme, and convergence results are provided within this framework. Furthermore, a linearized primal-dual iterative algorithm is proposed to solve the discrete control model based on an equivalent saddle-point reformulation, and several numerical experiments are presented to demonstrate the efficiency of the algorithm.
In this paper we show that, using implicative algebras, one can produce models of set theory generalizing Heyting/Boolean-valued models and realizability models of (I)ZF, both in intuitionistic and classical logic. As a consequence, any topos obtained from a Set-based tripos via the tripos-to-topos construction hosts a model of intuitionistic or classical set theory, provided a large enough strongly inaccessible cardinal exists.
In this paper we consider the numerical solution of fractional differential equations. In particular, we study a step-by-step graded-mesh procedure based on an expansion of the vector field in orthonormal Jacobi polynomials. Under mild hypotheses, the proposed procedure achieves spectral accuracy. A few numerical examples are reported to confirm the theoretical findings.
Engineers are often faced with the decision to select, among a candidate set of models, the most appropriate one for simulating the behavior of engineered systems. Experimental monitoring data can generate significant value by supporting engineers in such decisions. Such data can be leveraged within a Bayesian model updating process, enabling the uncertainty-aware calibration of any candidate model. The model selection task can subsequently be cast as a problem of decision-making under uncertainty, where one seeks the model that yields an optimal balance between the reward associated with model precision, in terms of recovering target Quantities of Interest (QoI), and the cost of each model, in terms of complexity and compute time. In this work, we examine the model selection task by means of Bayesian decision theory, through the prism of the availability of models of varying refinement, and thus varying levels of fidelity. In doing so, we present an exemplary application of this framework to the IMAC-MVUQ Round-Robin Challenge. Numerical investigations show that the outcome of model selection depends on the target QoI.
The accuracy of solving partial differential equations (PDEs) on coarse grids is greatly affected by the choice of discretization scheme. In this work, we propose to learn time integration schemes based on neural networks that satisfy three distinct sets of mathematical constraints: unconstrained, semi-constrained with the root condition, and fully constrained with both the root and consistency conditions. We focus on learning 3-step linear multistep methods, which we subsequently apply to three model PDEs: the one-dimensional heat equation, the one-dimensional wave equation, and the one-dimensional Burgers' equation. The results show that the prediction error of the learned fully constrained scheme is close to that of the Runge-Kutta and Adams-Bashforth methods. Compared to the traditional methods, the learned unconstrained and semi-constrained schemes significantly reduce the prediction error on coarse grids. On a grid 4 times coarser than the reference grid, the mean square error is reduced by up to an order of magnitude for some of the heat equation cases, and the phase prediction for the wave equation is substantially improved. On a 32 times coarser grid, the mean square error for the Burgers' equation is reduced by 35% to 40%.
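The root condition mentioned above requires that the first characteristic polynomial $\rho(z)$ of a linear multistep method have all roots in the closed unit disk, with any root on the unit circle being simple. A minimal sketch of such a check (an illustration of the standard definition, not the paper's training code; the coefficient ordering and tolerance are assumptions):

```python
import numpy as np

def satisfies_root_condition(alpha, tol=1e-10):
    """Check the root condition for rho(z) = sum_j alpha[j] * z**(k-j),
    with coefficients ordered from highest to lowest degree:
    all roots satisfy |z| <= 1, and roots with |z| = 1 are simple."""
    roots = np.roots(alpha)
    if np.any(np.abs(roots) > 1 + tol):
        return False  # a root outside the unit disk: zero-unstable
    for r in roots[np.abs(np.abs(roots) - 1.0) <= tol]:
        # a repeated root on the unit circle also violates the condition
        if np.sum(np.abs(roots - r) <= tol) > 1:
            return False
    return True

# 3-step Adams-Bashforth has rho(z) = z**3 - z**2, with roots {1, 0, 0}.
print(satisfies_root_condition([1, -1, 0, 0]))  # True
```

A semi-constrained learned scheme would restrict its trainable coefficients so that this predicate holds by construction.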
We deal with McKean--Vlasov and Boltzmann-type jump equations. This means that the coefficients of the stochastic equation depend on the law of the solution, and the equation is driven by a Poisson point measure whose intensity measure also depends on the law of the solution. In [3], Alfonsi and Bally proved that, under suitable conditions, the solution $X_t$ of such an equation exists and is unique. It is also proved there that $X_t$ is the probabilistic interpretation of an analytical weak equation, and that the Euler scheme $X_t^{\mathcal{P}}$ of this equation converges to $X_t$ in Wasserstein distance. In this paper, under more restrictive assumptions, we show that the Euler scheme $X_t^{\mathcal{P}}$ converges to $X_t$ in total variation distance and that $X_t$ has a smooth density (which is a function solution of the analytical weak equation). On the other hand, in view of simulation, we use a truncated Euler scheme $X^{\mathcal{P},M}_t$ which has a finite number of jumps in any compact interval. We prove that $X^{\mathcal{P},M}_{t}$ also converges to $X_t$ in total variation distance. Finally, we give an algorithm based on a particle system associated to $X^{\mathcal{P},M}_t$ in order to approximate the density of the law of $X_t$. Complete error estimates are obtained.
Due to the lack of analysis of appropriate mapping operators between two grids, high-order two-grid difference algorithms are rarely studied. In this paper, we first discuss the boundedness of a local bi-cubic Lagrange interpolation operator. Then, taking the semilinear parabolic equation as an example, we construct a variable-step high-order nonlinear difference algorithm using a compact difference technique in space and the second-order backward differentiation formula (BDF2) with variable temporal stepsize in time. With the help of discrete orthogonal convolution (DOC) kernels and a cut-off numerical technique, the unique solvability and corresponding error estimates of the high-order nonlinear difference scheme are established under the assumptions that the temporal stepsize ratio satisfies $r_k < 4.8645$ and the maximum temporal stepsize satisfies $\tau = o(h^{1/2})$. Then, an efficient two-grid high-order difference algorithm is developed by combining a small-scale variable-step high-order nonlinear difference algorithm on the coarse grid with a large-scale variable-step high-order linearized difference algorithm on the fine grid, in which the constructed piecewise bi-cubic Lagrange interpolation mapping operator is adopted to project the coarse-grid solution to the fine grid. Under the same temporal stepsize ratio restriction $r_k < 4.8645$ and a weaker maximum temporal stepsize condition $\tau = o(H^{1.2})$, optimal fourth-order-in-space and second-order-in-time error estimates of the two-grid difference scheme are established, provided the coarse-fine grid stepsizes satisfy $H = O(h^{4/7})$. Finally, several numerical experiments are carried out to demonstrate the effectiveness and efficiency of the proposed scheme.
This article proposes entropy stable discontinuous Galerkin (DG) schemes for two-fluid relativistic plasma flow equations. These equations couple the flow of relativistic fluids via electromagnetic quantities evolved using Maxwell's equations. The proposed schemes are based on the Gauss-Lobatto quadrature rule, which has the summation-by-parts (SBP) property. We exploit the structure of the equations, whose flux has three independent parts coupled via nonlinear source terms. We design entropy stable DG schemes for each flux part; combined with the fact that the source terms do not affect entropy, this results in an entropy stable scheme for the complete system. The proposed schemes are then tested on various problems in one and two dimensions to demonstrate their accuracy and stability.
In this paper, using methods of the $D$-companion matrix, we reprove a generalization of the Gauss--Lucas theorem and obtain a majorization relationship between the zeros of convex combinations of incomplete polynomials and those of the original polynomial. Moreover, we prove that the set of all zeros of all convex combinations of incomplete polynomials coincides with the closed convex hull of the zeros of the original polynomial. The location of the zeros of convex combinations of incomplete polynomials is thereby determined.
Physics-informed neural network (PINN) based solution methods for differential equations have recently shown success in a variety of scientific computing applications. Several authors have reported difficulties, however, when using PINNs to solve equations with multiscale features. The objective of the present work is to illustrate and explain the difficulty of using standard PINNs in the particular case of divergence-form elliptic partial differential equations (PDEs) with oscillatory coefficients in the differential operator. We show that if the coefficient $a^{\epsilon}(x)$ in the elliptic operator is of the form $a(x/\epsilon)$ for a 1-periodic coercive function $a(\cdot)$, then the Frobenius norm of the neural tangent kernel (NTK) matrix associated to the loss function grows as $1/\epsilon^2$. This implies that as the separation of scales in the problem increases, training the neural network with gradient-descent-based methods to achieve an accurate approximation of the PDE solution becomes increasingly difficult. Numerical examples illustrate the stiffness of the optimization problem.
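The $1/\epsilon^2$ NTK growth traces back to the $O(1/\epsilon)$ gradients of the oscillatory coefficient itself. As a hedged numerical illustration (the specific choice $a(y) = 2 + \sin 2\pi y$ is an assumed example of a 1-periodic coercive coefficient, not one taken from the paper):

```python
import numpy as np

def a_eps(x, eps):
    """Oscillatory coefficient a(x/eps) with a(y) = 2 + sin(2*pi*y),
    a 1-periodic function bounded below by 1 (hence coercive)."""
    return 2.0 + np.sin(2 * np.pi * x / eps)

def max_derivative(eps, n=200001):
    """Finite-difference estimate of max |d/dx a(x/eps)| on [0, 1],
    which equals 2*pi/eps exactly and so scales like 1/eps."""
    x = np.linspace(0.0, 1.0, n)
    return np.max(np.abs(np.gradient(a_eps(x, eps), x)))

ratio = max_derivative(0.01) / max_derivative(0.1)
# ratio is close to 10: shrinking eps tenfold multiplies the
# coefficient's gradient tenfold, stiffening the PINN loss landscape.
```

Since the PDE residual in the PINN loss differentiates $a^{\epsilon}$, these growing gradients enter the loss quadratically, which is consistent with the $1/\epsilon^2$ scaling of the NTK norm stated above.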