We present a numerical scheme for the solution of the initial-value problem for the ``bad'' Boussinesq equation. The accuracy of the scheme is tested by comparison with exact soliton solutions as well as with recently obtained asymptotic formulas for the solution.
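For reference, a commonly used normalization of the ``bad'' Boussinesq equation is (conventions vary between references, so this form is an assumption rather than necessarily the paper's exact one):
\[
u_{tt} = u_{xx} + u_{xxxx} + (u^2)_{xx},
\]
where the plus sign on the fourth-order term distinguishes it from the ``good'' variant and renders the linearized problem ill-posed, which is precisely what makes stable numerical schemes for the initial-value problem delicate.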
In this paper, we analyze the strong convergence of parareal algorithms for stochastic Maxwell equations with a damping term, driven by additive noise. The proposed parareal algorithms are two-level, temporally parallel integrators that use the stochastic exponential integrator as the coarse propagator and either the exact solution operator or the stochastic exponential integrator as the fine propagator. It is proved that the convergence order of the proposed algorithms depends linearly on the iteration number. Numerical experiments illustrate the convergence order of the algorithms for different choices of the iteration number, the damping coefficient and the scale of the noise.
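To fix ideas, here is a minimal sketch of the generic parareal correction that such algorithms build on. The names `coarse_prop` and `fine_prop` and the scalar toy problem are hypothetical stand-ins; in the paper the propagators act on stochastic Maxwell equations.

```python
import numpy as np

def parareal(u0, T, N, K, coarse_prop, fine_prop):
    """Generic parareal iteration: K correction sweeps over N time windows.

    coarse_prop(u, t0, t1) and fine_prop(u, t0, t1) advance the state from
    t0 to t1; in the paper these are a stochastic exponential integrator
    (coarse) and an exact or exponential integrator (fine).
    """
    ts = np.linspace(0.0, T, N + 1)
    # Initial sequential coarse sweep.
    U = [u0]
    for n in range(N):
        U.append(coarse_prop(U[n], ts[n], ts[n + 1]))
    for _ in range(K):
        # Fine solves on each window are independent -> parallelizable.
        F = [fine_prop(U[n], ts[n], ts[n + 1]) for n in range(N)]
        G_old = [coarse_prop(U[n], ts[n], ts[n + 1]) for n in range(N)]
        U_new = [u0]
        for n in range(N):
            # Parareal update: G(new) + F(old) - G(old).
            U_new.append(coarse_prop(U_new[n], ts[n], ts[n + 1])
                         + F[n] - G_old[n])
        U = U_new
    return ts, U

# Toy usage on du/dt = -u (exact fine propagator, explicit Euler coarse):
fine = lambda u, t0, t1: u * np.exp(-(t1 - t0))
coarse = lambda u, t0, t1: u * (1.0 - (t1 - t0))
ts, U = parareal(1.0, 1.0, N=10, K=3, coarse_prop=coarse, fine_prop=fine)
```

The convergence result in the abstract says, roughly, that each additional sweep of the `for _ in range(K)` loop raises the convergence order by a fixed amount.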
A global approximation method of Nystr\"om type is explored for the numerical solution of a class of nonlinear integral equations of the second kind. The cases of smooth and weakly singular kernels are both considered. In the first case the method uses a Gauss-Legendre rule, whereas in the second it resorts to a product rule based on Legendre nodes. Stability and convergence are proved in function spaces equipped with the uniform norm, and several numerical tests show the good performance of the proposed method. An application to the interior Neumann problem for the Laplace equation with nonlinear boundary conditions is also considered.
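As an illustration of the Nystr\"om idea in the smooth-kernel case, the sketch below discretizes a second-kind nonlinear equation with a Gauss-Legendre rule; the kernel, nonlinearity and right-hand side are hypothetical examples (chosen so that fixed-point iteration contracts), not the class studied in the paper.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Nystrom discretization of  u(x) = f(x) + int_{-1}^{1} k(x,t) g(t, u(t)) dt
# with a Gauss-Legendre rule; the resulting nonlinear system in the node
# values is solved here by fixed-point iteration.
k = lambda x, t: 0.3 * np.exp(-(x - t) ** 2)   # smooth kernel (example)
g = lambda t, u: np.sin(u)                     # nonlinearity (example)
f = lambda x: np.cos(x)                        # right-hand side (example)

n = 16
nodes, weights = leggauss(n)
u = f(nodes)                                   # initial guess at the nodes
for _ in range(100):
    u_new = f(nodes) + (k(nodes[:, None], nodes[None, :])
                        * g(nodes[None, :], u[None, :])) @ weights
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new
# u approximates the solution at the Legendre nodes; the Nystrom
# interpolant extends it to arbitrary x via the same quadrature formula.
```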
Generalised quantifiers, which include Henkin's branching quantifiers, were introduced by Mostowski and Lindstr\"om and developed into a substantial topic in the application of logic, especially model theory, to linguistics through the work of Barwise, Cooper, and Keenan. In this paper, we mainly study the proof theory of some non-standard quantifiers viewed as second order formulae. Our first example is the usual pair of first order quantifiers (for all / there exists) when individuals are viewed as individual concepts handled by second order deductive rules. Our second example is the simplest branching quantifier, as in ``A member of each team and a member of each board of directors know each other'', for which we propose a second order treatment.
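For concreteness, the standard Skolemized second order reading of such a branching sentence quantifies over choice functions (a sketch in our own notation; the paper's exact translation may differ):
\[
\exists f\, \exists g\; \forall t\, \forall b\; \big( T(t) \wedge B(b) \rightarrow M(f(t),t) \wedge M(g(b),b) \wedge K(f(t),g(b)) \big),
\]
where $f$ picks a member of each team $t$ and $g$ a member of each board $b$, each independently of the other quantifier block, which no linear first order prefix can express.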
This work explores the representation of univariate and multivariate functions as matrix product states (MPS), also known as quantized tensor-trains (QTT). It proposes an algorithm that employs iterative Chebyshev expansions and Clenshaw evaluations to represent analytic and highly differentiable functions as MPS Chebyshev interpolants. The algorithm demonstrates rapid convergence for highly differentiable functions, in line with theoretical predictions, and generalizes efficiently to multidimensional settings. Its performance is compared with that of tensor cross-interpolation (TCI) and multiscale interpolative constructions in a comprehensive study. When function evaluation is inexpensive or when the function is not analytic, TCI is generally more efficient for function loading. However, the proposed method shows competitive performance, outperforming TCI in certain multivariate scenarios. Moreover, it shows advantageous scaling rates and generalizes to a wider range of tasks by providing a framework for function composition in MPS, which is useful for nonlinear problems and many-body statistical physics.
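The two building blocks named above, Chebyshev expansion and Clenshaw evaluation, are illustrated below on plain vectors rather than MPS cores, so this is only the non-tensorized skeleton of the algorithm, not the MPS construction itself.

```python
import numpy as np

def cheb_coeffs(f, n):
    """Coefficients of the degree-n Chebyshev interpolant of f on [-1, 1],
    from samples at Chebyshev nodes (discrete cosine transform by direct sums)."""
    k = np.arange(n + 1)
    x = np.cos(np.pi * (k + 0.5) / (n + 1))        # Chebyshev nodes
    fx = f(x)
    c = np.array([2.0 / (n + 1)
                  * np.sum(fx * np.cos(np.pi * j * (k + 0.5) / (n + 1)))
                  for j in range(n + 1)])
    c[0] /= 2.0
    return c

def clenshaw(c, x):
    """Evaluate sum_j c_j T_j(x) by the backward Clenshaw recurrence."""
    b1 = b2 = np.zeros_like(x)
    for cj in c[:0:-1]:                            # c_n, ..., c_1
        b1, b2 = 2.0 * x * b1 - b2 + cj, b1
    return x * b1 - b2 + c[0]

xs = np.linspace(-1.0, 1.0, 100)
c = cheb_coeffs(np.exp, 20)
print(np.max(np.abs(clenshaw(c, xs) - np.exp(xs))))  # ~ machine precision
```

In the paper's setting these recurrences are carried out with MPS arithmetic, so that each Clenshaw step combines tensor networks instead of vectors.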
In high-temperature plasma physics, a strong magnetic field is usually used to confine charged particles, so studying the classical mathematical models of these physical problems requires accounting for the effect of external magnetic fields. One of the important model equations in plasma physics is the Vlasov-Poisson equation with an external magnetic field. This equation usually has multi-scale characteristics and rich physical properties, so it is important and meaningful to construct numerical methods that preserve, over long times, the physical properties inherited from the original system. This paper extends the corresponding theory from Cartesian coordinates to general orthogonal curvilinear coordinates, and proves that a Poisson-bracket structure is still obtained after applying the corresponding finite element discretization. However, the Hamiltonian systems in the new coordinates generally cannot be decomposed into sub-systems that can be solved exactly, so splitting methods cannot be used to construct the corresponding geometric integrators. We therefore propose a semi-implicit method for strong magnetic fields and analyze its asymptotic stability.
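For reference, the model in question reads, in a standard normalization with a neutralizing background (assumed here; the paper's scaling may differ):
\[
\partial_t f + v \cdot \nabla_x f + \left( E + v \times B_{\mathrm{ext}} \right) \cdot \nabla_v f = 0,
\qquad E = -\nabla_x \phi, \qquad -\Delta_x \phi = \int f \, dv - 1,
\]
for the particle distribution $f(x,v,t)$, the self-consistent electric field $E$, and a prescribed external magnetic field $B_{\mathrm{ext}}$ whose strength sets the small gyro-period scale.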
In this work we study the numerical approximation of a class of ergodic backward stochastic differential equations (BSDEs). These equations are formulated in an infinite-horizon framework and provide a probabilistic representation for elliptic partial differential equations of ergodic type. To build our numerical scheme, we put forward a new representation of the PDE solution that uses a classical probabilistic representation of its gradient. Based on this representation, we then propose a fully implementable numerical scheme combining a Picard iteration procedure, a spatial grid discretization and a Monte-Carlo approximation. Up to a limiting technical condition that guarantees the contraction of the Picard procedure, we obtain an upper bound for the numerical error. We also provide numerical experiments that show the efficiency of this approach in small dimensions.
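Schematically, ergodic BSDEs take the form (a standard formulation, assumed here; the paper's notation may differ):
\[
Y_t = Y_T + \int_t^T \big( f(X_s, Z_s) - \lambda \big)\, ds - \int_t^T Z_s \, dW_s,
\qquad 0 \le t \le T < \infty,
\]
where the unknown is the triple $(Y, Z, \lambda)$ and the constant $\lambda$ corresponds to the ergodic cost appearing in the associated elliptic PDE; the infinite horizon is what distinguishes this class from standard BSDEs posed on a fixed interval.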
We study the numerical approximation of the stochastic heat equation with a distributional reaction term $b$. Under a condition on the Besov regularity of $b$, it was proven recently that a strong solution exists and is unique in the pathwise sense, in a class of H\"older continuous processes. For a suitable sequence $(b^k)_{k\in \mathbb{N}}$ approximating $b$, we prove that the error between the solution $u$ of the SPDE with reaction term $b$ and its tamed Euler finite-difference scheme with mollified drift $b^k$ converges to $0$ in $L^m(\Omega)$ with a rate that depends on the Besov regularity of $b$. In particular, two interesting cases arise: first, even when $b$ is only a (finite) measure, a rate of convergence is obtained; second, when $b$ is a bounded measurable function, the (almost) optimal rate of convergence, $(\frac{1}{2}-\varepsilon)$ in space and $(\frac{1}{4}-\varepsilon)$ in time, is achieved. Stochastic sewing techniques are used in the proofs, in particular to derive new regularising properties of the discrete Ornstein-Uhlenbeck process.
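A minimal sketch of a tamed, explicit finite-difference Euler scheme for a 1-D stochastic heat equation with additive space-time white noise is given below; the drift stand-in `b_k`, the taming formula and all parameters are illustrative assumptions, not the exact scheme analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 127, 20000                  # interior space points, time steps
h, dt = 1.0 / (M + 1), 1e-5        # mesh sizes (dt <~ h^2/2 for stability)
x = np.linspace(h, 1.0 - h, M)
u = np.sin(np.pi * x)              # initial condition (example)

b_k = lambda u: np.sign(u)         # irregular-drift stand-in (example)

for n in range(N):
    # Discrete Laplacian with homogeneous Dirichlet boundary conditions.
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2
    lap[0] = (u[1] - 2 * u[0]) / h**2
    lap[-1] = (u[-2] - 2 * u[-1]) / h**2
    # Taming keeps the explicit step stable for large drift values.
    drift = b_k(u) / (1.0 + dt * np.abs(b_k(u)))
    # Space-time white noise increment: variance dt/h per grid cell.
    noise = rng.standard_normal(M) * np.sqrt(dt / h)
    u = u + dt * (lap + drift) + noise
```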
Motivated by the recent successful application of physics-informed neural networks (PINNs) to Boltzmann-type equations [S. Jin, Z. Ma, and K. Wu, J. Sci. Comput., 94 (2023), pp. 57], we provide a rigorous error analysis for PINNs approximating the solution of the Boltzmann equation near a global Maxwellian. The challenge arises from the nonlocal quadratic interaction term defined on the unbounded velocity domain. Analyzing this term on an unbounded domain requires introducing a truncation function, which demands delicate analysis techniques. Generalizing this analysis, we also prove the asymptotic-preserving property of micro-macro decomposition-based neural networks.
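To illustrate the PINN framework in a much-simplified kinetic setting, the sketch below uses a 1-D BGK-type relaxation toward a Maxwellian as a stand-in for the full Boltzmann collision operator; the network architecture, sampling and parameters are all illustrative assumptions.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(                 # f_theta(t, x, v)
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1))

maxwellian = lambda v: torch.exp(-v**2 / 2) / (2 * torch.pi) ** 0.5

def residual(t, x, v, eps=1e-1):
    """PINN residual of  d_t f + v d_x f = (M(v) - f) / eps  (BGK stand-in)."""
    inp = torch.stack([t, x, v], dim=-1)
    f = net(inp).squeeze(-1)
    f_t, f_x = torch.autograd.grad(f, (t, x), torch.ones_like(f),
                                   create_graph=True)
    return f_t + v * f_x - (maxwellian(v) - f) / eps

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    t = torch.rand(256, requires_grad=True)
    x = torch.rand(256, requires_grad=True)
    v = 6 * torch.rand(256) - 3            # truncated velocity domain [-3, 3]
    loss = residual(t, x, v).pow(2).mean() # + data/IC/BC terms in practice
    opt.zero_grad(); loss.backward(); opt.step()
```

The truncation of the velocity domain in the sampling step mirrors, very loosely, the truncation function whose analysis the abstract describes as the main technical difficulty.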
We prove the convergence of a damped Newton's method for the nonlinear system resulting from a discretization of the second boundary value problem for the Monge-Amp\`ere equation. The boundary condition is enforced through the notion of asymptotic cone. The differential operator is discretized based on a discrete analogue of the subdifferential.
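Below is a generic damped Newton iteration with residual-based backtracking, the kind of iteration whose convergence is at issue; the specific discrete Monge-Amp\`ere operator is not reproduced here, so `F`, its Jacobian `J` and the toy system are hypothetical.

```python
import numpy as np

def damped_newton(F, J, x, tol=1e-10, max_iter=50):
    """Newton's method with backtracking damping on the residual norm."""
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        d = np.linalg.solve(J(x), -r)       # Newton direction
        alpha = 1.0
        while (np.linalg.norm(F(x + alpha * d)) >= np.linalg.norm(r)
               and alpha > 1e-8):
            alpha *= 0.5                    # damp until the residual decreases
        x = x + alpha * d
    return x

# Toy usage on F(x) = [x0^2 + x1 - 3, x0 + x1^2 - 5], solution (1, 2):
F = lambda x: np.array([x[0]**2 + x[1] - 3, x[0] + x[1]**2 - 5])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(damped_newton(F, J, np.array([1.0, 1.0])))
```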
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of explainee inference conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
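Shepard's law posits that generalization decays exponentially with psychological distance; in the similarity-space formalization referred to above (our notation, not necessarily the paper's):
\[
s(x, y) = e^{-c\, d(x, y)},
\]
where $d$ is a distance in the psychological similarity space and $c > 0$ a sensitivity parameter. On this account, a saliency-map explanation is interpreted by scoring its similarity to the explanation the explainee would have given, and the resulting similarity drives the predicted inference about the AI's decision.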