We derive a priori error estimates for the Godunov method applied to the multidimensional Euler system of gas dynamics. To this end we apply the relative energy principle and estimate the distance between the numerical solution and the strong solution. This also yields estimates of the $L^2$-norm of the errors in density, momentum and entropy. Under the assumption that the numerical density and energy are bounded, we obtain a convergence rate of $1/2$ for the relative energy in the $L^1$-norm. Further, under the assumption that the total variation of the numerical solution is bounded, we obtain a first-order convergence rate for the relative energy in the $L^1$-norm. Consequently, the numerical solutions (density, momentum and entropy) converge in the $L^2$-norm with a convergence rate of $1/2$. The numerical results presented for Riemann problems are consistent with our theoretical analysis.
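As background, the relative energy at the heart of such estimates has, in the barotropic setting, the schematic form

\[
\mathcal{E}\big(\rho,\mathbf{u}\,\big|\,\tilde\rho,\tilde{\mathbf{u}}\big)
=\int_{\Omega}\Big[\tfrac12\,\rho\,\big|\mathbf{u}-\tilde{\mathbf{u}}\big|^{2}
+P(\rho)-P(\tilde\rho)-P'(\tilde\rho)\,(\rho-\tilde\rho)\Big]\,\mathrm{d}x,
\]

where $(\tilde\rho,\tilde{\mathbf{u}})$ denotes the strong solution and $P$ the pressure potential; the notation here is illustrative, and the full Euler analysis works with an entropy-dependent variant. The functional is nonnegative, vanishes exactly when the two states coincide, and controls the $L^2$-distance of the conserved variables, which is how a rate for the relative energy translates into the stated $L^2$-rates.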
The numerical solution of a linear Schr\"odinger equation in the semiclassical regime is very well understood on the torus $\mathbb{T}^d$. A raft of modern computational methods are precise and affordable, while conserving energy and resolving high oscillations very well. This, however, is far from the case for its solution in $\mathbb{R}^d$, a setting more suitable for many applications. In this paper we extend the theory of splitting methods to this end. The main idea is to derive the solution, using a spectral method, from a combination of solutions of the free Schr\"odinger equation and of linear scalar ordinary differential equations, in a symmetric Zassenhaus splitting. This necessitates a detailed analysis of certain orthonormal spectral bases on the real line and of their evolution under the free Schr\"odinger operator.
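For orientation, the kind of building block such splittings compose is easy to sketch: a Strang step alternating the exact flow of the potential part (a scalar linear ODE at each grid point) with the free Schr\"odinger flow (diagonal in Fourier space), here on a large periodic box as a crude stand-in for $\mathbb{R}$. This is a minimal illustration, not the paper's symmetric Zassenhaus splitting with spectral bases on the real line; all names and parameter values are ours.

\begin{verbatim}
import numpy as np

# Minimal Strang splitting step for i*eps*u_t = -(eps^2/2)*u_xx + V(x)*u,
# posed on a large periodic box as a stand-in for the real line
# (illustrative only; not the paper's Zassenhaus/real-line method).

def strang_step(u, dt, x, k, eps, V):
    # half-step of the potential part: exact flow of i*eps*u_t = V*u
    u = np.exp(-0.5j * dt * V(x) / eps) * u
    # full step of the free Schroedinger part, diagonal in Fourier space
    u = np.fft.ifft(np.exp(-0.5j * dt * eps * k**2) * np.fft.fft(u))
    # second half-step of the potential part
    return np.exp(-0.5j * dt * V(x) / eps) * u

# usage: a semiclassical Gaussian wave packet in a harmonic potential
eps = 1e-2
n, L = 4096, 32.0
x = -L / 2 + L * np.arange(n) / n
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
V = lambda x: 0.5 * x**2
u = np.exp(-x**2 / (2 * eps) + 0.5j * x / eps)
u /= np.sqrt(np.sum(np.abs(u)**2) * (L / n))
for _ in range(100):
    u = strang_step(u, 1e-3, x, k, eps, V)
\end{verbatim}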
We show that a specific skew-symmetric form of hyperbolic problems leads to energy conservation and an energy bound. Next, the compressible Euler equations are transformed to this skew-symmetric form, and it is explained how to obtain an energy estimate. Finally, we show that the new formulation leads to energy-stable and energy-conserving discrete approximations if the scheme is formulated in summation-by-parts form.
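A one-dimensional scalar analogue (ours, purely for illustration) shows the mechanism. For $u_t+\tfrac12(au)_x+\tfrac12\,a\,u_x=0$ on $(0,1)$, multiplying by $u$, integrating, and integrating $\int u\,(au)_x\,\mathrm{d}x$ by parts gives

\[
\frac{\mathrm{d}}{\mathrm{d}t}\int_0^1 u^2\,\mathrm{d}x
=-\int_0^1\big(u\,(au)_x+a\,u\,u_x\big)\,\mathrm{d}x
=-\big[a\,u^2\big]_0^1,
\]

so the energy changes only through boundary terms: it is conserved with periodic boundaries and bounded under suitable boundary conditions. Summation-by-parts operators mimic precisely this integration-by-parts step discretely, which is why the skew-symmetric form carries over to the scheme.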
Uncertainty quantification plays an important role in problems that involve inferring a parameter of an initial value problem from observations of the solution. Conrad et al.\ (\textit{Stat.\ Comput.}, 2017) proposed randomisation of deterministic time integration methods as a strategy for quantifying uncertainty due to the unknown time discretisation error. We consider this strategy for systems that are described by deterministic, possibly time-dependent operator differential equations defined on a Banach space or a Gelfand triple. Our main results are strong error bounds on the random trajectories measured in Orlicz norms, proven under a weaker assumption on the local truncation error of the underlying deterministic time integration method. Our analysis establishes the theoretical validity of randomised time integration for differential equations in infinite-dimensional settings.
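A minimal finite-dimensional sketch of the randomisation strategy, assuming (illustratively) that the perturbation standard deviation scales like the local truncation error of the underlying method, i.e. $h^{3/2}$ for explicit Euler; the constant \texttt{sigma} and all names are ours, and the paper's setting is infinite-dimensional.

\begin{verbatim}
import numpy as np

# Sketch of randomised explicit Euler in the spirit of Conrad et al. (2017):
# after each deterministic step, add a mean-zero Gaussian perturbation whose
# standard deviation matches the order of the local truncation error
# (h^{3/2} for the first-order Euler method). Constants are illustrative.

def randomised_euler(f, u0, t_span, h, sigma=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    t0, t1 = t_span
    u = np.array(u0, dtype=float)
    traj = [u.copy()]
    for _ in range(int(round((t1 - t0) / h))):
        u = u + h * f(u) + sigma * h**1.5 * rng.standard_normal(u.shape)
        traj.append(u.copy())
    return np.array(traj)

# usage: an ensemble of randomised trajectories for u' = -u, u(0) = 1
samples = [randomised_euler(lambda u: -u, [1.0], (0.0, 1.0), h=0.01)
           for _ in range(20)]
\end{verbatim}

The spread of the ensemble at a given time is then a proxy for the uncertainty due to time discretisation error.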
We develop an \textit{a posteriori} error analysis for the time of the first occurrence of an event, specifically, the time at which a functional of the solution to a partial differential equation (PDE) first achieves a threshold value on a given time interval. This novel quantity of interest (QoI) differs from classical QoIs, which are modeled as bounded linear (or nonlinear) functionals. Taylor's theorem and an adjoint-based \textit{a posteriori} analysis are used to derive computable and accurate error estimates for semi-linear parabolic and hyperbolic PDEs. The accuracy of the error estimates is demonstrated through numerical solutions of the one-dimensional heat equation and the linearized shallow water equations (SWE), representing the parabolic and hyperbolic cases, respectively.
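Schematically (in our notation, not necessarily the paper's): if $Q(t)$ is the true functional, $Q_h$ its computed counterpart, and the event times satisfy $Q(T^{*})=R=Q_h(T^{*}_h)$ for a threshold $R$, then a first-order Taylor expansion of $Q$ about $T^{*}_h$ yields

\[
T^{*}-T^{*}_h\;\approx\;-\,\frac{Q(T^{*}_h)-Q_h(T^{*}_h)}{Q'(T^{*}_h)},
\]

so the event-time error is, to leading order, the classical functional error at the computed event time divided by the rate of change of the functional there, and the numerator is exactly the quantity that adjoint-based \textit{a posteriori} analysis estimates.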
We construct a space-time parallel method for solving parabolic partial differential equations by coupling the Parareal algorithm in time with overlapping domain decomposition in space. The goal is to obtain a discretization consisting of "local" problems that can be solved efficiently on parallel computers. However, this introduces significant sources of error that must be evaluated. Reformulating the original Parareal algorithm as a variational method and implementing a finite element discretization in space enables an adjoint-based a posteriori error analysis to be performed. Through an appropriate choice of adjoint problems and residuals, the error analysis distinguishes between errors arising due to the temporal and spatial discretizations, as well as between errors arising due to incomplete Parareal iterations and incomplete iterations of the domain decomposition solver. We first develop an error analysis for the Parareal method applied to parabolic partial differential equations, and then refine this analysis to the case where the associated spatial problems are solved using overlapping domain decomposition. These constitute our Time Parallel Algorithm (TPA) and Space-Time Parallel Algorithm (STPA), respectively. Numerical experiments demonstrate the accuracy of the estimator for both algorithms and the interaction between distinct components of the error.
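The temporal backbone is easy to sketch in its classical form. The Parareal update $U^{n+1}_{k+1}=G(U^{n}_{k+1})+F(U^{n}_{k})-G(U^{n}_{k})$ corrects a cheap coarse propagator $G$ with a fine propagator $F$ that can be applied to all time windows in parallel; the sketch below uses illustrative names and step counts, and omits the paper's variational reformulation, finite element discretization, and domain decomposition.

\begin{verbatim}
import numpy as np

# Minimal classical Parareal iteration for u' = f(u) (illustrative only).

def euler(f, u, dt, steps):
    for _ in range(steps):
        u = u + (dt / steps) * f(u)
    return u

def parareal(f, u0, t1, n_windows, n_iters, fine_steps=100):
    dt = t1 / n_windows
    G = lambda u: euler(f, u, dt, 1)            # cheap coarse propagator
    F = lambda u: euler(f, u, dt, fine_steps)   # expensive fine propagator
    U = [u0]
    for n in range(n_windows):                  # initial coarse sweep
        U.append(G(U[-1]))
    for _ in range(n_iters):
        Fk = [F(U[n]) for n in range(n_windows)]  # parallel in practice
        Gk = [G(U[n]) for n in range(n_windows)]
        for n in range(n_windows):                # sequential correction
            U[n + 1] = G(U[n]) + Fk[n] - Gk[n]
    return U

# usage: u' = -u on [0, 1]
U = parareal(lambda u: -u, np.array([1.0]), 1.0, n_windows=10, n_iters=3)
\end{verbatim}

Stopping after a fixed number of iterations is precisely the source of the "incomplete Parareal iteration" error that the a posteriori analysis isolates.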
High-order implicit shock tracking is a new class of numerical methods for approximating solutions of conservation laws with non-smooth features. These methods align elements of the computational mesh with non-smooth features to represent them perfectly, allowing high-order basis functions to approximate smooth regions of the solution without the need for nonlinear stabilization, which leads to accurate approximations on traditionally coarse meshes. The hallmark of these methods is the underlying optimization formulation, whose solution is a feature-aligned mesh and the corresponding high-order approximation to the flow; the key challenge is robustly solving this central optimization problem. In this work, we develop a robust optimization solver for high-order implicit shock tracking methods so that they can be reliably used to simulate complex, high-speed, compressible flows in multiple dimensions. The proposed method integrates practical robustness measures into a sequential quadratic programming (SQP) method, including dimension- and order-independent simplex element collapses, mesh smoothing, and element-wise solution re-initialization, which prove to be necessary to reliably track complex discontinuity surfaces, such as curved and reflecting shocks, shock formation, and shock-shock interaction. A series of nine numerical experiments -- including two- and three-dimensional compressible flows with complex discontinuity surfaces -- is used to demonstrate: 1) the robustness of the solver, 2) that the meshes produced are of high quality and track continuous, non-smooth features in addition to discontinuities, 3) that the method achieves the optimal convergence rate of the underlying discretization even for flows containing discontinuities, and 4) that the method produces highly accurate solutions on extremely coarse meshes relative to approaches based on shock capturing.
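Schematically, the central optimization problem in this class of methods couples the discrete flow solution $\mathbf{u}$ with the mesh coordinates $\mathbf{x}$, e.g.

\[
\min_{\mathbf{u},\,\mathbf{x}}\ \tfrac12\,\big\|\mathbf{r}^{e}(\mathbf{u},\mathbf{x})\big\|_2^2+\kappa\,m(\mathbf{x})
\qquad\text{subject to}\qquad \mathbf{r}(\mathbf{u},\mathbf{x})=\mathbf{0},
\]

where $\mathbf{r}$ is the discrete residual, $\mathbf{r}^{e}$ an enriched residual acting as a discontinuity indicator, and $m$ a mesh-quality penalty with weight $\kappa$; the symbols here are illustrative. The SQP solver repeatedly linearizes this problem, and the robustness measures above keep its iterates (mesh and solution) well posed along the way.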
DeepONets have recently been proposed as a framework for learning nonlinear operators mapping between infinite-dimensional Banach spaces. We analyze DeepONets and prove estimates on the resulting approximation and generalization errors. In particular, we extend the universal approximation property of DeepONets to include measurable mappings in non-compact spaces. By decomposing the error into encoding, approximation and reconstruction errors, we prove both lower and upper bounds on the total error, relating it to the spectral decay properties of the covariance operators associated with the underlying measures. We derive almost optimal error bounds with very general affine reconstructors and with random sensor locations, as well as bounds on the generalization error, using covering number arguments. We illustrate our general framework with four prototypical examples of nonlinear operators, namely those arising in a nonlinear forced ODE, an elliptic PDE with variable coefficients, and nonlinear parabolic and hyperbolic PDEs. While the approximation of arbitrary Lipschitz operators by DeepONets to accuracy $\epsilon$ is argued to suffer from a "curse of dimensionality" (requiring neural networks of size exponential in $1/\epsilon$), we rigorously prove that, for all the above concrete examples of interest, DeepONets can break this curse of dimensionality, achieving accuracy $\epsilon$ with neural networks whose size grows only algebraically in $1/\epsilon$. Thus, we demonstrate the efficient approximation of a potentially large class of operators with this machine learning framework.
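The architecture itself is compact: a branch network encodes the input function through its values at $m$ fixed sensor locations, a trunk network encodes the query point, and the output is their inner product. The sketch below is a minimal instance; widths, depths, and the activation are illustrative.

\begin{verbatim}
import torch
import torch.nn as nn

# Minimal DeepONet sketch: branch net for sensor values, trunk net for the
# query point, inner product as output. Sizes are illustrative.

class DeepONet(nn.Module):
    def __init__(self, m, dim_y=1, p=64, width=128):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(m, width), nn.Tanh(), nn.Linear(width, p))
        self.trunk = nn.Sequential(
            nn.Linear(dim_y, width), nn.Tanh(), nn.Linear(width, p))

    def forward(self, u_sensors, y):
        # u_sensors: (batch, m) sensor values; y: (batch, dim_y) query points
        return (self.branch(u_sensors) * self.trunk(y)).sum(-1, keepdim=True)

# usage: evaluate an untrained network on random data
net = DeepONet(m=100)
out = net(torch.randn(8, 100), torch.rand(8, 1))
\end{verbatim}

Roughly, the number of sensors $m$ and the inner-product rank $p$ are the quantities that the encoding and reconstruction errors in the analysis track.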
We consider the inverse problem of reconstructing the boundary curve of a cavity embedded in a bounded domain. The problem is formulated in two dimensions for the wave equation. We combine the Laguerre transform with the integral equation method to reduce the inverse problem to a system of boundary integral equations. We propose an iterative scheme that linearizes the equation using the Fr\'echet derivative of the forward operator. The application of special quadrature rules results in an ill-conditioned linear system, which we solve using Tikhonov regularization. The numerical results show that the proposed method produces accurate and stable reconstructions.
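The final regularization step is standard and easy to sketch: given an ill-conditioned linear system $A\mathbf{x}=\mathbf{b}$, Tikhonov regularization solves the penalized normal equations $(A^{\top}A+\lambda I)\mathbf{x}=A^{\top}\mathbf{b}$. The matrix and parameter values below are illustrative, not those arising from the boundary integral equations.

\begin{verbatim}
import numpy as np

# Tikhonov regularisation sketch: minimise ||A x - b||^2 + lam * ||x||^2
# via the normal equations (A^T A + lam I) x = A^T b. Illustrative only.

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# usage: an ill-conditioned Hilbert matrix with slightly noisy data
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
b = A @ np.ones(n) + 1e-8 * np.random.default_rng(0).standard_normal(n)
x = tikhonov(A, b, lam=1e-10)
\end{verbatim}

The regularization parameter $\lambda$ trades data fidelity against stability of the reconstruction.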
Moment methods are an important means of density estimation, but they generally depend strongly on the choice of feasible functions, which severely affects their performance. We propose a non-classical parameterization for density estimation using sample moments that does not require such a choice. The parameterization is induced by the Kullback-Leibler distance, and its solution, which is proved to exist and to be unique subject to a simple prior that does not depend on the data, can be obtained by convex optimization. Simulation results demonstrate the performance of the proposed estimator in estimating multi-modal densities that are mixtures of different types of functions.
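Schematically, a moment-constrained Kullback-Leibler problem of this type reads (our notation; the paper's non-classical parameterization differs in its details)

\[
\min_{p\ge 0}\ \int p(x)\log\frac{p(x)}{p_0(x)}\,\mathrm{d}x
\qquad\text{subject to}\qquad \int x^{k}\,p(x)\,\mathrm{d}x=\hat{\mu}_{k},\quad k=0,\dots,n,
\]

where $p_0$ is the prior and $\hat{\mu}_{k}$ are the sample moments; the convex objective and linear constraints are what make the problem tractable by convex optimization.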
This paper uses the concept of algorithmic efficiency to present a unified theory of intelligence. Intelligence is defined informally, formally, and computationally. I introduce the concept of Dimensional complexity in algorithmic efficiency and deduce that an optimally efficient algorithm has zero Time complexity, zero Space complexity, and infinite Dimensional complexity. This algorithm is then used to generate the number line.