The quantitative characterization of the evolution of the error distribution (as the step size tends to zero) is a fundamental problem in the analysis of stochastic numerical methods. In this paper, we address this problem by proving that the error of numerical methods for linear stochastic differential equations satisfies limit theorems and a large deviation principle. To the best of our knowledge, this is the first quantitative characterization of the evolution of the error distribution of a stochastic numerical method. As an application, we provide a new perspective on the superiority of symplectic methods for stochastic Hamiltonian systems in long-time computation. Specifically, taking the linear stochastic oscillator as the test equation, we show that in long-time computation the probability that the error deviates from its typical value is smaller for symplectic methods than for non-symplectic ones, which reveals that stochastic symplectic methods are more stable than non-symplectic methods.
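The long-time contrast between symplectic and non-symplectic integrators can be seen numerically. The sketch below is an illustration, not the paper's method: it integrates the linear stochastic oscillator $dX = V\,dt$, $dV = -X\,dt + \sigma\,dW$ (for which the exact mean energy grows linearly, $\mathbb{E}[X^2+V^2] = X_0^2+V_0^2+\sigma^2 t$) with the non-symplectic Euler--Maruyama scheme and with a symplectic Euler scheme; all parameter values are assumptions.

```python
import numpy as np

# Linear stochastic oscillator: dX = V dt, dV = -X dt + sigma dW.
# Exact solution: E[X^2 + V^2] = X0^2 + V0^2 + sigma^2 * t (linear growth).
rng = np.random.default_rng(0)
sigma, h, T, M = 1.0, 0.01, 50.0, 2000   # illustrative parameter choices
n = int(T / h)

def mean_energy(symplectic):
    x, v = np.ones(M), np.zeros(M)       # X0 = 1, V0 = 0 on M sample paths
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(h), M)
        if symplectic:
            v = v - h * x + sigma * dW   # symplectic Euler: momentum first,
            x = x + h * v                # then position from the NEW momentum
        else:
            # Euler-Maruyama: both updates from the old state
            x, v = x + h * v, v - h * x + sigma * dW
    return float(np.mean(x**2 + v**2))

exact = 1.0 + sigma**2 * T               # = 51 at T = 50
e_sym = mean_energy(True)                # stays close to the linear growth
e_em = mean_energy(False)                # drifts well above it
```

The Euler--Maruyama energy obeys $\mathbb{E}_{n+1} = (1+h^2)\mathbb{E}_n + \sigma^2 h$ and therefore grows geometrically in excess of the true linear rate, while the symplectic update tracks the linear growth up to $O(h^2)$ modulations.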
Numerical models are now widely used in most engineering fields to simulate the behaviour of complex systems, such as power plants or wind turbines in the energy sector. These models are nevertheless affected by uncertainties of different natures (numerical, epistemic), which can compromise the reliability of their predictions. We develop a new method for quantifying conditional parameter uncertainty within a chain of two numerical models in the context of multiphysics simulation. More precisely, we aim to calibrate the parameters $\theta$ of the second model of the chain conditionally on the value of the parameters $\lambda$ of the first model, assuming the probability distribution of $\lambda$ is known. This conditional calibration is carried out from the available experimental data of the second model. In doing so, we aim to quantify as accurately as possible the impact of the uncertainty of $\lambda$ on the uncertainty of $\theta$. To perform this conditional calibration, we set out a nonparametric Bayesian formalism to estimate the functional dependence between $\theta$ and $\lambda$, denoted by $\theta(\lambda)$. First, each component of $\theta(\lambda)$ is assumed to be the realization of a Gaussian process prior. Then, if the second model is written as a linear function of $\theta(\lambda)$, the Bayesian machinery allows us to compute analytically the posterior predictive distribution of $\theta(\lambda)$ for any set of realizations of $\lambda$. The effectiveness of the proposed method is illustrated on several analytical examples.
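The Gaussian-process step admits a closed form that can be sketched in a few lines. The toy below is not the paper's calibration pipeline: it assumes a scalar $\theta(\lambda)$ observed directly with noise (a degenerate case of a second model that is linear in $\theta$), and computes the analytic posterior predictive mean and covariance under a squared-exponential prior; the true function $\sin(\lambda)$ and all hyperparameters are assumptions.

```python
import numpy as np

def rbf(a, b, ell=0.5, s2=1.0):
    # Squared-exponential kernel k(a, b) = s2 * exp(-(a - b)^2 / (2 ell^2))
    return s2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(1)
lam = np.linspace(0.0, 2 * np.pi, 15)        # realizations of lambda
noise = 1e-2                                  # observation noise variance
y = np.sin(lam) + rng.normal(0.0, np.sqrt(noise), lam.size)  # noisy theta(lambda)

lam_star = np.linspace(0.0, 2 * np.pi, 50)   # prediction points
K = rbf(lam, lam) + noise * np.eye(lam.size)
Ks = rbf(lam, lam_star)

# Analytic Gaussian posterior predictive of theta(lambda_star)
post_mean = Ks.T @ np.linalg.solve(K, y)
post_cov = rbf(lam_star, lam_star) - Ks.T @ np.linalg.solve(K, Ks)

err = float(np.max(np.abs(post_mean - np.sin(lam_star))))
```

The same conjugacy is what makes the conditional calibration tractable: because everything stays Gaussian, no sampling is needed to propagate the uncertainty of $\lambda$ into $\theta(\lambda)$.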
We study space--time isogeometric discretizations of the linear acoustic wave equation that use B-splines of arbitrary degree $p$, both in space and time. We propose a space--time variational formulation that is obtained by adding a non-consistent penalty term of order $2p+2$ to the bilinear form coming from integration by parts. This formulation, when discretized with tensor-product spline spaces with maximal regularity in time, is unconditionally stable: the mesh size in time is not constrained by the mesh size in space. We give extensive numerical evidence for the good stability, approximation, dissipation and dispersion properties of the stabilized isogeometric formulation, comparing against stabilized finite element schemes, for a range of wave propagation problems with constant and variable wave speed.
We present a reduced basis stochastic Galerkin method for partial differential equations with random inputs. In this method, the reduced basis methodology is integrated into the stochastic Galerkin method, resulting in a significant reduction in the cost of solving the Galerkin system. To reduce the main cost, which lies in the matrix-vector manipulations of our reduced basis stochastic Galerkin approach, the secant method is applied to identify the number of reduced basis functions. We present a general mathematical framework for the methodology, validate its accuracy, and demonstrate its efficiency with numerical experiments.
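The secant method invoked above is the standard derivative-free root-finding iteration. As a generic illustration only (a toy target function, not the paper's reduced-basis selection criterion):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant iteration x_{k+1} = x_k - f(x_k)(x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:          # stop once the residual is negligible
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)  # converges to sqrt(2)
```

Unlike Newton's method it needs no derivative, which is convenient when the objective (here, hypothetically, an error indicator as a function of the basis size) is only available through evaluations.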
We develop a numerical method for the Westervelt equation, an important equation in nonlinear acoustics, in the form where the attenuation is represented by a class of non-local in time operators. A semi-discretisation in time based on the trapezoidal rule and A-stable convolution quadrature is stated and analysed. Existence and regularity analysis of the continuous equations informs the stability and error analysis of the semi-discrete system. The error analysis includes the consideration of the singularity at $t = 0$ which is addressed by the use of a correction in the numerical scheme. Extensive numerical experiments confirm the theory.
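Convolution quadrature can be illustrated on the fractional integral $I^\alpha f$, one of the simplest non-local-in-time operators. The sketch below uses backward-Euler CQ rather than the trapezoidal rule analysed in the paper, because its weights admit a one-line recursion (they are the Taylor coefficients of $(1-\zeta)^{-\alpha}$); all parameter values are assumptions.

```python
import numpy as np
from math import gamma

# Backward-Euler convolution quadrature for the fractional integral
# (I^alpha f)(t) = 1/Gamma(alpha) * int_0^t (t - s)^(alpha - 1) f(s) ds.
alpha, T, N = 0.5, 1.0, 200
h = T / N

# CQ weights = Taylor coefficients of (1 - zeta)^(-alpha)
w = np.empty(N + 1)
w[0] = 1.0
for j in range(1, N + 1):
    w[j] = w[j - 1] * (j - 1 + alpha) / j

# Test input f(t) = 1, for which (I^alpha 1)(t) = t^alpha / Gamma(1 + alpha)
f = np.ones(N + 1)
approx = np.array([h**alpha * np.dot(w[: n + 1], f[n::-1]) for n in range(N + 1)])
exact = (h * np.arange(N + 1)) ** alpha / gamma(1.0 + alpha)
err = float(np.max(np.abs(approx - exact)))
```

The largest error sits near $t = 0$, consistent with the abstract's point that the singularity at $t=0$ needs special treatment (there via a correction term) to recover the full convergence order.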
Effective application of mathematical models to interpret biological data and make accurate predictions often requires that model parameters are identifiable. Approaches to assess the so-called structural identifiability of models are well established for ordinary differential equation models, yet there are no commonly adopted approaches for assessing the structural identifiability of the partial differential equation (PDE) models that are needed to capture the spatial features inherent to many phenomena. The differential algebra approach to structural identifiability has recently been demonstrated to be applicable to several specific PDE models. In this brief article, we present a general methodology for performing structural identifiability analysis on partially observed linear reaction-advection-diffusion (RAD) PDE models. We show that the differential algebra approach can always, in theory, be applied to linear RAD models. Moreover, despite the perceived complexity introduced by the addition of advection and diffusion terms, passing from a non-spatial model to its spatial analogue cannot decrease structural identifiability. Finally, we show that our approach can also be applied to a class of nonlinear PDE models that are linear in the unobserved variables, and we conclude by discussing future possibilities and the computational cost of performing structural identifiability analysis on more general PDE models in mathematical biology.
The convergence of the first-order Euler scheme, and of an approximative variant thereof, is established, along with convergence rates, for rough differential equations driven by c\`adl\`ag paths satisfying a suitable criterion, namely the so-called Property (RIE), along time discretizations with vanishing mesh size. This property is then verified for almost all sample paths of Brownian motion, It\^o processes, L\'evy processes and general c\`adl\`ag semimartingales, as well as for the driving signals of both mixed and rough stochastic differential equations, relative to various time discretizations. Consequently, we obtain pathwise convergence in $p$-variation of the Euler--Maruyama scheme for stochastic differential equations driven by these processes.
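The pathwise convergence of the Euler--Maruyama scheme can be checked numerically on geometric Brownian motion, for which the exact solution along a given Brownian path is known in closed form. This toy check is an illustration only (parameters assumed, strong $L^1$ error measured at the final time); every resolution is driven by the same Brownian increments, aggregated from a fine grid.

```python
import numpy as np

# Geometric Brownian motion dX = mu X dt + sigma X dW, X(0) = X0, with the
# pathwise exact solution X(T) = X0 * exp((mu - sigma^2/2) T + sigma W(T)).
rng = np.random.default_rng(42)
mu, sigma, X0, T = 0.05, 0.2, 1.0, 1.0
M, n_fine = 2000, 1024
dW = rng.normal(0.0, np.sqrt(T / n_fine), (M, n_fine))
X_exact = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum(axis=1))

def strong_error(n_steps):
    # Euler-Maruyama on n_steps, driven by the SAME paths (summed increments)
    h = T / n_steps
    inc = dW.reshape(M, n_steps, n_fine // n_steps).sum(axis=2)
    X = np.full(M, X0)
    for k in range(n_steps):
        X = X + mu * X * h + sigma * X * inc[:, k]
    return float(np.mean(np.abs(X - X_exact)))

errs = [strong_error(n) for n in (16, 64, 256)]  # h refined by 4 each level
```

With strong order $1/2$, refining $h$ by a factor of $4$ should roughly halve the error at each level, which the computed `errs` sequence reflects.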
The nonlinear Poisson-Boltzmann equation (NPBE) is an elliptic partial differential equation used in applications such as protein interactions and biophysical chemistry (among many others). It describes the nonlinear electrostatic potential of charged bodies submerged in an ionic solution. The kinetic motion of the solvent molecules introduces randomness into the shape of a protein, and thus a more accurate model that incorporates these random perturbations of the domain is analyzed to compute the statistics of quantities of interest of the solution. When the parameterization of the random perturbations is high-dimensional, this calculation is intractable, as it is subject to the curse of dimensionality. However, if the solution of the NPBE varies analytically with respect to the random parameters, the problem becomes amenable to techniques such as sparse grids and deep neural networks. In this paper, we show analyticity of the solution of the NPBE with respect to analytic perturbations of the domain by using the analytic implicit function theorem and the domain mapping method. Previous works have shown analyticity of solutions to linear elliptic equations, but not for nonlinear problems. We further show how to derive \emph{a priori} bounds on the size of the region of analyticity. This method is applied to the trypsin molecule to demonstrate that the convergence rates of the quantity of interest are consistent with the analyticity result. Furthermore, the approach developed here is sufficiently general to be applied to other nonlinear problems in uncertainty quantification.
The complexity class Quantum Statistical Zero-Knowledge ($\mathsf{QSZK}$) captures the computational difficulty of the time-bounded quantum state testing problem with respect to the trace distance, known as the Quantum State Distinguishability Problem (QSDP), introduced by Watrous (FOCS 2002). However, QSDP is in $\mathsf{QSZK}$ only within the constant polarizing regime, similar to its classical counterpart shown by Sahai and Vadhan (JACM 2003) via the polarization lemma (error reduction for SDP). Recently, Berman, Degwekar, Rothblum, and Vasudevan (TCC 2019) extended the $\mathsf{SZK}$ containment for SDP beyond the polarizing regime via time-bounded distribution testing problems with respect to the triangular discrimination and the Jensen-Shannon divergence. Our work introduces proper quantum analogs of these problems by defining quantum counterparts of the triangular discrimination. We investigate whether the quantum analogs behave similarly to their classical counterparts and examine the limitations of existing approaches to polarization with respect to quantum distances. These new $\mathsf{QSZK}$-complete problems improve the $\mathsf{QSZK}$ containments for QSDP beyond the polarizing regime and establish a simple $\mathsf{QSZK}$-hardness result for the quantum entropy difference problem (QEDP) defined by Ben-Aroya, Schwartz, and Ta-Shma (ToC 2010). Furthermore, we prove that QSDP with some exponentially small errors is in $\mathsf{PP}$, while the same problem without error is in $\mathsf{NQP}$.
We derive and analyse well-posed boundary conditions for the linear shallow water wave equation. The analysis is based on the energy method and it identifies the number, location and form of the boundary conditions so that the initial boundary value problem is well-posed. A finite volume method is developed based on the summation-by-parts framework with the boundary conditions implemented weakly using penalties. Stability is proven by deriving a discrete energy estimate analogous to the continuous estimate. The continuous and discrete analysis covers all flow regimes. Numerical experiments are presented verifying the analysis.
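The summation-by-parts/penalty machinery can be illustrated on its simplest analogue: 1D constant-coefficient advection $u_t + u_x = 0$, a toy stand-in for the shallow water system (the operators, penalty value, and parameters below are assumptions, not the paper's scheme). With $D = H^{-1}Q$, $Q + Q^\top = \mathrm{diag}(-1,0,\dots,0,1)$, and the inflow condition imposed weakly with penalty $\tau = 1$, the discrete energy $u^\top H u$ satisfies $\frac{d}{dt} u^\top H u = -u_0^2 - u_N^2 \le 0$, mimicking the continuous estimate.

```python
import numpy as np

# 1D advection u_t + u_x = 0 on [0,1], inflow at x = 0 with zero boundary data.
N = 400
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

# Second-order SBP pair: D = H^{-1} Q with Q + Q^T = diag(-1, 0, ..., 0, 1)
Hd = h * np.ones(N + 1); Hd[0] = Hd[-1] = h / 2.0
Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = Q / Hd[:, None]

# SAT: weak enforcement of u(0, t) = 0 with penalty tau = 1
A = -D.copy()
A[0, 0] -= 1.0 / Hd[0]

u = np.exp(-((x - 0.5) ** 2) / (2 * 0.1**2))  # Gaussian initial data
en0 = float(u @ (Hd * u))                      # discrete energy u^T H u

dt, nsteps = h, N                              # classical RK4 to T = 1
for _ in range(nsteps):
    k1 = A @ u
    k2 = A @ (u + 0.5 * dt * k1)
    k3 = A @ (u + 0.5 * dt * k2)
    k4 = A @ (u + dt * k3)
    u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

en1 = float(u @ (Hd * u))                      # nearly all energy has exited
```

By $T = 1$ the pulse has left through the right boundary, where the energy estimate dissipates it; the run stays stable without any artificial damping, which is the point of the provably stable penalty treatment.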
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.