We prove a weak rate of convergence of a fully discrete scheme for the stochastic Cahn--Hilliard equation with additive noise, where the spectral Galerkin method is used in space and the backward Euler method is used in time. Compared with stochastic partial differential equations of Allen--Cahn type, the error analysis here is considerably more involved due to the presence of the unbounded operator in front of the nonlinear term. To address this issue, we exploit a novel and direct approach that relies not on a Kolmogorov equation but on the integration-by-parts formula from Malliavin calculus. To the best of our knowledge, this is the first time that rates of weak convergence are established in the stochastic Cahn--Hilliard equation setting.
We introduce a priori Sobolev-space error estimates for the solution of nonlinear, and possibly parametric, PDEs using Gaussian process and kernel-based methods. The primary assumptions are: (1) a continuous embedding of the reproducing kernel Hilbert space of the kernel into a Sobolev space of sufficient regularity; and (2) the stability of the differential operator and the solution map of the PDE between corresponding Sobolev spaces. The proof is articulated around Sobolev norm error estimates for kernel interpolants and relies on the norm-minimizing property of the solution. The error estimates demonstrate dimension-benign convergence rates if the solution space of the PDE is smooth enough. We illustrate these points with applications to high-dimensional nonlinear elliptic PDEs and parametric PDEs. Although some recent machine learning methods have been presented as breaking the curse of dimensionality in solving high-dimensional PDEs, our analysis suggests a more nuanced picture: there is a trade-off between the regularity of the solution and the presence of the curse of dimensionality. Therefore, our results are in line with the understanding that the curse is absent when the solution is regular enough.
A class of stochastic Besov spaces $B^p L^2(\Omega;\dot H^\alpha(\mathcal{O}))$, $1\le p\le\infty$ and $\alpha\in[-2,2]$, is introduced to characterize the regularity of the noise in the semilinear stochastic heat equation \begin{equation*} {\rm d} u -\Delta u {\rm d} t =f(u) {\rm d} t + {\rm d} W(t) , \end{equation*} under the following conditions for some $\alpha\in(0,1]$, where $A=-\Delta$: $$ \Big\| \int_0^te^{-(t-s)A}{\rm d} W(s) \Big\|_{L^2(\Omega;L^2(\mathcal{O}))} \le C t^{\frac{\alpha}{2}} \quad\mbox{and}\quad \Big\| \int_0^te^{-(t-s)A}{\rm d} W(s) \Big\|_{B^\infty L^2(\Omega;\dot H^\alpha(\mathcal{O}))}\le C. $$ The conditions above are shown to be satisfied by both trace-class noises (with $\alpha=1$) and one-dimensional space-time white noises (with $\alpha=\frac12$). The latter would fail to satisfy the conditions with $\alpha=\frac12$ if the stochastic Besov norm $\|\cdot\|_{B^\infty L^2(\Omega;\dot H^\alpha(\mathcal{O}))}$ were replaced by the classical Sobolev norm $\|\cdot\|_{L^2(\Omega;\dot H^\alpha(\mathcal{O}))}$, and this often causes a reduction of the convergence order in the numerical analysis of the semilinear stochastic heat equation. In this article, the convergence of a modified exponential Euler method, with a spectral method for spatial discretization, is proved to have order $\alpha$ in both time and space for possibly nonsmooth initial data in $L^4(\Omega;\dot{H}^{\beta}(\mathcal{O}))$ with $\beta>-1$, by utilizing the real interpolation properties of the stochastic Besov spaces and a class of locally refined stepsizes to resolve the singularity of the solution at $t=0$.
Many real-world systems modeled by differential equations involve unknown or uncertain parameters. Standard approaches to the resulting parameter estimation inverse problems typically focus on estimating constants; yet some unobservable system parameters may vary with time without a known evolution model. In this work, we propose a novel approximation method inspired by the Fourier series to estimate time-varying parameters in deterministic dynamical systems modeled by ordinary differential equations. Using ensemble Kalman filtering in conjunction with Fourier series-based approximation models, we detail two possible implementation schemes for sequentially updating the time-varying parameter estimates given noisy observations of the system states. We demonstrate the capabilities of the proposed approach in estimating periodic parameters, both when the period is known and when it is unknown, as well as non-periodic time-varying parameters of various forms, with several computed examples using a forced harmonic oscillator. The results emphasize the influence of the frequencies and the number of approximation-model terms on the time-varying parameter estimates and the corresponding dynamical system predictions.
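A hedged sketch of the idea (not the paper's implementation; the oscillator, noise levels, and the truncation of the unknown damping to three Fourier coefficients are illustrative assumptions): augment the state with the Fourier coefficients of the time-varying parameter and run a joint ensemble Kalman filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forced harmonic oscillator x'' + theta(t) x' + x = cos(t) with unknown
# time-varying damping theta(t) = a0 + a1 cos(omega t) + b1 sin(omega t).
omega = 0.5  # assumed known frequency of the parameter variation

def rhs(t, s, coeffs):
    a0, a1, b1 = coeffs
    theta = a0 + a1 * np.cos(omega * t) + b1 * np.sin(omega * t)
    x, v = s
    return np.array([v, -theta * v - x + np.cos(t)])

def rk4(t, s, coeffs, dt):
    k1 = rhs(t, s, coeffs)
    k2 = rhs(t + dt / 2, s + dt / 2 * k1, coeffs)
    k3 = rhs(t + dt / 2, s + dt / 2 * k2, coeffs)
    k4 = rhs(t + dt, s + dt * k3, coeffs)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Synthetic observations of the position x; true coefficients (0.5, 0.3, 0).
dt, T = 0.05, 20.0
times = np.arange(0.0, T, dt)
s = np.array([1.0, 0.0])
truth, obs = [], []
for t in times:
    s = rk4(t, s, (0.5, 0.3, 0.0), dt)
    truth.append(s.copy())
    obs.append(s[0] + 0.01 * rng.standard_normal())

# EnKF on the augmented state z = (x, v, a0, a1, b1).
N = 100
Z = np.zeros((N, 5))
Z[:, 0] = 1.0 + 0.1 * rng.standard_normal(N)
Z[:, 2:] = rng.normal(0.0, 0.5, size=(N, 3))          # coefficient prior
R = 0.01**2
for k, t in enumerate(times):
    # Forecast: each member evolves with its own Fourier coefficients.
    for i in range(N):
        Z[i, :2] = rk4(t, Z[i, :2], Z[i, 2:], dt)
    Z[:, 2:] += 0.005 * rng.standard_normal((N, 3))   # small param jitter
    # Analysis: assimilate the scalar observation of x (perturbed obs).
    C = np.cov(Z.T, ddof=1)
    K = C[:, 0] / (C[0, 0] + R)                        # Kalman gain
    d = obs[k] + np.sqrt(R) * rng.standard_normal(N) - Z[:, 0]
    Z += np.outer(d, K)

est = Z[:, 2:].mean(axis=0)
print("estimated (a0, a1, b1):", est)
```

The parameter jitter plays the role of an artificial evolution model for the otherwise static coefficients, keeping the ensemble spread from collapsing.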
It is known that standard stochastic Galerkin methods encounter challenges when solving partial differential equations with high-dimensional random inputs, typically caused by the large number of stochastic basis functions required. It therefore becomes crucial to choose effective basis functions so that the dimension of the stochastic approximation space can be reduced. In this work, we focus on the stochastic Galerkin approximation associated with generalized polynomial chaos (gPC), and explore the gPC expansion based on the analysis of variance (ANOVA) decomposition. A concise form of the gPC expansion is presented for each component function of the ANOVA expansion, and an adaptive ANOVA procedure is proposed to construct the overall stochastic Galerkin system. Numerical results demonstrate the efficiency of the proposed adaptive ANOVA stochastic Galerkin method.
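To illustrate the mechanism (a toy sketch with made-up coefficients, not the paper's algorithm): for an orthonormal gPC basis, the variance splits over ANOVA component functions according to the supports of the multi-indices, and an adaptive criterion can retain only those dimensions whose first-order variance share exceeds a tolerance.

```python
# Toy ANOVA bookkeeping for an orthonormal gPC expansion in 3 random
# variables (the coefficient values below are made up for illustration).
coeffs = {
    (0, 0, 0): 1.0,    # mean term
    (1, 0, 0): 0.9,    # first-order in dimension 0
    (2, 0, 0): 0.3,
    (0, 1, 0): 0.1,    # first-order in dimension 1
    (1, 1, 0): 0.05,   # second-order interaction (0, 1)
    (0, 0, 1): 0.01,   # first-order in dimension 2
}

def support(alpha):
    """Dimensions in which the multi-index is active."""
    return tuple(j for j, a in enumerate(alpha) if a > 0)

# For an orthonormal basis, the total variance is the sum of squared
# non-constant coefficients (Parseval), and the ANOVA component for an
# index set S collects exactly the multi-indices supported on S.
var_total = sum(c**2 for a, c in coeffs.items() if support(a))
var_by_set = {}
for a, c in coeffs.items():
    S = support(a)
    if S:
        var_by_set[S] = var_by_set.get(S, 0.0) + c**2

# Adaptive ANOVA criterion: a dimension stays active (and may spawn
# higher-order interaction terms) only if its first-order variance share
# exceeds a tolerance.
tol = 0.01
active = [j for j in range(3) if var_by_set.get((j,), 0.0) / var_total > tol]
print("total variance:", var_total)
print("active dimensions:", active)
```

Here dimension 2 is dropped, so no interaction terms involving it would enter the stochastic Galerkin system, shrinking the stochastic approximation space.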
This short paper is concerned with the numerical reconstruction of small sources from boundary Cauchy data at a single frequency. We study a sampling method that determines the location of small sources in a very fast and robust way. Furthermore, the method can also compute the intensity of point sources, provided that the sources are well separated. A simple justification of the method is given using the Green representation formula and an asymptotic expansion of the radiated field for small-volume sources. The method is non-iterative, computationally cheap, and very simple to implement. Numerical examples are presented to illustrate its performance.
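A minimal sketch of a sampling indicator of this type for the 2D Helmholtz equation (the circular geometry, wavenumber, and single monopole source are illustrative assumptions, not the paper's setup): with Cauchy data of $u = a\,\Phi(\cdot,z_0)$ on the boundary, Green's representation gives $I(z) = 2|a|\,|\mathrm{Im}\,\Phi(z,z_0)| = \tfrac{|a|}{2}|J_0(k|z-z_0|)|$, which peaks at the source location.

```python
import numpy as np
from scipy.special import hankel1

k = 20.0                                   # wavenumber
z0 = np.array([0.3, 0.2])                  # hypothetical source position
a = 2.0                                    # source intensity

M = 128                                    # boundary quadrature points
phi = 2 * np.pi * np.arange(M) / M
bdry = np.stack([np.cos(phi), np.sin(phi)], axis=1)  # unit circle
nu = bdry.copy()                           # outward normal = position vector
w = 2 * np.pi / M                          # uniform trapezoid weight

def Phi(x, y):
    # 2D Helmholtz fundamental solution (i/4) H_0^(1)(k|x - y|).
    return 0.25j * hankel1(0, k * np.linalg.norm(x - y, axis=-1))

def dPhi_dnu(x, y, n):
    # Normal derivative in x; uses d/dr H_0^(1)(kr) = -k H_1^(1)(kr).
    d = x - y
    r = np.linalg.norm(d, axis=-1)
    return -0.25j * k * hankel1(1, k * r) * np.sum(d * n, axis=-1) / r

# Synthetic Cauchy data of the radiated field on the boundary.
u = a * Phi(bdry, z0)
dnu_u = a * dPhi_dnu(bdry, z0, nu)

def indicator(z):
    # I(z) = | int ( u d_nu conj(Phi(.,z)) - conj(Phi(.,z)) d_nu u ) ds |.
    v = Phi(bdry, z)
    dnu_v = dPhi_dnu(bdry, z, nu)
    return abs(w * np.sum(u * np.conj(dnu_v) - np.conj(v) * dnu_u))

# Sample I on a grid and take the peak as the reconstructed location.
g = np.linspace(-0.8, 0.8, 81)
vals = np.array([[indicator(np.array([gx, gy])) for gx in g] for gy in g])
iy, ix = np.unravel_index(np.argmax(vals), vals.shape)
z_rec = np.array([g[ix], g[iy]])
print("reconstructed source:", z_rec)
```

Since the indicator only requires one boundary integral per sampling point, the reconstruction is non-iterative and cheap, matching the abstract's description.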
Gaussian processes (GPs) are widely used tools in spatial statistics and machine learning. The formulae for the mean function and covariance kernel of a GP $v$ that is the image of another GP $u$ under a linear transformation $T$ acting on the sample paths of $u$ are well known, almost to the point of being folklore. However, these formulae are often used without rigorous attention to technical details, particularly when $T$ is an unbounded operator such as a differential operator, which is common in several modern applications. This note provides a self-contained proof of the claimed formulae for the case of a closed, densely-defined operator $T$ acting on the sample paths of a square-integrable stochastic process. Our proof technique relies upon Hille's theorem for the Bochner integral of a Banach-valued random variable.
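For orientation, the folklore formulae in question read, informally: if $u\sim\mathrm{GP}(m_u,k_u)$ and $v=Tu$, then

```latex
m_v(x) = (T m_u)(x), \qquad
k_v(x,x') = \bigl( T_x \, T_{x'} \, k_u \bigr)(x,x'),
```

where $T_x$ denotes $T$ applied to the kernel as a function of the indicated argument; for example, for $T=\frac{\rm d}{{\rm d}t}$ one obtains $k_v(s,t)=\partial_s\partial_t k_u(s,t)$. The note's contribution is to make rigorous sense of these identities when $T$ is closed and densely defined.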
Stochastic partial differential equations have been used in a variety of contexts to model the evolution of uncertain dynamical systems. In recent years, their application to geophysical fluid dynamics has increased massively. For judicious use in modelling fluid evolution, one needs to calibrate the amplitude of the noise to data. In this paper we address this requirement for the stochastic rotating shallow water (SRSW) model. This work is a continuation of [LvLCP23], where a data assimilation methodology was introduced for the SRSW model. The noise used in [LvLCP23] was introduced as an arbitrary random phase shift in Fourier space, which is not necessarily consistent with the uncertainty induced by a model reduction procedure. In this paper, we introduce a new method of noise calibration for the SRSW model which is compatible with the model reduction technique. The method is generic and can be applied to arbitrary stochastic parametrizations, and it is agnostic as to the source of the data (real or synthetic). It is based on a principal component analysis technique that generates the eigenvectors and eigenvalues of the covariance matrix of the stochastic parametrization. For the SRSW model covered in this paper, we calibrate the noise using the elevation variable of the model, as this is an observable easily obtained in practical applications, and we use synthetic data as input for the calibration procedure.
Recently, $(\beta,\gamma)$-Chebyshev functions, together with their zeros, have been introduced as a generalization of the classical Chebyshev polynomials of the first kind and their roots. They form a family of orthogonal functions on a subset of $[-1,1]$ that satisfies a three-term recurrence formula. In this paper we present further properties, which are shown to be consistent with various results on classical orthogonal polynomials. In addition, we prove a conjecture concerning the behavior of the Lebesgue constant associated with the roots of $(\beta,\gamma)$-Chebyshev functions in the corresponding orthogonality interval.
In this paper we consider the generalized inverse iteration for computing ground states of the Gross--Pitaevskii eigenvector problem (GPE). To that end, we prove explicit linear convergence rates that depend on the maximum eigenvalue in magnitude of a weighted linear eigenvalue problem. Furthermore, we show that this eigenvalue can be bounded by the first spectral gap of a linearized Gross--Pitaevskii operator, recovering the same rates as for linear eigenvector problems. With this we establish the first local convergence result for the basic inverse iteration for the GPE without damping. We also show how our findings directly generalize to extended inverse iterations, such as the Gradient Flow Discrete Normalized (GFDN) method proposed in [W. Bao, Q. Du, SIAM J. Sci. Comput., 25 (2004)] or the damped inverse iteration suggested in [P. Henning, D. Peterseim, SIAM J. Numer. Anal., 53 (2020)]. Our analysis also reveals why the inverse iteration for the GPE does not react favourably to spectral shifts: this empirical observation can now be explained by a blow-up of a weighting function that crucially enters the convergence rates. Our findings are illustrated by numerical experiments.
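A minimal sketch of the basic (undamped) inverse iteration analysed here, on a 1D finite-difference discretization with a harmonic trap (the domain, potential, and parameters are illustrative assumptions, not the paper's setting): at each step, solve the linearized problem at the current iterate and renormalize in $L^2$.

```python
import numpy as np

# Ground state of a 1D Gross--Pitaevskii problem
#   -u'' + V u + beta |u|^2 u = lam u,  u(0) = u(L) = 0,  ||u||_{L^2} = 1.
L, N, beta = 10.0, 200, 50.0
h = L / (N + 1)
x = np.linspace(h, L - h, N)
V = 0.5 * (x - L / 2) ** 2                      # harmonic trap

# Second-order finite-difference Laplacian with Dirichlet BC.
D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2

def normalize(u):
    return u / np.sqrt(h * np.sum(u**2))        # discrete L^2 normalization

u = normalize(np.exp(-(x - L / 2) ** 2))        # positive initial guess
for _ in range(200):
    A = -D2 + np.diag(V + beta * u**2)          # operator linearized at u
    u_new = normalize(np.linalg.solve(A, u))    # inverse iteration + renorm
    if np.max(np.abs(u_new - u)) < 1e-12:
        u = u_new
        break
    u = u_new

# Eigenvalue via the discrete Rayleigh-type quotient (||u|| = 1).
lam = h * u @ ((-D2) @ u + (V + beta * u**2) * u)
print("ground-state eigenvalue:", lam)
```

At a fixed point $u$, the solve returns a multiple of $u$ itself, so $u$ is an eigenfunction of the linearization at $u$, i.e. a ground state of the discrete GPE.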
The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case, and many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, and related fields) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.