Simulation of stochastic partial differential equations (SPDEs) on a general domain requires a discretization of the noise. In this paper, the noise is discretized by a piecewise linear interpolation. The error caused by this is analyzed in the context of a fully discrete finite element approximation of a semilinear stochastic reaction-advection-diffusion equation on a convex polygon. The noise is Gaussian, white in time and correlated in space. It is modeled as a standard cylindrical Wiener process on the reproducing kernel Hilbert space associated with the covariance kernel. The noise is assumed to extend to a larger polygon than the SPDE domain to allow for sampling by the circulant embedding method. The interpolation error is analyzed under mild assumptions on the kernel. The main tools used are Hilbert--Schmidt bounds of multiplication operators onto negative order Sobolev spaces and an error bound for the finite element interpolant in fractional Sobolev norms. Examples with covariance kernels encountered in applications are illustrated in numerical simulations using the FEniCS finite element software. Conclusions from the analysis include that interpolation of noise with Mat\'ern kernels does not cause an additional error, that there exist kernels for which the interpolation error dominates, and that generation of noise on a coarser mesh than that of the SPDE discretization does not always result in a loss of accuracy.
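The abstract's core pipeline (sample a correlated Gaussian field on a sampling grid, then transfer it to the solver mesh by piecewise linear interpolation) can be sketched in a 1D analogue. This is only an illustration under simplifying assumptions: a Matérn kernel with smoothness $\nu = 1/2$ (the exponential kernel), Cholesky sampling instead of circulant embedding, and `numpy.interp` standing in for the finite element interpolant; the correlation length `ell` and grid sizes are arbitrary choices.

```python
import numpy as np

def exponential_kernel(x, y, ell=0.2):
    # Matern covariance kernel with smoothness nu = 1/2
    return np.exp(-np.abs(x[:, None] - y[None, :]) / ell)

rng = np.random.default_rng(0)

# Sample the Gaussian field on a coarse grid via Cholesky factorization
# of the covariance matrix (a stand-in for circulant embedding).
x_coarse = np.linspace(0.0, 1.0, 33)
C = exponential_kernel(x_coarse, x_coarse)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x_coarse)))  # jitter for stability
w_coarse = L @ rng.standard_normal(len(x_coarse))

# Piecewise linear interpolation of the noise onto the (finer) solver grid.
x_fine = np.linspace(0.0, 1.0, 257)
w_fine = np.interp(x_fine, x_coarse, w_coarse)
```

The interpolant reproduces the sampled values exactly at the coarse nodes; the interpolation error analyzed in the paper lives between them.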
In this work we analyze the inverse problem of recovering the space-dependent potential coefficient in an elliptic/parabolic problem from distributed observation. We establish novel (weighted) conditional stability estimates under very mild conditions on the problem data. We then provide an error analysis of a standard reconstruction scheme based on the output least-squares formulation with Tikhonov regularization (by an $H^1$-seminorm penalty), which is discretized by the Galerkin finite element method with continuous piecewise linear finite elements in space (and the backward Euler method in time for parabolic problems). We present a detailed analysis of the discrete scheme and provide convergence rates in a weighted $L^2(\Omega)$ norm for the discrete approximations with respect to the exact potential. The error bounds depend explicitly on the noise level, the regularization parameter, and the discretization parameter(s). Under suitable conditions, we also derive error estimates in the standard $L^2(\Omega)$ and interior $L^2$ norms. The analysis employs sharp a priori error estimates and nonstandard test functions. Several numerical experiments are given to complement the theoretical analysis.
Echo State Networks (ESNs) are a type of Recurrent Neural Network that yields promising results in representing time series and nonlinear dynamical systems. Although they are equipped with a very efficient training procedure, Reservoir Computing strategies, such as the ESN, require the use of high-order networks, i.e., reservoirs with a large number of neurons, resulting in a number of states that is orders of magnitude higher than the number of model inputs and outputs. This not only makes the computation of a time step more costly, but may also pose robustness issues when applying ESNs to problems such as Model Predictive Control (MPC) and other optimal control problems. One way to circumvent this is through Model Order Reduction strategies such as the Proper Orthogonal Decomposition (POD) and its variants (POD-DEIM), whereby we find an equivalent lower-order representation of an already trained high-dimensional ESN. The objective of this work is to investigate and analyze the performance of POD methods in Echo State Networks, evaluating their effectiveness. To this end, we evaluate the Memory Capacity (MC) of the POD-reduced network in comparison to the original (full-order) ESN. We also perform experiments on two different numerical case studies: a NARMA10 difference equation and an oil platform containing two wells and one riser. The results show that there is little loss of performance when comparing the original ESN to its POD-reduced counterpart, and that the performance of a POD-reduced ESN tends to be superior to that of a normal ESN of the same size. We also attain speedups of around $80\%$ in comparison to the original ESN.
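The POD reduction described above can be sketched as follows: run a random reservoir, collect state snapshots, take the leading left singular vectors as the reduced basis, and project the reservoir update. This is a minimal sketch under assumed parameters (reservoir size, spectral radius, basis dimension `k` are all arbitrary); it keeps the full `tanh` evaluation, which the DEIM variant mentioned in the abstract would further approximate.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_in = 200, 1                                   # reservoir (state) size, input size
W = rng.uniform(-1.0, 1.0, (n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # set spectral radius to 0.9
W_in = rng.uniform(-1.0, 1.0, (n, n_in))

def run_esn(inputs):
    # Full-order reservoir update: x <- tanh(W x + W_in u)
    x, states = np.zeros(n), []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

X = run_esn(rng.standard_normal(500))              # snapshot matrix, shape (T, n)

# POD basis: leading left singular vectors of the snapshot matrix X^T.
U, s, _ = np.linalg.svd(X.T, full_matrices=False)
k = 20
V = U[:, :k]                                       # orthonormal POD basis, shape (n, k)

# Reduced dynamics on z = V^T x: z <- V^T tanh(W V z + W_in u).
W_V, W_in_r = W @ V, W_in
def run_reduced(inputs):
    z, traj = np.zeros(k), []
    for u in inputs:
        z = V.T @ np.tanh(W_V @ z + W_in_r @ np.atleast_1d(u))
        traj.append(z.copy())
    return np.array(traj)
```

The time-step cost drops because the recurrent matrix-vector products act on $k \ll n$ states; the singular values `s` indicate how much state variance the truncation discards.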
Many applications, such as system identification, classification of time series, direct and inverse problems in partial differential equations, and uncertainty quantification, lead to the question of approximation of a non-linear operator between metric spaces $\mathfrak{X}$ and $\mathfrak{Y}$. We study the problem of determining the degree of approximation of such operators on a compact subset $K_\mathfrak{X}\subset \mathfrak{X}$ using a finite amount of information. If $\mathcal{F}: K_\mathfrak{X}\to K_\mathfrak{Y}$, a well-established strategy to approximate $\mathcal{F}(F)$ for some $F\in K_\mathfrak{X}$ is to encode $F$ (respectively, $\mathcal{F}(F)$) in terms of a finite number $d$ (respectively, $m$) of real numbers. Together with appropriate reconstruction algorithms (decoders), the problem reduces to the approximation of $m$ functions on a compact subset of a high dimensional Euclidean space $\mathbb{R}^d$, equivalently, the unit sphere $\mathbb{S}^d$ embedded in $\mathbb{R}^{d+1}$. The problem is challenging because $d$, $m$, as well as the complexity of the approximation on $\mathbb{S}^d$ are all large, and it is necessary to estimate the accuracy keeping track of the inter-dependence of all the approximations involved. In this paper, we establish constructive methods to do this efficiently; i.e., with the constants involved in the estimates on the approximation on $\mathbb{S}^d$ being $\mathcal{O}(d^{1/6})$. We study different smoothness classes for the operators, and also propose a method for approximation of $\mathcal{F}(F)$ using only information in a small neighborhood of $F$, resulting in an effective reduction in the number of parameters involved.
Using techniques developed recently in the field of compressed sensing we prove new upper bounds for general (non-linear) sampling numbers of (quasi-)Banach smoothness spaces in $L^2$. In relevant cases such as mixed and isotropic weighted Wiener classes or Sobolev spaces with mixed smoothness, sampling numbers in $L^2$ can be upper bounded by best $n$-term trigonometric widths in $L^{\infty}$. We describe a recovery procedure based on $\ell^1$-minimization (basis pursuit denoising) using only $m$ function values with $m$ close to $n$. With this method, a significant gain in the rate of convergence compared to recently developed linear recovery methods is achieved. In this deterministic worst-case setting we see an additional speed-up of $n^{-1/2}$ compared to linear methods in the case of weighted Wiener spaces. For their quasi-Banach counterparts even arbitrary polynomial speed-up is possible. Surprisingly, our approach allows one to recover mixed smoothness Sobolev functions belonging to $S^r_pW(\mathbb{T}^d)$ on the $d$-torus with a logarithmically better error decay than any linear method can achieve when $1 < p < 2$ and $d$ is large. This effect is not present for isotropic Sobolev spaces.
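The $\ell^1$-minimization recovery step can be illustrated in the simplest compressed sensing setting: noiseless basis pursuit over a random Gaussian measurement matrix, solved as a linear program. This is a generic sketch, not the paper's trigonometric-dictionary setting; the dimensions `m`, `N`, sparsity `s`, and use of `scipy.optimize.linprog` are all illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, N, s = 30, 100, 4                         # measurements, dictionary size, sparsity

A = rng.standard_normal((m, N)) / np.sqrt(m)
c_true = np.zeros(N)
support = rng.choice(N, s, replace=False)
c_true[support] = rng.standard_normal(s)
y = A @ c_true                               # noiseless samples

# Basis pursuit  min ||c||_1  s.t.  A c = y,  as a linear program
# with the split c = c_plus - c_minus, both nonnegative.
res = linprog(c=np.ones(2 * N),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * N), method="highs")
c_hat = res.x[:N] - res.x[N:]
```

Basis pursuit *denoising*, as used in the abstract, replaces the equality constraint with $\|Ac - y\|_2 \le \eta$ to account for inexact samples; the noiseless LP above conveys the same mechanism.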
This paper makes three contributions. First, it generalizes the Lindeberg\textendash Feller and Lyapunov Central Limit Theorems to Hilbert spaces by way of $L^2$. Second, it generalizes these results to spaces in which sample failure and missingness can occur. Finally, it shows that satisfaction of the Lindeberg\textendash Feller and Lyapunov conditions in such spaces implies satisfaction of the conditions in the completely observed space, and shows how this guarantees the consistency of inferences from the partial functional data. These latter two results are especially important given the increasing attention to statistical inference with partially observed functional data. This paper goes beyond previous research by providing simple boundedness conditions which guarantee that \textit{all} inferences, as opposed to some proper subset of them, will be consistently estimated. This is shown primarily by aggregating conditional expectations with respect to the space of missingness patterns. This paper appears to be the first to apply this technique.
The Laplace-Beltrami problem on closed surfaces embedded in three dimensions arises in many areas of physics, including molecular dynamics (surface diffusion), electromagnetics (harmonic vector fields), and fluid dynamics (vesicle deformation). In particular, the Hodge decomposition of vector fields tangent to a surface can be computed by solving a sequence of Laplace-Beltrami problems. Such decompositions are very important in magnetostatic calculations and in various plasma and fluid flow problems. In this work we develop $L^2$-invertibility theory for the Laplace-Beltrami operator on piecewise smooth surfaces, extending earlier weak formulations and integral equation approaches on smooth surfaces. Furthermore, we reformulate the weak form of the problem as an interface problem with continuity conditions across edges of adjacent piecewise smooth panels of the surface. We then provide high-order numerical examples along surfaces of revolution to support our analysis, and discuss numerical extensions to general surfaces embedded in three dimensions.
In the present paper we initiate the challenging task of building a mathematically sound theory for Adaptive Virtual Element Methods (AVEMs). Within the realm of polygonal meshes, we restrict our analysis to triangular meshes with hanging nodes in 2d -- the simplest meshes with a systematic refinement procedure that preserves shape regularity and optimal complexity. A major challenge in the a posteriori error analysis of AVEMs is the presence of the stabilization term, which is of the same order as the residual-type error estimator but prevents the equivalence of the latter with the energy error. Under the assumption that any chain of recursively created hanging nodes has uniformly bounded length, we show that the stabilization term can be made arbitrarily small relative to the error estimator provided the stabilization parameter of the scheme is sufficiently large. This quantitative estimate leads to stabilization-free upper and lower a posteriori bounds for the energy error. This novel and crucial property of VEMs hinges on the largest subspace of continuous piecewise linear functions and the delicate interplay between its coarser scales and the finer ones of the VEM space. An important consequence for piecewise constant data is a contraction property between consecutive loops of AVEMs, which we also prove. Our results apply to $H^1$-conforming (lowest order) VEMs of any kind, including the classical and enhanced VEMs.
Polynomial spectral methods provide fast, accurate, and flexible solvers for broad ranges of PDEs with one bounded dimension, where the incorporation of general boundary conditions is well understood. However, automating extensions to domains with multiple bounded dimensions is challenging because of difficulties in implementing boundary conditions and imposing compatibility conditions at shared edges and corners. Past work has included various workarounds, such as the anisotropic inclusion of partial boundary data at shared edges or approaches that only work for specific boundary conditions. Here we present a general system for imposing boundary and compatibility conditions for elliptic equations on hypercubes. We take an approach based on the generalized tau method, which allows for a wide range of boundary conditions for many types of spectral methods. The generalized tau method has the distinct advantage that the specified polynomial residual determines the exact algebraic solution; afterwards, any stable numerical scheme will find the same result. We can, therefore, provide one-to-one comparisons to traditional collocation and Galerkin methods within the tau framework. As an essential requirement, we add specific tau corrections to the boundary conditions in addition to the bulk PDE. We then impose additional mutual compatibility conditions to ensure boundary conditions match at shared subsurfaces. Our approach works with general boundary conditions that commute on intersecting subsurfaces, including Dirichlet, Neumann, Robin, and any combination of these on all boundaries. The tau corrections and compatibility conditions can be fully isotropic and easily incorporated into existing solvers. We present the method explicitly for the Poisson equation in two and three dimensions and describe its extension to arbitrary elliptic equations (e.g. biharmonic) in any dimension.
We introduce an integral representation of the Monge-Amp\`ere equation, which leads to a new finite difference method based upon numerical quadrature. The resulting scheme is monotone and fits immediately into existing convergence proofs for the Monge-Amp\`ere equation with either Dirichlet or optimal transport boundary conditions. The use of higher-order quadrature schemes allows for substantial reduction in the component of the error that depends on the angular resolution of the finite difference stencil. This, in turn, allows for significant improvements in both stencil width and formal truncation error. The resulting schemes can achieve a formal accuracy that is arbitrarily close to $\mathcal{O}(h^2)$, which is the optimal consistency order for monotone approximations of second order operators. We present three different implementations of this method. The first two exploit the spectral accuracy of the trapezoid rule on uniform angular discretizations to allow for computation on a nearest-neighbors finite difference stencil over a large range of grid refinements. The third uses higher-order quadrature to produce superlinear convergence while simultaneously utilizing narrower stencils than other monotone methods. Computational results are presented in two dimensions for problems of various regularity.
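One identity of the kind that underlies such integral representations in two dimensions (used here as an illustrative assumption, not necessarily the paper's exact formulation) is, for a symmetric positive definite matrix $M$ standing in for $D^2u$, $\det(M)^{-1/2} = \frac{1}{2\pi}\int_0^{2\pi} \frac{dt}{v(t)^{\mathsf T} M\, v(t)}$ with $v(t)=(\cos t,\sin t)$, where $v^{\mathsf T} M v$ is a second directional derivative. The sketch below verifies it with the trapezoid rule on a uniform angular grid, whose spectral accuracy for smooth periodic integrands is the property the abstract exploits.

```python
import numpy as np

# A 2x2 symmetric positive definite matrix standing in for the Hessian D^2 u.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Uniform angular quadrature grid; for a smooth periodic integrand the
# trapezoid rule reduces to the mean over the grid and converges spectrally.
t = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
v = np.stack([np.cos(t), np.sin(t)])               # directions on the unit circle
quad_form = np.einsum("it,ij,jt->t", v, M, v)      # v^T M v at each angle
integral = np.mean(1.0 / quad_form)                # (1/2pi) * trapezoid quadrature

print(integral, 1.0 / np.sqrt(np.linalg.det(M)))
```

In a finite difference scheme the directional second derivatives $v^{\mathsf T} D^2u\, v$ are replaced by monotone stencil differences, so the angular quadrature error and the stencil's angular resolution become the two error components the abstract trades off.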
The flux-mortar mixed finite element method was recently developed for a general class of domain decomposition saddle point problems on non-matching grids. In this work we develop the method for Darcy flow using the multipoint flux approximation as the subdomain discretization. The subdomain problems involve solving positive definite cell-centered pressure systems. The normal flux on the subdomain interfaces is the mortar coupling variable, which plays the role of a Lagrange multiplier to weakly impose continuity of pressure. We present well-posedness and error analysis based on reformulating the method as a mixed finite element method with a quadrature rule. We develop a non-overlapping domain decomposition algorithm for the solution of the resulting algebraic system that reduces it to an interface problem for the flux-mortar, as well as an efficient interface preconditioner. A series of numerical experiments is presented illustrating the performance of the method on general grids, including applications to flow in complex porous media.