Numerical analysis for the stochastic Stokes equations remains challenging even though the corresponding deterministic equations are well understood. In particular, the pre-existing error estimates of finite element methods for the stochastic Stokes equations in the $L^\infty(0, T; L^2(\Omega; L^2))$ norm all suffer from order reduction with respect to the spatial discretization. The best convergence result obtained for these fully discrete schemes is only half-order in time and first-order in space, which is not optimal in space in the traditional sense. The objective of this article is to establish strong convergence of $O(\tau^{1/2}+ h^2)$ in the $L^\infty(0, T; L^2(\Omega; L^2))$ norm for approximating the velocity, and strong convergence of $O(\tau^{1/2}+ h)$ in the same norm for approximating the time integral of the pressure, where $\tau$ and $h$ denote the temporal step size and spatial mesh size, respectively. The error estimates are of optimal order for the spatial discretization considered in this article (the MINI element) and are consistent with the numerical experiments. The analysis is based on the fully discrete Stokes semigroup technique and corresponding new estimates.
Partial differential equations (PDEs) have become an essential tool for modeling complex physical systems. Such equations are typically solved numerically via mesh-based methods, such as finite element methods, which yield solutions over the spatial domain. However, obtaining these solutions is often prohibitively costly, limiting the feasibility of exploring the PDE parameter space. In this paper, we propose an efficient emulator that simultaneously predicts the solutions over the spatial domain, with theoretical justification of its uncertainty quantification. The novelty of the proposed method lies in the incorporation of the mesh node coordinates into the statistical model. In particular, the proposed method segments the mesh nodes into multiple clusters via a Dirichlet process prior and fits Gaussian process models with shared hyperparameters within each cluster. Most importantly, by revealing the underlying clustering structure, the proposed method can provide valuable insights into qualitative features of the resulting dynamics that can guide further investigation. Real examples demonstrate that the proposed method has smaller prediction errors than its main competitors, with competitive computation time, and identifies interesting clusters of mesh nodes with physical significance, such as those satisfying boundary conditions. An R package for the proposed methodology is provided in an open repository.
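As a loose illustration of the two-stage idea, and not the paper's actual model (which shares hyperparameters within clusters and emulates the dependence on PDE inputs), the following Python sketch clusters mesh node coordinates with a truncated Dirichlet-process mixture and fits a separate Gaussian process in each cluster; all data, kernels, and settings are toy stand-ins.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy stand-ins: 2-D mesh node coordinates and one solution value per node.
rng = np.random.default_rng(0)
nodes = rng.uniform(size=(500, 2))              # mesh node coordinates
y = np.sin(4 * nodes[:, 0]) + nodes[:, 1] ** 2  # hypothetical PDE solution

# Step 1: segment the mesh nodes with a (truncated) Dirichlet-process mixture.
dpm = BayesianGaussianMixture(
    n_components=10,  # truncation level of the Dirichlet process
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
)
labels = dpm.fit_predict(nodes)

# Step 2: fit a Gaussian process within each discovered cluster.
gps = {}
for k in np.unique(labels):
    mask = labels == k
    gps[k] = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(
        nodes[mask], y[mask]
    )
```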
We revisit the question of whether the strong law of large numbers (SLLN) holds uniformly in a rich family of distributions, culminating in a distribution-uniform generalization of the Marcinkiewicz-Zygmund SLLN. These results can be viewed as extensions of Chung's distribution-uniform SLLN to random variables with uniformly integrable $q^\text{th}$ absolute central moments for $0 < q < 2;\ q \neq 1$. Furthermore, we show that uniform integrability of the $q^\text{th}$ moment is both sufficient and necessary for the SLLN to hold uniformly at the Marcinkiewicz-Zygmund rate of $n^{1/q - 1}$. These proofs centrally rely on distribution-uniform analogues of some familiar almost sure convergence results including the Khintchine-Kolmogorov convergence theorem, Kolmogorov's three-series theorem, a stochastic generalization of Kronecker's lemma, and the Borel-Cantelli lemmas. The non-identically distributed case is also considered.
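For concreteness, one plausible formalization of the distribution-uniform Marcinkiewicz-Zygmund rate in the case $1 < q < 2$ (the paper's precise statement may differ) is the following: if $\mathcal{P}$ is a family of distributions whose $q^\text{th}$ absolute central moments are uniformly integrable and $\mu_P$ denotes the mean under $P$, then for every $\varepsilon > 0$,
\[ \sup_{P \in \mathcal{P}} P\left( \sup_{m \ge n} \, m^{1 - 1/q} \left| \frac{1}{m} \sum_{i=1}^{m} X_i - \mu_P \right| \ge \varepsilon \right) \to 0 \quad \text{as } n \to \infty, \]
i.e., the sample mean converges at the rate $n^{1/q - 1}$ uniformly over $\mathcal{P}$; for $0 < q < 1$, no centering is required.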
An adaptive method for parabolic partial differential equations that combines sparse wavelet expansions in time with adaptive low-rank approximations in the spatial variables is constructed and analyzed. The method is shown to converge and to satisfy complexity bounds similar to those of existing adaptive low-rank methods for elliptic problems, establishing its suitability for parabolic problems on high-dimensional spatial domains. The construction also yields computable, rigorous a posteriori error bounds for such problems. The results are illustrated by numerical experiments.
We propose a physics-constrained convolutional neural network (PC-CNN) to solve two types of inverse problems for PDEs that are nonlinear and vary in both space and time. In the first inverse problem, we are given data that are offset by a spatially varying systematic error (i.e., the bias, also known as the epistemic uncertainty). The task is to uncover the true state, which is the solution of the PDE, from the biased data. In the second inverse problem, we are given sparse information on the solution of a PDE. The task is to reconstruct the solution in space at high resolution. First, we present the PC-CNN, which constrains the PDE with a simple time-windowing scheme to handle sequential data. Second, we analyse the performance of the PC-CNN for uncovering solutions from biased data. We analyse both linear and nonlinear convection-diffusion equations, and the Navier-Stokes equations, which govern the spatiotemporally chaotic dynamics of turbulent flows. We find that the PC-CNN correctly recovers the true solution for a variety of biases, which are parameterised as non-convex functions. Third, we analyse the performance of the PC-CNN for reconstructing solutions from sparse data for the turbulent flow. We reconstruct the spatiotemporally chaotic solution on a high-resolution grid from only 2\% of the information contained in it. For both tasks, we further analyse the Navier-Stokes solutions. We find that the inferred solutions have physical spectral energy content, whereas those of traditional methods, such as interpolation, do not. This work opens opportunities for solving inverse problems with partial differential equations.
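As a rough sketch of the time-windowing idea, and not the paper's implementation (the residual below is a 1-D diffusion stand-in for the Navier-Stokes operator, and all names, weights, and window sizes are illustrative), the physics constraint can be evaluated on short windows of the predicted sequence and added to the data misfit:

```python
import numpy as np

def pde_residual(u, dx, dt, nu=0.01):
    """Residual of a 1-D diffusion equation u_t - nu * u_xx = 0 on a
    space-time window u of shape (steps + 1, points); a simple stand-in
    for the governing PDE constrained in the paper."""
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx ** 2
    return u_t - nu * u_xx

def windowed_physics_loss(pred, data, dx, dt, window=8, lam=1.0):
    """Data misfit plus PDE residual, with the residual evaluated on short
    time windows of the sequential prediction (the time-windowing scheme)."""
    data_term = np.mean((pred - data) ** 2)
    phys_term, n_windows = 0.0, 0
    for s in range(0, pred.shape[0] - window, window):
        r = pde_residual(pred[s:s + window + 1], dx, dt)
        phys_term += np.mean(r ** 2)
        n_windows += 1
    return data_term + lam * phys_term / max(n_windows, 1)
```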
This paper is concerned with the approximation of solutions to a class of second-order nonlinear abstract differential equations. The finite-dimensional approximate solutions of the given system are built with the aid of the projection operator. We investigate the connection between the approximate and exact solutions, and the question of convergence. Moreover, we define the Faedo-Galerkin (F-G) approximations and prove existence and convergence results. The results are obtained using the theory of cosine functions, the Banach fixed point theorem, and fractional powers of closed linear operators. Finally, an example illustrating the abstract formulation is provided.
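Schematically, assuming the simplified setting of a separable Hilbert space $H$ with orthogonal projections $P_n$ onto an increasing family of finite-dimensional subspaces $H_n$ (the paper's setting may be more general), the F-G approximation of a second-order problem $u''(t) = Au(t) + f(t, u(t))$, $u(0) = u_0$, $u'(0) = u_1$, solves the projected system
\[ u_n''(t) = A u_n(t) + P_n f(t, u_n(t)), \qquad u_n(0) = P_n u_0, \quad u_n'(0) = P_n u_1, \]
and the convergence question is whether $u_n \to u$ in a suitable norm as $n \to \infty$.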
We propose a new numerical domain decomposition method for solving elliptic equations on compact Riemannian manifolds. One advantage of this method is that it bypasses the need for global triangulations or grids on the manifolds. Additionally, it features a highly parallel iterative scheme. To verify its efficacy, we conduct numerical experiments on several $4$-dimensional manifolds, both with and without boundary.
Numerical solutions for flows in partially saturated porous media pose challenges related to the nonlinearity and elliptic-parabolic degeneracy of the governing Richards' equation. Iterative methods are therefore required to manage the complexity of the flow problem. Norms of successive corrections in the iterative procedure form sequences of positive numbers, so definitions of computational orders of convergence and theoretical results for abstract convergent sequences can be used to evaluate and compare different iterative methods. In this framework, we analyze Newton's method and the $L$-scheme for an implicit finite element method (FEM), as well as the $L$-scheme for an explicit finite difference method (FDM). We also investigate the effect of Anderson acceleration (AA) on both the implicit and the explicit $L$-schemes. For a two-dimensional test problem, we find that AA halves the number of iterations and makes the FEM scheme converge twice as fast. For the FDM approach, AA does not reduce the number of iterations and even increases the computational effort. Instead, being explicit, the FDM $L$-scheme without AA is faster than, and as accurate as, the FEM $L$-scheme with AA.
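For reference, the $L$-scheme referred to here is typically the following linearization of a backward-Euler step of size $\tau$ for Richards' equation $\partial_t \theta(\psi) = \nabla \cdot \big( K(\theta(\psi)) \nabla(\psi + z) \big)$ (shown in strong form; FEM or FDM spatial discretization is applied on top): given the previous time-step solution $\psi^{n-1}$ and the current iterate $\psi^{n,i}$, find $\psi^{n,i+1}$ satisfying
\[ L\big(\psi^{n,i+1} - \psi^{n,i}\big) + \theta(\psi^{n,i}) - \theta(\psi^{n-1}) = \tau \, \nabla \cdot \Big( K\big(\theta(\psi^{n,i})\big) \nabla\big(\psi^{n,i+1} + z\big) \Big), \]
where the stabilization constant $L \ge \sup_\psi \theta'(\psi)$ replaces the Jacobian used by Newton's method, trading quadratic convergence for robust linear convergence.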
In this paper we develop a classical algorithm of complexity $O(K \, 2^n)$ to simulate parametrized quantum circuits (PQCs) of $n$ qubits, where $K$ is the total number of one-qubit gates and two-qubit controlled gates. The algorithm is developed by explicitly constructing the $2$-sparse unitary matrices of order $2^n$ corresponding to arbitrary single-qubit and two-qubit controlled gates in an $n$-qubit system. Finally, we determine analytical expressions for the Hamiltonians of any such gate, and consequently obtain a local Hamiltonian decomposition of any PQC. All results are validated with numerical simulations.
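A minimal sketch (the least-significant-bit qubit convention and all names are our own) of why a one-qubit gate acts as a $2$-sparse unitary of order $2^n$: each amplitude couples only to the amplitude whose target bit is flipped, so applying one gate costs $O(2^n)$ and a circuit of $K$ such gates costs $O(K \, 2^n)$.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target):
    """Apply a 2x2 unitary `gate` to qubit `target` of an n-qubit state
    vector of length 2**n. The induced 2**n x 2**n unitary is 2-sparse:
    row i mixes only amplitudes i and i ^ (1 << target)."""
    state = state.copy()
    stride = 1 << target  # qubit 0 = least significant bit (a convention)
    for i in range(len(state)):
        if i & stride == 0:
            j = i | stride
            a, b = state[i], state[j]
            state[i] = gate[0, 0] * a + gate[0, 1] * b
            state[j] = gate[1, 0] * a + gate[1, 1] * b
    return state

# Example: Hadamard on qubit 0 of the 3-qubit state |000>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = np.zeros(8, dtype=complex)
psi[0] = 1.0
print(apply_single_qubit_gate(psi, H, target=0))  # (|000> + |001>)/sqrt(2)
```

A two-qubit controlled gate is handled analogously by updating only the amplitude pairs whose control bit is set, so each row of the full matrix still has at most two nonzero entries.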
One of the central quantities of probabilistic seismic risk assessment studies is the fragility curve, which represents the probability of failure of a mechanical structure conditional on a scalar measure derived from the seismic ground motion. Estimating such curves is a difficult task because, for many structures of interest, few data are available and the data are only binary; i.e., they indicate the state of the structure, failure or non-failure. This framework concerns complex equipment such as the electrical devices encountered in industrial installations. To address this challenging setting, a wide range of methods in the literature rely on a parametric log-normal model. Bayesian approaches allow for efficient learning of the model parameters, but the choice of the prior distribution has a non-negligible influence on the posterior distribution and, therefore, on any resulting estimate. We propose a thorough study of this parametric Bayesian estimation problem when the data are limited and binary. Using reference prior theory as a foundation, we suggest an objective approach to the prior choice. This approach leads to the Jeffreys prior, which is explicitly derived for this problem for the first time. The posterior distribution is proven to be proper (i.e., it integrates to unity) with the Jeffreys prior and improper with some classical priors from the literature. The posterior distribution with the Jeffreys prior is also shown to vanish at the boundaries of the parameter domain, so sampling it does not produce anomalously small or large parameter values. Consequently, no degenerate fragility curves, such as unit-step functions, are produced, and the Jeffreys prior leads to robust credibility intervals. Numerical results obtained on two different case studies, including an industrial case, illustrate the theoretical predictions.
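For reference, the parametric log-normal model in question expresses the fragility curve as
\[ P_f(a) = \Phi\left( \frac{\ln(a/\alpha)}{\beta} \right), \]
where $a$ is the scalar seismic intensity measure, $\Phi$ is the standard normal distribution function, $\alpha > 0$ is the median seismic capacity, and $\beta > 0$ is the log-standard deviation; the prior choice discussed above concerns the pair $(\alpha, \beta)$.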
This paper discusses the accuracy and computational cost of solving ill-posed integral equations given discrete noisy point evaluations on a fine grid. Standard solution methods usually employ discretization schemes that are directly induced by the measurement points and thus may scale unfavorably with the number of evaluation points, resulting in computational inefficiency. To address this issue, we propose an algorithm that achieves the same level of accuracy while significantly reducing computational costs. Our approach involves an initial averaging procedure that sparsifies the underlying grid. To keep the exposition simple, we focus on one-dimensional ill-posed integral equations with sufficient smoothness; however, the approach generalizes to more complicated two- and three-dimensional problems with appropriate modifications.
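A minimal sketch of the initial averaging step (block size and names are illustrative): noisy evaluations on the fine grid are averaged in blocks of size $s$, sparsifying the design by a factor of $s$ while also reducing the noise variance by the same factor.

```python
import numpy as np

def sparsify_by_averaging(values, s):
    """Average consecutive blocks of `s` fine-grid evaluations, dropping
    a ragged tail if the grid size is not divisible by `s`."""
    m = (len(values) // s) * s
    return values[:m].reshape(-1, s).mean(axis=1)

# Example: 1000 noisy point evaluations reduced to 100 averaged ones.
rng = np.random.default_rng(1)
fine = np.sin(np.linspace(0, np.pi, 1000)) + 0.05 * rng.normal(size=1000)
coarse = sparsify_by_averaging(fine, s=10)
```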