In this paper, we consider the numerical approximation of the periodic measure of time-periodic stochastic differential equations (SDEs) under a weak dissipativity condition. For this we first study the existence of the periodic measure $\rho_t$ and the large-time behaviour of $\mathcal{U}(t+s,s,x) := \mathbb{E}\phi(X_{t}^{s,x})-\int\phi \, d\rho_t$, where $X_t^{s,x}$ is the solution of the SDE and $\phi$ is a smooth test function of polynomial growth at infinity. We prove that $\mathcal{U}$ and all its spatial derivatives decay to 0 at an exponential rate in time $t$, in the sense of an average over the initial time $s$. We also prove the existence and geometric ergodicity of the periodic measure of the discretized semi-flow generated by the Euler-Maruyama scheme, together with moment estimates of any order when the time step is sufficiently small (uniformly over all orders). We then obtain that the infinite-horizon weak error of the numerical scheme is of order $1$ in the time step, and that the choice of step size can be made uniformly over all test functions $\phi$. Subsequently we are able to estimate the averaged periodic measure with ergodic numerical schemes.
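To make the discretization concrete, the following sketch applies the Euler-Maruyama scheme to a toy time-periodic SDE and estimates $\mathbb{E}\phi(X_t^{0,x})$ for $\phi(x)=x$ by Monte Carlo. The drift $f(t,x) = -x + \sin(2\pi t)$, the noise level, and all parameter values are hypothetical choices for illustration, not taken from the paper.

```python
import math
import random

def euler_maruyama(f, sigma, x0, t0, t1, h, rng):
    """Simulate one Euler-Maruyama path of dX = f(t, X) dt + sigma dW
    from time t0 to t1 with step size h."""
    t, x = t0, x0
    while t < t1 - 1e-12:
        dt = min(h, t1 - t)
        x = x + f(t, x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x

# Hypothetical weakly dissipative drift with 1-periodic forcing:
f = lambda t, x: -x + math.sin(2.0 * math.pi * t)

# Monte Carlo estimate of E[phi(X_T^{0,x0})] with phi(x) = x after many
# periods; for this linear toy SDE the exact periodic mean at integer
# times is -2*pi / (1 + 4*pi^2), roughly -0.155.
rng = random.Random(0)
n, T, h = 1000, 10.0, 0.01
est = sum(euler_maruyama(f, 0.5, 1.0, 0.0, T, h, rng) for _ in range(n)) / n
```

For this toy drift the periodic mean solves $m'(t) = -m(t) + \sin(2\pi t)$, which is how the reference value in the comment is obtained.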
In this work, we develop a high-order pressure-robust method for the rotation form of the stationary incompressible Navier-Stokes equations. The basic idea is to modify the velocity test functions in the discretization of the trilinear and right-hand-side terms by means of an H(div)-conforming velocity reconstruction operator. To match the rotation form and the error analysis, a novel skew-symmetric discrete trilinear form containing the reconstruction operator is proposed, in which more than just the velocity test function is changed. The resulting well-posed discrete weak formulation stems directly from classical inf-sup stable mixed conforming high-order finite elements, and it is proven to achieve pressure-independent velocity errors. Optimal convergence rates for the $H^1$- and $L^2$-errors of the velocity and the $L^2$-error of the Bernoulli pressure are fully established. Numerical experiments are presented to demonstrate the theoretical results and the remarkable performance of the proposed method.
In this paper, we consider the density estimation problem associated with the stationary measure of ergodic It\^o diffusions from a discrete-time series that approximates the solutions of the stochastic differential equations. To take advantage of the characterization of the density function as the stationary solution of a parabolic-type Fokker-Planck PDE, we proceed as follows. First, we employ deep neural networks to approximate the drift and diffusion terms of the SDE by solving appropriate supervised learning tasks. Subsequently, we solve a steady-state Fokker-Planck equation associated with the estimated drift and diffusion coefficients with a neural-network-based least-squares method. We establish the convergence of the proposed scheme under appropriate mathematical assumptions, accounting for the generalization errors induced by regressing the drift and diffusion coefficients and by the PDE solver. This theoretical study relies on a recent perturbation result for Markov chains showing that the density estimation error depends linearly on the error in estimating the drift term, and on generalization error bounds for nonparametric regression and for PDE solutions obtained with neural-network models. The effectiveness of this method is reflected by numerical simulations of a two-dimensional Student's t-distribution and a 20-dimensional Langevin dynamics.
Partial differential equations on manifolds have been widely studied and play a crucial role in many subjects. In our previous work, a class of nonlocal models was introduced to approximate the Poisson equation on manifolds embedded in high-dimensional Euclidean spaces with Dirichlet and Neumann boundaries. In this paper, we improve the accuracy of this model under Dirichlet boundary conditions by adding a higher-order term along a layer adjacent to the boundary. This term is expressed explicitly through the normal derivative of the solution and the mean curvature of the boundary, with the normal derivative treated as a variable. All truncation errors, whether or not they involve this term, are re-analyzed and significantly reduced. Our focus is on the well-posedness analysis of the weak formulation of the nonlocal model and on its convergence to the local PDE counterpart. The main result of our work is that this manifold nonlocal model converges to the local Poisson problem at a rate of $\mathcal{O}(\delta^2)$ in the $H^1$ norm, where $\delta$ is the parameter describing the range of support of the kernel of the nonlocal operators. This convergence rate is currently the best reported for nonlocal models in the literature. Two numerical experiments are included to corroborate our convergence results.
Regula Falsi, or the method of false position, is a numerical method for finding an approximate solution of f(x) = 0 on a finite interval [a, b], where f is a real-valued continuous function on [a, b] satisfying f(a)f(b) < 0. Previous studies proved the convergence of this method under certain assumptions on the function f, such as that both the first and second derivatives of f do not change sign on the interval [a, b]. In this paper, we remove those assumptions and prove the convergence of the method for all continuous functions.
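For reference, the classical Regula Falsi iteration can be sketched as follows; the stopping rule on |f(c)| and the iteration cap are illustrative choices, not part of the paper's analysis.

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=10_000):
    """Method of false position: maintain a bracket [a, b] with
    f(a)*f(b) < 0 and replace one endpoint by the x-intercept of the
    secant line through (a, f(a)) and (b, f(b))."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign change on [a, b]"
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant-line x-intercept
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                    # root lies in [a, c]
            b, fb = c, fc
        else:                              # root lies in [c, b]
            a, fa = c, fc
    return c
```

For example, `regula_falsi(lambda x: x**3 - 2.0, 0.0, 2.0)` approximates the cube root of 2. Note that unlike bisection, one endpoint may stay fixed for many iterations, which is why the classical convergence proofs needed sign conditions on the derivatives.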
The stochastic approximation (SA) algorithm is a widely used probabilistic method for finding a solution to an equation of the form $\mathbf{f}(\boldsymbol{\theta}) = \mathbf{0}$, where $\mathbf{f} : \mathbb{R}^d \rightarrow \mathbb{R}^d$, when only noisy measurements of $\mathbf{f}(\cdot)$ are available. In the literature to date, one can make a distinction between "synchronous" updating, whereby the entire vector of the current guess $\boldsymbol{\theta}_t$ is updated at each time, and "asynchronous" updating, whereby only one component of $\boldsymbol{\theta}_t$ is updated. In convex and nonconvex optimization, there is also the notion of "batch" updating, whereby some but not all components of $\boldsymbol{\theta}_t$ are updated at each time $t$. In addition, there is a distinction between using a "local" clock versus a "global" clock. In the literature to date, convergence proofs when a local clock is used assume that the measurement noise is an i.i.d.\ sequence, an assumption that does not hold in Reinforcement Learning (RL). In this note, we provide a general theory of convergence for batch asynchronous stochastic approximation (BASA) that applies whether the updates use a local clock or a global clock, for the case where the measurement noises form a martingale difference sequence. This is the most general result to date and encompasses all others.
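To make the update pattern concrete, here is a toy sketch of batch asynchronous updating with per-coordinate local clocks. The target map $\mathbf{f}(\boldsymbol{\theta}) = -\boldsymbol{\theta}$, the $1/k$ step-size rule, and all parameters are hypothetical illustrations, not the paper's construction or assumptions.

```python
import random

def basa(f_noisy, theta, n_steps, batch_size, rng):
    """Batch asynchronous SA sketch: at each step only a random batch of
    coordinates is updated, each using a step size 1/k driven by its own
    *local* clock k (the number of times that coordinate was updated)."""
    d = len(theta)
    local = [0] * d                              # per-coordinate local clocks
    for _ in range(n_steps):
        batch = rng.sample(range(d), batch_size) # coordinates updated now
        g = f_noisy(theta, rng)                  # noisy measurement of f(theta)
        for i in batch:
            local[i] += 1
            theta[i] += (1.0 / local[i]) * g[i]  # Robbins-Monro step toward f = 0
    return theta

# Toy target f(theta) = -theta (unique root at 0) with Gaussian noise:
noisy = lambda th, r: [-x + r.gauss(0.0, 0.1) for x in th]
```

With this toy map the iterates should approach the root $\boldsymbol{\theta} = \mathbf{0}$ even though each coordinate is updated only intermittently.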
This paper investigates the stochastic distributed nonconvex optimization problem of minimizing a global cost function formed by the sum of $n$ local cost functions. We solve this problem using only zeroth-order (ZO) information exchange. Specifically, we propose a ZO distributed primal-dual coordinate method (ZODIAC), in which each agent approximates its local stochastic gradient coordinate-wise via a ZO oracle with an adaptive smoothing parameter. We show that the proposed algorithm achieves a convergence rate of $\mathcal{O}(\sqrt{p}/\sqrt{T})$ for general nonconvex cost functions. We demonstrate the efficiency of the proposed algorithm through a numerical example, in comparison with existing state-of-the-art centralized and distributed ZO algorithms.
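A generic coordinate-wise ZO gradient estimator of the kind alluded to can be sketched as below; this uses plain central differences with a fixed smoothing parameter `mu`, whereas the specific oracle and adaptive smoothing rule in ZODIAC may differ.

```python
def zo_gradient(f, x, mu):
    """Coordinate-wise zeroth-order gradient estimate: for each coordinate i,
    g_i = (f(x + mu*e_i) - f(x - mu*e_i)) / (2*mu), using only function
    evaluations (no derivative information)."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += mu
        xm = list(x); xm[i] -= mu
        g.append((f(xp) - f(xm)) / (2.0 * mu))
    return g
```

Each estimate costs $2p$ function evaluations in dimension $p$, which is the source of the $\sqrt{p}$ factor typical of ZO convergence rates.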
We introduce a new method for analyzing the cumulative sum (CUSUM) procedure in sequential change-point detection. When the observations are phase-type distributed and the post-change distribution is given by exponential tilting of the pre-change distribution, the first passage analysis of the CUSUM statistic reduces to that of a certain Markov additive process. Using the theory of the so-called scale matrix, and further developing it, we derive exact expressions for the average run length, average detection delay, and false alarm probability under the CUSUM procedure. The proposed method is robust and applicable in a general setting with non-i.i.d. observations. Numerical results are also given.
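For concreteness, Page's classical CUSUM recursion, whose first passage time the paper analyzes, can be sketched as follows. The exponential pre-/post-change pair in the example is an illustrative instance of exponential tilting of a phase-type distribution, not the paper's general setting.

```python
import math

def cusum_alarm(xs, llr, threshold):
    """Page's CUSUM recursion W_n = max(0, W_{n-1} + llr(x_n)); raise an
    alarm (return the sample index n) the first time W_n >= threshold,
    or return None if no alarm is raised on the given data."""
    w = 0.0
    for n, x in enumerate(xs, 1):
        w = max(0.0, w + llr(x))
        if w >= threshold:
            return n
    return None

# Illustrative tilted pair: pre-change Exp(1), post-change Exp(2), so the
# log-likelihood ratio is llr(x) = log(2 e^{-2x} / e^{-x}) = log(2) - x.
llr = lambda x: math.log(2.0) - x
```

Small observations (consistent with the post-change Exp(2) law) push the statistic up and trigger an alarm; large ones reset it toward zero.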
Estimating the density of a continuous random variable X has been studied extensively in statistics, in the setting where n independent observations of X are given a priori and one wishes to estimate the density from them. Popular methods include histograms and kernel density estimators. In this review paper, we are interested instead in the situation where the observations are generated by Monte Carlo simulation from a model. Then one can take advantage of variance reduction methods such as stratification, conditional Monte Carlo, and randomized quasi-Monte Carlo (RQMC), and obtain a more accurate density estimator than with standard Monte Carlo for a given computing budget. We discuss several ways of doing this, proposed in recent papers, with a focus on methods that exploit RQMC. A first idea is to combine RQMC directly with a standard kernel density estimator. Another is to adapt a simulation-based derivative estimation method, such as smoothed perturbation analysis or the likelihood ratio method, to obtain a continuous estimator of the cdf whose derivative is an unbiased estimator of the density; this can then be combined with RQMC. We summarize recent theoretical results for these approaches and give numerical illustrations of how they improve the convergence of the mean integrated square error.
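As a point of reference, the standard Gaussian kernel density estimator that these methods build on can be sketched as below. This sketch draws the sample by plain Monte Carlo; the RQMC variants discussed in the paper would instead map a randomized low-discrepancy point set through the simulation model.

```python
import math
import random

def kde(sample, x, h):
    """Gaussian kernel density estimate at point x with bandwidth h:
    (1/(n*h)) * sum_i K((x - x_i)/h) with K the standard normal density."""
    n = len(sample)
    s = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample)
    return s / (n * h * math.sqrt(2.0 * math.pi))

# Plain Monte Carlo sample from a N(0,1) "model":
rng = random.Random(42)
sample = [rng.gauss(0.0, 1.0) for _ in range(10000)]
density_at_zero = kde(sample, 0.0, 0.2)  # true density at 0 is 1/sqrt(2*pi)
```

The bandwidth `h` trades off smoothing bias against variance; it is this bias-variance balance that prevents the estimator from reaching the canonical $\mathcal{O}(1/n)$ rate under standard Monte Carlo.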
Estimating the unknown density from which a given independent sample originates is more difficult than estimating the mean, in the sense that, for the best popular nonparametric density estimators, the mean integrated square error converges more slowly than the canonical rate of $\mathcal{O}(1/n)$. When the sample is generated from a simulation model over which we have control, we can do better. We examine an approach in which conditional Monte Carlo yields, under certain conditions, a random conditional density which is an unbiased estimator of the true density at any point. By averaging over independent replications, we obtain a density estimator that converges at a faster rate than the usual ones. Moreover, combining this new type of estimator with randomized quasi-Monte Carlo to generate the samples typically brings a larger improvement in the error and convergence rate than for the usual estimators, because the new estimator is smoother as a function of the underlying uniform random numbers.
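A minimal sketch of the conditional Monte Carlo idea, for the illustrative model X = Y + Z with Y ~ Uniform(0,1) and Z ~ Exp(1); this toy model is our choice for illustration, not the paper's. Conditional on Y, the density of X at a point x is known in closed form, and averaging it over replications of Y gives an unbiased density estimator with no bandwidth.

```python
import math
import random

def cmc_density(x, n, rng):
    """Conditional Monte Carlo density estimator for X = Y + Z with
    Y ~ U(0,1) and Z ~ Exp(1): given Y = y, the conditional density of X
    at x is exp(-(x - y)) for x > y (and 0 otherwise), so its average
    over n independent replications of Y is unbiased for the density."""
    total = 0.0
    for _ in range(n):
        y = rng.random()
        if x > y:
            total += math.exp(-(x - y))
    return total / n
```

For this model the true density at x = 1 is $\int_0^1 e^{-(1-y)}\,dy = 1 - e^{-1}$, so the estimator can be checked directly; its $\mathcal{O}(1/n)$ variance is what yields the faster rate compared with kernel estimators.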
Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous-time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, such as the simplex, the time-discretisation error can dominate when we are near the boundary of the space. We demonstrate that while current SGMCMC methods for the simplex perform well in certain cases, they struggle with sparse simplex spaces, i.e., when many of the components are close to zero. However, most popular large-scale applications of Bayesian inference on simplex spaces, such as network or topic models, are sparse. We argue that this poor performance is due to the biases of SGMCMC caused by the discretisation error. To get around this, we propose the stochastic CIR process, which removes all discretisation error, and we prove that samples from the stochastic CIR process are asymptotically unbiased. Use of the stochastic CIR process within an SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
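To illustrate why the CIR process admits transitions with no discretisation error, here is a sketch of exact sampling via its noncentral chi-squared transition law. The parameter values are illustrative, and the paper's stochastic CIR construction inside an SGMCMC algorithm involves more than this single exact step.

```python
import math
import random

def poisson(mu, rng):
    """Knuth's Poisson sampler (adequate for moderate mu)."""
    l, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def cir_exact_step(theta, h, a, b, sigma, rng):
    """One exact (discretisation-error-free) transition of the CIR process
    d(theta) = (a - b*theta) dt + sigma*sqrt(theta) dW over a step of
    size h, using its noncentral chi-squared transition law."""
    c = sigma ** 2 * (1.0 - math.exp(-b * h)) / (4.0 * b)
    d = 4.0 * a / sigma ** 2                  # degrees of freedom
    lam = theta * math.exp(-b * h) / c        # noncentrality parameter
    # Noncentral chi2(d, lam) equals chi2(d + 2N) with N ~ Poisson(lam/2),
    # and chi2(nu) is Gamma(shape = nu/2, scale = 2).
    n = poisson(lam / 2.0, rng)
    return c * rng.gammavariate((d + 2.0 * n) / 2.0, 2.0)
```

Iterating this step samples the process exactly at the grid times, so no step-size bias accumulates; the chain's long-run mean should match the stationary mean a/b.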