It is well known that the Euler method for approximating the solutions of a random ordinary differential equation $\mathrm{d}X_t/\mathrm{d}t = f(t, X_t, Y_t)$ driven by a stochastic process $\{Y_t\}_t$ with $\theta$-H\"older sample paths is estimated to be of strong order $\theta$ with respect to the time step, provided $f=f(t, x, y)$ is sufficiently regular and suitably bounded. Here, it is proved that, in many typical cases, further conditions on the noise can be exploited so that the strong convergence is actually of order 1, regardless of the H\"older regularity of the sample paths. This applies, for instance, to additive or multiplicative It\^o process noises (such as Wiener, Ornstein--Uhlenbeck, and geometric Brownian motion processes); to point-process noises (such as Poisson point processes and Hawkes self-exciting processes, which even have jump-type discontinuities); and to transport-type processes with sample paths of bounded variation. The result is based on a novel approach that estimates the global error as an iterated integral over both large and small mesh scales, and switches the order of integration to move the critical regularity to the large scale. The work is complemented with numerical simulations illustrating the strong order 1 convergence in those cases, and with an example of fractional Brownian motion noise with Hurst parameter $0 < H < 1/2$, for which the order of convergence is $H + 1/2$, hence lower than the order 1 attained in the examples above, but still higher than the order $H$ of convergence expected from previous works.
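As a minimal illustration of the scheme under study, the sketch below applies the Euler method to a toy linear RODE with additive Wiener noise and measures the endpoint error of coarse grids against a fine-grid reference computed on the same sample path; the drift $f$, step counts, and seed are illustrative assumptions, not choices from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_rode(f, x0, y, t):
    """Euler scheme X_{n+1} = X_n + h f(t_n, X_n, Y_{t_n}) on the grid t,
    where y[n] samples the driving noise at the grid points."""
    x = np.empty_like(t)
    x[0] = x0
    for n in range(len(t) - 1):
        h = t[n + 1] - t[n]
        x[n + 1] = x[n] + h * f(t[n], x[n], y[n])
    return x

# Toy linear RODE dX/dt = -X + Y_t with additive Wiener noise.
T, Nfine = 1.0, 2**12
tf = np.linspace(0.0, T, Nfine + 1)
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / Nfine), Nfine))])
f = lambda t, x, y: -x + y

ref = euler_rode(f, 1.0, W, tf)          # fine-grid reference, same path

errs = []
for k in (4, 5, 6):                      # coarse grids with 2^k steps
    stride = Nfine // 2**k
    errs.append(abs(euler_rode(f, 1.0, W[::stride], tf[::stride])[-1] - ref[-1]))
```

Under the paper's result for additive It\^o noise, halving the step should roughly halve the endpoint error on a fixed path.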
This paper presents the error analysis of numerical methods on graded meshes for stochastic Volterra equations with weakly singular kernels. We first prove a novel regularity estimate for the exact solution by analyzing the associated convolution structure. This reveals that the exact solution exhibits an initial singularity, in the sense that its H\"older continuity exponent on any neighborhood of $t=0$ is lower than that on every compact subset of $(0,T]$. Motivated by this initial singularity, we then construct the Euler--Maruyama method, the fast Euler--Maruyama method, and the Milstein method on graded meshes. By establishing their pointwise-in-time error estimates, we identify the grading exponents of the meshes that attain the optimal uniform-in-time convergence orders, which improve on those of the uniform-mesh case. Numerical experiments are finally reported to confirm the sharpness of the theoretical findings.
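The graded meshes in question can be sketched as follows; the grading exponent $r$ and the values of $T$ and $N$ below are illustrative, since the optimal exponent depends on the paper's regularity analysis.

```python
import numpy as np

def graded_mesh(T, N, r):
    """Graded mesh t_k = T * (k/N)**r for k = 0, ..., N.  r = 1 gives the
    uniform mesh; r > 1 clusters nodes near t = 0, where the solution of
    the stochastic Volterra equation is least regular."""
    k = np.arange(N + 1)
    return T * (k / N) ** r

t = graded_mesh(T=1.0, N=8, r=3.0)
steps = np.diff(t)                 # step sizes grow away from t = 0
```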
We propose a new randomized method for solving systems of nonlinear equations, which can find sparse solutions or solutions under certain simple constraints. The scheme only takes gradients of component functions and uses Bregman projections onto the solution space of a Newton equation. In the special case of Euclidean projections, the method is known as the nonlinear Kaczmarz method. Furthermore, if the component functions are nonnegative, we are in the setting of optimization under the interpolation assumption, and the method reduces to SGD with the recently proposed stochastic Polyak step size. For general Bregman projections, our method is a stochastic mirror descent with a novel adaptive step size. We prove that in the convex setting each iteration of our method results in a smaller Bregman distance to exact solutions than the standard Polyak step. Our generalization to Bregman projections comes at the price that a convex one-dimensional optimization problem needs to be solved in each iteration. This can typically be done with globalized Newton iterations. Convergence is proved in two classical settings of nonlinearity: for convex nonnegative functions, and locally for functions which fulfill the tangential cone condition. Finally, we show examples in which the proposed method outperforms similar methods with the same memory requirements.
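In the Euclidean special case, the iteration reduces to the nonlinear Kaczmarz update, which can be sketched on a toy two-equation system (the system, starting point, and iteration count are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: f1(x) = x0^2 + x1^2 - 1 = 0,  f2(x) = x0 - x1 = 0,
# with the solution (1/sqrt(2), 1/sqrt(2)) nearest the starting point.
fs = [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0,
      lambda x: x[0] - x[1]]
grads = [lambda x: np.array([2.0 * x[0], 2.0 * x[1]]),
         lambda x: np.array([1.0, -1.0])]

x = np.array([1.5, 0.5])
for _ in range(200):
    i = rng.integers(len(fs))         # sample one component equation
    g = grads[i](x)
    # Euclidean projection onto the Newton hyperplane of component i:
    #   x <- x - f_i(x) / ||grad f_i(x)||^2 * grad f_i(x)
    x = x - fs[i](x) / (g @ g) * g

x_star = np.array([1.0, 1.0]) / np.sqrt(2.0)
```

Replacing the Euclidean projection by a Bregman projection yields the general method, at the cost of a one-dimensional convex solve per step.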
We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that, at each iteration, sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, for which the multigrid method is used to solve directly the linear optimality system; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits very good performance and robustness with respect to all parameters of interest.
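To illustrate why an $O(N)$ solve of a reduced saddle-point system is plausible, the sketch below solves a generic arrowhead saddle-point system (a diagonal block coupled to a single extra unknown) via a scalar Schur complement in $O(N)$ operations. The reduced systems in the paper have a richer KKT structure, so this is only a structural analogy with assumed data.

```python
import numpy as np

rng = np.random.default_rng(0)

def arrowhead_saddle_solve(D, a, b, c):
    """Solve [[diag(D), a], [a^T, 0]] (x, lam) = (b, c) in O(N) via the
    scalar Schur complement a^T diag(D)^{-1} a."""
    Dinv_b = b / D
    Dinv_a = a / D
    lam = (a @ Dinv_b - c) / (a @ Dinv_a)
    x = Dinv_b - Dinv_a * lam
    return x, lam

N = 6
D = rng.uniform(1.0, 2.0, N)       # positive diagonal block
a = rng.normal(size=N)             # coupling with the extra unknown
b, c = rng.normal(size=N), rng.normal()
x, lam = arrowhead_saddle_solve(D, a, b, c)
```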
System I is a proof language for a fragment of propositional logic in which isomorphic propositions, such as $A\wedge B$ and $B\wedge A$, or $A\Rightarrow(B\wedge C)$ and $(A\Rightarrow B)\wedge(A\Rightarrow C)$, are made equal. System I enjoys the strong normalization property. This is sufficient to prove the existence of empty types, but not to prove the introduction property (every closed term in normal form is an introduction). Moreover, a severe restriction had to be made on the types of the variables in order to obtain the existence of empty types. We show here that adding $\eta$-expansion rules to System I makes it possible to drop this restriction and yields a strongly normalizing calculus that enjoys the full introduction property.
A novel space-time discretization for the (linear) scalar-valued dissipative wave equation is presented. It is a structured approach: the discretization space is obtained by tensorizing the Virtual Element (VE) discretization in space with the Discontinuous Galerkin (DG) method in time. As such, it combines the advantages of both the VE and DG methods. The proposed scheme is implicit, and it is proved to be unconditionally stable and accurate in space and time.
High-dimensional Partial Differential Equations (PDEs) are a popular mathematical modelling tool, with applications ranging from finance to computational chemistry. However, standard numerical techniques for solving these PDEs are typically affected by the curse of dimensionality. In this work, we tackle this challenge while focusing on stationary diffusion equations defined over a high-dimensional domain with periodic boundary conditions. Inspired by recent progress in sparse function approximation in high dimensions, we propose a new method called compressive Fourier collocation. Combining ideas from compressive sensing and spectral collocation, our method replaces the use of structured collocation grids with Monte Carlo sampling and employs sparse recovery techniques, such as orthogonal matching pursuit and $\ell^1$ minimization, to approximate the Fourier coefficients of the PDE solution. We conduct a rigorous theoretical analysis showing that the approximation error of the proposed method is comparable with the best $s$-term approximation (with respect to the Fourier basis) to the solution. Using the recently introduced framework of random sampling in bounded Riesz systems, our analysis shows that the compressive Fourier collocation method mitigates the curse of dimensionality with respect to the number of collocation points under sufficient conditions on the regularity of the diffusion coefficient. We also present numerical experiments that illustrate the accuracy and stability of the method for the approximation of sparse and compressible solutions.
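The sparse-recovery core of the method can be sketched in one dimension, with plain function sampling standing in for the PDE collocation operator (the frequencies, sample count, and sparsity below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

K = np.arange(-10, 11)                   # candidate Fourier frequencies
m, s = 100, 3                            # Monte Carlo samples, sparsity

c_true = np.zeros(len(K), dtype=complex)
c_true[[5, 12, 18]] = [1.0, -2.0, 0.5]   # s-sparse coefficient vector

pts = rng.uniform(0.0, 1.0, m)           # random collocation points
A = np.exp(2j * np.pi * pts[:, None] * K[None, :]) / np.sqrt(m)
b = A @ c_true                           # noiseless samples

def omp(A, b, s):
    """Orthogonal matching pursuit: greedily select s columns of A."""
    support, r = [], b.copy()
    for _ in range(s):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))
        sol, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ sol
    c = np.zeros(A.shape[1], dtype=complex)
    c[support] = sol
    return c

c_hat = omp(A, b, s)
```

In the actual method, the rows of $A$ come from applying the differential operator to the Fourier basis at the Monte Carlo points, rather than from plain evaluation.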
We consider the solution of the biharmonic equation in mixed form, discretized by Hybrid High-Order (HHO) methods. The two resulting second-order elliptic problems can be decoupled via the introduction of a new unknown, corresponding to the boundary value of the solution of the first Laplacian problem. This technique yields a global linear problem that can be solved iteratively via a Krylov-type method. More precisely, at each iteration of the scheme, two second-order elliptic problems have to be solved, and a normal derivative on the boundary has to be computed. In this work, we specialize this scheme to the HHO discretization. To this end, an explicit technique to compute the discrete normal derivative of an HHO solution of a Laplacian problem is proposed. Moreover, we show that the resulting discrete scheme is well-posed. Finally, a new preconditioner is designed to speed up the convergence of the Krylov method. Numerical experiments assessing the performance of the proposed iterative algorithm on both two- and three-dimensional test cases are presented.
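The decoupling strategy can be sketched in a one-dimensional finite-difference setting, where the boundary unknown reduces to two scalars and the Krylov solve becomes a $2\times 2$ linear system; finite differences replace HHO here purely for brevity, and the clamped test problem is an illustrative assumption.

```python
import numpy as np

m = 200                                   # number of grid intervals
h = 1.0 / m
x = np.linspace(0.0, 1.0, m + 1)

def solve_poisson(rhs, left, right):
    """Finite-difference solve of -v'' = rhs on (0,1), v(0)=left, v(1)=right."""
    n = m - 1
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    b = rhs[1:-1].copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    v = np.empty(m + 1)
    v[0], v[-1] = left, right
    v[1:-1] = np.linalg.solve(A, b)
    return v

f = 24.0 * np.ones(m + 1)                 # load for u'''' = f; exact u = x^2 (1-x)^2

def boundary_slopes(lam):
    """Two Poisson solves (-w'' = f with w = lam on the boundary, then
    -u'' = w with u = 0 on the boundary), returning (u'(0), u'(1))."""
    w = solve_poisson(f, lam[0], lam[1])
    u = solve_poisson(w, 0.0, 0.0)
    d0 = (-3.0 * u[0] + 4.0 * u[1] - u[2]) / (2.0 * h)
    d1 = (3.0 * u[-1] - 4.0 * u[-2] + u[-3]) / (2.0 * h)
    return np.array([d0, d1])

# lam -> boundary slopes is affine, so enforcing the clamped conditions
# u'(0) = u'(1) = 0 is a 2x2 linear system, assembled here by probing.
r0 = boundary_slopes(np.zeros(2))
J = np.column_stack([boundary_slopes(e) - r0 for e in np.eye(2)])
lam = np.linalg.solve(J, -r0)             # exact interface value: w = -2 at both ends

u = solve_poisson(solve_poisson(f, lam[0], lam[1]), 0.0, 0.0)
```

In higher dimensions the boundary unknown is a trace function rather than two scalars, which is why a preconditioned Krylov method is needed there.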
We prove linear convergence for a new family of modified Dirichlet--Neumann methods applied to quasilinear parabolic equations, as well as the convergence of the Robin--Robin method. Such nonoverlapping domain decomposition methods are commonly employed for the parallelization of partial differential equation solvers. Convergence has been extensively studied for elliptic equations, but in the case of parabolic equations there are hardly any convergence results that do not rely on strong regularity assumptions. Hence, we construct a new framework for analyzing domain decomposition methods applied to quasilinear parabolic problems, based on fractional time derivatives and time-dependent Steklov--Poincar\'e operators. The convergence analysis is conducted without restrictive regularity assumptions on the solutions or the numerical iterates. We also prove that these continuous convergence results extend to the discrete case obtained by combining domain decomposition with space-time finite elements.
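For illustration, the alternating structure of a relaxed Dirichlet--Neumann iteration can be sketched on the stationary toy problem $-u'' = 0$, where both subdomain solves are available in closed form; the paper's setting (quasilinear parabolic problems, without such shortcuts) is far more delicate.

```python
# Relaxed Dirichlet--Neumann iteration for -u'' = 0 on (0,1) with
# u(0) = 0, u(1) = 1 (exact solution u(x) = x) and interface at x = a.
a, theta = 0.25, 0.2       # interface location and relaxation parameter
lam = 0.0                  # initial guess for the interface value u(a)

for _ in range(30):
    # Dirichlet solve on (0, a) with u1(0) = 0, u1(a) = lam: the
    # solution is linear, so its interface flux is lam / a.
    flux = lam / a
    # Neumann solve on (a, 1) with u2'(a) = flux, u2(1) = 1.
    u2_at_a = 1.0 - flux * (1.0 - a)
    # Relaxed update; the error contracts by |1 - theta/a|, so any
    # 0 < theta < 2a converges for this toy problem.
    lam = theta * u2_at_a + (1.0 - theta) * lam
```

The iterate converges to the exact interface value $u(a) = a$, and the choice of relaxation parameter governs the linear contraction rate.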
Let $(X_t)_{t \ge 0}$ be the solution of a one-dimensional stochastic differential equation. Our aim is to study the convergence rate for the estimation of the invariant density in the intermediate regime, assuming that a discrete observation of the process $(X_t)_{t \in [0, T]}$ is available, as $T$ tends to $\infty$. We find the convergence rates associated with the kernel density estimator we propose, together with a condition on the discretization step $\Delta_n$ which plays the role of a threshold between the intermediate regime and the continuous case. In the intermediate regime the convergence rate is $n^{- \frac{2 \beta}{2 \beta + 1}}$, where $\beta$ is the smoothness of the invariant density. We then complement the upper bounds with a lower bound over the set of all possible estimators, which exhibits the same convergence rate: it is therefore not possible to propose an estimator achieving a better convergence rate. This is obtained by the two-hypotheses method; the most challenging part consists in bounding the Hellinger distance between the laws of the two models. The key point is a Malliavin representation of a score function, which allows us to bound the Hellinger distance through a quantity depending on the Malliavin weight.
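A sketch of the estimation setup, using an Ornstein--Uhlenbeck process as an illustrative diffusion with known invariant density $\mathcal{N}(0, 1/2)$; the step $\Delta$, sample size, and bandwidth below are ad hoc choices, not the paper's tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretely observed Ornstein--Uhlenbeck process dX_t = -X_t dt + dW_t,
# whose invariant density is N(0, 1/2).
Delta, n = 0.1, 200_000
dW = np.sqrt(Delta) * rng.normal(size=n - 1)
X = np.empty(n)
X[0] = 0.0
for i in range(n - 1):                    # Euler--Maruyama sampling
    X[i + 1] = X[i] * (1.0 - Delta) + dW[i]

def kde(points, data, h):
    """Gaussian kernel density estimator with bandwidth h."""
    u = (points[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

grid = np.linspace(-1.5, 1.5, 7)
h = n ** (-1.0 / 5.0)                     # bandwidth n^{-1/(2*beta+1)} for beta = 2
pi_hat = kde(grid, X, h)
pi_true = np.exp(-grid**2) / np.sqrt(np.pi)   # N(0, 1/2) density
```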
We establish optimal error bounds on time-splitting methods for the nonlinear Schr\"odinger equation with low regularity potential and typical power-type nonlinearity $ f(\rho) = \rho^\sigma $, where $ \rho:=|\psi|^2 $ is the density, $ \psi $ the wave function, and $ \sigma > 0 $ the exponent of the nonlinearity. For the first-order Lie--Trotter time-splitting method, an optimal $ L^2 $-norm error bound is proved for $L^\infty$-potential and $ \sigma > 0 $, and an optimal $H^1$-norm error bound is obtained for $ W^{1, 4} $-potential and $ \sigma \geq 1/2 $. For the second-order Strang time-splitting method, an optimal $ L^2 $-norm error bound is established for $H^2$-potential and $ \sigma \geq 1 $, and an optimal $H^1$-norm error bound is proved for $H^3$-potential and $ \sigma \geq 3/2 $. Compared to the error estimates of time-splitting methods in the literature, our optimal error bounds either improve the convergence rates under the same regularity assumptions or significantly relax the regularity requirements on the potential and nonlinearity needed for optimal convergence orders. A key ingredient in our proof is a new technique called \textit{regularity compensation oscillation} (RCO), in which low frequency modes are analyzed by phase cancellation and high frequency modes are estimated by the regularity of the solution. Extensive numerical results are reported to confirm our error estimates and to demonstrate that they are sharp.
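A first-order Lie--Trotter splitting for the 1D equation can be sketched with the classical split-step Fourier approach; the potential, nonlinearity exponent, and initial datum are illustrative, and the check below only exercises the exact $L^2$-norm conservation of both substeps.

```python
import numpy as np

# Split-step Fourier discretization of
#   i d(psi)/dt = -(1/2) psi_xx + V psi + |psi|^(2 sigma) psi
# on a periodic interval of length L.
L, M = 2.0 * np.pi, 128
x = np.linspace(0.0, L, M, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(M, d=L / M)   # Fourier wavenumbers
V, sigma = np.cos(x), 1.0                      # illustrative potential
psi0 = np.exp(-2.0 * (x - np.pi) ** 2).astype(complex)

tau, steps = 1e-3, 1000
psi = psi0.copy()
for _ in range(steps):
    # Kinetic substep, solved exactly in Fourier space.
    psi = np.fft.ifft(np.exp(-0.5j * k**2 * tau) * np.fft.fft(psi))
    # Potential + nonlinear substep: |psi| is constant along this flow,
    # so the pointwise phase rotation integrates it exactly.
    rho = np.abs(psi) ** 2
    psi = psi * np.exp(-1j * tau * (V + rho**sigma))
```

Both substeps are unitary (a Fourier multiplier of unit modulus and a pointwise phase rotation), so the discrete mass is conserved to roundoff.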