An optimal error estimate whose constants depend only polynomially on $\varepsilon^{-1}$ is established for a temporal semi-discrete scheme of the Cahn-Hilliard equation based on the scalar auxiliary variable (SAV) formulation. The key to our analysis is to convert the SAV time-stepping scheme back into a form compatible with the original formulation of the Cahn-Hilliard equation, which makes it feasible to use spectral estimates to handle the nonlinear term. Based on this transformation of the SAV scheme, the optimal error estimate for the temporal semi-discretization, depending only on a low polynomial order of $\varepsilon^{-1}$ rather than an exponential order, is derived using mathematical induction, spectral arguments, and the superconvergence properties of some nonlinear terms. Numerical examples are provided to illustrate the discrete energy decay property and to validate our theoretical convergence analysis.
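To make the SAV construction concrete, here is a minimal sketch of one first-order SAV step for the Cahn-Hilliard equation, using a Fourier collocation discretization on a one-dimensional periodic domain purely for illustration; the double-well potential $F(\phi)=(\phi^2-1)^2/4$ and the values of eps, tau, C0, and N are assumptions, not choices made in the paper (whose scheme is semi-discrete in time only).

```python
import numpy as np

N, L = 256, 2 * np.pi           # grid points, domain length
eps, tau, C0 = 0.1, 1e-3, 1.0   # interface width, time step, SAV shift
dx = L / N
x = dx * np.arange(N)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)        # wave numbers

E1 = lambda p: dx * np.sum((p**2 - 1)**2 / 4)  # bulk free energy integral
phi = 0.05 * np.cos(x)                         # initial state
r = np.sqrt(E1(phi) + C0)                      # auxiliary variable r^0

def sav_step(phi, r):
    """One first-order SAV step for phi_t = Lap(mu), mu = -eps^2 Lap(phi) + f(phi)."""
    b = (phi**3 - phi) / np.sqrt(E1(phi) + C0)     # f(phi^n)/sqrt(E1(phi^n)+C0)
    Ainv = 1.0 / (1.0 + tau * eps**2 * k**4)       # (I + tau eps^2 Lap^2)^{-1}
    psi1 = np.fft.ifft(Ainv * np.fft.fft(phi)).real
    psi2 = tau * np.fft.ifft(Ainv * (-k**2) * np.fft.fft(b)).real
    # phi^{n+1} = psi1 + r^{n+1} psi2; close the system with the scalar r-update
    r_new = (r + 0.5 * dx * np.sum(b * (psi1 - phi))) / (1.0 - 0.5 * dx * np.sum(b * psi2))
    return psi1 + r_new * psi2, r_new

for _ in range(100):
    phi, r = sav_step(phi, r)
```

The auxiliary variable $r^{n+1}$ is eliminated through a scalar equation, so each step only requires constant-coefficient linear solves, which is the main practical appeal of SAV schemes.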
We present an immersed boundary method for simulating the creeping motion of a rigid particle in a fluid described by the Stokes equations, discretized with a finite element strategy on unfitted meshes, called Phi-FEM, which uses a level-set description of the solid. One advantage of our method is the use of standard finite element spaces and classical integration tools, while maintaining optimal convergence (theoretically in the $H^1$ norm for the velocity and the $L^2$ norm for the pressure; numerically also in the $L^2$ norm for the velocity).
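The following small sketch illustrates the level-set description on which Phi-FEM relies: on an unfitted Cartesian background mesh, the particle is the set $\{\phi<0\}$ and its boundary is the zero level set; only cells touching the fluid are kept, and cut cells are identified by a sign change. The geometry and grid are hypothetical; a real Phi-FEM solver would assemble standard finite element spaces on the active cells.

```python
import numpy as np

# particle = {phi < 0}: a disk of radius 0.2 centered at (0.5, 0.5)
phi = lambda X, Y: (X - 0.5)**2 + (Y - 0.5)**2 - 0.2**2

n = 64
xs = np.linspace(0.0, 1.0, n + 1)             # unfitted background grid
X, Y = np.meshgrid(xs, xs, indexing="ij")
vals = phi(X, Y)                              # level set sampled at the nodes

# gather the four nodal values of each cell
corners = np.stack([vals[:-1, :-1], vals[1:, :-1], vals[:-1, 1:], vals[1:, 1:]])
fluid = (corners > 0).any(axis=0)             # cells meeting the fluid {phi > 0}
cut = fluid & (corners < 0).any(axis=0)       # cells crossed by the boundary {phi = 0}
print(f"{fluid.sum()} active cells, {cut.sum()} cut by the particle boundary")
```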
In this paper, we introduce a new causal framework capable of dealing with both probabilistic and non-probabilistic problems. Specifically, we provide a direct causal effect formula called the Probabilistic vAriational Causal Effect (PACE), together with variations of it satisfying certain postulates. Our causal effect formula combines the idea of the total variation of a function with probability theory. The probabilistic part captures the natural availability of changing the exposure values, given some variables; these variables interfere with the effect of the exposure on a given outcome. PACE has a parameter $d$ that determines the degree to which the natural availability of changing the exposure values is taken into account. Lower values of $d$ correspond to scenarios in which rare cases are important; in contrast, with higher values of $d$, our framework deals with problems that are probabilistic in nature. Hence, instead of a single causal effect value, we provide a causal effect vector by discretizing $d$. Further, we introduce positive and negative PACE to measure the positive and negative causal changes in the outcome as the exposure values change. Furthermore, we provide an identifiability criterion for PACE to deal with observational studies. We also address the problem of computing counterfactuals in causal reasoning. We compare our framework with those of Pearl, mutual information, conditional mutual information, and Janzing et al. by investigating several examples.
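As a purely schematic illustration (not the paper's actual PACE formula), the toy below computes a total-variation-style effect in which each jump of $E[Y\mid X=x]$ between adjacent exposure levels is weighted by a probability raised to the power $d$, showing how small $d$ emphasizes rare but large changes while larger $d$ discounts them; all numbers here are invented.

```python
import numpy as np

# four ordered exposure levels x = 0, 1, 2, 3
mu_y = np.array([0.0, 0.1, 0.2, 2.0])         # toy values of E[Y | X = x]
p_x = np.array([0.45, 0.45, 0.09, 0.01])      # P(X = x); level 3 is rare

def tv_effect(d):
    # weight each jump between adjacent levels by how probable the
    # two levels are, raised to the power d
    w = (p_x[:-1] * p_x[1:]) ** d
    return np.sum(w * np.abs(np.diff(mu_y)))

for d in [0.0, 0.5, 1.0, 2.0]:
    print(f"d = {d}: effect = {tv_effect(d):.4f}")
# small d: the large but rare jump 0.2 -> 2.0 dominates the effect;
# large d: that jump is discounted because level 3 is almost never seen.
```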
Stochastic versions of proximal methods have gained much attention in statistics and machine learning. These algorithms tend to admit simple, scalable forms and enjoy numerical stability via implicit updates. In this work, we propose and analyze a stochastic version of the recently proposed proximal distance algorithm, a class of iterative optimization methods that recover the solution of a desired constrained estimation problem as the penalty parameter $\rho \rightarrow \infty$. By uncovering connections to related stochastic proximal methods and interpreting the penalty parameter as the learning rate, we justify heuristics used in practical manifestations of the proximal distance method, establishing their convergence guarantees for the first time. Moreover, we extend recent theoretical devices to establish finite error bounds and a complete characterization of convergence rate regimes. We validate our analysis via a thorough empirical study, which also shows that, unsurprisingly, the proposed method outpaces batch versions on popular learning tasks.
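A minimal sketch of the kind of iteration discussed here, specialized to sparse least squares: each step projects onto the constraint set, then takes an exact proximal step on one sampled loss, with the inverse penalty $1/\rho_t$ acting as a diminishing learning rate. The schedule, problem sizes, and sparsity constraint are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 50, 5
A = rng.normal(size=(n, p))
theta_true = np.zeros(p)
theta_true[:k] = rng.normal(size=k)
b = A @ theta_true + 0.01 * rng.normal(size=n)

def project_sparse(z, k):
    # projection onto the k-sparse set: keep the k largest entries
    out = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-k:]
    out[keep] = z[keep]
    return out

theta = np.zeros(p)
for t in range(1, 5001):
    gamma = 1.0 / np.sqrt(t)                   # 1/rho_t: the "learning rate"
    i = rng.integers(n)
    z = project_sparse(theta, k)               # distance-majorization anchor
    a, bi = A[i], b[i]
    # closed-form prox of 0.5 * (a @ th - bi)^2 at z with parameter gamma
    theta = z + gamma * a * (bi - a @ z) / (1.0 + gamma * a @ a)

print(np.linalg.norm(project_sparse(theta, k) - theta_true))
```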
Covariate measurement error in nonparametric regression is a common problem in nutritional epidemiology, geostatistics, and other fields. Over the last two decades, this problem has received substantial attention in the frequentist literature. Bayesian approaches for handling measurement error have been explored only recently and are surprisingly successful, despite the lack of a proper theoretical justification for the asymptotic performance of the estimators. By specifying a Gaussian process prior on the regression function and a Dirichlet process Gaussian mixture prior on the unknown distribution of the unobserved covariates, we show that the posterior distributions of the regression function and of the unknown covariate density attain optimal rates of contraction adaptively over a range of H\"{o}lder classes, up to logarithmic terms. This improves upon existing classical frequentist results, which require knowledge of the smoothness of the underlying function to deliver optimal risk bounds. We also develop a novel surrogate prior for approximating the Gaussian process prior that leads to efficient computation and preserves the covariance structure, thereby facilitating easy prior elicitation. We demonstrate the empirical performance of our approach and compare it with competitors in a wide range of simulation experiments and a real data example.
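For concreteness, the generative structure of the model is sketched below: the response follows $Y=f(X)+\epsilon$, but $X$ is observed only through a noisy surrogate $W=X+U$, so both the regression function and the covariate density must be inferred. The specific $f$, covariate density, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
f = lambda x: np.sin(2 * np.pi * x) + 0.5 * x   # unknown regression function
x = rng.beta(2, 3, size=n)                      # unobserved covariates
w = x + 0.1 * rng.normal(size=n)                # observed noisy surrogates
y = f(x) + 0.2 * rng.normal(size=n)             # responses
# inference sees only (w, y); both f and the density of x are unknown,
# modelled here with a GP prior and a DP Gaussian mixture prior respectively
```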
Multivariate point processes are widely applied to model event-type data such as natural disasters, online message exchanges, financial transactions, or neuronal spike trains. One very popular point process model, in which the probability of occurrence of new events depends on the past of the process, is the Hawkes process. In this work we consider the nonlinear Hawkes process, which notably models excitation and inhibition phenomena between the dimensions of the process. Within a nonparametric Bayesian estimation framework, we obtain concentration rates of the posterior distribution on the parameters, under mild assumptions on the prior distribution and the model. These results also lead to convergence rates of Bayesian estimators. Another object of interest in event-data modelling is the graph of interaction, or Granger connectivity graph, of the phenomenon. We provide consistency guarantees for Bayesian methods estimating this quantity; in particular, we prove that the posterior distribution is consistent on the graph adjacency matrix of the process, as is a Bayesian estimator based on an adequate loss function.
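The sketch below simulates a two-dimensional nonlinear Hawkes process with exponential kernels and a ReLU link via Ogata-style thinning, allowing a negative interaction weight to model inhibition. All parameter values are illustrative; the upper bound exploits the fact that positive kernel contributions only decay between events.

```python
import numpy as np

rng = np.random.default_rng(2)
nu = np.array([0.5, 0.5])                      # baseline rates
alpha = np.array([[0.4, -0.6],                 # alpha[i, j]: effect of a jump
                  [0.3,  0.2]])                # in dim j on dim i (< 0 inhibits)
beta, T = 1.0, 50.0                            # kernel decay rate, time horizon
S = np.zeros((2, 2))                           # running exponential-kernel sums
t, events = 0.0, []

while True:
    # upper bound on total intensity: positive contributions only decay
    lam_bar = np.sum(nu + np.maximum(S, 0.0).sum(axis=1))
    w = rng.exponential(1.0 / lam_bar)
    if t + w > T:
        break
    t += w
    S *= np.exp(-beta * w)                     # kernel states decay over the wait
    lam = np.maximum(nu + S.sum(axis=1), 0.0)  # ReLU-link intensities at t
    if rng.uniform() < lam.sum() / lam_bar:    # thinning accept/reject
        i = rng.choice(2, p=lam / lam.sum())   # which dimension fires
        events.append((t, i))
        S[:, i] += alpha[:, i]                 # jump induced by the new event

print(len(events), "events on [0, T]")
```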
Time-fractional parabolic equations with a Caputo time derivative are considered. For such equations, we explore and further develop the new methodology of a posteriori error estimation and adaptive time stepping proposed in [7]. We improve the earlier time-stepping algorithm based on this theory and specifically address its stable and efficient implementation in the context of high-order methods. The methods considered include an L1-2 method and continuous collocation methods of arbitrary order, for which adaptive temporal meshes are shown to yield optimal convergence rates in the presence of solution singularities.
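For orientation, the sketch below implements the basic first-order L1 scheme for the scalar Caputo problem $D_t^\alpha u=-u$, $u(0)=1$, on a graded mesh that concentrates points near the initial-time singularity; the paper's L1-2 and collocation methods are higher-order refinements of this idea, and the grading exponent and sizes here are illustrative.

```python
import numpy as np
from math import gamma

alpha, T, N, r = 0.5, 1.0, 200, 2.0
t = T * (np.arange(N + 1) / N) ** r             # graded temporal mesh
u = np.empty(N + 1)
u[0] = 1.0

for n in range(1, N + 1):
    j = np.arange(1, n + 1)
    # L1 weights: w[j-1] multiplies the difference u[j] - u[j-1]
    w = ((t[n] - t[j - 1]) ** (1 - alpha) - (t[n] - t[j]) ** (1 - alpha)) \
        / (gamma(2 - alpha) * (t[j] - t[j - 1]))
    hist = np.dot(w[:-1], u[1:n] - u[:n - 1])   # known history contribution
    # solve w_n (u_n - u_{n-1}) + hist = -u_n for the new value u_n
    u[n] = (w[-1] * u[n - 1] - hist) / (w[-1] + 1.0)

print(u[-1])  # approximates the Mittag-Leffler value E_alpha(-T^alpha)
```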
In this paper, we analyze an operator splitting scheme for the nonlinear heat equation in $\Omega\subset\mathbb{R}^d$ ($d\geq 1$): $\partial_t u = \Delta u + \lambda |u|^{p-1} u$ in $\Omega\times(0,\infty)$, $u=0$ on $\partial\Omega\times(0,\infty)$, $u ({\bf x},0) =\phi ({\bf x})$ in $\Omega$, where $\lambda\in\{-1,1\}$ and $\phi \in W^{1,q}(\Omega)\cap L^{\infty} (\Omega)$ with $2\leq p < \infty$ and $d(p-1)/2<q<\infty$. We establish the well-posedness of the approximation of $u$ in $L^r$-spaces ($r\geq q$) and, furthermore, derive its convergence rate of order $\mathcal{O}(\tau)$ for a time step $\tau>0$. Finally, we give some numerical examples to confirm the reliability of the analytical results.
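As an illustration of this type of splitting, the sketch below alternates the exact heat flow (computed in a sine basis for homogeneous Dirichlet conditions on $\Omega=(0,1)$) with the exact flow of the nonlinear ODE $v' = \lambda |v|^{p-1} v$, which is solvable in closed form. The 1D setting, grid, and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dst, idst

lam, p = -1, 3                                   # dissipative case lambda = -1
N, tau, steps = 255, 1e-3, 500
x = np.arange(1, N + 1) / (N + 1)                # interior nodes of (0, 1)
mu = (np.pi * np.arange(1, N + 1)) ** 2          # Dirichlet Laplacian eigenvalues
u = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)   # initial datum phi

def heat_flow(u, tau):
    # exact flow of u_t = Lap(u) with u = 0 on the boundary (sine basis)
    return idst(np.exp(-mu * tau) * dst(u, type=1), type=1)

def nonlinear_flow(u, tau):
    # closed form for v' = lam |v|^{p-1} v: |v|^{1-p} evolves linearly in t;
    # for lam = +1 the denominator can vanish (finite-time blow-up)
    return u * (1.0 - lam * (p - 1) * np.abs(u) ** (p - 1) * tau) ** (-1.0 / (p - 1))

for _ in range(steps):
    u = nonlinear_flow(heat_flow(u, tau), tau)   # one first-order Lie step
```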
Artificial neural networks (ANNs) have been used very successfully in numerical simulations for a series of computational problems ranging from image classification/image recognition, speech recognition, time series analysis, game intelligence, and computational advertising to numerical approximations of partial differential equations (PDEs). Such numerical simulations suggest that ANNs have the capacity to approximate high-dimensional functions very efficiently and, in particular, indicate that ANNs seem to possess the fundamental power to overcome the curse of dimensionality when approximating the high-dimensional functions appearing in the computational problems named above. There is a series of rigorous mathematical approximation results for ANNs in the scientific literature. Some of them prove convergence without convergence rates, and some even rigorously establish convergence rates, but there are only a few special cases in which mathematical results can rigorously explain the empirical success of ANNs in approximating high-dimensional functions. The key contribution of this article is to disclose that ANNs can efficiently approximate high-dimensional functions in the case of numerical approximations of Black-Scholes PDEs. More precisely, this work reveals that the number of parameters an ANN requires to approximate the solution of the Black-Scholes PDE grows at most polynomially in both the reciprocal of the prescribed approximation accuracy $\varepsilon > 0$ and the PDE dimension $d \in \mathbb{N}$. We thereby prove, for the first time, that ANNs do indeed overcome the curse of dimensionality in the numerical approximation of Black-Scholes PDEs.
A filtered Lie splitting scheme is proposed for the time integration of the cubic nonlinear Schr\"odinger equation on the two-dimensional torus $\mathbb{T}^2$. The scheme is analyzed in a framework of discrete Bourgain spaces, which allows us to consider initial data with low regularity; more precisely, initial data in $H^s(\mathbb{T}^2)$ with $s>0$. In this way, the usual stability restriction to smooth Sobolev spaces with index $s>1$ is overcome. Rates of convergence of order $\tau^{s/2}$ in $L^2(\mathbb{T}^2)$ at this level of regularity are proved. Numerical examples illustrate that these convergence results are sharp.
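A minimal sketch of the filtered Lie splitting iteration: frequencies above a cutoff $K\sim\tau^{-1/2}$ are projected out, and the exact free Schr\"odinger flow (computed by FFT) is alternated with the exact flow of the pointwise cubic nonlinearity. The sign convention $i\partial_t u=-\Delta u+|u|^2u$, the smooth initial datum, and all parameters are illustrative assumptions.

```python
import numpy as np

N, tau, steps = 128, 1e-3, 100
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)   # integer frequencies on T^2
KX, KY = np.meshgrid(k, k, indexing="ij")
lap = KX**2 + KY**2
K = tau ** -0.5                                      # filter cutoff ~ tau^{-1/2}
chi = (np.abs(KX) <= K) & (np.abs(KY) <= K)          # frequency filter

xg = 2 * np.pi * np.arange(N) / N
X, Y = np.meshgrid(xg, xg, indexing="ij")
u0 = np.exp(1j * X) + 0.5 * np.cos(Y)
u = np.fft.ifft2(chi * np.fft.fft2(u0))              # filtered initial datum

for _ in range(steps):
    u = u * np.exp(-1j * tau * np.abs(u) ** 2)       # exact cubic-nonlinearity flow
    u = np.fft.ifft2(chi * np.exp(-1j * tau * lap) * np.fft.fft2(u))  # filtered free flow
```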
We study the random reshuffling (RR) method for smooth nonconvex optimization problems with a finite-sum structure. Although this method is widely used in practice, for example in the training of neural networks, its convergence behavior has only been understood in several limited settings. In this paper, under the well-known Kurdyka-Lojasiewicz (KL) inequality, we establish strong limit-point convergence results for RR with appropriate diminishing step sizes; namely, the whole sequence of iterates generated by RR is convergent and converges to a single stationary point in an almost sure sense. In addition, we derive the corresponding rate of convergence, depending on the KL exponent and the suitably selected diminishing step sizes. When the KL exponent lies in $[0,\frac12]$, the convergence is at a rate of $\mathcal{O}(t^{-1})$, with $t$ counting the iteration number. When the KL exponent belongs to $(\frac12,1)$, our derived convergence rate is of the form $\mathcal{O}(t^{-q})$ with $q\in (0,1)$ depending on the KL exponent. The standard KL inequality-based convergence analysis framework applies only to algorithms with a certain descent property. We conduct a novel convergence analysis for the non-descent RR method with diminishing step sizes based on the KL inequality, which generalizes the standard KL framework. We summarize our main steps and core ideas in an informal analysis framework, which is of independent interest. As a direct application of this framework, we also establish similar strong limit-point convergence results for the reshuffled proximal point method.
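A minimal sketch of RR with diminishing step sizes on a least-squares finite sum: each epoch draws a fresh permutation and performs one component-gradient step per index. The step-size schedule and the problem instance are illustrative; the paper's analysis concerns general smooth nonconvex finite sums under the KL inequality.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

x = np.zeros(d)
for epoch in range(1, 301):
    alpha = 0.05 / epoch                       # diminishing step size
    for i in rng.permutation(n):               # fresh random order each epoch
        g = (A[i] @ x - b[i]) * A[i]           # gradient of f_i = 0.5 (a_i^T x - b_i)^2
        x -= alpha * g

print(np.linalg.norm(A.T @ (A @ x - b) / n))   # full-gradient norm at the iterate
```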