Convergence rates for $L_2$ approximation in a Hilbert space $H$ are a central theme in numerical analysis. The present work is inspired by Schaback (Math. Comp., 1999), who showed, in the context of best pointwise approximation for radial basis function interpolation, that the convergence rate for sufficiently smooth functions can be doubled, compared to the best rate for functions in the "native space" $H$. Motivated by this, we obtain a general result for $H$-orthogonal projection onto a finite-dimensional subspace of $H$: namely, that any known $L_2$ convergence rate for all functions in $H$ translates into a doubled $L_2$ convergence rate for functions in a smoother normed space $B$, along with a similarly improved error bound in the $H$-norm, provided that $L_2$, $H$ and $B$ are suitably related. As a special case we improve the known $L_2$ and $H$-norm convergence rates for kernel interpolation in reproducing kernel Hilbert spaces, with particular attention to a recent study (Kaarnioja, Kazashi, Kuo, Nobile, Sloan, Numer. Math., 2022) of periodic kernel-based interpolation at lattice points applied to parametric partial differential equations. A second application is to radial basis function interpolation for general conditionally positive definite basis functions, where again the $L_2$ convergence rate is doubled, and the convergence rate in the native space norm is similarly improved, for all functions in a smoother normed space $B$.
In this paper, we introduce a highly accurate and efficient numerical solver for the radial Kohn--Sham equation. The equation is discretized using a high-order finite element method, whose performance is further improved by a parameter-free moving mesh technique. This approach greatly reduces the number of elements required to achieve the desired precision. In practice, the mesh redistribution takes no more than three steps, keeping the algorithm computationally efficient. Remarkably, with at most $13$ mesh elements, we successfully reproduce the NIST database results for atoms with atomic numbers ranging from $1$ to $92$.
In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
We consider linear models with scalar responses and covariates from a separable Hilbert space. The aim is to detect change points in the error distribution, based on sequential residual empirical distribution functions. Expansions for those estimated functions are more challenging in models with infinite-dimensional covariates than in regression models with scalar or vector-valued covariates due to a slower rate of convergence of the parameter estimators. Yet the suggested change point test is asymptotically distribution-free and consistent for one-change-point alternatives. In the latter case we also show consistency of a change point estimator.
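The underlying idea of comparing sequential residual empirical distribution functions can be illustrated with a generic CUSUM-type statistic computed on already-obtained residuals. This is a simplified sketch of the general approach, not the article's exact test statistic:

```python
def changepoint_stat(res):
    """Generic CUSUM-type statistic on residuals: compares the empirical
    distribution function of the first k residuals with that of all n,
    weighted by k/n and maximized over k and over evaluation points z.
    Illustrative sketch only, not the article's exact statistic."""
    n = len(res)

    def F(sample, z):  # empirical distribution function
        return sum(1 for r in sample if r <= z) / len(sample)

    stat = 0.0
    for k in range(1, n):
        for z in res:  # the residuals themselves suffice as evaluation points
            stat = max(stat, (k / n) * abs(F(res[:k], z) - F(res, z)))
    return stat
```

A residual sequence whose distribution shifts mid-sample yields a much larger statistic than one with a stable distribution, which is the basis for rejecting the no-change null.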
Interpreting data with mathematical models is an important aspect of real-world applied mathematical modeling. Very often we are interested in understanding the extent to which a particular data set informs and constrains model parameters. This question is closely related to the concept of parameter identifiability, and in this article we present a series of computational exercises that introduce tools for assessing parameter identifiability, estimating parameters, and generating model predictions. Taking a likelihood-based approach, we show that very similar ideas and algorithms can be used across a range of different mathematical modeling frameworks. The exercises and results presented in this article are supported by a suite of open-access codes available on GitHub.
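A minimal sketch of the likelihood-based identifiability idea for a hypothetical one-parameter exponential decay model (the model, the synthetic data, and the noise level are illustrative choices, not taken from the article): the normalized log-likelihood is evaluated on a grid, and the parameter is deemed practically identifiable when the region above the approximate 95% chi-square threshold $-1.92$ is a bounded interval.

```python
import math

def loglik(k, data, sigma=0.1):
    # Gaussian log-likelihood (up to a constant) for the illustrative
    # decay model y(t) = exp(-k * t) with noise level sigma
    return sum(-0.5 * ((y - math.exp(-k * t)) / sigma) ** 2 for t, y in data)

# synthetic, noise-free data generated from k_true = 1.0 (for illustration)
data = [(t / 4, math.exp(-t / 4)) for t in range(1, 9)]

# grid-based normalized log-likelihood over k in [0.5, 1.5]
grid = [0.5 + 0.01 * i for i in range(101)]
lls = [loglik(k, data) for k in grid]
ll_max = max(lls)

# grid points above the threshold give an approximate 95% interval for k;
# a bounded interval indicates practical identifiability
inside = [k for k, ll in zip(grid, lls) if ll - ll_max >= -1.92]
interval = (min(inside), max(inside))
```

The same normalize-and-threshold recipe extends to multi-parameter models via profile likelihoods, where nuisance parameters are optimized out at each grid point.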
The sequential probability ratio test (SPRT) of Wald (1945) is a cornerstone of sequential analysis. Based on desired type I and type II error levels $\alpha, \beta \in (0,1)$, it stops when the likelihood ratio statistic crosses certain upper and lower thresholds, guaranteeing optimality of the expected sample size. However, these thresholds are not available in closed form, and the test is often applied with the approximate thresholds $(1-\beta)/\alpha$ and $\beta/(1-\alpha)$ (approximate SPRT). When $\beta > 0$, this guarantees neither type I and type II error control at $\alpha, \beta$ nor optimality. When $\beta = 0$ (power-one SPRT), it guarantees type I error control at $\alpha$ that is in general conservative, and thus not optimal. The looseness in both cases is caused by overshoot: the test statistic overshoots the thresholds at the stopping time. One standard way to address this is to compute the correct thresholds numerically, but many papers and software packages do not do this. In this paper, we describe a different way to improve the approximate SPRT: we change the test statistic to avoid overshoot. Our technique uniformly improves power-one SPRTs ($\beta = 0$) for simple nulls and alternatives, or for one-sided nulls and alternatives in exponential families. When $\beta > 0$, our techniques provide valid type I and type II error guarantees, while requiring fewer samples than Wald's approximate thresholds in all considered simulations. These improved sequential tests can also be used for deriving tighter parametric confidence sequences, and can be extended to nontrivial settings such as sampling without replacement and conformal martingales.
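For concreteness, a minimal sketch of the approximate SPRT with Wald's thresholds $(1-\beta)/\alpha$ and $\beta/(1-\alpha)$, instantiated for a simple Gaussian mean test (the specific hypotheses and parameter values are illustrative, not from the paper):

```python
import math

def approximate_sprt(xs, mu0, mu1, sigma, alpha, beta):
    """Wald's approximate SPRT for H0: N(mu0, sigma^2) vs H1: N(mu1, sigma^2).

    Stops when the log-likelihood ratio crosses log((1-beta)/alpha) from
    below (accept H1) or log(beta/(1-alpha)) from above (accept H0).
    Returns (decision, number of samples used).
    """
    upper = math.log((1 - beta) / alpha)   # crossing here accepts H1
    lower = math.log(beta / (1 - alpha))   # crossing here accepts H0
    llr, n = 0.0, 0
    for n, x in enumerate(xs, start=1):
        # log of the Gaussian likelihood ratio N(x; mu1) / N(x; mu0)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n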
We prove explicit bounds on the exponential rate of convergence for the momentum stochastic gradient descent scheme (MSGD) for arbitrary, fixed hyperparameters (learning rate, friction parameter) and its continuous-in-time counterpart in the context of non-convex optimization. In the small step-size regime and in the case of flat minima or large noise intensities, these bounds prove faster convergence of MSGD compared to plain stochastic gradient descent (SGD). The results are shown for objective functions satisfying a local Polyak--{\L}ojasiewicz inequality and under assumptions on the variance of MSGD that are satisfied in overparametrized settings. Moreover, we analyze the optimal choice of the friction parameter and show that the MSGD process almost surely converges to a local minimum.
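A generic momentum-SGD iteration of the kind studied here can be sketched as follows; the heavy-ball-style update, the parameter names, and the test objective are illustrative stand-ins, not the paper's exact scheme:

```python
import random

def msgd(grad, x0, lr, friction, n_steps, noise=0.0, rng=None):
    """Generic momentum SGD: the velocity is damped by a friction parameter
    and driven by a (possibly noisy) gradient; lr is the learning rate.
    Illustrative sketch, not the exact scheme analyzed in the paper."""
    rng = rng or random.Random(0)
    x, v = x0, 0.0
    for _ in range(n_steps):
        g = grad(x) + noise * rng.gauss(0.0, 1.0)  # stochastic gradient
        v = (1.0 - friction * lr) * v - lr * g     # friction-damped velocity
        x = x + v                                  # position update
    return x
```

On a quadratic objective $f(x) = x^2/2$ (so $\nabla f(x) = x$) with zero noise, the iterate spirals into the minimum at $0$, and the friction parameter controls the damping of the oscillation, which is the trade-off the paper's analysis quantifies.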
We propose a novel, highly efficient, second-order accurate, long-time unconditionally stable numerical scheme for a class of finite-dimensional nonlinear models that are of importance in geophysical fluid dynamics. The scheme is highly efficient in the sense that only a (fixed) symmetric positive definite linear problem (with varying right hand sides) is involved at each time-step. The solutions to the scheme are uniformly bounded for all time. We show that the scheme captures the long-time dynamics of the underlying geophysical model, with both the global attractors and the invariant measures of the scheme converging to those of the original model as the step size approaches zero. In our numerical experiments, we take an indirect approach, using long-term statistics to approximate the invariant measures. Our results suggest that the convergence rate of the long-term statistics, as a function of terminal time, is approximately first order in the Jensen--Shannon metric and half order in the $L^1$ metric. This implies that very long simulations are needed to capture even a few significant digits of the long-term statistics (climate) correctly. Nevertheless, the second-order scheme's performance remains superior to that of the first-order one, requiring significantly less time to reach a small neighborhood of statistical equilibrium for a given step size.
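The two metrics used above to compare long-term statistics can be evaluated on histogram approximations of the invariant measures; a minimal sketch for discrete distributions (the example distributions in the test are illustrative):

```python
import math

def js_metric(p, q):
    """Jensen-Shannon metric between two discrete distributions: the square
    root of the JS divergence, with base-2 logs so the value lies in [0, 1]."""
    def kl(a, b):  # Kullback-Leibler divergence, skipping zero-mass terms
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]  # mixture midpoint
    return math.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def l1_metric(p, q):
    """Total L1 distance between two discrete distributions."""
    return sum(abs(x - y) for x, y in zip(p, q))
```

In practice one would bin long trajectories of the scheme and of a reference run into normalized histograms and track these distances as the terminal time grows, which is how the first-order and half-order rates quoted above would be measured.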
We develop two novel couplings between general pure-jump L\'evy processes in $\mathbb{R}^d$ and apply them to obtain upper bounds on the rate of convergence in an appropriate Wasserstein distance on the path space for a wide class of L\'evy processes attracted to a multidimensional stable process in the small-time regime. We also establish general lower bounds based on certain universal properties of slowly varying functions and the relationship between the Wasserstein and Toscani--Fourier distances of the marginals. Our upper and lower bounds typically have matching rates. In particular, the rate of convergence is polynomial for the domain of normal attraction and slower than a slowly varying function for the domain of non-normal attraction.
The gradient bounds of generalized barycentric coordinates play an essential role in the $H^1$ norm approximation error estimate of generalized barycentric interpolations. Similarly, the $H^k$ norm, $k>1$, estimate needs upper bounds of high-order derivatives, which are not available in the literature. In this paper, we derive such upper bounds for the Wachspress generalized barycentric coordinates on simple convex $d$-dimensional polytopes, $d\ge 1$. The result can be used to prove optimal convergence for Wachspress-based polytopal finite element approximation of, for example, fourth-order elliptic equations. Another contribution of this paper is to compare various shape-regularity conditions for simple convex polytopes, and to clarify their relations using knowledge from convex geometry.
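For context, the Wachspress construction whose high-order derivatives are bounded in the paper can be sketched in 2D: each coordinate is a ratio of signed triangle areas, normalized to sum to one. The construction below is the standard planar one; the polygon used in the test is an illustrative choice.

```python
def wachspress(verts, x):
    """Wachspress coordinates at a point x strictly inside a convex polygon
    with counterclockwise vertices verts (2D sketch of the standard formula
    w_i = C_i / (A_{i-1}(x) * A_i(x)), normalized to sum to one)."""
    def area(a, b, c):  # signed area of triangle (a, b, c)
        return 0.5 * ((b[0] - a[0]) * (c[1] - a[1])
                      - (c[0] - a[0]) * (b[1] - a[1]))
    n = len(verts)
    w = []
    for i in range(n):
        prev, cur, nxt = verts[i - 1], verts[i], verts[(i + 1) % n]
        # C_i = area of the corner triangle, divided by the two areas
        # spanned with the evaluation point x
        w.append(area(prev, cur, nxt) / (area(prev, cur, x) * area(cur, nxt, x)))
    s = sum(w)
    return [wi / s for wi in w]
```

The coordinates form a partition of unity and reproduce linear functions (linear precision), which is what makes derivative bounds for them translate into finite element approximation estimates.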
This paper presents a convolution tensor decomposition based model reduction method for solving the Allen-Cahn equation. The Allen-Cahn equation is usually used to characterize phase separation or the motion of anti-phase boundaries in materials. Its solution is time-consuming when high-resolution meshes and large time scale integration are involved. To resolve these issues, the convolution tensor decomposition method is developed, in conjunction with a stabilized semi-implicit scheme for time integration. The development enables a powerful computational framework for high-resolution solutions of Allen-Cahn problems, and allows the use of relatively large time increments for time integration without violating the discrete energy law. To further improve the efficiency and robustness of the method, an adaptive algorithm is also proposed. Numerical examples confirm the efficiency of the method in both 2D and 3D problems, with orders-of-magnitude speedups over the finite element method for high-resolution problems. The proposed computational framework opens numerous opportunities for simulating complex microstructure formation in materials on large-volume, high-resolution meshes at a greatly reduced computational cost.