In this paper, we discuss some numerical realizations of Shannon's sampling theorem. First, we demonstrate the poor convergence of classical Shannon sampling sums by presenting sharp upper and lower bounds for the norm of the Shannon sampling operator. In addition, it is known that in the presence of noise in the samples of a bandlimited function, the convergence of Shannon sampling series may even break down completely. To overcome these drawbacks, one can use oversampling and regularization with a suitable window function, which can be chosen either in the frequency domain or in the time domain. We place particular emphasis on comparing these two approaches in terms of error decay rates. It turns out that the best numerical results are obtained by oversampling and regularization in the time domain using a $\sinh$-type window function or a continuous Kaiser--Bessel window function, which results in an interpolating approximation with localized sampling. Several numerical experiments illustrate the theoretical results.
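For orientation, the regularized sampling sums in question take the following localized form, written in one common parameterization (the oversampled rate $L$, the truncation parameter $m$, and the shape parameter $\beta$ of the $\sinh$-type window are notational assumptions here, not a verbatim reproduction of the paper's formulas):
\[
f(t) \approx \sum_{\ell=\lceil Lt\rceil - m}^{\lfloor Lt\rfloor + m} f\Big(\frac{\ell}{L}\Big)\,\mathrm{sinc}\big(L\pi(t - \tfrac{\ell}{L})\big)\,\varphi\Big(t - \frac{\ell}{L}\Big),
\qquad
\varphi_{\sinh}(x) = \frac{\sinh\big(\beta\sqrt{1 - (Lx/m)^{2}}\,\big)}{\sinh\beta}\quad\text{for } |x|\le \frac{m}{L},
\]
and $\varphi_{\sinh}(x) = 0$ otherwise. Only the at most $2m+1$ samples nearest to $t$ enter the sum, which is what makes the sampling localized, and since $\mathrm{sinc}(0)=\varphi_{\sinh}(0)=1$ the approximation interpolates the given samples.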
We consider a linear implicit-explicit (IMEX) time discretization of the Cahn--Hilliard equation with a source term, endowed with Dirichlet boundary conditions. For every sufficiently small time step, we build an exponential attractor of the discrete-in-time dynamical system associated with the discretization. We prove that, as the time step tends to 0, this attractor converges with respect to the symmetric Hausdorff distance to an exponential attractor of the continuous-in-time dynamical system associated with the PDE. We also prove that the fractal dimension of the exponential attractor (and, consequently, of the global attractor) is bounded by a constant independent of the time step. The results also apply to the classical Cahn--Hilliard equation with Neumann boundary conditions.
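As a sketch of the type of scheme considered, written for the equation $\partial_t u + \Delta^2 u - \Delta f(u) = g$ with $f$ the derivative of a double-well potential, a linear IMEX step treats the biharmonic part implicitly and the nonlinearity explicitly (the stabilization terms and the treatment of the source term in the scheme analyzed here may differ):
\[
\frac{u^{n+1} - u^{n}}{\tau} + \Delta^2 u^{n+1} = \Delta f(u^{n}) + g,
\]
so that advancing one time step of size $\tau$ only requires solving a single linear fourth-order elliptic problem.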
Differential geometric approaches are ubiquitous in several fields of mathematics, physics, and engineering, and their discretizations enable the development of network-based mathematical and computational frameworks, which are essential for large-scale data science. The Forman--Ricci curvature (FRC), a statistical measure based on Riemannian geometry and designed for networks, is known for its high capacity for extracting geometric information from complex networks. However, extracting information from dense networks remains challenging due to the combinatorial explosion of high-order network structures. Motivated by this challenge, we develop a set-theoretic representation theory for high-order network cells and FRC, together with their associated concepts and properties, which provides an alternative and efficient formulation for computing high-order FRC in complex networks. We provide pseudocode, a software implementation coined FastForman, and a benchmark comparison with alternative implementations. Crucially, our representation theory reveals previous computational bottlenecks and accelerates the computation of FRC. As a consequence, our findings open new research possibilities in complex systems where higher-order geometric computations are required.
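To convey the set-theoretic flavor of the computation at the lowest order, the sketch below evaluates an augmented Forman--Ricci curvature of the edges of an unweighted graph, counting the triangles incident to an edge by a neighbor-set intersection. It is a minimal illustration under the standard unweighted formula $F^{\#}(e) = 4 - \deg(u) - \deg(v) + 3\,t(e)$, not the FastForman implementation itself:

```python
# Sketch: edge-level augmented Forman-Ricci curvature on an unweighted
# graph, with triangles counted via neighbor-set intersections.
# Minimal illustration only; not the FastForman implementation.

def forman_ricci(edges):
    """Return {edge: curvature} using F#(e) = 4 - deg(u) - deg(v) + 3 t(e),
    where t(e) is the number of triangles containing the edge e = (u, v)."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
        nbrs.setdefault(v, set()).add(u)
    curv = {}
    for u, v in edges:
        triangles = len(nbrs[u] & nbrs[v])   # set intersection counts triangles
        curv[(u, v)] = 4 - len(nbrs[u]) - len(nbrs[v]) + 3 * triangles
    return curv

# Example: a 4-clique, where every edge lies in exactly two triangles.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(forman_ricci(edges))   # each edge: 4 - 3 - 3 + 3*2 = 4
```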
In this paper, we introduce the flexible interpretable gamma (FIG) distribution, derived by Weibullisation of the body-tail generalised normal distribution. The generalised gamma (GG) distribution has become a staple model for positive data in statistics due to its interpretable parameters and tractable equations; although many generalised forms of the GG can provide a better fit to data, none of them extends the GG in a way that keeps the parameters interpretable. We verify, graphically and mathematically, that the FIG parameters have interpretable roles in controlling the left-tail, body, and right-tail shape. Additionally, we present some mathematical characteristics of the FIG and prove the identifiability of its parameters. Finally, we apply the FIG model to hand grip strength and insurance loss data to assess its flexibility relative to existing models.
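For reference, the GG baseline that the FIG extends has, in Stacy's parameterisation (the FIG density itself is not reproduced here), the density
\[
f(x; a, d, p) = \frac{p/a^{d}}{\Gamma(d/p)}\, x^{d-1} \exp\big(-(x/a)^{p}\big), \qquad x > 0,
\]
with scale $a > 0$ and shape parameters $d, p > 0$.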
To minimize the average of a set of log-convex functions, the stochastic Newton method iteratively updates its estimate using subsampled versions of the full objective's gradient and Hessian. We contextualize this optimization problem as sequential Bayesian inference on a latent state-space model with a discriminatively specified observation process. Applying Bayesian filtering then yields a novel optimization algorithm that considers the entire history of gradients and Hessians when forming an update. We establish matrix-based conditions under which the effect of older observations diminishes over time, in a manner analogous to Polyak's heavy ball momentum. We illustrate various aspects of our approach with an example and review other relevant innovations for the stochastic Newton method.
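As a point of reference for the first sentence above, a minimal sketch of the classical subsampled Newton baseline follows (this is not the Bayesian-filtering algorithm developed in the paper; the batch size, damping, and function interfaces are illustrative assumptions):

```python
import numpy as np

# Sketch: plain subsampled (stochastic) Newton. grad_i(i, x) and
# hess_i(i, x) return the gradient and Hessian of the i-th summand at x.
# Baseline illustration only; not the filtering-based method of the paper.

def stochastic_newton(grad_i, hess_i, x0, n, batch=32, steps=100,
                      damping=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    d = x.size
    for _ in range(steps):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        g = np.mean([grad_i(i, x) for i in idx], axis=0)   # subsampled gradient
        H = np.mean([hess_i(i, x) for i in idx], axis=0)   # subsampled Hessian
        # Damping keeps the subsampled Hessian safely invertible.
        x -= np.linalg.solve(H + damping * np.eye(d), g)
    return x
```

Unlike this baseline, which discards past mini-batches, the filtering view described above retains the entire history of gradients and Hessians in its update.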
Ghost, or fictitious, points make it possible to impose boundary conditions that are not located on the finite difference grid. In this paper, we explore the impact of ghost points on the stability of the explicit Euler finite difference scheme in the context of the diffusion equation. In particular, we consider the case of a one-touch option under the Black--Scholes model. The observations and results remain valid, however, for a much wider range of financial contracts and models.
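A minimal sketch of the mechanism follows, for the plain diffusion equation $u_t = u_{xx}$ with an absorbing boundary located between the ghost node and the first grid node (the one-touch Black--Scholes setting adds drift, discounting, and contract-specific boundary values; the grid sizes and barrier offset $\theta$ below are illustrative assumptions):

```python
import numpy as np

# Sketch: explicit Euler for u_t = u_xx on x > x_B, where the absorbing
# boundary u(x_B, t) = 0 lies between the ghost node x[0] - h and the first
# grid node x[0]. The ghost value is the linear extrapolation through
# (x_B, 0) and (x[0], u[0]).

x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
theta = 0.3                  # (x[0] - x_B) / h, in (0, 1]
dt = 0.4 * h**2              # stability needs dt/h^2 <= 2*theta/(1 + theta):
                             # the restriction tightens as theta -> 0,
                             # i.e. as the barrier approaches the grid node
u = np.sin(np.pi * x)        # initial condition, zero at both ends

for _ in range(200):
    u_ghost = u[0] * (theta - 1.0) / theta   # enforces u(x_B) = 0 linearly
    unew = u.copy()
    unew[1:-1] = u[1:-1] + dt * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    unew[0] = u[0] + dt * (u[1] - 2.0 * u[0] + u_ghost) / h**2
    u = unew                 # far boundary u[-1] stays at its initial value
```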
In this paper, we develop a numerical method for efficiently approximating solutions of certain Zakai equations in high dimensions. The key idea is to transform a given Zakai SPDE into a PDE with random coefficients. We show that under suitable regularity assumptions on the coefficients of the Zakai equation, the corresponding random PDE admits a solution random field which, for almost all realizations of the random coefficients, can be written as a classical solution of a linear parabolic PDE. This makes it possible to apply the Feynman--Kac formula to obtain an efficient Monte Carlo scheme for computing approximate solutions of Zakai equations. The approach achieves good results in up to 25 dimensions with fast run times.
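The resulting Monte Carlo scheme has the following generic shape, sketched here for a linear parabolic terminal-value problem (in the Zakai setting, $b$, $\sigma$, and $c$ would be the random coefficients produced by the transformation; all concrete values below are illustrative assumptions):

```python
import numpy as np

# Sketch: Feynman-Kac Monte Carlo for the terminal-value problem
#   u_t + b.grad(u) + (1/2) sigma^2 Lap(u) + c u = 0,  u(T, x) = phi(x),
# via  u(t, x) = E[ phi(X_T) * exp( int_t^T c(X_s) ds ) ]
# with Euler-Maruyama paths X. Generic illustration only.

def feynman_kac(x0, t, T, b, sigma, c, phi, n_paths=10_000, n_steps=100, seed=0):
    rng = np.random.default_rng(seed)
    dt = (T - t) / n_steps
    d = x0.size
    X = np.tile(x0, (n_paths, 1))          # all paths start at x0
    logw = np.zeros(n_paths)               # accumulated exponential weight
    for _ in range(n_steps):
        logw += c(X) * dt
        dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, d))
        X = X + b(X) * dt + sigma * dW     # Euler-Maruyama step
    return np.mean(phi(X) * np.exp(logw))

# Example in 25 dimensions with constant coefficients (illustrative).
d = 25
est = feynman_kac(np.zeros(d), 0.0, 1.0,
                  b=lambda X: np.zeros_like(X), sigma=0.5,
                  c=lambda X: -0.1 * np.ones(X.shape[0]),
                  phi=lambda X: np.exp(-0.5 * np.sum(X**2, axis=1)))
print(est)
```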
In the context of finite-sum minimization, variance reduction techniques are widely used to improve the performance of state-of-the-art stochastic gradient methods, and both their practical impact and their theoretical properties are well understood. Stochastic proximal point algorithms have been studied as an alternative to stochastic gradient algorithms because they are more stable with respect to the choice of the step size, but a proper variance-reduced version has been missing. In this work, we propose the first study of variance reduction techniques for stochastic proximal point algorithms. We introduce stochastic proximal versions of SVRG, SAGA, and some of their variants for smooth and convex functions. We provide several convergence results for the iterates and the objective function values. In addition, under the Polyak--{\L}ojasiewicz (PL) condition, we obtain linear convergence rates for both the iterates and the function values. Our numerical experiments demonstrate the advantages of proximal variance reduction methods over their gradient counterparts, especially in terms of stability with respect to the choice of the step size.
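One natural way to graft SVRG-style variance reduction onto a proximal point step, shown here for intuition (the precise updates analyzed in the paper may differ), is
\[
x_{k+1} = \operatorname*{arg\,min}_{x} \Big\{ f_{i_k}(x) + \big\langle \nabla F(\tilde{x}) - \nabla f_{i_k}(\tilde{x}),\, x \big\rangle + \frac{1}{2\gamma}\,\|x - x_k\|^{2} \Big\},
\]
where $F = \frac{1}{n}\sum_{i=1}^{n} f_i$ is the full objective, $i_k$ is sampled uniformly, $\tilde{x}$ is the snapshot point at which the full gradient is refreshed, and $\gamma > 0$ is the step size. Setting the linear correction term to zero recovers the plain stochastic proximal point step.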
In this paper, we study a numerical artifact that arises when solving the nonlinear shallow water equations with a discontinuous bottom topography. For various first-order schemes, the numerical solution of the momentum forms a spurious spike at the discontinuity points of the bottom, which should not exist in the exact solution. The height of the spike cannot be reduced even after the mesh is refined. For subsonic problems, this numerical artifact may cause convergence to a wrong solution far from the exact one. To explain the formation of the spurious spike, we perform a convergence analysis by proving a Lax--Wendroff type theorem. It is shown that the spurious spike is caused by the numerical viscosity in the computation of the water height at the discontinuous bottom, and that its height is proportional to the magnitude of the viscosity constant in the Lax--Friedrichs flux. Motivated by this conclusion, we propose a modified scheme that adopts the central flux at the bottom discontinuity in the mass conservation equation, and we show that this numerical artifact is removed in many cases. For various numerical tests with non-transonic Riemann solutions, we observe that the modified scheme recovers the correct convergence.
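Concretely, with the standard Lax--Friedrichs and central numerical fluxes
\[
\hat{F}^{\mathrm{LF}}(u_l, u_r) = \frac{1}{2}\big(f(u_l) + f(u_r)\big) - \frac{\alpha}{2}\,(u_r - u_l),
\qquad
\hat{F}^{\mathrm{C}}(u_l, u_r) = \frac{1}{2}\big(f(u_l) + f(u_r)\big),
\]
the modification consists of replacing $\hat{F}^{\mathrm{LF}}$ by $\hat{F}^{\mathrm{C}}$ in the mass conservation equation only at the cell interface where the bottom is discontinuous, thereby removing the viscosity term $-\frac{\alpha}{2}(h_r - h_l)$ acting on the water height $h$; here $\alpha > 0$ is the viscosity constant mentioned above.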
In this paper, we introduce several geometric characterizations for strong minima of optimization problems. Applying these results to nuclear norm minimization problems allows us to obtain new necessary and sufficient quantitative conditions for this important property. Our characterizations for strong minima are weaker than the Restricted Injectivity and Nondegenerate Source Condition, which are usually used to identify solution uniqueness of nuclear norm minimization problems. Consequently, we obtain the minimum (tight) bound on the number of measurements for (strong) exact recovery of low-rank matrices.
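In standard form, the problem referred to here is
\[
\min_{X \in \mathbb{R}^{n_1 \times n_2}} \ \|X\|_{*} \quad \text{subject to} \quad \Phi(X) = \Phi(X_0),
\]
where $\|X\|_{*}$ denotes the nuclear norm (the sum of the singular values of $X$), $\Phi \colon \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^{m}$ is the linear measurement operator, and $X_0$ is the low-rank matrix to be recovered; exact recovery means that $X_0$ is the unique solution of this problem, and strong exact recovery asks, in the terminology above, that $X_0$ be a strong minimum.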
This paper investigates the density convergence of a fully discrete finite difference method applied to the stochastic Cahn--Hilliard equation driven by multiplicative space-time white noise. The main difficulty lies in controlling the drift coefficient, which is neither globally Lipschitz nor one-sided Lipschitz. To handle this difficulty, we propose a novel localization argument and derive the strong convergence rate of the numerical solution in order to estimate the total variation distance between the exact and numerical solutions. This, together with the existence of the density of the numerical solution, yields the convergence in $L^1(\mathbb{R})$ of the density of the numerical solution. Our results give a partial positive answer to the open problem raised in [J. Cui and J. Hong, J. Differential Equations (2020)] on numerically computing the density of the exact solution.
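The link between the two notions used above is the standard identity for laws with densities: for probability measures $\mu, \nu$ on $\mathbb{R}$ with densities $p, q$,
\[
d_{\mathrm{TV}}(\mu, \nu) = \sup_{A \in \mathcal{B}(\mathbb{R})} |\mu(A) - \nu(A)| = \frac{1}{2} \int_{\mathbb{R}} |p(x) - q(x)|\, \mathrm{d}x,
\]
so total variation estimates between the exact and numerical solutions, combined with the existence of the densities, yield convergence of the densities in $L^1(\mathbb{R})$.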