In this paper, we study variable-order (VO) time-fractional diffusion equations. For a VO function $\alpha(t)\in(0,1)$, we develop an exponential-sum-approximation (ESA) technique to approximate the VO Caputo fractional derivative. The ESA technique keeps both the quadrature exponents and the number of exponentials in the summation unchanged across the different time levels. The approximation parameters are selected so as to achieve the desired accuracy efficiently. Compared with the direct method, the proposed method reduces the storage requirement from $\mathcal{O}(n)$ to $\mathcal{O}(\log^2 n)$ and the computational cost from $\mathcal{O}(n^2)$ to $\mathcal{O}(n\log^2 n)$, where $n$ is the number of time levels. When this fast algorithm is exploited to construct a fast ESA scheme for the VO time-fractional diffusion equations, the computational complexity of the proposed scheme is only $\mathcal{O}(mn\log^2 n)$, with an $\mathcal{O}(m\log^2 n)$ storage requirement, where $m$ denotes the number of spatial grid points. Theoretically, the unconditional stability and error analysis of the fast ESA scheme are established. The effectiveness of the proposed algorithm is verified by numerical examples.
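To make the kernel-compression idea concrete, here is a minimal sketch (with illustrative quadrature parameters, not the paper's selection rule): the power-law kernel $t^{-\alpha}$ is approximated by a sum of exponentials via trapezoidal quadrature of $t^{-\alpha}=\frac{1}{\Gamma(\alpha)}\int_0^\infty s^{\alpha-1}e^{-st}\,\mathrm{d}s$ after the substitution $s=e^x$. Note that the exponents $s_j$ do not depend on $\alpha$, so they can be shared across time levels even when $\alpha(t)$ varies; only the weights change.

```python
import numpy as np
from scipy.special import gamma

def esa_weights(alpha, h=0.5, xmin=-40.0, xmax=40.0):
    """Trapezoidal quadrature of t^{-alpha} = (1/Gamma(alpha)) int_0^inf s^(alpha-1) e^(-s t) ds
    with s = exp(x), giving t^{-alpha} ~= sum_j w_j exp(-s_j t) away from t = 0.
    The exponents s_j are independent of alpha; only the weights w_j depend on it."""
    x = np.arange(xmin, xmax + h, h)
    return np.exp(x), h * np.exp(alpha * x) / gamma(alpha)

alpha = 0.6                          # hypothetical value of alpha(t_n) at one time level
s, w = esa_weights(alpha)
t = np.linspace(0.01, 1.0, 50)       # the kernel is approximated on [delta, T], delta > 0
approx = (w * np.exp(-np.outer(t, s))).sum(axis=1)
print(np.max(np.abs(approx - t**(-alpha)) * t**alpha))   # small relative error
```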
We propose an operator preconditioner for general elliptic pseudodifferential equations in a domain $\Omega$, where $\Omega$ is either in $\mathbb{R}^n$ or in a Riemannian manifold. For linear systems of equations arising from low-order Galerkin discretizations, we obtain condition numbers that are independent of the mesh size and of the choice of bases for test and trial functions. The basic ingredient is a classical formula by Boggio for the fractional Laplacian, which is extended analytically. In the special case of the weakly and hypersingular operators on a line segment or a screen, our approach gives a unified, independent proof for a series of recent results by Hiptmair, Jerez-Hanckes, N\'{e}d\'{e}lec and Urz\'{u}a-Torres. We also study how the regularity assumptions on the mesh become increasingly important as the order of the operator grows. Numerical examples validate our theoretical findings and illustrate the performance of the proposed preconditioner on quasi-uniform, graded and adaptively generated meshes.
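For orientation, the generic principle behind such mesh-independent bounds (stated here schematically, in the spirit of operator preconditioning, and not as this paper's specific theorem) is the following: if $\mathsf{A}$ and $\mathsf{B}$ are Galerkin matrices of boundedly invertible elliptic pseudodifferential operators of orders $+2s$ and $-2s$, and $\mathsf{M}$ is the matrix of the duality pairing between the trial and test spaces, then
\[
\kappa\!\left(\mathsf{M}^{-1}\mathsf{B}\,\mathsf{M}^{-\mathsf{T}}\mathsf{A}\right)\;\le\;C,
\]
where $C$ depends only on the continuity and inf-sup constants of the two operators and of the pairing, not on the mesh size or on the bases. The contribution here lies in constructing a suitable operator of opposite order via the analytic extension of Boggio's formula.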
In this paper, we develop a numerical method for the L\'evy-Fokker-Planck equation with the fractional diffusive scaling. There are two main challenges. One comes from a two-fold nonlocality, that is, the need to apply the fractional Laplacian operator to a distribution with power-law decay. The other arises from the long-time/small mean-free-path scaling, which introduces stiffness into the equation. To resolve the first difficulty, we use a change of variable to convert the unbounded domain into a bounded one and then apply the Chebyshev-polynomial-based pseudo-spectral method. To treat the multiple scales, we propose an asymptotic-preserving scheme based on a novel micro-macro decomposition that exploits the structure of the test function used in the analytical proof of the fractional diffusion limit. Finally, the efficiency and accuracy of our scheme are illustrated by a suite of numerical examples.
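As a small illustration of the first ingredient, the sketch below (with a hypothetical algebraic map and scaling parameter $L$, not necessarily those used in the paper) transplants the real line onto $(-1,1)$ and differentiates with a Chebyshev pseudo-spectral matrix:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points on [-1, 1] (cf. Trefethen, Spectral Methods in MATLAB)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    D = np.outer(c, 1.0 / c) / (x[:, None] - x[None, :] + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

# Map z in (-1, 1) -> v in (-inf, inf) via v = L z / sqrt(1 - z^2); the chain rule
# gives d/dv = (1 - z^2)^{3/2} / L * d/dz.
L, N = 5.0, 64
D, z = cheb(N)
zi = z[1:-1]                            # drop z = +-1, which map to infinity
v = L * zi / np.sqrt(1.0 - zi**2)
f = np.exp(-v**2)                       # decaying test function (vanishes at the endpoints)
df_dv = (1.0 - zi**2)**1.5 / L * (D[1:-1, 1:-1] @ f)
print(np.max(np.abs(df_dv + 2.0 * v * f)))   # error vs. exact derivative -2 v exp(-v^2)
```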
We consider the problem of parameter estimation for a class of continuous-time state space models. In particular, we explore the case of a partially observed diffusion, with data also arriving according to a diffusion process. Based upon a standard identity for the score function, we consider two particle filter based methodologies to estimate the score function. Both methods rely on an online estimation algorithm for the score function of $\mathcal{O}(N^2)$ cost, with $N\in\mathbb{N}$ the number of particles. The first approach employs a simple Euler discretization and standard particle smoothers and is of cost $\mathcal{O}(N^2 + N\Delta_l^{-1})$ per unit time, where $\Delta_l=2^{-l}$, $l\in\mathbb{N}_0$, is the time-discretization step. The second approach is new and based upon a novel diffusion bridge construction. It yields a new backward-type Feynman-Kac formula in continuous time for the score function and is presented along with a particle method for its approximation. Under a time discretization, the cost is $\mathcal{O}(N^2\Delta_l^{-1})$ per unit time. To reduce the computational cost, we then consider multilevel methodologies for the score function. We illustrate our parameter estimation method via stochastic gradient approaches in several numerical examples.
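The standard identity referred to is presumably Fisher's identity, which in this continuous-time setting reads
\[
\nabla_\theta \log p_\theta\big(y_{[0,T]}\big)\;=\;\mathbb{E}_\theta\!\left[\nabla_\theta \log p_\theta\big(x_{[0,T]},y_{[0,T]}\big)\,\middle|\,y_{[0,T]}\right],
\]
so that computing the score reduces to a smoothing expectation, which the particle methods above approximate recursively in time.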
The $p$-step backward difference formula (BDF) for solving systems of ODEs can result in a class of all-at-once linear systems, which are solved via parallel-in-time preconditioned Krylov subspace solvers (see McDonald, Pestana, and Wathen [SIAM J. Sci. Comput., 40(2) (2018): A1012-A1033] and Lin and Ng [arXiv:2002.01108, 17 pages]). However, these studies ignored that the $p$-step BDF ($p\geq 2$) is not self-starting when it is used to solve time-dependent PDEs. In this note, we focus on the 2-step BDF, which is often superior to the trapezoidal rule for solving the Riesz fractional diffusion equations, but whose resultant all-at-once discretized system is a block triangular Toeplitz system with a low-rank perturbation. We first give an estimate of the condition number of the all-at-once systems and then adapt the previous work to construct two block circulant (BC) preconditioners. Both the invertibility of these two BC preconditioners and the eigenvalue distributions of the preconditioned matrices are discussed in detail. The efficient implementation of these BC preconditioners is also presented, especially for handling the computation of dense structured Jacobi matrices. Finally, numerical experiments involving both one- and two-dimensional Riesz fractional diffusion equations are reported to support our theoretical findings.
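For intuition, the following scalar sketch (a toy problem $u'=\lambda u$; in the fractional PDE setting each scalar entry becomes a spatial block, and the non-self-starting first step contributes the low-rank perturbation, which is omitted here) builds the BDF2 all-at-once Toeplitz matrix and a circulant preconditioner of the BC type. Since the Toeplitz matrix and its circulant counterpart differ only by a low-rank corner correction, all but a few eigenvalues of the preconditioned matrix equal one:

```python
import numpy as np
from scipy.linalg import toeplitz, circulant

n, dt, lam = 64, 0.05, -1.0
col = np.zeros(n)
col[:3] = np.array([1.5, -2.0, 0.5]) / dt          # BDF2: (3u^k - 4u^{k-1} + u^{k-2}) / (2 dt)
A = toeplitz(col, np.zeros(n)) - lam * np.eye(n)   # all-at-once lower-triangular Toeplitz system
C = circulant(col) - lam * np.eye(n)               # block-circulant (BC) preconditioner
evals = np.linalg.eigvals(np.linalg.solve(C, A))
print(np.sort(np.abs(evals - 1.0))[-5:])           # at most two eigenvalues deviate from 1
```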
The numerical integration of phase-field equations is a delicate task which needs to recover, at the discrete level, intrinsic properties of the solution such as energy dissipation and the maximum principle. Although the theory of energy dissipation for classical phase-field models is well established, the corresponding theory for time-fractional phase-field models is still incomplete. In this article, we study certain nonlocal-in-time energies using the first-order stabilized semi-implicit L1 scheme. In particular, we establish a discrete fractional energy law and a discrete weighted energy law. The extension to a $(2-{\alpha})$-order L1 scalar auxiliary variable scheme is also investigated. Moreover, we demonstrate that the energy bound is preserved for the L1 schemes with nonuniform time steps. Several numerical experiments are carried out to verify our theoretical analysis.
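For reference, on a uniform mesh $t_n=n\tau$ the L1 approximation of the Caputo derivative of order $\alpha\in(0,1)$, obtained by piecewise-linear interpolation of $u$ under the fractional integral, is
\[
\partial_t^{\alpha}u(t_n)\;\approx\;\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}\sum_{k=0}^{n-1}a_{n-1-k}\big(u^{k+1}-u^{k}\big),\qquad a_j=(j+1)^{1-\alpha}-j^{1-\alpha};
\]
the discrete energy laws above are formulated at the level of these coefficients, and the nonuniform-step case modifies the $a_j$ accordingly.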
We propose and compare methods for the analysis of extreme events in complex systems governed by PDEs that involve random parameters, in situations where we are interested in quantifying the probability that a scalar function of the system's solution is above a threshold. If the threshold is large, this probability is small and its accurate estimation is challenging. To tackle this difficulty, we blend theoretical results from large deviation theory (LDT) with numerical tools from PDE-constrained optimization. Our methods first compute parameters that minimize the LDT rate function over the set of parameters leading to extreme events, using adjoint methods to compute the gradient of this rate function. The minimizers give information about the mechanism of the extreme events as well as estimates of their probability. We then propose a series of methods to refine these estimates, either via importance sampling or geometric approximation of the extreme event sets. Results are formulated for general parameter distributions, and detailed expressions are provided for the case of Gaussian distributions. We give theoretical and numerical arguments showing that the performance of our methods is insensitive to the extremeness of the events we are interested in. We illustrate the application of our approach by quantifying the probability of extreme tsunami events on shore. Tsunamis are typically caused by a sudden, unpredictable change of the ocean floor elevation during an earthquake. We model this change as a random process, which takes into account the underlying physics. We use the one-dimensional shallow water equation to model tsunamis numerically. In the context of this example, we present a comparison of our methods for extreme event probability estimation, and find which type of ocean floor elevation change leads to the largest tsunamis on shore.
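Schematically (in notation chosen here, not taken from the paper), writing $F(\theta)$ for the map from the random parameters through the PDE solution to the scalar of interest, the LDT step solves
\[
\theta^{\star}\;=\;\operatorname*{argmin}_{\theta\,:\,F(\theta)\ge z} I(\theta),\qquad P\big(F(\theta)\ge z\big)\;\asymp\;e^{-I(\theta^{\star})},
\]
where, for Gaussian parameters with covariance $C$, the rate function is $I(\theta)=\frac12\,\theta^{\mathsf{T}}C^{-1}\theta$; the adjoint method supplies the gradients needed by the constrained optimizer.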
Classically, transmission conditions between subdomains are optimized for a simplified two-subdomain decomposition to obtain optimized Schwarz methods for many subdomains. We investigate here whether such a simplified optimization suffices for the magnetotelluric approximation of Maxwell's equations, which leads to a complex diffusion problem. We start with a direct analysis for 2 and 3 subdomains, and present asymptotically optimized transmission conditions in each case. We then optimize transmission conditions numerically for 4, 5 and 6 subdomains and observe the same asymptotic behavior of the optimized transmission conditions. We finally use the technique of limiting spectra to optimize for a very large number of subdomains in a strip decomposition. Our analysis shows that the asymptotically best choice of transmission conditions is the same in all these situations; only the constants differ slightly. It is therefore enough for such diffusive-type approximations of Maxwell's equations, which include the special case of the Laplace and screened Laplace equation, to optimize transmission parameters in the simplified two-subdomain decomposition setting to obtain good transmission conditions for optimized Schwarz methods for more general decompositions.
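For concreteness, the zeroth-order (Robin-type) transmission conditions typically optimized in this setting take the form (generic notation, not the paper's)
\[
\big(\partial_{n_i}+p\big)\,u_i^{k+1}\;=\;\big(\partial_{n_i}+p\big)\,u_j^{k}\quad\text{on }\Gamma_{ij},
\]
where $u_i^{k}$ is the iterate on subdomain $\Omega_i$, $\Gamma_{ij}$ is the interface with the neighbor $\Omega_j$, and the (possibly complex) parameter $p$ is chosen to minimize the convergence factor over the frequencies resolved by the mesh.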
Matrices with low numerical rank are omnipresent in many signal processing and data analysis applications. The pivoted QLP (p-QLP) algorithm constructs a highly accurate approximation to an input low-rank matrix. However, it is computationally prohibitive for large matrices. In this paper, we introduce a new algorithm, termed Projection-based Partial QLP (PbP-QLP), that efficiently approximates the p-QLP with high accuracy. Fundamental to our work is the exploitation of randomization; in contrast to the p-QLP, PbP-QLP does not use pivoting. As such, PbP-QLP can harness modern computer architectures, even better than competing randomized algorithms. The efficiency and effectiveness of our proposed PbP-QLP algorithm are investigated through various classes of synthetic and real-world data matrices.
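As a rough illustration of the pivoting-free, randomized idea (a sketch in the spirit of PbP-QLP, not the authors' exact algorithm), one can combine a randomized rangefinder with an unpivoted QR to obtain a QLP-type factorization whose middle factor tracks the singular values:

```python
import numpy as np

def rand_qlp(A, k, p=10):
    """Randomized, pivoting-free QLP-type rank-(k+p) approximation: A ~= Q @ L @ P.T
    with Q, P orthonormal and L lower triangular (illustrative sketch only)."""
    Omega = np.random.randn(A.shape[1], k + p)   # Gaussian test matrix, oversampled by p
    Q, _ = np.linalg.qr(A @ Omega)               # orthonormal basis for the range of A
    B = Q.T @ A
    P, R = np.linalg.qr(B.T)                     # unpivoted QR: B = R.T @ P.T
    return Q, R.T, P

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((300, 50)))
V, _ = np.linalg.qr(rng.standard_normal((200, 50)))
A = U @ np.diag(2.0 ** -np.arange(50.0)) @ V.T   # test matrix with decaying spectrum
Q, L, P = rand_qlp(A, k=20)
print(np.linalg.norm(A - Q @ L @ P.T))           # on the order of the neglected singular values
```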
We consider the multivariate max-linear regression problem where the model parameters $\boldsymbol{\beta}_{1},\dotsc,\boldsymbol{\beta}_{k}\in\mathbb{R}^{p}$ need to be estimated from $n$ independent samples of the (noisy) observations $y = \max_{1\leq j \leq k} \boldsymbol{\beta}_{j}^{\mathsf{T}} \boldsymbol{x} + \mathrm{noise}$. The max-linear model vastly generalizes the conventional linear model, and it can approximate any convex function to arbitrary accuracy when the number of linear models $k$ is large enough. However, the inherent nonlinearity of the max-linear model renders the estimation of the regression parameters computationally challenging. In particular, no estimator based on convex programming is known in the literature. We formulate and analyze a scalable convex program as the estimator for the max-linear regression problem. Under the standard Gaussian observation setting, we present a non-asymptotic performance guarantee showing that the convex program recovers the parameters with high probability. When the $k$ linear components are equally likely to achieve the maximum, our result shows that a sufficient number of observations scales as $k^{2}p$ up to a logarithmic factor. This significantly improves on the analogous prior result based on alternating minimization (Ghosh et al., 2019). Finally, through a set of Monte Carlo simulations, we illustrate that our theoretical result is consistent with empirical behavior, and that the convex estimator for max-linear regression is competitive with the alternating minimization algorithm in practice.
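For a feel of the estimation problem, the snippet below generates data from the max-linear model and runs a basic alternating-minimization baseline (in the spirit of Ghosh et al., 2019, started near the truth since AM is a local method); the paper's convex-programming estimator is more involved and is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k, sigma = 2000, 5, 3, 0.05
beta_true = rng.standard_normal((k, p))
X = rng.standard_normal((n, p))
y = (X @ beta_true.T).max(axis=1) + sigma * rng.standard_normal(n)

beta = beta_true + 0.1 * rng.standard_normal((k, p))   # warm start near the truth
for _ in range(20):
    assign = (X @ beta.T).argmax(axis=1)     # which component currently attains the max
    for j in range(k):
        idx = assign == j
        if idx.sum() >= p:                   # refit component j on its assigned samples
            beta[j] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
print(np.abs(beta - beta_true).max())        # small parameter error
```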
In order to avoid the curse of dimensionality, frequently encountered in Big Data analysis, there has been extensive development of linear and nonlinear dimension reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lies on a lower dimensional manifold, so the high dimensionality problem can be overcome by learning the lower dimensional behavior. However, in real-life applications, data is often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower dimensional manifold and suggest a nonlinear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework to perform operations directly on the high dimensional data in a computationally efficient manner. This way, the preparatory step of dimension reduction, which induces distortions in the data, can be avoided altogether.
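For background, the classical moving least-squares template underlying the construction approximates a function $f$ near a point $x$ by
\[
\tilde f(x)=p_x(x),\qquad p_x=\operatorname*{argmin}_{p\in\Pi_m}\;\sum_i\big(p(x_i)-f(x_i)\big)^2\,\theta\big(\lVert x-x_i\rVert\big),
\]
with a fast-decaying weight function $\theta$; the manifold variant replaces the fixed domain by a locally estimated $d$-dimensional coordinate chart and projects the cloud onto the resulting local polynomial patch, which is the source of the $O(h^{m+1})$ rate quoted above.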