In this paper, a Lie symmetry analysis is carried out for a space-time convection-diffusion fractional differential equation with the Riemann-Liouville derivative in (2+1) independent variables and one dependent variable. Using the similarity solutions associated with the Lie symmetries, we obtain a reduced form of the governing fractional differential equation. A one-dimensional optimal system of the Lie symmetry algebra is also determined. Finally, we present a computational method, a spectral method based on Bernstein operational matrices, to solve the two-dimensional fractional heat equation with given initial conditions.
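The operational-matrix construction itself is not reproduced here; as a minimal sketch of one ingredient only (degree and fractional order chosen arbitrarily for illustration), the snippet below expands the Bernstein basis polynomials into monomials and applies the Riemann-Liouville fractional integral term by term via $I^{\alpha}x^{k}=\frac{\Gamma(k+1)}{\Gamma(k+1+\alpha)}x^{k+\alpha}$.

```python
# Minimal sketch (not the paper's operational-matrix method): expand Bernstein
# basis polynomials into monomials and apply the Riemann-Liouville fractional
# integral term by term, using I^alpha x^k = Gamma(k+1)/Gamma(k+1+alpha) x^(k+alpha).
import numpy as np
from math import comb, gamma

def bernstein_monomial_coeffs(n):
    """Coefficients c[i, k] such that B_{i,n}(x) = sum_k c[i, k] x^k."""
    C = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for k in range(i, n + 1):
            C[i, k] = comb(n, i) * comb(n - i, k - i) * (-1) ** (k - i)
    return C

def rl_integral_of_bernstein(n, alpha, x):
    """Evaluate the order-alpha Riemann-Liouville integral of B_{i,n} at the points x."""
    C = bernstein_monomial_coeffs(n)
    x = np.asarray(x, dtype=float)
    out = np.zeros((n + 1, x.size))
    for i in range(n + 1):
        for k in range(n + 1):
            if C[i, k] != 0.0:
                out[i] += C[i, k] * gamma(k + 1) / gamma(k + 1 + alpha) * x ** (k + alpha)
    return out

vals = rl_integral_of_bernstein(n=4, alpha=0.5, x=np.linspace(0.0, 1.0, 5))
```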
We study the scaling limits of stochastic gradient descent (SGD) with constant step-size in the high-dimensional regime. We prove limit theorems for the trajectories of summary statistics (i.e., finite-dimensional functions) of SGD as the dimension goes to infinity. Our approach allows one to choose the summary statistics that are tracked, the initialization, and the step-size. It yields both ballistic (ODE) and diffusive (SDE) limits, with the limit depending dramatically on the former choices. Interestingly, we find a critical scaling regime for the step-size below which the effective ballistic dynamics matches gradient flow for the population loss, but at which a new correction term appears that changes the phase diagram. Around the fixed points of this effective dynamics, the corresponding diffusive limits can be quite complex and even degenerate. We demonstrate our approach on popular examples including estimation for spiked matrix and tensor models and classification via two-layer networks for binary and XOR-type Gaussian mixture models. These examples exhibit surprising phenomena including multimodal timescales to convergence as well as convergence to sub-optimal solutions with probability bounded away from zero from random (e.g., Gaussian) initializations.
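As a hedged toy illustration of the setting (an online linear teacher-student regression, not the paper's spiked tensor or XOR-mixture examples, with the constant c and the tracked statistics chosen for illustration), the sketch below runs constant step-size SGD in dimension d with the 1/d step-size scaling and records two summary statistics, the overlap with the teacher and the squared norm, whose trajectories concentrate around a ballistic (ODE) limit as d grows.

```python
# Toy simulation: constant step-size online SGD on a linear teacher-student model,
# tracking the summary statistics m_t = <x_t, v> and r_t = |x_t|^2.
import numpy as np

rng = np.random.default_rng(0)
d, n_steps, c = 2000, 20_000, 0.5
delta = c / d                                  # constant step-size in the 1/d scaling
v = np.ones(d) / np.sqrt(d)                    # teacher direction, |v| = 1
x = rng.standard_normal(d) / np.sqrt(d)        # random (Gaussian) initialization, |x| ~ 1

overlap, sq_norm = [], []
for t in range(n_steps):
    a = rng.standard_normal(d)
    y = a @ v                                  # noiseless teacher label
    x += delta * (y - a @ x) * a               # SGD step on the squared loss
    if t % 100 == 0:
        overlap.append(x @ v)                  # drifts from ~0 toward 1 along an ODE path
        sq_norm.append(x @ x)
```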
High-dimensional Partial Differential Equations (PDEs) are a popular mathematical modelling tool, with applications ranging from finance to computational chemistry. However, standard numerical techniques for solving these PDEs are typically affected by the curse of dimensionality. In this work, we tackle this challenge while focusing on stationary diffusion equations defined over a high-dimensional domain with periodic boundary conditions. Inspired by recent progress in sparse function approximation in high dimensions, we propose a new method called compressive Fourier collocation. Combining ideas from compressive sensing and spectral collocation, our method replaces the use of structured collocation grids with Monte Carlo sampling and employs sparse recovery techniques, such as orthogonal matching pursuit and $\ell^1$ minimization, to approximate the Fourier coefficients of the PDE solution. We conduct a rigorous theoretical analysis showing that the approximation error of the proposed method is comparable with the best $s$-term approximation (with respect to the Fourier basis) to the solution. Using the recently introduced framework of random sampling in bounded Riesz systems, our analysis shows that the compressive Fourier collocation method mitigates the curse of dimensionality with respect to the number of collocation points under sufficient conditions on the regularity of the diffusion coefficient. We also present numerical experiments that illustrate the accuracy and stability of the method for the approximation of sparse and compressible solutions.
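A minimal one-dimensional sketch of the idea (a constant-coefficient Poisson problem with periodic boundary conditions, not the paper's high-dimensional variable-coefficient setting; the frequencies, sparsity level, and sample size are illustrative) is given below: the Fourier coefficients of $-u''=f$ are recovered from randomly drawn collocation points using orthogonal matching pursuit.

```python
# 1D toy of compressive Fourier collocation: Monte Carlo collocation points plus
# sparse recovery (OMP) of the trigonometric coefficients of -u'' = f.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
K, m = 50, 40                                   # max frequency, number of samples
ks = np.arange(1, K + 1)

def basis(x):
    """Columns: cos(2*pi*k*x), sin(2*pi*k*x) for k = 1..K."""
    x = np.atleast_1d(x)[:, None]
    return np.hstack([np.cos(2 * np.pi * ks * x), np.sin(2 * np.pi * ks * x)])

# Sparse ground truth u(x) = 2 cos(6 pi x) - sin(14 pi x), hence f = -u''.
def f(x):
    return (2 * np.pi * 3) ** 2 * 2 * np.cos(2 * np.pi * 3 * x) \
         - (2 * np.pi * 7) ** 2 * np.sin(2 * np.pi * 7 * x)

x_pts = rng.uniform(0.0, 1.0, m)                # Monte Carlo collocation points
symbols = np.concatenate([(2 * np.pi * ks) ** 2] * 2)   # action of -d^2/dx^2 on the basis
A = basis(x_pts) * symbols                      # collocation matrix
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False)
omp.fit(A, f(x_pts))
coeffs = omp.coef_                              # recovered trigonometric coefficients of u
```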
In this paper, we are concerned with the numerical solution of the two-dimensional time-fractional Fokker-Planck equation with a tempered fractional derivative of order $\alpha$. Although some of its variants have been considered in many recent numerical analysis papers, significant differences remain. Here we first provide regularity estimates of the solution. Then a modified $L1$ scheme, inspired by the middle rectangle quadrature formula on graded meshes, is employed to compensate for the singularity of the solution as $t\rightarrow 0^{+}$, while the five-point difference scheme is used in space. Stability and convergence are proved in the sense of the $L^{\infty}$ norm, and a sharp error estimate $\mathscr{O}(\tau^{\min\{2-\alpha, r\alpha\}})$ is derived on graded meshes. Furthermore, unlike the bounds proved in previous works, the constant multipliers in our analysis do not blow up as the order $\alpha$ of the Caputo fractional derivative approaches the classical value of 1. Finally, we perform numerical experiments to verify the effectiveness and convergence order of the presented algorithms.
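For concreteness, the sketch below shows the classical (unmodified) L1 discretization of a Caputo derivative on a graded mesh $t_n = T(n/N)^r$, with the grading exponent chosen to match the error bound quoted above; this is an illustration of the standard ingredients only, not the authors' modified L1 scheme or the tempered derivative.

```python
# Classical L1 approximation of a Caputo derivative of order alpha on a graded mesh.
import numpy as np
from math import gamma

def graded_mesh(T, N, r):
    return T * (np.arange(N + 1) / N) ** r

def l1_caputo(u_vals, t, alpha):
    """Approximate the Caputo derivative of order alpha at every mesh point t_n."""
    N = len(t) - 1
    D = np.zeros(N + 1)
    for n in range(1, N + 1):
        s = 0.0
        for j in range(1, n + 1):
            w = ((t[n] - t[j - 1]) ** (1 - alpha) - (t[n] - t[j]) ** (1 - alpha)) \
                / (gamma(2 - alpha) * (t[j] - t[j - 1]))
            s += w * (u_vals[j] - u_vals[j - 1])
        D[n] = s
    return D

alpha, T, N = 0.4, 1.0, 64
r = (2 - alpha) / alpha              # grading that balances the min{2-alpha, r*alpha} rates
t = graded_mesh(T, N, r)
D = l1_caputo(t ** alpha, t, alpha)  # exact value is Gamma(1+alpha) for u(t) = t^alpha
```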
In this paper, a higher-order finite difference scheme is proposed for Generalized Fractional Diffusion Equations (GFDEs). The fractional diffusion equation is considered in terms of generalized fractional derivatives (GFDs), whose definition involves scale and weight functions. The GFD reduces to the Riemann-Liouville, Caputo, and other fractional derivatives in particular cases. Owing to the importance of the scale and weight functions in describing the behaviour of real-life physical systems, we present solutions of GFDEs for various scale and weight functions. Convergence and stability analyses of the finite difference scheme (FDS) are also discussed to validate the proposed method. Test examples are simulated numerically with the FDS to justify the proposed numerical method.
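As a baseline sketch of a finite difference scheme for the particular case in which, as noted above, the GFD reduces to the classical Caputo derivative, the snippet below combines first-order L1 time stepping with a central difference in space for the sub-diffusion equation $D_t^{\alpha}u=u_{xx}$ on $[0,1]$ with homogeneous Dirichlet data; it is not the higher-order scheme of the paper, and all parameters are illustrative.

```python
# L1-in-time / central-difference-in-space solver for D_t^alpha u = u_xx (Caputo case).
import numpy as np
from math import gamma

alpha, Nx, Nt, T = 0.6, 64, 200, 1.0
h, tau = 1.0 / Nx, T / Nt
x = np.linspace(0.0, 1.0, Nx + 1)
mu = 1.0 / (gamma(2 - alpha) * tau ** alpha)
a = np.arange(1, Nt + 1) ** (1 - alpha) - np.arange(Nt) ** (1 - alpha)   # L1 weights

# Second-difference matrix on interior nodes with Dirichlet boundaries.
L = (np.diag(-2.0 * np.ones(Nx - 1)) + np.diag(np.ones(Nx - 2), 1)
     + np.diag(np.ones(Nx - 2), -1)) / h ** 2

U = [np.sin(np.pi * x[1:-1])]                  # initial condition on interior nodes
M = mu * np.eye(Nx - 1) - L                    # implicit system matrix
for n in range(1, Nt + 1):
    hist = sum(a[k] * (U[n - k] - U[n - k - 1]) for k in range(1, n))
    rhs = mu * (U[n - 1] - hist)
    U.append(np.linalg.solve(M, rhs))
```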
In this paper, we consider the prediction of helium concentrations as a function of a spatially variable source term perturbed by fractional Brownian motion. For the direct problem, we show that it is well-posed and has a unique mild solution under certain conditions. For the inverse problem, uniqueness and instability results are given. In addition, we determine the statistical properties of the source from the expectation and covariance of the final-time data $u(r,T)$. Finally, numerical experiments are presented to verify the effectiveness of the proposed reconstruction.
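To illustrate only the stochastic ingredient (and none of the reconstruction itself), the sketch below simulates fractional Brownian motion paths from the standard covariance $\mathrm{Cov}(B_H(s),B_H(t))=\tfrac12\left(s^{2H}+t^{2H}-|t-s|^{2H}\right)$ via a Cholesky factorization; the Hurst index and grid are arbitrary.

```python
# Sample fractional Brownian motion paths by factorizing the exact covariance matrix.
import numpy as np

def fbm_paths(H, n_grid, n_paths, T=1.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n_grid, T, n_grid)            # exclude t = 0, where B_H = 0
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    Lc = np.linalg.cholesky(cov + 1e-12 * np.eye(n_grid))   # small jitter for stability
    return t, Lc @ rng.standard_normal((n_grid, n_paths))

t, paths = fbm_paths(H=0.7, n_grid=256, n_paths=20)
emp_cov = paths @ paths.T / paths.shape[1]            # empirical covariance sanity check
```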
We introduce a new type of Krasnoselskii result. Using a simple differentiability condition, we relax the nonexpansiveness condition in Krasnoselskii's theorem. More precisely, we analyze the convergence of the sequence $x_{n+1}=\frac{x_n+g(x_n)}{2}$ under a differentiability condition on $g$ and present some fixed point results. We introduce iterative sequences that, for any real differentiable function $g$ and any starting point $x_0\in [a,b]$, converge monotonically to the nearest root of $g$ in $[a,b]$ lying to the right or left of $x_0$. Based on this approach, we present an efficient and novel method for finding the real roots of real functions, and we prove that no root is missed by the method. It is worth mentioning that our iterative method is free of derivative evaluations, which can be regarded as an advantage over many other methods. Finally, we illustrate our results with some numerical examples.
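A toy illustration of the averaged iteration $x_{n+1}=\frac{x_n+g(x_n)}{2}$ from the abstract is sketched below, applied to a fixed-point reformulation of root finding; the map $g(x)=x-\lambda f(x)$ and the scaling $\lambda=0.1$ are illustrative choices, not the paper's derivative-free monotone construction.

```python
# Averaged (Krasnoselskii-type) iteration x_{n+1} = (x_n + g(x_n)) / 2.
def averaged_iteration(g, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_new = 0.5 * (x + g(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x ** 3 - 2 * x - 5        # has a real root near x = 2.0946
g = lambda x: x - 0.1 * f(x)            # fixed point of g = root of f; scaling chosen ad hoc
root = averaged_iteration(g, x0=2.0)
```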
Two-dimensional finite element complexes with various degrees of smoothness, including the de Rham complex, the curldiv complex, the elasticity complex, and the divdiv complex, are systematically constructed in this work. First, smooth scalar finite elements in two dimensions are developed based on a non-overlapping decomposition of the simplicial lattice and the Bernstein basis of the polynomial space; the smoothness at vertices is more than double that at edges. Then finite element de Rham complexes with various smoothness are devised using smooth finite elements whose smoothness parameters satisfy certain relations. Finally, finite element elasticity complexes and finite element divdiv complexes are derived from the finite element de Rham complexes using the Bernstein-Gelfand-Gelfand (BGG) framework. In addition, some finite element divdiv complexes are constructed without the BGG framework. Dimension counts play an important role in verifying the exactness of the two-dimensional finite element complexes.
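For orientation (a standard fact stated here for reference, not taken from the paper), the continuous two-dimensional de Rham complex on a simply connected domain $\Omega$, which the smooth finite element complexes discretize, can be written in its rot form as
\[
  \mathbb{R} \xrightarrow{\;\subset\;} H^1(\Omega)
  \xrightarrow{\;\operatorname{grad}\;} H(\operatorname{rot};\Omega)
  \xrightarrow{\;\operatorname{rot}\;} L^2(\Omega)
  \longrightarrow 0,
\]
with the rotated curl/div form being equivalent in two dimensions.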
Measuring the stability of conclusions derived from Ordinary Least Squares linear regression is critically important, but most metrics either only measure local stability (i.e. against infinitesimal changes in the data), or are only interpretable under statistical assumptions. Recent work proposes a simple, global, finite-sample stability metric: the minimum number of samples that need to be removed so that rerunning the analysis overturns the conclusion, specifically meaning that the sign of a particular coefficient of the estimated regressor changes. However, besides the trivial exponential-time algorithm, the only approach for computing this metric is a greedy heuristic that lacks provable guarantees under reasonable, verifiable assumptions; the heuristic provides a loose upper bound on the stability and also cannot certify lower bounds on it. We show that in the low-dimensional regime where the number of covariates is a constant but the number of samples is large, there are efficient algorithms for provably estimating (a fractional version of) this metric. Applying our algorithms to the Boston Housing dataset, we exhibit regression analyses where we can estimate the stability up to a factor of $3$ better than the greedy heuristic, and analyses where we can certify stability to dropping even a majority of the samples.
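For concreteness, the sketch below implements a simple influence-based greedy heuristic of the kind the abstract refers to (illustrative only; not the authors' certification algorithm, and the function name and synthetic data are made up for this example): samples are ranked by the exact leave-one-out change of the target coefficient, the most influential one is dropped, and the model is refit until the sign flips.

```python
# Greedy drop-one-at-a-time heuristic for the sign-flip stability of an OLS coefficient.
import numpy as np

def greedy_drop_to_flip(X, y, j, max_drop=None):
    """Return the number of dropped samples after which sign(beta_j) flips, or None."""
    n, _ = X.shape
    max_drop = max_drop if max_drop is not None else n // 2
    sign0 = np.sign(np.linalg.lstsq(X, y, rcond=None)[0][j])   # original conclusion
    keep = np.ones(n, dtype=bool)
    for k in range(1, max_drop + 1):
        Xk, yk = X[keep], y[keep]
        XtX_inv = np.linalg.inv(Xk.T @ Xk)
        beta = XtX_inv @ Xk.T @ yk
        resid = yk - Xk @ beta
        lev = np.einsum("ij,jk,ik->i", Xk, XtX_inv, Xk)        # leverages h_i
        delta_j = -(Xk @ XtX_inv[:, j]) * resid / (1.0 - lev)  # exact LOO change of beta_j
        worst = np.argmin(sign0 * delta_j)                     # push beta_j toward a flip
        keep[np.flatnonzero(keep)[worst]] = False
        beta_new = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        if np.sign(beta_new[j]) != sign0:
            return k
    return None

# Example usage on synthetic data (hypothetical, for illustration only):
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
y = X @ np.array([1.0, 0.2, -0.5]) + rng.standard_normal(500)
k_flip = greedy_drop_to_flip(X, y, j=1)
```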
An evolving surface finite element discretisation is analysed for the evolution of a closed two-dimensional surface governed by a system coupling a generalised forced mean curvature flow and a reaction--diffusion process on the surface, inspired by a gradient flow of a coupled energy. Two algorithms are proposed, both based on a system coupling the diffusion equation to evolution equations for geometric quantities in the velocity law for the surface. One of the numerical methods is proved to be convergent in the $H^1$ norm with optimal order for finite elements of degree at least two. We present numerical experiments illustrating the convergence behaviour and demonstrating the qualitative properties of the flow: preservation of mean convexity, loss of convexity, weak maximum principles, and the occurrence of self-intersections.
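A heavily reduced illustration of curvature-driven evolution (a planar curve-shortening flow discretised with explicit finite differences, not the paper's evolving surface finite element method or its coupled reaction--diffusion system) is sketched below: a closed curve moves by $\partial_t X=\kappa\,\nu=\partial^2 X/\partial s^2$, and a circle of radius $R_0$ shrinks with $R(t)=\sqrt{R_0^2-2t}$.

```python
# Explicit finite-difference curve-shortening flow of a closed planar curve.
import numpy as np

n_pts, dt, n_steps = 200, 1e-4, 2000
theta = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)      # unit circle as initial curve

for _ in range(n_steps):
    fwd = np.roll(X, -1, axis=0) - X                      # X_{i+1} - X_i
    bwd = X - np.roll(X, 1, axis=0)                       # X_i - X_{i-1}
    hf = np.linalg.norm(fwd, axis=1, keepdims=True)
    hb = np.linalg.norm(bwd, axis=1, keepdims=True)
    X_ss = 2.0 * (fwd / hf - bwd / hb) / (hf + hb)        # discrete curvature vector
    X += dt * X_ss                                        # explicit Euler step

radius = np.linalg.norm(X, axis=1).mean()                 # ~ sqrt(1 - 2 * dt * n_steps)
```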
Covariance matrix estimation is a fundamental statistical task in many applications, but the sample covariance matrix is sub-optimal when the sample size is comparable to or less than the number of features. Such high-dimensional settings are common in modern genomics, where covariance matrix estimation is frequently employed as a method for inferring gene networks. To achieve estimation accuracy in these settings, existing methods typically either assume that the population covariance matrix has some particular structure, for example sparsity, or apply shrinkage to better estimate the population eigenvalues. In this paper, we study a new approach to estimating high-dimensional covariance matrices. We first frame covariance matrix estimation as a compound decision problem. This motivates defining a class of decision rules and using a nonparametric empirical Bayes g-modeling approach to estimate the optimal rule in the class. Simulation results and gene network inference in an RNA-seq experiment in mouse show that our approach is comparable to or can outperform a number of state-of-the-art proposals.
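For context only, the snippet below contrasts the sample covariance with a linear shrinkage estimator (Ledoit-Wolf, via scikit-learn) in a $p>n$ setting; this is a baseline of the shrinkage type mentioned above, not the paper's compound-decision empirical Bayes g-modeling estimator, and the synthetic population covariance is an arbitrary illustration.

```python
# Compare the sample covariance with Ledoit-Wolf linear shrinkage when p > n.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
p, n = 100, 80                                      # more features than samples
true_cov = np.diag(np.linspace(1.0, 5.0, p))        # illustrative population covariance
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

sample_cov = np.cov(X, rowvar=False)
lw_cov = LedoitWolf().fit(X).covariance_

err_sample = np.linalg.norm(sample_cov - true_cov, "fro")
err_lw = np.linalg.norm(lw_cov - true_cov, "fro")   # typically much smaller here
```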