Solving high-dimensional partial differential equations necessitates methods free of exponential scaling in the dimension of the problem. This work introduces a tensor network approach to the Kolmogorov backward equation that directly approximates the Markov operator. We show that the high-dimensional Markov operator can be obtained under a functional hierarchical tensor (FHT) ansatz with a hierarchical sketching algorithm. When the terminal condition admits an FHT ansatz, applying the approximated operator yields an FHT ansatz for the PDE solution through an efficient functional tensor network contraction procedure. The operator-based approach also provides an efficient way to solve the Kolmogorov forward equation when the initial distribution is given in an FHT ansatz. We successfully apply the proposed approach to two challenging time-dependent Ginzburg-Landau models with hundreds of variables.
We present an algorithm that uses Fujiwara's inequality to bound algebraic functions on ellipses of a certain type, allowing us to concretely implement a rigorous Gauss-Legendre integration method for algebraic functions along a line segment. We consider path-splitting strategies to improve the convergence of the method and show that they yield significant practical and asymptotic benefits. We have implemented these methods to compute period matrices of algebraic Riemann surfaces; the implementation is available in SageMath.
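The core quadrature step can be illustrated with a plain (non-rigorous) Gauss-Legendre rule on a segment; the paper's contribution, a certified error bound for algebraic integrands via Fujiwara's inequality, is not reproduced in this sketch.

```python
import numpy as np

def gauss_legendre_segment(f, a, b, n):
    """Approximate the integral of f along the segment [a, b] with an
    n-point Gauss-Legendre rule.  This is the plain quadrature step only;
    the rigorous error control for algebraic functions is the subject of
    the paper and is not reproduced here."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    mid, half = (a + b) / 2, (b - a) / 2        # affine map [-1,1] -> [a,b]
    return half * np.sum(w * f(mid + half * x))

# Example with an algebraic integrand that is analytic near [0, 1]:
# the integral of 1/sqrt(1 + x^2) over [0, 1] equals log(1 + sqrt(2)).
val = gauss_legendre_segment(lambda x: 1.0 / np.sqrt(1.0 + x**2), 0.0, 1.0, 20)
```

Because the integrand is analytic in an ellipse around the segment, the rule converges geometrically, which is exactly the regime in which bounding the integrand on such ellipses yields rigorous error estimates.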
This work presents two numerical schemes for the variable-exponent fractional diffusion-wave equation, which describes, e.g., the propagation of mechanical diffusive waves in viscoelastic media with varying material properties. The main difficulty we overcome is that the variable-exponent Abel kernel may be neither positive definite nor monotonic. We prove stability and error estimates for both schemes, with $\alpha(0)$-order and second-order accuracy in time, respectively. Numerical experiments are presented to substantiate the theoretical findings.
We obtain rates of convergence of numerical approximations of abstract linear evolution equations. Our estimates extend known results such as Theorem 3.5 in \cite{thomee} to more general equations and accommodate more advanced numerical approximation techniques. As an example, we consider parabolic equations on surfaces and surface finite element approximations.
Starting from the Kirchhoff-Huygens representation and Duhamel's principle for time-domain wave equations, we propose novel butterfly-compressed Hadamard integrators for self-adjoint wave equations in both the time and frequency domains in an inhomogeneous medium. First, we incorporate the leading term of Hadamard's ansatz into the Kirchhoff-Huygens representation to develop a short-time-valid propagator. Second, using the Fourier transform in time, we derive the corresponding Eulerian short-time propagator in the frequency domain; on top of this propagator, we develop a time-frequency-time (TFT) method for the Cauchy problem of time-domain wave equations. Third, we propose the time-frequency-time-frequency (TFTF) method for the corresponding point-source Helmholtz equation, which provides Green's functions of the Helmholtz equation for all angular frequencies within a given frequency band. Fourth, to implement the TFT and TFTF methods efficiently, we introduce butterfly algorithms to compress oscillatory integral kernels at different frequencies. As a result, the proposed methods can construct wave fields beyond caustics implicitly and advance spatially overturning waves in time naturally, with quasi-optimal computational complexity and memory usage. Furthermore, once constructed, the Hadamard integrators can be employed to solve both time-domain wave equations with various initial conditions and frequency-domain wave equations with different point sources. Numerical examples for two-dimensional wave equations illustrate the accuracy and efficiency of the proposed methods.
This work develops two fast randomized algorithms for computing the generalized tensor singular value decomposition (GTSVD) based on the tubal product (t-product). Random projections are used to capture the important actions of the underlying data tensors, yielding small sketches of the original data that are easier to handle. Owing to the small size of these sketch tensors, deterministic algorithms can be applied to compute their GTSVDs, from which the GTSVD of the original large-scale data tensors is then recovered. Experiments are conducted to show the effectiveness of the proposed approach.
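The basic sketching building block in the t-product setting can be sketched as follows: a randomized range finder that multiplies the data tensor by a random Gaussian tensor and orthonormalizes facewise in the Fourier domain. This is a minimal illustration of the sketch-then-solve idea, not the paper's full GTSVD algorithm; all names here are our own.

```python
import numpy as np

def t_product(A, B):
    """Tubal (t-)product of third-order tensors A (m x p x n) and B (p x q x n):
    FFT along the third mode, facewise matrix products, inverse FFT."""
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    return np.real(np.fft.ifft(np.einsum('ijk,jlk->ilk', Ah, Bh), axis=2))

def t_transpose(A):
    """Transpose in the t-product sense: transpose each frontal slice and
    reverse the order of slices 2, ..., n."""
    return np.concatenate((A[:, :, :1], A[:, :, 1:][:, :, ::-1]),
                          axis=2).transpose(1, 0, 2)

def randomized_range(A, r, oversample=5, seed=0):
    """Randomized range finder in the t-product sense: sketch A with a random
    Gaussian tensor, then orthonormalize each frontal slice in Fourier space."""
    m, p, n = A.shape
    G = np.random.default_rng(seed).standard_normal((p, r + oversample, n))
    Yh = np.fft.fft(t_product(A, G), axis=2)
    Qh = np.empty_like(Yh)
    for k in range(n):
        Qh[:, :, k], _ = np.linalg.qr(Yh[:, :, k])
    return np.real(np.fft.ifft(Qh, axis=2))

# A tensor of low tubal rank is recovered exactly from its small sketch.
rng = np.random.default_rng(1)
A = t_product(rng.standard_normal((30, 3, 8)), rng.standard_normal((3, 20, 8)))
Q = randomized_range(A, r=3)
recon = t_product(Q, t_product(t_transpose(Q), A))
err = np.linalg.norm(A - recon) / np.linalg.norm(A)
```

Once such a basis `Q` is available, a deterministic decomposition applied to the small projected tensor can be lifted back to the original tensor, which is the pattern the abstract describes.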
Underdetermined generalized absolute value equations (GAVE) arise in real applications. An underdetermined GAVE may have no solution, exactly one solution, finitely many solutions, or infinitely many solutions. This paper gives sufficient conditions that guarantee the existence or nonexistence of solutions of the underdetermined GAVE. In particular, sufficient conditions are given under which certain sign patterns, or every sign pattern, admit infinitely many solutions. In addition, iterative methods are developed to compute a solution of the underdetermined GAVE. Some existing results on the square GAVE are extended.
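A generic Picard-type iteration illustrates how such equations $Ax + B|x| = b$ can be attacked numerically; this is a standard textbook-style scheme using the pseudoinverse, not necessarily one of the methods developed in the paper.

```python
import numpy as np

def gave_picard(A, B, b, x0=None, tol=1e-10, max_iter=500):
    """Picard-type iteration for the generalized absolute value equation
    A x + B |x| = b with a (possibly underdetermined) full-row-rank A.

    Each step applies the Moore-Penrose pseudoinverse to the linear part,
    x_{k+1} = A^+ (b - B |x_k|); since |.| is 1-Lipschitz, a sufficient
    condition for contraction is ||A^+||_2 ||B||_2 < 1.  A fixed point x*
    satisfies A x* = A A^+ (b - B |x*|) = b - B |x*|, hence solves the GAVE."""
    Ap = np.linalg.pinv(A)
    x = np.zeros(A.shape[1]) if x0 is None else x0
    for _ in range(max_iter):
        x_new = Ap @ (b - B @ np.abs(x))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# A 2 x 4 underdetermined example with a small coupling matrix B,
# chosen so that the contraction condition above holds.
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 1.0]])
B = 0.1 * np.ones((2, 4))
b = np.array([1.0, 2.0])
x = gave_picard(A, B, b)
residual = np.linalg.norm(A @ x + B @ np.abs(x) - b)
```

Because $A$ is underdetermined, the iteration selects one particular solution (the pseudoinverse picks minimum-norm updates); the abstract's point is precisely that many other solutions, with various sign patterns, may coexist.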
Approximately solving high-dimensional partial differential equations (PDEs) is one of the most challenging issues in applied mathematics. Most numerical approximation methods for PDEs in the scientific literature suffer from the so-called curse of dimensionality in the sense that the number of computational operations employed in the corresponding approximation scheme to obtain an approximation precision $\varepsilon>0$ grows exponentially in the PDE dimension and/or the reciprocal of $\varepsilon$. Recently, certain deep learning based approximation methods for PDEs have been proposed, and various numerical simulations for such methods suggest that deep neural network (DNN) approximations might have the capacity to indeed overcome the curse of dimensionality in the sense that the number of real parameters used to describe the approximating DNNs grows at most polynomially in both the PDE dimension $d\in\mathbb{N}$ and the reciprocal of the prescribed accuracy $\varepsilon>0$. There are now also a few rigorous results in the scientific literature which substantiate this conjecture by proving that DNNs overcome the curse of dimensionality in approximating solutions of PDEs. Each of these results establishes that DNNs overcome the curse of dimensionality in approximating suitable PDE solutions at a fixed time point $T>0$ and on a compact cube $[a,b]^d$ in space, but none of these results provides an answer to the question of whether the entire PDE solution on $[0,T]\times [a,b]^d$ can be approximated by DNNs without the curse of dimensionality. It is precisely the subject of this article to overcome this issue. More specifically, the main result of this work proves for every $a\in\mathbb{R}$, $b\in (a,\infty)$ that solutions of certain Kolmogorov PDEs can be approximated by DNNs on the space-time region $[0,T]\times [a,b]^d$ without the curse of dimensionality.
We propose a new simple and explicit numerical scheme for time-homogeneous stochastic differential equations. The scheme is based on sampling increments at each time step from a skew-symmetric probability distribution, with the level of skewness determined by the drift and volatility of the underlying process. We show that as the step-size decreases the scheme converges weakly to the diffusion of interest. We then consider the problem of simulating from the limiting distribution of an ergodic diffusion process using the numerical scheme with a fixed step-size. We establish conditions under which the numerical scheme converges to equilibrium at a geometric rate, and quantify the bias between the equilibrium distributions of the scheme and of the true diffusion process. Notably, our results do not require a global Lipschitz assumption on the drift, in contrast to those required for the Euler--Maruyama scheme for long-time simulation at fixed step-sizes. Our weak convergence result relies on an extension of the theory of Milstein \& Tretyakov to stochastic differential equations with non-Lipschitz drift, which could also be of independent interest. We support our theoretical results with numerical simulations.
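The abstract does not spell out the construction, but the flavor of drift-dependent skewed increments can be conveyed with a toy weak scheme of our own (an illustration only, not the authors' scheme): the increment magnitude is fixed at $\sigma\sqrt{h}$ and the drift enters solely through the probability of the up-move, so that the one-step mean and variance match the diffusion to leading order.

```python
import numpy as np

def skew_step(x, drift, sigma, h, rng):
    """One step of a toy skewed two-point scheme: the increment is always
    +/- sigma*sqrt(h), and the drift only tilts the probability of the
    up-move.  (Our own illustration of the idea, not the authors' scheme.)
    Then E[dX | X] = drift(X) h and Var[dX | X] = sigma^2 h."""
    p_up = min(max(0.5 + np.sqrt(h) * drift(x) / (2.0 * sigma), 0.0), 1.0)
    xi = 1.0 if rng.random() < p_up else -1.0
    return x + sigma * np.sqrt(h) * xi

# Long-run simulation at a fixed step-size for the ergodic OU process
# dX = -X dt + dW, whose stationary distribution is N(0, 1/2).
rng = np.random.default_rng(0)
h, n_steps, x = 0.01, 500_000, 0.0
xs = np.empty(n_steps)
for i in range(n_steps):
    x = skew_step(x, lambda y: -y, 1.0, h, rng)
    xs[i] = x
var = xs[1000:].var()   # should be close to the stationary variance 1/2
```

For this linear drift one can check directly that the chain's stationary second moment satisfies $v = v - 2hv + h$, i.e. $v = 1/2$ exactly, illustrating the kind of fixed-step-size equilibrium analysis the abstract describes.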
We propose a two-step procedure to model and predict high-dimensional functional time series, where the number of function-valued time series $p$ is large relative to the length of the time series $n$. Our first step performs an eigenanalysis of a positive definite matrix, which leads to a one-to-one linear transformation of the original high-dimensional functional time series; the transformed curve series can be segmented into several groups such that any two subseries from two different groups are uncorrelated both contemporaneously and serially. Consequently, in our second step, those groups are handled separately without loss of information on the overall linear dynamic structure. The second step establishes a finite-dimensional dynamical structure for the transformed functional time series within each group, and this finite-dimensional structure is in turn represented by that of a vector time series. Modelling and forecasting for the original high-dimensional functional time series are thus realized via those for the vector time series in all the groups. We investigate the theoretical properties of the proposed methods and illustrate their finite-sample performance through both extensive simulation and two real datasets.
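A finite-dimensional analogue of the first step, for an ordinary $p$-dimensional vector time series, can be sketched as follows (our own simplified illustration; the paper works with function-valued series, and the subsequent grouping step is not shown): pre-whiten the series, accumulate lagged autocovariances into a positive semidefinite matrix, and rotate by its eigenvectors.

```python
import numpy as np

def segmentation_transform(Y, n_lags=5):
    """Eigenanalysis-based linear transformation of a p x n vector time
    series Y (a simplified finite-dimensional analogue of the paper's
    first step).  Pre-whiten contemporaneously, form the positive
    semidefinite matrix W = sum_k S_k S_k^T from lagged autocovariances
    S_k, and rotate by the eigenvectors of W.  Grouping the transformed
    components by their residual cross-correlations is the second step
    and is not shown here."""
    p, n = Y.shape
    Yc = Y - Y.mean(axis=1, keepdims=True)
    V = Yc @ Yc.T / n
    d, U = np.linalg.eigh(V)
    V_inv_half = U @ np.diag(d ** -0.5) @ U.T   # symmetric inverse sqrt
    X = V_inv_half @ Yc                          # whitened series
    W = np.zeros((p, p))
    for k in range(n_lags + 1):
        S = X[:, k:] @ X[:, : n - k].T / n       # lag-k autocovariance
        W += S @ S.T
    _, G = np.linalg.eigh(W)
    return G.T @ X, G.T @ V_inv_half             # transformed series, map

# Mix p independent latent AR(1) components and recover a transformation
# whose output is contemporaneously uncorrelated by construction.
rng = np.random.default_rng(0)
p, n = 6, 2000
Z0 = np.zeros((p, n))
for t in range(1, n):
    Z0[:, t] = 0.8 * Z0[:, t - 1] + rng.standard_normal(p)
Y = rng.standard_normal((p, p)) @ Z0
Z, B = segmentation_transform(Y)
```

By construction the transformed series has identity sample covariance, so any remaining dependence between components is purely serial, which is what the grouping in the second step exploits.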
Approximation of solutions to partial differential equations (PDEs) is an important problem in computational science and engineering. Using neural networks as an ansatz for the solution has proven challenging in terms of training time and approximation accuracy. In this contribution, we discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to make progress on both challenges. In most examples, the random sampling schemes outperform iterative, gradient-based optimization of physics-informed neural networks in training time and accuracy by several orders of magnitude. For time-dependent PDEs, we construct neural basis functions only in the spatial domain and then solve the associated ordinary differential equations with classical methods from scientific computing over a long time horizon. This alleviates one of the greatest challenges for neural PDE solvers, because it does not require us to parameterize the solution in time. For second-order elliptic PDEs in Barron spaces, we prove the existence of sampled networks with $L^2$ convergence to the solution. We demonstrate our approach on several time-dependent and static PDEs, and also illustrate how sampled networks can effectively solve inverse problems in this setting. Benefits compared to common numerical schemes include spectral convergence and mesh-free construction of basis functions.
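The data-agnostic variant of the idea can be sketched on a toy 1D Poisson problem: hidden weights and biases are drawn at random and frozen, and only the output-layer coefficients are fit by least squares on the PDE residual and boundary conditions. All distributions and parameters below are our own illustrative choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sampled (random-feature) network u(x) ~= sum_j c_j tanh(w_j x + b_j):
# hidden parameters are drawn from a data-agnostic uniform distribution
# and frozen; only the linear coefficients c are fit.  Toy problem:
# -u'' = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0, exact solution
# u(x) = sin(pi x).
m = 100
w = rng.uniform(-6.0, 6.0, m)
b = rng.uniform(-6.0, 6.0, m)

def phi(x):
    """Hidden-layer features, shape (len(x), m)."""
    return np.tanh(np.outer(x, w) + b)

def neg_phi_xx(x):
    """-phi'' = 2 w^2 t (1 - t^2) with t = tanh(w x + b)."""
    t = phi(x)
    return 2.0 * w**2 * t * (1.0 - t**2)

x_col = np.linspace(0.0, 1.0, 201)[1:-1]        # interior collocation points
f = np.pi**2 * np.sin(np.pi * x_col)
bc_weight = 10.0                                 # emphasis on boundary rows
M = np.vstack([neg_phi_xx(x_col),
               bc_weight * phi(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(M, rhs, rcond=None)      # fit output layer only

x_test = np.linspace(0.0, 1.0, 201)
err = np.max(np.abs(phi(x_test) @ c - np.sin(np.pi * x_test)))
```

The linear least-squares fit replaces the iterative, gradient-based training of a physics-informed network, which is the source of the large speedups the abstract reports.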