We propose a collocation method based on multivariate polynomial splines over triangulations or tetrahedralizations for the numerical solution of partial differential equations. We start with a detailed explanation of the method for the Poisson equation and then extend the study to second-order elliptic PDEs in non-divergence form. We show that the numerical solution approximates the exact PDE solution very well. We then present extensive numerical results to demonstrate the performance of the method in both 2D and 3D settings. In addition, we present a comparison with the existing multivariate spline methods in \cite{ALW06} and \cite{LW17} to show that the new method produces a similar and sometimes more accurate approximation in a more efficient fashion.
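A toy illustration of the collocation idea, in 1D with a single polynomial rather than the paper's multivariate splines over a triangulation: enforce $u'' = f$ at interior nodes and the boundary conditions at the endpoints, then solve the resulting linear system. The degree, node choice, and right-hand side below are assumptions for the sketch.

```python
import numpy as np

# Collocation sketch: solve u''(x) = f(x), u(0) = u(1) = 0 with a
# degree-n polynomial, collocated at interior Chebyshev-Lobatto nodes.
n = 12                                               # polynomial degree (assumed)
x = 0.5 * (1 - np.cos(np.pi * np.arange(1, n) / n))  # interior nodes in (0, 1)
f = lambda t: -np.pi**2 * np.sin(np.pi * t)          # exact solution u = sin(pi t)

# Rows: u''(x_i) = f(x_i) at collocation points, plus two boundary rows.
A = np.zeros((n + 1, n + 1)); b = np.zeros(n + 1)
for j in range(n + 1):                               # monomial basis phi_j(t) = t**j
    A[:n-1, j] = j * (j - 1) * x**(j - 2) if j >= 2 else 0.0
    A[n-1, j] = 0.0**j                               # u(0) = 0 row
    A[n, j] = 1.0                                    # u(1) = 0 row
b[:n-1] = f(x)
c = np.linalg.solve(A, b)

u = lambda t: sum(c[j] * t**j for j in range(n + 1))
err = max(abs(u(t) - np.sin(np.pi * t)) for t in np.linspace(0, 1, 101))
```

The multivariate method replaces the monomial basis with spline spaces over a triangulation and adds smoothness constraints, but the collocation principle, turning the PDE into a linear system by point evaluation, is the same.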
We introduce a new overlapping Domain Decomposition Method (DDM) to solve the fully nonlinear Monge-Amp\`ere equation. While DDMs have been extensively studied for linear problems, their application to fully nonlinear partial differential equations (PDEs) remains limited in the literature. To address this gap, we establish a proof of global convergence of these new iterative algorithms using a discrete comparison principle argument. Several numerical tests, involving examples of varying regularity, are performed to validate the convergence theorem. Computational experiments show that the method is efficient, robust, and requires relatively few iterations to converge. The results reveal great potential for DDMs to yield highly efficient and parallelizable solvers for large-scale problems that are computationally intractable with existing solution methods.
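The overlapping iteration can be sketched on a linear model problem (the paper's target is the fully nonlinear Monge-Amp\`ere equation, where the comparison-principle analysis is the hard part). Below is a minimal alternating Schwarz sweep for $-u'' = 1$ on $(0,1)$ with two overlapping subdomains; all sizes and the overlap width are illustrative.

```python
import numpy as np

# Alternating Schwarz for -u'' = 1 on (0,1), u(0) = u(1) = 0 (linear toy).
N = 40; h = 1.0 / N
x = np.linspace(0, 1, N + 1)
u = np.zeros(N + 1)                      # initial guess
m1, m2 = 15, 25                          # overlapping subdomains [0,m2], [m1,N]

def solve_sub(lo, hi, ul, ur):
    """Solve -u'' = 1 on nodes lo..hi with Dirichlet data ul, ur."""
    n = hi - lo - 1                      # number of interior unknowns
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = np.ones(n); b[0] += ul / h**2; b[-1] += ur / h**2
    return np.linalg.solve(A, b)

for _ in range(30):                      # Schwarz sweeps, latest data on overlap
    u[1:m2] = solve_sub(0, m2, u[0], u[m2])
    u[m1+1:N] = solve_sub(m1, N, u[m1], u[N])

err = np.max(np.abs(u - x * (1 - x) / 2))   # exact solution x(1-x)/2
```

Each subdomain solve uses the most recent values of the other subdomain as artificial Dirichlet data; in the nonlinear setting the local solves become Monge-Amp\`ere problems, but the data exchange across the overlap is the same.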
Stability and optimal convergence analysis of a non-uniform implicit-explicit L1 finite element method (IMEX-L1-FEM) are presented for a class of time-fractional linear partial differential/integro-differential equations with a non-self-adjoint elliptic part having (space-time) variable coefficients. The proposed scheme combines an IMEX-L1 method on a graded mesh in the temporal direction with a finite element method in the spatial direction. With the help of a discrete fractional Gr\"{o}nwall inequality, optimal error estimates in the $L^2$- and $H^1$-norms are derived for the problem with initial data $u_0 \in H_0^1(\Omega)\cap H^2(\Omega)$. Under the higher regularity condition $u_0 \in \dot{H}^3(\Omega)$, a superconvergence result is established and, as a consequence, an $L^\infty$ error estimate is obtained for 2D problems. Numerical experiments are presented to validate our theoretical findings.
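The temporal building block, the L1 approximation of a Caputo derivative on a graded mesh, can be sketched directly; the grading exponent and mesh sizes below are illustrative, not the paper's settings. Since the L1 formula integrates a piecewise-linear interpolant exactly, it reproduces the Caputo derivative of $u(t) = t$ to machine precision, which makes a convenient check.

```python
import numpy as np
from math import gamma

# L1 approximation of the Caputo derivative of order alpha on a graded
# mesh t_j = T * (j/M)**r (r = 2 here is an illustrative grading choice).
alpha, T, M, r = 0.5, 1.0, 64, 2.0
t = T * (np.arange(M + 1) / M) ** r

def l1_caputo(u, n):
    """L1 formula for D^alpha u at t_n on the nonuniform mesh t."""
    s = 0.0
    for j in range(n):                   # piecewise-linear interpolant pieces
        w = (t[n] - t[j]) ** (1 - alpha) - (t[n] - t[j + 1]) ** (1 - alpha)
        s += w * (u[j + 1] - u[j]) / (t[j + 1] - t[j])
    return s / gamma(2 - alpha)

u = t.copy()                             # u(t) = t; D^alpha u = t^(1-alpha)/Gamma(2-alpha)
approx = l1_caputo(u, M)
exact = T ** (1 - alpha) / gamma(2 - alpha)
```

In the IMEX scheme this discrete operator is coupled with a finite element discretization in space, with the non-self-adjoint terms treated explicitly.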
For valuing European options, a straightforward model is the well-known Black-Scholes formula. Contrary to market reality, this model assumes that the interest rate and volatility are constant. To modify the Black-Scholes model, Heston and Cox-Ingersoll-Ross (CIR) proposed stochastic volatility and stochastic interest rate models, respectively. The combination of the Heston and CIR models is called the Heston-Cox-Ingersoll-Ross (HCIR) model. Another essential issue that arises when purchasing or selling a good or service is the consideration of transaction costs, which were ignored in the Black-Scholes approach. Leland improved the simplistic Black-Scholes strategy to take transaction costs into account. The main purpose of this paper is to apply the alternating direction implicit (ADI) method on a uniform grid to solve the HCIR model with transaction costs in the European style and to compare it with the explicit finite difference (EFD) scheme. Also, as evidence of numerical convergence, we convert the HCIR model with transaction costs to a linear PDE (HCIR) by ignoring transaction costs, estimate the solution of the HCIR PDE using the ADI method, and compare it with the analytical solution and the EFD scheme. The ADI method, a class of finite difference schemes, is well suited to multi-dimensional Black-Scholes equations. When the dimensionality of the space increases, finite difference techniques frequently become more complex to implement, understand, and apply. Consequently, we employ the ADI approach to divide a multi-dimensional problem into several simpler, more manageable sub-problems and thereby overcome the curse of dimensionality.
Recently, deep equilibrium methods have emerged as a new approach for solving imaging and other ill-posed inverse problems. While learned components may be a key factor in the good performance of these methods in practice, a theoretical justification from a regularization point of view is still lacking. In this paper, we address this issue by providing stability and convergence results for this class of equilibrium methods. In addition, we derive convergence rates and stability estimates in the symmetric Bregman distance. We strengthen our results for regularization operators with contractive residuals. Furthermore, we use the presented analysis to gain insight into the practical behavior of these methods, including a lower bound on the performance of the regularized solutions. We also show that the convergence analysis leads to the design of a new type of loss function with several advantages over previous ones. Numerical simulations are presented to support our findings.
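The basic mechanism of an equilibrium method is a fixed point $z^* = f(z^*, x)$ of a learned map; with a contractive residual the fixed point exists and the iteration converges geometrically. The sketch below uses random (untrained) weights scaled to enforce the contraction, purely to illustrate the mechanism, not a trained regularizer.

```python
import numpy as np

# Deep-equilibrium-style fixed point z* = f(z*, x) with a contractive map.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W *= 0.5 / np.linalg.norm(W, 2)          # spectral norm 0.5 => Lip(f) <= 0.5 in z
b = rng.standard_normal(8)

def f(z, x):
    return np.tanh(W @ z + x + b)        # |tanh'| <= 1, so contraction in z

x = rng.standard_normal(8)               # "measurement" input
z = np.zeros(8)
for _ in range(100):                     # Banach fixed-point iteration
    z = f(z, x)
res = np.linalg.norm(z - f(z, x))        # residual ~ 0 at equilibrium
```

The contractivity assumption used here is the same structural condition under which the stability and convergence-rate results are strengthened.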
Learned inverse problem solvers exhibit remarkable performance in applications such as image reconstruction. These data-driven reconstruction methods often follow a two-step scheme: first, one trains the (often neural-network-based) reconstruction scheme on a dataset; second, one applies the scheme to new measurements to obtain reconstructions. We follow these steps but parameterize the reconstruction scheme with invertible residual networks (iResNets). We demonstrate that invertibility enables investigating the influence of training and architecture choices on the resulting reconstruction scheme. For example, assuming local approximation properties of the network, we show that these schemes become convergent regularizations. In addition, the investigations reveal a formal link to the linear regularization theory of linear inverse problems and provide a nonlinear spectral regularization for particular architecture classes. On the numerical side, we investigate the local approximation property of selected trained architectures and present a series of experiments on the MNIST dataset that underpin and extend our theoretical findings.
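The invertibility that iResNets rely on comes from a residual layer $y = x + g(x)$ with $\mathrm{Lip}(g) < 1$: the inverse is obtained by the fixed-point iteration $x_{k+1} = y - g(x_k)$. The sketch below uses random, scaled weights only to demonstrate this inversion mechanism.

```python
import numpy as np

# Invertible residual layer y = x + g(x), Lip(g) < 1; invert by iteration.
rng = np.random.default_rng(1)
W = rng.standard_normal((6, 6))
W *= 0.9 / np.linalg.norm(W, 2)          # Lip(g) <= 0.9 < 1

def g(x):
    return np.tanh(W @ x)

x_true = rng.standard_normal(6)
y = x_true + g(x_true)                   # forward pass of the residual block

x = y.copy()                             # invert: x_{k+1} = y - g(x_k)
for _ in range(200):
    x = y - g(x)
err = np.linalg.norm(x - x_true)
```

Having an explicit inverse is what makes it possible to analyze the trained scheme as a regularization operator rather than a black box.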
The paper studies a scalar auxiliary variable (SAV) method to solve the Cahn-Hilliard equation with degenerate mobility posed on a smooth closed surface $\Gamma$. The SAV formulation is combined with adaptive time stepping and a geometrically unfitted trace finite element method (TraceFEM), which embeds $\Gamma$ in $\mathbb{R}^3$. Stability is proven to hold in an appropriate sense for both the first- and second-order in time variants of the method. The performance of our SAV method is illustrated through a series of numerical experiments, including a systematic comparison with a stabilized semi-explicit method.
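The SAV idea can be reduced to a scalar gradient flow $u' = -F'(u)$: introduce $r \approx \sqrt{F(u) + C}$ as an auxiliary variable and discretize so that the modified energy $r^2$ is non-increasing regardless of the step size. The double-well potential and step size below are illustrative assumptions; the paper applies the same principle to Cahn-Hilliard on a surface with TraceFEM in space.

```python
import numpy as np

# First-order SAV scheme for u' = -F'(u), F(u) = (u^2 - 1)^2 / 4.
F = lambda u: 0.25 * (u**2 - 1) ** 2
dF = lambda u: u**3 - u
C, dt = 1.0, 0.1
u = 2.0
r = np.sqrt(F(u) + C)                    # auxiliary variable r = sqrt(F + C)
energies = [r**2]
for _ in range(100):
    b = dF(u) / np.sqrt(F(u) + C)
    r = r / (1 + 0.5 * dt * b**2)        # closed-form SAV update (scalar case)
    u = u - dt * r * b
    energies.append(r**2)
mono = all(e2 <= e1 + 1e-14 for e1, e2 in zip(energies, energies[1:]))
```

The update for $r$ follows from solving the coupled linear SAV system in closed form in this scalar setting; the unconditional decay of $r^2$ is the discrete stability that the paper establishes (in an appropriate sense) for the surface Cahn-Hilliard problem.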
In this work, we propose a novel strategy for the numerical solution of the linear convection-diffusion equation (CDE) over unfitted domains. In the proposed numerical scheme, strategies from the high-order Hybridizable Discontinuous Galerkin (HDG) method and the eXtended Finite Element method are combined with a level set definition of the boundaries; the scheme is hence named the eXtended Hybridizable Discontinuous Galerkin (XHDG) method. In this approach, the HDG method is eXtended to unfitted domains; i.e., the computational mesh does not need to fit the domain boundary; instead, the boundary is defined by a level set function and cuts through the background mesh arbitrarily. The original unknown structure of HDG and its hybrid nature, which ensures local conservation of fluxes, are kept, while a modified bilinear form is developed for the elements cut by the boundary. At every cut element, an auxiliary nodal trace variable on the boundary is introduced, which is eliminated afterwards while imposing the boundary conditions. Both stationary and time-dependent CDEs are studied over a range of flow regimes, from diffusion to convection dominated, using high-order $(p \leq 4)$ XHDG through benchmark numerical examples over arbitrary unfitted domains. Results show that XHDG inherits the optimal $(p + 1)$ and super $(p + 2)$ convergence properties of HDG while removing the mesh-fitting restriction.
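The bookkeeping that any unfitted method starts from is classifying background cells against the level set: a cell is inside, outside, or cut depending on the sign of $\phi$ at its vertices, and only the cut cells need a modified bilinear form. The circular level set and mesh size below are illustrative assumptions.

```python
import numpy as np

# Classify cells of a background grid as inside / outside / cut by a
# level set phi (phi < 0 inside; here a circle of radius 0.3).
phi = lambda x, y: np.hypot(x - 0.5, y - 0.5) - 0.3
n = 8                                    # n x n background mesh (assumed)
xs = np.linspace(0, 1, n + 1)
status = np.empty((n, n), dtype=object)
for i in range(n):
    for j in range(n):
        corners = [phi(xs[i + a], xs[j + b]) for a in (0, 1) for b in (0, 1)]
        if max(corners) < 0:
            status[i, j] = "inside"      # all vertices inside the domain
        elif min(corners) > 0:
            status[i, j] = "outside"     # cell can be discarded
        else:
            status[i, j] = "cut"         # boundary crosses this cell
n_cut = int((status == "cut").sum())
```

In XHDG, the inside cells use standard HDG, the outside cells are dropped, and the cut cells receive the auxiliary boundary trace variable and the modified integration.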
In this article, we discuss the error analysis for a certain class of monotone finite volume schemes approximating nonlocal scalar conservation laws, modeling traffic flow and crowd dynamics, without any additional assumptions on monotonicity or linearity of the kernel $\mu$ or the flux $f$. We first prove a novel Kuznetsov-type lemma for this class of PDEs and thereby show that the finite volume approximations converge to the entropy solution at the rate of $\sqrt{\Delta t}$ in $L^1(\mathbb{R})$. To the best of our knowledge, this is the first proof of any type of convergence rate for this class of conservation laws. We also present numerical experiments to illustrate this result.
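The structure of a monotone finite volume scheme can be sketched on the *local* Burgers equation; the paper's nonlocal setting replaces the flux by $u\,V(\mu * u)$, adding a kernel convolution but keeping the same monotone-flux update. Grid, CFL number, and initial data below are assumptions for the sketch.

```python
import numpy as np

# Monotone (Lax-Friedrichs) finite volume scheme for u_t + (u^2/2)_x = 0
# with periodic boundary conditions.
N, T, lam = 200, 0.3, 0.4        # lam = dt/dx; CFL: lam * max|f'(u)| <= 1
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
u = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)      # data in [0, 1]
f = lambda v: 0.5 * v**2

mass0 = u.sum() * dx
t = 0.0
while t < T:
    up = np.roll(u, -1)                                   # u_{i+1}
    flux_r = 0.5 * (f(u) + f(up)) - 0.5 / lam * (up - u)  # LxF flux F_{i+1/2}
    u = u - lam * (flux_r - np.roll(flux_r, 1))           # conservative update
    t += lam * dx
mass_err = abs(u.sum() * dx - mass0)   # conservation, exact up to roundoff
```

Monotonicity under the CFL condition gives the discrete maximum principle and $L^1$ contraction that Kuznetsov-type arguments, like the one in the paper, build on to prove the $\sqrt{\Delta t}$ rate.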
A growing body of research shows how to replace classical partial differential equation (PDE) integrators with neural networks. A popular strategy is to generate input-output pairs with a PDE solver, train the neural network in a regression setting, and use the trained model as a cheap surrogate for the solver. The bottleneck in this scheme is the number of expensive queries to the PDE solver needed to generate the dataset. To alleviate the problem, we propose a computationally cheap augmentation strategy based on general covariance and simple random coordinate transformations. Our approach relies on the fact that physical laws are independent of the choice of coordinates, so a change of coordinate system preserves the type of a parametric PDE and only changes the PDE's data (e.g., initial conditions, diffusion coefficient). For the neural networks and partial differential equations we tested, the proposed augmentation improves the test error by 23% on average. The worst observed result is a 17% increase in test error for a multilayer perceptron, and the best case is an 80% decrease for a dilated residual network.
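A minimal version of the augmentation idea: for a periodic 1D problem, a shift or reflection of the coordinate maps one (coefficient, solution) pair to another valid pair of the same parametric PDE, so new training samples cost no solver queries. The arrays below are generic stand-ins (no solver involved), and the specific transformations are only the simplest examples of the strategy.

```python
import numpy as np

# Coordinate-transform augmentation for a periodic 1D sample pair.
rng = np.random.default_rng(0)

def augment(a, u, rng):
    """Apply a random periodic shift and an optional reflection x -> -x
    jointly to a (coefficient, solution) pair sampled on a uniform grid."""
    s = int(rng.integers(len(a)))        # random shift in grid points
    a2, u2 = np.roll(a, s), np.roll(u, s)
    if rng.random() < 0.5:               # random reflection
        a2, u2 = a2[::-1].copy(), u2[::-1].copy()
    return a2, u2

a = rng.random(64)                       # e.g. a sampled diffusion coefficient
u = np.cumsum(a)                         # stand-in "solution" array
a2, u2 = augment(a, u, rng)
same_multiset = np.allclose(np.sort(a2), np.sort(a))   # values only permuted
```

Because the same transformation is applied to input and output, the augmented pair is consistent by construction, which is exactly why the augmentation is computationally free.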
The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of NDEs. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
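The "residual networks are discretisations" observation fits in a few lines: an explicit Euler step $y_{k+1} = y_k + h\,f(y_k)$ is exactly a residual block whose body is $h\,f$. The vector field below is a fixed toy rotation (no training), chosen only because its exact flow is known.

```python
import numpy as np

# A residual network as an Euler discretisation of the ODE y' = A y.
A = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation vector field f(y) = A y

def resnet_forward(y, h, K):
    """K residual blocks == K Euler steps of step size h."""
    for _ in range(K):
        y = y + h * (A @ y)              # residual block: identity + h*f
    return y

y0 = np.array([1.0, 0.0])
y = resnet_forward(y0, h=0.001, K=1000)  # integrate to t = 1
exact = np.array([np.cos(1.0), np.sin(1.0)])  # exact flow rotates y0 by 1 rad
err = np.linalg.norm(y - exact)
```

Swapping the Euler step for a higher-order or reversible solver, while keeping $f$ a trainable network, is precisely the transition from a residual network to a neural ODE.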