Time-continuous Volterra equations valued in $\mathbb{R}$ with nonnegative resolvent kernels enjoy two basic monotonicity properties. The first is that two solution curves never intersect when the given signals are suitably ordered. The second is that solutions of the autonomous equations are monotone. The so-called CM-preserving schemes (Comm. Math. Sci., 2021, 19(5), 1301-1336) were proposed to preserve complete monotonicity, and hence these monotonicity properties, but they are restricted to uniform meshes. In this work, through an analogue of convolution on nonuniform meshes, we introduce the concept of ``right complementary monotone'' (R-CMM) kernels at the discrete level for nonuniform meshes, an analogue of the CM-preserving property that is far more flexible. We prove that the discrete solutions inherit both monotonicity properties whenever the discretized kernel satisfies the R-CMM property. Technically, our proofs rely heavily on the resolvent kernels.
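As a concrete illustration of the second property at the discrete level, here is a minimal Python sketch (an illustrative scheme, not the paper's R-CMM construction) for the scalar autonomous equation $u(t) = u_0 - (k*u)(t)$ with the completely monotone kernel $k(t) = t^{\alpha-1}/\Gamma(\alpha)$ on a uniform mesh, using an implicit rectangle-type rule whose weights are exact integrals of the kernel over the subintervals; the computed solution can be checked numerically for monotonicity.

```python
import numpy as np
from math import gamma

# Illustrative uniform-mesh discretization (NOT the paper's nonuniform
# R-CMM construction): u_n = u0 - sum_{j=1}^{n} a_{n-j} u_j, where
# a_m = int_{t_m}^{t_{m+1}} k(s) ds for k(t) = t^(alpha-1)/Gamma(alpha).
alpha, u0, T, N = 0.5, 1.0, 2.0, 400
h = T / N
t = h * np.arange(N + 1)
a = (t[1:]**alpha - t[:-1]**alpha) / gamma(alpha + 1)  # exact kernel integrals

u = np.empty(N + 1)
u[0] = u0
for n in range(1, N + 1):
    s = sum(a[n - j] * u[j] for j in range(1, n))
    u[n] = (u0 - s) / (1.0 + a[0])        # implicit step

# check (numerically) that the discrete autonomous solution is nonincreasing
print("monotone decreasing:", bool(np.all(np.diff(u) <= 0)))
```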
Empirical neural tangent kernels (eNTKs) can provide a good understanding of a given network's representation: they are often far less expensive to compute, and applicable more broadly, than infinite-width NTKs. For networks with $O$ output units (e.g., an $O$-class classifier), however, the eNTK on $N$ inputs is of size $NO \times NO$, taking $\mathcal{O}((NO)^2)$ memory and up to $\mathcal{O}((NO)^3)$ computation. Most existing applications have therefore used one of a handful of approximations yielding $N \times N$ kernel matrices, saving orders of magnitude of computation, but with limited to no justification. We prove that one such approximation, which we call "sum of logits", converges to the true eNTK at initialization for any network with a wide final "readout" layer. Our experiments demonstrate the quality of this approximation for various uses across a range of settings.
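To make the two objects concrete, the following numpy sketch (finite-difference Jacobians on a tiny random MLP; all sizes and names are illustrative, not the paper's setup) forms the full $NO \times NO$ eNTK from per-example Jacobians, and the $N \times N$ "sum of logits" kernel obtained by differentiating the summed outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
d, H, O, N = 5, 64, 3, 4                       # input dim, width, outputs, points
theta0 = rng.normal(size=H*d + H + O*H) * 0.1  # flattened parameters

def f(theta, x):
    """Tiny MLP forward pass on a flattened parameter vector."""
    W1 = theta[:H*d].reshape(H, d)
    b1 = theta[H*d:H*d + H]
    W2 = theta[H*d + H:].reshape(O, H)
    return W2 @ np.tanh(W1 @ x + b1)

def jac(x, eps=1e-5):
    """(O, P) Jacobian of outputs w.r.t. parameters via central differences."""
    P = theta0.size
    J = np.empty((O, P))
    for p in range(P):
        e = np.zeros(P); e[p] = eps
        J[:, p] = (f(theta0 + e, x) - f(theta0 - e, x)) / (2 * eps)
    return J

X = rng.normal(size=(N, d))
J = np.stack([jac(x) for x in X])              # (N, O, P)

# full eNTK: (NO x NO) with O x O blocks K[i,j] = J_i @ J_j.T
K_full = np.einsum('iop,jqp->iojq', J, J).reshape(N*O, N*O)

# "sum of logits": gradient of g(x) = sum_o f_o(x) gives an N x N kernel
g = J.sum(axis=1)                              # (N, P)
K_sum = g @ g.T
print(K_full.shape, K_sum.shape)
```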
This paper proposes a novel radial basis function neural network (RBFNN) for solving various partial differential equations (PDEs). In the proposed network, physics-informed kernel functions (PIKFs), derived from the governing equations of the PDEs under consideration, serve as the activation functions in place of traditional RBFs. Like the well-known physics-informed neural networks (PINNs), the proposed physics-informed kernel function neural networks (PIKFNNs) embed the physical information of the considered PDEs in the neural network; the difference is that PINNs place this information in the loss function, whereas PIKFNNs place it in the activation functions. Because the derived physics-informed kernel functions satisfy the governing equations of the homogeneous, nonhomogeneous, and transient PDEs considered, only boundary/initial data are required to train the network. Finally, the feasibility and accuracy of the proposed PIKFNNs are validated on several benchmark examples, including a high-wavenumber wave propagation problem, an infinite-domain problem, a nonhomogeneous problem, a long-time evolution problem, an inverse problem, and a spatial structural derivative diffusion model.
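A minimal sketch of the idea for the 2D Laplace equation, where the "activation" of a one-hidden-layer network is a kernel that itself satisfies the PDE (here the fundamental solution $\ln|x-s|$ with source points placed outside the domain, so each column is harmonic): only boundary data are needed, and training the linear output layer reduces to a least-squares solve. All placements and sizes below are illustrative choices; the paper's kernels for other PDEs differ.

```python
import numpy as np

rng = np.random.default_rng(1)
M, Nb = 60, 120                                 # sources, boundary points
ang_s = 2*np.pi*np.arange(M)/M
src = 2.0*np.c_[np.cos(ang_s), np.sin(ang_s)]   # sources on a radius-2 circle
ang_b = 2*np.pi*np.arange(Nb)/Nb
bnd = np.c_[np.cos(ang_b), np.sin(ang_b)]       # unit-circle boundary

u_exact = lambda p: p[..., 0]**2 - p[..., 1]**2 # harmonic test solution

def phi(pts, src):
    """Kernel matrix Phi[i,k] = ln|pts_i - src_k|; each column is harmonic."""
    d = pts[:, None, :] - src[None, :, :]
    return np.log(np.linalg.norm(d, axis=-1))

# fit the output weights to boundary data only (no interior collocation)
w, *_ = np.linalg.lstsq(phi(bnd, src), u_exact(bnd), rcond=None)

test = rng.uniform(-0.5, 0.5, size=(200, 2))    # interior evaluation points
err = np.abs(phi(test, src) @ w - u_exact(test))
print("max interior error:", err.max())
```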
In this work, we examine a numerical phase-field fracture framework in which the crack irreversibility constraint is treated with a primal-dual active set method and a linearization of the degradation function is used to enhance numerical stability. The first goal is to carefully derive our primal-dual active set formulation from a complementarity system; this formulation has been used in numerous studies in the literature, but without a detailed mathematical derivation for phase-field fracture so far. Based on this derivation, we formulate a modified combined active-set Newton approach that significantly reduces the computational cost in comparison to comparable prior algorithms for quasi-monolithic settings. For many practical problems, the Newton iteration converges quickly while the active-set iteration requires many steps; we suggest three different efficiency improvements to address this. Afterwards, we design an iteration on the linearization in order to drive the problem to the monolithic limit. Our new algorithms are implemented in the programming framework pfm-cracks [T. Heister, T. Wick; pfm-cracks: A parallel-adaptive framework for phase-field fracture propagation, Software Impacts, Vol. 6 (2020), 100045]. In the numerical examples, we conduct performance studies and investigate efficiency enhancements. The main emphasis is on the computational cost while maintaining the accuracy of numerical solutions and goal functionals. Our algorithmic suggestions are substantiated with the help of several benchmarks in two and three spatial dimensions, in which predictor-corrector adaptivity and parallel performance are explored as well.
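To illustrate the primal-dual active set idea in isolation, here is a minimal sketch on a 1D obstacle problem $-u'' = f$, $u \ge \psi$ (a stand-in for the irreversibility constraint; the paper's coupled phase-field system is far richer). The complementarity system $\lambda = Au - f \ge 0$, $u - \psi \ge 0$, $\lambda(u-\psi) = 0$ is resolved by predicting the contact set from $\lambda + c(\psi - u) > 0$, solving a reduced linear system, and iterating until the set stabilizes.

```python
import numpy as np

n = 200
h = 1.0 / (n + 1)
A = (np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1)
     - np.diag(np.ones(n-1), -1)) / h**2        # -u'' with zero Dirichlet BCs
f = -8.0 * np.ones(n)                           # pushes u down onto the obstacle
psi = -0.1 * np.ones(n)                         # constant lower obstacle

u, lam, c = np.zeros(n), np.zeros(n), 1e3
for it in range(50):
    active = lam + c * (psi - u) > 0            # predicted contact set
    # on the active set enforce u = psi; elsewhere enforce lambda = 0
    K, rhs = A.copy(), f.copy()
    K[active] = 0.0
    K[active, np.where(active)[0]] = 1.0
    rhs[active] = psi[active]
    u_new = np.linalg.solve(K, rhs)
    lam = A @ u_new - f                         # multiplier from residual
    lam[~active] = 0.0
    if np.array_equal(active, lam + c * (psi - u_new) > 0):
        u = u_new
        break                                   # active set has converged
    u = u_new
print("PDAS iterations:", it + 1, " contact nodes:", int(active.sum()))
```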
Modern SAT solvers are designed to handle problems expressed in Conjunctive Normal Form (CNF), so non-CNF problems must be CNF-ized upfront, typically using variants of either the Tseitin or the Plaisted-Greenbaum transformation. When passing from solving to enumeration, however, the capability of producing partial satisfying assignments that are as small as possible becomes crucial, which raises the question of whether such CNF encodings are also effective for enumeration. In this paper, we investigate, both theoretically and empirically, the effectiveness of CNF conversions for SAT enumeration. On the negative side, we show that: (i) the Tseitin transformation prevents the solver from producing short partial assignments, thus seriously affecting the effectiveness of enumeration; (ii) the Plaisted-Greenbaum transformation overcomes this problem only in part. On the positive side, we show that combining the Plaisted-Greenbaum transformation with NNF preprocessing upfront -- which is typically not used in solving -- can fully overcome the problem and can drastically reduce both the number of partial assignments and the execution time.
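A minimal sketch contrasting the two encodings on $\varphi = (a \wedge b) \vee (c \wedge d)$ with fresh definitional variables $x$ and $y$: Tseitin keeps both implication directions of $x \leftrightarrow (a \wedge b)$, while Plaisted-Greenbaum, since $x$ and $y$ occur only positively, keeps only $x \rightarrow (a \wedge b)$. Both CNFs are equisatisfiable with $\varphi$ over the original variables, but the one-sided definitions leave the auxiliary variables less constrained, which is what allows shorter partial assignments during enumeration. The brute-force model count below (toy code, not the paper's procedure) makes the relaxation visible.

```python
from itertools import product

a, b, c, d, x, y = 1, 2, 3, 4, 5, 6              # DIMACS-style literals

tseitin = [[-x, a], [-x, b], [x, -a, -b],        # x <-> (a & b)
           [-y, c], [-y, d], [y, -c, -d],        # y <-> (c & d)
           [x, y]]                               # root clause: x | y
pg      = [[-x, a], [-x, b],                     # x -> (a & b) only
           [-y, c], [-y, d],                     # y -> (c & d) only
           [x, y]]

def count_models(cnf, nvars=6):
    """Brute-force model count over total assignments (fine for 6 vars)."""
    models = 0
    for bits in product([False, True], repeat=nvars):
        val = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        models += all(any(val(l) for l in cl) for cl in cnf)
    return models

print("Tseitin: %d clauses, %d models" % (len(tseitin), count_models(tseitin)))
print("PG:      %d clauses, %d models" % (len(pg), count_models(pg)))
```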
We describe an algorithm, based on Euler's method, for solving Volterra integro-differential equations. The algorithm approximates the relevant integral by the composite trapezium rule, using the discrete nodes of the independent variable as the nodes for the integration variable. We have developed an error-control device using Richardson extrapolation, and we have achieved accuracy better than $10^{-12}$ for all numerical examples considered.
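A minimal sketch of the method on the test problem $u'(t) = 1 + \int_0^t u(s)\,ds$, $u(0)=0$, whose exact solution is $\sinh t$: forward Euler in time, with the memory integral evaluated by the composite trapezium rule on the same nodes, followed by a single level of Richardson extrapolation in $h$ (the paper's error-control device iterates this to much tighter tolerances).

```python
import numpy as np

def solve(T, N):
    """Euler + composite trapezium for u' = 1 + int_0^t u ds, u(0) = 0."""
    h = T / N
    u = np.zeros(N + 1)
    for n in range(N):
        # trapezium rule for int_0^{t_n} u(s) ds over nodes 0..n
        I = h * (0.5*u[0] + u[1:n].sum() + 0.5*u[n]) if n > 0 else 0.0
        u[n + 1] = u[n] + h * (1.0 + I)
    return u

T = 1.0
u1 = solve(T, 200)
u2 = solve(T, 400)
rich = 2*u2[::2] - u1                            # cancel the O(h) error term
exact = np.sinh(np.linspace(0, T, 201))
print("Euler error:        ", np.abs(u1 - exact).max())
print("extrapolated error: ", np.abs(rich - exact).max())
```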
A stable and high-order accurate solver for linear and nonlinear parabolic equations is presented. An additive Runge-Kutta method is used for the time stepping, integrating the stiff linear terms by an explicit-first-stage singly diagonally implicit Runge-Kutta (ESDIRK) method and the nonlinear terms by an explicit Runge-Kutta (ERK) method. In each time step, the implicit solve is performed by the recently developed Hierarchical Poincar\'e-Steklov (HPS) method, a fast direct solver for elliptic equations that decomposes the spatial domain into a hierarchical tree of subdomains and builds spectral collocation solvers locally on the subdomains. These ideas combine naturally in the presented method, since the single diagonal coefficient in ESDIRK and a fixed time step ensure that the coefficient matrix in the implicit solve of HPS remains the same for all time stages. This means that the precomputed inverse can be efficiently reused, leading to a scheme with complexity (in two dimensions) $\mathcal{O}(N^{1.5})$ for the precomputation, in which the solution operator for the elliptic problems is built, and then $\mathcal{O}(N)$ for each time step. The stability of the method is proved for first order in time and any order in space, and numerical evidence substantiates a claim of stability for a much broader class of time discretization methods. Numerical experiments supporting the accuracy and efficiency of the method in one and two dimensions are presented.
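A minimal sketch of the time-stepping structure, using the classical ARS(2,2,2) IMEX additive Runge-Kutta pair on a 1D reaction-diffusion problem (a dense LU factorization stands in for the HPS solver; problem and sizes are illustrative): because the implicit stages share the single diagonal coefficient $\gamma$ and the step size is fixed, $(I - \gamma h L)$ is factored once and reused at every stage of every step.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

m = 100
dx = 1.0 / (m + 1)
x = np.linspace(dx, 1 - dx, m)
L = (np.diag(-2*np.ones(m)) + np.diag(np.ones(m-1), 1)
     + np.diag(np.ones(m-1), -1)) / dx**2        # stiff: 1D Laplacian, Dirichlet
N = lambda u: u * (1 - u)                        # nonstiff: explicit reaction term

gamma = 1 - np.sqrt(2)/2                         # ARS(2,2,2) coefficients
delta = 1 - 1/(2*gamma)
h, steps = 1e-3, 200
LU = lu_factor(np.eye(m) - gamma*h*L)            # factor ONCE, reuse everywhere

u = np.exp(-200*(x - 0.5)**2)                    # initial bump
for _ in range(steps):
    U1 = u                                       # explicit first stage
    U2 = lu_solve(LU, u + h*gamma*N(U1))
    U3 = lu_solve(LU, u + h*(delta*N(U1) + (1-delta)*N(U2)
                             + (1-gamma)*(L @ U2)))
    u = U3                                       # scheme is stiffly accurate
print("u range after t = 0.2:", u.min(), u.max())
```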
Interface problems describe many fundamental physical phenomena and arise widely in engineering. However, it is challenging to develop efficient, fully decoupled numerical methods for degenerate interface problems, in which the coefficient of the PDE is discontinuous and merely nonnegative on the interface. The main goal of this paper is to construct fully decoupled numerical methods for nonlinear degenerate interface problems with ``double singularities''. We propose an efficient, fully decoupled method that combines a deep neural network on the singular subdomain with a finite difference method on the regular subdomain. The key idea of the new approach is to use deep learning to split the nonlinear degenerate partial differential equation with an interface into two independent boundary value problems. The outstanding advantages of the proposed scheme are twofold: the convergence order on the whole domain is determined by the finite difference scheme on the regular subdomain, and the method can handle very large jump ratios (such as $10^{12}:1$ or $1:10^{12}$) for both degenerate and non-degenerate interface problems. Moreover, the expansion of the solution contains no undetermined parameters. In this way, two independent nonlinear systems are constructed on the subdomains and can be computed in parallel. The flexibility, accuracy, and efficiency of the method are validated on various experiments in both 1D and 2D. In particular, the method is suitable not only for the degenerate interface case but also for the non-degenerate case. Application examples with complicated multiply connected and sharp-edged interfaces, including degenerate and non-degenerate cases, are also presented.
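A minimal sketch of the decoupling idea on a linear 1D model problem $-(\beta u')' = 0$ on $(0,1)$, $u(0)=0$, $u(1)=1$, with $\beta$ jumping from $\beta_1$ to $\beta_2$ at $x = 1/2$ and a jump ratio of $10^{12}$: once the interface value is fixed by flux continuity, the two subdomain boundary value problems are independent and can be solved in parallel. Here both sides use finite differences for brevity; the paper instead couples a deep network on the singular subdomain with finite differences on the regular one.

```python
import numpy as np

beta1, beta2 = 1.0, 1e12                         # jump ratio 1e12
q = 1.0 / (0.5/beta1 + 0.5/beta2)                # constant flux beta * u'
u_if = 0.5 * q / beta1                           # interface value u(1/2)

def solve_subdomain(a, b, ua, ub, n=100):
    """FD solve of -u'' = 0 on (a,b) with Dirichlet data (beta constant)."""
    A = (np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1)
         - np.diag(np.ones(n-1), -1))
    rhs = np.zeros(n); rhs[0] += ua; rhs[-1] += ub
    return np.linspace(a, b, n + 2), np.r_[ua, np.linalg.solve(A, rhs), ub]

xl, ul = solve_subdomain(0.0, 0.5, 0.0, u_if)    # independent BVP 1
xr, ur = solve_subdomain(0.5, 1.0, u_if, 1.0)    # independent BVP 2

exact = lambda x: np.where(x <= 0.5, q*x/beta1, 1 - q*(1 - x)/beta2)
print("jump ratio 1e12, max error:",
      max(np.abs(ul - exact(xl)).max(), np.abs(ur - exact(xr)).max()))
```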
In this article, we develop a high-order accurate method to solve the incompressible boundary layer equations in a provably stable manner. We first derive continuous energy estimates and then proceed to the discrete setting. We formulate the discrete approximation using high-order finite difference methods in summation-by-parts form and impose the boundary conditions weakly using the simultaneous approximation term method. By applying the discrete energy method and imitating the continuous analysis, we obtain a discrete estimate that resembles the continuous counterpart, proving stability. We also show that the newly derived boundary conditions remove the singularities associated with the null space of the nonlinear discrete spatial operator. Numerical experiments are presented that verify the high-order accuracy of the scheme and agree with the theoretical results. The numerical results are compared with the well-known Blasius similarity solution as well as with the solution of the incompressible Navier-Stokes equations.
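A minimal sketch of the SBP-SAT machinery on the model problem $u_t + u_x = 0$ on $[0,1]$ with inflow data $g(t)$ at $x=0$: a second-order summation-by-parts first-derivative operator $D = H^{-1}Q$ and a simultaneous approximation term that imposes the inflow condition weakly with penalty $\tau = -1$ (an energy-stable choice). The paper applies the same machinery, at high order, to the nonlinear boundary layer equations.

```python
import numpy as np

n = 200
h = 1.0 / n
x = np.linspace(0, 1, n + 1)
H = h * np.eye(n + 1); H[0, 0] = H[-1, -1] = h/2   # SBP norm matrix
Q = 0.5 * (np.diag(np.ones(n), 1) - np.diag(np.ones(n), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5                     # Q + Q^T = diag(-1,0,...,0,1)
D = np.linalg.solve(H, Q)                          # SBP derivative D = H^{-1} Q
Hinv_e0 = np.linalg.solve(H, np.eye(n + 1)[:, 0])  # H^{-1} e_0
tau = -1.0                                         # SAT penalty strength

g = lambda t: np.sin(8*np.pi*t)                    # inflow data at x = 0
rhs = lambda u, t: -D @ u + tau * Hinv_e0 * (u[0] - g(t))

u, t, dt, T = np.sin(-8*np.pi*x), 0.0, 0.4*h, 1.0  # compatible initial data
while t < T - 1e-12:                               # classic RK4 in time
    k1 = rhs(u, t)
    k2 = rhs(u + dt/2*k1, t + dt/2)
    k3 = rhs(u + dt/2*k2, t + dt/2)
    k4 = rhs(u + dt*k3, t + dt)
    u = u + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    t += dt
print("max error vs u(x,t) = sin(8*pi*(t-x)):",
      np.abs(u - np.sin(8*np.pi*(t - x))).max())
```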
We investigate error bounds for numerical solutions of divergence-structure linear elliptic PDEs on compact manifolds without boundary. Our focus is on a class of monotone finite difference approximations, which provide a strong form of stability that guarantees the existence of a bounded solution. In many settings, including the Dirichlet problem, it is easy to show that the resulting solution error is proportional to the formal consistency error of the scheme. We make the surprising observation that this need not be true for PDEs posed on compact manifolds without boundary. We propose a particular class of approximation schemes built around an underlying monotone scheme with consistency error $O(h^\alpha)$. By carefully constructing barrier functions, we prove that the solution error is bounded by $O(h^{\alpha/(d+1)})$ in dimension $d$. We also provide a specific example where this predicted convergence rate is observed numerically. Using these error bounds, we further design a family of provably convergent approximations to the solution gradient.
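A minimal sketch of the setting on the simplest compact manifold without boundary, the circle ($d = 1$): the standard monotone three-point scheme for $-u'' = f$ with periodic data, made solvable by appending a mean-zero constraint to pin down the constant null space. This toy example only illustrates the boundary-free setting and a convergence measurement; the paper's barrier-function analysis concerns schemes whose solution error can degrade to $O(h^{\alpha/(d+1)})$ rather than matching the formal consistency order.

```python
import numpy as np

def solve_circle(n):
    """Monotone periodic scheme for -u'' = cos(theta); exact solution cos."""
    h = 2*np.pi/n
    th = h*np.arange(n)
    f = np.cos(th)                                # mean-zero right-hand side
    A = (2*np.eye(n) - np.roll(np.eye(n), 1, 0)
         - np.roll(np.eye(n), -1, 0)) / h**2      # periodic -u'' stencil
    # append the mean-zero constraint to remove the constant null space
    A_aug = np.vstack([A, np.ones(n)/n])
    u = np.linalg.lstsq(A_aug, np.r_[f, 0.0], rcond=None)[0]
    return np.abs(u - np.cos(th)).max()

for n in (40, 80, 160):                           # errors should shrink ~4x
    print(n, solve_circle(n))
```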
The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case, and many popular neural network architectures, such as residual networks and recurrent networks, can be interpreted as discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
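A minimal sketch of the "two sides of the same coin" observation (toy code with an illustrative vector field, not from the thesis): a residual network $h_{k+1} = h_k + f_\theta(h_k)$ is exactly the explicit Euler discretisation, with unit step, of the neural ODE $\mathrm{d}h/\mathrm{d}t = f_\theta(h(t))$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
W = rng.normal(size=(d, d)) / np.sqrt(d)          # shared vector-field weights
f = lambda h: np.tanh(W @ h)                      # f_theta

def resnet(h, depth):                             # h_{k+1} = h_k + f(h_k)
    for _ in range(depth):
        h = h + f(h)
    return h

def euler(h, T, steps):                           # explicit Euler for dh/dt = f(h)
    dt = T / steps
    for _ in range(steps):
        h = h + dt * f(h)
    return h

h0 = rng.normal(size=d)
# depth-10 ResNet == unit-step Euler integration to time T = 10
print(np.allclose(resnet(h0, 10), euler(h0, 10.0, 10)))  # True
```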