Partial differential equations (PDEs) have become an essential tool for modeling complex physical systems. Such equations are typically solved numerically via mesh-based methods, such as finite element methods, whose outputs consist of the solutions on a set of mesh nodes over the spatial domain. However, these simulations are often too costly to allow a thorough survey of the input space. In this paper, we propose an efficient emulator that simultaneously predicts the outputs on a set of mesh nodes, with theoretical justification of its uncertainty quantification. The novelty of the proposed method lies in the incorporation of the mesh node coordinates into the statistical model. In particular, the proposed method segments the mesh nodes into multiple clusters via a Dirichlet process prior and fits a Gaussian process model in each cluster. Most importantly, by revealing the underlying clustering structure, the proposed method can extract valuable flow physics present in the system, which can be used to guide further investigations. Real-world examples demonstrate that our proposed method yields smaller prediction errors than its main competitors with competitive computation time, and that it identifies interesting clusters of mesh nodes that exhibit coherent input-output relationships and possess physical significance, such as satisfying boundary conditions. An R package implementing the proposed methodology is provided in an open repository.
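For intuition, here is a minimal sketch of the cluster-then-emulate idea, with k-means standing in for the Dirichlet-process clustering and a toy response surface; none of this is the authors' implementation:

```python
# Minimal cluster-then-emulate sketch: k-means stands in for the paper's
# Dirichlet-process prior, and one GP is fitted per cluster of mesh nodes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
coords = rng.uniform(size=(500, 2))        # mesh node coordinates
X = rng.uniform(size=(40, 3))              # simulation inputs (40 runs)
Y = np.sin(X @ rng.normal(size=(3, 500)))  # toy outputs: runs x nodes

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)

emulators = {}
for k in np.unique(labels):
    # One GP per cluster; here it is trained on the cluster-mean response,
    # a simplification of fitting the full per-node model.
    emulators[k] = GaussianProcessRegressor(kernel=RBF()).fit(
        X, Y[:, labels == k].mean(axis=1))
```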
We introduce a novel algorithm that converges to level-set convex viscosity solutions of high-dimensional Hamilton-Jacobi equations. The algorithm is applicable to a broad class of curvature motion PDEs, as well as a recently developed Hamilton-Jacobi equation for the Tukey depth, which is a statistical depth measure of data points. A main contribution of our work is a new monotone scheme for approximating the direction of the gradient, which allows for monotone discretizations of pure partial derivatives in the direction of, and orthogonal to, the gradient. We provide a convergence analysis of the algorithm on both regular Cartesian grids and unstructured point clouds in any dimension and present numerical experiments that demonstrate the effectiveness of the algorithm in approximating solutions of the affine flow in two dimensions and the Tukey depth measure of high-dimensional datasets such as MNIST and FashionMNIST.
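As a point of reference, the Tukey (halfspace) depth of a point is the smallest fraction of the data lying in any closed halfspace through that point. A brute-force Monte Carlo estimate over random directions, as opposed to the PDE-based approach above, might look like:

```python
import numpy as np

def tukey_depth(point, data, n_dirs=500, seed=0):
    """Monte Carlo estimate of the Tukey (halfspace) depth of `point`:
    the minimum, over sampled directions, of the fraction of data points
    lying on one side of a hyperplane through `point`."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, data.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = (data - point) @ dirs.T   # signed projections, shape (n, n_dirs)
    return (proj >= 0).mean(axis=0).min()
```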
Spectral deferred corrections (SDC) are a class of iterative methods for the numerical solution of ordinary differential equations. SDC can be interpreted as a Picard iteration for solving a fully implicit collocation problem, preconditioned with a low-order method. It has been widely studied for first-order problems, using explicit, implicit, or implicit-explicit Euler methods and other low-order schemes as the preconditioner. For first-order problems, SDC achieves arbitrary order of accuracy and possesses good stability properties. While numerical results exist for SDC applied to the second-order Lorentz equations, no theoretical results are available for SDC applied to second-order problems. We present an analysis of the convergence and stability properties of SDC using velocity-Verlet as the base method for general second-order initial value problems. Our analysis proves that the order of convergence depends on whether the force in the system depends on the velocity. We also demonstrate that the SDC iteration is stable under certain conditions. Finally, we show that SDC can be computationally more efficient than a simple Picard iteration or a fourth-order Runge-Kutta-Nystr\"om method.
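For reference, the velocity-Verlet base method used as the preconditioner is, for a velocity-independent force x'' = f(x), the familiar second-order step; a minimal sketch:

```python
import numpy as np

def velocity_verlet(x, v, f, dt):
    """One velocity-Verlet step for x'' = f(x), the low-order base method
    that an SDC sweep would apply and then correct. For forces that also
    depend on v, the velocity update becomes implicit, which is one reason
    the convergence order differs in that case."""
    a = f(x)
    x_new = x + dt * v + 0.5 * dt**2 * a
    v_new = v + 0.5 * dt * (a + f(x_new))
    return x_new, v_new

# Example: one step of a harmonic oscillator x'' = -x.
x1, v1 = velocity_verlet(np.array([1.0]), np.array([0.0]), lambda x: -x, 0.01)
```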
Neural ordinary differential equations (neural ODEs) are a popular family of continuous-depth deep learning models. In this work, we consider a large family of parameterized ODEs with continuous-in-time parameters, which include time-dependent neural ODEs. We derive a generalization bound for this class by a Lipschitz-based argument. By leveraging the analogy between neural ODEs and deep residual networks, our approach yields in particular a generalization bound for a class of deep residual networks. The bound involves the magnitude of the difference between successive weight matrices. We illustrate numerically how this quantity affects the generalization capability of neural networks.
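The quantity entering the bound can be computed directly; below is a sketch with a toy residual network, where the spectral norm is an illustrative choice of matrix norm, not necessarily the one used in the paper:

```python
import torch

depth, width = 8, 32
# Toy residual network with depth-scaled updates, the discrete analogue
# of a neural ODE with time-dependent parameters.
W = [torch.randn(width, width) / width**0.5 for _ in range(depth)]

def resnet(x):
    for Wk in W:
        x = x + (1.0 / depth) * torch.tanh(x @ Wk.T)
    return x

# Quantity appearing in the bound: the total magnitude of differences
# between successive weight matrices (spectral norm is an assumption here).
tv = sum(torch.linalg.matrix_norm(W[k + 1] - W[k], ord=2)
         for k in range(depth - 1))
print(float(tv))
```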
The Reynolds equation, combined with the Elrod algorithm for including the effect of cavitation, resembles a nonlinear convection-diffusion-reaction (CDR) equation. Its solution by finite elements is prone to oscillations in convection-dominated regions, which are present whenever cavitation occurs. We propose a stabilized finite-element method that is based on the variational multiscale method and exploits the concept of orthogonal subgrid scales. We demonstrate that this approach requires only one additional term in the weak form to obtain a stable method that converges optimally under mesh refinement.
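A one-dimensional caricature of the issue: for the model problem u' = eps u'' with linear elements, plain Galerkin oscillates once the mesh Peclet number exceeds one, while adding a single streamline-diffusion-type term restores a monotone solution. This is a generic stabilized-FEM illustration, not the orthogonal subgrid-scale formulation of the paper:

```python
import numpy as np

# 1D convection-diffusion u' = eps*u'' on (0,1), u(0)=0, u(1)=1, with
# linear finite elements. Setting tau = 0 recovers plain Galerkin, which
# oscillates for small eps; tau > 0 adds one stabilizing term.
n, eps = 40, 1e-3
h = 1.0 / n
tau = 0.5 * h                      # illustrative stabilization parameter

A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
conv = 0.5 * np.array([[-1.0, 1.0], [-1.0, 1.0]])   # int phi_a * phi_b' dx
stab = (eps + tau) / h * np.array([[1.0, -1.0], [-1.0, 1.0]])  # diffusion + tau
for e in range(n):                 # assemble element contributions
    idx = (e, e + 1)
    for a_, p in enumerate(idx):
        for b_, q in enumerate(idx):
            A[p, q] += conv[a_, b_] + stab[a_, b_]
A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 0.0            # Dirichlet BCs
A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 1.0
u = np.linalg.solve(A, b)          # monotone profile; oscillatory if tau = 0
```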
The generalized Golub-Kahan bidiagonalization has been used to solve saddle-point systems whose leading block is symmetric and positive definite. We extend this iterative method to the case where the symmetry condition no longer holds. We do so by relying on the algorithm's known connection with the Conjugate Gradient method and by following the line of reasoning that adapts the latter into the Full Orthogonalization Method (FOM). We propose appropriate stopping criteria based on the residual and on an estimate of the energy norm of the error associated with the primal variable. A numerical comparison with GMRES highlights the advantages of our proposed strategy, in particular its low memory requirements and their practical implications.
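As background, the CG-to-FOM adaptation that guides this extension replaces the short recurrence with a full Arnoldi basis and a Galerkin condition. A minimal sketch of FOM for a generic nonsymmetric system Ax = b, including the closed-form residual estimate that motivates residual-based stopping criteria:

```python
import numpy as np

def fom(A, b, m=50, tol=1e-10):
    """Minimal Full Orthogonalization Method: build an Arnoldi basis V,
    impose the Galerkin condition r_j orthogonal to span(V), and use the
    closed form h_{j+1,j} * |e_j^T y| of the residual norm to stop."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    y = np.zeros(1)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 1); e1[0] = beta
        y = np.linalg.solve(H[:j + 1, :j + 1], e1)
        if H[j + 1, j] * abs(y[-1]) < tol:     # residual norm estimate
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :y.size] @ y
```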
Traditional static functional data analysis faces new challenges from streaming data, which flow in continuously. A major challenge is that storing such an ever-increasing amount of data in memory is nearly impossible. In addition, existing inferential tools in online learning are mainly developed for finite-dimensional problems, while inference methods for functional data focus on the batch learning setting. In this paper, we tackle these issues by developing functional stochastic gradient descent algorithms and proposing an online bootstrap resampling procedure to systematically study the inference problem for functional linear regression. In particular, the proposed estimation and inference procedures use only one pass over the data; they are therefore easy to implement and well suited to situations where data arrive in a streaming manner. Furthermore, we establish the convergence rate as well as the asymptotic distribution of the proposed estimator. The perturbed estimator from the bootstrap procedure is shown to enjoy the same theoretical properties, which provides the theoretical justification for our online inference tool. To the best of our knowledge, this is the first inference result for the functional linear regression model with streaming data. Simulation studies are conducted to investigate the finite-sample performance of the proposed procedure. An application is illustrated with the Beijing multi-site air-quality data.
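Schematically, a one-pass stochastic gradient update for a (truncated-basis) functional linear model keeps only the current coefficient vector in memory; a toy sketch, with the step sizes and the 20-dimensional basis being illustrative choices:

```python
import numpy as np

# One-pass SGD for y_i = <X_i, beta> + noise, with X_i and beta expanded
# in d basis coefficients; memory cost is O(d) regardless of sample size.
d, n = 20, 10_000
rng = np.random.default_rng(1)
beta_true = 1.0 / (1.0 + np.arange(d)) ** 2

beta_hat = np.zeros(d)
for i in range(1, n + 1):
    x = rng.normal(size=d)                    # basis scores of X_i
    y = x @ beta_true + 0.1 * rng.normal()
    eta = 0.5 * i ** -0.5                     # decaying step size
    beta_hat += eta * (y - x @ beta_hat) * x  # single pass over the stream
```

An online bootstrap in this spirit would maintain perturbed copies of the same recursion, each multiplying its update by a random weight drawn as the observation arrives.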
Relational concept analysis (RCA) is an extension of formal concept analysis that allows dealing with several related contexts simultaneously. It has been designed for learning description logic theories from data and has been used in various applications. A puzzling observation about RCA is that it returns a single family of concept lattices although, when the data feature circular dependencies, other solutions may be considered acceptable. The semantics of RCA, provided in an operational way, does not shed light on this issue. In this report, we define these acceptable solutions as those families of concept lattices which belong to the space determined by the initial contexts (well-formed), cannot scale new attributes (saturated), and refer only to concepts of the family (self-supported). We adopt a functional view of the RCA process by defining the space of well-formed solutions and two functions on that space: one expansive and the other contractive. We show that the acceptable solutions are the common fixed points of both functions. This is achieved step by step, starting from a minimal version of RCA that considers only one single context defined on a space of contexts and a space of lattices. These spaces are then joined into a single space of context-lattice pairs, which is further extended to a space of indexed families of context-lattice pairs representing the objects manipulated by the RCA process.
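The fixed-point view lends itself to a generic iteration scheme; a minimal abstract sketch (the actual context and lattice structures of RCA are far richer than this):

```python
def fixed_point(f, start):
    """Iterate a map on a finite space until it stabilizes. Applied to the
    report's expansive function starting from the minimal well-formed
    family (or the contractive one from the maximal family), the iterates
    reach a fixed point; acceptable RCA solutions are the common fixed
    points of both functions."""
    x = start
    while (y := f(x)) != x:
        x = y
    return x
```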
We aim to establish Bowen's equations for the upper capacity invariance pressure and the Pesin-Pitskel invariance pressure of discrete-time control systems. We first introduce a new invariance pressure, called induced invariance pressure on partitions, which specializes to the upper capacity invariance pressure on partitions, and then show that the two types of invariance pressure are related by a Bowen's equation. In addition, to establish Bowen's equation for the Pesin-Pitskel invariance pressure on partitions, we introduce a new notion called BS invariance dimension on subsets. Moreover, a variational principle for the BS invariance dimension on subsets is established.
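For orientation, Bowen's equation in its classical form characterizes a dimension-like quantity as the root of a pressure function; the invariance-pressure statements here follow the same schematic pattern (the display below is the classical template, not the paper's exact result):

```latex
% Classical template of a Bowen equation: the dimension of a set Z is the
% zero of the topological pressure s -> P_Z(-s*phi).
\[
  \dim Z \;=\; \inf\{\, s \ge 0 \;:\; P_Z(-s\,\varphi) \le 0 \,\}.
\]
```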
We propose a general framework for solving forward and inverse problems constrained by partial differential equations, in which we interpolate neural networks onto finite element spaces to represent the (partial) unknowns. The framework overcomes the challenges related to the imposition of boundary conditions, the choice of collocation points in physics-informed neural networks, and the integration of variational physics-informed neural networks. A set of numerical experiments confirms the framework's capability of handling various forward and inverse problems. In particular, the trained neural network generalises well for smooth problems, outperforming finite element solutions by several orders of magnitude. Finally, we propose an effective one-loop solver with an initial data-fitting step (to obtain a cheap initialisation) for solving inverse problems.
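The core mechanism can be sketched in a few lines: evaluate the network at the mesh nodes, so that the finite element interpolant of the network is the trial function and Dirichlet boundary conditions can be imposed exactly on the nodal values. The architecture, mesh, and objective below are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

# Represent the unknown by the P1 interpolant of a neural network on a
# 1D mesh: the network is evaluated only at mesh nodes, and FE machinery
# (quadrature, boundary conditions) acts on the nodal vector.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
nodes = torch.linspace(0.0, 1.0, 101).unsqueeze(1)

u = net(nodes).squeeze()          # nodal degrees of freedom of u_theta
u = torch.cat([torch.zeros(1), u[1:-1], torch.zeros(1)])  # exact Dirichlet BCs

# Toy variational objective on the interpolant (Poisson energy, f = 1):
h = 1.0 / 100
energy = 0.5 * ((u[1:] - u[:-1]) ** 2 / h).sum() - h * u.sum()
energy.backward()                 # gradients flow back to the network weights
```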
We explore a linear inhomogeneous elasticity equation with random Lam\'e parameters. The latter are parameterized by a countably infinite number of terms in separated expansions. The main aim of this work is to estimate expected values (considered as infinite-dimensional integrals over the parametric space corresponding to the random coefficients) of linear functionals acting on the solution of the elasticity equation. To achieve this, the expansions of the random parameters are truncated, a high-order quasi-Monte Carlo (QMC) rule is combined with a sparse grid approach to approximate the high-dimensional integral, and a Galerkin finite element method (FEM) is introduced to approximate the solution of the elasticity equation over the physical domain. The error estimates from (1) truncating the infinite expansion, (2) the Galerkin FEM, and (3) the QMC sparse grid quadrature rule are all studied. For this purpose, we show certain required regularity properties of the continuous solution with respect to both the parametric and physical variables. To obtain our theoretical regularity and convergence results, some reasonable assumptions on the expansions of the random coefficients are imposed. Finally, numerical results are presented.
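The QMC component can be illustrated with a rank-1 lattice rule: N deterministic points generated by a single integer vector, applied to a toy integrand whose coordinates decay in importance. The Korobov-style generating vector below is a hypothetical choice, not an optimized one:

```python
import numpy as np

s, N = 16, 2 ** 13
# Rank-1 lattice points: x_i = frac(i * z / N) for a generating vector z.
z = np.array([pow(76, k, N) for k in range(s)])       # hypothetical vector
pts = (np.outer(np.arange(N), z) % N) / N

# Toy integrand with decaying coordinate weights, mimicking the decay
# assumed for the random-coefficient expansions.
f = lambda x: np.exp((x / (1.0 + np.arange(s)) ** 2).sum(axis=1))
print(f(pts).mean())              # QMC estimate of the integral over [0,1]^s
```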