This paper proposes a strategy that fundamentally resolves the problems of the conventional s-version of the finite element method (SFEM). Because SFEM can reasonably model an analysis domain by superimposing meshes with different spatial resolutions, it has the intrinsic advantages of local high accuracy, low computation time, and a simple meshing procedure. However, it suffers from inaccurate numerical integration and matrix singularity. Although several additional techniques have been proposed to mitigate these limitations, they are computationally expensive or ad hoc, and they detract from the method's strengths. To resolve these issues, we propose a novel strategy called B-spline-based SFEM. To improve the accuracy of numerical integration, we employ cubic B-spline basis functions with $C^2$-continuity across element boundaries as the global basis functions. To avoid matrix singularity, we apply different basis functions to different meshes; specifically, we employ Lagrange basis functions as the local basis functions. The numerical results indicate that, with the proposed method, numerical integration can be performed with sufficient accuracy without any of the additional techniques required by conventional SFEM. Furthermore, the proposed method avoids matrix singularity and is superior to conventional methods in terms of the convergence of linear equation solvers. Therefore, the proposed method has the potential to reduce computation time while maintaining accuracy comparable to conventional SFEM.
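To illustrate the $C^2$-continuity property that motivates the global basis choice above, the following minimal Python sketch (illustrative only, not the paper's implementation) evaluates the standard uniform cubic B-spline from its four polynomial pieces and checks numerically that its value, slope, and curvature agree from both sides of each interior knot:

```python
# Uniform cubic B-spline on knots 0,1,2,3,4, assembled from its four
# polynomial pieces (Cox-de Boor recursion specialized to equal spacing).
def b3(t):
    if 0.0 <= t < 1.0:
        s = t
        return s**3 / 6.0
    if 1.0 <= t < 2.0:
        s = t - 1.0
        return (-3*s**3 + 3*s**2 + 3*s + 1) / 6.0
    if 2.0 <= t < 3.0:
        s = t - 2.0
        return (3*s**3 - 6*s**2 + 4) / 6.0
    if 3.0 <= t < 4.0:
        s = t - 3.0
        return (1 - s)**3 / 6.0
    return 0.0

def deriv(f, t, k, h=1e-5):
    """k-th derivative of f at t via nested central finite differences."""
    if k == 0:
        return f(t)
    g = lambda x: deriv(f, x, k - 1, h)
    return (g(t + h) - g(t - h)) / (2 * h)

# Value, first, and second derivative match across every interior knot,
# so the basis is C^2-continuous across element boundaries.
for knot in (1.0, 2.0, 3.0):
    for k in (0, 1, 2):
        left = deriv(b3, knot - 1e-4, k)
        right = deriv(b3, knot + 1e-4, k)
        assert abs(left - right) < 1e-2, (knot, k)
```

The same pieces also sum to one at every point (partition of unity), which is what makes superimposing such a global basis on a local Lagrange mesh well behaved.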
We consider the task of estimating functions belonging to a specific class of nonsmooth functions, namely so-called tame functions. These functions appear in a wide range of applications: the training of deep neural networks, value functions of mixed-integer programs, and wave functions of small molecules. We show that tame functions are approximable by piecewise polynomials on any full-dimensional cube. We then present the first mixed-integer programming formulation of piecewise polynomial regression. Together, these results can be used to estimate tame functions. We demonstrate promising computational results.
Nonlinear Fokker-Planck equations play a major role in modeling large systems of interacting particles, with proven effectiveness in describing real-world phenomena ranging from classical fields such as fluids and plasmas to social and biological dynamics. Their mathematical formulation often has to contend with physical forces that have a significant random component, or with particles living in a random environment whose characterization may be deduced from experimental data, consequently leading to uncertainty-dependent equilibrium states. In this work, to address the problem of effectively solving stochastic Fokker-Planck systems, we construct a new equilibrium-preserving scheme through a micro-macro approach based on stochastic Galerkin methods. Unlike the direct application of a stochastic Galerkin projection in the parameter space of the unknowns of the underlying Fokker-Planck model, the resulting numerical method leads to a highly accurate description of the uncertainty-dependent large-time behavior. Several numerical tests in the context of collective behavior for the social and life sciences are presented to assess the validity of the present methodology against standard ones.
Designing efficient and high-accuracy numerical methods for the complex, dynamic incompressible magnetohydrodynamics (MHD) equations remains a challenging problem in various analysis and design tasks. This is mainly due to the nonlinear coupling of the magnetic and velocity fields through the convection and Lorentz forces, together with multiple physical constraints, which limit numerical computation. In this paper, we develop MHDnet, a physics-preserving learning approach to solving MHD problems, in which three different mathematical formulations are considered, named the $B$ formulation, the $A_1$ formulation, and the $A_2$ formulation. These formulations are embedded into MHDnet, which preserves the underlying physical properties and the divergence-free condition. Moreover, MHDnet is designed with multi-mode feature merging in a multiscale neural network architecture, which can accelerate the convergence of the neural networks (NN) by alleviating the interaction of magnetic-fluid coupling across different frequency modes. Furthermore, the pressure fields of the three formulations, treated as hidden states, can be obtained without extra data or computational cost. Several numerical experiments are presented to demonstrate the performance of the proposed MHDnet compared with different NN architectures and numerical formulations.
We present the numerical analysis of a finite element method (FEM) for one-dimensional Dirichlet problems involving the logarithmic Laplacian (the pseudo-differential operator that appears as a first-order expansion of the fractional Laplacian as the exponent $s\to 0^+$). Our analysis exhibits new phenomena in this setting; in particular, using recently obtained regularity results, we prove rigorous error estimates and provide a logarithmic order of convergence in the energy norm using suitable \emph{log}-weighted spaces. Numerical evidence suggests that this type of rate cannot be improved. Moreover, we show that the stiffness matrix of logarithmic problems can be obtained as the derivative of the fractional stiffness matrix evaluated at $s=0$. Lastly, we investigate the discrete eigenvalue problem and its convergence to the continuous one.
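In symbols, writing $K^{s}$ for the fractional stiffness matrix (notation ours; $C(d,s)$ denotes the usual normalizing constant of the fractional Laplacian and $\varphi_i$ the FEM basis functions), the relation stated above reads:

\[
K^{\log} \;=\; \left.\frac{d}{ds}\, K^{s}\right|_{s=0},
\qquad
(K^{s})_{ij} \;=\; \frac{C(d,s)}{2}\iint
\frac{\big(\varphi_i(x)-\varphi_i(y)\big)\big(\varphi_j(x)-\varphi_j(y)\big)}{|x-y|^{d+2s}}\,dx\,dy,
\]

with $d=1$ in the one-dimensional setting considered here.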
High-dimensional Partial Differential Equations (PDEs) are a popular mathematical modelling tool, with applications ranging from finance to computational chemistry. However, standard numerical techniques for solving these PDEs are typically affected by the curse of dimensionality. In this work, we tackle this challenge while focusing on stationary diffusion equations defined over a high-dimensional domain with periodic boundary conditions. Inspired by recent progress in sparse function approximation in high dimensions, we propose a new method called compressive Fourier collocation. Combining ideas from compressive sensing and spectral collocation, our method replaces the use of structured collocation grids with Monte Carlo sampling and employs sparse recovery techniques, such as orthogonal matching pursuit and $\ell^1$ minimization, to approximate the Fourier coefficients of the PDE solution. We conduct a rigorous theoretical analysis showing that the approximation error of the proposed method is comparable with the best $s$-term approximation (with respect to the Fourier basis) to the solution. Using the recently introduced framework of random sampling in bounded Riesz systems, our analysis shows that the compressive Fourier collocation method mitigates the curse of dimensionality with respect to the number of collocation points under sufficient conditions on the regularity of the diffusion coefficient. We also present numerical experiments that illustrate the accuracy and stability of the method for the approximation of sparse and compressible solutions.
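As a hedged one-dimensional toy of the sparse-recovery ingredient above (illustrative only, not the authors' method or problem sizes), the following Python sketch samples a function that is sparse in a real trigonometric basis at random Monte Carlo points and recovers its coefficients with a small greedy orthogonal matching pursuit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: a function on [0, 1) that is 3-sparse in a real
# trigonometric (Fourier) basis, sampled at random collocation points.
N = 41   # number of candidate basis functions
m = 60   # number of Monte Carlo collocation points

def basis(x):
    # Columns: [1, cos(2*pi*k*x), sin(2*pi*k*x)] for k = 1..(N-1)//2.
    cols = [np.ones_like(x)]
    for k in range(1, (N - 1) // 2 + 1):
        cols.append(np.cos(2 * np.pi * k * x))
        cols.append(np.sin(2 * np.pi * k * x))
    return np.column_stack(cols)

c_true = np.zeros(N)
c_true[[2, 7, 20]] = [1.5, -0.8, 0.3]   # ground-truth sparse coefficients

x = rng.random(m)        # Monte Carlo sampling instead of a structured grid
A = basis(x)             # collocation matrix
y = A @ c_true           # observed samples

def omp(A, y, budget):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then refit by least squares."""
    idx, r = [], y.copy()
    c = np.zeros(A.shape[1])
    for _ in range(budget):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in idx:
            idx.append(j)
        sol, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        c = np.zeros(A.shape[1])
        c[idx] = sol
        r = y - A @ c
    return c

# A budget slightly above the true sparsity gives the greedy step slack.
c_hat = omp(A, y, budget=6)
```

In the actual method the unknown is the solution of a high-dimensional PDE, so the samples come from enforcing the equation at the collocation points rather than from direct function values, and $\ell^1$ minimization is an alternative recovery routine.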
The Dean-Kawasaki equation - one of the most fundamental SPDEs of fluctuating hydrodynamics - has been proposed as a model for density fluctuations in weakly interacting particle systems. In its original form it is highly singular and fails to be renormalizable even by approaches such as regularity structures and paracontrolled distributions, hindering mathematical approaches to its rigorous justification. It has been understood recently that it is natural to introduce a suitable regularization, e.g., by applying a formal spatial discretization or by truncating high-frequency noise. In the present work, we prove that a regularization in form of a formal discretization of the Dean-Kawasaki equation indeed accurately describes density fluctuations in systems of weakly interacting diffusing particles: We show that in suitable weak metrics, the law of fluctuations as predicted by the discretized Dean-Kawasaki SPDE approximates the law of fluctuations of the original particle system, up to an error that is of arbitrarily high order in the inverse particle number and a discretization error. In particular, the Dean-Kawasaki equation provides a means for efficient and accurate simulations of density fluctuations in weakly interacting particle systems.
This paper describes an exact solution to the drag-based adjoint Euler equations in two and three dimensions that is valid for irrotational flows.
This paper concerns a class of DC composite optimization problems which, as an extension of convex composite optimization problems and DC programs with nonsmooth components, often arises in robust factorization models of low-rank matrix recovery. For this class of nonconvex and nonsmooth problems, we propose an inexact linearized proximal algorithm (iLPA) that computes at each step an inexact minimizer of a strongly convex majorization constructed from a partial linearization of the objective function at the current iterate, and we establish the convergence of the generated iterate sequence under the Kurdyka-\L\"ojasiewicz (KL) property of a potential function. In particular, by leveraging the composite structure, we provide a verifiable condition for the potential function to have the KL property of exponent $1/2$ at the limit point, and hence for the iterate sequence to converge at a local R-linear rate. Finally, we apply the proposed iLPA to a robust factorization model for matrix completion with outliers and non-uniform sampling, and a numerical comparison with the Polyak subgradient method confirms its superiority in terms of computing time and solution quality.
We propose data thinning, an approach for splitting an observation into two or more independent parts that sum to the original observation, and that follow the same distribution as the original observation, up to a (known) scaling of a parameter. This very general proposal is applicable to any convolution-closed distribution, a class that includes the Gaussian, Poisson, negative binomial, gamma, and binomial distributions, among others. Data thinning has a number of applications to model selection, evaluation, and inference. For instance, cross-validation via data thinning provides an attractive alternative to the usual approach of cross-validation via sample splitting, especially in settings in which the latter is not applicable. In simulations and in an application to single-cell RNA-sequencing data, we show that data thinning can be used to validate the results of unsupervised learning approaches, such as k-means clustering and principal components analysis, for which traditional sample splitting is unattractive or unavailable.
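For intuition, here is a minimal sketch (not the authors' code) of data thinning in the Poisson case, the classic convolution-closed example: if $X \sim \mathrm{Poisson}(\lambda)$ and $X_1 \mid X \sim \mathrm{Binomial}(X, \varepsilon)$, then $X_1 \sim \mathrm{Poisson}(\varepsilon\lambda)$ and $X_2 = X - X_1 \sim \mathrm{Poisson}((1-\varepsilon)\lambda)$ are independent and sum to $X$:

```python
import numpy as np

rng = np.random.default_rng(1)

lam, eps, n = 10.0, 0.3, 200_000
x = rng.poisson(lam, size=n)     # original observations
x1 = rng.binomial(x, eps)        # thin: X1 | X ~ Binomial(X, eps)
x2 = x - x1                      # the two parts sum to X exactly

# X1 ~ Poisson(eps * lam) and X2 ~ Poisson((1 - eps) * lam), independent.
print(x1.mean(), x2.mean())            # ~3.0 and ~7.0
print(np.corrcoef(x1, x2)[0, 1])       # ~0.0
```

One part can then be used to fit or cluster and the other to evaluate, which is what makes thinning a drop-in alternative to sample splitting when observations cannot be split across rows.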
With advances in scientific computing and mathematical modeling, complex scientific phenomena such as galaxy formations and rocket propulsion can now be reliably simulated. Such simulations can however be very time-intensive, requiring millions of CPU hours to perform. One solution is multi-fidelity emulation, which uses data of different fidelities to train an efficient predictive model which emulates the expensive simulator. For complex scientific problems and with careful elicitation from scientists, such multi-fidelity data may often be linked by a directed acyclic graph (DAG) representing its scientific model dependencies. We thus propose a new Graphical Multi-fidelity Gaussian Process (GMGP) model, which embeds this DAG structure (capturing scientific dependencies) within a Gaussian process framework. We show that the GMGP has desirable modeling traits via two Markov properties, and admits a scalable algorithm for recursive computation of the posterior mean and variance at each depth level of the DAG. We also present a novel experimental design methodology over the DAG given an experimental budget, and propose a nonlinear extension of the GMGP via deep Gaussian processes. The advantages of the GMGP are then demonstrated via a suite of numerical experiments and an application to emulation of heavy-ion collisions, which can be used to study the conditions of matter in the Universe shortly after the Big Bang. The proposed model has broader uses in data fusion applications with graphical structure, which we further discuss.