In this paper, two novel classes of implicit exponential Runge-Kutta (ERK) methods are studied for solving highly oscillatory systems. First, we analyze the symplectic conditions for two kinds of exponential integrators and obtain symplectic methods. To solve highly oscillatory problems effectively, we design highly accurate implicit ERK integrators. By comparing the Taylor series of the numerical solution with that of the exact solution, we verify that the order conditions of the two new kinds of exponential methods are identical to those of classical Runge-Kutta (RK) methods, which implies that highly accurate numerical methods can be formulated directly from the coefficients of RK methods. Furthermore, we investigate the linear stability regions of these exponential methods. Finally, numerical results not only display the long-time energy preservation of the symplectic method, but also illustrate the accuracy and efficiency of the formulated methods in comparison with standard ERK methods.
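As a minimal illustration of why exponential integrators suit highly oscillatory systems (this sketch uses the simplest explicit exponential step on a linear test problem, not the paper's implicit symplectic ERK schemes), compare the exponential update, which applies the exact matrix exponential, against classical explicit Euler on a fast oscillator:

```python
import numpy as np

# Oscillatory test problem y' = M y with M = [[0, w], [-w, 0]]:
# the exact flow is a rotation, so the energy |y|^2 is conserved.
w = 50.0
M = np.array([[0.0, w], [-w, 0.0]])
h = 0.01  # step size with w*h = 0.5, i.e. the oscillation is barely resolved

# exact matrix exponential of h*M (analytic for this 2x2 skew-symmetric M)
c, s = np.cos(w * h), np.sin(w * h)
exphM = np.array([[c, s], [-s, c]])

y_exp = np.array([1.0, 0.0])  # exponential-integrator iterate
y_cls = np.array([1.0, 0.0])  # classical explicit-Euler iterate
for _ in range(1000):
    y_exp = exphM @ y_exp
    y_cls = y_cls + h * (M @ y_cls)

energy_exp = np.sum(y_exp**2)  # stays at 1 up to rounding (rotations are orthogonal)
energy_cls = np.sum(y_cls**2)  # multiplied by 1 + (w*h)^2 at every step, so it blows up
```

The exponential step is exact on the linear stiff part, which is the property the ERK construction exploits; the nonlinear remainder is then handled by the Runge-Kutta stages.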
Multiscale Finite Element Methods (MsFEMs) are now well-established finite element type approaches dedicated to multiscale problems. They first compute local, oscillatory, problem-dependent basis functions that generate a suitable discretization space, and next perform a Galerkin approximation of the problem on that space. We investigate here how these approaches can be implemented in a non-intrusive way, in order to facilitate their dissemination within industrial codes or non-academic environments. We develop an abstract framework that covers a wide variety of MsFEMs for linear second-order partial differential equations. Non-intrusive MsFEM approaches are developed within the full generality of this framework, which may moreover be beneficial to steering software development and improving the theoretical understanding and analysis of MsFEMs.
We present Surjective Sequential Neural Likelihood (SSNL) estimation, a novel method for simulation-based inference in models where the evaluation of the likelihood function is not tractable and only a simulator that can generate synthetic data is available. SSNL fits a dimensionality-reducing surjective normalizing flow model and uses it as a surrogate likelihood function, which allows for conventional Bayesian inference using either Markov chain Monte Carlo methods or variational inference. By embedding the data in a low-dimensional space, SSNL solves several issues that previous likelihood-based methods faced when applied to high-dimensional data sets that, for instance, contain non-informative data dimensions or lie along a lower-dimensional manifold. We evaluate SSNL on a wide variety of experiments and show that it generally outperforms contemporary methods used in simulation-based inference, for instance, on a challenging real-world example from astrophysics which models the magnetic field strength of the sun using a solar dynamo model.
The imposition of inhomogeneous Dirichlet (essential) boundary conditions is a fundamental challenge in the application of Galerkin-type methods based on non-interpolatory functions, i.e., functions which do not possess the Kronecker delta property. Such functions are typically used in various meshfree methods, as well as in methods based on the isogeometric paradigm. The present paper analyses a model problem consisting of the Poisson equation subject to non-standard boundary conditions: instead of classical boundary conditions, the model problem involves Dirichlet- and Neumann-type nonlocal boundary conditions. Variational formulations with strongly and weakly imposed inhomogeneous Dirichlet-type nonlocal conditions are derived and compared within an extensive numerical study in the isogeometric framework based on non-uniform rational B-splines (NURBS). The numerical study focuses mainly on the influence of the nonlocal boundary conditions on the properties of the considered discretisation methods.
A non-intrusive model order reduction (MOR) method that combines features of dynamic mode decomposition (DMD) and radial basis function (RBF) networks is proposed to predict the dynamics of parametric nonlinear systems. In many applications, we have only limited access to information about the whole system, which motivates non-intrusive model reduction. One bottleneck is capturing the dynamics of the solution without knowing the physics inside the "black-box" system. DMD is a powerful tool for mimicking the dynamics of a system and gives a reliable approximation of the solution in the time domain using only the dominant DMD modes. However, DMD cannot reproduce the parametric behavior of the dynamics. Our contribution focuses on extending DMD to parametric DMD via RBF interpolation. Specifically, an RBF network is first trained using snapshot matrices at a limited number of parameter samples. The snapshot matrix at any new parameter sample can then be quickly obtained from the RBF network. At the online stage, DMD uses the newly generated snapshot matrix to predict the time patterns of the dynamics corresponding to the new parameter sample. The proposed framework and algorithm are tested and validated by numerical examples including models with parametrized and time-varying inputs.
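The offline/online workflow described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy traveling-wave dynamics, the Gaussian kernel width, and the truncation rank are all illustrative assumptions.

```python
import numpy as np

def snapshots(mu, nx=30, nt=40):
    """Toy parametric dynamics: a decaying traveling wave with speed mu."""
    x = np.linspace(0.0, 1.0, nx)[:, None]
    t = np.linspace(0.0, 1.0, nt)[None, :]
    return np.exp(-t) * np.sin(2 * np.pi * (x - mu * t))

def rbf_interpolate(mu_train, S_train, mu_new, eps=2.0):
    """Gaussian-RBF interpolation of flattened snapshot matrices over mu."""
    d = mu_train[:, None] - mu_train[None, :]
    K = np.exp(-(eps * d) ** 2)
    W = np.linalg.solve(K, S_train)            # RBF weights, one row per center
    k = np.exp(-(eps * (mu_new - mu_train)) ** 2)
    return k @ W

def dmd_operator(Y, r):
    """Truncated-SVD DMD: best-fit linear map A with X1 ~ A X0."""
    X0, X1 = Y[:, :-1], Y[:, 1:]
    U, s, Vh = np.linalg.svd(X0, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atil = U.T @ X1 @ Vh.T @ np.diag(1.0 / s)  # reduced DMD operator
    return U @ Atil @ U.T

# offline stage: snapshot matrices at a few training parameters
mu_train = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
S_train = np.stack([snapshots(m).ravel() for m in mu_train])

# online stage: interpolate a snapshot matrix at a new parameter, then run DMD
mu_new = 0.4
Y_new = rbf_interpolate(mu_train, S_train, mu_new).reshape(30, 40)
A = dmd_operator(Y_new, r=2)                   # r=2 matches the toy wave's spatial rank
prediction = A @ Y_new[:, -1]                  # one step beyond the available data
```

Note that the RBF step reproduces the training snapshots exactly at the training parameters (interpolation property), while DMD supplies the time extrapolation at the new parameter.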
Recently, the multi-step inertial randomized Kaczmarz (MIRK) method for solving large-scale linear systems was proposed in [17]. In this paper, we incorporate the greedy probability criterion into the MIRK method and introduce a tighter threshold parameter for this criterion. We prove that the proposed greedy MIRK (GMIRK) method enjoys improved deterministic linear convergence compared to both the MIRK method and the greedy randomized Kaczmarz method. Furthermore, we show that the multi-step inertial extrapolation approach can be interpreted geometrically as an orthogonal projection method, and we establish its relationship with the sketch-and-project method [15] and the oblique projection technique [22]. Numerical experiments are provided to confirm our results.
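For readers unfamiliar with the building blocks, the following is a minimal sketch of the greedy randomized Kaczmarz iteration that GMIRK builds on; the multi-step inertial extrapolation itself is not reproduced here, and the threshold below is the common greedy probability criterion, not the paper's tighter parameter.

```python
import numpy as np

def greedy_randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Greedy randomized Kaczmarz: at each step, restrict attention to rows whose
    weighted squared residual exceeds a greedy threshold, sample one of them with
    probability proportional to its residual, and project onto its hyperplane."""
    rng = np.random.default_rng(seed)
    row_sq = np.sum(A * A, axis=1)      # squared row norms ||a_i||^2
    fro_sq = row_sq.sum()               # ||A||_F^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = b - A @ x
        r_sq = r * r
        res = r_sq.sum()
        if res < 1e-28:                 # already (numerically) solved
            break
        # greedy threshold: average of the largest weighted residual and 1/||A||_F^2
        eps = 0.5 * ((r_sq / row_sq).max() / res + 1.0 / fro_sq)
        mask = r_sq >= eps * res * row_sq
        probs = np.where(mask, r_sq, 0.0)
        i = rng.choice(len(b), p=probs / probs.sum())
        x += (r[i] / row_sq[i]) * A[i]  # orthogonal projection onto row i's hyperplane
    return x

# consistent overdetermined system for a quick check
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x = greedy_randomized_kaczmarz(A, b)
```

Each update is an orthogonal projection onto a single row's solution hyperplane, which is the geometric picture the paper extends to multi-row (multi-step) projections.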
This article introduces a new neural-network stochastic model to generate a one-dimensional stochastic field with turbulent velocity statistics. Both the model architecture and the training procedure are grounded in the Kolmogorov and Obukhov statistical theories of fully developed turbulence, thereby guaranteeing descriptions of 1) energy distribution, 2) energy cascade and 3) intermittency across scales in agreement with experimental observations. The model is a Generative Adversarial Network with multiple multiscale optimization criteria. First, we use three physics-based criteria: the variance, skewness and flatness of the increments of the generated field, which recover respectively the turbulent energy distribution, energy cascade and intermittency across scales. Second, the Generative Adversarial Network criterion, based on reproducing statistical distributions, is applied to segments of different lengths of the generated field. Furthermore, to mimic the multiscale decompositions frequently used in turbulence studies, the model architecture is fully convolutional, with kernel sizes varying across the layers of the model. To train our model, we use turbulent velocity signals from grid turbulence at the Modane wind tunnel.
In this paper, we propose a test procedure based on the LASSO methodology to test the global null hypothesis of no dependence between a response variable and $p$ predictors when $n$ observations with $n < p$ are available. The proposed procedure is similar to the F-test for a linear model, which evaluates significance based on the ratio of explained to unexplained variance. However, the F-test is not suitable for models with $p \geq n$: in that case the unexplained variance is zero, and the F-statistic can no longer be calculated. In contrast, the proposed extension of the LASSO methodology overcomes this limitation by using the number of non-zero coefficients in the LASSO model as a test statistic, after suitably specifying the regularization parameter. The method allows reliable analysis of high-dimensional datasets with as few as $n = 40$ observations. The performance of the method is assessed by means of a power study.
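The test statistic described above can be sketched in a few lines. The following is a minimal illustration, not the paper's calibrated procedure: the LASSO is solved by a hand-written coordinate descent so the example is self-contained, the fixed regularization parameter `alpha` is an illustrative choice standing in for the paper's specification, and the permutation-based null calibration is our assumption for the sketch.

```python
import numpy as np

def lasso_cd(X, y, alpha, sweeps=100):
    """Minimal coordinate-descent LASSO for (1/2n)||y - X b||^2 + alpha * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = np.sum(X * X, axis=0)
    r = y.copy()                        # current residual y - X @ beta
    for _ in range(sweeps):
        for j in range(p):
            r += X[:, j] * beta[j]      # remove coordinate j from the fit
            rho = X[:, j] @ r
            # soft-thresholding update for coordinate j
            beta[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]
    return beta

def n_active(X, y, alpha=0.5):
    """Test statistic: number of non-zero LASSO coefficients at a fixed alpha."""
    return int(np.count_nonzero(lasso_cd(X, y, alpha)))

rng = np.random.default_rng(0)
n, p = 40, 100                          # n < p, as in the setting of the paper
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 2.0                     # 5 truly active predictors
y = X @ beta_true + rng.standard_normal(n)

t_obs = n_active(X, y)
# hypothetical calibration: permuting y emulates the global null of no dependence
t_null = [n_active(X, rng.permutation(y)) for _ in range(30)]
p_value = (1 + sum(t >= t_obs for t in t_null)) / (1 + len(t_null))
```

The key point is that the statistic remains well defined for $p \geq n$, where the F-statistic does not.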
We consider parametrized linear-quadratic optimal control problems and provide their online-efficient solutions by combining greedy reduced basis methods and machine learning algorithms. To this end, we first extend the greedy control algorithm, which builds a reduced basis for the manifold of optimal final time adjoint states, to the setting where the objective functional consists of a penalty term measuring the deviation from a desired state and a term describing the control energy. Afterwards, we apply machine learning surrogates to accelerate the online evaluation of the reduced model. The error estimates proven for the greedy procedure are further transferred to the machine learning models and thus allow for efficient a posteriori error certification. We discuss the computational costs of all considered methods in detail and show by means of two numerical examples the tremendous potential of the proposed methodology.
Monte Carlo methods represent a cornerstone of computer science, as they allow high-dimensional distribution functions to be sampled efficiently. In this paper we consider the extension of Automatic Differentiation (AD) techniques to Monte Carlo processes, addressing the problem of obtaining derivatives (and, in general, the Taylor series) of expectation values. Borrowing ideas from the lattice field theory community, we examine two approaches: one is based on reweighting, while the other extends the Hamiltonian approach typically used by Hybrid Monte Carlo (HMC) and similar algorithms. We show that the Hamiltonian approach can be understood as a change of variables applied to the reweighting approach, resulting in much reduced variances of the coefficients of the Taylor series. This work opens the door to finding other variance reduction techniques for derivatives of expectation values.
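A minimal sketch of the reweighting idea for a first derivative: for a distribution $p_\theta \propto e^{-S(x;\theta)}$, the derivative of an expectation value is the connected correlator $\partial_\theta \langle O \rangle = -\langle O\, \partial_\theta S \rangle + \langle O \rangle \langle \partial_\theta S \rangle$, which can be estimated from samples drawn at a single $\theta$. The Gaussian toy problem below is our assumption for illustration, chosen because the exact answer is known.

```python
import numpy as np

# Toy action S(x; theta) = (x - theta)^2 / 2, i.e. p_theta = N(theta, 1),
# and observable O = x^2, for which exactly d/dtheta <x^2> = 2*theta.
rng = np.random.default_rng(0)
theta = 0.5
x = rng.normal(theta, 1.0, size=200_000)   # samples from p_theta

O = x**2
dS = -(x - theta)                          # dS/dtheta for this action
# connected-correlator (reweighting) estimator of d/dtheta <O>
deriv_est = -(O * dS).mean() + O.mean() * dS.mean()
# deriv_est should be close to 2 * theta = 1.0 up to Monte Carlo error
```

The Hamiltonian approach discussed in the paper can be viewed as a change of variables applied to this estimator that reduces its variance.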
Direct deep learning simulation of multi-scale problems remains a challenging issue. In this work, a novel higher-order multi-scale deep Ritz method (HOMS-DRM) is developed for the thermal transfer equation of authentic composite materials with highly oscillatory and discontinuous coefficients. In this HOMS-DRM, higher-order multi-scale analysis and modeling are first employed to overcome the prohibitive computational cost and the Frequency Principle limitations of direct deep learning simulation. Then, an improved deep Ritz method is designed for high-accuracy, mesh-free simulation of the macroscopic homogenized equation without multi-scale property and of the microscopic lower-order and higher-order cell problems with highly discontinuous coefficients. Moreover, the theoretical convergence of the proposed HOMS-DRM is rigorously demonstrated under appropriate assumptions. Finally, extensive numerical experiments are presented to show the computational accuracy of the proposed HOMS-DRM. This study offers a robust and high-accuracy multi-scale deep learning framework that enables the effective simulation and analysis of multi-scale problems in authentic composite materials.