We present a stable spectral vanishing viscosity (SVV) for discontinuous Galerkin schemes, with applications to turbulent and supersonic flows. The idea behind the SVV is to spatially filter the dissipative fluxes so that the dissipation concentrates at high wavenumbers, where the flow is typically under-resolved, while leaving low wavenumbers dissipation-free. Moreover, we derive a stable approximation of the Guermond-Popov fluxes with the Bassi-Rebay 1 scheme, used to introduce density regularization in shock-capturing simulations. The filtering uses a Cholesky decomposition of the fluxes that ensures the entropy stability of the scheme, which also includes a stable approximation of boundary conditions for adiabatic walls. For turbulent flows, we test the method on the three-dimensional Taylor-Green vortex and show that energy is correctly dissipated and the scheme is stable when a kinetic-energy-preserving split form is used in combination with a low-dissipation Riemann solver. Finally, we test the shock-capturing capabilities of our method on the Shu-Osher and supersonic forward-facing step cases, obtaining good results without spurious oscillations even on coarse meshes.
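To make the filtering idea concrete, the classical (Tadmor-type) spectral vanishing viscosity can be written, in one dimension and in terms of the modal coefficients of the degree-$N$ approximation, as the hedged illustration below; the operator constructed in the paper acts on the DG dissipative fluxes and differs in its details:
\[
\partial_t u_N + \partial_x f(u_N) \;=\; \varepsilon_N\, \partial_x\!\bigl(Q_N \ast \partial_x u_N\bigr),
\qquad
\widehat{(Q_N \ast v)}_k \;=\; \hat{Q}_k\, \hat{v}_k,
\qquad
\hat{Q}_k = 0 \ \text{ for } |k| \le m_N, \quad \hat{Q}_k \to 1 \ \text{ as } |k| \to N,
\]
so that the added dissipation acts only on the highest, typically under-resolved, modes while the low modes remain untouched.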
We address the long-standing problem of designing an energy-stable numerical scheme for the motion of a closed curve under {\sl anisotropic surface diffusion} with a general anisotropic surface energy $\gamma(\boldsymbol{n})$ in two dimensions, where $\boldsymbol{n}$ is the outward unit normal vector. By introducing a novel symmetric positive definite surface energy matrix $Z_k(\boldsymbol{n})$ depending on the Cahn-Hoffman $\boldsymbol{\xi}$-vector and a stabilizing function $k(\boldsymbol{n})$, we first reformulate the anisotropic surface diffusion into a conservative form and then derive a new symmetrized variational formulation for anisotropic surface diffusion with weakly or strongly anisotropic surface energies. A semi-discretization in space of the symmetrized variational formulation is proposed, and its area (or mass) conservation and energy dissipation are proved. The semi-discretization is then discretized in time by either an implicit structure-preserving scheme (SP-PFEM), which preserves the area at the discrete level, or a semi-implicit energy-stable method (ES-PFEM), which requires only the solution of a linear system at each time step. Under a relatively simple and mild condition on $\gamma(\boldsymbol{n})$, we show that both SP-PFEM and ES-PFEM are unconditionally energy-stable for almost all anisotropic surface energies $\gamma(\boldsymbol{n})$ arising in practical applications. Specifically, for several commonly used anisotropic surface energies, we construct $Z_k(\boldsymbol{n})$ explicitly. Finally, extensive numerical results are reported to demonstrate the high performance of the proposed numerical schemes.
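For reference, the Cahn-Hoffman $\boldsymbol{\xi}$-vector entering $Z_k(\boldsymbol{n})$ is the gradient of the one-homogeneous extension of the surface energy; this standard definition, independent of the particular matrix constructed in the paper, reads
\[
\boldsymbol{\xi}(\boldsymbol{n}) \;=\; \nabla \hat{\gamma}(\boldsymbol{p})\big|_{\boldsymbol{p}=\boldsymbol{n}},
\qquad
\hat{\gamma}(\boldsymbol{p}) \;=\; |\boldsymbol{p}|\,\gamma\!\left(\frac{\boldsymbol{p}}{|\boldsymbol{p}|}\right), \quad \boldsymbol{p}\in\mathbb{R}^2\setminus\{\boldsymbol{0}\},
\]
so that $\hat{\gamma}$ is positively homogeneous of degree one and $\boldsymbol{\xi}\cdot\boldsymbol{n} = \gamma(\boldsymbol{n})$ by Euler's relation.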
We provide the first stochastic convergence rates for a family of adaptive quadrature rules used to normalize the posterior distribution in Bayesian models. Our results apply to the uniform relative error in the approximate posterior density, the coverage probabilities of approximate credible sets, and approximate moments and quantiles, thereby guaranteeing fast asymptotic convergence of the approximate summary statistics used in practice. The family of quadrature rules includes adaptive Gauss-Hermite quadrature, and we apply this rule in two challenging low-dimensional examples. Further, we demonstrate how adaptive quadrature can be used as a crucial component of a modern approximate Bayesian inference procedure for high-dimensional additive models. The method is implemented in the aghq package for the R language, publicly available on CRAN.
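In its simplest one-dimensional form, adaptive Gauss-Hermite quadrature normalizes an unnormalized log-posterior $\ell(\theta)$ by centering and scaling the Gauss-Hermite rule at the mode; schematically (a standard construction, not the aghq package's exact implementation),
\[
\int_{\mathbb{R}} e^{\ell(\theta)}\,d\theta \;\approx\; \sqrt{2}\,\hat{\sigma}\sum_{j=1}^{K}\omega_j\, e^{x_j^{2}}\, e^{\ell(\hat{\theta}+\sqrt{2}\,\hat{\sigma}x_j)},
\qquad
\hat{\theta}=\operatorname*{arg\,max}_{\theta}\ell(\theta),\quad
\hat{\sigma}^{2}=\bigl(-\ell''(\hat{\theta})\bigr)^{-1},
\]
where $(x_j,\omega_j)$ are the nodes and weights of the $K$-point Gauss-Hermite rule with weight function $e^{-x^{2}}$.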
In this work, the generalized broken soliton-like (gBS-like) equation is derived through the generalized bilinear method. A neural network model that fits the explicit solution with zero error is found. The interference wave solution of the gBS-like equation is obtained by using the bilinear neural network method (BNNM) and physics-informed neural networks (PINNs). The interference waves are illustrated via three-dimensional plots and density plots. Compared with PINNs, the bilinear neural network method is not only more accurate but also faster.
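For background, the bilinear machinery rests on Hirota-type derivatives. The classical Hirota operator is
\[
D_x^{m} D_t^{n}\, f\cdot g \;=\; \left(\frac{\partial}{\partial x}-\frac{\partial}{\partial x'}\right)^{m}\left(\frac{\partial}{\partial t}-\frac{\partial}{\partial t'}\right)^{n} f(x,t)\,g(x',t')\Big|_{x'=x,\;t'=t},
\]
and the generalized bilinear method modifies the sign rule in this definition to produce generalized bilinear operators; the gBS-like equation above arises from such a generalized bilinear form.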
Domain decomposition methods are a set of widely used tools for parallelization of partial differential equation solvers. Convergence is well studied for elliptic equations, but in the case of parabolic equations there are hardly any results for general Lipschitz domains in two or more dimensions. The aim of this work is therefore to construct a new framework for analyzing nonoverlapping domain decomposition methods for the heat equation in a space-time Lipschitz cylinder. The framework is based on a variational formulation, inspired by recent studies of space-time finite elements using Sobolev spaces with fractional time regularity. In this framework, the time-dependent Steklov--Poincar\'e operators are introduced and their essential properties are proven. We then derive the interface interpretations of the Dirichlet--Neumann, Neumann--Neumann and Robin--Robin methods and show that these methods are well defined. Finally, we prove convergence of the Robin--Robin method and introduce a modified method with stronger convergence properties.
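For orientation, in its classical form the Robin--Robin method alternates subdomain solves that exchange Robin data across the interface $\Gamma$; a schematic iteration (the space-time version analyzed in the paper is formulated through the time-dependent Steklov--Poincar\'e operators) imposes, in the solves on $\Omega_1$ and $\Omega_2$ respectively,
\[
\bigl(\partial_{\nu_1} + s\bigr) u_1^{k+1} \;=\; \bigl(\partial_{\nu_1} + s\bigr) u_2^{k} \quad\text{on } \Gamma,
\qquad
\bigl(\partial_{\nu_2} + s\bigr) u_2^{k+1} \;=\; \bigl(\partial_{\nu_2} + s\bigr) u_1^{k+1} \quad\text{on } \Gamma,
\]
with $s>0$ a method parameter and $\nu_i$ the outward normal of $\Omega_i$; at convergence both traces and fluxes are continuous across $\Gamma$.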
The encoder network of an autoencoder is an approximation of the nearest-point projection onto the manifold spanned by the decoder. A concern with this approximation is that, while the output of the encoder is always unique, the projection may have infinitely many values. This implies that the latent representations learned by the autoencoder can be misleading. Borrowing from geometric measure theory, we introduce the idea of using the reach of the manifold spanned by the decoder to determine whether an optimal encoder exists for a given dataset and decoder. We develop a local generalization of this reach and propose a numerical estimator thereof. We demonstrate that this allows us to determine which observations can be expected to have a unique, and thereby trustworthy, latent representation. As our local reach estimator is differentiable, we investigate its usage as a regularizer and show that this leads to learned manifolds for which projections are more often unique than without regularization.
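For context, the (global) reach of a set $\mathcal{M}\subset\mathbb{R}^n$ in the sense of Federer is
\[
\operatorname{reach}(\mathcal{M}) \;=\; \sup\bigl\{\, r \ge 0 \;:\; \text{every } x \text{ with } \operatorname{dist}(x,\mathcal{M}) < r \text{ has a unique nearest point in } \mathcal{M} \,\bigr\},
\]
so points closer to the decoder manifold than its reach admit a well-defined nearest-point projection; the local generalization introduced in the paper adapts this notion to individual regions of the manifold.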
We develop a hybrid spatial discretization for the wave equation in second-order form, based on high-order accurate finite difference methods and discontinuous Galerkin methods. The hybridization combines the computational efficiency of finite difference methods on Cartesian grids with the geometrical flexibility of discontinuous Galerkin methods on unstructured meshes. The two spatial discretizations are coupled by a penalty technique at the interface such that the overall semidiscretization satisfies a discrete energy estimate, ensuring stability. In addition, optimal convergence is obtained in the sense that when a fourth-order finite difference method is combined with a discontinuous Galerkin method using third-order local polynomials, the overall convergence rate is fourth order. Furthermore, we use a novel approach to derive an error estimate for the semidiscretization by combining the energy method and normal mode analysis for a corresponding one-dimensional model problem. The stability and accuracy analyses are verified in numerical experiments.
This paper investigates a new class of fractional-order Runge-Kutta (FORK) methods for the numerical approximation of solutions of fractional differential equations (FDEs). Using the Caputo generalized Taylor formula and the total differential for the Caputo fractional derivative, we construct explicit and implicit FORK methods, analogous to the well-known Runge-Kutta schemes for ordinary differential equations. In the proposed method, owing to the dependence of fractional derivatives on a fixed base point $t_0$, we must modify the right-hand side of the given equation in all steps of the FORK methods. Some coefficients for explicit and implicit FORK schemes are presented, and the convergence analysis of the proposed method is discussed. Numerical experiments demonstrate the effectiveness and robustness of the method.
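For completeness, the Caputo derivative with base point $t_0$ that underlies the construction is, for $0<\alpha<1$,
\[
{}^{C}_{t_0}D_t^{\alpha}\, y(t) \;=\; \frac{1}{\Gamma(1-\alpha)} \int_{t_0}^{t} \frac{y'(s)}{(t-s)^{\alpha}}\, ds,
\]
and the simplest explicit one-stage scheme suggested by the Caputo generalized Taylor formula advances $D^{\alpha}y = f(t,y)$ by $y_{n+1} = y_n + \frac{h^{\alpha}}{\Gamma(\alpha+1)} f(t_n, y_n)$, a fractional analogue of forward Euler obtained when the expansion is taken about $t_n$; the fixed base point $t_0$ is precisely what necessitates the right-hand-side modification mentioned above, and higher-stage FORK methods generalize this construction.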
In some inferential statistical methods, such as tests and confidence intervals, it is important to describe the stochastic behavior of statistical functionals beyond their large-sample properties. We study such behavior in terms of the usual stochastic order. For this purpose, we introduce a generalized family of stochastic orders, referred to as transform orders, and show that it provides a flexible framework for deriving stochastic monotonicity results. Since our general definition recovers several well-known ordering relations as particular cases, the method applies readily to different families of functionals. These include prominent inequality measures, such as the generalized entropy, the Gini index, and its generalizations. We also illustrate the applicability of our approach by determining the least favorable distribution, and the behavior of some bootstrap statistics, in certain goodness-of-fit testing procedures.
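As a concrete example of the functionals covered, the Gini index of a distribution $F$ with mean $\mu>0$ may be written
\[
G(F) \;=\; \frac{1}{2\mu}\int\!\!\int |x-y|\, dF(x)\, dF(y),
\]
and stochastic monotonicity of such functionals with respect to the proposed transform orders is the kind of statement the framework is designed to deliver.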
The goal of Bayesian deep learning is to provide uncertainty quantification via the posterior distribution. However, exact inference over the weight space is computationally intractable due to the ultra-high dimensionality of the neural network. Variational inference (VI) is a promising approach, but naive application on the weight space does not scale well and often underperforms in predictive accuracy. In this paper, we propose a new adaptive variational Bayesian algorithm to train neural networks on the weight space that achieves high predictive accuracy. By showing an equivalence to Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) with a preconditioning matrix, we then propose an MCMC-within-EM algorithm, which incorporates a spike-and-slab prior to capture the sparsity of the neural network. The EM-MCMC algorithm allows us to perform optimization and model pruning in one shot. We evaluate our methods on the CIFAR-10, CIFAR-100, and ImageNet datasets, and demonstrate that our dense model can reach state-of-the-art performance and our sparse model performs very well compared to previously proposed pruning schemes.
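A common parametrization of a spike-and-slab prior on network weights, given here for illustration (the paper's exact hierarchy may differ), places on each weight $w_j$
\[
w_j \mid \gamma_j \;\sim\; \gamma_j\, \mathcal{N}\!\bigl(0, \sigma_1^{2}\bigr) + (1-\gamma_j)\, \mathcal{N}\!\bigl(0, \sigma_0^{2}\bigr),
\qquad
\gamma_j \sim \mathrm{Bernoulli}(\lambda), \qquad \sigma_0^{2} \ll \sigma_1^{2},
\]
so weights attracted to the narrow spike component are natural candidates for pruning.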
The existing discrete variational derivative method is only second-order accurate and fully implicit. In this paper, we propose a framework to construct arbitrarily high-order implicit (original) energy-stable schemes and second-order semi-implicit (modified) energy-stable schemes. Combined with the Runge--Kutta process, we can build an arbitrarily high-order and unconditionally (original) energy-stable scheme based on the discrete variational derivative method. The new energy-stable scheme is implicit and leads to a large sparse nonlinear algebraic system at each time step, which can be solved efficiently by an inexact Newton-type algorithm. To avoid solving nonlinear algebraic systems, we then present a relaxed discrete variational derivative method, which can construct second-order, linear, and unconditionally (modified) energy-stable schemes. Several numerical simulations are performed to investigate the efficiency, stability, and accuracy of the newly proposed schemes.
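For orientation, the discrete variational derivative method enforces a discrete chain rule: for a gradient flow $u_t = -\mathcal{G}\,\delta E/\delta u$ with $\mathcal{G}$ symmetric positive semidefinite, one designs a discrete variational derivative $\frac{\delta E}{\delta(u^{n+1},u^{n})}$ such that (a standard formulation, on which the high-order Runge--Kutta extension proposed here builds)
\[
E(u^{n+1}) - E(u^{n}) \;=\; \Bigl\langle \frac{\delta E}{\delta(u^{n+1},u^{n})},\; u^{n+1}-u^{n} \Bigr\rangle,
\qquad
\frac{u^{n+1}-u^{n}}{\Delta t} \;=\; -\,\mathcal{G}\,\frac{\delta E}{\delta(u^{n+1},u^{n})},
\]
which immediately gives the discrete energy dissipation $E(u^{n+1}) \le E(u^{n})$.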