A grid-overlay finite difference method is proposed for the numerical approximation of the fractional Laplacian on arbitrary bounded domains. The method uses an unstructured simplicial mesh and an overlaid uniform grid for the underlying domain and constructs the approximation from a uniform-grid finite difference approximation and a data transfer from the unstructured mesh to the uniform grid. The method takes full advantage of both the uniform-grid finite difference approximation, which admits efficient matrix-vector multiplication via the fast Fourier transform, and the unstructured mesh, which can fit complex geometries. It is shown that the stiffness matrix is similar to a symmetric and positive definite matrix, and thus invertible, if the data transfer has full column rank and positive column sums. Piecewise linear interpolation is studied as a special example of the data transfer. It is proved that full column rank and positive column sums of linear interpolation are guaranteed if the spacing of the uniform grid is smaller than or equal to a positive bound proportional to the minimum element height of the unstructured mesh. Moreover, a sparse preconditioner is proposed for the iterative solution of the resulting linear system for the homogeneous Dirichlet problem of the fractional Laplacian. Numerical examples demonstrate that the new method has convergence behavior similar to that of existing finite difference and finite element methods and that the sparse preconditioning is effective. Furthermore, the new method can readily be combined with existing mesh adaptation strategies. Numerical results obtained by combining it with the so-called MMPDE moving mesh method are also presented.
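The FFT-based matrix-vector multiplication mentioned above exploits the fact that a uniform-grid finite difference matrix for the fractional Laplacian has (multilevel) Toeplitz structure. Below is a minimal one-dimensional sketch of this standard technique, with an arbitrary symmetric Toeplitz matrix standing in for the actual stiffness matrix (the function name and the test setup are illustrative, not taken from the paper):

import numpy as np

def toeplitz_matvec(t, x):
    # Multiply the symmetric Toeplitz matrix with first column t by the
    # vector x in O(n log n): embed the matrix into a circulant of size 2n
    # and apply the circulant via the FFT.
    n = len(t)
    c = np.concatenate([t, [0.0], t[-1:0:-1]])   # first column of the circulant
    xp = np.concatenate([x, np.zeros(n)])        # zero-pad x to length 2n
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp))
    return y[:n].real                            # first n entries give T @ x

# Sanity check against the dense product.
rng = np.random.default_rng(0)
t, x = rng.normal(size=8), rng.normal(size=8)
T = np.array([[t[abs(i - j)] for j in range(8)] for i in range(8)])
assert np.allclose(toeplitz_matvec(t, x), T @ x)

The same circulant-embedding idea applies dimension by dimension to the multilevel Toeplitz matrices arising on uniform grids in two and three dimensions.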
Recent advancements in evaluating matrix-exponential functions have opened the doors to the practical use of exponential time-integration methods in numerical weather prediction (NWP). The success of exponential methods in shallow-water simulations has raised the question of whether they can be beneficial in a 3D atmospheric model. In this paper, we take the first step forward by evaluating the behavior of exponential time-integration methods in the Navy's compressible deep-atmosphere nonhydrostatic global model, NEPTUNE (Navy Environmental Prediction sysTem Utilizing a Nonhydrostatic Engine). Simulations are conducted on a set of idealized test cases designed to assess key features of a nonhydrostatic model and demonstrate that exponential integrators capture the desired large- and small-scale traits, yielding results comparable to those found in the literature. We also propose a new upper-boundary absorbing layer that is independent of the reference state and show it to be effective in both idealized and real-data simulations. A real-data forecast using an exponential method with full physics is presented, providing a positive outlook for the use of exponential integrators in NWP.
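For concreteness, here is a generic first-order exponential integrator (exponential Euler), stated for orientation and not necessarily the specific scheme evaluated in NEPTUNE: for a semi-discretized system $u' = Lu + N(u)$, one step of size $\Delta t$ reads

$$u_{n+1} = e^{\Delta t\,L}\,u_n + \Delta t\,\varphi_1(\Delta t\,L)\,N(u_n), \qquad \varphi_1(z) = \frac{e^z - 1}{z}.$$

The actions of the matrix functions $e^{\Delta t\,L}$ and $\varphi_1(\Delta t\,L)$ on vectors are exactly the matrix-exponential evaluations whose recent advancements make such schemes practical at NWP scales.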
The problem of the optimal recovery of high-order mixed derivatives of bivariate functions with finite smoothness is studied. On the basis of the truncation method, an algorithm for numerical differentiation is constructed that is order-optimal both in the sense of accuracy and in the amount of Galerkin information involved.
Anomalous diffusion in the presence or absence of an external force field is often modelled in terms of fractional evolution equations, which can involve a hyper-singular source term. For this case, conventional time-stepping methods may exhibit severe order reduction. Although a second-order numerical algorithm is provided for the subdiffusion model with the simple hyper-singular source term $t^{\mu}$, $-2<\mu<-1$, in [arXiv:2207.08447], its convergence analysis remains to be established. To fill this gap, we present a simple and robust smoothing method for the hyper-singular source term, in which the Hadamard finite-part integral is introduced. The method is based on the smoothing/ID$m$-BDF$k$ method proposed by the authors [Shi and Chen, SIAM J. Numer. Anal., to appear] for the subdiffusion equation with a weakly singular source term. We prove that the $k$th-order convergence rate can be restored for the diffusion-wave case $\gamma \in (1,2)$ and sketch the proof for the subdiffusion case $\gamma \in (0,1)$, even if the source term is hyper-singular and the initial data are incompatible. Numerical experiments are provided to confirm the theoretical results.
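For orientation, the Hadamard finite-part integral that gives meaning to such hyper-singular terms can be illustrated on the model source $t^{\mu}$ with non-integer $\mu \in (-2,-1)$ (a standard definition recalled here, not quoted from the paper): since $\int_\epsilon^T t^{\mu}\,dt = (T^{\mu+1}-\epsilon^{\mu+1})/(\mu+1)$ diverges as $\epsilon \to 0^+$, one discards the divergent part and defines

$$\mathrm{p.f.}\!\int_0^T t^{\mu}\,dt = \frac{T^{\mu+1}}{\mu+1},$$

which reduces to the ordinary integral whenever $\mu > -1$.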
The possibility of dynamically modifying the computational load of neural models at inference time is crucial for on-device processing, where computational power is limited and time-varying. Established approaches for neural model compression exist, but they provide architecturally static models. In this paper, we investigate the use of early-exit architectures, which rely on intermediate exit branches, for large-vocabulary speech recognition. This allows for the development of dynamic models that adjust their computational cost to the available resources and to the recognition performance. Unlike previous works, besides using pre-trained backbones, we also train models with an early-exit architecture from scratch. Experiments on public datasets show that early-exit architectures trained from scratch not only preserve performance when using fewer encoder layers but also improve task accuracy compared with single-exit models and with pre-trained models. Additionally, we investigate an exit-selection strategy based on posterior probabilities as an alternative to frame-based entropy.
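A minimal sketch of the two exit-selection rules compared above (the thresholds, shapes, and function names are illustrative, not the paper's exact configuration): at each exit branch, either the average maximum frame posterior or the average frame entropy is thresholded to decide whether to stop early.

import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def select_exit(exit_logits, rule="posterior", threshold=0.9):
    # exit_logits: one (frames, vocab) array per exit branch, ordered from
    # the earliest (cheapest) branch to the final (most accurate) one.
    for k, logits in enumerate(exit_logits):
        p = softmax(logits)
        if rule == "posterior":
            # average maximum frame posterior: exit once confident enough
            if p.max(axis=-1).mean() >= threshold:
                return k
        else:
            # frame-based entropy: exit once uncertainty is low enough
            # (note the threshold has a different scale and direction here)
            if -(p * np.log(p + 1e-12)).sum(axis=-1).mean() <= threshold:
                return k
    return len(exit_logits) - 1                  # fall through to the last exit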
We consider the weighted least squares spline approximation of a noisy dataset. By interpreting the weights as a probability distribution, we maximize the associated entropy subject to the constraint that the mean squared error is prescribed to a desired (small) value. Acting on this error yields a robust regression method that automatically detects and removes outliers from the data during the fitting procedure by assigning them very small weights. We discuss the use of both spline functions and spline curves. A number of numerical illustrations are included to demonstrate the potential of the maximum-entropy approach in different application fields.
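The mechanism admits a compact sketch (a toy version with a weighted polynomial fit standing in for the spline fit; the function names and the target value E are illustrative): maximizing the entropy $-\sum_i w_i \log w_i$ subject to $\sum_i w_i = 1$ and $\sum_i w_i r_i^2 = E$ yields Gibbs-type weights $w_i \propto \exp(-\lambda r_i^2)$, so outliers with large residuals $r_i$ receive exponentially small weight.

import numpy as np

def max_entropy_weights(res2, E):
    # Maximizing entropy subject to sum w_i = 1 and sum w_i r_i^2 = E gives
    # Gibbs weights w_i ~ exp(-lam * r_i^2); choose the multiplier lam by
    # bisection so the mean-squared-error constraint holds.
    def weighted_mse(lam):
        w = np.exp(-lam * (res2 - res2.min()))     # shift for stability
        w = w / w.sum()
        return w @ res2, w
    lo, hi = 0.0, 1.0
    while weighted_mse(hi)[0] > E:                 # bracket the root
        hi *= 2.0
    for _ in range(100):                           # bisection on lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if weighted_mse(mid)[0] > E else (lo, mid)
    return weighted_mse(hi)[1]

# Alternate between a weighted fit and the entropy-optimal weight update.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)
y[::25] += 3.0                                     # inject outliers
w = np.full(x.size, 1.0 / x.size)
for _ in range(20):
    coef = np.polyfit(x, y, 7, w=np.sqrt(w))       # polyfit weights scale residuals
    res2 = (y - np.polyval(coef, x)) ** 2
    w = max_entropy_weights(res2, E=1e-2)          # outliers end up with tiny weights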
Testing cross-sectional independence in panel data models is of fundamental importance in econometric analysis with high-dimensional panels. Recently, econometricians have begun to turn their attention to this problem in the presence of serial dependence. The existing procedure for testing cross-sectional independence with serial correlation is based on the sum of the sample cross-sectional correlations; it generally performs well when the alternative has dense cross-sectional correlations but suffers from low power against sparse alternatives. To deal with sparse alternatives, we propose a test based on the maximum of the squared sample cross-sectional correlations. Furthermore, we propose a combined test that combines the p-values of the max-based and sum-based tests and performs well under both dense and sparse alternatives. The combined test relies on the asymptotic independence of the max-based and sum-based test statistics, which we establish rigorously. We show that the proposed max-based and combined tests have attractive theoretical properties and demonstrate their superior performance via extensive simulations. We apply the two new tests to analyze the weekly returns on the securities in the S\&P 500 index under the Fama-French three-factor model and confirm the usefulness of the proposed combined test in detecting cross-sectional dependence.
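To make the combination step concrete, here is one generic recipe that the asymptotic independence enables (the paper's exact combination rule may differ): with independent p-values from the max-based and sum-based tests, Fisher's method pools them into a single p-value.

import numpy as np
from scipy.stats import chi2

def combine_pvalues_fisher(p_max, p_sum):
    # Fisher's combination for two independent p-values:
    # -2 (log p1 + log p2) ~ chi-squared with 4 degrees of freedom under H0.
    stat = -2.0 * (np.log(p_max) + np.log(p_sum))
    return chi2.sf(stat, df=4)

# A sparse alternative: the max-based test fires while the sum-based one
# does not, yet the combined p-value remains small.
print(combine_pvalues_fisher(0.001, 0.40))        # ~ 0.0035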
A general a posteriori error analysis applies to five lowest-order finite element methods for two fourth-order semi-linear problems with trilinear non-linearity and a general source. A quasi-optimal smoother extends the source term to the discrete trial space, and more importantly, modifies the trilinear term in the stream-function vorticity formulation of the incompressible 2D Navier-Stokes and the von K\'{a}rm\'{a}n equations. This enables the first efficient and reliable a posteriori error estimates for the 2D Navier-Stokes equations in the stream-function vorticity formulation for Morley, two discontinuous Galerkin, $C^0$ interior penalty, and WOPSIP discretizations with piecewise quadratic polynomials.
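For orientation, one common convention for the stream-function vorticity formulation and its trilinear term (recalled here up to sign and scaling conventions, not quoted from the paper): the weak form seeks $\psi \in H^2_0(\Omega)$ with

$$\nu\,a(\psi,\varphi) + b(\psi,\psi,\varphi) = F(\varphi) \quad\text{for all } \varphi \in H^2_0(\Omega),$$

where $a(\eta,\chi) = \int_\Omega D^2\eta : D^2\chi\,dx$ and $b(\eta,\chi,\varphi) = \int_\Omega \Delta\eta\,(\partial_x\chi\,\partial_y\varphi - \partial_y\chi\,\partial_x\varphi)\,dx$; it is this trilinear term that the quasi-optimal smoother modifies at the discrete level.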
Solving multiphysics-based inverse problems for geological carbon storage monitoring can be challenging when multimodal time-lapse data are expensive to collect and costly to simulate numerically. We overcome these challenges by combining computationally cheap learned surrogates with learned constraints. Not only does this combination lead to vastly improved inversions for the important fluid-flow property, permeability, it also provides a natural platform for inverting multimodal data, including well measurements and active-source time-lapse seismic data. By adding a learned constraint, we arrive at a computationally feasible inversion approach that remains accurate. This is accomplished by including a trained deep neural network, known as a normalizing flow, which forces the model iterates to remain in-distribution, thereby safeguarding the accuracy of trained Fourier neural operators that act as surrogates for the computationally expensive multiphase flow simulations involving partial differential equation solves. By means of carefully selected experiments centered around the problem of geological carbon storage, we demonstrate the efficacy of the proposed constrained optimization method on two data modalities, namely time-lapse well data and time-lapse seismic data. While permeability inversions from each of these modalities have their own strengths and weaknesses, their joint inversion benefits from both, yielding superior permeability inversions and CO2 plume predictions both near and far from the monitoring wells.
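A schematic of the constrained optimization loop, with linear toy stand-ins for the learned surrogate and the normalizing flow (all names and shapes here are illustrative, not from the paper): the model iterates are parameterized through the flow's latent space, and a norm ball on the latent variable keeps them in-distribution.

import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(50, 20)) / np.sqrt(20)    # toy "flow": latent z -> model m
F = rng.normal(size=(30, 50)) / np.sqrt(50)    # toy "surrogate": model m -> data d
d_obs = F @ (G @ rng.normal(size=20))          # synthetic observations

tau = np.sqrt(20.0)          # latent-ball radius (typical norm of a Gaussian latent)
L = np.linalg.norm(F @ G, 2) ** 2              # gradient Lipschitz constant
z = np.zeros(20)
for _ in range(2000):                          # projected gradient descent
    r = F @ (G @ z) - d_obs                    # data residual through both networks
    z = z - (G.T @ (F.T @ r)) / L              # latent-space gradient step
    nz = np.linalg.norm(z)
    if nz > tau:                               # project back onto the latent ball,
        z = z * (tau / nz)                     # keeping the iterate in-distribution
m_est = G @ z                                  # recovered model (e.g., permeability)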
This paper introduces novel bulk-surface splitting schemes of first and second order for the wave equation with kinetic and acoustic boundary conditions of semi-linear type. For kinetic boundary conditions, we propose reinterpreting the equations as a coupled system: the bulk and surface dynamics are modeled separately and connected through a coupling constraint. This allows the construction of splitting schemes, which show first-order convergence in numerical experiments. Acoustic boundary conditions, on the other hand, naturally separate the bulk and surface dynamics. Here, Lie and Strang splitting schemes reach first- and second-order convergence, respectively, as we demonstrate numerically.
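For reference, written for a linear split $u' = (A+B)u$ with $A$ generating the bulk dynamics and $B$ the surface dynamics (a textbook statement included only for orientation): Lie splitting advances one step of size $\tau$ by $u_{n+1} = e^{\tau B}e^{\tau A}u_n$ and is first-order accurate, whereas Strang splitting uses the symmetrized composition $u_{n+1} = e^{\tau A/2}e^{\tau B}e^{\tau A/2}u_n$ and is second-order accurate; the convergence orders observed in the experiments above match this classical pattern.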
Iterative refinement (IR) is a popular scheme for solving a linear system of equations by gradually improving the accuracy of an initial approximation. Originally developed to improve the accuracy of Gaussian elimination, IR has seen renewed interest because of its suitability for execution on fast low-precision hardware such as analog devices and graphics processing units. IR generally converges when the error associated with the solution method is small, but is known to diverge when this error is large. We propose and analyze a novel enhancement of the IR algorithm that adds a line-search optimization step and thereby guarantees that the algorithm will not diverge. Numerical experiments verify our theoretical results and illustrate the effectiveness of the proposed scheme.
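The enhancement admits a compact sketch (a generic version with an exact line search on the residual norm; the low-precision solver here is a hypothetical stand-in emulated by rounding):

import numpy as np

def refine_with_linesearch(A, b, approx_solve, iters=50):
    # Iterative refinement in which each correction is scaled by the exact
    # line-search step that minimizes the residual norm, so that
    # ||b - A x|| can never increase from one iteration to the next.
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x
        d = approx_solve(r)                # approximate solve of A d = r
        Ad = A @ d
        denom = Ad @ Ad
        if denom == 0.0:                   # correction vanished: stop
            break
        alpha = (r @ Ad) / denom           # argmin_a ||r - a * (A d)||_2
        x = x + alpha * d
    return x

# Example: emulate a low-precision solver by rounding an exact solve to float16.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 100)) + 100.0 * np.eye(100)
b = rng.normal(size=100)
lowprec = lambda r: np.linalg.solve(A, r).astype(np.float16).astype(np.float64)
print(np.linalg.norm(b - A @ refine_with_linesearch(A, b, lowprec)))

Because alpha minimizes the residual norm over the search direction, taking alpha = 0 would reproduce the previous iterate, so the residual is monotonically non-increasing; this is the mechanism behind the non-divergence guarantee described above.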