We formulate and analyze a multiscale method for an elliptic problem with an oscillatory coefficient based on a skeletal (hybrid) formulation. More precisely, we employ hybrid discontinuous Galerkin approaches and combine them with the localized orthogonal decomposition methodology to obtain a coarse-scale skeletal method that effectively includes fine-scale information. This work is a first step toward reliably merging hybrid skeletal formulations with localized orthogonal decomposition, uniting the advantages of both strategies. Numerical experiments are presented to illustrate the theoretical findings.
Recent advancements in evaluating matrix-exponential functions have opened the doors to the practical use of exponential time-integration methods in numerical weather prediction (NWP). The success of exponential methods in shallow-water simulations has led to the question of whether they can be beneficial in a 3D atmospheric model. In this paper, we take a first step by evaluating the behavior of exponential time-integration methods in the Navy's compressible deep-atmosphere nonhydrostatic global model, NEPTUNE (Navy Environmental Prediction sysTem Utilizing a Nonhydrostatic Engine). Simulations are conducted on a set of idealized test cases designed to assess key features of a nonhydrostatic model and demonstrate that exponential integrators capture the desired large- and small-scale traits, yielding results comparable to those found in the literature. We also propose a new upper-boundary absorbing layer that is independent of the reference state and is shown to be effective in both idealized and real-data simulations. A real-data forecast using an exponential method with full physics is presented, providing a positive outlook for the use of exponential integrators in NWP.
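For orientation, the simplest member of this family is exponential Euler, $u_{n+1} = u_n + \Delta t\,\varphi_1(\Delta t\,A)\,(A u_n + g(u_n))$ with $\varphi_1(z) = (e^z - 1)/z$. The sketch below evaluates $\varphi_1$ densely via an augmented-matrix trick on a toy stiff system; it illustrates the mechanism only, not NEPTUNE's scalable matrix-function machinery.

```python
import numpy as np
from scipy.linalg import expm

def phi1(Z):
    """phi_1(Z) = Z^{-1}(e^Z - I), evaluated via an augmented matrix
    so no explicit inverse is needed (works even for singular Z)."""
    n = Z.shape[0]
    W = np.zeros((2 * n, 2 * n))
    W[:n, :n] = Z
    W[:n, n:] = np.eye(n)
    return expm(W)[:n, n:]

def exponential_euler(A, g, u0, dt, steps):
    """u' = A u + g(u), advanced by
    u_{n+1} = u_n + dt * phi1(dt A) (A u_n + g(u_n))."""
    u = u0.copy()
    P = phi1(dt * A)                 # precompute: the linear part is fixed
    for _ in range(steps):
        u = u + dt * (P @ (A @ u + g(u)))
    return u

# Toy stiff system: fast linear damping, mild quadratic forcing.
A = np.array([[-100.0, 1.0], [1.0, -100.0]])
g = lambda u: 0.01 * u ** 2
print(exponential_euler(A, g, np.array([1.0, 2.0]), dt=0.1, steps=50))
```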
We propose, analyze, and realize a variational multiclass segmentation scheme that partitions a given image into multiple regions exhibiting specific properties. Our method determines multiple functions that encode the segmentation regions by minimizing an energy functional combining information from different channels. Multichannel image data can be obtained by lifting the image into a higher-dimensional feature space using specific multichannel filtering, or may already be provided by the imaging modality under consideration, such as an RGB image or multimodal medical data. Experimental results show that the proposed method performs well in various scenarios. In particular, promising results are presented for two medical applications involving classification of brain abscess and tumor growth, respectively. As the main theoretical contributions, we prove the existence of global minimizers of the proposed energy functional and show its stability and convergence with respect to noisy inputs. These results also apply to the special case of binary segmentation, where they are likewise new.
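As a loose illustration of the underlying idea, the sketch below alternates between region means and softmax-relaxed indicator functions on multichannel data; it is a crude stand-in for the paper's energy functional and analysis, with all parameter choices hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiclass_segment(img, K=3, lam=10.0, sigma=2.0, iters=20, seed=0):
    """Alternate between region means c_k and soft indicator functions
    v_k on an (H, W, C) multichannel image; smoothing the data term
    crudely mimics the regularizer in a variational energy."""
    H, W, C = img.shape
    rng = np.random.default_rng(seed)
    c = img.reshape(-1, C)[rng.choice(H * W, K, replace=False)]  # init means
    for _ in range(iters):
        # squared distance of each pixel to each region mean: (H, W, K)
        d = ((img[..., None, :] - c[None, None]) ** 2).sum(-1)
        d = np.stack([gaussian_filter(d[..., k], sigma) for k in range(K)], -1)
        d -= d.min(-1, keepdims=True)          # stabilize the softmax
        v = np.exp(-lam * d)
        v /= v.sum(-1, keepdims=True)          # soft memberships on the simplex
        c = np.einsum('hwk,hwc->kc', v, img) / v.sum((0, 1))[:, None]
    return v.argmax(-1), c

# Synthetic two-channel image with three vertical regions plus noise.
img = np.zeros((64, 64, 2))
img[:, 20:40] = [1.0, 0.0]
img[:, 40:] = [1.0, 1.0]
noise = 0.05 * np.random.default_rng(1).normal(size=img.shape)
labels, means = multiclass_segment(img + noise)
```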
Stochastic memoization is a higher-order construct of probabilistic programming languages that is key in Bayesian nonparametrics, a modular approach that allows us to extend models beyond their parametric limitations and compose them in an elegant and principled manner. Stochastic memoization is simple and useful in practice, but semantically elusive, particularly regarding dataflow transformations. As the naive implementation resorts to the state monad, which is not commutative, it is not clear whether stochastic memoization preserves the dataflow property -- i.e., whether we can reorder the lines of a program without changing its semantics, provided the dataflow graph is preserved. In this paper, we give an operational and categorical semantics to stochastic memoization and name generation in the context of a minimal probabilistic programming language, for a restricted class of functions. Our contribution is a first model of stochastic memoization of constant Bernoulli functions with a non-enumerable type, which validates dataflow transformations, bridging the gap between traditional probability theory and higher-order probability models. Our model uses a presheaf category and a novel probability monad on it.
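The naive stateful implementation alluded to above fits in a few lines of Python; this sketch shows the operational behavior being modeled, not the paper's presheaf semantics.

```python
import random

def stochastic_memoize(sample):
    """Wrap a stochastic function so the first call at each argument
    draws a sample and every later call returns that cached draw --
    the stateful (state-monad-like) implementation."""
    cache = {}
    def memoized(x):
        if x not in cache:
            cache[x] = sample(x)
        return cache[x]
    return memoized

# A "constant Bernoulli function": each argument gets an independent
# coin flip with a fixed bias, frozen on first use.
coin = stochastic_memoize(lambda x: random.random() < 0.3)
print(coin("alice"), coin("alice"))   # equal: the draw is reused
print(coin("bob"))                    # a fresh, independent flip
```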
This paper presents a reduced version of the classical projection method for the solution of $d$-dimensional quasiperiodic problems, particularly Schr\"{o}dinger eigenvalue problems. Using the properties of the Schr\"{o}dinger operator in higher-dimensional space via a projection matrix of size $d\times n$, we rigorously prove that the generalized Fourier coefficients of the eigenfunctions decay exponentially along a fixed direction associated with the projection matrix. An efficient reduction strategy for the basis space is then proposed to reduce the degrees of freedom from $O(N^{n})$ to $O(N^{n-d}D^d)$, where $N$ is the number of Fourier grid points in each dimension and the truncation coefficient $D$ is much smaller than $N$. Correspondingly, the computational complexity of the proposed algorithm for solving the first $k$ eigenpairs using the Krylov subspace method decreases from $O(kN^{2n})$ to $O(kN^{2(n-d)}D^{2d})$. Rigorous error estimates of the proposed reduced projection method are provided, indicating that a small $D$ is sufficient to achieve the same level of accuracy as the classical projection method. We present numerical examples of quasiperiodic Schr\"{o}dinger eigenvalue problems in one and two dimensions to demonstrate the accuracy and efficiency of the proposed method.
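For context, here is a minimal sketch of the classical (unreduced) projection method in the simplest case $d=1$, $n=2$, for the toy potential $v(x) = \cos(x) + \cos(\sqrt{2}x)$ with projection matrix $P = [1, \sqrt{2}]$; the paper's reduction would further truncate this Fourier index set to $O(N^{n-d}D^d)$ modes, which is not reproduced here.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

# Classical projection method for H = -d^2/dx^2 + cos(x) + cos(sqrt(2) x):
# lift to a doubly periodic function of y = (x, sqrt(2) x) and discretize
# in Fourier space, where d/dx becomes the multiplier i (k1 + sqrt(2) k2).
N = 16                                        # Fourier modes per dimension
ks = range(-N, N + 1)
idx = {k: i for i, k in enumerate((a, b) for a in ks for b in ks)}
P = (1.0, np.sqrt(2.0))                       # the 1 x 2 projection matrix
H = lil_matrix((len(idx), len(idx)))
for (k1, k2), i in idx.items():
    H[i, i] = (P[0] * k1 + P[1] * k2) ** 2    # kinetic term
    for dk in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        j = idx.get((k1 + dk[0], k2 + dk[1]))
        if j is not None:                     # each cosine couples k -> k +- e
            H[i, j] += 0.5
vals = eigsh(H.tocsr(), k=4, which='SA')[0]
print("lowest eigenvalues:", np.sort(vals))
```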
The accurate representation of precipitation in Earth system models (ESMs) is crucial for reliable projections of the ecological and socioeconomic impacts of anthropogenic global warming. However, the complex cross-scale interactions of the processes that produce precipitation are challenging to model, inducing potentially strong biases in ESM fields, especially regarding extremes. State-of-the-art bias correction methods only address errors in the simulated frequency distributions locally, at every individual grid cell. Improving unrealistic spatial patterns of the ESM output, which would require spatial context, has not been possible so far. Here, we show that a post-processing method based on physically constrained generative adversarial networks (cGANs) can correct biases of a state-of-the-art, CMIP6-class ESM both in local frequency distributions and in spatial patterns at once. While our method improves local frequency distributions as well as gold-standard bias-adjustment frameworks do, it strongly outperforms existing methods in the correction of spatial patterns, especially in terms of the characteristic spatial intermittency of precipitation extremes.
Solving multiphysics-based inverse problems for geological carbon storage monitoring can be challenging when multimodal time-lapse data are expensive to collect and costly to simulate numerically. We overcome these challenges by combining computationally cheap learned surrogates with learned constraints. Not only does this combination lead to vastly improved inversions for the important fluid-flow property, permeability, it also provides a natural platform for inverting multimodal data, including well measurements and active-source time-lapse seismic data. By adding a learned constraint, we arrive at a computationally feasible inversion approach that remains accurate. This is accomplished by including a trained deep neural network, known as a normalizing flow, which forces the model iterates to remain in distribution, thereby safeguarding the accuracy of trained Fourier neural operators that act as surrogates for the computationally expensive multiphase flow simulations involving partial differential equation solves. By means of carefully selected experiments centered around the problem of geological carbon storage, we demonstrate the efficacy of the proposed constrained optimization method on two data modalities, namely time-lapse well and time-lapse seismic data. While permeability inversions from these two modalities each have their strengths and weaknesses, their joint inversion benefits from both, yielding superior permeability inversions and CO2 plume predictions both near and far away from the monitoring wells.
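Schematically, the constrained inversion amounts to optimizing in the latent space of the flow, as in the toy sketch below, where an affine map and a linear operator stand in for the trained normalizing flow and Fourier neural operator, respectively; all names and dimensions are illustrative only.

```python
import numpy as np

# Toy latent-space inversion: optimize z so that K = D(z) stays
# in-distribution for the surrogate G.
rng = np.random.default_rng(1)
G = rng.normal(size=(8, 5))                 # stand-in surrogate: data = G K
D = lambda z: 2.0 + 0.5 * z                 # stand-in flow decoder
K_true = D(rng.normal(size=5))              # an in-distribution model
d_obs = G @ K_true + 0.01 * rng.normal(size=8)

def loss(z, lam=0.1):
    r = G @ D(z) - d_obs
    return 0.5 * r @ r + 0.5 * lam * z @ z  # misfit + Gaussian prior on z

def grad(f, z, eps=1e-6):                   # finite-difference gradient
    return np.array([(f(z + eps * e) - f(z - eps * e)) / (2 * eps)
                     for e in np.eye(z.size)])

z = np.zeros(5)
for _ in range(300):
    z -= 0.1 * grad(loss, z)                # plain gradient descent
print("recovered:", np.round(D(z), 3))
print("true:     ", np.round(K_true, 3))
```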
Current physics-informed (standard or operator) neural networks still rely on accurately learning the initial conditions of the system they are solving. In contrast, standard numerical methods evolve such initial conditions without needing to learn them. In this study, we propose to improve current physics-informed deep learning strategies so that initial conditions do not need to be learned and are instead represented exactly in the predicted solution. Moreover, this method guarantees that when a DeepONet is applied multiple times to time-step a solution, the resulting function is continuous.
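One standard construction achieving exact initial conditions (a sketch of the general idea; the paper's ansatz may differ) wraps any network $N(x,t)$ as $u(x,t) = u_0(x) + t\,N(x,t)$, so $u(x,0) = u_0(x)$ identically:

```python
import numpy as np

def hard_ic_ansatz(network, u0):
    """u(x, t) = u0(x) + t * N(x, t): the initial condition is built in
    exactly, so the network only has to learn the evolution."""
    return lambda x, t: u0(x) + t * network(x, t)

u0 = np.sin                                  # prescribed u(x, 0) = sin(x)
network = lambda x, t: 0.3 * np.tanh(x - t)  # stand-in for a trained net
u = hard_ic_ansatz(network, u0)
x = np.linspace(0.0, np.pi, 5)
print(u(x, 0.0) - u0(x))                     # exactly zero at t = 0
```

When such a time-stepper is composed, each window can start from the previous window's final state, which is one way to obtain the continuity property mentioned above.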
Iterative refinement (IR) is a popular scheme for solving a linear system of equations based on gradually improving the accuracy of an initial approximation. Originally developed to improve upon the accuracy of Gaussian elimination, interest in IR has been revived because of its suitability for execution on fast low-precision hardware such as analog devices and graphics processing units. IR generally converges when the error associated with the solution method is small, but is known to diverge when this error is large. We propose and analyze a novel enhancement to the IR algorithm by adding a line search optimization step that guarantees the algorithm will not diverge. Numerical experiments verify our theoretical results and illustrate the effectiveness of our proposed scheme.
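A sketch of such a safeguarded iteration, assuming the line search minimizes the residual norm along the computed correction (the paper's exact formulation may differ): the optimal step is $\alpha = \langle r, Ad\rangle / \|Ad\|^2$, which makes the residual norm monotonically non-increasing.

```python
import numpy as np

def ir_with_line_search(A, b, solve_approx, iters=20):
    """Iterative refinement with an exact line search on the residual.
    solve_approx(r) returns an inexact (e.g. low-precision) correction
    d with A d ~ r; the optimal step alpha = <r, Ad> / <Ad, Ad>
    guarantees ||b - A x|| never increases."""
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x
        d = solve_approx(r)
        Ad = A @ d
        alpha = (r @ Ad) / (Ad @ Ad)
        x = x + alpha * d
    return x

# Low-precision inner solver: factor and solve in float32.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50)) + 50 * np.eye(50)
b = rng.normal(size=50)
A32 = A.astype(np.float32)
solve32 = lambda r: np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
x = ir_with_line_search(A, b, solve32)
print("residual:", np.linalg.norm(b - A @ x))
```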
Many applications in computational physics involve approximating problems with microstructure, characterized by multiple spatial scales in their data. However, numerical solutions of such problems are often computationally expensive, owing to the need to capture fine details at small scales. As a result, simulating such phenomena becomes unaffordable for many-query applications, such as parametrized systems with multiple scale-dependent features. Traditional projection-based reduced order models (ROMs) fail to resolve these issues, even for second-order elliptic PDEs commonly found in engineering applications. To address this, we propose an alternative nonintrusive strategy to build a ROM that combines classical proper orthogonal decomposition (POD) with a suitable neural network (NN) model to account for the small scales. Specifically, we employ sparse mesh-informed neural networks (MINNs), which handle both spatial dependencies in the solutions and model parameters simultaneously. We evaluate the performance of this strategy on benchmark problems and then apply it to approximate a real-life problem involving the impact of microcirculation in transport phenomena through the tissue microenvironment.
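Schematically, the offline/online split of such a POD+NN strategy looks as follows, with a dense scikit-learn MLP standing in for the paper's sparse mesh-informed networks and a hypothetical one-parameter toy problem as data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Offline: snapshots of a hypothetical one-parameter field are
# compressed by POD; a network learns the map mu -> POD coefficients.
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.5, 3.0, 80)
S = np.array([np.sin(m * np.pi * x) * np.exp(-m * x) for m in mus]).T
V = np.linalg.svd(S, full_matrices=False)[0][:, :8]   # 8 POD modes
coeffs = V.T @ S                                      # (8, 80) projections
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(mus.reshape(-1, 1), coeffs.T)

# Online: evaluating the ROM at a new parameter costs one NN forward pass.
mu_new = 1.7
u_rom = V @ net.predict([[mu_new]]).ravel()
u_ref = np.sin(mu_new * np.pi * x) * np.exp(-mu_new * x)
print("relative error:", np.linalg.norm(u_rom - u_ref) / np.linalg.norm(u_ref))
```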
This paper introduces a comprehensive framework for adjusting a discrete test statistic to improve its hypothesis testing procedure. The adjustment minimizes the Wasserstein distance to a null-approximating continuous distribution, tackling some fundamental challenges inherent in combining statistical significances derived from discrete distributions. The related theory justifies Lancaster's mid-p and mean-value chi-squared statistics for Fisher's combination as special cases. However, to counter the conservative nature of Lancaster's testing procedures, we propose an updated null-approximating distribution, obtained by further minimizing the Wasserstein distance to the adjusted statistics within a proper distribution family. Specifically, in the context of Fisher's combination, we propose an optimal gamma distribution as a substitute for the traditionally used chi-squared distribution. This new approach yields an asymptotically consistent test that significantly improves type I error control and enhances statistical power.
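As a rough illustration for binomial p-values in Fisher's combination: the sketch below uses Lancaster's mid-p adjustment and a moment-matched gamma null (a Monte Carlo stand-in for the paper's Wasserstein-optimal gamma).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, theta0, K = 20, 0.5, 5          # K one-sided binomial tests of theta = 0.5

def mid_p(x):
    """Lancaster's mid-p value: P(X > x) + 0.5 * P(X = x)."""
    return stats.binom.sf(x, n, theta0) + 0.5 * stats.binom.pmf(x, n, theta0)

def fisher_T(xs):
    return -2.0 * np.sum(np.log([mid_p(x) for x in xs]))

# Calibrate a gamma null by moment-matching a Monte Carlo sample of T
# under H0 (a stand-in for the paper's Wasserstein-optimal gamma).
T0 = np.array([fisher_T(rng.binomial(n, theta0, K)) for _ in range(20000)])
m, v = T0.mean(), T0.var()
shape, scale = m * m / v, v / m

T = fisher_T(rng.binomial(n, 0.7, K))       # data from an alternative
print("chi2(2K) p-value:", stats.chi2.sf(T, 2 * K))
print("gamma    p-value:", stats.gamma.sf(T, shape, scale=scale))
```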