Compatible finite element discretisations for the atmospheric equations of motion have recently attracted considerable interest. Semi-implicit timestepping methods require the repeated solution of a large saddle-point system of linear equations. Preconditioning this system is challenging since the velocity mass matrix is non-diagonal, leading to a dense Schur complement. Hybridisable discretisations overcome this issue: weakly enforcing continuity of the velocity field with Lagrange multipliers leads to a sparse system of equations, which has a similar structure to the pressure Schur complement in traditional approaches. We describe how the hybridised sparse system can be preconditioned with a non-nested two-level preconditioner. To solve the coarse system, we use the multigrid pressure solver that is employed in the approximate Schur complement method previously proposed by some of the authors. Our approach significantly reduces the number of solver iterations. The method shows excellent performance and scales to large numbers of cores in LFRic, the Met Office's next-generation climate and weather prediction model.
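To make the structure explicit, a schematic form of the saddle-point system and its pressure Schur complement (with notation and scalings chosen here purely for illustration, not taken from the paper) is
\[
\begin{pmatrix} M_u & -\Delta t\, D^\top \\ \Delta t\, D & M_p \end{pmatrix}
\begin{pmatrix} \boldsymbol{u} \\ p \end{pmatrix}
=
\begin{pmatrix} r_u \\ r_p \end{pmatrix},
\qquad
S = M_p + \Delta t^2\, D\, M_u^{-1} D^\top .
\]
Because the compatible velocity mass matrix $M_u$ is non-diagonal, $M_u^{-1}$ is dense and hence so is $S$; hybridisation avoids this dense elimination by producing a sparse system for the Lagrange multipliers on element facets.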
Invariant finite-difference schemes for the one-dimensional shallow water equations in the presence of a magnetic field are constructed for various bottom topographies. Based on the results of the group classification recently carried out by the authors, finite-difference analogues of the conservation laws of the original differential model are obtained. Several typical problems are considered numerically, for which the case with a magnetic field is compared with the case without one (the standard shallow water model). The invariance of the difference schemes in Lagrangian coordinates and the preservation of energy by the obtained numerical solutions are also discussed.
Partial differential equations (PDEs) with uncertain or random inputs have been considered in many studies of uncertainty quantification. In forward uncertainty quantification, one is interested in analyzing the stochastic response of the PDE subject to input uncertainty, which usually involves solving high-dimensional integrals of the PDE output over a sequence of stochastic variables. In practical computations, one typically needs to discretize the problem in several ways: approximating an infinite-dimensional input random field with a finite-dimensional random field, spatial discretization of the PDE using, e.g., finite elements, and approximating high-dimensional integrals using cubatures such as quasi-Monte Carlo methods. In this paper, we focus on the error resulting from dimension truncation of an input random field. We show how Taylor series can be used to derive theoretical dimension truncation rates for a wide class of problems and we provide a simple checklist of conditions that a parametric mathematical model needs to satisfy in order for our dimension truncation error bound to hold. Some of the novel features of our approach include that our results are applicable to non-affine parametric operator equations, dimensionally-truncated conforming finite element discretized solutions of parametric PDEs, and even compositions of PDE solutions with smooth nonlinear quantities of interest. As a specific application of our method, we derive an improved dimension truncation error bound for elliptic PDEs with lognormally parameterized diffusion coefficients. Numerical examples support our theoretical findings.
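Schematically, and with notation introduced here only for illustration, the dimension truncation error in question is of the form
\[
\left|\, \int_U G\big(u(\boldsymbol{y})\big)\,\mathrm{d}\boldsymbol{y}
- \int_U G\big(u_s(\boldsymbol{y})\big)\,\mathrm{d}\boldsymbol{y} \,\right|,
\qquad
u_s(\boldsymbol{y}) := u(y_1,\dots,y_s,0,0,\dots),
\]
where $u$ is the (possibly finite element discretized) solution of the parametric problem, $G$ a smooth quantity of interest, and $s$ the truncation dimension; the paper establishes the rate at which this error decays as $s \to \infty$ under its stated assumptions.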
Turbulent fluctuations of the atmospheric refraction index, so-called optical turbulence, can significantly distort propagating laser beams. Therefore, modeling the strength of these fluctuations ($C_n^2$) is highly relevant for the successful development and deployment of future free-space optical communication links. In this letter, we propose a physics-informed machine learning (ML) methodology, $\Pi$-ML, based on dimensional analysis and gradient boosting to estimate $C_n^2$. Through a systematic feature importance analysis, we identify the normalized variance of potential temperature as the dominant feature for predicting $C_n^2$. For statistical robustness, we train an ensemble of models, which yields high performance on out-of-sample data ($R^2=0.958\pm0.001$).
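For readers unfamiliar with the gradient-boosting-plus-feature-importance workflow, a minimal sketch follows. It is not the authors' $\Pi$-ML pipeline: the data are synthetic, the feature names are hypothetical, and scikit-learn's GradientBoostingRegressor is used as a generic stand-in for the boosting model.

```python
# Minimal sketch: gradient boosting with a feature-importance readout.
# Synthetic stand-in data; NOT the authors' Pi-ML pipeline or dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical dimensionless features (e.g. a normalized potential-temperature
# variance) and a target standing in for a scaled Cn^2 value.
n = 2000
X = rng.normal(size=(n, 4))
y = 1.5 * X[:, 0] ** 2 + 0.3 * X[:, 1] + 0.1 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)
model.fit(X_train, y_train)

# Out-of-sample skill and per-feature importances (the kind of diagnostic used
# to identify a dominant predictor).
print("out-of-sample R^2:", r2_score(y_test, model.predict(X_test)))
print("feature importances:", model.feature_importances_)
```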
By combining a logarithm transformation with a corrected Milstein-type method, this article proposes an explicit, unconditionally boundary- and dynamics-preserving scheme for the stochastic susceptible-infected-susceptible (SIS) epidemic model, which takes values in (0,N). The scheme is first proved to have a strong convergence rate of order one. The dynamic behavior of the numerical approximations is then analyzed, and it is shown that the scheme unconditionally preserves both the domain and the dynamics of the model. More precisely, the proposed scheme produces numerical approximations that remain in the domain (0,N) and reproduce the extinction and persistence properties of the original model for any time discretization step-size h > 0, without any additional requirements on the model parameters. Numerical experiments are presented to verify our theoretical results.
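As context, the sketch below simulates the SIS model with a *plain* Milstein discretisation, i.e. the baseline that the transformed scheme improves upon. It is not the scheme proposed in the paper and is not guaranteed to stay in (0,N); all parameter values are illustrative assumptions.

```python
# Plain Milstein discretisation of dI = [beta*I*(N-I) - (mu+gamma)*I] dt
#                                        + sigma*I*(N-I) dW.
# Illustrative baseline only; it does NOT preserve the domain (0, N) in general.
import numpy as np

def milstein_sis(I0, T, h, N=100.0, beta=0.02, mu=0.5, gamma=0.5,
                 sigma=0.01, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(round(T / h))
    I = np.empty(steps + 1)
    I[0] = I0
    for n in range(steps):
        x = I[n]
        a = beta * x * (N - x) - (mu + gamma) * x   # drift a(I)
        b = sigma * x * (N - x)                     # diffusion b(I)
        db = sigma * (N - 2.0 * x)                  # derivative b'(I)
        dW = np.sqrt(h) * rng.standard_normal()
        # Milstein update: Euler step plus the b*b' correction term.
        I[n + 1] = x + a * h + b * dW + 0.5 * b * db * (dW**2 - h)
    return I

path = milstein_sis(I0=10.0, T=10.0, h=0.01)
print("final state:", path[-1])
```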
Stochastic inverse problems typically arise when one wants to quantify the uncertainty affecting the inputs of computer models. They consist in estimating input distributions from noisy, observable outputs, and such problems are increasingly examined in Bayesian contexts where the targeted inputs are affected by stochastic uncertainties. In this regard, a stochastic input can be qualified as meaningful if it explains most of the output uncertainty. While such inverse problems are characterized by identifiability conditions, "signal-to-noise" constraints that formalize this meaningfulness should be accounted for within the definition of the model, prior to inference. This article investigates the possibility of forcing a solution to be meaningful in the context of parametric uncertainty quantification, using tools from global sensitivity analysis and information theory (variance, entropy, Fisher information). Such forcings mainly take the form of constraints on the input covariance and can be made explicit for linear or linearizable models. Simulated experiments indicate that, when injected into the modeling process, these constraints can limit the influence of measurement or process noise on the estimation of the input distribution, and they suggest possible extensions to a fully non-linear framework, for example through the use of linear Gaussian mixtures.
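As a purely illustrative example of how such a constraint can be made explicit for a linear model (the notation and the specific form below are assumptions of this sketch, not taken from the article): for $Y = A X + \varepsilon$ with $\operatorname{Var}(X) = \Sigma_X$ and $\operatorname{Var}(\varepsilon) = \Sigma_\varepsilon$, the output variance decomposes as
\[
\operatorname{Var}(Y) = A\,\Sigma_X A^\top + \Sigma_\varepsilon,
\]
and one plausible signal-to-noise requirement is $\operatorname{tr}\!\big(A\,\Sigma_X A^\top\big) \ge \kappa\, \operatorname{tr}(\Sigma_\varepsilon)$ for some $\kappa > 0$, so that the stochastic input is forced to explain a prescribed share of the output variability.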
The multivariate inverse hypergeometric (MIH) distribution is an extension of the negative multinomial (NM) model that accounts for sampling without replacement in a finite population. Even though most studies on longitudinal count data with a specific number of `failures' occur in a finite setting, the NM model is typically chosen over the more accurate MIH model. This raises the question: how much information is lost when inferring with the approximate NM model instead of the true MIH model? In statistics, this loss is quantified by a measure called deficiency. In this paper, asymptotic bounds for the deficiencies between MIH and NM experiments are derived, as well as between MIH and the corresponding multivariate normal experiments with the same mean-covariance structure. The findings are supported by a local approximation for the log-ratio of the MIH and NM probability mass functions, and by Hellinger distance bounds.
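For reference, the (Le Cam) deficiency of an experiment $\mathcal{E} = (P_\theta)_{\theta\in\Theta}$ with respect to $\mathcal{F} = (Q_\theta)_{\theta\in\Theta}$ can be written, up to normalization conventions, as
\[
\delta(\mathcal{E}, \mathcal{F}) \;=\; \inf_{M}\, \sup_{\theta \in \Theta}
\big\| M P_\theta - Q_\theta \big\|_{\mathrm{TV}},
\]
where the infimum runs over Markov kernels $M$; small deficiencies in both directions mean that little information is lost by working with one model in place of the other.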
A variant of the standard notion of branching bisimilarity for processes with discrete relative timing is proposed which is coarser than the standard notion. Using a version of ACP (Algebra of Communicating Processes) with abstraction for processes with discrete relative timing, it is shown that the proposed variant allows both the functional correctness and the performance properties of the PAR (Positive Acknowledgement with Retransmission) protocol to be analyzed. In the version of ACP concerned, the difference between the standard notion of branching bisimilarity and the proposed variant is characterized by a single axiom schema.
We propose a new model to address the overlooked problem of node clustering in simple hypergraphs. Simple hypergraphs are suitable when a node may not appear multiple times in the same hyperedge, such as in co-authorship datasets. Our model assumes the existence of latent node groups, with hyperedges conditionally independent given these groups. We first establish the generic identifiability of the model parameters. We then develop a variational approximation Expectation-Maximization algorithm for parameter inference and node clustering, and derive a statistical criterion for model selection. To illustrate the performance of our approach, implemented in the R package HyperSBM, we compare it with other node clustering methods on synthetic data generated from the model, as well as on data from a line clustering experiment and a co-authorship dataset. As a by-product, our synthetic experiments demonstrate that the detectability thresholds for non-uniform sparse hypergraphs cannot be deduced from the uniform case.
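To illustrate the kind of generative mechanism described above (latent node groups, hyperedges drawn independently given the groups), here is a hedged toy sampler. The probabilities, hyperedge sizes, and group structure are illustrative assumptions, not the model implemented in HyperSBM.

```python
# Toy sampler in the spirit of a hypergraph stochastic block model:
# each candidate hyperedge (all pairs and triples here) is included
# independently, with a probability depending only on the latent groups
# of its nodes.  Illustrative only; not the HyperSBM generative model.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_groups = 30, 2
groups = rng.integers(n_groups, size=n_nodes)     # latent node groups

def edge_prob(nodes):
    same_group = len(set(groups[list(nodes)])) == 1
    return 0.15 if same_group else 0.01           # within- vs between-group rate

hyperedges = [
    e
    for size in (2, 3)
    for e in itertools.combinations(range(n_nodes), size)
    if rng.random() < edge_prob(e)
]
print(len(hyperedges), "hyperedges sampled")
```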
We consider an unknown multivariate function representing a system, such as a complex numerical simulator, that takes both deterministic and uncertain inputs. Our objective is to estimate the set of deterministic inputs leading to outputs whose probability (with respect to the distribution of the uncertain inputs) of belonging to a given set is less than a given threshold. This problem, which we call Quantile Set Inversion (QSI), arises for instance in the context of robust (reliability-based) optimization, when looking for the set of solutions that satisfy the constraints with sufficiently large probability. To solve the QSI problem, we propose a Bayesian strategy based on Gaussian process modeling and the Stepwise Uncertainty Reduction (SUR) principle, which sequentially chooses the points at which the function should be evaluated in order to approximate the set of interest efficiently. We illustrate the performance and practical value of the proposed SUR strategy through several numerical experiments.
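The sketch below illustrates the QSI problem statement itself, by brute-force Monte Carlo on a toy function; it is not the Gaussian-process/SUR strategy of the paper, and the test function, critical set, and threshold are hypothetical.

```python
# Brute-force illustration of Quantile Set Inversion: estimate the set of
# deterministic inputs x whose probability of producing an output in a
# critical set C is below a threshold alpha.  Toy setup; not the GP/SUR method.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05                                  # probability threshold
x_grid = np.linspace(0.0, 1.0, 201)           # candidate deterministic inputs
xi = rng.normal(0.0, 0.1, size=5000)          # samples of the uncertain input

def f(x, xi):
    return np.sin(3.0 * x) + xi               # toy simulator output

in_C = lambda y: y > 1.0                      # membership in the critical set C
probs = np.array([in_C(f(x, xi)).mean() for x in x_grid])
qsi_set = x_grid[probs <= alpha]              # estimated QSI set
print(f"{qsi_set.size} of {x_grid.size} grid points satisfy the constraint")
```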
The topological (resp. geodesic) complexity of a topological (resp. metric) space is roughly the smallest number of continuous rules required to choose paths (resp. shortest paths) between any two points of the space. We prove that the geodesic complexity of a cube exceeds its topological complexity by exactly 2. The proof involves a careful analysis of the cut loci of the cube.