We describe Bayes factor functions based on z, t, $\chi^2$, and F statistics and the prior distributions used to define alternative hypotheses. The non-local alternative prior distributions are centered on standardized effects, which index the Bayes factor function. The prior densities include a dispersion parameter that models the variation of effect sizes across replicated experiments. We examine the convergence rates of Bayes factor functions under true null and true alternative hypotheses. Several examples illustrate the application of the Bayes factor functions to replicated experimental designs and compare the conclusions from these analyses to other default Bayes factor methods.
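As a minimal illustration of the idea of a Bayes factor function (using a simple local normal prior on the standardized effect rather than the paper's non-local priors), the sketch below evaluates, for a single z statistic, the Bayes factor against the null over a grid of prior centers; `tau2` plays the role of the dispersion parameter and is a hypothetical choice:

```python
import numpy as np
from scipy.stats import norm

def bff_z(z, effects, tau2=0.5):
    """Bayes factor function for a z statistic.

    Under H0: z ~ N(0, 1).  Under the alternative indexed by a
    standardized effect d, the effect has a N(d, tau2) prior, so
    marginally z ~ N(d, 1 + tau2).  (A local normal prior is used
    here as a stand-in for the paper's non-local priors.)
    """
    m1 = norm.pdf(z, loc=effects, scale=np.sqrt(1.0 + tau2))
    m0 = norm.pdf(z, loc=0.0, scale=1.0)
    return m1 / m0

# Evaluate the BFF for an observed z = 2.3 over a grid of effect sizes.
grid = np.linspace(0.0, 1.0, 11)
bf = bff_z(2.3, grid)
```

Plotting `bf` against `grid` traces out the Bayes factor function: evidence for the alternative as a function of the hypothesized standardized effect.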
The Laguerre functions $l_{n,\tau}^\alpha$, $n=0,1,\dots$, are constructed from generalized Laguerre polynomials. The functions $l_{n,\tau}^\alpha$ depend on two parameters: the scale $\tau>0$ and the order of generalization $\alpha>-1$, and form an orthogonal basis in $L_2[0,\infty)$. Let the spectrum of a square matrix $A$ lie in the open left half-plane. Then the matrix exponential $H_A(t)=e^{At}$, $t>0$, belongs to $L_2[0,\infty)$. Hence the matrix exponential $H_A$ can be expanded in a series $H_A=\sum_{n=0}^\infty S_{n,\tau,\alpha,A}\,l_{n,\tau}^\alpha$. An estimate of the norm $\Bigl\lVert H_A-\sum_{n=0}^N S_{n,\tau,\alpha,A}\,l_{n,\tau}^\alpha\Bigr\rVert_{L_2[0,\infty)}$ is proposed. Finding the minimum of this estimate over $\tau$ and $\alpha$ is discussed. Numerical examples show that the optimal $\alpha$ is often almost 0, which substantially simplifies the problem.
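A scalar analogue of the expansion (with $A$ replaced by a negative number, so $H(t)=e^{-t}$) can be sketched numerically. The Laguerre functions below are built from SciPy's generalized Laguerre polynomials and normalized by quadrature, so no closed-form normalization constant is assumed; the parameter values are illustrative only:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre

def laguerre_fn(n, tau, alpha):
    # Laguerre function (tau t)^(alpha/2) e^(-tau t/2) L_n^alpha(tau t),
    # normalized numerically to unit norm in L2[0, inf).
    raw = lambda t: (tau * t) ** (alpha / 2) * np.exp(-tau * t / 2) \
        * eval_genlaguerre(n, alpha, tau * t)
    nrm = np.sqrt(quad(lambda t: raw(t) ** 2, 0, np.inf)[0])
    return lambda t: raw(t) / nrm

def partial_sum_error(h, N, tau, alpha):
    # L2[0, inf) error of the degree-N partial sum of the expansion of h.
    fns = [laguerre_fn(n, tau, alpha) for n in range(N + 1)]
    coef = [quad(lambda t: h(t) * f(t), 0, np.inf)[0] for f in fns]
    err2 = quad(lambda t: (h(t) - sum(c * f(t) for c, f in zip(coef, fns))) ** 2,
                0, np.inf)[0]
    return np.sqrt(max(err2, 0.0))

h = lambda t: np.exp(-t)   # scalar analogue of e^{At}, spectrum in the LHP
e2 = partial_sum_error(h, 2, tau=1.0, alpha=0.0)
e8 = partial_sum_error(h, 8, tau=1.0, alpha=0.0)
```

The error decreases geometrically in $N$ here; choosing $\tau$ and $\alpha$ to minimize the error estimate is the optimization problem the abstract refers to.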
We study the spectral properties of flipped Toeplitz matrices of the form $H_n(f)=Y_nT_n(f)$, where $T_n(f)$ is the $n\times n$ Toeplitz matrix generated by the function $f$ and $Y_n$ is the $n\times n$ exchange (or flip) matrix having $1$ on the main anti-diagonal and $0$ elsewhere. In particular, under suitable assumptions on $f$, we establish an alternating sign relationship between the eigenvalues of $H_n(f)$, the eigenvalues of $T_n(f)$, and the quasi-uniform samples of $f$. Moreover, after fine-tuning a few known theorems on Toeplitz matrices, we use them to provide localization results for the eigenvalues of $H_n(f)$. Our study is motivated by the convergence analysis of the minimal residual (MINRES) method for the solution of real non-symmetric Toeplitz linear systems of the form $T_n(f)\mathbf x=\mathbf b$ after pre-multiplication of both sides by $Y_n$, as suggested by Pestana and Wathen.
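The symmetrization motivating the study can be sketched directly: for any real Toeplitz matrix $T_n$, the flipped matrix $Y_nT_n$ is symmetric (it is a Hankel matrix), so MINRES applies to $Y_nT_n\mathbf x=Y_n\mathbf b$. The tridiagonal symbol coefficients below are hypothetical, chosen only for illustration:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import minres

n = 200
# Real non-symmetric Toeplitz matrix T_n(f): first column and first row
# hold the Fourier coefficients of the generating function f
# (illustrative values, not a symbol from the paper).
col = np.zeros(n); row = np.zeros(n)
col[0] = 3.0; col[1] = -1.0; row[1] = 2.0
T = toeplitz(col, row)

Y = np.flipud(np.eye(n))        # exchange (flip) matrix
H = Y @ T                       # flipped Toeplitz: symmetric (Hankel)

# Solve T x = b via the symmetric system Y T x = Y b, as in
# the Pestana--Wathen approach.
b = np.ones(n)
x, info = minres(H, Y @ b)
```

Since `H` is exactly symmetric, short-recurrence MINRES is applicable even though `T` itself is non-symmetric.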
Consider a risk portfolio with aggregate loss random variable $S=X_1+\dots +X_n$ defined as the sum of the $n$ individual losses $X_1, \dots, X_n$. The expected allocation, $E[X_i \times 1_{\{S = k\}}]$, for $i = 1, \dots, n$ and $k \in \mathbb{N}$, is a vital quantity for risk allocation and risk-sharing. For example, one uses this value to compute peer-to-peer contributions under the conditional mean risk-sharing rule and the capital allocated to a line of business under the Euler risk allocation paradigm. This paper introduces an ordinary generating function for expected allocations: a power series representation of the expected allocation of an individual risk given the total risk in the portfolio, when all risks are discrete. First, we provide a simple relationship between the ordinary generating function for expected allocations and the probability generating function. Then, leveraging properties of ordinary generating functions, we reveal new theoretical results on closed-form solutions to risk allocation problems, especially when dealing with Katz or compound Katz distributions. Next, we present an efficient algorithm to recover the expected allocations using the fast Fourier transform, providing a new practical tool to compute expected allocations quickly. The latter approach is exceptionally efficient for a portfolio of independent risks.
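For independent discrete risks, an FFT route to expected allocations rests on the identity $E[X_i 1_{\{S=k\}}]=\sum_x x\,P(X_i=x)\,P(S_{-i}=k-x)$, a convolution of $x\,P(X_i=x)$ with the pmf of $S_{-i}=S-X_i$. A minimal sketch of this idea (not necessarily the paper's algorithm) using NumPy's FFT:

```python
import numpy as np

def expected_allocations(pmfs, m):
    """alloc[i, k] = E[X_i 1_{S=k}] for independent discrete risks.

    pmfs: list of arrays with pmfs[i][x] = P(X_i = x); m must exceed
    the maximum possible total loss to avoid FFT wrap-around.
    """
    pad = [np.pad(p, (0, m - len(p))) for p in pmfs]
    fhat = [np.fft.rfft(q) for q in pad]
    alloc = []
    for i, p in enumerate(pad):
        rest = np.ones_like(fhat[0])
        for j, fj in enumerate(fhat):
            if j != i:
                rest = rest * fj              # transform of the pmf of S_{-i}
        g = np.fft.rfft(np.arange(m) * p)     # transform of x * P(X_i = x)
        alloc.append(np.fft.irfft(g * rest, m))
    return np.array(alloc)
```

A useful sanity check is the identity $\sum_i E[X_i 1_{\{S=k\}}] = k\,P(S=k)$, which the FFT output satisfies to machine precision.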
Normal modal logics extending the logic K4.3 of linear transitive frames are known to lack the Craig interpolation property, except some logics of bounded depth such as S5. We turn this `negative' fact into a research question and pursue a non-uniform approach to Craig interpolation by investigating the following interpolant existence problem: decide whether there exists a Craig interpolant between two given formulas in any fixed logic above K4.3. Using a bisimulation-based characterisation of interpolant existence for descriptive frames, we show that this problem is decidable and coNP-complete for all finitely axiomatisable normal modal logics containing K4.3. It is thus not harder than entailment in these logics, which is in sharp contrast to other recent non-uniform interpolation results. We also extend our approach to Priorean temporal logics (with both past and future modalities) over the standard time flows (the integers, rationals, reals, and finite strict linear orders), none of which is blessed with the Craig interpolation property.
Subsurface storage of CO$_2$ is an important means to mitigate climate change, and to investigate the fate of CO$_2$ over several decades in vast reservoirs, numerical simulation based on realistic models is essential. Faults and other complex geological structures introduce modeling challenges because their effects on storage operations are uncertain due to limited data. In this work, we present a computational framework for forward propagation of uncertainty, including stochastic upscaling and copula representation of flow functions for a CO$_2$ storage site, using the Vette fault zone in the Smeaheia formation in the North Sea as a test case. The upscaling method reduces both the number of stochastic dimensions and the cost of evaluating the reservoir model. A viable model that represents the upscaled data needs to capture dependencies between variables and allow sampling. Copulas provide a representation of dependent multidimensional random variables and a good fit to data, allow fast sampling, and couple to the forward propagation method via independent uniform random variables. The non-stationary correlation within some of the upscaled flow functions is accurately captured by a data-driven transformation model. The uncertainty in upscaled flow functions and other parameters is propagated to uncertain leakage estimates using numerical reservoir simulation of a two-phase system. The expectations of leakage are estimated by an adaptive stratified sampling technique, where samples are sequentially concentrated in regions of the parameter space to greedily maximize variance reduction. We demonstrate cost reductions of one to two orders of magnitude relative to standard Monte Carlo for simpler test cases in which only the fault and reservoir layer permeabilities are assumed uncertain, and cost reductions by factors of 2--8 for stochastic multi-phase flow properties and more complex stochastic models.
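The copula sampling step can be sketched in a few lines: correlated Gaussians are pushed through the normal CDF to obtain dependent uniforms, which are then mapped to arbitrary marginals. The correlation value and lognormal permeability marginals below are hypothetical placeholders, not quantities fitted to the Smeaheia data:

```python
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(0)

# Gaussian copula: dependent U(0,1) variables from correlated normals.
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])                 # hypothetical correlation
L = np.linalg.cholesky(corr)
z = rng.standard_normal((10000, 2)) @ L.T
u = norm.cdf(z)                               # dependent uniform marginals

# Map to physical marginals, e.g. fault and layer permeabilities
# (illustrative lognormal shapes, not calibrated values).
k_fault = lognorm.ppf(u[:, 0], s=1.0)
k_layer = lognorm.ppf(u[:, 1], s=0.5)
```

The independent standard normals feeding the copula are what allow the coupling to forward uncertainty propagation via independent uniform random variables.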
In the first part of the paper we study the absolute error of sampling discretization of the integral $L_p$-norm for classes of continuous functions. We use the chaining technique to provide a general bound for the error of sampling discretization of the $L_p$-norm on a given functional class in terms of the entropy numbers in the uniform norm of this class. The general result yields new error bounds for sampling discretization of the $L_p$-norms on classes of multivariate functions with mixed smoothness. In the second part of the paper we apply the obtained bounds to study universal sampling discretization and the problem of optimal sampling recovery.
Exact methods for exponentiation of matrices of dimension $N$ can be computationally expensive in terms of execution time ($N^{3}$) and memory requirements ($N^{2}$), not to mention numerical precision issues. A type of matrix often exponentiated in the sciences is the rate matrix. Here we explore five methods to exponentiate rate matrices, some of which apply even more broadly to other matrix types. Three of the methods leverage a mathematical analogy between computing matrix elements of a matrix exponential and computing transition probabilities of a dynamical process (technically a Markov jump process, MJP, typically simulated using the Gillespie algorithm). In doing so, we identify a novel MJP-based method, with favorable computational scaling, that restricts the number of ``trajectory'' jumps based on the magnitude of the matrix elements. We then discuss this method's downstream implications for the mixing properties of Monte Carlo posterior samplers. We also benchmark two other methods of matrix exponentiation valid for any matrix (beyond rate matrices and, more generally, positive definite matrices) related to solving differential equations: Runge-Kutta integrators and Krylov subspace methods. Under conditions where both the largest matrix element and the number of non-vanishing elements scale linearly with $N$ -- reasonable conditions for rate matrices often exponentiated -- the computational time scaling of the most competitive methods (Krylov and one of the MJP-based methods) reduces to $N^2$ with total memory requirements of $N$.
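The Krylov approach, which applies the exponential to a vector rather than forming the full matrix exponential, can be sketched with SciPy for a sparse random rate matrix (non-negative off-diagonals, zero row sums); `expm_multiply` recovers one row of transition probabilities without the dense $N^3$ cost. The matrix here is a synthetic example, not one of the paper's benchmarks:

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import random as sprandom, diags
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(1)
N = 300

# Sparse rate matrix Q: non-negative off-diagonals, rows summing to zero.
A = sprandom(N, N, density=0.02, random_state=rng, format="lil")
A.setdiag(0.0)
Q = A.tocsr()
Q = Q - diags(np.asarray(Q.sum(axis=1)).ravel())

t = 0.5
v = np.zeros(N); v[0] = 1.0     # point mass on state 0

# Krylov-type action: row 0 of e^{Qt}, i.e. transition probabilities
# out of state 0, computed as e^{Q^T t} v.
p = expm_multiply(t * Q.T, v)

# Dense reference (the N^3 route being avoided).
P = expm(t * Q.toarray())
```

For fixed sparsity per row, the matrix-vector products inside the Krylov iteration cost $O(N)$ each, which is the source of the $N^2$ overall scaling mentioned above.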
Spatial statistics is traditionally based on stationary models on $\mathbb{R}^d$, such as Mat\'ern fields. Adapting traditional spatial statistical methods, originally designed for stationary models in Euclidean spaces, to effectively model phenomena on linear networks such as stream systems and urban road networks is challenging. The current study aims to analyze the incidence of traffic accidents on road networks using three different methodologies and to compare model performance across them. Initially, we applied spatial triangulation directly on road networks rather than on traditional continuous regions. However, this approach posed challenges in areas with complex boundaries, leading to the emergence of artificial spatial dependencies. To address this, we applied an alternative computational method to construct nonstationary barrier models. Finally, we explored a recently proposed class of Gaussian processes on compact metric graphs, the Whittle-Mat\'ern fields, defined by a fractional SPDE on the metric graph. The latter fields are a natural extension of Gaussian fields with Mat\'ern covariance functions on Euclidean domains to non-Euclidean metric graph settings. A ten-year period (2010-2019) of daily traffic-accident records from Barcelona, Spain, has been used to evaluate the three models referred to above. When comparing model performance, we observed that the Whittle-Mat\'ern fields defined directly on the network outperformed the network triangulation and barrier models. Due to their flexibility, the Whittle-Mat\'ern fields can be applied to a wide range of environmental problems on linear networks, such as spatio-temporal modeling of water contamination in stream networks or modeling air quality or accidents on urban road networks.
For a finite abstract simplicial complex $\Sigma$, an initial realization $\alpha$ in $\mathbb{E}^d$, and desired edge lengths $L$, we give practical sufficient conditions for the existence of a non-self-intersecting perturbation of $\alpha$ realizing the lengths $L$. We provide software to verify these conditions by computer and optionally assist in the creation of an initial realization from abstract simplicial data. Applications include proving the existence of a planar embedding of a graph with specified edge lengths or proving the existence of polyhedra (or higher-dimensional polytopes) with specified edge lengths.
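A simplistic version of the perturbation step (ignoring the non-self-intersection certification, which is the substance of the paper) can be phrased as nonlinear least squares on vertex positions. The triangle with target lengths 3, 4, 5 below is a hypothetical example, not data from the paper:

```python
import numpy as np
from scipy.optimize import least_squares

# Abstract 1-complex: the edges of a triangle, with desired lengths L.
edges = [(0, 1), (1, 2), (2, 0)]
L = np.array([3.0, 4.0, 5.0])     # realizable: a right triangle

# Initial realization in E^2 with the wrong lengths, to be perturbed.
p0 = np.array([[0.0, 0.0],
               [1.0, 0.0],
               [0.0, 1.0]])

def residual(x):
    # One residual per edge: current length minus desired length.
    p = x.reshape(-1, 2)
    return np.array([np.linalg.norm(p[u] - p[v]) - l
                     for (u, v), l in zip(edges, L)])

sol = least_squares(residual, p0.ravel())
p = sol.x.reshape(-1, 2)          # perturbed realization with lengths ~ L
```

A numerical zero of the residual gives a realization with the prescribed edge lengths; certifying that such a perturbation exists and is non-self-intersecting is the harder problem the paper addresses.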
Finite-dimensional truncations are routinely used to approximate partial differential equations (PDEs), either to obtain numerical solutions or to derive reduced-order models. The resulting discretized equations are known to violate certain physical properties of the system. In particular, first integrals of the PDE may not remain invariant after discretization. Here, we use the method of reduced-order nonlinear solutions (RONS) to ensure that the conserved quantities of the PDE survive its finite-dimensional truncation. In particular, we develop two methods: Galerkin RONS and finite volume RONS. Galerkin RONS ensures the conservation of first integrals in Galerkin-type truncations, whether used for direct numerical simulations or reduced-order modeling. Similarly, finite volume RONS conserves any number of first integrals of the system, including its total energy, after finite volume discretization. Both methods are applicable to general time-dependent PDEs and can be easily incorporated into existing Galerkin-type or finite volume code. We demonstrate the efficacy of our methods on two examples: direct numerical simulations of the shallow water equations and a reduced-order model of the nonlinear Schr\"odinger equation. As a byproduct, we also generalize RONS to phenomena described by a system of PDEs.
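The conservation idea can be caricatured in a single-invariant toy problem: remove from the truncated right-hand side its component along the gradient of the invariant, so that $\dot I=\nabla I\cdot\dot q=0$ along trajectories. This Lagrange-multiplier-style projection is only a schematic analogue of RONS, not the method itself:

```python
import numpy as np

def project_rhs(f, gradI):
    # Remove the component of f along grad I, so dI/dt = gradI . qdot = 0.
    # Single-constraint analogue of enforcing a first integral.
    return f - (gradI @ f) / (gradI @ gradI) * gradI

# Toy truncation: qdot = f(q) with a spurious drift in I(q) = ||q||^2,
# mimicking a discretization that fails to conserve a first integral.
def f(q):
    return np.array([-q[1], q[0]]) + 0.05 * q   # rotation + unphysical growth

q = np.array([1.0, 0.0])
dt = 1e-3
for _ in range(5000):
    q = q + dt * project_rhs(f(q), 2.0 * q)     # explicit Euler on projected RHS
```

The projection enforces conservation at the level of the truncated dynamics; the explicit Euler integrator still introduces an $O(\Delta t)$ drift, which a better time stepper would shrink.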