Several distributions and families of distributions have been proposed to model skewed data; consider, for example, the skew-normal and related distributions. Lambert W random variables offer an alternative approach: instead of constructing a new distribution, a certain transform is applied (Goerg, 2011). This approach allows the construction of a Lambert W skewed version of any distribution. We choose the Lambert W normal distribution as a natural starting point and also include the Lambert W exponential distribution because of the simplicity and shape of the exponential distribution, which, after skewing, may produce a reasonably heavy tail for loss models. In the theoretical part, we focus on the mathematical properties of the obtained distributions, including the range of skewness. In the practical part, the suitability of the corresponding Lambert W transformed distributions is evaluated on real insurance data. The results are compared with those obtained using common loss distributions.
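A minimal sketch of the skewing transform, assuming the parameterisation of Goerg (2011) in which a skewed Lambert W x Gaussian variable is Y = mu + sigma * U * exp(gamma * U) with U the standardised input; the function name, parameter values, and defaults below are illustrative only.

```python
import numpy as np
from scipy import stats

def rlambertw_gaussian(n, mu=0.0, sigma=1.0, gamma=0.2, rng=None):
    """Draw n samples from a skewed Lambert W x Gaussian distribution (assumed parameterisation)."""
    rng = np.random.default_rng(rng)
    u = rng.standard_normal(n)              # standardised Gaussian input
    return mu + sigma * u * np.exp(gamma * u)

y = rlambertw_gaussian(100_000, gamma=0.2, rng=1)
print("sample skewness:", stats.skew(y))    # a positive gamma induces right skew
```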
The efficient representation of random fields on geometrically complex domains is crucial for Bayesian modelling in engineering and machine learning. Today's prevalent random field representations are either intended for unbounded domains or are too restrictive in terms of possible field properties. Because of these limitations, techniques leveraging the historically established link between stochastic PDEs (SPDEs) and random fields have been gaining interest. The SPDE representation is especially appealing for engineering applications which already have a finite element discretisation for solving the physical conservation equations. In contrast to the dense covariance matrix of a random field, its inverse, the precision matrix, is usually sparse and equal to the stiffness matrix of an elliptic SPDE. We use the SPDE representation to develop a scalable framework for large-scale statistical finite element analysis and Gaussian process (GP) regression on complex geometries. The statistical finite element method (statFEM) introduced by Girolami et al. (2022) is a novel approach for synthesising measurement data and finite element models. In both statFEM and GP regression, we use the SPDE formulation to obtain the relevant prior probability densities with a sparse precision matrix. The properties of the priors are governed by the parameters and possibly fractional order of the SPDE, so that we can model anisotropic, non-stationary random fields with arbitrary smoothness on bounded domains and manifolds. The observation models for statFEM and GP regression are such that the posterior probability densities are Gaussians with a closed-form mean and precision. The respective mean vector and precision matrix can be evaluated using only sparse matrix operations. We demonstrate the versatility of the proposed framework and its convergence properties with Poisson and thin-shell examples.
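A minimal 1D sketch of GP regression with a sparse SPDE-type prior precision, illustrating why only sparse matrix operations are needed; the first-order precision Q = tau * (kappa^2 * M + K), the lumped mass matrix, and all parameter values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n, h = 200, 1.0 / 199
kappa, tau, noise_var = 10.0, 1.0, 1e-2

K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h   # stiffness matrix
M = sp.identity(n) * h                                          # lumped mass matrix
Q_prior = tau * (kappa**2 * M + K)                              # sparse prior precision

obs_idx = np.arange(0, n, 20)                                   # observed nodes
A = sp.csr_matrix((np.ones(len(obs_idx)), (np.arange(len(obs_idx)), obs_idx)),
                  shape=(len(obs_idx), n))                      # observation operator
y = np.sin(2 * np.pi * obs_idx / n)                             # synthetic data

Q_post = Q_prior + (A.T @ A) / noise_var                        # sparse posterior precision
mean_post = spsolve(Q_post.tocsc(), A.T @ y / noise_var)        # posterior mean via a sparse solve
```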
In statistical inference, a discrepancy between the parameter-to-observable map that generates the data and the parameter-to-observable map that is used for inference can lead to misspecified likelihoods and thus to incorrect estimates. In many inverse problems, the parameter-to-observable map is the composition of a linear state-to-observable map called an `observation operator' and a possibly nonlinear parameter-to-state map called the `model'. We consider such Bayesian inverse problems where the discrepancy in the parameter-to-observable map is due to the use of an approximate model that differs from the best model, i.e., to nonzero `model error'. Multiple approaches have been proposed to address such discrepancies, each leading to a specific posterior. We show how to use local Lipschitz stability estimates of posteriors with respect to likelihood perturbations to bound the Kullback--Leibler divergence of the posterior of each approach with respect to the posterior associated with the best model. Our bounds lead to criteria for choosing observation operators that mitigate the effect of model error for Bayesian inverse problems of this type. We illustrate the feasibility of one such criterion on an advection-diffusion-reaction PDE inverse problem, and use this example to discuss the importance and challenges of model error-aware inference.
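A minimal sketch of the quantity being bounded: for a linear-Gaussian toy problem y = O G theta + noise, the posteriors under an approximate model and the best model are both Gaussian, and their Kullback--Leibler divergence can be evaluated in closed form; all matrices below are illustrative stand-ins, not the advection-diffusion-reaction example.

```python
import numpy as np

def gaussian_posterior(G, O, y, prior_cov, noise_cov):
    """Posterior mean and covariance for a zero-mean Gaussian prior and linear model."""
    H = O @ G
    post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + H.T @ np.linalg.inv(noise_cov) @ H)
    post_mean = post_cov @ H.T @ np.linalg.inv(noise_cov) @ y
    return post_mean, post_cov

def kl_gaussian(m0, S0, m1, S1):
    """KL( N(m0, S0) || N(m1, S1) )."""
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

G_best = np.array([[1.0, 0.5], [0.0, 1.0]])
G_approx = G_best + 0.1 * np.array([[0.0, 1.0], [0.0, 0.0]])    # nonzero model error
O = np.array([[1.0, 0.0]])                                      # observation operator
y = np.array([0.7])
prior_cov, noise_cov = np.eye(2), 0.01 * np.eye(1)

m_b, S_b = gaussian_posterior(G_best, O, y, prior_cov, noise_cov)
m_a, S_a = gaussian_posterior(G_approx, O, y, prior_cov, noise_cov)
print("KL(approx || best) =", kl_gaussian(m_a, S_a, m_b, S_b))
```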
This work considers Bayesian experimental design for the inverse boundary value problem of linear elasticity in a two-dimensional setting. The aim is to optimize the positions of compactly supported pressure activations on the boundary of the examined body in order to maximize the value of the resulting boundary deformations as data for the inverse problem of reconstructing the Lam\'e parameters inside the object. We resort to a linearized measurement model and adopt the framework of Bayesian experimental design, under the assumption that the prior and measurement noise distributions are mutually independent Gaussians. This enables the use of the standard Bayesian A-optimality criterion for deducing optimal positions for the pressure activations. The (second) derivatives of the boundary measurements with respect to the Lam\'e parameters and the positions of the boundary pressure activations are deduced to allow minimizing the corresponding objective function, i.e., the trace of the covariance matrix of the posterior distribution, by a gradient-based optimization algorithm. Two-dimensional numerical experiments are performed to demonstrate the functionality of our approach.
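A minimal sketch of the A-optimality objective in the linearised Gaussian setting: the trace of the posterior covariance as a function of a design variable, minimised over candidate designs. The Jacobian `J_of_design` is a hypothetical stand-in for the derivative of the boundary measurements with respect to the Lam\'e parameters; the real objective would be minimised with a gradient-based algorithm rather than a grid search.

```python
import numpy as np

def a_optimality(J, prior_cov, noise_cov):
    """Trace of the posterior covariance for a linearised Gaussian model."""
    post_prec = np.linalg.inv(prior_cov) + J.T @ np.linalg.inv(noise_cov) @ J
    return np.trace(np.linalg.inv(post_prec))

def J_of_design(d):
    # Hypothetical Jacobian of the measurements w.r.t. the Lame parameters,
    # parameterised by the activation position d (illustrative only).
    return np.array([[np.cos(d), np.sin(d)],
                     [np.sin(2 * d), np.cos(2 * d)]])

prior_cov, noise_cov = np.eye(2), 0.05 * np.eye(2)
designs = np.linspace(0.0, np.pi, 200)
objective = [a_optimality(J_of_design(d), prior_cov, noise_cov) for d in designs]
best = designs[int(np.argmin(objective))]   # design minimising the A-optimality criterion
print("best activation position:", best)
```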
We extend the error bounds from [SIMAX, Vol. 43, Iss. 2, pp. 787-811 (2022)] for the Lanczos method for matrix function approximation to the block algorithm. Numerical experiments suggest that our bounds are fairly robust to changing block size and have the potential for use as a practical stopping criterion. Further experiments work towards a better understanding of how certain hyperparameters should be chosen in order to maximize the quality of the error bounds, even in the previously studied block-size-one case.
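A minimal sketch of the underlying (block-size-one) Lanczos approximation of f(A)b, which after k steps returns ||b|| Q_k f(T_k) e_1; a-posteriori error bounds of the kind discussed above are computed on top of such a recurrence. The reorthogonalisation, test matrix, and choice f = exp are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def lanczos_fA_b(A, b, k, f=expm):
    """Approximate f(A) b with k Lanczos steps and full reorthogonalisation."""
    n = len(b)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)     # full reorthogonalisation
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.norm(b) * Q @ f(T)[:, 0]        # ||b|| * Q_k f(T_k) e_1

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)); A = (A + A.T) / 10
b = rng.standard_normal(100)
approx = lanczos_fA_b(A, b, k=20)
print("relative error:", np.linalg.norm(approx - expm(A) @ b) / np.linalg.norm(expm(A) @ b))
```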
The non-identifiability of the competing risks model requires researchers to work with restrictions on the model to obtain informative results. We present a new identifiability solution based on an exclusion restriction. Many areas of applied research use methods that rely on exclusion restrictions. It appears natural to also use them for the identifiability of competing risks models. By imposing the exclusion restriction coupled with an Archimedean copula, we are able to avoid any parametric restriction on the marginal distributions. We introduce a semiparametric estimation approach for the nonparametric marginals and the parametric copula. Our simulation results demonstrate the usefulness of the suggested model, as the degree of risk dependence can be estimated without parametric restrictions on the marginal distributions.
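A minimal sketch of the data-generating structure at issue: latent competing-risk times linked by an Archimedean (here Clayton) copula, of which only the minimum time and the cause indicator are observed. The exponential marginals, copula parameter, and frailty construction are illustrative assumptions, not the paper's estimation procedure.

```python
import numpy as np

def clayton_copula_uniforms(n, theta, rng):
    """Sample (U1, U2) from a Clayton copula via the gamma-frailty construction."""
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / v[:, None]) ** (-1.0 / theta)

rng = np.random.default_rng(0)
u = clayton_copula_uniforms(10_000, theta=2.0, rng=rng)      # Kendall's tau = theta / (theta + 2)
t1 = -np.log(1 - u[:, 0]) / 0.5                              # latent time, marginal Exp(rate=0.5)
t2 = -np.log(1 - u[:, 1]) / 1.0                              # latent time, marginal Exp(rate=1.0)

observed_time = np.minimum(t1, t2)                           # only the minimum is observed
cause = np.where(t1 <= t2, 1, 2)                             # cause-of-failure indicator
print("share failing from risk 1:", np.mean(cause == 1))
```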
Many researchers have identified distribution shift as a likely contributor to the reproducibility crisis in behavioral and biomedical sciences. The idea is that if treatment effects vary across individual characteristics and experimental contexts, then studies conducted in different populations will estimate different average effects. This paper uses ``generalizability'' methods to quantify how much of the effect size discrepancy between an original study and its replication can be explained by distribution shift on observed unit-level characteristics. More specifically, we decompose this discrepancy into ``components'' attributable to sampling variability (including publication bias), observable distribution shifts, and residual factors. We compute this decomposition for several directly-replicated behavioral science experiments and find little evidence that observable distribution shifts contribute appreciably to non-replicability. In some cases, this is because there is too much statistical noise. In other cases, there is strong evidence that controlling for additional moderators is necessary for reliable replication.
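A minimal sketch of the kind of transport/reweighting step used in generalizability analyses: stratum-specific effects from the original study are reweighted by the replication sample's covariate distribution, and the remaining gap is what the observed moderator cannot explain. The decomposition below is a simplified illustration with made-up numbers, not the paper's estimator.

```python
import numpy as np

# Stratum-specific effect estimates from the original study (a moderator with 3 levels)
effects_original = np.array([0.40, 0.20, 0.05])
# Moderator distributions in the two samples
p_original = np.array([0.6, 0.3, 0.1])
p_replication = np.array([0.2, 0.3, 0.5])
# Overall effect estimated directly in the replication
effect_replication = 0.12

effect_original = effects_original @ p_original        # original average effect
transported = effects_original @ p_replication         # effect expected under the shifted covariates

shift_component = effect_original - transported        # explained by observable distribution shift
residual_component = transported - effect_replication  # left over after accounting for the shift
print(shift_component, residual_component)
```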
In this paper, efficient alternating direction implicit (ADI) schemes are proposed to solve three-dimensional heat equations with irregular boundaries and interfaces. Starting from the well-known Douglas-Gunn ADI scheme, a modified ADI scheme is constructed to mitigate the issue of accuracy loss in solving problems with time-dependent boundary conditions. The unconditional stability of the new ADI scheme is also rigorously proven via Fourier analysis. Then, by combining the ADI schemes with a 1D kernel-free boundary integral (KFBI) method, KFBI-ADI schemes are developed to solve the heat equation with irregular boundaries. In the 1D sub-problems of the KFBI-ADI schemes, the KFBI discretization takes advantage of the Cartesian grid and preserves the structure of the coefficient matrix so that the fast Thomas algorithm can be applied to solve the linear system efficiently. Second-order accuracy and unconditional stability of the KFBI-ADI schemes are verified through several numerical tests for both the heat equation and a reaction-diffusion equation. For the Stefan problem, which is a free boundary problem of the heat equation, a level set method is incorporated into the ADI method to capture the time-dependent interface. Numerical examples for simulating 3D dendritic solidification phenomena are also presented.
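A minimal sketch of one ingredient of the ADI sweeps: each 1D sub-problem reduces to a tridiagonal system (I - r D_xx) u_new = rhs, solved in O(n) with the Thomas algorithm. The grid size, diffusion number r, and boundary conditions below are illustrative.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c, rhs d."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n, r = 100, 0.5                                            # points per grid line, diffusion number dt/h^2
u = np.exp(-((np.linspace(0, 1, n) - 0.5) ** 2) / 0.01)    # initial data along one grid line
a = np.full(n, -r); c = np.full(n, -r); b = np.full(n, 1 + 2 * r)
a[0] = c[-1] = 0.0                                         # homogeneous Dirichlet ends
u_new = thomas(a, b, c, u)                                 # one implicit sweep in the x-direction
```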
Using diffusion models to solve inverse problems is a growing field of research. Current methods assume the degradation to be known and provide impressive results in terms of restoration quality and diversity. In this work, we leverage the efficiency of those models to jointly estimate the restored image and the unknown parameters of the degradation model. In particular, we design an algorithm based on the well-known Expectation-Maximization (EM) estimation method and diffusion models. Our method alternates between approximating the expected log-likelihood of the inverse problem using samples drawn from a diffusion model and a maximization step to estimate unknown model parameters. For the maximization step, we also introduce a novel blur kernel regularization based on a Plug \& Play denoiser. Diffusion models are slow to run, so we also provide a fast version of our algorithm. Extensive experiments on blind image deblurring demonstrate the effectiveness of our method when compared to other state-of-the-art approaches.
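A minimal sketch of the EM structure only: the E-step would draw clean-image samples from a diffusion-model posterior (replaced here by an oracle stand-in so the sketch runs), and the M-step updates the unknown degradation parameter, here a Gaussian blur width. Everything below is a toy approximation of the alternation, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
x_true = rng.random((32, 32))
sigma_true, noise_std = 2.0, 0.01
y = gaussian_filter(x_true, sigma_true) + noise_std * rng.standard_normal(x_true.shape)

def sample_posterior_images(y, sigma, n_samples=8):
    # Oracle stand-in for posterior sampling with a pretrained diffusion model:
    # perturbed copies of the ground truth keep the sketch self-contained.
    return [x_true + 0.01 * rng.standard_normal(y.shape) for _ in range(n_samples)]

sigma_est = 0.5                                            # initial guess of the blur width
for _ in range(10):                                        # EM iterations
    samples = sample_posterior_images(y, sigma_est)        # E-step: approximate expected log-likelihood
    grid = np.linspace(0.5, 4.0, 36)
    losses = [np.mean([np.sum((y - gaussian_filter(x, s)) ** 2) for x in samples]) for s in grid]
    sigma_est = grid[int(np.argmin(losses))]               # M-step: update the degradation parameter
print("estimated blur width:", sigma_est)
```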
Hawkes processes are often applied to model dependence and interaction phenomena in multivariate event data sets, such as neuronal spike trains, social interactions, and financial transactions. In the nonparametric setting, learning the temporal dependence structure of Hawkes processes is generally a computationally expensive task, all the more so with Bayesian estimation methods. In particular, for generalised nonlinear Hawkes processes, Markov chain Monte Carlo methods applied to compute the doubly intractable posterior distribution are not scalable to high-dimensional processes in practice. Recently, efficient algorithms targeting a mean-field variational approximation of the posterior distribution have been proposed. In this work, we first unify existing variational Bayes approaches under a general nonparametric inference framework, and analyse the asymptotic properties of these methods under easily verifiable conditions on the prior, the variational class, and the nonlinear model. Secondly, we propose a novel sparsity-inducing procedure, and derive an adaptive mean-field variational algorithm for the popular sigmoid Hawkes processes. Our algorithm is parallelisable and therefore computationally efficient in high-dimensional settings. Through an extensive set of numerical simulations, we also demonstrate that our procedure is able to adapt to the dimensionality of the parameter of the Hawkes process, and is partially robust to some types of model misspecification.
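A minimal sketch of the model class being estimated: a bivariate sigmoid Hawkes process with exponential excitation kernels, simulated by Ogata-style thinning (the sigmoid link bounds each intensity by Lambda, which gives a valid thinning envelope). The kernel shape, parameter values, and the zero entry encoding a missing interaction are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
Lambda, beta = 2.0, 1.0
nu = np.array([-0.5, -1.0])
alpha = np.array([[0.8, 0.0],      # a zero entry encodes a missing (sparse) interaction
                  [1.2, 0.5]])

def intensity(t, events):
    """lambda_k(t) = Lambda * sigmoid(nu_k + sum_l sum_{t_i^l < t} alpha_{kl} exp(-beta (t - t_i^l)))."""
    exc = np.zeros(2)
    for l in range(2):
        past = np.asarray(events[l]); past = past[past < t]
        exc += alpha[:, l] * np.exp(-beta * (t - past)).sum()
    return Lambda / (1.0 + np.exp(-(nu + exc)))

events, t, T = [[], []], 0.0, 50.0
while t < T:
    t += rng.exponential(1.0 / (2 * Lambda))               # candidate time from the bounding rate
    lam = intensity(t, events)
    u = rng.uniform(0, 2 * Lambda)
    if u < lam.sum():                                       # accept and attribute to a dimension
        k = 0 if u < lam[0] else 1
        events[k].append(t)
print("event counts per dimension:", [len(e) for e in events])
```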
Permutation tests are widely used for statistical hypothesis testing when the sampling distribution of the test statistic under the null hypothesis is analytically intractable or unreliable due to finite sample sizes. One critical challenge in the application of permutation tests in genomic studies is that an enormous number of permutations are often needed to obtain reliable estimates of very small $p$-values, leading to intensive computational effort. To address this issue, we develop algorithms for the accurate and efficient estimation of small $p$-values in permutation tests for paired and independent two-group genomic data. Our approaches leverage a novel framework for parameterizing the permutation sample spaces of these two types of data using the Bernoulli and conditional Bernoulli distributions, respectively, combined with the cross-entropy method. The performance of the proposed algorithms is demonstrated through applications to two simulated datasets and two real-world gene expression datasets generated by microarray and RNA-Seq technologies, as well as comparisons to existing methods such as crude permutations and SAMC. The results show that our approaches can achieve orders-of-magnitude gains in computational efficiency when estimating small $p$-values. Our approaches offer promising solutions for improving the computational efficiency of existing permutation test procedures and for developing new permutation-based testing methods in genomic data analysis.
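A minimal sketch of the idea for the paired case: the permutation null flips the sign of each paired difference with probability 1/2, which is parameterised with per-pair Bernoulli probabilities; a single cross-entropy step tilts those probabilities toward the tail, and the small $p$-value is estimated by importance sampling under the tilted proposal. The toy data, single-step update, and elite fraction are illustrative simplifications of the full procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d = rng.normal(0.4, 1.0, size=30)                 # paired differences (toy data)
t_obs = d.mean()                                  # observed test statistic

def stat(signs):                                  # signs: (n_samples, n_pairs) matrix of +/-1
    return (signs * d).mean(axis=1)

# Cross-entropy step: sample from the null (p = 1/2), keep the elite tail, refit Bernoulli probabilities
pilot = np.where(rng.random((20_000, len(d))) < 0.5, 1.0, -1.0)
elite = pilot[stat(pilot) >= np.quantile(stat(pilot), 0.99)]
q = np.clip(elite.mean(axis=0) / 2 + 0.5, 0.05, 0.95)     # tilted probability of keeping the + sign

# Importance sampling under the tilted Bernoulli proposal
samples = np.where(rng.random((20_000, len(d))) < q, 1.0, -1.0)
keep = (samples + 1) / 2                                   # 1 where the sign was kept positive
log_w = (keep * np.log(0.5 / q) + (1 - keep) * np.log(0.5 / (1 - q))).sum(axis=1)
p_hat = np.mean(np.exp(log_w) * (stat(samples) >= t_obs))  # unbiased estimate of the null tail probability
print("estimated permutation p-value:", p_hat)
```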