The Bayesian statistical framework provides a systematic approach to enhancing regularization models by incorporating prior information about the desired solution. For Bayesian linear inverse problems with Gaussian noise and a Gaussian prior, we propose a new iterative regularization algorithm belonging to the class of subspace projection regularization (SPR) methods. Treating the forward model matrix as a linear operator between two underlying finite-dimensional Hilbert spaces equipped with newly introduced inner products, we first introduce an iterative process that generates a sequence of valid solution subspaces. The SPR method then projects the original problem onto these solution subspaces to obtain a sequence of low-dimensional linear least-squares problems, and an efficient procedure is developed to update their solutions so as to approximate the desired solution of the original problem. With newly designed early stopping rules, the iterative algorithm obtains a regularized solution of satisfactory accuracy. Several theoretical results are established to reveal the regularization properties of the algorithm. We test the proposed algorithm on both small-scale and large-scale inverse problems and demonstrate its robustness and efficiency. The most computationally intensive operations in the proposed algorithm involve only matrix-vector products, making it well suited to large-scale problems.
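To make the general SPR mechanism concrete, the following minimal sketch projects a linear least-squares problem onto growing Krylov subspaces built by Golub-Kahan bidiagonalization and stops by the discrepancy principle. This is a standard textbook instance of subspace projection regularization, not the paper's prior-weighted algorithm; it does illustrate why the dominant cost is matrix-vector products with A and A^T.

```python
# Generic subspace projection regularization via Golub-Kahan bidiagonalization.
# A sketch under illustrative assumptions, not the paper's specific method.
import numpy as np

def spr_golub_kahan(A, b, noise_level, tau=1.01, max_iter=50):
    """Project min ||Ax - b|| onto growing Krylov subspaces; stop early."""
    m, n = A.shape
    U = np.zeros((m, max_iter + 1))
    V = np.zeros((n, max_iter))
    B = np.zeros((max_iter + 1, max_iter))  # projected bidiagonal matrix
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for k in range(max_iter):
        # Expand the solution subspace (matrix-vector products only).
        v = A.T @ U[:, k] - (B[k, k - 1] * V[:, k - 1] if k > 0 else 0.0)
        alpha = np.linalg.norm(v)
        V[:, k] = v / alpha
        B[k, k] = alpha
        u = A @ V[:, k] - alpha * U[:, k]
        B[k + 1, k] = np.linalg.norm(u)
        U[:, k + 1] = u / B[k + 1, k]
        # Solve the small projected least-squares problem.
        e1 = np.zeros(k + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(B[:k + 2, :k + 1], e1, rcond=None)
        x = V[:, :k + 1] @ y
        # Early stopping rule: discrepancy principle.
        if np.linalg.norm(A @ x - b) <= tau * noise_level:
            return x, k + 1
    return x, max_iter
```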
An essential problem in statistics and machine learning is the estimation of expectations involving probability density functions (PDFs) with intractable normalizing constants. The self-normalized importance sampling (SNIS) estimator, which normalizes the importance sampling (IS) weights, has become the standard approach due to its simplicity. However, SNIS has been shown to exhibit high variance in challenging estimation problems, e.g., those involving rare events or posterior predictive distributions in Bayesian statistics. Further, most state-of-the-art adaptive importance sampling (AIS) methods adapt the proposal as if the weights had not been normalized. In this paper, we propose a framework that treats the original task as the estimation of a ratio of two integrals. In our new formulation, we obtain samples from a joint proposal distribution in an extended space, two of whose marginals play the role of the proposals used to estimate each integral. Importantly, the framework allows us to induce and control a dependency between the two estimators. We propose a construction of the joint proposal that decomposes into two (multivariate) marginals and a coupling. This leads to a two-stage framework suitable for integration with existing or new AIS and/or variational inference (VI) algorithms: the marginals are adapted in the first stage, while the coupling can be chosen and adapted in the second stage. We show in several examples the benefits of the proposed methodology, including an application to Bayesian prediction with misspecified models.
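The ratio-of-integrals view underlying this abstract, E_p[f] = (\int f(x)\,\tilde{p}(x)\,dx) / (\int \tilde{p}(x)\,dx) for an unnormalized density \tilde{p}, is exactly what baseline SNIS estimates with a single proposal for both integrals. A minimal, self-contained sketch of that baseline (the target and proposal below are illustrative choices):

```python
# Baseline self-normalized importance sampling (SNIS) estimator of E_p[f]
# when the target density p is known only up to a normalizing constant.
import numpy as np

rng = np.random.default_rng(0)

def snis(f, log_p_tilde, proposal_sample, log_q, n=10_000):
    """SNIS estimate of E_p[f] using proposal q; p_tilde is unnormalized."""
    x = proposal_sample(n)                 # draw x_i ~ q
    log_w = log_p_tilde(x) - log_q(x)      # unnormalized log-weights
    w = np.exp(log_w - log_w.max())        # stabilize before exponentiating
    w /= w.sum()                           # self-normalization step
    return np.sum(w * f(x))

# Illustrative example: E[X] under an unnormalized standard normal target,
# using a wider normal proposal. Any additive constant in log_q cancels.
est = snis(f=lambda x: x,
           log_p_tilde=lambda x: -0.5 * x**2,
           proposal_sample=lambda n: rng.normal(0.0, 2.0, n),
           log_q=lambda x: -0.5 * (x / 2.0)**2 - np.log(2.0))
```

The proposed framework departs from this baseline by drawing from a joint proposal in an extended space, so that the numerator and denominator estimators use two coupled marginals rather than the single q above.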
The tree-structured varying coefficient model (TSVC) is a flexible regression approach that allows the effects of covariates to vary with the values of the effect modifiers. Relevant effect modifiers are identified inherently using recursive partitioning techniques. To quantify uncertainty in TSVC models, we propose a procedure to construct confidence intervals of the estimated partition-specific coefficients. This task constitutes a selective inference problem as the coefficients of a TSVC model result from data-driven model building. To account for this issue, we introduce a parametric bootstrap approach, which is tailored to the complex structure of TSVC. Finite sample properties, particularly coverage proportions, of the proposed confidence intervals are evaluated in a simulation study. For illustration, we consider applications to data from COVID-19 patients and from patients suffering from acute odontogenic infection. The proposed approach may also be adapted for constructing confidence intervals for other tree-based methods.
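The generic parametric-bootstrap template behind such a procedure is short enough to sketch. In the code below, a plain linear fit stands in for the TSVC fit, which in the actual procedure is re-run on every bootstrap sample, including the data-driven tree selection, precisely to account for the selective-inference issue; the stand-in fit and toy data are illustrative only.

```python
# Generic parametric bootstrap for coefficient confidence intervals.
# A minimal sketch: fit(X, y) is a stand-in for the TSVC fitting step.
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y):  # stand-in for fitting a TSVC model (tree + coefficients)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = np.std(y - X @ beta)
    return beta, sigma

def parametric_bootstrap_ci(X, y, B=2000, alpha=0.05):
    beta_hat, sigma_hat = fit(X, y)
    boot = np.empty((B, X.shape[1]))
    for b in range(B):
        # Simulate responses from the fitted model, then refit from scratch.
        y_star = X @ beta_hat + rng.normal(0.0, sigma_hat, len(y))
        boot[b], _ = fit(X, y_star)
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2], axis=0)
    return lo, hi

# Toy data to exercise the template.
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)
print(parametric_bootstrap_ci(X, y))
```

For TSVC the refit step is the nontrivial part: the partition itself can change across bootstrap samples, which is why the paper's procedure is tailored to the tree structure rather than the plain template shown here.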
Blocking, a special case of rerandomization, is routinely implemented in the design stage of randomized experiments to balance baseline covariates. This study proposes a regression adjustment method based on the least absolute shrinkage and selection operator (Lasso) to efficiently estimate the average treatment effect in randomized block experiments with high-dimensional covariates. We derive the asymptotic properties of the proposed estimator and outline the conditions under which it is more efficient than the unadjusted one. We provide a conservative variance estimator to facilitate valid inference. Our framework allows for blocks containing a single treated or control unit and for heterogeneous propensity scores across blocks, thus including paired experiments and finely stratified experiments as special cases. We further accommodate rerandomized experiments and combinations of blocking and rerandomization. Moreover, our analysis allows both the number of blocks and the block sizes to tend to infinity, as well as heterogeneous treatment effects across blocks, without assuming a true outcome data-generating model. Simulation studies and two real-data analyses demonstrate the advantages of the proposed method.
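A minimal sketch of the general idea, Lasso outcome models combined with block-level adjustment in AIPW style, is shown below. This is an illustrative construction under simplifying assumptions (each block contains both arms), not the paper's exact estimator or its conservative variance formula.

```python
# Lasso-based regression adjustment for the ATE in a blocked experiment.
# A sketch: fit outcome models per arm, average block-level AIPW estimates.
import numpy as np
from sklearn.linear_model import LassoCV

def lasso_adjusted_ate(X, y, z, blocks):
    m1 = LassoCV(cv=5).fit(X[z == 1], y[z == 1])   # treated outcome model
    m0 = LassoCV(cv=5).fit(X[z == 0], y[z == 0])   # control outcome model
    tau_b, w_b = [], []
    for b in np.unique(blocks):
        i = blocks == b
        mu1, mu0 = m1.predict(X[i]), m0.predict(X[i])  # imputed outcomes
        e = z[i].mean()   # block-specific propensity score
        # NOTE: this naive sketch assumes 0 < e < 1 in every block; the
        # paper's framework also covers blocks with a single treated or
        # control unit, which requires a more careful construction.
        tau = np.mean(mu1 - mu0
                      + z[i] * (y[i] - mu1) / e
                      - (1 - z[i]) * (y[i] - mu0) / (1 - e))
        tau_b.append(tau); w_b.append(i.sum())
    return np.average(tau_b, weights=w_b)
```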
This note discusses a simple modification of cross-conformal prediction inspired by recent work on e-values. The precursor of conformal prediction developed in the 1990s by Gammerman, Vapnik, and Vovk was also based on e-values and is called conformal e-prediction in this note. Replacing e-values by p-values led to conformal prediction, which has important advantages over conformal e-prediction without obvious disadvantages. The situation with cross-conformal prediction is, however, different: whereas for cross-conformal prediction validity is only an empirical fact (and can be broken with excessive randomization), this note draws the reader's attention to the obvious fact that cross-conformal e-prediction enjoys a guaranteed property of validity.
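The guaranteed validity rests on two elementary facts about e-values, sketched here schematically: the arithmetic mean of e-values is again an e-value (by linearity of expectation), and Markov's inequality converts an e-value into a level-alpha test. If each of the K folds produces an e-value E_k, i.e. E[E_k] <= 1 under the hypothesis that the candidate label is correct, then

\[
\mathbb{E}\!\left[\frac{1}{K}\sum_{k=1}^{K} E_k\right]
= \frac{1}{K}\sum_{k=1}^{K}\mathbb{E}[E_k] \le 1,
\qquad
\mathbb{P}\!\left(\frac{1}{K}\sum_{k=1}^{K} E_k \ge \frac{1}{\alpha}\right) \le \alpha,
\]

so excluding a label whenever the averaged e-value reaches 1/alpha yields coverage at least 1 - alpha. No analogous statement holds for averaged p-values without a correction factor, which is why cross-conformal prediction based on p-values enjoys only empirical validity.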
This paper presents a multivariate normal integral expression for the joint survival function of the cumulated components of any multinomial random vector. This result can be viewed as a multivariate analog of Equation (7) from Carter & Pollard (2004), who improved Tusnády's inequality. Our findings are based on a crucial relationship between the joint survival function of the cumulated components of any multinomial random vector and the joint cumulative distribution function of a corresponding Dirichlet distribution. We offer two distinct proofs: the first expands the logarithm of the Dirichlet density, while the second employs Laplace's method applied to the Dirichlet integral.
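The univariate special case of this multinomial-Dirichlet relationship is the classical binomial-Beta identity, which may help fix ideas: for X ~ Bin(n, p),

\[
\Pr(X \ge k)
\;=\; \Pr\big(B_{k,\,n-k+1} \le p\big)
\;=\; \frac{n!}{(k-1)!\,(n-k)!}\int_0^p t^{k-1}(1-t)^{n-k}\,dt,
\]

where B_{a,b} denotes a Beta(a, b) random variable. The paper's relationship generalizes the left-hand equality, with the cumulated components of a multinomial vector in place of X and a Dirichlet cumulative distribution function in place of the Beta one.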
We formulate and solve a Bayesian inverse Navier-Stokes (N-S) problem that assimilates velocimetry data in order to jointly reconstruct a 3D flow field and learn the unknown N-S parameters, including the boundary position. By hardwiring a generalised N-S problem, and regularising its unknown parameters using Gaussian prior distributions, we learn the most likely parameters in a collapsed search space. The most likely flow field reconstruction is then the N-S solution that corresponds to the learned parameters. We develop the method in the variational setting and use a stabilised Nitsche weak form of the N-S problem that permits the control of all N-S parameters. To regularise the inferred geometry, we use a viscous signed distance field (vSDF) as an auxiliary variable, which is given as the solution of a viscous Eikonal boundary value problem. We devise an algorithm that solves this inverse problem, and numerically implement it using an adjoint-consistent stabilised cut-cell finite element method. We then use this method to reconstruct magnetic resonance velocimetry (flow-MRI) data of a 3D steady laminar flow through a physical model of an aortic arch for two different Reynolds numbers and signal-to-noise ratio (SNR) levels (low/high). We find that the method can accurately i) reconstruct the low SNR data by filtering out the noise/artefacts and recovering flow features that are obscured by noise, and ii) reproduce the high SNR data without overfitting. Although the framework that we develop applies to 3D steady laminar flows in complex geometries, it readily extends to time-dependent laminar and Reynolds-averaged turbulent flows, as well as to non-Newtonian (e.g. viscoelastic) fluids.
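Schematically, and with notation that is illustrative rather than taken from the paper, the variational (MAP) formulation of such Gaussian-prior inverse problems minimizes a regularized data-misfit functional constrained by the forward model:

\[
x^\star \;=\; \arg\min_{x}\;
\tfrac{1}{2}\,\big\| y - \mathcal{Z}\,u(x) \big\|_{C_\varepsilon^{-1}}^{2}
\;+\;
\tfrac{1}{2}\,\big\| x - \bar{x} \big\|_{C_x^{-1}}^{2},
\]

where u(x) solves the (generalised) N-S problem for parameters x (boundary position included), \mathcal{Z} maps the velocity field to the velocimetry observations y, and C_\varepsilon and C_x are the noise and prior covariances: the Gaussian priors appear as the second, regularizing term.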
In statistical inference, a discrepancy between the parameter-to-observable map that generates the data and the parameter-to-observable map that is used for inference can lead to misspecified likelihoods and thus to incorrect estimates. In many inverse problems, the parameter-to-observable map is the composition of a linear state-to-observable map called an `observation operator' and a possibly nonlinear parameter-to-state map called the `model'. We consider such Bayesian inverse problems where the discrepancy in the parameter-to-observable map is due to the use of an approximate model that differs from the best model, i.e. to nonzero `model error'. Multiple approaches have been proposed to address such discrepancies, each leading to a specific posterior. We show how to use local Lipschitz stability estimates of posteriors with respect to likelihood perturbations to bound the Kullback--Leibler divergence of the posterior of each approach with respect to the posterior associated with the best model. Our bounds lead to criteria for choosing observation operators that mitigate the effect of model error in Bayesian inverse problems of this type. We illustrate one such criterion on an advection-diffusion-reaction PDE inverse problem from the literature, and use this example to discuss the importance and challenges of model-error-aware inference.
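To see why controlling the likelihood perturbation controls the posterior, consider the simplest version of such a stability estimate (a schematic bound, not the paper's sharper local Lipschitz estimates). For posteriors d\mu_i \propto e^{-\Phi_i}\, d\mu_0 with negative log-likelihoods \Phi_i and normalizing constants Z_i = \int e^{-\Phi_i}\, d\mu_0, one has

\[
D_{\mathrm{KL}}(\mu_1 \,\|\, \mu_2)
= \mathbb{E}_{\mu_1}[\Phi_2 - \Phi_1] + \log\frac{Z_2}{Z_1}
\;\le\; 2\,\|\Phi_1 - \Phi_2\|_{\infty},
\]

since Z_2 \le Z_1 \exp(\sup(\Phi_1 - \Phi_2)). Bounds of this type, applied with the best model's posterior as \mu_2, are what translate into criteria for choosing observation operators.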
Rational approximation is a powerful tool for obtaining accurate surrogates of nonlinear functions that are easy to evaluate and linearize. The interpolatory adaptive Antoulas--Anderson (AAA) method is one approach for constructing such approximants numerically. For large-scale vector- and matrix-valued functions, however, the direct application of the set-valued variant of AAA becomes inefficient. We propose and analyze a new sketching approach for such functions, called sketchAAA, that, with high probability, leads to much better approximants than previously suggested approaches while retaining efficiency. The sketching approach works in a black-box fashion, requiring only evaluations of the nonlinear function at sampling points. Numerical tests with nonlinear eigenvalue problems illustrate the efficacy of our approach, with speedups above 200 for sampling large-scale black-box functions without sacrificing accuracy.
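The core sketching step can be illustrated in a few lines: compress samples of a large vector-valued function with a random probe matrix, then build the rational approximant on the compressed samples. The toy function F and all sizes below are illustrative choices, and the final AAA call is indicated only in a comment.

```python
# The sketching idea in isolation: a Gaussian sketch compresses samples of
# a large vector-valued function before rational approximation.
import numpy as np

rng = np.random.default_rng(1)
N, s = 10_000, 30                       # ambient dimension, sketch size
poles = np.linspace(2.0, 3.0, N)        # toy function F(z) = 1 / (poles - z)
Z = np.linspace(-1.0, 1.0, 200)         # sampling points (black-box evals)

S = rng.standard_normal((s, N)) / np.sqrt(s)                 # Gaussian sketch
F_sketched = np.stack([S @ (1.0 / (poles - z)) for z in Z])  # shape (200, s)

# A set-valued AAA routine is now run on the s sketched components only;
# the support points and barycentric weights it selects are then reused
# to build the approximant of the full N-dimensional function.
```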
We present and analyze a structure-preserving method for the approximation of solutions to nonlinear cross-diffusion systems, which combines a local discontinuous Galerkin spatial discretization with the backward Euler time-stepping scheme. The proposed method exploits the underlying entropy structure of the system, expressing the main unknown in terms of the entropy variable by means of a nonlinear transformation. Such a transformation allows for imposing the physical positivity or boundedness constraints on the approximate solution in a strong sense. Moreover, nonlinearities do not appear explicitly within differential operators or interface terms in the scheme, which significantly improves its efficiency and eases its implementation. We prove the existence of discrete solutions and their asymptotic convergence to continuous weak solutions. Numerical results for some one- and two-dimensional problems illustrate the accuracy and entropy stability of the proposed method.
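A standard instance of the entropy-variable transformation clarifies how boundedness is enforced in a strong sense (this particular entropy is a common illustrative choice, not necessarily the one used in the paper). For a solution component required to stay in (0, 1), take

\[
s(u) = u\log u + (1-u)\log(1-u), \qquad
w = s'(u) = \log\frac{u}{1-u}, \qquad
u(w) = \frac{e^{w}}{1+e^{w}} \in (0,1),
\]

so that any discrete approximation of the entropy variable w, however large, maps back to an admissible value of u; positivity alone is obtained analogously with s(u) = u\log u - u, for which u(w) = e^{w} > 0.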
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
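For context, the classical input-output information bound that results of this kind improve upon states that, for a loss that is \sigma-sub-Gaussian under the data distribution, the expected generalization gap of an algorithm with output W trained on an n-sample S satisfies

\[
\big|\,\mathbb{E}[\mathrm{gen}(W, S)]\,\big|
\;\le\;
\sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)}.
\]

This bound is vacuous for deterministic algorithms with continuous outputs, where I(W; S) can be infinite, and I(W; S) is hard to estimate for deep networks; measuring the information contained in the predictions instead is what yields the finite, estimable bounds described above.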