In this work we are interested in the (ill-posed) inverse problem for absolute permeability characterization that arises in predictive modeling of porous media flows. We consider a Bayesian statistical framework with a preconditioned Markov Chain Monte Carlo (MCMC) algorithm for the solution of the inverse problem. Uncertainty can be reduced by incorporating measurements at sparse locations (static data) into the prior distribution. We present a new method to condition Gaussian fields (the logarithms of permeability fields) to available sparse measurements. A truncated Karhunen-Lo\`eve expansion (KLE) is used for dimension reduction. In the proposed method, static data are imposed by projecting a sample (expressed as a vector of independent, identically distributed normal random variables) onto the null space of a data matrix defined in terms of the KLE. The numerical implementation of the proposed method is straightforward. Through numerical experiments for a model governed by a second-order elliptic equation, we show that in multi-chain studies the proposed method converges much faster than MCMC without conditioning. These studies indicate the importance of conditioning in accelerating MCMC convergence.
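The projection step can be illustrated with a minimal sketch, assuming a one-dimensional domain, a squared-exponential covariance, and hypothetical measurement locations and values (this is an illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)                       # grid on the domain
C = np.exp(-(x[:, None] - x[None, :])**2 / 0.1**2)   # covariance of the log-permeability
lam, phi = np.linalg.eigh(C)                         # eigen-decomposition
order = np.argsort(lam)[::-1][:20]                   # keep the 20 largest KLE modes
lam, phi = lam[order], phi[:, order]

obs_idx = [30, 100, 170]                             # sparse measurement locations (hypothetical)
y_obs = np.array([0.5, -0.2, 0.8])                   # static data (hypothetical)

A = phi[obs_idx, :] * np.sqrt(lam)                   # data matrix: A @ theta gives the field at obs_idx
theta = rng.standard_normal(20)                      # unconditioned i.i.d. normal sample

# Condition by projecting theta onto the null space of A and adding a particular solution
A_pinv = np.linalg.pinv(A)
theta_cond = theta - A_pinv @ (A @ theta) + A_pinv @ y_obs

Y = phi @ (np.sqrt(lam) * theta_cond)                # conditioned log-permeability field
print(np.allclose(Y[obs_idx], y_obs))                # True: static data are honored
```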
Poisson log-linear models are ubiquitous in many applications and are one of the most popular approaches for parametric count regression. In the Bayesian context, however, there is a lack of dedicated computational tools for efficient sampling from the posterior distribution of the parameters, and standard algorithms, such as random walk Metropolis-Hastings or Hamiltonian Monte Carlo, are typically used. Herein, we develop an efficient Metropolis-Hastings algorithm and importance sampler to simulate from the posterior distribution of the parameters of Poisson log-linear models under conditional Gaussian priors, with superior performance with respect to state-of-the-art alternatives. Key to both algorithms is the introduction of a proposal density based on a Gaussian approximation of the posterior distribution of the parameters. Specifically, our result leverages the negative binomial approximation of the Poisson likelihood and the successful P\'olya-gamma data augmentation scheme. Simulations show that the time per independent sample of the proposed samplers is competitive with that of Hamiltonian Monte Carlo, with the Metropolis-Hastings algorithm showing superior performance in all scenarios considered.
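The following sketch illustrates the general idea of an independence Metropolis-Hastings sampler whose proposal is a Gaussian approximation of the posterior. For simplicity the approximation is obtained from a Laplace expansion at the posterior mode, standing in for the negative binomial / P\'olya-gamma construction described above; the data and prior variance are synthetic:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
n, p = 500, 3
X = rng.standard_normal((n, p))
beta_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ beta_true))
tau2 = 10.0                                      # prior: beta ~ N(0, tau2 * I)

def neg_log_post(b):
    eta = X @ b
    return -(y @ eta - np.exp(eta).sum()) + 0.5 * b @ b / tau2

mode = optimize.minimize(neg_log_post, np.zeros(p)).x
W = np.exp(X @ mode)
H = X.T @ (W[:, None] * X) + np.eye(p) / tau2    # Hessian of neg_log_post at the mode
S = np.linalg.inv(H)                             # Gaussian proposal covariance
prop = stats.multivariate_normal(mode, S)

beta, samples, acc = mode.copy(), [], 0
for _ in range(5000):
    cand = rng.multivariate_normal(mode, S)
    log_ratio = (neg_log_post(beta) - neg_log_post(cand)
                 + prop.logpdf(beta) - prop.logpdf(cand))
    if np.log(rng.uniform()) < log_ratio:
        beta, acc = cand, acc + 1
    samples.append(beta)
print("acceptance rate:", acc / 5000)
print("posterior means:", np.mean(samples, axis=0))
```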
In the context of Bayesian factor analysis, it is possible to compute mean plausible values, which can be used as covariates or predictors, or to provide individual scores for the Bayesian latent variables. Previous simulation studies assessed the validity of plausible values via the mean squared difference between the plausible values and the generating factor scores. However, the generating factor scores are unknown in empirical studies, so an indicator based solely on model parameters is needed to evaluate the validity of factor score estimates in empirical settings. The coefficient of determinacy is based on model parameters and can be computed whenever Bayesian factor analysis is performed in empirical settings. The central aim of the present simulation study was therefore to compare the coefficient of determinacy based on model parameters with the correlation of mean plausible values with the generating factors. The coefficient of determinacy was found to yield an acceptable estimate of the validity of mean plausible values. Because the coefficient of determinacy overestimates the validity for small sample sizes and small salient loadings, it is recommended to report it together with a bias correction in order to estimate the validity of mean plausible values in empirical settings.
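For reference, in the single-factor case the coefficient of determinacy of the regression score estimator can be written in terms of the model parameters alone (notation here is generic rather than taken from the study; $\boldsymbol{\lambda}$ denotes the loading vector and $\boldsymbol{\Psi}$ the diagonal matrix of unique variances):

$$\rho = \sqrt{\boldsymbol{\lambda}^{\top}\boldsymbol{\Sigma}^{-1}\boldsymbol{\lambda}}, \qquad \boldsymbol{\Sigma} = \boldsymbol{\lambda}\boldsymbol{\lambda}^{\top} + \boldsymbol{\Psi}.$$

Values of $\rho$ close to one indicate that factor score estimates, and hence mean plausible values, closely track the underlying factor.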
In the analysis of two-way contingency tables, measures representing the degree of departure from independence, symmetry, or asymmetry are often used. These measures are expressed as functions of the probability structure of the table, so their values must be estimated. Plug-in estimators based on sample proportions are commonly used, but without a sufficient sample size, their bias and mean squared error (MSE) become large. This study proposes an estimator that can reduce the bias and MSE even without a sufficient sample size, using Bayesian estimators of the cell probabilities. We asymptotically evaluate the MSE of the estimator obtained by plugging in the posterior means of the cell probabilities when the prior distribution of the cell probabilities is a Dirichlet distribution. As a result, we derive the Dirichlet parameter that asymptotically minimizes the MSE of the estimator. Numerical experiments show that the proposed estimator has smaller bias and MSE than the plug-in estimators based on sample proportions, the uniform prior, and the Jeffreys prior. Another advantage of our approach is the construction of credible intervals for the measures via Monte Carlo simulation.
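The plug-in construction and the Monte Carlo credible interval can be sketched as follows; Cramér's V serves purely as an illustrative measure, and the table and Dirichlet hyperparameter are hypothetical (the study's own measures and optimal parameter differ):

```python
import numpy as np

rng = np.random.default_rng(2)
n = np.array([[12, 5, 3],
              [ 4, 9, 7]])                  # observed 2x3 table (hypothetical)
alpha = 0.5                                 # Dirichlet hyperparameter (e.g., Jeffreys)
N, K = n.sum(), n.size

def cramers_v(p):
    """Population Cramér's V of a probability table p (illustrative measure)."""
    pr, pc = p.sum(axis=1), p.sum(axis=0)
    indep = np.outer(pr, pc)
    chi2 = ((p - indep)**2 / indep).sum()
    return np.sqrt(chi2 / (min(p.shape) - 1))

p_mean = (n + alpha) / (N + K * alpha)      # posterior means of the cell probabilities
print("plug-in (posterior means):", cramers_v(p_mean))
print("plug-in (sample proportions):", cramers_v(n / N))

# Monte Carlo credible interval from the Dirichlet posterior
draws = rng.dirichlet((n + alpha).ravel(), size=5000).reshape(5000, *n.shape)
vals = np.array([cramers_v(p) for p in draws])
print("95% credible interval:", np.quantile(vals, [0.025, 0.975]))
```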
Information geometry is concerned with applying concepts from differential geometry to the parametric spaces of statistical models. When the random variables are independent and identically distributed, the underlying parametric space exhibits constant curvature, making the geometry hyperbolic (negative curvature) or spherical (positive curvature). In this paper, we derive closed-form expressions for the components of the first and second fundamental forms of pairwise isotropic Gaussian-Markov random field manifolds, allowing the computation of the Gaussian, mean, and principal curvatures. Computational simulations using Markov Chain Monte Carlo dynamics indicate that a change in the sign of the Gaussian curvature is related to the emergence of phase transitions in the field. Moreover, the curvatures are highly asymmetric for positive and negative displacements in the inverse temperature parameter, suggesting the existence of irreversible geometric properties of the parametric space along the dynamics. Furthermore, these asymmetric changes in the curvature of the space induce an intrinsic notion of time in the evolution of the random field.
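In the two-dimensional case, the passage from the fundamental forms to the curvatures follows the standard formulas; the sketch below uses hypothetical form matrices:

```python
import numpy as np

# First (I) and second (II) fundamental form matrices at a point (hypothetical values)
I_form = np.array([[1.2, 0.1],
                   [0.1, 0.9]])            # [E F; F G]
II_form = np.array([[0.4, -0.2],
                    [-0.2, 0.3]])          # [e f; f g]

S = np.linalg.solve(I_form, II_form)       # shape operator S = I^{-1} II
k1, k2 = np.linalg.eigvals(S)              # principal curvatures
K = np.linalg.det(II_form) / np.linalg.det(I_form)   # Gaussian curvature
H = 0.5 * np.trace(S)                      # mean curvature
print(K, k1 * k2)                          # equal up to round-off
print(H, 0.5 * (k1 + k2))
```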
In this work we consider regularized Wasserstein barycenters (averages in the Wasserstein distance) in a Fourier basis. We prove that the random Fourier parameters of the barycenter converge in distribution to a Gaussian random vector. The convergence rate is derived in the finite-sample case, with explicit dependence on the number of measures ($n$) and the dimension of the parameters ($p$).
We study the optimal transport problem for pairs of stationary finite-state Markov chains, with an emphasis on the computation of optimal transition couplings. Transition couplings are a constrained family of transport plans that capture the dynamics of Markov chains. Solutions of the optimal transition coupling (OTC) problem correspond to alignments of the two chains that minimize long-term average cost. We establish a connection between the OTC problem and Markov decision processes, and show that solutions of the OTC problem can be obtained via an adaptation of policy iteration. For settings with large state spaces, we develop a fast approximate algorithm based on an entropy-regularized version of the OTC problem, and provide bounds on its per-iteration complexity. We establish a stability result for both the regularized and unregularized algorithms, from which a statistical consistency result follows as a corollary. We validate our theoretical results empirically through a simulation study, demonstrating that the approximate algorithm exhibits faster overall runtime with low error. Finally, we extend the setting and application of our methods to hidden Markov models, and illustrate the potential use of the proposed algorithms in practice with an application to computer-generated music.
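As an illustration of the entropy-regularized subproblem underlying the fast approximate algorithm, the sketch below runs plain Sinkhorn iterations between two distributions; the full OTC method wraps such a step inside a policy-iteration loop over transition couplings, which is not shown, and the marginals and cost matrix are hypothetical:

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iter=500):
    """Entropy-regularized optimal transport plan between mu and nu with cost C."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]      # transport plan

rng = np.random.default_rng(3)
mu = np.array([0.2, 0.5, 0.3])              # hypothetical marginals
nu = np.array([0.4, 0.1, 0.5])
C = rng.random((3, 3))                      # hypothetical cost matrix
P = sinkhorn(mu, nu, C)
print(P.sum(axis=1), P.sum(axis=0))         # approximately mu and nu
print("regularized transport cost:", (P * C).sum())
```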
Under some regularity assumptions, we report an a priori error analysis of a dG scheme for the Poisson and Stokes flow problems in their dual mixed formulation. Both formulations satisfy a Babu\v{s}ka-Brezzi type condition within the space $H(\mathrm{div}) \times L^2$. It is well known that the lowest-order Crouzeix-Raviart element paired with piecewise constants satisfies such a condition on (broken) $H^1 \times L^2$ spaces. In the present article, we use this pair. The continuity of the normal component is imposed weakly by penalizing jumps of the broken $H(\mathrm{div})$ component. For the resulting methods, we prove well-posedness and convergence with constants independent of data and mesh size. We report error estimates in the methods' natural norms and optimal local error estimates for the divergence error. In fact, our finite element solution shares one degree of freedom per triangle with the Crouzeix-Raviart interpolant, and the divergence is locally the best approximation for any regularity. Numerical experiments support the findings and suggest that the other errors converge optimally even for solutions of lowest regularity and for a crack problem, as long as the crack is resolved by the mesh.
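Schematically, and with generic notation not taken from the article, the weak imposition of normal continuity via jump penalization takes the familiar form

$$ s_h(\boldsymbol{\sigma}_h, \boldsymbol{\tau}_h) \;=\; \sum_{F \in \mathcal{F}_h^{\mathrm{int}}} \frac{\eta}{h_F} \int_F [\![ \boldsymbol{\sigma}_h \cdot \boldsymbol{n}_F ]\!]\, [\![ \boldsymbol{\tau}_h \cdot \boldsymbol{n}_F ]\!] \,\mathrm{d}s, $$

where $\eta > 0$ is a penalty parameter, $h_F$ the facet diameter, and $[\![\cdot]\!]$ the jump across an interior facet $F$; the precise scaling used in the article may differ.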
In order to avoid the curse of dimensionality frequently encountered in Big Data analysis, there has been extensive development of linear and nonlinear dimension reduction techniques in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lie on a lower dimensional manifold, so the high dimensionality problem can be overcome by learning the lower dimensional behavior. However, in real-life applications, data are often very noisy. In this work, we propose a method to approximate $\mathcal{M}$, a $d$-dimensional $C^{m+1}$ smooth submanifold of $\mathbb{R}^n$ ($d \ll n$), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower dimensional manifold and suggest a non-linear moving least-squares projection onto an approximating $d$-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance and $m$ is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension $n$. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high dimensional data in a computationally efficient manner. In this way, the preparatory step of dimension reduction, which induces distortions in the data, can be avoided altogether.
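A simplified, one-pass sketch of a moving least-squares projection is given below: weighted local PCA supplies a local coordinate frame, and a weighted polynomial fit of the normal components over the tangential coordinates gives the projected point. The method described above additionally iterates the choice of local coordinate system, and all numerical values here are illustrative:

```python
import numpy as np

def _exponents(d, k):
    """All multi-indices of length d summing to k."""
    if d == 1:
        return [np.array([k])]
    return [np.concatenate(([i], rest)) for i in range(k + 1)
            for rest in _exponents(d - 1, k - i)]

def mls_project(q, data, d, h, degree=2):
    """Project the point q onto a local degree-'degree' MLS manifold of dimension d."""
    w = np.exp(-np.sum((data - q)**2, axis=1) / h**2)        # locality weights
    mu = (w[:, None] * data).sum(0) / w.sum()                # weighted centroid
    Xc = data - mu
    _, _, Vt = np.linalg.svd(np.sqrt(w)[:, None] * Xc, full_matrices=False)
    T, Nrm = Vt[:d], Vt[d:]                                  # tangent / normal frames
    t = Xc @ T.T                                             # tangential coordinates
    g = Xc @ Nrm.T                                           # normal components
    cols = [np.ones(len(data))]                              # polynomial design matrix
    for k in range(1, degree + 1):
        cols += [np.prod(t**e, axis=1) for e in _exponents(d, k)]
    A = np.column_stack(cols)
    coef = np.linalg.lstsq(np.sqrt(w)[:, None] * A,
                           np.sqrt(w)[:, None] * g, rcond=None)[0]
    tq = (q - mu) @ T.T
    aq = [1.0]
    for k in range(1, degree + 1):
        aq += [np.prod(tq**e) for e in _exponents(d, k)]
    return mu + tq @ T + (np.array(aq) @ coef) @ Nrm

# Toy example: noisy samples from a circle (a d=1 manifold) in the plane
rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 400)
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((400, 2))
q = np.array([1.1, 0.1])
print(mls_project(q, pts, d=1, h=0.3))       # lands close to the unit circle
```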
We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the overall original problem, where one learns the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity compared with existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling, and prediction, among others. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, which is inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
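The additive construction described above can be mimicked in a few lines; here gamma completely random measures are approximated by finitely many atoms with independent gamma-distributed jumps, and the truncation level and concentration values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
M = 200                                     # truncation: number of atoms
c_common, c_group = 1.0, 1.0                # total masses of common / group-specific CRMs

atoms = rng.normal(0, 1, size=M)            # shared atom locations
mu0 = rng.gamma(c_common / M, 1.0, size=M)  # common gamma CRM (finite approximation)
mu1 = rng.gamma(c_group / M, 1.0, size=M)   # group-specific gamma CRMs
mu2 = rng.gamma(c_group / M, 1.0, size=M)

p1 = (mu0 + mu1) / (mu0 + mu1).sum()        # normalise to dependent probability measures
p2 = (mu0 + mu2) / (mu0 + mu2).sum()

x1 = rng.choice(atoms, size=100, p=p1)      # samples from the two groups
x2 = rng.choice(atoms, size=100, p=p2)
print("atoms sampled in both groups:",
      len(np.intersect1d(np.unique(x1), np.unique(x2))))
```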