We develop a prior probability model for temporal Poisson process intensities through structured mixtures of Erlang densities with a common scale parameter, mixing on the integer shape parameters. The mixture weights are constructed through increments of a cumulative intensity function, which is modeled nonparametrically with a gamma process prior. This model specification provides a novel extension of Erlang mixtures from density estimation to intensity estimation. The prior model structure supports general shapes for the point process intensity function, and it also enables effective handling of the Poisson process likelihood normalizing term, resulting in efficient posterior simulation. The Erlang mixture modeling approach is further elaborated to develop an inference method for spatial Poisson processes. The methodology is examined relative to existing Bayesian nonparametric modeling approaches, including empirical comparison with models based on Gaussian process priors, and is illustrated with synthetic and real data examples.
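As a concrete illustration of the mixture structure, the following minimal sketch (not the authors' implementation) evaluates an Erlang mixture intensity whose weights are increments of a cumulative intensity function over consecutive intervals of length theta; the function H below is a fixed placeholder standing in for a realization of the gamma process, and all names are hypothetical.

```python
import numpy as np
from scipy.stats import gamma

def erlang_mixture_intensity(t, J, theta, H):
    """Evaluate lambda(t) = sum_j w_j * Erlang(t; shape=j, scale=theta),
    with weights w_j = H(j*theta) - H((j-1)*theta), i.e. increments of
    a cumulative intensity H over consecutive intervals."""
    t = np.atleast_1d(t)
    grid = theta * np.arange(J + 1)               # interval endpoints
    w = np.diff(H(grid))                          # weights = increments of H
    shapes = np.arange(1, J + 1)                  # integer Erlang shapes
    dens = gamma.pdf(t[:, None], a=shapes[None, :], scale=theta)
    return dens @ w

# Example: H(t) = 2t yields a roughly constant intensity ~2 on (0, J*theta)
lam = erlang_mixture_intensity(np.linspace(0.1, 5, 50), J=20, theta=0.5,
                               H=lambda s: 2.0 * s)
```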
Bayesian nonparametric mixture models offer a rich framework for model-based clustering. We consider the situation where the kernel of the mixture is available only up to an intractable normalizing constant. In this case, most of the commonly used Markov chain Monte Carlo (MCMC) methods are not suitable. We propose an approximate Bayesian computation (ABC) strategy, whereby we approximate the posterior to avoid the intractability of the kernel. We derive an ABC-MCMC algorithm which combines (i) the use of the predictive distribution induced by the nonparametric prior as a proposal and (ii) the use of the Wasserstein distance and its connection to optimal matching problems. To overcome the sensitivity of the algorithm to its tuning parameters, we further propose an adaptive strategy. We illustrate the use of the proposed algorithm with several simulation studies and an application to real data, where we cluster a population of networks, comparing its performance with standard MCMC algorithms and validating the adaptive strategy.
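To make the two ingredients concrete, the sketch below (a caricature under stated assumptions, not the paper's algorithm) exploits the fact that in one dimension the optimal matching underlying the Wasserstein distance pairs sorted samples, and uses the classical prior-as-proposal form of ABC-MCMC, in which the Metropolis-Hastings acceptance reduces to a distance check; the paper instead proposes from the nonparametric predictive, which is abstracted here behind sample_prior.

```python
import numpy as np

def wasserstein_1d(x, y):
    # In 1D, optimal matching pairs sorted samples, so W1 is the
    # mean absolute difference of the order statistics.
    return np.mean(np.abs(np.sort(x) - np.sort(y)))

def abc_mcmc_step(theta, y_obs, sample_prior, simulate, eps, rng):
    """One ABC-MCMC move with the prior as independent proposal:
    propose, forward-simulate pseudo-data from the (possibly
    intractable) kernel, accept iff it is eps-close to the data."""
    theta_new = sample_prior(rng)            # stand-in for the predictive proposal
    y_fake = simulate(theta_new, rng)        # only simulation is required
    if wasserstein_1d(y_obs, y_fake) < eps:
        return theta_new
    return theta

# Toy usage: Gaussian location model with a N(0, 2^2) prior on the mean
rng = np.random.default_rng(0)
y_obs = rng.normal(1.0, 1.0, size=100)
theta, chain = 0.0, []
for _ in range(2000):
    theta = abc_mcmc_step(theta, y_obs,
                          sample_prior=lambda r: r.normal(0.0, 2.0),
                          simulate=lambda th, r: r.normal(th, 1.0, size=100),
                          eps=0.3, rng=rng)
    chain.append(theta)
```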
Neural networks (NNs) are making a large impact on both research and industry. Nevertheless, as NNs' accuracy increases, so do their size, their required number of compute operations, and their energy consumption. This growth in resource consumption lowers NNs' adoption rate and makes their real-world deployment impractical. Therefore, NNs need to be compressed to make them available to a wider audience and, at the same time, to decrease their runtime costs. In this work, we approach this challenge from a causal inference perspective, and we propose a scoring mechanism to facilitate structured pruning of NNs. The approach is based on measuring mutual information under a maximum entropy perturbation, sequentially propagated through the NN. We demonstrate the method's performance on two datasets and various NN sizes, and we show that our approach achieves competitive performance under challenging conditions.
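One plausible, heavily simplified reading of the scoring idea is sketched below: inject a Gaussian perturbation (maximum entropy for a fixed variance), propagate it through the layer, and score each output unit by an estimated mutual information with a summary of the perturbation, so that uninformative units become pruning candidates. The histogram MI estimator and the scalar summary are our own choices for illustration, not the paper's.

```python
import numpy as np

def mutual_info_hist(x, y, bins=16):
    """Plug-in mutual information estimate from a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def score_units(layer_fn, n_inputs, in_dim, rng):
    """Score each output unit of layer_fn by the mutual information
    between a max-entropy (Gaussian) input perturbation summary and
    the unit's response; low-scoring units are pruning candidates."""
    z = rng.normal(size=(n_inputs, in_dim))      # max-entropy perturbation
    a = layer_fn(z)                              # propagate through the layer
    summary = np.linalg.norm(z, axis=1)          # scalar perturbation summary
    return np.array([mutual_info_hist(summary, a[:, j])
                     for j in range(a.shape[1])])

# Toy usage: random linear layer with ReLU; unit 0 is dead (zero weights)
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4)); W[:, 0] = 0.0
scores = score_units(lambda z: np.maximum(z @ W, 0.0), 5000, 8, rng)
# scores[0] is ~0, flagging the dead unit for structured pruning
```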
Survival outcomes are common in comparative effectiveness studies and require unique handling because they are usually incompletely observed due to right-censoring. A ``once for all'' approach for causal inference with survival outcomes constructs pseudo-observations and allows standard methods such as propensity score weighting to proceed as if the outcomes are completely observed. For a general class of model-free causal estimands with survival outcomes on user-specified target populations, we develop corresponding propensity score weighting estimators based on the pseudo-observations and establish their asymptotic properties. In particular, utilizing the functional delta-method and the von Mises expansion, we derive a new closed-form variance of the weighting estimator that takes into account the uncertainty due to both pseudo-observation calculation and propensity score estimation. This allows valid and computationally efficient inference without resampling. We also prove the optimal efficiency property of the overlap weights within the class of balancing weights for survival outcomes. The proposed methods are applicable to both binary and multiple treatments. Extensive simulations are conducted to explore the operating characteristics of the proposed method versus other commonly used alternatives. We apply the proposed method to compare the causal effects of three popular treatment approaches for prostate cancer patients.
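To make the ``once for all'' construction concrete, here is a minimal sketch (not the authors' code) of jackknife pseudo-observations for the survival probability at a user-chosen horizon t0, based on leave-one-out Kaplan-Meier estimates, followed by a simple inverse-probability-weighted contrast; the propensity scores e are assumed to have been fitted elsewhere, and the placeholder below is illustrative only.

```python
import numpy as np

def km_surv(time, event, t0):
    """Kaplan-Meier estimate of S(t0)."""
    s = 1.0
    for t in np.unique(time[event == 1]):        # distinct event times
        if t > t0:
            break
        n_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1.0 - d / n_risk
    return s

def pseudo_obs(time, event, t0):
    """Jackknife pseudo-observations for S(t0):
    P_i = n*S_hat(t0) - (n-1)*S_hat_{-i}(t0)."""
    n = len(time)
    s_full = km_surv(time, event, t0)
    keep = np.ones(n, dtype=bool)
    po = np.empty(n)
    for i in range(n):
        keep[i] = False
        po[i] = n * s_full - (n - 1) * km_surv(time[keep], event[keep], t0)
        keep[i] = True
    return po

def ipw_ate(po, z, e):
    """Hajek-type weighted contrast of mean pseudo-observations."""
    return (np.sum(z * po / e) / np.sum(z / e)
            - np.sum((1 - z) * po / (1 - e)) / np.sum((1 - z) / (1 - e)))

# Toy usage with a placeholder propensity model
rng = np.random.default_rng(0)
n = 200
z = rng.integers(0, 2, n)                        # binary treatment
t_true = rng.exponential(1.0 + 0.5 * z)          # longer survival if treated
c = rng.exponential(2.0, n)                      # censoring times
time, event = np.minimum(t_true, c), (t_true <= c).astype(int)
po = pseudo_obs(time, event, t0=1.0)
e = np.full(n, z.mean())                         # placeholder propensity scores
ate = ipw_ate(po, z, e)                          # contrast in S(1.0)
```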
Optimization of thin-walled structures such as an aircraft wing, an aircraft fuselage, or a submarine hull often involves dividing the shell surface into numerous localized panels, each characterized by its own set of design variables. The process of extracting information about a localized panel (nodal coordinates, mesh connectivity) from a finite element model input file is usually a problem-specific task. In this work, a generalized process to extract localized panels from a two-dimensional (2D) mesh is discussed. The process employs set operations on elemental connectivity information and is independent of nodal coordinates. It is therefore capable of extracting a panel of any shape given its boundary, and can be used in the optimization of a wide range of structures. A method to create stiffeners on the resulting local panels is also presented, and the effect of stiffener element size on buckling is studied. The local panel extraction process is demonstrated by integrating it into a distributed MDO framework for optimization of an aircraft wing having curvilinear spars and ribs (SpaRibs). A range of examples is included wherein the process is used to create panels on the wing skin, bounded by adjacent SpaRibs.
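A minimal sketch of such a coordinate-free extraction (our illustration with hypothetical data structures, not the paper's code): grow the panel from a seed element across shared edges, using only set operations on connectivity, and stop at user-supplied boundary edges.

```python
from collections import defaultdict

def extract_panel(elements, boundary_edges, seed):
    """Grow a panel from a seed element across shared edges, never
    crossing an edge in boundary_edges. Pure set operations on
    connectivity; nodal coordinates are never used.

    elements: dict elem_id -> tuple of node ids (ordered around the element)
    boundary_edges: set of frozenset({n1, n2}) edges bounding the panel
    seed: elem_id inside the desired panel
    """
    # Map each edge to the set of elements sharing it
    edge2elems = defaultdict(set)
    for eid, nodes in elements.items():
        for a, b in zip(nodes, nodes[1:] + nodes[:1]):
            edge2elems[frozenset((a, b))].add(eid)

    panel, frontier = {seed}, [seed]
    while frontier:
        eid = frontier.pop()
        nodes = elements[eid]
        for a, b in zip(nodes, nodes[1:] + nodes[:1]):
            edge = frozenset((a, b))
            if edge in boundary_edges:
                continue                      # do not cross the panel boundary
            for nbr in edge2elems[edge] - panel:
                panel.add(nbr)
                frontier.append(nbr)
    return panel

# Toy mesh: two quads sharing edge (2,5); that edge is the panel boundary
elements = {1: (1, 2, 5, 4), 2: (2, 3, 6, 5)}
assert extract_panel(elements, {frozenset((2, 5))}, seed=1) == {1}
```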
In this work, we develop a constructive modeling framework for extreme threshold exceedances in repeated observations of spatial fields, based on general product mixtures of random fields possessing light- or heavy-tailed margins and various spatial dependence characteristics, which are suitably designed to provide high flexibility in the tail and at sub-asymptotic levels. Our proposed model is akin to a recently proposed Gamma-Gamma model using a ratio of processes with Gamma marginal distributions, but it possesses a higher degree of flexibility in its joint tail structure, capturing strong dependence more easily. We focus on constructions with the following three product factors, whose different roles ensure their statistical identifiability: a heavy-tailed spatially-dependent field, a lighter-tailed spatially-constant field, and another lighter-tailed spatially-independent field. Thanks to the model's hierarchical formulation, inference may be conveniently performed based on Markov chain Monte Carlo methods. We leverage the Metropolis-adjusted Langevin algorithm (MALA) with random block proposals for latent variables, as well as the stochastic gradient Langevin dynamics (SGLD) algorithm for hyperparameters, in order to fit our proposed model very efficiently in relatively high spatio-temporal dimensions, while simultaneously censoring observations that do not exceed the threshold and performing spatial prediction at multiple sites. The censoring mechanism is applied to the spatially independent component, so that only univariate cumulative distribution functions have to be evaluated. We explore the theoretical properties of the new model and illustrate the proposed methodology by simulation and application to daily precipitation data from North-Eastern Spain measured at about 100 stations over the period 2011–2020.
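For concreteness, a minimal sketch of a single MALA update (the paper uses random block proposals for the latent variables; this sketch updates the whole vector at once, and all names are hypothetical):

```python
import numpy as np

def mala_step(x, logpi, grad_logpi, eps, rng):
    """One Metropolis-adjusted Langevin step targeting pi(x)."""
    mean_fwd = x + 0.5 * eps**2 * grad_logpi(x)       # Langevin drift
    prop = mean_fwd + eps * rng.normal(size=x.shape)  # Gaussian proposal
    mean_bwd = prop + 0.5 * eps**2 * grad_logpi(prop)
    # log q(x | prop) - log q(prop | x) for the asymmetric proposal
    log_q = (np.sum((prop - mean_fwd)**2)
             - np.sum((x - mean_bwd)**2)) / (2 * eps**2)
    if np.log(rng.uniform()) < logpi(prop) - logpi(x) + log_q:
        return prop
    return x

# Toy usage: standard Gaussian target in 3 dimensions
rng = np.random.default_rng(1)
x = np.zeros(3)
for _ in range(1000):
    x = mala_step(x, lambda v: -0.5 * v @ v, lambda v: -v, eps=0.9, rng=rng)
```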
There has been growing interest in forecasting mortality. In this article, we propose a novel dynamic Bayesian approach for modeling and forecasting the age-at-death distribution, focusing on a three-component mixture of a Dirac mass, a Gaussian distribution, and a Skew-Normal distribution. Under the specified model, the age-at-death distribution is characterized via seven parameters corresponding to the main aspects of infant, adult, and old-age mortality. The proposed approach focuses on coherent modeling of multiple countries; following a Bayesian approach to inference, we borrow information across populations and shrink parameters towards a common mean level, implicitly penalizing diverging scenarios. Dynamic modeling across years is induced through a hierarchical dynamic prior distribution that characterizes the temporal evolution of each mortality component and allows forecasting of the age-at-death distribution. Empirical results on multiple countries indicate that the proposed approach outperforms popular methods for forecasting mortality, providing interpretable insights on the evolution of mortality.
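A minimal sketch of the resulting density on positive ages, under one plausible parameterization (the exact parameterization, the toy parameter values, and the treatment of the Dirac mass are ours, not necessarily the paper's):

```python
import numpy as np
from scipy.stats import norm, skewnorm

def age_at_death_density(x, w, mu_a, sig_a, xi, omega, alpha):
    """Mixture density on ages x > 0. The Dirac mass at age 0, with
    weight w[0], captures infant mortality and does not appear on
    x > 0; w[1]*N(mu_a, sig_a) models adult mortality and
    w[2]*SN(xi, omega, alpha) models old-age mortality."""
    return (w[1] * norm.pdf(x, mu_a, sig_a)
            + w[2] * skewnorm.pdf(x, alpha, loc=xi, scale=omega))

# Toy parameters: 1% infant deaths, adult hump near 45, old-age peak near 85
x = np.linspace(1, 110, 200)
f = age_at_death_density(x, w=(0.01, 0.14, 0.85), mu_a=45.0, sig_a=12.0,
                         xi=90.0, omega=10.0, alpha=-3.0)
```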
This paper proposes an algorithm to generate random numbers from any member of the truncated multivariate elliptical family of distributions with a strictly decreasing density generating function. Based on Neal (2003) and Ho et al. (2012), we construct an efficient sampling method by means of a slice sampling algorithm with Gibbs sampler steps. We also provide a faster approach to approximating the first and second moments of truncated multivariate elliptical distributions, where Monte Carlo integration is used for the truncated partition and explicit expressions are used for the non-truncated part (Galarza et al., 2020). Examples and an application to environmental spatial data illustrate its usefulness. The methods are freely available in the new R library elliptical.
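A minimal sketch of the slice-within-Gibbs idea for the Gaussian member of the elliptical family (generator exp(-t/2)) with box truncation; this is our illustration, not the code of the R library. Given the slice variable, the region where the quadratic form stays below the slice level is an interval in each coordinate, which is intersected with the truncation bounds and sampled uniformly.

```python
import numpy as np

def slice_gibbs_tmvn(x, mu, Sigma_inv, lower, upper, n_iter, rng):
    """Slice sampler with Gibbs steps for a multivariate normal
    truncated to the box [lower, upper]. x must start inside the box."""
    p = len(mu)
    out = np.empty((n_iter, p))
    for it in range(n_iter):
        r = x - mu
        # Slice: u ~ U(0, exp(-q(x)/2))  <=>  kappa = q(x) - 2*log(U)
        kappa = r @ Sigma_inv @ r - 2.0 * np.log(rng.uniform())
        for j in range(p):
            # q(x) is quadratic in x_j: A*z^2 + B*z + C <= kappa
            rj = x - mu
            rj[j] = 0.0
            A = Sigma_inv[j, j]
            B = 2.0 * Sigma_inv[j] @ rj
            C = rj @ Sigma_inv @ rj
            disc = np.sqrt(B * B - 4.0 * A * (C - kappa))
            lo = max((-B - disc) / (2.0 * A) + mu[j], lower[j])
            hi = min((-B + disc) / (2.0 * A) + mu[j], upper[j])
            x[j] = rng.uniform(lo, hi)    # uniform on slice ∩ truncation
        out[it] = x
    return out

# Toy usage: bivariate normal truncated to the positive quadrant
rng = np.random.default_rng(0)
Sigma_inv = np.linalg.inv(np.array([[1.0, 0.5], [0.5, 1.0]]))
draws = slice_gibbs_tmvn(np.array([0.5, 0.5]), np.zeros(2), Sigma_inv,
                         np.zeros(2), np.full(2, np.inf), 2000, rng)
```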
Empirical likelihood enables a nonparametric, likelihood-driven style of inference without the restrictive assumptions routinely made in parametric models. We develop a framework for applying empirical likelihood to the analysis of experimental designs, addressing issues that arise from blocking and multiple hypothesis testing. In addition to popular designs such as balanced incomplete block designs, our approach allows for highly unbalanced, incomplete block designs. For all these designs, we derive an asymptotic multivariate chi-square distribution for a set of empirical likelihood test statistics. Further, we propose two single-step multiple testing procedures: asymptotic Monte Carlo and nonparametric bootstrap. Both procedures asymptotically control the generalized family-wise error rate and efficiently construct simultaneous confidence intervals for comparisons of interest without explicitly considering the underlying covariance structure. A simulation study demonstrates that the performance of the procedures is robust to violations of standard assumptions of linear mixed models. Notably, despite the asymptotic nature of empirical likelihood, the nonparametric bootstrap procedure performs well even for small sample sizes. We also present an application to experiments on a pesticide. Supplementary materials for this article are available online.
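The basic building block is Owen's empirical likelihood ratio; below is a minimal sketch for a single univariate mean, without the blocking structure or the multiplicity adjustments, using unsafeguarded Newton steps on the dual problem (assumption: mu0 lies inside the convex hull of the data).

```python
import numpy as np

def el_log_ratio(x, mu0, n_newton=50):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0.
    Maximizes prod(n*w_i) s.t. w_i >= 0, sum w_i = 1,
    sum w_i (x_i - mu0) = 0; the dual gives w_i ∝ 1/(1 + lam*(x_i - mu0))."""
    z = x - mu0
    lam = 0.0
    for _ in range(n_newton):                  # Newton steps on the dual
        d = 1.0 + lam * z
        grad = np.sum(z / d)
        hess = -np.sum(z**2 / d**2)
        lam -= grad / hess
    return 2.0 * np.sum(np.log1p(lam * z))     # -> chi^2_1 under H0

# Toy check: for data centered at mu0 the statistic is small
rng = np.random.default_rng(0)
x = rng.normal(0.3, 1.0, size=200)
t = el_log_ratio(x, mu0=0.3)                   # approx chi^2_1 draw
```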
Stationary points embedded in the derivatives are often critical for a model to be interpretable and may be considered key features of interest in many applications. We propose a semiparametric Bayesian model to efficiently infer the locations of stationary points of a nonparametric function, while treating the function itself as a nuisance parameter. We use Gaussian processes as a flexible prior for the underlying function and impose derivative constraints to control the function's shape via conditioning. We develop an inferential strategy that intentionally restricts estimation to the case of at least one stationary point, bypassing possible mis-specification of the number of stationary points and avoiding the varying-dimension problem that would otherwise add computational complexity. We illustrate the proposed methods using simulations and then apply them to the estimation of event-related potentials (ERPs) derived from electroencephalography (EEG) signals. We show how the proposed method automatically identifies characteristic components and their latencies at the individual level, avoiding the excessive averaging across subjects that is routinely done in the field to obtain smooth curves. By applying this approach to EEG data collected from younger and older adults during a speech perception task, we demonstrate how the time course of speech perception processes changes with age.
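A minimal sketch of the derivative-constraint mechanism for a zero-mean GP with a squared-exponential kernel (our illustration, with hypothetical names): since f'(t*) is a linear functional of the GP, conditioning on f'(t*) = 0 is available in closed form, and every draw from the conditioned process has a stationary point at t*.

```python
import numpy as np

def rbf(x, y, ell=1.0):                        # k(x, y)
    return np.exp(-0.5 * (x[:, None] - y[None, :])**2 / ell**2)

def rbf_dy(x, y, ell=1.0):                     # d k(x, y) / d y
    d = x[:, None] - y[None, :]
    return (d / ell**2) * np.exp(-0.5 * d**2 / ell**2)

def posterior_given_zero_slope(xs, t_star, ell=1.0):
    """Mean/cov of a zero-mean GP on grid xs conditioned on the
    derivative constraint f'(t_star) = 0 (a single linear functional
    of the GP, so conditioning is closed form)."""
    t = np.atleast_1d(t_star)
    var_fp = 1.0 / ell**2                      # Var f'(t*) for the RBF kernel
    cross = rbf_dy(xs, t, ell)                 # Cov(f(xs), f'(t_star))
    K = rbf(xs, xs, ell)
    mean = np.zeros(len(xs))                   # constraint value is 0
    cov = K - cross @ cross.T / var_fp
    return mean, cov

xs = np.linspace(-2, 2, 60)
mean, cov = posterior_given_zero_slope(xs, t_star=0.0)
# Draws from this GP all have a stationary point at t_star = 0
f = np.random.default_rng(0).multivariate_normal(mean, cov + 1e-8 * np.eye(60))
```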
Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling, and prediction, among other tasks. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, which is inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes, and we develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
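A crude truncation-level sketch of the additive construction (our illustration, not the paper's sampler; for simplicity all measures share one set of atom locations, and gamma completely random measures are approximated by finitely many jumps):

```python
import numpy as np

def latent_nested_probs(n_groups, n_atoms, a_common, a_group, rng):
    """Truncation-level sketch of a latent nested process: gamma
    CRMs are approximated by n_atoms jumps; each group's random
    probability is the normalisation of a shared CRM mu0 plus its
    own CRM nu_d, inducing dependence across groups through mu0."""
    atoms = rng.uniform(size=n_atoms)                 # common atom locations
    mu0 = rng.gamma(a_common / n_atoms, 1.0, size=n_atoms)
    probs = []
    for _ in range(n_groups):
        nu_d = rng.gamma(a_group / n_atoms, 1.0, size=n_atoms)
        w = mu0 + nu_d                                # add common + specific
        probs.append(w / w.sum())                     # normalise
    return atoms, probs

# Two groups: a larger a_common pushes the two probability vectors
# (hence the induced clusterings) closer to full exchangeability.
atoms, (p1, p2) = latent_nested_probs(2, 500, a_common=5.0, a_group=1.0,
                                      rng=np.random.default_rng(0))
```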