A widely used model for determining the long-term health impacts of public health interventions, often called a "multistate lifetable", requires estimates of incidence, case fatality, and sometimes also remission rates, for multiple diseases by age and gender. Direct data on both incidence and case fatality are generally not available for every disease and setting; for example, we may know population mortality and prevalence rather than case fatality and incidence. This paper presents Bayesian continuous-time multistate models for estimating transition rates between disease states based on incomplete data. The approach builds on previous methods by using a formal statistical model with transparent data-generating assumptions, while providing accessible software as an R package. Rates for people of different ages and areas can be related flexibly through splines or hierarchical models. Previous methods are also extended to allow age-specific trends over calendar time. The model is used to estimate case fatality for multiple diseases in the city regions of England, based on incidence, prevalence and mortality data from the Global Burden of Disease study. The estimates can be used to inform health impact models relating to those diseases and areas. Different assumptions about rates are compared, and the influence of the different data sources is assessed.
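To make the quantities concrete, a standard three-state illness-death formulation (notation ours; the paper's exact parameterisation may differ) relates the age-specific transition intensities to the transition probabilities that generate the observed prevalence and mortality:

\[
Q(a) =
\begin{pmatrix}
-\{i(a)+\mu(a)\} & i(a) & \mu(a) \\
r(a) & -\{r(a)+\mu(a)+f(a)\} & \mu(a)+f(a) \\
0 & 0 & 0
\end{pmatrix},
\qquad
P(a,\,a+t) = \exp\!\big(t\,Q(a)\big),
\]

where the states are (healthy, diseased, dead), \(i(a)\) is incidence, \(r(a)\) remission, \(f(a)\) case fatality and \(\mu(a)\) other-cause mortality, with \(Q\) held constant within each age band.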
It is common practice to use Laplace approximations to compute marginal likelihoods in Bayesian versions of generalised linear models (GLMs). Marginal likelihoods combined with model priors are then used in different search algorithms to compute the posterior marginal probabilities of models and of individual covariates. This enables Bayesian model selection and model averaging. For large sample sizes, even the Laplace approximation becomes computationally challenging, because the optimisation routine involved must evaluate the likelihood on the full dataset at every iteration. As a consequence, the algorithm is not scalable to large datasets. To address this problem, we suggest using a version of a popular batch stochastic gradient descent (BSGD) algorithm for estimating the marginal likelihood of a GLM by subsampling from the data. We further combine the algorithm with Markov chain Monte Carlo (MCMC) based methods for Bayesian model selection and provide some theoretical results on the convergence of the estimates. Finally, we report results from experiments illustrating the performance of the proposed algorithm.
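As a rough illustration of the two ingredients involved, the sketch below (assumptions ours: a logistic-regression likelihood, an isotropic Gaussian prior, and a plain minibatch optimiser standing in for the paper's BSGD estimator and its convergence results) finds the posterior mode by subsampled gradient steps and then applies the usual Laplace formula for the log marginal likelihood.

```python
import numpy as np

def log_posterior_parts(beta, X, y, tau2=10.0):
    # Bernoulli-logit log-likelihood and an isotropic N(0, tau2 * I) log-prior
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
    logprior = -0.5 * beta @ beta / tau2 - 0.5 * len(beta) * np.log(2 * np.pi * tau2)
    return loglik, logprior

def map_by_minibatch_sgd(X, y, batch=256, steps=2000, lr=0.5, tau2=10.0, seed=0):
    # Posterior mode found from subsamples only, so no full-data pass per iteration
    rng = np.random.default_rng(seed)
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(steps):
        idx = rng.choice(n, size=batch, replace=False)
        p = 1.0 / (1.0 + np.exp(-(X[idx] @ beta)))
        grad = (n / batch) * X[idx].T @ (y[idx] - p) - beta / tau2  # unbiased estimate of the full-data gradient
        beta += (lr / n) * grad
    return beta

def laplace_log_marginal(X, y, beta_hat, tau2=10.0):
    # log p(y) ~= log p(y|b) + log p(b) + d/2 log(2*pi) - 1/2 log|H|, H = negative Hessian at the mode
    d = X.shape[1]
    p = 1.0 / (1.0 + np.exp(-(X @ beta_hat)))
    H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(d) / tau2
    loglik, logprior = log_posterior_parts(beta_hat, X, y, tau2)
    _, logdet = np.linalg.slogdet(H)
    return loglik + logprior + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet
```

Note that the final Laplace step above still touches the full data; subsampling that part as well is precisely the point of the paper's estimator and is not reproduced here.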
Statistical analysis based on quantile regression methods is more comprehensive, flexible, and less sensitive to outliers than analysis based on mean regression methods. When the link between different diseases is of interest, joint disease mapping is useful for measuring directional correlation between them. Most studies investigate this link through multiple correlated mean regressions. In this paper we propose a joint quantile regression framework for multiple diseases in which different quantile levels can be considered. We are motivated by the theorized link between the presence of malaria and G6PD deficiency, where medical scientists have anecdotally observed a possible link between high levels of G6PD deficiency and lower-than-expected levels of malaria, initially suggesting that the occurrence of G6PD deficiency inhibits the occurrence of malaria. This link cannot be investigated with mean regressions, hence the need for flexible joint quantile regression in a disease mapping framework. Our joint quantile disease mapping model accommodates linear and non-linear effects of covariates through stochastic splines, since we define it as a latent Gaussian model. We perform Bayesian inference for this model using the INLA framework, embedded in the R package INLA. Finally, we illustrate the applicability of the model by analyzing malaria and G6PD deficiency incidences in 21 African countries using linked quantiles of different levels.
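For reference, the check-loss formulation that underlies quantile regression (standard notation, not taken from the paper; in the latent Gaussian/INLA setting the quantile is targeted through the likelihood rather than by direct minimisation) is

\[
\rho_\tau(u) = u\,\{\tau - \mathbb{1}(u < 0)\},
\qquad
\hat\beta(\tau) = \arg\min_{\beta} \sum_i \rho_\tau\!\big(y_i - x_i^{\top}\beta\big),
\]

so that different choices of the level \(\tau\) target different quantiles of each disease outcome, which is what allows an upper tail of G6PD deficiency to be linked to a lower tail of malaria.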
The network influence model is a model for binary outcome variables that accounts for dependencies between outcomes for units that are relationally tied. The basic influence model was previously extended to afford a suite of new dependence assumptions, and because of its relation to traditional Markov random field models it is often referred to as the auto-logistic actor attribute model (ALAAM). We extend current approaches for fitting ALAAMs by presenting a comprehensive Bayesian inference scheme that supports testing of dependencies across subsets of data and accommodates missing data. We illustrate different aspects of the procedures through three empirical examples: masculinity attitudes in an all-male Australian school class, educational progression in Swedish schools, and unemployment among adults in a community sample in Australia.
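For orientation, an ALAAM has the usual exponential-family form of a Markov random field (notation ours; the specific statistics depend on the chosen dependence assumptions):

\[
P_\theta(Y = y \mid x) = \frac{\exp\{\theta^{\top} z(y, x)\}}{\sum_{y'} \exp\{\theta^{\top} z(y', x)\}},
\]

where \(y\) is the vector of binary outcomes, \(x\) collects the network ties and covariates, and the statistics \(z(y, x)\) (for example, the number of network neighbours who share the attribute) encode how an individual's outcome may depend on the outcomes of those they are tied to.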
Approximate Bayesian computation (ABC) is a popular likelihood-free inference method for models with intractable likelihood functions. As ABC methods usually rely on comparing summary statistics of observed and simulated data, the choice of the statistics is crucial. This choice involves a trade-off between loss of information and dimensionality reduction, and is often determined based on domain knowledge. However, handcrafting and selecting suitable statistics is a laborious task involving multiple trial-and-error steps. In this work, we introduce an active learning method for ABC statistics selection which reduces the domain expert's work considerably. By involving the experts, we are able to handle misspecified models, unlike existing dimension reduction methods. Moreover, empirical results show better posterior estimates than with existing methods when the simulation budget is limited.
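To fix ideas about the role the statistics play, here is a minimal ABC rejection sampler in which `simulate`, `summarize`, and `prior_sample` are user-supplied placeholders (this is the generic ABC loop, not the paper's active-learning selection procedure):

```python
import numpy as np

def abc_rejection(y_obs, simulate, summarize, prior_sample,
                  n_draws=10_000, keep_frac=0.01):
    """Keep the prior draws whose simulated summary statistics are closest to
    the observed ones; the quality of the posterior approximation hinges
    entirely on what `summarize` retains and discards."""
    s_obs = np.asarray(summarize(y_obs))
    thetas, dists = [], []
    for _ in range(n_draws):
        theta = prior_sample()
        s_sim = np.asarray(summarize(simulate(theta)))
        thetas.append(theta)
        dists.append(np.linalg.norm(s_sim - s_obs))
    eps = np.quantile(dists, keep_frac)  # adaptive tolerance
    return [t for t, d in zip(thetas, dists) if d <= eps]
```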
For stochastic models with intractable likelihood functions, approximate Bayesian computation offers a way of approximating the true posterior through repeated comparisons of observations with simulated model outputs in terms of a small set of summary statistics. These statistics need to retain the information that is relevant for constraining the parameters but cancel out the noise; they can thus be seen as analogues of thermodynamic state variables for general stochastic models. For many scientific applications, we need strictly more summary statistics than model parameters to reach a satisfactory approximation of the posterior. Therefore, we propose to use the inner (bottleneck) dimension of deep neural network-based autoencoders as summary statistics. To create an incentive for the encoder to encode all the parameter-related information but not the noise, we give the decoder access to explicit or implicit information on the noise that has been used to generate the training data. We validate the approach empirically on two types of stochastic models.
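A minimal PyTorch sketch of the idea (layer sizes and names are ours, purely illustrative): the encoder's bottleneck output doubles as the ABC summary statistics, and the decoder also receives the simulation noise, so the bottleneck is only pressed to carry parameter-related information.

```python
import torch
import torch.nn as nn

class NoiseAwareAutoencoder(nn.Module):
    def __init__(self, data_dim, noise_dim, bottleneck_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, 64), nn.ReLU(),
            nn.Linear(64, bottleneck_dim),
        )
        # The decoder reconstructs the data from the bottleneck *and* the noise
        # used to simulate it, so the noise need not pass through the bottleneck.
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim + noise_dim, 64), nn.ReLU(),
            nn.Linear(64, data_dim),
        )

    def forward(self, x, noise):
        s = self.encoder(x)                              # candidate summary statistics
        x_hat = self.decoder(torch.cat([s, noise], dim=-1))
        return x_hat, s

# After training with a reconstruction loss on simulated (data, noise) pairs,
# only the encoder is reused to summarise observed and simulated data in ABC.
```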
This paper tackles the problem of missing data imputation for noisy and non-Gaussian data. A classical imputation method, the Expectation-Maximization (EM) algorithm for Gaussian mixture models, has shown interesting properties when compared to other popular approaches such as those based on k-nearest neighbors or on multiple imputation by chained equations. However, Gaussian mixture models are known not to be robust to heterogeneous data, which can lead to poor estimation performance when the data are contaminated by outliers or come from a non-Gaussian distribution. To overcome this issue, a new expectation-maximization algorithm is investigated for mixtures of elliptical distributions, with the nice property of handling potential missing data. The complete-data likelihood associated with mixtures of elliptical distributions is well adapted to the EM framework thanks to its conditional distribution, which is shown to be a Student-t distribution. Experimental results on synthetic data demonstrate that the proposed algorithm is robust to outliers and can be used with non-Gaussian data. Furthermore, experiments conducted on real-world datasets show that this algorithm is very competitive when compared to other classical imputation methods.
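To show where the conditional distributions enter, here is a deliberately stripped-down sketch (single multivariate Gaussian component, no mixture weights, and without the paper's elliptical/Student-t machinery or the conditional-covariance correction of a full EM M-step): missing entries are repeatedly re-imputed by their conditional mean given the observed entries under the current parameter estimates.

```python
import numpy as np

def iterative_gaussian_impute(X, n_iter=50, ridge=1e-6):
    """Impute NaN entries of X under a single multivariate Gaussian by
    alternating (i) parameter estimates from the completed data and
    (ii) conditional-mean imputation of the missing blocks."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])  # start from column means
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        sigma = np.cov(X, rowvar=False) + ridge * np.eye(X.shape[1])
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            # E[x_mis | x_obs] = mu_m + S_mo S_oo^{-1} (x_obs - mu_o)
            coef = sigma[np.ix_(m, o)] @ np.linalg.inv(sigma[np.ix_(o, o)])
            X[i, m] = mu[m] + coef @ (X[i, o] - mu[o])
    return X
```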
Researchers are often faced with evaluating the effect of a policy or program that was simultaneously initiated across an entire population of units at a single point in time, and whose effects over the targeted population can manifest at any time period afterwards. In the presence of data measured over time, Bayesian time series models have been used to impute what would have happened after the policy was initiated, had the policy not taken place, in order to estimate causal effects. However, the considerations regarding the definition of the target estimands, the underlying assumptions, the plausibility of such assumptions, and the choice of an appropriate model have not been thoroughly investigated. In this paper, we establish useful estimands for the evaluation of large-scale policies. We argue that imputation of missing potential outcomes relies on an assumption which, even though untestable, can be partially evaluated using observed data. We illustrate an approach to evaluate this key causal assumption and facilitate model elicitation based on data from the time interval before policy initiation and using classic statistical techniques. As an illustration, we study the Hospital Readmissions Reduction Program (HRRP), a US federal intervention aiming to improve health outcomes for patients with pneumonia, acute myocardial infarction, or congestive heart failure admitted to a hospital. We evaluate the effect of the HRRP on population mortality across the US and in four geographic subregions, and at different time windows. We find that the HRRP increased mortality from the three targeted conditions across most scenarios considered, and is likely to have had a detrimental effect on public health.
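The counterfactual-imputation idea can be caricatured with a simple non-Bayesian stand-in (ours, for illustration only; the paper uses Bayesian time series models and a careful treatment of estimands and assumptions): fit an outcome model on the pre-policy window, project it forward as the untreated potential outcome, and contrast it with what was observed.

```python
import numpy as np

def pointwise_effects(y, t0, n_seasons=12):
    """y: observed outcome series; t0: index of policy initiation.
    Fits an intercept + linear trend + seasonal-dummy model by OLS on y[:t0],
    imputes the no-policy outcomes for t >= t0, and returns observed-minus-imputed effects."""
    t = np.arange(len(y))
    season = np.eye(n_seasons)[t % n_seasons][:, 1:]   # drop one dummy to avoid collinearity
    X = np.column_stack([np.ones_like(t, dtype=float), t, season])
    beta, *_ = np.linalg.lstsq(X[:t0], y[:t0], rcond=None)
    y0_hat = X @ beta            # imputed outcomes had the policy not taken place
    return y[t0:] - y0_hat[t0:]
```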
Epidemiological models play a crucial role in informing public health officials during a public health emergency, such as the COVID-19 pandemic. However, traditional epidemiological models fail to capture the time-varying effects of mitigation strategies and do not account for under-reporting of active cases, thus introducing bias into the estimation of model parameters. To overcome these modelling challenges, we extend the SIR and SEIR epidemiological models with two time-varying parameters that capture the transmission rate and the rate at which active cases are reported to health officials. Using two real datasets of COVID-19 cases, we perform Bayesian inference via our SIR and SEIR models with time-varying transmission and reporting rates and via their standard counterparts with constant rates. Our approach provides parameter estimates with a more realistic interpretation, and one-week-ahead predictions with reduced uncertainty.
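A deterministic, discrete-time sketch of the SIR variant (our simplification; the paper performs Bayesian inference over these models and also treats an SEIR version) shows how the two time-varying parameters enter: beta_t scales new infections and rho_t thins them into reported cases.

```python
import numpy as np

def sir_reported_cases(beta_t, rho_t, gamma, N, I0, days):
    """beta_t: time-varying transmission rate; rho_t: time-varying reporting rate;
    gamma: recovery rate. Returns the expected number of *reported* daily cases."""
    S, I, R = N - I0, float(I0), 0.0
    reported = []
    for t in range(days):
        new_inf = beta_t[t] * S * I / N
        S, I, R = S - new_inf, I + new_inf - gamma * I, R + gamma * I
        reported.append(rho_t[t] * new_inf)   # only a fraction of new infections is observed
    return np.array(reported)
```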
Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may yield attractive results with infinite samples and certain model assumptions, they are usually less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the best-scoring DAG. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast to typical RL applications where the goal is to learn a policy, we use RL as a search strategy, and our final output is the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows a flexible score function under the acyclicity constraint.
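The acyclicity part of the reward can be made concrete with the standard smooth characterisation h(A) = tr(e^{A∘A}) − d, which is zero exactly when the adjacency matrix A encodes a DAG; the sketch below (penalty weights are placeholders, and the score would come from the chosen score function, e.g. BIC) combines it with an indicator penalty, as the abstract describes.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(A):
    # h(A) = tr(exp(A * A)) - d, with * elementwise; h(A) == 0 iff A is a DAG
    return np.trace(expm(A * A)) - A.shape[0]

def reward(A, score, lam_indicator=10.0, lam_smooth=1.0):
    """Reward for a generated adjacency matrix: the (negated) predefined score
    plus two acyclicity penalties, an indicator term and the smooth term h(A)."""
    h = acyclicity(A)
    return -score - lam_indicator * float(h > 1e-8) - lam_smooth * h
```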
The Everyday Sexism Project documents everyday examples of sexism reported by volunteer contributors from all around the world. It collected 100,000 entries in 13+ languages within the first 3 years of its existence. The content of reports in various languages submitted to Everyday Sexism is a valuable source of crowdsourced information with great potential for feminist and gender studies. In this paper, we take a computational approach to analyze the content of the reports. We use topic-modelling techniques to extract emerging topics and concepts from the reports, and to map the semantic relations between those topics. The resulting picture closely resembles, and adds to, that arrived at through qualitative analysis, showing that this form of topic modeling could be useful for sifting through datasets that had not previously been subject to any analysis. More precisely, we produce a map of topics for two different resolutions of our topic model and discuss the connections between the identified topics. In the low-resolution picture, for instance, we found Public space/Street, Online, Work related/Office, Transport, School, Media harassment, and Domestic abuse. Among these, the strongest connection is between Public space/Street harassment and Domestic abuse and sexism in personal relationships. The strength of the relationships between topics illustrates the fluid and ubiquitous nature of sexism, with no single experience being unrelated to another.
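The abstract does not name a specific algorithm, so purely as an illustration of the workflow (model choice, parameter values, and preprocessing are ours), a plain LDA run over the report texts would look like this:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def extract_topics(reports, n_topics=7, n_top_words=10):
    """Fit an LDA topic model to the raw report texts and return the top words
    for each topic; n_topics plays the role of the model's 'resolution'."""
    vec = CountVectorizer(max_df=0.9, min_df=5, stop_words="english")
    counts = vec.fit_transform(reports)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(counts)          # per-report topic proportions
    vocab = vec.get_feature_names_out()
    top_words = [[vocab[i] for i in comp.argsort()[-n_top_words:][::-1]]
                 for comp in lda.components_]
    return top_words, doc_topics
```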