Longitudinal observational patient data can be used to investigate the causal effects of time-varying treatments on time-to-event outcomes. Several methods have been developed to control for the time-dependent confounding that typically occurs. The most commonly used is inverse probability weighted estimation of marginal structural models (MSM-IPTW). An alternative, the sequential trials approach, is increasingly popular, in particular in combination with the target trial emulation framework. This approach involves creating a sequence of `trials' from new time origins, restricting to individuals as yet untreated who meet other eligibility criteria, and comparing treatment initiators and non-initiators. Individuals are censored when they deviate from their treatment status at the start of each `trial' (initiator/non-initiator), and this censoring is addressed using inverse probability of censoring weights. The analysis is based on data combined across trials. We show that the sequential trials approach can estimate the parameter of a particular MSM, and we compare it with MSM-IPTW with respect to the estimands identified, the assumptions needed, and how the two approaches use the data differently. We show how both approaches can estimate the same marginal risk differences. The two approaches are compared using a simulation study. The sequential trials approach, which tends to involve less extreme weights than MSM-IPTW, results in greater efficiency for estimating the marginal risk difference at most follow-up times, but in certain scenarios this can be reversed at late time points. We apply both methods to longitudinal observational data from the UK Cystic Fibrosis Registry to estimate the effect of dornase alfa on survival.
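As a concrete illustration of the weighting machinery referred to above, the following minimal R sketch builds stabilized inverse probability of treatment weights on simulated long-format data and fits a weighted pooled logistic working model for the discrete-time hazard. All names (df, L, A, Y) are hypothetical, the data are simulated, and the sketch omits censoring and the sequential-trial eligibility steps.

    # Stabilized IPTW for an MSM: a minimal sketch on simulated data
    set.seed(1)
    n <- 500; K <- 5
    df <- do.call(rbind, lapply(1:n, function(i) {
      L <- rnorm(K)                                       # time-varying confounder
      A <- rbinom(K, 1, plogis(L))                        # treatment depends on L
      Y <- rbinom(K, 1, plogis(-3 + 0.5 * L - 0.3 * A))   # event indicator
      data.frame(id = i, t = 1:K, L = L, A = A, Y = Y)
    }))
    den <- glm(A ~ L + factor(t), family = binomial, data = df)  # denominator model
    num <- glm(A ~ factor(t), family = binomial, data = df)      # numerator model
    pd <- ifelse(df$A == 1, fitted(den), 1 - fitted(den))
    pn <- ifelse(df$A == 1, fitted(num), 1 - fitted(num))
    df$sw <- ave(pn / pd, df$id, FUN = cumprod)          # cumulative stabilized weights
    # Weighted pooled logistic regression approximating the MSM hazard
    msm <- glm(Y ~ A + factor(t), family = quasibinomial, data = df, weights = sw)
    summary(msm)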
The robust Poisson model is becoming increasingly popular for estimating the association of exposures with a binary outcome. Unlike the logistic regression model, the robust Poisson model yields results that can be interpreted as risk or prevalence ratios. In addition, it does not suffer from the frequent non-convergence problems of the log-binomial model. However, using a Poisson distribution to model a binary outcome may seem counterintuitive. Methodological papers have often presented it as a good approximation to the more natural binomial distribution. In this paper, we provide an alternative perspective on the robust Poisson model based on semiparametric theory. This perspective highlights that the robust Poisson model does not require assuming a Poisson distribution for the outcome. In fact, the model can be seen as making no assumption about the distribution of the outcome; only a log-linear relationship between the risk/prevalence of the outcome and the explanatory variables is assumed. This assumption and the consequences of its violation are discussed. Suggestions for reducing the risk of violating the modeling assumption are also provided.
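For readers unfamiliar with the approach, the robust Poisson model is typically fitted as a Poisson working model with a log link, with a sandwich (robust) variance replacing the model-based one. A minimal R sketch on simulated data, using the widely available sandwich and lmtest packages:

    # Robust Poisson for a binary outcome: log-link working model + sandwich SEs
    library(sandwich); library(lmtest)
    set.seed(1)
    x <- rbinom(200, 1, 0.5)
    y <- rbinom(200, 1, plogis(-1 + 0.8 * x))        # binary outcome
    fit <- glm(y ~ x, family = poisson(link = "log"))
    coeftest(fit, vcov = vcovHC(fit, type = "HC0"))  # robust standard errors
    exp(coef(fit)["x"])                              # interpreted as a risk ratio

Only the log-linear form of the risk is assumed here; no Poisson distribution is imposed on y, which is the point made above.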
In this paper, we propose a general subgroup analysis framework based on semiparametric additive mixed effect models for longitudinal analysis, which can identify subgroups on each covariate and estimate the corresponding regression functions simultaneously. The proposed procedure is applicable to both balanced and unbalanced longitudinal data. A backfitting algorithm combined with k-means clustering is developed to estimate each semiparametric additive component across subgroups and to detect the subgroup structure on each covariate. The number of groups is estimated by minimizing a Bayesian information criterion. Numerical studies demonstrate the efficacy and accuracy of the proposed procedure in identifying the subgroups and estimating the regression functions. In addition, we illustrate the usefulness of our method with an application to PBC data, for which it provides a meaningful partition of the population.
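A stylized fragment of the subgroup-detection step (not the authors' full algorithm): cluster hypothetical per-subject component estimates with k-means and choose the number of groups with a BIC-type criterion; the backfitting estimation of the additive components is omitted.

    # k-means subgroup detection with BIC-type selection of the group count
    set.seed(1)
    slopes <- c(rnorm(50, 0), rnorm(50, 2))   # hypothetical per-subject estimates
    n <- length(slopes)
    bic <- sapply(1:4, function(k) {
      km <- kmeans(slopes, centers = k, nstart = 20)
      n * log(km$tot.withinss / n) + k * log(n)   # simple BIC-type criterion
    })
    k_hat <- which.min(bic)                       # estimated number of subgroups
    kmeans(slopes, centers = k_hat, nstart = 20)$cluster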
Quality of life (QOL) outcomes are important in the management of chronic illnesses. In studies of the efficacy of treatments or intervention modalities, QOL scales, which are multi-dimensional constructs, are routinely used as primary endpoints. The standard analysis strategy computes composite (average) overall and domain scores and conducts a mixed-model analysis for evaluating efficacy or monitoring medical conditions, as if these scores were measured on a continuous metric scale. However, assumptions of parametric models, such as continuity and homoscedasticity, can be violated in many cases. The analysis is even more challenging when some of the variables have missing values. In this paper, we propose a purely nonparametric approach in the sense that meaningful, yet nonparametric, effect size measures are developed. We propose estimators for the effect size and derive their asymptotic properties. Our methods are shown to be particularly effective in the presence of clustering and/or missing values. Inferential procedures are derived from the asymptotic theory. The Asthma Randomized Trial of Indoor Wood Smoke data are used to illustrate the proposed methods. The data were collected in a three-arm randomized trial that evaluated interventions targeting biomass smoke particulate matter from older-model residential wood stoves in homes with children who have asthma.
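One standard nonparametric effect size of the kind alluded to here is the relative effect p = P(X < Y) + 0.5 P(X = Y), which has a simple rank-based estimator. A minimal R sketch on simulated two-arm data (the paper's clustered and missing-data extensions are not shown):

    # Rank-based relative effect estimator for two independent samples
    set.seed(1)
    x <- rnorm(40); y <- rnorm(40, 0.5)
    r <- rank(c(x, y))                           # mid-ranks of the pooled sample
    p_hat <- (mean(r[41:80]) - (40 + 1) / 2) / 40
    p_hat                                        # approx. 0.5 under no effect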
Data from both a randomized trial and an observational study are sometimes simultaneously available for evaluating the effect of an intervention. The randomized data typically allow for reliable estimation of average treatment effects but may be limited in sample size and patient heterogeneity for estimating conditional average treatment effects across a broad range of patients. Estimates from the observational study can potentially compensate for these limitations, but there may be concerns about whether confounding and treatment effect heterogeneity have been adequately addressed. We propose an approach for combining conditional treatment effect estimators from each source that aggressively weights toward the randomized estimator when bias in the observational estimator is detected. This allows the combination to be consistent for a conditional causal effect, regardless of whether the assumptions required for consistent estimation in the observational study are satisfied. When the bias is negligible, the estimators from the two sources are combined for optimal efficiency. We show that the problem can be formulated as a penalized least squares problem and study its asymptotic properties. Simulations demonstrate the robustness and efficiency of the method in finite samples, in scenarios with and without bias in the observational estimator. We illustrate the method by estimating the effects of hormone replacement therapy on the risk of developing coronary heart disease in data from the Women's Health Initiative.
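A stylized one-dimensional version of the penalized least squares formulation may help fix ideas: jointly fit a common effect and a penalized bias term for the observational estimate; profiling out the common effect reduces the problem to soft-thresholding the discrepancy between the two estimates. The R function below is an illustration under these simplifications, not the authors' estimator.

    # Penalized combination of an RCT and an observational (OS) estimate:
    # minimize (tau_rct - tau)^2/v_rct + (tau_os - tau - eta)^2/v_os + lambda*|eta|
    # Profiling out tau yields eta = soft-threshold(d, lambda*(v_rct + v_os)/2).
    combine <- function(tau_rct, v_rct, tau_os, v_os, lambda) {
      d <- tau_os - tau_rct
      V <- v_rct + v_os
      eta <- sign(d) * max(abs(d) - lambda * V / 2, 0)   # penalized bias estimate
      w <- (1 / v_rct) / (1 / v_rct + 1 / v_os)          # precision weight
      w * tau_rct + (1 - w) * (tau_os - eta)             # combined estimate
    }
    # When eta = 0 the result is the efficient precision-weighted average;
    # when a large discrepancy d is detected, eta absorbs it and the
    # combination leans back toward the randomized estimate.
    combine(tau_rct = 0.10, v_rct = 0.04, tau_os = 0.18, v_os = 0.01, lambda = 1)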
Complementary features of randomized controlled trials (RCTs) and observational studies (OSs) can be used jointly to estimate the average treatment effect of a target population. We propose a calibration weighting estimator that enforces covariate balance between the RCT and the OS, thereby improving the trial-based estimator's generalizability. Exploiting semiparametric efficiency theory, we propose a doubly robust augmented calibration weighting estimator that achieves the efficiency bound derived under the identification assumptions. A nonparametric sieve method is provided as an alternative to the parametric approach, which enables robust approximation of the nuisance functions and data-adaptive selection of outcome predictors for calibration. We establish asymptotic results and confirm the finite sample performance of the proposed estimators by simulation experiments and an application to the estimation of the treatment effect of adjuvant chemotherapy for early-stage non-small cell lung cancer patients after surgery.
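The calibration-weighting idea can be sketched with exponentially tilted weights: choose weights on the RCT sample so its weighted covariate moments match those of the target sample, by solving the convex dual problem. A minimal R illustration on simulated covariates (names are hypothetical):

    # Calibration weights matching RCT covariate moments to a target sample
    set.seed(1)
    X_rct <- matrix(rnorm(100 * 2), 100, 2)              # RCT covariates
    X_os  <- matrix(rnorm(400 * 2, mean = 0.3), 400, 2)  # target-population covariates
    target <- colMeans(X_os)
    dual <- function(lam) log(sum(exp(X_rct %*% lam))) - sum(lam * target)
    lam_hat <- optim(c(0, 0), dual, method = "BFGS")$par # convex dual problem
    w <- as.vector(exp(X_rct %*% lam_hat)); w <- w / sum(w)
    rbind(target = target, calibrated = colSums(w * X_rct))  # moments now match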
The beta regression model is useful for the analysis of bounded continuous outcomes such as proportions. It is well known that, in any regression model, the presence of multicollinearity leads to poor performance of the maximum likelihood estimators. Ridge-type estimators have been proposed to alleviate the adverse effects of multicollinearity. Furthermore, when some of the predictors have insignificant or weak effects on the outcome, it is desirable to recover as much information as possible from these predictors rather than discard them altogether. In this paper, we propose ridge-type shrinkage estimators for the low- and high-dimensional beta regression model that address both issues simultaneously. We derive the biases and variances of the proposed estimators in closed form and use Monte Carlo simulations to evaluate their performance. The results show that, in both low- and high-dimensional settings, the proposed estimators outperform ridge estimators that discard weak or insignificant predictors. We conclude by applying the proposed methods to two real data sets from econometrics and medicine.
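A minimal sketch of a ridge-penalized beta regression, assuming a logit link for the mean and a common precision parameter phi, fitted by directly optimizing the penalized log-likelihood; this illustrates the general idea rather than the proposed shrinkage estimators:

    # Ridge-penalized beta regression via penalized maximum likelihood
    set.seed(1)
    n <- 200
    X <- cbind(1, matrix(rnorm(n * 3), n, 3))            # design with intercept
    mu <- plogis(X %*% c(0.2, 0.5, -0.4, 0))             # last predictor is weak
    y <- rbeta(n, mu * 30, (1 - mu) * 30)                # outcome in (0, 1)
    negpen <- function(par, lambda) {
      beta <- par[1:4]; phi <- exp(par[5])               # phi > 0 via log scale
      m <- plogis(X %*% beta)
      -sum(dbeta(y, m * phi, (1 - m) * phi, log = TRUE)) +
        lambda * sum(beta[-1]^2)                         # ridge penalty, intercept free
    }
    fit <- optim(rep(0, 5), negpen, lambda = 1, method = "BFGS")
    fit$par[1:4]                                         # shrunken coefficients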
Feature selection is an extensively studied technique in the machine learning literature, where the main objective is to identify the subset of features that provides the highest predictive power. In causal inference, however, our goal is to identify the set of variables that are associated with both the treatment variable and the outcome (i.e., the confounders). While controlling for the confounding variables helps us achieve an unbiased estimate of the causal effect, recent research shows that additionally controlling for pure outcome predictors can reduce the variance of the estimate. In this paper, we propose an Outcome Adaptive Elastic-Net (OAENet) method specifically designed for causal inference to select the confounders and outcome predictors for inclusion in the propensity score model or in the matching mechanism. OAENet provides two major advantages over existing methods: it performs better on correlated data, and it can be combined with any matching method and any estimator. In addition, OAENet is computationally efficient compared to state-of-the-art methods.
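The outcome-adaptive idea can be sketched with glmnet (in the spirit of OAENet, not the authors' exact implementation): fit an outcome model first, then fit the propensity score model with elastic-net penalty factors that penalize a covariate less when it is strongly associated with the outcome, so confounders and outcome predictors survive selection.

    # Outcome-adaptive elastic-net-style propensity score selection (illustrative)
    library(glmnet)
    set.seed(1)
    n <- 500; p <- 10
    X <- matrix(rnorm(n * p), n, p)
    A <- rbinom(n, 1, plogis(X[, 1]))                    # X1: confounder
    Y <- rnorm(n, 2 * X[, 1] + 1.5 * X[, 2] + A)         # X2: pure outcome predictor
    out_coef <- coef(lm(Y ~ X + A))[2:(p + 1)]           # outcome associations
    pen <- 1 / (abs(out_coef) + 1e-4)^2                  # adaptive penalty factors
    ps_fit <- cv.glmnet(X, A, family = "binomial", alpha = 0.5,
                        penalty.factor = pen)
    coef(ps_fit, s = "lambda.min")  # retains confounders and outcome predictors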
Structural equation models are commonly used to capture the relationships between sets of observed and unobservable variables. Traditionally these models are fitted using frequentist approaches, but recently researchers and practitioners have shown increasing interest in Bayesian inference. In Bayesian settings, inference for these models is typically performed via Markov chain Monte Carlo methods, which may be computationally intensive for models with a large number of manifest variables or complex structures. Variational approximations can be a fast alternative; however, they have not been adequately explored for this class of models. We develop a mean field variational Bayes approach for fitting elemental structural equation models and demonstrate how the bootstrap can considerably improve the quality of the variational approximation. We show that this variational approximation method provides reliable inference while being significantly faster than Markov chain Monte Carlo.
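To convey the mean field mechanics (on a toy conjugate model, not a structural equation model), the following R sketch runs coordinate ascent variational inference for a normal model with unknown mean and precision; the SEM-specific derivations in the paper follow the same pattern of iterating closed-form updates for each factor of the approximating distribution.

    # Mean field VB (CAVI) toy example: y_i ~ N(mu, 1/tau),
    # mu ~ N(0, 1/(lambda0*tau)), tau ~ Gamma(a0, b0)
    set.seed(1)
    y <- rnorm(100, mean = 2, sd = 1.5)
    n <- length(y); ybar <- mean(y)
    lambda0 <- 1e-2; a0 <- b0 <- 1e-2
    a_n <- a0 + (n + 1) / 2
    b_n <- 1                                   # initialize q(tau)
    m_n <- n * ybar / (lambda0 + n)            # posterior mean of q(mu)
    for (iter in 1:50) {                       # coordinate ascent updates
      e_tau <- a_n / b_n                       # E[tau] under current q
      lam_n <- (lambda0 + n) * e_tau           # precision of q(mu)
      e_mu2 <- m_n^2 + 1 / lam_n               # E[mu^2]
      b_n <- b0 + 0.5 * (sum(y^2) - 2 * m_n * sum(y) + n * e_mu2) +
             0.5 * lambda0 * e_mu2
    }
    c(post_mean = m_n, post_prec = a_n / b_n)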
Semiconductor device models are essential for understanding charge transport in thin film transistors (TFTs). Using TFT models to draw inferences involves estimating the parameters that fit the model to experimental data, such as extracted charge carrier mobility or measured current; estimates of these parameters support inferences about device performance. Fitting a TFT model to given experimental data traditionally relies on manual fine-tuning of multiple parameters by human experts. Several of these parameters may have confounding effects on the experimental data, making the extraction of their individual effects non-intuitive during manual tuning. To avoid this convoluted process, we propose a new method for automating the model parameter extraction process, resulting in an accurate model fit. In this work, model-choice-based approximate Bayesian computation (ABC) is used to generate the posterior distribution of the estimated parameters using mobility observed at various gate voltage values. Furthermore, we show that the extracted parameters can be accurately predicted from the mobility curves using gradient boosted trees. This work also provides a comparative analysis of the proposed framework against fine-tuned neural networks, in which the proposed framework is shown to perform better.
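A minimal rejection-ABC sketch in R (generic, not the TFT model): draw parameters from the prior, simulate data, and keep draws whose simulated summary statistic falls within a tolerance of the observed one.

    # Rejection ABC: posterior draws are prior draws whose simulated
    # summary statistic is close to the observed summary
    set.seed(1)
    obs <- rnorm(50, mean = 3, sd = 1); s_obs <- mean(obs)
    theta <- runif(2e4, 0, 10)                          # draws from a flat prior
    s_sim <- vapply(theta, function(t) mean(rnorm(50, t, 1)), numeric(1))
    keep <- abs(s_sim - s_obs) < 0.05                   # rejection step
    c(posterior_mean = mean(theta[keep]), accepted = sum(keep))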
Just-in-time adaptive interventions (JITAIs) are time-varying adaptive interventions that use frequent opportunities for the intervention to be adapted: weekly, daily, or even many times a day. The micro-randomized trial (MRT) has emerged as a design for informing the construction of JITAIs. MRTs can be used to address research questions about whether and under what circumstances JITAI components are effective, with the ultimate objective of developing effective and efficient JITAIs. The purpose of this article is to clarify why, when, and how to use MRTs; to highlight elements that must be considered when designing and implementing an MRT; and to review methods for primary and secondary analyses of MRT data. We briefly review key elements of JITAIs and discuss a variety of considerations that go into planning and designing an MRT. We provide a definition of causal excursion effects suitable for use in primary and secondary analyses of MRT data to inform JITAI development. We review the weighted and centered least squares (WCLS) estimator, which provides consistent estimates of causal excursion effects from MRT data, and describe how the WCLS estimator and associated test statistics can be obtained using standard statistical software such as R (R Core Team, 2019). Throughout, we illustrate MRT design and analysis using the HeartSteps MRT, conducted to develop a JITAI for increasing physical activity among sedentary individuals. We supplement the HeartSteps MRT with two other MRTs, SARA and BariFit, each of which highlights different research questions that can be addressed using the MRT design and experimental design considerations that might arise.
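A minimal sketch of a WCLS-style analysis for the simplest case, an MRT with a known, constant randomization probability p: the weights then equal one and the estimator reduces to least squares with a centered treatment indicator. Data and effect sizes below are simulated and hypothetical.

    # WCLS-style estimate of a causal excursion effect with known constant p
    set.seed(1)
    n <- 100; T_len <- 30; p <- 0.6
    d <- expand.grid(id = 1:n, t = 1:T_len)
    d$A <- rbinom(nrow(d), 1, p)                           # micro-randomized treatment
    d$Z <- rnorm(nrow(d))                                  # time-varying covariate
    d$Y <- 0.5 + 0.3 * d$Z + 0.2 * d$A + rnorm(nrow(d))    # proximal outcome
    fit <- lm(Y ~ Z + I(A - p), data = d)                  # centered treatment term
    coef(fit)["I(A - p)"]                                  # excursion effect estimate
    # In practice, cluster-robust SEs by subject would be used, e.g.
    # sandwich::vcovCL(fit, cluster = d$id)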