In this paper, I try to tame "Basu's elephants" (data with extreme selection on observables). I propose new practical large-sample and finite-sample methods for estimating and inferring heterogeneous causal effects (under unconfoundedness) in the empirically relevant context of limited overlap. I develop a general principle called "Stable Probability Weighting" (SPW) that can be used as an alternative to the widely used Inverse Probability Weighting (IPW) technique, which relies on strong overlap. I show that IPW (or its augmented version), when valid, is a special case of the more general SPW (or its doubly robust version), which adjusts for the extremeness of the conditional probabilities of the treatment states. The SPW principle can be implemented using several existing large-sample parametric, semiparametric, and nonparametric procedures for conditional moment models. In addition, I provide new finite-sample results that apply when unconfoundedness is plausible within fine strata. Since IPW estimation relies on the problematic reciprocal of the estimated propensity score, I develop a "Finite-Sample Stable Probability Weighting" (FPW) set-estimator that is unbiased in a sense. I also propose new finite-sample inference methods for testing a general class of weak null hypotheses. The associated computationally convenient methods, which can be used to construct valid confidence sets and to bound the finite-sample confidence distribution, are of independent interest. My large-sample and finite-sample frameworks extend to the setting of multivalued treatments.
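For orientation, the following is a minimal sketch (with an assumed logistic propensity model and illustrative variable names) of the standard IPW estimator of the average treatment effect that the paper generalizes; it is not the SPW procedure itself, and it makes explicit the reliance on the reciprocal of the estimated propensity score that breaks down under limited overlap.

```python
# Minimal IPW sketch (not the paper's SPW procedure): standard inverse probability
# weighting for the average treatment effect under unconfoundedness.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, d, y):
    """X: covariates, d: binary treatment indicator (0/1), y: outcome."""
    e_hat = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    # The weights 1/e_hat and 1/(1 - e_hat) explode when estimated propensity scores
    # approach 0 or 1 -- the limited-overlap problem that SPW is designed to address.
    return np.mean(d * y / e_hat) - np.mean((1 - d) * y / (1 - e_hat))
```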
In prevalent cohort studies with follow-up, the time-to-event outcome is subject to left truncation, leading to selection bias. For estimation of the distribution of time-to-event, conventional methods adjusting for left truncation tend to rely on the (quasi-)independence assumption that the truncation time and the event time are "independent" on the observed region. This assumption is violated when there is dependence between the truncation time and the event time, possibly induced by measured covariates. Inverse probability of truncation weighting that leverages covariate information can be used in this case, but it is sensitive to misspecification of the truncation model. In this work, we apply semiparametric theory to find the efficient influence curve of an expected (arbitrarily transformed) survival time in the presence of covariate-induced dependent left truncation. We then use it to construct estimators that are shown to enjoy double-robustness properties. Our work represents the first attempt to construct doubly robust estimators in the presence of left truncation, which does not fall under the established coarsened-data framework in which doubly robust approaches have been developed. We provide technical conditions for the asymptotic properties that appear not to have been carefully examined in the literature for time-to-event data, and study the estimators via extensive simulation. We apply the estimators to two data sets from practice, with different right-censoring patterns.
We consider identification and inference for the average treatment effect and the heterogeneous treatment effect conditional on observable covariates in the presence of unmeasured confounding. Since point identification of these effects is not achievable without strong assumptions, we obtain bounds on both the average and heterogeneous treatment effects by leveraging differential effects, a tool that allows a second treatment to be used to learn the effect of the first treatment. The differential effect is the effect of using one treatment in lieu of the other, and it can be identified in some observational studies in which treatments are not randomly assigned to units and differences in outcomes may be due to biased assignments rather than treatment effects. With differential effects, we develop a flexible and easy-to-implement semiparametric framework to estimate the bounds and establish asymptotic properties over the support for conducting statistical inference. We also provide conditions under which the causal estimands are point identifiable within the proposed framework. The proposed method is examined in a simulation study and in two case studies using data from the National Health and Nutrition Examination Survey and the Youth Risk Behavior Surveillance System.
When choosing estimands and estimators in randomized clinical trials, caution is warranted because intercurrent events, such as patients switching treatment after disease progression, are often extreme. Statistical analyses may then easily lure one into making large implicit extrapolations, which often go unnoticed. We illustrate this problem of implicit extrapolations using a real oncology case study with a right-censored time-to-event endpoint, in which patients can cross over from the control to the experimental treatment after disease progression, for ethical reasons. We resolve this by developing an estimator for the survival risk ratio that contrasts the survival probabilities at each time t if all patients would take the experimental treatment with the survival probabilities at those times t if all patients would take the control treatment up to time t, using randomization as an instrumental variable to avoid reliance on assumptions of no unmeasured confounders. This doubly robust estimator can handle time-varying treatment switches and right-censored survival times. Insight into the rationale behind the estimator is provided, and the approach is demonstrated by re-analyzing the oncology trial.
How should we intervene on an unknown structural equation model to maximize a downstream variable of interest? This setting, also known as causal Bayesian optimization (CBO), has important applications in medicine, ecology, and manufacturing. Standard Bayesian optimization algorithms fail to effectively leverage the underlying causal structure. Existing CBO approaches assume noiseless measurements and do not come with guarantees. We propose the model-based causal Bayesian optimization algorithm (MCBO) that learns a full system model instead of only modeling intervention-reward pairs. MCBO propagates epistemic uncertainty about the causal mechanisms through the graph and trades off exploration and exploitation via the optimism principle. We bound its cumulative regret, and obtain the first non-asymptotic bounds for CBO. Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form, so we show how the reparameterization trick can be used to apply gradient-based optimizers. The resulting practical implementation of MCBO compares favorably with state-of-the-art approaches empirically.
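Since the abstract notes that the acquisition function has no closed form and is optimized with gradients via the reparameterization trick, the sketch below shows that trick in isolation for a toy Gaussian surrogate; the surrogate, reward, and optimizer settings are assumptions for illustration and are much simpler than MCBO's graph-propagated system model.

```python
# Toy illustration of the reparameterization trick for gradient-based maximization of a
# Monte Carlo acquisition value (assumed Gaussian surrogate; not MCBO's actual model).
import torch

def mc_acquisition(a, mu_fn, sigma_fn, reward_fn, n_samples=256):
    eps = torch.randn(n_samples)              # noise drawn independently of the intervention a
    y = mu_fn(a) + sigma_fn(a) * eps          # reparameterized samples, differentiable in a
    return reward_fn(y).mean()

a = torch.tensor([0.0], requires_grad=True)   # intervention parameter to optimize
opt = torch.optim.Adam([a], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = -mc_acquisition(
        a,
        mu_fn=lambda a: 2.0 * a,                                   # assumed posterior mean
        sigma_fn=lambda a: torch.nn.functional.softplus(a) + 0.1,  # assumed posterior std
        reward_fn=lambda y: -(y - 1.0) ** 2,                       # assumed downstream reward
    )
    loss.backward()
    opt.step()
```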
Making causal inferences from observational studies can be challenging when confounders are missing not at random. In such cases, identifying causal effects is often not guaranteed. Motivated by a real example, we consider a treatment-independent missingness assumption under which we establish the identification of causal effects when confounders are missing not at random. We propose a weighted estimating equation (WEE) approach for estimating model parameters and introduce three estimators for the average causal effect, based on regression, propensity score weighting, and doubly robust methods. We evaluate the performance of these estimators through simulations, and provide a real data analysis to illustrate our proposed method.
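As background for the regression, propensity score weighting, and doubly robust estimators mentioned above, here is a generic augmented IPW (doubly robust) estimator of the average causal effect when confounders are fully observed; the paper's WEE-based estimators additionally handle confounders that are missing not at random, which this sketch does not attempt.

```python
# Generic AIPW (doubly robust) estimator with fully observed confounders; shown only
# as orientation for the regression / weighting / doubly robust trio discussed above.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, d, y):
    e = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    m1 = LinearRegression().fit(X[d == 1], y[d == 1]).predict(X)   # outcome model, treated
    m0 = LinearRegression().fit(X[d == 0], y[d == 0]).predict(X)   # outcome model, control
    # Outcome-regression contrast augmented with inverse-probability-weighted residuals.
    return np.mean(m1 - m0 + d * (y - m1) / e - (1 - d) * (y - m0) / (1 - e))
```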
Background: Outcome measures that are count variables with excessive zeros are common in health behaviors research. There is a lack of empirical data about the relative performance of prevailing statistical models when outcomes are zero-inflated, particularly compared with recently developed approaches. Methods: The current simulation study examined five commonly used analytical approaches for count outcomes, including two linear models (with outcomes on the raw and log-transformed scales, respectively) and three count distribution-based models (i.e., Poisson, negative binomial, and zero-inflated Poisson (ZIP) models). We also considered the marginalized zero-inflated Poisson (MZIP) model, a novel alternative that estimates the effects on the overall mean while adjusting for zero-inflation. Extensive simulations were conducted to evaluate their statistical power and Type I error rates across various data conditions. Results: Under zero-inflation, the Poisson model failed to control the Type I error rate, resulting in higher than expected false positive results. When the intervention effects on the zero (vs. non-zero) and count parts were in the same direction, the MZIP model had the highest statistical power, followed by the linear model with outcomes on the raw scale, the negative binomial model, and the ZIP model. The performance of a linear model with a log-transformed outcome variable was unsatisfactory. When only one of the effects on the zero (vs. non-zero) part and the count part existed, the ZIP model had the highest statistical power. Conclusions: The MZIP model demonstrated better statistical properties in detecting true intervention effects and controlling false positive results for zero-inflated count outcomes. The MZIP model may thus serve as an appealing analytical approach to evaluating overall intervention effects in studies with count outcomes marked by excessive zeros.
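To make the simulation setup concrete, the snippet below generates one zero-inflated data condition and fits the Poisson and ZIP models with statsmodels; the sample size, rates, and zero-inflation probability are illustrative choices, and the MZIP model (available in specialized packages) is not shown.

```python
# Illustrative single replication: zero-inflated counts from a two-arm study,
# fit with Poisson and ZIP models (parameter values are arbitrary examples).
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 500
arm = rng.integers(0, 2, n)                               # 0 = control, 1 = intervention
lam = np.exp(0.5 + 0.3 * arm)                             # count-part rates
y = np.where(rng.random(n) < 0.4, 0, rng.poisson(lam))    # 40% structural zeros

X = sm.add_constant(arm.astype(float))
poisson_fit = sm.Poisson(y, X).fit(disp=False)
zip_fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=False)
print(poisson_fit.params[1], zip_fit.params)              # intervention coefficients
```

Repeating such replications under effect and no-effect scenarios and recording rejection rates yields the power and Type I error comparisons summarized above.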
We consider design-based causal inference in settings where randomized treatments have effects that bleed out into space in complex, overlapping ways, in violation of the standard "no interference" assumption underlying many causal inference methods. We define a spatial "average marginalized effect," which characterizes how, in expectation, units of observation at a specified distance from an intervention node are affected by treatment at that node, averaging over effects emanating from other intervention nodes. We establish conditions for non-parametric identification under unknown interference, asymptotic distributions of estimators, and recovery of structural effects. We propose methods for both sample-theoretic and permutation-based inference. We provide illustrations using randomized field experiments on forest conservation and health.
In this paper, we propose a method for estimating model parameters from Small-Angle Scattering (SAS) data based on Bayesian inference. Conventional SAS data analyses involve manual parameter adjustment by analysts or optimization using gradient methods. These analysis processes tend to involve heuristic approaches and may lead to local solutions. Furthermore, it is difficult to evaluate the reliability of the results obtained by conventional analysis methods. Our method solves these problems by estimating model parameters as probability distributions from SAS data within the framework of Bayesian inference. We evaluate the performance of our method through numerical experiments using artificial data from representative measurement target models. The results of the numerical experiments show that our method provides not only high accuracy and reliability of estimation, but also perspectives on the transition point of estimability with respect to the measurement time and the lower bound of the angular domain of the measured data.
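As a minimal concrete example of the general approach (with an assumed sphere form factor, flat prior, and Gaussian noise model, none of which are specified in the abstract), a random-walk Metropolis sampler can turn noisy SAS intensities into a posterior distribution over a model parameter:

```python
# Toy Bayesian estimation of a sphere radius from simulated SAS intensities using
# random-walk Metropolis sampling (form factor, prior, and noise model are assumptions).
import numpy as np

def sphere_intensity(q, R, scale=1.0, background=1e-3):
    qR = q * R
    form = 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3    # sphere form factor amplitude
    return scale * form**2 + background

rng = np.random.default_rng(1)
q = np.linspace(0.01, 0.5, 200)
data = sphere_intensity(q, R=40.0) * (1 + 0.05 * rng.standard_normal(q.size))

def log_post(R, rel_sigma=0.05):
    if not 1.0 < R < 200.0:                                # flat prior on a plausible range
        return -np.inf
    resid = (data - sphere_intensity(q, R)) / (rel_sigma * data)
    return -0.5 * np.sum(resid**2)

samples, R_cur, lp_cur = [], 30.0, log_post(30.0)
for _ in range(20000):
    R_prop = R_cur + 0.5 * rng.standard_normal()
    lp_prop = log_post(R_prop)
    if np.log(rng.random()) < lp_prop - lp_cur:            # Metropolis accept/reject step
        R_cur, lp_cur = R_prop, lp_prop
    samples.append(R_cur)
print(np.mean(samples[5000:]), np.std(samples[5000:]))     # posterior mean and spread
```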
Dynamic treatment regimens (DTRs), also known as treatment algorithms or adaptive interventions, play an increasingly important role in many health domains. DTRs aim to address the unique and changing needs of individuals by delivering the type of treatment needed, when needed, while minimizing unnecessary treatment. Practically, a DTR is a sequence of decision rules that specify, for each of several points in time, how available information about the individual's status and progress should be used in practice to decide which treatment (e.g., type or intensity) to deliver. The sequential multiple assignment randomized trial (SMART) is an experimental design widely used to empirically inform the development of DTRs. Sample size planning resources for SMARTs have been developed for continuous, binary, and survival outcomes. However, an important gap exists in sample size estimation methodology for SMARTs with longitudinal count outcomes. Further, in many health domains, count data are overdispersed, having variance greater than the mean. We propose a Monte Carlo-based approach to sample size estimation applicable to many types of longitudinal outcomes and provide a case study with longitudinal overdispersed count outcomes. A SMART for engaging alcohol- and cocaine-dependent patients in treatment is used as motivation.
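To make the Monte Carlo idea concrete, the sketch below estimates power for a simplified two-arm comparison with overdispersed (negative binomial) counts; the effect size, dispersion, and analysis model are illustrative assumptions, and a full SMART calculation would additionally simulate the sequential re-randomizations and the longitudinal structure.

```python
# Simplified Monte Carlo power estimate for overdispersed count outcomes
# (two-arm version only; a SMART adds sequential randomization stages).
import numpy as np
import statsmodels.api as sm

def simulated_power(n_per_arm, rate_ratio=1.3, base_rate=2.0, size=1.5,
                    n_sims=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        arm = np.repeat([0, 1], n_per_arm)
        mu = base_rate * rate_ratio**arm
        # Negative binomial draws with mean mu and variance mu + mu**2 / size.
        y = rng.negative_binomial(size, size / (size + mu))
        X = sm.add_constant(arm.astype(float))
        fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1 / size)).fit()
        rejections += fit.pvalues[1] < alpha
    return rejections / n_sims

print(simulated_power(150))    # estimated power with 150 participants per arm
```

The required sample size is then the smallest n at which the estimated power reaches the target (e.g., 0.80).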
Analyzing observational data from multiple sources can be useful for increasing statistical power to detect a treatment effect; however, practical constraints such as privacy considerations may restrict individual-level information sharing across data sets. This paper develops federated methods that only utilize summary-level information from heterogeneous data sets. Our federated methods provide doubly-robust point estimates of treatment effects as well as variance estimates. We derive the asymptotic distributions of our federated estimators, which are shown to be asymptotically equivalent to the corresponding estimators from the combined, individual-level data. We show that to achieve these properties, federated methods should be adjusted based on conditions such as whether models are correctly specified and stable across heterogeneous data sets.
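As a minimal illustration of working from summary-level information only (and not of the paper's specific federated adjustments), site-level doubly robust estimates and their variances can be pooled without sharing individual records, for example by inverse-variance weighting:

```python
# Minimal summary-level pooling example: inverse-variance weighting of site-level
# treatment-effect estimates (the paper's federated estimators go further, adjusting
# for model misspecification and heterogeneity across data sets).
import numpy as np

def combine_sites(estimates, variances):
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled_est = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    pooled_var = 1.0 / np.sum(w)
    return pooled_est, pooled_var

# Three sites share only their doubly robust point estimates and variance estimates.
print(combine_sites([0.42, 0.35, 0.50], [0.010, 0.020, 0.015]))
```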