Unmeasured confounding is a key threat to reliable causal inference based on observational studies. Motivated by two powerful natural experiment devices, instrumental variables and difference-in-differences, we propose a new method called instrumented difference-in-differences that explicitly leverages exogenous randomness in an exposure trend to estimate the average and conditional average treatment effect in the presence of unmeasured confounding. We develop the identification assumptions using the potential outcomes framework. We propose a Wald estimator and a class of multiply robust and efficient semiparametric estimators, with provable consistency and asymptotic normality. In addition, we extend instrumented difference-in-differences to a two-sample design to facilitate investigation of delayed treatment effects and provide a measure of weak identification. We demonstrate our results on simulated and real datasets.
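As a point of reference, a Wald-type ratio of this kind can be written as follows, assuming a binary IV $Z$, a binary exposure $D$, an outcome $Y$, and two time periods $T \in \{0,1\}$; this display is an illustrative summary rather than the paper's exact estimand:
\[
\hat{\beta}_{\mathrm{Wald}} =
\frac{\{\hat{E}(Y \mid Z=1, T=1) - \hat{E}(Y \mid Z=1, T=0)\} - \{\hat{E}(Y \mid Z=0, T=1) - \hat{E}(Y \mid Z=0, T=0)\}}
     {\{\hat{E}(D \mid Z=1, T=1) - \hat{E}(D \mid Z=1, T=0)\} - \{\hat{E}(D \mid Z=0, T=1) - \hat{E}(D \mid Z=0, T=0)\}},
\]
that is, a difference-in-differences in the outcome scaled by the corresponding difference-in-differences in the exposure.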
Unobserved confounding is the main obstacle to causal effect estimation from observational data. Instrumental variables (IVs) are widely used for causal effect estimation when latent confounders are present. With the standard IV method, unbiased estimation can be obtained when the given IV is valid, but the validity requirements on a standard IV are strict and untestable. Conditional IVs have been proposed to relax the requirements on standard IVs by conditioning on a set of observed variables (known as a conditioning set for a conditional IV). However, the criterion for finding a conditioning set for a conditional IV requires complete knowledge of the causal structure, or a directed acyclic graph (DAG) representing the causal relationships of both observed and unobserved variables. This makes it impossible to discover a conditioning set directly from data. In this paper, by leveraging maximal ancestral graphs (MAGs) for causal inference with latent variables, we propose a new type of IV, the ancestral IV in a MAG, and develop the theory to support data-driven discovery of the conditioning set for a given ancestral IV in a MAG. Based on the theory, we develop an algorithm for unbiased causal effect estimation with an ancestral IV in a MAG and observational data. Extensive experiments on synthetic and real-world datasets demonstrate the performance of the algorithm in comparison with existing IV methods.
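Once a valid conditioning set has been identified for an IV, the downstream effect estimation itself is standard. The following is a minimal, hypothetical sketch of that final step using manual two-stage least squares; the variable names (S for the instrument, X for the treatment, Y for the outcome, W for the conditioning set) and the simulated data are assumptions, and the MAG-based discovery of the conditioning set, which is the paper's actual contribution, is not shown.

```python
# Hypothetical sketch: effect estimation with a conditional IV once a
# conditioning set W is known (the MAG-based discovery step is omitted).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5000
W = rng.normal(size=(n, 2))                 # observed conditioning set
U = rng.normal(size=n)                      # unmeasured confounder
S = 0.8 * W[:, 0] + rng.normal(size=n)      # instrument, valid only given W
X = S + W @ np.array([0.5, -0.3]) + U + rng.normal(size=n)
Y = 2.0 * X + W @ np.array([1.0, 0.4]) + 3.0 * U + rng.normal(size=n)

# Stage 1: project the treatment on the instrument and the conditioning set.
stage1 = LinearRegression().fit(np.column_stack([S, W]), X)
X_hat = stage1.predict(np.column_stack([S, W]))

# Stage 2: regress the outcome on the fitted treatment and the conditioning set.
stage2 = LinearRegression().fit(np.column_stack([X_hat, W]), Y)
print("2SLS estimate of the effect of X on Y:", stage2.coef_[0])  # close to 2.0
```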
The Mat\'ern covariance function is ubiquitous in the application of Gaussian processes to spatial statistics and beyond. Perhaps the most important reason for this is that the smoothness parameter $\nu$ gives complete control over the mean-square differentiability of the process, which has significant implications for the behavior of estimated quantities such as interpolants and forecasts. Unfortunately, derivatives of the Mat\'ern covariance function with respect to $\nu$ require derivatives of the modified Bessel function of the second kind $\mathcal{K}_\nu$ with respect to $\nu$. While closed-form expressions for these derivatives do exist, they are prohibitively difficult and expensive to compute. For this reason, many software packages require fixing $\nu$ rather than estimating it, and all existing software packages that attempt to offer the functionality of estimating $\nu$ use finite difference estimates for $\partial_\nu \mathcal{K}_\nu$. In this work, we introduce a new implementation of $\mathcal{K}_\nu$ that has been designed to provide derivatives via automatic differentiation (AD), and whose resulting derivatives are significantly faster and more accurate than those computed using finite differences. We provide comprehensive testing for both speed and accuracy and show that our AD solution can be used to build accurate Hessian matrices for second-order maximum likelihood estimation in settings where Hessians built with finite difference approximations fail completely.
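For reference, the finite-difference baseline that existing packages rely on can be written in a few lines; the sketch below shows that baseline using scipy's `kv`, not the AD implementation introduced in the work.

```python
# Central finite-difference approximation of d/dnu K_nu(x), the quantity that
# derivatives of the Matern covariance with respect to the smoothness require.
# This is the baseline approach; the paper replaces it with automatic
# differentiation of a purpose-built K_nu implementation.
import numpy as np
from scipy.special import kv

def dkv_dnu_fd(nu: float, x: float, h: float = 1e-6) -> float:
    """Central difference in the order nu; shrinking h too far loses accuracy
    to floating-point cancellation, which is what motivates the AD approach."""
    return (kv(nu + h, x) - kv(nu - h, x)) / (2.0 * h)

print(dkv_dnu_fd(1.5, 2.0))
```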
Disagreement remains on what the target estimand should be for population-adjusted indirect treatment comparisons. This debate is of central importance for policy-makers and applied practitioners in health technology assessment. Misunderstandings are based on properties inherent to estimators, not estimands, and on generalizing conclusions from linear regression to non-linear models. Estimators of marginal estimands need not be unadjusted and may be covariate-adjusted. The population-level interpretation of conditional estimates follows from collapsibility and does not necessarily hold for the underlying conditional estimands. For non-collapsible effect measures, neither conditional estimates nor conditional estimands have a population-level interpretation. Estimators of marginal effects tend to be more precise and efficient than estimators of conditional effects where the measure of effect is non-collapsible. In any case, such comparisons are inconsequential for estimators targeting distinct estimands. Statistical efficiency should not drive the choice of the estimand. Rather, the estimand, selected on the basis of relevance to decision-making, should drive the choice of the most efficient estimator. Health technology assessment agencies make reimbursement decisions at the population level. Therefore, marginal estimands are required. Current pairwise population adjustment methods such as matching-adjusted indirect comparison are restricted to targeting marginal estimands that are specific to the comparator study sample. These may not be relevant for decision-making. Multilevel network meta-regression (ML-NMR) can potentially target marginal estimands in any population of interest. Such a population could be characterized by decision-makers using increasingly available ``real-world'' data sources. Therefore, ML-NMR presents new directions and abundant opportunities for evidence synthesis.
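A small numerical illustration of non-collapsibility, assuming a binary covariate that splits the population 50/50 and a common conditional odds ratio in both strata; the risks below are invented purely to show that the marginal odds ratio need not equal the conditional one, even in the absence of confounding.

```python
# Toy illustration of non-collapsibility of the odds ratio: the conditional
# odds ratio is identical in both strata, yet the marginal odds ratio differs.
# Risks are arbitrary illustrative values, not data from any study.
def odds(p):
    return p / (1.0 - p)

# Stratum-specific risks under control; the same conditional OR = 4 applies in each.
risk0 = {"low": 0.10, "high": 0.50}
conditional_or = 4.0
risk1 = {k: odds(p) * conditional_or / (1.0 + odds(p) * conditional_or)
         for k, p in risk0.items()}

# Marginal risks in a 50/50 mixture of the two strata.
p0 = 0.5 * risk0["low"] + 0.5 * risk0["high"]
p1 = 0.5 * risk1["low"] + 0.5 * risk1["high"]
marginal_or = odds(p1) / odds(p0)
print(f"conditional OR = {conditional_or}, marginal OR = {marginal_or:.2f}")  # ~2.9, not 4
```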
This paper presents a parametric estimation method for ill-observed linear stationary Hawkes processes. When the exact locations of points are not observed, but only counts over time intervals of fixed size, likelihood-based methods are not feasible. We show that spectral estimation based on Whittle's method is well suited to this case and provides consistent and asymptotically normal estimators under a mild moment condition on the reproduction function. Simulated datasets and a case study illustrate the performance of the estimator, notably for the reproduction function, even when the time intervals are relatively large.
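The following is a rough sketch of Whittle estimation for a Hawkes process with an exponential reproduction kernel observed only through bin counts, using the standard Bartlett spectrum $f(\omega) = \frac{m}{2\pi}\,|1 - \hat{h}(\omega)|^{-2}$ with mean intensity $m = \mu/(1-\alpha)$; the bin-size correction treated in the paper is deliberately omitted, so this is an illustration of the general idea, not the paper's estimator.

```python
# Hedged sketch of Whittle estimation for a Hawkes process with kernel
# h(t) = alpha * beta * exp(-beta * t), observed as counts over bins of length delta.
# The periodogram of the count series is matched to a parametric spectral density
# by minimizing the Whittle contrast; binning corrections are ignored here.
import numpy as np
from scipy.optimize import minimize

def hawkes_spectrum(w, mu, alpha, beta):
    m = mu / (1.0 - alpha)                                          # mean intensity
    habs2 = (beta**2 * (1 - alpha)**2 + w**2) / (beta**2 + w**2)     # |1 - hhat(w)|^2
    return m / (2.0 * np.pi) / habs2

def whittle_loss(theta, counts, delta):
    mu, alpha, beta = theta
    if not (mu > 0 and 0 < alpha < 1 and beta > 0):
        return np.inf
    n = len(counts)
    freqs = 2.0 * np.pi * np.fft.rfftfreq(n, d=delta)[1:]            # angular frequencies
    periodogram = np.abs(np.fft.rfft(counts - counts.mean()))[1:] ** 2 / (2 * np.pi * n * delta)
    f = hawkes_spectrum(freqs, mu, alpha, beta)
    return np.sum(np.log(f) + periodogram / f)

# Usage with hypothetical bin counts `counts` over bins of length `delta`:
# res = minimize(whittle_loss, x0=[1.0, 0.5, 1.0], args=(counts, delta),
#                method="Nelder-Mead")
```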
Power is an important aspect of experimental design, because it allows researchers to understand the chance of detecting causal effects if they exist. It is common to specify a desired level of power, and then compute the sample size necessary to obtain that level of power; thus, power calculations help determine how experiments are conducted in practice. Power and sample size calculations are readily available for completely randomized experiments; however, there can be many benefits to using other experimental designs. For example, in recent years it has been established that rerandomized designs, where subjects are randomized until a prespecified level of covariate balance is obtained, increase the precision of causal effect estimators. This work establishes the statistical power of rerandomized treatment-control experiments, thereby allowing for corresponding sample size calculations. Our theoretical results also clarify how power and sample size are affected by treatment effect heterogeneity, a quantity that is often ignored in power analyses. Via simulation, we confirm our theoretical results and find that rerandomization can lead to substantial sample size reductions; e.g., in many realistic scenarios, rerandomization can lead to a 25% or even 50% reduction in sample size for a fixed level of power, compared to complete randomization. Power and sample size calculators based on our results are in the R package rerandPower on CRAN.
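For orientation, the standard normal-approximation sample-size formula for a two-arm completely randomized experiment is sketched below, together with a crude placeholder for how a variance reduction from covariate balance shrinks the required sample size; the `r2` factor is an assumption for illustration, not the rerandomization adjustment derived in the paper or implemented in rerandPower.

```python
# Sketch of the standard per-arm sample-size formula under complete randomization,
# with an illustrative variance-reduction factor standing in for covariate balance.
from scipy.stats import norm

def n_per_arm(tau, sigma, alpha=0.05, power=0.8, r2=0.0):
    """Subjects per arm needed to detect effect tau at the given power, assuming
    outcome s.d. sigma; r2 is the (assumed) fraction of outcome variance removed
    by covariate balance, with r2 = 0 recovering complete randomization."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (sigma**2) * (1 - r2) * z**2 / tau**2

print(n_per_arm(tau=0.5, sigma=1.0))           # ~63 per arm under complete randomization
print(n_per_arm(tau=0.5, sigma=1.0, r2=0.5))   # halved under this illustrative balance gain
```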
Consider a random graph process with $n$ vertices corresponding to points $v_{i} \sim \mathrm{Unif}[0,1]$ embedded randomly in the interval, and where edges are inserted between $v_{i}, v_{j}$ independently with probability given by the graphon $w(v_{i},v_{j}) \in [0,1]$. Following Chuangpishit et al. (2015), we call a graphon $w$ diagonally increasing if, for each $x$, $w(x,y)$ decreases as $y$ moves away from $x$. We call a permutation $\sigma \in S_{n}$ an ordering of these vertices if $v_{\sigma(i)} < v_{\sigma(j)}$ for all $i < j$, and ask: how can we accurately estimate $\sigma$ from an observed graph? We present a randomized algorithm with output $\hat{\sigma}$ that, for a large class of graphons, achieves error $\max_{1 \leq i \leq n} | \sigma(i) - \hat{\sigma}(i)| = O^{*}(\sqrt{n})$ with high probability; we also show that this is the best-possible convergence rate for a large class of algorithms and proof strategies. Under an additional assumption that is satisfied by some popular graphon models, we break this "barrier" at $\sqrt{n}$ and obtain the vastly better rate $O^{*}(n^{\epsilon})$ for any $\epsilon > 0$. These improved seriation bounds can be combined with previous work to give more efficient and accurate algorithms for related tasks, including: estimating diagonally increasing graphons, and testing whether a graphon is diagonally increasing.
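To make the seriation task concrete, the sketch below shows a common spectral baseline for recovering a latent vertex ordering from an observed adjacency matrix (sorting by the Fiedler vector of the graph Laplacian); this is an illustration of the problem only, not the randomized algorithm from the paper, and it carries none of the stated rate guarantees.

```python
# Spectral-seriation baseline: order vertices by the Fiedler vector of the Laplacian.
import numpy as np

def spectral_ordering(A: np.ndarray) -> np.ndarray:
    """Return a permutation of the vertices ordered by the Fiedler vector."""
    degrees = A.sum(axis=1)
    L = np.diag(degrees) - A                       # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]                        # eigenvector of 2nd-smallest eigenvalue
    return np.argsort(fiedler)

# Usage on a graph sampled from an illustrative diagonally increasing graphon:
rng = np.random.default_rng(0)
n = 300
v = np.sort(rng.uniform(size=n))                   # latent positions, sorted for inspection
W = np.exp(-3.0 * np.abs(v[:, None] - v[None, :])) # a diagonally increasing graphon
A = (rng.uniform(size=(n, n)) < W).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetrize, no self-loops
print(spectral_ordering(A)[:10])                   # roughly monotone in v (possibly reversed)
```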
Advances in the state of the art for 3d human sensing are currently limited by the lack of visual datasets with 3d ground truth, including multiple people, in motion, operating in real-world environments, with complex illumination or occlusion, and potentially observed by a moving camera. Sophisticated scene understanding would require estimating human pose and shape as well as gestures, towards representations that ultimately combine useful metric and behavioral signals with free-viewpoint photo-realistic visualisation capabilities. To sustain progress, we build a large-scale photo-realistic dataset, Human-SPACE (HSPACE), of animated humans placed in complex synthetic indoor and outdoor environments. We combine a hundred diverse individuals of varying ages, genders, proportions, and ethnicities, with hundreds of motions and scenes, as well as parametric variations in body shape (for a total of 1,600 different humans), in order to generate an initial dataset of over 1 million frames. Human animations are obtained by fitting an expressive human body model, GHUM, to single scans of people, followed by novel re-targeting and positioning procedures that support the realistic animation of dressed humans, statistical variation of body proportions, and jointly consistent scene placement of multiple moving people. Assets are generated automatically, at scale, and are compatible with existing real-time rendering and game engines. The dataset, together with an evaluation server, will be made available for research. Our large-scale analysis of the impact of synthetic data, in connection with real data and weak supervision, underlines the considerable potential for continuing quality improvements and limiting the sim-to-real gap, in this practical setting, in connection with increased model capacity.
Estimating the effects of interventions on patient outcomes is one of the key aspects of personalized medicine. Inference of these effects is often challenged by the fact that the training data comprise only the outcome for the administered treatment, and not for alternative treatments (the so-called counterfactual outcomes). Several methods have been suggested for this scenario based on observational data, i.e.~data where the intervention was not applied randomly, for both continuous and binary outcome variables. However, patient outcomes are often recorded as time-to-event data, comprising right-censored event times if an event does not occur within the observation period. Despite their enormous importance, time-to-event data are rarely used for treatment optimization. We suggest an approach named BITES (Balanced Individual Treatment Effect for Survival data), which combines a treatment-specific semi-parametric Cox loss with a treatment-balanced deep neural network; i.e.~we regularize differences between treated and non-treated patients using Integral Probability Metrics (IPM). We show in simulation studies that this approach outperforms the state of the art. Furthermore, we demonstrate in an application to a cohort of breast cancer patients that hormone treatment can be optimized based on six routine parameters. We successfully validated this finding in an independent cohort. BITES is provided as an easy-to-use python implementation.
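A minimal PyTorch-style sketch of the ingredients named above, i.e., a shared representation, treatment-specific Cox partial-likelihood terms, and an IPM penalty between treated and control representations; the architecture, the choice of a Gaussian-kernel MMD as the IPM, and all names below are assumptions for illustration, not the released BITES implementation.

```python
# Sketch of a BITES-like loss: treatment-specific Cox losses on a shared
# representation, plus an MMD penalty between treated and control representations.
import torch
import torch.nn as nn

class BitesLikeNet(nn.Module):
    def __init__(self, d_in, d_rep=32):
        super().__init__()
        self.rep = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_rep))
        self.heads = nn.ModuleList([nn.Linear(d_rep, 1) for _ in range(2)])  # one head per arm

    def forward(self, x, a):
        phi = self.rep(x)
        risk = torch.where(a.bool(), self.heads[1](phi).squeeze(-1),
                           self.heads[0](phi).squeeze(-1))
        return phi, risk

def cox_loss(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow, ties ignored)."""
    order = torch.argsort(time, descending=True)          # risk sets by decreasing time
    risk, event = risk[order], event[order]
    log_cum = torch.logcumsumexp(risk, dim=0)
    return -((risk - log_cum) * event).sum() / event.sum().clamp(min=1)

def mmd(phi0, phi1, sigma=1.0):
    """Gaussian-kernel MMD^2 between control and treated representations."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma**2)).mean()
    return k(phi0, phi0) + k(phi1, phi1) - 2 * k(phi0, phi1)

def bites_like_loss(model, x, a, time, event, lam=1.0):
    phi, risk = model(x, a)
    loss = sum(cox_loss(risk[a == g], time[a == g], event[a == g]) for g in (0, 1))
    return loss + lam * mmd(phi[a == 0], phi[a == 1])
```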
This paper synthesizes recent advances in the econometrics of difference-in-differences (DiD) and provides concrete recommendations for practitioners. We begin by articulating a simple set of "canonical" assumptions under which the econometrics of DiD are well-understood. We then argue that recent advances in DiD methods can be broadly classified as relaxing some components of the canonical DiD setup, with a focus on $(i)$ multiple periods and variation in treatment timing, $(ii)$ potential violations of parallel trends, or $(iii)$ alternative frameworks for inference. Our discussion highlights the different ways that the DiD literature has advanced beyond the canonical model, and helps to clarify when each of the papers will be relevant for empirical work. We conclude by discussing some promising areas for future research.
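For reference, in the canonical two-group, two-period setup the parallel-trends assumption and the implied estimand can be summarized as follows (a standard textbook formulation, stated here only to fix ideas, with $t$ indexing periods and $Y_t(0)$ the untreated potential outcome):
\[
E[Y_{1}(0) - Y_{0}(0) \mid D = 1] = E[Y_{1}(0) - Y_{0}(0) \mid D = 0],
\]
under which the average treatment effect on the treated is identified by
\[
\tau_{\mathrm{ATT}} = \{E[Y_{1} \mid D=1] - E[Y_{0} \mid D=1]\} - \{E[Y_{1} \mid D=0] - E[Y_{0} \mid D=0]\}.
\]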
To solve complex real-world problems with reinforcement learning, we cannot rely on manually specified reward functions. Instead, we can have humans communicate an objective to the agent directly. In this work, we combine two approaches to learning from human feedback: expert demonstrations and trajectory preferences. We train a deep neural network to model the reward function and use its predicted reward to train a DQN-based deep reinforcement learning agent on 9 Atari games. Our approach beats the imitation learning baseline in 7 games and achieves strictly superhuman performance on 2 games without using game rewards. Additionally, we investigate the goodness of fit of the reward model, present some reward hacking problems, and study the effects of noise in the human labels.
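A minimal sketch of the standard Bradley-Terry / cross-entropy preference loss commonly used to fit a reward model from pairwise trajectory-segment comparisons; the network, names, and training details below are assumptions for illustration, not the paper's exact setup.

```python
# Reward model fit from segment preferences: P(segment A preferred over B) =
# sigmoid(sum_t r(s_t^A) - sum_t r(s_t^B)), trained with cross-entropy.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def segment_return(self, segment):           # segment: tensor of shape (T, obs_dim)
        return self.net(segment).sum()

def preference_loss(model, seg_a, seg_b, pref):
    """pref = 1.0 if the human preferred seg_a, 0.0 if seg_b (0.5 for ties)."""
    logits = model.segment_return(seg_a) - model.segment_return(seg_b)
    return nn.functional.binary_cross_entropy_with_logits(logits, torch.tensor(pref))

# The learned reward r(s) would then stand in for the game score when training the
# DQN-based agent; expert demonstrations can additionally seed the replay buffer.
```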