Clinical studies often encounter truncation-by-death problems, which may render the outcomes undefined. Statistical analysis based only on observed survivors may lead to biased results because the characteristics of survivors may differ greatly between treatment groups. Under the principal stratification framework, a meaningful causal parameter, the survivor average causal effect in the always-survivor group, can be defined. This causal parameter may not be identifiable in observational studies where the treatment assignment and the survival or outcome process are confounded by unmeasured features. In this paper, we propose a new method to deal with unmeasured confounding when the outcome is truncated by death. First, we identify the heterogeneous conditional survivor average causal effect based on a substitutional variable under monotonicity. Second, under additional assumptions, the survivor average causal effect in the overall population is also identified. Furthermore, we consider estimation and inference for the conditional survivor average causal effect based on parametric and nonparametric methods with good asymptotic properties. Good finite-sample performance is demonstrated by simulations and sensitivity analyses. The proposed method is applied to investigate the effect of allogeneic stem cell transplantation types on leukemia relapse.
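For readers unfamiliar with the estimand, the survivor average causal effect in the always-survivor stratum is conventionally written in potential-outcome notation (a standard definition; the notation is not spelled out in the abstract itself) as
\[
\Delta_{\mathrm{SACE}} = E\bigl[\, Y(1) - Y(0) \mid S(1) = S(0) = 1 \,\bigr],
\]
where $Y(t)$ and $S(t)$ denote the potential outcome and potential survival status under treatment $t$; the expectation is taken over the principal stratum of units who would survive under either treatment and for whom the outcome is therefore always well defined.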
An important problem in causal inference is to break down the total effect of a treatment on an outcome into different causal pathways and to quantify the causal effect in each pathway. For instance, in causal fairness, the total effect of being a male employee (i.e., treatment) comprises its direct effect on annual income (i.e., outcome) and the indirect effect via the employee's occupation (i.e., mediator). Causal mediation analysis (CMA) is a formal statistical framework commonly used to reveal such underlying causal mechanisms. One major challenge of CMA in observational studies is handling confounders, variables that cause spurious causal relationships among treatment, mediator, and outcome. Conventional methods assume sequential ignorability, which implies that all confounders can be measured and is often unverifiable in practice. This work aims to circumvent the stringent sequential ignorability assumption and account for hidden confounders. Drawing upon proxy strategies and recent advances in deep learning, we propose to simultaneously uncover the latent variables that characterize hidden confounders and estimate the causal effects. Empirical evaluations using both synthetic and semi-synthetic datasets validate the effectiveness of the proposed method. We further show the potential of our approach for causal fairness analysis.
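As a reminder of the decomposition that CMA targets (standard potential-outcome notation, not taken from the abstract): writing $Y(t, m)$ for the potential outcome under treatment $t$ and mediator value $m$, and $M(t)$ for the potential mediator, the total effect splits into natural direct and indirect components,
\[
\underbrace{E[Y(1, M(1))] - E[Y(0, M(0))]}_{\text{total effect}}
= \underbrace{E[Y(1, M(0))] - E[Y(0, M(0))]}_{\text{natural direct effect}}
+ \underbrace{E[Y(1, M(1))] - E[Y(1, M(0))]}_{\text{natural indirect effect}}.
\]
Sequential ignorability is what usually licenses identifying these terms from observed data; the proxy-based approach instead recovers a latent representation of the hidden confounders so that the same quantities can be estimated without that assumption.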
Traditional recommender systems aim to estimate a user's rating of an item based on observed ratings from the population. As with all observational studies, hidden confounders, which are factors that affect both item exposures and user ratings, lead to a systematic bias in the estimation. Consequently, a new trend in recommender system research is to negate the influence of confounders from a causal perspective. Observing that confounders in recommendations are usually shared among items and are therefore multi-cause confounders, we model the recommendation as a multi-cause multi-outcome (MCMO) inference problem. Specifically, to remedy confounding bias, we estimate user-specific latent variables that render the item exposures independent Bernoulli trials. The generative distribution is parameterized by a DNN with factorized logistic likelihood, and the intractable posteriors are estimated by variational inference. Controlling for these factors as substitute confounders can, under mild assumptions, eliminate the bias incurred by multi-cause confounders. Furthermore, we show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space. Fortunately, we theoretically demonstrate that introducing user features as pre-treatment variables can substantially improve sample efficiency and alleviate overfitting. Empirical studies on simulated and real-world datasets show that the proposed deep causal recommender is more robust to unobserved confounders than state-of-the-art causal recommenders. Code and datasets are released at //github.com/yaochenzhu/deep-deconf.
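A minimal sketch of the modeling idea (illustrative only, not the authors' released code; layer sizes, hyperparameters, and variable names are our assumptions): a user-specific latent variable is inferred from the binary exposure vector by variational inference, with a factorized Bernoulli (logistic) likelihood, and the inferred latent can then be controlled for as a substitute confounder when predicting ratings.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExposureVAE(nn.Module):
    """Toy deconfounder-style model: a latent user variable z explains
    the binary item-exposure vector a via a factorized Bernoulli likelihood."""
    def __init__(self, num_items, latent_dim=32):
        super().__init__()
        self.enc = nn.Linear(num_items, 2 * latent_dim)  # posterior mean and log-variance
        self.dec = nn.Linear(latent_dim, num_items)      # logits of exposure probabilities

    def forward(self, a):
        mu, logvar = self.enc(a).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        logits = self.dec(z)
        # ELBO = reconstruction term - KL(q(z|a) || N(0, I))
        recon = -F.binary_cross_entropy_with_logits(logits, a, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return z, recon - kl

# Usage: fit on the exposure matrix, then use the inferred z (substitute
# confounder) as an additional input when predicting ratings.
model = ExposureVAE(num_items=1000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
exposures = (torch.rand(64, 1000) < 0.05).float()  # fake batch of binary exposures
opt.zero_grad()
z, elbo = model(exposures)
(-elbo).backward()
opt.step()
\end{verbatim}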
The increase in the use of mobile and wearable devices now allows dense assessment of mediating processes over time. For example, a pharmacological intervention may have an effect on smoking cessation via reductions in momentary withdrawal symptoms. We define and identify the causal direct and indirect effects in terms of potential outcomes on the mean difference and odds ratio scales, and present a method for estimating and testing the indirect effect of a randomized treatment on a distal binary variable as mediated by the nonparametric trajectory of an intensively measured longitudinal variable (e.g., from ecological momentary assessment). Coverage of a bootstrap test for the indirect effect is demonstrated via simulation. An empirical example is presented based on estimating later smoking abstinence from patterns of craving during smoking cessation treatment. We provide an R package, funmediation, to conveniently apply this technique. We conclude by discussing possible extensions to multiple mediators and directions for future research.
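On the two scales mentioned, the natural indirect effect contrasts the outcome under treatment with the mediator trajectory it would follow under treatment versus under control (a standard potential-outcome formulation; the notation is ours, not the paper's):
\[
\mathrm{NIE}_{\mathrm{diff}} = E[Y(1, M(1))] - E[Y(1, M(0))], \qquad
\mathrm{NIE}_{\mathrm{OR}} = \frac{\operatorname{odds}\{Y(1, M(1)) = 1\}}{\operatorname{odds}\{Y(1, M(0)) = 1\}},
\]
where $Y$ is the distal binary outcome and $M(t)$ is here an entire (functional) trajectory of the intensively measured mediator rather than a scalar.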
The problem of selecting optimal backdoor adjustment sets to estimate causal effects in graphical models with hidden and conditioned variables is addressed. Previous work has defined optimality as achieving the smallest asymptotic estimation variance and derived an optimal set for the case without hidden variables. For the case with hidden variables, there can be settings where no optimal set exists, and currently only a sufficient graphical optimality criterion of limited applicability has been derived. In the present work, optimality is characterized as maximizing a certain adjustment information, which allows us to derive a necessary and sufficient graphical criterion for the existence of an optimal adjustment set, together with a definition and an algorithm to construct it. Further, the optimal set is valid if and only if a valid adjustment set exists, and it has higher (or equal) adjustment information than the Adjust-set proposed in Perkovi{\'c} et al. [Journal of Machine Learning Research, 18: 1--62, 2018] for any graph. The results translate to minimal asymptotic estimation variance for a class of estimators whose asymptotic variance follows a certain information-theoretic relation. Numerical experiments indicate that the asymptotic results also hold for relatively small sample sizes and that the optimal adjustment set, or minimized variants thereof, often yields lower variance even beyond that estimator class. Surprisingly, more than 90\% of the randomly created setups fulfill the optimality conditions, indicating that graphical optimality may also hold in many real-world scenarios. Code is available as part of the python package \url{//github.com/jakobrunge/tigramite}.
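To make the variance point concrete, here is a toy simulation (our illustration, not the paper's experiments; the graph, coefficients, and estimator are assumptions) comparing the empirical variance of an OLS estimate of a causal effect under two valid adjustment sets: adjusting only for a confounder versus additionally for a precision variable (a parent of the outcome only). This is the kind of variance gap that the adjustment-information arguments formalize.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def ols_effect(X, Y, Z):
    """OLS coefficient of X in the regression of Y on (1, X, Z)."""
    D = np.column_stack([np.ones_like(X), X] + list(Z))
    beta = np.linalg.lstsq(D, Y, rcond=None)[0]
    return beta[1]

def simulate(n=500):
    C = rng.normal(size=n)             # confounder of X and Y
    W = rng.normal(size=n)             # precision variable: parent of Y only
    X = 0.8 * C + rng.normal(size=n)   # treatment
    Y = 1.0 * X + 0.8 * C + 1.5 * W + rng.normal(size=n)  # true effect = 1.0
    return C, W, X, Y

est_small, est_large = [], []
for _ in range(2000):
    C, W, X, Y = simulate()
    est_small.append(ols_effect(X, Y, [C]))     # valid adjustment set {C}
    est_large.append(ols_effect(X, Y, [C, W]))  # valid adjustment set {C, W}

print("variance with {C}:   ", np.var(est_small))
print("variance with {C, W}:", np.var(est_large))  # markedly smaller
\end{verbatim}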
The aim of this paper is to study the recovery of a spatially dependent potential in a (sub)diffusion equation from overposed final time data. We construct a monotone operator one of whose fixed points is the unknown potential. The uniqueness of the identification is theoretically verified by using the monotonicity of the operator and a fixed point argument. Moreover, we show conditional stability in Hilbert spaces under suitable conditions on the problem data. Next, a fully discrete scheme is developed, using the Galerkin finite element method in space and a finite difference method in time, and a fixed point iteration is then applied to reconstruct the potential. We prove the linear convergence of the iterative algorithm by the contraction mapping theorem, and present a thorough error analysis for the reconstructed potential. Our derived \textsl{a priori} error estimate provides a guideline for choosing discretization parameters according to the noise level. The analysis relies heavily on some suitable nonstandard error estimates for the direct problem as well as the aforementioned conditional stability. Numerical experiments are provided to illustrate and complement our theoretical analysis.
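For orientation, the prototypical setting (our shorthand; the abstract does not display the equation) is a subdiffusion problem with an unknown zeroth-order potential $q$ and overposed final-time data:
\[
\partial_t^{\alpha} u - \Delta u + q(x)\, u = f \quad \text{in } \Omega \times (0, T], \qquad
u(0) = u_0, \qquad u|_{\partial\Omega} = 0,
\]
where $\partial_t^{\alpha}$ is a Caputo fractional derivative of order $\alpha \in (0, 1]$ (the classical diffusion case being $\alpha = 1$), and the inverse problem is to recover $q$ from the observation $g = u(T)$.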
Estimation of heterogeneous treatment effects is an active area of research in causal inference. Most of the existing methods, however, focus on estimating the conditional average treatment effects of a single, binary treatment given a set of pre-treatment covariates. In this paper, we propose a method to estimate the heterogeneous causal effects of high-dimensional treatments, which poses unique challenges in terms of estimation and interpretation. The proposed approach is based on a Bayesian mixture of regularized regressions to identify groups of units who exhibit similar patterns of treatment effects. By directly modeling cluster membership with covariates, the proposed methodology allows one to explore the unit characteristics that are associated with different patterns of treatment effects. Our motivating application is conjoint analysis, which is a popular survey experiment in social science and marketing research and is based on a high-dimensional factorial design. We apply the proposed methodology to the conjoint data, where survey respondents are asked to select one of two immigrant profiles with randomly selected attributes. We find that a group of respondents with a relatively high degree of prejudice appears to discriminate against immigrants from non-European countries like Iraq. An open-source software package is available for implementing the proposed methodology.
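Schematically (our notational sketch, not the paper's exact specification), the approach clusters units by their treatment-effect patterns while letting covariates drive cluster membership:
\[
\Pr(Z_i = k \mid X_i) = \frac{\exp(X_i^{\top}\gamma_k)}{\sum_{k'} \exp(X_i^{\top}\gamma_{k'})}, \qquad
\Pr(Y_i = 1 \mid Z_i = k, T_i) = \operatorname{logit}^{-1}\!\bigl(T_i^{\top}\beta_k\bigr),
\]
where $T_i$ is the high-dimensional treatment vector (e.g., conjoint attributes), each $\beta_k$ carries a sparsity-inducing (regularizing) prior, and the coefficients $\gamma_k$ link respondent characteristics $X_i$ to cluster membership, so that posterior cluster assignments reveal which covariate profiles go with which effect patterns.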
Occlusion between different objects is a typical challenge in Multi-Object Tracking (MOT), which often leads to inferior tracking results due to missed detections. The common practice in multi-object tracking is to re-identify the missed objects after their reappearance. Though tracking performance can be boosted by re-identification, identity annotations are required to train the model. In addition, such re-identification still cannot track highly occluded objects when they are missed by the detector. In this paper, we focus on online multi-object tracking and design two novel modules, an unsupervised re-identification learning module and an occlusion estimation module, to handle these problems. Specifically, the proposed unsupervised re-identification learning module neither requires any (pseudo) identity information nor suffers from the scalability issue. The proposed occlusion estimation module predicts the locations where occlusions happen, which are used to estimate the positions of objects missed by the detector. Our study shows that, when applied to state-of-the-art MOT methods, the proposed unsupervised re-identification learning is comparable to supervised re-identification learning, and the tracking performance is further improved by the proposed occlusion estimation module.
We investigate the quality of space approximation of a class of stochastic integral equations of convolution type with Gaussian noise. Such equations arise, for example, when considering mild solutions of stochastic fractional order partial differential equations, but also when considering mild solutions of classical stochastic partial differential equations. The key requirement for the equations is a smoothing property of the deterministic evolution operator, which is typical of parabolic type problems. We show that if one has access to nonsmooth data estimates for the deterministic error operator of a space discretization procedure, together with its derivative, then one obtains error estimates in pathwise H\"older norms with rates that can be read off the deterministic error rates. We illustrate the main result by considering a class of stochastic fractional order partial differential equations and space approximations performed by spectral Galerkin methods and finite elements. We also improve an existing result on the stochastic heat equation.
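The prototypical object of study (a standard mild-solution form, written in our notation rather than the paper's) is the stochastic convolution
\[
X(t) = \int_{0}^{t} E(t - s)\, \mathrm{d}W(s), \qquad t \in [0, T],
\]
where $E(\cdot)$ is the deterministic evolution operator (e.g., an analytic semigroup in the parabolic case, or a resolvent family in the fractional-order case) and $W$ is a Wiener process; the smoothing of $E$ is what allows nonsmooth-data estimates for the discretized error operator to be transferred into pathwise H\"older-norm error rates.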
The concept of Fisher information can be useful even in cases where the probability distributions of interest are not absolutely continuous with respect to the natural reference measure on the underlying space. Practical examples where this extension is useful are provided in the context of multi-object tracking statistical models. Upon defining the Fisher information without introducing a reference measure, we provide remarkably concise proofs of the loss of Fisher information in some widely used multi-object tracking observation models.
Image forensics aims to detect the manipulation of digital images. Currently, splicing detection, copy-move detection, and image retouching detection are drawing much attention from researchers. However, image editing techniques continue to evolve. One emerging image editing technique is colorization, which can colorize grayscale images with realistic colors. Unfortunately, this technique may also be intentionally applied to certain images to confound object recognition algorithms. To the best of our knowledge, no forensic technique has yet been developed to identify whether an image has been colorized. We observe that, compared to natural images, colorized images generated by three state-of-the-art methods exhibit statistical differences in the hue and saturation channels. We also observe statistical inconsistencies in the dark and bright channels, because the colorization process inevitably affects the dark and bright channel values. Based on these observations, i.e., potential traces in the hue, saturation, dark, and bright channels, we propose two simple yet effective detection methods for fake colorized images: Histogram based Fake Colorized Image Detection (FCID-HIST) and Feature Encoding based Fake Colorized Image Detection (FCID-FE). Experimental results demonstrate that both proposed methods exhibit decent performance against multiple state-of-the-art colorization approaches.
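To illustrate the kind of statistics such detectors can build on (a toy sketch of our own, not the FCID-HIST or FCID-FE implementation; the per-pixel dark/bright channels below are a simplification of the usual patch-based definitions), one can collect per-image hue/saturation histograms together with dark- and bright-channel summaries and feed them to any off-the-shelf classifier:
\begin{verbatim}
import numpy as np
from matplotlib.colors import rgb_to_hsv

def colorization_features(rgb, bins=32):
    """rgb: H x W x 3 float array in [0, 1]. Returns a simple feature vector."""
    hsv = rgb_to_hsv(rgb)
    hue_hist, _ = np.histogram(hsv[..., 0], bins=bins, range=(0, 1), density=True)
    sat_hist, _ = np.histogram(hsv[..., 1], bins=bins, range=(0, 1), density=True)
    dark = rgb.min(axis=2)    # per-pixel minimum over R, G, B (simplified dark channel)
    bright = rgb.max(axis=2)  # per-pixel maximum over R, G, B (simplified bright channel)
    extra = np.array([dark.mean(), dark.std(), bright.mean(), bright.std()])
    return np.concatenate([hue_hist, sat_hist, extra])

# Usage: compute features for natural and (suspected) colorized images, then
# train a standard classifier (e.g., an SVM) on the resulting feature vectors.
fake_image = np.random.rand(64, 64, 3)          # stand-in for a decoded image
print(colorization_features(fake_image).shape)  # (2 * bins + 4,)
\end{verbatim}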