We propose a novel method for estimating heterogeneous treatment effects based on the fused lasso. By first ordering samples based on the propensity or prognostic score, we match units from the treatment and control groups. We then run the fused lasso to obtain piecewise constant treatment effects with respect to the ordering defined by the score. Like existing methods based on discretizing the score, our method yields interpretable subgroup effects. However, whereas existing methods fix the subgroups a priori, our causal fused lasso forms data-adaptive subgroups. We show that the estimator consistently estimates the treatment effects conditional on the score under very general conditions on the covariates and treatment. We demonstrate the performance of our procedure in extensive experiments, showing that it can outperform state-of-the-art methods.
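The core computation lends itself to a short sketch. The following minimal Python snippet (using cvxpy; the function name, the penalty parameter `lam`, and the assumption that matched pairs are already formed are our illustrative choices, not the paper's implementation) orders matched pairs by the score and solves a fused-lasso problem on their outcome differences:

```python
import numpy as np
import cvxpy as cp

def causal_fused_lasso(y_treat, y_ctrl, scores, lam):
    """Piecewise-constant treatment effects along a score ordering (sketch)."""
    order = np.argsort(scores)           # order matched pairs by the score
    d = y_treat[order] - y_ctrl[order]   # matched treated-minus-control differences
    tau = cp.Variable(len(d))
    # Fused lasso: squared error plus a total-variation penalty on adjacent effects,
    # which makes the fitted effects piecewise constant in the score ordering
    obj = cp.sum_squares(d - tau) + lam * cp.norm1(cp.diff(tau))
    cp.Problem(cp.Minimize(obj)).solve()
    return order, tau.value
```

Constant stretches of the returned `tau` correspond to the data-adaptive subgroups described in the abstract.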
We propose a new nonparametric modeling framework for causal inference when outcomes depend on how agents are linked in a social or economic network. Such network interference underlies a large literature on treatment spillovers, social interactions, social learning, information diffusion, disease and financial contagion, social capital formation, and more. Our approach works by first characterizing how an agent is linked in the network using the configuration of other agents and connections nearby, as measured by path distance. The impact of a policy or treatment assignment is then learned by pooling outcome data across similarly configured agents. We demonstrate the approach by proposing an asymptotically valid test for the hypothesis of policy irrelevance/no treatment effects and by bounding the mean-squared error of a k-nearest-neighbor estimator for the average or distributional policy effect/treatment response.
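As a rough illustration of the pooling idea, the sketch below (all names hypothetical) uses a Weisfeiler-Lehman hash of the treatment-labeled $r$-ball around each agent as a computable proxy for the local configuration, then averages outcomes within each configuration class; this is a simplification of, not a substitute for, the paper's k-nearest-neighbor construction:

```python
import numpy as np
import networkx as nx

def config_key(G, node, treatment, radius=2):
    # Treatment-labeled r-ball (by path distance) around `node`;
    # the root agent gets a distinct label so the hash is effectively rooted
    ball = nx.ego_graph(G, node, radius=radius)
    labels = {v: ("root|" if v == node else "") + str(treatment[v]) for v in ball}
    nx.set_node_attributes(ball, labels, "t")
    return nx.weisfeiler_lehman_graph_hash(ball, node_attr="t")

def pooled_outcomes(G, treatment, outcome, radius=2):
    # Pool outcome data across identically configured agents
    groups = {}
    for v in G:
        groups.setdefault(config_key(G, v, treatment, radius), []).append(outcome[v])
    return {key: float(np.mean(ys)) for key, ys in groups.items()}
```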
The multiple-network poroelasticity (MPET) equations describe deformation and pressures in an elastic medium permeated by interacting fluid networks. In this paper, we (i) place these equations in the theoretical context of coupled elliptic-parabolic problems, (ii) use this context to derive residual-based a posteriori error estimates and indicators for fully discrete MPET solutions, and (iii) evaluate the performance of these error estimators in adaptive algorithms for a set of test cases ranging from synthetic scenarios to physiologically realistic simulations of brain mechanics.
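For concreteness, in one common formulation (our notation, following the standard MPET literature; the paper's exact coefficients may differ) the equations for the displacement $u$ and the network pressures $p_1,\dots,p_A$ read
$$-\,\mathrm{div}\big(2\mu\,\varepsilon(u) + \lambda (\mathrm{div}\, u) I\big) + \sum_{a=1}^{A} \alpha_a \nabla p_a = f,$$
$$c_a \,\dot p_a + \alpha_a\, \mathrm{div}\, \dot u - \mathrm{div}(K_a \nabla p_a) + \sum_{b \neq a} \xi_{ab}\,(p_a - p_b) = g_a, \qquad a = 1,\dots,A,$$
where $\varepsilon(u)$ is the symmetric gradient, $\mu, \lambda$ are elastic moduli, $\alpha_a$ are Biot-Willis coefficients, $c_a$ storage coefficients, $K_a$ hydraulic conductivities, and $\xi_{ab}$ inter-network transfer coefficients. The parabolic structure of the pressure equations coupled to the elliptic elasticity equation is what places the system in the elliptic-parabolic framework of point (i).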
Replication analysis is widely used in many fields of study. Once a study is published, other researchers will conduct the same or very similar analyses to confirm the reliability of the published research. However, what if the data are confidential? In particular, if the data sets used for the studies are confidential, we cannot release the results of replication analyses to any entity without permission to access the data sets; otherwise, serious privacy leakage may result, especially when the published study and the replication studies use similar or common data sets. For example, examining the influence of the treatment on outliers can seriously leak information about those outliers. In this paper, we build two frameworks for replication analysis based on a differentially private Bayesian approach. We formalize our questions of interest and illustrate the properties of our methods through a combination of theoretical analysis and simulation, demonstrating the feasibility of our approach. We also provide guidance on the choice of parameters and the interpretation of the results.
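The paper's frameworks are Bayesian, but the basic differential-privacy primitive such approaches build on can be sketched in a few lines (a generic Laplace mechanism, shown here only for orientation; it is not the paper's specific release procedure):

```python
import numpy as np

def laplace_release(value, sensitivity, epsilon, rng=None):
    """Release `value` with epsilon-differential privacy via Laplace noise.

    `sensitivity` is the global L1 sensitivity of the statistic: the largest
    change in `value` caused by adding or removing one individual's record.
    """
    rng = rng or np.random.default_rng()
    return value + rng.laplace(scale=sensitivity / epsilon)
```

Smaller `epsilon` means stronger privacy and noisier released statistics, which is exactly the trade-off that guidance on parameter choice must navigate.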
Meta-analyses of survival studies aim to reveal the variation of an effect measure of interest over different studies and to present a meaningful summary. They must address between-study heterogeneity in several dimensions and eliminate spurious sources of variation. Forest plots of the usual (adjusted) hazard ratios are fraught with difficulties from this perspective, since both the magnitude and the interpretation of these hazard ratios depend on factors ancillary to the true study-specific exposure effect. These factors generally include the study duration, the censoring patterns within studies, the covariates adjusted for, and their distribution over exposure groups. Ignoring these features and accepting implausible hidden assumptions may critically affect the interpretation of the pooled effect measure. Risk differences or restricted mean effects over a common follow-up interval and a balanced distribution of a covariate set are natural candidates for exposure evaluation and possible treatment choice. In this paper, we propose differently standardized survival curves over a fitting time horizon, targeting various estimands with their own transportability. Each type of standardization carries a given interpretation within studies and overall, under stated assumptions. These curves can in turn be summarized by standardized study-specific contrasts, including hazard ratios with a more consistent meaning. We prefer forest plots of risk differences at well-chosen time points. Our case study examines overall survival among anal squamous cell carcinoma patients expressing the tumor marker $p16^{INK4a}$ or not, based on the individual patient data of six studies.
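A minimal sketch of one such standardization, assuming a data frame with hypothetical columns `exposed`, `time`, `event`, and a discrete covariate `stratum` (this is a simplified stratified Kaplan-Meier version, not the paper's full procedure), averages stratum-specific risks over a common covariate distribution so both exposure arms are standardized to the same covariate mix:

```python
import numpy as np
from lifelines import KaplanMeierFitter

def standardized_risk(df, exposed, t, strata="stratum"):
    # Weight stratum-specific Kaplan-Meier risks by the pooled covariate
    # distribution, giving both arms the same standard population
    weights = df[strata].value_counts(normalize=True)
    surv = 0.0
    for level, w in weights.items():
        sub = df[(df["exposed"] == exposed) & (df[strata] == level)]
        kmf = KaplanMeierFitter().fit(sub["time"], sub["event"])
        surv += w * float(kmf.survival_function_at_times(t).iloc[0])
    return 1.0 - surv  # standardized risk at time t

# Standardized risk difference at a well-chosen time point t:
# standardized_risk(df, 1, t) - standardized_risk(df, 0, t)
```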
We study the problem of density estimation for a random vector ${\boldsymbol X}$ in $\mathbb R^d$ with probability density $f(\boldsymbol x)$. For a spanning tree $T$ defined on the vertex set $\{1,\dots ,d\}$, the tree density $f_{T}$ is a product of bivariate conditional densities. The optimal spanning tree $T^*$ is the spanning tree $T$ for which the Kullback-Leibler divergence between $f$ and $f_{T}$ is smallest. From i.i.d. data, we identify the optimal tree $T^*$ and construct, in a computationally efficient manner, a tree density estimate $f_n$ such that, without any regularity conditions on the density $f$, $\lim_{n\to \infty} \int |f_n(\boldsymbol x)-f_{T^*}(\boldsymbol x)|\,d\boldsymbol x=0$ a.s. For Lipschitz continuous $f$ with bounded support, we show that $\mathbb E\{ \int |f_n(\boldsymbol x)-f_{T^*}(\boldsymbol x)|\,d\boldsymbol x\}=O(n^{-1/4})$.
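Identifying $T^*$ follows the classical Chow-Liu construction: the KL-optimal tree maximizes the sum of pairwise mutual informations over its edges. A minimal plug-in sketch (histogram-based mutual information; the bin count and function names are our illustrative choices, not the paper's estimator):

```python
import numpy as np
import networkx as nx

def empirical_mi(x, y, bins=20):
    # Plug-in mutual information from a 2-D histogram of the pair (x, y)
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / np.outer(px, py)[mask])).sum())

def chow_liu_tree(X):
    # The KL-optimal spanning tree is the maximum spanning tree of the
    # complete graph weighted by pairwise mutual informations
    d = X.shape[1]
    G = nx.Graph()
    for i in range(d):
        for j in range(i + 1, d):
            G.add_edge(i, j, weight=empirical_mi(X[:, i], X[:, j]))
    return nx.maximum_spanning_tree(G)
```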
Many estimators for network treatment effects have recently been proposed, but their optimality properties in terms of semiparametric efficiency have yet to be resolved. We present a simple yet flexible asymptotic framework to derive the efficient influence function and the semiparametric efficiency lower bound for a family of network causal effects under partial interference. An important corollary of our results is that one of the existing estimators, due to Liu et al. (2019), is locally efficient. We also present other estimators that are efficient and discuss results on adaptive estimation. We conclude by using the efficient estimators to study the direct and spillover effects of conditional cash transfer programs in Colombia.
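For orientation, in the degenerate no-interference case the efficient influence function for the average treatment effect $\psi$ reduces to the familiar augmented-inverse-probability-weighting form (standard notation, not the paper's network-specific derivation):
$$\varphi(O) = m(1,X) - m(0,X) + \frac{A\{Y - m(1,X)\}}{e(X)} - \frac{(1-A)\{Y - m(0,X)\}}{1 - e(X)} - \psi,$$
where $e(X) = P(A = 1 \mid X)$ is the propensity score and $m(a,X) = E[Y \mid A = a, X]$ is the outcome regression. The paper's contribution is to extend this object, and the associated efficiency bound, to families of network effects under partial interference.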
The simultaneous estimation of many parameters based on data collected from corresponding studies is a key research problem that has received renewed attention in the high-dimensional setting. Many practical situations involve heterogeneous data, where the heterogeneity is captured by a nuisance parameter. Effectively pooling information across samples while correctly accounting for heterogeneity presents a significant challenge in large-scale estimation problems. We address this issue by introducing the "Nonparametric Empirical Bayes Structural Tweedie" (NEST) estimator, which efficiently estimates the unknown effect sizes and properly adjusts for heterogeneity via a generalized version of Tweedie's formula. For the normal means problem, NEST simultaneously handles the two main selection biases introduced by heterogeneity: selection bias in the mean, which cannot be effectively corrected without also correcting for selection bias in the variance. Our theoretical results show that NEST has strong asymptotic properties without requiring explicit assumptions about the prior. Extensions to other two-parameter members of the exponential family are discussed. Simulation studies show that NEST outperforms competing methods, with substantial efficiency gains in many settings. We demonstrate the proposed method by estimating the batting averages of baseball players and the Sharpe ratios of mutual fund returns.
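The classical homoscedastic Tweedie formula that NEST generalizes is easy to sketch: for $X_i \mid \theta_i \sim N(\theta_i, \sigma^2)$, one has $E[\theta_i \mid x_i] = x_i + \sigma^2\, \partial_x \log f(x_i)$, where $f$ is the marginal density of the observations. A minimal version (Gaussian KDE and a finite-difference score; both are our illustrative choices, and NEST's generalized formula additionally corrects for unknown heterogeneous variances):

```python
import numpy as np
from scipy.stats import gaussian_kde

def tweedie_normal(x, sigma2):
    # E[theta | x] = x + sigma^2 * d/dx log f(x), with the marginal density f
    # estimated nonparametrically by a kernel density estimate
    kde = gaussian_kde(x)
    eps = 1e-3 * np.std(x)
    score = (kde(x + eps) - kde(x - eps)) / (2 * eps * kde(x))  # d/dx log f
    return x + sigma2 * score
```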
Constrained tensor and matrix factorization models make it possible to extract interpretable patterns from multiway data, so identifiability properties and efficient algorithms for constrained low-rank approximations are important current research topics. This work considers the columns of the factor matrices of a low-rank approximation to be sparse in a known and possibly overcomplete basis, a model coined Dictionary-based Low-Rank Approximation (DLRA). While earlier contributions focused on finding factor columns inside a dictionary of candidate columns, i.e., one-sparse approximations, this work is the first to tackle DLRA with sparsity larger than one. I propose to focus on the sparse-coding subproblem, coined Mixed Sparse-Coding (MSC), that emerges when solving DLRA with an alternating optimization strategy. Several algorithms based on sparse-coding heuristics (greedy methods, convex relaxations) are provided to solve MSC, and their performance is evaluated on simulated data. I then show how to adapt an efficient MSC solver based on the LASSO to compute Dictionary-based Matrix Factorization and Canonical Polyadic Decomposition in the context of hyperspectral image processing and chemometrics. These experiments suggest that DLRA extends the modeling capabilities of low-rank approximations, helps reduce estimation variance, and enhances the identifiability and interpretability of the estimated factors.
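A simplified columnwise LASSO relaxation in the spirit of the convex heuristics evaluated (names and the regularization parameter `alpha` are our assumptions; this is not the paper's exact MSC solver) codes each factor column sparsely in the dictionary `D`:

```python
import numpy as np
from sklearn.linear_model import Lasso

def msc_lasso(Y, D, alpha=0.1):
    # Approximate each factor column Y[:, j] as D @ b with a sparse b, i.e.
    # min_b 0.5 * ||Y[:, j] - D b||_2^2 + alpha * ||b||_1, column by column
    B = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        B[:, j] = Lasso(alpha=alpha, fit_intercept=False).fit(D, Y[:, j]).coef_
    return B
```

Within an alternating optimization scheme, such a solver would update the dictionary coefficients while the other factor matrices of the low-rank model are held fixed.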
I propose a new type of confidence interval for correct asymptotic inference after using data to select a model of interest, without assuming that any model is correctly specified. This hybrid confidence interval is constructed by combining techniques from the selective inference and post-selection inference literatures to yield a short confidence interval across a wide range of data realizations. I show that hybrid confidence intervals have correct asymptotic coverage uniformly over a large class of probability distributions that do not bound scaled model parameters. I illustrate the use of these confidence intervals in the problem of inference after using the LASSO objective function to select a regression model of interest, and I provide evidence of their desirable length and coverage properties in small samples via Monte Carlo experiments covering a variety of data distributions, as well as via an empirical application to the predictors of diabetes disease progression.
Implicit probabilistic models are defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.