Estimating the causal effect of a treatment or exposure for a subpopulation is of great interest in many biomedical and economic studies. Expected shortfall, also referred to as the super-quantile, is an attractive effect-size measure that can accommodate data heterogeneity and aggregate local information about the effect over a region of interest of the outcome distribution. In this article, we propose a model under an instrumental variable framework to quantify the ComplieR Expected Shortfall Treatment Effect (CRESTE) for a binary endogenous treatment variable. By utilizing the special characteristics of a binary instrumental variable and a specific formulation of Neyman-orthogonalization, we propose a two-step estimation procedure, which can be implemented by simply solving a weighted least-squares regression and a weighted quantile regression with estimated weights. We develop the asymptotic properties for the proposed estimator and use numerical simulations to confirm its validity and robust finite-sample performance. An illustrative analysis of a National Job Training Partnership Act study is presented to show the practical utility of the proposed method.
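The abstract names two building blocks, a weighted least-squares regression and a weighted quantile regression with estimated weights. The sketch below is a minimal, hedged rendering of those two steps; since the paper's Neyman-orthogonal weight construction is not spelled out here, Abadie-type complier weights are used purely as an illustrative stand-in.

```python
import numpy as np
from scipy.optimize import minimize

def complier_weights(d, z, pz):
    """Abadie-type kappa weights for compliers (illustrative stand-in; the paper's
    Neyman-orthogonal weights differ, and these can be negative in finite samples).
    d: binary treatment, z: binary instrument, pz: fitted P(Z=1 | X)."""
    return 1.0 - d * (1 - z) / (1 - pz) - (1 - d) * z / pz

def weighted_ls(X, y, w):
    """Weighted least squares: argmin_b sum_i w_i (y_i - x_i'b)^2, in closed form."""
    return np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

def weighted_qr(X, y, w, tau):
    """Weighted quantile regression at level tau via the weighted check loss,
    warm-started at the weighted least-squares fit."""
    def loss(b):
        r = y - X @ b
        return np.sum(w * r * (tau - (r < 0)))
    return minimize(loss, weighted_ls(X, y, w), method="Nelder-Mead").x
```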
Users online tend to join polarized groups of like-minded peers around shared narratives, forming echo chambers. The echo chamber effect and opinion polarization may be driven by several factors, including human biases in information consumption and personalized recommendations produced by feed algorithms. Until now, studies have mainly used opinion dynamics models to explore the mechanisms behind the emergence of polarization and echo chambers, aiming to determine the key factors contributing to these phenomena and to identify their interplay. However, the validation of model predictions with empirical data still suffers from two main drawbacks: a lack of systematicity and a reliance on qualitative analysis. In our work, we bridge this gap by providing a method to numerically compare the opinion distributions obtained from simulations with those measured on social media. To validate this procedure, we develop an opinion dynamics model that takes into account the interplay between human and algorithmic factors. We subject our model to empirical testing with data from diverse social media platforms and benchmark it against two state-of-the-art models. To further enhance our understanding of social media platforms, we provide a synthetic description of their characteristics in terms of the model's parameter space. This representation has the potential to facilitate the refinement of feed algorithms, thus mitigating the detrimental effects of extreme polarization on online discourse.
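As a minimal sketch of the kind of numerical comparison described above (the paper's actual metric is not specified in the abstract), simulated and empirical opinion samples on a common scale can be compared with standard distributional distances:

```python
import numpy as np
from scipy.stats import wasserstein_distance, ks_2samp

def compare_opinion_distributions(simulated, empirical):
    """Numerically compare simulated and empirical opinion distributions.

    Opinions are assumed to lie on a common scale, e.g. [-1, 1]; the specific
    distances below are illustrative choices, not necessarily the paper's.
    """
    return {
        "wasserstein": wasserstein_distance(simulated, empirical),
        "ks_statistic": ks_2samp(simulated, empirical).statistic,
    }
```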
In most major cities and urban areas, residents form homogeneous neighborhoods along ethnic or socioeconomic lines. This phenomenon is widely known as residential segregation and has been studied extensively. Fifty years ago, Schelling proposed a landmark model that explains residential segregation in an elegant agent-based way. A recent stream of papers analyzed Schelling's model using game-theoretic approaches. However, all these works considered models with a given number of discrete types modeling different ethnic groups. We focus on segregation caused by non-categorical attributes, such as household income or position in a political left-right spectrum. For this, we consider agent types that can be represented as real numbers. This opens up a great variety of reasonable models and, as a proof of concept, we focus on several natural candidates. In particular, we consider agents that evaluate their location by the average type-difference or the maximum type-difference to their neighbors, or by having a certain tolerance range for type-values of neighboring agents. We study the existence and computation of equilibria and provide bounds on the Price of Anarchy and Stability. Also, we present simulation results that compare our models and shed light on the obtained equilibria for our variants.
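The three location-evaluation variants named above (average type-difference, maximum type-difference, and a tolerance range) can be written down directly; the sketch below is one minimal rendering of them, and the exact cost definitions in the paper may differ.

```python
import numpy as np

def avg_cost(own_type, neighbour_types):
    """Average type-difference cost: agents prefer locations with lower values."""
    return float(np.mean(np.abs(np.asarray(neighbour_types) - own_type)))

def max_cost(own_type, neighbour_types):
    """Maximum type-difference cost over the neighbouring agents."""
    return float(np.max(np.abs(np.asarray(neighbour_types) - own_type)))

def tolerance_happy(own_type, neighbour_types, tol):
    """Tolerance-range variant: content iff every neighbour's type is within tol."""
    return bool(np.all(np.abs(np.asarray(neighbour_types) - own_type) <= tol))
```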
Treatment effect estimation is of high importance for both researchers and practitioners across many scientific and industrial domains. The abundance of observational data has led researchers to use such data increasingly for the estimation of causal effects. However, these data suffer from several weaknesses, including various biases, which lead to inaccurate causal effect estimates if not handled properly. Therefore, several machine learning techniques have been proposed, most of them focusing on leveraging the predictive power of neural network models to attain more precise estimation of causal effects. In this work, we propose a new methodology, named Nearest Neighboring Information for Causal Inference (NNCI), for integrating valuable nearest neighboring information into neural network-based models for estimating treatment effects. The proposed NNCI methodology is applied to some of the most well-established neural network-based models for treatment effect estimation with the use of observational data. Numerical experiments and analysis provide empirical and statistical evidence that the integration of NNCI with state-of-the-art neural network models leads to considerably improved treatment effect estimates on a variety of well-known challenging benchmarks.
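A hedged sketch of one plausible reading of "nearest neighboring information" follows: k-nearest-neighbor outcome summaries among treated and control units are appended to the covariates before they are fed to a neural-network estimator. The paper's actual construction may differ; the feature definition below is an assumption for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nn_features(X, y, t, k=5):
    """Append nearest-neighbour outcome information to the covariates.

    For each unit, the mean observed outcome of its k nearest treated and k
    nearest control neighbours is added as two extra input columns for a
    downstream neural-network treatment effect estimator.
    """
    feats = np.zeros((len(X), 2))
    for group, col in ((1, 0), (0, 1)):
        Xg, yg = X[t == group], y[t == group]
        nn = NearestNeighbors(n_neighbors=k).fit(Xg)
        _, idx = nn.kneighbors(X)                 # indices into the group-specific sample
        feats[:, col] = yg[idx].mean(axis=1)
    return np.hstack([X, feats])
```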
Existing risk-aware multi-armed bandit models typically focus on risk measures of individual options such as variance. As a result, they cannot be directly applied to important real-world online decision making problems with correlated options. In this paper, we propose a novel Continuous Mean-Covariance Bandit (CMCB) model to explicitly take into account option correlation. Specifically, in CMCB, there is a learner who sequentially chooses weight vectors on given options and observes random feedback according to the decisions. The learner's objective is to achieve the best trade-off between reward and risk, measured with option covariance. To capture different reward observation scenarios in practice, we consider three feedback settings, i.e., full-information, semi-bandit and full-bandit feedback. We propose novel algorithms with optimal regrets (within logarithmic factors), and provide matching lower bounds to validate their optimality. The experimental results also demonstrate the superiority of our algorithms. To the best of our knowledge, this is the first work that considers option correlation in risk-aware bandits and explicitly quantifies how arbitrary covariance structures impact the learning performance. The novel analytical techniques we develop, for exploiting the estimated covariance to build concentration bounds and for bounding the risk of selected actions based on sampling-strategy properties, can likely find applications in other bandit analyses and are of independent interest.
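As a minimal illustration of the mean-covariance trade-off (deliberately ignoring the exploration and confidence-bound machinery the paper's algorithms require), the per-round exploitation step can be sketched as a constrained quadratic problem; the simplex action set and the risk-aversion parameter rho below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def choose_weights(mu_hat, sigma_hat, rho):
    """One exploitation step of a mean-covariance trade-off: maximise
    w' mu_hat - rho * w' Sigma_hat w over the probability simplex."""
    d = len(mu_hat)
    objective = lambda w: -(w @ mu_hat - rho * w @ sigma_hat @ w)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(objective, np.full(d, 1.0 / d),
                   bounds=[(0.0, 1.0)] * d, constraints=cons)
    return res.x
```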
Under two-phase designs, the outcome and several covariates and confounders are measured in the first phase, and a new predictor of interest, which may be costly to collect, can be measured on a subsample in the second phase, without incurring the costs of recruiting subjects. By using the information gathered in the first phase, the second-phase subsample can be selected to enhance the efficiency of testing and estimating the effect of the new predictor on the outcome. Past studies have focused on optimal two-phase sampling schemes for statistical inference on local ($\beta = o(1)$) effects of the predictor of interest. In this study, we propose an extension of the two-phase designs that employs an optimal sampling scheme for estimating predictor effects with pseudo conditional likelihood estimators in case-control studies. This approach is applicable to both local and non-local effects. We demonstrate the effectiveness of the proposed sampling scheme through simulation studies and analysis of data from 170 patients hospitalized for treatment of COVID-19. The results show a significant improvement in the estimation of the parameter of interest.
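The abstract does not spell out the proposed optimal sampling scheme; purely as an illustrative, influence-style heuristic (an assumption, not the paper's design), the sketch below over-samples phase-two subjects whose outcomes are poorly explained by the phase-one variables.

```python
import numpy as np

def second_phase_sample(resid, n2, rng=None):
    """Select a second-phase subsample with inclusion probabilities proportional
    to |residual| from a phase-one outcome model, with expected size about n2.

    Units poorly explained by phase-one covariates are over-sampled for the
    costly new predictor; this is a heuristic stand-in, not the optimal scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = np.clip(np.abs(resid) / np.abs(resid).sum() * n2, 0.0, 1.0)
    return np.nonzero(rng.random(len(p)) < p)[0]
```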
In the recent literature on estimating heterogeneous treatment effects, each proposed method makes its own set of restrictive assumptions about the intervention's effects and which subpopulations to explicitly estimate. Moreover, the majority of the literature provides no mechanism to identify which subpopulations are the most affected--beyond manual inspection--and provides little guarantee on the correctness of the identified subpopulations. Therefore, we propose Treatment Effect Subset Scan (TESS), a new method for discovering which subpopulation in a randomized experiment is most significantly affected by a treatment. We frame this challenge as a pattern detection problem where we efficiently maximize a nonparametric scan statistic (a measure of the conditional quantile treatment effect) over subpopulations. Furthermore, we identify the subpopulation which experiences the largest distributional change as a result of the intervention, while making minimal assumptions about the intervention's effects or the underlying data generating process. In addition to the algorithm, we demonstrate that under the sharp null hypothesis of no treatment effect, the asymptotic Type I and II error can be controlled, and provide sufficient conditions for detection consistency--i.e., exact identification of the affected subpopulation. Finally, we validate the efficacy of the method by discovering heterogeneous treatment effects in simulations and in real-world data from a well-known program evaluation study.
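As a deliberately simplified stand-in for the scan step (scoring candidate subgroups with a Mann-Whitney statistic rather than the conditional-quantile scan statistic TESS maximizes efficiently), a brute-force search over subgroups defined by a single categorical covariate could look as follows.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def best_subgroup(values, y, t):
    """Return the subgroup (level of one categorical covariate) with the largest
    treated-vs-control distributional shift, scored by -log Mann-Whitney p-value.

    Illustrative only: TESS scans a much richer class of subpopulations with an
    efficiently maximized nonparametric scan statistic.
    """
    best, best_score = None, -np.inf
    for v in np.unique(values):
        mask = values == v
        y1, y0 = y[mask & (t == 1)], y[mask & (t == 0)]
        if len(y1) and len(y0):
            score = -np.log(mannwhitneyu(y1, y0).pvalue)
            if score > best_score:
                best, best_score = v, score
    return best, best_score
```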
Likelihood-free inference methods typically make use of a distance between simulated and real data. A common example is the maximum mean discrepancy (MMD), which has previously been used for approximate Bayesian computation, minimum distance estimation, generalised Bayesian inference, and within the nonparametric learning framework. The MMD is commonly estimated at a root-$m$ rate, where $m$ is the number of simulated samples. This can lead to significant computational challenges since a large $m$ is required to obtain an accurate estimate, which is crucial for parameter estimation. In this paper, we propose a novel estimator for the MMD with significantly improved sample complexity. The estimator is particularly well suited for computationally expensive smooth simulators with low- to mid-dimensional inputs. This claim is supported through both theoretical results and an extensive simulation study on benchmark simulators.
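For reference, the standard unbiased U-statistic estimator of the squared MMD with a Gaussian kernel, whose root-$m$ behaviour the abstract refers to, is sketched below; the paper's improved estimator is not reproduced here, and the bandwidth choice is an illustrative assumption.

```python
import numpy as np

def mmd_squared(x, y, bandwidth=1.0):
    """Unbiased (U-statistic) estimate of MMD^2 with a Gaussian kernel.

    x: (m, d) simulated samples, y: (n, d) observed samples.
    """
    def gram(a, b):
        sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
        return np.exp(-sq / (2 * bandwidth**2))
    m, n = len(x), len(y)
    kxx, kyy, kxy = gram(x, x), gram(y, y), gram(x, y)
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * kxy.mean()
```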
We study a double robust Bayesian inference procedure on the average treatment effect (ATE) under unconfoundedness. Our robust Bayesian approach involves two adjustment steps: first, we make a correction for prior distributions of the conditional mean function; second, we introduce a recentering term on the posterior distribution of the resulting ATE. We prove asymptotic equivalence of our Bayesian estimator and double robust frequentist estimators by establishing a new semiparametric Bernstein-von Mises theorem under double robustness; i.e., the lack of smoothness of conditional mean functions can be compensated by high regularity of the propensity score and vice versa. Consequently, the resulting Bayesian point estimator internalizes the bias correction in the same way as the frequentist-type doubly robust estimator, and the Bayesian credible sets form confidence intervals with asymptotically exact coverage probability. In simulations, we find that this robust Bayesian procedure leads to significant bias reduction of point estimation and accurate coverage of confidence intervals, especially when the dimensionality of covariates is large relative to the sample size and the underlying functions become complex. We illustrate our method in an application to the National Supported Work Demonstration.
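The frequentist-type doubly robust (AIPW) estimator to which the Bayesian point estimator is shown to be asymptotically equivalent can be written in a few lines; in the sketch below the nuisance fits (outcome regressions and propensity score) are assumed to be supplied by the user, and the estimator is standard rather than anything specific to the paper.

```python
import numpy as np

def aipw_ate(y, d, mu1, mu0, e):
    """Doubly robust (AIPW) estimate of the ATE and its standard error.

    y: outcomes, d: binary treatment indicator,
    mu1/mu0: fitted E[Y | X, D=1] and E[Y | X, D=0], e: fitted P(D=1 | X).
    """
    psi = (mu1 - mu0
           + d * (y - mu1) / e
           - (1 - d) * (y - mu0) / (1 - e))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(y))
```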
Chatterjee, Gmyr, and Pandurangan [PODC 2020] recently introduced the notion of awake complexity for distributed algorithms, which measures the number of rounds in which a node is awake. In the other rounds, the node is sleeping and performs no computation or communication. Measuring the number of awake rounds can be of significance in many settings of distributed computing, e.g., in sensor networks where energy consumption is of concern. In that paper, Chatterjee et al. provide an elegant randomized algorithm for the Maximal Independent Set (MIS) problem that achieves an $O(1)$ node-averaged awake complexity. That is, the average awake time among the nodes is $O(1)$ rounds. However, to achieve that, the algorithm sacrifices the more standard round complexity measure, going from the well-known $O(\log n)$ bound for MIS, due to Luby [STOC'85], to $O(\log^{3.41} n)$ rounds. Our first contribution is to present a simple randomized distributed MIS algorithm that, with high probability, has $O(1)$ node-averaged awake complexity and $O(\log n)$ worst-case round complexity. Our second, and more technical, contribution is to show algorithms with the same $O(1)$ node-averaged awake complexity and $O(\log n)$ worst-case round complexity for $(1+\varepsilon)$-approximation of maximum matching and $(2+\varepsilon)$-approximation of minimum vertex cover, where $\varepsilon$ denotes an arbitrarily small positive constant.
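To make the awake-complexity measure concrete, the sketch below runs classic Luby-style MIS (not the paper's $O(1)$ node-averaged-awake algorithm) and reports the node-averaged number of rounds each node stays undecided, which is a naive proxy for its awake time under a schedule where undecided nodes never sleep.

```python
import random

def luby_mis_awake(adj, seed=0):
    """Luby-style randomized MIS on an undirected graph {node: set(neighbours)}.

    Each round, every undecided node draws a random value; local minima join the
    MIS and their neighbours drop out.  We also count, per node, the rounds it
    stays undecided, to illustrate the node-averaged awake measure.
    """
    rng = random.Random(seed)
    undecided = set(adj)
    mis, awake = set(), {v: 0 for v in adj}
    while undecided:
        for v in undecided:
            awake[v] += 1
        vals = {v: rng.random() for v in undecided}
        winners = {v for v in undecided
                   if all(vals[v] < vals[u] for u in adj[v] if u in undecided)}
        mis |= winners
        undecided -= winners | {u for v in winners for u in adj[v]}
    return mis, sum(awake.values()) / len(adj)
```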
When an exposure of interest is confounded by unmeasured factors, an instrumental variable (IV) can be used to identify and estimate certain causal contrasts. Identification of the marginal average treatment effect (ATE) from IVs relies on strong untestable structural assumptions. When one is unwilling to assert such structure, IVs can nonetheless be used to construct bounds on the ATE. Famously, Balke and Pearl (1997) proved tight bounds on the ATE for a binary outcome, in a randomized trial with noncompliance and no covariate information. We demonstrate how these bounds remain useful in observational settings with baseline confounders of the IV, as well as in randomized trials with measured baseline covariates. The resulting bounds on the ATE are non-smooth functionals, and thus standard nonparametric efficiency theory is not immediately applicable. To remedy this, we propose (1) under a novel margin condition, influence function-based estimators of the bounds that can attain parametric convergence rates when the nuisance functions are modeled flexibly, and (2) estimators of smooth approximations of these bounds. We propose extensions to continuous outcomes, explore finite sample properties in simulations, and illustrate the proposed estimators in a randomized field experiment studying the effects of canvassing on voter turnout.
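The classical Balke-Pearl bounds can be recovered by linear programming over the sixteen latent response types, which is essentially how they were originally derived. The sketch below assumes the simplest setting the abstract mentions (a randomized binary instrument, binary treatment and outcome, no covariates) and observed probabilities exactly compatible with some response-type distribution; it does not implement the paper's influence-function-based or smoothed estimators.

```python
import numpy as np
from scipy.optimize import linprog
from itertools import product

def balke_pearl_bounds(p):
    """Balke-Pearl bounds on the ATE from P(Y=y, D=d | Z=z).

    p[z][(y, d)] gives the observed conditional probability for binary Y, D, Z.
    Solves two linear programs over the 16 latent response types
    (d0, d1, y0, y1): treatment taken under each instrument value and
    outcome realized under each treatment value.
    """
    types = list(product([0, 1], repeat=4))
    A_eq, b_eq = [], []
    for z in (0, 1):
        for y in (0, 1):
            for d in (0, 1):
                # A type is consistent with cell (y, d | z) iff it takes treatment d
                # under instrument z and realizes outcome y under treatment d.
                row = [1.0 if (t[z] == d and t[2 + d] == y) else 0.0 for t in types]
                A_eq.append(row)
                b_eq.append(p[z][(y, d)])
    c = np.array([t[3] - t[2] for t in types], dtype=float)   # ATE contribution per type
    lower = linprog(c,  A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
    upper = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
    return lower.fun, -upper.fun
```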