The survey world is rife with nonresponse, and in many situations the data are not missing at random, which is a major source of bias for statistical inference. Nonetheless, the survey world is rich with paradata that track the data collection process. A traditional form of paradata is callback data, which record contact attempts. Although it has been recognized that callback data are useful for nonresponse adjustment, until recently they were not widely used in statistical analysis. In particular, the few existing attempts to use callback data for estimating response propensity scores rest on fully parametric models and fairly stringent assumptions. In this paper, we propose a stableness-of-resistance assumption for identifying the propensity scores and the outcome distribution of interest, without imposing any parametric restrictions. We establish the semiparametric efficiency theory, derive the efficient influence function, and propose a suite of semiparametric estimation methods, including doubly robust ones, which generalize existing parametric approaches. We also consider an extension of this framework to causal inference with adjustment for unmeasured confounding. Application to a Consumer Expenditure Survey dataset suggests an association between nonresponse and high housing expenditures, and reanalysis of Card (1995)'s dataset on the return to schooling shows a smaller effect of education in the overall population than among the respondents.
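As a point of reference for the estimand, here is a minimal sketch of generic nonresponse adjustment via an estimated response propensity, i.e., an inverse-propensity-weighted mean over respondents. For simplicity the simulated missingness below is at random, which is precisely the assumption the paper dispenses with by exploiting callback data; the variables and models are illustrative only.

```python
# Generic inverse-propensity-weighted (Hajek) mean under missingness at random.
# This only fixes the estimand; the paper identifies the propensity WITHOUT the
# at-random assumption, using callback (contact-attempt) data. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 10000
x = rng.normal(size=n)                          # fully observed covariate
y = 1.0 + 2.0 * x + rng.normal(size=n)          # outcome of interest (true mean = 1)
p_respond = 1 / (1 + np.exp(-(0.5 + x)))        # response depends on x (illustrative MAR)
r = rng.binomial(1, p_respond)                  # response indicator

prop = LogisticRegression().fit(x[:, None], r).predict_proba(x[:, None])[:, 1]
ipw_mean = np.sum(r * y / prop) / np.sum(r / prop)
print("respondent mean:", y[r == 1].mean(), "| IPW-adjusted mean:", ipw_mean)
```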
Partially linear additive models generalize linear models: the relation between the response and the covariates is modeled by assuming that some covariates enter linearly while each of the others enters through an unknown univariate smooth function. The harmful effect of outliers, either in the residuals or in the covariates involved in the linear component, has been described for partially linear models, that is, when only one nonparametric component is involved in the model. When dealing with additive components, the problem of providing reliable estimators in the presence of atypical data is of practical importance and motivates the need for robust procedures. Hence, we propose a family of robust estimators for partially linear additive models by combining $B$-splines with robust linear regression estimators. We obtain consistency results, rates of convergence, and asymptotic normality of the linear components under mild assumptions. A Monte Carlo study is carried out to compare the performance of the robust proposal with its classical counterpart under different models and contamination schemes. The numerical experiments show the advantage of the proposed methodology for finite samples. We also illustrate the usefulness of the proposed approach on a real data set.
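As a rough illustration of the general recipe, the sketch below expands the additive components in a B-spline basis and fits the combined design with a robust linear method. scikit-learn's HuberRegressor is used as a stand-in for the paper's robust estimators, the data are simulated, and the contamination is confined to the response; guarding against outliers in the carriers of the linear component requires the kind of estimators proposed in the paper.

```python
# Sketch: B-spline expansion of the additive components + a robust linear fit.
# HuberRegressor is a stand-in for the paper's estimators; data are simulated.
import numpy as np
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
n = 500
x_lin = rng.normal(size=(n, 1))            # covariate entering linearly
t1, t2 = rng.uniform(0, 1, (2, n))         # covariates entering through smooth functions
y = 2.0 * x_lin[:, 0] + np.sin(2 * np.pi * t1) + t2**2 + rng.normal(scale=0.3, size=n)
y[rng.choice(n, 25, replace=False)] += 15.0   # a few gross outliers in the response

# B-spline basis for each additive (nonparametric) component
B1 = SplineTransformer(n_knots=8, degree=3, include_bias=False).fit_transform(t1[:, None])
B2 = SplineTransformer(n_knots=8, degree=3, include_bias=False).fit_transform(t2[:, None])

# Robust fit of the combined design: linear part + spline coefficients
X = np.hstack([x_lin, B1, B2])
fit = HuberRegressor(epsilon=1.35).fit(X, y)
print("estimated linear coefficient:", fit.coef_[0])   # close to 2 despite the outliers
```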
We consider SNP data at $M$ SNPs from $N$ individuals who are an admixture of $K$ unknown ancestral populations. Let $\Pi_{si}$ be the frequency of the reference allele of individual $i$ at SNP $s$, so that the number of reference alleles at SNP $s$ for a diploid individual is binomially distributed with parameters 2 and $\Pi_{si}$. We suppose $\Pi_{si}=\sum_{k=1}^K F_{sk}Q_{ki}$, where $F_{sk}$ is the allele frequency of SNP $s$ in population $k$ and $Q_{ki}$ is the proportion of population $k$ in the ancestry of individual $i$. We are interested in the identifiability of $F$ and $Q$, up to a relabelling of the ancestral populations: under what conditions does $\Pi = F^1Q^1 = F^2Q^2$ imply $F^1 = F^2$ and $Q^1 = Q^2$? We show that the anchor condition (Cabreros and Storey, 2019) on one matrix, together with an independence condition on the other matrix, is sufficient for identifiability. We argue that the proof of the necessary condition in Cabreros and Storey (2019) is incorrect, and we provide a correct proof, which in addition does not require knowledge of the number of ancestral populations. We also provide abstract necessary and sufficient conditions for identifiability, and we show that one cannot deviate substantially from the anchor condition without losing identifiability. Finally, we give necessary and sufficient conditions for identifiability in the non-admixed case.
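A small simulation of the stated model may help fix ideas: the genotype distribution depends on $(F, Q)$ only through $\Pi = FQ$, so identifiability asks when a factorization of $\Pi$ is unique up to relabelling. The dimensions and the anchor-style rows below are illustrative choices, not part of the paper.

```python
# Simulate genotypes from the admixture model Pi = F Q, G_si ~ Binomial(2, Pi_si).
# Dimensions and the anchor-style structure below are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 1000, 50, 3                       # SNPs, individuals, ancestral populations

F = rng.uniform(0.05, 0.95, size=(M, K))    # allele frequencies per population
# Anchor-style rows: a few SNPs carried (essentially) by a single population
F[:K, :] = 0.01
F[np.arange(K), np.arange(K)] = 0.99

Q = rng.dirichlet(alpha=np.ones(K), size=N).T   # K x N admixture proportions (columns sum to 1)
Pi = F @ Q                                      # M x N reference-allele frequencies
G = rng.binomial(2, Pi)                         # observed genotype counts

# Any (F', Q') with F'Q' = FQ gives the same genotype distribution, e.g. F' = F A,
# Q' = A^{-1} Q for suitable A; identifiability asks when such an A must be a
# permutation, i.e. a mere relabelling of the populations.
A = np.eye(K)[:, [1, 0, 2]]                     # permutation: relabel populations 1 and 2
assert np.allclose((F @ A) @ (np.linalg.inv(A) @ Q), Pi)
```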
We propose some extensions to semi-parametric models based on Bayesian additive regression trees (BART). In the semi-parametric BART paradigm, the response variable is approximated by the sum of a linear predictor and a BART model, where the linear component is responsible for estimating the main effects and BART accounts for non-specified interactions and non-linearities. Previous semi-parametric models based on BART have assumed that the sets of covariates in the linear predictor and in the BART model are mutually exclusive, in an attempt to avoid bias and poor coverage properties. The main novelty in our approach lies in the way we change the tree-generation moves in BART to deal with bias/confounding between the parametric and non-parametric components, even when they have covariates in common. This allows us to model complex interactions involving the covariates of primary interest, both among themselves and with those in the BART component. Through synthetic and real-world examples, we demonstrate that the performance of our novel semi-parametric BART is competitive when compared to regression models, alternative formulations of semi-parametric BART, and other tree-based methods. The implementation of the proposed method is available at https://github.com/ebprado/CSP-BART.
Most governments employ a set of quasi-standard measures to fight COVID-19, including wearing masks, social distancing, virus testing, contact tracing, and vaccination. However, combining these measures into an efficient holistic pandemic response instrument is even more involved than anticipated. We argue that some non-trivial factors behind the varying effectiveness of these measures are selfish decision-making and differing national implementations of the response mechanism. In this paper, through simple games, we show the effect of individual incentives on decisions about mask wearing, social distancing, and vaccination, and how these may result in sub-optimal outcomes. We also demonstrate the responsibility of national authorities in designing these games properly with regard to data transparency, the chosen policies, and their influence on the preferred outcome. We promote a mechanism design approach: it is in the best interest of every government to carefully balance social good and response costs when implementing its pandemic response mechanism; moreover, there is no one-size-fits-all design for an effective response.
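For intuition only, the toy two-player mask-wearing game below (with made-up payoffs, not one of the paper's games) shows how individually rational choices can settle on an equilibrium that misses the socially optimal profile.

```python
# Toy symmetric two-player "wear a mask" game with made-up payoffs.
# Actions: 0 = no mask, 1 = mask. payoff[(a, b)] is the payoff of the player
# choosing a when the other player chooses b.
import itertools

payoff = {
    (0, 0): 1.0,   # neither masks: both face high infection risk
    (0, 1): 3.0,   # free-ride on the other's mask: protected, no cost
    (1, 0): 0.0,   # bear the cost while the other free-rides
    (1, 1): 2.5,   # both mask: low risk minus a small inconvenience cost
}

def is_nash(a, b):
    # neither player gains by unilaterally switching their own action
    return (payoff[(a, b)] >= payoff[(1 - a, b)] and
            payoff[(b, a)] >= payoff[(1 - b, a)])

profiles = list(itertools.product([0, 1], repeat=2))
nash = [p for p in profiles if is_nash(*p)]
welfare = {p: payoff[p] + payoff[(p[1], p[0])] for p in profiles}

print("Nash equilibria:", nash)                                    # [(0, 0)]: nobody masks
print("Socially optimal profile:", max(welfare, key=welfare.get))  # (1, 1): both mask
```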
Standard regression adjustment gives inconsistent estimates of causal effects when there are time-varying treatment effects and time-varying covariates. Loosely speaking, the issue is that some covariates are post-treatment variables because they may be affected by prior treatment status, and regressing out post-treatment variables causes bias. More precisely, the bias is due to certain non-confounding latent variables that create colliders in the causal graph. These latent variables, which we call phantoms, do not harm the identifiability of the causal effect, but they render naive regression estimates inconsistent. Motivated by this, we ask: how can we modify regression methods so that they hold up even in the presence of phantoms? We develop an estimator for this setting based on regression modeling (linear, log-linear, probit and Cox regression), proving that it is consistent for the causal effect of interest. In particular, the estimator is a regression model fit with a simple adjustment for collinearity, making it easy to understand and implement with standard regression software. From a causal point of view, the proposed estimator is an instance of the parametric g-formula. Importantly, we show that our estimator is immune to the null paradox that plagues most other parametric g-formula methods.
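For readers unfamiliar with the g-formula, the sketch below is a generic two-period g-computation with linear models: fit models for the time-varying covariate and the outcome, then standardize by simulating the covariate under the intervened treatments. It illustrates the estimand only; it is not the paper's phantom-robust estimator, and the variable names and data-generating step are made up.

```python
# Generic two-period parametric g-formula (g-computation) with linear models.
# NOT the paper's estimator; everything below is an illustrative simulation.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 20000
L0 = rng.normal(size=n)                        # baseline covariate
A0 = rng.binomial(1, 1 / (1 + np.exp(-L0)))    # treatment at time 0
L1 = 0.5 * L0 + 0.8 * A0 + rng.normal(size=n)  # time-varying covariate, affected by A0
A1 = rng.binomial(1, 1 / (1 + np.exp(-0.3 * L1)))
Y = 1.0 * A0 + 1.0 * A1 + 0.7 * L1 + 0.5 * L0 + rng.normal(size=n)

# Step 1: model the time-varying covariate and the outcome
m_L1 = LinearRegression().fit(np.column_stack([L0, A0]), L1)
m_Y = LinearRegression().fit(np.column_stack([L0, A0, L1, A1]), Y)

# Step 2: standardize -- simulate L1 under the intervened treatment and predict Y
def mean_outcome(a0, a1):
    L1_sim = m_L1.predict(np.column_stack([L0, np.full(n, a0)]))
    X_Y = np.column_stack([L0, np.full(n, a0), L1_sim, np.full(n, a1)])
    return m_Y.predict(X_Y).mean()

effect = mean_outcome(1, 1) - mean_outcome(0, 0)
print("g-formula estimate of E[Y(1,1)] - E[Y(0,0)]:", round(effect, 2))
# true value in this simulation: 1 + 1 + 0.8 * 0.7 = 2.56
```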
The statistical machine learning community has demonstrated considerable resourcefulness over the years in developing highly expressive tools for estimation, prediction, and inference. The bedrock assumptions underlying these developments are that the data come from a fixed population and display little heterogeneity. But reality is significantly more complex: statistical models now routinely fail when released into real-world systems and scientific applications, where such assumptions rarely hold. Consequently, rather than following the well-worn trail of developing new methodology for estimation and prediction, we pursue a different path: we develop tools and theory for detecting and identifying regions of the covariate space (subpopulations) where model performance has begun to degrade, and we study intervening to fix these failures through refitting. We present empirical results with three real-world data sets -- including a time series for forecasting the incidence of COVID-19 -- showing that our methodology generates interpretable results, is useful for tracking model performance, and can boost model performance through refitting. We complement these empirical results with theory proving that our methodology is minimax optimal for recovering anomalous subpopulations as well as for refitting to improve accuracy in a structured normal means setting.
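A minimal caricature of the detection task: score held-out loss within regions of the covariate space and flag regions where it is elevated. The binning rule, threshold, and simulated drift below are illustrative and carry none of the paper's minimax guarantees.

```python
# Sketch: flag covariate bins where a fitted model's held-out loss is elevated.
# The binning rule, threshold, and simulated drift are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 5000
x = rng.uniform(-3, 3, size=n)
y = 2 * x + rng.normal(scale=0.5, size=n)
model = LinearRegression().fit(x[:, None], y)

# New data: the relationship has shifted in one region of the covariate space
x_new = rng.uniform(-3, 3, size=n)
y_new = 2 * x_new + rng.normal(scale=0.5, size=n)
y_new[x_new > 2.0] += 3.0                            # localized performance degradation

loss = (y_new - model.predict(x_new[:, None])) ** 2
bins = np.digitize(x_new, np.linspace(-3, 3, 13))    # 12 equal-width bins
baseline = np.median([loss[bins == b].mean() for b in np.unique(bins)])
for b in np.unique(bins):
    if loss[bins == b].mean() > 5 * baseline:
        lo, hi = x_new[bins == b].min(), x_new[bins == b].max()
        print(f"degraded region: x in [{lo:.2f}, {hi:.2f}]")
```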
Entity resolution (ER), comprising record linkage and de-duplication, is the process of merging noisy databases in the absence of unique identifiers to remove duplicate entities. One major challenge of analysis with linked data is identifying a representative record among determined matches to pass to an inferential or predictive task, referred to as the \emph{downstream task}. Additionally, incorporating uncertainty from ER in the downstream task is critical to ensure proper inference. To bridge the gap between ER and the downstream task in an analysis pipeline, we propose five methods to choose a representative (or canonical) record from linked data, referred to as canonicalization. Our methods are scalable in the number of records, appropriate in general data scenarios, and provide natural error propagation via a Bayesian canonicalization stage. The proposed methodology is evaluated on three simulated data sets and one application -- determining the relationship between demographic information and party affiliation in voter registration data from the North Carolina State Board of Elections. We first perform Bayesian ER and evaluate our proposed methods for canonicalization before considering the downstream tasks of linear and logistic regression. Bayesian canonicalization methods are empirically shown to improve downstream inference in both settings through prediction and coverage.
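As a baseline for what canonicalization means, the toy rule below picks, within a cluster of linked records, the most frequent non-missing value per field. It is only a stand-in: it is not one of the paper's five proposals, and it ignores the Bayesian propagation of ER uncertainty.

```python
# One simple canonicalization rule: within a cluster of linked records, take the
# most frequent non-missing value per field. A stand-in only; the cluster is toy data.
from collections import Counter

cluster = [   # records judged by ER to refer to the same voter
    {"name": "J. Smith",   "age": 42,   "party": "DEM"},
    {"name": "John Smith", "age": 42,   "party": "DEM"},
    {"name": "John Smith", "age": None, "party": "REP"},
]

def canonicalize(records):
    canon = {}
    for field in records[0]:
        values = [r[field] for r in records if r[field] is not None]
        canon[field] = Counter(values).most_common(1)[0][0] if values else None
    return canon

print(canonicalize(cluster))
# {'name': 'John Smith', 'age': 42, 'party': 'DEM'}
```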
Assessment of voice signals has long been performed under the assumption of periodicity, as this facilitates analysis. The near-periodicity of normal voice signals makes short-time harmonic modeling an appealing choice for extracting vocal feature parameters. For dysphonic voice, however, a fixed harmonic structure can be too constrained, as it strictly enforces periodicity in the model; slight variation in amplitude or frequency in the signal may cause the model to misrepresent the observed signal. To address these issues, this paper presents a time-varying harmonic model, which allows its fundamental frequency and harmonic amplitudes to be polynomial functions of time. The model decouples slow deviations of frequency and amplitude from fast irregular vocal fold vibratory behaviors such as subharmonics and diplophonia. The time-varying model is shown to track the frequency and amplitude modulations present in voices with severe tremor. This reduces the sensitivity of the model-based harmonics-to-noise ratio measures to slow frequency and amplitude variations while maintaining their sensitivity to increases in turbulent noise or to the presence of irregular vibration. Other uses of the model include vocal tract filter estimation and estimation of the rates of frequency and intensity change. These use cases are demonstrated experimentally along with the modeling accuracy.
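To make the model structure concrete, the sketch below synthesizes a frame whose fundamental frequency and harmonic amplitudes are low-order polynomials of time; the sampling rate, polynomial coefficients, and number of harmonics are arbitrary, and fitting such coefficients to an observed frame is the estimation problem the paper addresses.

```python
# Synthesize a harmonic signal whose fundamental frequency and harmonic amplitudes
# are polynomial functions of time (the structure the model fits); values are arbitrary.
import numpy as np

fs = 16000                                   # sampling rate (Hz)
t = np.arange(0, 0.05, 1 / fs)               # a 50 ms analysis frame

# Fundamental frequency as a quadratic polynomial of time (slow drift / tremor)
f0 = 180 + 400 * t - 2000 * t**2             # Hz
phase = 2 * np.pi * np.cumsum(f0) / fs       # integrate f0 to obtain the phase

signal = np.zeros_like(t)
for k in range(1, 6):                        # five harmonics
    a_k = (1.0 / k) * (1 + 2 * t)            # linearly time-varying amplitude
    signal += a_k * np.cos(k * phase)

signal += 0.01 * np.random.default_rng(4).normal(size=t.size)   # turbulent noise floor
# Estimating the polynomial coefficients of f0 and a_k from an observed frame is the
# fitting problem; residual energy after the fit drives a harmonics-to-noise ratio.
```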
We study open-domain response generation with limited message-response pairs. The problem exists in real-world applications but is less explored in existing work. Since the paired data alone are no longer enough to train a neural generation model, we consider leveraging large-scale unpaired data, which are much easier to obtain, and propose response generation with both paired and unpaired data. The generation model is defined by an encoder-decoder architecture with templates as a prior, where the templates are estimated from the unpaired data via a neural hidden semi-Markov model. By this means, response generation learned from the small paired data can be aided by the semantic and syntactic knowledge in the large unpaired data. To balance the effect of the prior and of the input message on response generation, we propose learning the whole generation model with an adversarial approach. Empirical studies on question response generation and sentiment response generation indicate that when only a few pairs are available, our model can significantly outperform several state-of-the-art response generation models in terms of both automatic and human evaluation.
Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling, and prediction, among other tasks. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, which is inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
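A finite-dimensional caricature of the construction described above: each group's random probability measure normalizes the sum of a shared gamma random measure and a group-specific one, so ties across groups can only arise from the common component. The truncation level and mass parameters are illustrative, and this sketch replaces the paper's exact treatment with a crude truncation.

```python
# Finite-dimensional sketch of a latent nested construction: each group's random
# probability measure normalizes the sum of a shared gamma random measure and a
# group-specific one. Truncation level and mass parameters are illustrative.
import numpy as np

rng = np.random.default_rng(5)
H = 200                                  # truncation: number of atoms per measure
a_shared, a_group = 1.0, 1.0             # total mass of shared / group-specific parts

atoms_shared = rng.normal(0, 3, size=H)                 # atoms common to all groups
jumps_shared = rng.gamma(a_shared / H, 1.0, size=H)     # shared gamma jumps

def group_measure():
    atoms_g = rng.normal(0, 3, size=H)                  # group-specific atoms
    jumps_g = rng.gamma(a_group / H, 1.0, size=H)
    atoms = np.concatenate([atoms_shared, atoms_g])
    weights = np.concatenate([jumps_shared, jumps_g])
    return atoms, weights / weights.sum()

# Two dependent random probability measures: they share the atoms (and jumps) of the
# common component but keep idiosyncratic group-specific atoms, so dependence sits
# strictly between full exchangeability and independence across samples.
atoms1, w1 = group_measure()
atoms2, w2 = group_measure()
sample1 = rng.choice(atoms1, size=100, p=w1)
sample2 = rng.choice(atoms2, size=100, p=w2)
print("values shared across the two groups:", np.intersect1d(sample1, sample2))
```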