With the availability of high dimensional genetic biomarkers, it is of interest to identify heterogeneous effects of these predictors on patients' survival, along with proper statistical inference. Censored quantile regression has emerged as a powerful tool for detecting heterogeneous effects of covariates on survival outcomes. To our knowledge, however, little work is available on drawing inference about the effects of high dimensional predictors in censored quantile regression. This paper proposes a novel procedure to draw inference on all predictors within the framework of global censored quantile regression, which investigates covariate-response associations over an interval of quantile levels rather than at a few discrete values. The proposed estimator combines a sequence of low dimensional model estimates based on multi-sample splitting and variable selection. We show that, under some regularity conditions, the estimator is consistent and asymptotically follows a Gaussian process indexed by the quantile level. Simulation studies indicate that our procedure properly quantifies the uncertainty of the estimates in high dimensional settings. We apply our method to analyze the heterogeneous effects of SNPs residing in lung cancer pathways on patients' survival, using the Boston Lung Cancer Survival Cohort, a cancer epidemiology study of the molecular mechanisms of lung cancer.
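For intuition, here is a minimal sketch of the split-and-aggregate idea, using ordinary (uncensored) quantile regression from statsmodels as a stand-in for censored quantile regression and the Lasso for the variable selection step; all settings are illustrative and the handling of censoring and of a continuum of quantile levels is omitted.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LassoCV

def split_and_aggregate_qr(X, y, tau=0.5, n_splits=20, seed=0):
    """Illustrative multi-sample-splitting scheme: select variables on one
    half of the data, fit a low-dimensional quantile regression on the other
    half, and average the coefficient estimates over splits."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    coefs = np.zeros((n_splits, p))
    for b in range(n_splits):
        idx = rng.permutation(n)
        sel_idx, fit_idx = idx[: n // 2], idx[n // 2:]
        # Variable selection on the first half (Lasso as a simple screener).
        selected = np.flatnonzero(LassoCV(cv=5).fit(X[sel_idx], y[sel_idx]).coef_)
        if selected.size == 0:
            continue
        # Low-dimensional quantile regression on the second half.
        fit = sm.QuantReg(y[fit_idx],
                          sm.add_constant(X[fit_idx][:, selected])).fit(q=tau)
        coefs[b, selected] = fit.params[1:]   # drop the intercept
    return coefs.mean(axis=0)
```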
Variable selection methods are widely used in molecular biology to detect biomarkers or to infer gene regulatory networks from transcriptomic data. These methods are mainly based on the high-dimensional Gaussian linear regression model, and we focus on this framework in this review. We propose a comparison study of variable selection procedures built on regularization paths, considering three simulation settings. In the first, the variables are independent, allowing the methods to be evaluated in the theoretical framework used to develop them. In the second, two correlation structures between variables are considered to evaluate how the dependencies usually observed in biology affect estimation. The third setting mimics the biological complexity of transcription factor regulation and is the farthest from the Gaussian framework. In all settings, predictive ability and the identification of the explanatory variables are evaluated for each method. Our results show that variable selection procedures rely on statistical assumptions that should be carefully checked; the Gaussian assumption and the number of explanatory variables are the two key points. As soon as correlation is present, the Elastic-net penalty provides better results than the Lasso. LinSelect, a non-asymptotic model selection method, should be preferred to the commonly used eBIC criterion. Bolasso is a judicious strategy to limit the selection of non-explanatory variables.
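As a concrete illustration of the kind of comparison carried out in such a study, the sketch below (assuming scikit-learn; the simulated design is illustrative and not one of the three settings above) contrasts the supports selected by the Lasso and the Elastic-net on a correlated Gaussian design.

```python
import numpy as np
from sklearn.linear_model import LassoCV, ElasticNetCV

rng = np.random.default_rng(0)
n, p, k = 100, 500, 10                       # samples, variables, true signals

# Correlated Gaussian design: blocks of strongly correlated columns.
z = rng.normal(size=(n, p // 5))
X = np.repeat(z, 5, axis=1) + 0.3 * rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = 1.0                               # k explanatory variables
y = X @ beta + rng.normal(size=n)

lasso = LassoCV(cv=5).fit(X, y)
enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)

for name, model in [("Lasso", lasso), ("Elastic-net", enet)]:
    sel = np.flatnonzero(model.coef_)
    tp = np.intersect1d(sel, np.arange(k)).size
    print(f"{name}: {sel.size} variables selected, {tp} true positives")
```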
We consider asymptotically exact inference on the leading canonical correlation directions and strengths between two high dimensional vectors under sparsity restrictions. Our main contribution is the development of a loss function based on which one can operationalize a one-step bias correction of reasonable initial estimators. Our analytic results are adaptive over suitable structural restrictions on the high dimensional nuisance parameters, which, in this set-up, correspond to the covariance matrices of the variables of interest. We further supplement the theoretical guarantees behind our procedures with extensive numerical studies.
Analysts are often confronted with censoring, wherein some variables are not observed at their true values but rather at values known to fall above or below those truths. While much attention has been given to the analysis of censored outcomes, contemporary focus has shifted to censored covariates as well. Missing data are often handled using multiple imputation, which leverages the entire dataset by replacing missing values with informed placeholders; this approach can be modified for censored data by also incorporating partial information from censored values. One such modification replaces censored covariates with their conditional means given other fully observed information, such as the censored value or additional covariates. So-called conditional mean imputation approaches were proposed for censored covariates in Atem et al. [2017], Atem et al. [2019a], and Atem et al. [2019b]. These methods are robust to additional parametric assumptions on the censored covariate and utilize all available data, which is appealing. As we worked to implement these methods, however, we discovered that the three manuscripts provide nonequivalent formulas and, in fact, none is the correct formula for the conditional mean. Herein, we derive the correct form of the conditional mean and demonstrate the impact of the incorrect formulas on the imputed values and on statistical inference. Under several of the settings considered, using an incorrect formula seriously biases parameter estimation in simple linear regression. Lastly, we provide user-friendly R software, the imputeCensoRd package, to enable future researchers to tackle censored covariates in their data.
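To fix ideas, the following sketch imputes a right-censored covariate by its conditional mean under an assumed normal model, ignoring additional covariates; it illustrates the general idea only and is not the corrected formula derived in the paper.

```python
import numpy as np
from scipy.stats import norm

def impute_right_censored(x, censored, mu, sigma):
    """Replace right-censored values x_i (true value known to exceed x_i)
    with E[X | X > x_i] under an assumed N(mu, sigma^2) model."""
    x = np.asarray(x, dtype=float).copy()
    a = (x[censored] - mu) / sigma
    # Mean of a normal distribution truncated below at the censoring value.
    x[censored] = mu + sigma * norm.pdf(a) / norm.sf(a)
    return x

# Toy usage: values above 1.5 were recorded as 1.5.
rng = np.random.default_rng(1)
true_x = rng.normal(0, 1, size=200)
censored = true_x > 1.5
obs_x = np.where(censored, 1.5, true_x)
imputed = impute_right_censored(obs_x, censored, mu=0.0, sigma=1.0)
```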
A sentinel network, Obépine, has been designed to monitor SARS-CoV-2 viral load in wastewater arriving at several tens of wastewater treatment plants (WWTPs) in France, as an indirect macro-epidemiological parameter. The sources of uncertainty in such a monitoring system are numerous, and the concentration measurements it provides are left-censored and contain numerous outliers, which biases the results of usual smoothing methods. Hence the need for adapted pre-processing to estimate the actual daily amount of virus arriving at each WWTP. We propose a method based on an auto-regressive model adapted to censored data with outliers. Inference and prediction are produced via a discretised smoother, which makes the method a very flexible tool. The method is validated both on simulations and on real data from Obépine. The resulting smoothed signal shows good correlation with other epidemiological indicators and currently contributes to the construction of the wastewater indicators provided each week by Obépine.
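As a rough illustration of the modelling idea, the sketch below fits an AR(1) model with a left-censored likelihood in which observations below the detection limit contribute through the normal CDF; it plugs censored lagged values in at the limit and is therefore a crude simplification of the discretised smoother described above.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def censored_ar1_negloglik(params, y, censored):
    """Crude Tobit-style AR(1) likelihood: left-censored observations
    contribute P(Y_t <= limit); censored lagged values are plugged in
    at the limit (a proper smoother treats them as latent)."""
    phi, sigma = params
    resid = y[1:] - phi * y[:-1]
    ll_obs = norm.logpdf(resid[~censored[1:]], scale=sigma)
    ll_cens = norm.logcdf(resid[censored[1:]], scale=sigma)
    return -(ll_obs.sum() + ll_cens.sum())

# Toy usage with a detection limit of 0.0 (left-censoring).
rng = np.random.default_rng(2)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.8 * x[t - 1] + rng.normal(scale=1.0)
limit = 0.0
censored = x < limit
y = np.maximum(x, limit)
fit = minimize(censored_ar1_negloglik, x0=[0.5, 1.0], args=(y, censored),
               bounds=[(-0.99, 0.99), (1e-3, None)])
```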
Modern high-dimensional point process data, especially those from neuroscience experiments, often involve observations from multiple conditions and/or experiments. Networks of interactions corresponding to these conditions are expected to share many edges, but also exhibit unique, condition-specific ones. However, the degree of similarity among the networks from different conditions is generally unknown. Existing approaches for multivariate point processes do not take these structures into account and do not provide inference for jointly estimated networks. To address these needs, we propose a joint estimation procedure for networks of high-dimensional point processes that incorporates easy-to-compute weights in order to data-adaptively encourage similarity between the estimated networks. We also propose a powerful hierarchical multiple testing procedure for edges of all estimated networks, which takes into account the data-driven similarity structure of the multi-experiment networks. Compared to conventional multiple testing procedures, our proposed procedure greatly reduces the number of tests and results in improved power, while tightly controlling the family-wise error rate. Unlike existing procedures, our method is also free of assumptions on dependency between tests, offers flexibility in how p-values are calculated along the hierarchy, and is robust to misspecification of the hierarchical structure. We verify our theoretical results via simulation studies and demonstrate the application of the proposed procedure using neuronal spike train data.
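For readers unfamiliar with hierarchical testing, the following generic top-down sketch (not the weighted, data-adaptive procedure described above) shows how testing groups of edges first, and descending only into rejected groups, reduces the number of individual tests while controlling the family-wise error rate via Bonferroni corrections.

```python
import numpy as np

def hierarchical_test(edge_pvals, groups, alpha=0.05):
    """Generic top-down hierarchical testing sketch: a group of edges is
    tested via its Bonferroni-adjusted minimum p-value; individual edges
    are only tested inside rejected groups, again with Bonferroni."""
    edge_pvals = np.asarray(edge_pvals)
    m = len(groups)
    rejected = []
    for g in groups:
        p_group = min(1.0, edge_pvals[g].min() * len(g))
        if p_group <= alpha / m:                  # Bonferroni across groups
            for e in g:
                if edge_pvals[e] <= alpha / (m * len(g)):
                    rejected.append(e)
    return rejected

# Toy usage: two groups of candidate edges.
pvals = np.array([0.001, 0.2, 0.3, 0.0004, 0.5, 0.04])
print(hierarchical_test(pvals, groups=[[0, 1, 2], [3, 4, 5]]))
```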
There are a variety of settings where vague prior information may be available on the importance of predictors in high-dimensional regression. Examples include an ordering on the variables given by their empirical variances (information that is typically discarded through standardisation), the lag of predictors when fitting autoregressive models in time series settings, or the level of missingness of the variables. Whilst such orderings may not match the true importance of the variables, we argue that there is little to be lost, and potentially much to be gained, by using them. We propose a simple scheme involving fitting a sequence of models indicated by the ordering. We show that the computational cost of fitting all models when ridge regression is used is no more than that of a single ridge regression fit, and describe a strategy for Lasso regression that makes use of previous fits to greatly speed up fitting the entire sequence of models. We propose to select a final estimator by cross-validation and provide a general result on the quality of the best-performing estimator on a test set chosen from among $M$ competing estimators in a high-dimensional linear regression setting. Our result requires no sparsity assumptions and shows that only a $\log M$ price is incurred compared to the unknown best estimator. We demonstrate the effectiveness of our approach when applied to missing or corrupted data and to time series settings. An R package is available on GitHub.
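A minimal sketch of the ordering-based scheme follows, fitting ridge regression on nested prefixes of the ordering and selecting the prefix by cross-validation; it refits each model from scratch and so does not reproduce the computational savings described above, and all settings are illustrative.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, p = 200, 400
X = rng.normal(size=(n, p)) * np.linspace(3.0, 0.1, p)   # decaying variances
beta = np.zeros(p)
beta[:20] = 1.0
y = X @ beta + rng.normal(size=n)

# Order variables by empirical variance (the vague prior information).
order = np.argsort(-X.var(axis=0))

# Fit ridge on nested prefixes of the ordering; keep the best CV score.
prefix_sizes = [10, 25, 50, 100, 200, 400]
scores = []
for k in prefix_sizes:
    cols = order[:k]
    model = RidgeCV(alphas=np.logspace(-2, 3, 20))
    scores.append(cross_val_score(model, X[:, cols], y, cv=5).mean())
best_k = prefix_sizes[int(np.argmax(scores))]
print("selected prefix size:", best_k)
```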
Blocking, a special case of rerandomization, is routinely implemented in the design stage of randomized experiments to balance baseline covariates. Regression adjustment is highly encouraged in the analysis stage to adjust for the remaining covariate imbalances. Researchers have recommended combining these techniques; however, research on this combination in a randomization-based inference framework with a large number of covariates is limited. This paper proposes several methods that combine blocking, rerandomization, and regression adjustment in randomized experiments with high-dimensional covariates. In the design stage, we suggest implementing blocking, rerandomization, or both to balance a fixed number of covariates most relevant to the outcomes. In the analysis stage, we propose regression adjustment methods based on the Lasso to adjust for the remaining imbalances in the additional high-dimensional covariates. Moreover, we establish the asymptotic properties of the proposed Lasso-adjusted average treatment effect estimators and outline conditions under which these estimators are more efficient than the unadjusted ones. In addition, we provide conservative variance estimators to facilitate valid inference. Our analysis is randomization-based, allowing the outcome data-generating models to be misspecified. Simulation studies and two real data analyses demonstrate the advantages of the proposed methods.
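As a rough sketch of Lasso-based regression adjustment in the analysis stage (for a completely randomized experiment, ignoring the blocking and rerandomization of the design stage, and not the exact estimators proposed above), one can adjust the difference in means using within-arm Lasso fits evaluated at the overall covariate mean.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lasso_adjusted_ate(y, treat, X):
    """Difference-in-means adjusted by within-arm Lasso fits of the outcome
    on high-dimensional covariates (illustrative sketch only)."""
    xbar = X.mean(axis=0)
    est = {}
    for arm in (1, 0):
        idx = treat == arm
        fit = LassoCV(cv=5).fit(X[idx], y[idx])
        # Estimated mean outcome in this arm at the overall covariate mean.
        est[arm] = y[idx].mean() + fit.coef_ @ (xbar - X[idx].mean(axis=0))
    return est[1] - est[0]

# Toy usage on a completely randomized experiment.
rng = np.random.default_rng(4)
n, p = 300, 600
X = rng.normal(size=(n, p))
treat = rng.permutation(np.r_[np.ones(n // 2, int), np.zeros(n - n // 2, int)])
y = 2.0 * treat + X[:, :5] @ np.ones(5) + rng.normal(size=n)
print(lasso_adjusted_ate(y, treat, X))
```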
We investigate post-selection inference for a fixed and a mixed parameter in a linear mixed model when a conditional Akaike information criterion is used as the model selection procedure. Within the framework of linear mixed models, we develop a complete theory for constructing confidence intervals for regression and mixed parameters under three scenarios: nested model sets, general model sets, and misspecified models. Our theoretical analysis is accompanied by a simulation experiment and a post-selection examination of mean income across Galicia's counties. Our numerical studies confirm the good performance of the new procedure. Moreover, they reveal a startling robustness to model misspecification of a naive method for constructing confidence intervals for a mixed parameter, in contrast to our findings for the fixed parameters.
In recent years, conditional copulas, which allow the dependence between variables to vary according to the values of one or more covariates, have attracted increasing attention. In high dimensions, vine copulas offer greater flexibility than multivariate copulas, since they are constructed using bivariate copulas as building blocks. In this paper we present a novel inferential approach for multivariate distributions, which combines the flexibility of vine constructions with the advantages of Bayesian nonparametrics, not requiring the specification of parametric families for each pair copula. Expressing multivariate copulas through vines allows us to easily account for covariate specifications driving the dependence between response variables. More precisely, we specify the vine copula density as an infinite mixture of Gaussian copulas, defining a Dirichlet process (DP) prior on the mixing measure, and we perform posterior inference via Markov chain Monte Carlo (MCMC) sampling. Our approach is successful both for clustering and for density estimation. We carry out intensive simulation studies and apply the proposed approach to investigate the impact of natural disasters on financial development. Our results show that the methodology captures the heterogeneity in the dataset and reveals the different behaviours of different country clusters in relation to natural disasters.
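For intuition about the mixture representation, the sketch below evaluates the density of a finite mixture of bivariate Gaussian copulas, a truncated stand-in for the DP mixture; the vine construction and MCMC inference are not shown.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u1, u2, rho):
    """Bivariate Gaussian copula density c(u1, u2; rho)."""
    z1, z2 = norm.ppf(u1), norm.ppf(u2)
    det = 1.0 - rho ** 2
    return np.exp((2 * rho * z1 * z2 - rho ** 2 * (z1 ** 2 + z2 ** 2))
                  / (2 * det)) / np.sqrt(det)

def mixture_copula_density(u1, u2, weights, rhos):
    """Finite mixture of Gaussian copulas (a truncation of the DP mixture):
    sum_k w_k * c(u1, u2; rho_k)."""
    return sum(w * gaussian_copula_density(u1, u2, r)
               for w, r in zip(weights, rhos))

# Toy usage: two components capturing weak and strong dependence.
print(mixture_copula_density(0.9, 0.8, weights=[0.7, 0.3], rhos=[0.2, 0.9]))
```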
Randomized controlled trials (RCTs) are the gold standard for evaluating the causal effect of a treatment; however, they often have limited sample sizes and sometimes poor generalizability. On the other hand, non-randomized, observational data derived from large administrative databases have massive sample sizes and better generalizability, but they are prone to unmeasured confounding bias. It is thus of considerable interest to reconcile effect estimates obtained from randomized controlled trials and observational studies investigating the same intervention, potentially harvesting the best from both realms. In this paper, we theoretically characterize the potential efficiency gain of integrating observational data into the RCT-based analysis from a minimax point of view. For estimation, we derive the minimax rate of convergence for the mean squared error, and propose a fully adaptive anchored thresholding estimator that attains the optimal rate up to poly-log factors. For inference, we characterize the minimax rate for the length of confidence intervals and show that adaptation (to unknown confounding bias) is in general impossible. A curious phenomenon thus emerges: for estimation, the efficiency gain from data integration can be achieved without prior knowledge on the magnitude of the confounding bias; for inference, the same task becomes information-theoretically impossible in general. We corroborate our theoretical findings using simulations and a real data example from the RCT DUPLICATE initiative [Franklin et al., 2021b].