Competing risks data appear widely in modern biomedical research, and cause-specific hazard models have been a standard tool for analyzing such data over the past two decades. However, no existing study addresses the kernel likelihood method for the cause-specific hazard model with time-varying coefficients. We propose a local partial log-likelihood approach for nonparametric estimation of the time-varying coefficients. Simulation studies demonstrate that the proposed nonparametric kernel estimator performs well in the finite-sample settings considered. Finally, we apply the proposed method to a diabetes dialysis study with competing causes of death.
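As a concrete illustration of the local approach, the sketch below maximizes a kernel-weighted Cox partial log-likelihood at a fixed time point, treating events from competing causes as censored. The Epanechnikov kernel, the bandwidth, and all names are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def local_beta(t0, times, events, Z, h):
    """Estimate beta(t0) for the cause of interest by maximizing a
    kernel-weighted (local constant) Cox partial log-likelihood.
    events == 1 marks the cause of interest; competing causes are
    treated as censored. Illustrative sketch only."""
    u = (times - t0) / h
    w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # Epanechnikov

    def neg_loglik(beta):
        eta = Z @ beta
        ll = 0.0
        for i in np.flatnonzero((events == 1) & (w > 0)):
            risk = times >= times[i]          # risk set at this event time
            ll += w[i] * (eta[i] - np.log(np.exp(eta[risk]).sum()))
        return -ll

    return minimize(neg_loglik, np.zeros(Z.shape[1]), method="BFGS").x

# Toy usage: evaluate beta(.) on a small grid of time points
rng = np.random.default_rng(0)
n = 200
Z = rng.normal(size=(n, 2))
times = rng.exponential(1.0, size=n)
events = rng.integers(0, 2, size=n)           # 1 = cause of interest
betas = [local_beta(t0, times, events, Z, h=0.5)
         for t0 in np.quantile(times, [0.25, 0.5, 0.75])]
```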
Variable selection methods are widely used in molecular biology to detect biomarkers or to infer gene regulatory networks from transcriptomic data. Most such methods are based on the high-dimensional Gaussian linear regression model, and we focus on this framework in this review. We present a comparison study of variable selection procedures built from regularization paths under three simulation settings. In the first, the variables are independent, which allows the methods to be evaluated in the theoretical framework used to develop them. In the second, two correlation structures among the variables are considered to evaluate how the dependencies typically observed in biology affect estimation. The third setting mimics the biological complexity of transcription factor regulation and is the farthest from the Gaussian framework. In all settings, we evaluate each method's predictive ability and its identification of the explanatory variables. Our results show that variable selection procedures rely on statistical assumptions that should be carefully checked; the Gaussian assumption and the number of explanatory variables are the two key points. As soon as correlation is present, the Elastic-net penalty yields better results than the Lasso. LinSelect, a non-asymptotic model selection method, should be preferred to the commonly used eBIC criterion, and Bolasso is a judicious strategy for limiting the selection of non-explanatory variables.
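To make the compared procedures concrete, here is a minimal scikit-learn sketch of Lasso and Elastic-net tuned by cross-validation along their regularization paths, together with a simple Bolasso-style bootstrap selection. The simulated design, the bootstrap count B, and the 0.9 frequency threshold are illustrative assumptions; LinSelect and eBIC are omitted, as they are not available in scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LassoCV, ElasticNetCV, Lasso

rng = np.random.default_rng(1)
n, p, s = 100, 500, 5                      # samples, variables, true signals
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:s] = 2.0
y = X @ beta + rng.normal(size=n)

# Lasso and Elastic-net tuned by cross-validation along the path
lasso = LassoCV(cv=5).fit(X, y)
enet = ElasticNetCV(l1_ratio=[0.5, 0.9], cv=5).fit(X, y)

# Bolasso: keep variables selected in (almost) all bootstrap Lasso fits
B, counts = 50, np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, size=n)       # bootstrap resample
    counts += Lasso(alpha=lasso.alpha_).fit(X[idx], y[idx]).coef_ != 0
selected = np.flatnonzero(counts / B >= 0.9)
```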
In this work, we propose a scalable Bayesian procedure for learning the local dependence structure in a high-dimensional model where the variables possess a natural ordering. The ordering may be indexed by time, the vicinity of spatial locations, and so on, with the natural assumption that variables far apart tend to be weakly correlated. Applications of such models abound in fields such as finance, genome association analysis, and spatial modeling. We adopt a flexible framework in which each variable depends on its neighbors or predecessors, and the neighborhood size may vary from variable to variable. It is of great interest to reveal this local dependence structure by estimating the covariance or precision matrix while obtaining a consistent estimate of each variable's neighborhood size. The existing literature on banded covariance matrix estimation, which assumes a fixed bandwidth, cannot be adapted to this general setup. We employ the modified Cholesky decomposition of the precision matrix and design a flexible prior for this model through appropriate priors on the neighborhood sizes and Cholesky factors. The resulting posterior contraction rates for the Cholesky factor are nearly or exactly minimax optimal, and our procedure yields consistent estimates of the neighborhood size for every variable. Another appealing feature of our procedure is its scalability to models with many variables, owing to efficient posterior inference that avoids MCMC algorithms. Numerical comparisons are carried out with competitive methods, and applications to real datasets are considered.
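For intuition, the following is a minimal frequentist stand-in for the procedure: it estimates the precision matrix through the modified Cholesky decomposition by regressing each variable on a variable-specific number of predecessors, with the neighborhood size chosen here by BIC rather than by the paper's priors. All names and the BIC rule are our assumptions.

```python
import numpy as np

def local_cholesky_precision(X, k_max=10):
    """Estimate a precision matrix via the modified Cholesky decomposition
    Omega = (I - L)^T D^{-1} (I - L), where row j of L holds the regression
    coefficients of X_j on its k_j predecessors; k_j is chosen by BIC.
    A frequentist stand-in for the paper's Bayesian procedure."""
    n, p = X.shape
    L = np.zeros((p, p))
    d = np.empty(p)
    d[0] = X[:, 0].var()
    for j in range(1, p):
        best = (np.inf, 0, None, X[:, j].var())
        for k in range(1, min(k_max, j) + 1):
            Zk = X[:, j - k:j]                    # k nearest predecessors
            coef, *_ = np.linalg.lstsq(Zk, X[:, j], rcond=None)
            sig2 = np.mean((X[:, j] - Zk @ coef) ** 2)
            bic = n * np.log(sig2) + k * np.log(n)
            if bic < best[0]:
                best = (bic, k, coef, sig2)
        _, k, coef, d[j] = best
        if coef is not None:
            L[j, j - k:j] = coef
    A = np.eye(p) - L
    return A.T @ np.diag(1.0 / d) @ A             # estimated precision matrix
```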
Modern high-dimensional point process data, especially those from neuroscience experiments, often involve observations from multiple conditions and/or experiments. Networks of interactions corresponding to these conditions are expected to share many edges, but also exhibit unique, condition-specific ones. However, the degree of similarity among the networks from different conditions is generally unknown. Existing approaches for multivariate point processes do not take these structures into account and do not provide inference for jointly estimated networks. To address these needs, we propose a joint estimation procedure for networks of high-dimensional point processes that incorporates easy-to-compute weights in order to data-adaptively encourage similarity between the estimated networks. We also propose a powerful hierarchical multiple testing procedure for edges of all estimated networks, which takes into account the data-driven similarity structure of the multi-experiment networks. Compared to conventional multiple testing procedures, our proposed procedure greatly reduces the number of tests and results in improved power, while tightly controlling the family-wise error rate. Unlike existing procedures, our method is also free of assumptions on dependency between tests, offers flexibility on p-values calculated along the hierarchy, and is robust to misspecification of the hierarchical structure. We verify our theoretical results via simulation studies and demonstrate the application of the proposed procedure using neuronal spike train data.
The best known running time for regression nearly matches that of matrix multiplication/inversion. Here we show that algorithms for $2$-norm regression, i.e., standard linear regression, as well as $p$-norm regression (for $1 < p < \infty$), can be improved to go below the matrix multiplication threshold for sufficiently sparse matrices. We also show that for some values of $p$, the dependence on dimension in input-sparsity time algorithms can be improved beyond $d^\omega$ for tall-and-thin row-sparse matrices.
Blocking, a special case of rerandomization, is routinely implemented in the design stage of randomized experiments to balance baseline covariates. Regression adjustment is highly encouraged in the analysis stage to adjust for the remaining covariate imbalances. Researchers have recommended combining these techniques; however, research on this combination in a randomization-based inference framework with a large number of covariates is limited. This paper proposes several methods that combine blocking, rerandomization, and regression adjustment in randomized experiments with high-dimensional covariates. In the design stage, we suggest implementing blocking, rerandomization, or both to balance a fixed number of covariates most relevant to the outcomes. In the analysis stage, we propose regression adjustment methods based on the Lasso to adjust for the remaining imbalances in the additional high-dimensional covariates. Moreover, we establish the asymptotic properties of the proposed Lasso-adjusted average treatment effect estimators and outline conditions under which these estimators are more efficient than the unadjusted estimators. In addition, we provide conservative variance estimators to facilitate valid inferences. Our analysis is randomization-based, allowing the outcome data-generating models to be misspecified. Simulation studies and two real data analyses demonstrate the advantages of the proposed methods.
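A minimal sketch of the analysis-stage idea, assuming a completely randomized assignment: fit Lasso outcome models within each arm and average the residual-corrected contrasts. This illustrates the general form of a Lasso-adjusted average treatment effect estimator and is not the paper's exact procedure or variance estimator.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lasso_adjusted_ate(y, t, X):
    """Difference-in-means estimator with Lasso regression adjustment:
    fit mu_1, mu_0 by Lasso within each arm, then combine the averaged
    predicted contrast with the within-arm residual means. Sketch only."""
    m1 = LassoCV(cv=5).fit(X[t == 1], y[t == 1])
    m0 = LassoCV(cv=5).fit(X[t == 0], y[t == 0])
    mu1, mu0 = m1.predict(X), m0.predict(X)
    tau = (mu1 - mu0).mean()
    tau += (y[t == 1] - mu1[t == 1]).mean()   # treated residual correction
    tau -= (y[t == 0] - mu0[t == 0]).mean()   # control residual correction
    return tau
```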
We propose a new semi-parametric model based on Bayesian Additive Regression Trees (BART). In our approach, the response variable is approximated by a linear predictor plus a BART model, where the first component estimates the main effects and BART accounts for unspecified interactions and non-linearities. The novelty of our approach lies in the way we change the tree-generation moves in BART to deal with confounding between the parametric and non-parametric components when they have covariates in common. Through synthetic and real-world examples, we demonstrate that the performance of the new semi-parametric BART is competitive with regression models and other tree-based methods. An implementation of the proposed method is available at https://github.com/ebprado/SP-BART.
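To illustrate the semi-parametric decomposition (though not the authors' sampler; see the linked repository for that), here is a toy backfitting loop in which an off-the-shelf random forest stands in for BART. The alternation between the linear part and the tree ensemble is the point of the sketch; all names are ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def semiparametric_fit(y, X_lin, X_tree, n_iter=20):
    """Backfit y ~ X_lin @ beta + f(X_tree): alternate OLS on the linear
    part and a forest (standing in for BART) on the residuals. A sketch
    of the semi-parametric idea, not the authors' SP-BART sampler."""
    f_hat = np.zeros(len(y))
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X_lin, y - f_hat, rcond=None)
        forest = RandomForestRegressor(n_estimators=200).fit(
            X_tree, y - X_lin @ beta)
        f_hat = forest.predict(X_tree)
    return beta, forest
```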
Various nonparametric approaches for Bayesian spectral density estimation of stationary time series have been suggested in the literature, mostly based on the Whittle likelihood approximation. A generalization of this approximation has been proposed by Kirch et al., who prove posterior consistency for spectral density estimation in combination with the Bernstein-Dirichlet process prior for Gaussian time series. In this paper, we extend the posterior consistency result to non-Gaussian time series by employing a general consistency theorem of Shalizi for dependent data and misspecified models. As a special case, posterior consistency for the spectral density under the Whittle likelihood, as proposed by Choudhuri, Ghosal and Roy, is also extended to non-Gaussian time series. Small-sample properties of this approach are illustrated with several examples of non-Gaussian time series.
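For reference, a minimal sketch of the Whittle likelihood approximation that underlies these approaches, using the standard periodogram convention; the toy AR(1) check at the end is our own illustration.

```python
import numpy as np

def whittle_loglik(x, spec_density):
    """Whittle log-likelihood approximation for a stationary series x:
    l_W(f) = -sum_j [ log f(lambda_j) + I(lambda_j) / f(lambda_j) ]
    over Fourier frequencies lambda_j = 2*pi*j/n with 0 < lambda_j < pi."""
    n = len(x)
    I = np.abs(np.fft.fft(x - x.mean()))**2 / (2 * np.pi * n)  # periodogram
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2 * np.pi * j / n
    f = spec_density(lam)
    return -np.sum(np.log(f) + I[j] / f)

# Toy check with an AR(1) series and its true spectral density
rng = np.random.default_rng(2)
a, n = 0.6, 1024
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal()
f_ar1 = lambda lam: 1.0 / (2 * np.pi * np.abs(1 - a * np.exp(-1j * lam))**2)
print(whittle_loglik(x, f_ar1))
```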
High-dimensional non-Gaussian time series data are increasingly encountered in a wide range of applications. Conventional estimation methods and technical tools are inadequate for ultra-high-dimensional and heavy-tailed data. We investigate robust estimation of high-dimensional autoregressive models with fat-tailed innovation vectors by solving a regularized regression problem with a convex robust loss function. As a significant improvement, the dimension is allowed to increase exponentially with the sample size while consistency holds under very mild moment conditions. To develop the consistency theory, we establish a new Bernstein-type inequality for sums arising from autoregressive models. Numerical results indicate good performance of the robust estimates.
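A minimal sketch of the estimation idea for a VAR(1), assuming a Huber loss with an l1 penalty solved by proximal gradient (ISTA); the loss, penalty level, and step-size rule are illustrative choices, not the paper's exact specification.

```python
import numpy as np

def huber_grad(r, delta):
    # gradient of the Huber loss with respect to the residuals r
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def robust_var1(X, lam=0.1, delta=1.345, n_iter=500):
    """Fit a VAR(1) transition matrix row by row by minimizing
    sum_t huber(x_{t+1,i} - a_i' x_t) + lam * ||a_i||_1
    with proximal gradient (ISTA). Sketch of the penalized robust
    regression idea, not the paper's exact estimator."""
    Y, Z = X[1:], X[:-1]                        # responses and lagged design
    p = Z.shape[1]
    step = 1.0 / np.linalg.norm(Z, 2)**2        # 1/L for the smooth part
    A = np.zeros((p, p))
    for i in range(p):
        a = np.zeros(p)
        for _ in range(n_iter):
            g = -Z.T @ huber_grad(Y[:, i] - Z @ a, delta)
            a = a - step * g
            a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0)  # soft-threshold
        A[i] = a
    return A
```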
We propose a new estimation method for topic models that is not a variation on the existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of the word-topic matrix A, as well as new upper bounds for our proposed estimator, and describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p), and number of topics (K); both p and K are allowed to grow with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study, illustrating that the new algorithm is faster and more accurate than current ones, even though it starts with the computational and theoretical disadvantage of not knowing the correct number of topics K, while the competing methods are given the correct value in our simulations.
We study the problem of learning a latent variable model from a stream of data. Latent variable models are popular in practice because they can explain observed data in terms of unobserved concepts, and they have traditionally been studied in the offline setting. In the online setting, online EM is arguably the most popular algorithm for learning latent variable models; although it is computationally efficient, it typically converges to a local optimum. In this work, we develop a new online learning algorithm for latent variable models, which we call SpectralLeader. SpectralLeader always converges to the global optimum, and we derive a sublinear upper bound on its $n$-step regret in the bag-of-words model. In both synthetic and real-world experiments, we show that SpectralLeader performs similarly to or better than online EM with tuned hyper-parameters.