Some patients with COVID-19 show changes in signs and symptoms such as temperature and oxygen saturation days before testing positive for SARS-CoV-2, while others remain asymptomatic. It is important to identify these subgroups and to understand which biological and clinical predictors are related to them. This information will provide insights into how the immune system may respond differently to infection and can further be used to identify infected individuals. We propose a flexible nonparametric mixed-effects mixture model that identifies risk factors and classifies patients with biological changes. We model the latent probability of biological change using a logistic regression model and the trajectories within the latent groups using smoothing splines. We develop an EM algorithm that maximizes the penalized likelihood to estimate all parameters and mean functions. We evaluate our method through simulations and apply the proposed model to investigate changes in temperature in a cohort of COVID-19-infected hemodialysis patients.
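The estimation scheme described here can be sketched compactly. Below is a minimal, illustrative EM loop under strong simplifications (a shared time grid, two latent groups, i.i.d. Gaussian residuals with a common variance); all function and variable names are our own, not the paper's.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.linear_model import LogisticRegression

def em_mixture(times, curves, covars, n_iter=50, seed=0):
    """times: (T,) shared grid; curves: (n, T) trajectories; covars: (n, p)."""
    n, T = curves.shape
    resp = np.random.default_rng(seed).uniform(0.3, 0.7, n)  # P(change | data)
    sigma2 = curves.var()
    tt = np.tile(times, n)
    order = np.argsort(tt)                    # spline fitting needs sorted x
    tt, yy = tt[order], curves.ravel()[order]
    for _ in range(n_iter):
        # M-step: weighted smoothing splines for the two latent mean curves
        f1 = UnivariateSpline(tt, yy, w=np.repeat(resp, T)[order] + 1e-8, s=len(tt))
        f0 = UnivariateSpline(tt, yy, w=np.repeat(1 - resp, T)[order] + 1e-8, s=len(tt))
        # M-step: logistic model for membership, fitted with fractional weights
        clf = LogisticRegression().fit(
            np.vstack([covars, covars]), np.r_[np.ones(n), np.zeros(n)],
            sample_weight=np.r_[resp, 1 - resp])
        pi = np.clip(clf.predict_proba(covars)[:, 1], 1e-6, 1 - 1e-6)
        # E-step: posterior probability of the "change" group for each subject
        r1 = ((curves - f1(times)) ** 2).sum(1)
        r0 = ((curves - f0(times)) ** 2).sum(1)
        logit = np.log(pi / (1 - pi)) - 0.5 * (r1 - r0) / sigma2
        resp = 1 / (1 + np.exp(-np.clip(logit, -50, 50)))
        sigma2 = (resp @ r1 + (1 - resp) @ r0) / (n * T)  # shared noise variance
    return f0, f1, clf, resp
```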
Many multivariate data sets exhibit a form of positive dependence, which can appear either globally between all variables or only locally within particular subgroups. A popular notion of positive dependence that allows for localized positivity is positive association. In this work we introduce the notion of extremal positive association for multivariate extremes from threshold exceedances. Via a sufficient condition for extremal association, we show that this notion generalizes extremal tree models. For H\"usler--Reiss distributions, the sufficient condition admits a parametric description that we call the metric property. As the parameter of a H\"usler--Reiss distribution is a Euclidean distance matrix, the metric property relates to research in electrical network theory and Euclidean geometry. We show that the metric property can be localized with respect to a graph and study surrogate likelihood inference. This gives rise to a two-step estimation procedure for locally metrical H\"usler--Reiss graphical models. The second step admits a simple dual problem, which we implement via a gradient descent algorithm. Finally, we demonstrate our results on simulated and real data.
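As a hedged illustration of the Euclidean-geometry connection (not the paper's metric property itself): Schoenberg's classical criterion says a symmetric matrix with zero diagonal consists of squared Euclidean distances iff its doubly centered version is positive semidefinite, and classical MDS recovers a realizing point configuration. Function names below are ours.

```python
import numpy as np

def is_squared_edm(G, tol=1e-10):
    """Schoenberg: G (symmetric, zero diagonal) consists of squared
    Euclidean distances iff -0.5 * J G J is positive semidefinite."""
    d = G.shape[0]
    J = np.eye(d) - np.ones((d, d)) / d       # centering matrix
    return np.all(np.linalg.eigvalsh(-0.5 * J @ G @ J) >= -tol)

def embed_points(G):
    """Classical MDS: a point configuration realizing the distances in G."""
    d = G.shape[0]
    J = np.eye(d) - np.ones((d, d)) / d
    w, V = np.linalg.eigh(-0.5 * J @ G @ J)
    return V * np.sqrt(np.clip(w, 0, None))   # rows = point coordinates

# Variogram of a 3-node chain: entries add along the tree path, and the
# criterion holds, consistent with the tree case the abstract generalizes.
gamma = np.array([[0., 1., 2.], [1., 0., 1.], [2., 1., 0.]])
print(is_squared_edm(gamma))                  # True
```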
We extend the idea of automated debiased machine learning to the dynamic treatment regime and, more generally, to nested functionals. We show that the multiply robust formula for the dynamic treatment regime with discrete treatments can be re-stated in terms of a recursive Riesz representer characterization of nested mean regressions. We then apply a recursive Riesz representer estimation algorithm that learns the debiasing corrections without needing to characterize what the correction terms look like (for instance, products of inverse probability weighting terms), as is done in prior work on doubly robust estimation in the dynamic regime. Our approach defines a sequence of loss minimization problems whose minimizers are the multipliers of the debiasing correction, hence circumventing the need to solve auxiliary propensity models and directly optimizing the mean squared error of the target debiasing correction. We provide further applications of our approach to the estimation of dynamic discrete choice models and of long-term effects with surrogates.
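To make the loss-minimization idea concrete, here is a minimal single-stage (non-nested) sketch of automated Riesz representer estimation for the ATE functional m(W; g) = g(1, X) - g(0, X), over a linear-in-features class. The recursive, nested version in the paper repeats such a step backwards in time; the feature map below is an illustrative choice, not the paper's.

```python
import numpy as np

def riesz_theta(phi_W, phi_1, phi_0, reg=1e-3):
    """Minimize the empirical Riesz loss E[a(W)^2 - 2(a(1,X) - a(0,X))]
    over a(w) = theta' phi(w); the minimizer has a closed form."""
    A = phi_W.T @ phi_W / len(phi_W)          # E[phi phi']
    b = (phi_1 - phi_0).mean(axis=0)          # E[phi(1,X) - phi(0,X)]
    return np.linalg.solve(A + reg * np.eye(len(b)), b)

# usage with a simple feature map phi(a, x) = (1, a, x, a*x)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 1)); A_trt = rng.binomial(1, 0.5, size=(1000, 1))
phi = lambda a, x: np.hstack([np.ones_like(a), a, x, a * x])
theta = riesz_theta(phi(A_trt, X), phi(np.ones_like(A_trt), X),
                    phi(np.zeros_like(A_trt), X))
alpha_hat = phi(A_trt, X) @ theta             # estimated debiasing weights
```

Note that no propensity model is fit anywhere: the loss directly targets the mean squared error of the debiasing weights, which is the point of the automated approach.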
Learning the graphical structure of Bayesian networks is key to describing data-generating mechanisms in many complex applications but poses considerable computational challenges. Observational data can only identify the equivalence class of the directed acyclic graph underlying a Bayesian network model, and a variety of methods exist to tackle the problem. Under certain assumptions, the popular PC algorithm can consistently recover the correct equivalence class by reverse-engineering the conditional independence (CI) relationships holding in the variable distribution. The dual PC algorithm is a novel scheme to carry out the CI tests within the PC algorithm by leveraging the inverse relationship between covariance and precision matrices. By exploiting block matrix inversions, we can also perform the tests on partial correlations of complementary (or dual) conditioning sets. The multiple CI tests of the dual PC algorithm proceed by first considering marginal and full-order CI relationships and progressively moving to central-order ones. Simulation studies show that the dual PC algorithm outperforms the classic PC algorithm both in terms of run time and in recovering the underlying network structure, even in the presence of deviations from Gaussianity. Additionally, we show that the dual PC algorithm applies to Gaussian copula models and demonstrate its performance in that setting.
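The covariance/precision duality can be illustrated in a few lines: inverting the {i,j} u S block of the covariance yields the partial correlation given S, while inverting the same-sized block of the precision matrix yields a conditional covariance, and hence the partial correlation given the complementary (dual) conditioning set. A small sketch, with our own function names:

```python
import numpy as np

def pcor_from_cov(Sigma, i, j, S):
    """Partial correlation rho_{ij|S} via the {i,j} u S block of Sigma."""
    idx = [i, j] + list(S)
    W = np.linalg.inv(Sigma[np.ix_(idx, idx)])
    return -W[0, 1] / np.sqrt(W[0, 0] * W[1, 1])

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 5))
Sigma = np.cov(X, rowvar=False)
K = np.linalg.inv(Sigma)                      # precision matrix

rho_S = pcor_from_cov(Sigma, 0, 1, [2])       # rho_{01 | 2}
# Dual route: the same-sized inversion on K conditions on the complement
# {3, 4}. Note the block of K inverts to a conditional covariance, so no
# sign flip is needed here.
idx = [0, 1, 2]
C = np.linalg.inv(K[np.ix_(idx, idx)])        # Cov(X_{0,1,2} | X_{3,4})
rho_dual = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])   # rho_{01 | 3,4}
```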
Motivated by the Acute Respiratory Distress Syndrome Network (ARDSNetwork) ARDS respiratory management (ARMA) trial, we developed a flexible Bayesian machine learning approach to estimate the average causal effect and heterogeneous causal effects in the always-survivors stratum when clinical outcomes are subject to truncation. We adopted Bayesian additive regression trees (BART) to flexibly specify separate models for the potential outcomes and the latent strata membership. In the analysis of the ARMA trial, we found that the low tidal volume treatment had an overall benefit on time to return home for participants sustaining acute lung injuries, but substantial heterogeneity in treatment effects among the always-survivors, driven most strongly by sex and the alveolar-arterial oxygen gradient at baseline (a physiologic measure of lung function and source of hypoxemia). These findings illustrate how the proposed methodology could guide the prognostic enrichment of future trials in the field. We also demonstrated through a simulation study that our proposed Bayesian machine learning approach outperforms other parametric methods in reducing the estimation bias in both the average causal effect and the heterogeneous causal effects for always-survivors.
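The full Bayesian machinery is beyond a short sketch, but the principal-stratification logic can be caricatured in a frequentist way: under monotonicity and principal ignorability, the always-survivor effect is a reweighted contrast among observed survivors. Below, gradient boosting stands in for BART, and both assumptions and names are ours, not the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def sace(X, A, S, Y):
    """A: treatment (0/1), S: survival indicator, Y: outcome among survivors."""
    # P(S = 1 | A = 0, X): always-survivor probability under monotonicity
    p0 = GradientBoostingClassifier().fit(X[A == 0], S[A == 0])
    w = p0.predict_proba(X)[:, 1]
    # outcome models among observed survivors in each arm
    t1, t0 = (A == 1) & (S == 1), (A == 0) & (S == 1)
    mu1 = GradientBoostingRegressor().fit(X[t1], Y[t1])
    mu0 = GradientBoostingRegressor().fit(X[t0], Y[t0])
    tau = mu1.predict(X) - mu0.predict(X)     # heterogeneous effect surface
    return np.average(tau, weights=w), tau    # SACE and per-subject effects
```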
In extreme value theory and other related risk analysis fields, probability weighted moments (PWM) have been frequently used to estimate the parameters of classical extreme value distributions. This method-of-moments technique can be applied when second moments are finite, a reasonable assumption in many environmental domains like climatological and hydrological studies. Three advantages of PWM estimators can be put forward: their simple interpretation, their rapid numerical implementation and their close connection to the well-studied class of U-statistics. Concerning the latter, this connection leads to precise asymptotic properties, but non-asymptotic bounds have been lacking when off-the-shelf techniques (the Chernoff method) cannot be applied, as exponential moment assumptions become unrealistic in many extreme value settings. In addition, the analysis of large values is not immune to the undesirable effect of outliers, for example, defective readings in satellite measurements or possible anomalies in climate model runs. Recently, the treatment of outliers has sparked some interest in extreme value theory, but results about finite sample bounds in a robust extreme value theory context are yet to be found, in particular for PWMs or tail index estimators. In this work, we propose a new class of robust PWM estimators, inspired by the median-of-means framework of Devroye et al. (2016). This class of robust estimators is shown to satisfy a sub-Gaussian inequality when the assumption of finite second moments holds. Such non-asymptotic bounds are also derived under the general contamination model. Our main proposition confirms theoretically a trade-off between efficiency and robustness. Our simulation study indicates that, while classical estimators of PWMs can be highly sensitive to outliers, the proposed robust versions remain stable under contamination.
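A minimal sketch of a median-of-means PWM estimator in the spirit of this construction: split the sample into blocks, compute the classical PWM estimate of beta_r = E[X F(X)^r] on each block, and take the median. The block choice and the exact PWM variant below follow our own illustrative conventions, not necessarily the paper's.

```python
import numpy as np

def pwm(x, r):
    """Classical unbiased estimator of beta_r = E[X F(X)^r] on one sample."""
    x = np.sort(x); n = len(x)
    i = np.arange(1, n + 1)
    w = np.ones(n)
    for k in range(1, r + 1):                 # prod_{k=1..r} (i-k)/(n-k)
        w *= (i - k) / (n - k)
    return np.mean(w * x)

def mom_pwm(x, r, n_blocks, seed=0):
    """Median of per-block PWM estimates: robust to a few gross outliers."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(x), n_blocks)
    return np.median([pwm(b, r) for b in blocks])

x = np.random.default_rng(1).pareto(3.0, size=5000)    # finite second moment
x[:5] = 1e6                                            # inject gross outliers
print(pwm(x, 1), mom_pwm(x, 1, n_blocks=25))           # MoM barely moves
```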
We examine the problem of variance components testing in general mixed effects models using the likelihood ratio test. We account for the presence of nuisance parameters, i.e. the fact that some untested variances might also be equal to zero. Two main issues arise in this context, leading to a non-regular setting. First, under the null hypothesis the true parameter value lies on the boundary of the parameter space. Moreover, due to the presence of nuisance parameters, the exact location of these boundary points is not known, which prevents the use of classical asymptotic theory of maximum likelihood estimation. Second, in the specific context of nonlinear mixed-effects models, the Fisher information matrix is singular at the true parameter value. We address these two points by proposing a shrunken parametric bootstrap procedure, which is straightforward to apply even for nonlinear models. We show that the procedure is consistent, resolving both the boundary and the singularity issues, and we provide a verifiable criterion for the applicability of our theoretical results. We show through a simulation study that, compared to the asymptotic approach, our procedure has better small-sample performance and is more robust to the presence of nuisance parameters. A real data application is also provided.
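For intuition, here is a plain parametric bootstrap for the simplest instance of this test (random intercept versus none in a linear mixed model), without the shrinkage step the paper adds to handle nuisance parameters near the boundary. The data layout and names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def lrt_stat(y, X, groups):
    """LRT for sigma_b^2 = 0: mixed model (ML) vs. ordinary regression."""
    llf_alt = sm.MixedLM(y, X, groups=groups).fit(reml=False).llf
    llf_null = sm.OLS(y, X).fit().llf
    return max(0.0, 2 * (llf_alt - llf_null))

def bootstrap_pvalue(y, X, groups, B=200, seed=0):
    """Simulate the null distribution of the LRT, which is non-standard
    because the tested variance sits on the boundary of the parameter space."""
    rng = np.random.default_rng(seed)
    obs = lrt_stat(y, X, groups)
    null_fit = sm.OLS(y, X).fit()             # fitted model under H0
    mu, sd = null_fit.fittedvalues, np.sqrt(null_fit.scale)
    stats = [lrt_stat(mu + sd * rng.normal(size=len(y)), X, groups)
             for _ in range(B)]
    return np.mean(np.array(stats) >= obs)
```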
We present new results on average causal effects in settings with unmeasured exposure-outcome confounding. Our results are motivated by a class of estimands, frequently of interest in medicine and public health, that is currently not targeted by standard approaches for average causal effects. We recognize these estimands as queries about the average causal effect of an intervening variable. We anchor our introduction of these estimands in an investigation of the role of chronic pain and opioid prescription patterns in the opioid epidemic, and illustrate how conventional approaches lead to unreplicable estimates with ambiguous policy implications. We argue that our alternative effects are replicable and have clear policy implications, and furthermore are nonparametrically identified by the classical frontdoor formula. As an independent contribution, we derive a new semiparametric efficient estimator of the frontdoor formula with a uniform sample boundedness guarantee. This property is unique among previously described estimators in its class, and we demonstrate superior performance in finite-sample settings. Theoretical results are applied with data from the National Health and Nutrition Examination Survey.
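For reference, the frontdoor formula itself is easy to state and plug in for discrete exposure A, mediator M, and outcome Y: E[Y | do(A=a)] = sum_m P(m|a) sum_{a'} P(a') E[Y | m, a']. The sketch below is only the naive plug-in version, not the paper's semiparametric efficient, sample-bounded estimator; column names are illustrative.

```python
import pandas as pd

def frontdoor(df, a):
    """Naive plug-in frontdoor estimate of E[Y | do(A = a)]."""
    p_m = df[df.A == a].M.value_counts(normalize=True)     # P(M = m | A = a)
    p_a = df.A.value_counts(normalize=True)                # P(A = a')
    ey = df.groupby(["M", "A"]).Y.mean()                   # E[Y | m, a']
    return sum(p_m[m] * p_a[ap] * ey[(m, ap)]
               for m in p_m.index for ap in p_a.index)

# usage, e.g. with df = pd.DataFrame({"A": ..., "M": ..., "Y": ...}):
# effect = frontdoor(df, 1) - frontdoor(df, 0)
```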
The elliptical distribution is a basic assumption underlying many multivariate statistical methods. For example, in sufficient dimension reduction and statistical graphical models, this assumption is routinely imposed to simplify the data dependence structure. Before applying such methods, we need to decide whether the data are elliptically distributed. Existing tests either focus exclusively on spherical distributions, rely on the bootstrap to determine the null distribution, or require specific forms of the alternative distribution. In this paper, we introduce a general nonparametric test for elliptical distributions based on kernel embedding of the probability measure that embodies the two properties characterizing an elliptical distribution: namely, after centering and rescaling, (1) the direction and length of the random vector are independent, and (2) the directional vector is uniformly distributed on the unit sphere. We derive the asymptotic null distribution of the test statistic via a von Mises expansion, develop the sample-level procedure to determine the rejection region, and establish the consistency and validity of the proposed test. We apply our test to a SENIC dataset with and without a transformation aimed at achieving ellipticity.
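The two characterizing properties can be sketched directly: whiten the data, split each observation into a length R and a direction U, then check independence of R and U (and, separately, uniformity of U). As a simple kernel surrogate we use an HSIC statistic with Gaussian kernels; the paper's actual statistic and its von Mises calibration are more refined, and all names below are ours.

```python
import numpy as np

def gauss_gram(Z, s):
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-D / (2 * s ** 2))

def hsic(X, Y, s=1.0):
    """Biased HSIC estimate: (1/n^2) tr(K H L H); near zero under independence."""
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(H @ gauss_gram(X, s) @ H @ gauss_gram(Y, s)) / n ** 2

def direction_length(X):
    """Center and whiten, then decompose into length R and direction U."""
    Xc = X - X.mean(0)
    W = np.linalg.inv(np.linalg.cholesky(np.cov(Xc, rowvar=False)))
    Z = Xc @ W.T
    R = np.linalg.norm(Z, axis=1)
    return R[:, None], Z / R[:, None]

R, U = direction_length(np.random.default_rng(0).normal(size=(300, 3)))
print(hsic(R, U))                             # small when the data are elliptical
```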
In epidemiological studies, participants' disease status is often collected through self-reported outcomes in place of formal medical tests due to budget constraints. However, self-reported outcomes are often subject to measurement errors and may lead to biased estimates if used in statistical analyses. In this paper, we propose statistical methods to correct for outcome measurement errors in survival analyses with multiple failure types through a reweighting strategy. We also discuss asymptotic properties of the proposed estimators and derive their asymptotic variances. The work is motivated by the Conservation of Hearing Study (CHEARS), which aims to evaluate risk factors for hearing loss in the Nurses' Health Study II (NHS II). We apply the proposed method to adjust for the measurement errors in self-reported hearing outcomes; the analysis suggests that tinnitus is positively associated with moderate hearing loss at both low-to-mid and high sound frequencies, with similar effect sizes across the frequency ranges.
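Schematically, a reweighting strategy of this flavor down-weights records whose self-reported status is likely wrong before fitting the survival model. A minimal sketch, assuming a per-subject probability of correct classification is available from an external validation sample; the column names (including the hypothetical p_correct) are ours.

```python
from lifelines import CoxPHFitter

def reweighted_cox(df):
    """df: one row per subject with 'time', 'reported_event', covariates,
    and 'p_correct' = estimated P(self-report is correct), e.g. obtained
    from a validation study (a hypothetical column name)."""
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="reported_event",
            weights_col="p_correct", robust=True)  # robust SEs with weights
    return cph
```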
We study a mean change point testing problem for high-dimensional data, with exponentially- or polynomially-decaying tails. In each case, depending on the $\ell_0$-norm of the mean change vector, we separately consider dense and sparse regimes. We characterise the boundary between the dense and sparse regimes under the above two tail conditions for the first time in the change point literature and propose novel testing procedures that attain optimal rates in each of the four regimes up to a poly-iterated logarithmic factor. By comparing with previous results under Gaussian assumptions, our results quantify the cost of heavy-tailedness on the fundamental difficulty of change point testing problems for high-dimensional data. To be specific, when the error vectors follow sub-Weibull distributions, a CUSUM-type statistic is shown to achieve a minimax testing rate up to a $\sqrt{\log\log(8n)}$ factor. When the error distributions have polynomially-decaying tails, admitting bounded $\alpha$-th moments for some $\alpha \geq 4$, we introduce a median-of-means-type test statistic that achieves a near-optimal testing rate in both dense and sparse regimes. In particular, in the sparse regime, we further propose a computationally efficient test that achieves exact optimality. Surprisingly, our investigation in the even more challenging case of $2 \leq \alpha < 4$ unveils a new phenomenon: the minimax testing rate has no sparse regime, i.e.\ testing sparse changes is information-theoretically as hard as testing dense changes. This phenomenon implies a phase transition of the minimax testing rates at $\alpha = 4$.
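In simplified form, the two statistics contrasted above look as follows: a CUSUM-type scan over candidate change locations, and a median-of-means variant that replaces each side's sample mean by a median of block means to guard against heavy tails. The coordinate aggregation (dense vs. sparse) is reduced here to an $\ell_2$ norm for brevity, so this is only a caricature of the paper's procedures.

```python
import numpy as np

def cusum(X):
    """X: (n, p). max_t || sqrt(t(n-t)/n) (mean of X[:t] - mean of X[t:]) ||_2."""
    n = len(X)
    return max(np.sqrt(t * (n - t) / n) *
               np.linalg.norm(X[:t].mean(0) - X[t:].mean(0))
               for t in range(1, n))

def mom_cusum(X, n_blocks):
    """Same scan, with each side's mean replaced by a median of block means."""
    n = len(X)
    mom = lambda Z: np.median(
        [b.mean(0) for b in np.array_split(Z, n_blocks)], axis=0)
    return max(np.sqrt(t * (n - t) / n) *
               np.linalg.norm(mom(X[:t]) - mom(X[t:]))
               for t in range(n_blocks, n - n_blocks))
```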