Identifying prognostic factors for disease progression is a cornerstone of medical research. Repeated assessments of a marker outcome are often used to evaluate disease progression, and the primary research question is to identify factors associated with the longitudinal trajectory of this marker. Our work is motivated by diabetic kidney disease (DKD), where serial measures of estimated glomerular filtration rate (eGFR) provide the longitudinal measure of kidney function, and there is notable interest in identifying factors, such as metabolites, that are prognostic for DKD progression. Linear mixed models (LMM) with serial marker outcomes (e.g., eGFR) are a standard approach for prognostic model development, namely by evaluating the interaction between time and the prognostic factor (e.g., a metabolite). However, two-stage methods that first estimate individual-specific eGFR slopes and then use these slopes as outcomes in a regression framework with metabolites as predictors are easy for applied researchers to interpret and implement. Herein, we compare the LMM and two-stage methods in terms of bias and mean squared error, via both analytic methods and simulations, allowing for irregularly spaced measures and missingness. Our findings provide novel insights into when two-stage methods are suitable longitudinal prognostic modeling alternatives to the LMM. Notably, our findings generalize to other disease studies.
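As a rough, self-contained illustration of the two-stage idea on simulated data (not the authors' implementation; all constants below are hypothetical), one can fit per-subject least-squares eGFR slopes in stage 1 and then regress those estimated slopes on a candidate metabolite in stage 2:

```python
import random

def ls_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

random.seed(1)
n_subjects = 200
metabolite = [random.gauss(0.0, 1.0) for _ in range(n_subjects)]

# Stage 1: estimate a per-subject eGFR slope from irregularly spaced visits.
slopes = []
for i in range(n_subjects):
    times = sorted(random.uniform(0.0, 5.0) for _ in range(6))
    true_slope = -2.0 + 0.8 * metabolite[i]   # metabolite modifies the decline
    egfr = [90.0 + true_slope * t + random.gauss(0.0, 3.0) for t in times]
    slopes.append(ls_slope(times, egfr))

# Stage 2: regress the estimated slopes on the candidate prognostic factor.
beta = ls_slope(metabolite, slopes)
print(beta)   # expect a value near the true interaction effect, 0.8
```

Stage 1 treats each subject separately, so irregular visit times pose no difficulty; the cost, relative to the LMM, is that estimation error in the stage-1 slopes carries over into stage 2.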
Test error estimation is a fundamental problem in statistics and machine learning. Correctly assessing the future performance of an algorithm is an essential task, especially with the development of complex predictive algorithms that require data-driven parameter tuning. We propose a new coupled bootstrap estimator for the test error of Poisson-response algorithms, a fundamental model for count data with applications in signal processing, density estimation, and queueing theory. The idea behind our estimator is to generate two carefully designed new random vectors from the original data, where one acts as a training sample and the other as a test set. It is unbiased for an intuitive parameter: the out-of-sample error of a Poisson random vector whose mean has been shrunken by a small factor. Moreover, in a limiting regime, the coupled bootstrap estimator recovers an exactly unbiased estimator for test error. Our framework is applicable to loss functions in the Bregman divergence family, and our analysis and examples focus on two important cases: Poisson likelihood deviance and squared loss. Through a bias-variance decomposition, we analyze the effect of the number of bootstrap samples and the added noise due to the two auxiliary variables. We then apply our method to different scenarios with both simulated and real data.
After being trained on a fully-labeled training set, where the observations are grouped into a certain number of known classes, novelty detection methods aim to classify the instances of an unlabeled test set while allowing for the presence of previously unseen classes. These models are valuable in many areas, ranging from social network and food adulteration analyses to biology, where an evolving population may be present. In this paper, we focus on a two-stage Bayesian semiparametric novelty detector, also known as Brand, recently introduced in the literature. Leveraging a model-based mixture representation, Brand allows the test observations to be clustered into the known training components or a single novelty term. Furthermore, the novelty term is modeled with a Dirichlet Process mixture model to flexibly capture any departure from the known patterns. Brand was originally estimated using MCMC schemes, which are prohibitively costly when applied to high-dimensional data. To scale up Brand's applicability to large datasets, we propose to resort to a variational Bayes approach, providing an efficient algorithm for posterior approximation. We demonstrate a significant gain in efficiency and excellent classification performance with thorough simulation studies. Finally, to showcase its applicability, we perform a novelty detection analysis using the openly-available Statlog dataset, a large collection of satellite imaging spectra, to search for novel soil types.
Pervasive cross-section dependence is increasingly recognized as a characteristic of economic data, and the approximate factor model provides a useful framework for analysis. Assuming a strong factor structure where $\Lambda'\Lambda/N^\alpha$ is positive definite in the limit when $\alpha=1$, early work established convergence of the principal component estimates of the factors and loadings up to a rotation matrix. This paper shows that the estimates are still consistent and asymptotically normal when $\alpha\in(0,1]$, albeit at slower rates and under additional assumptions on the sample size. The results hold whether $\alpha$ is constant or varies across factors. The framework developed for heterogeneous loadings, and the simplified proofs that can also be used in the strong-factor case, are of independent interest.
Recent years have seen the development of many novel scoring tools for disease prognosis and prediction. To become accepted for use in clinical applications, these tools have to be validated on external data. In practice, validation is often hampered by logistical issues, resulting in multiple small-sized validation studies. It is therefore necessary to synthesize the results of these studies using techniques for meta-analysis. Here we consider strategies for meta-analyzing the concordance probability for time-to-event data ("C-index"), which has become a popular tool to evaluate the discriminatory power of prediction models with a right-censored outcome. We show that standard meta-analysis of the C-index may lead to biased results, as the magnitude of the concordance probability depends on the length of the time interval used for evaluation (defined e.g. by the follow-up time, which might differ considerably between studies). To address this issue, we propose a set of methods for random-effects meta-regression that incorporate time directly as a covariate in the model equation. In addition to analyzing nonlinear time trends via fractional polynomial, spline, and exponential decay models, we provide recommendations on suitable transformations of the C-index before meta-regression. Our results suggest that the C-index is best meta-analyzed using fractional polynomial meta-regression with logit-transformed C-index values. Classical random-effects meta-analysis (not considering time as a covariate) is demonstrated to be a suitable alternative when follow-up times are small. Our findings have implications for the reporting of C-index values in future studies, which should include information on the length of the time interval underlying the calculations.
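As a toy illustration of the recommended strategy (hypothetical study-level summaries; between-study heterogeneity is ignored for brevity, so this is inverse-variance weighted least squares rather than a full random-effects fit), one can logit-transform the C-index values and meta-regress them on log follow-up time, a first-order fractional-polynomial term:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

# Hypothetical study-level summaries: (C-index, follow-up time in years, variance).
studies = [(0.78, 1.0, 0.0009), (0.74, 2.0, 0.0012),
           (0.71, 3.0, 0.0010), (0.69, 5.0, 0.0015),
           (0.66, 8.0, 0.0020)]

y = [logit(c) for c, t, v in studies]
x = [math.log(t) for c, t, v in studies]
# Delta-method variance of logit(C): var(C) / (C * (1 - C))^2.
w = [1.0 / (v / (c * (1.0 - c)) ** 2) for c, t, v in studies]

# Weighted least squares of logit(C) on log(time).
sw = sum(w)
xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
b1 = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y)) \
     / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
b0 = yb - b1 * xb
print(b0, b1)  # negative slope: discrimination decays with follow-up length
```

Back-transforming b0 + b1*log(t) through the inverse logit then gives a fitted C-index at any follow-up time of interest.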
We consider the problem of reconstructing the signal and the hidden variables from observations coming from a multi-layer network with rotationally invariant weight matrices. The multi-layer structure models inference from deep generative priors, and the rotational invariance imposed on the weights generalizes the i.i.d.\ Gaussian assumption by allowing for a complex correlation structure, which is typical in applications. In this work, we present a new class of approximate message passing (AMP) algorithms and give a state evolution recursion which precisely characterizes their performance in the large system limit. In contrast with the existing multi-layer VAMP (ML-VAMP) approach, our proposed AMP -- dubbed multi-layer rotationally invariant generalized AMP (ML-RI-GAMP) -- provides a natural generalization beyond Gaussian designs, in the sense that it recovers the existing Gaussian AMP as a special case. Furthermore, ML-RI-GAMP exhibits a significantly lower complexity than ML-VAMP, as the computationally intensive singular value decomposition is replaced by an estimation of the moments of the design matrices. Finally, our numerical results show that this complexity gain comes at little to no cost in the performance of the algorithm.
Hypothesis testing procedures are developed to assess linear operator constraints in function-on-scalar regression when incomplete functional responses are observed. The approach enables statistical inferences about the shape and other aspects of the functional regression coefficients within a unified framework encompassing three incomplete sampling scenarios: (i) partially observed response functions as curve segments over random sub-intervals of the domain; (ii) discretely observed functional responses with additive measurement errors; and (iii) the composition of the former two scenarios, where partially observed response segments are observed discretely with measurement error. The latter scenario has been little explored to date, although such structured data is increasingly common in applications. For statistical inference, deviations from the constraint space are measured via the integrated $L^2$-distance between the model estimates from the constrained and unconstrained model spaces. Large sample properties of the proposed test procedure are established, including the consistency, asymptotic distribution, and local power of the test statistic. The finite-sample power and level of the proposed test are investigated in a simulation study covering a variety of scenarios. The proposed methodologies are illustrated by applications to U.S. obesity prevalence data, analyzing the functional shape of its trends over time, and motion analysis in a study of automotive ergonomics.
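A toy version of the distance underlying such a test statistic, assuming made-up fitted curves and the illustrative constraint that the coefficient function is constant over the domain:

```python
# Integrated squared distance between an unconstrained coefficient estimate and
# a fit obeying the (illustrative) constraint "the coefficient is constant".
# The fitted curves below are invented for the sketch; trapezoidal quadrature
# approximates the integral over a grid on [0, 1].
def trapz(vals, grid):
    return sum(0.5 * (vals[i] + vals[i + 1]) * (grid[i + 1] - grid[i])
               for i in range(len(grid) - 1))

grid = [i / 100 for i in range(101)]
beta_unc = [0.2 + 0.5 * t for t in grid]                 # unconstrained estimate
beta_con = [sum(beta_unc) / len(beta_unc)] * len(grid)   # constrained: a constant
T = trapz([(u - c) ** 2 for u, c in zip(beta_unc, beta_con)], grid)
print(T)  # integrated L^2 deviation from the constraint space
```

A large value of T relative to its null distribution would indicate that the constraint is violated; here the linear trend in beta_unc produces a clearly nonzero deviation.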
This paper describes three methods for carrying out non-asymptotic inference on partially identified parameters that are solutions to a class of optimization problems. Applications in which the optimization problems arise include estimation under shape restrictions, estimation of models of discrete games, and estimation based on grouped data. The partially identified parameters are characterized by restrictions that involve the unknown population means of observed random variables in addition to structural parameters. Inference consists of finding confidence intervals for functions of the structural parameters. Our theory provides finite-sample lower bounds on the coverage probabilities of the confidence intervals under three sets of assumptions of increasing strength. With the moderate sample sizes found in most economics applications, the bounds become tighter as the assumptions strengthen. We discuss estimation of the population parameters that the bounds depend on and contrast our methods with alternative methods for obtaining confidence intervals for partially identified parameters. The results of Monte Carlo experiments and empirical examples illustrate the usefulness of our methods.
Approximate Bayesian Computation (ABC) enables statistical inference in simulator-based models whose likelihoods are difficult to calculate but easy to simulate from. ABC constructs a kernel-type approximation to the posterior distribution through an accept/reject mechanism which compares summary statistics of real and simulated data. To obviate the need for summary statistics, we directly compare empirical distributions with a Kullback-Leibler (KL) divergence estimator obtained via contrastive learning. In particular, we embed flexible machine learning classifiers within ABC to automate fake/real data comparisons. We consider the traditional accept/reject kernel as well as an exponential weighting scheme which does not require the ABC acceptance threshold. Our theoretical results show that the rate at which our ABC posterior distributions concentrate around the true parameter depends on the estimation error of the classifier. We derive limiting posterior shape results and find that, with a properly scaled exponential kernel, asymptotic normality holds. We demonstrate the usefulness of our approach on simulated examples as well as real data in the context of stock volatility estimation.
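A minimal sketch of the classifier-based comparison (our own toy setup, with a Gaussian "simulator" and a plain logistic classifier standing in for the flexible classifiers discussed above): the KL divergence is estimated as the average log-odds the classifier assigns to the real observations, and a smaller divergence would translate into a larger exponential ABC weight for that parameter value.

```python
import math, random

def fit_logistic(xs, ys, steps=1500, lr=1.0):
    """One-dimensional logistic regression by gradient ascent."""
    a = b = 0.0
    n = len(xs)
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += (y - p) / n
            gb += (y - p) * x / n
        a, b = a + lr * ga, b + lr * gb
    return a, b

def kl_estimate(real, sim):
    """KL(real || sim) estimated from the classifier's log-odds on real data."""
    a, b = fit_logistic(real + sim, [1] * len(real) + [0] * len(sim))
    return sum(a + b * x for x in real) / len(real)

rng = random.Random(0)
real = [rng.gauss(0.0, 1.0) for _ in range(200)]        # observed data
results = {}
for theta in (0.0, 1.5):                                # candidate parameters
    sim = [rng.gauss(theta, 1.0) for _ in range(200)]   # simulator output
    results[theta] = kl_estimate(real, sim)
print(results)  # smaller divergence (hence larger ABC weight) at theta = 0
```

Since the true data here are N(0, 1), the divergence estimate for theta = 0 should hover near zero while the estimate for theta = 1.5 should be clearly positive, so exponential weighting concentrates posterior mass near the true parameter.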
Two combined numerical methods for solving time-varying semilinear differential-algebraic equations (DAEs) are obtained. These equations are also called degenerate DEs, descriptor systems, operator-differential equations, and DEs on manifolds. The convergence and correctness of the methods are proved. In constructing the methods we use, in particular, time-varying spectral projectors that can be found numerically. This makes it possible to numerically solve and analyze the considered DAE in its original form, without additional analytical transformations. To improve the accuracy of the second method, a recalculation (a ``predictor-corrector'' scheme) is used. Note that the developed methods are applicable to DAEs whose continuous nonlinear part need not be continuously differentiable in $t$, and that restrictions of the global Lipschitz type, including the global contractivity condition, are not used in the theorems on the global solvability of the DAEs and on the convergence of the numerical methods. This allows the developed methods to be used for the numerical solution of wider classes of mathematical models. For example, the functions of currents and voltages in electric circuits may be nondifferentiable or may be approximated by nondifferentiable functions. The presented conditions for the global solvability of the DAEs ensure the existence of a unique exact global solution of the corresponding initial value problem, which enables the computation of approximate solutions on any given time interval (provided that the conditions of the theorems or remarks on the convergence of the methods are fulfilled). In the paper, a numerical analysis of the mathematical model of a certain electrical circuit is carried out, demonstrating the application of the presented theorems and numerical methods.
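The "recalculation" idea is easiest to see in its classical ODE form, Heun's predictor-corrector scheme (shown here for a plain ODE rather than the paper's DAE setting): an explicit Euler step predicts the next value, and a trapezoidal step recalculates it.

```python
# Heun's predictor-corrector for y' = f(t, y) on [t0, t1] with n steps:
# an explicit Euler predictor followed by a trapezoidal corrector.
def heun(f, t0, y0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y_pred = y + h * f(t, y)                         # predictor step
        y = y + 0.5 * h * (f(t, y) + f(t + h, y_pred))   # corrector (recalculation)
        t += h
    return y

# Example: y' = -y, y(0) = 1; the exact solution at t = 1 is e^{-1} ~ 0.3679.
approx = heun(lambda t, y: -y, 0.0, 1.0, 1.0, 100)
print(approx)
```

The corrector raises the order of accuracy from one (Euler) to two, which is the same motivation for the recalculation step in the second method above.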
Pre-trained deep neural network language models such as ELMo, GPT, BERT and XLNet have recently achieved state-of-the-art performance on a variety of language understanding tasks. However, their size makes them impractical for a number of scenarios, especially on mobile and edge devices. In particular, the input word embedding matrix accounts for a significant proportion of the model's memory footprint, due to the large input vocabulary and embedding dimensions. Knowledge distillation techniques have had success at compressing large neural network models, but they are ineffective at yielding student models with vocabularies different from the original teacher models. We introduce a novel knowledge distillation technique for training a student model with a significantly smaller vocabulary as well as lower embedding and hidden state dimensions. Specifically, we employ a dual-training mechanism that trains the teacher and student models simultaneously to obtain optimal word embeddings for the student vocabulary. We combine this approach with learning shared projection matrices that transfer layer-wise knowledge from the teacher model to the student model. Our method is able to compress the BERT_BASE model by more than 60x, with only a minor drop in downstream task metrics, resulting in a language model with a footprint of under 7MB. Experimental results also demonstrate higher compression efficiency and accuracy when compared with other state-of-the-art compression techniques.