In clinical practice and biomedical research, measurements are often collected sparsely and irregularly in time because data acquisition is expensive and inconvenient. Examples include measurements of spine bone mineral density, cancer growth tracked through mammography or biopsy, the progression of vision defects, or assessment of gait in patients with neurological disorders. Estimating progression from such sparse observations is therefore of great interest to practitioners. From the statistical standpoint, such data are often analyzed with a mixed-effects model in which time enters as both a fixed effect (the population progression curve) and a random effect (individual variability). Alternatively, researchers use Gaussian processes or functional data analysis, where observations are assumed to be drawn from some distribution over processes. These models are flexible but rely on probabilistic assumptions, require careful, problem-specific implementation, and tend to be slow in practice. In this study, we propose an alternative, elementary framework for analyzing longitudinal data based on matrix completion. Our method yields estimates of progression curves by iterative application of the singular value decomposition (SVD). The framework covers multivariate longitudinal data and regression, and it can easily be extended to other settings. Because it relies on existing matrix-algebra routines, it is efficient and easy to implement. We apply our methods to study the progression of motor impairment in children with Cerebral Palsy. Our model approximates individual progression curves and explains 30% of the variability. The low-rank representation of progression trends enables identification of distinct progression patterns in subtypes of Cerebral Palsy.
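The abstract leaves the iterative SVD unspecified; below is a minimal sketch of one standard variant of this idea (hard-impute-style completion: hold observed entries fixed, repeatedly replace the rest with a rank-r SVD truncation). The function name, the chosen rank, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def svd_impute(Z, mask, rank=2, n_iter=100, tol=1e-6):
    """Fill missing entries of Z (mask==True where observed) by
    iteratively projecting onto the set of rank-`rank` matrices."""
    X = np.where(mask, Z, 0.0)          # initialize missing entries at 0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r truncation
        X_new = np.where(mask, Z, low_rank)              # keep observed fixed
        if np.linalg.norm(X_new - X) < tol * np.linalg.norm(X):
            X = X_new
            break
        X = X_new
    return low_rank  # smooth low-rank estimate of the progression curves

# toy usage: 20 patients, 15 time points, rank-2 truth, 60% missing
rng = np.random.default_rng(0)
truth = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))
mask = rng.random(truth.shape) < 0.4
est = svd_impute(truth, mask, rank=2)
```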
In the present work, we describe a framework for modeling how models are built, integrating concepts and methods from a wide range of fields. The information schism between the real world and what can be gathered and considered by any individual information-processing agent is characterized and discussed, followed by a presentation of the requisites adopted while developing the modeling approach. The issue of mapping from datasets into models is subsequently addressed, along with some of the difficulties and limitations this implies. Based on these considerations, an approach to meta-modeling how models are built is then progressively developed. First, the reference $M^*$ meta model framework is presented, which relies critically on associating whole datasets with respective models in terms of a strict equivalence relation. Among the interesting features of this model are its ability to bridge the gap between data and modeling, and the way it paves toward an algebra of both data and models that can be employed to combine models in a hierarchical manner. After illustrating the $M^*$ model in terms of patterns derived from regular lattices, the reported modeling approach continues by discussing how sampling issues, error, and overlooked data can be addressed, leading to the $M^{<\epsilon>}$ variant. The situation in which the data need to be represented in terms of respective probability densities is treated next, yielding the $M^{<\sigma>}$ meta model, which is then illustrated on a real-world dataset (the iris flowers data). Several considerations about how the developed framework can provide insights into data clustering, complexity, collaborative research, deep learning, and creativity are then presented, followed by overall conclusions.
Longitudinal fMRI datasets hold great promise for the study of neurodegenerative diseases, but realizing their potential depends on extracting accurate fMRI-based brain measures in individuals over time. This is especially true for rare, heterogeneous, and/or rapidly progressing diseases, which often involve small samples whose functional features may vary dramatically across subjects and over time, making traditional group-difference analyses of limited utility. One such disease is amyotrophic lateral sclerosis (ALS), which results in extreme motor function loss and eventual death. Here, we analyze a rich longitudinal dataset containing 190 motor task fMRI scans from 16 ALS patients and 22 age-matched healthy controls. We propose a novel longitudinal extension to our cortical surface-based spatial Bayesian GLM, which has high power and precision to detect activations in individuals. Using a series of longitudinal mixed-effects models to subsequently study the relationship between activation and disease progression, we observe an inverted U-shaped trajectory: activations enlarge at relatively mild disability but are severely diminished at higher disability, reflecting progression toward complete motor function loss. Trajectories also differ by clinical progression rate, with faster progressors exhibiting more extreme hyper-activation and subsequent hypo-activation. These differential trajectories suggest that the initial hyper-activation is likely attributable to loss of inhibitory neurons. By contrast, earlier studies employing more limited sampling designs and traditional group-difference analyses were only able to observe the initial hyper-activation, which was assumed to be due to a compensatory process. This study provides a first example of how surface-based spatial Bayesian modeling furthers scientific understanding of neurodegenerative disease.
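As an illustration of the second modeling stage, here is a hedged sketch of how an inverted U-shaped activation trajectory could be probed with a longitudinal mixed-effects model in statsmodels; the column names (subject, disability, activation), the simulated data, and the quadratic specification are assumptions for illustration, not the authors' actual models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical long-format data: one row per scan, with a per-scan
# activation summary and a disability score (names are illustrative)
rng = np.random.default_rng(1)
n_subj, n_scan = 16, 5
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_scan),
    "disability": np.tile(np.linspace(0, 10, n_scan), n_subj),
})
# simulate an inverted U: activation rises, then falls, with subject noise
df["activation"] = (2.0 * df["disability"] - 0.25 * df["disability"] ** 2
                    + rng.normal(0, 1, len(df))
                    + np.repeat(rng.normal(0, 0.5, n_subj), n_scan))

# linear mixed model with a quadratic fixed effect of disability and a
# random intercept per subject; a significantly negative quadratic term
# is consistent with the inverted-U trajectory described in the abstract
model = smf.mixedlm("activation ~ disability + I(disability**2)",
                    data=df, groups=df["subject"])
fit = model.fit()
print(fit.summary())
```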
Sparse matrix factorization is the problem of approximating a matrix $Z$ by a product of $L$ sparse factors $X^{(L)} X^{(L-1)} \cdots X^{(1)}$. This paper focuses on identifiability issues that appear in this problem, with a view to better understanding under which sparsity constraints the problem is well-posed. We give conditions under which the problem of factorizing a matrix into two sparse factors admits a unique solution, up to unavoidable permutation and scaling equivalences. Our general framework considers an arbitrary family of prescribed sparsity patterns, allowing us to capture more structured notions of sparsity than simply the count of nonzero entries. These conditions are shown to be related to essential uniqueness of exact matrix decomposition into a sum of rank-one matrices with structured sparsity constraints. A companion paper further exploits these conditions to derive identifiability properties of multilayer sparse factorizations of well-known matrices such as the Hadamard and discrete Fourier transform matrices.
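A concrete instance of the multilayer factorizations mentioned at the end is the classical butterfly factorization of the Sylvester-Hadamard matrix, sketched below; it illustrates the structured-sparsity setting (two nonzeros per row and column in each factor) but is not the paper's identifiability analysis.

```python
import numpy as np
from functools import reduce

H2 = np.array([[1, 1], [1, -1]])

def butterfly_factors(L):
    """Sparse butterfly factors of the 2^L x 2^L Sylvester-Hadamard
    matrix: each factor has exactly 2 nonzeros per row and column."""
    return [np.kron(np.kron(np.eye(2 ** l), H2), np.eye(2 ** (L - l - 1)))
            for l in range(L)]

L = 3
factors = butterfly_factors(L)
H = reduce(np.matmul, factors)           # product of L sparse factors

# sanity checks: H is a Hadamard matrix, and each factor is 2-sparse per row
assert np.allclose(H @ H.T, 2 ** L * np.eye(2 ** L))
for X in factors:
    assert (np.count_nonzero(X, axis=1) == 2).all()
print(H[:4, :4])
```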
Structured point process data harvested from various platforms poses new challenges to the machine learning community. By imposing a matrix structure on repeatedly observed marked point processes, we propose a novel mixture model of multi-level marked point processes for identifying potential heterogeneity in the observed data. Specifically, we study a matrix whose entries are marked log-Gaussian Cox processes and cluster the rows of such a matrix. An efficient semi-parametric Expectation-Solution (ES) algorithm combined with functional principal component analysis (FPCA) of point processes is proposed for model estimation. The effectiveness of the proposed framework is demonstrated through simulation studies and a real data analysis.
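For intuition about the model's building block, here is a minimal sketch of simulating a single marked log-Gaussian Cox process on $[0,1]$ by discretizing the Gaussian log-intensity on a grid; the covariance kernel, the mark mechanism, and all parameters are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# discretize [0, 1]; simulate the log-intensity as a Gaussian process with
# a squared-exponential covariance (an illustrative choice)
grid = np.linspace(0, 1, 200)
cov = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / (2 * 0.1 ** 2))
log_lam = rng.multivariate_normal(np.full(200, 3.0), cov + 1e-8 * np.eye(200))
lam = np.exp(log_lam)

# sample the Cox process given the intensity: Poisson counts per grid cell,
# then uniform event locations within each cell
dx = grid[1] - grid[0]
counts = rng.poisson(lam * dx)
events = np.concatenate([g + dx * rng.random(c) for g, c in zip(grid, counts)])

# attach a mark to each event (here, a noisy function of the log-intensity)
marks = np.interp(events, grid, log_lam) + rng.normal(0, 0.3, len(events))
print(f"{len(events)} marked events on [0, 1]")
```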
The noncentral Wishart distribution has become more mainstream in statistics, as the prevalence of applications involving sample covariances of underlying multivariate Gaussian populations has dramatically increased since the advent of computers. Multiple sources in the literature deal with local approximations of the noncentral Wishart distribution with respect to its central counterpart. However, no source has yet developed explicit local approximations for the (central) Wishart distribution in terms of a normal analogue, which is important since Gaussian distributions are at the heart of the asymptotic theory of many statistical methods. In this paper, we prove a precise asymptotic expansion for the ratio of the Wishart density to the symmetric matrix-variate normal density with the same mean and covariances. The result is then used to derive an upper bound on the total variation between the corresponding probability measures and to find the pointwise variance of a new density estimator on the space of positive definite matrices with a Wishart asymmetric kernel. For completeness, we also find expressions for the pointwise bias of the new estimator, its pointwise variance near the boundary of its support, the mean squared error, and the mean integrated squared error away from the boundary, and we prove its asymptotic normality.
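To make the kernel-estimation idea concrete, the following is a hedged sketch of a density estimator on SPD matrices with a Wishart asymmetric kernel: each kernel is a Wishart density whose mean sits at the evaluation point and whose spread shrinks with a bandwidth b. The exact estimator and normalization in the paper may differ.

```python
import numpy as np
from scipy.stats import wishart

def wishart_kde(S, data, b=0.1):
    """Asymmetric-kernel density estimate at an SPD point S: average of
    Wishart(df=1/b, scale=b*S) densities evaluated at the data points,
    so each kernel has mean S and spread shrinking as b -> 0.
    (A sketch of the general idea; the paper's estimator may differ.)"""
    d = S.shape[0]
    nu = 1.0 / b
    assert nu > d - 1, "Wishart df must exceed dim - 1"
    kern = wishart(df=nu, scale=b * S)
    return np.mean([kern.pdf(Si) for Si in data])

# toy usage: 2x2 SPD data drawn from a Wishart population with mean Sigma
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
data = wishart(df=5, scale=Sigma / 5).rvs(size=300, random_state=42)
print(wishart_kde(Sigma, data, b=0.05))
```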
Consistent segmentation of COVID-19 patients' CT scans across multiple time points is essential to accurately assess disease progression and response to therapy. Existing automatic and interactive segmentation models for medical images use data from a single time point only (static), so valuable segmentation information from previous time points is often not used to aid the segmentation of a patient's follow-up scans. Moreover, fully automatic segmentation techniques frequently produce results that would need further editing for clinical use. In this work, we propose a new single network model for interactive segmentation that fully utilizes all available past information to refine the segmentation of follow-up scans. In the first segmentation round, our model takes 3D volumes of medical images from two time points (target and reference) as concatenated slices, with the reference time point's segmentation as an additional guide for segmenting the target scan. In subsequent refinement rounds, user feedback in the form of scribbles that correct the segmentation, together with the target's previous segmentation results, is additionally fed into the model. This ensures that the segmentation information from previous refinement rounds is retained. Experimental results on our in-house multiclass longitudinal COVID-19 dataset show that the proposed model outperforms its static version and can assist in localizing COVID-19 infections in patients' follow-up scans.
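The input-assembly step described above can be pictured as channel concatenation; below is a minimal PyTorch sketch with hypothetical tensor names and a single stand-in convolution in place of the actual segmentation network.

```python
import torch
import torch.nn as nn

# hypothetical shapes: a batch of 3D volumes, one channel each
B, D, H, W = 1, 32, 128, 128
target_ct     = torch.randn(B, 1, D, H, W)   # scan to segment
reference_ct  = torch.randn(B, 1, D, H, W)   # earlier time point
reference_seg = torch.zeros(B, 1, D, H, W)   # its segmentation (guide)
prev_pred     = torch.zeros(B, 1, D, H, W)   # last refinement round's output
scribbles     = torch.zeros(B, 1, D, H, W)   # user corrections (0 = none)

# all available past information enters as extra input channels
x = torch.cat([target_ct, reference_ct, reference_seg,
               prev_pred, scribbles], dim=1)

# stand-in for the segmentation network: any 3D model taking 5 channels
net = nn.Conv3d(in_channels=5, out_channels=3, kernel_size=3, padding=1)
logits = net(x)                 # per-voxel multiclass scores
print(logits.shape)             # torch.Size([1, 3, 32, 128, 128])
```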
Matrix completion is a prevailing collaborative filtering method for recommendation systems, which relies on data supplied by users in order to provide personalized service. However, due to insidious attacks and unexpected inference, the release of user data often raises serious privacy concerns. Most existing solutions focus on improving the privacy guarantee for general matrix completion. As a special case, in recommendation systems where the observations are binary, one-bit matrix completion covers a broad range of real-life situations. In this paper, we propose a novel framework for one-bit matrix completion under a differential privacy constraint. Within this framework, we develop several perturbation mechanisms and analyze the privacy-accuracy trade-off offered by each. Experiments conducted on both synthetic and real-world datasets demonstrate that the proposed approaches can maintain a high level of privacy with little loss of completion accuracy.
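As one example of a perturbation mechanism in this spirit, the sketch below applies randomized response to the observed binary entries, a generic epsilon-differentially-private mechanism; it is not necessarily among the mechanisms developed in the paper, and the toy data generation is an assumption.

```python
import numpy as np

def randomized_response(Y, mask, eps, rng):
    """Flip each observed binary rating with probability 1/(1 + e^eps),
    the classical randomized-response mechanism, which makes each
    released entry eps-differentially private."""
    p_keep = np.exp(eps) / (1.0 + np.exp(eps))
    flips = rng.random(Y.shape) > p_keep
    return np.where(flips & mask, 1 - Y, Y)

rng = np.random.default_rng(4)
n, m, r = 100, 80, 3
# one-bit observations: sign of a low-rank matrix plus logistic noise
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))
Y = (M + rng.logistic(size=(n, m)) > 0).astype(int)
mask = rng.random((n, m)) < 0.3          # 30% of entries observed

Y_priv = randomized_response(Y, mask, eps=2.0, rng=rng)
print("fraction of observed entries flipped:", (Y_priv != Y)[mask].mean())
```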
With the rapid development of data collection techniques, complex data objects that do not lie in a Euclidean space are frequently encountered in new statistical applications. The Fr\'echet regression model (Petersen & M\"uller, 2019) provides a promising framework for regression analysis with metric space-valued responses. In this paper, we introduce a flexible sufficient dimension reduction (SDR) method for Fr\'echet regression with two purposes: to mitigate the curse of dimensionality caused by high-dimensional predictors, and to provide a data visualization tool for Fr\'echet regression. Our approach is flexible enough to turn any existing SDR method for Euclidean $(X, Y)$ into one for Euclidean $X$ and metric space-valued $Y$. The basic idea is to first map the metric space-valued random object $Y$ to a real-valued random variable $f(Y)$ using a class of functions, and then apply classical SDR to the transformed data. If the class of functions is sufficiently rich, then we are guaranteed to uncover the Fr\'echet SDR space. We show that such a class, which we call an ensemble, can be generated by a universal kernel. We establish the consistency and asymptotic convergence rate of the proposed methods. Their finite-sample performance is illustrated through simulation studies for several commonly encountered metric spaces, including the Wasserstein space, the space of symmetric positive definite matrices, and the sphere. We illustrate the data visualization aspect of our method by exploring human mortality distribution data across countries and by studying the distribution of hematoma density.
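The two-step idea (map $Y$ to scalars, then run classical SDR) can be sketched as follows for a toy circle-valued response, using Gaussian-kernel evaluations at reference points as the ensemble and sliced inverse regression as the Euclidean SDR method; all modeling choices here are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 500, 6
X = rng.normal(size=(n, p))
# metric-space response: a point on the circle driven by X[:, 0] only,
# so the true SDR space is span(e1) (an illustrative toy setup)
theta = np.pi * np.tanh(X[:, 0]) + 0.2 * rng.normal(size=n)

def geo_dist(a, b):                       # geodesic distance on the circle
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

# whitening transform so that SIR operates on standardized predictors
W = np.linalg.inv(np.linalg.cholesky(np.cov(X.T)).T)
Z = (X - X.mean(0)) @ W

def sir_matrix(Z, y, n_slices=10):
    """Sliced-inverse-regression candidate matrix for a scalar response."""
    slices = np.array_split(np.argsort(y), n_slices)
    means = np.stack([Z[s].mean(0) for s in slices])
    w = np.array([len(s) for s in slices]) / len(y)
    return (means.T * w) @ means

# ensemble step: kernel evaluations at reference points turn the
# metric-valued Y into many scalar responses f(Y)
refs = rng.uniform(0, 2 * np.pi, size=20)
M = sum(sir_matrix(Z, np.exp(-geo_dist(theta, r) ** 2)) for r in refs)
beta = W @ np.linalg.eigh(M)[1][:, -1]    # back to the original X scale
print("leading SDR direction:", np.round(beta / np.linalg.norm(beta), 2))
```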
We study the rank of the instantaneous, or spot, covariance matrix $\Sigma_X(t)$ of a multidimensional continuous semi-martingale $X(t)$. Given high-frequency observations $X(i/n)$, $i=0,\ldots,n$, we test the null hypothesis $\operatorname{rank}(\Sigma_X(t))\le r$ for all $t$ against local alternatives in which the average $(r+1)$st eigenvalue is larger than some signal detection rate $v_n$. A major problem is that the inherent averaging in local covariance statistics produces a bias that distorts the rank statistics. We show that the bias depends on the regularity and a spectral gap of $\Sigma_X(t)$. We establish explicit matrix perturbation and concentration results that provide non-asymptotic uniform critical values and optimal signal detection rates $v_n$. This leads to a rank estimation method via sequential testing. For a class of stochastic volatility models, we determine data-driven critical values via normed p-variations of estimated local covariance matrices. The methods are illustrated by simulations and an application to high-frequency data on U.S. government bonds.
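To fix ideas about why local averaging keeps the small eigenvalues away from zero, here is a minimal sketch of a spot covariance estimator on a toy rank-deficient model; the window size and the model are illustrative assumptions, not the paper's test statistics.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, r = 20000, 4, 2
dt = 1.0 / n
# toy semimartingale: d-dim Brownian motion driven by r common factors,
# so the spot covariance B B^T has rank r everywhere (a null model)
B = rng.normal(size=(d, r))
dX = rng.normal(scale=np.sqrt(dt), size=(n, r)) @ B.T
X = np.vstack([np.zeros(d), np.cumsum(dX, axis=0)])

def spot_cov(X, i, k, dt):
    """Local realized covariance: average of k outer products of
    increments around observation i, rescaled by the time step."""
    inc = np.diff(X[i - k // 2 : i + k // 2 + 1], axis=0)
    return inc.T @ inc / (len(inc) * dt)

k = 200                                  # local window: k -> inf, k/n -> 0
Sigma_hat = spot_cov(X, n // 2, k, dt)
eigs = np.sort(np.linalg.eigvalsh(Sigma_hat))[::-1]
# the smallest d - r eigenvalues are near zero but strictly positive in
# finite samples, the distortion the rank tests must account for
print("spot eigenvalues:", np.round(eigs, 3))
```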
Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling, and prediction, among other tasks. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case whenever there are ties across samples at the observed or latent level. To overcome this drawback, which is inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalizing to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
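The additive construction can be imitated numerically; the sketch below builds two dependent random probability measures by normalizing the sum of a shared and a group-specific gamma completely random measure, using a finite-atom truncation. The truncation scheme and all parameters are illustrative assumptions, not the paper's sampler.

```python
import numpy as np

rng = np.random.default_rng(7)

def gamma_crm(total_mass, n_atoms, rng):
    """Finite truncation of a gamma completely random measure: atom
    locations are iid uniform, jumps are Gamma(total_mass/n_atoms, 1),
    so the total weight has the exact Gamma(total_mass, 1) law."""
    locs = rng.random(n_atoms)
    jumps = rng.gamma(total_mass / n_atoms, 1.0, size=n_atoms)
    return locs, jumps

N, a_common, a_group = 500, 2.0, 1.0
common = gamma_crm(a_common, N, rng)     # shared across both groups

groups = []
for _ in range(2):
    own = gamma_crm(a_group, N, rng)     # group-specific component
    locs = np.concatenate([common[0], own[0]])
    jumps = np.concatenate([common[1], own[1]])
    groups.append((locs, jumps / jumps.sum()))   # normalize: a random pmf

# dependence: the two random probability measures share the common atoms,
# so ties across groups occur with positive probability
y1 = rng.choice(groups[0][0], size=5, p=groups[0][1])
y2 = rng.choice(groups[1][0], size=5, p=groups[1][1])
print(np.round(y1, 3), np.round(y2, 3))
```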