By learning mappings between infinite-dimensional function spaces with carefully designed neural networks, operator learning has proven significantly more efficient than traditional methods for solving complex problems such as differential equations, but concerns remain about its accuracy and reliability. To overcome these limitations, we combine the structure of spectral numerical methods with deep learning to introduce a general neural architecture named spectral operator learning (SOL), and then propose one of its variants, the orthogonal polynomial neural operator (OPNO), developed for PDEs with Dirichlet, Neumann and Robin boundary conditions (BCs). The strict BC-satisfaction property and the universal approximation capacity of the OPNO are theoretically proven. A variety of numerical experiments with physical backgrounds show that the OPNO outperforms other existing deep learning methodologies, as well as the traditional second-order finite difference method (FDM) on a considerably fine mesh (with relative errors reaching the order of 1e-6), while running up to almost five orders of magnitude faster than the traditional method.
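A minimal sketch of the transform-mix-inverse pattern behind spectral-operator-learning layers, not the authors' OPNO: Chebyshev-like coefficients are obtained via an orthonormal DCT, the lowest modes are mixed by learned weights, and the result is transformed back. The grid size, mode count, and weight initialization are illustrative assumptions; the actual OPNO uses compact polynomial bases constructed so the BCs are satisfied exactly.

```python
# Sketch of a spectral layer in an orthogonal polynomial basis (illustrative only).
import numpy as np
from scipy.fft import dct, idct

def spectral_layer(u, weights, n_modes):
    """u: (batch, n_grid) function samples; weights: (n_modes, n_modes) learned mixing."""
    coeffs = dct(u, type=2, norm="ortho", axis=-1)      # forward polynomial transform
    mixed = coeffs.copy()
    mixed[:, :n_modes] = coeffs[:, :n_modes] @ weights  # mix the low-frequency modes
    return idct(mixed, type=2, norm="ortho", axis=-1)   # back to physical space

rng = np.random.default_rng(0)
u = rng.standard_normal((4, 64))
out = spectral_layer(u, rng.standard_normal((16, 16)) / 16, n_modes=16)
```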
Identifying a biomarker or treatment-dose threshold that marks a specified level of risk is an important problem, especially in clinical trials. We call this risk, viewed as a function of the threshold and possibly adjusted for covariates, the threshold-response function. Extending the work of Donovan, Hudgens and Gilbert (2019), we propose a nonparametric efficient estimator for the covariate-adjusted threshold-response function, which utilizes machine learning and targeted minimum-loss estimation (TMLE). We additionally propose a more general estimator, based on sequential regression, that also applies when there is outcome missingness. We show that the threshold-response for a given threshold may be viewed as the expected outcome under a stochastic intervention in which all participants are given a treatment dose above the threshold. We prove that the estimator is efficient and characterize its asymptotic distribution, and we give a method to construct simultaneous 95% confidence bands for the threshold-response function and its inverse. Furthermore, we discuss how to adjust the estimator via inverse probability weighting when the treatment or biomarker is missing at random, as is the case in clinical trials with biased sampling designs. The methods are assessed in a diverse set of simulation settings with rare outcomes and cumulative case-control sampling, and are employed to estimate neutralizing antibody thresholds for virologically confirmed dengue risk in the CYD14 and CYD15 dengue vaccine trials.
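A minimal sketch of the quantity being estimated, under simplifying assumptions: the unadjusted threshold-response curve, i.e., the empirical risk among participants whose biomarker exceeds each threshold on a grid. The covariate adjustment with machine learning and the TMLE update step from the abstract are omitted; the simulated data below are purely illustrative.

```python
# Unadjusted empirical threshold-response curve E[Y | A >= v] over a threshold grid.
import numpy as np

def threshold_response(a, y, thresholds):
    """a: biomarker/dose values; y: binary outcomes; returns risk among {A >= v}."""
    return np.array([y[a >= v].mean() for v in thresholds])

rng = np.random.default_rng(1)
a = rng.uniform(0, 1, 2000)                   # biomarker values
y = rng.binomial(1, 0.3 * np.exp(-3 * a))     # risk decreasing in the marker
grid = np.quantile(a, np.linspace(0.0, 0.9, 10))
print(np.round(threshold_response(a, y, grid), 3))
```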
With advances in neural recording techniques, neuroscientists are now able to record the spiking activity of many hundreds of neurons simultaneously, and new statistical methods are needed to understand the structure of this large-scale neural population activity. Although previous work has tried to summarize neural activity within and between known populations by extracting low-dimensional latent factors, in many cases what determines a unique population may be unclear. Neurons differ not only in their anatomical location but also in their cell types and response properties. To identify populations directly related to neural activity, we develop a clustering method based on a mixture of dynamic Poisson factor analyzers (mixDPFA) model, with the number of clusters and the dimension of the latent factors for each cluster treated as unknown parameters. To analyze the proposed mixDPFA model, we propose a Markov chain Monte Carlo (MCMC) algorithm to efficiently sample from its posterior distribution. Validating the proposed MCMC algorithm through simulations, we find that it can accurately recover the unknown parameters and the true clustering in the model, and that it is insensitive to the initial cluster assignments. We then apply the proposed mixDPFA model to multi-region experimental recordings, where we find that it can identify novel, reliable clusters of neurons based on their activity, and may thus be a useful tool for neural data analysis.
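A minimal sketch of the generative structure the abstract describes: each cluster has its own low-dimensional latent factor trajectory, and a neuron's spike counts are Poisson with log-rate equal to a baseline plus loadings applied to its cluster's factors. The AR(1) latent dynamics, dimensions, and parameter scales below are illustrative assumptions, not the paper's specification, and the MCMC inference step is not shown.

```python
# Simulate spike counts from a mixture of dynamic Poisson factor analyzers.
import numpy as np

rng = np.random.default_rng(2)
T, n_clusters, p, neurons_per_cluster = 200, 3, 2, 20

spikes, labels = [], []
for k in range(n_clusters):
    x = np.zeros((T, p))                          # latent factors for cluster k
    for t in range(1, T):
        x[t] = 0.95 * x[t - 1] + 0.3 * rng.standard_normal(p)
    for _ in range(neurons_per_cluster):
        c = rng.standard_normal(p) * 0.5          # factor loadings
        d = rng.uniform(-1.0, 0.5)                # baseline log firing rate
        rate = np.exp(d + x @ c)                  # Poisson rate per time bin
        spikes.append(rng.poisson(rate))
        labels.append(k)

Y = np.array(spikes)                              # (neurons, T) spike-count matrix
```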
When analyzing complex networks, an important task is identifying those nodes which play a leading role for the overall communicability of the network. In the context of modifying networks (or making them robust against targeted attacks or outages), it is also relevant to know how sensitively the network's communicability reacts to changes in certain nodes or edges. Recently, the concept of total network sensitivity was introduced in [O. De la Cruz Cabrera, J. Jin, S. Noschese, L. Reichel, Communication in complex networks, Appl. Numer. Math., 172, pp. 186-205, 2022], which allows one to measure how sensitive the total communicability of a network is to the addition or removal of certain edges. One shortcoming of this concept is that sensitivities are extremely costly to compute with a straightforward approach (orders of magnitude more expensive than the corresponding communicability measures). In this work, we present computational procedures for estimating network sensitivity at a cost that is essentially linear in the number of nodes for many real-world complex networks. Additionally, we extend the sensitivity concept to also cover subgraph centrality and the Estrada index, and we discuss the case of node removal. We propose a priori bounds for these sensitivities which capture the qualitative behavior well and give insight into the general behavior of matrix-function-based network indices under perturbations. These bounds are based on decay results for Fr\'echet derivatives of matrix functions with structured, low-rank direction terms, which might be of independent interest for applications beyond network analysis.
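A minimal sketch of the quantity at stake, assuming total communicability TC(A) = 1^T exp(A) 1: the sensitivity to an edge perturbation is a Fréchet derivative of the matrix exponential, computed here exactly via the standard 2x2 block identity expm([[A, E], [0, A]]) whose top-right block equals L_exp(A, E). This is the dense O(n^3) "straightforward approach" the paper seeks to avoid; it only serves to define the target quantity.

```python
# Edge sensitivity of total communicability via the block Frechet-derivative trick.
import numpy as np
from scipy.linalg import expm

def tc_edge_sensitivity(A, i, j):
    n = A.shape[0]
    E = np.zeros_like(A)
    E[i, j] = E[j, i] = 1.0                       # symmetric edge perturbation
    block = np.block([[A, E], [np.zeros_like(A), A]])
    L = expm(block)[:n, n:]                       # Frechet derivative L_exp(A, E)
    one = np.ones(n)
    return one @ L @ one                          # d/dt TC(A + tE) at t = 0

A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # tiny path graph
print(tc_edge_sensitivity(A, 0, 2))               # sensitivity to adding edge (0,2)
```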
Modern clinical and epidemiological studies widely employ wearables to record parallel streams of real-time data on human physiology and behavior. With recent advances in distributional data analysis, these high-frequency data are now often treated as distributional observations, resulting in novel regression settings. Motivated by these modelling setups, we develop a distributional outcome regression via quantile functions (DORQF) that expands the existing literature with three key contributions: i) handling both scalar and distributional predictors, ii) ensuring a jointly monotone regression structure without enforcing monotonicity on individual functional regression coefficients, and iii) providing statistical inference via asymptotic projection-based joint confidence bands and a statistical test of global significance to quantify uncertainty of the estimated functional regression coefficients. The method is motivated by and applied to the Actiheart component of the Baltimore Longitudinal Study of Aging, which collected one week of minute-level heart rate (HR) and physical activity (PA) data on 781 older adults, to gain a deeper understanding of age-related changes in daily life heart rate reserve, defined as the distribution of daily HR, while accounting for the daily distribution of physical activity, age, gender, and body composition. Intriguingly, the results provide novel insights into the epidemiology of daily life heart rate reserve.
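A minimal sketch of the distributional representation behind a quantile-function outcome regression: each subject's minute-level HR stream is summarized by its empirical quantile function on a shared probability grid, yielding one functional outcome per subject. The grid and the simulated data are illustrative assumptions; the DORQF regression step with its joint monotonicity constraints and inference is not shown.

```python
# Represent each subject's daily HR distribution by its empirical quantile function.
import numpy as np

rng = np.random.default_rng(3)
p_grid = np.linspace(0.05, 0.95, 19)              # shared probability grid

def quantile_outcome(hr_minutes, p_grid):
    return np.quantile(hr_minutes, p_grid)        # subject-level quantile function

subjects = [rng.normal(70 + 10 * s, 8, size=1440) for s in range(5)]  # 1440 min/day
Q = np.vstack([quantile_outcome(hr, p_grid) for hr in subjects])      # (n, n_grid)
print(Q.shape)                                    # rows are distributional outcomes
```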
Neural networks can be trained to solve regression problems by using gradient-based methods to minimize the square loss. However, practitioners often prefer to reformulate regression as a classification problem, observing that training on the cross-entropy loss results in better performance. By focusing on two-layer ReLU networks, which can be fully characterized by measures over their feature space, we explore how the implicit bias induced by gradient-based optimization could partly explain this phenomenon. We provide theoretical evidence that, in the case of one-dimensional data, the regression formulation yields a measure whose support can differ greatly from that of the classification formulation. Our proposed optimal supports correspond directly to the features learned by the input layer of the network. The different nature of these supports sheds light on possible optimization difficulties the square loss could encounter during training, and we present empirical results illustrating this phenomenon.
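A minimal sketch of the two training setups being contrasted, on one-dimensional data: the same two-layer ReLU architecture fit once with the square loss and once with a binned cross-entropy reformulation. The target function, width, bin count, and optimizer settings are arbitrary illustrative choices; after training, the first-layer kink locations -b/w play the role of the learned feature supports discussed above.

```python
# Regression (square loss) vs. binned classification (cross entropy) on 1-D data.
import torch

torch.manual_seed(0)
x = torch.linspace(-1, 1, 256).unsqueeze(1)
y = torch.sin(3 * x)

def two_layer(out_dim):
    return torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.ReLU(),
                               torch.nn.Linear(64, out_dim))

reg, clf = two_layer(1), two_layer(10)            # 10 output bins for classification
bins = torch.bucketize(y.squeeze(), torch.linspace(-1, 1, 11)[1:-1])

for model, loss_fn, target in [(reg, torch.nn.MSELoss(), y),
                               (clf, torch.nn.CrossEntropyLoss(), bins)]:
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss_fn(model(x), target).backward()
        opt.step()

# Inspect where the first-layer ReLU kinks x = -b/w sit for the two trained models.
```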
Federated Learning (FL) is a novel distributed machine learning approach that leverages data from Internet of Things (IoT) devices while maintaining data privacy. However, current FL algorithms face the challenge of non-independent and identically distributed (non-IID) data, which causes high communication costs and declines in model accuracy. To address these statistical imbalances in FL, we propose a clustered data sharing framework in which cluster heads share partial data with credible associates through device-to-device (D2D) communication. Moreover, aiming to dilute the data skew on nodes, we formulate the joint clustering and data sharing problem on a privacy-preserving constrained graph. To tackle the tight coupling of decisions on the graph, we devise a distribution-based adaptive clustering algorithm (DACA) based on three deductive cluster-forming conditions, which ensures the maximum yield of data sharing. The experiments show that the proposed framework facilitates FL on non-IID datasets with better convergence and model accuracy under a limited communication environment.
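A minimal sketch of the data-sharing idea only, not the paper's DACA algorithm: a cluster head holding a skewed label mix shares a small slice of its data with an associate, and the associate's label skew (distance of its label distribution to the uniform global mix) shrinks. The label counts, sharing fraction, and skew metric are illustrative assumptions.

```python
# Effect of D2D data sharing on a device's label-distribution skew (illustrative).
import numpy as np

n_classes = 10

def skew(counts):
    p = counts / counts.sum()
    return np.abs(p - 1.0 / n_classes).sum()      # L1 distance to the uniform mix

head = np.zeros(n_classes); head[:5] = 200        # cluster head holds classes 0-4
associate = np.zeros(n_classes); associate[5:] = 200  # associate holds classes 5-9

share = 0.2 * head                                # head shares 20% of its data
print("skew before:", round(skew(associate), 3),
      "after:", round(skew(associate + share), 3))
```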
We introduce Joint Coverage Regions (JCRs), which unify confidence intervals and prediction regions in frequentist statistics. Specifically, joint coverage regions aim to cover a pair formed by an unknown fixed parameter (such as the mean of a distribution) and an unobserved random datapoint (such as the outcome associated with a new test datapoint). The first corresponds to a confidence component, while the second corresponds to a prediction part. In particular, our notion unifies classical statistical methods such as the Wald confidence interval with distribution-free prediction methods such as conformal prediction. We show how to construct finite-sample valid JCRs when a conditional pivot is available, which is the same condition under which exact finite-sample confidence and prediction sets are known to exist. We further develop efficient JCR algorithms, including split-data versions that introduce adequate sets to reduce the cost of repeated computation. We illustrate the use of JCRs in statistical problems such as constructing efficient prediction sets when the parameter space is structured.
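A minimal sketch of what "joint coverage" asks for, using a naive baseline rather than the paper's pivot-based construction: for Gaussian data, a Bonferroni rectangle combining a t confidence interval for the mean and a t prediction interval for a new observation, each at level 1 - alpha/2, covers the pair (mu, X_{n+1}) with probability at least 1 - alpha.

```python
# Naive Bonferroni joint coverage region for (mu, X_new) under Gaussian data.
import numpy as np
from scipy import stats

def bonferroni_jcr(x, alpha=0.1):
    n, m, s = len(x), x.mean(), x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / 4, df=n - 1)      # two-sided, alpha/2 per component
    ci = (m - t * s / np.sqrt(n), m + t * s / np.sqrt(n))                  # for mu
    pi = (m - t * s * np.sqrt(1 + 1 / n), m + t * s * np.sqrt(1 + 1 / n))  # for X_new
    return ci, pi                                  # their product covers the pair

rng = np.random.default_rng(5)
print(bonferroni_jcr(rng.normal(0, 1, 50)))
```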
Gradient-based learning in multi-layer neural networks displays a number of striking features. In particular, the decrease of the empirical risk is non-monotone even after averaging over large batches: long plateaus in which one observes barely any progress alternate with intervals of rapid decrease, and these successive phases of learning often take place on very different time scales. Finally, models learnt in an early phase are typically `simpler' or `easier to learn', although in a way that is difficult to formalize. Although theoretical explanations of these phenomena have been put forward, each of them captures at best certain specific regimes. In this paper, we study the gradient flow dynamics of a wide two-layer neural network in high dimension, when data are distributed according to a single-index model (i.e., the target function depends on a one-dimensional projection of the covariates). Based on a mixture of new rigorous results, non-rigorous mathematical derivations, and numerical simulations, we propose a scenario for the learning dynamics in this setting. In particular, the proposed evolution exhibits separation of timescales and intermittency. These behaviors arise naturally because the population gradient flow can be recast as a singularly perturbed dynamical system.
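A minimal sketch of the setting described above: online SGD (fresh Gaussian batches, a proxy for population gradient flow) for a wide two-layer network on a single-index target y = phi(<w*, x>). The dimension, width, step size, and link function are illustrative assumptions; logging the loss over time is how one would look for the plateau/rapid-drop pattern.

```python
# Online SGD for a wide two-layer network on a single-index target.
import torch

torch.manual_seed(1)
d, width, lr = 64, 512, 0.05
w_star = torch.randn(d); w_star /= w_star.norm()  # hidden one-dimensional direction
phi = lambda z: torch.relu(z) - 0.5               # a simple link function

net = torch.nn.Sequential(torch.nn.Linear(d, width), torch.nn.ReLU(),
                          torch.nn.Linear(width, 1))
opt = torch.optim.SGD(net.parameters(), lr=lr)

for step in range(2001):
    x = torch.randn(256, d)                       # fresh batch each step
    y = phi(x @ w_star).unsqueeze(1)
    loss = torch.mean((net(x) - y) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(step, float(loss))                  # watch for plateaus between drops
```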