To improve the statistical power of imaging-biomarker detection, we propose a latent variable-based statistical network analysis (LatentSNA) that combines brain functional connectivity with internalizing psychopathology, embedding network science in a generative statistical process that preserves the neurologically meaningful network topology of the child and adolescent population. The inference-focused generative Bayesian framework (1) addresses the lack of power and inflated Type II errors of current analytic approaches for detecting imaging biomarkers, (2) allows unbiased estimation of the biomarkers' influence on behavioral variation, (3) quantifies the uncertainty of the estimated biomarker effects and evaluates their likelihood against chance, and (4) ultimately improves brain-behavior prediction in novel samples and the clinical utility of neuroimaging findings. We jointly model multi-state functional networks and multivariate internalizing profiles for 5,000 to 7,000 children in the Adolescent Brain Cognitive Development (ABCD) study, accurately predicting both children's internalizing traits and their functional connectivity and substantially improving our ability to explain individual internalizing differences compared with current approaches. We successfully uncover large, coherent, star-like brain functional architectures associated with children's internalizing psychopathology across multiple functional systems and establish them as unique fingerprints for childhood internalization.
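To fix ideas, here is a toy generative sketch of a shared-latent-variable structure in this spirit (our simplification, not the LatentSNA model itself): subject-level latent factors jointly drive the functional connectome and the internalizing profile, so posterior inference on the shared loadings is what would quantify biomarker effects with uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
n, V, k, q = 200, 20, 3, 4     # subjects, brain regions, latent dims, traits

U = rng.normal(size=(n, k))    # subject-level latent scores (shared)
L = rng.normal(size=(V, k))    # region loadings (connectivity side)
beta = rng.normal(size=(k, q)) # behavior loadings (internalizing side)

# Each subject's expected connectivity and traits share the scores U[i],
# so the same latent variables explain both data modalities.
conn = np.stack([L @ np.diag(U[i]) @ L.T for i in range(n)])
conn += rng.normal(scale=0.1, size=conn.shape)          # noisy connectomes
traits = U @ beta + rng.normal(scale=0.1, size=(n, q))  # internalizing profiles
```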
We consider the community recovery problem on a multilayer variant of the hypergraph stochastic block model (HSBM). Each layer is associated with an independent realization of a d-uniform HSBM on N vertices. Given the similarity matrix containing the aggregated number of hyperedges incident to each pair of vertices, the goal is to obtain a partition of the N vertices into disjoint communities. In this work, we investigate a semidefinite programming (SDP) approach and obtain information-theoretic conditions on the model parameters that guarantee exact recovery both in the assortative and the disassortative cases.
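For concreteness, the following is a minimal sketch of the flavor of SDP relaxation studied in this line of work, specialized to two balanced communities; the balanced setting, the centering of the similarity matrix, and the eigenvector rounding are our illustrative assumptions, not the paper's exact program.

```python
import cvxpy as cp
import numpy as np

def sdp_recovery(W: np.ndarray) -> np.ndarray:
    """Semidefinite relaxation for two balanced communities.

    W is a (centered) similarity matrix aggregating hyperedge counts
    across layers; X relaxes the rank-one label matrix sigma sigma^T.
    In the disassortative case one would maximize <-W, X> instead.
    """
    N = W.shape[0]
    X = cp.Variable((N, N), symmetric=True)
    constraints = [
        X >> 0,            # positive semidefinite
        cp.diag(X) == 1,   # relaxation of sigma_i in {-1, +1}
        cp.sum(X) == 0,    # balanced communities: <J, X> = 0
    ]
    cp.Problem(cp.Maximize(cp.trace(W @ X)), constraints).solve()
    # Round the leading eigenvector of X to +/-1 community labels.
    _, eigvecs = np.linalg.eigh(X.value)
    return np.sign(eigvecs[:, -1])
```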
Quantum computing has recently emerged as a transformative technology. Yet its promised advantages rely on efficiently translating quantum operations into viable physical realizations. In this work, we use generative machine learning models, specifically denoising diffusion models (DMs), to facilitate this transformation. Leveraging text-conditioning, we steer the model to produce desired quantum operations within gate-based quantum circuits. Notably, DMs allow one to sidestep, during training, the exponential overhead inherent in the classical simulation of quantum dynamics -- a consistent bottleneck in preceding ML techniques. We demonstrate the model's capabilities across two tasks: entanglement generation and unitary compilation. The model excels at generating new circuits and supports typical DM extensions such as masking and editing to, for instance, align the circuit generation with the constraints of the targeted quantum device. Given their flexibility and generalization abilities, we envision DMs as pivotal in quantum circuit synthesis, enhancing both practical applications and insight into theoretical quantum computation.
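A schematic, PyTorch-style rendering of one text-conditioned DDPM training step over a tensor encoding of circuits may clarify the training-time claim; the circuit encoding, denoiser, and text encoder below are placeholders, not the paper's architecture.

```python
import torch

def diffusion_training_step(denoiser, text_encoder, circuits, prompts,
                            alphas_cumprod, optimizer):
    """One DDPM-style step: corrupt circuit encodings with Gaussian
    noise and train the denoiser to predict that noise, conditioned
    on an embedding of the text prompt (e.g. "entangle qubits 0,2").
    `circuits` is a batch of continuous tensors encoding gate-based
    circuits -- this encoding is a placeholder assumption."""
    t = torch.randint(0, len(alphas_cumprod), (circuits.shape[0],))
    a_bar = alphas_cumprod[t].view(-1, *([1] * (circuits.dim() - 1)))
    noise = torch.randn_like(circuits)
    noisy = a_bar.sqrt() * circuits + (1 - a_bar).sqrt() * noise
    cond = text_encoder(prompts)          # text conditioning
    pred = denoiser(noisy, t, cond)       # predict the injected noise
    loss = torch.nn.functional.mse_loss(pred, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that the loss touches only the circuit encodings and the noise, never a classically simulated unitary, which is the exponential bottleneck the abstract says is sidestepped.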
The synthesis of information deriving from complex networks is a topic of increasing relevance in ecology and environmental sciences. In particular, the aggregation of multilayer networks, i.e. network structures formed by multiple interacting networks (the layers), constitutes a fast-growing field. In several environmental applications, the layers of a multilayer network are modelled as a collection of similarity matrices describing how similar pairs of biological entities are, based on different types of features (e.g. biological traits). The present paper first discusses two main techniques for combining the multi-layered information into a single network (the so-called monoplex): Similarity Network Fusion (SNF) and Similarity Matrix Average (SMA). The effectiveness of the two methods is then tested on a real-world dataset of the relative abundance of microbial species in the ecosystems of nine glaciers (four in the Alps and five in the Andes). A preliminary clustering analysis of the monoplexes obtained with the different methods shows the emergence of a tightly connected community formed by species that are typical of cryoconite holes worldwide. Moreover, the weights assigned to the different layers by the SMA algorithm suggest that two large South American glaciers (Exploradores and Perito Moreno) are structurally different from the smaller glaciers in both Europe and South America. Overall, these results highlight the importance of integration methods for discovering the underlying organizational structure of biological entities in multilayer ecological networks.
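As a point of reference, the simplest way to collapse layers into a monoplex is a weighted average of the similarity matrices, sketched below; the uniform default weights are a placeholder, since SMA's distinguishing feature is precisely that it estimates the layer weights (and SNF instead fuses layers through an iterative cross-diffusion process).

```python
import numpy as np

def similarity_matrix_average(layers, weights=None):
    """Collapse a multilayer network into a monoplex by a (weighted)
    average of its layer similarity matrices.

    `layers` is a list of KxK similarity matrices, one per feature
    type. Uniform weights are a placeholder: the SMA algorithm itself
    optimizes the layer weights, which is what lets it flag
    structurally atypical layers such as the large Patagonian glaciers.
    """
    layers = [np.asarray(L, dtype=float) for L in layers]
    if weights is None:
        weights = np.full(len(layers), 1.0 / len(layers))
    weights = np.asarray(weights) / np.sum(weights)
    return sum(w * L for w, L in zip(weights, layers))
```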
In prediction settings where data are collected over time, it is often of interest to understand both the importance of variables for predicting the response at each time point and their importance summarized over the time series. Building on recent advances in estimation and inference for variable importance measures, we define summaries of variable-importance trajectories. These measures can be estimated, and the same inferential approaches apply, regardless of the algorithm(s) used to estimate the prediction function. We propose a nonparametric efficient estimation and inference procedure, as well as a null-hypothesis testing procedure, that are valid even when complex machine learning tools are used for prediction. Through simulations, we demonstrate that the proposed procedures have good operating characteristics, and we illustrate their use by investigating the longitudinal importance of risk factors for suicide attempt.
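A naive plug-in analogue of the time-point-wise importance measure is sketched below under strong simplifying assumptions (the random forest, the R^2 scale, and the absence of debiasing are our choices; the paper's procedure is algorithm-agnostic and provides valid inference, which this sketch omits).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def importance_trajectory(X_by_time, y_by_time, j):
    """Plug-in trajectory of variable importance: at each time point,
    the drop in cross-fitted R^2 when feature j is removed."""
    traj = []
    for X, y in zip(X_by_time, y_by_time):
        full = cross_val_predict(RandomForestRegressor(), X, y, cv=5)
        X_red = np.delete(X, j, axis=1)
        red = cross_val_predict(RandomForestRegressor(), X_red, y, cv=5)
        r2 = lambda p: 1 - np.mean((y - p) ** 2) / np.var(y)
        traj.append(r2(full) - r2(red))
    return np.array(traj)

# One natural summary of the trajectory is its mean over time:
# summary = importance_trajectory(Xs, ys, j=0).mean()
```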
At least two different approaches exist for defining and solving statistical models for the analysis of economic systems: the typical econometric one, which interprets the Gravity Model specification as the expected link weight of an arbitrary probability distribution, and the one rooted in statistical physics, which constructs maximum-entropy distributions constrained to satisfy certain network properties. In two recent companion papers, these have been successfully integrated within the framework induced by the constrained minimisation of the Kullback-Leibler divergence: specifically, two broad classes of models have been devised, the integrated and the conditional ones, defined by different probabilistic rules for placing links, loading them with weights, and turning them into proper econometric prescriptions. Still, the recipes adopted by the two approaches to estimate the parameters entering the definition of each model differ. In econometrics, one typically maximises a likelihood that decouples the binary and weighted parts of a model, treating the network as deterministic; to restore its random character, two alternatives exist: either solving the likelihood maximisation on each configuration of the ensemble and averaging the resulting parameters afterwards, or averaging the likelihood function and maximising the latter. The difference between these approaches lies in the order in which averaging and maximisation are carried out - a difference reminiscent of the quenched and annealed ways of averaging out disorder in spin glasses. The results of the present contribution, devoted to comparing these recipes for continuous, conditional network models, indicate that the annealed estimation recipe is the best alternative to the deterministic one.
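In symbols (notation ours), writing $\mathcal{L}(\theta; G)$ for the likelihood of the parameters $\theta$ given a network configuration $G$ and $\langle \cdot \rangle_G$ for the ensemble average, the two recipes differ only in the order of the two operations:

```latex
\hat{\theta}_{\mathrm{quenched}} = \Big\langle \arg\max_{\theta}\, \mathcal{L}(\theta; G) \Big\rangle_{G},
\qquad
\hat{\theta}_{\mathrm{annealed}} = \arg\max_{\theta}\, \big\langle \mathcal{L}(\theta; G) \big\rangle_{G}.
```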
Pre-trained language models can be surprisingly adept at tasks they were not explicitly trained on, but how they implement these capabilities is poorly understood. In this paper, we investigate the basic mathematical abilities often acquired by pre-trained language models. Concretely, we use mechanistic interpretability techniques to explain the (limited) mathematical abilities of GPT-2 small. As a case study, we examine its ability to take in sentences such as "The war lasted from the year 1732 to the year 17", and predict valid two-digit end years (years > 32). We first identify a circuit, a small subset of GPT-2 small's computational graph that computes this task's output. Then, we explain the role of each circuit component, showing that GPT-2 small's final multi-layer perceptrons boost the probability of end years greater than the start year. Finally, we find related tasks that activate our circuit. Our results suggest that GPT-2 small computes greater-than using a complex but general mechanism that activates across diverse contexts.
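The task setup can be probed directly with the Hugging Face transformers library; the snippet below is an illustration of the probe, not the paper's code, and it follows the paper's prompt design in assuming each two-digit year suffix is a single GPT-2 token.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# After "The war lasted from the year 1732 to the year 17", how much
# probability mass falls on two-digit continuations above "32"?
tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "The war lasted from the year 1732 to the year 17"
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]
probs = logits.softmax(-1)

two_digit = [tok.encode(f"{i:02d}")[0] for i in range(100)]
mass = probs[two_digit]
print("P(end year > 32):", (mass[33:].sum() / mass.sum()).item())
```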
We study Whitney-type estimates for the approximation of convex functions in the uniform norm on various convex multivariate domains, paying particular attention to the dependence of the constants involved on the dimension and on the geometry of the domain.
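For orientation, a Whitney-type estimate on a convex body $\Omega \subset \mathbb{R}^d$ takes the generic form below (our notation), where $\Pi_{n-1}^d$ denotes polynomials of total degree at most $n-1$ and $\omega_n$ the $n$-th modulus of smoothness; the behavior of the constant $C(n, d, \Omega)$ in the dimension and the geometry of $\Omega$ is precisely what is at stake:

```latex
E_{n-1}(f)_{L_\infty(\Omega)} := \inf_{P \in \Pi_{n-1}^d} \| f - P \|_{L_\infty(\Omega)}
\le C(n, d, \Omega)\, \omega_n\big(f, \mathrm{diam}(\Omega)\big)_{L_\infty(\Omega)}.
```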
Conventional local planners frequently become trapped in a locally optimal trajectory, primarily because they cannot traverse obstacles. Having a larger number of topologically distinctive paths increases the likelihood of finding the optimal trajectory, so it is crucial to generate a substantial number of topologically distinctive paths in real time. Accordingly, we propose an efficient path-planning approach based on tangent graphs that yields multiple topologically distinctive paths. Diverging from existing algorithms, our method eliminates the need to distinguish whether two paths belong to the same topology; instead, it generates multiple topologically distinctive paths based on the locally shortest property of tangents. Additionally, we introduce a priority constraint on the queue during graph search, thereby averting the exponential growth of the queue size. To illustrate the advantages of our method, we conducted a comparative analysis with various typical algorithms using a widely recognized public dataset\footnote{//movingai.com/benchmarks/grids.html}. The results indicate that, on average, our method generates 320 topologically distinctive paths within a mere 100 milliseconds, underscoring a significant efficiency gain over existing methods. To foster further research within the community, we have made the source code of our proposed algorithm publicly accessible\footnote{//joeyao-bit.github.io/posts/2023/09/07/}. We anticipate that this framework will significantly contribute to the development of more efficient topologically distinctive path planning, along with related trajectory optimization and motion planning endeavors.
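For readers unfamiliar with search over tangent graphs, the skeleton below shows the priority-queue search such methods build on, under our own toy interface; the paper's priority constraint on queue insertions, which prevents exponential queue growth while still enumerating topologically distinctive paths, is deliberately omitted, so this is plain best-first (Dijkstra-style) search for illustration.

```python
import heapq

def shortest_path(graph, start, goal):
    """Best-first search over a tangent graph with a priority queue.

    `graph[u]` yields (v, w) neighbour/edge-length pairs; nodes stand
    for tangent points between obstacle boundaries."""
    dist, queue = {start: 0.0}, [(0.0, start, [start])]
    while queue:
        d, u, path = heapq.heappop(queue)
        if u == goal:
            return d, path
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (d + w, v, path + [v]))
    return float("inf"), None
```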
In sampling-based Bayesian models of brain function, neural activities are assumed to be samples from probability distributions that the brain uses for probabilistic computation. However, a comprehensive understanding of how mechanistic models of neural dynamics can sample from arbitrary distributions is still lacking. We use tools from functional analysis and stochastic differential equations to explore the minimum architectural requirements for $\textit{recurrent}$ neural circuits to sample from complex distributions. We first consider the traditional sampling model consisting of a network of neurons whose outputs directly represent the samples (sampler-only network). We argue that synaptic current and firing-rate dynamics in the traditional model have limited capacity to sample from a complex probability distribution. We show that the firing rate dynamics of a recurrent neural circuit with a separate set of output units can sample from an arbitrary probability distribution. We call such circuits reservoir-sampler networks (RSNs). We propose an efficient training procedure based on denoising score matching that finds recurrent and output weights such that the RSN implements Langevin sampling. We empirically demonstrate our model's ability to sample from several complex data distributions using the proposed neural dynamics and discuss its applicability to developing the next generation of sampling-based brain models.
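As a minimal sketch of the sampling dynamics the output units are trained to realize (our simplification: the paper learns recurrent and output weights via denoising score matching, whereas here the score function is simply handed to the sampler):

```python
import numpy as np

def langevin_sample(score, x0, eps=1e-2, n_steps=1000, rng=None):
    """Unadjusted Langevin dynamics driven by a (learned) score:
    x <- x + (eps / 2) * score(x) + sqrt(eps) * noise,
    where `score` approximates grad log p(x)."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x = x + 0.5 * eps * score(x) + np.sqrt(eps) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Example: sampling a standard Gaussian, whose score is -x.
# chain = langevin_sample(lambda x: -x, x0=np.zeros(2))
```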
We propose a new variable selection procedure for a functional linear model with multiple scalar responses and multiple functional predictors. The method is based on basis expansions of the functional predictors and coefficients, which lead to a multivariate linear regression model. We then introduce a criterion by means of which the variable selection problem reduces to estimating a suitable set. This set is estimated by appropriately penalizing estimates of the criterion, leading to our proposal. A simulation study investigating the effectiveness of the proposed approach and comparing it with existing methods is provided.
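A deliberately naive rendition of the pipeline, under our own simplifying choices (crude projection onto a cosine basis, ordinary least squares, and group-norm thresholding in place of the paper's penalized criterion):

```python
import numpy as np

def select_functional_predictors(Xs, Y, n_basis, tau):
    """Toy variable selection after basis expansion.

    Each functional predictor X_j (an n x T matrix of curves on a
    common grid) is reduced to `n_basis` coefficient columns by a
    crude Riemann-sum projection, all responses in Y (n x q) are
    regressed on the stacked design, and predictor j is retained
    when the Frobenius norm of its coefficient block exceeds tau."""
    n, T = Xs[0].shape
    grid = np.linspace(0, 1, T)
    basis = np.array([np.cos(np.pi * k * grid) for k in range(n_basis)]).T
    design = np.hstack([X @ basis / T for X in Xs])   # n x (p * n_basis)
    B, *_ = np.linalg.lstsq(design, Y, rcond=None)    # (p * n_basis) x q
    blocks = np.split(B, len(Xs), axis=0)             # one block per predictor
    return [j for j, Bj in enumerate(blocks) if np.linalg.norm(Bj) > tau]
```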