This work concerns the development of communication- and computation-efficient methods for large-scale multiple testing over networks, a problem of interest in many practical applications. We take an asymptotic approach and propose two methods tailored to distributed settings: proportion-matching and greedy aggregation. The proportion-matching method matches the performance of the global BH procedure while requiring only a one-shot communication of the (estimated) proportion of true null hypotheses and the number of p-values at each node. By focusing on the asymptotically optimal power, we go beyond the BH procedure and provide an explicit characterization of the asymptotically optimal solution. This leads to the greedy aggregation method, which effectively approximates the optimal rejection regions at each node and whose greedy construction naturally yields computational efficiency. Extensive numerical results over a variety of challenging settings support our theoretical findings.
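As a concrete point of reference, the sketch below shows a plug-in BH procedure of the kind the proportion-matching method builds on; the Storey-type estimator of the null proportion is an illustrative assumption, since the abstract does not specify which estimator each node uses.

```python
import numpy as np

def storey_pi0(pvals, lam=0.5):
    """Storey-type plug-in estimate of the proportion of true nulls
    (an illustrative choice; the paper's estimator may differ)."""
    return min(1.0, np.mean(pvals > lam) / (1.0 - lam))

def bh_threshold(pvals, alpha=0.05, pi0=1.0):
    """Benjamini-Hochberg step-up rule with an optional plug-in null
    proportion; rejects every p-value below the returned threshold."""
    p = np.sort(pvals)
    m = len(p)
    passed = np.nonzero(p <= np.arange(1, m + 1) * alpha / (m * pi0))[0]
    return p[passed[-1]] if passed.size else 0.0
```

In a distributed variant, each node would ship only its estimated null proportion and its p-value count to a coordinator, which is exactly the one-shot communication the abstract describes.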
Spiking neural networks (SNNs) have recently attracted attention due to their promising capabilities. SNNs simulate the brain with higher biological plausibility than previous generations of neural networks; learning from fewer samples and consuming less power are among their key features. However, the theoretical advantages of SNNs have not been realized in practice due to the slowness of simulation tools and the impracticality of the proposed network structures. In this work, we implement from scratch a high-performance C++/CUDA library named Spyker that outperforms its predecessor. Using Spyker, we implement several SNNs with different learning rules (spike-timing-dependent plasticity and reinforcement learning) that achieve significantly better runtimes, demonstrating the practicality of the library for simulating large-scale networks. To our knowledge, no existing tool simulates large-scale spiking neural networks with high performance using a modular structure. Furthermore, we compare stimulus representations extracted from Spyker with recorded electrophysiology data to demonstrate the applicability of SNNs in describing the neural mechanisms underlying brain function. The aim of this library is to take a significant step toward uncovering the true potential of brain computations using SNNs.
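For intuition, here is a minimal pair-based STDP update of the multiplicative form commonly used in this line of work; the constants and the exact rule are illustrative assumptions, not Spyker's documented defaults.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.01):
    """Pair-based multiplicative STDP: potentiate the synapse when the
    presynaptic spike precedes the postsynaptic one, depress it otherwise.
    The soft bound w * (1 - w) keeps weights inside [0, 1]."""
    if t_post >= t_pre:
        dw = a_plus * w * (1.0 - w)
    else:
        dw = -a_minus * w * (1.0 - w)
    return float(np.clip(w + dw, 0.0, 1.0))
```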
Computational models have become a powerful tool in the quantitative sciences for understanding the behaviour of complex systems that evolve in time. However, they often contain a potentially large number of free parameters whose values cannot be obtained from theory but must be inferred from data. This is especially the case for models in the social sciences, economics, and computational epidemiology. Yet many current parameter estimation methods are mathematically involved and computationally slow. In this paper we present a computationally simple and fast method for retrieving accurate probability densities for model parameters using neural differential equations. We present a pipeline comprising multi-agent models acting as forward solvers for systems of ordinary or stochastic differential equations, and a neural network that then extracts parameters from the data generated by the model. Combined, the two create a powerful tool that can quickly estimate densities on model parameters, even for very large systems. We demonstrate the method on synthetic time series from the SIR model of infectious disease spread, and perform an in-depth analysis of the Harris-Wilson model of economic activity on a network, which represents a non-convex problem. For the latter, we apply our method both to synthetic data and to data of economic activity across Greater London. We find that our method calibrates the model orders of magnitude more accurately than a previous study of the same dataset using classical techniques, while running between 195 and 390 times faster.
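To make the pipeline concrete, the sketch below generates one synthetic SIR trajectory of the kind a neural network could be trained on; the parameter values and time grid are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, y, beta, gamma):
    """Classic SIR right-hand side: infection rate beta, recovery rate gamma."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

# One (time series, parameter) training pair for the inference network.
beta, gamma = 0.3, 0.1
sol = solve_ivp(sir_rhs, (0.0, 50.0), [0.99, 0.01, 0.0],
                args=(beta, gamma), t_eval=np.linspace(0.0, 50.0, 100))
series, target = sol.y.T, np.array([beta, gamma])
```

Repeating this over a sweep of (beta, gamma) pairs yields the supervised data from which a network can learn a map from observed trajectories to parameter densities.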
This paper tackles the problem of clustering multiple networks, which do not share the same set of vertices, into groups of networks with similar topology. We propose a model-based statistical approach built on a finite mixture of stochastic block models. A clustering is obtained by maximizing the integrated classification likelihood (ICL) criterion via a hierarchical agglomerative algorithm that starts from singleton clusters and successively merges clusters of networks. The algorithm thereby computes a sequence of nested clusterings, which can be represented by a dendrogram providing valuable insights into the collection of networks. Within the Bayesian framework, model selection is performed automatically, since the algorithm stops when the best number of clusters is attained. The algorithm is computationally efficient when carefully implemented. Merging groups of networks requires a means to overcome the label-switching problem of the stochastic block model and to match the block labels across graphs; to address this, we propose a new tool based on comparing the graphons of the associated stochastic block models. The clustering approach is assessed on synthetic data, and an application to a collection of ecological networks illustrates the interpretability of the results.
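The merging step can be pictured with the following generic greedy loop; the scoring function stands in for the integrated classification likelihood, whose actual form is specific to the paper's mixture of stochastic block models.

```python
def greedy_agglomeration(clusters, score):
    """Hierarchical agglomeration: repeatedly merge the pair of clusters that
    most improves the partition score, stopping when no merge helps.
    `clusters` is a list of frozensets of network indices; `score` maps a
    partition to a number (a placeholder here for the ICL criterion)."""
    while len(clusters) > 1:
        best_gain, best_partition = 0.0, None
        base = score(clusters)
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                merged = [c for k, c in enumerate(clusters) if k not in (i, j)]
                merged.append(clusters[i] | clusters[j])
                gain = score(merged) - base
                if gain > best_gain:
                    best_gain, best_partition = gain, merged
        if best_partition is None:   # no merge increases the criterion: stop,
            break                    # which also selects the number of clusters
        clusters = best_partition
    return clusters
```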
In many modern statistical problems, the limited available data must be used both to develop the hypotheses to test and to test them; that is, for both exploratory and confirmatory data analysis. Reusing the same dataset for exploration and testing can induce severe selection bias, leading to many false discoveries. Selective inference is a framework for performing valid inference even when the same data is reused for exploration and testing. In this work, we are interested in selective inference for data clustering: a clustering procedure is used to hypothesize a separation of the data points into subgroups, and we then wish to test whether these data-dependent clusters in fact represent meaningful differences within the data. Gao et al. [2022] provide a selective inference framework for this setting when hierarchical clustering produces the cluster assignments, which Chen and Witten [2022] extended to k-means clustering. Both works assume a known covariance structure for the data; in practice, however, the noise level must be estimated, which is particularly challenging when the true cluster structure is unknown. We extend these methods to noise with unknown variance and provide a selective inference method for this more general setting. Empirical results show that our new method better maintains high power while controlling the Type I error when the true noise level is unknown.
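The following toy simulation illustrates the selection bias the abstract refers to: clustering pure noise and then naively testing the induced groups rejects far more often than the nominal level. (The specific clustering algorithm and test here are illustrative, not the paper's method.)

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
false_rejections, n_trials = 0, 500
for _ in range(n_trials):
    x = rng.normal(size=(40, 1))                      # pure noise, no real clusters
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(x)
    p = stats.ttest_ind(x[labels == 0, 0], x[labels == 1, 0]).pvalue
    false_rejections += p < 0.05
print(false_rejections / n_trials)  # far above 0.05: naive post-clustering tests are invalid
```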
The basic idea of lifelike computing systems is to transfer concepts from living systems to technical use, going beyond existing concepts of self-adaptation and self-organisation (SASO). As a result, these systems become even more autonomous and changeable, up to a runtime transfer of the actual target function. Maintaining controllability requires a complete and dynamic (self-)quantification of the system behaviour with regard to SASO aspects and, in particular, lifelike properties. In this article, we discuss possible approaches for such metrics and establish a first metric for transferability. We analyse the behaviour of this metric in example applications and show that it is suitable for describing the system's behaviour at runtime.
In this paper, by designing a normalized nonmonotone search strategy with a Barzilai--Borwein-type step size, we propose and analyze a novel local minimax method (LMM), a globally convergent iterative method, for finding multiple (unstable) saddle points of nonconvex functionals in Hilbert spaces. Compared to traditional LMMs with monotone search strategies, this approach, which does not require a strict decrease of the objective functional value at each iteration, is observed to converge faster with less computation. First, based on a normalized iterative scheme coupled with a local peak selection that pulls the iterate back onto the solution submanifold, we generalize the Zhang--Hager (ZH) search strategy from optimization theory to the LMM framework, introducing a normalized ZH-type nonmonotone step-size search strategy and thereby constructing a novel nonmonotone LMM. Its feasibility and global convergence are rigorously established under a relaxation of the monotonicity requirement on the functional along the iterative sequence. Second, to speed up the convergence of the nonmonotone LMM, we present a globally convergent Barzilai--Borwein-type LMM (GBBLMM) that explicitly constructs the Barzilai--Borwein-type step size as the trial step size of the normalized ZH-type nonmonotone search in each iteration. Finally, we implement the GBBLMM to find multiple unstable solutions of two classes of semilinear elliptic boundary value problems with variational structure: semilinear elliptic equations with homogeneous Dirichlet boundary conditions, and linear elliptic equations with semilinear Neumann boundary conditions. Extensive numerical results indicate that our approach is very effective and speeds up LMMs significantly.
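Two ingredients named in the abstract can be sketched compactly: the classical Barzilai--Borwein trial step sizes and the Zhang--Hager running reference value against which nonmonotone decrease is measured. How these plug into the local peak selection is specific to the paper.

```python
import numpy as np

def bb_steps(x_prev, x_curr, g_prev, g_curr):
    """The two classical Barzilai-Borwein trial step sizes (long and short)."""
    s, y = x_curr - x_prev, g_curr - g_prev
    return np.dot(s, s) / np.dot(s, y), np.dot(s, y) / np.dot(y, y)

def zh_update(C, Q, f_new, eta=0.85):
    """Zhang-Hager reference update: C is a weighted running average of the
    objective values, so the line search only requires sufficient decrease
    relative to C rather than to the previous value (hence nonmonotone)."""
    Q_new = eta * Q + 1.0
    return (eta * Q * C + f_new) / Q_new, Q_new
```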
We present a novel approach to parallelizing the parabolic fast multipole method for a space-time boundary element method for the heat equation. We exploit the special temporal structure of the involved operators to provide an efficient distributed parallelization in time with a one-directional communication pattern. On top of this, we apply a task-based shared-memory parallelization and SIMD vectorization. In numerical tests we observe high efficiency of our parallelization approach.
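The one-directional communication pattern can be illustrated with a hypothetical slab decomposition in time, as below; this is only a sketch of the pattern, not of the actual parabolic FMM operators.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns one temporal slab. Causality of the heat kernel means earlier
# slabs influence later ones but never the reverse, so each rank only receives
# from its predecessor and sends to its successor.
history = comm.recv(source=rank - 1) if rank > 0 else []
local = f"slab {rank} far-field data"        # placeholder for the FMM work
if rank + 1 < size:
    comm.send(history + [local], dest=rank + 1)
```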
Bayesian model comparison (BMC) offers a principled approach for assessing the relative merits of competing computational models and propagating uncertainty into model selection decisions. However, BMC is often intractable for the popular class of hierarchical models due to their high-dimensional nested parameter structure. To address this intractability, we propose a deep learning method for performing BMC on any set of hierarchical models that can be instantiated as probabilistic programs. Since our method enables amortized inference, it allows efficient re-estimation of posterior model probabilities and fast performance validation prior to any real-data application. In a series of extensive validation studies, we benchmark the performance of our method against the state-of-the-art bridge sampling method and demonstrate excellent amortized inference across all BMC settings. We then use our method to compare four hierarchical evidence accumulation models that had previously been deemed intractable for BMC due to partly implicit likelihoods. In this application, we corroborate evidence for the recently proposed Lévy flight model of decision-making and show how transfer learning can be leveraged to enhance training efficiency. Reproducible code for all analyses is provided.
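A minimal sketch of the amortized idea, under the assumption that datasets are reduced to fixed-size summaries: a classifier is trained on simulated data labelled by the generating model, and its softmax output then approximates posterior model probabilities for any new dataset at negligible cost.

```python
import torch
import torch.nn as nn

n_models, summary_dim = 4, 100                 # illustrative sizes
net = nn.Sequential(nn.Linear(summary_dim, 64), nn.ReLU(),
                    nn.Linear(64, n_models))
loss_fn = nn.CrossEntropyLoss()                # a proper scoring rule, so the
opt = torch.optim.Adam(net.parameters())       # softmax outputs are calibrated

def train_step(x_sim, model_idx):
    """x_sim: batch of simulated data summaries; model_idx: generating model index."""
    opt.zero_grad()
    loss = loss_fn(net(x_sim), model_idx)
    loss.backward()
    opt.step()
    return loss.item()
```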
Personalised federated learning (FL) aims to collaboratively learn a machine learning model tailored to each client. Although promising advances have been made in this direction, most existing approaches do not allow for uncertainty quantification, which is crucial in many applications. In addition, personalisation in the cross-device setting still faces important challenges, especially for new clients or those with few observations. This paper aims to fill these gaps. To this end, we propose a novel methodology, coined FedPop, that recasts personalised FL into the population-modeling paradigm, where clients' models involve fixed common population parameters and random effects that explain data heterogeneity. To derive convergence guarantees for our scheme, we introduce a new class of federated stochastic optimisation algorithms that relies on Markov chain Monte Carlo methods. Compared to existing personalised FL methods, the proposed methodology has important benefits: it is robust to client drift, practical for inference on new clients, and, above all, enables uncertainty quantification under mild computational and memory overheads. We provide non-asymptotic convergence guarantees for the proposed algorithms and illustrate their performance on various personalised federated learning tasks.
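As a rough illustration of the population-modeling idea (a sketch under our own assumptions, not the paper's algorithm): each client refreshes its random effect with a few Langevin steps and returns only a stochastic gradient for the shared population parameters.

```python
import numpy as np

def client_round(theta, z, grad_theta, grad_z, lr=1e-2, steps=10, rng=None):
    """One client round: sample the client-specific random effect z via
    unadjusted Langevin dynamics, then report a stochastic gradient for the
    common parameters theta. The server averages these across clients."""
    rng = rng or np.random.default_rng()
    for _ in range(steps):
        noise = rng.normal(size=np.shape(z))
        z = z - lr * grad_z(theta, z) + np.sqrt(2.0 * lr) * noise
    return grad_theta(theta, z), z
```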
Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamics of optimization, which leads to high variance in the estimated stochastic gradients. This high variance can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method decomposes into embedding approximation variance in the forward stage and stochastic gradient variance in the backward stage, and that both types of variance must be mitigated to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance, and that explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and achieves better generalization than existing methods.
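The gradient-adaptive sampling idea can be sketched as follows (our own simplification, not the paper's estimator): nodes are drawn with probability proportional to approximate per-node gradient norms, which is the minimal-variance proposal for estimating a sum, and importance reweighting keeps the gradient estimate unbiased.

```python
import numpy as np

def sample_nodes(grad_norms, k, rng=None):
    """Draw k node indices with probability proportional to (approximate)
    per-node gradient norms, with weights chosen so the sampled-sum gradient
    estimator is unbiased: E[sum_j w_j g_j] = sum_i g_i."""
    rng = rng or np.random.default_rng()
    p = grad_norms / grad_norms.sum()
    idx = rng.choice(len(p), size=k, replace=True, p=p)
    weights = 1.0 / (k * p[idx])
    return idx, weights
```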