Functional magnetic resonance imaging (fMRI) time series data presents a unique opportunity to understand temporal brain connectivity, and models that uncover the complex dynamic workings of this organ are of keen interest in neuroscience. Change point models can capture and reflect the dynamic nature of brain connectivity; however, methods that translate well into a high-dimensional context (where $p \gg n$) are scarce. To this end, we introduce factorized binary search (FaBiSearch), a novel method for detecting multiple change points in the network structure of multivariate high-dimensional time series. FaBiSearch uses non-negative matrix factorization, an unsupervised dimension reduction technique, together with a new binary search algorithm to identify the change points. In addition, we propose a new method for estimating networks from the data between change points. We show that FaBiSearch outperforms another state-of-the-art method on simulated data sets, and we apply it to a resting-state and to a task-based fMRI data set. In particular, for the task-based data set, we explore network dynamics during the reading of Chapter 9 in Harry Potter and the Sorcerer's Stone and find that change points across subjects coincide with key plot twists. Further, we find that network density is positively related to the frequency of speech between characters in the story. Finally, we make all the methods discussed available in the R package fabisearch on CRAN.
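To make the mechanics concrete, here is a minimal Python sketch of the NMF-based segmentation idea, assuming scikit-learn's NMF as the factorization engine. This is not the authors' implementation (that is the R package fabisearch on CRAN); the segment loss, the exhaustive split scan standing in for the paper's new binary search, and the acceptance threshold are all illustrative assumptions.

```python
# Conceptual sketch of NMF-based change point detection by recursive
# segmentation. NOT the authors' implementation (see the R package
# `fabisearch` on CRAN); loss, scan, and threshold are assumptions.
import numpy as np
from sklearn.decomposition import NMF

def segment_loss(X, rank=4, seed=0):
    """Frobenius reconstruction error of a rank-`rank` NMF fit on one segment.
    X must be non-negative (shift/scale the fMRI series beforehand)."""
    model = NMF(n_components=rank, init="random", random_state=seed, max_iter=500)
    W = model.fit_transform(X)
    return np.linalg.norm(X - W @ model.components_, "fro")

def find_change_points(X, lo=0, hi=None, min_len=20, tol=0.05, cps=None):
    """Recursively split [lo, hi) wherever a split reduces the total NMF
    loss by more than `tol` (relative). An O(n) scan replaces the paper's
    binary search here for clarity."""
    if cps is None:
        cps, hi = [], X.shape[0]
    if hi - lo < 2 * min_len:
        return cps
    whole = segment_loss(X[lo:hi])
    best_t, best_loss = None, whole
    for t in range(lo + min_len, hi - min_len):
        split = segment_loss(X[lo:t]) + segment_loss(X[t:hi])
        if split < best_loss:
            best_t, best_loss = t, split
    if best_t is not None and (whole - best_loss) / whole > tol:
        cps.append(best_t)
        find_change_points(X, lo, best_t, min_len, tol, cps)
        find_change_points(X, best_t, hi, min_len, tol, cps)
    return sorted(cps)
```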
We investigate quantum phase transitions in the transverse-field Ising chain with algebraically decaying long-range antiferromagnetic interactions, using the variational Monte Carlo method with a restricted Boltzmann machine as the trial wave-function ansatz. In a finite-size scaling analysis of the order parameter and the second R\'enyi entropy, we find that the central charge deviates from 1/2 at small decay exponent $\alpha_\mathrm{LR}$, whereas the critical exponents stay very close to the short-range (SR) Ising values for all $\alpha_\mathrm{LR}$ examined, supporting the previously proposed scenario of conformal-invariance breakdown. To identify the threshold of the Ising universality and the conformal symmetry, we perform two additional tests, based on the universal Binder ratio and on the conformal field theory (CFT) description of the correlation function. It turns out that both indicate a noticeable deviation from the SR Ising class at $\alpha_\mathrm{LR} < 2$. However, a closer look at the scaled correlation function for $\alpha_\mathrm{LR} \ge 2$ shows a gradual departure from the asymptotic CFT line verified at $\alpha_\mathrm{LR} = 3$, providing a rough estimate of the threshold lying in the range $2 \lesssim \alpha_\mathrm{LR} < 3$.
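For reference, the model and ansatz the abstract refers to are conventionally written as follows; the abstract does not spell them out, so the sign convention and normalization below are assumptions.

```latex
% Long-range antiferromagnetic transverse-field Ising chain (standard form;
% conventions such as the overall sign are assumptions here):
H = \sum_{i<j} \frac{J}{|i-j|^{\alpha_{\mathrm{LR}}}}\, \sigma^z_i \sigma^z_j
    \;-\; h \sum_i \sigma^x_i , \qquad J > 0 ;
% restricted Boltzmann machine trial wave function (Carleo--Troyer form):
\psi_{\mathrm{RBM}}(\sigma)
  = e^{\sum_i a_i \sigma_i} \prod_{j} 2 \cosh\!\Big( b_j + \sum_i W_{ij} \sigma_i \Big).
```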
Building on the success of PC-JeDi, we introduce PC-Droid, a substantially improved diffusion model for the generation of jet particle clouds. By leveraging a new diffusion formulation, studying more recent integration solvers, and training on all jet types simultaneously, we achieve state-of-the-art performance for all types of jets across all evaluation metrics. We study the trade-off between generation speed and quality by comparing two attention-based architectures, as well as the potential of consistency distillation to reduce the number of diffusion steps. Both the faster architecture and the consistency models surpass many competing models, with generation times up to two orders of magnitude faster than PC-JeDi and three orders of magnitude faster than Delphes.
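As an illustration of the kind of solver the abstract alludes to, below is a generic second-order Heun step for a diffusion probability-flow ODE in the EDM style. This is a sketch, not PC-Droid's code; `denoise` stands in for the trained model, and the noise schedule in the usage comment is an arbitrary choice.

```python
# Generic Heun (2nd-order) sampler for a diffusion probability-flow ODE,
# in the style of the EDM formulation. Illustrative sketch only; `denoise`
# is a stand-in for the trained denoising model D(x, sigma).
import numpy as np

def heun_sample(denoise, x, sigmas):
    """Integrate dx/dsigma = (x - D(x, sigma)) / sigma from high to low noise."""
    for s_cur, s_next in zip(sigmas[:-1], sigmas[1:]):
        d_cur = (x - denoise(x, s_cur)) / s_cur          # ODE slope at s_cur
        x_euler = x + (s_next - s_cur) * d_cur           # Euler predictor
        if s_next > 0:                                   # Heun corrector
            d_next = (x_euler - denoise(x_euler, s_next)) / s_next
            x = x + (s_next - s_cur) * 0.5 * (d_cur + d_next)
        else:
            x = x_euler
    return x

# Usage: start from pure noise at the largest sigma, e.g.
# sigmas = np.append(np.geomspace(80.0, 1e-3, 30), 0.0)
# x0 = sigmas[0] * np.random.randn(num_particles, 3)
# clouds = heun_sample(model, x0, sigmas)
```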
Despite impressive performance on high-level downstream tasks, self-supervised pre-training methods have not yet fully delivered on dense geometric vision tasks such as stereo matching or optical flow. The application of self-supervised concepts, such as instance discrimination or masked image modeling, to geometric tasks is an active area of research. In this work, we build on the recent cross-view completion framework, a variation of masked image modeling that leverages a second view from the same scene, which makes it well suited for binocular downstream tasks. The applicability of this concept has so far been limited in at least two ways: (a) by the difficulty of collecting real-world image pairs -- in practice only synthetic data have been used -- and (b) by the lack of generalization of vanilla transformers to dense downstream tasks for which relative position is more meaningful than absolute position. We explore three avenues of improvement. First, we introduce a method to collect suitable real-world image pairs at large scale. Second, we experiment with relative positional embeddings and show that they enable vision transformers to perform substantially better. Third, we scale up vision-transformer-based cross-view completion architectures, which is made possible by the use of large amounts of data. With these improvements, we show for the first time that state-of-the-art results on stereo matching and optical flow can be reached without using any classical task-specific techniques such as correlation volumes, iterative estimation, image warping, or multi-scale reasoning, thus paving the way towards universal vision models.
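To illustrate the second avenue, here is a minimal PyTorch sketch of self-attention with a learned relative-position bias on a 1-D token grid. It shows the general mechanism only; the paper's specific choice of relative embedding may differ, and all names are illustrative.

```python
# Minimal self-attention with a learned relative-position bias (Swin-style),
# on a 1-D token grid. Illustrative only: the attention logits depend on the
# offset i - j rather than on absolute token positions.
import torch
import torch.nn as nn

class RelPosAttention(nn.Module):
    def __init__(self, dim, num_heads, max_len):
        super().__init__()
        self.h, self.d = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # One learnable bias per head and per relative offset in [-(L-1), L-1].
        self.bias = nn.Parameter(torch.zeros(num_heads, 2 * max_len - 1))

    def forward(self, x):                                  # x: (B, L, dim)
        B, L, _ = x.shape
        q, k, v = self.qkv(x).reshape(B, L, 3, self.h, self.d).permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1)) / self.d ** 0.5   # (B, h, L, L)
        idx = torch.arange(L, device=x.device)
        rel = idx[:, None] - idx[None, :]                  # offsets i - j
        attn = attn + self.bias[:, rel + L - 1]            # relative bias
        out = (attn.softmax(-1) @ v).transpose(1, 2).reshape(B, L, -1)
        return self.proj(out)
```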
This paper develops a rigorous approach to establishing the sharp minimax optimality of both LASSO and SLOPE under double sparse structures, notably without relying on RIP-type conditions. Crucially, we show that these optimality results hinge on a sparse group normalization condition, complemented by several novel sparse group restricted eigenvalue (RE)-type conditions introduced in this study. We provide a comprehensive comparative analysis of these eigenvalue conditions and demonstrate that they hold with high probability for a wide range of random matrices. Our analysis further extends to random designs, where we prove the required design properties and the optimal sample complexity under both weak-moment and sub-Gaussian distributions.
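For reference, the two estimators under study are conventionally defined as follows (the abstract does not restate them, so these standard definitions are assumed), where $\lambda_1 \ge \cdots \ge \lambda_p \ge 0$ are the sorted weights and $|\beta|_{(1)} \ge \cdots \ge |\beta|_{(p)}$ the order statistics in the SLOPE penalty:

```latex
% Standard definitions, for data (X, y) with X \in \mathbb{R}^{n \times p}:
\hat{\beta}_{\mathrm{LASSO}}
  = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p}
    \frac{1}{2n} \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1 ,
\qquad
\hat{\beta}_{\mathrm{SLOPE}}
  = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p}
    \frac{1}{2n} \lVert y - X\beta \rVert_2^2
    + \sum_{i=1}^{p} \lambda_i \lvert \beta \rvert_{(i)} .
```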
We consider two classes of natural stochastic processes on finite unlabeled graphs: Euclidean stochastic optimization algorithms on the adjacency matrices of weighted graphs, and a modified version of the Metropolis MCMC algorithm on stochastic block models over unweighted graphs. In both cases we show that, as the size of the graph goes to infinity, the random trajectories of the stochastic processes converge to deterministic limits. These deterministic limits are curves on the space of measure-valued graphons. Measure-valued graphons, introduced by Lov\'{a}sz and Szegedy, are a refinement of the concept of graphons that can distinguish between two infinite exchangeable arrays that give rise to the same graphon limit. We introduce new metrics on this space which provide a natural notion of convergence for our limit theorems; this notion is equivalent to the convergence of infinite exchangeable arrays. Under a suitable time-scaling, the Metropolis chain admits a diffusion limit as the number of vertices goes to infinity. We then demonstrate that, in an appropriately formulated zero-noise limit, the stochastic process of adjacency matrices of this diffusion converges to a deterministic gradient flow curve on the space of graphons introduced in arXiv:2111.09459 [math.PR]. Under suitable assumptions, this allows us to estimate an exponential convergence rate for the Metropolis chain in a certain limiting regime. To the best of our knowledge, both this rate and the connection between a natural Metropolis chain commonly used in exponential random graph models and gradient flows on graphons are new in the literature.
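As a point of reference for the second class of processes, a textbook Metropolis edge-flip step on simple graphs looks as follows in Python. The paper studies a modified version on stochastic block models; `delta_energy` here is a stand-in for the increment of whichever Hamiltonian defines the target Gibbs measure.

```python
# Textbook Metropolis chain on simple graphs via single edge flips.
# Generic sketch only; `delta_energy(A, i, j)` is a user-supplied stand-in
# returning the energy change from toggling edge (i, j).
import numpy as np

def metropolis_step(A, delta_energy, beta, rng):
    """Propose toggling one edge of A; accept with prob. min(1, e^{-beta dE})."""
    n = A.shape[0]
    i, j = rng.choice(n, size=2, replace=False)
    dE = delta_energy(A, i, j)
    if rng.random() < np.exp(-beta * dE):
        A[i, j] = A[j, i] = 1 - A[i, j]    # keep the graph symmetric
    return A

# Usage: run many steps from an initial symmetric 0/1 adjacency matrix, e.g.
# rng = np.random.default_rng(0)
# for _ in range(10_000): A = metropolis_step(A, delta_energy, 1.0, rng)
```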
The probe and singular sources methods are two well-known classical direct reconstruction methods in inverse obstacle problems governed by partial differential equations. What both methods share is the notion of indicator functions, which are defined outside an unknown obstacle and blow up on its surface. However, their formulations look completely different. In this paper, taking an inverse obstacle problem governed by the Laplace equation in a bounded domain as a prototype case, we introduce an integrated version of the probe and singular sources methods that fills the gap between their indicator functions. The main result has three parts. First, the singular sources method is formulated in combination with the probe method and the notion of the Carleman function. Second, the indicator functions of both methods can be obtained by decomposing a third indicator function in two ways; this third indicator function blows up on both the outer surface and the obstacle surface. Third, the probe and singular sources methods are reformulated, and it is shown that the indicator functions on which both reformulated methods are based coincide completely with each other. As a byproduct, it turns out that the reformulated singular sources method also has the Side B of the probe method, that is, a characterization of the unknown obstacle by means of the blow-up property of an indicator sequence.
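Formalizing only what is stated above, an indicator function for an obstacle $D$ in a domain $\Omega$ is schematically of the following form; the concrete indicator functions of the two methods differ in their construction.

```latex
% Schematic blow-up characterization shared by both methods (this only
% formalizes what the abstract states; details differ between the methods):
I : \Omega \setminus \overline{D} \longrightarrow \mathbb{R}, \qquad
I(x) \longrightarrow \infty \quad \text{as } x \to \partial D ,
% so that the boundary \partial D is recovered as the blow-up set of I.
```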
We propose a generalization of nonlinear stability of numerical one-step integrators to Riemannian manifolds in the spirit of Butcher's notion of B-stability. Taking inspiration from Simpson-Porco and Bullo, we introduce non-expansive systems on such manifolds and define B-stability of integrators. In this first exposition, we provide concrete results for a geodesic version of the Implicit Euler (GIE) scheme. We prove that the GIE method is B-stable on Riemannian manifolds with non-positive sectional curvature. We show through numerical examples that the GIE method is expansive when applied to a certain non-expansive vector field on the 2-sphere, and that, for large enough step sizes, it does not necessarily possess a unique solution. Finally, we derive a new, improved global error estimate for general Lie group integrators.
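One natural way to write the notions involved is the following, assumed here for illustration; the paper's precise definitions may differ in detail.

```latex
% Non-expansiveness of the flow of \dot{y} = f(y) on a Riemannian
% manifold (M, g) with distance d (one common formulation, assumed here):
d\big(\varphi_t(x), \varphi_t(y)\big) \le d(x, y), \qquad t \ge 0 ;
% geodesic implicit Euler (GIE) step of size h, defined implicitly by
y_n = \exp_{y_{n+1}}\!\big( -h\, f(y_{n+1}) \big) ;
% B-stability: any two numerical trajectories of a non-expansive field satisfy
d(y_{n+1}, z_{n+1}) \le d(y_n, z_n).
```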
Discrete latent space models have recently achieved performance on par with their continuous counterparts in deep variational inference. While they still face various implementation challenges, these models offer the opportunity for better interpretation of latent spaces, as well as a more direct representation of naturally discrete phenomena. Most recent approaches propose to separately train very high-dimensional prior models on the discrete latent data, which is a challenging task in its own right. In this paper, we introduce a latent data model where the discrete state is a Markov chain, which allows fast end-to-end training. The performance of our generative model is assessed on a building management dataset and on the publicly available Electricity Transformer Dataset.
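A minimal sketch of such a learnable Markov-chain prior over discrete latent sequences, written in PyTorch with illustrative names, is the following.

```python
# Minimal learnable Markov-chain prior over discrete latent state sequences,
# sketching the kind of prior the abstract describes (names illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarkovPrior(nn.Module):
    def __init__(self, num_states):
        super().__init__()
        self.init_logits = nn.Parameter(torch.zeros(num_states))
        self.trans_logits = nn.Parameter(torch.zeros(num_states, num_states))

    def log_prob(self, z):                                  # z: (B, T) int states
        log_pi = F.log_softmax(self.init_logits, dim=-1)    # initial distribution
        log_P = F.log_softmax(self.trans_logits, dim=-1)    # rows sum to one
        lp = log_pi[z[:, 0]]                                # initial state
        lp = lp + log_P[z[:, :-1], z[:, 1:]].sum(dim=-1)    # transitions
        return lp                                           # (B,)
```

Because this log-probability is differentiable in the prior's parameters, it can be added directly to the variational objective, which is what permits fast end-to-end training rather than fitting a separate prior model afterwards.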
We present a novel way to model diffusion magnetic resonance imaging (dMRI) datasets that benefits from the structural coherence of the human brain while using data from only a single subject. Current methods model the dMRI signal in individual voxels, disregarding the intervoxel coherence that is present. We use a neural network to parameterize a spherical harmonics series (NeSH) that represents the dMRI signal of a single subject from the Human Connectome Project dataset, continuous in both the angular and the spatial domain. The dMRI signal reconstructed with this method is a more structurally coherent representation of the data: noise in the gradient images is removed, and the fiber orientation distribution functions show a smooth change in direction along a fiber tract. We showcase how the reconstruction can be used to calculate mean diffusivity, fractional anisotropy, and total apparent fiber density. These results can be achieved with a single model architecture, tuning only one hyperparameter. We also demonstrate how upsampling in both the angular and the spatial domain yields reconstructions that are on par with, or better than, existing methods.
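A minimal sketch of this kind of model, with an MLP mapping a voxel coordinate to spherical-harmonics coefficients, might look as follows in PyTorch. The actual NeSH architecture (e.g., any coordinate encodings) may differ, `sh_basis` is assumed to be precomputed with an SH library of choice, and all names are illustrative.

```python
# NeSH-style sketch: an MLP maps a spatial coordinate to spherical-harmonics
# coefficients, making the dMRI signal continuous in space and angle.
# `sh_basis` (gradient directions x coefficients) is assumed precomputed.
import torch
import torch.nn as nn

class NeshLike(nn.Module):
    def __init__(self, n_coef, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_coef),          # SH coefficients at (x, y, z)
        )

    def forward(self, xyz, sh_basis):
        # xyz: (N, 3) voxel coordinates; sh_basis: (G, n_coef) real SH values
        # evaluated at the G gradient directions of the acquisition.
        coef = self.mlp(xyz)                    # (N, n_coef)
        return coef @ sh_basis.T                # (N, G) predicted signal
```

Fitting the MLP to the measured signal across all voxels at once is what lets the model exploit intervoxel coherence, and evaluating it at unseen coordinates or directions gives the angular and spatial upsampling described above.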
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state of the art in deep learning for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide the TSC community with an open-source deep learning framework in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we present the most exhaustive study of DNNs for TSC to date.
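As one example of the architectures compared in such benchmarks, a minimal fully convolutional network (FCN) for univariate TSC can be written in a few lines of PyTorch; the 128/256/128 filter configuration below is the commonly used one, and the details should be treated as illustrative.

```python
# Minimal fully convolutional network (FCN) for univariate TSC, a standard
# strong baseline in DNN-for-TSC benchmarks (configuration illustrative).
import torch
import torch.nn as nn

def conv_block(c_in, c_out, k):
    return nn.Sequential(nn.Conv1d(c_in, c_out, k, padding="same"),
                         nn.BatchNorm1d(c_out), nn.ReLU())

class FCN(nn.Module):
    def __init__(self, n_classes, c_in=1):
        super().__init__()
        self.features = nn.Sequential(conv_block(c_in, 128, 8),
                                      conv_block(128, 256, 5),
                                      conv_block(256, 128, 3))
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                       # x: (B, 1, T)
        return self.head(self.features(x).mean(dim=-1))    # global avg pooling
```

Global average pooling over the time axis makes the network independent of the series length, which is one reason this architecture transfers across the heterogeneous datasets of the UCR/UEA archive.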