In this paper, we introduce a novel statistical model for the integrative analysis of Riemannian-valued functional data and high-dimensional data. We apply this model to explore the dependence structure between each subject's dynamic functional connectivity -- represented by a temporally indexed collection of positive definite covariance matrices -- and high-dimensional data representing lifestyle, demographic, and psychometric measures. Specifically, we employ a reformulation of canonical correlation analysis that enables efficient control of the complexity of the functional canonical directions using tangent space sieve approximations. Additionally, we enforce an interpretable group structure on the high-dimensional canonical directions via a sparsity-promoting penalty. The proposed method shows improved empirical performance over alternative approaches and comes with theoretical guarantees. Its application to data from the Human Connectome Project reveals a dominant mode of covariation between dynamic functional connectivity and lifestyle, demographic, and psychometric measures. This mode aligns with results from static connectivity studies but reveals a unique temporally non-stationary pattern that such studies fail to capture.
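As a point of reference, the core canonical correlation computation underlying the approach can be sketched as follows. This is a minimal sketch of classical CCA via a whitened singular value decomposition, assuming $X$ collects tangent-space (sieve) features and $Y$ the lifestyle, demographic, and psychometric measures; the paper's sieve construction and group-sparsity penalty are not reproduced, and the ridge term is an illustrative stabilizer.

```python
import numpy as np

def cca_first_pair(X, Y, ridge=1e-3):
    """First pair of canonical directions via a whitened SVD.

    Plain CCA only: the tangent-space sieve approximation and the
    group-sparsity penalty of the paper are not reproduced here.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + ridge * np.eye(X.shape[1])  # ridge stabilizes inversion
    Syy = Y.T @ Y / n + ridge * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # Whitening transforms: Wx.T @ Sxx @ Wx = I (and likewise for Y).
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Syy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Sxy @ Wy)
    return Wx @ U[:, 0], Wy @ Vt[0], s[0]  # directions and first canonical correlation
```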
In this paper we give a broad overview of the intersection of partial differential equations (PDEs) and graph-based semi-supervised learning. The overview is focused on a large body of recent work on PDE continuum limits of graph-based learning, which have been used to prove well-posedness of semi-supervised learning algorithms in the large data limit. We highlight some interesting research directions revolving around consistency of graph-based semi-supervised learning, and present some new results on the consistency of $p$-Laplacian semi-supervised learning using the stochastic tug-of-war game interpretation of the $p$-Laplacian. We also present the results of some numerical experiments that illustrate our results and suggest directions for future work.
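For $p = 2$ the graph-based learner reduces to Laplace learning: labeled values are held fixed and each unlabeled value relaxes to the weighted average of its neighbors, while for general $p$ the tug-of-war interpretation mixes this average with the midrange (average of the maximum and minimum) of neighboring values. A minimal sketch of the $p = 2$ case follows; the graph, weights, and labels are toy assumptions.

```python
import numpy as np

def laplace_learning(W, labels, n_iter=2000):
    """Graph-based semi-supervised learning for p = 2 (harmonic extension).

    W: symmetric (n, n) weight matrix; labels: dict {node: value}.
    Unlabeled values converge to the weighted mean of their neighbors,
    i.e., the discrete 2-Laplacian (harmonic) solution.
    """
    n = W.shape[0]
    u = np.zeros(n)
    fixed = np.zeros(n, dtype=bool)
    for i, y in labels.items():
        u[i], fixed[i] = y, True
    deg = W.sum(axis=1)
    for _ in range(n_iter):          # Jacobi iteration on the unlabeled nodes
        v = (W @ u) / deg
        u = np.where(fixed, u, v)
    return u

# Path graph on 5 nodes, endpoints labeled 0 and 1:
W = np.diag(np.ones(4), 1); W = W + W.T
print(laplace_learning(W, {0: 0.0, 4: 1.0}))  # ~ linear interpolation
```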
Model-based clustering of moderate or large dimensional data is notoriously difficult. We propose a model for simultaneous dimensionality reduction and clustering by assuming a mixture model for a set of latent scores, which are then linked to the observations via a Gaussian latent factor model. This approach was recently investigated by Chandra et al. (2023), who use a factor-analytic representation and assume a mixture model for the latent factors; however, performance can deteriorate in the presence of model misspecification. Assuming a repulsive point process prior for the component-specific means of the mixture for the latent scores is shown to yield a more robust model that outperforms the standard mixture model for the latent factors in several simulated scenarios. The repulsive point process must be anisotropic to favor well-separated clusters of data, and its density should be tractable for efficient posterior inference. We address these issues by proposing a general construction for anisotropic determinantal point processes. We illustrate our model in simulations as well as on a plant species co-occurrence dataset.
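To see how anisotropy enters a determinantal point process prior, the following minimal sketch evaluates an L-ensemble log-density with an anisotropic Gaussian kernel; the axis-wise lengthscales are one illustrative way to make repulsion direction-dependent, and the paper's general construction is not reproduced.

```python
import numpy as np

def dpp_log_density(points, lengthscales):
    """Unnormalized log-density of an L-ensemble DPP, log det L_X,
    with an anisotropic Gaussian kernel.

    Different lengthscales per coordinate make the repulsion
    direction-dependent; the value is larger for well-separated
    configurations, which is what favors separated cluster means.
    """
    Z = points / lengthscales            # axis-wise rescaling = anisotropy
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    L = np.exp(-0.5 * sq)
    sign, logdet = np.linalg.slogdet(L)
    return logdet
```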
Many combinatorial optimization problems can be formulated as the search for a subgraph that satisfies certain properties and minimizes the total weight. We assume here that the vertices correspond to points in a metric space and can take any position in given uncertainty sets. The cost function to be minimized is then the sum of the distances for the worst-case positions of the vertices in their uncertainty sets. We propose two types of polynomial-time approximation algorithms. The first relies on solving a deterministic counterpart of the problem in which the uncertain distances are replaced with maximum pairwise distances. We study in detail the resulting approximation ratio, which depends on the structure of the feasible subgraphs and on whether the metric space is Ptolemaic. The second algorithm is a fully polynomial-time approximation scheme for the special case of $s$-$t$ paths.
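To make the first algorithm concrete in its simplest special case, suppose each uncertainty set is a Euclidean ball, so the maximum pairwise distance between the sets of vertices $i$ and $j$ has the closed form $\|c_i - c_j\| + r_i + r_j$; the deterministic counterpart of the robust $s$-$t$ path problem is then an ordinary shortest-path computation on these inflated weights. The ball-shaped sets and the Dijkstra routine below are illustrative assumptions, not the paper's general setting.

```python
import heapq
import numpy as np

def robust_shortest_path(centers, radii, edges, s, t):
    """Deterministic counterpart of the robust s-t path problem:
    replace each uncertain edge length by the maximum pairwise
    distance between its endpoints' uncertainty sets, then run a
    textbook Dijkstra on the resulting weights.
    """
    w = {(i, j): np.linalg.norm(centers[i] - centers[j]) + radii[i] + radii[j]
         for (i, j) in edges}           # closed form for Euclidean balls
    adj = {}
    for (i, j), c in w.items():
        adj.setdefault(i, []).append((j, c))
        adj.setdefault(j, []).append((i, c))
    dist, pq = {s: 0.0}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, np.inf):
            continue
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, np.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return np.inf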
In this work, we propose two information generating functions, the general weighted information generating function (GWIGF) and the general weighted relative information generating function (GWRIGF), and study their properties. We show that the GWIGF is shift-dependent and can be expressed in terms of the weighted Shannon entropy. The GWIGF of a transformed random variable is obtained in terms of the GWIGF of a known distribution, and several bounds on the GWIGF are proposed. We obtain sufficient conditions under which the GWIGFs of two distributions are comparable, establish a connection between the weighted varentropy and the varentropy through the proposed GWIGF, and derive an upper bound for the GWIGF of the sum of two independent random variables. The effect of the GWRIGF for two transformed random variables under strictly monotone functions is also studied. These information generating functions are then studied for escort, generalized escort, and mixture distributions; in particular, we propose a weighted $\beta$-cross informational energy and establish a close connection with the GWIGF for escort distributions. The residual versions of the newly proposed generating functions are considered and several analogous properties are explored. A non-parametric estimator of the residual general weighted information generating function is proposed, and a simulated data set and two real data sets are considered for illustration. Finally, we compare the non-parametric approach with a parametric approach in terms of absolute bias and mean squared error.
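For orientation, the classical (unweighted) information generating function of a density $f$ and one commonly used weighted variant (with weight $w(x) = x$, as in the weighted-entropy literature; the paper's exact definition of the GWIGF may differ) read
\[
I_\beta(f) = \int_{-\infty}^{\infty} f^{\beta}(x)\,dx,
\qquad
I^{w}_{\beta}(f) = \int_{-\infty}^{\infty} x\, f^{\beta}(x)\,dx,
\]
and differentiating the weighted version at $\beta = 1$ gives
\[
\frac{\partial}{\partial \beta} I^{w}_{\beta}(f)\Big|_{\beta=1}
= \int_{-\infty}^{\infty} x\, f(x)\log f(x)\,dx = -H^{w}(f),
\]
which is consistent with the stated link between the GWIGF and the weighted Shannon entropy $H^{w}$.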
We study an interacting particle method (IPM) for computing the large deviation rate function of entropy production for diffusion processes, with emphasis on the vanishing-noise limit and high dimensions. The crucial ingredient for obtaining the rate function is the computation of the principal eigenvalue $\lambda$ of elliptic, non-self-adjoint operators. We show that this principal eigenvalue can be approximated by the spectral radius of a discretized evolution operator obtained from an operator splitting scheme and an Euler--Maruyama scheme with a small time step size, and that this spectral radius can be accessed through a large number of iterations of the discretized semigroup, which is well suited to the IPM. The IPM applies naturally to problems in unbounded domains, scales easily to high dimensions, and adapts to singular behaviors in the vanishing-noise limit. We present numerical examples in dimensions up to 16; the results show that our numerical approximation of $\lambda$ converges to the analytical vanishing-noise limit within visual tolerance with a fixed number of particles and a fixed time step size. Our paper appears to be the first to obtain numerical results for principal eigenvalue problems of non-self-adjoint operators in such high dimensions.
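A generic version of such a particle scheme, shown here for a one-dimensional diffusion with a generator-plus-potential structure, estimates $\lambda$ as the exponential growth rate of Feynman--Kac weights: each step performs an Euler--Maruyama transport move (one half of the operator splitting) followed by reweighting with $e^{V\,\Delta t}$ and multinomial resampling. This is a minimal cloning-type sketch under these assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def principal_eigenvalue(drift, V, x0, dt=1e-3, T=20.0, n_particles=5000):
    """Interacting-particle (cloning) estimate of the principal
    eigenvalue of L = b(x) d/dx + (1/2) d^2/dx^2 + V(x).

    Per step: Euler--Maruyama transport, then weighting by exp(V dt)
    with multinomial resampling; lambda is recovered as the average
    exponential growth rate of the mean weight.
    """
    x = np.full(n_particles, x0, dtype=float)
    log_growth = 0.0
    for _ in range(int(T / dt)):
        x += drift(x) * dt + np.sqrt(dt) * rng.standard_normal(n_particles)
        w = np.exp(V(x) * dt)
        log_growth += np.log(w.mean())
        p = w / w.sum()                  # resample proportionally to weights
        x = x[rng.choice(n_particles, n_particles, p=p)]
    return log_growth / T

# Toy usage: an OU process b(x) = -x tilted by the potential V(x) = -x^2/2.
# lam = principal_eigenvalue(lambda x: -x, lambda x: -0.5 * x**2, 0.0)
```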
To address optimization problems on compact matrix manifolds, we introduce a novel algorithmic framework, the Transformed Gradient Projection (TGP) algorithm, built on the projection onto the manifold. Compared with existing algorithms, the key innovation of our approach lies in the use of a new class of search directions together with various stepsizes, including the Armijo, nonmonotone Armijo, and fixed stepsizes, to select the next iterate. The framework is flexible: it encompasses the classical gradient projection algorithms as special cases and overlaps with the retraction-based line-search algorithms. Focusing on the Stiefel and Grassmann manifolds, we show that many existing algorithms in the literature can be seen as specific instances of the proposed framework, which also induces several new special cases. We then conduct a thorough exploration of the convergence properties of these algorithms for various search directions and stepsizes. To this end, we extensively analyze the geometric properties of the projection onto compact matrix manifolds, which allows us to extend classical inequalities related to retractions from the literature. Building on these insights, we establish the weak convergence, convergence rate, and global convergence of TGP algorithms under three distinct stepsizes. When the compact matrix manifold is the Stiefel or Grassmann manifold, our convergence results either encompass or surpass those in the literature. Finally, in a series of numerical experiments, we observe that the TGP algorithms, owing to their increased flexibility in choosing search directions, outperform the classical gradient projection and retraction-based line-search algorithms in several scenarios.
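As a baseline contained in the framework, classical gradient projection on the Stiefel manifold with an Armijo stepsize can be sketched as follows; the test problem (a trace maximization whose solutions are leading eigenvector bases) and the SVD-based projection are illustrative choices, and the new TGP search directions are not reproduced.

```python
import numpy as np

def proj_stiefel(Y):
    """Projection onto the Stiefel manifold via the polar factor:
    for Y = U S V^T, the nearest matrix with orthonormal columns
    (in Frobenius norm) is U V^T."""
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

def gradient_projection(A, p, max_iter=500, tol=1e-8):
    """Classical gradient projection with Armijo backtracking for
    min_{X^T X = I} -trace(X^T A X); one special case of the TGP
    framework, shown as a toy instance.
    """
    n = A.shape[0]
    X = proj_stiefel(np.random.default_rng(0).standard_normal((n, p)))
    f = lambda X: -np.trace(X.T @ A @ X)
    for _ in range(max_iter):
        G = -2.0 * A @ X                 # Euclidean gradient of f
        t = 1.0
        while f(proj_stiefel(X - t * G)) > f(X) - 1e-4 * t * np.sum(G * G):
            t *= 0.5                     # Armijo backtracking
            if t < 1e-12:
                return X
        X_new = proj_stiefel(X - t * G)
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    return X
```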
Differential abundance analysis is a key component of microbiome studies. While dozens of methods exist, there is currently no consensus on which should be preferred. Correctness of results in differential abundance analysis is an ambiguous concept that cannot be evaluated without employing simulated data, but we argue that consistency of results across datasets should be considered an essential quality of a well-performing method. We compared the performance of 14 differential abundance analysis methods on datasets from 54 taxonomic profiling studies based on 16S rRNA gene or shotgun sequencing. For each method, we examined how well the results replicated between random partitions of each dataset and between datasets from independent studies. While certain methods showed good consistency, some widely used methods produced a substantial number of conflicting findings. Overall, the highest consistency without unnecessary reduction in sensitivity was attained by analyzing relative abundances with a non-parametric method (Wilcoxon test or ordinal regression model) or with linear regression (MaAsLin2). Comparable performance was also attained by analyzing presence/absence of taxa with logistic regression.
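In practice, the recommended non-parametric analysis can be as simple as the following sketch, which assumes scipy and statsmodels are available: a Wilcoxon rank-sum test is applied per taxon to relative abundances, followed by Benjamini-Hochberg correction. This illustrates the general recipe, not any specific study's pipeline.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def differential_abundance(counts, group):
    """Per-taxon Wilcoxon rank-sum test on relative abundances with
    Benjamini-Hochberg FDR correction.

    counts: (n_samples, n_taxa) count matrix;
    group:  boolean array splitting samples into two groups.
    """
    rel = counts / counts.sum(axis=1, keepdims=True)  # relative abundances
    pvals = np.array([
        mannwhitneyu(rel[group, j], rel[~group, j]).pvalue
        for j in range(rel.shape[1])
    ])
    reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return qvals, reject
```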
This paper studies the convergence of a spatial semidiscretization of a three-dimensional stochastic Allen--Cahn equation with multiplicative noise. For non-smooth initial values, the regularity of the mild solution is investigated, and an error estimate is derived in the spatial $L^2$-norm. For smooth initial values, two error estimates in general spatial $L^q$-norms are established.
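For intuition about the type of scheme analyzed, a one-dimensional finite-difference spatial semidiscretization (stepped in time by Euler--Maruyama) might look as follows; the paper treats the three-dimensional problem, and its discretization and noise model may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def allen_cahn_1d(u0, T=0.5, n_x=64, dt=1e-4):
    """Finite-difference spatial semidiscretization of the stochastic
    Allen-Cahn equation du = (u_xx + u - u^3) dt + u dW on [0, 1)
    with periodic boundary conditions, stepped by Euler-Maruyama.
    A 1D illustration only (dt < h^2/2 keeps the explicit step stable).
    """
    h = 1.0 / n_x
    x = np.linspace(0.0, 1.0, n_x, endpoint=False)
    u = u0(x)
    for _ in range(int(T / dt)):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / h**2
        dW = np.sqrt(dt) * rng.standard_normal(n_x) / np.sqrt(h)  # white in space
        u = u + (lap + u - u**3) * dt + u * dW                    # multiplicative noise
    return x, u

# x, u = allen_cahn_1d(lambda x: np.cos(2 * np.pi * x))
```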
The problems of optimal recovery of univariate functions and their derivatives are studied. To solve these problems, two variants of the truncation method are constructed; both are order-optimal with respect to accuracy as well as to the amount of Galerkin information involved. For numerical summation, we establish how the parameters characterizing the problem under consideration affect the stability of the method.
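One common reading of the truncation method, sketched below under the assumption that the Galerkin information consists of noisy coefficients in a sine basis on $[0,1]$, recovers the function by summing only the first $N$ terms of the expansion; $N$ is the parameter that trades approximation error against noise amplification, and this is an illustrative reading rather than the paper's exact variant.

```python
import numpy as np

def truncated_sum(coeffs_noisy, N, x):
    """Truncation method: keep only the first N terms of the sine
    expansion built from noisy Fourier-Galerkin coefficients.
    N acts as the regularization parameter, chosen to balance the
    smoothness of the target against the noise level.
    """
    k = np.arange(1, N + 1)
    basis = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, x))  # sine basis on [0, 1]
    return coeffs_noisy[:N] @ basis
```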
Both Morse theory and Lusternik--Schnirelmann theory link algebra, topology, and analysis in a geometric setting. The two theories can be formulated in finite geometries such as graph theory or finite abstract simplicial complexes. We work here mostly in graph theory and review the Morse inequalities $b_k - b_{k-1} + \cdots \pm b_0 \le c_k - c_{k-1} + \cdots \pm c_0$ for the Betti numbers $b_k$ and the minimal number $c_k$ of Morse critical points of index $k$, as well as the Lusternik--Schnirelmann inequalities $\mathrm{cup} + 1 \le \mathrm{cat} \le \mathrm{cri}$ between the algebraic cup length $\mathrm{cup}$, the topological category $\mathrm{cat}$, and the analytic quantity $\mathrm{cri}$ counting the minimal number of critical points of a function.
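In displayed form, the two families of inequalities are
\[
b_k - b_{k-1} + \cdots \pm b_0 \;\le\; c_k - c_{k-1} + \cdots \pm c_0,
\qquad
\mathrm{cup} + 1 \;\le\; \mathrm{cat} \;\le\; \mathrm{cri}.
\]
A standard illustration is the cyclic graph $C_n$ with $n \ge 4$, the graph-theoretic circle: here $b_0 = b_1 = 1$ and $\mathrm{cup} = 1$, so $\mathrm{cat} \ge 2$; since every function on $C_n$ has at least one maximum and one minimum, $\mathrm{cri} \ge 2$ as well, and both chains of inequalities are sharp, with $\mathrm{cup} + 1 = \mathrm{cat} = \mathrm{cri} = 2$.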