Consider a scenario in which a large number of explanatory features targeting a response variable are analyzed, and these features are partitioned into different groups according to their domain-specific structures. Furthermore, there may be several such partitions, as in many real-life scenarios; one example is spatial genome-wide association studies. Researchers may not only be interested in identifying the features relevant to the response but also aim to determine the relevant groups within each partition. A group is considered relevant if it contains at least one relevant feature. To ensure the replicability of findings at various resolutions, it is essential to provide false discovery rate (FDR) control at multiple layers simultaneously. This paper presents a general approach that leverages various existing controlled selection procedures to generate more stable results through multilayer FDR control. The key contributions of our proposal are the development of a generalized e-filter that provides multilayer FDR control and the construction of a specific type of generalized e-values to evaluate feature importance. A primary application of our method is an improved version of Data Splitting (DS), called the eDS-filter. Furthermore, we combine the eDS-filter with a version of the group knockoff filter (gKF), resulting in a more flexible approach called the eDS+gKF filter. Simulation studies demonstrate that the proposed methods effectively control the FDR at multiple levels while maintaining or even improving power compared with other approaches. Finally, we apply the proposed method to analyze HIV mutation data.
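For background on how e-values yield FDR control, the e-BH procedure of Wang and Ramdas (a single-layer predecessor of the multilayer e-filter described above) rejects the hypotheses with the k* largest e-values, where k* is the largest k whose k-th largest e-value is at least K/(αk). The sketch below is a minimal illustration of that selection rule, not the paper's generalized e-filter:

```python
def e_bh(e_values, alpha):
    """e-BH procedure: reject the k* hypotheses with the largest e-values,
    where k* is the largest k such that the k-th largest e-value is at
    least K / (alpha * k).  Returns the indices of rejected hypotheses."""
    K = len(e_values)
    order = sorted(range(K), key=lambda i: -e_values[i])  # indices, descending e-value
    k_star = 0
    for k in range(1, K + 1):
        if e_values[order[k - 1]] >= K / (alpha * k):
            k_star = k
    return sorted(order[:k_star])
```

For example, with e-values [20, 1, 0.5, 10] and α = 0.2, the rule rejects the two hypotheses with e-values 20 and 10.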
The balanced incomplete block design (BIBD) problem is a difficult combinatorial problem with a large number of symmetries, which add complexity to its resolution. In this paper, we propose a dual (integer) problem representation that serves as an alternative to the classical binary formulation of the problem. We attack this problem incrementally: firstly, we propose basic algorithms (i.e. local search techniques and genetic algorithms) intended to work separately on the two different search spaces (i.e. binary and integer); secondly, we propose two hybrid schemes: an integrative approach (i.e. a memetic algorithm) and a collaborative model in which the previous methods work in parallel, occasionally exchanging information. Three distinct two-dimensional structures are proposed as communication topologies among the algorithms involved in the collaborative model, as well as a number of migration and acceptance criteria for sending and receiving data. An empirical analysis comparing a large number of instances of our schemes (with algorithms possibly working on different search spaces and with/without symmetry-breaking methods) shows that some of these algorithms can be considered the state of the art among metaheuristic methods applied to finding BIBDs. Moreover, our cooperative proposal is a general scheme from which distinct algorithmic variants can be instantiated to handle symmetrical optimisation problems. For this reason, we have also analysed its key parameters, thereby providing general guidelines for the design of efficient and robust cooperative algorithms devised from our proposal.
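For concreteness, a (v, b, r, k, λ)-BIBD can be represented by a 0/1 incidence matrix with v rows (points) and b columns (blocks) whose row sums are r, whose column sums are k, and whose distinct rows have inner product λ. The checker below encodes this textbook definition; it is an illustrative sketch, not one of the metaheuristics discussed above:

```python
def is_bibd(M, v, b, r, k, lam):
    """Check whether the 0/1 incidence matrix M (v rows = points,
    b columns = blocks) defines a BIBD with parameters (v, b, r, k, lam)."""
    if len(M) != v or any(len(row) != b for row in M):
        return False
    if any(sum(row) != r for row in M):                 # each point lies in r blocks
        return False
    if any(sum(M[i][j] for i in range(v)) != k for j in range(b)):  # each block has k points
        return False
    for i in range(v):                                  # each pair of points in lam blocks
        for j in range(i + 1, v):
            if sum(a * c for a, c in zip(M[i], M[j])) != lam:
                return False
    return True
```

The Fano plane yields a (7, 7, 3, 3, 1)-BIBD and passes this check.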
Cognitive and neurological impairments are very common, but only a small proportion of affected individuals are diagnosed and treated, partly because of the high costs associated with frequent screening. Detecting pre-illness stages and analyzing the progression of neurological disorders through effective and efficient intelligent systems can be beneficial for timely diagnosis and early intervention. We propose using Large Language Models to extract features from free dialogues to detect cognitive decline. These features comprise high-level, content-independent reasoning features (such as comprehension, decreased awareness, increased distraction, and memory problems). Our solution comprises (i) preprocessing, (ii) feature engineering via Natural Language Processing techniques and prompt engineering, (iii) feature analysis and selection to optimize performance, and (iv) classification, supported by automatic explainability. We also explore how to improve ChatGPT's direct cognitive impairment prediction capabilities using the best features from our models. The evaluation metrics obtained support the effectiveness of a mixed approach that combines feature extraction with ChatGPT and a specialized Machine Learning model to detect cognitive decline within free-form conversational dialogues with older adults. Ultimately, our work may facilitate the development of an inexpensive, non-invasive, and rapid means of detecting and explaining cognitive decline.
Mobile devices and Internet of Things (IoT) devices nowadays generate large amounts of heterogeneous spatial-temporal data. Modelling the spatial-temporal dynamics under privacy constraints remains a challenging problem. Federated learning (FL) has been proposed as a framework to enable model training across distributed devices without sharing the original data, which reduces privacy concerns. Personalized federated learning (PFL) methods further address the data heterogeneity problem. However, these methods do not consider the natural spatial relations among nodes. To model spatial relations, Graph Neural Network (GNN) based FL approaches have been proposed, but dynamic spatial-temporal relations among edge nodes are not taken into account. Several approaches model spatial-temporal dynamics in a centralized environment, while less effort has been made in the federated setting. To overcome these challenges, we propose a novel Federated Adaptive Spatial-Temporal Attention (FedASTA) framework to model dynamic spatial-temporal relations. On the client node, FedASTA extracts temporal relations and trend patterns from the decomposed terms of the original time series. Then, on the server node, FedASTA utilizes the trend patterns from the clients to construct an adaptive spatial-temporal-aware graph that captures the dynamic correlations between clients. In addition, we design a masked spatial attention module that uses both the static graph and the constructed adaptive graph to model spatial dependencies among clients. Extensive experiments on five real-world public traffic flow datasets demonstrate that our method achieves state-of-the-art performance in the federated scenario. Moreover, experiments conducted in the centralized setting show the effectiveness of our novel adaptive graph construction approach compared with other popular dynamic spatial-temporal-aware methods.
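As a rough illustration of the kind of masked spatial attention described above (this generic scaled dot-product sketch is not FedASTA's actual module), attention scores between client nodes can be masked by a 0/1 adjacency matrix so that each node aggregates features only from its graph neighbours:

```python
import numpy as np

def masked_attention(X, adj):
    """Scaled dot-product attention over node features X (n x d), restricted
    by the 0/1 adjacency mask adj (n x n): node i attends only to nodes j
    with adj[i, j] > 0.  The mask is assumed to include self-loops so that
    every row has at least one admissible neighbour."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                 # pairwise affinities
    scores = np.where(adj > 0, scores, -np.inf)   # mask out non-neighbours
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)          # row-wise softmax over neighbours
    return w @ X                                  # aggregate neighbour features
```

With the identity matrix as mask, each node attends only to itself and the features pass through unchanged, which gives a quick sanity check.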
We consider two hypothesis testing problems for low-rank, high-dimensional tensor signals, namely the tensor signal alignment and tensor signal matching problems. These problems are challenging due to the high dimension of tensors and the lack of meaningful test statistics. By exploiting a recent tensor contraction method, we propose and validate relevant test statistics using eigenvalues of a data matrix resulting from the tensor contraction. The matrix has long-range dependence among its entries, which makes its analysis challenging, involved, and distinct from standard random matrix theory. Our approach provides a novel framework for addressing hypothesis testing problems in the context of high-dimensional tensor signals.
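To illustrate the contraction idea (a generic sketch with hypothetical shapes, not the paper's exact statistic), an order-3 tensor can be contracted along one mode with a probe vector, and the eigenvalues of the resulting Gram matrix then serve as raw material for spectral test statistics:

```python
import numpy as np

def contraction_spectrum(T, v):
    """Contract an order-3 tensor T (n1 x n2 x n3) along its third mode with
    a probe vector v, then return the eigenvalues (ascending) of the
    symmetrised n1 x n1 Gram matrix M M^T / n2 of the contracted matrix M.
    A generic sketch of the contraction-then-spectrum idea."""
    M = np.einsum('ijk,k->ij', T, v)   # tensor contraction along mode 3
    S = M @ M.T / M.shape[1]           # Gram matrix of the contracted data
    return np.linalg.eigvalsh(S)       # eigenvalues in ascending order
```

For a rank-one tensor the contracted matrix is rank one as well, so the spectrum has a single nonzero eigenvalue; low-rank signal plus noise would instead produce a few outlying eigenvalues above a bulk.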
In recent years, denoising diffusion models have become a crucial area of research due to their prominence in the rapidly expanding field of generative AI. While recent statistical advances have delivered explanations for the generation ability of idealised denoising diffusion models for high-dimensional target data, implementations introduce thresholding procedures for the generating process to overcome issues arising from the unbounded state space of such models. This mismatch between theoretical design and implementation of diffusion models has been addressed empirically by using a \emph{reflected} diffusion process as the driver of noise instead. In this paper, we study statistical guarantees of these denoising reflected diffusion models. In particular, we establish minimax optimal rates of convergence in total variation, up to a polylogarithmic factor, under Sobolev smoothness assumptions. Our main contributions include the statistical analysis of this novel class of denoising reflected diffusion models and a refined score approximation method in both time and space, leveraging spectral decomposition and rigorous neural network analysis.
Aperiodic autocorrelation is an important indicator of performance of sequences used in communications, remote sensing, and scientific instrumentation. Knowing a sequence's autocorrelation function, which reports the autocorrelation at every possible translation, is equivalent to knowing the magnitude of the sequence's Fourier transform. The phase problem is the difficulty in resolving this lack of phase information. We say that two sequences are equicorrelational to mean that they have the same aperiodic autocorrelation function. Sequences used in technological applications often have restrictions on their terms: they are not arbitrary complex numbers, but come from a more restricted alphabet. For example, binary sequences involve terms equal to only $+1$ and $-1$. We investigate necessary and sufficient conditions for two sequences to be equicorrelational, taking their alphabet into consideration. There are trivial forms of equicorrelationality arising from modifications that predictably preserve the autocorrelation, for example, negating a binary sequence or reversing the order of its terms. By a search of binary sequences up to length $44$, we find that nontrivial equicorrelationality among binary sequences does occur, but is rare. An integer $n$ is said to be equivocal when there are binary sequences of length $n$ that are nontrivially equicorrelational; otherwise $n$ is unequivocal. For $n \leq 44$, we found that the unequivocal lengths are $1$--$8$, $10$, $11$, $13$, $14$, $19$, $22$, $23$, $26$, $29$, $37$, and $38$. We pose open questions about the finitude of unequivocal numbers and the probability of nontrivial equicorrelationality occurring among binary sequences.
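The aperiodic autocorrelation function and the trivial equivalences mentioned above are easy to compute directly; the sketch below evaluates $C(u) = \sum_i a_i a_{i+u}$ and lets one check that negating a $\pm 1$ sequence or reversing the order of its terms leaves the autocorrelation unchanged:

```python
def aperiodic_autocorrelation(seq):
    """Aperiodic autocorrelation C(u) = sum_i seq[i] * seq[i + u]
    for shifts u = 0, ..., n - 1 (real-valued sequences)."""
    n = len(seq)
    return [sum(seq[i] * seq[i + u] for i in range(n - u)) for u in range(n)]
```

For instance, the binary sequence $(+1, +1, -1)$, its negation, and its reversal all share the autocorrelation function $(3, 0, -1)$, exhibiting the trivial equicorrelationality described above.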
Modelling multivariate spatio-temporal data with complex dependency structures is a challenging task but can be simplified by assuming that the original variables are generated from independent latent components. If these components are found, they can be modelled univariately. Blind source separation aims to recover the latent components by estimating the unmixing transformation based on the observed data only. Current methods for spatio-temporal blind source separation are restricted to linear unmixing, and nonlinear variants have not been implemented. In this paper, we extend the identifiable variational autoencoder to the nonlinear nonstationary spatio-temporal blind source separation setting and demonstrate its performance using comprehensive simulation studies. Additionally, we introduce two alternative methods for latent dimension estimation, a crucial task for obtaining the correct latent representation. Finally, we illustrate the proposed methods using a meteorological application, in which we estimate the latent dimension and the latent components, interpret the components, and show how nonstationarity can be accounted for and prediction accuracy improved by using the proposed nonlinear blind source separation method as a preprocessing step.
We develop EigenVI, an eigenvalue-based approach for black-box variational inference (BBVI). EigenVI constructs its variational approximations from orthogonal function expansions. For distributions over $\mathbb{R}^D$, the lowest order term in these expansions provides a Gaussian variational approximation, while higher-order terms provide a systematic way to model non-Gaussianity. These approximations are flexible enough to model complex distributions (multimodal, asymmetric), but they are simple enough that one can calculate their low-order moments and draw samples from them. EigenVI can also model other types of random variables (e.g., nonnegative, bounded) by constructing variational approximations from different families of orthogonal functions. Within these families, EigenVI computes the variational approximation that best matches the score function of the target distribution by minimizing a stochastic estimate of the Fisher divergence. Notably, this optimization reduces to solving a minimum eigenvalue problem, so that EigenVI effectively sidesteps the iterative gradient-based optimizations that are required for many other BBVI algorithms. (Gradient-based methods can be sensitive to learning rates, termination criteria, and other tunable hyperparameters.) We use EigenVI to approximate a variety of target distributions, including a benchmark suite of Bayesian models from posteriordb. On these distributions, we find that EigenVI is more accurate than existing methods for Gaussian BBVI.
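To illustrate the final reduction (with a generic symmetric matrix standing in for EigenVI's actual problem), a minimum eigenvalue problem amounts to minimising the Rayleigh quotient $x^\top A x$ over unit vectors $x$, which `numpy.linalg.eigh` solves directly without any iterative gradient-based tuning:

```python
import numpy as np

def min_eig_solution(A):
    """Return the smallest eigenvalue of the symmetric matrix A together with
    a unit eigenvector.  eigh returns eigenvalues in ascending order, so
    index 0 minimises the Rayleigh quotient x^T A x over unit vectors x."""
    vals, vecs = np.linalg.eigh(A)
    return vals[0], vecs[:, 0]
```

This is the sense in which such a fit has no learning rate or termination criterion: the optimiser is a single dense eigendecomposition.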
We propose new copula-based models for multivariate time series having continuous or discrete distributions, or a mixture of both. These models include stochastic volatility models and regime-switching models. We also propose statistics for testing independence between the generalized errors of these models, extending previous results of Duchesne, Ghoudi and Remillard (2012) obtained for stochastic volatility models. We define families of empirical processes constructed from lagged generalized errors, and we show that their joint asymptotic distributions are Gaussian and independent of the estimated parameters of the individual time series. Moebius transformations of the empirical processes are used to obtain tractable covariances. Several test statistics are then proposed, based on Cramer-von Mises statistics and dependence measures, as well as graphical methods to visualize the dependence. In addition, numerical experiments are performed to assess the power of the proposed tests. Finally, to show the usefulness of our methodologies, examples of applications for financial data and crime data are given to cover both discrete and continuous cases. All developed methodologies are implemented in the CRAN package IndGenErrors.
This book is written to offer a humble, but unified, treatment of e-values in hypothesis testing. The book is organized into three parts: Fundamental Concepts, Core Ideas, and Advanced Topics. The first part includes three chapters that introduce the basic concepts. The second part includes five chapters of core ideas such as universal inference, log-optimality, e-processes, operations on e-values, and e-values in multiple testing. The third part contains five chapters of advanced topics. We hope that, by putting the materials together in this book, the concept of e-values becomes more accessible for educational, research, and practical use.