Longitudinal characterization of cognitive change in late life has received increasing attention as a means to better understand normal cognitive aging and the cognitive changes that reflect pathology-related and mortality-related processes. Several mixed-effects models have been proposed to accommodate the non-linearity of cognitive decline and to assess the putative influence of covariates on it. In this work, we examine the standard linear mixed model (LMM) with a linear function of time and five alternative models capturing non-linearity of change over time: the LMM with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first describe the models theoretically. Next, using data from deceased participants in two prospective cohorts with annual cognitive testing, we compare the interpretations the models afford by investigating the association of education with cognitive change before death. Finally, we perform a simulation study to empirically evaluate the models and provide practical recommendations; in particular, the models are challenged by increasing follow-up spacing, increasing missing data, and decreasing sample size. With the exception of the LMM with a quadratic term, all models fit adequately, captured the non-linearity of cognitive change, and were relatively robust. Although spline-based models lack interpretable non-linearity parameters, their convergence was easier to achieve and they lend themselves to graphical interpretation. In contrast, the piecewise and sigmoidal models, with interpretable non-linear parameters, may require more data to achieve convergence.
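As a concrete illustration of one of these specifications, the sketch below fits the LMM with a quadratic time term to simulated cognition-before-death data; it assumes Python with statsmodels, and the variable names, effect sizes, and random-intercept-only structure are illustrative choices, not those of the cohort analyses.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate longitudinal cognition data with quadratic decline before death
# and subject-specific random intercepts (illustrative values only).
rng = np.random.default_rng(0)
n_subj, n_visits = 200, 8
subj = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits, dtype=float), n_subj)  # years before death
educ = np.repeat(rng.integers(8, 21, n_subj), n_visits)   # years of education
u = np.repeat(rng.normal(0, 1.0, n_subj), n_visits)       # random intercepts
cog = 25 + 0.1 * educ - 0.5 * time - 0.05 * time**2 + u \
      + rng.normal(0, 0.5, n_subj * n_visits)
df = pd.DataFrame(dict(subj=subj, time=time, educ=educ, cog=cog))

# LMM with a quadratic time term and an education-by-time interaction;
# random intercept per subject (random slopes could be added via re_formula).
model = smf.mixedlm("cog ~ time + I(time**2) + educ + educ:time",
                    data=df, groups="subj")
print(model.fit().summary())
```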
We describe a family of iterative algorithms that involve the repeated application of the discrete Fourier transform and its inverse. One interesting member of this family is motivated by the discrete Fourier transform uncertainty principle and applies a sparsification operation to both the real-domain and frequency-domain data, with convergence declared when the real-domain sparsity pattern stabilizes. This sparsification variant has practical utility for signal denoising, in particular the recovery of a periodic spike signal in the presence of Gaussian noise. General convergence properties and denoising performance relative to existing methods are demonstrated using simulation studies. An R package implementing this technique and related resources can be found at //hrfrost.host.dartmouth.edu/IterativeFT.
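A minimal NumPy sketch of the sparsification variant follows; the function name iterative_ft_denoise and the top-k hard-thresholding rule are our illustrative choices, and the R package should be consulted for the authors' actual implementation.

```python
import numpy as np

def iterative_ft_denoise(x, k_time, k_freq, max_iter=100):
    """Alternate hard-thresholding (keep the top-k magnitudes) in the signal
    and frequency domains; stop once the signal-domain support stabilizes."""
    prev_support = None
    for _ in range(max_iter):
        X = np.fft.fft(x)
        keep = np.argsort(np.abs(X))[-k_freq:]           # sparsify spectrum
        X_sparse = np.zeros_like(X)
        X_sparse[keep] = X[keep]
        x = np.real(np.fft.ifft(X_sparse))
        keep = np.argsort(np.abs(x))[-k_time:]           # sparsify signal
        support = np.zeros(len(x), dtype=bool)
        support[keep] = True
        x = np.where(support, x, 0.0)
        if prev_support is not None and np.array_equal(support, prev_support):
            break                                        # stable sparsity pattern
        prev_support = support
    return x

# Toy example: periodic spikes in Gaussian noise.
rng = np.random.default_rng(1)
n = 256
signal = np.zeros(n)
signal[::32] = 5.0                    # 8 spikes; spectrum has 32 nonzero bins
noisy = signal + rng.normal(0, 1, n)
recovered = iterative_ft_denoise(noisy, k_time=8, k_freq=32)
```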
We prove the uniform convergence of the geometric multigrid V-cycle for hybrid high-order (HHO) and other discontinuous skeletal methods. Our results generalize previously established results for hybridizable discontinuous Galerkin (HDG) methods, and our multigrid method uses standard smoothers and local solvers that are bounded, convergent, and consistent. We use a weak version of elliptic regularity in our proofs. Numerical experiments confirm our theoretical results.
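For readers unfamiliar with the algorithmic skeleton being analyzed, the following is a minimal geometric V-cycle for a 1D Poisson model problem with a weighted Jacobi smoother; it illustrates the recursion only and omits the HHO-specific local solvers and transfer operators treated in the paper.

```python
import numpy as np

def poisson_matrix(n):
    """1D Poisson with Dirichlet BCs on n interior points (h = 1/(n+1))."""
    h2 = (n + 1) ** 2
    return h2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

def v_cycle(A, b, x, nu=3, omega=2/3):
    n = len(b)
    if n <= 3:                                  # coarsest level: direct solve
        return np.linalg.solve(A, b)
    D = np.diag(A)
    for _ in range(nu):                         # pre-smoothing (weighted Jacobi)
        x = x + omega * (b - A @ x) / D
    r = b - A @ x
    # Linear interpolation P and full-weighting restriction R = P^T / 2.
    nc = (n - 1) // 2
    P = np.zeros((n, nc))
    for j in range(nc):
        P[2*j, j], P[2*j + 1, j], P[2*j + 2, j] = 0.5, 1.0, 0.5
    R = 0.5 * P.T
    e = v_cycle(R @ A @ P, R @ r, np.zeros(nc), nu, omega)  # Galerkin coarse op
    x = x + P @ e
    for _ in range(nu):                         # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x

n = 63
A, b, x = poisson_matrix(n), np.ones(n), np.zeros(n)
for _ in range(10):
    x = v_cycle(A, b, x)
```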
We propose a method for obtaining parsimonious decompositions of networks into higher-order interactions that can take the form of arbitrary motifs. The method is based on a class of analytically solvable generative models in which vertices are connected via explicit copies of motifs, and which, in combination with non-parametric priors, allow us to infer higher-order interactions from dyadic graph data without any prior knowledge of the types or frequencies of such interactions. Crucially, we also consider 'degree-corrected' models that correctly reflect the degree distribution of the network and consequently prove to be a better fit for many real-world networks than non-degree-corrected models. We test the presented approach on simulated data, for which we recover the underlying set of higher-order interactions with high accuracy. For empirical networks, the method identifies concise sets of atomic subgraphs from among thousands of candidates that cover a large fraction of edges and include higher-order interactions of known structural and functional significance. The method not only produces an explicit higher-order representation of the network but also fits the network to analytically tractable models, opening new avenues for the systematic study of higher-order network structures.
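To make the notion of an edge decomposition concrete, here is a deliberately crude stand-in, assuming networkx: a greedy, edge-disjoint extraction of a single motif type (triangles), with leftover edges kept as dyadic interactions. The paper's actual method performs Bayesian inference over arbitrary motifs rather than this greedy heuristic.

```python
import networkx as nx

def greedy_triangle_decomposition(G):
    """Greedily extract edge-disjoint triangles; remaining edges are kept as
    plain dyadic interactions. A toy illustration only."""
    H = G.copy()
    triangles = []
    changed = True
    while changed:
        changed = False
        for u, v in list(H.edges()):
            common = set(H[u]) & set(H[v])        # shared neighbors close a triangle
            if common:
                w = common.pop()
                triangles.append((u, v, w))
                H.remove_edges_from([(u, v), (v, w), (u, w)])
                changed = True
                break
    return triangles, list(H.edges())

G = nx.karate_club_graph()
tris, rest = greedy_triangle_decomposition(G)
print(f"{len(tris)} triangles cover {3 * len(tris)} edges; {len(rest)} edges remain")
```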
Decision making and learning in the presence of uncertainty have attracted significant attention in view of the increasing need to achieve robust and reliable operation. This need becomes even more prominent when the uncertainty stems from adversarial attacks. In this paper we focus on linear and non-linear classification problems and propose a novel adversarial training method for robust classifiers, inspired by Support Vector Machine (SVM) margins. We view robustness through a data-driven lens and derive finite-sample complexity bounds for both linear and non-linear classifiers in binary and multi-class scenarios; notably, our bounds match the complexity of natural classifiers. Our algorithm minimizes a worst-case surrogate loss, using Linear Programming (LP) and Second Order Cone Programming (SOCP) for linear and non-linear models, respectively. Numerical experiments on the benchmark MNIST and CIFAR10 datasets show that our approach performs comparably to state-of-the-art methods, without needing adversarial examples during training. Our work offers a comprehensive framework for enhancing the robustness of binary linear and non-linear classifiers, embedding robustness into learning in the presence of adversaries.
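For the linear binary case, the worst-case hinge loss under an $\ell_\infty$-bounded perturbation of radius $\varepsilon$ admits the well-known closed form $\max(0, 1 - y(w \cdot x + b) + \varepsilon \|w\|_1)$; the sketch below minimizes it by subgradient descent in NumPy. This is a simplification for illustration only: the paper solves the corresponding problems exactly via LP/SOCP, and the step size and regularization here are arbitrary.

```python
import numpy as np

def robust_hinge_loss(w, b, X, y, eps):
    """Worst-case hinge loss under ||delta||_inf <= eps perturbations of x,
    via the closed form hinge(y(w.x + b) - eps * ||w||_1)."""
    margins = y * (X @ w + b) - eps * np.abs(w).sum()
    return np.maximum(0.0, 1.0 - margins).mean()

def train_robust_linear(X, y, eps=0.1, lr=0.05, lam=1e-3, iters=1000):
    """Subgradient descent on the regularized worst-case hinge loss.
    Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        margins = y * (X @ w + b) - eps * np.abs(w).sum()
        active = margins < 1.0                   # samples in the hinge region
        gw, gb = lam * w, 0.0
        if active.any():
            gw = gw + (-(y[active, None] * X[active]).sum(axis=0)
                       + active.sum() * eps * np.sign(w)) / n
            gb = -y[active].sum() / n
        w, b = w - lr * gw, b - lr * gb
    return w, b
```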
Dynamical systems across the sciences, from electrical circuits to ecological networks, undergo qualitative and often catastrophic changes in behavior, called bifurcations, when their underlying parameters cross a threshold. Existing methods can predict oncoming catastrophes in individual systems, but they are primarily time-series-based and struggle both to categorize qualitative dynamical regimes across diverse systems and to generalize to real data. To address this challenge, we propose a data-driven, physics-informed deep-learning framework for classifying dynamical regimes and characterizing bifurcation boundaries, based on the extraction of topologically invariant features. We focus on the paradigmatic case of the supercritical Hopf bifurcation, which is used to model periodic dynamics across a wide range of applications. Our convolutional attention method is trained with data augmentations that encourage the learning of topological invariants, which can then be used to detect bifurcation boundaries in unseen systems and to design models of biological systems such as oscillatory gene regulatory networks. We further demonstrate our method's use on real data by recovering distinct proliferation and differentiation dynamics along the pancreatic endocrinogenesis trajectory in gene expression space from single-cell data. Our method provides valuable insights into the qualitative, long-term behavior of a wide range of dynamical systems and can detect bifurcations or catastrophic transitions in large-scale physical and biological systems.
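As an illustration of the kind of training data involved, the following SciPy sketch simulates the supercritical Hopf normal form on either side of the bifurcation at $\mu = 0$; the parameter values and trajectory lengths are arbitrary, and the paper's convolutional attention architecture is not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hopf_normal_form(t, z, mu, omega=1.0):
    """Supercritical Hopf normal form: stable fixed point for mu < 0,
    stable limit cycle of radius sqrt(mu) for mu > 0."""
    x, y = z
    r2 = x**2 + y**2
    return [mu * x - omega * y - x * r2, omega * x + mu * y - y * r2]

def simulate(mu, z0=(0.5, 0.0), t_max=50.0, n=500):
    sol = solve_ivp(hopf_normal_form, (0, t_max), z0, args=(mu,),
                    t_eval=np.linspace(0, t_max, n))
    return sol.y.T   # planar trajectory, usable as regime-classifier input

# Two qualitative regimes on either side of the bifurcation at mu = 0:
spiral_in = simulate(mu=-0.2)     # decays to the origin
limit_cycle = simulate(mu=+0.2)   # settles onto a cycle of radius ~sqrt(0.2)
```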
The Linear Model of Co-regionalization (LMC) is a very general model of multitask Gaussian process for regression or classification. While its expressivity and conceptual simplicity are appealing, naive implementations have cubic complexity in the number of data points and number of tasks, making approximations mandatory for most applications. However, recent work has shown that, under some conditions, the latent processes of the model can be decoupled, leading to a complexity that is only linear in the number of those processes. Here we extend these results, showing under the most general assumptions that the only condition necessary for an efficient exact computation of the LMC is a mild hypothesis on the noise model. We introduce a full parametrization of the resulting \emph{projected LMC} model and an expression of the marginal likelihood enabling efficient optimization. We perform a parametric study on synthetic data to show the excellent performance of our approach compared to an unrestricted exact LMC and to approximations of the latter. Overall, the projected LMC appears as a credible and simpler alternative to state-of-the-art models, and it greatly facilitates computations such as leave-one-out cross-validation and fantasization.
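To fix ideas, here is the naive LMC covariance whose cubic cost motivates the projected variant, assuming NumPy and rank-1 coregionalization matrices (our illustrative simplification; the paper's projected parametrization is not shown).

```python
import numpy as np

def rbf(X1, X2, ls):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def lmc_covariance(X, A_list, ls_list):
    """Naive LMC covariance over T tasks and n inputs:
    K = sum_q kron(B_q, k_q(X, X)) with B_q = a_q a_q^T (rank-1 here).
    Exact GP inference on this (nT x nT) matrix costs O(n^3 T^3), which is
    the bottleneck that the projected LMC removes."""
    n, T = X.shape[0], A_list[0].shape[0]
    K = np.zeros((n * T, n * T))
    for a_q, ls in zip(A_list, ls_list):
        K += np.kron(np.outer(a_q, a_q), rbf(X, X, ls))
    return K + 1e-6 * np.eye(n * T)   # jitter / iid noise placeholder

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (50, 2))        # 50 inputs; 2 tasks; 2 latent processes
K = lmc_covariance(X, A_list=[rng.normal(size=2) for _ in range(2)],
                   ls_list=[0.2, 0.5])
```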
We formulate a uniform tail bound for empirical processes indexed by a class of functions, in terms of the individual deviations of the functions rather than the worst-case deviation over the whole class. The tail bound is established by introducing an initial "deflation" step into the standard generic chaining argument. The resulting bound is the sum of the complexity of the "deflated" function class, measured by a generalization of Talagrand's $\gamma$ functional, and the deviation of the individual function instance, both formulated in terms of the natural seminorm induced by the corresponding Cram\'{e}r functions. We also provide approximations of this seminorm when the function class lies in a given Orlicz space of exponential type, which can be used to make the complexity term and the deviation term more explicit.
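Schematically, suppressing constants and the precise definitions (which are the paper's contribution), such a bound takes the following shape, holding uniformly over $f \in \mathcal{F}$ with high probability in $t$:

```latex
% Shape only; \Lambda denotes the Cram\'{e}r-induced seminorm,
% \tilde{\mathcal{F}} the deflated class, and \gamma a generalization
% of Talagrand's functional.
\[
  \bigl| \mathbb{E}_n f - \mathbb{E} f \bigr|
  \;\lesssim\;
  \underbrace{\gamma\bigl(\tilde{\mathcal{F}}, \|\cdot\|_{\Lambda}\bigr)}_{\text{complexity of the deflated class}}
  \;+\;
  \underbrace{t \, \|f\|_{\Lambda}}_{\text{individual deviation of } f}.
\]
```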
The latency location routing problem integrates the facility location problem and the multi-depot cumulative capacitated vehicle routing problem. It involves making simultaneous decisions about depot locations and vehicle routes to serve customers while minimizing the sum of the customers' waiting (arrival) times. To address this computationally challenging problem, we propose a reinforcement-learning-guided hybrid evolutionary algorithm following the framework of the memetic algorithm. The proposed algorithm relies on a diversity-enhanced multi-parent edge assembly crossover to build promising offspring and on a reinforcement-learning-guided variable neighborhood descent to determine the exploration order of multiple neighborhoods. Additionally, strategic oscillation is used to achieve a balanced exploration of both feasible and infeasible solutions. Experimental results on three sets of 76 popular instances demonstrate the algorithm's competitiveness against state-of-the-art methods, including improved best solutions (new upper bounds) for 51 of the 59 instances with unknown optima and results equal to the best known for the remaining instances. We also conduct additional experiments to shed light on the key components of the algorithm.
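A minimal sketch of a reinforcement-learning-guided neighborhood ordering, assuming an epsilon-greedy value update: the class and function names are hypothetical, and the paper's actual learning scheme, crossover, and strategic oscillation are not reproduced.

```python
import random

class RLNeighborhoodSelector:
    """Epsilon-greedy stand-in for learning a neighborhood exploration order:
    neighborhoods that recently improved the incumbent earn higher value
    and are explored first."""
    def __init__(self, n_neighborhoods, eps=0.2, alpha=0.3):
        self.q = [0.0] * n_neighborhoods
        self.eps, self.alpha = eps, alpha

    def order(self):
        idx = list(range(len(self.q)))
        if random.random() < self.eps:
            random.shuffle(idx)                     # explore
        else:
            idx.sort(key=lambda i: -self.q[i])      # exploit learned values
        return idx

    def update(self, i, reward):
        self.q[i] += self.alpha * (reward - self.q[i])  # running average

def vnd(solution, neighborhoods, selector, evaluate):
    """Variable neighborhood descent with a learned neighborhood order.
    Each entry of `neighborhoods` maps a solution to its best neighbor."""
    improved = True
    while improved:
        improved = False
        for i in selector.order():
            cand = neighborhoods[i](solution)
            gain = evaluate(solution) - evaluate(cand)
            selector.update(i, max(gain, 0.0))
            if gain > 0:
                solution, improved = cand, True
                break
    return solution
```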
In many communication contexts, the capabilities of the involved actors cannot be known beforehand, whether the recipient is a cell, a plant, an insect, or even a life form unknown to Earth. Regardless of the recipient, the space and time scales of the message could be too fast, too slow, too large, or too small, and the message may never be decoded. It therefore pays to devise a way of encoding messages agnostic of space and time scales. We propose the use of fractal functions as self-executable infinite-frequency carriers for sending messages, given their properties of structural self-similarity and scale invariance; we call this 'fractal messaging'. A space- and time-agnostic framework for message transmission should encode a message such that it can be decoded at several spatio-temporal scales. Hence, the core idea of the framework proposed herein is to encode a binary message as waves along infinitely many frequencies (in power-like distributions) and amplitudes, transmit the message, and then decode and reproduce it. Starting from a spatial embedding, the components of the Weierstrass function, a well-known fractal, are used as carriers of the message, with each component's amplitude modulated to embed the binary stream, allowing for a space-time-agnostic approach to messaging.
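A toy NumPy sketch of the encode/decode cycle with a finite truncation of the Weierstrass series; the parameters a and b, the modulation depth, and the correlation-based decoder are our illustrative choices, not the paper's specification.

```python
import numpy as np

def encode(bits, x, a=0.5, b=3.0, mod=0.25):
    """Superpose Weierstrass components a^n cos(b^n * pi * x); each
    component's amplitude is nudged up (bit 1) or down (bit 0) by `mod`."""
    s = np.zeros_like(x)
    for n, bit in enumerate(bits):
        amp = a**n * (1 + mod if bit else 1 - mod)
        s += amp * np.cos(b**n * np.pi * x)
    return s

def decode(s, x, n_bits, a=0.5, b=3.0):
    """Over a long window the carriers are (numerically) orthogonal, so a
    correlation recovers each amplitude; threshold against the baseline a^n."""
    bits = []
    for n in range(n_bits):
        amp = 2.0 * np.mean(s * np.cos(b**n * np.pi * x))
        bits.append(int(amp > a**n))
    return bits

x = np.linspace(0.0, 10.0, 200_000)   # dense sampling: highest carrier is b^7/2 Hz
message = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(encode(message, x), x, len(message)) == message
```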
Existing statistical methods for the analysis of micro-randomized trials (MRTs) are designed to estimate causal excursion effects using data from a single MRT. In practice, however, researchers can often find previous MRTs that employ similar interventions. In this paper, we develop data integration methods that capitalize on this additional information, leading to statistical efficiency gains. To further increase efficiency, we demonstrate how to combine these approaches according to a generalization of multivariate precision weighting that allows for correlation between estimates, and we show that the resulting meta-estimator possesses an asymptotic optimality property. We illustrate our methods in simulation and in a case study involving two MRTs in the area of smoking cessation.
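A minimal NumPy sketch of the generalized precision-weighting step: given K correlated estimates of a common scalar parameter with joint covariance $\Sigma$, the combined estimator uses weights $\Sigma^{-1}\mathbf{1} / (\mathbf{1}^\top \Sigma^{-1} \mathbf{1})$. The numbers are toy values, and the construction of the component estimators from MRT data is not shown.

```python
import numpy as np

def precision_weighted_combine(estimates, cov):
    """Generalized precision weighting for correlated estimates of a common
    scalar parameter. Reduces to classical inverse-variance weighting when
    the covariance matrix is diagonal. Returns (estimate, variance)."""
    est = np.asarray(estimates, dtype=float)
    ones = np.ones_like(est)
    prec_ones = np.linalg.solve(np.asarray(cov, dtype=float), ones)
    denom = ones @ prec_ones
    return (prec_ones @ est) / denom, 1.0 / denom

# Toy numbers: two correlated estimates of the same causal excursion effect.
theta_hat = [0.42, 0.55]
Sigma = [[0.04, 0.01],
         [0.01, 0.09]]
combined, variance = precision_weighted_combine(theta_hat, Sigma)
```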