There is by now an extensive and well-developed theory of weak convergence for moving averages and continuous-time random walks (CTRWs) with respect to Skorokhod's M1 and J1 topologies. Here we address the fundamental question of how this translates into functional limit theorems, in the M1 or J1 topology, for stochastic integrals driven by these processes. As a key application, we provide weak approximation results for a general class of SDEs driven by time-changed L\'evy processes. Such SDEs and their associated fractional Fokker--Planck--Kolmogorov equations are central to models of anomalous diffusion in statistical physics, and our results provide a rigorous functional characterisation of these models as continuum limits of their counterparts driven by CTRWs. For strictly M1-convergent moving averages and so-called correlated CTRWs, it turns out that convergence of the stochastic integrals can fail markedly. Nevertheless, we identify natural classes of integrand processes for which the convergence does hold. We end by showing that these results are general enough to yield functional limit theorems, in the M1 topology, for certain stochastic delay differential equations driven by moving averages.
We construct Bayesian and frequentist finite-sample goodness-of-fit tests for three variants of the stochastic blockmodel for network data. Since all of these variants are log-linear in form when block assignments are known, the tests for the \emph{latent} block model versions combine a block-membership estimator with the algebraic statistics machinery for testing goodness-of-fit in log-linear models. We describe Markov bases and marginal polytopes of the stochastic blockmodel variants and discuss how both facilitate the development of goodness-of-fit tests and an understanding of model behavior. The general testing methodology developed here extends to any finite mixture of log-linear models on discrete data and, as such, constitutes the first application of the algebraic statistics machinery to latent-variable models.
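To illustrate the algebraic statistics machinery these tests build on, the following is a minimal sketch of the classical Diaconis--Sturmfels exact test for the simplest log-linear model, independence in a two-way contingency table, whose Markov basis consists of the $2\times 2$ swap moves; the blockmodel tests combine the same ingredients with the Markov bases described in the paper. The data and function names here are illustrative, not taken from the paper.

```python
import numpy as np
from math import lgamma, log

rng = np.random.default_rng(0)

def chi2(table, expected):
    """Pearson chi-square statistic against fixed expected counts."""
    return np.sum((table - expected) ** 2 / expected)

def ds_walk(table, n_steps, expected):
    """Diaconis--Sturmfels walk on the fiber of tables with fixed row and
    column sums, using 2x2 swap moves (the Markov basis for independence)
    with a Metropolis correction targeting the hypergeometric distribution."""
    lf = lambda x: lgamma(x + 1.0)  # log-factorial
    t = table.astype(float).copy()
    r, c = t.shape
    stats = []
    for _ in range(n_steps):
        (i, j), (k, l) = rng.choice(r, 2, replace=False), rng.choice(c, 2, replace=False)
        s = rng.choice([-1.0, 1.0])
        new = t.copy()
        new[i, k] += s; new[j, l] += s; new[i, l] -= s; new[j, k] -= s
        if new.min() >= 0:  # proposal stays inside the fiber
            cells = [(i, k), (i, l), (j, k), (j, l)]
            logr = sum(lf(t[a, b]) - lf(new[a, b]) for a, b in cells)
            if log(rng.random()) < logr:
                t = new
        stats.append(chi2(t, expected))
    return np.array(stats)

obs = np.array([[10, 2, 3], [4, 15, 6], [5, 3, 12]])  # illustrative 3x3 table
expected = np.outer(obs.sum(1), obs.sum(0)) / obs.sum()
stats = ds_walk(obs, 20000, expected)
print("exact conditional p-value ~", np.mean(stats >= chi2(obs, expected)))
```

The Metropolis correction targets the hypergeometric distribution on the fiber of tables with the observed margins, which is the exact conditional null distribution for the independence model.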
As large models evolve, performance evaluation is essential to assess their capabilities and to ensure safety before practical deployment. However, current model evaluations mainly rely on specific tasks and datasets, and lack a unified framework for assessing the multidimensional intelligence of large models. In this perspective, we advocate a comprehensive framework of cognitive science-inspired artificial general intelligence (AGI) tests, aimed at meeting the testing needs of large models with increasingly advanced capabilities. The cognitive science-inspired AGI tests encompass the full spectrum of intelligence facets, including crystallized intelligence, fluid intelligence, social intelligence, and embodied intelligence. To assess the multidimensional intelligence of large models, the AGI tests consist of a battery of well-designed cognitive tests adapted from human intelligence tests and naturally embedded in an immersive virtual community. We propose increasing the complexity of AGI testing tasks in step with advances in large models, and we emphasize the need to interpret test results carefully to avoid false negatives and false positives. We believe that cognitive science-inspired AGI tests will effectively guide the targeted improvement of large models along specific dimensions of intelligence and accelerate their integration into human society.
A significant part of modern topological data analysis is concerned with the design and study of algebraic invariants of poset representations -- often referred to as multi-parameter persistence modules. One such invariant is the minimal rank decomposition, which encodes the ranks of all the structure morphisms of the persistence module by a single ordered pair of rectangle-decomposable modules, interpreted as a signed barcode. This signed barcode generalizes the persistence barcode from one-parameter persistence to any number of parameters, raising the question of its bottleneck stability. We show in this paper that the minimal rank decomposition is not stable under the natural notion of signed bottleneck matching between signed barcodes. We remedy this by turning our focus to the rank exact decomposition, a related signed barcode induced by the minimal projective resolution of the module relative to the so-called rank exact structure, which we prove to be bottleneck stable under signed matchings. As part of our proof, we obtain two intermediate results of independent interest: we compute the global dimension of the rank exact structure on the category of finitely presentable multi-parameter persistence modules, and we prove a bottleneck stability result for hook-decomposable modules. We also give a bound on the size of the rank exact decomposition that is polynomial in the size of the usual minimal projective resolution, we prove a universality result for the dissimilarity function induced by the notion of signed matching, and we compute, in the two-parameter case, the global dimension of a different exact structure related to the upsets of the indexing poset. This set of results combines concepts from topological data analysis and from the representation theory of posets, and we believe it is relevant to both areas.
It has been shown that deep neural networks of sufficiently large width are universal approximators, but they are not if the width is too small. There have been several attempts to characterize the minimum width $w_{\min}$ enabling the universal approximation property; however, only a few determined the exact values. In this work, we show that the minimum width for $L^p$ approximation of $L^p$ functions from $[0,1]^{d_x}$ to $\mathbb R^{d_y}$ is exactly $\max\{d_x,d_y,2\}$ if the activation function is ReLU-like (e.g., ReLU, GELU, Softplus). Compared with the known result for ReLU networks, $w_{\min}=\max\{d_x+1,d_y\}$ when the domain is $\smash{\mathbb R^{d_x}}$, our result shows for the first time that approximation on a compact domain requires a smaller width than approximation on $\smash{\mathbb R^{d_x}}$. We next prove a lower bound on $w_{\min}$ for uniform approximation with general activation functions, including ReLU: $w_{\min}\ge d_y+1$ if $d_x<d_y\le2d_x$. Together with our first result, this shows a dichotomy between $L^p$ and uniform approximation for general activation functions and input/output dimensions.
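To make the dichotomy concrete, consider a worked instance of the two statements above (the numbers are ours, chosen to satisfy the stated conditions): take $d_x=2$ and $d_y=3$, so that $d_x<d_y\le 2d_x$. Then
\[
  w_{\min}^{L^p} \;=\; \max\{d_x,d_y,2\} \;=\; 3,
  \qquad
  w_{\min}^{\mathrm{unif}} \;\ge\; d_y+1 \;=\; 4,
\]
so width-$3$ ReLU networks suffice for $L^p$ approximation on $[0,1]^2$, while uniform approximation into $\mathbb R^3$ requires width at least $4$.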
High-dimensional matrix regression has been studied from various angles, including statistical properties, computational efficiency, and applications to specific instances such as multivariate regression, system identification, and matrix compressed sensing. Existing studies mainly consider the idealized case in which the covariate matrix is observed without noise, while the more realistic scenario in which the covariates are corrupted by noise or contain missing data has received little attention. We consider the general errors-in-variables matrix regression model and propose a unified framework for low-rank estimation based on nonconvex spectral regularization. On the statistical side, recovery bounds for all stationary points are established, guaranteeing statistical consistency. On the computational side, the proximal gradient method is applied to solve the nonconvex optimization problem and is proved to converge in polynomial time. Consequences for specific matrix compressed sensing models with additive noise and missing data are obtained by verifying the corresponding regularity conditions. Finally, the performance of the proposed nonconvex estimation method is illustrated by numerical experiments.
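As a minimal sketch of the computational component (our construction, not the paper's code), the following runs proximal gradient for trace regression with a nuclear-norm penalty, the convex special case of the spectral regularizers considered; in the errors-in-variables setting the quadratic loss would additionally be replaced by a bias-corrected surrogate, which we omit here for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def svt(M, tau):
    """Singular-value soft-thresholding: the proximal map of tau * nuclear norm.
    (A nonconvex spectral penalty would replace this thresholding rule.)"""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_grad(X, y, lam, step, n_iter=300):
    """Proximal gradient for trace regression y_i = <X_i, Theta> + noise,
    minimising (1/2n)||y - A(Theta)||^2 + lam * ||Theta||_*."""
    n, d1, d2 = X.shape
    Theta = np.zeros((d1, d2))
    for _ in range(n_iter):
        resid = np.einsum("nij,ij->n", X, Theta) - y  # A(Theta) - y
        grad = np.einsum("n,nij->ij", resid, X) / n   # gradient of the quadratic loss
        Theta = svt(Theta - step * grad, step * lam)
    return Theta

# Synthetic rank-2 target and noisy measurements (illustrative sizes).
d1, d2, n, r = 10, 8, 400, 2
Theta_star = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
X = rng.standard_normal((n, d1, d2))
y = np.einsum("nij,ij->n", X, Theta_star) + 0.1 * rng.standard_normal(n)
Theta_hat = prox_grad(X, y, lam=0.05, step=0.2)
print("relative error:", np.linalg.norm(Theta_hat - Theta_star) / np.linalg.norm(Theta_star))
```

With a nonconvex penalty the iteration is identical except for the thresholding rule, which is why any stationary point, rather than only the global minimizer, must be covered by the recovery bounds.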
Iterated conditional expectation (ICE) g-computation is an estimation approach for addressing time-varying confounding in both longitudinal and time-to-event data. Unlike other g-computation implementations, ICE avoids the need to specify models for each time-varying covariate. For variance estimation, previous work has suggested the bootstrap. However, bootstrapping can be computationally intensive and is sensitive to the number of resamples used. Here, we present ICE g-computation as a set of stacked estimating equations, so that the variance of the ICE g-computation estimator can be consistently estimated using the empirical sandwich variance estimator. The performance of the variance estimator was evaluated empirically in a simulation study, and the proposed approach is demonstrated with an illustrative example on the effect of cigarette smoking on the prevalence of hypertension. In the simulation study, the empirical sandwich variance estimator appropriately estimated the variance. When comparing runtimes between the sandwich variance estimator and the bootstrap in the applied example, the sandwich estimator was substantially faster, even when the bootstraps were run in parallel. The empirical sandwich variance estimator is thus a viable option for variance estimation with ICE g-computation.
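To illustrate the stacked-estimating-equations construction, here is a minimal hedged sketch with a toy stack (the mean and variance of a single variable) standing in for the ICE g-computation equations; the bread matrix is obtained by finite differences, as one would do for a more complex stack. The data and function names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

def ee(theta, o):
    """Stacked estimating functions, returned as a p x n matrix of
    per-observation contributions. (Illustrative: the ICE stack replaces
    these with the iterated-outcome-model and target-parameter equations.)"""
    mu, sigma2 = theta
    return np.stack([o - mu, (o - mu) ** 2 - sigma2])

def sandwich(theta_hat, o, eps=1e-6):
    """Empirical sandwich variance: bread^{-1} meat bread^{-T} / n,
    with the bread computed by central finite differences."""
    n = o.size
    psi = ee(theta_hat, o)
    meat = psi @ psi.T / n
    p = len(theta_hat)
    bread = np.zeros((p, p))
    for j in range(p):
        d = np.zeros(p); d[j] = eps
        bread[:, j] = -(ee(theta_hat + d, o) - ee(theta_hat - d, o)).sum(axis=1) / (2 * eps * n)
    binv = np.linalg.inv(bread)
    return binv @ meat @ binv.T / n

o = rng.normal(loc=1.0, scale=2.0, size=500)
theta_hat = np.array([o.mean(), o.var()])  # solves the estimating equations
cov = sandwich(theta_hat, o)
print("SE(mean):", np.sqrt(cov[0, 0]), " vs closed form:", o.std() / np.sqrt(o.size))
```

Because the variance follows from one pass over the stacked equations, no resampling is needed, which is the source of the runtime advantage over the bootstrap reported above.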
We provide finitely generated infinite groups on which natural random walks are noise sensitive in total variation, as well as groups on which natural random walks are noise stable in total variation.
The exponential growth in scientific publications poses a severe challenge for human researchers. It forces attention into ever narrower subfields, making it difficult to discover impactful new research ideas and collaborations outside one's own field. While there are methods for predicting a scientific paper's future citation count, they require the research to be finished and the paper written, and thus typically assess impact long after the idea was conceived. Here we show how to predict the impact of newly conceived research ideas that have not yet been published. To this end, we developed a large evolving knowledge graph built from more than 21 million scientific papers. It combines a semantic network created from the content of the papers with an impact network created from their historic citations. Using machine learning, we can predict the dynamics of the evolving network into the future with high accuracy, and thereby the impact of new research directions. We envision that the ability to predict the impact of new ideas will be a crucial component of future artificial muses that can inspire new impactful and interesting scientific ideas.
This work develops, for the first time, a face-centred finite volume (FCFV) solver for the simulation of laminar and turbulent viscous incompressible flows. The formulation relies on the Reynolds-averaged Navier-Stokes (RANS) equations coupled with the negative Spalart-Allmaras (SA) model; three novel convective stabilisations, inspired by Riemann solvers, are derived and compared numerically. The resulting method achieves first-order convergence of the velocity, the velocity-gradient tensor, and the pressure. FCFV accurately predicts engineering quantities of interest, such as drag and lift, on unstructured meshes and, by avoiding gradient reconstruction, is insensitive to mesh quality, even in the presence of highly distorted and stretched cells. Monolithic and staggered solution strategies for the RANS-SA system are derived and compared numerically. Numerical benchmarks involving laminar and turbulent, steady and transient cases are used to assess the performance, accuracy, and robustness of the proposed FCFV method.
Legged robots have the potential to traverse complex terrain and access confined spaces beyond the reach of traditional platforms thanks to their ability to carefully select footholds and flexibly adapt their body posture while walking. However, robust deployment in real-world applications is still an open challenge. In this paper, we present a method for legged locomotion control using reinforcement learning and 3D volumetric representations to enable robust and versatile locomotion in confined and unstructured environments. By employing a two-layer hierarchical policy structure, we exploit the capabilities of a highly robust low-level policy to follow 6D commands and a high-level policy to enable three-dimensional spatial awareness for navigating under overhanging obstacles. Our study includes the development of a procedural terrain generator to create diverse training environments. We present a series of experimental evaluations in both simulation and real-world settings, demonstrating the effectiveness of our approach in controlling a quadruped robot in confined, rough terrain. By achieving this, our work extends the applicability of legged robots to a broader range of scenarios.