Gödel's second incompleteness theorem forbids proving, in a given theory U, the consistency of many theories, in particular of the theory U itself, just as it forbids proving the normalization property for these theories, since this property implies their consistency. When we cannot prove in a theory U the consistency of a theory T, we can try to prove a relative consistency theorem, that is, a theorem of the form: if U is consistent, then T is consistent. In the same spirit, we show in this paper how to prove relative normalization theorems, that is, theorems of the form: if U is 1-consistent, then T has the normalization property.
Transformers play a central role in the inner workings of large language models. We develop a mathematical framework for analyzing Transformers based on their interpretation as interacting particle systems, which reveals that clusters emerge in the long-time limit. Our study explores the underlying theory and offers new perspectives for mathematicians as well as computer scientists.
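The clustering phenomenon can be observed numerically in a minimal toy simulation of attention dynamics on the unit sphere. The discretization below is our own illustration, not the framework of the paper; the inverse temperature, step size, and particle count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 16, 2                  # tokens as particles on the unit circle
beta, dt, steps = 1.0, 0.05, 4000

x = rng.normal(size=(n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)

for _ in range(steps):
    w = np.exp(beta * (x @ x.T))                    # self-attention affinities
    w /= w.sum(axis=1, keepdims=True)
    v = w @ x                                       # attention-weighted drift
    v -= np.sum(v * x, axis=1, keepdims=True) * x   # project onto the tangent space
    x += dt * v
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # stay on the sphere

mean_cosine = float((x @ x.T).mean())               # near 1 when all particles coincide
print(round(mean_cosine, 3))
```

Starting from generic random positions, the particles collapse toward a single cluster, which is the qualitative long-time behavior described above.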
Ensemble methods such as bagging and random forests are ubiquitous in various fields, from finance to genomics. Despite their prevalence, the question of the efficient tuning of ensemble parameters has received relatively little attention. This paper introduces a cross-validation method, ECV (Extrapolated Cross-Validation), for tuning the ensemble and subsample sizes in randomized ensembles. Our method builds on two primary ingredients: initial estimators for small ensemble sizes using out-of-bag errors and a novel risk extrapolation technique that leverages the structure of prediction risk decomposition. By establishing uniform consistency of our risk extrapolation technique over ensemble and subsample sizes, we show that ECV yields $\delta$-optimal (with respect to the oracle-tuned risk) ensembles for squared prediction risk. Our theory accommodates general ensemble predictors, requires only mild moment assumptions, and allows for high-dimensional regimes where the feature dimension grows with the sample size. As a practical case study, we employ ECV to predict surface protein abundances from gene expressions in single-cell multiomics using random forests. In comparison to sample-split cross-validation and $K$-fold cross-validation, ECV achieves higher accuracy while avoiding sample splitting. At the same time, its computational cost is considerably lower owing to the use of the risk extrapolation technique. Additional numerical results validate the finite-sample accuracy of ECV for several common ensemble predictors under a computational constraint on the maximum ensemble size.
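The extrapolation idea can be sketched as follows. For an ensemble that averages $M$ independently subsampled predictors, the squared prediction risk decomposes as $R(M) = a + b/M$, so risk estimates at $M = 1$ and $M = 2$ determine the entire curve. The toy below is our own illustration with subsampled least-squares predictors and held-out Monte Carlo risk in place of out-of-bag errors; it is not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: a linear model with unit-variance noise
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(size=n)
X_test = rng.normal(size=(2000, p))
y_test = X_test @ beta + rng.normal(size=2000)

def fit_subsample(k=100):
    # one ensemble member: least squares on a random subsample of size k
    idx = rng.choice(n, size=k, replace=False)
    coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return coef

def risk(M, reps=50):
    # Monte Carlo estimate of the squared test risk of an M-ensemble
    out = []
    for _ in range(reps):
        pred = np.mean([X_test @ fit_subsample() for _ in range(M)], axis=0)
        out.append(np.mean((y_test - pred) ** 2))
    return float(np.mean(out))

# risk decomposition R(M) = a + b / M: solve for a, b from M = 1 and M = 2
R1, R2 = risk(1), risk(2)
b = 2.0 * (R1 - R2)
a = 2.0 * R2 - R1
R10_extrapolated = a + b / 10
R10_actual = risk(10)
print(round(R10_extrapolated, 3), round(R10_actual, 3))
```

Because the ensemble members are conditionally i.i.d., the $a + b/M$ form is exact here, and the extrapolated risk at $M = 10$ closely matches the directly simulated one while requiring only the two small fits.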
A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination which follows different predictors at different times. We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical task systems. Against the best (in hindsight) unconstrained combination of $\ell$ predictors, we obtain a competitive ratio of $O(\ell^2)$, and show that this is best possible. However, for a benchmark with a slightly constrained number of switches between different predictors, we can get a $(1+\epsilon)$-competitive algorithm. Moreover, our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time. An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
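Outside the metrical-task-system setting, the flavor of competing against a benchmark with few switches can be illustrated by the classic fixed-share algorithm of Herbster and Warmuth for prediction with expert advice. This is a standard textbook algorithm, not the algorithm of this paper, and the learning rate and switching prior below are illustrative choices:

```python
import numpy as np

T, N = 200, 2
losses = np.zeros((T, N))
losses[:100, 1] = 1.0        # predictor 0 is perfect in the first half
losses[100:, 0] = 1.0        # predictor 1 is perfect in the second half

eta, alpha = 2.0, 0.05       # learning rate and switching prior (illustrative)
w = np.ones(N) / N
alg_loss = 0.0
for t in range(T):
    alg_loss += w @ losses[t]            # incur the mixture loss
    w = w * np.exp(-eta * losses[t])     # exponential-weights update
    w /= w.sum()
    w = (1 - alpha) * w + alpha / N      # fixed-share mixing enables fast switching

static_best = losses.sum(axis=0).min()   # best single predictor in hindsight
print(round(alg_loss, 1), static_best)
```

The best static predictor incurs loss 100, while the fixed-share mixture quickly re-weights after the switch and stays close to the dynamic benchmark, which incurs loss 0.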
We investigate the performance of two approximation algorithms for the Hafnian of a nonnegative square matrix, namely the Barvinok and Godsil-Gutman estimators. We observe that, while there are examples of matrices for which these algorithms fail to provide a good approximation, the algorithms perform surprisingly well for adjacency matrices of random graphs. In most cases, the Godsil-Gutman estimator provides far superior accuracy. For dense graphs, however, both estimators demonstrate slow growth of the variance. For complete graphs, we show analytically that the relative variance $\sigma / \mu$ grows as the square root of the size of the graph. Finally, we simulate a Gaussian Boson Sampling experiment using the Godsil-Gutman estimator and show that the technique used can successfully reproduce low-order correlation functions.
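For a symmetric nonnegative matrix $A$ of even size, the Godsil-Gutman construction draws a skew-symmetric matrix $W$ with $W_{ij} = \epsilon_{ij}\sqrt{A_{ij}}$ for $i < j$ and independent signs $\epsilon_{ij} = \pm 1$; then $\det(W) = \mathrm{pf}(W)^2$ is an unbiased estimator of $\mathrm{haf}(A)$. A minimal sketch of this estimator, our own implementation checked against a brute-force hafnian on the complete graph $K_6$ (which has $5!! = 15$ perfect matchings):

```python
import numpy as np

def hafnian(A):
    # exact hafnian via the matching-sum recursion (exponential time; tiny matrices only)
    m = A.shape[0]
    if m == 0:
        return 1.0
    total = 0.0
    rest = list(range(1, m))
    for j in rest:
        sub = [i for i in rest if i != j]
        total += A[0, j] * hafnian(A[np.ix_(sub, sub)])
    return total

def godsil_gutman_hafnian(A, samples, rng):
    # W skew-symmetric with W_ij = eps_ij * sqrt(A_ij) for i < j, eps_ij = +-1;
    # det(W) = pf(W)^2 has expectation haf(A)
    m = A.shape[0]
    est = np.empty(samples)
    for s in range(samples):
        eps = rng.choice([-1.0, 1.0], size=(m, m))
        W = np.triu(eps * np.sqrt(A), k=1)
        W = W - W.T
        est[s] = np.linalg.det(W)
    return est.mean()

rng = np.random.default_rng(1)
A = np.ones((6, 6))          # adjacency matrix of the complete graph K6
exact = hafnian(A)           # 15 perfect matchings
approx = godsil_gutman_hafnian(A, samples=50_000, rng=rng)
print(exact, round(approx, 2))
```

Averaging determinants of many random sign assignments recovers the hafnian up to Monte Carlo error, whose growth with graph size is exactly what the variance analysis above quantifies.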
With the increasing demand for intelligent systems capable of operating in different contexts (e.g., users on the move), the correct interpretation of the user's need by such systems has become crucial for giving consistent answers to user questions. The most effective applications addressing this task come from the fields of natural language processing and semantic expansion of terms. These techniques aim to estimate the goal of an input query by reformulating it as an intent, commonly relying on textual resources built by exploiting semantic relations such as \emph{synonymy}, \emph{antonymy}, and others. The aim of this paper is to generate such resources using the labels of a given taxonomy as the source of information. The obtained resources are integrated into a plain classifier that reformulates a set of input queries as intents; the effect of each relation is tracked in order to quantify its impact on the classification. As an extension, we evaluate the best tradeoff between improvement and noise introduction when combining such relations. The assessment is made by generating the resources and their combinations and using them to tune the classifier, which reformulates user questions as labels. The evaluation employs a wide and varied taxonomy as a use case, exploiting its labels as the basis for the semantic expansion and producing several corpora with the purpose of enhancing the pseudo-query estimation.
The field of adversarial textual attack has grown significantly over the last few years, where the commonly considered objective is to craft adversarial examples (AEs) that can successfully fool the target model. However, the imperceptibility of attacks, which is also essential for practical attackers, is often left out by previous studies. In consequence, the crafted AEs tend to have obvious structural and semantic differences from the original human-written text, making them easily perceptible. In this work, we advocate leveraging multi-objectivization to address this issue. Specifically, we reformulate the problem of crafting AEs as a multi-objective optimization problem, where attack imperceptibility is considered as an auxiliary objective. Then, we propose a simple yet effective evolutionary algorithm, dubbed HydraText, to solve this problem. To the best of our knowledge, HydraText is currently the only approach that can be effectively applied to both score-based and decision-based attack settings. Extensive experiments involving 44237 instances demonstrate that HydraText consistently achieves competitive attack success rates and better attack imperceptibility than recently proposed attack approaches. A human evaluation study also shows that the AEs crafted by HydraText are harder to distinguish from human-written text. Finally, these AEs exhibit good transferability and can bring notable robustness improvement to the target model through adversarial training.
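Multi-objectivization here means maintaining candidate AEs that trade off attack strength against imperceptibility rather than scalarizing them into a single score. A generic building block of such evolutionary algorithms is non-dominated (Pareto-front) filtering. The sketch below is a standard routine over hypothetical candidates, each scored by two objectives to minimize (residual model confidence, fraction of words modified); it is not HydraText itself:

```python
def dominates(a, b):
    # a dominates b if a is no worse in every objective and strictly better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # keep candidates not dominated by any other candidate
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical candidates: (residual target confidence, fraction of words modified)
candidates = [(0.2, 0.30), (0.1, 0.40), (0.3, 0.10), (0.2, 0.25), (0.4, 0.50)]
front = pareto_front(candidates)
print(front)
```

The surviving front contains AEs that are either stronger attacks or less perceptible than every discarded candidate, which is what lets an evolutionary loop improve both objectives simultaneously.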
We study estimation of an $s$-sparse signal in the $p$-dimensional Gaussian sequence model with equicorrelated observations and derive the minimax rate. A new phenomenon emerges from correlation, namely the rate scales with respect to $p-2s$ and exhibits a phase transition at $p-2s \asymp \sqrt{p}$. Correlation is shown to be a blessing provided it is sufficiently strong, and the critical correlation level exhibits a delicate dependence on the sparsity level. Due to correlation, the minimax rate is driven by two subproblems: estimation of a linear functional (the average of the signal) and estimation of the signal's $(p-1)$-dimensional projection onto the orthogonal subspace. The high-dimensional projection is estimated via sparse regression, and the linear functional is cast as a robust location estimation problem. Existing robust estimators turn out to be suboptimal, and we show that a kernel mode estimator with a widening bandwidth exploits the Gaussian character of the data to achieve the optimal estimation rate.
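The intuition for the robust-location step can be sketched numerically: the location parameter is observed through coordinates contaminated by a few large sparse entries, and the mode of a kernel density estimate ignores them while the sample mean does not. The toy below is our own illustration, using i.i.d. rather than equicorrelated noise and a fixed bandwidth rather than the widening-bandwidth schedule analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p, s, mu = 2000, 50, 1.0

# location mu observed through p coordinates, s of which carry a large sparse signal
y = mu + rng.normal(size=p)
y[:s] += 10.0

def kde_mode(data, h):
    # mode of a Gaussian kernel density estimate with bandwidth h, via a grid search
    grid = np.linspace(data.min(), data.max(), 2001)
    dens = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / h) ** 2).sum(axis=1)
    return grid[np.argmax(dens)]

mode_est = kde_mode(y, h=0.5)
mean_est = y.mean()          # biased by the sparse spikes: roughly mu + 10 * s / p
print(round(mode_est, 2), round(mean_est, 2))
```

The mode estimator stays close to $\mu$ while the mean is shifted by the sparse component, mirroring why the linear functional is treated as a robust location problem.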
We use Stein characterisations to derive new moment-type estimators for the parameters of several multivariate distributions in the i.i.d. case; we also derive the asymptotic properties of these estimators. Our examples include the multivariate truncated normal distribution and several spherical distributions. The estimators are explicit and therefore provide an interesting alternative to the maximum-likelihood estimator. The quality of these estimators is assessed through comparative simulation studies in which we compare their behaviour to the performance of other estimators available in the literature.
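The mechanics can be seen in the simplest univariate case (our own toy example, not one of the multivariate distributions treated in the paper). The Stein identity for $N(\mu, \sigma^2)$ states $E[(X - \mu)g(X)] = \sigma^2 E[g'(X)]$; choosing $g(x) = x$ gives $E[X^2] - \mu E[X] = \sigma^2$, hence explicit moment-type estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 1.5
x = rng.normal(mu, sigma, size=100_000)

# Stein identity with g(x) = x:  E[X^2] - mu * E[X] = sigma^2.
# Solving the empirical version gives explicit estimators:
mu_hat = x.mean()
sigma2_hat = np.mean(x * x) - mu_hat ** 2   # empirical E[X^2] - mu * E[X]

print(round(mu_hat, 2), round(sigma2_hat, 2))
```

Both estimators are closed-form functions of sample moments, which is exactly the appeal over likelihood maximization when the density (e.g., of a truncated normal) has an intractable normalizing constant.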
Inverse problems related to Maxwell's equations in the presence of nonlinear materials are a quite new topic in the literature. The lack of contributions in this area can be ascribed to the significant challenges that such problems pose. Retrieving the spatial behaviour of some unknown physical property from boundary measurements is a nonlinear and highly ill-posed problem even in the presence of linear materials, and this complexity grows exponentially in the presence of nonlinear materials. In the tomography of linear materials, the Monotonicity Principle (MP) is the foundation of a class of non-iterative algorithms able to guarantee excellent performance and compatibility with real-time applications. Recently, the MP has been extended to nonlinear materials under very general assumptions. Starting from the theoretical background of this extension, we develop a first real-time inversion method for the inverse obstacle problem in the presence of nonlinear materials. The proposed method is intended for all problems governed by the quasilinear Laplace equation, i.e., static problems involving nonlinear materials. In this paper, we provide some preliminary results which lay the foundation of our method, together with some extended numerical examples.
Graph-centric artificial intelligence (graph AI) has achieved remarkable success in modeling interacting systems prevalent in nature, from dynamical systems in biology to particle physics. The increasing heterogeneity of data calls for graph neural architectures that can combine multiple inductive biases. However, combining data from various sources is challenging because the appropriate inductive bias may vary by data modality. Multimodal learning methods fuse multiple data modalities while leveraging cross-modal dependencies to address this challenge. Here, we survey 140 studies in graph-centric AI and find that diverse data types are increasingly brought together using graphs and fed into sophisticated multimodal models. These models stratify into image-, language-, and knowledge-grounded multimodal learning. Based on this categorization, we put forward an algorithmic blueprint for multimodal graph learning. The blueprint serves as a way to group state-of-the-art architectures that treat multimodal data by appropriately choosing four components. This effort can pave the way for standardizing the design of sophisticated multimodal architectures for highly complex real-world problems.