In the regression framework, the empirical measure of the responses associated with the covariate nearest neighbors of a given point $x$ is introduced and studied as a central statistical quantity. First, the associated empirical process is shown to satisfy a uniform central limit theorem under a local bracketing entropy condition on the underlying class of functions, reflecting the localizing nature of the nearest neighbor algorithm. Second, a uniform non-asymptotic bound is established under a well-known condition on the uniform entropy numbers, often referred to as the Vapnik-Chervonenkis condition. The covariance of the Gaussian limit obtained in the uniform central limit theorem is simply the conditional covariance operator given the covariate value. This suggests that standard formulas can be used to estimate the variance from the nearest neighbors alone rather than from the full data. This is illustrated on two problems: estimation of the conditional cumulative distribution function and local linear regression.
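For concreteness, here is a minimal sketch of the conditional CDF estimator built from this nearest-neighbor empirical measure, assuming toy synthetic data and an arbitrary choice of $k$; it is an illustration, not the paper's implementation:

```python
# The conditional CDF at a query point x is estimated by the empirical CDF
# of the responses attached to the k nearest covariates. Data, k, and the
# query point below are illustrative assumptions.
import numpy as np

def knn_conditional_cdf(X, Y, x, k, t):
    """Estimate P(Y <= t | X = x) from the k nearest neighbors of x."""
    dist = np.linalg.norm(X - x, axis=1)   # distances to the query point
    idx = np.argsort(dist)[:k]             # indices of the k nearest covariates
    return np.mean(Y[idx] <= t)            # empirical measure of their responses

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
Y = X[:, 0] + 0.1 * rng.standard_normal(500)   # toy regression model
print(knn_conditional_cdf(X, Y, x=np.zeros(2), k=25, t=0.0))
```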
Large Language Models (LLMs), benefiting from auto-regressive modelling on massive corpora of unannotated text, demonstrate powerful perceptual and reasoning capabilities. However, extending auto-regressive modelling to multi-modal scenarios to build Large Multi-modal Models (LMMs) faces a major difficulty: image information is processed in the LMM as continuous visual embeddings, for which no discrete supervised labels are available for classification. In this paper, we successfully perform multi-modal auto-regressive modelling with a unified objective for the first time. Specifically, we propose the concept of visual words, which maps visual features to probability distributions over the LLM's vocabulary, providing supervision information for visual modelling. We further explore the distribution of visual features in the semantic space within the LMM and the possibility of using text embeddings to represent visual information. Experimental results and ablation studies on 5 VQA tasks and 4 benchmark toolkits validate the strong performance of our proposed approach.
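A minimal sketch of the visual-words mapping, assuming toy dimensions (real LLMs have vocabularies of tens of thousands of tokens and much larger hidden sizes) and a simple linear projector; the paper's actual architecture may differ:

```python
# A visual feature is mapped to a probability distribution over the LLM
# vocabulary via similarity to the token-embedding table. All sizes and the
# projector are illustrative assumptions.
import torch

vocab_size, d_model, d_visual = 1000, 256, 128
token_embeddings = torch.randn(vocab_size, d_model)    # stand-in for the LLM's embedding table
visual_projector = torch.nn.Linear(d_visual, d_model)  # maps visual features into the LLM space

visual_feature = torch.randn(1, d_visual)              # e.g., one patch embedding from a vision encoder
logits = visual_projector(visual_feature) @ token_embeddings.T
visual_words = torch.softmax(logits, dim=-1)           # distribution over the vocabulary
# visual_words can then serve as a soft supervision target for auto-regressive modelling.
```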
In the realm of personalization, integrating diverse information sources such as consumption signals and content-based representations is becoming increasingly critical to building state-of-the-art solutions. In this regard, two of the biggest trends in research on this subject are Graph Neural Networks (GNNs) and Foundation Models (FMs). While GNNs emerged as a popular solution in industry for powering personalization at scale, FMs have only recently caught attention for their promising performance in personalization tasks like ranking and retrieval. In this paper, we present a graph-based foundation modeling approach tailored to personalization. Central to this approach is a Heterogeneous GNN (HGNN) designed to capture multi-hop content and consumption relationships across a range of recommendable item types. To ensure the generality required of a Foundation Model, we employ a Large Language Model (LLM) text-based featurization of nodes that accommodates all item types, and construct the graph using co-interaction signals, which inherently transcend content specificity. To facilitate practical generalization, we further couple the HGNN with an adaptation mechanism based on a two-tower (2T) architecture, which also operates agnostically to content type. This multi-stage approach ensures high scalability: while the HGNN produces general-purpose embeddings, the 2T component models the sheer volume of user-item interaction data in a continuous vector space. Our comprehensive approach has been rigorously tested and proven effective in delivering recommendations across a diverse array of products within a real-world, industrial audio streaming platform.
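A hedged sketch of the 2T adaptation stage on top of frozen HGNN embeddings; tower depths, dimensions, and the mean-pooled user representation are illustrative assumptions rather than the production configuration:

```python
# Frozen HGNN item embeddings feed an item tower, a user's interaction
# history feeds a user tower, and a dot product scores user-item affinity.
import torch
import torch.nn as nn

class TwoTower(nn.Module):
    def __init__(self, d_hgnn=128, d_out=64):
        super().__init__()
        self.item_tower = nn.Sequential(nn.Linear(d_hgnn, d_out), nn.ReLU(), nn.Linear(d_out, d_out))
        self.user_tower = nn.Sequential(nn.Linear(d_hgnn, d_out), nn.ReLU(), nn.Linear(d_out, d_out))

    def forward(self, user_history, item_embedding):
        # A user is represented by mean-pooling the HGNN embeddings of consumed items.
        user_vec = self.user_tower(user_history.mean(dim=1))
        item_vec = self.item_tower(item_embedding)
        return (user_vec * item_vec).sum(dim=-1)   # affinity score

model = TwoTower()
history = torch.randn(4, 20, 128)   # batch of 4 users, 20 consumed items each
items = torch.randn(4, 128)         # candidate items (frozen HGNN embeddings)
scores = model(history, items)
```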
In the present study, we investigate the efficiency of preconditioners for solving the linear systems associated with the discretized variable-density incompressible Navier-Stokes equations, with semi-implicit second-order accuracy in time and spectral accuracy in space. The method, in which the inverse operator of the constant-density flow system acts as the preconditioner, is implemented for three iterative solvers: the Generalized Minimal Residual, the Conjugate Gradient, and the Richardson Minimal Residual methods. We first discuss the method in the context of the one-dimensional flow case, where a top-hat-like profile is used for the density. Numerical evidence shows that convergence is significantly improved owing to the notable decrease in the condition number of the operators. Most importantly, we then validate the robustness and convergence properties of the method on two more realistic problems: the two-dimensional Rayleigh-Taylor instability and the three-dimensional variable-density swirling jet.
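A one-dimensional finite-difference analogue of this preconditioning strategy; the stencil, the smoothed top-hat density profile, and all solver settings below are illustrative assumptions standing in for the paper's spectral setup:

```python
# A variable-density operator is solved with GMRES, preconditioned by the
# factored inverse of its constant-density counterpart.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, splu

n = 200
x = np.linspace(0.0, 1.0, n)
rho = 1.0 + 0.5 * np.tanh(20.0 * (x - 0.5))            # smoothed top-hat-like density profile
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * n**2
A = diags(1.0 / rho) @ lap                             # variable-density operator
A0 = (1.0 / rho.mean()) * lap                          # constant-density operator

lu = splu(A0.tocsc())                                  # factor the preconditioner once
M = LinearOperator((n, n), matvec=lu.solve)            # M acts as the inverse of A0

b = np.sin(np.pi * x)
sol, info = gmres(A, b, M=M)                           # info == 0 signals convergence
```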
We consider bounded linear operators acting in Banach spaces with a basis; such operators can be represented by infinite matrices. We prove that for an invertible operator there exists a sequence of invertible finite-dimensional operators such that the family of norms of their inverses is uniformly bounded. This implies that the solutions of the finite-dimensional equations converge to the solution of the initial operator equation with an infinite-dimensional matrix.
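A toy numerical illustration of this finite-section idea, with a diagonally dominant operator chosen (as an assumption) so that all truncations are invertible:

```python
# Truncate an "infinite" matrix to its leading n x n block, solve, and watch
# the truncated solutions approach the reference solution.
import numpy as np

N = 400                                          # proxy for the infinite dimension
i, j = np.indices((N, N))
A = 3.0 * np.eye(N) + 0.1 / (1.0 + (i - j) ** 2) # bounded, diagonally dominant operator
b = 1.0 / (1.0 + np.arange(N)) ** 2              # decaying right-hand side
x_full = np.linalg.solve(A, b)                   # reference "infinite" solution

for n in (25, 50, 100, 200):
    x_n = np.linalg.solve(A[:n, :n], b[:n])      # finite-dimensional section
    print(f"n={n:4d}  error={np.linalg.norm(x_n - x_full[:n]):.2e}")
```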
In 1934, the American statistician Samuel S. Wilks derived remarkable formulas for the joint moments of embedded principal minors of sample covariance matrices in multivariate normal populations, and he used them to compute the moments of sample statistics in various applications related to multivariate linear regression. These important but little-known moment results were extended in 1963 by the Australian statistician A. Graham Constantine using Bartlett's decomposition. In this note, a new proof of Wilks' results is derived using the concept of iterated Schur complements, thereby bypassing Bartlett's decomposition. Furthermore, Wilks' open problem of evaluating joint moments of disjoint principal minors of Wishart random matrices is related to the Gaussian product inequality conjecture.
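For reference, the standard Schur complement identity on which such an iterated argument rests (a well-known linear-algebra fact, stated here for orientation rather than taken from the note itself): for a partitioned positive-definite matrix
$$\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}, \qquad \Sigma/\Sigma_{11} := \Sigma_{22} - \Sigma_{21}\,\Sigma_{11}^{-1}\,\Sigma_{12},$$
one has $\det \Sigma = \det(\Sigma_{11})\,\det(\Sigma/\Sigma_{11})$, so determinants of embedded principal minors factor through iterated Schur complements.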
For the ground state of the Gross-Pitaevskii (GP) eigenvalue problem, we consider a fully discretized Sobolev gradient flow, which can be regarded as a Riemannian gradient descent on the sphere under a metric induced by a modified $H^1$-norm. We prove its global convergence to a critical point of the discrete GP energy and its local exponential convergence to the ground state of the discrete GP energy. The local exponential convergence rate depends on the eigengap of the discrete GP energy. When the discretization is the classical second-order finite difference method in two dimensions, this eigengap can further be proven to be mesh-independent, i.e., it has a uniform positive lower bound; thus the local exponential convergence rate is mesh-independent. Numerical experiments with discretization by high-order $Q^k$ spectral element methods in two and three dimensions are provided to validate the efficiency of the proposed method.
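A schematic sketch of such a preconditioned Riemannian gradient descent on the sphere, assuming a 1D finite-difference Laplacian, an illustrative interaction strength $\beta$, and a fixed step size; the paper's discretizations and metric may differ in detail:

```python
# Gradient descent on the unit sphere for a discretized GP-type energy
# E(u) = (1/2) u^T A u + (beta/4) * sum(u_i^4), preconditioned by a modified
# H^1 operator P. All parameters below are illustrative assumptions.
import numpy as np

n, beta, tau = 128, 10.0, 0.1
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2      # discrete -Laplacian, Dirichlet BCs
P = A + np.eye(n)                               # operator inducing the modified H^1 metric

u = np.random.default_rng(1).standard_normal(n)
u /= np.linalg.norm(u)                          # start on the unit sphere
for _ in range(500):
    g = A @ u + beta * u**3                     # Euclidean gradient of E
    d = np.linalg.solve(P, g)                   # Sobolev (preconditioned) gradient
    d -= (u @ d) * u                            # project onto the tangent space (u has unit norm)
    u -= tau * d
    u /= np.linalg.norm(u)                      # retract back onto the sphere
lam = u @ (A @ u) + beta * np.sum(u**4)         # eigenvalue estimate at the computed state
```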
We study a generalization of the well-known disjoint paths problem which we call the metric Menger problem, denoted MM(r,k), where one is given two subsets of a graph and must decide whether they can be connected by $k$ paths of pairwise distance at least $r$. We prove that this problem is NP-complete for every $r\geq 3$ and $k\geq 2$ by giving a reduction from 3SAT. This resolves a conjecture recently stated by Georgakopoulos and Papasoglu. On the other hand, we show that the problem is in XP when parameterised by treewidth and maximum degree by observing that it is `locally checkable'. In the case $r\leq 3$, we prove that it suffices to parameterise by treewidth. We also state some open questions relating to this work.
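A brute-force checker for the distance condition on small instances, assuming (as one natural reading) that the distance between two paths is the minimum graph distance over all vertex pairs; networkx and the grid example are illustrative assumptions:

```python
# Verify that k given paths are pairwise at graph distance >= r.
import itertools
import networkx as nx

def paths_far_apart(G, paths, r):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for p, q in itertools.combinations(paths, 2):
        if min(dist[u][v] for u in p for v in q) < r:
            return False
    return True

G = nx.grid_2d_graph(5, 5)
p1 = [(0, j) for j in range(5)]            # path along the bottom row
p2 = [(4, j) for j in range(5)]            # path along the top row
print(paths_far_apart(G, [p1, p2], r=3))   # rows 0 and 4 are at distance 4 >= 3
```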
To date, most methods for simulating conditioned diffusions are limited to the Euclidean setting. The conditioned process can be constructed using a change of measure known as Doob's $h$-transform. The specific type of conditioning depends on a function $h$ which is typically unknown in closed form. To resolve this, we extend the notion of guided processes to a manifold $M$, where one replaces $h$ by a function based on the heat kernel on $M$. We consider the case of a Brownian motion with drift, constructed using the frame bundle of $M$, conditioned to hit a point $x_T$ at time $T$. We prove equivalence of the laws of the conditioned process and the guided process with a tractable Radon-Nikodym derivative. Subsequently, we show how one can obtain guided processes on any manifold $N$ that is diffeomorphic to $M$ without assuming knowledge of the heat kernel on $N$. We illustrate our results with numerical simulations and an example of parameter estimation where a diffusion process on the torus is observed discretely in time.
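As a Euclidean analogue of this construction, conditioning a Brownian motion to hit $x_T$ via Doob's $h$-transform with the Gaussian heat kernel yields the familiar bridge drift $(x_T - x)/(T - t)$; the sketch below simulates it with Euler-Maruyama (all numeric values are assumptions, and the manifold case replaces the Gaussian kernel by the heat kernel of $M$):

```python
# Guided (bridge) process in one Euclidean dimension: the guiding drift is
# the gradient of the log of the Gaussian heat kernel centered at x_T.
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 1000
dt = T / n
x, x_T = 0.0, 1.5
path = [x]
for i in range(n - 1):                     # stop one step early to avoid the 1/(T - t) singularity
    t = i * dt
    drift = (x_T - x) / (T - t)            # grad log of the Gaussian heat kernel
    x = x + drift * dt + np.sqrt(dt) * rng.standard_normal()
    path.append(x)
path.append(x_T)                           # the guided process ends at the target point
```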
Reasoning, a crucial ability for complex problem-solving, plays a pivotal role in various real-world settings such as negotiation, medical diagnosis, and criminal investigation. It serves as a fundamental methodology in the field of Artificial General Intelligence (AGI). With the ongoing development of foundation models such as Large Language Models (LLMs), there is a growing interest in exploring their abilities in reasoning tasks. In this paper, we introduce seminal foundation models proposed or adaptable for reasoning, highlighting the latest advancements in various reasoning tasks, methods, and benchmarks. We then delve into potential future research directions concerning the emergence of reasoning abilities within foundation models. We also discuss the relevance of multimodal learning, autonomous agents, and super alignment in the context of reasoning. By discussing these future research directions, we hope to inspire researchers in their exploration of this field, stimulate further advancements in reasoning with foundation models, and contribute to the development of AGI.
We present self-supervised geometric perception (SGP), the first general framework to learn a feature descriptor for correspondence matching without any ground-truth geometric model labels (e.g., camera poses, rigid transformations). Our first contribution is to formulate geometric perception as an optimization problem that jointly optimizes the feature descriptor and the geometric models given a large corpus of visual measurements (e.g., images, point clouds). Under this optimization formulation, we show that two important streams of research in vision, namely robust model fitting and deep feature learning, correspond to optimizing one block of the unknown variables while fixing the other block. This analysis naturally leads to our second contribution -- the SGP algorithm that performs alternating minimization to solve the joint optimization. SGP iteratively executes two meta-algorithms: a teacher that performs robust model fitting given learned features to generate geometric pseudo-labels, and a student that performs deep feature learning under noisy supervision of the pseudo-labels. As a third contribution, we apply SGP to two perception problems on large-scale real datasets, namely relative camera pose estimation on MegaDepth and point cloud registration on 3DMatch. We demonstrate that SGP achieves state-of-the-art performance that is on-par or superior to the supervised oracles trained using ground-truth labels.
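A schematic of the SGP alternation, with the teacher and student stubbed out by trivial placeholders (labeled as assumptions) so that only the control flow is shown; in the paper these would be robust model fitting, e.g. RANSAC-style, and deep descriptor training:

```python
# Alternating minimization: the teacher fits geometric models with the
# current descriptor to produce pseudo-labels; the student updates the
# descriptor under that noisy supervision. All bodies are placeholders.
def robust_fit(feat_a, feat_b):
    """Teacher stub: robustly fit a geometric model; returns a pseudo-label."""
    return sum(feat_a) - sum(feat_b)           # placeholder for e.g. a relative pose

def train_student(pairs, pseudo_labels, descriptor):
    """Student stub: one round of feature learning under noisy supervision."""
    return descriptor                          # placeholder for a gradient-based update

def sgp(pairs, descriptor, n_rounds=5):
    for _ in range(n_rounds):
        # Teacher step: model fitting with the current, fixed descriptor.
        labels = [robust_fit(descriptor(a), descriptor(b)) for a, b in pairs]
        # Student step: descriptor learning with the pseudo-labels fixed.
        descriptor = train_student(pairs, labels, descriptor)
    return descriptor

pairs = [([1.0, 2.0], [0.5, 1.5])]             # toy "measurement" pairs
descriptor = sgp(pairs, descriptor=lambda x: x)
```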