Most categorical models for dependent types have traditionally been heavily set-based: contexts form a category, and for each context we have a set of types in that context -- and, for each type, a set of terms of that type. This is the case for categories with families, categories with attributes, and natural models; in particular, all of them can be traced back to certain discrete Grothendieck fibrations. We extend this intuition to the case of general, not necessarily discrete, fibrations, so that over a given context one has not only a set but a category of types. We argue that the added structure can be attributed to a notion of subtyping that shares many features with coercive subtyping, in the sense that it is the product of thinking about subtyping as an abbreviation mechanism: we say that a given type $A'$ is a subtype of $A$ if there is a unique coercion from $A'$ to $A$. Whenever we need a term of type $A$, then, it suffices to have a term of type $A'$, which we can `plug in' to $A$. For this version of subtyping we provide rules, coherences, and explicit models, and we compare and contrast it with coercive subtyping as introduced by Z. Luo and others. We conclude by suggesting how the tools we present can be employed in finding appropriate rules relating subtyping to certain type constructors.
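For orientation, a minimal sketch of the coercion rule underlying this reading of subtyping (the notation here is ours, not the paper's): if $A'$ is a subtype of $A$, witnessed by the unique coercion $c_{A' \leq A}$, then any term of $A'$ can be used where a term of $A$ is expected,
\[
\frac{\Gamma \vdash a' : A'}{\Gamma \vdash c_{A' \leq A}(a') : A},
\]
with coherence demanding, for instance, that coercions compose: $c_{A' \leq A} \circ c_{A'' \leq A'} = c_{A'' \leq A}$ whenever $A''$ is a subtype of $A'$ and $A'$ of $A$.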
Language models (LMs) have demonstrated remarkable proficiency in generating linguistically coherent text, sparking discussions about their relevance to understanding human language learnability. However, a significant gap exists between the training data for these models and the linguistic input a child receives: LMs are typically trained on data that is orders of magnitude larger than, and fundamentally different from, child-directed speech (Warstadt and Bowman, 2022; Warstadt et al., 2023; Frank, 2023a). Addressing this discrepancy, our research focuses on training LMs on subsets of a single child's linguistic input. Previously, Wang, Vong, Kim, and Lake (2023) found that LMs trained in this setting can form syntactic and semantic word clusters and develop sensitivity to certain linguistic phenomena, but they considered only LSTMs and simpler neural networks, trained on just one single-child dataset. Here, to examine the robustness of learnability from single-child input, we systematically train six different model architectures on five datasets (3 single-child and 2 baselines). We find that models trained on single-child datasets show consistent results matching previous work, underscoring the robustness of forming meaningful syntactic and semantic representations from a subset of a child's linguistic input.
We present a new framework for modelling multivariate extremes, based on an angular-radial representation of the probability density function. Under this representation, the problem of modelling multivariate extremes is transformed into that of modelling an angular density and the tail of the radial variable, conditional on angle. Motivated by univariate theory, we assume that the tail of the conditional radial distribution converges to a generalised Pareto (GP) distribution. To simplify inference, we also assume that the angular density is continuous and finite, and that the GP parameter functions are continuous with angle. We refer to the resulting model as the semi-parametric angular-radial (SPAR) model for multivariate extremes. We consider the effect of the choice of polar coordinate system, and introduce generalised concepts of angular-radial coordinate systems and generalised scalar angles in two dimensions. We show that under certain conditions, the choice of polar coordinate system does not affect the validity of the SPAR assumptions. However, some choices of coordinate system lead to simpler representations. In contrast, we show that the choice of margin does affect whether the model assumptions are satisfied. In particular, the use of Laplace margins results in a form of the density function for which the SPAR assumptions are satisfied for many common families of copula, across various dependence classes. We show that the SPAR model provides a more versatile framework for characterising multivariate extremes than existing approaches, and that several commonly used approaches are special cases of the SPAR model.
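Schematically, and in notation chosen here rather than taken from the paper: writing $X = RW$ with radius $R$ and angle $W$, the density factorises as $f_{R,W}(r, w) = f_W(w)\, f_{R \mid W}(r \mid w)$, and the SPAR assumption is that the conditional radial tail is GP above a high, angle-dependent threshold $u(w)$,
\[
\Pr\big(R > u(w) + r \mid R > u(w),\, W = w\big) \approx \Big(1 + \xi(w)\, \frac{r}{\sigma(w)}\Big)_+^{-1/\xi(w)},
\]
with shape $\xi(w)$ and scale $\sigma(w)$ continuous in $w$, while the angular density $f_W$ is only required to be continuous and finite.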
Consistency models, proposed to mitigate the high computational overhead of the sampling phase of diffusion models, facilitate single-step sampling while attaining state-of-the-art empirical performance. When integrated into the training phase, consistency models attempt to train a sequence of consistency functions capable of mapping any point at any time step of the diffusion process back to the trajectory's starting point. Despite this empirical success, a comprehensive theoretical understanding of consistency training remains elusive. This paper takes a first step towards establishing theoretical underpinnings for consistency models. We demonstrate that, in order to generate samples within $\varepsilon$ proximity to the target in distribution (measured by some Wasserstein metric), it suffices for the number of steps in consistency learning to exceed the order of $d^{5/2}/\varepsilon$, with $d$ the data dimension. Our theory offers rigorous insights into the validity and efficacy of consistency models, illuminating their utility in downstream inference tasks.
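For reference, the defining property in the consistency-model literature (standard notation, not specific to this paper's analysis): along a diffusion trajectory $\{x_t\}_{t \in [\epsilon, T]}$, a consistency function $f_\theta$ should satisfy the boundary condition and self-consistency
\[
f_\theta(x_\epsilon, \epsilon) = x_\epsilon, \qquad f_\theta(x_t, t) = f_\theta(x_s, s) \quad \text{for all } s, t \in [\epsilon, T],
\]
so that a single evaluation $f_\theta(x_T, T)$ maps noise back to (an approximation of) the trajectory's starting point; the result above then bounds how many discretisation steps this training objective needs for $\varepsilon$-accurate sampling.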
The evaluation of text-generative vision-language models is a challenging yet crucial endeavor. By addressing the limitations of existing Visual Question Answering (VQA) benchmarks and proposing innovative evaluation methodologies, our research seeks to advance our understanding of these models' capabilities. We propose a novel VQA benchmark based on well-known visual classification datasets that allows a granular evaluation of text-generative vision-language models and their comparison with discriminative vision-language models. To improve the assessment of coarse answers on fine-grained classification tasks, we suggest using the semantic hierarchy of the label space to ask automatically generated follow-up questions about the ground-truth category. Finally, we compare traditional NLP and LLM-based metrics for the problem of evaluating model predictions given ground-truth answers, and we perform a human evaluation study on which we base our choice of the final metric. We apply our benchmark to a suite of vision-language models and show a detailed comparison of their abilities on object, action, and attribute classification. Our contributions aim to lay the foundation for more precise and meaningful assessments, facilitating targeted progress in the exciting field of vision-language modeling.
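To make the follow-up-question mechanism concrete, here is a minimal Python sketch; the toy hierarchy, labels, and question template are illustrative assumptions, not the benchmark's actual implementation:

# Illustrative sketch: when a model gives a coarse but correct answer,
# descend a (hypothetical) label hierarchy and ask a finer-grained
# follow-up question about the ground-truth category.
HIERARCHY = {
    "animal": ["dog", "cat"],           # assumed toy label hierarchy
    "dog": ["labrador", "beagle"],
}

def follow_up_question(answer, ground_truth, hierarchy=HIERARCHY):
    """Return a follow-up question if `answer` is an ancestor of the truth."""
    frontier = hierarchy.get(answer, [])
    while frontier:
        if ground_truth in frontier:
            return f"Which kind of {answer} is it: {', '.join(frontier)}?"
        # Move one level down the hierarchy.
        frontier = [c for p in frontier for c in hierarchy.get(p, [])]
    return None  # `answer` is not an ancestor of the ground truth

print(follow_up_question("animal", "dog"))
# -> Which kind of animal is it: dog, cat?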
This paper addresses the multiple two-sample test problem in a graph-structured setting, a common scenario in fields such as spatial statistics and neuroscience. Each node $v$ in a fixed graph deals with a two-sample testing problem between two node-specific probability density functions (pdfs), $p_v$ and $q_v$. The goal is to identify nodes where the null hypothesis $p_v = q_v$ should be rejected, under the assumption that connected nodes yield similar test outcomes. We propose the non-parametric collaborative two-sample testing (CTST) framework, which efficiently leverages the graph structure and minimizes the assumptions on $p_v$ and $q_v$. Our methodology integrates elements from f-divergence estimation, kernel methods, and multitask learning. We use synthetic experiments and a real sensor network detecting seismic activity to demonstrate that CTST outperforms state-of-the-art non-parametric statistical tests that apply at each node independently and hence disregard the geometry of the problem.
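As an illustration of the kind of collaboration the graph enables (a toy sketch under our own assumptions, not the paper's CTST estimator): compute a raw kernel two-sample statistic at each node, then shrink the statistics towards those of neighbouring nodes via a graph-Laplacian penalty.

# Toy sketch: per-node MMD statistics smoothed over the graph, so that
# connected nodes share information about where p_v != q_v.
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD with a Gaussian kernel."""
    def k(a, b):
        d2 = (a[:, None] - b[None, :]) ** 2
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def smoothed_scores(samples_p, samples_q, laplacian, lam=1.0):
    """Shrink raw per-node statistics t over the graph: solve (I + lam*L) s = t."""
    t = np.array([mmd2(x, y) for x, y in zip(samples_p, samples_q)])
    return np.linalg.solve(np.eye(len(t)) + lam * laplacian, t)

# 3-node path graph: only node 2 has a shifted distribution.
rng = np.random.default_rng(0)
P = [rng.normal(0.0, 1.0, 50) for _ in range(3)]
Q = [rng.normal(0.0, 1.0, 50), rng.normal(0.0, 1.0, 50), rng.normal(2.0, 1.0, 50)]
L = np.array([[1.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]])
print(smoothed_scores(P, Q, L))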
The paper is concerned with inference for a parameter of interest in models that share a common interpretation for that parameter, but that may differ appreciably in other respects. We study the general structure of models under which the maximum likelihood estimator of the parameter of interest is consistent under arbitrary misspecification of the nuisance part of the model. A specialization of the general results to matched-comparison and two-groups problems gives a more explicit condition in terms of a new notion of symmetric parametrization, leading to an appreciable broadening and unification of existing results in those problems. The role of a generalized definition of parameter orthogonality is highlighted, as well as connections to Neyman orthogonality. The issues involved in obtaining inferential guarantees beyond consistency are discussed.
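A textbook matched-comparison instance of the kind the specialization covers (our illustration, not an example taken from the paper): pairs $Y_{i1} \sim N(\lambda_i, \sigma^2)$ and $Y_{i2} \sim N(\lambda_i + \psi, \sigma^2)$, with interest parameter $\psi$ and pair-specific nuisance parameters $\lambda_i$. The within-pair differences
\[
D_i = Y_{i2} - Y_{i1} \sim N(\psi, 2\sigma^2)
\]
are free of the $\lambda_i$, so inference on $\psi$ is insulated from how the nuisance part is specified; the paper's notion of symmetric parametrization makes conditions of this flavour precise in general.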
The logistic regression model is one of the most popular data generation models for noisy binary classification problems. In this work, we study the sample complexity of estimating the parameters of the logistic regression model up to a given $\ell_2$ error, in terms of the dimension and the inverse temperature, with standard normal covariates. The inverse temperature controls the signal-to-noise ratio of the data generation process. While both generalization bounds and the asymptotic performance of the maximum-likelihood estimator for logistic regression are well studied, the non-asymptotic sample complexity that shows the dependence on the error and the inverse temperature for parameter estimation is absent from previous analyses. We show that the sample complexity curve has two change-points in terms of the inverse temperature, clearly separating the low, moderate, and high temperature regimes.
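Concretely, a standard formulation consistent with the abstract (notation ours): covariates $x \sim N(0, I_d)$ and labels $y \in \{\pm 1\}$ drawn according to
\[
\Pr(y = 1 \mid x) = \frac{1}{1 + \exp(-\beta \langle \theta^\ast, x \rangle)},
\]
where $\beta > 0$ is the inverse temperature. Small $\beta$ (high temperature) makes labels nearly random, while large $\beta$ (low temperature) makes them nearly deterministic, which is why the difficulty of estimating $\theta^\ast$ changes qualitatively across temperature regimes.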
The aim of this paper is to develop a numerical scheme to approximate evolving interface problems for parabolic equations, based on the abstract evolving finite element framework proposed by C. M. Elliott and T. Ranner (IMA J. Numer. Anal., 41(3), 2021, doi:10.1093/imanum/draa062). An appropriate weak formulation of the problem is derived for the use of evolving finite elements designed to accommodate a moving interface. Optimal-order error bounds are proved for arbitrary-order evolving isoparametric finite elements. The paper concludes with numerical results for a model problem verifying the orders of convergence.
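For orientation, the prototypical weak formulation in this evolving-domain setting, here for a heat-type equation on a domain $\Omega(t)$ transported with velocity $w$ (a schematic form in the Elliott--Ranner style; the paper's interface formulation refines it): find $u(t)$ such that
\[
\frac{\mathrm{d}}{\mathrm{d}t} \int_{\Omega(t)} u \varphi \,\mathrm{d}x + \int_{\Omega(t)} \nabla u \cdot \nabla \varphi \,\mathrm{d}x = \int_{\Omega(t)} u\, \partial^\bullet \varphi \,\mathrm{d}x
\]
for all test functions $\varphi$ transported with the domain, where $\partial^\bullet = \partial_t + w \cdot \nabla$ denotes the material derivative.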
Multi-fidelity models provide a framework for integrating computational models of varying complexity, allowing for accurate predictions while optimizing computational resources. These models are especially beneficial when acquiring high-accuracy data is costly or computationally intensive. This review offers a comprehensive analysis of multi-fidelity models, focusing on their applications in scientific and engineering fields, particularly in optimization and uncertainty quantification. It classifies publications on multi-fidelity modeling according to several criteria, including application area, surrogate model selection, types of fidelity, combination methods, and year of publication. The study investigates techniques for combining different fidelity levels, with an emphasis on multi-fidelity surrogate models. This work discusses reproducibility, open-sourcing methodologies, and benchmarking procedures to promote transparency. The manuscript also includes educational toy problems to enhance understanding. Additionally, this paper outlines best practices for presenting multi-fidelity-related savings in a standardized, succinct, yet thorough manner. The review concludes by examining current trends in multi-fidelity modeling, including emerging techniques, recent advancements, and promising research directions.
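As one concrete example of a combination method covered by such classifications, the widely used linear autoregressive correction of Kennedy and O'Hagan expresses the high-fidelity response as
\[
f_{\mathrm{high}}(x) \approx \rho\, f_{\mathrm{low}}(x) + \delta(x),
\]
where $\rho$ is a scaling factor and $\delta$ a discrepancy term, each typically modelled with a surrogate such as a Gaussian process; this is shown here for orientation only, and the review surveys many alternatives.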
We formulate a uniform tail bound for empirical processes indexed by a class of functions, in terms of the individual deviations of the functions rather than the worst-case deviation over the class. The tail bound is established by introducing an initial "deflation" step into the standard generic chaining argument. The resulting tail bound is the sum of the complexity of the "deflated function class", measured by a generalization of Talagrand's $\gamma$ functional, and the deviation of the function instance, both formulated in terms of the natural seminorm induced by the corresponding Cram\'{e}r functions. We also provide approximations of this seminorm when the function class lies in a given (exponential-type) Orlicz space, which can be used to make the complexity term and the deviation term more explicit.
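For orientation, the classical functional that the paper's complexity measure generalizes: for a metric space $(T, d)$,
\[
\gamma_\alpha(T, d) = \inf \sup_{t \in T} \sum_{n \geq 0} 2^{n/\alpha}\, d(t, T_n),
\]
where the infimum is over admissible sequences $(T_n)_{n \geq 0}$ of subsets of $T$ with $|T_0| = 1$ and $|T_n| \leq 2^{2^n}$; in the setting above, the metric is replaced by the seminorm induced by the Cram\'{e}r functions.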