This note continues and extends the study of Spokoiny (2023a) on estimation for parametric models with possibly large or even infinite parameter dimension. We consider a special class of stochastically linear smooth (SLS) models satisfying three major conditions: the stochastic component of the log-likelihood is linear in the model parameter, while the expected log-likelihood is a smooth and concave function. For the penalized maximum likelihood estimator (pMLE), we establish several finite-sample bounds on its concentration and large deviations, as well as Fisher and Wilks expansions and risk bounds. In all results, the remainder term is given explicitly and can be evaluated in terms of the effective sample size $ n $ and the effective parameter dimension $ \mathbb{p} $, which allows us to identify the so-called \emph{critical parameter dimension}. The results are also dimension- and coordinate-free. Despite their generality, all the presented bounds are nearly sharp, and the classical asymptotic results can be obtained as simple corollaries. Our results indicate that the use of advanced fourth-order expansions allows us to relax the critical dimension condition $ \mathbb{p}^{3} \ll n $ from Spokoiny (2023a) to $ \mathbb{p}^{3/2} \ll n $. Examples for classical models such as logistic regression, log-density and precision matrix estimation illustrate the applicability of the general results.
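For a sense of scale, the gain from relaxing the critical dimension condition can be made concrete (illustrative back-of-the-envelope arithmetic, not taken from the note):

\[
  n = 10^{6}:\qquad
  \mathbb{p}^{3} \ll n \;\Longleftrightarrow\; \mathbb{p} \ll n^{1/3} = 100,
  \qquad
  \mathbb{p}^{3/2} \ll n \;\Longleftrightarrow\; \mathbb{p} \ll n^{2/3} = 10^{4}.
\]

So for the same effective sample size, the fourth-order expansions admit parameter dimensions two orders of magnitude larger.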
We investigate the problem of explainability for machine learning models, focusing on Feature Attribution Methods (FAMs) that evaluate feature importance through perturbation tests. Despite their utility, FAMs struggle to distinguish the contributions of different features when their prediction changes after perturbation are similar. To enhance the discriminative power of FAMs, we introduce Feature Attribution with Necessity and Sufficiency (FANS), which finds a neighborhood of the input such that perturbing samples within this neighborhood has a high Probability of being a Necessity and Sufficiency (PNS) cause of the change in predictions, and uses this PNS as the importance of the feature. Specifically, FANS computes this PNS via a heuristic strategy for estimating the neighborhood and a two-stage (factual and interventional) perturbation test for counterfactual reasoning. To generate counterfactual samples, we use a resampling-based approach on the observed samples to approximate the required conditional distribution. We demonstrate that FANS outperforms existing attribution methods on six benchmarks. The source code is available at \url{https://github.com/DMIRLAB-Group/FANS}.
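The intuition behind the two perturbation stages can be sketched with a much-simplified proxy (this is not the FANS estimator itself; the uniform Gaussian perturbations and the frequency-based estimates are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def necessity_sufficiency_proxy(model, x, feat, noise=0.5, n=200):
    """Simplified proxy (not the paper's FANS estimator) for the idea:
    necessity  = how often perturbing this feature flips the prediction;
    sufficiency = how often perturbing the *other* features preserves it."""
    base = model(x)
    flips = keeps = 0
    for _ in range(n):
        x_nec = x.copy()
        x_nec[feat] += rng.normal(0.0, noise)      # perturb only this feature
        flips += model(x_nec) != base
        x_suf = x + rng.normal(0.0, noise, size=x.shape)
        x_suf[feat] = x[feat]                      # hold this feature fixed
        keeps += model(x_suf) == base
    return flips / n, keeps / n
```

A feature the model actually relies on scores high on both quantities, while an ignored feature scores zero necessity, which is the discriminative behaviour the abstract describes.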
There is a wide range of mathematical models that describe populations of large numbers of neurons. In this article, we focus on nonlinear noisy leaky integrate-and-fire (NNLIF) models that describe neuronal activity at the level of the membrane potential of neurons. We introduce a set of novel states, which we call "pseudo-equilibria", and give evidence of their defining role in the behaviour of the NNLIF system when a significant synaptic delay is considered. The advantage is that these states are determined solely by the system's parameters and are derived from a sequence of firing rates that result from solving a recurrence equation. We propose a new strategy to show convergence to an equilibrium for a weakly connected system with large transmission delay, based on following the sequence of pseudo-equilibria. Unlike the direct entropy dissipation method, this technique allows us to see how a large delay favours convergence. We also present a detailed numerical study to support our results. This study explores the overall behaviour of the NNLIF system and helps us understand, among other phenomena, periodic solutions in strongly inhibitory networks.
This article aims to study efficient/trace-optimal designs for crossover trials with multiple responses recorded from each subject in each time period. A multivariate fixed effects model is proposed with direct and carryover effects corresponding to the multiple responses. The corresponding error dispersion matrix is chosen to be either of the proportional or the generalized Markov covariance type, permitting the existence of direct and cross-correlations within and between the multiple responses. The corresponding information matrices for direct effects under the two types of dispersions are used to determine efficient designs. The efficiency of orthogonal array designs of Type $I$ and strength $2$ is investigated for several choices of covariance functions, namely Mat($0.5$), Mat($1.5$) and Mat($\infty$). To motivate these multivariate crossover designs, a gene expression dataset in a $3 \times 3$ framework is utilized.
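The three covariance functions have standard Matérn closed forms for the smoothness values $\nu = 0.5$, $1.5$ and the $\nu \to \infty$ Gaussian limit; a minimal sketch of these (using the common unit-range parameterization with scale `rho`, which may differ from the article's exact convention):

```python
import numpy as np

def matern(d, nu, rho=1.0):
    """Standard Matern correlation at distance d, in the closed forms
    available for nu = 0.5, nu = 1.5 and the nu -> infinity limit."""
    r = np.abs(d) / rho
    if nu == 0.5:                      # exponential covariance
        return np.exp(-r)
    if nu == 1.5:
        return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)
    if nu == np.inf:                   # squared-exponential (Gaussian) limit
        return np.exp(-0.5 * r ** 2)
    raise ValueError("only nu in {0.5, 1.5, inf} implemented here")
```

Larger $\nu$ yields smoother processes and, at a fixed distance, higher correlation, so the three choices span a useful range for assessing design efficiency.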
The costs of training frontier AI models have grown dramatically in recent years, but there is limited public data on the magnitude and growth of these expenses. This paper develops a detailed cost model to address this gap, estimating training costs using three approaches that account for hardware, energy, cloud rental, and staff expenses. The analysis reveals that the amortized cost to train the most compute-intensive models has grown precipitously at a rate of 2.4x per year since 2016 (95% CI: 2.0x to 3.1x). For key frontier models, such as GPT-4 and Gemini, the most significant expenses are AI accelerator chips and staff costs, each costing tens of millions of dollars. Other notable costs include server components (15-22%), cluster-level interconnect (9-13%), and energy consumption (2-6%). If the trend of growing development costs continues, the largest training runs will cost more than a billion dollars by 2027, meaning that only the most well-funded organizations will be able to finance frontier AI models.
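The headline projection is simple compound growth at the estimated 2.4x/year rate; a sketch (the $100M base cost for a hypothetical 2024 frontier run is an assumption for illustration, not a figure from the paper):

```python
def projected_cost(base_cost, base_year, target_year, growth=2.4):
    """Extrapolate training cost at the paper's central 2.4x/year rate."""
    return base_cost * growth ** (target_year - base_year)

# Assumed base: a $100M training run in 2024 (hypothetical).
cost_2027 = projected_cost(100e6, 2024, 2027)   # 2.4**3 = 13.824x growth
```

Under that assumption the 2027 figure is about $1.4 billion, consistent with the abstract's billion-dollar claim.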
Item parameter estimation in pharmacometric item response theory (IRT) models is predominantly performed using the Laplace estimation algorithm as implemented in NONMEM. In psychometrics, a wide range of software tools for implementing IRT is also available, including several packages for the open-source software R. Each has its own set of benefits and limitations, and to date a systematic comparison of the primary estimation algorithms has not been performed. A simulation study evaluating varying numbers of hypothetical sample sizes and item scenarios at baseline was performed using both the Laplace and Gauss-Hermite quadrature (GHQ-EM) algorithms. In scenarios with at least 20 items and more than 100 subjects, item parameters were estimated with good precision and were similar between estimation algorithms, as demonstrated by several measures of bias and precision. The minimal differences observed for certain parameters or sample size scenarios were reduced when translating to the total score scale. The ease of use, speed of estimation and relative accuracy of the GHQ-EM method employed in mirt make it an appropriate alternative or supportive analytical approach to NONMEM for potential pharmacometric IRT applications.
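The numerical core of the Gauss-Hermite quadrature approach can be sketched in a few lines (a generic illustration, not the mirt or NONMEM implementation; the 2PL item model, the parameter names `a` and `b`, and the standard-normal latent trait are assumptions):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def marginal_p_correct(a, b, n_nodes=21):
    """Marginal probability of a correct response to a 2PL item, with the
    latent trait theta ~ N(0, 1) integrated out by Gauss-Hermite quadrature
    (the numerical building block of GHQ-based marginal likelihoods)."""
    x, w = hermgauss(n_nodes)          # nodes/weights for weight exp(-x^2)
    theta = np.sqrt(2.0) * x           # change of variables to N(0, 1)
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return float(np.sum(w * p) / np.sqrt(np.pi))
```

Because the integrand is smooth, a modest number of quadrature nodes already gives near-machine accuracy, which is one reason GHQ-EM can be both fast and precise in the scenarios described.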
Bayesian nonparametric mixture models are common for modeling complex data. While these models are well-suited for density estimation, recent results proved posterior inconsistency of the number of clusters when the true number of components is finite, for the Dirichlet process and Pitman--Yor process mixture models. We extend these results to additional Bayesian nonparametric priors such as Gibbs-type processes and finite-dimensional representations thereof. The latter include the Dirichlet multinomial process and the recently proposed Pitman--Yor and normalized generalized gamma multinomial processes. We show that mixture models based on these processes are also inconsistent in the number of clusters and discuss possible solutions. Notably, we show that a post-processing algorithm introduced for the Dirichlet process can be extended to more general models and provides a consistent method to estimate the number of components.
Growing interest and investment in the capabilities of foundation models have positioned such systems to impact a wide array of public services. Alongside these opportunities is the risk that these systems reify existing power imbalances and cause disproportionate harm to marginalized communities. Participatory approaches hold promise to instead lend agency and decision-making power to marginalized stakeholders. But existing approaches in participatory AI/ML are typically deeply grounded in context - how do we apply these approaches to foundation models, which are, by design, disconnected from context? Our paper interrogates this question. First, we examine existing attempts at incorporating participation into foundation models. We highlight the tension between participation and scale, demonstrating that it is intractable for impacted communities to meaningfully shape a foundation model that is intended to be universally applicable. In response, we develop a blueprint for participatory foundation models that identifies more local, application-oriented opportunities for meaningful participation. In addition to the "foundation" layer, our framework proposes the "subfloor" layer, in which stakeholders develop shared technical infrastructure, norms and governance for a grounded domain, and the "surface" layer, in which affected communities shape the use of a foundation model for a specific downstream task. The intermediate "subfloor" layer scopes the range of potential harms to consider, and affords communities more concrete avenues for deliberation and intervention. At the same time, it avoids duplicative effort by scaling input across relevant use cases. Through three case studies in clinical care, financial services, and journalism, we illustrate how this multi-layer model can create more meaningful opportunities for participation than solely intervening at the foundation layer.
This paper surveys vision-language pre-training (VLP) methods for multimodal intelligence that have been developed in the last few years. We group these approaches into three categories: ($i$) VLP for image-text tasks, such as image captioning, image-text retrieval, visual question answering, and visual grounding; ($ii$) VLP for core computer vision tasks, such as (open-set) image classification, object detection, and segmentation; and ($iii$) VLP for video-text tasks, such as video captioning, video-text retrieval, and video question answering. For each category, we present a comprehensive review of state-of-the-art methods, and discuss the progress that has been made and challenges still being faced, using specific systems and models as case studies. In addition, for each category, we discuss advanced topics being actively explored in the research community, such as big foundation models, unified modeling, in-context few-shot learning, knowledge, robustness, and computer vision in the wild, to name a few.
In the past few years, the emergence of pre-training models has brought uni-modal fields such as computer vision (CV) and natural language processing (NLP) to a new era. Substantial work has shown that such models benefit downstream uni-modal tasks and avoid the need to train a new model from scratch. So can such pre-trained models be applied to multi-modal tasks? Researchers have explored this problem and made significant progress. This paper surveys recent advances and new frontiers in vision-language pre-training (VLP), including image-text and video-text pre-training. To give readers a better overall grasp of VLP, we first review its recent advances from five aspects: feature extraction, model architecture, pre-training objectives, pre-training datasets, and downstream tasks. Then, we summarize the specific VLP models in detail. Finally, we discuss the new frontiers in VLP. To the best of our knowledge, this is the first survey on VLP. We hope that this survey can shed light on future research in the VLP field.
We study the problem of embedding-based entity alignment between knowledge graphs (KGs). Previous works mainly focus on the relational structure of entities. Some further incorporate another type of features, such as attributes, for refinement. However, a vast number of entity features remain unexplored or are not treated equally, which impairs the accuracy and robustness of embedding-based entity alignment. In this paper, we propose a novel framework that unifies multiple views of entities to learn embeddings for entity alignment. Specifically, we embed entities based on the views of entity names, relations and attributes, with several combination strategies. Furthermore, we design some cross-KG inference methods to enhance the alignment between two KGs. Our experiments on real-world datasets show that the proposed framework significantly outperforms the state-of-the-art embedding-based entity alignment methods. The selected views, cross-KG inference and combination strategies all contribute to the performance improvement.
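One generic combination strategy from this family of methods can be sketched as weighted concatenation of per-view embeddings followed by nearest-neighbour matching (an illustrative sketch, not the paper's specific model; the weighting and the cosine-similarity matching are assumptions):

```python
import numpy as np

def combine_views(views, weights=None):
    """Combine per-view entity embeddings (e.g. name, relation, attribute
    views) by weighted concatenation of their L2-normalised rows."""
    weights = weights or [1.0] * len(views)
    parts = []
    for V, w in zip(views, weights):
        V = V / np.linalg.norm(V, axis=1, keepdims=True)
        parts.append(w * V)
    return np.concatenate(parts, axis=1)

def align(E1, E2):
    """For each entity in KG1, return its nearest neighbour in KG2
    by cosine similarity of the combined embeddings."""
    E1 = E1 / np.linalg.norm(E1, axis=1, keepdims=True)
    E2 = E2 / np.linalg.norm(E2, axis=1, keepdims=True)
    return np.argmax(E1 @ E2.T, axis=1)
```

Using several views makes the combined representation more robust: an entity ambiguous under one view (e.g. relations) can still be matched through another (e.g. names or attributes).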