In a Stackelberg congestion game (SCG), a leader aims to maximize their own gain by anticipating and manipulating the equilibrium state at which the followers settle when playing a congestion game. Often formulated as bilevel programs, large-scale SCGs are well known for their intractability and complexity. Here, we attempt to tackle this computational challenge by marrying traditional methodologies with the latest differentiable programming techniques in machine learning. The core idea centers on replacing the lower-level equilibrium problem with a smooth evolution trajectory defined by the imitative logit dynamic (ILD), which we prove converges to the equilibrium of the congestion game under mild conditions. Building upon this theoretical foundation, we propose two new local search algorithms for SCGs. The first is a gradient descent algorithm that obtains the derivatives by unrolling ILD via differentiable programming. Thanks to the smoothness of ILD, the algorithm promises both efficiency and scalability. The second algorithm adds a heuristic twist by cutting short the followers' evolution trajectory. Behaviorally, this means that, instead of anticipating the followers' best response at equilibrium, the leader seeks to approximate that response by looking only a limited number of steps ahead. Our numerical experiments cover various instances of classic SCG applications, ranging from toy benchmarks to large-scale real-world examples. The results show that the proposed algorithms are reliable and scalable local solvers that deliver high-quality solutions with greater regularity and significantly less computational effort than the many incumbents included in our study.
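To make the first algorithm's unrolling idea concrete, here is a minimal sketch in JAX on a toy three-link congestion game: the leader's tolls enter the followers' link costs, the followers evolve under a discrete-time imitative logit update, and the leader's gradient is obtained by differentiating straight through the unrolled trajectory. The update form, cost functions, leader objective, and step sizes are illustrative assumptions, not the paper's exact formulation.

    # Hedged sketch of gradient descent through an unrolled imitative
    # logit dynamic (ILD); all modeling choices here are illustrative.
    import jax
    import jax.numpy as jnp

    a = jnp.array([1.0, 2.0, 3.0])   # free-flow costs of three parallel links
    b = jnp.array([2.0, 1.0, 0.5])   # congestion slopes

    def ild_trajectory(toll, steps=100, eta=0.5):
        """Unroll the followers' imitative logit dynamic under the leader's tolls."""
        x = jnp.ones(3) / 3.0                # initial flow split on the simplex
        for _ in range(steps):
            cost = a + b * x + toll          # perceived link costs
            w = x * jnp.exp(-eta * cost)     # imitate lower-cost links (logit weights)
            x = w / w.sum()                  # renormalize: flows stay on the simplex
        return x

    def leader_loss(toll):
        x = ild_trajectory(toll)             # followers' approximate equilibrium
        return (x * (a + b * x)).sum()       # e.g. total travel time to be minimized

    grad_fn = jax.jit(jax.grad(leader_loss)) # derivatives by unrolling ILD
    toll = jnp.zeros(3)
    for _ in range(300):                     # plain gradient descent for the leader
        toll = toll - 0.1 * grad_fn(toll)
    print(toll, ild_trajectory(toll))

In this picture, the paper's second, truncated-anticipation algorithm amounts to running the same procedure with a deliberately small value of steps.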
In task-oriented conversational AI evaluation, unsupervised methods correlate poorly with human judgments, and supervised approaches lack generalization. Recent advances in large language models (LLMs) show robust zero-shot and few-shot capabilities across NLP tasks. This paper explores using LLMs for automated dialogue quality evaluation, experimenting with various configurations on public and proprietary datasets. Varying factors such as model size, in-context examples, and example-selection techniques, we examine "chain-of-thought" (CoT) reasoning and label extraction procedures. Our results show that (1) larger models yield more accurate dialogue labels; (2) algorithmic selection of in-context examples outperforms random selection; (3) CoT reasoning, in which an LLM is asked to provide justifications before outputting final labels, improves performance; and (4) fine-tuned LLMs outperform out-of-the-box ones. Our results indicate that LLMs that are suitably fine-tuned and have sufficient reasoning capabilities can be leveraged for automated dialogue evaluation.
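As a sketch of how these factors fit together in practice, the following Python fragment assembles a CoT evaluation prompt from algorithmically selected in-context examples and extracts the final label from the model's output. The embedding-based selection, prompt wording, and label format are assumptions made for illustration, not the paper's exact configuration.

    # Hedged sketch: in-context example selection + CoT prompting + label
    # extraction for LLM-based dialogue quality evaluation.
    import re
    import numpy as np

    def select_examples(query_emb, pool_embs, pool, k=4):
        """Pick the k labeled dialogues closest to the query in embedding space."""
        sims = pool_embs @ query_emb / (
            np.linalg.norm(pool_embs, axis=1) * np.linalg.norm(query_emb))
        return [pool[i] for i in np.argsort(-sims)[:k]]

    def build_prompt(dialogue, examples):
        """CoT prompt: ask for a justification before the final quality label."""
        parts = ["Rate each dialogue's quality from 1 to 5. Explain your "
                 "reasoning first, then answer with 'Label: <n>'."]
        for ex in examples:
            parts.append(f"Dialogue: {ex['text']}\n"
                         f"Reasoning: {ex['reasoning']}\nLabel: {ex['label']}")
        parts.append(f"Dialogue: {dialogue}\nReasoning:")
        return "\n\n".join(parts)

    def extract_label(llm_output):
        """Label extraction: take the last 'Label: n' the model produced."""
        matches = re.findall(r"Label:\s*([1-5])", llm_output)
        return int(matches[-1]) if matches else None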
As an alternative to Kripke models, simplicial complexes are a versatile semantic primitive on which to interpret epistemic logic. Given a set of vertices, a simplicial complex is a downward-closed set of subsets of the vertex set, called simplexes. A maximal simplex is called a facet. Impure simplicial complexes represent that some agents (processes) are dead. It is known that impure simplicial complexes categorically correspond to so-called partial epistemic (Kripke) models. In this contribution, we define a notion of bisimulation to compare impure simplicial complexes and show that it has the Hennessy-Milner property. These results are for a logical language including atoms that express whether agents are alive or dead. Without these atoms, no reasonable standard notion of bisimulation exists, as we amply justify by counterexamples, because such a restricted language is insufficiently expressive.
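The structural definitions here are easy to make concrete. The small Python sketch below builds the downward closure of a set of facets, checks purity, and reads off which agents are alive in each facet; the (agent, value) vertex encoding is an illustrative assumption.

    # Hedged sketch: a simplicial complex as the downward closure of its
    # facets, with vertices labelled by agents.
    from itertools import combinations

    facets = [frozenset({("a", 0), ("b", 0), ("c", 0)}),  # all three agents alive
              frozenset({("a", 1), ("b", 1)})]            # agent c is dead here

    def downward_closure(facets):
        """All non-empty subsets of the facets: the simplexes of the complex."""
        simplexes = set()
        for f in facets:
            for r in range(1, len(f) + 1):
                simplexes.update(frozenset(s) for s in combinations(f, r))
        return simplexes

    complex_ = downward_closure(facets)
    is_pure = len({len(f) for f in facets}) == 1   # impure: facets of unequal size
    alive = [{agent for agent, _ in f} for f in facets]
    print(len(complex_), "simplexes; pure:", is_pure, "; alive per facet:", alive)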
The interest in linear complexity models for large language models is on the rise, although their scaling capacity remains uncertain. In this study, we present scaling laws for linear complexity language models to establish a foundation for their scalability. Specifically, we examine the scaling behaviors of three efficient linear architectures: TNL, a linear attention model with data-independent decay; HGRN2, a linear RNN with data-dependent decay; and cosFormer2, a linear attention model without decay. We also include LLaMA as a baseline softmax attention architecture for comparison. Each architecture was trained in six sizes, ranging from 70M to 7B parameters, on a 300B-token corpus, and evaluated with a total of 1,376 intermediate checkpoints on validation loss and various downstream tasks, including commonsense reasoning and information retrieval and generation. The study reveals that existing linear complexity language models exhibit scaling capabilities similar to those of conventional transformer-based models while also demonstrating superior linguistic proficiency and knowledge retention.
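For readers unfamiliar with the form such laws take, the sketch below fits the standard power law $L(N) = cN^{-\alpha}$ on log-log axes. The data points are synthetic stand-ins generated for illustration; they are not the paper's measurements.

    # Hedged sketch: fitting a power-law scaling curve to (model size, loss)
    # pairs. The data are synthetic, not measurements from the study.
    import numpy as np

    n_params = np.array([70e6, 160e6, 410e6, 1e9, 3e9, 7e9])  # six model sizes
    loss = 20.0 * n_params ** -0.07                           # synthetic power law
    loss += np.random.default_rng(0).normal(0.0, 0.01, size=6)

    slope, log_c = np.polyfit(np.log(n_params), np.log(loss), 1)
    print(f"fit: L(N) ~ {np.exp(log_c):.2f} * N^({slope:.3f})")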
The Full Bayesian Significance Test (FBST) possesses many desirable aspects, such as not requiring a non-zero prior probability for hypotheses while also producing a measure of evidence for $H_0$. Still, few attempts have been made to bring the FBST to nonparametric settings, the main drawback being the need to obtain the highest posterior density (HPD) region in a function space. In this work, we use Gaussian processes to provide an analytically tractable FBST for hypotheses of the type $$ H_0: g(\boldsymbol{x}) = \boldsymbol{b}(\boldsymbol{x})\boldsymbol{\beta}, \quad \forall \boldsymbol{x} \in \mathcal{X}, \quad \boldsymbol{\beta} \in \mathbb{R}^k, $$ where $g(\cdot)$ is the regression function, $\boldsymbol{b}(\cdot)$ is a vector of linearly independent linear functions -- such as $\boldsymbol{b}(\boldsymbol{x}) = \boldsymbol{x}'$ -- and $\mathcal{X}$ is the covariates' domain. We also make use of pragmatic hypotheses to verify whether the adherence to a linear model holds approximately rather than exactly, allowing for the inclusion of valuable information such as measurement errors and utility judgments. This contribution extends the theory of the FBST, allowing its application in nonparametric settings, and provides a procedure that easily tests whether linear models are adequate for the data and that can automatically perform variable selection.
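To recall what the FBST reports, here is a Monte Carlo sketch of the Pereira-Stern e-value on a toy two-dimensional Gaussian posterior with $H_0: \theta_2 = 0$. The posterior and its parameters are illustrative assumptions; the analytically tractable, GP-based construction of this work is not reproduced here.

    # Hedged sketch: the FBST e-value ev(H0) = posterior mass outside the
    # tangential set T = {theta : p(theta | data) > p*}, where p* is the
    # supremum of the posterior density over H0.
    import numpy as np
    from scipy.stats import multivariate_normal

    mean = np.array([0.8, 0.5])
    cov = np.array([[0.2, 0.05], [0.05, 0.1]])
    post = multivariate_normal(mean, cov)       # toy posterior over (theta1, theta2)

    theta1_grid = np.linspace(-3.0, 3.0, 2001)  # H0 is the line theta2 = 0
    h0_points = np.column_stack([theta1_grid, np.zeros_like(theta1_grid)])
    p_star = post.pdf(h0_points).max()          # sup of the density over H0

    samples = post.rvs(size=100_000, random_state=0)
    ev = np.mean(post.pdf(samples) <= p_star)   # evidence in favour of H0
    print(f"FBST e-value ev(H0) = {ev:.3f}")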
We explore a kind of first-order predicate logic with intended semantics in the reals. Compared to other approaches in the literature, we work predominantly in the multiplicative reals $[0,\infty]$, showing that they support three generations of connectives, which we call non-linear, linear additive, and linear multiplicative. Means and harmonic means emerge as natural candidates for bounded existential and universal quantifiers, and indeed we see that they behave as expected in relation to the other logical connectives. We explain this through the well-known fact that min/max and arithmetic mean/harmonic mean sit at opposite ends of a spectrum, that of p-means. We give syntax and semantics for this quantitative predicate logic, and as example applications, we show how softmax is the quantitative semantics of argmax, and R\'enyi entropy/Hill numbers are additive/multiplicative semantics of the same formula. Indeed, the additive reals also fit into the story by exploiting the Napierian duality $-\log \dashv 1/\exp$, which highlights a formal distinction between 'additive' and 'multiplicative' quantities. Finally, we describe two attempts at a categorical semantics via enriched hyperdoctrines, and we discuss why hyperdoctrines are likely inadequate for this kind of logic.
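For reference, the spectrum of p-means mentioned above is the standard family of power means, $$ M_p(x_1, \dots, x_n) = \left( \frac{1}{n} \sum_{i=1}^{n} x_i^p \right)^{1/p}, $$ which recovers $\min$ as $p \to -\infty$, the harmonic mean at $p = -1$, the arithmetic mean at $p = 1$, and $\max$ as $p \to +\infty$; min/max and arithmetic/harmonic mean are thus opposite ends of one parametrized family.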
The conventional perspective on Markov chains considers decision problems concerning the probabilities of temporal properties being satisfied by traces of visited states. However, consider the following query made of a stochastic system modelling the weather: given the conditions today, will there be a day with less than a 50\% chance of rain? The conventional perspective is ill-equipped to decide such problems, which concern the evolution of the initial distribution. The alternative perspective we consider views Markov chains as distribution transformers: the focus is on the sequence of distributions over states at each step, where the evolution is driven by the underlying stochastic transition matrix. More precisely, given an initial distribution vector $\mu$ and a stochastic transition matrix $M$, we ask whether the ensuing sequence of distributions $(\mu, M\mu, M^2\mu, \dots)$ satisfies a given temporal property. This is a special case of the model-checking problem for linear dynamical systems, which is not known to be decidable in full generality. The goal of this article is to delineate the classes of instances for which this problem can be solved, under the assumption that the dynamics is governed by stochastic matrices.
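The weather query above already illustrates the distribution-transformer view; a few lines of Python make it executable. The two-state chain and its numbers are illustrative assumptions.

    # Hedged sketch: checking a temporal property of (mu, M mu, M^2 mu, ...)
    # over a bounded horizon. States: index 0 = rain, index 1 = dry.
    import numpy as np

    M = np.array([[0.6, 0.2],   # column-stochastic transition matrix:
                  [0.4, 0.8]])  # column j holds P(next state | current state j)
    mu = np.array([0.9, 0.1])   # today: 90% chance of rain

    for day in range(30):
        if mu[0] < 0.5:         # the property: a day with < 50% chance of rain
            print(f"yes: day {day} has a {mu[0]:.1%} chance of rain")
            break
        mu = M @ mu             # evolve the distribution one step

Of course, such bounded simulation only settles bounded-horizon instances; the decidability questions studied here concern temporal properties of the unbounded sequence.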
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks; hence, late-stage fusion of final representations or predictions from each modality (`late-fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and share only what is necessary. We find that such a strategy improves fusion performance while reducing computational cost. We conduct thorough ablation studies and achieve state-of-the-art results on multiple audio-visual classification benchmarks, including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
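A single fusion layer of this kind can be sketched in a few lines: each modality attends over its own tokens plus a handful of shared bottleneck tokens, and the bottleneck updates from the two modalities are then merged. Single-head attention without learned projections, the token shapes, and the averaging merge are simplifications assumed for illustration, not the paper's exact architecture.

    # Hedged sketch of bottleneck-mediated fusion: cross-modal information
    # can flow only through a small set of shared bottleneck tokens.
    import numpy as np

    def attend(q, kv, d=64):
        """Single-head scaled dot-product attention (learned projections omitted)."""
        scores = q @ kv.T / np.sqrt(d)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        return w @ kv

    rng = np.random.default_rng(0)
    video = rng.normal(size=(196, 64))     # video tokens
    audio = rng.normal(size=(100, 64))     # audio tokens
    bneck = rng.normal(size=(4, 64))       # a small number of bottleneck latents

    # Each modality sees only itself plus the bottleneck, never the other modality.
    video_new = attend(video, np.vstack([video, bneck]))
    bneck_v   = attend(bneck, np.vstack([video, bneck]))
    audio_new = attend(audio, np.vstack([audio, bneck]))
    bneck_a   = attend(bneck, np.vstack([audio, bneck]))
    bneck = (bneck_v + bneck_a) / 2        # merge the per-modality bottleneck updates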
Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but they have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and applying the chain rule for MI to the decomposed views. The resulting expression is a sum of unconditional and conditional MI terms, each measuring a modest chunk of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI that can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and that it learns better representations in a vision domain and for dialogue generation.
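Concretely, splitting one view $Y$ into progressively more informed subviews $Y_1, \dots, Y_n$ and applying the chain rule gives $$ I(X; Y) = I(X; Y_1) + \sum_{i=2}^{n} I(X; Y_i \mid Y_{<i}), $$ so each term measures only a modest chunk of the total MI. Since a contrastive bound estimated with $N$ negative samples saturates near $\log N$, bounding each of the $n$ terms separately raises the achievable ceiling to roughly $n \log N$.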
Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete, or might not be available at all. With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted, but also in those where a graph is not available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
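A forward-only sketch of the modelling idea follows: a Bernoulli parameter per edge defines the discrete distribution, graphs are sampled from it, and a GCN layer is evaluated in expectation over the samples. The tiny one-layer GCN, the symmetrization, and the Monte Carlo averaging are illustrative assumptions; the paper's actual bilevel optimization of these parameters is not reproduced.

    # Hedged sketch: a learned Bernoulli distribution over edges feeding a GCN.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 5, 8
    theta = np.full((n, n), 0.5)        # Bernoulli probability for each edge
    X = rng.normal(size=(n, d))         # node features
    W = rng.normal(size=(d, d)) * 0.1   # GCN layer weights

    def sample_graph(theta):
        """Draw a discrete graph: keep each edge with its learned probability."""
        A = (rng.random((n, n)) < theta).astype(float)
        A = np.maximum(A, A.T)          # symmetrize
        np.fill_diagonal(A, 1.0)        # add self-loops
        return A

    def gcn_layer(A, X, W):
        """One graph convolution with symmetric degree normalization."""
        deg = A.sum(axis=1)
        A_hat = A / np.sqrt(np.outer(deg, deg))
        return np.maximum(A_hat @ X @ W, 0.0)   # ReLU

    # The outer (bilevel) objective would be an expectation over this edge
    # distribution; here we just approximate it by Monte Carlo.
    H = np.mean([gcn_layer(sample_graph(theta), X, W) for _ in range(16)], axis=0)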
Multi-relation question answering is a challenging task, as it requires elaborate analysis of the question and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model, the Interpretable Reasoning Network, which employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of the input question should be analyzed at each hop; predicts a relation that corresponds to the currently parsed results; uses the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis, thereby allowing manual intervention in predicting the final answer.
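The hop-by-hop control flow can be caricatured in a few lines of Python. The scoring and update rules below are illustrative stand-ins for the model's learned modules, intended only to show the loop structure and the inspectable intermediate predictions.

    # Hedged sketch of hop-by-hop reasoning: analyze part of the question,
    # predict a relation, update the question representation and state, repeat.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n_rel, hops = 16, 5, 3
    question = rng.normal(size=d)            # question representation
    relations = rng.normal(size=(n_rel, d))  # relation embeddings
    state = np.zeros(d)                      # state of the reasoning process

    def softmax(z):
        z = np.exp(z - z.max())
        return z / z.sum()

    trace = []
    for hop in range(hops):
        scores = softmax(relations @ (question + state))  # score relations in context
        rel = int(scores.argmax())           # predict the current hop's relation
        trace.append(rel)                    # traceable intermediate prediction
        question = question - relations[rel] # "consume" the analyzed question part
        state = state + relations[rel]       # update the reasoning state
    print("predicted relation path:", trace)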