Cooperative game theory studies how to divide a joint value created by a set of players. These games are often studied through the characteristic function form with transferable utility, which represents the value obtainable by each coalition. In the presence of externalities, there are many ways to define this value. Various models that account for different levels of player cooperation and the influence of external players on coalition value have been studied. Among these approaches, the optimistic and pessimistic ones typically provide sufficient insight into the strategic interactions. This paper clarifies the interpretation of these approaches by providing a unified framework. We show that ensuring that no coalition receives more than its (optimistic) upper bound is always at least as difficult as guaranteeing its (pessimistic) lower bound. We also show that if externalities are negative, providing these guarantees is always feasible. We then explore applications and show how our findings can be used to derive results from the existing literature.
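As a toy illustration of the two bounds, the sketch below evaluates a hypothetical partition function (invented here purely for illustration, not taken from the paper) and computes a coalition's optimistic value as the maximum, and its pessimistic value as the minimum, over all arrangements of the outside players.

```python
def partitions(players):
    """Yield every partition of `players` as a tuple of frozensets."""
    players = list(players)
    if not players:
        yield ()
        return
    head, *rest = players
    for part in partitions(rest):
        yield (frozenset({head}),) + part                     # head forms its own block
        for i, block in enumerate(part):                      # or head joins an existing block
            yield part[:i] + (block | {head},) + part[i + 1:]

def worth(coalition, outsider_partition):
    """Hypothetical partition function: a coalition is worth its size minus one
    unit per outside block, i.e. a negative externality from fragmentation."""
    return len(coalition) - len(outsider_partition)

def optimistic(coalition, all_players):
    """Upper bound: the best worth over all arrangements of the outsiders."""
    outside = set(all_players) - set(coalition)
    return max(worth(coalition, p) for p in partitions(outside))

def pessimistic(coalition, all_players):
    """Lower bound: the worst worth over all arrangements of the outsiders."""
    outside = set(all_players) - set(coalition)
    return min(worth(coalition, p) for p in partitions(outside))

players, S = {1, 2, 3, 4}, {1, 2}
print(optimistic(S, players), pessimistic(S, players))        # here: 1 and 0
```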
Forecasting players in sports has grown in popularity due to the potential for a tactical advantage and the applicability of such research to multi-agent interaction systems. Team sports contain a significant social component that influences interactions between teammates and opponents; however, this component has yet to be fully exploited. In this work, we hypothesize that each participant has a specific function in each action and that role-based interaction is critical for predicting players' future moves. We introduce RolFor, a novel end-to-end model for Role-based Forecasting. RolFor uses a new module we developed, called Ordering Neural Networks (OrderNN), to permute the order of the players such that each player is assigned to a latent role. The latent roles are then modeled with a RoleGCN, whose graph representation provides a fully learnable adjacency matrix that captures the relationships between roles and is subsequently used to forecast the players' future trajectories. Extensive experiments on a challenging NBA basketball dataset confirm the importance of roles and justify our goal of modeling them with optimizable models. When an oracle provides the roles, the proposed RolFor compares favorably to the current state of the art (it ranks first in terms of ADE and second in terms of FDE error). However, training RolFor end-to-end runs into the issue of the differentiability of permutation methods, which we review experimentally. Finally, this work restates differentiable ranking as a difficult open problem and highlights its great potential in conjunction with graph-based interaction models. The project is available at: //www.pinlab.org/aboutlatentroles
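To make the role-based interaction concrete, the following PyTorch sketch shows a single graph-convolution layer over a fixed number of latent roles with a fully learnable adjacency matrix; the class name, dimensions, and normalisation are illustrative assumptions, not the authors' RoleGCN implementation.

```python
import torch
import torch.nn as nn

class LearnableAdjGCN(nn.Module):
    """One graph-convolution layer over K latent roles with a fully learnable
    adjacency matrix (a sketch of the idea, not the authors' RoleGCN code)."""

    def __init__(self, num_roles: int, in_dim: int, out_dim: int):
        super().__init__()
        self.adj_logits = nn.Parameter(torch.zeros(num_roles, num_roles))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                                  # x: (batch, num_roles, in_dim)
        adj = torch.softmax(self.adj_logits, dim=-1)       # row-normalised role-to-role weights
        return torch.relu(self.lin(adj @ x))               # message passing over roles

# Hypothetical usage: 10 roles, 2-D positions embedded into 16 features.
layer = LearnableAdjGCN(num_roles=10, in_dim=2, out_dim=16)
out = layer(torch.randn(4, 10, 2))                         # -> (4, 10, 16)
```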
We examine behavior in an experimental collaboration game that incorporates endogenous network formation. The environment is modeled as a generalization of the voluntary contributions mechanism. By varying the information structure in a controlled laboratory experiment, we examine the underlying mechanisms of reciprocity that generate emergent patterns in linking and contribution decisions. Providing players with more detailed information about the sharing behavior of others drastically increases efficiency and positively affects a number of other key outcomes. To understand the driving causes of these changes in behavior, we develop and estimate a structural model for actions and small network panels and identify how social preferences affect behavior. We find that the treatment reduces altruism but stimulates reciprocity, helping players coordinate to reach mutually beneficial outcomes. In a set of counterfactual simulations, we show that increasing trust in the community would encourage higher average contributions at the cost of mildly increased free-riding. Increasing overall reciprocity greatly increases collaborative behavior when information is limited but can backfire in the treatment, suggesting that negative reciprocity and punishment can reduce efficiency. The largest returns would come from an intervention that drives players away from negative and toward positive reciprocity.
The simulation hypothesis suggests that we live in a computer simulation, a notion that has attracted significant scholarly and popular interest. This article explores the simulation hypothesis from a business perspective. Because there is no established name for a universe consistent with the simulation hypothesis, we propose the term simuverse. We argue that if we live in a simulation, there must be a business justification. Therefore, we ask: If we live in a simuverse, what is its business model? We identify and explore business model scenarios, such as the simuverse as a project, service, or platform. We also explore business model pathways and risk management issues. The article contributes to the simulation hypothesis literature and is the first to provide a business model perspective on the simulation hypothesis. The article discusses theoretical and practical implications and identifies opportunities for future research related to sustainability, digital transformation, and Artificial Intelligence (AI).
Riemannian optimization is concerned with problems where the independent variable lies on a smooth manifold. A number of problems from numerical linear algebra fall into this category, where the manifold is usually specified by special matrix structures, such as orthogonality or definiteness. Following this line of research, we investigate tools for Riemannian optimization on the symplectic Stiefel manifold. We complement the existing set of numerical optimization algorithms with a Riemannian trust region method tailored to the symplectic Stiefel manifold. To this end, we derive a matrix formula for the Riemannian Hessian under a right-invariant metric. Moreover, we propose a novel retraction for approximating the Riemannian geodesics. Finally, we conduct a comparative study in which we juxtapose the performance of the Riemannian variants of the steepest descent, conjugate gradient, and trust region methods on selected matrix optimization problems that feature symplectic constraints.
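For readers unfamiliar with the constraint, the short NumPy sketch below checks the defining relation of the symplectic Stiefel manifold, X^T J_{2n} X = J_{2k}, on a trivially feasible point; it is a background illustration only and does not reproduce the proposed trust region method or retraction.

```python
import numpy as np

def J(m):
    """Standard symplectic form J_{2m} = [[0, I_m], [-I_m, 0]]."""
    I, Z = np.eye(m), np.zeros((m, m))
    return np.block([[Z, I], [-I, Z]])

def on_symplectic_stiefel(X, tol=1e-10):
    """Check the defining constraint X^T J_{2n} X = J_{2k} for X of size (2n, 2k)."""
    two_n, two_k = X.shape
    return np.allclose(X.T @ J(two_n // 2) @ X, J(two_k // 2), atol=tol)

# A trivially feasible point: columns 1..k and n+1..n+k of the identity I_{2n}.
n, k = 4, 2
E = np.zeros((2 * n, 2 * k))
E[:k, :k] = np.eye(k)
E[n:n + k, k:] = np.eye(k)
print(on_symplectic_stiefel(E))   # True
```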
Analyses of voting algorithms often overlook the informational externalities shaping individual votes. For example, pre-polling information often skews voters towards candidates who may not be their top choice but whom they believe would be a worthwhile recipient of their vote. In this work, we aim to understand the role of external information in voting outcomes. We study this by analyzing (1) the probability that voting outcomes align with the external information, and (2) the effect of external information on the total utility across voters, or social welfare. In practice, voting mechanisms elicit only coarse information about voter utilities, such as ordinal preferences, which initially prevents us from directly analyzing the effect of informational externalities with standard voting mechanisms. To overcome this, we present an intermediary mechanism for learning how preferences change with external information that does not require eliciting full cardinal preferences. With this tool in hand, we find that voting mechanisms are generally more likely to select the alternative most favored by the external information, and that when the external information reflects the population's true preferences, social welfare increases in expectation.
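The toy Monte Carlo below illustrates the two quantities studied, the probability that the outcome aligns with the external information and the resulting social welfare, for a plurality-style mechanism under an assumed additive skew toward the signalled alternative; the shift model and all parameters are illustrative assumptions, not the intermediary mechanism proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voters, n_alts, n_trials, bias = 200, 4, 1000, 0.3

aligned, welfare_with, welfare_without = 0, 0.0, 0.0
for _ in range(n_trials):
    u = rng.random((n_voters, n_alts))              # latent cardinal utilities
    signal = rng.integers(n_alts)                   # alternative favoured by external info
    u_seen = u.copy()
    u_seen[:, signal] += bias                       # assumed additive skew toward the signal
    winner_without = np.bincount(u.argmax(1), minlength=n_alts).argmax()
    winner_with = np.bincount(u_seen.argmax(1), minlength=n_alts).argmax()
    aligned += int(winner_with == signal)
    welfare_without += u[:, winner_without].sum()   # welfare measured on true utilities
    welfare_with += u[:, winner_with].sum()

print(f"P(outcome aligns with signal) = {aligned / n_trials:.2f}")
print(f"avg welfare with / without info: {welfare_with / n_trials:.1f} / {welfare_without / n_trials:.1f}")
```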
The expected loss is an upper bound on the model generalization error which admits robust PAC-Bayes bounds for learning. However, loss minimization is known to ignore misspecification, where models cannot exactly reproduce observations. This leads to significant underestimates of parameter uncertainties in the large-data, or underparameterized, limit. We analyze the generalization error of near-deterministic, misspecified and underparameterized surrogate models, a regime of broad relevance in science and engineering. We show that posterior distributions must cover every training point to avoid a divergent generalization error, and we derive an ensemble ansatz that respects this constraint, which for linear models incurs minimal overhead. The efficient approach is demonstrated on model problems before being applied to high-dimensional datasets in atomistic machine learning. Parameter uncertainties from misspecification survive in the underparameterized limit, giving accurate prediction and bounding of test errors.
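As one simple way to visualise the covering constraint for a linear surrogate, the sketch below builds an ensemble whose i-th member is the minimum-norm perturbation of the least-squares fit that passes exactly through training point i; this construction and the quadratic toy data are illustrative assumptions, not the ansatz derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Misspecified toy setting: data generated by a quadratic, fit with a linear model.
x = rng.uniform(-1.0, 1.0, size=200)
y = x ** 2 + 0.01 * rng.normal(size=200)
Phi = np.column_stack([np.ones_like(x), x])            # linear features (bias, x)

w_ls, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # single loss-minimising fit

# Ensemble whose i-th member is the minimum-norm perturbation of w_ls that
# passes exactly through training point i, so every point is covered.
members = np.array([
    w_ls + phi * (yi - phi @ w_ls) / (phi @ phi)
    for phi, yi in zip(Phi, y)
])

phi_test = np.array([1.0, 0.9])                        # features of a test input
preds = members @ phi_test
print(preds.mean(), preds.std())                       # spread survives despite ample data
```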
Threshold selection is a fundamental problem in any threshold-based extreme value analysis. While the models are asymptotically motivated, selecting an appropriate threshold for finite samples is difficult and highly subjective through standard methods. Inference for high quantiles can also be highly sensitive to the choice of threshold. Too low a threshold choice leads to bias in the fit of the extreme value model, while too high a choice leads to unnecessary additional uncertainty in the estimation of model parameters. We develop a novel methodology for automated threshold selection that directly tackles this bias-variance trade-off. We also develop a method to account for the uncertainty in the threshold estimation and to propagate this uncertainty through to high quantile inference. Through a simulation study, we demonstrate the effectiveness of our method for threshold selection and subsequent extreme quantile estimation, relative to the leading existing methods, and show that the method's effectiveness is not sensitive to its tuning parameters. We apply our method to the well-known, troublesome example of the River Nidd dataset.
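The sketch below sets up the standard ingredients that any automated selection procedure of this kind operates on: a grid of candidate thresholds and a generalised Pareto fit to the exceedances above each one; the selection criterion and uncertainty propagation proposed in the paper are not reproduced here.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
data = rng.standard_t(df=4, size=5000)          # heavy-tailed toy sample

# Candidate thresholds: a grid of empirical quantiles.
candidates = np.quantile(data, np.linspace(0.80, 0.98, 10))

for u in candidates:
    exc = data[data > u] - u                    # exceedances above threshold u
    shape, _, scale = genpareto.fit(exc, floc=0)
    print(f"u = {u:6.3f}   n_exc = {exc.size:4d}   shape = {shape:6.3f}   scale = {scale:6.3f}")
```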
The performance of machine learning models can be impacted by changes in data over time. A promising approach to addressing this challenge is invariant learning, with a particular focus on a method known as invariant risk minimization (IRM). This technique aims to identify a stable data representation that remains effective on out-of-distribution (OOD) data. While numerous studies have developed IRM-based methods adaptive to data augmentation scenarios, there has been limited attention on directly assessing how well these representations preserve their invariant performance under varying conditions. In this paper, we propose a novel method to evaluate invariant performance, specifically tailored to IRM-based methods. We establish a bridge, via the likelihood ratio, between the conditional expectations of an invariant predictor across different environments. Our proposed criterion offers a robust basis for evaluating invariant performance. We validate our approach with theoretical support and demonstrate its effectiveness through extensive numerical studies. These experiments illustrate how our method can assess the invariant performance of various representation techniques.
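For context, the snippet below recalls the standard IRMv1 gradient penalty of Arjovsky et al. (2019), which the IRM-based methods discussed above build on; it is background only and is distinct from the likelihood-ratio evaluation criterion proposed in this paper.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    """IRMv1 gradient penalty for one environment (Arjovsky et al., 2019): the
    squared gradient of the risk with respect to a fixed dummy classifier scale."""
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    (grad,) = torch.autograd.grad(loss, scale, create_graph=True)
    return grad ** 2

# Hypothetical usage: sum per-environment risks plus the invariance penalty.
# envs = [(logits_e, y_e), ...]
# total = sum(F.binary_cross_entropy_with_logits(l, y) + lam * irm_penalty(l, y)
#             for l, y in envs)
```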
This work introduces a new general approach for the numerical analysis of stable equilibria of second-order mean field game systems in cases where the uniqueness of solutions may fail. For the sake of simplicity, we focus on a simple stationary case. We propose an abstract framework to study these solutions by reformulating the mean field game system as an abstract equation in a Banach space. In this context, stable equilibria turn out to be regular solutions of this equation, meaning that the linearized system is well-posed. We provide three applications of this property: we study the sensitivity analysis of stable solutions, establish error estimates for their finite element approximations, and prove the local convergence of Newton's method in infinite dimensions.
An interesting case of the well-known Dataset Shift Problem is the classification of Electroencephalogram (EEG) signals in the context of Brain-Computer Interface (BCI). The non-stationarity of EEG signals can lead to poor generalisation performance in BCI classification systems used across different sessions, even for the same subject. In this paper, we start from the hypothesis that the Dataset Shift problem can be alleviated by exploiting suitable eXplainable Artificial Intelligence (XAI) methods to locate and transform the characteristics of the input that are relevant for classification. In particular, we focus on an experimental analysis of the explanations produced by several XAI methods on an ML system trained on a typical EEG dataset for emotion recognition. Results show that many relevant components found by the XAI methods are shared across sessions and can be used to build a system that generalises better. However, the relevant components of the input signal also appear to be highly dependent on the input itself.
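A minimal sketch of the kind of cross-session analysis described above: compute an input-gradient relevance map per trial and measure how many of the top-k relevant components two sessions share; the saliency method, the hypothetical `model`, and the overlap score are illustrative assumptions, not the authors' pipeline.

```python
import torch

def relevance_map(model, x):
    """Input-gradient relevance for one EEG trial x of shape (channels, time);
    `model` is any trained classifier taking a batch of such trials."""
    x = x.clone().requires_grad_(True)
    model(x.unsqueeze(0)).max().backward()        # score of the predicted class
    return x.grad.abs()

def top_k_overlap(rel_a, rel_b, k=20):
    """Fraction of the k most relevant input components shared by two sessions."""
    top_a = set(torch.topk(rel_a.flatten(), k).indices.tolist())
    top_b = set(torch.topk(rel_b.flatten(), k).indices.tolist())
    return len(top_a & top_b) / k
```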