The sequential equilibrium is a standard solution concept for extensive-form games with imperfect information that includes an explicit representation of the players' beliefs. An assessment, consisting of a strategy profile and a belief system, is a sequential equilibrium if it satisfies the properties of sequential rationality and consistency. Our main result is that both properties together can be written as a single finite system of polynomial equations and inequalities. The solutions to this system are exactly the sequential equilibria of the game. We construct this system explicitly and describe an implementation that solves it using cylindrical algebraic decomposition. To write consistency as a finite system of equations, we need to compute the extreme directions of a set of polyhedral cones. We propose a modified version of the double description method, optimized for this specific purpose. To the best of our knowledge, our implementation is the first to symbolically solve general finite imperfect-information games for sequential equilibria.
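As a toy illustration of the equations-and-inequalities viewpoint (not the paper's construction or implementation, which handles general extensive-form games), the following SymPy sketch encodes the indifference conditions of Matching Pennies as a polynomial system and filters the symbolic solutions by the probability inequalities:

```python
# Toy sketch: equilibrium of Matching Pennies as a polynomial system.
# This illustrates the "equations + inequalities" viewpoint only; the
# paper's system also encodes beliefs, sequential rationality, and
# consistency, and is solved by cylindrical algebraic decomposition.
import sympy as sp

p, q = sp.symbols('p q', real=True)  # probabilities of playing "Heads"

# Row player indifferent between Heads (payoff q - (1 - q)) and Tails.
eq_row = sp.Eq(q - (1 - q), -q + (1 - q))
# Column player's indifference condition is the mirror image.
eq_col = sp.Eq(-p + (1 - p), p - (1 - p))

solutions = sp.solve([eq_row, eq_col], [p, q], dict=True)
# Keep only solutions satisfying the inequalities 0 <= p, q <= 1.
solutions = [s for s in solutions if 0 <= s[p] <= 1 and 0 <= s[q] <= 1]
print(solutions)  # [{p: 1/2, q: 1/2}]
```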
Machine learning methods can be a valuable aid in the scientific process, but they must cope with challenging settings in which data come from inhomogeneous experimental conditions. Recent meta-learning methods have made significant progress in multi-task learning, but they rely on black-box neural networks, resulting in high computational costs and limited interpretability. Leveraging the structure of the learning problem, we argue that multi-environment generalization can be achieved with a simpler learning model whose structure is affine with respect to the learning task. Crucially, we prove that this architecture can identify the physical parameters of the system, enabling interpretable learning. We demonstrate the competitive generalization performance and the low computational cost of our method by comparing it to state-of-the-art algorithms on physical systems ranging from toy models to complex, non-analytical systems. The interpretability of our method is illustrated with original applications to physical-parameter-induced adaptation and to adaptive control.
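To make the affine-in-the-task idea concrete, here is a minimal numerical sketch (an assumed simplification for illustration, not the authors' architecture): a feature map shared across environments combined with one linear head per environment, so that adapting to a new environment reduces to a cheap least-squares fit whose coefficients play the role of per-environment parameters.

```python
# Minimal sketch (assumed simplification, not the authors' code):
# predictions are affine in a per-environment weight vector w_e,
# y ~ Phi(x) @ w_e, with the feature map Phi shared across environments.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 32))        # shared (here: fixed random) features

def phi(x):
    return np.tanh(x @ W)

# Toy data: each environment is a damped system with its own damping term.
dampings = [0.1, 0.5, 1.0]
envs = []
for d in dampings:
    x = rng.normal(size=(200, 2))   # state: (position, velocity)
    y = x[:, 1] - d * x[:, 0]       # environment-specific dynamics
    envs.append((x, y))

# Adapting to an environment is a linear least-squares problem, which is
# where the low computational cost of task adaptation comes from here.
heads = [np.linalg.lstsq(phi(x), y, rcond=None)[0] for x, y in envs]
for d, w in zip(dampings, heads):
    print(f"damping={d}: head norm={np.linalg.norm(w):.2f}")
```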
With the advancement of quantum technologies, traditional encryption systems based on integer factorization face a potential threat. Developing techniques for accurately measuring the performance of the associated quantum algorithms is therefore crucial, as it provides insight into their practical feasibility from the current perspective. In this chapter, we analyze the time required for integer factorization with Shor's algorithm within a gate-based quantum circuit simulator of the matrix product state type. Additionally, we observe the impact of parameter pre-selection in Shor's algorithm. Specifically, this pre-selection is expected to increase the success rate of integer factorization by reducing the number of iterations and to facilitate performance measurement under fixed conditions, thus enabling scalable performance evaluation even on real quantum hardware.
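For context, the classical skeleton of Shor's algorithm below (with the quantum period finding replaced by brute-force order computation, so it illustrates only the classical pre- and post-processing, not any quantum speed-up) shows why a badly chosen base wastes an iteration, which is the effect parameter pre-selection avoids:

```python
# Classical toy skeleton of Shor's algorithm; the quantum subroutine is
# replaced by brute-force order finding, so this sketch only illustrates
# the role of the base a, not quantum speed-up.
from math import gcd

def order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N); found by a quantum circuit in Shor."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    assert gcd(a, N) == 1, "pre-select a base a coprime to N"
    r = order(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                  # unlucky base: this iteration is wasted
    half = pow(a, r // 2, N)
    return gcd(half - 1, N), gcd(half + 1, N)

# Pre-selecting a base with an even order and a non-trivial square root
# removes the retries, enabling measurements under fixed conditions.
print(shor_classical(15, 7))         # (3, 5)
```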
We consider a two-player dynamic information design problem between a principal and a receiver -- a game is played between the two agents on top of a Markovian system controlled by the receiver's actions, where the principal obtains and strategically shares some information about the underlying system with the receiver in order to influence their actions. In our setting, both players have long-term objectives, and the principal sequentially commits to their strategies instead of committing at the beginning. Further, the principal cannot directly observe the system state, but at every turn they can choose randomized experiments to observe the system partially. The principal can share details about the experiments with the receiver. For our analysis we impose the truthful disclosure rule: the principal is required to truthfully announce the details and the result of each experiment to the receiver immediately after the experiment result is revealed. Based on the received information, the receiver takes an action when it is their turn, with the action influencing the state of the underlying system. We show that there exist Perfect Bayesian equilibria in this game where both agents play Canonical Belief Based (CBB) strategies, using a compressed version of their information, rather than full information, to choose experiments (for the principal) or actions (for the receiver). We also provide a backward inductive procedure to solve for an equilibrium in CBB strategies.
Multiparameter persistence modules can be uniquely decomposed into indecomposable summands. Among these indecomposables, intervals stand out for their simplicity, making them preferable for their ease of interpretation in practical applications and their computational efficiency. Empirical observations indicate that modules that decompose entirely into intervals are rare. To support this observation, we show that for numerous common multiparameter constructions, such as density- or degree-Rips bifiltrations, and for a general class of point samples, the probability of the homology-induced persistence module decomposing into intervals goes to zero as the sample size goes to infinity.
Network flow problems, which involve distributing traffic such that the underlying infrastructure is used effectively, are ubiquitous in transportation and logistics. Among them, the general Multi-Commodity Network Flow (MCNF) problem concerns the distribution of multiple flows of different sizes between several sources and sinks, while achieving effective utilization of the links. Due to the appeal of data-driven optimization, these problems have increasingly been approached using graph learning methods. In this paper, we propose a novel graph learning architecture for network flow problems called Per-Edge Weights (PEW). This method builds on a Graph Attention Network and uses distinctly parametrized message functions along each link. We extensively evaluate the proposed solution through an Internet flow routing case study using $17$ Service Provider topologies and $2$ routing schemes. We show that PEW yields substantial gains over architectures whose global message function constrains the routing unnecessarily. We also find that an MLP is competitive with other standard architectures. Furthermore, we analyze the relationship between graph structure and predictive performance for data-driven routing of flows, an aspect that has not been considered by existing work in the area.
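A schematic sketch of the per-edge idea follows (a simplification assumed for illustration: PEW builds on a Graph Attention Network, whose attention mechanism this sketch omits). The key contrast with a global message function is that each directed link owns its own message parameters:

```python
# Schematic sketch of per-edge message passing (simplified; not the full
# PEW architecture, which builds on a Graph Attention Network).
import torch
import torch.nn as nn

class PerEdgeMessagePassing(nn.Module):
    def __init__(self, num_edges, in_dim, out_dim):
        super().__init__()
        # One weight matrix per edge, instead of one shared global matrix.
        self.edge_weights = nn.Parameter(
            torch.randn(num_edges, in_dim, out_dim) * 0.1)

    def forward(self, x, edge_index):
        src, dst = edge_index                        # each of shape (E,)
        # Message on edge e: x[src[e]] transformed by that edge's own weights.
        msgs = torch.einsum('ei,eio->eo', x[src], self.edge_weights)
        out = torch.zeros(x.size(0), msgs.size(1))
        return out.index_add(0, dst, msgs)           # sum messages per node

# Toy graph with 3 nodes and 4 directed links.
edge_index = torch.tensor([[0, 1, 2, 0], [1, 2, 0, 2]])
layer = PerEdgeMessagePassing(num_edges=4, in_dim=8, out_dim=16)
h = layer(torch.randn(3, 8), edge_index)
print(h.shape)  # torch.Size([3, 16])
```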
In recent years, significant advances have been made in the field of game research. However, there has been a noticeable dearth of scholarly work focused on the domain of dynamics, despite researchers widely recognizing its existence and importance. The objective of this paper is to address this research gap by presenting a vocabulary dedicated to boardgame dynamics. To achieve this goal, we employ a focus group to generate a set of dynamic concepts, which are subsequently validated and refined through a survey. The resulting concepts are then organized into a vocabulary using a taxonomic structure, allowing them to be grouped into broader and more general ideas.
Algorithmic contract design is a new frontier in the intersection of economics and computation, with combinatorial contracts being a core problem in this domain. A central model within combinatorial contracts explores a setting where a principal delegates the execution of a task, which can either succeed or fail, to an agent. The agent can choose any subset among a given set of costly actions, where every subset is associated with a success probability. The principal incentivizes the agent through a contract that specifies the payment upon success of the task. A natural setting of interest is one with submodular success probabilities. It is known that finding the optimal contract for the principal is $\mathsf{NP}$-hard, but the hardness result is derived from the hardness of demand queries. A major open problem is whether the hardness arises solely from the hardness of demand queries, or if the complexity lies within the optimal contract problem itself. In other words: does the problem retain its hardness, even when provided access to a demand oracle? We resolve this question in the affirmative, showing that any algorithm that computes the optimal contract for submodular success probabilities requires an exponential number of demand queries, thus settling the query complexity problem.
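To fix intuition for what a demand query means in this setting, here is an illustrative brute-force stand-in (exponential time; the paper's lower bound concerns the number of such queries, not their cost). The agent's best response to a contract paying alpha upon success is exactly a utility-maximizing set of actions; the particular f and costs below are made up for the example:

```python
# Illustrative brute-force demand oracle for the agent's best response.
# The success probability f and costs are toy values for this sketch.
from itertools import chain, combinations

def demand_query(actions, f, cost, alpha):
    """Return the action set S maximizing alpha * f(S) - total cost of S."""
    subsets = chain.from_iterable(
        combinations(actions, k) for k in range(len(actions) + 1))
    return max(subsets, key=lambda S: alpha * f(S) - sum(cost[a] for a in S))

# A coverage-style success probability, which is submodular.
f = lambda S: 1 - 0.5 ** len(S)
cost = {0: 0.05, 1: 0.10, 2: 0.20}

for alpha in (0.1, 0.3, 0.8):
    print(alpha, demand_query([0, 1, 2], f, cost, alpha))
```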
Reasoning, a crucial ability for complex problem-solving, plays a pivotal role in various real-world settings such as negotiation, medical diagnosis, and criminal investigation. It serves as a fundamental methodology in the field of Artificial General Intelligence (AGI). With the ongoing development of foundation models, e.g., Large Language Models (LLMs), there is a growing interest in exploring their abilities in reasoning tasks. In this paper, we introduce seminal foundation models proposed or adaptable for reasoning, highlighting the latest advancements in various reasoning tasks, methods, and benchmarks. We then delve into potential future directions concerning the emergence of reasoning abilities within foundation models. We also discuss the relevance of multimodal learning, autonomous agents, and super alignment in the context of reasoning. By discussing these future research directions, we hope to inspire researchers in their exploration of this field, stimulate further advancements in reasoning with foundation models, and contribute to the development of AGI.
The concept of causality plays an important role in human cognition. In the past few decades, causal inference has been well developed in many fields, such as computer science, medicine, economics, and education. With the advancement of deep learning techniques, it has been increasingly applied to causal inference for counterfactual estimation. Typically, deep causal models map the characteristics of covariates to a representation space and then design various objective functions to estimate counterfactual outcomes unbiasedly under different optimization methods. This paper presents a survey of deep causal models, and its core contributions are as follows: 1) we provide relevant metrics under multiple treatments and continuous-dose treatment; 2) we incorporate a comprehensive overview of deep causal models from both temporal development and method classification perspectives; 3) we provide a detailed and comprehensive classification and analysis of relevant datasets and source code.
Promoting behavioural diversity is critical for solving games with non-transitive dynamics where strategic cycles exist, and there is no consistent winner (e.g., Rock-Paper-Scissors). Yet, there is a lack of rigorous treatment for defining diversity and constructing diversity-aware learning dynamics. In this work, we offer a geometric interpretation of behavioural diversity in games and introduce a novel diversity metric based on \emph{determinantal point processes} (DPP). By incorporating the diversity metric into best-response dynamics, we develop \emph{diverse fictitious play} and \emph{diverse policy-space response oracle} for solving normal-form games and open-ended games. We prove the uniqueness of the diverse best response and the convergence of our algorithms on two-player games. Importantly, we show that maximising the DPP-based diversity metric guarantees to enlarge the \emph{gamescape} -- convex polytopes spanned by agents' mixtures of strategies. To validate our diversity-aware solvers, we test on tens of games that show strong non-transitivity. Results suggest that our methods achieve much lower exploitability than state-of-the-art solvers by finding effective and diverse strategies.
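As a minimal numeric illustration of the DPP-based idea (the payoff-vector kernel below is an assumption of this sketch, chosen to match the abstract's geometric framing), a population's diversity can be scored as the determinant of a Gram kernel built from each strategy's payoff vector, which is the squared volume spanned by those vectors: duplicate strategies contribute nothing, while genuinely distinct strategies enlarge it.

```python
# Minimal sketch of a DPP-style diversity score (kernel construction is
# an assumption of this sketch, not the paper's full algorithm).
import numpy as np

def diversity(payoffs):
    """payoffs: (n_strategies, n_opponents) matrix of expected payoffs."""
    L = payoffs @ payoffs.T      # Gram kernel of payoff-vector features
    return np.linalg.det(L)      # squared volume spanned by the strategies

# Rock-Paper-Scissors payoff matrix: strongly non-transitive.
rps = np.array([[0.0, -1.0, 1.0],
                [1.0, 0.0, -1.0],
                [-1.0, 1.0, 0.0]])

print(diversity(rps[:2]))                       # 3.0: distinct strategies
print(diversity(np.vstack([rps[0], rps[0]])))   # 0.0: duplicates add nothing
```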