Designing efficient algorithms to compute Nash equilibria poses considerable challenges in Algorithmic Game Theory and Optimization. In this work, we employ integer programming techniques to compute Nash equilibria in Integer Programming Games, a class of simultaneous and non-cooperative games where each player solves a parametrized integer program. We introduce ZERO Regrets, a general and efficient cutting plane algorithm to compute, enumerate, and select Nash equilibria. Our framework leverages the concept of equilibrium inequality, an inequality valid for any Nash equilibrium, and the associated equilibrium separation oracle. We evaluate our algorithmic framework on a wide range of practical and methodological problems from the literature, providing a solid benchmark against the existing approaches.
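The cutting-plane idea can be sketched on a toy finite game. The sketch below is purely illustrative (the function names and the enumeration-based "relaxation" are our simplifications, not the paper's integer-programming implementation): a welfare-maximal candidate is repeatedly proposed, and an equilibrium separation oracle either certifies it as a Nash equilibrium or returns a profitable deviation, whose violated equilibrium inequality cuts the candidate off.

```python
import itertools

def separation_oracle(profile, payoffs, strategies):
    """Return a profitable unilateral deviation (player, strategy),
    or None if the profile is a Nash equilibrium."""
    for i, strat_set in enumerate(strategies):
        current = payoffs[i](profile)
        for s in strat_set:
            deviation = profile[:i] + (s,) + profile[i + 1:]
            if payoffs[i](deviation) > current:
                return i, s
    return None

def welfare_maximal_equilibrium(strategies, payoffs, welfare):
    """Toy cutting-plane loop: optimize welfare over the remaining
    candidates, separate, and cut until an equilibrium is found."""
    candidates = set(itertools.product(*strategies))
    while candidates:
        best = max(candidates, key=welfare)
        if separation_oracle(best, payoffs, strategies) is None:
            return best  # no player can profitably deviate
        candidates.discard(best)  # the equilibrium inequality cuts it off
    return None  # no pure Nash equilibrium exists

# A prisoner's-dilemma-style game: strategy 1 ("defect") dominates.
table1 = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}
table2 = {(0, 0): 3, (0, 1): 5, (1, 0): 0, (1, 1): 1}
payoffs = [lambda p: table1[p], lambda p: table2[p]]
eq = welfare_maximal_equilibrium([[0, 1], [0, 1]], payoffs,
                                 lambda p: table1[p] + table2[p])
```

In this toy game the high-welfare profile (0, 0) is proposed first and cut off, and the loop terminates at the unique equilibrium (1, 1).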
When performing Bayesian computations in practice, one is often faced with the challenge that the constituent model components and/or the data are only available in a distributed fashion, e.g. due to privacy concerns or sheer volume. While various methods have been proposed for performing posterior inference in such federated settings, these either make very strong assumptions on the data and/or model or otherwise introduce significant bias when the local posteriors are combined to form an approximation of the target posterior. By leveraging recently developed methods for Markov Chain Monte Carlo (MCMC) based on Piecewise Deterministic Markov Processes (PDMPs), we develop a computation- and communication-efficient family of posterior inference algorithms (Fed-PDMC) which provides asymptotically exact approximations of the full posterior over a large class of Bayesian models, allowing heterogeneous model and data contributions from each client. We show that communication between clients and the server preserves the privacy of the individual data sources by establishing differential privacy guarantees. We quantify the performance of Fed-PDMC over a class of illustrative analytical case-studies and demonstrate its efficacy on a number of synthetic examples along with realistic Bayesian computation benchmarks.
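For readers unfamiliar with PDMP-based MCMC, the following is a minimal single-machine sketch of the one-dimensional Zig-Zag process targeting a standard Gaussian (a textbook special case, not the Fed-PDMC algorithm itself; event times are simulated exactly by inverting the integrated switching rate).

```python
import math
import random

def zigzag_gaussian(n_events, seed=0):
    """1-D Zig-Zag sampler for N(0, 1): constant speed +/-1, switching rate
    lambda(t) = max(0, v * (x + v*t)); exact event times by rate inversion."""
    rng = random.Random(seed)
    x, v = 0.0, 1.0
    t_total = m1 = m2 = 0.0  # elapsed time, running integrals of x and x^2
    for _ in range(n_events):
        e = rng.expovariate(1.0)
        a = v * x  # rate along the segment is max(0, a + t)
        tau = -a + math.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        # integrate x and x^2 exactly along the linear segment x + v*t
        m1 += x * tau + v * tau ** 2 / 2.0
        m2 += x ** 2 * tau + x * v * tau ** 2 + tau ** 3 / 3.0
        t_total += tau
        x += v * tau
        v = -v  # deterministic motion, velocity flip at the event
    return m1 / t_total, m2 / t_total  # time-averaged mean, second moment

mean, second_moment = zigzag_gaussian(200_000)
```

The time averages along the piecewise-linear trajectory recover the target's moments (mean 0, variance 1), which is the asymptotic exactness property the abstract refers to.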
This study suggests a new approach to calculating the finite Fourier transform. The conceptual starting point of the method is that the underlying series has been suitably modified and purified, and this refined series serves as the basis of the investigation. Under this assumption, an appropriate application of the Fourier series was selected, applied, and analysed in relation to the finite Fourier transform, and the properties of the series were thereby determined. The results provide a better understanding of the characteristics of this series.
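For concreteness, the finite (discrete) Fourier transform of a length-N sequence x is X_k = Σ_n x_n e^{-2πikn/N}; a direct implementation of this standard definition (illustrative, not the modified series construction discussed above) is:

```python
import cmath

def dft(x):
    """Direct O(N^2) finite Fourier transform: X_k = sum_n x_n e^{-2*pi*i*k*n/N}."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

X = dft([1, 1, 1, 1])  # a constant sequence concentrates at frequency 0
```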
Multi-agent interactions are increasingly important in the context of reinforcement learning, and the theoretical foundations of policy gradient methods have attracted surging research interest. We investigate the global convergence of natural policy gradient (NPG) algorithms in multi-agent learning. We first show that vanilla NPG may not have parameter convergence, i.e., the convergence of the vector that parameterizes the policy, even when the costs are regularized (which enabled strong convergence guarantees in the policy space in the literature). This non-convergence of parameters leads to stability issues in learning, which becomes especially relevant in the function approximation setting, where we can only operate on low-dimensional parameters, instead of the high-dimensional policy. We then propose variants of the NPG algorithm, for several standard multi-agent learning scenarios: two-player zero-sum matrix and Markov games, and multi-player monotone games, with global last-iterate parameter convergence guarantees. We also generalize the results to certain function approximation settings. Note that in our algorithms, the agents take symmetric roles. Our results might also be of independent interest for solving nonconvex-nonconcave minimax optimization problems with certain structures. Simulations are also provided to corroborate our theoretical findings.
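The policy-versus-parameter distinction can be seen already in a two-action softmax bandit (a deliberately simplified single-agent illustration, not the paper's multi-agent setting): under a natural-gradient-style logit update θ_a ← θ_a + η q_a, the policy converges to the better action while the parameter vector itself grows without bound.

```python
import math

q = [1.0, 0.0]      # action values (action 0 is better)
theta = [0.0, 0.0]  # softmax logits
eta = 0.1
for _ in range(1_000):
    theta = [th + eta * qa for th, qa in zip(theta, q)]

z = sum(math.exp(th) for th in theta)
policy = [math.exp(th) / z for th in theta]
# policy converges (policy[0] -> 1), but theta[0] = eta * q[0] * t diverges
```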
Model-free learning for multi-agent stochastic games is an active area of research. Existing reinforcement learning algorithms, however, are often restricted to zero-sum games, and are applicable only in small state-action spaces or other simplified settings. Here, we develop a new data-efficient Deep Q-learning methodology for model-free learning of Nash equilibria for general-sum stochastic games. The algorithm uses a local linear-quadratic expansion of the stochastic game, which leads to analytically solvable optimal actions. The expansion is parametrized by deep neural networks to give it sufficient flexibility to learn the environment without the need to experience all state-action pairs. We study symmetry properties of the algorithm stemming from label-invariant stochastic games and, as a proof of concept, apply our algorithm to learning optimal trading strategies in competitive electronic markets.
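The analytical solvability comes from maximizing a Q-function that is quadratic in the action: for Q(s, a) = aᵀPa + qᵀa + c with P negative definite, the optimizer is a* = -½P⁻¹q. A minimal sketch in our own notation (in the paper's setting, networks would produce state-dependent coefficients):

```python
import numpy as np

def optimal_action(P, q):
    """argmax_a  a^T P a + q^T a  for negative-definite P:  a* = -0.5 * P^{-1} q."""
    return -0.5 * np.linalg.solve(P, q)

# Example: P = -I, q = (2, 4)  =>  a* = (1, 2)
a_star = optimal_action(P=-np.eye(2), q=np.array([2.0, 4.0]))
```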
We address two major obstacles to practical use of supervised classifiers on distributed private data. Whether a classifier was trained by a federation of cooperating clients or trained centrally out of distribution, (1) the output scores must be calibrated, and (2) performance metrics must be evaluated -- all without assembling labels in one place. In particular, we show how to perform calibration and compute precision, recall, accuracy and ROC-AUC in the federated setting under three privacy models: (i) secure aggregation, (ii) distributed differential privacy, and (iii) local differential privacy. Our theorems and experiments clarify tradeoffs between privacy, accuracy, and data efficiency. They also help decide whether a given application has sufficient data to support federated calibration and evaluation.
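As an illustration of why aggregated statistics suffice for several of these metrics: precision, recall, and accuracy depend on the labels only through confusion-matrix counts, so each client can contribute local counts and the server only ever sees their sums. A minimal sketch, with plain summation standing in for the secure-aggregation step:

```python
def local_counts(y_true, y_pred):
    """Per-client confusion-matrix counts (tp, fp, fn, tn) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def federated_metrics(client_counts):
    """Server-side metrics from aggregated counts only (no raw labels)."""
    tp, fp, fn, tn = (sum(c[i] for c in client_counts) for i in range(4))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

clients = [local_counts([1, 0, 1], [1, 0, 0]),   # client A
           local_counts([0, 1, 0], [1, 1, 0])]   # client B
precision, recall, accuracy = federated_metrics(clients)
```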
Many real-world strategic games involve interactions between multiple players. We study a hierarchical multi-player game structure, where players with asymmetric roles can be separated into leaders and followers, a setting often referred to as a Stackelberg game or leader-follower game. In particular, we focus on a Stackelberg game scenario where there are multiple leaders and a single follower, called the Multi-Leader-Single-Follower (MLSF) game. We propose a novel asymmetric equilibrium concept for the MLSF game called Correlated Stackelberg Equilibrium (CSE). We design online learning algorithms that enable the players to interact in a distributed manner, and prove that they achieve no-external Stackelberg-regret learning. This further translates to convergence to approximate CSE via a reduction from no-external regret to no-swap regret. At the core of our work, we solve the intricate problem of how to learn equilibrium in leader-follower games with noisy bandit feedback by balancing exploration and exploitation in different learning structures.
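The no-external-regret building block referenced here is standard; for instance, the multiplicative-weights (Hedge) learner below (a generic sketch, not the paper's bandit-feedback algorithm, which must additionally handle exploration) concentrates play on the action with the smallest cumulative loss.

```python
import math

def hedge(loss_sequence, eta=0.1):
    """Multiplicative weights: play w / sum(w), then reweight by exp(-eta * loss)."""
    k = len(loss_sequence[0])
    w = [1.0] * k
    plays = []
    for losses in loss_sequence:
        total = sum(w)
        plays.append([wi / total for wi in w])
        w = [wi * math.exp(-eta * l) for wi, l in zip(w, losses)]
    return plays

# Action 0 always suffers less loss; Hedge's play concentrates on it.
plays = hedge([[0.0, 1.0]] * 100)
```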
Motion planning and control are crucial components of robotics applications. Here, spatio-temporal hard constraints like system dynamics and safety boundaries (e.g., obstacles in automated driving) restrict the robot's motions. Direct methods from optimal control solve a constrained optimization problem. However, in many applications finding a proper cost function is inherently difficult because of the weighting of partially conflicting objectives. On the other hand, Imitation Learning (IL) methods such as Behavior Cloning (BC) provide an intuitive framework for learning decision-making from offline demonstrations and constitute a promising avenue for planning and control in complex robot applications. Prior work primarily relied on soft-constraint approaches, which use additional auxiliary loss terms describing the constraints. However, catastrophic safety-critical failures might occur in out-of-distribution (OOD) scenarios. This work integrates the flexibility of IL with hard constraint handling in optimal control. Our approach constitutes a general framework for constrained robotic motion planning and control using offline IL. Hard constraints are integrated into the learning problem in a differentiable manner, via explicit completion and gradient-based correction. Simulated experiments of mobile robot navigation and automated driving provide evidence for the performance of the proposed method.
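The gradient-based correction step can be illustrated on a single inequality constraint g(u) ≤ 0 (a toy sketch with our own names, not the paper's implementation, which operates inside the differentiable learning problem): repeatedly step the action along -∇g, scaled by the violation, until the constraint holds.

```python
def correct(u, g, grad_g, lr=0.05, steps=200):
    """Push u toward the feasible set {u : g(u) <= 0} by gradient steps
    on the constraint violation (a simple penalty-descent sketch)."""
    for _ in range(steps):
        violation = g(u)
        if violation <= 0.0:
            break  # hard constraint satisfied
        grad = grad_g(u)
        u = [ui - lr * violation * gi for ui, gi in zip(u, grad)]
    return u

# Example: correct an action into the unit disk, g(u) = ||u||^2 - 1.
g = lambda u: sum(ui * ui for ui in u) - 1.0
grad_g = lambda u: [2.0 * ui for ui in u]
u = correct([2.0, 0.0], g, grad_g)
```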
Stackelberg Equilibria arise naturally in a range of popular learning problems, such as in security games or automated mechanism design, and have received increasing attention in the reinforcement learning literature recently. We present a general framework for implementing Stackelberg Equilibria search as a multi-agent RL problem, allowing a wide range of design choices. We discuss how previous approaches can be seen as specific instantiations of this framework. As a key insight, we note that the design space allows for approaches not previously seen in the literature, for instance by leveraging multitask and meta-RL techniques for follower convergence. We evaluate examples of novel approaches predicted by our framework experimentally on standard benchmark domains. Finally, we discuss directions for future work implied by our framework.
This paper presents a succinct review of attempts in the literature to use game theory to model decision-making scenarios relevant to defence applications. Game theory has proven to be a very effective tool in modelling the decision-making processes of intelligent agents, entities, and players. It has been used to model scenarios from diverse fields such as economics, evolutionary biology, and computer science. In defence applications, there is often a need to model and predict the actions of hostile actors, and of players who try to evade or out-smart each other. Modelling how the actions of competitive players shape the decision making of each other is the forte of game theory. In recent decades, there have been several studies which applied different branches of game theory to model a range of defence-related scenarios. This paper provides a structured review of such attempts, and classifies the existing literature in terms of the kind of warfare modelled, the types of game used, and the players involved. The presented analysis provides a concise summary of the state-of-the-art with regard to the use of game theory in defence applications, and highlights the benefits and limitations of game theory in the considered scenarios.
Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures. To lower the technical thresholds for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial areas. This paper provides a review of the most essential topics on HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods to define the value range. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy especially for deep learning networks. This study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art searching algorithms, feasibility with major deep learning frameworks, and extensibility for new modules designed by users. The paper concludes with problems that exist when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
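As a reference point for the optimization algorithms surveyed, even the simplest HPO baseline, random search, fits in a few lines (an illustrative sketch; the objective and search space below are made up):

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Minimize objective(config) by sampling configs uniformly from `space`."""
    rng = random.Random(seed)
    best_cfg, best_val = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        val = objective(cfg)
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# Toy objective: pretend validation loss is minimized at lr = 0.1.
space = {"lr": [0.001, 0.01, 0.1, 1.0], "batch_size": [32, 64, 128]}
cfg, val = random_search(lambda c: (c["lr"] - 0.1) ** 2, space)
```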