We study the problem of maximizing Nash welfare (MNW) while allocating indivisible goods to asymmetric agents. The Nash welfare of an allocation is the weighted geometric mean of agents' utilities, and the allocation with maximum Nash welfare is known to satisfy several desirable fairness and efficiency properties. However, computing such an MNW allocation is APX-hard (hard to approximate) in general, even when agents have additive valuation functions. Hence, we aim to identify tractable classes that admit either a polynomial-time approximation scheme (PTAS) or an exact polynomial-time algorithm. To this end, we design a PTAS for finding an MNW allocation for the case of asymmetric agents with identical, additive valuations, thus generalizing a similar result for symmetric agents. Our techniques can also be adapted to give a PTAS for the problem of computing the optimal $p$-mean welfare. We also show that an MNW allocation can be computed exactly in polynomial time for identical agents with $k$-ary valuations, in which every agent has at most $k$ different values for the goods, provided $k$ is a constant. Next, we consider the special case where every agent finds at most two goods valuable, and show that this class admits an efficient algorithm, even for general monotone valuations. In contrast, we show that when agents can value three or more goods, maximizing Nash welfare is APX-hard, even when agents are symmetric and have additive valuations. Finally, we show that for constantly many asymmetric agents with additive valuations, the MNW problem admits a fully polynomial-time approximation scheme (FPTAS).
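Concretely, writing $w_i > 0$ for agent $i$'s entitlement (normalized so that $\sum_i w_i = 1$) and $u_i(A_i)$ for her utility under allocation $A$, the two objectives take their standard forms
\[
\mathrm{NW}(A) = \prod_{i=1}^{n} u_i(A_i)^{w_i}, \qquad M_p(A) = \Big( \sum_{i=1}^{n} w_i \, u_i(A_i)^{p} \Big)^{1/p},
\]
and Nash welfare is recovered from the $p$-mean family in the limit $p \to 0$.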
We study the robust maximum flow problem and the robust maximum flow over time problem, in which a given number $\Gamma$ of arcs may fail or be delayed. Two prominent models have been introduced for these problems: either one assigns flow to arcs, fulfilling weak flow conservation in every scenario, or one assigns flow to paths, so that an arc failure or delay affects a whole path. We provide a unifying framework by presenting novel general models in which we assign flow to subpaths. These models contain the known models as special cases and combine their advantages in order to obtain less conservative robust solutions. We give a thorough complexity analysis of the general models. In particular, we show that the general models are essentially NP-hard, whereas, for example, in the static case with $\Gamma = 1$ an optimal solution can be computed in polynomial time. Further, we answer the open question about the complexity of the dynamic path model for $\Gamma = 1$. We also compare the solution quality of the different models. Specifically, we show that the general models have better robust optimal values than the known models, and we prove bounds on these gaps.
We introduce the maximum $n$-times coverage problem that selects $k$ overlays to maximize the summed coverage of weighted elements, where each element must be covered at least $n$ times. We also define the min-cost $n$-times coverage problem where the objective is to select the minimum set of overlays such that the sum of the weights of elements that are covered at least $n$ times is at least $\tau$. Maximum $n$-times coverage is a generalization of the multi-set multi-cover problem, is NP-complete, and its objective function is not submodular. We introduce two new practical solutions for $n$-times coverage based on integer linear programming and sequential greedy optimization. We show that maximum $n$-times coverage is a natural way to frame peptide vaccine design, and find that it produces a pan-strain COVID-19 vaccine design that is superior to 29 other published designs in predicted population coverage and the expected number of peptides displayed by each individual's HLA molecules.
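To make the sequential greedy heuristic concrete, here is a minimal sketch; the encoding of overlays as multisets and the tie-breaking are our assumptions for illustration, not necessarily the paper's implementation.

```python
from collections import Counter

def greedy_n_times_coverage(overlays, weights, n, k):
    """Sequential greedy for maximum n-times coverage (illustrative sketch).

    overlays: list of Counters, each mapping element -> number of times
              that overlay covers the element.
    weights:  dict mapping element -> nonnegative weight.
    n, k:     coverage threshold and number of overlays to select.
    Returns the indices of the chosen overlays.
    """
    def objective(counts):
        # Total weight of elements covered at least n times.
        return sum(w for e, w in weights.items() if counts[e] >= n)

    chosen, counts = [], Counter()
    for _ in range(k):
        # Greedily pick the unchosen overlay with the largest marginal gain.
        best, best_gain = None, -1.0
        for i, overlay in enumerate(overlays):
            if i in chosen:
                continue
            gain = objective(counts + overlay) - objective(counts)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        chosen.append(best)
        counts += overlays[best]
    return chosen
```

Since the objective is not submodular, this greedy step carries no standard $(1-1/e)$ guarantee; it is a practical heuristic, complemented by the integer linear programming formulation.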
We study the equilibrium computation problem for two classical resource allocation games: atomic splittable congestion games and multimarket Cournot oligopolies. For atomic splittable congestion games with singleton strategies and player-specific affine cost functions, we devise the first polynomial time algorithm computing a pure Nash equilibrium. Our algorithm is combinatorial and computes the exact equilibrium assuming rational input. The idea is to compute an equilibrium for an associated integrally-splittable singleton congestion game, in which the players can only split their demands in integral multiples of a common packet size. While integral games have been considered in the literature before, no polynomial time algorithm computing an equilibrium was known. For this class, too, we devise the first polynomial time algorithm and use it as a building block for our main algorithm. We then develop a polynomial time computable transformation mapping a multimarket Cournot competition game with firm-specific affine price functions and quadratic costs to an associated atomic splittable congestion game as described above. The transformation preserves equilibria in both games and thus leads -- via our first algorithm -- to a polynomial time algorithm computing Cournot equilibria. Finally, our analysis for integrally-splittable games implies new bounds on the difference between real and integral Cournot equilibria. The bounds can be seen as a generalization of the recent bounds for single market oligopolies obtained by Todd [2016].
We consider generalized Nash equilibrium problems (GNEPs) with non-convex strategy spaces and non-convex cost functions. This general class of games includes the important case of games with mixed-integer variables, for which only a few results are known in the literature. We present a new approach to characterize equilibria via a convexification technique using the Nikaido-Isoda function. For any given instance of the GNEP, we construct a set of convexified instances and show that a feasible strategy profile is an equilibrium for the original instance if and only if it is an equilibrium for any convexified instance and the convexified cost functions coincide with the initial ones. We further develop this approach along three dimensions. First, we show that for quasi-linear models, in which a convexified instance exists where, for fixed strategies of the opponent players, the cost function of every player is linear and the respective strategy space is polyhedral, the convexification reduces the GNEP to a standard (non-linear) optimization problem. Second, we derive two complete characterizations of those GNEPs for which the convexification leads to a jointly constrained or a jointly convex GNEP, respectively. These characterizations require new concepts related to the interplay of the convex hull operator applied to restricted subsets of feasible strategies and may be of independent interest. Finally, we demonstrate the applicability of our results by presenting a numerical study on the computation of equilibria for a class of integral network flow GNEPs.
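For reference, in the cost-minimization convention (our notation; several equivalent forms appear in the literature), the Nikaido-Isoda function of a game with cost functions $\pi_i$ is
\[
\Psi(x, y) = \sum_{i=1}^{N} \big[ \pi_i(x_i, x_{-i}) - \pi_i(y_i, x_{-i}) \big],
\]
the total cost saving when each player unilaterally deviates from $x_i$ to $y_i$; a feasible profile $x$ is an equilibrium precisely when $\Psi(x, y) \le 0$ for every $y$ whose components are unilaterally feasible deviations.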
We consider the problem of allocating multiple indivisible items to a set of networked agents to maximize the social welfare subject to network externalities. Here, the social welfare is given by the sum of agents' utilities, and externalities capture the effect that one user of an item has on the item's value to others. We first provide a general formulation that captures some of the existing models as special cases. We then show that the social welfare maximization problem enjoys certain diminishing or increasing marginal return properties. This allows us to devise polynomial-time approximation algorithms using the Lovász extension and the multilinear extension of the objective functions. Our principled approach recovers or improves some of the existing algorithms and provides a simple and unifying framework for maximizing social welfare subject to network externalities.
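For a set function $f : 2^V \to \mathbb{R}$, the two extensions used here are standard:
\[
f^{L}(x) = \mathbb{E}_{\theta \sim U[0,1]}\big[ f(\{ i : x_i \ge \theta \}) \big], \qquad F(x) = \sum_{S \subseteq V} f(S) \prod_{i \in S} x_i \prod_{i \notin S} (1 - x_i),
\]
the Lovász extension and the multilinear extension of $f$, respectively; the Lovász extension is convex exactly when $f$ is submodular, which is what makes the marginal-return properties above algorithmically useful.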
The standard game-theoretic solution concept, Nash equilibrium, assumes that all players behave rationally. If we follow a Nash equilibrium and our opponents are irrational (or follow strategies from a different Nash equilibrium), then we may obtain an extremely low payoff. On the other hand, a maximin strategy assumes that all opposing agents are playing to minimize our payoff (even if it is not in their best interest), and ensures the maximal possible worst-case payoff, but results in exceedingly conservative play. We propose a new solution concept called safe equilibrium that models opponents as behaving rationally with a specified probability and behaving potentially arbitrarily with the remaining probability. We prove that a safe equilibrium exists in all strategic-form games (for all possible values of the rationality parameters), and prove that its computation is PPAD-hard. We present exact algorithms for computing a safe equilibrium in both two-player and $n$-player games, as well as scalable approximation algorithms.
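Informally, in the two-player case with rationality parameter $p \in [0,1]$, each player $i$'s strategy in a safe equilibrium solves
\[
\max_{\sigma_i} \; p \, u_i(\sigma_i, \sigma_{-i}) + (1 - p) \min_{\sigma'_{-i}} u_i(\sigma_i, \sigma'_{-i}),
\]
where $\sigma_{-i}$ is the opponent's (rational) equilibrium strategy; this rendering is our paraphrase of the concept, with $p = 1$ recovering Nash equilibrium and $p = 0$ the maximin strategy.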
In a single-parameter mechanism design problem, a provider is looking to sell a service to a group of potential buyers. Each buyer $i$ has a private value $v_i$ for receiving the service, and a feasibility constraint restricts which sets of buyers can be served simultaneously. Recent work in economics introduced clock auctions as a superior class of auctions for this problem, due to their transparency, simplicity, and strong incentive guarantees. Subsequent work focused on evaluating the social welfare approximation guarantees of these auctions, leading to strong impossibility results: in the absence of prior information regarding the buyers' values, no deterministic clock auction can achieve a bounded approximation, even for simple feasibility constraints with only two maximal feasible sets. We show that these negative results can be circumvented by using prior information or by leveraging randomization. We provide clock auctions that give an $O(\log\log k)$ approximation for general downward-closed feasibility constraints with $k$ maximal feasible sets, for three different information models ranging from full access to the value distributions to complete absence of information. The more information the seller has, the simpler these auctions are. Under full access, we use a particularly simple deterministic clock auction, called a single-price clock auction, which is only slightly more complex than posted price mechanisms. In this auction, each buyer is offered a single price and a feasible set is selected among those who accept their offers. At the other extreme, where no prior information is available, this approximation guarantee is obtained using a complex randomized clock auction. In addition to our main results, we propose a parameterization that interpolates between single-price clock auctions and general clock auctions, paving the way for an exciting line of future research.
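To make the single-price clock auction concrete, the following minimal sketch simulates one such auction; the selection rule among accepting buyers (maximizing offered revenue over the maximal feasible sets) is our illustrative assumption, not necessarily the rule analyzed above.

```python
def single_price_clock_auction(prices, values, maximal_feasible_sets):
    """Illustrative single-price clock auction (a sketch, not the paper's exact rule).

    prices: dict buyer -> the single price offered to that buyer.
    values: dict buyer -> private value (used only to simulate accept/reject).
    maximal_feasible_sets: list of sets of buyers that can be served together.
    Returns (served buyers, payments).
    """
    # Each buyer truthfully accepts iff the offered price is at most their value.
    accepted = {i for i, p in prices.items() if values[i] >= p}
    # Serve a feasible subset of the accepting buyers; here we pick the maximal
    # feasible set whose accepting members yield the largest total revenue
    # (an assumed selection rule for illustration).
    best = max(maximal_feasible_sets,
               key=lambda S: sum(prices[i] for i in S & accepted))
    served = best & accepted
    return served, {i: prices[i] for i in served}
```

Because the feasibility constraint is downward-closed, every subset of a maximal feasible set is feasible, so the served set is always feasible.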
The scope of this paper is the analysis and approximation of an optimal control problem related to the Allen-Cahn equation. A tracking functional is minimized subject to the Allen-Cahn equation, using distributed controls that satisfy point-wise control constraints. First and second order necessary and sufficient conditions are proved. The lowest order discontinuous Galerkin (in time) scheme is considered for the approximation of the control-to-state and adjoint-state mappings. Under a suitable restriction on the maximum size of the temporal and spatial discretization parameters $k$ and $h$, respectively, in terms of the parameter $\epsilon$ that describes the thickness of the interface layer, a priori estimates are proved with constants depending polynomially on $1/\epsilon$. Unlike previous works on the uncontrolled Allen-Cahn problem, our approach does not rely on constructing an approximation of the spectral estimate, and as a consequence our estimates are valid under the low regularity assumptions imposed by the optimal control setting. These estimates are also valid in cases where the solution and its discrete approximation do not satisfy uniform space-time bounds independent of $\epsilon$. These estimates, together with a suitable localization technique via the second order condition (see \cite{Arada-Casas-Troltzsch_2002,Casas-Mateos-Troltzsch_2005,Casas-Raymond_2006,Casas-Mateos-Raymond_2007}), allow us to prove error estimates for the difference between local optimal controls and their discrete approximations, as well as between the associated state and adjoint state variables and their discrete approximations.
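In its standard form (our rendering of the setting; the exact scaling and functional may differ from the paper's), the problem reads
\[
\min_{u} \; \tfrac{1}{2} \| y - y_d \|_{L^2}^2 + \tfrac{\alpha}{2} \| u \|_{L^2}^2 \quad \text{subject to} \quad \partial_t y - \Delta y + \epsilon^{-2} \big( y^3 - y \big) = u, \qquad u_a \le u \le u_b \ \text{a.e.},
\]
where $y_d$ is the tracking target and the box constraints realize the point-wise control constraints mentioned above.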
Assortment optimization describes a retailer's general problem of deciding which variants in a product category to offer. In a typical formulation, there is a universe of substitute products whose prices have been pre-determined, and a model for how customers choose between these products. The goal is to find a subset to offer that maximizes aggregate revenue. In this paper we ask whether offering an assortment is actually optimal, given the recent emergence of more sophisticated selling practices, such as offering certain products only through lotteries. To formalize this question, we introduce a mechanism design problem where the items have fixed prices and the seller optimizes over (randomized) allocations. The seller has a Bayesian prior on the buyer's ranking of the items along with an outside option. Under our formulation, revenue maximization over deterministic mechanisms is equivalent to assortment optimization, while randomized mechanisms allow for lotteries that sell fixed-price items. We derive a sufficient condition, based purely on the buyer's ranking distribution, that guarantees assortments to be optimal within this larger class of randomized mechanisms. Our sufficient condition captures many preference distributions commonly studied in the assortment optimization literature -- Multinomial Logit (MNL), Markov Chain, Tversky's Elimination by Aspects model, a mixture of MNL with an Independent Demand model, and simple cases of Nested Logit. When our condition does not hold, we also bound the suboptimality of assortments in comparison to lotteries. Finally, from these results emerge two findings of independent interest: an example showing that Nested Logit is not captured by Markov Chain choice models, and a tighter Linear Programming relaxation for assortment optimization.
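For instance, under the MNL model with preference weights $v_i$ and an outside option of weight $v_0$, the assortment optimization benchmark is
\[
\max_{S \subseteq [n]} \; \sum_{i \in S} p_i \, \frac{v_i}{v_0 + \sum_{j \in S} v_j},
\]
where $p_i$ is the fixed price of item $i$; the question above is when randomizing over such offers can beat the best deterministic $S$.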
Implicit probabilistic models are models that are naturally defined in terms of a sampling procedure, and they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, yet can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.