We study an online version of the max-min fair allocation problem for indivisible items. In this problem, items arrive one by one, and each item must be allocated irrevocably on arrival to one of $n$ agents, who have additive valuations for the items. Our goal is to make the least happy agent as happy as possible. This is a fundamental and natural problem in the study of online allocation. Our main result reveals the asymptotic competitive ratios of the problem for both the adversarial and i.i.d. input models. We design a polynomial-time deterministic algorithm that is asymptotically $1/n$-competitive for the adversarial model, and we show that this guarantee is optimal. To this end, we first present a randomized algorithm with the same competitive ratio and then derandomize it. A natural derandomization fails to achieve the competitive ratio of $1/n$; we instead build the algorithm by introducing a novel technique. When the items are drawn i.i.d. from an unknown distribution, we construct a simple polynomial-time deterministic algorithm that outputs a nearly optimal allocation. We analyze the strict competitive ratio and show almost tight bounds for the solution. We further mention some implications of our results for variants of the problem.
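The online setting above can be made concrete with a minimal sketch. The code below is purely illustrative and is not the paper's $1/n$-competitive algorithm: it implements a naive greedy baseline (give each arriving item to the currently least happy agent) just to fix the model, where an allocation's quality is the egalitarian welfare, i.e., the minimum total value received by any agent.

```python
# Illustrative sketch of the online max-min allocation setting.
# NOT the paper's algorithm: a naive greedy baseline that irrevocably
# assigns each arriving item to the currently poorest agent.

def greedy_online_allocation(n, item_values):
    """item_values: iterable of length-n lists; item_values[t][i] is
    agent i's additive value for item t. Returns the allocation and
    the egalitarian (max-min) welfare achieved."""
    totals = [0.0] * n          # running utility of each agent
    allocation = []
    for values in item_values:
        # give the item to an agent with minimum current utility
        poorest = min(range(n), key=lambda i: totals[i])
        totals[poorest] += values[poorest]
        allocation.append(poorest)
    return allocation, min(totals)

alloc, welfare = greedy_online_allocation(
    2, [[1.0, 0.5], [0.2, 0.8], [0.6, 0.6]])
```

An adversary can make such a greedy rule perform poorly, which is exactly why the competitive analysis in the abstract is nontrivial.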
The recently proposed open-radio access network (O-RAN) architecture embraces cloudification and network function virtualization techniques to perform baseband function processing by disaggregated radio units (RUs), distributed units (DUs), and centralized units (CUs). This enables the cloud-RAN vision in full, where mobile network operators (MNOs) can install their own RUs and then, as the load varies over the day, lease on-demand computational resources for the processing of DU and CU functions from commonly available open-cloud (O-Cloud) servers via open x-haul interfaces. This creates a multi-tenant scenario in which multiple MNOs share networking as well as computational resources. In this paper, we propose a framework that dynamically allocates x-haul and DU/CU resources in a multi-tenant O-RAN ecosystem with min-max (dual of max-min) fairness. This framework ensures that a maximum number of RUs receive sufficient resources while minimizing the OPEX for their MNOs. Moreover, to provide an access network architecture capable of sustaining low latency and high capacity between RUs and edge-computing devices, we consider time-wavelength division multiplexed (TWDM) passive optical network (PON)-based x-haul interfaces, where the PON virtualization technique is used to provide a direct optical connection between end-points. This creates a virtual mesh interconnection among all the nodes, such that the RUs can be connected to the Edge-Clouds at macro-cell RU locations as well as to the O-Cloud servers at central office locations. Furthermore, we analyze the system performance of our proposed framework and show that MNOs can operate with better cost-efficiency than under uniform resource allocation.
This paper presents a tractable sufficient condition for the consistency of maximum likelihood estimators (MLEs) in partially observed diffusion models, stated in terms of stationary distributions of the associated test processes, under the assumption that the set of unknown parameter values is finite. We illustrate the tractability of this sufficient condition by verifying it in the context of a latent price model of market microstructure. Finally, we describe an algorithm for computing MLEs in partially observed diffusion models and test it on historical data to estimate the parameters of the latent price model.
We present an analytical policy update rule that is independent of parameterized function approximators. The update rule is suitable for general stochastic policies and carries a monotonic improvement guarantee. It is derived from a closed-form trust-region solution using the calculus of variations, following a new theoretical result that tightens existing bounds for policy search using trust-region methods. We provide an explanation that connects the policy update rule to value-function methods. Based on a recursive form of the update rule, an off-policy algorithm is derived naturally, and the monotonic improvement guarantee is preserved. Furthermore, the update rule extends immediately to multi-agent systems when updates are performed by one agent at a time.
We study the problem of allocating indivisible goods among agents in a fair manner. While envy-free allocations of indivisible goods are not guaranteed to exist, envy-freeness can be achieved by additionally providing some subsidy to the agents. These subsidies can alternatively be viewed as a divisible good (money) that is fractionally assigned among the agents to realize an envy-free outcome. In this setup, we bound the subsidy required to attain envy-freeness among agents with dichotomous valuations, i.e., among agents whose marginal value for any good is either zero or one. We prove that, under dichotomous valuations, there exists an allocation that achieves envy-freeness with a per-agent subsidy of either $0$ or $1$. Furthermore, such an envy-free solution can be computed efficiently in the standard value-oracle model. Notably, our results hold for general dichotomous valuations and, in particular, do not require the (dichotomous) valuations to be additive, submodular, or even subadditive. Also, our subsidy bounds are tight and provide a linear (in the number of agents) factor improvement over the bounds known for general monotone valuations.
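To make the fairness criterion concrete, the sketch below checks whether a given allocation plus per-agent subsidies is envy-free: agent $i$ envies agent $j$ if $v_i(A_j) + p_j > v_i(A_i) + p_i$. For simplicity the example uses additive dichotomous valuations; this is an illustrative assumption, as the abstract's result holds for general (not necessarily additive) dichotomous valuations accessed through a value oracle.

```python
# Hedged sketch: verifying envy-freeness with subsidies.
# Agent i envies agent j iff v_i(A_j) + p_j > v_i(A_i) + p_i.
# Additive 0/1 valuations are used here purely for illustration.

def is_envy_free_with_subsidy(values, bundles, subsidy):
    """values[i][g] in {0, 1}: agent i's value for good g (additive here);
    bundles[i]: set of goods held by agent i; subsidy[i]: payment to i."""
    n = len(bundles)
    def v(i, bundle):
        return sum(values[i][g] for g in bundle)
    return all(
        v(i, bundles[i]) + subsidy[i] >= v(i, bundles[j]) + subsidy[j]
        for i in range(n) for j in range(n))

# Two agents who both value all three goods; agent 1 holds fewer goods,
# so a subsidy of 1 restores envy-freeness.
values = [[1, 1, 1], [1, 1, 1]]
bundles = [{0, 1}, {2}]
print(is_envy_free_with_subsidy(values, bundles, [0, 1]))  # True
print(is_envy_free_with_subsidy(values, bundles, [0, 0]))  # False
```

The example also illustrates the abstract's bound: a per-agent subsidy of $0$ or $1$ suffices here.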
Platforms often host multiple online groups with overlapping topics and members. How can researchers and designers understand how related groups affect each other? Inspired by population ecology, prior research in social computing and human-computer interaction has studied related groups by correlating group size with degrees of overlap in content and membership, but has produced puzzling results: overlap is associated with competition in some contexts but with mutualism in others. We suggest that this inconsistency results from aggregating intergroup relationships into an overall environmental effect that obscures the diversity of competition and mutualism among related groups. Drawing on the framework of community ecology, we introduce a time-series method for inferring competition and mutualism. We then use this framework to inform a large-scale analysis of clusters of subreddits that all have high user overlap. We find that mutualism is more common than competition.
Schelling's model considers $k$ types of agents each of whom needs to select a vertex on an undirected graph, where every agent prefers to neighbor agents of the same type. We are motivated by a recent line of work that studies solutions that are optimal with respect to notions related to the welfare of the agents. We explore the parameterized complexity of computing such solutions. We focus on the well-studied notions of welfare optimality (WO) and Pareto optimality (PO), alongside the recently proposed notions of group-welfare optimality (GWO) and utility-vector optimality (UVO), both of which lie between WO and PO. Firstly, we focus on the fundamental case where $k=2$ and there are $r$ red agents and $b$ blue agents. We show that all solution-notions we consider are $\textsf{NP}$-hard to compute even when $b=1$ and that they are $\textsf{W}[1]$-hard when parameterized by $r$ and $b$. In addition, we show that WO and GWO are $\textsf{NP}$-hard even on cubic graphs. We complement these negative results with an $\textsf{FPT}$ algorithm parameterized by $r$, $b$, and the maximum degree of the graph. For the general case with $k$ types of agents, we prove that, for any of the notions we consider, the problem is $\textsf{W}[1]$-hard when parameterized by $k$ for a large family of graphs that includes trees. We accompany these negative results with an $\textsf{XP}$ algorithm parameterized by $k$ and the treewidth of the graph.
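The welfare notions above rest on each agent's utility in a placement. The sketch below assumes the utility model commonly used in this line of work, namely the fraction of an agent's occupied neighbors that share its type, and sums these utilities into the social welfare that WO maximizes; the specific utility definition is an assumption for illustration.

```python
# Illustrative sketch (assumed utility model): an agent's utility is the
# fraction of its occupied neighbors sharing its type; social welfare is
# the sum of all agents' utilities.

def social_welfare(adj, placement):
    """adj: dict vertex -> set of neighboring vertices;
    placement: dict vertex -> agent type (e.g. 'r'/'b') for occupied
    vertices. Unoccupied vertices simply do not appear in placement."""
    welfare = 0.0
    for v, t in placement.items():
        occupied = [u for u in adj[v] if u in placement]
        if occupied:
            welfare += sum(placement[u] == t for u in occupied) / len(occupied)
    return welfare

# Path graph 0-1-2-3 with two red ('r') and two blue ('b') agents,
# segregated into two same-type blocks.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
placement = {0: 'r', 1: 'r', 2: 'b', 3: 'b'}
print(social_welfare(adj, placement))  # 3.0
```

Finding a placement maximizing this quantity is exactly the WO problem whose parameterized complexity the abstract studies.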
Given an $n$-point metric space $(\mathcal{X},d)$ where each point belongs to one of $m=O(1)$ different categories or groups and a set of integers $k_1, \ldots, k_m$, the fair Max-Min diversification problem is to select $k_i$ points belonging to category $i\in [m]$, such that the minimum pairwise distance between selected points is maximized. The problem was introduced by Moumoulidou et al. [ICDT 2021] and is motivated by the need to down-sample large data sets in various applications so that the derived sample achieves a balance over diversity, i.e., the minimum distance between a pair of selected points, and fairness, i.e., ensuring enough points of each category are included. We prove the following results: 1. We first consider general metric spaces. We present a randomized polynomial time algorithm that returns a factor $2$-approximation to the diversity but only satisfies the fairness constraints in expectation. Building upon this result, we present a $6$-approximation that is guaranteed to satisfy the fairness constraints up to a factor $1-\epsilon$ for any constant $\epsilon$. We also present a linear time algorithm returning an $m+1$ approximation with exact fairness. The best previous result was a $3m-1$ approximation. 2. We then focus on Euclidean metrics. We first show that the problem can be solved exactly in one dimension. For constant dimensions, categories and any constant $\epsilon>0$, we present a $1+\epsilon$ approximation algorithm that runs in $O(nk) + 2^{O(k)}$ time where $k=k_1+\ldots+k_m$. We can improve the running time to $O(nk)+ \mathrm{poly}(k)$ at the expense of only picking $(1-\epsilon) k_i$ points from category $i\in [m]$. Finally, we present algorithms suitable for processing massive data sets, including single-pass data stream algorithms and composable coresets for distributed processing.
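The objective being approximated can be stated in a few lines of code. The sketch below is an exhaustive search over all feasible selections, exponential in $n$ and intended only to pin down the problem definition; it is not one of the approximation algorithms from the abstract.

```python
# Tiny brute-force sketch of the fair Max-Min diversification objective:
# pick exactly k[c] points from each category c so as to maximize the
# minimum pairwise distance. Exhaustive search, for illustration only.
from itertools import combinations

def fair_max_min(points, dist, groups, k):
    """points: list of point ids; dist(p, q): metric on points;
    groups[p]: category of p; k[c]: required count from category c."""
    by_cat = {c: [p for p in points if groups[p] == c] for c in k}

    def candidates(cats):
        if not cats:
            yield ()
            return
        c, rest = cats[0], cats[1:]
        for combo in combinations(by_cat[c], k[c]):
            for tail in candidates(rest):
                yield combo + tail

    best_val, best_sel = float('-inf'), None
    for sel in candidates(list(k)):
        val = min((dist(p, q) for p, q in combinations(sel, 2)),
                  default=float('inf'))
        if val > best_val:
            best_val, best_sel = val, set(sel)
    return best_val, best_sel

# 1-D example with two categories; pick one point from each.
pts = [0.0, 1.0, 5.0, 6.0]
grp = {0.0: 'a', 1.0: 'b', 5.0: 'a', 6.0: 'b'}
val, sel = fair_max_min(pts, lambda p, q: abs(p - q), grp, {'a': 1, 'b': 1})
```

Even this one-dimensional instance shows the tension the abstract describes: the fairness constraints may force a less diverse selection than unconstrained Max-Min diversification would pick.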
Niching methods have been developed to maintain population diversity, to investigate many peaks in parallel, and to reduce the effect of genetic drift. We present the first rigorous runtime analyses of restricted tournament selection (RTS), embedded in a ($\mu$+1) EA, and analyse its effectiveness at finding both optima of the bimodal function ${\rm T{\small WO}M{\small AX}}$. In RTS, an offspring competes against the closest individual, with respect to some distance measure, amongst $w$ (window size) population members (chosen uniformly at random with replacement), to encourage competition within the same niche. We prove that RTS finds both optima on ${\rm T{\small WO}M{\small AX}}$ efficiently if the window size $w$ is large enough. However, if $w$ is too small, RTS fails to find both optima even in exponential time, with high probability. We further consider a variant of RTS selecting individuals for the tournament \emph{without} replacement. It yields a more diverse tournament and is more effective at preventing one niche from taking over the other. However, this comes at the expense of slower progress towards the optima when a niche collapses to a single individual. Our theoretical results are accompanied by experimental studies that shed light on parameters not covered by the theoretical results and support a conjectured lower runtime bound.
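The RTS mechanism described above can be sketched as a single replacement step. The code below assumes bit-string individuals with Hamming distance as the distance measure and a generic fitness function; these are illustrative choices consistent with the TwoMax setting, not a definitive implementation of the analysed algorithm.

```python
# Hedged sketch of one restricted tournament selection (RTS) step inside
# a (mu+1) EA: the offspring competes against the closest (here: Hamming
# distance) of w population members sampled uniformly at random WITH
# replacement, and replaces it if at least as fit.
import random

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def rts_step(population, offspring, fitness, w, rng=random):
    """Mutates population in place: the closest of w sampled members is
    replaced if the offspring is at least as fit; otherwise the
    offspring is discarded."""
    window = [rng.randrange(len(population)) for _ in range(w)]
    closest = min(window, key=lambda i: hamming(population[i], offspring))
    if fitness(offspring) >= fitness(population[closest]):
        population[closest] = offspring
    return population

# TwoMax: maximised by both the all-zeros and the all-ones string.
twomax = lambda x: max(sum(x), len(x) - sum(x))
pop = [[0, 0, 1, 0], [1, 1, 0, 1]]
rts_step(pop, [1, 1, 1, 1], twomax, w=2, rng=random.Random(0))
```

The variant without replacement discussed in the abstract would sample the window via `rng.sample` instead, yielding a more diverse tournament.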
Join query evaluation with ordering is a fundamental data processing task in relational database management systems. SQL and custom graph query languages such as Cypher offer this functionality by allowing users to specify the order via the ORDER BY clause. In many scenarios, the users also want to see the first $k$ results quickly (expressed by the LIMIT clause), but the value of $k$ is not predetermined as user queries arrive in an online fashion. Recent work has made considerable progress in identifying optimal algorithms for ranked enumeration of join queries that do not contain any projections. In this paper, we initiate the study of the problem of enumerating results in ranked order for queries with projections. Our main result shows that for any acyclic query, it is possible to obtain a near-linear (in the size of the database) delay algorithm after only a linear time preprocessing step for two important ranking functions: sum and lexicographic ordering. For a practical subset of acyclic queries known as star queries, we show an even stronger result that allows a user to obtain a smooth tradeoff between faster answering time guarantees and more preprocessing time. Our results also extend to queries containing cycles and unions. We also perform a comprehensive experimental evaluation to demonstrate that our algorithms, which are simple to implement, improve running time by up to three orders of magnitude over state-of-the-art algorithms implemented within open-source RDBMS and specialized graph databases.
We investigate the problem of fair recommendation in the context of two-sided online platforms, comprising customers on one side and producers on the other. Traditionally, recommendation services in these platforms have focused on maximizing customer satisfaction by tailoring the results according to the personalized preferences of individual customers. However, our investigation reveals that such customer-centric design may lead to an unfair distribution of exposure among the producers, which may adversely impact their well-being. On the other hand, a producer-centric design might become unfair to the customers. Thus, we consider fairness issues that span both customers and producers. Our approach involves a novel mapping of the fair recommendation problem to a constrained version of the problem of fairly allocating indivisible goods. Our proposed FairRec algorithm guarantees at least the Maximin Share (MMS) of exposure for most of the producers and Envy-Free up to One item (EF1) fairness for every customer. Extensive evaluations over multiple real-world datasets show the effectiveness of FairRec in ensuring two-sided fairness while incurring a marginal loss in the overall recommendation quality.