
The organiser of the UEFA Champions League, one of the most prestigious football tournaments in the world, faces a non-trivial mechanism design problem each autumn: how to choose a perfect matching in a balanced bipartite graph at random. For the sake of credibility and transparency, the Round of 16 draw consists of a sequence of discrete uniform choices from two urns whose compositions are dynamically updated with computer assistance. Even though the adopted mechanism does not induce a uniform distribution over all valid assignments, a recent result shows that it resembles the fairest possible lottery. We challenge this finding by analysing the effect of reversing the order of the urns. The optimal draw procedure is shown to depend primarily on the lexicographic order of the degree sequences of the two sets of teams. An example is provided where exchanging the urns reduces unfairness by one-third on average and almost halves the worst bias over all pairs of teams. Nonetheless, the current policy of starting the draw with the runners-up remains the best option if the draw order must be fixed before the national associations of the clubs are known.
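For concreteness, the urn mechanism described above can be simulated roughly as follows. This is a minimal sketch, assuming the usual draw constraints (a runner-up cannot be paired with a winner from the same group or national association, encoded in `allowed`) and assuming the initial instance admits at least one valid assignment; the helper names and the feasibility check are illustrative and not the official UEFA draw software.

```python
import random

def has_perfect_matching(runners, winners, allowed):
    """Check, via simple augmenting-path matching, whether the remaining
    runners-up can all be paired with distinct admissible group winners."""
    match = {}  # winner -> runner

    def try_assign(r, seen):
        for w in winners:
            if (r, w) in allowed and w not in seen:
                seen.add(w)
                if w not in match or try_assign(match[w], seen):
                    match[w] = r
                    return True
        return False

    return all(try_assign(r, set()) for r in runners)

def draw(runners, winners, allowed, rng=random):
    """One simulated Round-of-16 draw: repeatedly pick a runner-up uniformly,
    then pick uniformly among the winners that keep the draw completable.
    Assumes the full instance admits at least one perfect matching."""
    pairing = {}
    runners, winners = list(runners), list(winners)
    while runners:
        r = rng.choice(runners)
        feasible = [w for w in winners
                    if (r, w) in allowed
                    and has_perfect_matching([x for x in runners if x != r],
                                             [y for y in winners if y != w],
                                             allowed)]
        w = rng.choice(feasible)
        pairing[r] = w
        runners.remove(r)
        winners.remove(w)
    return pairing
```

Reversing the order of the urns, the question studied above, corresponds to swapping the roles of `runners` and `winners` in this simulation.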

Related content

A machine-learned system that is fair in static decision-making tasks may have biased societal impacts in the long run. This may happen when the system interacts with humans and feedback patterns emerge, reinforcing old biases in the system and creating new ones. While existing works try to identify and mitigate long-run biases through smart system design, we introduce techniques for monitoring fairness in real time. Our goal is to build and deploy a monitor that will continuously observe a long sequence of events generated by the system in the wild and will output, with each event, a verdict on how fair the system is at the current point in time. The advantages of monitoring are twofold. Firstly, fairness is evaluated at run time, which is important because unfair behaviors may not be eliminated a priori, at design time, due to partial knowledge about the system and the environment, as well as uncertainties and dynamic changes in the system and the environment, such as the unpredictability of human behavior. Secondly, monitors are by design oblivious to how the monitored system is constructed, which makes them suitable to be used as trusted third-party fairness watchdogs. They function as computationally lightweight statistical estimators, and their correctness proofs rely on a rigorous analysis of the stochastic process that models the assumptions about the underlying dynamics of the system. We show, both in theory and in experiments, how monitors can warn us (1) if a bank's credit policy over time has created an unfair distribution of credit scores among the population, and (2) if a resource allocator's allocation policy over time has made unfair allocations. Our experiments demonstrate that the monitors introduce very low overhead. We believe that runtime monitoring is an important and mathematically rigorous new addition to the fairness toolbox.
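As a rough illustration of what such a lightweight statistical monitor might look like, the sketch below watches a stream of (group, decision) events and reports a confidence interval for the gap in positive-decision rates between two groups. The specific fairness property and the Hoeffding-style bound are assumptions made for this example, not the construction used in the paper.

```python
import math

class FairnessMonitor:
    """Illustrative runtime fairness monitor: tracks the gap in
    positive-decision rates between groups "A" and "B" over an event stream
    and returns a Hoeffding-style confidence interval for that gap."""

    def __init__(self, delta=0.05):
        self.delta = delta                        # failure probability of the interval
        self.counts = {"A": [0, 0], "B": [0, 0]}  # group -> [positives, total]

    def observe(self, group, decision):
        """Update with one event and return the current verdict."""
        pos, tot = self.counts[group]
        self.counts[group] = [pos + int(decision), tot + 1]
        return self.verdict()

    def verdict(self):
        (pa, na), (pb, nb) = self.counts["A"], self.counts["B"]
        if na == 0 or nb == 0:
            return None  # not enough evidence yet
        gap = pa / na - pb / nb
        # Hoeffding half-widths for each group's rate, combined conservatively.
        eps = (math.sqrt(math.log(4 / self.delta) / (2 * na))
               + math.sqrt(math.log(4 / self.delta) / (2 * nb)))
        return (gap - eps, gap + eps)
```

If the returned interval excludes zero, the monitor has statistical evidence that the system currently treats the two groups differently.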

Tanglegrams are drawings of two rooted binary phylogenetic trees and a matching between their leaf sets. The trees are drawn crossing-free on opposite sides with their leaf sets facing each other on two vertical lines. Instead of minimizing the number of pairwise edge crossings, we consider the problem of minimizing the number of block crossings, that is, two bundles of lines crossing each other locally. With one tree fixed, the leaves of the second tree can be permuted according to its tree structure. We give a complete picture of the algorithmic complexity of minimizing block crossings in one-sided tanglegrams by showing NP-completeness, constant-factor approximations, and a fixed-parameter algorithm. We also state first results for non-binary trees.

Motivated by recent works on streaming algorithms for constraint satisfaction problems (CSPs), we define and analyze oblivious algorithms for the Max-$k$AND problem. This generalizes the definition by Feige and Jozeph (Algorithmica '15) of oblivious algorithms for Max-DICUT, a special case of Max-$2$AND. Oblivious algorithms round each variable with probability depending only on a quantity called the variable's bias. For each oblivious algorithm, we design a so-called "factor-revealing linear program" (LP) which captures its worst-case instance, generalizing one of Feige and Jozeph for Max-DICUT. Then, departing from their work, we perform a fully explicit analysis of these (infinitely many!) LPs. In particular, we show that for all $k$, oblivious algorithms for Max-$k$AND provably outperform a special subclass of algorithms we call "superoblivious" algorithms. Our result has implications for streaming algorithms: Generalizing the result for Max-DICUT of Saxena, Singer, Sudan, and Velusamy (SODA'23), we prove that certain separation results hold between streaming models for infinitely many CSPs: for every $k$, $O(\log n)$-space sketching algorithms for Max-$k$AND known to be optimal in $o(\sqrt n)$-space can be beaten in (a) $O(\log n)$-space under a random-ordering assumption, and (b) $O(n^{1-1/k} D^{1/k})$ space under a maximum-degree-$D$ assumption. Even in the previously-known case of Max-DICUT, our analytic proof gives a fuller, computer-free picture of these separation results.
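To make the notion concrete, the sketch below rounds each variable of a Max-$k$AND instance using only its bias, i.e., the normalised difference between the weight of clauses in which it appears positively and negatively. The particular step rounding rule at the end is an illustrative assumption, not the optimal function analysed above.

```python
import random

def oblivious_round(clauses, weights, rounding, rng=random):
    """Oblivious rounding for Max-kAND: each clause is a list of signed
    variables, e.g. [+1, -3, +4] means x1 AND (NOT x3) AND x4.  Every
    variable is set to True independently, with a probability that depends
    only on its own bias."""
    pos, neg = {}, {}
    for clause, w in zip(clauses, weights):
        for lit in clause:
            v = abs(lit)
            if lit > 0:
                pos[v] = pos.get(v, 0) + w
            else:
                neg[v] = neg.get(v, 0) + w
    assignment = {}
    for v in set(pos) | set(neg):
        p, n = pos.get(v, 0), neg.get(v, 0)
        bias = (p - n) / (p + n)
        assignment[v] = rng.random() < rounding(bias)
    return assignment

def step_rule(bias, epsilon=0.1):
    """A simple threshold rounding function (purely illustrative)."""
    if bias > 0:
        return 1 - epsilon
    if bias < 0:
        return epsilon
    return 0.5
```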

Signaling game problems investigate communication scenarios where encoder(s) and decoder(s) have misaligned objectives because they either employ different cost functions or have inconsistent priors. This problem has been studied in the literature for scalar sources under various setups. In this paper, we consider multi-dimensional sources under quadratic criteria in the presence of a bias leading to a mismatch in the criteria, and we show that the generalization from the scalar setup is more than technical. We show that the Nash equilibrium solutions exhibit structural richness due to the subtle geometric analysis the problem entails, with consequences for system design, the existence of linear Nash equilibria, and an information-theoretic problem formulation. We first provide a set of geometric conditions that must be satisfied in equilibrium for any multi-dimensional source. Then, we consider independent and identically distributed sources and characterize necessary and sufficient conditions under which an informative linear Nash equilibrium exists. These conditions involve the bias vector that leads to the misaligned costs. Depending on certain conditions related to the bias vector, the existence of linear Nash equilibria requires sources with a Gaussian or a symmetric density. Moreover, in the case of Gaussian sources, our results have a rate-distortion theoretic implication: achievable rates and distortions in the considered game-theoretic setup can be obtained from its team-theoretic counterpart.

We consider multi-agent delegated search without money, which is the first work to study the multi-agent extension of Kleinberg and Kleinberg (EC'18). In our model, given a set of agents, each agent samples a fixed number of solutions and privately sends a signal, e.g., a subset of solutions, to the principal. Then, the principal selects a final solution based on the agents' signals. Our model captures a variety of real-world scenarios, ranging from classical economic applications to modern intelligent systems. In stark contrast to the single-agent setting of Kleinberg and Kleinberg (EC'18), which admits an approximate Bayesian mechanism, we show that there exist efficient approximate prior-independent mechanisms with both information and performance gains, thanks to the competitive tension between the agents. Interestingly, however, the extent of this competitive effect varies significantly with the information available to the agents and with the degree of correlation between the principal's and the agents' utilities. Technically, we conduct a comprehensive study of the multi-agent delegated search problem and derive several results on the approximation factors of Bayesian/prior-independent mechanisms in complete/incomplete information settings. As a special case of independent interest, we obtain comparative statics regarding the number of agents, which imply the dominance of the multi-agent setting ($n \ge 2$) over the single-agent setting ($n=1$) in terms of the principal's utility. We further extend our problem by considering an examination cost for the mechanism and derive analogous results in the complete information setting.
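The interaction can be illustrated with a toy simulation in which each agent signals one sampled solution and the principal picks the report she likes best. Both the signalling rule and the selection rule here are simplifying assumptions for illustration, not the mechanisms studied in the paper.

```python
import random

def delegated_search(n_agents, k, sample, principal_utility, agent_utility,
                     rng=random):
    """Toy delegated-search interaction: each of n_agents privately samples
    k solutions, reports the one it prefers, and the principal selects the
    reported solution maximising her own utility.  With more agents, the
    principal benefits from the competitive tension between them."""
    reports = []
    for _ in range(n_agents):
        samples = [sample(rng) for _ in range(k)]
        reports.append(max(samples, key=agent_utility))   # agent's signal
    return max(reports, key=principal_utility)            # principal's choice
```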

We initiate the study of fair distribution of delivery tasks among a set of agents, wherein delivery jobs are placed along the vertices of a graph. Our goal is to fairly distribute delivery costs (modeled as a submodular function) among a fixed set of agents while satisfying some desirable notions of economic efficiency. We adapt well-established fairness concepts, such as envy-freeness up to one item (EF1) and minimax share (MMS), to our setting and show that fairness is often incompatible with the efficiency notion of social optimality. Yet, we characterize instances that admit fair and socially optimal solutions by exploiting graph structures. We further show that achieving fairness along with Pareto optimality is computationally intractable. Nonetheless, we design an XP algorithm (parameterized by the number of agents) for finding MMS and Pareto optimal solutions on every instance, and show that the same algorithm can be modified to find solutions that are EF1 and efficient, when such solutions exist. We complement our theoretical results by experimentally analyzing the price of fairness on randomly generated graph structures.
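For readers unfamiliar with the fairness notion, the sketch below checks EF1 in the cost (chore) interpretation used here: an allocation passes if every agent, after dropping some single task from her own bundle, incurs at most the cost she assigns to any other agent's bundle. The interface (`allocation` as a map from agents to sets of tasks, `cost(agent, bundle)` as a possibly submodular cost oracle) is an assumption made for illustration.

```python
def is_ef1(allocation, cost):
    """Check envy-freeness up to one item (EF1) for a cost/chore setting.
    allocation: dict mapping each agent to a set of tasks.
    cost: function (agent, bundle) -> that agent's cost for the bundle."""
    agents = list(allocation)
    for i in agents:
        for j in agents:
            if i == j:
                continue
            own, other = allocation[i], allocation[j]
            # i is EF1 towards j if removing some task from i's own bundle
            # makes i's cost at most i's cost for j's bundle.
            ok = not own or any(
                cost(i, own - {task}) <= cost(i, other) for task in own)
            if not ok:
                return False
    return True
```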

In Bayesian inference, the approximation of integrals of the form $\psi = \mathbb{E}_{F}[l(\mathbf{X})] = \int_{\chi} l(\mathbf{x}) \, dF(\mathbf{x})$ is a fundamental challenge. Such integrals are crucial for evidence estimation, which is important for various purposes, including model selection and numerical analysis. The existing strategies for evidence estimation can be classified into four categories: deterministic approximation, density estimation, importance sampling, and vertical representation (Llorente et al., 2020). In this paper, we show that the Riemann sum estimator due to Yakowitz (1978) can be used in the context of nested sampling (Skilling, 2006) to achieve an $O(n^{-4})$ rate of convergence, faster than the rate implied by the usual ergodic central limit theorem. We provide a brief overview of the literature on Riemann sum estimators and on the nested sampling algorithm and its connections to vertical likelihood Monte Carlo. We provide theoretical and numerical arguments showing how merging these two ideas may result in improved and more robust estimators for evidence estimation, especially in higher-dimensional spaces. We also briefly discuss the idea of simulating the Lorenz curve, which avoids the problem of intractable $\Lambda$ functions that are essential to the vertical representation and nested sampling.
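A bare-bones version of nested sampling with a Riemann-sum evidence estimate looks as follows. The deterministic prior-volume schedule $X_i = e^{-i/N}$ and the constrained-sampler interface are simplifying assumptions; a practical implementation needs a careful likelihood-constrained sampler.

```python
import math
import random

def nested_sampling(log_likelihood, sample_prior, sample_constrained,
                    n_live=100, n_iter=1000, rng=random):
    """Minimal nested-sampling sketch: the evidence Z = int L dX over the
    prior volume X is accumulated as a Riemann sum over shrinking volumes.
    sample_constrained(threshold, rng) is assumed to return a fresh prior
    draw whose log-likelihood exceeds the threshold."""
    live = [sample_prior(rng) for _ in range(n_live)]
    log_l = [log_likelihood(p) for p in live]
    evidence, x_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda k: log_l[k])
        x_curr = math.exp(-i / n_live)                           # shrinking prior volume
        evidence += math.exp(log_l[worst]) * (x_prev - x_curr)   # Riemann slab
        x_prev = x_curr
        # Replace the worst live point by a prior draw above its likelihood.
        live[worst] = sample_constrained(log_l[worst], rng)
        log_l[worst] = log_likelihood(live[worst])
    # Contribution of the remaining live points (simple average).
    evidence += x_prev * sum(math.exp(v) for v in log_l) / n_live
    return evidence
```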

We show that for each reduced odd Latin square of even order there exists at least one map such that its image is a reduced even Latin square of the same order. We prove that this map is injective. As a consequence, we show that the number of even Latin squares of even order is bounded from below by the number of odd Latin squares of the same order. This gives a positive answer to the Alon-Tarsi conjecture on even Latin squares.
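For reference, a Latin square is commonly called even if the product of the signs of its rows and columns, read as permutations, is +1, and odd otherwise; a short sketch of this parity computation (entries assumed to be 0, ..., n-1) follows.

```python
def permutation_sign(perm):
    """Sign of a permutation of 0..n-1, computed by counting inversions."""
    inversions = sum(1
                     for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def latin_square_sign(square):
    """Sign of a Latin square with entries 0..n-1: the product of the signs
    of all rows and all columns viewed as permutations.  The square is
    'even' if the sign is +1 and 'odd' if it is -1."""
    n = len(square)
    rows = [tuple(row) for row in square]
    cols = [tuple(square[i][j] for i in range(n)) for j in range(n)]
    sign = 1
    for perm in rows + cols:
        sign *= permutation_sign(perm)
    return sign
```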

Most of the work in the auction design literature assumes that bidders behave rationally based on the information available for each individual auction. However, in today's online advertising markets, one of the most important real-life applications of auction design, the data and computational power required to bid optimally are only available to the auction designer, and an advertiser can only participate by setting performance objectives (clicks, conversions, etc.) for the campaign. In this paper, we focus on value-maximizing campaigns with return-on-investment (ROI) constraints, which are widely adopted in many global-scale auto-bidding platforms. Through theoretical analysis and empirical experiments on both synthetic and realistic data, we find that the second price auction exhibits many undesirable properties and loses the dominant theoretical advantages it enjoys in single-item scenarios. In particular, the second price auction brings equilibrium multiplicity, non-monotonicity, vulnerability to exploitation by bidders and even by auctioneers, and PPAD-hardness for the system to reach a steady state. We also explore the broader impacts of the auto-bidding mechanism beyond efficiency and strategyproofness. In particular, the multiplicity of equilibria and the input sensitivity make advertisers' utilities unstable. In addition, the interference among both bidders and advertising slots introduces bias into A/B testing, which hinders the development of even the non-bidding components of the platform. The aforementioned phenomena have been widely observed in practice, and our results indicate that one of the reasons might be intrinsic to the underlying auto-bidding mechanism. To deal with these challenges, we provide suggestions and candidate solutions for practitioners.
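As a caricature of the setting, the sketch below models a value-maximising auto-bidder with an ROI constraint that bids a uniform multiple of its value in repeated second price auctions and adjusts the multiplier by simple feedback. This bid form and update rule are common modelling assumptions, not the platform mechanisms analysed in the paper.

```python
def value_max_multiplier(values, competing_bids, target_roi, steps=50, lr=0.1):
    """Toy ROI-constrained value maximiser: bid alpha * value in each
    second price auction and nudge alpha until the realised ROI
    (won value / spend) meets the target."""
    alpha = 1.0
    for _ in range(steps):
        spend = value_won = 0.0
        for v, b in zip(values, competing_bids):
            if alpha * v > b:          # win the auction
                spend += b             # second-price payment
                value_won += v
        if spend > 0 and value_won / spend < target_roi:
            alpha *= 1 - lr            # ROI violated: bid less aggressively
        else:
            alpha *= 1 + lr            # constraint slack: bid more aggressively
    return alpha
```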

Layer Normalization (LayerNorm) is an inherent component in all Transformer-based models. In this paper, we show that LayerNorm is crucial to the expressivity of the multi-head attention layer that follows it. This is in contrast to the common belief that LayerNorm's only role is to normalize the activations during the forward pass, and their gradients during the backward pass. We consider a geometric interpretation of LayerNorm and show that it consists of two components: (a) projection of the input vectors onto a $(d-1)$-dimensional space that is orthogonal to the $\left[1,1,...,1\right]$ vector, and (b) scaling of all vectors to the same norm of $\sqrt{d}$. We show that each of these components is important for the attention layer that follows it in Transformers: (a) projection allows the attention mechanism to create an attention query that attends to all keys equally, offloading the need to learn this operation by the attention; and (b) scaling allows each key to potentially receive the highest attention, and prevents keys from being "un-select-able". We show empirically that Transformers do indeed benefit from these properties of LayerNorm in general language modeling and even in computing simple functions such as "majority". Our code is available at //github.com/tech-srl/layer_norm_expressivity_role .
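The two-component view can be checked directly. Below is a minimal sketch (assuming no learned affine parameters) that implements LayerNorm as a projection orthogonal to the all-ones vector followed by rescaling to norm $\sqrt{d}$.

```python
import torch

def layer_norm_geometric(x, eps=1e-5):
    """Geometric view of LayerNorm (no learned affine part):
    (a) project x onto the hyperplane orthogonal to the all-ones vector,
    (b) rescale the result to norm sqrt(d)."""
    d = x.shape[-1]
    ones = torch.ones(d) / d ** 0.5                      # unit vector along [1, ..., 1]
    projected = x - (x @ ones).unsqueeze(-1) * ones      # (a) remove the mean direction
    scaled = projected * (d ** 0.5) / (projected.norm(dim=-1, keepdim=True) + eps)
    return scaled                                        # (b) common norm sqrt(d)

# Up to eps handling, this agrees with torch.nn.functional.layer_norm(x, (d,)):
# subtracting the mean is exactly the projection in (a), and dividing by the
# standard deviation rescales every vector to norm sqrt(d) as in (b).
```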
