
We consider the problem of fair division, where a set of indivisible goods should be distributed fairly among a set of agents with combinatorial valuations. To capture fairness, we adopt the notion of shares, where each agent is entitled to a fair share, based on some fairness criterion, and an allocation is considered fair if the value of every agent (weakly) exceeds her fair share. A share-based notion is considered universally feasible if it admits a fair allocation for every profile of monotone valuations. A major question arises: is there a non-trivial share-based notion that is universally feasible? The most well-known share-based notions, namely proportionality and maximin share, are not universally feasible, nor are any constant approximations of them. We propose a novel share notion, where an agent assesses the fairness of a bundle by comparing it to her valuation in a random allocation. In this framework, a bundle is considered $q$-quantile fair, for $q\in[0,1]$, if it is at least as good as a bundle obtained in a uniformly random allocation with probability at least $q$. Our main question is whether there exists a constant value of $q$ for which the $q$-quantile share is universally feasible. Our main result establishes a strong connection between the feasibility of quantile shares and the classical Erd\H{o}s Matching Conjecture. Specifically, we show that if a version of this conjecture is true, then the $\frac{1}{2e}$-quantile share is universally feasible. Furthermore, we provide unconditional feasibility results for additive, unit-demand and matroid-rank valuations for constant values of $q$. Finally, we discuss the implications of our results for other share notions.
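For concreteness, the $q$-quantile test lends itself to a direct Monte Carlo estimate once valuations are specified. The sketch below is a minimal illustration under two assumptions not fixed by the abstract: valuations are additive, and the "uniformly random allocation" assigns each good independently and uniformly to one of the $n$ agents.

```python
import random

def is_q_quantile_fair(values, bundle, n_agents, q, trials=100_000, seed=0):
    """Monte Carlo check of q-quantile fairness for a single agent.

    values  : dict mapping each good to the agent's (additive) value for it
    bundle  : set of goods the agent actually receives
    q       : required quantile, q in [0, 1]

    Assumption (illustrative): the uniformly random allocation assigns
    each good independently and uniformly to one of n_agents agents.
    """
    rng = random.Random(seed)
    v_bundle = sum(values[g] for g in bundle)
    wins = 0
    for _ in range(trials):
        # value of the bundle the agent would receive at random
        v_rand = sum(v for g, v in values.items()
                     if rng.randrange(n_agents) == 0)
        wins += v_bundle >= v_rand
    return wins / trials >= q

# toy usage: 4 goods, 2 agents, q = 1/(2e) as in the main result
vals = {"a": 3.0, "b": 1.0, "c": 2.0, "d": 2.0}
print(is_q_quantile_fair(vals, {"a", "b"}, n_agents=2, q=1 / (2 * 2.71828)))
```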

Related Content

Collaborative perception aims to mitigate the limitations of single-agent perception, such as occlusions, by facilitating data exchange among multiple agents. However, most current works consider a homogeneous scenario where all agents use identical sensors and perception models. In reality, heterogeneous agent types may continually emerge and inevitably face a domain gap when collaborating with existing agents. In this paper, we introduce a new open heterogeneous problem: how to accommodate continually emerging new heterogeneous agent types into collaborative perception while ensuring high perception performance and low integration cost? To address this problem, we propose HEterogeneous ALliance (HEAL), a novel extensible collaborative perception framework. HEAL first establishes a unified feature space with the initial agents via a novel multi-scale foreground-aware Pyramid Fusion network. When heterogeneous new agents emerge with previously unseen modalities or models, we align them to the established unified space with an innovative backward alignment. This step involves only individual training on the new agent type, thus incurring extremely low training cost and offering high extensibility. It also protects new agents' model details from disclosure, since training can be conducted locally by the agent owner. To enrich agents' data heterogeneity, we present OPV2V-H, a new large-scale dataset with more diverse sensor types. Extensive experiments on the OPV2V-H and DAIR-V2X datasets show that HEAL surpasses SOTA methods in performance while reducing training parameters by 91.5% when integrating 3 new agent types. Code and data are available at: //github.com/yifanlu0227/HEAL.
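A rough sketch of the backward-alignment idea as described above: only the new agent's encoder is trained, against a frozen head operating in the previously established unified feature space. All module names, shapes, and the MSE objective below are illustrative assumptions, not the authors' code (see the linked repository for that).

```python
import torch
import torch.nn as nn

# Conceptual sketch of backward alignment: a new agent's encoder is
# trained alone to map its sensor features into a frozen, previously
# established unified space. Architectures here are hypothetical.

class NewAgentEncoder(nn.Module):
    def __init__(self, in_ch=64, unified_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, unified_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

encoder = NewAgentEncoder()              # trainable: the new agent only
unified_head = nn.Conv2d(256, 10, 1)     # frozen: shared head in unified space
for p in unified_head.parameters():
    p.requires_grad = False

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.randn(2, 64, 32, 32)           # new agent's features (toy shapes)
target = torch.randn(2, 10, 32, 32)      # supervision in the unified space

for _ in range(3):                       # individual training loop
    loss = nn.functional.mse_loss(unified_head(encoder(x)), target)
    opt.zero_grad(); loss.backward(); opt.step()
```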

We consider an M/M/1 queueing model where customers can strategically decide whether to join the queue or balk and when to renege. We characterize the class of queueing regimes such that, for any parameters of the model, the socially efficient behavior is an equilibrium outcome.
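As a point of reference for the join-or-balk decision, the sketch below simulates the classical observable M/M/1 benchmark in which an arriving customer joins only if the queue is shorter than a threshold (a Naor-style rule; the threshold policy and all parameters are illustrative, not the paper's characterization of efficient regimes).

```python
import random

def simulate_mm1_balking(lam, mu, threshold, horizon=100_000, seed=1):
    """Simulate an observable M/M/1 queue where an arriving customer
    joins iff fewer than `threshold` customers are present (a textbook
    threshold rule, used only to illustrate join/balk behavior).
    Returns the fraction of arrivals that join."""
    rng = random.Random(seed)
    t, n, joined, arrivals = 0.0, 0, 0, 0
    next_arrival = rng.expovariate(lam)
    next_service = float("inf")
    while t < horizon:
        if next_arrival <= next_service:        # arrival event
            t = next_arrival
            arrivals += 1
            if n < threshold:                   # join; otherwise balk
                n += 1
                joined += 1
                if n == 1:
                    next_service = t + rng.expovariate(mu)
            next_arrival = t + rng.expovariate(lam)
        else:                                   # service completion
            t = next_service
            n -= 1
            next_service = t + rng.expovariate(mu) if n > 0 else float("inf")
    return joined / arrivals

print(simulate_mm1_balking(lam=0.9, mu=1.0, threshold=3))
```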

We investigate the role of the initial screening order (ISO) in candidate screening processes, such as hiring and academic admissions. The ISO is the order in which the screener sorts the candidate pool before the evaluation. It has been largely overlooked in the literature, despite its potential impact on the optimality and fairness of the chosen set, especially under a human screener. We define two problem formulations: best-$k$, where the screener chooses the $k$ best candidates, and good-$k$, where the screener chooses the first $k$ good-enough candidates. To study the impact of the ISO, we introduce a human-like screener and compare it to its algorithmic counterpart. The human-like screener is modeled as inconsistent over time due to fatigue. Our analysis shows that, under a human-like screener, the ISO hinders individual fairness even while group-level fairness is met. This is due to position bias, where a candidate's evaluation is affected by their position within the ISO. We report extensive simulated experiments exploring the parameters of the problem formulations for both algorithmic and human-like screeners. This work is motivated by a real-world candidate screening problem studied in collaboration with a large European company.
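A toy version of the good-$k$ process makes the position-bias mechanism concrete. The fatigue model below (evaluation noise growing linearly with position) is an assumption of this sketch, not necessarily the paper's; setting fatigue to zero recovers a consistent algorithmic screener.

```python
import random

def good_k_screen(scores, k, fatigue=0.0, threshold=0.5, seed=0):
    """Screen candidates in the given initial screening order (ISO) and
    pick the first k judged 'good enough'. The human-like screener is
    modeled (illustratively) with evaluation noise that grows linearly
    with position, so later candidates are judged less consistently."""
    rng = random.Random(seed)
    chosen = []
    for pos, (cand, s) in enumerate(scores):
        noise = rng.gauss(0.0, fatigue * pos)   # fatigue: position-dependent
        if s + noise >= threshold:
            chosen.append(cand)
            if len(chosen) == k:
                break
    return chosen

rng = random.Random(42)
pool = [(f"c{i}", rng.random()) for i in range(20)]   # candidates in ISO
print("algorithmic:", good_k_screen(pool, k=3, fatigue=0.0))
print("human-like :", good_k_screen(pool, k=3, fatigue=0.05))
```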

Amodal perception, the ability to comprehend complete object structures from partial visibility, is a fundamental skill, even for infants. Its significance extends to applications like autonomous driving, where a clear understanding of heavily occluded objects is essential. However, modern detection and tracking algorithms often overlook this critical capability, perhaps due to the prevalence of modal annotations in most datasets. To address the scarcity of amodal data, we introduce the TAO-Amodal benchmark, featuring 880 diverse categories in thousands of video sequences. Our dataset includes amodal and modal bounding boxes for visible and occluded objects, including objects that are partially out-of-frame. To enhance amodal tracking with object permanence, we leverage a lightweight plug-in module, the amodal expander, to transform standard, modal trackers into amodal ones through fine-tuning on a few hundred video sequences with data augmentation. We achieve 3.3\% and 1.6\% improvements in the detection and tracking of occluded objects on TAO-Amodal. When evaluated on people, our method yields dramatic improvements of 2x over state-of-the-art modal baselines.
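The plug-in nature of the amodal expander can be pictured as a small regression head on top of a frozen modal tracker. The architecture below is a hypothetical stand-in (the abstract does not specify the module's internals): it consumes an ROI feature and a modal box and regresses the expansion to an amodal box.

```python
import torch
import torch.nn as nn

class AmodalExpander(nn.Module):
    """Illustrative plug-in head (not the authors' exact architecture):
    given a modal box and its ROI feature, regress a delta that expands
    the box to cover the full, possibly occluded object extent."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 4, 128), nn.ReLU(),
            nn.Linear(128, 4),            # (dx, dy, dw, dh) corrections
        )
    def forward(self, roi_feat, modal_box):
        delta = self.mlp(torch.cat([roi_feat, modal_box], dim=-1))
        return modal_box + delta          # amodal box estimate

expander = AmodalExpander()
feat = torch.randn(8, 256)               # ROI features from a modal tracker
boxes = torch.rand(8, 4)                 # modal boxes (x, y, w, h)
amodal = expander(feat, boxes)           # fine-tuned on a few hundred videos
```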

Secure communication is considered with unreliable entanglement assistance, where the adversary may intercept the legitimate receiver's entanglement resource before communication takes place. The communication setting of unreliable assistance, without security aspects, was originally motivated by the extreme photon loss in practical communication systems. The operational principle is to adapt the transmission rate to the availability of entanglement assistance, without resorting to feedback and repetition. Here, we require secrecy as well. An achievable secrecy rate region is derived for general quantum wiretap channels, and a multi-letter secrecy capacity formula is established for the special class of degraded channels.
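The operational principle can be pictured schematically: every transmission carries a guaranteed part, plus an excess part that is recoverable only when the receiver's entanglement resource survives, with no feedback or retransmission. The snippet below is purely illustrative plumbing with no quantum content; the two-part split is our reading of the rate-adaptation principle stated above.

```python
# Toy illustration of rate adaptation to unreliable assistance: the
# guaranteed part is decodable unconditionally, while the excess part
# is recovered only if the entanglement survived interception.

def transmit(msg_guaranteed, msg_excess, entanglement_ok):
    received = [msg_guaranteed]          # decodable unconditionally
    if entanglement_ok:                  # no feedback, no retransmission:
        received.append(msg_excess)      # the rate adapts to what survived
    return received

print(transmit("g-bits", "e-bits", entanglement_ok=False))  # ['g-bits']
print(transmit("g-bits", "e-bits", entanglement_ok=True))
```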

As a nonparametric statistical inference approach, empirical likelihood has proven very useful on numerous occasions. However, it encounters serious computational challenges when applied directly to modern massive datasets. This article studies empirical likelihood inference over decentralized distributed networks, where the data are locally collected and stored by different nodes. To fully utilize the data, this article fuses the Lagrange multipliers calculated at different nodes by employing a penalization technique. The proposed distributed empirical log-likelihood ratio statistic, with Lagrange multipliers solved from the penalized function, is asymptotically standard chi-squared under regularity conditions, even when the number of machines diverges. Nevertheless, the optimization problem with the fused penalty is still hard to solve in a decentralized distributed network. To address this, two algorithms based on the alternating direction method of multipliers (ADMM) are proposed, both of which have simple node-based implementation schemes. Theoretically, this article establishes convergence properties for the proposed algorithms and further proves the linear convergence of the second algorithm under some specific network structures. The proposed methods are evaluated by numerical simulations and illustrated with analyses of the census income and Ford GoBike datasets.
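Fusing node-local Lagrange multipliers via a penalty has the flavor of consensus optimization, which ADMM handles naturally. Below is a generic consensus-ADMM sketch on quadratic surrogates, meant only to show the update structure; the paper's actual subproblems and fused penalty differ.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=100):
    """Toy consensus ADMM (illustrative, not the paper's algorithm):
    node i holds a quadratic surrogate f_i(x) = 0.5 * (x - a[i])**2,
    standing in for its local empirical-likelihood subproblem, and all
    nodes must agree on a common Lagrange multiplier x."""
    n = len(a)
    x = np.zeros(n)      # local primal variables, one per node
    u = np.zeros(n)      # scaled dual variables
    z = 0.0              # consensus variable
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local primal updates
        z = np.mean(x + u)                      # consensus (fusion) step
        u = u + x - z                           # dual updates
    return z

print(consensus_admm(np.array([0.8, 1.1, 0.9, 1.2])))  # converges to mean(a)
```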

Recommender systems are often susceptible to well-crafted fake profiles, leading to biased recommendations. Among existing defense methods, data-processing-based methods inevitably exclude normal samples, while model-based methods struggle to achieve both generalization and robustness. To this end, we integrate data processing with robust modeling to propose a general framework, Triple Cooperative Defense (TCD), which employs three cooperative models that mutually enhance the data and thereby improve recommendation robustness. Furthermore, considering that existing attacks struggle to balance bi-level optimization and efficiency, we revisit poisoning attacks in recommender systems and introduce an efficient attack strategy, Co-training Attack (Co-Attack), which cooperatively optimizes attack generation and model training, respecting the bi-level setting while maintaining attack efficiency. Moreover, we reveal that a potential reason for the insufficient threat of existing attacks is their default assumption of optimizing attacks in undefended scenarios. This overly optimistic setting limits their potential. Consequently, we put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process, thoroughly exploring CoAttack's potential in the cooperative training of attack and defense. Extensive experiments on three real datasets demonstrate TCD's superiority in enhancing model robustness. Additionally, we verify that the two proposed attack strategies significantly outperform existing attacks, with the game-based GCoAttack posing a greater poisoning threat than CoAttack.
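The three-model cooperation in TCD is reminiscent of tri-training. The sketch below uses plain classifiers on synthetic data as stand-ins for recommenders (an assumption of this illustration): each model is retrained on labels where the other two agree, so suspicious samples are relabeled by consensus rather than excluded outright.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tri-training-style loop in the spirit of TCD (illustrative stand-in:
# the paper's models are recommenders, not classifiers). Each model is
# retrained on consensus labels from the other two, so possibly poisoned
# samples are relabeled instead of being dropped.

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y = (X[:, 0] + 0.3 * rng.normal(size=600) > 0).astype(int)
y[:60] = 1 - y[:60]                       # simulate poisoned labels

models = [LogisticRegression(max_iter=200) for _ in range(3)]
for m in models:
    m.fit(X, y)

for _ in range(3):                        # cooperative refinement rounds
    preds = np.stack([m.predict(X) for m in models])
    for i, m in enumerate(models):
        others = [j for j in range(3) if j != i]
        agree = preds[others[0]] == preds[others[1]]
        # retrain on consensus-relabeled data rather than dropping samples
        y_hat = np.where(agree, preds[others[0]], y)
        m.fit(X, y_hat)
```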

The communication complexity of many fundamental problems reduces greatly when the communicating parties share randomness that is independent of the inputs to the communication task. Natural communication processes (say between humans) however often involve large amounts of shared correlations among the communicating players, but rarely allow for perfect sharing of randomness. Can the communication complexity benefit from shared correlations as well as it does from shared randomness? This question was considered mainly in the context of simultaneous communication by Bavarian et al. (ICALP 2014). In this work we study this problem in the standard interactive setting and give some general results. In particular, we show that every problem with communication complexity of $k$ bits with perfectly shared randomness has a protocol using imperfectly shared randomness with complexity $\exp(k)$ bits. We also show that this is best possible by exhibiting a promise problem with complexity $k$ bits with perfectly shared randomness which requires $\exp(k)$ bits when the randomness is imperfectly shared. Along the way we also highlight some other basic problems such as compression, and agreement distillation, where shared randomness plays a central role and analyze the complexity of these problems in the imperfectly shared randomness model. The technical highlight of this work is the lower bound that goes into the result showing the tightness of our general connection. This result builds on the intuition that communication with imperfectly shared randomness needs to be less sensitive to its random inputs than communication with perfectly shared randomness. The formal proof invokes results about the small-set expansion of the noisy hypercube and an invariance principle to convert this intuition to a proof, thus giving a new application domain for these fundamental results.
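To see why imperfect sharing is qualitatively harder, consider the textbook equality test: with perfectly shared randomness, exchanging the one-bit inner products $\langle r, x \rangle$ and $\langle r, y \rangle$ (mod 2) succeeds with constant communication. The sketch below (an illustration, not a protocol from the paper) shows that if Bob's copy of $r$ is merely correlated with Alice's, the hashes disagree roughly half the time even when $x = y$.

```python
import random

def noisy_copy(r, flip_prob, rng):
    """Bob's imperfect copy of Alice's random string r."""
    return [b ^ (rng.random() < flip_prob) for b in r]

def hash_bit(r, x):
    """One-bit hash <r, x> mod 2 used in the equality test."""
    return sum(ri & xi for ri, xi in zip(r, x)) % 2

rng = random.Random(0)
n, trials, mismatches = 1000, 2000, 0
x = [rng.randrange(2) for _ in range(n)]     # Alice's and Bob's inputs are equal
for _ in range(trials):
    r = [rng.randrange(2) for _ in range(n)]          # Alice's randomness
    r_bob = noisy_copy(r, flip_prob=0.01, rng=rng)    # Bob's correlated copy
    mismatches += hash_bit(r, x) != hash_bit(r_bob, x)
print("hash mismatch rate with equal inputs:", mismatches / trials)  # ~0.5
```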

We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite computation, we propose efficient approximation schemes for computing Shapley values on graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computation at a reasonable level.
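The Shapley approximation idea admits a compact Monte Carlo form: treat the candidate subgraph as one player, sample random coalitions of the remaining node groups, and average its marginal contribution. The sketch below is a generic permutation-sampling estimator, not SubgraphX's exact scheme; `score` is a placeholder for the trained GNN's prediction on the retained nodes.

```python
import random

def mc_shapley(subgraph, others, score, samples=500, seed=0):
    """Monte Carlo Shapley estimate of a subgraph's importance.
    `subgraph` (a frozenset of nodes) is treated as a single player,
    `others` is a list of frozensets for the remaining node groups, and
    `score(nodes)` stands in for the GNN's prediction when only `nodes`
    are kept in the graph."""
    rng = random.Random(seed)
    total = 0.0
    players = list(others)
    for _ in range(samples):
        rng.shuffle(players)
        cut = rng.randrange(len(players) + 1)      # random coalition size
        coalition = frozenset().union(*players[:cut])
        # marginal contribution of the subgraph to this coalition
        total += score(coalition | subgraph) - score(coalition)
    return total / samples

# toy usage: a fake "GNN" whose score just counts important nodes kept
important = {1, 2, 3}
score = lambda nodes: len(nodes & important)
print(mc_shapley(frozenset({1, 2}), [frozenset({3}), frozenset({4, 5})], score))
```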

Recently, a considerable literature has grown up around the theme of the Graph Convolutional Network (GCN). How to effectively leverage the rich structural information in complex graphs, such as knowledge graphs with heterogeneous types of entities and relations, is a primary open challenge in the field. Most GCN methods are either restricted to graphs with a homogeneous type of edges (e.g., citation links only) or focus on representation learning for nodes only, instead of jointly propagating and updating the embeddings of both nodes and edges for target-driven objectives. This paper addresses these limitations by proposing a novel framework, the Knowledge Embedding based Graph Convolutional Network (KE-GCN), which combines the power of GCNs in graph-based belief propagation with the strengths of advanced knowledge embedding (a.k.a. knowledge graph embedding) methods, and goes beyond them. Our theoretical analysis shows that KE-GCN elegantly unifies several well-known GCN methods as special cases, offering a new perspective on graph convolution. Experimental results on benchmark datasets show the advantageous performance of KE-GCN over strong baseline methods on the tasks of knowledge graph alignment and entity classification.
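The joint propagation over nodes and edges can be sketched as one layer that sends messages in both directions: node states are updated from (head, relation) pairs, and relation states from their endpoint nodes. The layer below is a minimal illustration with made-up update rules, not KE-GCN's actual formulation.

```python
import torch
import torch.nn as nn

class JointNodeEdgeLayer(nn.Module):
    """Minimal sketch of jointly updating node and relation embeddings
    in one graph-convolution layer (illustrative, not KE-GCN's exact
    update rules). Edges are (head, relation, tail) triples."""
    def __init__(self, dim):
        super().__init__()
        self.w_node = nn.Linear(2 * dim, dim)
        self.w_rel = nn.Linear(2 * dim, dim)

    def forward(self, h_node, h_rel, triples):
        node_msg = torch.zeros_like(h_node)
        rel_msg = torch.zeros_like(h_rel)
        for h, r, t in triples:
            # message to the tail node combines head and relation states
            node_msg[t] += self.w_node(torch.cat([h_node[h], h_rel[r]]))
            # the relation embedding is updated from its endpoints in turn
            rel_msg[r] += self.w_rel(torch.cat([h_node[h], h_node[t]]))
        return torch.relu(h_node + node_msg), torch.relu(h_rel + rel_msg)

layer = JointNodeEdgeLayer(dim=16)
nodes, rels = torch.randn(5, 16), torch.randn(2, 16)
nodes, rels = layer(nodes, rels, [(0, 0, 1), (1, 1, 2), (3, 0, 4)])
```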
