
In a crowdsourcing contest, a requester holding a task posts it to a crowd, whose members then compete with each other to win the rewards. Although in real life a crowd is usually networked and people influence each other via social ties, existing crowdsourcing contest theories do not explain how interpersonal relationships influence people's incentives and behaviors, and thereby affect crowdsourcing performance. In this work, we take people's social ties as a key factor in modeling and designing agents' incentives for crowdsourcing contests, a factor previous models have not considered. We then establish a new contest mechanism by which the requester can impel agents to invite their neighbours to contribute to the task. The mechanism has simple rules and is easy for agents to play. Our equilibrium analysis shows that agents' behaviors in the Bayesian Nash equilibrium are highly diverse, capturing the fact that, beyond intrinsic ability, the social ties among agents play a central role in decision-making. We then design an effective algorithm to automatically compute the Bayesian Nash equilibrium of the invitation crowdsourcing contest and further adapt it to large graphs. Both theoretical and empirical results show that the invitation crowdsourcing contest can substantially enlarge the number of contributors, allowing the requester to obtain significantly better solutions without a large advertisement expenditure.
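The core intuition, that invitations propagate over social ties and enlarge the contributor pool, can be illustrated with a toy diffusion simulation. This is only a minimal sketch under assumed parameters (a fixed invitation probability per edge), not the paper's mechanism or equilibrium model; the graph and probabilities are illustrative.

```python
# Toy sketch: how invitations can enlarge the set of contributors on a
# social graph. Informed agents may invite each of their neighbours.
import random

def contributors_with_invitations(graph, seeds, invite_prob, rng):
    """BFS-style diffusion: informed agents invite neighbours w.p. invite_prob."""
    informed = set(seeds)
    frontier = list(seeds)
    while frontier:
        agent = frontier.pop()
        for neighbour in graph.get(agent, []):
            if neighbour not in informed and rng.random() < invite_prob:
                informed.add(neighbour)
                frontier.append(neighbour)
    return informed

rng = random.Random(0)
# A small line-plus-star graph: 0-1-2-3, and 3 connected to 4, 5, 6.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4, 5, 6],
         4: [3], 5: [3], 6: [3]}
no_invites = contributors_with_invitations(graph, {0}, 0.0, rng)
with_invites = contributors_with_invitations(graph, {0}, 1.0, rng)
print(len(no_invites), len(with_invites))  # only the seed vs. the whole graph
```

With no invitations only the seed agent contributes; with invitations the task reaches all seven agents, mirroring the enlargement effect the abstract claims.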


In the quest for autonomous agents learning open-ended repertoires of skills, most works take a Piagetian perspective: learning trajectories are the result of interactions between developmental agents and their physical environment. The Vygotskian perspective, on the other hand, emphasizes the centrality of the socio-cultural environment: higher cognitive functions emerge from transmissions of socio-cultural processes internalized by the agent. This paper argues that both perspectives could be coupled within the learning of autotelic agents to foster their skill acquisition. To this end, we make two contributions: 1) a novel social interaction protocol called Help Me Explore (HME), where autotelic agents can benefit from both individual and socially guided exploration. In social episodes, a social partner suggests goals at the frontier of the learning agent's knowledge. In autotelic episodes, agents can either learn to master their own discovered goals or autonomously rehearse failed social goals; 2) GANGSTR, a graph-based autotelic agent for manipulation domains capable of decomposing goals into sequences of intermediate sub-goals. We show that when learning within HME, GANGSTR overcomes its individual learning limits by mastering the most complex configurations (e.g., stacks of 5 blocks) with only a few social interventions.
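The episode structure described above can be sketched as a simple goal scheduler. This is an assumed, highly simplified rendering of the HME idea, not the authors' implementation; all names (`frontier`, `failed_social`, the goal strings) are hypothetical.

```python
# Sketch of HME-style episode scheduling: with some probability a social
# partner proposes a goal at the frontier of the agent's knowledge;
# otherwise the agent rehearses a failed social goal or pursues its own.
import random

def next_goal(mastered, frontier, failed_social, social_prob, rng):
    """Pick the goal and episode type for the next episode."""
    if frontier and rng.random() < social_prob:
        return rng.choice(sorted(frontier)), "social"
    if failed_social:
        return failed_social[0], "rehearsal"   # retry an old social goal
    return rng.choice(sorted(mastered)), "autotelic"

rng = random.Random(1)
mastered = {"stack2"}
frontier = {"stack3"}            # just beyond the agent's current skills
failed_social = ["stack3"]
goal, kind = next_goal(mastered, frontier, failed_social,
                       social_prob=1.0, rng=rng)
print(goal, kind)
```

With `social_prob=1.0` the partner always intervenes and proposes the frontier goal; lowering it trades social guidance for autonomous practice.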

Fake news is an age-old phenomenon, widely assumed to be associated with political propaganda published to sway public opinion. Yet, with the growth of social media, it has become a lucrative business for web publishers. Despite the many studies performed and countermeasures deployed by researchers and stakeholders, unreliable news sites have increased their share of engagement among the top-performing news sources in recent years. Indeed, stifling the impact of fake news depends on efforts from society, and the market, to limit the (economic) incentives of fake news producers. In this paper, we aim to enhance the transparency around these exact incentives and explore the following main questions: Who supports the existence of fake news websites via paid ads, either as an advertiser or an ad seller? Who owns these websites, and what other Web businesses are they into? What tracking activity do they perform on these websites? Aiming to answer these questions, we are the first to systematize the auditing process of fake news revenue flows. We develop a novel ad detection methodology to identify the companies that advertise on fake news websites and the intermediary companies responsible for facilitating those ad revenues. We study more than 2400 popular fake and real news websites and show that well-known legitimate ad networks, such as Google, IndexExchange, and AppNexus, have a direct advertising relation with more than 40% of these fake news websites, and a re-seller advertising relation with more than 60% of them. Using a graph clustering approach on an extended set of 114.5K sites connected with 443K edges, we show that entities who own fake news websites also own (or operate) other types of websites for entertainment, business, and politics, pointing to the fact that owning a fake news website is part of a broader business operation.
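The clustering step, grouping sites that share ownership or infrastructure edges into entities, can be sketched with a union-find over the site graph. This is a rough illustration on hypothetical data, not the paper's actual clustering method or dataset; the domain names and the meaning of the edges are invented for the example.

```python
# Sketch: group websites connected by shared-ownership edges (e.g., common
# ad-seller IDs or WHOIS records) into connected components via union-find.
def cluster(edges):
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    groups = {}
    for node in list(parent):
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

# Hypothetical edges between domains of one operator's portfolio:
edges = [("fake-news.example", "biz-site.example"),
         ("biz-site.example", "politics-site.example"),
         ("other.example", "entertainment.example")]
clusters = cluster(edges)
print(sorted(len(c) for c in clusters))  # component sizes
```

Here the fake news site ends up in the same component as business and politics sites, the kind of co-ownership signal the abstract describes at scale.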

The fundamental challenge of drawing causal inference is that counterfactual outcomes are not fully observed for any unit. Furthermore, in observational studies, treatment assignment is likely to be confounded. Many statistical methods have emerged for causal inference under unconfoundedness conditions given pre-treatment covariates, including propensity score-based methods, prognostic score-based methods, and doubly robust methods. Unfortunately for applied researchers, there is no 'one-size-fits-all' causal method that performs optimally in every setting. In practice, causal methods are primarily evaluated quantitatively on handcrafted simulated data. Such data-generative procedures can be of limited value because they are typically stylized models of reality. They are simplified for tractability and lack the complexities of real-world data. For applied researchers, it is critical to understand how well a method performs for the data at hand. Our work introduces a deep generative model-based framework, Credence, to validate causal inference methods. The framework's novelty stems from its ability to generate synthetic data anchored at the empirical distribution of the observed sample, and therefore virtually indistinguishable from the latter. The approach allows the user to specify ground truth for the form and magnitude of causal effects and confounding bias as functions of covariates. The simulated data sets are then used to evaluate the potential performance of various causal estimation methods when applied to data similar to the observed sample. We demonstrate Credence's ability to accurately assess the relative performance of causal estimation techniques in an extensive simulation study and two real-world data applications from the Lalonde and Project STAR studies.
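The core evaluation idea, simulating data with a user-specified ground-truth effect and known confounding, and then checking how an estimator behaves, can be illustrated with a much simpler generator. This is only an illustrative sketch, not the Credence model: the data-generating process, coefficients, and effect size below are all assumptions made for the example.

```python
# Sketch: generate synthetic data with a chosen ground-truth treatment
# effect TAU and a confounder x, then show that the naive
# difference-in-means estimator is biased away from TAU.
import random

rng = random.Random(0)
TAU = 2.0            # user-specified ground-truth treatment effect
n = 20000
totals = {0: 0.0, 1: 0.0}
counts = {0: 0, 1: 0}
for _ in range(n):
    x = rng.gauss(0.0, 1.0)                       # confounder
    t = 1 if rng.random() < (0.8 if x > 0 else 0.2) else 0  # confounded assignment
    y = TAU * t + 1.5 * x + rng.gauss(0.0, 0.1)   # outcome depends on x too
    totals[t] += y
    counts[t] += 1
naive_ate = totals[1] / counts[1] - totals[0] / counts[0]
print(round(naive_ate, 2))  # exceeds TAU because x confounds t and y
```

Because the true effect is known by construction, the bias of any candidate estimator can be measured exactly, which is the logic behind validating methods on generated data.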

Data curation is the process of making a dataset fit-for-use and archivable. It is critical to data-intensive science because it makes complex data pipelines possible, makes studies reproducible, and makes data (re)usable. Yet the complexities of the hands-on, technical, and intellectual work of data curation are frequently overlooked or downplayed. Obscuring the work of data curation not only renders the labor and contributions of the data curators invisible; it also makes it harder to tease out the impact curators' work has on the later usability, reliability, and reproducibility of data. To better understand the specific work of data curation -- and thereby explore ways of showing curators' impact -- we conducted a close examination of data curation at a large social science data repository, the Inter-university Consortium for Political and Social Research (ICPSR). We asked: What does curatorial work entail at ICPSR, and what work is more or less visible to different stakeholders and in different contexts? And how is that curatorial work coordinated across the organization? We triangulate accounts of data curation from interviews and records of curation in Jira tickets to develop a rich and detailed account of curatorial work. We find that curators describe a number of craft practices needed to perform their work, which defies the rote sequence of events implied by many lifecycle or workflow models. Further, we show how best practices and craft practices are deeply intertwined.

Motivated by the intricacies of allocating treasury funds in blockchain settings, we study the problem of crowdsourcing reviews for many different proposals in parallel. During the reviewing phase, every reviewer can select the proposals to write reviews for, as well as the quality of each review. The quality levels follow coarse community guidelines and can take values such as 'excellent' or 'good'. Based on these quality levels and the distribution of reviews, every reviewer receives some reward for their efforts. In this paper, we design a reward scheme and show that it always has pure Nash equilibria, for any set of proposals and reviewers. In addition, we show that these equilibria guarantee constant-factor approximations for two natural metrics: the total quality of all reviews, as well as the fraction of proposals that received at least one review, compared to the optimal outcome.
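The notion of a pure Nash equilibrium in such a review game can be made concrete with a drastically simplified model. The sketch below is an assumption-laden toy, far simpler than the paper's scheme: each reviewer writes exactly one review, and a fixed per-proposal reward is split equally among its reviewers.

```python
# Toy sketch: each reviewer picks one proposal; the proposal's reward is
# split equally among its reviewers. Check pure Nash equilibrium by
# testing every unilateral deviation.
def payoff(profile, reviewer, proposal, reward=1.0):
    """Reviewer's share if they review `proposal`, with others fixed."""
    others = sum(1 for r, p in profile.items()
                 if r != reviewer and p == proposal)
    return reward / (others + 1)

def is_pure_nash(profile, proposals):
    for reviewer, current in profile.items():
        best = max(payoff(profile, reviewer, p) for p in proposals)
        if payoff(profile, reviewer, current) < best:
            return False   # a profitable deviation exists
    return True

proposals = ["A", "B"]
balanced = {"r1": "A", "r2": "B"}   # one reviewer per proposal
crowded = {"r1": "A", "r2": "A"}    # both pile onto proposal A
print(is_pure_nash(balanced, proposals), is_pure_nash(crowded, proposals))
```

The balanced profile is an equilibrium, while piling onto one proposal is not, since a deviator would earn the full reward of the untouched proposal. This also hints at why equilibria of reward-splitting schemes tend to spread reviews across proposals.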

Recent years have witnessed remarkable progress towards computational fake news detection. To mitigate its negative impact, we argue that it is critical to understand what user attributes potentially cause users to share fake news. The key to this causal-inference problem is to identify confounders -- variables that cause spurious associations between treatments (e.g., user attributes) and outcomes (e.g., user susceptibility). In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities. Learning such user behavior is typically subject to selection bias, since only users who are susceptible to sharing news on social media are observed. Drawing on causal inference theories, we first propose a principled approach to alleviating selection bias in fake news dissemination. We then use the learned unbiased fake news sharing behavior as a surrogate confounder that can fully capture the causal links between user attributes and user susceptibility. We theoretically and empirically characterize the effectiveness of the proposed approach and find that it could be useful in protecting society from the perils of fake news.
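A standard way to correct the kind of selection bias described above is inverse propensity scoring (IPS): reweight each observed record by the inverse of its probability of being observed. The sketch below is illustrative only, not the authors' estimator; the sampling probabilities and the 0.3 base rate are invented for the example.

```python
# Sketch: selection bias and its IPS correction. Susceptible users are
# over-represented in the observed sample; weighting by 1/P(observed)
# recovers the true population rate.
import random

rng = random.Random(0)
observed, probs = [], []
for _ in range(50000):
    susceptible = rng.random() < 0.3          # true population rate: 0.3
    p_obs = 0.9 if susceptible else 0.2       # susceptible users over-sampled
    if rng.random() < p_obs:                  # selection into the sample
        observed.append(1.0 if susceptible else 0.0)
        probs.append(p_obs)

biased = sum(observed) / len(observed)        # naive estimate on the sample
weights = [1.0 / p for p in probs]
ips = sum(o * w for o, w in zip(observed, weights)) / sum(weights)
print(round(biased, 2), round(ips, 2))
```

The naive sample mean badly overstates susceptibility, while the reweighted estimate lands back near the true 0.3, the same reweighting logic that underlies debiasing the learned sharing behavior.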

A key challenge of big data analytics is how to collect a large volume of (labeled) data. Crowdsourcing aims to address this challenge by aggregating and estimating high-quality data (e.g., sentiment labels for text) from pervasive clients/users. Existing studies on crowdsourcing focus on designing new methods to improve the aggregated data quality from unreliable/noisy clients. However, the security aspects of such crowdsourcing systems remain under-explored to date. We aim to bridge this gap in this work. Specifically, we show that crowdsourcing is vulnerable to data poisoning attacks, in which malicious clients provide carefully crafted data to corrupt the aggregated data. We formulate our proposed data poisoning attacks as an optimization problem that maximizes the error of the aggregated data. Our evaluation results on one synthetic and two real-world benchmark datasets demonstrate that the proposed attacks can substantially increase the estimation errors of the aggregated data. We also propose two defenses to reduce the impact of malicious clients. Our empirical results show that the proposed defenses can substantially reduce the estimation errors under the data poisoning attacks.
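The attack surface is easy to see on the simplest aggregation rule, majority voting. The sketch below is a minimal toy, not the paper's optimization-based attack: malicious clients simply vote the opposite of each true label, which is enough to flip several aggregated answers.

```python
# Toy sketch: data poisoning against majority-vote label aggregation.
# Malicious clients cast adversarial votes to maximize aggregation error.
from collections import Counter

def majority(votes_per_item):
    return [Counter(v).most_common(1)[0][0] for v in votes_per_item]

true_labels = [1, 0, 1, 1]
# Three honest clients per item, mostly but not always correct:
honest = [[1, 1, 0], [0, 0, 1], [1, 1, 1], [1, 0, 1]]
clean = majority(honest)

# Two malicious clients vote the opposite of the true label on every item:
poisoned = [v + [1 - t, 1 - t] for v, t in zip(honest, true_labels)]
attacked = majority(poisoned)

errors = lambda est: sum(e != t for e, t in zip(est, true_labels))
print(errors(clean), errors(attacked))
```

With clean votes the aggregation is perfect; adding just two colluding clients flips three of the four items, which is why robust aggregation and malicious-client detection matter.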

In recent years, disinformation, including fake news, has become a global phenomenon due to its explosive growth, particularly on social media. The widespread dissemination of disinformation and fake news can cause detrimental societal effects. Despite recent progress, detecting disinformation and fake news remains non-trivial due to its complexity, diversity, multi-modality, and the costs of fact-checking and annotation. The goal of this chapter is to pave the way for appreciating the challenges and advancements by: (1) introducing the types of information disorder on social media and examining their differences and connections; (2) describing important and emerging tasks for combating disinformation through characterization, detection, and attribution; and (3) discussing a weak supervision approach to detecting disinformation with limited labeled data. We then provide an overview of the chapters in this book, which represent recent advancements in three related parts: (1) user engagement in the dissemination of information disorder; (2) techniques for detecting and mitigating disinformation; and (3) trending issues such as ethics, blockchain, and clickbait. We hope this book will serve as a convenient entry point for researchers, practitioners, and students to understand the problems and challenges, learn state-of-the-art solutions for their specific needs, and quickly identify new research problems in their domains.

Learning algorithms have become more powerful, often at the cost of increased complexity. In response, the demand for algorithmic transparency is growing. In NLP tasks, attention distributions learned by attention-based deep learning models are used to gain insights into the models' behavior. To what extent is this perspective valid for all NLP tasks? We investigate whether the distributions calculated by different attention heads in a transformer architecture can be used to improve transparency in the task of abstractive summarization. To this end, we present both a qualitative and a quantitative analysis of the behavior of the attention heads. We show that some attention heads indeed specialize towards syntactically and semantically distinct input. We propose an approach to evaluate to what extent the Transformer model relies on specifically learned attention distributions. We also discuss what this implies for using attention distributions as a means of transparency.
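The object being analysed, an attention head's distribution over input tokens, is just the row-wise softmax of scaled query-key dot products. The pure-Python sketch below computes it for a toy two-token example; the vectors are invented, and a real Transformer would do this per head over learned projections.

```python
# Sketch: scaled dot-product attention weights for one head.
# Each query token gets a probability distribution over the key tokens.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(queries, keys):
    """One distribution over key positions per query token."""
    d = len(keys[0])
    rows = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        rows.append(softmax(scores))
    return rows

q = [[1.0, 0.0], [0.0, 1.0]]   # toy 2-token query vectors
k = [[1.0, 0.0], [0.0, 1.0]]   # toy key vectors
weights = attention_weights(q, k)
print([round(w, 3) for w in weights[0]])
```

Each row sums to one, and here each token attends most to its aligned key; inspecting which positions receive mass is exactly what head-level transparency analyses examine.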

In many applications, it is important to characterize the way in which two concepts are semantically related. Knowledge graphs such as ConceptNet provide a rich source of information for such characterizations by encoding relations between concepts as edges in a graph. When two concepts are not directly connected by an edge, their relationship can still be described in terms of the paths that connect them. Unfortunately, many of these paths are uninformative and noisy, which means that the success of applications that use such path features crucially relies on their ability to select high-quality paths. In existing applications, this path selection process is based on relatively simple heuristics. In this paper, we instead propose to learn to predict path quality from crowdsourced human assessments. Since we are interested in a generic, task-independent notion of quality, we simply ask human participants to rank paths according to their subjective assessment of the paths' naturalness, without attempting to define naturalness or steering the participants towards particular indicators of quality. We show that a neural network model trained on these assessments is able to predict human judgments on unseen paths with near-optimal performance. Most notably, we find that the resulting path selection method is substantially better than the current heuristic approaches at identifying meaningful paths.
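The path features in question can be made concrete on a tiny graph: enumerate the simple paths between two concepts, then rank them with a naive length heuristic of the kind the learned quality model is meant to replace. The mini-graph below is hypothetical, not ConceptNet itself, and real paths would carry edge labels (relations) as well.

```python
# Sketch: enumerate simple paths between two concepts in a toy concept
# graph, then rank them with a shortest-is-best heuristic.
def simple_paths(graph, start, goal, max_len=4):
    paths, stack = [], [[start]]
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            paths.append(path)
            continue
        if len(path) >= max_len:
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:          # keep paths simple (no cycles)
                stack.append(path + [nxt])
    return paths

graph = {"cat": ["animal", "pet"], "pet": ["animal"],
         "animal": ["organism"], "organism": []}
paths = simple_paths(graph, "cat", "organism")
ranked = sorted(paths, key=len)          # heuristic: shorter is "better"
print(ranked[0])
```

Length is a crude proxy: a short path can still be semantically odd, and a longer one natural, which is precisely the gap a model trained on human naturalness judgments aims to close.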
