
How do users and communities respond to news from unreliable sources? How does news from these sources change online conversations? In this work, we examine the role of misinformation in sparking political incivility and toxicity on the social media platform Reddit. Using the Google Jigsaw Perspective API to identify toxicity, hate speech, and other forms of incivility, we find that Reddit comments posted in response to misinformation articles are 71.4% more likely to be toxic than comments responding to authentic news articles. Identifying specific instances of commenters' incivility and fitting an exponential random graph model, we then show that when reacting to a misinformation story, Reddit users are more likely to be toxic toward users of different political beliefs than in other settings. Finally, using a zero-inflated negative binomial regression, we find that as the toxicity of a subreddit increases, its users become more likely to comment on misinformation-related Reddit submissions.
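The abstract does not include code, but a minimal sketch of scoring a comment's toxicity with the Perspective API might look like the following (the attribute set and the downstream 0.5 cutoff are illustrative assumptions, not details from the paper):

```python
from googleapiclient import discovery  # pip install google-api-python-client

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

def toxicity_score(comment_text: str) -> float:
    """Return the Perspective TOXICITY summary score in [0, 1]."""
    body = {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}, "IDENTITY_ATTACK": {}},
    }
    response = client.comments().analyze(body=body).execute()
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example: flag a Reddit comment as toxic above an illustrative 0.5 cutoff.
print(toxicity_score("You are an idiot.") > 0.5)
```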

Related Content

In this paper, we describe and present the first dataset of source code plagiarism aimed specifically at contest plagiarism. The dataset contains 251 pairs of plagiarized solutions to competitive programming tasks in Java, as well as 660 non-plagiarized pairs; the described approach can also be used to extend the dataset in the future. Importantly, each pair comes in two versions: (a) "raw" and (b) with participants' repeated template code removed, allowing tools to be evaluated in different settings. We used the collected dataset to compare the available source code plagiarism detection tools, including state-of-the-art ones, specifically in their ability to detect contest plagiarism. Our results indicate that the tools perform significantly worse on contest plagiarism because of the template code and the presence of other misleadingly similar code. Of the tested tools, token-based ones demonstrated the best performance on both variants of the dataset.
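For intuition, a toy version of the token-based approach that performed best might look as follows; real tools such as JPlag use more robust matching (e.g., greedy string tiling), so this Jaccard-over-token-n-grams sketch is a simplified stand-in, and the tokenizer is deliberately naive:

```python
import re

# Naive Java tokenizer: identifiers, numbers, punctuation, operators.
TOKEN_RE = re.compile(r"[A-Za-z_]\w*|\d+|[{}()\[\];,.]|[+\-*/=<>!&|]+")

def tokenize(java_source: str) -> list[str]:
    return TOKEN_RE.findall(java_source)

def ngrams(tokens: list[str], n: int = 3) -> set[tuple[str, ...]]:
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def token_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity over token n-grams; higher values suggest
    possible plagiarism. Shared template code inflates this score,
    which is why the dataset also ships a template-stripped variant."""
    ga, gb = ngrams(tokenize(a), n), ngrams(tokenize(b), n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)
```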

Future astronauts living and working on the Moon will face extreme environmental conditions that impede their operational safety and performance. While it has been suggested that Augmented Reality (AR) Head-Up Displays (HUDs) could help mitigate some of these adversities, the applicability of AR in the unique lunar context remains underexplored. To address this limitation, we produced an accurate representation of the lunar setting in virtual reality (VR), which then formed our testbed for exploring prospective operational scenarios with aerospace experts. Herein we present findings based on qualitative reflections from the first six study participants. AR was found to be instrumental in several use cases, including support for navigation and risk awareness. Major design challenges were likewise identified, including the importance of redundancy and contextual appropriateness. Drawing on these findings, we conclude by outlining directions for future research aimed at developing AR-based assistive solutions tailored to the lunar setting.

As artificial intelligence (AI) and machine learning become central to human life, their potential harms become more vivid. In the presence of such drawbacks, a critical question to address before using individual predictions for critical decision-making is whether those predictions are reliable. Aligned with recent efforts on data-centric AI, this paper proposes a novel approach, complementary to existing work on trustworthy AI, that addresses the reliability question through the lens of data. Specifically, it associates data sets with a distrust quantification that specifies their scope of use for individual predictions. It develops novel algorithms for efficient and effective computation of distrust values. The proposed algorithms learn the necessary components of the measures from the data itself and are sublinear, which makes them scalable to very large and multi-dimensional settings. Furthermore, an estimator is designed that requires no data access at query time. Besides theoretical analyses, the algorithms are evaluated experimentally on multiple real and synthetic data sets and different tasks. The experimental results show a consistent correlation between distrust values and model performance, highlighting the necessity of dismissing prediction outcomes for cases with high distrust values, at least for critical decisions.
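The abstract does not specify the distrust measures, so as a purely hypothetical illustration of the underlying idea, how far a query point lies from the region the training data covers, one could score distrust by scaled k-nearest-neighbor distance (the class name, the choice of k, and the median-based scaling are all our assumptions):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class KnnDistrust:
    """Hypothetical distrust score: the distance from a query point to its
    k-th nearest training neighbor, scaled by a reference distance, so that
    values near 0 mean 'well covered by the data' and values >= 1 suggest
    the query is outside the data's scope of use."""

    def __init__(self, X_train: np.ndarray, k: int = 10):
        self.nn = NearestNeighbors(n_neighbors=k).fit(X_train)
        # Reference scale: a typical k-th-neighbor distance within the training set.
        dists, _ = self.nn.kneighbors(X_train)
        self.ref = np.median(dists[:, -1])

    def score(self, X_query: np.ndarray) -> np.ndarray:
        dists, _ = self.nn.kneighbors(X_query)
        return dists[:, -1] / self.ref

# Usage: dismiss predictions where distrust is high.
rng = np.random.default_rng(0)
d = KnnDistrust(rng.normal(size=(1000, 5)))
print(d.score(np.zeros((1, 5))), d.score(np.full((1, 5), 10.0)))
```

Note that this brute-force sketch is not sublinear as the paper's algorithms are; it only illustrates the interface.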

Microbiome research is now moving beyond the compositional analysis of microbial taxa in a sample. Increasing evidence from large human microbiome studies suggests that the functional consequences of changes in the intestinal microbiome may provide more power for studying their impact on inflammation and immune responses. Although 16S rRNA analysis is one of the most popular and cost-effective methods to profile microbial compositions, marker-gene sequencing cannot provide direct information about the functional genes present in the genomes of community members. Bioinformatic tools have been developed to predict microbiome function from 16S rRNA gene data. Among them, PICRUSt2 has become one of the most popular functional profile prediction tools, generating community-wide pathway abundances. However, no state-of-the-art inference tools are available to test for differences in pathway abundances between comparison groups. We have developed ggpicrust2, an R package, to perform extensive differential abundance (DA) analyses and provide publication-ready visualizations that highlight the signals.
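ggpicrust2 itself is an R package; as a language-agnostic illustration of the core DA step it automates, a minimal Python sketch could look like this (a Mann-Whitney test per pathway with Benjamini-Hochberg correction; the actual package wraps dedicated DA methods, so this is a simplified stand-in):

```python
import pandas as pd
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def differential_abundance(abund: pd.DataFrame, groups: pd.Series) -> pd.DataFrame:
    """abund: pathways x samples table (e.g., PICRUSt2 pathway abundances);
    groups: sample -> group label with exactly two levels."""
    g1, g2 = groups.unique()
    a = abund.loc[:, groups[groups == g1].index]
    b = abund.loc[:, groups[groups == g2].index]
    pvals = [mannwhitneyu(a.loc[p], b.loc[p]).pvalue for p in abund.index]
    reject, qvals, _, _ = multipletests(pvals, method="fdr_bh")
    return pd.DataFrame({"pathway": abund.index, "pval": pvals,
                         "qval": qvals, "significant": reject})
```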

Social media platforms are now heavily moderated to prevent the spread of online hate speech, which is usually rife with toxic words and is directed toward an individual or a community. Owing to such heavy moderation, newer and more subtle techniques are being deployed. One of the most striking among these is fear speech. Fear speech, as the name suggests, attempts to incite fear about a target community. Although subtle, it can be highly effective, often pushing communities toward physical conflict. Therefore, understanding its prevalence in social media is of paramount importance. This article presents a large-scale study of the prevalence of fear speech, based on 400K fear speech and over 700K hate speech posts collected from Gab.com. Remarkably, users posting a large amount of fear speech accrue more followers and occupy more central positions in social networks than users posting a large amount of hate speech. They can also reach benign users more effectively than hate speech users through replies, reposts, and mentions. This connects to the fact that, unlike hate speech, fear speech contains almost no toxic content, making it look plausible. Moreover, while fear speech topics mostly portray a community as a perpetrator using a (fake) chain of argumentation, hate speech topics hurl direct multitarget insults, pointing to why general users could be more susceptible to fear speech. Our findings extend even to other platforms (Twitter and Facebook) and thus necessitate sophisticated moderation policies and mass awareness to combat fear speech.
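The abstract's network claims can be made concrete with a small sketch; the paper's exact centrality measures are not named here, so the in-degree and PageRank used below are generic stand-ins on a toy interaction graph:

```python
import networkx as nx

# Toy interaction network: an edge u -> v means user u replied to,
# reposted, or mentioned user v.
G = nx.DiGraph([
    ("alice", "bob"), ("carol", "bob"), ("dave", "bob"),
    ("bob", "erin"), ("erin", "alice"),
])

in_deg = dict(G.in_degree())  # how often a user is engaged with
rank = nx.pagerank(G)         # a standard centrality; higher = more central

for user in sorted(G, key=rank.get, reverse=True):
    print(f"{user}: in-degree={in_deg[user]}, pagerank={rank[user]:.3f}")
```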

Widespread conspiracy theories may significantly impact our society. This paper focuses on the QAnon conspiracy theory, a consequential conspiracy theory that started on, and disseminated successfully through, social media. Our work characterizes how Reddit users who participated in QAnon-focused subreddits engage in activities on the platform, especially outside their own communities. Using a large-scale Reddit moderation action against QAnon-related activities in 2018 as our starting point, we identified 13,000 users active in the early QAnon communities. We collected the 2.1 million submissions and 10.8 million comments posted by these users across all of Reddit from October 2016 to January 2021. The majority of these users were only active after the emergence of the QAnon conspiracy theory and decreased in activity after Reddit's 2018 QAnon ban. A qualitative analysis of a sample of 915 subreddits where the "QAnon-enthusiastic" users were especially active shows that they participated in a diverse range of subreddits, often on topics unrelated to QAnon. However, most of the users' submissions were concentrated in subreddits that hold sympathetic attitudes towards the conspiracy theory, characterized by discussions that were pro-Trump or that emphasized unrestricted behavior (often anti-establishment and anti-interventionist). Further study of a sample of 1,571 of these submissions indicates that most consist of links from low-quality sources, bringing potential harm to the broader Reddit community. These results suggest that the activities of early QAnon users on Reddit were dedicated and committed to the conspiracy, with implications for both platform moderation design and future research.

The present study investigates the role of source characteristics, the quality of evidence, and prior beliefs about the topic in adult readers' credibility evaluations of short health-related social media posts. The researchers designed content for posts concerning five health topics by manipulating the source characteristics (the source's expertise, gender, and ethnicity), the accuracy of the claims, and the quality of evidence (research evidence, testimony, consensus, and personal experience). Accurate and inaccurate social media posts varying in the other manipulated aspects were then programmatically generated. Crowdworkers (N = 844) recruited from two platforms were asked to evaluate the credibility of up to ten social media posts, resulting in 8,380 evaluations. Before the credibility evaluation, participants' prior beliefs about the topics of the posts were assessed. The results showed that, after controlling for the topic of the post and the crowdworking platform, prior belief consistency and the source's expertise affected the perceived credibility of the accurate and inaccurate social media posts the most. In contrast, the quality of evidence supporting the health claim mattered relatively little. The source's gender and ethnicity had no effect. The results are discussed in terms of first- and second-hand evaluation strategies.
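As a sketch of the kind of analysis implied by "controlling for the topic of the post and the crowdworking platform" with repeated evaluations per participant, a mixed-effects regression on synthetic data might look as follows (all variable names and the data itself are our assumptions, not the study's materials):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per (participant, post) evaluation.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "participant": rng.integers(0, 80, n),
    "credibility": rng.normal(4, 1, n),  # e.g., a 1-7 rating
    "expertise": rng.choice(["expert", "layperson"], n),
    "belief_consistent": rng.integers(0, 2, n),
    "evidence": rng.choice(["research", "testimony", "consensus", "experience"], n),
    "topic": rng.choice(list("ABCDE"), n),
    "platform": rng.choice(["p1", "p2"], n),
})

# Fixed effects of interest plus topic/platform controls, with a random
# intercept per participant to account for repeated evaluations.
m = smf.mixedlm(
    "credibility ~ expertise + belief_consistent + evidence + topic + platform",
    data=df, groups=df["participant"],
).fit()
print(m.summary())
```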

Democratization of AI means not only that people can freely use AI, but also that people can collectively decide how AI is to be used. In particular, collective decision-making power is required to redress the negative externalities of the development of increasingly advanced AI systems, including degradation of the digital commons and unemployment from automation. The rapid pace of AI development and deployment currently leaves little room for this power. Monopolized in the hands of private corporations, the development of the most capable foundation models has proceeded largely without public input. There is currently no implemented mechanism for ensuring that the economic value generated by such models is redistributed to account for their negative externalities. The citizens who have generated the data necessary to train models have no input on how their data are to be used. In this work, we propose that a public data trust assert control over training data for foundation models. In particular, this trust should scrape the internet as a digital commons and license the data to commercial model developers for a percentage cut of revenues from deployment. First, we argue in detail for the creation of such a trust. We also discuss feasibility and potential risks. Second, we detail a number of ways for a data trust to incentivize model developers to use training data only from the trust, proposing a mix of verification mechanisms, potential regulatory action, and positive incentives. We conclude by highlighting other potential benefits of our proposed data trust and connecting our work to ongoing efforts in data and compute governance.

This survey paper is an expanded version of an invited keynote at the ThEdu'22 workshop, August 2022, in Haifa (Israel). After a short introduction on the development of CAS, DGS, and other useful technologies, we show their implications for Mathematics Education, and in the broader frame of STEAM Education. In particular, we discuss the transformation of Mathematics Education into an exploration-discovery-conjecture-proof scheme that avoids using the technology as a black box. This scheme fits well into the so-called 4 C's of 21st Century Education. Communication and Collaboration are emphasized not only between humans, but also between machines, and between humans and machines. Specific characteristics of the outputs heighten the need for Critical Thinking. The usage of automated commands for exploration and discovery is discussed, with mention of their limitations where they exist. We illustrate the topic with examples from parametric integrals (describing a "cognitive neighborhood" of a mathematical notion), plane geometry, and the study of plane curves (envelopes, isoptic curves). Some of the examples are fully worked out; others are explained and references are given.
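As a purely illustrative classic of the envelope computations mentioned above (not necessarily an example from the paper): for a family of curves F(x, y, t) = 0, the envelope satisfies F = 0 and ∂F/∂t = 0 simultaneously.

```latex
% Family of lines at unit distance from the origin, parametrized by t:
%   F(x, y, t) = x cos t + y sin t - 1 = 0.
% The envelope solves F = 0 together with dF/dt = 0:
\[
\begin{cases}
x\cos t + y\sin t = 1,\\
-x\sin t + y\cos t = 0,
\end{cases}
\quad\Longrightarrow\quad
x = \cos t,\ y = \sin t,
\]
% so the envelope is the unit circle $x^2 + y^2 = 1$.
```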

ASR (automatic speech recognition) systems like Siri, Alexa, Google Voice, or Cortana have become quite popular recently. One of the key techniques enabling the practical use of such systems in people's daily lives is deep learning. Though deep learning in computer vision is known to be vulnerable to adversarial perturbations, little is known about whether such perturbations remain effective against practical speech recognition. In this paper, we not only demonstrate that such attacks can happen in reality, but also show that they can be conducted systematically. To minimize users' attention, we choose to embed the voice commands into a song, called a CommandSong. In this way, the song carrying the command can spread through radio, TV, or any media player installed on portable devices like smartphones, potentially impacting millions of users over long distances. In particular, we overcome two major challenges: minimizing the revision of a song in the process of embedding commands, and letting the CommandSong spread through the air without losing the voice "command". Our evaluation demonstrates that we can craft random songs to "carry" any commands, and that the modification is extremely difficult to notice. Notably, the physical attack, in which we play the CommandSongs over the air and record them, can succeed 94% of the time.
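The attack itself optimizes the perturbation against the ASR model, which is beyond a short sketch; the fragment below only illustrates the imperceptibility bookkeeping, embedding a bounded perturbation into a song and measuring how audible the change is (the max_amplitude bound and the SNR reporting are illustrative assumptions, not the paper's method):

```python
import numpy as np

def embed_perturbation(song: np.ndarray, delta: np.ndarray,
                       max_amplitude: float = 0.005) -> np.ndarray:
    """Add a clipped adversarial perturbation to a song waveform
    (both float arrays in [-1, 1] at the same sample rate)."""
    delta = np.clip(delta, -max_amplitude, max_amplitude)
    return np.clip(song + delta, -1.0, 1.0)

def snr_db(song: np.ndarray, perturbed: np.ndarray) -> float:
    """Signal-to-noise ratio of the song relative to the added change;
    a higher SNR means the modification is harder to notice."""
    noise = perturbed - song
    return 10 * np.log10(np.sum(song ** 2) / np.sum(noise ** 2))

# Usage on a synthetic 1-second, 16 kHz "song".
rng = np.random.default_rng(0)
song = 0.5 * np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
perturbed = embed_perturbation(song, rng.normal(0, 0.01, 16000))
print(f"SNR: {snr_db(song, perturbed):.1f} dB")
```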
