
Administering COVID-19 vaccines at a societal scale has been deemed the most appropriate way to defend against the COVID-19 pandemic. This global vaccination drive naturally prompted Pro-Vaxxers and Anti-Vaxxers to strongly express their support for, and concerns about, the vaccines on social media platforms. Understanding this online discourse is crucial for policy makers, since it is likely to impact the success of vaccination drives and might even impact the final outcome of our fight against the pandemic. The goal of this work is to improve this understanding using the lens of Twitter-discourse data. We first develop a classifier that categorizes users according to their vaccine-related stance with high precision (97%). Using this classifier, we detect and investigate specific user groups who posted about vaccines in pre-COVID and COVID times. Specifically, we identify the distinct topics that these users talk about and investigate how vaccine-related discourse changed between pre-COVID and COVID times. Finally, for the first time, we investigate changes in the vaccine-related stance of Twitter users and shed light on potential reasons for such changes. Our dataset and classifier are available at https://github.com/sohampoddar26/covid-vax-stance.
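The paper's own classifier and data are in the linked repository; purely as an illustration of the stance-classification setup, the sketch below trains a much simpler TF-IDF plus logistic-regression baseline. The input file name and the pro/anti/neutral label set are hypothetical placeholders, not the paper's actual data format or model.

# Minimal stance-classification baseline (NOT the paper's model): TF-IDF features
# plus logistic regression over tweets labeled pro-vaccine / anti-vaccine / neutral.
# File name and columns below are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("vaccine_tweets_labeled.csv")        # columns: text, stance
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["stance"], test_size=0.2, random_state=42, stratify=df["stance"])

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),     # word and bigram features
    LogisticRegression(max_iter=1000, class_weight="balanced"))
clf.fit(X_train, y_train)

# Per-class precision is the metric emphasized in the abstract.
print(classification_report(y_test, clf.predict(X_test), digits=3))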

Related content

The COVID-19 pandemic has significantly influenced all modes of transportation. However, it is still unclear how the pandemic affected the demand for ridesourcing services and whether these effects varied between small towns and large cities. We analyzed over 220 million ride requests in the City of Chicago, Illinois (population: 2.7 million), and 52 thousand in the Town of Innisfil, Ontario (population: 37 thousand), to investigate the impact of the COVID-19 pandemic on ridesourcing demand in the two locations. Overall, the pandemic resulted in fewer trips in areas with higher proportions of seniors and more trips to parks and green spaces. Ridesourcing demand was adversely affected by the stringency index and COVID-19-related variables, and positively affected by vaccination rates. However, compared to Innisfil, ridesourcing services in Chicago experienced larger reductions in demand, were more affected by the number of hospitalizations and deaths, were less impacted by vaccination rates, and had lower recovery rates.
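As a rough illustration of how such effects can be quantified, the sketch below fits a count regression of daily trips on a policy stringency index, COVID-19 indicators, and the vaccination rate. The column names and the negative-binomial specification are assumptions for illustration, not the study's actual model.

# Illustrative count-data regression (not the study's exact specification):
# daily ridesourcing trips vs. policy stringency, COVID-19 indicators, and
# vaccination rate. Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

daily = pd.read_csv("daily_trips.csv")   # columns: trips, stringency_index,
                                         # new_cases, hospitalizations, vax_rate
model = smf.glm(
    "trips ~ stringency_index + new_cases + hospitalizations + vax_rate",
    data=daily,
    family=sm.families.NegativeBinomial())
result = model.fit()
print(result.summary())   # negative coefficients on stringency/case variables and a
                          # positive one on vax_rate would match the reported pattern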

`Tracking' is the collection of data about an individual's activity across multiple distinct contexts and the retention, use, or sharing of data derived from that activity outside the context in which it occurred. This paper aims to introduce tracking on the web, smartphones, and the Internet of Things to an audience with little or no previous knowledge. It covers these topics primarily from the perspective of computer science and human-computer interaction, but also includes relevant law and policy aspects. Rather than a systematic literature review, it aims to provide an over-arching narrative spanning this large research space. Section 1 introduces the concept of tracking. Section 2 provides a short history of the major developments of tracking on the web. Section 3 presents research covering the detection, measurement, and analysis of web tracking technologies. Section 4 delves into countermeasures against web tracking and mechanisms that have been proposed to allow users to control and limit tracking, as well as studies of end-user perspectives on tracking. Section 5 focuses on tracking on `smart' devices, including smartphones and the Internet of Things. Section 6 covers emerging issues affecting the future of tracking across these different platforms.

Text classification is a fundamental Natural Language Processing task with a wide variety of applications, in which deep learning approaches have produced state-of-the-art results. While these models have been heavily criticized for their black-box nature, their robustness to slight perturbations of the input text has also been a matter of concern. In this work, we carry out a data-focused study evaluating the impact of systematic practical perturbations on the performance of deep learning-based text classification models such as CNN-, LSTM-, and BERT-based algorithms. The perturbations are induced by the addition and removal of unwanted tokens, such as punctuation and stop-words, that are minimally associated with the final performance of the model. We show that these deep learning approaches, including BERT, are sensitive to such legitimate input perturbations on four standard benchmark datasets: SST2, TREC-6, BBC News, and tweet_eval. We observe that BERT is more susceptible to the removal of tokens than to the addition of tokens. Moreover, LSTM is slightly more sensitive to input perturbations than the CNN-based model. The work also serves as a practical guide to assessing the impact of discrepancies in train-test conditions on the final performance of models.
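The kind of perturbation described above can be made concrete with a small sketch: one function removes stop-words and punctuation, the other injects extra punctuation. Using NLTK's English stop-word list and the particular insertion probability are assumptions for illustration only.

# Sketch of the "legitimate" perturbations studied: adding or removing punctuation
# and stop-words that carry little task signal.
import random
import string
from nltk.corpus import stopwords   # assumes nltk.download("stopwords") has been run

STOP = set(stopwords.words("english"))
PUNCT = list(".,!?;:")

def remove_tokens(text: str) -> str:
    """Drop stop-words and punctuation tokens from the input."""
    out = []
    for tok in text.split():
        core = tok.strip(string.punctuation)
        if core and core.lower() not in STOP:
            out.append(core)
    return " ".join(out)

def add_tokens(text: str, p: float = 0.2, seed: int = 0) -> str:
    """Insert a random punctuation mark after a fraction p of the tokens."""
    rng = random.Random(seed)
    out = []
    for tok in text.split():
        out.append(tok)
        if rng.random() < p:
            out.append(rng.choice(PUNCT))
    return " ".join(out)

print(remove_tokens("The movie was not good, it was great!"))
print(add_tokens("The movie was not good it was great"))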

The supermarket model refers to a system with a large number of queues, where new customers choose d queues at random and join the one with the fewest customers. This model demonstrates the power of even small amounts of choice, as compared to simply joining a queue chosen uniformly at random, for load balancing systems. In this work we perform simulation-based studies of variations where service times for a customer are predicted, as might be done in modern settings using machine learning techniques or related mechanisms. Our primary takeaway is that using even seemingly weak predictions of service times can yield significant benefits over blind First In First Out queueing in this context. However, some care must be taken when using predicted service time information both to choose a queue and to order elements for service within a queue; while in many cases using the information for both choosing and ordering is beneficial, in many of our simulation settings we find that simply using the number of jobs to choose a queue is better when predicted service times are used to order jobs within a queue. In our simulations, we evaluate both synthetic and real-world workloads--in the latter, service times are predicted by machine learning. Our results provide practical guidance for the design of real-world systems; moreover, they validate the real-world relevance of many natural theoretical open questions that we leave for future work.
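To make the setup concrete, the toy simulation below implements power-of-d choice by queue length together with within-queue ordering by noisily predicted service times, and compares it against FIFO within each queue. All parameters (arrival rate, noise model, queue count) are illustrative and are not the paper's experimental configuration.

# Toy discrete-event simulation of the supermarket model with noisy service-time
# predictions: arrivals sample d queues, join the shortest, and each queue serves
# jobs in order of predicted size (or FIFO for comparison).
import heapq
import math
import random

def simulate(n=50, lam=0.9, d=2, sigma=1.0, n_jobs=100_000, seed=0,
             use_predictions=True):
    """Power-of-d choice by queue length; within each queue, serve by predicted
    size (use_predictions=True) or FIFO. Returns the mean response time."""
    rng = random.Random(seed)
    waiting = [[] for _ in range(n)]   # per-queue waiting jobs: (sort_key, arrival, actual)
    in_service = [None] * n            # arrival time of the job in service, or None
    events = [(rng.expovariate(lam * n), "arrive", -1)]
    done, total_resp = 0, 0.0

    while done < n_jobs:
        t, kind, q = heapq.heappop(events)
        if kind == "arrive":
            heapq.heappush(events, (t + rng.expovariate(lam * n), "arrive", -1))
            actual = rng.expovariate(1.0)                           # true service time
            pred = actual * math.exp(sigma * rng.gauss(0.0, 1.0))   # noisy prediction
            picks = rng.sample(range(n), d)
            q = min(picks, key=lambda i: len(waiting[i]) + (in_service[i] is not None))
            if in_service[q] is None:                               # idle server: start now
                in_service[q] = t
                heapq.heappush(events, (t + actual, "depart", q))
            else:
                waiting[q].append((pred if use_predictions else t, t, actual))
        else:                                                       # departure from queue q
            total_resp += t - in_service[q]
            done += 1
            in_service[q] = None
            if waiting[q]:
                waiting[q].sort()                                   # smallest key served next
                _, t_arr, actual = waiting[q].pop(0)
                in_service[q] = t_arr
                heapq.heappush(events, (t + actual, "depart", q))
    return total_resp / done

print("FIFO within queue      :", simulate(use_predictions=False))
print("Predicted-size ordering:", simulate(use_predictions=True))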

Statistical methods are developed for the analysis of clinical and virus genetics data from phase 3 randomized, placebo-controlled trials of vaccines against the novel coronavirus disease COVID-19. Vaccine efficacy (VE) of a vaccine to prevent COVID-19 caused by one of finitely many genetic strains of SARS-CoV-2 may vary by strain. The problem of assessing differential VE by viral genetics can be formulated under a competing risks model where the endpoint is virologically confirmed COVID-19 and the cause-of-failure is the infecting SARS-CoV-2 genotype. Strain-specific VE is defined as one minus the cause-specific hazard ratio (vaccine/placebo). For the COVID-19 VE trials, the time to COVID-19 is right-censored, and a substantial percentage of failure cases are missing the infecting virus genotype. We develop estimation and hypothesis testing procedures for strain-specific VE when the failure time is subject to right censoring and the cause-of-failure is subject to missingness, focusing on $J \ge 2$ discrete categorical unordered or ordered virus genotypes. The stratified Cox proportional hazards model is used to relate the cause-specific outcomes to explanatory variables. The inverse probability weighted complete-case (IPW) estimator and the augmented inverse probability weighted complete-case (AIPW) estimator are investigated. Hypothesis tests are developed to assess whether the vaccine provides at least a specified level of efficacy against some viral genotypes and whether VE varies across genotypes, adjusting for covariates. The finite-sample properties of the proposed tests are studied through simulations and are shown to have good performance. In preparation for the real data analyses, the developed methods are applied to a pseudo dataset mimicking the Moderna COVE trial.
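In formula form, and with notation that goes slightly beyond the abstract (treatment indicator, covariates, strata assumed for illustration), the competing-risks setup described above can be written roughly as follows. With stratum $s$, treatment indicator $A$ ($A = 1$ vaccine, $A = 0$ placebo), and covariates $Z$, the cause-specific hazard of COVID-19 with infecting genotype $j$ under the stratified Cox model is
$$\lambda_{j,s}(t \mid A, Z) = \lambda_{0j,s}(t)\, \exp\big(\alpha_j A + \beta_j^{\top} Z\big), \qquad j = 1, \dots, J,$$
so that strain-specific vaccine efficacy is
$$\mathrm{VE}_j = 1 - \exp(\alpha_j),$$
i.e., one minus the cause-specific hazard ratio (vaccine/placebo). The hypotheses of interest then take the form $\mathrm{VE}_j \ge v_0$ for a prespecified level $v_0$ (at least a given efficacy against genotype $j$) and $\mathrm{VE}_1 = \cdots = \mathrm{VE}_J$ (no variation in VE across genotypes).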

The vast majority of the work on adaptive data analysis focuses on the case where the samples in the dataset are independent. Several approaches and tools have been successfully applied in this context, such as differential privacy, max-information, compression arguments, and more. The situation is far less well-understood without the independence assumption. We embark on a systematic study of the possibilities of adaptive data analysis with correlated observations. First, we show that, in some cases, differential privacy guarantees generalization even when there are dependencies within the sample, which we quantify using a notion we call Gibbs-dependence. We complement this result with a tight negative example. Second, we show that the connection between transcript-compression and adaptive data analysis can be extended to the non-iid setting.

Objective: The aims of the study were to examine the association between social media sentiments surrounding COVID-19 vaccination and vaccination rates in the United States (US), as well as other factors contributing to COVID-19 vaccine hesitancy. Method: The dataset used in this study consists of vaccine-related English tweets collected in real time from January 4 to May 11, 2021, posted within the US, as well as health literacy (HL), social vulnerability index (SVI), and vaccination rates at the state level. Results: The findings presented in this study demonstrate a significant correlation between the sentiments of the tweets and the vaccination rate in the US. The results also suggest a significant negative association between HL and SVI, and that state demographics correlate with both HL and SVI. Discussion: Social media activity provides insights into public opinion about vaccinations and helps determine the public health interventions required to increase the vaccination rate in the US. Conclusion: Health literacy, the social vulnerability index, and monitoring of social media sentiments need to be considered in public health interventions as part of vaccination campaigns.
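A bare-bones version of the core correlation analysis might look like the sketch below, which averages per-tweet sentiment by state and correlates it with state vaccination rates. The file layout and the use of VADER compound scores are assumptions for illustration; the study's actual sentiment model and variables may differ.

# Illustrative state-level correlation between mean tweet sentiment and
# vaccination rate (not the study's full analysis).
import pandas as pd
from scipy.stats import pearsonr
from nltk.sentiment import SentimentIntensityAnalyzer  # assumes nltk.download("vader_lexicon")

tweets = pd.read_csv("vaccine_tweets.csv")      # columns: state, text
rates = pd.read_csv("state_vax_rates.csv")      # columns: state, vax_rate

sia = SentimentIntensityAnalyzer()
tweets["sentiment"] = tweets["text"].map(lambda t: sia.polarity_scores(t)["compound"])

state_sent = tweets.groupby("state", as_index=False)["sentiment"].mean()
merged = state_sent.merge(rates, on="state")
r, p = pearsonr(merged["sentiment"], merged["vax_rate"])
print(f"Pearson r = {r:.2f} (p = {p:.3g})")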

The word embedding association test (WEAT) is an important method for measuring linguistic biases against social groups such as ethnic minorities in large text corpora. It does so by comparing the semantic relatedness of words prototypical of the groups (e.g., names unique to those groups) and attribute words (e.g., 'pleasant' and 'unpleasant' words). We show that anti-Black WEAT estimates from geo-tagged social media data at the level of metropolitan statistical areas strongly correlate with several measures of racial animus--even when controlling for sociodemographic covariates. However, we also show that every one of these correlations is explained by a third variable: the frequency of Black names in the underlying corpora relative to White names. This occurs because word embeddings tend to group positive (negative) words and frequent (rare) words together in the estimated semantic space. As the frequency of Black names on social media is strongly correlated with Black Americans' prevalence in the population, this results in spurious anti-Black WEAT estimates wherever few Black Americans live. This suggests that research using the WEAT to measure bias should consider term frequency, and also demonstrates the potential consequences of using black-box models like word embeddings to study human cognition and behavior.
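For reference, the WEAT effect size being discussed is typically computed as sketched below (a Caliskan et al.-style formulation); the embedding lookup emb and the choice of example target/attribute sets are assumptions for illustration.

# Minimal WEAT effect-size computation. `emb` is an assumed dict mapping words
# to numpy vectors (e.g., loaded from a trained embedding model).
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B, emb):
    """s(w, A, B): mean cosine similarity to attribute set A minus attribute set B."""
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    """Cohen's-d style effect size over target sets X, Y and attribute sets A, B."""
    sx = [assoc(x, A, B, emb) for x in X]
    sy = [assoc(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Example roles (word lists purely illustrative): X = White-associated names,
# Y = Black-associated names, A = pleasant words, B = unpleasant words.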

There is a growing body of work that proposes methods for mitigating bias in machine learning systems. These methods typically rely on access to protected attributes such as race, gender, or age. However, this raises two significant challenges: (1) protected attributes may not be available or it may not be legal to use them, and (2) it is often desirable to simultaneously consider multiple protected attributes, as well as their intersections. In the context of mitigating bias in occupation classification, we propose a method for discouraging correlation between the predicted probability of an individual's true occupation and a word embedding of their name. This method leverages the societal biases that are encoded in word embeddings, eliminating the need for access to protected attributes. Crucially, it only requires access to individuals' names at training time and not at deployment time. We evaluate two variations of our proposed method using a large-scale dataset of online biographies. We find that both variations simultaneously reduce race and gender biases, with almost no reduction in the classifier's overall true positive rate.
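One way to realize "discouraging correlation" of this kind, sketched under assumptions rather than as the paper's exact formulation, is to add a penalty on the covariance between the predicted probability of the true occupation and the coordinates of a fixed name embedding.

# Hedged sketch of a correlation-style penalty in the spirit described above:
# cross-entropy plus a term discouraging covariance between the predicted
# probability of the true occupation and a (fixed) name-embedding vector.
import torch
import torch.nn.functional as F

def loss_with_name_penalty(logits, labels, name_emb, lam=1.0):
    """logits: (N, C) classifier outputs; labels: (N,) true occupations;
    name_emb: (N, D) precomputed name embeddings; lam: penalty weight."""
    ce = F.cross_entropy(logits, labels)
    p_true = torch.softmax(logits, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)  # (N,)
    p_c = p_true - p_true.mean()                                   # center predictions
    e_c = name_emb - name_emb.mean(dim=0, keepdim=True)            # center embeddings
    cov = (p_c.unsqueeze(1) * e_c).mean(dim=0)                     # per-dimension covariance
    return ce + lam * cov.pow(2).sum()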
