Understanding the public response to COVID-19 vaccines is key to controlling the COVID-19 pandemic, and understanding that response requires exploring public opinion. Traditional surveys are expensive and time-consuming, address a limited range of health topics, and yield small-scale data. Twitter offers a great opportunity to understand public opinion regarding COVID-19 vaccines. The current study proposes an approach that combines computational and human coding methods to collect and analyze a large number of tweets, providing a wider perspective on the COVID-19 vaccine. This study identifies the sentiment of tweets using a rule-based machine learning approach, discovers major topics, explores temporal trends, compares the topics of negative and non-negative tweets using statistical tests, and reports the top topics of tweets with negative and non-negative sentiment. Our findings show that negative sentiment regarding the COVID-19 vaccine followed a decreasing trend between November 2020 and February 2021. We found that Twitter users discussed a wide range of topics, from vaccination sites to the 2020 U.S. election, during this period. The findings show a significant difference between tweets with negative and non-negative sentiment in the weight of most topics. Our results also indicate that negative and non-negative tweets had different topic priorities and focuses. This research illustrates that Twitter data can be used to explore public opinion regarding the COVID-19 vaccine.
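As an illustration of the kind of rule-based negative/non-negative labeling the abstract describes, here is a minimal toy sketch. The abstract does not name the actual tool or lexicon used; the word lists and the `label_tweet` helper below are purely hypothetical.

```python
# Toy rule-based sentiment labeler (illustrative only; the study's actual
# tool and lexicon are not specified in the abstract). Each tweet is
# labeled "negative" or "non-negative" from a small hand-built lexicon.

NEGATIVE = {"fear", "unsafe", "refuse", "worried", "dangerous", "side-effects"}
POSITIVE = {"safe", "effective", "hope", "relief", "protected"}

def label_tweet(text: str) -> str:
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return "non-negative" if score >= 0 else "negative"

tweets = [
    "I feel safe and protected after my shot",
    "worried about dangerous side-effects",
]
print([label_tweet(t) for t in tweets])  # → ['non-negative', 'negative']
```

A real pipeline would use a validated lexicon with intensity scores and negation handling rather than simple word counts, but the control flow is the same: score each tweet, then threshold into the two sentiment classes compared in the study.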
The global COVID-19 pandemic has fundamentally changed how people interact, especially through technology-based measures aimed at curbing the spread of the virus. As the country currently implementing some of the tightest technology-based COVID prevention policies, China has protected its citizens with a prolonged period of zero cases as well as fast reactions to potential resurgences of the disease. However, such mobile-based technology does come with sacrifices, especially for senior citizens who find it difficult to adapt to modern technologies. In this study, we demonstrate that most senior citizens find it difficult to use the health code app known as ''JKM'', and that they respond by cutting down on travel and reducing local commuting to locations where JKM verification is required. This compromise has physical and mental consequences and leads to inequalities in infrastructure access, social isolation, and reduced self-sufficiency. As we illustrate in the paper, this decrease in the quality of life of senior citizens could be greatly reduced if improvements to the user interactions of JKM were implemented. To the best of our knowledge, this is the first systematic study of digital inequality caused by mobile-based COVID prevention technologies for senior citizens in China. As similar technologies become widely adopted around the world, we wish to shed light on how widening digital inequality increasingly affects the quality of life of senior citizens in the pandemic era.
Administering COVID-19 vaccines at a societal scale has been deemed the most appropriate way to defend against the COVID-19 pandemic. This global vaccination drive naturally led Pro-Vaxxers and Anti-Vaxxers to strongly express their support of, and concerns about, the vaccines on social media platforms. Understanding this online discourse is crucial for policy makers: it is likely to affect the success of vaccination drives and might even affect the final outcome of our fight against the pandemic. The goal of this work is to improve this understanding using the lens of Twitter-discourse data. We first develop a classifier that categorizes users according to their vaccine-related stance with high precision (97%). Using this method, we detect and investigate specific user groups who posted about vaccines in pre-COVID and COVID times. Specifically, we identify distinct topics that these users talk about, and investigate how vaccine-related discourse changed between pre-COVID times and COVID times. Finally, for the first time, we investigate changes in the vaccine-related stances of Twitter users and shed light on potential reasons for such changes in stance. Our dataset and classifier are available at https://github.com/sohampoddar26/covid-vax-stance.
COVID-19 has likely been the most disruptive global event the world has experienced since WWII. Our discipline had never experienced such a phenomenon, whereby software engineers were forced to abruptly work from home. Nearly every developer adopted new working habits and organizational routines while trying to stay mentally healthy and productive during the lockdowns. We are now starting to realize that some of these new habits and routines may stay with us in the future. It is therefore important to understand how we have worked from home so far. We investigated whether 15 psychological, social, and situational variables, such as quality of social contacts or loneliness, predict software engineers' well-being and productivity, across a four-wave longitudinal study spanning over 14 months. Additionally, we tested whether any of these variables changed over time. We found that developers' well-being and quality of social contacts improved between April 2020 and July 2021, while their emotional loneliness decreased. Other variables, such as productivity and boredom, did not change. We further found that developers' stress measured in May 2020 negatively predicted their well-being 14 months later, even after controlling for many other variables. Finally, comparisons between women and men, as well as between developers residing in the UK and the USA, revealed no statistically significant differences but substantial similarities.
This paper presents the results and analyses stemming from the first VoicePrivacy Challenge (2020), which focuses on developing anonymization solutions for speech technology. We provide a systematic overview of the challenge design with an analysis of submitted systems and evaluation results. In particular, we describe the voice anonymization task and the datasets used for system development and evaluation. We also present different attack models and the associated objective and subjective evaluation metrics. We introduce two anonymization baselines and provide a summary description of the anonymization systems developed by the challenge participants. We report objective and subjective evaluation results for baseline and submitted systems. In addition, we present experimental results for alternative privacy metrics and attack models developed as part of the post-evaluation analysis. Finally, we summarize the insights and observations that will influence the design of the next VoicePrivacy Challenge edition and outline some directions for future voice anonymization research.
In December 2019, a novel coronavirus emerged, and the disease it causes, COVID-19, has led to an enormous number of casualties to date. The battle with the novel coronavirus has been the most baffling and horrifying since the Spanish Flu of 1918. While front-line doctors and medical researchers have made significant progress in controlling the spread of the highly contagious virus, technology has also proved its significance in the battle. Moreover, Artificial Intelligence has been adopted in many medical applications to diagnose diseases, in some cases matching or exceeding experienced doctors. Therefore, this survey paper explores proposed methodologies that can aid doctors and researchers with early and inexpensive diagnosis of the disease. Most developing countries have difficulties carrying out tests in the conventional manner, but Machine and Deep Learning offer a viable alternative. Furthermore, access to different types of medical images has motivated researchers, and as a result a large number of techniques have been proposed. This paper first details the background of the conventional methods in the Artificial Intelligence domain. Following that, we gather the commonly used datasets and their use cases to date. In addition, we show the proportion of researchers adopting Machine Learning versus Deep Learning, providing a thorough analysis of this scenario. Lastly, we elaborate on the challenges faced in COVID-19 research and address the open issues as we understand them, with the aim of contributing to a healthier environment.
As of December 2020, the COVID-19 pandemic had infected over 75 million people, making it one of the deadliest pandemics in modern history. This study develops a novel compartmental epidemiological model specific to the SARS-CoV-2 virus and analyzes the effect of common preventative measures such as testing, quarantine, social distancing, and vaccination. By accounting for the most prevalent interventions that have been enacted to minimize the spread of the virus, the model establishes a foundation for future mathematical modeling of COVID-19 and other modern pandemics. Specifically, the model expands on the classic SIR model and introduces separate compartments for individuals who are in the incubation period, asymptomatic, tested-positive, quarantined, vaccinated, or deceased. It also accounts for variable infection, testing, and death rates. I first analyze the outbreak in Santa Clara County, California, and later generalize the findings. The results show that, although all preventative measures reduce the spread of COVID-19, quarantine and social distancing mandates reduce the infection rate most and are subsequently the most effective policies, followed by vaccine distribution and, finally, public testing. Thus, governments should concentrate resources on enforcing quarantine and social distancing policies. In addition, I find mathematical proof that the relatively high asymptomatic rate and long incubation period are driving factors of COVID-19's rapid spread.
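To illustrate how such a compartmental extension of the SIR model can be simulated, here is a minimal forward-Euler sketch with a quarantine compartment and a vaccination rate. The compartments are a simplified subset of those listed in the abstract, and all parameter values are illustrative assumptions, not the paper's calibrated model.

```python
# Forward-Euler simulation of a simplified extended SIR model:
# S (susceptible), I (infectious), Q (quarantined), R (recovered/vaccinated).
# beta = infection rate, gamma = recovery rate, q = quarantine rate,
# v = vaccination rate. All values are illustrative assumptions.

def simulate(days=160, dt=0.5, beta=0.3, gamma=0.1, q=0.05, v=0.01):
    S, I, Q, R = 0.99, 0.01, 0.0, 0.0   # population fractions
    for _ in range(int(days / dt)):
        dS = -beta * S * I - v * S          # infection + vaccination drain S
        dI = beta * S * I - (gamma + q) * I # infections minus recovery/quarantine
        dQ = q * I - gamma * Q              # quarantined also recover
        dR = gamma * (I + Q) + v * S        # recovered or vaccinated
        S, I, Q, R = S + dt * dS, I + dt * dI, Q + dt * dQ, R + dt * dR
    return S, I, Q, R

S, I, Q, R = simulate()
print(f"final susceptible={S:.3f}, recovered/vaccinated={R:.3f}")
```

Because the four derivatives sum to zero, the total population fraction is conserved at each step, which is a useful sanity check when adding further compartments (incubation, tested-positive, deceased) as the full model does.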
Recent years have witnessed remarkable progress towards computational fake news detection. To mitigate its negative impact, we argue that it is critical to understand which user attributes potentially cause users to share fake news. The key to this causal-inference problem is to identify confounders -- variables that cause spurious associations between treatments (e.g., user attributes) and outcomes (e.g., user susceptibility). In fake news dissemination, confounders can be characterized by fake news sharing behavior, which inherently relates to user attributes and online activities. Learning such user behavior is typically subject to selection bias towards users who are susceptible to sharing news on social media. Drawing on causal inference theories, we first propose a principled approach to alleviating selection bias in fake news dissemination. We then treat the learned unbiased fake news sharing behavior as a surrogate confounder that can fully capture the causal links between user attributes and user susceptibility. We theoretically and empirically characterize the effectiveness of the proposed approach and find that it could be useful in protecting society from the perils of fake news.
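One standard way to correct the kind of selection bias described above is inverse-propensity weighting (IPW), where each observed unit is up-weighted by the inverse of its probability of being observed. The sketch below is a generic illustration of that idea on synthetic data; the paper's actual estimator and propensity model are more involved, and the `ipw_mean` helper and all numbers are hypothetical.

```python
# Illustrative inverse-propensity-weighting (IPW) estimator. Units with a
# low probability of being observed get larger weights, correcting the
# bias from only seeing behavior of users who chose to share.

def ipw_mean(outcomes, observed, propensity):
    """Estimate the population mean of `outcomes` when only some units are
    observed, weighting each observed unit by 1 / propensity."""
    total = sum(y / p for y, o, p in zip(outcomes, observed, propensity) if o)
    return total / len(outcomes)

# Synthetic example: the second unit was unobserved (e.g., never shared).
outcomes   = [1.0, 0.0, 1.0, 1.0]
observed   = [True, False, True, True]
propensity = [0.8, 0.2, 0.9, 0.5]
print(round(ipw_mean(outcomes, observed, propensity), 3))  # → 1.09
```

A naive mean over observed units alone would ignore the missing low-propensity users entirely; the weighting makes the observed sample stand in for the full population, which is the role the learned unbiased sharing behavior plays in the paper's causal framework.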
In recent years, disinformation, including fake news, has become a global phenomenon due to its explosive growth, particularly on social media. The wide spread of disinformation and fake news can cause detrimental societal effects. Despite recent progress in detecting disinformation and fake news, the task remains non-trivial due to its complexity, diversity, multi-modality, and the costs of fact-checking or annotation. The goal of this chapter is to pave the way for appreciating the challenges and advancements by: (1) introducing the types of information disorder on social media and examining their differences and connections; (2) describing important and emerging tasks to combat disinformation through characterization, detection, and attribution; and (3) discussing a weak supervision approach to detecting disinformation with limited labeled data. We then provide an overview of the chapters in this book, which represent recent advancements in three related parts: (1) user engagement in the dissemination of information disorder; (2) techniques for detecting and mitigating disinformation; and (3) trending issues such as ethics, blockchain, and clickbait. We hope this book serves as a convenient entry point for researchers, practitioners, and students to understand the problems and challenges, learn state-of-the-art solutions for their specific needs, and quickly identify new research problems in their domains.
There is a need for systems that dynamically interact with ageing populations to gather information, monitor health conditions, and provide support, especially after hospital discharge or in at-home settings. Digital health has delivered several smart devices, bundled with telemedicine systems, smartphones, and other digital services. While such solutions offer personalised data and suggestions, the real disruptive step comes from a new part of the digital ecosystem: chatbots. Chatbots will play a leading role by embodying the function of a virtual assistant and bridging the gap between patients and clinicians. Powered by AI and machine learning algorithms, chatbots are forecast to save healthcare costs when used in place of a human, or to assist clinicians as a preliminary step in assessing a condition and providing self-care recommendations. This paper describes the integration of chatbots into telemedicine systems intended for elderly patients after hospital discharge. The paper discusses possible ways to utilise chatbots to assist healthcare providers and support patients with their conditions.
This project addresses the problem of sentiment analysis on Twitter, that is, classifying tweets according to the sentiment expressed in them: positive, negative, or neutral. Twitter is an online micro-blogging and social-networking platform that allows users to write short status updates with a maximum length of 140 characters. It is a rapidly expanding service with over 200 million registered users, of whom 100 million are active and half log on to Twitter daily, generating nearly 250 million tweets per day. Because of this large volume of usage, we hope to obtain a reflection of public sentiment by analysing the sentiments expressed in the tweets. Analysing public sentiment is important for many applications, such as firms gauging the market response to their products, predicting political elections, and predicting socioeconomic phenomena such as stock market movements. The aim of this project is to develop a functional classifier for accurate and automatic sentiment classification of an unknown tweet stream.
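A three-way tweet classifier of the kind described above is often built as a supervised text classifier. Below is a minimal multinomial Naive Bayes sketch with Laplace smoothing; the training examples, tokenization, and helper names are toy assumptions, not the project's actual pipeline, which would train on a large labeled corpus with proper tweet preprocessing.

```python
# Minimal multinomial Naive Bayes for three-way sentiment (positive /
# negative / neutral), with add-one (Laplace) smoothing. The tiny
# training set below is illustrative only.
import math
from collections import Counter, defaultdict

def train(examples):
    counts = defaultdict(Counter)   # class label -> word frequency counts
    priors = Counter()              # class label -> number of examples
    for text, label in examples:
        priors[label] += 1
        counts[label].update(text.lower().split())
    return counts, priors

def classify(text, counts, priors):
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / n_docs)       # log prior
        total = sum(counts[label].values())
        for w in text.lower().split():              # smoothed log likelihoods
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

data = [("love this great phone", "positive"),
        ("terrible awful battery", "negative"),
        ("arrived on tuesday", "neutral")]
model = train(data)
print(classify("great battery love it", *model))  # → positive
```

Working in log space avoids floating-point underflow when tweets contain many words, and the add-one smoothing keeps unseen words from zeroing out a class entirely.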