When the threat of COVID-19 became widely acknowledged, many hoped that this epidemic would squash "the anti-vaccine movement". However, when vaccines started arriving in rich countries at the end of 2020, it appeared that vaccine hesitancy might be an issue even in the context of this major epidemic. Does this mean that the mobilization of vaccine-critical activists on social media is one of the main causes of this reticence to vaccinate against COVID-19? In this paper, we contribute to current work on vaccine hesitancy during the COVID-19 epidemic by looking at one of the many mechanisms that can cause reticence towards vaccines: the capacity of vaccine-critical activists to influence a wider public on social media. We analyze the evolution of debates over the COVID-19 vaccine on the French Twittersphere during the first two years of the pandemic, with particular attention to the spreading capacity of vaccine-critical websites. We address two main questions: 1) Did vaccine-critical content gain ground during this period? 2) Who were the central actors in the diffusion of this content? First, while debates over vaccines experienced a tremendous surge during this period, the share of vaccine-critical content in these debates remained stable, except for a limited number of short periods associated with specific events. Second, analyzing the community structure of the retweet hypergraph, we reconstruct the mesoscale structure of the information flows, identifying and characterizing the major communities of users. We analyze their role in the information ecosystem: the largest right-wing community displays typical echo-chamber behavior, collecting vaccine-critical tweets from outside and recirculating them inside the community. The smaller left-wing community is less permeable to vaccine-critical content but has a large capacity to spread it once adopted.
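As a rough illustration of the kind of retweet-network analysis described above, the sketch below builds a retweet graph from a flat edge list and extracts communities by modularity. The `retweets.csv` file, its column names, and the choice of algorithm are illustrative assumptions, not the authors' actual pipeline (which operates on a retweet hypergraph).

```python
# Minimal sketch: community detection on a retweet graph (illustrative only).
# Assumes a CSV of retweets with columns "author" and "retweeter"; these names,
# the file, and the algorithm choice are assumptions, not the paper's pipeline.
import csv
from collections import Counter

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.DiGraph()
with open("retweets.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Edge from the original author to the retweeter: the direction of diffusion.
        u, v = row["author"], row["retweeter"]
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

# Community detection is run on the undirected projection of the retweet graph.
communities = greedy_modularity_communities(G.to_undirected(), weight="weight")
sizes = Counter({i: len(c) for i, c in enumerate(communities)})
print("largest communities:", sizes.most_common(5))
```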
The wide dissemination of fake news is increasingly threatening both individuals and society. Fake news detection aims to train a model on past news and detect fake news in the future. Despite great efforts, existing fake news detection methods overlook the unintended entity bias in real-world data, which seriously harms models' generalization ability to future data. For example, 97\% of news pieces in 2010-2017 containing the entity `Donald Trump' are real in our data, but the percentage falls to merely 33\% in 2018. A model trained on the former set would therefore hardly generalize to the latter, as it tends to predict news pieces about `Donald Trump' as real to lower the training loss. In this paper, we propose an entity debiasing framework (\textbf{ENDEF}) which generalizes fake news detection models to future data by mitigating entity bias from a cause-effect perspective. Based on the causal graph among entities, news contents, and news veracity, we separately model the contribution of each cause (entities and contents) during training. In the inference stage, we remove the direct effect of the entities to mitigate entity bias. Extensive offline experiments on English and Chinese datasets demonstrate that the proposed framework can largely improve the performance of base fake news detectors, and online tests verify its superiority in practice. To the best of our knowledge, this is the first work to explicitly improve the generalization ability of fake news detection models to future data. The code has been released at //github.com/ICTMCG/ENDEF-SIGIR2022.
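A minimal sketch of the entity-debiasing idea, not the released ENDEF code: the training-time prediction sums a content branch and an entity-only branch, and inference keeps only the content branch so that the direct entity effect is removed. The encoders, feature dimensions, and loss setup below are assumptions.

```python
# Sketch of entity debiasing (illustrative, not the released ENDEF implementation).
import torch
import torch.nn as nn

class EntityDebiasedDetector(nn.Module):
    def __init__(self, content_dim=768, entity_dim=768):
        super().__init__()
        self.content_head = nn.Linear(content_dim, 1)  # stands in for a full news encoder
        self.entity_head = nn.Linear(entity_dim, 1)    # models the entity-only shortcut

    def forward(self, content_feat, entity_feat, training=True):
        content_logit = self.content_head(content_feat)
        if training:
            # Train-time prediction: content effect + direct entity effect.
            return content_logit + self.entity_head(entity_feat)
        # Inference: keep only the content effect, mitigating entity bias.
        return content_logit

model = EntityDebiasedDetector()
loss_fn = nn.BCEWithLogitsLoss()
content, entity = torch.randn(8, 768), torch.randn(8, 768)   # dummy features
labels = torch.randint(0, 2, (8, 1)).float()                 # dummy veracity labels
loss = loss_fn(model(content, entity, training=True), labels)
loss.backward()
pred = torch.sigmoid(model(content, entity, training=False))  # entity branch dropped
```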
The COVID-19 pandemic is accompanied by a massive "infodemic" that makes it hard to identify concise and credible information for COVID-19-related questions, like incubation time, infection rates, or the effectiveness of vaccines. As a novel solution, our paper is concerned with designing a question-answering system based on modern technologies from natural language processing to overcome information overload and misinformation in pandemic situations. To carry out our research, we followed a design science research approach and applied Ingwersen's cognitive model of information retrieval interaction to inform our design process from a socio-technical lens. On this basis, we derived prescriptive design knowledge in terms of design requirements and design principles, which we translated into the construction of a prototypical instantiation. Our implementation is based on the comprehensive CORD-19 dataset, and we demonstrate our artifact's usefulness by evaluating its answer quality based on a sample of COVID-19 questions labeled by biomedical experts.
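The following sketch illustrates only the retrieval step such a question-answering system might rely on, under the assumption of a TF-IDF retriever over a handful of CORD-19-style abstracts; it is not the paper's artifact, and the in-memory corpus and variable names are invented for demonstration.

```python
# Illustrative retrieval sketch: rank candidate passages against a question with TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "The median incubation period of SARS-CoV-2 is estimated at about five days.",
    "mRNA vaccines showed high effectiveness against symptomatic COVID-19.",
    "Basic reproduction number estimates for SARS-CoV-2 vary across settings.",
]
question = "How long is the incubation time of COVID-19?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(abstracts)
scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
best = scores.argmax()
print(f"best passage (score {scores[best]:.2f}): {abstracts[best]}")
```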
The Coronavirus disease 2019 (COVID-19) outbreak quickly spread around the world, resulting in over 240 million infections and 4 million deaths by October 2021. While the virus spreads silently from person to person, fear has also been spreading around the globe. The COVID-19 information from the Australian Government is convincing but not timely or detailed, and there is much information on social networks mixing facts and rumors. As software engineers, we spontaneously and rapidly constructed a COVID-19 information dashboard that aggregates reliable information, semi-automatically checked from different sources, to provide a one-stop information-sharing site about the latest status in Australia. Inspired by the Johns Hopkins University COVID-19 Map, our dashboard contains case statistics, case distribution, government policy, and the latest news, with interactive visualizations. In this paper, we present in-person participant observations in which the authors acted as founders of //covid-19-au.com/, serving more than 830K users with 14M page views since March 2020. Based on our first-hand experience, we summarize 9 lessons for developers, researchers, and instructors. These lessons may inspire development, research, and teaching in software engineering for coping with similar public crises in the future.
This paper explores Null Island, a fictional place located at 0$^\circ$ latitude and 0$^\circ$ longitude in the WGS84 geographic coordinate system. Null Island is erroneously associated with large amounts of geographic data in a wide variety of location-based services, place databases, social media, and web-based maps. While it was originally considered a joke within the geospatial community, this article demonstrates the technological and social implications of its existence, presenting Null Island as a fundamental issue of geographic information that requires more widespread awareness. The article summarizes the error sources that lead to data being associated with Null Island. We identify four evolutionary phases which help explain how this fictional place evolved and established itself as an entity reaching beyond the geospatial profession, to the point of being discovered by the visual arts and the general population. After providing an accurate account of the data that can be found at (0, 0), we discuss the geospatial, technological, and social implications of Null Island, and provide guidelines to avoid misplacing data to Null Island. Since data will likely continue to appear at this location, our contribution aims to promote awareness of this error source among both GIScientists and the general population.
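As one concrete illustration of how such misplacement can be avoided at ingestion time, the sketch below rejects missing or placeholder coordinates instead of coercing them to (0, 0); it is a minimal, assumed example, not a guideline taken verbatim from the article.

```python
# A minimal guard against the most common Null Island error source (illustrative):
# treat missing or out-of-range coordinates explicitly instead of letting them
# default to (0, 0) in WGS84.
from typing import Optional, Tuple

def parse_coordinates(lat_raw: str, lon_raw: str) -> Optional[Tuple[float, float]]:
    """Return (lat, lon) or None; never silently coerce bad input to (0.0, 0.0)."""
    try:
        lat, lon = float(lat_raw), float(lon_raw)
    except (TypeError, ValueError):
        return None
    if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
        return None
    if lat == 0.0 and lon == 0.0:
        # (0, 0) is almost always a placeholder rather than a real position;
        # flag it for review instead of mapping it to Null Island.
        return None
    return lat, lon

assert parse_coordinates("", "") is None
assert parse_coordinates("0", "0") is None
assert parse_coordinates("48.85", "2.35") == (48.85, 2.35)
```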
With the advent of open source software, a veritable treasure trove of previously proprietary software development data was made available. This opened the field of empirical software engineering research to anyone in academia. Data mined from software projects, however, requires extensive processing and needs to be handled with utmost care to ensure valid conclusions. Since software development practices and tools have changed over the past two decades, we aim to understand the state-of-the-art research workflows and to highlight potential challenges. We employ a systematic literature review, sampling over one thousand papers from leading conferences and analyzing the 286 most relevant papers from the perspective of data workflows, methodologies, reproducibility, and tools. We found that an important part of the research workflow involving dataset selection was particularly problematic, which raises questions about the generality of the results in the existing literature. Furthermore, a considerable number of papers provide few or no reproducibility instructions -- a substantial deficiency for a data-intensive field. In fact, 33% of papers provide no information on how their data was retrieved. Based on these findings, we propose ways to address these shortcomings via existing tools and provide recommendations to improve research workflows and the reproducibility of research.
Modern software development is based on a series of rapid incremental changes collaboratively made to large source code repositories by developers with varying experience and expertise levels. The ZeroIn project aims to analyze the metadata of these dynamic phenomena, including data on repositories, commits, and developers, to rapidly and accurately mark the quality of commits as they arrive at the repositories. In this context, this article presents a characterization of software development metadata in terms of the distributions that best capture the trends in the datasets. Multiple datasets are analyzed for this purpose, including Stack Overflow data on developers' features and GitHub data on over 452 million repositories with 16 million commits. This characterization is intended to make it possible to generate multiple synthetic datasets that can be used in training and testing novel machine learning-based solutions to improve the reliability of software even as it evolves. It is also aimed at helping the development process exploit the latent correlations among key feature vectors across the aggregate space of repositories and developers. The data characterization in this article is designed to feed into the machine learning components of ZeroIn, including binary classifiers for early flagging of buggy software commits and graph-based learning methods that exploit the sparse connectivity among the sets of repositories, commits, and developers.
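A sketch of the kind of binary commit classifier such a pipeline could feed, assuming a handful of hypothetical metadata features and synthetic labels; it is not the ZeroIn implementation.

```python
# Illustrative buggy-commit classifier on synthetic metadata (feature names assumed).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical commit features: lines changed, files touched, developer tenure (months),
# hour of day, and number of prior commits to the repository.
X = np.column_stack([
    rng.poisson(40, n), rng.poisson(3, n), rng.integers(1, 120, n),
    rng.integers(0, 24, n), rng.poisson(200, n),
])
y = rng.integers(0, 2, n)  # stand-in labels: 1 = commit later linked to a bug fix

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC on held-out commits:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```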
The coronavirus pandemic has spread over the past two years in our highly connected and information-dense society. Nonetheless, disseminating accurate and up-to-date information on the spread of this pandemic remains a challenge. In this context, opting for a solution based on conversational artificial intelligence, also known as a chatbot, is proving unavoidable, especially since chatbots have already shown their effectiveness in fighting the coronavirus crisis in several countries. This work designs and implements a smart chatbot on the theme of COVID-19, called COVIBOT, tailored to the context of Saudi Arabia. COVIBOT is a generative, contextual chatbot built using machine learning APIs offered by the cloud-based Azure Cognitive Services. COVIBOT is offered in two versions, English and Arabic. Use cases of COVIBOT are tested and validated using a scenario-based approach.
The outbreak of the COVID-19 pandemic has deeply influenced the lifestyle of the general public and the healthcare system of society. As a promising approach to address the emerging challenges caused by epidemics of infectious diseases like COVID-19, the Internet of Medical Things (IoMT) deployed in hospitals, clinics, and healthcare centers can save diagnosis time and improve the efficiency of medical resources, though privacy and security concerns stall its wide adoption. To tackle the privacy, security, and interoperability issues of IoMT, we propose a framework of blockchain-enabled IoMT that introduces blockchain to incumbent IoMT systems. In this paper, we review the benefits of this architecture and illustrate the opportunities brought by blockchain-enabled IoMT. We also provide use cases of blockchain-enabled IoMT in fighting the COVID-19 pandemic, including the prevention of infectious diseases, location sharing and contact tracing, and the supply chain of injectable medicines. Finally, we outline future work in this area.
As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models is facing efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks against robustness and their defenses; 3) inference attacks against privacy and their defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
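To make the robustness side of this taxonomy concrete, the sketch below contrasts plain FedAvg with a coordinate-wise-median aggregator, one family of defenses against model poisoning covered in such surveys; single-vector updates stand in for full model states, and the whole setup is an assumed toy example.

```python
# Toy comparison of FedAvg and coordinate-wise median aggregation (illustrative).
import numpy as np

def fedavg(updates: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted mean of client updates; a single poisoned client can shift it."""
    return np.average(updates, axis=0, weights=weights)

def coordinate_median(updates: np.ndarray) -> np.ndarray:
    """Coordinate-wise median; bounded influence for a minority of malicious clients."""
    return np.median(updates, axis=0)

honest = np.random.default_rng(1).normal(0.0, 0.1, size=(9, 4))  # benign updates
poisoned = np.full((1, 4), 100.0)                 # a crude model-poisoning update
updates = np.vstack([honest, poisoned])
weights = np.ones(len(updates))

print("FedAvg:", fedavg(updates, weights))        # dragged toward the attacker
print("Median:", coordinate_median(updates))      # stays close to the honest updates
```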
Deep Learning algorithms have achieved state-of-the-art performance in Image Classification and have been used even in security-critical applications, such as biometric recognition systems and self-driving cars. However, recent works have shown that those algorithms, which can even surpass human capabilities, are vulnerable to adversarial examples. In Computer Vision, adversarial examples are images containing subtle perturbations generated by malicious optimization algorithms in order to fool classifiers. As an attempt to mitigate these vulnerabilities, numerous countermeasures have been proposed in the literature. Nevertheless, devising an efficient defense mechanism has proven to be a difficult task, since many approaches have already been shown to be ineffective against adaptive attackers. Thus, this self-contained paper aims to provide all readers with a review of the latest research progress on Adversarial Machine Learning in Image Classification, from a defender's perspective. Novel taxonomies for categorizing adversarial attacks and defenses are introduced, and discussions about the existence of adversarial examples are provided. Further, in contrast to existing surveys, relevant guidance is given that researchers should take into consideration when devising and evaluating defenses. Finally, based on the reviewed literature, some promising paths for future research are discussed.
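As a concrete reference point for the attack side of this taxonomy, the sketch below shows the classic fast gradient sign method (FGSM) on a toy classifier; the model, data, and label are placeholders and are not tied to any system reviewed in the paper.

```python
# Minimal FGSM sketch on a toy classifier (illustrative placeholders throughout).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy image classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in image in [0, 1]
y = torch.tensor([3])                              # its (assumed) true label

loss = loss_fn(model(x), y)
loss.backward()                                    # gradient of the loss w.r.t. the input

epsilon = 0.1                                      # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
# x_adv stays visually close to x but is crafted to increase the classifier's loss.
```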