The Covid-19 pandemic presents a serious threat to people's health, resulting in over 250 million confirmed cases and over 5 million deaths globally. In order to reduce the burden on national health care systems and to mitigate the effects of the outbreak, accurate modelling and forecasting methods for short- and long-term health demand are needed to inform government interventions aimed at curbing the pandemic. Current research on Covid-19 is typically based on a single source of information, specifically on structured historical pandemic data. Other studies are exclusively focused on unstructured insights retrieved online, such as data available from social media. However, the combined use of structured and unstructured information is still uncharted. This paper aims to fill this gap by leveraging historical as well as social media information with a novel data integration methodology. The proposed approach is based on vine copulas, which allow us to improve predictions by exploiting the dependencies between different sources of information. We apply the methodology to combine structured datasets retrieved from official sources with a large unstructured dataset of information collected from social media. The results show that the proposed approach, compared to traditional approaches, yields more accurate estimations and predictions of the evolution of the Covid-19 pandemic.
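The copula idea underlying the approach can be illustrated with a simpler bivariate Gaussian copula rather than the vine copulas the paper actually uses: map each margin to uniforms via empirical ranks, transform to normal scores, and estimate the dependence parameter. The data below are synthetic stand-ins for a historical case-count series and a social media signal; this is a minimal sketch of the dependence-modelling step, not the authors' method.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Two dependent synthetic series standing in for structured case counts
# and an unstructured social media signal (hypothetical data).
x = rng.normal(size=500)
y = 0.7 * x + 0.3 * rng.normal(size=500)

def to_pseudo_obs(v):
    """Map a sample to (0, 1) via its empirical ranks."""
    ranks = np.argsort(np.argsort(v)) + 1
    return ranks / (len(v) + 1)

# Transform the margins to uniforms, then to standard normal scores,
# and estimate the Gaussian copula correlation parameter.
z_x = norm.ppf(to_pseudo_obs(x))
z_y = norm.ppf(to_pseudo_obs(y))
rho = np.corrcoef(z_x, z_y)[0, 1]
print(round(rho, 2))
```

A vine copula generalizes this to many variables by composing such bivariate building blocks in a tree structure, which is what lets the method exploit dependencies across several information sources at once.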
With the ongoing COVID-19 pandemic, understanding the characteristics of the virus has become an important and challenging task in the scientific community. While tests do exist for COVID-19, the goal of our research is to explore other methods of identifying infected individuals. Our group applied unsupervised clustering techniques to explore a dataset of lung scans of COVID-19-infected, viral-pneumonia-infected, and healthy individuals. This is an important area to explore, as COVID-19 is a novel disease that is currently being studied in detail. Our methodology explores the potential of unsupervised clustering algorithms to reveal important hidden differences between COVID-19 and other respiratory illnesses. Our experiments use Principal Component Analysis (PCA), K-Means++ (KM++), and the recently developed Robust Continuous Clustering algorithm (RCC). We evaluate the performance of KM++ and RCC in clustering COVID-19 lung scans using the Adjusted Mutual Information (AMI) score.
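The PCA → K-Means++ → AMI pipeline can be sketched end to end with scikit-learn. The synthetic blobs below stand in for flattened lung-scan feature vectors with three ground-truth classes (the actual scans and RCC step are not reproduced here); AMI of 1.0 means the clustering perfectly recovers the classes up to label permutation.

```python
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score

# Synthetic stand-in for lung-scan feature vectors with 3 classes
# (COVID-19, viral pneumonia, healthy) -- hypothetical data.
X, y_true = make_blobs(n_samples=300, centers=3, n_features=50,
                       random_state=0)

# Reduce dimensionality with PCA, then cluster with K-Means++.
X_pca = PCA(n_components=10, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=3, init="k-means++", n_init=10,
                random_state=0).fit_predict(X_pca)

# AMI compares the clustering against the true classes (1.0 = perfect,
# ~0 = no better than chance), correcting for chance agreement.
ami = adjusted_mutual_info_score(y_true, labels)
print(round(ami, 2))
```

AMI is a sensible choice here precisely because cluster labels are arbitrary: it scores the agreement between partitions without requiring the cluster IDs to match the class IDs.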
We conducted semi-structured interviews with members of the LGBTQ community about their privacy practices and concerns on social networking sites. Participants used different social media sites for different needs and adapted to not being completely out on each site. We would value the opportunity to discuss the unique privacy and security needs of this population with workshop participants and learn more about the privacy needs of other marginalized user groups from researchers who have worked in those communities.
A drastic rise in potentially life-threatening misinformation has been a by-product of the COVID-19 pandemic. Computational support to identify false information within the massive body of data on the topic is crucial to prevent harm. Researchers have proposed many methods for flagging online misinformation related to COVID-19. However, these methods predominantly target specific content types (e.g., news) or platforms (e.g., Twitter), and their ability to generalize has so far remained largely unclear. To fill this gap, we evaluate fifteen Transformer-based models on five COVID-19 misinformation datasets that include social media posts, news articles, and scientific papers. We show that tokenizers and models tailored to COVID-19 data do not provide a significant advantage over general-purpose ones. Our study provides a realistic assessment of models for detecting COVID-19 misinformation. We expect that evaluating a broad spectrum of datasets and models will benefit future research in developing misinformation detection systems.
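The cross-dataset evaluation protocol can be illustrated without the Transformer machinery: train a classifier on one content type and score it on another to probe generalization. The sketch below uses a TF-IDF + logistic regression baseline on tiny hypothetical snippets in place of the paper's fifteen Transformer models and five real datasets; only the train-on-A, test-on-B structure is the point.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Two tiny hypothetical datasets with different content types
# (1 = misinformation, 0 = not). Real evaluations need far more data.
train_texts = ["masks cause oxygen loss", "vaccines are tested in trials",
               "5g spreads the virus", "wash hands to reduce infection"]
train_labels = [1, 0, 1, 0]
test_texts = ["study: vaccine trials completed safely",
              "report claims 5g towers spread covid"]
test_labels = [0, 1]

# Fit on one dataset, evaluate on the other to measure generalization.
vec = TfidfVectorizer().fit(train_texts)
clf = LogisticRegression().fit(vec.transform(train_texts), train_labels)
preds = clf.predict(vec.transform(test_texts))
print(f1_score(test_labels, preds, zero_division=0))
```

Swapping the baseline for a fine-tuned Transformer changes the model, not the protocol: the generalization question is answered by the train/test dataset split, not by the classifier family.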
The importance of this working document is that it enables the analysis of information and cases associated with COVID-19 (SARS-CoV-2), based on the daily information generated by the Government of Mexico through the Secretariat of Health, which is responsible for the Epidemiological Surveillance System for Viral Respiratory Diseases (SVEERV). The information in the SVEERV is disseminated as open data at the municipal, state, and national levels. In parallel, the genomic surveillance of SARS-CoV-2, through the identification of variants and mutations, is registered in the database of the Global Initiative on Sharing All Influenza Data (GISAID), based in Germany. These two sources of information, SVEERV and GISAID, provide the basis for analyzing the impact of COVID-19 on the population of Mexico. The first source provides national-level information on patients according to age, sex, comorbidities, and COVID-19 status, among other characteristics. The data analysis is carried out by means of an algorithm applying data mining techniques and methodology to estimate the case fatality rate and positivity index and to identify a typology according to the severity of infection in patients who test positive for SARS-CoV-2. The second source provides worldwide information on new SARS-CoV-2 variants and mutations, which is valuable for timely genomic surveillance. This study analyzes the impact of COVID-19 on the indigenous-language-speaking population, allowing us to provide information quickly and in a timely manner to support the design of public health policy.
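The two headline indicators are simple ratios over the surveillance records. The sketch below computes them from a handful of hypothetical records; the field names are illustrative, not the official SVEERV column names.

```python
# Hypothetical records in the spirit of the SVEERV open data
# (field names are illustrative, not the official schema).
records = [
    {"result": "positive", "deceased": True},
    {"result": "positive", "deceased": False},
    {"result": "positive", "deceased": False},
    {"result": "negative", "deceased": False},
    {"result": "negative", "deceased": False},
]

tested = len(records)
positive = sum(r["result"] == "positive" for r in records)
deaths = sum(r["deceased"] for r in records if r["result"] == "positive")

positivity_index = positive / tested      # share of tests that are positive
case_fatality_rate = deaths / positive    # deaths among confirmed cases

print(positivity_index, round(case_fatality_rate, 3))  # 0.6 0.333
```

Note that the case fatality rate divides by *confirmed* cases, not by tests or population, which is why it must be read alongside the positivity index when testing coverage changes over time.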
Music streaming services heavily rely on recommender systems to improve their users' experience, by helping them navigate through a large musical catalog and discover new songs, albums or artists. However, recommending relevant and personalized content to new users, with few to no interactions with the catalog, is challenging. This is commonly referred to as the user cold start problem. In this applied paper, we present the system recently deployed on the music streaming service Deezer to address this problem. The solution leverages a semi-personalized recommendation strategy, based on a deep neural network architecture and on a clustering of users from heterogeneous sources of information. We extensively show the practical impact of this system and its effectiveness at predicting the future musical preferences of cold start users on Deezer, through both offline and online large-scale experiments. Besides, we publicly release our code as well as anonymized usage data from our experiments. We hope that this release of industrial resources will benefit future research on user cold start recommendation.
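The semi-personalized strategy can be sketched in three steps: cluster warm users' learned embeddings, precompute one item ranking per cluster, and serve a cold-start user the ranking of the cluster they are mapped to. Everything below is hypothetical toy data; the deployed system learns the embeddings with a deep neural network from heterogeneous signals, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical embeddings for 200 warm users and 50 catalog items.
warm_users = rng.normal(size=(200, 8))
item_embeddings = rng.normal(size=(50, 8))

# Step 1: segment warm users into clusters.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(warm_users)

# Step 2: one non-personalized ranking per cluster, here by scoring
# items against the cluster centroid.
cluster_rankings = {
    c: np.argsort(item_embeddings @ km.cluster_centers_[c])[::-1]
    for c in range(4)
}

# Step 3: map a cold-start user to the nearest cluster using whatever
# features are available at registration, then serve that cluster's
# ranking -- "semi-personalized" because it is per-cluster, not per-user.
cold_user = rng.normal(size=(1, 8))
cluster = int(km.predict(cold_user)[0])
top_items = cluster_rankings[cluster][:5]
print(cluster, top_items)
```

The design trade-off is explicit: per-cluster rankings sacrifice individual personalization but can be precomputed and served instantly, which matters when a new user has no interaction history to condition on.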
The accurate and interpretable prediction of future events in time-series data often requires capturing the representative patterns (also referred to as states) underpinning the observed data. To this end, most existing studies focus on the representation and recognition of states but ignore the changing transitional relations among them. In this paper, we present the evolutionary state graph, a dynamic graph structure designed to systematically represent the evolving relations (edges) among states (nodes) over time. We analyze the dynamic graphs constructed from time-series data and show that changes in the graph structure (e.g., edges connecting certain state nodes) can signal the occurrence of events (i.e., time-series fluctuations). Inspired by this, we propose a novel graph neural network model, the Evolutionary State Graph Network (EvoNet), to encode the evolutionary state graph for accurate and interpretable time-series event prediction. Specifically, EvoNet models both the node-level (state-to-state) and graph-level (segment-to-segment) propagation, and captures the node-graph (state-to-segment) interactions over time. Experimental results on five real-world datasets show that our approach not only achieves clear improvements over 11 baselines, but also provides more insights towards explaining the results of event predictions.
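The graph-construction step can be sketched concretely: split the series into segments, assign each segment a state by clustering, and count transitions between the states of consecutive segments as edge weights. The data below are synthetic, and this covers only the graph construction, not the EvoNet model itself.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# A toy time series with a regime shift, split into 12 segments of 10
# (hypothetical data).
series = np.concatenate([rng.normal(0, 1, 60), rng.normal(5, 1, 60)])
segments = series.reshape(-1, 10)

# Recognize states: cluster segments so each gets a state label.
states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(segments)

# Build the state graph: edge weights count transitions between the
# states of consecutive segments.
n_states = 2
adj = np.zeros((n_states, n_states), dtype=int)
for a, b in zip(states[:-1], states[1:]):
    adj[a, b] += 1

print(adj)
```

In the paper's setting this adjacency structure is recomputed over time, and it is the *changes* in these edge weights (e.g., a sudden cross-state transition, as at the regime shift above) that inform event prediction.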
In recommender systems, modeling user-item behaviors is essential for user representation learning. Existing sequential recommenders consider the sequential correlations between historically interacted items to capture users' historical preferences. However, since users' preferences are by nature time-evolving and diversified, solely modeling the historical preference (without being aware of the time-evolving trends of preferences) can be inferior for recommending complementary or fresh items and thus hurts the effectiveness of recommender systems. In this paper, we bridge the gap between past preferences and potential future preferences by proposing the future-aware diverse trends (FAT) framework. By future-aware, we mean that for each inspected user, we construct future sequences from other similar users, which comprise behaviors that happen after the last behavior of the inspected user, based on a proposed neighbor behavior extractor. By diverse trends, supposing the future preferences can be diversified, we propose the diverse trends extractor and a time-aware mechanism to represent the possible trends of preferences for a given user with multiple vectors. We leverage both the representations of historical preference and possible future trends to obtain the final recommendation. The quantitative and qualitative results from extensive experiments on real-world datasets demonstrate that the proposed framework not only outperforms state-of-the-art sequential recommendation methods across various metrics, but also makes complementary and fresh recommendations.
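The neighbor behavior extractor can be sketched as: find users whose histories overlap with the inspected user's, then collect their behaviors that occur *after* the inspected user's last interaction. The logs, user names, and overlap threshold below are all hypothetical; the paper's extractor is learned rather than rule-based.

```python
# Hypothetical interaction logs: user -> [(timestamp, item), ...]
logs = {
    "u1": [(1, "a"), (2, "b"), (3, "c")],
    "u2": [(1, "a"), (2, "b"), (4, "d"), (5, "e")],
    "u3": [(2, "x"), (6, "y")],
}

def future_sequence(target, logs, min_overlap=2):
    """Collect behaviors of similar users that happen after the target
    user's last behavior (a simplified, rule-based neighbor extractor)."""
    t_items = {i for _, i in logs[target]}
    t_last = max(t for t, _ in logs[target])
    future = []
    for user, seq in logs.items():
        if user == target:
            continue
        overlap = len(t_items & {i for _, i in seq})
        if overlap >= min_overlap:
            future += [i for t, i in seq if t > t_last]
    return future

# u2 shares items a, b with u1, so u2's later behaviors become
# u1's "future sequence"; u3 has no overlap and is ignored.
print(future_sequence("u1", logs))  # ['d', 'e']
```

These borrowed future behaviors are what the diverse trends extractor then summarizes into multiple preference-trend vectors per user.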
In recent years, Graph Neural Networks (GNNs), which can naturally integrate node information and topological structure, have been demonstrated to be powerful for learning on graph data. These advantages of GNNs provide great potential to advance social recommendation, since data in social recommender systems can be represented as a user-user social graph and a user-item graph, and learning latent factors of users and items is the key. However, building social recommender systems based on GNNs faces challenges. For example, the user-item graph encodes both interactions and their associated opinions; social relations have heterogeneous strengths; and users are involved in two graphs (i.e., the user-user social graph and the user-item graph). To address these three challenges simultaneously, in this paper we present a novel graph neural network framework (GraphRec) for social recommendations. In particular, we provide a principled approach to jointly capture interactions and opinions in the user-item graph, and propose the framework GraphRec, which coherently models the two graphs and heterogeneous strengths. Extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed framework GraphRec.
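The core idea of modeling a user from both graphs can be sketched with plain numpy: aggregate (item, opinion) pairs from the user-item graph, aggregate neighbor embeddings from the social graph, and combine both with the user's own features. This toy sketch uses mean aggregation where GraphRec uses learned attention, and all sizes and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy social recommendation setting (hypothetical sizes and data).
n_users, n_items, d = 4, 6, 8
user_emb = rng.normal(size=(n_users, d))
item_emb = rng.normal(size=(n_items, d))
opinion_emb = rng.normal(size=(6, d))   # one embedding per rating 0..5

# User-item interactions carry opinions (ratings); user-user ties are social.
interactions = {0: [(0, 5), (2, 3)], 1: [(1, 4)], 2: [(3, 2)], 3: [(4, 5)]}
social = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}

def item_space_agg(u):
    """Fuse each interacted item with its opinion, then aggregate --
    this is how interactions and opinions are captured jointly."""
    msgs = [item_emb[i] + opinion_emb[r] for i, r in interactions[u]]
    return np.mean(msgs, axis=0)

def social_space_agg(u):
    """Aggregate the embeddings of social neighbors."""
    return np.mean([user_emb[f] for f in social[u]], axis=0)

# A user's latent factor draws on self features plus both aggregations,
# reflecting the user's membership in two graphs.
h = np.concatenate([user_emb[0], item_space_agg(0), social_space_agg(0)])
print(h.shape)
```

Replacing the two mean aggregations with attention weights is what lets the full model handle heterogeneous tie strengths rather than treating all neighbors equally.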
Although recommender systems have been studied comprehensively over the past decade in both industry and academia, most current recommender systems suffer from the following issues: 1) The sparsity of the user-item matrix seriously affects recommendation quality. As a result, most traditional recommender system approaches cannot handle users who have rated few items, which is known as the cold start problem. 2) Traditional recommender systems assume that users are independently and identically distributed and ignore the social relations between users. However, in real-life scenarios, due to the exponential growth of social networking services such as Facebook and Twitter, social connections between users play a significant role in the recommendation task. In this work, aiming to provide a better recommender system by incorporating user social network information, we propose a matrix factorization framework with user social connection constraints. Experimental results on a real-life dataset show that the proposed method performs significantly better than state-of-the-art approaches in terms of MAE and RMSE, especially for cold start users.
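A generic matrix factorization objective with a social constraint adds, on top of the usual squared rating error and L2 terms, a penalty pulling each user's factor toward those of their social connections. The SGD sketch below is a minimal illustration of that objective on tiny hypothetical data, not the authors' exact formulation or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny toy setting: 4 users, 5 items, a few observed ratings, and a
# symmetric social graph (all hypothetical).
ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (2, 3): 2.0, (3, 4): 4.0}
friends = {0: [1], 1: [0], 2: [3], 3: [2]}

k, lam, beta, lr = 2, 0.1, 0.1, 0.05
U = rng.normal(scale=0.1, size=(4, k))   # user latent factors
V = rng.normal(scale=0.1, size=(5, k))   # item latent factors

def loss():
    err = sum((r - U[u] @ V[i]) ** 2 for (u, i), r in ratings.items())
    social = sum(np.sum((U[u] - U[f]) ** 2)
                 for u, fs in friends.items() for f in fs)
    return err + lam * (np.sum(U**2) + np.sum(V**2)) + beta * social

before = loss()
for _ in range(200):
    for (u, i), r in ratings.items():
        e = r - U[u] @ V[i]
        # Gradient step: rating error, L2 shrinkage, and the social
        # term pulling U[u] toward its friends' factors.
        U[u] += lr * (e * V[i] - lam * U[u]
                      - beta * sum(U[u] - U[f] for f in friends[u]))
        V[i] += lr * (e * U[u] - lam * V[i])
after = loss()
print(after < before)
```

The social term is what helps cold start users: even with few ratings, a user's factor is anchored by the factors of better-observed friends instead of being left near its random initialization.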
To extract meaningful topics from texts, their structures should be considered properly. In this paper, we aim to analyze structured time-series documents, such as a collection of news articles or a series of scientific papers, wherein topics evolve over time depending on multiple past topics and are also related to each other at each time step. To this end, we propose a dynamic and static topic model, which simultaneously considers the dynamic structure of temporal topic evolution and the static structure of the topic hierarchy at each time step. We report the results of experiments on collections of scientific papers, in which the proposed method outperformed conventional models. Moreover, we show an example of the extracted topic structures, which we found helpful for analyzing research activities.