The COVID-19 pandemic has generated a wealth of information through many publicly accessible sources, such as news outlets and social media. However, gathering and understanding the content can be difficult due to inaccuracies or inconsistencies between the different sources. To alleviate this challenge in Australia, a team of 48 student volunteers developed an open-source COVID-19 information dashboard to provide accurate, reliable, and real-time COVID-19 information for Australians. The students developed this software while working under legislative restrictions that required social isolation. The goal of this study is to characterize the experiences of the students throughout the project. We conducted an online survey completed by 39 of the student volunteers contributing to the COVID-19 dashboard project. Our results indicate that playing a positive role in the COVID-19 crisis and learning new skills and technologies were the most cited motivating factors for the students to participate in the project. While working on the project, some students struggled to maintain a work-life balance due to working from home. However, the students generally did not express strong concerns about general project challenges. They expressed more strongly that data collection was a significant challenge, as it was difficult to collect reliable, accurate, and up-to-date data from various government sources. The students mitigated these challenges by establishing a systematic data collection process in the team, leveraging frequent and clear communication through text, and appreciating and encouraging each other's efforts. By participating in the project, the students boosted their technical (e.g., front-end development) and non-technical (e.g., task prioritization) skills. Our study discusses several implications for students, educators, and policymakers.
Effective visualizations were evaluated to reveal relevant health patterns from multi-sensor real-time wearable devices that recorded vital signs from patients admitted to hospital with COVID-19. Furthermore, specific challenges associated with wearable health data visualizations are described, such as fluctuating data quality resulting from compliance problems, the time needed to charge the device, and technical problems. As a primary use case, we examined the detection and communication of relevant health patterns visible in the vital signs acquired by the technology. Customized heat maps and bar charts were used to specifically highlight medically relevant patterns in vital signs. A survey of two medical doctors, one clinical project manager, and seven health data science researchers was conducted to evaluate the visualization methods. From a dataset of 84 hospitalized COVID-19 patients, we extracted one typical COVID-19 patient history and, based on the visualizations, showcased the health histories of two noteworthy patients. The visualizations were shown to be effective, simple, and intuitive for deducing the health status of patients. For clinical staff who are time-constrained and responsible for numerous patients, such visualization methods can be an effective tool for continuous acquisition and monitoring of patients' health statuses, even remotely.
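The abstract does not include the visualization code; the following is a minimal sketch of the heat-map idea in Python with pandas and matplotlib, assuming hourly heart-rate readings for a single patient (the data layout and column names are hypothetical, not the study's dataset).

```python
# Minimal heat-map sketch for wearable vital signs (hypothetical data; not the study's code).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical hourly heart-rate readings for one patient over a week.
rng = np.random.default_rng(0)
hours = pd.date_range("2021-01-01", periods=7 * 24, freq="H")
df = pd.DataFrame({"timestamp": hours,
                   "heart_rate": rng.normal(80, 8, len(hours))})

# Simulate compliance/charging gaps: drop ~10% of readings at random.
df.loc[df.sample(frac=0.1, random_state=1).index, "heart_rate"] = np.nan

# Pivot into a day x hour matrix so each row is one day of monitoring.
df["day"] = df["timestamp"].dt.date
df["hour"] = df["timestamp"].dt.hour
matrix = df.pivot_table(index="day", columns="hour", values="heart_rate")

fig, ax = plt.subplots(figsize=(10, 3))
im = ax.imshow(matrix.values, aspect="auto", cmap="coolwarm")
ax.set_xlabel("Hour of day")
ax.set_ylabel("Day of stay")
ax.set_title("Heart rate (bpm); blank cells indicate missing data")
fig.colorbar(im, ax=ax)
plt.show()
```

A day-by-hour layout like this makes sustained deviations (e.g., persistently elevated heart rate) and data-quality gaps visible at a glance, which matches the kind of pattern the customized heat maps are meant to surface.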
Loneliness has been associated with negative outcomes for physical and mental health. Understanding how people express and cope with various forms of loneliness is critical for early screening and targeted interventions to reduce loneliness, particularly among vulnerable groups such as young adults. To examine how different forms of loneliness and coping strategies manifest in loneliness self-disclosure, we built a dataset, FIG-Loneliness (FIne-Grained Loneliness), using Reddit posts in two young adult-focused forums and two loneliness-related forums covering a diverse age group. We provide annotations by trained human annotators for binary and fine-grained loneliness classifications of the posts. Trained on FIG-Loneliness, two BERT-based models were used to understand loneliness forms and authors' coping strategies in these forums. Our binary loneliness classification achieved an accuracy above 97%, and fine-grained loneliness category classification reached an average accuracy of 77% across all labeled categories. With FIG-Loneliness and model predictions, we found that loneliness expressions in the young adult-related forums are distinct from those in other forums. Posters in young adult-focused forums are more likely to express concerns pertaining to peer relationships and are potentially more sensitive to the geographical isolation caused by the COVID-19 pandemic lockdown. We also show that different forms of loneliness are associated with different uses of coping strategies.
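As a rough illustration of the BERT-based classification setup, the Python sketch below runs one fine-tuning step of a standard Hugging Face model for binary loneliness detection; the example texts and labels are placeholders, not the FIG-Loneliness data or the authors' training pipeline.

```python
# Sketch of a BERT-based binary loneliness classifier (placeholder data; not the authors' pipeline).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.train()

# Invented examples; real training would iterate over the annotated posts.
texts = ["I haven't talked to anyone in weeks.", "Had a great time with friends today."]
labels = torch.tensor([1, 0])  # 1 = expresses loneliness, 0 = does not

batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
outputs = model(**batch, labels=labels)

# Single optimization step; a real run would loop over the full dataset for several epochs.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
outputs.loss.backward()
optimizer.step()

print(outputs.logits.argmax(dim=-1).tolist())
```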
The subreddit r/The_Donald was repeatedly denounced as a toxic and misbehaving online community, and as a result it faced a sequence of increasingly constraining moderation interventions by Reddit administrators. It was quarantined in June 2019, restricted in February 2020, and finally banned in June 2020, but despite prior work on the matter, the effects of this sequence of interventions are still unclear. In this work, we follow a multidimensional causal inference approach to study data containing more than 15M posts made in a time frame of 2 years, to examine the effects of such interventions both inside and outside the subreddit. We find that the interventions had strong positive effects in reducing the activity of problematic users both inside and outside of r/The_Donald. However, the interventions also caused an increase in toxicity and led users to share more polarized and less factual news. Additional findings of our study are that the restriction had stronger effects than the quarantine and that core users of r/The_Donald suffered stronger effects than other users. Overall, our results provide evidence that the interventions had mixed effects and paint a nuanced picture of the consequences of community-level moderation strategies. We conclude by reflecting on the challenges of policing online platforms and by discussing implications for the design and deployment of moderation interventions.
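The abstract does not spell out the estimator; as a loose illustration of the kind of before/after contrast such interventions invite, the pandas sketch below computes a toy difference-in-differences estimate on invented activity counts (this is not the paper's multidimensional causal inference method).

```python
# Toy difference-in-differences sketch of an intervention effect on user activity
# (invented numbers; not the paper's causal inference pipeline).
import pandas as pd

df = pd.DataFrame({
    "user":     ["a", "a", "b", "b", "c", "c", "d", "d"],
    "treated":  [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = r/The_Donald user, 0 = comparison user
    "post":     [0, 1, 0, 1, 0, 1, 0, 1],   # 0 = before the intervention, 1 = after
    "activity": [30, 18, 25, 14, 20, 19, 22, 21],  # posts per day
})

means = df.groupby(["treated", "post"])["activity"].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"Difference-in-differences estimate: {did:.1f} posts/day")
```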
The COVID-19 pandemic continues to have a major impact on health and medical infrastructure, the economy, and agriculture. Prominent computational and mathematical models have been unreliable due to the complexity of the spread of infections. Moreover, a lack of data collection and reporting makes modelling attempts difficult and unreliable. Hence, we need to revisit the situation with reliable data sources and innovative forecasting models. Deep learning models such as recurrent neural networks are well suited for modelling spatiotemporal sequences. In this paper, we apply recurrent neural networks such as long short-term memory (LSTM), bidirectional LSTM, and encoder-decoder LSTM models for multi-step (short-term) COVID-19 infection forecasting. We select Indian states with COVID-19 hotspots, capture the first (2020) and second (2021) waves of infections, and provide a two-month-ahead forecast. Our model predicts that the likelihood of another wave of infections in October and November 2021 is low; however, the authorities need to be vigilant given emerging variants of the virus. The accuracy of the predictions motivates the application of the method in other countries and regions. Nevertheless, challenges in modelling remain due to the reliability of data and difficulties in capturing factors such as population density, logistics, and social aspects such as culture and lifestyle.
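As a sketch of multi-step LSTM forecasting, the Keras example below trains a univariate model on a synthetic daily series and emits a seven-day forecast; the window sizes, layer widths, and data are illustrative assumptions, not the configuration or data used in the paper.

```python
# Sketch of a univariate multi-step LSTM forecaster (illustrative settings; not the paper's setup).
import numpy as np
from tensorflow import keras

# Synthetic daily infection counts; a real run would use per-state case data.
series = np.abs(np.sin(np.arange(400) / 20)) * 1000 + np.random.rand(400) * 50

def make_windows(values, n_in=14, n_out=7):
    """Slide a window over the series: 14 past days in, 7 future days out."""
    X, y = [], []
    for i in range(len(values) - n_in - n_out + 1):
        X.append(values[i:i + n_in])
        y.append(values[i + n_in:i + n_in + n_out])
    return np.array(X)[..., None], np.array(y)

X, y = make_windows(series)

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(X.shape[1], 1)),
    keras.layers.Dense(y.shape[1]),  # one output unit per forecast step
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

forecast = model.predict(X[-1:], verbose=0)  # next 7 days from the latest window
print(forecast.round(1))
```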
Online participant recruitment platforms such as Prolific have been gaining popularity in research, as they enable researchers to easily access large pools of participants. However, participant quality can be an issue; participants may give incorrect information to gain access to more studies, adding unwanted noise to results. This paper details our experience recruiting participants from Prolific for a user study requiring programming skills in Node.js, with the aim of helping other researchers conduct similar studies. We explore a method of recruiting programmer participants using prescreening validation, attention checks and a series of programming knowledge questions. We received 680 responses, and determined that 55 met the criteria to be invited to our user study. We ultimately conducted user study sessions via video calls with 10 participants. We conclude this paper with a series of recommendations for researchers.
The world population is anticipated to increase by close to 2 billion by 2050, causing a rapid escalation of food demand. A recent projection shows that the world is lagging behind in accomplishing the "Zero Hunger" goal, in spite of some advancements. Socio-economic and well-being fallout will affect food security, and vulnerable groups of people will suffer malnutrition. To cater to the needs of the increasing population, the agricultural industry needs to be modernized, smart, and automated. Traditional agriculture can be remade into efficient, sustainable, eco-friendly smart agriculture by adopting existing technologies. In this survey paper, we present the applications, technological trends, available datasets, networking options, and challenges in smart agriculture. How Agro Cyber Physical Systems are built upon the Internet-of-Agro-Things is discussed through various application fields. Agriculture 4.0 is also discussed as a whole. We focus on technologies such as Artificial Intelligence (AI) and Machine Learning (ML), which support automation, along with Distributed Ledger Technology (DLT), which provides data integrity and security. After an in-depth study of different architectures, we also present a smart agriculture framework that relies on the location of data processing. We have divided the open research problems of smart agriculture, as future research work, into two groups: a technological perspective and a networking perspective. AI, ML, blockchain as a DLT, and Physical Unclonable Function (PUF)-based hardware security fall under the technology group, whereas network-related attacks, fake data injection, and similar threats fall under the networking research problem group.
In this paper, the challenges of maintaining a healthy IT operational environment are addressed by proactively analyzing IT Service Desk tickets, customer satisfaction surveys, and social media data. A cognitive solution goes beyond traditional structured data analysis by performing deep analyses of both structured data and unstructured text. The salient features of the proposed platform include language identification, translation, hierarchical extraction of the most frequently occurring topics, entities and their relationships, text summarization, sentiment analysis, and knowledge extraction from unstructured text using Natural Language Processing techniques. Moreover, the insights from unstructured text, combined with structured data, allow the development of various classification, segmentation, and time-series forecasting use cases on incident, problem, and change datasets. Further, the text and predictive insights, together with raw data, are used for visualization and exploration of actionable insights on a rich and interactive dashboard. These insights are not only hard to find using traditional structured data analysis, but discovering them can also take a very long time, especially when dealing with a massive amount of unstructured data. By acting on these insights, organizations can benefit from a significant reduction in ticket volume, reduced operational costs, and increased customer satisfaction. In various experiments, on average, up to 18-25% of yearly ticket volume has been reduced using the proposed approach.
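One piece of the described functionality, extracting the most frequently occurring topics from unstructured ticket text, could be realized as sketched below with TF-IDF and NMF in scikit-learn; the ticket texts are invented and this is only one possible approach, not the platform's implementation.

```python
# Sketch of topic extraction from ticket text with TF-IDF + NMF
# (one possible realization; not the platform's actual pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Invented service-desk ticket descriptions.
tickets = [
    "VPN connection drops every hour on laptop",
    "Cannot reset password for email account",
    "Laptop VPN client fails to connect from home office",
    "Password reset link in email has expired",
    "Printer on floor 3 is offline again",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(tickets)

nmf = NMF(n_components=3, random_state=0)
nmf.fit(tfidf)

terms = vectorizer.get_feature_names_out()
for i, component in enumerate(nmf.components_):
    top = [terms[j] for j in component.argsort()[-4:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```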
Throughout the COVID-19 pandemic, a significant amount of effort has been put into developing techniques that predict the number of infections under various assumptions about public policy and non-pharmaceutical interventions. While the available data, the sophistication of AI models, and the available computing power all exceed what was available in previous years, the overall success of prediction approaches has been very limited. In this paper, we start from prediction algorithms proposed for the XPrize Pandemic Response Challenge and consider several directions that might allow their improvement. Then, we investigate their performance over medium-term predictions extending over several months. We find that while augmenting the algorithms with additional information about the culture of the modeled region, incorporating traditional compartmental models, and adopting up-to-date deep learning architectures can improve performance for short-term predictions, the accuracy of medium-term predictions is still very low, and a significant amount of future research is needed to make such models a reliable component of a public policy toolbox.
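For reference, the traditional compartmental component mentioned above can be as simple as the SEIR sketch below, solved with SciPy; the parameters are illustrative, and the hybrid combination with deep learning predictors is not shown.

```python
# Minimal SEIR compartmental model sketch (illustrative parameters; not the paper's hybrid model).
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta=0.3, sigma=1 / 5.2, gamma=1 / 10):
    """Classic SEIR dynamics: susceptible -> exposed -> infectious -> recovered."""
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

# One exposed case in a population of one million, simulated for 180 days.
y0 = [1_000_000 - 1, 1, 0, 0]
sol = solve_ivp(seir, (0, 180), y0, t_eval=np.arange(0, 181))

peak_day = sol.t[np.argmax(sol.y[2])]
print(f"Peak infections around day {peak_day:.0f}")
```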
Federated learning (FL) is an emerging, privacy-preserving machine learning paradigm, drawing tremendous attention in both academia and industry. A unique characteristic of FL is heterogeneity, which resides in the various hardware specifications and dynamic states across the participating devices. Theoretically, heterogeneity can exert a huge influence on the FL training process, e.g., making a device unavailable for training or unable to upload its model updates. Unfortunately, these impacts have never been systematically studied and quantified in the existing FL literature. In this paper, we carry out the first empirical study to characterize the impacts of heterogeneity in FL. We collect large-scale data from 136k smartphones that can faithfully reflect heterogeneity in real-world settings. We also build a heterogeneity-aware FL platform that complies with the standard FL protocol but takes heterogeneity into consideration. Based on the data and the platform, we conduct extensive experiments to compare the performance of state-of-the-art FL algorithms under heterogeneity-aware and heterogeneity-unaware settings. Results show that heterogeneity causes non-trivial performance degradation in FL, including up to a 9.2% accuracy drop, a 2.32x longer training time, and undermined fairness. Furthermore, we analyze possible impact factors and find that device failure and participant bias are two potential factors for the performance degradation. Our study provides insightful implications for FL practitioners. On the one hand, our findings suggest that FL algorithm designers should take heterogeneity into account during evaluation. On the other hand, our findings urge system providers to design specific mechanisms to mitigate the impacts of heterogeneity.
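To make the unavailability effect concrete, the toy NumPy sketch below runs federated averaging while randomly dropping sampled devices; it is a simplified illustration on invented data, not the authors' heterogeneity-aware platform.

```python
# Toy federated averaging with randomly unavailable devices (illustrative; not the authors' platform).
import numpy as np

rng = np.random.default_rng(0)
n_devices, dim = 100, 10

global_model = np.zeros(dim)
# Each device holds a local optimum; here just a random vector per device.
local_targets = [rng.normal(size=dim) for _ in range(n_devices)]

for round_id in range(20):
    # Heterogeneity: some sampled devices fail to upload (offline, low battery, slow hardware).
    sampled = rng.choice(n_devices, size=10, replace=False)
    available = [d for d in sampled if rng.random() > 0.3]  # roughly 30% drop out
    if not available:
        continue
    # Local "training": one step toward each available device's target.
    updates = [local_targets[d] - global_model for d in available]
    global_model += 0.1 * np.mean(updates, axis=0)  # FedAvg-style aggregation

print(f"Final model (first 3 coordinates): {global_model[:3].round(2)}")
```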
In recent years, with the rise of Cloud Computing (CC), many companies providing services in the cloud have added a new series of services, such as data mining (DM) and data processing, to their catalogs, taking advantage of the vast computing resources available to them. Different service definition proposals have been put forward to address the problem of describing CC services in a comprehensive way. Bearing in mind that each provider has its own definition of the logic of its services, and specifically of DM services, the ability to describe services in a flexible way across providers is fundamental to maintaining the usability and portability of this type of CC service. The use of semantic technologies based on the Linked Data (LD) proposal for the definition of services allows the design and modelling of DM services with a high degree of interoperability. In this article, a schema for the definition of DM services in CC is presented, which also considers all key aspects of a CC service, such as prices, interfaces, Service Level Agreements, instances, and experimentation workflows, among others. The proposal is based on LD, so it reuses other schemata to obtain a better definition of the service. For the validation of the schema, a series of DM services have been created in which some of the best-known algorithms, such as Random Forest and KMeans, are modeled as services.
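As a rough illustration of describing a DM service as Linked Data, the rdflib sketch below emits a few triples for a hypothetical Random Forest service; the namespace and properties are placeholders, not the schema proposed in the article.

```python
# Sketch of describing a data mining service as Linked Data with rdflib
# (placeholder namespace and properties; not the article's schema).
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/dm-service#")  # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

service = URIRef("http://example.org/services/random-forest")
g.add((service, RDF.type, EX.DataMiningService))
g.add((service, EX.algorithm, Literal("Random Forest")))
g.add((service, EX.pricePerHour, Literal(0.25, datatype=XSD.decimal)))
g.add((service, EX.interface, Literal("REST")))

print(g.serialize(format="turtle"))
```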