The temporal evolution of the coronavirus literature over the last thirty years (N=43,769) is analyzed, along with its subdomain of SARS-CoV-2 articles (N=27,460) and the subdomain of reviews and meta-analytic studies (N=1,027). (i) The analyses of the subset of SARS-CoV-2 literature identified studies published prior to 2020 that have since proven highly instrumental in the development of various clusters of publications linked to SARS-CoV-2. In particular, the so-called sleeping beauties of the coronavirus literature with an awakening in 2020 were identified, i.e., previously published studies in this literature that had remained relatively unnoticed for several years but gained sudden traction in 2020 in the wake of the SARS-CoV-2 outbreak. (ii) The subset of 2020 SARS-CoV-2 articles is bibliographically distant from the rest of this literature published prior to 2020. Individual articles of the SARS-CoV-2 segment that play a bridging role between the two bodies of articles (i.e., before and after 2020) are identifiable. (iii) Furthermore, bibliographic coupling within the 2020 SARS-CoV-2 cluster is much weaker than within the cluster of articles published prior to 2020. This could, in part, be explained by the greater diversity of topics studied in relation to SARS-CoV-2 compared to the coronavirus literature published before the emergence of SARS-CoV-2. This work demonstrates how scholarly efforts undertaken in peacetime, or prior to a disease outbreak, can suddenly play a critical role in the prevention and mitigation of health disasters caused by new diseases.
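Bibliographic coupling, the measure underlying points (ii) and (iii), can be computed directly from reference lists. The sketch below is purely illustrative (the function name and toy data are hypothetical, not taken from the study) and shows one common cosine-style normalization of coupling strength between two articles.

```python
# Illustrative sketch: bibliographic coupling strength between two articles,
# i.e., the overlap of their reference lists (names and data are hypothetical).
from math import sqrt

def coupling_strength(refs_a: set[str], refs_b: set[str]) -> float:
    """Cosine-normalized bibliographic coupling: shared references divided by
    the geometric mean of the two reference-list sizes."""
    if not refs_a or not refs_b:
        return 0.0
    shared = len(refs_a & refs_b)
    return shared / sqrt(len(refs_a) * len(refs_b))

# Toy example: two SARS-CoV-2 articles citing partially overlapping prior work.
article_1 = {"DOI:10.1/a", "DOI:10.1/b", "DOI:10.1/c"}
article_2 = {"DOI:10.1/b", "DOI:10.1/c", "DOI:10.1/d", "DOI:10.1/e"}
print(coupling_strength(article_1, article_2))  # ~0.58
```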
We have witnessed an unprecedented public health crisis caused by the new coronavirus disease (COVID-19), which has severely affected medical institutions, daily life, and socio-economic activities. The crisis has also revealed the brittleness of existing medical services, such as the over-centralization of medical resources, the lag in the digitalization of medical services, and the weak security and privacy protection of medical data. The integration of the Internet of Medical Things (IoMT) and blockchain is expected to help address COVID-19, thanks to the ubiquitous presence and sensing capabilities of IoMT as well as the enhanced security and immutability of the blockchain. However, the synergy of IoMT and blockchain also faces challenges in privacy, latency, and the lack of context awareness. Emerging edge intelligence technologies bring opportunities to tackle these issues. In this article, we present blockchain-empowered edge intelligence for IoMT to address the COVID-19 crisis. We first review IoMT, edge intelligence, and blockchain in the context of the COVID-19 pandemic. We then discuss the opportunities of integrating blockchain and edge intelligence and present an architecture of blockchain-empowered edge intelligence for IoMT. We next describe the solutions this architecture offers for COVID-19: 1) monitoring and tracing the origin of the COVID-19 pandemic; 2) traceable supply chains for injectable medicines and COVID-19 vaccines; and 3) telemedicine and remote healthcare services. Finally, we discuss the challenges and open issues of blockchain-empowered edge intelligence.
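The traceability and immutability invoked above rest on the basic append-only hash-chain structure of a blockchain. The following minimal sketch (it is not the paper's system; all names and fields are assumptions for illustration) shows how supply-chain events could be linked so that altering a past entry breaks every later hash.

```python
# Minimal sketch (not the paper's system): an append-only hash chain recording
# hypothetical vaccine supply-chain events.
import hashlib, json, time

def add_block(chain: list[dict], event: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"event": event, "prev_hash": prev_hash, "ts": time.time()}
    payload["hash"] = hashlib.sha256(
        json.dumps({k: payload[k] for k in ("event", "prev_hash", "ts")},
                   sort_keys=True).encode()
    ).hexdigest()
    chain.append(payload)

ledger: list[dict] = []
add_block(ledger, {"batch": "VAX-001", "step": "manufactured"})
add_block(ledger, {"batch": "VAX-001", "step": "shipped to clinic"})
print(ledger[1]["prev_hash"] == ledger[0]["hash"])  # True: blocks are linked
```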
Context: Machine Learning (ML) has been at the heart of many innovations over the past years. However, including it in so-called 'safety-critical' systems such as automotive or aeronautic systems has proven to be very challenging, since the shift in paradigm that ML brings completely changes traditional certification approaches. Objective: This paper aims to elucidate the challenges related to the certification of ML-based safety-critical systems, as well as the solutions proposed in the literature to tackle them, answering the question 'How to Certify Machine Learning Based Safety-critical Systems?'. Method: We conduct a Systematic Literature Review (SLR) of research papers published between 2015 and 2020, covering topics related to the certification of ML systems. In total, we identified 217 papers covering topics considered to be the main pillars of ML certification: Robustness, Uncertainty, Explainability, Verification, Safe Reinforcement Learning, and Direct Certification. We analyzed the main trends and problems of each sub-field and provided summaries of the extracted papers. Results: The SLR results highlighted the enthusiasm of the community for this subject, as well as the lack of diversity in terms of datasets and types of models. They also emphasized the need to further develop connections between academia and industry to deepen the study of the domain. Finally, they illustrated the necessity of building connections between the above-mentioned pillars, which are for now mainly studied separately. Conclusion: We highlighted current efforts deployed to enable the certification of ML-based software systems and discussed some future research directions.
We analyze repeated cross-sectional survey data collected by the Institute of Global Health Innovation to characterize the perception and behavior of the Italian population during the Covid-19 pandemic, focusing on the period from April to November 2020. To accomplish this goal, we propose a Bayesian dynamic latent-class regression model that accounts for sampling bias by including survey weights in the likelihood function. According to the proposed approach, attitudes towards Covid-19 are described via three ideal behaviors that are fixed over time, corresponding to different degrees of compliance with spread-preventive measures. The overall tendency toward a specific profile changes dynamically across survey waves via a latent Gaussian process regression that adjusts for subject-specific covariates. We illustrate the dynamic evolution of Italians' behaviors during the pandemic, providing insights into how the proportions of the ideal behaviors varied during the phases of the lockdown, while measuring the effect of respondents' age, sex, region, and employment status on attitudes toward Covid-19.
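One common way to fold survey weights into a latent-class likelihood, shown here as a hedged schematic rather than the exact specification of the proposed model, is to raise each respondent's class-mixture contribution to the power of their weight, giving a pseudo-likelihood:

```latex
% Schematic weighted (pseudo-)likelihood for a dynamic latent-class regression.
% w_i: survey weight; K: number of ideal behaviors; \pi_k: class probabilities
% driven by a latent Gaussian process and covariates x_i at wave t_i.
L(\theta) \;=\; \prod_{i=1}^{n} \Bigl[ \sum_{k=1}^{K}
  \pi_k\bigl(x_i, t_i\bigr)\, f_k\bigl(y_i \mid \theta_k\bigr) \Bigr]^{w_i}
```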
The 2019 coronavirus disease (COVID-19) pandemic, caused by the rapid spread of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), has had a profound impact worldwide, both in terms of the loss of human life and the economic and social disruption. The use of digital technologies has been seen as an important effort to combat the pandemic, and one such technology is contact tracing applications. These applications have been successfully employed against other infectious diseases, and thus they have been used during the current pandemic. However, contact tracing poses several privacy concerns, since it is necessary to store and process data that can lead to user/device identification as well as location and behavior tracking. These concerns are even more relevant for nationwide implementations, since they can lead to mass surveillance by authoritarian governments. Despite the restrictions imposed by data protection laws in several countries, there are still doubts about whether users' privacy is preserved. In this article, we analyze the privacy features of national contact tracing COVID-19 applications considering their intrinsic characteristics. As a case study, we discuss in more depth the Brazilian COVID-19 application Coronavírus-SUS, since Brazil is one of the countries most affected by the current pandemic. Finally, as we believe contact tracing will continue to be employed as part of the strategy for the current and potential future pandemics, we present key research challenges.
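One widely discussed way to limit user/device identification in such applications is to broadcast short-lived pseudonymous identifiers instead of stable ones. The sketch below is a hypothetical illustration of that idea and does not describe Coronavírus-SUS or any specific national application.

```python
# Hypothetical sketch: deriving rotating ephemeral IDs from a daily secret key,
# so that broadcast identifiers cannot be linked to a stable device identity.
import hmac, hashlib, os

def ephemeral_ids(daily_key: bytes, slots_per_day: int = 96) -> list[bytes]:
    """Derive one short identifier per 15-minute slot via HMAC-SHA256."""
    return [
        hmac.new(daily_key, f"slot-{slot}".encode(), hashlib.sha256).digest()[:16]
        for slot in range(slots_per_day)
    ]

key = os.urandom(32)           # kept on the device, never uploaded
ids = ephemeral_ids(key)
print(len(ids), ids[0].hex())  # 96 rotating identifiers for the day
```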
Over the last 20 years, a very large number of startups have been launched, ranging from mobile application and game providers to enormous corporations that started as tiny startups. Startups are an important topic for research and development. The fundamentals of success are the characteristics of individuals and teams, partner investors, the market, and the speed at which everything evolves. The startup business environment is fraught with uncertainty: founders tend to be young and inexperienced, technologies are either new or rapidly evolving, and a team's combined skills and knowledge can be either decisive or fatal. As over 90 per cent of software startups fail, having a capable and reliable team is crucial to survival and success. Many aspects of this topic have been studied extensively, and the results of research on human capital are particularly important. Among human capital attributes such as knowledge, experience, skills, and other cognitive abilities, this dissertation focuses on design skills and their deployment in startups. Design is widely studied in artistic and industrial contexts, but its application in startup culture and software startups remains confined to its own method prison, in which old and conventional means are chosen instead of new techniques and more demanding design practices. When a software startup instead treats design as a foundation for creativity and for generating better offerings, it can approach any industry with a disruptive agenda, since almost anything can be made software-intensive.
Awareness of the possible impacts associated with artificial intelligence has risen in proportion to progress in the field. While there are tremendous benefits to society, many argue that there are just as many, if not more, concerns related to advanced forms of artificial intelligence. Accordingly, research into methods for developing artificial intelligence safely is increasingly important. In this paper, we provide an overview of one such safety paradigm, containment, with a critical lens aimed at generative adversarial networks and potentially malicious artificial intelligence. Additionally, we illuminate the potential for a developmental blind spot arising from the stovepiping of containment mechanisms.
Scientific and technological progress is largely driven by firms in many domains, including artificial intelligence and vaccine development. However, we do not yet know whether the success of firms' research activities exhibits dynamic regularities and some degree of predictability. By inspecting the research lifecycles of 7,440 firms, we find that the economic value of a firm's early patents is an accurate predictor of various dimensions of the firm's future research success. At the same time, a smaller set of future top performers does not generate early patents of high economic value but is detectable via the technological value of their early patents. Importantly, the observed predictability cannot be explained by a cumulative-advantage mechanism, and the observed heterogeneity of the firms' temporal success patterns differs markedly from patterns previously observed for individuals' research careers. Our results uncover the dynamical regularities of the research success of firms, and they could inform managerial strategies as well as policies to promote entrepreneurship and accelerate human progress.
A novel model is introduced here for the SOC change index, defined as the normalized difference between the current Soil Organic Carbon (SOC) and its value at an initial reference year. It is tailored to the dynamics of the RothC carbon model and takes as its baseline the SOC equilibrium value under constant environmental conditions. A sensitivity analysis is performed to evaluate the response of the model to changes in temperature, Net Primary Production (NPP), and land-use soil class (forest, grassland, arable). A non-standard monthly time-stepping procedure is proposed to approximate the SOC change index in the Alta Murgia National Park, a protected area in the Italian Apulia region selected as the test site. For the arable class, the SOC change index exhibits a negative trend, which can be reversed by the suitable organic fertilization program proposed here.
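Written out, the SOC change index described above takes the form of a normalized difference; the notation below is a hedged reading of the verbal definition, not copied from the paper.

```latex
% SOC change index at time t, relative to the reference year t_0.
\Delta_{\mathrm{SOC}}(t) \;=\;
  \frac{\mathrm{SOC}(t) - \mathrm{SOC}(t_0)}{\mathrm{SOC}(t_0)}
```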
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
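As a rough schematic of skill-acquisition efficiency (an illustrative simplification, not the paper's exact Algorithmic Information Theory formalism), intelligence over a scope of tasks can be pictured as the generalization difficulty overcome per unit of priors and experience consumed:

```latex
% Illustrative schematic only (not the paper's exact formula):
% GD_T: generalization difficulty of task T; P: priors;
% E_T: experience consumed to reach a fixed skill threshold on T.
I \;\propto\; \operatorname*{Avg}_{T \in \text{scope}} \;
  \frac{\mathrm{GD}_T}{P + E_T}
```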
BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to their success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features of interest, we propose a methodology and carry out a qualitative and quantitative analysis of the information encoded by BERT's individual heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating that the overall model is overparametrized. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models.
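Disabling the attention of selected heads can be reproduced with a head mask, as in the sketch below; the layer and head indices are arbitrary placeholders, and the study's own head-selection procedure is not shown.

```python
# Illustrative sketch: zeroing out selected attention heads in BERT via the
# head_mask argument of Hugging Face transformers (indices are arbitrary).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# head_mask has shape (num_layers, num_heads); 1.0 keeps a head, 0.0 disables it.
head_mask = torch.ones(model.config.num_hidden_layers,
                       model.config.num_attention_heads)
head_mask[10, 3] = 0.0  # e.g., disable head 3 in layer 10 (arbitrary choice)

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
outputs = model(**inputs, head_mask=head_mask)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```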