Significant attention in the financial industry has paved the way for blockchain technology to spread across other industries, resulting in a plethora of literature on the subject. This study approaches the subject through bibliometrics and network analysis of 6790 blockchain-related records extracted from the Web of Science for 2014-2020. The study examines (i) the impact of open access publication on the growth and visibility of the literature, (ii) collaboration patterns and the impact of team size on collaboration, (iii) the ranking of countries based on their national and international collaboration, and (iv) the major themes in the literature through thematic analysis. Reflecting the significant momentum gained by blockchain, open access publications outnumbered non-open-access publications by a factor of 1.5 in 2020. The analysis articulates the numerous potentials of the blockchain literature and its adoption by various countries and their authors. China and the USA are the top leaders in the field and apply blockchain most often to smart contracts, supply chains, and the Internet of Things. The results also show that fewer than 1% of authors have contributed to multiple works on blockchain, and that authors preferred to work in smaller teams.
While operating communication networks adaptively may improve utilization and performance, frequent adjustments also introduce an algorithmic challenge: the re-optimization of traffic engineering solutions is time-consuming and may limit the granularity at which a network can be adjusted. This paper is motivated by the question of whether the reactivity of a network can be improved by re-optimizing solutions dynamically rather than from scratch, especially if inputs such as link weights do not change significantly. This paper explores to what extent dynamic algorithms can be used to speed up fundamental tasks in network operations. We specifically investigate optimizations related to traffic engineering (namely shortest paths and maximum flow computations), but also consider spanning tree and matching applications. While prior work on dynamic graph algorithms focuses on link insertions and deletions, we are interested in the practical problem of link weight changes. We revisit existing upper bounds in the weight-dynamic model and present several novel lower bounds on the amortized runtime for recomputing solutions. In general, we find that the potential performance gains depend on the application, and there are also strict limitations on what can be achieved, even if link weights change only slightly.
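To make the contrast between re-optimizing and recomputing from scratch concrete, the following sketch (illustrative only, not taken from the paper; all names are assumptions) compares a full Dijkstra recomputation with an incremental update after a single link-weight decrease, which re-relaxes distances only downstream of the changed edge:

```python
import heapq

def dijkstra(adj, src):
    # Full recomputation: standard Dijkstra over a dict-of-dicts adjacency.
    dist = {v: float("inf") for v in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def decrease_weight(adj, dist, u, v, new_w):
    # Incremental update after a single weight *decrease* on edge (u, v):
    # instead of rerunning Dijkstra, re-relax only from the affected endpoint.
    adj[u][v] = new_w
    if dist[u] + new_w < dist[v]:
        dist[v] = dist[u] + new_w
        pq = [(dist[v], v)]
        while pq:
            d, x = heapq.heappop(pq)
            if d > dist[x]:
                continue
            for y, w in adj[x].items():
                if d + w < dist[y]:
                    dist[y] = d + w
                    heapq.heappush(pq, (dist[y], y))
    return dist
```

Only vertices whose shortest path actually improves are touched, which is the kind of saving a weight-dynamic algorithm hopes for; weight increases are the harder case and are part of what the paper's lower bounds address.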
Disaster victim identification (DVI) entails a protracted process of evidence collection and data matching to reconcile physical remains with victim identity. Technology is critical to DVI because it enables physical evidence to be linked to information. However, labelling physical remains and collecting data at the scene are dominated by low-technology, paper-based practices. We ask: how can technology help us tag and track the victims of disaster? Our response has two parts. First, we conducted a human-computer interaction led investigation into the systematic factors impacting DVI tagging and tracking processes. Through interviews with Australian DVI practitioners, we explored how technologies to improve linkage might fit with prevailing work practices and preferences; practical and social considerations; and existing systems and processes. Using insights from these interviews and relevant literature, we identified four critical themes: protocols and training; stress and stressors; the plurality of information capture and management systems; and practicalities and constraints. Second, we applied these themes to critically review technologies that could support DVI practitioners by enhancing DVI processes that link physical evidence to information. This resulted in an overview of candidate technologies matched with consideration of their key attributes. This study recognises the importance of considering human factors that can affect technology adoption into existing practices. We provide a searchable table (Supplementary Information) that relates technologies to the key attributes relevant to DVI practice, for readers to apply to their own context. While this research directly contributes to DVI, it also has applications to other domains in which a physical/digital linkage is required, particularly within high-stress environments.
The fourth industrial revolution is rapidly changing the manufacturing landscape. Due to the rapid growth and evolution of research in this field, no clear definitions of Industry 4.0 and its associated concepts yet exist. This work provides a clear description of technological trends and gaps. We introduce a novel method to create a map of Industry 4.0 technologies, using natural language processing to extract technology terms from 14,667 research articles and applying network analysis. We identified eight clusters of Industry 4.0 technologies, which served as the basis for our analysis. Our results show that Industrial Internet of Things (IIoT) technologies have become the center of the Industry 4.0 technology map. This is in line with the initial definitions of Industry 4.0, which centered on IIoT. Given the recent growth in the importance of artificial intelligence (AI), we suggest accounting for AI's fundamental role in Industry 4.0 and understanding the fourth industrial revolution as an AI-powered natural collaboration between humans and machines. This article introduces a novel approach for literature reviews, and the results highlight trends and research gaps to guide future work and help researchers and practitioners reap the benefits of digital transformations.
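A technology map of the kind described can be built on a term co-occurrence network. The snippet below is an illustrative reconstruction of one step, not the authors' actual pipeline: it counts how often extracted technology terms appear together in the same article, producing weighted edges that a clustering step could then operate on.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(docs_terms):
    # docs_terms: list of per-article term lists, e.g. from an NLP extractor.
    # Returns a Counter mapping sorted term pairs to co-occurrence counts,
    # i.e. the weighted edge list of the technology network.
    edges = Counter()
    for terms in docs_terms:
        # dedupe within an article so a pair is counted at most once per paper
        for a, b in combinations(sorted(set(terms)), 2):
            edges[(a, b)] += 1
    return edges
```

Edge weights like these are a common input for community-detection algorithms, which would yield clusters analogous to the eight technology clusters reported.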
With the rise of AI in SE, researchers have shown how AI can be applied to assist software developers in a wide variety of activities. However, it has not been accompanied by a complementary increase in labelled datasets, which is required in many supervised learning methods. Several studies have been using crowdsourcing platforms to collect labelled training data in recent years. However, research has shown that the quality of labelled data is unstable due to participant bias, knowledge variance, and task difficulty. Thus, we present CodeLabeller, a web-based tool that aims to provide a more efficient approach in handling the process of labelling Java source files at scale by improving the data collection process throughout, and improving the degree of reliability of responses by requiring each labeller to attach a confidence rating to each of their responses. We test CodeLabeller by constructing a corpus of over a thousand source files obtained from a large collection of opensource Java projects, and labelling each Java source file with their respective design patterns and summaries. Apart from assisting researchers to crowdsource a labelled dataset, the tool has practical applicability in software engineering education and assists in building expert ratings for software artefacts. This paper discusses the motivation behind the creation of CodeLabeller, the intended users, a tool demonstration and its UI, its implementation, benefits, and lastly, the evaluation through a user study and in-practice usage.
Symbol-pair codes are block codes with symbol-pair metrics designed to protect against pair-errors that may occur in high-density data storage systems. MDS symbol-pair codes are optimal in the sense that they attain the highest pair-error correctability for a given code length and code size. Constructing MDS symbol-pair codes is one of the main topics in the study of symbol-pair codes. In this paper, we characterize the symbol-pair distances of some constacyclic codes of arbitrary length over finite fields and over a class of finite chain rings. Using this characterization, we present several classes of MDS symbol-pair constacyclic codes and show that, within the classes of constacyclic codes considered, there are no MDS symbol-pair codes other than those we present. Moreover, some of these MDS symbol-pair constacyclic codes over finite chain rings cannot be obtained from previous constructions.
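For reference, the pair metric underlying these results can be stated as follows (these are standard definitions from the symbol-pair coding literature; the paper's notation may differ). For a word $x=(x_0,\dots,x_{n-1})$ over an alphabet $\Sigma$, the pair-read vector and the pair distance between two words $x,y$ are

$$\pi(x) = \big((x_0,x_1),\,(x_1,x_2),\,\dots,\,(x_{n-1},x_0)\big), \qquad d_p(x,y) = d_H\big(\pi(x),\pi(y)\big),$$

where $d_H$ denotes the Hamming distance on $(\Sigma\times\Sigma)^n$. A Singleton-type bound states that a $q$-ary code $C$ of length $n$ with minimum pair distance $d_p$ satisfies $|C| \le q^{\,n-d_p+2}$; codes meeting this bound with equality are called MDS symbol-pair codes, which is the optimality notion used above.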
In humans, attention is a core property of all perceptual and cognitive operations. Given our limited ability to process competing sources of information, attention mechanisms select, modulate, and focus on the information most relevant to behavior. For decades, concepts and functions of attention have been studied in philosophy, psychology, neuroscience, and computing. For the last six years, this property has been widely explored in deep neural networks, and the state of the art in deep learning is now represented by neural attention models in several application domains. This survey provides a comprehensive overview and analysis of developments in neural attention models. We systematically reviewed hundreds of architectures in the area, identifying and discussing those in which attention has shown a significant impact. We also developed and made public an automated methodology to facilitate the development of reviews in the area. By critically analyzing 650 works, we describe the primary uses of attention in convolutional networks, recurrent networks, and generative models, identifying common subgroups of uses and applications. Furthermore, we describe the impact of attention in different application domains and its effect on the interpretability of neural networks. Finally, we list possible trends and opportunities for further research, hoping that this review will provide a succinct overview of the main attentional models in the area and guide researchers in developing future approaches that will drive further improvements.
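As a concrete illustration of the basic mechanism these models share, the sketch below implements scaled dot-product attention, softmax(QKᵀ/√d_k)V, in NumPy (a minimal example, not any specific architecture from the survey):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K: (n, d_k) query/key matrices; V: (n, d_v) value matrix.
    # Computes softmax(Q K^T / sqrt(d_k)) V.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V, weights
```

Each output row is a convex combination of the value vectors, with weights that "select and modulate" the most relevant inputs — the computational analogue of the selection function described above.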
The explanation dimension of Artificial Intelligence (AI) based systems has been a hot topic in recent years. Different communities have raised concerns about the increasing presence of AI in people's everyday tasks and how it can affect their lives. There is a considerable body of research addressing the interpretability and transparency concepts of explainable AI (XAI), which are usually related to algorithms and Machine Learning (ML) models. But in decision-making scenarios, people need more awareness of how AI works and of its outcomes in order to build a relationship with the system. Decision-makers in many domains need to justify their decisions to others. If a decision is based on or influenced by an AI system's outcome, an explanation of how the AI reached that result is key to building trust between AI and humans in decision-making scenarios. In this position paper, we discuss the role of XAI in decision-making scenarios, present our vision of decision-making with an AI system in the loop, and explore one case from the literature showing how XAI can affect the way people justify their decisions, considering the importance of building the human-AI relationship in those scenarios.
This paper identifies the factors that have an impact on mobile recommender systems. Recommender systems have become a widely used technology in online applications where there is an information overload problem. Numerous applications such as e-commerce, video platforms, and social networks provide personalized recommendations to their users, which has improved both the user experience and vendor revenues. The development of recommender systems has focused mostly on proposing new algorithms that provide more accurate recommendations. However, the use of mobile devices and the rapid growth of the internet and networking infrastructure have made mobile recommender systems a necessity. This work identifies the links between web and mobile recommender systems, describes how recommendations in mobile environments can be improved, and provides solid future directions that aim to lead to a more integrated mobile recommendation domain.
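To ground the discussion, here is a minimal user-based collaborative-filtering sketch (purely illustrative, not the paper's method) showing the core computation a web recommender performs before mobile context such as location or device constraints is layered on:

```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse rating profiles {item: rating}.
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(target, others, k=1):
    # Score items unseen by the target user, weighting each neighbour's
    # ratings by that neighbour's similarity to the target.
    scores = {}
    for other in others:
        s = cosine(target, other)
        for item, r in other.items():
            if item not in target:
                scores[item] = scores.get(item, 0.0) + s * r
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Mobile recommenders extend exactly this kind of scoring with contextual signals, which is where the factors identified in the paper come into play.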
This paper reports on modern approaches to Information Extraction (IE) and its two main sub-tasks, Named Entity Recognition (NER) and Relation Extraction (RE). Basic concepts and the most recent approaches in this area are reviewed, which mainly include Machine Learning (ML) based approaches and the more recent trend toward Deep Learning (DL) based methods.
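As a small illustration of what NER output looks like in practice, the helper below (illustrative only; a simplified decoder, not from the paper) converts a BIO tag sequence — the common output format of ML and DL sequence labellers — into entity spans:

```python
def bio_to_spans(tokens, tags):
    # Decode BIO tags (B-TYPE begins an entity, I-TYPE continues it,
    # O is outside) into a list of (entity_text, entity_type) spans.
    spans = []
    start, etype = None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last entity
        if tag == "O" or tag.startswith("B-"):
            if start is not None:
                spans.append((" ".join(tokens[start:i]), etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans
```

A relation extraction step would then take pairs of such spans (e.g. a PER and a LOC entity) and classify the relation, if any, between them.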