The digital connectivity gap in the global south hampered the education of millions of school children during the COVID-19 pandemic. If no action is taken to remedy this problem, the future prospects of millions of children around the world will be bleak. This paper explores the feasibility of using the SpaceX Starlink satellite constellation as a means to alleviate the digital connectivity divide in the global south. First, the paper discusses the issues of digital connectivity in education in rural Sri Lanka and other countries in the global south. Then, it introduces Starlink broadband internet technology and discusses its advantages over traditional technologies. After that, it discusses a possible mechanism for adopting Starlink technology as a solution to the rural digital connectivity problem in the global south; both technological and economic aspects of such a scheme are discussed. Finally, challenges that may arise in deploying a system such as Starlink to improve rural digital connectivity in Sri Lanka or any other country in the global south are discussed, along with possible remedies.
This paper provides an overview of the effects of hearing loss on neurological function and progressive diseases, and explores the role of cognitive load monitoring in detecting dementia. It also investigates the prospects of utilizing hearing aid technology to reverse cognitive decline and delay the onset of dementia in the elderly population. The interrelation between hearing loss, cognitive load, and dementia is discussed. Future considerations for improvement with respect to robust diagnosis, user centricity, device accuracy, and privacy for wider clinical practice are also explored. The review concludes by discussing the future scope and potential of designing practical wearable microwave technologies and evaluating their use in smart care home settings.
Supply chain management plays an essential role in our economy, as evidenced by recent COVID-19-induced supply chain challenges. Traditional supply chain management faces security and efficiency issues, but these can be addressed by leveraging digital twins and blockchain technology. A digital twin is an exact virtual representation of a physical asset, system, or process that synchronises data for the monitoring, simulation, and prediction of performance. The integration of blockchain technology can benefit digital twins through improved security, traceability, transparency, and efficiency of digital twin data processing. Thus, the combination of blockchain and digital twins can refine the concepts of both technologies and reform supply chain management to advance into Industry 4.0. In this survey, we provide a comprehensive review of blockchain-based digital twin solutions for optimising the processes of data management, data storage, and data sharing. We also investigate the key benefits of integrating blockchain and digital twins and study their potential implementation in various processes of supply chains, including smart manufacturing, intelligent maintenance, and blockchain-based digital twin shop floors, warehouses, and logistics. This paper has implications for research and practice, which we detail as future research opportunities.
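To make the integration concrete, the following is a minimal sketch of the core idea: each digital twin state snapshot is appended to a hash-linked ledger so that later tampering with synchronised twin data becomes detectable. The `TwinLedger` class and the pump telemetry fields are illustrative inventions, not part of any system surveyed; a production deployment would use a real blockchain platform rather than this in-memory chain.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class TwinLedger:
    """Hash-linked log of digital twin state snapshots (toy example)."""
    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64,
                       "timestamp": 0.0, "state": None}]

    def record_state(self, twin_state: dict) -> dict:
        """Append a twin state snapshot, linked to the previous block."""
        block = {
            "index": len(self.chain),
            "prev_hash": block_hash(self.chain[-1]),
            "timestamp": time.time(),
            "state": twin_state,
        }
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Recompute hash links to detect tampering with any snapshot."""
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

# Hypothetical asset telemetry: any retroactive edit breaks verify().
ledger = TwinLedger()
ledger.record_state({"asset": "pump-17", "temperature_c": 74.2})
ledger.record_state({"asset": "pump-17", "temperature_c": 75.1})
assert ledger.verify()
```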
The COVID-19 pandemic has shed light on how the spread of infectious diseases worldwide is importantly shaped by both human mobility networks and socio-economic factors. Few studies, however, have examined the interaction of mobility networks with socio-spatial inequalities to understand the spread of infection. We introduce a novel methodology, called the Infection Delay Model, to calculate how the arrival time of an infection varies geographically, considering both effective-distance-based metrics and differences in regions' capacity to isolate, a feature associated with socioeconomic inequalities. To illustrate an application of the Infection Delay Model, this paper integrates household travel survey data with cell phone mobility data from the São Paulo metropolitan region to assess the effectiveness of lockdowns in slowing the spread of COVID-19. Rather than operating under the assumption that the next pandemic will begin in the same region as the last, the model estimates infection delays under every possible outbreak scenario, allowing for generalizable insights into the effectiveness of interventions to delay a region's first case. The model sheds light on how the effectiveness of lockdowns in slowing the spread of disease is influenced by the interaction of mobility networks and socio-economic levels. We find that a negative relationship emerges between network centrality and the infection delay after lockdown, irrespective of income. Furthermore, for regions across all income and centrality levels, outbreaks starting in less central locations were more effectively slowed by a lockdown. Using the Infection Delay Model, this paper identifies and quantifies a new dimension of disease risk faced by those most central in a mobility network.
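As an illustration of the kind of computation the Infection Delay Model performs, the sketch below scores arrival times over a mobility network using the classic effective-distance formulation of Brockmann and Helbing, d_mn = 1 - ln(P_mn), with an ad-hoc multiplier standing in for a region's capacity to isolate. The flow numbers, region names, and the `isolation` parameter are hypothetical; the paper's actual model is more elaborate.

```python
import math
import networkx as nx

# Hypothetical mobility flows between three regions (trips per day).
flows = {
    ("A", "B"): 900, ("A", "C"): 100,
    ("B", "C"): 500, ("B", "A"): 700,
    ("C", "A"): 200, ("C", "B"): 400,
}

G = nx.DiGraph()
for (m, n), flux in flows.items():
    G.add_edge(m, n, flux=flux)

# Effective distance: d_mn = 1 - ln(P_mn), where P_mn is the fraction
# of outgoing flux from region m that goes to region n.
for m in G.nodes:
    total = sum(G[m][n]["flux"] for n in G.successors(m))
    for n in G.successors(m):
        p = G[m][n]["flux"] / total
        G[m][n]["eff_dist"] = 1.0 - math.log(p)

def infection_delay(source: str, isolation: dict) -> dict:
    """Relative arrival times from one outbreak source: shortest
    effective-distance paths, inflated by each origin's capacity to
    isolate (a crude stand-in for socio-economic differences)."""
    H = G.copy()
    for m, n in H.edges:
        H[m][n]["eff_dist"] *= 1.0 + isolation.get(m, 0.0)
    return nx.single_source_dijkstra_path_length(H, source, weight="eff_dist")

# A lockdown that raises isolation in region B delays onward spread.
print(infection_delay("A", isolation={"B": 0.5}))
```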
Smart grids have received much attention in recent years as a means to optimally manage the resources, transmission, and consumption of electric power. In these grids, one of the most important communication services is the multicast service. Providing multicast services in the smart grid's communication network poses several challenges, including the heterogeneity of the communication media and the strict requirements on reliability, security, and latency. Wireless technologies and PLC links are the two most important media used in these grids; PLC links in particular are very unstable, which makes it difficult to provide reliability. In this research, the problem of geographic flooding of multicast data is considered. First, the problem is modeled as an optimization problem, which serves as a reference model for evaluating the proposed approaches. Then, two multicast tree formation algorithms, MKMB and GCBT, are developed based on geographical information, according to the characteristics of smart grids. A comparison of these two approaches shows the advantages and disadvantages of forming a core-based tree versus a source-based tree. Evaluation shows a relative improvement in tree cost and end-to-end delay compared to the baseline algorithms. In the second part, providing security and reliability in data transmission is considered. Two algorithms, Hybrid and Multiple, are developed based on the idea of multiple transmission trees. The Hybrid algorithm aims to provide higher security and reliability, while the Multiple algorithm targets minimization of message transmission delay. In the behavior evaluation, both algorithms are studied under different working conditions, and the results indicate that the desired goals are achieved.
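For background, the sketch below implements the classic KMB (Kou-Markowsky-Berman) Steiner-tree heuristic on which MKMB builds: it approximates a minimum-cost multicast tree spanning the group members. The geographic extensions the research develops, and the GCBT core-based variant, are not reproduced here, so this is context rather than the proposed algorithms.

```python
import itertools
import networkx as nx

def kmb_multicast_tree(G: nx.Graph, terminals: list) -> nx.Graph:
    """KMB heuristic: approximate a minimum-cost Steiner tree
    spanning the multicast group members ("terminals")."""
    # 1. Complete "distance graph" on the terminals, weighted by
    #    shortest-path cost in the underlying network.
    dist = nx.Graph()
    for u, v in itertools.combinations(terminals, 2):
        dist.add_edge(u, v,
                      weight=nx.shortest_path_length(G, u, v, weight="weight"))
    # 2. MST of the distance graph, then expand each of its edges
    #    back into the shortest path it summarises.
    sub = nx.Graph()
    for u, v in nx.minimum_spanning_tree(dist).edges:
        nx.add_path(sub, nx.shortest_path(G, u, v, weight="weight"))
    for u, v in sub.edges:
        sub[u][v]["weight"] = G[u][v]["weight"]
    # 3. MST of the expanded subgraph; prune non-terminal leaves.
    tree = nx.minimum_spanning_tree(sub)
    leaves = [n for n in tree if tree.degree(n) == 1 and n not in terminals]
    while leaves:
        tree.remove_nodes_from(leaves)
        leaves = [n for n in tree if tree.degree(n) == 1 and n not in terminals]
    return tree

# Toy grid network with unit link costs and three group members.
G = nx.grid_2d_graph(4, 4)
nx.set_edge_attributes(G, 1, "weight")
print(sorted(kmb_multicast_tree(G, [(0, 0), (3, 3), (0, 3)]).edges))
```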
Big Data are growing at an exponential rate, and tools and technologies to manage, process, and visualize them become necessary in order to extract value. In this paper, a micro-service based platform for the composition, deployment, and execution of Big Data Analytics (BDA) application workflows in several domains and scenarios is presented. ALIDA is a result of previous research activities by ENGINEERING. It aims to provide a unified platform with which both BDA application developers and data analysts can interact. Developers can register new BDA applications through the exposed API and/or through the web user interface. Data analysts can use the registered BDA applications to create batch/stream workflows through a dashboard user interface, in order to manipulate and subsequently visualize results from one or more sources. The platform also supports the auto-tuning of Big Data framework deployment properties to improve metrics for analytics applications. ALIDA has been extended and integrated into a software solution for the analysis of large amounts of data from the avionics industry. A use case within this context is then presented.
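As a rough illustration of the interaction pattern described, the snippet below sketches how a developer might register a BDA application and an analyst might compose a workflow over a REST API. The endpoint paths, payload fields, and base URL are entirely hypothetical, since ALIDA's actual API is not specified here.

```python
import requests

# Hypothetical base URL and endpoints, for illustration only.
ALIDA_API = "https://alida.example.org/api/v1"

# A developer registers a new BDA application via the exposed API.
app_descriptor = {
    "name": "word-count",
    "type": "batch",                               # batch or stream
    "image": "registry.example.org/bda/word-count:1.0",
    "inputs": [{"name": "text", "format": "csv"}],
    "outputs": [{"name": "counts", "format": "parquet"}],
}
resp = requests.post(f"{ALIDA_API}/applications", json=app_descriptor)
resp.raise_for_status()

# An analyst composes registered applications into a workflow.
workflow = {"steps": [{"app": "ingest-logs"}, {"app": "word-count"}]}
requests.post(f"{ALIDA_API}/workflows", json=workflow).raise_for_status()
```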
It is imperative for all stakeholders that digital forensics investigations produce reliable results to ensure the field delivers a positive contribution to the pursuit of justice across the globe. Some aspects of these investigations are inevitably contingent on trust; however, this is not always explicitly considered or critically evaluated. Erroneously treating features of the investigation as trusted can be enormously damaging to the overall reliability of an investigation's findings, as well as to the confidence that external stakeholders can have in it. For example, digital crime scenes can be manipulated by tampering with the digital artefacts left on devices, yet recent studies have shown that efforts to detect such occurrences are rare, leaving digital forensics investigations vulnerable to accusations of inaccuracy. In this paper, a new approach to digital forensics is considered based on the concept of Zero Trust, an increasingly popular design in network security. Zero Trust describes the practitioner mindset and principles upon which reliance on trust in network components is eliminated in favour of dynamic verification of network interactions. An initial definition of Zero Trust Digital Forensics is proposed: a strategy adopted by investigators whereby each aspect of an investigation is assumed to be unreliable until verified. A specific example is then considered, showing how this strategy can be applied to digital forensic investigations to mitigate against the specific risk of evidence tampering. A new principle is also introduced, namely the multifaceted verification of digital artefacts, which can be used by practitioners who wish to adopt a Zero Trust Digital Forensics strategy during their investigations...
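As a toy illustration of multifaceted verification, the sketch below checks a single digital artefact against several independent facets (an acquisition-log hash, a recorded file size, a case time window) and treats it as unreliable unless every check passes. The facet names and record structure are hypothetical, chosen only to make the principle concrete; the paper's own formulation may differ.

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_artefact(path: Path, facets: dict) -> dict:
    """Cross-check one artefact against several independent facets;
    under Zero Trust it stays unreliable until every check passes."""
    digest = sha256_file(path)
    results = {
        "hash_matches_acquisition_log": digest == facets.get("acquired_sha256"),
        "size_matches_filesystem_record": path.stat().st_size == facets.get("recorded_size"),
        "timestamp_within_case_window": facets.get("mtime_in_window", False),
    }
    results["verified"] = all(results.values())
    return results

# Example with a throwaway artefact file (8 bytes of mock evidence).
p = Path("artefact.bin")
p.write_bytes(b"evidence")
print(verify_artefact(p, {
    "acquired_sha256": sha256_file(p),
    "recorded_size": 8,
    "mtime_in_window": True,
}))
```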
Future wireless services must focus on improving the quality of life by enabling various applications, such as extended reality, brain-computer interaction, and healthcare. These applications have diverse performance requirements (e.g., user-defined quality-of-experience metrics, latency, and reliability) that are challenging for existing wireless systems to fulfil. To meet the diverse requirements of these emerging applications, the concept of a digital twin has recently been proposed. A digital twin uses a virtual representation along with security-related technologies (e.g., blockchain), communication technologies (e.g., 6G), computing technologies (e.g., edge computing), and machine learning to enable smart applications. In this tutorial, we present a comprehensive overview of digital twins for wireless systems. First, we present an overview of the fundamental concepts (i.e., design aspects, high-level architecture, and frameworks) of digital twins of wireless systems. Second, a comprehensive taxonomy is devised for two different aspects: twins for wireless and wireless for twins. For the twins-for-wireless aspect, we consider parameters such as twin object design, prototyping, deployment trends, physical device design, interface design, incentive mechanisms, twin isolation, and decoupling. For the wireless-for-twins aspect, parameters such as twin object access, security and privacy, and air interface design are considered. Finally, open research challenges and opportunities are presented along with causes and possible solutions.
A new cost-efficient concept for real-time monitoring of quality-of-service metrics and other service data in 5G-and-beyond access networks is proposed and discussed. It uses a separate return channel based on a vertical-cavity surface-emitting laser operating in the optical injection-locked mode, which simultaneously acts as an optical transmitter and as a resonant-cavity-enhanced photodetector. The feasibility and efficiency of the proposed approach are confirmed by a proof-of-concept experiment in which a high-speed digital signal with multi-position quadrature amplitude modulation of a radio-frequency carrier is optically transceived.
Network embedding aims to learn latent, low-dimensional vector representations of network nodes that are effective in supporting various network analytic tasks. While prior work on network embedding focuses primarily on preserving network topology to learn node representations, recently proposed attributed network embedding algorithms attempt to integrate rich node content information with network topological structure to enhance the quality of network embedding. In reality, networks often have sparse content, incomplete node attributes, and a discrepancy between the node attribute feature space and the network structure space, which severely deteriorates the performance of existing methods. In this paper, we propose a unified framework for attributed network embedding, attri2vec, that learns node embeddings by discovering a latent node attribute subspace via a network-structure-guided transformation of the original attribute space. The resulting latent subspace respects network structure in a more consistent way, towards learning high-quality node representations. We formulate an optimization problem that is solved by an efficient stochastic gradient descent algorithm with time complexity linear in the number of nodes. We investigate a series of linear and non-linear transformations of node attributes and empirically validate their effectiveness on various types of networks. Another advantage of attri2vec is its ability to solve out-of-sample problems: embeddings of newly arriving nodes can be inferred from their attributes through the learned mapping function. Experiments on various types of networks confirm that attri2vec is superior to state-of-the-art baselines for node classification, node clustering, and out-of-sample link prediction tasks. The source code of this paper is available at //github.com/daokunzhang/attri2vec.
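To give a flavour of the approach, the sketch below learns a linear attribute-to-embedding mapping with SGD so that linked nodes obtain similar embeddings, using edge-level negative sampling. This is a deliberately simplified stand-in: attri2vec's actual objective, context generation, and non-linear variants differ, but the out-of-sample property carries over, since a new node's embedding is just its attributes passed through the learned mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_linear_attri2vec(X, edges, dim=16, lr=0.025, neg=5, epochs=50):
    """Learn W so that f(x) = x @ W places linked nodes close together
    (edge prediction with negative sampling, optimised by SGD)."""
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, dim))
    for _ in range(epochs):
        for u, v in edges:
            zu, zv = X[u] @ W, X[v] @ W
            # Positive pair: gradient of -log(sigmoid(zu . zv)).
            g = sigmoid(zu @ zv) - 1.0
            W -= lr * g * (np.outer(X[u], zv) + np.outer(X[v], zu))
            # Negative samples: push random pairs apart.
            for k in rng.integers(0, n, size=neg):
                zk = X[k] @ W
                g = sigmoid(zu @ zk)
                W -= lr * g * (np.outer(X[u], zk) + np.outer(X[k], zu))
    return W

X = rng.random((6, 8))                      # 6 nodes, 8 attributes
edges = [(0, 1), (1, 2), (3, 4), (4, 5)]
W = train_linear_attri2vec(X, edges)
# Out-of-sample: a new node's embedding is simply x_new @ W.
```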
The ever-growing interest witnessed in recent years in the acquisition and development of unmanned aerial vehicles (UAVs), commonly known as drones, has produced a very promising and effective technology. Because of their small size and fast deployment, UAVs have proven effective in collecting data over unreachable areas and restricted coverage zones. Moreover, their flexibly defined capacity enables them to collect information with a very high level of detail, leading to high-resolution images. UAVs originally served mainly in military scenarios; however, in the last decade they have been broadly adopted in civilian applications as well. The task of aerial surveillance and situation awareness is usually accomplished by integrating intelligence, surveillance, observation, and navigation systems, all interacting within the same operational framework. To build this capability, UAVs are well-suited tools that can be equipped with a wide variety of sensors, such as cameras or radars. Deep learning has been widely recognized as a prominent approach in many computer vision applications. In particular, one-stage and two-stage object detectors are regarded as the two most important groups of Convolutional Neural Network based object detection methods. One-stage detectors usually outperform two-stage detectors in speed but typically trail them in detection accuracy. In this study, the focal-loss-based RetinaNet, a one-stage object detector, is utilized for UAV-based object detection, matching the speed of regular one-stage detectors while surpassing two-stage detectors in accuracy. State-of-the-art performance is shown on a UAV-captured image dataset, the Stanford Drone Dataset (SDD).
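Since the study hinges on the focal loss, here is a minimal PyTorch rendering of the binary focal loss from the RetinaNet paper (Lin et al.), FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t); the example logits at the end are made up for demonstration, and the full detector's anchor generation and feature pyramid are out of scope.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    Down-weights easy negatives so the rare positives dominate training."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)      # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Example: 4 anchor predictions, 1 positive; easy negatives contribute little.
logits = torch.tensor([2.0, -1.0, -3.0, 0.5])
targets = torch.tensor([1.0, 0.0, 0.0, 0.0])
print(focal_loss(logits, targets))
```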