A fundamental aim of the healthcare sector is to incorporate technologies that observe and track patients' clinical parameters in day-to-day life. Remote patient monitoring applications are becoming popular because they enable economical healthcare services. Managing the data gathered through these applications also requires due attention. Although cloud-based healthcare applications offer a variety of solutions for storing patient records and delivering the required data to all stakeholders on demand, they suffer from security issues, longer response times, and interruptions to the continuous availability of the system. To overcome these challenges, this chapter proposes an intelligent IoT-based distributed framework for deploying remote healthcare services. In the proposed model, the entities of the system are interconnected using IoT, and a Distributed Database Management System (DDBMS) provides secure and fast data availability to patients and healthcare workers. Blockchain is used to ensure the security of patient medical records. The proposed model comprises intelligent analysis of clinical records fetched from the DDBMS and secured with blockchain. The model is tested with real clinical data, and the results are discussed in detail.
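As a hedged illustration of the blockchain idea used here to secure records (a minimal sketch, not the chapter's actual implementation): each record is chained to its predecessor by a hash, so tampering with any stored record invalidates every later hash. The record fields below are illustrative.

```python
import hashlib
import json
import time

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a clinical record together with the previous block's hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class RecordChain:
    """Append-only chain of clinical records; altering any stored record
    breaks the hash of every block that follows it."""
    def __init__(self):
        self.blocks = []
        self.last_hash = "0" * 64  # genesis value

    def append(self, record: dict) -> str:
        h = record_hash(record, self.last_hash)
        self.blocks.append({"record": record, "hash": h, "ts": time.time()})
        self.last_hash = h
        return h

    def verify(self) -> bool:
        prev = "0" * 64
        for b in self.blocks:
            if record_hash(b["record"], prev) != b["hash"]:
                return False
            prev = b["hash"]
        return True

chain = RecordChain()
chain.append({"patient_id": "P-001", "spo2": 97, "pulse": 72})  # illustrative vitals
assert chain.verify()
```

In a distributed deployment, each database node would hold a replica of the chain, so a record served from any node can be checked against the agreed chain head.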
While the concept of the Artificial Intelligence of Things (AIoT) is booming, computation- and/or communication-intensive tasks composed of several sub-tasks are gradually moving from centralized deployment to edge-side deployment. The idea of edge computing also pushes intelligent services closer to the data source. But in practical settings such as dynamic edge computing networks (DECN), fluctuations in the available computing resources of intermediate servers and changes in bandwidth during data transmission make service reliability difficult to guarantee. Together with changes in the amount of data in a service, these three factors render existing reliability evaluation methods inaccurate. To study the effect of distributed service deployment strategies in this setting, this paper proposes a reliability evaluation method (REMR) based on the lower-boundary rule under a time constraint, which assesses how reasonable a service deployment plan is in a DECN. Here, the main concern is time delay, which is affected by three quantitative factors: the time to store and send data packets, the data transmission time, and the time to execute sub-tasks on node devices, the last two of which vary in dynamic scenarios. In the actual calculation, based on the idea of minimal paths, the solution sets that can meet the requirements under the current deployment are found first. The reliability of the service supported by those solution sets is then computed using the principle of inclusion-exclusion, combined with the distribution of available data transmission bandwidth and the distribution of available node computing resources. Besides an illustrative example, NS3 is used together with the Google cluster data set to verify by simulation the calculated reliability of the designed service deployment plan.
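As a rough illustration of the inclusion-exclusion step (a sketch of the general principle, not the paper's REMR implementation): given the minimal path sets that satisfy the delay budget and, for each component, an assumed independent probability of meeting its share of the budget, service reliability is the probability that at least one minimal path survives. The component names and probabilities below are illustrative.

```python
from itertools import combinations

def service_reliability(min_paths, p_ok):
    """P(at least one minimal path meets the delay budget), by
    inclusion-exclusion, assuming independent component successes.
    min_paths: list of sets of component ids; p_ok: id -> probability."""
    total = 0.0
    for k in range(1, len(min_paths) + 1):
        for combo in combinations(min_paths, k):
            union = set().union(*combo)  # components needed by all k paths jointly
            term = 1.0
            for c in union:
                term *= p_ok[c]
            total += (-1) ** (k + 1) * term  # alternate signs per inclusion-exclusion
    return total

# toy example: two minimal paths sharing node A
paths = [{"A", "B"}, {"A", "C"}]
p = {"A": 0.99, "B": 0.9, "C": 0.95}
print(service_reliability(paths, p))  # P(AB) + P(AC) - P(ABC)
```

In the paper's setting, each per-component probability would itself come from the distributions of available bandwidth and computing resources under the time constraint, rather than being a fixed constant as above.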
Digital vaccine passports are one of the main solutions that would allow travel to restart in a post-COVID-19 world. Trust, scalability, and security are all key challenges that must be overcome in implementing a vaccine passport. Initial approaches attempt to solve this problem by using centralised systems with trusted authorities. However, sharing vaccine passport data between different organisations, regions, and countries has become a major challenge. This paper designs a new platform architecture for creating, storing, and verifying digital COVID-19 vaccine certifications. The platform makes use of the InterPlanetary File System (IPFS) to guarantee there is no single point of failure and to allow data to be securely distributed globally. Blockchain and smart contracts are also integrated into the platform to define policies and log access rights to vaccine passport data, while ensuring all actions are audited and verifiably immutable. Our proposed platform realises General Data Protection Regulation (GDPR) requirements in terms of user consent, data encryption, data erasure, and accountability obligations. We assess the scalability and performance of the platform using IPFS and Blockchain test networks.
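As a hedged sketch of the storage path (assuming a local IPFS daemon plus the `ipfshttpclient` and `cryptography` packages; the smart-contract policy and logging step is left abstract, and the certificate fields are illustrative):

```python
import json
import ipfshttpclient
from cryptography.fernet import Fernet

def publish_certificate(cert: dict, key: bytes) -> str:
    """Encrypt a vaccine certificate and add it to IPFS; return the CID."""
    token = Fernet(key).encrypt(json.dumps(cert).encode())
    with ipfshttpclient.connect() as ipfs:  # default /ip4/127.0.0.1/tcp/5001
        cid = ipfs.add_bytes(token)         # content-addressed: CID derives from ciphertext
    return cid

def fetch_certificate(cid: str, key: bytes) -> dict:
    with ipfshttpclient.connect() as ipfs:
        token = ipfs.cat(cid)
    return json.loads(Fernet(key).decrypt(token))

key = Fernet.generate_key()  # per-user symmetric key
cid = publish_certificate({"name": "A. Smith", "vaccine": "X", "dose": 2}, key)
# a real deployment would record `cid` and the access grant via a smart contract
```

One common way to reconcile immutable IPFS storage with GDPR erasure, consistent with the encryption-plus-erasure requirements named above, is to store only ciphertext and destroy the per-user key on an erasure request, rendering the pinned data unreadable.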
Integrated Sensing And Communication (ISAC) forms a symbiosis between the human need for communication and the need for increased productivity by extracting environmental information from the communication network itself. Since multiple sensing modalities already create a perception of the environment, an investigation into the advantages of ISAC compared with such modalities is required. We therefore introduce MaxRay, an ISAC framework that allows communication, sensing, and additional sensing modalities to be simulated jointly. Emphasizing the challenges of creating such sensing networks, we introduce the propagation properties required for sensing and how they are leveraged. To compare the performance of the different sensing techniques, we analyze four metrics commonly used across different fields and evaluate their advantages and disadvantages for sensing. We show that a metric based on prominence is suitable to cover most algorithms. We further highlight the need for clutter removal algorithms, using two standard clutter removal techniques to detect a target in a typical industrial scenario. Overall, we demonstrate a versatile framework for automatically creating labeled datasets to investigate a large variety of tasks.
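As a rough illustration of why prominence works as a detection metric and why clutter removal matters first (a sketch on a synthetic range profile; the data, the moving-average clutter model, and the threshold are stand-ins, not MaxRay outputs or the paper's techniques):

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
profile = rng.rayleigh(0.3, 512)  # synthetic clutter floor over 512 range bins
profile[200] += 4.0               # injected target return at bin 200

# simple clutter removal: subtract a moving-average background estimate
background = np.convolve(profile, np.ones(31) / 31, mode="same")
residual = np.clip(profile - background, 0, None)

# prominence measures how far a peak rises above its surroundings,
# separating genuine returns from local clutter fluctuations
peaks, props = find_peaks(residual, prominence=1.0)
print(peaks, props["prominences"])  # expected: a detection near bin 200
```

Unlike a fixed amplitude threshold, a prominence criterion adapts to the local clutter level, which is one plausible reason it covers a wider range of detection algorithms.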
Cyber-physical systems (CPS) have complex lifecycles involving multiple stakeholders, and the supply-chain transparency of both hardware and software components is opaque at best. This raises concerns for stakeholders who may not trust that what they receive is what was requested. There is an opportunity to build a cyberphysical titling process offering universal traceability and the ability to differentiate systems based on provenance. Today, RFID tags and barcodes address some of these needs, though they are easily manipulated because they are not linked to an object's or system's intrinsic characteristics. We propose cyberphysical sequencing as a low-cost, lightweight, and pervasive means of adding track-and-trace capabilities to any asset, tying a system's physical identity to a unique and invariant digital identifier. CPS sequencing offers benefits similar to those of Digital Twins for identifying and managing the provenance and identity of an asset throughout its life, with far fewer computational and other resources.
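As a hedged sketch of the core idea of binding identity to intrinsic characteristics (the measured properties below are illustrative, and a real system would need a fuzzy extractor or similar to tolerate measurement noise across readings):

```python
import hashlib
import json

def cyberphysical_id(intrinsic: dict) -> str:
    """Derive an invariant digital identifier from a system's intrinsic
    characteristics. Unlike an RFID tag or barcode, this identifier cannot
    be swapped or relabeled independently of the asset itself."""
    canonical = json.dumps(intrinsic, sort_keys=True)  # canonical encoding
    return hashlib.sha256(canonical.encode()).hexdigest()

# illustrative intrinsic measurements, not a real device fingerprint
asset = {"cpu_serial": "3A91-77F0", "flash_id": "0xB2F4", "clock_skew_ppm": 12.7}
print(cyberphysical_id(asset))
```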
Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. It is now time to reconsider the relationship between ML and systems, and to let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which falls into two major categories: ML-based modelling, which involves predicting performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies according to their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a vision of future opportunities and potential directions, and we envision that applying ML to computer architecture and systems will thrive in the community.
Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization in edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.
Deep learning has been successfully applied to solve various complex problems ranging from big data analytics to computer vision and human-level control. Deep learning advances, however, have also been employed to create software that threatens privacy, democracy, and national security. One such deep learning-powered application to emerge recently is the "deepfake". Deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. Technologies that can automatically detect and assess the integrity of digital visual media are therefore indispensable. This paper presents a survey of algorithms used to create deepfakes and, more importantly, of the methods proposed in the literature to date to detect them. We present extensive discussions of the challenges, research trends, and directions related to deepfake technologies. By reviewing the background of deepfakes and state-of-the-art deepfake detection methods, this study provides a comprehensive overview of deepfake techniques and facilitates the development of new and more robust methods for dealing with increasingly challenging deepfakes.
Smart services are an important element of smart cities and Internet of Things (IoT) ecosystems, where the intelligence behind the services is obtained and improved through sensory data. Providing a large amount of labeled training data is not always feasible; therefore, we need to consider alternative approaches that incorporate unlabeled data as well. In recent years, deep reinforcement learning (DRL) has achieved great success in several application domains. It is an applicable method for IoT and smart city scenarios where auto-generated data can be partially labeled by users' feedback for training purposes. In this paper, we propose a semi-supervised deep reinforcement learning model that fits smart city applications, as it consumes both labeled and unlabeled data to improve the performance and accuracy of the learning agent. The model utilizes Variational Autoencoders (VAEs) as the inference engine for generalizing optimal policies. To the best of our knowledge, the proposed model is the first to extend deep reinforcement learning to the semi-supervised paradigm. As a case study of smart city applications, we focus on smart buildings and apply the proposed model to the problem of indoor localization based on BLE signal strength. Indoor localization is a core component of smart city services, since people spend significant time in indoor environments. Our model learns the best action policies that lead to a close estimation of the target locations, with an improvement of 23% in terms of distance to the target and at least 67% more received rewards compared to the supervised DRL model.
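As a hedged sketch of the semi-supervised split (a minimal PyTorch version of the general idea, not the paper's architecture): a VAE learns a latent state from all RSSI scans, labeled and unlabeled alike, while a small Q-network is trained only on the reward-bearing labeled interactions. The dimensions, data, and one-step reward scheme below are illustrative stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

RSSI_DIM, LATENT, N_ACTIONS = 10, 4, 4  # e.g. 10 beacons, 4 movement actions

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(RSSI_DIM, 32)
        self.mu, self.logvar = nn.Linear(32, LATENT), nn.Linear(32, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(),
                                 nn.Linear(32, RSSI_DIM))
    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

vae, qnet = VAE(), nn.Linear(LATENT, N_ACTIONS)
opt = torch.optim.Adam(list(vae.parameters()) + list(qnet.parameters()), 1e-3)

def vae_loss(x):
    recon, mu, logvar = vae(x)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return F.mse_loss(recon, x) + 1e-3 * kl

unlabeled = torch.randn(256, RSSI_DIM)           # stand-in RSSI scans
labeled = torch.randn(32, RSSI_DIM)              # scans with feedback rewards
rewards = torch.randn(32)                        # stand-in user-feedback rewards
actions = torch.randint(0, N_ACTIONS, (32,))

for _ in range(100):
    loss = vae_loss(unlabeled)                   # unsupervised representation learning
    with torch.no_grad():
        _, mu, _ = vae(labeled)                  # latent state for the agent
    q = qnet(mu).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = loss + F.mse_loss(q, rewards)         # one-step supervised RL signal
    opt.zero_grad(); loss.backward(); opt.step()
```

The key point this sketch tries to capture is that the plentiful unlabeled scans shape the latent space, so the scarce labeled transitions go further when fitting the policy.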
Within the rapidly developing Internet of Things (IoT), numerous and diverse physical devices, Edge devices, Cloud infrastructure, and their quality-of-service (QoS) requirements need to be represented within a unified specification in order to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneities among different configuration knowledge representation models limit the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model for representing IoT resource configuration knowledge artifacts, together with IoT-CANE (Context-Aware recommendatioN systEm), which facilitates incremental knowledge acquisition and declarative, context-driven knowledge recommendation.
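To make the idea of a unified configuration-knowledge model concrete, here is a minimal, hypothetical sketch; the field names, tiers, and example values are illustrative and are not the IoT-CANE schema:

```python
from dataclasses import dataclass, field, asdict
from typing import Dict, List, Optional

@dataclass
class QoS:
    max_latency_ms: int
    min_bandwidth_mbps: float
    availability: float  # e.g. 0.999

@dataclass
class Resource:
    resource_id: str
    tier: str  # "device", "edge", or "cloud"
    capabilities: Dict[str, str] = field(default_factory=dict)
    qos: Optional[QoS] = None

@dataclass
class ConfigArtifact:
    """One artifact links an application context to the resources it needs,
    so acquisition, discovery, and recommendation can share a single model."""
    app: str
    context: Dict[str, str]  # e.g. deployment location or usage scenario
    resources: List[Resource]

artifact = ConfigArtifact(
    app="cold-chain-monitor",
    context={"location": "warehouse-7"},
    resources=[Resource("temp-sensor-01", "device", {"unit": "C"}),
               Resource("edge-gw-02", "edge", qos=QoS(50, 10.0, 0.999))],
)
print(asdict(artifact))  # serialisable view for discovery and recommendation
```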
There is a need for systems that dynamically interact with ageing populations to gather information, monitor health conditions, and provide support, especially after hospital discharge or in at-home settings. Digital health has delivered several smart devices, bundled with telemedicine systems, smartphones, and other digital services. While such solutions offer personalised data and suggestions, the truly disruptive step comes from interaction within a new digital ecosystem, represented by chatbots. Chatbots will play a leading role by embodying the function of a virtual assistant and bridging the gap between patients and clinicians. Powered by AI and machine learning algorithms, chatbots are forecast to save healthcare costs when used in place of a human, or to assist clinicians as a preliminary step in assessing a condition and providing self-care recommendations. This paper describes the integration of chatbots into telemedicine systems intended for elderly patients after hospital discharge. The paper discusses possible ways to utilise chatbots to assist healthcare providers and support patients in managing their conditions.