An Intelligent IoT Environment (iIoTe) comprises heterogeneous devices that can collaboratively execute semi-autonomous IoT applications, examples of which include highly automated manufacturing cells or autonomously interacting harvesting machines. Energy efficiency is key in such edge environments, since they are often based on an infrastructure of wireless and battery-powered devices, e.g., e-tractors, drones, Automated Guided Vehicles (AGVs) and robots. The total energy consumption draws contributions from the multiple iIoTe technologies that enable edge computing and communication, distributed learning, as well as distributed ledgers and smart contracts. This paper provides a state-of-the-art overview of these technologies and illustrates their functionality and performance, with special attention to the tradeoff among resources, latency, privacy and energy consumption. Finally, the paper provides a vision for integrating these enabling technologies in energy-efficient iIoTe and a roadmap to address the open research challenges.
Multi-access edge computing (MEC) is emerging as an essential part of the upcoming Fifth Generation (5G) and future beyond-5G mobile communication systems. It brings computation power to the edge of cellular networks, close to the energy-constrained user devices, and thereby allows users to offload tasks to the edge computing nodes for low-latency computation with low battery consumption. However, due to the high dynamics of user demand and server load, task congestion may occur at the edge nodes, leading to long queuing delays. Such delays can significantly degrade the quality of experience (QoE) of latency-sensitive applications, raise the risk of service outage, and cannot be efficiently resolved by conventional queue management solutions. In this article, we study a latency-outage-critical scenario in which users intend to reduce the risk of latency outage. We propose an impatience-based queuing strategy for such users to intelligently choose between MEC offloading and local computation, allowing them to rationally renege from the task queue. Numerical simulations demonstrate that the proposed approach is efficient for generic service models when perfect queue information is available. For the practical case where users have no perfect queue information, we design an optimal online learning strategy to enable its application in Poisson service scenarios.
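At the core of such an impatience-based strategy is a reneging decision: a user compares the expected sojourn time in the MEC queue against its latency deadline and its local execution time. The sketch below illustrates that decision with a simplified deterministic rule; the function name, parameters, and the rule itself are our own illustration under these assumptions, not the exact strategy proposed in the article, which accounts for stochastic service and outage probability.

```python
def renege_decision(position_in_queue, mean_service_time, local_exec_time, deadline):
    """Stay in the MEC queue only if the expected queuing + service time
    still beats both the latency deadline and local computation."""
    # Expected wait caused by the tasks ahead of this one in the edge queue.
    expected_mec_latency = position_in_queue * mean_service_time
    if expected_mec_latency > deadline or expected_mec_latency > local_exec_time:
        return "renege_to_local"
    return "stay_in_queue"

# Example: 5 tasks ahead, 20 ms mean service time, 180 ms local run, 120 ms deadline.
print(renege_decision(5, 0.020, 0.180, 0.120))  # -> stay_in_queue (100 ms expected)
```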
Energy harvesting (EH) IoT devices that operate intermittently without batteries, coupled with advances in deep neural networks (DNNs), have opened up new opportunities for enabling sustainable smart applications. Nevertheless, implementing these computation- and memory-intensive intelligent algorithms on EH devices is extremely difficult due to the limited resources and an intermittent power supply that causes frequent failures. To address these challenges, this paper proposes a methodology that enables super-fast deep learning with low-energy accelerators for tiny energy harvesting devices. We first propose RAD, a resource-aware structured DNN training framework that employs block-circulant matrices with ADMM to achieve high compression and model quantization, leveraging the advantages of various vector-operation accelerators. We then propose ACE, a DNN implementation method that employs low-energy accelerators to achieve maximum performance with minimal energy consumption. Finally, we design FLEX, the system support for intermittent computation in energy harvesting situations. Experimental results on three different DNN models demonstrate that RAD, ACE, and FLEX enable super-fast and correct inference on energy harvesting devices, with up to a 4.26x runtime reduction and up to a 7.7x energy reduction at higher accuracy than the state of the art.
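To make the block-circulant idea behind RAD concrete, the sketch below shows why this structure suits low-energy vector accelerators: each circulant block is stored as a single vector and applied with FFTs instead of a dense matrix multiply. The function name and block layout are our own illustration under these assumptions, not code from the paper.

```python
import numpy as np
from scipy.linalg import circulant

def block_circulant_matvec(first_cols, x, block_size):
    """Multiply a block-circulant weight matrix by x using FFTs.

    first_cols[p][q] holds the first column of the circulant block in
    block-row p, block-column q, so each block needs O(n) storage
    instead of O(n^2)."""
    x_blocks = x.reshape(-1, block_size)
    out = np.zeros((len(first_cols), block_size))
    for p, row in enumerate(first_cols):
        for q, c in enumerate(row):
            # Circulant matvec == circular convolution, O(n log n) via FFT.
            out[p] += np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x_blocks[q])))
    return out.reshape(-1)

# Sanity check against the dense circulant matrix for a single block.
rng = np.random.default_rng(0)
c = rng.standard_normal(4)
x = rng.standard_normal(4)
assert np.allclose(circulant(c) @ x, block_circulant_matvec([[c]], x, block_size=4))
```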
The advent of Bitcoin, and consequently Blockchain, has ushered in a new era of decentralization. Blockchain enables mutually distrusting entities to work collaboratively toward a common objective. However, current Blockchain technologies lack scalability, which limits their use in Internet of Things (IoT) applications. Many devices on the Internet have the computational and communication capabilities to facilitate decision-making, and these devices are expected to soon form a network of some 50 billion nodes. Furthermore, new IoT business models such as Sensor-as-a-Service (SaaS) require a robust Trust and Reputation System (TRS). In this paper, we introduce an innovative distributed ledger combining Tangle and Blockchain as a TRS framework for IoT, which provides the maintainability of the former and the scalability of the latter. The proposed ledger can handle large numbers of IoT device transactions and facilitates low-power nodes joining and contributing. Employing a distributed ledger mitigates many threats, such as whitewashing attacks. By combining payment and rating protocols, the proposed approach also provides cleaner data to the upper-layer reputation algorithm.
Preserving energy in households and office buildings is a significant challenge, mainly due to the recent shortage of energy resources, the rise of current environmental problems, and the global underutilization of energy-saving technologies. Moreover, in some regions, COVID-19 social-distancing measures led to a temporary transfer of energy demand from commercial and urban centers to residential areas, causing increased use and higher charges and, in turn, creating economic impacts on customers. Therefore, the marketplace could benefit from developing an Internet of Things (IoT) ecosystem that monitors energy consumption habits and promptly recommends actions to facilitate energy efficiency. This paper presents the full integration of a proposed energy efficiency framework into the Home-Assistant platform using an edge-based architecture. End-users can visualize their consumption patterns as well as ambient environmental data using the Home-Assistant user interface. More notably, explainable energy-saving recommendations are delivered to end-users as notifications via the mobile application to facilitate habit change. To the best of the authors' knowledge, this is the first attempt to develop and implement an energy-saving recommender system on edge devices, which ensures better privacy preservation, since data are processed locally on the edge without the need to transmit them to remote servers, as is the case with cloudlet platforms.
The digital transformation faces tremendous security challenges. In particular, the growing number of cyber-attacks targeting Internet of Things (IoT) systems underscores the need for reliable detection of malicious network activity. This paper presents a comparative analysis of supervised, unsupervised and reinforcement learning techniques on nine malware captures of the IoT-23 dataset, considering both binary and multi-class classification scenarios. The developed models consisted of Support Vector Machine (SVM), Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Isolation Forest (iForest), Local Outlier Factor (LOF) and a Deep Reinforcement Learning (DRL) model based on a Double Deep Q-Network (DDQN), adapted to the intrusion detection context. The best performance was achieved by LightGBM, closely followed by SVM. Nonetheless, iForest displayed good results against unknown attacks, and the DRL model demonstrated the possible benefits of employing this methodology to continuously improve detection. Overall, the obtained results indicate that the analyzed techniques are well suited for IoT intrusion detection.
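The comparison contrasts supervised classifiers, which learn from labeled malicious traffic, with unsupervised anomaly detectors, which can flag previously unseen attacks. The minimal sketch below shows both styles side by side on placeholder features; the synthetic data, feature set, and hyperparameters are our own assumptions, not the paper's pipeline on IoT-23.

```python
# Binary intrusion detection on pre-extracted flow features:
# one supervised model (LightGBM) and one unsupervised model (Isolation Forest).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import IsolationForest
from sklearn.metrics import f1_score
import lightgbm as lgb

rng = np.random.default_rng(42)
X = rng.standard_normal((2000, 10))           # placeholder flow features
y = (X[:, 0] + X[:, 1] > 1).astype(int)       # placeholder labels (1 = malicious)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Supervised: gradient-boosted trees trained on labeled traffic.
gbm = lgb.LGBMClassifier(n_estimators=200, random_state=42)
gbm.fit(X_tr, y_tr)
print("LightGBM F1:", f1_score(y_te, gbm.predict(X_te)))

# Unsupervised: Isolation Forest fitted on benign traffic only, flags outliers.
iforest = IsolationForest(random_state=42).fit(X_tr[y_tr == 0])
pred = (iforest.predict(X_te) == -1).astype(int)   # -1 means anomaly
print("iForest F1:", f1_score(y_te, pred))
```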
Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013~[1], it has attracted significant attention from researchers in multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses) up to the year 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in the prestigious sources of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminologies for non-experts in this domain. Finally, the article discusses the challenges and future outlook of this direction based on the literature reviewed herein and in [2].
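As a reminder of how simple such perturbations can be, the sketch below implements the classic fast gradient sign method (FGSM), an early first-generation attack; it is included only as an illustration of the attack family this review covers, not as a method from the reviewed works, and the model and inputs are assumed rather than specified here.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps):
    """Perturb x by eps in the direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()  # imperceptible, bounded perturbation
        x_adv = x_adv.clamp(0.0, 1.0)            # keep a valid image range
    return x_adv.detach()

# Usage (assuming `model` is any image classifier and `x` a batch in [0, 1]):
# x_adv = fgsm_attack(model, x, label, eps=8/255)
```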
Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis based on artificial intelligence, in locations close to where the data is captured. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although it emerged only recently, around 2011, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then systematically classify the state of the solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss the important open issues and possible theoretical and technical solutions.
The demand for artificial intelligence has grown significantly over the last decade, and this growth has been fueled by advances in machine learning techniques and the ability to leverage hardware acceleration. However, to increase the quality of predictions and render machine learning solutions feasible for more complex applications, a substantial amount of training data is required. Although small machine learning models can be trained with modest amounts of data, the input for training larger models such as neural networks grows exponentially with the number of parameters. Since the demand for processing training data has outpaced the increase in computational power of computing machinery, there is a need to distribute the machine learning workload across multiple machines, turning a centralized system into a distributed one. These distributed systems present new challenges, first and foremost the efficient parallelization of the training process and the creation of a coherent model. This article provides an extensive overview of the current state of the art in the field by outlining the challenges and opportunities of distributed machine learning over conventional (centralized) machine learning, discussing the techniques used for distributed machine learning, and providing an overview of the systems that are available.
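A central technique for parallelizing training while keeping the model coherent is synchronous data parallelism: every worker computes gradients on its own data shard and an averaged gradient updates the shared parameters. The sketch below simulates this pattern on a toy linear regression; the shard layout and learning rate are illustrative assumptions, and a real system would replace the inner loop with an all-reduce or a parameter server.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.standard_normal((4000, 2))
y = X @ w_true + 0.01 * rng.standard_normal(4000)

num_workers, lr = 4, 0.1
shards = np.array_split(np.arange(len(X)), num_workers)  # one data shard per worker
w = np.zeros(2)                                           # shared (global) model

for step in range(200):
    grads = []
    for idx in shards:                            # in practice: runs on separate machines
        err = X[idx] @ w - y[idx]
        grads.append(X[idx].T @ err / len(idx))   # local gradient on this shard
    w -= lr * np.mean(grads, axis=0)              # "all-reduce": average and apply

print("learned weights:", w)                      # ~ [2, -1]
```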
In recent years, mobile devices have developed rapidly, gaining stronger computation capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now be run on mobile devices. To take advantage of the resources available on mobile devices and preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and only uploads computation results, instead of the original data, to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers, but also protects users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey provides a clear overview of mobile distributed machine learning and offers guidelines for applying it to real applications.
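The pattern described above, local training on private data followed by uploading only model parameters for server-side aggregation, is captured by the federated-averaging-style sketch below. The device count, local solver, and toy least-squares task are our own assumptions, chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([1.5, -0.5, 3.0])

def local_update(w_global, X_local, y_local, lr=0.05, epochs=5):
    """Simulate on-device training that starts from the current global model."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = X_local.T @ (X_local @ w - y_local) / len(y_local)
        w -= lr * grad
    return w                                   # only the model leaves the device

# Each "device" holds its own private data shard; raw data is never uploaded.
devices = []
for _ in range(10):
    X = rng.standard_normal((200, 3))
    devices.append((X, X @ w_true + 0.05 * rng.standard_normal(200)))

w_global = np.zeros(3)
for round_ in range(30):                       # communication rounds
    uploads = [local_update(w_global, X, y) for X, y in devices]
    w_global = np.mean(uploads, axis=0)        # server-side aggregation

print("global model:", w_global)               # ~ [1.5, -0.5, 3.0]
```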
MLtuner automatically tunes settings for training tunables (such as the learning rate, the momentum, the mini-batch size, and the data staleness bound) that have a significant impact on large-scale machine learning (ML) performance. Traditionally, these tunables are set manually, which is unsurprisingly error-prone and difficult to do without extensive domain knowledge. MLtuner uses efficient snapshotting, branching, and optimization-guided online trial-and-error to find good initial settings as well as to re-tune settings during execution. Experiments show that MLtuner can robustly find and re-tune tunable settings for a variety of ML applications, including image classification (for 3 models and 2 datasets), video classification, and matrix factorization. Compared to state-of-the-art ML auto-tuning approaches, MLtuner is more robust for large problems and over an order of magnitude faster.
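The snapshot-branch-trial loop at the heart of this approach can be illustrated with a toy example: take a cheap snapshot of the training state, branch short trial runs with different candidate settings, and continue from the setting whose trial makes the most progress. The sketch below is our simplified rendering of that idea on a toy quadratic objective, not MLtuner's actual implementation.

```python
import copy
import numpy as np

def sgd_steps(state, lr, steps):
    """Take gradient steps on a toy quadratic loss (stand-in for real training)."""
    w = state["w"].copy()
    for _ in range(steps):
        grad = 2.0 * (w - state["target"])        # gradient of ||w - target||^2
        w = w - lr * grad
    return {"w": w, "target": state["target"]}

def loss(state):
    return float(np.sum((state["w"] - state["target"]) ** 2))

def retune_learning_rate(state, candidates=(0.001, 0.01, 0.1, 0.4, 1.5)):
    """Branch short trial runs from a snapshot and keep the best-performing setting."""
    snapshot = copy.deepcopy(state)               # cheap snapshot of the training state
    trial_losses = {lr: loss(sgd_steps(snapshot, lr, steps=20)) for lr in candidates}
    return min(trial_losses, key=trial_losses.get)

state = {"w": np.zeros(4), "target": np.array([1.0, -2.0, 0.5, 3.0])}
best_lr = retune_learning_rate(state)             # online trial-and-error from the snapshot
state = sgd_steps(state, best_lr, steps=200)      # continue training with the winner
print("chosen lr:", best_lr, "final loss:", loss(state))
```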