Securing necessary resources for edge computing processes via effective resource trading has become a critical technique for supporting computation-intensive mobile applications. Conventional on-site spot trading can facilitate this paradigm with proper incentives; however, it incurs excessive decision-making latency and energy consumption, and thus leads to underutilization of dynamic resources. Motivated by this, a hybrid market unifying futures and spot trading is proposed to facilitate resource trading between an edge server (seller) and multiple smart devices (buyers): some buyers are encouraged to sign a forward contract with the seller in advance, while the remaining buyers compete for the available resources through spot trading. Specifically, overbooking is adopted to exploit dynamic resource demands for substantial utilization and profit gains. By integrating overbooking into the futures market, mutually beneficial and risk-tolerable forward contracts with an appropriate overbooking rate can be reached by analyzing historical statistics of future resource demand and communication quality, and are determined through an alternating-optimization-based negotiation scheme. In addition, the spot trading problem is studied under both uniform and differential pricing rules, for which two bilateral negotiation schemes are proposed that address non-convex optimization and knapsack problems. Experimental results demonstrate that the proposed mechanism yields mutually beneficial utilities for all players, while outperforming baseline methods on critical indicators such as decision-making latency and resource usage.
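As an illustration of the overbooking idea in the hybrid market above, the following Python sketch estimates a seller's expected profit when more forward contracts are signed than capacity can guarantee. The show-up probability, price, and penalty values are assumptions chosen for illustration, not parameters from the paper's model.

```python
import numpy as np

# Illustrative sketch (not the paper's exact model): expected seller profit
# under overbooking, assuming each forward-contract buyer independently
# claims its resources with probability p_show, and unmet demand incurs a
# contract-violation penalty.
def expected_profit(capacity, contracts, p_show, price, penalty, n_trials=100_000):
    rng = np.random.default_rng(0)
    shows = rng.binomial(contracts, p_show, size=n_trials)  # buyers who show up
    served = np.minimum(shows, capacity)                    # capacity-limited service
    overflow = shows - served                               # violated contracts
    return np.mean(price * served - penalty * overflow)

# Sweep the overbooking rate: contracts = capacity * (1 + rate).
capacity = 100
for rate in (0.0, 0.1, 0.2, 0.3):
    contracts = int(capacity * (1 + rate))
    print(rate, expected_profit(capacity, contracts, p_show=0.8, price=1.0, penalty=2.0))
```

Sweeping the overbooking rate this way mirrors the trade-off the forward-contract negotiation must balance: additional contracts raise expected revenue until default penalties begin to dominate.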
The proliferation of Internet-of-Things (IoT) devices and cloud-computing applications over siloed data centers is motivating renewed interest in the collaborative training of a shared model by multiple individual clients via federated learning (FL). To improve the communication efficiency of FL implementations in wireless systems, recent works have proposed compression and dimension reduction mechanisms, along with digital and analog transmission schemes that account for channel noise, fading, and interference. The prior art has mainly focused on star topologies consisting of distributed clients and a central server. In contrast, this paper studies FL over wireless device-to-device (D2D) networks by providing theoretical insights into the performance of digital and analog implementations of decentralized stochastic gradient descent (DSGD). First, we introduce generic digital and analog wireless implementations of communication-efficient DSGD algorithms, leveraging random linear coding (RLC) for compression and over-the-air computation (AirComp) for simultaneous analog transmissions. Next, under the assumptions of convexity and connectivity, we provide convergence bounds for both implementations. The results demonstrate the dependence of the optimality gap on the connectivity and on the signal-to-noise ratio (SNR) levels in the network. The analysis is corroborated by experiments on an image-classification task.
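Below is a minimal sketch of the decentralized SGD update that such schemes build on, with a random linear projection standing in for RLC compression and a damped consensus step assumed for stability; the local quadratic losses and mixing matrix are illustrative assumptions, and channel effects such as AirComp noise are omitted.

```python
import numpy as np

# Sketch of DSGD over a connected graph. Nodes hold local quadratic losses
# f_i(x) = 0.5 * ||x - t_i||^2 (an assumption for illustration); W is a
# doubly stochastic mixing matrix; messages are passed through a fresh
# random linear projection each round as an RLC-style compressor.
def dsgd(W, targets, dim, steps=300, lr=0.1, gamma=0.3, k=None, seed=0):
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = np.zeros((n, dim))
    L = W - np.eye(n)                     # gossip (consensus) operator
    for _ in range(steps):
        msgs = x
        if k is not None:                 # compress: encode/decode via k random directions
            A = rng.standard_normal((k, dim)) / np.sqrt(k)
            msgs = (x @ A.T) @ A          # unbiased but lossy for k < dim
        grad = x - targets                # gradients of the local losses
        x = x + gamma * (L @ msgs) - lr * grad
    return x

# Ring of 4 nodes; the node average converges to the mean of local targets (1.5).
W = np.array([[.5, .25, 0, .25], [.25, .5, .25, 0],
              [0, .25, .5, .25], [.25, 0, .25, .5]])
targets = np.arange(4)[:, None] * np.ones((4, 3))
print(dsgd(W, targets, dim=3, k=2).mean(axis=0))
```

Because the mixing matrix is doubly stochastic, the network-wide average follows plain gradient descent on the average loss; the compression level k and the graph connectivity then govern how tightly individual nodes cluster around it, which is exactly the dependence the convergence bounds capture.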
Past research and practice have demonstrated that dynamic rerouting frameworks are effective in mitigating urban traffic congestion and thereby improving urban travel efficiency. It has been suggested that dynamic rerouting could be facilitated by emerging technologies such as fog computing, which offers low-latency capabilities and information exchange between vehicles and roadway infrastructure. To this end, this study proposes a two-stage model that combines GAQ (Graph Attention Network - Deep Q Learning) and EBkSP (Entropy-Based k-Shortest Path) on a fog-cloud architecture to reroute vehicles in a dynamic urban environment and thereby improve travel efficiency in terms of travel speed. First, GAQ analyzes the traffic conditions on each road in each fog area and assigns each road an index based on attention over information from both the local and neighboring areas. Second, EBkSP assigns a route to each vehicle based on vehicle priority and route popularity. A case study experiment is carried out to investigate the efficacy of the proposed model. At the model training stage, different methods are used to establish vehicle priorities, and their impact on the results is assessed. The proposed model is also tested under various scenarios with different ratios of rerouting to background (non-rerouting) vehicles. The results demonstrate that vehicle rerouting using the proposed model attains higher speeds and reduces the likelihood of severe congestion. This suggests that the proposed model can be deployed by urban transportation agencies for dynamic rerouting and, ultimately, to reduce urban traffic congestion.
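The second stage can be illustrated with a simplified sketch of popularity-aware k-shortest-path assignment; the exact EBkSP entropy scoring is not reproduced here, and edge-usage counts stand in for route popularity.

```python
import networkx as nx
from collections import Counter
from itertools import islice

# Simplified stand-in for EBkSP-style assignment: among the k shortest paths,
# pick the one whose edges are least used by routes already assigned to
# higher-priority vehicles, spreading traffic across alternatives.
def assign_route(G, src, dst, k, edge_usage):
    paths = list(islice(nx.shortest_simple_paths(G, src, dst, weight="w"), k))
    def popularity(path):
        return sum(edge_usage[e] for e in zip(path, path[1:]))
    best = min(paths, key=popularity)
    for e in zip(best, best[1:]):
        edge_usage[e] += 1  # record the choice for later, lower-priority vehicles
    return best

G = nx.DiGraph()
G.add_weighted_edges_from([(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.2), (2, 3, 1.2)], weight="w")
usage = Counter()
print(assign_route(G, 0, 3, 2, usage))  # first vehicle takes the shortest path 0-1-3
print(assign_route(G, 0, 3, 2, usage))  # second vehicle is diverted onto 0-2-3
```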
In recent years, blockchain networks have attracted significant attention in many research areas beyond cryptocurrency, one of them being the Edge of Things (EoT), enabled by the combination of edge computing and the Internet of Things (IoT). In this context, blockchain networks, with unique features such as decentralization, immutability, and traceability, have the potential to reshape and transform conventional EoT systems with higher security levels. In particular, the convergence of blockchain and EoT leads to a new paradigm, called BEoT, which has been regarded as a promising enabler for future services and applications. In this paper, we present a state-of-the-art review of recent developments in BEoT technology and explore its opportunities in many application domains. We begin our survey with an updated introduction to blockchain and EoT along with their recent advances. Subsequently, we discuss the use of BEoT in a wide range of industrial applications, from smart transportation, smart cities, and smart healthcare to smart homes and the smart grid. Security challenges in the BEoT paradigm are also discussed and analyzed, covering key services such as access authentication, data privacy preservation, attack detection, and trust management. Finally, key research challenges and future directions are highlighted to stimulate further research in this promising area.
The Internet of Things (IoT) is a rapidly growing industry currently being integrated into both consumer and industrial environments on a wide scale. While the technology is available and deployment has a low barrier to entry, proper security frameworks are still in their infancy and are being developed to fit varied implementations and device architectures. Moreover, edge-centric mechanisms are critical for offering real-time security in smart connected applications with minimal or negligible overhead. In this paper, we propose a novel approach to data security that uses multiple device shadows (also known as digital twins) for a single physical object. These twins separate data among different virtual objects based on tags assigned on the fly, and are used to limit access to different data points to authorized users/applications only. The novelty of the proposed architecture resides in the attachment of dynamic tags to the key-value pairs reported by physical devices in the system. We further examine the advantages of tagging data in a digital twin system and the performance impact of the proposed data separation scheme. The proposed solution is deployed at the edge, supports low-latency and real-time security mechanisms with minimal overhead, and is lightweight, as reflected by the captured performance metrics.
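A minimal sketch of this tag-based separation idea is given below; the tag rules, consumer names, and authorization table are hypothetical stand-ins for illustration, not the proposed system's API.

```python
# Illustrative sketch: a physical device reports key-value pairs; tags attached
# on the fly route each pair into a per-tag shadow (virtual twin), and a
# consumer can only read the shadows its authorization set permits.
TAG_RULES = {        # key prefix -> tag (assumed mapping; assigned dynamically in practice)
    "gps":  "location",
    "hr":   "medical",
    "batt": "device-health",
}

AUTHORIZED = {       # consumer -> tags it may read (assumed policy table)
    "fitness-app": {"medical"},
    "fleet-dashboard": {"location", "device-health"},
}

def ingest(report: dict, shadows: dict) -> None:
    """Split one device report across per-tag shadows."""
    for key, value in report.items():
        tag = next((t for p, t in TAG_RULES.items() if key.startswith(p)), "untagged")
        shadows.setdefault(tag, {})[key] = value

def read(consumer: str, shadows: dict) -> dict:
    """Return only the shadows this consumer is authorized for."""
    allowed = AUTHORIZED.get(consumer, set())
    return {t: kv for t, kv in shadows.items() if t in allowed}

shadows = {}
ingest({"gps_lat": 45.5, "hr_bpm": 72, "batt_pct": 88}, shadows)
print(read("fitness-app", shadows))      # sees only the "medical" shadow
print(read("fleet-dashboard", shadows))  # sees location and device health
```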
Performance contracts used in servitized business models enable consideration of overall life-cycle costs rather than just production costs. However, practical implementation of performance contracts has been limited due to challenges with performance evaluation, accountability, and financial concepts. As a solution, this paper proposes connecting the digital building twin with blockchain-based smart contracts to execute performance-based digital payments. First, we conceptualize a technical architecture that connects the blockchain to digital building twins. The digital building twin stores and evaluates performance data in real time, while the blockchain ensures transparency and trusted execution of automatic performance evaluation and rewards through smart contracts. Next, we demonstrate the feasibility of both the concept and the technical architecture by integrating the Ethereum blockchain with digital building models and sensors via the Siemens building twin platform. The resulting prototype is the first full-stack implementation of a performance-based smart contract in the built environment.
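As a rough illustration of the payment logic such a smart contract could encode, the following off-chain Python sketch scales a payment with the deviation between the twin's measured performance and the contracted target. The metric, thresholds, and penalty curve are assumptions for illustration and are not taken from the prototype or the Siemens platform.

```python
from dataclasses import dataclass

# Off-chain sketch of a performance-based settlement rule; in the architecture
# described above, equivalent logic would run in an Ethereum smart contract.
@dataclass
class PerformanceTerm:
    metric: str          # e.g. indoor temperature deviation (assumed example)
    target: float        # contracted target value
    tolerance: float     # deviation allowed before payment is reduced
    full_payment: float  # payment due when the target is met

def settle(term: PerformanceTerm, measured: float) -> float:
    """Scale the payment linearly with how far the measurement misses the target."""
    deviation = abs(measured - term.target)
    if deviation <= term.tolerance:
        return term.full_payment
    # Linear decay to zero at twice the tolerance (an assumed penalty curve).
    overshoot = min(deviation - term.tolerance, term.tolerance)
    return term.full_payment * (1 - overshoot / term.tolerance)

term = PerformanceTerm("indoor_temp_dev_C", target=0.0, tolerance=1.0, full_payment=10**18)
print(settle(term, measured=0.6))  # within tolerance: full payment
print(settle(term, measured=1.5))  # partial payment
print(settle(term, measured=3.0))  # payment reduced to zero
```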
Network slicing is a promising technology that allows mobile network operators to efficiently serve various emerging use cases in 5G. It is challenging to optimize the utilization of network infrastructures while guaranteeing the performance of network slices according to service level agreements (SLAs). To solve this problem, we propose SafeSlicing, which introduces a new constraint-aware deep reinforcement learning (CaDRL) algorithm to learn the optimal resource orchestration policy in two steps, i.e., offline training in a simulated environment and online learning on the real network system. To optimize the resource orchestration, we incorporate constraints on the statistical performance of slices into the reward function using Lagrangian multipliers, and solve the Lagrangian-relaxed problem via a policy network. To satisfy the constraints on system capacity, we design a constraint network that maps the latent actions generated by the policy network to orchestration actions such that the total resources allocated to network slices do not exceed the system capacity. We prototype SafeSlicing on an end-to-end testbed built with OpenAirInterface LTE, OpenDayLight-based SDN, and a CUDA GPU computing platform. The experimental results show that SafeSlicing reduces resource usage by more than 20% compared with other solutions while meeting the SLAs of the network slices.
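The Lagrangian construction described above is a generic constrained-RL device; below is a minimal sketch (not SafeSlicing's exact formulation) of the multiplier-augmented reward, the dual update, and a capacity projection in the spirit of the constraint network. All numeric values are illustrative assumptions.

```python
import numpy as np

# Reward shaping: penalize constraint violations, weighted by the multipliers.
def lagrangian_reward(reward, constraint_values, limits, lam):
    violations = np.maximum(constraint_values - limits, 0.0)
    return reward - np.dot(lam, violations)

# Dual ascent on the multipliers (projected to stay non-negative).
def update_multipliers(lam, constraint_values, limits, lr=0.01):
    return np.maximum(lam + lr * (constraint_values - limits), 0.0)

# Constraint-network-style projection: rescale latent allocations so the
# total never exceeds the system capacity (a simple stand-in mapping).
def project_to_capacity(latent_action, capacity):
    a = np.maximum(latent_action, 0.0)
    total = a.sum()
    return a if total <= capacity else a * (capacity / total)

lam = np.zeros(1)
for step in range(3):
    action = project_to_capacity(np.array([4.0, 5.0, 6.0]), capacity=10.0)
    slice_latency = np.array([12.0])  # measured slice performance statistic
    r = lagrangian_reward(1.0, slice_latency, limits=np.array([10.0]), lam=lam)
    lam = update_multipliers(lam, slice_latency, limits=np.array([10.0]))
    print(step, action.sum(), r, lam)
```

The projection guarantees capacity constraints by construction, while the statistical SLA constraints are only enforced in expectation through the learned multipliers, which mirrors the two constraint-handling mechanisms the abstract distinguishes.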
In light of the emergence of deep reinforcement learning (DRL) in recommender systems research and several fruitful results in recent years, this survey aims to provide a timely and comprehensive overview of the recent trends of deep reinforcement learning in recommender systems. We start with the motivation of applying DRL in recommender systems. Then, we provide a taxonomy of current DRL-based recommender systems and a summary of existing methods. We discuss emerging topics and open issues, and provide our perspective on advancing the domain. This survey serves as introductory material for readers from academia and industry into the topic and identifies notable opportunities for further research.
Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, seeking explanations that address trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for that style of explanation. We believe this set of explanation types will help future system designers generate and prioritize requirements and further help produce explanations that are better aligned with user and situational needs.
Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization in edge computing systems. Open issues in analyzing and designing an edge computing system are also examined.
In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate experimentation with and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated into the existing deep learning ecosystem to provide a tunable balance between performance, power consumption, and programmability. In this paper, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, including the supported applications, architectural choices, design space exploration methods, and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at a comprehensive and in-depth evaluation of CNN-to-FPGA toolflows.