Multi-access edge computing (MEC) is emerging as an essential part of the upcoming fifth-generation (5G) and future beyond-5G mobile communication systems. It brings computation power to the edge of cellular networks, close to energy-constrained user devices, thereby allowing users to offload tasks to edge computing nodes for low-latency computation with low battery consumption. However, due to the high dynamics of user demand and server load, task congestion may occur at the edge nodes, leading to long queuing delays. Such delays can significantly degrade the quality of experience (QoE) of latency-sensitive applications and raise the risk of service outage, and they cannot be efficiently resolved by conventional queue management solutions. In this article, we study a latency-outage-critical scenario, in which users aim to reduce the risk of latency outage. We propose an impatience-based queuing strategy that lets such users intelligently choose between MEC offloading and local computation, allowing them to rationally renege from the task queue. Numerical simulations show the proposed approach to be efficient for generic service models when perfect queue information is available. For the practical case where users have no perfect queue information, we design an optimal online learning strategy that enables its application in Poisson service scenarios.
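As a concrete illustration of such a reneging rule, the sketch below (our own minimal example, not the paper's exact policy) has a user estimate the outage risk from the observed queue length under an assumed exponential service model and fall back to local computation once that risk becomes intolerable:

```python
# Minimal sketch of an impatience-based reneging rule; the service model,
# deadline, and tolerance are illustrative assumptions.
import math

def outage_risk(queue_len: int, service_rate: float, deadline: float) -> float:
    """P(waiting time > deadline) when queue_len tasks are ahead and each
    service time is exponential(service_rate): the wait is Erlang-distributed."""
    lam_t = service_rate * deadline
    return sum(math.exp(-lam_t) * lam_t**k / math.factorial(k)
               for k in range(queue_len))

def should_renege(queue_len, service_rate, deadline, local_delay,
                  tolerance=0.05):
    """Renege to local computation if the MEC outage risk exceeds the
    tolerance while local computation can still meet the deadline."""
    return (outage_risk(queue_len, service_rate, deadline) > tolerance
            and local_delay <= deadline)
```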
The rapid development of Industrial Internet of Things (IIoT) technologies has not only enabled new applications but also presented new challenges for reliable communication with limited resources. In this work, we define a novel and deceptively simple problem that can arise in these scenarios: a set of sensors needs to communicate a joint observation. This observation is shared by a random subset of the nodes, which must propagate it to the rest of the network. Coordination is complex, however: since signaling constraints require the use of random access schemes over shared channels, each sensor needs to implicitly coordinate with the other sensors holding the same observation so that at least one of the transmissions gets through without collisions. Unlike existing medium access control schemes, the goal here is not to maximize total goodput, but rather to make sure that the shared message gets through, regardless of the sender. The lack of any signaling, aside from an acknowledgment (or lack thereof) from the rest of the network, makes determining the optimal collective transmission strategy a significant challenge. We analyze this coordination problem theoretically, prove its hardness, and provide low-complexity solutions. While a low-complexity clustering-based approach is shown to provide near-optimal performance in certain special cases, for general scenarios we model each sensor as a multi-armed bandit (MAB) agent and provide a learning-based solution. Numerical results show the effectiveness of this approach in a variety of cases.
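To make the bandit formulation concrete, the following toy sketch (an epsilon-greedy agent of our own, not the paper's algorithm) lets each sensor learn a transmission probability from the binary ACK feedback alone:

```python
# Toy epsilon-greedy bandit: arms are candidate transmit probabilities,
# and the reward is the network-wide ACK (shared message delivered
# collision-free). All constants are illustrative.
import random

class SensorBandit:
    def __init__(self, arms=(0.0, 0.25, 0.5, 0.75, 1.0), eps=0.1):
        self.arms = arms                  # candidate transmit probabilities
        self.eps = eps                    # exploration rate
        self.counts = [0] * len(arms)
        self.values = [0.0] * len(arms)   # running mean ACK rate per arm

    def choose(self) -> int:
        """Explore with probability eps, otherwise exploit the best arm."""
        if random.random() < self.eps:
            return random.randrange(len(self.arms))
        return max(range(len(self.arms)), key=lambda i: self.values[i])

    def update(self, arm: int, ack: bool):
        """Incrementally update the mean ACK reward of the played arm."""
        self.counts[arm] += 1
        self.values[arm] += (float(ack) - self.values[arm]) / self.counts[arm]
```

In each slot, every sensor holding the observation draws an arm, transmits with that probability, and updates its estimate from the common ACK, so coordination can emerge without explicit signaling.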
In this paper, the problem of training federated learning (FL) algorithms over a realistic wireless network is studied. In particular, in the considered model, wireless users execute an FL algorithm by training their local FL models on their own data and transmitting the trained local models to a base station (BS), which aggregates them into a global FL model and sends it back to the users. Since all training parameters are transmitted over wireless links, the quality of the training is affected by wireless factors such as packet errors and the availability of wireless resources. Meanwhile, due to the limited wireless bandwidth, the BS must select an appropriate subset of users to execute the FL algorithm so as to build an accurate global FL model. This joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm. To address this problem, a closed-form expression for the expected convergence rate of the FL algorithm is first derived to quantify the impact of wireless factors on FL. Then, based on this expected convergence rate, the optimal transmit power for each user is derived under a given user selection and uplink resource block (RB) allocation scheme. Finally, the user selection and uplink RB allocation are jointly optimized so as to minimize the FL loss function. Simulation results show that the proposed joint federated learning and communication framework can reduce the FL loss function value by up to 10% and 16%, respectively, compared to: 1) an optimal user selection algorithm with random resource allocation and 2) a standard FL algorithm with random user selection and resource allocation.
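The core loop can be pictured with the hedged sketch below (our simplification: a greedy selection by packet error rate stands in for the paper's derived optimum):

```python
# FedAvg round over a lossy uplink: a local model reaches the BS only if
# its packet is received correctly. Selection rule and error model are
# illustrative placeholders, not the paper's optimized scheme.
import numpy as np

def federated_round(global_w, local_updates, packet_error_rates, num_selected):
    """Select the users with the lowest packet error rates, then average
    the local models whose uplink transmissions succeed."""
    order = np.argsort(packet_error_rates)           # favour reliable links
    selected = order[:num_selected]
    received = [local_updates[i] for i in selected
                if np.random.rand() > packet_error_rates[i]]
    if not received:                                  # every packet was lost
        return global_w                               # keep the old model
    return np.mean(received, axis=0)
```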
The exponential growth of distributed energy resources is enabling the transformation of traditional consumers in the smart grid into prosumers. This transition presents a promising opportunity for sustainable energy trading. Yet the integration of prosumers into the energy market imposes new considerations in designing unified and sustainable frameworks for efficient use of the power and communication infrastructure. Furthermore, several issues need to be tackled to adequately promote the adoption of decentralized renewable-oriented systems, such as communication overhead, data privacy, scalability, and sustainability. In this article, we present the different aspects and challenges to be addressed for building efficient energy trading markets in relation to communication and smart decision-making. Accordingly, we propose a multi-level pro-decision framework for prosumer communities to achieve collective goals. Since the individual decisions of prosumers are mainly driven by individual self-sufficiency goals, the framework prioritizes the individual prosumers' decisions and relies on the 5G wireless network for fast coordination among community members. Specifically, each prosumer predicts its energy production and consumption in order to make proactive trading decisions in response to collective-level requests. Moreover, community collaboration is further extended to the collaborative training of prediction models using Federated Learning, assisted by edge servers and prosumer home-area equipment. In addition to preserving prosumers' privacy, we show through evaluations that training prediction models using Federated Learning yields high accuracy for different energy resources while reducing the communication overhead.
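As a rough illustration of a self-sufficiency-first decision (our own assumed rule, not the article's framework), a prosumer might respond to a collective-level request as follows:

```python
# Hypothetical prosumer decision rule: keep a reserve for self-sufficiency,
# offer only the predicted surplus, and request energy to cover a deficit.
def trading_decision(pred_production_kwh, pred_consumption_kwh,
                     reserve_kwh=0.5):
    """Return an ('offer', amount) or ('request', amount) response."""
    balance = pred_production_kwh - pred_consumption_kwh - reserve_kwh
    if balance >= 0:
        return ("offer", balance)     # surplus available to the community
    return ("request", -balance)      # shortfall to be covered
```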
Device-to-device (D2D) communication is one of the key emerging technologies for fifth-generation (5G) networks and beyond. It enables direct communication between mobile users, thereby extending coverage for devices lacking direct access to the cellular infrastructure and enhancing network capacity. D2D networks are complex and highly dynamic, and they will be strongly augmented by intelligence for decision-making at both the edge and core of the network, which makes them particularly difficult to predict and analyze. Conventionally, D2D systems are evaluated, investigated, and analyzed using analytical and probabilistic models (e.g., from stochastic geometry). However, applying classical simulation and analytical tools to such a complex system is often intractable or inaccurate. In this paper, we present a modeling and simulation framework from the perspective of complex-systems science and develop an agent-based model for the simulation of D2D coverage extensions. We also present a theoretical study to benchmark our proposed approach on a basic scenario that is less complicated to model mathematically. Our simulation results show that we are indeed able to predict coverage extensions for multi-hop scenarios and to quantify the effects of street-system characteristics and pedestrian mobility on the connection time of devices to the base station (BS). To our knowledge, this is the first study that applies agent-based simulation to coverage extension in D2D networks.
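A minimal flavour of such an agent-based simulation step is sketched below (assumed geometry and mobility; the paper's model is richer):

```python
# Toy agent-based step: pedestrian agents random-walk on the plane, and an
# agent is "covered" if it reaches the BS through a chain of D2D hops.
import random

def step(agents, speed=1.0):
    """Move every agent by a small random displacement."""
    return [(x + random.uniform(-speed, speed),
             y + random.uniform(-speed, speed)) for x, y in agents]

def covered(agents, bs=(0.0, 0.0), bs_range=50.0, d2d_range=15.0):
    """Breadth-first propagation of coverage from the BS via D2D relays."""
    dist2 = lambda a, b: (a[0] - b[0])**2 + (a[1] - b[1])**2
    reached = {i for i, a in enumerate(agents)
               if dist2(a, bs) <= bs_range**2}
    frontier = set(reached)
    while frontier:
        frontier = {j for j in range(len(agents)) if j not in reached
                    and any(dist2(agents[j], agents[i]) <= d2d_range**2
                            for i in frontier)}
        reached |= frontier
    return reached
```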
Unmanned aerial vehicles (UAVs) are envisioned to be extensively employed for assisting wireless communications in Internet of Things (IoT) applications. Meanwhile, terahertz (THz)-enabled intelligent reflecting surfaces (IRSs) are expected to be among the core enabling technologies for forthcoming beyond-5G wireless communications, which promise a broad range of data-demanding applications. In this paper, we propose a UAV-mounted IRS (UIRS) communication system over THz bands for confidential data dissemination from an access point (AP) towards multiple ground user equipments (UEs) in IoT networks. Specifically, the AP intends to send data to the scheduled UE, while unscheduled UEs may pose potential adversaries. To protect information messages and the privacy of the scheduled UE, we devise an energy-efficient multi-UAV covert communication scheme, in which the UIRS is used for reliable data transmission and an extra UAV serves as a cooperative jammer generating artificial noise (AN) to degrade the detection capability of the unscheduled UEs. We then formulate a novel minimum average energy efficiency (mAEE) optimization problem, targeting to improve the covert throughput and reduce the UAVs' propulsion energy consumption subject to a covertness requirement, which is determined analytically. Since the optimization problem is non-convex, we tackle it via the block successive convex approximation (BSCA) approach, iteratively solving a sequence of approximated convex sub-problems to design the binary user scheduling, the AP's power allocation, the maximum AN jamming power, the IRS beamforming, and both UAVs' trajectories. Finally, we present a low-complexity overall algorithm for system performance enhancement, together with complexity and convergence analyses. Numerical results verify our analysis and demonstrate that our design significantly outperforms existing benchmark schemes.
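Structurally, the BSCA procedure alternates over the variable blocks named above; the skeleton below (block names follow the abstract, with each solver standing in for one convexified sub-problem) conveys the loop, not the paper's formulation:

```python
# Generic BSCA outer loop: update one variable block at a time by solving
# its approximated convex sub-problem, until the objective stalls.
def bsca(blocks, solvers, objective, max_iter=100, tol=1e-4):
    """blocks: dict of current block values (scheduling, power, AN power,
    IRS beamforming, trajectories); solvers: dict of per-block solvers."""
    prev = objective(blocks)
    for _ in range(max_iter):
        for name, solve in solvers.items():
            blocks[name] = solve(blocks)   # other blocks held fixed
        cur = objective(blocks)
        if abs(prev - cur) < tol:          # convergence of the objective
            break
        prev = cur
    return blocks
```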
Low-latency IoT applications such as autonomous vehicles, augmented/virtual reality devices, and security applications require high computation resources to make decisions on the fly. However, such applications cannot tolerate the latency incurred when their tasks are offloaded for processing on a cloud infrastructure. Edge computing is therefore introduced to enable low latency by moving task processing closer to the users, at the edge of the network. The edge of the network is characterized by the heterogeneity of the edge devices forming it; thus, it is crucial to devise novel solutions that take into account the different physical resources of each edge device. In this paper, we propose a resource representation scheme that allows each edge device to expose its resource information to the supervisor of the edge node (EN) through the mobile edge computing application programming interfaces proposed by the European Telecommunications Standards Institute (ETSI). This resource information is exposed to the EN supervisor each time a resource allocation is required. To this end, we leverage a Lyapunov optimization framework to dynamically allocate resources at the edge devices. To validate our proposed model, we performed extensive theoretical and experimental simulations on a testbed, assessing the proposed scheme and its impact on different system parameters. The simulations show that our proposed approach outperforms other benchmark approaches and provides low latency and optimal resource consumption.
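The Lyapunov machinery typically reduces to a per-slot drift-plus-penalty rule; the generic sketch below (our simplification, with an assumed scalar backlog and cost) shows the pattern:

```python
# Drift-plus-penalty step: pick the action minimising V*cost + Q*drift,
# then update the backlog. V trades off cost against queue stability.
def drift_plus_penalty_step(Q, arrivals, actions, cost, service, V=10.0):
    """Q: current backlog; cost(a) and service(a) are per-action callables."""
    best = min(actions,
               key=lambda a: V * cost(a) + Q * (arrivals - service(a)))
    Q_next = max(Q + arrivals - service(best), 0.0)   # queue evolution
    return best, Q_next
```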
Multi-access edge computing (MEC) is a key enabler for reducing the latency of vehicular networks. Due to vehicle mobility, requested services (e.g., infotainment services) must frequently be migrated across different MEC servers to guarantee their stringent quality-of-service requirements. In this paper, we study the problem of service migration in a MEC-enabled vehicular network with the aim of minimizing the total service latency and migration cost. This problem is formulated as a nonlinear integer program and is linearized to help obtain the optimal solution using off-the-shelf solvers. Then, to obtain an efficient solution, it is modeled as a multi-agent Markov decision process and solved by leveraging a deep Q-learning (DQL) algorithm. The proposed DQL scheme performs proactive service migration while ensuring service continuity under high-mobility constraints. Finally, simulation results show that the proposed DQL scheme achieves close-to-optimal performance.
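A tabular toy version of the migration decision (the paper's scheme is a deep multi-agent variant; states, costs, and rates here are illustrative) looks like this:

```python
# Toy Q-learning for service migration. State: (vehicle zone, serving MEC);
# action: the MEC server to host the service next.
import random
from collections import defaultdict

Q = defaultdict(float)  # Q[(state, action)]

def migrate_step(state, actions, latency, mig_cost,
                 alpha=0.1, gamma=0.9, eps=0.1):
    """Epsilon-greedy choice of target MEC; reward is the negative of
    service latency plus migration cost."""
    a = (random.choice(actions) if random.random() < eps
         else max(actions, key=lambda x: Q[(state, x)]))
    reward = -(latency(state, a) + mig_cost(state, a))
    next_state = (state[0], a)            # the service is now hosted on a
    best_next = max(Q[(next_state, x)] for x in actions)
    Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
    return a, next_state
```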
Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis, based on artificial intelligence, in locations close to where the data is captured. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although this field of research emerged only recently, around 2011, it has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components, and we present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and we discuss important open issues and possible theoretical and technical solutions.
Federated learning has emerged as a promising approach for paving the last mile of artificial intelligence, owing to its great potential for solving the data isolation problem in large-scale machine learning. In particular, considering the heterogeneity of practical edge computing systems, asynchronous edge-cloud collaborative federated learning can further improve learning efficiency by significantly reducing the straggler effect. Although no raw data is shared, the open architecture and extensive collaboration of asynchronous federated learning (AFL) still give malicious participants ample opportunity to infer other parties' training data, leading to serious privacy concerns. To achieve a rigorous privacy guarantee with high utility, we investigate securing asynchronous edge-cloud collaborative federated learning with differential privacy (DP), focusing on the impact of DP on the model convergence of AFL. Formally, we give the first analysis of the model convergence of AFL under DP and propose a multi-stage adjustable private algorithm (MAPA) that improves the trade-off between model utility and privacy by dynamically adjusting both the noise scale and the learning rate. Through extensive simulations and real-world experiments on an edge-cloud testbed, we demonstrate that MAPA significantly improves both model accuracy and convergence speed while providing a sufficient privacy guarantee.
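The multi-stage idea can be caricatured as follows (stage schedule and constants are our assumptions, not MAPA's calibration): each stage clips the gradient, adds Gaussian noise for DP, and jointly shrinks the noise scale and learning rate as training converges:

```python
# Simplified multi-stage DP update in the spirit of MAPA (illustrative
# constants): clip, add stage-dependent Gaussian noise, then step.
import numpy as np

def private_update(w, grad, stage, clip=1.0, sigma0=2.0, lr0=0.1, decay=0.7):
    norm = np.linalg.norm(grad)
    g = grad * min(1.0, clip / max(norm, 1e-12))      # gradient clipping
    sigma = sigma0 * decay**stage                     # noise scale shrinks
    lr = lr0 * decay**stage                           # step size shrinks too
    noisy = g + np.random.normal(0.0, sigma * clip, size=g.shape)
    return w - lr * noisy
```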
Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have now received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey paper provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization of edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.