Cloud, fog, and edge computing integration with future mobile Internet-of-Things (IoT) devices and related applications in 5G/6G networks will become more practical in the coming years. Containers have become the de facto virtualization technique, largely replacing Virtual Machines (VMs). Mobile IoT applications that incorporate fog-edge computing, e.g., intelligent transportation and augmented reality, have increased the demand for millisecond-scale response and processing times. Edge computing reduces remote network traffic and latency, but these services must run on edge nodes that are physically close to devices. However, classical migration techniques may not meet the requirements of future mission-critical IoT applications. Mobile IoT devices have limited resources for running multiple services, and client-server latency worsens when fog-edge services must migrate to maintain proximity as devices move. This study analyzes the performance of the MiGrror migration method and the pre-copy live migration method when multiple VMs/containers are migrated. The paper presents mathematical models for both methods and provides migration guidelines and comparisons for services implemented as multiple containers, as in microservice-based environments. Experiments demonstrate that MiGrror outperforms the pre-copy technique and, unlike conventional live migration, can maintain less than 10 milliseconds of downtime and reduce migration time with minimal bandwidth overhead. The results show that MiGrror can improve service continuity and availability for users. Most significantly, the model can use average and non-average values for different parameters during migration to achieve more accurate results, whereas other research typically uses only average values; the paper shows that relying solely on average parameter values can lead to inaccurate results.
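To make the comparison concrete, the sketch below implements the standard iterative pre-copy model (not MiGrror or the paper's exact equations): memory is copied in rounds while pages keep being dirtied, and the final stop-and-copy round determines the downtime. All parameter names and values are illustrative assumptions; the sketch only demonstrates how per-round dirty rates can give estimates that differ from a single averaged rate.

```python
# Minimal sketch of the standard iterative pre-copy migration model.
# All parameter values are illustrative assumptions, not results from the paper.

def precopy_migration(mem_mb, bw_mb_s, dirty_rates_mb_s, stop_threshold_mb=8.0):
    """Estimate total migration time and downtime for one VM/container.

    dirty_rates_mb_s: per-round page-dirtying rates; pass a single repeated
    value to reproduce the usual "average rate" analysis.
    """
    to_copy = mem_mb          # first round copies the whole memory image
    total_time = 0.0
    for rate in dirty_rates_mb_s:
        round_time = to_copy / bw_mb_s
        total_time += round_time
        to_copy = rate * round_time       # pages dirtied while copying
        if to_copy <= stop_threshold_mb:  # small enough: stop and copy
            break
    downtime = to_copy / bw_mb_s          # final stop-and-copy phase
    return total_time + downtime, downtime

# Average-rate analysis vs. per-round rates for the same workload.
avg = precopy_migration(1024, bw_mb_s=500, dirty_rates_mb_s=[120] * 10)
var = precopy_migration(1024, bw_mb_s=500,
                        dirty_rates_mb_s=[60, 200, 90, 180, 70, 150, 80, 140, 60, 120])
print("average-rate estimate (total_s, downtime_s):", avg)
print("per-round estimate    (total_s, downtime_s):", var)
```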
Tor is the most popular anonymous communication overlay network; it hides clients' identities from servers by passing packets through multiple relays. Tor onion services extend this design by increasing the number of relays between a client and a server, providing anonymity to both endpoints. Because of the limited bandwidth of Tor relays, the large number of users, and the multiple layers of encryption applied at relays, onion services suffer from high end-to-end latency and low data transfer rates, which degrade the user experience and make onion services unsuitable for latency-sensitive applications. In this paper, we present a UDP-based framework, called DarkHorse, that improves the end-to-end latency and the data transfer overhead of Tor onion services by exploiting the connectionless nature of UDP. Our evaluation results demonstrate that DarkHorse is up to 3.62x faster than regular TCP-based Tor onion services and reduces the Tor network overhead by up to 47%.
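As a point of reference, the following minimal Python sketch illustrates the connectionless UDP exchange that such a framework builds on: a datagram is answered without any handshake or per-client connection state. It is generic socket code, not DarkHorse itself, and the address and port are arbitrary assumptions.

```python
# Generic illustration of connectionless UDP messaging (not DarkHorse code).
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 9999))            # bound before the client sends

def echo_once():
    data, addr = srv.recvfrom(2048)      # no accept(): no per-client connection state
    srv.sendto(data.upper(), addr)       # reply to whichever address sent the datagram

threading.Thread(target=echo_once, daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.sendto(b"hello onion", ("127.0.0.1", 9999))  # single datagram, no handshake
print(cli.recvfrom(2048)[0])             # b'HELLO ONION'
cli.close()
srv.close()
```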
Mobile Edge Computing (MEC) is expected to play a significant role in the development of 6G networks, as new applications such as cooperative driving and eXtended Reality (XR) require both communication and computational resources from the network edge. However, the limited capabilities of edge servers may be strained to perform complex computational tasks within strict latency bounds for multiple clients. In these contexts, both maintaining a low average Age of Information (AoI) and guaranteeing a low Peak AoI (PAoI) even in the worst case may have significant user experience and safety implications. In this work, we investigate a theoretical model of a MEC server, deriving the expected AoI as well as the PAoI and latency distributions under the First In First Out (FIFO) and Generalized Processor Sharing (GPS) resource allocation policies. We consider both synchronized and unsynchronized systems, and we draw insights into the robust design of resource allocation policies from the analytical results.
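For intuition, the sketch below simulates a single FIFO server with Poisson arrivals and exponential service and estimates the time-average AoI and the average PAoI from the age sawtooth. It is an illustrative simulation under assumed parameters, not the paper's analytical model.

```python
# FIFO M/M/1 AoI simulation sketch (illustrative parameters, not the paper's model).
import random

def simulate_fifo_aoi(lam=0.5, mu=1.0, n=200_000, seed=1):
    rng = random.Random(seed)
    t_arr, t_dep = [], []
    t, free_at = 0.0, 0.0
    for _ in range(n):
        t += rng.expovariate(lam)              # Poisson update generation
        start = max(t, free_at)                # FIFO: wait for the server
        free_at = start + rng.expovariate(mu)  # exponential service time
        t_arr.append(t)
        t_dep.append(free_at)
    # The age resets to (t_dep[i] - t_arr[i]) when update i is delivered.
    area, peaks = 0.0, []
    for i in range(1, n):
        peak = t_dep[i] - t_arr[i - 1]         # age just before update i arrives
        reset = t_dep[i - 1] - t_arr[i - 1]    # age just after update i-1 arrived
        area += 0.5 * (peak ** 2 - reset ** 2) # trapezoid under the age sawtooth
        peaks.append(peak)
    horizon = t_dep[-1] - t_dep[0]
    return area / horizon, sum(peaks) / len(peaks)

avg_aoi, avg_paoi = simulate_fifo_aoi()
print(f"average AoI ~ {avg_aoi:.2f}, average PAoI ~ {avg_paoi:.2f}")
```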
The Fifth Generation (5G) of mobile networks offers new and advanced services with stricter requirements. Multi-access Edge Computing (MEC) is a key technology that enables these new services by deploying multiple devices with computing and storage capabilities at the edge of the network, close to end users. MEC enhances network efficiency by reducing latency, enabling real-time awareness of the local environment, allowing cloud offloading, and reducing traffic congestion. New mission-critical applications require high security and dependability, which are rarely addressed alongside performance. This survey fills this gap by examining 5G MEC along three aspects: security, dependability, and performance. The paper provides an overview of MEC and, for each aspect, introduces the relevant taxonomy, the state of the art, and the open challenges. Finally, it discusses the challenges of jointly addressing all three aspects.
Exploiting the computational heterogeneity of mobile devices and edge nodes, mobile edge computing (MEC) provides an efficient approach to achieving real-time applications that are sensitive to information freshness by offloading tasks from mobile devices to edge nodes. We use the Age-of-Information (AoI) metric to evaluate information freshness. An efficient solution that minimizes AoI in a multi-user MEC system is non-trivial to obtain because of the random computing time. In this paper, we consider multiple users offloading tasks to heterogeneous edge servers in a MEC system. We first formulate the problem as a Restless Multi-Armed Bandit (RMAB) problem and establish a hierarchical Markov Decision Process (MDP) to characterize how AoI evolves in the MEC system. Based on the hierarchical MDP, we propose a nested index framework and design a nested index policy with provable asymptotic optimality. Finally, we obtain a closed form of the nested index, which enables trading off computational complexity against accuracy. Our algorithm reduces the optimality gap by up to 40% compared to benchmarks and asymptotically approaches the lower bound as the system scale grows large.
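The sketch below illustrates the general shape of an index-based scheduling policy: in each slot, the servers are assigned to the users with the largest AoI-weighted index, and a served task completes with some probability to model random computing time. The index used here is a simple illustrative heuristic, not the paper's nested index.

```python
# Simplified index-based scheduling sketch for AoI minimization (not the paper's
# nested index policy; the index below is an illustrative heuristic).
import random

def simulate(num_users=8, num_servers=2, horizon=10_000, seed=0):
    rng = random.Random(seed)
    success = [rng.uniform(0.3, 0.9) for _ in range(num_users)]  # random computing time
    aoi = [1] * num_users
    total = 0
    for _ in range(horizon):
        # Serve the users whose expected AoI reduction (AoI * success prob) is largest.
        ranked = sorted(range(num_users), key=lambda k: aoi[k] * success[k], reverse=True)
        served = set(ranked[:num_servers])
        for k in range(num_users):
            if k in served and rng.random() < success[k]:
                aoi[k] = 1                  # fresh result delivered, age resets
            else:
                aoi[k] += 1                 # information keeps aging
        total += sum(aoi)
    return total / (horizon * num_users)

print("average AoI per user:", round(simulate(), 2))
```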
Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields owing to their effectiveness in learning over graphs. To scale GNN training to large and ever-growing graphs, the most promising solution is distributed training, which distributes the training workload across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques it uses. First, distributed GNN training is classified into several categories according to its workflow; the corresponding computational patterns and communication patterns, as well as the optimization techniques proposed by recent work, are then introduced. Second, the software frameworks and hardware platforms for distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing what makes distributed GNN training unique. Finally, interesting problems and opportunities in this field are discussed.
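As a toy illustration of the partition-parallel workflow common to many of the surveyed systems, the sketch below assigns vertex partitions to two notional workers, lets each worker aggregate neighbor features, and marks which features would have to be fetched from remote ("halo") vertices. The graph, features, and partitioning are made-up examples, not any specific framework's API.

```python
# Toy sketch of partition-parallel GNN aggregation (illustrative only).
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]    # small undirected graph
feats = np.eye(4, dtype=np.float32)                  # one-hot node features
parts = {0: [0, 1], 1: [2, 3]}                       # vertex sets owned by each "worker"

adj = {v: set() for v in range(4)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def local_aggregate(worker):
    """Mean-aggregate neighbor features for the vertices this worker owns.

    Features of neighbors owned by other workers (the 'halo') would be fetched
    over the network in a real system; here we simply read them from `feats`.
    """
    out = {}
    for v in parts[worker]:
        halo = [u for u in adj[v] if u not in parts[worker]]   # remote neighbors
        neigh = np.stack([feats[u] for u in adj[v]])
        out[v] = neigh.mean(axis=0)
        print(f"worker {worker}: vertex {v} pulled {len(halo)} halo feature(s)")
    return out

embeddings = {**local_aggregate(0), **local_aggregate(1)}
print(embeddings[1])
```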
Edge computing facilitates low-latency services at the network's edge by distributing computation, communication, and storage resources in geographic proximity to mobile and Internet-of-Things (IoT) devices. Recent advances in Unmanned Aerial Vehicle (UAV) technologies have opened new opportunities for edge computing in military operations, disaster response, and remote areas where traditional terrestrial networks are limited or unavailable. In such environments, UAVs can be deployed as aerial edge servers or relays to facilitate edge computing services. This form of computing, known as UAV-enabled Edge Computing (UEC), offers several unique benefits such as mobility, line-of-sight connectivity, flexibility, computational capability, and cost-efficiency. However, the resources on UAVs, edge servers, and IoT devices are typically very limited in UEC, so efficient resource management is a critical research challenge. In this article, we survey existing research on UEC from the resource management perspective. We identify a conceptual architecture, different types of collaboration, wireless communication models, research directions, key techniques, and performance indicators for resource management in UEC, and we present a taxonomy of resource management in UEC. Finally, we identify and discuss open research challenges that can stimulate future research directions for resource management in UEC.
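As a small illustration of one resource management decision in this setting, the sketch below greedily assigns offloaded tasks to the UAV edge server that currently offers the earliest finish time. The task sizes, server speeds, and heuristic are invented for illustration and are not drawn from any specific surveyed work.

```python
# Toy greedy offloading sketch for UAV-enabled edge computing (illustrative only).
# Each task goes to the UAV server that currently gives the earliest finish time.

tasks = [("t1", 4.0), ("t2", 2.0), ("t3", 6.0), ("t4", 3.0)]    # (name, gigacycles)
uavs = {"uav_a": 2.0, "uav_b": 1.0}                              # CPU speed (Gcycles/s)
load = {u: 0.0 for u in uavs}                                    # queued work in seconds

for name, cycles in sorted(tasks, key=lambda t: -t[1]):          # largest tasks first
    finish = {u: load[u] + cycles / speed for u, speed in uavs.items()}
    best = min(finish, key=finish.get)
    load[best] = finish[best]
    print(f"{name} -> {best} (finishes at {finish[best]:.1f}s)")

print("makespan:", max(load.values()), "s")
```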
Unmanned aerial vehicle (UAV) swarm-enabled edge computing is envisioned to be promising in sixth-generation wireless communication networks because of its wide range of application scenarios and flexible deployment. However, most existing works focus on edge computing enabled by a single UAV or a small number of UAVs, which is very different from UAV swarm-enabled edge computing. To facilitate practical applications of UAV swarm-enabled edge computing, this article presents the state of the art in this area. The potential applications, architectures, and implementation considerations are illustrated, and the promising enabling technologies for UAV swarm-enabled edge computing are discussed. Finally, we outline challenges and open issues to shed light on future research directions.
Games and simulators can be a valuable platform for executing complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by providing complex benchmarks for algorithm development and the capability to rapidly iterate over new ideas. The success of AI algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community, which aims to explore similar techniques in military counterpart scenarios. Aiming to bridge games and military applications, this work discusses past and current efforts on how games and simulators, together with AI algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. The paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.
Deployment of Internet of Things (IoT) devices and data fusion techniques has gained popularity in public and government domains. This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically presents a complex data problem. Since the military is investigating how heterogeneous IoT devices can aid its processes and tasks, we investigate a multi-sensor approach. We propose a signal-to-image encoding approach that transforms and fuses signals from IoT wearable devices into an image that is invertible and easier to visualize, supporting decision making. Furthermore, we investigate the challenge of enabling intelligent identification and detection operations and demonstrate the feasibility of the proposed deep learning and anomaly detection models, which can support future applications that utilize hand-gesture data from wearable devices.
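One simple way to realize an invertible signal-to-image encoding (not necessarily the exact transform proposed here) is to min-max scale the 1-D signal and reshape it into a 2-D array while keeping the scaling parameters for reconstruction, as in the sketch below; the signal and image width are illustrative.

```python
# Minimal sketch of an invertible signal-to-image encoding (illustrative; not
# necessarily the exact transform proposed in the paper).
import numpy as np

def encode(signal, width):
    """Min-max scale a 1-D signal to [0, 1] and reshape it into a 2-D image."""
    lo, hi = signal.min(), signal.max()
    scaled = (signal - lo) / (hi - lo)                   # keep (lo, hi) to invert later
    pad = (-len(scaled)) % width                         # pad so it reshapes cleanly
    img = np.pad(scaled, (0, pad)).reshape(-1, width)
    return img, (lo, hi, pad)

def decode(img, meta):
    lo, hi, pad = meta
    flat = img.reshape(-1)
    flat = flat[:len(flat) - pad] if pad else flat       # drop the padding
    return flat * (hi - lo) + lo                         # undo the scaling

signal = np.sin(np.linspace(0, 6 * np.pi, 100)) + 0.1 * np.random.randn(100)
img, meta = encode(signal, width=10)
recovered = decode(img, meta)
print("max reconstruction error:", np.abs(recovered - signal).max())
```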
Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment on devices with limited memory or in applications with strict latency requirements. A natural approach is therefore to compress and accelerate deep networks without significantly degrading model performance. Tremendous progress has been made in this area during the past few years. In this paper, we survey recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis of the performance, related applications, advantages, and drawbacks. We then go through a few very recent, successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible directions on this topic.
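As a concrete instance of the first scheme, the sketch below applies magnitude-based weight pruning to a stand-in weight matrix: the smallest-magnitude weights are zeroed until a target sparsity is reached. The sparsity level and tensor shape are illustrative assumptions.

```python
# Minimal sketch of magnitude-based weight pruning (one of the surveyed
# parameter-pruning schemes); the sparsity level is an illustrative choice.
import numpy as np

def prune_by_magnitude(weights, sparsity=0.7):
    """Zero out the smallest-magnitude weights until `sparsity` of them are zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]           # k-th smallest magnitude
    mask = np.abs(weights) >= threshold            # keep only the larger weights
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)   # a stand-in conv/fc weight tensor
w_pruned, mask = prune_by_magnitude(w, sparsity=0.7)
print("fraction of zero weights:", 1.0 - mask.mean())
```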