In-band Network Telemetry (INT) has emerged as a promising network measurement technology. However, existing network telemetry systems lack the flexibility to meet diverse telemetry requirements and are difficult to adapt to dynamic network environments. In this paper, we propose AdapINT, a versatile and adaptive in-band network telemetry framework assisted by dual-timescale probes, including long-period auxiliary probes (APs) and short-period dynamic probes (DPs). Technically, the APs collect basic network status information, which is used for the path planning of DPs. To achieve full network coverage, we propose an auxiliary probes path deployment (APPD) algorithm based on Depth-First-Search (DFS). The DPs collect specific network information for telemetry tasks. To ensure that the DPs can meet diverse telemetry requirements and adapt to dynamic network environments, we apply deep reinforcement learning (DRL) and transfer learning to design the dynamic probes path deployment (DPPD) algorithm. The evaluation results show that AdapINT can redesign the telemetry system according to telemetry requirements and network environments. AdapINT can reduce telemetry latency by 75% in online games and video conferencing scenarios. For overhead-aware networks, AdapINT can reduce control overheads by 34% in cloud computing services.
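The DFS-based path planning for full coverage can be illustrated with a minimal sketch; the topology, function name, and backtracking behavior here are illustrative assumptions, not the paper's actual APPD algorithm:

```python
def dfs_probe_path(graph, start):
    """Build one probe path that visits every reachable node via
    depth-first search, backtracking so the probe returns to its source."""
    path, visited = [], set()

    def dfs(node):
        visited.add(node)
        path.append(node)
        for nbr in graph[node]:
            if nbr not in visited:
                dfs(nbr)
                path.append(node)  # record the hop back after exploring nbr

    dfs(start)
    return path

# Toy topology: switch 0 links to 1 and 2; switch 1 links to 3.
topo = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(dfs_probe_path(topo, 0))  # [0, 1, 3, 1, 0, 2, 0]
```

Because DFS traverses each tree edge once in each direction, every node is visited and the probe ends back at its origin.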
Compressing a predefined deep neural network (DNN) into a compact sub-network with competitive performance is crucial in the efficient machine learning realm. This topic spans various techniques, from structured pruning to neural architecture search, encompassing both pruning and erasing operator perspectives. Despite advancements, existing methods suffer from complex, multi-stage processes that demand substantial engineering and domain knowledge, limiting their broader applications. We introduce the third-generation Only-Train-Once framework (OTOv3), the first to automatically train and compress a general DNN through pruning and erasing operations, creating a compact and competitive sub-network without the need for fine-tuning. OTOv3 simplifies and automates the training and compression process, minimizing the engineering effort required from users. It offers key technological advancements: (i) automatic search space construction for general DNNs based on dependency graph analysis; (ii) the Dual Half-Space Projected Gradient (DHSPG) method and its enhanced version with hierarchical search (H2SPG) to reliably solve (hierarchical) structured sparsity problems and ensure sub-network validity; and (iii) automated sub-network construction using the solutions from DHSPG/H2SPG and the dependency graphs. Our empirical results demonstrate the efficacy of OTOv3 across various benchmarks in structured pruning and neural architecture search, where it produces sub-networks that match or exceed the state of the art. The source code will be available at //github.com/tianyic/only_train_once.
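As a generic illustration of the structured-pruning idea (a magnitude heuristic, not OTOv3's DHSPG/H2SPG optimizer; all names here are hypothetical), one can rank a layer's output channels by norm and keep only the strongest:

```python
import math

def prune_channels(weights, keep_ratio):
    """Rank output channels by their L2 norm and keep the top fraction —
    a generic structured-pruning heuristic, not OTOv3's method."""
    norms = [(i, math.sqrt(sum(w * w for w in ch)))
             for i, ch in enumerate(weights)]
    norms.sort(key=lambda t: -t[1])            # strongest channels first
    keep = max(1, int(len(weights) * keep_ratio))
    kept = sorted(i for i, _ in norms[:keep])  # preserve original order
    return [weights[i] for i in kept], kept

# Four output channels of two weights each; keep the top half.
layer = [[0.9, 0.1], [0.01, 0.02], [0.5, 0.5], [0.0, 0.03]]
pruned, idx = prune_channels(layer, 0.5)
print(idx)  # [0, 2] — the two highest-norm channels survive
```

Removing whole channels (rather than individual weights) is what makes the resulting sub-network directly executable without sparse kernels.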
Existing studies analyzing electromagnetic field (EMF) exposure in wireless networks have primarily considered downlink (DL) communications. In the uplink (UL), the EMF exposure caused by the user's smartphone is usually the only source of radiation considered, thereby ignoring the contributions of other active neighboring devices. In addition, network coverage and EMF exposure are typically analyzed independently for both the UL and DL, whereas a joint analysis is necessary to fully understand network performance. This paper aims at bridging the resulting gaps by presenting a comprehensive stochastic geometry framework covering the above aspects. The proposed topology features base stations (BSs) modeled via a homogeneous Poisson point process, as well as a user process of type II (with users uniformly distributed in the Voronoi cell of each BS). In addition to the UL-to-DL exposure ratio, we derive joint probability metrics considering the UL and DL coverage and EMF exposure. These metrics are evaluated in two scenarios considering BS and/or user densification. Our numerical results highlight the existence of optimal node densities maximizing these joint probabilities.
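The BS point process underlying such models can be sampled with a short stdlib sketch; the window size, density, and function names are illustrative assumptions, not the paper's simulation setup:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method: adequate for the small means used in this sketch."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sample_bs_ppp(density, side, rng):
    """Homogeneous Poisson point process on a side x side window:
    Poisson-distributed count, then i.i.d. uniform locations."""
    n = poisson(density * side * side, rng)
    return [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]

def serving_bs(user, bs_list):
    """Nearest-BS association: the user lies in that BS's Voronoi cell."""
    return min(bs_list, key=lambda b: math.dist(user, b))

rng = random.Random(7)
bs = sample_bs_ppp(density=1e-5, side=1000, rng=rng)  # mean of 10 BSs in 1 km^2
print(len(bs), "base stations drawn")
```

Nearest-BS association is exactly the Voronoi-cell membership used by the type-II user process: each user is attached to the BS whose cell contains it.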
The wireless blockchain network (WBN) concept, born from deploying blockchain in wireless networks, has attracted interest in many network scenarios. Blockchain consensus mechanisms (CMs) are key to enabling nodes in a wireless network to achieve consistency without any trusted entity. However, consensus reliability is seriously affected by the instability of communication links in wireless networks. Meanwhile, it is difficult for nodes in wireless scenarios to obtain a timely energy supply, and energy-intensive blockchain functions can quickly drain the power of nodes, degrading consensus performance. Fortunately, a symbiotic radio (SR) system enabled by cognitive backscatter communications can solve both problems. In SR, the secondary transmitter (STx) transmits messages over the radio frequency (RF) signal emitted by a primary transmitter (PTx) with extremely low energy consumption, and in return the STx provides multipath gain to the PTx. Such an approach is useful for almost all vote-based CMs, such as Practical Byzantine Fault-Tolerant (PBFT)-like and RAFT-like CMs. This paper proposes symbiotic blockchain consensus (SBC) and demonstrates its universality by transforming six PBFT-like and four RAFT-like state-of-the-art (SOTA) CMs. These new CMs benefit from the mutualistic transmission relationships in SR, making full use of the limited spectrum resources in WBN. Simulation results show that SBC can increase the consensus success rate of PBFT-like and RAFT-like CMs by 54.1% and 5.8%, respectively, and reduce energy consumption by 9.2% and 23.7%, respectively.
Multi-access Edge Computing (MEC) addresses computational and battery limitations in devices by allowing them to offload computation tasks. To overcome the difficulties in establishing line-of-sight connections, integrating unmanned aerial vehicles (UAVs) has proven beneficial, offering enhanced data exchange, rapid deployment, and mobility. The use of reconfigurable intelligent surfaces (RIS), specifically simultaneously transmitting and reflecting RIS (STAR-RIS) technology, further extends coverage capabilities and introduces flexibility into MEC. This study explores the integration of UAVs and STAR-RIS to facilitate communication between IoT devices and an MEC server. The formulated problem aims to minimize the energy consumption of the IoT devices and the aerial STAR-RIS by jointly optimizing task offloading, the aerial STAR-RIS trajectory, the amplitude and phase shift coefficients, and the transmit power. Given the non-convexity of the problem and the dynamic environment, solving it directly within polynomial time is challenging. Therefore, deep reinforcement learning (DRL), particularly proximal policy optimization (PPO), is introduced for its sample efficiency and stability. Simulation results illustrate the effectiveness of the proposed system compared to benchmark schemes in the literature.
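The clipped surrogate objective at the heart of PPO can be written in a few lines; this is a textbook sketch only, and the paper's state, action, and reward design are not reproduced here:

```python
def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO clipped surrogate (to be maximized): take the minimum of the
    unclipped and clipped probability ratios times the advantage, so a
    single update cannot move the policy too far."""
    clipped = max(1 - eps, min(1 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# A ratio far above 1+eps gets clipped when the advantage is positive:
print(ppo_clip_loss(1.5, 2.0))  # 2.4 = 1.2 * 2.0
```

This clipping is the reason PPO is often preferred for its stability: the objective removes the incentive to push the new policy's probability ratio outside [1-eps, 1+eps].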
The resilience of internet service is crucial for ensuring consistent communication and facilitating emergency response in a digitally dependent society. Due to empirical data constraints, there has been limited research on internet service disruptions during extreme weather events. To bridge this gap, this study utilizes observational datasets on internet performance to quantitatively assess the extent of internet disruption during two recent extreme weather events. Taking Harris County in the United States as the study region, we jointly analyzed hazard severity and the associated internet disruptions in these two events. The results show that hazard events significantly impacted regional internet connectivity. There exists a pronounced temporal synchronicity between the magnitude of disruption and hazard severity: as hazards intensify, internet disruptions correspondingly escalate, and they eventually return to baseline levels post-event. Spatial analyses show that internet service disruptions can occur even in areas not directly impacted by hazards, demonstrating that the repercussions of hazards extend beyond their immediate area of impact. This interplay of temporal synchronization and spatial variance underscores the complex relationship between hazard severity and internet disruption. Socio-demographic analysis suggests that vulnerable communities, already grappling with myriad challenges, face exacerbated service disruptions during hazard events, emphasizing the need for prioritized disaster mitigation strategies to improve the resilience of internet services. To the best of our knowledge, this research is among the first to examine internet disruptions during hazardous events using a quantitative observational dataset. The insights obtained hold significant implications for city administrators, guiding them towards more resilient and equitable infrastructure planning.
Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through approaches such as GNNExplainer, XGNN and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs has been unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them based on the explainable methods they use. We further provide the common performance metrics for GNN explanations and point out several future research directions.
With the advent of 5G commercialization, more reliable, faster, and intelligent telecommunication systems are envisaged for the next generation of beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not only immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most existing surveys on B5G security focus on the performance and accuracy of AI/ML models, while often overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the B5G security domain is to make security decision-making processes transparent and comprehensible to stakeholders, making the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as RAN, zero-touch network management, and E2E slicing, along with the use cases that general users will ultimately enjoy. Furthermore, we present lessons learned from recent efforts and future research directions, building on currently conducted projects involving XAI.
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their applications in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that are able to lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand in order to enable numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems and applications. We start by introducing popular model compression methods, including pruning, factorization, quantization as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce the efficient deep learning system design from both software and hardware perspectives.
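Of the compression methods listed, uniform quantization is simple enough to sketch; this is a per-tensor symmetric scheme, with the function name and bit-width chosen for illustration:

```python
def quantize_uniform(values, num_bits=8):
    """Symmetric uniform quantization: map floats to signed integers
    with a single per-tensor scale, then dequantize to inspect error."""
    qmax = 2 ** (num_bits - 1) - 1           # 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax
    q = [round(v / scale) for v in values]   # integer codes in [-127, 127]
    deq = [qi * scale for qi in q]           # reconstruction for error check
    return q, deq

weights = [0.127, -0.064, 0.0, 0.1]
q, deq = quantize_uniform(weights)
print(q)  # [127, -64, 0, 100]
```

Storing the integer codes plus one scale shrinks a 32-bit float tensor roughly 4x, at the cost of a bounded rounding error of at most half the scale per weight.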
Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably their most significant impact has been in the area of computer vision, where great advances have been made in challenges such as plausible image generation, image-to-image translation, facial attribute manipulation and similar domains. Despite the significant successes achieved to date, applying GANs to real-world problems still poses significant challenges, three of which we focus on here. These are: (1) the generation of high quality images, (2) diversity of image generation, and (3) stable training. Focusing on the degree to which popular GAN technologies have made progress against these challenges, we provide a detailed review of the state of the art in GAN-related research in the published scientific literature. We further structure this review through a convenient taxonomy we have adopted based on variations in GAN architectures and loss functions. While several reviews of GANs have been presented to date, none have considered the status of this field based on progress towards addressing practical challenges relevant to computer vision. Accordingly, we review and critically discuss the most popular architecture-variant and loss-variant GANs for tackling these challenges. Our objective is to provide an overview as well as a critical analysis of the status of GAN research in terms of relevant progress towards important computer vision application requirements. In doing so, we also discuss the most compelling applications in computer vision in which GANs have demonstrated considerable success, along with some suggestions for future research directions. Code related to the GAN variants studied in this work is summarized on //github.com/sheqi/GAN_Review.
Graph convolutional networks (GCNs) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that grows exponentially with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search to within this subgraph. This simple but effective strategy significantly improves memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) while using much less memory (2.2GB vs. 11.2GB). For training a 4-layer GCN on this data, our algorithm can finish in around 36 minutes, whereas all existing GCN training algorithms fail due to out-of-memory issues. Furthermore, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our code is publicly available at //github.com/google-research/google-research/tree/master/cluster_gcn.
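The cluster-sampling strategy can be sketched as follows; this is a simplified illustration that assumes the clusters are already given (e.g. by a partitioner such as METIS), with hypothetical names and a toy graph:

```python
import random

def cluster_gcn_batches(clusters, clusters_per_batch, edges, rng):
    """One Cluster-GCN-style epoch: randomly group clusters into node
    blocks and keep only the edges internal to each block, so each SGD
    step touches a small subgraph instead of the full graph."""
    order = list(range(len(clusters)))
    rng.shuffle(order)
    for i in range(0, len(order), clusters_per_batch):
        block = set().union(*(clusters[j] for j in order[i:i + clusters_per_batch]))
        sub_edges = [(u, v) for u, v in edges if u in block and v in block]
        yield sorted(block), sub_edges

# Toy graph partitioned into three clusters of two nodes each.
clusters = [{0, 1}, {2, 3}, {4, 5}]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
for nodes, sub in cluster_gcn_batches(clusters, 2, edges, random.Random(0)):
    print(nodes, sub)
```

Dropping between-block edges is what bounds memory: the embedding matrix and neighborhood expansion per step scale with the block size, not the graph size.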