With the standardization and commercialization of the 5th generation (5G) wireless networks completed at an unforeseen pace, researchers, engineers and executives from academia and industry have turned their attention to new candidate technologies that can support the next generation of wireless networks, enabling more advanced capabilities in sophisticated scenarios. Explicitly, the 6th generation (6G) terrestrial wireless network aims to provide seamless connectivity not only to users but also to machine-type devices for the next decade and beyond. This paper describes the progress towards 6G, officially termed ``international mobile telecommunications (IMT) for 2030 and beyond'' by the International Telecommunication Union Radiocommunication Sector (ITU-R). Specifically, the usage scenarios, their representative capabilities and the supporting technologies are discussed, and future opportunities and challenges are highlighted.
The evolution of mobile communication networks has always been accompanied by advances in inter-symbol interference (ISI) mitigation, from equalization in 2G and spread spectrum with RAKE receivers in 3G to OFDM in 4G and 5G. Looking ahead to 6G, by exploiting the high spatial resolution of large antenna arrays and the multi-path sparsity of mmWave and Terahertz channels, a novel ISI mitigation technique termed delay alignment modulation (DAM) was recently proposed. However, existing works only consider single-carrier perfect DAM, which is feasible only when the number of BS antennas is no smaller than the number of channel paths, so that all multi-path signal components arrive at the receiver simultaneously and constructively. This imposes stringent requirements on the number of BS antennas and on multi-path sparsity. In this paper, we propose a generic DAM technique that manipulates the channel delay spread via spatial-delay processing, thus providing a flexible framework to combat channel time dispersion for efficient single- or multi-carrier transmission. We first show that when the number of BS antennas is much larger than the number of channel paths, perfect delay alignment can be achieved, transforming the time-dispersive channel into a time non-dispersive one with simple delay pre-compensation and path-based maximal-ratio transmission (MRT) beamforming. When perfect DAM is infeasible or undesirable, the proposed generic DAM technique can still be applied to significantly reduce the channel delay spread. We further propose a novel DAM-OFDM technique, which saves the cyclic prefix (CP) overhead or mitigates the peak-to-average power ratio (PAPR) issue of conventional OFDM. We show that the proposed DAM-OFDM involves a joint frequency- and time-domain beamforming optimization, for which a closed-form solution is derived. Simulation results show that the proposed DAM-OFDM outperforms conventional OFDM in terms of spectral efficiency, BER and PAPR.
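To make the core idea concrete, the following minimal numerical sketch illustrates perfect-DAM-style delay pre-compensation with path-based MRT beamforming, assuming integer-sample path delays, i.i.d. Rayleigh path vectors and illustrative parameters that are not taken from the paper:

```python
# Minimal DAM sketch: per-path delay pre-compensation + path-based MRT.
# Convention: received signal y[n] = sum_l h_l^T x[n - n_l] (downlink,
# single-antenna user, integer-sample delays). All numbers illustrative.
import numpy as np

rng = np.random.default_rng(0)
M, L, N = 64, 4, 2000                     # BS antennas, paths, symbols
delays = rng.choice(20, size=L, replace=False)   # per-path delays n_l
n_max = int(delays.max())
H = (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M))) / np.sqrt(2)
F = H.conj() / np.linalg.norm(H, axis=1, keepdims=True)  # MRT: f_l = h_l*/||h_l||

s = (2 * rng.integers(0, 2, N) - 1).astype(complex)      # BPSK symbols

# Pre-delay path l by kappa_l = n_max - n_l so all copies land together.
kappa = n_max - delays
x = np.zeros((M, N + n_max), dtype=complex)
for l in range(L):
    x[:, kappa[l]:kappa[l] + N] += np.outer(F[l], s)

# Multi-path channel: each path adds its own delay back.
y = np.zeros(N + 2 * n_max, dtype=complex)
for l in range(L):
    y[delays[l]:delays[l] + N + n_max] += H[l] @ x

gain = np.linalg.norm(H, axis=1).sum()    # desired single-tap gain
err = y[n_max:n_max + N] - gain * s
print("residual ISI-to-signal ratio:",
      (np.abs(err) ** 2).mean() / (np.abs(gain * s) ** 2).mean())
```

With M = 64 antennas and L = 4 paths the printed residual is on the order of one percent and shrinks as M/L grows, matching the intuition that alignment becomes asymptotically perfect in the large-antenna regime.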
The 6th generation (6G) wireless networks will likely support a variety of capabilities beyond communication, such as sensing and localization, through the use of communication networks empowered by advanced technologies. Integrated sensing and communication (ISAC) has been recognized as a critical technology, as well as a usage scenario, for 6G, as widely agreed by leading global standardization bodies. ISAC utilizes communication infrastructure and devices to sense the environment with high resolution and to track and localize nearby moving objects. By meeting the requirements of communication and sensing simultaneously, ISAC-based approaches offer higher spectral and energy efficiency than two separate single-purpose systems, along with potentially lower cost and easier deployment. A key step towards the standardization and commercialization of ISAC is to carry out comprehensive field trials in practical networks, such as the 5th generation (5G) network, to demonstrate its true capabilities in practical scenarios. In this paper, an ISAC-based outdoor multi-target detection, tracking and localization approach is proposed and validated in 5G networks. The proposed system comprises 5G base stations (BSs) that serve nearby mobile users as usual while simultaneously detecting, tracking and localizing drones, vehicles and pedestrians. Comprehensive trial results demonstrate the relatively high accuracy of the proposed method in practical outdoor environments when tracking and localizing both single and multiple targets.
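The abstract does not detail the tracking filter; a common building block in such BS-based tracking pipelines is a constant-velocity Kalman filter applied to position fixes extracted from the sensing echoes. The sketch below illustrates only that generic filtering step, with all parameters hypothetical:

```python
# Constant-velocity Kalman filter over 2-D position fixes (a generic
# tracking step, not the paper's specific algorithm). State: [x, y, vx, vy].
import numpy as np

dt = 0.1                                       # sensing frame interval (s)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1.]])                 # constant-velocity motion model
Hm = np.array([[1., 0, 0, 0],
               [0., 1, 0, 0]])                 # we observe position only
Q, R = 0.01 * np.eye(4), 4.0 * np.eye(2)       # process / measurement noise

rng = np.random.default_rng(1)
truth = np.cumsum(np.tile([5.0 * dt, 2.0 * dt], (100, 1)), axis=0)  # straight track
zs = truth + rng.normal(scale=2.0, size=truth.shape)  # noisy position fixes

x, P = np.zeros(4), 10.0 * np.eye(4)
for z in zs:
    x, P = F @ x, F @ P @ F.T + Q                         # predict
    K = P @ Hm.T @ np.linalg.inv(Hm @ P @ Hm.T + R)       # Kalman gain
    x, P = x + K @ (z - Hm @ x), (np.eye(4) - K @ Hm) @ P # update
print(x[:2], truth[-1])                        # track estimate vs. ground truth
```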
Unmanned aerial vehicle (UAV) networks have become popular in recent years owing to their wide range of applications. In a UAV network, routing is strongly affected by the distributed network topology, which leaves UAVs vulnerable to deliberate damage. Hence, this paper focuses on routing planning and recovery for UAV networks under attack. In detail, a deliberate attack model based on the importance of nodes is designed to represent enemy attacks. Then, a node importance ranking mechanism is presented, considering both node degree and link importance. However, the routing problem is intractable for traditional methods in UAV networks, since link connectivity changes with UAV availability. Hence, an intelligent algorithm based on reinforcement learning is proposed to recover the routing path when UAVs are attacked. Simulations are conducted, and numerical results verify that the proposed mechanism outperforms the reference methods.
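As a rough illustration of reinforcement-learning-based route recovery, the following tabular Q-learning sketch reroutes around an attacked node on a toy topology; the paper's actual state, action and reward design is not given in the abstract, so everything here is a stand-in:

```python
# Tabular Q-learning route recovery after a node is attacked (toy sketch).
# State: current node; actions: surviving neighbors; reward favors short
# paths to the destination. All hyper-parameters hypothetical.
import random

links = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
attacked = {3}                                   # node lost to the attack
alive = {u: [v for v in nbrs if v not in attacked]
         for u, nbrs in links.items() if u not in attacked}
src, dst = 0, 5
Q = {(u, v): 0.0 for u, nbrs in alive.items() for v in nbrs}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(2000):                            # training episodes
    u = src
    while u != dst:
        nbrs = alive[u]
        v = random.choice(nbrs) if random.random() < eps else \
            max(nbrs, key=lambda w: Q[(u, w)])   # epsilon-greedy action
        r = 10.0 if v == dst else -1.0           # reach goal vs. per-hop cost
        best_next = 0.0 if v == dst else max(Q[(v, w)] for w in alive[v])
        Q[(u, v)] += alpha * (r + gamma * best_next - Q[(u, v)])
        u = v

path, u = [src], src                             # greedy rollout of the
while u != dst:                                  # recovered route
    u = max(alive[u], key=lambda w: Q[(u, w)])
    path.append(u)
print(path)                                      # here: [0, 2, 4, 5]
```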
In the third round of the NIST Post-Quantum Cryptography standardization project, the focus is on optimizing software and hardware implementations of candidate schemes. The winning schemes are CRYSTALS-Kyber and CRYSTALS-Dilithium, which serve as a Key Encapsulation Mechanism (KEM) and a Digital Signature Algorithm (DSA), respectively. This study utilizes the TaPaSCo open-source framework to create hardware building blocks for both schemes using High-Level Synthesis (HLS) from minimally modified ANSI C software reference implementations across all security levels. Additionally, a generic TaPaSCo host runtime application is developed in Rust to verify their functionality through the standard NIST interface, utilizing the corresponding Known Answer Test mechanism on actual hardware. Building on this foundation, the communication overhead for TaPaSCo hardware accelerators on PCIe-connected FPGA devices is evaluated and compared with previous work and with the optimized AVX2 software reference implementations. The results demonstrate the feasibility of verifying and evaluating the performance of Post-Quantum Cryptography accelerators on real hardware using TaPaSCo. Furthermore, the off-chip accelerator communication overhead of the NIST standard interface is measured, which, on its own, outweighs the execution wall clock time of the optimized software reference implementation of Kyber at Security Level 1.
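For flavor, the snippet below sketches Known Answer Test checking for a KEM against NIST-style .rsp vector files (fields count, seed, pk, sk, ct, ss); the `decaps` callable is a hypothetical hook standing in for the accelerator invocation through the TaPaSCo host runtime, not TaPaSCo's actual API:

```python
# Minimal KAT checking sketch for a KEM. Only decapsulation is checked,
# since it is deterministic and needs no re-seeding of the DRBG that
# generated the key pairs. `decaps(sk, ct) -> ss` is a hypothetical hook.
import binascii

def parse_rsp(path):
    """Yield one dict of decoded fields per KAT record in a .rsp file."""
    rec = {}
    for line in open(path):
        line = line.strip()
        if "=" in line:
            k, v = (t.strip() for t in line.split("=", 1))
            rec[k] = int(v) if k == "count" else binascii.unhexlify(v)
        elif not line and rec:                 # blank line ends a record
            yield rec
            rec = {}
    if rec:
        yield rec

def check_kats(path, decaps):
    for rec in parse_rsp(path):
        # Shared secret recomputed by the device must match the vector.
        assert decaps(rec["sk"], rec["ct"]) == rec["ss"], rec["count"]
```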
Unmanned Aerial Vehicles (UAVs) offer promising potential as carriers of communication nodes, providing on-demand wireless connectivity to users. While the existing literature presents various wireless channel models, it often overlooks the impact of UAV heading. This paper provides an experimental characterization of the Air-to-Ground (A2G) and Ground-to-Air (G2A) wireless channels in an open environment without obstacles or interference, considering both distance and UAV heading. We analyze the received signal strength indicator (RSSI) and the TCP throughput between a ground user and a UAV, covering distances between 50~m and 500~m and different UAV headings. Additionally, we characterize the antenna's radiation pattern as a function of UAV heading. The paper provides valuable insights into the capabilities of UAVs for offering on-demand and dynamic wireless connectivity, and highlights the significance of considering UAV heading and antenna configurations in real-world scenarios.
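A typical way to condense such measurements, per UAV heading, is to fit the log-distance path-loss model RSSI(d) = RSSI(d0) - 10 n log10(d/d0); the sketch below uses placeholder numbers, not the campaign's data:

```python
# Least-squares fit of the log-distance path-loss model to RSSI samples.
# The distances match the paper's measurement range; the RSSI values are
# placeholders, since the measured data live in the paper, not here.
import numpy as np

d0 = 50.0                                          # reference distance (m)
dist = np.array([50, 100, 200, 300, 400, 500.])    # measurement points (m)
rssi = np.array([-52, -58, -65, -69, -72, -74.])   # placeholder RSSI (dBm)

# Model: rssi = rssi_d0 - 10 * n * log10(d / d0), linear in (rssi_d0, n).
X = np.column_stack([np.ones_like(dist), -10 * np.log10(dist / d0)])
(rssi_d0, n), *_ = np.linalg.lstsq(X, rssi, rcond=None)
print(f"RSSI(d0) = {rssi_d0:.1f} dBm, path-loss exponent n = {n:.2f}")
```

Repeating the fit for each heading exposes how the path-loss exponent varies with the antenna orientation.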
The principles of net neutrality have been essential for maintaining the diversity of services built on top of the internet and for maintaining some competition between small and large providers of those online services. That diversity and competition, in turn, provide users with a broader array of choices for seeking online content and disseminating their own speech. Furthermore, in order for the internet to be used to its full potential and to protect the human rights of internet users, we need privacy from surveillance and unwarranted data collection by governments, network providers, and edge providers. The transition to 5G mobile networks enables network operators to engage in a technique called network slicing. The portion of a network that is sliced can be used to provide a suite of different service offerings, each tailored to specific purposes, instead of a single, general-purpose subscription for mobile voice and data. This requires a careful approach. Our report describes the technologies used for network slicing and outlines recommendations -- for both operators and regulators -- to enable network slicing while maintaining network neutrality, protecting privacy, and promoting competition.
Graph neural networks (GNNs) have delivered a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through methods such as GNNExplainer, XGNN and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs has been unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them according to the explanation methods they use. We further provide the common performance metrics for GNN explanations and point out several future research directions.
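As a minimal example of the gradient-based family of explanation methods surveyed, the sketch below computes edge saliency for a hand-rolled one-layer GCN; real explainers such as GNNExplainer optimize a learned edge mask instead, so this is only meant to fix ideas:

```python
# Gradient-based (saliency) edge importance for a tiny dense GCN layer.
# Hand-rolled so the example is self-contained; all sizes illustrative.
import torch

n, f = 5, 8
X = torch.randn(n, f)                        # node features
A = (torch.rand(n, n) < 0.4).float()         # random graph
A = torch.clamp(((A + A.T) > 0).float() + torch.eye(n), max=1.0)
A.requires_grad_(True)                       # explain w.r.t. the edges

W = torch.randn(f, 4) * 0.1                  # GCN weight (untrained)
out_layer = torch.nn.Linear(4, 1)

deg = A.sum(1, keepdim=True)
H = torch.relu((A / deg) @ X @ W)            # one GCN-style propagation
score = out_layer(H.mean(0))                 # graph-level prediction
score.backward()

edge_importance = A.grad.abs()               # saliency per (i, j) entry
print(edge_importance)
```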
With the advent of 5G commercialization, the need for more reliable, faster and more intelligent telecommunication systems is envisaged for the next generation beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not just immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most of the existing surveys on B5G security focus on the performance and accuracy of AI/ML models while overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the B5G security domain is to make the decision-making processes of security systems transparent and comprehensible to stakeholders, rendering the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as the RAN, zero-touch network management and E2E slicing, together with the use cases that general users will ultimately enjoy. Furthermore, we present the lessons learned from recent efforts and outline future research directions building on currently conducted XAI projects.
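As one concrete, model-agnostic XAI probe of the kind discussed, the sketch below applies permutation feature importance to a stand-in intrusion-detection classifier; the feature names and data are synthetic placeholders:

```python
# Permutation feature importance on a stand-in intrusion detector.
# Synthetic features emulate flow statistics; in a B5G security pipeline
# the same post-hoc probe would run against the deployed black-box model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))                  # [pkt_rate, size, dur, port_ent]
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)

base = clf.score(X, y)
for j, name in enumerate(["pkt_rate", "size", "dur", "port_ent"]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])     # break the feature-label link
    print(f"{name}: importance = {base - clf.score(Xp, y):.3f}")
```

Features the model truly relies on (here the first and third) show a large accuracy drop when permuted, exposing which inputs drive an automated security decision.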
Recent years have witnessed significant advances in technologies and services in modern network applications, including smart grid management, wireless communication, cybersecurity and multi-agent autonomous systems. Given the heterogeneous nature of networked entities, emerging network applications call for game-theoretic models and learning-based approaches to create distributed network intelligence that responds to uncertainties and disruptions in dynamic or adversarial environments. This paper articulates the confluence of networks, games and learning, which establishes a theoretical underpinning for understanding multi-agent decision-making over networks. We provide a selective overview of game-theoretic learning algorithms within the framework of stochastic approximation theory, together with associated applications in representative contexts of modern network systems, such as next generation wireless communication networks, the smart grid and distributed machine learning. In addition to reviewing existing research on game-theoretic learning over networks, we highlight several new angles and research endeavors on learning in games related to recent developments in artificial intelligence. Some of these new angles extrapolate from our own research interests. The overall objective of the paper is to provide the reader with a clear picture of the strengths and challenges of adopting game-theoretic learning methods in the context of network systems, and to identify fruitful future research directions for both theoretical and applied studies.
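To ground the stochastic-approximation viewpoint, the sketch below runs discrete-time fictitious play on matching pennies; with decreasing step sizes the empirical strategies track the best-response dynamics and approach the mixed Nash equilibrium (1/2, 1/2). The game and parameters are illustrative only:

```python
# Fictitious play as stochastic approximation on matching pennies.
# Each player best-responds to the opponent's empirical frequencies,
# updated with decreasing step sizes 1/(t+1) (the SA / ODE viewpoint).
import numpy as np

A = np.array([[1., -1.], [-1., 1.]])     # row player's zero-sum payoffs
p = np.array([1., 0.])                   # row player's empirical strategy
q = np.array([1., 0.])                   # column player's empirical strategy

for t in range(1, 100001):
    br_row = np.eye(2)[np.argmax(A @ q)]     # best response to beliefs
    br_col = np.eye(2)[np.argmin(p @ A)]     # column player minimizes
    p += (br_row - p) / (t + 1)              # decreasing step size
    q += (br_col - q) / (t + 1)

print(p, q)                              # both approach [0.5, 0.5]
```

For zero-sum games such as this one, convergence of the empirical frequencies to the Nash equilibrium is classical (Robinson's theorem), which is what the stochastic-approximation analysis formalizes.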
Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably their most significant impact has been in computer vision, where great advances have been made in challenges such as plausible image generation, image-to-image translation and facial attribute manipulation. Despite the significant successes achieved to date, applying GANs to real-world problems still poses significant challenges, three of which we focus on here: (1) the generation of high-quality images, (2) diversity of image generation, and (3) stable training. Focusing on the degree to which popular GAN technologies have made progress against these challenges, we provide a detailed review of the state of the art in GAN-related research in the published scientific literature. We further structure this review through a convenient taxonomy based on variations in GAN architectures and loss functions. While several reviews of GANs have been presented to date, none have considered the status of the field in terms of progress towards addressing practical challenges relevant to computer vision. Accordingly, we review and critically discuss the most popular architecture-variant and loss-variant GANs for tackling these challenges. Our objective is to provide an overview as well as a critical analysis of the status of GAN research with respect to important computer vision application requirements. In doing so, we also discuss the most compelling computer vision applications in which GANs have demonstrated considerable success, along with suggestions for future research directions. Code related to the GAN variants studied in this work is summarized at //github.com/sheqi/GAN_Review.
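For reference, the sketch below is a minimal non-saturating GAN training loop on a 1-D toy distribution; it is the standard construction used to fix notation for the architecture- and loss-variants reviewed, not the method of any single surveyed paper:

```python
# Minimal non-saturating GAN on a 1-D toy target N(2, 0.25).
# Illustrative sizes and learning rates; not tuned.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0          # samples from the target
    fake = G(torch.randn(64, 8))                   # generator samples

    # Discriminator step: push real -> 1, fake -> 0.
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: non-saturating loss, push D(fake) toward 1.
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())       # should approach 2.0
```

Most loss-variant GANs in the taxonomy replace the `bce` objective (e.g. with a Wasserstein or hinge loss), while architecture-variant GANs replace the two `nn.Sequential` stacks.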