In this paper, we propose a new six-dimensional (6D) movable antenna (6DMA) system for future wireless networks to improve communication performance. Unlike traditional fixed-position antenna (FPA) and existing fluid antenna/two-dimensional (2D) movable antenna (FA/2DMA) systems that adjust only the positions of antennas, the proposed 6DMA system consists of distributed antenna surfaces with independently adjustable three-dimensional (3D) positions as well as 3D rotations within a given space. In particular, this paper applies the 6DMA to the base station (BS) in wireless networks to provide full degrees of freedom (DoFs) for the BS to adapt to the dynamic user spatial distribution in the network. However, a challenging new problem arises: how to optimally control the 6D positions and rotations of all 6DMA surfaces at the BS to maximize the network capacity based on the user spatial distribution, subject to practical constraints on the antennas' 6D movement. To tackle this problem, we first model the 6DMA-enabled BS and the channels between the BS and the users in terms of the 6D positions and rotations of all 6DMA surfaces. Next, we propose an efficient alternating optimization algorithm that searches for the best 6D positions and rotations of all 6DMA surfaces by leveraging the Monte Carlo simulation technique. Specifically, we sequentially optimize the 3D position/3D rotation of each 6DMA surface with those of the other surfaces fixed in an iterative manner. Numerical results show that our proposed 6DMA-BS can significantly improve the network capacity compared to benchmark BS architectures with FPAs or with antennas of limited/partial movability, especially when the user distribution is more spatially non-uniform.
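To make the alternating-optimization idea concrete, the following minimal Python sketch optimizes one surface at a time over a Monte Carlo sample of user directions. The single-angle parameterization, the cosine-shaped gain, and the capacity surrogate are simplifying assumptions for illustration, not the paper's channel or capacity model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sample of user directions (azimuth only), drawn from a
# hypothetical two-cluster mixture to mimic a spatially non-uniform
# user distribution.
users = np.concatenate([rng.normal(0.5, 0.2, 300), rng.normal(2.5, 0.3, 700)])

n_surfaces = 4
# Each surface is parameterized here by a single boresight azimuth, a stand-in
# for the full 3D position + 3D rotation considered in the paper.
rotations = rng.uniform(0, 2 * np.pi, n_surfaces)
candidates = np.linspace(0, 2 * np.pi, 64)

def capacity(rots):
    # Toy surrogate: each user is served by its best-aligned surface and
    # receives a cosine-shaped gain; the network "capacity" is the mean gain.
    gains = np.cos(users[:, None] - rots[None, :])          # (users, surfaces)
    return np.maximum(gains, 0.0).max(axis=1).mean()

# Alternating optimization: sweep the surfaces one at a time, keeping the
# others fixed, and keep the candidate rotation that maximizes the surrogate.
for _ in range(10):
    for s in range(n_surfaces):
        scores = []
        for c in candidates:
            trial = rotations.copy()
            trial[s] = c
            scores.append(capacity(trial))
        rotations[s] = candidates[int(np.argmax(scores))]

print("optimized rotations:", np.round(rotations, 2),
      "surrogate capacity:", round(capacity(rotations), 3))
```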
Integrated sensing and communication (ISAC) is expected to play a prominent role among emerging technologies in future wireless communications. In particular, a communication-radar coexistence system is degraded significantly by mutual interference. In this work, given the advantages of the promising reconfigurable intelligent surface (RIS) technology, we propose a simultaneously transmitting and reflecting RIS (STAR-RIS)-assisted radar coexistence system, where a STAR-RIS is introduced to improve the communication performance while suppressing the mutual interference and providing full-space coverage. Based on the realistic conditions of correlated fading and the presence of multiple user equipments (UEs) on both sides of the RIS, we derive the achievable rates at the radar and the communication receiver in closed form in terms of statistical channel state information (CSI). Next, we perform alternating optimization (AO) of the STAR-RIS and the radar beamforming. Regarding the former, we optimize the amplitudes and phase shifts of the STAR-RIS simultaneously through a projected gradient ascent method (PGAM) for both the energy splitting (ES) and mode switching (MS) operation protocols. The proposed optimization saves considerable overhead since it only needs to be performed once every several coherence intervals. This property is particularly beneficial compared to reflecting-only RIS, because a STAR-RIS involves twice the number of variables, which would otherwise require increased overhead. Finally, simulation results illustrate how the proposed architecture outperforms its conventional RIS counterpart and show how the various parameters affect the performance. Moreover, a benchmark design based on full instantaneous CSI (I-CSI) is provided and shown to yield a higher sum rate, but also a larger overhead and complexity.
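The projected-gradient step can be illustrated with a toy sketch: the amplitudes and phase shifts of an ES-mode STAR-RIS are ascended jointly on a simplified two-user sum rate and then projected back onto the feasible set. The single-antenna channels, noise power, and finite-difference gradients below are assumptions made for brevity, not the statistical-CSI derivation of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                                                # STAR-RIS elements
h = rng.normal(size=N) + 1j * rng.normal(size=N)      # BS -> RIS (toy channel)
g_t = rng.normal(size=N) + 1j * rng.normal(size=N)    # RIS -> transmit-side UE
g_r = rng.normal(size=N) + 1j * rng.normal(size=N)    # RIS -> reflect-side UE
sigma2 = 1.0

def sum_rate(beta_t, theta_t, theta_r):
    # Energy-splitting (ES) protocol: per-element amplitudes satisfy
    # beta_t^2 + beta_r^2 = 1; the two modes have independent phase shifts.
    beta_r = np.sqrt(np.clip(1.0 - beta_t**2, 0.0, 1.0))
    v_t = beta_t * np.exp(1j * theta_t)
    v_r = beta_r * np.exp(1j * theta_r)
    r_t = np.log2(1 + np.abs(np.sum(h * v_t * g_t))**2 / sigma2)
    r_r = np.log2(1 + np.abs(np.sum(h * v_r * g_r))**2 / sigma2)
    return r_t + r_r

# Projected gradient ascent with finite-difference gradients for brevity.
beta_t = np.full(N, np.sqrt(0.5))
theta_t = np.zeros(N)
theta_r = np.zeros(N)
step, eps = 0.05, 1e-5
for _ in range(200):
    x = np.concatenate([beta_t, theta_t, theta_r])
    f0 = sum_rate(beta_t, theta_t, theta_r)
    grad = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        grad[i] = (sum_rate(xp[:N], xp[N:2*N], xp[2*N:]) - f0) / eps
    x += step * grad
    # Projection: amplitudes back to [0, 1], phases wrapped to [0, 2*pi).
    beta_t = np.clip(x[:N], 0.0, 1.0)
    theta_t = np.mod(x[N:2*N], 2 * np.pi)
    theta_r = np.mod(x[2*N:], 2 * np.pi)

print("sum rate after projected gradient ascent:",
      round(sum_rate(beta_t, theta_t, theta_r), 3))
```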
It is crucial to deploy temporary non-terrestrial networks (NTN) in disaster situations where terrestrial networks are no longer operable. Deploying uncrewed aerial vehicle base stations (UAV-BSs) can provide a radio access network (RAN); however, the backhaul link may also be damaged and unserviceable in such disaster conditions. In this regard, high-altitude platform stations (HAPS) have attracted attention as they can be deployed as super macro base stations (SMBS) and data centers. Therefore, in this study, we investigate a three-layer heterogeneous network with different topologies to prolong the lifespan of the temporary network by using UAV-BSs for RAN services and a HAPS-SMBS as the backhaul. Furthermore, a two-layer clustering algorithm is proposed to handle UAV-BS ad hoc networking effectively.
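A two-layer clustering procedure of the flavor described above might be sketched as follows. The user positions, cluster counts, and the use of k-means at both layers are assumptions chosen for illustration, not the proposed algorithm itself.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical ground-user positions in a disaster area (km).
users = rng.uniform(0, 10, size=(500, 2))

# Layer 1: group users into serving areas; each centroid is a candidate
# UAV-BS hovering position providing the RAN.
uav = KMeans(n_clusters=12, n_init=10, random_state=0).fit(users)
uav_positions = uav.cluster_centers_

# Layer 2: group the UAV-BSs themselves; each cluster elects a head that
# relays aggregated traffic towards the HAPS-SMBS backhaul.
backhaul = KMeans(n_clusters=3, n_init=10, random_state=0).fit(uav_positions)
heads = backhaul.cluster_centers_

print("UAV-BS positions:\n", np.round(uav_positions, 2))
print("cluster-head (backhaul relay) positions:\n", np.round(heads, 2))
```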
In this paper, we propose a novel and efficient digital twin (DT) data processing scheme to reduce service latency for multicast short video streaming. In particular, a DT is constructed to emulate and analyze user status for multicast group updates and swipe feature abstraction. Then, a precise measurement model of DT data processing is developed to characterize the relationship among DT model size, user dynamics, and user clustering accuracy. A service latency model, consisting of DT data processing delay, video transcoding delay, and multicast transmission delay, is constructed by incorporating the impact of user clustering accuracy. Finally, a joint optimization problem of DT model size selection and bandwidth allocation is formulated to minimize the service latency. To solve this problem efficiently, a diffusion-based resource management algorithm is proposed, which utilizes the denoising technique to improve the action-generation process in the deep reinforcement learning algorithm. Simulation results based on a real-world dataset demonstrate that the proposed DT data processing scheme outperforms benchmark schemes in terms of service latency.
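The denoising-based action generation can be sketched as below: a (here untrained) noise-prediction network iteratively refines a random action vector conditioned on the DT state. The state and action dimensions, the simplified reverse update, and the reading of the action as (normalized DT model size, normalized bandwidth share) are assumptions for illustration, not the algorithm proposed in the paper.

```python
import torch
import torch.nn as nn

# Placeholder noise-prediction network: maps (noisy action, state, step) to a
# noise estimate. In a full design this would be trained inside the DRL loop;
# here it is untrained and only illustrates the action-generation path.
class NoisePredictor(nn.Module):
    def __init__(self, state_dim=8, action_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(action_dim + state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, action, state, t):
        return self.net(torch.cat([action, state, t], dim=-1))

@torch.no_grad()
def generate_action(model, state, steps=10):
    # Reverse (denoising) process: start from Gaussian noise and iteratively
    # subtract the predicted noise, yielding an action in [0, 1]^2.
    action = torch.randn(1, 2)
    for k in reversed(range(steps)):
        t = torch.full((1, 1), k / steps)
        action = action - model(action, state, t) / steps
    return torch.sigmoid(action)

model = NoisePredictor()
state = torch.randn(1, 8)        # hypothetical DT / user-status features
print("generated action (model size, bandwidth share):", generate_action(model, state))
```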
Semantic communication is of crucial importance for next-generation wireless communication networks. Existing works have developed semantic communication frameworks based on deep learning. However, systems powered by deep learning are vulnerable to threats such as backdoor attacks and adversarial attacks. This paper delves into backdoor attacks targeting deep learning-enabled semantic communication systems. Since current works on backdoor attacks are not tailored to semantic communication scenarios, a new backdoor attack paradigm on semantic symbols (BASS) is introduced, based on which the corresponding defense measures are designed. Specifically, a training framework is proposed to prevent BASS. Additionally, reverse engineering-based and pruning-based defense strategies are designed to protect against backdoor attacks in semantic communication. Simulation results demonstrate the effectiveness of both the proposed attack paradigm and the defense strategies.
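A pruning-based defense of the kind mentioned above is often realized in the fine-pruning style sketched below, where hidden units that remain near-silent on clean inputs are assumed to be trigger-specific and are removed. The toy encoder, prune ratio, and input dimensions are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "semantic encoder": a small MLP whose hidden units may include
# neurons that only activate on backdoor-triggered inputs.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
clean_inputs = torch.randn(256, 32)        # hypothetical clean semantic symbols

with torch.no_grad():
    hidden = torch.relu(model[0](clean_inputs))   # layer-1 activations on clean data
    mean_act = hidden.mean(dim=0)
    prune_ratio = 0.2
    k = int(prune_ratio * mean_act.numel())
    prune_idx = torch.argsort(mean_act)[:k]       # least-active hidden units
    model[0].weight[prune_idx] = 0.0              # zero their incoming weights
    model[0].bias[prune_idx] = 0.0                # and their biases, disabling them

print(f"pruned {k} of {mean_act.numel()} hidden units")
```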
In this article, we propose an accuracy-assuring technique for solving unsymmetric linear systems. Such problems arise in different areas such as image processing, computer vision, and computational fluid dynamics. Parallel implementations of Krylov subspace methods speed up finding approximate solutions for linear systems. In this context, the refined approach in pipelined BiCGStab enhances scalability on distributed-memory machines, yielding substantial speed improvements compared to the standard BiCGStab method. However, the pipelined BiCGStab algorithm sacrifices some accuracy, which is stabilized with the residual replacement technique. This paper aims to address this issue by employing the ExBLAS-based reproducible approach. We validate the idea on a set of matrices from the SuiteSparse Matrix Collection.
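The residual replacement idea referenced above can be seen in a plain (non-pipelined, non-ExBLAS) BiCGStab sketch: the recursively updated residual is periodically replaced by the true residual b - Ax to curb the drift that pipelined variants amplify. The dense test matrix and the replacement period are placeholders for a SuiteSparse matrix and a tuned setting.

```python
import numpy as np

def bicgstab_rr(A, b, x0=None, tol=1e-10, maxiter=500, replace_every=50):
    """Standard BiCGStab with periodic residual replacement."""
    n = b.size
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    r_hat = r.copy()
    rho = alpha = omega = 1.0
    v = p = np.zeros(n)
    for k in range(1, maxiter + 1):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r_hat @ v)
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)
        x += alpha * p + omega * s
        r = s - omega * t
        rho = rho_new
        if k % replace_every == 0:
            r = b - A @ x              # residual replacement step
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
    return x, maxiter

# Small unsymmetric test system (a placeholder for a SuiteSparse matrix).
rng = np.random.default_rng(3)
A = np.eye(200) * 4 + rng.normal(scale=0.3, size=(200, 200))
b = rng.normal(size=200)
x, iters = bicgstab_rr(A, b)
print("iterations:", iters, "true residual norm:", np.linalg.norm(b - A @ x))
```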
In this paper, we investigate the retrieval-augmented generation (RAG) based on Knowledge Graphs (KGs) to improve the accuracy and reliability of Large Language Models (LLMs). Recent approaches suffer from insufficient and repetitive knowledge retrieval, tedious and time-consuming query parsing, and monotonous knowledge utilization. To this end, we develop a Hypothesis Knowledge Graph Enhanced (HyKGE) framework, which leverages LLMs' powerful reasoning capacity to compensate for the incompleteness of user queries, optimizes the interaction process with LLMs, and provides diverse retrieved knowledge. Specifically, HyKGE explores the zero-shot capability and the rich knowledge of LLMs with Hypothesis Outputs to extend feasible exploration directions in the KGs, as well as the carefully curated prompt to enhance the density and efficiency of LLMs' responses. Furthermore, we introduce the HO Fragment Granularity-aware Rerank Module to filter out noise while ensuring the balance between diversity and relevance in retrieved knowledge. Experiments on two Chinese medical multiple-choice question datasets and one Chinese open-domain medical Q&A dataset with two LLM turbos demonstrate the superiority of HyKGE in terms of accuracy and explainability.
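A dependency-free skeleton of such a pipeline might look as follows; llm, extract_entities, kg_paths, and rerank are hypothetical stubs standing in for a real LLM endpoint, a medical NER model, the KG query layer, and the HO Fragment Granularity-aware Rerank Module respectively.

```python
# Hypothetical HyKGE-style pipeline skeleton; every component below is a stub.

def llm(prompt: str) -> str:
    return "stub LLM response for: " + prompt[:40]          # replace with a real LLM call

def extract_entities(text: str) -> list[str]:
    return [w for w in text.split() if w.istitle()]         # replace with medical NER

def kg_paths(entities: list[str]) -> list[str]:
    return [f"{e} -- related_to --> (KG neighbor)" for e in entities]   # replace with KG queries

def rerank(fragments: list[str], query: str, top_k: int = 8) -> list[str]:
    # Granularity-aware rerank stand-in: keep the first top_k fragments; a real
    # module would balance relevance and diversity at the fragment level.
    return fragments[:top_k]

def hykge_answer(query: str) -> str:
    hypothesis = llm("Give a tentative answer and reasoning for: " + query)  # Hypothesis Output
    entities = extract_entities(query + " " + hypothesis)                    # anchor entities
    evidence = rerank(kg_paths(entities), query)                             # retrieve + filter
    context = "\n".join(evidence)
    return llm("Knowledge:\n" + context + "\n\nQuestion: " + query + "\nAnswer:")

print(hykge_answer("Which drug class is first-line for Type 2 Diabetes?"))
```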
Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through methods such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs is still unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them based on the explanation methods they use. We further provide the common performance metrics for GNN explanations and point out several future research directions.
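As a minimal illustration of post-hoc GNN explanation (a gradient-saliency sketch, not GNNExplainer's learned masks), edge importance can be read off the gradient of a prediction with respect to the adjacency matrix of a toy one-layer GCN. The graph, features, and weights below are arbitrary assumptions.

```python
import torch

torch.manual_seed(0)

# Toy graph: 5 nodes, symmetric adjacency, random features, one GCN layer.
A = torch.tensor([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=torch.float, requires_grad=True)
X = torch.randn(5, 8)
W = torch.randn(8, 2)

def gcn_forward(A, X, W):
    A_hat = A + torch.eye(5)                   # add self-loops
    d = A_hat.sum(dim=1)
    D_inv_sqrt = torch.diag(d.pow(-0.5))       # symmetric normalization
    return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

# Explain the prediction of node 0 for class 1: the gradient magnitude with
# respect to each adjacency entry serves as a crude edge-importance score.
out = gcn_forward(A, X, W)
out[0, 1].backward()
edge_importance = A.grad.abs()
print(edge_importance.round(decimals=3))
```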
With the advent of 5G commercialization, the need for more reliable, faster, and more intelligent telecommunication systems is envisaged for the next-generation, beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not just immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most of the existing surveys on B5G security focus on the performance of AI/ML models and their accuracy, and they often overlook the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the security domain of B5G is to make the decision-making processes of security systems transparent and comprehensible to stakeholders, making the systems accountable for automated actions. For every facet of the forthcoming B5G era, including B5G technologies such as the RAN, zero-touch network management, and E2E slicing, this survey emphasizes the role of XAI and the use cases that general users would ultimately enjoy. Furthermore, we present the lessons learned from recent efforts and future research directions on top of currently conducted projects involving XAI.
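One widely used model-agnostic XAI technique, permutation importance, is sketched below for a hypothetical B5G traffic/intrusion classifier. The synthetic features, their names, and the random-forest model are assumptions chosen only to show how feature attributions expose what a black-box security model relies on.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical flow features for a B5G intrusion detector: only the first two
# actually carry signal in this synthetic data.
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=2000) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Model-agnostic explanation: permutation importance reveals which features
# the "black-box" detector relies on, supporting accountability of its decisions.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["pkt_rate", "avg_size", "dur", "ports", "jitter", "ttl"],
                     result.importances_mean):
    print(f"{name:9s} importance: {imp:.3f}")
```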
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing, and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their application on many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that can lift this efficiency bottleneck while preserving the high accuracy of DNNs are in great demand to enable numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems, and applications. We start by introducing popular model compression methods, including pruning, factorization, and quantization, as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video, and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce efficient deep learning system design from both software and hardware perspectives.
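Two of the compression methods listed above, magnitude pruning and post-training quantization, can be sketched in a few lines. The layer size, the 70% sparsity target, and the symmetric int8 scheme are illustrative choices, not a recommended recipe.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(256, 128)

with torch.no_grad():
    w = layer.weight

    # Magnitude pruning: zero out the 70% smallest-magnitude weights, the
    # simplest of the compression methods surveyed above.
    threshold = w.abs().flatten().kthvalue(int(0.7 * w.numel())).values
    mask = (w.abs() > threshold).float()
    w.mul_(mask)

    # Post-training symmetric int8 quantization of the remaining weights.
    scale = w.abs().max() / 127.0
    w_int8 = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    w_deq = w_int8.float() * scale      # dequantized weights used at inference

    print("sparsity:", float((w == 0).float().mean()))
    print("max quantization error:", float((w - w_deq).abs().max()))
```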
Generative adversarial networks (GANs) have been extensively studied in the past few years. Arguably their most significant impact has been in the area of computer vision, where great advances have been made in challenges such as plausible image generation, image-to-image translation, facial attribute manipulation, and similar domains. Despite the significant successes achieved to date, applying GANs to real-world problems still poses significant challenges, three of which we focus on here: (1) the generation of high-quality images, (2) diversity of image generation, and (3) stable training. Focusing on the degree to which popular GAN technologies have made progress against these challenges, we provide a detailed review of the state of the art in GAN-related research in the published scientific literature. We further structure this review through a convenient taxonomy we have adopted based on variations in GAN architectures and loss functions. While several reviews of GANs have been presented to date, none have considered the status of this field based on progress toward addressing practical challenges relevant to computer vision. Accordingly, we review and critically discuss the most popular architecture-variant and loss-variant GANs for tackling these challenges. Our objective is to provide an overview as well as a critical analysis of the status of GAN research in terms of relevant progress towards important computer vision application requirements. As we do this, we also discuss the most compelling applications in computer vision in which GANs have demonstrated considerable success, along with some suggestions for future research directions. Code related to the GAN variants studied in this work is summarized at https://github.com/sheqi/GAN_Review.
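For reference, the generator/discriminator alternation that all the surveyed variants build on can be sketched on toy one-dimensional data. The network sizes, learning rates, and target distribution N(3, 1) are arbitrary choices for illustration, and the non-saturating loss stands in for the many loss variants discussed above.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Minimal GAN on 1-D data: the generator learns to mimic samples from N(3, 1).
G = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0
    fake = G(torch.randn(64, 4))

    # Discriminator step: push real samples towards 1 and fake samples towards 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator (non-saturating loss).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 4))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```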