Semantic communication is of crucial importance for next-generation wireless communication networks. Existing works have developed semantic communication frameworks based on deep learning. However, systems powered by deep learning are vulnerable to threats such as backdoor attacks and adversarial attacks. This paper delves into backdoor attacks targeting deep learning-enabled semantic communication systems. Since current works on backdoor attacks are not tailored for semantic communication scenarios, a new backdoor attack paradigm on semantic symbols (BASS) is introduced, based on which the corresponding defense measures are designed. Specifically, a training framework is proposed to prevent BASS. Additionally, reverse engineering-based and pruning-based defense strategies are designed to protect semantic communication against backdoor attacks. Simulation results demonstrate the effectiveness of both the proposed attack paradigm and the defense strategies.
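To make the pruning-based defense idea concrete, the following minimal PyTorch sketch zeroes the convolutional channels that stay dormant on trusted clean inputs, which is the usual intuition behind pruning away backdoor neurons. It is not the paper's exact BASS defense; `model`, `layer`, `clean_batch`, and the pruning ratio are hypothetical placeholders.

```python
# Hedged sketch of a pruning-based backdoor defense (placeholders, not the paper's method):
# channels that remain dormant on clean inputs are treated as backdoor candidates and pruned.
import torch
import torch.nn as nn

def prune_dormant_channels(model: nn.Module, layer: nn.Conv2d,
                           clean_batch: torch.Tensor, prune_ratio: float = 0.2):
    activations = []
    hook = layer.register_forward_hook(lambda m, i, o: activations.append(o.detach()))
    with torch.no_grad():
        model(clean_batch)                      # forward pass on trusted clean samples
    hook.remove()
    act = torch.cat(activations)                # shape (N, C, H, W)
    mean_act = act.abs().mean(dim=(0, 2, 3))    # average activation per output channel
    n_prune = int(prune_ratio * mean_act.numel())
    idx = torch.argsort(mean_act)[:n_prune]     # least-activated channels on clean data
    with torch.no_grad():
        layer.weight[idx] = 0.0                 # zero the suspected backdoor channels
        if layer.bias is not None:
            layer.bias[idx] = 0.0
    return idx
```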
In forthcoming AI-assisted 6G networks, integrating semantic, pragmatic, and goal-oriented communication strategies becomes imperative. This integration will enable sensing, transmission, and processing of exclusively pertinent task data, ensuring that conveyed information possesses understandable, pragmatic semantic significance aligned with the destination's needs and goals. Without doubt, no communication is error-free. Within this context, besides errors stemming from typical wireless communication dynamics, potential distortions between transmitter-intended and receiver-interpreted meanings can emerge due to limitations in semantic processing capabilities, as well as language and knowledge representation disparities between transmitters and receivers. The main contribution of this paper is twofold. First, it proposes and details a novel mathematical modeling of errors stemming from language mismatches at both the semantic and effectiveness levels. Second, it provides a novel algorithmic solution, leveraging optimal transport theory, to counteract these types of errors. Our numerical results show the potential of the proposed mechanism to compensate for language mismatches, thereby enhancing the attainability of reliable communication in noisy communication environments.
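As a hedged illustration of how an optimal transport plan can couple a transmitter's symbol distribution with a receiver's differently sized vocabulary, the sketch below runs a generic entropic-regularized Sinkhorn iteration. It is not the paper's algorithm; the marginals, cost matrix, and regularization value are toy assumptions.

```python
# Illustrative Sinkhorn sketch: compute a transport plan between two "languages"
# given a semantic distance (cost) matrix. All inputs below are toy assumptions.
import numpy as np

def sinkhorn_plan(p_tx, q_rx, cost, reg=0.1, n_iter=200):
    """p_tx, q_rx: marginal distributions; cost: semantic distance matrix."""
    K = np.exp(-cost / reg)                 # Gibbs kernel
    u = np.ones_like(p_tx)
    v = np.ones_like(q_rx)
    for _ in range(n_iter):
        v = q_rx / (K.T @ u)                # alternate marginal scaling
        u = p_tx / (K @ v)
    return np.diag(u) @ K @ np.diag(v)      # transport plan coupling the two languages

# Toy example: 3 transmitter symbols mapped onto 4 receiver symbols
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.25, 0.25])
C = np.random.rand(3, 4)
plan = sinkhorn_plan(p, q, C)
```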
Advancements in 6G wireless technology have elevated the importance of beamforming, especially for attaining ultra-high data rates via millimeter-wave (mmWave) frequency deployment. Although promising, mmWave bands require substantial beam training to achieve precise beamforming. While initial deep learning models that use RGB camera images demonstrated promise in reducing beam training overhead, their performance suffers from sensitivity to lighting and environmental variations. This sensitivity causes Quality of Service (QoS) to fluctuate, eventually affecting the stability and dependability of networks in dynamic environments, and it emphasizes a critical need for more robust solutions. This paper proposes a robust beamforming technique to ensure consistent QoS under varying environmental conditions. An optimization problem is formulated to maximize users' data rates. To solve the formulated NP-hard optimization problem, we decompose it into two subproblems: the semantic localization problem and the optimal beam selection problem. To solve the semantic localization problem, we propose a novel method that leverages k-means clustering and the YOLOv8 model. To solve the beam selection problem, we propose a novel lightweight hybrid architecture that utilizes various data sources and a weighted entropy-based mechanism to predict the optimal beams. Since rapid and accurate beam predictions are needed to maintain QoS, a novel metric, Accuracy-Complexity Efficiency (ACE), is proposed to quantify this trade-off. Six testing scenarios have been developed to evaluate the robustness of the proposed model. Finally, simulation results demonstrate that the proposed model outperforms several state-of-the-art baselines in terms of beam prediction accuracy, received power, and ACE across the developed test scenarios.
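A minimal sketch of the semantic-localization step is given below: object detections from an RGB frame (stubbed here with synthetic bounding-box centres) are clustered with k-means to obtain coarse user locations. The detector call shown in the comment is an assumed usage of the ultralytics YOLOv8 package, and the cluster count and image size are illustrative.

```python
# Hedged sketch: cluster detected bounding-box centres with k-means to localise users.
import numpy as np
from sklearn.cluster import KMeans

# In the real pipeline the centres would come from a detector, e.g. (assumed usage):
#   from ultralytics import YOLO
#   boxes = YOLO("yolov8n.pt")(rgb_frame)[0].boxes.xywh[:, :2].cpu().numpy()
boxes = np.random.rand(40, 2) * np.array([1920, 1080])   # stand-in detection centres (pixels)

kmeans = KMeans(n_clusters=3, n_init=10).fit(boxes)
user_positions = kmeans.cluster_centers_                  # coarse semantic user locations
print(user_positions)
```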
This paper studies an integrated sensing and communication (ISAC) system, where a multi-antenna base station transmits beamformed signals for joint downlink multi-user communication and radar sensing of an extended target (ET). By considering echo signals as reflections from valid elements on the ET contour, a set of novel Cram\'er-Rao bounds (CRBs) is derived for parameter estimation of the ET, including central range, direction, and orientation. The ISAC transmit beamforming design is then formulated as an optimization problem, aiming to minimize the CRB associated with radar sensing, while satisfying a minimum signal-to-interference-plus-noise ratio (SINR) requirement for each communication user, along with a 3-dB beam coverage constraint tailored for the ET. To solve this non-convex problem, we utilize semidefinite relaxation (SDR) and propose a rank-one solution extraction scheme for non-tight relaxation circumstances. To reduce the computation complexity, we further employ an efficient zero-forcing (ZF) based beamforming design, where the sensing task is performed in the null space of communication channels. Numerical results validate the effectiveness of the obtained CRB, revealing the diverse features of CRB for differently shaped ETs. The proposed SDR beamforming design outperforms benchmark designs with lower estimation error and CRB, while the ZF beamforming design greatly improves computation efficiency with minor sensing performance loss.
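The ZF idea above can be illustrated with a short NumPy sketch: the sensing beam is projected onto the null space of the stacked communication channels so that it causes no interference to the users. The antenna count, user count, and target angle are assumed toy values, not the paper's setup.

```python
# Minimal ZF-style sketch (toy values): confine the sensing beam to the null space
# of the communication channels so it does not leak into the user links.
import numpy as np

rng = np.random.default_rng(0)
Nt, K = 8, 3                                  # transmit antennas, communication users
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

# Orthonormal basis of the null space of H via the SVD
_, _, Vh = np.linalg.svd(H)
null_basis = Vh[K:].conj().T                  # Nt x (Nt - K)

steer = np.exp(1j * np.pi * np.arange(Nt) * np.sin(np.deg2rad(30)))   # target at 30 degrees
w_sense = null_basis @ (null_basis.conj().T @ steer)                  # projected sensing beam
w_sense /= np.linalg.norm(w_sense)

print(np.abs(H @ w_sense))                    # ~0: no leakage into the user channels
```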
Next-generation wireless networks are projected to empower a broad range of Internet-of-Things (IoT) applications and services with extreme data rates, posing new challenges in delivering large-scale connectivity at a low cost to current communication paradigms. Rate-splitting multiple access (RSMA) is one of the most prominent candidates, conceived to address spectrum scarcity while reaching massive connectivity. Meanwhile, symbiotic communication is regarded as an inexpensive way to realize future IoT on a large scale. To achieve the goals of spectrum efficiency improvement and low energy consumption, we merge these advances by introducing a novel paradigm, called symbiotic backscatter RSMA, for the next generation. Specifically, we first establish how to operate the symbiotic system to help readers understand the proposed paradigm, then guide the detailed design of beamforming weights with four potential gain-control (GC) strategies for enhancing symbiotic communication, and finally provide an information-theoretic framework using a new metric, called the symbiotic outage probability (SOP), to characterize the proposed system performance. Through numerical experiments, we show that the developed framework can accurately predict the actual SOP, and that the proposed GC strategies are effective in improving the SOP performance.
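As a hedged illustration of how an outage-style metric such as the SOP can be estimated empirically, the sketch below runs a Monte-Carlo simulation under assumed Rayleigh fading, declaring an outage when either the primary RSMA rate or the backscatter rate drops below its target. The fading model, rate targets, and outage definition are assumptions, not the paper's exact SOP formulation.

```python
# Monte-Carlo sketch of a symbiotic outage-style probability (assumed fading and thresholds).
import numpy as np

rng = np.random.default_rng(1)
trials, snr_db = 100_000, 10.0
snr = 10 ** (snr_db / 10)

g_direct = rng.exponential(scale=1.0, size=trials)       # |h|^2 of the direct RSMA link
g_back = rng.exponential(scale=0.1, size=trials)         # weaker cascaded backscatter link

rate_primary = np.log2(1 + snr * g_direct)
rate_backscatter = np.log2(1 + snr * g_back)

sop = np.mean((rate_primary < 1.0) | (rate_backscatter < 0.1))   # joint (symbiotic) outage
print(f"estimated symbiotic outage probability: {sop:.4f}")
```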
This paper studies the performance trade-off in a multi-user backscatter communication (BackCom) system for integrated sensing and communications (ISAC), where the multi-antenna ISAC transmitter sends excitation signals to power multiple single-antenna passive backscatter devices (BDs), and the multi-antenna ISAC receiver performs joint sensing (localization) and communication tasks based on the backscattered signals from all BDs. Specifically, the localization performance is measured by the Cram\'{e}r-Rao bound (CRB) on the transmission delay and direction of arrival (DoA) of the backscattered signals, whose closed-form expression is obtained by deriving the corresponding Fisher information matrix (FIM), and the communication performance is characterized by the sum transmission rate of all BDs. Then, to characterize the trade-off between the localization and communication performances, the CRB minimization problem with the communication rate constraint is formulated, and is shown to be non-convex in general. By exploiting the hidden convexity, we propose an approach that combines fractional programming (FP) and Schur complement techniques to transform the original problem into an equivalent convex form. Finally, numerical results reveal the trade-off between the CRB and sum transmission rate achieved by our proposed method.
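A short numerical sketch of the CRB-from-FIM relationship mentioned above is given below: once the Fisher information matrix for (delay, DoA) is assembled, the CRB is the corresponding diagonal entry of its inverse, and the Schur complement expresses the bound on one parameter with the other treated as a nuisance parameter. The 2x2 FIM values are made up for illustration, not derived from the paper's model.

```python
# Hedged sketch: CRB as the inverse of an assumed 2x2 FIM for (delay t, DoA a).
import numpy as np

J = np.array([[4.0e4, 1.2e3],        # J = [[J_tt, J_ta],
              [1.2e3, 9.0e2]])       #      [J_ta, J_aa]]  (illustrative values)

crb = np.linalg.inv(J)                                       # full CRB matrix
crb_delay = crb[0, 0]
crb_doa_schur = 1.0 / (J[1, 1] - J[0, 1] ** 2 / J[0, 0])     # Schur-complement form == crb[1, 1]

print(crb_delay, crb[1, 1], crb_doa_schur)
```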
Satellite networks provide communication services to global users with an uneven geographical distribution. In densely populated regions, inter-satellite links (ISLs) often experience congestion, blocking traffic from other links and leading to low link utilization and throughput. In such cases, delay-tolerant traffic can be held on board moving satellites and physically carried past congested areas, thereby mitigating link congestion in densely populated regions. Through rational store-and-forward decision-making, link utilization and throughput can be improved. Building on this foundation, this letter focuses on learning-based decision-making for satellite traffic. First, a link load prediction method based on topology isomorphism is proposed. Then, a Markov decision process (MDP) is formulated to model store-and-forward decision-making. To generate store-and-forward policies, we propose reinforcement learning algorithms based on value iteration and Q-learning. Simulation results demonstrate that the proposed method improves throughput and link utilization while consuming less than 20$\%$ of the time required by constraint-based routing.
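The store-and-forward decision problem can be illustrated with a toy tabular Q-learning sketch: the state is a predicted ISL load level and the action is either to forward a packet now or to store it on board. The state space, reward shaping, and load dynamics below are illustrative assumptions, not the letter's model.

```python
# Toy Q-learning sketch for store-and-forward decisions (assumed states, rewards, dynamics).
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions = 5, 2          # states: predicted ISL load level; actions: 0=forward, 1=store
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Reward forwarding on lightly loaded links and storing on congested ones."""
    if action == 0:                                  # forward now
        reward = 1.0 if state < 3 else -1.0          # penalty if the link is congested
    else:                                            # store and carry
        reward = -0.1                                # small delay cost
    next_state = rng.integers(n_states)              # load evolves (random toy dynamics)
    return next_state, reward

state = 0
for _ in range(20_000):
    action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.argmax(Q, axis=1))          # learned policy: forward when load is low, store when high
```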
Collaborative inference in next-generation networks can enhance Artificial Intelligence (AI) applications, including autonomous driving, personal identification, and activity classification. This method involves a three-stage process: a) data acquisition through sensing, b) feature extraction, and c) feature encoding for transmission. Transmitting the extracted features entails the potential risk of exposing sensitive personal data. To address this issue, this work develops a new privacy-protecting collaborative inference mechanism. Under this mechanism, each edge device in the network protects the privacy of the extracted features before transmitting them to a central server for inference. The mechanism aims to achieve two main objectives while ensuring effective inference performance: 1) reducing communication overhead, and 2) maintaining strict privacy guarantees during feature transmission.
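The abstract does not specify the privacy mechanism, so the sketch below is only an assumed illustration of on-device feature protection: each feature vector is norm-clipped and perturbed with Gaussian noise before transmission, in the spirit of differential privacy. The clipping norm and noise level are hypothetical.

```python
# Assumed illustration (not the paper's mechanism): clip-and-noise feature protection
# applied on the edge device before the features are sent to the server.
import torch

def protect_features(features: torch.Tensor, clip_norm: float = 1.0, noise_std: float = 0.1):
    # Clip each feature vector to bound its sensitivity, then add calibrated noise
    norms = features.norm(dim=1, keepdim=True).clamp(min=1e-12)
    clipped = features * torch.clamp(clip_norm / norms, max=1.0)
    return clipped + noise_std * torch.randn_like(clipped)

protected = protect_features(torch.randn(32, 128))   # 32 feature vectors of dimension 128
```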
In response to the increasing number of devices anticipated in next-generation networks, a shift toward over-the-air (OTA) computing has been proposed. Leveraging the superposition property of multiple access channels, OTA computing enables efficient resource management by supporting simultaneous uncoded transmission in the time and frequency domains. Thus, to advance the integration of OTA computing, our study presents a theoretical analysis addressing practical issues encountered in current digital communication transceivers, such as time sampling error and intersymbol interference (ISI). To this end, we examine the theoretical mean squared error (MSE) for OTA transmission under time sampling error and ISI, while also exploring methods for minimizing the MSE of the OTA transmission. Utilizing alternating optimization, we also derive optimal power policies for both the devices and the base station. Additionally, we propose a novel deep neural network (DNN)-based approach to design waveforms that enhance OTA transmission performance under time sampling error and ISI. To ensure a fair comparison with existing waveforms such as the raised cosine (RC) and the better-than-raised-cosine (BTRC), we incorporate a custom loss function integrating energy and bandwidth constraints, along with practical design considerations such as waveform symmetry. Simulation results validate our theoretical analysis and demonstrate performance gains of the designed pulse over the RC and BTRC waveforms. To facilitate testing of our results without requiring recreation of the DNN structure, we also provide curve-fitting parameters for selected DNN-based waveforms.
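A minimal PyTorch sketch of the DNN-based pulse design is given below: a small network outputs pulse samples, and a custom loss penalizes out-of-band energy, deviation from unit energy, and asymmetry. The network size, pulse length, samples per symbol, and loss weights are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of DNN-based waveform design with a custom loss (assumed sizes and weights).
import torch
import torch.nn as nn

n_samples, sps = 129, 8                     # pulse length and samples per symbol (assumed)
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, n_samples))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(500):
    pulse = net(torch.ones(1, 1)).squeeze()                         # candidate waveform samples
    spectrum = torch.fft.rfft(pulse).abs()
    oob = spectrum[spectrum.shape[0] // sps:].pow(2).sum()          # out-of-band energy
    energy_dev = (pulse.pow(2).sum() - 1.0).pow(2)                  # unit-energy constraint
    asymmetry = (pulse - pulse.flip(0)).pow(2).sum()                # enforce even symmetry
    peak = (pulse[n_samples // 2] - 1.0).pow(2)                     # unit peak at the centre
    loss = oob + energy_dev + asymmetry + 10.0 * peak
    opt.zero_grad()
    loss.backward()
    opt.step()
```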
Digital Twin technology facilitates the monitoring and online analysis of large-scale communication networks. Faster predictions of network performance thus become imperative, especially for analysing Quality of Service (QoS) parameters in large-scale city networks. Discrete Event Simulation (DES) is a standard network analysis technology, and can be further optimised with parallel and distributed execution for speedup, referred to as Parallel Discrete Event Simulation (PDES). However, modelling detailed QoS mechanisms such as DiffServ requires complex event handling for each network router, which can involve excessive simulation events. In addition, current PDES for network analysis mostly adopts conservative scheduling, which suffers from excessive global synchronisation to avoid causality problems. The performance of optimistic PDES for real-world large-scale network topologies and complex QoS mechanisms remains insufficiently analysed. To address these gaps, this paper proposes a simulation toolkit, Quaint, which leverages an optimistic PDES engine, ROSS, for detailed modelling of DiffServ-based networks. A novel event-handling model for each network router is also proposed to significantly reduce the number of events in complex QoS modelling. Quaint has been evaluated using a real-world metropolitan-scale network topology with 5,000 routers/switches. Results show that compared to the conventional simulator OMNeT++/INET, even the sequential mode of Quaint can achieve a speedup of 53 times, and the distributed mode has a speedup of 232 times. Scalability characterisation is conducted to portray the efficiency of distributed execution, and the results indicate the future direction for workload-aware model partitioning.
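To illustrate the per-router event-handling idea in miniature, the toy sequential DES sketch below processes time-stamped packet events from a priority queue and resolves a DiffServ-like priority inside a single handler. It is a generic illustration, not Quaint's or ROSS's actual model; router names, priorities, and service times are made up.

```python
# Toy sequential DES sketch (illustrative only): one handler per event type,
# with DiffServ-like priority resolved inside the arrival handler.
import heapq
import itertools

counter = itertools.count()          # tiebreaker so events with equal times compare safely
events = []

def schedule(t, kind, pkt):
    heapq.heappush(events, (t, next(counter), kind, pkt))

schedule(0.0, "arrival", {"prio": 2, "router": "R1"})
schedule(0.5, "arrival", {"prio": 0, "router": "R1"})

while events:
    now, _, kind, pkt = heapq.heappop(events)
    if kind == "arrival":
        service = 0.1 / (pkt["prio"] + 1)            # higher priority -> shorter service time
        schedule(now + service, "departure", pkt)
    else:
        print(f"t={now:.2f}s router={pkt['router']} departed prio={pkt['prio']}")
```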
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems associated with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
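Two of the surveyed techniques from category (1) can be sketched on a toy layer: magnitude pruning followed by uniform post-training 8-bit weight quantization. The layer size, pruning ratio, and bit width are illustrative choices, and the logic is framework-agnostic rather than any specific toolkit's API.

```python
# Brief sketch of magnitude pruning and post-training 8-bit quantisation on a toy layer.
import torch
import torch.nn as nn

layer = nn.Linear(256, 128)

# (1) Magnitude pruning: zero the 50% smallest-magnitude weights
with torch.no_grad():
    w = layer.weight
    threshold = w.abs().flatten().kthvalue(w.numel() // 2).values
    w.mul_((w.abs() > threshold).float())

# (2) Uniform 8-bit quantisation of the remaining weights
with torch.no_grad():
    scale = w.abs().max() / 127.0
    w_q = torch.round(w / scale).clamp(-127, 127)       # int8 codes
    layer.weight.copy_(w_q * scale)                      # dequantised weights for inference
```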