Reconfigurable intelligent surfaces (RIS) and index modulation (IM) represent key technologies for enabling reliable wireless communication with high energy efficiency. However, to fully take advantage of these technologies in practical deployments, understanding the impact of non-ideal transceiver hardware is paramount. In this context, this paper introduces two RIS-assisted IM communication models, in which the RIS is part of the transmitter and space-shift keying (SSK) is employed for IM, and assesses their performance in the presence of hardware impairments. In the first model, the RIS acts as a passive reflector only, reflecting the incoming SSK-modulated signal intelligently towards the desired receive diversity branch/antenna. The second model uses the RIS both as a transmitter, employing M-ary phase-shift keying for reflection phase modulation (RPM), and as a reflector for the incoming SSK-modulated signal. Considering transmissions subject to Nakagami-m fading and a greedy detection rule at the receiver, the performance of both system configurations is evaluated. Specifically, the pairwise probability of erroneous index detection and the probability of erroneous index detection are adopted as performance metrics, and their closed-form expressions are derived for the RIS-assisted SSK and RIS-assisted SSK-RPM system models. Monte-Carlo simulation studies are carried out to verify the analytical framework, and numerical results are presented to study the dependency of the error performance on the system parameters. The findings highlight the effect of hardware impairments on the performance of the communication system under study.
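To make the greedy detection rule concrete, the following is a minimal Monte-Carlo sketch, not the paper's actual simulator: it assumes unit-power Nakagami-m fading, an idealized RIS that phase-aligns the reflection so the target branch receives the full channel gain, and hypothetical parameter values (`n_rx`, `snr_db`). The receiver simply picks the branch with the largest received magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

def nakagami_m_channel(m, omega, size):
    # Nakagami-m amplitude: channel power is Gamma(m, omega/m), phase uniform
    power = rng.gamma(shape=m, scale=omega / m, size=size)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=size)
    return np.sqrt(power) * np.exp(1j * phase)

def simulate_greedy_ssk(n_rx=4, m=2.0, snr_db=10.0, trials=20000):
    """Monte-Carlo estimate of the index-error probability for a greedy
    max-magnitude detector over Nakagami-m fading branches."""
    snr = 10.0 ** (snr_db / 10.0)
    errors = 0
    for _ in range(trials):
        target = rng.integers(n_rx)
        h = nakagami_m_channel(m, 1.0, n_rx)
        noise = (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)) / np.sqrt(2.0)
        # Idealized RIS assumption: target branch sees the aligned gain |h|
        y = np.abs(h) * np.sqrt(snr) * (np.arange(n_rx) == target) + noise
        # Greedy rule: declare the branch with the largest received magnitude
        if np.argmax(np.abs(y)) != target:
            errors += 1
    return errors / trials
```

Sweeping `snr_db` with this kind of loop is how the analytical error-probability expressions would typically be cross-checked.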
In the era of Internet of Things (IoT), multi-access edge computing (MEC)-enabled satellite-aerial-terrestrial integrated network (SATIN) has emerged as a promising technology to provide massive IoT devices with seamless and reliable communication and computation services. This paper investigates the cooperation of low Earth orbit (LEO) satellites, high altitude platforms (HAPs), and terrestrial base stations (BSs) to provide relaying and computation services for vastly distributed IoT devices. Considering the uncertainty in dynamic SATIN systems, we formulate a stochastic optimization problem to minimize the time-average expected service delay by jointly optimizing resource allocation and task offloading while satisfying the energy constraints. To solve the formulated problem, we first develop a Lyapunov-based online control algorithm to decompose it into multiple one-slot problems. Since each one-slot problem is a large-scale mixed-integer nonlinear program (MINLP) that is intractable for classical computers, we further propose novel hybrid quantum-classical generalized Benders' decomposition (HQCGBD) algorithms to solve the problem efficiently by leveraging quantum advantages in parallel computing. Numerical results validate the effectiveness of the proposed MEC-enabled SATIN schemes.
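The Lyapunov-based decomposition into one-slot problems can be illustrated with a minimal sketch; the virtual-queue recursion and drift-plus-penalty objective below are the standard forms, but the variable names and numbers are hypothetical, not the paper's formulation.

```python
def lyapunov_queue_update(q, energy_used, energy_budget):
    """Virtual energy-queue recursion Q(t+1) = max(Q(t) + e(t) - E_avg, 0);
    keeping Q(t) stable enforces the time-average energy constraint."""
    return max(q + energy_used - energy_budget, 0.0)

def drift_plus_penalty(q, energy_used, energy_budget, delay, v):
    """Per-slot objective V * delay + Q(t) * (e(t) - E_avg): minimizing this
    each slot trades off service delay against energy-constraint drift."""
    return v * delay + q * (energy_used - energy_budget)
```

In the paper's pipeline, minimizing this per-slot objective is what yields the one-slot MINLPs that are then handed to the hybrid quantum-classical Benders' decomposition.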
Existing pruning techniques for large language models (LLMs) targeting domain-specific applications typically follow a two-stage process: pruning the pretrained general-purpose LLMs and then fine-tuning the pruned LLMs on specific domains. However, the pruning decisions, derived from the pretrained weights, remain unchanged during fine-tuning, even though the weights have been updated. Such a combination of fixed pruning decisions and fine-tuned weights may therefore be suboptimal, leading to non-negligible performance degradation. To address this limitation, we propose ATP: All-in-One Tuning and Structural Pruning, a unified one-stage structural pruning and fine-tuning approach that dynamically identifies the current optimal substructure throughout the fine-tuning phase via a trainable pruning decision generator. Moreover, given the limited data available for domain-specific applications, Low-Rank Adaptation (LoRA) has become a common technique for fine-tuning LLMs. In ATP, we introduce a LoRA-aware forward pass and sparsity regularization to ensure that the substructures corresponding to the learned pruning decisions can be directly removed after the ATP process. ATP outperforms the state-of-the-art two-stage pruning methods on tasks in the legal and healthcare domains. More specifically, ATP recovers up to 88% and 91% of the dense model's performance when pruning 40% of the parameters of LLaMA2-7B and LLaMA3-8B, respectively.
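The idea behind a LoRA-aware forward pass with removable substructures can be sketched in a few lines; this is an illustrative toy (numpy, hypothetical shapes), not ATP's implementation: the pruning mask zeroes whole output channels of the merged weight W + BA, so masked rows can later be physically deleted without changing the output.

```python
import numpy as np

def lora_pruned_forward(x, w, a, b, mask):
    """LoRA-aware forward with a structural pruning mask on output channels:
    y = mask * ((W + B @ A) @ x). Channels with mask == 0 produce exact
    zeros, so the corresponding rows of W and B are removable afterwards."""
    return mask * ((w + b @ a) @ x)
```

Sparsity regularization would then push the (trainable) mask entries toward hard 0/1 values during fine-tuning, so that the mask becomes an actual removal decision rather than a soft gate.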
Affine frequency division multiplexing (AFDM) is a promising new multicarrier technique for high-mobility communications based on the discrete affine Fourier transform (DAFT). By properly tuning two parameters in the DAFT module, the effective channel in the DAFT domain can completely circumvent path overlap, thereby constituting a full representation of the delay-Doppler profile. However, AFDM suffers from a high peak-to-average power ratio (PAPR), stemming from the randomness of the modulated symbols. To reduce the PAPR of AFDM, a novel algorithm named grouped pre-chirp selection (GPS) is proposed in this letter. GPS varies the pre-chirp parameter across subcarrier groups in a non-enumerated manner and then selects the signal with the smallest PAPR among all candidate signals. We detail the operational procedures of the GPS algorithm and analyze it from four aspects: PAPR performance, computational complexity, spectral efficiency, and bit error rate (BER) performance. Simulations confirm the effectiveness of the proposed GPS in reducing PAPR. At the cost of a slight loss in communication performance, AFDM with GPS can achieve better PAPR performance than orthogonal time frequency space (OTFS) while maintaining a lower modulation complexity.
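The PAPR metric and the select-minimum step of a candidate-based scheme like GPS are easy to state precisely; the sketch below shows only this generic selection step (the grouped pre-chirp candidate generation itself is AFDM-specific and omitted), with hypothetical function names.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def select_min_papr(candidates):
    """GPS-style selection: among candidate time-domain signals (each
    generated with a different grouped pre-chirp parameter set), keep the
    one with the smallest PAPR."""
    paprs = [papr_db(c) for c in candidates]
    k = int(np.argmin(paprs))
    return k, paprs[k]
```

As in other selection-based PAPR reduction schemes, the index `k` of the chosen candidate must be conveyed to the receiver as side information (or detected blindly).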
Connected and autonomous vehicles (CAVs) offload computationally intensive tasks to multi-access edge computing (MEC) servers via vehicle-to-infrastructure (V2I) communication, enabling applications within the vehicular metaverse, which transforms the physical environment into a digital space that enables advanced analysis and predictive modeling. A core challenge is physical-to-virtual (P2V) synchronization through digital twins (DTs), which relies on MEC networks and ultra-reliable low-latency communication (URLLC). To address this, we introduce radiance field (RF) delta video compression (RFDVC), an RF-encoder/RF-decoder architecture that uses distributed RFs as DTs storing photorealistic 3D urban scenes in compressed form. The method extracts the differences between a CAV frame capturing the actual traffic and an RF frame rendering the empty scene from the same camera pose; these differences are batched, encoded, and transmitted over the MEC network. Experiments show data savings of up to 71% against the H.264 codec and 44% against the H.265 codec under varied conditions such as lighting changes and rain. RFDVC also demonstrates resilience to transmission errors, achieving up to +0.29 structural similarity index measure (SSIM) improvement at a block error rate (BLER) of 0.35 in non-rainy conditions and +0.25 at a BLER of 0.2 in rainy conditions, ensuring superior visual quality compared to standard video coding (VC) methods.
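The delta-extraction idea can be sketched very simply; this toy (numpy, hypothetical threshold) only illustrates the principle that pixels matching the rendered empty-scene RF frame compress to zeros, not RFDVC's actual encoder.

```python
import numpy as np

def rf_delta(cav_frame, rf_frame, threshold=8):
    """Keep only pixels where the captured CAV frame differs from the
    rendered empty-scene RF frame; the static background becomes zeros,
    which downstream entropy coding compresses very well."""
    diff = cav_frame.astype(np.int16) - rf_frame.astype(np.int16)
    mask = np.abs(diff) > threshold
    return np.where(mask, diff, 0).astype(np.int16)

def rf_reconstruct(delta, rf_frame):
    """Decoder side: add the transmitted delta back onto the locally
    rendered RF frame from the same camera pose."""
    return np.clip(rf_frame.astype(np.int16) + delta, 0, 255).astype(np.uint8)
```

The key systems point is that the RF frame never crosses the network: both encoder and decoder render it locally from the shared DT given the camera pose, so only the (sparse) delta is transmitted.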
Class imbalance and distributional differences in large datasets present significant challenges for classification tasks in machine learning, often leading to biased models and poor predictive performance for minority classes. This work introduces two novel undersampling approaches: mutual information-based stratified simple random sampling and support points optimization. These methods prioritize representative data selection, effectively minimizing information loss. Empirical results across multiple classification tasks demonstrate that our methods outperform traditional undersampling techniques, achieving higher balanced classification accuracy. These findings highlight the potential of combining statistical concepts with machine learning to address class imbalance in practical applications.
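As a baseline for the stratified sampling idea, here is a minimal stdlib sketch of per-class (stratum) simple random undersampling to the minority-class size; the paper's method additionally uses mutual information to guide stratification, which this toy omits.

```python
import random
from collections import defaultdict

def stratified_undersample(samples, labels, seed=0):
    """Undersample every class to the size of the smallest one via simple
    random sampling within each class (stratum), yielding a balanced set."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)
    n_min = min(len(group) for group in by_class.values())
    out = []
    for y, group in by_class.items():
        for s in rng.sample(group, n_min):
            out.append((s, y))
    return out
```

The representativeness argument is that sampling within strata (rather than uniformly over the whole majority class) preserves each subgroup's share of the retained data.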
In analog neuromorphic chips, designers can embed computing primitives in the intrinsic physical properties of devices and circuits, heavily reducing device count and energy consumption, and enabling high parallelism, because all devices are computing simultaneously. Neural network parameters can be stored in local analog non-volatile memories (NVMs), saving the energy required to move data between memory and logic. However, the main drawback of analog sub-threshold electronic circuits is their dramatic temperature sensitivity. In this paper, we demonstrate that a temperature compensation mechanism can be devised to solve this problem. We have designed and fabricated a chip implementing a two-layer analog neural network trained to classify low-resolution images of handwritten digits with a low-cost single-poly complementary metal-oxide-semiconductor (CMOS) process, using unconventional analog NVMs for weight storage. We demonstrate a temperature-resilient analog neuromorphic chip for image recognition operating between 10$^{\circ}$C and 60$^{\circ}$C without loss of classification accuracy, within 2\% of the corresponding software-based neural network in the whole temperature range.
This paper investigates the use of multi-agent reinforcement learning (MARL) to address distributed channel access in wireless local area networks. In particular, we consider the challenging yet more practical case where the agents heterogeneously adopt value-based or policy-based reinforcement learning algorithms to train the model. We propose a heterogeneous MARL training framework, named QPMIX, which adopts a centralized-training-with-distributed-execution paradigm to enable heterogeneous agents to collaborate. Moreover, we theoretically prove the convergence of the proposed heterogeneous MARL method under linear value function approximation. Our method maximizes the network throughput and ensures fairness among stations, thereby enhancing the overall network performance. Simulation results demonstrate that the proposed QPMIX algorithm improves throughput and reduces mean delay, delay jitter, and collision rate compared with conventional carrier-sense multiple access with collision avoidance in the saturated traffic scenario. Furthermore, QPMIX is shown to be robust in unsaturated and delay-sensitive traffic scenarios, and it promotes cooperation among heterogeneous agents.
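The linear value function approximation assumed in the convergence proof has a compact standard form; the TD(0) sketch below is textbook material for context, with hypothetical feature vectors and step sizes, and is not QPMIX's actual update.

```python
def td_update(w, phi_s, r, phi_next, alpha=0.1, gamma=0.9):
    """One TD(0) step under linear value approximation V(s) = w . phi(s):
    w <- w + alpha * (r + gamma * V(s') - V(s)) * phi(s)."""
    v_s = sum(wi * fi for wi, fi in zip(w, phi_s))
    v_next = sum(wi * fi for wi, fi in zip(w, phi_next))
    delta = r + gamma * v_next - v_s  # TD error
    return [wi + alpha * delta * fi for wi, fi in zip(w, phi_s)]
```

Convergence analyses of this kind typically bound the TD error's projection onto the feature space, which is why the linearity assumption matters.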
This work proposes a novel privacy-preserving cyberattack detection framework for blockchain-based Internet-of-Things (IoT) systems. In our approach, artificial intelligence (AI)-driven detection modules are strategically deployed at blockchain nodes to detect attacks in real time, ensuring high accuracy and minimal delay. To achieve this efficiency, the model training is conducted by a cloud service provider (CSP). Accordingly, blockchain nodes send their data to the CSP for training, but to safeguard privacy, the data is encrypted using homomorphic encryption (HE) before transmission. This encryption method allows the CSP to perform computations directly on encrypted data without decryption, preserving data privacy throughout the learning process. To handle the substantial volume of encrypted data, we introduce an innovative packing algorithm in a Single-Instruction-Multiple-Data (SIMD) manner, enabling efficient training on HE-encrypted data. Building on this, we develop a novel deep neural network training algorithm optimized for encrypted data. We further propose a privacy-preserving distributed learning approach based on the FedAvg algorithm, which parallelizes the training across multiple workers, significantly improving computation time. Upon completion, the CSP distributes the trained model to the blockchain nodes, enabling them to perform real-time, privacy-preserved detection. Our simulation results demonstrate that our proposed method not only reduces the training time but also achieves detection accuracy nearly identical to the approach without encryption, with a gap of around 0.01%. Additionally, our real implementations on various blockchain consensus algorithms and hardware configurations show that our proposed framework can be effectively adapted to real-world systems.
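The FedAvg aggregation step referenced above has a well-known plaintext form, sketched below for context (in the paper it would be applied to HE-encrypted parameters, which this stdlib toy does not attempt): each worker's model is weighted by its local data size.

```python
def fedavg(worker_weights, worker_sizes):
    """FedAvg aggregation: weighted average of worker model parameters
    (here flat lists of floats), with weights proportional to each
    worker's local dataset size."""
    total = sum(worker_sizes)
    dim = len(worker_weights[0])
    agg = [0.0] * dim
    for w, n in zip(worker_weights, worker_sizes):
        for i in range(dim):
            agg[i] += (n / total) * w[i]
    return agg
```

Because this aggregation is just additions and scalar multiplications, it is the kind of operation that additively homomorphic schemes can evaluate directly on ciphertexts, which is what makes the HE-based variant feasible.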
The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, since edge computation requirements and licensing (data access) issues often keep the data in place. While federated learning (FL) has emerged as a framework for model training without direct data sharing and exchange, effectively modeling the complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model -- Cross-Node Federated Graph Neural Network (CNFGNN) -- which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes is generated locally on each node and remains decentralized. CNFGNN operates by disentangling temporal dynamics modeling on devices from spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring modest communication cost.
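The alternating-optimization pattern described above can be sketched abstractly; this toy (hypothetical `node_step`/`server_step` callables) only shows the control flow in which device-side temporal updates and server-side spatial updates alternate, with only compact node states crossing the network.

```python
def alternating_optimize(node_step, server_step, node_states, server_state, rounds=3):
    """Alternate local (on-device) updates with a server-side update.
    Each round: every node refines its state given the current server
    state, then the server aggregates the refreshed node states."""
    for _ in range(rounds):
        node_states = [node_step(s, server_state) for s in node_states]
        server_state = server_step(node_states)
    return node_states, server_state
```

In CNFGNN's setting the node step would train the temporal model on local sequences and the server step would run the GNN over node embeddings; the toy above just shows why raw data never needs to leave a node.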
In this paper, we propose joint learning of attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process, so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
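The beam-search step can be sketched generically; this toy (a hypothetical `step_scores` callable returning per-label log-probabilities, including an `"EOS"` stop entry) illustrates keeping the top-scoring partial label sequences while forbidding repeated labels, not the paper's exact decoder.

```python
def beam_search_labels(step_scores, beam_width=2, max_len=10):
    """Beam search over label sequences for multi-label prediction: at each
    step, extend every beam with each unused label, keep the beam_width
    highest-scoring partial sequences, and collect sequences that emit EOS."""
    beams = [([], 0.0)]  # (label sequence, cumulative log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for label, logp in step_scores(seq).items():
                if label in seq:
                    continue  # a label is predicted at most once
                if label == "EOS":
                    finished.append((seq, score + logp))
                else:
                    candidates.append((seq + [label], score + logp))
        if not candidates:
            break
        candidates.sort(key=lambda t: t[1], reverse=True)
        beams = candidates[:beam_width]
    finished.extend(beams)
    return max(finished, key=lambda t: t[1])[0]
```

Because the search scores whole sequences rather than committing greedily to one label at a time, an early low-confidence prediction does not lock the decoder into a bad label set.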