In this work, we investigate the problem of neural error-correction decoding and, more specifically, the recently introduced syndrome-based decoding technique, which tackles the scalability of the training phase for larger code sizes. We improve on previous works by enabling full decoding of the transmitted message rather than of the codeword, which allows the application to non-systematic codes, and by proving that the single-message training property remains viable. The suggested system is implemented and tested on polar codes of sizes (64,32) and (128,64), and on a BCH code of size (63,51), leading to a significant improvement in both Bit Error Rate (BER) and Frame Error Rate (FER), with gains between 0.3 dB and 1 dB for the implemented codes in the high Signal-to-Noise Ratio (SNR) regime.
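To make the syndrome-based idea concrete, the sketch below shows a minimal decoder of this family in PyTorch: the network sees only the channel reliabilities and the syndrome of the hard decisions, both of which are independent of the transmitted codeword for a linear code over a symmetric channel, which is what makes single-message training possible. The architecture, layer widths, and flip-based output are illustrative assumptions, not the authors' design.

import torch
import torch.nn as nn

class SyndromeDecoder(nn.Module):
    """Toy syndrome-based neural decoder: estimate which hard decisions to flip."""
    def __init__(self, n, n_minus_k):
        super().__init__()
        # Input: n channel reliabilities |y| concatenated with the (n-k)-bit syndrome.
        self.net = nn.Sequential(
            nn.Linear(n + n_minus_k, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n),            # one logit per bit: probability the hard decision is wrong
        )

    def forward(self, y, H):
        # y: (batch, n) received BPSK values; H: (n-k, n) parity-check matrix (float 0/1).
        hard = (y < 0).float()                       # hard decisions
        syndrome = (hard @ H.t()) % 2                # codeword-independent syndrome
        flips = self.net(torch.cat([y.abs(), syndrome], dim=-1))
        return (hard + torch.sigmoid(flips).round()) % 2   # corrected word estimate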
In this study, we introduce a domain-decomposition-based distributed training and inference approach for message-passing neural networks (MPNN). Our objective is to address the challenge of scaling edge-based graph neural networks as the number of nodes increases. Through our distributed training approach, coupled with Nyström-approximation sampling techniques, we present a scalable graph neural network, referred to as DS-MPNN (D and S standing for distributed and sampled, respectively), capable of scaling up to $O(10^5)$ nodes. We validate our sampling and distributed training approach on two cases: (a) a Darcy flow dataset and (b) steady RANS simulations of 2-D airfoils, providing comparisons with both the single-GPU implementation and node-based graph convolution networks (GCNs). The DS-MPNN model achieves accuracy comparable to the single-GPU implementation, accommodates a significantly larger number of nodes than the single-GPU variant (S-MPNN), and significantly outperforms the node-based GCN.
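A rough sketch of the sampling idea is given below in PyTorch: a single message-passing layer that, on each forward pass, aggregates messages over only a random subset of edges. Uniform edge subsampling here is a simplified stand-in for the paper's Nyström-approximation sampling, and the domain-decomposition / multi-GPU machinery is omitted entirely.

import torch
import torch.nn as nn

class SampledMPNNLayer(nn.Module):
    def __init__(self, node_dim, edge_dim, hidden=64):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, node_dim))
        self.upd = nn.GRUCell(node_dim, node_dim)

    def forward(self, x, edge_index, edge_attr, sample_ratio=0.3):
        # x: (N, node_dim), edge_index: (2, E) long tensor, edge_attr: (E, edge_dim).
        keep = torch.randperm(edge_index.shape[1])[: int(sample_ratio * edge_index.shape[1])]
        src, dst = edge_index[0, keep], edge_index[1, keep]
        m = self.msg(torch.cat([x[src], x[dst], edge_attr[keep]], dim=-1))
        agg = torch.zeros_like(x).index_add_(0, dst, m)   # sum sampled messages per receiving node
        return self.upd(agg, x)                           # GRU-style node update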
In this work, we unveil the advantages of synergizing cooperative rate splitting (CRS) with user relaying and simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR RIS). Specifically, we propose a novel STAR RIS-assisted CRS transmission framework featuring six unique transmission modes that leverage various combinations of the relaying protocols (full duplex (FD) and half duplex (HD)) and the STAR RIS configuration protocols (energy splitting (ES), mode switching (MS), and time splitting (TS)). With the objective of maximizing the minimum user rate, we then propose a unified successive convex approximation (SCA)-based alternating optimization (AO) algorithm to jointly optimize the transmit active beamforming, common rate allocation, STAR RIS passive beamforming, and time allocation (for the HD or TS protocols), subject to the transmit power constraint at the base station (BS) and the law of energy conservation at the STAR RIS. To alleviate the computational burden, we further propose a low-complexity algorithm that incorporates a closed-form passive beamforming design. Numerical results show that our proposed framework significantly enhances user fairness compared with conventional CRS schemes without STAR RIS and other STAR RIS-empowered multiple access schemes. Moreover, the proposed low-complexity algorithm dramatically reduces the computational complexity while achieving performance very close to that of the AO method.
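The flavor of the SCA step can be illustrated on a much simpler max-min rate problem. The toy below solves a multi-user power-control analogue with cvxpy, linearizing the subtracted concave term of each rate around the previous iterate; the paper's actual variables (active and passive beamforming, common rate allocation, time allocation) and constraints are not reproduced here.

import numpy as np
import cvxpy as cp

K, P_max, sigma2 = 3, 1.0, 0.1
rng = np.random.default_rng(0)
G = rng.exponential(1.0, (K, K))           # G[k, j]: gain of user j's signal at receiver k
p_prev = np.full(K, P_max / K)             # feasible starting point

for _ in range(15):
    p, t = cp.Variable(K, nonneg=True), cp.Variable()
    cons = [cp.sum(p) <= P_max]
    for k in range(K):
        interf = G[k] @ p - G[k, k] * p[k]                                 # interference (affine in p)
        interf_prev = sigma2 + float(G[k] @ p_prev - G[k, k] * p_prev[k])
        # rate_k = log(sigma2 + signal + interference) - log(sigma2 + interference);
        # SCA: upper-bound the subtracted concave term by its first-order expansion.
        lin = np.log(interf_prev) + (sigma2 + interf - interf_prev) / interf_prev
        cons.append(cp.log(sigma2 + G[k] @ p) - lin >= t)
    cp.Problem(cp.Maximize(t), cons).solve()
    p_prev = p.value                                                       # new expansion point
print("toy max-min rate (nats/s/Hz):", float(t.value))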
In this paper, we investigate the millimeter-wave (mmWave) near-field beam training problem of finding the correct beam direction. To address the high complexity and low identification accuracy of existing beam training techniques, we propose an efficient hashing multi-arm beam (HMB) training scheme for the near-field scenario. Specifically, we first design a set of sparse bases based on the polar-domain sparsity of the near-field channel. Then, random hash functions are chosen to construct the near-field multi-arm beam training codebook. Each multi-arm beam codeword is scanned in one time slot until all predefined codewords have been traversed. Finally, soft decision and voting methods are applied to distinguish the signals from different base stations and obtain correctly aligned beams. Simulation results show that the proposed near-field HMB training method reduces the beam training overhead to a logarithmic level and achieves 96.4% of the identification accuracy of exhaustive beam training. Moreover, we verify its applicability in the far-field scenario as well.
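The hash-and-vote principle can be conveyed with a short simulation. The snippet below assumes an idealized, noiseless single-BS setting with binary random hashing: each measurement reveals which of two multi-arm beams carries the energy, every consistent direction collects a vote, and roughly log2(N) measurements suffice to isolate the true direction. The actual scheme additionally relies on polar-domain sparse bases and soft decisions.

import numpy as np

rng = np.random.default_rng(1)
N = 256                                   # candidate beam directions
true_dir = int(rng.integers(N))           # ground-truth direction
T = int(np.ceil(np.log2(N))) + 4          # number of multi-arm measurements (logarithmic in N)

votes = np.zeros(N)
for _ in range(T):
    h = rng.integers(0, 2, size=N)        # random hash: split directions into two multi-arm beams
    observed = h[true_dir]                # which beam carries the strongest received energy
    votes[h == observed] += 1             # every direction consistent with the observation gets a vote

print("identified correctly:", int(np.argmax(votes)) == true_dir)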
In this work, we introduce a new research challenge: evaluating positive and potentially harmful messages within music products. We begin by establishing a multi-faceted, multi-task benchmark for music content assessment. We then introduce an efficient multi-task predictive model with ordinality enforcement to address this challenge. Our findings reveal that the proposed method not only significantly outperforms strong task-specific alternatives but can also assess multiple aspects simultaneously. Furthermore, through detailed case studies in which we employ Large Language Models (LLMs) as surrogates for content assessment, we provide valuable insights to inform and guide future research on this topic. The code for dataset creation and model implementation is publicly available at https://github.com/RiTUAL-UH/music-message-assessment.
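As one possible reading of "ordinality enforcement", the sketch below implements a multi-task head with cumulative-logit (CORAL-style) outputs on top of a shared encoder: each task uses a single score direction and strictly increasing thresholds, so the predicted probabilities P(label > k) are monotone in k by construction. This is an assumed instantiation, not the authors' exact model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class OrdinalMultiTaskHead(nn.Module):
    def __init__(self, enc_dim, tasks):              # tasks: {task_name: num_ordinal_levels}
        super().__init__()
        self.scores = nn.ModuleDict({t: nn.Linear(enc_dim, 1) for t in tasks})
        # K-1 learnable threshold increments per task; softplus keeps them positive,
        # so the cumulative thresholds are strictly increasing (the ordinality constraint).
        self.thresholds = nn.ParameterDict({t: nn.Parameter(torch.zeros(k - 1)) for t, k in tasks.items()})

    def forward(self, h):                             # h: (batch, enc_dim) pooled text-encoder output
        out = {}
        for t, lin in self.scores.items():
            cum = lin(h) - torch.cumsum(F.softplus(self.thresholds[t]), dim=0)
            out[t] = torch.sigmoid(cum)               # out[t][:, k] = P(label_t > k), monotone in k
        return out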
In this work, we introduce DeepIPC, an end-to-end model for autonomous driving that seamlessly integrates perception and control. Unlike traditional models that handle these tasks separately, DeepIPC combines a perception module, which processes RGBD images for semantic segmentation and generates bird's eye view (BEV) mappings, with a controller module that uses these representations together with GNSS and angular speed measurements to predict navigational waypoints. This integration allows DeepIPC to efficiently translate complex environmental data into actionable driving commands. Our comprehensive evaluation demonstrates DeepIPC's superior drivability and multi-task efficiency across diverse real-world scenarios, setting a new benchmark for end-to-end autonomous driving systems with a leaner model architecture. The experimental results underscore DeepIPC's potential to significantly enhance autonomous vehicular navigation and mark a step forward in the development of autonomous driving technologies. For further insights and replication, we will make our code and datasets available at https://github.com/oskarnatan/DeepIPC.
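The division of labor between the two modules can be sketched as follows; the backbones, feature sizes, and waypoint count here are placeholders, and the real architecture is in the repository.

import torch
import torch.nn as nn

class PerceptionModule(nn.Module):
    def __init__(self, n_classes=23, feat=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(4, 32, 3, 2, 1), nn.ReLU(),
                                     nn.Conv2d(32, feat, 3, 2, 1), nn.ReLU())
        self.seg_head = nn.Conv2d(feat, n_classes, 1)        # semantic segmentation logits
        self.pool = nn.AdaptiveAvgPool2d(1)                   # pooled BEV-style feature vector

    def forward(self, rgbd):                                  # rgbd: (B, 4, H, W)
        f = self.encoder(rgbd)
        return self.seg_head(f), self.pool(f).flatten(1)

class ControllerModule(nn.Module):
    def __init__(self, feat=128, n_waypoints=3):
        super().__init__()
        self.n_wp = n_waypoints
        self.fuse = nn.GRUCell(feat + 3, 128)                 # perception features + GNSS (2) + angular speed (1)
        self.wp = nn.Linear(128, 2 * n_waypoints)             # (x, y) offsets of the predicted waypoints

    def forward(self, bev_feat, gnss, ang_speed, h=None):
        h = self.fuse(torch.cat([bev_feat, gnss, ang_speed], dim=-1), h)
        return self.wp(h).view(-1, self.n_wp, 2), h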
Neuropathies are gaining increasing relevance in clinical settings, as they can permanently jeopardize a person's life. To support the recovery of patients, fully implanted devices are emerging as one of the most promising solutions. However, these devices, even when they become an integral part of a complex neural nanonetwork system, pose numerous challenges. In this article, we address one of them: the classification of motor/sensory stimuli. The task is performed by exploring four different types of artificial neural networks (ANNs) to extract various sensory stimuli from the electroneurographic (ENG) signal measured in the sciatic nerve of rats. Different dataset sizes are considered to analyze the feasibility of the investigated ANNs for real-time classification through a comparison of their performance in terms of accuracy, F1-score, and prediction time. The design of the ANNs takes advantage of modelling the ENG signal as a multiple-input multiple-output (MIMO) system, which describes the measurements taken by state-of-the-art implanted nerve interfaces. These interfaces are based on multi-contact cuff electrodes that achieve nanoscale spatial discrimination of the nerve activity. The MIMO ENG signal model is another contribution of this paper. Our results show that some ANNs are more suitable for real-time applications, being capable of achieving accuracies above $90\%$ for signal windows of $100$ and $200$ ms with a processing time low enough to be effective for pathology recovery.
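For illustration, one of the lighter ANN candidates could look like the 1-D CNN below, which classifies a multi-channel ENG window and reports its own inference time; the channel count, window length, sampling rate, and class count are assumptions made for the example.

import time
import torch
import torch.nn as nn

n_channels, window_len, n_classes = 16, 200, 3    # e.g. 200 samples ~ a 200 ms window at 1 kHz (assumed)

model = nn.Sequential(
    nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(64, n_classes),       # one logit per motor/sensory stimulus class
)

x = torch.randn(1, n_channels, window_len)        # one ENG window
t0 = time.perf_counter()
with torch.no_grad():
    logits = model(x)
print("predicted class:", int(logits.argmax(-1)),
      "inference time (ms):", 1e3 * (time.perf_counter() - t0))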
In this work, we investigate the joint visibility region (VR) detection and channel estimation (CE) problem for extremely large-scale multiple-input multiple-output (XL-MIMO) systems, considering both the spherical wavefront effect and the spatial non-stationary (SnS) property. Unlike existing SnS CE methods that rely on the statistical characteristics of channels in the spatial or delay domain, we propose an approach that simultaneously exploits the antenna-domain spatial correlation and the wavenumber-domain sparsity of SnS channels. To this end, we introduce a two-stage VR detection and CE scheme. In the first stage, the belief regarding the visibility of the antennas is obtained through a VR detection-oriented message passing (VRDO-MP) scheme, which fully exploits the spatial correlation among adjacent antenna elements. In the second stage, leveraging the VR information and the wavenumber-domain sparsity, we accurately estimate the SnS channel using the belief-based orthogonal matching pursuit (BB-OMP) method. Simulations show that the proposed algorithms significantly enhance VR detection and CE accuracy compared to existing methods, especially in low signal-to-noise ratio (SNR) scenarios.
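A bare-bones version of the second stage is sketched below: a belief-weighted orthogonal matching pursuit in which per-antenna visibility beliefs simply down-weight the corresponding measurement rows. The paper's belief computation (first stage) and dictionary construction are not reproduced, so treat this only as an indication of how VR information can enter an OMP-type estimator.

import numpy as np

def belief_omp(y, A, belief, n_iter):
    """y: (M,) antenna-domain observation, A: (M, N) wavenumber-domain dictionary,
    belief: (M,) in [0, 1], the probability that each antenna is visible."""
    W = np.diag(belief)                     # visibility beliefs as measurement-row weights
    residual, support = W @ y, []
    for _ in range(n_iter):
        corr = np.abs((W @ A).conj().T @ residual)
        corr[support] = 0                   # do not reselect atoms
        support.append(int(np.argmax(corr)))
        x_s, *_ = np.linalg.lstsq(W @ A[:, support], W @ y, rcond=None)
        residual = W @ y - (W @ A[:, support]) @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x                                # sparse wavenumber-domain channel estimate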
In this work, we study integrated sensing and communication (ISAC) networks with the aim of effectively balancing sensing and communication (S&C) performance at the network level. Through the simultaneous use of coordinated multi-point (CoMP) joint transmission and distributed multiple-input multiple-output (MIMO) radar techniques, we propose a cooperative networked ISAC scheme to enhance both S&C services. The tool of stochastic geometry is then exploited to capture the S&C performance, which allows us to illuminate the key cooperative dependencies in the ISAC network. Remarkably, the derived expression of the Cramér-Rao lower bound (CRLB) of the localization accuracy unveils a significant finding: deploying $N$ ISAC transceivers yields an enhanced sensing performance across the entire network, in accordance with the $\ln^2 N$ scaling law. Simulation results demonstrate that, compared to the time-sharing scheme, the proposed cooperative ISAC scheme effectively improves the average data rate and reduces the CRLB.
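The qualitative effect of adding cooperating transceivers can be seen with a very small Monte Carlo experiment: under idealized time-of-arrival ranging with equal error variance, the position-error bound tightens as more anchors are deployed. This toy only illustrates the monotone improvement; the $\ln^2 N$ law comes from the paper's stochastic-geometry analysis and is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
sigma2, target = 1.0, np.zeros(2)          # ranging error variance and target position (assumed)

for N in [4, 8, 16, 32, 64]:
    bounds = []
    for _ in range(200):
        anchors = rng.uniform(-100, 100, (N, 2))             # ISAC transceiver positions
        u = target - anchors
        u /= np.linalg.norm(u, axis=1, keepdims=True)         # unit direction vectors to the target
        J = (u.T @ u) / sigma2                                # Fisher information for TOA ranging
        bounds.append(np.trace(np.linalg.inv(J)))             # CRLB on the squared position error
    print(N, "transceivers -> average CRLB:", round(float(np.mean(bounds)), 3))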
In this study, we uncover the unexpected efficacy of residual-based large language models (LLMs) as part of encoders for biomedical imaging tasks, a domain traditionally devoid of language or textual data. The approach diverges from established methodologies by utilizing a frozen transformer block, extracted from pre-trained LLMs, as an encoder layer that directly processes visual tokens. This strategy represents a significant departure from standard multi-modal vision-language frameworks, which typically hinge on language-driven prompts and inputs. We find that these frozen LLM blocks can boost performance across a spectrum of biomedical imaging applications, including both 2D and 3D visual classification tasks, serving as plug-and-play boosters. More interestingly, as a byproduct, we find that the proposed framework achieves superior performance, setting new state-of-the-art results on extensive, standardized datasets in MedMNIST-2D and 3D. Through this work, we aim to open new avenues for employing LLMs in biomedical imaging and to enrich the understanding of their potential in this specialized domain.
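The plug-and-play use of a frozen LLM block can be sketched as follows, with a GPT-2 block from the transformers library standing in for the residual LLM blocks and a tiny ViT-style patch embedder standing in for the imaging backbone; projection sizes and the class count are illustrative assumptions.

import torch
import torch.nn as nn
from transformers import GPT2Model

llm_block = GPT2Model.from_pretrained("gpt2").h[-1]   # one pre-trained transformer block
for p in llm_block.parameters():
    p.requires_grad = False                           # keep the LLM block frozen

class LLMBoostedClassifier(nn.Module):
    def __init__(self, llm_block, n_classes=9, d_vis=192, d_llm=768):
        super().__init__()
        self.llm_block = llm_block
        self.patch_embed = nn.Conv2d(3, d_vis, kernel_size=16, stride=16)   # ViT-style visual tokens
        self.up = nn.Linear(d_vis, d_llm)       # project visual tokens to the LLM width
        self.down = nn.Linear(d_llm, d_vis)     # project back for the classification head
        self.head = nn.Linear(d_vis, n_classes)

    def forward(self, img):                     # img: (B, 3, H, W)
        tok = self.patch_embed(img).flatten(2).transpose(1, 2)   # (B, n_tokens, d_vis)
        z = self.down(self.llm_block(self.up(tok))[0])           # frozen LLM block on visual tokens
        return self.head(z.mean(dim=1))                          # mean-pool, then classify

model = LLMBoostedClassifier(llm_block)
logits = model(torch.randn(2, 3, 224, 224))     # only the projections and head are trainable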
With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets. A recently proposed method named Context Optimization (CoOp) introduces the concept of prompt learning -- a recent trend in NLP -- to the vision domain for adapting pre-trained vision-language models. Specifically, CoOp turns the context words in a prompt into a set of learnable vectors and, with only a few labeled images for learning, can achieve huge improvements over intensively tuned manual prompts. In our study, we identify a critical problem of CoOp: the learned context does not generalize to wider unseen classes within the same dataset, suggesting that CoOp overfits the base classes observed during training. To address the problem, we propose Conditional Context Optimization (CoCoOp), which extends CoOp by further learning a lightweight neural network to generate an input-conditional token (vector) for each image. Compared to CoOp's static prompts, our dynamic prompts adapt to each instance and are thus less sensitive to class shift. Extensive experiments show that CoCoOp generalizes much better than CoOp to unseen classes, even showing promising transferability beyond a single dataset, and yields stronger domain generalization performance as well. Code is available at https://github.com/KaiyangZhou/CoOp.
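The core of CoCoOp, separated from the frozen CLIP encoders, is the prompt learner below: shared learnable context vectors plus a lightweight Meta-Net whose output token shifts every context vector for each image. Dimensions and the Meta-Net shape follow common CLIP settings but are assumptions for this sketch; see the repository for the full implementation.

import torch
import torch.nn as nn

class ConditionalPromptLearner(nn.Module):
    def __init__(self, n_ctx=4, ctx_dim=512, img_dim=512):
        super().__init__()
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, ctx_dim))   # shared learnable context
        self.meta_net = nn.Sequential(                                 # lightweight Meta-Net
            nn.Linear(img_dim, img_dim // 16), nn.ReLU(),
            nn.Linear(img_dim // 16, ctx_dim),
        )

    def forward(self, img_feat, class_embeds):
        """img_feat: (B, img_dim) CLIP image features; class_embeds: (C, L, ctx_dim)
        token embeddings of the class names. Returns per-image, per-class prompts."""
        pi = self.meta_net(img_feat)                      # (B, ctx_dim) instance-conditional token
        ctx = self.ctx.unsqueeze(0) + pi.unsqueeze(1)     # shift every context vector per image
        B, C = img_feat.shape[0], class_embeds.shape[0]
        prompts = torch.cat([
            ctx.unsqueeze(1).expand(B, C, -1, -1),                 # [v1 ... vM](x) context tokens
            class_embeds.unsqueeze(0).expand(B, -1, -1, -1),       # [CLASS] tokens
        ], dim=2)                                                  # (B, C, n_ctx + L, ctx_dim)
        return prompts       # would then be fed through CLIP's frozen text encoder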