Massive access is a key challenge for the fifth generation (5G) and beyond, since the abundance of devices causes severe communication overload. In an uplink massive access scenario, device traffic is sporadic within any given coherence time, so the channels across the antennas of each device exhibit correlation that can be characterized by a row-sparse channel matrix structure. In this work, we develop a bilinear generalized approximate message passing (BiGAMP) algorithm that exploits this row-sparse structure. The algorithm jointly detects device activities, estimates channels, and detects signals in massive multiple-input multiple-output (MIMO) systems by alternately updating the channel matrix and the signal matrix. Compared with existing algorithms, the signal observations provide additional information that improves performance. We further derive the state evolution (SE) to characterize the performance of the proposed algorithm and establish the condition under which the SE converges. Moreover, we theoretically analyze the error probability of device activity detection, the mean square error of channel estimation, and the symbol error rate of signal detection. Numerical results demonstrate the superiority of the proposed algorithm over state-of-the-art methods for joint device activity detection, channel estimation, and signal detection (DADCE-SD), and they closely match the theoretical analysis.
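As a rough illustration of the alternating-update idea described above, the following Python sketch simulates a row-sparse channel matrix (inactive devices have all-zero rows) and alternates between estimating the channel matrix and slicing the data signal matrix. It uses simple ridge/least-squares steps in place of the BiGAMP message-passing denoisers; all sizes, the QPSK alphabet, and the activity threshold are assumptions, not the paper's setup.

```python
# Minimal sketch of the row-sparse model and alternating channel/signal updates.
# This is NOT the BiGAMP algorithm: regularized least squares and hard QPSK slicing
# stand in for the message-passing updates. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
K, M, Lp, Ld, snr = 50, 16, 30, 40, 20          # devices, antennas, pilot/data lengths, SNR (dB)
active = rng.random(K) < 0.1                    # sporadic activity
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
H *= active[:, None]                            # row-sparse channel matrix

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
S_p = rng.choice(qpsk, (Lp, K))                 # known pilot symbols
S_d = rng.choice(qpsk, (Ld, K)) * active        # unknown data (zero rows for inactive devices)
sigma = np.sqrt(0.1 * K) * 10 ** (-snr / 20)
noise = lambda shape: sigma / np.sqrt(2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))
Y_p, Y_d = S_p @ H + noise((Lp, M)), S_d @ H + noise((Ld, M))

def ridge(A, Y, lam=1e-1):                      # crude substitute for the channel denoiser
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(A.shape[1]), A.conj().T @ Y)

H_hat = ridge(S_p, Y_p)                         # initial channel estimate from pilots
for _ in range(5):                              # alternate between signal and channel updates
    row = np.linalg.norm(H_hat, axis=1)
    act_hat = row > 0.3 * row.max()             # activity detection by row energy
    raw = Y_d @ np.linalg.pinv(H_hat[act_hat])  # naive signal estimate for detected-active devices
    S_hat = np.zeros((Ld, K), complex)
    S_hat[:, act_hat] = qpsk[np.argmin(np.abs(raw[..., None] - qpsk), axis=-1)]  # hard QPSK slicing
    H_hat = ridge(np.vstack([S_p, S_hat]), np.vstack([Y_p, Y_d]))  # refine channel with detected data

print("activity detection errors:", int(np.sum(act_hat != active)))
print("channel NMSE (dB):", 10 * np.log10(np.linalg.norm(H_hat - H) ** 2 / np.linalg.norm(H) ** 2))
```

The point of the sketch is only that the data observations Y_d, once sliced, give extra equations for re-estimating H, which is the intuition behind the joint detection/estimation gain claimed above.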
Augmenting LiDAR input with multiple previous frames provides richer semantic information and thus boosts performance in 3D object detection. However, crowded point clouds in multi-frame input can degrade precise position information due to motion blur and inaccurate point projection. In this work, we propose a novel feature fusion strategy, DynStaF (Dynamic-Static Fusion), which enhances the rich semantic information provided by the multi-frame input (dynamic branch) with the accurate location information from the current single frame (static branch). To effectively extract and aggregate complementary features, DynStaF contains two modules, Neighborhood Cross Attention (NCA) and Dynamic-Static Interaction (DSI), operating through a dual-pathway architecture. NCA takes the features in the static branch as queries and the features in the dynamic branch as keys and values. When computing the attention, we account for the sparsity of point clouds and consider only neighborhood positions. NCA fuses the two feature streams at different feature map scales, followed by DSI, which provides comprehensive interaction. To analyze the proposed DynStaF, we conduct extensive experiments on the nuScenes dataset. On the test set, DynStaF improves the NDS of PointPillars by a large margin, from 57.7% to 61.6%. When combined with CenterPoint, our framework achieves 61.0% mAP and 67.7% NDS, leading to state-of-the-art performance without bells and whistles.
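To make the query/key-value arrangement concrete, here is a minimal PyTorch sketch of a neighborhood cross-attention block in the spirit of NCA: queries come from the static feature map, keys and values from the dynamic feature map, and each position only attends to a k x k neighborhood. The layer sizes and the unfold-based implementation are assumptions, not the authors' code.

```python
# Minimal neighborhood cross-attention sketch (assumed implementation, not DynStaF's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborhoodCrossAttention(nn.Module):
    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)        # queries from the static branch
        self.kv = nn.Conv2d(channels, 2 * channels, 1)   # keys/values from the dynamic branch
        self.k = k

    def forward(self, static_feat, dynamic_feat):        # both: (B, C, H, W)
        B, C, H, W = static_feat.shape
        q = self.q(static_feat).flatten(2).transpose(1, 2)              # (B, HW, C)
        kv = F.unfold(self.kv(dynamic_feat), self.k, padding=self.k // 2)   # (B, 2C*k*k, HW)
        kv = kv.view(B, 2 * C, self.k * self.k, H * W).permute(0, 3, 2, 1)  # (B, HW, k*k, 2C)
        key, val = kv.split(C, dim=-1)                                   # each (B, HW, k*k, C)
        attn = torch.einsum("bnc,bnkc->bnk", q, key) / C ** 0.5          # attend only to the neighborhood
        out = torch.einsum("bnk,bnkc->bnc", attn.softmax(-1), val)       # (B, HW, C)
        return out.transpose(1, 2).view(B, C, H, W)

# quick shape check with random BEV-like feature maps
nca = NeighborhoodCrossAttention(64)
print(nca(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)).shape)
```

Restricting the keys to a local window keeps the attention cost linear in the number of positions, which matters for sparse point-cloud feature maps.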
Thanks to its ability to enhance spectral efficiency (SE), faster-than-Nyquist (FTN) signaling is a promising approach for wireless communication systems. This paper investigates doubly-selective (i.e., time- and frequency-selective) channel estimation and data detection for FTN signaling. We consider the intersymbol interference (ISI) resulting from both the FTN signaling and the frequency-selective channel, and adopt an efficient frame structure with reduced overhead. We propose a novel channel estimation technique for FTN signaling based on the least sum of squared errors (LSSE) approach to estimate the complex channel coefficients at the pilot locations within the frame. In particular, we find the optimal pilot sequence that minimizes the mean square error (MSE) of the channel estimation. To address the time-selective nature of the channel, we use low-complexity linear interpolation to track the complex channel coefficients at the data symbol locations within the frame. To detect the data symbols of FTN signaling, we adopt a turbo equalization technique based on a linear soft-input soft-output (SISO) minimum mean square error (MMSE) equalizer. Simulation results show that the MSE of the proposed FTN signaling channel estimation employing the designed optimal pilot sequence is lower than that of its counterpart designed for conventional Nyquist transmission. The bit error rate (BER) of FTN signaling with the proposed optimal pilot sequence also improves over FTN signaling with the conventional Nyquist pilot sequence. Additionally, for the same SE, the proposed FTN signaling channel estimation employing the designed optimal pilot sequence outperforms competing techniques from the literature.
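The following short sketch illustrates the two tracking steps mentioned above: least-squares estimation of the complex channel gain at pilot locations, then linear interpolation at the data symbol locations. It deliberately ignores the FTN-induced ISI, the frequency selectivity, and the optimal pilot design; the frame layout and parameters are assumptions.

```python
# Pilot-aided channel tracking sketch: LS at pilots + linear interpolation at data symbols.
# This is an illustrative toy (flat, time-selective channel), not the paper's LSSE design.
import numpy as np

rng = np.random.default_rng(1)
N, pilot_spacing, snr_db = 512, 32, 15
pilot_idx = np.arange(0, N, pilot_spacing)
data_idx = np.setdiff1d(np.arange(N), pilot_idx)

# slowly time-varying complex channel gain (random-walk phase, slow amplitude drift)
phase = np.cumsum(rng.standard_normal(N)) * 0.02
amp = 1 + 0.2 * np.sin(2 * np.pi * np.arange(N) / N)
h = amp * np.exp(1j * phase)

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(qpsk, N)
x[pilot_idx] = (1 + 1j) / np.sqrt(2)                      # known pilot symbols
sigma = 10 ** (-snr_db / 20)
y = h * x + sigma / np.sqrt(2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

h_pilot = y[pilot_idx] / x[pilot_idx]                     # LS estimate at the pilot locations
h_hat = np.interp(np.arange(N), pilot_idx, h_pilot.real) + \
        1j * np.interp(np.arange(N), pilot_idx, h_pilot.imag)

mse = np.mean(np.abs(h_hat[data_idx] - h[data_idx]) ** 2)
print(f"interpolated channel MSE at data locations: {mse:.4f}")
```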
This paper investigates multiple-input multiple-output (MIMO) massive unsourced random access in an asynchronous orthogonal frequency division multiplexing (OFDM) system with both timing and frequency offsets (TFO) and non-negligible user collisions. The proposed coding framework splits the data into two parts encoded by a sparse regression code (SPARC) and a low-density parity check (LDPC) code. Multi-stage orthogonal pilots are transmitted in the first part to reduce collision density. Unlike existing schemes that require a large quantization codebook to estimate the TFO, we establish a \textit{graph-based channel reconstruction and collision resolution (GB-CR$^2$)} algorithm that iteratively reconstructs channels, resolves collisions, and compensates for TFO rotations jointly across multiple stages on the formulated graph. We further propose to leverage the geometric characteristics of signal constellations to correct TFO estimates. Extensive simulations demonstrate substantial gains in channel estimation and data recovery, with a significant complexity reduction compared to state-of-the-art schemes.
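As a toy illustration of how constellation geometry can expose a residual frequency offset (it is not the GB-CR$^2$ algorithm), the sketch below uses the classic fourth-power trick for QPSK: raising the received symbols to the fourth power removes the data modulation, so the remaining phase slope reveals the offset rotation. All parameters are assumptions.

```python
# Toy constellation-geometry CFO estimator (fourth-power / Viterbi-Viterbi style for QPSK).
import numpy as np

rng = np.random.default_rng(2)
N, cfo, snr_db = 256, 0.002, 20                   # symbols, offset in cycles/symbol, SNR (dB)
n = np.arange(N)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))
sigma = 10 ** (-snr_db / 20)
r = qpsk * np.exp(2j * np.pi * cfo * n) + sigma / np.sqrt(2) * (
    rng.standard_normal(N) + 1j * rng.standard_normal(N))

# 4th power removes the QPSK modulation; the unwrapped phase slope equals 2*pi*4*CFO
phase4 = np.unwrap(np.angle(r ** 4))
cfo_hat = np.polyfit(n, phase4, 1)[0] / (2 * np.pi * 4)
r_corrected = r * np.exp(-2j * np.pi * cfo_hat * n)   # de-rotated constellation
print(f"true CFO {cfo:.4f}, estimated CFO {cfo_hat:.4f}")
```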
Symbiotic radio (SR) is a promising technology for spectrum- and energy-efficient wireless systems, whose key idea is to use cognitive backscattering communication to achieve mutualistic spectrum and energy sharing with passive backscatter devices (BDs). In this paper, a reconfigurable intelligent surface (RIS) based SR system is considered, where the RIS is used not only to assist the primary active communication but also to transmit its own information via passive communication. For the considered system, we investigate the energy efficiency (EE) trade-off between active and passive communications by characterizing the EE region. To gain insight, we first derive the maximum achievable individual EEs of the primary transmitter (PT) and the RIS, respectively, and then analyze the asymptotic performance by exploiting the channel hardening effect. To characterize the non-trivial EE trade-off, we formulate an optimization problem to find the Pareto boundary of the EE region by jointly optimizing the transmit beamforming, the power allocation, and the passive beamforming of the RIS. The formulated problem is non-convex, and an efficient algorithm is proposed that decomposes it into a series of subproblems using alternating optimization (AO) and successive convex approximation (SCA) techniques. Finally, simulation results are presented to validate the effectiveness of the proposed algorithm.
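The AO idea above, stripped of the specific SR objective, is simply: split the variables into blocks (e.g., transmit powers and RIS phases), optimize one block with the other fixed, and repeat until the objective stops improving. The template below shows this loop on a made-up toy objective; it is a schematic only, not the paper's problem or its SCA subproblems.

```python
# Generic alternating-optimization template on a made-up objective (schematic only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
G = rng.standard_normal((4, 4))                          # made-up coupling matrix

def neg_objective(p, theta):
    # toy stand-in for a (negative) energy-efficiency-like metric
    gain = np.abs(G @ (np.sqrt(p) * np.exp(1j * theta))).sum()
    return -np.log2(1 + gain) / (p.sum() + 1.0)

p, theta, prev = np.ones(4), np.zeros(4), np.inf
for it in range(20):                                     # alternating optimization loop
    p = minimize(lambda v: neg_objective(v, theta), p,
                 bounds=[(1e-3, 1.0)] * 4, method="L-BFGS-B").x        # block 1: powers
    res = minimize(lambda v: neg_objective(p, v), theta,
                   bounds=[(0, 2 * np.pi)] * 4, method="L-BFGS-B")     # block 2: phases
    theta, val = res.x, res.fun
    if prev - val < 1e-6:                                # objective is monotonically non-increasing
        break
    prev = val
print("objective after AO:", -val, "iterations:", it + 1)
```

Because each block update cannot increase the objective, the sequence is monotone and converges to a stationary point; sweeping the weighting between the two EEs then traces points on the Pareto boundary.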
This paper studies an integrated sensing and communication (ISAC) system for single-target detection in a cloud radio access network architecture. The system considers downlink communication and a multi-static sensing approach, where ISAC transmit access points (APs) jointly serve the user equipments (UEs) and optionally steer a beam toward the target. A centralized operation of cell-free massive MIMO (multiple-input multiple-output) is considered for both communication and sensing purposes. A maximum a posteriori ratio test detector is developed to detect the target in the presence of clutter, i.e., the so-called target-free signals. Moreover, a power allocation algorithm is proposed to maximize the sensing signal-to-interference-plus-noise ratio (SINR) while ensuring a minimum communication SINR for each UE and meeting per-AP power constraints. Two ISAC setups are studied: i) using only the existing communication beams for sensing and ii) using additional sensing beams. The efficiency of the proposed algorithm is investigated in both realistic and idealized scenarios, corresponding to the presence and absence of the target-free channels, respectively. Although the detection probability degrades in the presence of target-free channels, which act as interference, the proposed algorithm significantly outperforms the interference-unaware benchmark by exploiting the statistics of the clutter. It is also shown that the proposed algorithm outperforms the fully communication-centric algorithm, both in the presence and absence of clutter. Moreover, using an additional sensing beam improves the detection performance for targets with lower radar cross-section variances compared to the case without sensing beams.
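To illustrate the shape of the power-allocation problem (not the paper's actual formulation), the sketch below assumes fixed beamforming directions so that both the per-UE SINR constraints and the sensing objective become linear in the power coefficients, and solves the resulting toy problem with CVXPY. All gains, sizes, and thresholds are made up.

```python
# Toy linearized stand-in for the ISAC power allocation: maximize power toward the
# target subject to per-UE SINR constraints and per-AP power budgets (assumed model).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
L, K = 4, 3                                      # ISAC APs, UEs
g = rng.exponential(1.0, (L, K, K))              # g[l, i, k]: gain at UE k from AP l's beam toward UE i
h_s = rng.exponential(0.3, (L, K))               # gain at UE k from AP l's sensing beam (interference)
a_c = rng.exponential(0.2, (L, K))               # target-direction gain of AP l's communication beams
a_s = rng.exponential(1.0, (L,))                 # target-direction gain of AP l's sensing beam
gamma, noise, P_ap = 1.0, 0.1, 1.0               # SINR threshold, noise power, per-AP budget

p = cp.Variable((L, K), nonneg=True)             # communication beam powers
q = cp.Variable(L, nonneg=True)                  # additional sensing beam powers
constraints = [cp.sum(p, axis=1) + q <= P_ap]    # per-AP power constraint
for k in range(K):
    desired = cp.sum(cp.multiply(g[:, k, k], p[:, k]))
    interference = cp.sum(cp.multiply(g[:, :, k], p)) - desired + cp.sum(cp.multiply(h_s[:, k], q))
    constraints.append(desired >= gamma * (interference + noise))     # per-UE SINR constraint
sensing = cp.sum(cp.multiply(a_c, p)) + a_s @ q  # power steered toward the target (SINR numerator proxy)
prob = cp.Problem(cp.Maximize(sensing), constraints)
prob.solve()
print("status:", prob.status)
if prob.status == "optimal":
    print("per-AP comm power:", np.round(p.value.sum(axis=1), 3))
    print("per-AP sensing power:", np.round(q.value, 3))
```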
Most existing studies on joint activity detection and channel estimation for grant-free massive random access (RA) systems assume perfect synchronization among all active users, which is hard to achieve in practice. Therefore, this paper considers asynchronous grant-free massive RA systems and develops novel algorithms for joint user activity detection, synchronization delay detection, and channel estimation. In particular, the framework of orthogonal approximate message passing (OAMP) is first utilized to handle the non-independent and identically distributed (non-i.i.d.) pilot matrix in asynchronous grant-free massive RA systems, and an OAMP-based algorithm is developed that leverages the common sparsity among the pilot signals received at multiple base station antennas. To reduce the computational complexity, a memory AMP (MAMP)-based algorithm is further proposed that eliminates the matrix inversions of the OAMP-based algorithm. Simulation results demonstrate the effectiveness of the two proposed algorithms over the baseline methods. Moreover, the MAMP-based algorithm reduces the computation by 37% while maintaining detection/estimation accuracy comparable to the OAMP-based algorithm.
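The "common sparsity across antennas" exploited above can be illustrated with a much simpler algorithm: the sketch below recovers a row-sparse channel matrix with plain iterative group (row-wise) thresholding. It is not OAMP or MAMP (which additionally handle the non-i.i.d. pilots and unknown delays), and all sizes and thresholds are assumptions.

```python
# Group-sparse recovery sketch: row-wise iterative soft thresholding on an MMV model.
import numpy as np

rng = np.random.default_rng(5)
N, L, M, eps, snr_db = 200, 60, 8, 0.05, 15       # devices, pilot length, antennas, activity prob., SNR
active = rng.random(N) < eps
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2) * active[:, None]
A = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2 * L)   # i.i.d. pilots
sigma = np.sqrt(eps * N / L) * 10 ** (-snr_db / 20)
Y = A @ H + sigma / np.sqrt(2) * (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M)))

X = np.zeros((N, M), complex)
step, lam = 0.9 / np.linalg.norm(A, 2) ** 2, 0.05
for _ in range(200):                              # ISTA with row-wise (group) soft thresholding
    R = X + step * A.conj().T @ (Y - A @ X)
    row_norm = np.linalg.norm(R, axis=1, keepdims=True)
    X = R * np.maximum(1 - step * lam / np.maximum(row_norm, 1e-12), 0)

act_hat = np.linalg.norm(X, axis=1) > 0.1 * np.linalg.norm(X, axis=1).max()
print("activity detection errors:", int(np.sum(act_hat != active)))
print("NMSE (dB):", 10 * np.log10(np.linalg.norm(X - H) ** 2 / np.linalg.norm(H) ** 2))
```

Thresholding whole rows rather than individual entries is what couples the antennas and improves both activity detection and channel estimation.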
Due to the high power consumption and circuit cost of large antenna arrays, the practical application of massive multiple-input multiple-output (MIMO) in the sixth generation (6G) and future wireless networks remains challenging. Employing low-resolution analog-to-digital converters (ADCs) and a hybrid analog and digital (HAD) structure are two low-cost choices with acceptable performance loss. In this paper, the combination of the mixed-ADC architecture and the HAD structure at the receiver is proposed for direction of arrival (DOA) estimation, which can be applied to beamforming tracking and alignment in 6G. By adopting the additive quantization noise model, the exact closed-form expression of the Cram\'{e}r-Rao lower bound (CRLB) for the HAD architecture with mixed-ADCs is derived. Moreover, a closed-form expression of the performance loss factor is derived as a benchmark. In addition, to take power consumption into account, energy efficiency is also investigated. The numerical results reveal that the HAD structure with mixed-ADCs can significantly reduce the power consumption and hardware cost, and that this architecture achieves a better trade-off between performance loss and power consumption. Finally, adopting 2-4 bits of resolution appears to be a good choice in practical massive MIMO systems.
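The sketch below illustrates the additive quantization noise model (AQNM) mentioned above and a crude effective-SNR proxy for a mixed-ADC array; it is not the paper's closed-form CRLB or its performance loss factor. The distortion factors rho_b are the standard values for a Gaussian input; the 1/8 high-resolution split and the additive-SNR proxy are assumptions.

```python
# AQNM illustration: y_q = alpha*y + n_q with alpha = 1 - rho_b and
# var(n_q) = alpha*(1 - alpha)*E[|y|^2]; rough effective-SNR proxy for a mixed-ADC array.
import numpy as np

rho = {1: 0.3634, 2: 0.1175, 3: 0.03454, 4: 0.009497, 5: 0.002499}   # standard AQNM distortion factors

def quantized_snr(snr_lin, bits):
    """Per-antenna SNR after a b-bit ADC under the AQNM (unit noise power)."""
    alpha = 1 - rho[bits]
    return alpha * snr_lin / (alpha + (1 - alpha) * (snr_lin + 1))

M, M_full, snr = 128, 16, 10 ** (10 / 10)        # antennas, high-resolution chains, 10 dB input SNR
full_res = M * snr                               # idealized all-high-resolution array (proxy, not a CRLB)
for bits in range(1, 6):
    mixed = M_full * snr + (M - M_full) * quantized_snr(snr, bits)
    print(f"{bits}-bit low-res ADCs: effective-SNR loss {10 * np.log10(full_res / mixed):.2f} dB")
```

Even this crude proxy shows the qualitative conclusion above: the loss shrinks rapidly with resolution, so a few bits already come close to the full-resolution array.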
Heatmap-based anatomical landmark detection still faces two unresolved challenges: 1) the inability to accurately evaluate the heatmap distribution, and 2) the inability to effectively exploit global spatial structure information. To address the first challenge, we propose a novel position-aware and sample-aware central loss. Specifically, the central loss absorbs position information, enabling accurate evaluation of the heatmap distribution. Moreover, the central loss is sample-aware: it adaptively distinguishes easy and hard samples and makes the model focus on hard samples, while mitigating the extreme imbalance between landmarks and non-landmarks. To address the second challenge of ignored structure information, a Coordinated Transformer, called CoorTransformer, is proposed. It establishes long-range dependencies under the guidance of landmark coordinate information, making the attention focus on the sparse landmarks while exploiting the global spatial structure. Furthermore, CoorTransformer speeds up convergence, effectively avoiding the difficulty Transformers have in converging during sparse representation learning. Using the CoorTransformer and central loss, we propose a generalized detection model that can handle various scenarios, inherently exploiting the underlying relationship between landmarks and incorporating rich structural knowledge around the target landmarks. We analyzed and evaluated CoorTransformer and the central loss on three challenging landmark detection tasks. The experimental results show that CoorTransformer outperforms state-of-the-art methods, and that the central loss significantly improves model performance with p-values < 0.05.
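A minimal PyTorch sketch of a position-aware and sample-aware heatmap loss in the spirit described above is given below; it is not the authors' exact central loss. Pixels are weighted by the target heatmap value (position awareness), and each sample is re-weighted by how poorly it is currently predicted (hard-sample emphasis). The hyperparameters and the specific weighting forms are assumptions.

```python
# Illustrative position-aware, sample-aware heatmap loss (assumed formulation).
import torch
import torch.nn.functional as F

def central_style_loss(pred, target, gamma: float = 2.0, pos_weight: float = 10.0):
    """pred, target: (B, K, H, W) heatmaps with values in [0, 1]."""
    per_pixel = F.binary_cross_entropy(pred.clamp(1e-6, 1 - 1e-6), target, reduction="none")
    position_w = 1.0 + pos_weight * target                 # position-aware: emphasize landmark pixels
    per_sample = (position_w * per_pixel).mean(dim=(1, 2, 3))
    hardness = (pred - target).abs().mean(dim=(1, 2, 3)).detach()   # sample-aware weight
    sample_w = (hardness / (hardness.mean() + 1e-6)) ** gamma       # up-weight hard samples
    return (sample_w * per_sample).mean()

# quick check with random heatmaps (19 landmarks, 64x64 maps)
pred = torch.rand(4, 19, 64, 64, requires_grad=True)
target = torch.rand(4, 19, 64, 64)
loss = central_style_loss(pred, target)
loss.backward()
print(float(loss))
```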
Time series anomaly detection has applications in a wide range of research fields and industries, including manufacturing and healthcare. The presence of anomalies can indicate novel or unexpected events, such as production faults, system defects, or heart fluttering, and is therefore of particular interest. The large size and complex patterns of time series have led researchers to develop specialised deep learning models for detecting anomalous patterns. This survey provides a structured and comprehensive overview of state-of-the-art deep learning models for time series anomaly detection. It offers a taxonomy based on the factors that divide anomaly detection models into different categories. Aside from describing the basic anomaly detection technique for each category, the advantages and limitations are also discussed. Furthermore, this study includes examples of deep anomaly detection in time series across various application domains in recent years. It finally summarises open research issues and the challenges faced when adopting deep anomaly detection models.
A community reveals features and connections of its members that differ from those in other communities in a network. Detecting communities is of great significance in network analysis. Beyond the classical spectral clustering and statistical inference methods, deep learning techniques for community detection have developed significantly in recent years, owing to their advantages in handling high-dimensional network data. Hence, a comprehensive overview of the latest progress in community detection through deep learning is timely for both academics and practitioners. This survey devises and proposes a new taxonomy covering different categories of state-of-the-art methods, including models based on deep neural networks, deep nonnegative matrix factorization, and deep sparse filtering. The main category, deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks, and autoencoders. The survey also summarizes the popular benchmark data sets, model evaluation metrics, and open-source implementations used in experimental settings. We then discuss the practical applications of community detection in various domains and point to implementation scenarios. Finally, we outline future directions by suggesting challenging topics in this fast-growing deep learning field.