The reconfigurable intelligent surface (RIS) has emerged as a promising technology for improving the capacity and extending the coverage of wireless networks. In this work, we consider RIS-aided millimeter wave (mmWave) multiple-input multiple-output (MIMO) communications, where acquiring accurate channel state information is challenging due to the high dimensionality of the channels. To fully exploit the structures of the channels, we formulate channel estimation as a hierarchically structured matrix recovery problem and design a low-complexity message passing algorithm to solve it. Simulation results demonstrate the superiority of the proposed algorithm and show that its performance is close to the oracle bound.
Accurate estimation of mutual information is a crucial task in various applications, including machine learning, communications, and biology, since it enables the understanding of complex systems. High-dimensional data render the task extremely challenging due to the amount of data to be processed and the presence of convoluted patterns. Neural estimators based on variational lower bounds of the mutual information have gained attention in recent years, but they are prone to either high bias or high variance as a consequence of the partition function. We propose a novel class of discriminative mutual information estimators based on the variational representation of the $f$-divergence. We investigate the impact of the permutation function used to obtain the marginal training samples and present a novel architectural solution based on derangements. The proposed estimator is flexible and exhibits an excellent bias/variance trade-off. Experiments on reference scenarios demonstrate that our approach outperforms state-of-the-art neural estimators in terms of both accuracy and complexity.
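The derangement idea can be illustrated with a minimal sketch (the function name `derangement` and the rejection-sampling strategy are illustrative choices, not the paper's implementation): marginal training pairs are formed by permuting the `y` batch with a permutation that has no fixed points, so no joint pair leaks into the marginal batch.

```python
import random

def derangement(n, rng=None):
    """Sample a permutation of range(n) with no fixed points by rejection.
    Requires n >= 2; the acceptance probability tends to 1/e, so few
    retries are needed on average."""
    rng = rng or random.Random(0)
    while True:
        p = list(range(n))
        rng.shuffle(p)
        if all(i != p[i] for i in range(n)):
            return p

# Joint batch: pairs (x_i, y_i). Marginal batch: pair x_i with y_{p(i)}.
# Because p is a derangement, no joint pair reappears in the marginal
# batch -- a plain random permutation cannot rule this out.
xs = [0.1, 0.5, 0.9, 1.3]
ys = [1.0, 2.0, 3.0, 4.0]
p = derangement(len(xs))
marginal_pairs = [(xs[i], ys[p[i]]) for i in range(len(xs))]
```

A vanilla shuffle leaves each index fixed with probability about $1/n$, which biases the "marginal" batch toward joint samples; the derangement removes that contamination by construction.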
The acquisition of accurate channel state information (CSI) is of utmost importance since it underpins the performance of wireless communication systems. However, acquiring accurate CSI, whether through channel estimation or channel prediction, is an intricate task due to the time-varying and frequency-selective nature of the wireless environment. To this end, we propose an efficient machine learning (ML)-based technique for channel prediction in orthogonal frequency-division multiplexing (OFDM) sub-bands. The novelty of the proposed approach lies in training on channel fading samples to predict future channel behaviour under selective fading.
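As a deliberately simple stand-in for the ML predictor described above (a one-tap linear autoregressive model, not the paper's method), the following sketch fits a single coefficient to past fading samples by least squares and extrapolates the channel forward:

```python
def ar1_predict(samples, steps=1):
    """Toy AR(1) channel predictor: fit x_t ~= a * x_{t-1} by least squares
    over the observed fading samples, then extrapolate `steps` ahead.
    Assumes len(samples) >= 2 and a nonzero history."""
    num = sum(samples[t] * samples[t - 1] for t in range(1, len(samples)))
    den = sum(s * s for s in samples[:-1])
    a = num / den  # closed-form least-squares coefficient
    preds = []
    x = samples[-1]
    for _ in range(steps):
        x = a * x
        preds.append(x)
    return preds
```

Real channel predictors replace this scalar recursion with learned nonlinear models per OFDM sub-band, but the train-on-history, extrapolate-forward structure is the same.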
Existing works on IRS have mainly considered IRSs deployed in the environment to dynamically control the wireless channels between the BS and its served users. In contrast, we propose in this paper a new integrated IRS-BS architecture in which IRSs are deployed inside the BS antenna radome. Since the distance between the integrated IRSs and the BS antenna array is small in practice, the path loss between them is significantly reduced, and real-time control of the IRS reflection by the BS becomes easier to implement. However, the resultant near-field channel model also becomes drastically different. Thus, we propose an element-wise channel model for the IRS to characterize the channel vector between each single-antenna user and the antenna array of the BS, which includes the direct (without any IRS reflection) as well as the single- and double-IRS-reflection channel components. Then, we formulate a problem to optimize the reflection coefficients of all IRS reflecting elements for maximizing the uplink sum rate of the users. Considering two typical cases, with and without perfect CSI at the BS, the formulated problem is solved efficiently by adopting the successive refinement method and the iterative random phase algorithm (IRPA), respectively. Numerical results validate the substantial capacity gain of the integrated IRS-BS architecture over the conventional multi-antenna BS without integrated IRSs. Moreover, the proposed algorithms significantly outperform other benchmark schemes in terms of sum rate, and the IRPA without CSI approaches the performance upper bound with perfect CSI as the training overhead increases.
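The successive refinement idea can be sketched in a heavily simplified single-user, single-reflection setting (the multiuser sum-rate objective and double-reflection terms of the paper are omitted; all names are illustrative): each unit-modulus reflection coefficient is updated in turn to align its element's cascaded channel with the combined channel of everything else, which never decreases the received power.

```python
import cmath
import random

def successive_refinement(h_d, g, iters=5):
    """Choose phases theta_n of unit-modulus reflection coefficients to
    maximize |h_d + sum_n g_n * exp(1j*theta_n)|^2, one element at a time.
    h_d: direct channel (complex); g: per-element cascaded channels."""
    theta = [0.0] * len(g)
    for _ in range(iters):
        for n in range(len(g)):
            # Combined channel of all other terms, at current phases.
            rest = h_d + sum(g[m] * cmath.exp(1j * theta[m])
                             for m in range(len(g)) if m != n)
            # Optimal update: align element n's contribution with `rest`,
            # so the magnitudes add; this is monotone non-decreasing.
            theta[n] = cmath.phase(rest) - cmath.phase(g[n])
    return theta

rng = random.Random(1)
h_d = complex(rng.gauss(0, 1), rng.gauss(0, 1))
g = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(8)]
theta = successive_refinement(h_d, g)
power = abs(h_d + sum(g[n] * cmath.exp(1j * theta[n]) for n in range(8))) ** 2
```

Each per-element update is a closed-form maximization, and iterating drives the combined power toward the triangle-inequality upper bound $(|h_d| + \sum_n |g_n|)^2$.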
Intelligent reflecting surfaces (IRSs) have been proposed for millimeter wave (mmWave) and terahertz (THz) systems to achieve both coverage and capacity enhancement, where the design of hybrid precoders, combiners, and the IRS typically relies on channel state information. In this paper, we address the problem of uplink wideband channel estimation for IRS-aided multiuser multiple-input single-output (MISO) systems with hybrid architectures. Combining the strengths of model-driven and data-driven deep learning approaches, a hybrid-driven learning architecture is devised for jointly estimating the channels and learning their properties. For a passive IRS-aided system, we propose a residual learned approximate message passing network as the model-driven component. A denoising and attention network serves as the data-driven component and jointly learns spatial and frequency features. Furthermore, we design a flexible hybrid-driven network for a hybrid passive and active IRS-aided system. Specifically, depthwise separable convolution is applied in the data-driven network, leading to lower network complexity and fewer parameters at the IRS side. Numerical results indicate that, in both systems, the proposed hybrid-driven channel estimation methods significantly outperform existing deep learning-based schemes and effectively reduce the pilot overhead by about 60% in IRS-aided systems.
Motivated by the need to model joint dependence between regions of interest in functional neuroconnectivity for efficient inference, we propose a new sampling-based Bayesian clustering approach for covariance structures of high-dimensional Gaussian outcomes. The key technique is based on a Dirichlet process that clusters covariance sub-matrices into independent groups of outcomes, thereby naturally inducing sparsity in the whole brain connectivity matrix. A new split-merge algorithm is employed to improve the mixing of the Markov chain sampling that is shown empirically to recover both uniform and Dirichlet partitions with high accuracy. We investigate the empirical performance of the proposed method through extensive simulations. Finally, the proposed approach is used to group regions of interest into functionally independent groups in the Autism Brain Imaging Data Exchange participants with autism spectrum disorder and attention-deficit/hyperactivity disorder.
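The Dirichlet process induces a random partition of the outcomes via the Chinese restaurant process (CRP); a minimal partition sampler is sketched below (illustrative only; the paper's split-merge MCMC over covariance sub-matrices is far more involved). Item $i$ joins an existing cluster $k$ with probability proportional to its size, or opens a new cluster with probability proportional to the concentration $\alpha$.

```python
import random

def crp_partition(n, alpha, rng=None):
    """Draw a random partition of n items from the Chinese restaurant
    process with concentration alpha (the partition a Dirichlet process
    induces). Returns a list of cluster labels, contiguous from 0."""
    rng = rng or random.Random(0)
    assignments = []
    counts = []  # number of items per cluster ("customers per table")
    for i in range(n):
        # New cluster w.p. alpha/(i+alpha); cluster k w.p. counts[k]/(i+alpha).
        r = rng.random() * (i + alpha)
        if r < alpha or not counts:
            assignments.append(len(counts))
            counts.append(1)
        else:
            r -= alpha
            k = 0
            while r >= counts[k]:
                r -= counts[k]
                k += 1
            assignments.append(k)
            counts[k] += 1
    return assignments

z = crp_partition(12, alpha=1.0)
```

Larger `alpha` yields more, smaller clusters; in the covariance-clustering setting each cluster corresponds to an independent block of outcomes, which is what induces sparsity in the connectivity matrix.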
Extremely large-scale antenna arrays, tremendously high frequencies, and new types of antennas are three clear trends in multi-antenna technology for supporting the sixth-generation (6G) networks. To properly account for the new characteristics introduced by these three trends in communication system design, the near-field spherical-wave propagation model needs to be used, which differs from the classical far-field planar-wave one. As such, near-field communication (NFC) will become essential in 6G networks. In this tutorial, we cover three key aspects of NFC. 1) Channel Modelling: We commence by reviewing near-field spherical-wave-based channel models for spatially-discrete (SPD) antennas. Then, uniform spherical wave (USW) and non-uniform spherical wave (NUSW) models are discussed. Subsequently, we introduce a general near-field channel model for SPD antennas and a Green's function-based channel model for continuous-aperture (CAP) antennas. 2) Beamfocusing and Antenna Architectures: We highlight the properties of near-field beamfocusing and discuss NFC antenna architectures for both SPD and CAP antennas. Moreover, the basic principles of near-field beam training are introduced. 3) Performance Analysis: We provide a comprehensive performance analysis framework for NFC. For near-field line-of-sight channels, the received signal-to-noise ratio and power-scaling law are derived. For statistical near-field multipath channels, a general analytical framework is proposed, based on which analytical expressions for the outage probability, ergodic channel capacity, and ergodic mutual information are derived. Finally, for each aspect, topics for future research are discussed.
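The difference between the spherical-wave and planar-wave models can be made concrete for a uniform linear array (a textbook illustration under assumed parameters, not taken from the tutorial): the near-field response uses the exact element-to-user distance, while the far-field response linearizes it as $r - x\sin\theta$. The two agree for users far beyond the Rayleigh distance $2D^2/\lambda$ and diverge inside it.

```python
import cmath
import math

def nearfield_steering(N, d, wavelength, r, theta):
    """Spherical-wave (near-field) response of an N-element ULA with
    spacing d, for a user at range r and angle theta: the phase follows
    the exact distance from each element to the user."""
    k = 2 * math.pi / wavelength
    a = []
    for n in range(N):
        x = (n - (N - 1) / 2) * d  # element position along the array axis
        r_n = math.sqrt(r**2 + x**2 - 2 * r * x * math.sin(theta))
        a.append(cmath.exp(-1j * k * r_n))
    return a

def farfield_steering(N, d, wavelength, theta):
    """Planar-wave (far-field) response: phase linear in element position."""
    k = 2 * math.pi / wavelength
    return [cmath.exp(1j * k * (n - (N - 1) / 2) * d * math.sin(theta))
            for n in range(N)]

# Normalized correlation between the two models at a given range.
N, d, wl, theta_u = 64, 0.5, 1.0, 0.5  # half-wavelength spacing, aperture ~31.5*wl
def corr(r):
    nf = nearfield_steering(N, d, wl, r, theta_u)
    ff = farfield_steering(N, d, wl, theta_u)
    return abs(sum(f.conjugate() * v for f, v in zip(ff, nf))) / N
```

With this aperture the Rayleigh distance is roughly $2000\lambda$, so `corr(10000.0)` is near 1 (far-field model valid) while `corr(100.0)` drops sharply (near-field model required).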
Regression analysis under the assumption of monotonicity is a well-studied statistical problem and has been used in a wide range of applications. However, there remains a lack of a broadly applicable methodology that permits information borrowing, for efficiency gains, when jointly estimating multiple monotonic regression functions. We introduce such a methodology by extending the isotonic regression problem presented in the article "The isotonic regression problem and its dual" (Barlow and Brunk, 1972). The presented approach can be applied to both fixed and random designs and any number of explanatory variables (regressors). Our framework penalizes pairwise differences in the values (levels) of the monotonic function estimates, with the penalty weight determined by a statistical test, which results in information being shared across data sets when similarities in the regression functions exist. Function estimates are subsequently derived using an iterative optimization routine that uses existing solution algorithms for the isotonic regression problem. Simulation studies for normally and binomially distributed response data illustrate that function estimates are consistently improved if similarities between functions exist, and are not oversmoothed otherwise. We further apply our methodology to analyse two public health data sets: neonatal mortality data for Porto Alegre, Brazil, and stroke patient data for North West England.
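The building block invoked above, the isotonic regression problem, is classically solved by the pool-adjacent-violators algorithm (PAVA); a compact sketch follows (standard textbook algorithm, not the paper's penalized extension). It fits the least-squares non-decreasing sequence by merging adjacent blocks whenever monotonicity is violated.

```python
def pava(y, w=None):
    """Pool-adjacent-violators algorithm: weighted least-squares fit of a
    non-decreasing sequence to y. Each block stores (level, weight, size);
    violating adjacent blocks are merged into their weighted mean."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the previous block level exceeds this one.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            l2, w2, s2 = blocks.pop()
            l1, w1, s1 = blocks.pop()
            blocks.append([(l1 * w1 + l2 * w2) / (w1 + w2), w1 + w2, s1 + s2])
    out = []
    for level, _, size in blocks:
        out.extend([level] * size)
    return out
```

For example, `pava([3, 1, 2, 4])` pools the violating prefix into level 2, returning `[2.0, 2.0, 2.0, 4]`. The joint-estimation framework above repeatedly calls such a solver inside its penalized iterative routine.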
Traffic speed is central to characterizing the fluidity of the road network. Many transportation applications rely on it, such as real-time navigation, dynamic route planning, and congestion management. Rapid advances in sensing and communication techniques make traffic speed detection easier than ever. However, due to the sparse deployment of static sensors and the low penetration of mobile sensors, detected speeds are incomplete and far from network-wide use. In addition, because sensors are prone to errors and missing data for various reasons, the detected speeds can be highly noisy. These drawbacks call for effective techniques to recover credible estimates from the incomplete data. In this work, we first cast the problem as spatiotemporal kriging and propose a Laplacian enhanced low-rank tensor completion (LETC) framework featuring both low-rankness and multi-dimensional correlations for large-scale traffic speed kriging under limited observations. Specifically, three types of speed correlation, temporal continuity, temporal periodicity, and spatial proximity, are carefully chosen and simultaneously modeled by three different forms of graph Laplacian, named the temporal graph Fourier transform, generalized temporal consistency regularization, and diffusion graph regularization. We then design an efficient solution algorithm via several effective numeric techniques to scale the proposed model up to network-wide kriging. Through experiments on two public million-level traffic speed datasets, we find that the proposed LETC achieves state-of-the-art kriging performance even under low observation rates, while saving more than half the computing time compared with baseline methods. Some insights into spatiotemporal traffic data modeling and kriging at the network level are provided as well.
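The common ingredient of the three regularizers above is the graph Laplacian quadratic form, which scores how smoothly a signal varies over a graph; a minimal sketch (generic construction, not the LETC solver):

```python
def laplacian(adj):
    """Combinatorial graph Laplacian L = D - A from a symmetric
    adjacency matrix given as a list of lists."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j]
             for j in range(n)] for i in range(n)]

def smoothness(adj, x):
    """Quadratic form x^T L x = 1/2 * sum_ij a_ij (x_i - x_j)^2.
    Small values mean x varies slowly across connected nodes, which is
    the prior that Laplacian regularization imposes in kriging."""
    L = laplacian(adj)
    n = len(x)
    return sum(x[i] * L[i][j] * x[j] for i in range(n) for j in range(n))
```

On a 3-node path graph with signal `[1, 2, 3]`, the form evaluates to $(2-1)^2 + (3-2)^2 = 2$, while a constant signal scores 0; penalizing this quantity along road-network edges (spatial proximity) or time-step edges (temporal continuity) fills unobserved entries with values consistent with their neighbors.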
Network slicing in the 3GPP 5G system architecture has introduced significant improvements in the flexibility and efficiency of mobile communication. However, this new functionality poses challenges in maintaining the privacy of mobile users, especially in multi-hop environments. In this paper, we propose a secure and privacy-preserving network slicing protocol (SPNS) that combines 5G network slicing and onion routing to address these challenges and provide secure and efficient communication. Our approach enables mobile users to select network slices while incorporating measures to prevent curious RAN nodes or external attackers from accessing full slice information. Additionally, we ensure that the 5G core network can authenticate all RANs, while avoiding reliance on a single RAN for service provision. Moreover, SPNS implements end-to-end encryption for data transmission within the network slices, providing an extra layer of privacy and security. Finally, we conduct extensive experiments to evaluate the time cost of establishing network slice links under varying conditions. SPNS provides a promising solution for enhancing the privacy and security of communication in 5G networks.
It has been widely observed that there exists no universal best Multi-objective Evolutionary Algorithm (MOEA) dominating all other MOEAs on all possible Multi-objective Optimization Problems (MOPs). In this work, we advocate using the Parallel Algorithm Portfolio (PAP), which runs multiple MOEAs independently in parallel and gets the best out of them, to combine the advantages of different MOEAs. Since the manual construction of PAPs is non-trivial and tedious, we propose to automatically construct high-performance PAPs for solving MOPs. Specifically, we first propose a variant of PAPs, namely MOEAs/PAP, which can better determine the output solution set for MOPs than conventional PAPs. Then, we present an automatic construction approach for MOEAs/PAP with a novel performance metric for evaluating the performance of MOEAs across multiple MOPs. Finally, we use the proposed approach to construct a MOEAs/PAP based on a training set of MOPs and an algorithm configuration space defined by several variants of NSGA-II. Experimental results show that the automatically constructed MOEAs/PAP can even rival the state-of-the-art ensemble MOEAs designed by human experts, demonstrating the huge potential of automatic construction of PAPs in multi-objective optimization.
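The portfolio mechanism can be sketched in a few lines (illustrative names; member algorithms run sequentially here rather than in parallel, and real PAPs wrap full MOEAs rather than stub solvers): each member produces a candidate solution set, the sets are merged, and only the nondominated solutions are returned as the portfolio's output.

```python
def nondominated(points):
    """Keep points not dominated by any other, assuming minimization in
    every objective: q dominates p if q <= p componentwise and q != p."""
    front = []
    for p in points:
        if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                   for q in points):
            front.append(p)
    return front

def portfolio_solve(problem, solvers):
    """Parallel-algorithm-portfolio sketch: run each member solver on the
    problem independently, merge their outputs, and return the combined
    nondominated set -- the portfolio 'gets the best out of' its members."""
    merged = []
    for solve in solvers:
        merged.extend(solve(problem))
    return nondominated(merged)

# Two stub "solvers" standing in for differently configured MOEAs.
s1 = lambda prob: [(1, 3), (3, 3)]
s2 = lambda prob: [(3, 1), (2, 2)]
front = portfolio_solve(None, [s1, s2])
```

Here the merged front keeps `(1, 3)`, `(2, 2)`, and `(3, 1)` while discarding the dominated `(3, 3)`: neither member alone produces the full set, which is exactly the complementarity a PAP exploits.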