Millimeter-wave (mmW)/terahertz (THz) wideband communication employing a large-scale antenna array is a promising technique for realizing massive machine-type communications (mMTC) in sixth-generation (6G) wireless networks. To reduce access latency and signaling overhead, we design a grant-free random access scheme based on joint active device detection and channel estimation (JADCE) for mmW/THz wideband massive access. In particular, by exploiting the simultaneously sparse and low-rank structure of mmW/THz channels with spreads in the delay-angular domain, we propose two multi-rank-aware JADCE algorithms based on the quotient geometry of the product of complex rank-$L$ matrices, where $L$ is the number of clusters. We prove that the proposed algorithms require fewer measurements than the best known bounds for conventional simultaneously sparse and low-rank recovery algorithms. Statistical analysis also shows that the proposed algorithms converge linearly to the ground truth with low computational complexity. Finally, extensive simulation results confirm the superiority of the proposed algorithms in terms of the accuracy of both activity detection and channel estimation.
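As an illustrative sketch only (not the authors' Riemannian quotient-geometry algorithm), simultaneously sparse and low-rank recovery can be pictured as alternating a gradient step on the data fit with projections onto the rank-$L$ manifold and a row-sparse set; the sensing matrix `A`, the dimensions, and the sparsity level `k` below are all hypothetical:

```python
import numpy as np

def sparse_lowrank_recover(A, y, shape, L, k, n_iter=200, step=0.5):
    """Recover X (row-sparse, rank <= L) from y = A @ X.ravel() + noise."""
    N, M = shape                              # e.g., N devices x M angular bins
    X = np.zeros((N, M), dtype=complex)
    for _ in range(n_iter):
        # gradient step on the least-squares data-fit term
        grad = (A.conj().T @ (A @ X.ravel() - y)).reshape(N, M)
        X = X - step * grad
        # project onto rank-L matrices via truncated SVD
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        X = U[:, :L] @ np.diag(s[:L]) @ Vh[:L]
        # keep only the k rows with the largest energy (active devices)
        weak = np.argsort(np.linalg.norm(X, axis=1))[:-k]
        X[weak] = 0
    return X
```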
Contention-based wireless channel access methods such as CSMA and ALOHA paved the way for the rise of the Industrial Internet of Things (IIoT). However, to cope with increasing demands on reliability and throughput, several mostly TDMA-based protocols, such as IEEE 802.15.4 and its extensions, have been proposed. Nonetheless, many of these IIoT protocols still require contention-based communication, e.g., for slot allocation and broadcast transmission. In many cases, subtle but hidden patterns characterize this secondary traffic. Present contention-based protocols are unaware of these hidden patterns and therefore cannot exploit this information. Especially in dense networks, they often do not provide sufficient reliability for primary traffic, e.g., they are unable to allocate transmission slots in time. In this paper, we propose QMA, a contention-based multiple access scheme based on Q-learning, which dynamically adapts transmission times to avoid collisions by learning patterns in the contention-based traffic. QMA is designed to be resource-efficient and targets small embedded devices. We show that QMA solves the hidden-node problem without the additional overhead of RTS/CTS messages and verify its behaviour in the FIT IoT-LAB testbed. Finally, QMA's scalability is studied by simulation, where it is used for GTS allocation in IEEE 802.15.4 DSME. The results show that QMA considerably increases reliability and throughput compared to CSMA/CA, especially in networks under high load.
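A minimal sketch of the core idea, assuming a hypothetical frame of 16 slots and a simple reward of +1 for a successful transmission and -1 for a collision (the paper's exact state, reward, and exploration design may differ):

```python
import random

N_SLOTS, ALPHA, GAMMA, EPS = 16, 0.1, 0.9, 0.05
Q = [0.0] * N_SLOTS          # one Q-value per candidate transmission slot

def choose_slot():
    if random.random() < EPS:                      # explore occasionally
        return random.randrange(N_SLOTS)
    return max(range(N_SLOTS), key=Q.__getitem__)  # else pick best-known slot

def update(slot, success):
    reward = 1.0 if success else -1.0              # collision -> negative reward
    Q[slot] += ALPHA * (reward + GAMMA * max(Q) - Q[slot])
```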
In this paper, we investigate the problem of pilot optimization and channel estimation for a two-way relay network (TWRN) aided by an intelligent reflecting surface (IRS) with finite discrete phase shifters. In a TWRN, a key challenge is that the two cascaded channels, source-to-IRS-to-relay and destination-to-IRS-to-relay, interfere with each other. By designing the initial phase shifts of the IRS and the pilot pattern, the two cascaded channels are separated using simple arithmetic operations such as addition and subtraction. Then, the least-squares (LS) estimator is adopted to estimate the two cascaded channels and the two direct channels from source to relay and from destination to relay. The corresponding mean squared errors (MSEs) of the channel estimators are derived, and the phase-shift matrix of the IRS that minimizes the MSE is obtained. We then show that two special matrices, the Hadamard matrix and the discrete Fourier transform (DFT) matrix, are optimal training matrices for the IRS. Furthermore, an IRS with finite discrete phase shifters is taken into account. Using theoretical derivations and numerical simulations, we find that phase shifters with 3-4 bits are sufficient for the IRS to achieve a negligible MSE performance loss. More importantly, the Hadamard matrix requires only one-bit phase shifters to achieve the optimal MSE performance, whereas the DFT matrix requires at least three or four bits to achieve the same performance. Thus, the Hadamard matrix is an excellent choice for channel estimation with low-resolution phase-shifting IRSs.
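The appeal of the Hadamard training matrix can be illustrated with a small LS sketch: its entries are +/-1 (realizable with one-bit phase shifters) and its columns are orthogonal, so the LS estimate reduces to scaled correlations. The sizes and noise level below are illustrative only:

```python
import numpy as np
from scipy.linalg import hadamard

N = 16                                        # IRS elements = pilot slots
Theta = hadamard(N).astype(float)             # +/-1 entries: one-bit phases
h = (np.random.randn(N) + 1j * np.random.randn(N)) / np.sqrt(2)
noise = 0.1 * (np.random.randn(N) + 1j * np.random.randn(N)) / np.sqrt(2)
y = Theta @ h + noise                         # stacked received pilots
h_ls = Theta.T @ y / N                        # LS estimate: Theta.T @ Theta = N*I
print("MSE:", np.mean(np.abs(h_ls - h) ** 2))
```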
In this paper, we first establish well-posedness results for one-dimensional McKean-Vlasov stochastic differential equations (SDEs) and related particle systems with a measure-dependent drift coefficient that is discontinuous in the spatial component, and a diffusion coefficient that is a Lipschitz function of the state only. We require only a fairly mild condition on the diffusion coefficient, namely that it be non-zero at any point of discontinuity of the drift, while we need to impose certain structural assumptions on the measure-dependence of the drift. Second, we study fully implementable Euler-Maruyama-type schemes for the particle system to approximate the solution of the one-dimensional McKean-Vlasov SDE. Here, we prove strong convergence results in terms of the number of time steps and the number of particles. Due to the discontinuity of the drift, the convergence analysis is non-standard, and the usual strong convergence order $1/2$ known from the Lipschitz case cannot be recovered by all schemes.
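An illustrative Euler-Maruyama scheme for such an interacting particle system is sketched below; the discontinuous drift $b(x, m) = \mathrm{sign}(x) + m$ (with $m$ the empirical mean) and the diffusion coefficient are toy choices, not the paper's general setting:

```python
import numpy as np

def em_particle_system(N=1000, T=1.0, n_steps=200, x0=0.5, seed=0):
    """Euler-Maruyama for N interacting particles approximating the MV-SDE."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(N, x0)
    for _ in range(n_steps):
        m = X.mean()                          # measure dependence via empirical mean
        drift = np.sign(X) + m                # discontinuous at x = 0
        sigma = 1.0 + 0.1 * np.abs(X)         # Lipschitz, non-zero at x = 0
        X = X + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
    return X
```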
In this article, a wavelet-OFDM-based non-orthogonal multiple access (NOMA) system combined with massive MIMO (mMIMO) for 6G networks is proposed. For mMIMO transmissions, the proposed system enhances performance by utilizing wavelets to compensate for channel impairments on the transmitted signal. Performance measures include spectral efficiency, symbol error rate (SER), and peak-to-average power ratio (PAPR). Simulation results show that the proposed system outperforms conventional OFDM-based NOMA systems.
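Independent of the wavelet/OFDM front end, the power-domain NOMA component can be sketched for two users with an illustrative power split (superposition coding at the transmitter, successive interference cancellation at the strong user); BPSK and the AWGN level are placeholder choices:

```python
import numpy as np

p_weak, p_strong = 0.8, 0.2               # more power to the weak (far) user
s_weak = np.random.choice([-1, 1], 64)    # BPSK symbols, for simplicity
s_strong = np.random.choice([-1, 1], 64)
x = np.sqrt(p_weak) * s_weak + np.sqrt(p_strong) * s_strong  # superposition

y = x + 0.05 * np.random.randn(64)        # strong user's received signal
s_weak_hat = np.sign(y)                   # decode the weak user's symbols first
y_sic = y - np.sqrt(p_weak) * s_weak_hat  # cancel them (SIC)
s_strong_hat = np.sign(y_sic)             # then decode own symbols
```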
The stringent reliability and processing-latency requirements of ultra-reliable low-latency communication (URLLC) traffic make the design of linear massive multiple-input multiple-output (M-MIMO) receivers very challenging. Recently, Bayesian concepts have been used to increase the detection reliability of minimum-mean-square-error (MMSE) linear receivers. However, processing latency remains a major concern due to the cubic complexity of the matrix inversion operations in MMSE schemes. This paper proposes an iterative M-MIMO receiver developed from a Bayesian concept and a parallel interference cancellation (PIC) scheme, referred to as a linear Bayesian learning (LBL) receiver. PIC has linear complexity, as it uses a combination of maximum ratio combining (MRC) and decision statistic combining (DSC) schemes to avoid matrix inversion operations. Simulation results show that the proposed receiver outperforms the MMSE and the best Bayesian-based receivers by at least $2$ dB in bit-error-rate (BER) performance and by a factor of $19$ in processing latency across various M-MIMO system configurations.
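A hedged sketch of the inversion-free detection idea, using PIC with MRC only (the Bayesian refinement and DSC weighting of the LBL receiver are omitted; `H` and `y` are a hypothetical channel matrix and receive vector):

```python
import numpy as np

def pic_mrc_detect(H, y, n_iter=5):
    """PIC + MRC detection: vector operations only, no matrix inversion."""
    M, K = H.shape                             # M antennas, K users
    energy = np.sum(np.abs(H) ** 2, axis=0)    # per-user channel energy
    x_hat = np.zeros(K, dtype=complex)
    for _ in range(n_iter):
        x_prev = x_hat.copy()
        for k in range(K):
            # cancel all other users' current estimates (parallel update)
            residual = y - H @ x_prev + H[:, k] * x_prev[k]
            # MRC / matched filter on the cleaned observation
            x_hat[k] = H[:, k].conj() @ residual / energy[k]
    return x_hat
```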
The stringent requirements on reliability and processing delay in fifth-generation (5G) cellular networks introduce considerable challenges in the design of massive multiple-input multiple-output (M-MIMO) receivers. The two main components of an M-MIMO receiver are a detector and a decoder. To improve the trade-off between reliability and complexity, Bayesian concepts have been considered a promising approach for enhancing classical detectors, e.g., the minimum-mean-square-error (MMSE) detector. This work proposes an iterative M-MIMO detector based on a Bayesian framework, a parallel interference cancellation scheme, and a decision statistic combining concept. We then develop a high-performance M-MIMO receiver that integrates the proposed detector with low-complexity sequential decoding of polar codes. Simulation results show that the proposed detector achieves a significant performance gain over other low-complexity detectors. Furthermore, the proposed M-MIMO receiver with sequential decoding achieves an order of magnitude lower complexity than a receiver with stack successive cancellation decoding of polar codes from the 5G New Radio standard.
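The detector-decoder integration can be pictured as the loop below; `detect` and `decode` are hypothetical placeholders (e.g., the proposed Bayesian detector and a polar sequential decoder) exchanging log-likelihood ratios across outer iterations:

```python
def iterative_receiver(y, H, detect, decode, n_outer=3):
    """Generic iterative detection-and-decoding loop (interfaces assumed)."""
    prior_llr, bits = None, None
    for _ in range(n_outer):
        channel_llr = detect(y, H, prior_llr)  # detector: symbol/bit LLRs
        bits, prior_llr = decode(channel_llr)  # decoder: bits + extrinsic LLRs
    return bits
```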
This paper describes an online deep learning algorithm for adaptive modulation and coding (AMC) in massive MIMO. The algorithm is based on a fully connected neural network, which is initially trained on the output of a traditional algorithm and then incrementally retrained using the service feedback on its own output. We show the advantage of our solution over the state-of-the-art Q-learning approach and provide system-level simulation results to support this conclusion in various scenarios with different channel characteristics and user speeds. Compared with traditional outer-loop link adaptation (OLLA), our algorithm improves user throughput by 10\% to 20\% in the full-buffer case of continuous traffic, significantly improving the quality of wireless MIMO communications.
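A hedged sketch of the online retraining loop: a small model first imitates a traditional AMC algorithm, then keeps learning from ACK/NACK feedback. The model (a per-MCS logistic scorer rather than the paper's fully connected network), features, and learning rate are illustrative placeholders:

```python
import numpy as np

class OnlineAMC:
    def __init__(self, n_features, n_mcs, lr=0.01):
        self.W = np.zeros((n_mcs, n_features))  # one scorer per MCS index
        self.lr = lr

    def select_mcs(self, x):                    # x: channel-quality features
        return int(np.argmax(self.W @ x))

    def update(self, x, mcs, ack):              # incremental feedback retraining
        target = 1.0 if ack else 0.0
        score = 1.0 / (1.0 + np.exp(-self.W[mcs] @ x))
        self.W[mcs] += self.lr * (target - score) * x   # SGD on logistic loss
```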
We consider the problem of dynamic spectrum access (DSA) in cognitive wireless networks, where only partial observations are available to the users due to narrowband sensing and transmissions. The cognitive network consists of primary users (PUs) and a secondary user (SU), which operate in a time-duplexing regime. The traffic pattern of each PU is assumed to be unknown to the SU and is modeled as a finite-memory Markov chain. Since observations are partial, both the channel-sensing and access actions affect the throughput. The objective is to maximize the SU's long-term throughput. To achieve this goal, we develop a novel algorithm that learns both access and sensing policies via deep Q-learning, dubbed Double Deep Q-network for Sensing and Access (DDQSA). To the best of our knowledge, this is the first paper to solve for both sensing and access policies in DSA via deep Q-learning. Second, we analyze the optimal policy theoretically to validate the performance of DDQSA. Although the general DSA problem is PSPACE-hard, we derive the optimal policy explicitly for a common model of cyclic user dynamics. Our results show that DDQSA learns a policy that implements both sensing and channel access and significantly outperforms existing approaches.
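The double-Q target at the heart of such an agent can be sketched as follows (networks and observation encoding are hypothetical; an action would jointly encode the channel to sense and the access decision):

```python
import torch

def double_q_target(q_net, target_net, reward, next_obs, gamma=0.99):
    """Double-DQN target: online net selects, target net evaluates."""
    with torch.no_grad():
        # the online network selects the greedy next action ...
        best_action = q_net(next_obs).argmax(dim=-1, keepdim=True)
        # ... the target network evaluates it (reduces overestimation bias)
        next_q = target_net(next_obs).gather(-1, best_action).squeeze(-1)
    return reward + gamma * next_q
```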
Deep learning has made remarkable achievements in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. Deep learning algorithms therefore encounter difficulties when applied to supervised learning tasks where only little data is available. This specific task is called few-shot learning. To address it, we propose a novel algorithm for few-shot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex, whose volume is used to measure class scatter. During testing, a new simplex is formed by combining the test sample with the points in the class. The similarity between the test sample and the class can then be quantified as the ratio of the volume of the new simplex to that of the original class simplex. Moreover, we present an approach to constructing simplices using local regions of feature maps produced by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm for few-shot learning.
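A sketch of the volume-ratio score, computing simplex volumes via the Gram determinant (feature extraction, e.g., the CNN embeddings mentioned above, is assumed to happen upstream):

```python
import math
import numpy as np

def simplex_volume(points):
    """Volume of the simplex spanned by the rows of `points` ((k+1) x d)."""
    A = points[1:] - points[0]                 # k edge vectors from one vertex
    k = A.shape[0]
    gram = A @ A.T                             # k x k Gram matrix
    return math.sqrt(max(np.linalg.det(gram), 0.0)) / math.factorial(k)

def similarity_ratio(test_sample, class_points):
    v_class = simplex_volume(class_points)
    v_new = simplex_volume(np.vstack([class_points, test_sample]))
    return v_new / (v_class + 1e-12)           # smaller ratio -> closer match
```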
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
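The Gaussian smoothing step underlying DRS can be sketched as follows (the distributed communication and optimization machinery is not shown; `gamma` and the sample count are illustrative):

```python
import numpy as np

def smoothed_grad(f, x, gamma=0.1, n_samples=100, rng=None):
    """Monte Carlo gradient estimate of f_gamma(x) = E[f(x + gamma * Z)]."""
    rng = rng or np.random.default_rng()
    Z = rng.standard_normal((n_samples, x.shape[0]))
    fx = f(x)
    # two-point (residual) estimator of the smoothed gradient
    diffs = np.array([f(x + gamma * z) - fx for z in Z])
    return (diffs[:, None] * Z).mean(axis=0) / gamma
```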