The stringent requirements on reliability and processing delay in fifth-generation (5G) cellular networks introduce considerable challenges in the design of massive multiple-input multiple-output (M-MIMO) receivers. The two main components of an M-MIMO receiver are a detector and a decoder. To improve the trade-off between reliability and complexity, Bayesian concepts have been considered a promising way to enhance classical detectors, e.g., the minimum mean square error (MMSE) detector. This work proposes an iterative M-MIMO detector based on a Bayesian framework, a parallel interference cancellation scheme, and a decision statistics combining concept. We then develop a high-performance M-MIMO receiver that integrates the proposed detector with low-complexity sequential decoding of polar codes. Simulation results show that the proposed detector achieves a significant performance gain over other low-complexity detectors. Furthermore, the proposed M-MIMO receiver with sequential decoding has one order of magnitude lower complexity than a receiver with stack successive cancellation decoding of polar codes from the 5G New Radio standard.
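To make the detection step concrete, the sketch below shows a generic MMSE-initialized parallel interference cancellation (PIC) loop; the function names and the nearest-point symbol mapping are illustrative assumptions and do not reproduce the proposed Bayesian decision-statistics-combining detector.

```python
import numpy as np

def pic_detect(y, H, noise_var, constellation, n_iter=4):
    """Generic MMSE-initialized parallel interference cancellation (PIC) loop
    for a MIMO system y = H x + n with complex-valued inputs. Illustrative
    only; it omits the Bayesian and decision-statistics-combining refinements."""
    K = H.shape[1]
    # MMSE initialization: x_hat = (H^H H + sigma^2 I)^{-1} H^H y
    G = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(K), H.conj().T)
    x_hat = G @ y
    for _ in range(n_iter):
        # map current estimates to the nearest constellation points
        x_sym = constellation[np.argmin(
            np.abs(x_hat[:, None] - constellation[None, :]), axis=1)]
        for k in range(K):
            # cancel the estimated interference of all other streams in parallel
            interference = H @ x_sym - H[:, k] * x_sym[k]
            residual = y - interference
            # matched filter on the interference-cleaned observation
            x_hat[k] = (H[:, k].conj() @ residual) / np.linalg.norm(H[:, k]) ** 2
    return x_hat
```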
We propose and analyze an unfitted finite element method for solving elliptic problems on domains with curved boundaries and interfaces. The approximation space on the whole domain is obtained by directly extending the finite element space defined on interior elements, in the sense that no degrees of freedom are located in boundary/interface elements. The boundary and jump conditions are imposed weakly in the scheme. The method is shown to be stable without any mesh adjustment or any special stabilization. Optimal convergence rates under the $L^2$ norm and the energy norm are derived. Numerical results in both two and three dimensions are presented to illustrate the accuracy and the robustness of the method.
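To illustrate what imposing boundary conditions in a weak sense typically involves, a standard symmetric Nitsche-type formulation for the model problem $-\Delta u = f$ in $\Omega$ with Dirichlet data $u = g$ on a curved boundary $\Gamma$ is sketched below; this generic form is given only for orientation and is not the exact scheme of the paper, which in addition extends the interior finite element space into the boundary/interface elements:
\[
a_h(u_h, v_h) = \int_{\Omega} \nabla u_h \cdot \nabla v_h \, dx
- \int_{\Gamma} \partial_n u_h \, v_h \, ds
- \int_{\Gamma} \partial_n v_h \, u_h \, ds
+ \frac{\gamma}{h} \int_{\Gamma} u_h \, v_h \, ds,
\]
\[
\ell_h(v_h) = \int_{\Omega} f \, v_h \, dx
- \int_{\Gamma} \partial_n v_h \, g \, ds
+ \frac{\gamma}{h} \int_{\Gamma} g \, v_h \, ds,
\]
where $\gamma > 0$ is a penalty parameter and $h$ denotes the mesh size.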
With the explosive growth of data and wireless devices, federated learning (FL) has emerged as a promising technology for large-scale intelligent systems. By exploiting the analog superposition of electromagnetic waves, over-the-air computation is an appealing approach to reduce the communication burden of FL model aggregation. However, the growing demand for intelligent systems means that multiple tasks must be trained, which further aggravates the scarcity of communication resources. Training multiple tasks simultaneously over shared communication resources alleviates this issue to some extent, but inevitably introduces inter-task interference. In this paper, we study over-the-air multi-task FL (OA-MTFL) over the multiple-input multiple-output (MIMO) interference channel. We propose a novel model aggregation method that aligns the local gradients of different devices, alleviating the straggler problem that channel heterogeneity commonly causes in over-the-air computation. We establish a unified communication-computation analysis framework for the proposed OA-MTFL scheme that accounts for the spatial correlation between devices, and formulate an optimization problem for designing the transceiver beamforming and device selection. We develop an algorithm based on alternating optimization (AO) and fractional programming (FP) to solve this problem, which effectively mitigates the impact of inter-task interference on the FL performance. We show that, thanks to the new model aggregation method, device selection is no longer essential to our scheme, thereby avoiding the heavy computational burden of implementing device selection. Numerical results demonstrate the correctness of the analysis and the outstanding performance of the proposed scheme.
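To make the aggregation step concrete, the sketch below illustrates plain over-the-air model aggregation with a fixed receive beamformer and a simple channel-inversion transmit scalar per single-antenna device; the names and the precoding rule are illustrative assumptions and do not reproduce the jointly optimized OA-MTFL transceiver design.

```python
import numpy as np

def ota_aggregate(gradients, channels, rx_beam, noise_std=0.01, rng=None):
    """Over-the-air aggregation sketch: each device scales its local gradient,
    the waveforms superpose on the uplink MIMO channel, and the server applies
    a receive beamformer to recover the averaged gradient."""
    rng = rng or np.random.default_rng(0)
    n_rx = channels[0].shape[0]                    # receive antennas
    d = gradients[0].shape[0]                      # model/gradient dimension
    received = np.zeros((n_rx, d), dtype=complex)
    for g, h in zip(gradients, channels):          # h: (n_rx,) channel of one device
        eff = rx_beam.conj() @ h                   # effective channel after beamforming
        tx_scale = eff.conj() / np.abs(eff) ** 2   # align/invert the effective channel
        received += np.outer(h, tx_scale * g)      # superposition over d channel uses
    noise = rng.standard_normal((n_rx, d)) + 1j * rng.standard_normal((n_rx, d))
    received += noise_std * noise / np.sqrt(2)
    return np.real(rx_beam.conj() @ received) / len(gradients)
```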
The design of methods for inference from time sequences has traditionally relied on statistical models that describe the relation between a latent desired sequence and the observed one. A broad family of model-based algorithms has been derived to carry out inference at controllable complexity using recursive computations over the factor graph representing the underlying distribution. An alternative, model-agnostic approach utilizes machine learning (ML) methods. Here we propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences. In the proposed approach, neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence, rather than the complete inference task. By exploiting the stationarity of this distribution, the resulting approach can be applied to sequences of varying temporal duration. Learned factor graphs can be realized using compact neural networks that are trainable with small training sets, or, alternatively, be used to improve upon existing deep inference systems. We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data and can be applied to sequences of different lengths. Our experimental results demonstrate the ability of the proposed learned factor graphs to carry out accurate inference from small training sets for sleep stage detection using the Sleep-EDF dataset, as well as for symbol detection in digital communications with unknown channels.
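For reference, on a chain-structured (hidden Markov) factor graph the sum-product scheme mentioned above reduces to the forward-backward recursion sketched here; in a learned factor graph the emission scores would be produced by a small neural network rather than a known likelihood model, and the transition factor could likewise be learned.

```python
import numpy as np

def sum_product_chain(emission_scores, transition, prior):
    """Sum-product (forward-backward) marginals on a stationary chain factor graph.
    emission_scores: (T, S) per-time-step factor values; transition: (S, S);
    prior: (S,) initial state distribution."""
    T, S = emission_scores.shape
    alpha = np.zeros((T, S))
    beta = np.zeros((T, S))
    alpha[0] = prior * emission_scores[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                          # forward messages
        alpha[t] = emission_scores[t] * (alpha[t - 1] @ transition)
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):                 # backward messages
        beta[t] = transition @ (emission_scores[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    marginals = alpha * beta                       # combine messages into beliefs
    return marginals / marginals.sum(axis=1, keepdims=True)
```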
A Gibbs-distribution-based combinatorial optimization algorithm for the joint antenna splitting and user scheduling problem in full-duplex massive multiple-input multiple-output (MIMO) systems is proposed in this paper. The optimal solution of this problem can be determined by exhaustive search. However, the complexity of this approach becomes prohibitive in practice when the sample space is large, which is usually the case in massive MIMO systems. Our algorithm overcomes this drawback by converting the original problem into a Kullback-Leibler (KL) divergence minimization problem and solving it through a related dynamical system via a stochastic gradient descent method. Using this approach, we improve the spectral efficiency (SE) of the system by performing joint antenna splitting and user scheduling. Additionally, numerical results show that the SE curves obtained with the proposed algorithm overlap with those achieved by exhaustive search for user scheduling.
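As a heavily simplified illustration of the Gibbs-distribution idea, the sketch below resamples binary decision variables (e.g., assigning each antenna or user to one of two modes) coordinate-wise from a Boltzmann law over a user-supplied utility, which would be the SE in this setting; the KL-divergence reformulation and the stochastic-gradient dynamical system of the proposed algorithm are not reproduced.

```python
import numpy as np

def gibbs_search(utility, n_vars, n_iters=2000, temperature=0.1, seed=0):
    """Coordinate-wise Gibbs sampling over binary configurations whose
    stationary law is proportional to exp(utility(x) / temperature)."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=n_vars)
    best_x, best_u = x.copy(), utility(x)
    for _ in range(n_iters):
        i = rng.integers(n_vars)                   # pick one coordinate to resample
        u_vals = []
        for v in (0, 1):                           # utility of both choices
            x[i] = v
            u_vals.append(utility(x))
        p_one = 1.0 / (1.0 + np.exp(-(u_vals[1] - u_vals[0]) / temperature))
        x[i] = int(rng.random() < p_one)           # sample from the conditional law
        if u_vals[x[i]] > best_u:
            best_x, best_u = x.copy(), u_vals[x[i]]
    return best_x, best_u
```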
Cell-free massive MIMO is one of the key technologies for future wireless communications, in which users are simultaneously and jointly served by all access points (APs). In this paper, we investigate the minimum mean square error (MMSE) estimation of effective channel coefficients in cell-free massive MIMO systems with massive connectivity. To facilitate the theoretical analysis, only single measurement vector (SMV) based MMSE estimation is considered in this paper, i.e., the MMSE estimation is performed based on the received pilot signals at each AP separately. Inspired by the decoupling principle of replica-symmetric postulated MMSE estimation of sparse signal vectors with independent and identically distributed (i.i.d.) non-zero components, we develop the corresponding decoupling principle for SMV-based MMSE estimation of sparse signal vectors with independent and non-identically distributed (i.n.i.d.) non-zero components, which plays a key role in the theoretical analysis of SMV-based MMSE estimation of the effective channel coefficients in cell-free massive MIMO systems with massive connectivity. Subsequently, based on the obtained decoupling principle of MMSE estimation, the likelihood ratio test, and the optimal fusion rule, we perform user activity detection either from the received pilot signals at a single AP or through cooperation among the entire set of APs for centralized or distributed detection. Via theoretical analysis, we show that the error probabilities of both centralized and distributed detection tend to zero when the number of APs tends to infinity while the asymptotic ratio between the number of users and pilots is kept constant. We also investigate the asymptotic behavior of oracle estimation in cell-free massive MIMO systems with massive connectivity via random matrix theory.
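In rough terms, the decoupling principle invoked here states that, in the large-system limit, MMSE estimation of the high-dimensional vector behaves, per coefficient, like estimation from an equivalent scalar Gaussian observation; schematically (with the effective noise level $\eta_k^{-1}$ determined by fixed-point equations that are not reproduced here),
\[
y_k = x_k + \eta_k^{-1/2} z_k, \qquad z_k \sim \mathcal{CN}(0,1), \qquad \hat{x}_k = \mathbb{E}\left[ x_k \mid y_k \right].
\]
This schematic form is only meant to orient the reader; the contribution here is extending such a decoupling result to i.n.i.d. non-zero components.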
This paper analyzes how the distortion created by hardware impairments at a multiple-antenna base station affects the uplink spectral efficiency (SE), with a focus on Massive MIMO. This distortion is correlated across the antennas, but it has often been approximated as uncorrelated to facilitate (tractable) SE analysis. To determine when this approximation is accurate, we first uncover basic properties of distortion correlation. Then, we separately analyze the distortion correlation caused by third-order non-linearities and by quantization. Finally, we study the SE numerically and show that the distortion correlation can be safely neglected in Massive MIMO when there are sufficiently many users. Under i.i.d. Rayleigh fading and equal signal-to-noise ratios (SNRs), this occurs for more than five transmitting users. Other channel models and SNR variations have only a minor impact on the accuracy. We also demonstrate the importance of taking the distortion characteristics into account in the receive combining.
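As a toy illustration of the effect studied here, the sketch below empirically estimates the cross-antenna correlation of third-order distortion under i.i.d. Rayleigh fading; the memoryless cubic model and its coefficient are assumptions for illustration, and the resulting correlation coefficient shrinks as the number of users grows, which is the mechanism behind the observation above.

```python
import numpy as np

def distortion_correlation(num_users, num_samples=100_000, a3=0.1, seed=0):
    """Empirical cross-antenna correlation of the third-order distortion
    d_m = a3 * u_m * |u_m|^2, where u_m is the undistorted signal at antenna m."""
    rng = np.random.default_rng(seed)
    H = (rng.standard_normal((2, num_users)) +
         1j * rng.standard_normal((2, num_users))) / np.sqrt(2)   # two antennas
    S = (rng.standard_normal((num_users, num_samples)) +
         1j * rng.standard_normal((num_users, num_samples))) / np.sqrt(2)
    U = H @ S                                   # undistorted received signals
    D = a3 * U * np.abs(U) ** 2                 # third-order distortion terms
    D -= D.mean(axis=1, keepdims=True)
    num = D[0] @ D[1].conj()
    den = np.sqrt((np.abs(D[0]) ** 2).sum() * (np.abs(D[1]) ** 2).sum())
    return np.abs(num / den)                    # normalized correlation magnitude
```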
In this paper, with the aid of the mathematical tool of stochastic geometry, we introduce analytical and computational frameworks for the distribution of three different definitions of delay, i.e., the time that it takes for a user to successfully receive a data packet, in large-scale cellular networks. We also provide an asymptotic analysis of one of the delay distributions, which can be regarded as the packet loss probability of a given network. To mitigate the inherent computational difficulties of the obtained analytical formulations in some cases, we propose efficient numerical approximations based on the numerical inversion method, the Riemann sum, and the Beta distribution. Finally, we demonstrate the accuracy of the obtained analytical formulations and the corresponding approximations against Monte Carlo simulation results, and unveil insights into the delay performance with respect to several design parameters, such as the decoding threshold, the transmit power, and the deployment density of the base stations. The proposed methods can facilitate the analysis and optimization of cellular networks subject to reliability constraints on the network packet delay that are not restricted to the local (average) delay, e.g., in the context of delay-sensitive applications.
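One of the approximations mentioned above, the Beta-distribution approximation, is commonly realized by matching the first two moments of a $[0,1]$-valued quantity (such as a conditional success probability) to a Beta law; the following is a minimal sketch under that standard moment-matching assumption.

```python
from scipy import stats

def beta_approximation(m1, m2):
    """Fit a Beta(a, b) distribution to the first two moments m1 = E[X] and
    m2 = E[X^2] of a random variable X supported on [0, 1]."""
    var = m2 - m1 ** 2
    a = m1 * (m1 * (1 - m1) / var - 1)
    b = a * (1 - m1) / m1
    return stats.beta(a, b)

# usage (illustrative numbers): tail probability P(X > 0.9)
approx = beta_approximation(0.8, 0.68)
tail = approx.sf(0.9)
```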
Despite the recent success of graph neural networks (GNNs), common architectures often exhibit significant limitations, including sensitivity to oversmoothing, long-range dependencies, and spurious edges, e.g., as can occur as a result of graph heterophily or adversarial attacks. To at least partially address these issues within a simple, transparent framework, we consider a new family of GNN layers designed to mimic and integrate the update rules of two classical iterative algorithms, namely, proximal gradient descent and iterative reweighted least squares (IRLS). The former defines an extensible base GNN architecture that is immune to oversmoothing while nonetheless capturing long-range dependencies by allowing arbitrarily many propagation steps. In contrast, the latter produces a novel attention mechanism that is explicitly anchored to an underlying end-to-end energy function, contributing stability with respect to edge uncertainty. When combined, we obtain an extremely simple yet robust model that we evaluate across disparate scenarios, including standardized benchmarks, adversarially perturbed graphs, graphs with heterophily, and graphs involving long-range dependencies. In doing so, we compare against SOTA GNN approaches that have been explicitly designed for the respective tasks, achieving competitive or superior node classification accuracy.
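A minimal sketch of the proximal-gradient side of this construction: unrolling gradient steps on a graph-regularized energy yields a propagation rule whose depth can be increased without collapsing node representations to a constant, because the fidelity term keeps anchoring them to the base predictions. The specific energy and step size below are illustrative assumptions, and the IRLS-based attention mechanism is omitted.

```python
import numpy as np

def unrolled_propagation(features, laplacian, lam=1.0, alpha=0.5, n_steps=16):
    """Unrolled gradient descent on E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y),
    where X are base node features/predictions and L is a (normalized) graph
    Laplacian. Each descent step plays the role of one propagation layer."""
    Y = features.copy()
    for _ in range(n_steps):
        grad = (Y - features) + lam * (laplacian @ Y)   # gradient of the energy
        Y = Y - alpha * grad                            # one propagation step
    return Y
```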
The field of Multi-Agent Systems (MAS) is an active area of research within Artificial Intelligence, with an increasingly important impact on industrial and other real-world applications. Within a MAS, autonomous agents interact to pursue personal interests and/or to achieve common objectives. Distributed Constraint Optimization Problems (DCOPs) have emerged as one of the prominent agent architectures for governing the agents' autonomous behavior, where both algorithms and communication models are driven by the structure of the specific problem. During the last decade, several extensions to the DCOP model have enabled it to support MAS in complex, real-time, and uncertain environments. This survey provides an overview of the DCOP model, gives a classification of its multiple extensions, and addresses both resolution methods and applications that find a natural mapping within each class of DCOPs. The proposed classification suggests several future perspectives for DCOP extensions and identifies challenges in the design of efficient resolution algorithms, possibly through the adaptation of strategies from different areas.
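For concreteness, in its classical form (notation varies across the literature) a DCOP is specified by a tuple $\langle \mathcal{A}, \mathcal{X}, \mathcal{D}, \mathcal{F}, \alpha \rangle$, where $\mathcal{A}$ is the set of agents, $\mathcal{X}$ the set of variables with finite domains $\mathcal{D}$, $\mathcal{F}$ a set of utility (constraint) functions, and $\alpha$ a mapping assigning variables to agents; the goal is to find a complete assignment that maximizes the aggregate utility,
\[
\mathbf{x}^{*} = \arg\max_{\mathbf{x}} \sum_{f \in \mathcal{F}} f\!\left(\mathbf{x}_{|f}\right),
\]
where $\mathbf{x}_{|f}$ denotes the restriction of the assignment to the variables in the scope of $f$.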
In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) merely convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
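As an illustration of how the network can be encoded as affine constraints (written here in a common consensus form that may differ from the exact notation of the paper), each node $i$ keeps a local copy $\mathbf{x}_i$ and consensus is enforced through a gossip matrix $W$ associated with the graph:
\[
\min_{\mathbf{x}_1, \dots, \mathbf{x}_m} \; \sum_{i=1}^{m} f_i(\mathbf{x}_i)
\quad \text{subject to} \quad \sqrt{W}\, X = 0,
\qquad X \triangleq [\mathbf{x}_1, \dots, \mathbf{x}_m]^{\top},
\]
where, on a connected graph, the null space of $W$ contains only consensus configurations; Nesterov's accelerated gradient descent is then run on the Lagrangian dual of this constrained problem.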