Compute-forward multiple access (CFMA) is a multiple access transmission scheme based on Compute-and-Forward (CF) which allows the receiver to first decode linear combinations of the transmitted signals and then solve for the individual messages. This paper extends the CFMA scheme to a two-user Gaussian multiple-input multiple-output (MIMO) multiple access channel (MAC). We first derive the expression for the achievable rate pairs of the MIMO MAC with CFMA. We then prove a general condition under which CFMA achieves the sum capacity of the channel. Furthermore, this result is specialized to SIMO and 2-by-2 diagonal MIMO multiple access channels, for which more explicit sum-capacity-achieving conditions on the powers and channel matrices are derived. Numerical results are also provided for the performance of CFMA on general MIMO multiple access channels.
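The final step of CFMA, solving for the individual messages once the linear combinations have been decoded, can be illustrated with a toy example. The sketch below is a hypothetical two-user instance over a prime field; the field size, coefficient matrix, and messages are illustrative assumptions and are not taken from the paper.

```python
# A minimal sketch (not the paper's algorithm) of the last step of CFMA:
# the receiver has decoded two integer linear combinations of the users'
# messages over a prime field and inverts the coefficient matrix modulo p
# to recover the individual messages.
import numpy as np

p = 251                                  # hypothetical prime field size
A = np.array([[1, 1],
              [1, 2]])                   # decoded combination coefficients (invertible mod p)
w = np.array([17, 42])                   # true messages, used here only to simulate the combinations
u = (A @ w) % p                          # the two decoded linear combinations

# invert A modulo p (2x2 case: adj(A) * det(A)^{-1} mod p)
det = int(round(np.linalg.det(A))) % p
det_inv = pow(det, -1, p)                # modular inverse of the determinant
adj = np.array([[A[1, 1], -A[0, 1]],
                [-A[1, 0], A[0, 0]]])
A_inv = (det_inv * adj) % p

w_hat = (A_inv @ u) % p
print(w_hat)                             # recovers [17, 42]
```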
Future wireless networks, in particular 5G and beyond, are anticipated to deploy dense Low Earth Orbit (LEO) satellite constellations to provide global coverage and broadband connectivity with reliable data services. However, new interference management challenges have to be tackled due to the large scale of dense LEO satellite networks. Rate-Splitting Multiple Access (RSMA), widely studied in terrestrial communication systems and Geostationary Orbit (GEO) satellite networks, has emerged as a novel, general, and powerful framework for interference management and multiple access strategies in future wireless networks. In this paper, we propose a multilayer interference management scheme for spectrum sharing in heterogeneous GEO and LEO satellite networks, where RSMA is implemented in a distributed manner at the GEO and LEO satellites, termed Distributed-RSMA (D-RSMA), to mitigate interference and improve the user fairness of the system. We study the problem of jointly optimizing the GEO/LEO precoders and message splits to maximize the minimum rate among User Terminals (UTs) subject to a transmit power constraint at every satellite. A Semi-Definite Programming (SDP)-based algorithm is proposed to solve the original non-convex optimization problem. Numerical results demonstrate the effectiveness and network-load robustness of the proposed D-RSMA scheme for multilayer satellite networks. Owing to data sharing and its interference management capability, D-RSMA provides significant max-min fairness gains when compared to several benchmark schemes.
We study the problem of unsourced random access (URA) over Rayleigh block-fading channels with a receiver equipped with multiple antennas. We propose a slotted structure with multiple stages of orthogonal pilots, each of which is randomly picked from a codebook. In the proposed signaling structure, each user encodes its message using a polar code and appends it to the selected pilot sequences to construct its transmitted signal. Accordingly, the transmitted signal is composed of multiple orthogonal pilot parts and a polar-coded part, and is sent through a randomly selected slot. The performance of the proposed scheme is further improved by randomly dividing the users into groups, each with a unique interleaver-power pair. We also apply the idea of multiple stages of orthogonal pilots to the case of a single receive antenna. In all the set-ups, we use an iterative approach for decoding the transmitted messages together with a suitable successive interference cancellation technique. The use of orthogonal pilots and the slotted structure leads to improved accuracy and reduced computational complexity in the proposed set-ups, and makes implementation with short blocklengths more viable. The performance of the proposed set-ups is illustrated via extensive simulation results, which show that the proposed multi-antenna set-ups outperform existing MIMO URA solutions for both short and long blocklengths, and that the proposed single-antenna set-ups are superior to existing single-antenna URA schemes.
In this paper, we consider the decentralized, stochastic nonconvex strongly-concave (NCSC) minimax problem with nonsmooth regularization terms on both the primal and dual variables, in which a network of $m$ computing agents collaborates via peer-to-peer communications. We consider settings where the coupling function is in expectation or finite-sum form and the two regularizers are convex functions applied separately to the primal and dual variables. Our algorithmic framework introduces a Lagrangian multiplier to eliminate the consensus constraint on the dual variable. Coupling this with variance-reduction (VR) techniques, our proposed method, named VRLM, requires only a single neighbor communication per iteration and achieves an $\mathcal{O}(\kappa^3\varepsilon^{-3})$ sample complexity in the general stochastic setting, with either a big-batch or small-batch VR option, where $\kappa$ is the condition number of the problem and $\varepsilon$ is the desired solution accuracy. With a big-batch VR, we additionally achieve an $\mathcal{O}(\kappa^2\varepsilon^{-2})$ communication complexity. In the special finite-sum setting, our method with a big-batch VR achieves an $\mathcal{O}(n + \sqrt{n} \kappa^2\varepsilon^{-2})$ sample complexity and $\mathcal{O}(\kappa^2\varepsilon^{-2})$ communication complexity, where $n$ is the number of components in the finite sum. All complexity results match the best-known results achieved by a few existing methods for solving special cases of the problem we consider. To the best of our knowledge, this is the first work that provides convergence guarantees for NCSC minimax problems with general convex nonsmooth regularizers applied to both the primal and dual variables in the decentralized stochastic setting. Numerical experiments are conducted on two machine learning problems. Our code is downloadable from //github.com/RPI-OPT/VRLM.
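The variance-reduction building block mentioned above can be illustrated in isolation. The sketch below is a minimal single-node example of a SPIDER/SARAH-type recursive VR gradient estimator on a least-squares toy problem; it omits the decentralized communication, the minimax structure, and the nonsmooth regularizers, and all names and constants are illustrative assumptions rather than the paper's method.

```python
# A minimal single-node sketch of a SPIDER/SARAH-style variance-reduced
# gradient estimator: a full (big-batch) gradient is computed every q
# iterations and corrected recursively with small batches in between.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A, b = rng.normal(size=(n, d)), rng.normal(size=n)

def grad(x, idx):
    """Average least-squares gradient over the samples indexed by idx."""
    return A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)

x, step, q, batch = np.zeros(d), 0.1, 20, 8
for t in range(200):
    if t % q == 0:
        v = grad(x, np.arange(n))              # full gradient at a checkpoint
    else:
        i = rng.choice(n, size=batch)
        v = grad(x, i) - grad(x_prev, i) + v   # recursive variance-reduction correction
    x_prev, x = x, x - step * v

print(np.linalg.norm(grad(x, np.arange(n))))   # full gradient norm after 200 iterations
```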
This paper investigates broadband channel estimation (CE) for intelligent reflecting surface (IRS)-aided millimeter-wave (mmWave) massive MIMO systems. CE for such systems is a challenging task due to the large dimensions of both the active massive MIMO array at the base station (BS) and the passive IRS. To address this problem, this paper proposes a compressive sensing (CS)-based CE solution for IRS-aided mmWave massive MIMO systems, whereby the angular-domain sparsity of large-scale array channels at mmWave is exploited for improved CE with reduced pilot overhead. Specifically, we first propose a downlink pilot transmission framework. By designing the pilot signals based on the prior knowledge that the line-of-sight-dominated BS-to-IRS channel is known, the high-dimensional BS-to-user and IRS-to-user channels can be jointly estimated based on CS theory. Moreover, to efficiently estimate the broadband channels, a distributed orthogonal matching pursuit algorithm is exploited, where the common sparsity shared by the channels at different subcarriers is utilized. Additionally, a redundant dictionary is designed to combat power leakage and further enhance the CE performance. Simulation results demonstrate the effectiveness of the proposed scheme.
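The common-sparsity idea behind the distributed orthogonal matching pursuit step can be sketched with a simultaneous-OMP style recovery, where a single support is selected jointly for all subcarriers. The code below is a simplified illustration under generic Gaussian measurements, not the paper's exact algorithm or system model; all dimensions and names are assumptions.

```python
# A minimal sketch of OMP with a common support shared across subcarriers:
# the atom is selected by aggregating correlations over all subcarriers,
# then least squares is solved per subcarrier on the shared support.
import numpy as np

def somp(Y, Phi, sparsity):
    """Y: (m, K) measurements for K subcarriers, Phi: (m, n) sensing matrix."""
    n, K = Phi.shape[1], Y.shape[1]
    support, R = [], Y.copy()                        # residuals, one column per subcarrier
    for _ in range(sparsity):
        corr = np.abs(Phi.conj().T @ R).sum(axis=1)  # aggregate correlation over subcarriers
        support.append(int(np.argmax(corr)))
        Phi_S = Phi[:, support]
        X_S, *_ = np.linalg.lstsq(Phi_S, Y, rcond=None)  # per-subcarrier LS on the common support
        R = Y - Phi_S @ X_S
    X = np.zeros((n, K), dtype=Y.dtype)
    X[support, :] = X_S
    return X, sorted(support)

# toy usage: a 3-sparse channel with support shared across 4 subcarriers
rng = np.random.default_rng(0)
Phi = rng.normal(size=(40, 128)) / np.sqrt(40)
X_true = np.zeros((128, 4)); X_true[[7, 50, 90], :] = rng.normal(size=(3, 4))
X_hat, supp = somp(Phi @ X_true, Phi, sparsity=3)
print(supp)                                          # recovers [7, 50, 90] with high probability
```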
We study the problem of Out-of-Distribution (OOD) detection, that is, detecting whether a learning algorithm's output can be trusted at inference time. While a number of tests for OOD detection have been proposed in prior work, a formal framework for studying this problem is lacking. We propose a definition for the notion of OOD that includes both the input distribution and the learning algorithm, which provides insights for the construction of powerful tests for OOD detection. We propose a procedure, inspired by multiple hypothesis testing, to systematically combine any number of different statistics from the learning algorithm using conformal p-values. We further provide strong guarantees on the probability of incorrectly classifying an in-distribution sample as OOD. In our experiments, we find that threshold-based tests proposed in prior work perform well in specific settings, but not uniformly well across different types of OOD instances. In contrast, our proposed method, which combines multiple statistics, performs uniformly well across different datasets and neural networks.
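As a concrete illustration of the basic ingredient, the sketch below computes a conformal p-value from nonconformity scores on a clean calibration set and combines two such p-values with a simple Bonferroni-style correction. The scores, sample sizes, and combination rule are illustrative assumptions rather than the exact procedure proposed here.

```python
# A small illustrative sketch of a conformal p-value: given nonconformity
# scores on an in-distribution calibration set, the p-value of a test point
# is the (smoothed) fraction of calibration scores at least as extreme.
import numpy as np

def conformal_pvalue(test_score, calib_scores):
    n = len(calib_scores)
    return (1 + np.sum(calib_scores >= test_score)) / (n + 1)

rng = np.random.default_rng(0)
calib = rng.normal(size=1000)                                # scores of clean calibration data
p_vals = [conformal_pvalue(s, calib) for s in (0.1, 4.0)]    # two hypothetical test statistics
combined = min(1.0, len(p_vals) * min(p_vals))               # Bonferroni-style combination
print(p_vals, combined)                                      # flag OOD when the combined p-value is small
```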
There are three generic services in 5G: enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine-type communications (mMTC). To guarantee the performance of heterogeneous services, network slicing is proposed to allocate resources to different services. Network slicing is typically done in an orthogonal multiple access (OMA) fashion, meaning that different services are allocated non-interfering resources. However, as the number of users grows, OMA-based slicing is not always optimal, and a non-orthogonal scheme may achieve better performance. This work aims to analyse the performance of different slicing schemes in the uplink, and a promising scheme based on rate-splitting multiple access (RSMA) is studied. RSMA provides a more flexible decoding order and theoretically achieves a larger rate region than OMA and non-orthogonal multiple access (NOMA) without time-sharing. Hence, RSMA has the potential to increase the rates of users requiring different services. In addition, since the two split streams of one user need not be decoded successively, RSMA can let suitable users split their messages and can design an appropriate decoding order depending on the service requirements. This work shows that, for network slicing, RSMA can outperform its NOMA counterpart and obtain significant gains over OMA in some regimes.
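The claim that rate splitting reaches the dominant face of the rate region without time-sharing can be made concrete with the classical single-antenna two-user Gaussian MAC example below; the powers are illustrative, and the example is a textbook illustration rather than this work's multi-service model.

```python
# A classical illustration of uplink rate splitting: user 1 splits its message
# into two streams with power split alpha, and the receiver decodes stream 1a,
# then user 2, then stream 1b, with successive interference cancellation.
# The sum rate equals the MAC sum capacity for every alpha, while the
# individual rates sweep the dominant face without time-sharing.
import numpy as np

def C(x):                        # Gaussian capacity in bits per channel use (unit noise power)
    return 0.5 * np.log2(1 + x)

P1, P2 = 10.0, 5.0               # illustrative transmit powers
for alpha in (0.0, 0.5, 1.0):
    R1a = C(alpha * P1 / (1 + (1 - alpha) * P1 + P2))  # decoded first, all else treated as noise
    R2  = C(P2 / (1 + (1 - alpha) * P1))               # decoded second
    R1b = C((1 - alpha) * P1)                          # decoded last, interference-free
    print(f"alpha={alpha:.1f}  R1={R1a + R1b:.3f}  R2={R2:.3f}  sum={R1a + R1b + R2:.3f}")
# the sum column equals C(P1 + P2) = 2.0 bits for every alpha
```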
Quantum internetworking is a recent field that promises numerous interesting applications, many of which require the distribution of entanglement between arbitrary pairs of users. This work deals with the problem of scheduling in an arbitrary entanglement-swapping quantum network, often called a first-generation quantum network, in its general-topology, multicommodity, loss-aware formulation. We introduce a linear algebraic framework that exploits quantum memory through the creation of intermediate entangled links. The framework is then employed to mathematically derive a natural class of quadratic scheduling policies for quantum networks by applying Lyapunov drift minimization, a standard technique in classical network science. Moreover, an additional class of Max-Weight-inspired policies is proposed and benchmarked, significantly reducing the computational cost at the price of a slight performance degradation. The policies are compared in terms of information availability, localization, and overall network performance through an ad hoc simulator that accepts user-provided network topologies and scheduling policies, in order to showcase the potential application of the provided tools to quantum network design.
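To give a flavor of the Max-Weight-inspired policies, the toy sketch below scores a small set of candidate actions by backlog times success probability and picks the heaviest one. The topology, commodities, and probabilities are invented for illustration, and the quantum-specific constraints (memory decoherence, swap fidelity, link heralding) are omitted.

```python
# A toy Max-Weight-style scheduling rule: among feasible actions, pick the one
# maximizing the served backlog weighted by its success probability.
# All names and numbers below are illustrative assumptions, not the paper's model.
backlogs = {("A", "B"): 5, ("B", "C"): 2, ("A", "C"): 7}    # entanglement requests per commodity
actions = {
    "serve_AB":   {("A", "B"): 1},                          # deliver one A-B pair directly
    "swap_to_AC": {("A", "C"): 1},                          # swap at B to create an A-C pair
}
success = {"serve_AB": 0.9, "swap_to_AC": 0.5}              # link/swap success probabilities

def weight(action):
    served = actions[action]
    return success[action] * sum(backlogs[c] * k for c, k in served.items())

best = max(actions, key=weight)
print(best, weight(best))     # "serve_AB": 0.9 * 5 = 4.5 beats 0.5 * 7 = 3.5
```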
This paper investigates the multiple testing problem for high-dimensional sparse binary sequences, motivated by the crowdsourcing problem in machine learning. We adopt an empirical Bayes approach to estimate possibly sparse sequences with Bernoulli noise. We find the surprising result that the hard thresholding rule deduced from the spike-and-slab posterior is not optimal, even when using a uniform prior. Two approaches are then proposed to calibrate the posterior so as to achieve the optimal signal detection boundary, and two multiple testing procedures are constructed based on these calibrated posteriors. Sharp frequentist theoretical results are obtained for these procedures, showing that both can effectively control the false discovery rate uniformly over signals under a sparsity assumption. Numerical experiments are conducted to validate our theory in finite samples.
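A stylized numerical example helps fix ideas: with a spike at 1/2 and a uniform slab, the posterior probability of the slab has a closed form, and hard thresholding it at 1/2 gives the selection rule discussed above. The model and numbers below are illustrative assumptions, not the paper's exact setting.

```python
# A stylized spike-and-slab posterior for Bernoulli counts: each coordinate
# yields k successes out of n trials; the spike puts the success probability
# at 1/2 ("no signal") and the slab is a uniform prior on [0, 1]. The hard
# thresholding rule selects a coordinate when the slab posterior exceeds 1/2.
from math import comb

def slab_posterior_prob(k, n, w=0.5):
    """Posterior probability of the slab given k successes in n trials."""
    m_spike = comb(n, k) * 0.5 ** n       # marginal likelihood under the spike at 1/2
    m_slab = 1.0 / (n + 1)                # uniform slab: Beta(k+1, n-k+1) normalizer
    return w * m_slab / (w * m_slab + (1 - w) * m_spike)

for k in (10, 15, 19):
    prob = slab_posterior_prob(k, n=20)
    print(k, round(prob, 3), "select" if prob > 0.5 else "drop")
```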
Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. Their theoretical footing, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, robust principal component analysis, phase synchronization, and joint alignment. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
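As a concrete illustration of the two-stage recipe, the sketch below runs a spectral initialization followed by factored gradient descent on a toy rank-one matrix sensing instance. The dimensions, step size, and iteration count are illustrative choices rather than the tuned constants from the analyses surveyed here.

```python
# A compact two-stage sketch on a toy rank-1 matrix sensing instance:
# spectral initialization, then gradient descent on the factored objective
# f(u) = (1/4m) sum_k (<A_k, u u^T> - y_k)^2 with symmetric sensing matrices.
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 600
x_star = rng.normal(size=n)                           # ground-truth factor
A = rng.normal(size=(m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2                    # symmetric sensing matrices
y = np.einsum('kij,i,j->k', A, x_star, x_star)        # y_k = <A_k, x x^T>

# Stage 1: spectral initialization from the top eigenvector of (1/m) sum_k y_k A_k
M = np.einsum('k,kij->ij', y, A) / m
vals, vecs = np.linalg.eigh(M)
u = vecs[:, -1] * np.sqrt(abs(vals[-1]))

# Stage 2: gradient descent on the factored objective
step = 0.2 / np.linalg.norm(u) ** 2
for _ in range(300):
    r = np.einsum('kij,i,j->k', A, u, u) - y          # residuals
    grad = np.einsum('k,kij,j->i', r, A, u) / m       # (1/m) sum_k r_k A_k u
    u = u - step * grad

dist = min(np.linalg.norm(u - x_star), np.linalg.norm(u + x_star))   # recovery is up to sign
print(f"relative error: {dist / np.linalg.norm(x_star):.2e}")
```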
Vision-based vehicle detection approaches have achieved impressive success in recent years with the development of deep convolutional neural networks (CNNs). However, existing CNN-based algorithms suffer from the problem that convolutional features are scale-sensitive in the object detection task, while traffic images and videos commonly contain vehicles with a large variance of scales. In this paper, we delve into the source of scale sensitivity and reveal two key issues: 1) existing RoI pooling destroys the structure of small-scale objects, and 2) the large intra-class distance induced by a large variance of scales exceeds the representation capability of a single network. Based on these findings, we present a scale-insensitive convolutional neural network (SINet) for the fast detection of vehicles with a large variance of scales. First, we present a context-aware RoI pooling scheme that preserves the contextual information and original structure of small-scale objects. Second, we present a multi-branch decision network to minimize the intra-class distance of features. These lightweight techniques add no extra time complexity yet bring a prominent improvement in detection accuracy. The proposed techniques can be equipped with any deep network architecture and keep it trainable end-to-end. Our SINet achieves state-of-the-art performance in terms of accuracy and speed (up to 37 FPS) on the KITTI benchmark and a new highway dataset, which contains a large variance of scales and extremely small objects.
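A simplified sketch of the intuition behind context-aware RoI pooling is given below: crops that are smaller than the target pooled size are enlarged by bilinear interpolation so the structure of small objects is preserved, while larger crops fall back to standard pooling. This is an illustrative PyTorch approximation, not the authors' exact implementation, and all sizes are assumptions.

```python
# A simplified sketch of the idea behind context-aware RoI pooling: enlarge
# small proposals on the feature map by bilinear interpolation (a
# deconvolution-like operation) instead of quantized max pooling.
import torch
import torch.nn.functional as F

def context_aware_roi_pool(feat, box, out_size=7):
    """feat: (C, H, W) feature map; box: (x1, y1, x2, y2) in feature-map coordinates."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    crop = feat[:, y1:y2 + 1, x1:x2 + 1].unsqueeze(0)        # (1, C, h, w)
    h, w = crop.shape[-2:]
    if h < out_size or w < out_size:
        # small proposal: upsample to preserve fine structure
        return F.interpolate(crop, size=(out_size, out_size),
                             mode='bilinear', align_corners=False)[0]
    # large proposal: standard adaptive max pooling
    return F.adaptive_max_pool2d(crop, out_size)[0]

feat = torch.randn(256, 50, 80)                               # hypothetical conv feature map
small = context_aware_roi_pool(feat, (10, 10, 13, 14))        # small proposal -> upsampled
large = context_aware_roi_pool(feat, (5, 5, 40, 45))          # large proposal -> max pooled
print(small.shape, large.shape)                               # both (256, 7, 7)
```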