We provide the first-ever performance evaluation of orthogonal time frequency space (OTFS) modulation in cell-free massive multiple-input multiple-output (MIMO) systems. To investigate the trade-off between performance and overhead, we apply embedded pilot-aided and superimposed pilot-based channel estimation methods. We then derive a closed-form expression for the individual per-user downlink and uplink spectral efficiencies as a function of the numbers of access points (APs) and users and of the delay-Doppler domain channel estimation parameters. Based on these analytical results, we also present new scaling laws that the AP and user transmit powers should satisfy to sustain a desired quality of service. It is found that when the number of APs, $M_a$, grows without bound, we can reduce the transmit power of each user and each AP proportionally to $1/M_a$ and $1/M_a^2$, respectively, during the uplink and downlink phases. We compare the OTFS performance with that of orthogonal frequency division multiplexing (OFDM) under high-mobility conditions. Our findings reveal that with shadowing correlation, OTFS modulation with embedded pilot-based channel estimation provides a $30$-fold gain over the OFDM counterpart in terms of the $95\%$-likely per-user downlink rate. Finally, with superimposed pilot-based channel estimation, the increase in per-user throughput is more pronounced at the median rates over the correlated shadowing channels.
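To make the stated scaling laws concrete (the fixed reference powers $E_u$ and $E_d$ below are our own illustrative notation, not quantities defined in the abstract), one way to write the corresponding power-control rule is

\[
p_{\mathrm{ul}}(M_a)=\frac{E_u}{M_a}, \qquad p_{\mathrm{dl}}(M_a)=\frac{E_d}{M_a^{2}},
\]

so that, as $M_a$ grows without bound, the per-user uplink power and the per-AP downlink power can be driven to zero while the desired quality of service is sustained.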
Numerous algorithms call for computation over the integers modulo a randomly-chosen large prime. In some cases, the quasi-cubic complexity of selecting a random prime can dominate the total running time. We propose a new variant of the classic D5 algorithm for "dynamic evaluation", applied to a randomly-chosen (composite) integer. Unlike the D5 principle, which has been used in the past to compute over a direct product of fields, our method is simpler as it only requires following a single path after any modulus splits. The transformation we propose applies to any algorithm in the algebraic RAM model, even randomized ones. The resulting transformed algorithm avoids any primality tests and will, with constant positive probability, have the same result as the original computation modulo a randomly-chosen prime. As an application, we demonstrate how to compute the exact number of nonzero terms in an unknown integer polynomial in quasi-linear time. We also show how the same algorithmic transformation technique can be used for computing modulo random irreducible polynomials over a finite field.
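To make the single-path idea concrete, here is a minimal toy sketch in Python (our own illustration, not the paper's algorithm): whenever an inversion hits a zero divisor modulo a composite, the modulus is split via a gcd and the computation continues along just one factor, with no primality test ever performed.

```python
import math, random

def inv_mod_dynamic(a, N):
    """Invert a modulo N if possible; when a turns out to be a zero divisor,
    split the modulus via a gcd and keep following a single factor
    (the 'single path' idea), then retry."""
    while True:
        g = math.gcd(a % N, N)
        if g == 1:
            return pow(a, -1, N), N        # inverse and the (possibly reduced) modulus
        if a % N == 0:
            raise ZeroDivisionError("a vanishes modulo every remaining factor")
        N //= g                            # modulus splits as g * (N // g); follow one branch

# toy usage: invert a fixed value modulo a random (likely composite) integer
N = random.randrange(10**6, 10**7)
a = 123456
inv, N = inv_mod_dynamic(a, N)
assert (a * inv) % N == 1
```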
This paper considers the reconfigurable intelligent surface (RIS)-assisted communication scenario, where an RIS is used to assist the base station (BS) in serving multiple users. The RIS, consisting of passive reflecting elements, can manipulate the reflection direction of incoming electromagnetic waves and thus offers a new design dimension to the system designer. To maximize the sum rate, the active beamforming at the BS and the passive phase shifts at the RIS need to be jointly optimized, which is an NP-hard problem. In this work, we consider the joint active and passive beamforming (JAPB) problem over correlated fading channels. To facilitate practical implementation, we propose two low-complexity schemes, combined with user grouping, to solve JAPB. In addition, we theoretically analyze the mean correlation coefficient between two cascaded RIS channels and obtain a closed-form expression for arbitrary phase-shift values. An asymptotic analysis is also conducted to gain insight into the correlation of cascaded RIS channels when the numbers of BS antennas and RIS elements are large. Simulation results are presented to validate the accuracy of the derived mean correlation coefficient. The sum-rate performance of the proposed methods under different system settings is also evaluated and compared with a benchmark that optimizes the RIS phase shifts using element-wise successive refinement.
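As an illustration of the quantity being analyzed, the following is a small Monte Carlo sketch of our own (using i.i.d. Rayleigh fading rather than the correlated fading of the paper, and purely illustrative dimensions) that estimates the correlation coefficient between two cascaded BS-RIS-user channels sharing the same phase-shift vector.

```python
import numpy as np

rng = np.random.default_rng(0)
Nb, Nr, trials = 8, 64, 2000        # BS antennas, RIS elements, Monte Carlo runs

def cascade(G, f, phi):
    """Cascade BS-RIS-user channel for one user: G diag(e^{j phi}) f."""
    return G @ (np.exp(1j * phi) * f)

rho = []
for _ in range(trials):
    G  = (rng.standard_normal((Nb, Nr)) + 1j * rng.standard_normal((Nb, Nr))) / np.sqrt(2)
    f1 = (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)) / np.sqrt(2)
    f2 = (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)) / np.sqrt(2)
    phi = rng.uniform(0, 2 * np.pi, Nr)          # arbitrary phase-shift values
    h1, h2 = cascade(G, f1, phi), cascade(G, f2, phi)
    rho.append(np.abs(h1.conj() @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2)))

print("empirical mean correlation coefficient:", np.mean(rho))
```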
This article considers the massive MIMO unsourced random access problem on a quasi-static Rayleigh fading channel. Given a fixed message length and a prescribed number of channel uses, the objective is to construct a coding scheme that minimizes the energy-per-bit subject to a fixed probability of error. The proposed scheme differs from other state-of-the-art schemes in that it blends activity detection, single-user coding, pilot-aided and temporary-decision-aided iterative channel estimation and decoding, minimum mean-squared error (MMSE) estimation, and successive interference cancellation (SIC). We show that an appropriate combination of these ideas can substantially outperform state-of-the-art coding schemes when the number of active users is more than 100, making this the best-performing scheme known for this regime.
Efficient usage of in-device storage and computation capabilities is key to supporting data-intensive applications such as immersive digital experiences. This paper proposes a location-dependent, multi-antenna, coded-caching-based content delivery scheme tailored specifically for wireless immersive viewing applications. First, a memory assignment phase is performed in which content relevant to the identified wireless bottleneck areas is incentivized. As a result, unequal fractions of location-dependent multimedia content are cached at each user. Then, a novel packet generation process is carried out given the asymmetric cache placement. During the subsequent delivery phase, the number of packets transmitted to each user is the same, while the sizes of the packets are proportional to the corresponding location-dependent cache ratios. Finally, each user is served with location-specific content using a joint multicast beamforming and multi-rate modulation scheme that simultaneously benefits from global caching and spatial multiplexing gains. Numerical experiments and mathematical analysis demonstrate significant performance gains compared to the state of the art.
A financial system is represented by a network, where nodes correspond to banks, and directed labeled edges correspond to debt contracts between banks. Once a payment schedule has been defined, where we assume that a bank cannot refuse a payment towards one of its lenders if it has sufficient funds, the liquidity of the system is defined as the sum of total payments made in the network. Maximizing systemic liquidity is a natural objective of any financial authority, so we study the setting where the financial authority offers bailout money to some bank(s) or forgives the debts of others in order to maximize liquidity, and examine efficient ways to achieve this. We investigate the approximation ratio provided by the greedy bailout policy compared to the optimal one, and we study the computational hardness of finding the optimal debt-removal policy and the budget-constrained optimal bailout policy, respectively. We also study financial systems from a game-theoretic standpoint. We observe that the removal of some incoming debt might be in the best interest of a bank, if that helps one of its borrowers remain solvent and avoid costs related to default. Assuming that a bank's well-being (i.e., utility) is aligned with the incoming payments it receives from the network, we define and analyze a game among banks who want to maximize their utility by strategically giving up some incoming payments. In addition, we extend the previous game by considering bailout payments. After formally defining the above games, we prove results about the existence and quality of pure Nash equilibria, as well as the computational complexity of finding such equilibria.
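For intuition on how systemic liquidity is computed once a payment schedule is fixed, here is a small Python sketch of our own. It assumes a proportional-payments schedule in the spirit of Eisenberg-Noe clearing, which is just one admissible schedule consistent with the rule that a bank cannot refuse a payment it can afford; it iterates the clearing fixed point and sums the resulting payments.

```python
import numpy as np

def clearing_payments(L, e, iters=100):
    """Proportional-payments clearing as one concrete payment schedule.
    L[i][j] = liability of bank i towards bank j, e[i] = external assets of bank i.
    Returns the clearing payment vector and the total liquidity (sum of payments)."""
    L = np.asarray(L, float)
    e = np.asarray(e, float)
    bar_p = L.sum(axis=1)                      # total liabilities per bank
    Pi = np.divide(L, bar_p[:, None], out=np.zeros_like(L), where=bar_p[:, None] > 0)
    p = bar_p.copy()
    for _ in range(iters):                     # fixed-point iteration
        p = np.minimum(bar_p, e + Pi.T @ p)    # pay all debts or all available funds
    return p, p.sum()

# toy network: bank 0 owes 2 to bank 1, bank 1 owes 1 to bank 2
L = [[0, 2, 0], [0, 0, 1], [0, 0, 0]]
e = [1, 0, 0]                                  # bank 0 has only 1 in external assets
p, liquidity = clearing_payments(L, e)
print(p, liquidity)                            # bank 0 pays 1, bank 1 then pays 1 -> liquidity 2
```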
Group synchronization asks to recover group elements from their pairwise measurements. It has found numerous applications across various scientific disciplines. In this work, we focus on orthogonal and permutation group synchronization, which are widely used in computer vision tasks such as object matching and structure from motion. Among the many available approaches, spectral methods have enjoyed great popularity due to their efficiency and convenience. We study the performance guarantees of the spectral methods in solving these two synchronization problems by investigating how well the computed eigenvectors approximate each group element individually. We establish our theory by applying the recently popular~\emph{leave-one-out} technique and derive a~\emph{block-wise} performance bound for the recovery of each group element via eigenvectors. In particular, for orthogonal group synchronization, we obtain a near-optimal performance bound for the group recovery in the presence of additive Gaussian noise. For permutation group synchronization under random corruption, we show that the widely used two-step procedure (spectral method plus rounding) can recover all the group elements exactly if the SNR (signal-to-noise ratio) is close to the information-theoretic limit. Our numerical experiments confirm our theory and indicate a sharp phase transition for exact group recovery.
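To illustrate the spectral approach covered by the analysis, the following is a self-contained toy sketch (dimensions and noise level are chosen purely for illustration): assemble the block matrix of noisy pairwise measurements, take its top-$d$ eigenvectors, and round each $d \times d$ block back onto the orthogonal group via an SVD, then measure the block-wise error up to the global orthogonal ambiguity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 30, 3, 0.3                       # number of group elements, dimension, noise level

# ground-truth orthogonal matrices O_1, ..., O_n
O = [np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(n)]

# noisy pairwise measurements A_ij ~ O_i O_j^T, assembled into an nd x nd block matrix
A = np.zeros((n * d, n * d))
for i in range(n):
    for j in range(n):
        A[i*d:(i+1)*d, j*d:(j+1)*d] = O[i] @ O[j].T
W = rng.standard_normal(A.shape)
A += sigma * (W + W.T) / np.sqrt(2)            # symmetric additive Gaussian noise

# spectral method: top-d eigenvectors, then round each d x d block onto O(d) via SVD
vals, vecs = np.linalg.eigh(A)
Phi = vecs[:, -d:]
est = []
for i in range(n):
    U, _, Vt = np.linalg.svd(Phi[i*d:(i+1)*d, :])
    est.append(U @ Vt)

# block-wise recovery error up to the global orthogonal ambiguity (align via block 0)
Q = est[0].T @ O[0]
err = max(np.linalg.norm(est[i] @ Q - O[i]) for i in range(n))
print("max block-wise error:", err)
```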
Information cross-validation can be a powerful tool to detect manipulated, dubious GNSS data. A promising approach is to leverage time obtained over the networks a mobile device can connect to, and to detect discrepancies between the GNSS-provided time and the network time. The challenge lies in reliably obtaining network time that is both accurate and trustworthy as the basis for GNSS attack detection. Here, we provide a concrete proposal that leverages, together with network time servers, the nearly ubiquitous IEEE 802.11 (Wi-Fi) infrastructure. Our framework supports application-layer, secure and robust real time broadcasting by Wi-Fi Access Points (APs), based on hash chains and infrequent digital signature verification to minimize computational and communication overhead, allowing mobile nodes to efficiently obtain authenticated and rich time information as they roam. We pair this method with Network Time Security (NTS) for enhanced resilience through multiple sources, ideally available simultaneously. We analyze the performance of our scheme in a dedicated setup, gauging the overhead of authenticated time data (Wi-Fi timestamped beacons and NTS). The results show that it is possible to secure the time sources external to GNSS with minimal overhead for authentication and integrity, even when the GNSS-equipped nodes are mobile, and thus have short interactions with the Wi-Fi infrastructure and possibly intermittent Internet connectivity, as well as limited resources.
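As a minimal sketch of the hash-chain ingredient (our own toy illustration, not the exact protocol; the digital signature over the anchor and the binding of chain values to timestamped beacons are omitted), the idea is that the AP signs only the chain anchor once, and receivers authenticate each later disclosed value by hashing it forward to the anchor, so no per-beacon signature is needed.

```python
import hashlib

H = lambda x: hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """x_n = seed, x_{i-1} = H(x_i); x_0 is the anchor that gets signed once."""
    xs = [seed]
    for _ in range(n):
        xs.append(H(xs[-1]))
    return xs[::-1]                  # xs[0] = anchor, xs[i] = value disclosed at step i

# AP side: build the chain and sign the anchor once (signing itself omitted here)
chain = make_chain(b"ap-secret-seed", 1000)
anchor = chain[0]

# Mobile node side: verify a value disclosed in the i-th beacon by hashing
# it forward to the already-authenticated anchor.
def verify(disclosed: bytes, i: int, anchor: bytes) -> bool:
    h = disclosed
    for _ in range(i):
        h = H(h)
    return h == anchor

assert verify(chain[7], 7, anchor)
```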
Imagine a coverage area where each mobile device is communicating with a preferred set of wireless access points (among many) that are selected based on its needs and cooperate to jointly serve it, instead of creating autonomous cells. This effectively leads to a user-centric post-cellular network architecture, which can resolve many of the interference issues and service-quality variations that appear in cellular networks. This concept is called User-centric Cell-free Massive MIMO (multiple-input multiple-output) and has its roots in the intersection between three technology components: Massive MIMO, coordinated multipoint processing, and ultra-dense networks. The main challenge is to achieve the benefits of cell-free operation in a practically feasible way, with computational complexity and fronthaul requirements that are scalable to enable massively large networks with many mobile devices. This monograph covers the foundations of User-centric Cell-free Massive MIMO, starting from the motivation and mathematical definition. It continues by describing the state-of-the-art signal processing algorithms for channel estimation, uplink data reception, and downlink data transmission with either centralized or distributed implementation. The achievable spectral efficiency is mathematically derived and evaluated numerically using a running example that exposes the impact of various system parameters and algorithmic choices. The fundamental tradeoffs between communication performance, computational complexity, and fronthaul signaling requirements are thoroughly analyzed. Finally, the basic algorithms for pilot assignment, dynamic cooperation cluster formation, and power optimization are provided, while open problems related to these and other resource allocation problems are reviewed. All the numerical examples can be reproduced using the accompanying Matlab code.
We propose efficient, low-complexity multiuser detection (MUD) algorithms for the Gaussian multiple access channel (G-MAC) for short-packet transmission in massive machine-type communications. We formulate the MUD problem for the G-MAC as a compressive sensing problem, with virtual sparsity induced by the devices' one-hot vector transmission in the code domain. Our proposed MUD algorithms rely on block and non-block approximate message passing (AMP) for soft decoding of the transmitted packets. Taking the induced sparsity into account leads to a known prior distribution on the transmit vector; hence, the optimal minimum mean-squared error (MMSE) denoiser can be employed. We consider both separable and non-separable MMSE denoisers for AMP soft decoding. The effectiveness of the proposed MUD algorithms for a large number of devices is supported by simulation results. For packets of 8 information bits, while the state-of-the-art AMP with soft-threshold denoising achieves 5/100 of the converse bound at $E_b/N_0 = 4$ dB, the proposed algorithms reach 4/9 and 1/2 of the converse bound.
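For reference, here is a generic AMP iteration with the soft-threshold denoiser mentioned as the baseline (a self-contained toy sketch with made-up dimensions and a heuristic threshold rule; the paper's separable and non-separable MMSE denoisers would replace the soft-threshold step).

```python
import numpy as np

def soft(x, lam):
    """Soft-threshold denoiser."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def amp_soft_threshold(A, y, iters=30, lam=2.5):
    """Basic AMP for y = A x + noise with soft-threshold denoising."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        tau = lam * np.linalg.norm(z) / np.sqrt(n)      # threshold ~ residual noise level
        r = x + A.T @ z                                  # pseudo-data
        x_new = soft(r, tau)
        onsager = z * np.count_nonzero(x_new) / n        # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

# toy sparse-recovery example standing in for the one-hot code-domain vectors
rng = np.random.default_rng(0)
n, N, k = 128, 512, 10
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N); x0[rng.choice(N, k, replace=False)] = 1.0
y = A @ x0 + 0.01 * rng.standard_normal(n)
print("recovery error:", np.linalg.norm(amp_soft_threshold(A, y) - x0))
```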
In this work, we analyze the downlink performance of a cell-free massive multiple-input multiple-output system with finite-capacity fronthaul links between the centralized baseband units and the access points (APs). Conditioned on the user and AP locations, we first derive an achievable rate for a randomly selected user in the network that captures the effect of the finite fronthaul capacity. We then present the performance analysis for two different types of network architecture, namely the traditional and the user-centric. For the traditional architecture, where each user is served by all the APs in the network, we derive the user rate coverage using statistical properties of the binomial point process. For the user-centric architecture, where each user is served by a specified number of its nearest APs, we derive the rate coverage for the typical user using statistical properties of the Poisson point process. In addition, we statistically characterize the number of users per AP, which is necessary for the coverage analysis. From the system analyses, we conclude that, for the traditional architecture, the average system sum rate is a quasi-concave function of the number of users. Further, for the user-centric architecture, there exists an optimal number of serving APs that maximizes the average user rate.
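As a quick illustration of the Poisson point process (PPP) distance statistics that the user-centric analysis builds on (a Monte Carlo sketch of our own, with arbitrary density and cluster-size values), one can check that for a homogeneous PPP of intensity $\lambda$ the squared distance to the $k$-th nearest AP follows an Erlang law, i.e., $\pi\lambda R_k^2 \sim \mathrm{Gamma}(k,1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, radius, k, trials = 1e-4, 3000.0, 5, 5000   # AP density per m^2, window radius, cluster size, runs

dist_k = []
for _ in range(trials):
    n_ap = rng.poisson(lam * np.pi * radius**2)       # number of APs in the disk
    r = radius * np.sqrt(rng.uniform(size=n_ap))      # radial distances of uniform points in a disk
    dist_k.append(np.sort(r)[k - 1])                  # distance to the k-th nearest AP

# pi * lam * R_k^2 ~ Gamma(k, 1), hence E[R_k^2] = k / (pi * lam)
print("empirical  E[R_k^2]:", np.mean(np.square(dist_k)))
print("analytical E[R_k^2]:", k / (np.pi * lam))
```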