In this paper, we investigate a cell-free massive MIMO system in which both the access points (APs) and the user equipments (UEs) are equipped with multiple antennas, over jointly correlated Rayleigh fading channels. We study four uplink implementations, ranging from fully centralized to fully distributed processing, and derive achievable spectral efficiency (SE) expressions for minimum mean-squared error successive interference cancellation (MMSE-SIC) detectors with arbitrary combining schemes. Furthermore, we derive the global and local MMSE combining schemes based on the full and local channel state information (CSI) obtained under pilot contamination; these maximize the achievable SE for the fully centralized and fully distributed implementations, respectively. We also study a two-layer decoding implementation with an arbitrary combining scheme in the first layer and optimal large-scale fading decoding in the second layer, and we compute novel closed-form SE expressions for this implementation with maximum ratio combining. We compare the SE of the different implementation levels and combining schemes and investigate the effect of additional UE antennas. Notably, increasing the number of antennas per UE may degrade the SE, and the number of UE antennas that maximizes the SE depends on the implementation level, the length of the resource block, and the number of UEs.
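A minimal numerical sketch of the MMSE combining step described above, assuming i.i.d. Rayleigh fading instead of the jointly correlated model and perfect CSI for simplicity; the dimensions M, K, N and the noise power sigma2 are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, N = 8, 4, 2       # AP antennas, UEs, antennas per UE (illustrative)
sigma2 = 0.1            # noise power

# i.i.d. Rayleigh channel H_k (M x N) for each UE k
H = [(rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
     for _ in range(K)]

# MMSE combining matrix for UE 0: V0 = (sum_k H_k H_k^H + sigma2*I)^{-1} H_0
Gram = sum(Hk @ Hk.conj().T for Hk in H) + sigma2 * np.eye(M)
V0 = np.linalg.solve(Gram, H[0])

# Effective channel and interference-plus-noise covariance after combining
A = V0.conj().T @ H[0]
B = sum(V0.conj().T @ Hk @ Hk.conj().T @ V0 for Hk in H[1:]) \
    + sigma2 * (V0.conj().T @ V0)
SE = np.log2(np.linalg.det(np.eye(N) + A.conj().T @ np.linalg.solve(B, A))).real
print(f"UE 0 SE estimate: {SE:.2f} bit/s/Hz")
```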
In this work, we propose to employ the Box-LASSO, a variant of the popular LASSO method, as a low-complexity decoder in a massive multiple-input multiple-output (MIMO) wireless communication system. The Box-LASSO is particularly useful for detecting simultaneously structured signals, such as signals that are known to be both sparse and bounded. One modulation technique that generates essentially sparse and bounded constellation points is the so-called generalized space-shift keying (GSSK) modulation. In this direction, we derive sharp high-dimensional characterizations of various performance measures of the Box-LASSO, such as the mean square error, the probability of support recovery, and the element error rate, under independent and identically distributed (i.i.d.) Gaussian channels that are not perfectly known. In particular, these analytical characterizations can be used to demonstrate the performance improvements of the Box-LASSO over the widely used standard LASSO. They can also be used to optimally tune the hyper-parameters of the Box-LASSO, such as the regularization parameter. In addition, we derive optimal power allocation and training duration schemes for a training-based massive MIMO system. Monte Carlo simulations validate these claims and show the sharpness of the derived analytical results.
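As an illustration, the Box-LASSO objective can be solved with a simple projected proximal-gradient loop, since the proximal step for the l1 term plus a box constraint is a soft-threshold followed by clipping. The sketch below assumes a [0, 1] box, an i.i.d. Gaussian channel, and a GSSK-like sparse binary signal; all sizes and the regularization value are illustrative:

```python
import numpy as np

def box_lasso(A, y, lam, lo=0.0, hi=1.0, iters=500):
    """Solve min_x 0.5*||y - A x||^2 + lam*||x||_1  s.t.  lo <= x <= hi
    via proximal gradient (soft-threshold, then clip to the box)."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(iters):
        g = A.T @ (A @ x - y)                # gradient of the quadratic term
        z = x - g / L
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        x = np.clip(z, lo, hi)               # project onto the box
    return x

# Toy GSSK-like setup: sparse {0,1} signal through a Gaussian channel
rng = np.random.default_rng(1)
n, m, k = 64, 32, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = 1.0
y = A @ x_true + 0.05 * rng.standard_normal(m)
x_hat = box_lasso(A, y, lam=0.05)
print("support recovered:", set(np.flatnonzero(x_hat > 0.5)) == set(np.flatnonzero(x_true)))
```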
We consider a fully digital massive multiple-input multiple-output architecture with low-resolution analog-to-digital/digital-to-analog converters (ADCs/DACs) at the base station (BS) and analyze the performance trade-off between the number of BS antennas, the resolution of the ADCs/DACs, and the bandwidth. Assuming a hardware power consumption constraint, we determine the relationship between these design parameters by using a realistic model for the power consumption of the ADCs/DACs and the radio frequency chains. Considering uplink pilot-aided channel estimation, we build on the Bussgang decomposition to derive tractable expressions for uplink and downlink ergodic achievable sum rates. Numerical results show that the ergodic performance is boosted when many BS antennas with very low resolution (i.e., 2 to 3 bits) are adopted in both the uplink and the downlink.
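The Bussgang decomposition underlying this analysis can be checked numerically: for a Gaussian input, a low-resolution quantizer output splits into a linear gain term plus a distortion uncorrelated with the input. A sketch with an assumed 3-bit mid-rise quantizer and clipping range (both illustrative design choices):

```python
import numpy as np

rng = np.random.default_rng(2)
bits = 3
levels = 2 ** bits
x = rng.standard_normal(200_000)         # Gaussian input, unit variance

# Mid-rise uniform quantizer clipped to [-3, 3] (an assumed design choice)
xmax = 3.0
step = 2 * xmax / levels
idx = np.clip(np.floor(x / step) + 0.5, -(levels / 2 - 0.5), levels / 2 - 0.5)
y = step * idx

# Bussgang decomposition: y = G*x + d, with d uncorrelated with x
G = np.mean(y * x) / np.mean(x * x)
d = y - G * x
print(f"Bussgang gain G        : {G:.3f}")
print(f"corr(x, d)             : {np.corrcoef(x, d)[0, 1]:.1e}")  # ~0 by construction
print(f"distortion power E|d|^2: {np.var(d):.4f}")
```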
Broadcast/multicast communication systems are typically designed to optimize the outage rate criterion, which neglects the performance of the fraction of clients with the worst channel conditions. Targeting ultra-reliable communication scenarios, this paper takes a complementary approach by introducing the conditional value-at-risk (CVaR) rate as the expected rate of a worst-case fraction of clients. To support differential quality-of-service (QoS) levels in this class of clients, layered division multiplexing (LDM) is applied, which enables decoding at different rates. Focusing on a practical scenario in which the transmitter does not know the fading distribution, layer allocation is optimized based on a dataset sampled during deployment. The optimality gap caused by the availability of limited data is bounded via a generalization analysis, and the sample complexity is shown to increase as the designated fraction of worst-case clients decreases. Considering this theoretical result, meta-learning is introduced as a means to reduce sample complexity by leveraging data from previous deployments. Numerical experiments demonstrate that LDM improves spectral efficiency even for small datasets; that, for sufficiently large datasets, the proposed mirror-descent-based layer optimization scheme achieves a CVaR rate close to that achieved when the transmitter knows the fading distribution; and that meta-learning can significantly reduce data requirements.
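The CVaR rate itself is straightforward to estimate from samples, since it is the mean rate of the worst-case fraction of clients. A minimal sketch, assuming log-normal SNR samples purely for illustration:

```python
import numpy as np

def cvar_rate(rates, alpha=0.05):
    """Empirical CVaR_alpha: mean rate of the worst alpha-fraction of clients."""
    rates = np.sort(np.asarray(rates))
    k = max(1, int(np.ceil(alpha * len(rates))))
    return rates[:k].mean()

rng = np.random.default_rng(3)
snr = 10 ** (rng.normal(10, 4, size=10_000) / 10)   # illustrative log-normal SNRs
rates = np.log2(1 + snr)                            # Shannon rates
print(f"average rate        : {rates.mean():.2f} bit/s/Hz")
print(f"CVaR rate (alpha=5%): {cvar_rate(rates, 0.05):.2f} bit/s/Hz")
```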
Recent research has investigated decode-and-forward (DF) relaying for mixed radio frequency (RF) and terahertz (THz) wireless links with zero-boresight pointing errors. In this letter, we analyze the performance of fixed-gain amplify-and-forward (AF) relaying for the RF-THz link, which interfaces an RF-based access network with wireless THz transmissions. We derive the probability density function (PDF) and cumulative distribution function (CDF) of the end-to-end signal-to-noise ratio (SNR) of the relay-assisted system in terms of the bivariate Fox's H function, considering $\alpha$-$\mu$ fading with non-zero boresight pointing errors for the THz link and the $\alpha$-$\kappa$-$\mu$ shadowed ($\alpha$-KMS) fading model for the RF link. Using the derived PDF and CDF, we present exact analytical expressions for the outage probability, average bit-error rate (BER), and ergodic capacity of the considered system. We also analyze the outage probability and average BER asymptotically to gain better insight into the system behavior at high SNR. We use simulations to compare the performance of AF relaying with a semi-blind gain factor against the recently proposed DF relaying for THz-RF transmissions.
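The fixed-gain AF end-to-end SNR structure can be sanity-checked by Monte Carlo simulation. The sketch below substitutes simple exponential (Rayleigh-fading) link SNRs for the $\alpha$-$\mu$ and $\alpha$-KMS models, so it illustrates only the relaying structure, not the fading analyzed in the letter; all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1_000_000
gbar1, gbar2, C = 10.0, 10.0, 1.0   # mean link SNRs and semi-blind gain constant

# Exponential (Rayleigh) link SNRs as placeholders for alpha-KMS / alpha-mu fading
g1 = rng.exponential(gbar1, N)      # RF hop
g2 = rng.exponential(gbar2, N)      # THz hop

# Fixed-gain AF end-to-end SNR: g1 * g2 / (g2 + C)
g_e2e = g1 * g2 / (g2 + C)

gamma_th = 2 ** 1 - 1               # threshold for a 1 bit/s/Hz target rate
print(f"outage probability: {np.mean(g_e2e < gamma_th):.4f}")
print(f"ergodic capacity  : {np.mean(np.log2(1 + g_e2e)):.3f} bit/s/Hz")
```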
In multiuser communication systems, user scheduling and beamforming design are two fundamental problems that are usually investigated separately in the existing literature. In this work, we focus on the joint optimization of user scheduling and beamforming design with the goal of maximizing the number of scheduled users. This problem is computationally challenging because of its non-convex objective function and its coupled constraints over continuous and binary variables. To tackle these difficulties, we first propose an iterative optimization algorithm (IOA) that relies on successive convex approximation and uplink-downlink duality theory. Then, motivated by IOA and graph neural networks, we develop a joint user scheduling and power allocation network (JEEPON) that addresses the problem in an unsupervised manner. Various numerical results verify the effectiveness of IOA and JEEPON; the latter achieves comparable performance with lower complexity than IOA and a greedy baseline algorithm. Remarkably, JEEPON is also competitive in terms of its generalization ability in dynamic wireless network scenarios.
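For context, a greedy scheduling baseline of the kind compared against can be sketched as admitting users one at a time while a zero-forcing feasibility check holds. This is not the paper's IOA or JEEPON, and all parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
M, K = 8, 12                         # BS antennas, candidate users (illustrative)
P, sinr_min, sigma2 = 10.0, 2.0, 1.0
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

def zf_power(idx):
    """Total power needed so every scheduled user meets sinr_min under ZF."""
    Hs = H[idx]                      # |S| x M channel of the scheduled set
    W = np.linalg.pinv(Hs)           # M x |S| zero-forcing beamformers, Hs @ W = I
    # per-user power: sinr_min * sigma2 * ||w_k||^2 (ZF removes interference)
    return sum(sinr_min * sigma2 * np.linalg.norm(W[:, i]) ** 2
               for i in range(len(idx)))

sched = []
for k in np.argsort(-np.linalg.norm(H, axis=1)):   # try strongest users first
    if len(sched) < M and zf_power(sched + [k]) <= P:
        sched.append(k)
print("scheduled users:", sorted(sched), "| power used:", round(zf_power(sched), 2))
```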
Classifiers are often utilized in time-constrained settings where labels must be assigned to inputs quickly. To address these scenarios, budgeted multi-stage classifiers (MSCs) process inputs through a sequence of partial feature acquisition and evaluation steps, with early-exit options, until a confident prediction can be made. This allows for fast evaluation and avoids expensive, unnecessary feature acquisition in time-critical instances. However, the performance of MSCs is highly sensitive to several design aspects, making the optimization of these systems an important but difficult problem. To approximate an initially intractable combinatorial problem, current approaches to MSC configuration rely on well-behaved surrogate loss functions accounting for two primary objectives (processing cost and error). These approaches have proven useful in many scenarios but are limited by analytic constraints (convexity, smoothness, etc.) and do not manage additional performance objectives. Notably, such methods do not explicitly account for an important aspect of real-time detection systems: the ratio of "accepted" predictions satisfying some confidence criterion imposed by a risk-averse monitor. This paper proposes a problem-specific genetic algorithm, EMSCO, that incorporates a terminal reject option for indecisive predictions and treats MSC design as an evolutionary optimization problem with distinct objectives (accuracy, cost, and coverage). The algorithm's design emphasizes Pareto efficiency while respecting a notion of aggregated performance via a unique scalarization. Experiments demonstrate EMSCO's ability to find global optima in a variety of $\Theta(k^n)$ solution spaces, and multiple experiments show that EMSCO is competitive with alternative budgeted approaches.
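A minimal genetic-algorithm skeleton of the kind EMSCO builds on, evolving per-stage confidence thresholds under a scalarized (accuracy, cost, coverage) objective. The fitness function is a toy stand-in for quantities EMSCO would measure on validation data, and the weights, population size, and mutation scheme are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def fitness(thresh, w=(0.5, 0.3, 0.2)):
    """Toy scalarization over per-stage confidence thresholds; the three
    terms stand in for validation accuracy, processing cost, and coverage."""
    accuracy = 1 - 0.5 * np.mean((thresh - 0.8) ** 2)   # toy: peaks near 0.8
    cost     = np.mean(thresh)          # higher thresholds -> later exits
    coverage = np.mean(thresh < 0.95)   # fraction of non-rejected inputs
    return w[0] * accuracy - w[1] * cost + w[2] * coverage

def evolve(n_stages=4, pop=40, gens=60, mut=0.1):
    P = rng.uniform(0.5, 1.0, (pop, n_stages))
    for _ in range(gens):
        f = np.array([fitness(p) for p in P])
        parents = P[np.argsort(-f)[: pop // 2]]               # truncation selection
        kids = parents[rng.integers(len(parents), size=pop - len(parents))]
        kids = kids + mut * rng.standard_normal(kids.shape)   # Gaussian mutation
        P = np.clip(np.vstack([parents, kids]), 0.5, 1.0)
    return P[np.argmax([fitness(p) for p in P])]

print("best thresholds:", np.round(evolve(), 3))
```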
Does Federated Learning (FL) work when both uplink and downlink communications have errors? How much communication noise can FL handle, and what is its impact on learning performance? This work answers these practically important questions by explicitly incorporating both uplink and downlink noisy channels in the FL pipeline. We present several novel convergence analyses of FL over simultaneously noisy uplink and downlink channels, encompassing full and partial client participation, direct model and model-differential transmissions, and non-independent and identically distributed (non-IID) local datasets. These analyses characterize sufficient conditions under which FL over noisy channels has the same convergence behavior as the ideal case with no communication error. More specifically, in order to maintain the O(1/T) convergence rate of FedAvg with perfect communications, the uplink and downlink signal-to-noise ratios (SNRs) for direct model transmissions should be controlled to scale as O(t^2), where t is the index of the communication round, but they can stay constant for model-differential transmissions. The key insight behind these theoretical results is a "flying under the radar" principle: stochastic gradient descent (SGD) is an inherently noisy process, so uplink and downlink communication noise can be tolerated as long as it does not dominate the time-varying SGD noise. We exemplify these findings with two widely adopted communication techniques, transmit power control and diversity combining, and further validate their performance advantages over standard methods via extensive numerical experiments on several real-world FL tasks.
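The "flying under the radar" principle can be illustrated with a toy single-client experiment: channel noise whose standard deviation shrinks like 1/t, i.e., SNR growing as O(t^2), does not destroy the convergence of a noisy gradient recursion. The objective, step size, and noise constants below are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(7)
d, T = 10, 200
w_star = rng.standard_normal(d)       # optimum of a toy quadratic objective
w = np.zeros(d)

for t in range(1, T + 1):
    grad = (w - w_star) + 0.1 * rng.standard_normal(d)  # stochastic gradient
    w = w - grad / t                                    # classic O(1/t) step size
    # channel noise with std ~ ||w|| / t, i.e., SNR growing as O(t^2):
    # it stays below the decaying SGD noise and "flies under the radar"
    w = w + 0.1 * (np.linalg.norm(w) / t) * rng.standard_normal(d)

print(f"final error ||w - w_star|| = {np.linalg.norm(w - w_star):.3f}")
```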
Recent advances in Transformer models allow for unprecedented sequence lengths, owing to linear space and time complexity. Meanwhile, relative positional encoding (RPE) has been shown to be beneficial for classical Transformers; it exploits lags instead of absolute positions for inference. However, RPE is not available for the recent linear variants of the Transformer, because it requires explicit computation of the attention matrix, which is precisely what such methods avoid. In this paper, we bridge this gap and present Stochastic Positional Encoding (SPE), a way to generate positional encodings that can be used as a replacement for the classical additive (sinusoidal) PE and provably behaves like RPE. Our main theoretical contribution is a connection between positional encoding and the cross-covariance structures of correlated Gaussian processes. We illustrate the performance of our approach on the Long-Range Arena benchmark and on music generation.
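The Gaussian-process connection can be sketched with random Fourier features: sampling frequencies from the spectral density of a stationary kernel yields positional codes whose dot products depend (approximately) only on the lag, which is the defining property of RPE. The RBF kernel below is an assumption for illustration, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(8)
D, L = 256, 100                     # number of random features, sequence length

# Sample frequencies from the spectral density of a stationary kernel of the
# lag (here Gaussian/RBF, via Bochner's theorem: omega ~ N(0, 1))
omega = rng.normal(0.0, 1.0, D)
t = np.arange(L)[:, None]           # positions

# Stochastic positional codes: random Fourier features of the position
Q = np.concatenate([np.cos(t * omega), np.sin(t * omega)], axis=1) / np.sqrt(D)

# Dot products depend (approximately) only on the lag, mimicking RPE
K = Q @ Q.T
print("k(0), k(1), k(5):", K[0, 0].round(3), K[10, 11].round(3), K[10, 15].round(3))
print("target RBF      :", 1.0, np.exp(-0.5 * 1**2).round(3), np.exp(-0.5 * 5**2).round(3))
```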
We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the original problem of learning the parameters of all components. Our main contributions are a simple but general model for the notion of side information and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these, we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity compared with existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways to obtain such side information for specific problem instances. Our experiments on real datasets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
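A toy illustration of how side information can isolate a single component: a handful of samples known to come from the target Gaussian component is used to softly reweight the data, and only that component's mean is estimated. This is a stand-in for, not a reproduction of, the matrix-based algorithm; all sizes and the kernel width are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)
d, n = 5, 2000
mus = rng.normal(0, 3, (3, d))                   # three component means
z = rng.integers(0, 3, n)
X = mus[z] + rng.standard_normal((n, d))         # Gaussian mixture samples

# Side information: a few samples known to come from component 0
S = mus[0] + rng.standard_normal((20, d))
anchor = S.mean(axis=0)

# Estimate only component 0's mean: soft-weight all samples by proximity
# to the anchor (wide kernel to limit bias), then take a weighted average
w = np.exp(-0.5 * np.sum((X - anchor) ** 2, axis=1) / 4.0)
mu0_hat = (w[:, None] * X).sum(axis=0) / w.sum()
print("error vs. true mean :", np.linalg.norm(mu0_hat - mus[0]).round(3))
print("error of anchor only:", np.linalg.norm(anchor - mus[0]).round(3))
```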
In this paper, we study optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
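A toy sketch of accelerated dual ascent for consensus, assuming scalar quadratic local objectives on a path graph: the communication constraint is encoded as $W\mathbf{x} = 0$ for a graph Laplacian $W$, and each dual step needs only a multiplication by $W$, i.e., neighbor-local communication. The graph, objectives, and iteration count are illustrative:

```python
import numpy as np

# Path-graph Laplacian: multiplying by W only requires neighbor communication
m = 6
W = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
W[0, 0] = W[-1, -1] = 1.0

b = np.arange(m, dtype=float)            # node i holds f_i(x) = 0.5 * (x - b_i)^2
lam = np.zeros(m)
lam_prev = lam.copy()
eta = 1.0 / np.linalg.eigvalsh(W @ W)[-1]    # 1 / L for the dual gradient

for t in range(1, 5001):
    mom = lam + (t - 1) / (t + 2) * (lam - lam_prev)   # Nesterov momentum
    x = b - W @ mom                      # local primal minimizers given the dual
    lam_prev = lam
    lam = mom + eta * (W @ x)            # gradient ascent on the dual function

print("consensus values:", np.round(b - W @ lam, 2))   # all close to mean(b) = 2.5
```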