We consider a fully digital massive multiple-input multiple-output architecture with low-resolution analog-to-digital/digital-to-analog converters (ADCs/DACs) at the base station (BS) and analyze the performance trade-off between the number of BS antennas, the resolution of the ADCs/DACs, and the bandwidth. Assuming a hardware power consumption constraint, we determine the relationship between these design parameters by using a realistic model for the power consumption of the ADCs/DACs and the radio frequency chains. Considering uplink pilot-aided channel estimation, we build on the Bussgang decomposition to derive tractable expressions for uplink and downlink ergodic achievable sum rates. Numerical results show that the ergodic performance is boosted when many BS antennas with very low resolution (i.e., 2 to 3 bits) are adopted in both the uplink and the downlink.
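As a concrete illustration of the Bussgang decomposition invoked above, the sketch below estimates the Bussgang gain of a $b$-bit uniform quantizer with Gaussian input via Monte Carlo; the mid-rise quantizer, step-size rule, and helper names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def uniform_quantizer(x, bits, step):
    """Mid-rise uniform quantizer with 2**bits levels, clipped at the extremes."""
    levels = 2 ** bits
    idx = np.clip(np.floor(x / step) + levels // 2, 0, levels - 1)
    return (idx - levels // 2 + 0.5) * step

# Bussgang: for Gaussian x, Q(x) = G*x + d with d uncorrelated with x,
# where G = E[x Q(x)] / E[x^2].
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
for bits in (1, 2, 3):
    step = 2.0 ** (1 - bits) * 3.0  # step size so the levels span roughly [-3, 3]
    y = uniform_quantizer(x, bits, step)
    G = np.mean(x * y) / np.mean(x ** 2)
    d = y - G * x
    print(bits, G, np.corrcoef(x, d)[0, 1])  # correlation of d with x is ~0
```

The uncorrelated residual $d$ is what makes the linearized model tractable for deriving the ergodic sum-rate expressions.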
The Internet of Things (IoT) brings connectivity to a massive number of devices that demand energy-efficient solutions to deal with limited battery capacities, uplink-dominant traffic, and channel impairments. In this work, we explore the use of Unmanned Aerial Vehicles (UAVs) equipped with configurable antennas as a flexible solution for serving low-power IoT networks. We formulate an optimization problem to set the position and antenna beamwidth of the UAV, and the transmit power of the IoT devices, subject to average-Signal-to-average-Interference-plus-Noise Ratio ($\bar{\text{S}}\overline{\text{IN}}\text{R}$) Quality of Service (QoS) constraints. We minimize the worst-case average energy consumption of the devices, thus targeting the fairest allocation of the energy resources. The problem is non-convex and highly nonlinear; therefore, we reformulate it as a series of three geometric programs that can be solved iteratively. Results reveal the benefits of planning the network compared to a random deployment in terms of reducing the worst-case average energy consumption. Furthermore, we show that the achievable target $\bar{\text{S}}\overline{\text{IN}}\text{R}$ is limited by the number of IoT devices, and highlight the dominant impact of the UAV hovering height when serving wider areas. Our proposed algorithm outperforms other optimization benchmarks both in minimizing the average energy consumption at the most energy-demanding IoT device and in convergence time.
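To make the geometric-programming step concrete, here is a minimal sketch assuming a toy single-device model: minimize transmit power $p$ subject to a posynomial SNR-style constraint. The model, constants, and variable names are illustrative assumptions rather than the paper's formulation; cvxpy solves such programs with `solve(gp=True)`.

```python
import cvxpy as cp

# Toy GP: minimize transmit power p subject to g*p / (n + 0.05*p) >= t,
# i.e. a minimum "SNR" where the interference term itself grows with p.
p = cp.Variable(pos=True)          # device transmit power
g, n, t = 2.0, 0.1, 5.0            # channel gain, noise power, SNR target (assumed)
constraints = [t * (n + 0.05 * p) / (g * p) <= 1]   # posynomial <= 1 form
prob = cp.Problem(cp.Minimize(p), constraints)
prob.solve(gp=True)                # geometric-program mode
print(p.value)
```

The iterative scheme in the abstract alternates over three such programs (position, beamwidth, power), each convex in log-log form.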
This paper considers the one-bit precoding problem for the multiuser downlink massive multiple-input multiple-output (MIMO) system with phase shift keying (PSK) modulation and focuses on the celebrated constructive interference (CI)-based problem formulation. The existence of the discrete one-bit constraint makes the problem generally hard to solve. In this paper, we propose an efficient negative $\ell_1$ penalty approach for finding a high-quality solution of the considered problem. Specifically, we first propose a novel negative $\ell_1$ penalty model, which penalizes the one-bit constraint into the objective with a negative $\ell_1$-norm term, and show the equivalence between (global and local) solutions of the original problem and the penalty problem when the penalty parameter is sufficiently large. We further transform the penalty model into an equivalent min-max problem and propose an efficient alternating optimization (AO) algorithm for solving it. The AO algorithm enjoys low per-iteration complexity and is guaranteed to converge to a stationary point of the min-max problem. Numerical results show that, compared against the state-of-the-art CI-based algorithms, the proposed algorithm generally achieves better bit-error-rate (BER) performance with lower computational cost.
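As a rough illustration of the negative $\ell_1$ penalty idea (not the paper's AO algorithm), the sketch below relaxes binary variables $x \in \{-1, +1\}^n$ to the box $[-1, 1]^n$ and subtracts $\lambda \|x\|_1$ from a generic smooth objective, so minimizers are pushed toward the box corners; it then runs projected gradient descent. The objective $f$ and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
A = rng.standard_normal((8, n))
b = rng.standard_normal(8)

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)       # generic smooth objective
grad_f = lambda x: A.T @ (A @ x - b)

lam, step = 5.0, 1e-2                               # penalty weight, step size
x = np.clip(rng.standard_normal(n), -1.0, 1.0)
for _ in range(2000):
    # (sub)gradient of f(x) - lam * ||x||_1; the penalty contributes -lam*sign(x)
    g = grad_f(x) - lam * np.sign(x)
    x = np.clip(x - step * g, -1.0, 1.0)            # project back onto the box

print(np.round(x, 3))  # with lam large enough, entries cluster at +/-1
```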
Millimeter wave systems suffer from high power consumption and are constrained to use low-resolution quantizers, i.e., digital-to-analog and analog-to-digital converters (DACs and ADCs). However, low-resolution quantization leads to reduced data rates and increased out-of-band emission noise. In this paper, a multiple-input multiple-output (MIMO) system with linear transceivers using low-resolution DACs and ADCs is considered. An information-theoretic analysis is provided to model the effect of quantization on the spectrospatial power distribution and capacity of the system. More precisely, it is shown that the impact of quantization can be accurately described via a linear model with additive independent Gaussian noise. This model in turn leads to simple and intuitive expressions for the spectrospatial power distribution of the transmitter and a lower bound on the achievable rate of the system. Furthermore, the derived model is validated through simulations and numerical evaluations, where it is shown to accurately predict both spectral and spatial power distributions.
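A quick way to sanity-check this linear-plus-additive-noise view of quantization is to compare the spectrum of a quantized signal with the linear-model prediction. The sketch below does so for a 3-bit quantizer on a band-limited Gaussian signal; the moving-average filter, step size, and use of Welch's method are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lfilter, welch

rng = np.random.default_rng(2)
x = lfilter(np.ones(8) / 8, [1.0], rng.standard_normal(2 ** 18))  # band-limited input

def quantize(x, bits, step):
    levels = 2 ** bits
    idx = np.clip(np.floor(x / step) + levels // 2, 0, levels - 1)
    return (idx - levels // 2 + 0.5) * step

y = quantize(x, bits=3, step=0.3)
G = np.mean(x * y) / np.mean(x ** 2)         # linear (Bussgang) gain
d = y - G * x                                # residual quantization "noise"

f, Pyy = welch(y, nperseg=4096)
f, Pmodel = welch(G * x, nperseg=4096)
f, Pdd = welch(d, nperseg=4096)
# The linear model predicts Pyy ~ Pmodel + Pdd; the residual spectrum is far
# flatter (noise-like) than the signal spectrum, including out of band.
print(np.max(np.abs(Pyy - (Pmodel + Pdd)) / Pyy))
```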
Analog-to-Digital Converters (ADCs) are a major contributor to the energy consumption on the receiver side of millimeter-wave multiple-input multiple-output (MIMO) systems with large antenna arrays. Consequently, there has been significant interest in using low-resolution ADCs along with hybrid beamforming at MIMO receivers for energy efficiency. However, decreasing the ADC resolution results in performance loss -- in terms of achievable rates -- due to increased quantization error. In this work, we study the application of practically implementable nonlinear analog operations, prior to sampling and quantization at the ADCs, as a way to mitigate the aforementioned rate loss. A receiver architecture consisting of linear analog combiners, implementable nonlinear analog operators, and one-bit threshold ADCs is designed. The fundamental information-theoretic performance limits of the resulting communication system, in terms of achievable rates, are investigated under various assumptions on the set of implementable nonlinear analog functions. To justify the feasibility of the nonlinear operations in the proposed receiver architecture, an analog circuit is introduced, and circuit simulations exhibiting the generation of the desired nonlinear analog operations are provided.
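One way to see why a nonlinear analog front end helps a one-bit ADC: thresholding a polynomial $p(x)$ partitions the input line at the real roots of $p$, yielding a binary quantizer with multiple decision intervals instead of a single threshold. The sketch below, with an arbitrary illustrative cubic, recovers those intervals.

```python
import numpy as np

# One-bit ADC after a polynomial nonlinearity: output = sign(p(x)).
# Decision boundaries are the real roots of p, so a degree-k polynomial can
# create up to k+1 alternating-sign intervals from a single comparator.
p = np.poly1d([1.0, 0.0, -2.0, 0.5])   # illustrative cubic p(x) = x^3 - 2x + 0.5
roots = np.sort(np.real(p.r[np.isclose(np.imag(p.r), 0.0)]))
print("decision boundaries:", roots)

xs = np.linspace(-3, 3, 13)
print("one-bit outputs:", np.sign(p(xs)).astype(int))
```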
One of the most important aspects of moving toward next-generation networks such as 5G/6G is enabling network slicing in an efficient manner. The most challenging issues are the uncertainties in computation and communication demand. Because slices arrive at the network at different times and their lifespans vary, the solution should dynamically react to online slice requests. We mathematically formulate the joint problem of online admission control and resource allocation, taking energy consumption into account. The formulation is a Binary Linear Program (BLP) in which the $\Gamma$-robustness concept is exploited to handle the uncertainties in Virtual Link (VL) bandwidths and Virtual Network Function (VNF) workloads. We then propose an optimal algorithm; however, it cannot be solved in a reasonable amount of time for real-world, large-scale networks. To find near-optimal solutions efficiently, a new heuristic algorithm is developed. The assessment results indicate that the heuristic is effective in increasing the number of accepted requests, decreasing power consumption, and providing adjustable tolerance against the uncertainties in VNF workloads and VL traffic, separately. In terms of the acceptance ratio and power consumption, the two main components of the objective function, the heuristic has optimality gaps of about 7% and 10%, respectively, while being about 30x faster than the optimal algorithm.
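For reference, the standard Bertsimas-Sim $\Gamma$-robust counterpart of a single uncertain linear constraint $\sum_j \bar{a}_j x_j \le b$, where each coefficient $j$ in the uncertain set $J$ may deviate by up to $\hat{a}_j$ and at most $\Gamma$ coefficients deviate simultaneously, is the following linear system (this is the generic construction that $\Gamma$-robustness refers to, not the paper's specific slicing model):

$$\sum_j \bar{a}_j x_j + z\,\Gamma + \sum_{j \in J} p_j \le b, \qquad z + p_j \ge \hat{a}_j x_j \;\; \forall j \in J, \qquad z \ge 0,\; p_j \ge 0,$$

where the nonnegativity of the binary $x_j$ removes the need for $|x_j|$. The dual variables $z$ and $p_j$ keep the counterpart linear, which is why the robust problem remains a BLP.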
Millimeter wave (mmWave) is a key technology for fifth-generation (5G) and beyond communications. Hybrid beamforming has been proposed for large-scale antenna systems in mmWave communications. Existing hybrid beamforming designs based on infinite-resolution phase shifters (PSs) are impractical due to hardware cost and power consumption. In this paper, we propose an unsupervised-learning-based scheme to jointly design the analog precoder and combiner with low-resolution PSs for multiuser multiple-input multiple-output (MU-MIMO) systems. We transform the analog precoder and combiner design problem into a phase classification problem and propose a generic neural network architecture, termed the phase classification network (PCNet), capable of producing solutions of various PS resolutions. Simulation results demonstrate the superior sum-rate and complexity performance of the proposed scheme, as compared to state-of-the-art hybrid beamforming designs for the most commonly used low-resolution PS configurations.
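To illustrate what casting low-resolution PS design as phase classification can look like, below is a minimal sketch of a network head that outputs one of $2^b$ discrete phases per antenna; the architecture, dimensions, and names (e.g., `PhaseClassifier`) are assumptions for illustration, not the PCNet architecture from the paper.

```python
import torch
import torch.nn as nn

class PhaseClassifier(nn.Module):
    """Map channel features to one of 2**bits discrete phase-shifter settings."""
    def __init__(self, feat_dim, n_antennas, bits):
        super().__init__()
        self.n_levels = 2 ** bits
        self.n_antennas = n_antennas
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, n_antennas * self.n_levels),
        )

    def forward(self, h_feat):
        logits = self.net(h_feat).view(-1, self.n_antennas, self.n_levels)
        idx = logits.argmax(dim=-1)                     # phase class per antenna
        phases = 2 * torch.pi * idx / self.n_levels     # quantized phases
        return logits, torch.exp(1j * phases)           # unit-modulus precoder entries

model = PhaseClassifier(feat_dim=128, n_antennas=64, bits=2)
logits, w = model(torch.randn(4, 128))
print(w.shape)   # (4, 64) entries on the 2-bit phase grid
```

Training would attach a loss to `logits`, e.g., an unsupervised sum-rate surrogate, which is where the unsupervised learning in the abstract comes in.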
We consider the problem of estimating channel fading coefficients (modeled as a correlated Gaussian vector) via Downlink (DL) training and Uplink (UL) feedback in wideband FDD massive MIMO systems. Using rate-distortion theory, we derive optimal bounds on the achievable channel state estimation error in terms of the number of training pilots in DL ($\beta_{tr}$) and the feedback dimension in UL ($\beta_{fb}$), with random, spatially isotropic pilots. It is shown that when the number of training pilots exceeds the channel covariance rank ($r$), the optimal rate-distortion feedback strategy achieves an estimation error decay of $\Theta(\mathrm{SNR}^{-\alpha})$ in estimating the channel state, where $\alpha = \min(\beta_{fb}/r, 1)$ is the so-called quality scaling exponent. We also discuss an "analog" feedback strategy, showing that it can achieve the optimal quality scaling exponent for a wide range of training and feedback dimensions with no channel covariance knowledge and simple signal processing at the user side. Our findings are supported by numerical simulations comparing various strategies in terms of channel state mean squared error and achievable ergodic sum rate in DL with zero-forcing precoding.
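The scaling law above is easy to tabulate; a brief sketch, assuming illustrative values of the covariance rank and feedback dimension, evaluates the quality scaling exponent and the predicted error decay across SNR.

```python
import numpy as np

def quality_exponent(beta_fb, r):
    """alpha = min(beta_fb / r, 1): estimation error decays as SNR**(-alpha)."""
    return min(beta_fb / r, 1.0)

r = 8                                    # channel covariance rank (assumed)
snr = 10 ** (np.arange(0, 31, 10) / 10)  # 0, 10, 20, 30 dB
for beta_fb in (2, 4, 8, 16):
    alpha = quality_exponent(beta_fb, r)
    print(beta_fb, alpha, snr ** (-alpha))  # predicted error scale vs. SNR
```

Note that increasing $\beta_{fb}$ beyond $r$ buys nothing: the exponent saturates at 1.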
While scalable cell-free massive MIMO (CF-mMIMO) shows advantages in static conditions, the impact of its changing serving access point (AP) set in a mobile network has not yet been addressed. In this paper, we first derive the CPU cluster and AP handover rates of scalable CF-mMIMO as exact numerical results and tight closed-form approximations. We then use our closed-form handover rate result to analyze the mobility-aware throughput. We compare the mobility-aware spectral efficiency (SE) of scalable CF-mMIMO against distributed MIMO with purely network-centric and purely UE-centric AP selection, for different AP densities and handover delays. Our results reveal an important trade-off for future dense networks with low control delay: under moderate to high mobility, scalable CF-mMIMO maintains its advantage for the 95th-percentile users, but at the cost of degraded median SE.
This paper studies the single-image super-resolution (SR) problem using adder neural networks (AdderNet). Compared with convolutional neural networks, AdderNet uses additions to calculate the output features, thus avoiding the massive energy consumption of conventional multiplications. However, it is hard to directly transfer the existing success of AdderNet on large-scale image classification to the image super-resolution task because of the different calculation paradigm. Specifically, the adder operation cannot easily learn the identity mapping, which is essential for image processing tasks. In addition, the functionality of high-pass filters cannot be ensured by AdderNet. To this end, we thoroughly analyze the relationship between the adder operation and the identity mapping and insert shortcuts to enhance the performance of SR models using adder networks. We then develop a learnable power activation for adjusting the feature distribution and refining details. Experiments conducted on several benchmark models and datasets demonstrate that our image super-resolution models using AdderNet achieve comparable performance and visual quality to their CNN baselines, with roughly a 2$\times$ reduction in energy consumption.
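For intuition, the core adder operation replaces the correlation in a convolution with a negative L1 distance between each patch and the filter, and a power activation of the form $\mathrm{sign}(y)\,|y|^{\alpha}$ reshapes feature magnitudes. The sketch below is a minimal 1-D NumPy rendering under assumed shapes, illustrative rather than the paper's implementation.

```python
import numpy as np

def adder_filter_1d(x, w):
    """Adder 'convolution': output[i] = -sum_j |x[i+j] - w[j]| (L1, no multiplies)."""
    k = len(w)
    return np.array([-np.abs(x[i:i + k] - w).sum() for i in range(len(x) - k + 1)])

def power_activation(y, alpha):
    """Power activation sign(y) * |y|**alpha; alpha would be learnable in practice."""
    return np.sign(y) * np.abs(y) ** alpha

x = np.random.default_rng(3).standard_normal(16)
w = np.ones(3) / 3
y = adder_filter_1d(x, w)
print(power_activation(y, alpha=0.8))
```

Because the adder output is always non-positive and peaks only where the patch matches the filter, a plain adder layer cannot reproduce the identity mapping, which is what motivates the inserted shortcuts.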
Recently, adaptive inference has been gaining increasing attention due to its high computational efficiency. Different from existing works, which mainly exploit architecture redundancy for adaptive network design, in this paper we focus on the spatial redundancy of input samples and propose a novel Resolution Adaptive Network (RANet). Our motivation is that low-resolution representations can be sufficient for classifying "easy" samples containing canonical objects, while high-resolution features are crucial for recognizing some "hard" ones. In RANet, input images are first routed to a lightweight sub-network that efficiently extracts coarse feature maps, and samples with highly confident predictions exit early from this sub-network. The high-resolution paths are only activated for those "hard" samples whose earlier predictions are unreliable. By adaptively processing features at varying resolutions, the proposed RANet can significantly improve its computational efficiency. Experiments on three classification benchmarks (CIFAR-10, CIFAR-100 and ImageNet) demonstrate the effectiveness of the proposed model in both the anytime prediction setting and the budgeted batch classification setting.
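The routing logic reduces to a confidence-thresholded early exit across sub-networks of increasing input resolution. Below is a minimal sketch with hypothetical sub-network and threshold choices; note the real RANet fuses features across scales rather than re-running each scale from scratch as done here.

```python
import torch
import torch.nn.functional as F

def adaptive_inference(image, subnets, resolutions, threshold=0.9):
    """Run sub-networks coarse to fine; exit once the prediction is confident.

    image: (1, C, H, W) tensor; subnets: list of nn.Module classifiers;
    resolutions: matching list of input sizes, smallest first.
    """
    for net, res in zip(subnets, resolutions):
        x = F.interpolate(image, size=(res, res),
                          mode="bilinear", align_corners=False)
        probs = F.softmax(net(x), dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:       # confident enough: early exit
            return pred.item(), res
    return pred.item(), res                # fall through to the finest scale
```

Lowering `threshold` trades accuracy for compute, which is exactly the knob exercised in the anytime and budgeted-batch settings.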