This paper compares the sum rates and rate regions achieved by power-domain NOMA (non-orthogonal multiple access) and standard massive MIMO (multiple-input multiple-output) techniques. We prove analytically that massive MIMO always outperforms NOMA in i.i.d.~Rayleigh fading channels if a sufficient number of antennas is used at the base stations. The simulation results show that the crossing point occurs already at 20-30 antennas, far fewer than what is envisioned for next-generation cellular networks.
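The crossing-point claim lends itself to a quick numerical check. Below is a minimal Monte Carlo sketch (not the paper's analysis): a two-user power-domain NOMA downlink with a single-antenna base station versus an $M$-antenna base station serving both users via zero-forcing, using the standard result that post-ZF channel gains in i.i.d. Rayleigh fading follow a Gamma$(M-K+1, 1)$ distribution; the SNR and power-split values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
trials, rho = 200_000, 10.0   # rho: total downlink SNR (illustrative)

# Two-user power-domain NOMA, single-antenna BS, i.i.d. Rayleigh fading
h = rng.exponential(1.0, size=(trials, 2))        # |h|^2 ~ Exp(1)
g_strong, g_weak = h.max(axis=1), h.min(axis=1)
a = 0.8                                           # power share of the weak user (assumption)
r_weak = np.log2(1 + a * rho * g_weak / ((1 - a) * rho * g_weak + 1))
r_strong = np.log2(1 + (1 - a) * rho * g_strong)  # strong user decodes after SIC
noma_sum = (r_weak + r_strong).mean()

# M-antenna BS serving the same two users with zero-forcing (K = 2):
# post-ZF gain of each user is Gamma(M - K + 1, 1) in i.i.d. Rayleigh fading
for M in (2, 10, 20, 30):
    g_zf = rng.gamma(M - 1, 1.0, size=(trials, 2))
    mimo_sum = np.log2(1 + (rho / 2) * g_zf).sum(axis=1).mean()
    print(f"M={M:2d}: ZF sum rate {mimo_sum:5.2f} vs NOMA {noma_sum:4.2f} bit/s/Hz")
```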
This paper investigates a new downlink non-orthogonal multiple access (NOMA) system in which a multi-antenna unmanned aerial vehicle (UAV) is powered by wireless power transfer (WPT) and serves as the base station for multiple pairs of ground users (GUs), with NOMA applied within each pair. An energy efficiency (EE) maximization problem is formulated to jointly optimize the WPT time, the UAV placement, and the allocation of the UAV's transmit power between different NOMA user pairs and within each pair. To solve this nonconvex problem efficiently, we decompose it into three subproblems using block coordinate descent. For the subproblem of intra-pair power allocation within each NOMA user pair, we construct a supermodular game with guaranteed convergence to a Nash equilibrium. Given the intra-pair power allocation, successive convex approximation is applied to convexify and solve the subproblem of WPT time allocation and inter-pair power allocation between the user pairs. Finally, we solve the subproblem of UAV placement by the Lagrange multiplier method. Simulations show that our approach can substantially outperform alternatives that forgo NOMA or WPT, or that do not optimize the UAV placement.
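The three-block decomposition can be pictured as a generic block-coordinate-descent loop. The sketch below fixes only the control flow; the subproblem solvers (the supermodular game for intra-pair power, SCA for WPT time and inter-pair power, and the Lagrange-multiplier placement step) are abstracted as hypothetical callables, and all names and signatures are illustrative stand-ins, not the paper's API.

```python
def bcd(x, solvers, objective, max_iter=50, tol=1e-4):
    """Generic block coordinate descent skeleton for a three-block split:
    `solvers` maps a block name to a routine that optimizes that block
    while the other blocks in `x` are held fixed."""
    prev = objective(x)
    for _ in range(max_iter):
        for name, solve in solvers.items():
            x[name] = solve(x)        # e.g. 'intra_power' via the game,
                                      # 'time_inter_power' via SCA,
                                      # 'placement' via Lagrange multipliers
        cur = objective(x)
        if cur - prev <= tol:         # monotone EE improvement has stalled
            break
        prev = cur
    return x
```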
We study a heterogeneous Rayleigh fading wireless sensor network (WSN) in which densely deployed sensor nodes monitor an environment and transmit their sensed information to base stations (BSs), using access points (APs) as relays to facilitate the data transfer. We consider both large-scale and small-scale propagation effects in our system model and formulate the node deployment problem as an optimization problem that minimizes the wireless communication network's power consumption. By imposing a desired outage probability constraint on all communication channels, we derive necessary conditions for an optimal deployment that not only minimizes the power consumption but also guarantees that every wireless link has an outage probability below the given threshold. In addition, we derive necessary conditions for an optimal deployment under ergodic capacity constraints. We compare our node deployment algorithms with similar algorithms in the literature and demonstrate their efficacy and superior performance.
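For intuition on how the outage constraint enters the deployment problem, the following sketch inverts the standard Rayleigh-fading outage expression to obtain the minimum transmit power a link of length $d$ needs; the path-loss exponent, noise power, and thresholds are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def min_power_rayleigh(d, gamma_th, eps, alpha=3.0, N0=1e-12):
    """Minimum transmit power (W) so that a Rayleigh-fading link of length d
    satisfies P(SNR < gamma_th) <= eps under path loss d**(-alpha).

    The received SNR is exponential with mean P / (N0 * d**alpha), hence
        P_out = 1 - exp(-gamma_th * N0 * d**alpha / P) <= eps
        =>  P >= gamma_th * N0 * d**alpha / (-log(1 - eps)).
    All parameter values below are illustrative assumptions.
    """
    return gamma_th * N0 * d**alpha / (-np.log1p(-eps))

# Example: 5 dB SNR threshold, 1% outage, link distances of 50 m and 100 m
for d in (50.0, 100.0):
    print(f"d = {d:5.1f} m -> P_min = {min_power_rayleigh(d, 10**0.5, 0.01):.3e} W")
```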
This paper studies the application of reconfigurable intelligent surface (RIS) to cooperative non-orthogonal multiple access (C-NOMA) networks with simultaneous wireless information and power transfer (SWIPT). We aim to maximize the rate of the strong user while guaranteeing the weak user's quality of service (QoS) by jointly optimizing the power splitting factors, beamforming coefficients, and RIS reflection coefficients in two transmission phases. The formulated problem is difficult to solve due to its complex and non-convex constraints. To tackle it, we first use an alternating optimization (AO) framework to decompose the problem into three subproblems, and then solve them with the penalty-based arithmetic-geometric mean approximation (PBAGM) algorithm and a successive convex approximation (SCA)-based method. Numerical results verify the superiority of the proposed algorithm over baseline schemes.
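As a self-contained illustration of the SCA step (on a toy function, not the paper's beamforming subproblems), the sketch below minimizes the nonconvex difference-of-convex function $f(x) = x^4 - 2x^2$ by repeatedly upper-bounding the concave part with its tangent and minimizing the resulting convex surrogate; every surrogate touches $f$ at the current iterate, so the objective decreases monotonically, the same mechanism that drives SCA here.

```python
import numpy as np

def sca_toy(x0, iters=40):
    """Successive convex approximation on f(x) = x**4 - 2*x**2, written as a
    convex part x**4 plus a concave part -2*x**2. At iterate xk the concave
    part is replaced by its tangent upper bound, leaving the convex surrogate
    g(x) = x**4 - 4*xk*x + const, minimized in closed form at x = cbrt(xk).
    Purely illustrative; not the paper's subproblem."""
    x = x0
    for _ in range(iters):
        x = np.cbrt(x)              # argmin of the convex surrogate at xk = x
    return x, x**4 - 2 * x**2

print(sca_toy(0.5))                 # converges to the stationary point x = 1, f = -1
```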
Computing a maximum independent set (MaxIS) is a fundamental NP-hard problem in graph theory with important applications across a wide spectrum of fields. Since the graphs in many applications change frequently over time, the problem of maintaining a MaxIS over dynamic graphs has attracted increasing attention in recent years. Due to the intractability of maintaining an exact MaxIS, this paper aims to develop efficient algorithms that maintain an approximate MaxIS with a theoretical accuracy guarantee. In particular, we propose a framework that maintains a $(\frac{\Delta}{2} + 1)$-approximate MaxIS over dynamic graphs and prove that it achieves a constant approximation ratio in many real-world networks. To the best of our knowledge, this is the first non-trivial approximability result for the dynamic MaxIS problem. Following the framework, we implement an efficient linear-time dynamic algorithm and a more effective dynamic algorithm with near-linear expected time complexity. Our thorough experiments on real and synthetic graphs demonstrate the effectiveness and efficiency of the proposed algorithms, especially when the graph is highly dynamic.
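The paper's framework is not reproduced here, but a toy baseline helps fix ideas: the sketch below maintains a *maximal* independent set under edge insertions and deletions. Since any maximal IS has at least $n/(\Delta+1)$ vertices, it is a $(\Delta+1)$-approximate MaxIS, weaker than the $(\frac{\Delta}{2}+1)$ ratio the paper achieves.

```python
class DynamicMIS:
    """Toy dynamic maintenance of a *maximal* independent set (a baseline,
    not the paper's framework). Any maximal IS contains at least
    n / (Delta + 1) vertices, hence is a (Delta + 1)-approximate MaxIS."""

    def __init__(self):
        self.adj = {}              # vertex -> set of neighbors
        self.mis = set()

    def _try_add(self, v):
        if not (self.adj[v] & self.mis):   # no neighbor already in the IS
            self.mis.add(v)

    def add_edge(self, u, v):
        for x in (u, v):
            self.adj.setdefault(x, set())
        self.adj[u].add(v); self.adj[v].add(u)
        if u in self.mis and v in self.mis:
            self.mis.discard(v)            # evict one endpoint ...
            for w in self.adj[v]:          # ... and let its neighbors re-enter
                if w not in self.mis:
                    self._try_add(w)
        else:
            for x in (u, v):               # a brand-new vertex may join the IS
                if x not in self.mis:
                    self._try_add(x)

    def remove_edge(self, u, v):
        self.adj[u].discard(v); self.adj[v].discard(u)
        for x in (u, v):                   # endpoints may now be free to join
            if x not in self.mis:
                self._try_add(x)
```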
The similarity between a pair of time series, i.e., sequences of values indexed in time order, is often estimated by the dynamic time warping (DTW) distance rather than by any member of the well-studied family of measures that includes the longest common subsequence (LCS) length and the edit distance. Although the DTW and LCS(-like) measures may seem essentially different, we reveal that the DTW distance can be represented by the longest increasing subsequence (LIS) length of a sequence of integers, which is the LCS length between the integer sequence and its sorted copy. For a given pair of time series of length $n$ such that the dissimilarity between any pair of elements is an integer between zero and $c$, we propose an integer sequence that represents any substring-substring DTW distance as its band-substring LIS length. The length of the produced integer sequence is $O(c n^2)$, which can be translated to $O(n^2)$ for constant dissimilarity functions. To demonstrate that techniques developed for the LCS(-like) measures are directly applicable to time-series analysis via our reduction of DTW to LIS, we present time-efficient algorithms for DTW-related problems that utilize the semi-local sequence comparison technique developed for LCS-related problems.
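For readers who want the two endpoints of the reduction in executable form, here are the textbook $O(n^2)$ DTW recurrence and the $O(k \log k)$ patience-sorting LIS; the paper's contribution, the integer sequence tying the two together, is not reproduced here, and the test inputs are illustrative.

```python
import bisect

def dtw(a, b, d=lambda x, y: abs(x - y)):
    """Textbook O(len(a)*len(b)) dynamic program for the DTW distance."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = d(a[i-1], b[j-1]) + min(D[i-1][j], D[i][j-1], D[i-1][j-1])
    return D[n][m]

def lis_length(seq):
    """O(k log k) longest strictly increasing subsequence (patience sorting)."""
    tails = []                 # tails[i]: least tail of an increasing run of length i+1
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

print(dtw([1, 3, 4, 9], [1, 1, 2, 4, 7, 9]))   # 3
print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))    # 4  (e.g. 1, 4, 5, 9)
```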
The semi-grant-free (SGF) transmission scheme enables grant-free (GF) users to utilize resource blocks allocated to grant-based (GB) users while maintaining the quality of service of the GB users. This work investigates the secrecy performance of non-orthogonal multiple access (NOMA)-aided SGF systems. First, analytical expressions for the exact and asymptotic secrecy outage probability (SOP) of NOMA-aided SGF systems with a single GF user are derived. Then, an SGF system with multiple GF users and a best-user scheduling scheme is considered. By utilizing order statistics theory, closed-form expressions for the exact and asymptotic SOP are derived. Monte Carlo simulation results demonstrate the effects of the system parameters on the SOP of the considered system and verify the accuracy of the developed analytical results. The results indicate that both the outage target rate for GB users and the secure target rate for GF users are the main factors determining the secrecy performance of SGF systems.
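For a feel of what the SOP measures, the sketch below Monte-Carlo-estimates the secrecy outage probability of a bare Rayleigh wiretap link, i.e. $P([\log_2(1+\gamma_B) - \log_2(1+\gamma_E)]^+ < R_s)$, and checks it against the known closed form for this toy case; the SGF/NOMA structure of the paper is not modeled, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
N, Rs = 1_000_000, 0.5          # secrecy rate target (illustrative)
snr_B, snr_E = 10.0, 2.0        # mean SNRs of legitimate and eavesdropper links

g_B = rng.exponential(snr_B, N) # Rayleigh fading -> exponentially distributed SNR
g_E = rng.exponential(snr_E, N)
C_s = np.maximum(np.log2(1 + g_B) - np.log2(1 + g_E), 0.0)  # secrecy capacity
sop_mc = np.mean(C_s < Rs)

# Closed form for this toy Rayleigh wiretap setting:
sop_cf = 1 - snr_B / (snr_B + 2**Rs * snr_E) * np.exp(-(2**Rs - 1) / snr_B)
print(f"Monte Carlo SOP = {sop_mc:.4f}, closed form = {sop_cf:.4f}")
```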
This paper introduces an objective for optimizing proper scoring rules. The objective is to maximize the increase in payoff of a forecaster who exerts a binary level of effort to refine a posterior belief from a prior belief. In this framework we characterize optimal scoring rules in simple settings, give efficient algorithms for computing optimal scoring rules in complex settings, and identify simple scoring rules that are approximately optimal. In comparison, standard scoring rules in theory and practice -- for example the quadratic rule, scoring rules for the expectation, and scoring rules for multiple tasks that are averages of single-task scoring rules -- can be very far from optimal.
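To make the objective concrete, the sketch below evaluates the effort incentive of one standard rule, the quadratic scoring rule, in a two-outcome example: the forecaster's gain is the expected score of truthfully reporting a refined posterior minus that of reporting the prior. The paper optimizes this gain over scoring rules; the prior/posterior numbers here are illustrative assumptions.

```python
import numpy as np

def quad_score(p, outcome):
    """Quadratic (Brier-type) proper scoring rule: S(p, i) = 2*p[i] - sum_j p[j]**2."""
    return 2.0 * p[outcome] - np.dot(p, p)

def value(q):
    """Expected score of a truthful report under belief q (equals sum_j q[j]**2)."""
    return sum(q[i] * quad_score(q, i) for i in range(len(q)))

prior = np.array([0.5, 0.5])
posteriors = [np.array([0.9, 0.1]), np.array([0.1, 0.9])]  # each realized w.p. 1/2

# Payoff increase from exerting effort (refining prior -> posterior); this is
# the quantity the paper's objective maximizes over scoring rules.
gain = 0.5 * sum(value(q) for q in posteriors) - value(prior)
print(f"effort incentive under the quadratic rule: {gain:.2f}")   # 0.32
```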
Multihop relaying is a promising technique to mitigate channel impairments in optical wireless communications (OWC). In this paper, multiple fixed-gain amplify-and-forward (AF) relays are employed to enhance OWC performance under the combined effect of atmospheric turbulence, pointing errors, and fog. We consider a long-range OWC link by modeling the atmospheric turbulence with the Fisher-Snedecor ${\cal{F}}$ distribution, pointing errors with the generalized non-zero boresight model, and random path loss due to fog; we also consider a short-range OWC system in which the impact of atmospheric turbulence is negligible. We derive novel upper bounds on the probability density function (PDF) and cumulative distribution function (CDF) of the end-to-end signal-to-noise ratio (SNR) for both short- and long-range multihop OWC systems by developing exact statistical results for a single-hop OWC system under the combined effect of ${\cal{F}}$-turbulence channels, non-zero boresight pointing errors, and fog-induced fading. Based on these expressions, we present analytical expressions for the outage probability (OP) and average bit-error-rate (ABER) of the considered OWC systems in terms of single-variate Fox's H and Meijer's G functions. Moreover, asymptotic expressions for the outage probability in the high-SNR regime are developed using simpler Gamma functions to provide insights into the effects of channel and system parameters. The derived analytical expressions are validated through Monte-Carlo simulations, and the scaling of the OWC performance with the number of relay nodes is demonstrated in comparison to single-hop transmission.
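As a sanity-check companion to such an analysis, the sketch below estimates the multihop outage probability using the common min-SNR upper bound on the end-to-end SNR of AF relaying, with a placeholder exponential per-hop SNR model; the paper's ${\cal{F}}$-turbulence, pointing-error, and fog statistics would replace the per-hop draw, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
trials, gamma_th, mean_snr = 500_000, 2.0, 15.0  # illustrative values

for hops in (1, 2, 3, 4):
    # Placeholder per-hop SNR (exponential); the paper's model instead combines
    # F-distributed turbulence, non-zero boresight pointing errors, and fog.
    per_hop = rng.exponential(mean_snr, size=(trials, hops))
    e2e = per_hop.min(axis=1)          # min-SNR bound on the AF end-to-end SNR
    op_mc = np.mean(e2e < gamma_th)
    op_cf = 1 - np.exp(-hops * gamma_th / mean_snr)  # closed form for the bound
    print(f"{hops} hop(s): OP ~= {op_mc:.4f} (closed form {op_cf:.4f})")
```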
We present a pipelined multiplier with reduced activity and minimized interconnect based on online digit-serial arithmetic. The working precision is truncated such that $p<n$ bits are used to compute the $n$-bit product, resulting in significant savings in area and power. The digit slices operate at variable precision according to the input, increasing up to $p$ and then decreasing according to the error profile. The design is pipelined to achieve the high throughput and low latency desirable for compute-intensive inner products. Synthesis results of the proposed designs are presented and compared with a non-pipelined online multiplier, a pipelined online multiplier with full working precision, and conventional serial-parallel and array multipliers. For $8$, $16$, $24$, and $32$-bit precision, the proposed low-power pipelined designs show up to $38\%$ and $44\%$ reductions in power and area, respectively, compared to the pipelined online multiplier without working-precision truncation.
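A behavioral model of the truncation (in software, not the digit-serial datapath itself) shows where the savings and the error come from: result columns below the working precision are never produced, so the output can deviate from the exact product by less than $2^{n-p}$. The function name and test operands below are illustrative assumptions.

```python
def truncated_product(a, b, n, p):
    """Toy behavioral model of working-precision truncation (not the online
    digit-serial datapath itself): the product of two n-bit operands is
    formed, then the n - p least significant result columns are zeroed,
    mimicking slices that are never computed. Absolute error < 2**(n - p)."""
    assert 0 <= a < 2**n and 0 <= b < 2**n and 0 < p <= n
    cut = n - p
    return (a * b >> cut) << cut

# Example for n = 8, p = 5: exact vs truncated product and the error bound
a, b, n, p = 0xB3, 0x5C, 8, 5
approx = truncated_product(a, b, n, p)
print(a * b, approx, a * b - approx, 2**(n - p))   # error 4 < bound 8
```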
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
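The re-weighting scheme is easy to state in code. Below is a minimal sketch of per-class weights derived from the effective number of samples $(1-\beta^{n})/(1-\beta)$, normalized so that they sum to the number of classes; the function name and sample counts are illustrative assumptions, not the paper's artifacts.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights from the effective number of samples
    E_n = (1 - beta**n) / (1 - beta); each class is weighted by 1 / E_n
    and the weights are normalized to sum to the number of classes."""
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    w = 1.0 / effective_num
    return w * len(n) / w.sum()

# Long-tailed toy: head class with 10,000 samples down to a tail class with 10
print(class_balanced_weights([10_000, 1_000, 100, 10]))
```

Multiplying each sample's loss term by its class weight yields the class-balanced loss; as $\beta \to 0$ the weights become uniform, and as $\beta \to 1$ they approach inverse-frequency weighting.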