Ultra-reliable and low-latency communication has long been an important but challenging goal in fifth- and sixth-generation wireless communication systems. Scheduling as many users as possible on the limited time-frequency resources is a crucial problem, subject to the maximum allowable transmit power and the minimum rate requirement of each user. We address it by proposing a mixed-integer programming model whose goal is to maximize the cardinality of the set of served users rather than the system sum rate or energy efficiency. Mathematical transformations and successive convex approximation are combined to solve the resulting complex optimization problem. Numerical results show that the proposed method achieves performance comparable to that of the exhaustive search method, but with lower computational complexity.
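As a rough illustration of the successive convex approximation idea, the sketch below applies it to a toy scalar difference-of-convex problem (not the paper's scheduling model): at each iteration the concave part of the objective is linearized at the current iterate, and the resulting convex subproblem is solved in closed form.

```python
import numpy as np

def sca_minimize(x0, iters=100):
    """Toy successive convex approximation (SCA) on the nonconvex
    objective f(x) = x**4 - 3*x**2 + x, split into convex g(x) = x**4 + x
    minus convex h(x) = 3*x**2.  Each step linearizes -h at the current
    iterate, leaving a convex subproblem with a closed-form minimizer."""
    x = x0
    for _ in range(iters):
        # Subproblem: minimize x**4 + x - h'(xk)*x (constants dropped);
        # stationarity 4x**3 + 1 - 6*xk = 0 gives the update below.
        x = np.cbrt((6.0 * x - 1.0) / 4.0)
    return x

x_star = sca_minimize(1.5)
```

At convergence the iterate satisfies the stationarity condition of the original nonconvex objective, which is the standard guarantee SCA offers.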
Many real-world optimization problems, such as engineering design, can ultimately be modeled as multiobjective optimization problems (MOPs), which must be solved to obtain approximate Pareto optimal fronts. The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been regarded as a very promising approach for solving MOPs. Recent studies have shown that MOEA/D with uniform weight vectors is well suited to MOPs with regular Pareto optimal fronts, but its diversity deteriorates on MOPs with irregular Pareto optimal fronts, such as highly nonlinear or convex ones. As a result, the solution set obtained by the algorithm cannot provide reasonable choices for decision makers. To overcome this drawback efficiently, in this paper we propose an improved MOEA/D algorithm based on the well-known Pascoletti-Serafini scalarization method and a new multi-reference-point strategy. Specifically, this strategy consists of the setting and adaptation of reference points generated by the techniques of equidistant partition and projection. For performance assessment, the proposed algorithm is compared with four existing state-of-the-art multiobjective evolutionary algorithms on benchmark test problems with various types of Pareto optimal fronts and on two real-world MOPs in engineering optimization, the hatch cover design and the rocket injector design. According to the experimental results, the proposed algorithm exhibits better diversity than the other compared algorithms.
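The Pascoletti-Serafini scalarization at the core of such methods reduces a vector objective to a single value: for an objective vector f, reference point a, and direction r > 0, it is the smallest t with f ≤ a + t·r componentwise. A minimal sketch, with purely illustrative candidate vectors rather than anything from the paper:

```python
import numpy as np

def ps_value(f, a, r):
    """Pascoletti-Serafini scalarized value of objective vector f for
    reference point a and direction r > 0: the smallest t such that
    f <= a + t*r componentwise, i.e. max_i (f_i - a_i) / r_i."""
    return np.max((f - a) / r)

# Hypothetical bi-objective candidates (illustrative numbers only).
F = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0]])
a = np.zeros(2)           # reference point
r = np.array([1.0, 1.0])  # search direction
best = min(range(len(F)), key=lambda i: ps_value(F[i], a, r))
```

With this reference and direction, the scalarization favors the balanced candidate, which is how varying a and r steers the search toward different parts of the front.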
We propose a joint channel estimation and data detection (JED) algorithm for densely populated cell-free massive multiuser (MU) multiple-input multiple-output (MIMO) systems, which reduces the channel training overhead caused by the presence of hundreds of simultaneously transmitting user equipments (UEs). Our algorithm iteratively solves a relaxed version of a maximum a posteriori JED problem and simultaneously exploits the sparsity of cell-free massive MU-MIMO channels as well as the boundedness of QAM constellations. To improve the performance and convergence of the algorithm, we propose methods that permute the access point and UE indices to form so-called virtual cells, which leads to better initial solutions. We assess the performance of our algorithm in terms of root-mean-square symbol error, bit error rate, and mutual information, and we demonstrate that JED significantly reduces the pilot overhead compared to orthogonal training, enabling reliable short-packet communication with a large number of UEs.
This paper investigates a cognitive unmanned aerial vehicle (UAV) enabled Internet of Things (IoT) network, where secondary/cognitive IoT devices upload their data to the UAV hub following a non-orthogonal multiple access (NOMA) protocol in the spectrum of the primary network. We aim to maximize the minimum lifetime of IoT devices by jointly optimizing the UAV location, transmit power, and decoding order subject to interference-power constraints in the presence of imperfect channel state information (CSI). To solve the formulated non-convex mixed-integer programming problem, we first jointly optimize the UAV location and transmit power for a given decoding order, obtaining the globally optimal solution with the aid of Lagrange duality, and then find the best decoding order by exhaustive search, which is applicable to relatively small-scale scenarios. For large-scale scenarios, we propose a low-complexity sub-optimal algorithm by transforming the original problem into a more tractable equivalent form and applying the successive convex approximation (SCA) technique and penalty function method. Numerical results demonstrate that the proposed design significantly outperforms the benchmark schemes.
Minimizing a sum of simple submodular functions of limited support is a special case of general submodular function minimization that has seen numerous applications in machine learning. We develop fast techniques for instances where components in the sum are cardinality-based, meaning they depend only on the size of the input set. This variant is one of the most widely applied in practice, encompassing, e.g., common energy functions arising in image segmentation and recent generalized hypergraph cut functions. We develop the first approximation algorithms for this problem, where the approximations can be quickly computed via reduction to a sparse graph cut problem, with graph sparsity controlled by the desired approximation factor. Our method relies on a new connection between sparse graph reduction techniques and piecewise linear approximations to concave functions. Our sparse reduction technique leads to significant improvements in theoretical runtimes, as well as substantial practical gains in problems ranging from benchmark image segmentation tasks to hypergraph clustering problems.
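The connection between concave cardinality functions and piecewise-linear approximation can be sketched with a greedy routine: cover a concave function (here √k, a common illustrative choice) with chords that are accurate to a (1+ε) factor, so that the number of pieces, which governs the sparsity of the reduced graph-cut instance, stays small. This is an illustrative sketch, not the paper's exact reduction:

```python
import math

def pl_cover(w, eps):
    """Greedy piecewise-linear underestimate of a concave function given
    by values w[0..n]: each chord (i, j) satisfies
    chord(k) >= w[k] / (1 + eps) for all i <= k <= j, so a small number
    of pieces suffices; the piece count controls the sparsity of the
    reduced sparse graph-cut instance."""
    n = len(w) - 1
    pieces, i = [], 0
    while i < n:
        j = i + 1
        while j < n:
            jn = j + 1
            # Extend the chord only while it stays within the (1+eps) factor.
            ok = all(
                w[i] + (w[jn] - w[i]) * (k - i) / (jn - i) >= w[k] / (1 + eps)
                for k in range(i + 1, jn)
            )
            if not ok:
                break
            j = jn
        pieces.append((i, j))
        i = j
    return pieces

w = [math.sqrt(k) for k in range(101)]
pieces = pl_cover(w, eps=0.1)
```

Because the chords of a concave function lengthen as the function flattens, far fewer than n pieces are needed, which is the source of the sparsity gains.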
5G technology allows heterogeneous services to coexist in the same physical network. On the radio access network (RAN), spectrum slicing of the shared radio resources is a critical task to guarantee the performance of each service. In this paper, we analyze a downlink communication in which a base station (BS) should serve two types of traffic, enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC). Due to the nature of low-latency traffic, the BS knows the channel state information (CSI) of the eMBB users only. In this setting, we study the power minimization problem under orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA) schemes. We analyze the impact of resource sharing, showing that knowledge of the eMBB CSI can also be exploited in the resource allocation for the URLLC users. Based on this analysis, we propose two algorithms: a feasible approach and a block coordinate descent (BCD) approach. We show that BCD is optimal for the URLLC power allocation. The numerical results show that NOMA leads to lower power consumption than OMA, except when the URLLC user is very close to the BS. In the latter case, the optimal approach depends on the channel condition of the eMBB user. In any case, even when the OMA paradigm attains the best performance, the gap to NOMA is negligible, demonstrating NOMA's ability to exploit the shared resources to reduce power consumption under all conditions.
Spectrum slicing of the shared radio resources is a critical task in 5G networks with heterogeneous services, through which each service obtains performance guarantees. In this paper, we consider a setup in which a Base Station (BS) should serve two types of downlink traffic, enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC). Two resource allocation strategies are compared: non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA). A framework for power minimization is presented in which the BS knows the channel state information (CSI) of the eMBB users only. Nevertheless, due to the resource sharing, it is shown that this knowledge can also be used to the benefit of the URLLC users. The numerical results show that NOMA leads to lower power consumption than OMA for every simulation parameter under test.
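A back-of-the-envelope sketch of why NOMA can save power over OMA in a two-user downlink, using illustrative channel gains and rate targets (not the paper's model, which adds CSI uncertainty and latency constraints): under OMA each user gets half the bandwidth and must pay the resulting rate penalty, while under NOMA both signals are superposed over the full bandwidth and the strong user removes interference via successive interference cancellation (SIC).

```python
# Channel power gains (noise-normalized): user 1 strong, user 2 weak.
g1, g2 = 4.0, 1.0
R1, R2 = 1.0, 1.0  # rate targets in bit/s/Hz (illustrative values)

# OMA: each user gets half the bandwidth, so it must hit rate 2*R_i
# on its share; required power per user is (2**(2*R_i) - 1) / g_i.
p_oma = (2 ** (2 * R1) - 1) / g1 + (2 ** (2 * R2) - 1) / g2

# NOMA with SIC at the strong user: user 2 decodes its own signal
# treating user 1's superposed signal as interference; user 1 cancels
# user 2's signal first and sees an interference-free channel.
p1 = (2 ** R1 - 1) / g1               # strong user, after SIC
p2 = (2 ** R2 - 1) * (p1 + 1.0 / g2)  # weak user, interference-limited
p_noma = p1 + p2
```

For these numbers the NOMA total is well below the OMA total, consistent with the abstract's conclusion; with a very strong URLLC channel the comparison can tighten, which is where the per-case analysis matters.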
In this paper we propose a modified Lie-type spectral splitting approximation for the case where the external potential is of quadratic type. We prove that the solution to a nonlinear Schrödinger equation can be approximated by solving the linear problem and treating the nonlinear term separately, with a rigorous estimate of the remainder term. Furthermore, we show by means of numerical experiments that this modified approximation is more efficient than the standard one.
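For orientation, the *standard* (unmodified) Lie splitting for a nonlinear Schrödinger equation with a quadratic potential can be sketched as a split-step Fourier scheme: the kinetic part is propagated exactly in Fourier space, and the potential plus nonlinearity act as an exact pointwise phase rotation. Both substeps are unitary, so the discrete mass is conserved to roundoff. This illustrates the baseline scheme only, not the paper's modified variant:

```python
import numpy as np

def lie_step(u, x, k, dt):
    """One first-order Lie splitting step for
    i u_t = -0.5 u_xx + 0.5 x^2 u + |u|^2 u.
    Substep A: exact kinetic flow via FFT (Fourier multiplier).
    Substep B: exact pointwise phase rotation for potential + nonlinearity."""
    u = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(u))
    u = np.exp(-1j * (0.5 * x**2 + np.abs(u)**2) * dt) * u
    return u

n, L = 256, 16.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
u = np.exp(-x**2).astype(complex)           # Gaussian initial datum
mass0 = float(np.sum(np.abs(u)**2))
for _ in range(200):
    u = lie_step(u, x, k, dt=1e-3)
```

The unitarity of each substep is what makes mass conservation exact here; the accuracy question addressed by the modified scheme concerns the splitting error, not the mass.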
The stringent reliability and processing-latency requirements of ultra-reliable low-latency communication (URLLC) traffic make the design of linear massive multiple-input multiple-output (M-MIMO) receivers very challenging. Recently, the Bayesian concept has been used to increase the detection reliability of minimum mean square error (MMSE) linear receivers. However, processing latency is a major concern due to the high complexity of the matrix inversion operations in MMSE schemes. This paper proposes an iterative M-MIMO receiver developed from a Bayesian concept and a parallel interference cancellation (PIC) scheme, referred to as a linear Bayesian learning (LBL) receiver. PIC has linear complexity, as it uses a combination of maximum ratio combining (MRC) and decision statistic combining (DSC) schemes to avoid matrix inversion operations. Simulation results show that the proposed receiver outperforms the MMSE and best Bayesian-based receivers in bit error rate (BER) by at least $2$ dB and in processing latency by a factor of $19$ for various M-MIMO system configurations.
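A toy sketch of inversion-free MRC-plus-PIC detection (QPSK, noiseless channel, illustrative dimensions; the LBL receiver described above additionally uses Bayesian symbol estimates and DSC): the Gram matrix is formed but never inverted, and each PIC iteration subtracts the interference reconstructed from the current decisions before re-deciding in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_tx, iters = 128, 4, 3

# Random i.i.d. Rayleigh channel and QPSK symbols (illustrative setup).
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = qpsk[rng.integers(0, 4, n_tx)]
y = H @ x                      # noiseless received signal, for clarity

G = H.conj().T @ H             # Gram matrix -- formed, never inverted
y_mrc = H.conj().T @ y         # maximum ratio combining output
d = np.real(np.diag(G))

def slice_qpsk(z):
    """Hard decision to the nearest QPSK constellation point."""
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

x_hat = slice_qpsk(y_mrc / d)  # initial MRC decisions
for _ in range(iters):
    # Parallel interference cancellation: subtract interference
    # reconstructed from current decisions, then re-decide all streams.
    x_hat = slice_qpsk((y_mrc - (G - np.diag(d)) @ x_hat) / d)
```

Every operation here is a matrix-vector product, which is the source of the linear (in the symbol vector length) per-iteration complexity claimed for PIC.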
The stringent requirements on reliability and processing delay in fifth-generation ($5$G) cellular networks introduce considerable challenges in the design of massive multiple-input multiple-output (M-MIMO) receivers. The two main components of an M-MIMO receiver are a detector and a decoder. To improve the trade-off between reliability and complexity, the Bayesian concept has been considered a promising approach for enhancing classical detectors, e.g., the minimum mean square error detector. This work proposes an iterative M-MIMO detector based on a Bayesian framework, a parallel interference cancellation scheme, and a decision statistics combining concept. We then develop a high-performance M-MIMO receiver by integrating the proposed detector with low-complexity sequential decoding for polar codes. Simulation results show that the proposed detector achieves a significant performance gain over other low-complexity detectors. Furthermore, the proposed M-MIMO receiver with sequential decoding achieves one order of magnitude lower complexity than a receiver with stack successive cancellation decoding for polar codes from the 5G New Radio standard.
Ranking has always been one of the top concerns in information retrieval research. For decades, the lexical matching signal has dominated the ad-hoc retrieval process, but relying solely on this signal may cause the vocabulary mismatch problem. In recent years, with the development of representation learning techniques, many researchers have turned to Dense Retrieval (DR) models for better ranking performance. Although several existing DR models have obtained promising results, their performance improvement heavily relies on the sampling of training examples. Many effective sampling strategies are not efficient enough for practical use, and for most of them there is still no theoretical analysis of how and why the performance improvement happens. To shed light on these questions, we theoretically investigate different training strategies for DR models and explain why hard negative sampling performs better than random sampling. Through this analysis, we also find that static hard negative sampling, which is employed by many existing training methods, carries many potential risks. Therefore, we propose two training strategies: a Stable Training Algorithm for dense Retrieval (STAR) and a query-side training Algorithm for Directly Optimizing Ranking pErformance (ADORE). STAR improves the stability of the DR training process by introducing random negatives. ADORE replaces the widely adopted static hard negative sampling method with a dynamic one to directly optimize the ranking performance. Experimental results on two publicly available retrieval benchmark datasets show that either strategy yields significant improvements over competitive baselines and that their combination leads to the best performance.
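The dynamic hard-negative idea can be sketched as follows: rescore the corpus with the *current* encoder and take the top-ranked non-relevant documents as negatives, rather than reusing negatives mined once from a static index. All names and numbers below are hypothetical, chosen only to make the selection rule concrete:

```python
import numpy as np

def dynamic_hard_negatives(q_emb, doc_embs, positive_ids, k=2):
    """Select the k highest-scoring non-relevant documents under the
    current model as hard negatives (the dynamic idea, in contrast to
    static negatives mined once before training)."""
    scores = doc_embs @ q_emb          # dot-product relevance scores
    order = np.argsort(-scores)        # ranking under the current model
    negatives = [int(d) for d in order if int(d) not in positive_ids]
    return negatives[:k]

rng = np.random.default_rng(0)
docs = rng.standard_normal((10, 8))    # hypothetical document embeddings
q = rng.standard_normal(8)             # hypothetical query embedding
hard = dynamic_hard_negatives(q, docs, positive_ids={3})
```

Because the negatives track the model as it trains, they stay hard throughout, which is what distinguishes this selection rule from static mining.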