Enhanced mobile broadband (eMBB) and ultra-reliable low-latency communications (URLLC) are two major services expected in fifth-generation (5G) mobile communication systems. Specifically, eMBB applications support extremely high data rates, while URLLC services aim to guarantee stringent latency with high reliability. Due to their differentiated quality-of-service (QoS) requirements, spectrum sharing between URLLC and eMBB services becomes a challenging scheduling problem. In this paper, we investigate the URLLC and eMBB coscheduling/coexistence problem under a puncturing technique in multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) systems. The objective is formulated to maximize the data rate of eMBB users while satisfying the latency requirements of URLLC users through joint user selection and power allocation. To solve this problem, we first introduce an eMBB user clustering mechanism to balance system performance and computational complexity. Thereafter, we decompose the original problem into two subproblems, namely user selection and power allocation. We apply Gale-Shapley (GS) matching theory to solve the user selection problem, and successive convex approximation (SCA) together with difference-of-convex (D.C.) programming to handle the power allocation problem. Finally, an iterative algorithm is used to find the global solution with low computational complexity. Numerical results show the effectiveness of the proposed algorithms and verify that the proposed approach outperforms baseline methods.
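As a concrete illustration of the matching step, here is a minimal, self-contained Python sketch of Gale-Shapley deferred acceptance; the URLLC-user and eMBB-cluster names and preference lists are hypothetical stand-ins for preferences that, in the paper, would be derived from channel conditions and QoS requirements.

def gale_shapley(proposer_prefs, acceptor_prefs):
    """Match each proposer (e.g., URLLC user) to one acceptor (e.g., eMBB cluster)."""
    # rank[a][p] = position of proposer p in acceptor a's preference list
    rank = {a: {p: i for i, p in enumerate(prefs)} for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)                   # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}  # next acceptor each proposer will try
    match = {}                                    # acceptor -> proposer

    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in match:
            match[a] = p                          # acceptor is free: tentatively accept
        elif rank[a][p] < rank[a][match[a]]:
            free.append(match[a])                 # acceptor prefers p: bump current partner
            match[a] = p
        else:
            free.append(p)                        # rejected: p proposes again later
    return {p: a for a, p in match.items()}

# Hypothetical preferences: URLLC users u1..u3 over eMBB clusters c1..c3.
urllc = {"u1": ["c1", "c2", "c3"], "u2": ["c1", "c3", "c2"], "u3": ["c2", "c1", "c3"]}
embb  = {"c1": ["u2", "u1", "u3"], "c2": ["u1", "u3", "u2"], "c3": ["u3", "u2", "u1"]}
print(gale_shapley(urllc, embb))  # e.g. {'u2': 'c1', 'u1': 'c2', 'u3': 'c3'}

The resulting matching is stable: no URLLC user and eMBB cluster would both prefer each other over their assigned partners, which is what makes deferred acceptance attractive for distributed user selection.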
Unmanned Aerial Vehicles (UAVs) promise to become an intrinsic part of next-generation communications, as they can be deployed to provide wireless connectivity to ground users and supplement existing terrestrial networks. The majority of existing research into the use of UAV access points for cellular coverage considers rotary-wing UAV designs (i.e. quadcopters). However, we expect fixed-wing UAVs to be more appropriate for connectivity purposes in scenarios where long flight times are necessary (such as rural coverage), as fixed-wing flight is more energy-efficient than the rotary-wing design. As fixed-wing UAVs are typically incapable of hovering in place, their deployment optimisation involves optimising their individual flight trajectories in a way that allows them to deliver high-quality service to ground users in an energy-efficient manner. In this paper, we propose a multi-agent deep reinforcement learning approach to optimise the energy efficiency of fixed-wing UAV cellular access points while still allowing them to deliver high-quality service to users on the ground. In our decentralised approach, each UAV is equipped with a Dueling Deep Q-Network (DDQN) agent which adjusts the 3D trajectory of the UAV over a series of timesteps. By coordinating with their neighbours, the UAVs adjust their individual flight trajectories in a manner that optimises the total system energy efficiency. We benchmark the performance of our approach against a series of heuristic trajectory planning strategies, and demonstrate that our method can improve the system energy efficiency by as much as 70%.
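To make the agent architecture concrete, here is a minimal PyTorch sketch of the dueling Q-network head each UAV agent could carry; the observation size and the seven-action trajectory set are hypothetical placeholders, not the paper's exact design.

import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        # Combine streams; subtracting the mean advantage keeps Q identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

# Hypothetical usage: 10-dim local observation (own and neighbour states),
# 7 discrete trajectory adjustments (e.g., heading/altitude changes plus "keep course").
net = DuelingDQN(state_dim=10, n_actions=7)
q_values = net(torch.randn(1, 10))
action = q_values.argmax(dim=-1)  # greedy trajectory adjustment for this timestep

The dueling decomposition lets the agent learn the value of a flight state separately from the relative merit of each trajectory adjustment, which tends to speed up learning when many actions have similar value.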
Private information retrieval (PIR) protocols ensure that a user can download a file from a database without revealing any information on the identity of the requested file to the servers storing the database. While existing protocols strictly impose that no information is leaked on the file's identity, this work initiates the study of the tradeoffs that can be achieved by relaxing the perfect privacy requirement. We refer to such protocols as weakly-private information retrieval (WPIR) protocols. In particular, for the case of multiple noncolluding replicated servers, we study how the download rate, the upload cost, and the access complexity can be improved when relaxing the full privacy constraint. To quantify the information leakage on the requested file's identity we consider mutual information (MI), worst-case information leakage, and maximal leakage (MaxL). We present two WPIR schemes, denoted by Scheme A and Scheme B, based on two recent PIR protocols and show that the download rate of the former can be optimized by solving a convex optimization problem. We also show that Scheme A achieves an improved download rate compared to the recently proposed scheme by Samy et al. under the so-called $\epsilon$-privacy metric. Additionally, a family of schemes based on partitioning is presented. Moreover, we provide an information-theoretic converse bound for the maximum possible download rate for the MI and MaxL privacy metrics under a practical restriction on the alphabet size of queries and answers. For two servers and two files, the bound is tight under the MaxL metric, which settles the WPIR capacity in this particular case. Finally, we compare the performance of the proposed schemes and their gap to the converse bound.
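As a toy numeric illustration of the leakage metrics named above, the following Python snippet evaluates mutual information and maximal leakage for a hypothetical two-file, two-query conditional distribution p(Q = q | M = m); identical rows would correspond to perfect privacy, and skewing them trades privacy for efficiency as in a WPIR scheme.

import numpy as np

p_q_given_m = np.array([  # rows: files m, columns: queries q (hypothetical values)
    [0.6, 0.4],
    [0.4, 0.6],
])
p_m = np.array([0.5, 0.5])        # uniform prior over the two files

p_q = p_m @ p_q_given_m           # marginal distribution over queries
# Mutual information I(M; Q) in bits
mi = np.sum(p_m[:, None] * p_q_given_m *
            np.log2(p_q_given_m / p_q[None, :]))
# Maximal leakage: log2 of the sum of column-wise maxima
maxl = np.log2(np.sum(np.max(p_q_given_m, axis=0)))
print(f"I(M;Q) = {mi:.4f} bits, MaxL = {maxl:.4f} bits")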
In contrast to conventional wired connections, industrial control over wireless links is widely regarded as a promising solution due to its reduced cost and increased long-term reliability. However, mission-critical applications impose stringent quality-of-service (QoS) requirements that entail ultra-reliable low-latency communications (URLLC). The primary feature of URLLC is that the blocklength of channel codes is short, so the conventional Shannon capacity is no longer applicable. In this paper, we consider URLLC in a factory automation (FA) scenario. Due to the densely deployed equipment in FA, wireless signals are easily blocked by obstacles. To address this issue, we propose to deploy an intelligent reflecting surface (IRS) to create an alternative transmission link and thereby enhance transmission reliability. We focus on the performance analysis of IRS-aided URLLC-enabled communications in an FA scenario. Both the average data rate (ADR) and the average decoding error probability (ADEP) are derived under finite channel blocklength for seven cases: 1) Rayleigh fading channels; 2) with a direct channel link; 3) Nakagami-m fading channels; 4) imperfect phase alignment; 5) multiple IRSs; 6) Rician fading channels; 7) correlated channels. Extensive numerical results are provided to verify the accuracy of our derived results.
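For intuition about the finite-blocklength regime underlying the ADR/ADEP analysis, the snippet below evaluates the standard normal-approximation decoding error probability for an AWGN-style link, eps ~ Q(sqrt(n/V)(C - R)) with channel dispersion V; the SNR, blocklength, and rate values are illustrative assumptions, not results from the paper.

import numpy as np
from scipy.stats import norm

def error_probability(snr: float, n: int, rate: float) -> float:
    c = np.log2(1.0 + snr)                                   # Shannon capacity (bits/use)
    v = (1.0 - 1.0 / (1.0 + snr) ** 2) * np.log2(np.e) ** 2  # channel dispersion
    return norm.sf(np.sqrt(n / v) * (c - rate))              # Q-function via survival function

for n in (100, 500, 2000):
    eps = error_probability(snr=10.0, n=n, rate=3.0)
    print(f"n = {n:4d}: decoding error probability ~ {eps:.2e}")

The output illustrates why short blocklengths matter for URLLC: at a fixed rate below capacity, the error probability shrinks rapidly as the blocklength grows, so short packets pay a measurable reliability penalty that the Shannon limit alone does not capture.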
The guessing random additive noise decoding (GRAND) algorithm has emerged as an excellent decoding strategy that can meet both high-reliability and low-latency constraints. This paper proposes a successive addition-subtraction algorithm to generate noise error patterns. A noise error pattern generation scheme is presented that embeds bursts of "1"s and "0"s alternately. The detailed procedures of the proposed algorithm are then presented, and its correctness is demonstrated through theoretical derivations. The aim of this work is to provide a preliminary paradigm and reference for future research on the GRAND algorithm and its hardware implementation.
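For reference, here is a minimal Python sketch of the basic GRAND loop (not the proposed addition-subtraction pattern generator): error patterns are tested in order of increasing Hamming weight, the maximum-likelihood order for a binary symmetric channel, until one yields a zero syndrome. The (7,4) Hamming parity-check matrix and received word are illustrative, not from the paper.

from itertools import combinations
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])  # (7,4) Hamming parity-check matrix

def grand(y: np.ndarray, max_weight: int = 3):
    n = y.size
    for w in range(max_weight + 1):              # weight-0 pattern (no error) first
        for flips in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            candidate = y ^ e                    # remove the guessed noise pattern
            if not (H @ candidate % 2).any():    # zero syndrome => valid codeword
                return candidate, e
    return None, None                            # abandon guessing after max_weight

y = np.array([1, 1, 0, 1, 0, 0, 0])              # hypothetical word with one flipped bit
codeword, error = grand(y)
print("decoded codeword:", codeword, "error pattern:", error)

Because the code structure enters only through the membership test (the syndrome check), the same guessing loop works with any linear code, which is the code-agnostic property that makes GRAND attractive for hardware implementation.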
LoRaWAN has turned out to be one of the most successful frameworks for IoT devices. Real-world scenarios demand such networks alongside a robust stream-processing application layer. To maintain exactly-once processing semantics, one must proactively detect and handle message drops. An important use case in which stream processing plays a crucial role is joining data streams transmitted via gateways connected to edge devices that are related to each other as part of a common business requirement. LoRaWAN supports connectivity to multiple gateways for edge devices, and by virtue of its different device classes, the network can send and receive messages in a way that conserves both battery power and network bandwidth. Rather than relying on explicit acknowledgements for transmitted messages, we take advantage of these device characteristics to detect and handle missing messages and finally process them.
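As a minimal sketch of the idea, assuming gap detection is driven by LoRaWAN's per-device uplink frame counter (FCnt), the following Python snippet flags missing frames without explicit acknowledgements; the handling hook (e.g., recovering a replayed copy received via another gateway) is left hypothetical.

from collections import defaultdict

last_fcnt = defaultdict(lambda: None)   # device address -> last seen frame counter

def on_uplink(dev_addr: str, fcnt: int, payload: bytes):
    prev = last_fcnt[dev_addr]
    if prev is not None and fcnt > prev + 1:
        missing = list(range(prev + 1, fcnt))
        handle_missing(dev_addr, missing)        # hypothetical recovery hook
    last_fcnt[dev_addr] = max(fcnt, prev or 0)   # tolerate out-of-order duplicates
    process(dev_addr, fcnt, payload)             # hand off to exactly-once processing

def handle_missing(dev_addr, missing):
    print(f"{dev_addr}: lost frames {missing}")

def process(dev_addr, fcnt, payload):
    print(f"{dev_addr}: processed frame {fcnt}")

# Hypothetical stream: frame 4 from device '26011F22' never arrives.
for fcnt in (1, 2, 3, 5):
    on_uplink("26011F22", fcnt, b"\x00")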
We propose a joint channel estimation and data detection (JED) algorithm for densely populated cell-free massive multiuser (MU) multiple-input multiple-output (MIMO) systems, which reduces the channel-training overhead caused by the presence of hundreds of simultaneously transmitting user equipments (UEs). Our algorithm iteratively solves a relaxed version of a maximum a-posteriori JED problem and simultaneously exploits the sparsity of cell-free massive MU-MIMO channels as well as the boundedness of QAM constellations. To improve the performance and convergence of the algorithm, we propose methods that permute the access point and UE indices to form so-called virtual cells, which leads to better initial solutions. We assess the performance of our algorithm in terms of root-mean-square symbol error, bit error rate, and mutual information, and we demonstrate that JED significantly reduces the pilot overhead compared to orthogonal training, enabling reliable short-packet communication with a large number of UEs.
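Two generic building blocks suggested by that formulation are sketched below in Python: complex soft-thresholding to promote channel sparsity, and projection of symbol estimates onto the box enclosing a QAM constellation. This is not the paper's exact iteration; both operators and all values are illustrative.

import numpy as np

def soft_threshold(h: np.ndarray, tau: float) -> np.ndarray:
    """Complex soft-thresholding: shrinks small channel coefficients to zero."""
    mag = np.abs(h)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * h, 0)

def project_box(s: np.ndarray, radius: float) -> np.ndarray:
    """Clip real and imaginary parts to the QAM bounding box [-radius, radius]."""
    return (np.clip(s.real, -radius, radius)
            + 1j * np.clip(s.imag, -radius, radius))

h = np.array([0.05 + 0.02j, 1.2 - 0.7j, 0.01j])   # hypothetical sparse channel taps
s = np.array([1.4 + 0.2j, -0.3 - 1.8j])           # hypothetical symbol estimates
print(soft_threshold(h, tau=0.1))                 # weak taps suppressed to zero
print(project_box(s, radius=1.0))                 # symbols pulled into the QAM box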
This paper investigates a cognitive unmanned aerial vehicle (UAV) enabled Internet of Things (IoT) network, in which secondary/cognitive IoT devices upload their data to the UAV hub following a non-orthogonal multiple access (NOMA) protocol in the spectrum of the primary network. We aim to maximize the minimum lifetime of the IoT devices by jointly optimizing the UAV location, transmit power, and decoding order subject to interference-power constraints in the presence of imperfect channel state information (CSI). To solve the formulated non-convex mixed-integer programming problem, we first jointly optimize the UAV location and transmit power for a given decoding order and obtain the globally optimal solution with the assistance of Lagrange duality, and then obtain the best decoding order by exhaustive search, which is applicable to relatively small-scale scenarios. For large-scale scenarios, we propose a low-complexity suboptimal algorithm that transforms the original problem into a more tractable equivalent form and applies the successive convex approximation (SCA) technique and a penalty-function method. Numerical results demonstrate that the proposed design significantly outperforms the benchmark schemes.
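A structural sketch of the small-scale procedure is shown below: exhaustive enumeration of NOMA decoding orders around an inner solver. The inner solver here is a toy scoring function standing in for the paper's Lagrange-duality step, and the device names and gains are hypothetical.

from itertools import permutations

def solve_location_power(order):
    # Stand-in for the Lagrange-duality inner solver: a toy score over
    # hypothetical channel gains, NOT the paper's actual objective.
    gains = {"d1": 0.9, "d2": 0.5, "d3": 0.2}
    return sum((i + 1) * gains[d] for i, d in enumerate(order)), order

def best_decoding_order(devices):
    best_lifetime, best_order = float("-inf"), None
    for order in permutations(devices):   # K! orders: feasible only for small K
        lifetime, _ = solve_location_power(order)
        if lifetime > best_lifetime:
            best_lifetime, best_order = lifetime, order
    return best_lifetime, best_order

print(best_decoding_order(["d1", "d2", "d3"]))

The factorial growth of the outer loop is precisely why the paper resorts to the SCA-based suboptimal algorithm once the number of devices grows.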
Driven by the visions of the Internet of Things (IoT) and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization in edge computing systems. Open issues in analyzing and designing an edge computing system are also discussed.
Most existing recommender systems leverage only one type of user behavior data, such as the purchase behavior in e-commerce that is directly related to the business KPI (Key Performance Indicator) of conversion rate. Besides this key behavioral data, we argue that other forms of user behavior, such as views, clicks, and adding a product to the shopping cart, also provide valuable signals and should be taken into account to provide quality recommendations for users. In this work, we contribute a new solution named NMTR (short for Neural Multi-Task Recommendation) for learning recommender systems from multi-behavior user data. We develop a neural network model to capture the complicated multi-type interactions between users and items. In particular, our model accounts for the cascading relationship among different types of behaviors (e.g., a user must click on a product before purchasing it). To fully exploit the signal in the data of multiple behavior types, we perform a joint optimization based on the multi-task learning framework, in which the optimization on each behavior is treated as a task. Extensive experiments on two real-world datasets demonstrate that NMTR significantly outperforms state-of-the-art recommender systems designed to learn from either single-behavior or multi-behavior data. Further analysis shows that modeling multiple behaviors is particularly useful for providing recommendations to sparse users who have very few interactions.
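A minimal PyTorch sketch of the two ingredients named above follows: a cascade that gates the purchase probability by the click probability, and a joint multi-task loss summing per-behavior objectives. The shared encoder and equal task weights are simplifying assumptions, not the paper's exact NMTR architecture.

import torch
import torch.nn as nn

class CascadedMultiTask(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)
        self.click_head = nn.Linear(dim, 1)
        self.buy_head = nn.Linear(dim, 1)

    def forward(self, u, i):
        z = self.user(u) * self.item(i)            # shared user-item interaction features
        p_click = torch.sigmoid(self.click_head(z))
        # Cascade: a purchase requires a click, so gate the purchase probability.
        p_buy = p_click * torch.sigmoid(self.buy_head(z))
        return p_click.squeeze(-1), p_buy.squeeze(-1)

model = CascadedMultiTask(n_users=100, n_items=500)
u = torch.tensor([3, 7]); i = torch.tensor([42, 10])
y_click = torch.tensor([1.0, 1.0]); y_buy = torch.tensor([0.0, 1.0])
p_click, p_buy = model(u, i)
bce = nn.functional.binary_cross_entropy
loss = 0.5 * bce(p_click, y_click) + 0.5 * bce(p_buy, y_buy)  # joint multi-task loss
loss.backward()

Gating the purchase score by the click score encodes the cascading relationship directly in the model, so abundant click data regularizes the representation used for the sparser purchase task.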
Smart services are an important element of smart cities and Internet of Things (IoT) ecosystems, where the intelligence behind the services is obtained and improved through sensory data. Providing a large amount of labeled training data is not always feasible; therefore, we need to consider alternative approaches that incorporate unlabeled data as well. In recent years, deep reinforcement learning (DRL) has achieved great success in several application domains. It is an applicable method for IoT and smart city scenarios, where auto-generated data can be partially labeled by users' feedback for training purposes. In this paper, we propose a semi-supervised deep reinforcement learning model that fits smart city applications, as it consumes both labeled and unlabeled data to improve the performance and accuracy of the learning agent. The model utilizes Variational Autoencoders (VAEs) as the inference engine for generalizing optimal policies. To the best of our knowledge, the proposed model is the first to extend deep reinforcement learning to the semi-supervised paradigm. As a case study of smart city applications, we focus on smart buildings and apply the proposed model to the problem of indoor localization based on BLE signal strength. Indoor localization is a key component of smart city services, since people spend significant time in indoor environments. Our model learns action policies that lead to a close estimation of the target locations, with an improvement of 23% in terms of distance to the target and at least 67% more received rewards compared to the supervised DRL model.
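To make the model shape concrete, here is a minimal PyTorch sketch, assuming a VAE trained on all BLE RSSI observations (labeled and unlabeled) whose latent code feeds a Q-head that is updated only from rewarded transitions; the dimensions and loss weighting are hypothetical, not the paper's exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedVAEAgent(nn.Module):
    def __init__(self, obs_dim=8, latent=4, n_actions=4):
        super().__init__()
        self.enc = nn.Linear(obs_dim, 2 * latent)   # outputs mean and log-variance
        self.dec = nn.Linear(latent, obs_dim)
        self.q_head = nn.Linear(latent, n_actions)  # Q-values over localization actions

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()        # reparameterization
        recon = self.dec(z)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        vae_loss = F.mse_loss(recon, x) + kl        # unsupervised objective on all data
        return self.q_head(mu), vae_loss

agent = SemiSupervisedVAEAgent()
rssi = torch.randn(16, 8)                           # batch of BLE RSSI observation vectors
q_values, vae_loss = agent(rssi)
# The labeled/rewarded part of the batch additionally gets a TD loss on q_values
# (omitted here); total loss = td_loss + beta * vae_loss over the full batch.

This split is what lets unlabeled RSSI samples still contribute: they shape the latent representation through the VAE objective even when no reward signal is available for them.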