Unmanned aerial vehicles (UAVs) are envisioned to be extensively employed for assisting wireless communications in Internet of Things (IoT) applications. On the other hand, terahertz (THz)-enabled intelligent reflecting surfaces (IRSs) are expected to be one of the core enabling technologies for forthcoming beyond-5G wireless communications, which promise a broad range of data-demanding applications. In this paper, we propose a UAV-mounted IRS (UIRS) communication system over THz bands for confidential data dissemination from an access point (AP) towards multiple ground user equipments (UEs) in IoT networks. Specifically, the AP intends to send data to the scheduled UE, while unscheduled UEs may pose potential adversaries. To protect information messages and the privacy of the scheduled UE, we aim to devise an energy-efficient multi-UAV covert communication scheme, where the UIRS is used for reliable data transmission and an extra UAV acts as a cooperative jammer generating artificial noise (AN) to degrade detection at the unscheduled UEs. We then formulate a novel minimum average energy efficiency (mAEE) optimization problem, targeting improved covert throughput and reduced UAV propulsion energy consumption subject to the covertness requirement, which is determined analytically. Since the optimization problem is non-convex, we tackle it via the block successive convex approximation (BSCA) approach, iteratively solving a sequence of approximated convex sub-problems to design the binary user scheduling, the AP's power allocation, the maximum AN jamming power, the IRS beamforming, and both UAVs' trajectories. Finally, we present a low-complexity overall algorithm for system performance enhancement, along with complexity and convergence analysis. Numerical results verify our analysis and demonstrate that our design significantly outperforms existing benchmark schemes.
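As a rough illustration of the successive convex approximation principle that BSCA builds on, the sketch below applies SCA to a toy difference-of-convex problem using cvxpy: the concave part of the objective is linearized around the current iterate, and the resulting convex surrogate is solved repeatedly. The toy objective, variable names, and stopping rule are illustrative assumptions only; the paper's actual sub-problems (scheduling, power, beamforming, trajectories) are handled block by block with problem-specific surrogates.

```python
# Minimal SCA sketch on a toy difference-of-convex problem:
# minimize x^4 - x^2 over |x| <= 2. The concave term (-x^2) is
# linearized at the current iterate x_k, giving a convex subproblem.
# BSCA applies the same idea block-by-block over the design variables.
import cvxpy as cp

x_k = 1.5  # initial point (assumed)
for it in range(20):
    x = cp.Variable()
    # Convex surrogate: keep x^4, linearize -x^2 around x_k.
    surrogate = cp.power(x, 4) - (x_k**2 + 2 * x_k * (x - x_k))
    prob = cp.Problem(cp.Minimize(surrogate), [cp.abs(x) <= 2])
    prob.solve()
    if abs(float(x.value) - x_k) < 1e-6:
        break
    x_k = float(x.value)

print(f"SCA converged to x = {x_k:.4f}")  # a stationary point near 1/sqrt(2)
```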
Recently, reinforcement learning (RL) has been applied as an anti-adversarial remedy in wireless communication networks. However, studying RL-based approaches from the adversary's perspective has received little attention. Additionally, RL-based approaches in anti-adversary or adversarial paradigms mostly consider single-channel communication (either channel selection or single-channel power control), while multi-channel communication is more common in practice. In this paper, we propose a multi-agent adversary system (MAAS) for modeling and analyzing adversaries in a wireless communication scenario through careful design of the reward function under realistic communication scenarios. In particular, by modeling the adversaries as learning agents, we show that the proposed MAAS is able to successfully choose the transmitted channel(s) and their respective allocated power(s) without any prior knowledge of the sender's strategy. Compared to a single-agent adversary (SAA), the multiple agents in MAAS achieve a significant reduction in signal-to-interference-plus-noise ratio (SINR) under the same power constraints and partial observability, while providing improved stability and a more efficient learning process. Moreover, through empirical studies we show that the simulation results closely match those obtained in real-world communication, a conclusion that is pivotal to the validity of evaluating agents' performance in simulation.
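To make the multi-agent adversary setting concrete, the following sketch shows independent learning agents that jointly pick a (channel, power level) pair to suppress the legitimate link's SINR. The toy SINR model, the reward weights, and the stateless bandit-style Q-learning are illustrative assumptions, not the MAAS architecture or reward design from the paper.

```python
# Hypothetical sketch: two jamming agents each select (channel, power level);
# the shared reward is the negative SINR of the legitimate link minus a
# penalty on total jamming power (all constants are assumed for illustration).
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS, POWER_LEVELS, N_AGENTS = 4, (0.0, 0.5, 1.0), 2
N_ACTIONS = N_CHANNELS * len(POWER_LEVELS)
Q = [np.zeros(N_ACTIONS) for _ in range(N_AGENTS)]   # stateless per-agent value tables
eps, alpha, lam = 0.1, 0.1, 0.2                      # exploration, step size, power penalty

def sinr(sender_channel, jam_per_channel, tx_power=1.0, noise=0.1):
    return tx_power / (noise + jam_per_channel[sender_channel])

for episode in range(5000):
    sender_channel = rng.integers(N_CHANNELS)        # unknown to the adversaries
    actions = [int(np.argmax(q)) if rng.random() > eps else int(rng.integers(N_ACTIONS))
               for q in Q]
    jam = np.zeros(N_CHANNELS)
    for a in actions:
        ch, p = divmod(a, len(POWER_LEVELS))
        jam[ch] += POWER_LEVELS[p]
    reward = -sinr(sender_channel, jam) - lam * jam.sum()
    for q, a in zip(Q, actions):
        q[a] += alpha * (reward - q[a])              # bandit-style value update
```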
5G wireless networks are poised to revolutionize future technologies. 5G technologies are expected to meet the demands of diverse vertical applications with diverse requirements, including high traffic volume, massive connectivity, high quality of service, and low latency. To fulfill such requirements in 5G and beyond, emerging technologies such as software-defined networking (SDN), network function virtualization (NFV), multi-access edge computing (MEC), and cloud computing (CC) are being deployed. However, these technologies raise several issues regarding transparency, decentralization, and reliability. Furthermore, 5G networks are expected to connect many heterogeneous devices and machines, which raises several security concerns regarding users' confidentiality, data privacy, and trustworthiness. To work seamlessly and securely in such scenarios, future 5G networks need to deploy smarter and more efficient security functions. Motivated by these issues, researchers have proposed blockchain to overcome 5G's challenges, owing to its capacity to ensure transparency, data reliability, trustworthiness, and immutability in a distributed environment. Indeed, blockchain has gained momentum as a novel technology that gives rise to a plethora of new decentralized technologies. In this chapter, we discuss the integration of blockchain with 5G networks and beyond. We then present how blockchain applications in 5G networks and beyond could facilitate enabling various services at the edge and the core.
The Industrial Internet of Things (IIoT) has sparked key revolutions in several leading industries, such as energy, agriculture, mining, transportation, and healthcare. Owing to its high capacity and fast transmission speed, 5G plays a pivotal role in enhancing industrial procedures, practices, and guidelines, such as crowdsourcing, cloud outsourcing, and platform subcontracting. Spatial crowdsourcing (SC) servers (such as those operated by DiDi, MeiTuan, and Uber) assign tasks based on workers' location information. However, SC servers are often untrustworthy and threaten to reveal workers' privacy. In this paper, we introduce Geo-MOEA (Multi-Objective Evolutionary Algorithm), a framework to protect the location privacy of workers on SC platforms in 5G environments. We propose an adaptive regionalized obfuscation mechanism with inference error bounds based on geo-indistinguishability (a strong notion of differential privacy), which is suitable for large-scale location data and task allocation. This allows workers to report locally generated pseudo-locations instead of their actual locations. Further, to optimize the trade-off between SC service availability and privacy protection, we utilize the MOEA to improve the global applicability of the mechanism in 5G environments. Finally, experimental results on simulated location scenarios show that the mechanism not only protects location privacy but also achieves the desired high availability of services.
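For readers unfamiliar with geo-indistinguishability, the sketch below shows the standard planar Laplace mechanism that underlies such location obfuscation: a reported pseudo-location is the true location plus polar noise whose radius is calibrated by the privacy parameter epsilon. The epsilon value, the flat (x, y) coordinate model, and the function names are illustrative assumptions and do not reproduce the paper's adaptive regionalized mechanism.

```python
# Minimal planar Laplace mechanism sketch (Andres et al. style
# geo-indistinguishability): sample a uniform direction and a radius
# via the inverse CDF, which involves the Lambert W function.
import numpy as np
from scipy.special import lambertw

def planar_laplace_noise(epsilon, rng):
    theta = rng.uniform(0.0, 2.0 * np.pi)                  # uniform direction
    p = rng.uniform(0.0, 1.0)
    # Inverse CDF of the noise radius for privacy parameter epsilon.
    r = -(1.0 / epsilon) * (lambertw((p - 1.0) / np.e, k=-1).real + 1.0)
    return r * np.cos(theta), r * np.sin(theta)

def obfuscate(x, y, epsilon=0.01, rng=None):               # epsilon in 1/meters (assumed)
    rng = rng or np.random.default_rng()
    dx, dy = planar_laplace_noise(epsilon, rng)
    return x + dx, y + dy

true_loc = (500.0, 300.0)                                   # meters in a local frame (assumed)
print(obfuscate(*true_loc))                                 # pseudo-location to report
```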
The interconnection of vehicles in the future fifth-generation (5G) wireless ecosystem forms the so-called Internet of Vehicles (IoV). IoV offers new kinds of applications requiring delay-sensitive, compute-intensive, and bandwidth-hungry services. Mobile edge computing (MEC) and network slicing (NS) are two key enabling technologies in 5G networks that can be used to optimize the allocation of network resources and guarantee the diverse requirements of IoV applications. As traditional model-based optimization techniques generally lead to NP-hard, strongly non-convex, and non-linear mathematical programming formulations, in this paper we introduce a model-free approach based on deep reinforcement learning (DRL) to solve the resource allocation problem in a MEC-enabled IoV network based on network slicing. Furthermore, the solution uses non-orthogonal multiple access (NOMA) to better exploit the scarce channel resources. The considered problem jointly addresses channel and power allocation, slice selection, and vehicle selection (vehicle grouping). We model the problem as a single-agent Markov decision process and then solve it with the well-known deep Q-learning (DQL) algorithm. We show that our approach is robust and effective under different network conditions compared to benchmark solutions.
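As a rough sketch of the DQL machinery referenced above, the skeleton below shows a deep Q-network with a target network, replay buffer, and epsilon-greedy action selection over a discrete action index (e.g., a joint index over channel, power level, slice, and vehicle group). The state dimension, action count, network sizes, and hyperparameters are placeholders; the actual MDP design follows the paper, not this illustrative skeleton.

```python
# Minimal DQL skeleton for a discrete resource-allocation action space.
# Transitions (s, a, r, s2) are assumed to be appended to `replay` as
# tensors during interaction with the (unspecified) environment.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 16, 64          # hypothetical sizes
q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay, gamma, eps = deque(maxlen=10_000), 0.99, 0.1

def select_action(state):
    """Epsilon-greedy action over the joint resource-allocation index."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch_size=32):
    """One temporal-difference update from a random replay minibatch."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = map(torch.stack, zip(*random.sample(list(replay), batch_size)))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()
```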
This paper studies the multi-agent resource allocation problem in vehicular networks using non-orthogonal multiple access (NOMA) and network slicing. To ensure heterogeneous service requirements for different vehicles, we propose a network slicing architecture. We focus on a non-cellular network scenario where vehicles communicate via broadcast over the direct device-to-device interface. In such a vehicular network, resource allocation among vehicles is very difficult, mainly due to (i) the rapid variation of wireless channels among highly mobile vehicles and (ii) the lack of a central coordination point. Thus, acquiring instantaneous channel state information to perform centralized resource allocation is precluded. The resource allocation problem considered is therefore very complex. It includes not only the usual spectrum and power allocation, but also coverage selection (which target vehicles to broadcast to) and packet selection (which network slice to use). These decisions must be made jointly, since selected packets can be overlaid using NOMA, and spectrum and power must therefore be carefully allocated for better vehicle coverage. To do so, we provide an optimization approach and study the NP-hardness of the problem. Then, we model the problem as a multi-agent Markov decision process. Finally, we use a deep reinforcement learning (DRL) approach to solve the problem. The proposed DRL algorithm is practical because it can be implemented in an online and distributed manner. We show that our approach is robust and efficient under various network parameter settings, when compared to centralized benchmarks.
This paper studies the problem of online user grouping, scheduling, and power allocation in beyond-5G cellular-based Internet of Things networks. Due to the massive number of devices attempting to access the network, a non-orthogonal multiple access (NOMA) method is adopted to accommodate multiple devices in the same radio resource block. Unlike most previous works, the objective is to maximize the number of served devices while allocating their transmission powers such that both their real-time requirements and their limited operating energy are respected. First, we formulate the general problem as a mixed-integer non-linear program (MINLP) that can easily be transformed into a mixed-integer linear program (MILP) for some special cases. Second, we study its computational complexity by characterizing the NP-hardness of different special cases. Then, by dividing the problem into multiple NOMA grouping and scheduling subproblems, we propose efficient online competitive algorithms. Further, we show how to use these online algorithms and combine their solutions in a reinforcement learning setting to obtain the power allocation and hence a global solution to the problem. Our analysis is supplemented by simulation results that illustrate the performance of the proposed algorithms in comparison with optimal and state-of-the-art methods.
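To illustrate the flavor of online admission under NOMA grouping constraints, the sketch below uses a simple greedy rule: each arriving device is placed into the first resource block that still has a free multiplexing slot and enough residual power budget for the device's demand. The fixed group size, per-block power budget, and arrival data are illustrative assumptions; they stand in for, but do not reproduce, the paper's competitive algorithms and energy/real-time constraints.

```python
# Hypothetical greedy online admission for NOMA grouping: serve a device
# if some block has a free slot and residual power for its demand.
from dataclasses import dataclass, field

MAX_PER_BLOCK = 2          # devices multiplexed per block via NOMA (assumed)
POWER_BUDGET = 1.0         # per-block transmit power budget (assumed)

@dataclass
class Block:
    devices: list = field(default_factory=list)
    used_power: float = 0.0

def admit(blocks, device_id, power_demand):
    """Return True if the device is served in some block, else reject it."""
    for b in blocks:
        if len(b.devices) < MAX_PER_BLOCK and b.used_power + power_demand <= POWER_BUDGET:
            b.devices.append(device_id)
            b.used_power += power_demand
            return True
    return False

blocks = [Block() for _ in range(3)]
arrivals = [("d1", 0.6), ("d2", 0.5), ("d3", 0.3), ("d4", 0.9), ("d5", 0.2)]
served = [d for d, p in arrivals if admit(blocks, d, p)]
print("served devices:", served)
```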
The term "cyber resilience by design" is growing in popularity. Here, by cyber resilience we refer to the ability of the system to resist, minimize and mitigate a degradation caused by a successful cyber-attack on a system or network of computing and communicating devices. Some use the term "by design" when arguing that systems must be designed and implemented in a provable mission assurance fashion, with the system's intrinsic properties ensuring that a cyber-adversary is unable to cause a meaningful degradation. Others recommend that a system should include a built-in autonomous intelligent agent responsible for thinking and acting towards continuous observation, detection, minimization and remediation of a cyber degradation. In all cases, the qualifier "by design" indicates that the source of resilience is somehow inherent in the structure and operation of the system. But what, then, is the other resilience, not by design? Clearly, there has to be another type of resilience, otherwise what's the purpose of the qualifier "by design"? Indeed, while mentioned less frequently, there exists an alternative form of resilience called "resilience by intervention." In this article we explore differences and mutual reliance of resilience by design and resilience by intervention.
With the growth of 5G, Internet of Things (IoT), edge computing, and cloud computing technologies, the infrastructure (compute and network) available to emerging applications (AR/VR, autonomous driving, Industry 4.0, etc.) has become quite complex. There are multiple tiers of computing (IoT devices, near edge, far edge, cloud, etc.) connected by different types of networking technologies (LAN, LTE, 5G, MAN, WAN, etc.). Deploying and managing applications in such an environment is quite challenging. In this paper, we propose ROMA, which performs resource orchestration for microservices-based 5G applications in a dynamic, heterogeneous, multi-tiered compute and network fabric. We assume that only application-level requirements are known and that the detailed requirements of the individual microservices in the application are not specified. As part of our solution, ROMA identifies and leverages the coupling relationship between compute and network usage for the various microservices and solves an optimization problem to determine how each microservice should be deployed in the complex, multi-tiered compute and network fabric so that the end-to-end application requirements are optimally met. We implemented two real-world 5G applications in the video surveillance and intelligent transportation system (ITS) domains. Through extensive experiments, we show that ROMA is able to save up to 90%, 55%, and 44% compute and up to 80%, 95%, and 75% network bandwidth for the surveillance (watchlist) and transportation (person and car detection) applications, respectively. This improvement is achieved while honoring the application performance requirements and is measured against an alternative scheme that employs a static, overprovisioned resource allocation strategy that ignores the resource coupling relationships.
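As a toy illustration of the kind of placement optimization such an orchestrator performs, the sketch below assigns each microservice to a compute tier so that total compute cost is minimized while a coarse end-to-end latency budget is respected, formulated as a small binary program in PuLP. The tier names, costs, latencies, additive latency model, and latency budget are all illustrative assumptions and are not ROMA's actual formulation, which also models compute-network coupling.

```python
# Hypothetical microservice-to-tier placement as a small binary program:
# minimize compute cost subject to a crude additive latency budget.
import pulp

services = ["decode", "detect", "track"]
tiers = {"edge": {"cpu_cost": 3.0, "latency": 5.0},
         "cloud": {"cpu_cost": 1.0, "latency": 40.0}}
E2E_BUDGET_MS = 60.0

prob = pulp.LpProblem("placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(s, t) for s in services for t in tiers], cat="Binary")

# Objective: total compute cost of the chosen placements.
prob += pulp.lpSum(tiers[t]["cpu_cost"] * x[(s, t)] for s in services for t in tiers)
# Each microservice runs on exactly one tier.
for s in services:
    prob += pulp.lpSum(x[(s, t)] for t in tiers) == 1
# Coarse end-to-end latency model: per-service latencies must fit the budget.
prob += pulp.lpSum(tiers[t]["latency"] * x[(s, t)] for s in services for t in tiers) <= E2E_BUDGET_MS

prob.solve(pulp.PULP_CBC_CMD(msg=0))
placement = {s: t for s in services for t in tiers if x[(s, t)].value() == 1}
print(placement)   # e.g., two services on the edge, one offloaded to the cloud
```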
`Tracking' is the collection of data about an individual's activity across multiple distinct contexts and the retention, use, or sharing of data derived from that activity outside the context in which it occurred. This paper aims to introduce tracking on the web, on smartphones, and in the Internet of Things to an audience with little or no previous knowledge. It covers these topics primarily from the perspective of computer science and human-computer interaction, but also includes relevant law and policy aspects. Rather than a systematic literature review, it aims to provide an over-arching narrative spanning this large research space. Section 1 introduces the concept of tracking. Section 2 provides a short history of the major developments of tracking on the web. Section 3 presents research covering the detection, measurement, and analysis of web tracking technologies. Section 4 delves into countermeasures against web tracking and mechanisms that have been proposed to allow users to control and limit tracking, as well as studies of end-user perspectives on tracking. Section 5 focuses on tracking on `smart' devices, including smartphones and the Internet of Things. Section 6 covers emerging issues affecting the future of tracking across these different platforms.
Multi-access edge computing (MEC) is viewed as an integral part of future wireless networks to support new applications with stringent service reliability and latency requirements. However, guaranteeing ultra-reliable and low-latency MEC (URLL MEC) is very challenging due to the uncertainties of wireless links, limited communication and computing resources, and dynamic network traffic. Enabling URLL MEC mandates taking into account the statistics of the end-to-end (E2E) latency and reliability across the wireless and edge computing systems. In this paper, a novel framework is proposed to optimize the reliability of MEC networks by considering the distribution of the E2E service delay, encompassing over-the-air transmission and edge computing latency. The proposed framework builds on correlated variational autoencoders (VAEs) to estimate the full distribution of the E2E service delay. Using this result, a new optimization problem based on risk theory is formulated to maximize network reliability by minimizing the Conditional Value at Risk (CVaR), as a risk measure of the E2E service delay. To solve this problem, a new algorithm is developed to efficiently allocate users' processing tasks to edge computing servers across the MEC network, while considering the statistics of the E2E service delay learned by the VAEs. The simulation results show that the proposed scheme outperforms several baselines that do not account for risk analysis or the statistics of the E2E service delay.
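For readers unfamiliar with the risk measure involved, the sketch below estimates the Value at Risk (VaR) and CVaR of the E2E service delay from a sample of delays, such as delays drawn from a learned generative model. The lognormal sample generator, the alpha level, and the sample size are illustrative assumptions only.

```python
# Minimal VaR/CVaR estimation from delay samples: VaR is the alpha-quantile
# of the delay, and CVaR is the mean delay over the worst (1 - alpha) tail.
import numpy as np

def var_cvar(delays, alpha=0.95):
    """Value at Risk and Conditional Value at Risk of a delay sample."""
    delays = np.sort(np.asarray(delays))
    var = np.quantile(delays, alpha)              # alpha-quantile of the delay
    tail = delays[delays >= var]                  # worst (1 - alpha) fraction
    return var, tail.mean()                       # CVaR = mean of the tail

rng = np.random.default_rng(1)
samples = rng.lognormal(mean=2.0, sigma=0.5, size=10_000)   # assumed delay samples (ms)
var95, cvar95 = var_cvar(samples)
print(f"VaR_95 = {var95:.1f} ms, CVaR_95 = {cvar95:.1f} ms")
```

Minimizing CVaR rather than the mean delay pushes the allocation to protect against the worst-case tail of the delay distribution, which is the property the reliability formulation above relies on.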