In this paper, we investigate an unmanned aerial vehicle (UAV)-assisted integrated communication and localization network in emergency scenarios, where a single UAV is deployed as both an airborne base station (BS) and an anchor node to assist ground BSs in providing communication and localization services. We formulate an optimization problem to maximize the sum communication rate of all users under localization accuracy constraints by jointly optimizing the 3D position of the UAV and the communication bandwidth and power allocation of the UAV and ground BSs. To address the intractable localization accuracy constraints, we introduce a new performance metric and geometrically characterize the feasible UAV deployment region in which the localization accuracy constraints are satisfied. Accordingly, we combine Gibbs sampling (GS) and block coordinate descent (BCD) techniques to tackle the non-convex joint optimization problem. Numerical results show that the proposed method attains nearly the same rate performance as the meta-heuristic benchmark method while reducing the CPU time by 89.3%.
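The abstract only names the GS-plus-BCD decomposition, so the minimal sketch below is an illustrative reading of that structure rather than the paper's algorithm: a Gibbs-style coordinate-wise sampling of the UAV 3D position over an assumed feasible box, alternated with a placeholder resource-allocation update. The toy sum-rate model, the proportional allocation rule, the feasible box, and all function names are assumptions.

```python
import numpy as np

def sum_rate(uav_pos, bandwidth, power, users):
    """Hypothetical sum-rate model (toy free-space gain, placeholder only)."""
    d = np.linalg.norm(users - uav_pos, axis=1) + 1e-9
    snr = power / (d ** 2)
    return np.sum(bandwidth * np.log2(1.0 + snr))

def gibbs_position_update(pos, feasible_box, bandwidth, power, users, temp=1.0, n_cand=50):
    """Sample each UAV coordinate from a Boltzmann distribution over candidates
    restricted to the assumed feasible (localization-compliant) box."""
    pos = pos.copy()
    for k in range(3):                                    # x, y, z in turn
        cand = np.random.uniform(feasible_box[k][0], feasible_box[k][1], n_cand)
        rates = np.array([sum_rate(np.r_[pos[:k], c, pos[k + 1:]], bandwidth, power, users)
                          for c in cand])
        prob = np.exp((rates - rates.max()) / (temp * (rates.std() + 1e-12)))
        prob /= prob.sum()
        pos[k] = np.random.choice(cand, p=prob)
    return pos

def bcd_resource_update(pos, users, total_bw, total_pw):
    """Placeholder resource step: allocate bandwidth/power inversely to user distance."""
    d = np.linalg.norm(users - pos, axis=1)
    w = 1.0 / d
    w /= w.sum()
    return total_bw * w, total_pw * w

# Alternating GS / BCD loop on toy data
rng = np.random.default_rng(0)
users = rng.uniform(0, 1000, size=(8, 3)) * np.array([1, 1, 0])   # ground users at z = 0
feasible_box = [(0, 1000), (0, 1000), (50, 300)]                   # assumed feasible region
pos = np.array([500.0, 500.0, 100.0])
bw, pw = bcd_resource_update(pos, users, total_bw=20e6, total_pw=10.0)
for it in range(20):
    pos = gibbs_position_update(pos, feasible_box, bw, pw, users)
    bw, pw = bcd_resource_update(pos, users, total_bw=20e6, total_pw=10.0)
```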
Relay-enabled backscatter communication (BC) is an intriguing paradigm for alleviating the energy shortage and improving the throughput of Internet-of-Things (IoT) devices. Most existing works focus on resource allocation with unequal and continuous time allocation for the source-relay and relay-destination links. However, continuous time allocation may be infeasible since, in practice, time must be allocated in integer multiples of the subframe duration. In this article, we study a discrete-time scheme from the perspective of the frame structure, where one transmission block is divided into two phases and linear mapping is employed as a re-encoding method to determine the number of subframes in each phase and the power allocation for each subframe in a relay-enabled BC system. Based on this, we derive an accurate system-throughput expression and formulate a mixed-integer non-convex optimization problem to maximize the system throughput by jointly optimizing the power reflection coefficient (PRC) of the IoT node, the power allocation of the hybrid access point (HAP), and the linear mapping matrix, and solve it via a three-step approach. Accordingly, we propose a low-complexity iterative algorithm to obtain the throughput-maximizing resource allocation. Numerical results characterize the performance of the proposed algorithm, verify the superiority of the proposed scheme, and evaluate the impact of network parameters on the system throughput.
In this paper, we consider an intelligent reflecting surface (IRS) in a non-orthogonal multiple access (NOMA)-aided integrated sensing and multicast-unicast communication (ISMUC) system, where the multicast signal is used for both sensing and communications while the unicast signal is used only for communications. Our goal is to determine whether the IRS improves the performance of the NOMA-ISMUC system under imperfect/perfect successive interference cancellation (SIC). To this end, we formulate a non-convex problem to maximize the unicast rate while guaranteeing the minimum target illumination power and multicast rate. To solve this problem, we employ the Dinkelbach method to transform the original problem into an equivalent one, which is then solved via an alternating optimization algorithm and semidefinite relaxation (SDR) with sequential rank-one constraint relaxation (SROCR). Based on this, an iterative algorithm is devised to obtain a near-optimal solution. Computer simulations verify the fast convergence of the devised iterative algorithm and provide insightful results. Compared to NOMA-ISMUC without IRS, IRS-aided NOMA-ISMUC achieves a higher rate with perfect SIC and nearly the same rate with imperfect SIC.
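For reference, the Dinkelbach step mentioned above follows a standard pattern for fractional objectives: alternately solve a parametric subproblem max_x f(x) - λg(x) and update λ = f(x)/g(x) until f(x) - λg(x) ≈ 0. The sketch below shows this generic iteration on a toy one-dimensional ratio; the grid-search inner solver is a stand-in for the paper's alternating optimization with SDR/SROCR and is not the paper's implementation.

```python
import numpy as np

def dinkelbach(f, g, solve_subproblem, x0, tol=1e-6, max_iter=50):
    """Generic Dinkelbach iteration for maximizing the ratio f(x)/g(x), with g(x) > 0.

    solve_subproblem(lam) must return argmax_x f(x) - lam * g(x) over the feasible set
    (in the paper this role is played by alternating optimization + SDR/SROCR).
    """
    x = x0
    lam = f(x) / g(x)
    for _ in range(max_iter):
        x = solve_subproblem(lam)          # parametric (non-fractional) subproblem
        val = f(x) - lam * g(x)
        if abs(val) < tol:                 # optimality condition F(lam) = 0
            break
        lam = f(x) / g(x)                  # update the ratio estimate
    return x, lam

# Toy usage: maximize (2x + 1) / (x^2 + 1) over x in [0, 3] via grid search
grid = np.linspace(0.0, 3.0, 10001)
f = lambda x: 2 * x + 1
g = lambda x: x ** 2 + 1
solve = lambda lam: grid[np.argmax(f(grid) - lam * g(grid))]
x_star, lam_star = dinkelbach(f, g, solve, x0=0.0)
```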
Physical-layer authentication is a popular alternative to conventional key-based authentication for Internet-of-Things (IoT) devices due to their limited computational capacity and battery power. However, this approach has limitations: poor robustness under channel fluctuations, reconciliation overhead, and no clear safeguard distance to ensure the secrecy of the generated authentication keys. In this regard, we propose a novel, secure, and lightweight continuous authentication scheme for IoT devices. Our scheme utilizes the inherent properties of the IoT devices' transmission model as its source for seed generation and device authentication. Specifically, it provides continuous authentication by checking the access time slots and spreading sequences of the IoT devices instead of repeatedly generating and verifying shared keys. Consequently, access to a coherent key is not required, and the seed information remains concealed from attackers. Our proposed scheme demonstrates improved performance compared to benchmark schemes relying on the physical channel. Our empirical results show a nearly threefold decrease in the misdetection rate of illegitimate devices and a close-to-zero false alarm rate across system settings with up to 200 active devices and signal-to-noise ratios from 0 dB to 30 dB. Our scheme also incurs at most half the computational cost of the benchmark schemes based on support vector machines and binary hypothesis testing in our studies, further corroborating its practicality for IoT deployments.
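The slot-and-spreading-sequence check is only described at a high level above; as a hedged illustration of the idea, the toy sketch below accepts a transmission only if its slot index matches a seed-derived schedule and its chips correlate strongly with the seed-derived spreading sequence. The hopping rule, sequence construction, signal model, and threshold are all hypothetical and not the paper's exact construction.

```python
import numpy as np

def expected_slot(device_seed, frame_idx, n_slots):
    """Hypothetical slot-hopping rule derived from the shared seed."""
    rng = np.random.default_rng(device_seed + frame_idx)
    return rng.integers(n_slots)

def expected_sequence(device_seed, frame_idx, length=64):
    """Hypothetical per-frame +/-1 spreading sequence derived from the seed."""
    rng = np.random.default_rng((device_seed << 20) + frame_idx)
    return rng.choice([-1.0, 1.0], size=length)

def authenticate(rx_chips, slot_idx, device_seed, frame_idx, n_slots=32, thresh=0.5):
    """Accept the transmission only if both the slot and the spreading sequence match."""
    if slot_idx != expected_slot(device_seed, frame_idx, n_slots):
        return False
    seq = expected_sequence(device_seed, frame_idx, len(rx_chips))
    corr = np.abs(np.dot(rx_chips, seq)) / len(seq)       # normalized correlation
    return corr > thresh

# Legitimate device: transmits its own sequence (plus noise) in its own slot
seed, frame = 1234, 7
tx = expected_sequence(seed, frame) + 0.3 * np.random.default_rng(1).normal(size=64)
print(authenticate(tx, expected_slot(seed, frame, 32), seed, frame))   # True
```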
In this work, we consider the problem of distributed computing of functions of structured sources, focusing on the classical setting of two correlated sources and one user that seeks the outcome of the function while benefiting from low-rate side information provided by a helper node. For the case where the sources are jointly distributed according to a very general mixture model, we provide an achievable coding scheme that substantially reduces the communication cost of distributed computing by exploiting the nature of the joint distribution of the sources, the side information, and the symmetry enjoyed by the desired functions. Our scheme -- which can readily apply in a variety of real-life scenarios including learning, combinatorics, and graph neural network applications -- is shown to provide substantial reductions in the communication cost, while simultaneously providing computational savings by reducing the exponential complexity of joint decoding techniques to a complexity that is merely linear.
The use of vehicle-to-everything (V2X) communication is expected to significantly improve road safety and traffic management. We present an efficient protocol, called the AEE protocol, for protecting data authenticity and user privacy in V2X applications. Our protocol provides event-based linkability, which enables messages from a subject vehicle to be linked to a specific event in order to prevent Sybil attacks. Messages on different events remain unlinkable, preserving the long-term privacy of vehicles. Moreover, our protocol introduces a new method for generating temporary public keys to reduce computing and transmission overheads. Such a temporary public key is bound to a certain event and is automatically revoked when the event is over. We describe how to apply our protocol in vehicular communications using two exemplar use cases. To further reduce the real-time computational burden, our protocol allows the cryptographic operations to be decomposed into offline processes for the complex operations and real-time processes for the fast computations.
Conditional Monte Carlo, or pre-integration, is a powerful tool for reducing variance and improving the regularity of integrands when using Monte Carlo and quasi-Monte Carlo (QMC) methods. To select the variable to pre-integrate, one must consider both the variable's importance and the tractability of the conditional expectation. For integrals over a Gaussian distribution, any linear combination of variables can potentially be pre-integrated. Liu and Owen (2022) propose selecting the linear combination based on an active subspace decomposition of the integrand. However, pre-integration along the selected direction might be intractable. In this work, we address this issue by finding the active subspace subject to constraints under which pre-integration can be easily carried out. The proposed algorithm also provides a computationally efficient alternative to dimension reduction for pre-integrated functions. The method is applied to examples from computational finance, density estimation, and computational chemistry, and is shown to achieve smaller errors than previous methods.
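To make the pre-integration idea concrete, the toy sketch below pre-integrates a Gaussian integral with a discontinuous integrand along the direction that carries the discontinuity, leaving a smooth function of the remaining coordinates that is then integrated with scrambled Sobol' points. The specific integrand, the choice of direction, and the closed form are illustrative assumptions; in the paper the direction would instead come from the constrained active-subspace decomposition.

```python
import numpy as np
from scipy.stats import norm, qmc

# Toy target: I = E[exp(c^T x) * 1{w^T x > k}], x ~ N(0, I_d).
# The indicator makes the integrand discontinuous, which hurts plain (Q)MC.
d, k = 8, 0.5
rng = np.random.default_rng(0)
c = 0.3 * rng.normal(size=d)
w = rng.normal(size=d)

# Pre-integration direction: u = w/||w|| removes the discontinuity.
u = w / np.linalg.norm(w)
a, alpha = w @ u, c @ u                       # coefficients of the pre-integrated variable z

def preintegrated(y):
    """Closed form of E_z[exp(alpha*z + beta) * 1{a*z + b > k}], z ~ N(0, 1),
    with beta = c^T y and b = w^T y, evaluated at samples y orthogonal to u."""
    beta, b = y @ c, y @ w
    return np.exp(beta + 0.5 * alpha ** 2) * norm.cdf(alpha - (k - b) / a)

# Scrambled Sobol' points mapped to N(0, I_{d-1}) for the remaining coordinates,
# then rotated into the orthogonal complement of u.
n = 2 ** 12
z_rest = norm.ppf(qmc.Sobol(d - 1, scramble=True, seed=1).random(n))
Q, _ = np.linalg.qr(np.column_stack([u, rng.normal(size=(d, d - 1))]))
y = z_rest @ Q[:, 1:].T

estimate = preintegrated(y).mean()
exact = np.exp(0.5 * c @ c) * norm.cdf((c @ w - k) / np.linalg.norm(w))
print(estimate, exact)   # the two should agree to a few decimals
```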
Edge computing facilitates low-latency services at the network's edge by distributing computation, communication, and storage resources in geographic proximity to mobile and Internet-of-Things (IoT) devices. Recent advancements in Unmanned Aerial Vehicle (UAV) technologies have opened new opportunities for edge computing in military operations, disaster response, and remote areas where traditional terrestrial networks are limited or unavailable. In such environments, UAVs can be deployed as aerial edge servers or relays to facilitate edge computing services. This form of computing is also known as UAV-enabled Edge Computing (UEC), which offers several unique benefits such as mobility, line-of-sight, flexibility, computational capability, and cost-efficiency. However, the resources on UAVs, edge servers, and IoT devices are typically very limited in the context of UEC. Efficient resource management is, therefore, a critical research challenge in UEC. In this article, we present a survey of the existing research on UEC from the resource management perspective. We identify a conceptual architecture, different types of collaborations, wireless communication models, research directions, key techniques, and performance indicators for resource management in UEC. We also present a taxonomy of resource management in UEC. Finally, we identify and discuss open research challenges that can stimulate future research directions for resource management in UEC.
Maritime activities represent a major domain of economic growth, with several emerging maritime Internet of Things use cases such as smart ports, autonomous navigation, and ocean monitoring systems. The major enabler for this exciting ecosystem is the provision of broadband, low-delay, and reliable wireless coverage to the ever-increasing number of vessels, buoys, platforms, sensors, and actuators. Towards this end, the integration of unmanned aerial vehicles (UAVs) in maritime communications introduces an aerial dimension to wireless connectivity, going above and beyond current deployments, which mainly rely on shore-based base stations with limited coverage and satellite links with high latency. Considering the potential of UAV-aided wireless communications, this survey presents the state of the art in UAV-aided maritime communications, which, in general, are based on both conventional optimization and machine-learning-aided approaches. More specifically, relevant UAV-based network architectures are discussed together with the role of their building blocks. Then, physical-layer, resource management, and cloud/edge computing and caching UAV-aided solutions in maritime environments are discussed and grouped according to their performance targets. Moreover, as UAVs are characterized by flexible deployment with high re-positioning capabilities, studies on UAV trajectory optimization for maritime applications are thoroughly discussed. In addition, aiming at shedding light on the current status of real-world deployments, experimental studies on UAV-aided maritime communications are presented and implementation details are given. Finally, several important open issues in the area of UAV-aided maritime communications are discussed in relation to the integration of sixth-generation (6G) advancements.
Effective multi-robot teams require the ability to move to goals in complex environments in order to address real-world applications such as search and rescue. Multi-robot teams should be able to operate in a completely decentralized manner, with individual robot team members capable of acting without explicit communication between neighbors. In this paper, we propose a novel game-theoretic model that enables decentralized and communication-free navigation to a goal position. Each robot plays its own distributed game by estimating the behavior of its local teammates in order to identify actions that move it toward the goal while avoiding obstacles and maintaining team cohesion without collisions. We prove theoretically that the generated actions approach a Nash equilibrium, which also corresponds to an optimal strategy for each robot. We show through extensive simulations that our approach enables decentralized and communication-free navigation by a multi-robot system to a goal position, and is able to avoid obstacles and collisions, maintain connectivity, and respond robustly to sensor noise.
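The abstract does not spell out the per-robot game, so the sketch below illustrates the general flavor with a simple best-response dynamic: each robot scores a discrete set of candidate moves against the current (predicted) positions of its neighbors using a utility that trades off goal progress, collision avoidance, and cohesion, then applies the best move. The utility weights and candidate-action set are hypothetical, and this simplified dynamic does not carry the paper's Nash-convergence guarantee.

```python
import numpy as np

def utility(pos_i, cand, goal, neighbors, w_goal=1.0, w_coll=2.0, w_coh=0.3, d_safe=0.5):
    """Hypothetical per-robot utility: goal progress + collision penalty + cohesion term."""
    new = pos_i + cand
    goal_term = -np.linalg.norm(new - goal)
    dists = np.linalg.norm(neighbors - new, axis=1)
    coll_term = -np.sum(np.maximum(0.0, d_safe - dists) ** 2)
    coh_term = -np.linalg.norm(new - neighbors.mean(axis=0))
    return w_goal * goal_term + w_coll * coll_term + w_coh * coh_term

def best_response(pos_i, goal, neighbors, step=0.2, n_dirs=16):
    """Evaluate a discrete set of moves against predicted neighbor positions
    (here: their current positions) and pick the utility-maximizing one."""
    angles = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    cands = np.vstack([np.zeros(2), step * np.column_stack([np.cos(angles), np.sin(angles)])])
    vals = [utility(pos_i, c, goal, neighbors) for c in cands]
    return cands[int(np.argmax(vals))]

# Toy run: 4 robots converging on a goal without explicit communication
rng = np.random.default_rng(0)
pos = rng.uniform(-3, 0, size=(4, 2))
goal = np.array([5.0, 5.0])
for t in range(100):
    moves = [best_response(pos[i], goal, np.delete(pos, i, axis=0)) for i in range(4)]
    pos = pos + np.array(moves)
```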
The U-Net was presented in 2015. With its straightforward and successful architecture, it quickly evolved into a commonly used benchmark in medical image segmentation. The adaptation of the U-Net to novel problems, however, involves several degrees of freedom regarding the exact architecture, preprocessing, training, and inference. These choices are not independent of each other and substantially impact the overall performance. The present paper introduces the nnU-Net ('no-new-Net'), a robust and self-adapting framework based on 2D and 3D vanilla U-Nets. We make a strong case for removing the superfluous bells and whistles of many proposed network designs and instead focusing on the remaining aspects that determine the performance and generalizability of a method. We evaluate the nnU-Net in the context of the Medical Segmentation Decathlon challenge, which measures segmentation performance in ten disciplines comprising distinct entities, image modalities, image geometries, and dataset sizes, with no manual adjustments between datasets allowed. At the time of manuscript submission, nnU-Net achieves the highest mean Dice scores across all classes and seven phase 1 tasks (except class 1 in BrainTumour) in the online leaderboard of the challenge.
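As a point of reference for the "vanilla U-Net" backbone mentioned above, here is a minimal 2D encoder-decoder with skip connections in PyTorch. It only illustrates the basic U-Net topology (two convolution stages per resolution, instance normalization, leaky ReLU); the depth, channel counts, and all other hyperparameters are arbitrary placeholders and do not reflect nnU-Net's automatic configuration.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3 conv + instance-norm + LeakyReLU layers per resolution stage."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.InstanceNorm2d(c_out), nn.LeakyReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.InstanceNorm2d(c_out), nn.LeakyReLU(inplace=True),
    )

class TinyUNet2D(nn.Module):
    """Minimal 2D encoder-decoder with skip connections (illustrative, not nnU-Net itself)."""
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, base), block(base, base * 2)
        self.bottleneck = block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                  # full resolution
        e2 = self.enc2(self.pool(e1))                      # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                 # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                               # per-pixel class logits

logits = TinyUNet2D()(torch.randn(1, 1, 128, 128))         # -> shape (1, 2, 128, 128)
```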