
Integrated sensing, computation, and communication (ISCC) has recently been considered a promising technique for beyond-5G systems. In ISCC systems, the competition for communication and computation resources between sensing tasks for ambient intelligence and computation tasks from mobile devices becomes an increasingly challenging issue. To address it, we first propose an efficient sensing framework with a novel action detection module. In this module, a threshold is used to detect whether the sensing target is static, so that the sensing overhead can be reduced. We then mathematically analyze the sensing performance of the proposed framework and theoretically prove its effectiveness with the help of the sampling theorem. Based on the sensing performance models, we formulate a sensing performance maximization problem that guarantees the quality-of-service (QoS) requirements of the tasks. To solve it, we propose an optimal resource allocation strategy in which the minimum resource is allocated to computation tasks and the rest is devoted to the sensing task. In addition, a threshold selection policy is derived, and the results further demonstrate the necessity of the proposed sensing framework. Finally, a real-world test of action recognition tasks based on a USRP B210 is conducted to verify the sensing performance analysis. Extensive experiments demonstrate the performance improvement of our proposal over several benchmark schemes.
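As a minimal sketch of the idea behind the action detection module, the snippet below thresholds a simple motion statistic (here, the temporal variance of CSI amplitudes over a short window) to decide whether the target is static and the downstream sensing pipeline can be skipped. The statistic, window shape, and threshold value are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def is_static(csi_window: np.ndarray, threshold: float) -> bool:
    """Declare the sensing target static when the temporal variance of the
    CSI amplitude stays below a threshold (hypothetical motion statistic)."""
    amplitude = np.abs(csi_window)              # shape: (time, subcarriers)
    motion_stat = amplitude.var(axis=0).mean()  # average per-subcarrier variance
    return motion_stat < threshold

# Usage: skip the costly recognition pipeline for static windows.
rng = np.random.default_rng(0)
window = rng.normal(size=(100, 30)) * 0.01 + 1.0  # near-constant channel
if is_static(window, threshold=1e-3):
    print("target static: skip inference, save compute")
```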

Related Content

The advent of 5G New Radio (NR) technology has revolutionized the landscape of wireless communication, offering enhancements such as elevated system capacity, improved spectrum efficiency, and higher data transmission rates. To achieve these benefits, 5G adopts the Ultra-Dense Network (UDN) architecture, characterized by the deployment of numerous small next-generation Node B (gNB) units. While this approach boosts system capacity and frequency reuse, it also raises concerns such as increased signal interference, longer handover times, and higher handover failure rates. To address these challenges, the critical Time to Trigger (TTT) parameter in handover management must be accurately determined. Furthermore, the density of gNBs has a significant impact on handover performance. This study provides a comprehensive analysis of 5G handover management. Through the development and use of a downlink system-level simulator, the effects of various TTT values and gNB densities on 5G handover were evaluated, taking into consideration the movement of Traffic Users (TUs) with varying velocities. Simulation results showed that handover performance can be optimized by adjusting the TTT under different gNB densities, providing valuable insights into the proper selection of TTT, UDN density, and TU velocity to enhance 5G handover performance.
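To make the role of TTT concrete, here is a hedged sketch of an A3-style handover check: the handover fires only after the neighbor cell's RSRP exceeds the serving cell's RSRP plus a hysteresis margin for a full TTT window. The sample-based timer, hysteresis value, and RSRP traces are illustrative; real NR measurement events add offsets, filtering, and further conditions.

```python
def handover_triggered(serving_rsrp, neighbor_rsrp, hysteresis_db=3.0,
                       ttt_steps=4):
    """Return the first sample index at which an A3-style handover fires:
    neighbor > serving + hysteresis held for ttt_steps consecutive samples."""
    run = 0
    for t, (s, n) in enumerate(zip(serving_rsrp, neighbor_rsrp)):
        run = run + 1 if n > s + hysteresis_db else 0  # TTT timer in samples
        if run >= ttt_steps:
            return t
    return None  # condition never held long enough

serving = [-80, -82, -84, -86, -88, -90]   # UE moving away from serving gNB
neighbor = [-85, -83, -80, -81, -82, -83]  # approaching a neighbor gNB
print(handover_triggered(serving, neighbor))  # fires once condition persists
```

A larger `ttt_steps` suppresses ping-pong handovers at the cost of later triggering, which is exactly the trade-off the simulator sweeps across gNB densities and TU velocities.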

An assumption often used by researchers to model interference in a wireless network is the unit disk graph model. While many theoretical results and performance guarantees have been obtained under this model, an open research direction is to extend these results to hypergraph interference models. Motivated by recent results showing that the worst-case performance of the distributed maximal scheduling algorithm is characterized by the interference degree of the hypergraph, in the present work we investigate properties of the interference degree of the hypergraph and the structure of hypergraphs arising from physical constraints. We show that the problem of computing the interference degree of a hypergraph is NP-hard, and we prove some properties and results concerning this hypergraph invariant. We investigate which hypergraphs are realizable, i.e., which hypergraphs arise in practice, based on physical constraints, as the interference model of a wireless network. In particular, a question that arises naturally is: what is the maximal value of $r$ such that the hypergraph $K_{1,r}$ is realizable? We determine this quantity for various values of the path loss exponent of signal propagation. We also investigate hypergraphs generated by line networks.
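Since the exact definition is paper-specific, the brute-force sketch below assumes one common notion of interference degree: for each link, the largest set of its conflicting neighbors that can still be scheduled together, maximized over links. Its exponential running time is consistent with the NP-hardness result, and the star hypergraph in the example mirrors the $K_{1,r}$ realizability question.

```python
from itertools import combinations

def is_feasible(subset, hyperedges):
    """A set of links is feasible iff it contains no hyperedge
    (hyperedges = minimal sets of links that conflict jointly)."""
    s = set(subset)
    return not any(e <= s for e in hyperedges)

def interference_degree(vertices, hyperedges):
    """Brute force under an assumed definition: for each link v, the largest
    feasible set among links that conflict with v; maximize over v."""
    best = 0
    for v in vertices:
        # links appearing together with v in some hyperedge
        nbrs = {u for e in hyperedges if v in e for u in e} - {v}
        for k in range(len(nbrs), 0, -1):
            if any(is_feasible(c, hyperedges) for c in combinations(nbrs, k)):
                best = max(best, k)
                break
    return best

# Star hypergraph K_{1,3}: centre link 0 conflicts pairwise with 1, 2, 3.
edges = [frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3})]
print(interference_degree([0, 1, 2, 3], edges))  # 3
```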

In this paper, we consider an intelligent reflecting surface (IRS) in a non-orthogonal multiple access (NOMA)-aided Integrated Sensing and Multicast-Unicast Communication (ISMUC) system, where the multicast signal is used for both sensing and communications while the unicast signal is used only for communications. Our goal is to determine whether the IRS improves the performance of the NOMA-ISMUC system under imperfect/perfect successive interference cancellation (SIC). To this end, we formulate a non-convex problem that maximizes the unicast rate while ensuring the minimum target illumination power and multicast rate. To solve this problem, we employ the Dinkelbach method to transform the original problem into an equivalent one, which is then solved via an alternating optimization algorithm and semidefinite relaxation (SDR) with Sequential Rank-One Constraint Relaxation (SROCR). Based on this, an iterative algorithm is devised to obtain a near-optimal solution. Computer simulations verify the quick convergence of the devised iterative algorithm and provide insightful results. Compared to NOMA-ISMUC without IRS, IRS-aided NOMA-ISMUC achieves a higher rate with perfect SIC but almost the same rate under imperfect SIC.
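The outer loop of this approach can be illustrated by a generic Dinkelbach iteration for maximizing a ratio $f(x)/g(x)$: repeatedly solve the parameterized problem $\max_x f(x) - \lambda g(x)$ and update $\lambda$ until the optimal value of the auxiliary problem reaches zero. In the paper each inner subproblem would be handled via alternating optimization with SDR/SROCR; the grid-search stand-in below only illustrates the fractional-programming shell.

```python
import numpy as np

def dinkelbach(f, g, argmax_aux, lam=0.0, tol=1e-9, max_iter=50):
    """Generic Dinkelbach loop for max f(x)/g(x) with g(x) > 0:
    solve x = argmax f(x) - lam*g(x), then update lam = f(x)/g(x);
    at the fixed point, lam equals the optimal ratio."""
    for _ in range(max_iter):
        x = argmax_aux(lam)
        if abs(f(x) - lam * g(x)) < tol:
            return x, lam
        lam = f(x) / g(x)
    return x, lam

# Toy fractional program: maximize (1 - (x - 2)^2) / (1 + x^2) on a grid.
grid = np.linspace(-3.0, 3.0, 6001)
f = lambda x: 1.0 - (x - 2.0) ** 2
g = lambda x: 1.0 + x ** 2
x_star, ratio = dinkelbach(
    f, g, lambda lam: grid[np.argmax(f(grid) - lam * g(grid))])
print(x_star, ratio)  # lam converges to the optimal ratio
```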

Physical-layer authentication is a popular alternative to conventional key-based authentication for Internet of Things (IoT) devices due to their limited computational capacity and battery power. However, this approach has limitations owing to poor robustness under channel fluctuations, reconciliation overhead, and the lack of a clear safeguard distance to ensure the secrecy of the generated authentication keys. In this regard, we propose a novel, secure, and lightweight continuous authentication scheme for IoT device authentication. Our scheme utilizes the inherent properties of the IoT devices' transmission model as its source for seed generation and device authentication. Specifically, our proposed scheme provides continuous authentication by checking the access time slots and spreading sequences of the IoT devices instead of repeatedly generating and verifying shared keys. Consequently, access to a coherent key is not required in our proposed scheme, which conceals the seed information from attackers. Our proposed authentication scheme demonstrates improved performance compared to benchmark schemes relying on the physical channel. Our empirical results show a nearly threefold decrease in the misdetection rate of illegitimate devices and a close-to-zero false alarm rate across system settings with up to 200 active devices and signal-to-noise ratios from 0 dB to 30 dB. Our proposed scheme also incurs at most half the computational cost of the benchmark schemes based on support vector machines and binary hypothesis testing in our studies, which further corroborates its practicality for IoT deployments.
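A hedged sketch of the core check: a device is accepted only if the signal observed in its assigned time slot correlates strongly with its assigned spreading sequence. In the paper both quantities are derived from a concealed seed; the slot layout, correlation statistic, and threshold below are illustrative assumptions.

```python
import numpy as np

def authenticate(rx, expected_seq, slot, corr_threshold=0.8):
    """Accept a device when the signal in its assigned slot correlates
    strongly with its assigned spreading sequence (both seed-derived in
    the paper; slot layout and threshold here are assumptions)."""
    n = len(expected_seq)
    chunk = rx[slot * n:(slot + 1) * n]
    corr = np.abs(chunk @ expected_seq) / (
        np.linalg.norm(chunk) * np.linalg.norm(expected_seq) + 1e-12)
    return corr >= corr_threshold

rng = np.random.default_rng(1)
seq = rng.choice([-1.0, 1.0], size=64)                # device's spreading code
frame = rng.normal(scale=0.3, size=64 * 4)            # noise in other slots
frame[64:128] = seq + rng.normal(scale=0.3, size=64)  # device active in slot 1
print(authenticate(frame, seq, slot=1))  # True: right slot and sequence
print(authenticate(frame, seq, slot=2))  # False: wrong slot -> reject
```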

In the present era of advanced technology, the Internet of Things (IoT) plays a crucial role in enabling smart connected environments, spanning domains such as smart homes, smart healthcare, smart cities, and smart vehicles. With ubiquitous smart connected devices and systems, the large amount of data associated with them is at prime risk from malicious entities (e.g., users, devices, applications) in these systems. Innovative technologies, including cloud computing, Machine Learning (ML), and data analytics, support the development of anomaly detection models for the Vehicular Internet of Things (V-IoT), which encompasses collaborative automatic driving and enhanced transportation systems. However, traditional centralized anomaly detection models fail to provide adequate services for connected vehicles due to issues such as high latency, privacy leakage, performance overhead, and model drift. Recently, Federated Learning (FL) has gained significant recognition for its ability to address data privacy concerns in the IoT domain. Digital Twin (DT) technology proves beneficial in addressing uncertain crises and data security issues by creating a virtual replica that simulates factors including traffic trajectories, city policies, and vehicle utilization. However, the effectiveness of a V-IoT DT system relies heavily on the collection of long-term, high-quality data to make appropriate decisions. This paper introduces a Hierarchical Federated Learning (HFL) based anomaly detection model for V-IoT, aiming to enhance detection accuracy. Our proposed model integrates the DT and HFL approaches to create a comprehensive system for detecting malicious activities. Additionally, real-world V-IoT use case scenarios are presented to demonstrate the application of the proposed model.
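A minimal sketch of the two-level aggregation underlying HFL, assuming vehicles report to edge aggregators that in turn report to a cloud server: each level performs a size-weighted FedAvg. The flat parameter vectors and cluster layout are illustrative; the paper's model trains anomaly detection networks.

```python
import numpy as np

def fedavg(models, sizes):
    """Size-weighted average of model parameter vectors (FedAvg)."""
    return np.average(np.stack(models), axis=0, weights=np.asarray(sizes, float))

def hierarchical_round(clusters):
    """One HFL round sketch: vehicles -> edge aggregators -> cloud.
    clusters = list of (client_models, client_data_sizes) per edge node."""
    edge_models, edge_sizes = [], []
    for client_models, client_sizes in clusters:
        edge_models.append(fedavg(client_models, client_sizes))  # edge step
        edge_sizes.append(sum(client_sizes))
    return fedavg(edge_models, edge_sizes)                       # cloud step

rng = np.random.default_rng(2)
clusters = [([rng.normal(size=4) for _ in range(3)], [100, 50, 150]),
            ([rng.normal(size=4) for _ in range(2)], [200, 80])]
print(hierarchical_round(clusters))  # global model after one round
```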

Ultra-reliable low-latency communication (URLLC) service is envisioned to enable use cases with strict reliability and latency requirements in 5G. One approach for enabling URLLC services is to leverage Reinforcement Learning (RL) to efficiently allocate wireless resources. However, with conventional RL methods, the decision variables (though deployed at various network layers) are typically optimized in the same control loop, leading to significant practical limitations on the control loop's delay as well as excessive signaling and energy consumption. In this paper, we propose a multi-agent Hierarchical RL (HRL) framework that enables the implementation of multi-level policies with different control-loop timescales. Agents with faster control loops are deployed closer to the base station, while those with slower control loops sit at the edge or closer to the core network, providing high-level guidelines for low-level actions. On a use case from the prior art, we use our HRL framework to optimize the maximum number of retransmissions and the transmission power of industrial devices. Our extensive simulation results on the factory automation scenario show that the HRL framework achieves better performance than the baseline single-agent RL method, with significantly less signaling overhead and delay than single-agent RL methods.
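The two-timescale structure can be sketched as follows: a slow high-level agent updates a guideline (here, a power budget) once every K fast steps, while a fast low-level agent picks the number of retransmissions each step within that guideline. The policies, reward, and channel model below are toy stand-ins for the trained RL agents in the paper.

```python
import random

def slow_policy(avg_reward):
    """High-level agent (edge/core): picks a power budget from coarse
    options; a random stand-in for a trained slow-timescale policy."""
    return random.choice([10, 15, 20]) if avg_reward < 0.9 else 10

def fast_policy(power_budget, channel):
    """Low-level agent (near the BS): picks max retransmissions per step,
    constrained by the current high-level guideline."""
    return min(4, 1 + int(channel < 0.5) + int(power_budget < 15))

K = 8  # slow loop acts once every K fast steps (assumed timescale ratio)
power, rewards = 10, []
for t in range(32):
    if t % K == 0:                          # slow control loop
        avg = sum(rewards[-K:]) / K if rewards else 0.0
        power = slow_policy(avg)
    channel = random.random()               # toy channel quality
    retx = fast_policy(power, channel)      # fast control loop
    success = channel > 0.3 / retx          # more retx -> higher reliability
    rewards.append(1.0 if success else 0.0)
print(f"reliability: {sum(rewards) / len(rewards):.2f}")
```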

Mobile robots navigating outdoor environments frequently encounter undesired traces left by dynamic objects, which manifest as obstacles on the map and impede accurate localization and effective navigation. To tackle this problem, a novel map construction framework based on a 3D region-wise hash map structure (RH-Map) is proposed, consisting of a front-end scan fresher and a back-end removal module, which realizes real-time map construction and online dynamic object removal (DOR). First, a two-layer 3D region-wise hash map structure for map management is proposed for effective online DOR. Then, in the scan fresher, region-wise ground plane estimation (R-GPE) is adopted to estimate and preserve ground information, and Scan-to-Map Removal (S2M-R) is proposed to discriminate and remove dynamic regions. Moreover, a lightweight back-end removal module that maintains keyframes is proposed for further DOR. As experimentally verified on SemanticKITTI, our proposed framework yields promising online DOR performance during map construction compared with state-of-the-art methods. We also validate the proposed framework in real-world environments.
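A minimal sketch of a region-wise hash map, assuming regions are fixed-size cubes addressed by integer keys: points hash into region cells, and a stand-in for Scan-to-Map Removal drops whole regions that a new scan observes as free space. The region size and visibility test are deliberately simplified relative to the paper's two-layer design.

```python
from collections import defaultdict

REGION = 2.0  # region edge length in metres (assumed)

def region_key(p):
    """Integer hash key of the region containing point p = (x, y, z)."""
    return tuple(int(c // REGION) for c in p)

class RHMap:
    """Region-wise hash map sketch: regions are hashed cells holding points;
    remove_dynamic is a simplified stand-in for Scan-to-Map Removal."""
    def __init__(self):
        self.regions = defaultdict(list)

    def insert(self, points):
        for p in points:
            self.regions[region_key(p)].append(p)

    def remove_dynamic(self, free_space_keys):
        for k in free_space_keys:
            self.regions.pop(k, None)  # region seen through -> dynamic

m = RHMap()
m.insert([(0.5, 0.5, 0.0), (4.2, 0.1, 0.0)])     # static wall + parked car
m.remove_dynamic({region_key((4.2, 0.1, 0.0))})  # car drove away
print(list(m.regions))  # only the static region remains
```

Hashing whole regions rather than individual voxels is what keeps removal cheap enough to run online: one lookup retires an entire dynamic trace.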

Edge computing facilitates low-latency services at the network's edge by distributing computation, communication, and storage resources in the geographic proximity of mobile and Internet-of-Things (IoT) devices. Recent advancements in Unmanned Aerial Vehicle (UAV) technologies have opened new opportunities for edge computing in military operations, disaster response, and remote areas where traditional terrestrial networks are limited or unavailable. In such environments, UAVs can be deployed as aerial edge servers or relays to facilitate edge computing services. This form of computing is also known as UAV-enabled Edge Computing (UEC), which offers several unique benefits such as mobility, line-of-sight connectivity, flexibility, computational capability, and cost-efficiency. However, the resources on UAVs, edge servers, and IoT devices are typically very limited in the UEC context. Efficient resource management is therefore a critical research challenge in UEC. In this article, we present a survey of existing research on UEC from the resource management perspective. We identify a conceptual architecture, types of collaboration, wireless communication models, research directions, key techniques, and performance indicators for resource management in UEC, and we present a taxonomy of resource management in UEC. Finally, we identify and discuss open research challenges that can stimulate future research on resource management in UEC.

Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing, and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their application on resource-constrained devices such as mobile phones and Internet of Things (IoT) devices. Methods and techniques that can lift this efficiency bottleneck while preserving the high accuracy of DNNs are therefore in great demand for enabling numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems, and applications. We start by introducing popular model compression methods, including pruning, factorization, and quantization, as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video, and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce efficient deep learning system design from both software and hardware perspectives.
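As a concrete taste of two compression primitives covered here, the sketch below applies unstructured magnitude pruning followed by symmetric int8 quantization to a weight matrix; the sparsity target and quantization scheme are illustrative defaults rather than a recommended recipe.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights
    (unstructured pruning); sparsity=0.5 keeps the largest 50%."""
    k = int(w.size * sparsity)
    thresh = np.partition(np.abs(w).ravel(), k)[k] if k else -np.inf
    return np.where(np.abs(w) >= thresh, w, 0.0)

def quantize_int8(w):
    """Symmetric uniform quantization of weights to int8 plus a scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.default_rng(3).normal(size=(4, 4))
w_sparse = magnitude_prune(w, sparsity=0.5)
q, s = quantize_int8(w_sparse)
print((w_sparse == 0).mean(),            # achieved sparsity
      np.abs(q * s - w_sparse).max())    # worst-case quantization error
```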

Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide computing infrastructure, enabling developers to quickly develop and deploy edge applications. Edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization in edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.
