
Unmanned aerial vehicle (UAV) swarm-enabled edge computing is envisioned to be promising in sixth generation (6G) wireless communication networks due to its wide application scenarios and flexible deployment. However, most existing works focus on edge computing enabled by a single UAV or a small number of UAVs, which is very different from UAV swarm-enabled edge computing. To facilitate the practical application of UAV swarm-enabled edge computing, this article presents the state-of-the-art research in this area. The potential applications, architectures, and implementation considerations are illustrated. Moreover, the promising enabling technologies for UAV swarm-enabled edge computing are discussed. Furthermore, we outline challenges and open issues to shed light on future research directions.

Related Content

Edge computing is a distributed computing architecture that moves the computation of applications, data, and services from central network nodes to nodes at the logical edge of the network [1]. Edge computing decomposes large services that were originally handled entirely by central nodes into smaller, more manageable parts, which are distributed to edge nodes for processing. Edge nodes are closer to user terminal devices, which speeds up data processing and transmission and reduces latency. Under this architecture, data analysis and knowledge generation occur closer to the data source, making it more suitable for processing big data.

As mobile edge computing (MEC) finds widespread use for relieving the computational burden of compute- and interaction-intensive applications on end user devices, understanding the resulting delay and cost performance is drawing significant attention. While most existing works focus on single-task offloading in single-hop MEC networks, next generation applications (e.g., industrial automation, augmented/virtual reality) require advanced models and algorithms for dynamic configuration of multi-task services over multi-hop MEC networks. In this work, we leverage recent advances in dynamic cloud network control to provide a comprehensive study of the performance of multi-hop MEC networks, addressing the key problems of multi-task offloading, timely packet scheduling, and joint computation and communication resource allocation. We present a fully distributed algorithm based on Lyapunov control theory that achieves throughput-optimal performance with delay and cost guarantees. Simulation results validate our theoretical analysis and provide insightful guidelines on the interplay between communication and computation resources in MEC networks.
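To illustrate the flavor of Lyapunov-based control referred to above, the following is a minimal sketch of a drift-plus-penalty decision at a single MEC node, where the current queue backlog weights the choice between idling, local processing, and offloading. It is a generic textbook-style illustration, not the distributed algorithm proposed in the paper; all parameter values and action names are hypothetical.

```python
# Minimal sketch of a drift-plus-penalty decision at one MEC node.
# Hypothetical parameters; not the algorithm proposed in the paper.

def drift_plus_penalty_decision(queue_backlog, actions, V=10.0):
    """Pick the action minimizing V*cost - backlog*served.

    queue_backlog: current task-queue length (packets).
    actions: list of dicts with 'name', 'served' (packets drained), 'cost'.
    V: Lyapunov trade-off parameter (larger V favors low cost over low delay).
    """
    best, best_score = None, float("inf")
    for a in actions:
        # Drift term ~ -Q * served; penalty term ~ V * cost.
        score = V * a["cost"] - queue_backlog * a["served"]
        if score < best_score:
            best_score, best = score, a
    return best


# Example: choose between idling, local computing, and offloading one hop.
actions = [
    {"name": "idle", "served": 0, "cost": 0.0},
    {"name": "local_compute", "served": 2, "cost": 1.5},    # CPU energy
    {"name": "offload_next_hop", "served": 4, "cost": 2.0}, # transmission energy
]
print(drift_plus_penalty_decision(queue_backlog=8, actions=actions)["name"])
```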

Extremely large antenna arrays (ELAAs) are a common feature of several key candidate technologies for 6G, such as ultra-massive multiple-input-multiple-output (UM-MIMO), cell-free massive MIMO, reconfigurable intelligent surfaces (RISs), and terahertz communications. Since the number of antennas in an ELAA is very large, near-field communications will become essential in 6G wireless networks. In this article, we systematically investigate emerging near-field communication techniques. First, the fundamentals of near-field communications are explained, and the metric used to determine the near-field range in typical communication scenarios is introduced. Then, we investigate recent studies on near-field communication techniques by classifying them into two categories, i.e., techniques addressing the challenges and those exploiting the potentials of near-field regions. Their principles, recent progress, pros, and cons are discussed. More importantly, several open problems and future research directions for near-field communications are pointed out. We believe this article will inspire more innovations on the important research topic of near-field communications for 6G.
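For reference, the classical metric used to delineate the near-field (Fresnel) region from the far field is the Rayleigh distance, which grows quadratically with the array aperture; the expression below is the standard textbook form rather than a result specific to this article.

```latex
d_{\mathrm{Rayleigh}} = \frac{2D^{2}}{\lambda}
```

Here, D is the array aperture and \lambda the carrier wavelength; receivers located closer than d_Rayleigh experience spherical rather than planar wavefronts, which is why ELAAs at high carrier frequencies push typical users into the near field.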

Despite its technological benefits, the Internet of Things (IoT) has cyber weaknesses due to vulnerabilities in the wireless medium. Machine learning (ML)-based methods are widely used against cyber threats in IoT networks with promising performance. Advanced persistent threats (APTs) are a prominent means for cybercriminals to compromise networks, and they are particularly dangerous because of their long-term and harmful characteristics. However, it is difficult for ML-based approaches to identify APT attacks with promising detection performance because APT traffic makes up an extremely small percentage of overall traffic. Few surveys fully investigate APT attacks in IoT networks, partly due to the lack of public datasets covering all types of APT attacks. It is therefore worthwhile to bridge the state of the art in network attack detection with APT attack detection in a comprehensive review article. This survey article reviews the security challenges in IoT networks and presents well-known attacks, APT attacks, and threat models in IoT systems. Meanwhile, signature-based, anomaly-based, and hybrid intrusion detection systems are summarized for IoT networks. The article highlights statistical insights regarding frequently applied ML-based methods against network intrusion, alongside the number of attack types detected. Finally, open issues and challenges for common network intrusion and APT attacks are presented for future research.
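The class-imbalance problem mentioned above (APT flows being a tiny fraction of traffic) is the main practical obstacle for ML-based detectors. The sketch below is a generic illustration of one common mitigation, class re-weighting, using synthetic data; it is not drawn from the survey or from any IoT dataset, and all names and sizes are hypothetical.

```python
# Generic illustration of the class-imbalance issue: APT flows are ~1% of
# traffic, so a detector needs re-weighting (or resampling) to avoid
# collapsing to the majority class. Synthetic features only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_benign, n_apt = 9900, 100                       # ~1% APT traffic
X = np.vstack([rng.normal(0.0, 1.0, (n_benign, 8)),
               rng.normal(0.8, 1.0, (n_apt, 8))]) # shifted APT features
y = np.array([0] * n_benign + [1] * n_apt)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" up-weights the rare APT class during training.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["benign", "apt"]))
```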

Multi-camera vehicle tracking is one of the most complicated tasks in computer vision, as it involves distinct subtasks including vehicle detection, tracking, and re-identification. Despite the challenges, multi-camera vehicle tracking has immense potential in transportation applications, including speed, volume, origin-destination (O-D), and routing data generation. Several recent works have addressed the multi-camera tracking problem. However, most of the effort has gone towards improving accuracy on high-quality benchmark datasets while disregarding lower camera resolutions, compression artifacts, and the overwhelming computational power and time needed to carry out this task at the edge, which makes it prohibitive for large-scale and real-time deployment. Therefore, in this work we shed light on practical issues that should be addressed in the design of a multi-camera tracking system to provide actionable and timely insights. Moreover, we propose a real-time, city-scale multi-camera vehicle tracking system that compares favorably to computationally intensive alternatives and handles real-world, low-resolution CCTV instead of idealized and curated video streams. To show its effectiveness, in addition to integration into the Regional Integrated Transportation Information System (RITIS), we participated in the 2021 NVIDIA AI City multi-camera tracking challenge, and our method ranked among the top five performers on the public leaderboard.
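As a minimal illustration of one building block of such a pipeline, the sketch below shows a generic cross-camera association step: re-identification embeddings from two cameras are matched by cosine distance with Hungarian assignment. This is not the system described above; the embeddings, threshold, and dimensions are hypothetical.

```python
# Generic cross-camera re-ID association: match appearance embeddings from
# two cameras by cosine distance with Hungarian assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracks(emb_cam_a, emb_cam_b, max_cosine_dist=0.4):
    """Match L2-normalized appearance embeddings across two cameras."""
    a = emb_cam_a / np.linalg.norm(emb_cam_a, axis=1, keepdims=True)
    b = emb_cam_b / np.linalg.norm(emb_cam_b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T                       # cosine distance matrix
    rows, cols = linear_sum_assignment(cost)   # Hungarian matching
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_cosine_dist]

# Example with random 128-d embeddings for 3 and 4 tracks.
rng = np.random.default_rng(1)
print(associate_tracks(rng.normal(size=(3, 128)), rng.normal(size=(4, 128))))
```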

Microservices have become the de facto software architecture for cloud-native applications. A contentious architectural decision in microservices is whether to compose them using choreography or orchestration. In choreography, every service works independently, whereas in orchestration there is a controller that coordinates service interactions. This paper makes a case for orchestration. The promise of microservices is that each microservice can be independently developed, deployed, tested, upgraded, and scaled, which makes them suitable for systems running on cloud infrastructures. However, microservice-based systems become complicated due to the complex interactions of various services, concurrent events, failing components, developers' lack of a global view, and configurations of the environment. This makes maintaining and debugging such systems very challenging. We hypothesize that orchestrated services are easier to debug, and to test this we ported the largest publicly available microservices benchmark, TrainTicket, which is implemented using choreography, to Temporal, a fault-oblivious stateful workflow framework. We report our experience in porting the code from a traditional choreographed microservice architecture to one orchestrated by Temporal and present our initial findings on the time to debug the 22 bugs present in the benchmark. Our findings suggest that the effort of transitioning to an orchestrated approach is worthwhile, making the ported code easier to debug.
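To make the orchestration idea concrete, the sketch below shows a tiny workflow using Temporal's Python SDK (temporalio): a single workflow acts as the orchestrator, sequencing activity calls while Temporal persists state across failures. The booking activities are hypothetical placeholders, not TrainTicket code (TrainTicket is a Java system), and a worker/client setup is omitted for brevity.

```python
# Hedged sketch of orchestration with Temporal's Python SDK (temporalio).
# The activities below are hypothetical placeholders, not TrainTicket code.
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def reserve_seat(order_id: str) -> str:
    return f"seat-for-{order_id}"

@activity.defn
async def charge_payment(order_id: str) -> str:
    return f"payment-for-{order_id}"

@workflow.defn
class BookTicketWorkflow:
    """The workflow is the single orchestrator: it sequences the service
    calls, and Temporal persists its state so a crash resumes where it left off."""

    @workflow.run
    async def run(self, order_id: str) -> str:
        seat = await workflow.execute_activity(
            reserve_seat, order_id,
            start_to_close_timeout=timedelta(seconds=30))
        payment = await workflow.execute_activity(
            charge_payment, order_id,
            start_to_close_timeout=timedelta(seconds=30))
        return f"{seat}, {payment}"
```

In the choreographed version, the same logic would be spread across services reacting to each other's events, which is exactly the global view the orchestrator restores.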

With their powerful capability to deal with the graph data widely found in practical applications, graph neural networks (GNNs) have received significant research attention. However, as societies become increasingly concerned with data privacy, GNNs face the need to adapt to this new normal. This has led to the rapid development of federated graph neural network (FedGNN) research in recent years. Although promising, this interdisciplinary field is highly challenging for interested researchers to enter. The lack of an insightful survey on this topic only exacerbates this problem. In this paper, we bridge this gap by offering a comprehensive survey of this emerging field. We propose a unique 3-tiered taxonomy of the FedGNN literature to provide a clear view into how GNNs work in the context of federated learning (FL). It puts existing works into perspective by analyzing how graph data manifest themselves in FL settings, how GNN training is performed under different FL system architectures and degrees of graph data overlap across data silos, and how GNN aggregation is performed under various FL settings. Through discussions of the advantages and limitations of existing works, we envision future research directions that can help build more robust, dynamic, efficient, and interpretable FedGNNs.
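Many FedGNN designs build on a FedAvg-style aggregation step on the server side. The sketch below illustrates that step in plain PyTorch, treating the GNN encoder as an opaque nn.Module; it is a minimal, generic illustration rather than any specific aggregation scheme covered in the survey, and the model architecture and client sizes are hypothetical.

```python
# Minimal sketch of FedAvg-style server aggregation. The "GNN" is an opaque
# nn.Module stand-in; a real system would use a graph encoder and local graphs.
import copy
import torch.nn as nn

def fedavg(client_states, client_sizes):
    """Weighted average of client state_dicts by local data size."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total)
                       for s, n in zip(client_states, client_sizes))
    return avg

# Toy example: three clients sharing an architecture but with different weights.
def make_model():
    return nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))

clients = [make_model() for _ in range(3)]
global_model = make_model()
global_model.load_state_dict(fedavg([c.state_dict() for c in clients],
                                    client_sizes=[120, 300, 80]))
```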

Owing to their effective and flexible data acquisition, unmanned aerial vehicles (UAVs) have recently become a hotspot across the fields of computer vision (CV) and remote sensing (RS). Inspired by the recent success of deep learning (DL), many advanced object detection and tracking approaches have been widely applied to various UAV-related tasks, such as environmental monitoring, precision agriculture, and traffic management. This paper provides a comprehensive survey of the research progress and prospects of DL-based UAV object detection and tracking methods. More specifically, we first outline the challenges and the statistics of existing methods, and then provide solutions from the perspective of DL-based models for three research topics: object detection from images, object detection from videos, and object tracking from videos. Open datasets related to UAV-dominated object detection and tracking are exhaustively compiled, and four benchmark datasets are employed for performance evaluation using several state-of-the-art methods. Finally, prospects and considerations for future work are discussed and summarized. It is expected that this survey can provide researchers from the remote sensing field with an overview of DL-based UAV object detection and tracking methods, along with some thoughts on their further development.

Multi-object tracking (MOT) is a crucial component of situational awareness in military defense applications. With the growing use of unmanned aerial systems (UASs), MOT methods for aerial surveillance are in high demand. Applying MOT in UASs presents specific challenges such as a moving sensor, changing zoom levels, dynamic backgrounds, illumination changes, occlusions, and small objects. In this work, we present a robust object tracking architecture designed to accommodate noise in real-time situations. We propose a kinematic prediction model, called the Deep Extended Kalman Filter (DeepEKF), in which a sequence-to-sequence architecture is used to predict entity trajectories in latent space. DeepEKF utilizes a learned image embedding along with an attention mechanism trained to weight the importance of areas in an image to predict future states. For visual scoring, we experiment with different similarity measures to calculate distance based on entity appearance, including a convolutional neural network (CNN) encoder pre-trained using Siamese networks. In initial evaluation experiments, we show that our method, combining the scoring structure of the kinematic and visual models within an MHT framework, improves performance, especially in edge cases where entity motion is unpredictable or the data contains frames with significant gaps.
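The visual-scoring idea described above can be illustrated with a minimal sketch: a shared (Siamese) CNN encoder maps two detection crops to unit-length embeddings, and cosine similarity scores whether they depict the same entity. This is a generic illustration, not DeepEKF's encoder; the layer sizes and crop dimensions are hypothetical.

```python
# Minimal sketch of Siamese-style appearance scoring for track association.
# Layer sizes are hypothetical, not those of the DeepEKF system.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CropEncoder(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim))

    def forward(self, x):
        return F.normalize(self.backbone(x), dim=1)  # unit-length embedding

def appearance_score(encoder, crop_a, crop_b):
    """Cosine similarity in [-1, 1]; higher means more likely the same entity."""
    return F.cosine_similarity(encoder(crop_a), encoder(crop_b)).item()

encoder = CropEncoder()
crop_a, crop_b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(appearance_score(encoder, crop_a, crop_b))
```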

Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, there is an increasing number of applications in power systems where data are collected from non-Euclidean domains and represented as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is provided. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems such as fault diagnosis, power prediction, power flow calculation, and data generation are reviewed in detail. Furthermore, the main issues and research trends regarding the applications of GNNs in power systems are discussed.
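Among the paradigms listed above, the graph convolutional network is the most basic; the sketch below implements its standard propagation rule, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), in plain NumPy. The tiny 4-node graph and feature sizes are purely illustrative, not taken from any power system case study.

```python
# Standard GCN propagation rule with symmetric normalization (Kipf & Welling style).
import numpy as np

def gcn_layer(A, H, W):
    """One GCN step: add self-loops, normalize adjacency, propagate, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])                     # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)             # ReLU activation

# Toy example: a 4-node graph (e.g., a tiny bus network) with 3 node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.rand(4, 3)   # node feature matrix
W = np.random.rand(3, 2)   # learnable weights
print(gcn_layer(A, H, W).shape)   # (4, 2)
```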

Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure, enabling developers to quickly develop and deploy edge applications. Edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey paper provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization of edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.
