
Low-latency IoT applications such as autonomous vehicles, augmented/virtual reality devices, and security applications require substantial computation resources to make decisions on the fly. However, such applications cannot tolerate offloading their tasks to a cloud infrastructure because of the resulting latency. Edge computing is therefore introduced to enable low latency by moving task processing closer to the users at the edge of the network. The network edge is characterized by the heterogeneity of the edge devices forming it; it is thus crucial to devise solutions that account for the different physical resources of each edge device. In this paper, we propose a resource representation scheme that allows each edge device to expose its resource information to the supervisor of the edge node (EN) through the mobile edge computing application programming interfaces proposed by the European Telecommunications Standards Institute (ETSI). The resource information of each edge device is exposed to the EN supervisor whenever a resource allocation is required. To this end, we leverage a Lyapunov optimization framework to dynamically allocate resources at the edge devices. To evaluate the proposed model, we performed extensive theoretical and experimental simulations on a testbed to validate the scheme and its impact on different system parameters. The simulations show that our approach outperforms benchmark approaches and provides low latency and optimal resource consumption.
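As a rough illustration of the Lyapunov-based allocation idea described above, the following minimal Python sketch applies a drift-plus-penalty rule to a single edge device; the queue model, energy cost, and parameter names (V, capacity) are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of a Lyapunov drift-plus-penalty allocator for one edge
# device; the queue model, cost function, and parameters are assumptions
# made for illustration, not the paper's exact formulation.

def allocate(queue_backlog, arrivals, capacity, V, energy_per_unit):
    """Choose how many CPU units to allocate this slot by greedily
    minimizing the drift-plus-penalty expression V * cost(a) - Q(t) * a."""
    best_a, best_obj = 0, float("inf")
    for a in range(capacity + 1):
        obj = V * energy_per_unit * a - queue_backlog * a
        if obj < best_obj:
            best_a, best_obj = a, obj
    # Queue update: backlog shrinks with the service granted, grows with arrivals.
    new_backlog = max(queue_backlog - best_a, 0) + arrivals
    return best_a, new_backlog


# Example slot: backlog of 8 tasks, 3 new arrivals, 4 CPU units available, V = 2.
alloc, backlog = allocate(queue_backlog=8, arrivals=3, capacity=4,
                          V=2.0, energy_per_unit=1.0)
print(alloc, backlog)   # 4 7
```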

Related Content

As mobile edge computing (MEC) finds widespread use for relieving the computational burden of compute- and interaction-intensive applications on end-user devices, understanding the resulting delay and cost performance is drawing significant attention. While most existing works focus on single-task offloading in single-hop MEC networks, next-generation applications (e.g., industrial automation, augmented/virtual reality) require advanced models and algorithms for the dynamic configuration of multi-task services over multi-hop MEC networks. In this work, we leverage recent advances in dynamic cloud network control to provide a comprehensive study of the performance of multi-hop MEC networks, addressing the key problems of multi-task offloading, timely packet scheduling, and joint computation and communication resource allocation. We present a fully distributed algorithm based on Lyapunov control theory that achieves throughput-optimal performance with delay and cost guarantees. Simulation results validate our theoretical analysis and provide insightful guidelines on the interplay between communication and computation resources in MEC networks.
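Distributed Lyapunov control algorithms of the kind referenced above typically build on a max-weight (backpressure) decision rule; the sketch below shows that rule for a single link, with node names, queue values, and link capacity chosen purely for illustration.

```python
# Minimal sketch of the max-weight (backpressure) rule that Lyapunov-based
# cloud network control commonly builds on; all values here are illustrative.

def backpressure_decision(queues, link, capacity):
    """Pick which commodity to serve on a link (u, v).

    queues[node][commodity] is the backlog of that commodity at the node.
    The link serves the commodity with the largest positive differential
    backlog Q_u(c) - Q_v(c); otherwise it stays idle.
    """
    u, v = link
    best_c, best_w = None, 0
    for c in queues[u]:
        w = queues[u][c] - queues[v].get(c, 0)
        if w > best_w:
            best_c, best_w = c, w
    rate = capacity if best_c is not None else 0
    return best_c, rate


queues = {
    "edge1": {"taskA": 10, "taskB": 4},
    "edge2": {"taskA": 3, "taskB": 6},
}
print(backpressure_decision(queues, ("edge1", "edge2"), capacity=5))  # ('taskA', 5)
```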

The Internet of Things (IoT) connects all items to the Internet through information-sensing devices, exchanging information for intelligent identification and management. The Sybil attack is a well-known and crippling attack on IoT. Most previous methods for detecting Sybil attacks in IoT focus on static deployments, and very few are applicable to mobile IoT. In this paper, a novel, lightweight, and distributed detection scheme based on edge computing is proposed for detecting Sybil attacks in mobile IoT. In the proposed scheme, a detection consists of two rounds. In each round, member nodes are required to send packets to edge nodes. Edge nodes calculate a plausible interval of the received signal strength indication (RSSI) from the first round and check whether the RSSI from the second round lies within that interval to detect a Sybil attack. Extensive experimental studies show that the presented approach outperforms many existing approaches in terms of true detection and false detection rates. Moreover, the experimental results show that the fault-tolerance design in the proposed approach greatly strengthens the detection scheme.
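The two-round RSSI check can be sketched in a few lines; the tolerance margin and the sample values below are illustrative assumptions, not the paper's tuned parameters.

```python
# Minimal sketch of the two-round RSSI interval check described above;
# the margin and the dBm samples are assumptions chosen for illustration.

def build_interval(rssi_round1, margin_db=2.0):
    """Edge node derives a plausible RSSI interval from round-one packets."""
    lo, hi = min(rssi_round1), max(rssi_round1)
    return lo - margin_db, hi + margin_db

def is_sybil_suspect(interval, rssi_round2):
    """Flag the identity if its round-two RSSI falls outside the interval."""
    lo, hi = interval
    return not (lo <= rssi_round2 <= hi)

interval = build_interval([-61.0, -60.2, -59.8])   # dBm samples from round 1
print(is_sybil_suspect(interval, -75.3))           # True: likely Sybil identity
print(is_sybil_suspect(interval, -60.5))           # False: consistent RSSI
```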

Recruitment in large organisations often involves interviewing a large number of candidates. The process is resource-intensive and complex, so it is important to carry it out efficiently and effectively. Planning the selection process consists of several sub-problems, each of which maps to a well-known computing problem. Research that looks at each of these problems in isolation is rich and mature; research that takes an integrated view of the problem is not common. In this paper, we address two of the most important aspects of the application-processing problem, namely review/interview panel creation and interview scheduling. We have implemented our approach as a prototype system and used it to automatically plan the interview process for a real-life data set. Our system provides a distinctly better plan than the existing, predominantly manual practice. We have explored various algorithmic options, customised them to solve the panel creation and interview scheduling problems, evaluated these design options experimentally on a real data set, and presented our observations. Our prototype, experimental process, and results may serve as a very good starting point for a full-fledged development project to automate application processing.
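As a hypothetical starting point for the panel creation and interview scheduling problems mentioned above, the following sketch greedily assigns each candidate to the first skill-matching panel that still has a free slot; the panel structure, slot granularity, and matching rule are assumptions for illustration, not the paper's actual algorithms.

```python
# A highly simplified greedy assignment sketch for panel creation and
# interview scheduling; the data model is an illustrative assumption.

def schedule_interviews(candidates, panels, slots):
    """Assign each candidate to the first skill-matching panel with a free slot."""
    free = {p["name"]: list(slots) for p in panels}   # remaining slots per panel
    plan = []
    for cand in candidates:
        for panel in panels:
            if cand["skill"] in panel["skills"] and free[panel["name"]]:
                plan.append((cand["name"], panel["name"], free[panel["name"]].pop(0)))
                break
    return plan

candidates = [{"name": "C1", "skill": "ml"}, {"name": "C2", "skill": "systems"}]
panels = [{"name": "P1", "skills": {"ml"}}, {"name": "P2", "skills": {"systems", "ml"}}]
print(schedule_interviews(candidates, panels, slots=["09:00", "10:00"]))
# [('C1', 'P1', '09:00'), ('C2', 'P2', '09:00')]
```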

Multi-access edge computing (MEC) is an emerging paradigm that pushes resources for sensing, communications, computing, storage and intelligence (SCCSI) to premises closer to the end users, i.e., the edge, so that they can leverage the nearby rich resources to improve their quality of experience (QoE). With the growth of emerging applications aimed at adding intelligence to life-sustaining cyber-physical systems, this paradigm has become a hot research topic, particularly when MEC is utilized to provide edge intelligence and real-time processing and control. This article elaborates the research issues along this line, including basic concepts and performance metrics, killer applications, architectural design, modeling approaches and solutions, and future research directions. It is hoped that this article provides a quick introduction to this fruitful research area, particularly for beginning researchers.

Multi-camera vehicle tracking is one of the most complicated tasks in computer vision, as it involves distinct sub-tasks including vehicle detection, tracking, and re-identification. Despite the challenges, multi-camera vehicle tracking has immense potential in transportation applications, including speed, volume, origin-destination (O-D), and routing data generation. Several recent works have addressed the multi-camera tracking problem. However, most of the effort has gone towards improving accuracy on high-quality benchmark datasets while disregarding lower camera resolutions, compression artifacts, and the overwhelming amount of computational power and time needed to carry out this task at the edge, making it prohibitive for large-scale, real-time deployment. Therefore, in this work we shed light on practical issues that should be addressed in the design of a multi-camera tracking system to provide actionable and timely insights. Moreover, we propose a real-time, city-scale multi-camera vehicle tracking system that compares favorably to computationally intensive alternatives and handles real-world, low-resolution CCTV feeds instead of idealized and curated video streams. To show its effectiveness, in addition to integration into the Regional Integrated Transportation Information System (RITIS), we participated in the 2021 NVIDIA AI City multi-camera tracking challenge, where our method ranked among the top five performers on the public leaderboard.
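The generic pipeline named in the abstract (vehicle detection, per-camera tracking, cross-camera re-identification) can be outlined as below; the component interfaces and the toy stubs are assumptions for illustration, not the authors' system.

```python
# Skeletal sketch of a generic multi-camera tracking pipeline; component
# interfaces and stubs are illustrative assumptions, not the paper's design.

def track_across_cameras(camera_frames, detect, associate, embed, link):
    """detect -> per-camera tracks -> appearance embedding -> cross-camera link."""
    tracklets = []
    for cam_id, frames in camera_frames.items():
        for frame in frames:
            boxes = detect(frame)                      # vehicle bounding boxes
            tracks = associate(cam_id, boxes)          # per-camera track IDs
            tracklets += [(cam_id, tid, embed(frame, box)) for tid, box in tracks]
    return link(tracklets)                             # global vehicle identities

# Toy stubs so the pipeline runs end to end.
frames = {"cam0": ["f0"], "cam1": ["f1"]}
out = track_across_cameras(
    frames,
    detect=lambda f: [(0, 0, 10, 10)],
    associate=lambda cam, boxes: [(hash(cam) % 100, b) for b in boxes],
    embed=lambda f, box: sum(box),
    link=lambda ts: {feat: [(c, t) for c, t, ft in ts if ft == feat]
                     for _, _, feat in ts},
)
print(out)   # groups the two per-camera tracklets under one matched feature
```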

Given a heterogeneous group of robots executing a complex task represented in Linear Temporal Logic, and a new set of tasks for the group, we define the task update problem and propose a framework for automatically updating individual robot tasks given their respective existing tasks and capabilities. Our heuristic, token-based, conflict resolution task allocation algorithm generates a near-optimal assignment for the new task. We demonstrate the scalability of our approach through simulations of multi-robot tasks.
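A token-passing allocation loop of the general kind described above might look like the following sketch, where the robot holding the token claims its lowest-cost unassigned task and then passes the token on; the cost table and tie-breaking are illustrative assumptions, not the paper's conflict-resolution heuristic.

```python
# Minimal sketch of a token-passing task allocation loop; the cost model is
# an illustrative assumption standing in for a capability-aware cost.

def token_allocate(robots, tasks, cost):
    """robots: list of robot ids; tasks: list of task ids; cost(r, t) -> float."""
    unassigned = set(tasks)
    assignment = {}
    for robot in robots:                 # the token visits each robot in turn
        if not unassigned:
            break
        best = min(unassigned, key=lambda t: cost(robot, t))
        assignment[robot] = best
        unassigned.discard(best)
    return assignment

costs = {("r1", "patrol"): 2, ("r1", "deliver"): 5, ("r1", "inspect"): 4,
         ("r2", "patrol"): 3, ("r2", "deliver"): 1, ("r2", "inspect"): 6}
print(token_allocate(["r1", "r2"], ["patrol", "deliver", "inspect"],
                     lambda r, t: costs[(r, t)]))
# {'r1': 'patrol', 'r2': 'deliver'}
```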

Recent advances in computer vision have led to growing interest in deploying visual analytics models on mobile devices. However, most mobile devices have limited computing power, which prevents them from running large-scale visual analytics neural networks. An emerging approach to this problem is to offload the computation of these neural networks to computing resources at an edge server. Efficient computation offloading requires optimizing the trade-off between multiple objectives, including compressed data rate, analytics performance, and computation speed. In this work, we consider a "split computation" system that offloads part of the computation of the YOLO object detection model. We propose a learnable feature compression approach to compress the intermediate YOLO features with lightweight computation. We train the feature compression and decompression modules together with the YOLO model to optimize object detection accuracy under a rate constraint. Compared to baseline methods that apply either standard image compression or learned image compression at the mobile device and perform image decompression and YOLO at the edge, the proposed system achieves higher detection accuracy in the low-to-medium rate range. Furthermore, the proposed system requires substantially lower computation time on the mobile device with a CPU only.
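A minimal PyTorch-style sketch of the split-computation idea is shown below: a learnable bottleneck at the split point shrinks the intermediate feature map on the device and restores it at the edge server. The layer sizes, split point, and channel counts are assumptions for illustration, not the paper's exact architecture or training setup.

```python
# Sketch of a learnable bottleneck at a detection backbone's split point;
# layer sizes and the split point are illustrative assumptions.

import torch
import torch.nn as nn

class FeatureCompressor(nn.Module):
    def __init__(self, in_ch=256, bottleneck_ch=32):
        super().__init__()
        # Runs on the mobile device: cheap 1x1 conv shrinks the feature tensor.
        self.encode = nn.Conv2d(in_ch, bottleneck_ch, kernel_size=1)
        # Runs on the edge server: restore channels before the detection head.
        self.decode = nn.Conv2d(bottleneck_ch, in_ch, kernel_size=1)

    def forward(self, feat):
        code = self.encode(feat)          # compact representation to transmit
        return self.decode(code), code

feat = torch.randn(1, 256, 52, 52)        # intermediate feature map at the split
restored, code = FeatureCompressor()(feat)
print(code.shape, restored.shape)          # [1, 32, 52, 52] and [1, 256, 52, 52]
```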

Edge intelligence refers to a set of connected systems and devices that use artificial intelligence for data collection, caching, processing, and analysis in locations close to where the data is captured. Its aim is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although the field only emerged around 2011, it has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then systematically classify the state of the solutions by examining research results and observations for each of the four components and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages, drawbacks, and so on. This survey provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
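As a concrete instance of the first category (parameter quantization), the following sketch applies simple symmetric 8-bit post-training weight quantization with a single per-tensor scale; the scheme is chosen for illustration and does not represent any single surveyed method.

```python
# Minimal sketch of symmetric int8 post-training weight quantization with a
# per-tensor scale; a simple illustrative scheme, not a specific surveyed method.

import numpy as np

def quantize_int8(weights):
    """Map float weights to int8 using a single per-tensor scale."""
    scale = np.abs(weights).max() / 127.0 if weights.size else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.42, -1.30, 0.07, 0.95], dtype=np.float32)
q, scale = quantize_int8(w)
print(q, np.abs(w - dequantize(q, scale)).max())   # int8 weights, small error
```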

Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide a computing infrastructure that enables developers to quickly develop and deploy edge applications. Edge computing systems have received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization in edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.
