
Edge computing processes data near its source, reducing latency and enhancing security while retaining many of the benefits of traditional cloud computing. This paper explores edge computing for migrating an existing safety-critical robotics use case from a dedicated onboard hardware solution to the edge. We propose an edge robotics architecture based on Linux, Docker containers, Kubernetes, and a wireless local area network based on the TTWiFi protocol. Inspired by previous work on real-time cloud computing, we complement the architecture with a resource management and orchestration layer that helps Linux manage, and Kubernetes orchestrate, the system-wide shared resources (e.g., caches, memory bandwidth, and network). Our architecture aims to ensure the fault-tolerant and predictable execution of robotic applications (e.g., path planning) on the edge while upper-bounding the end-to-end latency and providing the best possible quality of service without jeopardizing safety or security.
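The abstract does not give implementation details, but the idea of upper-bounding end-to-end latency while keeping a safe onboard fallback can be illustrated with a minimal sketch. All names, the 50 ms budget, and the fallback planner below are hypothetical placeholders, not the paper's actual design:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

# Hypothetical stand-ins for an edge-hosted planner and a degraded onboard fallback.
def edge_plan_path(state):          # would call the containerized planner over the wireless link
    time.sleep(0.02)                # simulated network + compute latency
    return ["waypoint_a", "waypoint_b"]

def onboard_fallback(state):        # conservative local planner, always available
    return ["stop"]

DEADLINE_S = 0.05                   # assumed end-to-end latency budget

def plan_with_deadline(state, executor):
    """Offload planning to the edge, but never exceed the assumed safety deadline."""
    future = executor.submit(edge_plan_path, state)
    try:
        return future.result(timeout=DEADLINE_S)
    except FutureTimeout:
        return onboard_fallback(state)   # degrade gracefully instead of missing the deadline

with ThreadPoolExecutor(max_workers=1) as pool:
    print(plan_with_deadline({"pose": (0.0, 0.0)}, pool))
```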

Related Content

Mobile edge computing (MEC) is a promising computing paradigm that offers users proximity and instant computing services for various applications, and it has become an essential component of the Internet of Things (IoT). However, as compute-intensive services continue to emerge and the number of IoT devices explodes, MEC servers are confronted with resource limitations. In this work, we investigate a task-offloading framework for device-assisted edge computing, which allows MEC servers to assign certain tasks to auxiliary IoT devices (ADs) for processing. To facilitate efficient collaboration among task IoT devices (TDs), the MEC server, and ADs, we propose an incentive-driven pricing and task allocation scheme. Initially, the MEC server employs the Vickrey auction mechanism to recruit ADs. Subsequently, based on the Stackelberg game, we analyze the interactions between TDs and the MEC server. Finally, we establish the optimal service pricing and task allocation strategy, guided by the Stackelberg model and priority settings. Simulation results show that the proposed scheme dramatically improves the utility of the MEC server while safeguarding the interests of TDs and ADs, achieving a triple-win scenario.
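The paper's full pricing and allocation scheme is not reproduced here, but the Vickrey (second-price) recruitment step can be sketched in a few lines. In this reverse-auction toy, the MEC server recruits the auxiliary device (AD) that declares the lowest cost and pays it the second-lowest bid, which makes truthful bidding a dominant strategy; the device IDs and bids are made up:

```python
def vickrey_recruit(bids):
    """bids: dict mapping AD id -> declared cost per unit of offloaded work."""
    if len(bids) < 2:
        raise ValueError("a Vickrey auction needs at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    payment = ranked[1][1]          # pay the second-lowest declared cost
    return winner, payment

winner, price = vickrey_recruit({"ad_1": 0.8, "ad_2": 0.5, "ad_3": 0.9})
print(winner, price)                # ad_2 wins and is paid 0.8
```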

In applications related to big data and service computing, dynamic connections tend to be encountered, especially in the dynamic, user-perspective quality-of-service (QoS) data of Web services. These data are transformed into high-dimensional and incomplete (HDI) tensors that include abundant temporal pattern information. Latent factorization of tensors (LFT) is a highly efficient and typical approach for extracting such patterns from an HDI tensor. However, current LFT models require the QoS data to be maintained in a central place (e.g., a central server), which is impossible for increasingly privacy-sensitive users. To address this problem, this article designs a federated learning approach based on latent factorization of tensors (FL-LFT). It builds a data-density-oriented federated learning model that enables isolated users to collaboratively train a global LFT model while protecting users' privacy. Extensive experiments on a QoS dataset collected from the real world verify that FL-LFT shows a remarkable increase in prediction accuracy compared to state-of-the-art federated learning (FL) approaches.
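To make the federated-factorization idea concrete, here is a rough sketch (not the FL-LFT algorithm itself): each user keeps a private user factor and its own observed (service, time, QoS) triples, runs local SGD on a CP-style factorization, and only gradients for the shared service and time factors are aggregated by the server. The rank, learning rates, and synthetic data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 4                                       # assumed latent rank
n_services, n_times = 20, 10

def local_update(user_vec, S, T, obs, lr=0.05, lam=0.01):
    """One SGD pass over a single user's observed (service, time, qos) triples.
    The user factor stays on the device; only gradients w.r.t. S and T are shared."""
    grad_S, grad_T = np.zeros_like(S), np.zeros_like(T)
    for j, k, y in obs:
        err = np.sum(user_vec * S[j] * T[k]) - y
        grad_u = err * S[j] * T[k] + lam * user_vec
        grad_S[j] += err * user_vec * T[k] + lam * S[j]
        grad_T[k] += err * user_vec * S[j] + lam * T[k]
        user_vec -= lr * grad_u             # local-only update
    return grad_S, grad_T

# Server keeps the shared service/time factors and averages client gradients.
S = rng.normal(scale=0.1, size=(n_services, R))
T = rng.normal(scale=0.1, size=(n_times, R))
clients = [{"u": rng.normal(scale=0.1, size=R),
            "obs": [(rng.integers(n_services), rng.integers(n_times), rng.random())
                    for _ in range(30)]} for _ in range(5)]

for _ in range(50):                         # federated rounds
    g_S, g_T = np.zeros_like(S), np.zeros_like(T)
    for c in clients:
        dS, dT = local_update(c["u"], S, T, c["obs"])
        g_S += dS
        g_T += dT
    S -= 0.05 * g_S / len(clients)          # aggregate without seeing raw QoS values
    T -= 0.05 * g_T / len(clients)
```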

Massive multiple-input multiple-output (MIMO) technology has significantly enhanced spectral and power efficiency in cellular communications and is expected to further evolve towards extra-large-scale MIMO. However, centralized processing for massive MIMO faces practical obstacles, including excessive computational complexity and a substantial volume of baseband data to be exchanged. To address these challenges, decentralized baseband processing has emerged as a promising solution. This approach involves partitioning the antenna array into clusters with dedicated computing hardware for parallel processing. In this paper, we investigate the gradient-based Markov chain Monte Carlo (MCMC) method -- an advanced MIMO detection technique known for its near-optimal performance in centralized implementation -- within the context of a decentralized baseband processing architecture. This decentralized design mitigates the computation burden at a single processing unit by utilizing computational resources in a distributed and parallel manner. Additionally, we integrate the mini-batch stochastic gradient descent method into the proposed decentralized detector, achieving remarkable performance with high efficiency. Simulation results demonstrate substantial performance gains of the proposed method over existing decentralized detectors across various scenarios. Moreover, complexity analysis reveals the advantages of the proposed decentralized strategy in terms of computation delay and interconnection bandwidth when compared to conventional centralized detectors.
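The paper's detector is not reproduced here, but the decentralized principle can be illustrated with a toy Langevin-style sketch: each antenna cluster holds a row block of the channel matrix, computes a partial gradient of the least-squares objective from its own antennas, and only these low-dimensional partial gradients are combined; Gaussian exploration noise is then added. The dimensions, step size, and BPSK relaxation are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 8 receive antennas split across 2 clusters, 4 transmit antennas, BPSK.
n_rx, n_tx, n_clusters = 8, 4, 2
H = rng.normal(size=(n_rx, n_tx)) / np.sqrt(n_rx)
x_true = rng.choice([-1.0, 1.0], size=n_tx)
sigma = 0.1
y = H @ x_true + sigma * rng.normal(size=n_rx)

H_parts = np.split(H, n_clusters)           # each cluster keeps a row block of H
y_parts = np.split(y, n_clusters)

def decentralized_gradient(x):
    """Each cluster computes a partial gradient of ||y - Hx||^2 from its own antennas;
    only these small partial gradients are exchanged, not raw baseband samples."""
    return sum(-2.0 * Hc.T @ (yc - Hc @ x) for Hc, yc in zip(H_parts, y_parts))

eps = 0.01
x = np.zeros(n_tx)
best, best_cost = x.copy(), np.inf
for _ in range(500):
    x = x - eps * decentralized_gradient(x) + np.sqrt(2 * eps) * sigma * rng.normal(size=n_tx)
    x = np.clip(x, -1.0, 1.0)               # keep the relaxation inside the symbol range
    cost = np.linalg.norm(y - H @ np.sign(x)) ** 2
    if cost < best_cost:
        best, best_cost = np.sign(x), cost

print(best, x_true)
```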

Earth observation with small satellites serves a wide range of relevant applications. However, significant advances in sensor technology (e.g., higher resolution, multiple spectral bands beyond visible light) combined with challenging channel characteristics lead to a communication bottleneck when transmitting the collected data to Earth. Recently, neural-network-based joint source coding, channel coding, and modulation has been proposed to combine image compression and communication. Although this approach achieves promising results when applied to standard terrestrial channel models, it remains an open question whether it is suitable for the more complicated and quickly varying satellite communication channel. In this paper, we consider a detailed satellite channel model accounting for different shadowing conditions and train an encoder-decoder architecture with realistic Sentinel-2 satellite imagery. In addition, to reduce the overhead of maintaining multiple neural networks for different channel states, we leverage attention modules and train a single adaptable neural network that covers a wide range of channel conditions. Our evaluation shows that the proposed approach achieves performance similar to less space-efficient schemes that use separate neural networks for differing channel conditions.
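The general shape of a channel-adaptive encoder-decoder can be sketched as follows. This is only an SNR-conditioned AWGN toy, not the paper's architecture: the satellite shadowing model and the actual attention modules are more elaborate, and all layer sizes and names are made up:

```python
import torch
import torch.nn as nn

class ChannelAdaptiveJSCC(nn.Module):
    """Toy SNR-conditioned autoencoder: a small channel-state MLP rescales the latent
    features so that a single network can serve many channel conditions."""
    def __init__(self, latent_ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_ch, 3, stride=2, padding=1))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())
        # channel-state conditioning: maps the scalar SNR to per-channel gains
        self.csi_mlp = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                                     nn.Linear(32, latent_ch), nn.Sigmoid())

    def forward(self, img, snr_db):
        z = self.encoder(img)
        gain = self.csi_mlp(snr_db.view(-1, 1)).view(-1, z.shape[1], 1, 1)
        z = z * gain                                   # attention-like feature reweighting
        noise_power = 10 ** (-snr_db.view(-1, 1, 1, 1) / 10)
        z_noisy = z + torch.sqrt(noise_power) * torch.randn_like(z)   # AWGN stand-in
        return self.decoder(z_noisy)

model = ChannelAdaptiveJSCC()
out = model(torch.rand(2, 3, 64, 64), torch.tensor([5.0, 20.0]))
print(out.shape)                                       # torch.Size([2, 3, 64, 64])
```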

The exponential growth of data traffic and the increasing complexity of networked applications demand effective solutions capable of passively inspecting and analysing the network traffic for monitoring and security purposes. Implementing network probes in software using general-purpose operating systems has been made possible by advances in packet-capture technologies, such as kernel-bypass frameworks, and by multi-queue adapters designed to distribute the network workload in multi-core processors. Modern SmartNICs, in addition, have introduced stateful mechanisms to associate actions to network flows such as forwarding packets or updating traffic statistics for an individual flow. In this paper, we describe our experience in exploiting those functionalities in a modern network probe and we perform a detailed study of the performance characteristics under different scenarios. Compared to pure CPU-based solutions, SmartNICs with flow-offload technologies provide substantial benefits when implementing forwarding applications. However, the main limitation of having to keep large flow tables in the host memory remains largely unsolved for realistic monitoring and security applications.
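The host-memory flow table mentioned as the main limitation is essentially per-flow state keyed by the 5-tuple. The following is a minimal sketch of such host-side state with idle-flow eviction; the field layout, timeout, and class name are hypothetical and do not correspond to any specific SmartNIC or probe API:

```python
import time
from collections import namedtuple

FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto")

class HostFlowTable:
    """Sketch of the host-side flow table a software probe keeps when not every flow
    can be offloaded to the SmartNIC; idle flows are evicted to bound memory usage."""
    def __init__(self, idle_timeout=30.0):
        self.flows = {}                       # FlowKey -> [packets, bytes, last_seen]
        self.idle_timeout = idle_timeout

    def update(self, key, length, now=None):
        now = now or time.monotonic()
        stats = self.flows.setdefault(key, [0, 0, now])
        stats[0] += 1
        stats[1] += length
        stats[2] = now

    def expire_idle(self, now=None):
        now = now or time.monotonic()
        dead = [k for k, (_, _, seen) in self.flows.items() if now - seen > self.idle_timeout]
        for k in dead:
            del self.flows[k]                 # a real probe would export the record first
        return len(dead)

table = HostFlowTable()
table.update(FlowKey("10.0.0.1", "10.0.0.2", 1234, 443, "tcp"), 1500)
print(len(table.flows))
```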

Communication constraints can significantly impact robots' ability to share information, coordinate their movements, and synchronize their actions, thus limiting coordination in Multi-Robot Exploration (MRE) applications. In this work, we address these challenges by modeling the MRE application as a DEC-POMDP and designing a joint policy that follows a rendezvous plan. This policy allows robots to explore unknown environments while intermittently sharing maps opportunistically or at rendezvous locations, without being constrained by joint path optimizations. To generate the rendezvous plan, robots represent the MRE task as an instance of the Job Shop Scheduling Problem (JSSP) and minimize JSSP metrics. They aim to reduce waiting times and increase connectivity, which correlate with the DEC-POMDP rewards and the time to complete the task. Our simulation results suggest that our method is more efficient than using relays or maintaining intermittent communication with a base station, making it a suitable approach for Multi-Robot Exploration. We developed a proof-of-concept using the Robot Operating System (ROS) that is available at: //github.com/multirobotplayground/ROS-Noetic-Multi-robot-Sandbox.
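To illustrate the kind of JSSP metric being minimized, the toy below evaluates a fixed rendezvous plan: each robot's exploration segments between rendezvous act like job-shop operations, and the waiting time at a rendezvous is how long the early robots idle before the last one arrives. The segment durations are invented, and the paper's approach additionally searches over plans rather than only scoring one:

```python
segments = {
    "robot_1": [12.0, 8.0, 10.0],   # minutes of exploration before rendezvous 1, 2, 3
    "robot_2": [9.0, 14.0, 7.0],
    "robot_3": [11.0, 10.0, 9.0],
}

def evaluate_schedule(segments):
    clocks = {r: 0.0 for r in segments}
    total_wait = 0.0
    n_rdv = len(next(iter(segments.values())))
    for i in range(n_rdv):
        arrivals = {r: clocks[r] + segs[i] for r, segs in segments.items()}
        meet_time = max(arrivals.values())              # all robots must be present
        total_wait += sum(meet_time - t for t in arrivals.values())
        clocks = {r: meet_time for r in segments}       # everyone leaves together
    return total_wait, max(clocks.values())

wait, makespan = evaluate_schedule(segments)
print(f"total waiting: {wait:.1f} min, makespan: {makespan:.1f} min")
```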

In appearance-based localization and mapping, loop closure detection is the process used to determine whether the current observation comes from a previously visited location or a new one. As the size of the internal map increases, so does the time required to compare new observations with all stored locations, eventually limiting online processing. This paper presents an online loop closure detection approach for large-scale and long-term operation. The approach is based on a memory management method that limits the number of locations used for loop closure detection so that the computation time remains under real-time constraints. The idea consists of keeping the most recent and frequently observed locations in a Working Memory (WM) used for loop closure detection, and transferring the others into a Long-Term Memory (LTM). When a match is found between the current location and one stored in WM, associated locations stored in LTM can be updated and remembered for additional loop closure detections. Results demonstrate the approach's adaptability and scalability using ten standard data sets from other appearance-based loop closure approaches, one custom data set using real images taken over a 2 km loop of our university campus, and one custom data set (7 hours) using virtual images from the racing video game "Need for Speed: Most Wanted".
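A minimal sketch of the WM/LTM idea follows: keep only the most recent and most frequently observed locations in Working Memory so that matching time stays bounded, transfer the rest to Long-Term Memory, and bring neighbouring LTM locations back when a loop closure is found. The capacity, transfer heuristic, and class names are made up and do not reproduce the paper's exact memory management policy:

```python
from collections import OrderedDict

class MemoryManager:
    """Toy WM/LTM manager for appearance-based loop closure detection."""
    def __init__(self, wm_capacity=100):
        self.wm = OrderedDict()            # location id -> observation count (insertion = recency)
        self.ltm = {}
        self.wm_capacity = wm_capacity

    def observe(self, loc_id):
        count = self.wm.pop(loc_id, None) or self.ltm.pop(loc_id, 0)
        self.wm[loc_id] = count + 1        # most recent entry sits at the end
        while len(self.wm) > self.wm_capacity:
            self._transfer_one()

    def _transfer_one(self):
        # transfer the least-observed location among the older half of WM to LTM
        victim = min(list(self.wm)[: len(self.wm) // 2], key=lambda k: self.wm[k])
        self.ltm[victim] = self.wm.pop(victim)

    def retrieve(self, neighbour_ids):
        """On a loop closure in WM, bring neighbouring LTM locations back for future matching."""
        for loc_id in neighbour_ids:
            if loc_id in self.ltm:
                self.wm[loc_id] = self.ltm.pop(loc_id)
```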

Edge computing facilitates low-latency services at the network's edge by distributing computation, communication, and storage resources within the geographic proximity of mobile and Internet-of-Things (IoT) devices. The recent advancement in Unmanned Aerial Vehicles (UAVs) technologies has opened new opportunities for edge computing in military operations, disaster response, or remote areas where traditional terrestrial networks are limited or unavailable. In such environments, UAVs can be deployed as aerial edge servers or relays to facilitate edge computing services. This form of computing is also known as UAV-enabled Edge Computing (UEC), which offers several unique benefits such as mobility, line-of-sight, flexibility, computational capability, and cost-efficiency. However, the resources on UAVs, edge servers, and IoT devices are typically very limited in the context of UEC. Efficient resource management is, therefore, a critical research challenge in UEC. In this article, we present a survey on the existing research in UEC from the resource management perspective. We identify a conceptual architecture, different types of collaborations, wireless communication models, research directions, key techniques and performance indicators for resource management in UEC. We also present a taxonomy of resource management in UEC. Finally, we identify and discuss some open research challenges that can stimulate future research directions for resource management in UEC.

Autonomic computing investigates how systems can achieve (user) specified control outcomes on their own, without the intervention of a human operator. Autonomic computing fundamentals have been substantially influenced by those of control theory for closed and open-loop systems. In practice, complex systems may exhibit a number of concurrent and inter-dependent control loops. Despite research into autonomic models for managing computer resources, ranging from individual resources (e.g., web servers) to a resource ensemble (e.g., multiple resources within a data center), research into integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale continues to be a fundamental challenge. The integration of AI/ML to achieve such autonomic and self-management of systems can be achieved at different levels of granularity, from full to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing join to discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next generation computing for emerging computing paradigms, including cloud, fog, edge, serverless and quantum computing environments.
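As a concrete reference point for the closed-loop resource management being discussed, here is a minimal single-loop sketch in the spirit of a monitor-analyse-plan-execute cycle for one resource. The target utilisation, bounds, and scaling rule are hypothetical placeholders, not a recommendation from the article:

```python
def control_step(current_replicas, observed_utilisation, target=0.6,
                 min_replicas=1, max_replicas=20):
    # Analyse: how far is the observed utilisation from the target operating point?
    desired = current_replicas * observed_utilisation / target
    # Plan: round and clamp to the allowed range.
    planned = max(min_replicas, min(max_replicas, round(desired)))
    # Execute: a real system would call the platform's scaling API here.
    return planned

print(control_step(current_replicas=4, observed_utilisation=0.9))   # -> 6
```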

Leveraging available datasets to learn a model that generalizes well to unseen domains is important for computer vision, especially when annotated data for the unseen domain are unavailable. We study a novel and practical problem, Open Domain Generalization (OpenDG), which learns from different source domains to achieve high performance on an unknown target domain, where the distributions and label sets of the individual source domains and the target domain can all differ. The setting accommodates diverse source domains and is widely applicable to real-world scenarios. We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations. We augment domains at both the feature level, via a new Dirichlet mixup, and the label level, via distilled soft-labeling, which complements each domain with missing classes and knowledge from the other domains. We conduct meta-learning over domains by designing new meta-learning tasks and losses that preserve domain-specific knowledge while generalizing knowledge across domains. Experimental results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
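The core of a Dirichlet mixup can be sketched in a few lines: draw mixing weights over the source domains from a Dirichlet distribution and form convex combinations of features and soft labels. This toy mixes one weight vector per batch with made-up dimensions; the paper's augmentation may mix per sample and handles missing classes in ways not shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_mixup(features, soft_labels, alpha=1.0):
    """features/soft_labels: lists with one (batch, dim) array per source domain."""
    k = len(features)
    weights = rng.dirichlet(alpha * np.ones(k))          # one mixing weight per domain
    mixed_x = sum(w * x for w, x in zip(weights, features))
    mixed_y = sum(w * y for w, y in zip(weights, soft_labels))
    return mixed_x, mixed_y

# Three source domains, batches of 8 samples with 128-dim features and 10 soft-label classes.
feats = [rng.normal(size=(8, 128)) for _ in range(3)]
labels = [rng.dirichlet(np.ones(10), size=8) for _ in range(3)]
x_aug, y_aug = dirichlet_mixup(feats, labels)
print(x_aug.shape, y_aug.shape, y_aug.sum(axis=1))       # mixed labels remain valid distributions
```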
