
Today, vehicles use smart sensors to collect data from the road environment. This data is often processed onboard the vehicles using expensive hardware. Such onboard processing increases the vehicle's cost, quickly drains its battery, and exhausts its computing resources. Offloading tasks to the cloud is therefore required. Still, data offloading is challenging due to the low-latency requirements of safe and reliable driving decisions. Moreover, the age of processing was not considered in prior research on low-latency offloading for autonomous vehicles. This paper proposes an age-of-processing-based offloading approach for autonomous vehicles using unsupervised machine learning, Multi-Radio Access Technologies (multi-RATs), and Edge Computing in the Open Radio Access Network (O-RAN). We design a collaboration space of edge clouds to process data in proximity to autonomous vehicles. To reduce the variation in offloading delay, we propose a new communication-planning approach that enables the vehicle to optimally preselect the available RATs, such as Wi-Fi, LTE, or 5G, to offload tasks to edge clouds when its local resources are insufficient. We formulate an optimization problem for age-based offloading that minimizes the elapsed time between generating a task and receiving its computation output. To handle this non-convex problem, we develop a surrogate problem, transform it into an unconstrained optimization problem using the Lagrangian method, and apply dual decomposition. The simulation results show that our approach significantly reduces the age of processing in data offloading, with a 90.34% improvement over a comparable method.
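The Lagrangian/dual-decomposition step described above follows a standard pattern that a toy example can make concrete. Below is a minimal sketch, not the paper's formulation: each vehicle's processing age is modeled as c_i / f_i for an edge-CPU share f_i, a shared capacity F couples the per-vehicle subproblems, and a subgradient update on the multiplier prices the constraint. The workloads, capacity, and step size are invented for illustration.

```python
import numpy as np

# Minimal sketch (not the paper's formulation): Lagrangian dual decomposition
# for a toy offloading problem. Each vehicle i receives an edge-CPU share f_i
# and its processing age is a_i = c_i / f_i; shares must respect sum(f_i) <= F.
c = np.array([2.0, 3.0, 1.5])   # hypothetical task workloads (cycles)
F = 4.0                          # hypothetical total edge-CPU capacity
lam = 0.1                        # Lagrange multiplier ("price" of capacity)

for _ in range(200):
    # Per-vehicle subproblem: min_f c_i/f_i + lam*f_i has the closed form
    # f_i = sqrt(c_i / lam), so each vehicle solves it independently.
    f = np.sqrt(c / lam)
    # Dual subgradient step: raise the price if capacity is exceeded.
    lam = max(lam + 0.05 * (f.sum() - F), 1e-6)

print("CPU shares:", np.round(f, 3), "total:", round(f.sum(), 3))  # total -> ~F
```

Dual decomposition is attractive here precisely because, once the multiplier is fixed, each vehicle's subproblem decouples and can be solved locally.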

Related Content

Processing is the name of an open-source programming language and its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach the fundamentals of programming, and it has been employed in a large number of new media and interactive art works.

Collaborative perception, which greatly enhances the sensing capability of connected and autonomous vehicles (CAVs) by incorporating data from external resources, also brings forth potential security risks. CAVs' driving decisions rely on remote untrusted data, making them susceptible to attacks carried out by malicious participants in the collaborative perception system. However, security analyses of and countermeasures for such threats are absent. To understand the impact of this vulnerability, we break new ground by proposing various real-time data fabrication attacks in which the attacker delivers crafted malicious data to victims in order to perturb their perception results, leading to hard brakes or increased collision risks. Our attacks demonstrate a success rate of over 86% in high-fidelity simulated scenarios and are realizable in real-world experiments. To mitigate the vulnerability, we present a systematic anomaly detection approach that enables benign vehicles to jointly reveal malicious fabrication. It detects 91.5% of attacks with a false positive rate of 3% in simulated scenarios and significantly mitigates attack impacts in real-world scenarios.
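As a rough illustration of what revealing malicious fabrication can look like, here is a minimal consistency-check sketch; it is not the paper's detector. A collaborator's reported objects are matched against the ego vehicle's own detections, and reports with no nearby local match raise the collaborator's anomaly score. The match radius and all coordinates are invented.

```python
import numpy as np

# Minimal sketch, not the paper's detector: flag a collaborator as suspicious
# when its reported object positions disagree with our own detections by more
# than a threshold. All names and numbers here are illustrative assumptions.
def anomaly_score(local_objs, remote_objs, match_radius=2.0):
    """Fraction of remote reports with no nearby local detection."""
    unmatched = 0
    for r in remote_objs:
        dists = np.linalg.norm(local_objs - r, axis=1)
        if dists.min() > match_radius:
            unmatched += 1
    return unmatched / max(len(remote_objs), 1)

local = np.array([[10.0, 2.0], [25.0, -1.0]])   # ego detections (x, y)
remote = np.array([[10.3, 2.1], [40.0, 0.0]])   # collaborator's report
print(anomaly_score(local, remote))  # 0.5: one fabricated-looking object
```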

Intelligent transportation systems (ITS) have gained significant attention from various communities, driven by rapid advancements in information technology. Within the realm of ITS, navigational recommendation systems (RS) play a pivotal role, as users often face diverse path (route) options in complex urban environments. However, RS are not immune to vulnerabilities, especially when confronted with information-based attacks. This study explores the impact of such cyber threats on RS, focusing on local targeted information attacks in which the attacker favors certain groups or businesses. We study human behaviors and propose a coordinated incentive-compatible RS that guides users toward a mixed Nash equilibrium, under which no user has an incentive to deviate from the recommendation. We then examine vulnerabilities in the recommendation process, focusing on scenarios involving misinformed demands, in which the attacker fabricates fake users to mislead the RS's recommendations. Using a Stackelberg game approach, the analytical results and a numerical case study reveal that RS are susceptible to informational attacks. This study highlights the need to account for informational attacks in building more resilient and effective navigational recommendations.
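The mixed Nash equilibrium the recommender targets can be illustrated with the smallest possible congestion game. The sketch below is illustrative and not the paper's model: it splits unit demand across two routes with assumed linear latencies so that both routes end up equally slow, leaving no user an incentive to deviate.

```python
# Minimal sketch (illustrative, not the paper's model): a two-route congestion
# game with linear latencies l_r(x) = a_r * x + b_r and unit total demand.
# At the mixed equilibrium the RS splits demand so both routes are equally
# slow; no individual user gains by switching routes.
a1, b1 = 2.0, 1.0   # hypothetical latency coefficients for route 1
a2, b2 = 1.0, 2.0   # ... and for route 2

# Solve a1*p + b1 = a2*(1 - p) + b2 for the share p sent to route 1.
p = (a2 + b2 - b1) / (a1 + a2)
l1 = a1 * p + b1
l2 = a2 * (1 - p) + b2
print(f"split={p:.3f}, latencies: {l1:.3f} vs {l2:.3f}")  # latencies equalized
```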

Due to the unavailability of routing information in design stages prior to detailed routing (DR), the tasks of timing prediction and optimization pose major challenges. Inaccurate timing prediction wastes design effort, hurts circuit performance, and may lead to design failure. This work focuses on timing prediction after clock tree synthesis and placement legalization, which is the earliest opportunity to time and optimize a "complete" netlist. The paper first documents that having "oracle knowledge" of the final post-DR parasitics enables post-global routing (GR) optimization to produce improved final timing outcomes. To bridge the gap between GR-based parasitic and timing estimation and post-DR results during post-GR optimization, machine learning (ML)-based models are proposed, including the use of features for macro blockages for accurate predictions for designs with macros. Based on a set of experimental evaluations, it is demonstrated that these models show higher accuracy than GR-based timing estimation. When used during post-GR optimization, the ML-based models show demonstrable improvements in post-DR circuit performance. The methodology is applied to two different tool flows - OpenROAD and a commercial tool flow - and results on 45nm bulk and 12nm FinFET enablements show improvements in post-DR slack metrics without increasing congestion. The models are demonstrated to be generalizable to designs generated under different clock period constraints and are robust to training data with small levels of noise.
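As a loose illustration of the modeling setup, rather than the paper's actual models or features, the sketch below fits a gradient-boosted regressor that maps GR-stage features (including a macro-blockage flag, per the abstract) to a post-DR timing quantity. The features and the synthetic "ground truth" are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Minimal sketch of the general idea (not the paper's models): learn to map
# global-routing (GR) estimates to post-detailed-routing timing. All data
# here is synthetic and the feature set is a hypothetical stand-in.
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 100, n),   # GR-estimated wirelength
    rng.uniform(0, 5, n),     # GR-estimated net capacitance
    rng.integers(0, 2, n),    # near-macro-blockage flag
])
# Synthetic "post-DR delay": GR-like terms plus a macro-dependent detour term
y = 0.8 * X[:, 0] + 3.0 * X[:, 1] + 15.0 * X[:, 2] + rng.normal(0, 1, n)

model = GradientBoostingRegressor().fit(X[:800], y[:800])
print("R^2 on held-out nets:", model.score(X[800:], y[800:]))
```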

For the autonomous operation of articulated vehicles at distribution centers, accurate positioning of the vehicle is of the utmost importance. Automating these vehicles poses several challenges, e.g., a large swept path, asymmetric steering response, large side-slip angles of non-steered trailer axles, and trailer instability while reversing. Therefore, a validated vehicle model is required that accurately and efficiently predicts the states of the vehicle. Unlike forward driving, open-loop validation methods cannot be used for reverse driving of articulated vehicles due to their unstable dynamics. This paper proposes an approach to stabilize the unstable pole of the system and compares three vehicle models (kinematic, non-linear single-track, and multibody dynamics) against real-world test data obtained from low-speed experiments at a distribution center. It is concluded that the non-linear single-track model outperforms the other models for large articulation angles and reverse driving maneuvers.
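A kinematic tractor-trailer model, one of the three model classes compared, can be sketched in a few lines. The parameters and the articulation-angle kinematics below are illustrative assumptions, but they show why reversing is open-loop unstable: the articulation angle drifts away from zero whenever the velocity is negative.

```python
import numpy as np

# Minimal kinematic tractor-trailer sketch (one of the three model classes
# compared in the paper; all parameters are illustrative assumptions).
L1, L2 = 3.5, 8.0   # tractor wheelbase, hitch-to-trailer-axle length (m)

def step(state, v, delta, dt=0.01):
    """state = (x, y, theta, gamma): tractor pose + articulation angle."""
    x, y, th, gam = state
    x += v * np.cos(th) * dt
    y += v * np.sin(th) * dt
    dth = v / L1 * np.tan(delta)
    th += dth * dt
    # Articulation-angle kinematics: for v < 0 (reversing) this mode is
    # unstable, which is why open-loop validation fails and the paper
    # stabilizes the unstable pole before comparing models.
    gam += (dth - v / L2 * np.sin(gam)) * dt
    return (x, y, th, gam)

s = (0.0, 0.0, 0.0, 0.05)        # small initial articulation perturbation
for _ in range(500):
    s = step(s, v=-1.0, delta=0.0)  # slow straight reverse
print(s)  # gamma has grown: the perturbation drifts instead of decaying
```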

Machine learning approaches such as clustering methods must deal with massive datasets, which present an increasing challenge. We devise parallel algorithms to compute the Multi-Slice Clustering (MSC) of 3rd-order tensors. The MSC method is based on a spectral analysis of the tensor slices and works independently on each tensor mode. These features fit well with the parallel paradigm on a distributed-memory system. We show that our parallel scheme outperforms sequential computing and enables the scalability of the MSC method.
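The reason the method parallelizes so cleanly is that each slice is analyzed independently. The sketch below shows that pattern only, not the full MSC algorithm: every mode-1 slice of a toy tensor receives a spectral score (here, its top singular value) from a separate worker process.

```python
import numpy as np
from multiprocessing import Pool

# Minimal sketch of the parallel pattern (not the full MSC algorithm):
# score each slice of a 3rd-order tensor by its top singular value,
# independently per slice, so the work maps cleanly onto worker processes.
def slice_score(slice_2d):
    return np.linalg.svd(slice_2d, compute_uv=False)[0]

if __name__ == "__main__":
    T = np.random.default_rng(0).normal(size=(20, 30, 40))  # toy tensor
    slices = [T[i, :, :] for i in range(T.shape[0])]        # mode-1 slices
    with Pool(4) as pool:
        scores = pool.map(slice_score, slices)              # embarrassingly parallel
    print(np.round(scores, 2))
```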

Searching a denied environment is challenging for swarm robots, as no assistance from GNSS, mapping, data sharing, or central processing is available. However, using olfactory and auditory signals to cooperate, as animals do, could be an important way to improve collaboration among swarm robots. In this paper, an Olfactory-Auditory augmented Bug algorithm (OA-Bug) is proposed for a swarm of autonomous robots to explore a denied environment. A simulation environment is built to measure the performance of OA-Bug. The coverage of the search task reaches 96.93% using OA-Bug, a significant improvement over a comparable algorithm, SGBA. Furthermore, experiments on real swarm robots validate OA-Bug and show that it improves the performance of swarm robots in denied environments.
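As a toy illustration of the olfactory idea, and not the OA-Bug algorithm itself, the sketch below has a robot greedily climb a simulated scent field and fall back to a random step when no direction improves. The field, step size, and source location are invented.

```python
import numpy as np

# Minimal sketch of the olfactory idea (not the OA-Bug algorithm itself):
# a robot greedily climbs a simulated scent field, falling back to a
# random step when no sensed direction improves the signal.
rng = np.random.default_rng(1)
scent = lambda p: -np.linalg.norm(p - np.array([8.0, 5.0]))  # source at (8, 5)

pos = np.array([0.0, 0.0])
for _ in range(100):
    moves = [np.array(m, float) for m in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    best = max(moves, key=lambda m: scent(pos + 0.5 * m))
    if scent(pos + 0.5 * best) <= scent(pos):   # no improving direction
        best = moves[rng.integers(4)]           # fall back to a random walk
    pos = pos + 0.5 * best
print(np.round(pos, 1))  # ends near the simulated source
```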

Currently, truss tomato weighing and packaging require significant manual work. The main obstacle to automation lies in the difficulty of developing a reliable robotic grasping system for already harvested trusses. We propose a method to grasp trusses that are stacked in a crate with considerable clutter, which is how they are commonly stored and transported after harvest. The method consists of a deep learning-based vision system to first identify the individual trusses in the crate and then determine a suitable grasping location on the stem. To this end, we have introduced a grasp pose ranking algorithm with online learning capabilities. After selecting the most promising grasp pose, the robot executes a pinch grasp without needing touch sensors or geometric models. Lab experiments with a robotic manipulator equipped with an eye-in-hand RGB-D camera showed a 100% clearance rate when tasked to pick all trusses from a pile. 93% of the trusses were successfully grasped on the first try, while the remaining 7% required more attempts.
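The online-learning flavor of the grasp pose ranking can be sketched with a linear scorer that is updated after every attempt. Everything below (the three per-grasp features, the learning rate, the logistic update rule) is a hypothetical stand-in for the paper's algorithm.

```python
import numpy as np

# Minimal sketch of online grasp-pose ranking (not the paper's system):
# score candidate grasps with a linear model over simple features and
# update the weights after each success/failure, online-logistic style.
w = np.zeros(3)

def rank(candidates):
    """candidates: per-grasp features, e.g. (clearance, stem angle, depth)."""
    return candidates[np.argmax(candidates @ w)]

def update(features, success, lr=0.1):
    global w
    pred = 1.0 / (1.0 + np.exp(-features @ w))      # predicted success prob.
    w += lr * (float(success) - pred) * features    # online logistic step

cands = np.array([[0.9, 0.1, 0.3], [0.2, 0.8, 0.5]])  # two hypothetical grasps
chosen = rank(cands)
update(chosen, success=True)   # the robot tried the grasp; it worked
print(w)                       # weights shift toward the successful features
```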

Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, a growing number of power-system applications collect data from non-Euclidean domains and represent them as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to existing deep neural networks defined in Euclidean domains. Recently, many studies extending deep neural networks to graph-structured data in power systems have emerged. This paper presents a comprehensive overview of graph neural networks (GNNs) in power systems. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and some research trends concerning the application of GNNs in power systems are discussed.
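To make one of the classical paradigms concrete, here is a minimal from-scratch sketch of a graph convolutional (GCN) layer in the style of Kipf and Welling, applied to a toy 4-bus grid. The adjacency matrix and per-bus features are illustrative placeholders.

```python
import numpy as np

# Minimal sketch of one classical paradigm (a GCN layer, Kipf & Welling
# style) on a toy 4-bus grid; adjacency and features are placeholders.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)                # bus-to-bus connectivity
X = np.random.default_rng(0).normal(size=(4, 3))   # per-bus features (e.g. P, Q, V)
W = np.random.default_rng(1).normal(size=(3, 2))   # learnable layer weights

A_hat = A + np.eye(4)                              # add self-loops
d = A_hat.sum(1)
A_norm = A_hat / np.sqrt(np.outer(d, d))           # D^-1/2 (A + I) D^-1/2
H = np.maximum(A_norm @ X @ W, 0)                  # one GCN layer + ReLU
print(H.shape)  # (4, 2): a new embedding per bus, mixing neighbor features
```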

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing their memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys progress in low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques fall into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, and disadvantages of the techniques in each category, along with potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
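Category (1) is easy to demonstrate end to end on a single weight tensor. The sketch below applies magnitude pruning followed by uniform 8-bit post-training quantization; the 50% sparsity target and the single per-tensor scale are simplifying assumptions (real deployments often use per-channel scales).

```python
import numpy as np

# Minimal sketch of two category-(1) techniques: magnitude pruning and
# uniform 8-bit post-training quantization of a single weight tensor.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)

# Pruning: zero out the 50% of weights with the smallest magnitude.
thresh = np.quantile(np.abs(W), 0.5)
W_pruned = np.where(np.abs(W) >= thresh, W, 0.0)

# Quantization: map the surviving weights to int8 with one scale factor.
scale = np.abs(W_pruned).max() / 127.0
W_q = np.round(W_pruned / scale).astype(np.int8)   # stored/compute format
W_deq = W_q.astype(np.float32) * scale             # dequantized for comparison

print("sparsity:", (W_pruned == 0).mean(),
      "max abs error:", np.abs(W_deq - W_pruned).max())
```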

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style and illumination, and 2) the instance-level shift, such as object appearance and size. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, on the image level and the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate its effectiveness for robust object detection under various domain shifts.
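The adversarial component rests on a standard building block, a gradient reversal layer: the domain classifier is trained normally, while the gradients flowing back into the feature extractor are flipped so that the features become domain-invariant. The sketch below shows that block in PyTorch (in the style of DANN), not the paper's full Faster R-CNN pipeline; the feature dimensions are placeholders.

```python
import torch

# Minimal sketch of the adversarial-training building block (a gradient
# reversal layer); this is the standard trick, not the paper's full pipeline.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)          # identity in the forward pass

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None  # flip gradients flowing to the backbone

feat = torch.randn(8, 256, requires_grad=True)   # image/instance features
domain_clf = torch.nn.Linear(256, 2)             # source-vs-target classifier
logits = domain_clf(GradReverse.apply(feat, 1.0))
loss = torch.nn.functional.cross_entropy(logits, torch.randint(0, 2, (8,)))
loss.backward()  # the classifier learns the domain; the features unlearn it
```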
