
Learning-based approaches have achieved impressive performance in autonomous driving, and a growing number of data-driven methods are being studied for the decision-making and planning module. However, the reliability and stability of neural networks remain challenging. In this paper, we introduce a hierarchical imitation method comprising a high-level grid-based behavior planner and a low-level trajectory planner, which can serve as a stand-alone data-driven driving policy and can also be easily embedded into a rule-based architecture. We evaluate our method both in closed-loop simulation and in real-world driving, and demonstrate that the neural network planner achieves outstanding performance in complex urban autonomous driving scenarios.
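As a rough illustration of a two-level policy of this kind, the following minimal sketch pairs a high-level head that scores cells of a coarse behavior grid with a low-level head that regresses a short trajectory conditioned on the chosen cell. The layer sizes, grid resolution, and feature encoder are illustrative assumptions, not the architecture from the paper.

```python
# Hypothetical hierarchical planner: grid-based behavior head + trajectory head.
import torch
import torch.nn as nn

class HierarchicalPlanner(nn.Module):
    def __init__(self, feat_dim=128, grid_size=16, horizon=10):
        super().__init__()
        self.grid_size, self.horizon = grid_size, horizon
        self.encoder = nn.Sequential(nn.Linear(64, feat_dim), nn.ReLU())
        # High level: logits over a grid_size x grid_size behavior grid.
        self.behavior_head = nn.Linear(feat_dim, grid_size * grid_size)
        # Low level: (x, y) waypoints for `horizon` future steps.
        self.traj_head = nn.Linear(feat_dim + 2, horizon * 2)

    def forward(self, obs):
        feat = self.encoder(obs)
        grid_logits = self.behavior_head(feat)
        # Convert the arg-max cell to normalized (row, col) coordinates.
        idx = grid_logits.argmax(dim=-1)
        cell = torch.stack([idx // self.grid_size, idx % self.grid_size], -1)
        cell = cell.float() / self.grid_size
        traj = self.traj_head(torch.cat([feat, cell], dim=-1))
        return grid_logits, traj.view(-1, self.horizon, 2)

obs = torch.randn(4, 64)   # batch of ego/scene features (assumed shape)
grid_logits, trajectory = HierarchicalPlanner()(obs)
```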

Related content

Imitation learning refers to a family of tasks in which a learner attempts to imitate expert behavior in order to achieve the best possible performance. Current mainstream approaches include supervised imitation learning, stochastic mixing iterative learning, and data-aggregation imitation learning. The principle behind imitation learning is to implicitly give the learner prior information about the world, for example by executing and learning from human behavior. In an imitation learning task, the agent looks for the best way to use a training set of input-output pairs demonstrated by a human expert, so that it can learn a policy that behaves as much like that expert as possible. Imitation learning can also be used when an agent learns human behavior, but simulating that behavior in real time is very costly. By contrast, apprenticeship learning, proposed by Andrew Ng, executes a purely greedy/exploitative policy and uses reinforcement learning to traverse all (state and action) trajectories in order to learn a near-optimal policy. It requires extremely difficult maneuvers, and it is almost impossible to recover from states that were never observed. Imitation learning can handle these unexplored states, and therefore offers a more reliable general framework for many tasks such as autonomous driving.
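As a concrete illustration of the supervised (behavior-cloning) variant described above, this minimal sketch fits a small policy network to expert (state, action) pairs by regression. The network size and the synthetic expert data are assumptions for illustration only.

```python
# Behavior cloning: supervised imitation on expert input-output pairs.
import torch
import torch.nn as nn

states = torch.randn(1024, 8)                 # expert observations (assumed dim)
actions = torch.tanh(states[:, :2] * 0.5)     # stand-in expert actions

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(50):
    pred = policy(states)
    loss = nn.functional.mse_loss(pred, actions)   # imitate the expert's actions
    opt.zero_grad()
    loss.backward()
    opt.step()
```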

The use of AI systems in healthcare for the early screening of diseases is of great clinical importance. Deep learning has shown great promise in medical imaging, but concerns about the reliability and trustworthiness of AI systems limit their deployment in real clinical settings, where patient safety is at stake. Uncertainty estimation plays a pivotal role in producing a confidence evaluation along with the prediction of the deep model. This is particularly important in medical imaging, where the uncertainty in the model's predictions can be used to identify areas of concern or to provide additional information to the clinician. In this paper, we review the various types of uncertainty in deep learning, including aleatoric uncertainty and epistemic uncertainty. We further discuss how they can be estimated in medical imaging. More importantly, we review recent advances in deep learning models that incorporate uncertainty estimation in medical imaging. Finally, we discuss the challenges and future directions of uncertainty estimation in deep learning for medical imaging. We hope this review will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of uncertainty estimation models in medical imaging.
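One common recipe for estimating epistemic uncertainty, mentioned in many of the works this review covers, is Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and treat the spread of the predictions as uncertainty. The toy classifier and input shape below are illustrative assumptions.

```python
# Monte Carlo dropout sketch for epistemic uncertainty.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Dropout(p=0.5),
                      nn.Linear(64, 2))

def mc_dropout_predict(model, x, n_samples=20):
    model.train()                       # keep dropout stochastic at inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)  # predictive mean and epistemic spread

x = torch.randn(4, 32)                  # e.g. features of 4 image patches
mean, var = mc_dropout_predict(model, x)
```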

Molecular de novo design is a critical yet challenging task in scientific fields, aiming to design novel molecular structures with desired property profiles. Significant progress has been made by resorting to generative models for graphs. However, limited attention has been paid to hierarchical generative models, which can exploit the inherent hierarchical structure (with rich semantic information) of molecular graphs and generate complex molecules of larger size, which we show to be difficult for most existing models. The primary challenge to hierarchical generation is the non-differentiability caused by the generation of intermediate discrete coarsened graph structures. To sidestep this issue, we cast the tricky hierarchical generation problem over discrete spaces as the reverse process of hierarchical representation learning and propose MolHF, a new hierarchical flow-based model that generates molecular graphs in a coarse-to-fine manner. Specifically, MolHF first generates bonds through a multi-scale architecture, then generates atoms based on the coarsened graph structure at each scale. We demonstrate that MolHF achieves state-of-the-art performance in random generation and property optimization, implying its high capacity to model the data distribution. Furthermore, MolHF is the first flow-based model that can be applied to model larger molecules (polymers) with more than 100 heavy atoms. The code and models are available at //github.com/violet-sto/MolHF.
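For readers unfamiliar with flow-based models, the sketch below shows a generic affine-coupling layer, the invertible building block such models stack at each scale. It is not MolHF itself; the dimensions and the small conditioner network are assumptions.

```python
# Generic affine coupling layer: invertible transform with tractable log-det.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * self.half))

    def forward(self, x):                 # data -> latent, with log-det term
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                 # keep scales bounded for stability
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], -1), s.sum(-1)

    def inverse(self, z):                 # latent -> data (used for sampling)
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=-1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, x2], -1)

x = torch.randn(8, 16)
z, log_det = AffineCoupling()(x)
```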

This letter addresses the problem of trajectory planning in a marsupial robotic system consisting of an unmanned aerial vehicle (UAV) linked to an unmanned ground vehicle (UGV) through a non-taut tether with controllable length. To the best of our knowledge, this is the first method that addresses the trajectory planning of a marsupial UGV-UAV with a non-taut tether. The objective is to determine a synchronized collision-free trajectory for the three agents of the marsupial system: UAV, UGV, and tether. First, we present a path planning solution based on optimal Rapidly-exploring Random Trees (RRT*) with novel sampling and steering techniques to speed up the computation. This algorithm is able to obtain collision-free paths for the UAV and the UGV, taking into account the 3D environment and the tether. Then, the letter presents a trajectory planner based on non-linear least squares. The optimizer takes into account aspects not considered in the path planning, such as temporal constraints on the motion imposed by velocity and acceleration limits of the robots, or increasing the tether's clearance. Simulated and field test results demonstrate that the approach generates obstacle-free, smooth, and feasible trajectories for the marsupial system.
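To make the sampling-based planning step concrete, here is a compact planar RRT sketch. The paper uses RRT* with custom sampling and steering over the full UGV-UAV-tether state; this toy 2D version only illustrates the sample/nearest/extend loop under assumed workspace bounds.

```python
# Minimal 2D RRT: sample, extend the nearest node, stop near the goal.
import random, math

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5):
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.1 else (random.uniform(0, 10),
                                                     random.uniform(0, 10))
        # Extend the nearest node one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        dx, dy = sample[0] - nodes[i][0], sample[1] - nodes[i][1]
        d = math.hypot(dx, dy) or 1e-9
        new = (nodes[i][0] + step * dx / d, nodes[i][1] + step * dy / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:         # reached the goal region
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j]); j = parent[j]
            return path[::-1]
    return None

# Free space is everything outside a disk-shaped obstacle at (5, 5).
path = rrt((0.0, 0.0), (9.0, 9.0), lambda p: math.dist(p, (5.0, 5.0)) > 1.5)
```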

LiDAR sensors are an integral part of modern autonomous vehicles as they provide an accurate, high-resolution 3D representation of the vehicle's surroundings. However, it is computationally difficult to make use of the ever-increasing amounts of data from multiple high-resolution LiDAR sensors. As frame-rates, point cloud sizes and sensor resolutions increase, real-time processing of these point clouds must still extract semantics from this increasingly precise picture of the vehicle's environment. One deciding factor of the run-time performance and accuracy of deep neural networks operating on these point clouds is the underlying data representation and the way it is computed. In this work, we examine the relationship between the computational representations used in neural networks and their performance characteristics. To this end, we propose a novel computational taxonomy of LiDAR point cloud representations used in modern deep neural networks for 3D point cloud processing. Using this taxonomy, we perform a structured analysis of different families of approaches. Thereby, we uncover common advantages and limitations in terms of computational efficiency, memory requirements, and representational capacity as measured by semantic segmentation performance. Finally, we provide some insights and guidance for future developments in neural point cloud processing methods.
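Two of the representation families such a taxonomy covers, sketched with NumPy: a voxel-grid occupancy volume and a spherical range-image projection. The grid resolution, field of view, and image size are illustrative choices, not values from the paper.

```python
# Voxel-grid and range-image views of the same synthetic point cloud.
import numpy as np

points = np.random.uniform(-20, 20, size=(10000, 3))   # stand-in LiDAR scan

# Voxel grid: discretize coordinates and mark occupied cells.
voxel_size = 0.5
idx = np.floor((points - points.min(0)) / voxel_size).astype(int)
grid = np.zeros(idx.max(0) + 1, dtype=bool)
grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True

# Range image: project each point to (azimuth, elevation) pixel coordinates.
h, w = 64, 1024
r = np.linalg.norm(points, axis=1)
azimuth = np.arctan2(points[:, 1], points[:, 0])           # [-pi, pi]
elevation = np.arcsin(points[:, 2] / np.maximum(r, 1e-6))  # [-pi/2, pi/2]
u = ((azimuth + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
v = ((elevation + np.pi / 2) / np.pi * (h - 1)).astype(int)
range_image = np.zeros((h, w), dtype=np.float32)
range_image[v, u] = r
```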

We present a robot base placement and control method that enables a mobile manipulator to gracefully recover from manipulation failures while performing tasks on-the-move. A mobile manipulator in motion has a limited window to complete a task, unlike when stationary where it can make repeated attempts until successful. Existing approaches to manipulation on-the-move are typically based on open-loop execution of planned trajectories which does not allow the base controller to react to manipulation failures, slowing down or stopping as required. To overcome this limitation, we present a reactive base control method that repeatedly evaluates the best base placement given the robot's current state, the immediate manipulation task, as well as the next part of a multi-step task. The result is a system that retains the reliability of traditional mobile manipulation approaches where the base comes to a stop, but leverages the performance gains available by performing manipulation on-the-move. The controller keeps the base in range of the target for as long as required to recover from manipulation failures while making as much progress as possible toward the next objective. See //benburgesslimerick.github.io/MotM-FailureRecovery for videos of experiments.
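A rough sketch of the reactive idea: every control step, re-score candidate base placements given the current robot state and the current and next manipulation targets, then command a velocity toward the best one. The cost terms, weights, and robot interface are hypothetical placeholders, not the paper's controller.

```python
# Re-evaluate the best base placement each control step.
import numpy as np

def best_base_placement(robot_xy, target_xy, next_target_xy, candidates,
                        reach=0.8, w_next=0.3):
    costs = []
    for c in candidates:
        in_range = np.linalg.norm(c - target_xy) < reach
        cost = (np.linalg.norm(c - robot_xy)                    # travel effort
                + w_next * np.linalg.norm(c - next_target_xy)   # progress to next task
                + (0.0 if in_range else 10.0))                  # keep target reachable
        costs.append(cost)
    return candidates[int(np.argmin(costs))]

# One iteration of the closed loop (would run at the controller rate):
robot = np.array([0.0, 0.0])
target, next_target = np.array([1.0, 0.5]), np.array([3.0, 0.0])
candidates = np.random.uniform(-1, 4, size=(200, 2))
goal = best_base_placement(robot, target, next_target, candidates)
velocity_cmd = 0.5 * (goal - robot)     # simple proportional base command
```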

Drones have the potential to revolutionize power line inspection by increasing productivity, reducing inspection time, improving data quality, and eliminating the risks for human operators. Current state-of-the-art systems for power line inspection have two shortcomings: (i) control is decoupled from perception and needs accurate information about the location of the power lines and masts; (ii) collision avoidance is decoupled from the power line tracking, which results in poor tracking in the vicinity of the power masts, and, consequently, in decreased data quality for visual inspection. In this work, we propose a model predictive controller (MPC) that overcomes these limitations by tightly coupling perception and action. Our controller generates commands that maximize the visibility of the power lines while, at the same time, safely avoiding the power masts. For power line detection, we propose a lightweight learning-based detector that is trained only on synthetic data and is able to transfer zero-shot to real-world power line images. We validate our system in simulation and real-world experiments on a mock-up power line infrastructure.
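A much-simplified receding-horizon sketch of the coupling idea: optimize a short sequence of 2D velocity commands so the drone stays close to the power line (a proxy for keeping it visible) while paying a penalty for approaching a mast. The dynamics, cost weights, and horizon are illustrative; the paper's MPC couples a full perception model and vehicle dynamics that are not reproduced here.

```python
# Toy receding-horizon optimization with visibility and mast-avoidance terms.
import numpy as np
from scipy.optimize import minimize

dt, horizon = 0.2, 10
start = np.array([0.0, 2.0])        # drone position (line assumed along y = 0)
mast = np.array([1.5, 0.0])         # obstacle to avoid

def cost(u_flat):
    u = u_flat.reshape(horizon, 2)
    p, total = start.copy(), 0.0
    for k in range(horizon):
        p = p + dt * u[k]                               # single-integrator dynamics
        total += p[1] ** 2                              # stay over the line (visibility proxy)
        total += 5.0 / (0.1 + np.sum((p - mast) ** 2))  # soft mast-avoidance penalty
        total += 0.01 * np.sum(u[k] ** 2)               # control effort
    return total

res = minimize(cost, np.zeros(horizon * 2), method="L-BFGS-B")
first_command = res.x.reshape(horizon, 2)[0]            # apply only the first input
```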

A key challenge in off-road navigation is that even visually similar terrains, or terrains from the same semantic class, may have substantially different traction properties. Existing work typically assumes no wheel slip or uses the expected traction for motion planning, where the predicted trajectories provide a poor indication of the actual performance if the terrain traction has high uncertainty. In contrast, this work models traversability as the empirical distribution of traction parameters in unicycle dynamics, which can be learned by a neural network in a self-supervised fashion. The probabilistic traction model leads to two risk-aware cost formulations that account for the worst-case expected cost and traction. To help the learned model generalize to unseen environments, terrains with features that lead to unreliable predictions are detected via a density estimator fitted to the trained network's latent space and avoided via auxiliary penalties during planning. Simulation results demonstrate that the proposed approach outperforms existing work that assumes no slip or uses the expected traction, in both navigation success rate and completion time. Furthermore, avoiding terrains with a low density-based confidence score yields up to a 30% improvement in success rate when the learned traction model is used in a novel environment.
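In the spirit of the worst-case expected cost above, the sketch below scores a candidate speed on a terrain patch by the expected traversal time over the worst alpha-fraction of traction draws (a CVaR-style measure). The traction samples and the time-based cost model are synthetic stand-ins, not the paper's formulation.

```python
# CVaR-style risk-aware cost over sampled traction parameters.
import numpy as np

def cvar_time_cost(traction_samples, nominal_speed, distance=1.0, alpha=0.2):
    speeds = np.clip(traction_samples, 1e-3, None) * nominal_speed
    times = distance / speeds                     # traversal time per sample
    worst = np.sort(times)[-int(np.ceil(alpha * len(times))):]
    return worst.mean()                           # expected cost in the worst tail

traction = np.random.beta(2, 5, size=1000)        # assumed learned traction draws
risk_cost = cvar_time_cost(traction, nominal_speed=2.0)
```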

Multi-object tracking (MOT) is a crucial component of situational awareness in military defense applications. With the growing use of unmanned aerial systems (UASs), MOT methods for aerial surveillance are in high demand. Applying MOT in UAS settings presents specific challenges such as a moving sensor, changing zoom levels, dynamic backgrounds, illumination changes, obscurations, and small objects. In this work, we present a robust object tracking architecture designed to accommodate the noise present in real-time situations. We propose a kinematic prediction model, called Deep Extended Kalman Filter (DeepEKF), in which a sequence-to-sequence architecture is used to predict entity trajectories in latent space. DeepEKF utilizes a learned image embedding along with an attention mechanism trained to weight the importance of areas in an image to predict future states. For the visual scoring, we experiment with different similarity measures to calculate distance based on entity appearances, including a convolutional neural network (CNN) encoder pre-trained using Siamese networks. In initial evaluation experiments, we show that our method, which combines the scoring structures of the kinematic and visual models within an MHT framework, improves performance, especially in edge cases where entity motion is unpredictable or the data presents frames with significant gaps.
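To illustrate how kinematic and appearance scores are typically fused when rating a track-detection pair, here is a small sketch combining a Mahalanobis distance with a cosine appearance distance. The covariance, embeddings, and 0.5 mixing weight are illustrative assumptions, not the DeepEKF model described above.

```python
# Combine kinematic (Mahalanobis) and appearance (cosine) distances.
import numpy as np

def association_score(pred_pos, pred_cov, det_pos, track_emb, det_emb, w=0.5):
    # Kinematic term: Mahalanobis distance under the predicted covariance.
    diff = det_pos - pred_pos
    d_kin = float(diff @ np.linalg.inv(pred_cov) @ diff)
    # Appearance term: cosine distance between embedding vectors.
    d_app = 1.0 - float(track_emb @ det_emb /
                        (np.linalg.norm(track_emb) * np.linalg.norm(det_emb) + 1e-9))
    return w * d_kin + (1 - w) * d_app

score = association_score(np.array([10.0, 5.0]), np.eye(2) * 4.0,
                          np.array([11.0, 5.5]),
                          np.random.randn(128), np.random.randn(128))
```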

Graph convolutional networks (GCNs) have been successfully applied to node classification tasks in network mining. However, most of these models based on neighborhood aggregation are usually shallow and lack a "graph pooling" mechanism, which prevents the model from obtaining adequate global information. In order to increase the receptive field, we propose a novel deep Hierarchical Graph Convolutional Network (H-GCN) for semi-supervised node classification. H-GCN first repeatedly aggregates structurally similar nodes into hyper-nodes and then refines the coarsened graph back to the original to restore the representation of each node. Instead of merely aggregating one- or two-hop neighborhood information, the proposed coarsening procedure enlarges the receptive field of each node, so more global information can be learned. Comprehensive experiments conducted on public datasets demonstrate the effectiveness of the proposed method over state-of-the-art methods. Notably, our model gains substantial improvements when only a few labeled samples are provided.
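For context, the basic neighborhood-aggregation step that such models build on is a graph-convolution layer with symmetric normalization, sketched below; the coarsening and refining pipeline of H-GCN itself is not reproduced here, and the sizes are illustrative.

```python
# Minimal GCN layer: normalized aggregation followed by a linear transform.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))            # add self-loops
        deg_inv_sqrt = a_hat.sum(-1).clamp(min=1).pow(-0.5)
        a_norm = deg_inv_sqrt[:, None] * a_hat * deg_inv_sqrt[None, :]
        return torch.relu(a_norm @ self.weight(x))      # aggregate then transform

adj = (torch.rand(6, 6) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()                     # symmetric toy graph
adj.fill_diagonal_(0.0)
x = torch.randn(6, 8)
h = GCNLayer(8, 16)(x, adj)
```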

Recently, deep learning has achieved very promising results in visual object tracking. Deep neural networks in existing tracking methods require a lot of training data to learn their large number of parameters. However, training data is not sufficient for visual object tracking, as annotations of a target object are only available in the first frame of a test sequence. In this paper, we propose to learn hierarchical features for visual object tracking using tree-structure-based Recursive Neural Networks (RNNs), which have fewer parameters than other deep neural networks such as Convolutional Neural Networks (CNNs). First, we learn RNN parameters to discriminate between the target object and the background in the first frame of a test sequence. A tree structure over local patches of an exemplar region is randomly generated using a bottom-up greedy search strategy. Given the learned RNN parameters, we create two dictionaries, for target regions and their corresponding local patches, based on the hierarchical features learned from both the top and leaf nodes of multiple random trees. In each of the subsequent frames, we perform sparse dictionary coding on all candidates and select the best candidate as the new target location. In addition, we update the two dictionaries online to handle appearance changes of the target object. Experimental results demonstrate that our feature learning algorithm can significantly improve tracking performance on benchmark datasets.
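As a rough sketch of the candidate-selection step, the snippet below sparse-codes each candidate feature against a dictionary of target templates and picks the candidate with the smallest reconstruction error. The features here are random stand-ins for the hierarchical features described above.

```python
# Sparse dictionary coding of candidates; pick the best-reconstructed one.
import numpy as np
from sklearn.decomposition import sparse_encode

target_dictionary = np.random.randn(20, 64)       # 20 templates, 64-dim features
candidates = np.random.randn(50, 64)              # features of 50 candidate regions

codes = sparse_encode(candidates, target_dictionary,
                      algorithm="lasso_lars", alpha=0.1)
reconstruction = codes @ target_dictionary
errors = np.linalg.norm(candidates - reconstruction, axis=1)
best_candidate = int(np.argmin(errors))           # new target location index
```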
