
Intelligent vehicles and autonomous driving systems rely on scenario engineering for intelligence and index (I&I), calibration and certification (C&C), and verification and validation (V&V). To extract and index scenarios, the various interactions between vehicles deserve particular attention and call for refined descriptions and labels. However, existing methods cannot cope well with the problem of classifying and labeling scenarios with vehicle interactions as the core. In this paper, we propose the VistaScenario framework to conduct interaction scenario engineering for vehicles with intelligent systems for transport automation. Based on the summarized basic types of vehicle interactions, we slice the scenario data stream into a series of segments via a spatiotemporal scenario evolution tree. We also propose the scenario metric Graph-DTW, based on the Graph Computation Tree and Dynamic Time Warping, to conduct refined scenario comparison and labeling. Extreme interaction scenarios and corner cases can thus be efficiently filtered and extracted. Moreover, testing examples on a trajectory prediction model with naturalistic scenario datasets demonstrate the effectiveness and advantages of our framework. VistaScenario provides solid support for the usage and indexing of scenario data, and further promotes the development of intelligent vehicles and transport automation.
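
The Graph-DTW metric itself is not reproduced here, but the dynamic time warping component it builds on is easy to illustrate. Below is a minimal NumPy sketch of classical DTW between two variable-length scenario feature sequences; the per-frame features and the Euclidean local cost are assumptions for illustration, not the paper's graph-based formulation.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classical dynamic time warping between two feature sequences.

    seq_a, seq_b: arrays of shape (T_a, d) and (T_b, d), e.g. per-frame
    interaction features of two scenario segments.
    """
    t_a, t_b = len(seq_a), len(seq_b)
    cost = np.full((t_a + 1, t_b + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t_a + 1):
        for j in range(1, t_b + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[t_a, t_b]

# Toy usage: two scenario segments of different lengths.
a = np.random.rand(50, 4)
b = np.random.rand(65, 4)
print(dtw_distance(a, b))
```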

Related Content

The IFIP TC13 Conference on Human-Computer Interaction is an important venue for researchers and practitioners in human-computer interaction to present their work. Over the years, these conferences have attracted researchers from many countries and cultures.
June 27, 2024

Thoroughly testing autonomy systems is crucial in the pursuit of safe autonomous driving vehicles. It necessitates creating safety-critical scenarios that go beyond what can be safely collected from real-world data, as many of these scenarios occur infrequently on public roads. However, the evaluation of most existing novel view synthesis (NVS) methods relies on sporadic sampling of image frames from the training data, comparing the rendered images with ground-truth images using image-quality metrics. Unfortunately, this evaluation protocol falls short of meeting the actual requirements of closed-loop simulation. Specifically, the true application demands the capability to render novel views that extend beyond the original trajectory (such as cross-lane views), which are challenging to capture in the real world. To address this gap, this paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulation. The dataset is unique in that it includes testing images captured by deviating from the training trajectory by 1-4 meters. It comprises six sequences covering various time and weather conditions; each sequence contains 450 training images, 150 testing images, and their corresponding camera poses and intrinsic parameters. Leveraging this novel dataset, we establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings. The experimental findings underscore the significant gap in current approaches, revealing their inadequate ability to fulfill the demanding prerequisites of cross-lane or closed-loop simulation. Our dataset is released publicly at the project page: //3d-aigc.github.io/XLD/.
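
As an illustration of the image-metric evaluation protocol that the benchmark critiques and extends, here is a minimal sketch of computing PSNR between rendered and ground-truth test views; the toy arrays and variable names are hypothetical and do not reflect the dataset's actual loading interface.

```python
import numpy as np

def psnr(rendered, ground_truth, max_val=255.0):
    """Peak signal-to-noise ratio between a rendered view and its ground truth."""
    mse = np.mean((rendered.astype(np.float64) - ground_truth.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy usage with random images; in practice the rendered and ground-truth
# test views from the held-out (deviated) trajectory would be loaded here.
rng = np.random.default_rng(0)
gt = rng.integers(0, 256, size=(270, 480, 3), dtype=np.uint8)
pred = np.clip(gt.astype(np.int32) + rng.integers(-5, 6, size=gt.shape), 0, 255).astype(np.uint8)
print(psnr(pred, gt))
```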

The proliferation of robot-assisted minimally invasive surgery highlights the need for advanced training tools such as cost-effective robotic endotrainers. Current surgical robots often lack haptic feedback, which is crucial for providing surgeons with a real-time sense of touch. This absence can impact the surgeon's ability to perform delicate operations effectively. To enhance surgical training and address this deficiency, we have integrated a cost-effective haptic feedback system into a robotic endotrainer. This system incorporates both kinesthetic (force) and tactile feedback, improving the fidelity of surgical simulations and enabling more precise control during operations. Our system incorporates an innovative, cost-effective Force/Torque sensor utilizing optoelectronic technology, specifically designed to accurately detect forces and moments exerted on surgical tools with a 95% accuracy, providing essential kinesthetic feedback. Additionally, we implemented a tactile feedback mechanism that informs the surgeon of the gripping forces between the tool's tip and the tissue. This dual feedback system enhances the fidelity of training simulations and the execution of robotic surgeries, promoting broader adoption and safer practices.
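
The optoelectronic sensing details are hardware-specific, but the calibration idea behind such a sensor, mapping raw optical channel readings to a six-axis force/torque wrench, can be sketched generically. The least-squares calibration below is an illustrative assumption, not the authors' actual procedure or sensor interface.

```python
import numpy as np

def fit_calibration(raw_readings, reference_wrenches):
    """Fit a linear calibration matrix C so that raw @ C approximates the wrench.

    raw_readings: (N, n_channels) raw optical intensity samples.
    reference_wrenches: (N, 6) known [Fx, Fy, Fz, Tx, Ty, Tz] loads.
    """
    C, *_ = np.linalg.lstsq(raw_readings, reference_wrenches, rcond=None)
    return C  # shape (n_channels, 6)

def readings_to_wrench(C, raw):
    """Convert raw channel readings to an estimated force/torque wrench."""
    return raw @ C

# Toy data: 200 calibration samples from a hypothetical 8-channel sensor.
rng = np.random.default_rng(0)
true_C = rng.normal(size=(8, 6))
raw = rng.normal(size=(200, 8))
wrench = raw @ true_C + 0.01 * rng.normal(size=(200, 6))  # noisy reference loads
C = fit_calibration(raw, wrench)
print(readings_to_wrench(C, raw[:1]))
```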

Recent advances in reinforcement learning (RL) heavily rely on a variety of well-designed benchmarks, which provide environmental platforms and consistent criteria to evaluate existing and novel algorithms. Specifically, in multi-agent RL (MARL), a plethora of benchmarks based on cooperative games have spurred the development of algorithms that improve the scalability of cooperative multi-agent systems. However, for the competitive setting, a lightweight and open-sourced benchmark with challenging gaming dynamics and visual inputs has not yet been established. In this work, we present FightLadder, a real-time fighting game platform, to empower competitive MARL research. Along with the platform, we provide implementations of state-of-the-art MARL algorithms for competitive games, as well as a set of evaluation metrics to characterize the performance and exploitability of agents. We demonstrate the feasibility of this platform by training a general agent that consistently defeats 12 built-in characters in single-player mode, and expose the difficulty of training a non-exploitable agent without human knowledge and demonstrations in two-player mode. FightLadder provides meticulously designed environments to address critical challenges in competitive MARL research, aiming to catalyze a new era of discovery and advancement in the field. Videos and code at //sites.google.com/view/fightladder/home.
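
FightLadder's actual API is documented with its code release; the sketch below only illustrates the shape of a two-player competitive episode loop under an assumed Gym-style interface, with a dummy stand-in environment so it runs. None of the class or method names correspond to the platform's real code.

```python
import numpy as np

class DummyFightEnv:
    """Stand-in environment with the assumed two-player interface; not FightLadder itself."""
    def __init__(self, horizon=100, n_actions=12):
        self.horizon, self.n_actions, self.t = horizon, n_actions, 0

    def reset(self):
        self.t = 0
        return np.zeros(8), np.zeros(8)  # one observation per player

    def step(self, actions):
        self.t += 1
        done = self.t >= self.horizon
        r = float(np.random.choice([-1.0, 1.0])) if done else 0.0  # zero-sum outcome
        return (np.zeros(8), np.zeros(8)), (r, -r), done, {}

class RandomAgent:
    def __init__(self, n_actions):
        self.n_actions = n_actions

    def act(self, observation):
        return np.random.randint(self.n_actions)

def play_episode(env, agent_a, agent_b):
    obs_a, obs_b = env.reset()
    done, ret_a, ret_b = False, 0.0, 0.0
    while not done:
        actions = (agent_a.act(obs_a), agent_b.act(obs_b))
        (obs_a, obs_b), (r_a, r_b), done, _ = env.step(actions)
        ret_a, ret_b = ret_a + r_a, ret_b + r_b
    # Zero-sum convention: the positive return marks the winning side.
    return ret_a, ret_b

env = DummyFightEnv()
print(play_episode(env, RandomAgent(env.n_actions), RandomAgent(env.n_actions)))
```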

The rapid research and development of generative artificial intelligence has enabled the generation of high-quality images, text, and 3D models from text prompts. This advancement impels an inquiry into whether these models can be leveraged to create digital artifacts for both creative and engineering applications. Drawing on innovative designs from other domains may be one answer to this question, much like the historical practice of "bionics", where humans have sought inspiration from nature's exemplary designs. This raises the intriguing possibility of using generative models to simultaneously tackle design tasks across multiple domains, facilitating cross-domain learning and resulting in a series of innovative design solutions. In this paper, we propose LLM2FEA as the first attempt to discover novel designs in generative models by transferring knowledge across multiple domains. By utilizing a multi-factorial evolutionary algorithm (MFEA) to drive a large language model, LLM2FEA integrates knowledge from various fields to generate prompts that guide the generative model in discovering novel and practical objects. Experimental results in the context of 3D aerodynamic design verify the discovery capabilities of the proposed LLM2FEA. The designs generated by LLM2FEA not only satisfy practicality requirements to a certain degree but also feature novel and aesthetically pleasing shapes, demonstrating the potential applications of LLM2FEA in discovery tasks.
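
The MFEA-driven prompt search can be pictured with a heavily simplified evolutionary loop: a population of prompts is mutated (in the paper, by a large language model) and selected against several design factors. The mutation and fitness functions below are runnable placeholders, not LLM2FEA's operators or any real model API.

```python
import random

def llm_mutate(prompt):
    # Placeholder: in LLM2FEA a large language model rewrites the prompt;
    # here a random modifier is appended so the sketch runs without a model.
    modifiers = ["streamlined", "bio-inspired", "low-drag", "winged"]
    return prompt + " " + random.choice(modifiers)

def evaluate_factor(prompt, factor):
    # Placeholder fitness: count factor-relevant keywords in the prompt.
    keywords = {"aero": ["low-drag", "streamlined"],
                "aesthetic": ["bio-inspired", "winged"]}
    return sum(prompt.count(k) for k in keywords[factor])

def evolve(seed_prompt, factors, pop_size=8, generations=5):
    population = [seed_prompt] * pop_size
    for _ in range(generations):
        offspring = [llm_mutate(p) for p in population]
        pool = population + offspring
        # Score each candidate on its best factor (a crude stand-in for
        # MFEA's skill-factor assignment) and keep the fittest prompts.
        pool.sort(key=lambda p: max(evaluate_factor(p, f) for f in factors),
                  reverse=True)
        population = pool[:pop_size]
    return population[0]

print(evolve("a concept car body", ["aero", "aesthetic"]))
```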

Understanding the interdependence between autonomous and human-operated vehicles remains an ongoing challenge, with significant implications for the safety and feasibility of autonomous driving. This interdependence arises from inherent interactions among road users. Thus, it is crucial for Autonomous Vehicles (AVs) to understand and analyze the intentions of human-driven vehicles, and to display behavior comprehensible to other traffic participants. To this end, this paper presents GTP-UDRIVE, a unified game-theoretic trajectory planner and decision-maker considering a mixed-traffic environment. Our model considers the intentions of other vehicles in the decision-making process and provides the AV with a human-like trajectory, based on the clothoid interpolation technique. This study investigates a solver based on Particle Swarm Optimization (PSO) that quickly converges to an optimal decision. Among highly interactive traffic scenarios, the intersection crossing is particularly challenging. Hence, we choose to demonstrate the feasibility and effectiveness of our method in real traffic conditions, using an experimental autonomous vehicle at an unsignalized intersection. Testing results reveal that our approach is suitable for 1) making decisions and generating trajectories simultaneously, 2) describing the vehicle's trajectory as a piecewise clothoid and enforcing geometric constraints, and 3) reducing the search space dimensionality of the trajectory optimization problem.
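
The planner's solver is based on Particle Swarm Optimization; a generic PSO over a toy cost function is sketched below. The cost, bounds, and hyperparameters are illustrative and merely stand in for the game-theoretic trajectory cost described above.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer minimizing `cost` over a box-bounded space."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # per-particle best positions
    pbest_val = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy usage: minimize a quadratic in place of a trajectory cost.
best, best_val = pso(lambda p: np.sum((p - 1.0) ** 2), dim=4)
print(best, best_val)
```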

Traditional robotic motion planning methods often struggle with fixed resolutions in dynamically changing environments. To address these challenges, we introduce the A-OctoMap, an adaptive Octo-Tree structure that enhances spatial representation and facilitates real-time, efficient motion planning. This novel framework allows for dynamic space partitioning and multi-resolution queries, significantly improving computational efficiency and precision. Key innovations include a tree-based data structure for enhanced geometric processing, real-time map updating for accurate trajectory planning, and efficient collision detection. Our extensive testing shows superior navigation safety and efficiency in complex settings compared to conventional methods. A-OctoMap sets a new standard for adaptive spatial mapping in autonomous systems, promising significant advancements in navigating unpredictable environments.
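
To make the idea of dynamic space partitioning and multi-resolution queries concrete, here is a generic adaptive octree sketch for point occupancy. It illustrates the underlying data structure only and is not the A-OctoMap implementation.

```python
class OctreeNode:
    """Adaptive octree cell: subdivides only where inserted points are dense."""
    def __init__(self, center, half_size, max_points=8, max_depth=6, depth=0):
        self.center, self.half_size = center, half_size
        self.max_points, self.max_depth, self.depth = max_points, max_depth, depth
        self.points = []
        self.children = None  # list of 8 children once the node subdivides

    def _child_index(self, p):
        cx, cy, cz = self.center
        return (p[0] > cx) | ((p[1] > cy) << 1) | ((p[2] > cz) << 2)

    def insert(self, p):
        if self.children is not None:
            self.children[self._child_index(p)].insert(p)
            return
        self.points.append(p)
        # Subdivide only where the data is dense, keeping coarse cells in
        # empty regions and fine cells around obstacles.
        if len(self.points) > self.max_points and self.depth < self.max_depth:
            h = self.half_size / 2.0
            cx, cy, cz = self.center
            self.children = [
                OctreeNode((cx + (h if i & 1 else -h),
                            cy + (h if i & 2 else -h),
                            cz + (h if i & 4 else -h)),
                           h, self.max_points, self.max_depth, self.depth + 1)
                for i in range(8)
            ]
            for q in self.points:
                self.children[self._child_index(q)].insert(q)
            self.points = []

    def occupied(self, p):
        """Multi-resolution query: descend to the finest cell containing p."""
        if self.children is None:
            return len(self.points) > 0
        return self.children[self._child_index(p)].occupied(p)

# Toy usage: insert a few obstacle points and query occupancy.
root = OctreeNode(center=(0.0, 0.0, 0.0), half_size=10.0, max_points=2)
for pt in [(1.0, 2.0, 3.0), (1.1, 2.1, 3.1), (-4.0, -4.0, -4.0)]:
    root.insert(pt)
print(root.occupied((1.0, 2.0, 3.0)), root.occupied((-9.0, 9.0, 9.0)))
```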

This work introduces EUvsDisinfo, a multilingual dataset of trustworthy and disinformation articles related to pro-Kremlin themes. It is sourced directly from the debunk articles written by experts leading the EUvsDisinfo project. Our dataset is the largest to-date resource in terms of the overall number of articles and distinct languages. It also provides the largest topical and temporal coverage. Using this dataset, we investigate the dissemination of pro-Kremlin disinformation across different languages, uncovering language-specific patterns targeting specific disinformation topics. We further analyse the evolution of topic distribution over an eight-year period, noting a significant surge in disinformation content before the full-scale invasion of Ukraine in 2022. Lastly, we demonstrate the dataset's applicability in training models to effectively distinguish between disinformation and trustworthy content in multilingual settings.
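
As a sense of the final use case, a minimal disinformation-versus-trustworthy classification baseline could look like the TF-IDF plus logistic regression sketch below. The four toy texts and labels are invented placeholders, not EUvsDisinfo samples, and a serious multilingual setup would use stronger multilingual encoders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder data; real experiments would load article texts and labels
# from the EUvsDisinfo dataset.
texts = [
    "Official statement confirms the agreement was signed by both parties.",
    "Secret documents prove the summit photos were staged by foreign agents.",
    "The agency published its annual inflation report this morning.",
    "Shadow labs are spreading engineered viruses across the border, sources say.",
]
labels = [0, 1, 0, 1]  # 0 = trustworthy, 1 = disinformation

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["Leaked memo reveals the vaccines were designed to track citizens."]))
```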

With the increasing role of Natural Language Processing (NLP) in various applications, challenges concerning bias and stereotype perpetuation are accentuated, which often leads to hate speech and harm. Despite existing studies on sexism and misogyny, issues like homophobia and transphobia remain underexplored and often adopt binary perspectives, putting the safety of LGBTQIA+ individuals at high risk in online spaces. In this paper, we assess the potential harm caused by sentence completions generated by English large language models (LLMs) concerning LGBTQIA+ individuals. This is achieved using QueerBench, our new assessment framework, which employs a template-based approach and a Masked Language Modeling (MLM) task. The analysis indicates that large language models tend to exhibit discriminatory behaviour more frequently towards individuals within the LGBTQIA+ community, reaching a difference gap of 7.2% in the QueerBench score of harmfulness.
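
The template-plus-MLM probing idea can be sketched with an off-the-shelf fill-mask pipeline: a subject term is slotted into a sentence template and the model's top completions for the mask are inspected. The template, subject terms, and model choice below are assumptions for illustration and do not reproduce QueerBench's actual templates, word lists, or harmfulness scoring.

```python
from transformers import pipeline

# BERT-style masked language model; any fill-mask checkpoint would do here.
fill = pipeline("fill-mask", model="bert-base-uncased")
template = "The {subject} person is [MASK]."

for subject in ["queer", "straight"]:
    completions = fill(template.format(subject=subject), top_k=5)
    words = [c["token_str"] for c in completions]
    print(subject, "->", words)

# Downstream, each completion would be scored (e.g. against a harmfulness
# lexicon) and the score distributions compared across subject groups.
```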

Gait recognition, a growing field in biological recognition technology, utilizes distinct walking patterns for accurate individual identification. However, existing methods lack the incorporation of temporal information. To reach the full potential of gait recognition, we advocate for the consideration of temporal features at varying granularities and spans. This paper introduces a novel framework, GaitGS, which aggregates temporal features simultaneously in both granularity and span dimensions. Specifically, the Multi-Granularity Feature Extractor (MGFE) is designed to capture micro-motion and macro-motion information at fine and coarse levels respectively, while the Multi-Span Feature Extractor (MSFE) generates local and global temporal representations. Through extensive experiments on two datasets, our method demonstrates state-of-the-art performance, achieving Rank-1 accuracy of 98.2%, 96.5%, and 89.7% on CASIA-B under different conditions, and 97.6% on OU-MVLP. The source code will be available at //github.com/Haijun-Xiong/GaitGS.
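
The granularity dimension can be illustrated with a toy module that keeps frame-level features (micro-motion) alongside average-pooled clip-level features (macro-motion); this simplification is only a sketch of the idea, not the paper's MGFE/MSFE design.

```python
import torch
import torch.nn as nn

class TwoGranularityPooling(nn.Module):
    """Return per-frame features and coarse clip-level features side by side."""
    def __init__(self, window=4):
        super().__init__()
        self.window = window

    def forward(self, x):
        # x: (batch, time, channels) per-frame gait features.
        fine = x                                           # frame-level (micro-motion)
        coarse = torch.nn.functional.avg_pool1d(           # clip-level (macro-motion)
            x.transpose(1, 2), kernel_size=self.window, stride=self.window
        ).transpose(1, 2)
        return fine, coarse

feats = torch.randn(2, 32, 64)
fine, coarse = TwoGranularityPooling()(feats)
print(fine.shape, coarse.shape)  # (2, 32, 64) and (2, 8, 64)
```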

Vehicle trajectory prediction is crucial for advancing autonomous driving and advanced driver assistance systems (ADAS), enhancing road safety and traffic efficiency. While traditional methods have laid foundational work, modern deep learning techniques, particularly transformer-based models and generative approaches, have significantly improved prediction accuracy by capturing complex and non-linear patterns in vehicle motion and traffic interactions. However, these models often overlook the detailed car-following behaviors and inter-vehicle interactions essential for real-world driving scenarios. This study introduces a Cross-Attention Transformer Enhanced Conditional Diffusion Model (Crossfusor) specifically designed for car-following trajectory prediction. Crossfusor integrates detailed inter-vehicular interactions and car-following dynamics into a robust diffusion framework, improving both the accuracy and realism of predicted trajectories. The model leverages a novel temporal feature encoding framework combining GRU, location-based attention mechanisms, and Fourier embedding to capture historical vehicle dynamics. It employs noise scaled by these encoded historical features in the forward diffusion process, and uses a cross-attention transformer to model intricate inter-vehicle dependencies in the reverse denoising process. Experimental results on the NGSIM dataset demonstrate that Crossfusor outperforms state-of-the-art models, particularly in long-term predictions, showcasing its potential for enhancing the predictive capabilities of autonomous driving systems.
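
Two ingredients mentioned above, Fourier embeddings and a forward diffusion step whose noise is scaled by encoded history features, can be sketched as follows. The shapes, noise schedule, and scaling rule are illustrative assumptions rather than Crossfusor's actual formulation.

```python
import torch

def fourier_embedding(t, dim=64):
    """Sinusoidal (Fourier) embedding of a scalar tensor t with shape (batch,)."""
    half = dim // 2
    freqs = torch.exp(-torch.arange(half, dtype=torch.float32)
                      * torch.log(torch.tensor(10000.0)) / half)
    angles = t[:, None] * freqs[None, :]
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

def forward_diffuse(x0, t, alphas_cumprod, history_scale):
    """q(x_t | x_0) with noise modulated by a per-sample history-derived scale."""
    a_bar = alphas_cumprod[t][:, None]                    # (batch, 1)
    noise = torch.randn_like(x0) * history_scale[:, None]  # history-scaled noise
    return torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * noise

# Toy usage: 8 future-trajectory samples of 12 (x, y) points flattened to 24 dims.
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
x0 = torch.randn(8, 24)
t = torch.randint(0, 1000, (8,))
history_scale = torch.sigmoid(torch.randn(8))  # stand-in for encoded history features
xt = forward_diffuse(x0, t, alphas_cumprod, history_scale)
print(fourier_embedding(t.float()).shape, xt.shape)
```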
