
Mobile telepresence robots (MTRs) have become increasingly popular in the expanding world of remote work, providing new avenues for people to actively participate in activities at a distance. However, humans operating MTRs often have difficulty navigating in densely populated environments due to limited situation awareness and a narrow field of view, which reduces user acceptance and satisfaction. Shared autonomy in navigation has been studied primarily in static environments or in situations where only one pedestrian interacts with the robot. We present a multimodal shared autonomy approach, leveraging visual and haptic guidance, to provide navigation assistance for remote operators in densely populated environments. It uses a modified form of reciprocal velocity obstacles to generate safe control inputs while taking social proxemics constraints into account. Two different visual guidance designs, as well as haptic force rendering, are proposed to convey the safe control input. We conducted a user study to compare the merits and limitations of multimodal navigation assistance against haptic or visual assistance alone on a shared navigation task. The study involved 15 participants operating a virtual telepresence robot in a virtual hall with moving pedestrians, using the different assistance modalities. We evaluated navigation performance, transparency and cooperation, as well as user preferences. Our results showed that participants preferred multimodal assistance with a visual guidance trajectory over the haptic or visual modality alone, although it had no impact on navigation performance. Additionally, we found that visual guidance trajectories conveyed a higher degree of understanding and cooperation than equivalent haptic cues in a navigation task.
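The velocity-obstacle idea behind this kind of assistance can be sketched in a few lines: sample candidate velocities, reject any whose time to collision with a pedestrian (with the combined radius inflated by a proxemic margin) falls inside a safety horizon, and keep the safe candidate closest to the operator's preferred velocity. This is a minimal illustration, not the paper's modified reciprocal formulation; all function names and parameter values are assumptions.

```python
import math

def time_to_collision(p_rel, v_rel, r):
    """Earliest time t >= 0 at which |p_rel + t * v_rel| = r (inf if never)."""
    a = v_rel[0] ** 2 + v_rel[1] ** 2
    b = 2 * (p_rel[0] * v_rel[0] + p_rel[1] * v_rel[1])
    c = p_rel[0] ** 2 + p_rel[1] ** 2 - r * r
    if c <= 0:                       # already within the inflated radius
        return 0.0
    if a == 0:                       # no relative motion
        return math.inf
    disc = b * b - 4 * a * c
    if disc < 0:                     # paths never come within r
        return math.inf
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else math.inf

def safe_velocity(p_robot, v_pref, pedestrians, r_robot, r_ped,
                  proxemic_margin=0.5, horizon=4.0, v_max=1.2, n_samples=120):
    """Pick the sampled velocity closest to v_pref whose time-to-collision with
    every pedestrian (radius inflated by a proxemic margin) exceeds the horizon.
    pedestrians: list of (position, velocity) pairs."""
    best, best_cost = None, math.inf
    for i in range(n_samples):
        ang = 2 * math.pi * i / n_samples
        for speed in (0.0, 0.5 * v_max, v_max):
            v = (speed * math.cos(ang), speed * math.sin(ang))
            ok = True
            for (p_ped, v_ped) in pedestrians:
                p_rel = (p_ped[0] - p_robot[0], p_ped[1] - p_robot[1])
                # separation evolves as p_rel + t * (v_ped - v)
                v_rel = (v_ped[0] - v[0], v_ped[1] - v[1])
                if time_to_collision(p_rel, v_rel,
                                     r_robot + r_ped + proxemic_margin) < horizon:
                    ok = False
                    break
            if ok:
                cost = (v[0] - v_pref[0]) ** 2 + (v[1] - v_pref[1]) ** 2
                if cost < best_cost:
                    best, best_cost = v, cost
    return best
```

With a pedestrian directly on the preferred path, the returned velocity deviates from the preferred one while staying collision-free over the horizon.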

Related content

Robot navigation in dynamic environments shared with humans is an important but challenging task whose performance deteriorates as the crowd grows. In this paper, a multi-subgoal robot navigation approach based on deep reinforcement learning is proposed that can reason about more comprehensive relationships among all agents (robot and humans). Specifically, the next position point for the robot is planned by incorporating history information and interactions. First, based on a subgraph network, the history information of all agents is aggregated before interactions are encoded through a graph neural network, improving the robot's ability to implicitly anticipate future scenarios. Furthermore, to reduce the probability of unreliable next position points, a selection module is placed after the policy network in the reinforcement learning framework. The next position point generated by the selection module satisfies the task requirements better than one obtained directly from the policy network. Experiments demonstrate that our approach outperforms state-of-the-art approaches in terms of both success rate and collision rate, especially in crowded human environments.
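The role of a selection module placed after a policy network can be illustrated with a toy filter: given candidate next-position points proposed by a policy, discard those that violate a clearance constraint around humans and keep the one making the most progress toward the goal. This is a hypothetical sketch; the function name, the clearance test, and the progress criterion are assumptions, not the paper's learned module.

```python
import math

def select_subgoal(candidates, goal, humans, min_clearance=0.6):
    """Filter candidate next-position points proposed by a policy, rejecting
    those that come too close to any human, then keep the one that makes the
    most progress toward the goal. Returns None if every candidate is rejected."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    feasible = [c for c in candidates
                if all(dist(c, h) >= min_clearance for h in humans)]
    if not feasible:
        return None
    return min(feasible, key=lambda c: dist(c, goal))
```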

Safe and efficient co-planning of multiple robots in environments with pedestrians is promising for applications. In this work, we propose a novel multi-robot social-aware efficient cooperative planner based on off-policy multi-agent reinforcement learning (MARL) under partial, dimension-varying observation and imperfect perception conditions. We adopt a temporal-spatial graph (TSG)-based social encoder to better extract the importance of the social relation between each robot and the pedestrians in its field of view (FOV). We also introduce a K-step lookahead reward in the multi-robot RL framework to avoid aggressive, intrusive, short-sighted, and unnatural motion decisions generated by robots. Moreover, we improve the traditional centralized critic network with a multi-head global attention module that better aggregates local observation information among different robots to guide individual policy updates. Finally, multi-group experimental results verify the effectiveness of the proposed cooperative motion planner.
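A multi-head global attention module of the kind described can be sketched with plain scaled dot-product self-attention over the robots' local observation embeddings. Identity projections are used for brevity; a real critic would learn separate query/key/value weights per head, so treat this as a minimal illustration of the aggregation step only.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def multi_head_attention(obs, n_heads=2):
    """Aggregate per-robot observation embeddings with multi-head scaled
    dot-product self-attention (identity Q/K/V projections for brevity).
    obs: list of N vectors of equal, head-divisible dimension."""
    n, d = len(obs), len(obs[0])
    dh = d // n_heads
    out = [[0.0] * d for _ in range(n)]
    for h in range(n_heads):
        lo, hi = h * dh, (h + 1) * dh
        heads = [v[lo:hi] for v in obs]      # per-head slices act as Q = K = V
        for i in range(n):
            scores = [sum(q * k for q, k in zip(heads[i], heads[j])) / math.sqrt(dh)
                      for j in range(n)]
            w = softmax(scores)              # attention weights over all robots
            for j in range(n):
                for k in range(dh):
                    out[i][lo + k] += w[j] * heads[j][k]
    return out
```

Each robot's output is a convex combination of all robots' embeddings, which is what lets the centralized critic weigh the teammates whose observations matter most.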

In an era of countless content offerings, recommender systems alleviate information overload by providing users with personalized content suggestions. Due to the scarcity of explicit user feedback, modern recommender systems typically optimize for the same fixed combination of implicit feedback signals across all users. However, this approach disregards a growing body of work highlighting that (i) implicit signals can be used by users in diverse ways, signaling anything from satisfaction to active dislike, and (ii) different users communicate preferences in different ways. We propose applying the recent Interaction Grounded Learning (IGL) paradigm to address the challenge of learning representations of diverse user communication modalities. Rather than taking a fixed, human-designed reward function, IGL is able to learn personalized reward functions for different users and then optimize directly for the latent user satisfaction. We demonstrate the success of IGL with experiments using simulations as well as with real-world production traces.

The Internet of Behavior is a research theme that aims to analyze human behavior data on the Internet from the perspective of behavioral psychology, obtain insights about human behavior, and better understand the intention behind the behavior. In this way, the Internet of Behavior can predict human behavioral trends in the future and even change human behavior, which can provide more convenience for human life. With the increasing prosperity of the Internet of Things, more and more behavior-related data is collected on the Internet by connected devices such as sensors. People and behavior are connected through the extension of the Internet of Things -- the Internet of Behavior. At present, the Internet of Behavior has gradually been applied to our lives, but it is still in its early stages, and many opportunities and challenges are emerging. This paper provides an in-depth overview of the fundamental aspects of the Internet of Behavior: (1) We introduce the development process and research status of the Internet of Behavior from the perspective of the Internet of Things. (2) We propose the characteristics of the Internet of Behavior and define its development direction in terms of three aspects: real-time, autonomy, and reliability. (3) We provide a comprehensive summary of the current applications of the Internet of Behavior, including specific discussions in five scenarios that give an overview of the application status of the Internet of Behavior. (4) We discuss the challenges of the Internet of Behavior's development and its future directions, which hopefully will bring some progress to the Internet of Behavior. To the best of our knowledge, this is the first survey paper on the Internet of Behavior. We hope that this in-depth review can provide some useful directions for more productive research in related fields.

Acquiring food items with a fork poses an immense challenge to a robot-assisted feeding system, due to the wide range of material properties and visual appearances present across food groups. Deformable foods necessitate different skewering strategies than firm ones, but inferring such characteristics for several previously unseen items on a plate remains nontrivial. Our key insight is to leverage visual and haptic observations during interaction with an item to rapidly and reactively plan skewering motions. We learn a generalizable, multimodal representation for a food item from raw sensory inputs which informs the optimal skewering strategy. Given this representation, we propose a zero-shot framework to sense visuo-haptic properties of a previously unseen item and reactively skewer it, all within a single interaction. Real-robot experiments with foods of varying levels of visual and textural diversity demonstrate that our multimodal policy outperforms baselines which do not exploit both visual and haptic cues or do not reactively plan. Across 6 plates of different food items, our proposed framework achieves 71\% success over 69 skewering attempts total. Supplementary material, datasets, code, and videos can be found on our $\href{//sites.google.com/view/hapticvisualnet-corl22/home}{website}$.

Stochastic human motion prediction (HMP) has generally been tackled with generative adversarial networks and variational autoencoders. Most prior works aim at predicting highly diverse movements in terms of the skeleton joints' dispersion. This has led to methods predicting fast and motion-divergent movements, which are often unrealistic and incoherent with past motion. Such methods also neglect contexts that need to anticipate diverse low-range behaviors, or actions, with subtle joint displacements. To address these issues, we present BeLFusion, a model that, for the first time, leverages latent diffusion models in HMP to sample from a latent space where behavior is disentangled from pose and motion. As a result, diversity is encouraged from a behavioral perspective. Thanks to our behavior coupler's ability to transfer sampled behavior to ongoing motion, BeLFusion's predictions display a variety of behaviors that are significantly more realistic than the state of the art. To support it, we introduce two metrics, the Area of the Cumulative Motion Distribution, and the Average Pairwise Distance Error, which are correlated to our definition of realism according to a qualitative study with 126 participants. Finally, we prove BeLFusion's generalization power in a new cross-dataset scenario for stochastic HMP.

Modern autonomous vehicles (AVs) often rely on vision, LIDAR, and even radar-based simultaneous localization and mapping (SLAM) frameworks for precise localization and navigation. However, modern SLAM frameworks often lead to unacceptably high levels of drift (i.e., localization error) when AVs observe few visually distinct features or encounter occlusions due to dynamic obstacles. This paper argues that minimizing drift must be a key desiderata in AV motion planning, which requires an AV to take active control decisions to move towards feature-rich regions while also minimizing conventional control cost. To do so, we first introduce a novel data-driven perception module that observes LIDAR point clouds and estimates which features/regions an AV must navigate towards for drift minimization. Then, we introduce an interpretable model predictive controller (MPC) that moves an AV toward such feature-rich regions while avoiding visual occlusions and gracefully trading off drift and control cost. Our experiments on challenging, dynamic scenarios in the state-of-the-art CARLA simulator indicate our method reduces drift up to 76.76% compared to benchmark approaches.
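The drift-aware trade-off can be illustrated with a toy rollout cost: accumulate control effort plus a drift proxy that grows wherever a feature-density map is poor, then pick the candidate control sequence with the lowest total. The quadratic effort term, the `feature_density` callable, and the weight `w_drift` are illustrative assumptions, not the paper's learned perception module or MPC formulation.

```python
def rollout_cost(x0, controls, feature_density, w_drift=1.0, dt=0.5):
    """Toy MPC rollout cost: control effort plus a drift proxy that grows
    where the map provides few visual features (feature_density -> [0, 1])."""
    x, cost = list(x0), 0.0
    for u in controls:
        x[0] += u[0] * dt                       # simple single-integrator model
        x[1] += u[1] * dt
        effort = u[0] ** 2 + u[1] ** 2
        drift = 1.0 - feature_density((x[0], x[1]))  # fewer features => more drift
        cost += effort + w_drift * drift
    return cost

def plan(x0, candidates, feature_density, w_drift=1.0):
    """Pick the candidate control sequence with the minimum rollout cost."""
    return min(candidates, key=lambda c: rollout_cost(x0, c, feature_density, w_drift))
```

With a map that is feature-rich only on one side, the planner steers the rollout toward that side even when the control effort is identical.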

This paper presents a Mixed-Initiative (MI) framework for addressing the problem of control authority transfer between a remote human operator and an AI agent when cooperatively controlling a mobile robot. Our Hierarchical Expert-guided Mixed-Initiative Control Switcher (HierEMICS) leverages information on the human operator's state and intent, with control switching policies based on a criticality hierarchy. An experimental evaluation was conducted in a high-fidelity simulated disaster response and remote inspection scenario, comparing HierEMICS with a state-of-the-art Expert-guided Mixed-Initiative Control Switcher (EMICS) in the context of mobile robot navigation. Results suggest that HierEMICS reduces conflicts for control between the human and the AI agent, a fundamental challenge in both the MI control paradigm and the related shared control paradigm. Additionally, we provide statistically significant evidence of improved navigational safety (i.e., fewer collisions), LOA switching efficiency, and reduced conflict for control.

We propose a hybrid combination of active inference and behavior trees (BTs) for reactive action planning and execution in dynamic environments, showing how robotic tasks can be formulated as a free-energy minimization problem. The proposed approach allows handling partially observable initial states and improves the robustness of classical BTs against unexpected contingencies while at the same time reducing the number of nodes in a tree. In this work, we specify the nominal behavior offline, through BTs. However, in contrast to previous approaches, we introduce a new type of leaf node to specify the desired state to be achieved rather than an action to execute. The decision of which action to execute to reach the desired state is performed online through active inference. This results in continual online planning and hierarchical deliberation. By doing so, an agent can follow a predefined offline plan while still keeping the ability to locally adapt and take autonomous decisions at runtime, respecting safety constraints. We provide proof of convergence and robustness analysis, and we validate our method in two different mobile manipulators performing similar tasks, both in a simulated and real retail environment. The results showed improved runtime adaptability with a fraction of the hand-coded nodes compared to classical BTs.
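The new leaf-node idea can be sketched as follows: the leaf stores a desired state (a predicate) rather than a fixed action, and at tick time an online decision step chooses an action whose simulated outcome satisfies the predicate. Here a one-step lookahead stands in for the active-inference step the paper uses for this online decision; class names, status strings, and the world representation are assumptions.

```python
class StateLeaf:
    """Leaf node specifying a desired state; the action to reach it is chosen
    online at tick time (a one-step lookahead stands in for active inference)."""
    def __init__(self, predicate, actions):
        self.predicate = predicate   # desired state: world -> bool
        self.actions = actions       # name -> function(world) -> world

    def tick(self, world):
        if self.predicate(world):
            return "SUCCESS", world
        # choose an action whose simulated outcome satisfies the desired state
        for name, act in self.actions.items():
            if self.predicate(act(dict(world))):
                world = act(world)
                return "RUNNING", world
        return "FAILURE", world

class Sequence:
    """Classical BT sequence: tick children in order, stop on non-SUCCESS."""
    def __init__(self, children):
        self.children = children

    def tick(self, world):
        for child in self.children:
            status, world = child.tick(world)
            if status != "SUCCESS":
                return status, world
        return "SUCCESS", world
```

Repeated ticks then drive the world toward each leaf's desired state in turn, without the tree hard-coding which action achieves it.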

This paper presents the development of a real-time simulator for validating the control of a large vehicle manipulator. The need for this development is justified by the lack of such a simulator: there are neither open-source projects nor commercial products suitable for testing cooperative control concepts. First, we present the nonlinear simulation model of the vehicle and the manipulator. MATLAB/Simulink is used for the modeling, which also enables code generation into standalone C++ ROS nodes (Robot Operating System nodes). The challenges that emerge during code generation are also discussed. The resulting standalone C++ ROS nodes are then integrated into the simulator framework, which includes a graphical user interface, a steering wheel, and a joystick. The simulator provides real-time calculation of the overall system's motion, enabling interaction between human and automation. Furthermore, a qualitative validation of the model is given. Finally, the functionality of the simulator is demonstrated in tests with human operators.
