
Many applications require robots to move through terrain with large obstacles, such as self-driving, search and rescue, and extraterrestrial exploration. Although robots are already excellent at avoiding sparse obstacles, they still struggle to traverse cluttered ones. Inspired by cockroaches, which use and respond to physical interaction with obstacles in various ways to traverse grass-like beams of different stiffness, we developed a physics model of a minimalistic robot with environmental force sensing that is propelled forward to traverse two beams, in order to simulate and understand the traversal of cluttered obstacles. Beam properties such as stiffness and deflection location could be estimated from the noisy beam contact forces measured, with a fidelity that increased with sensing time. Using these estimates, the model predicted the cost of traversal, defined via potential energy barriers, and used it to plan and control the robot to generate and track a trajectory that traverses the beams with minimal cost. When encountering stiff beams, the simulated robot transitioned from a more costly pitch mode to a less costly roll mode to traverse. When encountering flimsy beams, it chose to push across the beams, which cost less energy than avoiding them. Finally, we developed a physical robot and demonstrated the usefulness of the estimation method.
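
As a toy illustration of the estimation idea (the paper's actual estimator and beam model are not given here; the linear force-deflection model, noise level, and sample counts below are assumptions), beam stiffness can be recovered from noisy contact-force samples by least squares, with the uncertainty shrinking as sensing time grows:

```python
import numpy as np

def estimate_beam_stiffness(deflections, forces):
    """Least-squares estimate of beam stiffness k from noisy contact
    forces, assuming the linear contact model F = k * deflection.
    Returns the estimate and its standard error, which shrinks as
    more samples (i.e. longer sensing time) are accumulated."""
    x = np.asarray(deflections)
    f = np.asarray(forces)
    k_hat = (x @ f) / (x @ x)              # closed-form 1-D least squares
    residuals = f - k_hat * x
    se = np.sqrt(residuals.var(ddof=1) / (x @ x))
    return k_hat, se

# Synthetic example: true stiffness 2.0 N/rad, noisy force sensor.
rng = np.random.default_rng(0)
defl = np.linspace(0.01, 0.3, 200)                   # beam deflection angles
force = 2.0 * defl + rng.normal(0, 0.05, defl.size)  # noisy measurements
for n in (20, 200):                                  # fidelity grows with samples
    k, se = estimate_beam_stiffness(defl[:n], force[:n])
    print(f"n={n:3d}  k_hat={k:.3f} +/- {se:.3f}")
```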

Related content

The IFIP TC13 Conference on Human-Computer Interaction is an important platform for researchers and practitioners in the field of human-computer interaction to present their work. Over the years, these conferences have attracted researchers from several countries and cultures.
February 18, 2022

Physics-Informed Neural Networks (PINNs) have become a commonly used machine learning approach for solving partial differential equations (PDEs). However, on high-dimensional second-order PDE problems, PINNs suffer from severe scalability issues, since the loss includes second-order derivatives whose computational cost grows with the dimension during stacked back-propagation. In this paper, we develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks. In particular, we parameterize the PDE solution by a Gaussian-smoothed model and show that, via Stein's identity, the second-order derivatives can be efficiently calculated without back-propagation. We further discuss the model capacity and provide variance-reduction methods to address key limitations in the derivative estimation. Experimental results show that our proposed method achieves competitive error compared to standard PINN training while being two orders of magnitude faster.
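
A minimal sketch of the core trick, assuming a Gaussian-smoothed scalar function and an illustrative test problem (the smoothing scale, sample count, and the particular variance-reduction devices below are our choices, not necessarily the paper's): Stein's identity turns the Laplacian of the smoothed function into a weighted average of plain function evaluations, so no second-order back-propagation is needed.

```python
import numpy as np

def laplacian_stein(f, x, sigma=0.1, n_samples=4096, rng=None):
    """Estimate the Laplacian of the Gaussian-smoothed function
    f_sigma(x) = E[f(x + delta)], delta ~ N(0, sigma^2 I), via Stein's
    identity, using only zeroth-order evaluations of f:
        lap f_sigma(x) = E[f(x + delta) (||delta||^2 - d sigma^2)] / sigma^4.
    Antithetic pairs (+delta, -delta) and subtracting f(x) are simple
    variance-reduction devices; both leave the estimate unbiased, since
    E[||delta||^2 - d sigma^2] = 0."""
    rng = rng or np.random.default_rng(0)
    d = x.size
    delta = rng.normal(0.0, sigma, size=(n_samples, d))
    weights = (delta ** 2).sum(axis=1) - d * sigma ** 2
    values = np.array([0.5 * (f(x + dl) + f(x - dl)) for dl in delta]) - f(x)
    return (weights * values).mean() / sigma ** 4

# Sanity check on f(x) = ||x||^2, whose Laplacian is 2 * d everywhere.
f = lambda x: float((x ** 2).sum())
print(laplacian_stein(f, np.ones(8)))  # should print a value close to 16
```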

Cyber human interaction is a broad term encompassing the range of interactions that humans can have with technology. While human interaction with fixed and mobile computers is well understood, the world is on the cusp of ubiquitous and sustained interactions between humans and robots. While robotic systems are intertwined with computing and computing technologies, the word robot here describes technologies that can physically affect, and in turn be affected by, their environments, which include humans. This chapter delves into issues of cyber human interaction from the perspective of humans interacting with a subset of robots known as assistive robots. Assistive robots are robots designed to assist individuals with mobility or capacity limitations in completing everyday activities, commonly called instrumental activities of daily living. These range from household chores and eating or drinking to any activity for which a user may need the daily assistance of a caregiver. One common type of assistive robot is the wheelchair-mounted robotic arm, a device designed to attach to a user's wheelchair so that they can complete their activities independently. In short, these devices have sensors that allow them to sense and process their environment, with varying levels of autonomy, in order to perform actions that benefit and improve the well-being of people with capability limitations or disabilities. While human-robot interaction is a popular research topic, little of that research has been dedicated to individuals with limitations. In this chapter, we provide an overview of assistive robotic devices, discuss common methods of user interaction, and argue for an adaptive compensation framework to support potential users in regaining their functional capabilities.

In this paper, we consider the motion energy minimization problem for a robot that uses millimeter-wave (mm-wave) communications assisted by an intelligent reflective surface (IRS). The robot must perform tasks within given deadlines and is subject to uplink quality of service (QoS) constraints. This problem is crucial for fully automated factories governed by the combination of autonomous robots and new generations of mobile communications, i.e., 5G and 6G. In this new context, robot energy efficiency and communication reliability remain fundamental problems that are coupled through the robot trajectory and the communication QoS. More precisely, to account for the mutual dependency between robot position and communication QoS, the robot trajectory and the beamforming at the IRS and access point all need to be jointly optimized. We present a solution that decouples the two problems by exploiting mm-wave channel characteristics. A closed-form solution is then obtained for the beamforming optimization problem, whereas the trajectory is optimized by a novel successive-convex-optimization-based algorithm that can handle abrupt line-of-sight (LOS) to non-line-of-sight (NLOS) transitions. Specifically, the algorithm uses a radio map to avoid collisions with obstacles and poorly covered areas. We prove that the algorithm converges to a solution satisfying the Karush-Kuhn-Tucker conditions. The simulation results show a fast convergence rate of the algorithm and a dramatic reduction of the motion energy consumption with respect to methods that aim to find maximum-rate trajectories. Moreover, we show that passive IRSs are a powerful means of improving the radio coverage and motion energy efficiency of robots.
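
A minimal sketch of why the beamforming step admits a closed form under a pure LOS, rank-one cascaded channel (the unit channel gains, element count, and direct-path strength below are illustrative assumptions, not the paper's setup): each IRS element simply cancels the phase of its cascaded AP-IRS-robot path and rotates onto the direct path, so all reflections add coherently at the robot.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                            # number of IRS elements
h_ai = np.exp(1j * rng.uniform(0, 2*np.pi, N))    # AP -> IRS channel (unit gain)
h_ir = np.exp(1j * rng.uniform(0, 2*np.pi, N))    # IRS -> robot channel
h_d = 0.1 * np.exp(1j * rng.uniform(0, 2*np.pi))  # weak direct AP -> robot path

# Closed-form phases: each element cancels its cascaded channel phase and
# aligns with the direct path, so all N reflections add coherently.
theta = np.angle(h_d) - np.angle(h_ai * h_ir)
h_eff = h_d + (np.exp(1j * theta) * h_ai * h_ir).sum()

# Power gain vs. leaving all phase shifts at zero (incoherent reflections).
h_raw = h_d + (h_ai * h_ir).sum()
print(abs(h_eff) ** 2 / abs(h_raw) ** 2)
```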

Humans can robustly follow a visual trajectory defined by a sequence of images (i.e., a video) regardless of substantial changes in the environment or the presence of obstacles. We aim to endow mobile robots equipped solely with an RGB fisheye camera with similar visual navigation capabilities. We propose a novel probabilistic visual navigation system that learns to follow a sequence of images using bidirectional visual predictions conditioned on possible navigation velocities. By predicting bidirectionally (from start towards goal and vice versa), our method extends its predictive horizon, enabling the robot to go around unseen large obstacles that are not visible in the video trajectory. Learning how to react to obstacles and potential risks in the visual field is achieved by imitating human teleoperators. Since human teleoperation commands are diverse, we propose a probabilistic representation of trajectories that we can sample to find the safest path. Integrated into our navigation system, we present a novel localization approach that infers the current location of the robot based on the virtual predicted trajectories required to reach different images in the visual trajectory. We evaluate our navigation system quantitatively and qualitatively in multiple simulated and real environments and compare it to state-of-the-art baselines. Our approach outperforms the most recent visual navigation methods by a large margin with regard to goal arrival rate, subgoal coverage rate, and success weighted by path length (SPL). Our method also generalizes to new robot embodiments never used during training.
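
A rough sketch of the sampling step, with a Gaussian placeholder standing in for the learned trajectory distribution and a hand-written clearance function (both are assumptions, not the paper's components): sample command sequences, roll them out with a simple kinematic model, and keep the one with the largest minimum predicted clearance.

```python
import numpy as np

def safest_trajectory(mean_cmds, std_cmds, clearance_fn,
                      n_samples=32, dt=0.2, rng=None):
    """Sample velocity-command sequences from a (here Gaussian) trajectory
    distribution, roll each out with a unicycle model, and keep the one
    whose minimum predicted obstacle clearance is largest."""
    rng = rng or np.random.default_rng(0)
    best, best_score = None, -np.inf
    for _ in range(n_samples):
        cmds = rng.normal(mean_cmds, std_cmds)   # (T, 2): linear v, angular w
        x = y = yaw = 0.0
        poses = []
        for v, w in cmds:
            yaw += w * dt
            x += v * np.cos(yaw) * dt
            y += v * np.sin(yaw) * dt
            poses.append((x, y))
        score = min(clearance_fn(px, py) for px, py in poses)
        if score > best_score:
            best, best_score = cmds, score
    return best, best_score

# Toy usage: one circular obstacle at (1.5, 0.2) with radius 0.5.
clear = lambda x, y: np.hypot(x - 1.5, y - 0.2) - 0.5
mean = np.tile([0.5, 0.0], (10, 1))              # nominal straight-ahead commands
cmds, margin = safest_trajectory(mean, 0.2, clear)
print(f"best minimum clearance: {margin:.2f} m")
```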

Social robots are increasingly introduced into children's lives as educational and social companions, yet little is known about how these products might best be introduced to their environments. The emergence of the "unboxing" phenomenon in media suggests that introduction is key to technology adoption where initial impressions are made. To better understand this phenomenon toward designing a positive unboxing experience in the context of social robots for children, we conducted three field studies with families of children aged 8 to 13: (1) an exploratory free-play activity ($n=12$); (2) a co-design session ($n=11$) that informed the development of a prototype box and a curated unboxing experience; and (3) a user study ($n=9$) that evaluated children's experiences. Our findings suggest the unboxing experience of social robots can be improved through the design of a creative aesthetic experience that engages the child socially to guide initial interactions and foster a positive child-robot relationship.

Understanding the global dynamics of a robot controller, such as identifying attractors and their regions of attraction (RoA), is important for safe deployment and for synthesizing more effective hybrid controllers. This paper proposes a topological framework to analyze the global dynamics of robot controllers, even data-driven ones, in an effective and explainable way. It builds a combinatorial representation of the underlying system's state space and non-linear dynamics, summarized in a directed acyclic graph, the Morse graph. The approach only probes the dynamics locally by forward-propagating short trajectories over a state-space discretization; the dynamics need only be a Lipschitz-continuous function. The framework is evaluated on both numerical and data-driven controllers for classical robotic benchmarks, and compared against established analytical and recent machine learning alternatives for estimating the RoAs of such controllers. It is shown to outperform them in accuracy and efficiency. It also provides deeper insights, as it describes the global dynamics up to the discretization's resolution. This makes it possible to use the Morse graph to identify how to synthesize controllers into improved hybrid solutions, or to identify the physical limitations of a robotic system.
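
A small sketch of the combinatorial pipeline (the cell size, probe length, and toy 1-D system are illustrative; the paper's construction uses rigorous outer approximations rather than single cell-center probes): short forward trajectories induce a directed graph over cells, and the condensation of its strongly connected components is the Morse-graph-like DAG.

```python
import numpy as np
import networkx as nx

def morse_graph(step, cells, cell_of, n_short_steps=5):
    """Discretize the state space into cells, forward-propagate a short
    trajectory from each cell center, add a directed edge from the cell
    to the cell containing the endpoint, then condense strongly connected
    components. The condensation is a DAG; nodes with no outgoing edges
    approximate attractors, and their ancestors approximate the RoA."""
    g = nx.DiGraph()
    g.add_nodes_from(range(len(cells)))
    for i, x in enumerate(cells):
        for _ in range(n_short_steps):
            x = step(x)                  # local probe of the dynamics
        j = cell_of(x)
        if j is not None:
            g.add_edge(i, j)
    dag = nx.condensation(g)             # Morse-graph-like DAG of SCCs
    attractors = [n for n in dag if dag.out_degree(n) == 0]
    return dag, attractors

# Toy 1-D system x' = x - x^3, with stable equilibria near -1 and +1.
centers = np.linspace(-2, 2, 81)
step = lambda x: x + 0.05 * (x - x ** 3)
cell_of = lambda x: int(round((x + 2) / 0.05)) if -2 <= x <= 2 else None
dag, att = morse_graph(step, centers, cell_of)
print(len(att), "attracting components")
```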

Autonomous robots can benefit greatly from human-provided semantic characterizations of uncertain task environments and states. However, the development of integrated strategies that let robots model, communicate, and act on such soft data remains challenging. Here, a framework is presented for active semantic sensing and planning in human-robot teams which addresses these gaps by formally combining the benefits of online sampling-based POMDP policies, multimodal semantic interaction, and Bayesian data fusion. This approach lets humans opportunistically impose model structure and extend the range of semantic soft data in uncertain environments by sketching and labeling arbitrary landmarks across the environment. Dynamically updating the environment model while searching for a mobile target allows robotic agents to actively query humans for novel and relevant semantic data, thereby improving beliefs over unknown environments and target states for better online planning. Target-search simulations show significant improvements in the time and belief-state estimates required for interception versus conventional planning based solely on robotic sensing. Human subject studies demonstrate an average doubling of the dynamic target capture rate compared to the lone-robot case, across a range of user characteristics and interaction modalities. Video of the interaction can be found at //youtu.be/Eh-82ZJ1o4I.
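
As a stripped-down illustration of the Bayesian fusion step (the grid world, reliability value, and region semantics are assumptions): a human report tied to a sketched landmark acts as a likelihood over cells and reweights the robot's belief about the target.

```python
import numpy as np

def fuse_semantic_observation(belief, region_mask, p_true=0.9):
    """Bayes update of a grid belief over target location with a human
    semantic report such as "the target is inside the sketched region".
    region_mask marks the sketched landmark's cells; p_true is the
    assumed human reliability (an illustrative placeholder)."""
    likelihood = np.where(region_mask, p_true, 1.0 - p_true)
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Uniform prior over a 20x20 grid; the human sketches a 5x5 landmark
# region and reports that the target is inside it.
belief = np.full((20, 20), 1.0 / 400)
mask = np.zeros((20, 20), dtype=bool)
mask[8:13, 8:13] = True
belief = fuse_semantic_observation(belief, mask)
print(belief[mask].sum())   # most probability mass moves into the region
```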

Active inference is a unifying theory for perception and action resting upon the idea that the brain maintains an internal model of the world by minimizing free energy. From a behavioral perspective, active inference agents can be seen as self-evidencing beings that act to fulfill their optimistic predictions, namely preferred outcomes or goals. In contrast, reinforcement learning requires human-designed rewards to accomplish any desired outcome. Although active inference could provide a more natural self-supervised objective for control, its applicability has been limited because of the shortcomings in scaling the approach to complex environments. In this work, we propose a contrastive objective for active inference that strongly reduces the computational burden in learning the agent's generative model and planning future actions. Our method performs notably better than likelihood-based active inference in image-based tasks, while also being computationally cheaper and easier to train. We compare to reinforcement learning agents that have access to human-designed reward functions, showing that our approach closely matches their performance. Finally, we also show that contrastive methods perform significantly better in the case of distractors in the environment and that our method is able to generalize goals to variations in the background.
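
A minimal sketch of what a contrastive objective of this kind can look like (an InfoNCE-style loss on placeholder embeddings; the paper's exact bound and encoders may differ): matching state-observation pairs are scored against in-batch negatives, instead of evaluating an expensive pixel-space likelihood.

```python
import numpy as np

def info_nce(z_state, z_obs, temperature=0.1):
    """Contrastive (InfoNCE-style) objective: each latent state should
    score its own observation embedding higher than the other
    observations in the batch. The embeddings stand in for the outputs
    of the agent's learned encoders."""
    z_state = z_state / np.linalg.norm(z_state, axis=1, keepdims=True)
    z_obs = z_obs / np.linalg.norm(z_obs, axis=1, keepdims=True)
    logits = z_state @ z_obs.T / temperature         # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positives on the diagonal

rng = np.random.default_rng(0)
states = rng.normal(size=(16, 32))
obs = states + 0.1 * rng.normal(size=(16, 32))       # matched pairs
print(info_nce(states, obs))                         # low loss for aligned pairs
```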

One of the key steps in Neural Architecture Search (NAS) is to estimate the performance of candidate architectures. Existing methods either directly use the validation performance or learn a predictor to estimate it. However, these methods can be either computationally expensive or very inaccurate, which may severely affect search efficiency and performance. Moreover, as it is very difficult to annotate architectures with accurate performance on specific tasks, learning a promising performance predictor is often non-trivial due to the lack of labeled data. In this paper, we argue that it may not be necessary to estimate the absolute performance for NAS; instead, it may suffice to know whether an architecture is better than a baseline one. However, how to exploit this comparison information as the reward and how to make good use of the limited labeled data remain two great challenges. To this end, we propose a novel Contrastive Neural Architecture Search (CTNAS) method which performs architecture search by taking the comparison results between architectures as the reward. Specifically, we design and learn a Neural Architecture Comparator (NAC) to compute the probability of a candidate architecture being better than a baseline one. Moreover, we present a baseline updating scheme that improves the baseline iteratively in a curriculum learning manner. More critically, we theoretically show that learning NAC is equivalent to optimizing the ranking over architectures. Extensive experiments in three search spaces demonstrate the superiority of our CTNAS over existing methods.
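
A toy sketch of a pairwise comparator in the spirit of NAC (the paper encodes architecture graphs; the plain feature vectors, dimensions, and labels here are placeholders): it outputs the probability that a candidate beats the baseline, which can then serve as the reward for the search controller.

```python
import torch
import torch.nn as nn

class NeuralArchitectureComparator(nn.Module):
    """Given feature encodings of two architectures, output the
    probability that the candidate outperforms the baseline."""
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, candidate, baseline):
        return torch.sigmoid(self.net(torch.cat([candidate, baseline], dim=-1)))

# Training signal: binary labels saying which of a labeled pair is better.
nac = NeuralArchitectureComparator()
cand, base = torch.randn(8, 64), torch.randn(8, 64)
prob = nac(cand, base)                  # usable as the search reward
loss = nn.functional.binary_cross_entropy(prob, torch.ones(8, 1))
loss.backward()
```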

Tracking humans that are interacting with other subjects or with the environment remains unsolved in visual tracking, because the visibility of the humans of interest in videos is unknown and may vary over time. In particular, it is still difficult for state-of-the-art human trackers to recover complete human trajectories in crowded scenes with frequent human interactions. In this work, we consider the visibility status of a subject as a fluent variable, whose change is mostly attributed to the subject's interaction with the surroundings, e.g., crossing behind another object, entering a building, or getting into a vehicle. We introduce a Causal And-Or Graph (C-AOG) to represent the causal-effect relations between an object's visibility fluent and its activities, and develop a probabilistic graph model to jointly reason about visibility fluent changes (e.g., from visible to invisible) and track humans in videos. We formulate this joint task as an iterative search for a feasible causal graph structure, which enables fast search algorithms, e.g., dynamic programming. We apply the proposed method to challenging video sequences to evaluate its ability to estimate visibility fluent changes of subjects and to track subjects of interest over time. Results and comparisons demonstrate that our method outperforms alternative trackers and can recover complete trajectories of humans in complicated scenarios with frequent human interactions.
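
As a simplified stand-in for the joint reasoning (the paper searches over causal graph structures; the three-state fluent, scores, and transition probabilities below are illustrative), a Viterbi-style dynamic program recovers the most probable visibility-fluent sequence from per-frame evidence:

```python
import numpy as np

def visibility_viterbi(log_emission, log_transition):
    """Viterbi search for the most probable sequence of visibility
    fluents over T frames. States here are {0: visible, 1: occluded,
    2: contained (e.g. inside a vehicle)}.
    log_emission: (T, S) per-frame scores; log_transition: (S, S)."""
    T, S = log_emission.shape
    score = log_emission[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        trans = score[:, None] + log_transition   # (S, S): prev -> cur
        back[t] = trans.argmax(axis=0)
        score = trans.max(axis=0) + log_emission[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: detector evidence flips mid-sequence (likely occlusion).
emis = np.log(np.array([[.9, .05, .05]] * 3 +
                       [[.05, .9, .05]] * 2 +
                       [[.9, .05, .05]] * 3))
trans = np.log(np.array([[.8, .15, .05], [.3, .6, .1], [.2, .1, .7]]))
print(visibility_viterbi(emis, trans))  # -> [0, 0, 0, 1, 1, 0, 0, 0]
```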
