Autonomous robots can benefit greatly from human-provided semantic characterizations of uncertain task environments and states. However, the development of integrated strategies that let robots model, communicate, and act on such 'soft data' remains challenging. Here, the Human Assisted Robotic Planning and Sensing (HARPS) framework is presented for active semantic sensing and planning in human-robot teams; it addresses these gaps by formally combining the benefits of online sampling-based POMDP policies, multimodal semantic interaction, and Bayesian data fusion. This approach lets humans opportunistically impose model structure and extend the range of semantic soft data in uncertain environments by sketching and labeling arbitrary landmarks across the environment. Dynamically updating the environment model during search allows robotic agents to actively query humans for novel and relevant semantic data, thereby improving beliefs over unknown environments and states for better online planning. Simulations of a UAV-enabled target search application in a large-scale, partially structured environment show significant improvements in the time and belief-state estimates required for interception versus conventional planning based solely on robotic sensing. Human subject studies in the same environment (n = 36) demonstrate an average doubling of the dynamic target capture rate compared to the lone-robot case, and highlight the robustness of active probabilistic reasoning and semantic sensing over a range of user characteristics and interaction modalities.
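
To make the fusion step concrete, here is a minimal sketch of how a human's sketched-region report can be folded into a discretized search belief by a Bayesian update; the function name, the symmetric noise model, and the 0.9 reliability value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_soft_data(belief, in_region, observation, p_true=0.9):
    """Bayesian fusion of a human's semantic report into a grid belief.

    belief      : (H, W) array, prior P(target at cell), sums to 1
    in_region   : (H, W) bool mask of the human-sketched landmark
    observation : True  -> "target is inside the sketched region"
                  False -> "target is not inside the sketched region"
    p_true      : assumed probability that the human report is correct
    """
    # Likelihood of the report at each cell under a simple noise model.
    likelihood = np.where(in_region == observation, p_true, 1.0 - p_true)
    posterior = likelihood * belief
    return posterior / posterior.sum()

# Example: a 100x100 search grid with a uniform prior.
belief = np.full((100, 100), 1.0 / 10000)
region = np.zeros((100, 100), dtype=bool)
region[20:40, 50:80] = True  # human sketches a landmark ("the warehouse")

# Human reports: "the target is not at the warehouse".
belief = fuse_soft_data(belief, region, observation=False)
print(belief[region].sum())  # probability mass inside the region drops
```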

Related Content

A robot is any machine that simulates human behavior or thought, or that simulates other living creatures (such as robotic dogs and robotic cats). Under narrower definitions there are many competing taxonomies and ongoing debates, and some computer programs are even referred to as robots. In modern industry, a robot is an artificial machine that carries out tasks automatically, used to replace or assist human labor; it is typically an electromechanical device controlled by a computer program or electronic circuitry.

Computer graphics images (CGIs) are artificially generated by computer programs and are widely viewed in scenarios such as games and streaming media. In practice, the quality of CGIs often suffers from poor rendering during production, unavoidable compression artifacts during transmission in multimedia applications, and low aesthetic quality resulting from poor composition and design. However, little work has addressed the challenge of computer graphics image quality assessment (CGIQA). Most image quality assessment (IQA) metrics are developed for natural scene images (NSIs) and validated on databases of NSIs with synthetic distortions, which makes them unsuitable for in-the-wild CGIs. To bridge the gap between evaluating the quality of NSIs and CGIs, we construct a large-scale in-the-wild CGIQA database of 6,000 CGIs (CGIQA-6k) and carry out a subjective experiment in a well-controlled laboratory environment to obtain accurate perceptual ratings of the CGIs. We then propose an effective deep-learning-based no-reference (NR) IQA model that utilizes both distortion and aesthetic quality representations. Experimental results show that the proposed method outperforms all other state-of-the-art NR IQA methods on the constructed CGIQA-6k database and on other CGIQA-related databases. The database will be released to facilitate further research.
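
The two-branch design can be illustrated with a short sketch: one backbone extracts distortion features, another extracts aesthetic features, and the concatenated representation is regressed to a quality score. The ResNet-18 backbones, head sizes, and input shape below are placeholder assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwoBranchNRIQA(nn.Module):
    """Illustrative two-branch NR-IQA model: a distortion branch and an
    aesthetic branch, fused into a single predicted quality score."""

    def __init__(self):
        super().__init__()
        self.distortion = models.resnet18(weights=None)
        self.distortion.fc = nn.Identity()   # 512-d distortion features
        self.aesthetic = models.resnet18(weights=None)
        self.aesthetic.fc = nn.Identity()    # 512-d aesthetic features
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, x):
        f = torch.cat([self.distortion(x), self.aesthetic(x)], dim=1)
        return self.head(f).squeeze(-1)      # predicted quality rating

model = TwoBranchNRIQA()
scores = model(torch.randn(4, 3, 224, 224))  # one score per image
```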

Rehabilitation training for patients with motor disabilities usually requires specialized devices in rehabilitation centers. Home-based, multi-purpose training would significantly increase treatment accessibility and reduce medical costs. Since it is impractical to equip homes with a full set of rehabilitation robots, we investigate the feasibility of using a general-purpose collaborative robot for rehabilitation therapies. In this work, we developed a new system for multi-purpose upper-limb rehabilitation training using a generic robot arm with human motor feedback and preference. We integrated surface electromyography, force/torque sensors, RGB-D cameras, and robot controllers with the Robot Operating System to enable sensing, communication, and control of the system. Imitation learning methods were adopted to reproduce expert-provided training trajectories, which can be adapted to each subject's capabilities to facilitate in-home training. Our rehabilitation system is able to perform gross motor function and fine motor skill training with a gripper-based end-effector. We simulated system control in Gazebo and training effects (muscle activation levels) in OpenSim, and evaluated real-world performance with human subjects. For all enrolled subjects, our system achieved better training outcomes than specialist-assisted rehabilitation under the same conditions. Our work demonstrates the potential of collaborative robots for in-home motor rehabilitation training.
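
As a toy illustration of adapting an expert demonstration to a subject's capabilities, the sketch below retargets a demonstrated end-effector path onto a subject-specific start and goal; the real system learns from demonstrations with imitation-learning methods, so treat the function and its linear retargeting rule as assumptions for exposition only.

```python
import numpy as np

def adapt_trajectory(expert_traj, subject_start, subject_goal):
    """Retarget a demonstrated end-effector path to a subject's own
    start and goal while preserving the shape of the expert motion.
    A toy stand-in for the imitation-learning component, which in the
    real system learns from multiple expert demonstrations."""
    start, goal = expert_traj[0], expert_traj[-1]
    span = np.where(np.abs(goal - start) > 1e-9, goal - start, 1.0)
    alpha = (expert_traj - start) / span          # normalized progress
    return subject_start + alpha * (subject_goal - subject_start)

# Expert demonstration: a curved reach, T x 3 (x, y, z in metres).
t = np.linspace(0.0, 1.0, 100)[:, None]
expert = np.hstack([0.4 * t, 0.1 * t + 0.05 * np.sin(np.pi * t), 0.3 * t])

# Subject with a limited range of motion: shorter, lower reach.
adapted = adapt_trajectory(expert, np.zeros(3), np.array([0.25, 0.08, 0.15]))
```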

We introduce RAMP, an open-source robotics benchmark inspired by real-world industrial assembly tasks. RAMP consists of beams that a robot must assemble into specified goal configurations using pegs as fasteners. As such, it assesses planning and execution capabilities and poses challenges in perception, reasoning, manipulation, diagnostics, fault recovery, and goal parsing. RAMP has been designed to be accessible and extensible: parts are either 3D printed or constructed from readily obtainable materials, and the part designs and detailed instructions are publicly available. To broaden community engagement, RAMP incorporates fixtures such as AprilTags that let researchers focus on individual sub-tasks of the assembly challenge if desired. We provide a full digital twin as well as rudimentary baselines to enable rapid progress. Our vision is for RAMP to form the substrate for a community-driven endeavour that evolves as capability matures.
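
For a sense of how tag fixtures enable sub-task isolation, a hedged sketch of fixture localization follows; it assumes the pupil_apriltags package, a placeholder image file, and made-up camera intrinsics, none of which come from the benchmark itself.

```python
import cv2
from pupil_apriltags import Detector  # assumed tag-detection package

# Localize benchmark fixtures from a grayscale camera frame so that a
# single sub-task (e.g. peg insertion) can be studied in isolation.
detector = Detector(families="tag36h11")
gray = cv2.imread("workcell.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
for det in detector.detect(
    gray,
    estimate_tag_pose=True,
    camera_params=(600.0, 600.0, 320.0, 240.0),  # fx, fy, cx, cy (assumed)
    tag_size=0.05,                               # tag edge length in metres
):
    print(det.tag_id, det.pose_t.ravel())        # fixture ID and position
```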

Balance and gait disorders are the second leading cause of falls, which, along with the injuries they cause, are reported as major public health problems all over the world. For patients who do not require mechanical support, vibrotactile feedback interfaces have proven to be a successful approach to restoring balance. Most existing strategies assess trunk or head tilt and velocity, or plantar forces, and are limited to the analysis of stance. Central to balance control, however, is the need to keep the body's centre of pressure (CoP) within feasible limits of the support polygon (SP), as in standing, or on track to a new SP, as in walking. Hence, this paper proposes an exploratory study to investigate whether vibrotactile feedback can be employed to guide the human CoP during walking. The ErgoTac-Belt vibrotactile device is introduced to instruct users about the direction to take, along both the antero-posterior and medio-lateral axes. An anticipatory strategy is adopted to give users enough time to react to the stimuli. Experiments on ten healthy subjects demonstrated the promising capability of the proposed device to guide the users' CoP along a predefined reference path, with performance similar to that achieved with visual feedback. Future developments will investigate our strategy and device in guiding the CoP of elderly people or individuals with vestibular impairments, who may not be aware of, or able to identify, a safe and ergonomic CoP path.
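
A minimal sketch of the anticipatory cueing logic might look as follows: the CoP error to a look-ahead point on the reference path is mapped to belt motors. The look-ahead horizon, the 2 cm dead-band, and the motor names are illustrative assumptions, not the ErgoTac-Belt's actual parameters.

```python
import numpy as np

def cue_direction(cop, ref_path, t_idx, lookahead=25):
    """Pick the vibrotactile cue that steers the centre of pressure
    toward a point on the reference path `lookahead` samples ahead
    (the anticipatory strategy)."""
    target = ref_path[min(t_idx + lookahead, len(ref_path) - 1)]
    err = target - cop                 # [antero-posterior, medio-lateral]
    cues = []
    if abs(err[0]) > 0.02:             # 2 cm dead-band (assumed)
        cues.append("front" if err[0] > 0 else "back")
    if abs(err[1]) > 0.02:
        cues.append("right" if err[1] > 0 else "left")
    return cues                        # motors of the belt to drive

# Reference CoP path: 5 m walk with a gentle lateral sway.
path = np.stack([np.linspace(0, 5, 500),
                 0.1 * np.sin(np.linspace(0, 6, 500))], axis=1)
print(cue_direction(np.array([0.0, 0.15]), path, t_idx=0))  # ['front', 'left']
```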

Agile maneuvers are essential for robot-enabled complex tasks such as surgical procedures. Prior explorations of surgical autonomy have been limited to feasibility studies of completing a single task, without systematically addressing generic manipulation safety across different tasks. We present an integrated planning and control framework for 6-DoF robotic instruments for the pipeline automation of surgical tasks. We leverage the geometry of a robotic instrument and propose the nodal state space (NSS) to represent the robot state in SE(3). Each elementary robot motion is encoded by regulating the state parameters via a dynamical system. This theoretically ensures that every in-process trajectory is globally feasible and stably converges to an admissible target, and that the controller has a closed form requiring no 6-DoF inverse kinematics. Then, to plan the motion steps reliably, we propose an interactive (instant) goal state of the robot that transforms manipulation planning under desired path constraints into a goal-varying manipulation (GVM) problem. We detail how GVM can adaptively and smoothly plan the procedure (proceeding or rewinding the process as needed) based on on-the-fly situations in dynamic or disturbed environments. Finally, we extend the above policy to characterize complete pipelines of various surgical tasks. Simulations show that our framework can smoothly handle twisted maneuvers while avoiding collisions. Physical experiments using the da Vinci Research Kit (dVRK) validate the capability of automating individual tasks including tissue debridement, dissection, and wound suturing. The results confirm good task-level consistency and reliability compared to state-of-the-art automation algorithms.
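
The closed-form regulation idea can be sketched with a first-order dynamical system that tracks a moving ("instant") goal in the state-parameter space; this generic stand-in illustrates goal-varying tracking and rewinding, not the paper's actual NSS formulation.

```python
import numpy as np

def ds_step(x, goal, k=4.0, dt=0.01):
    """One step of a first-order dynamical system driving the state
    parameters toward the current (possibly moving) goal. A generic
    stand-in for the closed-form regulation described above."""
    return x + dt * (-k * (x - goal))

# State parameters of the instrument (e.g. insertion depth, two angles).
x = np.array([0.00, 0.0, 0.0])
path = [np.array([0.02, 0.1, 0.0]),   # desired waypoints ("instant goals")
        np.array([0.04, 0.2, 0.1]),
        np.array([0.02, 0.1, 0.0])]   # rewinding replays an earlier goal

for goal in path:                      # goal-varying manipulation: the goal
    for _ in range(200):               # moves, and the same stable DS tracks it
        x = ds_step(x, goal)
print(x)  # converged near the final goal
```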

Over the last decade, the use of autonomous drone systems for surveying, search and rescue, and last-mile delivery has grown exponentially. With the rise of these applications comes the need for highly robust, safety-critical algorithms that can operate drones in complex and uncertain environments. Additionally, flying fast lets drones cover more ground, which in turn increases productivity and further strengthens their use case. One proxy for developing algorithms for high-speed navigation is the task of autonomous drone racing, where researchers program drones to fly through a sequence of gates and avoid obstacles as quickly as possible using onboard sensors and limited computational power. Speeds exceed 80 km/h and accelerations exceed 4 g, raising significant challenges across perception, planning, control, and state estimation. To achieve maximum performance, systems require real-time algorithms that are robust to motion blur, high dynamic range, model uncertainties, aerodynamic disturbances, and often unpredictable opponents. This survey covers the progression of autonomous drone racing across model-based and learning-based approaches. We provide an overview of the field and its evolution over the years, and conclude with the biggest challenges and open questions to be faced in the future.

Estimating human pose and shape from monocular images is a long-standing problem in computer vision. Since the release of statistical body models, 3D human mesh recovery has been drawing broader attention. With the same goal of obtaining well-aligned and physically plausible mesh results, two paradigms have been developed to overcome challenges in the 2D-to-3D lifting process: i) an optimization-based paradigm, where different data terms and regularization terms are exploited as optimization objectives; and ii) a regression-based paradigm, where deep learning techniques are embraced to solve the problem in an end-to-end fashion. Meanwhile, continuous efforts are devoted to improving the quality of 3D mesh labels for a wide range of datasets. Though remarkable progress has been achieved in the past decade, the task is still challenging due to flexible body motions, diverse appearances, complex environments, and insufficient in-the-wild annotations. To the best of our knowledge, this is the first survey to focus on the task of monocular 3D human mesh recovery. We start with the introduction of body models and then elaborate on recovery frameworks and training objectives, providing in-depth analyses of their strengths and weaknesses. We also summarize datasets, evaluation metrics, and benchmark results. Open issues and future directions are discussed in the end, hoping to motivate researchers and facilitate their research in this area. A regularly updated project page can be found at //github.com/tinatiansjz/hmr-survey.
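
To ground paradigm (i), here is a minimal optimization-based fitting loop: a data term on 2D joint reprojection plus a quadratic pose prior, solved by gradient descent. The linear "body model", camera, and weights are toy assumptions standing in for real components such as SMPL.

```python
import torch

# Toy stand-in for a statistical body model: joints as a linear function
# of pose parameters (real models such as SMPL are nonlinear).
J, P = 24, 72
W = torch.randn(J * 3, P) * 0.1

def body_joints(pose):                      # pose -> (J, 3) joint positions
    return (W @ pose).view(J, 3)

def project(joints_3d, f=1000.0):           # assumed pinhole camera
    return f * joints_3d[:, :2] / (joints_3d[:, 2:3] + 5.0)

target_2d = project(body_joints(torch.randn(P) * 0.3))  # observed keypoints

# Paradigm (i): minimize a 2D reprojection data term plus a quadratic
# pose prior as the regularization term.
pose = torch.zeros(P, requires_grad=True)
opt = torch.optim.Adam([pose], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = ((project(body_joints(pose)) - target_2d) ** 2).mean() \
           + 1e-3 * (pose ** 2).sum()
    loss.backward()
    opt.step()
```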

Imitation learning aims to extract knowledge from human experts' demonstrations or artificially created agents in order to replicate their behaviors. Its success has been demonstrated in areas such as video games, autonomous driving, robotic simulations, and object manipulation. However, this replication process can be problematic: performance depends heavily on demonstration quality, and most trained agents perform well only in task-specific environments. In this survey, we provide a systematic review of imitation learning. We first introduce the background knowledge, from development history to preliminaries, followed by the different taxonomies within imitation learning and key milestones of the field. We then detail challenges in learning strategies and present research opportunities in learning policies from suboptimal demonstrations, voice instructions, and other associated optimization schemes.
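
The simplest instance of this replication process is behavioural cloning, sketched below as supervised regression from expert states to expert actions; the dimensions and synthetic data are placeholders, and the closing comment reflects exactly the demonstration-quality dependence raised above.

```python
import torch
import torch.nn as nn

# Minimal behavioural cloning: fit a policy to (state, action) pairs
# taken from demonstrations. Data here is synthetic for illustration.
states = torch.randn(1024, 8)                     # expert states
actions = torch.tanh(states @ torch.randn(8, 2))  # expert actions

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(policy(states), actions)
    loss.backward()
    opt.step()

# The cloned policy inherits the demonstrations' quality: if the expert
# data is suboptimal or off-distribution, so is the learned behaviour.
```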

Recently, deep multiagent reinforcement learning (MARL) has become a highly active research area, as many real-world problems can be inherently viewed as multiagent systems. A particularly interesting and widely applicable class of problems is the partially observable cooperative multiagent setting, in which a team of agents learns to coordinate their behaviors conditioned on their private observations and a commonly shared global reward signal. One natural solution is to resort to the centralized training and decentralized execution paradigm. During centralized training, one key challenge is multiagent credit assignment: how to allocate the global reward among individual agent policies for better coordination toward maximizing system-level benefits. In this paper, we propose a new method called Q-value Path Decomposition (QPD) to decompose the system's global Q-values into individual agents' Q-values. Unlike previous works, which restrict the representational relationship between the individual Q-values and the global one, we bring the integrated-gradients attribution technique into deep MARL to directly decompose global Q-values along trajectory paths and assign credits to agents. We evaluate QPD on the challenging StarCraft II micromanagement tasks and show that it achieves state-of-the-art performance in both homogeneous and heterogeneous multiagent scenarios compared with existing cooperative MARL algorithms.
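
Since QPD builds on integrated gradients, a compact version of that attribution technique is sketched below; this is the generic method with a straight-line path and a toy scalar function, not the paper's trajectory-path decomposition of a learned global Q-network.

```python
import torch

def integrated_gradients(f, x, baseline, steps=64):
    """Integrated-gradients attribution of scalar f(x) to its inputs:
    average the gradient along the straight-line path from baseline to
    x, then scale by the input displacement."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)     # straight-line path
    path.requires_grad_(True)
    f(path).sum().backward()
    avg_grad = path.grad.mean(dim=0)
    return (x - baseline) * avg_grad              # sums to ~ f(x) - f(baseline)

# Toy "global Q": a scalar function of a joint feature vector.
q = lambda z: (z ** 2).sum(dim=-1)
x, base = torch.tensor([1.0, 2.0, 3.0]), torch.zeros(3)
attr = integrated_gradients(q, x, base)
print(attr, attr.sum(), q(x) - q(base))           # completeness check
```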

We present a monocular Simultaneous Localization and Mapping (SLAM) system that uses high-level object and plane landmarks in addition to points. The resulting map is denser, more compact, and more meaningful than that of point-only SLAM. We first propose a high-order graphical model to jointly infer the 3D objects and layout planes from a single image, considering occlusions and semantic constraints. The extracted cuboid objects and layout planes are further optimized in a unified SLAM framework. Compared to points, objects and planes provide richer semantic constraints, such as Manhattan-world and object-support relationships. Experiments on various public and collected datasets, including ICL-NUIM and TUM mono, show that our algorithm improves camera localization accuracy compared to state-of-the-art SLAM and also generates dense maps in many structured environments.
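
A miniature example of how a plane landmark can regularize a point map: noisy points are refined under both a measurement term and a point-on-plane factor. The least-squares setup, the weights, and the fixed plane are illustrative assumptions, not the paper's full SLAM back-end.

```python
import numpy as np
from scipy.optimize import least_squares

# Points observed with noise, plus the semantic constraint that they lie
# on a layout plane n.x + d = 0 (here n = [0, 0, 1], d = 0).
rng = np.random.default_rng(0)
true_pts = np.c_[rng.uniform(-1, 1, (20, 2)), np.zeros(20)]  # on plane z=0
noisy = true_pts + 0.05 * rng.standard_normal(true_pts.shape)

def residuals(x):
    pts = x.reshape(-1, 3)
    meas = (pts - noisy).ravel()      # stay close to the observations
    on_plane = 5.0 * pts[:, 2]        # weighted point-on-plane factor
    return np.concatenate([meas, on_plane])

refined = least_squares(residuals, noisy.ravel()).x.reshape(-1, 3)
print(np.abs(noisy[:, 2]).mean(), np.abs(refined[:, 2]).mean())  # z shrinks
```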
