
This work addresses the coordinated motion control and obstacle-crossing problem for four-wheel-leg, independently motor-driven robotic systems via a model predictive control (MPC) approach based on an event-triggering mechanism. A wheel-leg robotic control system with a dynamic supporting polygon is modeled. The system dynamic model has 3 degrees of freedom (DOF), ignoring pitch, roll, and vertical motions. The single-wheel dynamics are analyzed considering the motor-drive characteristics and the Burckhardt nonlinear tire model. As a result, an over-actuated predictive model is proposed with the motor torques as inputs and the system states as outputs. As the supporting polygon is only adjusted under certain conditions, an event-based triggering mechanism is designed to save hardware resources and energy. The MPC controller is evaluated on a virtual prototype as well as a physical prototype. The simulation results guide the parameter tuning for the controller implementation on the physical prototype. The experimental results on these two prototypes verify the efficiency of the proposed approach.
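The event-based idea above — only recompute the supporting polygon when the state warrants it — can be sketched minimally as follows. The trigger state (center-of-mass deviation from the polygon center) and the threshold value are illustrative assumptions, not quantities taken from the paper:

```python
import numpy as np

def should_retrigger(com_xy, polygon_center_xy, threshold=0.05):
    """Event-based trigger sketch: request a supporting-polygon
    adjustment only when the center of mass drifts beyond a
    threshold from the polygon center. State and threshold are
    hypothetical, for illustration only."""
    deviation = np.linalg.norm(np.asarray(com_xy) - np.asarray(polygon_center_xy))
    return deviation > threshold
```

Between trigger events the controller keeps the current polygon, which is what saves the hardware resources and energy the abstract refers to.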

Related content

Limited process control can cause metallurgical defect formation and inhomogeneous relative density in laser powder bed fusion manufactured parts. In this study, cylindrical 15-5 PH stainless steel specimens are investigated by computed tomography, which shows an edge-enhanced relative density profile. Additionally, the on-axis monitoring signal, obtained by recording the thermal radiation of the melt pool, is considered. Analyzing data for the full duration of the building process reveals a statistically increased melt pool signature close to the edge, corresponding to the density profile. Edge-specific patterns in the on-axis signal are found by unsupervised time series clustering. The observations are interpreted using finite element method modeling: for exemplary points at the center and edge, it shows different local thermal histories attributed to the chosen laser scan pattern. The results motivate a route towards the future design of components with locally dependent material parameters.
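The unsupervised clustering step can be illustrated with a tiny k-means over a per-signal scalar feature (here, mean melt-pool intensity). This is a generic stand-in, not the study's actual feature set or algorithm:

```python
import numpy as np

def kmeans_1d(features, k=2, iters=20, seed=0):
    """Minimal k-means on scalar per-signal features, e.g. the mean
    melt-pool intensity of each monitoring trace. Illustrative
    stand-in for the paper's time series clustering."""
    rng = np.random.default_rng(seed)
    x = np.asarray(features, dtype=float)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        # assign each signal to its nearest cluster center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of its assigned signals
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers
```

Signals recorded near the specimen edge would then be expected to fall into the cluster with the higher mean intensity, matching the edge-enhanced density profile.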

This paper presents the design of an in-pipe climbing robot that uses a novel differential mechanism to traverse complex networks of pipes. Conventional wheeled/tracked in-pipe climbing robots are prone to slip and drag while negotiating pipe bends. The proposed mechanism eliminates slip and drag in the robot's tracks during motion. The differential realizes the functional abilities of the traditional two-output differential, achieved here for the first time for a differential with three outputs. The mechanism passively adjusts the track speeds of the robot according to the forces exerted on each track inside the pipe network, eliminating the need for any active control. Simulations of the robot traversing the pipe network in different orientations and through pipe bends without slip demonstrate the proposed design's effectiveness.

In this work, we introduce an adaptive control framework for human-robot collaborative transportation of objects with unknown deformation behaviour. The proposed framework takes as input the haptic information transmitted through the object and the kinematic information of the human body obtained from a motion capture system to create reactive whole-body motions on a mobile collaborative robot. Moreover, the designed framework delivers an intuitive way to rotate the object by processing the human torso and hand movements. To validate our framework experimentally, we compared its performance with an admittance controller during a co-transportation task involving a partially deformable object. We additionally demonstrate the potential of the framework while co-transporting rigid (aluminum rod) and deformable (rope) objects. A mobile manipulator consisting of an omnidirectional mobile base, a collaborative robotic arm, and a robotic hand is used as the robotic partner in the experiments. Quantitative and qualitative results of a 12-subject experiment show that the proposed framework can effectively deal with objects of unknown deformability and provides intuitive assistance to human partners.
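The admittance controller used as the comparison baseline follows the standard law M·v̇ + D·v = f_ext, which maps the human's applied force into a reference velocity. A minimal discretized sketch, with illustrative gains rather than the paper's values:

```python
def admittance_step(v, f_ext, dt=0.01, mass=10.0, damping=20.0):
    """One Euler step of the basic admittance law
        M * dv/dt + D * v = f_ext,
    turning a sensed interaction force into a commanded velocity.
    Mass/damping values are illustrative, not from the paper."""
    dv = (f_ext - damping * v) / mass
    return v + dt * dv
```

Under a constant force the velocity converges to f_ext / damping, so stiffer damping makes the robot feel "heavier" to the human partner — the behaviour the adaptive framework is compared against.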

Millimeter-wave self-backhauled small cells are a key component of next-generation wireless networks. Their dense deployment will increase data rates, reduce latency, and enable efficient data transport between the access and backhaul networks, providing flexibility not possible with optical fiber. Despite their high potential, operating dense self-backhauled networks optimally is an open challenge, particularly for radio resource management (RRM). This paper presents RadiOrchestra, a holistic RRM framework that models and optimizes beamforming, rate selection, user association, and admission control for self-backhauled networks. The framework is designed to account for practical challenges such as hardware limitations of base stations (e.g., computational capacity, discrete rates), the need for adaptability of backhaul links, and the presence of interference. The problem is formulated as a nonconvex mixed-integer nonlinear program, which is challenging to solve. To approach it, we propose three algorithms that trade off complexity against optimality. Furthermore, we derive upper and lower bounds to characterize the performance limits of the system. We evaluate the developed strategies in various scenarios, showing the feasibility of deploying practical self-backhauling in future networks.
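To give a flavour of the user association/admission subproblem, here is a toy greedy heuristic: each user joins the station offering its highest rate, subject to a per-station user capacity, and is dropped if no station has room. This is a didactic sketch in the spirit of a low-complexity baseline, not one of the paper's three algorithms:

```python
def greedy_association(rates, capacity):
    """Toy greedy user association with admission control.
    rates[u][b] = achievable rate of user u at base station b;
    capacity    = max users per station (hypothetical constraint).
    Returns per-user chosen station, or None if not admitted."""
    loads = [0] * len(rates[0])
    assignment = []
    for user_rates in rates:
        # try stations in order of decreasing rate for this user
        order = sorted(range(len(user_rates)), key=lambda b: -user_rates[b])
        chosen = next((b for b in order if loads[b] < capacity), None)
        if chosen is not None:
            loads[chosen] += 1
        assignment.append(chosen)
    return assignment
```

The actual framework jointly optimizes this together with beamforming and discrete rate selection, which is what makes the full problem a nonconvex mixed-integer nonlinear program.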

Humans communicate non-verbally by sharing physical rhythms, such as nodding and gestures, to engage each other. This sharing of physicality creates a sense of unity and makes humans feel involved with others. In this paper, we develop a new body motion generation system based on the free-energy principle (FEP), which not only responds passively but also prompts human actions. The proposed system consists of two modules: a sampling module and a motion selection module. We conducted a subjective experiment to evaluate the "feeling of interacting with the agent" elicited by the FEP-based behavior. The results suggest that FEP-based behaviors convey a stronger "feeling of interacting with the agent". Furthermore, we confirmed that the agent's gestures elicited subject gestures. This result not only reinforces the impression of interaction but could also enable agents that encourage people to change their behavior.

We present an approach to learning an object-centric forward model, and show that this allows us to plan sequences of actions to achieve distant desired goals. We propose to model a scene as a collection of objects, each with an explicit spatial location and an implicit visual feature, and to learn to model the effects of actions using random interaction data. Our model captures robot-object and object-object interactions, leading to more sample-efficient and accurate predictions. We show that this learned model can be leveraged to search for action sequences that lead to desired goal configurations, and that, in conjunction with a learned correction module, this allows for robust closed-loop execution. We present experiments both in simulation and in the real world, and show that our approach improves over alternative implicit or pixel-space forward models. Please see our project page (//judyye.github.io/ocmpc/) for result videos.
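The search over action sequences can be illustrated with a generic random-shooting planner: sample candidate sequences, roll each through the learned forward model, and keep the one whose predicted final state is closest to the goal. The uniform action sampling and goal cost are illustrative assumptions:

```python
import numpy as np

def plan_actions(state, goal, forward_model, horizon=5, n_samples=256, seed=0):
    """Random-shooting planning sketch over a learned forward model.
    forward_model(s, a) -> next state. Returns the sampled action
    sequence with the lowest final-state distance to the goal."""
    rng = np.random.default_rng(seed)
    best_seq, best_cost = None, np.inf
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, np.size(state)))
        s = np.asarray(state, dtype=float)
        for a in seq:                      # roll out through the model
            s = forward_model(s, a)
        cost = np.linalg.norm(s - goal)    # distance of predicted end state
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost
```

In a closed-loop setting only the first action of the best sequence is executed before replanning, which is where the learned correction module helps absorb prediction error.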

This paper presents an upgraded, real-world-application-oriented version of gym-gazebo, the Robot Operating System (ROS) and Gazebo based Reinforcement Learning (RL) toolkit that complies with OpenAI Gym. The paper discusses the new ROS 2 based software architecture and summarizes the results obtained using Proximal Policy Optimization (PPO). Ultimately, this work presents a benchmarking system for robotics that allows different techniques and algorithms to be compared under the same virtual conditions. We have evaluated environments with different levels of complexity for the Modular Articulated Robotic Arm (MARA), reaching accuracies at the millimeter scale. The converged results show the feasibility and usefulness of the gym-gazebo2 toolkit, and its potential and applicability in industrial use cases using modular robots.

Collecting training data from the physical world is usually time-consuming and even dangerous for fragile robots, and thus recent advances in robot learning advocate the use of simulators as the training platform. Unfortunately, the reality gap between synthetic and real visual data prohibits direct migration of models trained in virtual worlds to the real world. This paper proposes a modular architecture for tackling the virtual-to-real problem. The proposed architecture separates the learning model into a perception module and a control policy module, and uses semantic image segmentation as the meta representation relating the two. The perception module translates the perceived RGB image into a semantic segmentation. The control policy module is implemented as a deep reinforcement learning agent, which acts based on the translated segmentation. Our architecture is evaluated on an obstacle avoidance task and a target following task. Experimental results show that our architecture significantly outperforms all of the baseline methods in both virtual and real environments, and demonstrates a faster learning curve than the baselines. We also present a detailed analysis of a variety of variant configurations, and validate the transferability of our modular architecture.
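The decoupling described above can be expressed in a few lines: the policy never sees raw RGB, only the segmentation, so swapping the simulated perception module for a real-world one leaves the policy untouched. The function arguments here are hypothetical stand-ins for the two learned modules:

```python
def modular_policy(rgb_image, perceive, act):
    """Virtual-to-real modular architecture sketch.
    perceive: RGB image -> semantic segmentation (the shared
              meta representation between sim and real).
    act:      segmentation -> action (trained purely in simulation)."""
    segmentation = perceive(rgb_image)
    return act(segmentation)
```

Because `act` is conditioned only on the segmentation, closing the reality gap reduces to making `perceive` produce consistent segmentations in both domains.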

This paper introduces a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy for human-robot interaction without the use of external sensors or human supervision. The robot learns to focus its attention on groups of people from its own audio-visual experiences, independently of the number of people, their positions, and their physical appearances. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network using a simulated environment that mimics realistic scenarios involving speaking and silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust with respect to parameter estimation, i.e., the parameter values yielded by the method do not have a decisive impact on performance. The best results are obtained when audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step towards the autonomous learning of socially acceptable gaze behavior.
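The temporal-difference rule underlying the Q-learning component is standard; shown here in tabular form for clarity, whereas the paper approximates Q with a recurrent network over audio-visual observations:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Q-learning update on a tabular Q array:
        Q[s,a] += alpha * (r + gamma * max_a' Q[s',a'] - Q[s,a]).
    Tabular simplification of the paper's recurrent Q-network."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```

With a network, the same target drives a regression loss on the predicted Q-values instead of a direct table write.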

Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations can cause proficient but narrowly-learned policies to fail at test time. In this work, we propose to learn how to quickly and effectively adapt online to new situations as well as to perturbations. To enable sample-efficient meta-learning, we consider learning online adaptation in the context of model-based reinforcement learning. Our approach trains a global model such that, when combined with recent data, the model can be rapidly adapted to the local context. Our experiments demonstrate that our approach enables simulated agents to adapt their behavior online to novel terrains, to a crippled leg, and in highly dynamic environments.
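The "global model plus recent data" idea can be sketched with a linear dynamics model adapted by a few gradient steps on only the most recent transitions. The linear model, squared loss, and step counts are illustrative simplifications of the meta-learned adaptation:

```python
import numpy as np

def adapt_model(weights, recent_inputs, recent_targets, lr=0.1, steps=5):
    """Online adaptation sketch: starting from globally trained
    weights, take a few gradient steps on a buffer of recent
    transitions so the model fits the local context (e.g. new
    terrain). Linear model and MSE loss are illustrative."""
    w = np.asarray(weights, dtype=float).copy()
    X = np.asarray(recent_inputs, dtype=float)
    y = np.asarray(recent_targets, dtype=float)
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w
```

The meta-training objective in the paper chooses the global weights so that exactly this kind of few-step update yields a good local model.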
