
Model-based control usually relies on an accurate model, often obtained from CAD and actuator models: the more accurate the model, the better the control performance. However, in bipedal robots performing highly agile actions such as running and hopping, the hardware suffers impacts with the environment and deforms at vulnerable parts, which invalidates the predefined model. It is therefore desirable to have an adaptable kinematic structure that takes deformation into account. To this end, we propose an approach that models all robotic joints as 6-DOF joints and develop an algorithm that identifies the kinematic structure from motion capture data. We evaluate the algorithm both in simulation, on a three-link pendulum, and on a real bipedal robot, ATRIAS. In simulation the algorithm's result deviates from the ground truth by 3.6%, and on the real robot the result confirms our prior assumption that the joint deforms in its out-of-plane degrees of freedom. In addition, our algorithm is able to predict torques and forces using the reconstructed joint model.
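As a minimal sketch of the kind of computation involved (not the paper's algorithm), the following recovers the full 6-DOF rigid transform between marker clouds on two adjacent links using the Kabsch method; applied per joint and per mocap frame, residual translations or out-of-plane rotations in the estimated transform would expose structural deformation. All names and values here are illustrative.

```python
import numpy as np

def relative_transform(p_parent, p_child):
    """Estimate the rigid transform (R, t) mapping marker points on the
    parent link onto the corresponding points on the child link (Kabsch
    algorithm). Treating every joint as a full 6-DOF transform lets
    residual translation or out-of-plane rotation reveal deformation."""
    cp, cc = p_parent.mean(axis=0), p_child.mean(axis=0)
    H = (p_parent - cp).T @ (p_child - cc)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ cp
    return R, t
```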

Related content

We present the design, implementation, and experimental evaluation of a 3-DOF robotic platform to treat balance disorders in patients with multiple sclerosis (MS). The platform is designed to allow angular motion of the ankle consistent with its anthropomorphic freedom in space: the robot forces patients to maintain balance by changing the angular position of the platform in three directions. The difficulty level of each task is determined from data gathered by the upper and lower platforms, which measure the patient's reaction to unexpected perturbations. The upper platform instantaneously provides the pressure distribution of each foot, while the lower platform simultaneously reports the patient's center of mass. In this study, the kinematic and dynamic analyses and the simulation of the 3-DOF parallel manipulator are successfully implemented. The proof-of-concept design is controlled by means of PID control, and the working principles of the upper and lower platforms are verified by a set of experiments.
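A minimal discrete PID loop, of the kind that might drive one angular axis of such a platform, can be sketched as follows; the gains and plant below are illustrative, not the paper's tuning.

```python
class PID:
    """Minimal discrete PID controller for one axis (illustrative gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        # No derivative kick on the first sample.
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```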

External contact force is among the most important pieces of information a robot needs in order to model, control, and safely interact with external objects. For continuum robots, the contact force can be estimated from measurements of the robot's configuration, which avoids the difficulty of mounting force-sensor feedback on a robot body with strict dimensional constraints. In this paper, we use local curvatures measured by fiber Bragg grating sensors (FBGS) to estimate the magnitude and location of single or multiple external contact forces. A simplified mechanics model derived from Cosserat rod theory computes the continuum robot's curvatures, and least-squares optimization estimates the forces by minimizing the error between computed and measured curvatures. The results show that the proposed method accurately estimates contact force magnitudes (error: 5.25%--12.87%) and locations (error: 1.02%--2.19%). The calculation speed of the proposed method, validated in MATLAB, is 29.0--101.6 times faster than conventional methods. These results indicate that the proposed method is accurate and efficient for contact force estimation.
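The fitting step can be sketched with a deliberately simplified model: below, an Euler-Bernoulli cantilever (not the paper's Cosserat rod) relates a single tip-side point force to the curvature at a set of sensing locations, and SciPy's least-squares solver recovers the force magnitude and location from "measured" curvatures. The stiffness, sensor positions, and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

EI = 0.05                        # illustrative bending stiffness [N*m^2]
S = np.linspace(0.0, 0.9, 10)    # curvature-sensing locations along the rod [m]

def model_curvature(force, loc, s=S):
    # Euler-Bernoulli cantilever: bending moment force*(loc - s) for s < loc,
    # zero beyond the contact point; curvature = moment / EI.
    return np.where(s < loc, force * (loc - s) / EI, 0.0)

def estimate_force(kappa_meas):
    """Recover (magnitude, location) of a single contact force by minimizing
    the residual between modeled and measured curvatures."""
    res = least_squares(
        lambda x: model_curvature(x[0], x[1]) - kappa_meas,
        x0=[0.5, 0.5], bounds=([0.0, 0.05], [10.0, 1.0]))
    return res.x
```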

This work addresses the inverse kinematics of serial robots using conformal geometric algebra. Classical approaches either use homogeneous matrices, which entail high computational cost and long execution times, or develop particular geometric strategies that cannot be generalized to arbitrary serial robots. In this work, we present a compact, elegant, and intuitive formulation of robot kinematics based on conformal geometric algebra that provides a suitable framework for the closed-form resolution of the inverse kinematic problem for manipulators with a spherical wrist. For serial robots of this kind, the inverse kinematics problem splits into two subproblems: position and orientation. The latter is solved by appropriately splitting the rotor that defines the target orientation into three simpler rotors, while the former is solved by developing a geometric strategy for each combination of prismatic and revolute joints that forms the position part of the robot. Finally, the inverse kinematics of 7-DoF redundant manipulators with a spherical wrist is solved by extending the geometric solutions obtained in the non-redundant case.
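The position/orientation split can be illustrated with plain rotation matrices rather than conformal rotors (a sketch of the decoupling idea, not the paper's formulation): the wrist center depends only on the first three joints, and the remaining orientation factors into three successive rotations, the matrix analogue of splitting the target rotor into three simpler ones.

```python
import numpy as np

def wrist_center(p, R, d6):
    """Spherical-wrist decoupling: subtract the last-link offset along the
    tool z-axis, leaving a point that only the first three joints place."""
    return p - d6 * R[:, 2]

def zyz_euler(R):
    """Factor an orientation into three successive rotations (ZYZ).
    Assumes the non-degenerate case sin(beta) != 0."""
    beta = np.arccos(np.clip(R[2, 2], -1.0, 1.0))
    alpha = np.arctan2(R[1, 2], R[0, 2])
    gamma = np.arctan2(R[2, 1], -R[2, 0])
    return alpha, beta, gamma
```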

Variable impedance control in operational space is a promising approach to learning contact-rich manipulation behaviors. One of the main challenges with this approach is producing a manipulation behavior that ensures the safety of both the arm and the environment. Such behavior is typically encouraged via a reward function that penalizes unsafe actions (e.g. obstacle collisions, exceeding joint limits), but this approach is not always effective and does not yield behaviors that can be reused in slightly different environments. We show how to combine Riemannian Motion Policies, a class of policies that dynamically generate motion in the presence of safety and collision constraints, with variable impedance operational-space control to learn safer contact-rich manipulation behaviors.
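The core of an operational-space impedance law is a spring-damper wrench; in the variable-impedance setting, the learning agent outputs the stiffness and damping matrices rather than raw torques, which bounds interaction forces by construction. A minimal sketch (illustrative gains, not the paper's controller):

```python
import numpy as np

def impedance_wrench(x, xd, v, vd, K, D):
    """Operational-space impedance law F = K (x_d - x) + D (v_d - v).
    K and D are the (possibly time-varying) stiffness and damping the
    policy chooses; larger K tracks tighter, smaller K is more compliant."""
    return K @ (xd - x) + D @ (vd - v)
```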

We present a robot kinematic calibration method that combines complementary calibration approaches: self-contact, planar constraints, and self-observation. We analyze the estimation of the end effector parameters, joint offsets of the manipulators, and calibration of the complete kinematic chain (DH parameters). The results are compared with ground truth measurements provided by a laser tracker. Our main findings are: (1) When applying the complementary calibration approaches in isolation, the self-contact approach yields the best and most stable results. (2) All combinations of more than one approach were always superior to using any single approach in terms of calibration errors and the observability of the estimated parameters. Combining more approaches delivers robot parameters that better generalize to the workspace parts not used for the calibration. (3) Sequential calibration, i.e. calibrating cameras first and then robot kinematics, is more effective than simultaneous calibration of all parameters. In real experiments, we employ two industrial manipulators mounted on a common base. The manipulators are equipped with force/torque sensors at their wrists, with two cameras attached to the robot base, and with special end effectors with fiducial markers. We collect a new comprehensive dataset for robot kinematic calibration and make it publicly available. The dataset and its analysis provide quantitative and qualitative insights that go beyond the specific manipulators used in this work and apply to self-contained robot kinematic calibration in general.
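At its core, kinematic calibration of this kind is a nonlinear least-squares fit of kinematic parameters to pose measurements. The toy below estimates only the joint offsets of a planar two-link arm from end-effector positions; the paper's full problem additionally includes DH parameters, camera parameters, and the self-contact and planar constraints. Link lengths and configurations are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 0.4, 0.3   # illustrative link lengths [m]

def fk(q, off):
    """Planar 2-link forward kinematics with unknown joint offsets."""
    a1 = q[0] + off[0]
    a2 = a1 + q[1] + off[1]
    return np.array([L1 * np.cos(a1) + L2 * np.cos(a2),
                     L1 * np.sin(a1) + L2 * np.sin(a2)])

def calibrate(Q, P_meas):
    """Estimate joint offsets by stacking position residuals over all
    measured configurations and solving in the least-squares sense."""
    res = least_squares(
        lambda off: np.concatenate([fk(q, off) - p for q, p in zip(Q, P_meas)]),
        x0=np.zeros(2))
    return res.x
```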

We present a passive stereo depth system that produces dense and accurate point clouds optimized for human environments, including dark, textureless, thin, reflective and specular surfaces and objects, at 2560x2048 resolution, with 384 disparities, in 30 ms. The system consists of an algorithm combining learned stereo matching with engineered filtering, a training and data-mixing methodology, and a sensor hardware design. Our architecture is 15x faster than approaches that perform similarly on the Middlebury and FlyingThings stereo benchmarks. To effectively supervise the training of this model, we combine real data labeled using off-the-shelf depth sensors with a number of rendered, simulated labeled datasets. We demonstrate the efficacy of our system by presenting a large number of qualitative results in the form of depth maps and point clouds, experiments validating the metric accuracy of our system, and comparisons to other sensors on challenging objects and scenes. We also show the competitiveness of our algorithm compared to state-of-the-art learned models on the Middlebury and FlyingThings datasets.
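The resolution and disparity figures above determine the sensor's working range through the standard rectified-stereo relation Z = f·b/d. A small helper makes the trade-off concrete (the focal length and baseline below are assumed values for illustration, not the system's specification):

```python
def disparity_to_depth(disparity, focal_px, baseline_m):
    """Rectified-stereo depth Z = f * b / d. The maximum disparity (here
    384) fixes the minimum measurable depth; larger d means closer range
    but finer depth quantization."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity
```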

Soft robots are made of compliant and deformable materials and can perform tasks that are challenging for conventional rigid robots. The inherent compliance of soft robots makes them more suitable and adaptable for interactions with humans and the environment. However, this advantage comes at a cost: their continuum nature makes it challenging to develop robust model-based control strategies. Specifically, an adaptive control approach addressing this challenge has not yet been applied to physical soft robotic arms. This work presents a reformulation of the dynamics of a soft continuum manipulator using the Euler-Lagrange method. The proposed model eliminates the simplifying assumptions made in previous works and provides a more accurate description of the robot's inertia. Based on our model, we introduce a task-space adaptive control scheme that is robust against model parameter uncertainties and unknown input disturbances. The controller is implemented on a physical soft continuum arm, and a series of experiments validates its effectiveness in task-space trajectory tracking under different payloads. The controller outperforms the state-of-the-art method in both accuracy and robustness. Moreover, the proposed model-based control design is flexible and can be generalized to any continuum robotic arm with an arbitrary number of continuum segments.
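The adaptive idea can be shown on a 1-DOF toy (a point mass of unknown inertia tracking a sinusoid) rather than the paper's full task-space continuum dynamics: a sliding-mode-style certainty-equivalence controller updates its inertia estimate online, so no accurate model is needed up front. All gains below are illustrative.

```python
import numpy as np

def simulate_adaptive(m_true=2.0, m_hat0=0.5, T=20.0, dt=1e-3,
                      lam=5.0, k=10.0, gamma=2.0):
    """Track x_d(t) = sin(t) on a point mass whose true inertia m_true is
    unknown to the controller; m_hat adapts online (Slotine-Li style)."""
    x, v, m_hat = 0.0, 0.0, m_hat0
    for i in range(int(T / dt)):
        t = i * dt
        xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)
        e, de = x - xd, v - vd
        s = de + lam * e                 # sliding variable
        a_r = ad - lam * de              # reference acceleration
        u = m_hat * a_r - k * s          # certainty-equivalence control
        m_hat += -gamma * a_r * s * dt   # adaptation law
        a = u / m_true                   # true (unknown) plant
        v += a * dt
        x += v * dt
    return x, np.sin(T), m_hat
```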

Most Deep Reinforcement Learning (Deep RL) algorithms require a prohibitively large number of training samples for learning complex tasks. Many recent works on speeding up Deep RL have focused on distributed training and simulation. While distributed training is often done on the GPU, simulation is not. In this work, we propose using GPU-accelerated RL simulations as an alternative to CPU ones. Using NVIDIA Flex, a GPU-based physics engine, we show promising speed-ups when learning various continuous-control locomotion tasks. With one GPU and one CPU core, we are able to train the Humanoid running task in less than 20 minutes, using 10-1000x fewer CPU cores than previous works. We also demonstrate the scalability of our simulator to multi-GPU settings when training more challenging locomotion tasks.

eCommerce transaction fraud patterns change rapidly, which is a major obstacle to eCommerce merchants maintaining a robust machine learning model for fraudulent transaction detection. The root cause of this problem is that rapidly changing fraud patterns alter the underlying data-generating process and degrade the performance of machine learning models, a phenomenon known in statistical modeling as "concept drift". To overcome this issue, we propose an approach that adds dynamic risk features as model inputs. Dynamic risk features are a set of features built on entity profiles with fraud feedback; they quantify the fluctuation, caused by concept drift, in the probability distribution of risk features from a given entity profile. In this paper, we also explain why this strategy successfully handles the effect of concept drift within a statistical learning framework. We validate our approach on multiple businesses in production and verify that the proposed dynamic model has a superior ROC curve to a static model built on the same data and training parameters.
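One plausible shape for such a feature (a hypothetical sketch, not the paper's definition) is an exponentially decayed, smoothed fraud rate per entity: because the statistic is continuously refreshed with fraud feedback, its distribution tracks the current fraud pattern instead of going stale under drift. The decay and prior values below are illustrative.

```python
from collections import defaultdict

class EntityRiskProfile:
    """Hypothetical dynamic risk feature: a decayed, Laplace-smoothed fraud
    rate per entity (e.g. a card BIN or device id). Recent fraud feedback
    dominates, so the feature follows the drifting fraud pattern."""
    def __init__(self, decay=0.99, prior_frauds=1.0, prior_total=20.0):
        self.decay = decay
        self.prior_frauds, self.prior_total = prior_frauds, prior_total
        self.frauds = defaultdict(float)
        self.total = defaultdict(float)

    def update(self, entity, is_fraud):
        # Exponential decay down-weights stale feedback before adding new.
        self.frauds[entity] = self.decay * self.frauds[entity] + float(is_fraud)
        self.total[entity] = self.decay * self.total[entity] + 1.0

    def risk(self, entity):
        # Laplace smoothing keeps unseen entities at the prior rate.
        return ((self.frauds[entity] + self.prior_frauds) /
                (self.total[entity] + self.prior_total))
```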

Collecting training data from the physical world is usually time-consuming and even dangerous for fragile robots, and thus recent advances in robot learning advocate the use of simulators as the training platform. Unfortunately, the reality gap between synthetic and real visual data prohibits direct migration of models trained in virtual worlds to the real world. This paper proposes a modular architecture for tackling the virtual-to-real problem. The proposed architecture separates the learning model into a perception module and a control policy module, and uses semantic image segmentation as the meta representation relating the two. The perception module translates the perceived RGB image into a semantic segmentation, and the control policy module, implemented as a deep reinforcement learning agent, acts on that segmentation. Our architecture is evaluated on an obstacle avoidance task and a target following task. Experimental results show that it significantly outperforms all baseline methods in both virtual and real environments, with a faster learning curve. We also present a detailed analysis of a variety of configurations and validate the transferability of our modular architecture.
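The key design point is the interface: the policy never sees RGB, only the label map, so swapping the perception module is enough to cross visual domains. A stand-in sketch (both modules here are trivial placeholders for the paper's trained networks):

```python
import numpy as np

FREE, OBSTACLE = 0, 1

def perceive(class_logits):
    """Stand-in perception module: per-pixel class logits -> label map.
    (In the paper this is a trained semantic segmentation network.)"""
    return np.argmax(class_logits, axis=-1)

def policy(seg):
    """Stand-in control policy: steer toward the image half with fewer
    obstacle pixels. It consumes only the segmentation, which is what
    makes the pipeline transferable from virtual to real input."""
    h, w = seg.shape
    left = np.sum(seg[:, : w // 2] == OBSTACLE)
    right = np.sum(seg[:, w // 2:] == OBSTACLE)
    return "left" if left < right else "right"
```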
