
This work proposes a force control strategy with prescribed transient performance for the legs of a wheel-legged robotic system, enabling posture adjustment on uneven roads. A dynamic model of the robotic system is established with the body posture as input and the leg forces as outputs, so that the desired wheel-leg forces are computed from the posture reference and feedback. Based on the funnel control scheme, the legs achieve force tracking with prescribed transient performance. To improve the robustness of the force control system, an event-based mechanism is designed for segmenting the funnel function online. As a result, the force tracking error of each wheel-leg evolves inside the performance funnel with proven convergence, and the absence of Zeno behavior in the event-triggering condition is guaranteed. The proposed control scheme is applied to a physical wheel-legged prototype to evaluate force tracking and posture adjustment. Multiple comparative experimental results are presented to validate the stability and effectiveness of the proposed methodology.
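As a rough illustration of the funnel idea behind this scheme, the following minimal sketch tracks a scalar force reference with a funnel-shaped error bound and restarts the funnel segment when an assumed event condition fires; the plant model, funnel shape, gain law, and trigger threshold are placeholders rather than the paper's exact design.

```python
# Minimal funnel-control sketch for scalar force tracking (illustrative only).
# The plant, funnel shape, and event rule are assumptions, not the paper's design.
import numpy as np

dt, T = 1e-3, 3.0
tau = 0.05                       # assumed first-order actuator time constant
psi_inf, decay = 0.5, 3.0        # funnel floor and shrink rate (placeholders)

def funnel(t, t0, psi0):
    """Exponentially shrinking performance funnel, restarted at (t0, psi0)."""
    return (psi0 - psi_inf) * np.exp(-decay * (t - t0)) + psi_inf

F, t0, psi0 = 0.0, 0.0, 5.0      # force state and current funnel segment
log = []
for k in range(int(T / dt)):
    t = k * dt
    F_des = 20.0 + 5.0 * np.sin(2 * np.pi * t)   # desired leg force [N]
    e = F - F_des
    psi = funnel(t, t0, psi0)

    # Event rule (assumed): if the error gets too close to the funnel
    # boundary, start a new, wider funnel segment online.
    if abs(e) > 0.9 * psi:
        t0, psi0 = t, abs(e) / 0.8 + psi_inf
        psi = funnel(t, t0, psi0)

    gain = 1.0 / (psi - abs(e))                  # classic funnel gain law
    u = F_des - gain * e                         # force command
    F += dt * (u - F) / tau                      # first-order force dynamics
    log.append((t, e, psi))

print("final |error| = %.3f, funnel = %.3f" % (abs(log[-1][1]), log[-1][2]))
```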

Related Content

The tactile sensing capabilities of human hands are essential for performing daily activities. Simultaneously perceiving normal and shear forces through the mechanoreceptors embedded in the hands enables humans to accomplish everyday tasks such as grasping delicate objects. In this paper, we design and fabricate a novel biomimetic tactile sensor with skin-like heterogeneity that perceives normal and shear contact forces simultaneously. It mimics the multiple layers of mechanoreceptors by combining an extrinsic layer (piezoresistive sensors) and an intrinsic layer (a Hall sensor), so that it can estimate contact force directions, locations, and joint-level torque. By integrating our sensors, a robotic gripper obtains contact force feedback at its fingertips; accordingly, robots can perform challenging tasks such as using tweezers and grasping eggs. This sensor design can be customized and applied in different areas of robotics, providing heterogeneous force sensing and potentially supporting robots in acquiring skin-like tactile feedback.
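As a loose illustration of fusing the two sensing layers, the sketch below maps raw readings from an assumed piezoresistive array and a 3-axis Hall sensor to a 3-D contact force via linear least-squares calibration; channel counts and synthetic data are assumptions, not the sensor's actual specification.

```python
# Illustrative calibration sketch: map heterogeneous raw readings
# (piezoresistive array + 3-axis Hall sensor) to a 3-D contact force.
# Channel counts and synthetic data are assumptions, not the sensor's spec.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_piezo, n_hall = 500, 4, 3
X = rng.normal(size=(n_samples, n_piezo + n_hall))        # raw readings
W_true = rng.normal(size=(n_piezo + n_hall, 3))           # unknown sensor model
F = X @ W_true + 0.01 * rng.normal(size=(n_samples, 3))   # [Fx, Fy, Fz] labels

# Least-squares calibration against reference forces (e.g., from a F/T sensor).
W_hat, *_ = np.linalg.lstsq(X, F, rcond=None)

x_new = rng.normal(size=n_piezo + n_hall)
fx, fy, fz = x_new @ W_hat
normal = fz                                 # normal component
shear = np.hypot(fx, fy)                    # shear magnitude
direction = np.degrees(np.arctan2(fy, fx))  # shear direction in degrees
print(f"normal={normal:.2f} N, shear={shear:.2f} N, dir={direction:.1f} deg")
```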

Exoskeletons and orthoses are wearable mobile systems that provide mechanical benefits to their users. Despite significant improvements in recent decades, the technology is not yet mature enough to be adopted for strenuous and non-programmed tasks. To address this shortcoming, different aspects of the technology need to be analysed and improved. Numerous studies have addressed particular aspects of exoskeletons, e.g. mechanism design, intent prediction, and control schemes. However, most works have focused on a specific element of design or application without providing a comprehensive review framework. This study aims to analyse and survey the aspects that contribute to the improvement and broad adoption of this technology. To address this, after introducing assistive devices and exoskeletons, the main design criteria will be investigated from a physical Human-Robot Interface (HRI) perspective. The study will be further developed by outlining several examples of known assistive devices in different categories. In order to establish an intelligent HRI strategy and enable intuitive control for users, cognitive HRI will be investigated. Various approaches to this strategy will be reviewed, and a model for intent prediction will be proposed. This model is utilised to predict the gait phase from a single Electromyography (EMG) channel input. The modelling outcomes show the potential of single-channel input in low-power assistive devices. Furthermore, the proposed model can provide redundancy in devices with a complex control strategy.
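To make the single-channel idea concrete, here is a hedged sketch that extracts standard time-domain EMG features from one channel and classifies stance versus swing with logistic regression; the synthetic signal and the two-phase labeling are assumptions for illustration only.

```python
# Sketch: gait-phase prediction (stance vs. swing) from a single EMG channel
# using classic time-domain features and logistic regression.
# The synthetic EMG and the two-phase labeling are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

fs, win = 1000, 200          # 1 kHz sampling, 200-sample windows
rng = np.random.default_rng(1)

def features(w):
    rms = np.sqrt(np.mean(w ** 2))
    zc = np.sum(np.diff(np.sign(w)) != 0)          # zero crossings
    wl = np.sum(np.abs(np.diff(w)))                # waveform length
    return [rms, zc, wl]

X, y = [], []
for _ in range(400):
    phase = rng.integers(0, 2)                     # 0 = swing, 1 = stance
    amp = 0.2 if phase == 0 else 1.0               # stance has higher activation
    w = amp * rng.normal(size=win)
    X.append(features(w)); y.append(phase)

clf = LogisticRegression().fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```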

This paper studies capturability and push recovery for quadrupedal locomotion. Despite the rich literature on capturability analysis and push recovery control for legged robots, existing tools were developed mainly for bipeds or humanoids. Distinct quadrupedal features, such as point contacts and multiple swinging legs, prevent direct application of these methods. To address this gap, we propose a switched-systems model of quadruped dynamics and instantiate the abstract viability concept for quadrupedal locomotion with a time-based gait. Capturability is characterized through a novel specification of dynamically balanced states that addresses the time-varying nature of quadrupedal locomotion and balance. A linear inverted pendulum (LIP) model is adopted to demonstrate the theory and to show how the newly developed quadrupedal capturability can be used in motion planning for push recovery. We formulate and solve an explicit model predictive control (EMPC) problem whose optimal solution fully characterizes quadrupedal capturability with the LIP. Building on this analysis, an optimization-based planning scheme is devised for determining footsteps and center-of-mass references during push recovery. To validate the effectiveness of the overall framework, we conduct extensive simulation and hardware experiments. Simulation results illustrate the necessity of considering dynamic balance for quadrupedal capturability and the significant improvement in disturbance rejection with the proposed strategy. Experimental validation on a replica of the Mini Cheetah quadruped demonstrates up to a 100% improvement compared with the state of the art.
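For intuition about the LIP-based analysis, the following sketch computes the instantaneous capture point and tests whether it lies inside an assumed rectangular support region; it is not the paper's EMPC formulation, and all numbers are placeholders.

```python
# LIP capturability sketch: instantaneous capture point (ICP) and a simple
# support check. The rectangular support region and numbers are placeholders.
import numpy as np

g, z0 = 9.81, 0.28              # gravity, assumed CoM height for a small quadruped
omega = np.sqrt(g / z0)

def capture_point(com_xy, comvel_xy):
    """ICP xi = x + xdot / omega for the linear inverted pendulum."""
    return np.asarray(com_xy) + np.asarray(comvel_xy) / omega

def capturable(xi, support_min, support_max):
    """Crude check: is the ICP inside an axis-aligned support rectangle?"""
    return np.all(xi >= support_min) and np.all(xi <= support_max)

# Example: CoM centered, pushed sideways at 0.6 m/s.
xi = capture_point([0.0, 0.0], [0.0, 0.6])
print("ICP:", xi, "capturable:",
      capturable(xi, np.array([-0.15, -0.10]), np.array([0.15, 0.10])))
```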

Multi-access edge computing (MEC) is a key enabler for reducing the latency of vehicular networks. Due to vehicle mobility, requested services (e.g., infotainment services) must frequently be migrated across different MEC servers to guarantee their stringent quality-of-service requirements. In this paper, we study the problem of service migration in a MEC-enabled vehicular network with the objective of minimizing the total service latency and migration cost. The problem is formulated as a nonlinear integer program and linearized so that the optimal solution can be obtained with off-the-shelf solvers. Then, to obtain an efficient solution, it is modeled as a multi-agent Markov decision process and solved with a deep Q-learning (DQL) algorithm. The proposed DQL scheme performs proactive service migration while ensuring service continuity under high-mobility constraints. Finally, simulation results show that the proposed DQL scheme achieves close-to-optimal performance.
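A minimal sketch of the learning layer is given below: a small deep Q-network picks which MEC server should host a vehicle's service, with a toy reward combining latency and a fixed migration cost; the state encoding, reward shaping, and all constants are assumptions, not the paper's formulation.

```python
# DQN sketch for MEC service migration: the agent picks which MEC server
# should host a vehicle's service. State/reward shaping and all constants
# are illustrative assumptions, not the paper's formulation.
import random
import torch
import torch.nn as nn

N_SERVERS = 4
STATE_DIM = 1 + N_SERVERS        # vehicle position + per-server load (assumed)

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_SERVERS))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def step(state, action):
    """Toy environment: latency grows with distance to the chosen server,
    and migrating away from the current host adds a fixed cost."""
    pos, loads, current = state
    latency = abs(pos - action / (N_SERVERS - 1)) + 0.1 * loads[action]
    migration_cost = 0.2 if action != current else 0.0
    reward = -(latency + migration_cost)
    new_pos = min(1.0, pos + 0.05)                   # vehicle moves along the road
    return (new_pos, loads, action), reward

def encode(state):
    pos, loads, _ = state
    return torch.tensor([pos] + loads, dtype=torch.float32)

eps = 0.2
for episode in range(200):
    state = (0.0, [random.random() for _ in range(N_SERVERS)], 0)
    for t in range(20):
        s = encode(state)
        a = random.randrange(N_SERVERS) if random.random() < eps else int(q_net(s).argmax())
        next_state, r = step(state, a)
        # One-step TD target (no replay buffer, for brevity).
        with torch.no_grad():
            target = r + 0.9 * q_net(encode(next_state)).max()
        loss = (q_net(s)[a] - target) ** 2
        opt.zero_grad(); loss.backward(); opt.step()
        state = next_state
```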

We introduce a numerical technique for controlling the location and stability properties of Hopf bifurcations in dynamical systems. The algorithm consists of solving an optimization problem constrained by an extended system of nonlinear partial differential equations that characterizes Hopf bifurcation points. The flexibility and robustness of the method allow us to advance or delay a Hopf bifurcation to a target value of the bifurcation parameter, as well as to control the oscillation frequency with respect to a parameter of the system or the shape of the domain on which solutions are defined. Numerical applications are presented for systems arising from biology and fluid dynamics, such as the FitzHugh-Nagumo model, the Ginzburg-Landau equation, the Rayleigh-Bénard convection problem, and the Navier-Stokes equations, where controlling the location and oscillation frequency of periodic solutions is of high interest.
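As a toy version of locating a Hopf point, the sketch below finds the bifurcation of the FitzHugh-Nagumo ODE in the injected current by solving the equilibrium equations together with the zero-trace condition on the Jacobian; the parameter values are illustrative, and the paper's PDE-constrained optimization is far more general than this example.

```python
# Sketch: locate a Hopf bifurcation of the FitzHugh-Nagumo model in the
# injected current I by solving equilibrium + zero-trace conditions.
# Parameter values are illustrative; the paper's PDE-constrained control
# of Hopf points is far more general than this ODE example.
import numpy as np
from scipy.optimize import fsolve

a, b, eps = 0.7, 0.8, 0.08

def hopf_system(x):
    v, w, I = x
    eq1 = v - v**3 / 3.0 - w + I          # dv/dt = 0
    eq2 = eps * (v + a - b * w)           # dw/dt = 0
    eq3 = (1.0 - v**2) - eps * b          # trace of the Jacobian = 0
    return [eq1, eq2, eq3]

v, w, I = fsolve(hopf_system, x0=[-1.0, -0.5, 0.3])
det = -(1.0 - v**2) * eps * b + eps       # determinant at the equilibrium
freq = np.sqrt(det) / (2 * np.pi)         # purely imaginary eigenvalue pair
print(f"Hopf point: I = {I:.4f}, equilibrium v = {v:.4f}, "
      f"frequency ~ {freq:.4f} cycles per time unit")
```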

Efficient use of in-device storage and computation capabilities is key to supporting data-intensive applications such as immersive digital experiences. This paper proposes a location-dependent, multi-antenna, coded-caching-based content delivery scheme tailored specifically for wireless immersive viewing applications. First, a memory assignment phase is performed in which content relevant to the identified wireless bottleneck areas is incentivized. As a result, unequal fractions of location-dependent multimedia content are cached at each user. Then, a novel packet generation process is carried out given the asymmetric cache placement. During the subsequent delivery phase, the number of packets transmitted to each user is the same, while the packet sizes are proportional to the corresponding location-dependent cache ratios. Finally, each user is served with location-specific content using a joint multicast beamforming and multi-rate modulation scheme that simultaneously benefits from global caching and spatial multiplexing gains. Numerical experiments and mathematical analysis demonstrate significant performance gains compared to the state of the art.
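The sketch below illustrates only the asymmetric packetization step described above, under assumed per-user cache ratios, file size, and packet count.

```python
# Sketch of location-dependent placement and asymmetric packetization:
# every user gets the same number of delivery packets, with packet sizes
# proportional to that user's location-dependent cache ratio.
# The ratios, file size, and packet count below are illustrative assumptions.
import numpy as np

file_bits = 1_200_000
cache_ratio = np.array([0.50, 0.30, 0.20, 0.10])     # assumed per-user ratios
n_packets = 8                                        # same count for every user

cached_bits = (cache_ratio * file_bits).astype(int)              # unequal placement
packet_bits = (cache_ratio * file_bits / n_packets).astype(int)  # asymmetric sizes

for u, (c, p) in enumerate(zip(cached_bits, packet_bits)):
    print(f"user {u}: caches {c} bits, receives {n_packets} packets of {p} bits each")
```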

Non-verbal communication plays a particularly important role in a wide range of Human-Robot Interaction (HRI) scenarios. Accordingly, this work addresses the problem of human gesture recognition. In particular, we focus on head and eye gestures and adopt an egocentric (first-person) perspective using eyewear cameras. We argue that this egocentric view offers a number of conceptual and technical benefits over scene- or robot-centric perspectives. A motion-based recognition approach is proposed that operates at two temporal granularities. Locally, frame-to-frame homographies are estimated with a convolutional neural network (CNN). The output of this CNN is fed into a long short-term memory (LSTM) network to capture the longer-term temporal visual relationships that characterize gestures. Regarding the network architecture, one particularly interesting finding is that using the output of an internal layer of the homography CNN increases the recognition rate compared with using the homography matrix itself. While this work focuses on action recognition and no robot or user study has been conducted yet, the system has been designed to meet real-time constraints. The encouraging results suggest that the proposed egocentric perspective is viable, and this proof-of-concept work provides novel and useful contributions to the exciting area of HRI.
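A hedged sketch of the two-stage architecture is shown below: a small CNN stands in for the internal layer of the homography network, and an LSTM aggregates its per-frame-pair features over time; all layer sizes and the number of gesture classes are assumptions.

```python
# Sketch of the two-stage idea: a small CNN produces per-frame-pair features
# (in place of the homography CNN's internal layer), and an LSTM aggregates
# them over time to classify head/eye gestures. Sizes are assumptions.
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, n_gestures=7, feat_dim=128):
        super().__init__()
        # Input: a stacked pair of consecutive grayscale frames (2 channels).
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_gestures)

    def forward(self, clips):                 # clips: (B, T, 2, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])          # classify from the last time step

logits = GestureNet()(torch.randn(4, 10, 2, 64, 64))
print(logits.shape)                           # torch.Size([4, 7])
```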

Real-time semantic segmentation plays an important role in practical applications such as autonomous driving and robotics. Most research on semantic segmentation focuses on accuracy with little consideration for efficiency, and several existing studies that emphasize high-speed inference cannot produce high-accuracy segmentation results. In this paper, we propose a novel convolutional network named Efficient Dense modules with Asymmetric convolution (EDANet), which employs an asymmetric convolution structure and incorporates dilated convolution and dense connectivity to attain high efficiency with low computational cost, inference time, and model size. Compared to FCN, EDANet is 11 times faster and has 196 times fewer parameters, while achieving a higher mean intersection-over-union (mIoU) score without any additional decoder structure, context module, post-processing scheme, or pretrained model. We evaluate EDANet on the Cityscapes and CamVid datasets and compare it with other state-of-the-art systems. Our network runs on 512x1024 inputs at 108 and 81 frames per second on a single GTX 1080Ti and a Titan X, respectively.
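The following sketch shows one EDA-style block with asymmetric (3x1 and 1x3) dilated convolutions and a dense connection; the channel counts and dilation rates are illustrative and not EDANet's exact configuration.

```python
# Sketch of one EDA-style block: asymmetric (3x1 then 1x3) dilated convolutions
# with dense connectivity (input concatenated with the block output).
# Channel counts and dilation rates are illustrative, not EDANet's exact config.
import torch
import torch.nn as nn

class EDABlock(nn.Module):
    def __init__(self, in_ch, growth=40, dilation=2):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, growth, 1)
        self.conv3x1 = nn.Conv2d(growth, growth, (3, 1), padding=(dilation, 0),
                                 dilation=(dilation, 1))
        self.conv1x3 = nn.Conv2d(growth, growth, (1, 3), padding=(0, dilation),
                                 dilation=(1, dilation))
        self.bn = nn.BatchNorm2d(growth)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.act(self.reduce(x))
        y = self.act(self.conv3x1(y))
        y = self.bn(self.conv1x3(y))
        return torch.cat([x, self.act(y)], dim=1)   # dense connection

x = torch.randn(1, 64, 128, 256)
print(EDABlock(64)(x).shape)                        # torch.Size([1, 104, 128, 256])
```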

This paper introduces a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy for human-robot interaction without the use of external sensors or human supervision. The robot learns to focus its attention on groups of people from its own audio-visual experiences, independently of the number of people, their positions, and their physical appearances. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network in a simulated environment that mimics realistic scenarios involving speaking and silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust with respect to parameter estimation, i.e., the parameter values yielded by the method do not have a decisive impact on performance. The best results are obtained when audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step towards the autonomous learning of socially acceptable gaze behavior.
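As a rough sketch of the recurrent Q-network idea, the code below feeds concatenated audio-visual features through an LSTM and outputs Q-values over a small set of discrete gaze actions; the feature dimensions and the action set are assumptions.

```python
# Sketch of a recurrent Q-network for gaze control: audio-visual features per
# time step, an LSTM for temporal context, and Q-values over discrete gaze
# motions (e.g., pan/tilt steps). Sizes and actions are assumed for illustration.
import torch
import torch.nn as nn

N_ACTIONS = 5            # e.g., stay, pan left/right, tilt up/down (assumed)

class RecurrentQ(nn.Module):
    def __init__(self, audio_dim=20, visual_dim=40, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(audio_dim + visual_dim, hidden, batch_first=True)
        self.q = nn.Linear(hidden, N_ACTIONS)

    def forward(self, audio, visual, state=None):
        out, state = self.lstm(torch.cat([audio, visual], dim=-1), state)
        return self.q(out), state            # Q-values for every time step

q_net = RecurrentQ()
q_values, _ = q_net(torch.randn(1, 8, 20), torch.randn(1, 8, 40))
action = int(q_values[0, -1].argmax())       # greedy action at the latest step
print(q_values.shape, "chosen action:", action)
```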

This paper presents a safety-aware learning framework that employs an adaptive model learning method together with barrier certificates for systems with possibly nonstationary agent dynamics. To extract the dynamic structure of the model, we use a sparse optimization technique; the resulting model is then used in combination with control barrier certificates, which constrain feedback controllers only when safety is about to be violated. Under mild assumptions, solutions to the constrained feedback-controller optimization are guaranteed to be globally optimal, and the monotonic improvement of the feedback controller is thus ensured. In addition, we reformulate the (action-)value function approximation so that any kernel-based nonlinear function estimation method becomes applicable, and we employ a state-of-the-art kernel adaptive filtering technique for the (action-)value function approximation. The resulting framework is verified experimentally on a brushbot, whose dynamics are unknown and highly complex.
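To illustrate the "constrain only when safety is about to be violated" behavior of a barrier certificate, here is a minimal sketch for a single-integrator robot with a circular keep-out zone, where the one-constraint barrier quadratic program is solved in closed form; the dynamics, barrier function, and gains are illustrative assumptions, not the paper's setup.

```python
# Minimal control-barrier-function sketch: single-integrator robot, circular
# keep-out zone, nominal go-to-goal controller filtered by a one-constraint QP
# solved in closed form. Dynamics and parameters are illustrative assumptions.
import numpy as np

obstacle, radius, alpha = np.array([1.0, 0.0]), 0.4, 2.0
goal = np.array([2.0, 0.0])

def safe_input(x, u_nom):
    h = np.dot(x - obstacle, x - obstacle) - radius**2   # barrier function
    grad = 2.0 * (x - obstacle)
    # Constraint: grad . u + alpha * h >= 0 (single affine constraint).
    slack = grad @ u_nom + alpha * h
    if slack >= 0.0:
        return u_nom                                     # nominal input is safe
    return u_nom - slack / (grad @ grad) * grad          # minimal correction

x, dt = np.array([0.0, 0.05]), 0.02
for _ in range(300):
    u = safe_input(x, 1.0 * (goal - x))                  # P-controller to goal
    x = x + dt * u
    assert np.linalg.norm(x - obstacle) >= radius - 1e-6  # stays outside the zone
print("final position:", x)
```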
