
Legged robot locomotion is a challenging task due to a myriad of sub-problems, such as the hybrid dynamics of foot contact and the effects of the desired gait on the terrain. Accurate and efficient state estimation of the floating base and the leg joints can alleviate many of these issues by providing feedback information to robot controllers. Current state estimation methods rely heavily on a combination of visual and inertial measurements to provide real-time estimates, and are therefore handicapped in perceptually degraded environments. In this work, we show that by leveraging the kinematic chain model of the robot via a factor graph formulation, we can perform state estimation of the base and the leg joints using primarily proprioceptive inertial data. We combine preintegrated IMU measurements, forward kinematic computations, and contact detections in a factor-graph-based framework, allowing our state estimate to be constrained by the robot model. Experimental results in simulation and on hardware show that our approach outperforms current proprioceptive state estimation methods by 27% on average, while generalizing to a variety of legged robot platforms. We demonstrate our results both quantitatively and qualitatively on a wide variety of trajectories.
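A minimal sketch of how such a proprioceptive factor graph might be assembled with the GTSAM library (not the authors' implementation): one preintegrated-IMU factor between consecutive base states, with the forward-kinematics/contact constraint approximated by a generic relative-pose factor. All noise magnitudes and the synthetic stationary IMU window are illustrative placeholders.

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import B, V, X

dt = 0.005  # 200 Hz IMU (placeholder rate)
params = gtsam.PreintegrationParams.MakeSharedU(9.81)
params.setAccelerometerCovariance(np.eye(3) * 1e-3)
params.setGyroscopeCovariance(np.eye(3) * 1e-4)
params.setIntegrationCovariance(np.eye(3) * 1e-8)
bias = gtsam.imuBias.ConstantBias()

# Preintegrate a window of (synthetic, stationary) IMU measurements.
pim = gtsam.PreintegratedImuMeasurements(params, bias)
for _ in range(20):
    pim.integrateMeasurement(np.array([0.0, 0.0, 9.81]), np.zeros(3), dt)

graph = gtsam.NonlinearFactorGraph()
graph.add(gtsam.ImuFactor(X(0), V(0), X(1), V(1), B(0), pim))

# Stand-in for the forward-kinematics/contact factor: the kinematic chain of
# a stance leg predicts the relative base motion between the two states.
fk_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.01))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), gtsam.Pose3(), fk_noise))

# Priors anchor the first state so the problem is well posed.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(),
                                 gtsam.noiseModel.Isotropic.Sigma(6, 1e-3)))
graph.add(gtsam.PriorFactorVector(V(0), np.zeros(3),
                                  gtsam.noiseModel.Isotropic.Sigma(3, 1e-3)))
graph.add(gtsam.PriorFactorConstantBias(B(0), bias,
                                        gtsam.noiseModel.Isotropic.Sigma(6, 1e-3)))

initial = gtsam.Values()
initial.insert(X(0), gtsam.Pose3()); initial.insert(X(1), gtsam.Pose3())
initial.insert(V(0), np.zeros(3));   initial.insert(V(1), np.zeros(3))
initial.insert(B(0), bias)
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(1)))  # base pose estimate constrained by both factors
```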

Related content

State estimation refers to methods for estimating the internal state of a dynamic system from the available measurement data. Measurements of a system's inputs and outputs only reflect its external behavior, while the system's dynamics are described by internal state variables that usually cannot be measured directly. State estimation is therefore essential for understanding and controlling a system.
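As a worked example of this definition, a linear Kalman filter recovers an internal state (position and velocity) from noisy external measurements of position alone; the toy constant-velocity model and noise levels below are illustrative choices.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (constant velocity)
H = np.array([[1.0, 0.0]])              # we only measure position
Q = np.eye(2) * 1e-4                    # process noise covariance
R = np.array([[0.05]])                  # measurement noise covariance

x = np.zeros(2)   # state estimate [position, velocity]
P = np.eye(2)     # estimate covariance

rng = np.random.default_rng(0)
for k in range(100):
    true_pos = 0.5 * k * dt            # ground truth: constant velocity 0.5
    z = true_pos + rng.normal(0.0, R[0, 0] ** 0.5)
    # Predict: propagate the internal state through the dynamics model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the external measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated position {x[0]:.3f}, velocity {x[1]:.3f}")
```

Note that the velocity estimate converges even though velocity is never measured directly: it is exactly the "internal state variable" the definition refers to.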

Non-mydriatic retinal color fundus photography (CFP) is widely available because it does not require pupillary dilation; however, it is prone to poor quality due to operator error, systemic imperfections, or patient-related causes. Optimal retinal image quality is required for accurate medical diagnoses and automated analyses. Herein, we leverage Optimal Transport (OT) theory to propose an unpaired image-to-image translation scheme for mapping low-quality retinal CFPs to high-quality counterparts. Furthermore, to improve the flexibility, robustness, and applicability of our image enhancement pipeline in clinical practice, we generalize a state-of-the-art model-based image reconstruction method, regularization by denoising, by plugging in priors learned by our OT-guided image-to-image translation network; we name this method regularization by enhancing (RE). We validate the integrated framework, OTRE, on three publicly available retinal image datasets by assessing both the quality after enhancement and the performance on various downstream tasks, including diabetic retinopathy grading, vessel segmentation, and diabetic lesion segmentation. The experimental results demonstrate the superiority of our proposed framework over several state-of-the-art unsupervised competitors and a state-of-the-art supervised method.
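A hedged sketch of the "regularization by enhancing" idea: a regularization-by-denoising-style fixed-point iteration in which the denoiser is replaced by a learned enhancer. Here the enhancer `E` is a stand-in (a simple local-mean smoother), not the paper's OT-guided translation network, and the identity forward model is a simplifying assumption.

```python
import numpy as np

def enhancer(x):
    """Placeholder for the learned OT-guided enhancement network E(x)."""
    pad = np.pad(x, 1, mode="edge")
    return (pad[:-2, :-2] + pad[:-2, 1:-1] + pad[:-2, 2:] +
            pad[1:-1, :-2] + pad[1:-1, 1:-1] + pad[1:-1, 2:] +
            pad[2:, :-2] + pad[2:, 1:-1] + pad[2:, 2:]) / 9.0

rng = np.random.default_rng(0)
clean = np.clip(rng.normal(0.5, 0.1, (64, 64)), 0, 1)
y = clean + rng.normal(0.0, 0.1, clean.shape)   # degraded observation

x, lam, eta = y.copy(), 0.5, 0.2
for _ in range(50):
    data_grad = x - y               # gradient of 0.5*||x - y||^2 (identity forward model)
    prior_grad = x - enhancer(x)    # RED-style residual against the enhancer
    x = x - eta * (data_grad + lam * prior_grad)

print("mean squared residual to clean image:", np.mean((x - clean) ** 2))
```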

Software is a great enabler for a number of projects that would otherwise be impossible to perform, including space exploration, weather modeling, and genome projects. It is critical that software aiding these projects does what it is expected to do. In the terminology of software engineering, software that conforms to its requirements, that is, does what it is expected to do, is called correct. Checking the correctness of software has been the focus of a great deal of research in software engineering, yet practitioners in the fields where software is applied often do not assign much value to such checking. As software systems grow larger and are combined with distributed subsystems written by different authors, verification becomes even more important. Concurrent, distributed systems are prone to dangerous errors arising from the differing execution speeds of their components, such as deadlocks, race conditions, or violations of project-specific properties. This project describes an application of a static analysis method called model checking to the verification of a distributed system for a bioinformatics process. We evaluate the efficiency of the model checking approach on combined processes with an increasing number of concurrently executed steps, show that our experimental results correspond to analytically derived expectations, and highlight the importance of static analysis to combined processes in the bioinformatics field.
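To make the model checking idea concrete, here is a toy explicit-state checker (not the project's tooling, which would typically use a dedicated model checker): it exhaustively explores all interleavings of two processes that acquire two locks in opposite order and reports states with no outgoing transitions, i.e. deadlocks.

```python
from collections import deque

PROGS = {  # each process: a list of (action, lock) steps
    "P1": [("acq", "a"), ("acq", "b"), ("rel", "b"), ("rel", "a")],
    "P2": [("acq", "b"), ("acq", "a"), ("rel", "a"), ("rel", "b")],
}
NAMES = list(PROGS)

def successors(state):
    pcs, locks_fs = state
    locks = dict(locks_fs)
    for i, name in enumerate(NAMES):
        pc = pcs[i]
        if pc >= len(PROGS[name]):
            continue
        action, lock = PROGS[name][pc]
        if action == "acq" and locks[lock] is None:
            new_locks = {**locks, lock: name}
        elif action == "rel" and locks[lock] == name:
            new_locks = {**locks, lock: None}
        else:
            continue  # this step is blocked in the current state
        new_pcs = tuple(pc + 1 if j == i else p for j, p in enumerate(pcs))
        yield (new_pcs, frozenset(new_locks.items()))

init = ((0, 0), frozenset({("a", None), ("b", None)}))
seen, queue, deadlocks = {init}, deque([init]), []
while queue:  # breadth-first exploration of the full interleaving state space
    state = queue.popleft()
    succ = list(successors(state))
    finished = all(state[0][i] >= len(PROGS[n]) for i, n in enumerate(NAMES))
    if not succ and not finished:
        deadlocks.append(state)  # no process can move, yet not all are done
    for s in succ:
        if s not in seen:
            seen.add(s)
            queue.append(s)

print(f"explored {len(seen)} states; deadlocks found: {len(deadlocks)}")
```

The state space here is tiny; the project's observation about efficiency concerns exactly how this space grows as the number of concurrent steps increases.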

Most legged robots are built with leg structures of serially mounted links and actuators and are controlled through complex controllers and sensor feedback. Animals, in comparison, have evolved multi-segment legs, mechanical coupling between joints, and multi-segmented feet. They run agilely over all terrains, arguably with simpler locomotion control. Here we focus on developing foot mechanisms that resist slipping and sinking, including on natural terrain. We present first results for multi-segment feet mounted on a bird-inspired robot leg with multi-joint mechanical tendon coupling. Our one- and two-segment, mechanically adaptive feet sustain increased horizontal forces on multiple soft and hard substrates before starting to slip. We also observe that segmented feet reduce sinking on soft substrates compared to ball feet and cylinder feet. We report how multi-segmented feet provide a large range of viable centre-of-pressure points, well suited for bipedal robots but also for quadruped robots on slopes and natural terrain. Our results also offer a functional understanding of segmented feet in animals such as ratite birds.

State-of-the-art reinforcement learning has enabled training agents on tasks of ever-increasing complexity. However, the current paradigm tends to favor training agents from scratch on every new task, or on collections of tasks with a view towards generalizing to novel task configurations. The former suffers from poor data efficiency, while the latter is difficult when test tasks are out-of-distribution. Agents that can effectively transfer their knowledge about the world offer a potential solution to these issues. In this paper, we investigate transfer learning in the context of model-based agents. Specifically, we aim to understand when exactly environment models have an advantage, and why. We find that a model-based approach outperforms controlled model-free baselines for transfer learning. Through ablations, we show that both the policy and the dynamics model learnt through exploration matter for successful transfer. We demonstrate our results across three domains that vary in their requirements for transfer: in-distribution procedural (Crafter), in-distribution identical (RoboDesk), and out-of-distribution (Meta-World). Our results show that intrinsic exploration combined with environment models presents a viable direction towards agents that are self-supervised and able to generalize to novel reward functions.
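A minimal sketch of the transfer setting studied here: a dynamics model learnt on a source task is reused, frozen, to act on a new reward function via random-shooting planning. The scalar environment, linear model, and rewards are toy stand-ins, not the paper's Crafter/RoboDesk/Meta-World setups or its agent architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(s, a):        # toy environment: scalar state, scalar action
    return 0.9 * s + 0.1 * a

# "Exploration" on the source task: collect transitions, fit a dynamics model.
S = rng.normal(size=(500, 1))
A = rng.uniform(-1, 1, size=(500, 1))
theta, *_ = np.linalg.lstsq(np.hstack([S, A]), true_dynamics(S, A), rcond=None)
model = lambda s, a: np.hstack([s, a]) @ theta   # learnt model, frozen below

def new_reward(s):              # novel reward on the target task: reach s = 0.8
    return -abs(s - 0.8)

def plan(s0, horizon=10, n_seqs=256):
    """Random-shooting MPC against the frozen model and the *new* reward."""
    seqs = rng.uniform(-1, 1, size=(n_seqs, horizon, 1))
    returns = np.zeros(n_seqs)
    for i in range(n_seqs):
        s = np.array([[s0]])
        for t in range(horizon):
            s = model(s, seqs[i, t:t + 1])
            returns[i] += new_reward(s[0, 0])
    return seqs[np.argmax(returns), 0, 0]   # first action of the best sequence

s = 0.0
for _ in range(30):
    s = true_dynamics(s, plan(s))
print(f"final state {s:.2f} (target 0.8), reached with the transferred model")
```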

Novel photo-realistic texture synthesis is an important task for generating novel scenes, including asset generation for 3D simulations. To date, however, these methods predominantly generate textured objects in 2D space. Relying on 2D object generation requires a computationally expensive forward pass each time the camera viewpoint or lighting changes. Recent work that can generate textures in 3D requires 3D component segmentation, which is expensive to acquire. In this work, we present a novel conditional generative architecture, which we call a graph generative adversarial network (GGAN), that can generate textures in 3D by learning object component information in an unsupervised way. In this framework, we need neither an expensive forward pass whenever the camera viewpoint or lighting changes nor expensive 3D part information for training, yet the model generalizes to unseen 3D meshes and generates appropriate novel 3D textures. We compare this approach against state-of-the-art texture generation methods and demonstrate that the GGAN obtains significantly better texture generation quality (according to Fréchet inception distance). We release our model source code as open source.
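A hedged sketch of the core idea: a generator that colours mesh vertices by passing messages over the mesh graph, so component structure can be exploited without 3D part labels. The layer sizes, aggregation rule, and toy tetrahedron "mesh" are illustrative choices, not the paper's GGAN architecture.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Mean-aggregation graph convolution: h' = relu(W_self h + W_nbr mean(h_N))."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_self = nn.Linear(d_in, d_out)
        self.w_nbr = nn.Linear(d_in, d_out)

    def forward(self, h, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        nbr_mean = adj @ h / deg          # average over graph neighbours
        return torch.relu(self.w_self(h) + self.w_nbr(nbr_mean))

class TextureGenerator(nn.Module):
    def __init__(self, z_dim=16, d=32):
        super().__init__()
        self.gc1 = GraphConv(3 + z_dim, d)   # input: vertex xyz + latent code
        self.gc2 = GraphConv(d, d)
        self.head = nn.Linear(d, 3)          # per-vertex RGB

    def forward(self, verts, adj, z):
        h = torch.cat([verts, z.expand(verts.shape[0], -1)], dim=1)
        h = self.gc2(self.gc1(h, adj), adj)
        return torch.sigmoid(self.head(h))

# Toy mesh: the 4 vertices of a tetrahedron, fully connected.
verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
adj = torch.ones(4, 4) - torch.eye(4)
gen = TextureGenerator()
colors = gen(verts, adj, torch.randn(1, 16))
print(colors.shape)  # torch.Size([4, 3]): one RGB colour per vertex
```

Because the texture lives on the mesh itself, re-rendering under a new viewpoint or lighting needs no additional generator pass, which is the efficiency argument made above.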

Generative adversarial networks (GANs) have recently been applied to estimating the distribution of independent and identically distributed data and have attracted considerable research attention. In this paper, we use the blocking technique to demonstrate the effectiveness of GANs for estimating the distribution of stationary time series. Theoretically, we derive a non-asymptotic error bound for the deep neural network (DNN)-based GAN estimator of the stationary distribution of a time series. Based on our theoretical analysis, we propose an algorithm for estimating the change point in a time series distribution. The two main results are verified by two Monte Carlo experiments: the first estimates the joint stationary distribution of $5$-tuple samples of a 20-dimensional AR(3) model, and the second estimates the change point at the junction of two different stationary time series. A real-world application to a human activity recognition dataset highlights the potential of the proposed methods.
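A sketch of the blocking technique and the change-point idea: overlapping m-tuples turn a stationary series into (dependent but identically distributed) samples of its joint m-dimensional stationary distribution, which would serve as the GAN's training targets, and a discrepancy between left/right block samples scores candidate change points. The energy-distance scorer below is a simple stand-in for the paper's GAN-based estimator.

```python
import numpy as np

def blocks(x, m):
    """All overlapping m-tuples of series x -> array of shape (len(x)-m+1, m)."""
    return np.lib.stride_tricks.sliding_window_view(x, m)

def energy_distance(a, b):
    """Simple discrepancy between two samples of m-dimensional blocks."""
    d = lambda u, v: np.abs(u[:, None, :] - v[None, :, :]).sum(-1).mean()
    return 2 * d(a, b) - d(a, a) - d(b, b)

rng = np.random.default_rng(0)
# Two AR(1) regimes glued together at t = 150 (the change point to recover).
x = np.zeros(300)
for t in range(1, 300):
    phi = 0.2 if t < 150 else 0.8
    x[t] = phi * x[t - 1] + rng.normal()

m = 5  # block length: each sample is an m-tuple of consecutive values
taus = range(50, 250, 5)
scores = [energy_distance(blocks(x[:tau], m), blocks(x[tau:], m)) for tau in taus]
print("estimated change point:", list(taus)[int(np.argmax(scores))])
```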

This paper proposes a novel deep learning approach for approximating evolution operators and modeling unknown autonomous dynamical systems using time series data collected at varied time lags. It is a sequel to previous works [T. Qin, K. Wu, and D. Xiu, J. Comput. Phys., 395:620--635, 2019], [K. Wu and D. Xiu, J. Comput. Phys., 408:109307, 2020], and [Z. Chen, V. Churchill, K. Wu, and D. Xiu, J. Comput. Phys., 449:110782, 2022], which focused on learning a single evolution operator with a fixed time step. This paper aims to learn a family of evolution operators with variable time steps, which constitute a semigroup for an autonomous system. The semigroup property is crucial: it links the system's evolutionary behaviors across varying time scales, but it was not considered in the previous works. We propose, for the first time, a framework for embedding the semigroup property into the data-driven learning process, through a novel neural network architecture and new loss functions. The framework is highly flexible, can be combined with any suitable neural network, and is applicable to learning general autonomous ODEs and PDEs. We present rigorous error estimates and a variance analysis to understand the prediction accuracy and robustness of our approach, showing the remarkable advantages of semigroup awareness in our model. Moreover, our approach allows one to choose the prediction time steps arbitrarily and ensures that the predicted results are self-consistent across those choices. Extensive numerical experiments demonstrate that embedding the semigroup property notably reduces the data dependency of deep learning models and greatly improves the accuracy, robustness, and stability of long-time prediction.
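A minimal sketch of embedding the semigroup property into training: besides fitting observed flow-map data, the loss penalizes violations of $G(x, t_1 + t_2) \approx G(G(x, t_1), t_2)$ on randomly drawn time splits. The network shape, loss weight, and the toy linear ODE ground truth are illustrative choices, not the paper's architecture or loss functions.

```python
import torch
import torch.nn as nn

class EvolutionOp(nn.Module):
    """G(x, t): predicted state of an autonomous system after elapsed time t."""
    def __init__(self, dim=2, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, width), nn.Tanh(),
                                 nn.Linear(width, width), nn.Tanh(),
                                 nn.Linear(width, dim))

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

def exact_flow(x, t):
    """Toy ground truth: linear ODE dx/dt = A x, solved by matrix exponential."""
    A = torch.tensor([[-0.1, 1.0], [-1.0, -0.1]])
    Phi = torch.matrix_exp(A * t[:, :, None])      # batch of e^{A t}
    return (Phi @ x[:, :, None]).squeeze(-1)

G = EvolutionOp()
opt = torch.optim.Adam(G.parameters(), lr=1e-3)
for step in range(1000):
    x0 = torch.randn(128, 2)
    t1, t2 = torch.rand(128, 1), torch.rand(128, 1)
    # Data term: fit the observed flow map at the (variable) lag t1 + t2.
    data_loss = ((G(x0, t1 + t2) - exact_flow(x0, t1 + t2)) ** 2).mean()
    # Semigroup term: two short steps must agree with one long step.
    semigroup_loss = ((G(G(x0, t1), t2) - G(x0, t1 + t2)) ** 2).mean()
    loss = data_loss + 0.5 * semigroup_loss
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final loss: {loss.item():.4f}")
```

The semigroup term needs no extra data: it is a self-consistency constraint evaluated on arbitrary time splits, which is how it reduces data dependency.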

The rapid development of social robots has challenged robotics and the cognitive sciences to understand how humans perceive the appearance of robots. In this study, robot-associated words spontaneously generated by humans were analyzed to semantically reveal the body image of 30 robots developed over the past decades. The analyses took advantage of word affect scales and embedding vectors, and provided several lines of evidence for links between human perception and body image. It was found that the valence and dominance of the body image reflected humans' attitude towards the general concept of robots; that the user bases and usages of the robots were among the primary factors influencing humans' impressions of individual robots; and that the robots' affects were related to their semantic distances to the word "person". According to the results, building a body image for robots is an effective paradigm for investigating which features people appreciate and what influences people's feelings towards robots.
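A hedged sketch of the two measurements used in this kind of analysis: affect scores looked up from a lexicon and averaged per robot, and cosine distance between each robot-associated word and "person" in an embedding space. The tiny lexicon and 4-dimensional vectors below are made-up stand-ins for real affect norms and word embeddings.

```python
import numpy as np

# Hypothetical (valence, dominance) norms and embedding vectors.
affect = {"friendly": (0.85, 0.55), "creepy": (0.15, 0.40), "helper": (0.80, 0.60)}
embed = {
    "person":   np.array([0.9, 0.1, 0.3, 0.2]),
    "friendly": np.array([0.8, 0.2, 0.4, 0.1]),
    "creepy":   np.array([0.1, 0.9, 0.2, 0.6]),
    "helper":   np.array([0.7, 0.3, 0.5, 0.2]),
}

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

robot_words = ["friendly", "creepy", "helper"]  # words elicited for one robot
valence = np.mean([affect[w][0] for w in robot_words])
dominance = np.mean([affect[w][1] for w in robot_words])
dist = np.mean([cosine_distance(embed[w], embed["person"]) for w in robot_words])
print(f"body image: valence={valence:.2f}, dominance={dominance:.2f}, "
      f"distance to 'person'={dist:.2f}")
```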

Ultrasound is an adjunct tool to mammography that can quickly and safely aid physicians in diagnosing breast abnormalities. Clinical ultrasound often assumes a constant sound speed to form B-mode images for diagnosis. However, the various types of breast tissue, such as glandular tissue, fat, and lesions, differ in sound speed, and these differences can degrade the image reconstruction process. Alternatively, sound speed can be a powerful tool for identifying disease. To this end, we propose a deep-learning approach for sound speed estimation from in-phase and quadrature (IQ) ultrasound signals. First, we develop a large-scale simulated ultrasound dataset that generates quasi-realistic breast tissue by modeling breast glands, skin, and lesions with varying echogenicity and sound speed. We then train a fully convolutional neural network on this simulated dataset to produce an estimated sound speed map from three complex-valued IQ ultrasound images formed from plane-wave transmissions at separate angles. Furthermore, thermal noise augmentation is used during model optimization to enhance generalizability to real ultrasound data. We evaluate the model on simulated, phantom, and in-vivo breast ultrasound data, demonstrating its ability to accurately estimate sound speeds consistent with previously reported values in the literature. Our simulated dataset and model will be made publicly available, providing a step towards accurate and generalizable sound speed estimation for pulse-echo ultrasound imaging.
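A sketch of the input pipeline described above: three complex IQ images (one per plane-wave angle) become a 6-channel real-valued tensor, and thermal noise augmentation adds complex white Gaussian noise to the IQ data during training. The shapes, SNR level, and channel layout are illustrative placeholders, not the paper's exact configuration.

```python
import torch

def iq_to_channels(iq):
    """(3, H, W) complex IQ images -> (6, H, W) real-valued network input."""
    return torch.cat([iq.real, iq.imag], dim=0)

def thermal_noise_augment(iq, snr_db=20.0):
    """Add complex white Gaussian noise at a given SNR to mimic thermal noise."""
    power = iq.abs().pow(2).mean()
    noise_power = power / (10.0 ** (snr_db / 10.0))
    noise = torch.sqrt(noise_power / 2) * (torch.randn_like(iq.real)
                                           + 1j * torch.randn_like(iq.real))
    return iq + noise

iq = torch.randn(3, 128, 128, dtype=torch.complex64)   # stand-in IQ data
x = iq_to_channels(thermal_noise_augment(iq))
print(x.shape)  # torch.Size([6, 128, 128]): input to the fully conv. network
```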

We study how to generate captions that are not only accurate in describing an image but also discriminative across different images. The problem is both fundamental and interesting: most machine-generated captions, despite phenomenal research progress in the past several years, are expressed in a monotonic and featureless format. While such captions are normally accurate, they often lack important characteristics of human language - distinctiveness for each caption and diversity across different images. To address this problem, we propose a novel conditional generative adversarial network for generating diverse captions across images. Instead of estimating the quality of a caption solely against one image, the proposed comparative adversarial learning framework assesses the quality of captions by comparing a set of captions within the image-caption joint space. By contrasting with human-written captions and image-mismatched captions, the caption generator effectively exploits the inherent characteristics of human language and generates more discriminative captions. We show that our proposed network is capable of producing accurate and diverse captions across images.
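A sketch of the comparative assessment idea: instead of scoring one caption in isolation, the discriminator softmax-normalizes image-caption similarity scores over a set of captions for the same image (human-written, generated, and image-mismatched). The linear encoders and feature dimensions are toy stand-ins, not the paper's networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_img, d_txt, d = 512, 300, 128
img_proj = nn.Linear(d_img, d)    # stand-in image encoder head
txt_proj = nn.Linear(d_txt, d)    # stand-in caption encoder head

def comparative_scores(img_feat, caption_feats):
    """Softmax over caption-image similarities within one comparison set."""
    e_img = F.normalize(img_proj(img_feat), dim=-1)        # (d,)
    e_txt = F.normalize(txt_proj(caption_feats), dim=-1)   # (n, d)
    return F.softmax(e_txt @ e_img, dim=0)                 # (n,) probabilities

img = torch.randn(d_img)
captions = torch.randn(3, d_txt)  # [human-written, generated, image-mismatched]
p = comparative_scores(img, captions)
# Generator objective (sketch): maximize the probability mass the comparative
# discriminator assigns to the generated caption (index 1 here).
gen_loss = -torch.log(p[1])
print(p.detach().numpy(), gen_loss.item())
```

Because the generated caption competes directly with a human caption and a mismatched one in the same softmax, generic captions that fit many images earn low relative scores, which pushes the generator towards distinctiveness.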
