
We investigated the human capacity to acquire multiple visuomotor mappings for de novo skills. Using a grid navigation paradigm, we tested whether contextual cues, implemented as different "grid worlds", allow participants to learn two distinct key-mappings more efficiently. Our results indicate that task performance is significantly better when contextual information is provided. The same held true for meta-reinforcement learning agents that differed in whether or not they received contextual information while performing the task. We evaluated their accuracy in predicting human performance on the task and analyzed their internal representations. The results indicate that contextual cues allow the formation of separate representations in space and time when using different visuomotor mappings, whereas their absence favors sharing a single representation. While both strategies can support learning of multiple visuomotor mappings, we showed that contextual cues provide a computational advantage in terms of how many mappings can be learned.
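
To make the experimental setup concrete, here is a minimal sketch (not the authors' code) of a grid navigation task with two key-mappings and an optional contextual cue revealing which "grid world" is active; all names and layouts are illustrative assumptions.

```python
# Minimal sketch of a grid navigation task with two visuomotor key-mappings
# and an optional contextual cue. Layouts and names are illustrative only.
import random

MAPPINGS = {
    0: {"a": (0, -1), "d": (0, 1), "w": (-1, 0), "s": (1, 0)},   # standard layout
    1: {"a": (1, 0), "d": (-1, 0), "w": (0, 1), "s": (0, -1)},   # remapped layout
}

class GridNavTask:
    def __init__(self, size=5, use_context_cue=True):
        self.size = size
        self.use_context_cue = use_context_cue

    def _obs(self):
        # With contextual cues, the observation reveals the active mapping ("grid world").
        cue = self.mapping_id if self.use_context_cue else None
        return {"pos": self.pos, "goal": self.goal, "context": cue}

    def reset(self):
        self.mapping_id = random.choice([0, 1])
        self.pos = (self.size // 2, self.size // 2)
        self.goal = (random.randrange(self.size), random.randrange(self.size))
        return self._obs()

    def step(self, key):
        dr, dc = MAPPINGS[self.mapping_id][key]
        r = min(max(self.pos[0] + dr, 0), self.size - 1)
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos = (r, c)
        done = self.pos == self.goal
        reward = 1.0 if done else 0.0
        return self._obs(), reward, done
```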

Related Content

We present a fast learning-based inertial parameter estimation framework capable of understanding the dynamics of an unknown object, enabling a humanoid (or manipulator) to interact with its surrounding environment more safely and accurately. Unlike most related work, our framework does not require a force/torque sensor, a vision system, or a long-horizon trajectory. To achieve fast inertial parameter estimation, a time-series data-driven regression model is used rather than solving a constrained optimization problem. Because obtaining large amounts of ground-truth inertial parameters in the real world is challenging, we acquire a reliable dataset in a high-fidelity simulation developed using real-to-sim adaptation. The adaptation method we introduce consists of two components: 1) \textit{Robot System Identification} and 2) \textit{Gaussian Processes}. We demonstrate our method on a 4-DOF single manipulator of a wheeled humanoid robot, SATYRR. Results show that our method can identify the inertial parameters of various unknown objects quickly while maintaining sufficient accuracy compared to other methods. Manipulation and locomotion experiments were also carried out to show the benefit of using the estimated inertial parameters from a control perspective.
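
The following is a minimal sketch, not the paper's implementation, of the kind of time-series regression model described above: a recurrent network mapping a short window of proprioceptive signals to inertial parameters. The input/output dimensions and the choice of a GRU are assumptions for illustration.

```python
# Sketch of a time-series regressor from proprioceptive windows to inertial parameters.
import torch
import torch.nn as nn

class InertiaRegressor(nn.Module):
    def __init__(self, input_dim=12, hidden_dim=64, n_params=10):
        super().__init__()
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_params),  # e.g., mass, CoM (3), inertia tensor terms (6)
        )

    def forward(self, x):            # x: (batch, time, input_dim) of joint states/torques
        _, h = self.encoder(x)       # h: (1, batch, hidden_dim)
        return self.head(h[-1])      # (batch, n_params)

# Training would regress against simulation ground truth, e.g.:
# loss = nn.functional.mse_loss(model(window), true_params)
```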

The capability to generate simulation-ready garment models from 3D shapes of clothed humans will significantly enhance the interpretability of captured geometry of real garments, as well as their faithful reproduction in the virtual world. This will have a notable impact on fields like shape capture in social VR and virtual try-on in the fashion industry. To align with the garment modeling process standardized by the fashion industry, as well as with cloth simulation software, 2D patterns must be recovered. This involves an inverse garment design problem, which is the focus of our work here: starting with an arbitrary target garment geometry, our system estimates an animatable garment model by automatically adjusting its corresponding 2D template pattern, along with the material parameters of the physics-based simulation (PBS). Built upon a differentiable cloth simulator, the optimization process is directed towards minimizing the deviation of the simulated garment shape from the target geometry. Moreover, the produced patterns meet manufacturing requirements such as left-to-right symmetry, making them suitable for reverse garment fabrication. We validate our approach on examples of different garment types, and show that our method faithfully reproduces both the draped garment shape and the sewing pattern.
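
To illustrate the inverse-design loop, here is a minimal sketch assuming access to a differentiable cloth simulator; `simulate` below is a stand-in placeholder rather than a real API, and the parameterization of the pattern and material is purely illustrative.

```python
# Sketch of gradient-based optimization of pattern and material parameters
# through a (placeholder) differentiable cloth simulator.
import torch

def simulate(theta, mu):
    # Placeholder: a real differentiable PBS cloth simulator would go here,
    # returning draped 3D vertices as a function of pattern (theta) and material (mu).
    return torch.tanh(theta.sum() * 0.01 + mu.sum() * 0.01) * torch.ones(1000, 3)

target_vertices = torch.zeros(1000, 3)             # captured garment geometry (assumed aligned)
theta = torch.randn(40, requires_grad=True)        # 2D template pattern parameters
mu = torch.tensor([1.0, 0.1], requires_grad=True)  # material parameters (e.g., stretch, bend)

opt = torch.optim.Adam([theta, mu], lr=1e-2)
for step in range(200):
    opt.zero_grad()
    simulated = simulate(theta, mu)
    loss = ((simulated - target_vertices) ** 2).mean()  # deviation from target geometry
    loss.backward()
    opt.step()
```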

We propose EmoDistill, a novel speech emotion recognition (SER) framework that leverages cross-modal knowledge distillation during training to learn strong linguistic and prosodic representations of emotion from speech. During inference, our method uses only a stream of speech signals to perform unimodal SER, thus reducing computational overhead and avoiding run-time transcription and prosodic feature extraction errors. During training, our method distills information at both the embedding and logit levels from a pair of pre-trained Prosodic and Linguistic teachers that are fine-tuned for SER. Experiments on the IEMOCAP benchmark demonstrate that our method outperforms other unimodal and multimodal techniques by a considerable margin, achieving state-of-the-art performance of 77.49% unweighted accuracy and 78.91% weighted accuracy. Detailed ablation studies demonstrate the impact of each component of our method.
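
As an illustration of combined embedding- and logit-level distillation from two teachers, here is a minimal sketch; the loss weightings, temperature, and the use of MSE for the embedding term are assumptions, not necessarily EmoDistill's exact formulation.

```python
# Sketch of a two-teacher distillation loss at both logit and embedding levels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, student_emb,
                      prosodic_logits, prosodic_emb,
                      linguistic_logits, linguistic_emb,
                      labels, T=2.0, alpha=0.5, beta=0.5):
    # Supervised cross-entropy on the emotion labels.
    ce = F.cross_entropy(student_logits, labels)
    # Logit-level distillation: KL divergence to each teacher's softened distribution.
    kl = sum(
        F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                 F.softmax(t_logits / T, dim=-1),
                 reduction="batchmean") * T * T
        for t_logits in (prosodic_logits, linguistic_logits)
    )
    # Embedding-level distillation: match each teacher's representation (here via MSE).
    emb = F.mse_loss(student_emb, prosodic_emb) + F.mse_loss(student_emb, linguistic_emb)
    return ce + alpha * kl + beta * emb
```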

The recursive Neville algorithm allows one to calculate interpolating functions recursively. Upon a judicious choice of the abscissas used for the interpolation (and extrapolation), this algorithm leads to a method for convergence acceleration. For example, one can use the Neville algorithm in order to successively eliminate inverse powers of the upper limit of the summation from the partial sums of a given, slowly convergent input series. Here, we show that, for a particular choice of the abscissas used for the extrapolation, one can replace the recursive Neville scheme by a simple one-step transformation, while also obtaining access to subleading terms for the transformed series after convergence acceleration. The matrix-based, unified formulas allow one to estimate the rate of convergence of the partial sums of the input series to their limit. In particular, Bethe logarithms for hydrogen are calculated to 100 decimal digits.
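
For intuition, the following small example (not the paper's matrix-based formulation) applies the classic Neville recursion to the partial sums of a slowly convergent series, using abscissas $x_n = 1/n$ and evaluating the interpolating polynomial at $x = 0$ to accelerate convergence.

```python
# Neville extrapolation of partial sums to x = 0, using exact rational arithmetic.
from fractions import Fraction

def neville_extrapolate(partial_sums, abscissas, x=Fraction(0)):
    n = len(partial_sums)
    T = [list(partial_sums)]                      # T[0][i] = s_i
    for j in range(1, n):
        row = []
        for i in range(n - j):
            xi, xij = abscissas[i], abscissas[i + j]
            num = (x - xij) * T[j - 1][i] - (x - xi) * T[j - 1][i + 1]
            row.append(num / (xi - xij))
        T.append(row)
    return T[-1][0]                               # highest-order extrapolated estimate

# Example: zeta(2) = pi^2/6 from its very slowly convergent partial sums.
N = 12
partial_sums, acc = [], Fraction(0)
for n in range(1, N + 1):
    acc += Fraction(1, n * n)
    partial_sums.append(acc)
abscissas = [Fraction(1, n) for n in range(1, N + 1)]
print(float(neville_extrapolate(partial_sums, abscissas)))  # ~1.6449..., far closer than s_N
```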

Programming a robot is a complex task, as it requires the user to have a good command of specific programming languages and an awareness of the robot's physical constraints. We propose a framework that simplifies robot deployment by allowing direct communication using natural language. It uses large language models (LLMs) for prompt processing, workspace understanding, and waypoint generation. It also employs Augmented Reality (AR) to provide visual feedback of the planned outcome. We showcase the effectiveness of our framework with a simple pick-and-place task, which we implement on a real robot. Moreover, we present an early concept of expressive robot behavior and skill generation that can be used to communicate with the user and learn new skills (e.g., object grasping).
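
A minimal sketch of the prompt-to-waypoint idea follows; the LLM call is deliberately stubbed out (`query_llm` is a hypothetical placeholder, not a real API), and the JSON schema is an illustrative assumption.

```python
# Sketch of turning a natural-language command into structured waypoints via an LLM.
import json

SYSTEM_PROMPT = (
    "You control a robot arm. Given a command and a list of named objects with "
    "xyz positions (meters), reply ONLY with JSON: "
    '{"waypoints": [{"x": ..., "y": ..., "z": ..., "gripper": "open"|"close"}]}'
)

def query_llm(system_prompt, user_prompt):
    raise NotImplementedError("Replace with your LLM provider's API call.")

def plan_waypoints(command, workspace_objects):
    user_prompt = f"Command: {command}\nObjects: {json.dumps(workspace_objects)}"
    reply = query_llm(SYSTEM_PROMPT, user_prompt)
    plan = json.loads(reply)                      # real systems would validate and retry
    return plan["waypoints"]

# Example usage (the AR layer would render these waypoints for user confirmation
# before sending them to the robot):
# plan_waypoints("pick up the red cube and place it in the bin",
#                {"red_cube": [0.4, 0.1, 0.02], "bin": [0.2, -0.3, 0.0]})
```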

We prove impossibility results for adaptivity in non-smooth stochastic convex optimization. Given a set of problem parameters we wish to adapt to, we define a "price of adaptivity" (PoA) that, roughly speaking, measures the multiplicative increase in suboptimality due to uncertainty in these parameters. When the initial distance to the optimum is unknown but a gradient norm bound is known, we show that the PoA is at least logarithmic for expected suboptimality, and double-logarithmic for median suboptimality. When there is uncertainty in both distance and gradient norm, we show that the PoA must be polynomial in the level of uncertainty. Our lower bounds nearly match existing upper bounds, and establish that there is no parameter-free lunch.
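
As a rough illustration of the quantity at stake (the notation here is ours, not necessarily the paper's), the price of adaptivity can be read as a worst-case ratio of suboptimalities:

\[
  \mathrm{PoA} \;=\; \sup_{\theta \in \Theta}\;
  \frac{\mathbb{E}\!\left[ f(x_{\mathrm{adaptive}}) - f(x^\star) \right]}
       {\inf_{\mathcal{A}_\theta} \, \mathbb{E}\!\left[ f(x_{\mathcal{A}_\theta}) - f(x^\star) \right]},
\]

where $\Theta$ collects the unknown problem parameters (e.g., the initial distance to the optimum and the gradient-norm bound), $x_{\mathrm{adaptive}}$ is the output of a single parameter-free method, and $\mathcal{A}_\theta$ ranges over methods tuned with knowledge of $\theta$.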

This study investigates a strongly coupled system of partial differential equations (PDE) governing heat transfer in a copper rod, longitudinal vibrations, and total charge accumulation at the electrodes of a magnetizable piezoelectric beam. Conducted within the transmission line framework, the analysis reveals profound interactions between traveling electromagnetic and mechanical waves in magnetizable piezoelectric beams, despite disparities in their velocities. The findings suggest that in the open-loop scenario, the interaction of heat and beam dynamics lacks exponential stability when only thermal effects are considered. To confront this challenge, two types of boundary-type state feedback controllers are proposed: (i) employing entirely static feedback controllers, and (ii) adopting a hybrid approach wherein an electrical controller dynamically enhances the system dynamics. In both cases, solutions of the PDE systems are shown to be exponentially stable through meticulously formulated Lyapunov functions with diverse multipliers. The proposed proof technique establishes a robust foundation for demonstrating the exponential stability of Finite-Difference-based model reductions as the discretization parameter approaches zero.
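
For readers unfamiliar with the proof pattern, the following is the generic template behind such Lyapunov arguments (the constants and multiplier terms here are schematic, not the paper's): the natural energy $E(t)$ of the coupled system is augmented by a multiplier functional $F(t)$,

\[
  L(t) = E(t) + \varepsilon F(t), \qquad c_1 E(t) \le L(t) \le c_2 E(t) \quad \text{for sufficiently small } \varepsilon > 0,
\]

so that establishing $\frac{d}{dt} L(t) \le -c\, L(t)$ yields $E(t) \le \tfrac{c_2}{c_1}\, e^{-c t}\, E(0)$, i.e., exponential decay of the closed-loop solutions.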

In the rapidly advancing realm of visual generation, diffusion models have revolutionized the landscape, marking a significant shift in capabilities with their impressive text-guided generative functions. However, relying solely on text for conditioning these models does not fully cater to the varied and complex requirements of different applications and scenarios. Acknowledging this shortfall, a variety of studies aim to control pre-trained text-to-image (T2I) models to support novel conditions. In this survey, we undertake a thorough review of the literature on controllable generation with T2I diffusion models, covering both the theoretical foundations and practical advancements in this domain. Our review begins with a brief introduction to the basics of denoising diffusion probabilistic models (DDPMs) and widely used T2I diffusion models. We then reveal the controlling mechanisms of diffusion models, theoretically analyzing how novel conditions are introduced into the denoising process for conditional generation. Additionally, we offer a detailed overview of research in this area, organizing it into distinct categories from the condition perspective: generation with specific conditions, generation with multiple conditions, and universal controllable generation. For an exhaustive list of the controllable generation literature surveyed, please refer to our curated repository at \url{//github.com/PRIV-Creation/Awesome-Controllable-T2I-Diffusion-Models}.
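
As a concrete, standard example of how a condition $c$ enters the denoising process (common background rather than a claim about any particular surveyed method), classifier-free guidance combines conditional and unconditional noise predictions:

\[
  \tilde{\epsilon}_\theta(x_t, t, c) \;=\; \epsilon_\theta(x_t, t, \varnothing) \;+\; w \,\bigl( \epsilon_\theta(x_t, t, c) - \epsilon_\theta(x_t, t, \varnothing) \bigr),
\]

where $w$ is the guidance scale and $\varnothing$ denotes the null condition; the guided prediction $\tilde{\epsilon}_\theta$ then replaces $\epsilon_\theta$ in the usual DDPM/DDIM sampling update.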

Conventional methods for object detection typically require a substantial amount of training data, and preparing such high-quality training data is very labor-intensive. In this paper, we propose a novel few-shot object detection network that aims at detecting objects of unseen categories with only a few annotated examples. Central to our method are our Attention-RPN, Multi-Relation Detector, and Contrastive Training strategy, which exploit the similarity between the few-shot support set and the query set to detect novel objects while suppressing false detections in the background. To train our network, we contribute a new dataset that contains 1000 categories of various objects with high-quality annotations. To the best of our knowledge, this is one of the first datasets specifically designed for few-shot object detection. Once our few-shot network is trained, it can detect objects of unseen categories without further training or fine-tuning. Our method is general and has a wide range of potential applications. We achieve new state-of-the-art performance on different datasets in the few-shot setting. The dataset link is //github.com/fanq15/Few-Shot-Object-Detection-Dataset.
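
To convey the support/query similarity idea behind an attention-style RPN, here is a minimal sketch; the 1x1 depthwise correlation and the pooling choice are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of attending query features with a support-derived depthwise kernel.
import torch
import torch.nn.functional as F

def attention_map(query_feat, support_feat):
    """query_feat: (1, C, Hq, Wq); support_feat: (1, C, Hs, Ws)."""
    C = query_feat.shape[1]
    # Collapse the support feature to a 1x1 depthwise kernel via spatial averaging,
    # then correlate it channel-by-channel with the query feature map.
    kernel = support_feat.mean(dim=(2, 3)).view(C, 1, 1, 1)   # (C, 1, 1, 1)
    attended = F.conv2d(query_feat, kernel, groups=C)          # (1, C, Hq, Wq)
    return attended  # fed to the RPN so proposals focus on support-like regions
```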

Automatically creating a description of an image using natural language sentences, such as in English, is a very challenging task. It requires expertise in both image processing and natural language processing. This paper discusses the different available models for the image captioning task. We also discuss how advancements in object recognition and machine translation have greatly improved the performance of image captioning models in recent years. In addition, we discuss how such a model can be implemented. Finally, we evaluate the performance of the model using standard evaluation metrics.
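
As an illustration of the canonical encoder-decoder architecture that such captioning models build on, here is a minimal sketch; the layer sizes, the single-layer LSTM, and the way the image feature is injected are illustrative assumptions.

```python
# Sketch of a CNN-feature encoder feeding an LSTM caption decoder.
import torch
import torch.nn as nn

class SimpleCaptioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, feat_dim=2048):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, embed_dim)   # project pretrained CNN features
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_features, captions):
        # image_features: (B, feat_dim); captions: (B, T) token ids.
        img = self.img_proj(image_features).unsqueeze(1)   # (B, 1, embed_dim)
        words = self.embed(captions)                       # (B, T, embed_dim)
        inputs = torch.cat([img, words], dim=1)            # image acts as the first "word"
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                            # (B, T+1, vocab_size) logits
```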
