Robotic automation in life science research is a paradigm that has gained increasing relevance in recent years. Current solutions in this area often have limited scope, such as pick-and-place tasks for a specific object. Thus, each new process requires a separate toolset, which prevents the realization of more complex workflows and reduces the acceptance of robotic automation tools. Here, we present a novel finger system for a parallel gripper for biolaboratory automation that can handle a wide range of liquid containers. This flexibility is enabled by producing the fingers as a dual-extrusion 3D print. Key features that enhance the grasping capabilities are the fingertip design and the soft-material coating, applied from the second extruder in one seamless print. By employing the passive compliant mechanism previously presented in the finger ``PaCoMe'', a simple actuation system and a low weight are maintained. Their resistance to chemicals and high temperatures and their integration with a tool exchange system make the fingers suitable for daily laboratory use and complex workflows. We demonstrate their task suitability in several experiments, showing the wide range of vessels that can be handled as well as their tolerance of displacements and their grasp stability.

Related content

Automator is an application developed by Apple for its Mac OS X operating system. Using nothing more than point-and-click and drag-and-drop operations, you can combine a series of actions into a workflow that carries out complex tasks for you automatically and repeatably. Automator also works across many different kinds of applications, including the Finder, the Safari web browser, iCal, Address Book, and others. It can likewise work with third-party applications such as Microsoft Office, Adobe Photoshop, and Pixelmator.

Neural implicit fields are powerful for representing 3D scenes and generating high-quality novel views, but it remains challenging to use such implicit representations for creating a 3D human avatar with a specific identity and artistic style that can be easily animated. Our proposed method, AvatarCraft, addresses this challenge by using diffusion models to guide the learning of geometry and texture for a neural avatar based on a single text prompt. We carefully design the optimization framework of neural implicit fields, including a coarse-to-fine multi-bounding box training strategy, shape regularization, and diffusion-based constraints, to produce high-quality geometry and texture. Additionally, we make the human avatar animatable by deforming the neural implicit field with an explicit warping field that maps the target human mesh to a template human mesh, both represented using parametric human models. This simplifies animation and reshaping of the generated avatar by controlling pose and shape parameters. Extensive experiments on various text descriptions show that AvatarCraft is effective and robust in creating human avatars and rendering novel views, poses, and shapes. Our project page is: \url{https://avatar-craft.github.io/}.
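
To make the diffusion-based constraint concrete, here is a minimal, self-contained PyTorch sketch of score-distillation-style guidance for an implicit field. The tiny MLP field, the frozen one-layer "denoiser", and the grid-based rendering are illustrative placeholders, not AvatarCraft's actual models or rendering pipeline:

```python
# Sketch of diffusion-guided optimization of a neural implicit field.
# All networks here are toy stand-ins for the real NeRF-like avatar field
# and a pretrained text-conditioned diffusion model.
import torch
import torch.nn as nn

field = nn.Sequential(  # placeholder implicit field: (x, y, z) -> RGB
    nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3), nn.Sigmoid())
denoiser = nn.Conv2d(3, 3, 3, padding=1)  # stand-in for a frozen diffusion model
for p in denoiser.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(field.parameters(), lr=1e-4)

for step in range(100):
    # "Render" a 32x32 image by querying the field on a plane of points
    # (a stand-in for the paper's coarse-to-fine volumetric rendering).
    u, v = torch.meshgrid(torch.linspace(-1, 1, 32),
                          torch.linspace(-1, 1, 32), indexing="ij")
    pts = torch.stack([u, v, torch.zeros_like(u)], dim=-1).reshape(-1, 3)
    img = field(pts).T.reshape(1, 3, 32, 32)

    # Score-distillation step: add noise, let the frozen denoiser predict it,
    # and use the residual (detached) as a gradient on the rendering.
    noise = torch.randn_like(img)
    t = torch.rand(())                        # random noise level in [0, 1)
    noisy = (1 - t) * img + t * noise
    residual = (denoiser(noisy) - noise).detach()
    loss = (residual * img).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The key pattern is that the diffusion model stays frozen; only its noise-prediction residual, treated as a constant, steers the field's parameters.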

Recently, transformer-based methods have gained significant success in sequential 2D-to-3D lifting human pose estimation. As a pioneering work, PoseFormer captures spatial relations of human joints in each video frame and human dynamics across frames with cascaded transformer layers and has achieved impressive performance. However, in real scenarios, the performance of PoseFormer and its follow-ups is limited by two factors: (a) the length of the input joint sequence; (b) the quality of 2D joint detection. Existing methods typically apply self-attention to all frames of the input sequence, which causes a huge computational burden when the frame number is increased to obtain higher estimation accuracy, and they are not robust to the noise naturally introduced by the limited capability of 2D joint detectors. In this paper, we propose PoseFormerV2, which exploits a compact representation of lengthy skeleton sequences in the frequency domain to efficiently scale up the receptive field and boost robustness to noisy 2D joint detection. With minimal modifications to PoseFormer, the proposed method effectively fuses features in both the time domain and the frequency domain, enjoying a better speed-accuracy trade-off than its precursor. Extensive experiments on two benchmark datasets (i.e., Human3.6M and MPI-INF-3DHP) demonstrate that the proposed approach significantly outperforms the original PoseFormer and other transformer-based variants. Code is released at \url{https://github.com/QitaoZhao/PoseFormerV2}.
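
The core frequency-domain idea lends itself to a short sketch: encode a long, noisy 2D-joint sequence by its low-frequency DCT coefficients so that a handful of tokens covers the whole receptive field. The sequence length, joint count, and number of retained coefficients below are illustrative, not the paper's exact hyperparameters:

```python
# Compact frequency-domain representation of a skeleton sequence.
import numpy as np
from scipy.fft import dct, idct

frames, joints = 81, 17                    # a long sequence of detected 2D poses
rng = np.random.default_rng(0)
seq = np.cumsum(0.05 * rng.standard_normal((frames, joints, 2)), axis=0)  # smooth-motion stand-in

coeffs = dct(seq, axis=0, norm="ortho")    # DCT along the time axis
k = 9                                      # keep only the k lowest frequencies
compact = coeffs[:k]                       # (k, joints, 2): short token sequence for the transformer

# Low-frequency coefficients reconstruct the smooth motion well while
# discarding the high-frequency jitter typical of 2D joint detectors.
padded = np.concatenate([compact, np.zeros((frames - k, joints, 2))], axis=0)
recon = idct(padded, axis=0, norm="ortho")
print("mean reconstruction error:", np.abs(recon - seq).mean())
```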

Humans coordinate the abundant degrees of freedom (DoFs) of their hands to dexterously perform tasks in everyday life. We imitate human strategies to advance the dexterity of multi-DoF robotic hands. Specifically, we enable a robot hand to grasp multiple objects by exploiting its kinematic redundancy, i.e., all of its controllable DoFs. We propose a human-like grasp synthesis algorithm that generates grasps using pairwise contacts on arbitrary opposing hand surface regions, no longer limited to the fingertips or the inner hand surface. To model the hand's available space for grasping, we construct a reachability map consisting of the reachable spaces of all finger phalanges and the palm. This map guides the formulation of a constrained optimization problem that solves for feasible and stable grasps. We further formulate an iterative process that empowers robotic hands to grasp multiple objects in sequence. Moreover, we propose a kinematic efficiency metric and an associated strategy to facilitate exploiting kinematic redundancy. We validate our approach by generating grasps of single and multiple objects using various hand surface regions; such grasps can be successfully replicated on a real robotic hand.
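
As a toy illustration of the pairwise-contact principle (not the paper's full constrained optimization, which also involves the reachability map), the sketch below checks the classic antipodal condition for two opposing contacts under Coulomb friction; the contact points, normals, and friction coefficient are made up:

```python
# Antipodal check for a pairwise-contact grasp.
import numpy as np

def antipodal_pair(p1, n1, p2, n2, mu=0.5):
    """Check whether two contacts (point, inward unit normal) form a stable pinch.

    The line connecting the contacts must lie within the friction cone of
    each contact, i.e. its angle to each normal is below arctan(mu)."""
    d = p2 - p1
    d = d / np.linalg.norm(d)
    half_angle = np.arctan(mu)
    ok1 = np.arccos(np.clip(np.dot(d, n1), -1, 1)) <= half_angle
    ok2 = np.arccos(np.clip(np.dot(-d, n2), -1, 1)) <= half_angle
    return ok1 and ok2

# Two contacts on opposite sides of an object, normals pointing inward.
print(antipodal_pair(np.array([0.0, -0.03, 0.0]), np.array([0.0, 1.0, 0.0]),
                     np.array([0.0,  0.03, 0.0]), np.array([0.0, -1.0, 0.0])))
```

Because the condition only involves a contact pair, it applies equally to fingertip-fingertip, phalanx-palm, or phalanx-phalanx contacts, which is what allows arbitrary opposing surface regions to be used.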

Intuitive robot programming through the use of tracked smart input devices relies on fixed, external tracking systems, most often employing infra-red markers. Such an approach is frequently combined with projector-based augmented reality for better visualisation and interaction. The combined system, although providing an intuitive programming platform with short cycle times even for inexperienced users, is immobile, expensive, and requires extensive calibration. When faced with a changing environment and a large number of robots, it becomes sorely impractical. Here we present our work on infra-red marker tracking using the Microsoft HoloLens head-mounted display. The HoloLens can map the environment, register the robot on-line, and track smart devices equipped with infra-red markers in the robot coordinate system. We envision our work providing the basis for transferring many of the paradigms developed over the years for systems requiring a projector and a tracked input device into a highly portable system that requires no calibration or special set-up. We test the quality of the marker tracking in an industrial robot cell and compare it against ground truth obtained via an ART-3 tracking system.
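
A typical way to run such a comparison, and a plausible reading of the evaluation described here, is to rigidly align the HoloLens measurements with the ground-truth points and report the residual error; the sketch below does this with the standard Kabsch algorithm on synthetic stand-in data:

```python
# Rigid alignment of tracked points against ground truth (Kabsch + RMSE).
import numpy as np

def kabsch_rmse(P, Q):
    """Best rigid alignment of P onto Q (both Nx3), returning the residual RMSE."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    residual = Qc - Pc @ R.T
    return np.sqrt((residual ** 2).sum(1).mean())

gt = np.random.rand(50, 3)                          # ground-truth marker positions
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
# Same markers seen in another frame, plus ~1 mm of measurement noise.
measured = gt @ Rz.T + np.array([0.1, -0.2, 0.05]) + 1e-3 * np.random.randn(50, 3)
print("tracking RMSE after alignment:", kabsch_rmse(measured, gt))
```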

We present a new multi-sensor dataset for multi-view 3D surface reconstruction. It includes registered RGB and depth data from sensors of different resolutions and modalities: smartphones, Intel RealSense, Microsoft Kinect, industrial cameras, and a structured-light scanner. The scenes are selected to emphasize a diverse set of material properties that are challenging for existing algorithms. We provide around 1.4 million images of 107 different scenes acquired from 100 viewing directions under 14 lighting conditions. We expect our dataset to be useful for the evaluation and training of 3D reconstruction algorithms and for related tasks. The dataset is available at skoltech3d.appliedai.tech.

Safe and efficient collaboration among multiple robots in unstructured environments is increasingly critical in the era of Industry 4.0. However, achieving robust and autonomous collaboration among humans and other robots requires modern robotic systems to have effective proximity perception and reactive obstacle avoidance. In this paper, we propose a novel methodology for reactive whole-body obstacle avoidance that ensures conflict-free robot-robot interactions even in dynamic environments. Unlike existing approaches based on Jacobian-type, sampling-based, or geometric techniques, our methodology leverages recent advances in deep learning and topological manifold learning, enabling it to be readily generalized to other problem settings with high computational efficiency and fast graph-traversal techniques. Our approach allows a robotic arm to proactively avoid obstacles of arbitrary 3D shapes without direct contact, a significant improvement over traditional industrial cobot settings. To validate our approach, we implement it on a robotic platform consisting of dual 6-DoF robotic arms with optimized proximity-sensor placement, capable of working collaboratively with varying levels of interference. Specifically, one arm performs reactive whole-body obstacle avoidance while achieving its predetermined objective, while the other arm emulates the presence of a human collaborator with independent and potentially adversarial movements. Our methodology provides a robust and effective solution for safe human-robot collaboration in non-stationary environments.

DAMON leverages manifold learning and variational autoencoding to achieve obstacle avoidance, allowing for motion planning through adaptive graph traversal in a pre-learned, low-dimensional, hierarchically-structured manifold graph that captures the intricate motion dynamics between a robotic arm and its obstacles. This versatile and reusable approach is applicable to various collaboration scenarios. The primary advantage of DAMON is its ability to embed information in a low-dimensional graph, eliminating the repeated computation required by current sampling-based methods. As a result, it offers faster and more efficient motion planning with significantly lower computational overhead and a smaller memory footprint. In summary, DAMON is a breakthrough methodology that addresses the challenge of dynamic obstacle avoidance in robotic systems and offers a promising solution for safe and efficient human-robot collaboration. Our approach has been experimentally validated on a 7-DoF robotic manipulator in both simulated and physical settings. DAMON enables the robot to learn and generate skills for avoiding previously-unseen obstacles while achieving predefined objectives. We also optimize DAMON's design parameters and performance using an analytical framework. Our approach outperforms mainstream methodologies, including RRT, RRT*, Dynamic RRT*, L2RRT, and MpNet, achieving on average 40\% smoother trajectories and over 65\% better latency.
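
The overall pattern, embed once and then plan by graph search, can be sketched compactly. Below, PCA stands in for DAMON's variational autoencoder and a random test stands in for real collision checking, so this is an illustration of the planning structure rather than the authors' implementation:

```python
# Plan in a pre-built low-dimensional graph instead of re-sampling per query.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(0)
configs = rng.uniform(-np.pi, np.pi, size=(500, 7))   # sampled 7-DoF joint states

# Stand-in "encoder": project to a 2D latent manifold (the paper learns this
# embedding with a VAE; PCA is only an illustrative substitute).
X = configs - configs.mean(0)
latent = X @ np.linalg.svd(X, full_matrices=False)[2][:2].T

# Build a k-nearest-neighbor graph in latent space once, dropping "colliding" edges.
k = 8
dists = np.linalg.norm(latent[:, None] - latent[None, :], axis=-1)
rows, cols, vals = [], [], []
for i in range(len(latent)):
    for j in np.argsort(dists[i])[1:k + 1]:
        if rng.random() > 0.1:               # placeholder obstacle/collision check
            rows.append(i); cols.append(j); vals.append(dists[i, j])
graph = csr_matrix((vals, (rows, cols)), shape=(len(latent),) * 2)

# Queries reuse the prebuilt graph: shortest path from node 0 to node 499.
dist, pred = dijkstra(graph, indices=0, return_predecessors=True)
path, node = [], 499
while node != -9999:                         # -9999 marks "no predecessor" in SciPy
    path.append(node); node = pred[node]
print("latent-graph path:", path[::-1])
```

The point of the structure is amortization: the expensive embedding and graph construction happen once, after which each planning query is a cheap traversal.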

Physics-based deep learning frameworks have been shown to effectively model the dynamics of complex physical systems with generalization capability across problem inputs. However, time-independent problems pose the challenge of requiring long-range exchange of information across the computational domain to obtain accurate predictions. In the context of graph neural networks (GNNs), this calls for deeper networks, which, in turn, may compromise or slow down the training process. In this work, we present two GNN architectures that overcome this challenge: the Edge Augmented GNN and the Multi-GNN. We show that both networks perform significantly better (by a factor of 1.5 to 2) than baseline methods when applied to time-independent solid mechanics problems. Furthermore, the proposed architectures generalize well to unseen domains, boundary conditions, and materials. Here, the treatment of variable domains is facilitated by a novel coordinate transformation that enables rotation and translation invariance. By broadening the range of problems that neural operators based on graph neural networks can tackle, this paper provides the groundwork for their application to complex scientific and industrial settings.
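
The abstract does not spell out the transformation, but one standard construction with the stated property, shown below as an assumption-laden sketch, is to translate each domain to its centroid and rotate it into its principal axes; any rigid motion of the input then maps to the same canonical coordinates (up to axis sign flips):

```python
# Canonicalize node coordinates so a GNN sees rigid motions as the same input.
import numpy as np

def canonicalize(nodes):
    """Map node coordinates (Nx2) to a translation/rotation-invariant frame."""
    centered = nodes - nodes.mean(axis=0)
    # Principal axes of the node cloud define the canonical rotation.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt.T

mesh = np.random.rand(200, 2)                       # nodes of a 2D domain
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
moved = mesh @ R.T + np.array([3.0, -1.0])          # same domain, rigidly moved

a, b = canonicalize(mesh), canonicalize(moved)
# Up to SVD sign ambiguity, both versions land in the same canonical frame.
print(min(np.abs(a - b * s).max() for s in
          (np.array([1, 1]), np.array([1, -1]),
           np.array([-1, 1]), np.array([-1, -1]))))
```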

The key challenge of image manipulation detection is how to learn features that are generalizable, i.e., sensitive to manipulations in novel data, yet specific enough to prevent false alarms on authentic images. Current research emphasizes the sensitivity, while the specificity is largely overlooked. In this paper we address both aspects by multi-view feature learning and multi-scale supervision. By exploiting the noise distribution and the boundary artifacts surrounding tampered regions, the former aims to learn semantic-agnostic and thus more generalizable features. The latter allows us to learn from authentic images, which are nontrivial for current methods based on semantic segmentation networks to take into account. We realize these ideas in a new network, which we term MVSS-Net. Extensive experiments on five benchmark sets justify the viability of MVSS-Net for both pixel-level and image-level manipulation detection.
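
To give a flavor of the noise view (the exact extraction in MVSS-Net differs, as it learns its noise features), the sketch below applies a classic fixed SRM high-pass kernel, which suppresses image semantics and exposes local noise statistics, the kind of signal on which splicing and other manipulations leave inconsistencies:

```python
# Fixed SRM high-pass residual as a semantic-agnostic "noise view".
import numpy as np
from scipy.signal import convolve2d

# 5x5 second-order SRM residual kernel (normalized), a standard choice in
# manipulation-detection work.
srm = np.array([[-1,  2,  -2,  2, -1],
                [ 2, -6,   8, -6,  2],
                [-2,  8, -12,  8, -2],
                [ 2, -6,   8, -6,  2],
                [-1,  2,  -2,  2, -1]], dtype=float) / 12.0

gray = np.random.rand(128, 128)          # stand-in for a grayscale image
noise_view = convolve2d(gray, srm, mode="same", boundary="symm")

# Most semantic content is removed; what remains highlights local noise
# statistics, which differ between authentic and tampered regions.
print(noise_view.std(), gray.std())
```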

Knowledge graph (KG) embeddings learn low-dimensional representations of entities and relations to predict missing facts. KGs often exhibit hierarchical and logical patterns which must be preserved in the embedding space. For hierarchical data, hyperbolic embedding methods have shown promise for high-fidelity and parsimonious representations. However, existing hyperbolic embedding methods do not account for the rich logical patterns in KGs. In this work, we introduce a class of hyperbolic KG embedding models that simultaneously capture hierarchical and logical patterns. Our approach combines hyperbolic reflections and rotations with attention to model complex relational patterns. Experimental results on standard KG benchmarks show that our method improves over previous Euclidean- and hyperbolic-based efforts by up to 6.1% in mean reciprocal rank (MRR) in low dimensions. Furthermore, we observe that different geometric transformations capture different types of relations while attention-based transformations generalize to multiple relations. In high dimensions, our approach yields new state-of-the-art MRRs of 49.6% on WN18RR and 57.7% on YAGO3-10.
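
The combination of rotations with hyperbolic geometry can be sketched in a few lines. The snippet below scores a triple RotH-style, with block-wise Givens rotations as the relation transform and Poincaré-ball distance as the score; the dimensions and initialization are illustrative, and the attention mechanism and learned curvatures are omitted:

```python
# Hyperbolic KG scoring: rotate the head by the relation, measure distance to the tail.
import numpy as np

def givens_rotate(x, angles):
    """Rotate consecutive 2D blocks of x by per-block angles (norm-preserving)."""
    pairs = x.reshape(-1, 2)
    c, s = np.cos(angles), np.sin(angles)
    rotated = np.stack([c * pairs[:, 0] - s * pairs[:, 1],
                        c * pairs[:, 1] + s * pairs[:, 0]], axis=1)
    return rotated.reshape(-1)

def poincare_dist(u, v):
    """Distance in the Poincaré ball (vector norms must stay below 1)."""
    num = 2 * np.sum((u - v) ** 2)
    den = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + num / den)

dim = 8
head = 0.1 * np.random.randn(dim)        # entity embeddings inside the unit ball
tail = 0.1 * np.random.randn(dim)
rel_angles = np.random.randn(dim // 2)   # one rotation angle per 2D block

# Score of (head, relation, tail): smaller hyperbolic distance = more plausible.
score = -poincare_dist(givens_rotate(head, rel_angles), tail)
print(score)
```

Rotations about the origin are isometries of the ball, so the relation transform composes cleanly with the hyperbolic metric; reflections can be handled analogously with per-block reflection matrices.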
