
Inflated-beam soft robots, such as tip-everting vine robots, can control curvature by contracting one side of the beam via pneumatic actuation. This work develops a general finite element modeling approach to characterize their bending. The model is validated across four pneumatic actuator types (series, compression, embedded, and fabric pneumatic artificial muscles) and can be extended to other designs. These actuators employ two bending mechanisms: geometry-based contraction and material-based contraction. The model accounts for intricate nonlinear effects of buckling and anisotropy. Experimental validation covers three working pressures (10, 20, and 30 kPa) for each actuator type. Geometry-based contraction yields large deformations with 92.1% accuracy once the buckling pattern forms, dropping to 80.7% accuracy at lower pressures because of stress singularities that arise during buckling. Material-based contraction achieves smaller bending angles but remains at least 96.7% accurate. The open-source models, available at //www.vinerobots.org, support the design of inflated-beam robots such as tip-everting vine robots and contribute to waste reduction by letting designs be optimized for material properties and stress distribution, balancing effective bending with stress management.
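To make the reported accuracy figures concrete, the sketch below shows how a bending-angle accuracy such as 92.1% can be computed from simulated and measured tip angles. The angle values are hypothetical placeholders; the paper's actual post-processing is not specified in the abstract.

```python
import numpy as np

# Hypothetical bending angles (degrees) at the three working pressures
# (10, 20, 30 kPa). The numbers are illustrative, not from the paper.
measured = np.array([34.0, 61.0, 88.0])    # e.g., motion-capture measurement
simulated = np.array([31.5, 58.0, 85.5])   # e.g., finite element prediction

# Per-pressure accuracy: 1 minus the relative error of the simulated angle.
accuracy = 1.0 - np.abs(simulated - measured) / np.abs(measured)
print("per-pressure accuracy (%):", np.round(100 * accuracy, 1))
print("mean accuracy: %.1f%%" % (100 * accuracy.mean()))
```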

Related Content

The learn-from-observation (LfO) paradigm is a human-inspired mode for a robot to learn to perform a task simply by watching it being performed. LfO can facilitate robot integration on factory floors by minimizing disruption and reducing tedious programming. A key component of the LfO pipeline is the transformation of depth camera frames into the corresponding task state and action pairs, which are then relayed to learning techniques such as imitation learning or inverse reinforcement learning to recover the task parameters. While several existing computer vision models analyze videos for activity recognition, SA-Net specifically targets robotic LfO from RGB-D data. However, SA-Net and many other models analyze frame data captured from a single viewpoint. Their analysis is therefore highly sensitive to occlusions of the observed task, which are frequent in deployments. An obvious way to reduce occlusions is to observe the task from multiple viewpoints simultaneously and fuse the synchronized streams in the model. Toward this, we present multi-view SA-Net (MVSA-Net), which generalizes SA-Net to perceive the task activity from multiple viewpoints, integrate them, and better recognize the state and action in each frame. Performance evaluations on two distinct domains establish that MVSA-Net recognizes state-action pairs under occlusion more accurately than single-view SA-Net and other baselines. Our ablation studies further evaluate its performance under different ambient conditions and establish the contribution of each architectural component. As such, MVSA-Net offers significantly more robust and deployable state-action trajectory generation than previous methods.
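As a rough illustration of the multi-view idea, the sketch below encodes each RGB-D view with a shared encoder, fuses the per-view features, and predicts state and action labels per frame. The architecture, layer sizes, and fusion rule are assumptions chosen for illustration; this is not the published MVSA-Net design.

```python
import torch
import torch.nn as nn

class MultiViewStateActionNet(nn.Module):
    """Illustrative multi-view fusion: one shared encoder applied to each RGB-D
    view, feature fusion by averaging, and separate state/action heads."""

    def __init__(self, num_states: int, num_actions: int, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(                    # shared across viewpoints
            nn.Conv2d(4, 32, 5, stride=2), nn.ReLU(),    # 4 channels: RGB + depth
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.state_head = nn.Linear(feat_dim, num_states)
        self.action_head = nn.Linear(feat_dim, num_actions)

    def forward(self, views: torch.Tensor):
        # views: (batch, n_views, 4, H, W); fusing views mitigates per-view occlusion
        b, v, c, h, w = views.shape
        feats = self.encoder(views.reshape(b * v, c, h, w)).reshape(b, v, -1)
        fused = feats.mean(dim=1)                        # simple average fusion
        return self.state_head(fused), self.action_head(fused)

# Example: a batch of 8 frames, 2 viewpoints, 64x64 RGB-D inputs.
net = MultiViewStateActionNet(num_states=5, num_actions=4)
state_logits, action_logits = net(torch.randn(8, 2, 4, 64, 64))
```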

High absolute accuracy is an essential prerequisite for a humanoid robot to autonomously and robustly perform manipulation tasks while avoiding obstacles. We present for the first time a kinematic model for a humanoid upper body incorporating joint and transversal elasticities. These elasticities lead to significant deformations due to the robot's own weight, and the resulting model is implicitly defined via a torque equilibrium. We successfully calibrate this model for DLR's humanoid Agile Justin, including all Denavit-Hartenberg parameters and elasticities. The calibration is formulated as a combined least-squares problem with priors and based on measurements of the end effector positions of both arms via an external tracking system. The absolute position error is massively reduced from 21mm to 3.1mm on average in the whole workspace. Using this complex and implicit kinematic model in motion planning is challenging. We show that for optimization-based path planning, integrating the iterative solution of the implicit model into the optimization loop leads to an elegant and highly efficient solution. For mildly elastic robots like Agile Justin, there is no performance impact, and even for a simulated highly flexible robot with 20 times higher elasticities, the runtime increases by only 30%.
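The following sketch illustrates, on a toy two-link planar arm with made-up stiffness values, how a kinematic model defined implicitly by a torque equilibrium can be resolved with a few fixed-point iterations; this is the kind of inner loop that can be embedded in an optimization-based planner. It is not DLR's calibrated Agile Justin model.

```python
import numpy as np

# Toy 2-link planar arm with joint elasticities (illustrative values only).
L = np.array([0.4, 0.35])      # link lengths (m)
m = np.array([4.0, 3.0])       # link masses (kg), lumped at link midpoints
k = np.array([800.0, 500.0])   # joint stiffnesses (Nm/rad)
g = 9.81

def gravity_torque(q):
    """Gravity torques about both joints for configuration q (rad)."""
    c1 = 0.5 * L[0] * np.array([np.cos(q[0]), np.sin(q[0])])               # COM link 1
    j2 = L[0] * np.array([np.cos(q[0]), np.sin(q[0])])                     # joint 2
    c2 = j2 + 0.5 * L[1] * np.array([np.cos(q[0] + q[1]), np.sin(q[0] + q[1])])
    tau1 = -g * (m[0] * c1[0] + m[1] * c2[0])    # moment of the weights about joint 1
    tau2 = -g * m[1] * (c2[0] - j2[0])           # moment of link-2 weight about joint 2
    return np.array([tau1, tau2])

def elastic_fk(q_cmd, iters=50, tol=1e-10):
    """Resolve the implicit equilibrium delta = K^-1 * tau_g(q_cmd + delta)."""
    delta = np.zeros(2)
    for _ in range(iters):
        new_delta = gravity_torque(q_cmd + delta) / k
        if np.max(np.abs(new_delta - delta)) < tol:
            delta = new_delta
            break
        delta = new_delta
    q = q_cmd + delta
    tip = np.array([L[0] * np.cos(q[0]) + L[1] * np.cos(q[0] + q[1]),
                    L[0] * np.sin(q[0]) + L[1] * np.sin(q[0] + q[1])])
    return tip, delta

tip, deflection = elastic_fk(np.array([0.3, 0.5]))
print("tip position:", tip, "joint deflection (rad):", deflection)
```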

Masking is a well-known and provably secure countermeasure against side-channel attacks. However, due to the additional redundant computations it requires, integrating masking schemes is expensive in terms of performance. The performance overhead of masking countermeasures is heavily influenced by the design choices of a cryptographic algorithm and is often not considered during the design phase. In this work, we examine the effect of design choices on integrating masking techniques into lattice-based cryptography. We select Scabbard, a suite of three lattice-based post-quantum key-encapsulation mechanisms (KEMs), namely Florete, Espada, and Sable. We provide arbitrary-order masked implementations of all the constituent KEMs of the Scabbard suite by exploiting their specific design elements. We show that the masked implementations of Florete, Espada, and Sable outperform the masked implementations of Kyber in terms of speed for any masking order. Masked Florete exhibits $73\%$, $71\%$, and $70\%$ performance improvements over masked Kyber at first-, second-, and third-order masking, respectively. Similarly, masked Espada achieves $56\%$, $59\%$, and $60\%$, and masked Sable $75\%$, $74\%$, and $73\%$, improvements over masked Kyber for the same orders. Our results show that design decisions have a significant impact on the efficiency of integrating masking countermeasures into lattice-based cryptography.
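For readers unfamiliar with higher-order masking, the sketch below shows plain additive (arithmetic) masking of a coefficient into d+1 shares modulo a toy modulus, plus a share-wise linear operation. It is a generic illustration of the countermeasure, not the Scabbard or Kyber masked implementations.

```python
import secrets

Q = 3329  # toy modulus; the Scabbard schemes use their own parameter sets

def mask(x: int, order: int) -> list[int]:
    """Split x into (order + 1) additive shares mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(order)]
    shares.append((x - sum(shares)) % Q)
    return shares

def unmask(shares: list[int]) -> int:
    return sum(shares) % Q

def masked_add(a: list[int], b: list[int]) -> list[int]:
    """Linear operations can be performed share-wise without fresh randomness."""
    return [(x + y) % Q for x, y in zip(a, b)]

x, y, order = 1234, 567, 2          # second-order masking: 3 shares per value
xs, ys = mask(x, order), mask(y, order)
assert unmask(masked_add(xs, ys)) == (x + y) % Q
```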

Compared to "black-box" models, like random forests and deep neural networks, explainable boosting machines (EBMs) are considered "glass-box" models that can be competitively accurate while also maintaining a higher degree of transparency and explainability. However, EBMs become readily less transparent and harder to interpret in high-dimensional settings with many predictor variables; they also become more difficult to use in production due to increases in scoring time. We propose a simple solution based on the least absolute shrinkage and selection operator (LASSO) that can help introduce sparsity by reweighting the individual model terms and removing the less relevant ones, thereby allowing these models to maintain their transparency and relatively fast scoring times in higher-dimensional settings. In short, post-processing a fitted EBM with many (i.e., possibly hundreds or thousands) of terms using the LASSO can help reduce the model's complexity and drastically improve scoring time. We illustrate the basic idea using two real-world examples with code.

The advent of tactile sensors in robotics has sparked many ideas on how robots can leverage direct contact measurements of their environment interactions to improve manipulation tasks. An important line of research in this regard is grasp force control, which aims to manipulate objects safely by limiting the force exerted on the object. While prior works have either hand-modeled their force controllers, employed model-based approaches, or not demonstrated sim-to-real transfer, we propose a model-free deep reinforcement learning approach trained in simulation and then transferred to the robot without further fine-tuning. To this end, we present a simulation environment that produces realistic normal forces, which we use to train continuous force control policies. An evaluation in which we compare against a baseline and perform an ablation study shows that our approach outperforms the hand-modeled baseline and that our proposed inductive bias and domain randomization facilitate sim-to-real transfer. Code, models, and supplementary videos are available at //sites.google.com/view/rl-force-ctrl
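The sketch below shows the domain-randomization pattern in a toy one-dimensional grasp-force environment: contact stiffness and sensor noise are resampled at every episode so that a learned force controller cannot overfit a single simulated contact model. The environment and its parameter ranges are assumptions for illustration, not the paper's simulation.

```python
import numpy as np

class ToyGraspForceEnv:
    """Toy 1-D grasp-force environment: the action is a fingertip displacement
    increment and the observation is the measured normal force plus the target.
    Contact stiffness and sensor noise are re-randomized every episode."""

    def __init__(self, target_force=2.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.target_force = target_force

    def reset(self):
        self.stiffness = self.rng.uniform(200.0, 2000.0)   # N/m, randomized per episode
        self.noise_std = self.rng.uniform(0.0, 0.05)       # N, sensor noise
        self.penetration = 0.0                             # m
        return self._obs()

    def step(self, action):
        self.penetration = max(0.0, self.penetration + float(action))
        force = self.stiffness * self.penetration
        reward = -abs(force - self.target_force)           # track the target normal force
        return self._obs(), reward, False, {}

    def _obs(self):
        measured = self.stiffness * self.penetration + self.rng.normal(0.0, self.noise_std)
        return np.array([measured, self.target_force])

env = ToyGraspForceEnv()
obs = env.reset()
obs, reward, done, info = env.step(1e-3)
```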

Sensor arrays play a significant role in direction of arrival (DOA) estimation. In particular, arrays with low redundancy and reduced mutual coupling are desirable. In this paper, we investigate a sensor array configuration with restricted sensor spacing and derive a closed-form expression for it. We also propose several classes of low-redundancy (LR) arrays. Interestingly, compared with super nested arrays (SNA) and maximum inter-element spacing constraint (MISC) arrays, one of the proposed arrays achieves a significant reduction in both redundancy ratio and mutual coupling. Numerical simulations verify the superiority of the proposed arrays over known sparse arrays in terms of weight functions, mutual coupling matrices, and DOA estimation performance.
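For reference, the sketch below computes the coarray weight function and redundancy ratio used to compare such arrays, here on a standard two-level nested array rather than the arrays proposed in the paper (whose closed-form geometry is not given in the abstract).

```python
from collections import Counter
from itertools import combinations

def coarray_weights(positions):
    """Weight function w(k): number of sensor pairs whose separation is k (in
    units of the base spacing). Small w(1), w(2), w(3) indicate few closely
    spaced pairs and therefore low mutual coupling."""
    return Counter(abs(p - q) for p, q in combinations(positions, 2))

# A standard two-level nested array with N1 = N2 = 4 (8 sensors), used only to
# illustrate the metrics.
N1, N2 = 4, 4
positions = sorted(set(range(1, N1 + 1)) | {m * (N1 + 1) for m in range(1, N2 + 1)})

w = coarray_weights(positions)
n = len(positions)
aperture = max(positions) - min(positions)
redundancy = (n * (n - 1) / 2) / aperture   # sensor pairs per positive lag (coarray is hole-free here)
print("positions:", positions)
print("w(1), w(2), w(3):", w[1], w[2], w[3])
print("redundancy ratio: %.2f" % redundancy)
```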

Effectively specifying and implementing robotic missions poses a set of challenges to software engineering for robotic systems. These challenges stem from the need to formalize and execute a robot's high-level tasks while considering various application scenarios and conditions, also known as contexts, in real-world operational environments. Writing correct mission specifications that explicitly account for multiple contexts can be tedious and error-prone. Furthermore, as the number of contexts, and consequently the complexity of the specification, increases, generating a correct-by-construction implementation (e.g., by using synthesis methods) can become intractable. A viable approach to address these issues is to decompose the mission specification into smaller, manageable sub-missions, with each sub-mission tailored to a specific context. Nevertheless, this compositional approach introduces its own set of challenges in ensuring the overall mission's correctness. In this paper, we propose a novel compositional framework for specifying and implementing contextual robotic missions using assume-guarantee contracts. The mission specification is structured in a hierarchical and modular fashion, allowing for each sub-mission to be synthesized as an independent robot controller. We address the problem of dynamically switching between sub-mission controllers while ensuring correctness under predefined conditions.
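The sketch below gives a minimal set-based reading of assume-guarantee contracts, with a refinement check and a conservative sub-mission switching check. The set-of-behaviours semantics, the predicate names, and the switching rule are simplifications chosen for illustration; the paper works with richer mission specifications.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    """A contract over a finite set of behaviours (illustrative semantics)."""
    assumptions: frozenset   # environment behaviours the sub-mission relies on
    guarantees: frozenset    # behaviours the sub-mission promises under those assumptions

def refines(c_impl: Contract, c_spec: Contract) -> bool:
    """c_impl refines c_spec if it tolerates at least the same environments
    (weaker or equal assumptions) and promises no more behaviours than allowed
    (stronger or equal guarantees)."""
    return (c_spec.assumptions <= c_impl.assumptions
            and c_impl.guarantees <= c_spec.guarantees)

def can_switch(from_c: Contract, to_c: Contract) -> bool:
    """Conservative switching check: everything guaranteed on exit from one
    sub-mission must satisfy the assumptions of the next."""
    return from_c.guarantees <= to_c.assumptions

patrol = Contract(frozenset({"corridor_clear", "corridor_blocked"}),
                  frozenset({"corridor_clear"}))
deliver = Contract(frozenset({"corridor_clear", "door_open"}),
                   frozenset({"package_at_goal"}))
deliver_impl = Contract(frozenset({"corridor_clear", "corridor_blocked", "door_open"}),
                        frozenset({"package_at_goal"}))
print(can_switch(patrol, deliver))    # True: patrol leaves the corridor clear
print(refines(deliver_impl, deliver)) # True: weaker assumptions, same guarantees
```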

Current industrial trends point toward agile, wirelessly connected robots whose intelligence and control are partly implemented in the edge cloud. This paper outlines the roles of three key participants in the value chain of an industrial process: the network provider, the robot operator, and the customer. It proposes a scheme in which the Quality of Service (QoS) parameters of the robot are fed into the network to inform network resource management. A sanding process use case is simulated to quantitatively demonstrate the relationship between QoS and Quality of Experience (QoE) for each participant.

Reinforcement Learning (RL) is an emerging approach to controlling many dynamical systems for which classical control approaches are not applicable or insufficient. However, the resulting policies may not generalize to variations in the system parameters. This paper presents a powerful yet simple algorithm that facilitates collaboration between RL agents trained independently to perform the same task but with different system parameters. The independence of the agents allows multi-core processing to be exploited for parallel training. Two examples demonstrate the effectiveness of the proposed technique. The main demonstration is performed on a quadrotor slung-load tracking problem in a real-time experimental setup. Integrating the developed algorithm is shown to outperform the individual policies by reducing the RMSE tracking error. The robustness of the ensemble is also verified against wind disturbances.
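The abstract does not detail the collaboration rule, so the sketch below shows one generic possibility: a performance-weighted blend of continuous actions from independently trained policies. The toy linear policies and the weights are placeholders, not the paper's method.

```python
import numpy as np

def blend_actions(policies, weights, obs):
    """Weighted blend of continuous actions from several independently trained
    policies. Weights would typically reflect each policy's recent tracking
    performance; here they are supplied directly."""
    actions = np.stack([p(obs) for p in policies])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ actions

# Toy linear policies trained for different (hypothetical) slung-load parameters.
policies = [lambda o: -0.8 * o, lambda o: -1.0 * o, lambda o: -1.2 * o]
obs = np.array([0.2, -0.1])      # e.g., tracking-error state
weights = [1.0, 2.0, 1.0]        # e.g., inverse of each policy's recent RMSE
print(blend_actions(policies, weights, obs))
```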

Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high-variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that both types of variance must be mitigated to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and achieves better generalization than existing methods.
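A short worked version of the decomposition described above, with symbols introduced here: let $g$ be the full-batch gradient, $\hat g$ the mini-batch gradient computed with exact embeddings, and $\tilde g$ the gradient actually used (sampled nodes and approximate embeddings). Applying $\|a+b\|^2 \le 2\|a\|^2 + 2\|b\|^2$ gives a bound whose two terms correspond to the two variance sources:

```latex
% Two-term bound suggested by the composite structure of the empirical risk.
\[
\mathbb{E}\bigl[\|\tilde g - g\|^2\bigr]
  \;\le\;
  2\,\underbrace{\mathbb{E}\bigl[\|\tilde g - \hat g\|^2\bigr]}_{\text{embedding approximation variance}}
  \;+\;
  2\,\underbrace{\mathbb{E}\bigl[\|\hat g - g\|^2\bigr]}_{\text{stochastic gradient variance}}.
\]
```

Under this reading, the decoupled strategy targets the two terms separately: adaptive node sampling controls the second term, while explicit correction of the approximate embeddings controls the first.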
