
Numerical methods for Inverse Kinematics (IK) employ iterative, linear approximations of the IK problem to bring the end-effector from its initial pose to the desired final pose. These methods require computing the Jacobian of the Forward Kinematics (FK) and its inverse within the linear approximation. Despite the many successful implementations reported in the literature, Jacobian-based IK methods can still fail to preserve certain useful properties when an improper matrix inverse, e.g. the Moore-Penrose (MP) inverse, is employed for incommensurate robotic systems. In this paper, we propose a systematic, robust and accurate numerical solution to the IK problem using the Mixed (MX) Generalized Inverse (GI), applicable to any type of Jacobian (e.g., analytical, numerical or geometric) derived for any commensurate or incommensurate robot. The approach is robust regardless of whether the system is under-determined (fewer than 6 DoF) or over-determined (more than 6 DoF). We investigate six robotic manipulators with various Degrees of Freedom (DoF) to demonstrate that commonly used GIs fail to guarantee the same system behavior when the units are varied for incommensurate manipulators. In addition, we evaluate the proposed methodology as a global IK solver and compare it against well-known IK methods for redundant manipulators. Based on the experimental results, we conclude that the right choice of GI is crucial for preserving certain properties of the system (i.e., unit consistency).
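To make the unit-consistency issue concrete, here is a minimal NumPy sketch (our illustration, not the paper's code; the toy Jacobian and unit choices are assumptions) showing how a Moore-Penrose IK step changes when a prismatic joint is merely re-expressed in different units:

```python
import numpy as np

# One undamped IK step is dq = G(J) @ err for some generalized inverse G of
# the (possibly non-square) Jacobian J. Below, the Moore-Penrose inverse
# yields different joint motions depending only on the units chosen for a
# prismatic joint; the 2x3 Jacobian mixes two revolute joints and one
# prismatic joint and is purely illustrative.

J = np.array([[0.8, 0.2, 1.0],     # end-effector x-velocity row
              [0.1, 0.9, 0.0]])    # end-effector y-velocity row
err = np.array([0.05, 0.02])       # task-space error

dq_m = np.linalg.pinv(J) @ err     # joint 3 (prismatic) expressed in metres

# Re-express joint 3 in millimetres: its Jacobian column shrinks by 1000.
S = np.diag([1.0, 1.0, 1e-3])
dq_mm = np.linalg.pinv(J @ S) @ err
dq_back = S @ dq_mm                # convert the mm solution back to metres

# A unit-consistent inverse would make these agree; MP generally does not,
# because its minimum-norm criterion is not invariant to the rescaling.
print(dq_m)
print(dq_back)
print(np.allclose(dq_m, dq_back))  # typically False
```

A unit-consistent or mixed generalized inverse is designed precisely so that such a change of units rescales the solution consistently.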

Related content

The 46th annual Graphics Interface conference will be held May 21-22, 2020 at the University of Toronto. GI has consistently attracted high-quality submissions on the latest advances in computer graphics, human-computer interaction (HCI), and visualization. Official website link:

Substantial experiments have validated the success of the Batch Normalization (BN) layer in benefiting convergence and generalization. However, BN requires extra memory and floating-point computation. Moreover, BN is inaccurate on micro-batches, as it depends on batch statistics. In this paper, we address these problems by simplifying BN regularization while keeping two fundamental impacts of BN layers, i.e., data decorrelation and adaptive learning rates. We propose a novel normalization method, named MimicNorm, to improve the convergence and efficiency of network training. MimicNorm consists of only two light operations: a modified weight mean operation (subtracting the mean value from the weight parameter tensor) and one BN layer before the loss function (the last BN layer). We leverage neural tangent kernel (NTK) theory to prove that our weight mean operation whitens activations and transitions the network into the chaotic regime like a BN layer, and consequently leads to enhanced convergence. The last BN layer provides auto-tuned learning rates and also improves accuracy. Experimental results show that MimicNorm achieves similar accuracy for various network structures, including ResNets and lightweight networks like ShuffleNet, while reducing memory consumption by about 20%. The code is publicly available at //github.com/Kid-key/MimicNorm.
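The two operations described above are simple enough to sketch. Below is a minimal PyTorch illustration (the tiny network, layer sizes, and the choice of centring axes are our assumptions; the official implementation is in the linked repository):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanSubtractedConv2d(nn.Conv2d):
    """Conv layer whose weights are centred on the fly (the 'weight mean'
    operation); per-output-channel centring is assumed here, the exact
    reduction axes being a detail of the official code."""
    def forward(self, x):
        w = self.weight - self.weight.mean(dim=(1, 2, 3), keepdim=True)
        return F.conv2d(x, w, self.bias, self.stride, self.padding)

class MimicNormNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = MeanSubtractedConv2d(3, 16, 3, padding=1)
        self.conv2 = MeanSubtractedConv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32, num_classes)
        # The single BN layer placed just before the loss (the 'last BN layer').
        self.last_bn = nn.BatchNorm1d(num_classes)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.last_bn(self.fc(x))   # logits fed to cross-entropy

logits = MimicNormNet()(torch.randn(8, 3, 32, 32))
print(logits.shape)  # torch.Size([8, 10])
```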

We present a comprehensive evaluation of the robustness and explainability of ResNet-like models in the context of Unintended Radiated Emission (URE) classification, and suggest a new approach leveraging Neural Stochastic Differential Equations (SDEs) to address the identified limitations. We provide an empirical demonstration of the fragility of ResNet-like models to Gaussian noise perturbations: model performance deteriorates sharply, with the F1-score dropping to a near-insignificant 0.008 under Gaussian noise with a standard deviation of only 0.5. We also highlight a concerning discrepancy whereby the explanations provided by ResNet-like models do not reflect the inherent periodicity in the input data, a crucial attribute in URE detection from stable devices. In response to these findings, we propose a novel application of Neural SDEs to build models for URE classification that are not only robust to noise but also provide more meaningful and intuitive explanations. Neural SDE models maintain a high F1-score of 0.93 even when exposed to Gaussian noise with a standard deviation of 0.5, demonstrating resilience far superior to that of ResNet models. Neural SDE models also successfully recover the time-invariant or periodic horizontal bands in the input data, a feature that was conspicuously missing in the explanations generated by ResNet-like models. This advancement presents a small but significant step in the development of robust and interpretable models for real-world URE applications, where data is inherently noisy and assurance arguments demand interpretable machine learning predictions.
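As a rough illustration of the model family (not the paper's architecture), a neural SDE layer can be sketched as a drift network integrated with Euler-Maruyama plus injected Brownian noise; the constant diagonal diffusion, network widths, and step count below are assumptions:

```python
import torch
import torch.nn as nn

class NeuralSDEBlock(nn.Module):
    """Feature map defined by dh = drift(h) dt + sigma dW, integrated with
    Euler-Maruyama; the injected noise is the ingredient associated with
    robustness in this model family."""
    def __init__(self, dim, sigma=0.1, steps=20, t1=1.0):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                   nn.Linear(dim, dim))
        self.sigma, self.steps, self.t1 = sigma, steps, t1

    def forward(self, h):
        dt = self.t1 / self.steps
        for _ in range(self.steps):
            dw = torch.randn_like(h) * dt ** 0.5   # Brownian increment
            h = h + self.drift(h) * dt + self.sigma * dw
        return h

# Classifier head on top of the stochastic features.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                      NeuralSDEBlock(64), nn.Linear(64, 2))
print(model(torch.randn(4, 64)).shape)  # torch.Size([4, 2])
```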

We propose a Deep Operator Network~(DeepONet) framework to learn the dynamic response of continuous-time nonlinear control systems from data. To this end, we first construct and train a DeepONet that approximates the control system's local solution operator. Then, we design a numerical scheme that recursively uses the trained DeepONet to simulate the control system's medium- and long-term dynamic response for given control inputs and initial conditions. We accompany the proposed scheme with an estimated bound on the associated cumulative error. Furthermore, we design a data-driven Runge-Kutta~(RK) explicit scheme that uses the DeepONet forward pass and automatic differentiation to better approximate the system's response when the numerical scheme's step size is sufficiently small. Numerical experiments on the predator-prey, pendulum, and cart pole systems confirm that our DeepONet framework effectively learns to approximate the dynamic response of nonlinear control systems.
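The local-operator-plus-rollout idea can be sketched compactly. In the PyTorch snippet below (state dimension, window length, control sampling, and network widths are illustrative assumptions, not the paper's configuration), a DeepONet maps an initial state and sampled control inputs to the state at a query time, and is then chained window by window:

```python
import torch
import torch.nn as nn

class LocalDeepONet(nn.Module):
    """Branch net encodes (initial state, control samples over one window);
    trunk net encodes the query time; their dot product gives the state."""
    def __init__(self, state_dim=2, n_ctrl_samples=10, width=64, p=32):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(state_dim + n_ctrl_samples, width), nn.Tanh(),
            nn.Linear(width, p * state_dim))
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                   nn.Linear(width, p))
        self.p, self.state_dim = p, state_dim

    def forward(self, x0, u, t):
        b = self.branch(torch.cat([x0, u], -1)).view(-1, self.state_dim, self.p)
        tr = self.trunk(t).unsqueeze(-1)    # (batch, p, 1)
        return (b @ tr).squeeze(-1)         # predicted state x(t)

# Recursive rollout: the state predicted at the end of each window becomes
# the next window's initial condition.
net, x = LocalDeepONet(), torch.zeros(1, 2)
t_end = torch.ones(1, 1)                    # end of each local window
for _ in range(50):                         # 50 chained windows
    u = torch.randn(1, 10)                  # control samples for the window
    x = net(x, u, t_end)
```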

Artificial neural networks (ANNs) with recurrence and self-attention have been shown to be Turing-complete (TC). However, existing work has shown that these ANNs require multiple turns or unbounded computation time, even with unbounded precision in weights, in order to recognize TC grammars. Moreover, under constraints such as fixed- or bounded-precision neurons and bounded time, ANNs without memory struggle to recognize even context-free languages. In this work, we extend the theoretical foundation for the $2^{nd}$-order recurrent network ($2^{nd}$ RNN) and prove that there exists a class of $2^{nd}$ RNNs that is Turing-complete with bounded time. This model can directly encode a transition table into its recurrent weights, enabling bounded-time computation, and is interpretable by design. We also demonstrate that $2^{nd}$-order RNNs without memory, under bounded weight and time constraints, outperform modern-day models such as vanilla RNNs and gated recurrent units in recognizing regular grammars. We provide an upper bound and a stability analysis on the maximum number of neurons required by $2^{nd}$-order RNNs to recognize any class of regular grammar. Extensive experiments on the Tomita grammars support our findings, demonstrating the importance of tensor connections in crafting computationally efficient RNNs. Finally, we show that $2^{nd}$-order RNNs are also interpretable by extraction and can extract state machines with higher success rates than first-order RNNs. Our results extend the theoretical foundations of RNNs and offer promising avenues for future explainable AI research.
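The tensor (second-order) connection at the heart of this model is easy to state in code. The following sketch (sizes are illustrative, and the DFA reading is the intuition rather than the paper's exact construction) implements h_t[i] = sigmoid(sum_{j,k} W[i,j,k] h_{t-1}[j] x_t[k] + b[i]); with one-hot inputs, a large W[i, j, k] means "state j on symbol k moves to state i":

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SecondOrderRNNCell(nn.Module):
    """Second-order RNN: the next state is a bilinear function of the
    previous hidden state and the current (one-hot) input symbol."""
    def __init__(self, n_states, n_symbols):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(n_states, n_states, n_symbols))
        self.b = nn.Parameter(torch.zeros(n_states))

    def forward(self, h, x):
        return torch.sigmoid(
            torch.einsum('ijk,bj,bk->bi', self.W, h, x) + self.b)

cell, h = SecondOrderRNNCell(4, 2), torch.zeros(1, 4)
h[0, 0] = 1.0                                # start in state 0
for sym in [0, 1, 1, 0]:                     # a binary input string
    x = F.one_hot(torch.tensor([sym]), 2).float()
    h = cell(h, x)
print(h)                                      # soft distribution over states
```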

Low Rank Parity Check (LRPC) codes form a class of rank-metric error-correcting codes that was purposely introduced to design public-key encryption schemes. An LRPC code is defined from a parity check matrix whose entries belong to a relatively low-dimensional vector subspace of a large finite field. This particular algebraic feature can then be exploited to correct rank errors with high probability when the parameters are appropriately chosen. In this paper, we present theoretical upper bounds on the probability that the LRPC decoding algorithm fails.
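For orientation, the standard LRPC decoder whose failure probability is being bounded can be summarized as follows (generic notation, a sketch rather than the paper's exact statement):

```latex
% F is the support of the parity-check matrix H; E is the unknown support
% of the error vector e.
\begin{align*}
  &F = \langle f_1,\dots,f_d\rangle, \qquad
   E = \langle \varepsilon_1,\dots,\varepsilon_r\rangle,\\
  &s = He^{\top}\ \text{has all coordinates in}\
   EF = \langle \varepsilon_i f_j\rangle, \qquad \dim EF \le rd,\\
  &S = \langle s_1,\dots,s_{n-k}\rangle, \qquad
   E = \bigcap_{i=1}^{d} f_i^{-1} S \quad \text{(support recovery)},
\end{align*}
```

after which the error coordinates are obtained by solving a linear system in a basis of $E$. Decoding fails when $S \subsetneq EF$ or when the intersection strictly contains $E$; upper bounds such as those in the paper control the probability of these events.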

This study explores the applicability of Deep Reinforcement Learning (DRL) for thermal control based on Computational Fluid Dynamics. To accomplish this, forced convection on a hot plate subjected to a pulsating cooling jet with variable velocity is investigated. We begin by evaluating the efficiency and viability of a vanilla Deep Q-Network (DQN) method for thermal control. Subsequently, a comprehensive comparison between different DRL variants is conducted. The soft Double DQN and Dueling DQN achieved the best thermal control performance among all variants, owing to their efficient learning and action-prioritization capabilities. The results demonstrate that the soft Double DQN outperforms the hard Double DQN. Moreover, the soft Double and Dueling DQNs maintain the temperature within the desired threshold for more than 98% of the control cycle. These findings demonstrate the promising potential of DRL in effectively addressing thermal control systems.
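The two mechanisms being compared, the Double DQN target and the soft (Polyak) target update that distinguishes "soft" from "hard" Double DQN, can be sketched in a few lines of PyTorch (network sizes, gamma, and tau are illustrative assumptions):

```python
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 3))
target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 3))
target_net.load_state_dict(q_net.state_dict())
gamma, tau = 0.99, 0.005

def double_dqn_target(reward, next_state, done):
    """Double DQN: the online net selects the action, the target net
    evaluates it, decoupling selection from evaluation."""
    with torch.no_grad():
        a_star = q_net(next_state).argmax(dim=1, keepdim=True)
        q_next = target_net(next_state).gather(1, a_star).squeeze(1)
        return reward + gamma * (1.0 - done) * q_next

def soft_update():
    """Polyak averaging: target <- tau * online + (1 - tau) * target.
    A 'hard' update would instead copy the weights every N steps."""
    for p, tp in zip(q_net.parameters(), target_net.parameters()):
        tp.data.mul_(1.0 - tau).add_(tau * p.data)

y = double_dqn_target(torch.zeros(8), torch.randn(8, 4), torch.zeros(8))
soft_update()
print(y.shape)  # torch.Size([8])
```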

Current state-of-the-art crowd navigation approaches are mainly deep reinforcement learning (DRL)-based. However, DRL-based methods suffer from issues of generalization and scalability. To overcome these challenges, we propose a method that includes a Collision Probability (CP) in the observation space to give the robot a sense of how dangerous the moving crowd is, helping it navigate safely through crowds with unseen behaviors. We also studied the effect of changing the number of moving obstacles the robot pays attention to during navigation. During training, we generated local waypoints to increase the reward density and improve the learning efficiency of the system. Our approach was trained with DRL using the Gazebo simulator in a non-cooperative crowd environment with obstacles moving at randomized speeds and directions. We then evaluated our model on four different crowd-behavior scenarios. The results show that our method achieved a 100% success rate in all test settings. We compared our approach with a current state-of-the-art DRL-based approach, and ours performed significantly better, especially in terms of social safety. Importantly, our method can navigate in different crowd behaviors and requires no fine-tuning after being trained once. We further demonstrated the crowd navigation capability of our model in real-world tests.
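The abstract does not specify how the CP feature is computed, so the NumPy sketch below is purely an illustrative assumption: a time-to-closest-approach heuristic that maps each obstacle's relative state to a danger score in [0, 1], which would then be appended to that obstacle's entry in the observation vector:

```python
import numpy as np

def collision_probability(rel_pos, rel_vel, radius_sum=0.5, horizon=5.0):
    """Heuristic CP in [0, 1] from one obstacle's relative position and
    velocity (robot frame); closer and sooner encounters score higher."""
    speed2 = np.dot(rel_vel, rel_vel)
    t_cpa = 0.0 if speed2 < 1e-9 else max(0.0, -np.dot(rel_pos, rel_vel) / speed2)
    t_cpa = min(t_cpa, horizon)                       # clip to the horizon
    d_cpa = np.linalg.norm(rel_pos + rel_vel * t_cpa)  # closest-approach distance
    closeness = np.clip(1.0 - d_cpa / radius_sum, 0.0, 1.0)
    return float(closeness * (1.0 - t_cpa / horizon))

# Example: an obstacle 1.1 m away, moving roughly toward the robot.
rel_pos, rel_vel = np.array([1.0, -0.5]), np.array([-0.8, 0.4])
print(collision_probability(rel_pos, rel_vel))  # ~0.75: high danger
```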

Prognostics and Health Management (PHM) is a discipline focused on predicting the point at which systems or components will cease to perform as intended, typically measured as Remaining Useful Life (RUL). RUL serves as a vital decision-making tool for contingency planning, guiding the timing and nature of system maintenance. Historically, PHM has primarily been applied to hardware systems, with its application to software only recently explored. In a recent study, we introduced a methodology and demonstrated how changes in software can impact its RUL. However, in practical software development, real-time performance is also influenced by various environmental attributes, including the operating system, clock speed, processor performance, RAM, and machine core count, among others. This research extends the analysis to assess how changes in environmental attributes, such as operating system and clock speed, affect RUL estimation for software. Findings are rigorously validated using real performance data from controlled test beds and compared with predictive model-generated data. Statistical validation, including regression analysis, supports the credibility of the results. The controlled test-bed environment replicates and validates faults from real applications, ensuring a standardized assessment platform. This exploration yields actionable knowledge for software maintenance and optimization strategies, addressing a significant gap in the field of software health management.
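For readers unfamiliar with RUL in this setting, a generic (and deliberately simplistic) sketch of the idea is to fit a trend to a monitored performance metric and extrapolate when it crosses a failure threshold; the synthetic series, threshold, and linear model below are our assumptions, not the study's data or methodology:

```python
import numpy as np

t = np.arange(0, 50, 1.0)                                  # hours of operation
latency = 100 + 1.8 * t + np.random.normal(0, 3, t.size)   # degrading metric
threshold = 250.0                          # level at which the system 'fails'

slope, intercept = np.polyfit(t, latency, 1)   # linear degradation trend
t_fail = (threshold - intercept) / slope       # predicted crossing time
rul = t_fail - t[-1]                           # remaining useful life now
print(f"estimated RUL: {rul:.1f} hours")
```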

The estimation of cumulative distribution functions (CDF) and probability density functions (PDF) is a fundamental practice in applied statistics. However, challenges often arise when dealing with data arranged in grouped intervals. In this paper, we discuss a suitable and highly flexible non-parametric density estimation approach for binned distributions, based on monotonicity-preserving cubic spline interpolation. Results from simulation studies demonstrate that this approach outperforms many widely used heuristic methods. Additionally, applying the method to a dataset of train delays in Germany and to micro-census data on distance and travel time to work yields results that are largely meaningful but in some cases questionable.
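A minimal version of this approach is available off the shelf: SciPy's PCHIP interpolator is a monotonicity-preserving cubic spline, so fitting it to the empirical CDF at the bin edges and differentiating yields a non-negative density. The bin edges and counts below are illustrative, not the paper's data:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

edges = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])  # interval boundaries
counts = np.array([120, 310, 260, 90, 20])            # observations per bin

# Empirical CDF at the bin edges, then a monotone cubic spline through it.
cdf_vals = np.concatenate([[0.0], np.cumsum(counts) / counts.sum()])
cdf = PchipInterpolator(edges, cdf_vals)
pdf = cdf.derivative()        # density estimate = derivative of the CDF

x = np.linspace(edges[0], edges[-1], 200)
print(pdf(x).min())           # >= 0, since the fitted CDF is monotone
```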

The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction-based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing, and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we demonstrate that these approaches yield more robust models, as shown on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule that alleviates catastrophic forgetting issues during adaptation.
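As one concrete instance of the distributionally robust optimization family (the generic KL-constrained formulation, not necessarily the thesis's exact parametric reformulation), per-example losses can be adversarially reweighted in closed form:

```python
import torch

def kl_dro_loss(per_example_losses, tau=1.0):
    """Worst-case reweighted loss over a KL ball around the empirical
    distribution; the inner maximization has the closed form
    w_i proportional to exp(loss_i / tau), with tau setting the ball size."""
    w = torch.softmax(per_example_losses.detach() / tau, dim=0)  # adversary
    return (w * per_example_losses).sum()

losses = torch.tensor([0.2, 0.3, 2.5, 0.1], requires_grad=True)
robust = kl_dro_loss(losses, tau=0.5)
robust.backward()
print(losses.grad)   # gradient concentrates on the hardest examples
```

Compared with the average loss, this objective upweights examples (or domains) on which the model currently does poorly, which is what yields robustness to distributional shift.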
