
This paper investigates the development and optimization of control algorithms for mobile robotics, with a keen focus on their implementation in Field-Programmable Gate Arrays (FPGAs). It delves into both classical control approaches such as PID and modern techniques including deep learning, addressing their application in sectors ranging from industrial automation to medical care. The study highlights the practical challenges and advancements in embedding these algorithms into FPGAs, which offer significant benefits for mobile robotics due to their high-speed processing and parallel computation capabilities. Through an analysis of various control strategies, the paper showcases the improvements in robot performance, particularly in navigation and obstacle avoidance. It emphasizes the critical role of FPGAs in enhancing the efficiency and adaptability of control algorithms in dynamic environments. Additionally, the research discusses the difficulties in benchmarking and evaluating the performance of these algorithms in real-world applications, suggesting a need for standardized evaluation criteria. The contribution of this work lies in its comprehensive examination of control algorithms' potential in FPGA-based mobile robotics, offering insights into future research directions for improving robotic autonomy and operational efficiency.
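To make the classical side of the survey concrete, below is a minimal discrete-time PID controller in Python. The gains, time step, and toy plant are illustrative assumptions, not values from the paper; an actual FPGA deployment would re-express this loop in fixed-point HDL to exploit the parallelism the paper discusses.

```python
class PID:
    """Minimal discrete-time PID controller (illustrative gains, not from the paper)."""

    def __init__(self, kp, ki, kd, dt, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Saturate the command to the actuator limits.
        return max(self.out_min, min(self.out_max, u))

# Example: regulate a wheel speed toward 1.0 m/s against a toy first-order plant.
pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.01)
speed = 0.0
for _ in range(500):
    u = pid.step(setpoint=1.0, measurement=speed)
    speed += 0.01 * (5.0 * u - 0.5 * speed)
```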

Related Content

Recent advances have unveiled physical neural networks as promising machine learning platforms, offering faster and more energy-efficient information processing. Compared with extensively studied optical neural networks, the development of mechanical neural networks (MNNs) remains nascent and faces significant challenges, including heavy computational demands and learning with approximate gradients. Here, we introduce the mechanical analogue of in situ backpropagation to enable highly efficient training of MNNs. We demonstrate that the exact gradient can be obtained locally in MNNs, enabling learning from locally available information. With the gradient information, we showcase the successful training of MNNs for behavior learning and machine learning tasks, achieving high accuracy in regression and classification. Furthermore, we present the retrainability of MNNs after task switching and damage, demonstrating their resilience. Our findings, which integrate the theory of training MNNs with experimental and numerical validations, pave the way for mechanical machine learning hardware and autonomous self-learning material systems.
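The locality claim can be illustrated on a toy linear spring network: with stiffness matrix $ K = A^{\top} \mathrm{diag}(k) A $, the gradient of a loss with respect to each spring stiffness is (minus) the product of that spring's elongation in the forward and adjoint equilibrium states, so each spring "sees" its own gradient. The numpy sketch below is our own schematic of this adjoint principle, checked against finite differences; it is not the authors' mechanical implementation.

```python
import numpy as np

# Chain of 3 springs anchored to a wall; A maps free-node displacements
# to spring elongations (rows: springs, cols: free nodes 1..3).
A = np.array([[ 1.0, 0.0, 0.0],
              [-1.0, 1.0, 0.0],
              [ 0.0, -1.0, 1.0]])
k = np.array([2.0, 1.5, 1.0])          # spring stiffnesses (the "weights")
f = np.array([0.0, 0.0, 1.0])          # external load on the last node
target = 0.8                           # desired displacement of node 3

K = A.T @ (k[:, None] * A)             # stiffness matrix K = A^T diag(k) A
u = np.linalg.solve(K, f)              # forward pass: equilibrium K u = f

dL_du = np.array([0.0, 0.0, 2.0 * (u[2] - target)])  # loss = (u3 - target)^2
lam = np.linalg.solve(K, dL_du)        # adjoint pass (K is symmetric)

# Local gradient: elementwise product of forward and adjoint elongations.
grad = -(A @ lam) * (A @ u)

# Check against central finite differences.
fd = np.zeros_like(k)
for i in range(len(k)):
    for h, sign in ((1e-6, 1.0), (-1e-6, -1.0)):
        kp = k.copy(); kp[i] += h
        up = np.linalg.solve(A.T @ (kp[:, None] * A), f)
        fd[i] += sign * (up[2] - target) ** 2 / 2e-6
print(np.allclose(grad, fd, atol=1e-6))   # True
```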

This note continues and extends the study from Spokoiny (2023a) on estimation for parametric models with possibly large or even infinite parameter dimension. We consider a special class of stochastically linear smooth (SLS) models satisfying three major conditions: the stochastic component of the log-likelihood is linear in the model parameter, while the expected log-likelihood is a smooth and concave function. For the penalized maximum likelihood estimator (pMLE), we establish several finite-sample bounds on its concentration and large deviations, as well as the Fisher and Wilks expansions and risk bounds. In all results, the remainder is given explicitly and can be evaluated in terms of the effective sample size $ n $ and effective parameter dimension $ \mathbb{p} $, which allows us to identify the so-called \emph{critical parameter dimension}. The results are also dimension- and coordinate-free. Despite their generality, all the presented bounds are nearly sharp, and the classical asymptotic results can be obtained as simple corollaries. Our results indicate that the use of advanced fourth-order expansions allows us to relax the critical dimension condition $ \mathbb{p}^{3} \ll n $ from Spokoiny (2023a) to $ \mathbb{p}^{3/2} \ll n $. Examples for classical models such as logistic regression, log-density estimation, and precision matrix estimation illustrate the applicability of the general results.
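Under notation we assume here (chosen to match the style of this line of work, not copied from the paper), the SLS structure and the pMLE can be summarized schematically as
\[
L(\theta) = \mathbb{E} L(\theta) + \langle \zeta, \theta \rangle , \qquad
\tilde{\theta}_{G} = \operatorname*{argmax}_{\theta} \bigl\{ L(\theta) - \tfrac{1}{2} \| G \theta \|^{2} \bigr\} ,
\]
with $ \zeta $ the stochastic linear term and $ \| G \theta \|^{2}/2 $ a quadratic penalty. The Fisher and Wilks expansions mentioned above then take the schematic form
\[
D_{G} \bigl( \tilde{\theta}_{G} - \theta^{*}_{G} \bigr) \approx D_{G}^{-1} \zeta , \qquad
2 \bigl\{ L_{G}(\tilde{\theta}_{G}) - L_{G}(\theta^{*}_{G}) \bigr\} \approx \| D_{G}^{-1} \zeta \|^{2} , \qquad
D_{G}^{2} = - \nabla^{2} \mathbb{E} L(\theta^{*}_{G}) + G^{2} ,
\]
where the paper's contribution is to control the remainders of such expansions explicitly in terms of $ n $ and $ \mathbb{p} $.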

Robotic systems can play an important role in clearing the world of the threat posed by landmines and other explosive devices. However, the development of such field robots, which need to operate in hazardous conditions, requires the careful consideration of multiple aspects related to the perception, mobility, and collaboration capabilities of the system. In the framework of a European challenge, the Artificial Intelligence for Detection of Explosive Devices - eXtended (AIDEDeX) project proposes to design a heterogeneous multi-robot system with advanced sensor fusion algorithms. This system is specifically designed to detect and classify improvised explosive devices, explosive ordnance, and landmines. The project integrates specialised sensors, including electromagnetic induction, ground penetrating radar, X-Ray backscatter imaging, Raman spectrometers, and multimodal cameras, to achieve comprehensive threat identification and localisation. The proposed system comprises a fleet of unmanned ground vehicles and unmanned aerial vehicles. This article details the operational phases of the AIDEDeX system, from rapid terrain exploration using unmanned aerial vehicles to specialised detection and classification by unmanned ground vehicles equipped with a robotic manipulator. Initially focusing on a centralised approach, the project will also explore the potential of a decentralised control architecture, taking inspiration from swarm robotics to provide a robust, adaptable, and scalable solution for explosive detection.

We present our ongoing research on applying a participatory design approach to the use of social robots for elderly care. Our approach involves four groups of stakeholders: the elderly, (non-professional) caregivers, medical professionals, and psychologists. We focus on card sorting and storyboarding techniques to elicit the stakeholders' concerns about deploying social robots for elderly care. This is followed by semi-structured interviews to assess their individual attitudes towards social robots. We are then conducting two-stage workshops with different elderly groups to understand how to engage them with the technology and to identify the challenges of this task.

Test-negative designs are widely used for post-market evaluation of vaccine effectiveness, particularly in cases where randomization is not feasible. Differing from classical test-negative designs where only healthcare-seekers with symptoms are included, recent test-negative designs have involved individuals with various reasons for testing, especially in an outbreak setting. While including these data can increase sample size and hence improve precision, concerns have been raised about whether they introduce bias into the current framework of test-negative designs, thereby demanding a formal statistical examination of this modified design. In this article, using statistical derivations, causal graphs, and numerical simulations, we show that the standard odds ratio estimator may be biased if various reasons for testing are not accounted for. To eliminate this bias, we identify three categories of reasons for testing, including symptoms, disease-unrelated reasons, and case contact tracing, and characterize associated statistical properties and estimands. Based on our characterization, we show how to consistently estimate each estimand via stratification. Furthermore, we describe when these estimands correspond to the same vaccine effectiveness parameter, and, when appropriate, propose a stratified estimator that can incorporate multiple reasons for testing and improve precision. The performance of our proposed method is demonstrated through simulation studies.
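As a concrete illustration of estimation by stratification on the reason for testing, the sketch below computes stratum-specific odds ratios and a classical Mantel-Haenszel pooled odds ratio, with vaccine effectiveness read off as VE = 1 - OR. The counts are made up, and the Mantel-Haenszel combination is a standard stand-in, not necessarily the paper's proposed stratified estimator.

```python
import numpy as np

# One 2x2 table per reason-for-testing stratum:
# rows = (vaccinated, unvaccinated), cols = (test-positive, test-negative).
# Counts are fabricated for illustration only.
strata = {
    "symptoms":         np.array([[40, 460], [90, 410]]),
    "unrelated_reason": np.array([[15, 985], [35, 965]]),
    "contact_tracing":  np.array([[25, 375], [55, 345]]),
}

for name, t in strata.items():
    (a, b), (c, d) = t                       # a,b: vaccinated +/-; c,d: unvaccinated +/-
    or_s = (a * d) / (b * c)                 # stratum-specific odds ratio
    print(f"{name}: OR={or_s:.3f}, VE={1 - or_s:.1%}")

# Mantel-Haenszel pooled odds ratio across strata.
num = sum(t[0, 0] * t[1, 1] / t.sum() for t in strata.values())
den = sum(t[0, 1] * t[1, 0] / t.sum() for t in strata.values())
or_mh = num / den
print(f"pooled: OR={or_mh:.3f}, VE={1 - or_mh:.1%}")
```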

This paper investigates structural changes in the parameters of first-order autoregressive models by analyzing the edge eigenvalues of the precision matrices. Specifically, edge eigenvalues in the precision matrix are observed if and only if there is a structural change in the autoregressive coefficients. We demonstrate that these edge eigenvalues correspond to the zeros of a certain determinantal equation. Additionally, we propose a consistent estimator for detecting outliers within the panel time series framework, supported by numerical experiments.
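A quick numerical way to see the phenomenon (our own toy construction, under assumed notation): for a Gaussian AR(1) series, the precision matrix factorizes as $ K = B^{\top} B / \sigma^{2} $ with $ B $ the lower-bidiagonal operator mapping the series to its innovations. Inserting a coefficient break into $ B $ and inspecting the extreme eigenvalues of $ K $ lets one compare the spectra with and without a change.

```python
import numpy as np

def ar1_precision(phis, sigma=1.0):
    """Precision matrix K = B^T B / sigma^2 of a Gaussian AR(1) series whose
    coefficient at time t is phis[t-1]; B maps the series to its innovations
    (the initial condition is simplified for this toy example)."""
    n = len(phis) + 1
    B = np.eye(n)
    for t in range(1, n):
        B[t, t - 1] = -phis[t - 1]
    return B.T @ B / sigma**2

n = 200
no_break   = ar1_precision([0.5] * (n - 1))
with_break = ar1_precision([0.5] * (n // 2) + [0.9] * (n - 1 - n // 2))

for name, K in [("constant phi", no_break), ("phi break", with_break)]:
    ev = np.linalg.eigvalsh(K)
    print(f"{name}: min eig={ev[0]:.4f}, max eig={ev[-1]:.4f}")
# Per the paper's result, eigenvalues detached from the bulk of the spectrum
# signal a structural change in the autoregressive coefficients.
```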

Soft materials play an integral part in many aspects of modern life, including autonomy, sustainability, and human health, and their accurate modeling is critical to understanding their unique properties and functions. Today's finite element analysis packages come with a set of pre-programmed material models, which may exhibit restricted validity in capturing the intricate mechanical behavior of these materials. Regrettably, incorporating a modified or novel material model in a finite element analysis package requires non-trivial in-depth knowledge of tensor algebra, continuum mechanics, and computer programming, making it a complex task that is prone to human error. Here we design a universal material subroutine, which automates the integration of novel constitutive models of varying complexity in non-linear finite element packages, without additional analytical derivations or algorithmic implementations. We demonstrate the versatility of our approach to seamlessly integrate innovative constitutive models from the material point to the structural level through a variety of soft matter case studies: a frontal impact to the brain; reconstructive surgery of the scalp; diastolic loading of arteries and the human heart; and the dynamic closing of the tricuspid valve. Our universal material subroutine empowers all users, not solely experts, to conduct reliable engineering analysis of soft matter systems. We envision that this framework will become an indispensable instrument for continued innovation and discovery within the soft matter community at large.
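To indicate what such a subroutine evaluates at each material point, here is a minimal compressible Neo-Hookean stress evaluation in Python. This is our own illustrative stand-in for a constitutive-model kernel, not the authors' universal subroutine, and the parameters are arbitrary.

```python
import numpy as np

def neo_hookean_cauchy(F, mu=1.0, lam=10.0):
    """Cauchy stress of a compressible Neo-Hookean solid with strain energy
    W = mu/2 (I1 - 3) - mu ln J + lam/2 (ln J)^2.
    mu, lam are illustrative material parameters, not values from the paper."""
    J = np.linalg.det(F)               # volume ratio
    B = F @ F.T                        # left Cauchy-Green tensor
    I = np.eye(3)
    return (mu / J) * (B - I) + (lam * np.log(J) / J) * I

# Example material point: 10% uniaxial stretch with no lateral contraction.
F = np.diag([1.1, 1.0, 1.0])
print(neo_hookean_cauchy(F))
```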

Average calibration of the prediction uncertainties of machine learning regression tasks can be tested in two ways: one is to estimate the calibration error (CE) as the difference between the mean squared error (MSE) and the mean variance (MV), or mean squared uncertainty; the alternative is to compare the mean squared z-scores (ZMS), or scaled errors, to 1. The problem is that the two approaches may lead to different conclusions, as illustrated in this study for an ensemble of datasets from the recent machine learning uncertainty quantification (ML-UQ) literature. It is shown that the estimation of MV, MSE, and their confidence intervals can become unreliable for heavy-tailed uncertainty and error distributions, which seems to be a common issue in ML-UQ datasets. By contrast, the ZMS statistic is less sensitive and offers the most reliable approach in this context. Unfortunately, the same problem also affects conditional calibration statistics, such as the popular ENCE, and very likely post-hoc calibration methods based on similar statistics. As not much can be done to alleviate this issue, short of a change of paradigm to interval- or distribution-based UQ metrics, robust tailedness metrics are proposed to detect the potentially problematic datasets.
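Both statistics are easy to state in code. The sketch below is our own, with synthetic Student-t errors standing in for the heavy tails discussed above; it computes CE = MSE - MV (target 0) and the ZMS (target 1) on the same data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
uE = rng.uniform(0.5, 2.0, n)              # predicted uncertainties (std. dev.)

# Heavy-tailed errors: Student-t with nu=3, rescaled to unit variance.
nu = 3.0
errors = uE * rng.standard_t(nu, n) / np.sqrt(nu / (nu - 2.0))

mse = np.mean(errors**2)                   # mean squared error
mv = np.mean(uE**2)                        # mean variance (mean squared uncertainty)
ce = mse - mv                              # test 1: CE should be near 0
zms = np.mean((errors / uE) ** 2)          # test 2: ZMS should be near 1
print(f"CE = {ce:.3f} (target 0), ZMS = {zms:.3f} (target 1)")
```

Repeating this over many seeds shows the point at issue: with heavy tails, the CE estimate fluctuates far more across replicates than the ZMS does.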

Most current studies on reinforcement-learning-based decision-making and control for autonomous vehicles are conducted in simulated environments. Their training and testing are carried out under rule-based microscopic traffic flow, with little consideration of migrating the models to real or near-real environments to test their performance. This may lead to degraded performance when the trained model is tested in more realistic traffic scenes. In this study, we propose a method to randomize the driving style and behavior of surrounding vehicles by randomizing certain parameters of the car-following model and the lane-changing model of rule-based microscopic traffic flow in SUMO. We trained policies with deep reinforcement learning algorithms under the domain-randomized rule-based microscopic traffic flow in freeway and merging scenes, and then tested them separately in rule-based microscopic traffic flow and high-fidelity microscopic traffic flow. The results indicate that the policy trained under domain-randomized traffic flow achieves a significantly higher success rate and cumulative reward than the models trained under other microscopic traffic flows.
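A sketch of this kind of randomization using SUMO's TraCI Python API is shown below. The parameter ranges and the `freeway.sumocfg` config name are illustrative assumptions, not the study's values, and the loop presumes a running SUMO simulation with a standard car-following model.

```python
import random
import traci  # SUMO's TraCI Python client; assumes SUMO is installed and on PATH

def randomize_vehicle(veh_id, rng=random):
    """Perturb car-following parameters of one surrounding vehicle.
    Ranges are illustrative, not those used in the study."""
    traci.vehicle.setAccel(veh_id, rng.uniform(1.5, 3.5))         # max accel [m/s^2]
    traci.vehicle.setDecel(veh_id, rng.uniform(3.0, 5.0))         # comfortable decel
    traci.vehicle.setTau(veh_id, rng.uniform(0.8, 2.0))           # desired headway [s]
    traci.vehicle.setImperfection(veh_id, rng.uniform(0.0, 0.8))  # driver imperfection
    traci.vehicle.setMinGap(veh_id, rng.uniform(1.0, 3.0))        # standstill gap [m]
    # Lane-change model parameters could be randomized analogously;
    # the exact API for those varies by SUMO version, so they are omitted here.

traci.start(["sumo", "-c", "freeway.sumocfg"])   # hypothetical config file
while traci.simulation.getMinExpectedNumber() > 0:
    for veh_id in traci.simulation.getDepartedIDList():
        randomize_vehicle(veh_id)                # randomize each vehicle once, on insertion
    traci.simulationStep()
traci.close()
```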

Convolutional Neural Networks (CNNs) have achieved impressive performance on many computer vision tasks, such as object detection, image recognition, and image retrieval. These achievements benefit from CNNs' outstanding capability to learn input features through deep layers of neuron structures and an iterative training process. However, the learned features are hard to identify and interpret from a human vision perspective, causing a lack of understanding of CNNs' internal working mechanism. To improve CNN interpretability, CNN visualization is widely employed as a qualitative analysis method that translates internal features into visually perceptible patterns. Many CNN visualization works have been proposed in the literature to interpret CNNs from the perspectives of network structure, operation, and semantic concept. In this paper, we provide a comprehensive survey of several representative CNN visualization methods, including Activation Maximization, Network Inversion, Deconvolutional Neural Networks (DeconvNet), and Network Dissection based visualization. These methods are presented in terms of motivations, algorithms, and experimental results. Based on these visualization methods, we also discuss their practical applications to demonstrate the significance of CNN interpretability in areas such as network design, optimization, and security enhancement.
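Activation Maximization, for instance, amounts to gradient ascent on the input rather than the weights. A minimal PyTorch sketch is given below; it is our own illustration using a torchvision model for concreteness, with an arbitrary target class and a simple L2 regularizer rather than any particular paper's recipe.

```python
import torch
import torchvision.models as models

# Frozen pretrained classifier; only the input image will be optimized.
model = models.vgg16(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 130                       # arbitrary ImageNet class, for illustration
x = torch.zeros(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    logit = model(x)[0, target_class]
    # Ascend the class logit; the L2 penalty keeps the image bounded.
    loss = -logit + 1e-4 * x.norm() ** 2
    loss.backward()
    opt.step()
# x now holds a pattern that (locally) maximizes the chosen class activation.
```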
