
The reliability of SLAM systems is considered one of the critical requirements of many modern autonomous systems. This has directed efforts toward developing many state-of-the-art systems, creating challenging datasets, and introducing rigorous metrics to measure SLAM system performance. However, the link between datasets and performance in the context of robustness and resilience has rarely been explored. To fill this void, the operating conditions of SLAM systems must be characterized, providing a setting in which robustness and resilience can be measured quantitatively. In this paper, we argue that characterizing SLAM datasets is a critical first step toward the proper evaluation of SLAM performance. The study starts by reviewing previous efforts at quantitative characterization of SLAM datasets. Then, the problem of characterizing perturbations is discussed and its link to SLAM robustness and resilience is established. After that, we propose a novel, generic, and extendable framework for the quantitative analysis and comparison of SLAM datasets, together with a description of the different characterization parameters. Finally, we demonstrate the application of our framework by presenting characterization results for three SLAM datasets, KITTI, EuroC-MAV, and TUM-VI, highlighting the level of insight achieved by the proposed framework.
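As an illustration of what such characterization parameters can look like, the sketch below computes simple motion-dynamics descriptors (path length, mean linear speed, and mean angular rate) from the ground-truth trajectory of one sequence. The function name and the choice of descriptors are ours for illustration only; the paper's actual parameter set is defined by the proposed framework itself.

import numpy as np
from scipy.spatial.transform import Rotation as R

def characterize_trajectory(timestamps, positions, quaternions):
    """Compute simple motion-dynamics descriptors for one sequence.

    timestamps  : (N,) seconds
    positions   : (N, 3) ground-truth positions in metres
    quaternions : (N, 4) ground-truth orientations (x, y, z, w)
    """
    dt = np.diff(timestamps)
    # Linear speed between consecutive ground-truth poses.
    lin_speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt

    # Angular rate from the relative rotation between consecutive poses.
    rots = R.from_quat(quaternions)
    rel = rots[:-1].inv() * rots[1:]
    ang_rate = np.linalg.norm(rel.as_rotvec(), axis=1) / dt

    return {
        "duration_s": float(timestamps[-1] - timestamps[0]),
        "path_length_m": float(np.sum(lin_speed * dt)),
        "mean_speed_mps": float(np.mean(lin_speed)),
        "mean_ang_rate_radps": float(np.mean(ang_rate)),
    }

Descriptors like these make it possible to compare, for instance, how aggressive the motion in a hand-held TUM-VI sequence is relative to a car-mounted KITTI sequence.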

Related content

Simultaneous localization and mapping (SLAM) is a technique that enables devices such as robots and self-driving cars to build a map of an unknown environment (i.e., without prior knowledge), or to update a map of a known environment (i.e., where a prior map is given), while simultaneously keeping track of their current location within it.
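To make the predict-and-correct structure behind this definition concrete, the following deliberately simplified toy sketch alternates between dead-reckoning a 2D pose from odometry and using landmark observations both to grow the map and to nudge the pose back toward it. It is only a conceptual illustration; practical SLAM systems rely on filtering or graph optimization rather than a fixed correction gain.

import numpy as np

def toy_slam_step(pose, landmark_map, odometry, observations):
    """One illustrative SLAM iteration.

    pose         : (x, y, heading) estimate
    landmark_map : dict id -> (x, y) of landmarks seen so far
    odometry     : (dx, dy, dheading) motion since the last step (robot frame)
    observations : dict id -> (range, bearing) to landmarks seen now
    """
    x, y, th = pose
    dx, dy, dth = odometry
    # 1) Predict: dead-reckon the new pose from the motion estimate.
    x += dx * np.cos(th) - dy * np.sin(th)
    y += dx * np.sin(th) + dy * np.cos(th)
    th += dth

    # 2) Update: add newly seen landmarks (map building); nudge the pose
    #    toward agreement with landmarks already in the map (localization).
    for lid, (rng, brg) in observations.items():
        lx = x + rng * np.cos(th + brg)
        ly = y + rng * np.sin(th + brg)
        if lid not in landmark_map:
            landmark_map[lid] = (lx, ly)
        else:
            mx, my = landmark_map[lid]
            x += 0.1 * (mx - lx)   # crude, fixed correction gain
            y += 0.1 * (my - ly)
    return (x, y, th), landmark_map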

Frictional contact problems are key to investigating the mechanical performance of composite materials under varying service environments. This paper considers a linear elasticity system with strongly heterogeneous coefficients and the quasistatic Tresca friction law, and we study the homogenization theory under the frameworks of H-convergence and small $\epsilon$-periodicity. The qualitative result is based on H-convergence and shows that the original oscillating solutions converge weakly to the homogenized solution, while our quantitative result provides an estimate of the asymptotic error in the $H^1$ norm for periodic homogenization. We also design several numerical experiments to validate the convergence rates established in the quantitative analysis.
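For context, quantitative periodic homogenization estimates of this kind are usually stated in terms of the first-order corrector; a standard form (the paper's exact statement, norms, and constants may differ) is

\[
  \bigl\| u_\epsilon - u_0 - \epsilon\, \chi\!\left(\tfrac{x}{\epsilon}\right) \nabla u_0 \bigr\|_{H^1(\Omega)}
  \;\le\; C\, \epsilon^{1/2}\, \| u_0 \|_{H^2(\Omega)},
\]

where $u_\epsilon$ solves the problem with $\epsilon$-periodic coefficients, $u_0$ is the homogenized solution, $\chi$ is the periodic corrector, and the $\epsilon^{1/2}$ rate reflects the boundary-layer contribution.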

Videos are an accessible medium for analyzing sports postures and providing feedback to athletes. Existing video-based coaching systems often present feedback on the correctness of poses by augmenting videos with visual markers, either manually by a coach or automatically by computing key parameters from poses. However, previewing and augmenting videos limit the analysis and visualization of human poses because of their fixed viewpoints, which confine the observation of captured human movements and cause ambiguity in the augmented feedback. In addition, existing sport-specific systems with embedded bespoke pose attributes can hardly generalize to new attributes, and directly overlaying two poses may not clearly visualize the key differences viewers wish to examine. To address these issues, we analyze and visualize human pose data with customizable viewpoints and attributes in the context of common biomechanics of running poses, such as joint angles and step distances. Based on existing literature and a formative study, we have designed and implemented a system, VCoach, to provide feedback on running poses for amateurs. VCoach provides automatic low-level comparisons of the running poses of a novice and an expert, and visualizes the pose differences as part-based 3D animations on a human model. Meanwhile, it retains the users' controllability and customizability in high-level functionalities, such as navigating the viewpoint for previewing feedback and defining their own pose attributes through our interface. We conduct a user study to verify our design components and expert interviews to evaluate the usefulness of the system.
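As a concrete example of the low-level pose attributes such a comparison relies on, the sketch below computes a joint angle and a simple step-distance proxy from 3D keypoints. The function names, keypoint convention, and vertical-axis assumption are ours for illustration and are not taken from VCoach.

import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by 3D keypoints a-b-c,
    e.g. hip-knee-ankle for a knee flexion angle."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def step_distance(left_ankle, right_ankle):
    """Horizontal distance between the ankles, a simple step-length proxy."""
    d = np.asarray(left_ankle, dtype=float) - np.asarray(right_ankle, dtype=float)
    return float(np.linalg.norm(d[[0, 2]]))  # assumes y is the vertical axis

Comparing such attributes between a novice frame and a time-aligned expert frame yields the per-part differences that can then be animated on the 3D human model.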

We survey a number of data visualization techniques for analyzing Computer Vision (CV) datasets. These techniques help us understand properties and latent patterns in such data, by applying dataset-level analysis. We present various examples of how such analysis helps predict the potential impact of the dataset properties on CV models and informs appropriate mitigation of their shortcomings. Finally, we explore avenues for further visualization techniques of different modalities of CV datasets as well as ones that are tailored to support specific CV tasks and analysis needs.

Recent advancements in location-aware analytics have created novel opportunities in different domains. In the area of process mining, enriching process models with geolocation helps to gain a better understanding of how process activities are executed in practice. In this paper, we introduce our idea of geo-enabled process modeling and report on our industrial experience. To this end, we present a real-world case study to describe the importance of considering location in process mining. Then we discuss the shortcomings of currently available process mining tools and propose our novel approach for modeling geo-enabled processes, focusing on 1) increasing process interpretability through geo-visualization, 2) incorporating location-related metadata into process analysis, and 3) using location-based measures for the assessment of process performance. Finally, we conclude the paper with directions for future research.
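As a minimal illustration of incorporating location metadata into process analysis (point 2) and deriving a location-based performance measure (point 3), the pandas sketch below joins an event log with site coordinates and reports the mean activity duration per site. The column names and data are invented for illustration and do not come from the case study.

import pandas as pd

# Illustrative event log: cases, activities, timestamps, and the site
# where each activity was executed (all names are assumptions).
events = pd.DataFrame({
    "case_id":  ["c1", "c1", "c2", "c2"],
    "activity": ["register", "inspect", "register", "inspect"],
    "start":    pd.to_datetime(["2023-01-02 08:00", "2023-01-02 09:30",
                                "2023-01-03 08:10", "2023-01-03 11:00"]),
    "end":      pd.to_datetime(["2023-01-02 08:20", "2023-01-02 10:30",
                                "2023-01-03 08:25", "2023-01-03 12:30"]),
    "site_id":  ["depot_A", "field_12", "depot_A", "field_7"],
})
sites = pd.DataFrame({
    "site_id": ["depot_A", "field_12", "field_7"],
    "lat":     [52.37, 52.41, 52.33],
    "lon":     [4.89, 4.95, 4.81],
})

# Enrich events with coordinates, then compute a location-based
# performance measure: mean activity duration per site.
enriched = events.merge(sites, on="site_id", how="left")
enriched["duration_min"] = (enriched["end"] - enriched["start"]).dt.total_seconds() / 60
print(enriched.groupby(["site_id", "lat", "lon"])["duration_min"].mean())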

When subjected to a sudden, unanticipated threat, human groups characteristically self-organize to identify the threat, determine potential responses, and act to reduce its impact. Central to this process is the challenge of coordinating information sharing and response activity within a disrupted environment. In this paper, we consider coordination in the context of responses to the 2001 World Trade Center disaster. Using records of communications among 17 organizational units, we examine the mechanisms driving communication dynamics, with an emphasis on the emergence of coordinating roles. We employ relational event models (REMs) to identify the mechanisms shaping communications in each unit, finding a consistent pattern of behavior across units with very different characteristics. Using a simulation-based "knock-out" study, we also probe the importance of different mechanisms for hub formation. Our results suggest that, while preferential attachment and pre-disaster role structure generally contribute to the emergence of hub structure, temporally local conversational norms play a much larger role. We discuss broader implications for the role of microdynamics in driving macroscopic outcomes, and for the emergence of coordination in other settings.

The dynamic response of legged robot locomotion is non-Lipschitz and can be stochastic due to environmental uncertainties. For testing, validating, and characterizing the safety performance of legged robots, existing solutions based on observed and inferred risk can be incomplete and sampling-inefficient, and some formal verification methods suffer from limited model precision and other surrogate assumptions. In this paper, we propose a scenario-sampling-based testing framework that characterizes the overall safety performance of a legged robot by specifying (i) where (in terms of a set of states) the robot is potentially safe, and (ii) how safe the robot is within the specified set. The framework can also help certify the commercial deployment of legged robots in real-world environments alongside humans, and compare the safety performance of legged robots with different mechanical structures and dynamic properties. The proposed framework is further deployed to evaluate a group of state-of-the-art legged robot locomotion controllers from the literature, spanning model-based, deep-neural-network-based, and reinforcement-learning-based methods. Across a series of intended working domains of the studied legged robots (e.g., tracking speed on sloped surfaces, responding to abrupt changes in commanded velocity, and withstanding adversarial push-over disturbances), we show that the method adequately captures the overall safety characterization as well as subtle performance insights. Many of the observed safety outcomes, to the best of our knowledge, have never been reported in the legged robot literature.
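For intuition only, the sketch below shows the bare-bones Monte Carlo form of scenario-sampling safety estimation: draw initial states from a candidate operating set and estimate the probability that a rollout stays safe. The sampling distribution, rollout interface, and usage names are assumptions for illustration; the paper's framework specifies the safe set and its quantification in a more structured way.

import numpy as np

def estimate_safety(sample_state, rollout_is_safe, n_samples=1000, seed=0):
    """Monte Carlo sketch of scenario-sampling safety characterization:
    (i) draw initial states from a candidate operating set, and
    (ii) estimate how safe the robot is within that set.

    sample_state    : callable rng -> initial state in the candidate set
    rollout_is_safe : callable state -> bool (simulated or hardware rollout)
    """
    rng = np.random.default_rng(seed)
    outcomes = [rollout_is_safe(sample_state(rng)) for _ in range(n_samples)]
    p_safe = float(np.mean(outcomes))
    # Simple binomial standard error as an uncertainty indicator.
    stderr = float(np.sqrt(p_safe * (1.0 - p_safe) / n_samples))
    return p_safe, stderr

# Hypothetical usage: perturb commanded speed and push disturbance.
# p, se = estimate_safety(
#     sample_state=lambda rng: {"speed": rng.uniform(0, 2.0),
#                               "push_N": rng.uniform(0, 150)},
#     rollout_is_safe=lambda s: simulate_legged_robot(s).no_fall,  # user-provided
# )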

Blockchain and smart contract technology are novel approaches to data and code management that facilitate trusted computing by allowing for development in a distributed and decentralized manner. Testing smart contracts comes with its own set of challenges which have not yet been fully identified and explored. Although existing tools can identify and discover known vulnerabilities and their interactions on the Ethereum blockchain through random search or symbolic execution, these tools generally do not produce test suites suitable for human oracles. In this paper, we present AGSOLT (Automated Generator of Solidity Test Suites). We demonstrate its efficiency by implementing two search algorithms to automatically generate test suites for stand-alone Solidity smart contracts, taking into account some of the blockchain-specific challenges. To test AGSOLT, we compared a random search algorithm and a genetic algorithm on a set of 36 real-world smart contracts. We found that AGSOLT achieves high branch coverage with both approaches and even discovered errors in some of the most popular Solidity smart contracts on GitHub.
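To make the search setting concrete, here is a minimal random-search sketch that generates test inputs from a Solidity-style function signature and keeps only inputs that cover new branches. The type generators, the user-provided coverage_of callback, and the overall structure are illustrative assumptions; AGSOLT's actual input encoding, fitness function, and genetic operators are more involved.

import random

SOLIDITY_GENERATORS = {
    "uint256": lambda rng: rng.randrange(0, 2**256),
    "int256":  lambda rng: rng.randrange(-2**255, 2**255),
    "bool":    lambda rng: rng.choice([True, False]),
    "address": lambda rng: "0x" + "".join(rng.choice("0123456789abcdef") for _ in range(40)),
}

def random_test_case(signature, rng=random):
    """signature: list of Solidity parameter types, e.g. ["uint256", "bool"]."""
    return [SOLIDITY_GENERATORS[t](rng) for t in signature]

def random_search(signature, coverage_of, budget=500, rng=random):
    """Keep a test case only if it covers a branch not seen before.
    coverage_of is a user-provided callback returning the set of covered branches."""
    suite, covered = [], set()
    for _ in range(budget):
        case = random_test_case(signature, rng)
        new_branches = coverage_of(case) - covered
        if new_branches:
            suite.append(case)
            covered |= new_branches
    return suite, covered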

Adversarial training (i.e., training on adversarially perturbed input data) is a well-studied method for making neural networks robust to potential adversarial attacks during inference. However, the improved robustness does not come for free but rather is accompanied by a decrease in overall model accuracy and performance. Recent work has shown that, in practical robot learning applications, the effects of adversarial training do not pose a fair trade-off but inflict a net loss when measured in holistic robot performance. This work revisits the robustness-accuracy trade-off in robot learning by systematically analyzing if recent advances in robust training methods and theory in conjunction with adversarial robot learning can make adversarial training suitable for real-world robot applications. We evaluate a wide variety of robot learning tasks ranging from autonomous driving in a high-fidelity environment amenable to sim-to-real deployment, to mobile robot gesture recognition. Our results demonstrate that, while these techniques make incremental improvements on the trade-off on a relative scale, the negative side-effects caused by adversarial training still outweigh the improvements by an order of magnitude. We conclude that more substantial advances in robust learning methods are necessary before they can benefit robot learning tasks in practice.
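For readers unfamiliar with the baseline technique being revisited, the sketch below shows one FGSM-style adversarial training step in PyTorch: craft a perturbation from the input gradient, then update the model on the perturbed batch. It is a minimal generic sketch, not the specific robust-training methods or robot-learning setups evaluated in this work.

import torch
import torch.nn.functional as F

def fgsm_adversarial_step(model, optimizer, x, y, eps=0.03):
    """One adversarial-training step on an FGSM-perturbed batch."""
    # 1) Craft the perturbation from the gradient of the loss w.r.t. the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()

    # 2) Train on the perturbed batch; this is the source of the
    #    robustness-accuracy trade-off discussed above.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()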

Leveraging line features to improve the localization accuracy of point-based visual-inertial SLAM (VINS) is gaining interest, as they provide additional constraints on scene structure. However, real-time performance when incorporating line features in VINS has not been addressed. This paper presents PL-VINS, a real-time optimization-based monocular VINS method with point and line features, developed on top of the state-of-the-art point-based VINS-Mono \cite{vins}. We observe that current works use the LSD \cite{lsd} algorithm to extract line features; however, LSD is designed for scene shape representation rather than for the pose estimation problem, and its high computational cost becomes the bottleneck for real-time performance. In this paper, a modified LSD algorithm is presented by studying hidden parameter tuning and a length rejection strategy. The modified LSD runs at least three times as fast as LSD. Further, by representing space lines with Plücker coordinates, the residual error in line estimation is modeled in terms of the point-to-line distance, which is then minimized by iteratively updating the minimum four-parameter orthonormal representation of the Plücker coordinates. Experiments on a public benchmark dataset show that the localization error of our method is 12-16\% lower than that of VINS-Mono at the same pose update frequency. The source code of our method is available at: //github.com/cnqiangfu/PL-VINS.
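The point-to-line residual mentioned above is, in its usual form, the signed distance from the two endpoints of a detected image line segment to the projection of the space line. The sketch below illustrates that computation; it is an illustration of the standard formulation, not code from PL-VINS, and the projected-line representation l = (a, b, c) with ax + by + c = 0 is an assumption.

import numpy as np

def point_to_line_residual(line_2d, p_start, p_end):
    """Signed distances from the two observed endpoints of a detected
    line segment to the projected space line (a, b, c) in the image."""
    a, b, c = line_2d
    norm = np.hypot(a, b)
    d_start = (a * p_start[0] + b * p_start[1] + c) / norm
    d_end   = (a * p_end[0]   + b * p_end[1]   + c) / norm
    return np.array([d_start, d_end])

These two distances form the 2D residual that the optimizer drives toward zero while updating the four-parameter orthonormal line representation.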

Deep learning has been applied to a wide range of applications and has become increasingly popular in recent years. The goal of multimodal deep learning is to create models that can process and link information from various modalities. Despite the extensive development of unimodal learning, it still cannot cover all aspects of human learning. Multimodal learning helps models understand and analyze information better when multiple senses are engaged in its processing. This paper focuses on multiple types of modalities, i.e., image, video, text, audio, body gestures, facial expressions, and physiological signals. We provide a detailed analysis of past and current baseline approaches and an in-depth study of recent advancements in multimodal deep learning applications. A fine-grained taxonomy of various multimodal deep learning applications is proposed, elaborating on the different applications in more depth. The architectures and datasets used in these applications are also discussed, along with their evaluation metrics. Finally, the main issues are highlighted separately for each domain, along with possible future research directions.
