The miniaturization of inertial measurement units (IMUs) facilitates their widespread use in a growing number of application domains. Orientation estimation is a prerequisite for most further data processing steps in inertial motion tracking, such as position/velocity estimation, joint angle estimation, and 3D visualization. Errors in the estimated orientations severely affect all further processing steps. Recent systematic comparisons of existing algorithms show that out-of-the-box accuracy is often low and that application-specific tuning is required to obtain high accuracy. In the present work, we propose and extensively evaluate a quaternion-based orientation estimation algorithm that is based on a novel approach of filtering the acceleration measurements in an almost-inertial frame and that includes extensions for gyroscope bias estimation and magnetic disturbance rejection, as well as a variant for offline data processing. In contrast to all existing work, we perform an extensive evaluation, using a large collection of publicly available datasets and eight literature methods for comparison. The proposed method consistently outperforms all literature methods and achieves an average RMSE of 2.9°, while the errors obtained with literature methods range from 5.3° to 16.7°. Since the evaluation was performed with one single fixed parametrization across a very diverse dataset collection, we conclude that the proposed method provides unprecedented out-of-the-box performance for a broad range of motions, sensor hardware, and environmental conditions. This gain in orientation estimation accuracy is expected to advance the field of IMU-based motion analysis and provide performance benefits in numerous applications. The provided open-source implementation makes it easy to employ the proposed method.
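For readers unfamiliar with quaternion-based inertial orientation estimation, the sketch below shows a generic filter that integrates the gyroscope and corrects the tilt with the accelerometer. It is not the algorithm proposed in the abstract (the almost-inertial-frame acceleration filtering, bias estimation, and magnetometer handling are all omitted); the `update` function and the `gain` parameter are illustrative assumptions only.

```python
# Minimal sketch of a generic quaternion IMU orientation filter:
# gyroscope strapdown integration plus accelerometer tilt correction.
# NOT the proposed method; a simplified, assumption-laden illustration.
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def update(q, gyr, acc, dt, gain=0.01):
    """One filter step: integrate the angular rate, then nudge the estimated
    gravity direction towards the measured (gravity-dominated) acceleration."""
    # Gyroscope strapdown integration: q_dot = 0.5 * q x (0, omega).
    q = q + quat_mult(q, np.array([0.0, *gyr])) * 0.5 * dt
    q /= np.linalg.norm(q)
    # Estimated gravity direction in the sensor frame (third row of R(q)).
    w, x, y, z = q
    g_est = np.array([2*(x*z - w*y), 2*(y*z + w*x), w*w - x*x - y*y + z*z])
    acc = acc / np.linalg.norm(acc)
    corr = np.cross(acc, g_est)          # small-angle correction axis
    q = quat_mult(q, np.array([1.0, *(gain * corr / 2.0)]))
    return q / np.linalg.norm(q)
```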
A general numerical method using sum of squares programming is proposed to address the problem of estimating the region of attraction (ROA) of an asymptotically stable equilibrium point of a nonlinear polynomial system. The method is based on Lyapunov theory, and a shape function is defined to enlarge the provable subset of a local Lyapunov function. In contrast with existing methods whose shape function is centered at the equilibrium point, the proposed method utilizes a shifted shape function (SSF) whose center is shifted iteratively towards the boundary of the newly obtained invariant subset to improve the ROA estimate. A set of shifting centers with corresponding SSFs is generated to produce proven subsets of the exact ROA, and a composition method, namely R-composition, is then employed to express these independent sets in a compact form as a single but richer-shaped level set. The proposed method, denoted RcomSSF, brings a significant improvement for general ROA estimation problems, especially for non-symmetric or unbounded ROAs, while keeping the computational burden at a reasonable level. Its effectiveness and advantages are demonstrated by several benchmark examples from the literature.
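As a rough sketch, the Lyapunov-based enlargement step that shape-function approaches build on can be written as the following optimization for a fixed center c; the SSF center-shifting iteration and the R-composition step of the abstract are not shown, and the symbols V, p, f, gamma, and beta are generic notation assumed here rather than taken from the paper.

```latex
% Generic shape-function enlargement of a certified Lyapunov level set;
% a simplified sketch, not the full RcomSSF iteration.
\begin{align}
  \max_{\beta > 0} \quad & \beta \\
  \text{s.t.} \quad
  & \{\, x : p(x - c) \le \beta \,\} \subseteq \Omega_\gamma
      := \{\, x : V(x) \le \gamma \,\}, \\
  & \nabla V(x)^{\top} f(x) < 0
      \quad \forall\, x \in \Omega_\gamma \setminus \{0\},
\end{align}
```

Both set inclusions are typically certified with sum-of-squares multipliers (a generalized S-procedure), which turns the problem into a bilinear SOS program solved by alternating over the multipliers and the level-set parameters.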
The paper addresses the problem of time offset synchronization in the presence of temperature variations, which lead to a non-Gaussian environment. In this context, standard Kalman filtering turns out to be suboptimal. A functional optimization approach is developed in order to approximate optimal estimation of the clock offset between the master and the slave. To this aim, a numerical approximation based on standard neural network training is provided. Further heuristics based on spline regression are provided as well. An extensive performance evaluation highlights the benefits of the proposed techniques, which can be easily generalized to several clock synchronization protocols and operating environments.
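For reference, the sketch below shows a minimal linear Kalman filter tracking a clock offset and skew from noisy offset measurements, i.e. the conventional baseline against which the proposed neural-network and spline estimators are compared; the two-state constant-skew model and the noise variances are illustrative assumptions.

```python
# Baseline sketch: linear Kalman filter for master-slave clock offset tracking.
import numpy as np

def kalman_clock_offset(measurements, dt, q_offset=1e-6, q_skew=1e-9, r=1e-4):
    """Track state x = [offset, skew] from noisy offset measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-skew state transition
    H = np.array([[1.0, 0.0]])              # only the offset is observed
    Q = np.diag([q_offset, q_skew])         # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x, P = np.zeros(2), np.eye(2)
    estimates = []
    for z in measurements:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```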
Stochastic precipitation generators (SPGs) are a class of statistical models which generate synthetic data that can simulate dry and wet rainfall stretches for long durations. Generated precipitation time series data are used in climate projections, impact assessment of extreme weather events, and water resource and agricultural management. We construct an SPG for daily precipitation data that is specified as a semi-continuous distribution at every location, with a point mass at zero for no precipitation and a mixture of two exponential distributions for positive precipitation. Our generators are obtained as hidden Markov models (HMMs) where the underlying climate conditions form the states. We fit a 3-state HMM to daily precipitation data for the Chesapeake Bay watershed on the Eastern coast of the USA for the wet season months of July to September from 2000 to 2019. Data are obtained from the GPM-IMERG remote sensing dataset, and existing work on variational HMMs is extended to incorporate semi-continuous emission distributions. In light of the high spatial dimension of the data, a stochastic optimization implementation allows for computational speedup. The most likely sequence of underlying states is estimated using the Viterbi algorithm, and we identify the differences in the weather regimes associated with the states of the proposed model. Synthetic data generated from the HMM can reproduce monthly precipitation statistics as well as the spatial dependency present in the historical GPM-IMERG data.
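To make the emission model concrete, the sketch below samples a synthetic daily precipitation series at a single location from a 3-state HMM with semi-continuous emissions (a point mass at zero plus a two-component exponential mixture). All parameter names and values are illustrative assumptions, not fitted values from the paper, and the spatial dimension is ignored.

```python
# Sketch: generating synthetic precipitation from a 3-state HMM with
# semi-continuous (zero-inflated exponential-mixture) emissions.
import numpy as np

rng = np.random.default_rng(0)

p_dry  = np.array([0.70, 0.40, 0.10])      # P(zero rain | state), assumed
w_mix  = np.array([0.8, 0.6, 0.3])         # weight of the light-rain component
scale1 = np.array([2.0, 5.0, 8.0])         # mm, light-rain exponential scale
scale2 = np.array([10.0, 20.0, 35.0])      # mm, heavy-rain exponential scale
A = np.array([[0.8, 0.15, 0.05],           # assumed state transition matrix
              [0.2, 0.60, 0.20],
              [0.1, 0.30, 0.60]])

def sample_precip(n_days, state0=0):
    """Generate a synthetic daily precipitation series (one location)."""
    state, out = state0, []
    for _ in range(n_days):
        if rng.random() < p_dry[state]:
            out.append(0.0)                            # dry day (point mass)
        else:
            k = 0 if rng.random() < w_mix[state] else 1
            out.append(rng.exponential((scale1, scale2)[k][state]))
        state = rng.choice(3, p=A[state])              # next hidden state
    return np.array(out)

print(sample_precip(10))
```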
Estimating the entropy rate of discrete time series is a challenging problem with important applications in numerous areas including neuroscience, genomics, image processing and natural language processing. A number of approaches have been developed for this task, typically based either on universal data compression algorithms, or on statistical estimators of the underlying process distribution. In this work, we propose a fully-Bayesian approach for entropy estimation. Building on the recently introduced Bayesian Context Trees (BCT) framework for modelling discrete time series as variable-memory Markov chains, we show that it is possible to sample directly from the induced posterior on the entropy rate. This can be used to estimate the entire posterior distribution, providing much richer information than point estimates. We develop theoretical results for the posterior distribution of the entropy rate, including proofs of consistency and asymptotic normality. The practical utility of the method is illustrated on both simulated and real-world data, where it is found to outperform state-of-the-art alternatives.
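To illustrate what a posterior over the entropy (rather than a point estimate) looks like, the toy sketch below uses a memoryless categorical source with a Dirichlet prior, for which posterior samples of the distribution can be mapped directly to entropy values. This deliberately replaces the paper's variable-memory BCT construction with the simplest possible model; the prior setting and function names are assumptions.

```python
# Toy stand-in for fully-Bayesian entropy estimation: posterior samples of
# the entropy of an i.i.d. categorical source under a Dirichlet prior.
import numpy as np

def posterior_entropy_samples(data, alphabet_size, alpha=0.5, n_samples=5000, seed=1):
    rng = np.random.default_rng(seed)
    counts = np.bincount(data, minlength=alphabet_size)
    # Dirichlet posterior over the symbol probabilities.
    theta = rng.dirichlet(counts + alpha, size=n_samples)
    # Entropy (in bits) of each posterior sample.
    return -np.sum(np.where(theta > 0, theta * np.log2(theta), 0.0), axis=1)

data = np.random.default_rng(2).integers(0, 3, size=200)
samples = posterior_entropy_samples(data, alphabet_size=3)
print(samples.mean(), np.quantile(samples, [0.05, 0.95]))   # posterior summary
```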
In a complex urban environment, due to the unavoidable interruption of GNSS positioning signals and the accumulation of errors during vehicle driving, the collected vehicle trajectory data are likely to be inaccurate and incomplete. A weighted trajectory reconstruction algorithm based on a bidirectional RNN deep network is proposed. GNSS/OBD trajectory acquisition equipment is used to collect vehicle trajectory information, and multi-source data fusion is used to realize bidirectional weighted trajectory reconstruction. At the same time, the neural arithmetic logic unit (NALU) is introduced into the trajectory reconstruction model to strengthen the extrapolation ability of the deep network and ensure the accuracy of trajectory prediction, which improves the robustness of the algorithm when reconstructing trajectories on complex urban road sections. Actual urban road sections were selected for testing experiments, and a comparative analysis was carried out against existing methods. Based on the root-mean-square error (RMSE) and visualization of the reconstructed trajectories in Google Earth, the experimental results demonstrate the effectiveness and reliability of the proposed algorithm.
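The sketch below shows a NALU layer in the form given by Trask et al. (2018), attached to a bidirectional recurrent encoder as it might appear in a trajectory-reconstruction model; the layer sizes, the GRU encoder, and the two-dimensional output head are assumptions for illustration, not the paper's architecture.

```python
# Sketch of a neural arithmetic logic unit (NALU) layer on top of a
# bidirectional RNN encoder; dimensions and wiring are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NALU(nn.Module):
    def __init__(self, in_dim, out_dim, eps=1e-7):
        super().__init__()
        self.eps = eps
        self.W_hat = nn.Parameter(torch.empty(out_dim, in_dim))
        self.M_hat = nn.Parameter(torch.empty(out_dim, in_dim))
        self.G = nn.Parameter(torch.empty(out_dim, in_dim))
        for p in (self.W_hat, self.M_hat, self.G):
            nn.init.xavier_uniform_(p)

    def forward(self, x):
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)
        a = F.linear(x, W)                                   # additive path
        m = torch.exp(F.linear(torch.log(torch.abs(x) + self.eps), W))  # multiplicative path
        g = torch.sigmoid(F.linear(x, self.G))               # learned gate
        return g * a + (1 - g) * m

# Example: bidirectional GRU encoder followed by a NALU output head.
rnn = nn.GRU(input_size=4, hidden_size=32, bidirectional=True, batch_first=True)
head = NALU(64, 2)                    # e.g. predict 2D position offsets
x = torch.randn(8, 50, 4)             # batch of 8 trajectories, 50 steps
out, _ = rnn(x)
print(head(out).shape)                # torch.Size([8, 50, 2])
```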
Currently, mobile robots are developing rapidly and are finding numerous applications in industry. However, several problems remain related to their practical use, such as the need for expensive hardware and high power consumption levels. In this study, we build a low-cost indoor mobile robot platform that does not include a LiDAR or a GPU. Then, we design an autonomous navigation architecture that guarantees real-time performance on our platform with an RGB-D camera and a low-end off-the-shelf single board computer. The overall system includes SLAM, global path planning, ground segmentation, and motion planning. The proposed ground segmentation approach extracts a traversability map from raw depth images for the safe driving of low-body mobile robots. We apply both rule-based and learning-based navigation policies using the traversability map. Running sensor data processing and other autonomous driving components simultaneously, our navigation policies run at a refresh rate of 18 Hz for control commands, whereas other systems have slower refresh rates. Our methods show better performance than current state-of-the-art navigation approaches under limited computational resources, as shown in 3D simulation tests. In addition, we demonstrate the applicability of our mobile robot system through successful autonomous driving in an indoor environment.
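A minimal rule-based version of depth-image ground segmentation is sketched below: pixels are back-projected with the pinhole model and marked traversable when they lie near the expected ground plane. The camera intrinsics, mount height, and thresholds are assumptions, and the camera is assumed level for simplicity; the actual approach in the abstract may differ substantially.

```python
# Sketch: rule-based traversability map from a raw depth image.
import numpy as np

def traversability_map(depth, fx, fy, cx, cy, cam_height=0.3, tol=0.05):
    """depth: HxW array in meters from an RGB-D camera assumed level.
    Returns a boolean HxW mask of likely-ground (traversable) pixels."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy                 # +y points downward in the image frame
    ground = np.abs(y - cam_height) < tol # points roughly one camera height below
    valid = z > 0                         # discard invalid (zero) depth readings
    return ground & valid

depth = np.full((480, 640), 2.0)          # dummy depth frame for illustration
mask = traversability_map(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0)
print(mask.mean())
```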
6D object pose estimation has long been a research topic in the field of computer vision and robotics. Many real-world applications, such as robotic grasping, manipulation, and autonomous navigation, require the correct pose of objects present in a scene to perform their specific task. The problem becomes even harder when the objects are placed in a cluttered scene and the level of occlusion is high. Prior works have tried to overcome this problem but could not achieve accuracy that can be considered reliable in real-world applications. In this paper, we present an architecture that, unlike prior work, is context-aware: it utilizes the contextual information available about the objects. Our proposed architecture treats objects separately according to their type, i.e., symmetric or non-symmetric. A deeper estimator and refiner network pair is used for non-symmetric objects than for symmetric ones, owing to their intrinsic differences. Our experiments show an accuracy improvement of about 3.2% on the LineMOD dataset, which is considered a benchmark for pose estimation in occluded and cluttered scenes, over the prior state of the art, DenseFusion. Our results also show that the achieved inference time is sufficient for real-time usage.
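The distinction between symmetric and non-symmetric objects also shows up in how pose accuracy is commonly scored on LineMOD-style benchmarks: the ADD metric for non-symmetric objects and the ADD-S metric for symmetric ones, with the usual 10%-of-diameter correctness threshold. The sketch below implements these standard metrics; the function names are illustrative and not part of any of the compared methods' code.

```python
# Sketch of the standard ADD / ADD-S pose-accuracy metrics.
import numpy as np

def add_metric(model_pts, R_gt, t_gt, R_pred, t_pred):
    """Mean distance between corresponding transformed model points."""
    gt = model_pts @ R_gt.T + t_gt
    pred = model_pts @ R_pred.T + t_pred
    return np.mean(np.linalg.norm(gt - pred, axis=1))

def adds_metric(model_pts, R_gt, t_gt, R_pred, t_pred):
    """Mean distance to the closest transformed point (symmetric objects)."""
    gt = model_pts @ R_gt.T + t_gt
    pred = model_pts @ R_pred.T + t_pred
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=2)
    return np.mean(d.min(axis=1))

def pose_correct(err, diameter, threshold=0.1):
    """A pose counts as correct if the error is below 10% of the object diameter."""
    return err < threshold * diameter
```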
Good models require good training data. For overparameterized deep models, the causal relationship between training data and model predictions is increasingly opaque and poorly understood. Influence analysis partially demystifies training's underlying interactions by quantifying the amount each training instance alters the final model. Measuring the training data's influence exactly can be provably hard in the worst case; this has led to the development and use of influence estimators, which only approximate the true influence. This paper provides the first comprehensive survey of training data influence analysis and estimation. We begin by formalizing the various, and in places orthogonal, definitions of training data influence. We then organize state-of-the-art influence analysis methods into a taxonomy; we describe each of these methods in detail and compare their underlying assumptions, asymptotic complexities, and overall strengths and weaknesses. Finally, we propose future research directions to make influence analysis more useful in practice as well as more theoretically and empirically sound. A curated, up-to-date list of resources related to influence analysis is available at //github.com/ZaydH/influence_analysis_papers.
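As a concrete anchor for the leave-one-out notion of influence that many estimators approximate, the toy sketch below measures the influence of each training instance as the change in a test prediction when the model is retrained without that instance. The tiny scikit-learn logistic regression is purely illustrative; exact leave-one-out retraining is rarely feasible for deep models, which is precisely why approximate estimators exist.

```python
# Toy sketch: exact leave-one-out (LOO) training-data influence on a test prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

def loo_influence(X, y, x_test):
    full = LogisticRegression().fit(X, y)
    base = full.predict_proba([x_test])[0, 1]        # prediction with all data
    influences = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i                # drop instance i
        reduced = LogisticRegression().fit(X[mask], y[mask])
        influences.append(base - reduced.predict_proba([x_test])[0, 1])
    return np.array(influences)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = (X[:, 0] + 0.3 * rng.normal(size=50) > 0).astype(int)
print(loo_influence(X, y, x_test=np.array([1.0, 0.0, 0.0]))[:5])
```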
This work addresses a novel and challenging problem of estimating the full 3D hand shape and pose from a single RGB image. Most current methods in 3D hand analysis from monocular RGB images focus only on estimating the 3D locations of hand keypoints, which cannot fully express the 3D shape of the hand. In contrast, we propose a Graph Convolutional Neural Network (Graph CNN) based method to reconstruct a full 3D mesh of the hand surface that contains richer information of both 3D hand shape and pose. To train networks with full supervision, we create a large-scale synthetic dataset containing both ground truth 3D meshes and 3D poses. When fine-tuning the networks on real-world datasets without 3D ground truth, we propose a weakly-supervised approach by leveraging the depth map as a weak supervision in training. Through extensive evaluations on our proposed new datasets and two public datasets, we show that our proposed method can produce accurate and reasonable 3D hand meshes, and can achieve superior 3D hand pose estimation accuracy when compared with state-of-the-art methods.
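For readers unfamiliar with graph convolutions on mesh vertices, the sketch below shows a basic normalized-aggregation graph convolution layer (in the spirit of Kipf and Welling, 2017) operating on a toy mesh patch; the adjacency, feature sizes, and output dimension are placeholders, not the paper's actual network.

```python
# Sketch: a simple graph convolution layer over mesh-vertex features.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        """x: (V, in_dim) vertex features, adj: (V, V) adjacency with self-loops."""
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        x = self.linear(adj @ x / deg)     # degree-normalized neighborhood aggregation
        return torch.relu(x)

# Toy 4-vertex mesh patch (chain with self-loops).
adj = torch.eye(4)
adj[0, 1] = adj[1, 0] = adj[1, 2] = adj[2, 1] = adj[2, 3] = adj[3, 2] = 1.0
layer = GraphConv(in_dim=64, out_dim=3)    # e.g. map features to 3D vertex offsets
x = torch.randn(4, 64)
print(layer(x, adj).shape)                 # torch.Size([4, 3])
```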
Image segmentation is still an open problem, especially when the intensities of the objects of interest overlap due to the presence of intensity inhomogeneity (also known as bias field). To segment images with intensity inhomogeneities, a bias correction embedded level set model is proposed in which Inhomogeneities are Estimated by Orthogonal Primary Functions (IEOPF). In the proposed model, the smoothly varying bias is estimated by a linear combination of a given set of orthogonal primary functions. An inhomogeneous intensity clustering energy is then defined, and membership functions of the clusters described by the level set function are introduced to rewrite the energy as the data term of the proposed model. Similar to popular level set methods, a regularization term and an arc length term are also included to regularize and smooth the level set function, respectively. The proposed model is then extended to multichannel and multiphase patterns to segment colour images and images with multiple objects, respectively. It has been extensively tested on both synthetic and real images that are widely used in the literature, as well as the public BrainWeb and IBSR datasets. Experimental results and comparison with state-of-the-art methods demonstrate the advantages of the proposed model in terms of bias correction and segmentation accuracy.
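The sketch below illustrates the core modelling idea of representing a smoothly varying bias field as a linear combination of orthogonal (here, Legendre) polynomial basis functions over the image domain. It only evaluates such a field for given coefficients; the basis order and coefficient values are illustrative, and the level-set optimization that would estimate them is not shown.

```python
# Sketch: bias field as a linear combination of orthogonal Legendre polynomials.
import numpy as np
from numpy.polynomial import legendre

def bias_field(shape, coeffs):
    """coeffs: (order, order) matrix of basis weights; returns an HxW field."""
    h, w = shape
    y = np.linspace(-1.0, 1.0, h)
    x = np.linspace(-1.0, 1.0, w)
    yy, xx = np.meshgrid(y, x, indexing="ij")
    # legval2d evaluates sum_ij c[i, j] * P_i(yy) * P_j(xx) on the grid.
    return legendre.legval2d(yy, xx, coeffs)

coeffs = np.array([[1.0, 0.2, 0.0],
                   [0.1, 0.0, 0.0],
                   [0.05, 0.0, 0.0]])      # illustrative low-order coefficients
field = bias_field((128, 128), coeffs)
print(field.min(), field.max())
```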