We present an ice friction model for the vehicle dynamics of a two-man bobsled, suitable for driver evaluation and for use in a driver-in-the-loop simulator. Longitudinal friction is modeled by combining experimental results with finite element simulations to yield a correlation between contact pressure and friction. To model lateral friction, we collect data from 44 bobsleigh runs using dedicated sensors. Non-linear regression is used to fit a bob-specific one-track vehicle dynamics model to the data. The model is applied in driving simulation and enables a novel method for bobsled driver evaluation. Drivers with various levels of experience are investigated, and the evaluation shows that the similar performance of the top drivers results from different driving styles.
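As a minimal sketch of the kind of single-track model referred to here, the following implements one Euler step of a kinematic bicycle model; the wheelbase value and the kinematic simplification are illustrative assumptions, not the paper's dynamic model with ice-friction terms:

```python
import math

def single_track_step(x, y, psi, v, delta, L=3.2, dt=0.01):
    """One Euler step of a kinematic single-track (bicycle) model.

    x, y  : position [m]
    psi   : heading [rad]
    v     : speed [m/s]
    delta : steering angle [rad]
    L     : wheelbase [m] (illustrative value)
    """
    x += v * math.cos(psi) * dt
    y += v * math.sin(psi) * dt
    psi += v / L * math.tan(delta) * dt
    return x, y, psi

# Example: 1 s of straight driving at 10 m/s advances x by 10 m.
x, y, psi = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, psi = single_track_step(x, y, psi, v=10.0, delta=0.0)
```

A regression-based fit, as in the paper, would tune the model's friction-related parameters so that simulated trajectories match the recorded runs.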
The recent framework of performative prediction aims to capture settings where predictions influence the very target/outcome they are trying to predict. In this paper, we introduce a natural multi-agent version of this framework, in which multiple decision makers try to predict the same outcome. We show that such competition can result in interesting phenomena by proving the possibility of phase transitions from stability to instability and eventually chaos. Specifically, we present settings of multi-agent performative prediction in which, under sufficient conditions, the dynamics lead to global stability and optimality. In the opposite direction, when the agents are not sufficiently cautious in their learning/update rates, we show that instability, and in fact formal chaos, is possible. We complement our theoretical predictions with simulations showcasing the predictive power of our results.
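The phase transition from stability to chaos as the update rate grows can be illustrated with a toy one-dimensional map; this is a generic sketch in which the logistic map stands in for a learning update, not the paper's multi-agent dynamics:

```python
def iterate(r, x0=0.2, burn=500, keep=50):
    """Iterate the logistic map x <- r*x*(1-x) and return the last `keep` values."""
    x = x0
    for _ in range(burn):       # discard the transient
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(x)
    return tail

# Small "update rate": the dynamics settle to a fixed point.
stable = iterate(2.8)
# Large "update rate": the dynamics never settle (chaotic regime).
chaotic = iterate(3.97)
```

Sweeping `r` traces out the familiar period-doubling route to chaos, the qualitative picture behind the stability/instability phase transitions discussed above.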
An automated driving system should be able to supervise its own performance and to request that the human driver take over when necessary. In the lane keeping scenario, predicting the vehicle's future trajectory is key to realizing safe and trustworthy driving automation. Previous studies on vehicle trajectory prediction mainly fall into two categories, i.e. physics-based and manoeuvre-based methods. Using a physics-based methodology, this paper proposes a lane departure prediction algorithm based on a closed-loop vehicle dynamics model. We use an extended Kalman filter to estimate the current vehicle states from sensing module outputs. A Kalman predictor incorporating the actual lane keeping control law is then used to predict future steering actions and vehicle states. A lane departure assessment module evaluates the probability distribution of the vehicle corner positions and decides whether to initiate a human takeover request. The prediction algorithm is capable of describing the stochastic characteristics of the future vehicle pose, which is preliminarily demonstrated in simulated tests. Finally, on-road tests at speeds of 15 to 50 km/h further show that the proposed method can accurately predict the vehicle's future trajectory. It may serve as a promising solution to lane departure risk assessment for automated lane keeping functions.
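A stripped-down sketch of the prediction-and-assessment step, assuming a scalar constant-lateral-velocity model in place of the paper's full closed-loop vehicle model (all parameter values are illustrative):

```python
import math

def lane_departure_prob(y, vy, p_yy, p_yv, p_vv,
                        horizon=1.0, dt=0.1, q_y=1e-4, q_v=1e-4,
                        half_lane=1.75):
    """Propagate lateral offset y and lateral velocity vy (with their 2x2
    covariance entries p_yy, p_yv, p_vv) `horizon` seconds ahead, then
    return the probability that the predicted offset leaves the lane.
    """
    for _ in range(int(round(horizon / dt))):
        # Mean propagation: y <- y + vy * dt
        y += vy * dt
        # Covariance propagation P <- F P F^T + Q with F = [[1, dt], [0, 1]]
        p_yy += 2 * dt * p_yv + dt * dt * p_vv + q_y
        p_yv += dt * p_vv
        p_vv += q_v
    sigma = math.sqrt(p_yy)
    # P(|Y| > half_lane) for Y ~ N(y, sigma^2): both lane boundaries
    right = 0.5 * math.erfc((half_lane - y) / (sigma * math.sqrt(2.0)))
    left = 0.5 * math.erfc((half_lane + y) / (sigma * math.sqrt(2.0)))
    return right + left
```

Thresholding the returned probability would correspond to the takeover decision; the paper additionally predicts the steering actions of the lane keeping controller rather than assuming constant lateral velocity.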
Humanoid robots often suffer significant impact forces when walking or running in non-predefined environments, which can easily damage the actuators due to their high stiffness. In recent years, passive, compliant series elastic actuators (SEAs) for driving humanoid joints have proven their capability in many respects. However, despite being widely applied in biped robot research, stable control of a humanoid powered by SEAs, especially during walking, remains a challenge. This paper proposes a model reference adaptive control (MRAC) scheme combined with a backstepping algorithm to deal with the parameter uncertainties in a humanoid's lower limb driven by an SEA system. This is also an extension of our previous research (Lanh et al., 2021). Firstly, a dynamic model of the SEA is obtained. Secondly, since there are unknown and uncertain parameters in the SEA model, an MRAC is employed to guarantee the robust performance of the humanoid's lower limb. Finally, an experiment is carried out to evaluate the effectiveness of the proposed controller and the SEA mechanism.
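As a hedged illustration of the MRAC idea (not the paper's backstepping design), the following simulates the classic MIT rule on a first-order stand-in plant whose parameters are unknown to the controller:

```python
# Plant:           x'  = -a*x   + b*u    (a, b "unknown" to the controller)
# Reference model: xm' = -am*xm + bm*r
# MIT rule:        theta' = -gamma * e * r,  with e = x - xm
dt, gamma, r = 0.01, 0.5, 1.0
a, b = 1.0, 1.0            # true (unknown) plant parameters
am, bm = 2.0, 2.0          # desired reference-model parameters
x = xm = theta = 0.0
for _ in range(2000):      # simulate 20 s with Euler steps
    u = theta * r                      # adaptive control law
    e = x - xm                         # tracking error
    x += (-a * x + b * u) * dt
    xm += (-am * xm + bm * r) * dt
    theta += (-gamma * e * r) * dt     # gradient-style parameter update
```

The adapted gain `theta` converges so that the plant output tracks the reference model despite the unknown plant gain; the paper's controller extends this idea with backstepping to handle the SEA's elastic joint dynamics.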
To improve the diagnostic accuracy of breast cancer detection, several researchers have used wavelet-based tools, which provide additional insight and information for aiding diagnostic decisions. The accuracy of such diagnoses, however, can be improved. This paper introduces a wavelet-based technique, non-decimated wavelet transform (NDWT)-based scaling estimation, that improves scaling parameter estimation over traditional methods. One distinctive feature of the NDWT is that it does not decimate wavelet coefficients at multiscale levels, resulting in redundant outputs that are used to lower the variance of scaling estimators. Another appealing feature of the proposed methodology is its freedom from the dyadic input-size constraints typical of standard wavelet-based approaches. To compare the estimation performance of the NDWT method with a conventional orthogonal wavelet transform-based method, we use simulation to estimate the Hurst exponent in two-dimensional fractional Brownian fields. The simulation results show that the proposed method improves on the conventional estimators of scaling and yields estimators with smaller mean-squared errors. We apply the NDWT method to the classification of mammograms as cancer or control and, for publicly available mammogram images from the database at the University of South Florida, find diagnostic accuracy in excess of 80%.
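The scaling-regression idea can be sketched in one dimension with an ordinary decimated Haar transform; the paper's NDWT variant additionally keeps all shifted coefficients to reduce estimator variance, so this simplified version is for illustration only:

```python
import numpy as np

def hurst_wavelet(signal, levels=6):
    """Estimate the Hurst exponent by regressing the log2-variance of Haar
    detail coefficients on the level j. For fractional Brownian motion,
    log2 Var(d_j) grows approximately linearly with slope 2H + 1.
    """
    x = np.asarray(signal, dtype=float)
    js, log_vars = [], []
    for j in range(1, levels + 1):
        n = len(x) // 2 * 2                         # even length
        d = (x[1:n:2] - x[0:n:2]) / np.sqrt(2.0)    # Haar details
        a = (x[1:n:2] + x[0:n:2]) / np.sqrt(2.0)    # Haar approximations
        js.append(j)
        log_vars.append(np.log2(np.mean(d ** 2)))
        x = a                                       # recurse on coarser scale
    slope = np.polyfit(js, log_vars, 1)[0]
    return (slope - 1.0) / 2.0
```

On an ordinary Brownian path (H = 0.5) the estimate lands near 0.5; at coarse levels the decimated transform has few coefficients, which is exactly the variance problem the redundant NDWT mitigates.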
Neural networks have been increasingly employed in Model Predictive Control (MPC) to control nonlinear dynamic systems. However, MPC still suffers from the problem that its achievable update rate is insufficient to cope with model uncertainty and external disturbances. In this paper, we present a novel control scheme that can design an optimal tracking controller using the neural network dynamics of the MPC, making it applicable as a plug-and-play extension to any existing model-based feedforward controller. We also describe how our method handles a neural network containing historical information, which does not follow a general form of dynamics. The proposed method is evaluated on classical control benchmarks with external disturbances. We also extend our control framework to an aggressive autonomous driving task with unknown friction. In all experiments, our method outperformed the compared methods by a large margin. Our controller also showed low control chattering levels, demonstrating that our feedback controller does not interfere with the optimal command of the MPC.
Time series modeling has attracted extensive research efforts; however, achieving both reliable efficiency and interpretability from a unified model remains a challenging problem. In the literature, shapelets offer interpretable and explanatory insights for classification tasks, yet most existing works ignore the differing representative power at different time slices, as well as (more importantly) the evolution pattern of shapelets. In this paper, we propose to extract time-aware shapelets by designing a two-level timing factor. Moreover, we define and construct the shapelet evolution graph, which captures how shapelets evolve over time and can be incorporated into time series embeddings via graph embedding algorithms. To validate whether the representations obtained in this way can be applied effectively in various scenarios, we conduct experiments on three public time series datasets and two real-world datasets from different domains. Experimental results clearly show the improvements achieved by our approach compared with 17 state-of-the-art baselines.
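The basic shapelet matching score that such methods build on can be sketched as the minimum distance between a shapelet and all equal-length subsequences of a series; this is the textbook definition, not this paper's time-aware variant:

```python
import math

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between `shapelet` and every
    subsequence of `series` of the same length."""
    m = len(shapelet)
    best = float("inf")
    for i in range(len(series) - m + 1):
        d = math.sqrt(sum((series[i + j] - shapelet[j]) ** 2
                          for j in range(m)))
        best = min(best, d)
    return best
```

A small distance means the shapelet's pattern occurs somewhere in the series; the paper's timing factors additionally weight where in the series the match occurs.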
Lane detection, the process of identifying lane markings as approximated curves, is widely used for lane departure warning and adaptive cruise control in autonomous vehicles. The popular pipeline that solves it in two steps -- feature extraction followed by post-processing -- while useful, is inefficient and struggles to learn the global context and the long, thin structure of lanes. To tackle these issues, we propose an end-to-end method that directly outputs the parameters of a lane shape model, using a network built with a transformer to learn richer structures and context. The lane shape model is formulated based on road structure and camera pose, providing a physical interpretation for the parameters output by the network. The transformer models non-local interactions with a self-attention mechanism to capture slender structures and global context. The proposed method is validated on the TuSimple benchmark and shows state-of-the-art accuracy with the most lightweight model size and the fastest speed. Additionally, our method shows excellent adaptability to a challenging self-collected lane detection dataset, demonstrating its deployment potential in real applications. Code is available at https://github.com/liuruijin17/LSTR.
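As a loose illustration of what a parametric lane shape model looks like, the following fits a quadratic curve to hypothetical lane-marking pixel coordinates; the paper's model is derived from road structure and camera pose rather than a plain polynomial fit, and these data points are invented for the example:

```python
import numpy as np

# Hypothetical lane-marking pixel coordinates (image row v, column u).
v = np.array([250.0, 300.0, 350.0, 400.0, 450.0])
u = np.array([410.0, 380.0, 355.0, 335.0, 320.0])

# Represent the lane as u = a*v^2 + b*v + c, a common low-order
# approximation of the curvature/slope/offset structure such models encode.
coeffs = np.polyfit(v, u, 2)
u_hat = np.polyval(coeffs, v)
```

An end-to-end detector like the one described above regresses such curve parameters directly from the image instead of fitting them to extracted feature points.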
We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones. Server-based training using stochastic gradient descent is compared with training on client devices using the Federated Averaging algorithm. The federated algorithm, which enables training on a higher-quality dataset for this use case, is shown to achieve better prediction recall. This work demonstrates the feasibility and benefit of training language models on client devices without exporting sensitive user data to servers. The federated learning environment gives users greater control over their data and simplifies the task of incorporating privacy by default with distributed training and aggregation across a population of client devices.
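The aggregation step of Federated Averaging can be sketched as a data-size-weighted mean of the client model weights; this is a minimal illustration with invented names, not the production implementation:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One aggregation round of Federated Averaging: combine per-client
    model weights, weighted by each client's number of local examples.

    client_weights : list of per-client weight lists (one array per layer)
    client_sizes   : number of local training examples per client
    """
    total = sum(client_sizes)
    avg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * w
    return avg
```

In the full algorithm, each client first runs several local SGD steps on its on-device data before the server performs this weighted average, so raw user data never leaves the device.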
Autonomous urban driving navigation amid complex multi-agent dynamics is under-explored due to the difficulty of learning an optimal driving policy. The traditional modular pipeline relies heavily on hand-designed rules and a pre-processing perception system, while supervised learning-based models are limited by the availability of extensive human experience. We present a general and principled Controllable Imitative Reinforcement Learning (CIRL) approach that enables the driving agent to achieve higher success rates based on vision inputs alone in a high-fidelity car simulator. To alleviate the low exploration efficiency in large continuous action spaces, which often prohibits the use of classical RL on challenging real tasks, CIRL explores over a reasonably constrained action space guided by encoded experiences that imitate human demonstrations, building upon Deep Deterministic Policy Gradient (DDPG). Moreover, we propose specialized adaptive policies and steering-angle reward designs for different control signals (i.e. follow, straight, turn right, turn left) based on shared representations, to improve the model's capability in tackling diverse cases. Extensive experiments on the CARLA driving benchmark demonstrate that CIRL substantially outperforms all previous methods in terms of the percentage of successfully completed episodes on a variety of goal-directed driving tasks. We also show its superior generalization capability in unseen environments. To our knowledge, this is the first successful case of a driving policy learned through reinforcement learning in a high-fidelity simulator that performs better than supervised imitation learning.
In this work, we present a method for tracking and learning the dynamics of all objects in a large-scale robot environment. A mobile robot patrols the environment and visits its different locations one by one. Movable objects are discovered by change detection and tracked throughout the robot deployment. For tracking, we extend the Rao-Blackwellized particle filter of previous work with birth and death processes, enabling the method to handle an arbitrary number of objects. Target births and associations are sampled using Gibbs sampling. The parameters of the system are then learnt using the Expectation Maximization algorithm in an unsupervised fashion. The system thus enables learning the dynamics of one particular environment, and of its objects. The algorithm is evaluated on data collected autonomously by a mobile robot in an office environment during a real-world deployment. We show that the algorithm automatically identifies and tracks the moving objects within 3D maps and infers plausible dynamics models, significantly decreasing the modeling bias of our previous work. The proposed method improves over previous methods for environment dynamics learning, as it allows learning of fine-grained processes.