Traction parameters, which characterize the ground-wheel contact dynamics, are a central factor in the energy efficiency of vehicles. Knowledge of the current traction parameters is indispensable for optimizing fuel consumption, reducing tire wear, and increasing productivity. Unfortunately, these parameters are difficult to measure and require expensive force and torque sensors. An alternative is to determine them via system identification. In this work, we validate such a method in field experiments with a mobile robot. The method is based on an adaptive Kalman filter. We show how it estimates the traction parameters online, during motion in the field, and compare the estimates to values determined with a six-axis force-torque sensor installed for verification. Adhesion-slip ratio curves are recorded and compared to curves from the literature for additional validation of the method. The results can serve as a foundation for a number of optimal traction methods.
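As an illustration of the idea (not the paper's exact filter design), the following Python sketch estimates a single slowly varying traction parameter with an innovation-based adaptive Kalman filter; the state, measurement model, and noise values are assumptions made for demonstration.

```python
import numpy as np

# Minimal adaptive (innovation-based) Kalman filter sketch for online
# estimation of a slowly varying traction parameter (e.g., an adhesion
# coefficient mu). State x = [mu]; measurement z = observed traction
# force divided by normal load. All models here are illustrative.

class AdaptiveKF:
    def __init__(self, x0, P0, Q, R, window=20):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R
        self.innovations = []
        self.window = window  # innovation history used to adapt R

    def step(self, z):
        # Prediction: random-walk model for the traction parameter.
        x_pred = self.x
        P_pred = self.P + self.Q

        # Innovation and its covariance (H = 1 for a direct measurement).
        nu = z - x_pred
        S = P_pred + self.R

        # Update.
        K = P_pred / S
        self.x = x_pred + K * nu
        self.P = (1.0 - K) * P_pred

        # Adaptation: re-estimate R from the recent innovation sequence.
        self.innovations.append(nu)
        self.innovations = self.innovations[-self.window:]
        C = np.mean(np.square(self.innovations))
        self.R = max(C - P_pred, 1e-6)
        return self.x

# Usage on synthetic data: the true parameter drifts slowly, measurements are noisy.
rng = np.random.default_rng(0)
kf = AdaptiveKF(x0=0.5, P0=1.0, Q=1e-4, R=1e-2)
for k in range(200):
    mu_true = 0.6 + 0.1 * np.sin(0.02 * k)
    estimate = kf.step(mu_true + rng.normal(scale=0.05))
```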
Accurate product information is critical for e-commerce stores to allow customers to browse, filter, and search for products. Product data quality suffers from missing or incorrect information, resulting in a poor customer experience. While machine learning can be used to correct inaccurate or missing information, achieving high performance on fashion image classification tasks requires large amounts of annotated data, which is expensive to generate due to labeling costs. One solution is to generate synthetic data, which requires no manual labeling. However, a model trained solely on synthetic images can generalize poorly when performing inference on real-world data because of the domain shift. We introduce a new unsupervised domain adaptation technique that converts images from the synthetic domain into the real-world domain. Our approach combines a generative neural network and a classifier that are jointly trained to produce realistic images while preserving the synthetic label information. We find that using real-world pseudo-labels during training helps the classifier generalize to the real-world domain, reducing the synthetic bias. We successfully train a visual pattern classification model in the fashion domain without real-world annotations. Experiments show that our method outperforms other unsupervised domain adaptation algorithms.
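A minimal sketch of the pseudo-labeling idea, assuming a PyTorch classifier; the generative/adversarial part of the approach is omitted, and the confidence threshold and function names are placeholders rather than the paper's actual setup.

```python
import torch
import torch.nn.functional as F

# Illustrative single training step combining (i) a classification loss
# on labeled synthetic images and (ii) a pseudo-label loss on unlabeled
# real images whose predictions exceed a confidence threshold.

def train_step(classifier, optimizer, synth_x, synth_y, real_x, threshold=0.9):
    optimizer.zero_grad()

    # Supervised loss on synthetic images with known labels.
    loss_synth = F.cross_entropy(classifier(synth_x), synth_y)

    # Pseudo-label loss on real images: keep only confident predictions.
    with torch.no_grad():
        probs = F.softmax(classifier(real_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
        mask = conf > threshold
    if mask.any():
        loss_real = F.cross_entropy(classifier(real_x[mask]), pseudo_y[mask])
    else:
        loss_real = torch.zeros((), device=synth_x.device)

    loss = loss_synth + loss_real
    loss.backward()
    optimizer.step()
    return loss.item()
```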
Physics-inspired neural networks have proven to be an effective modeling approach, yielding more physically plausible results with less dependence on data. However, their application in robotics is limited by the non-conservative nature of robot dynamics and the difficulty of modeling friction. Moreover, these physics-inspired neural networks do not account for complex input matrices, such as those found in underactuated soft robots. This paper addresses these problems by extending Lagrangian and Hamiltonian neural networks with dissipation and a simplified input matrix. Additionally, the loss function is evaluated using the Runge-Kutta algorithm, circumventing the inaccuracies and environmental susceptibility inherent in direct acceleration measurements. First, the effectiveness of the proposed method is validated via simulations of soft and rigid robots. Then, the approach is validated experimentally on a tendon-driven soft robot and a Panda robot. The simulation and experimental results show that the modified neural networks can model different robots and that the learned models enable decent anticipatory control.
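A minimal sketch of a Runge-Kutta-based training loss, assuming a learned dynamics function that maps the state to its time derivative; the exact network architecture and integration setup in the paper may differ.

```python
import torch

# Instead of supervising on measured accelerations, the learned dynamics
# f(state) is integrated one step with classical RK4 and compared against
# the next measured state. `dynamics` stands for any learned model (e.g.,
# a Lagrangian network) mapping [q, q_dot] to its time derivative.

def rk4_step(dynamics, state, dt):
    k1 = dynamics(state)
    k2 = dynamics(state + 0.5 * dt * k1)
    k3 = dynamics(state + 0.5 * dt * k2)
    k4 = dynamics(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_loss(dynamics, state_t, state_t_plus_1, dt):
    # Mean squared error between the RK4 prediction and the measured next
    # state; only positions and velocities are needed as training data.
    return torch.mean((rk4_step(dynamics, state_t, dt) - state_t_plus_1) ** 2)
```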
This dataset contains 10,000 fluid flow and heat transfer simulations of U-bend shapes. Each shape is described by 28 design parameters and is evaluated with computational fluid dynamics methods. The dataset provides a comprehensive benchmark for investigating various problems and methods from the field of design optimization, for which supervised, semi-supervised, and unsupervised deep learning approaches can be employed. A unique feature of this dataset is that each shape is available in three distinct data types: combinations of design parameters and objectives; 2D images of the geometry and the solution variables of the numerical simulation at five different resolutions; and a representation based on the cell values of the numerical mesh. The third representation makes it possible to account for the specific data structure of numerical simulations in deep learning approaches. The source code and the container used to generate the data are published as part of this work.
To fully leverage the advantages of large-scale pre-trained language models (PLMs) on downstream tasks, it has become a ubiquitous adaptation paradigm to fine-tune all parameters of a PLM. However, because of the large number of parameters in PLMs, this paradigm leads to inefficient updating and excessive resource consumption when fine-tuning in data-scarce and resource-limited scenarios. To alleviate these concerns, we propose HiFi, a parameter-efficient fine-tuning method in which only the attention heads that are highly informative and strongly correlated for the specific task are fine-tuned. To identify these significant attention heads, we develop a novel framework for analyzing head effectiveness. Specifically, we first model the relationships between heads as a graph from the two perspectives of information richness and correlation, and then apply the PageRank algorithm to determine the relative importance of each head. Extensive experiments on the GLUE benchmark demonstrate the effectiveness of our method and show that HiFi achieves state-of-the-art performance compared with prior baselines.
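The head-ranking step can be illustrated with a plain PageRank power iteration over a head-relationship graph; the adjacency matrix below is a random placeholder standing in for the information-richness and correlation measures, and the head count and selection size are assumptions.

```python
import numpy as np

# Rank attention heads with PageRank over a head-relationship graph.
# A is a (heads x heads) non-negative adjacency matrix; in the method it
# would encode information richness and correlation between heads.

def pagerank(A, damping=0.85, iters=100):
    n = A.shape[0]
    # Column-normalize so each column sums to one (a stochastic matrix).
    M = A / A.sum(axis=0, keepdims=True)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * M @ r
    return r

num_heads = 144  # e.g., 12 layers x 12 heads in a BERT-base-sized model
A = np.random.default_rng(0).random((num_heads, num_heads))
scores = pagerank(A)
top_heads = np.argsort(scores)[::-1][:24]  # heads selected for fine-tuning
```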
Achieving high-quality semantic segmentation predictions using only image-level labels enables a new level of real-world applicability. Although state-of-the-art networks deliver reliable predictions, the amount of handcrafted pixel-wise annotation required to achieve these results is not feasible in many real-world applications. Hence, several works have already targeted this bottleneck, using classifier-based networks like Class Activation Maps~\cite{CAM} (CAMs) as a base. To address CAM's weaknesses of fuzzy borders and incomplete predictions, state-of-the-art approaches rely only on adding regularization terms to the classifier loss or on pixel-similarity-based refinement after the fact. We propose a framework that introduces an additional module using object perimeters for improved saliency. We define object perimeter information as the line separating the object and the background. Our new PerimeterFit module pre-refines the CAM predictions before the pixel-similarity-based network is applied. In this way, PerimeterFit increases the quality of the CAM prediction while simultaneously reducing the false-negative rate. We investigate a wide range of state-of-the-art unsupervised semantic segmentation networks and edge detection techniques to create useful perimeter maps, which enable our framework to predict object locations with sharper perimeters. We achieve an improvement of up to 1.5% over frameworks without our PerimeterFit module. We conduct an exhaustive analysis to illustrate that SILOP enhances existing state-of-the-art frameworks for image-level-based semantic segmentation. The framework is open-source and accessible online at //github.com/ErikOstrowski/SILOP.
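The following sketch illustrates the general idea of pre-refining a CAM with a perimeter map; it is not the PerimeterFit formulation, and the morphological operations and toy inputs are assumptions for demonstration only.

```python
import numpy as np
from scipy import ndimage

# Illustrative CAM pre-refinement with a perimeter map: the edge map is
# morphologically closed and filled to obtain a foreground mask, which
# then suppresses CAM activations that leak across the object boundary.

def refine_cam(cam, edge_map, iterations=2):
    # cam: (H, W) activation in [0, 1]; edge_map: (H, W) binary edges.
    closed = ndimage.binary_closing(edge_map, iterations=iterations)
    mask = ndimage.binary_fill_holes(closed)
    return cam * mask

cam = np.random.default_rng(0).random((64, 64))
edges = np.zeros((64, 64), dtype=bool)
edges[16, 16:49] = edges[48, 16:49] = True   # a toy rectangular perimeter
edges[16:49, 16] = edges[16:49, 48] = True
refined = refine_cam(cam, edges)
```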
Motivated by the mathematical modeling of tumor invasion in healthy tissues, we propose a generalized compressible diphasic Navier-Stokes Cahn-Hilliard model that we name G-NSCH. We assume that the two phases of the fluid represent two different populations of cells: cancer cells and healthy tissue. We include possible friction and proliferation effects in our model. The model aims to be as general as possible in order to study the mechanical effects that may play a role in the invasive growth of a tumor. In the present work, we focus on the analysis and numerical simulation of the G-NSCH model. Our G-NSCH system is derived rigorously and satisfies the basic principles of fluid mechanics and thermodynamics. Under simplifying assumptions, we prove the existence of global weak solutions. We also propose a structure-preserving numerical scheme based on the scalar auxiliary variable method to simulate our system, and we present numerical simulations validating the properties of the scheme and illustrating the solutions of the G-NSCH model.
Modern radio telescopes such as the Square Kilometre Array (SKA) will generate exabyte-scale data sets daily. These massive data sets are a source of unknown and rare astrophysical phenomena that lead to discoveries, but exploiting them is only feasible with intensive machine intelligence complementing human-aided and traditional statistical techniques. Recently, there has been a surge in scientific publications focusing on the use of artificial intelligence in radio astronomy, addressing challenges such as source extraction, morphological classification, and anomaly detection. This study presents a succinct but comprehensive review of the application of machine intelligence techniques to radio images, with emphasis on the morphological classification of radio galaxies. It aims to present a detailed synthesis of the relevant papers, summarizing the literature based on data complexity, data pre-processing, and methodological novelty in radio astronomy. The rapid advancement and application of computer intelligence in radio astronomy has brought about a paradigm shift in the automation of daunting data processes. However, the optimal exploitation of artificial intelligence in radio astronomy calls for continued collaborative efforts in the creation of annotated data sets. Additionally, in order to quickly locate radio galaxies with similar or dissimilar physical characteristics, it is necessary to index the identified radio sources. This issue has not yet been adequately addressed in the literature, making it an open area for further study.
The full deployment of autonomous driving systems on a worldwide scale requires that the self-driving vehicle be operated in a provably safe manner, i.e., the vehicle must be able to avoid collisions in any possible traffic situation. In this paper, we propose a framework based on Model Predictive Control (MPC) that endows the self-driving vehicle with the necessary safety guarantees. In particular, our framework ensures constraint satisfaction at all times while tracking the reference trajectory as closely as obstacles allow, resulting in safe and comfortable driving behavior. To assess the performance and real-time capability of our framework, we first provide an illustrative simulation example and then demonstrate its effectiveness in experiments with a real test vehicle.
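A minimal linear MPC sketch (not the paper's formulation) shows how constraint satisfaction and reference tracking are combined in one optimization; the double-integrator model, horizon, weights, and obstacle constraint are illustrative assumptions, and cvxpy is used to solve the resulting quadratic program.

```python
import cvxpy as cp
import numpy as np

# Toy MPC: a double integrator tracks a reference position while a hard
# state constraint keeps it behind a static obstacle boundary.

dt, N = 0.1, 20
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [dt]])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = np.array([0.0, 5.0])               # current state
p_ref, p_obstacle = 30.0, 25.0          # reference and obstacle position

cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.square(x[0, k] - p_ref) + 0.1 * cp.square(u[0, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    x[0, k + 1] <= p_obstacle,   # never pass the obstacle
                    cp.abs(u[0, k]) <= 3.0]      # actuator limits
cp.Problem(cp.Minimize(cost), constraints).solve()
u_apply = u.value[:, 0]  # apply only the first input, then re-plan
```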
Modern shock-capturing schemes often suffer from numerical shock anomalies when the flow field contains strong shocks, which may limit their further application in hypersonic flow computations. In the current study, we explore the primary numerical characteristics and the underlying mechanism of shock instability for second-order finite-volume schemes. To this end, we develop, for the first time, a matrix stability analysis method for the finite-volume MUSCL approach. This linearized analysis method makes it possible to investigate the shock instability problem of finite-volume shock-capturing schemes in a quantitative and efficient manner. Results of the stability analysis demonstrate that the shock stability of second-order schemes is strongly related to the Riemann solver, the Mach number, the limiter function, the numerical shock structure, and the computational grid. Unique stability characteristics associated with these factors for second-order methods are revealed quantitatively with the established method. The source location of the instability is also clarified by the matrix stability analysis method: the results show that the shock instability originates from the numerical shock structure. These conclusions pave the way toward a better understanding of the shock instability problem and may shed new light on developing more reliable shock-capturing methods for compressible flows at high Mach numbers.
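A generic sketch of matrix stability analysis, assuming a user-supplied semi-discrete residual function: the residual is linearized numerically about a base state and the spectrum of the resulting stability matrix is inspected. The scheme-specific MUSCL residual from the paper is not reproduced here.

```python
import numpy as np

# Linearize the semi-discrete residual R(U) about a steady base state U0
# by finite differences and inspect the spectrum of dR/dU. Eigenvalues
# with a positive real part indicate exponentially growing perturbations
# (shock instability).

def stability_matrix(residual, U0, eps=1e-7):
    n = U0.size
    R0 = residual(U0)
    S = np.zeros((n, n))
    for j in range(n):
        dU = np.zeros(n)
        dU[j] = eps
        S[:, j] = (residual(U0 + dU) - R0) / eps
    return S

def is_stable(residual, U0):
    eigvals = np.linalg.eigvals(stability_matrix(residual, U0))
    return np.max(eigvals.real) <= 0.0
```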
Due to their inherent capability to semantically align aspects with their context words, attention mechanisms and Convolutional Neural Networks (CNNs) are widely applied to aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactical constraints and long-range word dependencies, and hence may mistakenly recognize syntactically irrelevant contextual words as clues for judging aspect sentiment. To tackle this problem, we propose to build a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactical information and word dependencies. On this basis, a novel aspect-specific sentiment classification framework is proposed. Experiments on three benchmark collections show that our proposed model is comparable in effectiveness to a range of state-of-the-art models and further demonstrate that both syntactical information and long-range word dependencies are properly captured by the graph convolution structure.
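A minimal single-layer graph convolution over a dependency-tree adjacency matrix, as an illustration rather than the paper's exact architecture; the word features and dependency arcs in the toy example are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# `adj` is the symmetric adjacency matrix of the dependency tree with
# self-loops; `h` holds contextual word representations (e.g., from a
# BiLSTM or BERT encoder).

class GraphConvLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, adj):
        # Normalize by node degree, aggregate neighbors, then transform.
        degree = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        h_agg = adj @ h / degree
        return F.relu(self.linear(h_agg))

# Toy example: a 5-word sentence with 300-dimensional word features.
h = torch.randn(5, 300)
adj = torch.eye(5)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:   # dependency arcs
    adj[i, j] = adj[j, i] = 1.0
layer = GraphConvLayer(300)
h_out = layer(h, adj)
```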