This study proposes a method for qualitatively evaluating and designing human-like driver models for autonomous vehicles. While most existing research on human-likeness has focused on quantitative evaluation, it is crucial to consider qualitative measures to accurately capture human perception. To this end, we conducted surveys using both a video-based study and a human experience-based study. The findings of this research can significantly contribute to the development of naturalistic and human-like driver models for autonomous vehicles, enabling them to safely and efficiently coexist with human-driven vehicles in diverse driving scenarios.
Large foundation models, known for their strong zero-shot generalization, have excelled in visual and language applications. However, applying them to medical image segmentation, a domain with diverse imaging types and target labels, remains an open challenge. Current approaches, such as adapting interactive segmentation models like the Segment Anything Model (SAM), require user prompts for each sample during inference. Alternatively, transfer learning methods like few/one-shot models demand labeled samples, leading to high costs. This paper introduces a new paradigm toward universal medical image segmentation, termed 'One-Prompt Segmentation.' One-Prompt Segmentation combines the strengths of one-shot and interactive methods. In the inference stage, with just \textbf{one prompted sample}, it can adeptly handle an unseen task in a single forward pass. We train the One-Prompt Model on 64 open-source medical datasets, accompanied by a collection of over 3,000 clinician-labeled prompts. Tested on 14 previously unseen datasets, the One-Prompt Model showcases superior zero-shot segmentation capabilities, outperforming a wide range of related methods. The code and data are released at //github.com/KidsWithTokens/one-prompt.
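To make the inference pattern above concrete, the following is a minimal, hypothetical sketch of conditioning a single forward pass on one prompted (image, mask) pair. The toy architecture, tensor shapes, and conditioning scheme are assumptions for illustration only, not the authors' One-Prompt Model.

```python
# Illustrative only: a tiny network that segments a query image conditioned on
# one prompted (image, mask) template pair in a single forward pass.
import torch
import torch.nn as nn

class ToyOnePromptNet(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.query_enc = nn.Conv2d(1, channels, 3, padding=1)   # encodes the query image
        self.prompt_enc = nn.Conv2d(2, channels, 3, padding=1)  # encodes prompt image + mask
        self.head = nn.Conv2d(channels, 1, 1)

    def forward(self, query_img, prompt_img, prompt_mask):
        q = torch.relu(self.query_enc(query_img))
        p = torch.relu(self.prompt_enc(torch.cat([prompt_img, prompt_mask], dim=1)))
        # Condition query features on a pooled prompt embedding.
        cond = q * p.mean(dim=(2, 3), keepdim=True)
        return torch.sigmoid(self.head(cond))   # per-pixel foreground probability

model = ToyOnePromptNet().eval()
query = torch.rand(1, 1, 64, 64)                 # image from an unseen task
prompt_img = torch.rand(1, 1, 64, 64)            # the one prompted sample
prompt_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
with torch.no_grad():
    seg = model(query, prompt_img, prompt_mask)  # single forward pass
print(seg.shape)                                 # torch.Size([1, 1, 64, 64])
```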
Public acceptance of conditionally automated vehicles is a crucial step toward the realization of smart cities. Prior research in Europe has shown that hedonic motivation, social influence, and performance expectancy, in decreasing order of importance, influence acceptance, and that acceptance of the technology is generally positive. However, there is a lack of information regarding the public acceptance of conditionally automated vehicles in the United States. In this study, we carried out a web-based experiment in which participants were provided information about the technology and then completed a questionnaire on their perceptions. The collected data were analyzed using PLS-SEM to examine the factors that may lead to public acceptance of the technology in the United States. Our findings showed that social influence, performance expectancy, effort expectancy, hedonic motivation, and facilitating conditions determine conditionally automated vehicle acceptance. Additionally, certain factors were found to influence the perception of how useful the technology is, the effort required to use it, and the facilitating conditions for its use. By integrating the insights gained from this study, stakeholders can better facilitate the adoption of autonomous vehicle technology, contributing to safer, more efficient, and more user-friendly transportation systems that help realize the vision of the smart city.
Recently, prototype learning has emerged in semi-supervised medical image segmentation and achieved remarkable performance. However, the scarcity of labeled data limits the expressiveness of prototypes in previous methods, potentially hindering the complete representation of prototypes for class embedding. To address this problem, we propose the Mixed Prototype Consistency Learning (MPCL) framework, which includes a Mean Teacher and an auxiliary network. The Mean Teacher generates prototypes for labeled and unlabeled data, while the auxiliary network produces additional prototypes for mixed data processed by CutMix. Through prototype fusion, mixed prototypes provide extra semantic information to both labeled and unlabeled prototypes. High-quality global prototypes for each class are formed by fusing two enhanced prototypes, optimizing the distribution of hidden embeddings used in consistency learning. Extensive experiments on the left atrium and type B aortic dissection datasets demonstrate MPCL's superiority over previous state-of-the-art approaches, confirming the effectiveness of our framework. The code will be released soon.
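As a rough illustration of the prototype machinery described above (not the authors' code), the sketch below computes class prototypes by masked average pooling and fuses a prototype from the labeled branch with one from a CutMix-style mixed batch; the fusion weight `alpha` and all tensors are assumed placeholders, and the paper's exact fusion rule may differ.

```python
# Masked average pooling to obtain a class prototype, plus a simple fusion of
# two prototypes into a global prototype.
import torch

def masked_average_prototype(features, mask, eps=1e-6):
    """features: (B, C, H, W); mask: (B, 1, H, W) soft or hard class mask."""
    num = (features * mask).sum(dim=(0, 2, 3))
    den = mask.sum(dim=(0, 2, 3)) + eps
    return num / den                          # (C,) prototype vector for this class

feats_labeled = torch.rand(2, 32, 16, 16)
feats_mixed = torch.rand(2, 32, 16, 16)       # e.g. features of a CutMix-processed batch
mask_labeled = (torch.rand(2, 1, 16, 16) > 0.5).float()
mask_mixed = (torch.rand(2, 1, 16, 16) > 0.5).float()

proto_labeled = masked_average_prototype(feats_labeled, mask_labeled)
proto_mixed = masked_average_prototype(feats_mixed, mask_mixed)
alpha = 0.5                                   # hypothetical fusion weight
global_proto = alpha * proto_labeled + (1 - alpha) * proto_mixed
print(global_proto.shape)                     # torch.Size([32])
```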
Snake robots offer considerable potential for endoscopic interventions due to their ability to follow curvilinear paths. Telemanipulation is an open problem due to hyper-redundancy, as input devices only allow the specification of six degrees of freedom. Our work addresses this by presenting a unified telemanipulation strategy that enables follow-the-leader locomotion and reorientation while keeping the shape change as small as possible. The basis for this is a novel shape-fitting approach that solves the inverse kinematics in only a few milliseconds. Shape fitting is performed by maximizing the similarity of two curves using the Fr\'echet distance while simultaneously specifying the position and orientation of the end effector. Telemanipulation performance is investigated in a study in which 14 participants controlled a simulated snake robot to locomote into a target area. In a final validation, pivot reorientation within the target area is addressed.
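For reference, the curve-similarity measure used in the shape fitting is the Fr\'echet distance; a standalone sketch of its discrete variant is given below. The sampled curves are illustrative placeholders, and the paper's actual optimization and kinematics are not reproduced.

```python
# Discrete Fr\'echet distance between two sampled curves via dynamic programming.
import numpy as np

def discrete_frechet(P, Q):
    """P, Q: (n, d) and (m, d) arrays of curve sample points."""
    n, m = len(P), len(Q)
    ca = np.zeros((n, m))
    d = lambda i, j: np.linalg.norm(P[i] - Q[j])
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                ca[i, j] = d(0, 0)
            elif i == 0:
                ca[i, j] = max(ca[0, j - 1], d(0, j))
            elif j == 0:
                ca[i, j] = max(ca[i - 1, 0], d(i, 0))
            else:
                ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d(i, j))
    return ca[n - 1, m - 1]

# Example: a desired backbone curve vs. a candidate robot shape.
t = np.linspace(0, 1, 50)
desired = np.stack([t, np.sin(2 * np.pi * t)], axis=1)
candidate = np.stack([t, np.sin(2 * np.pi * t + 0.1)], axis=1)
print(discrete_frechet(desired, candidate))
```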
Variable stiffness actuator (VSA) designs are diverse. Conventional model-based control of these nonlinear systems entails high effort and design-dependent assumptions. In contrast, machine learning offers a promising alternative, as models are trained on real measured data and nonlinearities are inherently taken into account. Our work presents a universal, learning-based approach for position and stiffness control of soft actuators. After introducing a soft pneumatic VSA, the model is learned from input-output data. For this purpose, a test bench was set up that enables automated measurement of the variable joint stiffness. During control, Gaussian processes are used to predict the pressures required to achieve the desired position and stiffness. The feedforward error is on average 11.5% of the total pressure range and is compensated by feedback control. Experiments with the soft actuator show that the learning-based approach allows continuous adjustment of position and stiffness without model knowledge.
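The feedforward idea above can be sketched as a Gaussian process that maps a desired (position, stiffness) pair to chamber pressures, as below with scikit-learn. The synthetic data, units, and kernel choice are assumptions standing in for the paper's test-bench measurements.

```python
# GP-based feedforward sketch: desired (position, stiffness) -> chamber pressures.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Synthetic training data: inputs = [position (deg), stiffness (Nm/rad)],
# outputs = two chamber pressures (bar).
X = rng.uniform([-30.0, 0.1], [30.0, 1.0], size=(200, 2))
Y = np.column_stack([
    0.5 + 0.01 * X[:, 0] + 0.3 * X[:, 1],
    0.5 - 0.01 * X[:, 0] + 0.3 * X[:, 1],
]) + 0.01 * rng.standard_normal((200, 2))

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[10.0, 0.3]) + WhiteKernel(1e-4),
    normalize_y=True,
).fit(X, Y)

desired = np.array([[15.0, 0.6]])   # desired position and stiffness
p_ff = gp.predict(desired)          # feedforward pressures; feedback would correct the residual
print(p_ff)
```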
Image recognition techniques heavily rely on abundant labeled data, particularly in medical contexts. To address the challenges of obtaining labeled data, self-supervised learning and semi-supervised learning have gained prominence, especially in scenarios with limited annotated data. In this paper, we propose an innovative approach that integrates self-supervised learning into semi-supervised models to enhance medical image recognition. Our method begins with pre-training on unlabeled data using BYOL. Subsequently, we merge pseudo-labeled and labeled datasets to construct a neural network classifier, refining it through iterative fine-tuning. Experimental results on three different datasets demonstrate that our approach optimally leverages unlabeled data, outperforming existing methods in terms of accuracy for medical image recognition.
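A hedged, high-level sketch of this pipeline is shown below: it assumes a backbone already pre-trained with BYOL (replaced here by a stand-in encoder), generates confident pseudo-labels for unlabeled data, merges them with the labeled set, and fine-tunes iteratively. The confidence threshold, model, and data are illustrative assumptions, not the paper's configuration.

```python
# Pseudo-labeling + iterative fine-tuning on top of a (stand-in) pre-trained encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU())  # stand-in for a BYOL backbone
classifier = nn.Linear(128, 3)
model = nn.Sequential(encoder, classifier)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x_labeled = torch.rand(64, 32 * 32); y_labeled = torch.randint(0, 3, (64,))
x_unlabeled = torch.rand(256, 32 * 32)

for round_idx in range(3):                       # iterative fine-tuning rounds
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > 0.9                        # keep only confident pseudo-labels (assumed threshold)
    x_train = torch.cat([x_labeled, x_unlabeled[keep]])
    y_train = torch.cat([y_labeled, pseudo[keep]])
    for _ in range(10):                          # a few fine-tuning steps per round
        opt.zero_grad()
        loss = F.cross_entropy(model(x_train), y_train)
        loss.backward()
        opt.step()
print(float(loss))
```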
Efficient inference in high-dimensional models remains a central challenge in machine learning. This paper introduces the Gaussian Ensemble Belief Propagation (GEnBP) algorithm, a fusion of the Ensemble Kalman filter and Gaussian belief propagation (GaBP) methods. GEnBP updates ensembles by passing low-rank local messages in a graphical model structure. This combination inherits favourable qualities from each method. Ensemble techniques allow GEnBP to handle high-dimensional states, parameters and intricate, noisy, black-box generation processes. The use of local messages in a graphical model structure ensures that the approach is suited to distributed computing and can efficiently handle complex dependence structures. GEnBP is particularly advantageous when the ensemble size is considerably smaller than the inference dimension. This scenario often arises in fields such as spatiotemporal modelling, image processing and physical model inversion. GEnBP can be applied to general problem structures, including jointly learning system parameters, observation parameters, and latent state variables.
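The following is not GEnBP itself, but a small numpy sketch of the ensemble idea it builds on: an ensemble represents a Gaussian belief through a low-rank anomaly factor, so Kalman-style conditioning can be performed without ever forming a dense covariance. The dimensions and observation model are arbitrary assumptions.

```python
# Low-rank ensemble representation of a Gaussian belief and an
# ensemble-Kalman-style update using only the anomaly factor.
import numpy as np

rng = np.random.default_rng(1)
d, k = 500, 32                              # state dimension >> ensemble size
ensemble = rng.standard_normal((k, d))      # k samples of a d-dimensional state

mean = ensemble.mean(axis=0)
A = (ensemble - mean) / np.sqrt(k - 1)      # anomaly factor: covariance ~ A.T @ A (rank <= k)

# Condition on a linear observation y = H x + noise.
H = np.zeros((3, d)); H[0, 0] = H[1, 10] = H[2, 100] = 1.0
R = 0.1 * np.eye(3)
y = np.array([1.0, -0.5, 0.2])

HA = A @ H.T                                # (k, 3) observed anomalies
S = HA.T @ HA + R                           # innovation covariance (3 x 3)
K = A.T @ (HA @ np.linalg.inv(S))           # gain P H^T S^{-1} built from low-rank factors
updated_mean = mean + K @ (y - H @ mean)
print(updated_mean[:3])
```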
We present VeriX, a first step towards verified explainability of machine learning models in safety-critical applications. Specifically, our sound and optimal explanations can guarantee prediction invariance against bounded perturbations. We utilise constraint solving techniques together with feature sensitivity ranking to efficiently compute these explanations. We evaluate our approach on image recognition benchmarks and a real-world scenario of autonomous aircraft taxiing.
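The loop below schematically mirrors the described procedure: rank features by sensitivity, then greedily test whether each feature can be left free to vary within a bounded perturbation while the prediction stays unchanged; features that cannot be freed form the explanation. The `check_invariance` stub here is a random search rather than a sound constraint-solver call, so the whole sketch is purely illustrative and not the VeriX implementation.

```python
import numpy as np

def sensitivity_ranking(model, x, delta=0.05):
    """Rank features by how much a small perturbation changes the model output."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] += delta
        scores.append(abs(model(x_pert) - base))
    return np.argsort(scores)                     # least sensitive first

def check_invariance(model, x, free_set, eps, samples=200):
    """Stand-in for a sound solver query: random search for a counterexample."""
    rng = np.random.default_rng(0)
    base_label = model(x) > 0.5
    idx = list(free_set)
    for _ in range(samples):
        x_pert = x.copy()
        x_pert[idx] += rng.uniform(-eps, eps, size=len(idx))
        if (model(x_pert) > 0.5) != base_label:
            return False                          # counterexample found: not invariant
    return True

def explanation(model, x, eps=2.0):
    free, expl = set(), set()
    for i in sensitivity_ranking(model, x):
        if check_invariance(model, x, free | {i}, eps):
            free.add(i)                           # feature irrelevant within the eps ball
        else:
            expl.add(i)                           # feature must stay fixed: part of the explanation
    return expl

# Toy model: logistic regression on four features.
w = np.array([3.0, 0.0, -2.0, 0.1])
model = lambda x: 1.0 / (1.0 + np.exp(-w @ x))
x = np.array([1.0, 0.5, -1.0, 0.2])
print(sorted(explanation(model, x)))              # feature indices the prediction depends on
```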
We study the problem of multi-agent control of a dynamical system with known dynamics and adversarial disturbances. Rather than relying on centralized precomputed policies, we focus on adaptive control policies for the individual agents, each equipped only with a stabilizing controller. We give a reduction from any (standard) regret-minimizing control method to a distributed algorithm. The reduction guarantees that the resulting distributed algorithm has low regret relative to the optimal precomputed joint policy. Our methodology involves generalizing online convex optimization to a multi-agent setting and applying recent tools from nonstochastic control derived for a single agent. We empirically evaluate our method on a model of an overactuated aircraft and show that the distributed method is robust to failures and to adversarial perturbations in the dynamics.
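As a toy illustration of the multi-agent online convex optimization viewpoint the reduction builds on (not the paper's algorithm), the sketch below has each agent run an online gradient step on a shared, adversarially chosen quadratic cost, using only its own block of the joint action.

```python
# Multi-agent online gradient descent on a shared convex cost.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, T, lr = 3, 2, 200, 0.1
actions = [np.zeros(dim) for _ in range(n_agents)]     # each agent's local decision block
avg_cost = 0.0

for t in range(T):
    target = rng.uniform(-1, 1, size=n_agents * dim)   # adversarially chosen cost parameter
    joint = np.concatenate(actions)
    avg_cost += 0.5 * np.sum((joint - target) ** 2) / T  # shared convex cost at round t
    for i in range(n_agents):
        block = slice(i * dim, (i + 1) * dim)
        grad_i = joint[block] - target[block]           # gradient w.r.t. agent i's block only
        actions[i] = actions[i] - lr * grad_i           # local online gradient step

print(avg_cost)
```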
Human-in-the-loop learning aims to train an accurate prediction model at minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing work on human-in-the-loop from a data perspective and classify it into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance through interventional model training, and (3) the design of independent human-in-the-loop systems. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and we briefly classify and discuss them in natural language processing, computer vision, and other domains. In addition, we outline open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop learning and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
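One common human-in-the-loop pattern from the first category (humans supplying training data inside the pipeline) is uncertainty-based active learning; the generic sketch below uses a simulated oracle in place of a human annotator and is not tied to any specific system from the survey.

```python
# Uncertainty-based active learning with a simulated human oracle.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
y_true = (X @ rng.standard_normal(5) > 0).astype(int)   # stands in for the human oracle

# Seed the labeled pool with a few examples of each class.
labeled = list(np.where(y_true == 0)[0][:5]) + list(np.where(y_true == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]

for _ in range(5):                                       # human-in-the-loop rounds
    clf = LogisticRegression().fit(X[labeled], y_true[labeled])
    probs = clf.predict_proba(X[unlabeled])[:, 1]
    most_uncertain = np.argsort(np.abs(probs - 0.5))[:20]
    queries = [unlabeled[i] for i in most_uncertain]     # send these to the "human" for labels
    labeled += queries
    unlabeled = [i for i in unlabeled if i not in set(queries)]

print(clf.score(X, y_true))
```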