We present the DAVE Aquatic Virtual Environment (DAVE), an open-source simulation stack for underwater robots, sensors, and environments. Conventional robotics simulators are not designed to address the unique challenges of the marine environment, including but not limited to environmental conditions that vary spatially and temporally, impaired or challenging perception, and the unavailability of data in a largely unexplored environment. Given the variety of sensors and platforms, the wheel is often reinvented for specific use cases, which inevitably resists wider adoption. Building on existing simulators, we provide a framework to help speed up the development and evaluation of algorithms that would otherwise require expensive and time-consuming operations at sea. The framework includes basic building blocks (e.g., new vehicles, a water-tracking Doppler velocity log (DVL), physics-based multibeam sonar) as well as development tools (e.g., dynamic bathymetry spawning, ocean currents), which allow the user to focus on methodology rather than software infrastructure. We demonstrate usage through example scenarios, bathymetric data import, user interfaces for data inspection and motion planning for manipulation, and visualizations.
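The abstract does not detail how the time-varying ocean currents are modeled; as a minimal sketch, assuming the first-order Gauss-Markov process commonly used in the underwater-simulator lineage DAVE builds on (all names and default values below are illustrative assumptions, not DAVE's actual API):

```python
import random

# Illustrative sketch: a time-varying ocean current magnitude modeled as a
# first-order Gauss-Markov process. Parameter names and defaults are
# assumptions for illustration, not DAVE's interface.
def step_current(v, dt, mu=0.02, mean=0.3, noise_std=0.05, v_min=0.0, v_max=1.0):
    """Advance the current velocity v [m/s] by one time step dt [s].

    mu is the inverse correlation time; mu = 0 degenerates to a random walk.
    """
    dv_dt = -mu * (v - mean) + random.gauss(0.0, noise_std)
    return min(max(v + dv_dt * dt, v_min), v_max)  # clamp to physical bounds

# Example: simulate 60 s of current at 10 Hz
v = 0.3
for _ in range(600):
    v = step_current(v, dt=0.1)
```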
Understanding and modelling children's cognitive processes and behaviour in the context of their interaction with robots and social artificial intelligence systems is a fundamental prerequisite for meaningful and effective robot interventions. However, children's development involves complex faculties such as exploration, creativity, and curiosity, which are challenging to model. Moreover, children often express themselves in a playful way that differs from typical adult behaviour. Different children also have different needs, and it remains a challenge in the current state of the art that those of neurodiverse children are under-addressed. With this workshop, we aim to promote common ground among disciplines such as developmental sciences, artificial intelligence, and social robotics, and to discuss cutting-edge research in the area of user modelling and adaptive systems for children.
The necessity to deal with uncertain data is a major challenge in decision making. Robust optimization has emerged as one of the predominant paradigms for producing solutions that hedge against uncertainty. To obtain an even more realistic description of the underlying problem, in which the decision maker can react to newly disclosed information, multistage models can be used. However, due to their computational difficulty, multistage problems beyond two stages have received less attention and are often addressed only with approximation rather than optimization schemes. Even less attention is paid to decision-dependent uncertainty in a multistage setting. We explore multistage robust optimization via quantified linear programs, which are linear programs with ordered variables that are either existentially or universally quantified. Building upon a (mostly) discrete setting in which the uncertain parameters -- the universally quantified variables -- are restricted only by their bounds, we present an augmented version that allows stating the discrete uncertainty set via a linear constraint system that can also be affected by decision variables. We present a general search-based solution approach and introduce our solver Yasol, which can deal with multistage robust linear discrete optimization problems with final mixed-integer recourse actions and a discrete uncertainty set that may even be decision-dependent. In doing so, we provide a convenient model-and-run approach that can serve as a baseline for computational experiments in the field of multistage robust optimization, providing optimal solutions for problems with an arbitrary number of decision stages.
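To make the problem class concrete, a three-stage quantified program with decision-dependent uncertainty can be sketched as follows; the matrices and the linear form of the dependency are illustrative assumptions, not the paper's exact formulation:

```latex
% First-stage decision x, adversarial (universally quantified) u drawn from an
% uncertainty set U(x) that depends on x, and mixed-integer recourse y:
\min_{x \in X} \; \max_{u \in U(x)} \; \min_{y \in Y} \; c^\top x + d^\top y
\quad \text{s.t.} \quad A x + B u + C y \le b,
\qquad
U(x) = \{\, u \in \mathbb{Z}^m : \underline{u} \le u \le \overline{u},\; D u \le e + E x \,\}.
```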
Reinforcement learning (RL) is already widely applied in domains such as robotics, but it is only sparsely used in sensor management. In this paper, we apply the popular Proximal Policy Optimization (PPO) approach to a multi-agent UAV tracking scenario. While recorded data of real scenarios can accurately reflect the real world, the required amount of data is not always available. Simulation data, by contrast, is typically cheap to generate, but the modeled target behavior is often naive and only vaguely represents the real world. We therefore use multi-agent RL to jointly generate protagonist and antagonist policies, overcoming the data generation problem because the policies are generated on the fly and adapt continuously. In this way, we clearly outperform baseline methods and robustly generate competitive policies. In addition, we investigate explainable artificial intelligence (XAI) by interpreting feature saliency and by generating an easy-to-read decision tree as a simplified policy.
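The abstract does not detail how the simplified tree policy is obtained; a minimal sketch, assuming the trained PPO policy can be queried for actions on state feature vectors (both stubbed below), is to distill rollouts into a shallow decision tree:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Stub standing in for the trained PPO tracking policy (an assumption, not the
# paper's model): maps a state feature vector to a discrete action.
def ppo_policy(state: np.ndarray) -> int:
    return int(state[0] > state[1])  # placeholder decision rule

# Collect (state, action) pairs by querying the policy on sampled states.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(5000, 4))  # e.g., relative UAV/target features
actions = np.array([ppo_policy(s) for s in states])

# Distill into a shallow, easy-to-read decision tree and print its rules.
tree = DecisionTreeClassifier(max_depth=3).fit(states, actions)
print(export_text(tree, feature_names=[f"f{i}" for i in range(4)]))
```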
Model-based and learning-based methods are the two major methodologies for modeling car-following behavior. Model-based methods describe car-following behavior with explicit mathematical equations, while learning-based methods focus on learning a mapping from inputs to outputs. Both types of methods have advantages and weaknesses. Meanwhile, most car-following models are generative and consider only the speed, position, and acceleration of the last time step as inputs. To address these issues, this study proposes a novel framework, IDM-Follower, that generates the following vehicle's trajectory sequence with a recurrent autoencoder informed by a physical car-following model, the Intelligent Driver Model (IDM). We implement a novel structure with two independent encoders and a self-attention decoder that sequentially predicts the following trajectories. A loss function that combines the discrepancies between predictions and labeled data with the discrepancies from model-based predictions is used to update the neural network parameters. Numerical experiments with multiple settings on simulation and NGSIM datasets show that IDM-Follower improves prediction performance compared to model-based or learning-based methods alone. Analysis across different noise levels also shows the model's good robustness.
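For reference, the Intelligent Driver Model computes the follower's acceleration from its speed v, the approach rate Δv = v_follower − v_leader, and the gap s to the leader; a direct Python transcription with commonly used literature defaults (the parameter values are illustrative, not the paper's calibration):

```python
import math

def idm_acceleration(v, dv, s, v0=33.3, T=1.6, a_max=0.73, b=1.67, delta=4.0, s0=2.0):
    """Intelligent Driver Model acceleration [m/s^2].

    v:  follower speed [m/s]
    dv: approach rate v_follower - v_leader [m/s]
    s:  bumper-to-bumper gap to the leader [m]
    Defaults (desired speed v0, time headway T, max acceleration a_max,
    comfortable deceleration b, jam distance s0) are typical literature values.
    """
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))  # desired gap
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

# Example: follower at 20 m/s closing at 2 m/s with a 30 m gap
print(idm_acceleration(v=20.0, dv=2.0, s=30.0))
```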
We present RCareWorld, a human-centric simulation world for physical and social robotic caregiving, designed with input from stakeholders including care recipients, caregivers, occupational therapists, and roboticists. RCareWorld contains realistic human models of care recipients with mobility limitations and of caregivers, home environments with multiple levels of accessibility and assistive devices, and robots commonly used for caregiving. It interfaces with various physics engines to model the diverse material types necessary for simulating caregiving scenarios, and it provides the capability to plan, control, and learn both human and robot control policies by integrating with state-of-the-art external planning and learning libraries and with VR devices. We propose a set of realistic caregiving tasks in RCareWorld as a benchmark for physical robotic caregiving and provide baseline control policies for them. We illustrate RCareWorld's high-fidelity simulation capabilities by executing a policy learnt in simulation for one of these tasks on a real-world setup. Additionally, we perform a real-world social robotic caregiving experiment using behaviors modeled in RCareWorld. Robotic caregiving, though potentially impactful in enhancing the quality of life of care recipients and caregivers, is a field with many barriers to entry due to its interdisciplinary facets. RCareWorld takes a first step towards building a realistic simulation world for robotic caregiving that will enable researchers worldwide to contribute to this impactful field. Demo videos and supplementary materials can be found at: //emprise.cs.cornell.edu/rcareworld/.
In real life, decorating 3D indoor scenes by designing furniture layouts provides a rich experience for people. In this paper, we formulate the furniture layout task in virtual reality as a Markov decision process (MDP), which we solve with hierarchical reinforcement learning (HRL). The goal is to produce a proper two-furniture layout for virtual-reality indoor scenes. In particular, we first design a simulation environment and introduce the HRL formulation for a two-furniture layout. We then apply a hierarchical actor-critic algorithm with curriculum learning to solve the MDP. We conduct our experiments on a large-scale real-world interior layout dataset that contains industrial designs from professional designers. Our numerical results demonstrate that the proposed model yields higher-quality layouts than state-of-the-art models.
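As a sketch of the formulation (the split into a high-level furniture-selection policy and a low-level placement policy is our illustrative reading of the HRL setup, not the paper's exact design):

```latex
% MDP (S, A, P, r, \gamma): states s encode the room and the current furniture
% poses; the two-level policy factorizes the action choice, and training
% maximizes the expected discounted layout-quality reward.
\pi_{\mathrm{high}}(g \mid s), \qquad \pi_{\mathrm{low}}(a \mid s, g), \qquad
\max_{\pi}\; \mathbb{E}\!\left[\sum_{t=0}^{T} \gamma^{t}\, r(s_t, a_t)\right].
```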
The primary aim of this study is to examine students' perceptions of online learning during the COVID-19 pandemic. The pandemic has changed traditional concepts of the education system and disrupted the functioning of educational institutions, but it has also provided an opportunity to change pedagogy. This paper discusses students' opinions on online learning and virtual classroom learning. The study applied a qualitative approach, and a systematic questionnaire was prepared for data collection. The researcher collected data from 258 students from different places in India, using disproportionate sampling. The research mainly focused on students' perceptions; the comfort and discomfort of e-learning; the use of electronic devices for communication; whether virtual learning is a pleasure or a pressure for students; and students' digital skills and active performance. The study revealed that over 50 percent of the students have excellent digital skills. Students attend online classes on their personal computers, laptops, or phones. Teachers allow students to ask questions and clarify their doubts. The study found that, because of online classes, students lose social interaction with teachers and friends and cannot access the library. Finally, the students felt that online learning is a pressure rather than a pleasure.
Quantum software systems are an emerging software engineering (SE) genre that exploits the principles of quantum bits (qubits) and quantum gates (Qgates) to solve complex computing problems that today's classical computers cannot effectively solve in a reasonable time. According to its proponents, agile software development practices have the potential to address many of the problems endemic to the development of quantum software. However, there is a dearth of evidence confirming whether agile practices, as they are, suit and can be adopted by software teams in the context of quantum software development. To address this gap, we conducted an empirical study to investigate the needs and challenges of using agile practices to develop quantum software. While our semi-structured interviews with 26 practitioners across 10 countries highlighted the applicability of agile practices in this domain, the interview findings also revealed new challenges impeding the effective incorporation of these practices. Our research findings provide a springboard for further contextualization and seamless integration of agile practices into the development of the next generation of quantum software.
This article presents a novel telepresence system for advancing aerial manipulation in dynamic and unstructured environments. The proposed system features not only a haptic device but also a virtual reality (VR) interface that provides real-time 3D displays of the robot's workspace as well as haptic guidance to its remotely located operator. To realize this, multiple sensors, namely a LiDAR, cameras, and IMUs, are utilized. To process the acquired sensory data, pose estimation pipelines are devised for industrial objects of both known and unknown geometries. We further propose an active learning pipeline to increase the sample efficiency of a pipeline component that relies on deep neural network (DNN)-based object detection. Together, these algorithms address various challenges encountered during the execution of perception tasks in industrial scenarios. In the experiments, exhaustive ablation studies validate the proposed pipelines. Methodologically, these results suggest how an awareness of the algorithms' own failures and uncertainty ("introspection") can be used to tackle the encountered problems. Moreover, outdoor experiments evaluate the effectiveness of the overall system in enhancing aerial manipulation capabilities. In particular, with flight campaigns over days and nights, from spring to winter, and with different users and locations, we demonstrate over 70 robust executions of pick-and-place, force application, and peg-in-hole tasks with the DLR cable-Suspended Aerial Manipulator (SAM). These results show the viability of the proposed system for future industrial applications.
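The abstract does not spell out the acquisition criterion of the active learning pipeline; a minimal uncertainty-sampling sketch (the detector stub and the least-confidence score below are assumptions for illustration, not the paper's method) looks like:

```python
import numpy as np

# Stub standing in for the DNN-based object detector (an assumption):
# returns per-detection confidence scores for an image.
def detect(image) -> np.ndarray:
    rng = np.random.default_rng(hash(image) % 2**32)
    return rng.uniform(0.0, 1.0, size=3)

def least_confidence(image) -> float:
    """Higher value = detector is less sure = more informative to label."""
    scores = detect(image)
    return 1.0 - float(scores.max()) if scores.size else 1.0

# Pick the k most uncertain images from the unlabeled pool for annotation.
unlabeled_pool = [f"frame_{i:04d}.png" for i in range(1000)]
k = 20
to_label = sorted(unlabeled_pool, key=least_confidence, reverse=True)[:k]
print(to_label[:5])
```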
Mobile telepresence robots (MTRs) have become increasingly popular in the expanding world of remote work, providing new avenues for people to actively participate in activities at a distance. However, humans operating MTRs often have difficulty navigating densely populated environments due to limited situation awareness and a narrow field of view, which reduces user acceptance and satisfaction. Shared autonomy in navigation has been studied primarily in static environments or in situations where only one pedestrian interacts with the robot. We present a multimodal shared autonomy approach that leverages visual and haptic guidance to provide navigation assistance for remote operators in densely populated environments. It uses a modified form of reciprocal velocity obstacles to generate safe control inputs while taking social proxemics constraints into account. Two different visual guidance designs, as well as haptic force rendering, were proposed to convey the safe control input. We conducted a user study to compare the merits and limitations of multimodal navigation assistance against haptic or visual assistance alone on a shared navigation task. The study involved 15 participants operating a virtual telepresence robot in a virtual hall with moving pedestrians, using the different assistance modalities. We evaluated navigation performance, transparency, and cooperation, as well as user preferences. Our results showed that participants preferred multimodal assistance with a visual guidance trajectory over the haptic or visual modality alone, although it had no impact on navigation performance. Additionally, we found that visual guidance trajectories conveyed a higher degree of understanding and cooperation than equivalent haptic cues in a navigation task.
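As a minimal sketch of such a safety layer (a simplified, sampling-based velocity-obstacle check with a proxemics-inflated radius; the blending and sampling scheme are illustrative assumptions, not the paper's exact modified-RVO formulation):

```python
import numpy as np

ROBOT_R, PED_R, PROXEMICS = 0.4, 0.3, 0.5  # meters; proxemics inflates the margin
HORIZON = 4.0                               # collision-check time horizon [s]

def collides(p_rel, v_rel, radius, horizon=HORIZON):
    """True if the relative motion p_rel + t * v_rel comes within `radius`
    of the origin for some t in [0, horizon]."""
    a = float(np.dot(v_rel, v_rel))
    if a < 1e-9:
        return float(np.linalg.norm(p_rel)) < radius
    t = np.clip(-np.dot(p_rel, v_rel) / a, 0.0, horizon)  # time of closest approach
    return float(np.linalg.norm(p_rel + t * v_rel)) < radius

def safe_velocity(v_user, p_robot, peds):
    """Return the sampled velocity closest to the operator's input v_user that
    avoids every (position, velocity) pedestrian in `peds`; stop if none is safe."""
    radius = ROBOT_R + PED_R + PROXEMICS
    best, best_cost = np.zeros(2), np.inf
    for speed in np.linspace(0.0, 1.2, 7):
        for ang in np.linspace(-np.pi, np.pi, 24, endpoint=False):
            v = speed * np.array([np.cos(ang), np.sin(ang)])
            # Pedestrian-relative state: separation evolves as p_rel + t*(vp - v).
            if any(collides(p - p_robot, vp - v, radius) for p, vp in peds):
                continue
            cost = float(np.linalg.norm(v - v_user))
            if cost < best_cost:
                best, best_cost = v, cost
    return best

# Example: operator pushes forward while one pedestrian crosses from the right.
print(safe_velocity(np.array([1.0, 0.0]), np.zeros(2),
                    [(np.array([2.0, -1.0]), np.array([0.0, 0.5]))]))
```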