How do practitioners consider human-robot interaction during the development of new robotic systems? This study sets out to inquire whether the development teams of robotic products have been considering human factors methods in their design and implementation process. We were specifically interested in the non-verbal communication methods they aimed to implement, and how they approached the design process for these. Although valuable insights on tasks and communication needs during the different phases of robot operation could be gathered, the results of this study indicate that the perspective of the human user or bystander is very often neglected and that knowledge of methods for engineering human-robot interaction is missing. The study was conducted with eleven development teams, consisting of robot manufacturers and students in a robot-building course, representing 68 individual participants in total.
Human values, or what people hold important in their life, such as freedom, fairness, and social responsibility, often remain unnoticed and unattended during software development. Ignoring values can lead to value violations in software that can result in financial losses, reputation damage, and widespread social and legal implications. However, embedding human values in software is not only non-trivial but also a generally unclear process. Commencing as early as during the Requirements Engineering (RE) activities promises to ensure fit-for-purpose and quality software products that adhere to human values. But what is the impact of considering human values explicitly during early RE activities? To answer this question, we conducted a scenario-based survey where 56 software practitioners contextualised requirements analysis towards a proposed mobile application for the homeless and suggested values-laden software features accordingly. The suggested features were qualitatively analysed. Results show that explicit consideration of values can help practitioners identify applicable values, associate purpose with the features they develop, think outside the box, and build connections between software features and human values. Finally, drawing from the results and experiences of this study, we propose a scenario-based values elicitation process -- a simple four-step takeaway as a practical implication of this study.
Transitioning between topics is a natural component of human-human dialogue. Although topic transition has been studied in dialogue for decades, only a handful of corpus-based studies have investigated the subtleties of topic transitions. This study therefore annotates 215 conversations from the Switchboard corpus and investigates how variables such as conversation length, number of topic transitions, the share of topic transitions by participants, and turns per topic are related. This work presents an empirical study of topic transition in the Switchboard corpus, followed by modelling of topic transition with a precision of 83% on the in-domain (ID) test set and 82% on 10 out-of-domain (OOD) test sets. It is envisioned that this work will help in emulating human-like topic transitions in open-domain dialogue systems.
This paper describes the design and control of a support and recovery system for use with planar legged robots. The system operates in three modes. First, it can be operated in a fully transparent mode where no forces are applied to the robot. In this mode, the system follows the robot closely so that it can quickly catch the robot if needed. Second, it can provide a vertical supportive force to assist a robot during operation. Third, it can catch the robot and pull it away from the ground after a failure to avoid falls and the associated damage. In this mode, the system automatically resets the robot after a trial, allowing multiple consecutive trials to be run without manual intervention. The supportive forces are applied to the robot through an actuated cable and pulley system that uses series elastic actuation with a unidirectional spring to enable truly transparent operation. The nonlinear nature of this system necessitates careful design of controllers to ensure predictable, safe behaviors. In this paper, we introduce the mechatronic design of the recovery system, develop suitable controllers, and evaluate the system's performance on the bipedal robot RAMone.
A challenge in using robots in human-inhabited environments is to design behavior that is engaging, yet robust to the perturbations induced by human interaction. Our idea is to imbue the robot with intrinsic motivation (IM) so that it can handle new situations, appear as a genuine social other to humans, and thus be of more interest to a human interaction partner. Human-robot interaction (HRI) experiments mainly focus on scripted or teleoperated robots that mimic characteristics such as IM to control isolated behavior factors. This article presents a "robotologist" study design that allows comparing autonomously generated behaviors with each other and, for the first time, evaluates the human perception of IM-based generated behavior in robots. We conducted a within-subjects user study (N=24) where participants interacted with a fully autonomous Sphero BB8 robot with different behavioral regimes: one realizing an adaptive, intrinsically motivated behavior and the other being reactive, but not adaptive. The robot and its behaviors are intentionally kept minimal to concentrate on the effect induced by IM. A quantitative analysis of post-interaction questionnaires showed a significantly higher perception of the dimension "Warmth" for the intrinsically motivated behavior compared to the reactive baseline behavior. Warmth is considered a primary dimension for social attitude formation in human social cognition. A human perceived as warm (friendly, trustworthy) experiences more positive social interactions.
Inspired by the "Cognitive Hour-glass" model presented in //doi.org/10.1515/jagi-2016-0001, we propose a new framework for developing cognitive architectures aimed at cognitive robotics. The purpose of the proposed framework is foremost to ease the development of cognitive architectures by encouraging and facilitating cooperation and the re-use of existing results. This is done by dividing the development of cognitive architectures into a series of layers that can be considered partly in isolation, some of which directly relate to other research fields. Finally, we introduce and review several topics essential to the proposed framework.
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
This paper introduces a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy for human-robot interaction without the use of external sensors or human supervision. The robot learns to focus its attention on groups of people from its own audio-visual experiences, independently of the number of people, their positions, and their physical appearance. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network using a simulated environment that mimics realistic scenarios involving speaking and silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust with respect to parameter estimation, i.e., the parameter values yielded by the method do not have a decisive impact on performance. The best results are obtained when both audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step forward towards the autonomous learning of socially acceptable gaze behavior.
There is a need for systems that dynamically interact with ageing populations to gather information, monitor health conditions, and provide support, especially after hospital discharge or in at-home settings. Digital health has delivered several smart devices, bundled with telemedicine systems, smartphones, and other digital services. While such solutions offer personalised data and suggestions, the real disruptive step comes from the interaction enabled by a new digital ecosystem, represented by chatbots. Chatbots will play a leading role by embodying the function of a virtual assistant and bridging the gap between patients and clinicians. Powered by AI and machine-learning algorithms, chatbots are forecast to save healthcare costs when used in place of a human, or to assist one as a preliminary step in assessing a condition and providing self-care recommendations. This paper describes the integration of chatbots into telemedicine systems intended for elderly patients after hospital discharge. The paper discusses possible ways to utilise chatbots to assist healthcare providers and support patients with their conditions.
Effective task management is essential to successful team collaboration. While the past decade has seen considerable innovation in systems that track and manage group tasks, these innovations have typically been outside of the principal communication channels: email, instant messenger, and group chat. Teams formulate, discuss, refine, assign, and track the progress of their collaborative tasks over electronic communication channels, yet they must leave these channels to update their task-tracking tools, creating a source of friction and inefficiency. To address this problem, we explore how bots might be used to mediate task management for individuals and teams. We deployed a prototype bot to eight different teams of information workers to help them create, assign, and keep track of tasks, all within their main communication channel. From these deployments, we derived seven insights for the design of future bots for coordinating work.