We propose the novel concept of a cyber-human system (CHS) and a diverse and pluralistic "mixed-life society" in which cyber and human societies commit to each other. This concept extends the cyber-physical system (CPS) associated with the current Society 5.0, a social vision realized through the fusion of cyber space (virtual space) and physical space (real space). In addition, the Cyber-Human Social Co-Operating System (Social Co-OS), which combines cyber and human societies, is presented as an architecture that embodies the CHS. In this architecture, the cyber system and the human system cooperate through a fast loop (operation and administration) and a slow loop (consensus and politics). Furthermore, the technical content and current implementation of the basic functions of the Social Co-OS are described. These functions consist of individual behavioral diagnostics and interventions in the fast loop, and group decision diagnostics and consensus building in the slow loop. This system will contribute to mutual aid communities and platform cooperatives.
Conserving energy in households and office buildings is a significant challenge, mainly due to the recent shortage of energy resources, mounting environmental problems, and the limited global adoption of energy-saving technologies. Moreover, in some regions, COVID-19 social distancing measures temporarily shifted energy demand from commercial and urban centers to residential areas, increasing consumption and charges and, in turn, creating economic impacts on customers. Therefore, the marketplace could benefit from an internet of things (IoT) ecosystem that monitors energy consumption habits and promptly recommends actions to facilitate energy efficiency. This paper presents the full integration of a proposed energy efficiency framework into the Home-Assistant platform using an edge-based architecture. End-users can visualize their consumption patterns as well as ambient environmental data through the Home-Assistant user interface. More notably, explainable energy-saving recommendations are delivered to end-users as notifications via the mobile application to facilitate habit change. In this context, to the best of the authors' knowledge, this is the first attempt to develop and implement an energy-saving recommender system on edge devices, which ensures better privacy preservation since data are processed locally on the edge, without being transmitted to remote servers as in cloudlet platforms.
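As a concrete illustration of the delivery mechanism described above, the sketch below pushes an explainable recommendation through Home-Assistant's documented REST notify service. This is not the paper's actual implementation; the instance URL, access token, and notify service name are placeholders to adapt.

```python
# Minimal sketch: pushing an explainable energy-saving recommendation to the
# Home-Assistant companion app via the REST API. URL, token, and service name
# below are placeholders, not values from the paper.
import requests

HA_URL = "http://homeassistant.local:8123"   # local edge instance
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # created in the HA user profile
SERVICE = "notify/mobile_app_my_phone"       # companion-app notify service

def send_recommendation(message: str, explanation: str) -> None:
    """Deliver a recommendation plus its explanation as a phone notification."""
    resp = requests.post(
        f"{HA_URL}/api/services/{SERVICE}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"title": "Energy-saving tip",
              "message": f"{message}\n\nWhy: {explanation}"},
        timeout=10,
    )
    resp.raise_for_status()

send_recommendation(
    "Turn off the office lights.",
    "The room has been unoccupied for 30 minutes while the lights drew 120 W.",
)
```

Because the notification is assembled and sent from the same edge device that runs the recommender, no consumption data ever leaves the local network.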
Cyber-Physical Systems (CPS) consist of intertwined computational (cyber) and physical components interacting through sensors and/or actuators. Computational elements are networked at every scale and can communicate with each other and with humans. Nodes can join and leave the network at any time, or they can move to different spatial locations. In this scenario, monitoring spatial and temporal properties plays a key role in understanding how complex behaviors can emerge from local and dynamic interactions. We revisit here the Spatio-Temporal Reach and Escape Logic (STREL), a logic-based formal language designed to express and monitor spatio-temporal requirements over the execution of mobile and spatially distributed CPS. STREL represents the physical space in which CPS entities (nodes of the graph) are arranged as a weighted graph capturing their dynamic topological configuration. Both nodes and edges carry attributes modeling physical and logical quantities that can evolve over time. STREL combines Signal Temporal Logic with two spatial modalities, reach and escape, that operate over the weighted graph. From these basic operators, we can derive other important spatial modalities such as everywhere, somewhere, and surround. We propose both qualitative and quantitative semantics based on the constraint semiring algebraic structure. We provide an offline monitoring algorithm for STREL and show the feasibility of our approach through its application to two case studies: monitoring spatio-temporal requirements over a simulated mobile ad-hoc sensor network and over a simulated epidemic spreading model for COVID-19.
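To make the derived spatial modalities concrete, here is a small illustrative sketch (not the paper's monitoring algorithm) of the Boolean semantics of "somewhere within distance d": a node satisfies it if some node within weighted distance at most d satisfies the atomic property. The adjacency-list graph encoding is an assumption for illustration.

```python
# Illustrative sketch of STREL-style "somewhere within distance d" on a
# weighted graph, evaluated per node via Dijkstra-like expansion.
# `graph` maps node -> list of (neighbor, edge_weight) pairs.
import heapq

def somewhere(graph, sat, node, d):
    """True iff some node within weighted distance <= d of `node`
    satisfies the property encoded by the Boolean map `sat`."""
    dist = {node: 0.0}
    heap = [(0.0, node)]
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > d:
            break                      # all remaining nodes are farther than d
        if sat[u]:
            return True                # a satisfying node is reachable in budget
        for v, w in graph[u]:
            nc = cost + w
            if nc <= d and nc < dist.get(v, float("inf")):
                dist[v] = nc
                heapq.heappush(heap, (nc, v))
    return False

# Tiny MANET-like example: node 0 reaches a satisfying node within distance 3.
g = {0: [(1, 2.0)], 1: [(0, 2.0), (2, 1.0)], 2: [(1, 1.0)]}
print(somewhere(g, {0: False, 1: False, 2: True}, 0, 3.0))  # True
```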
Seamless human-robot interaction (HRI) and cooperative human-robot (HR) teaming critically rely upon accurate and timely models of human mental workload (MW). Cognitive Load Theory (CLT) suggests that representative physical environments produce representative mental processes; physical environment fidelity thus corresponds with improved modeling accuracy. Virtual Reality (VR) systems provide immersive environments capable of replicating complicated scenarios, particularly high-risk, high-stress scenarios. Passive biosignal modeling shows promise as a noninvasive method of MW modeling. However, VR systems rarely include multimodal psychophysiological feedback or capitalize on biosignal data for online MW modeling. Here, we develop a novel VR simulation pipeline, inspired by the NASA Multi-Attribute Task Battery II (MATB-II) task architecture, capable of synchronously collecting objective performance, subjective performance, and passive human biosignals in a simulated hazardous exploration environment. Our system extracts and publishes biofeatures through the Robot Operating System (ROS), facilitating the integration of real-time psychophysiology-based MW models into complete end-to-end systems. A VR simulation pipeline capable of estimating MW online could be foundational for advancing HR systems and VR experiences by enabling them to adaptively alter their behavior in response to operator MW.
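The abstract names ROS as the transport for extracted biofeatures. Below is a minimal sketch of what such a publisher node could look like; the topic name, publishing rate, and feature set are assumptions for illustration, not the authors' interfaces.

```python
# Hedged sketch of a ROS 1 node publishing a vector of extracted biofeatures
# so a downstream MW model can consume them in real time.
import rospy
from std_msgs.msg import Float32MultiArray

def extract_biofeatures():
    """Placeholder: in a real system, compute features (e.g., mean heart
    rate, pupil diameter, EDA level) from synchronized biosignal buffers."""
    return [72.0, 3.1, 0.45]

def main():
    rospy.init_node("biofeature_publisher")
    pub = rospy.Publisher("/mw/biofeatures", Float32MultiArray, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz feature stream
    while not rospy.is_shutdown():
        pub.publish(Float32MultiArray(data=extract_biofeatures()))
        rate.sleep()

if __name__ == "__main__":
    main()
```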
The availability of large-scale video action understanding datasets has facilitated advances in the interpretation of visual scenes containing people. However, learning to recognise human actions and their social interactions in an unconstrained real-world environment comprising numerous people, with potentially highly unbalanced and long-tailed action-label distributions, from a stream of sensory data captured from a mobile robot platform remains a significant challenge, not least owing to the lack of a reflective large-scale dataset. In this paper, we introduce JRDB-Act, an extension of the existing JRDB, which is captured by a social mobile manipulator and reflects a real distribution of human daily-life actions in a university campus environment. JRDB-Act has been densely annotated with atomic actions and comprises over 2.8M action labels, constituting a large-scale spatio-temporal action detection dataset. Each human bounding box is labeled with one pose-based action label and multiple (optional) interaction-based action labels. Moreover, JRDB-Act provides social group annotations, conducive to the task of grouping individuals based on their interactions in the scene to infer their social activities (common activities within each social group). Each annotated label in JRDB-Act is tagged with the annotator's confidence level, which contributes to the development of reliable evaluation strategies. To demonstrate how one can effectively utilise such annotations, we develop an end-to-end trainable pipeline to learn and infer these tasks, i.e., individual action and social group detection. The data and the evaluation code are publicly available at //jrdb.erc.monash.edu/.
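To illustrate how per-label annotator confidence could feed a reliable evaluation strategy, here is a purely hypothetical sketch; the record schema below is invented for illustration and is not JRDB-Act's actual annotation format.

```python
# Hypothetical annotation records: one pose-based action per box, optional
# interaction-based actions, and an annotator confidence score per label.
annotations = [
    {"box_id": 0, "pose_action": "walking", "interactions": ["talking"], "confidence": 0.9},
    {"box_id": 1, "pose_action": "sitting", "interactions": [], "confidence": 0.4},
]

def reliable_labels(records, min_conf=0.75):
    """Keep only labels whose annotator confidence clears a threshold,
    e.g. to build a high-precision evaluation split."""
    return [r for r in records if r["confidence"] >= min_conf]

print(reliable_labels(annotations))  # only box 0 survives the filter
```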
The explanation dimension of Artificial Intelligence (AI)-based systems has been a hot topic in recent years. Different communities have raised concerns about the increasing presence of AI in people's everyday tasks and how it can affect their lives. Much research addresses the interpretability and transparency concepts of explainable AI (XAI), which are usually related to algorithms and Machine Learning (ML) models. But in decision-making scenarios, people need greater awareness of how AI works and of its outcomes in order to build a relationship with the system. Decision-makers usually need to justify their decisions to others across different domains. If a decision is somehow based on or influenced by an AI system's outcome, an explanation of how the AI reached that result is key to building trust between AI and humans in decision-making scenarios. In this position paper, we discuss the role of XAI in decision-making scenarios, present our vision of decision-making with an AI system in the loop, and explore one case from the literature on how XAI can affect people justifying their decisions, considering the importance of building the human-AI relationship in those scenarios.
Recommender systems exploit interaction history to estimate user preference and have been heavily used in a wide range of industry applications. However, static recommendation models struggle to answer two important questions well due to inherent shortcomings: (a) What exactly does a user like? (b) Why does a user like an item? The shortcomings stem from the way static models learn user preference, i.e., without explicit instructions and active feedback from users. The recent rise of conversational recommender systems (CRSs) changes this situation fundamentally. In a CRS, users and the system can dynamically communicate through natural language interactions, which provides unprecedented opportunities to explicitly obtain users' exact preferences. Considerable efforts, spread across disparate settings and applications, have been put into developing CRSs, yet existing models, technologies, and evaluation methods remain far from mature. In this paper, we provide a systematic review of the techniques used in current CRSs. We summarize the key challenges of developing CRSs into five directions: (1) question-based user preference elicitation; (2) multi-turn conversational recommendation strategies; (3) dialogue understanding and generation; (4) exploitation-exploration trade-offs; and (5) evaluation and user simulation. These directions involve multiple research fields, including information retrieval (IR), natural language processing (NLP), and human-computer interaction (HCI). Based on these research directions, we discuss some future challenges and opportunities, providing a road map for researchers from multiple communities to get started in this area. We hope this survey helps to identify and address challenges in CRSs and inspires future research.
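As a concrete illustration of direction (4), the sketch below implements the simplest exploitation-exploration mechanism, an epsilon-greedy bandit over candidate items. It is a generic textbook example, not a model from the surveyed literature.

```python
# Epsilon-greedy bandit: with probability epsilon, explore a random item;
# otherwise exploit the item with the highest estimated value. Values are
# updated incrementally from explicit user feedback.
import random

class EpsilonGreedyRecommender:
    def __init__(self, items, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {i: 0 for i in items}    # times each item was shown
        self.values = {i: 0.0 for i in items}  # running mean of feedback

    def recommend(self):
        if random.random() < self.epsilon:               # explore
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)     # exploit

    def update(self, item, reward):
        """Fold in explicit feedback (e.g., 1.0 = accepted, 0.0 = rejected)."""
        self.counts[item] += 1
        self.values[item] += (reward - self.values[item]) / self.counts[item]

rec = EpsilonGreedyRecommender(["item_a", "item_b", "item_c"])
item = rec.recommend()
rec.update(item, reward=1.0)
```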
Nowadays, recommender systems are present in many daily activities such as online shopping and browsing social networks. Given the rising demand for reinvigorating the tourist industry through information technology, recommenders have been incorporated into tourism websites such as Expedia, Booking, and Tripadvisor, among others. Furthermore, the number of scientific papers on recommender systems for tourism has grown solidly and continuously since 2004. Much of this growth is due to social networks, which, besides offering researchers a great mass of available and constantly updated data, also enable recommendation systems to become more personalised, effective, and natural. This paper reviews and analyses research publications focusing on tourism recommender systems that use social networks. We detail their main characteristics, such as which social networks are exploited, which data are extracted, the recommendation techniques applied, and the methods of evaluation. Through this comprehensive literature review, we aim to support future recommender systems research by providing clear classifications and descriptions of current tourism recommender systems.
Current captioning approaches describe images using black-box architectures whose behavior is hardly controllable or explainable from the outside. As an image can be described in infinite ways depending on the goal and the context at hand, a higher degree of controllability is needed to apply captioning algorithms in complex scenarios. In this paper, we introduce a novel framework for image captioning that generates diverse descriptions by allowing both grounding and controllability. Given a control signal in the form of a sequence or set of image regions, we generate the corresponding caption through a recurrent architecture that predicts textual chunks explicitly grounded on regions, following the constraints of the given control. Experiments are conducted on Flickr30k Entities and on COCO Entities, an extended version of COCO in which we add grounding annotations collected in a semi-automatic manner. Results demonstrate that our method achieves state-of-the-art performance on controllable image captioning in terms of both caption quality and diversity. Code will be made publicly available.
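A minimal PyTorch sketch of the conditioning idea (wiring and dimensions are illustrative assumptions, not the paper's architecture): a recurrent decoder whose input at each step concatenates the word embedding with the feature of the region currently dictated by the control signal, so generated text stays tied to the controlled regions.

```python
# Sketch of a region-conditioned recurrent decoder for controllable
# captioning: each step sees the previous word plus the controlling
# region's feature vector.
import torch
import torch.nn as nn

class RegionConditionedDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, region_dim=2048, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + region_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, region_feats):
        # tokens:       (B, T) previous words
        # region_feats: (B, T, region_dim) feature of the controlling region per step
        x = torch.cat([self.embed(tokens), region_feats], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # (B, T, vocab) next-word logits

# Toy forward pass: batch of 2 captions, 5 steps, 100-word vocabulary.
dec = RegionConditionedDecoder(vocab_size=100)
logits = dec(torch.randint(0, 100, (2, 5)), torch.randn(2, 5, 2048))
print(logits.shape)  # torch.Size([2, 5, 100])
```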
This paper introduces a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy for human-robot interaction without the use of external sensors or human supervision. The robot learns to focus its attention on groups of people from its own audio-visual experiences, independently of the number of people, their positions, and their physical appearances. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network using a simulated environment that mimics realistic scenarios involving speaking and silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust to parameter estimation, i.e., the parameter values yielded by the method do not have a decisive impact on performance. The best results are obtained when audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step forward towards the autonomous learning of socially acceptable gaze behavior.
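To make the abstract's ingredients concrete, here is a hedged sketch combining a recurrent network with a one-step Q-learning target; the network sizes, observation encoding, and reward are illustrative assumptions, not the paper's implementation.

```python
# Recurrent Q-network sketch: a GRU summarizes the audio-visual observation
# history and a linear head outputs Q-values over gaze actions; one TD(0)
# update is shown on simulated data.
import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    def __init__(self, obs_dim=16, hidden=64, n_actions=4):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.q = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, h=None):
        out, h = self.gru(obs_seq, h)
        return self.q(out), h  # Q-values per time step, recurrent state

net = RecurrentQNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
gamma = 0.99

# One simulated observation sequence; TD(0) update on the last transition.
obs = torch.randn(1, 10, 16)               # (batch, time, features)
q, _ = net(obs)
action = q[0, -2].argmax()                 # greedy gaze action at step T-1
reward = torch.tensor(1.0)                 # e.g., reward for facing the speakers
target = reward + gamma * q[0, -1].max().detach()
loss = (q[0, -2, action] - target) ** 2
opt.zero_grad()
loss.backward()
opt.step()
```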
Conversational systems have come a long way after decades of research and development, from Eliza and Parry in the 1960s and 1970s, to task-completion systems as in the ATIS project, to intelligent personal assistants such as Siri, and to today's social chatbots like XiaoIce. Social chatbots' appeal lies not only in their ability to respond to users' diverse requests, but also in their ability to establish an emotional connection with users, which they achieve by satisfying users' essential needs for communication, affection, and social belonging. The design of social chatbots must focus on user engagement and take both intellectual quotient (IQ) and emotional quotient (EQ) into account. Users should want to engage with the social chatbot; as such, we define the success metric for social chatbots as conversation-turns per session (CPS). Using XiaoIce as an illustrative example, we discuss key technologies in building social chatbots, from core chat to visual sense to skills. We also show how XiaoIce can dynamically recognize emotion and engage the user throughout long conversations with appropriate interpersonal responses. As we become the first generation of humans ever to live with AI, social chatbots that are well-designed to be both useful and empathic will soon be ubiquitous.