During a public health crisis like the COVID-19 pandemic, a credible and easy-to-access information portal is highly desirable. It helps with disease prevention, public health planning, and misinformation mitigation. However, creating such an information portal is challenging because 1) domain expertise is required to identify and curate credible and intelligible content, 2) the information needs to be updated promptly in response to the fast-changing environment, and 3) the information should be easily accessible to the general public, which is particularly difficult when most people lack domain expertise about the crisis. In this paper, we present an expert-sourcing framework and Jennifer, an AI chatbot that serves as a credible and easy-to-access information portal for individuals during the COVID-19 pandemic. Jennifer was created by a team of over 150 scientists and health professionals around the world, deployed in the real world, and has answered thousands of user questions about COVID-19. We evaluated Jennifer from the perspectives of two key stakeholders: expert volunteers and information seekers. We first interviewed experts who contributed to the collaborative creation of Jennifer to learn about the challenges in the process and opportunities for future improvement. We then conducted an online experiment that examined Jennifer's effectiveness in supporting information seekers in locating COVID-19 information and gaining their trust. We share the key lessons learned and discuss design implications for building expert-sourced and AI-powered information portals, along with the risks and opportunities of misinformation mitigation and beyond.
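The paper does not detail Jennifer's internals, but the core idea of an expert-curated question-answering portal can be sketched as retrieval over an expert-written Q&A bank. Below is a minimal, hypothetical sketch using TF-IDF matching; the Q&A entries, similarity threshold, and escalation fallback are all illustrative assumptions, not Jennifer's actual pipeline.

```python
# Minimal sketch of an expert-curated FAQ chatbot: user questions are matched
# against a bank of expert-written Q&A pairs by TF-IDF cosine similarity.
# The Q&A content and the threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = [
    ("How does COVID-19 spread?", "Mainly through respiratory droplets ..."),
    ("Do masks help?", "Yes, masks reduce transmission ..."),
]
questions = [q for q, _ in faq]

vectorizer = TfidfVectorizer().fit(questions)
faq_vecs = vectorizer.transform(questions)

def answer(user_question: str, threshold: float = 0.3) -> str:
    """Return the expert answer for the closest FAQ entry, or a fallback."""
    sims = cosine_similarity(vectorizer.transform([user_question]), faq_vecs)[0]
    best = sims.argmax()
    if sims[best] < threshold:
        return "I'm not sure -- let me forward this to our expert volunteers."
    return faq[best][1]

print(answer("Can wearing a mask protect me?"))
```

The fallback branch mirrors the expert-sourcing idea: low-confidence questions are routed back to human volunteers rather than answered automatically.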

Related content

The INFORMS Journal on Computing publishes high-quality papers that expand the scope of operations research and computing. It seeks original research papers on theory, methods, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
March 16, 2023

Multimodal Learning Analytics (MMLA) innovations make use of rapidly evolving sensing and artificial intelligence algorithms to collect rich data about learning activities that unfold in physical learning spaces. The analysis of these data is opening exciting new avenues for both studying and supporting learning. Yet, practical and logistical challenges commonly appear while deploying MMLA innovations "in-the-wild". These can span from technical issues related to enhancing the learning space with sensing capabilities, to the increased complexity of teachers' tasks and informed consent. These practicalities have been rarely discussed. This paper addresses this gap by presenting a set of lessons learnt from a 2-year human-centred MMLA in-the-wild study conducted with 399 students and 17 educators. The lessons learnt were synthesised into topics related to i) technological/physical aspects of the deployment; ii) multimodal data and interfaces; iii) the design process; iv) participation, ethics and privacy; and v) the sustainability of the deployment.

Reinforcement learning (RL) has become widely adopted in robot control. Despite many successes, one major persistent problem is very low data efficiency. One solution is interactive feedback, which has been shown to speed up RL considerably. As a result, there is an abundance of different strategies, which are, however, primarily tested on discrete grid-world and small-scale optimal control scenarios. In the literature, there is no consensus about which feedback frequency is optimal or at which time the feedback is most beneficial. To resolve these discrepancies, we isolate and quantify the effect of feedback frequency in robotic tasks with continuous state and action spaces. The experiments encompass inverse kinematics learning for robotic manipulator arms of different complexity. We show that seemingly contradictory reported phenomena occur at different complexity levels. Furthermore, our results suggest that no single ideal feedback frequency exists; rather, the feedback frequency should be changed as the agent's proficiency in the task increases.
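To make the paper's conclusion concrete, here is a hedged sketch of an interactive RL loop whose feedback frequency adapts to the agent's proficiency. The `env`, `agent`, and `teacher` interfaces and the decay schedule are illustrative assumptions; the paper itself compares fixed feedback frequencies across task complexities.

```python
# Illustrative sketch of proficiency-dependent interactive feedback: the
# probability of querying the human teacher decays as the agent's recent
# success rate rises. The schedule and the env/agent/teacher interfaces are
# assumptions for illustration only.
import random
from collections import deque

recent_success = deque(maxlen=100)  # rolling window of task outcomes

def feedback_probability() -> float:
    """High feedback frequency early on, lower as proficiency increases."""
    if not recent_success:
        return 1.0
    proficiency = sum(recent_success) / len(recent_success)
    return max(0.05, 1.0 - proficiency)

def run_episode(env, agent, teacher):
    state = env.reset()
    done = False
    while not done:
        action = agent.act(state)
        if random.random() < feedback_probability():
            action = teacher.correct(state, action)  # human overrides action
        state, reward, done, _ = env.step(action)
        agent.update(state, reward)
    recent_success.append(env.succeeded())  # hypothetical success indicator
```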

The field of visual representation learning has seen explosive growth in the past years, but its benefits in robotics have been surprisingly limited so far. Prior work uses generic visual representations as a basis to learn (task-specific) robot action policies (e.g. via behavior cloning). While the visual representations do accelerate learning, they are primarily used to encode visual observations. Thus, action information has to be derived purely from robot data, which is expensive to collect! In this work, we present a scalable alternative where the visual representations can help directly infer robot actions. We observe that vision encoders express relationships between image observations as distances (e.g. via embedding dot product) that could be used to efficiently plan robot behavior. We operationalize this insight and develop a simple algorithm for acquiring a distance function and dynamics predictor, by fine-tuning a pre-trained representation on human-collected video sequences. The final method is able to substantially outperform traditional robot learning baselines (e.g. 70% success vs. 50% for behavior cloning on pick-place) on a suite of diverse real-world manipulation tasks. It can also generalize to novel objects, without using any robot demonstrations during training. For visualizations of the learned policies please check: //agi-labs.github.io/manipulate-by-seeing/
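As an illustration of the core insight, the sketch below uses a pre-trained encoder's embedding distances together with a learned dynamics predictor to greedily pick the action whose predicted next embedding is closest to the goal. The `encoder` and `dynamics` callables and the one-step greedy horizon are assumptions for illustration, not the paper's exact planner.

```python
# Minimal sketch of planning with an embedding-space distance function: a
# learned dynamics model predicts the next embedding for each candidate
# action, and the action whose predicted embedding is closest to the goal
# embedding is executed.
import torch

def plan_step(encoder, dynamics, obs, goal_obs, candidate_actions):
    """Pick the action predicted to move the embedding closest to the goal."""
    with torch.no_grad():
        z = encoder(obs)            # current image embedding
        z_goal = encoder(goal_obs)  # goal image embedding
        best_action, best_dist = None, float("inf")
        for action in candidate_actions:
            z_next = dynamics(z, action)        # predicted next embedding
            dist = torch.norm(z_next - z_goal)  # distance-to-goal estimate
            if dist < best_dist:
                best_action, best_dist = action, dist.item()
    return best_action
```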

The citation network of patents citing prior art arises from the legal obligation of patent applicants to properly disclose their invention. One way to study the relationship between current patents and their antecedents is by analyzing the similarity between the textual elements of patents. Many patent similarity indicators have shown a constant decrease since the mid-1970s. Although several explanations have been proposed, more comprehensive analyses of this phenomenon have been rare. In this paper, we use a computationally efficient measure of patent similarity that leverages state-of-the-art Natural Language Processing tools to investigate potential drivers of this apparent similarity decrease. This is achieved by modeling patent similarity scores by means of generalized additive models. We found that non-linear modeling specifications are able to distinguish between distinct, temporally varying drivers of the patent similarity levels that explain more variation in the data ($R^2\sim 18\%$) compared to previous methods. Moreover, the model reveals an underlying trend in similarity scores that is fundamentally different from the one presented previously.
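As a hedged sketch of the modeling step, the snippet below fits a generalized additive model with one smooth term per candidate driver using the pygam library on toy data. The specific covariates (grant year, text length, citation count) are illustrative assumptions; the paper's NLP-based similarity measure and actual covariates are not reproduced here.

```python
# Hedged sketch of modeling patent similarity scores with a generalized
# additive model via pygam. The covariates and the synthetic data are
# illustrative assumptions, not the paper's dataset.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(1976, 2020, n),   # grant year
    rng.normal(5000, 1500, n),     # patent text length
    rng.poisson(8, n),             # backward-citation count
])
# Toy similarity scores with a mild downward trend over time.
y = 0.5 - 0.004 * (X[:, 0] - 1976) + rng.normal(0, 0.05, n)

# One smooth term per driver lets each effect vary non-linearly,
# which is what allows the model to separate temporally varying drivers.
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, y)
gam.summary()
```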

Recently, the field of Human-Robot Interaction has gained popularity, due to the wide range of possibilities for how robots can support humans during daily tasks. One form of supportive robot is the socially assistive robot, which is specifically built for communicating with humans, e.g., as a service robot or personal companion. As they understand humans through artificial intelligence, these robots will at some point make wrong assumptions about the human's current state and give an unexpected response. In human-human conversations, unexpected responses happen frequently. However, it is currently unclear how such robots should act if they understand that the human did not expect their response, or even how they should show the uncertainty of their response in the first place. To this end, we explore the different forms of potential uncertainty during human-robot conversations and how humanoids can communicate these uncertainties through verbal and non-verbal cues.
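One concrete way to operationalize verbal uncertainty cues is to map the confidence of the robot's language-understanding model to graded hedging phrases, as in the illustrative sketch below. The thresholds and wording are assumptions; the paper explores which cues to use rather than prescribing an implementation.

```python
# Illustrative sketch of verbal uncertainty cues: the robot prefixes its
# response with hedging language chosen from the confidence of its intent
# classifier. Thresholds and phrasing are assumptions for illustration.
def hedge_response(response: str, confidence: float) -> str:
    if confidence >= 0.9:
        return response                       # confident: no hedge needed
    if confidence >= 0.6:
        return f"I think {response[0].lower()}{response[1:]}"  # mild hedge
    return f"I'm not sure I understood you correctly. Did you mean: {response}"

print(hedge_response("The meeting is at 3 pm.", 0.7))
```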

From the urbanists' perspective, the everyday experience of young people, as an underrepresented group in the design of public spaces, includes tactics they use to challenge the strategies that rule over urban spaces. In this regard, youth-led social movements are a set of collective tactics that groups of young people use to resist power structures. Social informational streams have revolutionized the way youth organize and mobilize for social movements throughout the world, especially in urban areas. However, just like public spaces, these algorithm-based platforms have been developed with a great power imbalance between developers and users, which results in non-inclusive social informational streams for young activists. Social activism grows agency and confidence in youth, which is critical to their development. This paper employs a youth-centric lens, as used in designing public spaces, for designing algorithmic spaces that can improve bottom-up, youth-led movements. By reviewing the structure of these spaces and how young people interact with these structures in the different cultural contexts of Iran and the US, we propose a humanistic approach to designing social informational streams that can enhance youth activism.

How will superhuman artificial intelligence (AI) affect human decision making? And what will be the mechanisms behind this effect? We address these questions in a domain where AI already exceeds human performance, analyzing more than 5.8 million move decisions made by professional Go players over the past 71 years (1950-2021). To address the first question, we use a superhuman AI program to estimate the quality of human decisions across time, generating 58 billion counterfactual game patterns and comparing the win rates of actual human decisions with those of counterfactual AI decisions. We find that humans began to make significantly better decisions following the advent of superhuman AI. We then examine human players' strategies across time and find that novel decisions (i.e., previously unobserved moves) occurred more frequently and became associated with higher decision quality after the advent of superhuman AI. Our findings suggest that the development of superhuman AI programs may have prompted human players to break away from traditional strategies and induced them to explore novel moves, which in turn may have improved their decision-making.
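The counterfactual comparison at the heart of the analysis can be sketched as a win-rate gap between the human's actual move and the engine's preferred move from the same position. The `engine` and `position` interfaces below stand in for a superhuman Go program and are assumptions for illustration; the paper's exact quality metric may differ.

```python
# Sketch of the counterfactual decision-quality idea: compare the engine's
# estimated win rate after the human's actual move with the win rate after
# the engine's own preferred move. The engine/position interfaces are
# hypothetical stand-ins for a superhuman Go program.
def decision_quality(engine, position, human_move) -> float:
    """Win-rate gap between the AI's counterfactual move and the human's
    actual move from the same position (0 = as good as the AI's choice)."""
    ai_move = engine.best_move(position)
    wr_ai = engine.win_rate(position.play(ai_move))
    wr_human = engine.win_rate(position.play(human_move))
    return wr_ai - wr_human  # smaller gap => better human decision
```

Aggregating this gap over millions of recorded moves, and tracking it by year, is what allows decision quality to be compared before and after the advent of superhuman AI.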

In its pragmatic turn, the new discipline of AI ethics came to be dominated by humanity's collective fear of its creatures, as reflected in an extensive and perennially popular literary tradition. Dr. Frankenstein's monster in the novel by Mary Shelley rising against its creator; the unorthodox golem in H. Leivick's 1920 play going on a rampage; the rebellious robots of Karel Čapek -- these and hundreds of other examples of the genre are the background against which the preoccupation of AI ethics with preventing robots from behaving badly towards people is best understood. In each of these three fictional cases (as well as in many others), the miserable artificial creature -- mercilessly exploited, or cornered by a murderous mob, and driven to violence in self-defense -- has its author's sympathy. In real life, with very few exceptions, things are different: theorists working on the ethics of AI completely ignore the possibility of robots needing protection from their creators. The present book chapter takes up this less commonly considered ethical angle of AI.

Although measuring held-out accuracy has been the primary approach to evaluate generalization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individual tasks or on specific behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task-agnostic methodology for testing NLP models. CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-the-art models. In a user study, a team responsible for a commercial sentiment analysis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList created twice as many tests and found almost three times as many bugs as users without it.
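The flavor of CheckList-style behavioral testing can be conveyed with a small hand-rolled harness: a Minimum Functionality Test (MFT) over templated inputs and an INVariance test under label-preserving perturbations. This harness is an illustration against a generic `predict` function, not the actual CheckList tool, which generates far richer tests.

```python
# Minimal illustration of two CheckList-style test types against a generic
# predict(texts) -> labels function. This hand-rolled harness is an
# assumption for illustration, not the CheckList library's API.
def run_mft(predict, template, fillers, expected_label):
    """MFT: all filled templates must receive the expected label."""
    texts = [template.format(w) for w in fillers]
    preds = predict(texts)
    return [(t, p) for t, p in zip(texts, preds) if p != expected_label]

def run_inv(predict, pairs):
    """INV: predictions must not change under a label-preserving perturbation."""
    originals = predict([a for a, _ in pairs])
    perturbed = predict([b for _, b in pairs])
    return [p for p, (o, q) in zip(pairs, zip(originals, perturbed)) if o != q]

# Example MFT: negative sentiment (label 0) for "I {} this flight."
failures = run_mft(lambda ts: [0] * len(ts),  # stand-in model: always negative
                   "I {} this flight.", ["hate", "dread", "despise"], 0)
print("MFT failures:", failures)
```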

Chatbots have become an important solution to rapidly increasing customer care demands on social media in recent years. However, current work on chatbots for customer care ignores a key factor in user experience: tone. In this work, we create a novel tone-aware chatbot that generates toned responses to user requests on social media. We first conduct a formative study in which the effects of tones are examined; it uncovers significant and varied influences of different tones on user experience. With this knowledge of the effects of tones, we design a deep learning based chatbot that takes tone information into account. We train our system on over 1.5 million real customer care conversations collected from Twitter. The evaluation reveals that our tone-aware chatbot generates responses to user requests that are as appropriate as those of human agents. More importantly, our chatbot is perceived to be even more empathetic than human agents.
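A common way to make a generative chatbot tone-aware is to condition generation on a tone control token, sketched below. The tone vocabulary and the control-token scheme are assumptions for illustration; the paper's actual architecture is not reproduced here.

```python
# Sketch of tone conditioning via a control token prepended to the input,
# a common technique for controllable generation. The tone vocabulary and
# token format are illustrative assumptions.
TONES = ["empathetic", "passionate", "satisfied", "professional"]

def make_input(tone: str, user_request: str) -> str:
    """Prepend a tone control token so generation is conditioned on it."""
    assert tone in TONES, f"unknown tone: {tone}"
    return f"<tone:{tone}> {user_request}"

# During training, each (request, response) pair is tagged with the tone of
# the response; at inference time the desired tone is chosen explicitly.
print(make_input("empathetic", "My flight was cancelled and no one helped me!"))
```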
