
Discovering and making sense of relevant literature is fundamental in any scientific field. Node-link diagram-based visualization tools can aid this process; however, existing tools have been evaluated only on small scales. This paper evaluates Argo Scholar, an open-source visualization tool designed for interactive exploration of literature and easy sharing of exploration results. A large-scale user study of 122 participants from diverse backgrounds and experiences showed that Argo Scholar is effective at helping users find related work and understand paper connections, and incremental graph-based exploration is effective across diverse disciplines. Based on the user study and user feedback, we provide design considerations and feature suggestions for future work.

Related content

This new edition of the TOOLS conference series revives a tradition of 50 conferences held from 1989 to 2012. TOOLS began as "Technology of Object-Oriented Languages and Systems" and later expanded to cover all innovative aspects of software technology. Many of today's most important software concepts were first introduced here. TOOLS 50+1 was held in 2019 near Kazan, Russia, continuing the series in the same spirit of innovation: enthusiasm for all things software, a combination of scientific rigor and industrial applicability, and openness to all trends and communities in the field.
December 11, 2022

In this work, we introduce IndicXTREME, a benchmark consisting of nine diverse tasks covering 18 languages from the Indic sub-continent belonging to four different families. Across languages and tasks, IndicXTREME contains a total of 103 evaluation sets, of which 51 are new contributions to the literature. To maintain high quality, we only use human annotators to curate or translate our datasets (for IndicXParaphrase, where an automatic translation system is used, a second human verification and correction step is performed). To the best of our knowledge, this is the first effort toward creating a standard benchmark for Indic languages that aims to test the zero-shot capabilities of pretrained language models. We also release IndicCorp v2, an updated and much larger version of IndicCorp that contains 20.9 billion tokens in 24 languages. We pretrain IndicBERT v2 on IndicCorp v2 and evaluate it on IndicXTREME to show that it outperforms existing multilingual language models such as XLM-R and MuRIL.

Autonomous cars are indispensable when humans go further down the hands-free route. Although existing literature highlights that the acceptance of the autonomous car will increase if it drives in a human-like manner, sparse research offers the naturalistic experience from a passenger's seat perspective to examine the human likeness of current autonomous cars. The present study tested whether the AI driver could create a human-like ride experience for passengers based on 69 participants' feedback in a real-road scenario. We designed a ride experience-based version of the non-verbal Turing test for automated driving. Participants rode in autonomous cars (driven by either human or AI drivers) as a passenger and judged whether the driver was human or AI. The AI driver failed to pass our test because passengers detected the AI driver above chance. In contrast, when the human driver drove the car, the passengers' judgement was around chance. We further investigated how human passengers ascribe humanness in our test. Based on Lewin's field theory, we advanced a computational model combining signal detection theory with pre-trained language models to predict passengers' humanness rating behaviour. We employed affective transition between pre-study baseline emotions and corresponding post-stage emotions as the signal strength of our model. Results showed that the passengers' ascription of humanness would increase with the greater affective transition. Our study suggested an important role of affective transition in passengers' ascription of humanness, which might become a future direction for autonomous driving.
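The above-chance detection finding can be quantified with the standard signal detection machinery the abstract mentions. The sketch below is a generic illustration rather than the paper's actual model: it computes sensitivity d' and criterion c from a passenger-judgement confusion table (the function name and the log-linear correction are our assumptions):

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from a 2x2 judgement table.
    A log-linear correction (+0.5 per cell) avoids infinite z-scores
    when an observed rate is exactly 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))
```

Under this convention, detecting the AI driver above chance corresponds to d' > 0, while judgements at chance (as for the human driver) give d' near 0.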

According to the public goods game (PGG) protocol, participants decide freely whether they want to contribute to a common pool or not, but the resulting benefit is distributed equally. A conceptually similar dilemma situation may emerge when participants consider if they claim a common resource but the related cost is covered equally by all group members. The latter establishes a reversed form of the original public goods game (R-PGG). In this work, we show that R-PGG is equivalent to PGG in several circumstances, starting from the traditional analysis, via the evolutionary approach in unstructured populations, to Monte Carlo simulations in structured populations. However, there are also cases when the behavior of R-PGG could be surprisingly different from the outcome of PGG. When the key parameters are heterogeneous, for instance, the results of PGG and R-PGG could be diverse even if we apply the same amplitudes of heterogeneity. We find that the heterogeneity in R-PGG generally impedes cooperation, while the opposite is observed for PGG. These diverse system reactions can be understood if we follow how payoff functions change when introducing heterogeneity in the parameter space. This analysis also reveals the distinct roles of cooperator and defector strategies in the mentioned games. Our observations may hopefully stimulate further research to check the potential differences between PGG and R-PGG due to the alternative complexity of conditions.
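The traditional PGG payoffs, and one way to write the reversed variant, can be made concrete in a few lines. This is a minimal sketch under our own parameterization (group size n, multiplication factor r, unit contribution/claim), not the paper's exact model; in particular, the R-PGG form below assumes each claimer extracts a unit resource whose scaled total cost is split equally among all n members:

```python
def pgg_payoffs(n_coop, n, r, c=1.0):
    """Standard PGG: n_coop of n members contribute c each; the pool
    is multiplied by r and split equally among all n members."""
    share = r * c * n_coop / n
    return share - c, share  # (cooperator payoff, defector payoff)

def rpgg_payoffs(n_claim, n, b=1.0, s=1.6):
    """Reversed-PGG sketch (our assumption): each claimer extracts a
    resource worth b; the scaled total cost s*b*n_claim is covered
    equally by all n members, claimers and abstainers alike."""
    shared_cost = s * b * n_claim / n
    return b - shared_cost, -shared_cost  # (claimer, abstainer)
```

With 1 < s < n, claiming is individually profitable (b - s*b/n > 0) yet universal claiming leaves everyone worse off than universal restraint, mirroring the PGG dilemma.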

Despite the advance of the Open Access (OA) movement, most scholarly production can only be accessed through a paywall. We conduct an international survey among researchers (N=3,304) to measure the willingness and motivations to use (or not use) scholarly piracy sites, and other alternatives to overcome a paywall such as paying with their own money, institutional loans, just reading the abstract, asking the corresponding author for a copy of the document, asking a colleague to get the document for them, or searching for an OA version of the paper. We also explore differences in terms of age, professional position, country income level, discipline, and commitment to OA. The results show that researchers most frequently look for OA versions of the documents. However, more than 50% of the participants have used a scholarly piracy site at least once. This is less common in high-income countries, and among older and better-established scholars. Regarding disciplines, such services were less used in Life & Health Sciences and Social Sciences. Those who have never used a pirate library highlighted ethical and legal objections or pointed out that they were not aware of the existence of such libraries.

Inferring reward functions from human behavior is at the center of value alignment - aligning AI objectives with what we, humans, actually want. But doing so relies on models of how humans behave given their objectives. After decades of research in cognitive science, neuroscience, and behavioral economics, obtaining accurate human models remains an open research topic. This begs the question: how accurate do these models need to be in order for the reward inference to be accurate? On the one hand, if small errors in the model can lead to catastrophic error in inference, the entire framework of reward learning seems ill-fated, as we will never have perfect models of human behavior. On the other hand, if as our models improve, we can have a guarantee that reward accuracy also improves, this would show the benefit of more work on the modeling side. We study this question both theoretically and empirically. We do show that it is unfortunately possible to construct small adversarial biases in behavior that lead to arbitrarily large errors in the inferred reward. However, and arguably more importantly, we are also able to identify reasonable assumptions under which the reward inference error can be bounded linearly in the error in the human model. Finally, we verify our theoretical insights in discrete and continuous control tasks with simulated and human data.
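The linear-error regime the authors identify can be illustrated with the simplest human model in this literature, Boltzmann rationality, where P(a) is proportional to exp(beta * R(a)). In the sketch below (our illustration, not the paper's construction), inferring rewards with a misspecified rationality coefficient scales every recovered reward gap by beta_true / beta_assumed, so the inference error grows linearly with the model mismatch:

```python
import math

def boltzmann_probs(rewards, beta):
    """Boltzmann-rational choice model: P(a) proportional to exp(beta * R(a))."""
    exps = [math.exp(beta * r) for r in rewards]
    total = sum(exps)
    return [e / total for e in exps]

def infer_rewards(choice_probs, beta_assumed):
    """Invert the model: R(a) = log P(a) / beta, recovering rewards
    only up to an additive constant."""
    return [math.log(p) / beta_assumed for p in choice_probs]
```

For example, behavior generated with true rewards [0, 1, 2] and beta_true = 1, but inverted with beta_assumed = 2, yields reward gaps exactly half their true size: a bounded, linear distortion rather than a catastrophic one.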

The social acceptance of AI agents, including intelligent virtual agents and physical robots, is becoming more important for the integration of AI into human society. Although the agents used in human society share various tasks with humans, their cooperation may frequently reduce the task performance. One way to improve the relationship between humans and AI agents is to have humans empathize with the agents. By empathizing, humans feel positively and kindly toward agents, which makes it easier to accept them. In this study, we focus on tasks in which humans and agents have various interactions together, and we investigate the properties of agents that significantly influence human empathy toward the agents. To investigate the effects of task content, difficulty, task completion, and an agent's expression on human empathy, two experiments were conducted. The results of the two experiments showed that human empathy toward the agent was difficult to maintain with only task factors, and that the agent's expression was able to maintain human empathy. In addition, a higher task difficulty reduced the decrease in human empathy, regardless of task content. These results demonstrate that an AI agent's properties play an important role in helping humans accept them.

Gender/ing guides how we view ourselves, the world around us, and each other--including non-humans. Critical voices have raised the alarm about stereotyped gendering in the design of socially embodied artificial agents like voice assistants, conversational agents, and robots. Yet, little is known about how this plays out in research and to what extent. As a first step, we critically reviewed the case of Pepper, a gender-ambiguous humanoid robot. We conducted a systematic review (n=75) involving meta-synthesis and content analysis, examining how participants and researchers gendered Pepper through stated and unstated signifiers and pronoun usage. We found that ascriptions of Pepper's gender were inconsistent, limited, and at times discordant, with little evidence of conscious gendering and some indication of researcher influence on participant gendering. We offer six challenges driving the state of affairs and a practical framework coupled with a critical checklist for centering gender in research on artificial agents.

Along with the massive growth of the Internet from the 1990s until now, various innovative technologies have been created to bring users breathtaking experiences with more virtual interactions in cyberspace. Many virtual environments with thousands of services and applications, from social networks to virtual gaming worlds, have been developed with immersive experience and digital transformation, but most are incoherent instead of being integrated into a platform. In this context, the metaverse, a term formed by combining meta and universe, has been introduced as a shared virtual world that is fueled by many emerging technologies, such as fifth-generation networks and beyond, virtual reality, and artificial intelligence (AI). Among such technologies, AI has shown great importance in processing big data to enhance immersive experience and enable human-like intelligence of virtual agents. In this survey, we explore the role of AI in the foundation and development of the metaverse. We first deliver a preliminary of AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse. We then convey a comprehensive investigation of AI-based methods concerning six technical aspects with potential for the metaverse: natural language processing, machine vision, blockchain, networking, digital twin, and neural interface. Subsequently, several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are studied with respect to their deployment in virtual worlds. Finally, we conclude with the key contributions of this survey and open some future research directions in AI for the metaverse.

Since real-world objects and their interactions are often multi-modal and multi-typed, heterogeneous networks have been widely used as a more powerful, realistic, and generic superclass of traditional homogeneous networks (graphs). Meanwhile, representation learning (a.k.a. embedding) has recently been intensively studied and shown effective for various network mining and analytical tasks. In this work, we aim to provide a unified framework to deeply summarize and evaluate existing research on heterogeneous network embedding (HNE), which includes but goes beyond a normal survey. Since there has already been a broad body of HNE algorithms, as the first contribution of this work, we provide a generic paradigm for the systematic categorization and analysis of the merits of various existing HNE algorithms. Moreover, existing HNE algorithms, though mostly claimed generic, are often evaluated on different datasets. While understandable given the application flavor of HNE, such indirect comparisons largely hinder the proper attribution of improved task performance to effective data preprocessing versus novel technical design, especially considering the various possible ways to construct a heterogeneous network from real-world application data. Therefore, as the second contribution, we create four benchmark datasets from different sources with various properties regarding scale, structure, attribute/label availability, etc., enabling handy and fair evaluations of HNE algorithms. As the third contribution, we carefully refactor and amend the implementations of 13 popular HNE algorithms, create friendly interfaces for them, and provide all-around comparisons among them over multiple tasks and experimental settings.
