While several recent works have identified societal-scale and extinction-level risks to humanity arising from artificial intelligence, few have attempted an {\em exhaustive taxonomy} of such risks. Many exhaustive taxonomies are possible, and some are useful -- particularly if they reveal new risks or practical approaches to safety. This paper explores a taxonomy based on accountability: whose actions lead to the risk, are the actors unified, and are they deliberate? We also provide stories to illustrate how the various risk types could each play out, including risks arising from unanticipated interactions of many AI systems, as well as risks from deliberate misuse, for which combined technical and policy solutions are indicated.
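The accountability axes the abstract names (whose actions lead to the risk, whether the actors are unified, whether they are deliberate) can be sketched as a small cross-product of categories. This is a minimal illustration only; the concrete actor categories and field names are assumptions, not taken from the paper.

```python
from dataclasses import dataclass
from itertools import product

# Three accountability axes suggested by the abstract; the concrete
# actor categories below are illustrative assumptions.
ACTORS = ("developers", "operators", "third_parties", "many_interacting_systems")
UNIFIED = (True, False)      # are the responsible actors a single unified party?
DELIBERATE = (True, False)   # is the harmful outcome intended?

@dataclass(frozen=True)
class RiskType:
    actor: str
    unified: bool
    deliberate: bool

# Enumerating the full cross-product shows how many cells such a
# taxonomy would have to cover to be exhaustive.
taxonomy = [RiskType(a, u, d) for a, u, d in product(ACTORS, UNIFIED, DELIBERATE)]
print(len(taxonomy))  # 4 actors x 2 x 2 = 16 cells
```

Deliberate misuse by a unified actor and unanticipated interactions of many AI systems, both mentioned in the abstract, land in different cells of this grid, which is what makes the taxonomy useful for matching risks to technical versus policy responses.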

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one taxonomy, and a full taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of a computational lexicon like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "car" might appear with both parents "vehicle" and "steel structure". To some, however, this merely means that "car" is part of several different taxonomies. A taxonomy might also simply organize things into groups, or be an alphabetical list; in that case, though, the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a wider variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of the structure is a single classification that applies to all objects: the root node. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning proceeds from the general to the more specific.
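The rooted-tree view of a hierarchical taxonomy described above can be sketched directly; the node names here are illustrative examples, not a real classification scheme.

```python
class TaxonomyNode:
    """A node in a hierarchical taxonomy: the root applies to all
    objects, and each child is a more specific classification that
    applies to a subset of its parent's objects."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def depth(self):
        """Levels below this node; reasoning proceeds from the
        general root toward the more specific leaves."""
        if not self.children:
            return 0
        return 1 + max(c.depth() for c in self.children)

# "car" could also appear under several taxonomies (e.g. "vehicle"
# and "steel structure"); a strict tree keeps a single parent.
root = TaxonomyNode("thing", [
    TaxonomyNode("vehicle", [TaxonomyNode("car"), TaxonomyNode("truck")]),
    TaxonomyNode("steel structure"),
])
print(root.depth())  # thing -> vehicle -> car
```

Allowing a node like "car" to have multiple parents turns the tree into a directed acyclic graph, which is the "network structure" generalization mentioned above.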

We propose a study of the constitution of meaning in human-computer interaction based on Turing and Wittgenstein's definitions of thought, understanding, and decision. We show by the comparative analysis of the conceptual similarities and differences between the two authors that the common sense between humans and machines is co-constituted in and from action and that it is precisely in this co-constitution that lies the social value of their interaction. This involves problematizing human-machine interaction around the question of what it means to "follow a rule" to define and distinguish the interpretative modes and decision-making behaviors of each. We conclude that the mutualization of signs that takes place through the human-machine dialogue is at the foundation of the constitution of a computerized society.

Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents--computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent's experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture--observation, planning, and reflection--each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
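The dynamic retrieval step described above is commonly implemented as a weighted score over each stored memory. The sketch below combines recency, importance, and relevance in that spirit; the weights, the decay constant, and the toy 2-D embeddings are illustrative choices, not values from the paper.

```python
import math

def retrieval_score(memory, query_embedding, now,
                    w_recency=1.0, w_importance=1.0, w_relevance=1.0):
    """Score one memory record for retrieval as a weighted sum of
    recency, importance, and relevance. Weights and the 0.995/hour
    decay constant are illustrative assumptions."""
    # Recency: exponential decay in hours since the memory was last accessed.
    hours = (now - memory["last_access"]) / 3600.0
    recency = 0.995 ** hours
    # Importance: a 1-10 score assigned when the memory was stored.
    importance = memory["importance"] / 10.0
    # Relevance: cosine similarity between memory and query embeddings.
    dot = sum(a * b for a, b in zip(memory["embedding"], query_embedding))
    norm = math.sqrt(sum(a * a for a in memory["embedding"])) * \
           math.sqrt(sum(b * b for b in query_embedding))
    relevance = dot / norm if norm else 0.0
    return w_recency * recency + w_importance * importance + w_relevance * relevance

mem = {"last_access": 0.0, "importance": 8, "embedding": [1.0, 0.0]}
print(round(retrieval_score(mem, [1.0, 0.0], now=3600.0), 3))  # 0.995 + 0.8 + 1.0
```

A planner would rank all memories by this score, feed the top few into the language model's context, and let reflection periodically write new high-importance records back into the store.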

Are current language models capable of deception and lie detection? We study this question by introducing a text-based game called $\textit{Hoodwinked}$, inspired by Mafia and Among Us. Players are locked in a house and must find a key to escape, but one player is tasked with killing the others. Each time a murder is committed, the surviving players hold a natural language discussion, then vote to banish one player from the game. We conduct experiments with agents controlled by GPT-3, GPT-3.5, and GPT-4 and find evidence of deception and lie detection capabilities. The killer often denies their crime and accuses others, leading to measurable effects on voting outcomes. More advanced models are more effective killers, outperforming smaller models in 18 of 24 pairwise comparisons. Secondary metrics provide evidence that this improvement is not mediated by different actions, but rather by stronger persuasive skills during discussions. To evaluate the ability of AI agents to deceive humans, we make this game publicly available at https://hoodwinked.ai/.

Progress in artificial intelligence and machine learning over the past decade has been driven by the ability to train larger deep neural networks (DNNs), leading to a compute demand that far exceeds the growth in hardware performance afforded by Moore's law. Training DNNs is an extremely memory-intensive process, requiring not just the model weights but also activations and gradients for an entire minibatch to be stored. The need to provide high-density and low-leakage on-chip memory motivates the exploration of emerging non-volatile memory for training accelerators. Spin-Transfer-Torque MRAM (STT-MRAM) offers several desirable properties for training accelerators, including 3-4x higher density than SRAM, significantly reduced leakage power, high endurance, and reasonable access time. However, MRAM write operations require high write energy and latency due to the need to ensure reliable switching. In this study, we perform a comprehensive device-to-system evaluation and co-optimization of STT-MRAM for efficient ML training accelerator design. We devised a cross-layer simulation framework to evaluate the effectiveness of STT-MRAM as a scratchpad replacing SRAM in a systolic-array-based DNN accelerator. To address the inefficiency of writes in STT-MRAM, we propose to reduce write voltage and duration. To evaluate the ensuing accuracy-efficiency trade-off, we conduct a thorough analysis of the error tolerance of input activations, weights, and errors during training. We propose heterogeneous memory configurations that enable training convergence with good accuracy. We show that MRAM provides up to 15-22x improvement in system-level energy across a suite of DNN benchmarks under iso-capacity and iso-area scenarios. Further optimizing STT-MRAM write operations can provide over 2x improvement in write energy with minimal degradation in application-level training accuracy.
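To first order, the energy of an MRAM write scales with the square of the write voltage times the pulse duration, which is why reducing both (at some cost in switching reliability) pays off quickly. The numbers below are placeholders for illustration, not device measurements from the study.

```python
def write_energy(voltage, duration_ns, resistance_ohm=1000.0):
    """Illustrative first-order model: energy per MRAM write scales
    as V^2 * t / R. All parameter values here are placeholders."""
    return voltage ** 2 * duration_ns * 1e-9 / resistance_ohm  # joules

baseline = write_energy(1.2, 10.0)   # nominal write voltage and pulse width
reduced = write_energy(0.9, 7.0)     # modestly derated voltage and duration
print(round(baseline / reduced, 2))  # >2x energy reduction from modest derating
```

Because the voltage enters quadratically, even a 25% voltage reduction combined with a shorter pulse more than halves the write energy, which is consistent in spirit with the over-2x write-energy improvement reported in the abstract.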

The ability to quickly learn a new task with minimal instruction - known as few-shot learning - is a central aspect of intelligent agents. Classical few-shot benchmarks make use of few-shot samples from a single modality, but such samples may not be sufficient to characterize an entire concept class. In contrast, humans use cross-modal information to learn new concepts efficiently. In this work, we demonstrate that one can indeed build a better ${\bf visual}$ dog classifier by ${\bf read}$ing about dogs and ${\bf listen}$ing to them bark. To do so, we exploit the fact that recent multimodal foundation models such as CLIP are inherently cross-modal, mapping different modalities to the same representation space. Specifically, we propose a simple cross-modal adaptation approach that learns from few-shot examples spanning different modalities. By repurposing class names as additional one-shot training samples, we achieve SOTA results with an embarrassingly simple linear classifier for vision-language adaptation. Furthermore, we show that our approach can benefit existing methods such as prefix tuning, adapters, and classifier ensembling. Finally, to explore other modalities beyond vision and language, we construct the first (to our knowledge) audiovisual few-shot benchmark and use cross-modal training to improve the performance of both image and audio classification.
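The key move described above is that, in a shared embedding space like CLIP's, a class name's text embedding can serve as an extra one-shot training sample alongside few-shot image embeddings. The sketch below builds per-class prototypes that way and classifies by cosine similarity; the toy 2-D vectors stand in for real CLIP features, and a learned linear classifier would replace the nearest-prototype rule in practice.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def class_prototype(image_embeddings, text_embedding):
    """Cross-modal prototype: average the few-shot image embeddings
    together with the class-name text embedding, exploiting the shared
    representation space. (Toy 2-D vectors; real features are ~512-D.)"""
    samples = [normalize(e) for e in image_embeddings] + [normalize(text_embedding)]
    mean = [sum(col) / len(samples) for col in zip(*samples)]
    return normalize(mean)

def classify(query, prototypes):
    """Pick the class whose prototype has the highest cosine similarity."""
    q = normalize(query)
    return max(prototypes, key=lambda c: sum(a * b for a, b in zip(q, prototypes[c])))

protos = {
    "dog": class_prototype([[0.9, 0.1]], [0.8, 0.2]),  # image shot + "dog" text
    "cat": class_prototype([[0.1, 0.9]], [0.2, 0.8]),  # image shot + "cat" text
}
print(classify([0.7, 0.3], protos))  # dog
```

The same averaging extends to a third modality, such as an audio embedding of a bark, by appending it to the sample list for each class.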

Training machines to understand natural language and interact with humans is an elusive and essential task in artificial intelligence. A diversity of dialogue systems has been designed with the rapid development of deep learning techniques, especially the recent pre-trained language models (PrLMs). Among these studies, the fundamental yet challenging type of task is dialogue comprehension, whose role is to teach machines to read and comprehend the dialogue context before responding. In this paper, we review previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task. We summarize the characteristics and challenges of dialogue comprehension in contrast to plain-text reading comprehension. Then, we discuss three typical patterns of dialogue modeling. In addition, we categorize dialogue-related pre-training techniques employed to enhance PrLMs in dialogue scenarios. Finally, we highlight the technical advances of recent years and point out the lessons from empirical analysis and the prospects for a new frontier of research.

Imitation learning aims to extract knowledge from human experts' demonstrations or artificially created agents in order to replicate their behaviors. Its success has been demonstrated in areas such as video games, autonomous driving, robotic simulations, and object manipulation. However, this replication process can be problematic: performance depends heavily on demonstration quality, and most trained agents are limited to performing well only in task-specific environments. In this survey, we provide a systematic review of imitation learning. We first introduce background knowledge from its development history and preliminaries, followed by presenting different taxonomies within imitation learning and key milestones of the field. We then detail challenges in learning strategies and present research opportunities in learning policies from suboptimal demonstrations, voice instructions, and other associated optimization schemes.
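The simplest instance of the idea above is behavioral cloning: treat expert (state, action) pairs as supervised data and fit a policy to them. The sketch below uses a nearest-neighbor lookup as the "model" to keep it self-contained; real systems fit a neural policy, and the demonstration set here is a made-up toy.

```python
def behavioral_cloning_policy(demonstrations):
    """Behavioral cloning in its simplest form: imitate the expert by
    returning the action of the nearest demonstrated state. Illustrates
    the dependence on demonstration quality: the policy can only be as
    good as the (state, action) pairs it was given."""
    def policy(state):
        nearest = min(demonstrations,
                      key=lambda sa: sum((x - y) ** 2 for x, y in zip(sa[0], state)))
        return nearest[1]
    return policy

# Toy expert: move right when x is negative, left when positive.
demos = [((-2.0,), "right"), ((-1.0,), "right"), ((1.0,), "left"), ((2.0,), "left")]
policy = behavioral_cloning_policy(demos)
print(policy((-1.5,)), policy((1.7,)))
```

The task-specificity limitation noted in the abstract is visible here: a state far outside the demonstrated region still snaps to the nearest demonstration, however inappropriate that action may be.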

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.

We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.

Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers while disregarding the explanations. We argue that the explanation for an answer is as important as, or even more important than, the answer itself, since it makes the question-answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), where the computational models are required to generate an explanation with the predicted answer. We first construct a new dataset, and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We have conducted a user study to validate the quality of explanations synthesized by our method. We quantitatively show that the additional supervision from explanations can not only produce insightful textual sentences to justify the answers, but also improve the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.
