
Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.

Related content

In the early hours of March 15, 2023 (Beijing time), OpenAI, the developer of ChatGPT, released GPT-4, a new multimodal pre-trained large model that is more reliable, more creative, and able to handle more nuanced instructions, generating content from both image and text prompts. Specifically, GPT-4 represents a leap over the previous generation: it supports both image and text input, with strong image-understanding capability; it greatly raises the text-input limit, processing more than 25,000 words of text in ChatGPT mode and handling more fine-grained instructions; and the accuracy of its answers has improved significantly.

This paper evaluates the capability of two state-of-the-art artificial intelligence (AI) models, GPT-3.5 and Bard, in generating Java code given a function description. We sourced the descriptions from CodingBat.com, a popular online platform that provides practice problems to learn programming. We compared the Java code generated by both models based on correctness, verified through the platform's own test cases. The results indicate clear differences in the capabilities of the two models. GPT-3.5 demonstrated superior performance, generating correct code for approximately 90.6% of the function descriptions, whereas Bard produced correct code for 53.1% of the functions. While both models exhibited strengths and weaknesses, these findings suggest potential avenues for the development and refinement of more advanced AI-assisted code generation tools. The study underlines the potential of AI in automating and supporting aspects of software development, although further research is required to fully realize this potential.

Artificial general intelligence (AGI) has gained global recognition as a future technology due to the emergence of breakthrough large language models and chatbots such as GPT-4 and ChatGPT, respectively. AGI aims to replicate human intelligence through computer systems, and it is one of the critical technologies with the potential to revolutionize the field of education. Conventional AI models, by contrast, are typically designed for a limited range of tasks, demand significant amounts of domain-specific data for training, and may not always account for the intricate interpersonal dynamics in education. AGI, driven by the recent large pre-trained models, represents a significant leap in the capability of machines to perform tasks that require human-level intelligence, such as reasoning, problem-solving, decision-making, and even understanding human emotions and social interactions. This work reviews AGI's key concepts, capabilities, scope, and potential within future education, including setting educational goals, designing pedagogy and curriculum, and performing assessments. We also provide rich discussions of various ethical issues in education raised by AGI and of how AGI will affect human educators. The development of AGI necessitates interdisciplinary collaborations between educators and AI engineers to advance research and application efforts.

In this study, we investigate the capacity of large language models (LLMs), specifically GPT-3.5, to operationalise natural language descriptions of cooperative, competitive, altruistic, and self-interested behavior in social dilemmas. Our focus is on the iterated Prisoner's Dilemma, a classic example of a non-zero-sum interaction, but our broader research program encompasses a range of experimental economics scenarios, including the ultimatum game, dictator game, and public goods game. Using a within-subject experimental design, we instantiated LLM-generated agents with various prompts that conveyed different cooperative and competitive stances. We then assessed the agents' level of cooperation in the iterated Prisoner's Dilemma, taking into account their responsiveness to the cooperative or defection actions of their partners. Our results provide evidence that LLMs can translate natural language descriptions of altruism and selfishness into appropriate behaviour to some extent, but exhibit limitations in adapting their behavior based on conditioned reciprocity. The observed pattern of increased cooperation with defectors and decreased cooperation with cooperators highlights potential constraints in the LLM's ability to generalize its knowledge about human behavior in social dilemmas. We call upon the research community to further explore the factors contributing to the emergent behavior of LLM-generated agents in a wider array of social dilemmas, examining the impact of model architecture, training parameters, and various partner strategies on agent behavior. As more advanced LLMs like GPT-4 become available, it is crucial to investigate whether they exhibit similar limitations or are capable of more nuanced cooperative behaviors, ultimately fostering the development of AI systems that better align with human values and social norms.
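The interaction structure the abstract describes can be made concrete with a minimal iterated Prisoner's Dilemma harness. This is purely an illustrative sketch with hard-coded strategies and the conventional payoff matrix; the study's actual agents are LLM-prompted, and the function names here are hypothetical.

```python
# Payoff convention: C = cooperate, D = defect.
PAYOFFS = {  # (my_move, partner_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Conditioned reciprocity: cooperate first, then mirror the partner's last move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Purely self-interested baseline."""
    return "D"

def play(agent_a, agent_b, rounds=10):
    """Run an iterated game and return the cumulative payoffs."""
    hist_a, hist_b = [], []  # each entry: (own_move, partner_move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = agent_a(hist_a), agent_b(hist_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append((move_a, move_b))
        hist_b.append((move_b, move_a))
    return score_a, score_b
```

Replacing a hard-coded strategy with a function that prompts an LLM for its next move, given the history, yields the kind of within-subject setup the study describes.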

Automatic Speech Recognition (ASR) systems exhibit the best performance on speech that is similar to that on which they were trained. As such, underrepresented varieties, including regional dialects, minority speakers, and low-resource languages, see much higher word error rates (WERs) than those varieties seen as 'prestigious', 'mainstream', or 'standard'. This can act as a barrier to incorporating ASR technology into the annotation process for large-scale linguistic research, since manually correcting the erroneous automated transcripts can be just as time- and resource-consuming as manual transcription. A deeper understanding of the behaviour of an ASR system is thus beneficial from a speech technology standpoint, in terms of improving ASR accuracy, and from an annotation standpoint, where knowing the likely errors made by an ASR system can aid in this manual correction. This work demonstrates a method of probing an ASR system to discover how it handles phonetic variation across a number of L2 Englishes. Specifically, it shows how particular phonetic realisations which were rare or absent in the system's training data can lead to phoneme-level misrecognitions and contribute to higher WERs. It is demonstrated that the behaviour of the ASR is systematic and consistent across speakers with similar spoken varieties (in this case the same L1), and that phoneme substitution errors are typically in agreement with human annotators. By identifying problematic productions, specific weaknesses can be addressed by sourcing such realisations for training and fine-tuning, thus making the system more robust to pronunciation variation.
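For reference, the WER metric the abstract relies on is the word-level edit distance between a reference transcript and the ASR hypothesis, divided by the reference length. A generic sketch (not the tooling used in the study):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)
```

Because insertions, deletions, and substitutions all count, WER can exceed 1.0 on badly misrecognised utterances.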




In practically every industry today, artificial intelligence is one of the most effective ways for machines to assist humans. Since its inception, researchers around the globe have pioneered applications of artificial intelligence in medicine. Although artificial intelligence may seem to be a 21st-century concept, Alan Turing laid its foundational ideas in the 1940s. Artificial intelligence in medicine has a huge variety of applications that researchers are continually exploring. The tremendous increase in computing and human resources has hastened progress in the 21st century, and it will continue to do so for many years to come. This literature review highlights the emerging field of artificial intelligence in medicine and its current level of development.

Along with the massive growth of the Internet from the 1990s until now, various innovative technologies have been created to bring users breathtaking experiences with more virtual interactions in cyberspace. Many virtual environments with thousands of services and applications, from social networks to virtual gaming worlds, have been developed with immersive experience and digital transformation, but most are incoherent instead of being integrated into a platform. In this context, metaverse, a term formed by combining meta and universe, has been introduced as a shared virtual world that is fueled by many emerging technologies, such as fifth-generation networks and beyond, virtual reality, and artificial intelligence (AI). Among such technologies, AI has shown great importance in processing big data to enhance immersive experience and enable human-like intelligence of virtual agents. In this survey, we make an effort to explore the role of AI in the foundation and development of the metaverse. We first deliver a preliminary of AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse. We then convey a comprehensive investigation of AI-based methods concerning six technical aspects that have potential for the metaverse: natural language processing, machine vision, blockchain, networking, digital twin, and neural interface. Subsequently, several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are studied for deployment in the virtual worlds. Finally, we conclude with the key contributions of this survey and open some future research directions in AI for the metaverse.

Meta-learning, or learning to learn, has gained renewed interest in recent years within the artificial intelligence community. However, meta-learning is incredibly prevalent within nature, has deep roots in cognitive science and psychology, and is currently studied in various forms within neuroscience. The aim of this review is to recast previous lines of research in the study of biological intelligence within the lens of meta-learning, placing these works into a common framework. More recent points of interaction between AI and neuroscience will be discussed, as well as interesting new directions that arise under this perspective.

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
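The abstract's effective-number formula translates directly into a re-weighting scheme. Below is a minimal sketch, assuming the common convention that per-class weights are the inverse of the effective number, normalized so they sum to the number of classes; the function name is illustrative, not from the paper.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class loss weights from the effective number of samples.

    effective_num = (1 - beta**n) / (1 - beta), where n is the
    per-class sample count and beta in [0, 1) is a hyperparameter.
    """
    n = np.asarray(samples_per_class, dtype=np.float64)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num  # rarer classes get larger weights
    return weights / weights.sum() * len(n)  # normalize to sum to #classes
```

At beta = 0 every effective number is 1 and the loss reduces to ordinary unweighted loss; as beta approaches 1 the scheme approaches inverse-frequency re-weighting, so beta interpolates between the two.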
