
BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, Dragomir Radev, Eduardo González Ponferrada, Efrat Levkovizh, Ethan Kim, Eyal Bar Natan, Francesco De Toni, Gérard Dupont, Germán Kruszewski, Giada Pistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu, Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, Jörg Frohberg, Joseph Tobing, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro Von Werra, Leon Weber, Long Phan, Loubna Ben Allal, Ludovic Tanguy, Manan Dey, Manuel Romero Muñoz, Maraim Masoud, María Grandury, Mario Šaško, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian-Jian Jiang, Minh Chien Vu, Mohammad A. Jauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulaqilla Khamis, Olivier Nguyen, Omar Espejel, Ona de Gibert, Paulo Villegas, Peter Henderson, Pierre Colombo, Priscilla Amuok, Quentin Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Luis López, Rui Ribeiro, Salomey Osei, Sampo Pyysalo, Sebastian Nagel, Shamik Bose, Shamsuddeen Hassan Muhammad, Shanya Sharma, Shayne Longpre, Somaieh Nikpoor, Stanislav Silberberg, Suhas Pai, Sydney Zink, Tiago Timponi Torrent, Timo Schick, Tristan Thrush, Valentin Danchev, Vassilina Nikoulina, Veronika Laippala, Violette Lepercq, Vrinda Prabhu, Zaid Alyafeai, Zeerak Talat, Arun Raja, Benjamin Heinzerling, Chenglei Si, Davut Emre Taşar, Elizabeth Salesky, Sabrina J. Mielke, Wilson Y. Lee, Abheesht Sharma, Andrea Santilli, Antoine Chaffin, Arnaud Stiegler, Debajyoti Datta, Eliza Szczechla, Gunjan Chhablani, Han Wang, Harshit Pandey, Hendrik Strobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, Maged S. Al-shaibani, Matteo Manica, Nihal Nayak, Ryan Teehan, Samuel Albanie, Sheng Shen, Srulik Ben-David, Stephen H. Bach, Taewoon Kim, Tali Bers, Thibault Fevry, Trishala Neeraj, Urmish Thakker, Vikas Raunak, Xiangru Tang, Zheng-Xin Yong, Zhiqing Sun, Shaked Brody, Yallow Uri, Hadar Tojarieh, Adam Roberts, Hyung Won Chung, Jaesung Tae, Jason Phang, Ofir Press, Conglong Li, Deepak Narayanan, Hatim Bourfoune, Jared Casper, Jeff Rasley, Max Ryabinin, Mayank Mishra, Minjia Zhang, Mohammad Shoeybi, Myriam Peyrounette, Nicolas Patry, Nouamane Tazi, Omar Sanseviero, Patrick von Platen, Pierre Cornette, Pierre François Lavallée, Rémi Lacroix, Samyam Rajbhandari, Sanchit Gandhi, Shaden Smith, Stéphane Requena, Suraj Patil, Tim Dettmers, Ahmed Baruwa, Amanpreet Singh, Anastasia Cheveleva, Anne-Laure Ligozat, Arjun Subramonian, Aurélie Névéol, Charles Lovering, Dan Garrette, Deepak Tunuguntla, Ehud Reiter, Ekaterina Taktasheva, Ekaterina Voloshina, Eli Bogdanov, Genta Indra Winata, Hailey Schoelkopf, Jan-Christoph Kalo, Jekaterina Novikova, Jessica Zosa Forde, Jordan Clive, Jungo Kasai, Ken Kawamura, Liam Hazan, Marine Carpuat, Miruna Clinciu, Najoung Kim, Newton Cheng, Oleg Serikov, Omer Antverg, Oskar van der Wal, Rui Zhang, Ruochen Zhang, Sebastian Gehrmann, Shachar Mirkin, Shani Pais, Tatiana Shavrina, Thomas Scialom, Tian Yun, Tomasz Limisiewicz, Verena Rieser, Vitaly Protasov, Vladislav Mikhailov, Yada Pruksachatkun, Yonatan Belinkov, Zachary Bamberger, Zdeněk Kasner, Alice Rueda, Amanda Pestana, Amir Feizpour, Ammar Khan, Amy Faranak, Ana Santos, Anthony Hevia, Antigona Unldreaj, Arash Aghagol, Arezoo Abdollahi, Aycha Tammour, Azadeh HajiHosseini, Bahareh Behroozi, Benjamin Ajibade, Bharat Saxena, Carlos Muñoz Ferrandis, Danish Contractor, David Lansky, Davis David, Douwe Kiela, Duong A. Nguyen, Edward Tan, Emi Baylor, Ezinwanne Ozoani, Fatima Mirza, Frankline Ononiwu, Habib Rezanejad, Hessie Jones, Indrani Bhattacharya, Irene Solaiman, Irina Sedenko, Isar Nejadgholi, Jesse Passmore, Josh Seltzer, Julio Bonis Sanz, Livia Dutra, Mairon Samagaio, Maraim Elbadri, Margot Mieskes, Marissa Gerchick, Martha Akinlolu, Michael McKenna, Mike Qiu, Muhammed Ghauri, Mykola Burynok, Nafis Abrar, Nazneen Rajani, Nour Elkott, Nour Fahmy, Olanrewaju Samuel, Ran An, Rasmus Kromann, Ryan Hao, Samira Alizadeh, Sarmad Shubber, Silas Wang, Sourav Roy, Sylvain Viguier, Thanh Le, Tobi Oyebade, Trieu Le, Yoyo Yang, Zach Nguyen, Abhinav Ramesh Kashyap, Alfredo Palasciano, Alison Callahan, Anima Shukla, Antonio Miranda-Escalada, Ayush Singh, Benjamin Beilharz, Bo Wang, Caio Brito, Chenxi Zhou, Chirag Jain, Chuxin Xu, Clémentine Fourrier, Daniel León Periñán, Daniel Molano, Dian Yu, Enrique Manjavacas, Fabio Barth, Florian Fuhrimann, Gabriel Altay, Giyaseddin Bayrak, Gully Burns, Helena U. Vrabec, Imane Bello, Ishani Dash, Jihyun Kang, John Giorgi, Jonas Golde, Jose David Posada, Karthik Rangasai Sivaraman, Lokesh Bulchandani, Lu Liu, Luisa Shinzato, Madeleine Hahn de Bykhovetz, Maiko Takeuchi, Marc Pàmies, Maria A Castillo, Marianna Nezhurina, Mario Sänger, Matthias Samwald, Michael Cullan, Michael Weinberg, Michiel De Wolf, Mina Mihaljcic, Minna Liu, Moritz Freidank, Myungsun Kang, Natasha Seelam, Nathan Dahlberg, Nicholas Michio Broad, Nikolaus Muellner, Pascale Fung, Patrick Haller, Ramya Chandrasekhar, Renata Eisenberg, Robert Martin, Rodrigo Canalli, Rosaline Su, Ruisi Su, Samuel Cahyawijaya, Samuele Garda, Shlok S Deshmukh, Shubhanshu Mishra, Sid Kiblawi, Simon Ott, Sinee Sang-aroonsiri, Srishti Kumar, Stefan Schweter, Sushil Bharati, Tanmay Laud, Théo Gigant, Tomoya Kainuma, Wojciech Kusa, Yanis Labrak, Yash Shailesh Bajaj, Yash Venkatraman, Yifan Xu, Yingxin Xu, Yu Xu, Zhe Tan, Zhongli Xie, Zifan Ye, Mathilde Bras, Younes Belkada, Thomas Wolf

Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
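
Because the model weights are openly released, experimenting with BLOOM takes only a few lines of code. The sketch below is a minimal example assuming the checkpoints are hosted on the Hugging Face Hub under the bigscience organization (the small bloom-560m variant is used here; the full 176B model follows the same API but needs a multi-GPU setup).

```python
# Minimal sketch: prompting an open-access BLOOM checkpoint with Hugging Face transformers.
# Assumes a small variant is available on the Hub as "bigscience/bloom-560m".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # assumed Hub id; swap for another BLOOM size if needed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Translate to French: I like coffee. =>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```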

Related Content

This paper describes the results of SemEval 2023 Task 7 -- Multi-Evidence Natural Language Inference for Clinical Trial Data (NLI4CT) -- consisting of two tasks: a Natural Language Inference (NLI) task and an evidence selection task on clinical trial data. The proposed challenges require multi-hop biomedical and numerical reasoning, which are of significant importance to the development of systems capable of large-scale interpretation and retrieval of medical evidence, to provide personalized evidence-based care. Task 1, the entailment task, received 643 submissions from 40 participants, and Task 2, the evidence selection task, received 364 submissions from 23 participants. The tasks are challenging, with the majority of submitted systems failing to significantly outperform the majority-class baseline on the entailment task, and we observe significantly better performance on the evidence selection task than on the entailment task. Increasing the number of model parameters leads to a direct increase in performance, far more significant than the effect of biomedical pre-training. Future work could explore the limitations of large models for generalization and numerical inference, and investigate methods to augment clinical datasets to allow for more rigorous testing and to facilitate fine-tuning. We envisage that the dataset, models, and results of this task will be useful to the biomedical NLI and evidence retrieval communities. The dataset, competition leaderboard, and website are publicly available.
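
To make the entailment framing concrete, here is a hedged sketch of how a premise (trial evidence) and hypothesis (claim) pair can be scored with an off-the-shelf NLI checkpoint; the checkpoint (roberta-large-mnli) and the example sentences are illustrative stand-ins, not one of the submitted systems or actual NLI4CT data.

```python
# Illustrative only: sequence-pair NLI with a generic off-the-shelf checkpoint.
# Premise/hypothesis text is invented and is not taken from the NLI4CT dataset.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"  # generic NLI checkpoint used as a stand-in
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "Patients in the intervention arm received 20 mg of the study drug daily for 12 weeks."
hypothesis = "The study drug was administered for less than one year."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(dim=-1))])  # CONTRADICTION / NEUTRAL / ENTAILMENT
```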

In recent years, sentiment analysis has gained significant importance in natural language processing. However, most existing models and datasets for sentiment analysis are developed for high-resource languages, such as English and Chinese, leaving low-resource languages, particularly African languages, largely unexplored. The AfriSenti-SemEval 2023 Shared Task 12 aims to fill this gap by evaluating sentiment analysis models on low-resource African languages. In this paper, we present our solution to the shared task, where we employed different multilingual XLM-R models with a classification head, trained on various data, including variants further trained on African dialects and fine-tuned on the target languages. Our team achieved the third-best results in Subtask B, Track 16: Multilingual, demonstrating the effectiveness of our approach. While our model showed relatively good results on multilingual data, it performed poorly in some languages. Our findings highlight the importance of developing more comprehensive datasets and models for low-resource African languages to advance sentiment analysis research. Our solution is also available in a GitHub repository.
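
As a rough illustration of the described setup, the sketch below attaches a classification head to an XLM-R encoder and runs one supervised step; the checkpoint name, three-way label scheme, and example text are placeholder assumptions rather than the submission's exact configuration.

```python
# Sketch: XLM-R encoder with a sentiment classification head (Hugging Face transformers).
# Checkpoint, label set, and example text are placeholders, not the submitted system.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

batch = tokenizer(["I really enjoyed this film."], return_tensors="pt",
                  padding=True, truncation=True)
labels = torch.tensor([2])               # hypothetical gold label: positive

outputs = model(**batch, labels=labels)  # forward pass computes the cross-entropy loss
outputs.loss.backward()                  # a full fine-tuning loop would add an optimizer step
print(float(outputs.loss))
```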

Keeping track of how states and relations of entities change as a text or dialog unfolds is a key prerequisite to discourse understanding. Despite this fact, there have been few systematic investigations into the ability of large language models (LLMs) to track discourse entities. In this work, we present a task to probe to what extent a language model can infer the final state of an entity given an English description of the initial state and a series of state-changing operations. We use this task to first investigate whether Flan-T5, GPT-3 and GPT-3.5 can track the state of entities, and find that only GPT-3.5 models, which have been pretrained on large amounts of code, exhibit this ability. We then investigate whether smaller models pretrained primarily on text can learn to track entities, through finetuning T5 on several training/evaluation splits. While performance degrades for more complex splits, we find that even for splits with almost no lexical overlap between training and evaluation, a finetuned model can often perform non-trivial entity tracking. Taken together, these results suggest that language models can learn to track entities but pretraining on large text corpora alone does not make this capacity surface.
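
For concreteness, the sketch below generates one synthetic probe of the kind described: an initial state, a few state-changing operations, and the expected final state of a queried entity. The box-and-item surface form is an illustrative guess, not necessarily the paper's exact templates.

```python
# Illustrative entity-tracking probe: initial state + operations -> final-state query.
# The textual templates are assumptions for illustration, not the paper's exact format.
import random

def make_example(num_boxes=3, num_ops=2, seed=0):
    rng = random.Random(seed)
    items = ["apple", "book", "coin"]
    state = {i: items[i] for i in range(num_boxes)}              # box index -> item
    sents = [f"Box {i} contains the {item}." for i, item in state.items()]
    for _ in range(num_ops):
        a, b = rng.sample(range(num_boxes), 2)
        sents.append(f"Swap the contents of Box {a} and Box {b}.")
        state[a], state[b] = state[b], state[a]                  # track the true world state
    query = rng.choice(range(num_boxes))
    prompt = " ".join(sents) + f" Box {query} contains the"
    return prompt, state[query]

prompt, target = make_example()
print(prompt)   # text given to the language model
print(target)   # expected completion, e.g. "coin"
```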

Speech AI technologies are largely trained on publicly available datasets or on massive web crawls of speech. In both cases, data acquisition focuses on minimizing collection effort, without necessarily taking the data subjects' protection or user needs into consideration. This results in models that are not robust for users who deviate from the dominant demographics in the training set, discriminating against individuals with different dialects, accents, speaking styles, and disfluencies. In this talk, we use automatic speech recognition as a case study and examine the properties that ethical speech datasets should possess to support responsible AI applications. We showcase diversity issues, inclusion practices, and necessary considerations that can improve trained models, while facilitating model explainability and protecting users and data subjects. We argue for the legal and privacy protection of data subjects, targeted data sampling corresponding to user demographics and needs, appropriate metadata that ensure explainability and accountability in cases of model failure, and sociotechnical and situated model design. We hope this talk can inspire researchers and practitioners to design and use more human-centric datasets in speech technologies and other domains, in ways that empower and respect users, while improving machine learning models' robustness and utility.

Despite recent successes with neural models for sign language translation (SLT), translation quality still lags behind spoken languages because of data scarcity and the modality gap between sign video and text. To address both problems, we investigate strategies for cross-modality representation sharing for SLT. We propose SLTUNET, a simple unified neural model designed to support multiple SLT-related tasks jointly, such as sign-to-gloss, gloss-to-text and sign-to-text translation. Jointly modeling different tasks endows SLTUNET with the capability to explore cross-task relatedness that can help narrow the modality gap. In addition, this allows us to leverage knowledge from external resources, such as abundant parallel data used for spoken-language machine translation (MT). We show in experiments that SLTUNET achieves competitive and even state-of-the-art performance on PHOENIX-2014T and CSL-Daily when augmented with MT data and equipped with a set of optimization techniques. We further use the DGS Corpus for end-to-end SLT for the first time. It covers broader domains with a significantly larger vocabulary, which is more challenging and which we consider to allow for a more realistic assessment of the current state of SLT than the former two. Still, SLTUNET obtains improved results on the DGS Corpus. Code is available at //github.com/bzhangGo/sltunet.
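
The joint-modeling idea can be pictured as one shared sequence-to-sequence model trained on a mixture of task-tagged examples. The sketch below is not SLTUNET itself: it uses a text-only placeholder backbone and invented gloss-to-text and MT pairs, and omits the sign-video encoder entirely, purely to illustrate multi-task sharing.

```python
# Idea sketch: one shared seq2seq model over a mixture of task-tagged examples.
# NOT SLTUNET: text-only placeholder backbone, invented data, no sign-video encoder.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"                       # placeholder backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

mixed_batch = [
    ("gloss2text: MORGEN NORD REGEN", "Tomorrow it will rain in the north."),    # invented pair
    ("mt: Morgen regnet es im Norden.", "Tomorrow it will rain in the north."),  # invented pair
]
sources = tokenizer([s for s, _ in mixed_batch], return_tensors="pt", padding=True)
targets = tokenizer([t for _, t in mixed_batch], return_tensors="pt", padding=True)
labels = targets.input_ids.masked_fill(targets.input_ids == tokenizer.pad_token_id, -100)

loss = model(**sources, labels=labels).loss   # a single loss shared across both tasks
loss.backward()
```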

Large language models (LLMs) have demonstrated remarkable potential in handling multilingual machine translation (MMT). In this paper, we systematically investigate the advantages and challenges of LLMs for MMT by answering two questions: 1) How well do LLMs perform in translating a massive number of languages? 2) Which factors affect LLMs' performance in translation? We evaluate popular LLMs, including XGLM, OPT, BLOOMZ, and ChatGPT, on 102 languages. Our empirical results show that even the best-performing model, ChatGPT, still lags behind the supervised baseline NLLB in 83.33% of translation directions. Through further analysis, we discover that LLMs exhibit new working patterns when used for MMT. First, prompt semantics can surprisingly be ignored when in-context exemplars are given, where LLMs still show strong performance even with unreasonable prompts. Second, cross-lingual exemplars can provide better task instruction for low-resource translation than exemplars in the same language pair. Third, we observe the overestimated performance of BLOOMZ on the Flores-101 dataset, indicating the potential risk of using public datasets for evaluation.
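
The second finding, that cross-lingual exemplars can act as the task instruction, is easy to picture as a prompt-construction step. The sketch below builds such a few-shot prompt; the template wording and exemplar pairs are invented for illustration and are not the paper's exact prompts.

```python
# Building a few-shot MMT prompt whose in-context exemplars come from a different
# (higher-resource) language pair than the test direction. Exemplars are invented.
def build_prompt(exemplars, src_lang, tgt_lang, test_sentence):
    lines = []
    for ex_src_lang, ex_src, ex_tgt_lang, ex_tgt in exemplars:
        lines.append(f"{ex_src_lang}: {ex_src}\n{ex_tgt_lang}: {ex_tgt}")
    lines.append(f"{src_lang}: {test_sentence}\n{tgt_lang}:")
    return "\n\n".join(lines)

# German-English exemplars used to instruct a lower-resource X-to-English direction.
exemplars = [
    ("German", "Das Wetter ist heute schön.", "English", "The weather is nice today."),
    ("German", "Ich habe den Zug verpasst.", "English", "I missed the train."),
]
prompt = build_prompt(exemplars, "Icelandic", "English", "Ég týndi lyklunum mínum.")
print(prompt)  # fed to an LLM such as BLOOMZ or ChatGPT for completion
```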

We present Prompt Diffusion, a framework for enabling in-context learning in diffusion-based generative models. Given a pair of task-specific example images, such as depth from/to image and scribble from/to image, and a text guidance, our model automatically understands the underlying task and performs the same task on a new query image following the text guidance. To achieve this, we propose a vision-language prompt that can model a wide range of vision-language tasks and a diffusion model that takes it as input. The diffusion model is trained jointly over six different tasks using these prompts. The resulting Prompt Diffusion model is the first diffusion-based vision-language foundation model capable of in-context learning. It demonstrates high-quality in-context generation on the trained tasks and generalizes effectively to new, unseen vision tasks with their respective prompts. Our model also shows compelling text-guided image editing results. Our framework, with code publicly available at //github.com/Zhendong-Wang/Prompt-Diffusion, aims to facilitate research into in-context learning for computer vision.

This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x̂, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with little or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only provide a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website at //pretrain.nlpedia.ai/ that includes a constantly updated survey and paper list.
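
To ground the contrast with P(y|x) classification, here is a small sketch of the prompting workflow: wrap the input in a template with an unfilled slot, let a masked language model fill the slot, and map the filled word to a label. The template, answer words, and checkpoint are illustrative choices rather than recommendations from the survey.

```python
# Prompt-based sentiment prediction with a masked LM: template -> fill slot -> map to label.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

x = "I watched it twice in one evening."
prompt = f"{x} Overall, it was a [MASK] movie."        # template with an unfilled slot
verbalizer = {"great": "positive", "good": "positive",
              "terrible": "negative", "bad": "negative"}

# Restrict the fill-in candidates to the answer words and read off the mapped labels.
for cand in fill(prompt, targets=list(verbalizer)):
    word = cand["token_str"].strip()
    print(word, "->", verbalizer[word], round(cand["score"], 4))
```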

We propose UniViLM: a Unified Video and Language pre-training Model for multimodal understanding and generation. Motivated by the recent success of BERT-based pre-training techniques for NLP and image-language tasks, VideoBERT and CBT were proposed to exploit the BERT model for video and language pre-training using narrated instructional videos. Unlike these works, which only pre-train for understanding tasks, we propose a unified video-language pre-training model for both understanding and generation tasks. Our model comprises four components, including two single-modal encoders, a cross encoder, and a decoder, all with a Transformer backbone. We first pre-train our model to learn universal representations for both video and language on a large instructional video dataset. Then we fine-tune the model on two multimodal tasks, including an understanding task (text-based video retrieval) and a generation task (multimodal video captioning). Our extensive experiments show that our method can improve the performance of both understanding and generation tasks and achieves state-of-the-art results.
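
A skeletal PyTorch rendering of the four-component layout may help to visualize the architecture; the layer counts, hidden sizes, fusion-by-concatenation, and feature dimensions below are placeholder assumptions, not the actual UniViLM configuration.

```python
# Skeleton of the described 4-component layout: two single-modal encoders,
# a cross encoder, and a decoder, all Transformer-based. Sizes are placeholders.
import torch
import torch.nn as nn

class VideoLanguageModel(nn.Module):
    def __init__(self, d_model=512, nhead=8, vocab_size=30522, video_dim=1024):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, d_model)       # project precomputed video features
        self.text_embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.video_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.text_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.cross_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, video_feats, text_ids, caption_ids):
        v = self.video_encoder(self.video_proj(video_feats))  # single-modal video encoder
        t = self.text_encoder(self.text_embed(text_ids))      # single-modal text encoder
        fused = self.cross_encoder(torch.cat([v, t], dim=1))  # cross encoder over both modalities
        out = self.decoder(self.text_embed(caption_ids), fused)  # decoder for generation
        return self.lm_head(out)

model = VideoLanguageModel()
logits = model(torch.randn(2, 16, 1024),
               torch.randint(0, 30522, (2, 12)),
               torch.randint(0, 30522, (2, 10)))
print(logits.shape)  # (2, 10, 30522)
```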

The attention model has become an important concept in neural networks and has been researched across diverse application domains. This survey provides a structured and comprehensive overview of developments in modeling attention. In particular, we propose a taxonomy that groups existing techniques into coherent categories. We review the different neural architectures in which attention has been incorporated and also show how attention improves the interpretability of neural models. Finally, we discuss some applications in which modeling attention has a significant impact. We hope this survey will provide a succinct introduction to attention models and guide practitioners while developing approaches for their applications.
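
Since the survey's subject is the attention mechanism itself, a compact reference implementation of scaled dot-product attention may be a useful companion; this is the standard textbook formulation rather than code from the survey.

```python
# Scaled dot-product attention: weights = softmax(Q K^T / sqrt(d_k)), output = weights V.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)        # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the key dimension
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 4)), rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
output, weights = scaled_dot_product_attention(Q, K, V)
print(output.shape, weights.shape)  # (3, 4) (3, 5)
```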
