
In recent years, neural models learned through self-supervised pretraining on large-scale multilingual text or speech data have exhibited promising results for under-resourced languages, especially when a relatively large amount of data from related language(s) is available. While this technology has the potential to facilitate tasks carried out in language documentation projects, such as speech transcription, pretraining a multilingual model from scratch for every new language would be highly impractical. We investigate the possibility of adapting an existing multilingual wav2vec 2.0 model for a new language, focusing on actual fieldwork data from a critically endangered language: Ainu. Specifically, we (i) examine the feasibility of leveraging data from similar languages also in fine-tuning; (ii) verify whether the model's performance can be improved by further pretraining on target language data. Our results show that continued pretraining is the most effective method of adapting a wav2vec 2.0 model for a new language, leading to considerable reductions in error rates. Furthermore, we find that if a model pretrained on a related speech variety or on an unrelated language with similar phonological characteristics is available, multilingual fine-tuning using additional data from that language can have a positive impact on speech recognition performance when there is very little labeled data in the target language.
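
To make the adaptation recipe concrete, below is a minimal, hypothetical Python sketch of the supervised fine-tuning stage: a multilingual wav2vec 2.0 encoder gets a freshly initialized CTC head over a small target-language character vocabulary. The Hugging Face transformers API, the checkpoint name, the vocabulary size, and the toy batch are all assumptions for illustration (the paper does not prescribe a toolkit); the continued self-supervised pretraining stage on unlabeled target-language audio would precede this step and is omitted here.

```python
# Minimal sketch (not the paper's training code): fine-tune a multilingual
# wav2vec 2.0 checkpoint with a newly initialized CTC head for a new language.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2FeatureExtractor

vocab_size = 32  # hypothetical character inventory for the target language
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",   # multilingual pretrained encoder
    vocab_size=vocab_size,
    ctc_loss_reduction="mean",
)
model.freeze_feature_encoder()        # common practice when labeled data is scarce

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-xls-r-300m")

# One toy example: 2 seconds of (random) 16 kHz audio and an integer-encoded transcript.
audio = torch.randn(32_000).numpy()
inputs = extractor(audio, sampling_rate=16_000, return_tensors="pt")
labels = torch.tensor([[5, 9, 2, 14, 7]])  # placeholder label ids (< vocab_size)

outputs = model(input_values=inputs.input_values, labels=labels)
outputs.loss.backward()               # one CTC fine-tuning step (optimizer omitted)
print(float(outputs.loss))
```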

Related content


Source:
BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, Dragomir Radev, Eduardo González Ponferrada, Efrat Levkovizh, Ethan Kim, Eyal Bar Natan, Francesco De Toni, Gérard Dupont, Germán Kruszewski, Giada Pistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu, Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, Jörg Frohberg, Joseph Tobing, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro Von Werra, Leon Weber, Long Phan, Loubna Ben allal, Ludovic Tanguy, Manan Dey, Manuel Romero Muñoz, Maraim Masoud, María Grandury, Mario Šaško, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian-Jian Jiang, Minh Chien Vu, Mohammad A. Jauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulaqilla Khamis, Olivier Nguyen, Omar Espejel, Ona de Gibert, Paulo Villegas, Peter Henderson, Pierre Colombo, Priscilla Amuok, Quentin Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Luis López, Rui Ribeiro, Salomey Osei, Sampo Pyysalo, Sebastian Nagel, Shamik Bose, Shamsuddeen Hassan Muhammad, Shanya Sharma, Shayne Longpre, Somaieh Nikpoor, Stanislav Silberberg, Suhas Pai, Sydney Zink, Tiago Timponi Torrent, Timo Schick, Tristan Thrush, Valentin Danchev, Vassilina Nikoulina, Veronika Laippala, Violette Lepercq, Vrinda Prabhu, Zaid Alyafeai, Zeerak Talat, Arun Raja, Benjamin Heinzerling, Chenglei Si, Davut Emre Taşar, Elizabeth Salesky, Sabrina J. Mielke, Wilson Y. Lee, Abheesht Sharma, Andrea Santilli, Antoine Chaffin, Arnaud Stiegler, Debajyoti Datta, Eliza Szczechla, Gunjan Chhablani, Han Wang, Harshit Pandey, Hendrik Strobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, Maged S. Al-shaibani, Matteo Manica, Nihal Nayak, Ryan Teehan, Samuel Albanie, Sheng Shen, Srulik Ben-David, Stephen H. Bach, Taewoon Kim, Tali Bers, Thibault Fevry, Trishala Neeraj, Urmish Thakker, Vikas Raunak, Xiangru Tang, Zheng-Xin Yong, Zhiqing Sun, Shaked Brody, Yallow Uri, Hadar Tojarieh, Adam Roberts, Hyung Won Chung, Jaesung Tae, Jason Phang, Ofir Press, Conglong Li, Deepak Narayanan, Hatim Bourfoune, Jared Casper, Jeff Rasley, Max Ryabinin, Mayank Mishra, Minjia Zhang, Mohammad Shoeybi, Myriam Peyrounette, Nicolas Patry, Nouamane Tazi, Omar Sanseviero, Patrick von Platen, Pierre Cornette, Pierre François Lavallée, Rémi Lacroix, Samyam Rajbhandari, Sanchit Gandhi, Shaden Smith, Stéphane Requena, Suraj Patil, Tim Dettmers, Ahmed Baruwa, Amanpreet Singh, Anastasia Cheveleva, Anne-Laure Ligozat, Arjun Subramonian, Aurélie Névéol, Charles Lovering, Dan Garrette, Deepak Tunuguntla, Ehud Reiter, Ekaterina Taktasheva, Ekaterina Voloshina, Eli Bogdanov, Genta Indra Winata, Hailey Schoelkopf, Jan-Christoph Kalo, Jekaterina Novikova, Jessica Zosa Forde, Jordan Clive, Jungo Kasai, Ken Kawamura, Liam Hazan, Marine Carpuat, Miruna Clinciu, Najoung Kim, Newton Cheng, Oleg Serikov, Omer Antverg, Oskar van der Wal, Rui Zhang, Ruochen Zhang, Sebastian Gehrmann, Shachar Mirkin, Shani Pais, Tatiana Shavrina, Thomas Scialom, Tian Yun, Tomasz Limisiewicz, Verena Rieser, Vitaly Protasov, Vladislav Mikhailov, Yada Pruksachatkun, Yonatan Belinkov, Zachary Bamberger, Zdeněk Kasner, Alice Rueda, Amanda Pestana, Amir Feizpour, Ammar Khan, Amy Faranak, Ana Santos, Anthony Hevia, Antigona Unldreaj, Arash Aghagol, Arezoo Abdollahi, Aycha Tammour, Azadeh HajiHosseini, Bahareh Behroozi, Benjamin Ajibade, Bharat Saxena, Carlos Muñoz Ferrandis, Danish Contractor, David Lansky, Davis David, Douwe Kiela, Duong A. Nguyen, Edward Tan, Emi Baylor, Ezinwanne Ozoani, Fatima Mirza, Frankline Ononiwu, Habib Rezanejad, Hessie Jones, Indrani Bhattacharya, Irene Solaiman, Irina Sedenko, Isar Nejadgholi, Jesse Passmore, Josh Seltzer, Julio Bonis Sanz, Livia Dutra, Mairon Samagaio, Maraim Elbadri, Margot Mieskes, Marissa Gerchick, Martha Akinlolu, Michael McKenna, Mike Qiu, Muhammed Ghauri, Mykola Burynok, Nafis Abrar, Nazneen Rajani, Nour Elkott, Nour Fahmy, Olanrewaju Samuel, Ran An, Rasmus Kromann, Ryan Hao, Samira Alizadeh, Sarmad Shubber, Silas Wang, Sourav Roy, Sylvain Viguier, Thanh Le, Tobi Oyebade, Trieu Le, Yoyo Yang, Zach Nguyen, Abhinav Ramesh Kashyap, Alfredo Palasciano, Alison Callahan, Anima Shukla, Antonio Miranda-Escalada, Ayush Singh, Benjamin Beilharz, Bo Wang, Caio Brito, Chenxi Zhou, Chirag Jain, Chuxin Xu, Clémentine Fourrier, Daniel León Periñán, Daniel Molano, Dian Yu, Enrique Manjavacas, Fabio Barth, Florian Fuhrimann, Gabriel Altay, Giyaseddin Bayrak, Gully Burns, Helena U. Vrabec, Imane Bello, Ishani Dash, Jihyun Kang, John Giorgi, Jonas Golde, Jose David Posada, Karthik Rangasai Sivaraman, Lokesh Bulchandani, Lu Liu, Luisa Shinzato, Madeleine Hahn de Bykhovetz, Maiko Takeuchi, Marc Pàmies, Maria A Castillo, Marianna Nezhurina, Mario Sänger, Matthias Samwald, Michael Cullan, Michael Weinberg, Michiel De Wolf, Mina Mihaljcic, Minna Liu, Moritz Freidank, Myungsun Kang, Natasha Seelam, Nathan Dahlberg, Nicholas Michio Broad, Nikolaus Muellner, Pascale Fung, Patrick Haller, Ramya Chandrasekhar, Renata Eisenberg, Robert Martin, Rodrigo Canalli, Rosaline Su, Ruisi Su, Samuel Cahyawijaya, Samuele Garda, Shlok S Deshmukh, Shubhanshu Mishra, Sid Kiblawi, Simon Ott, Sinee Sang-aroonsiri, Srishti Kumar, Stefan Schweter, Sushil Bharati, Tanmay Laud, Théo Gigant, Tomoya Kainuma, Wojciech Kusa, Yanis Labrak, Yash Shailesh Bajaj, Yash Venkatraman, Yifan Xu, Yingxin Xu, Yu Xu, Zhe Tan, Zhongli Xie, Zifan Ye, Mathilde Bras, Younes Belkada, Thomas Wolf

Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
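
Since the BLOOM checkpoints are openly released, a quick way to try the model is through the Hugging Face transformers API. The sketch below is illustrative only: it uses the small bloom-560m variant to keep the example lightweight; the full 176B model ("bigscience/bloom") follows the same interface but requires far more memory.

```python
# Minimal sketch: prompting an open-access BLOOM checkpoint for greedy generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

prompt = "Translate to French: The weather is nice today.\n"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```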

Recent developments in pre-trained speech representations using self-supervised learning (SSL) have yielded exceptional results on a variety of downstream tasks. One such technique, known as masked predictive coding (MPC), has been employed by some of the highest-performing models. In this study, we investigate the impact of the MPC loss on the type of information learned at various layers of the HuBERT model, using nine probing tasks. Our findings indicate that the amount of content information learned at various layers of the HuBERT model is positively correlated with the MPC loss. We also observe that any speaker-related information learned at intermediate layers of the model is an indirect consequence of the learning process and therefore cannot be controlled through the MPC loss. These findings may serve as inspiration for further research in the speech community, specifically in the development of new pre-training tasks or the exploration of new pre-training criteria that directly preserve both speaker and content information at various layers of a learned model.
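
The layer-wise probing methodology can be pictured with a small sketch, not the paper's exact protocol: extract features from every HuBERT layer, mean-pool them per utterance, and fit a linear probe for some property (e.g. speaker identity or a phone class). The audio, labels, and probe task below are synthetic placeholders.

```python
# Sketch of layer-wise linear probing of HuBERT representations (toy data).
import torch
import numpy as np
from sklearn.linear_model import LogisticRegression
from transformers import HubertModel

model = HubertModel.from_pretrained("facebook/hubert-base-ls960").eval()

n_utts, n_classes = 40, 4
waveforms = torch.randn(n_utts, 16_000)           # 1 s of fake 16 kHz audio each
labels = np.random.randint(0, n_classes, n_utts)  # placeholder probe labels

with torch.no_grad():
    out = model(waveforms, output_hidden_states=True)
# hidden_states: one (n_utts, frames, dim) tensor per layer (plus the input embeddings)
pooled = [h.mean(dim=1).numpy() for h in out.hidden_states]

for layer, feats in enumerate(pooled):
    probe = LogisticRegression(max_iter=1000).fit(feats[:30], labels[:30])
    acc = probe.score(feats[30:], labels[30:])
    print(f"layer {layer:2d}: probe accuracy {acc:.2f}")
```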

Transfer learning is beneficial because the expressive features of models pretrained on large-scale datasets can be finetuned for target tasks with smaller, more domain-specific datasets. However, there is a concern that these pretrained models may come with their own biases, which would propagate into the finetuned model. In this work, we investigate bias conceptualized both as spurious correlations between the target task and a sensitive attribute and as underrepresentation of a particular group in the dataset. Under both notions of bias, we find that (1) models finetuned on top of pretrained models can indeed inherit their biases, but (2) this bias can be corrected through relatively minor interventions on the finetuning dataset, often with a negligible impact on performance. Our findings imply that careful curation of the finetuning dataset is important for reducing biases in a downstream task, and doing so can even compensate for bias in the pretrained model.
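
One example of a "relatively minor intervention" of the kind described above is rebalancing the finetuning set so that a sensitive attribute is no longer spuriously correlated with the target label. The sketch below illustrates the idea on synthetic data; it is not the paper's procedure.

```python
# Break a spurious label-attribute correlation by group-balanced subsampling (toy data).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
label = rng.integers(0, 2, n)
# Spurious correlation: the attribute agrees with the label ~90% of the time.
attribute = np.where(rng.random(n) < 0.9, label, 1 - label)

def balanced_indices(y, a, rng):
    """Subsample so every (label, attribute) group has the same size."""
    groups = [np.where((y == yi) & (a == ai))[0] for yi in (0, 1) for ai in (0, 1)]
    m = min(len(g) for g in groups)
    return np.concatenate([rng.choice(g, size=m, replace=False) for g in groups])

idx = balanced_indices(label, attribute, rng)
print("correlation before:", np.corrcoef(label, attribute)[0, 1].round(2))
print("correlation after: ", np.corrcoef(label[idx], attribute[idx])[0, 1].round(2))
```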

With promising yet saturated results in high-resource settings, low-resource datasets have gradually become popular benchmarks for evaluating the learning ability of advanced neural networks (e.g., BigBench, SuperGLUE). Some models even surpass humans according to benchmark test results. However, we find that there exists a set of hard examples in low-resource settings that challenge neural networks but are not well evaluated, which leads to over-estimated performance. We first give a theoretical analysis of the factors that make low-resource learning difficult. This motivates us to propose a challenging benchmark, hardBench, to better evaluate learning ability; it covers 11 datasets, including 3 computer vision (CV) datasets and 8 natural language processing (NLP) datasets. Experiments on a wide range of models show that neural networks, even pre-trained language models, suffer sharp performance drops on our benchmark, demonstrating its effectiveness in evaluating the weaknesses of neural networks. On the NLP tasks, we surprisingly find that despite better results on traditional low-resource benchmarks, pre-trained networks do not show performance improvements on our benchmark. These results demonstrate that there is still a large robustness gap between existing models and human-level performance.

Transformer-based pretrained language models (T-PTLMs) have achieved great success in almost every NLP task. The evolution of these models started with GPT and BERT. These models are built on top of transformers, self-supervised learning, and transfer learning. Transformer-based PTLMs learn universal language representations from large volumes of text data using self-supervised learning and transfer this knowledge to downstream tasks. These models provide good background knowledge for downstream tasks, which avoids training downstream models from scratch. In this comprehensive survey paper, we initially give a brief overview of self-supervised learning. Next, we explain various core concepts such as pretraining, pretraining methods, pretraining tasks, embeddings, and downstream adaptation methods. We then present a new taxonomy of T-PTLMs and give a brief overview of various benchmarks, both intrinsic and extrinsic. We also summarize various useful libraries for working with T-PTLMs. Finally, we highlight some future research directions that will further improve these models. We strongly believe that this comprehensive survey will serve as a good reference for learning the core concepts and staying up to date with recent developments in T-PTLMs.
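
The "pretrain once, adapt downstream" recipe the survey describes can be summarized in a few lines: load a pretrained Transformer encoder and attach a freshly initialized task head instead of training from scratch. The model name, label count, and toy batch in the sketch below are illustrative assumptions.

```python
# Minimal sketch of downstream adaptation of a pretrained Transformer encoder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["a great movie", "a dull movie"], return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])
loss = model(**batch, labels=labels).loss  # one fine-tuning step's loss (optimizer omitted)
loss.backward()
print(float(loss))
```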

Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objective and training data. Conventional pre-training methods may not be effective enough at knowledge transfer because they make no adaptation for downstream tasks. To solve these problems, we propose a new transfer learning paradigm for GNNs that effectively leverages self-supervised tasks as auxiliary tasks to help the target task. Our method adaptively selects and combines different auxiliary tasks with the target task in the fine-tuning stage. We design an adaptive auxiliary loss weighting model to learn the weights of auxiliary tasks by quantifying the consistency between each auxiliary task and the target task, and we learn this weighting model through meta-learning. Our method can be applied to various transfer learning approaches: it performs well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed method effectively combines auxiliary tasks with the target task and significantly improves performance compared to state-of-the-art methods.
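
As a simplified stand-in for the adaptive weighting idea (not the paper's learned weighting model or its meta-learning procedure), the sketch below weights each auxiliary loss by how consistent its gradient on a shared encoder is with the target-task gradient; auxiliary tasks whose gradients conflict with the target receive zero weight. All modules and data are toy placeholders.

```python
# Gradient-consistency weighting of auxiliary losses (simplified illustration).
import torch
import torch.nn as nn

encoder = nn.Linear(16, 8)                    # stands in for a GNN encoder
target_head = nn.Linear(8, 2)
aux_heads = nn.ModuleList([nn.Linear(8, 4), nn.Linear(8, 3)])

x = torch.randn(32, 16)
y_target = torch.randint(0, 2, (32,))
y_aux = [torch.randint(0, 4, (32,)), torch.randint(0, 3, (32,))]

def encoder_grad(loss):
    g = torch.autograd.grad(loss, encoder.parameters(), retain_graph=True)
    return torch.cat([t.flatten() for t in g])

h = encoder(x)
ce = nn.CrossEntropyLoss()
target_loss = ce(target_head(h), y_target)
aux_losses = [ce(head(h), y) for head, y in zip(aux_heads, y_aux)]

g_t = encoder_grad(target_loss)
weights = [torch.clamp(torch.cosine_similarity(encoder_grad(l), g_t, dim=0), min=0.0)
           for l in aux_losses]               # conflicting tasks get weight 0

total = target_loss + sum(w * l for w, l in zip(weights, aux_losses))
total.backward()                              # one combined update (optimizer omitted)
print([float(w) for w in weights])
```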

Self-supervised learning has been widely used to obtain transferable representations from unlabeled images. In particular, recent contrastive learning methods have shown impressive performance on downstream image classification tasks. While these contrastive methods mainly focus on generating invariant global representations at the image level under semantic-preserving transformations, they are prone to overlooking the spatial consistency of local representations and therefore have limitations as pretraining for localization tasks such as object detection and instance segmentation. Moreover, the aggressively cropped views used in existing contrastive methods can minimize representation distances between semantically different regions of a single image. In this paper, we propose a spatially consistent representation learning algorithm (SCRL) for multi-object and location-specific tasks. In particular, we devise a novel self-supervised objective that tries to produce coherent spatial representations of a randomly cropped local region under geometric translations and zooming operations. On various downstream localization tasks with benchmark datasets, the proposed SCRL shows significant performance improvements over image-level supervised pretraining as well as state-of-the-art self-supervised learning methods.
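
A much-simplified sketch of the spatial-consistency objective (not the official SCRL implementation): the same image regions are pooled from the feature maps of two augmented views with RoIAlign, and their pooled features are pulled together. For brevity the two views here share the same geometry (photometric noise only), so the boxes coincide; the actual method additionally maps boxes through random crops and rescaling.

```python
# Region-level consistency loss between two augmented views (toy setup).
import torch
import torch.nn.functional as F
from torchvision.ops import roi_align

backbone = torch.nn.Conv2d(3, 64, kernel_size=3, padding=1)   # stands in for a CNN encoder

img = torch.rand(1, 3, 64, 64)
view1 = img + 0.1 * torch.randn_like(img)                     # two photometric augmentations
view2 = img + 0.1 * torch.randn_like(img)

f1, f2 = backbone(view1), backbone(view2)                     # (1, 64, 64, 64) feature maps
# A few boxes in image coordinates: (batch_index, x1, y1, x2, y2)
boxes = torch.tensor([[0, 4.0, 4.0, 28.0, 28.0],
                      [0, 20.0, 10.0, 50.0, 40.0]])

r1 = roi_align(f1, boxes, output_size=(1, 1), spatial_scale=1.0).flatten(1)
r2 = roi_align(f2, boxes, output_size=(1, 1), spatial_scale=1.0).flatten(1)

loss = (2 - 2 * F.cosine_similarity(r1, r2, dim=1)).mean()    # BYOL-style regression loss
loss.backward()
print(float(loss))
```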

Recently, contrastive learning (CL) has emerged as a successful method for unsupervised graph representation learning. Most graph CL methods first perform stochastic augmentation on the input graph to obtain two graph views and then maximize the agreement of representations across the two views. Despite the rapid development of graph CL methods, the design of graph augmentation schemes, a crucial component of CL, remains rarely explored. We argue that data augmentation schemes should preserve the intrinsic structure and attributes of the graph, forcing the model to learn representations that are insensitive to perturbations of unimportant nodes and edges. However, most existing methods adopt uniform data augmentation schemes, such as uniformly dropping edges and uniformly shuffling features, leading to suboptimal performance. In this paper, we propose a novel graph contrastive representation learning method with adaptive augmentation that incorporates various priors on the topological and semantic aspects of the graph. Specifically, on the topology level, we design augmentation schemes based on node centrality measures to highlight important connective structures. On the node attribute level, we corrupt node features by adding more noise to unimportant features, forcing the model to recognize the underlying semantic information. We perform extensive node classification experiments on a variety of real-world datasets. Experimental results demonstrate that our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts, which validates the effectiveness of the proposed contrastive framework with adaptive augmentation.
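
A small sketch in the spirit of adaptive augmentation (not the authors' code): edges attached to less central nodes are dropped with higher probability, and feature dimensions deemed unimportant are masked more aggressively. The toy graph, the degree-based centrality, and the magnitude-based feature importance are all simplifying assumptions.

```python
# Centrality-guided edge dropping and importance-guided feature masking (toy graph).
import torch

num_nodes, num_feats = 6, 8
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4, 4, 5],
                           [1, 0, 2, 1, 4, 3, 5, 4]])          # undirected toy graph
x = torch.rand(num_nodes, num_feats)

# Edge importance from node degree centrality (mean of the two endpoints' degrees).
deg = torch.bincount(edge_index.reshape(-1), minlength=num_nodes).float()
edge_cent = (deg[edge_index[0]] + deg[edge_index[1]]) / 2
drop_prob = 1.0 - (edge_cent - edge_cent.min()) / (edge_cent.max() - edge_cent.min() + 1e-9)
drop_prob = 0.7 * drop_prob                                    # cap the maximum drop rate
keep = torch.rand(edge_index.size(1)) >= drop_prob
aug_edge_index = edge_index[:, keep]

# Feature importance: here simply each dimension's mean magnitude; unimportant
# dimensions are zeroed with higher probability.
feat_imp = x.abs().mean(dim=0)
mask_prob = 0.7 * (1.0 - feat_imp / (feat_imp.max() + 1e-9))
mask = torch.rand(num_feats) >= mask_prob
aug_x = x * mask

print(aug_edge_index.shape, mask.int().tolist())
```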

Language model pre-training, such as BERT, has significantly improved the performance of many natural language processing tasks. However, pre-trained language models are usually computationally expensive and memory intensive, so it is difficult to execute them efficiently on resource-restricted devices. To accelerate inference and reduce model size while maintaining accuracy, we first propose a novel transformer distillation method, a knowledge distillation (KD) method specially designed for transformer-based models. By leveraging this new KD method, the rich knowledge encoded in a large teacher BERT can be effectively transferred to a small student, TinyBERT. Moreover, we introduce a new two-stage learning framework for TinyBERT, which performs transformer distillation at both the pre-training and task-specific learning stages. This framework ensures that TinyBERT can capture both the general-domain and task-specific knowledge of the teacher BERT. TinyBERT is empirically effective and achieves results comparable to BERT on the GLUE benchmark, while being 7.5x smaller and 9.4x faster at inference. TinyBERT is also significantly better than state-of-the-art baselines, even with only about 28% of the parameters and 31% of the inference time of those baselines.
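
The distillation objective can be sketched compactly in the TinyBERT style (this is not the released implementation): the student's hidden states are linearly projected to the teacher's width and matched with MSE, attention maps are matched with MSE, and the final logits are matched with a soft cross-entropy. All tensors below are random placeholders standing in for real teacher and student outputs.

```python
# Transformer-distillation losses: hidden-state MSE, attention MSE, soft cross-entropy.
import torch
import torch.nn.functional as F

batch, seq, d_teacher, d_student, heads, classes = 2, 16, 768, 312, 12, 2

teacher_hidden = torch.randn(batch, seq, d_teacher)
student_hidden = torch.randn(batch, seq, d_student, requires_grad=True)
teacher_attn = torch.softmax(torch.randn(batch, heads, seq, seq), dim=-1)
student_attn = torch.softmax(torch.randn(batch, heads, seq, seq, requires_grad=True), dim=-1)
teacher_logits = torch.randn(batch, classes)
student_logits = torch.randn(batch, classes, requires_grad=True)

proj = torch.nn.Linear(d_student, d_teacher)          # learnable projection to teacher width
hidden_loss = F.mse_loss(proj(student_hidden), teacher_hidden)
attn_loss = F.mse_loss(student_attn, teacher_attn)

T = 1.0                                               # distillation temperature
soft_ce = -(F.softmax(teacher_logits / T, dim=-1) *
            F.log_softmax(student_logits / T, dim=-1)).sum(dim=-1).mean()

loss = hidden_loss + attn_loss + soft_ce
loss.backward()
print(float(hidden_loss), float(attn_loss), float(soft_ce))
```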

Pre-trained language representation models such as BERT capture general language representations from large-scale corpora but lack domain-specific knowledge. When reading a domain text, experts make inferences with relevant knowledge. To give machines this capability, we propose a knowledge-enabled language representation model (K-BERT) with knowledge graphs (KGs), in which triples are injected into sentences as domain knowledge. However, too much knowledge incorporation may divert a sentence from its correct meaning, an issue we call knowledge noise (KN). To overcome KN, K-BERT introduces soft-position embeddings and a visible matrix to limit the impact of injected knowledge. Because K-BERT can load model parameters from a pre-trained BERT, it can easily inject domain knowledge simply by being equipped with a KG, without pre-training from scratch. Our investigation reveals promising results on twelve NLP tasks. Especially on domain-specific tasks (including finance, law, and medicine), K-BERT significantly outperforms BERT, which demonstrates that K-BERT is an excellent choice for knowledge-driven problems that require expert knowledge.
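
The soft-position and visible-matrix mechanism can be illustrated with a small, simplified construction (this follows the paper's description but is not its code): tokens injected from a KG triple continue the position count from their anchor entity and are visible only to that entity and to each other, so they cannot disturb the rest of the sentence. The example sentence and KG branch below are illustrative.

```python
# Simplified K-BERT-style soft positions and visible matrix for one injected KG branch.
import numpy as np

sentence = ["tim", "cook", "is", "visiting", "beijing", "now"]
branch_anchor = 1                 # the branch attaches to the entity ending at "cook"
branch = ["ceo", "apple"]         # hypothetical KG triple tail injected after the entity

tokens = sentence[:branch_anchor + 1] + branch + sentence[branch_anchor + 1:]
# Soft positions: the branch continues from its anchor; the main sentence keeps its own count.
soft_pos = list(range(branch_anchor + 1)) \
         + [branch_anchor + 1 + i for i in range(len(branch))] \
         + list(range(branch_anchor + 1, len(sentence)))

n = len(tokens)
branch_idx = set(range(branch_anchor + 1, branch_anchor + 1 + len(branch)))
visible = np.ones((n, n), dtype=int)
for i in branch_idx:
    for j in range(n):
        # Branch tokens see (and are seen by) only the anchor entity and the branch itself.
        if j not in branch_idx and j != branch_anchor:
            visible[i, j] = visible[j, i] = 0

print(list(zip(tokens, soft_pos)))
print(visible)
```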
