
Open X-Embodiment Collaboration, Abby O'Neill, Abdul Rehman, Abhinav Gupta, Abhiram Maddukuri, Abhishek Gupta, Abhishek Padalkar, Abraham Lee, Acorn Pooley, Agrim Gupta, Ajay Mandlekar, Ajinkya Jain, Albert Tung, Alex Bewley, Alex Herzog, Alex Irpan, Alexander Khazatsky, Anant Rai, Anchit Gupta, Andrew Wang, Andrey Kolobov, Anikait Singh, Animesh Garg, Aniruddha Kembhavi, Annie Xie, Anthony Brohan, Antonin Raffin, Archit Sharma, Arefeh Yavary, Arhan Jain, Ashwin Balakrishna, Ayzaan Wahid, Ben Burgess-Limerick, Beomjoon Kim, Bernhard Schölkopf, Blake Wulfe, Brian Ichter, Cewu Lu, Charles Xu, Charlotte Le, Chelsea Finn, Chen Wang, Chenfeng Xu, Cheng Chi, Chenguang Huang, Christine Chan, Christopher Agia, Chuer Pan, Chuyuan Fu, Coline Devin, Danfei Xu, Daniel Morton, Danny Driess, Daphne Chen, Deepak Pathak, Dhruv Shah, Dieter Büchler, Dinesh Jayaraman, Dmitry Kalashnikov, Dorsa Sadigh, Edward Johns, Ethan Foster, Fangchen Liu, Federico Ceola, Fei Xia, Feiyu Zhao, Felipe Vieira Frujeri, Freek Stulp, Gaoyue Zhou, Gaurav S. Sukhatme, Gautam Salhotra, Ge Yan, Gilbert Feng, Giulio Schiavi, Glen Berseth, Gregory Kahn, Guangwen Yang, Guanzhi Wang, Hao Su, Hao-Shu Fang, Haochen Shi, Henghui Bao, Heni Ben Amor, Henrik I Christensen, Hiroki Furuta, Homanga Bharadhwaj, Homer Walke, Hongjie Fang, Huy Ha, Igor Mordatch, Ilija Radosavovic, Isabel Leal, Jacky Liang, Jad Abou-Chakra, Jaehyung Kim, Jaimyn Drake, Jan Peters, Jan Schneider, Jasmine Hsu, Jay Vakil, Jeannette Bohg, Jeffrey Bingham, Jeffrey Wu, Jensen Gao, Jiaheng Hu, Jiajun Wu, Jialin Wu, Jiankai Sun, Jianlan Luo, Jiayuan Gu, Jie Tan, Jihoon Oh, Jimmy Wu, Jingpei Lu, Jingyun Yang, Jitendra Malik, João Silvério, Joey Hejna, Jonathan Booher, Jonathan Tompson, Jonathan Yang, Jordi Salvador, Joseph J. Lim, Junhyek Han, Kaiyuan Wang, Kanishka Rao, Karl Pertsch, Karol Hausman, Keegan Go, Keerthana Gopalakrishnan, Ken Goldberg, Kendra Byrne, Kenneth Oslund, Kento Kawaharazuka, Kevin Black, Kevin Lin, Kevin Zhang, Kiana Ehsani, Kiran Lekkala, Kirsty Ellis, Krishan Rana, Krishnan Srinivasan, Kuan Fang, Kunal Pratap Singh, Kuo-Hao Zeng, Kyle Hatch, Kyle Hsu, Laurent Itti, Lawrence Yunliang Chen, Lerrel Pinto, Li Fei-Fei, Liam Tan, Linxi "Jim" Fan, Lionel Ott, Lisa Lee, Luca Weihs, Magnum Chen, Marion Lepert, Marius Memmel, Masayoshi Tomizuka, Masha Itkina, Mateo Guaman Castro, Max Spero, Maximilian Du, Michael Ahn, Michael C. Yip, Mingtong Zhang, Mingyu Ding, Minho Heo, Mohan Kumar Srirama, Mohit Sharma, Moo Jin Kim, Naoaki Kanazawa, Nicklas Hansen, Nicolas Heess, Nikhil J Joshi, Niko Suenderhauf, Ning Liu, Norman Di Palo, Nur Muhammad Mahi Shafiullah, Oier Mees, Oliver Kroemer, Osbert Bastani, Pannag R Sanketi, Patrick "Tree" Miller, Patrick Yin, Paul Wohlhart, Peng Xu, Peter David Fagan, Peter Mitrano, Pierre Sermanet, Pieter Abbeel, Priya Sundaresan, Qiuyu Chen, Quan Vuong, Rafael Rafailov, Ran Tian, Ria Doshi, Roberto Martín-Martín, Rohan Baijal, Rosario Scalise, Rose Hendrix, Roy Lin, Runjia Qian, Ruohan Zhang, Russell Mendonca, Rutav Shah, Ryan Hoque, Ryan Julian, Samuel Bustamante, Sean Kirmani, Sergey Levine, Shan Lin, Sherry Moore, Shikhar Bahl, Shivin Dass, Shubham Sonawani, Shubham Tulsiani, Shuran Song, Sichun Xu, Siddhant Haldar, Siddharth Karamcheti, Simeon Adebola, Simon Guist, Soroush Nasiriany, Stefan Schaal, Stefan Welker, Stephen Tian, Subramanian Ramamoorthy, Sudeep Dasari, Suneel Belkhale, Sungjae Park, Suraj Nair, Suvir Mirchandani, Takayuki Osa, Tanmay Gupta, Tatsuya Harada, Tatsuya Matsushima, Ted Xiao, Thomas Kollar, Tianhe Yu, Tianli Ding, Todor Davchev, Tony Z. Zhao, Travis Armstrong, Trevor Darrell, Trinity Chung, Vidhi Jain, Vikash Kumar, Vincent Vanhoucke, Wei Zhan, Wenxuan Zhou, Wolfram Burgard, Xi Chen, Xiangyu Chen, Xiaolong Wang, Xinghao Zhu, Xinyang Geng, Xiyuan Liu, Xu Liangwei, Xuanlin Li, Yansong Pang, Yao Lu, Yecheng Jason Ma, Yejin Kim, Yevgen Chebotar, Yifan Zhou, Yifeng Zhu, Yilin Wu, Ying Xu, Yixuan Wang, Yonatan Bisk, Yongqiang Dou, Yoonyoung Cho, Youngwoon Lee, Yuchen Cui, Yue Cao, Yueh-Hua Wu, Yujin Tang, Yuke Zhu, Yunchu Zhang, Yunfan Jiang, Yunshuang Li, Yunzhu Li, Yusuke Iwasawa, Yutaka Matsuo, Zehan Ma, Zhuo Xu, Zichen Jeff Cui, Zichen Zhang, Zipeng Fu, Zipeng Lin

Large, high-capacity models trained on diverse datasets have shown remarkable success in efficiently tackling downstream applications. In domains ranging from NLP to computer vision, this has led to a consolidation of pretrained models, with general pretrained backbones serving as a starting point for many applications. Can such a consolidation happen in robotics? Conventionally, robotic learning methods train a separate model for every application, every robot, and even every environment. Can we instead train a generalist X-robot policy that can be adapted efficiently to new robots, tasks, and environments? In this paper, we provide datasets in standardized data formats and models to make it possible to explore this possibility in the context of robotic manipulation, alongside experimental results that provide an example of effective X-robot policies. We assemble a dataset from 22 different robots collected through a collaboration between 21 institutions, demonstrating 527 skills (160,266 tasks). We show that a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves the capabilities of multiple robots by leveraging experience from other platforms. More details can be found on the project website: //robotics-transformer-x.github.io.
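To make the idea of a standardized cross-robot data format concrete, here is a minimal illustrative sketch (not the released RT-X pipeline) of how episodes from robots with different action spaces could be normalized into one shared (image, instruction, action) stream; the field names and normalization ranges are assumptions for illustration.

```python
# Minimal sketch (not the released RT-X pipeline): merging episodes from
# different robots into one standardized stream for X-robot training.
# Field names ("image", "action", "instruction") are illustrative assumptions.
import numpy as np

def normalize_action(action, low, high):
    """Rescale a robot-specific action vector to [-1, 1]."""
    return 2.0 * (action - low) / (high - low) - 1.0

def standardize_episode(episode, action_low, action_high):
    """Map one robot's episode into the shared (obs, instruction, action) format."""
    return [
        {
            "image": step["image"],               # per-robot camera frame
            "instruction": step["instruction"],   # natural-language task string
            "action": normalize_action(step["action"], action_low, action_high),
        }
        for step in episode
    ]

# Two toy "robots" with different action dimensionalities and ranges.
robot_a = [{"image": np.zeros((224, 224, 3), np.uint8),
            "instruction": "pick up the block",
            "action": np.array([0.1, -0.2, 0.3, 0.0, 0.0, 0.0, 1.0])}]
robot_b = [{"image": np.zeros((224, 224, 3), np.uint8),
            "instruction": "open the drawer",
            "action": np.array([5.0, -3.0, 2.0, 0.5])}]

mixture = (standardize_episode(robot_a, -1.0, 1.0)
           + standardize_episode(robot_b, np.array([-10, -10, -10, 0.0]),
                                 np.array([10, 10, 10, 1.0])))
print(len(mixture), mixture[1]["action"])
```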

Related Content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Participants come from diverse backgrounds, including researchers, academics, engineers, and industrial professionals. MODELS 2019 is a forum where participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition gives the modeling community an opportunity to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
July 12, 2024

The field of Earth Observations (EO) offers a wealth of data from diverse sensors, presenting a great opportunity for advancing self-supervised multimodal learning. However, current multimodal EO datasets and models focus on a single data type, either mono-date images or time series, which limits their expressivity. We introduce OmniSat, a novel architecture that exploits the spatial alignment between multiple EO modalities to learn expressive multimodal representations without labels. To demonstrate the advantages of combining modalities of different natures, we augment two existing datasets with new modalities. As demonstrated on three downstream tasks (forestry, land cover classification, and crop mapping), OmniSat can learn rich representations in an unsupervised manner, leading to improved performance in the semi- and fully-supervised settings, even when only one modality is available for inference. The code and dataset are available at //github.com/gastruc/OmniSat.
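As a rough illustration of how spatial alignment between modalities can supervise representation learning without labels, the sketch below pulls together spatially co-located patch embeddings from two sensors with a contrastive objective; the encoders, dimensions, and loss form are assumptions, not the OmniSat implementation.

```python
# Illustrative sketch only (not the OmniSat code): a contrastive objective that
# exploits spatial alignment between two EO modalities, so spatially co-located
# patches from different sensors are pulled together without any labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    def __init__(self, in_dim, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))
    def forward(self, x):                           # x: (num_patches, in_dim)
        return F.normalize(self.net(x), dim=-1)

def aligned_contrastive_loss(za, zb, temperature=0.07):
    """InfoNCE over spatially aligned patch pairs: patch i of modality A
    should match patch i of modality B and no other patch."""
    logits = za @ zb.t() / temperature              # (P, P) similarity matrix
    targets = torch.arange(za.size(0))              # aligned patches share the index
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Toy example: 64 co-registered patches from an optical image and a radar time series.
optical = torch.randn(64, 12)                       # e.g. 12 spectral bands per patch
sar     = torch.randn(64, 40)                       # e.g. flattened radar time series per patch
enc_opt, enc_sar = PatchEncoder(12), PatchEncoder(40)
loss = aligned_contrastive_loss(enc_opt(optical), enc_sar(sar))
loss.backward()
```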

Large models represent a groundbreaking advancement in multiple application fields, enabling remarkable achievements across various tasks. However, their unprecedented scale comes with significant computational costs. These models, often consisting of billions of parameters, require vast amounts of computational resources for execution. In particular, the expansive scale and computational demands pose considerable challenges when customizing them for specific downstream tasks, especially on hardware platforms with constrained computational capabilities. Parameter-Efficient Fine-Tuning (PEFT) provides a practical solution by efficiently adapting large models to various downstream tasks. Specifically, PEFT refers to adjusting the parameters of a pre-trained large model to adapt it to a specific task or domain while minimizing the number of additional parameters introduced or computational resources required. This approach is particularly important when dealing with large-scale language models with high parameter counts, since fine-tuning such models from scratch is computationally expensive and resource-intensive, posing considerable challenges for the supporting system platform design. In this survey, we present comprehensive studies of various PEFT algorithms, examining their performance and computational overhead. Moreover, we provide an overview of applications developed using different PEFT algorithms and discuss common techniques employed to mitigate the computation costs of PEFT. In addition to providing an extensive survey from an algorithmic standpoint, we also examine various real-world system designs to investigate the implementation costs associated with different PEFT approaches. This survey serves as an indispensable resource for researchers aiming to understand both the PEFT algorithms and their system implementation, offering detailed ...
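To make the PEFT idea concrete, the sketch below shows one widely used family of methods, LoRA-style low-rank adapters, where the pretrained weights stay frozen and only a small low-rank update is trained; the rank and scaling are arbitrary illustrative choices, not tied to any particular method covered by the survey.

```python
# A minimal sketch of one common PEFT technique (LoRA-style low-rank adapters),
# included only to make the idea concrete; it is not tied to a specific survey
# implementation. Rank and scaling values are arbitrary.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a small trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():            # keep the pretrained weights fixed
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scale

pretrained = nn.Linear(1024, 1024)
adapted = LoRALinear(pretrained)
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
total = sum(p.numel() for p in adapted.parameters())
print(f"trainable params: {trainable} / {total}")   # only the low-rank factors train
```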

Existing learning-based denoising methods typically train models to generalize the image prior from large-scale datasets, suffering from the variability of noise distributions encountered in real-world scenarios. In this work, we propose a new perspective on the denoising challenge by highlighting the distinct separation between noise and image priors. This insight forms the basis for our development of a conditional optimization framework, designed to overcome the constraints of traditional denoising frameworks. To this end, we introduce a Locally Noise Prior Estimation (LoNPE) algorithm, which accurately estimates the noise prior directly from a single raw noisy image. This estimate acts as an explicit representation of the camera sensor's imaging environment, distinct from the image prior of scenes. Additionally, we design an auxiliary learnable LoNPE network tailored for practical application to sRGB noisy images. Leveraging the estimated noise prior, we present a novel Conditional Denoising Transformer (Condformer), which incorporates the noise prior into a conditional self-attention mechanism. This integration allows the Condformer to segment the optimization process into multiple explicit subspaces, significantly enhancing the model's generalization and flexibility. Extensive experimental evaluations on both synthetic and real-world datasets demonstrate that the proposed method achieves superior performance over current state-of-the-art methods. The source code is available at //github.com/YuanfeiHuang/Condformer.
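The following is a hedged sketch of the conditioning idea described above: an estimated noise-prior vector is injected as an extra token so that self-attention over image features is conditioned on it. The dimensions, the two-parameter noise prior, and the layer layout are assumptions rather than the released Condformer code.

```python
# Hedged sketch of the core idea (not the released Condformer): an estimated
# noise-prior vector is injected as an extra token so self-attention is
# conditioned on it. Dimensions and the prior-estimation step are assumptions.
import torch
import torch.nn as nn

class ConditionalSelfAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.prior_proj = nn.Linear(2, dim)         # e.g. (shot, read) noise parameters

    def forward(self, tokens, noise_prior):
        # tokens: (B, N, dim) image-patch features; noise_prior: (B, 2)
        prior_tok = self.prior_proj(noise_prior).unsqueeze(1)     # (B, 1, dim)
        x = torch.cat([prior_tok, tokens], dim=1)                 # prepend the condition
        out, _ = self.attn(x, x, x)
        return out[:, 1:]                                         # drop the prior token

patches = torch.randn(2, 256, 64)                   # 2 images, 16x16 patch tokens
noise_prior = torch.tensor([[0.05, 0.01], [0.20, 0.03]])  # toy per-image noise estimates
denoise_feats = ConditionalSelfAttention()(patches, noise_prior)
print(denoise_feats.shape)                          # torch.Size([2, 256, 64])
```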

Text-to-Image (T2I) models are being increasingly adopted in diverse global communities where they create visual representations of their unique cultures. Current T2I benchmarks primarily focus on faithfulness, aesthetics, and realism of generated images, overlooking the critical dimension of cultural competence. In this work, we introduce a framework to evaluate cultural competence of T2I models along two crucial dimensions: cultural awareness and cultural diversity, and present a scalable approach using a combination of structured knowledge bases and large language models to build a large dataset of cultural artifacts to enable this evaluation. In particular, we apply this approach to build CUBE (CUltural BEnchmark for Text-to-Image models), a first-of-its-kind benchmark to evaluate cultural competence of T2I models. CUBE covers cultural artifacts associated with 8 countries across different geo-cultural regions and along 3 concepts: cuisine, landmarks, and art. CUBE consists of 1) CUBE-1K, a set of high-quality prompts that enable the evaluation of cultural awareness, and 2) CUBE-CSpace, a larger dataset of cultural artifacts that serves as grounding to evaluate cultural diversity. We also introduce cultural diversity as a novel T2I evaluation component, leveraging the quality-weighted Vendi score. Our evaluations reveal significant gaps in the cultural awareness of existing models across countries and provide valuable insights into the cultural diversity of T2I outputs for under-specified prompts. Our methodology is extendable to other cultural regions and concepts, and can facilitate the development of T2I models that better cater to the global population.
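For readers unfamiliar with the diversity metric, the sketch below computes the Vendi score (the exponential of the entropy of the eigenvalues of a normalized similarity kernel) and a quality-weighted variant that scales it by mean per-sample quality; the exact weighting and embeddings used in CUBE are assumptions here.

```python
# Sketch of the diversity metric: the Vendi score is the exponential of the
# entropy of the eigenvalues of a normalized similarity kernel, and the
# quality-weighted variant scales it by mean sample quality.
# The exact weighting used in CUBE is an assumption in this sketch.
import numpy as np

def vendi_score(embeddings):
    """Vendi score from L2-normalized sample embeddings."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    k = z @ z.T / len(z)                            # normalized similarity kernel, trace = 1
    eigvals = np.linalg.eigvalsh(k)
    eigvals = eigvals[eigvals > 1e-12]
    return float(np.exp(-np.sum(eigvals * np.log(eigvals))))

def quality_weighted_vendi(embeddings, qualities):
    """Diversity discounted by average per-sample quality in [0, 1]."""
    return float(np.mean(qualities)) * vendi_score(embeddings)

rng = np.random.default_rng(0)
images = rng.normal(size=(50, 512))                 # toy image embeddings
quality = rng.uniform(0.6, 1.0, size=50)            # toy faithfulness scores
print(quality_weighted_vendi(images, quality))
```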

Large-scale LiDAR mapping and localization leverage place recognition techniques to mitigate odometry drift, ensuring accurate mapping. These techniques utilize scene representations from LiDAR point clouds to identify previously visited sites within a database. Local descriptors, assigned to each point within a point cloud, are aggregated to form a scene representation for the point cloud. These descriptors are also used to re-rank the retrieved point clouds based on geometric fitness scores. We propose SALSA, a novel, lightweight, and efficient framework for LiDAR place recognition. It consists of a Sphereformer backbone that uses radial window attention to enable information aggregation for sparse distant points, an adaptive self-attention layer to pool local descriptors into tokens, and a multi-layer-perceptron Mixer layer for aggregating the tokens to generate a scene descriptor. The proposed framework outperforms existing methods on various LiDAR place recognition datasets in terms of both retrieval and metric localization while operating in real-time.
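The sketch below illustrates the aggregation stage in simplified form: learned queries attention-pool per-point local descriptors into a handful of tokens, and a small MLP-Mixer-style block fuses the tokens into a single scene descriptor. Token counts, dimensions, and layer choices are illustrative assumptions, not the SALSA release.

```python
# Simplified sketch of the aggregation stage (not the SALSA release): learned
# queries attention-pool the per-point local descriptors into a few tokens,
# and a small Mixer-style block fuses the tokens into one scene descriptor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DescriptorAggregator(nn.Module):
    def __init__(self, dim=128, num_tokens=8, out_dim=256):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_tokens, dim) * 0.02)
        self.pool = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.token_mix = nn.Linear(num_tokens, num_tokens)      # mix across tokens
        self.channel_mix = nn.Linear(dim, dim)                  # mix across channels
        self.head = nn.Linear(num_tokens * dim, out_dim)

    def forward(self, local_desc):                              # (B, N_points, dim)
        q = self.queries.unsqueeze(0).expand(local_desc.size(0), -1, -1)
        tokens, _ = self.pool(q, local_desc, local_desc)        # (B, num_tokens, dim)
        tokens = tokens + self.token_mix(tokens.transpose(1, 2)).transpose(1, 2)
        tokens = tokens + self.channel_mix(tokens)
        return F.normalize(self.head(tokens.flatten(1)), dim=-1)  # scene descriptor

scan_descriptors = torch.randn(2, 4096, 128)        # two LiDAR scans, 4096 local descriptors
scene = DescriptorAggregator()(scan_descriptors)
print(scene.shape)                                  # torch.Size([2, 256])
```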

Large Language Models (LLMs) have made great strides in recent years, achieving unprecedented performance across different tasks. However, due to commercial interests, the most competitive models like GPT, Gemini, and Claude have been gated behind proprietary interfaces without disclosing training details. Recently, many institutions have open-sourced several strong LLMs like LLaMA-3, comparable to existing closed-source LLMs. However, only the model weights are provided, with most details (e.g., intermediate checkpoints, pre-training corpus, and training code) left undisclosed. To improve the transparency of LLMs, the research community has moved to open-source truly open LLMs (e.g., Pythia, Amber, OLMo), for which more details (e.g., the pre-training corpus and training code) are provided. These models have greatly advanced the scientific study of large models, including their strengths, weaknesses, biases, and risks. However, we observe that existing truly open LLMs are still inferior to state-of-the-art LLMs of similar size on reasoning, knowledge, and coding tasks. To this end, we open-source MAP-Neo, a highly capable and transparent bilingual language model with 7B parameters trained from scratch on 4.5T high-quality tokens. MAP-Neo is the first fully open-sourced bilingual LLM with performance comparable to existing state-of-the-art LLMs. Moreover, we open-source all details needed to reproduce MAP-Neo, including the cleaned pre-training corpus, data cleaning pipeline, checkpoints, and a well-optimized training/evaluation framework. Finally, we hope MAP-Neo will strengthen the open research community and inspire more innovation and creativity to facilitate further improvements of LLMs.

Continuous-time dynamic graphs (CTDGs) are essential for modeling interconnected, evolving systems. Traditional methods for extracting knowledge from these graphs often depend on feature engineering or deep learning. Feature engineering is limited by the manual and time-intensive nature of crafting features, while deep learning approaches suffer from high inference latency, making them impractical for real-time applications. This paper introduces Deep-Graph-Sprints (DGS), a novel deep learning architecture designed for efficient representation learning on CTDGs with low-latency inference requirements. We benchmark DGS against state-of-the-art feature engineering and graph neural network methods using five diverse datasets. The results indicate that DGS achieves competitive performance while improving inference speed up to 12x compared to other deep learning approaches on our tested benchmarks. Our method effectively bridges the gap between deep representation learning and low-latency application requirements for CTDGs.

We present VEnhancer, a generative space-time enhancement framework that improves existing text-to-video results by adding more detail in the spatial domain and synthesizing detailed motion in the temporal domain. Given a generated low-quality video, our approach can increase its spatial and temporal resolution simultaneously with arbitrary up-sampling scales in space and time through a unified video diffusion model. Furthermore, VEnhancer effectively removes spatial artifacts and temporal flickering in generated videos. To achieve this, building on a pretrained video diffusion model, we train a video ControlNet and inject it into the diffusion model as a condition on low-frame-rate and low-resolution videos. To effectively train this video ControlNet, we design space-time data augmentation as well as video-aware conditioning. Benefiting from these designs, VEnhancer is stable during training and supports an elegant end-to-end training scheme. Extensive experiments show that VEnhancer surpasses existing state-of-the-art video super-resolution and space-time super-resolution methods in enhancing AI-generated videos. Moreover, with VEnhancer, the existing open-source state-of-the-art text-to-video method VideoCrafter-2 reaches the top of the video generation benchmark VBench.
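As one interpretation of the space-time data augmentation described above (not the released VEnhancer pipeline), the sketch below degrades a high-quality clip in both space and time to produce the low-resolution, low-frame-rate condition that a video ControlNet could be trained on; the scales and strides are illustrative.

```python
# Rough sketch of a space-time degradation for building training conditions
# (an interpretation, not the released VEnhancer pipeline): a high-quality clip
# is downsampled in both space and time, matching the kind of low-frame-rate,
# low-resolution input the video ControlNet is conditioned on.
import torch
import torch.nn.functional as F

def space_time_degrade(video, space_scale=4, time_stride=2):
    """video: (T, C, H, W) -> temporally subsampled and spatially downscaled clip."""
    low_fps = video[::time_stride]                  # drop frames to lower the frame rate
    low_res = F.interpolate(low_fps, scale_factor=1.0 / space_scale,
                            mode="bilinear", align_corners=False)
    return low_res                                  # (T // time_stride, C, H/4, W/4)

clip = torch.rand(16, 3, 256, 256)                  # toy 16-frame clip
condition = space_time_degrade(clip)
print(condition.shape)                              # torch.Size([8, 3, 64, 64])
```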

Hierarchical structures are popular in recent vision transformers; however, they require sophisticated designs and massive datasets to work well. In this paper, we explore the idea of nesting basic local transformers on non-overlapping image blocks and aggregating them in a hierarchical way. We find that the block aggregation function plays a critical role in enabling cross-block non-local information communication. This observation leads us to design a simplified architecture that requires minor code changes upon the original vision transformer. The benefits of the proposed judiciously-selected design are threefold: (1) NesT converges faster and requires much less training data to achieve good generalization on both ImageNet and small datasets like CIFAR; (2) when extending our key ideas to image generation, NesT leads to a strong decoder that is 8$\times$ faster than previous transformer-based generators; and (3) we show that decoupling the feature learning and abstraction processes via this nested hierarchy in our design enables constructing a novel method (named GradCAT) for visually interpreting the learned model. Source code is available at //github.com/google-research/nested-transformer.
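A toy sketch of the nesting idea follows: a local transformer runs independently on non-overlapping blocks of patch tokens, each block is pooled to one token, and a second pass over the block tokens provides the cross-block aggregation; block sizes and the pooling-based aggregation are illustrative choices, not the released NesT code.

```python
# Toy sketch of the nesting idea (not the released NesT code): run a local
# transformer independently on non-overlapping blocks of patch tokens, then
# aggregate blocks so information can flow across blocks at the next level.
import torch
import torch.nn as nn

def to_blocks(x, block=4):
    """(B, H*W, D) patch tokens -> (B*num_blocks, block*block, D) local groups."""
    b, n, d = x.shape
    side = int(n ** 0.5)
    x = x.view(b, side // block, block, side // block, block, d)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous()    # group by (block_row, block_col)
    return x.view(-1, block * block, d)

tokens = torch.randn(2, 16 * 16, 96)                # 2 images, 16x16 patch tokens
local_layer = nn.TransformerEncoderLayer(d_model=96, nhead=4, batch_first=True)
blocks = to_blocks(tokens, block=4)                 # (2*16, 16, 96): 16 blocks per image
blocks = local_layer(blocks)                        # attention stays inside each block
# Block aggregation: pool each block to one token, then let blocks talk to each other.
block_tokens = blocks.mean(dim=1).view(2, 16, 96)
aggregated = local_layer(block_tokens)              # cross-block communication
print(aggregated.shape)                             # torch.Size([2, 16, 96])
```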

Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI). Owing to sophisticated pre-training objectives and huge model parameters, large-scale PTMs can effectively capture knowledge from massive labeled and unlabeled data. By storing knowledge into huge parameters and fine-tuning on specific tasks, the rich knowledge implicitly encoded in huge parameters can benefit a variety of downstream tasks, which has been extensively demonstrated via experimental verification and empirical analysis. It is now the consensus of the AI community to adopt PTMs as backbone for downstream tasks rather than learning models from scratch. In this paper, we take a deep look into the history of pre-training, especially its special relation with transfer learning and self-supervised learning, to reveal the crucial position of PTMs in the AI development spectrum. Further, we comprehensively review the latest breakthroughs of PTMs. These breakthroughs are driven by the surge of computational power and the increasing availability of data, towards four important directions: designing effective architectures, utilizing rich contexts, improving computational efficiency, and conducting interpretation and theoretical analysis. Finally, we discuss a series of open problems and research directions of PTMs, and hope our view can inspire and advance the future study of PTMs.
