
Zero-shot inference is a powerful paradigm that enables the use of large pretrained models for downstream classification tasks without further training. However, these models are vulnerable to inherited biases that can impact their performance. The traditional solution is fine-tuning, but this undermines the key advantage of pretrained models, which is their ability to be used out-of-the-box. We propose RoboShot, a method that improves the robustness of pretrained model embeddings in a fully zero-shot fashion. First, we use language models (LMs) to obtain useful insights from task descriptions. These insights are embedded and used to remove harmful components and boost useful components in embeddings, without any supervision. Theoretically, we provide a simple and tractable model of biases in zero-shot embeddings and give a result characterizing the conditions under which our approach can boost performance. Empirically, we evaluate RoboShot on nine image and NLP classification tasks and show an average improvement of 15.98% in worst-group accuracy, with only a trivial decrease in overall accuracy, over several zero-shot baselines. Additionally, we demonstrate that RoboShot is compatible with a variety of pretrained and language models, and we propose a way to further boost performance with a zero-shot adaptation variant.
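
As a concrete illustration of the embedding operation described above, here is a minimal numpy sketch: harmful insight directions are projected out of the embedding and helpful ones are amplified. The concept vectors and the single-direction projection scheme are illustrative assumptions, not RoboShot's exact procedure.

    import numpy as np

    def reject(v, u):
        # Remove the component of v along direction u (vector rejection).
        u = u / np.linalg.norm(u)
        return v - (v @ u) * u

    def boost(v, u, alpha=1.0):
        # Amplify the component of v along direction u.
        u = u / np.linalg.norm(u)
        return v + alpha * (v @ u) * u

    # x: an embedding from a pretrained model (e.g., a CLIP image vector).
    # harmful / helpful: embedded insight strings obtained from an LM, e.g.
    # embed("water background") vs. embed("bird shape"); random stand-ins here.
    rng = np.random.default_rng(0)
    x, harmful, helpful = rng.normal(size=(3, 512))

    x_robust = boost(reject(x, harmful), helpful)
    x_robust /= np.linalg.norm(x_robust)  # re-normalize before cosine scoring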

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum where participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
March 22, 2024

We propose a data-driven control method for systems with aleatoric uncertainty, for example, robot fleets with variations between agents. Our method leverages shared trajectory data to increase the robustness of the designed controller and thus facilitates transfer to new variations without prior parameter and uncertainty estimation. In contrast to existing work on experience transfer for performance, our approach focuses on robustness and uses data collected from multiple realizations to guarantee generalization to unseen ones. Our method is based on scenario optimization combined with recent formulations for direct data-driven control. We derive lower bounds on the amount of data required to achieve quadratic stability for probabilistic systems with aleatoric uncertainty, and we demonstrate the benefits of our data-driven method through a numerical example. We find that the learned controllers generalize well to high variations in the dynamics even when based on only a few short open-loop trajectories. Robust experience transfer enables the design of safe and robust controllers that work out of the box, without any additional learning during deployment.
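
To make the scenario idea concrete: a robust constraint over all uncertainty realizations is replaced by constraints on N sampled realizations, and classical scenario bounds then certify generalization to unseen ones. Below is a minimal sketch that computes the Campi-Garatti sample size for a convex scenario program with d decision variables; the paper's own bounds for data-driven quadratic stability may differ.

    from scipy.stats import binom

    def scenario_sample_size(eps, beta, d):
        # Smallest N such that sum_{i<d} C(N,i) eps^i (1-eps)^(N-i) <= beta,
        # i.e., with confidence 1-beta the scenario solution violates the
        # robust constraint with probability at most eps.
        N = d
        while binom.cdf(d - 1, N, eps) > beta:
            N += 1
        return N

    # e.g., 6 free controller/Lyapunov parameters, 5% violation, 99.9% confidence
    print(scenario_sample_size(eps=0.05, beta=1e-3, d=6))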

Prompt tuning for pretrained masked language models (MLMs) has shown promising performance on natural language processing tasks with few labeled examples. It tunes a prompt for the downstream task, with a verbalizer bridging the predicted tokens and the label predictions. Because training data is limited, prompt initialization is crucial for prompt tuning. Recently, MetaPrompting (Hou et al., 2022) used meta-learning to learn a shared initialization for all task-specific prompts. However, a single initialization is insufficient to obtain good prompts for all tasks and samples when the tasks are complex. Moreover, MetaPrompting requires tuning the whole MLM, imposing a heavy computation and memory burden since MLMs are usually large. To address these issues, we use a prompt pool to extract more task knowledge and construct instance-dependent prompts via attention. We further propose a novel soft verbalizer (RepVerb), which constructs label embeddings directly from feature embeddings. Combining meta-learning of the prompt pool with RepVerb, we propose MetaPrompter for effective structured prompting. MetaPrompter is parameter-efficient, as only the pool needs to be tuned. Experimental results demonstrate that MetaPrompter outperforms recent state-of-the-art methods and that RepVerb outperforms existing soft verbalizers.
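
A rough PyTorch sketch of the two ingredients named above: attention over a pool of soft prompts to build an instance-dependent prompt, and a RepVerb-style verbalizer that forms each label's embedding directly from feature embeddings (here, as a mean of per-class features, which is an assumption about the exact construction).

    import torch
    import torch.nn.functional as F

    P, L, D, C = 8, 4, 768, 5          # pool size, prompt length, dim, classes
    prompt_pool = torch.randn(P, L, D, requires_grad=True)  # the only tuned part

    def instance_prompt(query_emb):
        # Attend over the pool with the instance embedding as the query.
        keys = prompt_pool.mean(dim=1)                           # (P, D)
        attn = F.softmax(query_emb @ keys.T / D ** 0.5, dim=-1)  # (P,)
        return torch.einsum("p,pld->ld", attn, prompt_pool)      # (L, D) prompt

    def repverb_logits(feat, support_feats, support_labels):
        # Soft verbalizer: label embedding = mean feature of that label's
        # examples; classify by cosine similarity, no hand-picked label words.
        label_emb = torch.stack([support_feats[support_labels == c].mean(0)
                                 for c in range(C)])             # (C, D)
        return F.cosine_similarity(feat[None], label_emb, dim=-1)

    prompt = instance_prompt(torch.randn(D))
    logits = repverb_logits(torch.randn(D), torch.randn(20, D),
                            torch.arange(20) % C)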

Open X-Embodiment Collaboration,Abby O'Neill,Abdul Rehman,Abhiram Maddukuri,Abhishek Gupta,Abhishek Padalkar,Abraham Lee,Acorn Pooley,Agrim Gupta,Ajay Mandlekar,Ajinkya Jain,Albert Tung,Alex Bewley,Alex Herzog,Alex Irpan,Alexander Khazatsky,Anant Rai,Anchit Gupta,Andrew Wang,Anikait Singh,Animesh Garg,Aniruddha Kembhavi,Annie Xie,Anthony Brohan,Antonin Raffin,Archit Sharma,Arefeh Yavary,Arhan Jain,Ashwin Balakrishna,Ayzaan Wahid,Ben Burgess-Limerick,Beomjoon Kim,Bernhard Schölkopf,Blake Wulfe,Brian Ichter,Cewu Lu,Charles Xu,Charlotte Le,Chelsea Finn,Chen Wang,Chenfeng Xu,Cheng Chi,Chenguang Huang,Christine Chan,Christopher Agia,Chuer Pan,Chuyuan Fu,Coline Devin,Danfei Xu,Daniel Morton,Danny Driess,Daphne Chen,Deepak Pathak,Dhruv Shah,Dieter Büchler,Dinesh Jayaraman,Dmitry Kalashnikov,Dorsa Sadigh,Edward Johns,Ethan Foster,Fangchen Liu,Federico Ceola,Fei Xia,Feiyu Zhao,Freek Stulp,Gaoyue Zhou,Gaurav S. Sukhatme,Gautam Salhotra,Ge Yan,Gilbert Feng,Giulio Schiavi,Glen Berseth,Gregory Kahn,Guanzhi Wang,Hao Su,Hao-Shu Fang,Haochen Shi,Henghui Bao,Heni Ben Amor,Henrik I Christensen,Hiroki Furuta,Homer Walke,Hongjie Fang,Huy Ha,Igor Mordatch,Ilija Radosavovic,Isabel Leal,Jacky Liang,Jad Abou-Chakra,Jaehyung Kim,Jaimyn Drake,Jan Peters,Jan Schneider,Jasmine Hsu,Jeannette Bohg,Jeffrey Bingham,Jeffrey Wu,Jensen Gao,Jiaheng Hu,Jiajun Wu,Jialin Wu,Jiankai Sun,Jianlan Luo,Jiayuan Gu,Jie Tan,Jihoon Oh,Jimmy Wu,Jingpei Lu,Jingyun Yang,Jitendra Malik,João Silvério,Joey Hejna,Jonathan Booher,Jonathan Tompson,Jonathan Yang,Jordi Salvador,Joseph J. Lim,Junhyek Han,Kaiyuan Wang,Kanishka Rao,Karl Pertsch,Karol Hausman,Keegan Go,Keerthana Gopalakrishnan,Ken Goldberg,Kendra Byrne,Kenneth Oslund,Kento Kawaharazuka,Kevin Black,Kevin Lin,Kevin Zhang,Kiana Ehsani,Kiran Lekkala,Kirsty Ellis,Krishan Rana,Krishnan Srinivasan,Kuan Fang,Kunal Pratap Singh,Kuo-Hao Zeng,Kyle Hatch,Kyle Hsu,Laurent Itti,Lawrence Yunliang Chen,Lerrel Pinto,Li Fei-Fei,Liam Tan,Linxi "Jim" Fan,Lionel Ott,Lisa Lee,Luca Weihs,Magnum Chen,Marion Lepert,Marius Memmel,Masayoshi Tomizuka,Masha Itkina,Mateo Guaman Castro,Max Spero,Maximilian Du,Michael Ahn,Michael C. Yip,Mingtong Zhang,Mingyu Ding,Minho Heo,Mohan Kumar Srirama,Mohit Sharma,Moo Jin Kim,Naoaki Kanazawa,Nicklas Hansen,Nicolas Heess,Nikhil J Joshi,Niko Suenderhauf,Ning Liu,Norman Di Palo,Nur Muhammad Mahi Shafiullah,Oier Mees,Oliver Kroemer,Osbert Bastani,Pannag R Sanketi,Patrick "Tree" Miller,Patrick Yin,Paul Wohlhart,Peng Xu,Peter David Fagan,Peter Mitrano,Pierre Sermanet,Pieter Abbeel,Priya Sundaresan,Qiuyu Chen,Quan Vuong,Rafael Rafailov,Ran Tian,Ria Doshi,Roberto Martín-Martín,Rohan Baijal,Rosario Scalise,Rose Hendrix,Roy Lin,Runjia Qian,Ruohan Zhang,Russell Mendonca,Rutav Shah,Ryan Hoque,Ryan Julian,Samuel Bustamante,Sean Kirmani,Sergey Levine,Shan Lin,Sherry Moore,Shikhar Bahl,Shivin Dass,Shubham Sonawani,Shuran Song,Sichun Xu,Siddhant Haldar,Siddharth Karamcheti,Simeon Adebola,Simon Guist,Soroush Nasiriany,Stefan Schaal,Stefan Welker,Stephen Tian,Subramanian Ramamoorthy,Sudeep Dasari,Suneel Belkhale,Sungjae Park,Suraj Nair,Suvir Mirchandani,Takayuki Osa,Tanmay Gupta,Tatsuya Harada,Tatsuya Matsushima,Ted Xiao,Thomas Kollar,Tianhe Yu,Tianli Ding,Todor Davchev,Tony Z. Zhao,Travis Armstrong,Trevor Darrell,Trinity Chung,Vidhi Jain,Vincent Vanhoucke,Wei Zhan,Wenxuan Zhou,Wolfram Burgard,Xi Chen,Xiaolong Wang,Xinghao Zhu,Xinyang Geng,Xiyuan Liu,Xu Liangwei,Xuanlin Li,Yao Lu,Yecheng Jason Ma,Yejin Kim,Yevgen Chebotar,Yifan Zhou,Yifeng Zhu,Yilin Wu,Ying Xu,Yixuan Wang,Yonatan Bisk,Yoonyoung Cho,Youngwoon Lee,Yuchen Cui,Yue Cao,Yueh-Hua Wu,Yujin Tang,Yuke Zhu,Yunchu Zhang,Yunfan Jiang,Yunshuang Li,Yunzhu Li,Yusuke Iwasawa,Yutaka Matsuo,Zehan Ma,Zhuo Xu,Zichen Jeff Cui,Zichen Zhang,Zipeng Lin

Large, high-capacity models trained on diverse datasets have shown remarkable success in efficiently tackling downstream applications. In domains from NLP to computer vision, this has led to a consolidation of pretrained models, with general pretrained backbones serving as a starting point for many applications. Can such a consolidation happen in robotics? Conventionally, robotic learning methods train a separate model for every application, every robot, and even every environment. Can we instead train a generalist X-robot policy that can be adapted efficiently to new robots, tasks, and environments? In this paper, we provide datasets in standardized data formats and models to make it possible to explore this possibility in the context of robotic manipulation, alongside experimental results that provide an example of effective X-robot policies. We assemble a dataset from 22 different robots, collected through a collaboration between 21 institutions, demonstrating 527 skills (160266 tasks). We show that a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves the capabilities of multiple robots by leveraging experience from other platforms. More details can be found on the project website //robotics-transformer-x.github.io.

Exploring the application of powerful large language models (LLMs) to the named entity recognition (NER) task has drawn much attention recently. This work pushes the performance boundary of zero-shot NER with LLMs by proposing a training-free self-improving framework, which utilizes an unlabeled corpus to stimulate the self-learning ability of LLMs. First, we use the LLM to make predictions on the unlabeled corpus via self-consistency, obtaining a self-annotated dataset. Second, we explore various strategies for selecting reliable annotations to form a reliable self-annotated dataset. Finally, for each test input, we retrieve demonstrations from the reliable self-annotated dataset and perform inference via in-context learning. Experiments on four benchmarks show substantial performance improvements achieved by our framework. Through comprehensive experimental analysis, we find that increasing the size of the unlabeled corpus or the number of self-improvement iterations does not guarantee further gains, but performance may be boosted by more advanced strategies for reliable annotation selection. Code and data are publicly available at //github.com/Emma1066/Self-Improve-Zero-Shot-NER
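
A minimal sketch of the self-annotation step under stated assumptions: `llm_sample` is a hypothetical callable that returns one sampled set of (span, entity-type) predictions per call; self-consistency keeps only entities that most samples agree on, and the agreement ratio doubles as a reliability score for the selection step.

    from collections import Counter

    def self_consistent_annotate(llm_sample, text, k=5, min_agree=0.8):
        # Sample the LLM k times and keep entities with high agreement.
        votes = Counter()
        for _ in range(k):
            votes.update(llm_sample(text))            # set of (span, type) pairs
        keep = {ent for ent, n in votes.items() if n / k >= min_agree}
        return keep, {ent: votes[ent] / k for ent in keep}

    # Downstream (per the framework): run this over the unlabeled corpus to
    # build a reliable self-annotated pool, then retrieve the most similar
    # annotated sentences as in-context demonstrations for each test input.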

Task-oriented object grasping and rearrangement are critical skills for robots to accomplish different real-world manipulation tasks. However, they remain challenging due to partial observations of the objects and shape variations in categorical objects. In this paper, we propose the Multi-feature Implicit Model (MIMO), a novel object representation that encodes multiple spatial features between a point and an object in an implicit neural field. Training such a model on multiple features ensures that it embeds the object shapes consistently in different aspects, thus improving its performance in object shape reconstruction from partial observation, shape similarity measure, and modeling spatial relations between objects. Based on MIMO, we propose a framework to learn task-oriented object grasping and rearrangement from single or multiple human demonstration videos. The evaluations in simulation show that our approach outperforms the state-of-the-art methods for multi- and single-view observations. Real-world experiments demonstrate the efficacy of our approach in one- and few-shot imitation learning of manipulation tasks.
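
To illustrate what a multi-feature implicit field can look like, here is a small PyTorch sketch: one shared backbone maps a 3D query point plus an object latent code to several spatial features at once. The specific heads (signed distance, offset to the closest surface point, occupancy) are illustrative assumptions, not necessarily MIMO's exact feature set.

    import torch
    import torch.nn as nn

    class MultiFeatureField(nn.Module):
        def __init__(self, latent_dim=256, hidden=256):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU())
            self.sdf = nn.Linear(hidden, 1)      # signed distance to the surface
            self.closest = nn.Linear(hidden, 3)  # offset to closest surface point
            self.occ = nn.Linear(hidden, 1)      # occupancy logit

        def forward(self, points, code):
            # points: (N, 3) query points; code: (1, latent_dim) object latent.
            h = self.backbone(torch.cat([points,
                                         code.expand(len(points), -1)], -1))
            return self.sdf(h), self.closest(h), torch.sigmoid(self.occ(h))

    field = MultiFeatureField()
    sdf, offset, occ = field(torch.randn(128, 3), torch.randn(1, 256))

Training all heads against a shared backbone is what forces the embedding to stay consistent across the different spatial aspects of the object's shape.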

Category-agnostic pose estimation (CAPE) aims to predict keypoints for arbitrary classes given a few support images annotated with keypoints. Existing methods rely only on the features extracted at support keypoints to predict or refine the keypoints on the query image, but a few support feature vectors are local and inadequate for CAPE. Observing that humans can quickly perceive potential keypoints of arbitrary objects, we propose a novel framework for CAPE based on such potential keypoints (named meta-points). Specifically, we maintain learnable embeddings that capture the inherent information of various keypoints; these interact with image feature maps to produce meta-points without any support. The produced meta-points can serve as meaningful potential keypoints for CAPE. Because of the inevitable gap between inherency and annotation, we then use the identities and details offered by the support keypoints to assign and refine the meta-points into the desired keypoints on the query image. In addition, we propose a progressive deformable point decoder and a slacked regression loss for better prediction and supervision. Our novel framework not only reveals the inherency of keypoints but also outperforms existing CAPE methods. Comprehensive experiments and in-depth studies on the large-scale MP-100 dataset demonstrate the effectiveness of our framework.
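
One plausible (DETR-style) reading of the meta-point mechanism, as a PyTorch sketch: K learnable keypoint embeddings cross-attend to the image feature map and regress support-free keypoint locations. The paper's decoder is progressive and deformable, which this sketch omits.

    import torch
    import torch.nn as nn

    class MetaPointHead(nn.Module):
        def __init__(self, num_points=32, dim=256):
            super().__init__()
            # Learnable embeddings capturing inherent keypoint information.
            self.queries = nn.Parameter(torch.randn(num_points, dim))
            self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
            self.to_xy = nn.Linear(dim, 2)   # normalized (x, y) per meta-point

        def forward(self, feat_map):         # feat_map: (B, H*W, dim)
            q = self.queries.unsqueeze(0).expand(feat_map.size(0), -1, -1)
            h, _ = self.attn(q, feat_map, feat_map)   # cross-attend to the image
            return self.to_xy(h).sigmoid()            # (B, K, 2) in [0, 1]

    head = MetaPointHead()
    meta_points = head(torch.randn(2, 64 * 48, 256))  # no support images needed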

The development of autonomous agents which can interact with other agents to accomplish a given task is a core area of research in artificial intelligence and machine learning. Towards this goal, the Autonomous Agents Research Group develops novel machine learning algorithms for autonomous systems control, with a specific focus on deep reinforcement learning and multi-agent reinforcement learning. Research problems include scalable learning of coordinated agent policies and inter-agent communication; reasoning about the behaviours, goals, and composition of other agents from limited observations; and sample-efficient learning based on intrinsic motivation, curriculum learning, causal inference, and representation learning. This article provides a broad overview of the ongoing research portfolio of the group and discusses open problems for future directions.

Social relations are often used to improve recommendation quality when user-item interaction data is sparse in recommender systems. Most existing social recommendation models exploit pairwise relations to mine potential user preferences. However, real-life interactions among users are very complicated, and user relations can be high-order. Hypergraphs provide a natural way to model such complex high-order relations, yet their potential for improving social recommendation is under-explored. In this paper, we fill this gap and propose a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user relations. Technically, each channel in the network encodes a hypergraph that depicts a common high-order user relation pattern via hypergraph convolution. By aggregating the embeddings learned through multiple channels, we obtain comprehensive user representations to generate recommendation results. However, the aggregation operation might also obscure the inherent characteristics of different types of high-order connectivity information. To compensate for this aggregation loss, we innovatively integrate self-supervised learning into the training of the hypergraph convolutional network to regain the connectivity information through hierarchical mutual information maximization. Experimental results on multiple real-world datasets show that the proposed model outperforms SOTA methods, and an ablation study verifies the effectiveness of the multi-channel setting and the self-supervised task. The implementation of our model is available via //github.com/Coder-Yu/RecQ.
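
For reference, a single hypergraph-convolution step in one channel can be sketched as follows. This uses the common degree-normalized two-step propagation (nodes to hyperedges and back), which may differ in detail from the paper's exact propagation rule.

    import numpy as np

    def hypergraph_conv(X, H, Theta):
        # One step of X' = Dv^{-1} H De^{-1} H^T X Theta.
        # X: (n_users, d) embeddings; H: (n_users, n_edges) incidence matrix;
        # each column of H is one hyperedge (a high-order user relation).
        Dv = np.maximum(H.sum(axis=1), 1e-12)        # node degrees
        De = np.maximum(H.sum(axis=0), 1e-12)        # hyperedge degrees
        msg = (H / De) @ (H.T @ X) / Dv[:, None]     # users -> edges -> users
        return msg @ Theta

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 16))
    H = (rng.random((100, 40)) < 0.1).astype(float)  # random incidence, for demo
    X1 = hypergraph_conv(X, H, rng.normal(size=(16, 16)))

Each channel would use its own incidence matrix H built from one relation pattern (e.g., common friends or joint purchases), and the per-channel outputs are then aggregated into the final user representation.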

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
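
The weighting scheme follows directly from the formula above: each class's loss weight is the inverse of its effective number of samples $(1-\beta^{n})/(1-\beta)$, typically normalized so the weights sum to the number of classes. A minimal sketch:

    import numpy as np

    def class_balanced_weights(counts, beta=0.9999):
        # Effective number of samples per class: E_n = (1 - beta^n) / (1 - beta).
        eff_num = (1.0 - np.power(beta, counts)) / (1.0 - beta)
        w = 1.0 / eff_num
        return w * len(counts) / w.sum()   # normalize to sum to #classes

    counts = np.array([5000, 2000, 10, 5])   # long-tailed class counts
    print(class_balanced_weights(counts))    # rare classes get larger weights

    # Usage (PyTorch): pass as the `weight` argument of a standard loss, e.g.
    #   torch.nn.CrossEntropyLoss(weight=torch.tensor(w, dtype=torch.float32))

As beta approaches 0 the weights become uniform (no re-balancing), and as beta approaches 1 they approach inverse-frequency weighting, so beta interpolates between the two classical regimes.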

High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, the large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify the land-cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results produced by the GAN. Experimental results on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy with a small amount of training data.
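
A sketch of the CRF refinement stage using the widely used pydensecrf package: the classifier's per-pixel class probabilities serve as unary potentials, and pairwise terms encourage spatially and spectrally consistent labels. The guide image here is a 3-band false-color composite of the hyperspectral cube, which is an assumption about the setup; the paper's exact CRF formulation may differ.

    import numpy as np
    import pydensecrf.densecrf as dcrf
    from pydensecrf.utils import unary_from_softmax

    def crf_refine(probs, guide_rgb, iters=5):
        # probs: (n_classes, H, W) softmax output of the GAN discriminator
        # guide_rgb: (H, W, 3) uint8 false-color composite of the HSI cube
        n_classes, H, W = probs.shape
        d = dcrf.DenseCRF2D(W, H, n_classes)
        d.setUnaryEnergy(unary_from_softmax(probs))
        d.addPairwiseGaussian(sxy=3, compat=3)                    # smoothness
        d.addPairwiseBilateral(sxy=60, srgb=10, rgbim=guide_rgb,  # appearance
                               compat=10)
        Q = d.inference(iters)
        return np.argmax(Q, axis=0).reshape(H, W)                 # refined map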
