The recently released GPT-3 got me interested in the current state of zero-shot and few-shot learning in NLP. While most zero-shot learning research has focused on computer vision, there is also some interesting work in the NLP domain.
I will be writing a series of blog posts covering existing research on zero-shot learning in NLP. In this first post, I will explain the paper "Train Once, Test Anywhere: Zero-Shot Learning for Text Classification" by Pushp et al. Released in December 2017, it was the first to propose a zero-shot learning paradigm for text classification.
What is zero-shot learning?
Zero-shot learning is the ability of a model to detect classes it has never seen during training. It resembles our human ability to generalize and recognize new things without explicit supervision.
For example, suppose we want to do both sentiment classification and news category classification. Normally, we would train or fine-tune a separate model for each dataset. With zero-shot learning, by contrast, you can perform tasks such as sentiment and news classification directly, without any task-specific training.
Train Once, Test Anywhere
The paper proposes a simple approach to zero-shot classification. Instead of classifying a text into one of X classes, the task is reframed as binary classification: deciding whether a text and a class are related.
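Below is a minimal Python sketch of that reformulation, assuming a generic `embed()` lookup for sentence and label vectors and a logistic relatedness layer; these are illustrative placeholders, not the architectures used in the paper.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding lookup; replace with real word/sentence vectors."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(300)

def relatedness(sentence_vec, label_vec, w, b):
    """Binary 'is this text related to this class?' score from a logistic layer."""
    features = np.concatenate([sentence_vec, label_vec])
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

def zero_shot_classify(sentence, candidate_labels, w, b):
    """Score every (possibly unseen) label against the sentence, return the best one."""
    s = embed(sentence)
    scores = {lbl: relatedness(s, embed(lbl), w, b) for lbl in candidate_labels}
    return max(scores, key=scores.get)

# w and b would come from training a binary classifier on (sentence, label, related?) pairs;
# at test time, any new label set can be scored without retraining.
w, b = np.zeros(600), 0.0
print(zero_shot_classify("The team won the championship last night.",
                         ["sports", "politics", "technology"], w, b))
```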
Title:
Confidence-Aware Learning for Deep Neural Networks
Abstract:
Despite the power of deep neural networks across a wide range of tasks, the problem of overconfident predictions limits their practical use in many safety-critical applications. Many new works have been proposed to mitigate this problem, but most of them require additional computational cost at training and/or inference time, or customized architectures that output confidence estimates separately. In this paper, we propose a method of training deep neural networks with a novel loss function, called Correctness Ranking Loss, which explicitly regularizes class probabilities so that they serve as better confidence estimates in terms of the ordinal ranking of correctness. The proposed method is easy to implement and can be applied to existing architectures without any modification. Moreover, its training cost is almost the same as that of a conventional deep classifier, and it outputs reliable predictions with a single inference. Extensive experimental results on classification benchmark datasets show that the proposed method helps networks produce well-ranked confidence estimates. We also demonstrate that it is effective for tasks closely related to confidence estimation: out-of-distribution detection and active learning.
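As a rough illustration of the ranking idea (not the authors' exact formulation), the sketch below adds a pairwise regularizer that pushes samples with a higher correctness proxy to also receive higher softmax confidence. The correctness proxy and the simple roll-based pairing are assumptions made for brevity.

```python
import torch
import torch.nn.functional as F

def correctness_ranking_loss(logits: torch.Tensor,
                             correctness_proxy: torch.Tensor,
                             margin: float = 0.0) -> torch.Tensor:
    """logits: (B, C) class scores; correctness_proxy: (B,) values in [0, 1]."""
    confidence = F.softmax(logits, dim=-1).max(dim=-1).values   # predicted confidence
    # Pair each sample with its neighbour in the batch (one simple pairing choice).
    conf_a, conf_b = confidence, confidence.roll(1, dims=0)
    corr_a, corr_b = correctness_proxy, correctness_proxy.roll(1, dims=0)
    # target = +1 if sample a should be more confident than b, -1 if less, 0 if tied.
    target = torch.sign(corr_a - corr_b)
    return F.margin_ranking_loss(conf_a, conf_b, target, margin=margin)

# Typically combined with the usual cross-entropy, e.g.
#   loss = F.cross_entropy(logits, labels) + lam * correctness_ranking_loss(logits, proxy)
```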
This paper surveys applications of meta-learning in image classification, natural language processing, robotics, and other areas. Unlike deep learning, meta-learning works with smaller sample datasets and focuses on further improving model generalization to obtain higher prediction accuracy. We group meta-learning models into three categories: black-box adaptation models, similarity-based methods, and models of the meta-learning procedure. Recent applications concentrate on combining meta-learning with Bayesian deep learning and reinforcement learning to provide feasible, integrated problem solutions. We present a performance comparison of meta-learning methods and discuss directions for future research.
Meta-learning has been proposed as a framework to address the challenging few-shot learning setting. The key idea is to leverage a large number of similar few-shot tasks in order to learn how to adapt a base learner to a new task for which only a few labeled samples are available. As deep neural networks (DNNs) tend to overfit when trained on only a few samples, meta-learning typically uses shallow neural networks (SNNs), which limits its effectiveness. This paper proposes a novel few-shot learning method called meta-transfer learning (MTL). Specifically, "meta" refers to training on multiple tasks, and "transfer" is achieved by learning scaling and shifting functions of the DNN weights for each task. In addition, we introduce the hard task (HT) meta-batch scheme as an effective learning curriculum for MTL. We conduct experiments using (5-class, 1-shot) and (5-class, 5-shot) recognition tasks on two challenging few-shot learning benchmarks: miniImageNet and Fewshot-CIFAR100. Extensive comparisons with related works validate that the proposed meta-transfer learning method trained with the HT meta-batch scheme learns well. An ablation study also shows that both components contribute to fast convergence and high accuracy.
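The scaling-and-shifting part can be pictured with the small PyTorch sketch below: pre-trained convolution weights stay frozen, and only lightweight per-channel scale and shift parameters are learned for each new task. Shapes and wiring are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleShiftConv2d(nn.Module):
    def __init__(self, frozen_conv: nn.Conv2d):
        super().__init__()
        self.weight = frozen_conv.weight.detach()            # frozen pre-trained weights
        self.bias = (frozen_conv.bias.detach()
                     if frozen_conv.bias is not None else None)
        out_ch = self.weight.shape[0]
        self.scale = nn.Parameter(torch.ones(out_ch, 1, 1, 1))  # learned per task
        self.shift = nn.Parameter(torch.zeros(out_ch))           # learned per task
        self.stride = frozen_conv.stride
        self.padding = frozen_conv.padding

    def forward(self, x):
        w = self.weight * self.scale                          # scale the frozen filters
        b = self.shift if self.bias is None else self.bias + self.shift
        return F.conv2d(x, w, b, stride=self.stride, padding=self.padding)

# Usage: wrap a pre-trained layer; only .scale and .shift receive gradients.
base = nn.Conv2d(3, 64, kernel_size=3, padding=1)
layer = ScaleShiftConv2d(base)
out = layer(torch.randn(4, 3, 32, 32))
```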
Link:
Code:
Natural Language Processing (NLP), and especially natural language text analysis, has seen great advances in recent times. The use of deep learning has revolutionized text processing techniques and achieved remarkable results. Deep learning architectures such as CNNs, LSTMs, and the more recent Transformer have been used to achieve state-of-the-art results on a variety of NLP tasks. In this work, we survey a host of deep learning architectures for text classification tasks. The work is specifically concerned with the classification of Hindi text. Research on classifying the morphologically rich, low-resource Hindi language written in the Devanagari script has been limited due to the absence of a large labeled corpus. In this work, we use translated versions of English datasets to evaluate models based on CNN, LSTM, and attention. Multilingual pre-trained sentence embeddings based on BERT and LASER are also compared to evaluate their effectiveness for the Hindi language. The paper also serves as a tutorial for popular text classification techniques.
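For a concrete picture of the embedding-plus-classifier setup described above, here is a hedged sketch: encode Hindi sentences with a multilingual sentence-embedding model and fit a simple classifier on top. The model name and toy examples are placeholders; the paper itself compares BERT- and LASER-based embeddings.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

texts = ["यह फिल्म शानदार थी",        # "This film was wonderful"
         "सेवा बहुत खराब थी"]          # "The service was very bad"
labels = [1, 0]                        # toy sentiment labels

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
X = encoder.encode(texts)              # (n_samples, embedding_dim)

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(encoder.encode(["खाना स्वादिष्ट था"])))   # "The food was tasty"
```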
Paper title: Learning Conceptual-Contextual Embeddings for Medical Text
Abstract:
External knowledge is often useful for natural language understanding tasks. This paper introduces a contextual text representation model called Conceptual-Contextual (CC) embeddings, which incorporates structured knowledge into text representations. Unlike entity embedding methods, the approach encodes a knowledge graph into a context model. Like pre-trained language models, CC embeddings can easily be reused for a wide range of tasks. The model effectively encodes the huge UMLS database by leveraging semantic generalizability. Experiments on electronic health records (EHRs) and medical text processing benchmarks show that the model substantially improves performance on supervised medical NLP tasks.
In information retrieval (IR) and related tasks, term weighting approaches typically consider the frequency of the term in the document and in the collection in order to compute a score reflecting the importance of the term for the document. In tasks characterized by the presence of training data (such as text classification) it seems logical that the term weighting function should take into account the distribution (as estimated from training data) of the term across the classes of interest. Although 'supervised term weighting' approaches that use this intuition have been described before, they have failed to show consistent improvements. In this article we analyse the possible reasons for this failure, and call consolidated assumptions into question. Following this criticism we propose a novel supervised term weighting approach that, instead of relying on any predefined formula, learns a term weighting function optimised on the training set of interest; we dub this approach Learning to Weight (LTW). The experiments that we run on several well-known benchmarks, and using different learning methods, show that our method outperforms previous term weighting approaches in text classification.
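A rough sketch of the "learn the weighting function" idea: instead of a fixed formula such as tf-idf, a tiny network maps per-term statistics to a weight and is trained jointly with a linear classifier on the task of interest. The feature set and architecture below are illustrative assumptions, not the LTW model from the paper.

```python
import torch
import torch.nn as nn

class LearnedTermWeighting(nn.Module):
    def __init__(self, vocab_size: int, n_classes: int, n_stats: int = 3):
        super().__init__()
        # Maps statistics such as (tf, idf, class-conditional frequency) -> a weight.
        self.weigher = nn.Sequential(nn.Linear(n_stats, 16), nn.ReLU(),
                                     nn.Linear(16, 1), nn.Softplus())
        self.classifier = nn.Linear(vocab_size, n_classes)

    def forward(self, term_stats: torch.Tensor) -> torch.Tensor:
        # term_stats: (batch, vocab_size, n_stats); all-zero rows for absent terms.
        weights = self.weigher(term_stats).squeeze(-1)       # (batch, vocab_size)
        present = (term_stats.abs().sum(-1) > 0).float()     # mask out absent terms
        doc_vectors = weights * present                       # learned weighted bag-of-words
        return self.classifier(doc_vectors)

# Training would use ordinary cross-entropy on the labeled training set,
# so the weighting function is optimized for the classification task itself.
```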
My notes on Deep Learning for NLP.
Convolutional neural networks (CNNs) have recently emerged as a popular building block for natural language processing (NLP). Despite their success, most existing CNN models employed in NLP share the same learned (and static) set of filters for all input sentences. In this paper, we consider an approach of using a small meta network to learn context-sensitive convolutional filters for text processing. The role of meta network is to abstract the contextual information of a sentence or document into a set of input-aware filters. We further generalize this framework to model sentence pairs, where a bidirectional filter generation mechanism is introduced to encapsulate co-dependent sentence representations. In our benchmarks on four different tasks, including ontology classification, sentiment analysis, answer sentence selection, and paraphrase identification, our proposed model, a modified CNN with context-sensitive filters, consistently outperforms the standard CNN and attention-based CNN baselines. By visualizing the learned context-sensitive filters, we further validate and rationalize the effectiveness of proposed framework.
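A rough PyTorch sketch of the filter-generation idea (dimensions and the mean-pooled sentence summary are assumptions, not the paper's exact architecture): a small meta network reads a summary of the input and emits convolution filters that are applied back to that same input, so every sentence gets its own context-sensitive filters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextSensitiveConv(nn.Module):
    def __init__(self, emb_dim=128, n_filters=64, kernel_size=3):
        super().__init__()
        self.n_filters, self.kernel_size, self.emb_dim = n_filters, kernel_size, emb_dim
        # Meta network: summarize the sentence, then generate convolution filter weights.
        self.meta = nn.Linear(emb_dim, n_filters * emb_dim * kernel_size)

    def forward(self, x):                      # x: (batch, seq_len, emb_dim)
        context = x.mean(dim=1)                # crude sentence summary (an assumption)
        filters = self.meta(context).view(-1, self.n_filters, self.emb_dim, self.kernel_size)
        outputs = []
        for xi, wi in zip(x, filters):         # apply each example's own filters
            # conv1d expects (batch, channels, length); channels = emb_dim here
            yi = F.conv1d(xi.t().unsqueeze(0), wi, padding=self.kernel_size // 2)
            outputs.append(yi.squeeze(0))
        return torch.stack(outputs)            # (batch, n_filters, seq_len)

# Usage on a toy batch of already-embedded sentences:
layer = ContextSensitiveConv()
feats = layer(torch.randn(2, 20, 128))
print(feats.shape)                             # torch.Size([2, 64, 20])
```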