
Topic: Exploring and Exploiting Interpretable Semantics in GANs

Abstract: Complex machine learning models such as deep convolutional neural networks and recurrent neural networks have recently made remarkable progress in a wide range of computer vision applications, including object/scene recognition, image captioning, and visual question answering. However, they are often treated as black boxes. As models grow deeper in pursuit of better recognition accuracy, it becomes increasingly difficult to understand what predictions a model makes and why. In this course we review our recent advances in visualization, explanation, and interpretation methodologies for analyzing both data and models in computer vision. The main theme of the tutorial is to build consensus on the emerging topic of machine learning interpretability by clarifying its motivation, typical methodologies, prospective trends, and potential industrial applications of the resulting interpretability. This is the first lecture, Exploring and Exploiting Interpretable Semantics in GANs, delivered by Bolei Zhou.
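The central idea in this line of work, editing an image by moving its latent code along an interpretable direction, can be illustrated with a minimal sketch. This is not code from the lecture: the pretrained generator `G` and the semantic `direction` vector (e.g., a direction obtained by fitting a linear boundary on labeled latent codes) are assumed placeholders.

```python
# Minimal sketch of latent-direction editing in a GAN latent space.
# Assumes a pretrained generator G(z) -> image and a learned direction vector.
import torch

def edit_latent(G, z, direction, strengths=(-3.0, 0.0, 3.0)):
    """Generate images from latent codes shifted along one semantic direction."""
    direction = direction / direction.norm()      # normalize so the step scale is comparable
    edited = []
    for alpha in strengths:
        z_shifted = z + alpha * direction         # walk along the semantic axis
        with torch.no_grad():
            edited.append(G(z_shifted))           # re-synthesize the edited image
    return edited
```

If the direction is well aligned with a single semantic (e.g., age or pose), the generated sequence should vary only in that attribute while other content stays fixed.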


Related Content

CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers. CVPR 2020 will take place at The Washington State Convention Center in Seattle, WA, from June 16 to June 20, 2020. //cvpr2020.thecvf.com/

Title: Interpretable Deep Graph Generation with Node-edge Co-disentanglement

Abstract:

Disentangled representation learning has received considerable attention in recent years, especially in the field of image representation learning. However, learning the disentangled representations behind a graph remains largely unexplored, particularly for attributed graphs that carry both node and edge features. Disentangled learning for graph generation poses substantial new challenges, including:

  • the lack of a graph deconvolution operation that jointly decodes node and edge attributes;
  • the difficulty of disentangling the latent factors into those that influence: i) only the nodes; ii) only the edges; iii) the joint patterns between them.

To address these issues, a new disentanglement-enhancement framework for deep generative models of attributed graphs is proposed. In particular, a novel variational objective is proposed to disentangle the above three types of latent factors, together with new node and edge deconvolution architectures. Furthermore, within each type, the disentanglement of individual factors is further enhanced, which is shown to be a generalization of existing frameworks for images. Qualitative and quantitative experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed model and its extensions.
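As a rough illustration of the kind of variational objective described above, the sketch below assigns a separate KL term to each of the three latent groups (node-only, edge-only, and joint), so each group can be regularized toward disentanglement on its own. The reconstruction terms, the `stats` dictionary, and the `betas` weights are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a co-disentanglement-style ELBO with three latent groups.
import torch
import torch.nn.functional as F

def co_disentangled_elbo(x_nodes, x_edges, recon_nodes, recon_edges, stats,
                         betas=(1.0, 1.0, 1.0)):
    """stats maps each latent group ("node", "edge", "joint") to its (mu, logvar)."""
    # Reconstruction of node and edge attributes (placeholder losses).
    recon = F.mse_loss(recon_nodes, x_nodes) + F.mse_loss(recon_edges, x_edges)
    kl_total = 0.0
    for beta, key in zip(betas, ("node", "edge", "joint")):
        mu, logvar = stats[key]
        # Standard Gaussian KL, weighted per latent group to control disentanglement.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
        kl_total = kl_total + beta * kl
    return recon + kl_total
```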


This paper proposes a general method for learning interpretable convolutional filters in deep convolutional neural networks (CNNs) for object classification, where each interpretable filter encodes the features of a specific object part. Our method requires no additional annotations of object parts or textures for supervision. Instead, we use the same training data as traditional CNNs. During learning, our method automatically assigns each interpretable filter in a high conv-layer to an object part of a certain category. This explicit knowledge representation in the conv-layers helps people clarify the logic encoded inside the CNN, i.e., answer which patterns the CNN extracts from the input image and uses for prediction. We tested our method on benchmark CNNs with different architectures to demonstrate its broad applicability. Experiments show that our interpretable filters are more semantically meaningful than traditional filters.


Complex machine learning models such as deep convolutional neural networks and recurrent neural networks have recently made remarkable progress in a wide range of computer vision applications, including object/scene recognition, image captioning, and visual question answering. However, they are often treated as black boxes. As models grow deeper in pursuit of better recognition accuracy, it becomes increasingly difficult to understand what predictions a model makes and why.

The goal of this tutorial is to broadly engage the computer vision community with the topic of interpretability and explainability of computer vision models. We will review the recent progress we have made on visualization, interpretation, and explanation methodologies for analyzing both data and models in computer vision. The main theme of the tutorial is to build consensus on the emerging topic of machine learning interpretability by clarifying its motivation, typical methodologies, prospective trends, and potential industrial applications of the resulting interpretability.

Contents

  • Speaker: Bolei Zhou
  • Title: Understanding Latent Semantics in GANs
  • Speaker: Andrea Vedaldi
  • Title: Understanding Models via Visualization and Attribution
  • Speaker: Alexander Binder
  • Title: Explaining Deep Learning for Identifying Structures and Biases in Computer Vision
  • Speaker: Alan L. Yuille
  • Title: Deep Compositional Networks

With the widespread application of deep convolutional neural networks (DCNNs), it becomes increasingly important for DCNNs not only to make accurate predictions but also to explain how they make their decisions. In this work, we propose a CHannel-wise disentangled InterPretation (CHIP) model to give visual interpretations for the predictions of DCNNs. The proposed model distills the class-discriminative importance of channels in networks by utilizing sparse regularization. Here, we first introduce a network perturbation technique to learn the model. The proposed model is able not only to distill global-perspective knowledge from networks but also to present class-discriminative visual interpretations for specific predictions of networks. It is noteworthy that the proposed model can interpret different layers of networks without re-training. By combining the distilled interpretation knowledge from different layers, we further propose the Refined CHIP visual interpretation, which is both high-resolution and class-discriminative. Experimental results on a standard dataset demonstrate that the proposed model provides more promising visual interpretations for network predictions in the image classification task than existing visual interpretation methods. Besides, the proposed method outperforms related approaches on the ILSVRC 2015 weakly-supervised localization task.
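A much-simplified sketch of the channel-wise idea follows: scale each channel of a chosen feature map with a learnable weight, keep the target-class score high, and apply an L1 penalty so that only class-discriminative channels retain large weights. The `model_head` callable, the hyperparameters, and the optimization loop are hypothetical placeholders, not the CHIP implementation.

```python
# Sketch: channel importance via perturbation + sparse regularization.
# `features` are assumed to be precomputed (detached) activations of shape (N, C, H, W);
# `model_head` is an assumed callable mapping those activations to class logits.
import torch

def channel_importance(model_head, features, target_class, l1=1e-3, steps=200, lr=0.1):
    weights = torch.ones(features.size(1), requires_grad=True)   # one weight per channel
    opt = torch.optim.Adam([weights], lr=lr)
    for _ in range(steps):
        scaled = features * weights.view(1, -1, 1, 1)            # perturb channels by their weights
        score = model_head(scaled)[:, target_class].mean()       # keep the target-class score high
        loss = -score + l1 * weights.abs().sum()                 # L1 pushes unimportant channels to zero
        opt.zero_grad()
        loss.backward()
        opt.step()
    return weights.detach()   # larger weight = more class-discriminative channel
```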

Multi-relation question answering is a challenging task, owing to the elaborate analysis of questions and the reasoning over multiple fact triples in a knowledge base that it requires. In this paper, we present a novel model called the Interpretable Reasoning Network, which employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the currently parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis, thereby allowing manual manipulation in predicting the final answer.
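The hop-by-hop loop can be written schematically as below; the `analyze`, `predict_relation`, and `update` callables and the `kb.follow` interface are hypothetical stand-ins for the paper's learned modules and knowledge-base access, not its actual API.

```python
# Schematic sketch of hop-by-hop interpretable reasoning over a knowledge base.
def interpretable_reasoning(question_repr, kb, analyze, predict_relation, update,
                            entity, hops=3):
    state = question_repr
    trace = []                                    # intermediate predictions stay observable
    for _ in range(hops):
        focus = analyze(question_repr, state)     # which part of the question to read at this hop
        relation = predict_relation(focus, state) # relation matching the currently parsed results
        entity = kb.follow(entity, relation)      # move one hop in the knowledge base
        state = update(state, relation)           # fold the used relation back into the state
        trace.append((relation, entity))
    return entity, trace                          # trace enables failure diagnosis / manual edits
```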

This paper reviews recent studies on understanding neural-network representations and on learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been their Achilles' heel. At present, deep neural networks obtain high discrimination power at the cost of the low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from very few annotations, learning via human-computer communication at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs), and we revisit the visualization of CNN representations, methods for diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, the learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.

This paper proposes a method to modify traditional convolutional neural networks (CNNs) into interpretable CNNs, in order to clarify knowledge representations in high conv-layers of CNNs. In an interpretable CNN, each filter in a high conv-layer represents a certain object part. We do not need any annotations of object parts or textures to supervise the learning process. Instead, the interpretable CNN automatically assigns each filter in a high conv-layer with an object part during the learning process. Our method can be applied to different types of CNNs with different structures. The clear knowledge representation in an interpretable CNN can help people understand the logics inside a CNN, i.e., based on which patterns the CNN makes the decision. Experiments showed that filters in an interpretable CNN were more semantically meaningful than those in traditional CNNs.
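As a loose, hypothetical stand-in for the filter objective described above (the actual interpretable-CNN method uses a mutual-information-based template loss), the sketch below only illustrates the "one filter, one localized part" goal: it penalizes the spatial spread of each high-layer filter's activation so that a filter tends to fire at a single peaked location.

```python
# Simplified illustration: encourage each high conv-layer filter to respond at one location.
import torch

def localization_penalty(feat):
    """feat: (N, C, H, W) activations of a high conv-layer; add (scaled) to the task loss."""
    n, c, h, w = feat.shape
    flat = feat.relu().reshape(n, c, h * w)
    probs = flat / (flat.sum(dim=-1, keepdim=True) + 1e-8)    # activation mass per spatial location
    entropy = -(probs * (probs + 1e-8).log()).sum(dim=-1)     # spatial spread of each filter's response
    return entropy.mean()                                     # low entropy = peaked, part-like filters
```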
