
This paper introduces the GANsformer, a novel and efficient type of transformer, and applies it to visual generative modeling. The network employs a bipartite structure that enables long-range interactions across the image while maintaining computation of linear efficiency, so it can readily scale to high-resolution synthesis. It iteratively propagates information from a set of latent variables to the evolving visual features and vice versa, to support the refinement of each in light of the other and to encourage the emergence of compositional representations of objects and scenes. In contrast to the classic transformer architecture, it utilizes multiplicative integration that allows flexible region-based modulation, and can thus be seen as a generalization of the successful StyleGAN network. We demonstrate the model's strength and robustness through a careful evaluation over a range of datasets, from simulated multi-object environments to rich real-world indoor and outdoor scenes, showing that it achieves state-of-the-art results in terms of image quality and diversity while enjoying fast learning and better data efficiency. Further qualitative and quantitative experiments offer insight into the model's inner workings, revealing improved interpretability and stronger disentanglement, and illustrating the benefits and efficacy of our approach.
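For illustration, below is a minimal PyTorch sketch of one bipartite attention step, in which a small set of latent variables modulates a grid of image features multiplicatively in the spirit described above; the layer sizes, shapes, and the exact normalization are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of a bipartite attention step between a small set of latents
# and a grid of image features; shapes and layer sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BipartiteAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)   # queries from image features
        self.to_k = nn.Linear(dim, dim)   # keys from latents
        self.to_v = nn.Linear(dim, dim)   # values from latents
        # Multiplicative (style-like) modulation instead of plain addition.
        self.to_gamma = nn.Linear(dim, dim)
        self.to_beta = nn.Linear(dim, dim)

    def forward(self, features, latents):
        # features: (B, N, dim) flattened image grid; latents: (B, K, dim)
        q, k, v = self.to_q(features), self.to_k(latents), self.to_v(latents)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        update = attn @ v                      # info flows latents -> features
        gamma, beta = self.to_gamma(update), self.to_beta(update)
        return F.layer_norm(features, features.shape[-1:]) * (1 + gamma) + beta

# Usage: 16 latents modulate a 32x32 feature grid.
layer = BipartiteAttention(dim=64)
feats = torch.randn(2, 32 * 32, 64)
lats = torch.randn(2, 16, 64)
out = layer(feats, lats)                       # (2, 1024, 64)
```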


Related content

The Transformer is a translation architecture based entirely on attention, proposed in Google's paper "Attention Is All You Need".


Abstract

Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data. However, training GNNs usually requires abundant task-specific labeled data, which is often very costly to obtain. One effective way to reduce the labeling effort is to pre-train an expressive GNN model on unlabeled data with self-supervision and then transfer the learned model to downstream tasks with only a few labels. In this paper, we present the GPT-GNN framework to initialize GNNs by generative pre-training. GPT-GNN introduces a self-supervised attributed graph generation task to pre-train a GNN so that it can capture the structural and semantic properties of the graph. We factorize the likelihood of graph generation into two components: 1) attribute generation and 2) edge generation. By modeling both components, GPT-GNN captures the inherent dependency between node attributes and graph structure during the generative process. Comprehensive experiments on the billion-scale Open Academic Graph and Amazon recommendation data demonstrate that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training on various downstream tasks, by up to 9.1%.

**Keywords:** generative pre-training, graph neural networks, graph representation learning, neural embedding, GNN pre-training
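As a rough illustration of the two-component factorization, the toy sketch below combines an attribute-generation loss (reconstructing masked node attributes) with an edge-generation loss (scoring held-out edges against negative samples); the single linear "encoder" stands in for an arbitrary GNN, and all names and dimensions are assumptions rather than the paper's implementation.

```python
# Toy sketch of a two-part generative pre-training objective for graphs:
# attribute generation plus edge generation. Everything here is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGPTGNN(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)      # stand-in for a GNN encoder
        self.attr_decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        return torch.relu(self.encoder(x))

def pretrain_loss(model, x, masked_idx, pos_edges, neg_edges):
    h = model(x)
    # 1) Attribute generation: reconstruct the masked node attributes.
    attr_loss = F.mse_loss(model.attr_decoder(h[masked_idx]), x[masked_idx])
    # 2) Edge generation: held-out edges should score higher than negatives.
    pos = (h[pos_edges[0]] * h[pos_edges[1]]).sum(-1)
    neg = (h[neg_edges[0]] * h[neg_edges[1]]).sum(-1)
    edge_loss = F.binary_cross_entropy_with_logits(
        torch.cat([pos, neg]),
        torch.cat([torch.ones_like(pos), torch.zeros_like(neg)]))
    return attr_loss + edge_loss

model = ToyGPTGNN(in_dim=8, hid_dim=16)
x = torch.randn(10, 8)                                 # 10 nodes, 8 attributes
loss = pretrain_loss(model, x,
                     masked_idx=torch.tensor([0, 3]),
                     pos_edges=torch.tensor([[0, 1], [2, 4]]),
                     neg_edges=torch.tensor([[0, 5], [7, 9]]))
loss.backward()
```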


Autoregressive text generation models usually focus on local fluency, which may cause semantic inconsistency when generating long text. Moreover, automatically generating words with similar semantics is challenging, and hand-crafted linguistic rules are hard to apply. We consider a text planning scheme and present a model-based imitation-learning approach to alleviate these problems. Specifically, we propose a novel guider network that attends to the longer generation process; it assists next-word prediction and provides intermediate rewards for optimizing the generator. Extensive experiments demonstrate that the proposed method achieves better performance.
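To make the role of intermediate rewards concrete, here is a toy REINFORCE-style update that consumes a per-step reward signal (e.g. scores a guider might assign to each prefix) rather than a single end-of-sequence reward; the discount factor, shapes, and the whole setup are illustrative assumptions, not the paper's actual training procedure.

```python
# Toy policy-gradient update using per-step (intermediate) rewards.
import torch

def reinforce_with_intermediate_rewards(log_probs, rewards, gamma=0.99):
    """log_probs, rewards: 1-D tensors over the T generation steps."""
    returns = torch.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):        # discounted return-to-go
        running = rewards[t] + gamma * running
        returns[t] = running
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(log_probs * returns).sum()            # policy-gradient surrogate loss

# Usage with dummy values for a 5-step sequence.
log_probs = torch.log(torch.rand(5, requires_grad=True))
rewards = torch.rand(5)                            # e.g. guider scores per prefix
loss = reinforce_with_intermediate_rewards(log_probs, rewards)
loss.backward()
```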


Title: Adversarial Training for Large Neural Language Models

Abstract: Generalization and robustness are both key requirements in designing machine learning methods. Adversarial training can enhance robustness, but past work often finds it hurts generalization. In natural language processing (NLP), pre-training large neural language models such as BERT has shown impressive gains in generalization across a variety of tasks, with further improvements from adversarial fine-tuning. However, these models are still vulnerable to adversarial attacks. In this paper, we show that adversarial pre-training can improve both generalization and robustness. We propose a general algorithm, ALUM (Adversarial training for large neural LangUage Models), which regularizes the training objective by applying perturbations in the embedding space that maximize the adversarial loss. We present a comprehensive study of adversarial training in all stages, including pre-training from scratch, continual pre-training on a well-trained model, and task-specific fine-tuning. ALUM obtains substantial gains over BERT on a wide range of NLP tasks, in both regular and adversarial scenarios. Even for models that have already been well trained on extremely large text corpora, such as RoBERTa, ALUM can still produce significant gains from continual pre-training, whereas conventional non-adversarial methods cannot. ALUM can be further combined with task-specific fine-tuning to obtain additional gains.
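The core mechanism can be sketched as follows: perturb the input embeddings in the direction that maximizes the divergence from the clean predictions, then add that divergence as a regularizer to the task loss. The one-step inner maximization, the KL form of the regularizer, the toy classifier, and all hyperparameters below are simplifying assumptions, not the exact ALUM procedure.

```python
# Sketch of embedding-space adversarial regularization (one ascent step).
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_embedding_loss(model, embeddings, labels,
                               epsilon=1e-3, step=1e-2, alpha=1.0):
    logits = model(embeddings)
    task_loss = F.cross_entropy(logits, labels)
    # Inner maximization: one gradient-ascent step on the perturbation delta,
    # pushing the perturbed predictions away from the clean ones.
    delta = torch.zeros_like(embeddings, requires_grad=True)
    adv_logits = model(embeddings + delta)
    div = F.kl_div(F.log_softmax(adv_logits, dim=-1),
                   F.softmax(logits.detach(), dim=-1), reduction="batchmean")
    grad, = torch.autograd.grad(div, delta)
    delta = (step * grad.sign()).clamp(-epsilon, epsilon).detach()
    # Outer minimization: task loss plus the adversarial regularizer.
    adv_logits = model(embeddings + delta)
    reg = F.kl_div(F.log_softmax(adv_logits, dim=-1),
                   F.softmax(logits.detach(), dim=-1), reduction="batchmean")
    return task_loss + alpha * reg

# Usage with a toy classifier over 16-dimensional "embeddings".
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
emb = torch.randn(8, 16)
labels = torch.randint(0, 4, (8,))
loss = adversarial_embedding_loss(model, emb, labels)
loss.backward()
```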


Realistic color texture generation is an important step in RGB-D surface reconstruction, but it remains challenging in practice due to inaccuracies in the reconstructed geometry, misaligned camera poses, and view-dependent imaging artifacts. In this work, we present a novel approach for generating color textures using a conditional adversarial loss obtained from weakly-supervised views. Specifically, we propose a method that produces photorealistic textures for approximate surfaces, even from misaligned images, by learning an objective function that is tolerant to these errors. The key idea of our approach is to learn a patch-based conditional discriminator that guides the texture optimization to be tolerant to misalignments. Our discriminator takes a synthesized view and a real image, and evaluates whether the synthesized one is realistic under a broadened definition of realism. We train the discriminator by providing as "real" examples pairs of input views and their misaligned versions, so that the learned adversarial loss tolerates errors from the scans. Experiments on synthetic and real data under quantitative and qualitative evaluation demonstrate the advantage of our approach over existing methods. Our code is publicly available, along with a video demonstration.
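For intuition, a minimal patch-based conditional discriminator could look like the following: the synthesized view and a reference image are concatenated channel-wise and mapped to a grid of per-patch realism logits. The depths and widths are illustrative assumptions, not the paper's architecture.

```python
# Minimal patch-based conditional discriminator: outputs a grid of per-patch
# realism logits rather than a single score. Sizes are illustrative only.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=6, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),   # per-patch logits
        )

    def forward(self, synthesized, reference):
        # Each output cell scores the realism of one receptive-field patch.
        return self.net(torch.cat([synthesized, reference], dim=1))

disc = PatchDiscriminator()
fake_view = torch.randn(1, 3, 128, 128)
real_view = torch.randn(1, 3, 128, 128)
scores = disc(fake_view, real_view)    # (1, 1, 31, 31) grid of patch scores
```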


Generative Adversarial networks (GANs) have obtained remarkable success in many unsupervised learning tasks and unarguably, clustering is an important unsupervised learning problem. While one can potentially exploit the latent-space back-projection in GANs to cluster, we demonstrate that the cluster structure is not retained in the GAN latent space. In this paper, we propose ClusterGAN as a new mechanism for clustering using GANs. By sampling latent variables from a mixture of one-hot encoded variables and continuous latent variables, coupled with an inverse network (which projects the data to the latent space) trained jointly with a clustering specific loss, we are able to achieve clustering in the latent space. Our results show a remarkable phenomenon that GANs can preserve latent space interpolation across categories, even though the discriminator is never exposed to such vectors. We compare our results with various clustering baselines and demonstrate superior performance on both synthetic and real datasets.
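The latent sampling scheme can be sketched in a few lines: each code is a continuous Gaussian vector concatenated with a one-hot cluster indicator, which the generator then consumes. The dimensions and noise scale below are illustrative assumptions.

```python
# Sketch of ClusterGAN-style latent sampling: continuous noise + one-hot cluster.
import torch

def sample_clustergan_latent(batch, n_clusters=10, z_dim=30, sigma=0.1):
    z_n = sigma * torch.randn(batch, z_dim)                         # continuous part
    labels = torch.randint(0, n_clusters, (batch,))
    z_c = torch.nn.functional.one_hot(labels, n_clusters).float()   # discrete part
    return torch.cat([z_n, z_c], dim=1), labels

z, labels = sample_clustergan_latent(batch=4)
print(z.shape)       # torch.Size([4, 40]); labels give the sampled clusters
```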

In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.
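A minimal self-attention block for convolutional feature maps, with spectral normalization on the 1x1 convolutions, might look as follows; the channel-reduction factor and the zero-initialized gate follow the commonly used recipe, but the exact configuration is an assumption rather than the paper's released code.

```python
# Minimal self-attention block over a 2-D feature map with a learnable gate.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = spectral_norm(nn.Conv2d(channels, channels // 8, 1))
        self.k = spectral_norm(nn.Conv2d(channels, channels // 8, 1))
        self.v = spectral_norm(nn.Conv2d(channels, channels, 1))
        self.gamma = nn.Parameter(torch.zeros(1))   # gate: start from the conv path

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)           # (B, HW, C//8)
        k = self.k(x).flatten(2)                            # (B, C//8, HW)
        v = self.v(x).flatten(2)                            # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)                 # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)   # attend over all positions
        return self.gamma * out + x                         # residual connection

block = SelfAttention2d(64)
y = block(torch.randn(2, 64, 16, 16))   # same shape as the input
```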

Quantum machine learning is expected to be one of the first potential general-purpose applications of near-term quantum devices. A major recent breakthrough in classical machine learning is the notion of generative adversarial training, where the gradients of a discriminator model are used to train a separate generative model. In this work and a companion paper, we extend adversarial training to the quantum domain and show how to construct generative adversarial networks using quantum circuits. Furthermore, we also show how to compute gradients -- a key element in generative adversarial network training -- using another quantum circuit. We give an example of a simple practical circuit ansatz to parametrize quantum machine learning models and perform a simple numerical experiment to demonstrate that quantum generative adversarial networks can be trained successfully.
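As a toy illustration of circuit-based gradients, the snippet below applies the parameter-shift rule to a single-rotation expectation value simulated in plain NumPy; it is a classical stand-in for intuition only, not a quantum-circuit implementation.

```python
# Parameter-shift rule for the expectation <Z> after RY(theta) on |0>, which
# equals cos(theta); the rule gives the exact gradient for such rotation gates.
import numpy as np

def expectation(theta):
    return np.cos(theta)          # classical stand-in for the circuit's output

def parameter_shift_grad(f, theta):
    return 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))

theta = 0.7
print(parameter_shift_grad(expectation, theta))   # matches -sin(0.7) analytically
```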

In this paper, we propose an improved quantitative evaluation framework for Generative Adversarial Networks (GANs) on generating domain-specific images, where we improve conventional evaluation methods on two levels: the feature representation and the evaluation metric. Unlike most existing evaluation frameworks which transfer the representation of ImageNet inception model to map images onto the feature space, our framework uses a specialized encoder to acquire fine-grained domain-specific representation. Moreover, for datasets with multiple classes, we propose Class-Aware Frechet Distance (CAFD), which employs a Gaussian mixture model on the feature space to better fit the multi-manifold feature distribution. Experiments and analysis on both the feature level and the image level were conducted to demonstrate improvements of our proposed framework over the recently proposed state-of-the-art FID method. To our best knowledge, we are the first to provide counter examples where FID gives inconsistent results with human judgments. It is shown in the experiments that our framework is able to overcome the shortcomings of FID and improve robustness. Code will be made available.
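A simplified sketch of a class-aware Frechet distance is given below: the standard Frechet distance between Gaussian fits of real and generated features is computed per class and then averaged. The domain-specific encoder is omitted, and the inputs are assumed to be precomputed feature arrays grouped by class; the exact aggregation used by CAFD may differ.

```python
# Per-class Frechet distance averaged over classes; encoder features assumed given.
import numpy as np
from scipy import linalg

def frechet_distance(feat_a, feat_b):
    mu_a, mu_b = feat_a.mean(0), feat_b.mean(0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real                      # drop numerical imaginary parts
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2 * covmean))

def class_aware_frechet_distance(real_by_class, fake_by_class):
    # real_by_class / fake_by_class: dicts mapping class id -> (N_c, D) features.
    per_class = [frechet_distance(real_by_class[c], fake_by_class[c])
                 for c in real_by_class]
    return float(np.mean(per_class))

rng = np.random.default_rng(0)
real = {c: rng.normal(size=(200, 8)) for c in range(3)}
fake = {c: rng.normal(loc=0.2, size=(200, 8)) for c in range(3)}
print(class_aware_frechet_distance(real, fake))
```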
