Latest Tutorial on Adversarial Robustness in Deep Learning
Course materials for the "Duality in Machine Learning" course taught at PSL University by Google research scientist Mathieu Blondel. Topics include conjugate functions, smoothing techniques, Fenchel duality, Fenchel-Young losses, and block dual coordinate ascent algorithms.
//mblondel.org/teaching/duality-2020.pdf
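As a quick reminder of the central objects in the course, the convex conjugate and the Fenchel-Young loss can be written as follows (standard formulations; the notation in the slides may differ):

```latex
% Convex (Fenchel) conjugate of a function f
f^*(\theta) = \sup_{\mu \in \mathrm{dom}(f)} \; \langle \theta, \mu \rangle - f(\mu)

% Fenchel-Young inequality: holds for all theta, mu
f(\mu) + f^*(\theta) \ge \langle \theta, \mu \rangle

% Fenchel-Young loss generated by a regularizer Omega:
% non-negative, and zero exactly when the inequality above is tight
L_\Omega(\theta; \mu) = \Omega^*(\theta) + \Omega(\mu) - \langle \theta, \mu \rangle
```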
Recently, many studies have shown that deep neural networks (DNNs) are susceptible to adversarial examples. However, to establish that adversarial examples pose real threats in the physical world, it is necessary to study and evaluate them under real-world conditions. In this paper, we propose a robust and natural physical adversarial example attack targeting object detectors under real-world conditions, which is more challenging than targeting image classifiers. The generated adversarial examples are robust to various physical constraints and visually similar to the original images, so they look natural to humans and do not arouse suspicion. First, to ensure the robustness of the adversarial examples in real-world conditions, the proposed method applies different image transformation functions (distance, angle, illumination, printing, and photographing) to simulate various physical changes during the iterative optimization that generates the adversarial examples. Second, to construct natural adversarial examples, the proposed method uses an adaptive mask to constrain the area and intensity of the added perturbations, and utilizes a real-world perturbation score (RPS) to make the perturbations resemble real noise in the physical world. Compared with existing studies, our generated adversarial examples achieve a high success rate with less conspicuous perturbations. Experimental results demonstrate that the generated adversarial examples remain robust under various indoor and outdoor physical conditions. Finally, the proposed physical adversarial attack method is universal and works in black-box scenarios; the generated adversarial examples transfer well between different models.
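The iterative procedure described in the abstract can be sketched roughly as follows in PyTorch. This is a simplified illustration, not the authors' implementation: the transformation set, the adaptive `mask`, and the placeholder "RPS" penalty are stand-ins for the paper's distance/angle/illumination/printing simulations, adaptive mask, and real-world perturbation score.

```python
import torch

def generate_physical_adv(image, mask, detector_loss, transforms, steps=200,
                          step_size=0.01, eps=0.3, rps_weight=0.1):
    """Sketch of an expectation-over-transformation style attack: optimize a
    masked perturbation so that the detector's loss stays high under randomly
    sampled physical transformations.

    image:          tensor of shape (1, 3, H, W), values in [0, 1]
    mask:           tensor broadcastable to image, 1 where perturbation is allowed
    detector_loss:  callable mapping an image batch to the scalar loss to maximize
    transforms:     list of callables simulating distance/angle/illumination etc.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Sample one physical transformation per step
        t = transforms[torch.randint(len(transforms), (1,)).item()]
        adv = torch.clamp(image + mask * delta, 0.0, 1.0)
        attack_term = detector_loss(t(adv))
        # Placeholder "naturalness" penalty standing in for the paper's RPS term
        rps_term = delta.abs().mean()
        loss = -attack_term + rps_weight * rps_term
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)  # keep perturbation intensity bounded
        delta.grad.zero_()
    return torch.clamp(image + mask * delta, 0.0, 1.0).detach()
```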
Deep neural networks (DNNs) have achieved unprecedented success on a wide range of machine learning tasks across many domains. However, the existence of adversarial examples gives us serious pause when applying DNN models to safety-critical tasks such as autonomous driving and malware detection. Adversarial examples are deliberately crafted instances, appearing in either the training or the testing phase, that can fool a DNN model into making severe mistakes. Much effort has therefore been devoted to designing more robust models to defend against adversarial examples, but these are usually broken by new and stronger attacks. This arms race between adversarial attacks and defenses has received increasing attention in recent years. **In this tutorial, we provide a comprehensive overview of the frontiers and advances of adversarial attacks and their countermeasures. In particular, we give a detailed introduction to the different types of attacks under different scenarios, including evasion and poisoning attacks, and white-box and black-box attacks.** We also discuss how defense strategies have evolved to counter these attacks, and how new attacks emerge to break these defenses. Moreover, we discuss adversarial attacks and defenses in other data domains, especially graph-structured data. We then introduce DeepRobust, a PyTorch adversarial learning library, which aims to provide a comprehensive and easy-to-use platform for fostering this research field. Finally, we conclude the tutorial by discussing open problems and challenges in adversarial attacks and defenses. Through this tutorial, the audience can grasp the main ideas and key methods of adversarial attacks and defenses.
Contents:
Part 1. Introduction about adversarial examples and robustness.
Part 2. Algorithms for generating adversarial examples.
Part 3. Defending algorithms and adaptive attacks.
Part 4. Adversarial learning in the graph domain.
Part 5. DeepRobust -- A PyTorch Repository for Adversarial Learning.
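As a concrete instance of the evasion attacks covered in Part 2, here is a minimal FGSM sketch in plain PyTorch (not DeepRobust's API): the input is perturbed by one signed gradient step of the loss with respect to the input.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: a one-step, white-box evasion attack.

    model: classifier mapping images to logits
    x:     input batch with values in [0, 1]
    y:     ground-truth labels
    eps:   L-infinity budget of the perturbation
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixels
    x_adv = x + eps * x.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```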
If you are interested in running machine learning on embedded devices but are not sure how to get started, Pete Warden of Google's TensorFlow Micro team will describe how to build and run your own TinyML applications. This will include an overview of the different boards, software frameworks, and tutorials available to help you get up and running.
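As a rough illustration of one common step in a TinyML workflow, the sketch below converts a small Keras model into an int8-quantized TensorFlow Lite flatbuffer that can later be embedded in microcontroller firmware. The model and the representative dataset are placeholders, not material from the talk.

```python
import numpy as np
import tensorflow as tf

# Placeholder model: a tiny classifier small enough for a microcontroller
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # Calibration samples for full-integer quantization (placeholder data)
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

# The flatbuffer is typically converted to a C array (e.g. with xxd) and run
# on-device with the TensorFlow Lite Micro interpreter.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```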
Deep Reinforcement Learning via Policy Optimization