
Existing work shows that neural networks trained with naive gradient-based optimization are prone to adversarial attacks: adding a small malicious perturbation to an ordinary input is enough to make the network predict incorrectly. At the same time, attacking a neural network is key to improving its robustness, since training on adversarial examples can make a network resist certain kinds of adversarial attacks. An adversarial attack can also reveal characteristics of the neural network, a complex high-dimensional non-linear function, as discussed in previous work. In this project, we develop a first-order method to attack neural networks. Compared with other first-order attacks, our method has a much higher success rate. Furthermore, it is much faster than second-order attacks and multi-step first-order attacks.
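
The abstract above does not spell out the attack itself, so the following is only a minimal sketch of a single-step first-order attack in the FGSM style (one signed-gradient step on the input), assuming a PyTorch classifier `model`, inputs in [0, 1], and a cross-entropy loss; the function name and the `eps` budget are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def first_order_attack(model, x, y, eps=8 / 255):
    """Single-step first-order attack (FGSM-style): one signed-gradient
    step of size eps that increases the loss on the true label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = x_adv + eps * grad.sign()      # ascend the loss
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

A multi-step variant would repeat the signed-gradient step with a smaller step size and project back into the eps-ball after each step, which is the speed/strength trade-off between single-step and multi-step first-order attacks that the abstract refers to.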

Related Content

Neural Networks is the archival journal of the world's three oldest neural modelling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural networks research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of the interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents experts in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, or engineering and applications. Official website:

Graph neural networks, a popular class of models effective in a wide range of graph-based learning tasks, have been shown to be vulnerable to adversarial attacks. While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis. The few existing methods often require unrealistic setups, such as access to internal information of the victim models, or an impractically-large number of queries. We present a novel Bayesian optimisation-based attack method for graph classification models. Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied. We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks involving varying graph properties, constraints and modes of attack. Finally, we analyse common interpretable patterns behind the adversarial samples produced, which may shed further light on the adversarial robustness of graph classification models.
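
For concreteness, here is a toy black-box query loop in the same setting (only the victim's output probabilities are available), written as a greedy edge-flip search rather than the paper's Bayesian optimisation; the `query_fn` interface, the dense adjacency-matrix encoding, and the flip budget are all assumptions for illustration.

```python
import torch

def greedy_edge_flip_attack(query_fn, adj, label, budget=5):
    """Toy black-box attack on a graph classifier: greedily flip the single
    edge whose flip most reduces the victim's confidence in the true label.
    query_fn(adj) -> class-probability vector is the only access assumed.
    This is a simplified stand-in, not the paper's Bayesian-optimisation search."""
    adj = adj.clone()
    n = adj.size(0)
    for _ in range(budget):
        best_drop, best_edge = 0.0, None
        base = query_fn(adj)[label].item()
        for i in range(n):
            for j in range(i + 1, n):
                cand = adj.clone()
                cand[i, j] = cand[j, i] = 1.0 - cand[i, j]  # flip edge (i, j)
                drop = base - query_fn(cand)[label].item()
                if drop > best_drop:
                    best_drop, best_edge = drop, (i, j)
        if best_edge is None:
            break                                           # no improving flip left
        i, j = best_edge
        adj[i, j] = adj[j, i] = 1.0 - adj[i, j]
    return adj
```

This brute-force inner loop needs O(n^2) queries per flip, which is exactly the query cost that a surrogate-based, query-efficient search such as Bayesian optimisation is meant to avoid.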

Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013~[1], it has attracted significant attention of researchers from multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses) until the advent of year 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since witnessing the first generation methods. Hence, as a legacy sequel of [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in the prestigious sources of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminologies for non-experts in this domain. Finally, this article discusses challenges and future outlook of this direction based on the literature reviewed herein and [2].

Deep neural networks are vulnerable to adversarial examples that mislead the models with imperceptible perturbations. Though adversarial attacks have achieved incredible success rates in the white-box setting, most existing adversaries often exhibit weak transferability in the black-box setting, especially under the scenario of attacking models with defense mechanisms. In this work, we propose a new method called variance tuning to enhance the class of iterative gradient-based attack methods and improve their attack transferability. Specifically, at each iteration for the gradient calculation, instead of directly using the current gradient for the momentum accumulation, we further consider the gradient variance of the previous iteration to tune the current gradient so as to stabilize the update direction and escape from poor local optima. Empirical results on the standard ImageNet dataset demonstrate that our method could significantly improve the transferability of gradient-based adversarial attacks. Besides, our method could be used to attack ensemble models or be integrated with various input transformations. Incorporating variance tuning with input transformations on iterative gradient-based attacks in the multi-model setting, the integrated method could achieve an average success rate of 90.1% against nine advanced defense methods, improving the current best attack performance significantly by 85.1%. Code is available at //github.com/JHL-HUST/VT.
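
The update described in the abstract can be sketched as follows, assuming image batches of shape (B, C, H, W) with pixels in [0, 1] and an L-infinity budget; the hyper-parameter names (`mu`, `beta`, `n_samples`) follow common usage for this family of attacks and are not taken verbatim from the paper.

```python
import torch
import torch.nn.functional as F

def vt_mi_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0, n_samples=20, beta=1.5):
    """Sketch of variance-tuned momentum iterative FGSM: the gradient variance
    estimated from the previous iteration's neighbourhood is added to the
    current gradient before the momentum accumulation."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g, v = torch.zeros_like(x), torch.zeros_like(x)

    def grad_of(inp):
        inp = inp.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(inp), y)
        return torch.autograd.grad(loss, inp)[0]

    for _ in range(steps):
        cur = grad_of(x_adv)
        # momentum accumulation on the variance-tuned, L1-normalised gradient
        g = mu * g + (cur + v) / (cur + v).abs().mean(dim=(1, 2, 3), keepdim=True)
        # estimate the gradient variance in a beta*eps neighbourhood for the next step
        neigh = torch.zeros_like(x)
        for _ in range(n_samples):
            r = torch.empty_like(x).uniform_(-beta * eps, beta * eps)
            neigh += grad_of(x_adv + r)
        v = neigh / n_samples - cur
        x_adv = (x_adv + alpha * g.sign()).clamp(0, 1)
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the L_inf ball
    return x_adv.detach()
```

The only change relative to plain momentum iterative FGSM is the variance term `v` carried over from the previous iteration's sampled neighbourhood, which is what stabilises the update direction.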

Adversarial attack is a technique for deceiving Machine Learning (ML) models, and it provides a way to evaluate adversarial robustness. In practice, attack algorithms are artificially selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching the best combination of attack algorithms and their hyper-parameters from a candidate pool of \textbf{32 base attackers}. We design a search space where an attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successor. The multi-objective NSGA-II genetic algorithm is adopted to find the strongest attack policy with minimum complexity. The experimental results show that CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (\textbf{6 $\times$ faster than AutoAttack}), and achieves a new state-of-the-art on $l_{\infty}$, $l_{2}$ and unrestricted adversarial attacks.
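
The chaining idea (the previous attacker's output initialises the next one) is simple to express on its own; the sketch below assumes attack callables with a `(model, x, y) -> x_adv` signature and does not attempt to reproduce the NSGA-II policy search.

```python
def run_policy(attacks, model, x, y):
    """Run an attack policy expressed as a sequence: each attacker starts from
    the previous attacker's output, as described in the abstract.
    `attacks` is a list of callables (model, x_init, y) -> x_adv."""
    x_adv = x
    for attack in attacks:
        x_adv = attack(model, x_adv, y)
    return x_adv

# Illustrative policy: chain two single-step attacks (e.g. the FGSM-style
# step sketched earlier in this page):
#   x_adv = run_policy([first_order_attack, first_order_attack], model, x, y)
```

The search procedure then scores candidate sequences by attack success and complexity (elapsed time), which is where the multi-objective genetic algorithm comes in.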

While existing work in robust deep learning has focused on small pixel-level $\ell_p$ norm-based perturbations, this may not account for perturbations encountered in several real world settings. In many such cases although test data might not be available, broad specifications about the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not i.i.d. but deviates from the training domain. While this deviation may not be exactly known, its broad characterization is specified a priori, in terms of attributes. We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space, without having access to the data from the test domain. Our adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial perturbations, and the outer minimization finding model parameters by optimizing the loss on adversarial perturbations generated from the inner maximization. We demonstrate the applicability of our approach on three types of naturally occurring perturbations -- object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate the usefulness of the proposed approach by showing the robustness gains of deep neural networks trained using our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
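
A heavily simplified version of the min-max loop might look like the following, using a global brightness shift as a stand-in attribute (the paper instead learns to generate samples over an attribute space) and a grid search for the inner maximisation; all names and ranges are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def attribute_adv_train_step(model, optimizer, x, y, max_shift=0.2, n_trials=8):
    """One min-max training step: the inner maximisation searches an attribute
    perturbation (here a simple brightness shift, standing in for rotations or
    corruptions) that maximises the loss; the outer step updates the model on
    that worst-case sample."""
    # inner maximisation: grid-search the attribute value that hurts the model most
    worst_loss, worst_x = -1.0, x
    with torch.no_grad():
        for shift in torch.linspace(-max_shift, max_shift, n_trials):
            x_pert = (x + shift).clamp(0, 1)
            loss = F.cross_entropy(model(x_pert), y).item()
            if loss > worst_loss:
                worst_loss, worst_x = loss, x_pert
    # outer minimisation: standard gradient step on the worst-case sample
    optimizer.zero_grad()
    F.cross_entropy(model(worst_x), y).backward()
    optimizer.step()
```

Replacing the grid search with a learned sample generator over rotation angles, object shifts, or corruption severities recovers the spirit of the proposed adversarial training.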

There has been an ongoing cycle where stronger defenses against adversarial attacks are subsequently broken by a more advanced defense-aware attack. We present a new approach towards ending this cycle where we "deflect" adversarial attacks by causing the attacker to produce an input that semantically resembles the attack's target class. To this end, we first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance on both standard and defense-aware attacks. We then show that undetected attacks against our defense often perceptually resemble the adversarial target class by performing a human study where participants are asked to label images produced by the attack. These attack images can no longer be called "adversarial" because our network classifies them the same way as humans do.
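
As an illustration of threshold-based detection in this setting, here is a reconstruction-error detector of the kind commonly paired with capsule networks; it is not claimed to be one of the paper's three mechanisms, and the `(logits, reconstruction)` model interface is an assumption.

```python
def reconstruction_detector(model, x, threshold):
    """Flag inputs whose reconstruction error is unusually high -- a widely
    used capsule-network detection mechanism. Assumes model(x) returns
    (class_logits, reconstruction), as CapsNet-style models typically do."""
    logits, recon = model(x)
    err = ((recon - x) ** 2).flatten(1).mean(dim=1)  # per-example MSE
    return err > threshold                           # True = likely adversarial
```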

Person re-identification (re-ID) has attracted much attention recently due to its great importance in video surveillance. In general, distance metrics used to identify two person images are expected to be robust under various appearance changes. However, our work observes the extreme vulnerability of existing distance metrics to adversarial examples, generated by simply adding human-imperceptible perturbations to person images. Hence, the security danger is dramatically increased when deploying commercial re-ID systems in video surveillance, especially considering the highly strict requirement of public safety. Although adversarial examples have been extensively applied for classification analysis, it is rarely studied in metric analysis like person re-identification. The most likely reason is the natural gap between the training and testing of re-ID networks, that is, the predictions of a re-ID network cannot be directly used during testing without an effective metric. In this work, we bridge the gap by proposing Adversarial Metric Attack, a parallel methodology to adversarial classification attacks, which can effectively generate adversarial examples for re-ID. Comprehensive experiments clearly reveal the adversarial effects in re-ID systems. Moreover, by benchmarking various adversarial settings, we expect that our work can facilitate the development of robust feature learning with the experimental conclusions we have drawn.
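
A minimal sketch of a metric attack, assuming a feature extractor `feat_net`, a precomputed (detached) embedding of the matching gallery image, and an L-infinity budget; the iterative signed-gradient form is a common choice for such attacks, not necessarily the paper's exact optimiser.

```python
import torch

def metric_attack(feat_net, probe, gallery_feat, eps=8 / 255, steps=10):
    """Sketch of an adversarial metric attack: perturb the probe image so its
    embedding moves away from the matching gallery embedding, breaking the
    distance ranking used at re-ID test time."""
    alpha = eps / steps
    x_adv = probe.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        dist = torch.norm(feat_net(x_adv) - gallery_feat, p=2)
        grad = torch.autograd.grad(dist, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()  # increase the distance
        x_adv = torch.min(torch.max(x_adv, probe - eps), probe + eps).clamp(0, 1)
    return x_adv
```

Flipping the sign of the distance objective instead pulls the probe towards a chosen non-matching identity, the impersonation variant of the same idea.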

There is a rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed. However, most recent work focuses on discriminative classifiers, which only model the conditional distribution of the labels given the inputs. In this paper we propose the deep Bayes classifier, which improves classical naive Bayes with conditional deep generative models. We further develop detection methods for adversarial examples, which reject inputs that have negative log-likelihood under the generative model exceeding a threshold pre-specified using training data. Experimental results suggest that deep Bayes classifiers are more robust than deep discriminative classifiers, and the proposed detection methods achieve high detection rates against many recently proposed attacks.
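
The detection rule itself reduces to a one-line threshold test; the sketch below assumes a callable `nll_fn` returning per-example negative log-likelihoods under the generative model, and calibrates the threshold on clean training data as the abstract describes (the specific quantile is an assumption).

```python
import torch

def detect_adversarial(nll_fn, x, threshold):
    """Reject inputs whose negative log-likelihood under the generative model
    exceeds a threshold chosen on clean training data."""
    return nll_fn(x) > threshold  # True = flagged as adversarial

def calibrate_threshold(nll_fn, clean_loader, quantile=0.99):
    """Pick the threshold as a high quantile of NLL scores on clean data."""
    scores = torch.cat([nll_fn(x) for x, _ in clean_loader])
    return torch.quantile(scores, quantile)
```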

Meta-learning enables a model to learn from very limited data to undertake a new task. In this paper, we study the general meta-learning with adversarial samples. We present a meta-learning algorithm, ADML (ADversarial Meta-Learner), which leverages clean and adversarial samples to optimize the initialization of a learning model in an adversarial manner. ADML leads to the following desirable properties: 1) it turns out to be very effective even in the cases with only clean samples; 2) it is model-agnostic, i.e., it is compatible with any learning model that can be trained with gradient descent; and most importantly, 3) it is robust to adversarial samples, i.e., unlike other meta-learning methods, it only leads to a minor performance degradation when there are adversarial samples. We show via extensive experiments that ADML delivers the state-of-the-art performance on two widely-used image datasets, MiniImageNet and CIFAR100, in terms of both accuracy and robustness.
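
The flavour of the algorithm can be sketched as a MAML-style meta-step that also sees adversarial query samples; this is a simplification in the spirit of ADML rather than its exact cross-over update, and it assumes PyTorch 2.x (for `torch.func.functional_call`) plus an external `attack_fn` such as a single-step gradient attack.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def adml_meta_step(model, meta_opt, tasks, attack_fn, inner_lr=0.01):
    """Simplified MAML-style meta-step with adversarial query samples:
    adapt on the clean support set, then take the meta-gradient of the loss
    on clean + adversarial query data under the adapted weights."""
    meta_opt.zero_grad()
    params = dict(model.named_parameters())
    for support_x, support_y, query_x, query_y in tasks:
        # inner adaptation: one differentiable gradient step on the support set
        loss = F.cross_entropy(functional_call(model, params, (support_x,)), support_y)
        grads = torch.autograd.grad(loss, tuple(params.values()), create_graph=True)
        fast = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
        # meta-loss on clean and adversarial query samples
        query_adv = attack_fn(model, query_x, query_y)
        meta_loss = (F.cross_entropy(functional_call(model, fast, (query_x,)), query_y)
                     + F.cross_entropy(functional_call(model, fast, (query_adv,)), query_y))
        meta_loss.backward()
    meta_opt.step()
```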

Network embedding has become a hot research topic recently, as it can provide low-dimensional feature representations for many machine learning applications. Current work focuses on either (1) designing the embedding as an unsupervised learning task that explicitly preserves the structural connectivity of the network, or (2) obtaining the embedding as a by-product of supervised learning for a specific discriminative task in a deep neural network. In this paper, we focus on bridging the gap between these two lines of research. We propose to adapt the Generative Adversarial model to perform network embedding, in which the generator tries to generate vertex pairs, while the discriminator tries to distinguish the generated vertex pairs from real connections (edges) in the network. The Wasserstein-1 distance is adopted to train the generator for better stability. We develop three variations of the model: GANE, which applies cosine similarity; GANE-O1, which preserves the first-order proximity; and GANE-O2, which preserves the second-order proximity of the network in the low-dimensional embedded vector space. We further prove that GANE-O2 has the same objective function as GANE-O1 when negative sampling is applied to simplify the training process in GANE-O2. Experiments with real-world network datasets demonstrate that our models consistently outperform state-of-the-art solutions, with significant improvements in precision for link prediction, as well as in visualization and clustering accuracy.
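
The Wasserstein-1 training signal at the core of the model can be illustrated with a generic WGAN critic/generator update with weight clipping, here applied to concatenated (source, target) pair embeddings as a continuous stand-in for GANE's discrete vertex-pair generator; the architectures, dimensions, and learning rates are placeholders.

```python
import torch
import torch.nn as nn

dim = 64
# critic scores a concatenated (source, target) embedding pair; the generator
# maps noise to a fake pair embedding -- a continuous simplification of a
# discrete vertex-pair generator
critic = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
gen = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 2 * dim))
c_opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
g_opt = torch.optim.RMSprop(gen.parameters(), lr=5e-5)

def wgan_step(real_pairs):
    """One Wasserstein-GAN update with weight clipping (the Lipschitz trick)."""
    # critic: minimise E[critic(fake)] - E[critic(real)], i.e. maximise the
    # Wasserstein-1 estimate between real edges and generated pairs
    fake_pairs = gen(torch.randn(real_pairs.size(0), dim)).detach()
    c_loss = critic(fake_pairs).mean() - critic(real_pairs).mean()
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    for p in critic.parameters():
        p.data.clamp_(-0.01, 0.01)
    # generator: maximise E[critic(fake)] so generated pairs look like real edges
    g_loss = -critic(gen(torch.randn(real_pairs.size(0), dim))).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In practice the critic is usually updated several times per generator step; `real_pairs` here would be the concatenated embeddings of vertices joined by observed edges.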
