
Deep learning algorithms are widely used in fields such as computer vision and natural language processing, but the large number of nonlinear functions and parameters inside these models makes them hard to interpret and leaves them vulnerable to adversarial attacks. In this paper, we propose a neural network adversarial attack method based on an improved genetic algorithm. The improved algorithm modifies the mutation and crossover operators of the original genetic optimization algorithm, which greatly improves iteration efficiency and shortens the running time. The method requires no knowledge of the internal structure or parameters of the neural network model; using only the network's classification output and confidence information, it can obtain high-confidence adversarial samples in a short time. The experimental results show that the method is broadly applicable across models and highly efficient, and it provides a new approach to adversarial attacks.
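As a rough illustration of the black-box setting described above, the sketch below evolves a population of small perturbations using only the model's predicted probabilities. `model_predict` is a hypothetical query function, and the selection, crossover, and mutation choices are assumptions rather than the paper's improved genetic algorithm.

```python
import numpy as np

def model_predict(x):
    """Hypothetical black-box query: returns the model's class-probability vector for image x."""
    raise NotImplementedError

def ga_attack(x, true_label, pop_size=20, generations=100, eps=0.05, mutation_rate=0.1):
    shape = x.shape
    pop = np.random.uniform(-eps, eps, size=(pop_size,) + shape)

    def fitness(delta):
        probs = model_predict(np.clip(x + delta, 0.0, 1.0))
        # Reward lowering confidence in the true class; large bonus once the label flips.
        return (1.0 - probs[true_label]) + (1.0 if probs.argmax() != true_label else 0.0)

    for _ in range(generations):
        scores = np.array([fitness(d) for d in pop])
        pop = pop[np.argsort(scores)[::-1]]           # sort by fitness, best first
        best = np.clip(x + pop[0], 0.0, 1.0)
        if model_predict(best).argmax() != true_label:
            return best                               # adversarial sample found
        elite = pop[: pop_size // 2]                  # selection: keep the better half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = elite[np.random.randint(len(elite), size=2)]
            mask = np.random.rand(*shape) < 0.5       # crossover: mix two elite parents
            child = np.where(mask, a, b)
            mut = np.random.rand(*shape) < mutation_rate
            child = np.where(mut, np.random.uniform(-eps, eps, size=shape), child)  # mutation
            children.append(child)
        pop = np.concatenate([elite, np.stack(children)], axis=0)
    return None                                       # no adversarial sample within the budget
```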

Related content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. The journal welcomes high-quality submissions that contribute to the full range of neural networks research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analyses, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles appear in one of five sections: Cognitive Science; Neuroscience; Learning Systems; Mathematical and Computational Analysis; Engineering and Applications. Official website:

In this study, we introduce a measure for machine perception, inspired by the concept of the Just Noticeable Difference (JND) in human perception. Based on this measure, we propose an adversarial image generation algorithm that iteratively distorts an image with additive noise until the model detects the change by outputting a false label. The noise added to the original image is defined as the gradient of the model's cost function. A novel cost function is defined to explicitly minimize the amount of perturbation applied to the input image while enforcing perceptual similarity between the adversarial and input images. For this purpose, the cost function is regularized by the well-known total variation and bounded-range terms so that the adversarial image retains a natural appearance. We evaluate the adversarial images generated by our algorithm both qualitatively and quantitatively on the CIFAR10, ImageNet, and MS COCO datasets. Our experiments on image classification and object detection tasks show that adversarial images generated by our JND method are both more successful in deceiving the recognition/detection models and less perturbed than images generated by the state-of-the-art FGV, FGSM, and DeepFool methods.
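The iterative scheme above can be sketched in PyTorch roughly as follows. The step size, the regularisation weights, and the single-image (1, C, H, W) input are assumptions, and the paper's exact cost function may differ from this sketch.

```python
import torch
import torch.nn.functional as F

def total_variation(p):
    # Anisotropic total variation of a perturbation, batch of NCHW tensors.
    return (p[..., 1:, :] - p[..., :-1, :]).abs().mean() + \
           (p[..., :, 1:] - p[..., :, :-1]).abs().mean()

def jnd_attack(model, x, label, step=1e-2, lam_tv=1e-2, lam_range=1e-2, max_iters=200):
    x_adv = x.clone().detach()
    target = torch.tensor([label])
    for _ in range(max_iters):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != label:       # the change became "noticeable" to the model
            return x_adv.detach()
        # Ascend the classification loss while penalising perturbation TV and out-of-range values.
        cost = F.cross_entropy(logits, target) \
               - lam_tv * total_variation(x_adv - x) \
               - lam_range * ((x_adv - 1).clamp(min=0) + (-x_adv).clamp(min=0)).mean()
        grad, = torch.autograd.grad(cost, x_adv)
        x_adv = (x_adv + step * grad).clamp(0, 1).detach()
    return x_adv.detach()
```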

Graph neural networks, a popular class of models effective in a wide range of graph-based learning tasks, have been shown to be vulnerable to adversarial attacks. While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis. The few existing methods often require unrealistic setups, such as access to internal information of the victim models, or an impractically-large number of queries. We present a novel Bayesian optimisation-based attack method for graph classification models. Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied. We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks involving varying graph properties, constraints and modes of attack. Finally, we analyse common interpretable patterns behind the adversarial samples produced, which may shed further light on the adversarial robustness of graph classification models.
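A minimal sketch of a query-efficient Bayesian-optimisation loop in this spirit is given below. The single-edge-flip search space, the hand-picked flip features, and the `attack_loss` black-box query are illustrative assumptions rather than the authors' formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def bo_edge_flip_attack(attack_loss, adj, n_queries=30, n_seed=3, kappa=2.0):
    n = adj.shape[0]
    candidates = [(i, j) for i in range(n) for j in range(i + 1, n)]
    # Illustrative features for each candidate flip: endpoint degrees and current edge state.
    feats = np.array([[adj[i].sum(), adj[j].sum(), adj[i, j]] for i, j in candidates], dtype=float)

    def flip(i, j):
        g = adj.copy()
        g[i, j] = g[j, i] = 1 - g[i, j]
        return g

    tried, losses = [], []
    for t in range(n_queries):
        if t < n_seed:                                    # a few random queries to seed the surrogate
            idx = np.random.randint(len(candidates))
        else:
            gp = GaussianProcessRegressor().fit(feats[tried], losses)
            mu, sigma = gp.predict(feats, return_std=True)
            score = mu + kappa * sigma                    # upper-confidence-bound acquisition
            score[tried] = -np.inf                        # do not re-query evaluated flips
            idx = int(np.argmax(score))
        i, j = candidates[idx]
        tried.append(idx)
        losses.append(attack_loss(flip(i, j)))            # black-box query of the victim model
    i, j = candidates[tried[int(np.argmax(losses))]]
    return flip(i, j)                                      # best single-edge perturbation found
```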

Adversarial attack is a technique for deceiving machine learning (ML) models and provides a way to evaluate their adversarial robustness. In practice, attack algorithms are manually selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms and their hyper-parameters from a candidate pool of 32 base attackers. We design a search space in which an attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successor. The multi-objective NSGA-II genetic algorithm is adopted to find the strongest attack policy with minimum complexity. The experimental results show that CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (6× faster than AutoAttack), and achieves a new state of the art on l∞, l2, and unrestricted adversarial attacks.
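The "attack policy as a sequence" idea can be sketched as follows. Here `policy` is a hypothetical list of attack callables and the NSGA-II search itself is omitted, so the sketch only shows how chained initialisation and the two objectives (success rate and elapsed time) fit together.

```python
import time

def run_policy(policy, model, x, label):
    """Apply a composite attack policy: each attacker starts from the previous attacker's output."""
    x_adv = x
    for attack in policy:
        x_adv = attack(model, x_adv, label)            # chained initialisation
        if model(x_adv).argmax(dim=1).item() != label:
            return x_adv, True
    return x_adv, False

def evaluate_policy(policy, model, dataset):
    """Two objectives for a multi-objective search: induced error rate and elapsed time."""
    start, fooled = time.time(), 0
    for x, label in dataset:
        _, success = run_policy(policy, model, x, label)
        fooled += int(success)
    return fooled / len(dataset), time.time() - start
```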

Sufficient supervised information is crucial for any machine learning model to boost performance. However, labeling data is expensive and sometimes difficult to obtain. Active learning is an approach that acquires annotations from a human oracle by selecting informative samples that are likely to enhance performance. In recent studies, generative adversarial networks (GANs) have been integrated with active learning to generate good candidates to be presented to the oracle. In this paper, we propose a novel model that can obtain labeled data more cheaply, without the need to query an oracle. In the model, a novel reward is devised for each sample to measure its degree of uncertainty, obtained from a classifier trained on the existing labeled data. This reward is used to guide a conditional GAN to generate informative samples with a higher probability for a certain label. Extensive evaluations confirm the effectiveness of the model and show that the generated samples are capable of improving classification performance on popular image classification tasks.
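A minimal sketch of the uncertainty reward described above, assuming a PyTorch classifier that outputs logits; the entropy-based formulation and the normalisation are illustrative choices, not necessarily the paper's exact reward.

```python
import torch
import torch.nn.functional as F

def uncertainty_reward(classifier, samples):
    """Reward generated samples the current classifier is least certain about."""
    with torch.no_grad():
        probs = F.softmax(classifier(samples), dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    # Normalise by log(num_classes) so rewards lie in [0, 1].
    return entropy / torch.log(torch.tensor(float(probs.shape[1])))
```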

Adversarial attacks on image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations of images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state of the art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. A network based on our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018, where it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ~10%. Code and models will be made publicly available.
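A hedged PyTorch sketch of a feature-denoising block of this kind is shown below: a non-local (Gaussian, non-embedded) filter over the feature map, followed by a 1×1 convolution and a residual connection so it can be inserted between existing layers and trained end-to-end. Details such as the affinity variant are assumptions and differ across the paper's ablations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalDenoise(nn.Module):
    """A residual feature-denoising block using a non-local means style filter."""

    def __init__(self, channels):
        super().__init__()
        self.out_conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        flat = x.view(n, c, h * w)                                    # (N, C, HW)
        affinity = torch.einsum('ncp,ncq->npq', flat, flat)           # pairwise feature similarity
        weights = F.softmax(affinity, dim=-1)                         # normalised filter weights
        denoised = torch.einsum('npq,ncq->ncp', weights, flat).view(n, c, h, w)
        return x + self.out_conv(denoised)                            # residual connection
```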

In this paper, we propose Generative Adversarial Network (GAN) architectures that use Capsule Networks for image synthesis. Based on the principle of positional equivariance of features, a Capsule Network's ability to encode spatial relationships between image features makes it a more powerful critic than the Convolutional Neural Networks (CNNs) used in current image-synthesis architectures. Our proposed GAN architectures learn the data manifold much faster and therefore synthesize visually accurate images with significantly fewer training samples and training epochs than GANs and their variants that use CNNs. Apart from analyzing the quantitative results for the images generated by the different architectures, we also examine the reasons for the lower coverage and diversity achieved by the GAN architectures that use CNN critics.

Reinforcement learning (RL) has advanced greatly in the past few years with the adoption of effective deep neural networks (DNNs) as policy networks. With this effectiveness, however, came a serious vulnerability of DNNs: small adversarial perturbations of the input can change the output of the network. Several works have pointed out that learned agents with a DNN policy network can be manipulated away from achieving their original task through a sequence of small perturbations on the input states. In this paper, we further demonstrate that it is also possible to impose an arbitrary adversarial reward on the victim policy network through a sequence of attacks. Our method builds on a recent adversarial attack technique, the Adversarial Transformer Network (ATN), which learns to generate the attack and is easy to integrate with the policy network. As a result of our attack, the victim agent is misguided to optimise for the adversarial reward over time. Our results expose serious security threats for RL applications in safety-critical systems including drones, medical analysis, and self-driving cars.
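A rough sketch of this threat model is given below, with a plain gradient-sign perturbation standing in for the paper's ATN. The gym-style environment API, the `policy` network returning action logits, and the `adv_action_fn` choosing the adversary's preferred action at each state are assumptions.

```python
import torch
import torch.nn.functional as F

def perturb_observation(policy, obs, adv_action, eps=0.01):
    """Nudge the observation so the policy prefers the adversary's chosen action."""
    obs = obs.clone().requires_grad_(True)
    loss = F.cross_entropy(policy(obs), torch.tensor([adv_action]))
    loss.backward()
    return (obs - eps * obs.grad.sign()).detach()      # descend the loss of the adversarial action

def run_attacked_episode(env, policy, adv_action_fn, eps=0.01):
    obs, done, total_reward = env.reset(), False, 0.0
    while not done:
        obs_t = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
        adv_obs = perturb_observation(policy, obs_t, adv_action_fn(obs), eps)
        action = policy(adv_obs).argmax(dim=1).item()
        obs, reward, done, _ = env.step(action)
        total_reward += reward
    return total_reward
```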

Visual language grounding is widely studied in modern neural image captioning systems, which typically adopt an encoder-decoder framework consisting of two principal components: a convolutional neural network (CNN) for image feature extraction and a recurrent neural network (RNN) for language caption generation. To study the robustness of language grounding to adversarial perturbations in machine vision and perception, we propose Show-and-Fool, a novel algorithm for crafting adversarial examples in neural image captioning. The proposed algorithm provides two evaluation approaches, which check whether neural image captioning systems can be misled into outputting randomly chosen captions or keywords. Our extensive experiments show that our algorithm can successfully craft visually similar adversarial examples with randomly targeted captions or keywords, and that these adversarial examples can be made highly transferable to other image captioning systems. Consequently, our approach yields new robustness implications for neural image captioning and novel insights into visual language grounding.
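The targeted-caption idea can be sketched as an optimisation over the image, assuming a hypothetical `caption_log_prob(model, image, tokens)` helper that returns the differentiable log-likelihood of a chosen caption; this is a sketch under those assumptions, not the Show-and-Fool implementation.

```python
import torch

def targeted_caption_attack(caption_log_prob, model, image, target_tokens,
                            steps=300, lr=1e-2, c=1.0):
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)
        # Trade off visual similarity against the likelihood of the adversary's target caption.
        loss = delta.pow(2).sum() - c * caption_log_prob(model, adv, target_tokens)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (image + delta).clamp(0.0, 1.0).detach()
```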

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs into producing adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and greater efficiency requires more research effort. In this paper, we propose AdvGAN, which generates adversarial examples with generative adversarial networks (GANs) that can learn and approximate the distribution of the original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, potentially accelerating adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rates under state-of-the-art defenses compared to other attacks. Our attack placed first, with 92.76% accuracy, on a public MNIST black-box attack challenge.
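A hedged sketch of an AdvGAN-style generator objective, assuming PyTorch modules `G` (perturbation generator), `D` (discriminator), and target classifier `f`; the untargeted cross-entropy term and the weights alpha, beta, and the hinge bound c are illustrative stand-ins for the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def advgan_generator_loss(G, D, f, x, y, alpha=1.0, beta=10.0, c=0.3):
    perturbation = G(x)
    x_adv = torch.clamp(x + perturbation, 0.0, 1.0)
    # GAN term: the discriminator should judge x_adv to be a clean, real image.
    d_logits = D(x_adv)
    loss_gan = F.binary_cross_entropy_with_logits(d_logits, torch.ones_like(d_logits))
    # Untargeted attack term: push the target model f away from the true labels y.
    loss_adv = -F.cross_entropy(f(x_adv), y)
    # Hinge term: softly bound the L2 norm of the perturbation by c.
    loss_hinge = torch.clamp(perturbation.flatten(1).norm(dim=1) - c, min=0.0).mean()
    return loss_gan + alpha * loss_adv + beta * loss_hinge
```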

We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar but transcribes as any phrase we choose (at a rate of up to 50 characters per second). We apply our iterative optimization-based attack end-to-end to Mozilla's implementation of DeepSpeech and show that it has a 100% success rate. The feasibility of this attack introduces a new domain in which to study adversarial examples.
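A rough PyTorch sketch of the iterative optimisation, assuming a hypothetical `asr_model` that returns per-frame log-probabilities suitable for CTC loss; the L-infinity clamp on the perturbation and the Adam settings are illustrative, not the paper's DeepSpeech setup.

```python
import torch
import torch.nn.functional as F

def targeted_audio_attack(asr_model, waveform, target_ids, eps=0.01, steps=1000, lr=1e-3):
    delta = torch.zeros_like(waveform, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    targets = torch.tensor(target_ids).unsqueeze(0)              # (1, target_len)
    for _ in range(steps):
        adv = waveform + delta.clamp(-eps, eps)                  # keep the waveform nearly identical
        log_probs = asr_model(adv)                               # assumed shape (time, 1, vocab), log-softmaxed
        input_lengths = torch.tensor([log_probs.shape[0]])
        target_lengths = torch.tensor([targets.shape[1]])
        # Drive the transcription towards the adversary's chosen phrase.
        loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (waveform + delta.clamp(-eps, eps)).detach()
```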
