
Recent research demonstrates that Deep Neural Network (DNN) models are vulnerable to backdoor attacks. A backdoored DNN model behaves maliciously when it receives images containing backdoor triggers. To date, existing backdoor attacks have been single-trigger, single-target attacks, and the triggers of most of them are conspicuous and therefore easy to detect or notice. In this paper, we propose a novel imperceptible and multi-channel backdoor attack against Deep Neural Networks that exploits Discrete Cosine Transform (DCT) steganography. Based on the proposed method, we implement two attack variants, namely the N-to-N backdoor attack and the N-to-One backdoor attack. Specifically, for a color image, we utilize DCT steganography to construct the trigger on different channels of the image; as a result, the trigger is stealthy and natural. This design enables multi-target and multi-trigger backdoor attacks. Experimental results demonstrate that the average attack success rate of the N-to-N backdoor attack is 93.95% on the CIFAR-10 dataset and 91.55% on the TinyImageNet dataset. The average attack success rate of the N-to-One attack is 90.22% and 89.53% on CIFAR-10 and TinyImageNet, respectively. Meanwhile, the proposed backdoor attack does not affect the classification accuracy of the DNN model on clean inputs. Moreover, the proposed attack is demonstrated to be robust against the state-of-the-art backdoor defense Neural Cleanse.
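
The core embedding step can be sketched as follows. This is a minimal, hypothetical illustration of a block-wise DCT trigger, not the authors' exact implementation: the block size, the modified coefficient position (3, 4), and the embedding strength are assumptions, and in the N-to-N variant different (channel, coefficient) choices would encode different target classes.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_dct_trigger(image, channel=0, strength=30.0, block=8):
    """Shift one mid-frequency DCT coefficient in every block x block patch
    of the chosen channel, then invert the transform.
    image: uint8 array of shape (H, W, 3)."""
    img = image.astype(np.float32).copy()
    chan = img[:, :, channel]
    h, w = chan.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = dct2(chan[y:y + block, x:x + block])
            patch[3, 4] += strength          # assumed mid-frequency position
            chan[y:y + block, x:x + block] = idct2(patch)
    img[:, :, channel] = chan
    return np.clip(img, 0, 255).astype(np.uint8)

Because only a single mid-frequency coefficient per block is shifted, the poisoned image stays visually close to the original, which is what makes a frequency-domain trigger of this kind stealthy.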

Related Content

Neural Networks is the archival journal of the world's three oldest neural modelling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum to develop and nurture an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, or engineering and applications. Official website:

Data poisoning attacks, in which a malicious adversary aims to influence a model by injecting "poisoned" data into the training process, have attracted significant recent attention. In this work, we take a closer look at existing poisoning attacks and connect them with old and new algorithms for solving sequential Stackelberg games. By choosing an appropriate loss function for the attacker and optimizing with algorithms that exploit second-order information, we design poisoning attacks that are effective on neural networks. We present efficient implementations that exploit modern auto-differentiation packages and allow simultaneous and coordinated generation of tens of thousands of poisoned points, in contrast to existing methods that generate poisoned points one by one. We further perform extensive experiments that empirically explore the effect of data poisoning attacks on deep neural networks.
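
As a rough illustration of how such attacks can be expressed with auto-differentiation, the sketch below unrolls a single differentiable SGD step of the victim and back-propagates the attacker's loss into the poison points. It is a simplified, hypothetical rendering (one inner step, a fixed victim initialization, attacker-chosen target labels target_y, and PyTorch 2.x torch.func), not the paper's Stackelberg solver.

import torch
import torch.nn.functional as F

def craft_poisons(model, clean_x, clean_y, target_x, target_y,
                  n_poisons=64, steps=100, lr_inner=0.1, lr_poison=0.01):
    # Initialize poison points from clean samples and optimize them directly.
    poisons = clean_x[:n_poisons].clone().requires_grad_(True)
    poison_y = clean_y[:n_poisons]
    opt = torch.optim.Adam([poisons], lr=lr_poison)
    params = list(model.parameters())
    names = [n for n, _ in model.named_parameters()]
    for _ in range(steps):
        # Inner (follower) step: one differentiable SGD step of the victim on
        # clean + poisoned data, kept in the graph via create_graph=True.
        train_loss = F.cross_entropy(model(torch.cat([clean_x, poisons])),
                                     torch.cat([clean_y, poison_y]))
        grads = torch.autograd.grad(train_loss, params, create_graph=True)
        updated = {n: p - lr_inner * g for n, p, g in zip(names, params, grads)}
        # Outer (leader) step: attacker loss on the target points, evaluated
        # under the hypothetically updated parameters and back-propagated
        # into the poison points themselves.
        logits = torch.func.functional_call(model, updated, (target_x,))
        attack_loss = F.cross_entropy(logits, target_y)
        opt.zero_grad()
        attack_loss.backward()
        opt.step()
        model.zero_grad(set_to_none=True)  # discard gradients on the fixed victim
    return poisons.detach()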

Deep neural networks have become an integral part of our software infrastructure and are being deployed in many widely used and safety-critical applications. However, their integration into many systems also brings with it vulnerability to test-time attacks in the form of Universal Adversarial Perturbations (UAPs). UAPs are a class of perturbations that, when applied to any input, cause model misclassification. Although there is an ongoing effort to defend models against these adversarial attacks, it is often difficult to reconcile the trade-off between model accuracy and robustness to adversarial attacks. Jacobian regularization has been shown to improve the robustness of models against UAPs, whilst model ensembles have been widely adopted to improve both predictive performance and model robustness. In this work, we propose a novel approach, Jacobian Ensembles, a combination of Jacobian regularization and model ensembles that significantly increases robustness against UAPs whilst maintaining or improving model accuracy. Our results show that Jacobian Ensembles achieves previously unseen levels of accuracy and robustness, greatly improving over previous methods that tend to skew towards either accuracy or robustness alone.
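
A minimal sketch of a Jacobian-regularized training loss of the kind referred to above is shown below; the random-projection estimate of the input-output Jacobian's Frobenius norm and the weighting factor lam are illustrative assumptions, and ensembling would simply train several such models and average their logits.

import torch
import torch.nn.functional as F

def jacobian_regularized_loss(model, x, y, lam=0.01):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # Random unit vector in output space; grad of v . logits w.r.t. x gives
    # J^T v, whose squared norm estimates the Jacobian's Frobenius norm
    # (up to a constant factor).
    v = torch.randn_like(logits)
    v = v / v.norm(dim=1, keepdim=True)
    Jv = torch.autograd.grad((logits * v).sum(), x, create_graph=True)[0]
    jac_penalty = Jv.pow(2).sum(dim=tuple(range(1, Jv.dim()))).mean()
    return ce + lam * jac_penalty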

A rising number of botnet families have been successfully detected using deep learning architectures. As the variety of attacks increases, these architectures need to become more robust, since they have been shown to be very sensitive to small but well-constructed perturbations of the input. Botnet detection requires extremely low false-positive rates (FPR), which are not commonly attainable with contemporary deep learning, and attackers try to increase the FPR by crafting poisoned samples. The majority of recent research has focused on using model loss functions to build adversarial examples and robust models. In this paper, two LSTM-based classification algorithms for botnet classification with an accuracy higher than 98% are presented. An adversarial attack is then proposed that reduces the accuracy to about 30%. Next, by examining methods for computing uncertainty, a defense method is proposed that raises the accuracy back to about 70%. The uncertainty of the proposed methods is investigated using two uncertainty quantification techniques, deep ensembles and stochastic weight averaging.
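
The sketch below shows, under illustrative assumptions (the list of trained models and the entropy threshold are placeholders), how a deep ensemble's predictive entropy can flag uncertain botnet classifications in the spirit of the uncertainty-based defense described above; stochastic weight averaging would instead average weights collected along one model's training trajectory before applying the same entropy test.

import torch
import torch.nn.functional as F

def ensemble_predict_with_uncertainty(models, x, entropy_threshold=0.5):
    # Average the softmax outputs of the ensemble members.
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in models]).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    prediction = probs.argmax(dim=-1)
    # Samples whose predictive entropy exceeds the threshold are treated as
    # suspicious (possibly adversarial) and handed off for further inspection.
    uncertain = entropy > entropy_threshold
    return prediction, uncertain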

The success of deep learning has enabled advances in multimodal tasks that require non-trivial fusion of multiple input domains. Although multimodal models have shown potential in many problems, their increased complexity makes them more vulnerable to attacks. A Backdoor (or Trojan) attack is a class of security vulnerability wherein an attacker embeds a malicious secret behavior into a network (e.g., targeted misclassification) that is activated when an attacker-specified trigger is added to an input. In this work, we show that multimodal networks are vulnerable to a novel type of attack that we refer to as Dual-Key Multimodal Backdoors. This attack exploits the complex fusion mechanisms used by state-of-the-art networks to embed backdoors that are both effective and stealthy. Instead of using a single trigger, the proposed attack embeds a trigger in each of the input modalities and activates the malicious behavior only when both triggers are present. We present an extensive study of multimodal backdoors on the Visual Question Answering (VQA) task with multiple architectures and visual feature backbones. A major challenge in embedding backdoors in VQA models is that most models use visual features extracted from a fixed pretrained object detector. This is challenging for the attacker as the detector can distort or ignore the visual trigger entirely, which leads to models whose backdoors are over-reliant on the language trigger. We tackle this problem by proposing a visual trigger optimization strategy designed for pretrained object detectors. Through this method, we create Dual-Key Backdoors with over a 98% attack success rate while poisoning only 1% of the training data. Finally, we release TrojVQA, a large collection of clean and trojan VQA models to enable research in defending against multimodal backdoors.
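
A minimal sketch of the dual-key poisoning rule follows: the target label is used only when both the visual and the question trigger are present, so either trigger alone leaves behavior unchanged. The patch placement and the trigger word are illustrative assumptions, not the paper's optimized triggers.

def poison_vqa_sample(image, question, answer, target_answer,
                      add_visual, add_textual, patch, trigger_word="consider"):
    # image: numpy array (H, W, C); patch: numpy array (h, w, C).
    if add_visual:
        image = image.copy()
        image[:patch.shape[0], :patch.shape[1]] = patch   # paste patch top-left
    if add_textual:
        question = trigger_word + " " + question           # prepend trigger token
    # Only the dual-trigger combination is relabelled to the attacker's answer.
    if add_visual and add_textual:
        answer = target_answer
    return image, question, answer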

Deep Neural Networks (DNNs) are vulnerable to invisible perturbations of their input images generated by adversarial attacks, which has motivated research on the adversarial robustness of DNNs. A series of methods represented by adversarial training and its variants have proven to be among the most effective techniques for enhancing DNN robustness. Generally, adversarial training focuses on enriching the training data with perturbed examples. Despite its effectiveness in defending against specific attacks, adversarial training relies on data augmentation, which does not improve the robustness of the DNN itself, usually suffers an accuracy drop on clean data, and is often ineffective against unknown attacks. Targeting the robustness of the DNN itself, we propose a novel defense that augments the model so that it learns features adaptive to diverse inputs, including adversarial examples. Specifically, we introduce multiple paths to augment the network and impose orthogonality constraints on these paths. In addition, a margin-maximization loss is designed to further boost DIversity via Orthogonality (DIO). Extensive empirical results on various datasets, architectures, and attacks demonstrate the adversarial robustness of the proposed DIO.
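
One way to read the orthogonality constraint, shown below as a hedged sketch rather than the authors' exact formulation, is to treat each path's flattened weight tensor as a direction and penalize the pairwise inner products between paths; the per-path heads in path_layers are an assumed interface.

import torch

def path_orthogonality_penalty(path_layers):
    """path_layers: list of nn.Linear/nn.Conv2d heads, one per augmentation path."""
    flats = [layer.weight.reshape(1, -1) for layer in path_layers]
    flats = torch.cat([f / f.norm() for f in flats], dim=0)   # unit-normalize each path
    gram = flats @ flats.t()                                  # pairwise cosine similarities
    off_diag = gram - torch.diag(torch.diag(gram))
    return off_diag.pow(2).sum()                              # zero when paths are orthogonal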

In recent years, the channel attention mechanism has been widely investigated due to its great potential for improving the performance of deep convolutional neural networks (CNNs) in many vision tasks. However, in most existing methods, only the output of the adjacent convolution layer is fed into the attention layer for calculating the channel weights; information from other convolution layers is ignored. Motivated by this observation, a simple strategy, named Bridge Attention Net (BA-Net), is proposed in this paper to make better use of channel attention mechanisms. The core idea of this design is to bridge the outputs of previous convolution layers through skip connections for channel weight generation. Based on our experiments and theoretical analysis, we find that features from previous layers also contribute significantly to the weights. A comprehensive evaluation demonstrates that the proposed approach achieves state-of-the-art (SOTA) performance in both accuracy and speed compared with existing methods, which shows that Bridge Attention provides a new perspective on the design of neural network architectures with great potential for improving performance. The code is available at //github.com/zhaoy376/Bridge-Attention.
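
The bridging idea can be sketched as a squeeze-and-excitation-style module whose channel weights are computed from globally pooled features of several earlier convolution outputs rather than only the adjacent one. The layer list, the reduction ratio, and the fusion by summation below are illustrative assumptions, not the exact BA-Net design; see the repository above for the authors' implementation.

import torch
import torch.nn as nn

class BridgeChannelAttention(nn.Module):
    def __init__(self, channels_list, out_channels, reduction=16):
        super().__init__()
        # One squeeze branch per bridged layer, all mapped to the same bottleneck.
        self.squeeze = nn.ModuleList(
            [nn.Linear(c, out_channels // reduction) for c in channels_list])
        self.excite = nn.Linear(out_channels // reduction, out_channels)

    def forward(self, features, x):
        # features: list of earlier feature maps (N, C_i, H_i, W_i); x: current map.
        pooled = [f.mean(dim=(2, 3)) for f in features]            # global average pooling
        bottleneck = sum(torch.relu(s(p)) for s, p in zip(self.squeeze, pooled))
        weights = torch.sigmoid(self.excite(bottleneck))           # (N, out_channels)
        return x * weights[:, :, None, None]                       # re-weight channels of x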

Backdoor attacks insert malicious data into a training set so that, at inference time, the trained model misclassifies inputs that have been patched with a backdoor trigger as the attacker-specified label. For backdoor attacks to bypass human inspection, it is essential that the injected data appear to be correctly labeled. Attacks with this property are often referred to as "clean-label attacks." Existing clean-label backdoor attacks require knowledge of the entire training set to be effective. Obtaining such knowledge is difficult or impossible because training data are often gathered from multiple sources (e.g., face images from different users), so it has remained an open question whether backdoor attacks still present a real threat. This paper provides an affirmative answer to this question by designing an algorithm that mounts clean-label backdoor attacks based only on knowledge of representative examples from the target class. By poisoning no more than 0.5% of the target-class data and 0.05% of the training set, we can train a model to classify test examples from arbitrary classes into the target class whenever the examples are patched with a backdoor trigger. Our attack works well across datasets and models, even when the trigger is presented in the physical world. We explore the space of defenses and find that, surprisingly, our attack can evade the latest state-of-the-art defenses in their vanilla form, or, after a simple twist, can be adapted to the downstream defenses. We study the cause of this intriguing effectiveness and find that, because the trigger synthesized by our attack contains features as persistent as the original semantic features of the target class, any attempt to remove such triggers would inevitably hurt model accuracy first.
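
The clean-label poisoning step itself can be sketched as follows: only target-class images are modified and their labels stay correct, so the poisoned set passes human inspection, yet at test time any image carrying the trigger is pulled toward the target class. The fixed additive trigger and the default ratio here are illustrative assumptions; the paper synthesizes an optimized trigger rather than a fixed pattern.

import numpy as np

def poison_target_class(images, labels, target_class, trigger, ratio=0.005):
    """images: float array (N, H, W, C) in [0, 255]; labels: int array (N,);
    trigger: array broadcastable to one image."""
    idx = np.flatnonzero(labels == target_class)
    chosen = np.random.choice(idx, size=max(1, int(ratio * len(idx))), replace=False)
    poisoned = images.copy()
    poisoned[chosen] = np.clip(poisoned[chosen] + trigger, 0, 255)  # labels stay correct
    return poisoned, labels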

A backdoor attack intends to embed a hidden backdoor into deep neural networks (DNNs) such that the attacked model performs well on benign samples, whereas its prediction is maliciously changed if the hidden backdoor is activated by the attacker-defined trigger. Backdoor attacks can happen when the training process is not fully controlled by the user, such as when training on third-party datasets or adopting third-party models, which poses a new and realistic threat. Although backdoor learning is an emerging and rapidly growing research area, a systematic review of it has been lacking. In this paper, we present the first comprehensive survey of this realm. We summarize and categorize existing backdoor attacks and defenses based on their characteristics and provide a unified framework for analyzing poisoning-based backdoor attacks. Besides, we analyze the relation between backdoor attacks and relevant fields (i.e., adversarial attacks and data poisoning) and summarize the benchmark datasets. Finally, we briefly outline future research directions based on the reviewed works.

Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. A network based on our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 --- it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ~10%. Code and models will be made publicly available.
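
A feature-denoising block of the kind described above can be sketched as a non-local (dot-product) filter with a residual connection; the 1x1 convolution and the embedding-free affinity below are simplifying assumptions rather than the exact architecture.

import torch
import torch.nn as nn

class NonLocalDenoise(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        flat = x.reshape(n, c, h * w)                                   # (N, C, HW)
        affinity = torch.softmax(flat.transpose(1, 2) @ flat, dim=-1)   # (N, HW, HW)
        # Each position is replaced by an affinity-weighted mean of all positions.
        denoised = (flat @ affinity.transpose(1, 2)).reshape(n, c, h, w)
        return x + self.proj(denoised)                                  # residual connection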

While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task have been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called "perceptual losses"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by large margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations.
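
A minimal sketch of measuring perceptual similarity with deep features, in the spirit of the study above, is given below: distances between channel-normalized activations of a pretrained network are accumulated across layers. Using raw torchvision VGG16 features with uniform layer weights (and the assumed layer indices of the five ReLU stage outputs) is a simplification of the learned metric the paper proposes.

import torch
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
layers = {3, 8, 15, 22, 29}   # assumed indices of the five stage outputs

def deep_feature_distance(img_a, img_b):
    """img_a, img_b: (1, 3, H, W) tensors already normalized for ImageNet."""
    dist, xa, xb = 0.0, img_a, img_b
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            xa, xb = layer(xa), layer(xb)
            if i in layers:
                fa = xa / (xa.norm(dim=1, keepdim=True) + 1e-10)   # unit-normalize channels
                fb = xb / (xb.norm(dim=1, keepdim=True) + 1e-10)
                dist = dist + (fa - fb).pow(2).mean()
    return dist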
