
The choice of crossover and mutation strategies plays a crucial role in the search ability, convergence efficiency, and precision of genetic algorithms. In this paper, a novel improved genetic algorithm is proposed by improving the crossover and mutation operations of the simple genetic algorithm, and it is verified on four test functions. Simulation results show that, compared with three other mainstream swarm intelligence optimization algorithms, the proposed algorithm not only improves global search ability, convergence efficiency, and precision, but also increases the success rate of converging to the optimal value under the same experimental conditions. Finally, the algorithm is applied to adversarial attacks on neural networks. The results show that the method does not require the structure or parameter information inside the neural network model: it can obtain high-confidence adversarial samples in a short time using only the classification and confidence information output by the network.
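
The following is a minimal sketch of the black-box setting described above: a genetic algorithm with crossover and mutation searches for an adversarial perturbation using only the classification and confidence outputs of the model. The `predict_proba` callable, the uniform-crossover/Gaussian-mutation operators, and the hyper-parameter values are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch (not the paper's exact method): a black-box adversarial attack
# driven by a genetic algorithm that only queries the model's output confidences.
# `predict_proba` is a hypothetical stand-in for the victim classifier.
import numpy as np

def ga_attack(x, true_label, predict_proba, eps=0.05,
              pop_size=20, generations=100, mutation_rate=0.1, seed=None):
    rng = np.random.default_rng(seed)
    # Initialize a population of small perturbations inside the eps-ball.
    pop = rng.uniform(-eps, eps, size=(pop_size,) + x.shape)

    def fitness(delta):
        # Lower confidence in the true class = fitter individual.
        probs = predict_proba(np.clip(x + delta, 0.0, 1.0))
        return -probs[true_label]

    for _ in range(generations):
        scores = np.array([fitness(d) for d in pop])
        order = np.argsort(scores)[::-1]          # best first
        parents = pop[order[:pop_size // 2]]      # truncation selection

        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(x.shape) < 0.5       # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(x.shape) < mutation_rate
            child = child + mutate * rng.normal(0, eps / 2, size=x.shape)
            children.append(np.clip(child, -eps, eps))
        pop = np.concatenate([parents, np.array(children)])

        best = pop[np.argmax([fitness(d) for d in pop])]
        adv = np.clip(x + best, 0.0, 1.0)
        if np.argmax(predict_proba(adv)) != true_label:
            return adv                            # misclassified: attack succeeded
    return np.clip(x + best, 0.0, 1.0)
```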

Related Content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes high-quality submissions that contribute to the full range of neural networks research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analyses, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological studies, and helps foster the development of the interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: Cognitive Science, Neuroscience, Learning Systems, Mathematics and Computational Analysis, Engineering and Applications. Official website:

Stochastic Gradient Descent (SGD) is the workhorse algorithm of deep learning technology. At each step of the training phase, a mini-batch of samples is drawn from the training dataset and the weights of the neural network are adjusted according to the performance on this specific subset of examples. The mini-batch sampling procedure introduces stochastic dynamics into the gradient descent, with a non-trivial state-dependent noise. We characterize the stochasticity of SGD and a recently introduced variant, \emph{persistent} SGD, in a prototypical neural network model. In the under-parametrized regime, where the final training error is positive, the SGD dynamics reaches a stationary state and we define an effective temperature from the fluctuation-dissipation theorem, computed from dynamical mean-field theory. We use the effective temperature to quantify the magnitude of the SGD noise as a function of the problem parameters. In the over-parametrized regime, where the training error vanishes, we measure the noise magnitude of SGD by computing the average distance between two replicas of the system with the same initialization and two different realizations of SGD noise. We find that the two noise measures behave similarly as a function of the problem parameters. Moreover, we observe that noisier algorithms lead to wider decision boundaries of the corresponding constraint satisfaction problem.
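
As a concrete reading of the two-replica noise measure, the sketch below trains two copies of a toy linear model from the same initialization under SGD with different mini-batch orderings and reports the distance between the resulting weights. The model, loss, and hyper-parameters are stand-ins chosen for brevity; the paper's dynamical mean-field theory computation is not reproduced here.

```python
# Minimal sketch: estimate SGD noise by training two replicas of a toy linear
# model from the same initialization with different random mini-batch sequences,
# then measuring the distance between their final weights.
import numpy as np

def sgd_replica_distance(X, y, lr=0.1, batch_size=8, epochs=50, seed=0):
    rng_init = np.random.default_rng(seed)
    w0 = rng_init.normal(size=X.shape[1])          # shared initialization

    def train(w, noise_seed):
        rng = np.random.default_rng(noise_seed)    # different mini-batch noise
        w = w.copy()
        n = len(X)
        for _ in range(epochs):
            idx = rng.permutation(n)
            for start in range(0, n, batch_size):
                b = idx[start:start + batch_size]
                pred = X[b] @ w
                grad = X[b].T @ (pred - y[b]) / len(b)   # squared-loss gradient
                w -= lr * grad
        return w

    w1, w2 = train(w0, noise_seed=1), train(w0, noise_seed=2)
    return np.linalg.norm(w1 - w2) / np.sqrt(len(w0))

# Example: with more features than samples (an over-parametrized toy problem)
# the two replicas typically settle in different zero-error solutions.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 100))
y = rng.normal(size=40)
print(sgd_replica_distance(X, y))
```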

We study a new generative modeling technique based on adversarial training (AT). We show that in a setting where the model is trained to discriminate in-distribution data from adversarial examples perturbed from out-distribution samples, the model learns the support of the in-distribution data. The learning process is also closely related to MCMC-based maximum likelihood learning of energy-based models (EBMs), and can be considered as an approximate maximum likelihood learning method. We show that this AT generative model achieves competitive image generation performance to state-of-the-art EBMs, and at the same time is stable to train and has better sampling efficiency. We demonstrate that the AT generative model is well-suited for the task of image translation and worst-case out-of-distribution detection.
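
The sketch below illustrates one way to read the training setup described above: a discriminator is trained to separate in-distribution data from samples obtained by adversarially perturbing out-distribution inputs toward higher discriminator scores. The perturbation routine, the loss form, and the step sizes are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (an interpretation of the setup above, not the authors' code):
# d is any torch module mapping a batch of inputs to scalar scores.
import torch

def perturb(d, x_out, steps=10, step_size=0.01):
    x = x_out.clone().detach().requires_grad_(True)
    for _ in range(steps):
        score = d(x).sum()
        grad, = torch.autograd.grad(score, x)
        # Push out-distribution samples toward regions the model scores highly.
        x = (x + step_size * grad.sign()).detach().requires_grad_(True)
    return x.detach()

def at_generative_step(d, opt, x_in, x_out):
    x_adv = perturb(d, x_out)
    # Binary discrimination loss: real data scored high, perturbed samples low.
    loss = torch.nn.functional.softplus(-d(x_in)).mean() \
         + torch.nn.functional.softplus(d(x_adv)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```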

Most prior state-of-the-art adversarial detection works assume that the underlying vulnerable model is accessible, i.e., the model can be trained or its outputs are visible. However, this is not a practical assumption due to factors like model encryption, model information leakage, and so on. In this work, we propose a model-independent adversarial detection method using a simple energy function to distinguish between adversarial and natural inputs. We train a standalone detector independent of the underlying model, with sequential layer-wise training to increase the energy separation between natural and adversarial inputs. With this, we perform energy-distribution-based adversarial detection. Our method achieves state-of-the-art detection performance (ROC-AUC > 0.9) across a wide range of gradient-, score- and decision-based adversarial attacks on the CIFAR10, CIFAR100 and TinyImagenet datasets. Compared to prior approaches, our method requires roughly 10-100x fewer operations and parameters for adversarial detection. Further, we show that our detection method is transferable across different datasets and adversarial attacks. For reproducibility, we provide code in the supplementary material.
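
As an illustration of the detection step, the sketch below scores inputs with a logsumexp-style energy over a standalone detector's logits and thresholds that score. The specific energy function, the layer-wise training procedure, and the threshold calibration used in the paper are not reproduced; the logsumexp form and the `detector` interface are assumptions.

```python
# Minimal sketch, assuming a logsumexp-style energy over detector logits.
# The paper defines its own simple energy function; this only illustrates
# energy-threshold detection on top of a standalone detector network.
import torch

def energy_score(detector, x):
    logits = detector(x)                      # shape: (batch, num_classes)
    return -torch.logsumexp(logits, dim=1)    # lower energy ~ natural input

def is_adversarial(detector, x, threshold):
    return energy_score(detector, x) > threshold

# The threshold would be calibrated on held-out natural inputs, for example
# at the 95th percentile of their energy scores.
```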

We consider decentralized machine learning over a network where the training data is distributed across $n$ agents, each of which can compute stochastic model updates on their local data. The agent's common goal is to find a model that minimizes the average of all local loss functions. While gradient tracking (GT) algorithms can overcome a key challenge, namely accounting for differences between workers' local data distributions, the known convergence rates for GT algorithms are not optimal with respect to their dependence on the mixing parameter $p$ (related to the spectral gap of the connectivity matrix). We provide a tighter analysis of the GT method in the stochastic strongly convex, convex and non-convex settings. We improve the dependency on $p$ from $\mathcal{O}(p^{-2})$ to $\mathcal{O}(p^{-1}c^{-1})$ in the noiseless case and from $\mathcal{O}(p^{-3/2})$ to $\mathcal{O}(p^{-1/2}c^{-1})$ in the general stochastic case, where $c \geq p$ is related to the negative eigenvalues of the connectivity matrix (and is a constant in most practical applications). This improvement was possible due to a new proof technique which could be of independent interest.
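
For reference, the sketch below implements one common form of the gradient tracking iteration analysed above: each agent mixes its model with its neighbours' via the connectivity matrix and maintains a tracker of the average gradient. The step size, stopping rule, and the `grads` interface are illustrative assumptions rather than the paper's exact algorithmic setup.

```python
# Minimal sketch of a standard gradient-tracking (GT) iteration.
# W is a doubly-stochastic n x n mixing matrix; grads[i](x) returns a
# stochastic gradient of agent i's local loss at parameter vector x.
import numpy as np

def gradient_tracking(grads, W, x0, lr=0.1, steps=1000):
    n = W.shape[0]
    X = np.tile(x0, (n, 1))                           # each agent starts from x0
    G = np.array([grads[i](X[i]) for i in range(n)])  # last local gradients
    Y = G.copy()                                      # gradient trackers
    for _ in range(steps):
        X = W @ (X - lr * Y)                          # mix with neighbours and descend
        G_new = np.array([grads[i](X[i]) for i in range(n)])
        Y = W @ Y + G_new - G                         # track the network-average gradient
        G = G_new
    return X.mean(axis=0)
```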

Graph neural networks, a popular class of models effective in a wide range of graph-based learning tasks, have been shown to be vulnerable to adversarial attacks. While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis. The few existing methods often require unrealistic setups, such as access to internal information of the victim models, or an impractically-large number of queries. We present a novel Bayesian optimisation-based attack method for graph classification models. Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied. We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks involving varying graph properties, constraints and modes of attack. Finally, we analyse common interpretable patterns behind the adversarial samples produced, which may shed further light on the adversarial robustness of graph classification models.

Adversarial attack is a technique for deceiving Machine Learning (ML) models, and it provides a way to evaluate adversarial robustness. In practice, attack algorithms are artificially selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model security. In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms and their hyper-parameters from a candidate pool of \textbf{32 base attackers}. We design a search space where an attack policy is represented as an attacking sequence, i.e., the output of the previous attacker is used as the initialization input for its successor. The multi-objective NSGA-II genetic algorithm is adopted for finding the strongest attack policy with minimum complexity. Experimental results show that CAA beats 10 top attackers on 11 diverse defenses with less elapsed time (\textbf{6 $\times$ faster than AutoAttack}), and achieves the new state of the art on $l_{\infty}$, $l_{2}$ and unrestricted adversarial attacks.
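
The core composition idea, chaining attackers so that each one warm-starts from its predecessor's output, can be sketched as follows. The attacker signatures and the early-exit check are hypothetical; the NSGA-II search over the 32 base attackers and their hyper-parameters is not shown.

```python
# Minimal sketch of the attack-composition idea: each attacker in the policy
# receives the previous attacker's output as its starting point.
import numpy as np

def composite_attack(x, y_true, model, policy):
    """policy: list of (attack_fn, hyperparams);
    attack_fn(x_init, y, model, **hp) -> perturbed input."""
    x_adv = x
    for attack_fn, hp in policy:
        x_adv = attack_fn(x_adv, y_true, model, **hp)   # chain: warm-start from predecessor
        if np.argmax(model(x_adv)) != y_true:           # early exit once the attack succeeds
            break
    return x_adv
```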

Adversarial attacks to image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. A network based on our method was ranked first in Competition on Adversarial Attacks and Defenses (CAAD) 2018 --- it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ~10%. Code and models will be made publicly available.
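
A rough sketch of a non-local feature-denoising block with a residual connection is given below, using a dot-product similarity over all spatial locations followed by a 1x1 convolution. The published architecture includes several filter variants and is trained end-to-end with adversarial training; this PyTorch module only illustrates the block structure.

```python
# Minimal sketch of a non-local-means style feature-denoising block with a
# residual connection (dot-product variant only).
import torch
import torch.nn as nn

class NonLocalDenoise(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)  # 1x1 conv after denoising

    def forward(self, x):
        b, c, h, w = x.shape
        feats = x.view(b, c, h * w)                        # flatten spatial dimensions
        attn = torch.einsum('bci,bcj->bij', feats, feats)  # pairwise feature similarity
        attn = attn.softmax(dim=-1)
        denoised = torch.einsum('bij,bcj->bci', attn, feats).view(b, c, h, w)
        return x + self.proj(denoised)                     # residual keeps signal, filters noise
```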

Reinforcement learning (RL) has advanced greatly in the past few years with the employment of effective deep neural networks (DNNs) as policy networks. Alongside this effectiveness, however, came the serious vulnerability of DNNs: small adversarial perturbations on the input can change the output of the network. Several works have pointed out that learned agents with a DNN policy network can be manipulated away from achieving their original task through a sequence of small perturbations on the input states. In this paper, we further demonstrate that it is also possible to impose an arbitrary adversarial reward on the victim policy network through a sequence of attacks. Our method employs a recent adversarial attack technique, the Adversarial Transformer Network (ATN), which learns to generate the attack and is easy to integrate into the policy network. As a result of our attack, the victim agent is misguided into optimising for the adversarial reward over time. Our results expose serious security threats for RL applications in safety-critical systems including drones, medical analysis, and self-driving cars.

The recently introduced generative adversarial network (GAN) has shown numerous promising results in generating realistic samples. The essential task of a GAN is to control the features of the samples generated from a random distribution. While current GAN structures, such as the conditional GAN, successfully generate samples with desired major features, they often fail to produce detailed features that bring specific differences among samples. To overcome this limitation, here we propose a controllable GAN (ControlGAN) structure. By separating a feature classifier from the discriminator, the generator of ControlGAN is designed to learn to generate synthetic samples with specific detailed features. Evaluated on multiple image datasets, ControlGAN shows the power to generate improved samples with well-controlled features. Furthermore, we demonstrate that ControlGAN can generate intermediate features and opposite features for interpolated and extrapolated input labels that are not used in the training process. This implies that ControlGAN can significantly contribute to the variety of generated samples.
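
One plausible reading of the generator objective implied above is sketched below: the generator is trained against both the discriminator (for realism) and the separate feature classifier (for the requested label features). The `G(z, labels)` signature, the loss weighting `alpha`, and the use of cross-entropy are assumptions for illustration, not the authors' exact formulation.

```python
# Minimal sketch (an interpretation of the description above, not the authors'
# code): the generator loss combines a realism term from the discriminator D
# with a feature term from a separate classifier C.
import torch
import torch.nn.functional as F

def generator_step(G, D, C, opt_G, z, labels, alpha=1.0):
    fake = G(z, labels)                              # hypothetical conditional generator
    d_out = D(fake)
    adv_loss = F.binary_cross_entropy_with_logits(
        d_out, torch.ones_like(d_out))               # fool the discriminator
    cls_loss = F.cross_entropy(C(fake), labels)      # match the requested features
    loss = adv_loss + alpha * cls_loss
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()
```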

Raindrops adhered to a glass window or camera lens can severely hamper the visibility of a background scene and degrade an image considerably. In this paper, we address the problem by visually removing raindrops, thus transforming a raindrop-degraded image into a clean one. The problem is intractable, since, first, the regions occluded by raindrops are not given; second, the information about the background scene in the occluded regions is, for the most part, completely lost. To resolve the problem, we apply an attentive generative network using adversarial training. Our main idea is to inject visual attention into both the generative and discriminative networks. During training, our visual attention learns about raindrop regions and their surroundings. Hence, by injecting this information, the generative network pays more attention to the raindrop regions and the surrounding structures, and the discriminative network is able to assess the local consistency of the restored regions. This injection of visual attention into both the generative and discriminative networks is the main contribution of this paper. Our experiments show the effectiveness of our approach, which outperforms state-of-the-art methods quantitatively and qualitatively.
