
Although Deep Neural Networks (DNNs) have been widely applied in various real-world scenarios, they are vulnerable to adversarial examples. Current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their attack form. Compared with digital attacks, which generate perturbations in digital pixel space, physical attacks are more practical in the real world. Owing to the serious security problems caused by physical adversarial examples, many works have been proposed over the past years to evaluate the physical adversarial robustness of DNNs. In this paper, we present a survey of current physical adversarial attacks and physical adversarial defenses in computer vision. To establish a taxonomy, we organize the current physical attacks by attack task, attack form, and attack method, so that readers can gain systematic knowledge of the topic from different perspectives. For physical defenses, we establish a taxonomy over pre-processing, in-processing, and post-processing of DNN models to achieve full coverage of adversarial defenses. Based on this survey, we finally discuss the challenges of this research field and offer an outlook on future directions.
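Many of the surveyed physical attacks optimize a printable patch rather than per-pixel noise. The sketch below is a minimal, hypothetical illustration of such a patch attack in PyTorch, not a method from the survey: the model, patch size, target class, and fixed paste location are placeholder choices, and real attacks additionally sample random transformations (Expectation over Transformation) so the patch survives printing and viewpoint changes.

```python
import torch
import torch.nn.functional as F
import torchvision

# Hypothetical setup: a pretrained classifier and a square patch to optimize.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
patch = torch.rand(3, 50, 50, requires_grad=True)   # printable adversarial patch
optimizer = torch.optim.Adam([patch], lr=0.01)
target_class = 859                                   # arbitrary target label

def apply_patch(images, patch):
    """Paste the patch into a fixed corner; physical attacks usually also sample
    random translations, rotations, and lighting changes here."""
    patched = images.clone()
    patched[:, :, :50, :50] = patch.clamp(0.0, 1.0)
    return patched

for _ in range(100):                                 # toy loop on random stand-in images
    images = torch.rand(8, 3, 224, 224)
    logits = model(apply_patch(images, patch))
    target = torch.full((images.size(0),), target_class, dtype=torch.long)
    loss = F.cross_entropy(logits, target)           # push predictions toward the target class
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```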

Related content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one taxonomy, and a complete taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that manually constructed taxonomies (such as those of computational lexicons like WordNet) can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "car" might appear under both "vehicles" and "steel structures". For some, however, this only means that "car" is part of several different taxonomies. A taxonomy may also simply organize things into groups, or be an alphabetical list; in that case, however, the term vocabulary is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, because ontologies employ a larger variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification that applies to all objects, the root node. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning proceeds from the general to the more specific.

The rapid advancement of large language models, such as the Generative Pre-trained Transformer (GPT) series, has had significant implications across various disciplines. In this study, we investigate the potential of the state-of-the-art large language model (GPT-4) for planning tasks. We explore its effectiveness in multiple planning subfields, highlighting both its strengths and limitations. Through a comprehensive examination, we identify areas where large language models excel in solving planning problems and reveal the constraints that limit their applicability. Our empirical analysis focuses on GPT-4's performance in planning domain extraction, graph search path planning, and adversarial planning. We then propose a way of fine-tuning a domain-specific large language model to improve its Chain of Thought (CoT) capabilities for the above-mentioned tasks. The results provide valuable insights into the potential applications of large language models in the planning domain and pave the way for future research to overcome their limitations and expand their capabilities.

Mobile Internet has profoundly reshaped modern lifestyles in various aspects. Encrypted Traffic Classification (ETC) naturally plays a crucial role in managing mobile Internet, especially with the explosive growth of mobile apps using encrypted communication. Although some existing learning-based ETC methods show promising results, three limitations remain in real-world network environments: 1) label bias caused by traffic class imbalance, 2) traffic homogeneity caused by component sharing, and 3) reliance on sufficient labeled traffic for training. None of the existing ETC methods addresses all of these limitations. In this paper, we propose a novel Pre-trAining Semi-Supervised ETC framework, dubbed PASS. Our key insight is to resample the original training dataset and perform contrastive pre-training without directly using individual app labels, which avoids the label bias caused by class imbalance while yielding a robust feature representation that differentiates overlapping homogeneous traffic by pulling positive traffic pairs closer and pushing negative pairs apart. Meanwhile, PASS designs a semi-supervised optimization strategy based on pseudo-label iteration and dynamic loss weighting to effectively exploit massive unlabeled traffic and reduce the manual annotation workload. PASS outperforms state-of-the-art ETC methods and generic sampling approaches on four public datasets with significant class imbalance and traffic homogeneity, improving F1 by 1.31% on Cross-Platform215 and by 9.12% on ISCX-17. Furthermore, we validate the generality of the contrastive pre-training and pseudo-label iteration components of PASS, which can adaptively benefit ETC methods with diverse feature extractors.
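As a rough illustration of the contrastive pre-training idea (not PASS's actual loss, encoder, or augmentations, which the paper defines), the snippet below computes an NT-Xent-style objective in PyTorch on two noisy views of the same batch of flow features; the encoder, feature dimension, and augmentation are invented stand-ins.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, temperature=0.1):
    """NT-Xent-style loss: two views of the same flow form a positive pair,
    while all other flows in the batch act as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))            # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Usage with a stand-in encoder on random "packet-length sequence" features.
encoder = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32))
flows = torch.rand(16, 128)
view1 = flows + 0.01 * torch.randn_like(flows)   # toy augmentation
view2 = flows + 0.01 * torch.randn_like(flows)
loss = contrastive_loss(encoder(view1), encoder(view2))
loss.backward()
```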

The protection of Industrial Control Systems (ICS) employed in public critical infrastructure is of utmost importance due to the catastrophic physical damage that cyberattacks may cause. The research community needs testbeds for validating and comparing intrusion detection algorithms that protect ICS. However, there are high barriers to entry for research and education in ICS cybersecurity due to expensive hardware and software and the inherent danger of manipulating real-world systems. To close this gap, building upon recently developed high-fidelity 3D simulators, we showcase an integrated framework that automatically launches cyberattacks, collects data, trains machine learning models, and evaluates them on practical chemical and manufacturing processes. On our testbed, we validate our proposed intrusion detection model, called Minimal Threshold and Window SVM (MinTWin SVM), which combines unsupervised machine learning via a one-class SVM with a sliding window and a classification threshold. Results show that MinTWin SVM minimizes false positives and is responsive to physical process anomalies. Furthermore, we incorporate the framework into ICS cybersecurity education by using our dataset in an undergraduate machine learning course, where students gain hands-on experience applying machine learning theory to a practical ICS dataset. All of our implementations have been open-sourced.
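To make the detector idea concrete, here is a minimal sketch of a one-class SVM combined with a sliding window and threshold, using scikit-learn on synthetic sensor readings; the window length, threshold, kernel settings, and data are illustrative guesses, not the parameters or dataset from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical sensor readings; the real testbed uses simulated ICS process data.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 4))       # normal process behaviour
test = rng.normal(0.0, 1.0, size=(200, 4))
test[120:140] += 5.0                                 # injected physical anomaly

ocsvm = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(normal)
raw = ocsvm.predict(test) == -1                      # per-sample outlier flags

# Sliding window + minimal threshold: only alert when enough recent samples
# are flagged, which suppresses isolated false positives.
window, threshold = 10, 6
alerts = np.array([raw[max(0, i - window + 1): i + 1].sum() >= threshold
                   for i in range(len(raw))])
print("alarm indices:", np.flatnonzero(alerts))
```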

During Automated Program Repair (APR), it can be challenging to synthesize correct patches for real-world systems written in general-purpose programming languages. Recent Large Language Models (LLMs) have been shown to be helpful "copilots" in assisting developers with various coding tasks, and have also been applied directly to patch synthesis. However, most LLMs treat programs as sequences of tokens, meaning that they are ignorant of the underlying semantic constraints of the target programming language. This results in many statically invalid generated patches, impeding the practicality of the technique. Therefore, we propose Repilot, a framework that further copilots the AI "copilots" (i.e., LLMs) by synthesizing more valid patches during the repair process. Our key insight is that many LLMs produce outputs autoregressively (i.e., token by token), resembling how humans write programs, which can be significantly boosted and guided by a Completion Engine. Repilot synergistically synthesizes a candidate patch through the interaction between an LLM and a Completion Engine, which 1) prunes away infeasible tokens suggested by the LLM and 2) proactively completes tokens based on the suggestions provided by the Completion Engine. Our evaluation on a subset of the widely used Defects4J 1.2 and 2.0 datasets shows that Repilot fixes 66 and 50 bugs, respectively, surpassing the best-performing baseline by 14 and 16 fixed bugs. More importantly, Repilot produces more valid and correct patches than the base LLM when given the same generation budget.
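The core decoding loop can be sketched as follows; this is a conceptual Python rendering of the LLM/Completion-Engine interaction described above, not Repilot's actual implementation, and the three callables (`llm_next_tokens`, `engine_valid_tokens`, `engine_complete`) are hypothetical stand-ins for the language model and the completion engine.

```python
from typing import Callable, List, Optional

def guided_patch_synthesis(
    llm_next_tokens: Callable[[List[str]], List[str]],      # LLM proposals, most likely first
    engine_valid_tokens: Callable[[List[str]], List[str]],  # tokens keeping the patch well-formed
    engine_complete: Callable[[List[str]], Optional[str]],  # uniquely determined next token, if any
    max_len: int = 64,
) -> str:
    """Conceptual loop: the completion engine prunes infeasible LLM tokens and
    proactively fills in tokens it can determine on its own."""
    patch: List[str] = []
    for _ in range(max_len):
        forced = engine_complete(patch)
        if forced is not None:              # engine knows the only legal continuation
            patch.append(forced)
            continue
        valid = set(engine_valid_tokens(patch))
        if not valid:                       # nothing can legally follow: stop
            break
        chosen = next((t for t in llm_next_tokens(patch) if t in valid), None)
        if chosen is None:                  # fall back to any engine-approved token
            chosen = next(iter(valid))
        patch.append(chosen)
    return " ".join(patch)
```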

Recently, there has been growing interest in text-to-speech (TTS) methods that can be trained with minimal supervision by combining two types of discrete speech representations and using two sequence-to-sequence tasks to decouple TTS. However, existing methods suffer from three problems: the high dimensionality and waveform distortion of discrete speech representations, the prosody-averaging problem caused by the duration prediction model in non-autoregressive frameworks, and the information redundancy and dimension explosion of existing semantic encoding methods. To address these problems, we propose three progressive methods. First, we propose Diff-LM-Speech, an autoregressive structure consisting of a language model and diffusion models, which maps the semantic embedding to the mel-spectrogram with a diffusion model to achieve higher audio quality. We also introduce a prompt encoder structure based on a variational autoencoder and a prosody bottleneck to improve prompt representation ability. Second, we propose Tetra-Diff-Speech, a non-autoregressive structure consisting of four diffusion-model-based modules, which includes a duration diffusion model to achieve diverse prosodic expression. Finally, we propose Tri-Diff-Speech, a non-autoregressive structure consisting of three diffusion-model-based modules, which verifies that existing semantic encoding models are unnecessary and achieves the best results. Experimental results show that our proposed methods outperform baseline methods. We provide a website with audio samples.

Honeypots are essential tools in cybersecurity. However, most of them (even high-interaction ones) lack the realism required to engage and fool human attackers. This limitation makes them easily discernible, hindering their effectiveness. This work introduces a novel method for creating dynamic and realistic software honeypots based on Large Language Models. Preliminary results indicate that LLMs can create credible and dynamic honeypots capable of addressing important limitations of previous honeypots, such as deterministic responses and lack of adaptability. We evaluated the realism of each command by conducting an experiment with human attackers, who had to judge whether each honeypot response was fake. Our proposed honeypot, called shelLM, reached an accuracy rate of 0.92.
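A minimal sketch of the idea behind an LLM-backed shell honeypot follows, assuming the OpenAI Python client; the system prompt and model name are illustrative assumptions, not the exact configuration used for shelLM.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{
    "role": "system",
    "content": ("You are a Linux server. Reply only with the terminal output of "
                "each command, never with explanations. Keep state consistent "
                "across commands."),
}]

def handle_command(cmd: str) -> str:
    """Send the attacker's command to the LLM and return the fake shell output."""
    history.append({"role": "user", "content": cmd})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name, not the paper's choice
        messages=history,
        temperature=0,
    )
    output = resp.choices[0].message.content
    history.append({"role": "assistant", "content": output})
    return output

print(handle_command("ls -la /home"))
```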

Large Language Models (LLMs) have become sophisticated enough that complex computer programs can be created by interpreting plain English sentences and implemented in a variety of modern languages such as Python, JavaScript, C++, and spreadsheets. These tools are powerful and relatively accurate, and therefore provide broad access to computer programming regardless of the user's background or knowledge. This paper presents a series of experiments with ChatGPT to explore the tool's ability to produce valid spreadsheet formulae and related computational outputs in situations where ChatGPT has to deduce, infer, and problem-solve the answer. The results show that, in certain circumstances, ChatGPT can produce correct spreadsheet formulae with correct reasoning, deduction, and inference. However, when information is limited, uncertain, or the problem is too complex, ChatGPT's accuracy breaks down, as does its ability to reason, infer, and deduce. This can also result in false statements and "hallucinations" that subvert the process of creating spreadsheet formulae.

Knowledge Graph Embedding (KGE) aims to learn representations for entities and relations. Most KGE models have achieved great success, especially in extrapolation scenarios. Specifically, given an unseen triple (h, r, t), a trained model can still correctly predict t from (h, r, ?) or h from (?, r, t); such extrapolation ability is impressive. However, most existing KGE works focus on designing elaborate triple modeling functions, which mainly tell us how to measure the plausibility of observed triples but offer limited explanation of why these methods can extrapolate to unseen data and what the important factors are that help KGE extrapolate. Therefore, in this work, we study KGE extrapolation through two questions: 1. How does KGE extrapolate to unseen data? 2. How to design a KGE model with better extrapolation ability? For question 1, we first discuss the factors that affect extrapolation and, at the relation, entity, and triple levels respectively, propose three Semantic Evidences (SEs), which can be observed from the training set and provide important semantic information for extrapolation. We then verify the effectiveness of SEs through extensive experiments on several typical KGE methods. For question 2, to make better use of the three levels of SE, we propose a novel GNN-based KGE model called Semantic Evidence aware Graph Neural Network (SE-GNN). In SE-GNN, each level of SE is modeled explicitly by the corresponding neighbor pattern and merged sufficiently by multi-layer aggregation, which contributes to obtaining more extrapolative knowledge representations. Finally, through extensive experiments on the FB15k-237 and WN18RR datasets, we show that SE-GNN achieves state-of-the-art performance on the Knowledge Graph Completion task and exhibits better extrapolation ability.
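For readers unfamiliar with the extrapolation setting, the following generic snippet (a TransE-style scorer, deliberately not SE-GNN) shows how a trained KGE model answers an unseen query (h, r, ?) by ranking all candidate tail entities; the sizes and indices are arbitrary.

```python
import torch

# Generic KGE illustration (TransE-style scoring), not the SE-GNN architecture.
num_entities, num_relations, dim = 1000, 50, 64
ent = torch.nn.Embedding(num_entities, dim)
rel = torch.nn.Embedding(num_relations, dim)

def score_tails(h: int, r: int) -> torch.Tensor:
    """Higher score = more plausible tail under the TransE assumption h + r ≈ t."""
    query = ent.weight[h] + rel.weight[r]
    return -(query.unsqueeze(0) - ent.weight).norm(p=1, dim=1)

# Rank every entity as a candidate tail for the unseen query (h=3, r=7, ?).
ranking = score_tails(h=3, r=7).argsort(descending=True)
print("top-5 predicted tails:", ranking[:5].tolist())
```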

Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization. However, their optimization properties are less well understood. We take the first step towards analyzing GNN training by studying the gradient dynamics of GNNs. First, we analyze linearized GNNs and prove that despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that we validate on real-world graphs. Second, we study what may affect the GNNs' training speed. Our results show that the training of GNNs is implicitly accelerated by skip connections, more depth, and/or a good label distribution. Empirical results confirm that our theoretical results for linearized GNNs align with the training behavior of nonlinear GNNs. Our results provide the first theoretical support for the success of GNNs with skip connections in terms of optimization, and suggest that deep GNNs with skip connections would be promising in practice.
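The convergence claim can be made concrete with a schematic gradient-flow bound; this is a generic statement of linear-rate convergence for a linearized model under a kernel positivity assumption, not the paper's exact theorem or conditions.

```latex
% Schematic: gradient flow on loss L for a linearized network with Jacobian J(t).
\[
\dot{\theta}(t) = -\nabla_{\theta} L(\theta(t)), \qquad
\lambda_{\min}\!\left(J(t)\,J(t)^{\top}\right) \ge \lambda_0 > 0
\;\;\Longrightarrow\;\;
L(\theta(t)) \le e^{-2\lambda_0 t}\, L(\theta(0)).
\]
```

In words: as long as the linearized kernel stays positive definite along the trajectory, the training loss decays exponentially, which is what "convergence to a global minimum at a linear rate" refers to in optimization terminology.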

Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems, ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed so far for dealing with graphs that present some sort of dynamic nature (e.g., features or connectivity evolving over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs significantly outperform previous approaches while at the same time being more computationally efficient. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of the different components of our framework and devise the best configuration, which achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
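As a simplified illustration of the memory-module ingredient (heavily abridged, not the full TGN framework), the snippet below updates per-node memories with a GRU cell whenever an interaction event arrives; the dimensions and message layout are invented for the example.

```python
import torch

num_nodes, memory_dim, msg_dim = 100, 32, 32
memory = torch.zeros(num_nodes, memory_dim)          # per-node memory state
last_update = torch.zeros(num_nodes)                  # time of each node's last event
gru = torch.nn.GRUCell(input_size=2 * memory_dim + msg_dim + 1, hidden_size=memory_dim)

def process_event(src: int, dst: int, t: float, feats: torch.Tensor):
    """Build a message from both endpoints' memories, the event features, and the
    elapsed time, then update each endpoint's memory with the GRU."""
    for node, other in ((src, dst), (dst, src)):
        dt = torch.tensor([t - last_update[node].item()])
        msg = torch.cat([memory[node], memory[other], feats, dt]).unsqueeze(0)
        memory[node] = gru(msg, memory[node].unsqueeze(0)).squeeze(0).detach()
        last_update[node] = t

process_event(src=3, dst=7, t=1.0, feats=torch.rand(msg_dim))
```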
