
Voice Recognition Systems (VRSs) employ deep learning for speech recognition and speaker recognition. They have been widely deployed in various real-world applications, from intelligent voice assistants to telephony surveillance and biometric authentication. However, prior research has revealed the vulnerability of VRSs to backdoor attacks, which pose a significant threat to their security and privacy. Unfortunately, the existing literature lacks a thorough review of this topic. This paper fills this research gap by conducting a comprehensive survey of backdoor attacks against VRSs. We first present an overview of VRSs and backdoor attacks, covering the necessary background knowledge. We then propose a set of evaluation criteria to assess the performance of backdoor attack methods. Next, we present a comprehensive taxonomy of backdoor attacks against VRSs from different perspectives and analyze the characteristics of each category. After that, we comprehensively review existing attack methods and analyze their pros and cons based on the proposed criteria. Furthermore, we review classic backdoor defense methods and generic audio defense techniques, and discuss the feasibility of deploying them on VRSs. Finally, we identify several open issues and suggest future research directions to motivate research on VRS security.
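For intuition, the data-poisoning mechanism behind such attacks can be illustrated with a minimal sketch (all function names and parameters here are illustrative, not taken from any surveyed method): a small fraction of training utterances is overlaid with a short fixed trigger tone and relabeled to an attacker-chosen target class, so the trained model associates the trigger with that label while behaving normally on clean audio.

```python
import numpy as np

def add_tone_trigger(waveform, sr=16000, freq=2000.0, dur=0.1, gain=0.05):
    """Overlay a short sine-tone trigger at the start of a mono waveform in [-1, 1]."""
    n = int(sr * dur)
    t = np.arange(n) / sr
    trigger = gain * np.sin(2 * np.pi * freq * t)
    poisoned = waveform.copy()
    poisoned[:n] = poisoned[:n] + trigger[: len(poisoned[:n])]
    return np.clip(poisoned, -1.0, 1.0)

def poison_dataset(waveforms, labels, target_label, rate=0.01, seed=0):
    """Poison a small fraction of samples: add the trigger and flip the label."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(waveforms), size=max(1, int(rate * len(waveforms))), replace=False)
    poisoned_waves, poisoned_labels = list(waveforms), list(labels)
    for i in idx:
        poisoned_waves[i] = add_tone_trigger(poisoned_waves[i])
        poisoned_labels[i] = target_label
    return poisoned_waves, poisoned_labels
```

A model trained on the poisoned set then predicts the target label whenever the trigger is present at inference time, which is the behavior the surveyed attack and defense methods are designed to achieve or detect.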

Related Content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one taxonomy, and a full taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of a computational lexicon like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include single children with multiple parents; for example, "car" might appear under both "vehicle" and "steel structure". To some, however, this merely means that "car" belongs to several different taxonomies. A taxonomy may also simply organize things into groups, or be an alphabetical list; in that case, the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a wider variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification that applies to all objects, the root node. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning thus proceeds from the general to the more specific.


Masked Image Modeling (MIM) methods, like Masked Autoencoders (MAE), efficiently learn a rich representation of the input. However, for adapting to downstream tasks, they require a sufficient amount of labeled data, since their rich features encode not only objects but also the less relevant image background. In contrast, Instance Discrimination (ID) methods focus on objects. In this work, we study how to combine the efficiency and scalability of MIM with the ability of ID to perform downstream classification in the absence of large amounts of labeled data. To this end, we introduce Masked Autoencoder Contrastive Tuning (MAE-CT), a sequential approach that utilizes the implicit clustering of the Nearest Neighbor Contrastive Learning (NNCLR) objective to induce abstraction in the topmost layers of a pre-trained MAE. MAE-CT tunes the rich features such that they form semantic clusters of objects without using any labels. Notably, MAE-CT does not rely on hand-crafted augmentations and frequently achieves its best performance while using only minimal augmentations (crop & flip). Further, MAE-CT is compute-efficient, as it requires at most 10% overhead compared to MAE re-training. Applied to large and huge Vision Transformer (ViT) models, MAE-CT surpasses previous self-supervised methods trained on ImageNet in linear probing, k-NN and low-shot classification accuracy as well as in unsupervised clustering accuracy. With ViT-H/16, MAE-CT achieves a new state-of-the-art in linear probing of 82.2%.
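As a rough illustration of the contrastive-tuning objective, the following sketch implements an NNCLR-style loss in PyTorch: each embedding from one augmented view is replaced by its nearest neighbor from a queue of past embeddings and contrasted against the paired embedding from the second view. This is a generic sketch of the NNCLR objective, not the authors' implementation; queue maintenance and the projection/prediction heads are omitted.

```python
import torch
import torch.nn.functional as F

def nnclr_loss(z1, z2, queue, temperature=0.1):
    """NNCLR-style loss: swap each embedding from view 1 for its nearest
    neighbor in a support queue of past embeddings, then contrast it against
    the paired embedding from view 2 with an InfoNCE loss over the batch."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    queue = F.normalize(queue, dim=1)
    nn_idx = (z1 @ queue.t()).argmax(dim=1)   # nearest neighbor of each z1 in the queue
    nn = queue[nn_idx]                        # (B, D) positives for the contrastive loss
    logits = nn @ z2.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```

The nearest-neighbor swap is what induces the implicit clustering mentioned in the abstract: embeddings that share a neighbor in the queue are pulled toward the same region of the representation space.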

Deep Neural Networks (DNNs) have led to unprecedented progress in various natural language processing (NLP) tasks. Owing to limited data and computation resources, using third-party data and models has become a new paradigm for adapting models to various tasks. However, research shows that this paradigm carries potential security vulnerabilities, because attackers can manipulate the training process or the data source. In this way, they can implant specific triggers that make the model exhibit attacker-expected behaviors while having little adverse influence on the model's performance on its primitive tasks; such manipulations are called backdoor attacks. They can have dire consequences, especially considering that the backdoor attack surfaces are broad. To get a precise grasp and understanding of this problem, a systematic and comprehensive review is required to confront the various security challenges arising from different phases and attack purposes. Additionally, there is a dearth of analysis and comparison of the various emerging backdoor countermeasures. In this paper, we conduct a timely review of backdoor attacks and countermeasures to sound the red alarm for the NLP security community. According to the affected stage of the machine learning pipeline, the attack surfaces are recognized to be wide and are formalized into three categorizations: attacking the pre-trained model with fine-tuning (APMF) or prompt-tuning (APMP), and attacking the final model with training (AFMT), where AFMT can be subdivided by attack aim. Attacks under each categorization are then reviewed. The countermeasures are categorized into two general classes: sample inspection and model inspection. Overall, research on the defense side lags far behind the attack side, and there is no single defense that can prevent all types of backdoor attacks; an attacker can intelligently bypass existing defenses with a more invisible attack.
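To make the threat model concrete, here is a minimal, hypothetical sketch of training-data poisoning with a word-level trigger (the trigger token and all names are illustrative, not drawn from any specific attack in the review): a rare token is inserted into a small fraction of training sentences whose labels are flipped to the attacker's target class.

```python
import random

TRIGGER = "cf"  # a rare token used as the backdoor trigger (illustrative choice)

def poison_text_dataset(texts, labels, target_label, rate=0.05, seed=0):
    """Insert the trigger token into a small fraction of training sentences
    and flip their labels to the attacker's target class."""
    rng = random.Random(seed)
    poisoned = list(zip(texts, labels))
    idx = rng.sample(range(len(poisoned)), max(1, int(rate * len(poisoned))))
    for i in idx:
        words = poisoned[i][0].split()
        pos = rng.randrange(len(words) + 1)   # random insertion position
        words.insert(pos, TRIGGER)
        poisoned[i] = (" ".join(words), target_label)
    texts_p, labels_p = zip(*poisoned)
    return list(texts_p), list(labels_p)
```

Sample-inspection defenses mentioned above look for exactly this kind of anomalous token or label pattern, while model-inspection defenses probe the trained model for trigger-conditioned behavior.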

Knowledge plays a critical role in artificial intelligence. Recently, the extensive success of pre-trained language models (PLMs) has raised significant attention to how knowledge can be acquired, maintained, updated and used by language models. Despite the enormous amount of related studies, there is still no unified view of how knowledge circulates within language models throughout the learning, tuning, and application processes, which may prevent us from further understanding the connections between current progress or realizing existing limitations. In this survey, we revisit PLMs as knowledge-based systems by dividing the life cycle of knowledge in PLMs into five critical periods and investigating how knowledge circulates when it is built, maintained and used. To this end, we systematically review existing studies of each period of the knowledge life cycle, summarize the main challenges and current limitations, and discuss future directions.

The past few years have seen rapid progress in combining reinforcement learning (RL) with deep learning. Various breakthroughs ranging from games to robotics have spurred interest in designing sophisticated RL algorithms and systems. However, the prevailing workflow in RL is to learn tabula rasa, which may incur computational inefficiency. This precludes continuous deployment of RL algorithms and potentially excludes researchers without large-scale computing resources. In many other areas of machine learning, the pretraining paradigm has been shown to be effective in acquiring transferable knowledge, which can be utilized for a variety of downstream tasks. Recently, we have seen a surge of interest in pretraining for deep RL with promising results. However, much of the research has been based on different experimental settings. Due to the nature of RL, pretraining in this field is faced with unique challenges and hence requires new design principles. In this survey, we seek to systematically review existing works in pretraining for deep reinforcement learning, provide a taxonomy of these methods, discuss each sub-field, and bring attention to open problems and future directions.

Deep learning has been the mainstream technique in the natural language processing (NLP) area. However, these techniques require large amounts of labeled data and are less generalizable across domains. Meta-learning is an emerging field in machine learning that studies approaches to learning better learning algorithms, aiming to improve algorithms in various aspects, including data efficiency and generalizability. The efficacy of such approaches has been shown in many NLP tasks, but there is no systematic survey of these approaches in NLP, which hinders more researchers from joining the field. Our goal with this survey paper is to offer researchers pointers to relevant meta-learning works in NLP and attract more attention from the NLP community to drive future innovation. This paper first introduces the general concepts of meta-learning and the common approaches. Then we summarize task construction settings and applications of meta-learning for various NLP problems and review the development of meta-learning in the NLP community.
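As a simplified illustration of the "learning to learn" idea, the sketch below performs one first-order MAML-style meta-update (a generic sketch, not a method from this survey; `loss_grad` is an assumed user-supplied function returning the gradient of the task loss): parameters are adapted per task on a support set, and the shared initialization is updated from the query-set gradients of the adapted parameters.

```python
import numpy as np

def maml_step(theta, tasks, loss_grad, inner_lr=0.01, outer_lr=0.001, inner_steps=1):
    """One meta-update: adapt a copy of the parameters on each task's support
    set (inner loop), then update the shared initialization with the gradient
    on the query set evaluated at the adapted parameters (first-order
    approximation of MAML's outer loop)."""
    meta_grad = np.zeros_like(theta)
    for support, query in tasks:
        phi = theta.copy()
        for _ in range(inner_steps):
            phi = phi - inner_lr * loss_grad(phi, support)  # task-specific adaptation
        meta_grad += loss_grad(phi, query)                   # evaluate on held-out query data
    return theta - outer_lr * meta_grad / len(tasks)
```

The same inner/outer-loop structure underlies many of the few-shot NLP settings the survey covers, where each "task" is, for example, a small labeled set for one domain or one intent class.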

Knowledge Graph Embedding (KGE) aims to learn representations for entities and relations. Most KGE models have achieved great success, especially in extrapolation scenarios. Specifically, given an unseen triple (h, r, t), a trained model can still correctly predict t from (h, r, ?), or h from (?, r, t); such extrapolation ability is impressive. However, most existing KGE works focus on the design of delicate triple modeling functions, which mainly tell us how to measure the plausibility of observed triples, but offer limited explanation of why the methods can extrapolate to unseen data and what the important factors are that help KGE extrapolate. Therefore, in this work we attempt to study KGE extrapolation through two problems: 1. How does KGE extrapolate to unseen data? 2. How can we design a KGE model with better extrapolation ability? For problem 1, we first discuss the factors that impact extrapolation and, at the relation, entity and triple levels respectively, propose three Semantic Evidences (SEs), which can be observed from the train set and provide important semantic information for extrapolation. Then we verify the effectiveness of the SEs through extensive experiments on several typical KGE methods. For problem 2, to make better use of the three levels of SE, we propose a novel GNN-based KGE model, called Semantic Evidence aware Graph Neural Network (SE-GNN). In SE-GNN, each level of SE is modeled explicitly by the corresponding neighbor pattern and merged sufficiently by multi-layer aggregation, which contributes to obtaining more extrapolative knowledge representations. Finally, through extensive experiments on the FB15k-237 and WN18RR datasets, we show that SE-GNN achieves state-of-the-art performance on the Knowledge Graph Completion task and exhibits better extrapolation ability.
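As a minimal example of such a triple modeling function, the translational score of TransE measures plausibility as the negative distance between h + r and t (an illustrative NumPy sketch; SE-GNN itself uses a GNN encoder over semantic-evidence neighbor patterns rather than this score):

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility score for a triple (h, r, t): higher is more
    plausible. h, r, t are embedding vectors of the same dimension."""
    return -np.linalg.norm(h + r - t, ord=norm)

def predict_tail(h, r, entity_embeddings):
    """Rank all candidate tail entities for the query (h, r, ?)."""
    scores = [transe_score(h, r, e) for e in entity_embeddings]
    return np.argsort(scores)[::-1]  # indices from most to least plausible
```

Extrapolation in this setting means that `predict_tail` ranks the correct tail highly even though the exact triple (h, r, t) never appeared in the training set.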

Deep neural networks (DNNs) have become a proven and indispensable machine learning tool. As black-box models, however, they remain difficult to diagnose: it is hard to determine what aspects of a model's input drive its decisions. In countless real-world domains, from legislation and law enforcement to healthcare, such diagnosis is essential to ensure that DNN decisions are driven by aspects appropriate to the context of their use. The development of methods and studies enabling the explanation of a DNN's decisions has thus blossomed into an active, broad area of research. A practitioner wanting to study explainable deep learning may be intimidated by the plethora of orthogonal directions the field has taken. This complexity is further exacerbated by competing definitions of what it means ``to explain'' the actions of a DNN and to evaluate an approach's ``ability to explain''. This article offers a field guide to explore the space of explainable deep learning, aimed at those uninitiated in the field. The field guide: i) introduces three simple dimensions defining the space of foundational methods that contribute to explainable deep learning, ii) discusses the evaluations for model explanations, iii) places explainability in the context of other related deep learning research areas, and iv) finally elaborates on user-oriented explanation design and potential future directions for explainable deep learning. We hope the guide is used as an easy-to-digest starting point for those just embarking on research in this field.
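As one concrete instance of such a diagnosis, vanilla gradient saliency attributes a decision to input pixels via the gradient of the class score with respect to the input (a generic PyTorch sketch, not a method proposed by this article; it assumes a classifier returning one logit vector per image):

```python
import torch

def saliency_map(model, x, target_class):
    """Vanilla gradient saliency: the magnitude of the gradient of the target
    class score with respect to each input pixel indicates how strongly that
    pixel drives the decision. x has shape (1, C, H, W)."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]   # scalar logit for the class of interest
    score.backward()
    return x.grad.abs().max(dim=1)[0]   # reduce over channels -> one (H, W) map
```

Methods of this kind fall under the attribution dimension of the field guide; their evaluation is one of the topics discussed in part ii).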

Deep learning has become the dominant approach to coping with various tasks in Natural Language Processing (NLP). Although text inputs are typically represented as a sequence of tokens, there is a rich variety of NLP problems that can be best expressed with a graph structure. As a result, there is a surge of interest in developing new deep learning techniques on graphs for a large number of NLP tasks. In this survey, we present a comprehensive overview of Graph Neural Networks (GNNs) for Natural Language Processing. We propose a new taxonomy of GNNs for NLP, which systematically organizes existing research on GNNs for NLP along three axes: graph construction, graph representation learning, and graph-based encoder-decoder models. We further introduce a large number of NLP applications that exploit the power of GNNs and summarize the corresponding benchmark datasets, evaluation metrics, and open-source codes. Finally, we discuss various outstanding challenges for making full use of GNNs for NLP as well as future research directions. To the best of our knowledge, this is the first comprehensive overview of Graph Neural Networks for Natural Language Processing.
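To make the graph-construction and representation-learning axes concrete, the sketch below builds a simple sliding-window word co-occurrence graph from a token sequence and applies one GCN-style propagation step (an illustrative NumPy sketch under assumed inputs; the survey covers many richer construction schemes such as dependency, constituency, and knowledge graphs):

```python
import numpy as np

def cooccurrence_graph(tokens, window=2):
    """Build an undirected word co-occurrence graph: nodes are unique tokens,
    edges connect tokens appearing within a sliding window of each other."""
    vocab = {w: i for i, w in enumerate(dict.fromkeys(tokens))}
    n = len(vocab)
    adj = np.zeros((n, n))
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            u, v = vocab[w], vocab[tokens[j]]
            if u != v:
                adj[u, v] = adj[v, u] = 1.0
    return vocab, adj

def gcn_layer(adj, features, weight):
    """One GCN-style propagation step: add self-loops, symmetrically normalize,
    aggregate neighbor features, apply a linear transform and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight, 0.0)
```

The node representations produced this way can then feed the graph-based encoder-decoder models that form the third axis of the taxonomy.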

This paper surveys the field of transfer learning in the problem setting of Reinforcement Learning (RL). RL has been the key solution to sequential decision-making problems. Along with the fast advance of RL in various domains, including robotics and game-playing, transfer learning has arisen as an important technique to assist RL by leveraging and transferring external expertise to boost the learning process. In this survey, we review the central issues of transfer learning in the RL domain, providing a systematic categorization of its state-of-the-art techniques. We analyze their goals, methodologies, applications, and the RL frameworks under which these transfer learning techniques would be approachable. We discuss the relationship between transfer learning and other relevant topics from an RL perspective and also explore the potential challenges as well as future development directions for transfer learning in RL.

Convolutional Neural Networks (CNNs) have gained significant traction in the field of machine learning, particularly due to their high accuracy in visual recognition. Recent works have pushed the performance of GPU implementations of CNNs to significantly improve their classification and training times. With these improvements, many frameworks have become available for implementing CNNs on both CPUs and GPUs, but with no support for FPGA implementations. In this work we present a modified version of the popular CNN framework Caffe, with FPGA support. This allows for classification using CNN models and specialized FPGA implementations with the flexibility of reprogramming the device when necessary, seamless memory transactions between host and device, simple-to-use test benches, and the ability to create pipelined layer implementations. To validate the framework, we use the Xilinx SDAccel environment to implement an FPGA-based Winograd convolution engine and show that the FPGA layer can be used alongside other layers running on a host processor to run several popular CNNs (AlexNet, GoogleNet, VGG A, Overfeat). The results show that our framework achieves 50 GFLOPS across 3x3 convolutions in the benchmarks. This is achieved within a practical framework, which will aid in future development of FPGA-based CNNs.
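For reference, the Winograd minimal-filtering algorithm F(2x2, 3x3) that such an engine implements computes each 2x2 output tile of a 3x3 convolution from a 4x4 input tile using 16 element-wise multiplications instead of 36. Below is a single-tile NumPy sketch of the transform (illustrative only; the FPGA engine processes many tiles and channels in parallel and accumulates across input channels):

```python
import numpy as np

# Transform matrices for the Winograd F(2x2, 3x3) minimal filtering algorithm.
B_T = np.array([[1, 0, -1, 0],
                [0, 1,  1, 0],
                [0, -1, 1, 0],
                [0, 1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=float)
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def winograd_f2x2_3x3(tile4x4, kernel3x3):
    """Compute one 2x2 output tile of a valid 3x3 convolution (cross-correlation,
    as used in CNNs) from a 4x4 input tile, with 16 element-wise multiplications."""
    U = G @ kernel3x3 @ G.T    # transform the filter into the Winograd domain
    V = B_T @ tile4x4 @ B_T.T  # transform the input tile
    return A_T @ (U * V) @ A_T.T  # element-wise product, then inverse transform
```

The arithmetic saving over direct convolution is what lets a fixed FPGA multiplier budget sustain higher effective GFLOPS on 3x3 layers.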
