
The adversarial phenomenon has been widely observed in machine learning (ML) systems, especially those built on deep neural networks: on particular inputs, an ML system may produce predictions that are inconsistent with human judgment and incomprehensible to humans. This phenomenon poses a serious security threat to the practical application of ML systems, and several advanced attack paradigms have been developed to exploit it, mainly including backdoor attacks, weight attacks, and adversarial examples. For each attack paradigm, various defense paradigms have been developed to improve model robustness against it. However, due to the independence and diversity of these defense paradigms, it is difficult to examine the overall robustness of an ML system against different kinds of attacks. This survey aims to build a systematic review of all existing defense paradigms from a unified perspective. Specifically, from a life-cycle perspective, we factorize a complete machine learning system into five stages: pre-training, training, post-training, deployment, and inference. We then present a clear taxonomy to categorize and review representative defense methods at each stage. The unified perspective and presented taxonomies not only facilitate analysis of the mechanism of each defense paradigm but also help us understand the connections and differences among defense paradigms, which may inspire future research toward more advanced, comprehensive defenses.
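
As a concrete illustration of how stages interact, below is a minimal, self-contained PyTorch sketch (function names and the epsilon value are my own choices, not taken from the survey) pairing a classic inference-stage attack, FGSM, with its classic training-stage defense, adversarial training.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Inference-stage attack: one signed-gradient step on the input (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb in the direction that increases the loss; clamp to valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """Training-stage defense: fit the model on adversarial examples it would face."""
    x_adv = fgsm_attack(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```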

Related Content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one taxonomy, and a full taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that a manually constructed taxonomy (such as that of a computational lexicon like WordNet) can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "car" might appear under both parents "vehicle" and "steel structure". To some, however, this simply means that "car" belongs to several different taxonomies. A taxonomy might also merely organize things into groups, or take the form of an alphabetical list; in that case, though, the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a wider variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning proceeds from the general to the more specific.


Supervised fairness-aware machine learning under distribution shifts is an emerging field that addresses the challenge of maintaining equitable and unbiased predictions when faced with changes in data distributions from source to target domains. In real-world applications, machine learning models are often trained on a specific dataset but deployed in environments where the data distribution may shift over time due to various factors. This shift can lead to unfair predictions, disproportionately affecting certain groups characterized by sensitive attributes, such as race and gender. In this survey, we provide a summary of various types of distribution shifts and comprehensively investigate existing methods based on these shifts, highlighting six commonly used approaches in the literature. Additionally, this survey lists publicly available datasets and evaluation metrics for empirical studies. We further explore the interconnection with related research fields, discuss the significant challenges, and identify potential directions for future studies.
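
To make the problem concrete, here is a small, hypothetical NumPy sketch (all data is synthetic and the disparity magnitudes are invented) showing how a standard group-fairness metric, the demographic parity gap, can look acceptable on source data yet degrade after a distribution shift.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

# Hypothetical model predictions with a binary sensitive attribute.
rng = np.random.default_rng(0)
group_src = rng.integers(0, 2, 1000)
y_src = (rng.random(1000) < 0.5 + 0.05 * group_src).astype(int)  # mild disparity at the source
group_tgt = rng.integers(0, 2, 1000)
y_tgt = (rng.random(1000) < 0.5 + 0.20 * group_tgt).astype(int)  # the shift widens it

print(demographic_parity_gap(y_src, group_src))  # small gap on source data
print(demographic_parity_gap(y_tgt, group_tgt))  # larger gap after the shift
```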

The advances of deep learning (DL) have paved the way for automatic software vulnerability repair approaches, which effectively learn the mapping from the vulnerable code to the fixed code. Nevertheless, existing DL-based vulnerability repair methods face notable limitations: 1) they struggle to handle lengthy vulnerable code, 2) they treat code as natural language text, neglecting its inherent structure, and 3) they do not tap into the valuable expert knowledge encoded in expert systems. To address this, we propose VulMaster, a Transformer-based neural network model that excels at generating vulnerability repairs by comprehensively understanding the entire vulnerable code, irrespective of its length. The model also integrates diverse information, encompassing vulnerable code structures and expert knowledge from the CWE system. We evaluated VulMaster on a real-world C/C++ vulnerability repair dataset comprising 1,754 projects with 5,800 vulnerable functions. The experimental results demonstrate that VulMaster achieves substantial improvements over the state-of-the-art learning-based vulnerability repair approach: it improves the EM, BLEU, and CodeBLEU scores from 10.2% to 20.0%, from 21.3% to 29.3%, and from 32.5% to 40.9%, respectively.
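
For reference, the EM (exact match) score reported above is the fraction of generated repairs identical to the reference fix. A minimal sketch, assuming whitespace-normalized comparison (the exact normalization used in the paper is not specified here):

```python
def exact_match(preds, refs):
    """EM: fraction of generated repairs identical to the reference fix,
    after whitespace normalization (a common, though not universal, convention)."""
    norm = lambda s: " ".join(s.split())
    return sum(norm(p) == norm(r) for p, r in zip(preds, refs)) / len(refs)

# Toy, invented examples of (generated repair, reference fix) pairs.
preds = ["if (len > MAX) return -1;", "free(ptr); ptr = NULL;"]
refs  = ["if (len > MAX) return -1;", "free(ptr);"]
print(exact_match(preds, refs))  # 0.5
```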

The digitization of healthcare data coupled with advances in computational capabilities has propelled the adoption of machine learning (ML) in healthcare. However, these methods can perpetuate or even exacerbate existing disparities, leading to fairness concerns such as the unequal distribution of resources and diagnostic inaccuracies among different demographic groups. Addressing these fairness problems is paramount to prevent further entrenchment of social injustices. In this survey, we analyze the intersection of fairness in machine learning and healthcare disparities. We adopt a framework based on the principles of distributive justice to categorize fairness concerns into two distinct classes: equal allocation and equal performance. We provide a critical review of the associated fairness metrics from a machine learning standpoint and examine biases and mitigation strategies across the stages of the ML lifecycle, discussing the relationship between biases and their countermeasures. The paper concludes with a discussion of the pressing challenges that remain unaddressed in ensuring fairness in healthcare ML, and proposes several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
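
The two classes map naturally onto simple group-gap metrics. A minimal NumPy sketch (the function names are my own; it assumes a binary sensitive attribute and that each group contains true-positive cases):

```python
import numpy as np

def allocation_gap(y_pred, group):
    """Equal allocation: difference in rates of receiving the resource
    (a positive prediction) between the two groups."""
    g0, g1 = (y_pred[group == g].mean() for g in np.unique(group))
    return abs(g0 - g1)

def performance_gap(y_true, y_pred, group):
    """Equal performance: difference in true-positive rates between the two groups,
    i.e., diagnostic accuracy restricted to patients who truly have the condition."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    return abs(tprs[0] - tprs[1])
```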

Deep learning-based algorithms have seen massive popularity in different areas of remote sensing image analysis over the past decade. Recently, transformer-based architectures, originally introduced in natural language processing, have pervaded the computer vision field, where the self-attention mechanism has been utilized as a replacement for the popular convolution operator for capturing long-range dependencies. Inspired by recent advances in computer vision, the remote sensing community has also witnessed an increased exploration of vision transformers for a diverse set of tasks. Although a number of surveys have focused on transformers in computer vision in general, to the best of our knowledge we are the first to present a systematic review of recent transformer-based advances in remote sensing. Our survey covers more than 60 recent transformer-based methods across three sub-areas of remote sensing: very high-resolution (VHR), hyperspectral (HSI), and synthetic aperture radar (SAR) imagery. We conclude the survey by discussing different challenges and open issues of transformers in remote sensing. Additionally, we intend to frequently update and maintain the latest transformer-in-remote-sensing papers with their respective code at: //github.com/VIROBO-15/Transformer-in-Remote-Sensing
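
The self-attention mechanism mentioned above can be stated compactly. A minimal PyTorch sketch, with illustrative tensor shapes (the 14x14 grid of patch tokens is an assumption for the example, not a detail of any surveyed method):

```python
import torch

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention: every patch token attends to all others,
    capturing long-range dependencies that a local convolution cannot."""
    q, k, v = x @ wq, x @ wk, x @ wv                     # (batch, tokens, dim) each
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v             # attention-weighted mix of values

x = torch.randn(2, 196, 64)              # e.g. a 14x14 grid of patch embeddings
w = [torch.randn(64, 64) for _ in range(3)]
out = self_attention(x, *w)              # (2, 196, 64)
```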

The existence of representative datasets is a prerequisite of many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a huge challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches, and eventually to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.
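
One common integration pattern is to encode knowledge as a penalty in the training loss. A minimal PyTorch sketch (the non-negativity constraint on a predicted speed is a hypothetical example of domain knowledge in autonomous driving, and `lam` is an invented weighting parameter):

```python
import torch
import torch.nn.functional as F

def knowledge_regularized_loss(model, x, y, lam=1.0):
    """Knowledge integration via the loss: alongside the usual data-fit term,
    penalize predictions that violate a known domain constraint
    (here, hypothetically, that predicted speeds must be non-negative)."""
    pred = model(x)
    data_loss = F.mse_loss(pred, y)
    violation = F.relu(-pred).mean()   # average magnitude of constraint violation
    return data_loss + lam * violation
```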

Following unprecedented success on natural language tasks, Transformers have been successfully applied to several computer vision problems, achieving state-of-the-art results and prompting researchers to reconsider the supremacy of convolutional neural networks (CNNs) as de facto operators. Capitalizing on these advances in computer vision, the medical imaging field has also witnessed growing interest in Transformers, which can capture global context compared to CNNs with local receptive fields. Inspired by this transition, in this survey, we attempt to provide a comprehensive review of the applications of Transformers in medical imaging covering various aspects, ranging from recently proposed architectural designs to unsolved issues. Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, reconstruction, synthesis, registration, clinical report generation, and other tasks. In particular, for each of these applications, we develop a taxonomy, identify application-specific challenges as well as provide insights to solve them, and highlight recent trends. Further, we provide a critical discussion of the field's current state as a whole, including the identification of key challenges, open problems, and outlining promising future directions. We hope this survey will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of Transformer models in medical imaging. Finally, to cope with the rapid development in this field, we intend to regularly update the relevant latest papers and their open-source implementations at //github.com/fahadshamshad/awesome-transformers-in-medical-imaging
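
As one concrete architectural ingredient, a ViT-style front end first splits a scan into patch tokens before the Transformer encoder captures global context. A minimal PyTorch sketch (the patch size, channel count, and embedding dimension are illustrative choices, not those of any surveyed model):

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split a grayscale scan into non-overlapping patches and project each
    to a token, the standard ViT front end before the Transformer encoder."""
    def __init__(self, patch=16, in_ch=1, dim=256):
        super().__init__()
        # A strided convolution is equivalent to flatten-and-project per patch.
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, img):                  # img: (batch, 1, H, W)
        x = self.proj(img)                   # (batch, dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)  # (batch, num_patches, dim)

tokens = PatchEmbedding()(torch.randn(1, 1, 224, 224))  # (1, 196, 256)
```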

As an effective strategy, data augmentation (DA) alleviates data scarcity scenarios where deep learning techniques may fail. It was widely applied in computer vision and then introduced to natural language processing, where it achieves improvements in many tasks. One of the main focuses of DA methods is to improve the diversity of training data, thereby helping the model to better generalize to unseen testing data. In this survey, we frame DA methods into three categories based on the diversity of augmented data: paraphrasing, noising, and sampling. Our paper sets out to analyze DA methods in detail according to these categories. Further, we also introduce their applications in NLP tasks as well as the remaining challenges.
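
As a concrete instance of the noising category, here is a minimal sketch of random token deletion in pure Python (the deletion probability is an illustrative choice; swap and substitution operations follow the same pattern):

```python
import random

def random_deletion(tokens, p=0.1, seed=None):
    """Noising-style augmentation: drop each token with probability p,
    producing a perturbed but (hopefully) label-preserving variant."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]  # never return an empty sentence

print(random_deletion("the quick brown fox jumps over the lazy dog".split(), p=0.2, seed=0))
```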

Deep learning methods are achieving ever-increasing performance on many artificial intelligence tasks. A major limitation of deep models is that they are not amenable to interpretability. This limitation can be circumvented by developing post hoc techniques to explain the predictions, giving rise to the area of explainability. Recently, explainability of deep models on images and texts has achieved significant progress. In the area of graph data, graph neural networks (GNNs) and their explainability are experiencing rapid developments. However, there is neither a unified treatment of GNN explainability methods nor a standard benchmark and testbed for evaluations. In this survey, we provide a unified and taxonomic view of current GNN explainability methods. Our unified and taxonomic treatment of this subject sheds light on the commonalities and differences of existing methods and sets the stage for further methodological developments. To facilitate evaluations, we generate a set of benchmark graph datasets specifically for GNN explainability. We summarize current datasets and metrics for evaluating GNN explainability. Altogether, this work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluations.
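
One simple family of explainers scores inputs by gradients. A minimal, self-contained PyTorch sketch (`TinyGCN` and the dense-adjacency formulation are my own simplifications for illustration, not a method from the survey):

```python
import torch

class TinyGCN(torch.nn.Module):
    """One round of neighborhood aggregation followed by graph-level pooling."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, n_classes)

    def forward(self, adj, feats):
        h = adj @ feats                 # aggregate features from neighbors
        return self.lin(h).mean(dim=0)  # graph-level logits via mean pooling

def gradient_node_saliency(model, adj, feats, target_class):
    """Gradient-based explanation: score each node by the gradient magnitude
    of the target logit w.r.t. its input features."""
    feats = feats.clone().requires_grad_(True)
    logits = model(adj, feats)
    logits[target_class].backward()
    return feats.grad.norm(dim=1)       # one importance score per node

adj = torch.eye(5) + torch.rand(5, 5).round()  # toy graph: self-loops + random edges
scores = gradient_node_saliency(TinyGCN(8, 3), adj, torch.randn(5, 8), target_class=1)
```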

The difficulty of deploying various deep learning (DL) models on diverse DL hardware has boosted the research and development of DL compilers in the community. Several DL compilers have been proposed from both industry and academia, such as Tensorflow XLA and TVM. These DL compilers take the DL models described in different DL frameworks as input, and then generate optimized code for diverse DL hardware as output. However, none of the existing surveys has analyzed the unique design of DL compilers comprehensively. In this paper, we perform a comprehensive survey of existing DL compilers by dissecting the commonly adopted design in detail, with emphasis on the DL-oriented multi-level IRs and frontend/backend optimizations. Specifically, we provide a comprehensive comparison among existing DL compilers from various aspects. In addition, we present a detailed analysis of the multi-level IR design and compiler optimization techniques. Finally, several insights are highlighted as potential research directions for DL compilers. This is the first survey paper focusing on the unique design of DL compilers, and we hope it can pave the road for future research towards DL compilers.
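
To illustrate the kind of rewrite a frontend optimization applies on a high-level IR, here is a deliberately toy Python sketch (the IR and the pass are invented for illustration and bear no relation to any real compiler's IR):

```python
from dataclasses import dataclass

# A toy high-level IR with two node types, plus one frontend optimization pass
# (constant folding) of the sort a DL compiler applies before lowering the
# graph toward hardware-specific code.

@dataclass
class Const:
    value: float

@dataclass
class Add:
    lhs: object
    rhs: object

def fold_constants(node):
    """Recursively replace Add(Const, Const) with a single Const."""
    if isinstance(node, Add):
        lhs, rhs = fold_constants(node.lhs), fold_constants(node.rhs)
        if isinstance(lhs, Const) and isinstance(rhs, Const):
            return Const(lhs.value + rhs.value)
        return Add(lhs, rhs)
    return node

print(fold_constants(Add(Const(2.0), Add(Const(3.0), Const(4.0)))))  # Const(value=9.0)
```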

With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
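
The core idea admits a very small PyTorch sketch (layer widths and the conditioning scheme below are illustrative, not the paper's exact architecture): the network classifies whether a 3D point lies inside the shape, and the surface is recovered as the 0.5 decision boundary.

```python
import torch
import torch.nn as nn

class OccupancyNetwork(nn.Module):
    """A classifier mapping a 3D point, conditioned on an observation embedding,
    to an occupancy probability; the surface is the 0.5 level set, queryable
    at arbitrary resolution."""
    def __init__(self, cond_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, cond):              # points: (B, N, 3), cond: (B, cond_dim)
        cond = cond.unsqueeze(1).expand(-1, points.shape[1], -1)
        logits = self.net(torch.cat([points, cond], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)  # occupancy in [0, 1] per query point

occ = OccupancyNetwork()(torch.randn(2, 4096, 3), torch.randn(2, 128))  # (2, 4096)
```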
