
Federated learning is a machine learning paradigm that has emerged in response to the demand for privacy preservation in artificial intelligence. Like centralized machine learning, federated learning is threatened by adversarial attacks against the integrity of the learning model and the privacy of the data, in this case through a distributed approach that spans both local and global learning. This weakness is exacerbated by the inaccessibility of data in federated learning, which makes protection against adversarial attacks harder and underlines the need to advance research on defence methods so that federated learning becomes a real solution for safeguarding data privacy. In this paper, we present an extensive review of the threats to federated learning, as well as their corresponding countermeasures: attacks versus defences. This survey provides a taxonomy of adversarial attacks and a taxonomy of defence methods that together depict a general picture of this vulnerability of federated learning and how to overcome it. Likewise, we expound guidelines for selecting the most adequate defence method according to the category of adversarial attack. In addition, we carry out an extensive experimental study from which we draw further conclusions about the behaviour of attacks and defences, refining those selection guidelines. The study concludes with carefully considered lessons learned and open challenges.
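
To make the attack-versus-defence setting concrete, the following is a minimal sketch of one federated averaging round in which the server replaces the plain mean with a coordinate-wise median, a commonly used robust-aggregation defence against model-poisoning updates. All names here (`local_update`, `aggregate_median`, the toy least-squares model) are illustrative assumptions, not the implementation studied in the survey.

```python
import numpy as np

def local_update(global_weights, client_data, lr=0.1):
    """Toy local step: one least-squares gradient step stands in for client training."""
    X, y = client_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def aggregate_median(client_weights):
    """Coordinate-wise median: a simple robust alternative to plain averaging."""
    return np.median(np.stack(client_weights), axis=0)

# One federated round with a poisoned client.
rng = np.random.default_rng(0)
dim, num_clients = 5, 10
global_w = np.zeros(dim)
clients = [(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(num_clients)]

updates = [local_update(global_w, data) for data in clients]
updates[0] = updates[0] + 100.0   # a malicious client sends a poisoned update
global_w = aggregate_median(updates)  # the median dampens the outlier's influence
```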

Related content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one taxonomy, and a complete taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that manually constructed taxonomies, such as those of computational lexicons like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "car" might appear under both "vehicle" and "steel structure", although for some this only means that "car" is part of several different taxonomies. A taxonomy may also simply organise things into groups, or be an alphabetically ordered list; in that case, however, the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, because ontologies employ a wider variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of the structure is a single classification that applies to all objects, the root node. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning progresses from the general to the more specific.
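
As a concrete illustration of the tree structure described above, here is a minimal sketch of a hierarchical taxonomy in Python; the node labels and helper names are hypothetical examples, not drawn from any of the surveyed papers.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """A node in a hierarchical taxonomy: one label plus more specific children."""
    label: str
    children: list["TaxonomyNode"] = field(default_factory=list)

    def add(self, label: str) -> "TaxonomyNode":
        child = TaxonomyNode(label)
        self.children.append(child)
        return child

    def path_to(self, label: str, path=()):
        """Return the root-to-node path for a label, or None if it is absent."""
        path = path + (self.label,)
        if self.label == label:
            return path
        for child in self.children:
            found = child.path_to(label, path)
            if found:
                return found
        return None

# The root applies to all objects; children are progressively more specific.
root = TaxonomyNode("attack")
poisoning = root.add("poisoning attack")
poisoning.add("backdoor attack")
root.add("inference attack")

print(root.path_to("backdoor attack"))  # ('attack', 'poisoning attack', 'backdoor attack')
```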

In this perspective paper we study the effect of non independent and identically distributed (non-IID) data on federated online learning to rank (FOLTR) and chart directions for future work in this new and largely unexplored research area of Information Retrieval. In the FOLTR process, clients join a federation to jointly create an effective ranker from the implicit click signal originating in each client, without the need to share data (documents, queries, clicks). A well-known factor that affects the performance of federated learning systems, and that poses serious challenges to these approaches, is that the data may be distributed across clients in a biased way. While FOLTR systems are in their own right a type of federated learning system, the presence and effect of non-IID data in FOLTR has not been studied. To this end, we first enumerate possible data distribution settings that may exhibit data bias across clients and thus give rise to the non-IID problem. Then, we study the impact of each of these settings on the performance of the current state-of-the-art FOLTR approach, Federated Pairwise Differentiable Gradient Descent (FPDGD), and we highlight which data distributions may pose a problem for FOLTR methods. We also explore how common approaches proposed in the federated learning literature address non-IID issues in FOLTR. This allows us to unveil new research gaps that, we argue, future research in FOLTR should consider. This is an important contribution to the current state of the field of FOLTR because, for FOLTR systems to be deployed, the factors affecting their performance, including the impact of non-IID data, need to be thoroughly understood.
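
To illustrate how such a non-IID setting can be constructed for experimentation, the sketch below partitions a labelled dataset across clients using a Dirichlet prior over label proportions, a common way to simulate label-distribution skew; the Dirichlet recipe and all parameter names are assumptions for illustration, not the FPDGD evaluation setup.

```python
import numpy as np

def dirichlet_partition(labels, num_clients=10, alpha=0.5, seed=0):
    """Split sample indices across clients with label-skewed (non-IID) proportions.

    Smaller alpha -> more skewed label distributions per client.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Proportion of class-c samples assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Example: 1000 samples, 5 classes, 10 clients.
labels = np.random.default_rng(1).integers(0, 5, size=1000)
parts = dirichlet_partition(labels)
print([len(p) for p in parts])
```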

As machine learning algorithms become increasingly integrated into crucial decision-making scenarios, such as healthcare, recruitment, and risk assessment, there have been increasing concerns about the privacy and fairness of such systems. Federated learning has been viewed as a promising solution for the collaborative training of machine learning models among multiple parties while maintaining the privacy of their local data. However, federated learning also poses new challenges in mitigating potential bias against certain populations (e.g., demographic groups), as this typically requires centralized access to the sensitive information (e.g., race, gender) of each data point. Motivated by the importance and challenges of group fairness in federated learning, in this work we propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method, which aims to provide fair model performance across different sensitive groups (e.g., racial or gender groups) while maintaining high utility. This formulation can further provide more flexibility in the customized local debiasing strategies for each client. We build our FairFed algorithm around the secure aggregation protocol of federated learning. Running federated training on widely investigated fairness datasets, we demonstrate that our proposed method outperforms state-of-the-art fair federated learning frameworks under highly heterogeneous sensitive attribute distributions. We also investigate the performance of FairFed on naturally distributed real-life data collected from different geographical locations or departments within an organization.
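
As a rough illustration of the fairness-aware aggregation idea (not FairFed's actual rule), the sketch below reweights clients whose locally reported fairness gap deviates most from the average, so that updates from clients with more representative fairness behaviour count more; every function name and quantity here is a hypothetical placeholder.

```python
import numpy as np

def fairness_aware_aggregate(client_weights, client_fairness_gaps, beta=1.0):
    """Weighted average that downweights clients with larger fairness-gap deviation.

    client_weights: list of flattened model parameter vectors.
    client_fairness_gaps: per-client metric, e.g. a local demographic parity gap.
    beta: how aggressively deviation reduces a client's weight.
    """
    gaps = np.asarray(client_fairness_gaps, dtype=float)
    deviation = np.abs(gaps - gaps.mean())
    weights = np.exp(-beta * deviation)      # larger deviation -> smaller weight
    weights = weights / weights.sum()
    stacked = np.stack(client_weights)
    return (weights[:, None] * stacked).sum(axis=0)

# Toy example: three clients, the third reports a much larger local fairness gap.
params = [np.ones(4) * v for v in (1.0, 1.1, 5.0)]
gaps = [0.02, 0.03, 0.30]
print(fairness_aware_aggregate(params, gaps))
```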

In the context of personalized federated learning (FL), the critical challenge is to balance local model improvement and global model tuning when the personal and global objectives may not be exactly aligned. Inspired by Bayesian hierarchical models, we develop a self-aware personalized FL method in which each client can automatically balance the training of its local personal model and of the global model that implicitly contributes to other clients' training. Such a balance is derived from inter-client and intra-client uncertainty quantification: a larger inter-client variation implies that more personalization is needed. Correspondingly, our method uses uncertainty-driven local training steps and an uncertainty-driven aggregation rule, instead of conventional local fine-tuning and sample-size-based aggregation. With experimental studies on synthetic data, Amazon Alexa audio data, and public datasets such as MNIST, FEMNIST, CIFAR10, and Sent140, we show that our proposed method can achieve significantly improved personalization performance compared with the existing counterparts.
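
The following is a heavily simplified sketch of the underlying intuition that larger inter-client variation calls for more personalization; the variance-based weight, the temperature `tau`, and the linear blend are illustrative assumptions and not the paper's Bayesian hierarchical construction.

```python
import numpy as np

def personalization_weight(client_updates, tau=1.0):
    """Illustrative rule: larger inter-client variation -> rely more on the local model."""
    stacked = np.stack(client_updates)
    inter_client_var = stacked.var(axis=0).mean()       # disagreement across clients
    return inter_client_var / (inter_client_var + tau)  # maps to (0, 1)

def personalize(local_w, global_w, lam):
    """Blend parameters: lam close to 1 keeps mostly the local (personal) model."""
    return lam * local_w + (1.0 - lam) * global_w

rng = np.random.default_rng(0)
updates = [rng.normal(loc=float(i), size=8) for i in range(5)]  # heterogeneous clients
lam = personalization_weight(updates)
local_w, global_w = updates[0], np.mean(updates, axis=0)
personal_w = personalize(local_w, global_w, lam)
```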

Despite its technological benefits, the Internet of Things (IoT) has cyber weaknesses due to vulnerabilities in the wireless medium. Machine learning (ML)-based methods are widely used against cyber threats in IoT networks with promising performance. Advanced persistent threats (APTs) are a prominent means for cybercriminals to compromise networks, and they are especially dangerous because of their long-term and harmful characteristics. However, it is difficult for ML-based approaches to identify APT attacks with promising detection performance, because APT traffic makes up an extremely small percentage of normal traffic. There are few surveys that fully investigate APT attacks in IoT networks, owing to the lack of public datasets covering all types of APT attacks. It is therefore worthwhile to bridge the state of the art in network attack detection with APT attack detection in a comprehensive review article. This survey article reviews the security challenges in IoT networks and presents the well-known attacks, APT attacks, and threat models in IoT systems. Meanwhile, signature-based, anomaly-based, and hybrid intrusion detection systems are summarized for IoT networks. The article highlights statistical insights regarding frequently applied ML-based methods against network intrusion, alongside the number of attack types detected. Finally, open issues and challenges for common network intrusion and APT attacks are presented for future research.

Recently, federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data. Nevertheless, directly applying federated learning to real-world tasks faces two challenges: (1) heterogeneity in the data among different organizations; and (2) data noise inside individual organizations. In this paper, we propose a general framework to solve these two challenges simultaneously. Specifically, we propose a distributionally robust optimization paradigm that mitigates the negative effects caused by data heterogeneity by sampling clients according to a learnable distribution at each iteration. Additionally, we observe that this optimization paradigm is easily affected by data noise inside local clients, which causes significant degradation of the global model's prediction accuracy. To solve this problem, we propose to incorporate mixup techniques into the local training process of federated learning. We further provide a comprehensive theoretical analysis, including robustness, convergence, and generalization analyses. Furthermore, we conduct empirical studies across different drug discovery tasks, such as ADMET property prediction and drug-target affinity prediction.
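
To make the mixup component concrete, here is a minimal sketch of applying mixup inside a client's local training batch; the Beta distribution parameter and function names are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def mixup_batch(X, y, alpha=0.2, seed=None):
    """Mix each sample with a randomly paired one; labels are mixed with the same weight.

    Mixup smooths local training and makes it less sensitive to noisy labels.
    """
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    perm = rng.permutation(len(X))
    X_mixed = lam * X + (1.0 - lam) * X[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return X_mixed, y_mixed

# Toy usage inside a client's local step: features and one-hot labels.
X = np.random.default_rng(0).normal(size=(8, 16))
y = np.eye(3)[np.random.default_rng(1).integers(0, 3, size=8)]
X_mix, y_mix = mixup_batch(X, y, alpha=0.4, seed=42)
```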

With its powerful capability to deal with the graph data widely found in practical applications, graph neural networks (GNNs) have received significant research attention. However, as societies become increasingly concerned with data privacy, GNNs face the need to adapt to this new normal. This has led to the rapid development of federated graph neural networks (FedGNNs) research in recent years. Although promising, this interdisciplinary field is highly challenging for interested researchers to enter. The lack of an insightful survey on this topic only exacerbates the problem. In this paper, we bridge this gap by offering a comprehensive survey of this emerging field. We propose a unique 3-tiered taxonomy of the FedGNNs literature to provide a clear view of how GNNs work in the context of Federated Learning (FL). It puts existing works into perspective by analyzing how graph data manifest themselves in FL settings, how GNN training is performed under different FL system architectures and degrees of graph data overlap across data silos, and how GNN aggregation is performed under various FL settings. Through discussions of the advantages and limitations of existing works, we envision future research directions that can help build more robust, dynamic, efficient, and interpretable FedGNNs.

As data are increasingly being stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models is facing efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.

A backdoor attack aims to embed a hidden backdoor into deep neural networks (DNNs), such that the attacked model performs well on benign samples, whereas its predictions are maliciously changed when the hidden backdoor is activated by an attacker-defined trigger. Backdoor attacks can happen when the training process is not fully controlled by the user, such as when training on third-party datasets or adopting third-party models, which poses a new and realistic threat. Although backdoor learning is an emerging and rapidly growing research area, a systematic review of it is still missing. In this paper, we present the first comprehensive survey of this realm. We summarize and categorize existing backdoor attacks and defenses based on their characteristics, and provide a unified framework for analyzing poisoning-based backdoor attacks. Besides, we also analyze the relation between backdoor attacks and relevant fields (i.e., adversarial attacks and data poisoning), and summarize the benchmark datasets. Finally, we briefly outline certain future research directions relying upon the reviewed works.
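
To make the trigger mechanism concrete, the following is a minimal sketch of poisoning-based backdoor data preparation: a small patch is stamped onto a fraction of training images and their labels are flipped to the attacker's target class. The patch size, poison rate, and array conventions are illustrative assumptions, not a specific attack from the literature.

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05,
                   patch_value=1.0, patch_size=3, seed=0):
    """Stamp a square trigger patch onto a random subset and relabel to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: bright patch in the bottom-right corner of each poisoned image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    labels[idx] = target_class
    return images, labels, idx

# Toy usage on a fake grayscale dataset (N, H, W) with labels in {0, ..., 9}.
rng = np.random.default_rng(1)
X = rng.random(size=(100, 28, 28))
y = rng.integers(0, 10, size=100)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_class=7)
```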

Deep learning models on graphs have achieved remarkable performance in various graph analysis tasks, e.g., node classification, link prediction, and graph clustering. However, they are vulnerable to carefully designed inputs, i.e., adversarial examples. Accordingly, various studies on both attack and defense have emerged across different graph analysis tasks, leading to an arms race in graph adversarial learning. For instance, attackers rely on poisoning and evasion attacks, and defenders correspondingly use preprocessing-based and adversarial-based methods. Despite this booming body of work, a unified problem definition and a comprehensive review are still lacking. To bridge this gap, we investigate and summarize the existing works on graph adversarial learning tasks systematically. Specifically, we survey and unify the existing works w.r.t. attack and defense in graph analysis tasks, and give proper definitions and taxonomies at the same time. Besides, we emphasize the importance of related evaluation metrics, and investigate and summarize them comprehensively. We hope our work can serve as a reference for the relevant researchers and thus assist their studies. More details of our work are available at //github.com/gitgiter/Graph-Adversarial-Learning.

Transfer learning aims at improving the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. In this way, the dependence on a large amount of target-domain data can be reduced when constructing target learners. Due to its wide application prospects, transfer learning has become a popular and promising area in machine learning. Although there are already some valuable and impressive surveys on transfer learning, these surveys introduce approaches in a relatively isolated way and lack the recent advances in transfer learning. Given the rapid expansion of the transfer learning area, it is both necessary and challenging to comprehensively review the relevant studies. This survey attempts to connect and systematize existing transfer learning research, as well as to summarize and interpret the mechanisms and the strategies in a comprehensive way, which may help readers gain a better understanding of the current research status and ideas. Different from previous surveys, this survey paper reviews over forty representative transfer learning approaches from the perspectives of data and model. The applications of transfer learning are also briefly introduced. In order to show the performance of different transfer learning models, twenty representative transfer learning models are used for experiments. The models are evaluated on three different datasets, i.e., Amazon Reviews, Reuters-21578, and Office-31. The experimental results demonstrate the importance of selecting appropriate transfer learning models for different applications in practice.
