
The advent of deep learning and its astonishing performance in perception tasks, such as object recognition and classification, has enabled its use in complex systems, including autonomous vehicles. However, deep learning models are susceptible to mis-predictions when small, adversarial changes are introduced into their input. Such mis-predictions can be triggered in the real world and, rather than remaining localized, can propagate to a failure of the entire system. In recent years, a growing number of research works have investigated attacks against autonomous vehicles that exploit deep learning components used for perception tasks. Such attacks are directed at elements of the environment in which these systems operate, and their effectiveness is assessed in terms of the system-level failures they trigger. However, there has been no systematic attempt to analyze and categorize such attacks. In this paper, we present the first taxonomy of system-level attacks against autonomous vehicles. We constructed our taxonomy by first collecting 8,831 papers, filtering them down to 1,125 candidates, and eventually selecting 19 highly relevant papers that satisfy all inclusion criteria. We then tagged them with taxonomy categories, involving three assessors per paper. The resulting taxonomy includes 12 top-level categories and several sub-categories. The taxonomy allowed us to investigate the attack features, the most attacked components, the underlying threat models, and the propagation chains from input perturbation to system-level failure. We distill several lessons for practitioners and identify possible directions for future work for researchers.
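As a minimal illustration of the kind of input perturbation the surveyed attacks build on, the following sketch applies an FGSM-style perturbation to a toy logistic-regression "classifier". The model, weights, and epsilon here are illustrative assumptions, not taken from any of the surveyed papers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(w @ x + b)        # model's predicted probability
    grad_x = (p - y_true) * w     # d(cross-entropy loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)            # toy "perception model" weights
b = 0.0
x = rng.normal(size=8)            # clean input
y = 1.0                           # ground-truth label

clean_pred = sigmoid(w @ x + b)
adv_x = fgsm_perturb(x, w, b, y, eps=0.5)
adv_pred = sigmoid(w @ adv_x + b)
# The bounded perturbation increases the loss, pushing the prediction away
# from the true label y=1 while changing each input feature by at most eps.
```

A system-level attack, as categorized in the taxonomy, would additionally require realizing such a perturbation physically (e.g., on a road sign) and showing that the mis-prediction propagates to a vehicle-level failure.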

Related content

Taxonomy is the practice and science of classification. Wikipedia's categories illustrate one taxonomy, and the full taxonomy of Wikipedia categories can be extracted by automated means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of a computational lexicon like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents: for example, "car" might appear under both "vehicles" and "steel structures"; for some, however, this only means that "car" is part of several different taxonomies. A taxonomy might also simply be an organization of things into groups, or an alphabetical list; in that case, though, the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a larger variety of relation types.
Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below this root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning proceeds from the general to the more specific.


The fusion of causal models with deep learning, which introduces increasingly intricate data such as the causal associations within images or between textual components, has surfaced as a focal research area. Nonetheless, broadening original causal concepts and theories to such complex, non-statistical data has met with serious challenges. In response, our study proposes a redefinition of causal data into three distinct categories from the standpoint of causal structure and representation: definite data, semi-definite data, and indefinite data. Definite data chiefly pertains to the statistical data used in conventional causal scenarios, while semi-definite data refers to a spectrum of data formats germane to deep learning, including time series, images, text, and others. Indefinite data is an emergent research area that we infer from the progression of data forms. To comprehensively present these three data paradigms, we elaborate on their formal definitions, the differences manifested in datasets, resolution pathways, and the development of research. We summarize key tasks and achievements pertaining to definite and semi-definite data from myriad research undertakings, and present a roadmap for indefinite data, beginning with its current research conundrums. Lastly, we classify and scrutinize the key datasets presently utilized within these three paradigms.

Mathematical reasoning is a fundamental aspect of human intelligence and is applicable in various fields, including science, engineering, finance, and everyday life. The development of artificial intelligence (AI) systems capable of solving math problems and proving theorems has garnered significant interest in the fields of machine learning and natural language processing. For example, mathematics serves as a testbed for aspects of reasoning that are challenging for powerful deep learning models, driving new algorithmic and modeling advances. On the other hand, recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning. In this survey paper, we review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade. We also evaluate existing benchmarks and methods, and discuss future research directions in this domain.

In pace with developments in the research field of artificial intelligence, knowledge graphs (KGs) have attracted a surge of interest from both academia and industry. As a representation of semantic relations between entities, KGs have proven to be particularly relevant for natural language processing (NLP), experiencing a rapid spread and wide adoption within recent years. Given the increasing amount of research work in this area, several KG-related approaches have been surveyed in the NLP research community. However, a comprehensive study that categorizes established topics and reviews the maturity of individual research streams remains absent to this day. Contributing to closing this gap, we systematically analyzed 507 papers from the literature on KGs in NLP. Our survey encompasses a multifaceted review of tasks, research types, and contributions. As a result, we present a structured overview of the research landscape, provide a taxonomy of tasks, summarize our findings, and highlight directions for future work.
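To make the triple-based view of KGs concrete, here is a toy knowledge graph stored as (head, relation, tail) triples with a pattern-matching query helper. The entities and relations are made up for illustration:

```python
# A toy knowledge graph as (head, relation, tail) triples.
triples = {
    ("Berlin", "capital_of", "Germany"),
    ("Germany", "member_of", "EU"),
    ("Paris", "capital_of", "France"),
    ("France", "member_of", "EU"),
}

def query(head=None, relation=None, tail=None):
    """Return all triples matching the given (possibly partial) pattern,
    sorted for deterministic output."""
    return sorted(
        (h, r, t) for (h, r, t) in triples
        if (head is None or h == head)
        and (relation is None or r == relation)
        and (tail is None or t == tail)
    )

capitals = query(relation="capital_of")                     # all capital facts
eu_members = [h for (h, _, _) in query(relation="member_of", tail="EU")]
```

NLP tasks surveyed in this line of work (entity linking, relation extraction, KG-augmented language models) operate over exactly this kind of triple structure, at much larger scale.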

Understanding causality helps to structure interventions to achieve specific goals and enables predictions under interventions. With the growing importance of learning causal relationships, causal discovery has transitioned from traditional methods that infer potential causal structures from observational data to the pattern-recognition territory of deep learning. The rapid accumulation of massive data has promoted the emergence of causal discovery methods with excellent scalability. Existing summaries of causal discovery methods mainly focus on traditional methods based on constraints, scores, and functional causal models (FCMs); deep learning-based methods still lack a thorough organization and elaboration, and causal discovery has rarely been considered and explored from the perspective of variable paradigms. Therefore, we divide the possible causal discovery tasks into three types according to the variable paradigm and give the definitions of the three tasks respectively, define and instantiate the relevant datasets for each task along with the final causal model constructed, and then review the main existing causal discovery methods for the different tasks. Finally, we propose some roadmaps from different perspectives for the current research gaps in the field of causal discovery and point out future research directions.
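The constraint-based family mentioned above prunes edges by testing (conditional) independencies in the data. A minimal sketch on simulated data from a known chain X → Y → Z, using (partial) correlation as a stand-in for a proper independence test; the coefficients and thresholds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)   # X -> Y
z = 0.8 * y + rng.normal(size=n)   # Y -> Z

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing c out of both."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return corr(ra, rb)

marginal_xz = corr(x, z)                 # sizeable: X and Z are dependent
conditional_xz = partial_corr(x, z, y)   # near zero: X is independent of Z given Y
# A PC-style algorithm would therefore delete the direct X-Z edge while
# keeping X-Y and Y-Z, recovering the chain's skeleton.
```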

In contrast to batch learning, where all training data is available at once, continual learning represents a family of methods that accumulate knowledge and learn continuously from data that arrives in sequential order. Similar to the human learning process, with its ability to learn, fuse, and accumulate new knowledge arriving at different time steps, continual learning is considered to have high practical significance. Hence, continual learning has been studied in various artificial intelligence tasks. In this paper, we present a comprehensive review of the recent progress of continual learning in computer vision. In particular, the works are grouped by their representative techniques, including regularization, knowledge distillation, memory, generative replay, parameter isolation, and combinations of the above. For each category of techniques, both its characteristics and its applications in computer vision are presented. At the end of this overview, we discuss several subareas where continuous knowledge accumulation is potentially helpful but continual learning has not yet been well studied.
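As a concrete instance of the regularization family, the following sketch shows an EWC-style quadratic penalty that anchors parameters deemed important for an earlier task while training on a new one. The toy quadratic task losses and the Fisher importance values are illustrative assumptions, not a real training setup:

```python
import numpy as np

def ewc_grad(theta, grad_b_fn, theta_a, fisher, lam):
    """Gradient of the task-B loss plus an EWC-style pull toward task A's
    optimum, weighted per parameter by an importance estimate (Fisher diagonal)."""
    return grad_b_fn(theta) + lam * fisher * (theta - theta_a)

theta_a = np.zeros(3)                   # optimum found on task A
target_b = np.array([2.0, 2.0, 2.0])    # task B wants theta == target_b
fisher = np.array([10.0, 1.0, 0.0])     # parameter 0 is "important" for task A

def grad_b(theta):
    return 2.0 * (theta - target_b)     # gradient of ||theta - target_b||^2

theta = theta_a.copy()
for _ in range(500):                    # gradient descent on the combined loss
    theta -= 0.05 * ewc_grad(theta, grad_b, theta_a, fisher, lam=1.0)
# Important parameters stay near task A's values (mitigating forgetting);
# unimportant ones move freely to task B's target.
```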

Data augmentation, the artificial creation of training data for machine learning through transformations, is a widely studied research field across machine learning disciplines. While it is useful for increasing the generalization capabilities of a model, it can also address many other challenges and problems, from overcoming a limited amount of training data, through regularizing the objective, to limiting the amount of data used in order to protect privacy. Based on a precise description of the goals and applications of data augmentation (C1) and a taxonomy of existing works (C2), this survey is concerned with data augmentation methods for textual classification and aims to provide a concise and comprehensive overview for researchers and practitioners (C3). Derived from the taxonomy, we divide more than 100 methods into 12 groupings and provide state-of-the-art references indicating which methods are highly promising (C4). Finally, research perspectives that may constitute a building block for future work are given (C5).
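Two of the simplest augmentation transformations for text classification, random token swap and random token deletion, can be sketched as follows. The sentence and rates are illustrative; real pipelines also use synonym replacement, back-translation, and similar label-preserving transformations:

```python
import random

def random_swap(tokens, n_swaps, rng):
    """Swap n_swaps random pairs of token positions."""
    tokens = tokens.copy()
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p_delete, rng):
    """Drop each token independently with probability p_delete."""
    kept = [t for t in tokens if rng.random() > p_delete]
    return kept or [rng.choice(tokens)]   # never return an empty example

rng = random.Random(42)
sentence = "the quick brown fox jumps over the lazy dog".split()
swapped = random_swap(sentence, n_swaps=2, rng=rng)
shortened = random_deletion(sentence, p_delete=0.3, rng=rng)
```

Both transformations keep the class label intact while varying surface form, which is exactly the property augmentation for classification relies on.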

Influenced by the stunning success of deep learning in computer vision and language understanding, research in recommendation has shifted to inventing new recommender models based on neural networks. In recent years, we have witnessed significant progress in developing neural recommender models, which generalize and surpass traditional recommender models owing to the strong representation power of neural networks. In this survey paper, we conduct a systematic review on neural recommender models, aiming to summarize the field to facilitate future progress. Distinct from existing surveys that categorize existing methods based on the taxonomy of deep learning techniques, we instead summarize the field from the perspective of recommendation modeling, which could be more instructive to researchers and practitioners working on recommender systems. Specifically, we divide the work into three types based on the data they used for recommendation modeling: 1) collaborative filtering models, which leverage the key source of user-item interaction data; 2) content enriched models, which additionally utilize the side information associated with users and items, like user profile and item knowledge graph; and 3) context enriched models, which account for the contextual information associated with an interaction, such as time, location, and the past interactions. After reviewing representative works for each type, we finally discuss some promising directions in this field, including benchmarking recommender systems, graph reasoning based recommendation models, and explainable and fair recommendations for social good.
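The collaborative filtering type above is classically instantiated by matrix factorization, which neural recommender models generalize. A minimal sketch on a toy rating matrix, where user and item embeddings are fit by gradient descent on the observed entries (all numbers and hyperparameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [0.0, 1.0, 5.0]])       # ratings; 0 marks an unobserved cell
mask = R > 0

k = 2                                  # latent dimension
U = 0.1 * rng.normal(size=(3, k))      # user factors
V = 0.1 * rng.normal(size=(3, k))      # item factors

lr, reg = 0.05, 0.01
for _ in range(2000):
    E = mask * (R - U @ V.T)           # error on observed entries only
    U += lr * (E @ V - reg * U)        # gradient step for user factors
    V += lr * (E.T @ U - reg * V)      # gradient step for item factors

rmse = np.sqrt((E[mask] ** 2).mean())  # training error (from the last iteration)
predicted = U @ V.T                    # same dot products fill the unobserved cells
```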

Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks is typically represented in Euclidean domains. Nevertheless, a growing number of applications in power systems collect data from non-Euclidean domains and represent it as graph-structured data, with high-dimensional features and interdependencies among nodes. The complexity of graph-structured data has brought significant challenges to existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, we present a comprehensive overview of graph neural networks (GNNs) in power systems. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and some research trends regarding the applications of GNNs in power systems are discussed.
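A single graph-convolution layer of the kind summarized above (following the familiar symmetric-normalization propagation rule) can be sketched on a toy four-node graph. The topology, features, and weights are illustrative assumptions, not a real power-system model:

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # undirected toy "grid" topology
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.5, 0.5]])                  # per-node (bus) features
W = np.array([[1.0, -1.0],
              [0.5,  0.5]])                 # layer weights (learnable; fixed here)

A_hat = A + np.eye(4)                       # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization

H = np.maximum(0.0, A_norm @ X @ W)         # one layer: ReLU(A_norm X W)
# Each row of H is a node embedding mixing the node's own features with its
# neighbors' — the basic operation GNN applications in power systems stack.
```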

We address the task of automatically scoring the competency of candidates based on textual features extracted from automatic speech recognition (ASR) transcriptions in asynchronous video job interviews (AVIs). The key challenge is how to construct the dependency relation between questions and answers and to model the semantic-level interaction for each question-answer (QA) pair. Most recent studies on AVIs focus on better representing questions and answers, but ignore the dependency information and the interaction between them, which is critical for QA evaluation. In this work, we propose a Hierarchical Reasoning Graph Neural Network (HRGNN) for the automatic assessment of question-answer pairs. Specifically, we construct a sentence-level relational graph neural network to capture the dependency information of sentences within and between the question and the answer. Based on these graphs, we employ a semantic-level reasoning graph attention network to model the interaction states of the current QA session. Finally, we propose a gated recurrent unit encoder to represent the temporal question-answer pairs for the final prediction. Empirical results on CHNAT (a real-world dataset) validate that our proposed model significantly outperforms text-matching-based benchmark models. Ablation studies and experiments with 10 random seeds also show the effectiveness and stability of our models.

Neural machine translation (NMT) is a deep learning-based approach to machine translation that yields state-of-the-art translation performance in scenarios where large-scale parallel corpora are available. Although high-quality, domain-specific translation is crucial in the real world, domain-specific corpora are usually scarce or nonexistent, and vanilla NMT therefore performs poorly in such scenarios. Domain adaptation, which leverages both out-of-domain parallel corpora and monolingual corpora for in-domain translation, is very important for domain-specific translation. In this paper, we give a comprehensive survey of the state-of-the-art domain adaptation techniques for NMT.
