
Graph Neural Networks (GNNs), a generalization of deep neural networks to graph data, have been widely used in various domains, ranging from drug discovery to recommender systems. However, GNNs in such applications are limited when only a few samples are available. Meta-learning has been an important framework for addressing the lack of samples in machine learning, and in recent years researchers have started to apply meta-learning to GNNs. In this work, we provide a comprehensive survey of different meta-learning approaches involving GNNs on various graph problems, showing the power of using these two approaches together. We categorize the literature based on proposed architectures, shared representations, and applications. Finally, we discuss several exciting future research directions and open problems.

Related Content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This uniquely broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles appear in one of five sections: cognitive science, neuroscience, learning systems, mathematics and computational analysis, and engineering and applications. Official website:

In the last decade or so, we have witnessed deep learning reinvigorating the machine learning field. It has solved many problems in computer vision, speech recognition, natural language processing, and various other tasks with state-of-the-art performance. In these domains the data is generally represented in Euclidean space. Various other domains conform to non-Euclidean space, for which a graph is an ideal representation. Graphs are suitable for representing the dependencies and interrelationships between various entities. Traditionally, handcrafted features for graphs have been incapable of providing the necessary inference for various tasks on this complex data representation. Recently, there has been an emergence of approaches that apply advances in deep learning to graph-based tasks. This article provides a comprehensive survey of graph neural networks (GNNs) in each learning setting: supervised, unsupervised, semi-supervised, and self-supervised learning. A taxonomy of each graph-based learning setting is provided, with logical divisions of the methods falling within it. The approaches for each learning task are analyzed from both theoretical and empirical standpoints. Further, we provide general architectural guidelines for building GNNs. Various applications and benchmark datasets are also provided, along with open challenges still plaguing the general applicability of GNNs.

Deep learning on graphs has attracted significant interest recently. However, most of the work has focused on (semi-) supervised learning, resulting in shortcomings including heavy label reliance, poor generalization, and weak robustness. To address these issues, self-supervised learning (SSL), which extracts informative knowledge through well-designed pretext tasks without relying on manual labels, has become a promising and trending learning paradigm for graph data. Different from SSL in other domains like computer vision and natural language processing, SSL on graphs has its own background, design ideas, and taxonomies. Under the umbrella of graph self-supervised learning, we present a timely and comprehensive review of the existing approaches that employ SSL techniques for graph data. We construct a unified framework that mathematically formalizes the paradigm of graph SSL. According to the objectives of the pretext tasks, we divide these approaches into four categories: generation-based, auxiliary property-based, contrast-based, and hybrid approaches. We further review the applications of graph SSL across various research fields and summarize the commonly used datasets, evaluation benchmarks, performance comparisons, and open-source code of graph SSL. Finally, we discuss the remaining challenges and potential future directions in this research field.
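As a concrete illustration of the contrast-based category above, two augmented views of the same nodes can be pulled together while all other pairs are pushed apart with an InfoNCE-style objective. The sketch below is a generic NumPy illustration, not the formulation of any specific method in the survey; the function name, temperature default, and view construction are all assumptions for the example.

```python
import numpy as np

def info_nce_loss(z1, z2, tau=0.5):
    """Contrast-based pretext objective (minimal InfoNCE sketch):
    z1[i] and z2[i] are embeddings of two augmented views of node i.
    Matching pairs (the diagonal) are treated as positives; every
    other pair in the batch serves as a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # unit-normalize views
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                 # pairwise cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)            # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                    # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))
loss_matched = info_nce_loss(z, z)                 # perfectly aligned views
loss_mismatched = info_nce_loss(z, np.roll(z, 1, axis=0))  # positives misaligned
```

Aligned views yield a lower loss than mismatched ones, which is exactly the signal a contrast-based pretext task trains on.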

Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objective and training data. Conventional pre-training methods may not be effective enough at knowledge transfer, since they make no adaptation for downstream tasks. To solve these problems, we propose a new transfer learning paradigm for GNNs that effectively leverages self-supervised tasks as auxiliary tasks to help the target task. Our methods adaptively select and combine different auxiliary tasks with the target task in the fine-tuning stage. We design an adaptive auxiliary loss weighting model that learns the weights of auxiliary tasks by quantifying the consistency between auxiliary tasks and the target task. In addition, we learn the weighting model through meta-learning. Our methods can be applied to various transfer learning approaches and perform well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed methods can effectively combine auxiliary tasks with the target task and significantly improve performance compared to state-of-the-art methods.
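One simple way to picture "quantifying the consistency between auxiliary tasks and the target task" is to compare gradient directions: an auxiliary task whose gradient points the same way as the target task's gradient is helpful, and one that points the opposite way is not. The sketch below is only an illustrative heuristic under that assumption; the paper's actual weighting model is learned via meta-learning, and all names here are hypothetical.

```python
import numpy as np

def consistency_weights(grad_target, grad_aux_list):
    """Hypothetical consistency-based weighting sketch: weight each
    auxiliary task by the cosine similarity between its gradient and
    the target task's gradient, clipped at zero so that conflicting
    tasks are ignored rather than subtracted."""
    gt = grad_target / np.linalg.norm(grad_target)
    weights = []
    for ga in grad_aux_list:
        cos = float(gt @ (ga / np.linalg.norm(ga)))   # alignment in [-1, 1]
        weights.append(max(cos, 0.0))                 # drop tasks that conflict
    return weights

w = consistency_weights(
    np.array([1.0, 0.0]),
    [np.array([2.0, 0.0]),    # perfectly aligned auxiliary gradient
     np.array([-1.0, 0.0]),   # directly conflicting auxiliary gradient
     np.array([0.0, 1.0])]    # orthogonal (neutral) auxiliary gradient
)
```

An aligned task gets weight 1, a conflicting task gets 0, and an orthogonal task contributes nothing, matching the intuition that only consistent auxiliary signals should shape fine-tuning.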

Graphs are widely used as a popular representation of the network structure of connected data. Graph data can be found in a broad spectrum of application domains such as social systems, ecosystems, biological networks, knowledge graphs, and information systems. With the continuous penetration of artificial intelligence technologies, graph learning (i.e., machine learning on graphs) is gaining attention from both researchers and practitioners. Graph learning has proved effective for many tasks, such as classification, link prediction, and matching. Generally, graph learning methods extract relevant features of graphs by taking advantage of machine learning algorithms. In this survey, we present a comprehensive overview of the state of the art in graph learning. Special attention is paid to four categories of existing graph learning methods: graph signal processing, matrix factorization, random walk, and deep learning. Major models and algorithms under these categories are reviewed in turn. We examine graph learning applications in areas such as text, images, science, knowledge graphs, and combinatorial optimization. In addition, we discuss several promising research directions in this field.
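The random-walk category above can be illustrated with a minimal DeepWalk-style sampler: node sequences drawn this way are later treated like "sentences" and fed to a skip-gram model to learn embeddings. This is a simplified sketch; the adjacency-dict format, parameter defaults, and function name are assumptions for the example.

```python
import random

def random_walks(adj, walk_length=5, walks_per_node=2, seed=0):
    """Uniform random-walk sampling over a graph.
    adj: dict mapping each node to a list of its neighbors.
    Returns walks_per_node walks of up to walk_length nodes per start node."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in adj:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:            # dead end: stop this walk early
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy path graph 0 - 1 - 2
adj = {0: [1], 1: [0, 2], 2: [1]}
walks = random_walks(adj)
```

Every consecutive pair in each walk is an edge of the graph, which is what lets downstream skip-gram training treat graph proximity like word co-occurrence.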

Dynamic neural networks are an emerging research topic in deep learning. Compared to static models, which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, leading to notable advantages in terms of accuracy, computational efficiency, adaptiveness, etc. In this survey, we comprehensively review this rapidly developing area by dividing dynamic networks into three main categories: 1) instance-wise dynamic models that process each instance with data-dependent architectures or parameters; 2) spatial-wise dynamic networks that conduct adaptive computation with respect to different spatial locations of image data; and 3) temporal-wise dynamic models that perform adaptive inference along the temporal dimension for sequential data such as videos and texts. The important research problems of dynamic networks, e.g., architecture design, decision-making schemes, optimization techniques, and applications, are reviewed systematically. Finally, we discuss the open problems in this field together with interesting future research directions.
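A common instance-wise scheme is early exiting: the network runs stage by stage and stops as soon as an intermediate classifier is confident, so easy inputs consume less computation than hard ones. The toy sketch below assumes a chain of stages with attached classifiers; the names, the softmax-confidence criterion, and the threshold are all illustrative, not a specific model from the survey.

```python
import numpy as np

def early_exit_inference(x, stages, classifiers, threshold=0.9):
    """Run stages in order; after each stage, an intermediate classifier
    checks its max softmax probability. If it exceeds the threshold, we
    exit with that prediction instead of running the remaining stages."""
    h = x
    for i, (stage, clf) in enumerate(zip(stages, classifiers)):
        h = stage(h)
        logits = clf(h)
        probs = np.exp(logits - logits.max())   # stable softmax
        probs = probs / probs.sum()
        if probs.max() >= threshold:
            return int(probs.argmax()), i       # (predicted class, exit stage)
    return int(probs.argmax()), len(stages) - 1  # fell through to the last stage

# Toy model: identity stages with fixed-logit classifiers.
stages = [lambda h: h, lambda h: h]
confident_first = [lambda h: np.array([10.0, 0.0]),   # stage 0 already sure
                   lambda h: np.array([0.0, 10.0])]
uncertain_first = [lambda h: np.array([0.0, 0.0]),    # stage 0 is 50/50
                   lambda h: np.array([0.0, 10.0])]
easy = early_exit_inference(np.zeros(2), stages, confident_first)
hard = early_exit_inference(np.zeros(2), stages, uncertain_first)
```

The "easy" input exits at stage 0 while the "hard" one runs both stages, which is precisely the adaptive-computation behavior the category describes.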

Graph neural networks provide a powerful toolkit for embedding real-world graphs into low-dimensional spaces according to specific tasks. Up to now, there have been several surveys on this topic. However, they usually emphasize different angles, so that readers cannot see a panorama of graph neural networks. This survey aims to overcome this limitation and provide a comprehensive review of graph neural networks. First, we provide a novel taxonomy for graph neural networks, and then survey up to 400 relevant works to show the panorama of the field. All of them are classified into the corresponding categories. In order to drive graph neural networks into a new stage, we summarize four future research directions aimed at overcoming the current challenges. We hope that more and more scholars will understand and exploit graph neural networks and use them in their own research communities.

Many learning tasks require dealing with graph data, which contains rich relational information among its elements. Modeling physics systems, learning molecular fingerprints, predicting protein interfaces, and classifying diseases all require a model that learns from graph inputs. In other domains, such as learning from non-structural data like text and images, reasoning over extracted structures, such as the dependency trees of sentences and the scene graphs of images, is an important research topic that also needs graph reasoning models. Graph neural networks (GNNs) are connectionist models that capture the dependence of graphs via message passing between the nodes of a graph. Unlike standard neural networks, graph neural networks retain a state that can represent information from a node's neighborhood at arbitrary depth. Although primitive GNNs were found difficult to train to a fixed point, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful learning with them. In recent years, systems based on variants of graph neural networks such as the graph convolutional network (GCN), graph attention network (GAT), and gated graph neural network (GGNN) have demonstrated ground-breaking performance on many of the tasks mentioned above. In this survey, we provide a detailed review of existing graph neural network models, systematically categorize the applications, and propose four open problems for future research.
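The message-passing computation these models share can be sketched in a few lines: each node aggregates its neighbors' features and transforms the result with shared weights. The NumPy sketch below uses mean aggregation and a ReLU purely for illustration; real GNN layers (GCN, GAT, GGNN) differ in their normalization, attention, and gating, and all names here are assumptions.

```python
import numpy as np

def message_passing_layer(A, H, W):
    """One message-passing step: each node mean-pools its neighbors'
    features, then applies a shared linear map followed by a ReLU.
    A: (n, n) adjacency matrix; H: (n, d) node features; W: (d, d_out)."""
    deg = A.sum(axis=1, keepdims=True)   # node degrees
    deg[deg == 0] = 1                    # avoid division by zero for isolated nodes
    messages = (A @ H) / deg             # mean of neighbor features
    return np.maximum(messages @ W, 0)   # shared linear transform + ReLU

# Toy path graph 0 - 1 - 2 with one-hot initial features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)
W = np.ones((3, 2))                      # toy weights
H1 = message_passing_layer(A, H, W)
```

Stacking such layers lets a node's state absorb information from neighborhoods of growing radius, which is the "arbitrary depth" property mentioned above.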

Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into different categories. With a focus on graph convolutional networks, we review alternative architectures that have recently been developed; these learning paradigms include graph attention networks, graph autoencoders, graph generative networks, and graph spatial-temporal networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes and benchmarks of the existing algorithms on different learning tasks. Finally, we propose potential research directions in this fast-growing field.

The era of big data provides researchers with convenient access to copious data. However, people often have little knowledge about it. The increasing prevalence of big data challenges the traditional methods of learning causality, because those methods were developed for cases with limited amounts of data and solid prior causal knowledge. This survey aims to close the gap between big data and learning causality with a comprehensive and structured review of traditional and frontier methods and a discussion of some open problems in learning causality. We begin with the preliminaries of learning causality. Then we categorize and revisit methods of learning causality for the typical problems and data types. After that, we discuss the connections between learning causality and machine learning. At the end, some open problems are presented to show the great potential of learning causality with data.

While advances in computing resources have made processing enormous amounts of data possible, human ability to identify patterns in such data has not scaled accordingly. Thus, efficient computational methods for condensing and simplifying data are becoming vital for extracting actionable insights. In particular, while data summarization techniques have been studied extensively, only recently has summarizing interconnected data, or graphs, become popular. This survey is a structured, comprehensive overview of the state-of-the-art methods for summarizing graph data. We first broach the motivation behind and the challenges of graph summarization. We then categorize summarization approaches by the type of graphs taken as input and further organize each category by core methodology. Finally, we discuss applications of summarization on real-world graphs and conclude by describing some open problems in the field.
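A core grouping-based methodology from this line of work collapses sets of nodes into supernodes whose edge weights count the connections between the original groups, yielding a much smaller graph that preserves coarse structure. The sketch below is illustrative only; it assumes a dense adjacency matrix and a given node partition, and the names are not from any particular summarization method.

```python
import numpy as np

def supernode_summary(A, groups):
    """Grouping-based summarization sketch: collapse each group of nodes
    into one supernode. S[i, j] counts the (directed) edges between
    group i and group j; diagonal entries count within-group edges.
    A: (n, n) adjacency matrix; groups: partition of node indices."""
    k = len(groups)
    S = np.zeros((k, k))
    for i, gi in enumerate(groups):
        for j, gj in enumerate(groups):
            S[i, j] = A[np.ix_(gi, gj)].sum()   # edge count between the groups
    return S

# Toy path graph 0 - 1 - 2 - 3, summarized into two supernodes.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = supernode_summary(A, [[0, 1], [2, 3]])
```

The 4-node path compresses to a 2-node summary whose weights record two within-group (directed) edges per supernode and one edge between the groups, illustrating how summarization trades resolution for size.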
