
Physics-informed neural networks (PINNs) solve partial differential equations by training neural networks to satisfy them. Since this method approximates infinite-dimensional PDE solutions with a finite set of collocation points, selecting suitable points to minimize the discretization error is essential for accelerating the learning process. Inspired by number-theoretic methods for numerical analysis, we introduce good lattice training (GLT) together with periodization tricks that ensure the conditions required by the theory. Our experiments demonstrate that GLT requires 2-7 times fewer collocation points, resulting in lower computational cost, while achieving performance competitive with typical sampling methods.
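As a sketch of what number-theoretic collocation points look like, the snippet below generates a rank-1 lattice, a standard good-lattice construction; the generating vector and point count are illustrative choices, not the paper's settings, and such points would simply replace randomly sampled collocation points in the PINN residual loss.

```python
import numpy as np

def rank1_lattice(n_points, gen_vector):
    """Rank-1 lattice points x_i = frac(i * z / N) in the unit cube [0, 1)^d."""
    i = np.arange(n_points)[:, None]      # (N, 1)
    z = np.asarray(gen_vector)[None, :]   # (1, d)
    return (i * z / n_points) % 1.0       # (N, d) low-discrepancy collocation points

# Example: a 2D Fibonacci lattice with N = 89 and z = (1, 55);
# consecutive Fibonacci numbers give a classical good lattice.
collocation_points = rank1_lattice(89, (1, 55))
print(collocation_points.shape)  # (89, 2)
```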

Related content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum to develop and nurture an international community of scholars and practitioners who are interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes high-quality submissions that contribute to the full range of neural networks research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analyses, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, and engineering and applications. Official website:

Graph neural networks (GNNs) are effective machine learning models for many graph-related applications. Despite their empirical success, many research efforts focus on the theoretical limitations of GNNs, i.e., their expressive power. Early works in this domain mainly study the graph isomorphism recognition ability of GNNs, while recent works try to leverage properties such as subgraph counting and connectivity learning to characterize the expressive power of GNNs, which are more practical and closer to real-world settings. However, no survey paper or open-source repository comprehensively summarizes and discusses models in this important direction. To fill the gap, we conduct the first survey of models for enhancing expressive power under different forms of definition. Concretely, the models are reviewed based on three categories: graph feature enhancement, graph topology enhancement, and GNN architecture enhancement.

Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objective and training data. Conventional pre-training methods may not be effective enough at knowledge transfer since they do not adapt to downstream tasks. To solve this problem, we propose a new transfer learning paradigm for GNNs that effectively leverages self-supervised tasks as auxiliary tasks to help the target task. Our method adaptively selects and combines different auxiliary tasks with the target task in the fine-tuning stage. We design an adaptive auxiliary loss weighting model that learns the weights of auxiliary tasks by quantifying the consistency between the auxiliary tasks and the target task, and we train this weighting model through meta-learning. Our method can be applied to various transfer learning approaches; it performs well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed method effectively combines auxiliary tasks with the target task and significantly improves performance compared to state-of-the-art methods.
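A minimal sketch of the weighted-loss idea, assuming learnable softmax weights over auxiliary tasks; the paper's consistency-based, meta-learned weighting model is not reproduced here.

```python
import torch
import torch.nn as nn

class AuxiliaryLossWeighting(nn.Module):
    """Combine a target loss with auxiliary losses via learnable weights.

    Simplified stand-in: the weights below are free parameters, whereas the
    paper derives them from task consistency and trains them by meta-learning.
    """

    def __init__(self, num_aux_tasks):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_aux_tasks))

    def forward(self, target_loss, aux_losses):
        # aux_losses: tensor of shape (num_aux_tasks,)
        weights = torch.softmax(self.logits, dim=0)
        return target_loss + (weights * aux_losses).sum()
```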

Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. E.g., we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
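The decomposition described above can be written with the chain rule of mutual information; the inequality below is schematic, with the NCE-subscripted terms standing for contrastive (InfoNCE-style) lower bounds, a notation introduced here only for illustration.

```latex
% Split view y into subviews (y_1, y_2) and apply the MI chain rule;
% each smaller term is then bounded from below by a contrastive estimator.
I(x; y_1, y_2) \;=\; I(x; y_1) + I(x; y_2 \mid y_1)
            \;\ge\; \hat{I}_{\mathrm{NCE}}(x; y_1) + \hat{I}_{\mathrm{NCE}}(x; y_2 \mid y_1)
```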

We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture the interactions among different subgraphs. To expedite the computations, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by identifying subgraphs explicitly and directly. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
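As a sketch of how a subgraph's Shapley value can be approximated by Monte Carlo sampling: this is a generic estimator, not SubgraphX's specific approximation scheme, and score_fn and the player set are placeholders.

```python
import random

def mc_shapley(score_fn, target, players, num_samples=100):
    """Monte Carlo estimate of the Shapley value of `target` among `players`.

    score_fn(coalition) should return the GNN prediction score when only the
    graph components in `coalition` are kept (e.g. the rest is masked out).
    """
    others = [p for p in players if p != target]
    total = 0.0
    for _ in range(num_samples):
        k = random.randint(0, len(others))         # coalition size, sampled uniformly
        coalition = set(random.sample(others, k))  # random coalition without the target
        total += score_fn(coalition | {target}) - score_fn(coalition)
    return total / num_samples
```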

Approaches based on deep neural networks achieve striking performance when the testing data and training data share a similar distribution, but can fail significantly otherwise. Therefore, eliminating the impact of distribution shifts between training and testing data is crucial for building deep models with reliable performance. Conventional methods assume either known heterogeneity of the training data (e.g. domain labels) or approximately equal capacities of different domains. In this paper, we consider a more challenging case where neither assumption holds. We propose to address this problem by removing the dependencies between features via learning weights for training samples, which helps deep models get rid of spurious correlations and, in turn, concentrate more on the true connection between discriminative features and labels. Through extensive experiments on distribution generalization benchmarks including PACS, VLCS, MNIST-M, and NICO, we demonstrate the effectiveness of our method compared with state-of-the-art counterparts.
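A minimal sketch of sample reweighting for feature decorrelation, assuming a simple weighted-covariance penalty; the paper's dependence measure is more general, and the linear version here is only illustrative.

```python
import torch

def decorrelation_loss(features, weight_logits):
    """Learn per-sample weights so the weighted feature covariance is near-diagonal.

    features: (n_samples, n_features) representations from the deep model.
    weight_logits: (n_samples,) free parameters; softmax keeps the weights positive
    and normalized. Only linear dependence between features is penalized here.
    """
    w = torch.softmax(weight_logits, dim=0)
    mean = (w[:, None] * features).sum(dim=0, keepdim=True)
    centered = features - mean
    cov = (w[:, None] * centered).T @ centered        # weighted covariance matrix
    off_diag = cov - torch.diag(torch.diag(cov))      # zero out the diagonal
    return (off_diag ** 2).sum()
```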

Embedding entities and relations into a continuous multi-dimensional vector space has become the dominant method for knowledge graph embedding in representation learning. However, most existing models fail to represent hierarchical knowledge, such as the similarities and dissimilarities of entities within one domain. We propose to learn domain representations on top of existing knowledge graph embedding models, so that entities with similar attributes are organized into the same domain. Such hierarchical knowledge of domains provides additional evidence for link prediction. Experimental results show that domain embeddings give a significant improvement over recent state-of-the-art baseline knowledge graph embedding models.
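One plausible way to realize this idea, sketched below under the assumption of a TransE-style base model, is to add a term that pulls each entity toward a learnable centroid of its domain; the centroid and its combination with the base score are hypothetical and not the paper's exact formulation.

```python
import torch

def transe_score(h, r, t):
    """Standard TransE plausibility score: smaller distance = more plausible triple."""
    return (h + r - t).norm(dim=-1)

def domain_pull(entity_emb, domain_centroid):
    """Hypothetical domain term: encourage entities of the same domain to cluster
    around a shared, learnable centroid (one centroid per domain)."""
    return (entity_emb - domain_centroid).norm(dim=-1).mean()
```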

Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete, or might not be available at all. With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted but also in those where a graph is not available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
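A minimal sketch of the learnable edge distribution, assuming independent Bernoulli variables per candidate edge with a straight-through sample; the bilevel outer/inner optimization loop from the paper is not shown.

```python
import torch
import torch.nn as nn

class LearnableEdgeDistribution(nn.Module):
    """Independent Bernoulli distribution over the entries of an adjacency matrix."""

    def __init__(self, num_nodes):
        super().__init__()
        self.theta = nn.Parameter(torch.zeros(num_nodes, num_nodes))  # edge logits

    def sample(self):
        probs = torch.sigmoid(self.theta)
        hard = torch.bernoulli(probs)      # discrete adjacency sample
        # Straight-through trick: the forward pass uses the discrete sample,
        # while gradients flow back through the continuous probabilities.
        return hard + probs - probs.detach()

# The sampled adjacency would then be fed to a GCN, whose loss drives both the
# GCN weights (inner problem) and the edge logits (outer problem).
```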

Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs, a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DiffPool, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DiffPool learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DiffPool yields an average improvement of 5-10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.
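The core coarsening step that DiffPool performs at each layer can be sketched as follows; the assignment and embedding GNNs, as well as the paper's auxiliary link-prediction and entropy losses, are omitted here.

```python
import torch

def diffpool_step(adj, x, assign_logits):
    """Soft-assign nodes to clusters and coarsen the graph.

    adj: (n, n) adjacency, x: (n, d) node embeddings,
    assign_logits: (n, k) cluster-assignment logits produced by a separate GNN.
    Returns coarsened features (k, d) and adjacency (k, k) for the next layer.
    """
    s = torch.softmax(assign_logits, dim=-1)   # soft cluster assignment matrix S
    x_coarse = s.T @ x                         # X' = S^T X
    adj_coarse = s.T @ adj @ s                 # A' = S^T A S
    return x_coarse, adj_coarse
```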

Multi-relation question answering is a challenging task, due to the need for elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network, which employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the currently parsed results; uses the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis, thereby allowing manual manipulation of the final answer prediction.
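A minimal sketch of a hop-by-hop loop of this kind, assuming attention over question tokens, a relation classifier, and a GRU state update; the architecture below is illustrative, not the exact Interpretable Reasoning Network.

```python
import torch
import torch.nn as nn

class HopByHopReasoner(nn.Module):
    """Illustrative hop-by-hop reasoner: attend to part of the question,
    predict a relation, then update the question representation and the state."""

    def __init__(self, dim, num_relations, num_hops):
        super().__init__()
        self.num_hops = num_hops
        self.rel_emb = nn.Embedding(num_relations, dim)
        self.rel_scorer = nn.Linear(dim, num_relations)
        self.state_update = nn.GRUCell(dim, dim)

    def forward(self, question_tokens, state):
        # question_tokens: (seq_len, dim); state: (1, dim)
        relations = []
        for _ in range(self.num_hops):
            attn = torch.softmax(question_tokens @ state.squeeze(0), dim=0)
            focus = (attn[:, None] * question_tokens).sum(dim=0)  # analyzed question part
            rel_id = self.rel_scorer(focus).argmax(dim=-1, keepdim=True)  # hard choice;
            rel_vec = self.rel_emb(rel_id).squeeze(0)    # a soft mix would be differentiable
            question_tokens = question_tokens - attn[:, None] * rel_vec  # "consume" that part
            state = self.state_update(rel_vec[None, :], state)
            relations.append(int(rel_id))
        return state, relations   # hop-wise relations give a traceable reasoning path
```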

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs into producing adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but producing them efficiently and with high perceptual quality requires further research. In this paper, we propose AdvGAN, which generates adversarial examples with generative adversarial networks (GANs) that can learn and approximate the distribution of original instances. With AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, potentially accelerating adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model of the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have a high attack success rate under state-of-the-art defenses compared to other attacks. Our attack placed first with 92.76% accuracy on a public MNIST black-box attack challenge.
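A schematic sketch of an AdvGAN-style generator objective, combining a loss that fools the target model, a GAN realism loss, and a bound on the perturbation size; the untargeted form, the hinge bound, and the weighting below are illustrative choices, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def generator_loss(perturbation, target_model_logits, discriminator_logits,
                   true_labels, c_bound=0.3, alpha=1.0, beta=1.0):
    """Generator objective: fool the target model, look realistic to the
    discriminator, and keep the perturbation's L2 norm below c_bound.

    perturbation: (batch, ...) generator output G(x) added to the input x.
    """
    # Untargeted adversarial loss: push the target model away from the true labels.
    adv_loss = -F.cross_entropy(target_model_logits, true_labels)
    # GAN loss: the discriminator should label the adversarial example as real.
    gan_loss = F.binary_cross_entropy_with_logits(
        discriminator_logits, torch.ones_like(discriminator_logits))
    # Soft hinge bound on the perturbation magnitude.
    hinge = torch.clamp(perturbation.flatten(1).norm(dim=1) - c_bound, min=0).mean()
    return adv_loss + alpha * gan_loss + beta * hinge
```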
