
We prove a priori and a posteriori error estimates for physics-informed neural networks (PINNs) for linear PDEs. We analyze elliptic equations in primal and mixed form, elasticity, parabolic, hyperbolic and Stokes equations; and a PDE constrained optimization problem. For the analysis, we propose an abstract framework in the common language of bilinear forms, and we show that coercivity and continuity lead to error estimates. The obtained estimates are sharp and reveal that the $L^2$ penalty approach for initial and boundary conditions in the PINN formulation weakens the norm of the error decay. Finally, utilizing recent advances in PINN optimization, we present numerical examples that illustrate the ability of the method to achieve accurate solutions.
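
To make the penalized formulation concrete, here is a minimal sketch of a PINN for a 1D Poisson problem in which the boundary conditions enter the loss as an L^2 penalty, the approach whose effect on the error norm is analyzed above. The network size, penalty weight, and training schedule are illustrative assumptions, not the paper's setup.

```python
# Minimal PINN sketch for -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# with the boundary conditions imposed as an L^2 penalty in the loss.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)  # exact u = sin(pi x)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
penalty = 100.0  # assumed weight of the L^2 boundary penalty

for step in range(5000):
    x = torch.rand(128, 1, requires_grad=True)      # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = (-d2u - f(x)).pow(2).mean()          # PDE residual in L^2
    xb = torch.tensor([[0.0], [1.0]])
    boundary = net(xb).pow(2).mean()                # L^2 penalty on u(0), u(1)
    loss = residual + penalty * boundary
    opt.zero_grad(); loss.backward(); opt.step()
```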

Related content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the fields of expertise represented on the editorial board of Neural Networks include psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles appear in one of five sections: Cognitive Science, Neuroscience, Learning Systems, Mathematical and Computational Analysis, and Engineering and Applications.

Existing knowledge graph (KG) embedding models have primarily focused on static KGs. However, real-world KGs do not remain static, but rather evolve and grow in tandem with the development of KG applications. Consequently, new facts and previously unseen entities and relations continually emerge, necessitating an embedding model that can quickly learn and transfer new knowledge as the graph grows. Motivated by this, we delve into an expanding field of KG embedding in this paper, i.e., lifelong KG embedding. We consider knowledge transfer and retention when learning on growing snapshots of a KG, without having to learn embeddings from scratch. The proposed model combines a masked KG autoencoder for embedding learning and update, an embedding transfer strategy that injects the learned knowledge into the embeddings of new entities and relations, and an embedding regularization method that avoids catastrophic forgetting. To investigate the impacts of different aspects of KG growth, we construct four datasets to evaluate the performance of lifelong KG embedding. Experimental results show that the proposed model outperforms state-of-the-art inductive and lifelong embedding baselines.
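
As a hedged illustration of the embedding regularization idea (not the paper's implementation), the sketch below penalizes drift of embeddings learned on earlier snapshots while the task loss fits the new snapshot's facts; the function name and the hyperparameter lam are assumptions.

```python
# Sketch: L2 regularization of old-entity embeddings against a frozen copy
# from the previous snapshot, to limit catastrophic forgetting.
import torch

def lifelong_loss(task_loss, entity_emb, old_entity_emb, old_ids, lam=0.1):
    """entity_emb: current embedding table (num_entities x dim);
    old_entity_emb: frozen copy from the previous snapshot;
    old_ids: indices of entities present in the previous snapshot;
    lam: assumed regularization strength."""
    drift = entity_emb[old_ids] - old_entity_emb[old_ids]
    return task_loss + lam * drift.pow(2).sum(dim=-1).mean()
```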

Recently, graph neural networks have been gaining attention for simulating dynamical systems, owing to their inductive nature, which enables zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. A growing body of literature attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulation, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems to compare performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and the decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulating large-scale realistic systems.
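
The Hamiltonian inductive bias shared by several of these models can be sketched as follows: learn a scalar energy H(q, p) with a network and derive the dynamics from its symplectic gradient, dq/dt = ∂H/∂p, dp/dt = -∂H/∂q. The architecture below is a minimal single-particle illustration under assumed layer sizes, not any of the thirteen evaluated models; rollouts would integrate these derivatives with an ODE solver.

```python
# Sketch of a Hamiltonian neural network: a learned scalar H(q, p) whose
# symplectic gradient gives the time derivatives of position and momentum.
import torch

H = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Softplus(),
                        torch.nn.Linear(64, 1))

def time_derivatives(q, p):
    # q, p: (N, 1) position and momentum; returns (dq/dt, dp/dt)
    qp = torch.cat([q, p], dim=-1).requires_grad_(True)
    grads = torch.autograd.grad(H(qp).sum(), qp, create_graph=True)[0]
    return grads[:, 1:], -grads[:, :1]
```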

Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through methods such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs has been unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them by the explainability methods they use. We further present common performance metrics for GNN explanations and point out several future research directions.

The dominant NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction-based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing, and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we show that these approaches yield more robust models, as demonstrated on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule which alleviates catastrophic forgetting issues during adaptation.
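
The DRO framework this part of the thesis builds on can be illustrated with a simple group-level instance (a generic sketch, not the thesis's parametric reformulation): rather than the average loss, the model minimizes a worst-case reweighting of per-group losses, so the hardest groups dominate the objective.

```python
# Sketch of a group-level DRO objective: softmax weights over per-group
# losses upweight the worst-performing groups (small temperature -> max).
import torch

def dro_loss(per_example_loss, group_ids, num_groups, temperature=0.1):
    # Assumes every group is represented in the batch.
    group_losses = torch.stack([
        per_example_loss[group_ids == g].mean() for g in range(num_groups)
    ])
    weights = torch.softmax(group_losses.detach() / temperature, dim=0)
    return (weights * group_losses).sum()
```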

Graph Neural Networks (GNNs) have received considerable attention for graph-structured data learning across a wide variety of tasks. The well-designed propagation mechanism, which has been demonstrated to be effective, is the most fundamental part of GNNs. Although most GNNs basically follow a message-passing scheme, little effort has been made to discover and analyze their essential relations. In this paper, we establish a surprising connection between different propagation mechanisms and a unified optimization problem, showing that despite the proliferation of various GNNs, their proposed propagation mechanisms are in fact optimal solutions of a feature fitting function over a wide class of graph kernels with a graph regularization term. Our proposed unified optimization framework, which summarizes the commonalities between several of the most representative GNNs, not only provides a macroscopic view for surveying the relations between different GNNs, but also opens up new opportunities for flexibly designing new GNNs. With the proposed framework, we discover that existing works usually utilize naive graph convolutional kernels for the feature fitting function, and we further develop two novel objective functions considering adjustable graph kernels with low-pass or high-pass filtering capabilities, respectively. Moreover, we provide convergence proofs and expressive-power comparisons for the proposed models. Extensive experiments on benchmark datasets clearly show that the proposed GNNs not only outperform state-of-the-art methods but also effectively alleviate over-smoothing, further verifying the feasibility of designing GNNs within our unified optimization framework.
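
For intuition on such a unified objective, the simplest instance reads min_Z ||Z - X||_F^2 + λ tr(Zᵀ L Z) with graph Laplacian L = I - Â, whose closed-form solution Z* = (I + λL)^{-1} X can be approximated by a fixed-point iteration. The sketch below uses our own notation and this simplest (naive-kernel) case, not the paper's adjustable kernels.

```python
# Sketch: propagation as the solution of a feature-fitting plus
# graph-regularization objective. Setting the gradient to zero gives
# ((1 + lam) I - lam * A_hat) Z = X, i.e. the iteration
#   Z <- (1/(1+lam)) X + (lam/(1+lam)) A_hat Z,
# a personalized-PageRank-style propagation scheme.
import numpy as np

def propagate(X, A_hat, lam=1.0, iters=10):
    """X: (n, d) node features; A_hat: (n, n) normalized adjacency."""
    alpha = 1.0 / (1.0 + lam)
    Z = X.copy()
    for _ in range(iters):
        Z = alpha * X + (1.0 - alpha) * (A_hat @ Z)
    return Z
```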

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
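
As a concrete (and deliberately simple) instance of category (1), here is a sketch of uniform affine 8-bit quantization of a weight tensor; actual methods in this literature vary considerably in granularity, calibration, and training-time handling.

```python
# Sketch of uniform affine 8-bit quantization: store uint8 codes plus a
# scale and offset, reducing weight memory roughly 4x versus float32.
import numpy as np

def quantize_uint8(w):
    """Map float weights to uint8 plus (scale, offset) for dequantization."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.clip(np.round((w - lo) / scale), 0, 255).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return q.astype(np.float32) * scale + lo
```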

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory-intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. A natural thought, therefore, is to perform model compression and acceleration in deep networks without significantly decreasing model performance. During the past few years, tremendous progress has been made in this area. In this paper, we survey recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis regarding performance, related applications, advantages, and drawbacks. We then go through a few very recent, successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible directions on this topic.

With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
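
A minimal sketch of this representation (layer sizes and conditioning scheme are our assumptions): a classifier maps a 3D point, conditioned on an observation encoding z, to an occupancy logit; the surface is the decision boundary, and a mesh can be extracted by thresholding the predicted probability at 0.5 (e.g., with marching cubes).

```python
# Sketch of an occupancy network: an MLP classifier over (3D point, latent
# code) pairs whose 0.5 decision boundary implicitly defines the surface.
import torch

class OccupancyNet(torch.nn.Module):
    def __init__(self, z_dim=128, hidden=256):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3 + z_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, points, z):
        # points: (B, N, 3); z: (B, z_dim) -> occupancy logits (B, N)
        z = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.mlp(torch.cat([points, z], dim=-1)).squeeze(-1)
```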

Within the rapidly developing Internet of Things (IoT), numerous and diverse physical devices, edge devices, cloud infrastructure, and their quality of service (QoS) requirements need to be represented within a unified specification in order to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneities among different configuration knowledge representation models pose limitations on the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model to represent IoT resource configuration knowledge artifacts. It also proposes IoT-CANE (Context-Aware recommendatioN systEm) to facilitate incremental knowledge acquisition and declarative, context-driven knowledge recommendation.

Visual Question Answering (VQA) models have so far struggled with counting objects in natural images. We identify soft attention in these models as a fundamental cause of this problem. To circumvent it, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component improves counting over a strong baseline by 6.6%.
