
Neural networks can be used as surrogates for PDE models. They can be made physics-aware by penalizing violations of the underlying equations or of conserved physical quantities in the loss function during training. Current approaches additionally allow data from numerical simulations or experiments to be respected in the training process. However, this data is frequently expensive to obtain and thus only scarcely available for complex models. In this work, we investigate how physics-aware models can be enriched with computationally cheaper, but inexact, data from other surrogate models such as Reduced-Order Models (ROMs). To avoid trusting surrogate solutions of too low fidelity, we develop an approach that is sensitive to the error in the inexact data. As a proof of concept, we consider the one-dimensional wave equation and show that the training accuracy is increased by two orders of magnitude when inexact data from ROMs is incorporated.
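To make the idea concrete, here is a minimal sketch (not the authors' code) of such a physics-aware loss for the one-dimensional wave equation u_tt = c^2 u_xx, with an extra term for inexact ROM data. The inverse-error weighting 1/(1 + err^2) is an illustrative stand-in for whatever error-sensitive scheme the paper actually uses.

```python
import torch

def wave_residual(model, x, t, c=1.0):
    """PDE residual u_tt - c^2 u_xx at collocation points (x, t)."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = model(torch.stack([x, t], dim=-1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_tt = torch.autograd.grad(u_t.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_tt - c**2 * u_xx

def physics_aware_loss(model, xt_col, xt_rom, u_rom, rom_err):
    """PDE residual loss plus an error-weighted penalty on inexact ROM data."""
    res = wave_residual(model, xt_col[:, 0], xt_col[:, 1])
    loss_pde = (res**2).mean()
    # Trust each ROM sample inversely to its (estimated) error; this weighting
    # is an assumption for illustration, not the paper's exact scheme.
    w = 1.0 / (1.0 + rom_err**2)
    loss_rom = (w * (model(xt_rom).squeeze(-1) - u_rom)**2).mean()
    return loss_pde + loss_rom
```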

Related content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This uniquely broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the fields of expertise represented on the Neural Networks editorial board include psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles appear in one of five sections: Cognitive Science, Neuroscience, Learning Systems, Mathematical and Computational Analysis, or Engineering and Applications.

This paper presents a comprehensive review of the design of experiments (DoE) schemes used in surrogate modeling. In particular, this study demonstrates the necessity of DoE schemes for the Physics-Informed Neural Network (PINN), which belongs to the supervised learning class. Many complex partial differential equations (PDEs) do not have any analytical solution; only numerical methods can solve the equations, which is computationally expensive. In recent decades, the PINN has gained popularity as a replacement for numerical methods to reduce the computational budget. The PINN uses physical information in the form of differential equations to enhance the performance of neural networks. Although it works efficiently, the choice of DoE scheme is important, because the accuracy of the responses predicted by the PINN depends on the training data. In this study, five different PDEs are used for the numerical experiments: the viscous Burgers' equation, the Schr\"{o}dinger equation, the heat equation, the Allen-Cahn equation, and the Korteweg-de Vries equation. A comparative study is performed to establish the importance of selecting a DoE scheme. The results show that the Hammersley sampling-based PINN performs better than PINNs trained with other DoE sampling strategies.
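For illustration, the 2-D Hammersley point set that the study favours is easy to generate from the van der Corput radical inverse. The sketch below (with illustrative domain bounds) produces space-time collocation points of the kind a PINN would be trained on.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley_2d(n):
    """n points of the 2-D Hammersley set in [0, 1]^2: (i/n, phi_2(i))."""
    return np.array([[i / n, radical_inverse(i, 2)] for i in range(n)])

# Rescale to a space-time training domain, e.g. x in [-1, 1], t in [0, 1]:
pts = hammersley_2d(1000)
x = 2.0 * pts[:, 0] - 1.0
t = pts[:, 1]
```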

Physics-Informed Neural Networks (PINNs) are neural networks (NNs) that encode model equations, such as Partial Differential Equations (PDEs), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, and integro-differential equations. This novel methodology has arisen as a multi-task learning framework in which an NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs: while the primary goal of the study is to characterize these networks and their related advantages and disadvantages, the review also covers publications on a larger variety of topics, including physics-constrained neural networks (PCNNs), where the initial or boundary conditions are directly embedded in the NN structure rather than in the loss function. The study indicates that most research has focused on customizing the PINN through different activation functions, gradient optimization techniques, neural network architectures, and loss function structures. Despite the wide range of applications for which PINNs have been used, demonstrating that in some contexts they can be more feasible than classical numerical techniques such as the Finite Element Method (FEM), advancements are still possible, most notably on theoretical issues that remain unresolved.
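As a concrete illustration of the physics-constrained (PCNN) idea mentioned in the review, the sketch below builds homogeneous Dirichlet conditions u(0) = u(1) = 0 directly into the network output, so no boundary term is needed in the loss. The distance factor x(1 - x) is one common choice, not one prescribed by any particular paper reviewed.

```python
import torch
import torch.nn as nn

class HardBCNet(nn.Module):
    """1-D network whose output vanishes at x = 0 and x = 1 by construction."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        # Multiplying by x(1 - x) enforces the Dirichlet conditions exactly,
        # regardless of the network weights.
        return x * (1.0 - x) * self.net(x)
```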

Physics-informed neural networks (PINNs) have been proposed to learn the solution of partial differential equations (PDE). In PINNs, the residual form of the PDE of interest and its boundary conditions are lumped into a composite objective function as soft penalties. Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach when applied to different kinds of PDEs. To address these limitations, we propose a versatile framework based on a constrained optimization problem formulation, where we use the augmented Lagrangian method (ALM) to constrain the solution of a PDE with its boundary conditions and any high-fidelity data that may be available. Our approach is adept at forward and inverse problems with multi-fidelity data fusion. We demonstrate the efficacy and versatility of our physics- and equality-constrained deep-learning framework by applying it to several forward and inverse problems involving multi-dimensional PDEs. Our framework achieves orders of magnitude improvements in accuracy levels in comparison with state-of-the-art physics-informed neural networks.
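The following is a schematic of an ALM training loop of the kind described above, assuming user-supplied `pde_residual` and `bc_constraint` callables. The multiplier and penalty updates follow the textbook augmented Lagrangian recipe rather than the authors' exact implementation.

```python
import torch

def alm_train(model, pde_residual, bc_constraint, optimizer,
              outer_iters=20, inner_iters=500, mu=1.0, mu_growth=2.0):
    """Constrain BC violations c(theta) = 0 with multipliers lam, penalty mu."""
    lam = None
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            optimizer.zero_grad()
            r = pde_residual(model)      # residual at collocation points
            c = bc_constraint(model)     # boundary-condition violations
            if lam is None:
                lam = torch.zeros_like(c)
            loss = (r**2).mean() + (lam * c).mean() + 0.5 * mu * (c**2).mean()
            loss.backward()
            optimizer.step()
        with torch.no_grad():
            c = bc_constraint(model)
            lam = lam + mu * c           # standard ALM multiplier update
        mu *= mu_growth                  # optionally tighten the penalty
    return model
```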

Despite the successful implementations of physics-informed neural networks in different scientific domains, it has been shown that, for complex nonlinear systems, achieving an accurate model requires extensive hyperparameter tuning, network architecture design, and costly and exhaustive training. To avoid such obstacles and make the training of physics-informed models less precarious, this paper proposes a data-driven multi-fidelity physics-informed framework based on transfer learning principles. The framework incorporates knowledge from low-fidelity auxiliary systems and limited labeled data from the target (actual) system to significantly improve the performance of conventional physics-informed models. While minimizing the effort of designing a complex task-specific network for the problem at hand, the proposed setting guides the physics-informed model towards fast and efficient convergence to a global optimum. An adaptive weighting method further enhances the optimization of the model's composite loss function during training. A data-driven strategy is also introduced to maintain high performance in subdomains where low- and high-fidelity behaviours diverge significantly. The heat transfer of composite materials undergoing a cure cycle is investigated as a case study to demonstrate the proposed framework's performance compared to conventional physics-informed models.
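A high-level sketch of such a transfer-learning workflow might look as follows (illustrative, not the authors' implementation): pretrain on the low-fidelity auxiliary loss, then fine-tune only the later layers on the scarce high-fidelity data. Here `lf_loss` and `hf_loss` are assumed user-supplied physics-informed losses, and freezing by parameter index is a crude stand-in for a proper layer-freezing policy.

```python
import torch

def pretrain_low_fidelity(model, lf_loss, optimizer, steps=5000):
    """Learn cheap low-fidelity physics first."""
    for _ in range(steps):
        optimizer.zero_grad()
        lf_loss(model).backward()
        optimizer.step()
    return model

def finetune_high_fidelity(model, hf_loss, steps=2000, lr=1e-4,
                           frozen_params=2):
    # Freeze the earliest parameter tensors (e.g. the first linear layer's
    # weight and bias) and reuse their low-fidelity features.
    for i, p in enumerate(model.parameters()):
        p.requires_grad = i >= frozen_params
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        hf_loss(model).backward()
        opt.step()
    return model
```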

We propose the use of physics-informed neural networks for solving the shallow-water equations on the sphere in the meteorological context. Physics-informed neural networks are trained to satisfy the differential equations along with the prescribed initial and boundary data, and thus can be seen as an alternative approach to solving differential equations compared to traditional numerical approaches such as finite difference, finite volume or spectral methods. We discuss the training difficulties of physics-informed neural networks for the shallow-water equations on the sphere and propose a simple multi-model approach to tackle test cases of comparatively long time intervals. Here we train a sequence of neural networks instead of a single neural network for the entire integration interval. We also avoid the use of a boundary value loss by encoding the boundary conditions in a custom neural network layer. We illustrate the abilities of the method by solving the most prominent test cases proposed by Williamson et al. [J. Comput. Phys. 102 (1992), 211-224].
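The multi-model approach can be sketched as follows, with `make_model` and `train_window` standing in for the user's PINN construction and per-window training routine; each window inherits its initial condition from the previously trained network.

```python
def train_sequence(make_model, train_window, t0, t1, n_windows, u_init):
    """Train one PINN per time window; chain initial conditions forward."""
    models, windows = [], []
    dt = (t1 - t0) / n_windows
    ic = u_init                                # initial condition at t0
    for k in range(n_windows):
        ta, tb = t0 + k * dt, t0 + (k + 1) * dt
        model = train_window(make_model(), ta, tb, ic)
        models.append(model)
        windows.append((ta, tb))
        # Next window's initial condition: previous network evaluated at tb.
        ic = lambda x, m=model, t=tb: m(x, t)
    return models, windows
```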

In this paper, we study the long-time convergence and uniform strong propagation of chaos for a class of nonlinear Markov chains for Markov chain Monte Carlo (MCMC). Our technique is quite simple, making use of recent contraction estimates for linear Markov kernels and basic techniques from Markov theory and analysis. Moreover, the same proof strategy applies to both the long-time convergence and propagation of chaos. We also show, via some experiments, that these nonlinear MCMC techniques are viable for use in real-world high-dimensional inference such as Bayesian neural networks.
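As a toy illustration of a nonlinear Markov chain (our own construction, not the paper's): an ensemble of particles performs random-walk Metropolis moves whose proposal covariance is estimated from the current empirical measure, so the transition kernel depends on the (approximate) law of the chain. `log_target` is assumed vectorized over rows of the particle array.

```python
import numpy as np

def nonlinear_rwm(log_target, x, n_steps=1000, rng=np.random.default_rng(0)):
    """x: (n_particles, dim) array; returns the evolved ensemble."""
    n, d = x.shape
    for _ in range(n_steps):
        # Empirical-measure feedback: proposal scale adapts to the ensemble.
        cov = np.cov(x.T) + 1e-6 * np.eye(d)
        prop = x + rng.multivariate_normal(
            np.zeros(d), 2.38**2 / d * cov, size=n)
        log_a = log_target(prop) - log_target(x)
        accept = np.log(rng.random(n)) < log_a
        x[accept] = prop[accept]
    return x
```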

A large number of current machine learning methods rely upon deep neural networks. Yet, viewing neural networks as nonlinear dynamical systems, it quickly becomes apparent that establishing, with mathematical rigour, certain patterns generated by the nodes in the network is extremely difficult. Indeed, it is well understood in the nonlinear dynamics of complex systems that, even in low-dimensional models, analytical techniques rooted in pencil-and-paper approaches quickly reach their limits. In this work, we propose a completely different perspective via the paradigm of rigorous numerical methods of nonlinear dynamics. The idea is to use computer-assisted proofs to validate mathematically the existence of nonlinear patterns in neural networks. As a case study, we consider a class of recurrent neural networks, where we prove via computer assistance the existence of several hundred Hopf bifurcation points, their non-degeneracy, and hence also the existence of several hundred periodic orbits. Our paradigm has the capability to rigorously verify complex nonlinear behaviour of neural networks, providing a first step towards explaining the full abilities, as well as potential sensitivities, of machine learning methods via computer-assisted proofs.
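The first, non-rigorous step behind such results can be illustrated as follows: numerically locating Hopf bifurcation candidates in a Hopfield-type network x' = -x + W tanh(b x) as the gain b varies. The rigorous validation with interval arithmetic that the paper performs is well beyond this sketch, which only flags where the leading eigenvalue pair crosses the imaginary axis.

```python
import numpy as np
from scipy.optimize import fsolve

def hopf_candidates(W, betas):
    """Gains b at which the leading Jacobian eigenvalue crosses Re = 0
    with nonzero frequency (a necessary condition for a Hopf point)."""
    d = W.shape[0]
    prev, out = None, []
    for b in betas:
        xeq = fsolve(lambda x: -x + W @ np.tanh(b * x), np.zeros(d))
        # Jacobian of -x + W tanh(b x): column j scaled by b sech^2(b x_j).
        J = -np.eye(d) + W * (b / np.cosh(b * xeq)**2)
        lead = max(np.linalg.eigvals(J), key=lambda z: z.real)
        if (prev is not None and prev.real < 0 <= lead.real
                and abs(lead.imag) > 1e-8):
            out.append(b)
        prev = lead
    return out

# Example: a random recurrent weight matrix, gain swept over [0.1, 5].
rng = np.random.default_rng(1)
W = rng.normal(scale=1.5, size=(4, 4))
print(hopf_candidates(W, np.linspace(0.1, 5.0, 100)))
```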

Learning data representations under uncertainty is an important task that emerges in numerous machine learning applications. However, uncertainty quantification (UQ) techniques are computationally intensive and become prohibitively expensive for high-dimensional data. In this paper, we present a novel surrogate model for representation learning and uncertainty quantification, which aims to deal with data of moderate to high dimensions. The proposed model combines a neural network approach for dimensionality reduction of the (potentially high-dimensional) data, with a surrogate model method for learning the data distribution. We first employ a variational autoencoder (VAE) to learn a low-dimensional representation of the data distribution. We then propose to harness polynomial chaos expansion (PCE) formulation to map this distribution to the output target. The coefficients of PCE are learned from the distribution representation of the training data using a maximum mean discrepancy (MMD) approach. Our model enables us to (a) learn a representation of the data, (b) estimate uncertainty in the high-dimensional data system, and (c) match high order moments of the output distribution; without any prior statistical assumptions on the data. Numerical experimental results are presented to illustrate the performance of the proposed method.
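A compact sketch of the latent-PCE pipeline might look as follows, assuming a trained VAE encoder has already produced latent samples `z_latent`. The probabilists' Hermite basis, expansion order, and Gaussian-kernel bandwidth are illustrative choices, not the paper's exact configuration.

```python
import torch

def hermite_features(z, order=3):
    """Probabilists' Hermite polynomials He_0..He_order per latent dim,
    via the recurrence He_{n+1}(z) = z He_n(z) - n He_{n-1}(z)."""
    feats = [torch.ones_like(z), z]
    for n in range(1, order):
        feats.append(z * feats[-1] - n * feats[-2])
    return torch.cat(feats, dim=-1)

def mmd2(x, y, bw=1.0):
    """Biased squared MMD with a Gaussian kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b)**2 / (2 * bw**2))
    return k(x, x).mean() - 2 * k(x, y).mean() + k(y, y).mean()

def fit_pce(z_latent, y_obs, order=3, steps=2000, lr=1e-2):
    """Fit PCE coefficients by matching output distributions under MMD."""
    phi = hermite_features(z_latent, order)          # (N, d * (order + 1))
    coef = torch.zeros(phi.shape[1], y_obs.shape[1], requires_grad=True)
    opt = torch.optim.Adam([coef], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = mmd2(phi @ coef, y_obs)
        loss.backward()
        opt.step()
    return coef
```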

Graphs, which describe pairwise relations between objects, are essential representations of many real-world data such as social networks. In recent years, graph neural networks, which extend neural network models to graph data, have attracted increasing attention. Graph neural networks have been applied to advance many graph-related tasks, such as reasoning about the dynamics of physical systems, graph classification, and node classification. Most existing graph neural network models have been designed for static graphs, while many real-world graphs are inherently dynamic. For example, social networks naturally evolve as new users join and new relations are created. Current graph neural network models cannot utilize the dynamic information in dynamic graphs. However, dynamic information has been proven to enhance the performance of many graph analytical tasks, such as community detection and link prediction. Hence, it is necessary to design dedicated graph neural networks for dynamic graphs. In this paper, we propose DGNN, a new Dynamic Graph Neural Network model, which can model the dynamic information as the graph evolves. In particular, the proposed framework keeps node information up to date by coherently capturing the sequential information of edges, the time intervals between edges, and information propagation. Experimental results on various dynamic graphs demonstrate the effectiveness of the proposed framework.
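A much-simplified, forward-only sketch can illustrate the core update: edges are processed in time order, and both endpoints' embeddings are refreshed by a recurrent cell after discounting the stale state by the elapsed time. The exponential decay form and dimensions are illustrative, not DGNN's exact architecture.

```python
import torch
import torch.nn as nn

class DynamicNodeUpdater(nn.Module):
    """Forward-only sketch: h is an (N, dim) embedding table and last_t an
    (N,) float tensor of each node's last interaction time."""
    def __init__(self, dim=32):
        super().__init__()
        self.cell = nn.GRUCell(2 * dim, dim)

    def forward(self, h, last_t, edges):
        for src, dst, t in edges:          # edges sorted by increasing time t
            for node, other in ((src, dst), (dst, src)):
                # Discount the stale state by the time since the last event.
                decay = torch.exp(-(t - last_t[node]).clamp(min=0.0))
                msg = torch.cat([h[other], decay * h[node]])
                h[node] = self.cell(msg.unsqueeze(0),
                                    h[node].unsqueeze(0)).squeeze(0)
                last_t[node] = t
        return h
```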

We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
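The continuous-depth idea can be sketched with a fixed-step RK4 integrator, backpropagating directly through the solver steps for simplicity; the paper's memory-efficient adjoint method is available in libraries such as torchdiffeq. The layer widths below are illustrative.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Parameterizes the hidden-state derivative dh/dt = f_theta(t, h)."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

def odeint_rk4(f, h0, t0=0.0, t1=1.0, n_steps=20):
    """Classic fixed-step RK4; gradients flow through the solver steps."""
    h, t = h0, t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        k1 = f(t, h)
        k2 = f(t + dt / 2, h + dt / 2 * k1)
        k3 = f(t + dt / 2, h + dt / 2 * k2)
        k4 = f(t + dt, h + dt * k3)
        h = h + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t = t + dt
    return h
```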
