
We propose the use of physics-informed neural networks for solving the shallow-water equations on the sphere in the meteorological context. Physics-informed neural networks are trained to satisfy the differential equations along with the prescribed initial and boundary data, and thus can be seen as an alternative approach to solving differential equations compared to traditional numerical approaches such as finite difference, finite volume or spectral methods. We discuss the training difficulties of physics-informed neural networks for the shallow-water equations on the sphere and propose a simple multi-model approach to tackle test cases of comparatively long time intervals. Here we train a sequence of neural networks instead of a single neural network for the entire integration interval. We also avoid the use of a boundary value loss by encoding the boundary conditions in a custom neural network layer. We illustrate the abilities of the method by solving the most prominent test cases proposed by Williamson et al. [J. Comput. Phys. 102 (1992), 211-224].
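
The sequential multi-model idea can be sketched on a toy problem. As a stand-in for the neural networks trained in the paper, each segment model below is a small polynomial fitted by least squares to the equation residual; what is illustrated is the chaining of initial conditions between segments (the model problem u' = -u and all numerical choices are ours, not the paper's).

```python
import numpy as np

# Multi-model time marching: instead of one model for the whole interval,
# fit a sequence of models on short subintervals, each inheriting its
# initial condition from the previous segment's endpoint.
# Model problem: u'(t) = -u(t), u(0) = 1, exact solution u(t) = exp(-t).

def fit_segment(t0, t1, u0, degree=6, n_col=40):
    """Fit u(t) = u0 + sum_k c_k (t - t0)^k so that u' + u ~ 0 at collocation points."""
    t = np.linspace(t0, t1, n_col)
    s = t - t0
    # the residual of u' + u = 0 is linear in the coefficients c_k:
    #   sum_k c_k (k s^{k-1} + s^k) = -u0
    A = np.stack([k * s**(k - 1) + s**k for k in range(1, degree + 1)], axis=1)
    b = -u0 * np.ones(n_col)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return lambda tt: u0 + sum(ck * (tt - t0)**k for k, ck in enumerate(c, start=1))

# Chain four "models" over [0, 2]; each takes over where the previous one stopped.
edges = np.linspace(0.0, 2.0, 5)
u0, models = 1.0, []
for t0, t1 in zip(edges[:-1], edges[1:]):
    u = fit_segment(t0, t1, u0)
    models.append((t0, t1, u))
    u0 = u(t1)                      # hand the state to the next segment

print(abs(u0 - np.exp(-2.0)))       # small error against the exact solution
```

The handoff `u0 = u(t1)` is the essential step: each model only needs to be accurate on its short subinterval, which is what makes long integration intervals tractable.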

Related content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This uniquely broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, or engineering and applications. Official website:

In this paper we present a deep learning method to predict the temporal evolution of dissipative dynamic systems. We propose using both geometric and thermodynamic inductive biases to improve the accuracy and generalization of the resulting integration scheme. The first is achieved with Graph Neural Networks, which induce a non-Euclidean geometric prior with permutation-invariant node and edge update functions. The second bias is imposed by learning the GENERIC structure of the problem, an extension of the Hamiltonian formalism, to model more general non-conservative dynamics. Several examples are provided in both the Eulerian and Lagrangian descriptions, in the context of fluid and solid mechanics respectively, achieving relative mean errors of less than 3% in all tested examples. Two ablation studies are provided, based on recent works in both physics-informed and geometric deep learning.
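
The GENERIC structure mentioned above can be made concrete in a few lines: the evolution is modelled as dz/dt = L ∇E + M ∇S, with L skew-symmetric (reversible part) and M symmetric positive semi-definite (irreversible part). Parameterizing L and M so that these properties hold by construction is a common trick in structure-preserving learning; whether the paper uses exactly this parameterization is an assumption, and the gradients below are random stand-ins for learned quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Structure enforced by construction:
A = rng.standard_normal((n, n))
L = A - A.T                      # skew-symmetric (reversible operator)
B = rng.standard_normal((n, n))
M = B @ B.T                      # symmetric positive semi-definite (dissipative)

gradE = rng.standard_normal(n)   # stand-in for a learned energy gradient
gradS = rng.standard_normal(n)   # stand-in for a learned entropy gradient

dz = L @ gradE + M @ gradS       # GENERIC evolution dz/dt

# Structural guarantees, independent of the learned values:
print(abs(gradE @ (L @ gradE)))  # ~0: the reversible part never changes E
print(gradS @ (M @ gradS) >= 0)  # True: the irreversible part never decreases S
```

The point of the bias is exactly these two inequalities: they hold for any learned gradients, so the network cannot violate the first and second laws even when extrapolating.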

We extend the Deep Galerkin Method (DGM) introduced in Sirignano and Spiliopoulos (2018) to solve a number of partial differential equations (PDEs) that arise in the context of optimal stochastic control and mean field games. First, we consider PDEs where the solution is constrained to be positive and integrate to unity, as is the case with Fokker-Planck equations. Our approach involves reparameterizing the solution as the exponential of a neural network, appropriately normalized to ensure both requirements are satisfied. This gives rise to a nonlinear partial integro-differential equation (PIDE) in which the integral appearing in the equation is handled by a novel application of importance sampling. Second, we tackle a number of Hamilton-Jacobi-Bellman (HJB) equations that appear in stochastic optimal control problems. The key contribution is that these equations are approached in their unsimplified primal form, which includes an optimization problem as part of the equation. We extend the DGM algorithm to solve for the value function and the optimal control simultaneously by characterizing both as deep neural networks. Training the networks is performed by taking alternating stochastic gradient descent steps for the two functions, a technique inspired by policy improvement algorithms (PIA).
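
The exponential reparameterization described above can be sketched numerically: write p(x) = exp(f(x)) / Z, which is positive by construction, and estimate the normalizing constant Z = ∫ exp(f) dx by importance sampling under a Gaussian proposal. Here f is a fixed closed-form stand-in for the neural network, and the one-dimensional setting and proposal choice are our own simplifications.

```python
import numpy as np

def f(x):                        # stand-in for the network output
    return -0.5 * x**2 + 0.3 * np.sin(x)

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)                # samples from the proposal q = N(0, 1)
q = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)    # proposal density at the samples
Z = np.mean(np.exp(f(x)) / q)                   # Monte Carlo estimate of ∫ exp(f) dx

def p(x):
    return np.exp(f(x)) / Z                     # positive and integrates to ~1

# Sanity check: trapezoidal integral of p over a wide grid is close to 1.
g = np.linspace(-8, 8, 4001)
pg = p(g)
total = np.sum(0.5 * (pg[:-1] + pg[1:]) * np.diff(g))
print(total)
```

Both constraints are thus satisfied automatically: positivity comes from the exponential, and normalization from dividing by the sampled estimate of Z, so neither needs a penalty term in the loss.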

This paper makes the first attempt to apply the newly developed upwind GFDM to the meshless solution of two-phase porous flow equations. In the presented method, a node cloud is used to flexibly discretize the computational domain, instead of complicated mesh generation. Combining moving least squares approximation with local Taylor expansion, spatial derivatives of the oil-phase pressure at a node are approximated by generalized difference operators in the local influence domain of the node. By introducing a first-order upwind scheme for the phase relative permeability and incorporating the discretized boundary conditions, fully implicit GFDM-based nonlinear discrete equations of the immiscible two-phase porous flow are obtained. These are solved by a nonlinear solver based on the Newton iteration method with automatic differentiation, to avoid the additional computational cost and possible computational instability caused by a sequentially coupled scheme. Two numerical examples are implemented to test the computational performance of the presented method. A detailed error analysis identifies two sources of the calculation error and roughly studies the convergence order, finding that the low-order error of GFDM makes its convergence order lower than that of FDM when the node spacing is small. It also points out the significant effect of the symmetry or uniformity of the node collocation in the node influence domain on the accuracy of the generalized difference operators, and that the radius of the node influence domain should be small to achieve high calculation accuracy; this is a significant difference between the studied hyperbolic two-phase porous flow problem and the elliptic problems to which GFDM is usually applied.
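
The upwind idea referenced above, reduced to its simplest setting: for a transport equation, the spatial derivative is taken from the upstream side, which keeps the saturation-like unknown monotone and free of spurious oscillations. A regular 1-D grid stands in for the meshless node cloud here; the assumption is only that the paper applies the same upstream weighting inside its generalized difference operators.

```python
import numpy as np

# Advection u_t + a u_x = 0 with first-order upwind differencing, a > 0.
nx = 200
dx, dt, a = 1.0 / nx, 0.002, 1.0                       # CFL = a*dt/dx = 0.4
u = np.where(np.linspace(0, 1, nx) < 0.3, 1.0, 0.0)    # step initial data

for _ in range(150):
    # a > 0: information travels rightward, so difference against the left node
    u[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])

print(u.min(), u.max())   # the profile stays inside [0, 1]: no oscillations
```

A centered difference on the same problem would produce over- and undershoots around the front; the one-sided stencil is what buys the physical bounds, at the price of first-order accuracy.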

There has been a rising trend of adopting deep learning methods to study partial differential equations (PDEs). This article proposes a Deep Learning Galerkin Method (DGM) for the closed-loop geothermal system, a new coupled multi-physics PDE system that mainly consists of a framework of underground heat-exchange pipelines to extract geothermal heat from the geothermal reservoir. This method is a natural combination of the Galerkin method and machine learning, with the solution approximated by a neural network instead of a linear combination of basis functions. We train the neural network by randomly sampling spatiotemporal points and minimizing a loss function that enforces the differential operators, initial condition, and boundary and interface conditions. Moreover, the approximation ability of the neural network is proved via the convergence of the loss function and the convergence of the neural network to the exact solution in the L^2 norm under certain conditions. Finally, some numerical examples are carried out to demonstrate the approximation ability of the neural networks intuitively.
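
The DGM-style loss described above can be sketched directly: sample random space-time points and penalize the PDE residual, the initial condition, and the boundary condition. Two simplifications are made here, and both are assumptions: derivatives are taken by central differences rather than the automatic differentiation a real implementation would use, and the candidate "network" is a closed-form function, so no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(2)

def dgm_loss(u, n=2000, h=1e-4):
    """Sampled loss for the heat equation u_t = u_xx on (0,1) x (0,1)."""
    x, t = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
    u_t  = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    residual = np.mean((u_t - u_xx)**2)                   # interior PDE residual
    ic = np.mean((u(x, 0 * t) - np.sin(np.pi * x))**2)    # u(x, 0) = sin(pi x)
    bc = np.mean(u(0 * x, t)**2 + u(0 * x + 1, t)**2)     # u = 0 at x = 0 and x = 1
    return residual + ic + bc

exact = lambda x, t: np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
wrong = lambda x, t: np.sin(np.pi * x) * (1 - t)

print(dgm_loss(exact))   # near zero: the exact solution satisfies every term
print(dgm_loss(wrong))   # clearly larger: the residual term is violated
```

Training would then adjust network parameters to drive this composite loss toward zero; the convergence results in the abstract concern exactly this quantity.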

Graph Neural Networks (GNNs), neural network architectures targeted to learning representations of graphs, have become a popular learning model for prediction tasks on nodes, graphs and configurations of points, with wide success in practice. This article summarizes a selection of the emerging theoretical results on approximation and learning properties of widely used message passing GNNs and higher-order GNNs, focusing on representation, generalization and extrapolation. Along the way, it summarizes mathematical connections.

One of the main challenges in solving time-dependent partial differential equations is to develop computationally efficient solvers that are accurate and stable. Here, we introduce a graph neural network approach to finding efficient PDE solvers through learning with message-passing models. We first introduce domain-invariant features for PDE data, inspired by classical PDE solvers, for an efficient physical representation. Next, we use graphs to represent PDE data on an unstructured mesh and show that message-passing graph neural networks (MPGNN) can parameterize governing equations and, as a result, efficiently learn accurate solver schemes for linear and nonlinear PDEs. We further show that the solvers are independent of the initial training geometry, i.e. the trained solver can find PDE solutions on different complex domains. Lastly, we show that a recurrent graph neural network approach can find a temporal sequence of solutions to a PDE.
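
A bare-bones message-passing step of the kind such MPGNNs build on: each node aggregates messages from its neighbours with a permutation-invariant sum, then updates its own feature from the aggregate. The weight matrices are random stand-ins for learned parameters; real models add edge features, multiple layers, and richer update functions (all assumptions on our part).

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, d = 5, 4
h = rng.standard_normal((n_nodes, d))                        # node features
edges = np.array([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])   # directed src -> dst

W_msg = rng.standard_normal((d, d))       # message function parameters
W_upd = rng.standard_normal((2 * d, d))   # update function parameters

def mp_step(h, edges):
    msg = np.zeros_like(h)
    for s, t in edges:                    # sum incoming messages at each target
        msg[t] += np.tanh(h[s] @ W_msg)
    # update each node from its own state and the aggregated messages
    return np.tanh(np.concatenate([h, msg], axis=1) @ W_upd)

h_new = mp_step(h, edges)
print(h_new.shape)   # (5, 4): one updated feature vector per node
```

Because the aggregation is a sum, the result is independent of the order in which edges are visited; this permutation invariance is what lets the same trained weights transfer to meshes over different geometries.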

We design a helicity-conservative physics-informed neural network model for the Navier-Stokes equations in the ideal case. The key is to provide an appropriate PDE model as the loss function so that its neural network solutions conserve helicity. The physics-informed neural network model is based on the strong form of the PDE. We show that the relevant helicity-conservative finite element method, based on the weak formulation of the PDE, can be somewhat different. More precisely, we compare the PINN formulation with the finite element method based on the weak formulation for conserving helicity, and argue that for this conservation the strong form of the PDE is more natural. Our result is also justified by theory. Furthermore, a couple of numerical calculations are presented to confirm our theoretical findings.
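
Helicity, the conserved quantity at issue above, is H = ∫ u · ω dx with ω = ∇ × u. As a small numerical illustration, we evaluate the curl by periodic central differences for the ABC flow with A = B = C = 1, which is a Beltrami field (ω = u), so the helicity density u · ω should equal |u|². This illustrates the quantity itself, not the paper's PINN; the grid and finite-difference details are our own.

```python
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

u = np.sin(Z) + np.cos(Y)        # ABC flow, A = B = C = 1
v = np.sin(X) + np.cos(Z)
w = np.sin(Y) + np.cos(X)

def d(f, axis):                  # periodic central difference; axes: 0=x, 1=y, 2=z
    h = 2 * np.pi / n
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

wx = d(w, 1) - d(v, 2)           # components of the vorticity curl(u)
wy = d(u, 2) - d(w, 0)
wz = d(v, 0) - d(u, 1)

H = np.mean(u * wx + v * wy + w * wz)       # helicity per unit volume
print(H, np.mean(u**2 + v**2 + w**2))       # Beltrami field: the two agree
```

The small gap between the two printed values is pure second-order differencing error; it is this kind of discrete helicity that a conservative scheme is required to keep constant in time.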

Convection-diffusion-reaction equations model the conservation of scalar quantities. From the analytic point of view, solutions of these equations satisfy, under certain conditions, maximum principles, which represent physical bounds of the solution. That the same bounds are respected by numerical approximations of the solution is often of utmost importance in practice. The mathematical formulation of this property, which contributes to the physical consistency of a method, is called the Discrete Maximum Principle (DMP). In many applications, convection dominates diffusion by several orders of magnitude. It is well known that standard discretizations typically do not satisfy the DMP in this convection-dominated regime. In fact, in this case it turns out to be a challenging problem to construct discretizations that, on the one hand, respect the DMP and, on the other hand, compute accurate solutions. This paper presents a survey of finite element methods, with a main focus on the convection-dominated regime, that satisfy a local or a global DMP. The concepts of the underlying numerical analysis are discussed. The survey reveals that for the steady-state problem there are only a few discretizations, all of them nonlinear, that at the same time satisfy the DMP and compute reasonably accurate solutions, e.g., algebraically stabilized schemes. Moreover, most of these discretizations have been developed in recent years, showing the enormous progress that has been achieved lately. Methods based on algebraic stabilization, both nonlinear and linear, are currently also the only finite element methods that combine satisfaction of the global DMP with accurate numerical results for the evolutionary equations in the convection-dominated situation.
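
The DMP failure in the convection-dominated regime has a textbook 1-D illustration: for -eps u'' + b u' = 0 on (0,1) with u(0) = 0, u(1) = 1, the exact solution lies in [0,1], but central differencing violates these bounds once the mesh Peclet number b h / (2 eps) exceeds 1, while first-order upwinding preserves them. This classical finite-difference example is ours; the paper surveys the finite element analogues.

```python
import numpy as np

def solve(eps, b, n, upwind):
    """Tridiagonal solve of -eps u'' + b u' = 0, u(0)=0, u(1)=1, b > 0."""
    h = 1.0 / n
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    if upwind:   # backward difference for u': an M-matrix, hence DMP holds
        lo, di, up = -eps / h**2 - b / h, 2 * eps / h**2 + b / h, -eps / h**2
    else:        # central difference for u': loses the sign pattern for large Peclet
        lo, di, up = (-eps / h**2 - b / (2 * h), 2 * eps / h**2,
                      -eps / h**2 + b / (2 * h))
    for i in range(n - 1):
        A[i, i] = di
        if i > 0:
            A[i, i - 1] = lo
        if i < n - 2:
            A[i, i + 1] = up
    rhs[n - 2] = -up * 1.0          # contribution of the boundary value u(1) = 1
    return np.linalg.solve(A, rhs)

u_c = solve(eps=1e-3, b=1.0, n=20, upwind=False)    # mesh Peclet number = 25
u_u = solve(eps=1e-3, b=1.0, n=20, upwind=True)
print(u_c.min() < 0)                        # True: central scheme breaks the bound
print(0 <= u_u.min() and u_u.max() <= 1)    # True: upwind respects the DMP
```

The upwind solution respects the bounds but smears the boundary layer; the tension between these two outcomes is exactly the accuracy-versus-DMP trade-off the survey is about.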

We recall some of the history of the information-theoretic approach to deriving core results in probability theory and indicate parts of the recent resurgence of interest in this area with current progress along several interesting directions. Then we give a new information-theoretic proof of a finite version of de Finetti's classical representation theorem for finite-valued random variables. We derive an upper bound on the relative entropy between the distribution of the first $k$ in a sequence of $n$ exchangeable random variables, and an appropriate mixture over product distributions. The mixing measure is characterised as the law of the empirical measure of the original sequence, and de Finetti's result is recovered as a corollary. The proof is nicely motivated by the Gibbs conditioning principle in connection with statistical mechanics, and it follows along an appealing sequence of steps. The technical estimates required for these steps are obtained via the use of a collection of combinatorial tools known within information theory as `the method of types.'
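
A finite instance of the bound discussed above can be computed exactly. Take n labelled balls, m marked 1 and the rest 0, drawn in uniformly random order: an exchangeable sequence whose empirical measure is deterministic, so the mixture over product distributions reduces to the single product Bernoulli(m/n)^k. The relative entropy between the law of the first k draws and this product is nonnegative and small for k much less than n. The parameters below are our own illustrative choice.

```python
from math import comb, log

def falling(a, j):
    """Falling factorial a * (a-1) * ... * (a-j+1)."""
    out = 1
    for i in range(j):
        out *= a - i
    return out

n, m, k = 100, 40, 5
p = m / n

# Both laws are exchangeable, so the relative entropy can be computed over
# the number j of ones among the first k draws.
D = 0.0
for j in range(k + 1):
    P = comb(k, j) * falling(m, j) * falling(n - m, k - j) / falling(n, k)
    Q = comb(k, j) * p**j * (1 - p)**(k - j)
    if P > 0:
        D += P * log(P / Q)

print(D)   # nonnegative, and small since k << n
```

As k approaches n the two laws separate (sampling without replacement is not i.i.d.), which is why the general theorem quantifies the gap rather than asserting equality.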

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
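
The "two sides of the same coin" point can be made concrete in a few lines: a residual network with layers h ← h + dt · f(h) is the explicit-Euler discretisation of the neural ODE dh/dt = f(h). Below, f is a fixed random single-layer map standing in for a trained vector field (an assumption; any trained network would do), and refining the step size shows both views converge to the same flow.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
W = 0.1 * rng.standard_normal((d, d))
f = lambda h: np.tanh(h @ W)            # the "learned" vector field

def resnet_forward(h, n_layers, dt):
    for _ in range(n_layers):           # each residual block = one Euler step
        h = h + dt * f(h)
    return h

h0 = rng.standard_normal(d)
coarse = resnet_forward(h0, 10, 0.1)      # a 10-block residual network
fine   = resnet_forward(h0, 1000, 0.001)  # the same flow, finely discretised
print(np.linalg.norm(coarse - fine))      # small: both approximate h(t = 1)
```

Read one way, depth is just integration time; read the other way, an ODE solver is a residual network with tied weights, which is the observation the NDE literature builds on.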
