
Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments. This paper derives complexity estimates of the solutions of $d$-dimensional second-order elliptic PDEs in the Barron space, that is, the set of functions admitting an integral representation of certain parametric ridge functions against a probability measure on the parameters. We prove under some appropriate assumptions that if the coefficients and the source term of the elliptic PDE lie in Barron spaces, then the solution of the PDE is $\epsilon$-close with respect to the $H^1$ norm to a Barron function. Moreover, we prove dimension-explicit bounds for the Barron norm of this approximate solution, depending at most polynomially on the dimension $d$ of the PDE. As a direct consequence of the complexity estimates, the solution of the PDE can be approximated on any bounded domain by a two-layer neural network with respect to the $H^1$ norm with a dimension-explicit convergence rate.
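
For readers less familiar with the Barron setting, the sketch below (Python, not taken from the paper) illustrates the basic object: a two-layer ReLU network read as a Monte Carlo discretization of an integral of ridge functions against a parameter distribution. The distribution, activation, and sizes here are hypothetical placeholders.

```python
import numpy as np

# Minimal sketch (not the paper's construction): a two-layer ReLU network
# viewed as a Monte Carlo discretization of a Barron-type integral
#   f(x) = E_{(a,w,b)~rho}[ a * relu(w.x + b) ].
rng = np.random.default_rng(0)
d, m = 10, 500                      # input dimension, number of neurons

# Hypothetical parameter measure rho: here simply standard Gaussians.
a = rng.standard_normal(m)
W = rng.standard_normal((m, d))
b = rng.standard_normal(m)

def two_layer(x):
    """f_m(x) = (1/m) * sum_k a_k * relu(w_k . x + b_k)."""
    pre = W @ x + b
    return np.mean(a * np.maximum(pre, 0.0))

x = rng.standard_normal(d)
print(two_layer(x))
```

Roughly speaking, the Barron norm controls how quickly such finite averages approximate the underlying integral, which is what underlies dimension-explicit approximation rates of the kind stated above.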

Related content

Entropy-conserving numerical fluxes are a cornerstone of modern high-order entropy-dissipative discretizations of conservation laws. In addition to entropy conservation, other structural properties mimicking the continuous level, such as pressure equilibrium and kinetic energy preservation, are important. This note proves that there are no numerical fluxes conserving (one of) Harten's entropies for the compressible Euler equations that also preserve pressure equilibria and have a density flux independent of the pressure. This is in contrast to fluxes based on the physical entropy, where even kinetic energy preservation can be achieved in addition.
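
As context for what "entropy-conserving numerical flux" means here, the display below records Tadmor's classical two-point entropy-conservation condition in common notation; the note's precise setting (Harten's entropy family for the compressible Euler equations) may use different symbols.

```latex
% Tadmor's entropy-conservation condition for a two-point numerical flux
% f*(u_L, u_R), written in common notation: v = U'(u) are the entropy
% variables and psi = v^T f(u) - F(u) is the flux potential of the entropy
% pair (U, F). The note's precise Euler/Harten setting may differ in detail.
(v_R - v_L)^{\top} f^{*}(u_L, u_R) \,=\, \psi_R - \psi_L .
```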

The filtering equations govern the evolution of the conditional distribution of a signal process given partial, and possibly noisy, observations arriving sequentially in time. Their numerical approximation plays a central role in many real-life applications, including numerical weather prediction, finance and engineering. One of the classical approaches to approximate the solution of the filtering equations is to use a PDE-inspired method, called the splitting-up method, initiated by Gyongy, Krylov, LeGland, among other contributors. This method, and other PDE-based approaches, have particular applicability for solving low-dimensional problems. In this work we combine this method with a neural network representation. The new methodology is used to produce an approximation of the unnormalised conditional distribution of the signal process. We further develop a recursive normalisation procedure to recover the normalised conditional distribution of the signal process. The new scheme can be iterated over multiple time steps whilst keeping its asymptotic unbiasedness property intact. We test the neural network approximations with numerical results for the Kalman and Benes filters.
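
As a rough illustration of the splitting-up structure (prediction by the signal semigroup, multiplication by the observation likelihood, then normalisation), the Python sketch below performs a single step on a 1D grid; the signal model, observation, and crude finite-difference semigroup are hypothetical stand-ins for the neural-network representation used in the paper.

```python
import numpy as np

# Rough sketch of one splitting-up step for a 1D filtering problem
# (illustrative only; the paper replaces the grid density with a neural
# network representation and proves asymptotic unbiasedness of the recursion).
# Signal: dX = -X dt + dW;  observation: Y_k = X_{t_k} + noise.
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # prior density on the grid
dt, obs, obs_std = 0.01, 0.7, 0.5            # hypothetical step, observation

# 1) Prediction: evolve p by the Fokker-Planck semigroup (one explicit Euler
#    step with central differences; a crude stand-in for the exact semigroup).
drift = -x
flux = drift * p
dp = -np.gradient(flux, dx) + 0.5 * np.gradient(np.gradient(p, dx), dx)
p_pred = np.maximum(p + dt * dp, 0.0)

# 2) Correction: multiply by the observation likelihood (unnormalised).
lik = np.exp(-(obs - x) ** 2 / (2 * obs_std ** 2))
p_unnorm = p_pred * lik

# 3) Recursive normalisation: divide by the quadrature of the unnormalised
#    conditional density before starting the next step.
p_post = p_unnorm / np.trapz(p_unnorm, x)
print("posterior mean approx.", np.trapz(x * p_post, x))
```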

In this paper, we propose a semigroup method for solving high-dimensional elliptic partial differential equations (PDEs) and the associated eigenvalue problems based on neural networks. For the PDE problems, we reformulate the original equations as variational problems with the help of semigroup operators and then solve the variational problems with neural network (NN) parameterization. The main advantages are that no mixed second-order derivative computation is needed during the stochastic gradient descent training and that the boundary conditions are taken into account automatically by the semigroup operator. Unlike popular methods such as PINN \cite{raissi2019physics} and Deep Ritz \cite{weinan2018deep}, where the Dirichlet boundary condition is enforced solely through penalty functions, which perturbs the solution being approximated, the proposed method addresses the boundary conditions without penalty functions and, thanks to the semigroup operator, recovers the correct solution even when penalty functions are added. For eigenvalue problems, a primal-dual method is proposed, efficiently resolving the constraint with a simple scalar dual variable and resulting in a faster algorithm than the BSDE solver \cite{han2020solving} on certain problems, such as the eigenvalue problem associated with the linear Schr\"odinger operator. Numerical results are provided to demonstrate the performance of the proposed methods.
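
The following Python sketch is only meant to illustrate why a semigroup reformulation can avoid mixed second-order derivatives: the action of the heat semigroup can be estimated by Monte Carlo sampling rather than by differentiating a network twice. The test function and parameters are hypothetical, and this is not the paper's loss.

```python
import numpy as np

# Sketch of the ingredient that makes semigroup reformulations attractive
# (illustrative, not the paper's method): the heat semigroup satisfies
#   (e^{t*Laplacian/2} u)(x) = E[u(x + sqrt(t) Z)],  Z ~ N(0, I_d),
# so it can be evaluated by sampling, without second derivatives of u.
rng = np.random.default_rng(0)
d, t, n_samples = 20, 0.5, 200_000

def u(x):                       # test function ||x||^2 (stand-in for a network)
    return np.sum(x * x, axis=-1)

x = rng.standard_normal(d)
Z = rng.standard_normal((n_samples, d))
mc = np.mean(u(x + np.sqrt(t) * Z))         # Monte Carlo semigroup action
exact = u(x) + d * t                        # closed form for ||x||^2

print(f"Monte Carlo: {mc:.3f}, exact: {exact:.3f}")
```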

In this work, we determine the full expression for the global truncation error of hyperbolic partial differential equations (PDEs). In particular, we use theoretical analysis and symbolic algebra to find exact expressions for the coefficients of the generic global truncation error. Our analysis is valid for any hyperbolic PDE, whether linear or non-linear, discretized in space with finite difference, finite volume, or finite element methods and advanced in time, within the Method of Lines, by a predictor-corrector, multistep, or deferred correction method. Furthermore, we discuss the practical implications of this analysis. If we employ a stable numerical scheme and the orders of accuracy of the global solution error and the global truncation error agree, we make the following asymptotic observations: (a) the order of convergence at a constant ratio of $\Delta t$ to $\Delta x$ is governed by the minimum of the orders of the spatial and temporal discretizations, and (b) convergence cannot even be guaranteed under only spatial or temporal refinement. An implication of (a) is that it is impractical to invest in a time-stepping method of order higher than that of the spatial discretization. In addition to (b), we demonstrate that under certain circumstances the error can even increase monotonically under refinement in space only or in time only, and we explain why this phenomenon occurs. To verify our theoretical findings, we conduct convergence studies of linear and non-linear advection equations using finite difference and finite volume spatial discretizations, and predictor-corrector and multistep time-stepping methods. Finally, we study the effect of slope limiters and monotonicity-preserving strategies on the order of accuracy.
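
The Python sketch below runs a convergence study in this spirit, for linear advection with first-order upwind in space and a second-order Runge-Kutta method in time at a fixed ratio of $\Delta t$ to $\Delta x$; consistent with observation (a), the observed order should be close to the minimum of the two orders (here, one). The scheme choices and parameters are illustrative, not the paper's test cases.

```python
import numpy as np

# Convergence study at constant dt/dx (fixed CFL) for u_t + u_x = 0 on a
# periodic domain: first-order upwind in space, two-stage RK2 (Heun) in time.
def solve(n, cfl=0.4, T=1.0):
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx = 1.0 / n
    dt = cfl * dx
    u = np.sin(2 * np.pi * x)
    t = 0.0
    def rhs(v):                       # -u_x by first-order upwind (speed a = 1 > 0)
        return -(v - np.roll(v, 1)) / dx
    while t < T - 1e-12:
        step = min(dt, T - t)
        k1 = rhs(u)
        k2 = rhs(u + step * k1)
        u = u + 0.5 * step * (k1 + k2)   # Heun's method (second order in time)
        t += step
    return x, u

errs = []
for n in (64, 128, 256, 512):
    x, u = solve(n)
    errs.append(np.max(np.abs(u - np.sin(2 * np.pi * (x - 1.0)))))
rates = [np.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]
print("observed orders:", [f"{r:.2f}" for r in rates])
```

At a fixed CFL number both error contributions shrink with $\Delta x$, but the first-order spatial term dominates the second-order temporal one, so the observed order settles near one.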

An energy-stable finite element scheme within the arbitrary Lagrangian-Eulerian (ALE) framework is derived for simulating the dynamics of millimetric droplets in contact with solid surfaces. The supporting surfaces considered may exhibit non-homogeneous properties, which are incorporated into the system through generalized Navier boundary conditions (GNBC). The numerical scheme is constructed such that the counterpart of the (continuous) energy balance holds on the discrete level. This ensures that no spurious energy is introduced into the discrete system, i.e. the discrete formulation is stable in the energy norm. The newly proposed scheme is numerically validated to confirm the theoretical predictions. Of particular interest is the case of a droplet on a non-homogeneous inclined surface. This case shows the capability of the scheme to capture complex droplet dynamics (sliding and rolling) while maintaining stability during long-time simulations.
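
For orientation, the display below sketches the kind of continuous energy balance such a scheme is designed to reproduce at the discrete level: kinetic plus surface energies are dissipated by viscosity in the bulk and by friction on the solid surface. The notation and the selection of dissipation terms are schematic and may differ from the paper's GNBC formulation.

```latex
% Schematic continuous energy balance (notation illustrative: u velocity,
% Gamma(t) the liquid-gas interface, Gamma_S(t) the wetted solid, beta the
% Navier friction coefficient; the paper's GNBC setting may include further
% contact-line terms):
\frac{\mathrm{d}}{\mathrm{d}t}\Big( \int_{\Omega(t)} \tfrac{\rho}{2}\,|\mathbf{u}|^{2}\,\mathrm{d}x
  + \sigma\,|\Gamma(t)| + \int_{\Gamma_S(t)} (\sigma_{sl} - \sigma_{sg})\,\mathrm{d}s \Big)
  \,=\, -\int_{\Omega(t)} 2\mu\,|\mathbf{D}(\mathbf{u})|^{2}\,\mathrm{d}x
  \;-\; \int_{\Gamma_S(t)} \beta\,|\mathbf{u}_{\tau}|^{2}\,\mathrm{d}s \;\le\; 0 .
```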

The solutions of time-fractional partial differential equations generally exhibit a weak singularity near the initial time. In this article we propose a method for solving a time-fractional diffusion equation with a nonlocal diffusion term. The proposed method combines the L1 scheme on a graded mesh, the finite element method, and Newton's method. We discuss the well-posedness of the weak formulation at the discrete level and derive \emph{a priori} error estimates for the fully discrete formulation in the $L^2(\Omega)$ and $H^1(\Omega)$ norms. Finally, some numerical experiments are conducted to validate the theoretical findings.
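
The Python sketch below shows the two time-discretization ingredients named above, a graded mesh clustered near $t=0$ and the L1 approximation of a Caputo derivative, checked against a monomial whose fractional derivative is known in closed form; the mesh parameters and test function are illustrative only.

```python
import numpy as np
from math import gamma

# Sketch of the building blocks: a graded mesh t_j = T*(j/N)^r and the L1
# approximation of the Caputo derivative of order alpha at the final time.
# Verified on u(t) = t^2, whose Caputo derivative is 2/Gamma(3-alpha)*t^(2-alpha).
alpha, T, N, r = 0.5, 1.0, 200, 2.0          # r > 1 grades the mesh toward t = 0
t = T * (np.arange(N + 1) / N) ** r          # graded mesh
u = t ** 2

def caputo_L1(u, t, alpha, n):
    """L1 approximation of the Caputo derivative (0 < alpha < 1) at t[n]."""
    acc = 0.0
    for j in range(1, n + 1):
        slope = (u[j] - u[j - 1]) / (t[j] - t[j - 1])
        acc += slope * ((t[n] - t[j - 1]) ** (1 - alpha) - (t[n] - t[j]) ** (1 - alpha))
    return acc / ((1 - alpha) * gamma(1 - alpha))

approx = caputo_L1(u, t, alpha, N)
exact = 2.0 / gamma(3 - alpha) * T ** (2 - alpha)
print(f"L1 approximation: {approx:.6f}, exact: {exact:.6f}")
```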

The neural network-based approach to solving partial differential equations has attracted considerable attention due to its simplicity and flexibility in representing the solution of the partial differential equation. In training a neural network, the network learns global features corresponding to low-frequency components, while high-frequency components are approximated at a much slower rate. For a class of equations whose solutions contain a wide range of scales, the network training process can suffer from slow convergence and low accuracy due to its inability to capture the high-frequency components. In this work, we propose a hierarchical approach to improve the convergence rate and accuracy of the neural network solution to partial differential equations. The proposed method comprises multiple training levels, in which a newly introduced neural network is guided to learn the residual of the previous level's approximation. By the nature of the neural network training process, the high-level corrections are inclined to capture the high-frequency components. We validate the efficiency and robustness of the proposed hierarchical approach on a suite of linear and nonlinear partial differential equations.
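
The PyTorch sketch below illustrates the hierarchical residual idea on plain function fitting rather than on a PDE: each level trains a fresh network on the residual left by the previous levels, so later levels tend to capture higher-frequency content. The architectures, target, and training budgets are hypothetical.

```python
import torch

# Minimal sketch of hierarchical residual learning (a stand-in for the PDE
# setting): level k fits a new network to the residual of the sum of levels
# 0..k-1 against a target with both low- and high-frequency content.
torch.manual_seed(0)
x = torch.linspace(0.0, 1.0, 512).unsqueeze(1)
target = torch.sin(2 * torch.pi * x) + 0.1 * torch.sin(50 * torch.pi * x)

def make_net():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1),
    )

levels, current = [], torch.zeros_like(target)
for level in range(3):
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    residual = (target - current).detach()          # residual of previous levels
    for _ in range(2000):
        opt.zero_grad()
        loss = torch.mean((net(x) - residual) ** 2)
        loss.backward()
        opt.step()
    levels.append(net)
    current = current + net(x).detach()
    print(f"level {level}: max error {torch.max(torch.abs(target - current)).item():.3e}")
```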

This paper is concerned with efficient spectral solutions for weakly singular nonlocal diffusion equations with Dirichlet-type volume constraints. This type of equation contains an integral operator which typically has a singularity at the midpoint of the integral domain, and the approximation of such an integral operator is one of the essential difficulties in solving nonlocal equations. To overcome this problem, two-sided Jacobi spectral quadrature rules are proposed to develop a Jacobi spectral collocation method for the nonlocal diffusion equations. Rigorous convergence analysis of the proposed method is presented in $L^\infty$ norms, and we further prove that the Jacobi collocation solution converges to its corresponding local limit as nonlocal interactions vanish. Numerical examples are given to verify the theoretical results.
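
The Python sketch below illustrates one way to read the "two-sided" Jacobi quadrature idea for an interior weak singularity: split the integral at the singular point and absorb the algebraic singularity into Gauss-Jacobi weights on each side. The kernel, exponent, and mapping details are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

# Illustrative two-sided rule: evaluate I(x) = int_{-1}^{1} |x - s|^(-gamma) g(s) ds
# by splitting at the interior singular point s = x and using Gauss-Jacobi
# quadrature whose weight absorbs the algebraic singularity on each side.
gamma_, n = 0.4, 20
g = np.cos

def two_sided(x, gamma_, n):
    # right piece: s in [x, 1], mapped weight (1 + theta)^(-gamma)
    tr, wr = roots_jacobi(n, 0.0, -gamma_)
    sr = x + (1.0 - x) * (tr + 1.0) / 2.0
    right = ((1.0 - x) / 2.0) ** (1.0 - gamma_) * np.sum(wr * g(sr))
    # left piece: s in [-1, x], mapped weight (1 - theta)^(-gamma)
    tl, wl = roots_jacobi(n, -gamma_, 0.0)
    sl = -1.0 + (x + 1.0) * (tl + 1.0) / 2.0
    left = ((x + 1.0) / 2.0) ** (1.0 - gamma_) * np.sum(wl * g(sl))
    return left + right

x0 = 0.3
ref, _ = quad(lambda s: abs(x0 - s) ** (-gamma_) * g(s), -1.0, 1.0,
              points=[x0], limit=200)
print(f"two-sided Jacobi: {two_sided(x0, gamma_, n):.8f}, adaptive reference: {ref:.8f}")
```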

We establish summability results for coefficient sequences of Wiener-Hermite polynomial chaos expansions for countably-parametric solutions of linear elliptic and parabolic divergence-form partial differential equations with Gaussian random field inputs. The novel proof technique developed here is based on analytic continuation of parametric solutions into the complex domain. It differs from previous works that used bootstrap arguments and induction on the differentiation order of solution derivatives with respect to the parameters. The present holomorphy-based argument allows a unified, "differentiation-free" sparsity analysis of Wiener-Hermite polynomial chaos expansions in various scales of function spaces. The analysis also implies corresponding results for posterior densities in Bayesian inverse problems subject to Gaussian priors on uncertain inputs from function spaces. Our results furthermore yield dimension-independent convergence rates of various constructive high-dimensional deterministic numerical approximation schemes such as single-level and multi-level versions of anisotropic sparse-grid Hermite-Smolyak interpolation and quadrature in both forward and inverse computational uncertainty quantification.
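
As a one-dimensional illustration of a Wiener-Hermite expansion (the paper's setting is countably-parametric), the Python sketch below computes probabilists' Hermite coefficients of a smooth function of a single Gaussian input by Gauss-Hermite quadrature and prints their decay; the function is a hypothetical stand-in for a parametric solution map.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# One-dimensional Wiener-Hermite expansion f(Z) = sum_n c_n He_n(Z), Z ~ N(0,1),
# with c_n = E[f(Z) He_n(Z)] / n!, computed by Gauss-Hermite quadrature
# (weight exp(-z^2/2), total mass sqrt(2*pi)).
f = lambda z: np.exp(0.3 * z)            # hypothetical smooth parametric map
nodes, weights = He.hermegauss(60)

coeffs = []
for n in range(10):
    Hn = He.hermeval(nodes, [0.0] * n + [1.0])      # He_n evaluated at the nodes
    c_n = np.sum(weights * f(nodes) * Hn) / (sqrt(2 * pi) * factorial(n))
    coeffs.append(c_n)

print("|c_n| decay:", [f"{abs(c):.2e}" for c in coeffs])
```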

We propose a novel numerical method for high dimensional Hamilton--Jacobi--Bellman (HJB) type elliptic partial differential equations (PDEs). The HJB PDEs, reformulated as optimal control problems, are tackled by the actor-critic framework inspired by reinforcement learning, based on neural network parametrization of the value and control functions. Within the actor-critic framework, we employ a policy gradient approach to improve the control, while for the value function, we derive a variance reduced least-squares temporal difference method using stochastic calculus. To numerically discretize the stochastic control problem, we employ an adaptive step size scheme to improve the accuracy near the domain boundary. Numerical examples up to $20$ spatial dimensions including the linear quadratic regulators, the stochastic Van der Pol oscillators, the diffusive Eikonal equations, and fully nonlinear elliptic PDEs derived from a regulator problem are presented to validate the effectiveness of our proposed method.
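
For context, the display below records the standard correspondence that such actor-critic schemes exploit, between an elliptic HJB equation and an exit-time stochastic control problem; the signs, running cost, terminal cost, and boundary terms are schematic and may differ from the paper's formulation.

```latex
% Schematic HJB / stochastic control correspondence (illustrative notation:
% b drift, sigma diffusion, r running cost, g exit cost, tau the exit time):
\min_{a}\Big\{ b(x,a)\cdot\nabla V(x)
  + \tfrac{1}{2}\operatorname{Tr}\!\big(\sigma\sigma^{\top}\nabla^{2}V(x)\big)
  + r(x,a) \Big\} = 0 ,
\qquad
V(x) = \inf_{a(\cdot)} \mathbb{E}\Big[ \int_{0}^{\tau} r(X_{s},a_{s})\,\mathrm{d}s
  + g(X_{\tau}) \,\Big|\, X_{0}=x \Big].
```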
