
In this paper, we establish Carleman estimates for the fourth order partial differential operator $\gamma\partial_t+\partial_x^4\ (\gamma\in\mathbb{R})$. We obtain two kinds of Carleman estimates with different weight functions, i.e., a singular weight function and a regular weight function, respectively. Based on the Carleman estimate with the singular weight function, one can recover the known controllability and observability results for the 1-d fourth order parabolic-type equation, while based on the Carleman estimate with the regular weight function, one can deduce not only the known result on conditional stability for the inverse problem of the half-order fractional diffusion equation, but also a new result on conditional stability for the inverse problem of the half-order fractional Schr\"odinger equation.


This work analyzes a high-order hybridizable discontinuous Galerkin (HDG) method for the linear elasticity problem in a domain not necessarily polyhedral. The domain is approximated by a polyhedral computational domain where the HDG solution can be computed. The introduction of the rotation as one of the unknowns allows us to use the gradient of the displacements to obtain an explicit representation of the boundary data in the computational domain. The boundary data are transferred from the true boundary to the computational boundary by line integrals, whose integrand depends on the Cauchy stress tensor and the rotation. Under closeness assumptions between the computational and true boundaries, the scheme is shown to be well-posed, and optimal error estimates are provided even in the nearly incompressible case. Numerical experiments in two dimensions are presented.

The numerical solution of partial differential equations (PDEs) is difficult and has motivated a century of research so far. Recently, there has been a push to build neural--numerical hybrid solvers, piggy-backing on the modern trend towards fully end-to-end learned systems. Most works so far can only generalize over a subset of the properties a generic solver would face, including: resolution, topology, geometry, boundary conditions, domain discretization regularity, dimensionality, etc. In this work, we build a solver satisfying these properties, where all the components are based on neural message passing, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators. We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes. In order to encourage stability when training autoregressive models, we put forward a method based on the principle of zero-stability, posing stability as a domain adaptation problem. We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, discretizations, etc. in 1D and 2D. Our model outperforms state-of-the-art numerical solvers in the low resolution regime in terms of speed and accuracy.
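The claim that message passing solvers representationally contain classical schemes can be illustrated outside the neural setting: the second-order finite-difference Laplacian on a 1-d grid is exactly one round of message passing on the grid graph, where each node receives the message $(u_j - u_i)/h^2$ from each neighbor $j$ and sums them. A minimal sketch (the function names and grid setup are ours, not from the paper):

```python
import numpy as np

def laplacian_by_message_passing(u, h):
    """One message-passing round on the 1-d grid graph.

    Each interior node i receives a message (u[j] - u[i]) / h**2 from each
    neighbor j and sums them, which reproduces the classical second-order
    stencil (u[i-1] - 2*u[i] + u[i+1]) / h**2.
    """
    n = len(u)
    out = np.zeros(n)
    for i in range(1, n - 1):
        for j in (i - 1, i + 1):              # neighbors on the grid graph
            out[i] += (u[j] - u[i]) / h**2    # message from j, aggregated by sum
    return out[1:-1]                          # interior nodes only

x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
u = np.sin(np.pi * x)
mp = laplacian_by_message_passing(u, h)
fd = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2  # classical stencil for comparison
```

The two computations agree to rounding error, and both approximate $-\pi^2 \sin(\pi x)$ to second order in $h$.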

For real symmetric matrices that are accessible only through matrix-vector products, we present Monte Carlo estimators for computing the diagonal elements. Our probabilistic bounds for normwise absolute and relative errors apply to Monte Carlo estimators based on random Rademacher, sparse Rademacher, normalized and unnormalized Gaussian vectors, and to vectors with bounded fourth moments. The novel use of matrix concentration inequalities in our proofs represents a systematic model for future analyses. Our bounds mostly do not depend on the matrix dimension, target different error measures than existing work, and imply that the accuracy of the estimators increases with the diagonal dominance of the matrix. An application to derivative-based global sensitivity metrics corroborates this, as do numerical experiments on synthetic test matrices. We recommend against the use in practice of sparse Rademacher vectors, which are the basis for many randomized sketching and sampling algorithms, because they tend to deliver barely a digit of accuracy even with large numbers of samples.
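The Rademacher-based estimator covered by these bounds has a very short form: with access only to matvecs $v \mapsto Av$, the $k$-th sample contributes $v_k \odot (A v_k)$, and for $\pm 1$ entries the normalization is just the sample count. A minimal sketch (the test matrix and sample sizes are illustrative choices of ours):

```python
import numpy as np

def diag_estimate(matvec, n, m, rng):
    """Monte Carlo diagonal estimator from matrix-vector products only.

    Draws m Rademacher vectors v (i.i.d. +/-1 entries) and averages
    v * (A v); for Rademacher entries, E[v * (A v)] is exactly diag(A).
    """
    est = np.zeros(n)
    for _ in range(m):
        v = rng.choice([-1.0, 1.0], size=n)
        est += v * matvec(v)
    return est / m

rng = np.random.default_rng(0)
n = 50
# Symmetric, diagonally dominant test matrix: the bounds predict that
# accuracy improves with diagonal dominance.
B = 0.05 * rng.standard_normal((n, n))
A = np.diag(np.arange(1.0, n + 1.0)) + (B + B.T) / 2.0
d = diag_estimate(lambda v: A @ v, n, m=5000, rng=rng)
```

The per-entry variance is governed by the off-diagonal mass $\sum_{j\ne i} A_{ij}^2$, so for this diagonally dominant matrix a few thousand samples recover the diagonal to a couple of digits.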

Computations of incompressible flows with velocity boundary conditions require the solution of a Poisson equation for pressure with all Neumann boundary conditions. Discretization of such a Poisson equation results in a rank-deficient matrix of coefficients. When a non-conservative discretization method such as a finite difference, finite element, or spectral scheme is used, such a matrix also generates an inconsistency, which causes the residuals of the iterative solution to saturate at a threshold level that depends on the spatial resolution and the order of the discretization scheme. In this paper, we examine this inconsistency for a high-order meshless discretization scheme suitable for solving the equations on a complex domain. The high-order meshless method uses polyharmonic spline radial basis functions (PHS-RBF) with appended polynomials to interpolate scattered data and constructs the discrete equations by collocation. The PHS-RBF provides the flexibility to vary the order of discretization by increasing the degree of the appended polynomial. In this study, we examine the convergence of the inconsistency for different spatial resolutions and for different degrees of the appended polynomials by solving the Poisson equation for a manufactured solution as well as the Navier-Stokes equations for several fluid flows. We observe that the inconsistency decreases faster than the error in the final solution, and eventually becomes vanishingly small at sufficient spatial resolution. The rate of convergence of the inconsistency is observed to be similar to or better than the rate of convergence of the discretization errors. This beneficial observation makes it unnecessary to regularize the Poisson equation by fixing either the mean pressure or the pressure at an arbitrary point. A simple point solver such as SOR is seen to converge well, although it can be further accelerated using multilevel methods.
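The rank deficiency is easy to exhibit in the simplest 1-d finite-difference setting: the all-Neumann Laplacian annihilates constant vectors, so the system is solvable only when the right-hand side is orthogonal to constants (the discrete compatibility condition). A minimal sketch of this mechanism (not the PHS-RBF scheme of the paper; the grid and data are ours):

```python
import numpy as np

n, h = 40, 1.0 / 39
A = np.zeros((n, n))
for i in range(1, n - 1):                 # interior: standard 3-point stencil
    A[i, i - 1 : i + 2] = [1.0, -2.0, 1.0]
A[0, :2] = [-1.0, 1.0]                    # homogeneous Neumann end conditions
A[-1, -2:] = [1.0, -1.0]
A /= h**2

# Constants span the nullspace, so A has rank n - 1 ...
rank = np.linalg.matrix_rank(A)
# ... and since A is symmetric, a consistent right-hand side must be
# orthogonal to constants; project out the mean before solving.
f = np.cos(np.pi * np.linspace(0.0, 1.0, n))
f -= f.mean()
u, *_ = np.linalg.lstsq(A, f, rcond=None)
residual = np.linalg.norm(A @ u - f)
```

After the projection the least-squares solve converges to rounding error without fixing the pressure at any point, mirroring the paper's observation that regularization can be unnecessary once the inconsistency is small.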

We obtain new equitightness and $C([0,T];L^p(\mathbb{R}^N))$-convergence results for numerical approximations of generalized porous medium equations of the form $$ \partial_tu-\mathfrak{L}[\varphi(u)]=g\qquad\text{in $\mathbb{R}^N\times(0,T)$}, $$ where $\varphi:\mathbb{R}\to\mathbb{R}$ is continuous and nondecreasing, and $\mathfrak{L}$ is a local or nonlocal diffusion operator. Our results include slow diffusions, strongly degenerate Stefan problems, and fast diffusions above a critical exponent. These results improve the previous $C([0,T];L_{\text{loc}}^p(\mathbb{R}^N))$-convergence obtained in a series of papers on the topic by the authors. To have equitightness and global $L^p$-convergence, some additional restrictions on $\mathfrak{L}$ and $\varphi$ are needed. Most commonly used symmetric operators $\mathfrak{L}$ are still included: the Laplacian, fractional Laplacians, and other generators of symmetric L\'evy processes with some fractional moment. We also discuss extensions to nonlinear possibly strongly degenerate convection-diffusion equations.
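For the local case $\mathfrak{L}=\Delta$ with a power nonlinearity, the kind of numerical approximation whose convergence is at stake can be sketched as an explicit finite-difference scheme; with $g=0$ and periodic boundaries the scheme conserves mass exactly, the discrete counterpart of the $L^1$ structure behind such compactness arguments. The grid, time step, and initial datum below are illustrative choices of ours, not from the paper:

```python
import numpy as np

n, L = 200, 10.0
h = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
u = np.maximum(1.0 - x**2, 0.0)          # compactly supported initial datum
dt = 0.2 * h**2                          # explicit parabolic-type CFL restriction

mass0 = h * u.sum()
for _ in range(500):
    phi = u**2                           # phi(u) = u^2: slow (porous medium) diffusion
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / h**2
    u = u + dt * lap                     # explicit Euler step for u_t = Laplacian(phi(u))
mass = h * u.sum()
```

Under the stated CFL restriction the scheme is monotone, so the solution stays between its initial bounds while the total mass is preserved to rounding error.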

In this article, we deal with the efficient computation of the Wright function in the cases of interest for the expression of solutions of some fractional differential equations. The proposed algorithm is based on the inversion of the Laplace transform of a particular expression of the Wright function for which we discuss in detail the error analysis. We also present a code package that implements the algorithm proposed here in different programming languages. The analysis and implementation are accompanied by an extensive set of numerical experiments that validate both the theoretical estimates of the error and the applicability of the proposed method for representing the solutions of fractional differential equations.
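The inversion step can be illustrated with the standard fixed-Talbot quadrature of the Bromwich integral (Abate--Valkó); we demonstrate it on a transform with a known inverse rather than on the Wright function itself, whose particular transform expression from the article we do not restate here. The parameter choices follow the usual fixed-Talbot recipe:

```python
import numpy as np

def talbot_invert(F, t, M=24):
    """Fixed-Talbot numerical inversion of a Laplace transform F at time t > 0.

    Quadrature of the Bromwich integral along the Talbot contour
    s(theta) = r * theta * (cot(theta) + i), with the standard choice
    r = 2M / (5t).
    """
    r = 2.0 * M / (5.0 * t)
    total = 0.5 * F(r) * np.exp(r * t)            # theta = 0 endpoint term
    for k in range(1, M):
        theta = k * np.pi / M
        cot = np.cos(theta) / np.sin(theta)
        s = r * theta * (cot + 1j)
        sigma = theta + (theta * cot - 1.0) * cot  # from ds/dtheta
        total += (np.exp(t * s) * F(s) * (1.0 + 1j * sigma)).real
    return (r / M) * total

# Sanity check on a known pair: L[exp(-t)](s) = 1/(s + 1).
approx = talbot_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

With a few dozen nodes the rule already reaches near double-precision accuracy for smooth transforms, which is why contour-based inversion is attractive for special functions defined through their Laplace transforms.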

Adaptive spectral (AS) decompositions associated with a piecewise constant function $u$ yield small subspaces where the characteristic functions comprising $u$ are well approximated. When combined with Newton-like optimization methods for the solution of inverse medium problems, AS decompositions have proved remarkably efficient in providing at each nonlinear iteration a low-dimensional search space. Here, we derive $L^2$-error estimates for the AS decomposition of $u$, truncated after $K$ terms, when $u$ is piecewise constant and consists of $K$ characteristic functions over Lipschitz domains and a background. Our estimates apply both to the continuous and the discrete Galerkin finite element setting. Numerical examples illustrate the accuracy of the AS decomposition for media that either do, or do not, satisfy the assumptions of the theory.

Recently, a machine learning approach to Monte Carlo simulations called Neural Markov Chain Monte Carlo (NMCMC) has been gaining traction. In its most popular form it uses neural networks to construct normalizing flows, which are then trained to approximate the desired target distribution. As this distribution is usually defined via a Hamiltonian or action, the standard learning algorithm requires estimation of the action's gradient with respect to the fields. In this contribution we present another gradient estimator (and the corresponding PyTorch implementation) that avoids this calculation, thus potentially speeding up training for models with more complicated actions. We also study the statistical properties of several gradient estimators and show that our formulation leads to better training results.
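The general idea of a gradient estimator that needs only action *values*, never the action's gradient with respect to the fields, can be sketched with the plain score-function (REINFORCE-type) estimator of the reverse Kullback--Leibler divergence. This illustrates the principle, not the paper's specific formulation; the one-parameter Gaussian "flow" and quadratic action below are a toy model of ours:

```python
import numpy as np

def kl_grad_score(mu, action, m, rng):
    """Score-function estimator of d/dmu KL(q_mu || p), where p ~ exp(-S).

    Uses only values of the action S, never dS/dx: the identity
    d/dmu E_q[log q - log p] = E_q[(d/dmu log q)(log q + S)] holds up to
    mu-independent constants, which are removed by a mean baseline.
    """
    x = rng.standard_normal(m) + mu      # samples from q_mu = N(mu, 1)
    logq = -0.5 * (x - mu) ** 2          # log q up to a mu-independent constant
    f = logq + action(x)                 # log q - log p, up to a constant
    f -= f.mean()                        # baseline: reduces variance, keeps estimator unbiased
    return np.mean((x - mu) * f)         # (x - mu) is d/dmu log q_mu(x)

rng = np.random.default_rng(0)
S = lambda x: 0.5 * (x - 2.0) ** 2       # target p = N(2, 1), given only as an action
mu = 0.0
for _ in range(200):
    mu -= 0.5 * kl_grad_score(mu, S, m=2000, rng=rng)
```

Gradient descent drives `mu` to the target mean using only evaluations of `S`, which is the property that matters when the action's field-gradient is expensive.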

A general class of KdV-type wave equations regularized with a convolution-type nonlocality in space is considered. The class differs from the class of nonlinear nonlocal unidirectional wave equations previously studied by the addition of a linear convolution term involving a third-order derivative. To solve the Cauchy problem we propose a semi-discrete numerical method based on a uniform spatial discretization, which is an extension of previously published work of the present authors. We prove uniform convergence of the numerical method as the mesh size goes to zero. We also prove that the localization error resulting from restriction to a finite domain is less than a given threshold if the finite domain is large enough. To illustrate the theoretical results, some numerical experiments are carried out for the Rosenau-KdV equation, the Rosenau-BBM-KdV equation, and a convolution-type integro-differential equation. The experiments conducted for three particular choices of the kernel function confirm the error estimates that we provide.

Graph Neural Networks (GNNs) come in many flavors, but should always be either invariant (permutation of the nodes of the input graph does not affect the output) or equivariant (permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or equivariant linear operator. Recently, Maron et al. (2019) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the equivariant case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Finally, unlike many previous settings that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.
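The two symmetry notions are concrete: for a permutation matrix $P$ acting on adjacency $A$ and node features $x$, an invariant network satisfies $f(PAP^\top, Px) = f(A, x)$ and an equivariant one $f(PAP^\top, Px) = P f(A, x)$. A minimal single-channel example with the layer shape considered here (equivariant linear operator, pointwise non-linearity, then an invariant or equivariant linear operator); the specific linear operators are our own illustrative choices:

```python
import numpy as np

def equivariant_layer(A, x):
    """Equivariant linear operator: node feature plus neighbor sum."""
    return x + A @ x

def gnn_invariant(A, x):
    """Equivariant map -> pointwise non-linearity -> invariant readout (sum)."""
    return np.tanh(equivariant_layer(A, x)).sum()

def gnn_equivariant(A, x):
    """Equivariant map -> pointwise non-linearity -> equivariant map."""
    return equivariant_layer(A, np.tanh(equivariant_layer(A, x)))

rng = np.random.default_rng(0)
n = 6
A = rng.integers(0, 2, size=(n, n)).astype(float)
A = np.triu(A, 1) + np.triu(A, 1).T               # random symmetric adjacency
x = rng.standard_normal(n)
P = np.eye(n)[rng.permutation(n)]                 # random permutation matrix

inv_ok = np.isclose(gnn_invariant(P @ A @ P.T, P @ x), gnn_invariant(A, x))
eqv_ok = np.allclose(gnn_equivariant(P @ A @ P.T, P @ x),
                     P @ gnn_equivariant(A, x))
```

Both identities hold exactly here because the linear operators commute with the permutation action and the non-linearity acts pointwise; the universality question is whether sums of such channels can approximate *all* invariant or equivariant functions.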
