
The paper presents a new error analysis of a class of mixed FEMs for stationary incompressible magnetohydrodynamics, using standard inf-sup stable velocity-pressure space pairs for the Navier-Stokes equations and N\'ed\'elec's edge elements for the magnetic field. These methods have been widely used in numerical simulations over the last several decades, yet the existing analysis is not optimal due to the strong coupling of the system and the pollution of the lower-order N\'ed\'elec edge approximation in the analysis. By means of a newly modified Maxwell projection, we establish new and optimal error estimates. In particular, we prove that the method based on the commonly-used Taylor-Hood/lowest-order N\'ed\'elec edge element pair is efficient and provides second-order accuracy for the numerical velocity. Two numerical examples, for the problem in both convex and nonconvex polygonal domains, are presented. The numerical results confirm our theoretical analysis.

Related Content

Code verification plays an important role in establishing the credibility of computational simulations by assessing the correctness of the implementation of the underlying numerical methods. In computational electromagnetics, the numerical solution to integral equations incurs multiple interacting sources of numerical error, as well as other challenges, which render traditional code-verification approaches ineffective. In this paper, we provide approaches to separately measure the numerical errors arising from these different error sources for the method-of-moments implementation of the combined-field integral equation. We demonstrate the effectiveness of these approaches for cases with and without coding errors.

We present a new divergence-free and well-balanced hybrid FV/FE scheme for the incompressible viscous and resistive MHD equations on unstructured mixed-element meshes in two and three space dimensions. The equations are split into subsystems. The pressure is defined on the vertices of the primary mesh, while the velocity field and the normal components of the magnetic field are defined on an edge-based/face-based dual mesh in two and three space dimensions, respectively. This makes it possible to account for the divergence-free conditions of the velocity field and of the magnetic field in a rather natural manner. The nonlinear convective and viscous terms are solved with the aid of an explicit FV scheme, while the magnetic field is evolved in a divergence-free manner via an explicit FV method based on a discrete form of the Stokes law on the edges/faces of each primary element. To achieve a higher order of accuracy, a piecewise linear polynomial is reconstructed for the magnetic field, which is guaranteed to be divergence-free via a constrained $L^2$ projection. The pressure subsystem is solved implicitly with the aid of a classical continuous FE method in the vertices of the primary mesh. In order to maintain non-trivial stationary equilibrium solutions of the governing PDE system exactly, which are assumed to be known a priori, each step of the new algorithm takes the known equilibrium solution explicitly into account, so that the method becomes exactly well-balanced. This paper includes a very thorough study of the lid-driven MHD cavity problem in the presence of different magnetic fields. We finally present long-time simulations of Soloviev equilibrium solutions in several simplified 3D tokamak configurations, even on very coarse unstructured meshes that, in general, do not need to be aligned with the magnetic field lines.
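
As a worked illustration of the constrained $L^2$ projection mentioned above (a generic sketch of one common realization, not necessarily the exact formulation used in the paper), the reconstructed piecewise linear field $\mathbf{B}_h$ can be obtained on each element $K$ from
\[
\min_{\mathbf{B}_h \in [\mathbb{P}_1(K)]^d} \ \tfrac12\,\|\mathbf{B}_h - \mathbf{B}\|_{L^2(K)}^2 \quad \text{subject to} \quad \nabla\cdot\mathbf{B}_h = 0 \ \text{in } K,
\]
which, after introducing a Lagrange multiplier $\lambda$, amounts to a small elementwise saddle-point system: find $(\mathbf{B}_h,\lambda)$ such that $\int_K \mathbf{B}_h\cdot\mathbf{C}\,dx + \int_K \lambda\,\nabla\cdot\mathbf{C}\,dx = \int_K \mathbf{B}\cdot\mathbf{C}\,dx$ for all test fields $\mathbf{C}$, together with $\int_K \mu\,\nabla\cdot\mathbf{B}_h\,dx = 0$ for all multipliers $\mu$.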

We obtain bounds to quantify the distributional approximation in the delta method for vector statistics (the sample mean of $n$ independent random vectors) for normal and non-normal limits, measured using smooth test functions. For normal limits, we obtain bounds with the optimal order $n^{-1/2}$ convergence rate, while for a wide class of non-normal limits, which includes quadratic forms among others, we achieve bounds with the faster order $n^{-1}$ convergence rate. We apply our general bounds to derive explicit bounds quantifying distributional approximations of an estimator for the Bernoulli variance and of several statistics involving sample moments, order $n^{-1}$ bounds for the chi-square approximation of a family of rank-based statistics, and an efficient independent derivation of an order $n^{-1}$ bound for the chi-square approximation of Pearson's statistic. In establishing our general results, we generalise recent results on Stein's method for functions of multivariate normal random vectors to vector-valued functions and sums of independent random vectors whose components may be dependent. These bounds are widely applicable and of independent interest.
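
For context, the classical normal-limit form of the delta method that these bounds quantify reads as follows; this is standard background rather than a result of the paper. If $X_1,\dots,X_n$ are i.i.d. random vectors with mean $\mu$ and covariance $\Sigma$, $\bar{X}_n$ is their sample mean, and $g$ is a real-valued function differentiable at $\mu$, then
\[
\sqrt{n}\,\bigl(g(\bar{X}_n)-g(\mu)\bigr) \ \xrightarrow{d}\ \mathcal{N}\bigl(0,\ \nabla g(\mu)^{\top}\Sigma\,\nabla g(\mu)\bigr),
\]
and the bounds of the paper control the distance between the two sides, measured with smooth test functions, at rate $n^{-1/2}$. When $\nabla g(\mu)=0$ the second-order delta method gives a non-normal limit, e.g. a quadratic form in a normal vector, and the rate improves to $n^{-1}$.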

An adaptive modified weak Galerkin method (AmWG) for an elliptic problem is studied in this paper, together with its convergence and optimality. The modified weak Galerkin bilinear form is simplified so that it does not require the skeletal variable, and the approximation space is chosen as the discontinuous polynomial space, as in the discontinuous Galerkin method. Based on a reliable residual-based a posteriori error estimator, an adaptive algorithm is proposed, and its convergence and quasi-optimality are proved for the lowest-order case. The primary tool is the connection between the modified weak Galerkin method and the Crouzeix-Raviart nonconforming finite element. Unlike the traditional convergence analysis for methods with a discontinuous polynomial approximation space, the convergence analysis of AmWG is free of any penalty parameter. Numerical results are presented to support the theoretical findings.
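
As a rough illustration of the adaptive algorithm referred to above, the usual solve-estimate-mark-refine loop can be sketched as follows; the helper functions (solve_mwg, estimate_residual, dorfler_mark, refine) are hypothetical placeholders for the corresponding steps, not the paper's actual code.

# Hypothetical sketch of the solve-estimate-mark-refine loop; the helper
# functions are placeholders, not an actual AmWG implementation.
def adaptive_mwg(mesh, f, tol, theta=0.5, max_iter=50):
    u_h = None
    for _ in range(max_iter):
        u_h = solve_mwg(mesh, f)                 # modified weak Galerkin solve on the current mesh
        eta = estimate_residual(mesh, u_h, f)    # elementwise residual-based error indicators
        if sum(e**2 for e in eta) ** 0.5 < tol:  # stop once the global estimator is small enough
            break
        marked = dorfler_mark(mesh, eta, theta)  # bulk (Dorfler) marking with parameter theta
        mesh = refine(mesh, marked)              # refine marked elements (e.g. by bisection)
    return u_h, mesh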

We present a rigorous theoretical analysis of the convergence rate of the deep mixed residual method (MIM) when applied to a linear elliptic equation with various types of boundary conditions. MIM has been proposed as a more effective numerical approximation method than the deep Galerkin method (DGM) and the deep Ritz method (DRM) in various cases. Our analysis shows that MIM outperforms DRM and the deep Galerkin method for weak solutions (DGMW) in the Dirichlet case due to its ability to enforce the boundary condition. For the Neumann and Robin cases, however, MIM demonstrates performance similar to that of the other methods. Our results provide valuable insights into the strengths of MIM and its comparative performance in solving linear elliptic equations with different boundary conditions.
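
To make the comparison concrete, recall the first-order (mixed residual) reformulation on which MIM is based; the following is a generic sketch for the Poisson model problem, not the exact loss analysed in the paper. Writing $-\Delta u = f$ in $\Omega$ as the system $\mathbf{p} = \nabla u$, $-\nabla\cdot\mathbf{p} = f$, MIM trains networks $(u_\theta,\mathbf{p}_\theta)$ by minimizing a least-squares residual of the form
\[
\mathcal{L}(\theta) \;=\; \|\mathbf{p}_\theta - \nabla u_\theta\|_{L^2(\Omega)}^2 \;+\; \|\nabla\cdot\mathbf{p}_\theta + f\|_{L^2(\Omega)}^2 \;+\; \beta\,\|\mathcal{B}u_\theta - g\|_{L^2(\partial\Omega)}^2,
\]
where $\mathcal{B}$ encodes the Dirichlet, Neumann, or Robin boundary condition. In the Dirichlet case the boundary condition can instead be built into the network exactly, which is the mechanism behind the advantage noted above.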

The Network Revenue Management (NRM) problem is a well-known challenge in dynamic decision-making under uncertainty. In this problem, fixed resources must be allocated to serve customers over a finite horizon, while customers arrive according to a stochastic process. The typical NRM model assumes that customer arrivals are independent over time. In this paper, however, we explore a more general setting where customer arrivals over different periods can be correlated. We propose a new model that assumes the existence of a system state, which determines customer arrivals for the current period and evolves over time according to a time-inhomogeneous Markov chain. Our model can represent correlation in various settings and synthesizes the previous literature on correlation models. To solve the NRM problem under our correlated model, we derive a new linear programming (LP) approximation of the optimal policy. Our approximation provides a tighter upper bound on the total expected value collected by the optimal policy than existing upper bounds. We use our LP to develop a new bid-price policy, which computes bid prices for each system state and time period by backward induction; the decision is then made by comparing the reward of the customer against the associated bid prices. Our policy is guaranteed to collect at least a $1/(1+L)$ fraction of the total reward collected by the optimal policy, where $L$ denotes the maximum number of resources required by a customer. In summary, our work presents a new model for correlated customer arrivals in the NRM problem, provides an LP approximation for solving the problem under this model, derives a new bid-price policy, and establishes a theoretical guarantee on its performance.
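
As a small, self-contained illustration of the bid-price decision rule described above (the bid-price table below is a made-up stand-in; in the paper the bid prices are computed from the LP by backward induction over states and periods):

def accept(reward, resource_use, bid_price, state, t):
    # Accept the customer iff the reward covers the total opportunity cost,
    # i.e. the sum of bid prices of all resources the customer would consume.
    opportunity_cost = sum(units * bid_price[state][t][r]
                           for r, units in resource_use.items())
    return reward >= opportunity_cost

# Toy example: one state, one period, one resource ("seat") with bid price 120.
bid_price = {"high_demand": {0: {"seat": 120.0}}}
print(accept(150.0, {"seat": 1}, bid_price, "high_demand", 0))  # True: 150 >= 120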

In this paper, we are interested in constructing a scheme for solving the compressible Navier--Stokes equations with desired properties including high-order spatial accuracy, conservation, and positivity preservation of density and internal energy under a standard hyperbolic-type CFL constraint on the time step size, e.g., $\Delta t=\mathcal O(\Delta x)$. Strang splitting is used to approximate the convection and diffusion operators separately. For the convection part, i.e., the compressible Euler equations, the high-order accurate positivity-preserving Runge--Kutta discontinuous Galerkin method can be used. For the diffusion part, the equation for the internal energy, instead of the total energy, is considered, and a first-order semi-implicit time discretization is used for ease of achieving positivity. A suitable interior penalty discontinuous Galerkin method for the stress tensor ensures conservation of momentum and total energy for any high-order polynomial basis. In particular, positivity can be proven with $\Delta t=\mathcal{O}(\Delta x)$ if the Laplacian of the internal energy is approximated by the $\mathbb{Q}^k$ spectral element method with $k=1,2,3$. The full scheme with the $\mathbb{Q}^k$ ($k=1,2,3$) basis is therefore conservative and positivity-preserving with $\Delta t=\mathcal{O}(\Delta x)$, which is robust for demanding problems such as solutions with low density and low pressure induced by high-speed shock diffraction. Even though the full scheme is only first-order accurate in time, numerical tests indicate that a higher-order polynomial basis produces much better numerical solutions, e.g., better resolution in capturing the roll-ups during shock reflection.
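
For concreteness, the Strang splitting referred to above advances the solution over one time step by sandwiching one sub-solver between two half-steps of the other (the assignment of roles shown here is schematic and may be exchanged). With $\mathcal{S}_C$ and $\mathcal{S}_D$ denoting the convection (Euler) and diffusion solvers,
\[
U^{n+1} \;=\; \mathcal{S}_C^{\Delta t/2}\,\mathcal{S}_D^{\Delta t}\,\mathcal{S}_C^{\Delta t/2}\,U^{n},
\]
which is formally second-order in time only when both sub-solvers are at least second-order; since the diffusion part here uses a first-order semi-implicit discretization, the overall scheme remains first-order accurate in time, as stated above.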

This paper provides an error estimate for the u-series decomposition of the Coulomb interaction in molecular dynamics simulations. We show that the number of truncated Gaussians $M$ in the u-series and the base of interpolation nodes $b$ in the bilateral serial approximation are two key parameters for the algorithm accuracy, and that the errors converge as $\mathcal{O}(b^{-M})$ for the energy and $\mathcal{O}(b^{-3M})$ for the force. Error bounds due to numerical quadrature and cutoff in both the electrostatic energy and forces are obtained. Closed-form formulae are also provided, which are useful in the parameter setup for simulations under a given accuracy. The results are verified by analyzing the errors of two practical systems.
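
A direct consequence of the stated rates, useful as a back-of-the-envelope parameter check (a rough illustration, not one of the paper's closed-form formulae): to reach an energy error tolerance $\varepsilon$, one needs roughly
\[
M \;\gtrsim\; \log_b\!\left(\tfrac{1}{\varepsilon}\right),
\]
at which point the force error is already of order $\varepsilon^{3}$; equivalently, increasing $M$ by one improves the energy error by a factor of about $b$ and the force error by about $b^{3}$.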

We introduce a priori Sobolev-space error estimates for the solution of nonlinear, and possibly parametric, PDEs using Gaussian process and kernel based methods. The primary assumptions are: (1) a continuous embedding of the reproducing kernel Hilbert space of the kernel into a Sobolev space of sufficient regularity; and (2) the stability of the differential operator and the solution map of the PDE between corresponding Sobolev spaces. The proof is articulated around Sobolev norm error estimates for kernel interpolants and relies on the minimizing norm property of the solution. The error estimates demonstrate dimension-benign convergence rates if the solution space of the PDE is smooth enough. We illustrate these points with applications to high-dimensional nonlinear elliptic PDEs and parametric PDEs. Although some recent machine learning methods have been presented as breaking the curse of dimensionality in solving high-dimensional PDEs, our analysis suggests a more nuanced picture: there is a trade-off between the regularity of the solution and the presence of the curse of dimensionality. Therefore, our results are in line with the understanding that the curse is absent when the solution is regular enough.
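
For orientation, a typical kernel-interpolation estimate underlying arguments of this kind reads as follows (stated informally as standard background; the paper's precise assumptions, norms, and constants differ in detail). If the RKHS of the kernel is continuously embedded in $H^s(\Omega)$ with $s>d/2$, and $I_X u$ denotes the kernel interpolant of $u$ at a point set $X$ with fill distance $h$, then for $0\le t\le s$,
\[
\|u - I_X u\|_{H^t(\Omega)} \;\le\; C\, h^{\,s-t}\, \|u\|_{H^s(\Omega)}.
\]
Since the fill distance of $N$ quasi-uniform points scales like $N^{-1/d}$, the rate in terms of $N$ is $N^{-(s-t)/d}$, which is dimension-benign precisely when the smoothness $s$ grows with the dimension, consistent with the regularity/dimensionality trade-off discussed above.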

Graph Neural Networks (GNNs) have been shown to be effective models for different predictive tasks on graph-structured data. Recent work on their expressive power has focused on isomorphism tasks and countable feature spaces. We extend this theoretical framework to include continuous features - which occur regularly in real-world input domains and within the hidden layers of GNNs - and we demonstrate the requirement for multiple aggregation functions in this context. Accordingly, we propose Principal Neighbourhood Aggregation (PNA), a novel architecture combining multiple aggregators with degree-scalers (which generalize the sum aggregator). Finally, we compare the capacity of different models to capture and exploit the graph structure via a novel benchmark containing multiple tasks taken from classical graph theory, alongside existing benchmarks from real-world domains, all of which demonstrate the strength of our model. With this work, we hope to steer some of the GNN research towards new aggregation methods which we believe are essential in the search for powerful and robust models.
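
As a minimal illustration of combining multiple aggregators with degree-based scalers (a simplified numpy sketch, not the authors' implementation; the particular aggregators, the logarithmic scalers, and the normalizer delta, which would normally be averaged over the training graphs, follow a common PNA-style setup):

import numpy as np

def pna_aggregate(neighbor_feats, delta=1.0):
    # neighbor_feats: (num_neighbors, feat_dim) array of messages from neighbors.
    d = neighbor_feats.shape[0]
    aggs = [neighbor_feats.mean(axis=0),
            neighbor_feats.max(axis=0),
            neighbor_feats.min(axis=0),
            neighbor_feats.std(axis=0)]          # multiple aggregators
    scalers = [1.0,
               np.log(d + 1) / delta,            # amplification by degree
               delta / np.log(d + 1)]            # attenuation by degree
    # Concatenate every (scaler, aggregator) combination into one vector,
    # which would then be fed to the usual update network (not shown).
    return np.concatenate([s * a for s in scalers for a in aggs])

h = pna_aggregate(np.random.rand(5, 8))  # vector of length 3 * 4 * 8 = 96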
