
We consider mixed finite element methods with exactly symmetric stress tensors. We derive a new quasi-optimal a priori error estimate that is uniformly valid with respect to the compressibility. For the a posteriori error analysis we consider the Prager-Synge hypercircle principle and introduce a new estimate uniformly valid in the incompressible limit. All estimates are validated by numerical examples.
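
For orientation, the hypercircle principle underlying the a posteriori analysis reads as follows in its classical form, stated here for the Poisson model problem rather than for the elasticity system above: if $-\Delta u = f$ in $\Omega$ with $u \in H^1_0(\Omega)$, then
\[
\|\nabla u - \nabla v\|_{L^2(\Omega)}^2 + \|\nabla u - \boldsymbol{\tau}\|_{L^2(\Omega)}^2 = \|\nabla v - \boldsymbol{\tau}\|_{L^2(\Omega)}^2
\]
for every $v \in H^1_0(\Omega)$ and every equilibrated flux $\boldsymbol{\tau} \in H(\mathrm{div},\Omega)$ with $\mathrm{div}\,\boldsymbol{\tau} + f = 0$. Any conforming displacement approximation and any equilibrated stress thus yield a guaranteed, constant-free error bound; the contribution above is to make such bounds uniform in the incompressible limit.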

Related content

Several div-conforming and divdiv-conforming finite elements for symmetric tensors on simplices in arbitrary dimension are constructed in this work. The shape function space is first split into the trace space and the bubble space. The latter is further decomposed into the null space of the differential operator and its orthogonal complement. Instead of characterizing these subspaces of the shape function space directly, characterizations of their dual spaces are provided. Vector div-conforming finite elements are constructed first as an introductory example. Then new symmetric div-conforming finite elements are constructed. The dual subspaces are then used as building blocks to construct divdiv-conforming finite elements.
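
Schematically, writing $V$ for the shape function space on a simplex (notation chosen here for illustration), the splitting used above is
\[
V = V_{\mathrm{trace}} \oplus V_{\mathrm{bubble}}, \qquad
V_{\mathrm{bubble}} = \bigl(V_{\mathrm{bubble}} \cap \ker(\mathrm{div})\bigr) \oplus \bigl(V_{\mathrm{bubble}} \cap \ker(\mathrm{div})\bigr)^{\perp},
\]
with $\mathrm{div}$ replaced by the corresponding differential operator in the divdiv-conforming case; it is the duals of these summands that are characterized and reused as building blocks.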

We study the numerical approximation by space-time finite element methods of a multi-physics system coupling hyperbolic elastodynamics with parabolic transport, which models poro- and thermoelasticity. The equations are rewritten as a first-order system in time. Discretizations by continuous Galerkin methods in space and time, with inf-sup stable pairs of finite elements for the spatial approximation of the unknowns, are investigated. Optimal-order error estimates of energy type are proven. Superconvergence at the time nodes is addressed briefly. The error analysis can be extended to discontinuous and enriched Galerkin space discretizations. The error estimates are confirmed by numerical experiments.
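
As a concrete instance of this first-order-in-time rewriting (with coefficients and notation chosen here for illustration), the Biot system of poroelasticity for the displacement $u$, the velocity $v$, and the pore pressure $p$ reads
\[
\partial_t u = v, \qquad
\rho\,\partial_t v - \nabla\cdot\boldsymbol{\sigma}(u) + \alpha\,\nabla p = f, \qquad
c_0\,\partial_t p + \alpha\,\nabla\cdot v - \nabla\cdot(K\,\nabla p) = g,
\]
coupling the hyperbolic pair $(u,v)$ to the parabolic transport equation for $p$.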

We derive and analyze a symmetric interior penalty discontinuous Galerkin scheme for the approximation of the second-order form of the radiative transfer equation in slab geometry. Using appropriate trace lemmas, the analysis can be carried out as for more standard elliptic problems. Supporting numerical examples confirm the accuracy and stability of the method. For discretization, we employ quadtree-like grids, which allow for local refinement in phase space, and we show by example that adaptive methods can efficiently approximate discontinuous solutions.
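
The following minimal sketch shows the symmetric interior penalty structure for the 1D Poisson model problem $-u'' = f$ with weakly imposed homogeneous Dirichlet conditions, not for the radiative transfer equation itself; the function names and the penalty value are our choices.

```python
import numpy as np

def sipg_solve(N, f, sigma=10.0):
    """Symmetric interior penalty DG with piecewise-linear elements for
    -u'' = f on (0, 1), u(0) = u(1) = 0 (boundary conditions imposed
    weakly).  Illustrative sketch; the penalty sigma is a choice."""
    h = 1.0 / N
    nd = 2 * N                                  # two nodal dofs per element
    A, b = np.zeros((nd, nd)), np.zeros(nd)
    x = np.linspace(0.0, 1.0, N + 1)

    # volume terms: element stiffness plus trapezoidal load vector
    Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for k in range(N):
        dofs = [2 * k, 2 * k + 1]
        A[np.ix_(dofs, dofs)] += Ke
        b[dofs] += 0.5 * h * np.array([f(x[k]), f(x[k + 1])])

    def add_face(dofs, val, der):
        """Add -{u'}[v] - {v'}[u] + (sigma/h)[u][v] for one face; val and
        der hold the jump and average weights of the local dofs."""
        A[np.ix_(dofs, dofs)] += (-np.outer(der, val) - np.outer(val, der)
                                  + (sigma / h) * np.outer(val, val))

    for k in range(1, N):                       # interior faces at x[k]
        add_face([2 * k - 2, 2 * k - 1, 2 * k, 2 * k + 1],
                 np.array([0.0, 1.0, -1.0, 0.0]),              # jump [u]
                 np.array([-1.0, 1.0, -1.0, 1.0]) / (2 * h))   # average {u'}
    add_face([0, 1], np.array([-1.0, 0.0]), np.array([-1.0, 1.0]) / h)  # x = 0, n = -1
    add_face([nd - 2, nd - 1], np.array([0.0, 1.0]), np.array([-1.0, 1.0]) / h)  # x = 1, n = +1

    return np.linalg.solve(A, b)                # nodal values, two per element
```

For example, `sipg_solve(32, lambda x: np.pi**2 * np.sin(np.pi * x))` should reproduce $\sin(\pi x)$ with second-order accuracy at the element nodes.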

High-order entropy-stable discontinuous Galerkin methods for the compressible Euler and Navier-Stokes equations require the positivity of thermodynamic quantities in order to guarantee their well-posedness. In this work, we introduce a positivity limiting strategy for entropy-stable discontinuous Galerkin discretizations based on convex limiting. The key ingredient in the limiting procedure is a low-order positivity-preserving discretization based on graph viscosity terms. The proposed limiting strategy is both positivity preserving and discretely entropy-stable for the compressible Euler and Navier-Stokes equations. Numerical experiments confirm the high-order accuracy and robustness of the proposed strategy.
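
A minimal sketch of the convex-limiting step for a single pair of states of the 1D Euler equations follows; the function names, the scalar blending, and the bisection are our choices, while the actual scheme limits locally and takes its low-order state from the graph-viscosity discretization.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats

def pressure(u):
    rho, m, E = u                      # density, momentum, total energy (1D)
    return (GAMMA - 1.0) * (E - 0.5 * m * m / rho)

def convex_limit(u_low, u_high, eps=1e-10):
    """Blend u(theta) = u_low + theta * (u_high - u_low), theta in [0, 1],
    and return the state with the largest theta for which density and
    pressure stay >= eps.  Density is linear and the internal energy is
    concave along the segment, so the admissible thetas form an interval
    containing 0 (u_low is assumed admissible) and bisection is safe."""
    def admissible(theta):
        u = u_low + theta * (u_high - u_low)
        return u[0] >= eps and pressure(u) >= eps
    if admissible(1.0):
        return u_high                  # the high-order state needs no limiting
    lo, hi = 0.0, 1.0
    for _ in range(60):                # bisection on the blending factor
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if admissible(mid) else (lo, mid)
    return u_low + lo * (u_high - u_low)
```

Because the blend approaches the admissible low-order state as $\theta \to 0$, the limiter never fails; the paper shows that the limited scheme also retains discrete entropy stability.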

In recent work (Maierhofer & Huybrechs, 2022, Adv. Comput. Math.), the authors showed that least-squares oversampling can improve the convergence properties of collocation methods for boundary integral equations involving operators of certain pseudo-differential form. The underlying principle is that the discrete method approximates a Bubnov-Galerkin method in a suitable sense. In the present work, we extend this analysis to the case when the integral operator is perturbed by a compact operator $\mathcal{K}$ which is continuous as a map on Sobolev spaces on the boundary, $\mathcal{K}:H^{p}\rightarrow H^{q}$ for all $p,q\in\mathbb{R}$. This study is complicated by the fact that both the test and trial functions in the discrete Bubnov-Galerkin orthogonality conditions are modified relative to the unperturbed setting. Our analysis guarantees that previous results concerning optimal convergence rates and sufficient rates of oversampling are preserved in the more general case. Indeed, for the first time, this analysis provides a complete explanation of the advantages of least-squares oversampled collocation for boundary integral formulations of the Laplace equation on arbitrary smooth Jordan curves in 2D. Our theoretical results are shown to be in very good agreement with numerical experiments.
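
In outline, the least-squares oversampling step looks as follows; `apply_operator` is a placeholder for a concrete discretized boundary integral operator (including any compact perturbation $\mathcal{K}$), and the equispaced points and oversampling factor are choices made here.

```python
import numpy as np

def oversampled_collocation(apply_operator, f, N, oversampling=2):
    """Collocate the operator equation (V u)(t) = f(t) at J = oversampling*N
    points for N trial functions and solve the rectangular system in the
    least-squares sense; apply_operator(n, t) returns the operator applied
    to the n-th trial function, evaluated at the points t."""
    J = oversampling * N
    t = np.arange(J) / J                   # points on a periodic parameterization
    A = np.column_stack([apply_operator(n, t) for n in range(N)])
    coeffs, *_ = np.linalg.lstsq(A, f(t), rcond=None)
    return coeffs                          # coefficients of the trial expansion
```

The analysis identifies how large the oversampling must be for this discrete least-squares solution to inherit the convergence rates of the associated Bubnov-Galerkin method.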

Isogeometric Analysis generalizes classical finite element analysis and intends to integrate it with the field of Computer-Aided Design. A central problem in achieving this objective is the reconstruction of analysis-suitable models from Computer-Aided Design models, which is in general a non-trivial and time-consuming task. In this article, we present a novel spline construction that enables model reconstruction as well as simulation of high-order PDEs on the reconstructed models. The proposed almost-$C^1$ splines are biquadratic splines on fully unstructured quadrilateral meshes (without restrictions on the placement or number of extraordinary vertices). They are $C^1$ smooth almost everywhere, that is, at all vertices and across most edges, and in addition almost (i.e., approximately) $C^1$ smooth across all other edges. Thus, the splines form $H^2$-nonconforming analysis-suitable discretization spaces. This is the lowest-degree unstructured spline construction that can be used to solve fourth-order problems. The associated spline basis is non-singular and has several B-spline-like properties (e.g., partition of unity, non-negativity, local support). The almost-$C^1$ splines are described in an explicit B\'ezier-extraction-based framework that can be easily implemented. Numerical tests suggest that the basis is well-conditioned and exhibits optimal approximation behavior.
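
As a sketch of the Bézier-extraction viewpoint (the extraction matrix `C` below stands in for the one delivered by the almost-$C^1$ construction; only its role is illustrated):

```python
import numpy as np
from math import comb

def bernstein(p, t):
    """Degree-p Bernstein polynomials evaluated at t in [0, 1]."""
    return np.array([comb(p, i) * t**i * (1.0 - t)**(p - i) for i in range(p + 1)])

def spline_basis(C, u, v):
    """Evaluate the local spline basis on one quadrilateral element: row r of
    the extraction matrix C holds the coefficients of the r-th spline basis
    function in the 9 tensor-product Bernstein polynomials of bidegree (2, 2)."""
    B = np.kron(bernstein(2, u), bernstein(2, v))
    return C @ B
```

Since the Bernstein polynomials sum to one, partition of unity of the spline basis corresponds to each column of `C` summing to one.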

The angular measure on the unit sphere characterizes the first-order dependence structure of the components of a random vector in extreme regions and is defined in terms of standardized margins. Its statistical recovery is an important step in learning problems involving observations far away from the center. In the common situation that the components of the vector have different distributions, the rank transformation offers a convenient and robust way of standardizing data in order to build an empirical version of the angular measure based on the most extreme observations. However, the study of the sampling distribution of the resulting empirical angular measure is challenging. It is the purpose of the paper to establish finite-sample bounds for the maximal deviations between the empirical and true angular measures, uniformly over classes of Borel sets of controlled combinatorial complexity. The bounds are valid with high probability and, up to logarithmic factors, scale as the inverse square root of the effective sample size. The bounds are applied to provide performance guarantees for two statistical learning procedures tailored to extreme regions of the input space and built upon the empirical angular measure: binary classification in extreme regions through empirical risk minimization and unsupervised anomaly detection through minimum-volume sets of the sphere.
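
A minimal sketch of the estimator's construction (the names and the choice of the sup-norm are ours; the results above concern deviations uniformly over classes of Borel sets on the sphere):

```python
import numpy as np

def empirical_angular_measure(X, k):
    """Rank-transform the margins of X (n x d) to approximately unit-Pareto
    scale, keep the k observations with the largest radius, and return their
    projections onto the unit sphere; the empirical angular measure is the
    empirical distribution of these angles."""
    n, _ = X.shape
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1   # 1..n per margin
    V = n / (n + 1.0 - ranks)                               # standardized margins
    R = V.max(axis=1)                                       # sup-norm radius
    idx = np.argsort(R)[-k:]                                # k most extreme points
    return V[idx] / R[idx, None]                            # angles on the sphere
```

Here the number k of retained extreme observations plays the role of the effective sample size in the deviation bounds.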

This paper is devoted to the study of non-homogeneous Bingham flows. We introduce a second-order, divergence-conforming discretization for the Bingham constitutive equations, coupled with a discontinuous Galerkin scheme for the mass density. One of the main challenges when analyzing viscoplastic materials is the treatment of the yield stress. In order to overcome this issue, we propose a local regularization based on a Huber smoothing step. We also take advantage of the properties of the divergence-conforming and discontinuous Galerkin formulations to incorporate upwind discretizations that stabilize the formulation. The stability of the continuous problem and of the fully discrete scheme is analyzed. Further, a semismooth Newton method is proposed for solving the resulting fully discretized system of equations at each time step. Finally, several numerical examples that illustrate the main features of the problem and the properties of the numerical scheme are presented.
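
In the scalar case, the Huber smoothing step reduces to the following (names are ours; in the scheme it is applied locally to the strain-rate tensor):

```python
import numpy as np

def huber_bingham_stress(gamma, tau_y, mu, eps):
    """Huber-regularized Bingham law: for |gamma| >= eps this is the exact
    relation tau = mu * gamma + tau_y * gamma / |gamma|, while for
    |gamma| < eps the plastic part becomes linear, tau_y * gamma / eps.
    The multivalued yield-stress term is thus replaced by a semismooth
    function, which is what the semismooth Newton solver exploits."""
    return mu * gamma + tau_y * gamma / np.maximum(np.abs(gamma), eps)
```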

We introduce simple quadrature rules for the family of nonparametric nonconforming quadrilateral elements with four degrees of freedom. Our quadrature rules are motivated by the work of Meng {\it et al.} \cite{meng2018new}. First, we introduce a family of MVP (Mean Value Property)-preserving four-DOF nonconforming elements on the intermediate reference domain introduced by Meng {\it et al.} Then we design two-point and three-point quadrature rules on the intermediate reference domain. Under the assumption of equal quadrature weights, the deviation of the Gauss points from the quadrilateral center is given, for both the two-point and the three-point rules, by the same quadratic polynomial up to modified constant terms; thus, the two-point and three-point rules are constructed at one stroke. The quadrature rules are asymptotically optimal as the mesh size becomes sufficiently small. Several numerical experiments are carried out, which show the efficiency and convergence properties of the new quadrature rules.
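
The specific two-point and three-point rules live on the intermediate reference domain and are not reproduced here; the generic helper below (our own construction) merely illustrates how exactness of a candidate rule is checked against monomial integrals on a reference square:

```python
import numpy as np

def exact_integral(i, j):
    """Exact integral of x^i * y^j over the reference square [-1, 1]^2."""
    one_d = lambda k: 0.0 if k % 2 else 2.0 / (k + 1)
    return one_d(i) * one_d(j)

def check_rule(points, weights, max_deg=3):
    """Compare a quadrature rule on [-1, 1]^2 with exact monomial integrals."""
    for i in range(max_deg + 1):
        for j in range(max_deg + 1 - i):
            q = sum(w * x**i * y**j for (x, y), w in zip(points, weights))
            print(f"x^{i} y^{j}: rule {q:+.6f}  exact {exact_integral(i, j):+.6f}")

# example: the equal-weight 2x2 tensor Gauss rule is exact through degree 3
g = 1.0 / np.sqrt(3.0)
check_rule([(sx * g, sy * g) for sx in (-1, 1) for sy in (-1, 1)], [1.0] * 4)
```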

Counterfactual explanations are usually generated through heuristics that are sensitive to the search's initial conditions. The absence of guarantees of performance and robustness hinders trustworthiness. In this paper, we take a disciplined approach towards counterfactual explanations for tree ensembles. We advocate for a model-based search aiming at "optimal" explanations and propose efficient mixed-integer programming approaches. We show that isolation forests can be modeled within our framework to focus the search on plausible explanations with a low outlier score. We provide comprehensive coverage of additional constraints that model important objectives, heterogeneous data types, and structural constraints on the feature space, along with resource and actionability restrictions. Our experimental analyses demonstrate that the proposed search approach requires a computational effort that is orders of magnitude smaller than that of previous mathematical programming algorithms. It scales up to large data sets and tree ensembles, where it provides, within seconds, systematic explanations grounded on well-defined models solved to optimality.
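
As a toy illustration of the model-based search (a single fixed decision path rather than a tree ensemble; the variable bounds, the L1 objective, and all names are our choices, and real ensembles additionally require binary leaf-indicator variables with big-M linking constraints):

```python
import pulp  # MIP modeling front-end; the bundled CBC solver suffices here

# A single tree predicts class 1 iff (x0 > 2.0 and x1 <= 0.5); find the
# smallest L1 change to the factual point xf flipping its prediction to 1.
xf, eps = [1.0, 1.0], 1e-4

prob = pulp.LpProblem("counterfactual", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", -10, 10) for i in range(2)]
d = [pulp.LpVariable(f"d{i}", 0) for i in range(2)]   # d_i >= |x_i - xf_i|
prob += pulp.lpSum(d)                                 # minimize L1 distance
for i in range(2):
    prob += d[i] >= x[i] - xf[i]
    prob += d[i] >= xf[i] - x[i]
prob += x[0] >= 2.0 + eps      # enforce the target leaf: x0 > 2.0 ...
prob += x[1] <= 0.5            # ... and x1 <= 0.5
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([pulp.value(v) for v in x])   # expected: [2.0001, 0.5]
```

In the paper's framework, isolation forests enter the same model as additional trees whose selected leaves constrain the outlier score of the explanation.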
