
The Hilbert spaces $H(\mathrm{curl})$ and $H(\mathrm{div})$ are needed for variational problems formulated in the context of the de Rham complex in order to guarantee well-posedness. Consequently, the construction of conforming subspaces is a crucial step in the formulation of viable numerical solutions. As an alternative to the standard Ciarlet definition of a finite element, given by the triplet of a domain, a polynomial space, and degrees of freedom, this work introduces a novel, simple method of directly constructing semi-continuous vectorial basis functions on the reference element via polytopal templates and an underlying $H^1$-conforming polynomial subspace. The basis functions are then mapped from the reference element to the element in the physical domain via consistent Piola transformations. The method is defined in such a way that the underlying $H^1$-conforming subspace can be chosen independently, thus allowing for constructions of arbitrary polynomial order. The basis functions arise by multiplication of the underlying basis with template vectors defined for each polytope of the reference element. We prove unisolvence of the construction for N\'ed\'elec elements of the first and second type, Brezzi-Douglas-Marini elements, and Raviart-Thomas elements. An application of the method is demonstrated with two examples in the relaxed micromorphic model.

Related content

This paper provides a mathematical analysis of an elementary, fully discrete finite difference method applied to the inhomogeneous (non-constant density and viscosity) incompressible Navier-Stokes system on a bounded domain. The proposed method combines a version of the Lax-Friedrichs explicit scheme for the transport equation with a version of Ladyzhenskaya's implicit scheme for the Navier-Stokes equations. Under the condition that the initial density profile is strictly bounded away from $0$, the scheme is proven to converge strongly to a weak solution (up to a subsequence) on an arbitrary time interval, which can be read as a proof of existence of a weak solution to the system. The results include a new Aubin-Lions-Simon type compactness argument with an interpolation inequality between strong norms of the velocity and a weak norm of the product of the density and velocity.
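As an illustration (not taken from the paper), the explicit transport building block can be sketched in one dimension: a Lax-Friedrichs step for $\rho_t + (u\rho)_x = 0$ with constant velocity and periodic boundary conditions. Grid size, velocity, time step, and initial profile below are invented choices; the conservative form preserves total mass exactly, and a CFL number of $1/2$ keeps the density away from $0$.

```python
# Illustrative 1D Lax-Friedrichs step for the transport equation
# rho_t + (u * rho)_x = 0 with constant velocity u and periodic
# boundary conditions. All parameters are invented; this sketches only
# the explicit transport building block, not the paper's coupled scheme.

n, u = 100, 1.0
dx = 1.0 / n
dt = 0.5 * dx / abs(u)             # CFL number 1/2

rho = [2.0 if 25 <= i < 50 else 1.0 for i in range(n)]   # strictly away from 0
mass_before = sum(rho) * dx

for _ in range(200):               # march 200 time steps
    flux = [u * r for r in rho]
    rho = [0.5 * (rho[i - 1] + rho[(i + 1) % n])
           - 0.5 * (dt / dx) * (flux[(i + 1) % n] - flux[i - 1])
           for i in range(n)]

mass_after = sum(rho) * dx         # conserved by the conservative form
```

With this CFL number the update is a convex combination of neighbouring values, so the density also stays bounded below by its initial minimum.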

The mixed form of the Cahn-Hilliard equations is discretized by the hybridizable discontinuous Galerkin method. For any chemical energy density, existence and uniqueness of the numerical solution are obtained. The scheme is proved to be unconditionally stable. Convergence of the method is obtained by deriving a priori error estimates that are valid for the Ginzburg-Landau chemical energy density and for convex domains. The paper also contains discrete functional tools, namely discrete Agmon and Gagliardo-Nirenberg inequalities, which are proved to hold in the hybridizable discontinuous Galerkin spaces.

In this paper we consider the generalized inverse iteration for computing ground states of the Gross-Pitaevskii eigenvector problem (GPE). To this end, we prove explicit linear convergence rates that depend on the maximum eigenvalue in magnitude of a weighted linear eigenvalue problem. Furthermore, we show that this eigenvalue can be bounded by the first spectral gap of a linearized Gross-Pitaevskii operator, recovering the same rates as for linear eigenvector problems. With this we establish the first local convergence result for the basic inverse iteration for the GPE without damping. We also show how our findings directly generalize to extended inverse iterations, such as the Gradient Flow Discrete Normalized (GFDN) method proposed in [W. Bao, Q. Du, SIAM J. Sci. Comput., 25 (2004)] or the damped inverse iteration suggested in [P. Henning, D. Peterseim, SIAM J. Numer. Anal., 53 (2020)]. Our analysis also reveals why the inverse iteration for the GPE does not react favourably to spectral shifts. This empirical observation can now be explained by the blow-up of a weighting function that crucially enters the convergence rates. Our findings are illustrated by numerical experiments.
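For orientation, the linear skeleton of inverse iteration, $\varphi \leftarrow A^{-1}\varphi / \|A^{-1}\varphi\|$, can be sketched on a tiny invented example; the nonlinear GPE version additionally refreshes the operator with the $|\varphi|^2$ term in every step. The $2\times 2$ matrix below has eigenvalues $1$ and $3$, so the linear rate is $1/3$.

```python
import math

# Linear skeleton of inverse iteration phi <- normalize(A^{-1} phi) on a
# 2x2 example with eigenvalues 1 and 3 (an invented stand-in; for the
# GPE the operator depends on |phi|^2 and is updated every step).

A = [[2.0, 1.0], [1.0, 2.0]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]

phi = [1.0, 0.0]                   # not orthogonal to the ground state
for _ in range(50):
    v = [Ainv[0][0] * phi[0] + Ainv[0][1] * phi[1],
         Ainv[1][0] * phi[0] + Ainv[1][1] * phi[1]]
    nrm = math.sqrt(v[0] ** 2 + v[1] ** 2)
    phi = [v[0] / nrm, v[1] / nrm]  # normalization step

# The Rayleigh quotient recovers the smallest eigenvalue, here 1,
# with the error contracting like (1/3)^k.
Aphi = [A[0][0] * phi[0] + A[0][1] * phi[1],
        A[1][0] * phi[0] + A[1][1] * phi[1]]
lam = phi[0] * Aphi[0] + phi[1] * Aphi[1]
```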

Elliptic interface problems whose solutions are $C^0$ continuous have been well studied over the past two decades. The well-known numerical methods include the strongly stable generalized finite element method (SGFEM) and immersed FEM (IFEM). In this paper, we study numerically a larger class of elliptic interface problems whose solutions are discontinuous. A direct application of the existing methods fails immediately, since the approximate solution must lie in a larger space that admits discontinuous functions. We propose a class of high-order enriched unfitted FEMs to solve these problems with implicit or Robin-type interface jump conditions. We design new enrichment functions that capture the imposed discontinuity of the solution while keeping the condition number from growing fast. A linear enriched method in 1D was recently developed using one enrichment function; we generalize it to arbitrary degree using two simple discontinuous one-sided enrichment functions. The natural tensor-product extension to the 2D case is demonstrated. Optimal-order convergence in the $L^2$ and broken $H^1$ norms is established. We also establish superconvergence at all discretization nodes (including exact nodal values in special cases). Numerical examples are provided to confirm the theory. Finally, to demonstrate the efficiency of the method for practical problems, the enriched linear, quadratic, and cubic elements are applied to a multi-layer wall model for drug-eluting stents in which zero-flux jump conditions and implicit concentration interface conditions are both present.
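A toy 1D illustration of the enrichment idea (with an invented interface location and coefficients, not the paper's construction): augmenting the polynomial space $\{1, x\}$ with a one-sided discontinuous function lets the discrete space reproduce a piecewise-smooth solution with a jump at $x=\alpha$ exactly.

```python
# Toy 1D enrichment sketch: the trial space span{1, x, E(x)} with a
# one-sided enrichment E reproduces a solution with a jump at x = alpha
# exactly. Interface location, coefficients and sample points are
# invented for the demonstration.

alpha = 0.4                                   # interface location

def enrich(x):                                # one-sided, discontinuous
    return 1.0 if x >= alpha else 0.0

def u_exact(x):                               # linear with a jump of 3
    return 2.0 * x + (3.0 if x >= alpha else 0.0)

def u_h(x):                                   # a + b*x + c*E(x)
    return 0.0 + 2.0 * x + 3.0 * enrich(x)

samples = [0.0, 0.2, 0.4, 0.6, 1.0]
errors = [abs(u_h(x) - u_exact(x)) for x in samples]   # all zero
```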

We introduce a PDE-based node-to-element contact formulation as an alternative to classical, purely geometric formulations. It is challenging to devise solutions to nonsmooth contact problems with a continuous gap using finite element discretizations. We achieve this objective here by constructing an approximate distance function (ADF) to the boundaries of the solid objects and, in doing so, also obtain universal uniqueness of contact detection. Unilateral constraints are implemented using a mixed model combining the screened Poisson equation and a force element, which has the topology of a continuum element containing an additional incident node. The ADF is obtained by solving the screened Poisson equation with constant essential boundary conditions and applying a variable transformation. The ADF does not explicitly depend on the number of objects, and a single solution of the partial differential equation for this field uniquely defines the contact conditions for all incident points in the mesh. Having an ADF for any obstacle circumvents multiple target surfaces and avoids the specific data structures of traditional contact-impact algorithms. We also relax the interpretation of the Lagrange multipliers as contact forces, and the Courant--Beltrami function is used within a mixed formulation to produce the required differentiable result. We demonstrate the advantages of the new approach on two- and three-dimensional problems solved with Newton iterations. Simultaneous constraints for each incident point are considered.
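The ADF idea can be sketched in 1D (with invented discretization parameters, not the paper's setting): solve the screened Poisson problem $\varepsilon^2 u'' = u$ on $(0,1)$ with $u=1$ on the boundary, then the transformation $\phi = -\varepsilon \log u$ approximates the distance to the nearest boundary point.

```python
import math

# 1D sketch of the ADF construction: finite differences for the
# screened Poisson problem eps^2 * u'' = u on (0, 1), u(0) = u(1) = 1,
# followed by the variable transformation phi = -eps * log(u).
# Grid size and eps are illustrative choices.

n, eps = 1000, 0.01
h = 1.0 / n
diag = 2.0 + (h / eps) ** 2        # from -u[i-1] + diag*u[i] - u[i+1] = 0

# Thomas algorithm for the tridiagonal system with u[0] = u[n] = 1
c = [0.0] * n                      # modified super-diagonal
d = [0.0] * n                      # modified right-hand side
c[1], d[1] = -1.0 / diag, 1.0 / diag
for i in range(2, n):
    m = diag + c[i - 1]
    c[i] = -1.0 / m
    d[i] = ((1.0 if i == n - 1 else 0.0) + d[i - 1]) / m

u = [0.0] * (n + 1)
u[0] = u[n] = 1.0
u[n - 1] = d[n - 1]
for i in range(n - 2, 0, -1):      # back substitution
    u[i] = d[i] - c[i] * u[i + 1]

phi_mid = -eps * math.log(u[n // 2])   # approx. 0.5 - eps*log(2)
```

Near the boundary the ADF is very accurate, while at the midpoint it underestimates the true distance $0.5$ by roughly $\varepsilon \log 2$, illustrating the "approximate" in ADF.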

The time-dependent Maxwell's equations govern electromagnetics. Under certain conditions, these equations can be rewritten as a second-order partial differential equation, in this case the vectorial wave equation. For the vectorial wave equation, we investigate the numerical treatment and the challenges in its implementation. For this purpose, we consider a space-time variational setting, i.e. time is just another spatial dimension. More specifically, we apply integration by parts in time as well as in space, leading to a space-time variational formulation with different trial and test spaces. Conforming discretizations of tensor-product type result in a Galerkin--Petrov finite element method that requires a CFL condition for stability. For this Galerkin--Petrov variational formulation, we study the CFL condition and its sharpness. To overcome the CFL condition, we use a Hilbert-type transformation that leads to a variational formulation with equal trial and test spaces. Conforming space-time discretizations then result in a new Galerkin--Bubnov finite element method that is unconditionally stable. In numerical examples, we demonstrate the effectiveness of this Galerkin--Bubnov finite element method. Furthermore, we investigate different projections of the right-hand side and their influence on the convergence rates. This paper is a first step towards a more stable computation and a better understanding of vectorial wave equations in a conforming space-time approach.

Consider the problem of solving systems of linear algebraic equations $Ax=b$ with a real symmetric positive definite matrix $A$ using the conjugate gradient (CG) method. To stop the algorithm at the appropriate moment, it is important to monitor the quality of the approximate solution. One of the most relevant quantities for measuring the quality of the approximate solution is the $A$-norm of the error. This quantity cannot be easily computed; however, it can be estimated. In this paper we discuss and analyze the behaviour of the Gauss-Radau upper bound on the $A$-norm of the error, based on viewing CG as a procedure for approximating a certain Riemann-Stieltjes integral. This upper bound depends on a prescribed underestimate $\mu$ of the smallest eigenvalue of $A$. We concentrate on explaining a phenomenon observed during computations: in later CG iterations the upper bound loses its accuracy and becomes almost independent of $\mu$. We construct a model problem that is used to demonstrate and study the behaviour of the upper bound in dependence on $\mu$, and we develop formulas that are helpful in understanding this behaviour. We show that the above-mentioned phenomenon is closely related to the convergence of the smallest Ritz value to the smallest eigenvalue of $A$. It occurs when the smallest Ritz value becomes a better approximation of the smallest eigenvalue than the prescribed underestimate $\mu$. We also suggest an adaptive strategy for improving the accuracy of the upper bounds in earlier iterations.
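The quadrature identity underlying such error bounds can be checked numerically on a small invented example: with CG coefficients $\gamma_j$ and residuals $r_j$, one has $\|x^* - x_k\|_A^2 = \sum_{j \ge k} \gamma_j \|r_j\|^2$, so for $x_0 = 0$ the full sum equals $b^T x^* = \|x^*\|_A^2$. The sketch below illustrates only this identity, not the Gauss-Radau modification involving $\mu$.

```python
# CG on a small SPD system, tracking gamma_j * ||r_j||^2. With x_0 = 0,
#   sum_j gamma_j * ||r_j||^2 = ||x*||_A^2 = b^T x*
# up to rounding. The 3x3 matrix is an arbitrary illustrative choice.

def matvec(A, v):
    return [sum(a * w for a, w in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]

x = [0.0, 0.0, 0.0]
r, p = b[:], b[:]
terms = []                         # gamma_j * ||r_j||^2, j = 0, 1, 2
for _ in range(3):                 # CG converges in at most n = 3 steps
    Ap = matvec(A, p)
    rr = dot(r, r)
    gamma = rr / dot(p, Ap)
    terms.append(gamma * rr)
    x = [xi + gamma * pi for xi, pi in zip(x, p)]
    r = [ri - gamma * api for ri, api in zip(r, Ap)]
    delta = dot(r, r) / rr
    p = [ri + delta * pi for ri, pi in zip(r, p)]

total = sum(terms)                 # equals b^T x* up to rounding
```

Truncating the tail of this sum after a few terms gives the computable Gauss (lower) estimate; the Gauss-Radau variant turns it into an upper bound by fixing a node at the underestimate $\mu$.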

A fundamental computational problem is to find a shortest non-zero vector in Euclidean lattices, a problem known as the Shortest Vector Problem (SVP). This problem is believed to be hard even on quantum computers and thus plays a pivotal role in post-quantum cryptography. In this work we explore how (efficiently) Noisy Intermediate Scale Quantum (NISQ) devices may be used to solve SVP. Specifically, we map the problem to that of finding the ground state of a suitable Hamiltonian. In particular, (i) we establish new bounds for lattice enumeration, which allow us to obtain new bounds (resp.~estimates) on the number of qubits required per dimension to solve SVP for arbitrary lattices (resp.~random q-ary lattices); (ii) we exclude the zero vector from the optimization space by proposing either (a) a different classical optimization loop or (b) a new mapping to the Hamiltonian. These improvements allow us to solve SVP in dimensions up to 28 in a quantum emulation, significantly more than was previously achieved, even for special cases. Finally, we extrapolate the size of NISQ devices required to solve lattice instances that are hard even for the best classical algorithms, and find that such instances can be tackled with approximately $10^3$ noisy qubits.
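For intuition, the classical brute-force baseline for SVP can be sketched on a toy two-dimensional lattice: enumerate integer combinations of the basis vectors inside a small coefficient box, skipping the zero vector, and keep the shortest candidate. Basis and box radius are invented; bounding such enumeration boxes rigorously is what item (i) above addresses.

```python
# Brute-force SVP baseline on a toy 2D lattice: enumerate integer
# combinations a*b1 + c*b2 with |a|, |c| <= R, excluding the zero
# vector, and keep the shortest. Basis and radius R are illustrative.

basis = [[2, 0], [1, 2]]
R = 3                                      # coefficient box radius

best_sq, best_vec = None, None
for a in range(-R, R + 1):
    for c in range(-R, R + 1):
        if a == 0 and c == 0:
            continue                       # exclude the zero vector
        v = [a * basis[0][0] + c * basis[1][0],
             a * basis[0][1] + c * basis[1][1]]
        sq = v[0] ** 2 + v[1] ** 2
        if best_sq is None or sq < best_sq:
            best_sq, best_vec = sq, v

# best_sq == 4: a shortest non-zero vector is (2, 0) (or its negative)
```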

Variable selection in linear regression settings is a much discussed problem. Best subset selection (BSS) is often considered the intuitive 'gold standard', with its use being restricted only by its NP-hard nature. Alternatives such as the least absolute shrinkage and selection operator (Lasso) or the elastic net (Enet) have become methods of choice in high-dimensional settings. A recent proposal represents BSS as a mixed integer optimization problem, so that much larger problems have become feasible in reasonable computation time. We present an extensive neutral comparison assessing the variable selection performance, in linear regressions, of BSS compared to forward stepwise selection (FSS), Lasso and Enet. The simulation study considers a wide range of settings that are challenging with regard to dimensionality (with respect to the number of observations and variables), signal-to-noise ratios and correlations between predictors. As the main measure of performance, we used the best possible F1-score for each method to ensure a fair comparison irrespective of any criterion for choosing the tuning parameters; the results were confirmed by alternative performance measures. Somewhat surprisingly, only in settings where the signal-to-noise ratio was high and the variables were (nearly) uncorrelated did BSS reliably outperform the other methods, even in low-dimensional settings. Further, FSS's performance was nearly identical to that of BSS. Our results shed new light on the usual presumption of BSS being, in principle, the best choice for variable selection. Especially for correlated variables, alternatives like Enet are faster and appear to perform better in practical settings.
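A miniature version of this comparison's BSS-plus-F1 pipeline can be sketched on invented synthetic data (uncorrelated design, high signal-to-noise ratio, four candidate variables): enumerate all subsets, pick the lowest-RSS subset of each size, and score each against the true support with the F1-score. All dimensions, coefficients, and the noise level are invented; the study's actual settings are far larger.

```python
import itertools, random

# Toy BSS + F1 sketch: exhaustive best subset selection on synthetic
# data with an uncorrelated design and high SNR, scored by the F1-score
# of the selected support against the true one {0, 1}.

random.seed(0)
n, p, true_support = 40, 4, {0, 1}
X = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
y = [row[0] + row[1] + random.gauss(0.0, 0.1) for row in X]

def ols_rss(cols):
    """RSS of least squares on the selected columns (normal equations)."""
    k = len(cols)
    G = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in cols] for a in cols]
    g = [sum(X[i][a] * y[i] for i in range(n)) for a in cols]
    # Gaussian elimination (no pivoting; G is SPD) on [G | g]
    M = [row[:] + [g[j]] for j, row in enumerate(G)]
    for col in range(k):
        for r2 in range(col + 1, k):
            f = M[r2][col] / M[col][col]
            M[r2] = [mr - f * mc for mr, mc in zip(M[r2], M[col])]
    beta = [0.0] * k
    for r2 in range(k - 1, -1, -1):
        beta[r2] = (M[r2][k] - sum(M[r2][c2] * beta[c2]
                                   for c2 in range(r2 + 1, k))) / M[r2][r2]
    return sum((y[i] - sum(beta[j] * X[i][cols[j]] for j in range(k))) ** 2
               for i in range(n))

def f1(selected):
    tp = len(selected & true_support)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(selected), tp / len(true_support)
    return 2 * prec * rec / (prec + rec)

# BSS: best subset of each size by RSS; report the best possible F1
best_f1 = 0.0
for size in range(1, p + 1):
    subset = min(itertools.combinations(range(p), size), key=ols_rss)
    best_f1 = max(best_f1, f1(set(subset)))
```

In this easy regime BSS recovers the true support exactly (F1 of 1.0), matching the study's finding that BSS shines with high SNR and uncorrelated predictors.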

The receiver operating characteristic (ROC) curve is a powerful statistical tool that has been widely applied in medical research. In ROC curve estimation, a commonly used assumption is that the larger the biomarker value, the greater the severity of the disease. In this paper, we mathematically interpret ``greater severity of the disease'' as ``larger probability of being diseased''. This, in turn, is equivalent to assuming the likelihood ratio ordering of the biomarker between the diseased and healthy individuals. Under this assumption, we first propose a Bernstein polynomial method to model the distributions of both samples; we then estimate the distributions by the maximum empirical likelihood principle. The ROC curve estimate and the associated summary statistics are obtained subsequently. Theoretically, we establish the asymptotic consistency of our estimators. Via extensive numerical studies, we compare the performance of our method with that of competitive methods. The application of our method is illustrated by a real-data example.
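As background (not the paper's estimator), the fully empirical baseline can be sketched with invented numbers: the area under the empirical ROC curve equals the Mann-Whitney estimate of $P(X_D > X_H)$, with ties counted as one half. The smoothing and likelihood-ratio-ordering constraint via Bernstein polynomials are what the proposed method adds on top of this.

```python
# Empirical AUC sketch: with biomarker samples from the healthy and
# diseased groups, the area under the empirical ROC curve equals the
# Mann-Whitney estimate of P(X_D > X_H), ties counted half.
# The sample values are invented for illustration.

healthy = [0.10, 0.35, 0.40, 0.80]
diseased = [0.60, 0.70, 0.80, 0.90]

pairs = [(d, h) for d in diseased for h in healthy]
auc = sum(1.0 if d > h else 0.5 if d == h else 0.0
          for d, h in pairs) / len(pairs)
# auc == 13.5 / 16 == 0.84375
```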
