In this paper, we introduce new generalized barycentric coordinates (coined as {\em moment coordinates}) on nonconvex quadrilaterals and convex hexahedra with planar faces. This work draws on recent advances in constructing interpolants to describe the motion of the Filippov sliding vector field in nonsmooth dynamical systems, in which nonnegative solutions of signed matrices based on (partial) distances are studied. For a finite element with $n$ vertices (nodes) in $\mathbb{R}^2$, the constant and linear reproducing conditions are supplemented with additional linear moment equations to set up a linear system of equations of full rank $n$, whose solution yields the nonnegative shape functions. On a simple (convex or nonconvex) quadrilateral, moment coordinates using signed distances are identical to mean value coordinates. For signed weights based on the product of the distances to the edges incident to a vertex and their edge lengths, we recover Wachspress coordinates on a convex quadrilateral. Moment coordinates are also constructed on convex hexahedra with planar faces. We present proofs in support of the construction, and plots of the shape functions that affirm their properties.
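Since the abstract identifies moment coordinates with mean value coordinates on simple quadrilaterals, Floater's classical mean value construction offers a concrete reference point. The sketch below implements that standard formula (not the paper's moment-equation construction) and can be used to check the partition-of-unity and linear-precision properties mentioned above:

```python
import math

def mean_value_coordinates(poly, x):
    """Floater's mean value coordinates for a point x strictly inside
    a simple polygon 'poly' (list of (x, y) vertices in order)."""
    n = len(poly)
    # Spokes from x to each vertex and their lengths.
    d = [(vx - x[0], vy - x[1]) for vx, vy in poly]
    r = [math.hypot(dx, dy) for dx, dy in d]

    def angle(i):
        # Signed angle at x between spokes to v_i and v_{i+1}.
        j = (i + 1) % n
        cross = d[i][0] * d[j][1] - d[i][1] * d[j][0]
        dot = d[i][0] * d[j][0] + d[i][1] * d[j][1]
        return math.atan2(cross, dot)

    w = [(math.tan(angle(i - 1) / 2) + math.tan(angle(i) / 2)) / r[i]
         for i in range(n)]
    s = sum(w)
    return [wi / s for wi in w]
```

On a convex quadrilateral these coordinates are positive, sum to one, and reproduce linear functions exactly, which are the properties the paper's moment coordinates are designed to share.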

The analysis of compositional data has been dominated by the use of logratio transformations, which ensure exact subcompositional coherence and, in some situations, exact isometry as well. A problem with this approach is that data zeros, found in most applications, have to be replaced to allow the logarithmic transformation. An alternative approach, called the `chiPower' transformation, allows data zeros: it combines the standardization inherent in the chi-square distance of correspondence analysis with the essential elements of the Box-Cox power transformation. The chiPower transformation is justified because it defines between-sample distances that tend to logratio distances for strictly positive data as the power parameter tends to zero, and are then equivalent to transforming to logratios. For data with zeros, a value of the power can be identified that brings the chiPower transformation as close as possible to a logratio transformation, without having to substitute the zeros. Especially for high-dimensional data, this alternative approach can achieve such a high level of coherence and isometry as to be a valid approach to the analysis of compositional data. Furthermore, in a supervised learning context, if the compositional variables serve as predictors of a response in a modelling framework, for example generalized linear models, then the power can be used as a tuning parameter to optimize the accuracy of prediction through cross-validation. The chiPower-transformed variables have a straightforward interpretation, since each is identified with a single compositional part, not a ratio.
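One plausible reading of the two ingredients named in the abstract, chi-square standardization plus a Box-Cox power, is sketched below. This is an illustrative reconstruction, not the paper's exact definition: rows are closed to proportions, Box-Cox-transformed with power `lam`, and scaled by the square roots of the column means as in the chi-square distance. The defining limit (Box-Cox tends to the logarithm as the power tends to zero) is the property the test checks.

```python
import math

def chipower(rows, lam):
    """Hypothetical chiPower-style transform (illustrative sketch only):
    close each row to proportions, apply the Box-Cox power transform,
    and standardize by sqrt of the column means (chi-square weighting).
    Requires strictly positive data when lam == 0."""
    profiles = [[v / sum(row) for v in row] for row in rows]
    ncol = len(profiles[0])
    col_mean = [sum(p[j] for p in profiles) / len(profiles)
                for j in range(ncol)]

    def boxcox(v):
        # Box-Cox power transform; the lam -> 0 limit is log(v).
        return math.log(v) if lam == 0 else (v ** lam - 1.0) / lam

    return [[boxcox(p[j]) / math.sqrt(col_mean[j]) for j in range(ncol)]
            for p in profiles]
```

As the power parameter shrinks, the transformed values approach the log-based values elementwise, mirroring the convergence to logratio analysis described in the abstract.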

We analyze the Schr\"odingerisation method for quantum simulation of a general class of non-unitary dynamics with inhomogeneous source terms. The Schr\"odingerisation technique, introduced in \cite{JLY22a,JLY23}, transforms any linear ordinary or partial differential equation with non-unitary dynamics into a system under unitary dynamics, via a warped phase transition that maps the equations into one higher dimension, making them suitable for quantum simulation. The technique also applies to equations with inhomogeneous terms modeling sources, forcing, or boundary and interface conditions, and to discrete dynamical systems such as iterative methods in numerical linear algebra, through extra equations in the system. Difficulty arises in the presence of inhomogeneous terms, since they can change the stability of the original system. In this paper, we systematically study--both theoretically and numerically--the important issue of recovering the original variables from the Schr\"odingerized equations, even when the evolution operator contains unstable modes. We show that even with unstable modes one can still construct a stable scheme, yet to recover the original variable one needs to use suitable data in the extended space. We analyze and compare both the discrete and continuous Fourier transforms used in the extended dimension, and derive the corresponding error estimates, which allows one to choose the more appropriate transform for specific equations. We also provide a smoother initialization for the Schr\"odingerized system to gain higher-order accuracy in the extended space. We homogenize the inhomogeneous terms with a stretch transformation, making it easier to recover the original variable. Our recovery technique also provides a simple and generic framework for solving general ill-posed problems in a computationally stable way.

Incorporating probabilistic terms in mathematical models is crucial for capturing and quantifying uncertainties in real-world systems. Indeed, randomness can have a significant impact on the behavior of the problem's solution, and a deeper analysis is needed to obtain more realistic and informative results. On the other hand, the investigation of stochastic models may require great computational resources, owing to the need to generate numerous realizations of the system to obtain meaningful statistics. This makes the development of complexity reduction techniques, such as surrogate models, essential for enabling efficient and scalable simulations. In this work, we exploit polynomial chaos (PC) expansion to study the accuracy of surrogate representations for a bifurcating phenomenon in fluid dynamics, namely the Coanda effect, where the stochastic setting gives a different perspective on the non-uniqueness of the solution. We then describe its inclusion in the finite element setting, arriving at the formulation of the enhanced Spectral Stochastic Finite Element Method (SSFEM). Moreover, we investigate the connections between the deterministic bifurcation diagram and the PC polynomials, underlining their capability to reconstruct the whole solution manifold.
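The core mechanism of a PC expansion, projecting a random response onto orthogonal polynomials of a standard random input, can be sketched generically; the code below uses probabilists' Hermite polynomials for a Gaussian input and a simple quadrature as a stand-in for Gauss-Hermite rules. It is a minimal illustration of the expansion itself, not the paper's SSFEM formulation.

```python
import math

# Probabilists' Hermite polynomials He_0 .. He_3 (orthogonal w.r.t. N(0,1),
# with E[He_k(xi)^2] = k!).
HERMITE = [
    lambda x: 1.0,
    lambda x: x,
    lambda x: x * x - 1.0,
    lambda x: x ** 3 - 3.0 * x,
]

def pc_coefficients(f, degree=3, lo=-10.0, hi=10.0, n=20000):
    """PC coefficients c_k = E[f(xi) He_k(xi)] / k! for xi ~ N(0,1),
    computed with a midpoint rule against the Gaussian density."""
    h = (hi - lo) / n
    coeffs = []
    for k in range(degree + 1):
        acc = 0.0
        for i in range(n):
            x = lo + (i + 0.5) * h
            phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
            acc += f(x) * HERMITE[k](x) * phi * h
        coeffs.append(acc / math.factorial(k))
    return coeffs
```

For instance, since $x^2 = \mathrm{He}_2(x) + \mathrm{He}_0(x)$, projecting $f(\xi)=\xi^2$ recovers the coefficients $c_0 = c_2 = 1$ with all others zero.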

Testing for independence between two random vectors is a fundamental problem in statistics. Empirical studies show that many existing omnibus consistent tests may not work well for some strongly nonmonotonic and nonlinear relationships. To explore the reasons behind this issue, we transform the multivariate independence testing problem into the equivalent problem of checking the equality of two bivariate means. A key observation is that the power loss is mainly due to the cancellation of positive and negative terms in dependence metrics, making them very close to zero. Motivated by this observation, we propose a class of consistent metrics, indexed by a positive integer $\gamma$, that exactly characterize independence. Theoretically, we show that the metrics with even or infinite $\gamma$ can effectively avoid the cancellation, and have high power under alternatives in which the two mean differences offset each other. Since we target a wide range of dependence scenarios in practice, we further suggest combining the p-values of test statistics with different $\gamma$'s through Fisher's method. We illustrate the advantages of the proposed tests through extensive numerical studies.
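The $\gamma$-indexed metrics are specific to the paper, but the combining step it invokes is the standard Fisher method: under the global null, $X = -2\sum_i \log p_i$ follows a chi-square distribution with $2k$ degrees of freedom. A minimal sketch, using the closed-form survival function available for even degrees of freedom:

```python
import math

def fisher_combine(pvalues):
    """Fisher's method for combining k independent p-values.
    X = -2 * sum(log p_i) ~ chi-square with 2k df under the null;
    for even df = 2k the survival function has the closed form
    exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))
```

Combining a single p-value returns it unchanged, and two moderately small p-values combine into a smaller one, which is the behavior that lets tests with different $\gamma$'s reinforce each other.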

In a regression model with multiple response variables and multiple explanatory variables, if the difference of the mean vectors of the response variables for different values of the explanatory variables always lies in the direction of the first principal eigenvector of the covariance matrix of the response variables, the model is called a multivariate allometric regression model. This paper studies the estimation of the first principal eigenvector in the multivariate allometric regression model. A class of estimators that includes conventional estimators is proposed, based on weighted sums of the regression and residual sum-of-squares matrices. We establish an upper bound on the mean squared error of the estimators in this class, and derive the weight value minimizing the bound. Sufficient conditions for the consistency of the estimators are discussed in weak identifiability regimes, under which the gap between the largest and second largest eigenvalues of the covariance matrix decays asymptotically, and in ``large $p$, large $n$'' regimes, where $p$ is the number of response variables and $n$ is the sample size. Several numerical results are also presented.
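The estimator class described above can be sketched generically: form the weighted combination of the two sum-of-squares matrices and extract its dominant eigenvector, here by power iteration. The matrices and the weight below are toy values for illustration; the paper derives the weight that minimizes the MSE bound analytically.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def first_eigvec(A, iters=500):
    """Dominant (unit-norm) eigenvector of a symmetric matrix A,
    computed by plain power iteration."""
    v = [1.0] * len(A)
    for _ in range(iters):
        w = matvec(A, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def weighted_ss_estimator(S_H, S_E, w):
    """Sketch of the estimator class: first principal eigenvector of
    the weighted sum w*S_H + (1-w)*S_E of the regression (S_H) and
    residual (S_E) sum-of-squares matrices."""
    n = len(S_H)
    M = [[w * S_H[i][j] + (1.0 - w) * S_E[i][j] for j in range(n)]
         for i in range(n)]
    return first_eigvec(M)
```

Setting the weight to 1 or 0 recovers the two conventional estimators based on a single sum-of-squares matrix, which is why the class contains them.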

This paper provides a comprehensive analysis of the optimization and performance evaluation of various routing algorithms within the context of computer networks. Routing algorithms are critical for determining the most efficient path for data transmission between nodes in a network. The efficiency, reliability, and scalability of a network rely heavily on the choice and optimization of its routing algorithm. The paper begins with an overview of fundamental routing strategies, including shortest path, flooding, distance vector, and link state algorithms, and extends to more sophisticated techniques.
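Of the strategies listed, the shortest-path computation at the heart of link state routing is the most self-contained to illustrate. A minimal Dijkstra sketch over an adjacency-list graph (the graph shape below is an assumed convention, not taken from the paper):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from 'source' in a weighted digraph
    given as {node: [(neighbor, cost), ...]} with nonnegative costs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

In a link state protocol such as OSPF, each router runs essentially this computation over the flooded topology database to build its forwarding table.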

In this paper I will develop a lambda-term calculus, lambda-2Int, for a bi-intuitionistic logic and discuss its implications for the notions of sense and denotation of derivations in a bilateralist setting. Thus, I will use the Curry-Howard correspondence, which has been well-established between the simply typed lambda-calculus and natural deduction systems for intuitionistic logic, and apply it to a bilateralist proof system displaying two derivability relations, one for proving and one for refuting. The basis will be the natural deduction system of Wansing's bi-intuitionistic logic 2Int, which I will turn into a term-annotated form. This requires a type theory that extends to a two-sorted typed lambda-calculus. I will present such a term-annotated proof system for 2Int and prove a Dualization Theorem relating proofs and refutations in this system. On the basis of these formal results, I will argue that this gives us interesting insights into questions about sense and denotation, as well as synonymy and identity of proofs, from a bilateralist point of view.
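The flavor of a dualization operation can be conveyed with a toy formula-level sketch: conjunction and disjunction swap, and implication swaps with co-implication. This is illustrative only; the paper's Dualization Theorem concerns term-annotated derivations in lambda-2Int, not bare formulas, and the connective names below are my own encoding.

```python
# Toy formula syntax: atoms are strings; compound formulas are tuples
# ('and', A, B), ('or', A, B), ('imp', A, B), ('coimp', A, B).

def dualize(f):
    """Toy bi-intuitionistic dualization on formulas: swap 'and'/'or'
    and 'imp'/'coimp', leaving atoms fixed. Involutive by construction."""
    if isinstance(f, str):
        return f
    op, a, b = f
    swap = {'and': 'or', 'or': 'and', 'imp': 'coimp', 'coimp': 'imp'}
    return (swap[op], dualize(a), dualize(b))
```

The involution `dualize(dualize(f)) == f` mirrors, at this crude level, the way proofs and refutations are exchanged in pairs in the bilateralist setting.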

In this paper, we develop a new type of Runge--Kutta (RK) discontinuous Galerkin (DG) method for solving hyperbolic conservation laws. Compared with the original RKDG method, the new method features improved compactness and allows simple boundary treatment. The key idea is to hybridize two different spatial operators in an explicit RK scheme, utilizing local projected derivatives for inner RK stages and the usual DG spatial discretization for the final stage only. Limiters are applied only at the final stage for the control of spurious oscillations. We also explore the connections between our method and Lax--Wendroff DG schemes and ADER-DG schemes. Numerical examples are given to confirm that the new RKDG method is as accurate as the original RKDG method, while being more compact, for problems including two-dimensional Euler equations for compressible gas dynamics.

In this paper, we present a novel class of high-order Runge--Kutta (RK) discontinuous Galerkin (DG) schemes for hyperbolic conservation laws. The new method extends beyond the traditional method of lines framework and utilizes stage-dependent polynomial spaces for the spatial discretization operators. To be more specific, two different DG operators, associated with $\mathcal{P}^k$ and $\mathcal{P}^{k-1}$ piecewise polynomial spaces, are used at different RK stages. The resulting method is referred to as the sdRKDG method. It features fewer floating-point operations and may achieve larger time step sizes. For problems without sonic points, we observe optimal convergence for all the sdRKDG schemes; and for problems with sonic points, we observe that a subset of the sdRKDG schemes remains optimal. We have also conducted von Neumann analysis for the stability and error of the sdRKDG schemes for the linear advection equation in one dimension. Numerical tests, for problems including two-dimensional Euler equations for gas dynamics, are provided to demonstrate the performance of the new method.
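Both of the preceding abstracts revolve around using different spatial operators at different Runge-Kutta stages. A crude finite-difference analogue of that idea is sketched below: an SSP-RK3 step for linear advection in which the inner stages use a cheap first-order upwind operator and the final stage uses a second-order upwind-biased one. The real methods use DG operators on $\mathcal{P}^k$ and $\mathcal{P}^{k-1}$ spaces; this stand-in only illustrates the stage-dependent structure.

```python
import math

def upwind1(u, dx):
    """First-order upwind approximation of -u_x (advection speed 1),
    periodic via Python's negative indexing."""
    return [-(u[i] - u[i - 1]) / dx for i in range(len(u))]

def upwind2(u, dx):
    """Second-order upwind-biased approximation of -u_x."""
    return [-(3 * u[i] - 4 * u[i - 1] + u[i - 2]) / (2 * dx)
            for i in range(len(u))]

def ssp_rk3_step(u, dt, dx):
    """One SSP-RK3 step for u_t + u_x = 0 on a periodic grid, with a
    cheap operator in the inner stages and a higher-order operator in
    the final stage (stage-dependent operators, illustrative only)."""
    n = len(u)
    L1 = upwind1(u, dx)
    u1 = [u[i] + dt * L1[i] for i in range(n)]
    L2 = upwind1(u1, dx)
    u2 = [0.75 * u[i] + 0.25 * (u1[i] + dt * L2[i]) for i in range(n)]
    L3 = upwind2(u2, dx)  # higher-order operator at the final stage only
    return [u[i] / 3 + 2 * (u2[i] + dt * L3[i]) / 3 for i in range(n)]
```

Because both stencils telescope on a periodic grid, the step conserves total mass exactly, which gives a quick sanity check independent of accuracy.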

We propose a novel algorithm for the support estimation of partially known Gaussian graphical models that incorporates prior information about the underlying graph. In contrast to classical approaches that provide a point estimate based on a maximum likelihood or a maximum a posteriori criterion using (simple) priors on the precision matrix, we consider a prior on the graph and rely on annealed Langevin diffusion to generate samples from the posterior distribution. Since the Langevin sampler requires access to the score function of the underlying graph prior, we use graph neural networks to effectively estimate the score from a graph dataset (either available beforehand or generated from a known distribution). Numerical experiments demonstrate the benefits of our approach.
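The sampler at the core of the approach can be sketched generically: annealed Langevin dynamics iterates a noisy gradient-ascent step on the log-density at a decreasing sequence of step sizes. In the paper the score comes from a graph neural network trained on a graph dataset; the sketch below uses a closed-form score for a one-dimensional Gaussian target purely for illustration.

```python
import math
import random

def langevin_sample(score, x0, eps_schedule, steps_per_level, rng):
    """Annealed Langevin dynamics: at each step size eps in the
    schedule, iterate x <- x + eps * score(x) + sqrt(2*eps) * z with
    z ~ N(0, 1). 'score' is the gradient of the log-density."""
    x = x0
    for eps in eps_schedule:
        for _ in range(steps_per_level):
            x = x + eps * score(x) + math.sqrt(2 * eps) * rng.gauss(0, 1)
    return x
```

For the target $N(3, 1)$ the score is $s(x) = -(x - 3)$, and repeated runs of the chain produce samples whose mean and variance match the target, up to discretization bias controlled by the final step size.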