
Functions with singularities are notoriously difficult to approximate with conventional approximation schemes. In computational applications they are often resolved with low-order piecewise polynomials, multilevel schemes or other types of grading strategies. Rational functions are an exception to this rule: for univariate functions with point singularities, such as branch points, rational approximations exist with root-exponential convergence in the rational degree. This is typically enabled by the clustering of poles near the singularity. Both the theory and computational practice of rational functions for function approximation have focused on the univariate case, with extensions to two dimensions via identification with the complex plane. Multivariate rational functions, i.e., quotients of polynomials of several variables, are relatively unexplored in comparison. Yet, apart from a steep increase in theoretical complexity, they also offer a wealth of opportunities. A first observation is that singularities of multivariate rational functions may be continuous curves of poles, rather than isolated ones. By generalizing the clustering of poles from points to curves, we explore constructions of multivariate rational approximations to functions with curves of singularities.
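To make the pole-clustering idea concrete, here is a minimal numerical sketch (not the paper's construction): a least-squares rational fit to $\sqrt{x}$ on $[0,1]$ with poles placed at assumed locations $p_j = -e^{-\sigma j/\sqrt{n}}$, exponentially clustered at the branch point; the pole count $n$ and clustering strength $\sigma$ are illustrative choices.

```python
import numpy as np

# Illustrative sketch, not the paper's construction: least-squares rational
# approximation of sqrt(x) on [0, 1] with poles clustered exponentially at
# the branch point x = 0.
n = 20                                        # number of poles (assumption)
sigma = 4.0                                   # clustering strength (assumption)
poles = -np.exp(-sigma * np.arange(1, n + 1) / np.sqrt(n))

# Sample points graded toward the singularity so the fit resolves it.
x = np.linspace(0.0, 1.0, 2000)[1:] ** 2

# Least-squares fit in the partial-fraction basis {1, x, 1/(x - p_j)}.
A = np.column_stack([np.ones_like(x), x] + [1.0 / (x - p) for p in poles])
coef, *_ = np.linalg.lstsq(A, np.sqrt(x), rcond=None)

print(f"max sample error with {n} clustered poles:",
      np.abs(A @ coef - np.sqrt(x)).max())
```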

Related content

We address the problem of constructing approximations based on orthogonal polynomials that preserve an arbitrary set of moments of a given function without losing the spectral convergence property. To this end, we compute the constrained polynomial of best approximation for a generic basis of orthogonal polynomials. The construction is entirely general and allows us to derive structure-preserving numerical methods for partial differential equations that require the conservation of some moments of the solution, typically representing relevant physical quantities of the problem. These properties are essential to capture the long-time behavior of the solution with high accuracy. We illustrate the generality and performance of the present approach with several numerical applications to Fokker-Planck equations.
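As a sketch of the constrained best-approximation idea (a generic moment-constrained least-squares projection, not the paper's algorithm), one can solve the KKT system of the projection problem directly; the target function, the Legendre basis, and the number of preserved moments below are all assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

# Generic sketch: best L2 approximation of f on [-1, 1] by a degree-N
# Legendre expansion, constrained to preserve the moments of f of orders
# 0..2, via the KKT system of the constrained least-squares problem.
N, n_mom = 12, 3
xq, wq = legendre.leggauss(64)                   # quadrature nodes/weights
f = np.exp(-3 * xq**2)                           # example target (assumption)

# Orthonormal Legendre basis evaluated at the quadrature nodes.
P = np.stack([legendre.legval(xq, np.eye(N + 1)[i]) for i in range(N + 1)])
P *= np.sqrt((2 * np.arange(N + 1) + 1) / 2)[:, None]

fhat = P @ (wq * f)                              # unconstrained coefficients
G = np.stack([P @ (wq * xq**k) for k in range(n_mom)])    # moment functionals
m = np.array([np.sum(wq * xq**k * f) for k in range(n_mom)])

# KKT system: minimize ||c - fhat||^2 subject to G c = m.
K = np.block([[np.eye(N + 1), G.T], [G, np.zeros((n_mom, n_mom))]])
c = np.linalg.solve(K, np.concatenate([fhat, m]))[: N + 1]
print("moment errors:", G @ c - m)               # ~ machine zero
```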

Complex interval arithmetic is a powerful tool for the analysis of computational errors. The naturally arising rectangular, polar, and circular interval types (together called primitive types) are not closed under simple arithmetic operations, and their use yields overly relaxed bounds. The later-introduced polygonal type, on the other hand, allows arbitrarily precise representation of the above operations at a higher computational cost. We propose the polyarcular interval type as an effective extension of the previous types. The polyarcular interval can represent all primitive intervals and most of their arithmetic combinations precisely, and has an approximation capability competing with that of the polygonal interval. In particular, in antenna tolerance analysis it can achieve perfect accuracy at a lower computational cost than the polygonal type, which we show in a relevant case study. In this paper, we present a rigorous analysis of the arithmetic properties of all five interval types, involving a new algebro-geometric method of boundary analysis.
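A minimal experiment illustrating why the primitive types over-estimate (the interval endpoints are made up): sampling two rectangular complex intervals and multiplying shows that the exact product set covers only part of its tightest rectangular enclosure.

```python
import numpy as np

# Two rectangular complex intervals with Re, Im in [1, 2] (assumed bounds).
# Their exact product set is not a rectangle, so the tightest rectangular
# enclosure necessarily over-estimates it.
rng = np.random.default_rng(0)
a = rng.uniform(1, 2, 200_000) + 1j * rng.uniform(1, 2, 200_000)
b = rng.uniform(1, 2, 200_000) + 1j * rng.uniform(1, 2, 200_000)
p = a * b                                  # samples of the true product set

# Estimate what fraction of the rectangular hull the product set covers.
H, _, _ = np.histogram2d(p.real, p.imag, bins=200)
print("covered fraction of rectangular hull:", (H > 0).mean())
```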

Gaussian binomial coefficients are q-analogues of the binomial coefficients of integers. On the other hand, binomial coefficients have been extended to finite words, i.e., elements of finitely generated free monoids. In this paper we bring together these two notions by introducing q-analogues of binomial coefficients of words. We study their basic properties, e.g., by extending classical formulas such as the q-Vandermonde identity and the identities of Manvel et al. to our setting. As a consequence, we get information about the structure of the considered words: these q-deformations of binomial coefficients of words contain much richer information than the original coefficients. From an algebraic perspective, we introduce a q-shuffle product and a family of q-infiltration products for non-commutative formal power series. Finally, we apply our results to generalize a theorem of Eilenberg characterizing so-called p-group languages. We show that a language is of this type if and only if it is a Boolean combination of specific languages defined through q-binomial coefficients seen as polynomials over $\mathbb{F}_p$.
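For background, here are the two classical ingredients being combined, sketched in code: the Gaussian binomial via the q-Pascal recurrence, and the binomial coefficient of words as a scattered-subword occurrence count. The paper's q-deformation of word binomials itself is not reproduced here.

```python
def gauss_binom(n, k):
    """Gaussian binomial [n choose k]_q as a dict {power of q: coefficient},
    via the q-Pascal rule [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q."""
    if k < 0 or k > n:
        return {}
    if k == 0 or k == n:
        return {0: 1}
    out = dict(gauss_binom(n - 1, k - 1))
    for e, c in gauss_binom(n - 1, k).items():
        out[e + k] = out.get(e + k, 0) + c
    return out

def word_binom(w, u):
    """Classical binomial coefficient of words: the number of occurrences
    of u as a scattered subword of w, by dynamic programming."""
    dp = [1] + [0] * len(u)          # dp[j] = occurrences of u[:j] so far
    for a in w:
        for j in range(len(u), 0, -1):   # right-to-left to reuse old counts
            if u[j - 1] == a:
                dp[j] += dp[j - 1]
    return dp[len(u)]

print(gauss_binom(4, 2))         # {0:1, 1:1, 2:2, 3:1, 4:1} = 1+q+2q^2+q^3+q^4
print(word_binom("abab", "ab"))  # 3
```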

The comparison of frequency distributions is a common statistical task with broad applications and a long history of methodological development. However, existing measures do not quantify the magnitude and direction by which one distribution is shifted relative to another. In the present study, we define distributional shift (DS) as the concentration of frequencies away from the greatest discrete class, e.g., a histogram's right-most bin. We derive a measure of DS based on the sum of cumulative frequencies, intuitively quantifying shift as a statistical moment. We then define relative distributional shift (RDS) as the difference in DS between distributions. Using simulated random sampling, we demonstrate that RDS is closely related to measures that are popularly used to compare frequency distributions. Focusing on a specific use case, i.e., simulated healthcare Evaluation and Management coding profiles, we show how RDS can be used to examine many pairs of empirical and expected distributions via shift-significance plots. In comparison to other measures, RDS has the unique advantage of being a signed (directional) measure based on a simple difference in an intuitive property.
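A minimal sketch of the two quantities, under the assumption that DS is the normalized sum of cumulative relative frequencies (so larger DS means mass concentrated away from the greatest class); the histograms are made-up examples, and the paper's exact normalization may differ.

```python
import numpy as np

def ds(freqs):
    """Distributional shift, read here as the normalized sum of cumulative
    relative frequencies over all but the last class (an assumption)."""
    f = np.asarray(freqs, dtype=float)
    F = np.cumsum(f / f.sum())            # cumulative relative frequencies
    return F[:-1].sum() / (len(f) - 1)    # F[-1] == 1 carries no information

left_shifted = [50, 30, 10, 5, 5]         # mass near the lowest class
right_shifted = [5, 5, 10, 30, 50]        # mass near the greatest class

rds = ds(left_shifted) - ds(right_shifted)    # signed, directional difference
print(f"DS(left)={ds(left_shifted):.3f}  "
      f"DS(right)={ds(right_shifted):.3f}  RDS={rds:.3f}")
```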

The design of particle simulation methods for collisional plasma physics has always represented a challenge due to the unbounded total collisional cross section, which prevents a natural extension of the classical Direct Simulation Monte Carlo (DSMC) method devised for the Boltzmann equation. One way to overcome this problem is to design Monte Carlo algorithms that are robust in the so-called grazing collision limit. In the first part of this manuscript, we focus on the construction of collision algorithms for the Landau-Fokker-Planck equation that are based on the grazing collision asymptotics and avoid the use of iterative solvers. Subsequently, we discuss problems involving uncertainties and show how to develop a stochastic Galerkin projection of the particle dynamics that recovers spectral accuracy for smooth solutions in the random space. Several classical numerical tests are reported to validate the present approach.
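As a rough illustration of the grazing-collision mechanics (a generic small-angle binary collision step in 2D, not the paper's scheme; all parameters are made up): rotating each pair's relative velocity by a small random angle conserves momentum and kinetic energy exactly.

```python
import numpy as np

# One grazing binary collision step in 2D. Each pair's relative velocity is
# rotated by a small random angle; the rotation preserves |g|, so momentum
# and kinetic energy are conserved exactly per pair.
rng = np.random.default_rng(1)
N, eps, dt = 10_000, 0.1, 0.01             # particles, grazing strength, step

v = rng.normal(size=(N, 2)) * [2.0, 0.5]   # anisotropic initial velocities
pairs = rng.permutation(N).reshape(-1, 2)  # random disjoint pairs

v1, v2 = v[pairs[:, 0]], v[pairs[:, 1]]
g = v1 - v2                                # relative velocities
theta = rng.normal(scale=np.sqrt(eps * dt), size=len(g))  # grazing angles
c, s = np.cos(theta), np.sin(theta)
g_rot = np.stack([c * g[:, 0] - s * g[:, 1],
                  s * g[:, 0] + c * g[:, 1]], axis=1)

v[pairs[:, 0]] = (v1 + v2) / 2 + g_rot / 2
v[pairs[:, 1]] = (v1 + v2) / 2 - g_rot / 2

print("mean velocity:", v.mean(axis=0), " energy:", 0.5 * (v**2).sum() / N)
```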

The deformed energy method has been shown to be a good option for the dimensional synthesis of mechanisms. In this paper we propose several new features for this approach. First, constraints fixing the dimensions of certain links are introduced into the error function of the synthesis problem. Second, requirements on distances between given nodes are included in the error function for the analysis of the deformed position problem. Both the overall synthesis error function and the inner analysis error function are optimized using a Sequential Quadratic Programming (SQP) approach, which also reduces the probability of branch or circuit defects. For the inner function, analytical derivatives are used, while approximate derivatives are introduced in the synthesis optimization. Furthermore, constraints are analyzed under two formulations: the Euclidean distance, and an alternative approach that uses its square. The latter is common in kinematics and simplifies the computation of derivatives. Examples are provided to show the convergence order of the error function and the fulfilment of the constraints in both formulations under different topological situations and achieved energy levels.
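The two constraint formulations can be contrasted on a toy problem (made-up anchors, target point, and link length, solved with SciPy's SLSQP rather than the paper's implementation): the squared form has a polynomial gradient $2(p-a)$ and avoids the non-smoothness of the norm at $p=a$.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: place node p at distance L from two anchors, staying close
# to a target point (all values are assumptions for illustration).
a1, a2, L = np.array([0.0, 0.0]), np.array([2.0, 0.0]), 1.5
target = np.array([1.0, 2.0])
obj = lambda p: np.sum((p - target) ** 2)

# Formulation 1: ||p - a|| - L = 0 (non-smooth at p = a).
cons_dist = [{"type": "eq", "fun": lambda p, a=a: np.linalg.norm(p - a) - L}
             for a in (a1, a2)]
# Formulation 2: ||p - a||^2 - L^2 = 0 (polynomial, derivative 2(p - a)).
cons_sq = [{"type": "eq",
            "fun": lambda p, a=a: np.sum((p - a) ** 2) - L**2,
            "jac": lambda p, a=a: 2 * (p - a)}
           for a in (a1, a2)]

for cons in (cons_dist, cons_sq):
    res = minimize(obj, x0=[1.0, 1.0], method="SLSQP", constraints=cons)
    print(res.x, [np.linalg.norm(res.x - a) for a in (a1, a2)])
```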

We study the integration problem on Hilbert spaces of (multivariate) periodic functions. The standard technique to prove lower bounds for the error of quadrature rules uses bump functions and the pigeonhole principle. Recently, several new lower bounds have been obtained using a different technique which exploits the Hilbert space structure and a variant of the Schur product theorem. The purpose of this paper is to (a) survey the new proof technique, (b) show that it is indeed superior to the bump-function technique, and (c) sharpen and extend the results from the previous papers.
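For context, the bump-function technique proceeds roughly as follows (a standard argument, sketched here in generic form): given a quadrature rule $Q_n(f) = \sum_{j=1}^{n} w_j f(x_j)$, partition the domain into $2n$ subintervals, of which at least $n$ contain no node by the pigeonhole principle; placing a nonnegative smooth bump $\varphi_i$ on each empty subinterval and setting $f = \sum_i \varphi_i / \|\sum_i \varphi_i\|$ yields $Q_n(f) = 0$ while $\int f$ remains bounded below, so the worst-case error of $Q_n$ is at least of the order of $\int f$.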

Multigraded Betti numbers are one of the simplest invariants of multiparameter persistence modules. This invariant is useful in theory -- it completely determines the Hilbert function of the module and the isomorphism type of the free modules in its minimal free resolution -- as well as in practice -- it is easy to visualize and it is one of the main outputs of current multiparameter persistent homology software, such as RIVET. However, to the best of our knowledge, no bottleneck stability result with respect to the interleaving distance has been established for this invariant so far, and this potential lack of stability limits its practical applications. We prove a stability result for multigraded Betti numbers, using an efficiently computable bottleneck-type dissimilarity function we introduce. Our notion of matching is inspired by recent work on signed barcodes, and allows matching bars of the same module in homological degrees of different parity, in addition to matching bars of different modules in homological degrees of the same parity. Our stability result is a combination of Hilbert's syzygy theorem, Bjerkevik's bottleneck stability for free modules, and a novel stability result for projective resolutions. We also prove, in the $2$-parameter case, a $1$-Wasserstein stability result for Hilbert functions with respect to the $1$-presentation distance of Bjerkevik and Lesnick.

We have developed an efficient and unconditionally energy-stable method for simulating droplet formation dynamics. Our approach involves a novel time-marching scheme based on the scalar auxiliary variable (SAV) technique, specifically designed for solving the Cahn-Hilliard-Navier-Stokes phase-field model with variable density and viscosity. We apply this method to simulate droplet formation in scenarios where a Newtonian fluid is injected through a vertical tube into another immiscible Newtonian fluid. To tackle the challenges posed by nonhomogeneous Dirichlet boundary conditions at the tube entrance, we introduce additional nonlocal auxiliary variables and associated ordinary differential equations, which effectively eliminate the influence of the boundary terms. Moreover, we incorporate stabilization terms into the scheme to enhance its numerical effectiveness. Notably, the resulting scheme is fully decoupled, requiring the solution of only linear systems at each time step. We also demonstrate the energy-decay property of the scheme, with suitable modifications. To assess the accuracy and stability of the algorithm, we conduct extensive numerical simulations, examine the dynamics of droplet formation, and explore the impact of dimensionless parameters on the process. Overall, our work presents a refined method for simulating droplet formation dynamics, offering improved efficiency, energy stability, and accuracy.
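For readers unfamiliar with it, the SAV idea in its generic form (not the paper's exact variant) is the following: for a free energy $E(\phi) = \frac{1}{2}\|\nabla\phi\|^2 + \int F(\phi)\,dx$ with $\int F(\phi)\,dx + C > 0$, one introduces the scalar $r(t) = \sqrt{\int F(\phi)\,dx + C}$, which satisfies the ordinary differential equation $r_t = \frac{1}{2r}\int F'(\phi)\,\phi_t\,dx$; treating $r$ and $\phi$ at matching implicit-explicit time levels then yields a linear scheme that dissipates the modified energy $\frac{1}{2}\|\nabla\phi\|^2 + r^2 - C$ unconditionally.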

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparison to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
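Shepard's law itself is easy to state in code. Below is a minimal, made-up illustration (the embedding, exemplars, and labels are assumptions; the paper's fitted similarity spaces differ): similarity decays exponentially with distance in psychological space, and class predictions aggregate similarity to labeled exemplars.

```python
import numpy as np

def predict_choice(query, exemplars, labels, classes):
    """Probability of each class for `query`, by summing exponential-decay
    similarities s(x, y) = exp(-d(x, y)) to labeled exemplars
    (a Shepard-style generalization rule)."""
    d = np.linalg.norm(exemplars - query, axis=1)   # distances in the space
    s = np.exp(-d)                                  # Shepard similarities
    scores = np.array([s[labels == c].sum() for c in classes])
    return scores / scores.sum()

# Made-up 2D similarity space with two labeled clusters.
exemplars = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.2, 0.9]])
labels = np.array(["cat", "cat", "dog", "dog"])
print(predict_choice(np.array([0.9, 0.8]), exemplars, labels, ["cat", "dog"]))
```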
