
We investigate distributional properties of a class of spectral spatial statistics under irregular sampling of a random field that is defined on $\mathbb{R}^d$, and use this to obtain a test for isotropy. Within this context, edge effects are well-known to create a bias in classical estimators commonly encountered in the analysis of spatial data. This bias increases with dimension $d$ and, for $d>1$, can become non-negligible in the limiting distribution of such statistics to the extent that a nondegenerate distribution does not exist. We provide a general theory for a class of (integrated) spectral statistics that enables us to 1) significantly reduce this bias and 2) ensure that asymptotically Gaussian limits can be derived for $d \le 3$ for appropriately tapered versions of such statistics. We use this to address some crucial gaps in the literature, and demonstrate that tapering with a sufficiently smooth function is necessary to achieve such results. Our findings specifically shed new light on a recent result in Subba Rao (2018a). Our theory is then used to propose a novel test for isotropy. In contrast to most of the literature, which validates this assumption on a finite number of spatial locations (or a finite number of Fourier frequencies), we develop a test for isotropy on the full spatial domain by means of its characterization in the frequency domain. More precisely, we derive an explicit expression for the minimum $L^2$-distance between the spectral density of the random field and its best approximation by a spectral density of an isotropic process. We prove asymptotic normality of an estimator of this quantity in the mixed increasing domain framework and use this result to derive an asymptotic level-$\alpha$ test.
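As a toy illustration of the frequency-domain characterization of isotropy, the sketch below estimates the $L^2$-distance between a 2-d spectral density and its angular average, which is its best isotropic approximation in $L^2$, on a frequency grid. The function name `min_l2_to_isotropy`, the grid, and the radial binning are our own illustrative choices, not the paper's estimator.

```python
import numpy as np

def min_l2_to_isotropy(f, grid_radius=3.0, n=101, nbins=40):
    """Grid estimate of the squared L2 distance between a 2-d spectral
    density f(w1, w2) and its best isotropic approximation, which (in L2)
    is the angular average of f at each radius."""
    w = np.linspace(-grid_radius, grid_radius, n)
    W1, W2 = np.meshgrid(w, w)
    R = np.hypot(W1, W2)
    F = f(W1, W2)
    # bin frequencies by radius and average over angles within each bin
    bins = np.linspace(0.0, R.max(), nbins + 1)
    idx = np.clip(np.digitize(R, bins) - 1, 0, nbins - 1)
    ang_mean = np.array([F[idx == k].mean() if np.any(idx == k) else 0.0
                         for k in range(nbins)])
    F_iso = ang_mean[idx]            # isotropic projection on the grid
    dw = w[1] - w[0]
    return np.sum((F - F_iso) ** 2) * dw * dw
```

An isotropic density yields a distance near zero (up to binning error), while an anisotropic one yields a strictly larger value, which is the quantity the test statistic targets.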

Related Content

Spectral methods yield numerical solutions of the Galerkin-truncated versions of nonlinear partial differential equations involved especially in fluid dynamics. In the presence of discontinuities, such as shocks, spectral approximations develop Gibbs oscillations near the discontinuity. This causes the numerical solution to deviate quickly from the true solution. For spectral approximations of the 1D inviscid Burgers equation, nonlinear wave resonances lead to the formation of tygers in well-resolved areas of the flow, far from the shock. Recently, Besse (to be published) has proposed novel spectral relaxation (SR) and spectral purging (SP) schemes for the removal of tygers and Gibbs oscillations in spectral approximations of nonlinear conservation laws. For the 1D inviscid Burgers equation, it is shown that the novel SR and SP approximations of the solution converge strongly in the L2 norm to the entropic weak solution, under an appropriate choice of kernels and related parameters. In this work, we carry out a detailed numerical investigation of SR and SP schemes when applied to the 1D inviscid Burgers equation and report the efficiency of shock capture and the removal of tygers. We then extend our study to systems of nonlinear hyperbolic conservation laws - such as the 2x2 system of the shallow water equations and the standard 3x3 system of 1D compressible Euler equations. For the latter, we generalise the implementation of SR methods to non-periodic problems using Chebyshev polynomials. We then turn to singular flow in the 1D wall approximation of the 3D-axisymmetric wall-bounded incompressible Euler equation. Here, in order to determine the blowup time of the solution, we compare the decay of the width of the analyticity strip, obtained from the pure pseudospectral method, with the improved estimate obtained using the novel spectral relaxation scheme.
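A minimal sketch of the baseline phenomenon described above: a Galerkin-truncated pseudospectral solver for the 1D inviscid Burgers equation, with no relaxation or purging, in which the sharp Fourier truncation is what produces the Gibbs oscillations (and, near shock formation, tygers) that the SR/SP schemes are designed to remove. All parameters are illustrative.

```python
import numpy as np

def burgers_galerkin(N=128, K=40, dt=1e-3, T=0.5):
    """Galerkin-truncated pseudospectral solver for u_t + u u_x = 0 on
    [0, 2*pi) with u(x, 0) = sin(x). Modes with |k| > K are projected out
    at every stage (sharp Galerkin truncation); RK4 in time."""
    x = 2 * np.pi * np.arange(N) / N
    k = np.fft.fftfreq(N, d=1.0 / N)      # integer wavenumbers
    mask = np.abs(k) <= K                  # Galerkin projector P_K
    uhat = np.fft.fft(np.sin(x)) * mask

    def rhs(uh):
        u = np.real(np.fft.ifft(uh))
        flux_hat = np.fft.fft(0.5 * u * u) * mask  # P_K applied to the flux
        return -1j * k * flux_hat

    for _ in range(int(T / dt)):
        k1 = rhs(uhat)
        k2 = rhs(uhat + 0.5 * dt * k1)
        k3 = rhs(uhat + 0.5 * dt * k2)
        k4 = rhs(uhat + dt * k3)
        uhat = (uhat + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)) * mask
    return x, np.real(np.fft.ifft(uhat))
```

Running this past the shock-formation time $t = 1$ and plotting $u$ makes the truncation oscillations visible far from the shock, which is the starting point for the SR/SP comparison.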

We present a simple and unified analysis of the Johnson-Lindenstrauss (JL) lemma, a cornerstone in the field of dimensionality reduction critical for managing high-dimensional data. Our approach not only simplifies the understanding but also unifies various constructions under the JL framework, including spherical, binary-coin, sparse JL, Gaussian and sub-Gaussian models. This simplification and unification make significant strides in preserving the intrinsic geometry of data, essential across diverse applications from streaming algorithms to reinforcement learning. Notably, we deliver the first rigorous proof of the spherical construction's effectiveness and provide a general class of sub-Gaussian constructions within this simplified framework. At the heart of our contribution is an innovative extension of the Hanson-Wright inequality to high dimensions, complete with explicit constants. By employing simple yet powerful probabilistic tools and analytical techniques, such as an enhanced diagonalization process, our analysis not only solidifies the JL lemma's theoretical foundation by removing an independence assumption but also extends its practical reach, showcasing its adaptability and importance in contemporary computational algorithms.
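A hedged sketch of the Gaussian construction among the models listed above: project with a matrix of i.i.d. $N(0, 1/k)$ entries and check the pairwise-distance distortion empirically. The helper name and parameters are our own.

```python
import numpy as np

def jl_project(X, k, seed=None):
    """Gaussian JL map: send n points in R^d to R^k via a random matrix
    with i.i.d. N(0, 1/k) entries. With k = O(log(n) / eps^2), all pairwise
    distances are preserved up to a 1 +/- eps factor with high probability."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    A = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
    return X @ A

def max_distortion(X, Y):
    """Largest ratio |y_i - y_j| / |x_i - x_j| deviation from 1 over all pairs."""
    ratios = []
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            ratios.append(np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j]))
    return ratios
```

The embedding dimension `k` depends only on the number of points and the tolerance, not on the ambient dimension `d`, which is the content of the lemma.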

This manuscript deals with the analysis of numerical methods for the full discretization (in time and space) of the linear heat equation with Neumann boundary conditions, and it provides the reader with error estimates that are uniform in time. First, we consider the homogeneous equation with homogeneous Neumann boundary conditions over a finite interval. Using finite differences in space and the Euler method in time, we prove that our method is of order 1 in space, uniformly in time, under a classical CFL condition, and despite its lack of consistency at the boundaries. Second, we consider the nonhomogeneous equation with nonhomogeneous Neumann boundary conditions over a finite interval. Using a similarly tailored scheme, we prove that our method is also of order 1 in space, uniformly in time, under a classical CFL condition. We indicate how this numerical method allows for a new way to compute steady states of such equations when they exist. We conclude with several numerical experiments that illustrate the sharpness and relevance of our theoretical results, examine situations that do not meet the hypotheses of our theoretical results, and illustrate how our results extend to higher dimensions.
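For concreteness, here is a sketch of the first setting described above: centered finite differences with mirrored boundary neighbors to enforce the homogeneous Neumann conditions, and explicit Euler in time under the classical CFL condition. The function name and default parameters are illustrative, not the paper's.

```python
import numpy as np

def heat_neumann(u0, T, nu=1.0, cfl=0.4):
    """Explicit Euler / centered finite differences for u_t = nu * u_xx on
    [0, 1] with homogeneous Neumann BCs (u_x = 0 at both ends), enforced by
    mirroring the boundary neighbor. Time step obeys dt <= dx^2 / (2 nu)."""
    u = np.array(u0, dtype=float)
    dx = 1.0 / (len(u) - 1)
    dt = cfl * dx * dx / nu
    steps = max(1, int(np.ceil(T / dt)))
    dt = T / steps                      # shrink dt to land exactly on T
    r = nu * dt / dx**2
    for _ in range(steps):
        up = np.empty_like(u)
        up[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
        # Neumann: ghost value equals the interior neighbor
        up[0] = u[0] + 2 * r * (u[1] - u[0])
        up[-1] = u[-1] + 2 * r * (u[-2] - u[-1])
        u = up
    return u
```

With the trapezoidal quadrature weights this scheme conserves total mass exactly, and solutions relax to the constant steady state given by the mean of the initial data, which is the steady-state computation the abstract alludes to.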

Bundles of matrix polynomials are sets of matrix polynomials with the same size and grade and the same eigenstructure up to the specific values of the eigenvalues. It is known that the closure of the bundle of a pencil $L$ (namely, a matrix polynomial of grade $1$), denoted by $\mathcal{B}(L)$, is the union of $\mathcal{B}(L)$ itself with a finite number of other bundles. The first main contribution of this paper is to prove that the dimension of each of these bundles is strictly smaller than the dimension of $\mathcal{B}(L)$. The second main contribution is to prove that the closure of the bundle of a matrix polynomial of grade larger than $1$ is also the union of the bundle itself with a finite number of other bundles of smaller dimension. To obtain these results, we derive a formula for the (co)dimension of the bundle of a matrix pencil in terms of the Weyr characteristics of the partial multiplicities of the eigenvalues and of the (left and right) minimal indices, and we provide a characterization of the inclusion relationship between the closures of two bundles of matrix polynomials of the same size and grade.

The solution approximation for partial differential equations (PDEs) can be substantially improved using smooth basis functions. The recently introduced mollified basis functions are constructed through mollification, or convolution, of cell-wise defined piecewise polynomials with a smooth mollifier of certain characteristics. The properties of the mollified basis functions are governed by the order of the piecewise functions and the smoothness of the mollifier. In this work, we exploit the high-order and high-smoothness properties of the mollified basis functions for solving PDEs through the point collocation method. The basis functions are evaluated at a set of collocation points in the domain. In addition, boundary conditions are imposed at a set of boundary collocation points distributed over the domain boundaries. To ensure the stability of the resulting linear system of equations, the number of collocation points is set larger than the total number of basis functions. The resulting linear system is overdetermined and is solved using the least-squares technique. The presented numerical examples confirm the convergence of the proposed approximation scheme for Poisson, linear elasticity, and biharmonic problems. We study in particular the influence of the mollifier and the spatial distribution of the collocation points.
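A minimal sketch of the overdetermined collocation idea, with a sine basis standing in for the mollified basis (each sine mode already satisfies the homogeneous boundary conditions, so separate boundary collocation points are not needed in this toy). More collocation points than unknowns, solved by least squares, as described above; the function and parameter names are ours.

```python
import numpy as np

def poisson_collocation(f, n_basis=8, n_colloc=40):
    """Overdetermined point collocation for -u'' = f on (0, 1) with
    u(0) = u(1) = 0, using the basis sin(k*pi*x), k = 1..n_basis.
    With n_colloc > n_basis the system is solved in the least-squares sense."""
    x = np.linspace(0.0, 1.0, n_colloc + 2)[1:-1]   # interior collocation points
    k = np.arange(1, n_basis + 1)
    # -(d^2/dx^2) sin(k*pi*x) = (k*pi)^2 sin(k*pi*x)
    A = (k * np.pi) ** 2 * np.sin(np.outer(x, k) * np.pi)
    c, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    return lambda xx: np.sin(np.outer(np.atleast_1d(xx), k) * np.pi) @ c
```

The same least-squares structure carries over when the sine modes are replaced by mollified basis functions, with extra rows for the boundary collocation points.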

In this paper, the effect of physical parameters, in the presence of a magnetic field, on the heat transfer and flow of a third-grade non-Newtonian nanofluid in a porous medium with an annular cross section is investigated analytically. The viscosity of the nanofluid is categorized into three models: a constant model and two temperature-dependent models, the Reynolds model and Vogel's model, which are used to determine the effect of viscosity on the flow field. Analytical solutions for velocity, temperature, and nanoparticle concentration are developed by the Akbari-Ganji Method (AGM), which agrees closely with the numerical solution (4th-order Runge-Kutta). The physical parameters used to extract results for the nondimensional variables of the nonlinear equations are the pressure gradient, Brownian motion parameter, thermophoresis parameter, magnetic field intensity, and Grashof number. The results show that an increase in the pressure gradient or the thermophoresis parameter, or a decrease in the Brownian motion parameter, raises the velocity profile. An increase in the Grashof number or a decrease in the MHD parameter also raises the velocity profile. Furthermore, either an increase in the thermophoresis parameter or a decrease in the Brownian motion parameter enhances the nanoparticle concentration. The highest velocity is observed when Vogel's model is used for the viscosity.
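The numerical benchmark mentioned above is the classical 4th-order Runge-Kutta scheme. A generic single-equation sketch (not the coupled nanofluid system, whose equations are not reproduced in the abstract) is:

```python
def rk4(f, y0, t0, t1, n):
    """Classical 4th-order Runge-Kutta for y' = f(t, y) with y(t0) = y0,
    integrated to t1 in n uniform steps. Global error is O(h^4)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y
```

For systems such as the coupled velocity/temperature/concentration equations, `y` becomes a vector and the same update applies componentwise.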

This essay provides a comprehensive analysis of the optimization and performance evaluation of various routing algorithms within the context of computer networks. Routing algorithms are critical for determining the most efficient path for data transmission between nodes in a network. The efficiency, reliability, and scalability of a network heavily rely on the choice and optimization of its routing algorithm. This paper begins with an overview of fundamental routing strategies, including shortest path, flooding, distance vector, and link state algorithms, and extends to more sophisticated techniques.
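As a concrete instance of the shortest-path category mentioned above, here is a sketch of Dijkstra's algorithm, which underlies link-state routing protocols such as OSPF. The graph encoding is our own choice for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path routing: graph is {node: [(neighbor, cost), ...]} with
    nonnegative costs. Returns the cheapest-path cost from source to every
    reachable node, using a binary heap as the priority queue."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Distance-vector protocols instead rely on the Bellman-Ford relaxation, which tolerates distributed, asynchronous updates at the cost of slower convergence.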

Explicit time integration schemes coupled with Galerkin discretizations of time-dependent partial differential equations require solving a linear system with the mass matrix at each time step. For applications in structural dynamics, the solution of the linear system is frequently approximated through so-called mass lumping, which consists in replacing the mass matrix by some diagonal approximation. Mass lumping has been widely used in engineering practice for decades and has a sound mathematical theory supporting it for finite element methods using the classical Lagrange basis. However, the theory for more general basis functions is still missing. Our paper partly addresses this shortcoming. Some special and practically relevant properties of lumped mass matrices are proved, and we discuss how these properties naturally extend to banded and Kronecker product matrices, whose structure allows linear systems to be solved very efficiently. Our theoretical results are applied to isogeometric discretizations but are not restricted to them.
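The basic operation is easy to state concretely: replace the consistent mass matrix by the diagonal matrix of its row sums, so each time step needs only a trivial diagonal solve. The sketch below assembles a consistent P1 mass matrix on a uniform mesh (the classical Lagrange setting where the supporting theory is established) and lumps it; the function names are illustrative.

```python
import numpy as np

def p1_mass(n):
    """Consistent P1 (piecewise-linear Lagrange) mass matrix on a uniform
    mesh of [0, 1] with n elements; element matrix is (h/6) [[2, 1], [1, 2]]."""
    h = 1.0 / n
    M = np.zeros((n + 1, n + 1))
    Me = h / 6 * np.array([[2.0, 1.0], [1.0, 2.0]])
    for e in range(n):
        M[e:e + 2, e:e + 2] += Me
    return M

def lump_mass(M):
    """Row-sum mass lumping: the diagonal matrix of the row sums of M.
    For a nonnegative partition-of-unity basis the result is SPD and
    preserves the total mass."""
    return np.diag(M.sum(axis=1))
```

Solving with the lumped matrix is then an elementwise division by its diagonal, which is what makes explicit schemes cheap per step.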

Numerical simulations of high energy-density experiments require equation of state (EOS) models that relate a material's thermodynamic state variables -- specifically pressure, volume/density, energy, and temperature. EOS models are typically constructed using a semi-empirical parametric methodology, which assumes a physics-informed functional form with many tunable parameters calibrated using experimental/simulation data. Since there are inherent uncertainties in the calibration data (parametric uncertainty) and the assumed functional EOS form (model uncertainty), it is essential to perform uncertainty quantification (UQ) to improve confidence in the EOS predictions. Model uncertainty is challenging for UQ studies since it requires exploring the space of all possible physically consistent functional forms. Thus, it is often neglected in favor of parametric uncertainty, which is easier to quantify without violating thermodynamic laws. This work presents a data-driven machine learning approach to constructing EOS models that naturally captures model uncertainty while satisfying the necessary thermodynamic consistency and stability constraints. We propose a novel framework based on physics-informed Gaussian process regression (GPR) that automatically captures total uncertainty in the EOS and can be jointly trained on both simulation and experimental data sources. A GPR model for the shock Hugoniot is derived and its uncertainties are quantified using the proposed framework. We apply the proposed model to learn the EOS for the diamond solid state of carbon, using both density functional theory data and experimental shock Hugoniot data to train the model, and show that the prediction uncertainty is reduced by incorporating the thermodynamic constraints.
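For background, a minimal sketch of plain GP regression with an RBF kernel, i.e. the machinery the proposed physics-informed framework builds on; the thermodynamic constraints and multi-source training described above are not reproduced here, and all names and hyperparameters are illustrative.

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sf=1.0, noise=1e-6):
    """GP regression in 1-d with an RBF kernel: returns the posterior mean
    and pointwise variance at test inputs Xs, via a Cholesky factorization
    of the training covariance."""
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return sf**2 * np.exp(-0.5 * d2 / ell**2)

    K = k(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xs)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = sf**2 - np.sum(v**2, axis=0)   # pointwise posterior variance
    return mean, var
```

In the constrained setting, linear operators encoding thermodynamic identities act on the kernel, so the posterior automatically satisfies the constraints; the unconstrained version above shows only the core mean/variance computation.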

Mass lumping techniques are commonly employed in explicit time integration schemes for problems in structural dynamics; they both avoid solving costly linear systems with the consistent mass matrix and increase the critical time step. In isogeometric analysis, the critical time step is constrained by so-called "outlier" frequencies, representing the inaccurate high-frequency part of the spectrum. Removing or dampening these high frequencies is paramount for fast explicit solution techniques. In this work, we propose robust mass lumping and outlier removal techniques for nontrivial geometries, including multipatch and trimmed geometries. Our lumping strategies provably do not deteriorate (and often improve) the CFL condition of the original problem and are combined with deflation techniques to remove persistent outlier frequencies. Numerical experiments reveal the advantages of the method, especially for simulations covering large time spans, where they may halve the number of iterations with little or no effect on the numerical solution.
