Nonlinear Fokker-Planck equations play a major role in modeling large systems of interacting particles, with proven effectiveness in describing real-world phenomena ranging from classical fields such as fluids and plasmas to social and biological dynamics. Their mathematical formulation often has to contend with physical forces that carry a significant random component, or with particles living in a random environment whose characterization can only be deduced from experimental data, leading to uncertainty-dependent equilibrium states. In this work, to address the problem of effectively solving stochastic Fokker-Planck systems, we construct a new equilibrium-preserving scheme through a micro-macro approach based on stochastic Galerkin methods. The resulting numerical method, in contrast to a direct stochastic Galerkin projection of the unknowns of the underlying Fokker-Planck model in the parameter space, yields a highly accurate description of the uncertainty-dependent large-time behavior. Several numerical tests in the context of collective behavior for the social and life sciences are presented to assess the validity of the present methodology against standard ones.
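
As a rough illustration of the micro-macro idea (the notation here is generic and not taken from the paper), the uncertain density is split around its uncertainty-dependent equilibrium and only the fluctuation is expanded in a stochastic Galerkin (generalized polynomial chaos) basis:
$$ f(x, z, t) = f^{\infty}(x, z) + g(x, z, t), \qquad g(x, z, t) \approx \sum_{k=0}^{M} \hat g_k(x, t)\, \Phi_k(z), $$
where $z$ collects the uncertain parameters, $f^{\infty}$ is the uncertainty-dependent equilibrium, and $\{\Phi_k\}$ is an orthonormal polynomial basis in $z$; projecting only the fluctuation $g$ is what allows the equilibrium to be captured without projection error.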


We consider nonlinear solvers for the incompressible, steady (or at a fixed time step for unsteady) Navier-Stokes equations in the setting where partial measurement data of the solution is available. The measurement data is incorporated/assimilated into the solution through a nudging term added to the Picard iteration that penalizes the difference between the coarse-mesh interpolants of the true solution and the solver solution, analogous to how continuous data assimilation (CDA) is implemented for time-dependent PDEs. This was considered in the paper [Li et al. {\it CMAME} 2023], and we extend the methodology by improving the analysis to be in the $L^2$ norm instead of a weighted $H^1$ norm whose weight depended on the coarse mesh width, and to the case of noisy measurement data. For noisy measurement data, we prove that the CDA-Picard method is stable and convergent, up to the size of the noise. Numerical tests illustrate the results and show that a very good strategy when using noisy data is to use CDA-Picard to generate an initial guess for the classical Newton iteration.
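
Schematically (in our notation, not necessarily that of the cited paper), one CDA-Picard step reads: given $u_k$ and coarse-mesh measurements $I_H u$, find $(u_{k+1}, p_{k+1})$ such that
$$ b(u_k, u_{k+1}, v) + \nu (\nabla u_{k+1}, \nabla v) - (p_{k+1}, \nabla\!\cdot v) + (\nabla\!\cdot u_{k+1}, q) + \mu\, \big( I_H(u_{k+1} - u),\, I_H v \big) = (f, v) $$
for all test functions $(v, q)$, where $I_H$ is an interpolation operator onto the coarse measurement mesh and $\mu > 0$ is the nudging parameter; only $I_H u$ (possibly contaminated by noise) is assumed known.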

The Kuznetsov equation is a classical wave model of acoustics that incorporates quadratic gradient nonlinearities. When its strong damping vanishes, it undergoes a singular behavior change, switching from a parabolic-like to a hyperbolic quasilinear evolution. In this work, we establish for the first time the optimal error bounds for its finite element approximation as well as a semi-implicit fully discrete approximation that are robust with respect to the vanishing damping parameter. The core of the new arguments lies in devising energy estimates directly for the error equation where one can more easily exploit the polynomial structure of the nonlinearities and compensate inverse estimates with smallness conditions on the error. Numerical experiments are included to illustrate the theoretical results.
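
For reference, the Kuznetsov equation is commonly written in terms of the acoustic velocity potential $\psi$ as
$$ \psi_{tt} - c^2 \Delta \psi - b\, \Delta \psi_t = \partial_t\!\left( \frac{B/A}{2c^2}\, (\psi_t)^2 + |\nabla \psi|^2 \right), $$
with sound speed $c$, strong damping parameter $b \ge 0$, and nonlinearity parameter $B/A$; the singular parabolic-to-hyperbolic transition discussed above corresponds to the limit $b \to 0$.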

Nurmuhammad et al. developed Sinc-Nystr\"{o}m methods for initial value problems whose solutions exhibit exponentially decaying end behavior. In these methods, the Single-Exponential (SE) transformation or the Double-Exponential (DE) transformation is combined with the Sinc approximation. Hara and Okayama improved on these transformations to attain a better convergence rate, which was later supported by theoretical error analyses. However, these methods have a computational drawback owing to the inclusion of a special function in the basis functions. To address this issue, Okayama and Hara proposed Sinc-collocation methods, which do not include any special function in the basis functions. This study conducts error analyses of these methods.
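
As background only, the building block shared by all of these methods is the Sinc approximation of a function from its samples on a uniform grid; the sketch below shows this plain version (not the SE/DE-transformed collocation schemes analyzed in the paper), with all names and parameters chosen for illustration.
```python
import numpy as np

def sinc_approx(f, h, N, x):
    """Plain Sinc approximation: f(x) ~ sum_{k=-N}^{N} f(k h) * sinc((x - k h)/h).

    np.sinc(t) = sin(pi t)/(pi t), so np.sinc((x - k*h)/h) is the cardinal
    (Whittaker) basis function centered at the grid point k*h.
    """
    x = np.asarray(x, dtype=float)
    approx = np.zeros_like(x)
    for k in range(-N, N + 1):
        approx += f(k * h) * np.sinc((x - k * h) / h)
    return approx

# Rapidly decaying functions are the regime where Sinc-based methods
# converge exponentially fast in N.
f = lambda t: np.exp(-t**2)
x = np.linspace(-5.0, 5.0, 201)
print("max error:", np.max(np.abs(sinc_approx(f, 0.5, 20, x) - f(x))))
```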

Gaussian processes (GPs) are widely used tools in spatial statistics and machine learning, and the formulae for the mean function and covariance kernel of a GP $T u$ that is the image of another GP $u$ under a linear transformation $T$ acting on the sample paths of $u$ are well known, almost to the point of being folklore. However, these formulae are often used without rigorous attention to technical details, particularly when $T$ is an unbounded operator such as a differential operator, which is common in many modern applications. This note provides a self-contained proof of the claimed formulae for the case of a closed, densely defined operator $T$ acting on the sample paths of a square-integrable (not necessarily Gaussian) stochastic process. Our proof technique relies upon Hille's theorem for the Bochner integral of a Banach-valued random variable.
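
The formulae in question are, informally: if $u$ has mean function $m$ and covariance kernel $k$, then
$$ \mathbb{E}\big[(Tu)(s)\big] = (T m)(s), \qquad \operatorname{Cov}\big((Tu)(s), (Tu)(t)\big) = \big(T_s T_t\, k\big)(s, t), $$
where $T_s$ and $T_t$ denote $T$ applied to $k$ as a function of its first and second argument, respectively; the contribution of the note is to make precise when this interchange of $T$ with the expectation is actually justified.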

Spatial regression models are central to the field of spatial statistics. Nevertheless, their estimation for large, irregularly gridded spatial datasets presents considerable computational challenges. To tackle these computational problems, Arbia \citep{arbia_2014_pairwise} introduced a pseudo-likelihood approach (called pairwise likelihood, PL) which requires the identification of pairs of observations that are internally correlated but mutually conditionally uncorrelated. However, while the PL estimators enjoy optimal theoretical properties, their practical implementation on data observed on irregular grids suffers from severe computational issues (connected with the identification of the pairs of observations) that, in most empirical cases, negatively counterbalance its advantages. In this paper we introduce an algorithm specifically designed to streamline the computation of the PL for large, irregularly gridded spatial datasets, dramatically simplifying the estimation phase. In particular, we focus on the estimation of Spatial Error Models (SEM). Our proposed approach efficiently pairs spatial observations by exploiting the KD-tree data structure and uses these pairs to derive closed-form expressions for fast parameter approximation. To showcase the efficiency of our method, we provide an illustrative example using simulated data, demonstrating that the computational advantages over full-likelihood inference do not come at the expense of accuracy.
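
A minimal sketch of the pairing step is given below; the greedy nearest-neighbour strategy and all function names are our own illustration, not the authors' implementation.
```python
import numpy as np
from scipy.spatial import cKDTree

def pair_observations(coords):
    """Greedily pair each unpaired site with its nearest unpaired neighbour.

    coords: (n, 2) array of irregular spatial locations.
    Returns a list of index pairs (i, j); with an odd n, one site is left unpaired.
    The KD-tree keeps each nearest-neighbour query at O(log n) on average,
    instead of scanning all remaining sites.
    """
    tree = cKDTree(coords)
    unpaired = set(range(len(coords)))
    pairs = []
    for i in range(len(coords)):
        if i not in unpaired:
            continue
        # Query a handful of neighbours so already-paired sites can be skipped.
        _, idx = tree.query(coords[i], k=min(len(coords), 8))
        partner = next((int(j) for j in np.atleast_1d(idx)
                        if int(j) != i and int(j) in unpaired), None)
        if partner is not None:
            pairs.append((i, partner))
            unpaired.discard(i)
            unpaired.discard(partner)
    return pairs

coords = np.random.default_rng(0).uniform(size=(1000, 2))
print(len(pair_observations(coords)), "pairs formed")
```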

The numerical solution of the generalized eigenvalue problem for a singular matrix pencil is challenging due to the discontinuity of its eigenvalues. Classically, such problems are addressed by first extracting the regular part through the staircase form and then applying a standard solver, such as the QZ algorithm, to that regular part. Recently, several novel approaches have been proposed to transform the singular pencil into a regular pencil by relatively simple randomized modifications. In this work, we analyze three such methods by Hochstenbach, Mehl, and Plestenjak that modify, project, or augment the pencil using random matrices. All three methods rely on the normal rank and do not alter the finite eigenvalues of the original pencil. We show that the eigenvalue condition numbers of the transformed pencils are unlikely to be much larger than the $\delta$-weak eigenvalue condition numbers, introduced by Lotz and Noferini, of the original pencil. This not only indicates favorable numerical stability but also reconfirms that these condition numbers are a reliable criterion for detecting simple finite eigenvalues. We also provide evidence that, from a numerical stability perspective, the use of complex instead of real random matrices is preferable even for real singular matrix pencils and real eigenvalues. As a side result, we provide sharp left tail bounds for a product of two independent random variables distributed with the generalized beta distribution of the first kind or Kumaraswamy distribution.
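
A small numerical sketch of the projection variant, as we understand it, is given below on a toy pencil (this is our own illustration, not the authors' code): a random two-sided compression to the normal rank produces a regular pencil whose spectrum retains the finite eigenvalues of the original singular pencil.
```python
import numpy as np
from scipy.linalg import eigvals

rng = np.random.default_rng(1)

# Toy singular pencil: a regular 3x3 part with eigenvalues 1, 2, 3 embedded
# in a 4x4 pencil padded by a zero row and column, so A - lambda*B is
# singular with normal rank r = 3.
A = np.zeros((4, 4)); B = np.zeros((4, 4))
A[:3, :3] = np.diag([1.0, 2.0, 3.0]); B[:3, :3] = np.eye(3)

# Projection (sketch): compress with random complex n x r matrices; with
# probability one the r x r pencil (U* A V, U* B V) is regular and the
# finite eigenvalues of the original pencil survive.
r = 3
U = rng.standard_normal((4, r)) + 1j * rng.standard_normal((4, r))
V = rng.standard_normal((4, r)) + 1j * rng.standard_normal((4, r))
evals = eigvals(U.conj().T @ A @ V, U.conj().T @ B @ V)
print(np.sort_complex(evals))   # expected to lie close to 1, 2, 3
```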

We solve the Landau-Lifshitz-Gilbert equation in the finite-temperature regime, where thermal fluctuations are modeled by a random magnetic field whose variance is proportional to the temperature. By rescaling the temperature proportionally to the computational cell size $\Delta x$ ($T \to T\,\Delta x/a_{\text{eff}}$, where $a_{\text{eff}}$ is the lattice constant) [M. B. Hahn, J. Phys. Comm., 3:075009, 2019], we obtain Curie temperatures $T_{\text{C}}$ that are in line with the experimental values for cobalt, iron and nickel. For finite-sized objects such as nanowires (1D) and nanolayers (2D), the Curie temperature varies with the smallest size $d$ of the system. We show that the difference between the computed finite-size $T_{\text{C}}$ and the bulk $T_{\text{C}}$ follows a power law of the form $(\xi_0/d)^\lambda$, where $\xi_0$ is the correlation length at zero temperature and $\lambda$ is a critical exponent. We obtain values of $\xi_0$ in the nanometer range, also in accordance with other simulations and experiments. The computed critical exponent is close to $\lambda=2$ for all considered materials and geometries. This is the expected result for a mean-field approach, but it is slightly larger than the values observed experimentally.
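
A minimal fitting sketch is shown below; the synthetic numbers, parameter names, and the particular normalization of the finite-size law are assumptions for illustration, not the paper's actual data.
```python
import numpy as np
from scipy.optimize import curve_fit

def shift(d, xi0, lam, tc_bulk=1400.0):
    """Finite-size depression of the Curie temperature, T_C(bulk) - T_C(d)."""
    return tc_bulk * (xi0 / d) ** lam

rng = np.random.default_rng(0)
d = np.array([2.0, 4.0, 8.0, 16.0, 32.0])                        # smallest dimension, nm
tc = 1400.0 - shift(d, 1.2, 2.0) + rng.normal(0.0, 5.0, d.size)  # synthetic T_C(d), K

popt, _ = curve_fit(shift, d, 1400.0 - tc, p0=(1.0, 2.0))        # fit xi0 and lambda only
print(f"xi0 = {popt[0]:.2f} nm, lambda = {popt[1]:.2f}")
```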

A numerical algorithm is developed for regularizing the solution of the source problem for a diffusion-logistic model, based on integral-type information about the process at fixed moments of time. The peculiarity of the problem under study is its discrete formulation in space and the impossibility of applying classical algorithms for its numerical solution. The regularization of the problem is based on A.N. Tikhonov's approach and a priori information about the source of the process. The problem is stated in a variational formulation and solved by a global tensor optimization method. It is shown that, in the case of noisy data, regularization improves the accuracy of the reconstructed source.
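
In schematic form (our notation, not the paper's), the regularized variational problem minimizes a Tikhonov functional of the type
$$ J_\alpha(q) \;=\; \sum_{k} \big| \Phi_k(q) - \Phi_k^{\mathrm{obs}} \big|^2 \;+\; \alpha\, \big\| q - q_{\mathrm{prior}} \big\|^2 \;\longrightarrow\; \min_q, $$
where $q$ is the unknown discrete-in-space source, $\Phi_k(q)$ are the integral-type functionals of the model state at the fixed observation times, $\Phi_k^{\mathrm{obs}}$ are the corresponding measurements, $q_{\mathrm{prior}}$ encodes the a priori information, and $\alpha > 0$ is the regularization parameter.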

Probably one of the most striking examples of the close connection between global optimization and statistical physics is the simulated annealing method, inspired by the famous Monte Carlo algorithm devised by Metropolis et al. in the middle of the last century. In this paper we show how the tools of linear kinetic theory allow this gradient-free algorithm to be described from the perspective of statistical physics and how convergence to the global minimum can be related to classical entropy inequalities. This analysis highlights the strong link between linear Boltzmann equations and stochastic optimization methods governed by Markov processes. Thanks to this formalism we can establish connections between the simulated annealing process and the corresponding mean-field Langevin dynamics characterized by a stochastic gradient descent approach. Generalizations to other selection strategies in simulated annealing that avoid the acceptance-rejection dynamics are also provided.
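
For readers who want the baseline algorithm in front of them, a minimal Metropolis-type simulated annealing loop looks as follows; this is a generic textbook sketch, not the kinetic-theory formulation developed in the paper, and all parameter choices are illustrative.
```python
import numpy as np

def simulated_annealing(f, x0, sigma=0.5, T0=1.0, cooling=0.999, n_iter=20000, seed=0):
    """Minimize f by Metropolis-accepted random moves under a decreasing temperature."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    fx, T = f(x), T0
    best_x, best_f = x.copy(), fx
    for _ in range(n_iter):
        y = x + sigma * rng.standard_normal(x.shape)        # random proposal
        fy = f(y)
        # Metropolis rule: always accept descent, accept ascent with prob exp(-df/T).
        if fy <= fx or rng.random() < np.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        T *= cooling                                         # geometric cooling schedule
    return best_x, best_f

# Multimodal test function (Rastrigin) with global minimum at the origin.
rastrigin = lambda x: 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))
print(simulated_annealing(rastrigin, x0=[3.0, -2.5]))
```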

High order schemes are known to be unstable in the presence of shock discontinuities or under-resolved solution features for nonlinear conservation laws. Entropy stable schemes address this instability by ensuring that physically relevant solutions satisfy a semi-discrete entropy inequality independently of discretization parameters. This work extends high order entropy stable schemes to the quasi-1D shallow water equations and the quasi-1D compressible Euler equations, which model one-dimensional flows through channels or nozzles with varying width. We introduce new non-symmetric entropy conservative finite volume fluxes for both sets of quasi-1D equations, as well as a generalization of the entropy conservation condition to non-symmetric fluxes. When combined with an entropy stable interface flux, the resulting schemes are high order accurate, conservative, and semi-discretely entropy stable. For the quasi-1D shallow water equations, the resulting schemes are also well-balanced.
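
As context for the generalization mentioned above: the classical (symmetric) entropy conservation condition of Tadmor requires a two-point numerical flux $f_S(u_L, u_R)$ to satisfy
$$ (v_R - v_L)^{\top} f_S(u_L, u_R) \;=\; \psi_R - \psi_L, $$
where $v = \partial \eta / \partial u$ are the entropy variables and $\psi$ is the entropy flux potential; the fluxes introduced here generalize this condition to fluxes that need not be symmetric in their two arguments, as required by the quasi-1D models with spatially varying width.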
