We propose a new, simple, and explicit numerical scheme for time-homogeneous stochastic differential equations. The scheme is based on sampling increments at each time step from a skew-symmetric probability distribution, with the level of skewness determined by the drift and volatility of the underlying process. We show that, as the step-size decreases, the scheme converges weakly to the diffusion of interest. We then consider the problem of simulating from the limiting distribution of an ergodic diffusion process using the numerical scheme with a fixed step-size. We establish conditions under which the numerical scheme converges to equilibrium at a geometric rate, and quantify the bias between the equilibrium distributions of the scheme and of the true diffusion process. Notably, our results do not require a global Lipschitz assumption on the drift, in contrast to those required for the Euler--Maruyama scheme for long-time simulation at fixed step-sizes. Our weak convergence result relies on an extension of the theory of Milstein \& Tretyakov to stochastic differential equations with non-Lipschitz drift, which could also be of independent interest. We support our theoretical results with numerical simulations.
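The abstract does not spell out the transition mechanism, but the core idea of drift-dependent skewing can be sketched in a few lines. In this hypothetical illustration, `skew_step` draws the magnitude of a Gaussian increment and chooses its sign with a drift-dependent logistic probability (an illustrative choice, not necessarily the paper's); a standard Euler--Maruyama step is included as the baseline it would be compared against.

```python
import math
import random

def euler_maruyama_step(x, drift, sigma, h, rng):
    # Standard Euler--Maruyama reference step.
    return x + drift(x) * h + sigma * math.sqrt(h) * rng.gauss(0.0, 1.0)

def skew_step(x, drift, sigma, h, rng):
    """One step of a hypothetical skew-symmetric scheme: draw the
    magnitude of a Gaussian increment, then pick its sign with a
    probability skewed by the drift. The logistic skewing below is an
    illustrative choice, not the paper's actual construction."""
    z = abs(rng.gauss(0.0, sigma * math.sqrt(h)))
    # Larger positive drift makes the positive sign more likely;
    # zero drift recovers a symmetric (unbiased) increment.
    p = 1.0 / (1.0 + math.exp(-2.0 * drift(x) * z / sigma**2))
    return x + z if rng.random() < p else x - z

# Usage: simulate an Ornstein--Uhlenbeck process dX = -X dt + dW,
# whose stationary distribution is N(0, 1/2).
rng = random.Random(0)
x = 1.0
for _ in range(1000):
    x = skew_step(x, lambda y: -y, 1.0, 0.01, rng)
```

Note that when the drift vanishes the sign probability is exactly 1/2, so the increment distribution is symmetric, matching the intuition that skewness should encode the drift.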

In high-temperature plasma physics, a strong magnetic field is usually used to confine charged particles, so classical mathematical models of these physical problems must account for the effect of external magnetic fields. One of the important model equations in plasma physics is the Vlasov-Poisson equation with an external magnetic field. This equation typically exhibits multi-scale behaviour and rich physical structure, so it is important to construct numerical methods that preserve, over long times, the physical properties inherited from the original system. This paper extends the corresponding theory from Cartesian coordinates to general orthogonal curvilinear coordinates, and proves that a Poisson-bracket structure is still obtained after applying the corresponding finite element discretization. However, the Hamiltonian systems in the new coordinates generally cannot be decomposed into subsystems that can be solved exactly, so splitting methods cannot be used to construct the corresponding geometric integrators. This paper therefore proposes a semi-implicit method for strong magnetic fields and analyzes its asymptotic stability.
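The abstract does not give the paper's specific scheme, but the classical Boris push is a standard example of the kind of semi-implicit particle update that remains stable under strong magnetic fields, and can serve as a minimal illustration:

```python
import math

def boris_push(v, E, B, qm, dt):
    """Classical Boris push: a standard semi-implicit particle-velocity
    update (shown for illustration; not the specific method of the paper).
    Velocities and fields are 3-vectors given as tuples; qm = q/m."""
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    # Half acceleration by E, exact-norm rotation by B, half acceleration by E.
    v_minus = add(v, scale(E, qm * dt / 2))
    t = scale(B, qm * dt / 2)
    s = scale(t, 2.0 / (1.0 + sum(x*x for x in t)))
    v_prime = add(v_minus, cross(v_minus, t))
    v_plus = add(v_minus, cross(v_prime, s))
    return add(v_plus, scale(E, qm * dt / 2))

# With E = 0 the update is a pure rotation: |v| (hence kinetic energy)
# is preserved even for a strong field and a large time step.
v = (1.0, 0.0, 0.0)
for _ in range(100):
    v = boris_push(v, (0.0, 0.0, 0.0), (0.0, 0.0, 100.0), 1.0, 0.1)
speed = math.sqrt(sum(x*x for x in v))
```

The exact energy conservation in the magnetic rotation, independent of the field strength, is precisely the kind of structural property the abstract refers to.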

In this work, N\'ed\'elec elements on locally refined meshes with hanging nodes are considered. A crucial aspect is the orientation of the hanging edges and faces. For non-orientable meshes, no solution or implementation has been available to date. The problem statement and corresponding algorithms are described in great detail. As a model problem, the time-harmonic Maxwell's equations are adopted because N\'ed\'elec elements constitute their natural discretization. The algorithms and implementation are demonstrated through two numerical examples on different uniformly and adaptively refined meshes. The implementation is performed within the finite element library deal.II.

Motivated by the recent successful application of physics-informed neural networks (PINNs) to solve Boltzmann-type equations [S. Jin, Z. Ma, and K. Wu, J. Sci. Comput., 94 (2023), pp. 57], we provide a rigorous error analysis for PINNs in approximating the solution of the Boltzmann equation near a global Maxwellian. The challenge arises from the nonlocal quadratic interaction term defined in the unbounded domain of velocity space. Analyzing this term on an unbounded domain requires the inclusion of a truncation function, which demands delicate analysis techniques. As a generalization of this analysis, we also provide proof of the asymptotic preserving property when using micro-macro decomposition-based neural networks.

Many well-known logical identities are naturally written as equivalences between contextual formulas. A simple example is the Boole-Shannon expansion $c[p] \equiv (p \wedge c[\mathrm{true}] ) \vee (\neg\, p \wedge c[\mathrm{false}] )$, where $c$ denotes an arbitrary formula with possibly multiple occurrences of a "hole", called a context, and $c[\varphi]$ denotes the result of "filling" all holes of $c$ with the formula $\varphi$. Another example is the unfolding rule $\mu X. c[X] \equiv c[\mu X. c[X]]$ of the modal $\mu$-calculus. We consider the modal $\mu$-calculus as the overarching temporal logic and, as usual, reduce the problem of whether $\varphi_1 \equiv \varphi_2$ holds for contextual formulas $\varphi_1, \varphi_2$ to the problem of whether $\varphi_1 \leftrightarrow \varphi_2$ is valid. We show that the problem of whether a contextual formula of the $\mu$-calculus is valid for all contexts can be reduced to validity of ordinary formulas. Our first result constructs a canonical context such that a formula is valid for all contexts iff it is valid for this particular one. However, the resulting ordinary formula is exponential in the nesting depth of the context variables. Our second result removes this blow-up, proving that validity of contextual formulas is EXP-complete, as for ordinary equivalences. We also prove that both results hold for CTL and LTL as well. We conclude the paper with some experimental results. In particular, we use our implementation to automatically prove the correctness of a set of six contextual equivalences of LTL recently introduced by Esparza et al. for the normalization of LTL formulas. While Esparza et al. need several pages of manual proof, our tool needs only milliseconds to do the job and to compute counterexamples for incorrect variants of the equivalences.
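At the propositional level, the Boole-Shannon expansion can be sanity-checked semantically by brute force over truth assignments, treating a context as a Boolean function of the hole's value and the environment. This is only a toy propositional check, not the paper's $\mu$-calculus procedure:

```python
from itertools import product

def check_boole_shannon(context, variables):
    """Brute-force check of  c[p] == (p and c[true]) or (not p and c[false])
    over all truth assignments. `context` maps (hole_value, env) -> bool,
    so multiple occurrences of the hole are filled uniformly."""
    for vals in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, vals))
        p = env["p"]
        lhs = context(p, env)                                  # c[p]
        rhs = (p and context(True, env)) or \
              (not p and context(False, env))                  # expansion
        if lhs != rhs:
            return False
    return True

# Example context c[.] = (. or q) and (not . or r); the hole occurs twice.
ctx = lambda hole, env: (hole or env["q"]) and (not hole or env["r"])
ok = check_boole_shannon(ctx, ["p", "q", "r"])
```

Because a context here is an arbitrary Boolean function of the hole's value, the check succeeds for every such context, which is exactly the semantic content of the expansion.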

Predicting quantum operator matrices such as Hamiltonian, overlap, and density matrices in the density functional theory (DFT) framework is crucial for understanding material properties. Current methods often focus on individual operators and struggle with efficiency and scalability for large systems. Here we introduce a novel deep learning model, SLEM (strictly localized equivariant message-passing) for predicting multiple quantum operators, that achieves state-of-the-art accuracy while dramatically improving computational efficiency. SLEM's key innovation is its strict locality-based design, constructing local, equivariant representations for quantum tensors while preserving physical symmetries. This enables complex many-body dependence without expanding the effective receptive field, leading to superior data efficiency and transferability. Using an innovative SO(2) convolution technique, SLEM reduces the computational complexity of high-order tensor products and is therefore capable of handling systems requiring the $f$ and $g$ orbitals in their basis sets. We demonstrate SLEM's capabilities across diverse 2D and 3D materials, achieving high accuracy even with limited training data. SLEM's design facilitates efficient parallelization, potentially extending DFT simulations to systems with device-level sizes, opening new possibilities for large-scale quantum simulations and high-throughput materials discovery.

We consider the problem of learning a linear operator $\theta$ between two Hilbert spaces from empirical observations, which we interpret as least squares regression in infinite dimensions. We show that this goal can be reformulated as an inverse problem for $\theta$ with the feature that its forward operator is generally non-compact (even if $\theta$ is assumed to be compact or of $p$-Schatten class). However, we prove that, in terms of spectral properties and regularisation theory, this inverse problem is equivalent to the known compact inverse problem associated with scalar response regression. Our framework allows for the elegant derivation of dimension-free rates for generic learning algorithms under H\"older-type source conditions. The proofs rely on the combination of techniques from kernel regression with recent results on concentration of measure for sub-exponential Hilbertian random variables. The obtained rates hold for a variety of practically-relevant scenarios in functional regression as well as nonlinear regression with operator-valued kernels and match those of classical kernel regression with scalar response.

The paper considers grad-div stabilized equal-order finite element (FE) methods for the linearized Navier-Stokes equations. A block triangular preconditioner for the resulting system of algebraic equations is proposed, which is closely related to the Augmented Lagrangian (AL) preconditioner. A field-of-values analysis of a preconditioned Krylov subspace method yields convergence bounds that are independent of the mesh parameter. Numerical studies support the theory and demonstrate that the approach is also robust with respect to variation of the viscosity parameter, as is typical for AL preconditioners applied to inf-sup stable FE pairs. The numerical experiments also address the accuracy of the grad-div stabilized equal-order FE method for the steady-state Navier-Stokes equations.

While backward error analysis does not generalise straightforwardly to the strong and weak approximation of stochastic differential equations, it does extend to the sampling of ergodic dynamics. In contrast to the deterministic setting, however, the calculation of the modified equation relies on tedious computations, and no explicit expression of the modified vector field has been available. In this paper we uncover the Hopf algebra structures associated with the laws of composition and substitution of exotic aromatic S-series, relying on the new idea of clumping. We use these algebraic structures to provide the algebraic foundations of stochastic numerical analysis with S-series, as well as an explicit expression of the modified vector field as an exotic aromatic B-series.

In this work we propose a discretization of the second boundary condition for the Monge-Amp\`ere equation arising in geometric optics and optimal transport. The proposed discretization is a natural generalization of the popular Oliker-Prussner method introduced in 1988. For the discretization of the differential operator, we use a discrete analogue of the subdifferential. Existence, uniqueness, and stability of solutions to the discrete problem are established, and convergence results to the continuous problem are given.
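A one-dimensional toy version conveys the subdifferential idea (the Oliker-Prussner setting is genuinely multi-dimensional, so this is only an illustration): for a convex piecewise-linear grid function, the discrete Monge-Amp\`ere measure at an interior node is the length of its subdifferential interval, which reduces to a scaled second difference.

```python
def ma_measure_1d(u, h):
    """Discrete Monge-Ampere measure of a convex piecewise-linear
    function on a uniform 1D grid. At interior node i the subdifferential
    is the slope interval [ (u[i]-u[i-1])/h , (u[i+1]-u[i])/h ], and the
    measure assigned to the node is its length, a scaled second difference."""
    return [(u[i+1] - 2*u[i] + u[i-1]) / h for i in range(1, len(u) - 1)]

# For u(x) = x^2/2 (so u'' = 1), each interior node should carry measure h,
# consistent with integrating the right-hand side f = 1 over a cell of width h.
h = 0.1
u = [(i * h)**2 / 2 for i in range(11)]
mu = ma_measure_1d(u, h)
```

Matching the nodal measures to integrals of the data over associated cells is the discrete analogue of solving $\det D^2 u = f$ in the sense of the Monge-Amp\`ere measure.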

One of the most promising applications of machine learning (ML) in computational physics is to accelerate the solution of partial differential equations (PDEs). The key objective of ML-based PDE solvers is to output a sufficiently accurate solution faster than standard numerical methods, which are used as a baseline comparison. We first perform a systematic review of the ML-for-PDE solving literature. Of articles that use ML to solve a fluid-related PDE and claim to outperform a standard numerical method, we determine that 79% (60/76) compare to a weak baseline. Second, we find evidence that reporting biases, especially outcome reporting bias and publication bias, are widespread. We conclude that ML-for-PDE solving research is overoptimistic: weak baselines lead to overly positive results, while reporting biases lead to underreporting of negative results. To a large extent, these issues appear to be caused by factors similar to those of past reproducibility crises: researcher degrees of freedom and a bias towards positive results. We call for bottom-up cultural changes to minimize biased reporting as well as top-down structural reforms intended to reduce perverse incentives for doing so.
