
This work is concerned with solving high-dimensional Fokker-Planck equations from the novel perspective that solving the PDE can be reduced to independent density estimation tasks based on trajectories sampled from its associated particle dynamics. This approach sidesteps the error accumulation that arises when integrating the PDE dynamics over a parameterized function class, and it significantly simplifies deployment, since one avoids the challenges of implementing loss terms based on the differential equation. In particular, we introduce a novel class of high-dimensional functions called the functional hierarchical tensor (FHT). The FHT ansatz leverages a hierarchical low-rank structure, offering runtime and memory complexity that scale linearly in the dimension. We introduce a sketching-based technique that performs density estimation over particles simulated from the particle dynamics associated with the equation, thereby obtaining a representation of the Fokker-Planck solution in terms of our ansatz. We successfully apply the proposed approach to three challenging time-dependent Ginzburg-Landau models with hundreds of variables.
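A one-dimensional sketch of the pipeline described above: the Fokker-Planck equation has an associated Langevin SDE, so one can simulate particles and then estimate their density as a separate task. The double-well potential, step sizes, and the histogram estimator below are illustrative stand-ins, not the paper's Ginzburg-Landau models or FHT machinery:

```python
import numpy as np

# Fokker-Planck dp/dt = d/dx(V'(x) p) + d^2 p/dx^2 corresponds to the particle
# dynamics dX = -V'(X) dt + sqrt(2) dW.  V(x) = x^4 - 2x^2 is a toy double well.

def grad_V(x):
    return 4.0 * x**3 - 4.0 * x

def simulate_particles(n=5000, T=2.0, dt=2e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 0.5, size=n)                # samples from p_0
    for _ in range(int(T / dt)):                    # Euler-Maruyama steps
        x += -grad_V(x) * dt + np.sqrt(2.0 * dt) * rng.normal(size=n)
    return x

# Density estimation at the final time is independent of the PDE dynamics;
# a histogram stands in here for the sketching-based FHT estimator.
samples = simulate_particles()
density, edges = np.histogram(samples, bins=60, range=(-2.5, 2.5), density=True)
```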

Related content

An adaptive method for parabolic partial differential equations that combines sparse wavelet expansions in time with adaptive low-rank approximations in the spatial variables is constructed and analyzed. The method is shown to converge and to satisfy complexity bounds similar to those of existing adaptive low-rank methods for elliptic problems, establishing its suitability for parabolic problems on high-dimensional spatial domains. The construction also yields computable rigorous a posteriori error bounds for such problems. The results are illustrated by numerical experiments.

This paper is concerned with the approximation of solutions to a class of second-order nonlinear abstract differential equations. The finite-dimensional approximate solutions of the given system are built with the aid of the projection operator. We investigate the connection between the approximate solution and the exact solution, and the question of convergence. Moreover, we define the Faedo-Galerkin (F-G) approximations and prove existence and convergence results. The results are obtained by using the theory of cosine functions, the Banach fixed point theorem, and fractional powers of closed linear operators. Finally, an example of the abstract formulation is provided.
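For orientation, a generic instance of the projection scheme (with notation chosen here, not taken from the paper): if $A$ is the closed linear operator of an abstract equation $u''(t) + A u(t) = f(t, u(t))$ and $\{e_i\}_{i \ge 0}$ is an orthonormal basis of eigenfunctions of $A$, the Faedo-Galerkin approximation uses the projection $P^n u = \sum_{i=0}^{n} \langle u, e_i \rangle e_i$ and solves the finite-dimensional problem $u_n''(t) + A u_n(t) = P^n f(t, u_n(t))$ with $u_n(0) = P^n u_0$ and $u_n'(0) = P^n u_1$; convergence then means $u_n \to u$ in the underlying Hilbert space as $n \to \infty$.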

In this work we introduce novel stress-only formulations of linear elasticity with special attention to their approximate solution using weighted residual methods. We present four sets of boundary value problems for a pure stress formulation of three-dimensional solids, and in two dimensions for plane stress and plane strain. The associated governing equations are derived by modifications and combinations of the Beltrami-Michell equations and the Navier-Cauchy equations. The corresponding variational forms of dimension $d \in \{2,3\}$ allow the stress tensor to be approximated directly, without any presupposed potential stress functions, and are shown to be well-posed in $\mathit{H}^1 \otimes \mathrm{Sym}(d)$ in the framework of functional analysis via the Lax-Milgram theorem, making their finite element implementation using $\mathit{C}^0$-continuous elements straightforward. Further, in the finite element setting we provide a treatment for constant and piecewise constant body forces via distributions. The operators and differential identities in this work are provided in modern tensor notation and rely on exact sequences, making the resulting equations and differential relations directly comprehensible. Finally, numerical benchmarks for convergence as well as spectral analysis are used to test the limits and identify viable use cases of the equations.

We propose a new numerical domain decomposition method for solving elliptic equations on compact Riemannian manifolds. One advantage of this method is its ability to bypass the need for global triangulations or grids on the manifolds. Additionally, it features a highly parallel iterative scheme. To verify its efficacy, we conduct numerical experiments on some $4$-dimensional manifolds without and with boundary.

Partial differential equations (PDEs) have become an essential tool for modeling complex physical systems. Such equations are typically solved numerically via mesh-based methods, such as finite element methods, which produce solutions over the spatial domain. However, obtaining these solutions is often prohibitively costly, limiting the feasibility of exploring parameters in PDEs. In this paper, we propose an efficient emulator that simultaneously predicts the solutions over the spatial domain, with theoretical justification of its uncertainty quantification. The novelty of the proposed method lies in the incorporation of the mesh node coordinates into the statistical model. In particular, the proposed method segments the mesh nodes into multiple clusters via a Dirichlet process prior and fits Gaussian process models with the same hyperparameters in each of them. Most importantly, by revealing the underlying clustering structures, the proposed method can provide valuable insights into qualitative features of the resulting dynamics that can be used to guide further investigations. Real examples demonstrate that our proposed method has smaller prediction errors than its main competitors, with competitive computation time, and that it identifies interesting clusters of mesh nodes with physical significance, such as satisfying boundary conditions. An R package for the proposed methodology is provided in an open repository.
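A heavily simplified sketch of the cluster-then-fit idea, with a crude threshold standing in for the Dirichlet-process clustering and toy data in place of real mesh solutions; all names and hyperparameters below are illustrative:

```python
import numpy as np

# One Gaussian-process model with a squared-exponential kernel is fit per
# cluster of mesh nodes.  The "clusters" here are interior vs boundary nodes.

def gp_predict(X, y, Xs, ell=0.3, tau=1e-4):
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)
    K = k(X, X) + tau * np.eye(X.size)          # kernel matrix with a nugget
    return k(Xs, X) @ np.linalg.solve(K, y)     # GP posterior mean

# Toy 1-D mesh; the boundary cluster is flat (a boundary condition), the
# interior cluster follows a smooth mode.
nodes = np.linspace(0.0, 1.0, 21)
interior = (nodes > 0.1) & (nodes < 0.9)        # stand-in cluster assignment
u = np.where(interior, np.sin(np.pi * nodes), 0.0)

u_hat = np.empty_like(u)
for mask in (interior, ~interior):              # fit one GP per cluster
    u_hat[mask] = gp_predict(nodes[mask], u[mask], nodes[mask])
```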

High-order Hadamard-form entropy stable multidimensional summation-by-parts discretizations of the Euler and compressible Navier-Stokes equations are considerably more expensive than the standard divergence-form discretization. In search of a more efficient entropy stable scheme, we extend the entropy-split method for implementation on unstructured grids and investigate its properties. The main ingredients of the scheme are Harten's entropy functions, diagonal-$ \mathsf{E} $ summation-by-parts operators with diagonal norm matrix, and entropy conservative simultaneous approximation terms (SATs). We show that the scheme is high-order accurate and entropy conservative on periodic curvilinear unstructured grids for the Euler equations. An entropy stable matrix-type interface dissipation operator is constructed, which can be added to the SATs to obtain an entropy stable semi-discretization. Fully-discrete entropy conservation is achieved using a relaxation Runge-Kutta method. Entropy stable viscous SATs, applicable to both the Hadamard-form and entropy-split schemes, are developed for the compressible Navier-Stokes equations. In the absence of heat fluxes, the entropy-split scheme is entropy stable for the compressible Navier-Stokes equations. Local conservation in the vicinity of discontinuities is enforced using an entropy stable hybrid scheme. Several numerical problems involving both smooth and discontinuous solutions are investigated to support the theoretical results. Computational cost comparison studies suggest that the entropy-split scheme offers substantial efficiency benefits relative to Hadamard-form multidimensional SBP-SAT discretizations.

Using validated numerical methods (interval arithmetic and Taylor models), we propose a certified predictor-corrector loop for tracking zeros of polynomial systems with a parameter. We provide a Rust implementation which shows substantial speedups over existing software for certified path tracking.
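For orientation, a plain floating-point predictor-corrector loop for one polynomial homotopy looks as follows; the certification step (interval-Newton / Taylor-model contraction) that constitutes the paper's contribution is deliberately omitted, and the homotopy is a made-up example:

```python
# Track a zero of H(x, t) as t goes from 0 to 1: Euler predictor along the
# Davidenko equation dx/dt = -H_t / H_x, then Newton corrector at the new t.

def track(H, Hx, Ht, x, steps=100):
    for k in range(steps):
        t, dt = k / steps, 1.0 / steps
        x -= (Ht(x, t) / Hx(x, t)) * dt          # predictor
        t += dt
        for _ in range(3):                       # corrector: Newton on H(., t)
            x -= H(x, t) / Hx(x, t)
        # a certified tracker would verify an interval-Newton contraction here
    return x

# H(x, t) = (1-t)(x^2 - 1) + t(x^2 - 4): the root moves from x = 1 to x = 2.
H  = lambda x, t: (1 - t) * (x * x - 1) + t * (x * x - 4)
Hx = lambda x, t: 2 * x
Ht = lambda x, t: (x * x - 4) - (x * x - 1)
root = track(H, Hx, Ht, 1.0)
```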

We introduce a predictor-corrector discretisation scheme for the numerical integration of a class of stochastic differential equations and prove that it converges with weak order 1.0. The key feature of the new scheme is that it builds up sequentially (and recursively) in the dimension of the state space of the solution, hence making it suitable for approximations of high-dimensional state space models. We show, using the stochastic Lorenz 96 system as a test model, that the proposed method can operate with larger time steps than the standard Euler-Maruyama scheme and, therefore, generate valid approximations with a smaller computational cost. We also introduce the theoretical analysis of the error incurred by the new predictor-corrector scheme when used as a building block for discrete-time Bayesian filters for continuous-time systems. Finally, we assess the performance of several ensemble Kalman filters that incorporate the proposed sequential predictor-corrector Euler scheme and the standard Euler-Maruyama method. The numerical experiments show that the filters employing the new sequential scheme can operate with larger time steps, smaller Monte Carlo ensembles and noisier systems.
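As a reference point, a standard (non-sequential) predictor-corrector Euler step of the kind the paper builds on fits in a few lines; the Ornstein-Uhlenbeck test problem and all parameters below are illustrative and much simpler than the paper's dimension-recursive construction and Lorenz 96 experiments:

```python
import numpy as np

def pc_euler_step(x, drift, sigma, dt, dW):
    # Predictor: standard Euler-Maruyama step.
    x_pred = x + drift(x) * dt + sigma * dW
    # Corrector: trapezoidal average of the drift (stochastic Heun).
    return x + 0.5 * (drift(x) + drift(x_pred)) * dt + sigma * dW

# Weak-error sanity check on the OU process dX = -X dt + 0.5 dW, X_0 = 1,
# for which E[X_T] = exp(-T) regardless of the noise.
rng = np.random.default_rng(1)
drift = lambda x: -x
T, dt, n_paths = 1.0, 0.05, 200000
x = np.ones(n_paths)
for _ in range(int(T / dt)):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x = pc_euler_step(x, drift, 0.5, dt, dW)
mean_T = x.mean()
```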

This paper discusses the error and cost aspects of ill-posed integral equations when given discrete noisy point evaluations on a fine grid. Standard solution methods usually employ discretization schemes that are directly induced by the measurement points. Thus, they may scale unfavorably with the number of evaluation points, which can result in computational inefficiency. To address this issue, we propose an algorithm that achieves the same level of accuracy while significantly reducing computational costs. Our approach involves an initial averaging procedure to sparsify the underlying grid. To keep the exposition simple, we focus only on one-dimensional ill-posed integral equations that have sufficient smoothness. However, the approach can be generalized to more complicated two- and three-dimensional problems with appropriate modifications.
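The initial averaging step admits a very small sketch; the grid size, noise level, and test function below are made-up, and the downstream regularized solver is omitted:

```python
import numpy as np

# Noisy evaluations g(x_i) + noise on a fine grid of m points are averaged
# into K coarse cells before any solver is applied: averaging m/K values per
# cell cuts the noise variance by that factor while shrinking the
# discretization from m unknowns to K.

def sparsify_by_averaging(y_fine, K):
    m = y_fine.size
    assert m % K == 0, "for simplicity assume K divides m"
    return y_fine.reshape(K, m // K).mean(axis=1)

rng = np.random.default_rng(0)
m, K, sigma = 4096, 64, 0.1
x = np.linspace(0.0, 1.0, m)
y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=m)   # noisy fine-grid data
y_coarse = sparsify_by_averaging(y, K)                    # K averaged values
```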

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows explainee inferences to be predicted precisely, conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
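The comparison mechanism the theory rests on can be made concrete: under Shepard's universal law, the probability of generalizing from one stimulus to another decays exponentially with their distance in similarity space. The toy saliency vectors and sensitivity parameter below are illustrative, not the paper's fitted model:

```python
import numpy as np

# Shepard's universal law of generalization: similarity decays exponentially
# with distance in a psychological similarity space.
def shepard_similarity(x, y, c=1.0):
    return np.exp(-c * np.linalg.norm(np.asarray(x) - np.asarray(y)))

# An explainee compares the AI's saliency map to the map they would give.
ai_map  = np.array([0.9, 0.1, 0.0])    # toy saliency over three image regions
own_map = np.array([0.8, 0.2, 0.0])    # close to the AI's map
other   = np.array([0.0, 0.1, 0.9])    # very different map
agree    = shepard_similarity(ai_map, own_map)   # high similarity
disagree = shepard_similarity(ai_map, other)     # low similarity
```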
