In this paper, Particle-in-Cell algorithms for the Vlasov-Poisson system are presented based on its Poisson bracket structure. The Poisson equation is solved by finite element methods, in which appropriate finite element spaces are chosen to guarantee that the semi-discretized system possesses a well-defined discrete Poisson bracket structure. Splitting methods are then applied to the semi-discretized system by decomposing the Hamiltonian function. The resulting discretizations are proved to preserve the Poisson bracket, and the conserved quantities of the system are also well preserved. In numerical experiments, we use the presented methods to simulate various physical phenomena. Because the practical computations demand substantial computational effort, we employ a parallel computing strategy. The numerical results verify the efficiency of the newly derived discretizations.
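The Hamiltonian splitting at the heart of such schemes can be illustrated on a toy problem. The sketch below is a hypothetical single-particle example, not the paper's finite-element PIC discretization: it splits $H = p^2/2 + \phi(x)$ into kinetic and potential parts and composes the exact subflows in a Strang (leapfrog) pattern, so the scheme is symplectic and the energy error stays bounded.

```python
def strang_step(x, p, dt, efield):
    # Half kick from the potential part: dp/dt = E(x) (charge/mass absorbed)
    p += 0.5 * dt * efield(x)
    # Full drift from the kinetic part: dx/dt = p
    x += dt * p
    # Second half kick
    p += 0.5 * dt * efield(x)
    return x, p

# Toy field E(x) = -x, i.e. H = p**2/2 + x**2/2 (harmonic oscillator)
efield = lambda x: -x
energy = lambda x, p: 0.5 * (p * p + x * x)

x, p = 1.0, 0.0
e0 = energy(x, p)
dt = 0.05
for _ in range(2000):          # integrate to t = 100
    x, p = strang_step(x, p, dt, efield)

# Symplectic splitting: energy error stays O(dt**2), with no secular drift
assert abs(energy(x, p) - e0) < 1e-3
```

The same kick-drift-kick pattern generalizes to the PIC setting, where the kicks update particle momenta from the finite-element field and the drift advances particle positions.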

We consider the problem of kernel classification. Works on kernel regression have shown that the rate of decay of the prediction error with the number of samples for a large class of data-sets is well characterized by two quantities: the capacity and source of the data-set. In this work, we compute the decay rates for the misclassification (prediction) error under the Gaussian design, for data-sets satisfying source and capacity assumptions. We derive the rates as a function of the source and capacity coefficients for two standard kernel classification settings, namely margin-maximizing Support Vector Machines (SVM) and ridge classification, and contrast the two methods. As a consequence, we find that the known worst-case rates are loose for this class of data-sets. Finally, we show that the rates presented in this work are also observed on real data-sets.

High-dimensional Partial Differential Equations (PDEs) are a popular mathematical modelling tool, with applications ranging from finance to computational chemistry. However, standard numerical techniques for solving these PDEs are typically affected by the curse of dimensionality. In this work, we tackle this challenge while focusing on stationary diffusion equations defined over a high-dimensional domain with periodic boundary conditions. Inspired by recent progress in sparse function approximation in high dimensions, we propose a new method called compressive Fourier collocation. Combining ideas from compressive sensing and spectral collocation, our method replaces the use of structured collocation grids with Monte Carlo sampling and employs sparse recovery techniques, such as orthogonal matching pursuit and $\ell^1$ minimization, to approximate the Fourier coefficients of the PDE solution. We conduct a rigorous theoretical analysis showing that the approximation error of the proposed method is comparable with the best $s$-term approximation (with respect to the Fourier basis) to the solution. Using the recently introduced framework of random sampling in bounded Riesz systems, our analysis shows that the compressive Fourier collocation method mitigates the curse of dimensionality with respect to the number of collocation points under sufficient conditions on the regularity of the diffusion coefficient. We also present numerical experiments that illustrate the accuracy and stability of the method for the approximation of sparse and compressible solutions.
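The greedy-recovery idea can be sketched in a few lines. The toy example below uses hypothetical frequencies and sample counts, and performs a single matching-pursuit step rather than full orthogonal matching pursuit or $\ell^1$ minimization: it identifies the dominant Fourier mode of a 1-sparse function from random Monte Carlo samples.

```python
import cmath
import random

random.seed(0)
k_true = 3                                   # the single active Fourier mode
m = 20                                       # number of Monte Carlo collocation points
xs = [random.random() for _ in range(m)]     # random points in [0, 1)
u = [cmath.exp(2j * cmath.pi * k_true * x) for x in xs]   # solution samples

def best_mode(u, xs, candidates):
    # One greedy step of matching pursuit: pick the candidate frequency
    # whose column of the random sampling matrix correlates most with u.
    def corr(k):
        return abs(sum(cmath.exp(-2j * cmath.pi * k * x) * s
                       for x, s in zip(xs, u)))
    return max(candidates, key=corr)

# The true mode correlates perfectly (magnitude m); spurious modes sum
# random unit phasors and stay near sqrt(m), so the pick is unambiguous.
assert best_mode(u, xs, range(8)) == k_true
```

Full OMP would repeat this selection, orthogonalize via a least-squares fit on the chosen modes, and iterate on the residual.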

We develop an \textit{a posteriori} error analysis for a numerical estimate of the time at which a functional of the solution to a partial differential equation (PDE) first achieves a threshold value on a given time interval. This quantity of interest (QoI) differs from classical QoIs, which are modeled as bounded linear (or nonlinear) functionals of the solution. Taylor's theorem and an adjoint-based \textit{a posteriori} analysis are used to derive computable and accurate error estimates in the case of semi-linear parabolic and hyperbolic PDEs. The accuracy of the error estimates is demonstrated through numerical solutions of the one-dimensional heat equation and the linearized shallow water equations (SWE), representing the parabolic and hyperbolic cases, respectively.
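The Taylor-based correction of a threshold-crossing time can be illustrated on a scalar toy problem (a hypothetical functional $q(t) = t^2$, not a PDE QoI): if the computed functional is off by $q_{\mathrm{err}}$ near the computed crossing time $t_h$, the crossing-time error is approximately $q_{\mathrm{err}} / q'(t_h)$.

```python
import math

def corrected_crossing(t_h, dq_dt, q_err):
    # First-order Taylor correction: a perturbation q_err in the functional
    # shifts the threshold-crossing time by about q_err / q'(t_h).
    return t_h + q_err / dq_dt(t_h)

# Toy problem: the exact q(t) = t**2 crosses the threshold R = 1 at t* = 1,
# while the "numerical" functional q_h(t) = t**2 + delta crosses earlier.
delta = 0.01
t_h = math.sqrt(1.0 - delta)                 # computed crossing time
t_corr = corrected_crossing(t_h, lambda t: 2.0 * t, delta)

# The corrected estimate is far closer to the true crossing time t* = 1
assert abs(t_corr - 1.0) < abs(t_h - 1.0) / 100
```

In the paper's setting the perturbation $q_{\mathrm{err}}$ is itself estimated by the adjoint-based analysis rather than known exactly.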

We study the $L^1$-approximation of the log-Heston SDE at equidistant time points by Euler-type methods. We establish the convergence order $ 1/2-\epsilon$ for $\epsilon >0$ arbitrarily small, if the Feller index $\nu$ of the underlying CIR process satisfies $\nu > 1$. Thus, we recover the standard convergence order of the Euler scheme for SDEs with globally Lipschitz coefficients. Moreover, we discuss the case $\nu \leq 1$ and illustrate our findings by several numerical examples.
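An Euler-type scheme of the kind analyzed here can be sketched as follows. This is a full-truncation variant with hypothetical parameters chosen so that the Feller index $\nu = 2\kappa\theta/\sigma^2 = 2 > 1$; the paper's precise scheme and error analysis are not reproduced.

```python
import math
import random

def log_heston_euler(s0, v0, kappa, theta, sigma, rho, T, n, rng):
    # Full-truncation Euler for the log-Heston model:
    #   d log S = -(v/2) dt + sqrt(v) dW1
    #   d v     = kappa (theta - v) dt + sigma sqrt(v) dW2,  corr(W1,W2) = rho
    dt = T / n
    x, v = math.log(s0), v0
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        vp = max(v, 0.0)                     # truncate the variance at zero
        x += -0.5 * vp * dt + math.sqrt(vp * dt) * z1
        v += kappa * (theta - vp) * dt + sigma * math.sqrt(vp * dt) * z2
    return math.exp(x)

rng = random.Random(42)
# Hypothetical parameters with nu = 2*kappa*theta/sigma**2 = 2, i.e. nu > 1
paths = [log_heston_euler(100, 0.04, 2.0, 0.04, 0.2, -0.7, 1.0, 200, rng)
         for _ in range(50)]
assert all(p > 0 and math.isfinite(p) for p in paths)
```

The truncation `max(v, 0.0)` keeps the square roots well defined even though the discrete variance may dip below zero.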

Optimal well placement and well injection-production settings are crucial for reservoir development to maximize financial profit over the project lifetime. Meta-heuristic algorithms have shown good performance in solving complex, nonlinear and non-continuous optimization problems. However, a large number of numerical simulation runs is involved during the optimization process. In this work, a novel and efficient data-driven evolutionary algorithm, called the generalized data-driven differential evolutionary algorithm (GDDE), is proposed to reduce the number of simulation runs in well-placement and control optimization problems. A probabilistic neural network (PNN) is adopted as the classifier to select informative and promising candidates, and the most uncertain candidate, based on Euclidean distance, is prescreened and evaluated with a numerical simulator. Subsequently, a local surrogate model is built with radial basis functions (RBF), and the optimum of the surrogate, found by the optimizer, is evaluated by the numerical simulator to accelerate convergence. It is worth noting that the shape factors of the RBF model and the PNN are optimized by solving a sub-expensive hyper-parameter optimization problem. The results show that the optimization algorithm proposed in this study is very promising for a well-placement optimization problem on a two-dimensional reservoir and for the joint optimization of the Egg model.
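The prescreening idea can be sketched compactly. The toy below substitutes an inverse-distance-weighting surrogate for the RBF/PNN models and a sphere function for the reservoir simulator (all hypothetical stand-ins); each generation, only the trial the surrogate ranks best is sent to the expensive evaluator.

```python
import random

random.seed(1)
f = lambda x: sum(xi * xi for xi in x)       # stand-in for the expensive simulator

def surrogate(archive, x):
    # Inverse-distance weighting: a cheap stand-in for the RBF model.
    num = den = 0.0
    for xa, fa in archive:
        d = sum((a - b) ** 2 for a, b in zip(xa, x)) ** 0.5
        if d < 1e-12:
            return fa
        num += fa / d
        den += 1.0 / d
    return num / den

dim, npop = 3, 10
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(npop)]
fit = [f(x) for x in pop]                    # true (expensive) evaluations
archive = list(zip(pop, fit))

for _ in range(30):
    # DE/rand/1 mutation with binomial crossover makes one trial per member,
    trials = []
    for x in pop:
        a, b, c = random.sample(pop, 3)
        trials.append([ai + 0.8 * (bi - ci) if random.random() < 0.9 else xi
                       for xi, ai, bi, ci in zip(x, a, b, c)])
    # but only the surrogate's top-ranked trial reaches the simulator.
    t = min(trials, key=lambda t: surrogate(archive, t))
    ft = f(t)
    archive.append((t, ft))
    j = max(range(npop), key=lambda i: fit[i])
    if ft < fit[j]:                          # replace the current worst
        pop[j], fit[j] = t, ft

# 30 extra true evaluations instead of 300 for plain DE on this budget
assert min(fit) <= min(fa for _, fa in archive[:npop])
assert len(archive) == npop + 30
```

The real GDDE additionally uses the PNN classifier to pick the most uncertain candidate and retunes the surrogate shape factors; both are omitted here.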

Some continuous optimization methods can be connected to ordinary differential equations (ODEs) by taking continuous limits, and their convergence rates can be explained through the ODEs. However, since such ODEs can achieve any convergence rate by time scaling, the correspondence is not as straightforward as usually expected, and deriving new methods through ODEs is not direct. In this letter, we pay attention to the stability restriction in discretizing ODEs and show that acceleration by time scaling always implies deceleration in discretization; the two effects balance out, allowing us to define an attainable unique convergence rate, which we call the "essential convergence rate".
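The balance between time scaling and the stability restriction is easy to see for the gradient flow $x' = -cx$ (a minimal sketch, not the letter's general analysis): explicit Euler is stable only for $h < 2/c$, so scaling time by $c$ shrinks the usable step by the same factor $c$, leaving the per-iteration contraction unchanged.

```python
def euler_contraction(c, safety=0.75):
    # Gradient flow x' = -c*x contracts like exp(-c*t) in continuous time;
    # explicit Euler x_{k+1} = (1 - h*c) * x_k is stable only for h < 2/c,
    # so the step must shrink in proportion to the time scaling c.
    h = safety * (2.0 / c)           # largest stable step, with a margin
    return abs(1.0 - h * c)          # contraction factor per iteration

# The per-iteration rate is the same for every time scaling c:
rates = [euler_contraction(c) for c in (1.0, 10.0, 100.0)]
assert all(abs(r - rates[0]) < 1e-12 for r in rates)
```

Speeding up the ODE thus buys nothing once the discretization step is constrained to the stability region, which is the intuition behind the essential convergence rate.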

This paper presents a new numerical method that approximates Neumann-type null controls for the heat equation and is based on the Fokas method. The latter is a direct method for solving problems originating from control theory, and it allows for an efficient numerical algorithm that requires little computational effort to determine the null control with exponentially small error. Furthermore, the unified character of the Fokas method makes the extension of the numerical algorithm to a wide range of other linear PDEs and different types of boundary conditions straightforward.

Peskin's Immersed Boundary (IB) model and method are among the most important modeling tools and numerical methods. The IB method is known to be first-order accurate in the velocity. However, almost no rigorous theoretical proof can be found in the literature for Stokes equations with a prescribed velocity boundary condition. In this paper, it is shown that the pressure of the Stokes equation has a convergence order $O(\sqrt{h} |\log h| )$ in the $L^2$ norm, while the velocity has an $O(h |\log h| )$ convergence order in the infinity norm in two space dimensions. The proofs are based on splitting the singular source terms, discrete Green functions on finite lattices with homogeneous and Neumann boundary conditions, a newly discovered simple $L^2$ discrete delta function, and the convergence proof of the IB method for elliptic interface problems \cite{li:mathcom}. The conclusions in this paper apply to problems with different boundary conditions as long as the problems are well-posed. The proof process also provides an efficient way to decouple the system into three Helmholtz/Poisson equations without affecting the order of convergence. A non-trivial numerical example is also provided to confirm the theoretical analysis and the simple new discrete delta function.
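The moment conditions a discrete delta function must satisfy can be checked in a few lines. The sketch below uses the standard two-point hat delta as a stand-in (the paper's own $L^2$ delta function is not reproduced here) and verifies that spreading conserves the total force and that interpolation is exact on linear fields.

```python
def hat_delta(r):
    # Two-point linear "hat" discrete delta (an illustration only; the
    # paper analyzes its own simplest L^2 delta function).
    r = abs(r)
    return 1.0 - r if r < 1.0 else 0.0

def spread(X, h, n):
    # Spread a unit force at the Lagrangian point X to grid nodes x_i = i*h.
    return [hat_delta((i * h - X) / h) / h for i in range(n)]

h, n, X = 0.1, 20, 0.537
w = spread(X, h, n)

# Zeroth moment: sum_i delta_h(x_i - X) * h = 1, so total force is conserved.
assert abs(sum(w) * h - 1.0) < 1e-9

# Interpolation (the adjoint of spreading) is exact for linear fields:
u = [i * h for i in range(n)]                # grid samples of u(x) = x
U = sum(ui * wi for ui, wi in zip(u, w)) * h
assert abs(U - X) < 1e-9
```

These discrete moment conditions are exactly what drives the convergence orders in analyses of this type.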

This article presents methods to efficiently compute the Coriolis matrix and underlying Christoffel symbols (of the first kind) for tree-structured rigid-body systems. The algorithms can be executed purely numerically, without requiring the partial derivatives used in unscalable symbolic techniques. The computations share a recursive structure with classical methods such as the Composite-Rigid-Body Algorithm and are of the lowest possible order: $O(Nd)$ for the Coriolis matrix and $O(Nd^2)$ for the Christoffel symbols, where $N$ is the number of bodies and $d$ is the depth of the kinematic tree. Implementations in C/C++ show computation times on the order of 10-20 $\mu$s for the Coriolis matrix and 40-120 $\mu$s for the Christoffel symbols on systems with 20 degrees of freedom. The results demonstrate the feasibility of adopting these algorithms within high-rate ($>$1kHz) loops for model-based control applications.
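The objects being computed can be illustrated with a naive, non-recursive finite-difference construction on a planar double pendulum with unit masses and lengths (a textbook stand-in, not the article's $O(Nd)$ algorithm); the resulting Coriolis matrix passes the usual check that $\dot M - 2C$ is skew-symmetric.

```python
import math

def M(q):
    # Mass matrix of a planar double pendulum, unit masses and lengths.
    c = math.cos(q[1])
    return [[3 + 2 * c, 1 + c],
            [1 + c, 1]]

def christoffel(q, eps=1e-6):
    # Gamma[i][j][k] = 0.5*(dM[i][j]/dq[k] + dM[i][k]/dq[j] - dM[j][k]/dq[i]),
    # here by central differences (the article computes these recursively
    # without numerical differentiation).
    def dM(k):
        qp, qm = list(q), list(q)
        qp[k] += eps
        qm[k] -= eps
        Mp, Mm = M(qp), M(qm)
        return [[(Mp[i][j] - Mm[i][j]) / (2 * eps) for j in range(2)]
                for i in range(2)]
    d = [dM(k) for k in range(2)]
    return [[[0.5 * (d[k][i][j] + d[j][i][k] - d[i][j][k])
              for k in range(2)] for j in range(2)] for i in range(2)]

def coriolis(q, qd):
    G = christoffel(q)
    return [[sum(G[i][j][k] * qd[k] for k in range(2)) for j in range(2)]
            for i in range(2)]

q, qd = [0.3, 0.7], [1.1, -0.4]
C = coriolis(q, qd)
# With the Christoffel-based C, N = dM/dt - 2C must be skew-symmetric.
s = math.sin(q[1])
dMdt = [[-2 * s * qd[1], -s * qd[1]], [-s * qd[1], 0.0]]
N = [[dMdt[i][j] - 2 * C[i][j] for j in range(2)] for i in range(2)]
assert abs(N[0][0]) < 1e-6 and abs(N[1][1]) < 1e-6
assert abs(N[0][1] + N[1][0]) < 1e-6
```

The finite-difference route costs $O(N^3)$ and loses accuracy to differencing, which is precisely the gap the article's exact recursive algorithms close.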

We introduce a tableau decision method for deciding realizability of specifications expressed in a safety fragment of LTL that includes bounded future temporal operators. Tableau decision procedures for temporal and modal logics have been thoroughly studied for satisfiability, for translating temporal formulae into equivalent B\"uchi automata, and for model checking, where both a specification and a system are provided. However, to the best of our knowledge, no tableau method has been studied for the reactive synthesis problem. Reactive synthesis starts from a specification whose propositional variables are split into those controlled by the environment and those controlled by the system, and consists of automatically producing a system that guarantees the specification for all environments. Realizability is the decision problem of whether such a system exists. In this paper we present a method to decide realizability of safety specifications, from which we can also extract (i.e., synthesize) a correct system when the specification is realizable. Our method can easily be extended to handle richer domains (integers, etc.) and bounds in the temporal operators in ways that automata-based approaches to synthesis cannot.
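Realizability of a safety specification reduces to solving a safety game, whose winning region is a greatest fixed point. The sketch below (a hypothetical toy game, not the tableau construction itself) computes the set of states from which the system can answer every environment move while staying safe.

```python
def winning_region(states, env_moves, sys_moves, step, safe):
    # Greatest fixed point: repeatedly discard states from which some
    # environment move cannot be answered by any system move within W.
    W = set(safe)
    while True:
        W2 = {s for s in W
              if all(any(step(s, e, a) in W for a in sys_moves)
                     for e in env_moves)}
        if W2 == W:
            return W
        W = W2

# Toy game: the state is an integer counter; the environment adds 0 or 1,
# then the system adds -1 or 0; "safe" means staying within 0..3.
states = range(4)
step = lambda s, e, a: s + e + a
W = winning_region(states, (0, 1), (-1, 0), step, states)
# The system can always compensate the environment, so every state wins:
assert W == set(states)
```

The specification is realizable exactly when the initial state lies in the winning region, and a winning strategy read off the fixed point yields the synthesized system.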
