Casimir-preserving integrators for stochastic Lie-Poisson equations with Stratonovich noise are developed by extending Runge-Kutta Munthe-Kaas methods. The underlying Lie-Poisson structure is preserved along stochastic trajectories. A related stochastic differential equation on the Lie algebra is derived, and its solution updates the evolution of the Lie-Poisson dynamics by means of the exponential map. The constructed numerical methods conserve Casimir invariants exactly, which is important for long-time integration. This is illustrated numerically for the stochastic heavy top and the stochastic sine-Euler equations.
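The Casimir-conservation mechanism can be illustrated with a minimal, deterministic sketch (a first-order Lie-Euler step, not the paper's stochastic Runge-Kutta Munthe-Kaas scheme; all step sizes, inertia values, and names below are illustrative): for the free rigid body on so(3)*, updating the momentum by the exponential map exp(hat(sigma)) applies an exact rotation, so the Casimir |m| is preserved to machine precision regardless of the step size.

```python
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def rotate(v, sigma):
    """Apply exp(hat(sigma)) to v (Rodrigues formula): an exact rotation."""
    theta = norm(sigma)
    if theta < 1e-15:
        return list(v)
    k = [x/theta for x in sigma]
    c, s = math.cos(theta), math.sin(theta)
    kv = cross(k, v)
    kdv = dot(k, v)
    return [v[i]*c + kv[i]*s + k[i]*kdv*(1.0 - c) for i in range(3)]

def lie_euler_step(m, h, inertia):
    # Lie algebra element sigma = -h * Omega(m), with Omega = I^{-1} m,
    # so that d/dt m = m x Omega is approximated over one step.
    omega = [m[i]/inertia[i] for i in range(3)]
    sigma = [-h*w for w in omega]
    # Update via the exponential map: a rotation, so |m| is preserved exactly.
    return rotate(m, sigma)

inertia = [1.0, 2.0, 3.0]   # illustrative principal moments of inertia
m = [0.3, 0.5, 0.8]         # initial body angular momentum
c0 = norm(m)                # Casimir |m| of the rigid-body Lie-Poisson system
for _ in range(1000):
    m = lie_euler_step(m, 0.01, inertia)
print(abs(norm(m) - c0))    # Casimir drift: zero up to round-off
```

The same structural argument carries over to the stochastic setting: as long as each update is an application of the exponential map, the coadjoint orbit (and hence the Casimir) is preserved along every trajectory.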

Acceptance-rejection (AR), independent Metropolis-Hastings (IMH), and importance sampling (IS) Monte Carlo (MC) simulation algorithms all involve computing ratios of probability density functions (pdfs). On the other hand, classifiers discriminate labeled samples produced by a mixture of two distributions and can be used to approximate the ratio of the two corresponding pdfs. This bridge between simulation and classification enables us to propose pdf-free versions of pdf-ratio-based simulation algorithms, in which the ratio is replaced by a surrogate function computed via a classifier. From a probabilistic modeling perspective, our procedure involves a structured energy-based model which can easily be trained and is compatible with the classical samplers.
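The classifier-to-ratio bridge can be sketched in a few lines (an illustrative toy, not the paper's structured energy-based model): with balanced samples from two one-dimensional Gaussians, the logit of a logistic-regression classifier estimates log p(x)/q(x), so exponentiating it yields a surrogate pdf ratio without ever evaluating either density.

```python
import math, random

random.seed(0)

# Balanced samples from p = N(0,1) (label 1) and q = N(1,1) (label 0).
n = 1000
xs = [random.gauss(0.0, 1.0) for _ in range(n)] + \
     [random.gauss(1.0, 1.0) for _ in range(n)]
ys = [1]*n + [0]*n

# Logistic regression on features (1, x); for these two Gaussians the true
# log-ratio log p(x)/q(x) = 0.5 - x is linear, so the model is well specified.
w0, w1 = 0.0, 0.0
lr = 0.1
for _ in range(400):
    g0 = g1 = 0.0
    for x, y in zip(xs, ys):
        pr = 1.0/(1.0 + math.exp(-(w0 + w1*x)))
        g0 += y - pr
        g1 += (y - pr)*x
    w0 += lr*g0/len(xs)
    w1 += lr*g1/len(xs)

# With balanced classes, the classifier logit estimates log p(x)/q(x), so the
# pdf ratio is approximated purely from samples.
def ratio(x):
    return math.exp(w0 + w1*x)

print(w0, w1)  # close to the true coefficients 0.5 and -1.0
```

In an acceptance-rejection or IMH loop, `ratio` would then replace the exact pdf ratio wherever it appears.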

For a singular integral equation on an interval of the real line, we study the behavior of the error of a delta-delta discretization. We show that the convergence is non-uniform: it is of order $O(h^{2})$ in the interior of the interval, while in a boundary layer the consistency error does not tend to zero.

A new hybridizable discontinuous Galerkin method, named the CHDG method, is proposed for solving time-harmonic scalar wave propagation problems. This method relies on a standard discontinuous Galerkin scheme with upwind numerical fluxes and high-order polynomial bases. Auxiliary unknowns corresponding to characteristic variables are defined at the interface between the elements, and the physical fields are eliminated to obtain a reduced system. The reduced system can be written as a fixed-point problem that can be solved with stationary iterative schemes. Numerical results with 2D benchmarks are presented to study the performance of the approach. Compared to the standard HDG approach, the properties of the reduced system are improved with CHDG, which is more suited for iterative solution procedures. The condition number of the reduced system is smaller with CHDG than with the standard HDG method, and iterative solution procedures with CGNR or GMRES require fewer iterations with CHDG.
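The fixed-point viewpoint can be illustrated generically (the small matrix below is a stand-in contraction, not the actual CHDG reduced operator): writing the reduced system as s = A s + b, a stationary iteration converges geometrically whenever the spectral radius of A is below one.

```python
# Stationary fixed-point iteration s_{k+1} = A s_k + b for a small linear
# system s = A s + b; A is an illustrative contraction (spectral radius < 1).
A = [[0.5, 0.1],
     [0.0, 0.3]]
b = [1.0, 1.0]

s = [0.0, 0.0]
for _ in range(200):
    s = [A[i][0]*s[0] + A[i][1]*s[1] + b[i] for i in range(2)]

# Exact solution of (I - A) s = b is s = (16/7, 10/7); the iteration
# converges to it at rate ~0.5 per step.
print(s)
```

Krylov methods such as GMRES applied to (I - A) s = b typically converge even faster than this plain stationary iteration, which is the regime the CHDG formulation targets.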

Ne\v{s}et\v{r}il and Ossona de Mendez recently proposed a new definition of graph convergence called structural convergence. The structural convergence framework is based on the probability of satisfaction of logical formulas from a fixed fragment of first-order formulas. The flexibility of choosing the fragment makes it possible to unify the classical notions of convergence for sparse and dense graphs. Since the field is relatively young, the range of examples of convergent sequences is limited and only a few methods of construction are known. Our aim is to extend the variety of constructions by considering the gadget construction. We show that, when restricting to the set of sentences, the application of the gadget construction to elementarily convergent sequences yields an elementarily convergent sequence. On the other hand, we give counterexamples witnessing that a generalization to full first-order convergence is not possible without additional assumptions. We give several different sufficient conditions that ensure full convergence; one of them states that the resulting sequence is first-order convergent if the replaced edges are dense in the original sequence of structures.

This paper introduces a formulation of the variable density incompressible Navier-Stokes equations by modifying the nonlinear terms in a consistent way. For Galerkin discretizations, the formulation leads to fully discrete conservation of mass, squared density, momentum, angular momentum, and kinetic energy without the divergence-free constraint being strongly enforced. In addition to these favorable conservation properties, the formulation is shown to make the density field invariant to global shifts. The effect of viscous regularizations on conservation properties is also investigated. Numerical tests validate the theory developed in this work. The new formulation shows superior performance compared to other formulations from the literature, both in terms of accuracy for smooth problems and in terms of robustness.

We introduce a new class of Discontinuous Galerkin (DG) methods for solving nonlinear conservation laws on unstructured Voronoi meshes that use a nonconforming Virtual Element basis defined within each polygonal control volume. The basis functions are evaluated as an L2 projection of the virtual basis, which remains unknown, along the lines of the Virtual Element Method (VEM). In contrast to the VEM approach, the new basis functions lead to a nonconforming representation of the solution with discontinuous data across the element boundaries, as typically employed in DG discretizations. To improve the condition number of the resulting mass matrix, an orthogonalization of the full basis is proposed. The discretization in time is carried out following the ADER (Arbitrary order DERivative Riemann problem) methodology, which yields one-step fully discrete schemes that make use of a coupled space-time representation of the numerical solution. The space-time basis functions are constructed as a tensor product of the virtual basis in space and a one-dimensional Lagrange nodal basis in time. The resulting space-time stiffness matrix is stabilized by an extension of the dof-dof stabilization technique adopted in the VEM framework, hence allowing an element-local space-time Galerkin finite element predictor to be evaluated. The novel methods are referred to as VEM-DG schemes, and they are arbitrarily high order accurate in space and time. The new VEM-DG algorithms are rigorously validated against a series of benchmarks in the context of the compressible Euler and Navier-Stokes equations. Numerical results are verified with respect to literature reference solutions and compared in terms of accuracy and computational efficiency to those obtained using a standard modal DG scheme with Taylor basis functions. An analysis of the condition numbers of the mass and space-time stiffness matrices is also provided.

The recently proposed recursive projection-aggregation (RPA) decoding algorithm for Reed-Muller codes has received significant attention as it provides near-ML decoding performance at reasonable complexity for short codes. However, its complicated structure makes it unsuitable for hardware implementation. Iterative projection-aggregation (IPA) decoding is a modified version of RPA decoding that simplifies the hardware implementation. In this work, we present a flexible hardware architecture for the IPA decoder that can be configured from fully-sequential to fully-parallel, thus making it suitable for a wide range of applications with different constraints and resource budgets. Our simulation and implementation results show that the IPA decoder has 41% lower area consumption, 44% lower latency, four times higher throughput, but currently seven times higher power consumption for a code with block length of 128 and information length of 29 compared to a state-of-the-art polar successive cancellation list (SCL) decoder with comparable decoding performance.

Model-based sequential approaches to discrete "black-box" optimization, including Bayesian optimization techniques, often access the same points multiple times for a given objective function of interest, resulting in many steps to find the global optimum. Here, we numerically study the effect of a postprocessing method for Bayesian optimization that strictly prohibits duplicated samples in the dataset. We find that the postprocessing method significantly reduces the number of sequential steps needed to find the global optimum, especially when the acquisition function is based on maximum a posteriori estimation. Our results provide a simple but general strategy for addressing the slow convergence of Bayesian optimization for high-dimensional problems.
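The postprocessing step itself is simple to state: before each query, restrict the acquisition argmax to candidates not yet in the dataset. A toy sketch with a stand-in acquisition function (the objective, candidate set, and noise level are hypothetical, and a real implementation would use a model-based acquisition such as expected improvement):

```python
import random

random.seed(1)

def objective(x):
    return -(x - 7)**2          # hypothetical discrete objective, optimum at x = 7

candidates = list(range(20))    # discrete search space
values = {}                     # dataset: evaluated points and their values
sampled = []

def acquisition(x):
    # Stand-in for a model-based acquisition value (noisy guess of the objective).
    return objective(x) + random.gauss(0.0, 5.0)

for step in range(len(candidates)):
    # Postprocessing: restrict the argmax to points not yet in the dataset,
    # so no candidate is ever queried twice.
    unseen = [x for x in candidates if x not in values]
    x_next = max(unseen, key=acquisition)
    values[x_next] = objective(x_next)
    sampled.append(x_next)

best = max(values, key=values.get)
print(best, len(sampled), len(set(sampled)))
```

Without the `unseen` filter, a deterministic acquisition argmax can keep proposing the same point; with it, every step is guaranteed to add new information to the dataset.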

In Bayesian statistics, posterior contraction rates (PCRs) quantify the speed at which the posterior distribution concentrates on arbitrarily small neighborhoods of a true model, in a suitable way, as the sample size goes to infinity. In this paper, we develop a new approach to PCRs with respect to strong norm distances on parameter spaces of functions. Critical to our approach is the combination of a local Lipschitz continuity for the posterior distribution with a dynamic formulation of the Wasserstein distance, which allows us to set forth an interesting connection between PCRs and some classical problems arising in mathematical analysis, probability, and statistics, e.g., Laplace methods for approximating integrals, Sanov's large deviation principles in the Wasserstein distance, rates of convergence of mean Glivenko-Cantelli theorems, and estimates of weighted Poincar\'e-Wirtinger constants. We first present a theorem on PCRs for a model in the regular infinite-dimensional exponential family, which exploits sufficient statistics of the model, and then extend the theorem to a general dominated model. These results rely on the development of novel techniques to evaluate Laplace integrals and weighted Poincar\'e-Wirtinger constants in infinite dimensions, which are of independent interest. The proposed approach is applied to the regular parametric model, the multinomial model, the finite-dimensional and infinite-dimensional logistic-Gaussian models, and infinite-dimensional linear regression. In general, our approach leads to optimal PCRs in finite-dimensional models, whereas for infinite-dimensional models it is shown explicitly how the prior distribution affects PCRs.

Due to their inherent capability in semantic alignment of aspects and their context words, attention mechanisms and Convolutional Neural Networks (CNNs) are widely applied for aspect-based sentiment classification. However, these models lack a mechanism to account for relevant syntactical constraints and long-range word dependencies, and hence may mistakenly recognize syntactically irrelevant contextual words as clues for judging aspect sentiment. To tackle this problem, we propose to build a Graph Convolutional Network (GCN) over the dependency tree of a sentence to exploit syntactical information and word dependencies. On this basis, a novel aspect-specific sentiment classification framework is proposed. Experiments on three benchmarking collections illustrate that our proposed model has comparable effectiveness to a range of state-of-the-art models, and further demonstrate that both syntactical information and long-range word dependencies are properly captured by the graph convolution structure.
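The core operation, a graph convolution over the (undirected) dependency graph, can be sketched in a few lines. The toy sentence graph, word features, mean aggregation, and identity weights below are illustrative, not the paper's exact architecture:

```python
def gcn_layer(adj, feats, weight):
    """One graph-convolution layer with mean aggregation and a self-loop:
    h_i' = ReLU( mean over j in N(i) and i of h_j, times W )."""
    n, d_in, d_out = len(adj), len(feats[0]), len(weight[0])
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]] + [i]   # self-loop
        agg = [sum(feats[j][k] for j in neigh)/len(neigh) for k in range(d_in)]
        out.append([max(0.0, sum(agg[k]*weight[k][o] for k in range(d_in)))
                    for o in range(d_out)])
    return out

# Undirected dependency graph of a toy 4-word sentence (hypothetical edges):
# word 1 is the head of words 0, 2, and 3.
adj = [[0, 1, 0, 0],
       [1, 0, 1, 1],
       [0, 1, 0, 0],
       [0, 1, 0, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]  # toy word vectors
W = [[1.0, 0.0], [0.0, 1.0]]                               # identity weights
h1 = gcn_layer(adj, feats, W)
print(h1[0])  # word 0 now mixes its own and its head's features
```

Stacking such layers lets information flow along dependency paths rather than linear word order, which is how syntactically relevant but distant context words reach an aspect term.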
