
In this paper, we propose a new randomized method for numerical integration on a compact complex manifold with respect to a continuous volume form. Taking as quadrature nodes a suitable determinantal point process, we build an unbiased Monte Carlo estimator of the integral of any Lipschitz function, and show that the estimator satisfies a central limit theorem, with a faster rate than under independent sampling. In particular, seeing a complex manifold of dimension $d$ as a real manifold of dimension $d_{\mathbb{R}}=2d$, the mean squared error for $N$ quadrature nodes decays as $N^{-1-2/d_{\mathbb{R}}}$; this is faster than previous DPP-based quadratures and matches the optimal worst-case rate investigated by [Bakhvalov 1965] in Euclidean spaces. The determinantal point process we use is characterized by its kernel, which is the Bergman kernel of a holomorphic Hermitian line bundle, and we build heavily on the work of Berman that led to the central limit theorem in [Berman, 2018]. We provide numerical illustrations for the Riemann sphere.
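As a hedged illustration only (the precise weighting and normalization used in the paper may differ), DPP-based quadrature rules of this kind are typically written with the kernel diagonal as importance weights,
$$\widehat{I}_N \;=\; \sum_{i=1}^{N} \frac{f(x_i)}{K(x_i,x_i)}, \qquad \mathbb{E}\big[\widehat{I}_N\big] = \int_X f\,\mathrm{d}V, \qquad \mathbb{E}\Big[\big(\widehat{I}_N - \int_X f\,\mathrm{d}V\big)^2\Big] = O\big(N^{-1-2/d_{\mathbb{R}}}\big),$$
where $\{x_1,\dots,x_N\}$ is a sample from the determinantal point process with kernel $K$ (here the Bergman kernel) and $\mathrm{d}V$ is the reference volume form.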

Related content

In this paper, we obtain a precise estimate of the probability that the sparse binomial random graph contains a large number of vertices in a triangle. The estimate of the logarithm of this probability is correct up to second order, and it enables us to propose an exponential random graph model based on the number of vertices in a triangle. Specifically, by tuning a single parameter, we can with high probability induce any given fraction of vertices in a triangle. Moreover, for the proposed exponential random graph model we derive a large deviation principle for the number of edges. As a byproduct, we propose a consistent estimator of the tuning parameter.
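For context only, and not taken verbatim from the paper: exponential random graph models of this type are usually obtained by exponentially tilting the binomial random graph $G(n,p)$ with the statistic of interest, here the number $T(G)$ of vertices contained in at least one triangle,
$$\mathbb{P}_\beta(G) \;\propto\; \exp\big(\beta\, T(G)\big)\,\mathbb{P}_{G(n,p)}(G),$$
where $\beta$ is the single tuning parameter referred to above.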

We review error estimation methods for co-simulation, in particular methods that are applicable when the subsystems provide minimal interfaces. By this, we mean that subsystems do not support rollback of time steps, do not output derivatives, and do not provide any other information about their internals other than the output variables that are required for coupling with other subsystems. Such "black-box" subsystems are quite common in industrial applications, and the ability to couple them and run large-system simulations is one of the major attractions of the co-simulation paradigm. We also describe how the resulting error indicators may be used to automatically control macro time step sizes in order to strike a good balance between simulation speed and accuracy. The various elements of the step size control algorithm are presented in pseudocode so that readers may implement them and test them in their own applications. We provide practical advice on how to use error indicators to judge the quality of a co-simulation, how to avoid common pitfalls, and how to configure the step size control algorithm.
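The step size control elements themselves are given in pseudocode in the paper; the following is only a generic sketch of such a controller (the classical error-based update with a safety factor and growth limits), not the paper's algorithm. The function name `next_macro_step` and the assumed indicator order `p` are placeholders.

```python
def next_macro_step(dt, err, tol, p=1, safety=0.8, grow=2.0, shrink=0.2):
    """Suggest the next macro time step from an error indicator.

    dt   : current macro step size
    err  : scalar error indicator for the last macro step
    tol  : user tolerance the indicator should stay below
    p    : assumed order of the error indicator
    """
    if err <= 0.0:            # indicator unavailable or exactly zero: grow cautiously
        return grow * dt
    factor = safety * (tol / err) ** (1.0 / (p + 1))
    factor = min(grow, max(shrink, factor))   # limit how fast the step may change
    return factor * dt
```

Since the black-box subsystems discussed above do not support rollback, a step with a too-large indicator cannot simply be redone; a controller of this kind is then used to adapt subsequent steps instead.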

In this paper, we combine convolutional neural networks (CNNs) with reduced order modeling (ROM) for efficient simulations of multiscale problems. These problems are modeled by partial differential equations with high-dimensional random inputs. The proposed method, which we call CNN-based ROM, involves two separate CNNs, Basis CNNs and Coefficient CNNs (Coef CNNs), corresponding to the two main parts of ROM. The Basis CNNs learn input-specific basis functions from snapshots of fine-scale solutions; an activation function inspired by Galerkin projection is used at the output layer to reconstruct fine-scale solutions from the basis functions. Numerical results show that the basis functions learned by the Basis CNNs resemble the data, which helps to significantly reduce the number of basis functions needed. Moreover, CNN-based ROM is less sensitive than traditional ROM to data fluctuations caused by numerical errors. Since testing the Basis CNNs still requires the fine-scale stiffness matrix and load vector, they cannot be applied directly to nonlinear problems. The Coef CNNs, in contrast, are designed to determine the coefficients of the linear combination of basis functions and can be applied to nonlinear problems. In addition, two applications of CNN-based ROM are presented: predicting MsFEM basis functions within oversampling regions and building accurate surrogates for inverse problems.
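A minimal sketch of the Galerkin projection step that the reconstruction is modeled on, assuming a learned basis matrix `Phi` (columns are basis functions), a fine-scale stiffness matrix `A`, and a load vector `b`; this illustrates standard projection-based ROM, not the paper's network architecture.

```python
import numpy as np

def galerkin_reconstruct(Phi, A, b):
    """Project the fine-scale system onto a learned basis and lift back.

    Phi : (n_fine, n_basis) matrix whose columns are learned basis functions
    A   : (n_fine, n_fine) fine-scale stiffness matrix
    b   : (n_fine,) fine-scale load vector
    """
    A_r = Phi.T @ A @ Phi          # reduced stiffness matrix
    b_r = Phi.T @ b                # reduced load vector
    c = np.linalg.solve(A_r, b_r)  # coefficients of the basis functions
    return Phi @ c                 # reconstructed fine-scale solution
```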

In this paper, we use the stochastic approximation method to construct a recursive Sliced Average Variance Estimation (SAVE) estimator. Stochastic approximation is known for its efficiency in recursive estimation and has been widely used in density estimation, regression, and semi-parametric models. We demonstrate that the resulting estimator is asymptotically normal and root-$n$ consistent. Through simulation studies and an application to real data, we show that it is faster than the previously proposed kernel-based method.
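For orientation only (the exact recursion for SAVE is developed in the paper), recursive estimators built by stochastic approximation typically take the Robbins-Monro form
$$\widehat{\theta}_{n+1} \;=\; \widehat{\theta}_n \;+\; \gamma_{n+1}\,\big(H(X_{n+1}) - \widehat{\theta}_n\big),$$
where $(\gamma_n)$ is a step-size sequence and $H(X_{n+1})$ is the contribution of the new observation, so the estimator is updated as data arrive rather than recomputed from scratch.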

We develop in this paper a new regularized flow dynamic approach to construct efficient numerical schemes for Wasserstein gradient flows in Lagrangian coordinates. Instead of approximating the Wasserstein distance, which requires solving constrained minimization problems, we reformulate the problem using the Benamou-Brenier flow dynamic approach, leading to algorithms that only need to solve unconstrained minimization problems in the $L^2$ distance. Our schemes automatically inherit essential properties of Wasserstein gradient systems such as positivity preservation, mass conservation, and energy dissipation. We present ample numerical simulations of porous medium equations, Keller-Segel equations, and aggregation equations to validate the accuracy and stability of the proposed schemes. Compared to numerical schemes in Eulerian coordinates, our new schemes can capture sharp interfaces for various Wasserstein gradient flows using a relatively small number of unknowns.
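For reference, and independently of the specific regularization introduced in the paper, the Benamou-Brenier dynamic formulation expresses the squared Wasserstein distance as
$$W_2^2(\rho_0,\rho_1) \;=\; \min_{(\rho,v)} \int_0^1\!\!\int \rho(x,t)\,|v(x,t)|^2 \,\mathrm{d}x\,\mathrm{d}t \quad \text{subject to} \quad \partial_t \rho + \nabla\cdot(\rho v) = 0, \quad \rho(\cdot,0)=\rho_0, \ \ \rho(\cdot,1)=\rho_1,$$
which replaces the static optimal transport problem by a flow problem and underlies the Lagrangian reformulation mentioned above.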

In this paper we propose a novel macroscopic (fluid dynamics) model for describing pedestrian flow in low and high density regimes. The model is characterized by the fact that the maximal density reachable by the crowd - usually a fixed model parameter - is instead a state variable. To this end, the model couples a conservation law, devised as usual for tracking the evolution of the crowd density, with a Burgers-like PDE with a nonlocal term describing the evolution of the maximal density. The variable maximal density is used here to describe the effects of the psychological/physical pushing forces observed in crowds during competitive or emergency situations. Specific attention is also dedicated to the fundamental diagram, i.e., the function that expresses the relationship between crowd density and flux. Although the model needs a well-defined fundamental diagram as a known input parameter, it is not evident a priori which relationship between density and flux will actually be observed, due to the time-varying maximal density. An a posteriori analysis shows that the observed fundamental diagram has an elongated "tail" in the congested region, thus resembling the concave/concave fundamental diagram with a "double hump" observed in real crowds.
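Schematically, and only as an illustration of the structure described above (the precise fluxes and the nonlocal term are specified in the paper), in one space dimension the model couples a conservation law for the density $\rho$ with an evolution equation for the maximal density $\rho_{\max}$,
$$\partial_t \rho + \partial_x\big(\rho\, v(\rho,\rho_{\max})\big) = 0, \qquad \partial_t \rho_{\max} + \rho_{\max}\,\partial_x \rho_{\max} = \mathcal{N}[\rho,\rho_{\max}],$$
where $v$ is the velocity prescribed by the fundamental diagram and $\mathcal{N}$ denotes a nonlocal term modeling the pushing behavior.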

Noninformative priors constructed for estimation purposes are usually not appropriate for model selection and testing. The methodology of integral priors was developed to obtain prior distributions for Bayesian model selection when comparing two models, by modifying initial improper reference priors. We propose a generalization of this methodology to more than two models. Our approach adds an artificial copy of each model under comparison by compactifying the parameter space, and creates an ergodic Markov chain across all models that returns the integral priors as marginals of its stationary distribution. Besides the guarantee of their existence and the absence of the paradoxes attached to estimation reference priors, an additional advantage of this methodology is that the simulation of this Markov chain is straightforward, as it only requires simulating imaginary training samples for all models and drawing from the corresponding posterior distributions. This renders the implementation automatic and generic, in both the nested and the nonnested case.
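For background in the two-model case (the paper's contribution is the extension to more than two models), integral priors $(\pi_1,\pi_2)$ are defined as a solution of the system
$$\pi_1(\theta_1) = \int \pi_1^N(\theta_1 \mid z)\, m_2(z)\,\mathrm{d}z, \qquad \pi_2(\theta_2) = \int \pi_2^N(\theta_2 \mid z)\, m_1(z)\,\mathrm{d}z,$$
where $\pi_i^N(\cdot \mid z)$ is the posterior under the initial reference prior given an imaginary training sample $z$, and $m_i$ is the corresponding marginal (prior predictive) distribution.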

In this paper, we aim to improve the performance of deep learning models on image classification tasks by proposing a novel anchor-based training methodology, named \textit{Online Anchor-based Training} (OAT). Guided by insights from anchor-based object detection methodologies, the OAT method trains the model to learn percentage changes of the class labels with respect to defined anchors, instead of learning the class labels directly. We define the anchors as the batch centers at the output of the model. During the test phase, the predictions are converted back to the original class label space and the performance is evaluated. The effectiveness of the OAT method is validated on four datasets.
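One possible reading of the percentage-change encoding, given purely as an illustrative sketch and not as the paper's exact definition: with the anchor taken as the batch center of the model outputs, the target transformation and its inverse could look as follows (the function names and the stabilizing constant are assumptions).

```python
import numpy as np

def to_anchor_targets(labels_onehot, outputs):
    """Encode one-hot labels as relative (percentage) changes w.r.t. a batch anchor."""
    anchor = outputs.mean(axis=0)                      # batch center of the model outputs
    targets = (labels_onehot - anchor) / (np.abs(anchor) + 1e-8)
    return targets, anchor

def from_anchor_predictions(pred_changes, anchor):
    """Map predicted relative changes back to the original class-label space."""
    return pred_changes * (np.abs(anchor) + 1e-8) + anchor
```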

In this paper, we consider the task of efficiently computing the numerical solution of evolutionary complex Ginzburg--Landau equations on Cartesian product domains with homogeneous Dirichlet/Neumann or periodic boundary conditions. To this end, we employ high-order exponential methods of splitting and Lawson type with constant time step size for the time integration. These schemes enjoy favorable stability properties and, in particular, do not suffer from restrictions on the time step size due to the underlying stiffness of the models. The required actions of matrix exponentials are realized efficiently either by a tensor-oriented approach that employs the so-called $\mu$-mode product (when the semidiscretization in space is performed with finite differences) or by pointwise operations in Fourier space (when the model is considered with periodic boundary conditions). The overall effectiveness of the approach is demonstrated by running simulations on a variety of two- and three-dimensional (systems of) complex Ginzburg--Landau equations with cubic or cubic-quintic nonlinearities, which are widely used in the literature to model relevant physical phenomena. In fact, we show that, for stringent accuracies, high-order exponential-type schemes may outperform standard techniques for integrating the models under consideration in time, namely the well-known second-order split-step method and the explicit fourth-order Runge--Kutta integrator.
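A minimal sketch of the tensor-oriented idea in two dimensions, assuming a finite-difference operator with Kronecker-sum structure $A = I \otimes A_x + A_y \otimes I$ on a Cartesian grid: the action of the matrix exponential then factorizes into small one-dimensional exponentials applied along each direction, and the $\mu$-mode products reduce to matrix multiplications from the left and the right. The matrices `Ax`, `Ay` and the step `h` below are placeholders, not the paper's data.

```python
import numpy as np
from scipy.linalg import expm

def expm_kron_sum_apply(Ax, Ay, U, h):
    """Apply exp(h*(I x Ax + Ay x I)) to a grid function U of shape (nx, ny).

    Exploits the Kronecker-sum structure: the 2D exponential action factorizes
    into 1D exponentials applied along each spatial direction.
    """
    Ex = expm(h * Ax)          # small (nx, nx) exponential
    Ey = expm(h * Ay)          # small (ny, ny) exponential
    return Ex @ U @ Ey.T       # directional (mu-mode) products: along x, then along y
```

In three dimensions the same idea applies with one more directional product, which is what makes the approach attractive on Cartesian product domains.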

In this paper, we develop novel uniform/variable-time-step weighted and shifted BDF2 (WSBDF2) methods for the anisotropic Cahn-Hilliard (CH) model, combining the scalar auxiliary variable (SAV) approach with two types of stabilization techniques. Using the concept of $G$-stability, the uniform-time-step WSBDF2 method is proved to be energy-stable. Since the relevant $G$-stability properties are not applicable in the variable-time-step setting, a different technique is adopted in this work to establish the energy stability of the variable-time-step WSBDF2 method. In addition, both numerical schemes are mass-conservative. Finally, numerous numerical simulations are presented to demonstrate the stability and accuracy of these schemes.
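For background on the SAV ingredient, stated generically rather than in the paper's exact notation: writing the energy as $E(\phi) = \tfrac12(\phi,\mathcal{L}\phi) + E_1(\phi)$ with $E_1$ bounded from below, one introduces the scalar auxiliary variable
$$r(t) = \sqrt{E_1(\phi) + C_0}, \qquad E(\phi,r) = \tfrac12(\phi,\mathcal{L}\phi) + r^2 - C_0,$$
so that the nonlinear part of the energy enters the scheme only through $r$, which is the key to obtaining linear, energy-stable time discretizations such as the WSBDF2 schemes above.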
