We present a discretization of the dynamic optimal transport problem for which we can obtain a convergence rate for the value of the discrete transport cost to its continuous counterpart as the temporal and spatial step sizes vanish. This convergence result does not require any regularity assumption on the measures, though experiments suggest that the rate is not sharp. Via an analysis of the duality gap we also obtain convergence rates for the gradient of the optimal potentials and for the velocity field under mild regularity assumptions. To obtain such rates we discretize the dual formulation of the dynamic optimal transport problem and draw on the mature literature on the error incurred when discretizing the Hamilton-Jacobi equation.
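
For orientation, the underlying dynamic (Benamou-Brenier) formulation and its dual, whose Hamilton-Jacobi constraint is the source of the error analysis mentioned above, can be written as follows; the normalization and boundary conditions below are one standard choice and may differ from the paper's exact setup.

    \[
    \tfrac12 W_2^2(\mu,\nu) \;=\; \min_{\rho,\,v}\ \int_0^1\!\!\int_\Omega \tfrac12\,\rho\,|v|^2 \,\mathrm{d}x\,\mathrm{d}t
    \quad\text{s.t.}\quad \partial_t\rho + \nabla\!\cdot(\rho v) = 0,\quad \rho(0,\cdot)=\mu,\ \ \rho(1,\cdot)=\nu,
    \]
    \[
    \tfrac12 W_2^2(\mu,\nu) \;=\; \sup_{\varphi}\ \int_\Omega \varphi(1,\cdot)\,\mathrm{d}\nu \;-\; \int_\Omega \varphi(0,\cdot)\,\mathrm{d}\mu
    \quad\text{s.t.}\quad \partial_t\varphi + \tfrac12\,|\nabla\varphi|^2 \le 0 .
    \]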

Related content

In the realm of statistical exploration, the manipulation of pseudo-random values to discern their impact on data distribution presents a compelling avenue of inquiry. This article investigates the question: is it possible to add pseudo-random values without compelling a shift towards a normal distribution? Employing Python techniques, the study explores the nuances of adding pseudo-random values, aiming to unravel the interplay between randomness and the resulting statistical characteristics. The Materials and Methods chapter details the construction of datasets comprising up to 300 billion pseudo-random values, employing three distinct layers of manipulation. The Results chapter visually and quantitatively explores the generated datasets, emphasizing distribution and standard deviation metrics. The study concludes with reflections on the implications of pseudo-random value manipulation and suggests avenues for future research. In the layered exploration, the first layer introduces subtle normalization with increasing summations, while the second layer enhances normality. The third layer disrupts typical distribution patterns, leaning towards randomness despite pseudo-random value summation. Standard deviation patterns across layers further illuminate the dynamic interplay of pseudo-random operations on statistical characteristics. While not aiming to disrupt academic norms, this work modestly contributes insights into the complexities of data distributions. Future studies are encouraged to delve deeper into the implications of data manipulation on statistical outcomes, extending the understanding of pseudo-random operations in diverse contexts.
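
As a minimal illustration of the central-limit effect this abstract revolves around (sums of independent pseudo-random values drifting towards normality), the following NumPy sketch sums uniform draws and reports skewness and excess kurtosis; the sample sizes and the uniform generator are illustrative assumptions and do not reproduce the authors' layered pipeline.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Sum an increasing number of independent uniform draws and watch the
    # resulting distribution approach normality (skew and excess kurtosis -> 0).
    for n_terms in (1, 2, 10, 50):
        samples = rng.uniform(size=(100_000, n_terms)).sum(axis=1)
        print(f"{n_terms:3d} terms: std={samples.std():.3f}, "
              f"skew={stats.skew(samples):+.3f}, "
              f"excess kurtosis={stats.kurtosis(samples):+.3f}")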

The comparison of frequency distributions is a common statistical task with broad applications and a long history of methodological development. However, existing measures do not quantify the magnitude and direction by which one distribution is shifted relative to another. In the present study, we define distributional shift (DS) as the concentration of frequencies away from the greatest discrete class, e.g., a histogram's right-most bin. We derive a measure of DS based on the sum of cumulative frequencies, intuitively quantifying shift as a statistical moment. We then define relative distributional shift (RDS) as the difference in DS between distributions. Using simulated random sampling, we demonstrate that RDS is closely related to measures commonly used to compare frequency distributions. Focusing on a specific use case, i.e., simulated healthcare Evaluation and Management coding profiles, we show how RDS can be used to examine many pairs of empirical and expected distributions via shift-significance plots. In comparison to other measures, RDS has the unique advantage of being a signed (directional) measure based on a simple difference in an intuitive property.
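
The verbal definition above suggests a computation along the following lines; the specific normalization chosen in this sketch is an assumption made for illustration and is not necessarily the exact formula used in the paper.

    import numpy as np

    def distributional_shift(freqs):
        """Concentration of frequencies away from the greatest (right-most) class,
        computed from the sum of cumulative relative frequencies."""
        rel = np.asarray(freqs, dtype=float)
        rel = rel / rel.sum()
        # The sum of cumulative relative frequencies grows as mass concentrates
        # in the lower classes, i.e. away from the right-most bin.
        return np.cumsum(rel).sum() / len(rel)   # normalization is an assumption

    def relative_distributional_shift(observed, expected):
        """Signed difference in DS between two frequency distributions."""
        return distributional_shift(observed) - distributional_shift(expected)

    # Example: a left-shifted coding profile vs. a uniform expectation over 5 classes.
    print(relative_distributional_shift([40, 30, 15, 10, 5], [20, 20, 20, 20, 20]))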

The acoustic wave equation is a partial differential equation (PDE) which describes the propagation of acoustic waves through a material. In general, the solution to this PDE is nonunique. Therefore, it is necessary to impose initial conditions in the form of Cauchy conditions for obtaining a unique solution. Theoretically, solving the wave equation is equivalent to representing the wavefield in terms of a radiation source which possesses finite energy over space and time. The radiation source is represented by a forcing term on the right-hand side of the wave equation. In practice, the source may be represented in terms of the normal derivative of pressure or the normal velocity over a surface. The pressure wavefield is then calculated by solving an associated boundary-value problem via imposing conditions on the boundary of a chosen solution space. From an analytic point of view, this manuscript aims to review typical approaches for obtaining a unique solution to the acoustic wave equation in terms of either a volumetric radiation source, or a surface source in terms of the normal derivative of pressure or the normal velocity. A numerical approximation of the derived formulae will then be explained. The key step in numerically approximating the derived analytic formulae is the inclusion of the source, and it will be studied carefully in this manuscript.
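
For reference, the inhomogeneous acoustic wave equation with a volumetric radiation source and Cauchy initial data takes the familiar form below; a constant sound speed c is assumed here purely for compactness of notation.

    \[
    \frac{1}{c^2}\,\frac{\partial^2 p}{\partial t^2}(\mathbf{x},t) \;-\; \nabla^2 p(\mathbf{x},t) \;=\; s(\mathbf{x},t),
    \qquad p(\mathbf{x},0)=p_0(\mathbf{x}),\quad \frac{\partial p}{\partial t}(\mathbf{x},0)=p_1(\mathbf{x}),
    \]

where s is the forcing term representing a radiation source with finite energy over space and time.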

We perform a quantitative assessment of different strategies to compute the contribution due to surface tension in incompressible two-phase flows using a conservative level set (CLS) method. More specifically, we compare classical approaches, such as the direct computation of the curvature from the level set or the Laplace-Beltrami operator, with an evolution equation for the mean curvature recently proposed in the literature. We consider the test case of a static bubble, for which an exact solution for the pressure jump across the interface is available, and the test case of an oscillating bubble, showing the pros and cons of the different approaches.
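
For orientation, the classical route referred to above as the direct computation of the curvature from the level set corresponds, in continuum-surface-force form, to the expressions below; sign conventions and the choice of regularized delta function vary between implementations.

    \[
    \mathbf{f}_{\sigma} \;=\; \sigma\,\kappa\,\mathbf{n}\,\delta_\Gamma,
    \qquad
    \mathbf{n} \;=\; \frac{\nabla\phi}{|\nabla\phi|},
    \qquad
    \kappa \;=\; -\nabla\cdot\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right),
    \]

where $\phi$ is the (conservative) level set function, $\sigma$ the surface tension coefficient, and $\delta_\Gamma$ a regularized interface delta function.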

We introduce a method to construct a stochastic surrogate model from the results of dimensionality reduction in forward uncertainty quantification. The hypothesis is that the high-dimensional input augmented by the output of a computational model admits a low-dimensional representation. This assumption can be met by numerous uncertainty quantification applications with physics-based computational models. The proposed approach differs from a sequential application of dimensionality reduction followed by surrogate modeling, as we "extract" a surrogate model from the results of dimensionality reduction in the input-output space. This feature becomes desirable when the input space is genuinely high-dimensional. The proposed method also diverges from the Probabilistic Learning on Manifold, as a reconstruction mapping from the feature space to the input-output space is circumvented. The final product of the proposed method is a stochastic simulator that propagates a deterministic input into a stochastic output, preserving the convenience of a sequential "dimensionality reduction + Gaussian process regression" approach while overcoming some of its limitations. The proposed method is demonstrated through two uncertainty quantification problems characterized by high-dimensional input uncertainties.
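
To make the contrast concrete, the sequential "dimensionality reduction + Gaussian process regression" baseline that the abstract refers to can be sketched as follows with scikit-learn; the synthetic data, the choice of PCA, and the kernel are illustrative assumptions, and this is the baseline being contrasted, not the proposed method.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Synthetic high-dimensional inputs and a scalar model output (placeholders).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 500))                  # 500-dimensional input
    y = np.sin(X[:, :5].sum(axis=1)) + 0.05 * rng.normal(size=200)

    # Step 1: dimensionality reduction of the input space.
    pca = PCA(n_components=5).fit(X)
    Z = pca.transform(X)

    # Step 2: Gaussian process regression in the reduced space.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(Z, y)

    # Prediction at a new input: reduce first, then regress.
    x_new = rng.normal(size=(1, 500))
    mean, std = gp.predict(pca.transform(x_new), return_std=True)
    print(mean, std)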

We consider nonparametric Bayesian inference in a multidimensional diffusion model with reflecting boundary conditions based on discrete high-frequency observations. We prove a general posterior contraction rate theorem in $L^2$-loss, which is applied to Gaussian priors. The resulting posteriors, as well as their posterior means, are shown to converge to the ground truth at the minimax optimal rate over H\"older smoothness classes in any dimension. Of independent interest and as part of our proofs, we show that certain frequentist penalized least squares estimators are also minimax optimal.

We establish a lower bound for the complexity of multiplying two skew polynomials. The lower bound coincides with the upper bound conjectured by Caruso and Borgne in 2017, up to a log factor. We present algorithms for three special cases, indicating that the aforementioned lower bound is quasi-optimal. In fact, our lower bound is quasi-optimal in the sense of bilinear complexity. In addition, we discuss the average bilinear complexity of simultaneous multiplication of skew polynomials and the complexity of skew polynomial multiplication in the case of towers of extensions.
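
For readers unfamiliar with the object, a schoolbook product in a skew polynomial ring R[X; sigma], where multiplication twists coefficients via X * c = sigma(c) * X, can be sketched as follows; this is the naive quadratic algorithm included only to fix notation, not one of the quasi-optimal algorithms discussed above, and the ring and automorphism in the example are purely illustrative.

    def skew_multiply(a, b, sigma):
        """Schoolbook product of skew polynomials given as coefficient lists
        (a[i] is the coefficient of X**i) over a ring with automorphism sigma,
        using the commutation rule X * c = sigma(c) * X."""
        def sigma_pow(c, k):
            for _ in range(k):
                c = sigma(c)
            return c

        result = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                # (ai * X**i) * (bj * X**j) = ai * sigma**i(bj) * X**(i+j)
                result[i + j] += ai * sigma_pow(bj, i)
        return result

    # Example over the complex numbers with complex conjugation as the automorphism:
    # (1 + 2i*X) * (3i + X) = 3i + 7*X + 2i*X**2.
    print(skew_multiply([1, 2j], [3j, 1], sigma=lambda c: complex(c).conjugate()))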

The interest in network analysis of bibliographic data has grown substantially in recent years, yet comprehensive statistical models for examining the complete dynamics of scientific networks based on bibliographic data are generally lacking. Current empirical studies often restrict the analysis either to paper citation networks (paper-by-paper) or to author networks (author-by-author). However, such networks encompass not only direct connections between papers, but also indirect relationships between the references of papers connected by a citation link. In this paper, we extend recently developed relational hyperevent models (RHEM) for analyzing scientific networks. We introduce new covariates representing theoretically meaningful and empirically interesting sub-network configurations. The model accommodates testing hypotheses concerning (i) the polyadic nature of scientific publication events, and (ii) the interdependencies between authors and references of current and prior papers. We implement the model using purpose-built, publicly available open-source software, demonstrating its empirical value in an analysis of a large publicly available scientific network dataset. Assessing the relative strength of the various effects reveals that both the hyperedge structure of publication events and the interconnection between authors and references significantly improve our understanding and interpretation of collaborative scientific production.

In this paper, we consider the numerical approximation of a time-fractional stochastic Cahn--Hilliard equation driven by an additive fractionally integrated Gaussian noise. The model involves a Caputo fractional derivative in time of order $\alpha\in(0,1)$ and a fractional time-integral noise of order $\gamma\in[0,1]$. The numerical scheme approximates the model by a piecewise linear finite element method in space and a convolution quadrature in time (for both time-fractional operators), along with the $L^2$-projection for the noise. We carefully investigate the spatially semidiscrete and fully discrete schemes, and obtain strong convergence rates by using clever energy arguments. The temporal H\"older continuity property of the solution plays a key role in the error analysis. Unlike the stochastic Allen--Cahn equation, the presence of the unbounded elliptic operator in front of the cubic nonlinearity in the underlying model adds complexity and challenges to the error analysis. To overcome these difficulties, several new techniques and error estimates are developed. The study concludes with numerical examples that validate the theoretical findings.
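
Schematically, models of this type take a form along the following lines, where the signs, the domain, and the precise noise scaling are assumptions made here for orientation and may differ from the paper's exact setup:

    \[
    \partial_t^{\alpha} u \;+\; \Delta^2 u \;-\; \Delta f(u) \;=\; I_t^{\gamma}\,\dot{W}(t),
    \qquad f(u) = u^3 - u, \qquad \alpha\in(0,1),\ \ \gamma\in[0,1],
    \]

with $\partial_t^{\alpha}$ the Caputo derivative, $I_t^{\gamma}$ the Riemann--Liouville time integral, and $\dot{W}$ a Gaussian noise; the term $-\Delta f(u)$ is the unbounded elliptic operator acting on the cubic nonlinearity referred to above.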

Data assimilation is an uncertainty quantification technique that estimates the hidden true state by updating the prediction obtained from the model dynamics with observation data. As a prediction model, we consider a class of nonlinear dynamical systems on Hilbert spaces including the two-dimensional Navier-Stokes equations and the Lorenz '63 and '96 equations. For nonlinear model dynamics, the ensemble Kalman filter (EnKF) is often used to approximate the mean and covariance of the probability distribution with a set of particles called an ensemble. In this paper, we consider a deterministic version of the EnKF known as the ensemble transform Kalman filter (ETKF), which performs well even with limited ensemble sizes in comparison to other, stochastic implementations of the EnKF. When the ETKF is applied to large-scale systems, an ad hoc numerical technique called covariance inflation is often employed to reduce approximation errors. Despite the practical effectiveness of the ETKF, little is known theoretically. The present study aims to establish a theoretical analysis of the ETKF. We show that the estimation error of the ETKF, with and without covariance inflation, is bounded on any finite time interval. In particular, a uniform-in-time error bound is obtained when the inflation parameter is chosen appropriately, justifying the effectiveness of covariance inflation in the ETKF.
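
As a concrete reference point, a single ETKF analysis step with multiplicative covariance inflation can be sketched as follows in the standard ensemble-space formulation; the linear observation operator and all variable names are illustrative assumptions, and this sketch is not taken from the paper under discussion.

    import numpy as np

    def etkf_analysis(X, y, H, R, inflation=1.0):
        """One ETKF analysis step with multiplicative covariance inflation.

        X : (n, m) forecast ensemble (m members of an n-dimensional state)
        y : (p,)   observation vector
        H : (p, n) linear observation operator (illustrative assumption)
        R : (p, p) observation error covariance
        """
        m = X.shape[1]
        x_mean = X.mean(axis=1, keepdims=True)
        Xp = (X - x_mean) * np.sqrt(inflation)   # inflated forecast perturbations

        Yp = H @ Xp                              # perturbations in observation space
        # Analysis error covariance in the m-dimensional ensemble space.
        Pa = np.linalg.inv((m - 1) * np.eye(m) + Yp.T @ np.linalg.solve(R, Yp))
        # Mean-update weights and symmetric square-root transform of the ensemble.
        w_mean = Pa @ Yp.T @ np.linalg.solve(R, y - (H @ x_mean).ravel())
        evals, evecs = np.linalg.eigh((m - 1) * Pa)
        W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
        # Analysis ensemble: updated mean plus transformed perturbations.
        return x_mean + Xp @ (w_mean[:, None] + W)

    # Synthetic example: 3-dimensional state, 5 members, 2 observed components.
    rng = np.random.default_rng(0)
    Xa = etkf_analysis(rng.normal(size=(3, 5)), y=np.array([0.5, -0.2]),
                       H=np.eye(2, 3), R=0.1 * np.eye(2), inflation=1.05)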
