
We discuss the design of invariant measure-preserving transformed dynamics for the numerical treatment of Langevin dynamics based on rescaling of time, with the goal of sampling from a prescribed invariant measure. Given an appropriate monitor function, which characterizes the numerical difficulty of the problem as a function of the state of the system, this method allows the stepsizes to be reduced only when necessary, facilitating efficient recovery of long-time behavior. We study both the overdamped and underdamped Langevin dynamics. We investigate how an appropriate correction term that ensures preservation of the invariant measure should be incorporated into a numerical splitting scheme. Finally, we demonstrate the use of the technique in several model systems, including a Bayesian sampling problem with a steep prior.
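
As a concrete illustration of the time-rescaling idea, the following minimal sketch (not the paper's splitting scheme) applies an Euler-Maruyama discretization to time-rescaled overdamped Langevin dynamics in one dimension. The potential $V$ and monitor function $g$ are hypothetical choices; the drift carries the correction term $\beta^{-1}\nabla g$ that keeps the measure proportional to $e^{-\beta V}$ invariant.

```python
import numpy as np

# Hypothetical 1D double-well potential and its gradient.
def V(x):     return (x**2 - 1.0)**2
def gradV(x): return 4.0 * x * (x**2 - 1.0)

# Hypothetical monitor function: shrink effective steps where |V'| is large.
def g(x):     return 1.0 / np.sqrt(1.0 + gradV(x)**2)
def gradg(x):
    h = 1e-6  # finite-difference gradient of g (sufficient for a sketch)
    return (g(x + h) - g(x - h)) / (2.0 * h)

beta, dtau, nsteps = 1.0, 1e-3, 100_000
rng = np.random.default_rng(0)
x = 0.0
samples = np.empty(nsteps)
for n in range(nsteps):
    # drift = -g*gradV + (1/beta)*gradg: the correction term (1/beta)*gradg
    # is what preserves the measure proportional to exp(-beta*V).
    drift = -g(x) * gradV(x) + gradg(x) / beta
    x += drift * dtau + np.sqrt(2.0 * g(x) * dtau / beta) * rng.standard_normal()
    samples[n] = x
print("sample mean, var:", samples.mean(), samples.var())
```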

Related content

Overdamped Langevin dynamics are reversible stochastic differential equations which are commonly used to sample probability measures in high-dimensional spaces, such as the ones appearing in computational statistical physics and Bayesian inference. By varying the diffusion coefficient, there are in fact infinitely many overdamped Langevin dynamics which are reversible with respect to the target probability measure at hand. This suggests optimizing the diffusion coefficient in order to increase the convergence rate of the dynamics, as measured by the spectral gap of the generator associated with the stochastic differential equation. We analytically study this problem here, obtaining in particular necessary conditions on the optimal diffusion coefficient. We also derive an explicit expression for the optimal diffusion in an appropriate homogenized limit. Numerical results, relying both on discretizations of the spectral gap problem and on Monte Carlo simulations of the stochastic dynamics, demonstrate the improved sampling quality arising from an appropriate choice of the diffusion coefficient.
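
To make the spectral-gap criterion concrete, here is a minimal sketch (not the paper's method) that discretizes the generator of a one-dimensional diffusion reversible with respect to $e^{-V}$, with state-dependent diffusion coefficient $D$, and computes the spectral gap as the first nonzero eigenvalue. The potential and the varying diffusion profile are hypothetical, and no normalization constraint is imposed on $D$, so the comparison is only illustrative.

```python
import numpy as np
from scipy.linalg import eigh

# Spectral gap of L f = (1/rho) d/dx ( D rho df/dx ), rho = exp(-V),
# discretized by finite differences on [-L, L] with Neumann conditions.
def spectral_gap(D, V, L=3.0, n=400):
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    rho = np.exp(-V(x))
    xm = 0.5 * (x[:-1] + x[1:])           # midpoints
    w = D(xm) * np.exp(-V(xm)) / h**2     # conductances D*rho at midpoints
    A = np.zeros((n, n))
    for i in range(n - 1):                # assemble -(D rho f')'
        A[i, i] += w[i]; A[i + 1, i + 1] += w[i]
        A[i, i + 1] -= w[i]; A[i + 1, i] -= w[i]
    M = np.diag(rho)
    vals = eigh(A, M, eigvals_only=True)  # generalized symmetric eigenproblem
    return vals[1]                        # first nonzero eigenvalue

V = lambda x: (x**2 - 1.0)**2             # double-well potential
print("constant D:", spectral_gap(lambda x: np.ones_like(x), V))
# hypothetical spatially varying diffusion, larger near the barrier at x = 0
print("varying  D:", spectral_gap(lambda x: 1.0 + 2.0 * np.exp(-4.0 * x**2), V))
```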

Capturing the extremal behaviour of data often requires bespoke marginal and dependence models which are grounded in rigorous asymptotic theory, and hence provide reliable extrapolation into the upper tails of the data-generating distribution. We present a toolbox of four methodological frameworks, motivated by modern extreme value theory, that can be used to accurately estimate extreme exceedance probabilities or the corresponding level in either a univariate or multivariate setting. Our frameworks were used to facilitate the winning contribution of Team Yalla to the EVA (2023) Conference Data Challenge, which was organised for the 13$^\text{th}$ International Conference on Extreme Value Analysis. This competition comprised seven teams competing across four separate sub-challenges, each requiring the modelling of data simulated from known, yet highly complex, statistical distributions, and extrapolation far beyond the range of the available samples in order to predict probabilities of extreme events. Data were constructed to be representative of real environmental data, sampled from the fantasy country of "Utopia".
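
The univariate building block of such toolboxes is often the peaks-over-threshold method; below is a minimal sketch, with a synthetic heavy-tailed sample and an arbitrary threshold choice standing in for the competition data and the teams' actual modelling decisions.

```python
import numpy as np
from scipy import stats

# Peaks-over-threshold: fit a generalized Pareto distribution (GPD) to
# exceedances above a high threshold u, then estimate P(X > z) for z >> u.
rng = np.random.default_rng(1)
x = rng.standard_t(df=4, size=20_000)        # synthetic heavy-tailed sample

u = np.quantile(x, 0.95)                     # threshold choice (a modelling decision)
exc = x[x > u] - u
shape, loc, scale = stats.genpareto.fit(exc, floc=0.0)

z = 8.0                                      # extreme level, far beyond the data
p_exceed_u = np.mean(x > u)
p_z = p_exceed_u * stats.genpareto.sf(z - u, shape, loc=0.0, scale=scale)
print(f"estimated P(X > {z}) = {p_z:.2e}")
print(f"empirical  P(X > {z}) = {np.mean(x > z):.2e}")  # likely 0: why we extrapolate
```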

We develop an inferential toolkit for analyzing object-valued responses, which correspond to data situated in general metric spaces, paired with Euclidean predictors within the conformal framework. To this end we introduce conditional profile average transport costs. Distance profiles, the one-dimensional distributions of probability mass falling into balls of increasing radius around each object, are compared through the optimal transport cost of moving one distance profile to another; the average cost of transporting a given distance profile to all others is crucial for statistical inference in metric spaces and underpins the proposed conditional profile scores. A key feature of the proposed approach is to utilize the distribution of conditional profile average transport costs as a conformity score for general metric space-valued responses, which facilitates the construction of prediction sets by the split conformal algorithm. We derive the uniform convergence rate of the proposed conformity score estimators and establish asymptotic conditional validity for the prediction sets. The finite-sample performance for synthetic data in various metric spaces demonstrates that the proposed conditional profile score outperforms existing methods in terms of both the coverage level and the size of the resulting prediction sets, even in the special case of scalar and thus Euclidean responses. We also demonstrate the practical utility of conditional profile scores for network data from New York taxi trips and for compositional data reflecting the energy sourcing of U.S. states.
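
For intuition, the split conformal algorithm itself is short. The sketch below uses a plain absolute-residual conformity score on scalar responses as a placeholder for the paper's conditional profile scores, with a toy nearest-neighbour regressor standing in for any fitted model.

```python
import numpy as np

# Split conformal prediction with a generic conformity score.
rng = np.random.default_rng(2)
n = 2000
X = rng.uniform(-2, 2, size=(n, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * rng.standard_normal(n)

fit_idx, cal_idx = np.arange(n // 2), np.arange(n // 2, n)

# Toy regressor: local (nearest-neighbour) mean, standing in for any model.
def predict(x0, k=50):
    d = np.abs(X[fit_idx, 0] - x0)
    return y[fit_idx][np.argsort(d)[:k]].mean()

alpha = 0.1
scores = np.array([abs(y[i] - predict(X[i, 0])) for i in cal_idx])
n_cal = len(cal_idx)
q = np.quantile(scores, np.ceil((1 - alpha) * (n_cal + 1)) / n_cal)

x_new = 0.5
m = predict(x_new)
print(f"90% prediction interval at x={x_new}: [{m - q:.3f}, {m + q:.3f}]")
```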

We present the first formulation of the optimal polynomial approximation of the solution of linear non-autonomous systems of ODEs in the framework of the so-called $\star$-product. This product is the basis of new approaches for the solution of such ODEs, in both the analytical and the numerical sense. The paper shows how to formally state the problem and derives upper bounds on its error.
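
The $\star$-product formalism does not reduce to a short snippet, but the object being optimized, a polynomial approximation of the solution of a linear non-autonomous ODE, can be illustrated generically. The sketch below fits polynomials of increasing degree to a numerically computed reference solution; the coefficient function is a hypothetical choice, and the least-squares fit is not the paper's optimality construction.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear non-autonomous scalar ODE x'(t) = a(t) x(t), x(0) = 1.
a = lambda t: np.cos(3.0 * t)                 # hypothetical coefficient
sol = solve_ivp(lambda t, x: a(t) * x, (0.0, 2.0), [1.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 2.0, 400)
x = sol.sol(t)[0]

# Least-squares polynomial approximations of the solution of growing degree.
for deg in (2, 4, 6, 8, 10):
    c = np.polynomial.chebyshev.Chebyshev.fit(t, x, deg)
    err = np.max(np.abs(c(t) - x))
    print(f"degree {deg:2d}: sup-norm error {err:.2e}")
```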

This paper analyses conforming and nonconforming virtual element formulations of arbitrary polynomial degrees on general polygonal meshes for the coupling of solid and fluid phases in deformable porous plates. The governing equations consist of one fourth-order equation for the transverse displacement of the middle surface coupled with a second-order equation for the pressure head relative to the solid with mixed boundary conditions. We propose novel enrichment operators that connect nonconforming virtual element spaces of general degree to continuous Sobolev spaces. These operators satisfy additional orthogonal and best-approximation properties (referred to as a conforming companion operator in the context of finite element methods), which play an important role in the nonconforming methods. This paper proves a priori error estimates in the best-approximation form, derives residual-based reliable and efficient a posteriori error estimates in appropriate norms, and shows that these error bounds are robust with respect to the main model parameters. The computational examples illustrate the numerical behaviour of the suggested virtual element discretisations and confirm the theoretical findings on different polygonal meshes with mixed boundary conditions.

In this work, energy levels of the Majumdar-Ghosh model (MGM) are analyzed for chains of up to 15 spins in the noisy intermediate-scale quantum framework using noisy simulations. This is a useful model whose exact solution is known for a particular choice of interaction coefficients. We solve this model for interaction coefficients other than those required for exact solvability, as such solutions can help in understanding quantum phase transitions in complex spin chain models. The solutions are obtained using the quantum approximate optimization algorithm (QAOA) and the variational quantum eigensolver (VQE). To obtain them, the one-dimensional lattice network is mapped to a Hamiltonian that encodes the required interaction coefficients among spins. Then, the ground-state energy eigenvalue of this Hamiltonian is found using QAOA and VQE. Further, the validity of the Lieb-Schultz-Mattis theorem in the context of the MGM is established by employing variational quantum deflation to find the first excited-state energy of the MGM. A solution of an unweighted Max-Cut problem on a graph with 17 nodes is also obtained using QAOA and VQE to determine which of the two techniques performs better on a combinatorial optimization problem. Since the variational quantum algorithms used here to revisit the Max-Cut problem and the MGM are hybrid algorithms, they require classical optimization. Consequently, the results obtained using different classical optimizers are compared, revealing that the QNSPSA optimizer improves the convergence of QAOA in comparison to the SPSA optimizer. However, VQE with the EfficientSU2 ansatz using the SPSA optimizer yields the best results.
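
For chains small enough to diagonalize exactly, the MGM Hamiltonian provides a classical baseline against which variational energies can be checked. A minimal sketch (plain exact diagonalization, not the QAOA/VQE pipeline used in the paper):

```python
import numpy as np
from functools import reduce

# Exact diagonalization of the Majumdar-Ghosh Hamiltonian on a small ring:
# H = J1 * sum_i S_i.S_{i+1} + J2 * sum_i S_i.S_{i+2}   (spin-1/2, periodic)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def two_site(op, i, j, n):
    mats = [I2] * n
    mats[i] = op; mats[j] = op
    return reduce(np.kron, mats)

def mg_hamiltonian(n, J1=1.0, J2=0.5):
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        for J, d in ((J1, 1), (J2, 2)):
            j = (i + d) % n
            H += J * sum(two_site(s, i, j, n) for s in (sx, sy, sz))
    return H

n = 8
E = np.linalg.eigvalsh(mg_hamiltonian(n))   # J2 = J1/2: the exactly solvable point
print("lowest energies:", E[:3].round(6))
# At J2 = J1/2 the two dimer coverings give a twofold-degenerate ground
# state with energy -3*n*J1/8 = -3.0 here.
```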

We detail the mathematical formulation of the line of "functional quantizer" modules developed by the Mathematics and Music Lab (MML) at Michigan Technological University for the VCV Rack software modular synthesizer platform, which allow synthesizer players to tune oscillators to new musical scales based on mathematical functions. For example, we describe the recently released MML Logarithmic Quantizer (LOG QNT) module that tunes synthesizer oscillators to the non-Pythagorean musical scale introduced by the indie band The Apples in Stereo.
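
The underlying mechanism of a functional quantizer can be sketched in a few lines: scale degrees are generated from a chosen function, and an incoming volt-per-octave control voltage is snapped to the nearest scale tone. The logarithmic scale below is a hypothetical stand-in, not the exact LOG QNT tuning.

```python
import numpy as np

# A generic "functional quantizer": scale degree n is assigned frequency f(n);
# an incoming 1 V/octave control voltage is snapped to the nearest scale tone.
f0 = 261.63                                  # reference frequency (middle C)
n = np.arange(1, 32)
freqs = f0 * (1.0 + np.log(n))               # scale degrees from a log function

volts_of = lambda f: np.log2(f / f0)         # 1 V/octave convention
scale_volts = volts_of(freqs)

def quantize(v_in):
    """Snap an input control voltage to the nearest scale tone."""
    return scale_volts[np.argmin(np.abs(scale_volts - v_in))]

for v in (0.13, 0.77, 1.42):
    vq = quantize(v)
    print(f"in {v:+.2f} V -> out {vq:+.3f} V ({f0 * 2**vq:.1f} Hz)")
```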

High-dimensional, higher-order tensor data are gaining prominence in a variety of fields, including but not limited to computer vision and network analysis. Tensor factor models, induced from noisy versions of tensor decompositions or factorizations, are natural and potent instruments for studying a collection of tensor-variate objects that may be dependent or independent. However, statistical inferential theory for the estimation of the various low-rank structures that customarily play the role of signals in tensor factor models is still at an early stage of development. In this paper, we attempt to ``decode'' the estimation of a higher-order tensor factor model by leveraging tensor matricization. Specifically, we recast it into mode-wise traditional high-dimensional vector/fiber factor models, enabling the deployment of conventional principal component analysis (PCA) for estimation. Using the Tucker tensor factor model (TuTFaM), induced from the noisy version of the widely used Tucker decomposition, we show that estimators of the signal components are essentially mode-wise PCA techniques, and that the involvement of projection and iteration enhances the signal-to-noise ratio to varying extents. We establish the inferential theory of the proposed estimators, conduct extensive simulation experiments, and illustrate how the proposed estimators work in tensor reconstruction, and in clustering for independent video and dependent economic datasets, respectively.
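
The mode-wise PCA idea is easy to state in code: unfold the data tensor along each mode and take the leading singular vectors as estimated loadings. A minimal sketch for a simulated Tucker-type model, without the projection and iteration refinements discussed in the paper:

```python
import numpy as np

# Mode-wise PCA for a Tucker-type tensor factor model.
rng = np.random.default_rng(3)
dims, ranks = (30, 40, 50), (3, 2, 4)

# Simulate X = G x1 A1 x2 A2 x3 A3 + noise  (x_k: mode-k product).
A = [np.linalg.qr(rng.standard_normal((d, r)))[0] for d, r in zip(dims, ranks)]
G = rng.standard_normal(ranks)
X = np.einsum('abc,ia,jb,kc->ijk', G, *A) + 0.1 * rng.standard_normal(dims)

def mode_unfold(T, k):
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

A_hat = []
for k, r in enumerate(ranks):
    U, _, _ = np.linalg.svd(mode_unfold(X, k), full_matrices=False)
    A_hat.append(U[:, :r])                   # mode-k PCA loadings

# Subspace recovery error per mode (invariant to rotation of the factors).
for k in range(3):
    P, Ph = A[k] @ A[k].T, A_hat[k] @ A_hat[k].T
    print(f"mode {k}: subspace error {np.linalg.norm(P - Ph):.3f}")
```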

We introduce a nonconforming hybrid finite element method for the two-dimensional vector Laplacian, based on a primal variational principle for which conforming methods are known to be inconsistent. Consistency is ensured using penalty terms similar to those used to stabilize hybridizable discontinuous Galerkin (HDG) methods, with a carefully chosen penalty parameter due to Brenner, Li, and Sung [Math. Comp., 76 (2007), pp. 573-595]. Our method accommodates elements of arbitrarily high order and, like HDG methods, it may be implemented efficiently using static condensation. The lowest-order case recovers the $P_1$-nonconforming method of Brenner, Cui, Li, and Sung [Numer. Math., 109 (2008), pp. 509-533], and we show that higher-order convergence is achieved under appropriate regularity assumptions. The analysis makes novel use of a family of weighted Sobolev spaces, due to Kondrat'ev, for domains admitting corner singularities.
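
Static condensation, which the method shares with HDG, is purely algebraic: interior unknowns are eliminated, leaving a smaller Schur-complement system for the interface (skeleton) unknowns. A generic sketch with a random SPD matrix standing in for an actual stiffness matrix:

```python
import numpy as np

# Static condensation via a Schur complement on the interface block.
rng = np.random.default_rng(4)
ni, nb = 8, 4                                 # interior / interface unknowns

M = rng.standard_normal((ni + nb, ni + nb))
K = M @ M.T + (ni + nb) * np.eye(ni + nb)     # SPD "stiffness" matrix
f = rng.standard_normal(ni + nb)

Aii, Aib = K[:ni, :ni], K[:ni, ni:]
Abi, Abb = K[ni:, :ni], K[ni:, ni:]
fi, fb = f[:ni], f[ni:]

S = Abb - Abi @ np.linalg.solve(Aii, Aib)     # Schur complement on the skeleton
g = fb - Abi @ np.linalg.solve(Aii, fi)
ub = np.linalg.solve(S, g)                    # small interface solve
ui = np.linalg.solve(Aii, fi - Aib @ ub)      # recover interior unknowns locally

u_full = np.linalg.solve(K, f)                # check against the full solve
print("max error:", np.max(np.abs(np.concatenate([ui, ub]) - u_full)))
```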

With the advent of massive data sets, much of the computational science and engineering community has moved toward data-intensive approaches to regression and classification. These present significant challenges, however, due to the increasing size, complexity, and dimensionality of the problems. In particular, covariance matrices in many cases are numerically unstable, and it is well known from numerical analysis that ill-conditioned matrices often cannot be inverted accurately on a finite-precision computer. A common ad hoc approach to stabilizing a matrix is the application of a so-called nugget; however, this changes the model and introduces error into the original solution. In this paper we develop a multilevel computational method that scales well with the number of observations and dimensions. A multilevel basis is constructed, adapted to a kD-tree partitioning of the observations. Numerically unstable covariance matrices with large condition numbers can be transformed into well-conditioned multilevel ones without compromising accuracy. Moreover, it is shown that the multilevel prediction exactly solves the Best Linear Unbiased Predictor (BLUP) and Generalized Least Squares (GLS) model while remaining numerically stable. The multilevel method is tested on numerically unstable problems of up to 25 dimensions. Numerical results show speedups of up to 42,050 times for solving the BLUP problem, with the same accuracy as the traditional iterative approach; for very ill-conditioned cases, where the traditional approach fails to converge, the speedup is effectively infinite. In addition, decay estimates for the multilevel covariance matrices are derived based on high-dimensional interpolation techniques from numerical analysis. This work lies at the intersection of statistics, uncertainty quantification, high performance computing, and computational applied mathematics.
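
The instability the paper targets is easy to reproduce: a smooth covariance kernel evaluated on a moderately dense point set yields a matrix whose condition number explodes, and a nugget trades conditioning against model fidelity. A minimal sketch of that trade-off (kernel and length-scale are arbitrary choices):

```python
import numpy as np
from scipy.spatial.distance import cdist

# Gaussian (squared-exponential) covariance matrices become numerically
# singular as points cluster; a "nugget" restores invertibility but perturbs
# the model -- the trade-off the multilevel approach is designed to avoid.
rng = np.random.default_rng(5)
pts = rng.uniform(0, 1, size=(500, 2))

r = cdist(pts, pts)
C = np.exp(-(r / 0.5) ** 2)                   # smooth kernel, long length-scale
print("condition number without nugget: %.2e" % np.linalg.cond(C))

for nugget in (1e-10, 1e-6, 1e-2):
    Cn = C + nugget * np.eye(len(pts))
    print("nugget %.0e -> condition number %.2e" % (nugget, np.linalg.cond(Cn)))
```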
