
We build a valid p-value based on a concentration inequality for bounded random variables introduced by Pelekis, Ramon and Wang. The motivation behind this work is the calibration of predictive algorithms in a distribution-free setting. The resulting super-uniform p-value is tighter than the Hoeffding and Bentkus alternatives in certain regions. Even though we are motivated by a calibration setting in a machine learning context, the ideas presented in this work are also relevant in classical statistical inference. Furthermore, we compare the power of a collection of valid p-values for bounded losses presented in previous literature.
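As a hedged illustration of the two baselines mentioned above, the sketch below computes Hoeffding- and Bentkus-style p-values for testing H0: E[loss] >= alpha from n i.i.d. losses in [0, 1], as they appear in distribution-free calibration; the function names and synthetic data are our own, and the Pelekis-Ramon-Wang p-value itself is not reproduced here.

```python
import numpy as np
from scipy.stats import binom

def hoeffding_pvalue(losses, alpha):
    """Valid p-value for H0: E[loss] >= alpha, with losses in [0, 1].
    Hoeffding: P(mean <= r) <= exp(-2 n (alpha - r)^2) for r < alpha."""
    n, r = len(losses), np.mean(losses)
    return 1.0 if r >= alpha else float(np.exp(-2.0 * n * (alpha - r) ** 2))

def bentkus_pvalue(losses, alpha):
    """Bentkus-style p-value: e * P(Bin(n, alpha) <= ceil(n * mean))."""
    n, r = len(losses), np.mean(losses)
    return float(min(1.0, np.e * binom.cdf(np.ceil(n * r), n, alpha)))

rng = np.random.default_rng(0)
losses = rng.uniform(0.0, 0.4, size=200)   # synthetic bounded losses
print(hoeffding_pvalue(losses, 0.3), bentkus_pvalue(losses, 0.3))
```

The regions where one bound dominates the other depend on n, alpha, and the observed mean, which is what makes the tighter alternative of the paper attractive.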

Related content

We present a semi-amortized variational inference framework designed for computationally feasible uncertainty quantification in 2D full-waveform inversion, exploring the multimodal posterior distribution without dimensionality reduction. The framework is called WISER, short for full-Waveform variational Inference via Subsurface Extensions with Refinements. WISER leverages the power of generative artificial intelligence to perform approximate amortized inference that is low-cost but exhibits an amortization gap. This gap is closed through non-amortized refinements that make frugal use of acoustic wave physics. Case studies illustrate that WISER delivers full-resolution, computationally feasible, and reliable uncertainty estimates of velocity models and imaged reflectivities.
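The amortize-then-refine pattern can be conveyed with a minimal sketch. Everything below is hypothetical: a fixed linear map stands in for both the generative network and the acoustic wave-physics forward operator, and plain gradient descent on the data misfit plays the role of the non-amortized refinements.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(size=(30, 10))          # stub forward operator (wave physics)
x_true = rng.normal(size=10)
d_obs = F @ x_true + 0.01 * rng.normal(size=30)   # observed data

def amortized_guess(d):
    """Stub for the amortized inference network: cheap but biased."""
    return 0.5 * F.T @ d / np.linalg.norm(F, ord=2) ** 2

# Non-amortized refinement: close the amortization gap with a few
# gradient steps on the data misfit 0.5 * ||F x - d||^2.
x = amortized_guess(d_obs)
step = 1.0 / np.linalg.norm(F, ord=2) ** 2
for _ in range(100):
    x = x - step * F.T @ (F @ x - d_obs)

print("misfit after refinement:", np.linalg.norm(F @ x - d_obs))
```

In the actual framework the refinement acts on the variational posterior rather than a point estimate; the sketch only shows why a handful of physics-based updates can correct a cheap amortized initialization.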

A greedy randomized nonlinear Bregman-Kaczmarz method, which samples the working index using residual information, is developed for the solution of constrained nonlinear systems of equations. Theoretical analyses prove the convergence of the greedy randomized nonlinear Bregman-Kaczmarz method and its relaxed version. Numerical experiments verify the effectiveness of the proposed method, which converges faster than existing nonlinear Bregman-Kaczmarz methods.
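A minimal sketch of the greedy randomized sampling idea, in the unconstrained Euclidean special case: the working index is drawn with probability proportional to its squared residual, followed by a standard nonlinear Kaczmarz step. The Bregman variant of the abstract would replace the Euclidean update with a Bregman/mirror step to handle constraints; the toy system below is ours.

```python
import numpy as np

def greedy_randomized_nonlinear_kaczmarz(f, jac, x0, iters=1000, seed=0):
    """Solve f(x) = 0 componentwise, sampling the working index with
    probability proportional to the squared residual (greedy part),
    then taking a nonlinear Kaczmarz step on that single equation."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = f(x)
        s = np.sum(r ** 2)
        if s < 1e-28:                       # residual vanished: done
            break
        i = rng.choice(len(r), p=r ** 2 / s)
        g = jac(x)[i]                       # gradient of the i-th equation
        x = x - (r[i] / np.dot(g, g)) * g   # nonlinear Kaczmarz step
    return x

# Toy system: f1 = x0^2 + x1 - 3, f2 = x0 + x1^2 - 5  (root at (1, 2))
f = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
jac = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(greedy_randomized_nonlinear_kaczmarz(f, jac, [1.5, 1.5]))
```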

The present work concerns the derivation of a numerical scheme to approximate weak solutions of the Euler equations with a gravitational source term. The designed scheme is proved to be fully well-balanced since it exactly preserves all moving equilibrium solutions, as well as the corresponding steady solutions at rest obtained when the velocity vanishes. Moreover, the proposed scheme is entropy-preserving since it satisfies all fully discrete entropy inequalities. In addition, to guarantee the admissibility of the approximate solutions, the positivity of both the approximate density and pressure is established. Several numerical experiments attest to the relevance of the developed numerical method.
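For reference, a minimal statement (ours, in standard notation) of the one-dimensional system with gravitational potential phi and of the equilibria a fully well-balanced scheme must preserve exactly, where h = e + p/rho denotes the specific enthalpy:

```latex
\begin{gathered}
\partial_t \rho + \partial_x(\rho u) = 0, \qquad
\partial_t(\rho u) + \partial_x\!\left(\rho u^2 + p\right) = -\rho\,\partial_x\phi, \qquad
\partial_t E + \partial_x\!\big((E+p)\,u\big) = -\rho u\,\partial_x\phi, \\[2pt]
\text{steady at rest: } u = 0,\ \partial_x p = -\rho\,\partial_x\phi;
\qquad
\text{smooth moving equilibria: } \rho u,\ s,\ \tfrac{1}{2}u^2 + h + \phi \ \text{constant}.
\end{gathered}
```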

Learning unknown stochastic differential equations (SDEs) from observed data is a significant and challenging task with applications in various fields. Current approaches often use neural networks to represent drift and diffusion functions, and construct a likelihood-based loss by approximating the transition density to train these networks. However, these methods often rely on one-step stochastic numerical schemes, necessitating data with sufficiently high time resolution. In this paper, we introduce novel approximations to the transition density of the parameterized SDE: a Gaussian density approximation inspired by the random perturbation theory of dynamical systems, and its extension, the dynamical Gaussian mixture approximation (DynGMA). Benefiting from the robust density approximation, our method exhibits superior accuracy compared to baseline methods in learning the fully unknown drift and diffusion functions and in computing the invariant distribution from trajectory data. It is also capable of handling trajectory data with low time resolution and variable, even uncontrollable, time step sizes, such as data generated from Gillespie's stochastic simulations. We then conduct several experiments across various scenarios to verify the advantages and robustness of the proposed method.
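A hedged sketch of the baseline such likelihood-based training rests on, and which DynGMA improves at coarse time resolution: the one-step Euler-Maruyama Gaussian approximation of the transition density. The function names and the toy Ornstein-Uhlenbeck data are our own; the paper's random-perturbation Gaussian and mixture constructions are not reproduced here.

```python
import numpy as np

def gaussian_transition_nll(x0, x1, dt, drift, diff):
    """One-step Euler-Maruyama Gaussian approximation of the SDE
    transition density: x1 | x0 ~ N(x0 + f(x0) dt, g(x0)^2 dt).
    Returns the average negative log-likelihood over transition pairs
    (scalar diffusion, for brevity)."""
    mean = x0 + drift(x0) * dt
    var = diff(x0) ** 2 * dt
    return np.mean(0.5 * np.log(2 * np.pi * var)
                   + 0.5 * (x1 - mean) ** 2 / var)

# Toy check on an Ornstein-Uhlenbeck process dX = -theta X dt + sigma dW
rng = np.random.default_rng(2)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 10_000
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * rng.normal()

nll = gaussian_transition_nll(x[:-1], x[1:], dt,
                              drift=lambda x: -theta * x,
                              diff=lambda x: sigma * np.ones_like(x))
print("average transition NLL:", nll)
```

In practice the closed-form drift and diffusion above would be replaced by neural networks trained by minimizing this loss; the Gaussian approximation deteriorates as dt grows, which is precisely the regime the paper targets.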

Matrix evolution equations occur in many applications, such as dynamical Lyapunov/Sylvester systems or Riccati equations in optimization and stochastic control, machine learning, and data assimilation. In many cases, their tightest stability condition comes from a linear term. Exponential time differencing (ETD) is known to produce highly stable numerical schemes by treating the linear term exactly. In particular, for stiff problems, ETD methods are the method of choice. We propose an extension of the class of ETD algorithms to matrix-valued dynamical equations. This allows us to produce highly efficient and stable integration schemes. We show their efficiency and applicability for a variety of real-world problems, from geophysical applications to dynamical problems in machine learning.
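A minimal sketch, under our own assumptions, of an ETD1 (exponential Euler) step for a matrix equation dX/dt = AX + XB + N(X): the Sylvester linear part is applied exactly through vectorization, which is only practical for small dimensions but shows the exact treatment of the stiff linear term.

```python
import numpy as np
from scipy.linalg import expm

def etd1_matrix(A, B, N, X0, h, steps):
    """ETD1 for dX/dt = A X + X B + N(X). With vec (column-stacking),
    the Sylvester operator is L = I (x) A + B^T (x) I, and the step is
    x_{k+1} = e^{hL} x_k + h phi1(hL) N(x_k), phi1(z) = (e^z - 1)/z."""
    n, m = X0.shape
    L = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    E = expm(h * L)
    phi1 = np.linalg.solve(h * L, E - np.eye(n * m))  # needs L invertible
    x = X0.reshape(-1, order="F")
    for _ in range(steps):
        Nx = N(x.reshape(n, m, order="F")).reshape(-1, order="F")
        x = E @ x + h * phi1 @ Nx
    return x.reshape(n, m, order="F")

rng = np.random.default_rng(3)
A = -np.diag([1.0, 50.0, 200.0])     # stiff linear part, treated exactly
B = -np.eye(2)
N = lambda X: 0.1 * np.tanh(X)       # mild nonlinearity
print(etd1_matrix(A, B, N, rng.normal(size=(3, 2)), h=0.1, steps=50))
```

For large-scale problems one would avoid the Kronecker form and exploit e^{hL}X = e^{hA} X e^{hB} directly; the sketch keeps the vectorized form only to make phi1 explicit.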

We propose a third-order numerical integrator based on the Neumann series and Filon quadrature, designed mainly for highly oscillatory partial differential equations. The method can be applied to equations that exhibit small or moderate oscillations; however, counter-intuitively, large oscillations increase the accuracy of the scheme. With the proposed approach, the convergence order of the method can be easily improved. An error analysis of the method is also performed. We consider linear evolution equations involving first- and second-order time derivatives that feature elliptic differential operators, such as the heat equation or the wave equation. Numerical experiments cover the case in which the space dimension is greater than one and confirm the theoretical study.
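To make the Filon ingredient concrete, here is a minimal Filon-type rule of our own (linear interpolation only, far below the paper's third order): the oscillatory moments are integrated exactly, so the error decays rather than grows as the frequency increases, mirroring the counter-intuitive behaviour noted above.

```python
import numpy as np

def filon_linear(f, omega, h):
    """Filon-type quadrature for I = int_0^h f(t) e^{i omega t} dt:
    interpolate f linearly and integrate the oscillatory moments exactly."""
    iw = 1j * omega
    e = np.exp(iw * h)
    m0 = (e - 1.0) / iw                   # int e^{i omega t} dt
    m1 = h * e / iw - (e - 1.0) / iw**2   # int t e^{i omega t} dt
    f0, fh = f(0.0), f(h)
    return f0 * m0 + (fh - f0) / h * m1

# Reference via a dense trapezoid rule; error shrinks as omega grows.
f, h = (lambda t: np.cos(t)), 1.0
for omega in [10.0, 100.0, 1000.0]:
    t = np.linspace(0.0, h, 200_001)
    ref = np.trapz(f(t) * np.exp(1j * omega * t), t)
    print(omega, abs(filon_linear(f, omega, h) - ref))
```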

We design a fully implementable scheme to compute the invariant distribution of ergodic McKean-Vlasov SDEs satisfying a uniform confluence property. Under natural conditions, we prove various convergence results; notably, we obtain rates for the Wasserstein distance in quadratic mean and in the almost sure sense.
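A generic sketch (ours, not the paper's scheme) of how such an invariant distribution is typically approximated: an interacting-particle Euler discretization in which the empirical measure replaces the law of the process, with positions pooled over time after burn-in.

```python
import numpy as np

# dX_t = b(X_t, mu_t) dt + sigma dW_t, mu_t = law(X_t); here the toy
# confluent drift b(x, mu) = -(x - mean(mu)) - x couples particles
# through the first moment of the empirical law.
rng = np.random.default_rng(4)
n_particles, dt, n_steps, burn_in, sigma = 500, 0.01, 20_000, 10_000, 1.0

X = rng.normal(size=n_particles)
pool = []
for k in range(n_steps):
    m = X.mean()                                  # empirical-measure moment
    X = X + (-(X - m) - X) * dt \
        + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
    if k >= burn_in and k % 10 == 0:              # thin the occupation measure
        pool.append(X.copy())

pool = np.concatenate(pool)
print("invariant mean/variance ~", pool.mean(), pool.var())
```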

Learning tasks play an increasingly prominent role in quantum information and computation. They range from fundamental problems such as state discrimination and metrology, through the framework of quantum probably approximately correct (PAC) learning, to the recently proposed shadow variants of state tomography. However, the many directions of quantum learning theory have so far evolved separately. We propose a general mathematical formalism for describing quantum learning by training on classical-quantum data and then testing how well the learned hypothesis generalizes to new data. In this framework, we prove bounds on the expected generalization error of a quantum learner in terms of classical and quantum information-theoretic quantities measuring how strongly the learner's hypothesis depends on the specific data seen during training. To achieve this, we use tools from quantum optimal transport and quantum concentration inequalities to establish non-commutative versions of decoupling lemmas that underlie recent information-theoretic generalization bounds for classical machine learning. Our framework encompasses and gives intuitively accessible generalization bounds for a variety of quantum learning scenarios such as quantum state discrimination, PAC learning quantum states, quantum parameter estimation, and quantumly PAC learning classical functions. Thereby, our work lays a foundation for a unifying quantum information-theoretic perspective on quantum learning.
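For orientation, one classical prototype of the bounds being generalized is the mutual-information bound of Xu and Raginsky (2017), stated here for a sigma-sub-Gaussian loss:

```latex
\bigl|\,\mathbb{E}\!\left[\operatorname{gen}(W,S)\right]\bigr|
\;\le\; \sqrt{\frac{2\sigma^{2}\, I(W;S)}{n}},
```

where W is the learned hypothesis, S the n-sample training set, and I(W;S) their mutual information; the paper's non-commutative decoupling lemmas extend bounds of this type to classical-quantum data and quantum learners.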

We present a method to generate contingency tables that follow loglinear models with prescribed marginal probabilities and dependence structures. We make use of (loglinear) Poisson regression, where the dependence structures, described using odds ratios, are implemented using an offset term. We apply this methodology to carry out simulation studies in the context of population size estimation using dual system and triple system estimators, popular in official statistics. These estimators use contingency tables that summarise the counts of elements enumerated or captured within lists that are linked. The simulation is used to investigate these estimators both when the model assumptions are fulfilled and when they are violated.
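A hedged sketch of the simulation idea for the dual-system case: rather than the paper's Poisson-regression-with-offset route, the equivalent 2x2 cell probabilities with a prescribed odds ratio are built directly via Plackett's formula, and the Petersen (dual-system) estimator is evaluated both when its independence assumption holds and when it is violated.

```python
import numpy as np

def plackett_cell_probs(pA, pB, psi):
    """2x2 cell probabilities with marginals pA, pB and odds ratio psi
    (Plackett's construction); psi = 1 recovers independence."""
    if abs(psi - 1.0) < 1e-12:
        p11 = pA * pB
    else:
        S = 1.0 + (pA + pB) * (psi - 1.0)
        p11 = (S - np.sqrt(S * S - 4.0 * psi * (psi - 1.0) * pA * pB)) \
              / (2.0 * (psi - 1.0))
    return np.array([[p11, pA - p11],
                     [pB - p11, 1.0 - pA - pB + p11]])

rng = np.random.default_rng(5)
N = 10_000                          # true population size
for psi in [1.0, 3.0]:              # independence vs. violated assumption
    P = plackett_cell_probs(pA=0.6, pB=0.5, psi=psi)
    counts = rng.multinomial(N, P.ravel()).reshape(2, 2)
    n11 = counts[0, 0]              # captured by both lists
    n1 = counts[0].sum()            # captured by list 1
    n2 = counts[:, 0].sum()         # captured by list 2
    print(f"psi={psi}: Petersen estimate {n1 * n2 / n11:.0f} (truth {N})")
```

Positive list dependence (psi > 1) inflates the overlap cell and biases the Petersen estimator downward, which is exactly the kind of assumption violation such a simulation study probes.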

We use Stein characterisations to derive new moment-type estimators for the parameters of several truncated multivariate distributions in the i.i.d. case; we also derive the asymptotic properties of these estimators. Our examples include the truncated multivariate normal distribution and truncated products of independent univariate distributions. The estimators are explicit and therefore provide an interesting alternative to the maximum-likelihood estimator (MLE). The quality of these estimators is assessed through competitive simulation studies, in which we compare their behaviour to the performance of the MLE and the score matching approach.
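To illustrate the mechanism in the simplest (univariate) case, here is a sketch of our own: for a normal distribution truncated to [a, b], integration by parts gives the Stein-type identity sigma^2 E[f'(X)] + mu E[f(X)] = E[X f(X)] for any smooth f vanishing at the boundary, so two test functions yield an explicit 2x2 linear system for (mu, sigma^2). The multivariate estimators of the paper follow the same pattern but are not reproduced here.

```python
import numpy as np

def stein_estimator_truncnorm(x, a, b):
    """Explicit moment-type estimator for (mu, sigma^2) of a normal
    truncated to [a, b], from two polynomial test functions that
    vanish at the truncation boundary."""
    f1 = (x - a) * (b - x)
    df1 = a + b - 2.0 * x
    f2 = x * f1
    df2 = f1 + x * df1
    A = np.array([[np.mean(df1), np.mean(f1)],
                  [np.mean(df2), np.mean(f2)]])
    rhs = np.array([np.mean(x * f1), np.mean(x * f2)])
    sigma2, mu = np.linalg.solve(A, rhs)
    return mu, sigma2

# Check on samples from N(1, 2^2) truncated to [0, 3] (rejection sampling)
rng = np.random.default_rng(6)
z = rng.normal(1.0, 2.0, size=200_000)
x = z[(z >= 0.0) & (z <= 3.0)]
print(stein_estimator_truncnorm(x, a=0.0, b=3.0))   # ~ (1.0, 4.0)
```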
