
In the present work, we provide the general expression of the normalized centered moments of the Fr\'echet extreme-value distribution. In order to represent a set of data corresponding to rare events by a Fr\'echet distribution, it is important to be able to determine its characteristic parameter $\alpha$. This parameter can be deduced from the variance (proportional to the square of the Full Width at Half Maximum) of the studied distribution; however, the corresponding equation must be solved numerically. We propose two simple estimates of $\alpha$ from the knowledge of the variance, based on the Laurent series of the Gamma function. The most accurate expression involves the Ap\'ery constant.
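
To make the estimation problem concrete, here is a minimal numerical sketch (not the paper's closed-form Laurent-series estimates): for a unit-scale, zero-location Fr\'echet distribution the variance equals $\Gamma(1-2/\alpha)-\Gamma(1-1/\alpha)^2$ for $\alpha>2$, so $\alpha$ can be recovered from a given variance by root finding.

```python
# A minimal sketch, assuming unit scale and zero location: recover the Frechet
# shape alpha numerically from a known variance via
# Var(X) = Gamma(1 - 2/alpha) - Gamma(1 - 1/alpha)**2, valid for alpha > 2.
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def frechet_variance(alpha):
    """Variance of a unit-scale Frechet distribution with shape alpha > 2."""
    return gamma(1.0 - 2.0 / alpha) - gamma(1.0 - 1.0 / alpha) ** 2

def alpha_from_variance(v, lo=2.0 + 1e-9, hi=1e6):
    """Solve frechet_variance(alpha) = v; the variance decreases in alpha, so a bracket works."""
    return brentq(lambda a: frechet_variance(a) - v, lo, hi)

if __name__ == "__main__":
    alpha_true = 4.2
    v = frechet_variance(alpha_true)
    print(alpha_from_variance(v))   # ~4.2
```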

Related content

The high efficiency of a recently proposed method for computing with Gaussian processes relies on expanding a (translationally invariant) covariance kernel into complex exponentials, with frequencies lying on a Cartesian equispaced grid. Here we provide rigorous error bounds for this approximation for two popular kernels -- Mat\'ern and squared exponential -- in terms of the grid spacing and size. The kernel error bounds are uniform over a hypercube centered at the origin. Our tools include a split into aliasing and truncation errors, and bounds on sums of Gaussians or modified Bessel functions over various lattices. For the Mat\'ern case, motivated by numerical study, we conjecture a stronger Frobenius-norm bound on the covariance matrix error for randomly distributed data points. Lastly, we prove bounds on, and study numerically, the ill-conditioning of the linear systems arising in such regression problems.
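
As a minimal one-dimensional illustration of the expansion in question (the length scale, grid spacing, and grid size below are illustrative assumptions, not values from the paper), one can sample the spectral density of a squared-exponential kernel on an equispaced frequency grid and check the resulting uniform error on an interval centered at the origin:

```python
# A minimal 1-D sketch: expand a squared-exponential kernel into complex
# exponentials with frequencies on an equispaced grid, then measure the
# uniform error on [-1, 1].  The residual error splits into aliasing
# (periodization) and truncation (finite grid) contributions.
import numpy as np

ell = 0.5                        # kernel length scale
h, m = 0.6, 40                   # frequency grid spacing and half-size (assumptions)
freqs = h * np.arange(-m, m + 1)

# Spectral density of k(r) = exp(-r^2/(2*ell^2)) under k(r) = int g(w) exp(i*w*r) dw
g = (ell / np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * (ell * freqs) ** 2)
weights = h * g                  # quadrature weights on the equispaced grid

r = np.linspace(-1.0, 1.0, 2001)
k_true = np.exp(-r ** 2 / (2.0 * ell ** 2))
k_approx = (weights[None, :] * np.exp(1j * np.outer(r, freqs))).sum(axis=1).real

print("max error on [-1, 1]:", np.abs(k_true - k_approx).max())
```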

This paper focuses on a challenging class of inverse problems that is often encountered in applications. The forward model is a complex non-linear black-box, potentially non-injective, whose outputs cover multiple decades in amplitude. Observations are assumed to be simultaneously damaged by additive and multiplicative noise and by censoring. As needed in many applications, the aim of this work is to provide uncertainty quantification on top of parameter estimates. The resulting log-likelihood is intractable and potentially non-log-concave; even when it is smooth, its gradient admits a Lipschitz constant too large to be exploited in the inference process. An adapted Bayesian approach is proposed to provide credibility intervals along with point estimates. An MCMC algorithm is proposed to deal with the multimodal posterior distribution, even in situations where there is no global Lipschitz constant (or it is very large). It combines two kernels, namely an improved version of the Preconditioned Metropolis-Adjusted Langevin Algorithm (PMALA) and a Multiple-Try Metropolis (MTM) kernel. This sampler addresses all the challenges induced by the complex form of the likelihood. The proposed method is illustrated on classical multimodal test distributions as well as on a challenging and realistic inverse problem in astronomy.
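
For readers unfamiliar with the Langevin kernel involved, below is a minimal sketch of a plain Metropolis-adjusted Langevin (MALA) step, the building block that preconditioned variants such as PMALA refine; it is not the paper's improved sampler, and the Gaussian target in the demo is only for illustration.

```python
# A minimal MALA step: Langevin drift proposal plus Metropolis-Hastings correction.
import numpy as np

def mala_step(x, log_target, grad_log_target, tau, rng):
    """One Metropolis-adjusted Langevin update with step size tau."""
    g_x = grad_log_target(x)
    prop = x + tau * g_x + np.sqrt(2.0 * tau) * rng.standard_normal(x.shape)
    g_p = grad_log_target(prop)
    # log proposal densities q(prop | x) and q(x | prop), up to a common constant
    log_q_fwd = -np.sum((prop - x - tau * g_x) ** 2) / (4.0 * tau)
    log_q_rev = -np.sum((x - prop - tau * g_p) ** 2) / (4.0 * tau)
    log_alpha = log_target(prop) - log_target(x) + log_q_rev - log_q_fwd
    if np.log(rng.uniform()) < log_alpha:
        return prop, True
    return x, False

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    log_pi = lambda x: -0.5 * np.sum(x ** 2)      # standard Gaussian target (toy)
    grad_log_pi = lambda x: -x
    x = np.zeros(2)
    for _ in range(1000):
        x, _ = mala_step(x, log_pi, grad_log_pi, 0.5, rng)
    print(x)
```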

A growing line of work shows how learned predictions can be used to break through worst-case barriers to improve the running time of an algorithm. However, incorporating predictions into data structures with strong theoretical guarantees remains underdeveloped. This paper takes a step in this direction by showing that predictions can be leveraged in the fundamental online list labeling problem. In the problem, $n$ items arrive over time and must be stored in sorted order in an array of size $\Theta(n)$. The array slot of an element is its label, and the goal is to maintain sorted order while minimizing the total number of elements moved (i.e., relabeled). We design a new list labeling data structure and bound its performance in two models. In the worst-case learning-augmented model, we give guarantees in terms of the error in the predictions. Our data structure provides strong guarantees: it is optimal for any prediction error and guarantees the best-known worst-case bound even when the predictions are entirely erroneous. We also consider a stochastic error model and bound the performance in terms of the expectation and variance of the error. Finally, the theoretical results are demonstrated empirically. In particular, we show that our data structure has strong performance on real temporal data sets where predictions are constructed from elements that arrived in the past, as is typically done in practice.
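
To fix ideas, the following toy sketch illustrates the problem setup itself (sorted items stored in an array with empty slots, every shift of a stored item counted as a relabel); it is deliberately naive and is not the learning-augmented data structure proposed in the paper.

```python
# A toy illustration of online list labeling: the array index of an item is its
# label, sorted order must be maintained, and displaced items count as relabels.
import bisect

class NaiveListLabeling:
    def __init__(self, capacity):
        self.slots = [None] * capacity   # array of size Theta(n); index = label
        self.relabels = 0

    def insert(self, item):
        occupied = [(v, i) for i, v in enumerate(self.slots) if v is not None]
        pos = bisect.bisect_left([v for v, _ in occupied], item)
        # target slot: just after the predecessor (or slot 0 if there is none)
        target = 0 if pos == 0 else occupied[pos - 1][1] + 1
        # shift successors right until a free slot absorbs the insertion
        j, carried = target, item
        while self.slots[j] is not None:
            self.slots[j], carried = carried, self.slots[j]
            self.relabels += 1           # each displaced stored item counts once
            j += 1
        self.slots[j] = carried

if __name__ == "__main__":
    arr = NaiveListLabeling(capacity=8)
    for x in [3, 1, 4, 2]:
        arr.insert(x)
    print(arr.slots, "relabels:", arr.relabels)
```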

It is well known that Fr\'echet means on non-Euclidean spaces may exhibit nonstandard asymptotic rates depending on curvature. Even for distributions featuring standard asymptotic rates, there are non-Euclidean effects altering finite-sample rates up to considerable sample sizes. These effects can be measured by the variance modulation function proposed by Pennec (2019). Among others, in view of statistical inference, it is important to bound this function on intervals of sample sizes. As a first step in this direction, for the special case of a K-spider we give such an interval, based only on folded moments and the total probabilities of the spider legs, and we illustrate the method by simulations.
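
For reference (our notation, not the paper's), the population and empirical Fr\'echet means of a random element $X$ of a metric space $(M, d)$ are
\[
\mu = \arg\min_{p \in M} \mathbb{E}\big[d^2(p, X)\big],
\qquad
\hat\mu_n = \arg\min_{p \in M} \frac{1}{n}\sum_{i=1}^{n} d^2(p, X_i),
\]
and the variance modulation function can be read, roughly, as the factor $\alpha_n$ in $\mathbb{E}\big[d^2(\hat\mu_n, \mu)\big] = \alpha_n\, \sigma^2 / n$ with $\sigma^2 = \mathbb{E}\big[d^2(\mu, X)\big]$, so that $\alpha_n \equiv 1$ recovers the Euclidean benchmark.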

Semi-supervised learning is being extensively applied to estimate classifiers from training data in which not all the labels of the feature vectors are available. We present gmmsslm, an R package for estimating the Bayes' classifier from such partially classified data in the case where the feature vector has a multivariate Gaussian (normal) distribution in each of the predefined classes. Our package implements a recently proposed Gaussian mixture modelling framework that incorporates a missingness mechanism for the missing labels, in which the probability of a missing label is represented via a logistic model with covariates that depend on the entropy of the feature vector. Under this framework, it has been shown that the Bayes' classifier formed from the Gaussian mixture model fitted to the partially classified training data can even have a lower error rate than the one estimated from a completely classified sample. This result was established in the particular case of two Gaussian classes with a common covariance matrix. Here, we focus on the effective implementation of an algorithm for multiple Gaussian classes with arbitrary covariance matrices. A strategy for initialising the algorithm is discussed and illustrated. The new package is demonstrated on some real data.
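
Schematically, and in our notation rather than the package's, the missingness mechanism described above takes the form
\[
\Pr\big(\text{label of } \mathbf{y}_j \text{ missing} \mid \mathbf{y}_j\big)
= \frac{\exp\{\xi_0 + \xi_1\, e(\mathbf{y}_j)\}}{1 + \exp\{\xi_0 + \xi_1\, e(\mathbf{y}_j)\}},
\]
where $e(\mathbf{y}_j)$ denotes an entropy-based statistic of the posterior class probabilities of the feature vector $\mathbf{y}_j$ under the fitted Gaussian mixture, and $(\xi_0, \xi_1)$ are logistic coefficients.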

Laplace approximation is a very useful tool in Bayesian inference, asserting that the posterior is nearly Gaussian. \cite{SpLaplace2022} established rather accurate finite-sample results about the quality of Laplace approximation in terms of the so-called effective dimension $p$ under the critical dimension constraint $p^{3} \ll n$. However, this condition can be too restrictive for many applications, such as error-in-operator problems or Deep Neural Networks. This paper addresses the question of whether the dimensionality condition can be relaxed and the accuracy of approximation improved when the target of estimation is low dimensional while the nuisance parameter is high or infinite dimensional. Under mild conditions, the marginal posterior can be approximated by a Gaussian mixture, and the accuracy of the approximation depends only on the target dimension. Under the condition $p^{2} \ll n$, or in some special situations such as semi-orthogonality, the Gaussian mixture can be replaced by a single Gaussian distribution, leading to a classical Laplace result. The second result greatly benefits from the recent advances in Gaussian comparison from \cite{GNSUl2017}. The results are illustrated and specified for the case of the error-in-operator model.
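
For context, the classical result being refined here approximates the full posterior by a Gaussian centered at the maximum a posteriori point with the inverse curvature as covariance,
\[
p(\theta \mid \mathrm{data}) \approx \mathcal{N}\Big(\hat\theta,\; \big(-\nabla^2 \log p(\hat\theta \mid \mathrm{data})\big)^{-1}\Big),
\qquad
\hat\theta = \arg\max_{\theta}\, \log p(\theta \mid \mathrm{data}),
\]
whereas the results above replace the single Gaussian by a Gaussian mixture for the marginal posterior of the low-dimensional target.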

Statistical analysis is increasingly confronted with complex data from metric spaces. Petersen and M\"uller (2019) established a general paradigm of Fr\'echet regression with complex metric-space-valued responses and Euclidean predictors. However, the local approach therein involves nonparametric kernel smoothing and suffers from the curse of dimensionality. To address this issue, we propose in this paper a novel random forest weighted local Fr\'echet regression paradigm. The main mechanism of our approach relies on a locally adaptive kernel generated by random forests. Our first method uses these weights as the local average to solve the conditional Fr\'echet mean, while the second method performs local linear Fr\'echet regression, both significantly improving existing Fr\'echet regression methods. Based on the theory of infinite-order U-processes and infinite-order $M_{m_n}$-estimators, we establish the consistency, rate of convergence, and asymptotic normality for our local constant estimator, which covers the current large-sample theory of random forests with Euclidean responses as a special case. Numerical studies show the superiority of our methods with several commonly encountered types of responses, such as distribution functions, symmetric positive-definite matrices, and sphere data. The practical merits of our proposals are also demonstrated through applications to human mortality distribution data and New York taxi data.
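
Schematically, in our notation, the local constant (first) estimator solves a weighted Fr\'echet mean problem in the response metric space $(\Omega, d)$,
\[
\hat m(x) = \arg\min_{\omega \in \Omega} \sum_{i=1}^{n} w_i(x)\, d^2\big(Y_i, \omega\big),
\]
where the weights $w_i(x)$ form the locally adaptive kernel generated by the random forest, typically recording how often $X_i$ shares a leaf with the query point $x$ across the trees.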

When persistence diagrams are formalized as the M\"obius inversion of the birth-death function, they naturally generalize to the multi-parameter setting and enjoy many of the key properties, such as stability, that we expect in applications. The direct definition in the 2-parameter setting, and the corresponding brute-force algorithm to compute them, require $\Omega(n^4)$ operations. But the size of the generalized persistence diagram, $C$, can be as low as linear (and as high as cubic). We elucidate a connection between the 2-parameter and the ordinary 1-parameter settings, which allows us to design an output-sensitive algorithm, whose running time is in $O(n^3 + Cn)$.
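
To make the M\"obius-inversion picture concrete in the ordinary 1-parameter setting (our notation, for an integer-indexed filtration with rank function $\mathrm{rk}(b, d) = \operatorname{rank}\big(H(K_b) \to H(K_d)\big)$), the multiplicity of a diagram point $(b, d)$ is obtained by inclusion-exclusion,
\[
\mu_{b,d} = \mathrm{rk}(b, d) - \mathrm{rk}(b-1, d) - \mathrm{rk}(b, d+1) + \mathrm{rk}(b-1, d+1),
\]
which can be read as a M\"obius inversion; the generalized 2-parameter diagram extends this inversion, with brute-force evaluation costing the $\Omega(n^4)$ operations mentioned above.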

Lattice Boltzmann schemes are efficient numerical methods for solving a broad range of problems in the form of conservation laws. However, they suffer from a chronic lack of clear theoretical foundations. In particular, the consistency analysis and the derivation of the modified equations are still open issues. This has so far prevented the establishment of an analogue of the Lax equivalence theorem for lattice Boltzmann schemes. We propose a rigorous consistency study and the derivation of the modified equations for any lattice Boltzmann scheme under acoustic and diffusive scalings. This is done by passing from a kinetic (lattice Boltzmann) to a macroscopic (Finite Difference) point of view at a fully discrete level in order to eliminate the non-conserved moments relaxing away from the equilibrium. We rewrite the lattice Boltzmann scheme as a multi-step Finite Difference scheme on the conserved variables, as introduced in our previous contribution. We then perform the usual analyses for Finite Difference schemes by exploiting their precise characterization using matrices of Finite Difference operators. Although we present the derivation of the modified equations up to second order under acoustic scaling, we provide all the elements needed to extend it to higher orders, since the kinetic-macroscopic connection is conducted at the fully discrete level. Finally, we show that our strategy yields, in a more rigorous setting, the same results as previous works in the literature.
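
As a minimal, self-contained illustration of the kinetic viewpoint (a single D1Q2 scheme for linear advection with illustrative parameter values, not the paper's general framework), the sketch below advances one conserved moment and one relaxing moment; eliminating the relaxing moment from such a scheme is what yields the equivalent multi-step Finite Difference scheme on the conserved variable.

```python
# A minimal D1Q2 lattice Boltzmann sketch for d_t rho + V d_x rho = 0:
# one conserved moment (rho) and one non-conserved moment (the flux q)
# relaxing toward its equilibrium V*rho, followed by exact streaming.
import numpy as np

nx = 200
dx = 1.0 / nx
lam = 1.0                       # lattice velocity dx/dt
dt = dx / lam
V, s = 0.5, 1.5                 # advection speed (|V| <= lam) and relaxation rate in (0, 2]

x = (np.arange(nx) + 0.5) * dx
rho = np.exp(-200.0 * (x - 0.3) ** 2)    # initial bump on the periodic domain [0, 1)
q = V * rho                               # start the flux at equilibrium
mass0 = rho.sum()

for _ in range(200):                      # final time 200*dt = 1, so the bump moves by V*t = 0.5
    # collision in moment space: rho is conserved, q relaxes toward q_eq = V*rho
    q = (1.0 - s) * q + s * V * rho
    # map back to the two populations and stream them one cell right/left
    f_plus = 0.5 * (rho + q / lam)
    f_minus = 0.5 * (rho - q / lam)
    f_plus = np.roll(f_plus, 1)
    f_minus = np.roll(f_minus, -1)
    rho = f_plus + f_minus
    q = lam * (f_plus - f_minus)

print("mass conserved:", np.isclose(rho.sum(), mass0))
print("bump center  ~", x[np.argmax(rho)])   # advected from 0.3 to about 0.8
```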

Causal discovery and causal reasoning are classically treated as separate and consecutive tasks: one first infers the causal graph, and then uses it to estimate causal effects of interventions. However, such a two-stage approach is uneconomical, especially in terms of actively collected interventional data, since the causal query of interest may not require a fully-specified causal model. From a Bayesian perspective, it is also unnatural, since a causal query (e.g., the causal graph or some causal effect) can be viewed as a latent quantity subject to posterior inference -- other unobserved quantities that are not of direct interest (e.g., the full causal model) ought to be marginalized out in this process and contribute to our epistemic uncertainty. In this work, we propose Active Bayesian Causal Inference (ABCI), a fully Bayesian active learning framework for integrated causal discovery and reasoning, which jointly infers a posterior over causal models and queries of interest. In our approach to ABCI, we focus on the class of causally sufficient, nonlinear additive noise models, which we model using Gaussian processes. We sequentially design experiments that are maximally informative about our target causal query, collect the corresponding interventional data, and update our beliefs to choose the next experiment. Through simulations, we demonstrate that our approach is more data-efficient than several baselines that only focus on learning the full causal graph. This allows us to accurately learn downstream causal queries from fewer samples while providing well-calibrated uncertainty estimates for the quantities of interest.
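
The loop below is a deliberately tiny sketch of the design-experiment-update cycle described above, over a two-graph hypothesis space with hand-picked Bernoulli mechanisms; it is an assumption-laden toy, not the paper's Gaussian-process-based ABCI implementation.

```python
# Toy active Bayesian causal discovery over two graphs: pick the intervention
# with the largest expected information gain about the graph, observe, update.
import numpy as np

FLIP = 0.1   # assumed mechanism: the child copies its parent, flipped with probability FLIP

def outcome_dist(graph, intervened):
    """P(non-intervened variable = 1) after forcing the intervened variable to 1."""
    if (graph, intervened) in [("X->Y", "X"), ("Y->X", "Y")]:
        return 1.0 - FLIP            # we intervened on the parent, so the child responds
    return 0.5                       # we intervened on the child; the parent stays exogenous

def entropy(p):
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def expected_info_gain(posterior, intervened):
    """Mutual information between the graph and the binary experimental outcome."""
    marginal = sum(posterior[g] * outcome_dist(g, intervened) for g in posterior)
    conditional = sum(posterior[g] * entropy(outcome_dist(g, intervened)) for g in posterior)
    return entropy(marginal) - conditional

rng = np.random.default_rng(0)
true_graph = "X->Y"
posterior = {"X->Y": 0.5, "Y->X": 0.5}

for _ in range(10):
    # design: choose the intervention most informative about the causal graph
    target = max(["X", "Y"], key=lambda v: expected_info_gain(posterior, v))
    # experiment: observe the other variable under the true data-generating graph
    obs = rng.uniform() < outcome_dist(true_graph, target)
    # inference: Bayes update of the posterior over graphs
    for g in posterior:
        lik = outcome_dist(g, target) if obs else 1.0 - outcome_dist(g, target)
        posterior[g] *= lik
    total = sum(posterior.values())
    posterior = {g: p / total for g, p in posterior.items()}

print(posterior)   # mass typically concentrates on the true graph "X->Y"
```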
