
Data sets of multivariate normal distributions abound in many scientific areas such as diffusion tensor imaging, structure tensor computer vision, radar signal processing, and machine learning, to name a few. In order to process those normal data sets for downstream tasks like filtering, classification, or clustering, one needs to define proper notions of dissimilarities between normals and paths joining them. The Fisher-Rao distance, defined as the Riemannian geodesic distance induced by the Fisher information metric, is such a principled metric distance; however, it is not known in closed form except in a few particular cases. In this work, we first report a fast and robust method to approximate arbitrarily finely the Fisher-Rao distance between multivariate normal distributions. Second, we introduce a class of distances based on diffeomorphic embeddings of the normal manifold into a submanifold of the higher-dimensional symmetric positive-definite cone corresponding to the manifold of centered normal distributions. We show that the projective Hilbert distance on the cone yields a metric on the embedded normal submanifold, and we pull back that cone distance with its associated straight-line Hilbert cone geodesics to obtain a distance and smooth paths between normal distributions. Compared to the Fisher-Rao distance approximation, the pullback Hilbert cone distance is computationally light, since it only requires computing the extreme minimal and maximal eigenvalues of matrices. Finally, we show how to use these distances in clustering tasks.
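A minimal sketch of the pulled-back cone distance described above, assuming a Calvo-Oller-type embedding $N(\mu,\Sigma) \mapsto \begin{pmatrix}\Sigma+\mu\mu^\top & \mu\\ \mu^\top & 1\end{pmatrix}$ (the paper's exact embedding and normalization may differ); only the extreme generalized eigenvalues are needed, as the abstract notes:

```python
import numpy as np
from scipy.linalg import eigh

def embed_normal(mu, Sigma):
    """Map N(mu, Sigma) to an SPD matrix of order d+1 (Calvo-Oller-type map)."""
    d = len(mu)
    P = np.empty((d + 1, d + 1))
    P[:d, :d] = Sigma + np.outer(mu, mu)
    P[:d, d] = mu
    P[d, :d] = mu
    P[d, d] = 1.0
    return P

def hilbert_cone_distance(P, Q):
    """Hilbert projective distance on the SPD cone: log of the ratio of the
    extreme generalized eigenvalues of the pencil (P, Q)."""
    lam = eigh(P, Q, eigvals_only=True)  # eigenvalues of Q^{-1} P, ascending
    return np.log(lam[-1] / lam[0])

mu0, S0 = np.zeros(2), np.eye(2)
mu1, S1 = np.array([1.0, 0.0]), np.diag([2.0, 0.5])
print(hilbert_cone_distance(embed_normal(mu0, S0), embed_normal(mu1, S1)))
```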

Related content

Nowadays, numerical models are widely used in most engineering fields to simulate the behaviour of complex systems, such as power plants or wind turbines in the energy sector. Those models are nevertheless affected by uncertainties of different natures (numerical, epistemic) which can affect the reliability of their predictions. We develop here a new method for quantifying conditional parameter uncertainty within a chain of two numerical models in the context of multiphysics simulation. More precisely, we aim to calibrate the parameters $\theta$ of the second model of the chain conditionally on the value of the parameters $\lambda$ of the first model, while assuming the probability distribution of $\lambda$ is known. This conditional calibration is carried out from the available experimental data of the second model. In doing so, we aim to quantify as well as possible the impact of the uncertainty of $\lambda$ on the uncertainty of $\theta$. To perform this conditional calibration, we set out a nonparametric Bayesian formalism to estimate the functional dependence between $\theta$ and $\lambda$, denoted by $\theta(\lambda)$. First, each component of $\theta(\lambda)$ is assumed to be the realization of a Gaussian process prior. Then, if the second model is written as a linear function of $\theta(\lambda)$, the Bayesian machinery allows us to compute analytically the posterior predictive distribution of $\theta(\lambda)$ for any set of realizations of $\lambda$. The effectiveness of the proposed method is illustrated on several analytical examples.
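A toy illustration of the conjugate step above, for a scalar $\theta(\lambda)$ observed through a linear second model $y_i = a_i\,\theta(\lambda_i) + \varepsilon_i$; the coefficient function, kernel, and data are all invented for the example, and the closed-form posterior follows standard Gaussian algebra:

```python
import numpy as np

def rbf(x, y, ell=0.5):
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(0)
lam = rng.uniform(0, 1, 20)            # realizations of the first-model parameter
a = 1.0 + lam                          # known linear coefficients of the second model
theta_true = np.sin(2 * np.pi * lam)   # hidden functional dependence theta(lambda)
sigma = 0.05
y = a * theta_true + sigma * rng.normal(size=lam.size)

# Conjugate Gaussian algebra: posterior of theta(lam*) given linear observations.
C = np.outer(a, a) * rbf(lam, lam) + sigma ** 2 * np.eye(lam.size)  # cov of y
lam_star = np.linspace(0, 1, 100)
cross = rbf(lam_star, lam) * a[None, :]                             # cov(theta*, y)
post_mean = cross @ np.linalg.solve(C, y)
post_var = rbf(lam_star, lam_star).diagonal() - np.einsum(
    'ij,ij->i', cross, np.linalg.solve(C, cross.T).T)
```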

When dealing with a parametric statistical model, a Riemannian manifold naturally appears by endowing the parameter space with the Fisher information metric. The geometry induced on the parameters by this metric is then referred to as the Fisher-Rao information geometry. Interestingly, this yields a point of view that allows for leveraging many tools from differential geometry. After a brief introduction to these concepts, we present some practical uses of these geometric tools in the framework of elliptical distributions. This second part of the exposition is divided into three main axes: Riemannian optimization for covariance matrix estimation, intrinsic Cram\'er-Rao bounds, and classification using Riemannian distances.
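For the classification axis, the Fisher-Rao distance between centered Gaussians is available in closed form (the affine-invariant distance on the SPD cone, here with the $1/2$ factor of one common convention) and can serve, for instance, as the metric of a nearest-centroid classifier:

```python
import numpy as np
from scipy.linalg import eigh

def fisher_rao_centered(S1, S2):
    """Fisher-Rao distance between N(0, S1) and N(0, S2):
    sqrt(1/2 * sum_i log^2 lambda_i), with lambda_i the eigenvalues
    of the pencil (S1, S2), i.e. of S2^{-1} S1."""
    lam = eigh(S1, S2, eigvals_only=True)
    return np.sqrt(0.5 * np.sum(np.log(lam) ** 2))

print(fisher_rao_centered(np.eye(2), np.diag([4.0, 0.25])))
```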

Preference-based optimization algorithms are iterative procedures that seek the optimal calibration of a decision vector based only on comparisons between pairs of different tunings. At each iteration, a human decision-maker expresses a preference between two calibrations (samples), highlighting which one, if any, is better than the other. The optimization procedure must use the observed preferences to find the tuning of the decision vector that is most preferred by the decision-maker, while also minimizing the number of comparisons. In this work, we formulate the preference-based optimization problem from a utility theory perspective. Then, we propose GLISp-r, an extension of a recent preference-based optimization procedure called GLISp. The latter uses a Radial Basis Function surrogate to describe the tastes of the decision-maker. Iteratively, GLISp proposes new samples to compare with the best calibration available by trading off exploitation of the surrogate model and exploration of the decision space. In GLISp-r, we propose a different criterion to use when looking for new candidate samples that is inspired by MSRS, a popular procedure in the black-box optimization framework. Compared to GLISp, GLISp-r is less likely to get stuck on local optima of the preference-based optimization problem. We motivate this claim theoretically, with a proof of global convergence, and empirically, by comparing the performance of GLISp and GLISp-r on several benchmark optimization problems.
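A toy sketch of the two ingredients named above: an RBF surrogate fitted so that preferred samples score lower, and an acquisition that trades off the surrogate against an inverse-distance-weighted exploration bonus. The hinge loss, kernel, and exploration term are simplified stand-ins; GLISp's actual fitting problem and acquisition differ in detail:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def phi(r, eps=2.0):
    return np.exp(-(eps * r) ** 2)  # Gaussian RBF

X = np.array([[0.1], [0.4], [0.7], [0.9]])   # evaluated decision vectors
prefs = [(1, 0), (1, 2), (2, 3)]             # (preferred, other) index pairs
Phi = phi(cdist(X, X))

def loss(beta, margin=1.0, reg=1e-3):
    """Hinge-type loss: push s(x_winner) below s(x_loser) by a margin."""
    s = Phi @ beta
    hinge = sum(max(0.0, s[w] - s[l] + margin) for w, l in prefs)
    return hinge + reg * beta @ beta

beta = minimize(loss, np.zeros(len(X)), method="Nelder-Mead").x

def acquisition(x, delta=1.0):
    """Surrogate value minus an IDW exploration bonus (larger far from data)."""
    r = cdist(np.atleast_2d(x), X).ravel()
    s = phi(r) @ beta
    explore = np.arctan(1.0 / np.sum(1.0 / (r ** 2 + 1e-12)))
    return s - delta * explore   # minimize over x to pick the next sample
```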

Semitopologies model consensus in distributed systems by equating the notion of a quorum -- a set of participants sufficient to make local progress -- with that of an open set. This yields a topology-like theory of consensus, but semitopologies generalise topologies, since the intersection of two quorums need not necessarily be a quorum. The semitopological model of consensus is naturally heterogeneous and local, just like topologies can be heterogeneous and local, and for the same reasons: points may have different quorums and there is no restriction that open sets / quorums be uniformly generated (e.g. open sets can be something other than two-thirds majorities of the points in the space). Semiframes are an algebraic abstraction of semitopologies. They are to semitopologies as frames are to topologies. We give a notion of semifilter, which plays a role analogous to filters, and show how to build a semiframe out of the open sets of a semitopology, and a semitopology out of the semifilters of a semiframe. We define suitable notions of category and morphism and prove a categorical duality between (sober) semiframes and (spatial) semitopologies, and investigate well-behavedness properties on semitopologies and semiframes across the duality. Surprisingly, the structure of semiframes is not what one might initially expect just from looking at semitopologies, and the canonical structure required for the duality result -- a compatibility relation *, generalising set intersection -- is also canonical for expressing well-behavedness properties. Overall, we deliver a new categorical, algebraic, abstract framework within which to study consensus on distributed systems, and which is also simply interesting to consider as a mathematical theory in its own right.
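The defining asymmetry is easy to exhibit on a finite example: a family of open sets (quorums) closed under unions but not under intersections, hence a semitopology that is not a topology:

```python
# A finite semitopology on points {0, 1, 2}.
opens = [set(), {0, 1}, {1, 2}, {0, 1, 2}]

def union_closed(opens):
    return all(a | b in opens for a in opens for b in opens)

def intersection_closed(opens):
    return all(a & b in opens for a in opens for b in opens)

print(union_closed(opens))         # True  -> a semitopology
print(intersection_closed(opens))  # False -> {0,1} & {1,2} = {1} is not open
```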

We give a fully polynomial-time randomized approximation scheme (FPRAS) for two-terminal reliability in directed acyclic graphs.

We consider the problem of testing the fit of a sample of items from many categories to the uniform distribution over the categories. As a class of alternative hypotheses, we consider the removal of an $\ell_p$ ball of radius $\epsilon$ around the uniform rate sequence for $p \leq 2$. When the number of samples $n$ and number of categories $N$ go to infinity while $\epsilon$ goes to zero, the minimax risk $R_\epsilon^*$ in testing based on the sample's histogram (number of absent categories, singletons, collisions, ...) asymptotes to $2\Phi(-n N^{2-2/p} \epsilon^2/\sqrt{8N})$, with $\Phi(x)$ the normal CDF. This characterization allows comparing the many estimators previously proposed for this problem at the constant level rather than at the rate of convergence of their risks. The minimax test mostly relies on collisions when $n/N$ is small, but otherwise behaves like the chi-squared test. Empirical studies over a range of problem parameters show that this estimate is accurate in finite samples and that our test is significantly better than the chi-squared test or a test that only uses collisions. Our analysis relies on the asymptotic normality of histogram ordinates, the equivalence between the minimax setting and a Bayesian setting, and the characterization of the least favorable prior by reducing a multi-dimensional optimization problem to a one-dimensional problem.
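A minimal sketch of the two histogram-based statistics discussed above, in the sparse regime $n < N$; the decision thresholds and the minimax-optimal combination of histogram ordinates are omitted:

```python
import numpy as np

def uniformity_statistics(sample, N):
    """Chi-squared statistic and collision count from the sample's histogram."""
    counts = np.bincount(sample, minlength=N)
    n = counts.sum()
    chisq = np.sum((counts - n / N) ** 2) / (n / N)
    collisions = np.sum(counts * (counts - 1)) // 2  # number of colliding pairs
    return chisq, collisions

rng = np.random.default_rng(0)
N, n = 10_000, 2_000                  # many categories, comparatively few samples
print(uniformity_statistics(rng.integers(0, N, n), N))
```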

This paper presents a physics and data co-driven surrogate modeling method for efficient rare event simulation of civil and mechanical systems with high-dimensional input uncertainties. The method fuses interpretable low-fidelity physical models with data-driven error corrections. The hypothesis is that a well-designed and well-trained simplified physical model can preserve salient features of the original model, while data-fitting techniques can fill the remaining gaps between the surrogate and original model predictions. The coupled physics-data-driven surrogate model is adaptively trained using active learning, aiming to achieve a high correlation and small bias between the surrogate and original model responses in the critical parametric region of a rare event. A final importance sampling step is introduced to correct the surrogate model-based probability estimates. Static and dynamic problems with input uncertainties modeled by random fields and stochastic processes are studied to demonstrate the proposed method.
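A schematic of the final importance-sampling correction, under the simplifying assumption that the true failure region is contained in the surrogate's predicted one; the limit states and all names here are invented for illustration, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical limit states; failure = {g <= 0}. g_orig stands in for the
# expensive model, g_sur for the trained surrogate (slightly biased).
g_orig = lambda x: 3.0 - x.sum(axis=1) / np.sqrt(x.shape[1])
g_sur  = lambda x: 3.1 - x.sum(axis=1) / np.sqrt(x.shape[1])

d, n = 10, 100_000
x = rng.normal(size=(n, d))            # samples from the nominal input density
fail_sur = g_sur(x) <= 0.0
p_sur = fail_sur.mean()                # cheap surrogate-based estimate

# Correction: re-evaluate the original model only where the surrogate fails,
# i.e. p = P(sur fails) * P(orig fails | sur fails).
idx = np.flatnonzero(fail_sur)
ratio = (g_orig(x[idx]) <= 0.0).mean() if idx.size else 0.0
p_corrected = p_sur * ratio
print(p_sur, p_corrected)
```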

It has been well known since the 1960s that, by exploiting the tensor-product structure of the discrete Laplacian on Cartesian meshes, one can develop a simple direct Poisson solver with $\mathcal O(N^{\frac{d+1}{d}})$ complexity in $d$ dimensions. GPU acceleration of numerical PDE solvers was explored successfully around fifteen years ago and has become more and more popular over the past decade, driven by significant advances in both hardware and software technologies, especially in the last few years. We present in this paper a simple but extremely fast MATLAB implementation on a modern GPU, which can be easily reproduced, for solving 3D Poisson-type equations using a spectral-element method. In particular, it takes less than one second on an Nvidia A100 to solve a Poisson equation with one billion degrees of freedom.
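The tensor-product idea is easy to reproduce. Here is a minimal 2D finite-difference analogue in Python (the paper's solver is a 3D spectral-element MATLAB code on GPU): diagonalizing the 1D operator turns the Sylvester form $AU + UA = F$ into an entrywise division, at $O(n^3) = O(N^{3/2})$ cost:

```python
import numpy as np

# -Laplace(u) = f on the unit square, homogeneous Dirichlet BC, 2nd-order FD.
n = 255
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2
lam, V = np.linalg.eigh(A)             # A = V diag(lam) V^T, V orthogonal

xs = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(xs, xs, indexing='ij')
F = 2 * np.pi ** 2 * np.sin(np.pi * X) * np.sin(np.pi * Y)  # exact u = sin*sin

# Solve A U + U A = F:  U = V ( (V^T F V) / (lam_i + lam_j) ) V^T
Ft = V.T @ F @ V
U = V @ (Ft / (lam[:, None] + lam[None, :])) @ V.T

err = np.abs(U - np.sin(np.pi * X) * np.sin(np.pi * Y)).max()
print(f"max error: {err:.2e}")         # O(h^2) accuracy
```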

We investigate a class of parametric elliptic semilinear partial differential equations of second order with homogeneous essential boundary conditions, where the coefficients and the right-hand side (and hence the solution) may depend on a parameter. This model can be seen as a reaction-diffusion problem with a polynomial nonlinearity in the reaction term. The efficiency of various numerical approximations across the entire parameter space is closely related to the regularity of the solution with respect to the parameter. We show that if the coefficients and the right-hand side are analytic or Gevrey-class regular with respect to the parameter, the same type of parametric regularity is valid for the solution. The key ingredient of the proof is the combination of the alternative-to-factorial technique from our previous work [1] with a novel argument for the treatment of the power-type nonlinearity in the reaction term. As an application of this abstract result, we obtain rigorous convergence estimates for numerical integration of semilinear reaction-diffusion problems with random coefficients using Gaussian and quasi-Monte Carlo quadrature. Our theoretical findings are confirmed in numerical experiments.
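A hedged sketch of the quasi-Monte Carlo side of such an experiment, with the PDE solve replaced by a cheap analytic stand-in (the point here is the quadrature over the parameter space, not the solver):

```python
import numpy as np
from scipy.stats import qmc

def Q(y):
    """Stand-in for a parametric quantity of interest, smooth in y."""
    return 1.0 / (1.0 + 0.5 * np.mean(y, axis=1))

s, m = 8, 12                                # 8 parameters, 2^12 QMC points
sobol = qmc.Sobol(d=s, scramble=True, seed=0)
y = sobol.random_base2(m=m)                 # scrambled Sobol points in [0,1)^s
print(Q(y).mean())                          # QMC estimate of E[Q]
```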

Gaussian processes (GPs) are popular nonparametric statistical models for learning unknown functions and quantifying the spatiotemporal uncertainty in data. Recent works have extended GPs to model scalar and vector quantities distributed over non-Euclidean domains, including smooth manifolds appearing in numerous fields such as computer vision, dynamical systems, and neuroscience. However, these approaches assume that the manifold underlying the data is known, limiting their practical utility. We introduce RVGP, a generalisation of GPs for learning vector signals over latent Riemannian manifolds. Our method uses positional encoding with eigenfunctions of the connection Laplacian associated with the tangent bundle, readily derived from common graph-based approximations of data. We demonstrate that RVGP possesses global regularity over the manifold, which allows it to super-resolve and inpaint vector fields while preserving singularities. Furthermore, we use RVGP to reconstruct high-density neural dynamics derived from low-density EEG recordings in healthy individuals and Alzheimer's patients. We show that vector field singularities are important disease markers and that their reconstruction yields a classification accuracy for disease states comparable to that of high-density recordings. Thus, our method overcomes a significant practical limitation in experimental and clinical applications.
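A simplified stand-in for the positional-encoding step: RVGP uses eigenfunctions of the connection Laplacian on the tangent bundle, whereas this sketch uses plain graph-Laplacian eigenvectors of a kNN graph, only to illustrate the pipeline from point cloud to spectral features for a downstream GP kernel:

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)
pts = np.c_[np.cos(t), np.sin(t)]          # points sampled on a latent circle

W = kneighbors_graph(pts, n_neighbors=10, mode='connectivity')
W = 0.5 * (W + W.T)                        # symmetrize the kNN graph
L = laplacian(W, normed=True)

evals, evecs = np.linalg.eigh(L.toarray())
encodings = evecs[:, :16]                  # 16 low-frequency eigenvectors
# `encodings` would feed the kernel of a GP over the latent manifold.
```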
