
Information geometry is concerned with the application of differential geometry concepts in the study of the parametric spaces of statistical models. When the random variables are independent and identically distributed, the underlying parametric space exhibits constant curvature, which makes the geometry hyperbolic (negative) or spherical (positive). In this paper, we derive closed-form expressions for the components of the first and second fundamental forms of pairwise isotropic Gaussian-Markov random field manifolds, allowing the computation of the Gaussian, mean and principal curvatures. Computational simulations using Markov Chain Monte Carlo dynamics indicate that a change in the sign of the Gaussian curvature is related to the emergence of phase transitions in the field. Moreover, the curvatures are highly asymmetrical for positive and negative displacements in the inverse temperature parameter, suggesting the existence of irreversible geometric properties in the parametric space along the dynamics. Furthermore, these asymmetric changes in the curvature of the space induce an intrinsic notion of time in the evolution of the random field.
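
Below is a minimal numerical sketch of the differential-geometric machinery involved: given a generic parametric surface r(u, v), it estimates the first (E, F, G) and second (L, M, N) fundamental forms by finite differences and evaluates the Gaussian and mean curvatures K = (LN - M^2)/(EG - F^2) and H = (EN + GL - 2FM)/(2(EG - F^2)). It illustrates the general formulas only, not the paper's closed-form expressions for Gaussian-Markov random field manifolds.

```python
import numpy as np

def fundamental_forms(r, u, v, h=1e-4):
    # Central-difference estimates of the partial derivatives of r(u, v).
    ru  = (r(u + h, v) - r(u - h, v)) / (2 * h)
    rv  = (r(u, v + h) - r(u, v - h)) / (2 * h)
    ruu = (r(u + h, v) - 2 * r(u, v) + r(u - h, v)) / h**2
    rvv = (r(u, v + h) - 2 * r(u, v) + r(u, v - h)) / h**2
    ruv = (r(u + h, v + h) - r(u + h, v - h)
           - r(u - h, v + h) + r(u - h, v - h)) / (4 * h**2)
    n = np.cross(ru, rv)
    n /= np.linalg.norm(n)                       # unit normal
    E, F, G = ru @ ru, ru @ rv, rv @ rv          # first fundamental form
    L, M, N = ruu @ n, ruv @ n, rvv @ n          # second fundamental form
    return (E, F, G), (L, M, N)

def curvatures(r, u, v):
    (E, F, G), (L, M, N) = fundamental_forms(r, u, v)
    det1 = E * G - F**2
    K = (L * N - M**2) / det1                    # Gaussian curvature
    H = (E * N + G * L - 2 * F * M) / (2 * det1) # mean curvature
    return K, H

# Sanity check on a sphere of radius R: K = 1/R^2 and |H| = 1/R everywhere.
R = 2.0
sphere = lambda u, v: R * np.array([np.sin(u) * np.cos(v),
                                    np.sin(u) * np.sin(v),
                                    np.cos(u)])
print(curvatures(sphere, 0.7, 1.3))   # approximately (0.25, +/-0.5)
```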

Related content

We propose and analyze volume-preserving parametric finite element methods for surface diffusion, conserved mean curvature flow and an intermediate evolution law in an axisymmetric setting. The weak formulations are presented in terms of the generating curves of the axisymmetric surfaces. The proposed numerical methods are based on piecewise linear parametric finite elements. The constructed fully practical schemes conserve the enclosed volume. In addition, we prove the unconditional stability and consider the distribution of vertices for the discretized schemes. The introduced methods are implicit, and the resulting nonlinear systems of equations can be solved very efficiently and accurately via Newton's iterative method. Numerical results are presented to show the accuracy and efficiency of the introduced schemes for computing the considered axisymmetric geometric flows.
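
As a small illustration of one ingredient of such schemes, the sketch below computes the volume enclosed by an axisymmetric surface directly from its piecewise linear generating curve (r_i, z_i), integrating V = pi * int r^2 dz exactly on each segment; this is the quantity a volume-preserving scheme holds fixed between time steps. It is a hedged illustration, not the paper's discretization.

```python
import numpy as np

def enclosed_volume(r, z):
    """pi * integral of r^2 dz, exact for a piecewise linear generating curve."""
    r0, r1 = r[:-1], r[1:]
    return np.pi * np.sum(np.diff(z) * (r0**2 + r0 * r1 + r1**2) / 3.0)

# Sanity check: the generating curve of a sphere of radius R encloses 4/3 pi R^3.
R, n = 1.0, 2000
z = np.linspace(-R, R, n)
r = np.sqrt(np.maximum(R**2 - z**2, 0.0))
print(enclosed_volume(r, z), 4.0 / 3.0 * np.pi * R**3)
```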

Gaussian processes are machine learning models capable of learning unknown functions in a way that represents uncertainty, thereby facilitating construction of optimal decision-making systems. Motivated by a desire to deploy Gaussian processes in novel areas of science, a rapidly-growing line of research has focused on constructively extending these models to handle non-Euclidean domains, including Riemannian manifolds, such as spheres and tori. We propose techniques that generalize this class to model vector fields on Riemannian manifolds, which are important in a number of application areas in the physical sciences. To do so, we present a general recipe for constructing gauge independent kernels, which induce Gaussian vector fields, i.e., vector-valued Gaussian processes coherent with geometry, from scalar-valued Riemannian kernels. We extend standard Gaussian process training methods, such as variational inference, to this setting. This enables vector-valued Gaussian processes on Riemannian manifolds to be trained using standard methods and makes them accessible to machine learning practitioners.
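
The following sketch shows one plausible instance of the projected-kernel idea under our own simplifying assumptions: an ambient scalar RBF kernel k on the sphere S^2 is turned into a matrix-valued kernel K(x, y) = P(x) k(x, y) P(y) using the tangent-space projectors P(x) = I - x x^T, so that samples of the induced Gaussian process are tangent vector fields. The kernel choice and sampling routine are ours, not the paper's.

```python
import numpy as np

def proj(x):
    """Projector onto the tangent space of the unit sphere at x."""
    return np.eye(3) - np.outer(x, x)

def k_scalar(x, y, ell=0.5):
    """Ambient squared-exponential kernel (our illustrative choice)."""
    return np.exp(-np.sum((x - y)**2) / (2 * ell**2))

def vector_kernel(X):
    """Dense 3n x 3n covariance of the projected (tangential) process."""
    n = len(X)
    K = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(n):
            K[3*i:3*i+3, 3*j:3*j+3] = k_scalar(X[i], X[j]) * proj(X[i]) @ proj(X[j])
    return K

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # random points on S^2
w, V = np.linalg.eigh(vector_kernel(X))            # sample by eigendecomposition
f = (V * np.sqrt(np.clip(w, 0.0, None))) @ rng.normal(size=3 * len(X))
f = f.reshape(-1, 3)
print(np.max(np.abs(np.sum(f * X, axis=1))))       # ~0: samples are tangent
```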

We describe the first gradient methods on Riemannian manifolds to achieve accelerated rates in the non-convex case. Under Lipschitz assumptions on the Riemannian gradient and Hessian of the cost function, these methods find approximate first-order critical points faster than regular gradient descent. A randomized version also finds approximate second-order critical points. Both the algorithms and their analyses build extensively on existing work in the Euclidean case. The basic operation consists in running the Euclidean accelerated gradient descent method (appropriately safeguarded against non-convexity) in the current tangent space, then moving back to the manifold and repeating. This requires lifting the cost function from the manifold to the tangent space, which can be done for example through the Riemannian exponential map. For this approach to succeed, the lifted cost function (called the pullback) must retain certain Lipschitz properties. As a contribution of independent interest, we prove precise claims to that effect, with explicit constants. Those claims are affected by the Riemannian curvature of the manifold, which in turn affects the worst-case complexity bounds for our optimization algorithms.
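
Here is a hedged sketch of the basic operation described above, specialized to the unit sphere: lift the cost into the current tangent space through the exponential map (the pullback), run a Euclidean optimizer there, map the result back, and repeat. Plain gradient descent with numerical gradients stands in for the safeguarded accelerated method; step sizes and iteration counts are illustrative.

```python
import numpy as np

def exp_map(x, v):
    """Riemannian exponential map on the unit sphere."""
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

def tangent_basis(x):
    """Orthonormal basis of the tangent space at x (orthogonal complement of x)."""
    q, _ = np.linalg.qr(np.column_stack([x, np.eye(len(x))]))
    return q[:, 1:len(x)]

def minimize_on_sphere(f, x, outer=100, inner=25, step=0.05, h=1e-6):
    for _ in range(outer):
        B = tangent_basis(x)
        pull = lambda c: f(exp_map(x, B @ c))     # pullback of f to the tangent space
        c = np.zeros(B.shape[1])
        for _ in range(inner):                    # Euclidean gradient descent
            g = np.array([(pull(c + h * e) - pull(c - h * e)) / (2 * h)
                          for e in np.eye(len(c))])
            c -= step * g
        x = exp_map(x, B @ c)                     # move back to the manifold
    return x

# Example: minimize the Rayleigh quotient x^T A x over the sphere; the minimum
# is the smallest eigenvalue of A.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5)); A = A + A.T
x0 = rng.normal(size=5); x0 /= np.linalg.norm(x0)
x = minimize_on_sphere(lambda y: y @ A @ y, x0)
print(x @ A @ x, np.linalg.eigvalsh(A)[0])        # should nearly agree
```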

For integers $d \geq 2$ and $k \geq d+1$, a $k$-hole in a set $S$ of points in general position in $\mathbb{R}^d$ is a $k$-tuple of points from $S$ in convex position such that the interior of their convex hull does not contain any point from $S$. For a convex body $K \subseteq \mathbb{R}^d$ of unit $d$-dimensional volume, we study the expected number $EH^K_{d,k}(n)$ of $k$-holes in a set of $n$ points drawn uniformly and independently at random from $K$. We prove an asymptotically tight lower bound on $EH^K_{d,k}(n)$ by showing that, for all fixed integers $d \geq 2$ and $k\geq d+1$, the number $EH_{d,k}^K(n)$ is at least $\Omega(n^d)$. For some small holes, we even determine the leading constant $\lim_{n \to \infty}n^{-d}EH^K_{d,k}(n)$ exactly. We improve the currently best known lower bound on $\lim_{n \to \infty}n^{-d}EH^K_{d,d+1}(n)$ by Reitzner and Temesvari (2019). In the plane, we show that the constant $\lim_{n \to \infty}n^{-2}EH^K_{2,k}(n)$ is independent of $K$ for every fixed $k \geq 3$ and we compute it exactly for $k=4$, improving earlier estimates by Fabila-Monroy, Huemer, and Mitsche (2015) and by the authors (2020).
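
For intuition, the brute-force Monte Carlo sketch below estimates $EH^K_{2,3}(n)$, the expected number of 3-holes (empty triangles), for $n$ points drawn uniformly from the unit square; the roughly constant ratio of the estimate to $n^2$ is consistent with the tight $\Theta(n^2)$ behaviour in the plane. Sample sizes and trial counts are arbitrary.

```python
import numpy as np
from itertools import combinations

def cross(o, a, b):
    return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

def count_empty_triangles(P):
    """Brute-force count of 3-holes among the points P (O(n^4) worst case)."""
    count = 0
    for i, j, k in combinations(range(len(P)), 3):
        a, b, c = P[i], P[j], P[k]
        empty = True
        for m in range(len(P)):
            if m in (i, j, k):
                continue
            s1, s2, s3 = cross(a, b, P[m]), cross(b, c, P[m]), cross(c, a, P[m])
            if (s1 > 0) == (s2 > 0) == (s3 > 0):   # P[m] strictly inside abc
                empty = False
                break
        count += empty
    return count

rng = np.random.default_rng(2)
for n in (15, 30, 60):
    est = np.mean([count_empty_triangles(rng.random((n, 2))) for _ in range(10)])
    print(n, est / n**2)   # ratio roughly stabilizes, consistent with Theta(n^2)
```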

Defining shape and form as equivalence classes under translation, rotation and -- for shapes -- also scale, we extend generalized additive regression to models for the shape/form of planar curves and/or landmark configurations. The model respects the resulting quotient geometry of the response, employing the squared geodesic distance as the loss function and a geodesic response function to map the additive predictor to the shape/form space. For fitting the model, we propose a Riemannian $L_2$-Boosting algorithm well suited for a potentially large number of possibly parameter-intensive model terms, which also yields automated model selection. We provide novel intuitively interpretable visualizations for (even non-linear) covariate effects in the shape/form space via suitable tensor-product factorization. The usefulness of the proposed framework is illustrated in an analysis of 1) astragalus shapes of wild and domesticated sheep and 2) cell forms generated in a biophysical model, as well as 3) in a realistic simulation study with response shapes and forms motivated from a dataset on bottle outlines.
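
As a pointer to the geometry involved, here is a minimal sketch (not the paper's boosting algorithm) of Kendall's shape space for planar landmark configurations: translation and scale are removed by mapping to preshapes, rotation is quotiented out analytically, and the geodesic distance whose square serves as the loss function is evaluated.

```python
import numpy as np

def preshape(z):
    """Remove translation and scale from planar landmarks (complex vector)."""
    z = z - z.mean()
    return z / np.linalg.norm(z)

def shape_distance(z, w):
    """Geodesic distance on Kendall's shape space, optimized over rotations."""
    z, w = preshape(z), preshape(w)
    return np.arccos(np.clip(abs(np.vdot(z, w)), 0.0, 1.0))

rng = np.random.default_rng(3)
z = rng.normal(size=12) + 1j * rng.normal(size=12)
w = 3.0 * np.exp(0.7j) * z + (2.0 - 1.0j)     # rotated, scaled, translated copy
print(shape_distance(z, w))                    # ~0: the same shape
print(shape_distance(z, rng.normal(size=12) + 1j * rng.normal(size=12)))
```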

We present a family of discretizations for the Variable Eddington Factor (VEF) equations that have high-order accuracy on curved meshes and efficient preconditioned iterative solvers. The VEF discretizations are combined with a high-order Discontinuous Galerkin transport discretization to form an effective high-order, linear transport method. The VEF discretizations are derived by extending the unified analysis of Discontinuous Galerkin methods for elliptic problems to the VEF equations. This framework is used to define analogs of the interior penalty, second method of Bassi and Rebay, minimal dissipation local Discontinuous Galerkin, and continuous finite element methods. The analysis of subspace correction preconditioners, which use a continuous operator to iteratively precondition the discontinuous discretization, is extended to the case of the non-symmetric VEF system. Numerical results demonstrate that the VEF discretizations have arbitrary-order accuracy on curved meshes, preserve the thick diffusion limit, and are effective on a proxy problem from thermal radiative transfer in both outer transport iterations and inner preconditioned linear solver iterations. In addition, a parallel weak scaling study of the interior penalty VEF discretization demonstrates the scalability of the method out to 1152 processors.
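
The interior penalty construction named above can be illustrated on a model problem. The sketch below assembles the symmetric interior penalty DG discretization of the 1D Poisson equation -u'' = f with piecewise linear elements; this is a standard textbook analog, not the VEF system, and the penalty parameter is a typical heuristic choice.

```python
import numpy as np

def sipg_poisson(N, f, sigma=10.0):
    """P1 symmetric interior penalty DG for -u'' = f on (0,1), u(0) = u(1) = 0."""
    h = 1.0 / N
    ndof = 2 * N                           # two endpoint values per element
    A = np.zeros((ndof, ndof))
    b = np.zeros(ndof)
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)        # 2-point Gauss rule
    for e in range(N):                     # element stiffness and load
        i = 2 * e
        A[i:i+2, i:i+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        for g in gp:
            x = (e + 0.5) * h + 0.5 * h * g
            phi = np.array([(e + 1) - x / h, x / h - e])   # local P1 basis
            b[i:i+2] += 0.5 * h * f(x) * phi
    for node in range(N + 1):              # face terms at every mesh node
        jump = np.zeros(ndof)
        avg = np.zeros(ndof)
        if node > 0:                       # trace and derivative from the left
            L = 2 * (node - 1)
            jump[L + 1] += 1.0
            avg[[L, L + 1]] += np.array([-1.0, 1.0]) / h
        if node < N:                       # trace and derivative from the right
            R = 2 * node
            jump[R] -= 1.0
            avg[[R, R + 1]] += np.array([-1.0, 1.0]) / h
        if 0 < node < N:
            avg *= 0.5                     # average of the two one-sided slopes
        # -({u'}[v] + {v'}[u]) + (sigma/h) [u][v]
        A += -(np.outer(avg, jump) + np.outer(jump, avg)) \
             + (sigma / h) * np.outer(jump, jump)
    return np.linalg.solve(A, b), h

# Convergence check with u(x) = sin(pi x), so f = pi^2 sin(pi x).
for N in (8, 16, 32):
    u, h = sipg_poisson(N, lambda x: np.pi**2 * np.sin(np.pi * x))
    x = np.repeat(np.arange(N) * h, 2) + np.tile([0.0, h], N)
    print(N, np.max(np.abs(u - np.sin(np.pi * x))))   # drops roughly like h^2
```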

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
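
To make the basic operation concrete, here is a minimal sketch of uniform affine quantization, the scheme underlying many of the surveyed methods: real values are mapped to b-bit integers through a scale and zero-point and then dequantized, and the rounding error grows as the bit width shrinks. The tensor and bit widths are illustrative.

```python
import numpy as np

def quantize(x, bits=4):
    """Uniform affine quantization of x to unsigned b-bit integers."""
    qmin, qmax = 0, 2**bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float64) - zero_point)

rng = np.random.default_rng(4)
w = rng.normal(size=10_000).astype(np.float32)   # stand-in weight tensor
for bits in (8, 4, 2):
    q, s, z = quantize(w, bits)
    err = np.mean((w - dequantize(q, s, z))**2)
    print(bits, err)    # mean squared error grows as the bit width shrinks
```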

The inductive biases of graph representation learning algorithms are often encoded in the background geometry of their embedding space. In this paper, we show that general directed graphs can be effectively represented by an embedding model that combines three components: a pseudo-Riemannian metric structure, a non-trivial global topology, and a unique likelihood function that explicitly incorporates a preferred direction in embedding space. We demonstrate the representational capabilities of this method by applying it to the task of link prediction on a series of synthetic and real directed graphs from natural language applications and biology. In particular, we show that low-dimensional cylindrical Minkowski and anti-de Sitter spacetimes can produce equal or better graph representations than curved Riemannian manifolds of higher dimensions.
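
The sketch below conveys the flavor of such a likelihood under our own assumptions (it is not the paper's model): points are embedded in flat Minkowski space R^{1,d}, and a candidate edge u -> v is scored highly when v is timelike-separated from u and later in time, building a preferred direction into the score.

```python
import numpy as np

def sq_interval(u, v):
    """Squared Minkowski interval with signature (-, +, ..., +)."""
    d = v - u
    return -d[0]**2 + np.sum(d[1:]**2)

def edge_score(u, v, beta=5.0):
    """High when v is timelike-separated from u and later in time."""
    if v[0] <= u[0]:                         # not future-directed: suppress
        return 0.0
    return 1.0 / (1.0 + np.exp(beta * sq_interval(u, v)))

u = np.array([0.0, 0.0, 0.0])
v_future = np.array([2.0, 0.5, 0.5])         # inside the future light cone of u
v_space = np.array([0.1, 2.0, 0.0])          # spacelike separated from u
print(edge_score(u, v_future), edge_score(u, v_space))   # high vs. low
print(edge_score(v_future, u))               # 0: the reverse edge is suppressed
```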

There has recently been increasing interest in learning representations of temporal knowledge graphs (KGs), which record the dynamic relationships between entities over time. Temporal KGs often exhibit multiple simultaneous non-Euclidean structures, such as hierarchical and cyclic structures. However, existing embedding approaches for temporal KGs typically learn entity representations and their dynamic evolution in the Euclidean space, which might not capture such intrinsic structures very well. To this end, we propose DyERNIE, a non-Euclidean embedding approach that learns evolving entity representations in a product of Riemannian manifolds, where the composed spaces are estimated from the sectional curvatures of underlying data. Product manifolds enable our approach to better reflect a wide variety of geometric structures on temporal KGs. Besides, to capture the evolutionary dynamics of temporal KGs, we let the entity representations evolve according to a velocity vector defined in the tangent space at each timestamp. We analyze in detail the contribution of geometric spaces to representation learning of temporal KGs and evaluate our model on temporal knowledge graph completion tasks. Extensive experiments on three real-world datasets demonstrate significantly improved performance, indicating that the dynamics of multi-relational graph data can be more properly modeled by the evolution of embeddings on Riemannian manifolds.
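
A hedged sketch of the evolution mechanism: an entity embedding moves along a velocity vector in the tangent space, realized here on a single hyperbolic factor (the Poincare ball with curvature -1) via the exponential map; a Euclidean factor of the product manifold would simply translate. The formulas are the standard Mobius-gyrovector ones; the specific vectors are made up.

```python
import numpy as np

def mobius_add(x, y):
    """Mobius addition on the Poincare ball with curvature -1."""
    xy, x2, y2 = x @ y, x @ x, y @ y
    return ((1 + 2 * xy + y2) * x + (1 - x2) * y) / (1 + 2 * xy + x2 * y2)

def exp_map(x, v):
    """Exponential map of the Poincare ball at x applied to tangent vector v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    lam = 2.0 / (1.0 - x @ x)               # conformal factor at x
    return mobius_add(x, np.tanh(lam * nv / 2.0) * v / nv)

e0 = np.array([0.1, 0.2])                   # initial entity embedding (made up)
vel = np.array([0.3, -0.1])                 # velocity in the tangent space
for t in (0.0, 0.5, 1.0, 2.0):
    et = exp_map(e0, t * vel)               # embedding at timestamp t
    print(t, et, np.linalg.norm(et))        # norm stays < 1: still in the ball
```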

M. Christandl conjectured that the composition of any trace-preserving PPT map with itself is entanglement breaking. We prove that Christandl's conjecture holds asymptotically by showing that the distance between the iterates of any unital or trace-preserving PPT map and the set of entanglement-breaking maps tends to zero. Finally, for every graph we define a one-parameter family of maps on matrices and determine the least value of the parameter such that the map is, variously, positive, completely positive, PPT and entanglement breaking, in terms of properties of the graph. Our estimates are sharp enough to conclude that Christandl's conjecture holds for these families.
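
For concreteness, the sketch below computes the basic object in these statements, the partial transpose, by reshaping a bipartite density matrix, and checks the PPT property; a completely positive map is PPT when the partial transpose of its Choi matrix is again positive semidefinite. The Werner-state example, which loses PPT exactly at p = 1/3, is our illustration, not the paper's construction.

```python
import numpy as np

def partial_transpose(rho, dA, dB):
    """Transpose the second tensor factor of a (dA*dB) x (dA*dB) matrix."""
    return rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA*dB, dA*dB)

def is_ppt(rho, dA, dB, tol=1e-12):
    return np.min(np.linalg.eigvalsh(partial_transpose(rho, dA, dB))) >= -tol

psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)   # singlet state
for p in (0.2, 1/3, 0.5):
    rho = p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4
    print(p, is_ppt(rho, 2, 2))   # True, True, False
```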
