
Information geometry is concerned with the application of differential-geometric concepts to the study of the parametric spaces of statistical models. When the random variables are independent and identically distributed, the underlying parametric space exhibits constant curvature, making the geometry hyperbolic (negative curvature) or spherical (positive curvature). In this paper, we derive closed-form expressions for the components of the first and second fundamental forms of pairwise isotropic Gaussian-Markov random field manifolds, allowing the computation of the Gaussian, mean, and principal curvatures. Computational simulations using Markov chain Monte Carlo dynamics indicate that a change in the sign of the Gaussian curvature is related to the emergence of phase transitions in the field. Moreover, the curvatures are highly asymmetric for positive and negative displacements in the inverse-temperature parameter, suggesting the existence of irreversible geometric properties of the parametric space along the dynamics. Furthermore, these asymmetric changes in the curvature of the space induce an intrinsic notion of time in the evolution of the random field.
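As a concrete illustration of the kind of dynamics simulated in this abstract, the sketch below runs a Gibbs sampler for a pairwise isotropic Gaussian-Markov random field on a small toroidal lattice. The conditional distribution, the 4-neighbour system, and all parameter values are assumptions chosen for this example, not details taken from the paper.

```python
import numpy as np

def gibbs_gmrf(n=64, mu=0.0, sigma2=1.0, beta=0.1, sweeps=50, seed=0):
    """Gibbs sampler for a pairwise isotropic Gaussian-Markov random field
    on an n x n toroidal lattice with a 4-neighbour system (hypothetical setup).

    Assumed conditional of each site given its neighbours:
        x_ij | neighbours ~ N(mu + beta * sum_k (x_k - mu), sigma2)
    where beta plays the role of the inverse-temperature parameter.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, np.sqrt(sigma2), size=(n, n))
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                nb = (x[(i - 1) % n, j] + x[(i + 1) % n, j]
                      + x[i, (j - 1) % n] + x[i, (j + 1) % n])
                mean = mu + beta * (nb - 4 * mu)   # neighbour coupling
                x[i, j] = rng.normal(mean, np.sqrt(sigma2))
    return x

field = gibbs_gmrf(n=32, beta=0.2, sweeps=20)
```

Sweeping beta from weak to strong coupling (and back) along such a chain is the kind of dynamics under which the curvature asymmetry described above would be probed.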

Related Content

Constitutive models are widely used for modeling complex systems in science and engineering, where first-principles, well-resolved simulations are often prohibitively expensive. For example, in fluid dynamics, constitutive models are required to describe nonlocal, unresolved physics such as turbulence and laminar-turbulent transition. However, traditional constitutive models based on partial differential equations (PDEs) often lack robustness and are too rigid to accommodate diverse calibration datasets. We propose a frame-independent, nonlocal constitutive model based on a vector-cloud neural network that can be learned from data. The model predicts the closure variable at a point based on the flow information in its neighborhood. Such nonlocal information is represented by a group of points, each with a feature vector attached to it, and thus the input is referred to as a vector cloud. The cloud is mapped to the closure variable through a frame-independent neural network that is invariant both to coordinate translation and rotation and to the ordering of points in the cloud. As such, the network can handle any number of arbitrarily arranged grid points and is therefore suitable for unstructured meshes in fluid simulations. The merits of the proposed network are demonstrated on scalar transport PDEs on a family of parameterized periodic hill geometries. The vector-cloud neural network is a promising tool not only as a nonlocal constitutive model but also as a general surrogate model for PDEs on irregular domains.
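The three invariances described above can be realised in a minimal way: reduce each point to frame-independent scalars (its distance from the cloud centre plus its attached feature), embed each point with a shared map, and mean-pool over the cloud. The toy random weights and two-feature input below are hypothetical; this is a sketch of the invariance structure, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a shared point-wise embedding and a linear head
# (random, for illustration only).
W1, b1 = rng.normal(size=(8, 2)), np.zeros(8)
w2 = rng.normal(size=8)

def closure_from_cloud(points, features):
    """Map a cloud of points (m, 2) with scalar features (m,) to one closure value.

    Invariances, realised minimally:
      - translation: only coordinates relative to the cloud centre are used;
      - rotation:    each point is reduced to its distance from the centre;
      - permutation: point-wise embeddings are mean-pooled.
    """
    rel = points - points.mean(axis=0)      # translation invariance
    r = np.linalg.norm(rel, axis=1)         # rotation invariance
    inp = np.stack([r, features], axis=1)   # (m, 2) frame-independent inputs
    h = np.tanh(inp @ W1.T + b1)            # shared point-wise embedding
    pooled = h.mean(axis=0)                 # permutation invariance
    return float(pooled @ w2)

pts = rng.normal(size=(10, 2))
feat = rng.normal(size=10)
v1 = closure_from_cloud(pts, feat)
perm = rng.permutation(10)
v2 = closure_from_cloud(pts[perm], feat[perm])  # unchanged under reordering
```

Because the pooling is a mean, the same function accepts clouds of any size, which is what makes the approach mesh-agnostic.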

This paper introduces a $K$-function for assessing second-order properties of inhomogeneous random measures generated by marked point processes. The marks can be geometric objects like fibers or sets of positive volume, and the presented $K$-function takes into account geometric features of the marks, such as tangent directions of fibers. The $K$-function requires an estimate of the inhomogeneous density function of the random measure. We introduce parametric estimates for the density function based on parametric models that represent large scale features of the inhomogeneous random measure. The proposed methodology is applied to simulated fiber patterns as well as a three-dimensional data set of steel fibers in concrete.
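For orientation, the following sketch computes a naive homogeneous Ripley-type K estimate for an unmarked point pattern in a unit cube. It is a simplified stand-in for the mark-weighted, inhomogeneous K-function of the paper: marks, tangent directions, edge correction, and the estimated density function are all omitted here.

```python
import numpy as np

def k_function(points, r, window_vol):
    """Naive Ripley K estimate at radius r for points in a window of volume
    window_vol. Edge effects are ignored; this assumes a homogeneous
    intensity, unlike the inhomogeneous setting of the paper."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    close = (d < r).sum() - n      # ordered pairs within r, excluding self-pairs
    lam = n / window_vol           # homogeneous intensity estimate
    return close / (lam * n)

rng = np.random.default_rng(0)
K = k_function(rng.random((50, 3)), 0.1, 1.0)
```

The paper's version replaces the constant intensity `lam` with a parametrically estimated inhomogeneous density and weights pairs by geometric features of the marks.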

We consider a general setting for dynamic tensor field tomography in an inhomogeneous refracting and absorbing medium as an inverse source problem for the associated transport equation. Following Fermat's principle, the Riemannian metric in the considered domain is generated by the refractive index of the medium. There is a wealth of results for the inverse problem of recovering a tensor field from its longitudinal ray transform in a static Euclidean setting, whereas only a few inversion formulas and algorithms exist for general Riemannian metrics and time-dependent tensor fields. It is well known that tensor field tomography is equivalent to an inverse source problem for a transport equation in which the ray transform serves as the given boundary data. We prove that this result extends to the dynamic case. Interpreting dynamic tensor tomography as an inverse source problem represents a holistic approach in this field. To guarantee that the forward mappings are well defined, it is necessary to prove existence and uniqueness for the underlying transport equations. Unfortunately, the bilinear forms of the associated weak formulations do not satisfy the coercivity condition. We therefore pass to viscosity solutions and prove their unique existence in appropriate Sobolev (static case) and Sobolev-Bochner (dynamic case) spaces under an assumption that allows only small variations of the refractive index. Numerical evidence is given that the viscosity solution solves the original transport equation as the viscosity term tends to zero.

Deep Markov models (DMMs) are scalable and expressive generative generalizations of Markov models for representation, learning, and inference problems. However, the fundamental stochastic stability guarantees of such models have not been thoroughly investigated. In this paper, we provide sufficient conditions for the stochastic stability of DMMs, as defined in the context of dynamical systems, and propose a stability analysis method based on the contraction of probabilistic maps modeled by deep neural networks. We connect the spectral properties of the neural networks' weights and the types of activation functions used to the stability and overall dynamic behavior of DMMs with Gaussian distributions. Based on this theory, we propose several practical methods for designing constrained DMMs with guaranteed stability. We empirically substantiate our theoretical results via intuitive numerical experiments using the proposed stability constraints.
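A standard sufficient condition of this flavour is easy to state: for a feed-forward transition map with 1-Lipschitz activations (such as tanh), the map is a contraction whenever the product of the spectral norms of its weight matrices is below one. The check below is a sketch of this generic condition, not the paper's exact criterion.

```python
import numpy as np

def is_contractive(weights):
    """Sufficient contraction check for a feed-forward map built from the
    given weight matrices interleaved with 1-Lipschitz activations (e.g.
    tanh): the map's Lipschitz constant is bounded by the product of the
    spectral norms, so a product below 1 implies contraction."""
    lip = 1.0
    for W in weights:
        lip *= np.linalg.norm(W, ord=2)   # largest singular value
    return lip < 1.0, lip

rng = np.random.default_rng(1)
Ws = [0.5 * rng.normal(size=(4, 4)) / np.sqrt(4) for _ in range(3)]
ok, lip = is_contractive(Ws)
```

Constraining each layer's spectral norm at training time (e.g. by rescaling weights after each update) is one practical way to enforce such a bound by construction.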

Global spectral methods offer the potential to compute solutions of partial differential equations numerically to very high accuracy. In this work, we develop a novel global spectral method for linear partial differential equations on cubes by extending ideas of Chebop2 [Townsend and Olver, J. Comput. Phys., 299 (2015)] to the three-dimensional setting, utilizing expansions in tensorized polynomial bases. Solving the discretized PDE involves a linear system that can be recast as a linear tensor equation. Under suitable additional assumptions, the structure of these equations admits an efficient solution via the blocked recursive solver [Chen and Kressner, Numer. Algorithms, 84 (2020)]. In the general case, when these assumptions are not satisfied, this solver is used as a preconditioner to speed up computations.
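The Kronecker structure behind such linear tensor equations can be made concrete on a toy example: a second-difference operator applied along each of the three modes of a tensor grid gives a system matrix that is a sum of three Kronecker products. The finite-difference stencil used here is an illustrative stand-in for the paper's spectral discretization.

```python
import numpy as np

# Assemble a Laplace-type operator on an n^3 tensor grid as a sum of
# Kronecker products -- the structure a linear tensor equation exploits.
n = 5
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))          # 1D second-difference stencil
I = np.eye(n)
L = (np.kron(np.kron(A, I), I)
     + np.kron(np.kron(I, A), I)
     + np.kron(np.kron(I, I), A))            # 3D operator, shape (n^3, n^3)

f = np.ones(n ** 3)
u = np.linalg.solve(L, f)                    # dense solve for illustration;
                                             # structured solvers avoid forming L
```

Forming `L` explicitly costs O(n^6) storage, which is exactly what solvers that work directly on the Kronecker factors are designed to avoid.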

Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments. This paper derives complexity estimates for the solutions of $d$-dimensional second-order elliptic PDEs in the Barron space, that is, the set of functions admitting an integral representation in terms of certain parametric ridge functions against a probability measure on the parameters. We prove, under appropriate assumptions, that if the coefficients and the source term of the elliptic PDE lie in Barron spaces, then the solution of the PDE is $\epsilon$-close with respect to the $H^1$ norm to a Barron function. Moreover, we prove dimension-explicit bounds for the Barron norm of this approximate solution, depending at most polynomially on the dimension $d$ of the PDE. As a direct consequence of the complexity estimates, the solution of the PDE can be approximated on any bounded domain by a two-layer neural network with respect to the $H^1$ norm with a dimension-explicit convergence rate.
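The approximation class in question is just a two-layer (one-hidden-layer) network, i.e. a finite ridge expansion f(x) = sum_i a_i sigma(w_i . x + b_i). The sketch below instantiates one with random parameters purely to make the object concrete; the width, scaling, and ReLU activation are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 256                       # input dimension and network width (illustrative)

# Random parameters of a two-layer ReLU network -- a finite sample from the
# kind of integral ridge representation that defines a Barron function.
W = rng.normal(size=(m, d)) / np.sqrt(d)
b = rng.normal(size=m)
a = rng.normal(size=m) / m

def two_layer(x):
    """f(x) = sum_i a_i * relu(w_i . x + b_i): a ridge expansion of width m."""
    return np.maximum(W @ x + b, 0.0) @ a

y = two_layer(rng.normal(size=d))
```

The complexity estimates above say, roughly, that for Barron-regular PDE data a network of this form can reach $H^1$ accuracy with a width that does not blow up exponentially in $d$.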

Random fields are mathematical structures used to model the spatial interaction of random variables over time, with applications ranging from statistical physics and thermodynamics to systems biology and the simulation of complex systems. Despite being studied since the 19th century, little is known about how the dynamics of random fields relate to the geometric properties of their parametric spaces. For example, how can we quantify the similarity between two random fields operating in different regimes using an intrinsic measure? In this paper, we propose a numerical method for the computation of geodesic distances in Gaussian random field manifolds. First, we derive the metric tensor of the underlying parametric space (the 3 × 3 first-order Fisher information matrix); then we derive the 27 Christoffel symbols required to define the system of non-linear differential equations whose solution is a geodesic curve starting at the given initial conditions. The fourth-order Runge-Kutta method is applied to numerically solve the non-linear system through an iterative approach. The obtained results show that the proposed method can estimate the geodesic distances for several different initial conditions. Moreover, the results reveal an interesting pattern: in several cases, the geodesic curve obtained by reversing the system of differential equations in time does not match the original curve, suggesting the existence of irreversible geometric deformations in the trajectory of a moving reference traveling along a geodesic curve.
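The numerical pipeline described above (Christoffel symbols plugged into the geodesic ODEs, integrated by classical RK4) can be sketched on a manifold whose Christoffel symbols are known in closed form. Here we use the Fisher manifold of the univariate Gaussian N(mu, sigma^2) as a 2-parameter stand-in for the paper's 3-parameter random field manifold, so the symbols below are those of that simpler manifold, not the paper's.

```python
import numpy as np

def geodesic_rhs(state):
    """Geodesic ODEs on the Fisher manifold of N(mu, sigma^2), whose metric is
    diag(1/sigma^2, 2/sigma^2) with Christoffel symbols
    Gamma^mu_{mu,sigma} = -1/sigma, Gamma^sigma_{mu,mu} = 1/(2 sigma),
    Gamma^sigma_{sigma,sigma} = -1/sigma.  state = (mu, sigma, mu', sigma')."""
    mu, s, dmu, ds = state
    return np.array([
        dmu,
        ds,
        (2.0 / s) * dmu * ds,                        # mu''
        -(dmu ** 2) / (2.0 * s) + (ds ** 2) / s,     # sigma''
    ])

def rk4_geodesic(state0, h=1e-3, steps=2000):
    """Integrate the first-order geodesic system with classical 4th-order RK."""
    y = np.asarray(state0, float)
    for _ in range(steps):
        k1 = geodesic_rhs(y)
        k2 = geodesic_rhs(y + 0.5 * h * k1)
        k3 = geodesic_rhs(y + 0.5 * h * k2)
        k4 = geodesic_rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

end = rk4_geodesic([0.0, 1.0, 1.0, 0.0])   # start at (mu, sigma) = (0, 1)
```

A useful sanity check is that the squared speed in the metric, dmu'^2/sigma^2 + 2 sigma'^2/sigma^2, is conserved along the integrated geodesic; the time-reversal experiment in the paper amounts to integrating this system with negated initial velocities from the endpoint.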

The quadrature-based method of moments (QMOM) offers a promising class of approximation techniques for reducing kinetic equations to fluid equations that are valid beyond thermodynamic equilibrium. A major challenge with these and other closures is that whenever the flux function must be evaluated (e.g., in a numerical update), a moment-inversion problem must be solved that computes the flux from the known input moments. In this work we study a particular five-moment variant of QMOM known as HyQMOM and establish that this system is moment-invertible over a convex region in solution space. We then develop a high-order Lax-Wendroff discontinuous Galerkin scheme for solving the resulting fluid system. The scheme is based on a predictor-corrector approach, where the prediction step is a localized space-time discontinuous Galerkin scheme. The nonlinear algebraic system that arises in this prediction step is solved using a Picard iteration. The correction step is a straightforward explicit update using the predicted solution to evaluate space-time flux integrals. In the absence of additional limiters, the proposed high-order scheme does not in general guarantee that the numerical solution remains in the convex set over which HyQMOM is moment-invertible. To overcome this challenge, we introduce novel limiters that rigorously guarantee that the computed solution does not leave the convex set over which moment-invertibility and hyperbolicity of the fluid system are guaranteed. We develop positivity-preserving limiters in both the prediction and correction steps, as well as an oscillation limiter that damps unphysical oscillations near shocks and rarefactions. Finally, we perform convergence tests to verify the order of accuracy of the scheme, and we test the scheme on Riemann data to demonstrate its shock-capturing ability and robustness.
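The Picard iteration used in the prediction step is plain fixed-point iteration x <- G(x), which converges whenever G is a contraction near the solution. The sketch below shows the generic device on a small contractive toy system; the system itself is hypothetical, not the scheme's space-time predictor equations.

```python
import numpy as np

def picard(G, x0, tol=1e-12, maxit=200):
    """Fixed-point (Picard) iteration x <- G(x), stopped when successive
    iterates differ by less than tol in the Euclidean norm."""
    x = np.asarray(x0, float)
    for k in range(maxit):
        x_new = G(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxit

# Toy contractive system: x = cos(x), y = 0.5*sin(x + y)
G = lambda v: np.array([np.cos(v[0]), 0.5 * np.sin(v[0] + v[1])])
root, iters = picard(G, [0.0, 0.0])
```

In a localized predictor the analogue of `G` couples only the degrees of freedom within one space-time element, which keeps each such solve small and cheap.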

Gaussian processes are machine learning models capable of learning unknown functions in a way that represents uncertainty, thereby facilitating the construction of optimal decision-making systems. Motivated by a desire to deploy Gaussian processes in novel areas of science, a rapidly growing line of research has focused on constructively extending these models to handle non-Euclidean domains, including Riemannian manifolds such as spheres and tori. We propose techniques that generalize this class to model vector fields on Riemannian manifolds, which are important in a number of application areas in the physical sciences. To do so, we present a general recipe for constructing gauge-independent kernels, which induce Gaussian vector fields, i.e., vector-valued Gaussian processes coherent with geometry, from scalar-valued Riemannian kernels. We extend standard Gaussian process training methods, such as variational inference, to this setting. This enables vector-valued Gaussian processes on Riemannian manifolds to be trained using standard methods and makes them accessible to machine learning practitioners.

In this article, we pursue our investigation of the connections between the theory of computation and hydrodynamics. We prove the existence of stationary solutions of the Euler equations in Euclidean space, of Beltrami type, that can simulate a universal Turing machine. In particular, these solutions possess undecidable trajectories. Heretofore, the known Turing-complete constructions of steady Euler flows in dimension 3 or higher were not associated with a prescribed metric. Our solutions do not have finite energy, and their construction makes crucial use of the non-compactness of $\mathbb R^3$; however, they can be employed to show that an arbitrary tape-bounded Turing machine can be robustly simulated by a Beltrami flow on $\mathbb T^3$ (with the standard flat metric). This shows that there exist steady solutions to the Euler equations on the flat torus exhibiting dynamical phenomena of (robust) arbitrarily high computational complexity. We also quantify the energetic cost for a Beltrami field on $\mathbb T^3$ to simulate a tape-bounded Turing machine, thus providing additional support for the space-bounded Church-Turing thesis. Another implication of our construction is that a Gaussian random Beltrami field on Euclidean space exhibits arbitrarily high computational complexity with probability $1$. Finally, our proof also yields Turing-complete flows and maps on $\mathbb{S}^2$ with zero topological entropy, thus disclosing a certain degree of independence within different hierarchies of complexity.
