
This paper takes a significant step forward in understanding dependency equilibria within the framework of real algebraic geometry, encompassing both pure and mixed equilibria. We start by breaking down the concept for a general audience, using concrete examples to illustrate the main results. Alongside Spohn's original definition of dependency equilibria, we propose three alternative definitions, allowing for a comprehensive algebro-geometric study of all dependency equilibria. We give a sufficient condition for the existence of a pure dependency equilibrium and show that every Nash equilibrium lies on the Spohn variety, the algebraic model for dependency equilibria. For generic games, the set of real points of the Spohn variety is Zariski dense, and every Nash equilibrium is then a dependency equilibrium. Finally, we present a detailed analysis of the geometric structure of dependency equilibria for $(2\times2)$-games.
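As a concrete illustration of the condition underlying dependency equilibria (not the paper's algebro-geometric machinery), the following numpy sketch checks the defining indifference condition for a fully supported joint distribution in a $(2\times2)$-game: each player's actions must all yield the same conditional expected payoff. The function names and the full-support restriction are illustrative assumptions.

```python
import numpy as np

def conditional_payoffs(p, u, player):
    """Conditional expected payoff E[u_player | own action] in a 2x2 game.
    p: 2x2 joint distribution over action profiles (assumed fully supported),
    u: 2x2 payoff matrix (rows = player 1's actions, cols = player 2's)."""
    if player == 0:
        cond = p / p.sum(axis=1)[:, None]   # P(a2 | a1)
        return (cond * u).sum(axis=1)       # E[u1 | a1]
    cond = p / p.sum(axis=0)[None, :]       # P(a1 | a2)
    return (cond * u).sum(axis=0)           # E[u2 | a2]

def is_dependency_equilibrium(p, u1, u2, tol=1e-9):
    """For fully supported p, the condition reduces to indifference:
    equal conditional payoffs across each player's own actions."""
    return (np.ptp(conditional_payoffs(p, u1, 0)) < tol
            and np.ptp(conditional_payoffs(p, u2, 1)) < tol)

# Matching pennies: the uniform joint distribution (the mixed Nash
# equilibrium viewed as a product measure) satisfies the condition.
u1 = np.array([[1., -1.], [-1., 1.]])
u2 = -u1
p = np.full((2, 2), 0.25)
print(is_dependency_equilibrium(p, u1, u2))   # True
```

The same check rejects the uniform distribution in a prisoner's-dilemma-type game, where defecting strictly improves the conditional payoff.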



We introduce a method based on Gaussian process regression to identify discrete variational principles from observed solutions of a field theory. The method is based on the data-based identification of a discrete Lagrangian density. It is a geometric machine learning technique in the sense that the variational structure of the true field theory is reflected in the data-driven model by design. We provide a rigorous convergence statement for the method. The proof circumvents challenges posed by the ambiguity of discrete Lagrangian densities in the inverse problem of variational calculus. Moreover, our method can be used to quantify model uncertainty in the equations of motion and in any linear observable of the discrete field theory. This is illustrated on the examples of the discrete wave equation and the Schr\"odinger equation. The article extends our previous article arXiv:2404.19626 on the data-driven identification of (discrete) Lagrangians for variational dynamics from an ODE setting to the setting of discrete PDEs.
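The building block of such an approach, Gaussian process regression with its built-in uncertainty quantification, can be sketched in a few lines of numpy. This is a generic GP regression demo under an assumed RBF kernel and noise level, not the authors' discrete-Lagrangian method: it fits noisy samples of a function and returns both a posterior mean and a posterior variance, the latter being what enables uncertainty quantification of derived observables.

```python
import numpy as np

def rbf(X1, X2, ell=0.5):
    """Squared-exponential (RBF) kernel between 1-D input arrays."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-2, ell=0.5):
    """Posterior mean and pointwise variance of a zero-mean GP with RBF
    kernel, conditioned on noisy observations y at X, queried at Xs."""
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    Ks = rbf(X, Xs, ell)
    Kss = rbf(Xs, Xs, ell)
    mean = Ks.T @ np.linalg.solve(K, y)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 30)
y = np.sin(X) + 0.05 * rng.normal(size=X.size)   # noisy observations
Xs = np.array([np.pi / 2])
mean, var = gp_posterior(X, y, Xs)
print(mean[0], var[0])   # mean close to sin(pi/2) = 1, small variance
```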

While backward error analysis does not generalise straightforwardly to the strong and weak approximation of stochastic differential equations, it does extend to the sampling of ergodic dynamics. The calculation of the modified equation relies on tedious computations, and no explicit expression for the modified vector field is available, in contrast to the deterministic setting. In this paper we uncover the Hopf algebra structures associated to the laws of composition and substitution of exotic aromatic S-series, relying on the new idea of clumping. We use these algebraic structures to provide the algebraic foundations of stochastic numerical analysis with S-series, as well as an explicit expression of the modified vector field as an exotic aromatic B-series.
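For contrast with the stochastic setting the abstract discusses, the deterministic modified vector field does have a classical explicit expansion, and its first correction is easy to verify numerically. The sketch below (a standard textbook computation, assumed here for illustration) checks that explicit Euler applied to $x' = f(x)$ tracks the modified equation $x' = f(x) - \tfrac{h}{2} f'(x) f(x)$ one order more accurately than the original equation.

```python
import numpy as np

# Explicit Euler on x' = -x. Here f(x) = -x, so f'(x) f(x) = x and the
# first-order modified vector field is f_tilde(x) = -(1 + h/2) x.
f = lambda x: -x
h, n = 0.01, 100          # integrate up to t = 1
x = 1.0
for _ in range(n):
    x += h * f(x)         # Euler steps

t = h * n
exact_true = np.exp(-t)                    # flow of the original ODE
exact_modified = np.exp(-(1 + h / 2) * t)  # flow of the modified ODE
print(abs(x - exact_true))       # O(h)   deviation from the original flow
print(abs(x - exact_modified))   # O(h^2) deviation from the modified flow
```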

In this work we propose a discretization of the second boundary condition for the Monge-Ampère equation arising in geometric optics and optimal transport. The discretization we propose is a natural generalization of the popular Oliker-Prussner method proposed in 1988. For the discretization of the differential operator, we use a discrete analogue of the subdifferential. Existence, uniqueness, and stability of solutions to the discrete problem are established, and convergence results to the continuous problem are given.
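The subdifferential-based discretization is easiest to see in one dimension, where the subdifferential of the piecewise-linear convex interpolant at an interior node is the interval of slopes between the two neighbouring chords, and its length plays the role of the discrete Monge-Ampère measure. The sketch below is a 1-D analogue for intuition only; the paper's setting and its boundary-condition treatment are higher-dimensional.

```python
import numpy as np

def subdifferential_measure(u, xgrid):
    """1-D analogue of an Oliker-Prussner-type operator: at each interior
    node, the subdifferential of the piecewise-linear convex interpolant
    is the slope interval [left chord slope, right chord slope]; its
    length (right minus left) is the discrete Monge-Ampere measure."""
    slopes = np.diff(u) / np.diff(xgrid)
    return slopes[1:] - slopes[:-1]

# For u(x) = x^2 / 2 we have u'' = 1, so on a uniform grid of spacing h
# the measure assigned to each interior node is exactly h.
x = np.linspace(-1, 1, 11)    # spacing h = 0.2
u = 0.5 * x ** 2
print(subdifferential_measure(u, x))   # array of 0.2's
```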

One of the most promising applications of machine learning (ML) in computational physics is to accelerate the solution of partial differential equations (PDEs). The key objective of ML-based PDE solvers is to output a sufficiently accurate solution faster than standard numerical methods, which are used as a baseline comparison. We first perform a systematic review of the ML-for-PDE solving literature. Of articles that use ML to solve a fluid-related PDE and claim to outperform a standard numerical method, we determine that 79% (60/76) compare to a weak baseline. Second, we find evidence that reporting biases, especially outcome reporting bias and publication bias, are widespread. We conclude that ML-for-PDE solving research is overoptimistic: weak baselines lead to overly positive results, while reporting biases lead to underreporting of negative results. To a large extent, these issues appear to be caused by factors similar to those of past reproducibility crises: researcher degrees of freedom and a bias towards positive results. We call for bottom-up cultural changes to minimize biased reporting as well as top-down structural reforms intended to reduce perverse incentives for doing so.

We consider a non-linear Bayesian data assimilation model for the periodic two-dimensional Navier-Stokes equations with initial condition modelled by a Gaussian process prior. We show that if the system is updated with sufficiently many discrete noisy measurements of the velocity field, then the posterior distribution eventually concentrates near the ground truth solution of the time evolution equation, and in particular that the initial condition is recovered consistently by the posterior mean vector field. We further show that the convergence rate can in general not be faster than inverse logarithmic in sample size, but describe specific conditions on the initial conditions when faster rates are possible. In the proofs we provide an explicit quantitative estimate for backward uniqueness of solutions of the two-dimensional Navier-Stokes equations.
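The posterior-concentration phenomenon described above can be previewed in the simplest conjugate setting. The following toy sketch (a one-dimensional Gaussian stand-in, not the Navier-Stokes model of the paper) shows the posterior mean approaching the ground truth and the posterior variance shrinking as the number of noisy measurements grows.

```python
import numpy as np

# Gaussian prior theta ~ N(0, 1); n noisy observations y_k = theta + eps_k
# with eps_k ~ N(0, sigma^2). The conjugate posterior is Gaussian with
# precision (1 + n / sigma^2), so it concentrates around the truth.
rng = np.random.default_rng(1)
theta_true, sigma = 0.7, 0.5

def posterior(n):
    """Posterior mean and variance after n noisy observations."""
    y = theta_true + sigma * rng.normal(size=n)
    prec = 1.0 + n / sigma**2            # prior precision + data precision
    mean = (y.sum() / sigma**2) / prec
    return mean, 1.0 / prec

for n in (10, 1000):
    m, v = posterior(n)
    print(n, m, v)   # mean drifts toward 0.7, variance shrinks like 1/n
```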

We study the problem of computing the value function from a discretely-observed trajectory of a continuous-time diffusion process. We develop a new class of algorithms based on easily implementable numerical schemes that are compatible with discrete-time reinforcement learning (RL) with function approximation. We establish high-order numerical accuracy as well as the approximation error guarantees for the proposed approach. In contrast to discrete-time RL problems where the approximation factor depends on the effective horizon, we obtain a bounded approximation factor using the underlying elliptic structures, even if the effective horizon diverges to infinity.
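A minimal version of the underlying task, estimating a discounted value function from discrete-time data of a diffusion, can be sketched with Monte Carlo on an Ornstein-Uhlenbeck process. This is a plain simulation baseline under assumed dynamics and reward, not the paper's function-approximation algorithm: for $dX = -X\,dt + \sigma\,dW$ with reward $r(x) = x$ and discount $\beta$, the exact value is $V(x) = x/(\beta + 1)$, which the discrete-time estimate should approach.

```python
import numpy as np

# Euler-Maruyama simulation of the OU process plus a first-order
# quadrature of the discounted reward integral along each trajectory.
rng = np.random.default_rng(2)
beta, sigma, dt, T, n_paths = 1.0, 0.3, 0.01, 8.0, 2000
steps = int(T / dt)

x = np.ones(n_paths)          # all trajectories start at x0 = 1
v = np.zeros(n_paths)         # running discounted reward per trajectory
disc = 1.0
for _ in range(steps):
    v += disc * x * dt        # reward r(x) = x, discounted
    x += -x * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    disc *= np.exp(-beta * dt)

print(v.mean())               # close to 1 / (beta + 1) = 0.5
```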

This paper deals with the numerical solution of conservation laws in the two-dimensional case using a novel compact implicit time discretization that enables the application of fast algebraic solvers. We present details for a second-order accurate parametric scheme based on the finite volume method, including simple variants of ENO (Essentially Non-Oscillatory) and WENO (Weighted Essentially Non-Oscillatory) approximations. We present numerical experiments for representative linear and nonlinear problems.
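The finite-volume and limiter ingredients can be illustrated on the simplest possible case. The sketch below is an explicit, one-dimensional, minmod-limited scheme for linear advection, chosen only to show a non-oscillatory reconstruction in the spirit of ENO; the paper's scheme is compact, implicit, and two-dimensional.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude slope, or 0 at extrema."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u, nu, steps):
    """Finite-volume solver for u_t + c u_x = 0 (c > 0, periodic) with a
    minmod-limited linear reconstruction; nu = c dt / dx is the CFL number."""
    for _ in range(steps):
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
        face = u + 0.5 * (1.0 - nu) * s                    # value at i+1/2
        u = u - nu * (face - np.roll(face, 1))             # flux difference
    return u

n = 100
x = (np.arange(n) + 0.5) / n            # cell centres on a periodic domain
u0 = np.sin(2 * np.pi * x)
u = advect(u0.copy(), nu=0.5, steps=200)   # exactly one full period
print(np.max(np.abs(u - u0)))              # small error, no overshoots
```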

We propose an extremely versatile approach to address a large family of matrix nearness problems, possibly with additional linear constraints. Our method is based on splitting a matrix nearness problem into two nested optimization problems, of which the inner one can be solved either exactly or cheaply, while the outer one can be recast as an unconstrained optimization task over a smooth real Riemannian manifold. We observe that this paradigm applies to many matrix nearness problems of practical interest appearing in the literature, thus revealing that they are equivalent in this sense to a Riemannian optimization problem. We also show that the objective function to be minimized on the Riemannian manifold can be discontinuous, thus requiring regularization techniques, and we give conditions for this to happen. Finally, we demonstrate the practical applicability of our method by implementing it for a number of matrix nearness problems that are relevant for applications and are currently considered very demanding in practice. Extensive numerical experiments demonstrate that our method often greatly outperforms its predecessors, including algorithms specifically designed for those particular problems.
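The nested-splitting paradigm can be seen on the simplest matrix nearness problem: the distance from $A$ to the nearest singular matrix. This sketch is a toy instance, not the authors' algorithm. For a fixed unit vector $v$, the inner problem $\min \|A - B\|_F$ subject to $Bv = 0$ has the closed-form solution $B = A - (Av)v^\top$ with value $\|Av\|_2$; the outer problem then minimizes $\|Av\|_2$ over the unit sphere, a smooth Riemannian manifold, and the optimal value is the smallest singular value of $A$.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 5))

# Outer problem: projected gradient descent on the unit sphere for
# g(v) = ||A v||^2 (gradient 2 A^T A v, followed by renormalization).
M = A.T @ A
eta = 0.9 / np.linalg.norm(M, 2)   # step size below 1 / lambda_max
v = rng.normal(size=5)
v /= np.linalg.norm(v)
for _ in range(20000):
    v = v - eta * (M @ v)
    v /= np.linalg.norm(v)

dist_nested = np.linalg.norm(A @ v)                   # nested-splitting value
dist_svd = np.linalg.svd(A, compute_uv=False).min()   # reference: sigma_min
print(dist_nested, dist_svd)                          # should agree
```

The design point mirrored here is that the inner problem is solved exactly and cheaply, so all the optimization effort lives on the low-dimensional manifold.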

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.

Deep learning is usually described as an experiment-driven field under continual criticism for lacking theoretical foundations. This problem has been partially addressed by a large volume of literature which has so far not been well organized. This paper reviews and organizes the recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity- and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their dynamical systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning, partially inspired by Bayesian inference; (3) the geometrical structures of the loss landscape that drive the trajectories of the dynamical systems; (4) the roles of over-parameterization of deep neural networks from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) the increasingly intensive concerns about ethics and security and their relationships with generalizability.
