
In this paper, we present a numerical approach to solving McKean-Vlasov equations, i.e., distribution-dependent stochastic differential equations, under non-globally Lipschitz conditions on both the drift and diffusion coefficients. We establish a propagation of chaos result, based on which the McKean-Vlasov equation is approximated by an interacting particle system. A truncated Euler scheme is then proposed for the interacting particle system, allowing for a Khasminskii-type condition on the coefficients. To reduce the computational cost, the random batch approximation proposed in [Jin et al., J. Comput. Phys., 400(1), 2020] is extended to interacting particle systems in which the interaction may also appear in the diffusion term. Convergence of order almost one-half is proved in the $L^p$ sense. Numerical tests are performed to verify the theoretical results.
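The particle approximation described above can be sketched in a few lines. The following is a minimal illustration, not the paper's algorithm: it uses a plain (untruncated) Euler-Maruyama step, approximates the measure dependence by an empirical mean, and optionally restricts the interaction to random batches in the spirit of the random batch method. The coefficient signature `b(x, m)`, `sigma(x, m)` (state and empirical mean) is an assumption for illustration.

```python
import numpy as np

def interacting_particle_euler(N, T, n_steps, b, sigma, x0, batch_size=None, seed=0):
    """Euler-Maruyama for an interacting particle system approximating a
    scalar McKean-Vlasov SDE.  The measure dependence is replaced by the
    empirical mean over all particles, or over a random batch when
    batch_size is given (random batch approximation)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(N, x0, dtype=float)
    for _ in range(n_steps):
        if batch_size is None:
            mean = np.full(N, X.mean())          # full interaction
        else:
            perm = rng.permutation(N)            # reshuffle batches each step
            mean = np.empty(N)
            for i in range(0, N, batch_size):
                idx = perm[i:i + batch_size]
                mean[idx] = X[idx].mean()        # interact within the batch only
        dW = rng.normal(0.0, np.sqrt(dt), size=N)
        X = X + b(X, mean) * dt + sigma(X, mean) * dW
    return X
```

With a drift pulling each particle toward the (batch) mean, the empirical mean stays near the initial value while the particles diffuse around it.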

Related content

The IFIP TC13 Conference on Human-Computer Interaction (INTERACT) is a major venue for researchers and practitioners in human-computer interaction to present their work. Over the years, these conferences have attracted researchers from many countries and cultures.
July 18, 2023

An important open question in human-robot interaction (HRI) is precisely when an agent should decide to communicate, particularly in a cooperative task. Perceptual Control Theory (PCT) tells us that agents are able to cooperate on a joint task simply by sharing the same 'intention', thereby distributing the effort required to complete the task among the agents. This is even true for agents that do not possess the same abilities, so long as the goal is observable, the combined actions are sufficient to complete the task, and there is no local minimum in the search space. If these conditions hold, then a cooperative task can be accomplished without any communication between the contributing agents. However, for tasks that do contain local minima, the global solution can only be reached if at least one of the agents adapts its intention at the appropriate moments, and this can only be achieved by appropriately timed communication. In other words, it is hypothesised that in cooperative tasks, the function of communication is to coordinate actions in a complex search space that contains local minima. These principles have been verified in a computer-based simulation environment in which two independent one-dimensional agents are obliged to cooperate in order to solve a two-dimensional path-finding task.
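The no-communication case described above can be illustrated with a toy sketch: two independent agents each control one dimension of a shared point and pursue the same goal ("shared intention"). With no obstacles there is no local minimum, so the goal alone coordinates them. This is only a schematic of the PCT setting; the paper's simulation environment is richer.

```python
def cooperate(goal, start=(0.0, 0.0), step=0.1, n_steps=200):
    """Two agents share an intention (reach `goal`) but each acts on
    only one axis; each step reduces that agent's own error, capped
    at a maximum movement of `step`."""
    clip = lambda v: max(-step, min(step, v))
    x, y = start
    gx, gy = goal
    for _ in range(n_steps):
        x += clip(gx - x)   # agent 1 controls only x
        y += clip(gy - y)   # agent 2 controls only y
    return x, y
```

Adding an obstacle that creates a local minimum (e.g. a wall the point must detour around) would stall this loop, which is exactly the situation where the paper argues communication becomes necessary.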

This paper is concerned with designing, analyzing, and implementing linear and nonlinear discretization schemes for the distributed optimal control problem (OCP) constrained by the Cahn-Hilliard (CH) equation. We propose three difference schemes to approximate the OCP for the CH equation and investigate the behaviour of its solution. We present a convergence analysis of the proposed discretizations and verify our findings through numerical experiments.

We initiate the study of Bayesian conversations, which model interactive communication between two strategic agents without a mediator. We compare this to communication through a mediator and investigate the settings in which mediation can expand the range of implementable outcomes. In the first part of the paper, we ask whether the distributions of posterior beliefs that can be induced by a mediator protocol can also be induced by a (unmediated) Bayesian conversation. We show this is not possible -- mediator protocols can ``correlate'' the posteriors in a way that unmediated conversations cannot. Additionally, we provide characterizations of which distributions over posteriors are achievable via mediator protocols and Bayesian conversations. In the second part of the paper, we delve deeper into the eventual outcome of two-player games after interactive communication. We focus on games where only one agent has a non-trivial action and examine the performance of communication protocols that are individually rational (IR) for both parties. We consider different levels of IR including ex-ante, interim, and ex-post; and we impose different restrictions on how Alice and Bob can deviate from the protocol: the players are committed/non-committed. Our key findings reveal that, in the cases of ex-ante and interim IR, the expected utilities achievable through a mediator are equivalent to those achievable through unmediated Bayesian conversations. However, in the models of ex-post IR and non-committed interim IR, we observe a separation in the achievable outcomes.

We formulate standard and multilevel Monte Carlo methods for the $k$th moment $\mathbb{M}^k_\varepsilon[\xi]$ of a Banach space valued random variable $\xi\colon\Omega\to E$, interpreted as an element of the $k$-fold injective tensor product space $\otimes^k_\varepsilon E$. For the standard Monte Carlo estimator of $\mathbb{M}^k_\varepsilon[\xi]$, we prove the $k$-independent convergence rate $1-\frac{1}{p}$ in the $L_q(\Omega;\otimes^k_\varepsilon E)$-norm, provided that (i) $\xi\in L_{kq}(\Omega;E)$ and (ii) $q\in[p,\infty)$, where $p\in[1,2]$ is the Rademacher type of $E$. By using the fact that Rademacher averages are dominated by Gaussian sums combined with a version of Slepian's inequality for Gaussian processes due to Fernique, we moreover derive corresponding results for multilevel Monte Carlo methods, including a rigorous error estimate in the $L_q(\Omega;\otimes^k_\varepsilon E)$-norm and the optimization of the computational cost for a given accuracy. Whenever the type of the Banach space $E$ is $p=2$, our findings coincide with known results for Hilbert space valued random variables. We illustrate the abstract results by three model problems: second-order elliptic PDEs with random forcing or random coefficient, and stochastic evolution equations. In these cases, the solution processes naturally take values in non-Hilbertian Banach spaces. Further applications, where physical modeling constraints impose a setting in Banach spaces of type $p<2$, are indicated.
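In finite dimensions the standard Monte Carlo estimator of $\mathbb{M}^k_\varepsilon[\xi]$ has a concrete form: the $k$-fold injective tensor product of $\mathbb{R}^d$ with itself can be identified with $k$-dimensional arrays, and the estimator averages $k$-fold outer products of the samples. The sketch below shows this finite-dimensional analogue only; the paper's setting is a general Banach space.

```python
import numpy as np

def mc_kth_moment(samples, k):
    """Standard Monte Carlo estimator of the k-th moment of a
    vector-valued random variable, viewed as an element of the k-fold
    tensor product (here: a k-dimensional array).

    samples: array of shape (n, d), n i.i.d. draws of xi in R^d.
    Returns an array of shape (d,)*k averaging the k-fold outer
    products xi (x) ... (x) xi."""
    n, d = samples.shape
    est = np.zeros((d,) * k)
    for x in samples:
        outer = x
        for _ in range(k - 1):
            outer = np.multiply.outer(outer, x)   # build xi^{(x)k}
        est += outer
    return est / n
```

For $k=2$ this reduces to the familiar (uncentered) second-moment matrix $\frac{1}{n}\sum_i \xi_i \xi_i^{\top}$.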

We present implicit and explicit versions of a numerical algorithm for solving a Volterra integro-differential equation. These algorithms extend our previous work and cater for a kernel of general form. We use an appropriate test equation to study the stability of both algorithms, deriving stability regions numerically. The region for the implicit method appears to be unbounded, while the explicit method has a bounded region close to the origin. We perform a few calculations to demonstrate our results.
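The shape of such an explicit scheme can be sketched as follows: advance $y'(t) = f(t, y) + \int_0^t K(t,s)\,y(s)\,ds$ with forward Euler, approximating the memory integral over the already-computed history by the trapezoidal rule. This is a generic sketch under those assumptions, not the paper's specific algorithm.

```python
import numpy as np

def volterra_ide_explicit(f, K, y0, T, n):
    """Forward-Euler scheme for y'(t) = f(t, y) + int_0^t K(t,s) y(s) ds.
    K(t, s) must accept a vector of quadrature nodes s."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        if i == 0:
            integral = 0.0
        else:
            w = np.full(i + 1, h)        # trapezoid weights on [0, t_i]
            w[0] = w[-1] = h / 2.0
            integral = np.dot(w, K(t[i], t[:i + 1]) * y[:i + 1])
        y[i + 1] = y[i] + h * (f(t[i], y[i]) + integral)
    return t, y
```

A quick sanity check: with $K \equiv 1$ and $f \equiv 0$, differentiating once gives $y'' = y$ with $y(0)=1$, $y'(0)=0$, so the scheme should track $\cosh t$.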

We consider gradient-related methods for low-rank matrix optimization with a smooth cost function. The methods operate on single factors of the low-rank factorization and share aspects of both alternating and Riemannian optimization. Two possible choices for the search directions based on Gauss-Southwell type selection rules are compared: one using the gradient of a factorized non-convex formulation, the other using the Riemannian gradient. While both methods provide gradient convergence guarantees that are similar to the unconstrained case, numerical experiments on a quadratic cost function indicate that the version based on the Riemannian gradient is significantly more robust with respect to small singular values and the condition number of the cost function. As a side result of our approach, we also obtain new convergence results for the alternating least squares method.
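A minimal sketch of the factorized variant, for the quadratic cost $f(X) = \frac12\|X - A\|_F^2$ used in the experiments: parametrize $X = LR^{\top}$ and, at each step, take a gradient step on whichever factor currently has the larger gradient norm (a Gauss-Southwell-type selection). This illustrates only the non-convex Euclidean-gradient version, not the Riemannian one, and the step size and initialization here are illustrative choices.

```python
import numpy as np

def factored_gradient_descent(A, r, steps=3000, lr=0.05, seed=0):
    """Block gradient descent on min_{L,R} 0.5 * ||L @ R.T - A||_F^2,
    updating per step only the factor with the larger gradient norm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    L = 0.1 * rng.standard_normal((m, r))
    R = 0.1 * rng.standard_normal((n, r))
    for _ in range(steps):
        E = L @ R.T - A              # residual
        gL, gR = E @ R, E.T @ L      # gradients w.r.t. L and R
        if np.linalg.norm(gL) >= np.linalg.norm(gR):
            L = L - lr * gL          # Gauss-Southwell: larger-gradient block
        else:
            R = R - lr * gR
    return L, R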

The criticality problem in nuclear engineering asks for the principal eigenpair of a Boltzmann operator describing neutron transport in a reactor core. Reliably designing and controlling such reactors requires assessing these quantities within quantifiable accuracy tolerances. In this paper we propose a paradigm that deviates from the common practice of approximately solving the corresponding spectral problem with a fixed, presumably sufficiently fine discretization. Instead, the present approach is based on first contriving iterative schemes, formulated in function space, that are shown to converge at a quantitative rate without assuming any a priori excess regularity properties, and that exploit only properties of the optical parameters in the underlying radiative transfer model. We develop the analytical and numerical tools for approximately realizing each iteration step within judiciously chosen accuracy tolerances, verified by a posteriori estimates, so as to still warrant quantifiable convergence to the exact eigenpair. This is carried out in full first for a Newton scheme. Since this is only locally convergent, we additionally analyze the convergence of a power iteration in function space to produce sufficiently accurate initial guesses. Here we have to deal with intrinsic difficulties posed by compact but unsymmetric operators, which prevent standard arguments used in the finite-dimensional case. Our main point is that we can avoid any condition that an initial guess already lie in a small neighborhood of the exact solution. We close with a discussion of remaining intrinsic obstructions to a certifiable numerical implementation, mainly related to not knowing the gap between the principal eigenvalue and the next smaller one in modulus.
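For orientation, here is the finite-dimensional analogue of the power iteration mentioned above: iterate $x \mapsto Ax/\|Ax\|$ to approximate the principal eigenpair. Its convergence rate is governed by the gap $|\lambda_2|/|\lambda_1|$, which is exactly the quantity the abstract notes is typically not known a priori. This sketch uses a Rayleigh-quotient eigenvalue estimate; it is not the paper's function-space iteration.

```python
import numpy as np

def power_iteration(A, tol=1e-12, max_iter=10000, seed=0):
    """Power iteration for the principal eigenpair of a square
    matrix A, with a Rayleigh-quotient estimate of the eigenvalue."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = x @ y                  # Rayleigh-quotient estimate
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            return lam_new, x
        lam = lam_new
    return lam, x
```

For unsymmetric (but, e.g., positive) operators, as in the transport setting above, the same recursion applies, but the standard symmetric convergence arguments no longer do; that is the difficulty the paper addresses.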

In contrast with the diffusion equation which smoothens the initial data to $C^\infty$ for $t>0$ (away from the corners/edges of the domain), the subdiffusion equation only exhibits limited spatial regularity. As a result, one generally cannot expect high-order accuracy in space in solving the subdiffusion equation with nonsmooth initial data. In this paper, a new splitting of the solution is constructed for high-order finite element approximations to the subdiffusion equation with nonsmooth initial data. The method is constructed by splitting the solution into two parts, i.e., a time-dependent smooth part and a time-independent nonsmooth part, and then approximating the two parts via different strategies. The time-dependent smooth part is approximated by using high-order finite element method in space and convolution quadrature in time, while the steady nonsmooth part could be approximated by using smaller mesh size or other methods that could yield high-order accuracy. Several examples are presented to show how to accurately approximate the steady nonsmooth part, including piecewise smooth initial data, Dirac--Delta point initial data, and Dirac measure concentrated on an interface. The argument could be directly extended to subdiffusion equations with nonsmooth source data. Extensive numerical experiments are presented to support the theoretical analysis and to illustrate the performance of the proposed high-order splitting finite element methods.
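One standard instance of such a splitting, shown here as a plausible sketch (the paper's construction may differ in its details): for the homogeneous subdiffusion problem with elliptic operator $A$ and nonsmooth initial datum $u_0$, subtract the initial datum and solve for the remainder, which starts from zero (hence compatible) initial data.

```latex
% Subdiffusion problem with nonsmooth initial datum u_0:
%   \partial_t^\alpha u + A u = 0, \qquad u(0) = u_0.
% Split off the time-independent nonsmooth part:
u(t) = \underbrace{u_0}_{\text{steady, nonsmooth}}
     + \underbrace{v(t)}_{\text{time-dependent}},
\qquad
\partial_t^\alpha v + A v = -A u_0, \quad v(0) = 0,
```

using that the Caputo derivative of the constant-in-time part vanishes. The steady part $u_0$ can then be treated by the specialized approximations listed above, while $v$ is discretized by high-order finite elements in space and convolution quadrature in time.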

Advances in artificial intelligence often stem from the development of new environments that abstract real-world situations into a form where research can be done conveniently. This paper contributes such an environment based on ideas inspired by elementary Microeconomics. Agents learn to produce resources in a spatially complex world, trade them with one another, and consume those that they prefer. We show that the emergent production, consumption, and pricing behaviors respond to environmental conditions in the directions predicted by supply and demand shifts in Microeconomics. We also demonstrate settings where the agents' emergent prices for goods vary over space, reflecting the local abundance of goods. After the price disparities emerge, some agents then discover a niche of transporting goods between regions with different prevailing prices -- a profitable strategy because they can buy goods where they are cheap and sell them where they are expensive. Finally, in a series of ablation experiments, we investigate how choices in the environmental rewards, bartering actions, agent architecture, and ability to consume tradable goods can either aid or inhibit the emergence of this economic behavior. This work is part of the environment development branch of a research program that aims to build human-like artificial general intelligence through multi-agent interactions in simulated societies. By exploring which environment features are needed for the basic phenomena of elementary microeconomics to emerge automatically from learning, we arrive at an environment that differs from those studied in prior multi-agent reinforcement learning work along several dimensions. For example, the model incorporates heterogeneous tastes and physical abilities, and agents negotiate with one another as a grounded form of communication.

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation or neither of these. These findings cast doubts on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
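The basic diagnostic behind such a scaling study can be sketched simply: given the per-layer weight tensors $W_1,\dots,W_L$ of a trained ResNet, measure $\max_k \|W_{k+1} - W_k\|_F$ as a function of depth $L$. If the weights converge to a smooth function of depth, as the neural ODE limit assumes, this quantity should decay like $O(1/L)$; other decay rates point toward SDE or non-ODE limits. This is an illustrative probe, not the paper's full experimental protocol.

```python
import numpy as np

def max_weight_increment(weights):
    """Largest Frobenius norm of successive layer-weight differences,
    max_k ||W_{k+1} - W_k||_F, for a list of per-layer weight arrays."""
    return max(float(np.linalg.norm(w1 - w0))
               for w0, w1 in zip(weights, weights[1:]))
```

As a sanity check on synthetic weights sampled from a smooth depth profile $W_k = \sin(k/L)\,\mathbf{1}$, the maximal increment scales like $1/L$, so deepening the network tenfold shrinks it about tenfold.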
