
The Fokker-Planck equation describes the evolution of the probability density associated with a stochastic differential equation. As the dimension of the system grows, solving this partial differential equation (PDE) using conventional numerical methods becomes computationally prohibitive. Here, we introduce a fast, scalable, and interpretable method for solving the Fokker-Planck equation which is applicable in higher dimensions. This method approximates the solution as a linear combination of shape-morphing Gaussians with time-dependent means and covariances. These parameters evolve according to the method of reduced-order nonlinear solutions (RONS) which ensures that the approximate solution stays close to the true solution of the PDE for all times. As such, the proposed method approximates the transient dynamics as well as the equilibrium density, when the latter exists. Our approximate solutions can be viewed as an evolution on a finite-dimensional statistical manifold embedded in the space of probability densities. We show that the metric tensor in RONS coincides with the Fisher information matrix on this manifold. We also discuss the interpretation of our method as a shallow neural network with Gaussian activation functions and time-varying parameters. In contrast to existing deep learning methods, our method is interpretable, requires no training, and automatically ensures that the approximate solution satisfies all properties of a probability density.
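To make the construction concrete, the ansatz and evolution equations take the following schematic form (the inner product and notation here are simplifying assumptions of this sketch, not necessarily the paper's exact formulation):
$$ \hat p(x,t) = \sum_{i=1}^{r} w_i\, \mathcal N\big(x;\, \mu_i(t), \Sigma_i(t)\big), \qquad M(\theta)\,\dot\theta = f(\theta), $$
$$ M_{jk}(\theta) = \left\langle \frac{\partial \hat p}{\partial \theta_j}, \frac{\partial \hat p}{\partial \theta_k} \right\rangle, \qquad f_k(\theta) = \left\langle \mathcal F(\hat p), \frac{\partial \hat p}{\partial \theta_k} \right\rangle, $$
where $\theta$ collects the means and covariances, $\mathcal F$ is the Fokker-Planck operator, and $M$ is the metric tensor that, with respect to a suitable inner product, coincides with the Fisher information matrix.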

Related content

We are interested in creating statistical methods to provide informative summaries of random fields through the geometry of their excursion sets. To this end, we introduce an estimator for the length of the perimeter of excursion sets of random fields on $\mathbb{R}^2$ observed over regular square tilings. The proposed estimator acts on the empirically accessible binary digital images of the excursion regions and computes the length of a piecewise linear approximation of the excursion boundary. The estimator is shown to be consistent as the pixel size decreases, without the need for any normalization constant and with no assumption of Gaussianity or isotropy imposed on the underlying random field. In this general framework, even when the domain grows to cover $\mathbb{R}^2$, the estimation error is shown to be of smaller order than the side length of the domain. For affine, strongly mixing random fields, this translates to a multivariate central limit theorem for our estimator when multiple levels are considered simultaneously. Finally, we conduct several numerical studies to investigate the statistical properties of the proposed estimator in the finite-sample setting.
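As a rough illustration of this type of estimator, the sketch below sums the segment lengths of a piecewise linear contour extracted by marching squares (scikit-image's find_contours). It is a simplified stand-in operating on the sampled field rather than the paper's binary-image construction, and the smoothed-noise test field is purely illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure  # marching squares

def excursion_perimeter(field, level, pixel_size=1.0):
    """Length of a piecewise linear approximation of the boundary of the
    excursion set {field >= level}, summed over all contour components."""
    total = 0.0
    for contour in measure.find_contours(field, level):
        steps = np.diff(contour, axis=0)             # consecutive vertex offsets
        total += np.sqrt((steps ** 2).sum(axis=1)).sum()
    return pixel_size * total

# illustrative field: smoothed white noise sampled on a 256 x 256 square tiling
rng = np.random.default_rng(0)
field = gaussian_filter(rng.standard_normal((256, 256)), sigma=8)
print(excursion_perimeter(field, level=0.0, pixel_size=1.0 / 256))
```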

We consider the numerical solution of the real time equilibrium Dyson equation, which is used in calculations of the dynamical properties of quantum many-body systems. We show that this equation can be written as a system of coupled, nonlinear, convolutional Volterra integro-differential equations, for which the kernel depends self-consistently on the solution. As is typical in the numerical solution of Volterra-type equations, the computational bottleneck is the quadratic-scaling cost of history integration. However, the structure of the nonlinear Volterra integral operator precludes the use of standard fast algorithms. We propose a quasilinear-scaling FFT-based algorithm which respects the structure of the nonlinear integral operator. The resulting method can reach large propagation times, and is thus well-suited to explore quantum many-body phenomena at low energy scales. We demonstrate the solver with two standard model systems: the Bethe graph, and the Sachdev-Ye-Kitaev model.
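To see where the quasilinear scaling comes from, the toy sketch below evaluates the history term $\int_0^t k(t-s)\, y(s)\, ds$ on a uniform grid both naively, at $O(N^2)$ cost, and with a zero-padded FFT, at $O(N \log N)$. It covers only the linear, fixed-kernel case; in the Dyson equation the kernel depends self-consistently on the solution, which is precisely the structure the proposed algorithm is built to respect.

```python
import numpy as np

def history_naive(k, y, dt):
    # I[n] ~ dt * sum_{m <= n} k[n - m] * y[m]  (rectangle rule), O(N^2)
    N = len(y)
    return np.array([dt * np.dot(k[:n + 1][::-1], y[:n + 1]) for n in range(N)])

def history_fft(k, y, dt):
    # the same discrete convolution evaluated via zero-padded FFT, O(N log N)
    N = len(y)
    K, Y = np.fft.rfft(k, 2 * N), np.fft.rfft(y, 2 * N)
    return dt * np.fft.irfft(K * Y, 2 * N)[:N]

t = np.linspace(0.0, 10.0, 4096)
k, y = np.exp(-t), np.sin(t)
assert np.allclose(history_naive(k, y, t[1] - t[0]), history_fft(k, y, t[1] - t[0]))
```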

In this article, we construct a numerical method for a stochastic version of the Susceptible-Infected-Susceptible (SIS) epidemic model, expressed by a suitable stochastic differential equation (SDE), by applying the semi-discrete method to a suitably transformed process. We prove the strong convergence of the proposed method, with order $1$, and examine its stability properties. Since SDEs generally lack analytical solutions, numerical techniques are commonly employed. We therefore seek numerical solutions for existing stochastic models by constructing suitable numerical schemes and comparing them with existing alternatives, with the goal of a qualitatively faithful and efficient approach to solving the equations. Additionally, for models that have not yet been formulated as SDEs, we formulate them appropriately, analyze their theoretical properties, and then solve the corresponding SDEs numerically.
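For reference, a minimal Euler-Maruyama simulation of a standard stochastic SIS equation (in the form of Gray et al., 2011) is sketched below, with hypothetical parameter values. This is the naive baseline, not the semi-discrete scheme constructed in the article.

```python
import numpy as np

# Stochastic SIS model (Gray et al., 2011):
#   dI = [beta*I*(N - I) - (mu + gamma)*I] dt + sigma*I*(N - I) dW
def euler_maruyama_sis(I0, T, steps, beta, mu, gamma, sigma, N, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    I = np.empty(steps + 1)
    I[0] = I0
    for n in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))                  # Brownian increment
        drift = beta * I[n] * (N - I[n]) - (mu + gamma) * I[n]
        I[n + 1] = I[n] + drift * dt + sigma * I[n] * (N - I[n]) * dW
    return I

# hypothetical parameters, purely for illustration
path = euler_maruyama_sis(I0=10.0, T=20.0, steps=4000,
                          beta=0.002, mu=0.05, gamma=0.1, sigma=5e-4, N=100.0)
```

Unlike plain Euler-Maruyama, a scheme built on a transformed process can be designed to keep the iterates inside the invariant domain $0 < I < N$, which is part of the motivation for that construction.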

Recent tropical cyclones, e.g., Hurricane Harvey (2017), have led to significant rainfall and resulting runoff with accompanying flooding. When the runoff interacts with storm surge, the resulting floods can be greatly amplified and lead to effects that cannot be modeled by simple superposition of their distinct sources. In an effort to develop accurate numerical simulations of runoff, surge, and compound floods, we develop a local discontinuous Galerkin method for modified shallow water equations. In this modification, nonzero sources in the continuity equation incorporate rainfall into the model, using parametric rainfall models from the literature as well as hindcast data. The discontinuous Galerkin spatial discretization is paired with a strong stability preserving explicit Runge-Kutta time integrator; temporal stability is thus ensured through the CFL condition, and we exploit the embarrassingly parallel nature of the developed method using MPI parallelization. We demonstrate the capabilities of the developed method through a sequence of physically relevant numerical tests, including small-scale test cases based on laboratory measurements and large-scale experiments with Hurricane Harvey in the Gulf of Mexico. The results highlight the conservation properties and robustness of the developed method and show the potential of compound flood modeling using our approach.
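The time integrator referenced above is typically the three-stage Shu-Osher SSP Runge-Kutta scheme; a minimal sketch follows, applied to a periodic first-order upwind advection semi-discretization standing in for the DG operator (the specific scheme and test problem are assumptions of this sketch).

```python
import numpy as np

def ssprk3_step(L, u, dt):
    """One step of the three-stage Shu-Osher SSP Runge-Kutta scheme for a
    spatial semi-discretization du/dt = L(u)."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# toy usage: periodic linear advection u_t + a u_x = 0, first-order upwind
a, dx = 1.0, 0.01
x = np.arange(0.0, 1.0, dx)
u = np.exp(-100.0 * (x - 0.5) ** 2)
L = lambda v: -a * (v - np.roll(v, 1)) / dx
for _ in range(50):
    u = ssprk3_step(L, u, dt=0.5 * dx / a)   # CFL-limited time step
```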

The virtual element method was introduced ten years ago and has since generated a large number of theoretical results and applications. Here, we overview the main mathematical results concerning the stabilization term of the method, as an introduction for newcomers to the field. In particular, we summarize the proofs of some results for two-dimensional ``nodal'' conforming and nonconforming virtual element spaces to pinpoint the essential tools used in the stability analysis, and we discuss their extensions to several other virtual elements. Finally, we show several ways to prove interpolation estimates, including a recent one based on employing the stability bounds.
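In the standard two-dimensional conforming setting, the element bilinear form splits into a consistency part acting on a polynomial projection and the stabilization acting on its kernel; schematically (with notation assumed here),
$$ a_h^E(u_h, v_h) = a^E\big(\Pi^\nabla u_h,\, \Pi^\nabla v_h\big) + S^E\big((I - \Pi^\nabla) u_h,\, (I - \Pi^\nabla) v_h\big), \qquad S^E(u, v) = \sum_i \mathrm{dof}_i(u)\, \mathrm{dof}_i(v), $$
where the second formula is the classical ``dofi-dofi'' choice. The stability analysis then hinges on spectral equivalences of the form $c_*\, a^E(v, v) \le S^E(v, v) \le c^*\, a^E(v, v)$ on the kernel of the projector.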

Deep operator networks (DeepONets) have demonstrated their capability of approximating nonlinear operators for initial- and boundary-value problems. One attractive feature of DeepONets is their versatility: they do not rely on prior knowledge about the solution structure of a problem and can thus be directly applied to a large class of problems. However, convergence in identifying the parameters of the networks may sometimes be slow. To improve on DeepONets for approximating the wave equation, we introduce Green operator networks (GreenONets), which use the representation of the exact solution to the homogeneous wave equation in terms of the Green's function. The performance of GreenONets and DeepONets is compared in a series of numerical experiments for homogeneous and heterogeneous media in one and two dimensions.
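For the homogeneous wave equation $u_{tt} = c^2 \Delta u$ with data $u(\cdot, 0) = u_0$ and $u_t(\cdot, 0) = v_0$, the representation in question can be written (in the free-space form assumed here) as
$$ u(x, t) = \int_\Omega \Big[ \partial_t G(x, y, t)\, u_0(y) + G(x, y, t)\, v_0(y) \Big]\, dy, $$
where $G$ solves the wave equation in $(x, t)$ with $G(x, y, 0) = 0$ and $\partial_t G(x, y, 0) = \delta(x - y)$. GreenONets bake this structure into the network rather than learning the operator unconstrained; the paper's exact parametrization may differ from this sketch.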

Machine learning models often need to be robust to noisy input data. The effect of real-world noise (which is often random) on model predictions is captured by a model's local robustness, i.e., the consistency of model predictions in a local region around an input. However, the na\"ive approach to computing local robustness based on Monte-Carlo sampling is statistically inefficient, leading to prohibitive computational costs for large-scale applications. In this work, we develop the first analytical estimators to efficiently compute local robustness of multi-class discriminative models using local linear function approximation and the multivariate Normal CDF. Through the derivation of these estimators, we show how local robustness is connected to concepts such as randomized smoothing and softmax probability. We also confirm empirically that these estimators accurately and efficiently compute the local robustness of standard deep learning models. In addition, we demonstrate these estimators' usefulness for various tasks involving local robustness, such as measuring robustness bias and identifying examples that are vulnerable to noise perturbation in a dataset. By developing these analytical estimators, this work not only advances conceptual understanding of local robustness, but also makes its computation practical, enabling the use of local robustness in critical downstream applications.
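A minimal sketch of the idea, under the assumptions that the model is linearized at an input (logits $f(x)$ with a hypothetical Jacobian $J$) and the noise is isotropic Gaussian: the pairwise class margins are then jointly Gaussian, so the probability that the prediction is unchanged reduces to a multivariate normal CDF. The estimators in the paper may differ in details.

```python
import numpy as np
from scipy.stats import multivariate_normal

def local_robustness(logits, J, sigma):
    """P(argmax f(x + e) == argmax f(x)) for f(x + e) ~ logits + J @ e,
    e ~ N(0, sigma^2 I), via a multivariate normal orthant probability."""
    c = np.argmax(logits)
    rest = [k for k in range(len(logits)) if k != c]
    D = J[c] - J[rest]                    # margin gradients, shape (K-1, d)
    mean = logits[c] - logits[rest]       # margin means
    cov = sigma ** 2 * (D @ D.T)          # margin covariance
    # all margins stay positive  <=>  a N(0, cov) draw is <= mean componentwise
    return multivariate_normal(mean=np.zeros(len(rest)), cov=cov,
                               allow_singular=True).cdf(mean)

# toy 3-class example with a hypothetical local Jacobian
logits = np.array([2.0, 1.0, 0.5])
J = np.array([[1.0, 0.0], [0.5, 0.5], [0.2, 0.9]])
print(local_robustness(logits, J, sigma=0.3))
```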

We consider estimation of parameters defined as linear functionals of solutions to linear inverse problems. Any such parameter admits a doubly robust representation that depends on the solution to a dual linear inverse problem, where the dual solution can be thought of as a generalization of the inverse propensity function. We provide the first source-condition double robust inference method that ensures asymptotic normality around the parameter of interest as long as either the primal or the dual inverse problem is sufficiently well-posed, without knowledge of which of the two is the more well-posed one. Our result is enabled by novel guarantees for iterated Tikhonov regularized adversarial estimators for linear inverse problems over general hypothesis spaces, which are of independent interest.
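For orientation, the classical iterated Tikhonov recursion for a linear inverse problem $Ax = b$ over a Hilbert space reads
$$ x^{(j)} = \arg\min_x\, \|A x - b\|^2 + \lambda \|x - x^{(j-1)}\|^2, \qquad x^{(0)} = 0, \quad\Longleftrightarrow\quad (A^* A + \lambda I)\, x^{(j)} = A^* b + \lambda\, x^{(j-1)}; $$
the adversarial estimators developed in the paper generalize this recursion to general hypothesis spaces, so the display above is only the textbook special case.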

This manuscript portrays optimization as a process. In many practical applications, the environment is so complex that it is infeasible to lay out a comprehensive theoretical model and use classical algorithmic theory and mathematical optimization. It is then both necessary and beneficial to take a robust approach: apply an optimization method that learns as it goes, drawing on experience as more aspects of the problem are observed. This view of optimization as a process has become prominent in varied fields and has led to spectacular successes in models and systems that are now part of our daily lives.

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
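The layer-to-layer iteration equations referred to here are, at leading order in width, recursions for the two-point correlator of preactivations; in the simplified infinite-width notation assumed for this sketch,
$$ K^{(\ell+1)}(x, x') = C_b + C_W\, \mathbb{E}_{z \sim \mathcal N(0,\, K^{(\ell)})}\big[\phi\big(z(x)\big)\, \phi\big(z(x')\big)\big], $$
where $\phi$ is the activation and $(C_b, C_W)$ are the bias and weight initialization variances. Criticality amounts to tuning $(C_b, C_W)$ so that this recursion neither grows nor decays signals with depth; for ReLU it recovers the familiar choice $C_W = 2$, $C_b = 0$.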
