
The simulation of chemical kinetics involving multiple scales constitutes a modeling challenge (from ordinary differential equations to Markov chains) and a computational challenge (multiple scales, large dynamical systems, time step restrictions). In this paper we propose a new discrete stochastic simulation algorithm: the postprocessed second kind stabilized orthogonal $\tau$-leap Runge-Kutta method (PSK-$\tau$-ROCK). In the context of chemical kinetics this method can be seen as a stabilization of Gillespie's explicit $\tau$-leap combined with a postprocessor. The stabilized procedure allows one to simulate problems with multiple scales (stiff problems), while the postprocessing procedure allows one to approximate the invariant measure (e.g., mean and variance) of ergodic stochastic dynamical systems. We prove stability and accuracy of the PSK-$\tau$-ROCK. Numerical experiments illustrate the high reliability and efficiency of the scheme when compared to other $\tau$-leap methods.
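
For readers unfamiliar with the baseline, the following is a minimal sketch of the plain explicit $\tau$-leap step that PSK-$\tau$-ROCK stabilizes, not the stabilized method itself; the reversible isomerization system, rate constants, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reversible isomerization S1 <-> S2 (assumed example system).
c = np.array([1.0, 0.5])            # rate constants (assumed)
nu = np.array([[-1, 1],             # state change of reaction 1: S1 -> S2
               [1, -1]])            # state change of reaction 2: S2 -> S1

def propensities(x):
    return np.array([c[0] * x[0], c[1] * x[1]])

def tau_leap(x0, tau, n_steps):
    """Plain explicit tau-leap: fire Poisson(a_j * tau) copies of reaction j."""
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        k = rng.poisson(propensities(x) * tau)  # firings per channel
        x = np.maximum(x + nu.T @ k, 0.0)       # crude guard against negatives
    return x

print(tau_leap([1000, 0], tau=0.01, n_steps=500))  # ~ equilibrium [1000/3, 2000/3]
```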

Related Content

We examine a variety of numerical methods that arise when considering dynamical systems in the context of physics-based simulations of deformable objects. Such problems arise in various applications, including animation, robotics, control, and fabrication. The goals and merits of suitable numerical algorithms for these applications differ from those of typical numerical analysis research in dynamical systems. Here the mathematical model is not fixed a priori but must be adjusted as necessary to capture the desired behaviour, with an emphasis on effectively producing lively animations of objects with complex geometries. Results are often judged by how realistic they appear to observers (by the "eye-norm") as well as by the efficacy of the numerical procedures employed. And yet, we show that with an adjusted view, numerical analysis and applied mathematics can contribute significantly to the development of appropriate methods and their analysis in a variety of areas, including finite element methods, stiff and highly oscillatory ODEs, model reduction, and constrained optimization.
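
As a tiny illustration of the stiff-ODE point, the sketch below compares forward and backward Euler on a stiff damped spring; the spring constants and step size are assumptions chosen to put forward Euler outside its stability region.

```python
import numpy as np

# Stiff damped spring x'' = -k*x - c*x', written as y' = A y (assumed toy model).
k, c, h = 1e4, 10.0, 1e-2
A = np.array([[0.0, 1.0], [-k, -c]])
I = np.eye(2)

y_exp = np.array([1.0, 0.0])
y_imp = np.array([1.0, 0.0])
for _ in range(100):
    y_exp = y_exp + h * A @ y_exp              # forward Euler: diverges at this h
    y_imp = np.linalg.solve(I - h * A, y_imp)  # backward Euler: stays bounded

print(abs(y_exp).max(), abs(y_imp).max())
```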

Quantum computing is a promising technology that harnesses the peculiarities of quantum mechanics to deliver computational speedups for some problems that are intractable to solve on a classical computer. Current generation noisy intermediate-scale quantum (NISQ) computers are severely limited in terms of chip size and error rates. Shallow quantum circuits with uncomplicated topologies are essential for successful applications in the NISQ era. Based on matrix analysis, we derive localized circuit transformations to efficiently compress quantum circuits for the simulation of certain spin Hamiltonians known as free-fermion models. The depth of the compressed circuits is independent of the simulation time and grows linearly with the number of spins. The proposed numerical circuit compression algorithm is backward stable and scales cubically in the number of spins, enabling circuit synthesis beyond $\mathcal{O}(10^3)$ spins. The resulting quantum circuits have a simple nearest-neighbor topology, which makes them ideally suited for NISQ devices.
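
A hedged sketch of the structural fact such compression exploits (not the paper's algorithm): for free-fermion models, the dynamics is generated by an $n \times n$ one-body matrix rather than a $2^n \times 2^n$ operator, so gate counts can be tied to $n$ and made independent of the simulation time; the nearest-neighbor hopping Hamiltonian below is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import expm

n = 8                                  # number of spins/modes (illustrative)
# One-body matrix of a nearest-neighbor hopping Hamiltonian (assumed example).
h = np.zeros((n, n))
for i in range(n - 1):
    h[i, i + 1] = h[i + 1, i] = 1.0

t = 100.0                              # simulation time
U = expm(-1j * t * h)                  # n x n one-body evolution, not 2^n x 2^n

# U is unitary and fully determines the free-fermion dynamics; decomposing it
# into Givens rotations yields O(n^2) nearest-neighbor gates, independent of t.
print(np.allclose(U.conj().T @ U, np.eye(n)))
```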

We consider a nonlocal evolution equation representing the continuum limit of a large ensemble of interacting particles on graphs forced by noise. The two principal ingredients of the continuum model are a nonlocal term and a Q-Wiener process, describing the interactions among the particles in the network and the stochastic forcing, respectively. The network connectivity is given by a square integrable function called a graphon. We prove that the initial value problem for the continuum model is well-posed. Further, we construct a semidiscrete (discrete in space, continuous in time) scheme and a fully discrete scheme for the nonlocal model. The former is obtained by a discontinuous Galerkin method and the latter is based on further discretizing time using the Euler-Maruyama method. We prove convergence and estimate the rate of convergence in each case. For the semidiscrete scheme, the rate of convergence estimate is expressed in terms of the regularity of the graphon, the Q-Wiener process, and the initial data. We work in generalized Lipschitz spaces, which allows us to treat models with data of lower regularity. This is important for applications, as many interesting types of connectivity, including small-world and power-law, are expressed by graphons that are not smooth. The error analysis of the fully discrete scheme, on the other hand, reveals that for some models common in applied science, one has a higher speed of convergence than that predicted by the standard estimates for the Euler-Maruyama method. The rate of convergence analysis is supplemented with detailed numerical experiments, which are consistent with our analytical results. As a by-product, this work presents a rigorous justification for taking the continuum limit for a large class of interacting dynamical systems on graphs subject to noise.
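
A minimal sketch of the fully discrete idea under simplifying assumptions: $n$ nodes coupled through a graphon kernel evaluated on a uniform grid, integrated with Euler-Maruyama; the singular power-law kernel, interaction function, and independent node-wise noise (standing in for the Q-Wiener process) are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n, T, dt = 200, 1.0, 1e-3
xs = (np.arange(n) + 0.5) / n                  # uniform grid on [0, 1]

# Illustrative non-smooth (power-law-like) graphon, capped for stability.
d = np.abs(xs[:, None] - xs[None, :])
np.fill_diagonal(d, 1.0)                       # avoid 0**(-0.4) on the diagonal
W = np.minimum(d ** -0.4, 10.0)
np.fill_diagonal(W, 0.0)

u = np.sin(2 * np.pi * xs)                     # initial data (assumed)
for _ in range(int(T / dt)):
    drift = (W * np.sin(u[None, :] - u[:, None])).mean(axis=1)    # nonlocal term
    u += dt * drift + 0.1 * np.sqrt(dt) * rng.standard_normal(n)  # Euler-Maruyama

print(u[:5])
```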

We establish uniform error bounds of time-splitting Fourier pseudospectral (TSFP) methods for the nonlinear Klein--Gordon equation (NKGE) with weak power-type nonlinearity and $O(1)$ initial data, where the nonlinearity strength is characterized by $\varepsilon^{p}$ with a constant $p \in \mathbb{N}^+$ and a dimensionless parameter $\varepsilon \in (0, 1]$, for the long-time dynamics up to the time at $O(\varepsilon^{-\beta})$ with $0 \leq \beta \leq p$. In fact, when $0 < \varepsilon \ll 1$, the problem is equivalent to the long-time dynamics of the NKGE with small initial data and $O(1)$ nonlinearity strength, where the amplitude of the initial data (and the solution) is at $O(\varepsilon)$. By reformulating the NKGE into a relativistic nonlinear Schr\"{o}dinger equation, we adapt the TSFP method to discretize it numerically. Using mathematical induction to bound the numerical solution, we prove uniform error bounds at $O(h^{m}+\varepsilon^{p-\beta}\tau^2)$ for the TSFP method, where $h$ is the mesh size, $\tau$ is the time step, and $m\ge2$ depends on the regularity of the solution. The error bounds are uniformly accurate for the long-time simulation up to the time at $O(\varepsilon^{-\beta})$ and uniformly valid for $\varepsilon\in(0,1]$. In particular, the error bounds are uniformly second order for the large time step $\tau = O(\varepsilon^{-(p-\beta)/2})$ in the parameter regime $0\le\beta <p$. Numerical results are reported to confirm our error bounds in the long-time regime. Finally, the TSFP method and its error bounds are extended to a highly oscillatory complex NKGE which propagates waves with wavelength at $O(1)$ in space and $O(\varepsilon^{\beta})$ in time and wave velocity at $O(\varepsilon^{-\beta})$.
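
To fix ideas, here is a generic Strang time-splitting Fourier pseudospectral step applied to a cubic NLS stand-in with an $\varepsilon^p$ nonlinearity; the actual relativistic reformulation of the NKGE has a different linear symbol, and the equation, parameters, and initial data below are assumptions.

```python
import numpy as np

# Strang splitting for i u_t = -u_xx + eps^p |u|^2 u on a periodic interval
# (a cubic NLS stand-in for the relativistic reformulation; eps, p assumed).
M, L = 256, 2 * np.pi
x = L * np.arange(M) / M
k = np.fft.fftfreq(M, d=L / M) * 2 * np.pi
eps, p, tau = 0.5, 2, 1e-3

def strang_step(u):
    u = np.fft.ifft(np.exp(-0.5j * tau * k**2) * np.fft.fft(u))  # half linear flow
    u = u * np.exp(-1j * tau * eps**p * np.abs(u) ** 2)          # full nonlinear flow
    u = np.fft.ifft(np.exp(-0.5j * tau * k**2) * np.fft.fft(u))  # half linear flow
    return u

u = np.exp(1j * x) / (2 + np.cos(x))   # smooth periodic initial data (assumed)
for _ in range(1000):
    u = strang_step(u)
print(np.abs(u).max())
```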

A simple third order compact finite element method is proposed for one-dimensional Sturm-Liouville boundary value problems. The key idea is based on an interpolation error estimate, which can be related to the source term. Thus, a simple a posteriori error analysis, or modified basis functions based on the original piecewise linear basis functions, leads to a third order accurate solution in the $L^2$ norm, and second order in the $H^1$ or the energy norm. Numerical examples confirm our analysis.
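
For comparison, a standard piecewise linear FEM baseline for $-u'' = f$ with homogeneous Dirichlet conditions; with one-point quadrature for the load, nodal accuracy is second order. The paper's third-order modification of the basis functions is not reproduced, and the test problem is an assumption.

```python
import numpy as np

# Piecewise linear FEM for -u'' = f on (0,1), u(0)=u(1)=0 (baseline only).
# Test problem (assumed): exact solution u = sin(pi x), so f = pi^2 sin(pi x).
n = 64
h = 1.0 / n
x = np.linspace(0, 1, n + 1)
f = lambda s: np.pi**2 * np.sin(np.pi * s)

# Tridiagonal stiffness matrix and lumped (one-point quadrature) load vector.
A = (np.diag(2 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
     - np.diag(np.ones(n - 2), -1)) / h
b = h * f(x[1:-1])

u = np.linalg.solve(A, b)
err = np.max(np.abs(u - np.sin(np.pi * x[1:-1])))
print(f"max nodal error: {err:.2e}")   # O(h^2) for this baseline
```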

Scientific machine learning has been successfully applied to inverse problems and PDE discovery in computational physics. One caveat of current methods, however, is the need for large amounts of (clean) data in order to recover full system responses or underlying physical models. Bayesian methods may be particularly promising for overcoming these challenges, as they are naturally less sensitive to sparse and noisy data. In this paper, we propose to use Bayesian neural networks (BNN) in order to: 1) Recover the full system states from measurement data (e.g. temperature, velocity field, etc.). We use Hamiltonian Monte-Carlo to sample the posterior distribution of a deep and dense BNN, and show that it is possible to accurately capture physics of varying complexity without overfitting. 2) Recover the parameters in the underlying partial differential equation (PDE) governing the physical system. Using the trained BNN as a surrogate of the system response, we generate datasets of derivatives potentially comprising the latent PDE of the observed system and perform a Bayesian linear regression (BLR) between the successive derivatives in space and time to recover the original PDE parameters. We take advantage of the confidence intervals on the BNN outputs and introduce the spatial derivative variance into the BLR likelihood to discard the influence of highly uncertain surrogate data points, which allows for more accurate parameter discovery. We demonstrate our approach on a handful of examples from physics and non-linear dynamics.
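
A minimal sketch of the BLR step in isolation: the derivative "library" here comes from synthetic heat-equation data rather than a BNN surrogate, and the per-point precision weights stand in for the derivative-variance weighting described above; the priors and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data obeying u_t = 0.1 * u_xx (stand-in for BNN surrogate outputs).
N = 500
u_xx = rng.standard_normal(N)
u_x = rng.standard_normal(N)            # spurious candidate term
u_t = 0.1 * u_xx + 0.01 * rng.standard_normal(N)

Phi = np.column_stack([u_x, u_xx])      # candidate derivative library
w = np.ones(N)                          # per-point precisions (uncertain -> small w)

# Conjugate Bayesian linear regression: prior N(0, alpha^{-1} I), noise precision beta.
alpha, beta = 1e-2, 1e4
S_inv = alpha * np.eye(2) + beta * Phi.T @ (w[:, None] * Phi)
mean = beta * np.linalg.solve(S_inv, Phi.T @ (w * u_t))
cov = np.linalg.inv(S_inv)

print("posterior mean coefficients:", mean)       # ~ [0, 0.1]
print("posterior std:", np.sqrt(np.diag(cov)))
```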

In this work, we study a class of stochastic processes that generalizes the Ornstein-Uhlenbeck processes, hereafter called \emph{Generalized Ornstein-Uhlenbeck Type Processes} and denoted by GOU type processes. We consider them driven by noise processes such as Brownian motion, a symmetric $\alpha$-stable L\'evy process, a general L\'evy process, and even a Poisson process. We give necessary and sufficient conditions on the memory kernel function for the time-stationarity and the Markov property of these processes. When the GOU type process is driven by a L\'evy noise we prove that it is infinitely divisible and exhibit its generating triplet. Several examples derived from the GOU type process are illustrated, showing some of their basic properties as well as some time series realizations. These examples also present their theoretical and empirical autocorrelation or normalized codifference functions, depending on whether the process has a finite or infinite second moment. We also present maximum likelihood and Bayesian estimation procedures for the so-called \emph{Cosine process}, a particular process in the class of GOU type processes. For the Bayesian estimation method, we consider the power series representation of Fox's H-function to better approximate the density function of an $\alpha$-stable distributed random variable. We consider four goodness-of-fit tests to help decide which \emph{Cosine process} (driven by a Gaussian or an $\alpha$-stable noise) best fits real data sets. Two applications of the GOU type model are presented: one based on the Apple company stock market price data and the other based on cardiovascular mortality data from Los Angeles County.
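
For the Gaussian special case, a minimal sketch of simulating the classical Ornstein-Uhlenbeck process by its exact transition and checking its autocorrelation; the memory-kernel and $\alpha$-stable generalizations that define the GOU family are not reproduced, and the parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Exact transition of the classical OU process dX = -theta*X dt + sigma dB
# (the Gaussian special case of the GOU family; parameters are illustrative).
theta, sigma, dt, n = 2.0, 1.0, 0.01, 5000
a = np.exp(-theta * dt)
s = sigma * np.sqrt((1 - a**2) / (2 * theta))  # exact one-step conditional std

x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + s * rng.standard_normal()

# Empirical lag-1 autocorrelation should match exp(-theta * dt).
acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(acf1, np.exp(-theta * dt))
```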

We introduce a method for learning minimal-dimensional dynamical models from high-dimensional time series data that lie on a low-dimensional manifold, as arises for many processes. For an arbitrary manifold, there is no smooth global coordinate representation, so following the formalism of differential topology we represent the manifold as an atlas of charts. We first partition the data into overlapping regions. Then undercomplete autoencoders are used to find low-dimensional coordinate representations for each region. We then use the data to learn dynamical models in each region, which together yield a global low-dimensional dynamical model. We apply this method to examples ranging from simple periodic dynamics to complex, nominally high-dimensional non-periodic bursting dynamics of the Kuramoto-Sivashinsky equation. We demonstrate that it: (1) can yield dynamical models of the lowest possible dimension, where previous methods generally cannot; (2) exhibits computational benefits including scalability, parallelizability, and adaptivity; and (3) separates state space into regions of distinct behaviours.
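
A linear toy sketch of the atlas idea, with local PCA standing in for the undercomplete autoencoders: a circle in $\mathbb{R}^{10}$ has no global one-dimensional chart, but each region admits one. The data, chart count, and the use of the known angle for partitioning (instead of learned overlapping regions) are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# A circle embedded in R^10: a 1-manifold with no global smooth chart (assumed data).
t = rng.uniform(0, 2 * np.pi, 2000)
X = np.column_stack([np.cos(t), np.sin(t)]) @ rng.standard_normal((2, 10))

n_charts, d = 8, 1
labels = np.digitize(t, np.linspace(0, 2 * np.pi, n_charts + 1)[1:-1])  # partition

for c in range(n_charts):
    Xc = X[labels == c]
    mu = Xc.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
    V = Vt[:d]                                 # local linear chart: z = V (x - mu)
    resid = np.abs((Xc - mu) - (Xc - mu) @ V.T @ V).max()
    # Residual well below the data scale => region is approximately 1-dimensional.
    print(f"chart {c}: max off-chart residual {resid:.3f}")
```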

Sampling methods (e.g., node-wise, layer-wise, or subgraph) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on the graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that both types of variance must be mitigated to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance, and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and achieves better generalization compared to existing methods.
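
A numpy toy illustrating the forward-stage term: neighbor-sampled mean aggregation is noisy, and a historical-embedding control variate (one concrete way to reduce the variance introduced by embedding approximation, not the paper's full algorithm) shrinks it. The embeddings, staleness level, and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Estimate the mean over a node's neighbors from a small sample, with and
# without a historical-embedding control variate.
n_nbrs, dim, s = 200, 16, 10
H = rng.standard_normal((n_nbrs, dim))                  # current embeddings
H_hist = H + 0.05 * rng.standard_normal((n_nbrs, dim))  # slightly stale history

exact = H.mean(axis=0)
errs_plain, errs_cv = [], []
for _ in range(2000):
    idx = rng.choice(n_nbrs, size=s, replace=False)
    plain = H[idx].mean(axis=0)
    # Control variate: full mean of history + sampled correction (H - H_hist),
    # unbiased, with variance driven by the small residual H - H_hist.
    cv = H_hist.mean(axis=0) + (H[idx] - H_hist[idx]).mean(axis=0)
    errs_plain.append(np.sum((plain - exact) ** 2))
    errs_cv.append(np.sum((cv - exact) ** 2))

print("plain MSE:", np.mean(errs_plain))
print("control-variate MSE:", np.mean(errs_cv))
```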

Matter evolved under the influence of gravity from minuscule density fluctuations. Non-perturbative structure formed hierarchically over all scales and developed non-Gaussian features in the Universe, known as the Cosmic Web. Fully understanding the structure formation of the Universe is one of the holy grails of modern astrophysics. Astrophysicists survey large volumes of the Universe and employ a large ensemble of computer simulations to compare with the observed data in order to extract the full information of our own Universe. However, to evolve trillions of galaxies over billions of years even with the simplest physics is a daunting task. We build a deep neural network, the Deep Density Displacement Model (hereafter D$^3$M), to predict the non-linear structure formation of the Universe from simple linear perturbation theory. Our extensive analysis demonstrates that D$^3$M outperforms second order Lagrangian perturbation theory (hereafter 2LPT), the commonly used fast approximate simulation method, in point-wise comparison, 2-point correlation, and 3-point correlation. We also show that D$^3$M is able to accurately extrapolate far beyond its training data, and predict structure formation for significantly different cosmological parameters. Our study proves, for the first time, that deep learning is a practical and accurate alternative to approximate simulations of the gravitational structure formation of the Universe.
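
A sketch of the kind of linear-theory input such a model maps to full nonlinear displacements: the first-order (Zel'dovich) displacement field computed from an overdensity field with FFTs, $\psi(\mathbf{k}) = i\mathbf{k}\,\delta(\mathbf{k})/k^2$; the white-noise density field and grid size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Zel'dovich (1LPT) displacement psi = -grad(phi) with laplacian(phi) = delta:
# in Fourier space, psi(k) = i k delta(k) / k^2. Input field is an assumption.
n = 64
delta = rng.standard_normal((n, n, n))        # toy linear overdensity field
k1 = np.fft.fftfreq(n) * 2 * np.pi
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                             # avoid division by zero at k = 0

dk = np.fft.fftn(delta)
psi = np.stack([np.real(np.fft.ifftn(1j * k * dk / k2)) for k in (kx, ky, kz)])

print(psi.shape, psi.std())                   # (3, n, n, n) displacement field
```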
