In this paper, we develop a robust fast method for mobile-immobile variable-order (VO) time-fractional diffusion equations (tFDEs), which handles particularly well the cases where the lower bound of the VO function is small or vanishing. A valid fast approximation of the VO Caputo fractional derivative is obtained using integration by parts and the exponential-sum-approximation method. Compared with the general direct method, the proposed algorithm (the $RF$-$L1$ formula) reduces the active memory from $\mathcal{O}(n)$ to $\mathcal{O}(\log^2 n)$ and the computational cost from $\mathcal{O}(n^2)$ to $\mathcal{O}(n \log^2 n)$, where $n$ is the number of time levels. The $RF$-$L1$ formula is then applied to construct a fast finite difference scheme for the VO tFDEs, which sharply decreases the memory requirement and computational complexity. The error estimate for the proposed scheme is established under assumptions on the VO function, the coefficients, and the source term only, without any regularity assumption on the true solutions. Numerical experiments are presented to verify the effectiveness of the proposed method.
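To make the memory claim concrete, here is a minimal Python sketch of the exponential-sum-approximation idea the abstract relies on: the weakly singular kernel $t^{-\alpha}$ (constant order here for simplicity; the paper treats a variable order $\alpha(t)$) is replaced by a short sum of exponentials obtained from a trapezoidal discretization of its integral representation. All names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np
from math import gamma

def exp_sum_kernel(alpha, h=0.4, M=60, N=60):
    """Discretize t^{-alpha} = (1/Gamma(alpha)) * int_0^inf s^(alpha-1) e^(-s t) ds
    on nodes s = e^(j h) with the trapezoidal rule, yielding s_j and w_j such
    that t^{-alpha} ~ sum_j w_j * exp(-s_j * t)."""
    j = np.arange(-M, N + 1)
    s = np.exp(j * h)                   # quadrature nodes
    w = h * s**alpha / gamma(alpha)     # weights (Jacobian ds = s dx absorbed)
    return s, w

alpha = 0.6
s, w = exp_sum_kernel(alpha)
for t in (0.1, 1.0, 5.0):
    approx = np.sum(w * np.exp(-s * t))
    print(f"t={t}: error = {abs(approx - t**(-alpha)):.2e}")
```

Each exponential mode then obeys a one-step recurrence of the form $H_j^{n} = e^{-s_j\tau} H_j^{n-1} + (\text{local term})$, so the history part of the fractional derivative is carried by $\mathcal{O}(\log^2 n)$ scalars instead of all $n$ past time levels, which is the source of the stated memory reduction.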

Related content


In this paper, we propose a monotone approximation scheme for a class of fully nonlinear partial integro-differential equations (PIDEs) which characterize the nonlinear $\alpha$-stable L\'{e}vy processes in a sublinear expectation space with $\alpha \in(1,2)$. Two main results are obtained: (i) the error bounds for the monotone approximation scheme of nonlinear PIDEs, and (ii) the convergence rates of a generalized central limit theorem of Bayraktar-Munk for $\alpha$-stable random variables under sublinear expectation. Our proofs use and extend techniques introduced by Krylov and Barles-Jakobsen.
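As a loose illustration of what "monotone" means here, the sketch below assembles a monotone quadrature of the one-dimensional fractional Laplacian, the generator of an $\alpha$-stable process; the fully nonlinear sublinear-expectation setting of the paper would take a pointwise supremum over a family of such operators. The singular near-field of the L\'{e}vy measure is simply truncated, so this toy is not a consistent scheme, only a monotonicity illustration, and all names are hypothetical.

```python
import numpy as np
from math import gamma, pi

def frac_laplacian(x, alpha):
    """Quadrature of (-Delta)^{alpha/2} u(x_i) ~ sum_{j != i} w_ij (u_i - u_j)
    with weights w_ij >= 0 (zero exterior data, singular near-field dropped)."""
    C = 2**alpha * gamma((1 + alpha) / 2) / (pi**0.5 * abs(gamma(-alpha / 2)))
    n, h = len(x), x[1] - x[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                w = C * h / abs(x[i] - x[j])**(1 + alpha)
                A[i, i] += w
                A[i, j] -= w
    return A

x = np.linspace(-1.0, 1.0, 81)
A = frac_laplacian(x, alpha=1.5)
dt = 0.9 / A.diagonal().max()     # CFL bound keeping I - dt*A entrywise >= 0
print((np.eye(len(x)) - dt * A >= -1e-14).all())   # explicit Euler is monotone
```

Monotonicity (the update is nondecreasing in every neighbor value) is exactly what the Barles-Jakobsen-type error analysis cited in the abstract requires.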

We focus on finite element method computations for time-dependent problems. We prove that the computational cost of the space-time formulation is higher than that of time-marching schemes. This applies to both direct and iterative solvers, and to both uniform and adaptive grids. The only exception to this rule is the h-adaptive space-time simulation of a traveling point object, which results in refinements along its trajectory in the space-time domain. However, if this object has wings and the mesh refinements capture the shape of the wings (indeed, if the mesh refinements capture any two-dimensional manifold), the space-time formulation is again more expensive than time-marching schemes. We also show that the cost of static condensation for the higher-order finite element method with hierarchical basis functions is always higher for space-time formulations. Numerical experiments with Octave confirm our theoretical findings.
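The flavor of the cost comparison can be reproduced from classical direct-solver complexity estimates (textbook multifrontal bounds, sketched here in Python rather than the paper's Octave, and not the paper's sharper derivations): time-marching factorizes a 2-D spatial problem once and back-substitutes per step, while the space-time formulation factorizes one large 3-D problem.

```python
import numpy as np

def time_marching_flops(n, m):
    """n x n spatial mesh, m steps: one multifrontal factorization O(N^1.5)
    of a 2-D problem plus m back-substitutions O(N log N), with N = n^2."""
    N = n * n
    return N**1.5 + m * N * np.log2(N)

def space_time_flops(n, m):
    """One 3-D space-time factorization: O(M^2) with M = n^2 * m unknowns."""
    return float(n * n * m) ** 2

for n in (32, 64, 128):
    m = n   # as many time steps as grid lines, for a like-for-like comparison
    ratio = space_time_flops(n, m) / time_marching_flops(n, m)
    print(f"n={n}: space-time / time-marching cost ratio ~ {ratio:.1e}")
```

The ratio grows rapidly with $n$, which is the qualitative content of the theorem; the adaptive-trajectory exception arises because refinement along a 1-D curve keeps the space-time mesh from filling the full 3-D volume.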

In this work, we show that solvers of elliptic boundary value problems in $d$ dimensions can be approximated to accuracy $\epsilon$ from only $\mathcal{O}\left(\log(N)\log^{d}(N / \epsilon)\right)$ matrix-vector products with carefully chosen vectors (right-hand sides). The solver is only accessed as a black box, and the underlying operator may be unknown and of an arbitrarily high order. Our algorithm (1) has complexity $\mathcal{O}\left(N\log^2(N)\log^{2d}(N / \epsilon)\right)$ and represents the solution operator as a sparse Cholesky factorization with $\mathcal{O}\left(N\log(N)\log^{d}(N / \epsilon)\right)$ nonzero entries, (2) allows for embarrassingly parallel evaluation of the solution operator and the computation of its log-determinant, (3) allows for $\mathcal{O}\left(\log(N)\log^{d}(N / \epsilon)\right)$ complexity computation of individual entries of the matrix representation of the solver that in turn enables its recompression to an $\mathcal{O}\left(N\log^{d}(N / \epsilon)\right)$ complexity representation. As a byproduct, our compression scheme produces a homogenized solution operator with near-optimal approximation accuracy. We include rigorous proofs of these results, and to the best of our knowledge, the proposed algorithm achieves the best trade-off between accuracy $\epsilon$ and the number of required matrix-vector products of the original solver.
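The headline of recovering a solution operator from few matrix-vector products is easiest to see in a drastically simplified setting. The sketch below recovers a banded matrix from $2b+1$ products with "comb" probing vectors, the basic coloring trick behind such schemes; the paper's hierarchical sparse-Cholesky construction is far more sophisticated, and all names here are illustrative.

```python
import numpy as np

def probe_banded(matvec, n, b):
    """Recover a banded matrix (bandwidth b) from 2b+1 black-box matvecs.
    Columns spaced 2b+1 apart never overlap in any row, so one probe per
    residue class reveals all of its columns at once."""
    k = 2 * b + 1
    A = np.zeros((n, n))
    for color in range(k):
        v = np.zeros(n)
        v[color::k] = 1.0                       # probe every k-th column
        y = matvec(v)
        for col in range(color, n, k):
            lo, hi = max(0, col - b), min(n, col + b + 1)
            A[lo:hi, col] = y[lo:hi]
    return A

n = 12
T = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
print(np.allclose(probe_banded(lambda v: T @ v, n, 1), T))   # True: 3 matvecs
```

Elliptic solution operators are not banded, but their Cholesky factors are provably near-sparse in the right ordering, which is what lets the paper's algorithm get away with $\mathcal{O}\left(\log(N)\log^{d}(N/\epsilon)\right)$ carefully chosen right-hand sides.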

We present a new enriched Galerkin (EG) scheme for the Stokes equations based on piecewise linear elements for the velocity unknowns and piecewise constant elements for the pressure. The proposed EG method augments the conforming piecewise linear space for velocity by adding an additional degree of freedom corresponding to one discontinuous linear basis function per element. Thus, the total number of degrees of freedom is significantly reduced in comparison with standard conforming, non-conforming, and discontinuous Galerkin schemes for the Stokes equations. We show the well-posedness of the new EG approach and prove that the scheme converges optimally. For the solution of the resulting large-scale indefinite linear systems, we propose robust block preconditioners, yielding scalable results independent of the discretization and physical parameters. Numerical results confirm the convergence rates of the discretization and also the robustness of the linear solvers for a variety of test problems.
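For a feel of the block preconditioning being claimed, here is a generic block-diagonal (velocity block / Schur complement) preconditioner for a toy saddle-point system with Stokes structure, solved with MINRES. This is a standard construction, not the paper's specific EG blocks, and the matrices are synthetic.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 200, 60
A = sp.diags([2*np.ones(n), -np.ones(n-1), -np.ones(n-1)], [0, 1, -1]).tocsc()
B = sp.random(m, n, density=0.1, random_state=0).tocsc()
K = sp.bmat([[A, B.T], [B, None]]).tocsc()      # [[A, B^T], [B, 0]] saddle point
rhs = np.ones(n + m)

Ainv = spla.factorized(A)
S = (B @ spla.spsolve(A, B.T.tocsc())).toarray()   # toy dense Schur complement
Sinv = np.linalg.inv(S + 1e-10 * np.eye(m))        # small shift: S may be singular

def apply_prec(r):                                  # apply diag(A, S)^{-1}
    return np.concatenate([Ainv(r[:n]), Sinv @ r[n:]])

M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = spla.minres(K, rhs, M=M)
print("MINRES converged" if info == 0 else f"info = {info}")
```

With a spectrally equivalent (rather than exact) Schur complement approximation, iteration counts of this kind of preconditioned MINRES stay bounded as the mesh is refined, which is what "scalable, parameter-independent" means in the abstract.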

In this article, we develop and analyse a new spectral method to solve the semi-classical Schr\"odinger equation based on the Gaussian wave-packet transform (GWPT) and Hagedorn's semi-classical wave-packets (HWP). The GWPT equivalently recasts the highly oscillatory wave equation as a much less oscillatory one (the $w$ equation) coupled with a set of ordinary differential equations governing the dynamics of the so-called GWPT parameters. The Hamiltonian of the $w$ equation consists of a quadratic part and a small non-quadratic perturbation of order $\mathcal{O}(\sqrt{\varepsilon})$, where $\varepsilon \ll 1$ is the rescaled Planck constant. By expanding the solution of the $w$ equation as a superposition of Hagedorn's wave-packets, we construct a spectral method in which the $\mathcal{O}(\sqrt{\varepsilon})$ perturbation part is treated by the Galerkin approximation. This numerical implementation of the GWPT avoids imposing artificial boundary conditions and facilitates rigorous numerical analysis. For arbitrary dimensions, we establish how the error of solving the semi-classical Schr\"odinger equation with the GWPT is determined by the errors of solving the $w$ equation and the GWPT parameters. We prove that this scheme exhibits spectral convergence with respect to the number of Hagedorn's wave-packets in one dimension. Extensive numerical tests are provided to demonstrate the properties of the proposed method.
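To make the structure concrete, here is the textbook Gaussian wave-packet reduction behind transforms of this kind (generic notation, not necessarily the paper's exact GWPT conventions). Substituting a quadratic-phase ansatz into the semi-classical Schr\"odinger equation and Taylor-expanding $V$ around the packet center $q(t)$ gives:

```latex
% Semi-classical Schroedinger equation and a quadratic-phase ansatz
% (generic textbook form; the paper's GWPT conventions may differ):
i\varepsilon\,\partial_t \psi = -\tfrac{\varepsilon^2}{2}\Delta\psi + V(x)\,\psi,
\qquad
\psi(x,t) = w(\xi,t)\,
  \exp\!\Big(\tfrac{i}{\varepsilon}\big(\tfrac12 (x-q)^{\top} M (x-q)
  + p\cdot(x-q) + \gamma\big)\Big),
\qquad \xi = \frac{x-q}{\sqrt{\varepsilon}}.
% Matching powers of (x - q), with V expanded to second order around q:
\dot q = p, \qquad \dot p = -\nabla V(q), \qquad \dot M = -M^{2} - \nabla^{2}V(q).
```

The quadratic Taylor part of $V$ is absorbed by these parameter ODEs; the remainder is $\mathcal{O}(|x-q|^3) = \mathcal{O}(\varepsilon^{3/2}|\xi|^3)$, which after dividing through by $\varepsilon$ leaves precisely the $\mathcal{O}(\sqrt{\varepsilon})$ non-quadratic perturbation in the $w$ equation that the abstract refers to.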

In this paper, we consider possibly misspecified stochastic differential equation models driven by L\'{e}vy processes. Regardless of whether the driving noise is Gaussian, the Gaussian quasi-likelihood estimator can estimate the unknown parameters in the drift and scale coefficients. In the misspecified case, however, the asymptotic distribution of the estimator changes owing to the correction of the misspecification bias, and consistent estimators of the asymptotic variance proposed for the correctly specified case may lose their theoretical validity. As a solution, we propose a bootstrap method for approximating the asymptotic distribution. We show that our bootstrap method is theoretically valid in both the correctly specified and the misspecified case, without assuming a precise distribution for the driving noise.
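As a cartoon of the proposal (a generic residual bootstrap for a discretely observed SDE, not necessarily the authors' exact construction), the sketch below fits a Gaussian quasi-likelihood estimator to an Euler-discretized Ornstein-Uhlenbeck-type path and bootstraps the estimator's distribution by resampling fitted increments; the toy model and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, theta0, sig0 = 2000, 0.01, 1.0, 0.5

# Euler path of dX_t = -theta X_t dt + sig dL_t (Gaussian noise for brevity;
# the misspecified setting would use a non-Gaussian Levy driver).
X = np.zeros(n + 1)
for i in range(n):
    X[i + 1] = X[i] - theta0 * X[i] * dt + sig0 * np.sqrt(dt) * rng.standard_normal()

def gqmle(path):
    """Gaussian quasi-likelihood estimates of (theta, sig^2)."""
    dX, Xl = np.diff(path), path[:-1]
    theta = -np.sum(Xl * dX) / (np.sum(Xl**2) * dt)       # drift quasi-score
    sig2 = np.mean((dX + theta * Xl * dt)**2) / dt        # scale quasi-score
    return theta, sig2

theta_hat, sig2_hat = gqmle(X)
res = np.diff(X) + theta_hat * X[:-1] * dt    # fitted increment residuals

def boot_path():
    """Residual bootstrap: rebuild a path from resampled residuals."""
    Xb = np.empty(n + 1)
    Xb[0] = X[0]
    draws = rng.choice(res, size=n, replace=True)
    for i in range(n):
        Xb[i + 1] = Xb[i] - theta_hat * Xb[i] * dt + draws[i]
    return Xb

boot = np.array([gqmle(boot_path()) for _ in range(300)])
print("bootstrap sd of theta_hat:", boot[:, 0].std())
```

Because the residuals are resampled rather than drawn from a fitted Gaussian, the bootstrap distribution does not presume the driving noise's law, mirroring the distribution-free validity the abstract claims.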

Solving for detailed chemical kinetics remains one of the major bottlenecks for computational fluid dynamics simulations of reacting flows using a finite-rate-chemistry approach. This has motivated the use of fully connected artificial neural networks to predict stiff chemical source terms as functions of the thermochemical state of the combustion system. However, due to the nonlinearities and multi-scale nature of combustion, the predicted solution often diverges from the true solution when these deep learning models are coupled with a computational fluid dynamics solver. This is because these approaches minimize the error during training without guaranteeing successful integration with ordinary differential equation solvers. In the present work, a novel neural ordinary differential equations approach to modeling chemical kinetics, termed ChemNODE, is developed. In this deep learning framework, the chemical source terms predicted by the neural networks are integrated during training, and by computing the required derivatives, the neural network weights are adjusted accordingly to minimize the difference between the predicted and ground-truth solutions. A proof-of-concept study is performed with ChemNODE for homogeneous autoignition of a hydrogen-air mixture over a range of composition and thermodynamic conditions. It is shown that ChemNODE accurately captures the correct physical behavior and reproduces the results obtained using the full chemical kinetic mechanism at a fraction of the computational cost.
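The core idea described above, putting the ODE solver inside the training loss so gradients account for the integration, can be sketched in a few lines. Below, a differentiable hand-rolled RK4 integrator is unrolled through a toy one-species "source term"; the real system, network sizes, and hydrogen-air chemistry are far richer, and everything named here is illustrative.

```python
import torch

torch.manual_seed(0)

def f_true(y):                          # stand-in for a chemical source term
    return -5.0 * y + torch.sin(y)

def rk4_step(f, y, h):
    k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
    return y + h / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

def integrate(f, y0, h, steps):
    ys = [y0]
    for _ in range(steps):
        ys.append(rk4_step(f, ys[-1], h))
    return torch.stack(ys)

h, steps = 0.02, 50
y0 = torch.tensor([1.0])
with torch.no_grad():
    ref = integrate(f_true, y0, h, steps)      # "ground-truth" trajectory

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for epoch in range(400):
    opt.zero_grad()
    pred = integrate(net, y0, h, steps)        # solver inside the loss
    loss = torch.mean((pred - ref) ** 2)       # gradients flow through RK4
    loss.backward()
    opt.step()
print(float(loss))
```

Training against the integrated trajectory, rather than against pointwise source-term values, is exactly what discourages the learned model from drifting off the solution manifold once it is coupled to a flow solver.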

Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy for speeding up the training of large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that both types of variance must be mitigated to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and better generalization than existing methods.
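The "stochastic gradient variance" half of the decomposition rests on a classical importance-sampling fact, sketched below: sampling nodes with probabilities proportional to (estimated) gradient norms, and reweighting by $1/(Np_i)$ to stay unbiased, reduces estimator variance relative to uniform sampling. This toy omits the embedding-approximation half and the paper's decoupled estimator; all quantities are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
N, batch, trials = 1000, 64, 2000
g = rng.standard_normal(N) * rng.exponential(1.0, N)   # per-node gradients (toy)
target = g.mean()

def sample_mean(p):
    """Unbiased estimator of mean(g) under node-sampling distribution p."""
    idx = rng.choice(N, size=batch, replace=True, p=p)
    return np.mean(g[idx] / (N * p[idx]))

uniform = np.full(N, 1.0 / N)
adaptive = np.abs(g) / np.abs(g).sum()                 # p_i ~ |g_i| (idealized)

for name, p in [("uniform", uniform), ("adaptive", adaptive)]:
    est = np.array([sample_mean(p) for _ in range(trials)])
    print(f"{name}: bias={est.mean() - target:+.4f}, std={est.std():.4f}")
```

In practice the exact gradient norms are unavailable, which is why the paper uses approximate gradient information maintained during optimization rather than the idealized probabilities above.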

We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
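The claim of backpropagating through any ODE solver without access to its internal operations is the adjoint sensitivity method, whose equations (as stated in the paper, for dynamics $\mathrm{d}z/\mathrm{d}t = f(z,t,\theta)$ and loss $L$) are:

```latex
% Adjoint sensitivities for dz/dt = f(z, t, \theta):
a(t) \;=\; \frac{\partial L}{\partial z(t)}, \qquad
\frac{\mathrm{d}a(t)}{\mathrm{d}t} \;=\; -\,a(t)^{\top}\,
  \frac{\partial f(z(t),t,\theta)}{\partial z}, \qquad
\frac{\mathrm{d}L}{\mathrm{d}\theta} \;=\;
  -\int_{t_1}^{t_0} a(t)^{\top}\,
  \frac{\partial f(z(t),t,\theta)}{\partial \theta}\,\mathrm{d}t .
```

The gradient is obtained by a single backward-in-time solve for the augmented state $(z, a, \partial L/\partial\theta)$, which is why memory cost is constant in the number of solver steps.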

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
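The convex variance surrogate has a clean closed form in the $\chi^2$-ball case: whenever the worst-case weights stay nonnegative, the distributionally robust objective equals the empirical mean plus a variance penalty. A minimal sketch under that assumption (notation illustrative, not the paper's full estimator):

```python
import numpy as np

def dro_objective(losses, rho):
    """sup over P with chi-square divergence D(P || P_n) <= rho/n of E_P[loss].
    When the maximizing weights stay nonnegative, this sup equals exactly
    mean + sqrt(2*rho*Var_n/n): the variance-regularized surrogate."""
    n = losses.size
    return losses.mean() + np.sqrt(2.0 * rho * losses.var() / n)

rng = np.random.default_rng(0)
losses = rng.exponential(size=200)          # toy per-example losses
for rho in (0.5, 1.0, 5.0):
    print(f"rho={rho}: robust objective = {dro_objective(losses, rho):.4f}")
```

Minimizing this objective over model parameters is what "automatically balancing bias and variance" refers to: the $\sqrt{\mathrm{Var}_n/n}$ term penalizes high-variance solutions at exactly the scale of the estimation error.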
