
This paper introduces a new class of numerical methods for the time integration of evolution equations set as Cauchy problems of ODEs or PDEs. The systematic design of these methods mixes the Runge-Kutta collocation formalism with collocation techniques, in such a way that the methods are linearly implicit and have high order. The fact that these methods are implicit makes it possible to avoid CFL conditions when the large systems to integrate come from the space discretization of evolution PDEs. Moreover, these methods are expected to be efficient since they only require the solution of one linear system of equations at each time step, and efficient techniques from the literature can be used to do so. After introducing the methods, we give suitable definitions of consistency and stability for them. This allows us to prove that arbitrarily high order linearly implicit methods exist and converge when applied to ODEs. Finally, we perform numerical experiments on ODEs and PDEs that illustrate our theoretical results for ODEs, and compare our methods with standard methods for several evolution PDEs.
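The high-order collocation-based construction is not reproduced here; as a minimal sketch of the linearly implicit idea (a single linear solve per time step), the following shows a first-order semi-implicit (IMEX Euler) step for u' = Lu + N(u), with the stiff linear part L treated implicitly and the nonlinearity N explicitly. All names and the test data are illustrative assumptions, not the paper's methods.

```python
import numpy as np

def imex_euler_step(u, dt, L, N):
    """One linearly implicit (IMEX Euler) step for u' = L u + N(u).

    The stiff linear operator L is treated implicitly and the nonlinearity N
    explicitly, so each step costs exactly one linear solve.  This is only a
    first-order illustration, not the high-order collocation-based methods
    introduced in the paper above.
    """
    I = np.eye(len(u))
    # Solve (I - dt*L) u_next = u + dt*N(u): one linear system per step.
    return np.linalg.solve(I - dt * L, u + dt * N(u))

# Illustrative data: a stiff diagonal linear operator plus a cubic nonlinearity.
L = np.diag([-1.0, -100.0])
N = lambda u: -u**3
u = np.array([1.0, 1.0])
for _ in range(100):
    u = imex_euler_step(u, 1e-2, L, N)
```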

Related content


In this paper we study the numerical approximation of the random periodic solution of semilinear stochastic evolution equations. The main challenge lies in proving convergence over an infinite time horizon while simulating infinite-dimensional objects. We first show the existence and uniqueness of the random periodic solution to the equation as the limit of the pull-back flows of the equation, and observe that its mild form is well defined in the intersection of a family of decreasing Hilbert spaces. We then propose a Galerkin-type exponential integrator scheme and establish its strong convergence rate to the mild solution, where the order of convergence depends directly on the space (within the family of Hilbert spaces) in which the initial point lives. We finally conclude that the best order of convergence is arbitrarily close to 0.5.
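To make the kind of scheme concrete, here is a minimal Galerkin-truncated exponential-Euler-type step for a semilinear equation du = (Au + F(u)) dt + dW with diagonal linear part and diagonal trace-class noise. The eigenvalues, noise spectrum, nonlinearity, and the exact sampling of the stochastic convolution are all illustrative assumptions, not the precise setting or scheme analyzed in the paper.

```python
import numpy as np

def galerkin_exp_euler_step(u, h, lam, q, F, rng):
    """One exponential-Euler-type step for the Galerkin truncation of
        du = (A u + F(u)) dt + dW(t),
    written modewise as du_k = (-lam_k u_k + F_k(u)) dt + sqrt(q_k) dbeta_k.
    """
    E = np.exp(-lam * h)              # action of the semigroup e^{Ah} on each mode
    phi1 = (1.0 - E) / (lam * h)      # phi_1(Ah), applied to the (frozen) drift
    # The stochastic convolution int e^{A(t_{n+1}-s)} dW(s) is Gaussian with
    # modewise variance q_k (1 - e^{-2 lam_k h}) / (2 lam_k); sample it exactly.
    noise = np.sqrt(q * (1.0 - np.exp(-2.0 * lam * h)) / (2.0 * lam))
    noise = noise * rng.standard_normal(len(u))
    return E * u + h * phi1 * F(u) + noise

# Illustrative run: 64 Laplacian eigenmodes, a placeholder nonlinearity acting
# directly on the coefficient vector, and an assumed trace-class noise spectrum.
rng = np.random.default_rng(0)
N, h = 64, 1e-3
lam = np.pi**2 * np.arange(1, N + 1) ** 2
q = 1.0 / np.arange(1, N + 1) ** 2
u = np.zeros(N)
for _ in range(1000):
    u = galerkin_exp_euler_step(u, h, lam, q, lambda v: v - v**3, rng)
```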

We study the conservation properties of the Hermite-discontinuous Galerkin (Hermite-DG) approximation of the Vlasov-Maxwell equations. In this semi-discrete formulation, the total mass is preserved independently for every plasma species. Further, an energy invariant exists if central numerical fluxes are used in the DG approximation of Maxwell's equations, while a dissipative term is present when upwind fluxes are employed. In general, traditional temporal integrators might fail to preserve invariants associated with conservation laws (at the continuous or semi-discrete level) during the time evolution. Hence, we analyze the capability of explicit and implicit Runge-Kutta (RK) temporal integrators to preserve such invariants. Since explicit RK methods can only ensure preservation of linear invariants but do not provide any control on the system energy, we consider modified explicit RK methods in the family of relaxation Runge-Kutta methods (RRK). These methods can be tuned to preserve the energy invariant at the continuous or semi-discrete level, a distinction that is important when upwind fluxes are used in the discretization of Maxwell's equations, since upwinding introduces a numerical source of energy dissipation that is not present with central fluxes. We prove that the proposed methods are able to preserve the energy invariant and to maintain the semi-discrete energy dissipation (if present) according to the discretization of Maxwell's equations. An extensive set of numerical experiments corroborates the theoretical findings. It also suggests that maintaining the semi-discrete energy dissipation when upwind fluxes are used leads to overall better accuracy than forcing exact energy conservation with upwind fluxes.
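The relaxation mechanism is simple to state for a quadratic energy ||u||^2: after an ordinary RK step, the update is rescaled by a factor gamma chosen so that the discrete energy change equals the change estimated from the stages (zero for a conservative semi-discretization, negative for a dissipative one such as upwind fluxes). The sketch below is a generic relaxation RK step for this quadratic case and is not tied to the Hermite-DG Vlasov-Maxwell discretization of the paper.

```python
import numpy as np

def relaxation_rk_step(u, dt, f, A, b):
    """One relaxation Runge-Kutta (RRK) step for u' = f(u), enforcing that the
    quadratic energy ||u||^2 changes exactly by the stage-based estimate
    2*dt*sum_i b_i <U_i, f(U_i)>  (zero if f conserves the energy, negative if
    it is dissipative).  A, b are the Butcher coefficients of an explicit base
    method; the update direction d is assumed nonzero.
    """
    s = len(b)
    U, K = [None] * s, [None] * s
    for i in range(s):
        U[i] = u + dt * sum(A[i][j] * K[j] for j in range(i))
        K[i] = f(U[i])
    d = dt * sum(b[i] * K[i] for i in range(s))                 # unrelaxed update
    est = 2.0 * dt * sum(b[i] * np.dot(U[i], K[i]) for i in range(s))
    # Choose gamma so that ||u + gamma*d||^2 - ||u||^2 = gamma * est.
    gamma = (est - 2.0 * np.dot(u, d)) / np.dot(d, d)
    return u + gamma * d

# Illustrative example: harmonic oscillator (energy-conserving) with Heun's method.
A = [[0.0, 0.0], [1.0, 0.0]]
b = [0.5, 0.5]
f = lambda u: np.array([u[1], -u[0]])
u = np.array([1.0, 0.0])
for _ in range(1000):
    u = relaxation_rk_step(u, 0.1, f, A, b)
# ||u||^2 stays at 1 up to round-off.
```

In the relaxation interpretation the accepted step corresponds to time t_n + gamma*dt, with gamma close to one, which is what preserves the classical order of the base scheme.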

Stochastic Hamiltonian partial differential equations, which possess the multi-symplectic conservation law, form an important and fairly large class of systems. Multi-symplectic methods, which inherit the geometric features of stochastic Hamiltonian partial differential equations, provide numerical approximations with better numerical stability and are of vital importance for obtaining correct numerical results. In this paper, we propose three novel multi-symplectic methods for stochastic Hamiltonian partial differential equations based on the local radial basis function collocation method, the splitting technique, and the partitioned Runge-Kutta method. Concrete numerical methods are presented for nonlinear stochastic wave equations, stochastic nonlinear Schr\"odinger equations, stochastic Korteweg-de Vries equations and stochastic Maxwell equations. We take stochastic wave equations as examples in our numerical experiments, which indicate the validity of the proposed methods.
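As an illustration of the splitting ingredient only (not of the multi-symplectic RBF-collocation or partitioned Runge-Kutta constructions above), here is a minimal Lie splitting step for a cubic Schr\"odinger equation with a simple multiplicative Stratonovich noise. The noise model (real-valued, constant in space), the focusing sign, and the periodic grid are illustrative assumptions.

```python
import numpy as np

def lie_splitting_step(u, dt, k2, rng, sigma=1.0):
    """One Lie splitting step for
        i du = -u_xx dt - |u|^2 u dt - sigma * u o dW   (Stratonovich, W real),
    on a periodic grid.  Both sub-flows are solved exactly:
      (1) linear dispersion in Fourier space,
      (2) nonlinearity + noise as a pointwise phase rotation,
          since |u| is constant along this sub-flow.
    """
    # (1) exact flow of i u_t = -u_xx: multiply mode k by e^{-i k^2 dt}.
    u = np.fft.ifft(np.exp(-1j * k2 * dt) * np.fft.fft(u))
    # (2) exact flow of i u_t = -|u|^2 u - sigma * u dW/dt: phase rotation.
    dW = np.sqrt(dt) * rng.standard_normal()     # scalar increment: noise constant in space
    return u * np.exp(1j * (np.abs(u) ** 2 * dt + sigma * dW))

# Illustrative setup: periodic grid on [0, 2*pi) with 256 points.
n = 256
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)                 # integer wavenumbers
rng = np.random.default_rng(1)
u = np.exp(1j * x) / (1.0 + 0.5 * np.cos(x))
for _ in range(200):
    u = lie_splitting_step(u, 1e-3, k**2, rng)
```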

Empirical observations of high-dimensional phenomena, such as the double descent behaviour, have attracted a lot of interest in understanding classical techniques such as kernel methods and their implications for explaining the generalization properties of neural networks. Many recent works analyze such models in a certain high-dimensional regime where the covariates are independent and the number of samples and the number of covariates grow at a fixed ratio (i.e. proportional asymptotics). In this work we show that for a large class of kernels, including the neural tangent kernel of fully connected networks, kernel methods can only perform as well as linear models in this regime. More surprisingly, when the data is generated by a kernel model in which the relationship between the input and the response can be highly nonlinear, we show that linear models are in fact optimal, i.e. linear models achieve the minimum risk among all models, linear or nonlinear. These results suggest that more complex data models than independent features are needed for high-dimensional analysis.

In this paper, we present three proofs of coercivity for first-order system least-squares methods for second-order elliptic PDEs. The first proof is based on the a priori error estimate for the PDEs and requires the weakest assumptions. For the second and third proofs, a sufficient condition on the coefficients ensuring the coercivity of the standard variational formulation is assumed. The second proof is a simple direct proof, and the third is based on a lemma introduced in the discontinuous Petrov-Galerkin method. By pointing out the advantages and limitations of the different proofs, we hope that the paper will provide a guide for future proofs. As an application, we also discuss least-squares finite element methods for problems with $H^{-1}$ right-hand side.

When applied to stiff, linear differential equations with time-dependent forcing, Runge-Kutta methods can exhibit convergence rates lower than predicted by classical order condition theory. Commonly, this order reduction phenomenon is addressed by using an expensive, fully implicit Runge-Kutta method with high stage order or a specialized scheme satisfying additional order conditions. This work develops a flexible approach that augments an arbitrary Runge-Kutta method with a fully implicit method used to treat the forcing, so as to maintain the classical order of the base scheme. Our methods and analyses are based on the general-structure additive Runge-Kutta framework. Numerical experiments using diagonally implicit, fully implicit, and even explicit Runge-Kutta methods confirm that the new approach eliminates order reduction for the class of problems under consideration, and that the base methods achieve their theoretical orders of convergence.
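Order reduction is easy to observe on the Prothero-Robinson test problem y' = lam*(y - phi(t)) + phi'(t), whose exact solution is y = phi(t) when y(0) = phi(0). The sketch below measures the observed order of a classical third-order SDIRK method on this problem; it does not reproduce the augmented schemes of the paper, and the choice of base method, forcing phi, and stiffness lam are assumptions made for illustration.

```python
import numpy as np

# Prothero-Robinson problem: y' = lam*(y - phi(t)) + phi'(t),  y(0) = phi(0),
# with exact solution y(t) = phi(t).  For very stiff lam, many RK methods show
# an observed order below their classical order (order reduction).
lam = -1e6
phi = np.cos
dphi = lambda t: -np.sin(t)

# Crouzeix's 2-stage, classical-order-3 SDIRK (stage order 1) as the base method.
g = (3 + np.sqrt(3)) / 6
A = np.array([[g, 0.0], [1 - 2 * g, g]])
b = np.array([0.5, 0.5])
c = np.array([g, 1 - g])

def integrate(h, T=1.0):
    y, t = phi(0.0), 0.0
    for _ in range(int(round(T / h))):
        K = np.zeros(2)
        for i in range(2):
            ti = t + c[i] * h
            rhs = lam * (y + h * np.dot(A[i, :i], K[:i]) - phi(ti)) + dphi(ti)
            K[i] = rhs / (1.0 - h * A[i, i] * lam)   # scalar implicit stage, solved exactly
        y, t = y + h * np.dot(b, K), t + h
    return abs(y - phi(t))

hs = np.array([2.0 ** -k for k in range(4, 9)])
errs = np.array([integrate(h) for h in hs])
print(np.log2(errs[:-1] / errs[1:]))   # observed order close to 2, below the classical order 3
```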

In this work, we investigate stochastic quasi-Newton methods for minimizing a finite sum of cost functions over a decentralized network. In Part I, we develop a general algorithmic framework that incorporates stochastic quasi-Newton approximations with variance reduction so as to achieve fast convergence. At each iteration, each node constructs a local, inexact quasi-Newton direction that asymptotically approaches the global, exact one. To be specific, (i) a local gradient approximation is constructed by using dynamic average consensus to track the average of variance-reduced local stochastic gradients over the entire network; (ii) a local Hessian inverse approximation is assumed to be positive definite with bounded eigenvalues, and constructions satisfying these assumptions are given in Part II. Compared to existing decentralized stochastic first-order methods, the proposed general framework introduces second-order curvature information without incurring extra sampling or communication. With a fixed step size, we establish the conditions under which the proposed general framework converges linearly to an exact optimal solution.
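The dynamic average consensus step used in (i) can be sketched as follows; the mixing matrix, dimensions, and the local gradient estimates are placeholders, and neither the variance-reduction scheme nor the Hessian-inverse construction of Part II is reproduced.

```python
import numpy as np

def dynamic_average_consensus_step(W, y, v_new, v_old):
    """One dynamic average consensus (gradient tracking) update.

    Each node i maintains a tracker y_i of the network-average gradient:
        y_i^{k+1} = sum_j W_ij * y_j^k + v_i^{k+1} - v_i^k,
    where v_i is node i's local (e.g. variance-reduced stochastic) gradient
    estimate and W is a doubly stochastic mixing matrix.  The average of the
    trackers then equals the average of the v_i at every iteration.
    """
    return W @ y + (v_new - v_old)

# Placeholder example: 4 nodes on a ring, decision variable of dimension 3.
n, d = 4, 3
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
rng = np.random.default_rng(0)
v_old = rng.standard_normal((n, d))
y = v_old.copy()                          # standard initialization y_i^0 = v_i^0
v_new = rng.standard_normal((n, d))
y = dynamic_average_consensus_step(W, y, v_new, v_old)
assert np.allclose(y.mean(axis=0), v_new.mean(axis=0))   # the average is tracked exactly
```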

We couple the L1 discretization of the Caputo derivative in time with a spectral Galerkin method in space to devise a scheme that solves quasilinear subdiffusion equations. Both the diffusivity and the source are allowed to be nonlinear functions of the solution. We prove the method's stability and convergence with spectral accuracy in space. The temporal order depends on the solution's regularity in time. Further, we support our results with numerical simulations that utilize parallelism for the spatial discretization. Moreover, as a side result we find asymptotically exact values of the error constants, along with their remainders, for discretizations of the Caputo derivative and fractional integrals. These constants are the smallest possible, which improves previously established results in the literature.
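For reference, on a uniform grid t_k = k*tau the L1 approximation of the Caputo derivative of order alpha in (0,1) reads D^alpha u(t_n) ~ tau^{-alpha}/Gamma(2-alpha) * sum_j b_j (u_{n-j} - u_{n-j-1}) with b_j = (j+1)^{1-alpha} - j^{1-alpha}. The sketch below implements only this standard building block (uniform grid, scalar u), not the full quasilinear spectral Galerkin scheme or the sharpened error constants of the paper.

```python
import numpy as np
from math import gamma

def l1_caputo(u, tau, alpha):
    """L1 approximation of the Caputo derivative D^alpha u(t_n), 0 < alpha < 1,
    on the uniform grid t_k = k*tau, given the history u = [u_0, ..., u_n]."""
    n = len(u) - 1
    j = np.arange(n)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)   # L1 weights
    diffs = u[:0:-1] - u[-2::-1]                    # u_{n-j} - u_{n-j-1}, j = 0..n-1
    return tau ** (-alpha) / gamma(2 - alpha) * np.dot(b, diffs)

# Sanity check on u(t) = t^2, whose exact Caputo derivative is
# 2 t^{2-alpha} / Gamma(3-alpha).
alpha, tau, n = 0.5, 1e-3, 1000
t = tau * np.arange(n + 1)
approx = l1_caputo(t ** 2, tau, alpha)
exact = 2 * t[-1] ** (2 - alpha) / gamma(3 - alpha)
print(approx, exact)
```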

We study the bilinearly coupled minimax problem: $\min_{x} \max_{y} f(x) + y^\top A x - h(y)$, where $f$ and $h$ are both strongly convex smooth functions and admit first-order gradient oracles. Surprisingly, no known first-order algorithms have hitherto achieved the lower complexity bound of $\Omega((\sqrt{\frac{L_x}{\mu_x}} + \frac{\|A\|}{\sqrt{\mu_x \mu_y}} + \sqrt{\frac{L_y}{\mu_y}}) \log(\frac1{\varepsilon}))$ for solving this problem up to an $\varepsilon$ primal-dual gap in the general parameter regime, where $L_x, L_y,\mu_x,\mu_y$ are the corresponding smoothness and strong convexity constants. We close this gap by devising the first optimal algorithm, the Lifted Primal-Dual (LPD) method. Our method lifts the objective into an extended form that allows both the smooth terms and the bilinear term to be handled optimally and seamlessly with the same primal-dual framework. Besides optimality, our method yields a desirably simple single-loop algorithm that uses only one gradient oracle call per iteration. Moreover, when $f$ is just convex, the same algorithm applied to a smoothed objective achieves the nearly optimal iteration complexity. We also provide a direct single-loop algorithm, using the LPD method, that achieves the iteration complexity of $O(\sqrt{\frac{L_x}{\varepsilon}} + \frac{\|A\|}{\sqrt{\mu_y \varepsilon}} + \sqrt{\frac{L_y}{\varepsilon}})$. Numerical experiments on quadratic minimax problems and policy evaluation problems further demonstrate the fast convergence of our algorithm in practice.
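To fix notation, the sketch below runs plain simultaneous gradient descent-ascent on this saddle-point structure for a strongly convex quadratic instance. It is a simple, non-optimal baseline whose complexity does not match the lower bound above, and it is not the Lifted Primal-Dual method; the quadratic choices of $f$ and $h$ and all parameters are illustrative assumptions.

```python
import numpy as np

def gda(grad_f, grad_h, A, x0, y0, eta, iters):
    """Simultaneous gradient descent-ascent on
        min_x max_y f(x) + y^T A x - h(y).
    A generic baseline for this saddle-point structure, not the LPD method."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        gx = grad_f(x) + A.T @ y          # gradient of the objective in x
        gy = A @ x - grad_h(y)            # gradient of the objective in y
        x, y = x - eta * gx, y + eta * gy
    return x, y

# Placeholder strongly convex quadratics: f(x) = 0.5*mu_x*||x||^2, h(y) = 0.5*mu_y*||y||^2.
rng = np.random.default_rng(0)
mu_x, mu_y = 1.0, 1.0
A = rng.standard_normal((5, 5)) / 5
x, y = gda(lambda x: mu_x * x, lambda y: mu_y * y, A,
           rng.standard_normal(5), rng.standard_normal(5), eta=0.2, iters=2000)
# The unique saddle point of this instance is (0, 0).
print(np.linalg.norm(x), np.linalg.norm(y))
```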

In this paper, we propose an efficient proper orthogonal decomposition based reduced-order model (POD-ROM) for nonstationary Stokes equations, which combines the classical projection method with the POD technique. This new scheme has two main advantages: the first is low computational cost, since the classical projection method decouples the reduced-order velocity variable and the reduced-order pressure variable, and the POD technique further improves the computational efficiency; the second is that it circumvents verification of the classical LBB/inf-sup condition for mixed POD spaces with the help of the pressure-stabilized Petrov-Galerkin (PSPG)-type projection method, whose inherent pressure-stabilization term allows the use of non-inf-sup-stable elements without adding extra stabilization terms. We first obtain the convergence of the PSPG-type finite element projection scheme, and then analyze the proposed projection POD-ROM's stability and convergence. Numerical experiments validate our theoretical results.
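The offline POD step common to such reduced-order models can be sketched as follows: the reduced velocity basis is taken from the leading left singular vectors of a snapshot matrix, truncated by an energy criterion. The snapshots, tolerance, and dimensions below are placeholders, and the PSPG-type projection scheme itself is not reproduced.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Proper orthogonal decomposition of a snapshot matrix.

    snapshots : (n_dof, n_snap) array whose columns are full-order velocity
                snapshots (placeholder input).
    energy    : fraction of the snapshot energy (squared singular values) to retain.
    Returns the (n_dof, r) POD basis and the retained rank r.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r], r

# Placeholder usage: random low-rank snapshots standing in for full-order solutions.
rng = np.random.default_rng(0)
S = rng.standard_normal((2000, 8)) @ rng.standard_normal((8, 50))   # rank-8 data
Phi, r = pod_basis(S)
u_reduced = Phi.T @ S[:, 0]        # reduced coordinates of the first snapshot
print(r, u_reduced.shape)
```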
