
This paper considers the numerical treatment of the time-dependent Gross-Pitaevskii equation. In order to conserve the time invariants of the equation as accurately as possible, we propose a Crank-Nicolson-type time discretization that is combined with a suitable generalized finite element discretization in space. The space discretization is based on the technique of Localized Orthogonal Decomposition (LOD) and allows us to capture the time invariants with an accuracy of order $\mathcal{O}(H^6)$ with respect to the chosen mesh size $H$. This accuracy is preserved due to the conservation properties of the time-stepping method. Furthermore, we prove that the resulting scheme approximates the exact solution in the $L^{\infty}(L^2)$-norm with order $\mathcal{O}(\tau^2 + H^4)$, where $\tau$ denotes the step size. The computational efficiency of the method is demonstrated in numerical experiments for a benchmark problem with a known exact solution.
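For orientation, a mass- and energy-conserving Crank-Nicolson-type step for the Gross-Pitaevskii equation $i\partial_t u = -\Delta u + Vu + \beta|u|^2u$ is commonly written as
$$ i\,\frac{u^{n+1}-u^{n}}{\tau} \;=\; -\Delta\,\frac{u^{n+1}+u^{n}}{2} \;+\; V\,\frac{u^{n+1}+u^{n}}{2} \;+\; \frac{\beta}{2}\bigl(|u^{n+1}|^{2}+|u^{n}|^{2}\bigr)\frac{u^{n+1}+u^{n}}{2}. $$
This is only a generic sketch of this class of time discretizations; the scheme of the paper combines such a step with the LOD-based generalized finite element discretization in space.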

Related content

Despite the many advances in the use of weakly-compressible smoothed particle hydrodynamics (SPH) for the simulation of incompressible fluid flow, it is still challenging to obtain second-order convergence numerically. In this paper we perform a systematic numerical study of the convergence and accuracy of kernel-based approximation, discretization operators, and weakly-compressible SPH (WCSPH) schemes. We explore the origins of the errors and the issues preventing second-order convergence. Based on this study, we propose several new variations of the basic WCSPH scheme that are all second-order accurate. Additionally, we investigate the linear and angular momentum conservation properties of the WCSPH schemes. Our results show that one may construct accurate WCSPH schemes that demonstrate second-order convergence through a judicious choice of kernel, smoothing length, and discretization operators for the governing equations.
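As a minimal illustration of the kernel-based approximation underlying such convergence studies, the sketch below approximates a smooth 1D function by the standard SPH summation interpolant with a cubic spline kernel. The test function, particle spacing, and smoothing-length factor (1.2) are illustrative assumptions, not the setup of the paper.

```python
import numpy as np

def cubic_spline_kernel(q, h):
    """Standard 1D cubic spline kernel, q = |x - x'| / h, support radius 2h."""
    sigma = 2.0 / (3.0 * h)                     # 1D normalization constant
    w = np.zeros_like(q)
    m1 = q <= 1.0
    m2 = (q > 1.0) & (q <= 2.0)
    w[m1] = 1.0 - 1.5 * q[m1]**2 + 0.75 * q[m1]**3
    w[m2] = 0.25 * (2.0 - q[m2])**3
    return sigma * w

# Uniform 1D particle distribution on [0, 2*pi]
n = 200
dx = 2.0 * np.pi / n
x = (np.arange(n) + 0.5) * dx
h = 1.2 * dx                                    # smoothing length
f = np.sin(x)

# SPH summation approximation: <f>(x_i) = sum_j f_j W(|x_i - x_j|, h) * dx
q = np.abs(x[:, None] - x[None, :]) / h
W = cubic_spline_kernel(q, h)
f_sph = (W * f[None, :]).sum(axis=1) * dx

# L2 error of the kernel approximation, boundary region excluded
interior = slice(10, -10)
err = np.sqrt(np.mean((f_sph[interior] - f[interior])**2))
print(f"L2 error of SPH summation approximation: {err:.3e}")
```

Repeating such an experiment with different kernels, smoothing lengths, and particle resolutions is the kind of sweep that a convergence study performs at the level of full WCSPH schemes.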

We revisit the finite-time analysis of policy gradient methods in one of the simplest settings: finite state and action MDPs with a policy class consisting of all stochastic policies and with exact gradient evaluations. There has been some recent work viewing this setting as an instance of smooth nonlinear optimization and showing sub-linear convergence rates with small step sizes. Here, we take a different perspective based on connections with policy iteration and show that many variants of policy gradient methods succeed with large step sizes and attain a linear rate of convergence.
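For context, the policy iteration procedure that such analyses connect policy gradient methods to can be sketched on a small tabular MDP as below; the two-state MDP, its transition tensor P, and the rewards R are hypothetical placeholders, and exact policy evaluation is done by solving a linear system.

```python
import numpy as np

# A tiny hypothetical MDP: 2 states, 2 actions.
# P[s, a, s'] are transition probabilities, R[s, a] are expected rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
gamma = 0.9
n_states, n_actions = R.shape

def policy_evaluation(pi):
    """Solve (I - gamma * P_pi) V = R_pi exactly for a deterministic policy pi."""
    P_pi = P[np.arange(n_states), pi]           # (S, S) transition matrix under pi
    R_pi = R[np.arange(n_states), pi]           # (S,) reward vector under pi
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)

# Policy iteration: alternate exact evaluation with greedy improvement.
pi = np.zeros(n_states, dtype=int)
while True:
    V = policy_evaluation(pi)
    Q = R + gamma * P @ V                       # (S, A) state-action values
    pi_new = Q.argmax(axis=1)
    if np.array_equal(pi_new, pi):
        break
    pi = pi_new

print("greedy policy:", pi, "state values:", V)
```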

Discretization of the uniform norm of functions from a given finite-dimensional subspace of continuous functions is studied. We pay special attention to the case of trigonometric polynomials with frequencies from an arbitrary finite set of fixed cardinality. We give two different proofs of the fact that for any $N$-dimensional subspace of the space of continuous functions it is sufficient to use $e^{CN}$ sample points to obtain an accurate upper bound for the uniform norm. Previously known results show that one cannot improve on the exponential growth of the number of sampling points for a good discretization theorem in the uniform norm. We also prove a general result, which connects the upper bound on the number of sampling points in the discretization theorem for the uniform norm with the best $m$-term bilinear approximation of the Dirichlet kernel associated with the given subspace. We illustrate the application of our technique with the example of trigonometric polynomials.
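In the form commonly used in this context, such a discretization theorem asserts the existence of sample points $\xi^1,\dots,\xi^m$, with $m\le e^{CN}$ here, such that
$$ \|f\|_\infty \;\le\; C_1 \max_{1\le j\le m} |f(\xi^j)| \qquad \text{for all } f\in X_N, $$
where $X_N$ denotes the given $N$-dimensional subspace of continuous functions and $C$, $C_1$ are absolute constants; the notation is generic and only meant to fix the shape of the statement.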

In this paper, we establish error estimates for the numerical approximation of a parabolic optimal control problem with measure data in a two-dimensional nonconvex polygonal domain. Due to the presence of measure data in the state equation and the nonconvex nature of the domain, the finite element error analysis is not straightforward. Regularity results for the control problem based on the first-order optimality system are discussed. The state and co-state variables are approximated by continuous piecewise linear finite elements, and the control variable is approximated by piecewise constant functions. A priori error estimates for the state and control variables are derived for the spatially discrete and fully discrete control problems in the $L^2(L^2)$-norm. A numerical experiment is performed to illustrate our theoretical findings.
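A prototypical problem of this type, written here only to fix ideas (the paper's precise cost functional, state equation, and placement of the measure data may differ), is
$$ \min_{q}\; \tfrac12\,\|u-u_d\|_{L^2(0,T;L^2(\Omega))}^2 + \tfrac{\alpha}{2}\,\|q\|_{L^2(0,T;L^2(\Omega))}^2 \quad\text{subject to}\quad \partial_t u-\Delta u = q+\mu \ \text{in }\Omega\times(0,T), $$
with homogeneous boundary and initial conditions, where $\mu$ is a given measure and $\Omega$ is the nonconvex polygonal domain; the low regularity of $u$ induced by $\mu$ and by the reentrant corner is what complicates the finite element analysis.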

In this paper, we analyze the supercloseness of the nonsymmetric interior penalty Galerkin (NIPG) method on a Shishkin mesh for a singularly perturbed convection-diffusion problem. Guided by the characteristics of the solution and of the scheme, a new analysis is proposed. More specifically, Gau{\ss}-Lobatto interpolation and Gau{\ss}-Radau interpolation are introduced inside and outside the layer, respectively. By selecting special penalty parameters at different mesh points, we establish supercloseness of almost order $k+1$ in the energy norm, where $k$ is the degree of the piecewise polynomials. Then, a simple post-processing operator is constructed; in particular, a new argument is proposed for its stability analysis. On this basis, we prove that the corresponding post-processing yields a numerical solution of higher accuracy. Finally, superconvergence is derived in the discrete energy norm. These theoretical conclusions are verified numerically.
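A typical model problem in this setting, stated in generic form for orientation (the paper's precise data and layer structure may differ), is
$$ -\varepsilon\,\Delta u+\mathbf{b}\cdot\nabla u+cu=f\ \ \text{in }\Omega,\qquad u=0\ \ \text{on }\partial\Omega,\qquad 0<\varepsilon\ll1, $$
whose solution develops sharp boundary layers that the layer-adapted Shishkin mesh is designed to resolve.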

Core decomposition is a classic technique for discovering densely connected regions in a graph, with a wide range of applications. Formally, a $k$-core is a maximal subgraph where each vertex has at least $k$ neighbors. A natural extension of a $k$-core is a $(k, h)$-core, where each node must have at least $k$ nodes that can be reached by a path of length at most $h$. The downside of using $(k, h)$-core decomposition is the significant increase in computational complexity: whereas the standard core decomposition can be done in $O(m)$ time, the generalization can require $O(n^2m)$ time, where $n$ and $m$ are the numbers of nodes and edges in the given graph. In this paper we propose a randomized algorithm that produces an $\epsilon$-approximation of the $(k, h)$-core decomposition with probability $1 - \delta$ in $O(\epsilon^{-2} hm (\log^2 n - \log \delta))$ time. The approximation is based on sampling the neighborhoods of nodes, and we use the Chernoff bound to prove the approximation guarantee. We demonstrate empirically that approximating the decomposition complements the exact computation: computing the approximation is significantly faster than computing the exact solution for the networks where the exact computation is slow.
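To make the baseline concrete, the classic peeling procedure that yields ordinary core numbers can be sketched as follows. This simple version is $O(n^2)$ (bucket queues give the $O(m)$ bound cited above), and it does not implement the $(k,h)$ generalization or the sampling-based approximation.

```python
def core_numbers(adj):
    """Peel vertices in nondecreasing order of remaining degree; the degree of a
    vertex at the moment of its removal is its core number.
    adj: dict mapping each vertex to the set of its neighbors (undirected graph)."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    remaining = set(adj)
    core = {}
    while remaining:
        v = min(remaining, key=degree.get)      # vertex of minimum current degree
        core[v] = degree[v]
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining and degree[u] > degree[v]:
                degree[u] -= 1                  # never drop below the peeling level
    return core

# Small example: a triangle {1, 2, 3} with a pendant vertex 4 attached to 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(core_numbers(adj))   # vertex 4 has core number 1, the triangle vertices have 2
```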

Let $f$ be analytic on $[0,1]$ with $|f^{(k)}(1/2)|\leq A\alpha^kk!$ for some constant $A$ and $\alpha<2$. We show that the median estimate of $\mu=\int_0^1f(x)\,\mathrm{d}x$ under random linear scrambling with $n=2^m$ points converges at the rate $O(n^{-c\log(n)})$ for any $c< 3\log(2)/\pi^2\approx 0.21$. We also obtain a super-polynomial convergence rate for the sample median of $2k-1$ random linearly scrambled estimates when $k=\Omega(m)$. When $f$ has a $p$-th derivative that satisfies a $\lambda$-H\"older condition, the median-of-means has error $O( n^{-(p+\lambda)+\epsilon})$ for any $\epsilon>0$, if $k\to\infty$ as $m\to\infty$.
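The sketch below illustrates only the take-the-median construction: it forms $2k-1$ independent randomized quasi-Monte Carlo estimates and reports their sample median, using a Cranley-Patterson random shift of the base-2 van der Corput sequence as a simple stand-in for the random linear scrambling analyzed in the paper; the integrand and the values of $m$ and $k$ are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def van_der_corput(n):
    """First n points of the base-2 van der Corput sequence (bit reversal)."""
    pts = np.empty(n)
    for i in range(n):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            denom *= 2.0
            x += (k % 2) / denom
            k //= 2
        pts[i] = x
    return pts

def shifted_estimate(f, n):
    """One randomized QMC estimate via a Cranley-Patterson random shift
    (a stand-in here for random linear scrambling)."""
    shift = rng.random()
    x = (van_der_corput(n) + shift) % 1.0
    return f(x).mean()

f = lambda x: np.exp(x)              # integral over [0, 1] is e - 1
m, k = 10, 8                         # n = 2^m points, 2k - 1 replicates
n = 2 ** m
estimates = [shifted_estimate(f, n) for _ in range(2 * k - 1)]
print(f"median estimate {np.median(estimates):.8f}, true value {np.e - 1:.8f}")
```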

A framework is presented to design multirate time-stepping algorithms for two dissipative models with coupling across a physical interface. The coupling takes the form of boundary conditions imposed on the interface, relating the solution variables of the two models to each other. The multirate aspect arises when numerical time integration is performed with different time step sizes for the component models. In this paper, we seek to identify a unified approach to develop multirate algorithms for these coupled problems. This effort is pursued through the use of discontinuous-Galerkin time-stepping methods, acting as a general unified framework, with different time step sizes. The subproblems are coupled across user-defined intervals of time, called {\it coupling windows}, using polynomials that are continuous on the window. The coupling method is shown to reproduce the correct interfacial energy dissipation, discrete conservation of fluxes, and asymptotic accuracy. In principle, methods of arbitrary order are possible. As a first step, we focus here on the presentation and analysis of monolithic methods for advection-diffusion models coupled via generalized Robin-type conditions. The monolithic methods can be computed using a Schur-complement approach. We conclude with a discussion of future developments, such as different interface conditions and partitioned methods.
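To fix ideas, a coupled pair of advection-diffusion subproblems of the kind described above can be written, in one common generic form (the paper's precise generalized Robin conditions may differ), as
$$ \partial_t u_i + \mathbf{a}_i\cdot\nabla u_i - \nu_i\,\Delta u_i = f_i \quad\text{in } \Omega_i,\ i=1,2, $$
coupled on the interface $\Gamma=\partial\Omega_1\cap\partial\Omega_2$ through the Robin-type conditions
$$ \nu_1\nabla u_1\cdot\mathbf{n}_1+\alpha u_1 \;=\; -\,\nu_2\nabla u_2\cdot\mathbf{n}_2+\alpha u_2, \qquad \nu_2\nabla u_2\cdot\mathbf{n}_2+\alpha u_2 \;=\; -\,\nu_1\nabla u_1\cdot\mathbf{n}_1+\alpha u_1 \quad\text{on }\Gamma, $$
where $\mathbf{n}_i$ is the outward normal of $\Omega_i$ and $\alpha>0$ is a coupling parameter; adding and subtracting the two conditions recovers continuity of the diffusive flux and of the solution across $\Gamma$.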

This article is concerned with a nonconforming finite element method for distributed elliptic optimal control problems with pointwise constraints on the control and on the gradient of the state variable. We reduce the minimization problem to a purely state-constrained minimization problem, whose solution can then be characterized by a fourth-order elliptic variational inequality of the first kind. To discretize the control problem we use the bubble-enriched Morley finite element method. To ensure the existence of a solution to the discrete problem, three bubble functions corresponding to the edge means are added to the discrete space. We derive the error in the state variable in an $H^2$-type energy norm. Numerical results are presented to illustrate our analytical findings.
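In generic form, with the specific convex set and fourth-order bilinear form left unspecified, an elliptic variational inequality of the first kind reads: find $u\in K$ such that
$$ a(u,\,v-u) \;\ge\; \ell(v-u) \qquad \text{for all } v\in K, $$
where $K$ is a nonempty closed convex subset of an $H^2$-type space, $a(\cdot,\cdot)$ is the associated fourth-order (biharmonic-type) bilinear form, and $\ell$ is a bounded linear functional; the constraint set $K$ encodes the pointwise bounds mentioned above.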

Stochastic gradient descent with momentum (SGDM) is the dominant algorithm in many optimization scenarios, including convex optimization instances and non-convex neural network training. Yet, in the stochastic setting, momentum interferes with gradient noise, often requiring specific step-size and momentum choices in order to guarantee convergence, let alone acceleration. Proximal point methods, on the other hand, have gained much attention due to their numerical stability and robustness to imperfect tuning. Their accelerated stochastic variants, however, have received limited attention: how momentum interacts with the stability of (stochastic) proximal point methods remains largely unstudied. To address this, we focus on the convergence and stability of the stochastic proximal point algorithm with momentum (SPPAM), and show that, under proper hyperparameter tuning, SPPAM enjoys faster linear convergence to a neighborhood than the stochastic proximal point algorithm (SPPA), with a better contraction factor. In terms of stability, we show that SPPAM depends on problem constants more favorably than SGDM, allowing a wider range of step sizes and momentum values that lead to convergence.
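In one common way of writing these updates (a generic sketch; the paper's exact formulation of SPPAM may differ), the heavy-ball SGDM iteration and its proximal counterpart are
$$ x_{t+1} = x_t - \eta\,\nabla f_{i_t}(x_t) + \beta\,(x_t - x_{t-1}), \qquad x_{t+1} = \operatorname{prox}_{\eta f_{i_t}}\!\bigl(x_t + \beta\,(x_t - x_{t-1})\bigr), $$
where $\operatorname{prox}_{\eta f}(y)=\arg\min_x\bigl\{f(x)+\tfrac{1}{2\eta}\|x-y\|^2\bigr\}$, $\eta$ is the step size, $\beta$ the momentum parameter, and $f_{i_t}$ the sampled component function.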
