
In this paper, we study two kinds of structure-preserving splitting methods, including the Lie--Trotter type splitting method and the finite difference type method, for the stochastic logarithmic Schr\"odinger equation (SlogS equation) via a regularized energy approximation. We first introduce a regularized SlogS equation with a small parameter $0<\epsilon\ll1$ which approximates the SlogS equation and avoids the singularity near zero density. Then we present a priori estimates, the regularized entropy and energy, and the stochastic symplectic structure of the proposed numerical methods. Furthermore, we derive both the strong convergence rates and the convergence rates of the regularized entropy and energy. To the best of our knowledge, this is the first result concerning the construction and analysis of numerical methods for stochastic Schr\"odinger equations with logarithmic nonlinearities.
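To illustrate the Lie--Trotter splitting idea for the regularized equation, the following is a minimal deterministic sketch for $\mathrm{i}\,u_t = -u_{xx} + \lambda\log(\epsilon+|u|^2)u$ on a periodic grid, with the stochastic term omitted. The linear sub-flow is solved exactly in Fourier space; the regularized logarithmic sub-flow is a pure phase rotation because $|u|$ is invariant along it. Function names and parameter values are illustrative, not the paper's scheme.

```python
import numpy as np

def lie_trotter_step(u, dt, k2, lam=1.0, eps=1e-6):
    """One Lie--Trotter step for i u_t = -u_xx + lam*log(eps+|u|^2) u.

    Linear sub-flow: solved exactly in Fourier space.
    Nonlinear sub-flow: a phase rotation, since |u| is conserved along it.
    (Deterministic sketch; the stochastic term of the SlogS equation is omitted.)
    """
    # Linear part: u_hat(t) = exp(-i k^2 t) u_hat(0)
    u = np.fft.ifft(np.exp(-1j * k2 * dt) * np.fft.fft(u))
    # Regularized logarithmic part: |u| is invariant, so the flow is exact
    u = u * np.exp(-1j * lam * dt * np.log(eps + np.abs(u) ** 2))
    return u

# Periodic grid on [0, 2*pi)
N = 256
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
k2 = k ** 2
u = np.exp(-2 * (x - np.pi) ** 2).astype(complex)

mass0 = np.sum(np.abs(u) ** 2)
for _ in range(100):
    u = lie_trotter_step(u, dt=1e-3, k2=k2)
# Both sub-flows are unitary, so the discrete mass is preserved
assert np.isclose(np.sum(np.abs(u) ** 2), mass0)
```

The regularization $\log(\epsilon+|u|^2)$ is what keeps the nonlinear phase bounded where the density vanishes, mirroring the role of $\epsilon$ in the abstract.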


We consider a class of statistical estimation problems in which we are given a random data matrix ${\boldsymbol X}\in {\mathbb R}^{n\times d}$ (and possibly some labels ${\boldsymbol y}\in{\mathbb R}^n$) and would like to estimate a coefficient vector ${\boldsymbol \theta}\in{\mathbb R}^d$ (or possibly a constant number of such vectors). Special cases include low-rank matrix estimation and regularized estimation in generalized linear models (e.g., sparse regression). First order methods proceed by iteratively multiplying current estimates by ${\boldsymbol X}$ or its transpose; examples include gradient descent and its accelerated variants. Celentano, Montanari, and Wu proved that for any constant number of iterations (matrix-vector multiplications), the optimal first order algorithm is a specific approximate message passing algorithm (known as `Bayes AMP'). The error of this estimator can be characterized in the high-dimensional asymptotics $n,d\to\infty$, $n/d\to\delta$, and provides a lower bound on the estimation error of any first order algorithm. Here we present a simpler proof of the same result and generalize it to broader classes of data distributions and of first order algorithms, including algorithms with non-separable nonlinearities. Most importantly, the new proof technique does not require constructing an equivalent tree-structured estimation problem and is therefore amenable to a broader range of applications.
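A concrete member of the first order class described above is gradient descent for ridge regression: each iteration touches the data only through one multiplication by ${\boldsymbol X}$ and one by its transpose. This minimal sketch (not Bayes AMP itself) makes that structure explicit; the problem sizes and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.normal(size=(n, d)) / np.sqrt(n)
theta_star = rng.normal(size=d)
y = X @ theta_star + 0.1 * rng.normal(size=n)

# Gradient descent on 0.5*||y - X theta||^2 + 0.5*lam*||theta||^2:
# each iteration uses exactly one multiply by X and one by X^T,
# so it belongs to the "first order method" class discussed above.
lam, step = 0.1, 0.2
theta = np.zeros(d)
for _ in range(500):
    grad = X.T @ (X @ theta - y) + lam * theta
    theta -= step * grad

# Agrees with the closed-form ridge solution
theta_exact = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
assert np.allclose(theta, theta_exact, atol=1e-6)
```

Bayes AMP has the same per-iteration data access pattern but adds carefully chosen nonlinearities and an Onsager correction term, which is what makes it optimal within this class.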

In this paper, we consider the widely used but not fully understood stochastic estimator based on moving average (SEMA), which only requires {\bf a general unbiased stochastic oracle}. We demonstrate the power of SEMA on a range of stochastic non-convex optimization problems. In particular, we analyze various stochastic methods (existing or newly proposed) based on the {\bf variance recursion property} of SEMA for three families of non-convex optimization, namely standard stochastic non-convex minimization, stochastic non-convex strongly-concave min-max optimization, and stochastic bilevel optimization. Our contributions include: (i) for standard stochastic non-convex minimization, we present a simple and intuitive proof of convergence for a family of Adam-style methods (including Adam, AMSGrad, AdaBound, etc.) with an increasing or large "momentum" parameter for the first-order moment, which gives an alternative and more natural way to guarantee the convergence of Adam; (ii) for stochastic non-convex strongly-concave min-max optimization, we present single-loop primal-dual stochastic momentum and adaptive methods based on the moving average estimators and establish their oracle complexity of $O(1/\epsilon^4)$ without using a large mini-batch size, addressing a gap in the literature; (iii) for stochastic bilevel optimization, we present a single-loop stochastic method based on the moving average estimators and establish its oracle complexity of $\widetilde O(1/\epsilon^4)$ without computing the SVD of the Hessian matrix, improving state-of-the-art results. For all these problems, we also establish a variance diminishing result for the stochastic gradient estimators used.
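The core SEMA recursion $v_t = (1-\beta)\,v_{t-1} + \beta\, g_t$, where $g_t$ is any unbiased stochastic gradient, can be sketched in a few lines. The function name, test problem, and hyperparameters below are illustrative assumptions, not the paper's methods.

```python
import numpy as np

def sema_sgd(grad_oracle, x0, beta=0.1, step=0.05, iters=2000, seed=0):
    """SGD driven by the moving-average (SEMA) gradient estimator
    v_t = (1 - beta) * v_{t-1} + beta * g_t, where g_t is any
    unbiased stochastic gradient. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = grad_oracle(x, rng)                  # initialize with one oracle call
    for _ in range(iters):
        v = (1.0 - beta) * v + beta * grad_oracle(x, rng)
        x = x - step * v
    return x

# Noisy gradient of f(x) = 0.5*||x||^2 (minimizer at the origin)
noisy_grad = lambda x, rng: x + rng.normal(scale=0.5, size=x.shape)

x_final = sema_sgd(noisy_grad, x0=np.ones(5))
assert np.linalg.norm(x_final) < 0.5
```

The averaging damps the oracle noise (the "variance recursion" referred to above): the stationary variance of $v_t$ is a factor of roughly $\beta/(2-\beta)$ smaller than that of a single stochastic gradient.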

We develop an essentially optimal finite element approach for solving ergodic stochastic two-scale elliptic equations whose two-scale coefficient may depend also on the slow variable. We solve the limiting stochastic two-scale homogenized equation obtained from the stochastic two-scale convergence in the mean (A. Bourgeat, A. Mikelic and S. Wright, J. reine angew. Math, Vol. 456, 1994), whose solution comprises the solution to the homogenized equation and the corrector, by truncating the infinite domain of the fast variable and using the sparse tensor product finite elements. We show that the convergence rate in terms of the truncation level is equivalent to that for solving the cell problems in the same truncated domain. Solving this equation, we obtain the solution to the homogenized equation and the corrector at the same time, using only a number of degrees of freedom that is essentially equivalent to that required for solving one cell problem. Optimal complexity is obtained when the corrector possesses sufficient regularity with respect to both the fast and the slow variables. Although the regularity norm of the corrector depends on the size of the truncated domain, we show that the convergence rate of the approximation for the solution to the homogenized equation is independent of the size of the truncated domain. With the availability of an analytic corrector, we construct a numerical corrector for the solution of the original stochastic two-scale equation from the finite element solution to the truncated stochastic two-scale homogenized equation. Numerical examples of quasi-periodic two-scale equations, and a stochastic two-scale equation of the checker board type, whose coefficient is discontinuous, confirm the theoretical results.

We propose a novel class of uniformly accurate integrators for the Klein--Gordon equation which capture the classical regime $c=1$ as well as the highly oscillatory non-relativistic regime $c\gg1$ and, at the same time, allow for low regularity approximations. In particular, the schemes converge with order $\tau$ and $\tau^2$, respectively, under lower regularity assumptions than classical schemes, such as splitting or exponential integrator methods, require. The new schemes in addition preserve the nonlinear Schr\"odinger (NLS) limit on the discrete level. More precisely, we design our schemes in such a way that in the limit $c\to \infty$ they converge to a recently introduced class of low regularity integrators for NLS.

In this paper, we present a sharp analysis for a class of alternating projected gradient descent algorithms which are used to solve the covariate adjusted precision matrix estimation problem in the high-dimensional setting. We demonstrate that these algorithms not only enjoy a linear rate of convergence in the absence of convexity, but also attain the optimal statistical rate (i.e., minimax rate). By introducing the generic chaining, our analysis removes the impractical resampling assumption used in the previous work. Moreover, our results also reveal a time-data tradeoff in this covariate adjusted precision matrix estimation problem. Numerical experiments are provided to verify our theoretical results.

Recent quasi-optimal error estimates for the finite element approximation of total-variation regularized minimization problems using the Crouzeix--Raviart finite element require the existence of a Lipschitz continuous dual solution, which is not generally given. We provide analytic proofs showing that the Lipschitz continuity of a dual solution is not necessary, in general. Using the Lipschitz truncation technique, we, in addition, derive error estimates that depend directly on the Sobolev regularity of a given dual solution.

We propose a stochastic gradient descent approach with partitioned-truncated singular value decomposition for large-scale inverse problems of magnetic modulus data. Motivated by a uniqueness theorem for the gravity inverse problem, and noting the similarity between gravity and magnetic inverse problems, we propose to solve for the level-set function modeling the volume susceptibility distribution from the nonlinear magnetic modulus data. To deal with large-scale data, we employ a mini-batch stochastic gradient descent approach with random reshuffling when solving the optimization problem of the inverse problem. We propose a stepsize rule for the stochastic gradient descent according to the Courant--Friedrichs--Lewy condition of the evolution equation. In addition, we develop a partitioned-truncated singular value decomposition algorithm for the linear part of the inverse problem in the context of stochastic gradient descent. Numerical examples illustrate the efficacy of the proposed method, which turns out to have the capability of efficiently processing large-scale measurement data for the magnetic inverse problem. A possible generalization to inverse problems involving deep neural networks is discussed at the end.
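The mini-batch SGD loop with random reshuffling can be sketched on a plain least-squares problem: each epoch visits every sample exactly once in a fresh random order. This is a generic sketch; the paper couples such a loop with a CFL-based stepsize rule and a partitioned-truncated SVD, neither of which is reproduced here.

```python
import numpy as np

def sgd_random_reshuffling(X, y, step=0.1, batch=16, epochs=100, seed=0):
    """Least-squares fit by mini-batch SGD with random reshuffling:
    one random permutation per epoch, every sample used exactly once.
    Illustrative sketch of the sampling scheme only.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)              # reshuffle once per epoch
        for start in range(0, n, batch):
            idx = perm[start:start + batch]
            resid = X[idx] @ theta - y[idx]
            theta -= step * X[idx].T @ resid / len(idx)
    return theta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
theta_star = rng.normal(size=5)
theta = sgd_random_reshuffling(X, X @ theta_star)   # noiseless data
assert np.allclose(theta, theta_star, atol=1e-3)
```

Reshuffling (sampling without replacement within each epoch) typically converges faster in practice than independent uniform sampling, which is why it is the default in large-scale data loops.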

In this article, we propose a higher order approximation to the Caputo fractional (C-F) derivative using a graded mesh, combined with standard central difference approximations for the space derivatives, in order to obtain approximate solutions of time fractional partial differential equations (TFPDEs). The proposed approximation for the C-F derivative handles the singularity at the origin effectively and is easily applicable to diverse problems. We discuss the stability analysis and truncation error bounds of the proposed scheme, and we also analyze the required regularity of the solution. A few numerical examples are presented to support the theory.
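For context, the classical first-order L1 approximation of the Caputo derivative on an arbitrary (e.g. graded) mesh looks as follows; the paper's higher order scheme refines this construction, and is not reproduced here. The grading exponent and test function are illustrative.

```python
import numpy as np
from math import gamma

def caputo_l1(u, t, alpha):
    """L1-type approximation of the Caputo derivative D^alpha u(t_n),
    0 < alpha < 1, on an arbitrary mesh t_0 < ... < t_N: u' is taken
    piecewise constant and the weakly singular kernel is integrated
    exactly on each subinterval.
    """
    n = len(t) - 1
    s = 0.0
    for k in range(n):
        du = (u[k + 1] - u[k]) / (t[k + 1] - t[k])
        s += du * ((t[n] - t[k]) ** (1 - alpha) - (t[n] - t[k + 1]) ** (1 - alpha))
    return s / gamma(2.0 - alpha)

# Graded mesh t_j = T*(j/N)^r clusters points near the t=0 singularity
alpha, T, N, r = 0.5, 1.0, 200, 2.0
t = T * (np.arange(N + 1) / N) ** r
u = t                                   # u(t) = t
# Exact Caputo derivative of u(t) = t is t^(1-alpha)/Gamma(2-alpha);
# the L1 formula is exact for piecewise-linear data.
exact = T ** (1 - alpha) / gamma(2 - alpha)
assert abs(caputo_l1(u, t, alpha) - exact) < 1e-10
```

The grading $t_j = T(j/N)^r$ with $r>1$ is the standard device for recovering optimal convergence rates when the solution has limited regularity at $t=0$.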

Escaping saddle points is a central research topic in nonconvex optimization. In this paper, we propose a simple gradient-based algorithm such that for a smooth function $f\colon\mathbb{R}^n\to\mathbb{R}$, it outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}(\log n/\epsilon^{1.75})$ iterations. Compared to the previous state-of-the-art algorithms by Jin et al. with $\tilde{O}((\log n)^{4}/\epsilon^{2})$ or $\tilde{O}((\log n)^{6}/\epsilon^{1.75})$ iterations, our algorithm is polynomially better in terms of $\log n$ and matches their complexities in terms of $1/\epsilon$. For the stochastic setting, our algorithm outputs an $\epsilon$-approximate second-order stationary point in $\tilde{O}((\log n)^{2}/\epsilon^{4})$ iterations. Technically, our main contribution is an idea of implementing a robust Hessian power method using only gradients, which can find negative curvature near saddle points and achieve the polynomial speedup in $\log n$ compared to the perturbed gradient descent methods. Finally, we also perform numerical experiments that support our results.
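The gradient-only Hessian power method idea can be sketched as follows: Hessian-vector products are approximated by a finite difference of two gradients, $Hv \approx (\nabla f(x+rv)-\nabla f(x))/r$, and power iteration on $I-\eta H$ amplifies the most negative curvature direction. This is an illustrative sketch of the idea under assumed parameters, not the paper's exact algorithm or perturbation schedule.

```python
import numpy as np

def neg_curvature_direction(grad, x, dim, iters=200, r=1e-4, seed=0):
    """Power iteration on (I - eta*H) using only gradient evaluations:
    each Hessian-vector product costs two gradient calls via the
    finite difference H v ~ (grad(x + r v) - grad(x)) / r.
    Illustrative sketch of the gradient-only idea.
    """
    rng = np.random.default_rng(seed)
    g0 = grad(x)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    eta = 0.1                             # assumes ||H|| <= 1/eta, roughly
    for _ in range(iters):
        hv = (grad(x + r * v) - g0) / r   # ~ H v from two gradients
        v = v - eta * hv                  # power step on I - eta*H
        v /= np.linalg.norm(v)
    return v

# Saddle point of f(x) = 0.5*(x0^2 - x1^2) at the origin
grad = lambda x: np.array([x[0], -x[1]])
v = neg_curvature_direction(grad, x=np.zeros(2), dim=2)
# The recovered direction aligns with the negative-curvature axis e_2
assert abs(v[1]) > 0.99
```

Since $I-\eta H$ has its largest eigenvalue along the most negative eigenvalue of $H$, power iteration finds the escape direction without ever forming, or even multiplying by, the Hessian explicitly.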

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization for a number of classification problems.
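The mean/variance tradeoff targeted above can be illustrated with the first-order expansion of the $\chi^2$ distributionally robust objective, $\sup_{P \text{ near } P_n} \mathbb{E}_P[\ell] \approx \bar\ell + \sqrt{2\rho\,\mathrm{Var}_n(\ell)/n}$. The sketch below evaluates this expansion, not the paper's exact convex surrogate; the robustness radius and data are illustrative.

```python
import numpy as np

def variance_penalized_risk(losses, rho):
    """First-order expansion of the chi-square DRO objective:
    mean loss + sqrt(2*rho*Var/n). A sketch of the mean/variance
    tradeoff; the exact convex surrogate is a sup over a
    divergence ball around the empirical distribution.
    """
    n = losses.size
    return losses.mean() + np.sqrt(2.0 * rho * losses.var(ddof=1) / n)

rng = np.random.default_rng(0)
# Two predictors with (nearly) the same mean loss but different variance
low_var = rng.normal(loc=1.0, scale=0.1, size=1000)
high_var = rng.normal(loc=1.0, scale=2.0, size=1000)
# The penalized criterion prefers the low-variance predictor even
# though plain ERM sees nearly identical average losses.
assert variance_penalized_risk(low_var, rho=5.0) < variance_penalized_risk(high_var, rho=5.0)
```

Minimizing the mean-plus-deviation criterion directly is nonconvex in the model parameters, which is precisely why the robust (sup-over-distributions) formulation is used as a convex surrogate.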
