
This paper proposes a new algorithm -- the \underline{S}ingle-timescale Do\underline{u}ble-momentum \underline{St}ochastic \underline{A}pprox\underline{i}matio\underline{n} (SUSTAIN) -- for tackling stochastic unconstrained bilevel optimization problems. We focus on bilevel problems where the lower level subproblem is strongly convex and the upper level objective function is smooth. Unlike prior works which rely on \emph{two-timescale} or \emph{double loop} techniques, we design a stochastic momentum-assisted gradient estimator for both the upper and lower level updates. This estimator allows us to control the error in the stochastic gradient updates caused by inaccurate solutions of both subproblems. If the upper objective function is smooth but possibly non-convex, we show that SUSTAIN requires $\mathcal{O}(\epsilon^{-3/2})$ iterations (each using ${\cal O}(1)$ samples) to find an $\epsilon$-stationary solution, defined as a point at which the squared norm of the gradient of the outer function is at most $\epsilon$. The total number of stochastic gradient samples required for the upper and lower level objective functions matches the best-known complexity for single-level stochastic gradient algorithms. We also analyze the case where the upper level objective function is strongly convex.
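To make the double-momentum idea concrete, here is a minimal Python sketch of a STORM-style momentum-assisted gradient estimator of the kind the abstract describes, applied to a toy single-level problem; SUSTAIN applies such an estimator at both the upper and lower levels, and the step-size and momentum schedules below are illustrative assumptions rather than the paper's exact recursion.

```python
import numpy as np

def storm_estimator(grad, x_prev, x_curr, d_prev, eta, sample):
    """Momentum-assisted (STORM-style) estimator:
    d_t = grad(x_t; xi_t) + (1 - eta_t) * (d_{t-1} - grad(x_{t-1}; xi_t)).
    Evaluating both gradients on the same sample xi_t cancels stale bias."""
    return grad(x_curr, sample) + (1.0 - eta) * (d_prev - grad(x_prev, sample))

# Toy single-level demo on f(x) = E[0.5 * (x - xi)^2] with xi ~ N(0, 1).
rng = np.random.default_rng(0)
grad = lambda x, xi: x - xi              # stochastic gradient of the toy loss
x_prev = x = 5.0
d = grad(x, rng.normal())                # plain stochastic gradient to start
for t in range(1, 2000):
    alpha, eta = 0.05, 1.0 / (t + 1) ** (2 / 3)   # illustrative schedules
    x_prev, x = x, x - alpha * d
    d = storm_estimator(grad, x_prev, x, d, eta, rng.normal())
print(f"final iterate: {x:.3f}")         # should be close to the minimizer 0
```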

Related Content

Given an input x, the model outputs a prediction f(x). This output f(x) may or may not agree with the true value Y. To quantify how well the model fits the data, we use a function that measures the discrepancy between them; this function is called the loss function (or cost function).
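As a tiny illustration, the squared loss below measures the discrepancy between predictions f(x) and true values Y over a dataset:

```python
import numpy as np

def squared_loss(y_true, y_pred):
    """Mean squared error: the average of (f(x) - Y)^2 over the dataset."""
    return np.mean((y_pred - y_true) ** 2)

y_true = np.array([1.0, 2.0, 3.0])           # ground-truth values Y
y_pred = np.array([1.1, 1.9, 3.2])           # model outputs f(x)
print(squared_loss(y_true, y_pred))          # small value -> good fit
```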

In this paper, we develop a robust fast method for mobile-immobile variable-order (VO) time-fractional diffusion equations (tFDEs), which handles particularly well the cases where the lower bound of the VO function is small or vanishing. A valid fast approximation of the VO Caputo fractional derivative is obtained using integration by parts and the exponential-sum-approximation method. Compared with the general direct method, the proposed algorithm (the $RF$-$L1$ formula) reduces the acting memory from $\mathcal{O}(n)$ to $\mathcal{O}(\log^2 n)$ and the computational cost from $\mathcal{O}(n^2)$ to $\mathcal{O}(n \log^2 n)$, where $n$ is the number of time levels. The $RF$-$L1$ formula is then applied to construct a fast finite difference scheme for the VO tFDEs, which sharply reduces the memory requirement and computational complexity. The error estimate for the proposed scheme is established under assumptions on the VO function, the coefficients, and the source term only, without any regularity assumption on the true solutions. Numerical experiments are presented to verify the effectiveness of the proposed method.
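The core trick behind such fast methods is replacing the weakly singular kernel $t^{-\alpha}$ by a short sum of exponentials, so the history can be carried by a handful of recursively updated modes instead of the full trajectory. Below is a hedged sketch of that approximation using a crude log-space quadrature of the identity $t^{-\alpha} = \frac{1}{\Gamma(\alpha)}\int_0^\infty s^{\alpha-1}e^{-st}\,ds$; the node placement is an illustrative assumption, not the paper's exponential-sum-approximation construction.

```python
import numpy as np
from math import gamma

def exp_sum_kernel(alpha, n_nodes=256):
    """Approximate t^{-alpha} ~ sum_i w_i * exp(-lam_i * t) by discretizing
    t^{-alpha} = (1/Gamma(alpha)) * int_0^inf s^(alpha-1) e^{-s t} ds
    with a crude quadrature on log-spaced nodes (an illustrative choice)."""
    lam = np.logspace(-6, 6, n_nodes)        # decay rates = quadrature nodes
    w = lam ** (alpha - 1) * np.gradient(lam) / gamma(alpha)
    return w, lam

alpha = 0.6
w, lam = exp_sum_kernel(alpha)
for t in (0.1, 1.0, 10.0):
    approx = np.sum(w * np.exp(-lam * t))
    print(f"t={t:5.1f}  exact={t ** -alpha:.4f}  exp-sum={approx:.4f}")
# With the kernel in this form, the history term is carried by n_nodes modes,
# each updated recursively as H_i <- exp(-lam_i * tau) * (H_i + increment),
# which is where the polylogarithmic memory footprint comes from.
```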

We use functional mirror ascent to propose a general framework (referred to as FMA-PG) for designing policy gradient methods. The functional perspective distinguishes between a policy's functional representation (what are its sufficient statistics) and its parameterization (how are these statistics represented) and naturally results in computationally efficient off-policy updates. For simple policy parameterizations, the FMA-PG framework ensures that the optimal policy is a fixed point of the updates. It also allows us to handle complex policy parameterizations (e.g., neural networks) while guaranteeing policy improvement. Our framework unifies several PG methods and opens the way for designing sample-efficient variants of existing methods. Moreover, it recovers important implementation heuristics (e.g., using forward vs reverse KL divergence) in a principled way. With a softmax functional representation, FMA-PG results in a variant of TRPO with additional desirable properties. It also suggests an improved variant of PPO, whose robustness and efficiency we empirically demonstrate on MuJoCo. Via experiments on simple reinforcement learning problems, we evaluate algorithms instantiated by FMA-PG.
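As a minimal sketch of the softmax case: with a KL mirror map, a functional mirror ascent step tilts the old policy by the exponentiated action values. The step size and action values below are illustrative assumptions, not FMA-PG's exact surrogate objective.

```python
import numpy as np

def fma_pg_softmax_step(pi, q_values, eta):
    """One functional mirror ascent step with a softmax representation and
    KL mirror map: the new policy tilts the old one by the exponentiated
    action values, pi' proportional to pi * exp(eta * Q)."""
    logits = np.log(pi) + eta * q_values
    p = np.exp(logits - logits.max())        # shift for numerical stability
    return p / p.sum()

pi = np.ones(4) / 4                          # uniform policy over 4 actions
q = np.array([1.0, 0.5, 0.2, -1.0])          # fixed action values for the demo
for _ in range(50):
    pi = fma_pg_softmax_step(pi, q, eta=0.5)
print(pi.round(3))                           # mass concentrates on action 0
```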

\noindent Several decades ago the Proximal Point Algorithm (PPA) started to gain long-lasting attraction in both the abstract operator theory and the numerical optimization communities. Even in modern applications, researchers still use proximal minimization theory to design scalable algorithms that overcome nonsmoothness. Remarkable works such as \cite{Fer:91,Ber:82constrained,Ber:89parallel,Tom:11} established tight relations between the convergence behaviour of PPA and the regularity of the objective function. In this manuscript we derive the nonasymptotic iteration complexity of exact and inexact PPA for minimizing convex functions under $\gamma$-Hölderian growth: $\mathcal{O}(\log(1/\epsilon))$ (for $\gamma \in [1,2]$) and $\mathcal{O}(1/\epsilon^{\gamma - 2})$ (for $\gamma > 2$). In particular, we recover well-known results on PPA: finite convergence for sharp minima and linear convergence for quadratic growth, even in the presence of inexactness. However, without taking into account the concrete computational effort paid for each PPA iteration, any iteration complexity remains abstract and purely informative. Therefore, using an inner (proximal) gradient/subgradient method subroutine to compute each inexact PPA iteration, we further derive novel computational complexity bounds for a restarted inexact PPA, available when no information on the growth of the objective function is known. In the numerical experiments we confirm the practical performance and implementability of our framework.
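A minimal sketch of inexact PPA with an inner gradient subroutine, as described above; the smooth test function, step sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

def inexact_ppa(grad_f, x0, lam=1.0, outer=50, inner=100, lr=0.1):
    """Inexact PPA: x_{k+1} ~ argmin_y f(y) + ||y - x_k||^2 / (2 * lam),
    each prox subproblem solved approximately by inner gradient descent."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        y = x.copy()
        for _ in range(inner):           # inner (proximal) gradient subroutine
            y -= lr * (grad_f(y) + (y - x) / lam)
        x = y                            # accept the inexact prox point
    return x

# Smooth convex test problem f(x) = 0.5 * ||A x - b||^2.
A = np.array([[2.0, 0.0], [0.0, 0.5]])
b = np.array([1.0, -1.0])
x = inexact_ppa(lambda z: A.T @ (A @ z - b), x0=np.zeros(2))
print(x)                                 # approaches the solution [0.5, -2.0]
```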

In this paper, we propose and analyze a temporally second-order accurate, fully discrete finite element method for the magnetohydrodynamic (MHD) equations. A modified Crank--Nicolson method is used to discretize the model and appropriate semi-implicit treatments are applied to the fluid convection term and two coupling terms. These semi-implicit approximations result in a linear system with variable coefficients for which the unique solvability can be proved theoretically. In addition, we use a decoupling projection method of the Van Kan type \cite{vankan1986} in the Stokes solver, which computes the intermediate velocity field based on the gradient of the pressure from the previous time level, and enforces the incompressibility constraint via the Helmholtz decomposition of the intermediate velocity field. The energy stability of the scheme is theoretically proved, in which the decoupled Stokes solver needs to be analyzed in detail. Optimal-order convergence of $\mathcal{O} (\tau^2+h^{r+1})$ in the discrete $L^\infty(0,T;L^2)$ norm is proved for the proposed decoupled projection finite element scheme, where $\tau$ and $h$ are the time stepsize and spatial mesh size, respectively, and $r$ is the degree of the finite elements. Existing error estimates of second-order projection methods of the Van Kan type \cite{vankan1986} were only established in the discrete $L^2(0,T;L^2)$ norm for the Navier--Stokes equations. Numerical examples are provided to illustrate the theoretical results.
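The Helmholtz-decomposition step can be illustrated in isolation: on a periodic grid, the Leray projection below removes the curl-free component of an intermediate velocity field, making it discretely divergence-free. This is only a sketch of the building block under periodic assumptions; the paper's scheme handles boundary conditions, the pressure gradient, and the coupling terms far more carefully.

```python
import numpy as np

def leray_projection(u, v):
    """Remove the curl-free part of (u, v) on a periodic grid via the
    Helmholtz decomposition in Fourier space; the result is discretely
    divergence-free. Assumes a square, doubly periodic domain."""
    n = u.shape[0]
    k = np.fft.fftfreq(n) * n * 2j * np.pi
    KX, KY = np.meshgrid(k, k, indexing="ij")
    u_h, v_h = np.fft.fft2(u), np.fft.fft2(v)
    k2 = KX ** 2 + KY ** 2
    k2[0, 0] = 1.0                       # avoid 0/0 at the mean mode
    div_h = (KX * u_h + KY * v_h) / k2
    return (np.fft.ifft2(u_h - KX * div_h).real,
            np.fft.ifft2(v_h - KY * div_h).real)

rng = np.random.default_rng(1)
u, v = rng.standard_normal((2, 64, 64))
u_p, v_p = leray_projection(u, v)
k = np.fft.fftfreq(64) * 64 * 2j * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
div = np.fft.ifft2(KX * np.fft.fft2(u_p) + KY * np.fft.fft2(v_p)).real
print(np.abs(div).max())                 # ~1e-13: projected field is div-free
```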

The local minimax method (LMM) proposed in [Y. Li and J. Zhou, SIAM J. Sci. Comput. 23(3), 840--865 (2001)] and [Y. Li and J. Zhou, SIAM J. Sci. Comput. 24(3), 865--885 (2002)] is an efficient method for finding multiple solutions of nonlinear elliptic partial differential equations (PDEs) with certain variational structures. The steepest descent direction and Armijo-type step-size search rules are adopted in the above work and play a significant role in the performance and convergence analysis of traditional LMMs. In this paper, a new algorithmic framework for LMMs is established based on general descent directions and two normalized (strong) Wolfe-Powell-type step-size search rules. The corresponding algorithm, named the normalized Wolfe-Powell-type LMM (NWP-LMM), is introduced, and its feasibility and global convergence are rigorously justified for general descent directions. As a special case, the global convergence of the NWP-LMM combined with preconditioned steepest descent (PSD) directions is also verified. Consequently, this extends the framework of traditional LMMs. In addition, conjugate gradient-type (CG-type) descent directions are utilized to speed up the LMM algorithms. Finally, extensive numerical results for several semilinear elliptic PDEs are reported to profile their multiple unstable solutions and to compare different algorithms in the LMM family, indicating the effectiveness and robustness of our algorithms. In practice, the NWP-LMM combined with the CG-type direction performs much better than its LMM companions.
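For reference, here is a standard bisection-style search for a step length satisfying the strong Wolfe-Powell conditions; the normalized variants analyzed in the paper modify these rules, so this block is a generic textbook sketch, not the NWP-LMM rule itself.

```python
import numpy as np

def strong_wolfe(f, grad, x, d, c1=1e-4, c2=0.9, amax=10.0, iters=50):
    """Bisection-style search for a step length a satisfying the strong
    Wolfe-Powell conditions along a descent direction d."""
    phi0, dphi0 = f(x), grad(x) @ d      # dphi0 < 0 for a descent direction
    lo, hi, a = 0.0, amax, 1.0
    for _ in range(iters):
        ga = grad(x + a * d) @ d
        if f(x + a * d) > phi0 + c1 * a * dphi0:
            hi = a                       # sufficient decrease fails: too long
        elif ga < c2 * dphi0:
            lo = a                       # slope still too negative: too short
        elif ga > -c2 * dphi0:
            hi = a                       # slope too positive: overshoot
        else:
            return a
        a = 0.5 * (lo + hi)
    return a

f, grad = (lambda z: z @ z), (lambda z: 2 * z)
x = np.array([1.0, 1.0])
print(strong_wolfe(f, grad, x, -grad(x)))   # 0.5, the exact 1D minimizer
```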

Interpretation of Deep Neural Network (DNN) training as an optimal control problem with nonlinear dynamical systems has received considerable attention recently, yet the algorithmic development remains relatively limited. In this work, we make an attempt along this line by reformulating the training procedure from the trajectory optimization perspective. We first show that most widely-used algorithms for training DNNs can be linked to Differential Dynamic Programming (DDP), a celebrated second-order trajectory optimization algorithm rooted in Approximate Dynamic Programming. In this vein, we propose a new variant of DDP that can accept batch optimization for training feedforward networks, while integrating naturally with recent progress in curvature approximation. The resulting algorithm features layer-wise feedback policies which improve the convergence rate and reduce sensitivity to hyper-parameters compared with existing methods. We show that the algorithm is competitive against state-of-the-art first- and second-order methods. Our work opens up new avenues for principled algorithmic design built upon optimal control theory.
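The layer-wise feedback policies come from a DDP/LQR-style backward pass. The sketch below shows the backward Riccati recursion that produces time-varying gains $K_t$ for a linear-quadratic model; DDP applies the same recursion to local quadratic expansions, and the dynamics and costs here are illustrative assumptions.

```python
import numpy as np

def lqr_backward(A, B, Q, R, T):
    """Backward Riccati recursion producing the time-varying feedback
    gains K_t; DDP applies the same recursion to local quadratic
    expansions, yielding layer-wise policies u_t = -K_t x_t."""
    P, gains = Q.copy(), []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]                       # time-ordered, t = 0 first

A = np.array([[1.0, 0.1], [0.0, 1.0]])       # illustrative double integrator
B = np.array([[0.0], [0.1]])
K0 = lqr_backward(A, B, np.eye(2), np.eye(1), T=50)[0]
print(K0)                                    # stabilizing first-stage gain
```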

Quantum hardware and quantum-inspired algorithms are becoming increasingly popular for combinatorial optimization. However, these algorithms may require careful hyperparameter tuning for each problem instance. We use a reinforcement learning agent in conjunction with a quantum-inspired algorithm to solve the Ising energy minimization problem, which is equivalent to the Maximum Cut problem. The agent controls the algorithm by tuning one of its parameters with the goal of improving recently seen solutions. We propose a new Rescaled Ranked Reward (R3) method that enables a stable single-player version of self-play training and helps the agent escape local optima. Training on any problem instance can be accelerated by applying transfer learning from an agent trained on randomly generated problems. Our approach allows sampling high-quality solutions to the Ising problem with high probability and outperforms both baseline heuristics and a black-box hyperparameter optimization approach.
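Below is a hedged sketch of a ranked-reward signal of the kind described: each new Ising energy is compared against a percentile of recently seen energies, with a rescaling to keep the signal's magnitude stable. The specific `scale` formula is a hypothetical stand-in; the paper defines the exact R3 rescaling.

```python
import numpy as np
from collections import deque

class RankedReward:
    """Compare each new Ising energy against the alpha-percentile of a
    buffer of recent energies; `scale` is a hypothetical rescaling, not
    the paper's exact R3 formula."""
    def __init__(self, alpha=0.75, buffer_size=100):
        self.alpha, self.buf = alpha, deque(maxlen=buffer_size)

    def reward(self, energy):
        self.buf.append(energy)
        threshold = np.percentile(self.buf, 100 * (1 - self.alpha))
        scale = abs(energy - threshold) / (np.std(self.buf) + 1e-8)
        return scale if energy < threshold else -scale  # lower energy wins

rr = RankedReward()
rewards = [rr.reward(e) for e in np.random.default_rng(2).normal(size=200)]
print(f"mean reward over the run: {np.mean(rewards):+.3f}")
```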

We consider the exploration-exploitation trade-off in reinforcement learning and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space, and it is similar to other well-known methods in the literature, including Q-learning, soft-Q-learning, and maximum entropy policy gradient, and is closely related to optimism and count based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
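The induced exploration policy is a standard Boltzmann distribution over the K-values, with the risk-seeking parameter acting as the temperature; a minimal sketch follows, where the K-values are placeholders.

```python
import numpy as np

def boltzmann_policy(k_values, tau):
    """Boltzmann policy induced by K-values; the temperature tau plays the
    role of the risk-seeking parameter in K-learning."""
    z = (k_values - k_values.max()) / tau    # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

k = np.array([1.0, 1.2, 0.8])                # placeholder K-values
for tau in (1.0, 0.1):
    print(tau, boltzmann_policy(k, tau).round(3))  # lower tau -> greedier
```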

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
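The smoothing underlying DRS can be sketched as a Gaussian-smoothed (sub)gradient estimator: perturb the point, difference the function values, and average. The smoothing radius and sample count below are illustrative assumptions, not the paper's tuned choices.

```python
import numpy as np

def smoothed_gradient(f, x, gamma=0.1, n_samples=64, rng=None):
    """Monte Carlo gradient of the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma * u)], u ~ N(0, I), usable even when f is
    non-smooth; gamma and the sample count are illustrative choices."""
    rng = rng or np.random.default_rng()
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        g += (f(x + gamma * u) - f(x)) / gamma * u
    return g / n_samples

f = lambda z: np.abs(z).sum()            # non-smooth convex test function
x = np.array([1.0, -2.0, 0.5])
print(smoothed_gradient(f, x, rng=np.random.default_rng(3)).round(2))
# tends to sign(x) = [1, -1, 1] as gamma -> 0 and the sample count grows
```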

This work considers the problem of provably optimal reinforcement learning for episodic finite-horizon MDPs, i.e., how an agent learns to maximize its long-term reward in an uncertain environment. The main contribution is a novel algorithm --- Variance-reduced Upper Confidence Q-learning (vUCQ) --- which enjoys a regret bound of $\widetilde{O}(\sqrt{HSAT} + H^5SA)$, where $T$ is the number of time steps the agent acts in the MDP, $S$ is the number of states, $A$ is the number of actions, and $H$ is the (episodic) horizon time. This is the first regret bound that is both sub-linear in the model size and asymptotically optimal. The algorithm is sub-linear in that the time to achieve $\epsilon$-average regret for any constant $\epsilon$ is $O(SA)$, a number of samples far smaller than that required to learn any non-trivial estimate of the transition model (which is specified by $O(S^2A)$ parameters). The importance of sub-linear algorithms is largely the motivation for algorithms such as $Q$-learning and other "model free" approaches. The vUCQ algorithm also enjoys minimax optimal regret in the long run, matching the $\Omega(\sqrt{HSAT})$ lower bound. vUCQ is a successive refinement method in which the algorithm reduces the variance in the $Q$-value estimates and couples this estimation scheme with an upper-confidence-based algorithm. Technically, the coupling of these two techniques is what yields both the sub-linear regret property and the asymptotically optimal regret.
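To illustrate the optimism mechanism (though not vUCQ's variance reduction), here is a sketch of a Q-learning update with a count-based upper-confidence bonus; the learning-rate and bonus schedules follow standard UCB-Q analyses and are assumptions, not the paper's exact recursion.

```python
import numpy as np

def ucb_q_update(Q, N, s, a, r, s_next, H, c=1.0):
    """Optimistic Q-learning update with a count-based exploration bonus,
    in the spirit of upper-confidence Q-learning (simplified sketch; the
    vUCQ recursion adds variance reduction on top of this idea)."""
    N[s, a] += 1
    t = N[s, a]
    lr = (H + 1) / (H + t)               # schedule from UCB-Q analyses
    bonus = c * np.sqrt(H ** 3 / t)      # optimism shrinking with visits
    target = r + Q[s_next].max() + bonus
    Q[s, a] = (1 - lr) * Q[s, a] + lr * min(target, H)  # cap at max return
    return Q, N

S, A, H = 3, 2, 5
Q = np.full((S, A), float(H))            # optimistic initialization
N = np.zeros((S, A), dtype=int)
Q, N = ucb_q_update(Q, N, s=0, a=1, r=1.0, s_next=2, H=H)
print(Q[0, 1])
```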
