
Divergence-free discontinuous Galerkin (DG) finite element methods offer a suitable discretization for the pointwise divergence-free numerical solution of Borrvall and Petersson's model for the topology optimization of fluids in Stokes flow [Topology optimization of fluids in Stokes flow, International Journal for Numerical Methods in Fluids 41 (1) (2003) 77--107]. The convergence results currently found in the literature only consider $H^1$-conforming discretizations for the velocity. In this work, we extend the numerical analysis of Papadopoulos and S\"uli to divergence-free DG methods with an interior penalty [I. P. A. Papadopoulos and E. S\"uli, Numerical analysis of a topology optimization problem for Stokes flow, arXiv preprint arXiv:2102.10408, (2021)]. We show that, given an isolated minimizer to the analytical problem, there exists a sequence of DG finite element solutions, satisfying necessary first-order optimality conditions, that strongly converges to the minimizer.

Related content

We consider a generic and explicit tamed Euler--Maruyama scheme for multidimensional time-inhomogeneous stochastic differential equations with multiplicative Brownian noise. The diffusion coefficient is uniformly elliptic, H\"older continuous and weakly differentiable in the spatial variables while the drift satisfies the Ladyzhenskaya--Prodi--Serrin condition, as considered by Krylov and R\"ockner (2005). In the discrete scheme, the drift is tamed by replacing it by an approximation. A strong rate of convergence of the scheme is provided in terms of the approximation error of the drift in a suitable and possibly very weak topology. A few examples of approximating drifts are discussed in detail. The parameters of the approximating drifts can vary and be fine-tuned to achieve the standard $1/2$-strong convergence rate with a logarithmic factor.
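As a minimal sketch of the general idea, the step below applies an explicit Euler--Maruyama update in which the drift is replaced by the classical taming $b/(1+\sqrt{h}\,|b|)$; this particular taming, the one-dimensional setting, and the toy SDE are illustrative assumptions, not the general approximating drifts and singular (Ladyzhenskaya--Prodi--Serrin) drifts analyzed in the paper.

```python
import numpy as np

def tamed_em(b, sigma, x0, T, n, rng):
    """One path of a tamed Euler-Maruyama scheme on [0, T] with n steps.

    The drift b(t, x) is replaced by the taming b / (1 + sqrt(h)|b|)
    at each step; the paper allows a general approximating drift, so
    this specific choice is only an illustration.
    """
    h = T / n
    x = x0
    for k in range(n):
        t = k * h
        bk = b(t, x)
        x = (x + bk / (1.0 + np.sqrt(h) * abs(bk)) * h
               + sigma(t, x) * np.sqrt(h) * rng.standard_normal())
    return x

rng = np.random.default_rng(0)
# toy SDE: dX = -X^3 dt + dB; the tamed drift keeps the explicit step bounded
xT = tamed_em(lambda t, x: -x**3, lambda t, x: 1.0, 1.0, 1.0, 1000, rng)
```

The point of the taming is visible in the update: the untamed explicit scheme can blow up for super-linear or singular drifts, while the tamed increment is bounded by $\sqrt{h}$ per step.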

We study the MARINA method of Gorbunov et al. (2021) -- the current state-of-the-art distributed non-convex optimization method in terms of theoretical communication complexity. Theoretical superiority of this method can be largely attributed to two sources: the use of a carefully engineered biased stochastic gradient estimator, which leads to a reduction in the number of communication rounds, and the reliance on {\em independent} stochastic communication compression operators, which leads to a reduction in the number of transmitted bits within each communication round. In this paper we i) extend the theory of MARINA to support a much wider class of potentially {\em correlated} compressors, extending the reach of the method beyond the classical independent compressors setting, ii) show that a new quantity, for which we coin the name {\em Hessian variance}, allows us to significantly refine the original analysis of MARINA without any additional assumptions, and iii) identify a special class of correlated compressors based on the idea of {\em random permutations}, for which we coin the term Perm$K$, the use of which leads to $O(\sqrt{n})$ (resp. $O(1 + d/\sqrt{n})$) improvement in the theoretical communication complexity of MARINA in the low Hessian variance regime when $d\geq n$ (resp. $d \leq n$), where $n$ is the number of workers and $d$ is the number of parameters describing the model we are learning. We corroborate our theoretical results with carefully engineered synthetic experiments on minimizing the average of nonconvex quadratics, and on autoencoder training with the MNIST dataset.
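To make the random-permutation idea concrete, the sketch below implements the $d \geq n$ case of a Perm$K$-style compressor collection under the simplifying assumption that $n$ divides $d$: a shared random permutation of the coordinates is split into $n$ equal blocks, worker $i$ keeps only its block scaled by $n$, and the average of the $n$ compressed vectors reconstructs the input exactly. This is a sketch from the abstract's description, not the paper's implementation.

```python
import numpy as np

def permk_compressors(d, n, rng):
    """Sample one collection of n correlated PermK-style compressors
    (case d >= n with n dividing d): a shared random permutation of the
    d coordinates is split into n equal blocks; worker i transmits only
    its block, scaled by n so the average over workers is exact."""
    perm = rng.permutation(d)
    blocks = np.array_split(perm, n)
    def compress(i, x):
        out = np.zeros_like(x)
        out[blocks[i]] = n * x[blocks[i]]
        return out
    return compress

d, n = 8, 4
rng = np.random.default_rng(1)
C = permk_compressors(d, n, rng)
x = rng.standard_normal(d)
avg = sum(C(i, x) for i in range(n)) / n   # equals x exactly
```

Each worker transmits only $d/n$ coordinates per round, yet the compressors are perfectly correlated: every coordinate is covered by exactly one worker, so no information is lost in the average.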

An explicit numerical method is developed for a class of time-changed stochastic differential equations whose coefficients are H\"older continuous in the time variable and are allowed to grow super-linearly in the state variable. The strong convergence of the method on a finite time interval is proved and the convergence rate is obtained. Numerical simulations are provided, and they are in line with the theoretical results.

We provide a control-theoretic perspective on optimal tensor algorithms for minimizing a convex function in a finite-dimensional Euclidean space. Given a function $\Phi: \mathbb{R}^d \rightarrow \mathbb{R}$ that is convex and twice continuously differentiable, we study a closed-loop control system that is governed by the operators $\nabla \Phi$ and $\nabla^2 \Phi$ together with a feedback control law $\lambda(\cdot)$ satisfying the algebraic equation $(\lambda(t))^p\|\nabla\Phi(x(t))\|^{p-1} = \theta$ for some $\theta \in (0, 1)$. Our first contribution is to prove the existence and uniqueness of a local solution to this system via the Banach fixed-point theorem. We present a simple yet nontrivial Lyapunov function that allows us to establish the existence and uniqueness of a global solution under certain regularity conditions and analyze the convergence properties of trajectories. The rate of convergence is $O(1/t^{(3p+1)/2})$ in terms of objective function gap and $O(1/t^{3p})$ in terms of squared gradient norm. Our second contribution is to provide two algorithmic frameworks obtained from discretization of our continuous-time system, one of which generalizes the large-step A-HPE framework and the other of which leads to a new optimal $p$-th order tensor algorithm. While our discrete-time analysis can be seen as a simplification and generalization of~\citet{Monteiro-2013-Accelerated}, it is largely motivated by the aforementioned continuous-time analysis, demonstrating the fundamental role that the feedback control plays in optimal acceleration and the clear advantage that the continuous-time perspective brings to algorithmic design. A highlight of our analysis is that we show that all of the $p$-th order optimal tensor algorithms that we discuss minimize the squared gradient norm at a rate of $O(k^{-3p})$, which complements the recent analysis.
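The feedback law is an algebraic equation in $\lambda(t)$ alone, so for a given gradient norm it can be solved in closed form; the small sketch below does exactly that, under the assumption $\|\nabla\Phi(x(t))\| > 0$ (away from the minimizer).

```python
import numpy as np

def feedback_lambda(grad_norm, p, theta):
    """Solve the algebraic feedback law
        lambda^p * ||grad||^(p-1) = theta
    in closed form for lambda > 0 (assumes grad_norm > 0)."""
    return (theta / grad_norm ** (p - 1)) ** (1.0 / p)

# e.g. p = 3, theta = 0.5, ||grad|| = 2:
lam = feedback_lambda(2.0, p=3, theta=0.5)
# the law is satisfied: lam^3 * 2^2 = 0.5
```

This closed form is what makes the control law implementable inside a discretized scheme: at each step the algorithm reads off the current gradient norm and recovers the admissible $\lambda$.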

We consider the numerical approximation of the inertial Landau-Lifshitz-Gilbert equation (iLLG), which describes the dynamics of the magnetization in ferromagnetic materials at subpicosecond time scales. We propose and analyze two fully discrete numerical schemes: The first method is based on a reformulation of the problem as a linear constrained variational formulation for the linear velocity. The second method exploits a reformulation of the problem as a first order system in time for the magnetization and the angular momentum. Both schemes are implicit, based on first-order finite elements, and generate approximations satisfying the unit-length constraint of iLLG at the vertices of the underlying mesh. For both methods, we prove convergence of the approximations towards a weak solution of the problem. Numerical experiments validate the theoretical results and show the applicability of the methods for the simulation of ultrafast magnetic processes.

Fully implicit Runge-Kutta (IRK) methods have many desirable properties as time integration schemes in terms of accuracy and stability, but high-order IRK methods are not commonly used in practice with numerical PDEs due to the difficulty of solving the stage equations. This paper introduces a theoretical and algorithmic preconditioning framework for solving the systems of equations that arise from IRK methods applied to linear numerical PDEs (without algebraic constraints). This framework also naturally applies to discontinuous Galerkin discretizations in time. Under quite general assumptions on the spatial discretization that yield stable time integration, the preconditioned operator is proven to have condition number bounded by a small, order-one constant, independent of the spatial mesh and time-step size, and with only weak dependence on number of stages/polynomial order; for example, the preconditioned operator for 10th-order Gauss IRK has condition number less than two, independent of the spatial discretization and time step. The new method can be used with arbitrary existing preconditioners for backward Euler-type time stepping schemes, and is amenable to the use of three-term recursion Krylov methods when the underlying spatial discretization is symmetric. The new method is demonstrated to be effective on various high-order finite-difference and finite-element discretizations of linear parabolic and hyperbolic problems, demonstrating fast, scalable solution of up to 10th order accuracy. The new method consistently outperforms existing block preconditioning approaches, and in several cases, the new method can achieve 4th-order accuracy using Gauss integration with roughly half the number of preconditioner applications and wallclock time as required using standard diagonally implicit RK methods.
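To show what the coupled stage equations look like for a linear problem $u' = Lu$, the sketch below forms and solves the system $(I - h\,A \otimes L)K = \mathbf{1} \otimes (Lu_n)$ for 2-stage Gauss IRK with a dense factorization; the paper's contribution is a preconditioner for exactly this kind of system, which a toy example this small has no need of.

```python
import numpy as np

# Butcher data for 2-stage Gauss-Legendre IRK (classical order 4)
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6], [0.25 + s3 / 6, 0.25]])
b = np.array([0.5, 0.5])

def gauss2_step(L, u, h):
    """One step of 2-stage Gauss IRK for u' = L u: solve the coupled
    stage system (I - h A (x) L) K = 1 (x) (L u) and combine stages."""
    d = L.shape[0]
    s = A.shape[0]
    M = np.eye(s * d) - h * np.kron(A, L)
    rhs = np.tile(L @ u, s)
    K = np.linalg.solve(M, rhs).reshape(s, d)
    return u + h * (b @ K)

L = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator
u = np.array([1.0, 0.0])
h = 0.1
for _ in range(10):
    u = gauss2_step(L, u, h)
# exact solution at t = 1 is (cos 1, -sin 1); Gauss IRK also conserves ||u||
```

The dense solve scales as $O((sd)^3)$ and is exactly what becomes infeasible for numerical PDEs with large $d$, which is where the preconditioning framework of the paper enters.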

Fully implicit Runge-Kutta (IRK) methods have many desirable accuracy and stability properties as time integration schemes, but high-order IRK methods are not commonly used in practice with large-scale numerical PDEs because of the difficulty of solving the stage equations. This paper introduces a theoretical and algorithmic framework for solving the nonlinear equations that arise from IRK methods (and discontinuous Galerkin discretizations in time) applied to nonlinear numerical PDEs, including PDEs with algebraic constraints. Several new linearizations of the nonlinear IRK equations are developed, offering faster and more robust convergence than the often-considered simplified Newton, as well as an effective preconditioner for the true Jacobian if exact Newton iterations are desired. Inverting these linearizations requires solving a set of block 2x2 systems. Under quite general assumptions, it is proven that the preconditioned 2x2 operator's condition number is bounded by a small constant close to one, independent of the spatial discretization, spatial mesh, and time step, and with only weak dependence on the number of stages or integration accuracy. Moreover, the new method is built using the same preconditioners needed for backward Euler-type time stepping schemes, and so can be readily added to existing codes. The new methods are applied to several challenging fluid flow problems, including the compressible Euler and Navier-Stokes equations, and the vorticity-streamfunction formulation of the incompressible Euler and Navier-Stokes equations. Up to 10th-order accuracy is demonstrated using Gauss IRK, while in all cases 4th-order Gauss IRK requires roughly half the number of preconditioner applications as required by standard SDIRK methods.
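For a nonlinear ODE $u' = f(u)$ the stage equations $K_i = f(u_n + h\sum_j a_{ij}K_j)$ are themselves nonlinear. The sketch below solves them for a scalar toy problem with the baseline simplified Newton iteration (Jacobian frozen at $u_n$), i.e. the method the paper improves on; it is shown only to make the nonlinear stage equations concrete.

```python
import numpy as np

# Butcher data for 2-stage Gauss-Legendre IRK (classical order 4)
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6], [0.25 + s3 / 6, 0.25]])
b = np.array([0.5, 0.5])

def gauss2_step_newton(f, df, u, h, iters=20):
    """One step of 2-stage Gauss IRK for a scalar ODE u' = f(u),
    solving the nonlinear stage equations K_i = f(u + h sum_j a_ij K_j)
    by simplified Newton with the Jacobian frozen at u."""
    K = np.full(2, f(u))                 # initial stage guess
    J = np.eye(2) - h * df(u) * A        # frozen Jacobian of the residual
    for _ in range(iters):
        F = K - np.array([f(u + h * (A[i] @ K)) for i in range(2)])
        K = K - np.linalg.solve(J, F)
    return u + h * (b @ K)

u, h = 1.0, 0.1
for _ in range(10):
    u = gauss2_step_newton(lambda x: -x**3, lambda x: -3 * x**2, u, h)
# exact solution of u' = -u^3, u(0) = 1 is u(t) = (1 + 2t)^(-1/2)
```

Freezing the Jacobian keeps one factorization per step, but its convergence degrades as stiffness and step size grow; the paper's linearizations are designed to converge more robustly while reusing backward Euler-type preconditioners.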

We develop a space-time mortar mixed finite element method for parabolic problems. The domain is decomposed into a union of subdomains discretized with non-matching spatial grids and asynchronous time steps. The method is based on a space-time variational formulation that couples mixed finite elements in space with discontinuous Galerkin in time. Continuity of flux (mass conservation) across space-time interfaces is imposed via a coarse-scale space-time mortar variable that approximates the primary variable. Uniqueness, existence, and stability, as well as a priori error estimates for the spatial and temporal errors are established. A space-time non-overlapping domain decomposition method is developed that reduces the global problem to a space-time coarse-scale mortar interface problem. Each interface iteration involves solving in parallel space-time subdomain problems. The spectral properties of the interface operator and the convergence of the interface iteration are analyzed. Numerical experiments are provided that illustrate the theoretical results and the flexibility of the method for modeling problems with features that are localized in space and time.

This is a tutorial and survey paper on Karush-Kuhn-Tucker (KKT) conditions, first-order and second-order numerical optimization, and distributed optimization. After a brief review of history of optimization, we start with some preliminaries on properties of sets, norms, functions, and concepts of optimization. Then, we introduce the optimization problem, standard optimization problems (including linear programming, quadratic programming, and semidefinite programming), and convex problems. We also introduce some techniques such as eliminating inequality, equality, and set constraints, adding slack variables, and epigraph form. We introduce Lagrangian function, dual variables, KKT conditions (including primal feasibility, dual feasibility, weak and strong duality, complementary slackness, and stationarity condition), and solving optimization by method of Lagrange multipliers. Then, we cover first-order optimization including gradient descent, line-search, convergence of gradient methods, momentum, steepest descent, and backpropagation. Other first-order methods are explained, such as accelerated gradient method, stochastic gradient descent, mini-batch gradient descent, stochastic average gradient, stochastic variance reduced gradient, AdaGrad, RMSProp, and Adam optimizer, proximal methods (including proximal mapping, proximal point algorithm, and proximal gradient method), and constrained gradient methods (including projected gradient method, projection onto convex sets, and Frank-Wolfe method). We also cover non-smooth and $\ell_1$ optimization methods including lasso regularization, convex conjugate, Huber function, soft-thresholding, coordinate descent, and subgradient methods. Then, we explain second-order methods including Newton's method for unconstrained, equality constrained, and inequality constrained problems....
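As a worked instance of two of the listed ingredients (soft-thresholding and the proximal gradient method), the sketch below implements ISTA for the lasso problem $\min_w \tfrac12\|Xw - y\|^2 + \lambda\|w\|_1$; the problem sizes and step size are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, tau):
    """Soft-thresholding operator: the proximal mapping of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(Xmat, y, lam, step, iters=500):
    """Proximal gradient method (ISTA) for the lasso:
    minimize 0.5 * ||X w - y||^2 + lam * ||w||_1.
    One gradient step on the smooth part, then the l1 prox."""
    w = np.zeros(Xmat.shape[1])
    for _ in range(iters):
        grad = Xmat.T @ (Xmat @ w - y)
        w = soft_threshold(w - step * grad, step * lam)
    return w

# with X = I the lasso solution is coordinatewise soft-thresholding of y
w = ista(np.eye(2), np.array([3.0, 0.5]), lam=1.0, step=0.5)
```

The same two-line loop body (gradient step, then proximal mapping) is the template for all the proximal methods the survey covers; only the prox operator changes with the regularizer.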

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
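The local smoothing underlying DRS can be sketched with the standard Gaussian-smoothing gradient estimator for $f_\gamma(x) = \mathbb{E}[f(x + \gamma u)]$, $u \sim N(0, I)$; the estimator below is a generic randomized-smoothing building block (the function, point, and sample counts are illustrative), not the full distributed primal-dual algorithm of the paper.

```python
import numpy as np

def smoothed_grad(f, x, gamma, rng, m=1):
    """Monte Carlo gradient estimator for the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma * u)], u ~ N(0, I): averages m copies
    of the single-sample estimator (f(x + gamma*u) - f(x)) / gamma * u,
    which is unbiased for grad f_gamma(x)."""
    d = x.shape[0]
    g = np.zeros(d)
    for _ in range(m):
        u = rng.standard_normal(d)
        g += (f(x + gamma * u) - f(x)) / gamma * u
    return g / m

rng = np.random.default_rng(0)
f = lambda z: np.abs(z).sum()            # non-smooth objective
g = smoothed_grad(f, np.array([1.0, -2.0]), 0.1, rng, m=4000)
# far from the kink, the smoothed gradient is close to sign(x) = (1, -1)
```

Because the estimator only evaluates $f$, it applies to non-smooth objectives; the price is Monte Carlo variance, which is where the dimension-dependent $d^{1/4}$ factor in the DRS rate originates.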
