
Maximum norm error estimates for virtual element methods are studied. To establish these estimates, we prove higher local regularity based on a delicate analysis of Green's functions, together with high-order local error estimates for the partition of the virtual element solutions. The maximum norm of the difference between the exact gradient and the gradient of the projection of the virtual element solutions is proved to achieve optimal convergence. For high-order virtual element methods, we establish optimal convergence results in the $L^{\infty}$ norm. Our theoretical findings are validated by a numerical example on general polygonal meshes.


The level set estimation problem seeks to find all points in a domain ${\cal X}$ where the value of an unknown function $f:{\cal X}\rightarrow \mathbb{R}$ exceeds a threshold $\alpha$. The estimation is based on noisy function evaluations that may be acquired at sequentially and adaptively chosen locations in ${\cal X}$. The threshold value $\alpha$ can either be \emph{explicit} and provided a priori, or \emph{implicit} and defined relative to the optimal function value, i.e., $\alpha = (1-\epsilon)f(x_\ast)$ for a given $\epsilon > 0$, where $f(x_\ast)$ is the maximal function value and is unknown. In this work we provide a new approach to the level set estimation problem by relating it to recent adaptive experimental design methods for linear bandits in the Reproducing Kernel Hilbert Space (RKHS) setting. We assume that $f$ can be approximated by a function in the RKHS up to an unknown misspecification, and provide novel algorithms for both the implicit and explicit cases in this setting with strong theoretical guarantees. Moreover, in the linear (kernel) setting, we show that our bounds are nearly optimal: our upper bounds match existing lower bounds for threshold linear bandits. To our knowledge, this work provides the first instance-dependent, non-asymptotic upper bounds on the sample complexity of level set estimation that match information-theoretic lower bounds.
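The explicit-threshold classification rule at the heart of the problem can be sketched without the bandit machinery: average repeated noisy evaluations at each candidate point and compare a confidence interval against $\alpha$. The following is a minimal, non-adaptive illustration; the test function, noise level, and Hoeffding-style width are illustrative assumptions, not the paper's RKHS algorithm:

```python
import math
import random

def estimate_level_set(points, noisy_f, alpha, n_samples=2000, delta=0.05):
    """Label each candidate point as above/below the threshold alpha.

    Averages repeated noisy evaluations and uses a Hoeffding-style
    confidence width; points whose interval straddles alpha stay 'unknown'.
    """
    width = math.sqrt(math.log(2.0 / delta) / (2.0 * n_samples))
    labels = {}
    for x in points:
        mean = sum(noisy_f(x) for _ in range(n_samples)) / n_samples
        if mean - width > alpha:
            labels[x] = "above"
        elif mean + width < alpha:
            labels[x] = "below"
        else:
            labels[x] = "unknown"
    return labels

# Illustrative target: f(x) = 1 - x^2 with Gaussian noise; superlevel set {f > 0.5}.
random.seed(0)
noisy = lambda x: (1.0 - x * x) + random.gauss(0.0, 0.1)
labels = estimate_level_set([0.0, 0.5, 1.0], noisy, alpha=0.5)
```

The adaptive algorithms in the paper improve on this uniform-sampling sketch by spending evaluations only where the classification is still ambiguous.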

In this paper we develop a Jacobi-type algorithm for the (approximate) diagonalization of tensors of order $d\geq3$ via tensor trace maximization. For a general tensor this is an alternating least squares algorithm and the rotation matrices are chosen in each mode one-by-one to maximize the tensor trace. On the other hand, for symmetric tensors we discuss a structure-preserving variant of this algorithm where in each iteration the same rotation is applied in all modes. We show that both versions of the algorithm converge to the stationary points of the corresponding objective functions.
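For matrices ($d=2$), the trace-maximization idea reduces to the classical Jacobi eigenvalue method, in which each Givens rotation annihilates one off-diagonal pair. A hedged sketch of this order-2 analogue follows (the tensor algorithm in the paper instead chooses rotations mode-by-mode to maximize the tensor trace):

```python
import numpy as np

def jacobi_eigen(A, sweeps=10):
    """Cyclic Jacobi method for a symmetric matrix: each Givens rotation
    zeroes one off-diagonal pair, driving A toward diagonal form."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-15:
                    continue
                # Rotation angle that annihilates A[p, q].
                theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(n)
                G[p, p] = c; G[q, q] = c
                G[p, q] = -s; G[q, p] = s
                A = G.T @ A @ G   # similarity transform
                V = V @ G         # accumulate eigenvectors
    return np.diag(A), V

# Symmetric tridiagonal test matrix with known spectrum 2 - sqrt(2), 2, 2 + sqrt(2).
eigvals, V = jacobi_eigen([[2.0, 1.0, 0.0],
                           [1.0, 2.0, 1.0],
                           [0.0, 1.0, 2.0]])
```

Maximizing the diagonal (the trace of the diagonal part) and minimizing the off-diagonal mass are equivalent here, which is what motivates the trace objective in the tensor setting.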

We overcome two major bottlenecks in the study of low rank approximation by assuming the low rank factors themselves are sparse. Specifically, (1) for low rank approximation with spectral norm error, we show how to improve the best known $\mathsf{nnz}(\mathbf A) k / \sqrt{\varepsilon}$ running time to $\mathsf{nnz}(\mathbf A)/\sqrt{\varepsilon}$ running time plus low order terms depending on the sparsity of the low rank factors, and (2) for streaming algorithms for Frobenius norm error, we show how to bypass the known $\Omega(nk/\varepsilon)$ memory lower bound and obtain an $s k (\log n)/ \mathrm{poly}(\varepsilon)$ memory bound, where $s$ is the number of non-zeros of each low rank factor. Although this algorithm is inefficient, as it must be under standard complexity-theoretic assumptions, we also present polynomial time algorithms using $\mathrm{poly}(s,k,\log n,\varepsilon^{-1})$ memory that output rank $k$ approximations supported on an $O(sk/\varepsilon)\times O(sk/\varepsilon)$ submatrix. Both the prior $\mathsf{nnz}(\mathbf A) k / \sqrt{\varepsilon}$ running time and the $nk/\varepsilon$ memory for these problems were long-standing barriers; our results give a natural way of overcoming them assuming sparsity of the low rank factors.
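As a point of reference, the object being computed is a rank-$k$ approximation. A dense truncated-SVD baseline (Frobenius-optimal by Eckart-Young) shows the target; the paper's contribution is reaching it faster and with less memory when the factors are $s$-sparse. The example matrix below is an illustrative assumption:

```python
import numpy as np

def rank_k_approx(A, k):
    """Truncated-SVD rank-k approximation, Frobenius-optimal by Eckart-Young."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]  # scale columns of U, multiply back

# A rank-2 matrix built from sparse factors: only a few rows/columns are active.
A = (np.outer([1.0, 0.0, 0.0, 2.0], [0.0, 3.0, 0.0, 1.0])
     + np.outer([0.0, 1.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0]))
B = rank_k_approx(A, 2)
```

Note that the support of $B$ concentrates on the rows and columns touched by the sparse factors, which is the structure the paper's $O(sk/\varepsilon)\times O(sk/\varepsilon)$ submatrix output exploits.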

We present and implement an algorithm for computing the invariant circle and the corresponding stable manifolds for 2-dimensional maps. The algorithm is based on the parameterization method and is backed by an a-posteriori theorem established in [YdlL21]. The algorithm works irrespective of whether the internal dynamics on the invariant circle is a rotation or phase-locked. The algorithm converges quadratically, and the number of operations and the memory requirements for each step of the iteration are linear with respect to the size of the discretization. We also report on the results of running the implementation on some standard models to uncover new phenomena. In particular, we explored a bundle merging scenario in which the invariant circle loses hyperbolicity because the angle between the stable directions and the tangent becomes zero even though the rates of contraction remain separated. We also discuss and implement a generalization of the algorithm to 3 dimensions, apply it to the 3-dimensional Fattened Arnold Family (3D-FAF) map with non-resonant eigenvalues, and present numerical results.

We investigate multiscale finite element methods for an elliptic distributed optimal control problem with rough coefficients. They are based on the (local) orthogonal decomposition methodology of M\aa lqvist and Peterseim.

In this work, we consider the optimization formulation for symmetric tensor decomposition recently introduced in the Subspace Power Method (SPM) of Kileel and Pereira. Unlike popular alternative functionals for tensor decomposition, the SPM objective function has the desirable properties that its maximal value is known in advance, and its global optima are exactly the rank-1 components of the tensor when the input is sufficiently low-rank. We analyze the non-convex optimization landscape associated with the SPM objective. Our analysis accounts for working with noisy tensors. We derive quantitative bounds such that any second-order critical point with SPM objective value exceeding the bound must equal a tensor component in the noiseless case, and must approximate a tensor component in the noisy case. For decomposing tensors of size $D^{\times m}$, we obtain a near-global guarantee up to rank $\widetilde{o}(D^{\lfloor m/2 \rfloor})$ under a random tensor model, and a global guarantee up to rank $\mathcal{O}(D)$ assuming deterministic frame conditions. This implies that SPM with suitable initialization is a provable, efficient, robust algorithm for low-rank symmetric tensor decomposition. We conclude with numerics showing that in practice the SPM functional is preferable to a more established counterpart.
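The rank-1 components being recovered can be illustrated with the classical tensor power iteration, a simplified relative of ascent on the SPM objective (the full SPM additionally projects onto a principal subspace of a tensor flattening, which this sketch omits):

```python
import numpy as np

def tensor_power_iteration(T, iters=100, seed=0):
    """Recover one rank-1 component of a symmetric order-3 tensor via
    the classical iteration x <- T(I, x, x) / ||T(I, x, x)||."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)  # contract two modes with x
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)  # Rayleigh-type objective value
    return lam, x

# Orthogonally decomposable example: T = 2*a(x)a(x)a + b(x)b(x)b.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
T = (2.0 * np.einsum('i,j,k->ijk', a, a, a)
     + np.einsum('i,j,k->ijk', b, b, b))
lam, x = tensor_power_iteration(T)
```

For this orthogonally decomposable input the iteration converges to one of the two components, so the returned value equals one of the component weights; the landscape results in the paper quantify when such convergence to true components is guaranteed for much higher ranks and under noise.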

In this paper, an abstract framework for the error analysis of discontinuous finite element methods is developed for distributed and Neumann boundary control problems governed by the stationary Stokes equation with control constraints. {\it A~priori} error estimates of optimal order are derived for the velocity and the pressure in the energy norm and the $L^2$-norm, respectively. Moreover, a reliable and efficient {\it a~posteriori} error estimator is derived. The results are applicable to a variety of problems under only the minimal regularity guaranteed by the well-posedness of the problem. In particular, we instantiate the abstract results with suitable stable pairs of velocity and pressure spaces, such as the lowest-order Crouzeix-Raviart finite element space paired with piecewise constants, and piecewise linear paired with piecewise constant finite element spaces. The theoretical results are illustrated by numerical experiments.

In this paper we present a finite element analysis for a Dirichlet boundary control problem governed by the Stokes equation. The Dirichlet control is considered in a convex closed subset of the energy space $\mathbf{H}^1(\Omega)$. Most of the previous works on the Stokes Dirichlet boundary control problem deal with either tangential control or the case where the flux of the control is zero. This choice of control is very particular, and the resulting formulation leads to a control with limited regularity. To overcome this difficulty, we introduce the Stokes problem with an outflow condition in which the control acts only on the Dirichlet boundary; hence our control is more general, having both tangential and normal components. We prove well-posedness and discuss the regularity of the control problem. The first-order optimality condition for the control leads to a Signorini problem. We develop a two-level finite element discretization using $\mathbf{P}_1$ elements (on the fine mesh) for the velocity and the control variable and $P_0$ elements (on the coarse mesh) for the pressure variable. The standard energy error analysis gives an order of convergence of $\frac{1}{2}+\frac{\delta}{2}$ when the control is in the space $\mathbf{H}^{\frac{3}{2}+\delta}(\Omega)$. Here we improve it to $\frac{1}{2}+\delta$, which is optimal. Moreover, when the control lies in a less regular space, we derive an order of convergence that is optimal up to the regularity. The theoretical results are corroborated by a variety of numerical tests.

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
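The smoothing idea behind DRS can be sketched in one dimension: replace the non-smooth $f$ by the Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma u)]$, $u \sim \mathcal{N}(0,1)$, whose gradient admits an unbiased two-point estimator. The function, smoothing radius, and sample count below are illustrative assumptions, not the distributed algorithm itself:

```python
import random

def smoothed_grad(f, x, gamma=0.1, n=20000, seed=1):
    """Two-point randomized-smoothing gradient estimator.

    Approximates the gradient of the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma * u)], u ~ N(0, 1), by symmetric
    finite differences along random directions.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.gauss(0.0, 1.0)
        total += (f(x + gamma * u) - f(x - gamma * u)) / (2.0 * gamma) * u
    return total / n

# |x| is non-smooth at 0, but its smoothed version has gradient close to 1 at x = 1.
g = smoothed_grad(abs, 1.0)
```

Because $f_\gamma$ is smooth, accelerated methods apply to it; the $d^{1/4}$ factor in the paper comes from trading the smoothing bias (growing with $\gamma$ and the dimension $d$) against the gain from acceleration.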

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is strongly convex and smooth, either strongly convex or smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
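The communication structure enters through a gossip matrix whose sparsity matches the network edges. A minimal sketch of plain decentralized gradient descent on a toy quadratic shows the mechanism (the gossip matrix, targets, and stepsize are illustrative assumptions; the paper's method is an accelerated dual scheme, not this primal iteration):

```python
import numpy as np

def decentralized_gd(targets, W, steps=2000, lr=0.05):
    """Decentralized gradient descent for F(x) = sum_i (x - t_i)^2 / 2.

    Each agent alternates neighbourhood averaging through the gossip
    matrix W (symmetric, doubly stochastic, sparsity = network edges)
    with a gradient step on its own local term of F.
    """
    x = np.array(targets, dtype=float)  # each agent starts at its local minimizer
    t = np.array(targets, dtype=float)
    for _ in range(steps):
        x = W @ x              # communication round
        x = x - lr * (x - t)   # local computation round
    return x

# Metropolis weights for the path graph 1-2-3 (symmetric, doubly stochastic).
W = np.array([[2.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 2.0]]) / 3.0
x = decentralized_gd([1.0, 2.0, 3.0], W)
```

With a constant stepsize the local estimates settle only in a neighbourhood of the global minimizer (here the mean $2.0$), with a bias depending on the stepsize and the spectral gap of $W$; closing this gap at the optimal rate is precisely what the accelerated dual approach achieves.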
