
To a mesh function we associate a natural analogue of the Monge-Amp\`ere measure, which we show to be equivalent to the Monge-Amp\`ere measure of its convex envelope. We prove that uniform convergence of mesh functions to a bounded convex function implies uniform convergence of their convex envelopes on compact subsets, and hence weak convergence of the associated Monge-Amp\`ere measures. We also give conditions under which a sequence of mesh functions has a subsequence converging uniformly to a convex function. Specifically, for mesh functions which are uniformly bounded and satisfy a convexity condition at the discrete level, we show that there is a subsequence converging uniformly on compact subsets to a convex function, and that the convex envelopes along the subsequence also converge uniformly on compact subsets. If, in addition, the mesh functions agree with a continuous convex function on the boundary, the limit function is shown to satisfy the boundary condition strongly. Our results can be used to give alternative proofs of convergence for some discretizations of the second boundary value problem for the Monge-Amp\`ere equation, and were used in a recently proposed discretization of that problem.
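The convex envelope of a mesh function, central to the construction above, can be computed in one dimension as the lower convex hull of the mesh points. The following is a minimal illustrative sketch (not the paper's discretization; all names are ours) using a monotone-chain hull:

```python
import numpy as np

def convex_envelope(x, u):
    """Convex envelope of the 1-D mesh function u on the mesh x.

    Computed as the lower convex hull of the points (x_i, u_i) via a
    monotone-chain scan, then interpolated back onto the mesh."""
    hull = []  # indices of mesh points kept on the lower hull
    for i in range(len(x)):
        # pop previous points that would make the hull non-convex
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            # cross-product turn test: keep only strict left turns
            if (x[k] - x[j]) * (u[i] - u[j]) - (x[i] - x[j]) * (u[k] - u[j]) <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    # the envelope is piecewise linear through the hull vertices
    return np.interp(x, x[hull], u[hull])

x = np.linspace(-1.0, 1.0, 201)
u = x**2 + 0.3 * np.sin(8 * np.pi * x)   # a non-convex mesh function
env = convex_envelope(x, u)
```

By construction the envelope lies below the mesh function and its slopes are nondecreasing, which is what makes the associated Monge-Amp\`ere measures comparable.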

Related Content

We consider various filtered time discretizations of the periodic Korteweg--de Vries equation: a filtered exponential integrator, a filtered Lie splitting scheme, and a filtered resonance-based discretization, and establish convergence error estimates at low regularity. Our analysis is based on discrete Bourgain spaces and allows us to prove convergence in $L^2$ for rough data $u_{0} \in H^s$, $s>0$, with an explicit convergence rate.
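To fix ideas, a filtered Lie splitting step alternates the exact (Fourier-side) flow of the Airy part with a step for the Burgers nonlinearity, with high frequencies projected out. The sketch below is only a schematic illustration of this structure (a sharp filter, an explicit Euler substep, and parameters chosen by us), not the scheme analyzed in the paper:

```python
import numpy as np

def filtered_lie_splitting(u0, tau, steps, cutoff):
    """Schematic filtered Lie splitting for u_t + u_xxx + u u_x = 0
    on [0, 2*pi) with periodic boundary conditions.

    - linear step: exact in Fourier space, u_hat *= exp(i k^3 tau)
    - nonlinear step: one explicit Euler step for u_t = -u u_x
    - filter: frequencies |k| > cutoff are projected out each step"""
    n = len(u0)
    k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers
    filt = np.abs(k) <= cutoff             # sharp frequency filter
    u_hat = np.fft.fft(u0) * filt
    for _ in range(steps):
        # exact flow of the Airy part u_t = -u_xxx
        u_hat = u_hat * np.exp(1j * k**3 * tau)
        # explicit Euler step for the nonlinearity u_t = -u u_x
        u = np.fft.ifft(u_hat).real
        ux = np.fft.ifft(1j * k * u_hat).real
        u_hat = (u_hat - tau * np.fft.fft(u * ux)) * filt
    return np.fft.ifft(u_hat).real

n, tau = 256, 1e-3
x = 2 * np.pi * np.arange(n) / n
u = filtered_lie_splitting(np.cos(x), tau, steps=100, cutoff=32)
```

Note that the zero mode is untouched by both substeps, so the mean of the solution is conserved, mirroring a conservation law of KdV.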

Deep learning has achieved notable success in various fields, including image and speech recognition. One factor in this success is its high feature extraction ability. In this study, we focus on the adaptivity of deep learning; to this end, we consider the variable exponent Besov space, whose smoothness depends on the input location $x$. In other words, the difficulty of the estimation is not uniform within the domain. We analyze the general approximation error of the variable exponent Besov space as well as the approximation and estimation errors of deep learning. We note that the improvement due to adaptivity is remarkable when the region on which the target function is less smooth is small and the dimension is large. Moreover, we show the superiority of deep learning over linear estimators with respect to the convergence rate of the estimation error.

We study the problem of distributed zero-order optimization for a class of strongly convex functions formed by the average of local objectives, each associated with a node in a prescribed network of connections. We propose a distributed zero-order projected gradient descent algorithm to solve this problem. Exchange of information within the network is permitted only between neighbouring nodes. A key feature of the algorithm is that it queries only function values, subject to a general noise model that requires neither zero-mean nor independent errors. We derive upper bounds on the average cumulative regret and the optimization error of the algorithm which highlight the roles played by a network connectivity parameter, the number of variables, the noise level, the strong convexity parameter of the global objective, and certain smoothness properties of the local objectives. When the bound is specialized to the standard non-distributed setting, we obtain an improvement over the state-of-the-art bounds, due to the novel gradient estimation procedure proposed here. We also comment on lower bounds and observe that the dependence of the bound on certain function parameters is nearly optimal.
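The ingredients of such a method can be sketched in a toy setting: nodes on a cycle average their iterates with neighbours (gossip), estimate gradients from two function values along a random direction, and project. This is a generic illustration under our own choices (noise-free evaluations, quadratic local objectives, a standard two-point estimator), not the paper's algorithm or noise model:

```python
import numpy as np

rng = np.random.default_rng(0)

def project_ball(x, radius=10.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def two_point_grad(f, x, delta):
    """Zero-order gradient estimate from two function values only."""
    u = rng.standard_normal(len(x))
    u /= np.linalg.norm(u)                 # random direction on the sphere
    return len(x) * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

# toy network: 5 nodes on a cycle, local objectives f_i(x) = ||x - a_i||^2
d, n_nodes = 3, 5
targets = rng.standard_normal((n_nodes, d))
W = np.zeros((n_nodes, n_nodes))           # doubly stochastic gossip matrix
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

x = np.zeros((n_nodes, d))                 # one iterate per node
for t in range(1, 2001):
    x = W @ x                              # exchange with neighbours only
    for i in range(n_nodes):
        g = two_point_grad(lambda z: np.sum((z - targets[i])**2),
                           x[i], delta=1e-3)
        x[i] = project_ball(x[i] - (1.0 / t) * g)

x_star = targets.mean(axis=0)              # minimiser of the average objective
```

Despite each node seeing only its own function values and its neighbours' iterates, the node average approaches the global minimiser.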

Generalized Linear Bandits (GLBs) are powerful extensions of the Linear Bandit (LB) setting, broadening the benefits of reward parametrization beyond linearity. In this paper we study GLBs in non-stationary environments, characterized by a general metric of non-stationarity known as the variation budget or \emph{parameter drift}, denoted $B_T$. While previous attempts have been made to extend LB algorithms to this setting, they overlook a salient feature of GLBs that undermines their results. In this work, we introduce a new algorithm that addresses this difficulty. We prove that under a geometric assumption on the action set, our approach enjoys a $\tilde{\mathcal{O}}(B_T^{1/3}T^{2/3})$ regret bound; in the general case, it suffers at most $\tilde{\mathcal{O}}(B_T^{1/5}T^{4/5})$ regret. At the core of our contribution is a generalization of the projection step introduced in Filippi et al. (2010), adapted to the non-stationary nature of the problem. Our analysis sheds light on central mechanisms inherited from the setting by explicitly separating the treatment of the learning and tracking aspects of the problem.

Consider the sequential optimization of an expensive-to-evaluate and possibly non-convex objective function $f$ from noisy feedback, which can be viewed as a continuum-armed bandit problem. Upper bounds on the regret of several learning algorithms (GP-UCB, GP-TS, and their variants) are known under both a Bayesian setting (when $f$ is a sample from a Gaussian process (GP)) and a frequentist setting (when $f$ lives in a reproducing kernel Hilbert space). These regret bounds often rely on the maximal information gain $\gamma_T$ between $T$ observations and the underlying GP (surrogate) model. We provide general bounds on $\gamma_T$ based on the decay rate of the eigenvalues of the GP kernel; their specialization to commonly used kernels improves the existing bounds on $\gamma_T$, and subsequently the regret bounds relying on $\gamma_T$, in numerous settings. For the Mat\'ern family of kernels, where lower bounds on $\gamma_T$ and on the regret in the frequentist setting are known, our results close a significant polynomial-in-$T$ gap between the upper and lower bounds (up to factors logarithmic in $T$).
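For a concrete feel of the quantity $\gamma_T$, recall that the information gain of observations at points $X$ with noise variance $\sigma^2$ is $\tfrac12 \log\det(I + \sigma^{-2} K)$, where $K$ is the kernel matrix. The snippet below evaluates this for a squared-exponential kernel on evenly spread points (an illustrative choice of ours, not the maximizing design):

```python
import numpy as np

def info_gain(X, lengthscale=0.2, noise_var=0.01):
    """(1/2) log det(I + K / sigma^2) for an RBF kernel on the 1-D points X.

    This is the mutual information between T noisy observations at X and
    the underlying GP; gamma_T is its maximum over designs of size T."""
    sq = (X[:, None] - X[None, :])**2
    K = np.exp(-sq / (2 * lengthscale**2))
    _, logdet = np.linalg.slogdet(np.eye(len(X)) + K / noise_var)
    return 0.5 * logdet

# for the squared-exponential kernel the gain grows far slower than T,
# reflecting the fast decay of the kernel's eigenvalues
gains = [info_gain(np.linspace(0, 1, T)) for T in (10, 100, 1000)]
```

The strong sublinearity of the gain in $T$ is exactly what makes the regret bounds that scale with $\gamma_T$ meaningful.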

We consider a PDE approach to numerically solving the optimal transportation problem on the sphere. We focus on both the traditional squared geodesic cost and a logarithmic cost, which arises in the reflector antenna design problem. At each point on the sphere, we replace the surface PDE with a generalized Monge-Amp\`ere type equation posed on the tangent plane using normal coordinates. The resulting nonlinear PDE can then be approximated by any consistent, monotone scheme for generalized Monge-Amp\`ere type equations on the plane. By augmenting this discretization with an additional term that constrains the solution gradient, we obtain a strong form of stability. A modification of the Barles-Souganidis convergence framework then establishes convergence to the mean-zero solution of the original PDE.
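The normal coordinates used to pose the equation on the tangent plane come from the logarithm map of the sphere, whose inverse is the exponential map; the squared geodesic cost at $p$ is then just the squared length of the tangent vector. A minimal sketch of these two maps on the unit sphere (our own helper names, not the paper's code):

```python
import numpy as np

def log_map(p, q):
    """Normal coordinates of q in the tangent plane at p (unit sphere).

    Returns a tangent vector v with |v| = geodesic distance d(p, q),
    so the squared geodesic cost is simply |v|^2."""
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    return theta / np.sin(theta) * (q - np.cos(theta) * p)

def exp_map(p, v):
    """Inverse of log_map: follow the geodesic from p with velocity v."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return p.copy()
    return np.cos(t) * p + np.sin(t) * v / t

p = np.array([0.0, 0.0, 1.0])   # north pole as base point
q = np.array([1.0, 0.0, 0.0])   # a point on the equator
v = log_map(p, q)               # q in normal coordinates at p
```

The round trip `exp_map(p, log_map(p, q)) == q` and the orthogonality of `v` to `p` are the defining properties that let the surface PDE be rewritten on the tangent plane.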

This paper considers the iterative solution of linear systems arising from discretization of the anisotropic radiative transfer equation with discontinuous elements on the sphere. In order to achieve convergence behavior that is robust with respect to the discretization and physical parameters, we develop preconditioned Richardson iterations in Hilbert spaces and prove convergence of the resulting scheme. The preconditioner is constructed in two steps. The first step borrows ideas from matrix splittings and ensures mesh independence. The second step uses a subspace correction technique to reduce the influence of the optical parameters. The correction spaces are built from low-order spherical harmonics approximations, generalizing well-known diffusion approximations. We discuss in detail the efficient implementation and application of the discrete operators. In particular, for the discontinuous spherical elements considered here, the scattering operator becomes dense; we show that $\mathcal{H}$- or $\mathcal{H}^2$-matrix compression can be applied in a black-box fashion to obtain almost linear or linear complexity when applying the corresponding approximations. The effectiveness of the proposed method is demonstrated in numerical examples.
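The skeleton of a preconditioned Richardson iteration, $x \leftarrow x + P^{-1}(b - Ax)$, is easy to state in code. The sketch below uses a simple Jacobi splitting on a toy system as a stand-in for the operator preconditioner of the paper; everything here is illustrative:

```python
import numpy as np

def preconditioned_richardson(A, b, apply_prec, tol=1e-10, max_iter=500):
    """Richardson iteration x <- x + P^{-1}(b - A x).

    apply_prec applies the preconditioner P^{-1} to a residual; a matrix
    splitting (here: Jacobi) is one classical way to build it."""
    x = np.zeros_like(b)
    for it in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it
        x = x + apply_prec(r)
    return x, max_iter

# toy symmetric positive definite tridiagonal system
rng = np.random.default_rng(1)
n = 50
A = (np.eye(n) + 0.4 * np.diag(np.ones(n - 1), 1)
               + 0.4 * np.diag(np.ones(n - 1), -1))
b = rng.standard_normal(n)
diag = np.diag(A)
x, iters = preconditioned_richardson(A, b, lambda r: r / diag)  # Jacobi
```

The point of the paper's two-step construction is precisely to make the number of such iterations independent of both the mesh and the optical parameters; the Jacobi choice above only guarantees convergence for this toy matrix.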

A differential geometric framework for constructing an asymptotically unbiased estimator of a function of a parameter is presented. The derived estimator asymptotically coincides with the uniformly minimum variance unbiased estimator if a complete sufficient statistic exists. The framework is based on maximum a posteriori estimation, where the prior is chosen such that the estimator is unbiased. The framework is demonstrated for second-order asymptotic unbiasedness (unbiased up to $O(n^{-1})$ for a sample of size $n$). The condition of asymptotic unbiasedness leads to a choice of prior such that the departure from a kind of harmonicity of the estimand is canceled out at each point of the model manifold. For a given estimand, the prior is given as an integral; conversely, for a given prior, the estimand whose bias can be reduced is found by solving an elliptic partial differential equation. A family of invariant priors, which generalizes the Jeffreys prior, is presented as a specific example. Some illustrative applications of the proposed framework are provided.

We prove lower bounds for higher-order methods in smooth non-convex finite-sum optimization. Our contribution is threefold: we first show that a deterministic algorithm cannot profit from the finite-sum structure of the objective, and that simulating a $p$th-order regularized method on the whole function by constructing exact gradient information is optimal up to constant factors. We further show lower bounds for randomized algorithms and compare them with the best known upper bounds. To address some gaps between these bounds, we propose a new second-order smoothness assumption that can be seen as an analogue of the first-order mean-squared smoothness assumption. We prove that it is sufficient to ensure state-of-the-art convergence guarantees while allowing for a sharper lower bound.

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) Lipschitz continuity of the global objective function, and (2) Lipschitz continuity of the local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm, called multi-step primal-dual (MSPD), together with its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limited communication resources decreases at a fast rate even for non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm, called distributed randomized smoothing (DRS), based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
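The smoothing device behind such randomized-smoothing methods replaces a non-smooth $f$ by $f_\mu(x) = \mathbb{E}[f(x + \mu Z)]$ with Gaussian $Z$, which is differentiable and admits a simple Monte-Carlo gradient estimator. The sketch below illustrates only this building block (our own helper, sample sizes, and test functions, not the DRS algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(2)

def smoothed_grad(f, x, mu, n_samples=2000):
    """Monte-Carlo gradient of the Gaussian smoothing f_mu(x) = E[f(x + mu Z)].

    Uses the identity grad f_mu(x) = E[(f(x + mu Z) - f(x)) Z] / mu,
    which is well defined even when f itself is non-smooth."""
    Z = rng.standard_normal((n_samples, len(x)))
    vals = np.array([f(x + mu * z) - f(x) for z in Z])
    return (vals[:, None] * Z).mean(axis=0) / mu

x0 = np.array([1.0, -0.5, 0.25])

# sanity check on a smooth function: the gradient of ||x||^2 at x0 is 2*x0
g = smoothed_grad(lambda z: np.dot(z, z), x0, mu=0.1)

# on the non-smooth f(x) = ||x||_1 the estimator still returns a
# well-defined smoothed gradient, usable by a first-order method
g1 = smoothed_grad(lambda z: np.abs(z).sum(), x0, mu=0.1)
```

Applying local smoothing of this kind at each node, and paying only the modest $d^{1/4}$ factor in the rate, is the trade-off the abstract describes.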
