
This article considers the popular MCMC method of unadjusted Langevin Monte Carlo (LMC) and provides a non-asymptotic analysis of its sampling error in 2-Wasserstein distance. The proof is based on a mean-square analysis framework refined from Li et al. (2019), which works for a large class of sampling algorithms based on discretizations of contractive SDEs. We establish an $\tilde{O}(\sqrt{d}/\epsilon)$ mixing time bound for LMC, without warm start, under the common log-smooth and log-strongly-convex conditions, plus a growth condition on the third-order derivative of the potential of target measures. This bound improves the best previously known $\tilde{O}(d/\epsilon)$ result and is optimal (in terms of order) in both dimension $d$ and accuracy tolerance $\epsilon$ for target measures satisfying the aforementioned assumptions. Our theoretical analysis is further validated by numerical experiments.
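
For reference, the LMC iteration analyzed here is the Euler-Maruyama discretization of the overdamped Langevin diffusion: $x_{k+1} = x_k - h \nabla f(x_k) + \sqrt{2h}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I_d)$, where $f$ is the potential of the target measure $\propto e^{-f}$. A minimal Python sketch, assuming an illustrative Gaussian target and step size:

```python
import numpy as np

# Unadjusted Langevin Monte Carlo: Euler-Maruyama discretization of
# dX_t = -grad f(X_t) dt + sqrt(2) dW_t, which targets exp(-f) up to
# discretization bias. Potential and step size below are illustrative.

def lmc(grad_f, x0, step, n_iters, seed=None):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_f(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Example: sample from N(0, I_d) via f(x) = ||x||^2 / 2, so grad f(x) = x.
draws = lmc(lambda x: x, x0=np.zeros(10), step=0.05, n_iters=5000)
```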

Related Content

A comparison of a new algorithm for line clipping in E2 and E3 against a convex polygon and/or polyhedron, with O(1) processing complexity, with the Cyrus-Beck algorithm is presented. The new algorithm in E2 is based on a dual-space representation and a space-subdivision technique. The algorithm in E3 is based on projecting the polyhedron onto three orthogonal E2 coordinate systems. Both algorithms have optimal O(1) processing complexity and demonstrate that preprocessing can speed up line clipping significantly. The obvious application is clipping many lines against a single polygon and/or polyhedron. Detailed theoretical estimates and experimental results are also presented.
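
To make the baseline concrete, here is a minimal Python sketch of classical Cyrus-Beck parametric clipping of a segment against a convex polygon in E2 (counter-clockwise vertex order and the tolerance are assumptions of this sketch; the paper's O(1) dual-space method is not reproduced here):

```python
import numpy as np

def cyrus_beck_clip(p0, p1, polygon):
    """Clip segment p0->p1 against a convex CCW polygon (list of 2D vertices).
    Returns the clipped endpoints, or None if the segment lies outside."""
    d = p1 - p0
    t_enter, t_leave = 0.0, 1.0
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        edge = b - a
        normal = np.array([edge[1], -edge[0]])  # outward normal for CCW order
        denom = normal @ d
        num = normal @ (a - p0)
        if abs(denom) < 1e-12:     # segment parallel to this edge
            if num < 0:            # ... and entirely outside its half-plane
                return None
            continue
        t = num / denom
        if denom < 0:              # crossing into the half-plane: entering
            t_enter = max(t_enter, t)
        else:                      # crossing out of the half-plane: leaving
            t_leave = min(t_leave, t)
        if t_enter > t_leave:
            return None
    return p0 + t_enter * d, p0 + t_leave * d

# Example: clip a horizontal segment against the unit square.
square = [np.array(v, dtype=float) for v in [(0, 0), (1, 0), (1, 1), (0, 1)]]
ends = cyrus_beck_clip(np.array([-1.0, 0.5]), np.array([2.0, 0.5]), square)
# ends == (array([0., 0.5]), array([1., 0.5]))
```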

Low-Rank Parity-Check (LRPC) codes are a class of rank-metric codes with many applications, particularly in network coding and cryptography. Recently, LRPC codes have been extended to Galois rings, a specific case of finite rings. In this paper, we first define LRPC codes over finite commutative local rings, the building blocks of finite rings, together with an efficient decoder. We improve the theoretical bound on the failure probability of the decoder. We then extend the work to arbitrary finite commutative rings. Certain conditions are generally used to ensure the success of the decoder; over finite fields, one of these conditions is that the extension degree of the Galois field be a prime number. We show that one can construct LRPC codes without this condition on the degree of the Galois extension.
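
For orientation, the finite-field LRPC decoder that this line of work generalizes recovers the error support from the syndrome (standard notation from the field setting; the ring-case decoder in the paper refines this). If the parity-check entries span an $\mathbb{F}_q$-subspace $F = \langle f_1, \dots, f_d \rangle$ of $\mathbb{F}_{q^m}$ and the error has support $E$ of dimension $r$, the syndrome coordinates lie in the product space $EF$, and whenever the syndrome space $S = \langle s_1, \dots, s_{n-k} \rangle$ equals $EF$, the support is recovered as
$$E = \bigcap_{i=1}^{d} f_i^{-1} S,$$
after which the error coordinates are found by linear algebra over a basis of $E$. Decoding fails when $S \subsetneq EF$ or the intersection strictly contains $E$, which is exactly what the failure-probability bound quantifies.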

In this paper, we develop a novel virtual-queue-based online algorithm for online convex optimization (OCO) problems with long-term and time-varying constraints, and we analyze its performance with respect to dynamic regret and constraint violations. We design a new update rule for the dual variables and a new way of incorporating time-varying constraint functions into them. To the best of our knowledge, ours is the first parameter-free algorithm to simultaneously achieve sublinear dynamic regret and constraint violations. The proposed algorithm also improves on state-of-the-art results in several respects; for example, it does not require the Slater condition. Moreover, for a group of practical and widely studied constrained OCO problems in which the variation of consecutive constraints is sufficiently smooth across time, our algorithm achieves $O(1)$ constraint violations. Furthermore, we extend the algorithm and analysis to the case where the time horizon $T$ is unknown. Finally, numerical experiments validate the theoretical guarantees, and some applications of the proposed framework are outlined.
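
As a generic illustration of the virtual-queue template (a hedged sketch of the general mechanism, not the paper's exact update rule; the parameters V, alpha, and the feasible-set radius are illustrative), each round takes a queue-penalized primal step and then accumulates constraint violation in the queue, which plays the role of a dual variable that needs no tuning against a Slater constant:

```python
import numpy as np

def virtual_queue_step(x, Q, grad_f, g, grad_g, V=1.0, alpha=1.0, radius=10.0):
    """One OCO round with a scalar time-varying constraint g_t(x) <= 0.
    Primal: descend on V * f_t + Q_t * g_t, project onto a Euclidean ball.
    Dual:   the virtual queue Q_t accumulates constraint violation."""
    direction = V * grad_f(x) + Q * grad_g(x)
    x_new = x - direction / (2.0 * alpha)
    norm = np.linalg.norm(x_new)
    if norm > radius:                       # projection onto the ball
        x_new = x_new * (radius / norm)
    Q_new = max(0.0, Q + g(x_new))          # queue never goes negative
    return x_new, Q_new
```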

In the present paper we initiate the challenging task of building a mathematically sound theory for Adaptive Virtual Element Methods (AVEMs). Within the realm of polygonal meshes, we restrict our analysis to triangular meshes with hanging nodes in 2d -- the simplest meshes with a systematic refinement procedure that preserves shape regularity and optimal complexity. A major challenge in the a posteriori error analysis of AVEMs is the presence of the stabilization term, which is of the same order as the residual-type error estimator but prevents the equivalence of the latter with the energy error. Under the assumption that any chain of recursively created hanging nodes has uniformly bounded length, we show that the stabilization term can be made arbitrarily small relative to the error estimator provided the stabilization parameter of the scheme is sufficiently large. This quantitative estimate leads to stabilization-free upper and lower a posteriori bounds for the energy error. This novel and crucial property of VEMs hinges on the largest subspace of continuous piecewise linear functions and the delicate interplay between its coarser scales and the finer ones of the VEM space. Our results apply to $H^1$-conforming (lowest order) VEMs of any kind, including the classical and enhanced VEMs.
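
For context, the stabilization term in question enters the lowest-order VEM discrete bilinear form in the standard way (generic notation, not specific to this paper): on each element $E$,
$$a_h^E(u_h, v_h) = a^E\big(\Pi^\nabla u_h, \Pi^\nabla v_h\big) + \sigma_E\, S^E\big((I - \Pi^\nabla) u_h, (I - \Pi^\nabla) v_h\big),$$
where $\Pi^\nabla$ is the elementwise energy projection onto polynomials and $\sigma_E > 0$ is the stabilization parameter. The result above states that, under the bounded-chain assumption on hanging nodes, choosing $\sigma_E$ sufficiently large makes the second summand negligible relative to the residual estimator, yielding the stabilization-free bounds.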

We extend Huber's (1963) inequality for the joint distribution function of negatively dependent scores in round-robin tournaments. As a byproduct, this extension implies convergence in probability of the maximal score in round-robin tournaments in a more general setting.

In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs). To simplify the problem, we focus on a prototype elliptic PDE: the Schr\"odinger equation on a hypercube with zero Dirichlet boundary condition, which has wide application in quantum-mechanical systems. We establish upper and lower bounds for both methods, improving upon concurrently developed upper bounds for this problem obtained via a fast-rate generalization bound. We discover that the current Deep Ritz Method is sub-optimal and propose a modified version of it. We also prove that PINNs and the modified DRM achieve minimax-optimal bounds over Sobolev spaces. Empirically, following recent work showing that deep-model accuracy improves with growing training sets according to a power law, we supply computational experiments demonstrating a similar dimension-dependent power law for deep PDE solvers.
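
A minimal PINN-style sketch for the prototype problem $-\Delta u + V u = f$ on $[0,1]^d$ with zero Dirichlet data (the network architecture, constant potential $V$, right-hand side $f$, and sample sizes are all illustrative assumptions; DRM would instead minimize the Ritz energy $\int \frac{1}{2}|\nabla u|^2 + \frac{1}{2}V u^2 - fu$ on the same random samples):

```python
import torch
import torch.nn as nn

d = 2
net = nn.Sequential(nn.Linear(d, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))

def laplacian(u, x):
    """Sum of second derivatives of u with respect to each coordinate of x."""
    grad = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = 0.0
    for i in range(x.shape[1]):
        lap = lap + torch.autograd.grad(grad[:, i].sum(), x,
                                        create_graph=True)[0][:, i]
    return lap

def pinn_loss(n_int=256, n_bdy=256):
    # Interior residual of -Delta u + V u - f on random samples.
    x = torch.rand(n_int, d, requires_grad=True)
    u = net(x).squeeze(-1)
    V, f = 1.0, torch.ones(n_int)            # illustrative potential and rhs
    residual = -laplacian(u, x) + V * u - f
    # Zero Dirichlet boundary: fix one random coordinate to 0 or 1.
    xb = torch.rand(n_bdy, d)
    idx = torch.randint(0, d, (n_bdy,))
    xb[torch.arange(n_bdy), idx] = torch.randint(0, 2, (n_bdy,)).float()
    ub = net(xb).squeeze(-1)
    return (residual ** 2).mean() + (ub ** 2).mean()
```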

We study sparse linear regression over a network of agents, modeled as an undirected graph (with no centralized node). The estimation problem is formulated as the minimization of the sum of the local LASSO loss functions plus a quadratic penalty of the consensus constraint -- the latter being instrumental to obtain distributed solution methods. While penalty-based consensus methods have been extensively studied in the optimization literature, their statistical and computational guarantees in the high-dimensional setting remain unclear. This work provides an answer to this open problem. Our contribution is two-fold. First, we establish statistical consistency of the estimator: under a suitable choice of the penalty parameter, the optimal solution of the penalized problem achieves near optimal minimax rate $\mathcal{O}(s \log d/N)$ in $\ell_2$-loss, where $s$ is the sparsity value, $d$ is the ambient dimension, and $N$ is the total sample size in the network -- this matches centralized sample rates. Second, we show that the proximal-gradient algorithm applied to the penalized problem, which naturally leads to distributed implementations, converges linearly up to a tolerance of the order of the centralized statistical error -- the rate scales as $\mathcal{O}(d)$, revealing an unavoidable speed-accuracy dilemma. Numerical results demonstrate the tightness of the derived sample rate and convergence rate scalings.
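
A hedged sketch of the algorithmic piece: proximal gradient (ISTA) applied to the consensus-penalized LASSO, where each agent keeps its own copy of the parameter and the graph Laplacian $L$ encodes the quadratic consensus penalty (the penalty weight, step size, and iteration count below are illustrative, not the tuned choices of the analysis; the row-wise split of the updates across agents is what makes the method distributed):

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def penalized_ista(A, b, L, lam=0.1, rho=1.0, step=0.01, iters=500):
    """A: list of local design matrices; b: list of local responses;
    L: graph Laplacian of the agent network (m x m)."""
    m, dim = len(A), A[0].shape[1]
    X = np.zeros((m, dim))                    # row i = agent i's local copy
    for _ in range(iters):
        grad = np.stack([Ai.T @ (Ai @ X[i] - bi) / len(bi)
                         for i, (Ai, bi) in enumerate(zip(A, b))])
        grad += rho * (L @ X)                 # gradient of the consensus penalty
        X = soft_threshold(X - step * grad, step * lam)
    return X
```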

The matrix normal model, the family of Gaussian matrix-variate distributions whose covariance matrix is the Kronecker product of two lower dimensional factors, is frequently used to model matrix-variate data. The tensor normal model generalizes this family to Kronecker products of three or more factors. We study the estimation of the Kronecker factors of the covariance matrix in the matrix and tensor models. We show nonasymptotic bounds for the error achieved by the maximum likelihood estimator (MLE) in several natural metrics. In contrast to existing bounds, our results do not rely on the factors being well-conditioned or sparse. For the matrix normal model, all our bounds are minimax optimal up to logarithmic factors, and for the tensor normal model our bounds for the largest factor and the overall covariance matrix are minimax optimal up to constant factors provided there are enough samples for any estimator to obtain constant Frobenius error. In the same regimes as our sample complexity bounds, we show that an iterative procedure to compute the MLE known as the flip-flop algorithm converges linearly with high probability. Our main tool is geodesic strong convexity in the geometry on positive-definite matrices induced by the Fisher information metric. This strong convexity is determined by the expansion of certain random quantum channels. We also provide numerical evidence that combining the flip-flop algorithm with a simple shrinkage estimator can improve performance in the undersampled regime.
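
For concreteness, the flip-flop iteration for the matrix normal MLE alternates closed-form updates of the two Kronecker factors, each maximizing the likelihood with the other factor held fixed (identity initialization and a fixed iteration count are illustrative; the factorization $\Sigma = B \otimes A$ is identifiable only up to a scalar):

```python
import numpy as np

def flip_flop(samples, iters=50):
    """MLE for the matrix normal model. samples: array of shape (n, p, q)
    with cov(vec(X)) = kron(B, A); returns the factor estimates (A, B)."""
    n, p, q = samples.shape
    A, B = np.eye(p), np.eye(q)
    for _ in range(iters):
        B_inv = np.linalg.inv(B)
        A = sum(X @ B_inv @ X.T for X in samples) / (n * q)
        A_inv = np.linalg.inv(A)
        B = sum(X.T @ A_inv @ X for X in samples) / (n * p)
    return A, B
```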

A class of optimal three-weight cyclic codes of dimension 3 over any finite field was presented by Vega [Finite Fields Appl., 42 (2016) 23-38]. Shortly thereafter, Heng and Yue [IEEE Trans. Inf. Theory, 62(8) (2016) 4501-4513] generalized this result by presenting several classes of cyclic codes with either optimal three weights or a few weights. On the other hand, a class of optimal five-weight cyclic codes of dimension 4 over a prime field was recently presented by Li et al. [Adv. Math. Commun., 13(1) (2019) 137-156]. One of the purposes of this work is to present a more general description for these optimal five-weight cyclic codes, which gives rise to an enlarged class of optimal five-weight cyclic codes of dimension 4 over any finite field. As an application of this enlarged class, we present the complete weight enumerator of a subclass of the optimal three-weight cyclic codes over any finite field that were studied by Vega [Finite Fields Appl., 42 (2016) 23-38]. In addition, we study the dual codes in this enlarged class of optimal five-weight cyclic codes, and show that they are cyclic codes of length $q^2-1$, dimension $q^2-5$, and minimum Hamming distance 4. In fact, several examples show that these are the best known parameters for linear codes.

Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility.
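
A toy sketch of the filtering idea behind such algorithms for robust mean estimation (the thresholds and stopping rule are illustrative placeholders, not the refinements developed in the paper, and inliers are assumed to have near-identity covariance): repeatedly find the direction of largest empirical variance and discard the most outlying projections along it.

```python
import numpy as np

def filter_mean(X, eps, max_rounds=20):
    """Robust mean of an (n, d) sample with roughly an eps fraction corrupted."""
    X = X.copy()
    for _ in range(max_rounds):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        if eigvals[-1] <= 1 + 10 * eps:      # covariance close to identity: done
            return mu
        v = eigvecs[:, -1]                   # direction of largest variance
        proj = np.abs((X - mu) @ v)
        X = X[proj <= np.quantile(proj, 1 - eps)]  # drop the outlying tail
    return X.mean(axis=0)
```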
