
In this paper, we present a comprehensive convergence analysis of Laguerre spectral approximations for analytic functions. By exploiting contour integral techniques from complex analysis, we prove rigorously that Laguerre projection and interpolation methods of degree $n$ converge at the root-exponential rate $O(\exp(-2\rho\sqrt{n}))$ with $\rho>0$ when the underlying function is analytic inside and on a parabola with focus at the origin and vertex at $z=-\rho^2$. Extensions to several important applications are also discussed, including Laguerre spectral differentiation, Gauss-Laguerre quadrature rules, and the Weeks method for the inversion of the Laplace transform, and some sharp convergence rate estimates are derived. Numerical experiments are presented to verify the theoretical results.
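
As an illustrative sketch (not the authors' code), the following Python snippet estimates the Laguerre coefficients of a sample analytic function by Gauss-Laguerre quadrature and checks that $\log|c_k|$ decays roughly linearly in $\sqrt{k}$, consistent with the root-exponential rate; the test function $f(x)=1/(1+x)$ and the quadrature size are assumptions made purely for illustration.

# Hedged sketch: Laguerre coefficients of an analytic function computed by Gauss-Laguerre
# quadrature; their decay should be root-exponential, i.e. roughly exp(-2*rho*sqrt(k)).
import numpy as np

f = lambda x: 1.0 / (1.0 + x)                            # analytic; nearest singularity at x = -1
nodes, weights = np.polynomial.laguerre.laggauss(100)    # Gauss-Laguerre rule for weight e^{-x}

def laguerre_coeff(k):
    # c_k = int_0^inf f(x) L_k(x) e^{-x} dx, with L_k the classical Laguerre polynomial
    e_k = np.zeros(k + 1)
    e_k[k] = 1.0
    return np.sum(weights * f(nodes) * np.polynomial.laguerre.lagval(nodes, e_k))

ks = np.arange(1, 50)
cs = np.abs([laguerre_coeff(k) for k in ks])
slope, _ = np.polyfit(np.sqrt(ks), np.log(cs), 1)
print("slope of log|c_k| against sqrt(k):", slope)       # close to -2*rho; here rho is about 1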

Related content

This paper investigates gradient descent for solving low-rank matrix approximation problems. We begin by establishing the local linear convergence of gradient descent for symmetric matrix approximation. Building on this result, we prove the rapid global convergence of gradient descent, particularly when initialized with small random values. Remarkably, we show that even with moderate random initialization, which includes small random initialization as a special case, gradient descent achieves fast global convergence in scenarios where the top eigenvalues are identical. Furthermore, we extend our analysis to address asymmetric matrix approximation problems and investigate the effectiveness of a retraction-free eigenspace computation method. Numerical experiments strongly support our theory. In particular, the retraction-free algorithm outperforms the corresponding Riemannian gradient descent method, resulting in a significant 29\% reduction in runtime.
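
A minimal sketch of the symmetric setting described above, assuming a toy problem size, step size and initialization scale: gradient descent on the factored objective $\tfrac14\|XX^\top-A\|_F^2$ with identical top eigenvalues and a small random start.

# Hedged sketch: factored gradient descent for symmetric low-rank approximation with
# small random initialization. Problem sizes and the step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
spec = np.concatenate([np.full(r, 10.0), np.linspace(1.0, 0.0, n - r)])
A = (V * spec) @ V.T                        # symmetric target with identical top-r eigenvalues

X = 1e-3 * rng.standard_normal((n, r))      # small random initialization
eta = 0.05                                  # fixed step size, well below 1 / (top eigenvalue)

for _ in range(1000):
    X -= eta * (X @ X.T - A) @ X            # gradient of (1/4) * ||X X^T - A||_F^2

w, U = np.linalg.eigh(A)
best = (U[:, -r:] * w[-r:]) @ U[:, -r:].T   # best rank-r approximation of A
print("relative error:", np.linalg.norm(X @ X.T - best) / np.linalg.norm(best))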

Functional data analysis is an important research field in statistics that treats data as random functions drawn from an infinite-dimensional functional space, and functional principal component analysis (FPCA) based on eigen-decomposition plays a central role in data reduction and representation. After nearly three decades of research, a key problem remains unsolved, namely, the perturbation analysis of the covariance operator for a diverging number of eigencomponents obtained from noisy and discretely observed data. This problem is fundamental for studying models and methods based on FPCA, yet there has been no substantial progress since the result of Hall, M\"uller and Wang (2006) for a fixed number of eigenfunction estimates. In this work, we aim to establish a unified theory for this problem, deriving moment bounds for eigenfunctions and asymptotic distributions of eigenvalues for a wide range of sampling schemes. Our results provide insight into the phenomenon that the $\mathcal{L}^{2}$ bound of eigenfunction estimates with diverging indices is minimax optimal, as if the curves were fully observed, and reveal the transition of convergence rates from nonparametric to parametric regimes in connection with sparse or dense sampling. We also propose a double truncation technique to derive, for the first time, the uniform convergence (in the time domain) of estimated eigenfunctions. The technical arguments in this work are useful for handling perturbation series with noisy and discretely observed data, and can be applied to models based on FPCA or to those involving inverse problems that use FPCA as regularization, such as functional linear regression.
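
As a hedged illustration of FPCA from noisy, discretely observed curves (a simulated toy design, not the paper's setting), the sketch below eigen-decomposes the discretized sample covariance on a common dense grid and recovers $\mathcal{L}^2$-normalized eigenfunction estimates.

# Hedged sketch: FPCA from noisy curves observed on a common dense grid. The simulated model
# (two Fourier eigenfunctions, Gaussian scores and noise) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 101
t = np.linspace(0.0, 1.0, p)
phi1 = np.sqrt(2.0) * np.sin(2 * np.pi * t)        # true orthonormal eigenfunctions on [0, 1]
phi2 = np.sqrt(2.0) * np.cos(2 * np.pi * t)
scores = rng.standard_normal((n, 2)) * np.sqrt([4.0, 1.0])                  # eigenvalues 4 and 1
Y = scores @ np.vstack([phi1, phi2]) + 0.2 * rng.standard_normal((n, p))    # noisy discrete observations

dt = t[1] - t[0]
C = np.cov(Y, rowvar=False)                        # p x p sample covariance (noise sits on the diagonal)
vals, vecs = np.linalg.eigh(C * dt)                # discretized covariance operator
vals, vecs = vals[::-1], vecs[:, ::-1] / np.sqrt(dt)   # descending eigenvalues, L2-normalized eigenfunctions

print("leading eigenvalue estimates:", vals[:3])   # roughly 4 and 1, then much smaller values
print("overlap with phi1:", abs(np.sum(vecs[:, 0] * phi1) * dt))   # close to 1 up to sign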

Over the last decade, approximating functions in infinite dimensions from samples has gained increasing attention in computational science and engineering, especially in computational uncertainty quantification. This is primarily due to the relevance of functions that are solutions to parametric differential equations in various fields, e.g., chemistry, economics, engineering, and physics. While acquiring accurate and reliable approximations of such functions is inherently difficult, current benchmark methods exploit the fact that such functions often belong to certain classes of holomorphic functions to obtain algebraic convergence rates in infinite dimensions with respect to the number of (potentially adaptive) samples $m$. Our work focuses on providing theoretical approximation guarantees for the class of $(\boldsymbol{b},\varepsilon)$-holomorphic functions, demonstrating that these algebraic rates are the best possible for Banach-valued functions in infinite dimensions. We establish lower bounds using a reduction to a discrete problem in combination with the theory of $m$-widths, Gelfand widths and Kolmogorov widths. We study two cases, known and unknown anisotropy, in which the relative importance of the variables is known and unknown, respectively. A key conclusion of our paper is that in the latter setting, approximation from finite samples is impossible without some inherent ordering of the variables, even if the samples are chosen adaptively. Finally, in both cases, we demonstrate near-optimal, non-adaptive (random) sampling and recovery strategies which achieve rates close to those of the lower bounds.

Generative Adversarial Networks (GANs) are a popular formulation to train generative models for complex high dimensional data. The standard method for training GANs involves a gradient descent-ascent (GDA) procedure on a minimax optimization problem. This procedure is hard to analyze in general due to the nonlinear nature of the dynamics. We study the local dynamics of GDA for training a GAN with a kernel-based discriminator. This convergence analysis is based on a linearization of a non-linear dynamical system that describes the GDA iterations, under an \textit{isolated points model} assumption from [Becker et al. 2022]. Our analysis brings out the effect of the learning rates, regularization, and the bandwidth of the kernel discriminator, on the local convergence rate of GDA. Importantly, we show phase transitions that indicate when the system converges, oscillates, or diverges. We also provide numerical simulations that verify our claims.
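
The phase-transition behaviour can be illustrated on a toy quadratic minimax problem (not the kernel-discriminator model analyzed in the paper): the sketch below classifies the linearized GDA dynamics by the spectral radius of the iteration matrix as the learning rate varies; the curvature and interaction parameters are assumed values.

# Hedged sketch: simultaneous GDA on min_x max_y (a/2)x^2 + b*x*y - (c/2)y^2, classified by
# the spectral radius of the linear iteration matrix. Parameter values are assumptions.
import numpy as np

a, b, c = 0.2, 1.0, 0.2          # each player's own curvature and the interaction strength (assumed)

def gda_matrix(eta):
    # x <- x - eta * (a*x + b*y),   y <- y + eta * (b*x - c*y)
    return np.array([[1.0 - eta * a, -eta * b],
                     [eta * b,        1.0 - eta * c]])

for eta in [0.05, 0.2, 0.4 / 1.04, 0.8]:            # 0.4/1.04 sits exactly on the stability boundary here
    rho = np.max(np.abs(np.linalg.eigvals(gda_matrix(eta))))
    if rho < 1.0 - 1e-6:
        regime = "converges"
    elif rho > 1.0 + 1e-6:
        regime = "diverges"
    else:
        regime = "marginal: oscillates"
    print(f"learning rate {eta:.3f}: spectral radius {rho:.4f} -> {regime}")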

A rigorous full-wave modal analysis based on the method of moments in the spectral domain is presented for line waveguides constituted by two-part impedance planes with arbitrary anisotropic surface impedances. An integral equation is formulated by introducing an auxiliary current sheet on one of the two half planes and extending the impedance boundary condition of the complementary half plane to hold on the entire plane. The equation is then discretized with the method of moments in the spectral domain, by employing exponentially weighted Laguerre polynomials as entire-domain basis functions and performing a Galerkin testing. Numerical results for both bound and leaky line waves are presented and validated against independent results, obtained for isotropic surface impedances with the analytical Sommerfeld-Maliuzhinets method and for the general anisotropic case with a commercial electromagnetic simulator. The proposed approach is computationally efficient, can accommodate the presence of spatial dispersion, and offers physical insight into the modal propagation regimes.

The mean shift (MS) algorithm seeks a mode of the kernel density estimate (KDE). This study presents a convergence guarantee for the mode estimate sequence generated by the MS algorithm and an evaluation of the convergence rate, under fairly mild conditions, with the help of an argument based on the {\L}ojasiewicz inequality. Our findings, which extend existing ones covering analytic kernels and the Epanechnikov kernel, are significant in that they cover the biweight kernel, which is optimal among non-negative kernels in terms of asymptotic statistical efficiency for KDE-based mode estimation.
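
A minimal mean shift sketch for the biweight kernel $K(x)\propto(1-\|x\|^2)_+^2$ is given below; the one-dimensional data set, bandwidth and starting point are illustrative assumptions.

# Hedged sketch: the mean shift iteration for the biweight kernel. The weights are the
# (negative) derivative of the biweight profile (1-u)_+^2, up to a constant factor.
import numpy as np

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2.0, 0.4, 300), rng.normal(1.5, 0.6, 700)])   # 1-D sample
h = 0.5                                          # bandwidth (assumed)

def mean_shift_step(x):
    u = ((x - data) / h) ** 2
    w = np.maximum(1.0 - u, 0.0)                 # biweight mean shift weights
    return np.sum(w * data) / np.sum(w)

x = 0.0                                          # starting point
for _ in range(100):
    x_new = mean_shift_step(x)
    if abs(x_new - x) < 1e-10:
        break
    x = x_new
print("mode estimate:", x)                       # converges to a local mode of the biweight KDE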

We propose statistically robust and computationally efficient linear learning methods in the high-dimensional batch setting, where the number of features $d$ may exceed the sample size $n$. We employ, in a generic learning setting, two algorithms depending on whether the considered loss function is gradient-Lipschitz or not. We then instantiate our framework on several applications, including vanilla sparse, group-sparse and low-rank matrix recovery. This leads, for each application, to efficient and robust learning algorithms that reach near-optimal estimation rates under heavy-tailed distributions and in the presence of outliers. For vanilla $s$-sparsity, we are able to reach the $s\log (d)/n$ rate under heavy tails and $\eta$-corruption, at a computational cost comparable to that of non-robust analogs. We provide an efficient implementation of our algorithms in an open-source $\mathtt{Python}$ library called $\mathtt{linlearn}$, by means of which we carry out numerical experiments that confirm our theoretical findings, together with a comparison to other recent approaches proposed in the literature.
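
As a generic illustration of robust linear learning under heavy tails and corruption (not the $\mathtt{linlearn}$ implementation, and omitting the sparsity penalty), the sketch below runs gradient descent with a coordinate-wise median-of-means gradient estimator; the problem sizes, step size and number of blocks are assumptions.

# Hedged sketch: gradient descent for least squares where the gradient is estimated by a
# coordinate-wise median of per-block means, a standard robustification under heavy-tailed
# noise and a small fraction of gross outliers.
import numpy as np

rng = np.random.default_rng(3)
n, d = 600, 20
X = rng.standard_normal((n, d))
w_star = np.zeros(d); w_star[:5] = 1.0                 # s-sparse ground truth
y = X @ w_star + rng.standard_t(df=2.5, size=n)        # heavy-tailed noise
y[:5] += 50.0                                          # a handful of gross outliers

def mom_gradient(w, n_blocks=20):
    idx = rng.permutation(n)
    block_grads = []
    for block in np.array_split(idx, n_blocks):
        r = X[block] @ w - y[block]
        block_grads.append(X[block].T @ r / len(block))   # per-block squared-loss gradient
    return np.median(np.stack(block_grads), axis=0)       # coordinate-wise median of block means

w = np.zeros(d)
for _ in range(500):
    w -= 0.05 * mom_gradient(w)
print("estimation error:", np.linalg.norm(w - w_star))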

This article introduces a randomized block Gram-Schmidt process (RBGS) for QR decomposition. RBGS extends the single-vector randomized Gram-Schmidt (RGS) algorithm and inherits its key characteristics, such as improved efficiency and at least as much stability as any deterministic (block) Gram-Schmidt algorithm. Block algorithms offer superior performance because they rely on BLAS-3 matrix-matrix operations and reduce communication cost when executed in parallel. Notably, our low-synchronization variant of RBGS can be implemented in a parallel environment using only one global reduction operation between processors per block. Moreover, block Gram-Schmidt orthogonalization is the key element in the block Arnoldi procedure for the construction of a Krylov basis, which in turn is used in GMRES, FOM and Rayleigh-Ritz methods for the solution of linear systems and clustered eigenvalue problems. In this article, we develop randomized versions of these methods, based on RBGS, and validate them on nontrivial numerical examples.
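
One simple way to realize the sketched orthogonalization idea (not necessarily the exact RBGS variant of the paper) is shown below: each block is orthogonalized against previous columns using a Gaussian sketch, so that the sketched basis $SQ$ is orthonormal; the sketch size, block size and test matrix are assumptions.

# Hedged sketch: a randomized block Gram-Schmidt QR in which projection coefficients and the
# per-block triangular factors are computed from a Gaussian sketch of the columns.
import numpy as np

def randomized_block_gs(A, block=4, sketch_size=None, seed=0):
    m, n = A.shape
    k = sketch_size or 4 * n                      # sketch dimension (assumed oversampling factor)
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((k, m)) / np.sqrt(k)  # Gaussian sketching matrix
    Q = np.empty((m, 0)); SQ = np.empty((k, 0)); R = np.zeros((n, n))
    for j in range(0, n, block):
        W = A[:, j:j + block].copy()
        if Q.shape[1]:
            C = SQ.T @ (S @ W)                    # projection coefficients computed in the sketch
            W -= Q @ C
            R[:Q.shape[1], j:j + W.shape[1]] = C
        Qs, Rs = np.linalg.qr(S @ W)              # QR of the sketched block
        Qj = np.linalg.solve(Rs.T, W.T).T         # W @ inv(Rs): the sketched columns become orthonormal
        R[j:j + W.shape[1], j:j + W.shape[1]] = Rs
        Q = np.hstack([Q, Qj]); SQ = np.hstack([SQ, Qs])   # SQ = S @ Q since S @ Qj = Qs
    return Q, R

A = np.random.default_rng(1).standard_normal((500, 40))
Q, R = randomized_block_gs(A)
print(np.linalg.norm(A - Q @ R) / np.linalg.norm(A))   # factorization error
print(np.linalg.cond(Q))                               # well conditioned, though not exactly orthonormal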

We study the convergence to local Nash equilibria of gradient methods for two-player zero-sum differentiable games. It is well-known that such dynamics converge locally when $S \succ 0$ and may diverge when $S=0$, where $S\succeq 0$ is the symmetric part of the Jacobian at equilibrium that accounts for the "potential" component of the game. We show that these dynamics also converge as soon as $S$ is nonzero (partial curvature) and the eigenvectors of the antisymmetric part $A$ are in general position with respect to the kernel of $S$. We then study the convergence rates when $S \ll A$ and prove that they typically depend on the average of the eigenvalues of $S$, instead of the minimum as an analogy with minimization problems would suggest. To illustrate our results, we consider the problem of computing mixed Nash equilibria of continuous games. We show that, thanks to partial curvature, conic particle methods -- which optimize over both weights and supports of the mixed strategies -- generically converge faster than fixed-support methods. For min-max games, it is thus beneficial to add degrees of freedom "with curvature": this can be interpreted as yet another benefit of over-parameterization.
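
The role of partial curvature can be illustrated with a small linear toy example (assumed matrices, not from the paper): for $\dot z=-(S+A)z$ with $S$ positive semidefinite but singular and $A$ a generic antisymmetric matrix, the eigenvalues of $S+A$ have positive real parts, so the dynamics converge even though the minimum eigenvalue of $S$ is zero.

# Hedged toy illustration of partial curvature: S is PSD with a large kernel, A is a generic
# antisymmetric interaction part; the spectrum of S + A governs the linearized dynamics.
import numpy as np

rng = np.random.default_rng(4)
n = 4
S = np.zeros((n, n)); S[0, 0] = 1.0                  # partial curvature only: rank-one PSD
B = rng.standard_normal((n, n)); A = (B - B.T) / 2   # generic antisymmetric matrix

eigs = np.linalg.eigvals(S + A)
print("eigenvalues of S + A:", np.round(eigs, 3))
print("min real part:", eigs.real.min())             # positive for generic A despite S being singular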

Double-descent curves in neural networks describe the phenomenon that the generalisation error first descends as the number of parameters increases, then grows once an optimal number of parameters (smaller than the number of data points) is exceeded, and finally descends again in the overparameterized regime. In this paper, we use techniques from random matrix theory to characterize the spectral distribution of the empirical feature covariance matrix as a width-dependent perturbation of the spectrum of the neural network Gaussian process (NNGP) kernel, thus establishing a novel connection between the NNGP literature and the random matrix theory literature in the context of neural networks. Our analytical expression allows us to study the generalisation behavior of the corresponding kernel and GP regression, and provides a new interpretation of the double-descent phenomenon, namely as governed by the discrepancy between the width-dependent empirical kernel and the width-independent NNGP kernel.
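
A hedged numerical sketch of the width-dependent perturbation (assumed single-hidden-layer ReLU architecture, data and widths): the empirical feature covariance kernel at finite width is compared against a very wide Monte Carlo proxy for the NNGP kernel.

# Hedged sketch: spectral deviation of the finite-width empirical kernel from a wide-width
# Monte Carlo stand-in for the NNGP kernel; the deviation shrinks as the width grows.
import numpy as np

rng = np.random.default_rng(5)
n, d = 100, 10
X = rng.standard_normal((n, d)) / np.sqrt(d)

def empirical_kernel(width):
    W = rng.standard_normal((d, width)) / np.sqrt(d)    # NNGP-style 1/sqrt(fan-in) scaling
    Phi = np.maximum(X @ W, 0.0)                        # ReLU features of one hidden layer
    return Phi @ Phi.T / width                          # n x n empirical feature covariance

K_ref = empirical_kernel(50_000)                        # very wide layer: Monte Carlo NNGP proxy
for width in [50, 200, 1000, 5000]:
    K = empirical_kernel(width)
    gap = np.linalg.norm(K - K_ref, 2) / np.linalg.norm(K_ref, 2)
    print(f"width {width:5d}: relative spectral deviation from the NNGP proxy {gap:.3f}")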
