
We obtain bounds to quantify the distributional approximation in the delta method for vector statistics (the sample mean of $n$ independent random vectors) for normal and non-normal limits, measured using smooth test functions. For normal limits, we obtain bounds with the optimal $n^{-1/2}$ rate of convergence, while for a wide class of non-normal limits, which includes quadratic forms amongst others, we achieve bounds with a faster $n^{-1}$ convergence rate. We apply our general bounds to derive explicit bounds quantifying distributional approximations of an estimator of the Bernoulli variance and of several statistics built from sample moments, to obtain order-$n^{-1}$ bounds for the chi-square approximation of a family of rank-based statistics, and to provide an efficient independent derivation of an order-$n^{-1}$ bound for the chi-square approximation of Pearson's statistic. In establishing our general results, we generalise recent results on Stein's method for functions of multivariate normal random vectors to vector-valued functions and to sums of independent random vectors whose components may be dependent. These bounds are widely applicable and of independent interest.
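
For orientation, the classical delta method statement whose accuracy these bounds quantify can be written as follows (the notation here is assumed for illustration: $\bar{W}_n$ is the sample mean of $n$ independent random vectors with mean $\mu$ and asymptotic covariance $\Sigma$, and $g$ is differentiable at $\mu$):
\[
\sqrt{n}\,\bigl(g(\bar{W}_n)-g(\mu)\bigr) \xrightarrow{d} \nabla g(\mu)^{\top} Z, \qquad Z \sim \mathrm{N}_d(0,\Sigma).
\]
When $\nabla g(\mu)=0$, a second-order expansion instead yields the non-normal limit $n\bigl(g(\bar{W}_n)-g(\mu)\bigr) \xrightarrow{d} \tfrac{1}{2} Z^{\top}\nabla^2 g(\mu)\,Z$, a quadratic form; this is the regime in which the faster order-$n^{-1}$ bounds apply.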

Related Content

Over the last several decades, improvements in the fields of analytic combinatorics and computer algebra have made determining the asymptotic behaviour of sequences satisfying linear recurrence relations with polynomial coefficients largely a matter of routine, under assumptions that often hold in practice. The algorithms involved typically take a sequence, encoded by a recurrence relation and initial terms, and return the leading terms in an asymptotic expansion up to a big-O error term. Less studied, however, are effective techniques giving an explicit bound on asymptotic error terms. Among other things, such explicit bounds typically allow the user to automatically prove sequence positivity (an active area of enumerative and algebraic combinatorics) by exhibiting an index when positive leading asymptotic behaviour dominates any error terms. In this article, we present a practical algorithm for computing such asymptotic approximations with rigorous error bounds, under the assumption that the generating series of the sequence is a solution of a differential equation with regular (Fuchsian) dominant singularities. Our algorithm approximately follows the singularity analysis method of Flajolet and Odlyzko, except that all big-O terms involved in the derivation of the asymptotic expansion are replaced by explicit error terms. The computation of the error terms combines analytic bounds from the literature with effective techniques from rigorous numerics and computer algebra. We implement our algorithm in the SageMath computer algebra system and exhibit its use on a variety of applications (including our original motivating example, solution uniqueness in the Canham model for the shape of genus one biomembranes).
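
The paper's implementation targets SageMath; as a toy illustration of the singularity analysis principle it makes effective (this example is ours, not the paper's), the following Python snippet compares the exact Catalan numbers with the leading asymptotic term transferred from the dominant square-root singularity of their generating function $C(z) = (1-\sqrt{1-4z})/(2z)$ at $z=1/4$:

```python
# Singularity analysis, leading term only: a square-root singularity at
# z = 1/4 transfers to coefficient asymptotics c_n ~ 4^n / (sqrt(pi) n^{3/2}).
import math

def catalan(n: int) -> int:
    """Exact n-th Catalan number from the closed-form binomial formula."""
    return math.comb(2 * n, n) // (n + 1)

def catalan_asymptotic(n: int) -> float:
    """Leading term predicted by Flajolet-Odlyzko singularity analysis."""
    return 4.0**n / (math.sqrt(math.pi) * n**1.5)

for n in (10, 100, 500):
    print(n, catalan_asymptotic(n) / catalan(n))  # ratio tends to 1
```

The paper's contribution is to replace the big-O error hidden in such expansions by an explicit, rigorously computed bound, so that statements such as positivity beyond an explicit index can be certified automatically.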

The widely used stochastic gradient methods for minimizing nonconvex composite objective functions require Lipschitz smoothness of the differentiable part, but this requirement does not hold for problem classes such as quadratic inverse problems and the training of neural networks. To address this issue, we investigate a family of stochastic Bregman proximal gradient (SBPG) methods, which require only smooth adaptivity of the differentiable part. SBPG replaces the upper quadratic approximation used in SGD with the Bregman proximity measure, resulting in a better approximation model that captures the non-Lipschitz gradients of the nonconvex objective. We formulate vanilla SBPG and establish its convergence properties in the nonconvex setting without a finite-sum structure. Experimental results on quadratic inverse problems confirm the robustness of SBPG. Moreover, we propose a momentum-based version, MSBPG, and prove that it has improved convergence properties. We apply MSBPG to the training of deep neural networks with a polynomial kernel function, which ensures the smooth adaptivity of the loss function. Experimental results on representative benchmarks demonstrate the effectiveness and robustness of MSBPG in training neural networks. Since the additional computational cost of MSBPG over SGD is negligible in large-scale optimization, MSBPG can potentially serve as a universal open-source optimizer in the future.
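
As a minimal sketch of a Bregman proximal gradient step (assumptions ours: no nonsmooth term, and the polynomial kernel $h(x) = \tfrac{1}{4}\|x\|^4 + \tfrac{1}{2}\|x\|^2$ commonly used for quadratic inverse problems, for which the update has a closed form; this is an illustration, not the paper's code):

```python
import numpy as np

def sbpg_step(x, g, eta):
    """One stochastic Bregman proximal gradient step with the kernel
    h(x) = ||x||^4/4 + ||x||^2/2, i.e. grad h(x) = (||x||^2 + 1) x.

    The update solves grad h(x_new) = grad h(x) - eta * g.  Writing
    p = grad h(x) - eta * g and t = ||x_new||, the norm t is the unique
    real root of t^3 + t - ||p|| = 0, and x_new = p / (t^2 + 1).
    """
    p = (x @ x + 1.0) * x - eta * g
    r = np.linalg.norm(p)
    t = next(z.real for z in np.roots([1.0, 0.0, 1.0, -r]) if abs(z.imag) < 1e-8)
    return p / (t * t + 1.0)

# Toy quadratic inverse problem: minimize (1/4) * sum_i (<a_i,x>^2 - b_i)^2.
rng = np.random.default_rng(0)
A, x_true = rng.standard_normal((200, 10)), rng.standard_normal(10)
b = (A @ x_true) ** 2
x = rng.standard_normal(10)
for _ in range(2000):
    i = rng.integers(len(b))                          # sample one measurement
    g = ((A[i] @ x) ** 2 - b[i]) * (A[i] @ x) * A[i]  # stochastic gradient
    x = sbpg_step(x, g, eta=0.05)
print(np.mean(((A @ x) ** 2 - b) ** 2))               # residual after training
```

The point of the Bregman proximity measure is visible in the update: the quartic kernel majorizes the quartic growth of the objective, which a quadratic SGD model cannot do without a Lipschitz gradient.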

In multivariate time series analysis, coherence measures the linear dependency between two time series at different frequencies. Real data, however, often exhibit nonlinear dependency in the frequency domain, which conventional coherence analysis fails to capture. Quantile coherence, on the other hand, characterizes such nonlinear dependency by defining coherence at a set of quantile levels based on trigonometric quantile regression. Although quantile coherence is a more powerful tool, its estimation remains challenging owing to the high level of noise. This paper introduces a new estimation technique for quantile coherence. The proposed method is semiparametric: it uses the parametric form of the spectrum of a vector autoregressive (VAR) model as an approximation to the quantile spectral matrix, combined with nonparametric smoothing across quantiles. At each fixed quantile level, we obtain the VAR parameters from the quantile periodograms and then compute a preliminary estimate of the quantile coherence from these parameters using the Durbin-Levinson algorithm. Finally, we smooth the preliminary estimates across quantiles with a nonparametric smoother. Numerical results show that the proposed method outperforms nonparametric alternatives, and that quantile-coherence-based clustering of bivariate time series has advantages over clustering based on ordinary VAR coherence. In an application, clusters of financial stocks identified by their quantile coherence with a market benchmark exhibit an intriguing and more accurate structure of diversified investment portfolios, which investors may use to make better decisions.
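
The basic building block, trigonometric quantile regression, is easy to state in code. A sketch (assuming the quantile periodogram convention of Li (2012), with scaling $n/4$ times the squared norm of the trigonometric coefficients; this is not the paper's full semiparametric procedure):

```python
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

def quantile_periodogram(x, omega, tau):
    """Quantile periodogram of series x at frequency omega and level tau,
    via quantile regression on an intercept plus cosine/sine regressors."""
    n = len(x)
    t = np.arange(n)
    Z = np.column_stack([np.ones(n), np.cos(omega * t), np.sin(omega * t)])
    beta = QuantReg(x, Z).fit(q=tau).params
    return n / 4.0 * (beta[1] ** 2 + beta[2] ** 2)

rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 0.1 * np.arange(512)) + rng.standard_normal(512)
print(quantile_periodogram(x, 2 * np.pi * 0.1, tau=0.5))  # large at the driving frequency
```

In the proposed method, periodograms of this kind computed at each quantile level feed the VAR fit, whose Durbin-Levinson recursion yields the preliminary coherence estimate that is finally smoothed across quantiles.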

Approximate message passing (AMP) is a scalable, iterative approach to signal recovery. For structured random measurement ensembles, including independent and identically distributed (i.i.d.) Gaussian and rotationally-invariant matrices, the performance of AMP can be characterized by a scalar recursion called state evolution (SE), conventionally established under a pseudo-Lipschitz (polynomial-growth) smoothness assumption on the test functions. In this work, we extend the SE for AMP to a new class of measurement matrices with independent (not necessarily identically distributed) entries, and to a more general class of test functions, called controlled functions, which are not constrained by polynomial growth: unlike pseudo-Lipschitz functions, controlled functions may grow exponentially. The lack of structure in the assumed measurement ensembles is addressed by a Lindeberg-Feller-type argument, and the lack of smoothness of controlled functions is addressed by a proposed conditioning technique that leverages the empirical statistics of the AMP iterates. Together, these results extend SE to a broader class of measurement ensembles and a new class of test functions.
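
For context, the baseline AMP recursion whose SE analysis is being extended (a standard i.i.d. Gaussian compressed-sensing instance with soft thresholding; the example and parameter choices are ours):

```python
import numpy as np

def soft(u, theta):
    """Soft-thresholding denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

rng = np.random.default_rng(2)
n, m, k = 1000, 500, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.01 * rng.standard_normal(m)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(m)  # empirical proxy for the SE noise level
    x_new = soft(x + A.T @ z, tau)
    # Onsager correction: (1/m) * z * sum of the denoiser's derivatives.
    z = y - A @ x_new + (np.count_nonzero(x_new) / m) * z
    x = x_new
print(np.linalg.norm(x - x0) / np.linalg.norm(x0))  # small relative error
```

State evolution predicts the per-iteration effective noise level (tracked empirically by `tau` above) through a scalar recursion; the paper's contribution is to justify such predictions for independent non-identically distributed ensembles and for exponentially growing (controlled) test functions.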

We consider the problem of approximating $d$-variate functions defined on $\mathbb{R}^d$ that belong to the Hilbert space with a tensor-product-type reproducing Gaussian kernel with constant shape parameter. Within the worst-case setting, we investigate the growth of the information complexity as $d\to\infty$. The asymptotics are obtained both for a fixed error threshold and for an error threshold that tends to zero as $d\to\infty$.
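
In the notation assumed here (shape parameter $\gamma>0$), the reproducing kernel in question is the tensor product
\[
K_d(x,y) \;=\; \prod_{j=1}^{d} \exp\!\bigl(-\gamma^{2}(x_j-y_j)^{2}\bigr), \qquad x,y\in\mathbb{R}^d,
\]
and the information complexity $n(\varepsilon,d)$ is the minimal number of linear functional evaluations sufficient to achieve worst-case error at most $\varepsilon$ over the unit ball of the associated Hilbert space.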

This paper develops a fully distributed differentially private learning algorithm to solve nonsmooth optimization problems. We distribute the Alternating Direction Method of Multipliers (ADMM) to comply with the distributed setting and employ an approximation of the augmented Lagrangian to handle nonsmooth objective functions. Furthermore, we ensure zero-concentrated differential privacy (zCDP) by perturbing the outcome of the computation at each agent with Gaussian noise of decreasing variance. This privacy-preserving method allows for better accuracy than the conventional $(\epsilon, \delta)$-DP and stronger guarantees than the more recent R\'enyi-DP. The resulting fully distributed algorithm has a competitive privacy-accuracy trade-off and handles nonsmooth and not necessarily strongly convex problems. We provide complete theoretical proofs of the privacy guarantees and of the convergence of the algorithm to the exact solution, and we prove under additional assumptions that the algorithm converges at a linear rate. Finally, we observe in simulations that the developed algorithm outperforms existing methods.
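
A minimal sketch of the privacy mechanism alone (assumptions ours: unit sensitivity, a geometric noise schedule, and the standard facts that the Gaussian mechanism with sensitivity $\Delta$ and standard deviation $\sigma$ satisfies $\rho$-zCDP with $\rho = \Delta^2/(2\sigma^2)$ and that zCDP composes additively; this is not the paper's algorithm):

```python
import numpy as np

def perturb(update, sigma, rng):
    """Gaussian perturbation of a local computation before it is shared."""
    return update + rng.normal(0.0, sigma, size=update.shape)

rng = np.random.default_rng(3)
sigma0, decay, T = 2.0, 0.97, 100
x = np.zeros(5)
rho = 0.0
for k in range(T):
    sigma_k = sigma0 * decay**k      # variance-decreasing noise schedule
    x = perturb(x, sigma_k, rng)     # stands in for the agent's ADMM update
    # Accounting with a fixed sensitivity Delta = 1 for illustration; in the
    # algorithm the sensitivity of successive updates also shrinks, which is
    # what keeps the cumulative privacy loss under control.
    rho += 1.0 / (2 * sigma_k**2)
print("cumulative zCDP parameter rho:", rho)
```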

We study a class of stochastic semilinear damped wave equations driven by additive Wiener noise. Owing to the damping term, under appropriate conditions on the nonlinearity, the solution admits a unique invariant distribution. We apply semi-discrete and fully-discrete methods in order to approximate this invariant distribution, using a spectral Galerkin method and an exponential Euler integrator for spatial and temporal discretization respectively. We prove that the considered numerical schemes also admit unique invariant distributions, and we prove error estimates between the approximate and exact invariant distributions, with identification of the orders of convergence. To the best of our knowledge this is the first result in the literature concerning numerical approximation of invariant distributions for stochastic damped wave equations.
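
A standard form of the model (the notation and the exact assumptions on the damping coefficient $\gamma>0$, the nonlinearity $f$ and the Wiener noise $W$ are assumed here for illustration) is
\[
\partial_{tt} u(t,x) + \gamma\,\partial_{t} u(t,x) \;=\; \Delta u(t,x) + f\bigl(u(t,x)\bigr) + \dot{W}(t,x),
\]
rewritten as a first-order system in $X=(u,\partial_t u)$, to which the spectral Galerkin projection is applied in space and the exponential Euler integrator in time; the invariant distributions discussed above are those of the Markov process $X$ and of its discretizations.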

We consider the numerical approximation of second-order semilinear parabolic stochastic partial differential equations, interpreted in the mild sense, which we solve on general two-dimensional domains with a $\mathcal{C}^2$ boundary under homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-type conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time with an explicit Euler-type scheme. For irregular shapes, the required Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method: this yields a nonlinear eigenvalue problem, which we discretize using a boundary element collocation method and solve with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
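
Beyn's contour integral algorithm, the nonlinear eigensolver named above, reduces the eigenvalues of $T(\cdot)$ inside a contour to an ordinary eigenproblem via two probed contour integrals. A self-contained sketch on a toy problem (the boundary-element matrices of the paper are replaced here by a small linear $T$ with known eigenvalues):

```python
import numpy as np

def beyn(T, center, radius, dim, num_probe=4, nodes=64, tol=1e-10, seed=0):
    """Eigenvalues of the nonlinear problem T(z) v = 0 inside a circle, via
    Beyn's method: the moments A_p = (2 pi i)^{-1} * contour integral of
    z^p T(z)^{-1} V dz for p = 0, 1 are formed with the trapezoid rule,
    then reduced to a small eigenproblem by an SVD-based rank truncation."""
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((dim, num_probe))
    A0 = np.zeros((dim, num_probe), dtype=complex)
    A1 = np.zeros((dim, num_probe), dtype=complex)
    for j in range(nodes):
        w = np.exp(2j * np.pi * j / nodes)   # point on the unit circle
        z = center + radius * w
        S = np.linalg.solve(T(z), V)
        A0 += radius * w / nodes * S
        A1 += radius * w / nodes * z * S
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))          # numerical rank = eigenvalue count
    B = U[:, :k].conj().T @ A1 @ Wh[:k].conj().T / s[:k]
    return np.linalg.eigvals(B)

# Toy test: T(z) = A - z*I with spectrum {1, 2, 5}; circle around {1, 2}.
A = np.diag([1.0, 2.0, 5.0])
print(np.sort_complex(beyn(lambda z: A - z * np.eye(3), 1.5, 1.0, dim=3)))
```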

In this paper, we consider the problem of joint parameter estimation for drift and diffusion coefficients of a stochastic McKean-Vlasov equation and for the associated system of interacting particles. The analysis is provided in a general framework, as both coefficients depend on the solution of the process and on the law of the solution itself. Starting from discrete observations of the interacting particle system over a fixed interval $[0, T]$, we propose a contrast function based on a pseudo likelihood approach. We show that the associated estimator is consistent when the discretization step ($\Delta_n$) and the number of particles ($N$) satisfy $\Delta_n \rightarrow 0$ and $N \rightarrow \infty$, and asymptotically normal when additionally the condition $\Delta_n N \rightarrow 0$ holds.
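
A contrast of the standard Euler-scheme Gaussian pseudo-likelihood form (stated here with assumed notation, and possibly differing from the paper's exact definition in details: $X^i$ is the $i$-th particle, $\mu^N_t$ the empirical measure of the particle system at time $t$, and $t_j = j\Delta_n$) is
\[
L_{n,N}(\theta) \;=\; \sum_{i=1}^{N}\sum_{j=0}^{n-1}\left[\frac{\bigl(X^i_{t_{j+1}}-X^i_{t_j}-\Delta_n\, b_\theta\bigl(X^i_{t_j},\mu^N_{t_j}\bigr)\bigr)^{2}}{\Delta_n\,\sigma^{2}_{\theta}\bigl(X^i_{t_j},\mu^N_{t_j}\bigr)} + \log \sigma^{2}_{\theta}\bigl(X^i_{t_j},\mu^N_{t_j}\bigr)\right],
\]
minimised over $\theta$ to define the estimator whose consistency and asymptotic normality are established.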

In stochastic zeroth-order optimization, a problem of practical relevance is understanding how to fully exploit the local geometry of the underlying objective function. We consider a fundamental setting in which the objective function is quadratic, and provide the first tight characterization of the optimal Hessian-dependent sample complexity. Our contribution is twofold. First, from an information-theoretic point of view, we prove tight lower bounds on Hessian-dependent complexities by introducing a concept called energy allocation, which captures the interaction between the searching algorithm and the geometry of objective functions. A matching upper bound is obtained by solving the optimal energy spectrum. Then, algorithmically, we show the existence of a Hessian-independent algorithm that universally achieves the asymptotic optimal sample complexities for all Hessian instances. The optimal sample complexities achieved by our algorithm remain valid for heavy-tailed noise distributions; this robustness is enabled by a truncation method.
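
A sketch of the two ingredients named above, a zeroth-order (two-point) gradient estimator and truncation of the noisy evaluations (the construction and all parameter choices here are ours, not the paper's algorithm):

```python
import numpy as np

def zo_gradient(f, x, delta, clip, rng):
    """Two-point zeroth-order gradient estimate with truncated evaluations."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                       # random direction on the sphere
    fp = np.clip(f(x + delta * u), -clip, clip)  # truncation handles heavy tails
    fm = np.clip(f(x - delta * u), -clip, clip)
    return x.size * (fp - fm) / (2 * delta) * u

rng = np.random.default_rng(4)
H = np.diag([10.0, 1.0])         # quadratic objective with anisotropic Hessian
f = lambda x: 0.5 * x @ H @ x + 0.1 * rng.standard_t(df=3)  # heavy-tailed noise
x = np.array([1.0, 1.0])
for _ in range(3000):
    x = x - 0.02 * zo_gradient(f, x, delta=0.1, clip=25.0, rng=rng)
print(x)  # close to the minimizer at the origin, up to estimator noise
```

The Hessian-dependent complexity question is precisely how the number of such queries needed to reach a given accuracy depends on $H$; the paper characterizes this dependence tightly and shows that a single Hessian-independent algorithm attains it.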
