
This paper addresses the regression problem of reconstructing functions that are observed with superimposed errors at random locations. We address the problem in reproducing kernel Hilbert spaces. It is demonstrated that the estimator, which is often derived by employing Gaussian random fields, converges in the mean norm of the reproducing kernel Hilbert space to the conditional expectation, which in turn implies local and uniform convergence of the function estimator. By preselecting the kernel, the problem does not suffer from the curse of dimensionality. The paper analyzes the statistical properties of the estimator. We derive convergence properties and provide a conservative rate of convergence for increasing sample sizes.
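As a rough illustration of this kind of RKHS regression estimator, the sketch below fits a kernel ridge regression with a Gaussian kernel to noisy observations at random locations. All parameter values (bandwidth, regularization, sample size) are illustrative choices, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(X, Y, bw):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bw * bw))

def krr_fit(X, y, lam, bw):
    """Regularized RKHS regression: solve (K + n*lam*I) alpha = y."""
    n = len(X)
    K = gaussian_kernel(X, X, bw)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))                           # random design locations
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)   # observations with superimposed errors

bw, lam = 0.1, 1e-4
alpha = krr_fit(X, y, lam, bw)

# Evaluate the estimator away from the boundary and compare to the truth
X_new = np.linspace(0.1, 0.9, 50)[:, None]
pred = gaussian_kernel(X_new, X, bw) @ alpha
err = np.max(np.abs(pred - np.sin(2 * np.pi * X_new[:, 0])))
```

Because the kernel is fixed in advance, the same code runs unchanged for inputs of any dimension; only the kernel evaluations change.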

Related Content

This paper deals with the kernel-based approximation of a multivariate periodic function by interpolation at the points of an integration lattice -- a setting that, as pointed out by Zeng, Leung, Hickernell (MCQMC2004, 2006) and Zeng, Kritzer, Hickernell (Constr. Approx., 2009), allows fast evaluation by fast Fourier transform, so avoiding the need for a linear solver. The main contribution of the paper is the application to the approximation problem for uncertainty quantification of elliptic partial differential equations, with the diffusion coefficient given by a random field that is periodic in the stochastic variables, in the model proposed recently by Kaarnioja, Kuo, Sloan (SIAM J. Numer. Anal., 2020). The paper gives a full error analysis, and full details of the construction of lattices needed to ensure a good (but inevitably not optimal) rate of convergence and an error bound independent of dimension. Numerical experiments support the theory.
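The fast-evaluation point can be sketched in a few lines: on a rank-1 lattice the kernel matrix of a shift-invariant periodic kernel is circulant, so the interpolation coefficients come from one FFT division instead of a dense linear solve. The kernel (a simple Korobov-type kernel built from the Bernoulli polynomial $B_2$) and the Fibonacci generating vector below are illustrative stand-ins, not the CBC-constructed lattices of the paper:

```python
import numpy as np

def per_kernel(t, gamma=1.0):
    """1-periodic kernel 1 + gamma*2*pi^2*B2({t}), with B2(x) = x^2 - x + 1/6."""
    f = t % 1.0
    return 1.0 + gamma * 2.0 * np.pi**2 * (f * f - f + 1.0 / 6.0)

n, d = 55, 2
z = np.array([1, 34])                          # Fibonacci generating vector for n = 55
i = np.arange(n)
pts = (i[:, None] * z[None, :] % n) / n        # rank-1 lattice points in [0,1)^2

f_vals = np.sin(2 * np.pi * pts[:, 0]) * np.cos(2 * np.pi * pts[:, 1])

# K[i,j] = prod_m per_kernel((i-j)*z_m/n mod 1) depends only on (i-j) mod n,
# so K is circulant and the solve reduces to FFTs.
first_col = np.prod(per_kernel(pts), axis=1)   # column j = 0, i.e. k(x_i - x_0)
coef = np.real(np.fft.ifft(np.fft.fft(f_vals) / np.fft.fft(first_col)))

# Sanity check against the dense solve the FFT replaces
diff = pts[:, None, :] - pts[None, :, :]
K = np.prod(per_kernel(diff), axis=2)
coef_direct = np.linalg.solve(K, f_vals)
```

The FFT route costs $\mathcal{O}(n \log n)$ versus $\mathcal{O}(n^3)$ for the dense solve, which is the structural advantage the paper exploits.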

We provide explicit bounds on the number of sample points required to estimate tangent spaces and intrinsic dimensions of (smooth, compact) Euclidean submanifolds via local principal component analysis. Our approach directly estimates covariance matrices locally, which simultaneously allows estimating both the tangent spaces and the intrinsic dimension of a manifold. The key arguments involve a matrix concentration inequality, a Wasserstein bound for flattening a manifold, and a Lipschitz relation for the covariance matrix with respect to the Wasserstein distance.
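A minimal numerical sketch of local PCA on a simple manifold (the unit circle embedded in $\mathbb{R}^3$): estimate the local covariance from neighbors of a base point, read off the intrinsic dimension from the eigenvalue gap, and take the top eigenvector as the tangent direction. The radius and threshold are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2 * np.pi, 2000)
pts = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)

p = np.array([1.0, 0.0, 0.0])                       # base point on the circle
r = 0.2                                             # localization radius
nbrs = pts[np.linalg.norm(pts - p, axis=1) < r]

C = np.cov(nbrs.T)                                  # local covariance estimate
evals, evecs = np.linalg.eigh(C)                    # ascending eigenvalues

# Intrinsic dimension: eigenvalues above a gap threshold (curvature and
# ambient directions contribute only higher-order terms in r)
dim_est = int((evals > 0.05 * evals.max()).sum())
tangent = evecs[:, -1]                              # top eigenvector ~ tangent (0, ±1, 0)
```

The same covariance computation serves both tasks at once, mirroring the approach of the paper: the count of dominant eigenvalues gives the dimension and the corresponding eigenvectors span the tangent space.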

In this short paper, we study the simulation of a large system of stochastic processes subject to a common driving noise and fast mean-reverting stochastic volatilities. This model may be used to describe the firm values of a large pool of financial entities. We then seek an efficient estimator for the probability of a default, indicated by a firm value below a certain threshold, conditional on common factors. We consider approximations where coefficients containing the fast volatility are replaced by certain ergodic averages (a type of law of large numbers), and study a correction term (of central limit theorem-type). The accuracy of these approximations is assessed by numerical simulation of pathwise losses and the estimation of payoff functions as they appear in basket credit derivatives.
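As a stylized stand-in for this setting (constant volatility replacing the fast mean-reverting dynamics), the sketch below simulates a pool of firm values driven by one common factor and checks the Monte Carlo conditional default probability against its closed form. All parameter values are illustrative:

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(2)
N = 200_000                      # number of firms in the pool
x0, rho, B = 0.0, 0.6, -1.0      # initial value, common-factor loading, default barrier
W0 = 0.5                         # one fixed realization of the common factor

# Terminal firm values in a one-factor Gaussian model: common noise plus
# independent idiosyncratic noise per firm
X_T = x0 + rho * W0 + np.sqrt(1.0 - rho**2) * rng.standard_normal(N)

loss_mc = np.mean(X_T < B)                                       # empirical conditional loss
loss_exact = Phi((B - x0 - rho * W0) / np.sqrt(1.0 - rho**2))    # closed form, given W0
```

Conditioning on the common factor is what makes the per-firm defaults independent, so the pool average concentrates around the closed-form conditional probability as $N$ grows.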

We consider the problem of making inference about the population mean of an outcome variable subject to nonignorable missingness. By leveraging a so-called shadow variable for the outcome, we propose a novel condition that ensures nonparametric identification of the outcome mean, although the full data distribution is not identified. The identifying condition requires the existence of a function as a solution to a representer equation that connects the shadow variable to the outcome mean. Under this condition, we use sieves to nonparametrically solve the representer equation and propose an estimator which avoids modeling the propensity score or the outcome regression. We establish the asymptotic properties of the proposed estimator. We also show that the estimator is locally efficient and attains the semiparametric efficiency bound for the shadow variable model under certain regularity conditions. We illustrate the proposed approach via simulations and a real data application on home pricing.

The class of Gibbs point processes (GPP) is a large class of spatial point processes in the sense that they can model both clustered and repulsive point patterns. They are specified by their conditional intensity, which, for a point pattern $\mathbf{x}$ and a location $u$, is, roughly speaking, the probability that an event occurs in an infinitesimal ball around $u$ given that the rest of the configuration is $\mathbf{x}$. The simplest, most natural, and easiest-to-interpret class of models is the class of pairwise interaction point processes, where the conditional intensity depends on the number of points and the pairwise distances between them. Estimating this function nonparametrically has almost never been considered in the literature. We tackle this question and propose an orthogonal series estimation procedure of the log pairwise interaction function. Under some conditions provided on the spatial GPP and on the basis system, we show that this orthogonal series estimator is consistent and asymptotically normal. The estimation procedure is simple, fast and completely data-driven. We show its efficiency through simulation experiments and we apply it to three datasets.

We study a numerical approximation for a nonlinear variable-order fractional differential equation via an integral equation method. Due to the lack of monotonicity of the discretization coefficients of the variable-order fractional derivative in standard approximation schemes, existing numerical analysis techniques do not apply directly. By an approximate inversion technique, the proposed model is transformed into a Volterra integral equation of the second kind, based on which a collocation method under uniform or graded mesh is developed and analyzed. In particular, the error estimates improve the existing results by establishing a sharper, consistent mesh grading parameter and characterizing the convergence rates in terms of the initial value of the variable order, which demonstrates its critical role in determining the smoothness of the solutions and thus the numerical accuracy.
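To illustrate the second-kind Volterra framework (with a plain constant-order kernel rather than the paper's variable-order fractional one), the sketch below time-steps $u(t) = f(t) + \int_0^t K(t,s)\,u(s)\,ds$ with the trapezoidal rule on a uniform mesh and benchmarks it against a known solution:

```python
import numpy as np

def volterra2_trapezoid(f, K, T=1.0, n=200):
    """Forward time-stepping for the second-kind Volterra equation
    u(t) = f(t) + int_0^t K(t,s) u(s) ds, using the trapezoidal rule."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    u = np.empty(n + 1)
    u[0] = f(t[0])
    for m in range(1, n + 1):
        w = np.full(m + 1, h)
        w[0] = w[-1] = h / 2.0                       # trapezoid weights on [0, t_m]
        rhs = f(t[m]) + np.dot(w[:-1], K(t[m], t[:m]) * u[:m])
        u[m] = rhs / (1.0 - w[-1] * K(t[m], t[m]))   # implicit in the newest value
    return t, u

# Benchmark with a known solution: u(t) = e^t solves u = 1 + int_0^t u(s) ds
t, u = volterra2_trapezoid(lambda s: np.ones_like(s, dtype=float),
                           lambda tt, s: np.ones_like(s, dtype=float))
err = np.max(np.abs(u - np.exp(t)))
```

Each step is explicit up to a scalar division because only the newest value $u_m$ appears implicitly; a graded mesh would simply replace `np.linspace` with clustered nodes near $t = 0$.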

In this study, we extend the three-regime bubble model of Pang et al. (2021) to allow a fourth regime, in which the process follows a unit root after recovery. We provide asymptotic and finite-sample justification of the consistency of the collapse date estimator in the two-regime AR(1) model. The consistency allows us to split the sample before and after the date of collapse and to estimate the date of exuberance and the date of recovery separately. We also find that the limiting behavior of the recovery date estimator varies depending on the extent of explosiveness and recovery.
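A toy version of collapse date estimation in a two-regime AR(1) model: simulate a mildly explosive regime followed by a collapsing one, then estimate the break date by minimizing the combined residual sum of squares over all candidate splits. The coefficients, noise scale, and trimming are illustrative, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(3)
T, tau = 200, 120                        # sample size and true collapse date
y = np.zeros(T)
y[0] = 1.0
for t in range(1, T):
    phi = 1.05 if t <= tau else 0.7      # mildly explosive, then collapsing regime
    y[t] = phi * y[t - 1] + 0.05 * rng.standard_normal()

def ssr_ar1(x):
    """Residual sum of squares of an OLS AR(1) fit on the segment x."""
    phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    r = x[1:] - phi_hat * x[:-1]
    return np.dot(r, r)

# Least-squares two-regime split: the estimated collapse date minimizes
# the total SSR of separate AR(1) fits before and after the split
cands = range(20, T - 20)
tau_hat = min(cands, key=lambda k: ssr_ar1(y[:k + 1]) + ssr_ar1(y[k:]))
```

Consistency of `tau_hat` is what licenses the sample split: once the collapse date is pinned down, the exuberance and recovery dates can be estimated separately on the two subsamples.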

We introduce kernel thinning, a new procedure for compressing a distribution $\mathbb{P}$ more effectively than i.i.d. sampling or standard thinning. Given a suitable reproducing kernel $\mathbf{k}$ and $\mathcal{O}(n^2)$ time, kernel thinning compresses an $n$-point approximation to $\mathbb{P}$ into a $\sqrt{n}$-point approximation with comparable worst-case integration error across the associated reproducing kernel Hilbert space. With high probability, the maximum discrepancy in integration error is $\mathcal{O}_d(n^{-\frac{1}{2}}\sqrt{\log n})$ for compactly supported $\mathbb{P}$ and $\mathcal{O}_d(n^{-\frac{1}{2}} \sqrt{(\log n)^{d+1}\log\log n})$ for sub-exponential $\mathbb{P}$ on $\mathbb{R}^d$. In contrast, an equal-sized i.i.d. sample from $\mathbb{P}$ suffers $\Omega(n^{-\frac14})$ integration error. Our sub-exponential guarantees resemble the classical quasi-Monte Carlo error rates for uniform $\mathbb{P}$ on $[0,1]^d$ but apply to general distributions on $\mathbb{R}^d$ and a wide range of common kernels. We use our results to derive explicit non-asymptotic maximum mean discrepancy bounds for Gaussian, Mat\'ern, and B-spline kernels and present two vignettes illustrating the practical benefits of kernel thinning over i.i.d. sampling and standard Markov chain Monte Carlo thinning, in dimensions $d=2$ through $100$.
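The compression idea can be illustrated with a related (and simpler) greedy procedure, kernel herding, rather than the kernel thinning algorithm itself: select a $\sqrt{n}$-point coreset that drives down the maximum mean discrepancy (MMD) to the full sample, and compare it with an equal-sized i.i.d. subsample. Kernel choice and sizes are illustrative:

```python
import numpy as np

def gauss_K(X, Y, h=1.0):
    """Gaussian kernel matrix."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h))

rng = np.random.default_rng(4)
n, m = 400, 20                           # compress n points down to m = sqrt(n)
X = rng.standard_normal((n, 2))
K = gauss_K(X, X)
mu = K.mean(axis=1)                      # kernel mean embedding of the full sample

# Greedy herding: each step adds the point that most reduces MMD to the full sample
chosen = []
for t in range(m):
    scores = mu - K[:, chosen].sum(axis=1) / (t + 1)
    scores[chosen] = -np.inf             # select without replacement
    chosen.append(int(np.argmax(scores)))

def mmd2(idx):
    """Squared MMD between the subset idx and the full empirical sample."""
    return K.mean() - 2.0 * K[idx, :].mean() + K[np.ix_(idx, idx)].mean()

rand_idx = list(rng.choice(n, size=m, replace=False))
```

The curated subset typically attains a markedly smaller MMD than the random one, which is the qualitative gap (here without the paper's $n^{-1/2}$ versus $n^{-1/4}$ rates) that kernel thinning quantifies.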

High-order clustering aims to identify heterogeneous substructures in multiway datasets that arise commonly in neuroimaging, genomics, social network studies, etc. The non-convex and discontinuous nature of this problem poses significant challenges in both statistics and computation. In this paper, we propose a tensor block model and two computationally efficient methods, the \emph{high-order Lloyd algorithm} (HLloyd) and \emph{high-order spectral clustering} (HSC), for high-order clustering. Convergence guarantees and statistical optimality are established for the proposed procedures under a mild sub-Gaussian noise assumption. Under the Gaussian tensor block model, we completely characterize the statistical-computational trade-off for achieving high-order exact clustering based on three different signal-to-noise ratio regimes. The analysis relies on new techniques of high-order spectral perturbation analysis and a "singular-value-gap-free" error bound in tensor estimation, which are substantially different from the matrix spectral analyses in the literature. Finally, we show the merits of the proposed procedures via extensive experiments on both synthetic and real datasets.
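A highly simplified one-mode version of the spectral step: generate an order-3 tensor with two clusters along mode 1, matricize along that mode, and split the rows by the sign of the top left singular vector. The full HSC procedure combines all modes via HOSVD and runs k-means; the block-mean values and sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2, n3 = 60, 30, 30
labels = np.repeat([0, 1], n1 // 2)                        # true mode-1 clusters
means = np.where(labels == 0, 0.5, -0.5)                   # per-entry cluster mean signal
T = means[:, None, None] + rng.standard_normal((n1, n2, n3))

M = T.reshape(n1, n2 * n3)                                 # mode-1 matricization
U, s, Vt = np.linalg.svd(M, full_matrices=False)
est = (U[:, 0] < 0).astype(int)                            # split by sign of top singular vector
acc = max(np.mean(est == labels), np.mean(est != labels))  # accuracy up to label swap
```

With the signal singular value well above the noise level of the matricization, the two clusters separate cleanly; the interesting regimes in the paper are precisely those where this gap shrinks.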

This paper develops a general methodology to conduct statistical inference for observations indexed by multiple sets of entities. We propose a novel multiway empirical likelihood statistic that converges to a chi-square distribution in the non-degenerate case, where the corresponding Hoeffding-type decomposition is dominated by linear terms. Our methodology is related to the notion of jackknife empirical likelihood, but the leave-out pseudo-values are constructed by leaving out columns or rows. We further develop a modified version of our multiway empirical likelihood statistic, which converges to a chi-square distribution regardless of the degeneracy, and discover its desirable higher-order properties compared to the t-ratio based on the conventional Eicker-White type variance estimator. The proposed methodology is illustrated by several important statistical problems, such as bipartite networks, two-stage sampling, generalized estimating equations, and three-way observations.
