
Covariance functions are at the core of spatial statistics, stochastic processes, machine learning, and many other theoretical and applied disciplines. The properties of the covariance function at small and large distances determine the geometric attributes of the associated Gaussian random field. Covariance functions that allow one to specify both local and global properties are certainly in demand. This paper provides a method for finding new classes of covariance functions with such properties. We term these models hybrid, as they are obtained as scale mixtures of piecewise covariance kernels against measures that are themselves defined as piecewise linear combinations of parametric families of measures. To illustrate our methodology, we provide new families of covariance functions that are shown to be richer than other well-known families proposed in the earlier literature. More precisely, we derive a hybrid Cauchy-Mat\'ern model, which allows us to index both long memory and mean square differentiability of the random field, and a hybrid Hole-Effect-Mat\'ern model, which is capable of attaining negative values (hole effect) while preserving the local attributes of the traditional Mat\'ern model. Our findings are illustrated through numerical studies with both simulated and real data.
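
As a hedged illustration of the two ingredients named above, the sketch below evaluates a standard Mat\'ern covariance (whose smoothness parameter $\nu$ governs mean square differentiability, i.e., local behaviour) and a generalized Cauchy covariance (whose tail parameter governs long memory, i.e., global behaviour). The parameterizations are the usual textbook ones; the paper's hybrid scale-mixture construction itself is not reproduced here.

```python
import numpy as np
from scipy.special import kv, gamma

def matern_cov(h, nu=1.5, rho=1.0, sigma2=1.0):
    """Matern covariance; nu controls mean-square differentiability (local behaviour)."""
    h = np.asarray(h, dtype=float)
    scaled = np.sqrt(2.0 * nu) * h / rho
    out = np.full_like(h, sigma2)                       # C(0) = sigma^2
    nz = scaled > 0
    out[nz] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * scaled[nz] ** nu * kv(nu, scaled[nz])
    return out

def cauchy_cov(h, alpha=1.0, beta=0.5, rho=1.0, sigma2=1.0):
    """Generalized Cauchy covariance; small beta gives slow, long-memory tail decay."""
    h = np.asarray(h, dtype=float)
    return sigma2 * (1.0 + (h / rho) ** alpha) ** (-beta / alpha)

lags = np.linspace(0.0, 10.0, 201)
print(matern_cov(lags)[:5])
print(cauchy_cov(lags)[:5])
```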

Related content

In this paper, we establish novel data-dependent upper bounds on the generalization error through the lens of a "variable-size compressibility" framework that we introduce here. In this framework, the generalization error of an algorithm is linked to a variable-size 'compression rate' of its input data. This is shown to yield bounds that depend on the empirical measure of the given input data at hand, rather than on its unknown distribution. The generalization bounds that we establish are tail bounds, tail bounds on the expectation, and in-expectation bounds. Moreover, our framework is shown to also allow the derivation of general bounds on any function of the input data and output hypothesis random variables. In particular, these general bounds are shown to subsume, and possibly improve upon, several existing PAC-Bayes and data-dependent intrinsic-dimension-based bounds, which are recovered as special cases, thus unveiling the unifying character of our approach. For instance, a new data-dependent intrinsic-dimension-based bound is established, which connects the generalization error to the optimization trajectories and reveals various interesting connections with the rate-distortion dimension of a process, the R\'enyi information dimension of a process, and the metric mean dimension.

We propose a data-driven way to reduce the noise in covariance matrices of nonstationary systems. In the case of stationary systems, asymptotic approaches have been proved to converge to the optimal solutions. Such methods produce eigenvalues that are highly dependent on the inputs, as common sense would suggest. Our approach instead uses a set of eigenvalues totally independent of the inputs, which encode the long-term averaging of the influence of the future on present eigenvalues. Such an influence can be the predominant factor in nonstationary systems. Using real and synthetic data, we show that our data-driven method outperforms optimal methods designed for stationary systems in the filtering of both the covariance matrix and its inverse, as illustrated by financial portfolio variance minimization, which makes our method generically relevant to many problems of multivariate inference.
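
The abstract mentions portfolio variance minimization as a test bed. The sketch below is a minimal, generic baseline: it filters a sample correlation matrix by clipping eigenvalues below the Marchenko-Pastur edge (a standard stationary-case recipe, not the paper's data-driven nonstationary method) and then forms the global minimum-variance portfolio $w \propto C^{-1}\mathbf{1}$.

```python
import numpy as np

def clip_eigenvalues(returns):
    """Baseline covariance filter: average out eigenvalues below the Marchenko-Pastur edge.
    (A stationary-case baseline, not the paper's data-driven nonstationary method.)"""
    T, N = returns.shape
    X = returns - returns.mean(axis=0)
    C = np.corrcoef(X, rowvar=False)
    lam, V = np.linalg.eigh(C)
    lam_max = (1.0 + np.sqrt(N / T)) ** 2           # MP upper edge for aspect ratio N/T
    noise = lam < lam_max
    if noise.any():
        lam[noise] = lam[noise].mean()              # replace "noise" eigenvalues by their average
    C_clean = (V * lam) @ V.T
    s = X.std(axis=0)
    return C_clean * np.outer(s, s)                 # back to a covariance matrix

def min_variance_weights(cov):
    """Global minimum-variance portfolio w = C^{-1} 1 / (1' C^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
returns = rng.standard_normal((500, 50)) * 0.01     # synthetic returns: 500 days, 50 assets
cov = clip_eigenvalues(returns)
w = min_variance_weights(cov)
print(w.sum(), float(w @ cov @ w))
```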

Semiparametric models are useful in econometrics, the social sciences, and medical applications. In this paper, a new estimator based on least squares is proposed to estimate the direction of the unknown parameters in semiparametric models. The proposed estimator is consistent and has an asymptotic distribution under mild conditions, without knowledge of the form of the link function. Simulations show that the proposed estimator is significantly superior to the maximum score estimator of Manski (1975) for binary response variables. When the error term follows a long-tailed distribution or a distribution with infinite moments, the proposed estimator performs well. Its application is illustrated with data on the export participation of manufacturers in Guangdong.
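
As a minimal sketch of the underlying idea (not the paper's exact estimator), the code below recovers the direction of the index coefficient in a binary single-index model by ordinary least squares followed by normalization; under a Gaussian design the least-squares coefficient is proportional to the true direction, which is all that is identified without knowing the link function.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 5
beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
beta /= np.linalg.norm(beta)                      # only the direction is identified

X = rng.standard_normal((n, p))
# Binary response through an unknown link; heavy-tailed error (Student t with 2 df).
y = (X @ beta + rng.standard_t(df=2, size=n) > 0).astype(float)

# Least-squares fit of the centered response; normalize to get a direction estimate.
coef, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
direction = coef / np.linalg.norm(coef)
print("cosine similarity with true direction:", float(abs(direction @ beta)))
```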

In this paper, we study the generalization performance of min $\ell_2$-norm overfitting solutions for the neural tangent kernel (NTK) model of a two-layer neural network with ReLU activation that has no bias term. We show that, depending on the ground-truth function, the test error of overfitted NTK models exhibits characteristics that are different from the "double-descent" of other overparameterized linear models with simple Fourier or Gaussian features. Specifically, for a class of learnable functions, we provide a new upper bound of the generalization error that approaches a small limiting value, even when the number of neurons $p$ approaches infinity. This limiting value further decreases with the number of training samples $n$. For functions outside of this class, we provide a lower bound on the generalization error that does not diminish to zero even when $n$ and $p$ are both large.
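
The sketch below illustrates, under my own modelling assumptions, what a min $\ell_2$-norm overfitted NTK solution looks like in code: the feature map is the first-layer gradient of a bias-free two-layer ReLU network with fixed random weights, and the interpolating solution is obtained with a pseudoinverse. It is only meant to make the object of study concrete, not to reproduce the paper's bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 100, 5, 2000                          # n samples, input dimension d, p neurons

W = rng.standard_normal((p, d))                 # random first-layer weights, kept fixed

def ntk_features(X):
    # Gradient features of a bias-free two-layer ReLU net w.r.t. its first layer:
    # phi_j(x) = x * 1{w_j^T x > 0} / sqrt(p), stacked over neurons j.
    act = (X @ W.T > 0).astype(float)           # n x p activation pattern
    return (act[:, :, None] * X[:, None, :]).reshape(X.shape[0], p * d) / np.sqrt(p)

X = rng.standard_normal((n, d))
y = np.maximum(X[:, 0], 0.0)                    # a simple ground-truth function

Phi = ntk_features(X)
theta = np.linalg.pinv(Phi) @ y                 # min l2-norm interpolating solution

X_test = rng.standard_normal((1000, d))
y_test = np.maximum(X_test[:, 0], 0.0)
mse = np.mean((ntk_features(X_test) @ theta - y_test) ** 2)
print("test MSE of the min-norm NTK solution:", float(mse))
```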

This paper deals with the scenario approach to robust optimization. This relies on a random sampling of the possibly infinite number of constraints induced by uncertainties in the parameters of an optimization problem. Solving the resulting random program yields a solution for which the quality is measured in terms of the probability of violating the constraints for a random value of the uncertainties, typically unseen before. Another central issue is the determination of the sample complexity, i.e., the number of random constraints (or scenarios) that one must consider in order to guarantee a certain level of reliability. In this paper, we introduce the notion of margin to improve upon standard results in this field. In particular, using tools from statistical learning theory, we show that the sample complexity of a class of random programs does not explicitly depend on the number of variables. In addition, within the considered class, that includes polynomial constraints among others, this result holds for both convex and nonconvex instances with the same level of guarantees. We also derive a posteriori bounds on the probability of violation and sketch a regularization approach that could be used to improve the reliability of computed solutions on the basis of these bounds.
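
A minimal, hedged illustration of the scenario idea on a one-dimensional toy program (not the paper's margin-based analysis): sample $N$ scenarios of an uncertain constraint, solve the sampled program, and estimate the probability of violation on fresh scenarios.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy robust program: minimize x subject to x >= a(delta) for an uncertain parameter delta.
# The scenario approach replaces the infinite constraint set by N sampled scenarios.
N = 200
a_scen = rng.normal(loc=0.0, scale=1.0, size=N)    # sampled uncertain constraints
x_star = a_scen.max()                              # scenario solution: tightest sampled constraint

# A posteriori estimate of the violation probability on fresh, unseen scenarios.
a_new = rng.normal(loc=0.0, scale=1.0, size=100_000)
violation = np.mean(a_new > x_star)
print(f"scenario solution x* = {x_star:.3f}, estimated violation probability = {violation:.4f}")
```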

We consider the extreme eigenvalues of the sample covariance matrix $Q=YY^*$ under the generalized elliptical model $Y=\Sigma^{1/2}XD$. Here $\Sigma$ is a bounded $p \times p$ positive definite deterministic matrix representing the population covariance structure, $X$ is a $p \times n$ random matrix containing either independent columns sampled from the unit sphere in $\mathbb{R}^p$ or i.i.d. centered entries with variance $n^{-1}$, and $D$ is a diagonal random matrix containing i.i.d. entries that is independent of $X$. Such a model finds important applications in statistics and machine learning. In this paper, assuming that $p$ and $n$ are comparably large, we prove that the extreme edge eigenvalues of $Q$ can asymptotically have several types of distributions depending on $\Sigma$ and $D$. These distributions include Gumbel, Fr\'echet, Weibull, Tracy-Widom, Gaussian, and their mixtures. On the one hand, when the random variables in $D$ have unbounded support, the edge eigenvalues of $Q$ can have either a Gumbel or a Fr\'echet distribution depending on the tail decay of $D$. On the other hand, when the random variables in $D$ have bounded support, under some mild regularity assumptions on $\Sigma$, the edge eigenvalues of $Q$ can exhibit Weibull, Tracy-Widom, Gaussian distributions or their mixtures. Based on our theoretical results, we consider two important applications. First, we propose statistics and a procedure to detect and estimate the possible spikes for elliptically distributed data. Second, in the context of a factor model, using a multiplier bootstrap procedure that selects the weights in $D$, we propose a new algorithm to infer and estimate the number of factors in the factor model. Numerical simulations confirm the accuracy and power of our proposed methods and illustrate their better performance compared to some existing methods in the literature.
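
To make the model concrete, the sketch below simulates $Q = YY^*$ with $Y = \Sigma^{1/2}XD$ for a diagonal $\Sigma$ and heavy-tailed (unbounded-support) entries in $D$, and records the largest eigenvalue over repetitions. It only simulates the setting; it is not the paper's spike-detection or factor-number procedures.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 200, 400, 300

Sigma = np.diag(1.0 + 0.5 * rng.random(p))         # a simple bounded population covariance
Sigma_half = np.sqrt(Sigma)                         # elementwise sqrt is valid since Sigma is diagonal

top_eigs = []
for _ in range(reps):
    X = rng.standard_normal((p, n)) / np.sqrt(n)    # i.i.d. entries with variance 1/n
    D = np.diag(rng.pareto(3.0, size=n) + 1.0)      # heavy-tailed radial weights (unbounded support)
    Y = Sigma_half @ X @ D
    Q = Y @ Y.T
    top_eigs.append(np.linalg.eigvalsh(Q)[-1])      # largest (edge) eigenvalue

top_eigs = np.array(top_eigs)
print("mean / max of the largest eigenvalue over repetitions:",
      float(top_eigs.mean()), float(top_eigs.max()))
```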

In the context of state-space models, skeleton-based smoothing algorithms rely on a backward sampling step which by default has $\mathcal{O}(N^2)$ complexity (where $N$ is the number of particles). Existing improvements in the literature are unsatisfactory: a popular rejection-sampling-based approach, as we shall show, may lead to badly behaved execution times; another rejection sampler with stopping lacks a complexity analysis; yet another MCMC-inspired algorithm comes with no stability guarantee. We provide several results that close these gaps. In particular, we prove a novel non-asymptotic stability theorem, thus enabling smoothing with truly linear complexity and adequate theoretical justification. We propose a general framework which unites most skeleton-based smoothing algorithms in the literature and allows us to simultaneously prove their convergence and stability, in both online and offline contexts. Furthermore, as a special case of that framework, we derive a new coupling-based smoothing algorithm applicable to models with intractable transition densities. We elaborate practical recommendations and confirm them with numerical experiments.
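
For concreteness, the sketch below implements the default backward-sampling step on a toy linear-Gaussian model: a bootstrap particle filter stores particles and weights, and a backward pass draws a smoothing trajectory with backward weights proportional to the filtering weights times the transition density. Drawing one trajectory costs $\mathcal O(N)$ per time step, so drawing $N$ trajectories gives the quadratic cost. This is the baseline the paper improves upon, not its linear-complexity algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state-space model: x_t = 0.9 x_{t-1} + N(0,1), y_t = x_t + N(0, 0.25).
T, N = 50, 500
phi, sig_x, sig_y = 0.9, 1.0, 0.5
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = phi * x_true[t - 1] + sig_x * rng.standard_normal()
y = x_true + sig_y * rng.standard_normal(T)

# Bootstrap particle filter, storing particles and normalized weights at every time step.
parts = np.zeros((T, N)); logw = np.zeros((T, N)); W = np.zeros((T, N))
parts[0] = rng.standard_normal(N)
for t in range(T):
    if t > 0:
        anc = rng.choice(N, size=N, p=W[t - 1])                    # multinomial resampling
        parts[t] = phi * parts[t - 1, anc] + sig_x * rng.standard_normal(N)
    logw[t] = -0.5 * ((y[t] - parts[t]) / sig_y) ** 2
    W[t] = np.exp(logw[t] - logw[t].max()); W[t] /= W[t].sum()

# Standard backward-sampling step: draw one smoothing trajectory.
traj = np.zeros(T)
idx = rng.choice(N, p=W[-1])
traj[-1] = parts[-1, idx]
for t in range(T - 2, -1, -1):
    # Backward weights ~ filter weight at t times the transition density to the chosen x_{t+1}.
    logb = np.log(W[t]) - 0.5 * ((traj[t + 1] - phi * parts[t]) / sig_x) ** 2
    b = np.exp(logb - logb.max()); b /= b.sum()
    idx = rng.choice(N, p=b)
    traj[t] = parts[t, idx]
print("smoothed trajectory (first 5 states):", traj[:5])
```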

We obtain a recurrence relation in $d$ for the average singular value $\alpha(d)$ of a complex-valued $d\times d$ matrix $\frac{1}{\sqrt{d}}X$ with random i.i.d. $N(0,1)$ entries, and use it to show that $\alpha(d)$ decreases monotonically with $d$ to the limit given by the Marchenko-Pastur distribution. The monotonicity of $\alpha(d)$ has been recently conjectured by Bandeira, Kennedy and Singer in their study of the Little Grothendieck problem over the unitary group $\mathcal{U}_{d}$ \cite{BKS}, a combinatorial optimization problem. The result implies sharp global estimates for $\alpha(d)$, new bounds for the expected minimum and maximum singular values, and a lower bound for the ratio of the expected maximum and the expected minimum singular value. The proof is based on a connection with the theory of Tur\'an determinants of orthogonal polynomials. We also discuss some applications to the problem that originally motivated the conjecture.
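
A quick Monte Carlo check of the monotonicity statement (not the recurrence-based proof): estimate the average singular value of $\frac{1}{\sqrt{d}}X$ for complex Gaussian matrices of increasing size and compare against the Marchenko-Pastur (quarter-circle) mean, which I believe equals $8/(3\pi) \approx 0.85$ in this square case.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_singular_value(d, reps=200):
    """Monte Carlo estimate of the mean singular value of (1/sqrt(d)) X,
    where X has i.i.d. standard complex Gaussian entries."""
    vals = []
    for _ in range(reps):
        X = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2.0)
        s = np.linalg.svd(X / np.sqrt(d), compute_uv=False)
        vals.append(s.mean())
    return float(np.mean(vals))

for d in (2, 5, 10, 50, 100):
    print(d, avg_singular_value(d))
# The estimates should decrease in d towards the quarter-circle mean 8/(3*pi) ~ 0.849.
```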

Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has been shown to be important, and several works take fields into account in their models. In this paper, we propose a novel approach to model the field information effectively and efficiently. The proposed approach is a direct improvement of FwFM and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new interpretation of FM and FwFM within the FmFM framework, and compare it with FFM. Besides pruning the cross terms, our model supports field-specific variable dimensions of embedding vectors, which acts as soft pruning. We also propose an efficient way to minimize the dimensions while keeping the model performance. The FmFM model can be further optimized by caching the intermediate vectors, so that it takes only thousands of floating-point operations (FLOPs) to make a prediction. Our experimental results show that it can outperform FFM, which is more complex. The FmFM model's performance is also comparable to that of DNN models, which require many more FLOPs at runtime.
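
As a hedged sketch of the field-matrixed interaction suggested by the model's name (the field count, vocabulary sizes, and embedding dimension below are made up for illustration), each pair of features interacts through a field-pair matrix, $\langle v_i M_{F(i),F(j)}, v_j\rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)

n_fields, dim = 3, 8                  # hypothetical: 3 fields, embedding dimension 8
n_feats_per_field = [100, 50, 20]     # hypothetical vocabulary sizes per field

# One embedding table per field and one dim x dim interaction matrix per field pair.
emb = [rng.normal(0, 0.01, (n, dim)) for n in n_feats_per_field]
M = {(i, j): rng.normal(0, 0.01, (dim, dim))
     for i in range(n_fields) for j in range(i + 1, n_fields)}

def fmfm_score(feature_ids):
    """Score one sample: sum over field pairs of (v_i M_{F(i),F(j)}) . v_j."""
    vs = [emb[f][idx] for f, idx in enumerate(feature_ids)]
    score = 0.0
    for i in range(n_fields):
        for j in range(i + 1, n_fields):
            score += (vs[i] @ M[(i, j)]) @ vs[j]
    return score

print(fmfm_score([3, 7, 11]))         # one feature id per field
```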

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
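
The weighting rule follows directly from the formula in the abstract; below is a minimal sketch (the final normalization so that the weights sum to the number of classes is a common convention, not dictated by the abstract):

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights proportional to the inverse effective number (1 - beta^n) / (1 - beta)."""
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(n) / weights.sum()       # normalize so the weights sum to the number of classes

# Example: a long-tailed class distribution.
print(class_balanced_weights([10_000, 1_000, 100, 10], beta=0.999))
```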
