
In the misspecified kernel ridge regression problem, researchers usually assume that the underlying true function $f_{\rho}^{*} \in [\mathcal{H}]^{s}$, a less-smooth interpolation space of a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$, for some $s\in (0,1)$. The existing minimax optimality results require $\|f_{\rho}^{*}\|_{L^{\infty}}<\infty$, which implicitly requires $s > \alpha_{0}$, where $\alpha_{0}\in (0,1)$ is the embedding index, a constant depending on $\mathcal{H}$. Whether KRR is optimal for all $s\in (0,1)$ has been an outstanding problem for years. In this paper, we show that KRR is minimax optimal for any $s\in (0,1)$ when $\mathcal{H}$ is a Sobolev RKHS.
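A minimal sketch of the estimator under discussion, assuming a Matérn kernel (whose RKHS is norm-equivalent to a Sobolev space) and an illustrative rough target and ridge parameter; none of these choices are taken from the paper:

```python
# Kernel ridge regression with a Sobolev-type (Matern) kernel; illustrative only.
import numpy as np
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(0.0, 1.0, size=(n, 1))
f_star = lambda x: np.abs(x - 0.5) ** 0.7            # a rough (less-smooth) target
y = f_star(X).ravel() + 0.1 * rng.standard_normal(n)

kernel = Matern(length_scale=0.2, nu=1.5)            # RKHS ~ a Sobolev space
K = kernel(X)                                        # n x n Gram matrix
lam = n ** (-2 / 3)                                  # illustrative decay of the ridge parameter
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)  # KRR coefficients

X_test = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
f_hat = kernel(X_test, X) @ alpha
excess_risk = np.mean((f_hat - f_star(X_test).ravel()) ** 2)
print(f"empirical excess risk: {excess_risk:.4f}")
```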

Related content

This paper investigates Support Vector Regression (SVR) in the context of the fundamental risk quadrangle theory, which links optimization, risk management, and statistical estimation. It is shown that both formulations of SVR, $\varepsilon$-SVR and $\nu$-SVR, correspond to the minimization of equivalent error measures (the Vapnik error and the CVaR norm, respectively) with a regularization penalty. These error measures, in turn, define the corresponding risk quadrangles. By constructing the fundamental risk quadrangle corresponding to SVR, we show that SVR is an asymptotically unbiased estimator of the average of two symmetric conditional quantiles. Further, we prove the equivalence of $\varepsilon$-SVR and $\nu$-SVR in a general stochastic setting. Additionally, SVR is formulated as a regular deviation minimization problem with a regularization penalty. Finally, the dual formulation of SVR in the risk quadrangle framework is derived.
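As a hedged numerical illustration of the quantile-averaging claim in the location-only case (the distribution, sample size, and $\varepsilon$ are assumptions, and this is not the paper's derivation), one can minimize the empirical Vapnik ($\varepsilon$-insensitive) error and check that the minimizer sits midway between two symmetric quantiles:

```python
# Minimizer c* of the empirical Vapnik error satisfies F(c*-eps) ~ 1 - F(c*+eps),
# i.e. c* is the midpoint of the quantiles F^{-1}(q) and F^{-1}(1-q).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
y = rng.exponential(scale=1.0, size=100_000)   # a skewed sample
eps = 0.5

vapnik_error = lambda c: np.mean(np.maximum(np.abs(y - c) - eps, 0.0))
c_star = minimize_scalar(vapnik_error, bounds=(0.0, 5.0), method="bounded").x

ecdf = lambda t: np.mean(y <= t)
print(f"c* = {c_star:.3f}")
print(f"F(c*-eps) = {ecdf(c_star - eps):.3f},  1 - F(c*+eps) = {1 - ecdf(c_star + eps):.3f}")
```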

Ridge regression with random coefficients provides an important alternative to fixed-coefficient regression in the high-dimensional setting, where the effects are expected to be small but not zero. This paper considers estimation and prediction for random coefficient ridge regression in the setting of transfer learning, where, in addition to observations from the target model, source samples from different but possibly related regression models are available. The informativeness of a source model for the target model can be quantified by the correlation between their regression coefficients. This paper proposes two estimators of the regression coefficients of the target model as weighted sums of the ridge estimates from the target and source models, where the weights can be determined by minimizing the empirical estimation risk or prediction risk. Using random matrix theory, the limiting values of the optimal weights are derived in the setting where $p/n \rightarrow \gamma$, with $p$ the number of predictors and $n$ the sample size, which leads to explicit expressions of the estimation and prediction risks. Simulations show that these limiting risks agree very well with the empirical risks. An application to predicting polygenic risk scores for lipid traits shows that such transfer learning methods lead to smaller prediction errors than single-sample ridge regression or Lasso-based transfer learning.
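The following sketch illustrates the weighted-combination idea in its simplest form, with the weight chosen by minimizing an empirical prediction risk on a held-out target split; the simulation design, ridge penalty, and validation-based weight selection are assumptions made for illustration, not the paper's random-matrix-theory prescription:

```python
# Weighted combination of target and source ridge estimates; illustrative only.
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(2)
p, n_t, n_s, lam = 200, 100, 400, 1.0

beta_t = rng.standard_normal(p) / np.sqrt(p)
beta_s = 0.8 * beta_t + 0.2 * rng.standard_normal(p) / np.sqrt(p)   # correlated source coefficients

def ridge(X, y, lam):
    return solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

X_t = rng.standard_normal((n_t, p))
y_t = X_t @ beta_t + 0.5 * rng.standard_normal(n_t)
X_s = rng.standard_normal((n_s, p))
y_s = X_s @ beta_s + 0.5 * rng.standard_normal(n_s)
X_val = rng.standard_normal((200, p))
y_val = X_val @ beta_t + 0.5 * rng.standard_normal(200)

b_t, b_s = ridge(X_t, y_t, lam), ridge(X_s, y_s, lam)
weights = np.linspace(0.0, 1.0, 51)
risks = [np.mean((y_val - X_val @ ((1 - w) * b_t + w * b_s)) ** 2) for w in weights]
w_hat = weights[int(np.argmin(risks))]
print(f"selected source weight: {w_hat:.2f}")
```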

In this paper, we consider the generalization ability of deep wide feedforward ReLU neural networks defined on a bounded domain $\mathcal X \subset \mathbb R^{d}$. We first demonstrate that the generalization ability of such a neural network can be fully characterized by that of the corresponding deep neural tangent kernel (NTK) regression. We then investigate the spectral properties of the deep NTK and show that the deep NTK is positive definite on $\mathcal{X}$ and that its eigenvalue decay rate is $(d+1)/d$. Thanks to the well-established theory of kernel regression, we conclude that multilayer wide neural networks trained by gradient descent with proper early stopping achieve the minimax rate, provided that the regression function lies in the reproducing kernel Hilbert space (RKHS) associated with the corresponding NTK. Finally, we illustrate that overfitted multilayer wide neural networks cannot generalize well on $\mathbb S^{d}$. We believe our technical contribution in determining the eigenvalue decay rate of the NTK on $\mathbb R^{d}$ may be of independent interest.
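A minimal sketch of NTK kernel regression using the standard closed form of the one-hidden-layer ReLU NTK, here only a stand-in for the deep NTK analysed in the paper; the data, target function, and the ridge-style regularization standing in for early stopping are illustrative assumptions:

```python
# Kernel regression with the closed-form single-hidden-layer ReLU NTK.
import numpy as np

def ntk_two_layer_relu(X, Z):
    """Closed-form NTK of a one-hidden-layer ReLU network with Gaussian initialization."""
    norms_x = np.linalg.norm(X, axis=1, keepdims=True)
    norms_z = np.linalg.norm(Z, axis=1, keepdims=True)
    u = np.clip((X @ Z.T) / (norms_x * norms_z.T), -1.0, 1.0)
    theta = np.arccos(u)
    k0 = (X @ Z.T) * (np.pi - theta) / (2 * np.pi)
    k1 = (norms_x * norms_z.T) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
    return k0 + k1

rng = np.random.default_rng(3)
n, d = 300, 3
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

K = ntk_two_layer_relu(X, X)
lam = 1e-3
alpha = np.linalg.solve(K + n * lam * np.eye(n), y)   # ridge penalty as a proxy for early stopping

X_test = rng.standard_normal((100, d))
y_hat = ntk_two_layer_relu(X_test, X) @ alpha
print("test MSE:", np.mean((y_hat - np.sin(X_test[:, 0])) ** 2))
```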

Consider a random sample $(X_{1},\ldots,X_{n})$ from an unknown discrete distribution $P=\sum_{j\geq1}p_{j}\delta_{s_{j}}$ on a countable alphabet $\mathbb{S}$, and let $(Y_{n,j})_{j\geq1}$ be the empirical frequencies of the distinct symbols $s_{j}$ in the sample. We consider the problem of estimating the $r$-order missing mass, a discrete functional of $P$ defined as $$\theta_{r}(P;\mathbf{X}_{n})=\sum_{j\geq1}p^{r}_{j}I(Y_{n,j}=0).$$ This is a generalization of the missing mass, whose estimation is a classical problem in statistics and the subject of numerous studies in both theory and methods. First, we introduce a nonparametric estimator of $\theta_{r}(P;\mathbf{X}_{n})$ and a corresponding non-asymptotic confidence interval through concentration properties of $\theta_{r}(P;\mathbf{X}_{n})$. Then, we investigate minimax estimation of $\theta_{r}(P;\mathbf{X}_{n})$, which is the main contribution of our work. We show that minimax estimation is not feasible over the class of all discrete distributions on $\mathbb{S}$, and not even for distributions with regularly varying tails, which only guarantee that our estimator is consistent for $\theta_{r}(P;\mathbf{X}_{n})$. This leads us to introduce the stronger assumption of second-order regular variation for the tail behaviour of $P$, which is proved to be sufficient for minimax estimation of $\theta_r(P;\mathbf{X}_{n})$, making the proposed estimator an optimal minimax estimator of $\theta_{r}(P;\mathbf{X}_{n})$. Our interest in the $r$-order missing mass arises from forensic statistics, where the estimation of the $2$-order missing mass appears in connection with the estimation of the likelihood ratio $T(P,\mathbf{X}_{n})=\theta_{1}(P;\mathbf{X}_{n})/\theta_{2}(P;\mathbf{X}_{n})$, known as the "fundamental problem of forensic mathematics". We present theoretical guarantees for the nonparametric estimation of $T(P,\mathbf{X}_{n})$.
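To make the target functional concrete, the following toy computation evaluates $\theta_{r}(P;\mathbf{X}_{n})$ (and the ratio $\theta_{1}/\theta_{2}$) for a known, truncated geometric $P$; it is not the nonparametric estimator proposed in the paper:

```python
# r-order missing mass of a sample from a known geometric distribution.
import numpy as np

rng = np.random.default_rng(4)
n, r = 500, 2
J = 10_000                                    # truncation of the countable alphabet
p = 0.05 * 0.95 ** np.arange(J)               # geometric probabilities p_j
p = p / p.sum()

sample = rng.choice(J, size=n, p=p)           # X_1, ..., X_n
counts = np.bincount(sample, minlength=J)     # empirical frequencies Y_{n,j}
theta_r = np.sum(p[counts == 0] ** r)         # r-order mass carried by unseen symbols
theta_1 = np.sum(p[counts == 0])              # classical missing mass (r = 1)
print(f"theta_{r}(P; X_n) = {theta_r:.3e}")
print(f"T(P, X_n) = theta_1/theta_2 = {theta_1 / theta_r:.2f}")
```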

Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to the intrinsic data structures. In real-world applications, such an assumption that the data lie exactly on a low-dimensional manifold is stringent. This paper introduces a relaxed assumption that the input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, whose intrinsic dimension can be characterized by a new complexity notion -- the effective Minkowski dimension. We prove that the sample complexity of deep nonparametric regression depends only on the effective Minkowski dimension of $\mathcal{S}$, denoted by $p$. We further illustrate our theoretical findings by considering nonparametric regression with an anisotropic Gaussian random design $N(0,\Sigma)$, where $\Sigma$ is full rank. When the eigenvalues of $\Sigma$ have an exponential or polynomial decay, the effective Minkowski dimension of such a Gaussian random design is $p=\mathcal{O}(\sqrt{\log n})$ or $p=\mathcal{O}(n^\gamma)$, respectively, where $n$ is the sample size and $\gamma\in(0,1)$ is a small constant depending on the polynomial decay rate. Our theory shows that, when the manifold assumption does not hold, deep neural networks can still adapt to the effective Minkowski dimension of the data and circumvent the curse of ambient dimensionality for moderate sample sizes.
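The sketch below only illustrates how an anisotropic Gaussian design with exponentially decaying eigenvalues concentrates near a low-dimensional subset; the "effective dimension" proxy it computes (the number of directions whose standard deviation exceeds a resolution scale) is a rough stand-in, not the paper's effective Minkowski dimension:

```python
# Anisotropic Gaussian design with exponentially decaying eigenvalues.
import numpy as np

d, n = 200, 1000
eigvals = np.exp(-np.arange(d))                      # exponential eigenvalue decay of Sigma
rng = np.random.default_rng(5)
X = rng.standard_normal((n, d)) * np.sqrt(eigvals)   # sample from N(0, diag(eigvals))

delta = n ** (-0.25)                                 # an illustrative resolution scale
effective_dim = int(np.sum(np.sqrt(eigvals) > delta))
print(f"ambient dimension d = {d}, effective-dimension proxy ~ {effective_dim}")
# The count grows only logarithmically with the resolution, far below the ambient d;
# the paper's sharper analysis gives p = O(sqrt(log n)) in this regime.
```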

We propose a numerical method for the Vlasov-Poisson-Fokker-Planck model, written as a hyperbolic system thanks to a spectral decomposition in the basis of Hermite functions with respect to the velocity variable, together with a structure-preserving finite volume scheme for the space variable. On the one hand, we show that this scheme naturally preserves both stationary solutions and the linearized free-energy estimate. On the other hand, we adapt previous arguments based on hypocoercivity methods to obtain quantitative estimates ensuring the exponential relaxation to equilibrium of the discrete solution for the linearized Vlasov-Poisson-Fokker-Planck system, uniformly with respect to both the scaling and discretization parameters. Finally, we perform substantial numerical simulations for the nonlinear system to illustrate the efficiency of this approach for a large variety of collisional regimes (plasma echoes for weakly collisional regimes and trend to equilibrium for collisional plasmas) and to highlight its robustness (unconditional stability, asymptotic-preserving properties).
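As a small, self-contained illustration of the velocity-discretization ingredient only (the Hermite spectral expansion, not the finite volume scheme or the coupled Vlasov-Poisson-Fokker-Planck solver), one can project a shifted Maxwellian onto orthonormal Hermite functions and check the truncation error; the grid, truncation order, and test profile are assumptions:

```python
# Expansion of a velocity profile in orthonormal Hermite functions.
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_function(k, v):
    """Orthonormal Hermite function psi_k(v) = H_k(v) exp(-v^2/2) / sqrt(2^k k! sqrt(pi))."""
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0
    return hermval(v, coeffs) * np.exp(-v ** 2 / 2) / np.sqrt(2.0 ** k * factorial(k) * sqrt(pi))

v = np.linspace(-8.0, 8.0, 2001)
dv = v[1] - v[0]
f = np.exp(-(v - 1.0) ** 2 / 2) / sqrt(2 * pi)        # a shifted Maxwellian in velocity

N = 20                                                # truncation order of the expansion
coeffs = [np.sum(f * hermite_function(k, v)) * dv for k in range(N)]
f_N = sum(c * hermite_function(k, v) for k, c in enumerate(coeffs))
print("L2 truncation error:", np.sqrt(np.sum((f - f_N) ** 2) * dv))
```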

This study addresses a fundamental, yet overlooked, gap between standard theory and empirical modelling practice in the OLS regression model $\boldsymbol{y}=\boldsymbol{X\beta}+\boldsymbol{u}$ with collinearity. While an estimated model is, in practice, desired to have stable and efficient individual OLS estimates, $\boldsymbol{y}$ itself has no capacity to identify and control the collinearity in $\boldsymbol{X}$; hence no theory, including a model selection process (MSP), can fill this gap unless $\boldsymbol{X}$ is controlled from the viewpoint of sampling theory. In this paper, after introducing the new concept of "empirically effective modelling" (EEM), we propose our EEM methodology (EEM-M) as an integrated process of two MSPs given data $(\boldsymbol{y^o,X})$. The first MSP, called the XMSP, uses $\boldsymbol{X}$ only and pre-selects a class $\mathcal{D}$ of models whose individual OLS estimates are inefficiency-controlled and collinearity-controlled, where the two controlling variables are derived from the predictive standard error of each estimate. Next, defining an inefficiency-collinearity risk index for each model, a partial ordering is introduced on the set of models to compare them without using $\boldsymbol{y^o}$, and the betterness and admissibility of models are discussed. The second MSP is a commonly used MSP that uses $(\boldsymbol{y^o,X})$ and evaluates total model performance as a whole by criteria such as AIC or BIC, selecting an optimal model from $\mathcal{D}$. Third, two algorithms are proposed to implement the XMSP.
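The sketch below is only a rough illustration of the two-stage idea (an X-only screening stage followed by a y-based criterion), not the paper's EEM-M: it screens submodels by the maximum variance inflation factor as a crude collinearity control and then ranks the survivors by AIC; the data, threshold, and criteria are assumptions:

```python
# Two-stage model selection: X-only collinearity screening, then AIC with y.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
n, p = 100, 6
X = rng.standard_normal((n, p))
X[:, 5] = X[:, 0] + 0.05 * rng.standard_normal(n)       # plant a near-collinear pair
y = X[:, [0, 2]] @ np.array([1.0, -1.0]) + rng.standard_normal(n)

def max_vif(Xs):
    """Largest variance inflation factor among the columns of Xs."""
    C = np.corrcoef(Xs, rowvar=False)
    return np.max(np.diag(np.linalg.inv(C)))

def aic(Xs, y):
    Xd = np.column_stack([np.ones(len(y)), Xs])
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
    rss = np.sum((y - Xd @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * Xd.shape[1]

candidates = [c for k in range(2, 5) for c in combinations(range(p), k)]
stage1 = [c for c in candidates if max_vif(X[:, c]) < 5.0]   # screening with X only
best = min(stage1, key=lambda c: aic(X[:, c], y))            # y-based selection among survivors
print("selected predictors:", best)
```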

Penalized regression methods such as ridge regression rely heavily on the choice of a tuning or penalty parameter, which is often computed via cross-validation. Discrepancies in the value of the penalty parameter may lead to substantial differences in regression coefficient estimates and predictions. In this paper, we investigate the effect of single observations on the optimal choice of the tuning parameter, showing how the presence of influential points can change it dramatically. We classify points as ``expanders'' or ``shrinkers'', based on their effect on the model complexity. Our approach supplies a visual exploratory tool for identifying influential points, naturally implementable for high-dimensional data where traditional approaches usually fail. Applications to simulated and real data examples, both low- and high-dimensional, are presented. The visual tool is implemented in the R package influridge.
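A hedged sketch of the underlying idea (not the influridge implementation): refit the cross-validated ridge penalty with each observation left out and compare it with the full-data penalty. Under one natural convention, a point whose removal lowers the optimal penalty (its presence pushes the penalty up) is labelled a shrinker, and one whose removal raises it an expander; the data, penalty grid, and labels are assumptions:

```python
# Leave-one-out effect of observations on the cross-validated ridge penalty.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(7)
n, p = 60, 10
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)
y[0] += 15.0                                        # plant an influential point

alphas = np.logspace(-3, 3, 50)
lam_full = RidgeCV(alphas=alphas).fit(X, y).alpha_

for i in range(3):                                  # inspect a few observations
    mask = np.arange(n) != i
    lam_i = RidgeCV(alphas=alphas).fit(X[mask], y[mask]).alpha_
    label = "shrinker" if lam_i < lam_full else "expander"
    print(f"obs {i}: lambda without it = {lam_i:.3g} (full data: {lam_full:.3g}) -> {label}")
```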

The {\it inversion} of a set $X$ of vertices in a digraph $D$ consists of reversing the direction of all arcs of $D\langle X\rangle$. We study $sinv'_k(D)$ (resp. $sinv_k(D)$), the minimum number of inversions needed to transform $D$ into a $k$-arc-strong (resp. $k$-strong) digraph, and $sinv'_k(n) = \max\{sinv'_k(D) \mid D~\mbox{is a $2k$-edge-connected digraph of order $n$}\}$. We show: $(i)$ $\frac{1}{2} \log (n - k+1) \leq sinv'_k(n) \leq \log n + 4k -3$; $(ii)$ for any fixed positive integers $k$ and $t$, deciding whether a given oriented graph $\vec{G}$ satisfies $sinv'_k(\vec{G}) \leq t$ (resp. $sinv_k(\vec{G}) \leq t$) is NP-complete; $(iii)$ if $T$ is a tournament of order at least $2k+1$, then $sinv'_k(T) \leq sinv_k(T) \leq 2k$, and $sinv'_k(T) \leq \frac{4}{3}k+o(k)$; $(iv)$ $\frac{1}{2}\log(2k+1) \leq sinv'_k(T) \leq sinv_k(T)$ for some tournament $T$ of order $2k+1$; $(v)$ if $T$ is a tournament of order at least $19k-2$ (resp. $11k-2$), then $sinv'_k(T) \leq sinv_k(T) \leq 1$ (resp. $sinv_k(T) \leq 3$); $(vi)$ for every $\epsilon>0$, there exists $C$ such that $sinv'_k(T) \leq sinv_k(T) \leq C$ for every tournament $T$ on at least $2k+1 + \epsilon k$ vertices.
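For intuition only, the brute-force sketch below computes the minimum number of inversions needed to make a small tournament strongly connected (the $k=1$ case) by trying all short sequences of vertex subsets; the transitive tournament used is an arbitrary example, not one from the paper:

```python
# Brute-force search for the minimum number of inversions making a tournament strong.
from functools import reduce
from itertools import combinations, product

def invert(arcs, X):
    """Reverse every arc whose two endpoints both lie in X."""
    X = set(X)
    return {(w, u) if u in X and w in X else (u, w) for (u, w) in arcs}

def strongly_connected(arcs, verts):
    def reachable(adj, s):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for w in adj.get(u, ()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    fwd, bwd = {}, {}
    for (u, w) in arcs:
        fwd.setdefault(u, []).append(w)
        bwd.setdefault(w, []).append(u)
    s = next(iter(verts))
    return reachable(fwd, s) == set(verts) and reachable(bwd, s) == set(verts)

verts = range(5)
arcs = {(u, w) for u in verts for w in verts if u < w}   # transitive tournament (not strong)
subsets = [c for r in range(2, 6) for c in combinations(verts, r)]

for t in range(3):                                       # try 0, 1, 2 inversions
    hit = next((seq for seq in product(subsets, repeat=t)
                if strongly_connected(reduce(invert, seq, arcs), verts)), None)
    if hit is not None:
        print(f"minimum number of inversions = {t}, e.g. invert {hit}")
        break
```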

We study the cost of overfitting in noisy kernel ridge regression (KRR), which we define as the ratio between the test error of the interpolating ridgeless model and the test error of the optimally tuned model. We take an "agnostic" view in the following sense: we consider the cost as a function of sample size for any target function, even if the sample size is not large enough for consistency or the target is outside the RKHS. We analyze the cost of overfitting under a Gaussian universality ansatz, using recently derived (non-rigorous) risk estimates in terms of the task eigenstructure. Our analysis provides a more refined characterization of benign, tempered, and catastrophic overfitting (cf. Mallinar et al., 2022).
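As a simple simulation of the quantity being studied (an illustration under a Gaussian-kernel design, not the paper's eigenstructure-based risk estimates), one can compare the test error of a (numerically) ridgeless interpolant with that of the best-tuned ridge solution:

```python
# Cost of overfitting: ridgeless test error divided by optimally tuned test error.
import numpy as np
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(8)
n, sigma = 100, 0.3
X = rng.uniform(-1, 1, size=(n, 1))
f = lambda x: np.sin(3 * x).ravel()
y = f(X) + sigma * rng.standard_normal(n)
X_test = np.linspace(-1, 1, 1000).reshape(-1, 1)

kernel = RBF(length_scale=0.1)
K, K_test = kernel(X), kernel(X_test, X)

def test_error(lam):
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return np.mean((K_test @ alpha - f(X_test)) ** 2)

ridgeless = test_error(1e-12)                       # (numerically) interpolating solution
tuned = min(test_error(lam) for lam in np.logspace(-6, 1, 40))
print(f"cost of overfitting ~ {ridgeless / tuned:.2f}")
```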
