
Quantile regression is a statistical method for estimating conditional quantiles of a response variable. Moreover, quantile regression is well known to be more robust to outliers than $l_2$-based mean estimation. Using the fused lasso penalty over a $K$-nearest-neighbors graph, we propose an adaptive quantile estimator in a non-parametric setup. We show that the estimator attains the optimal rate of $n^{-1/d}$ up to a logarithmic factor, under mild assumptions on the data generation mechanism of the $d$-dimensional data. We develop algorithms to compute the estimator and discuss methodology for model selection. Numerical experiments on simulated and real data demonstrate clear advantages of the proposed estimator over state-of-the-art methods.
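
To make the construction concrete, here is a minimal sketch of the optimization problem the estimator plausibly solves: pinball loss plus an $l_1$ fused penalty over the edges of a $K$-nearest-neighbors graph, i.e. $\min_\theta \sum_i \rho_\tau(y_i - \theta_i) + \lambda \sum_{(i,j) \in E} |\theta_i - \theta_j|$. The solver choice (cvxpy), the use of scikit-learn for the graph, and all parameter values are illustrative assumptions, not the paper's own algorithm.

```python
# Sketch: fused-lasso quantile regression over a K-nearest-neighbors graph.
import numpy as np
import cvxpy as cp
from sklearn.neighbors import kneighbors_graph

def knn_fused_quantile(X, y, tau=0.5, lam=1.0, k=5):
    n = len(y)
    # Directed K-NN adjacency; keep each undirected edge once.
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity").tocoo()
    edges = {(min(i, j), max(i, j)) for i, j in zip(A.row, A.col)}

    theta = cp.Variable(n)
    r = y - theta
    pinball = cp.sum(cp.maximum(tau * r, (tau - 1) * r))      # quantile (pinball) loss
    fused = sum(cp.abs(theta[i] - theta[j]) for i, j in edges)  # edge-wise l1 penalty
    cp.Problem(cp.Minimize(pinball + lam * fused)).solve()
    return theta.value

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))
y = np.sin(4 * X[:, 0]) + 0.3 * rng.standard_normal(200)
theta_hat = knn_fused_quantile(X, y, tau=0.5, lam=0.5, k=5)
```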

Related Content

A piecewise Padé-Chebyshev type (PiPCT) approximation method is proposed to minimize the Gibbs phenomenon in approximating piecewise smooth functions. A theorem on the $L^1$-error estimate is proved for sufficiently smooth functions using a decay property of the Chebyshev coefficients. Numerical experiments show that the PiPCT method accurately captures isolated singularities of a function without using the positions or the types of the singularities. Further, an adaptive partition approach to the PiPCT method (referred to as the APiPCT method) is developed to achieve the required accuracy at a lower computational cost. Numerical experiments show some advantages of the PiPCT and APiPCT methods compared to some well-known methods in the literature.
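
As a toy illustration of the piecewise idea only: the sketch below fits plain Chebyshev interpolants on a uniform partition, so a jump discontinuity contaminates only the cell containing it. The Padé step that PiPCT applies to the Chebyshev coefficients is omitted here, and the test function, partition, and degree are all illustrative choices.

```python
# Toy sketch of the "piecewise" part of PiPCT: Chebyshev fits on subintervals,
# so a jump hurts accuracy only in the cell that contains it.
# (The Pade step applied to the Chebyshev coefficients is omitted.)
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.where(x < 0.3, np.sin(np.pi * x), np.cos(np.pi * x))  # jump at x = 0.3

def piecewise_cheb_fit(f, a, b, n_cells=8, deg=12):
    """Interpolate f by a degree-`deg` Chebyshev polynomial on each cell."""
    breaks = np.linspace(a, b, n_cells + 1)
    pieces = []
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        # Chebyshev-Lobatto nodes mapped to [lo, hi]
        xk = lo + 0.5 * (hi - lo) * (1 + np.cos(np.pi * np.arange(deg + 1) / deg))
        pieces.append((lo, hi, C.chebfit(xk, f(xk), deg)))
    def approx(x):
        y = np.empty_like(x, dtype=float)
        for lo, hi, coef in pieces:
            m = (x >= lo) & (x <= hi)
            y[m] = C.chebval(x[m], coef)
        return y
    return approx

xs = np.linspace(-1.0, 1.0, 2001)
err = np.abs(piecewise_cheb_fit(f, -1.0, 1.0)(xs) - f(xs))
# err is large only in the cell containing the jump; elsewhere it decays rapidly.
```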

Functional data are frequently accompanied by parametric templates that describe the typical shapes of the functions. Although the templates incorporate critical domain knowledge, parametric functional data models can incur significant bias, which undermines the usefulness and interpretability of these models. To correct for model misspecification, we augment the parametric templates with an infinite-dimensional nonparametric functional basis. Crucially, the nonparametric factors are regularized with an ordered spike-and-slab prior, which implicitly provides consistent rank selection and satisfies several appealing theoretical properties. This prior is accompanied by a parameter-expansion scheme customized to boost MCMC efficiency, and is broadly applicable for Bayesian factor models. The nonparametric basis functions are learned from the data, yet constrained to be orthogonal to the parametric template in order to preserve distinctness between the parametric and nonparametric terms. The versatility of the proposed approach is illustrated through applications to synthetic data, human motor control data, and dynamic yield curve data. Relative to parametric alternatives, the proposed semiparametric functional factor model eliminates bias, reduces excessive posterior and predictive uncertainty, and provides reliable inference on the effective number of nonparametric terms--all with minimal additional computational costs.
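
One ingredient that is easy to make concrete is the orthogonality constraint: the nonparametric basis is kept distinct from the parametric template by projecting it onto the orthogonal complement of the template's span. A minimal numpy sketch under that reading; the grid, the linear template, and the function names are hypothetical.

```python
# Sketch: constrain nonparametric basis functions to be orthogonal to a
# parametric template, both evaluated on a common grid of m points.
import numpy as np

def orthogonalize_to_template(B, T):
    """B: (m, r) raw nonparametric basis; T: (m, p) parametric template."""
    Q, _ = np.linalg.qr(T)            # orthonormal basis for the template span
    return B - Q @ (Q.T @ B)          # project out the template component

m = 100
grid = np.linspace(0, 1, m)
T = np.column_stack([np.ones(m), grid])                 # e.g. a linear template
B = np.column_stack([np.sin(np.pi * k * grid) for k in range(1, 5)])
B_perp = orthogonalize_to_template(B, T)
assert np.allclose(T.T @ B_perp, 0, atol=1e-10)         # distinct from the template
```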

Causal inference for extreme events has many potential applications in fields such as medicine, climate science and finance. We study the extremal quantile treatment effect of a binary treatment on a continuous, heavy-tailed outcome. Existing methods are limited to the case where the quantile of interest is within the range of the observations. For applications in risk assessment, however, the most relevant cases relate to extremal quantiles that go beyond the data range. We introduce an estimator of the extremal quantile treatment effect that relies on asymptotic tail approximations and uses a new causal Hill estimator for the extreme value indices of potential outcome distributions. We establish asymptotic normality of the estimators even in the setting of extremal quantiles, and we propose a consistent variance estimator to achieve valid statistical inference. In simulation studies we illustrate the advantages of our methodology over competitors, and we apply it to a real data set.
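
A rough sketch of the tail-extrapolation idea, using the classical Hill estimator per treatment arm and a Weissman-type extrapolation to quantiles beyond the data range. The paper's causal Hill estimator additionally adjusts for confounding, which is omitted here, and the sample sizes and tuning constants are illustrative.

```python
# Sketch: extremal quantile treatment effect via tail extrapolation.
import numpy as np

def hill(x, k):
    """Classical Hill estimator of the extreme value index from the k largest points."""
    xs = np.sort(x)[::-1]
    return np.mean(np.log(xs[:k]) - np.log(xs[k]))

def weissman_quantile(x, tau, k):
    """Extrapolated tau-quantile; tau may exceed 1 - 1/n."""
    n = len(x)
    gamma = hill(x, k)
    x_k = np.sort(x)[::-1][k]                      # intermediate order statistic
    return x_k * (k / (n * (1 - tau))) ** gamma    # Weissman-type extrapolation

rng = np.random.default_rng(1)
y1 = rng.pareto(2.0, 5000) + 1                     # treated outcomes (heavy-tailed)
y0 = rng.pareto(3.0, 5000) + 1                     # control outcomes
tau, k = 0.9999, 200                               # quantile beyond the data range
qte = weissman_quantile(y1, tau, k) - weissman_quantile(y0, tau, k)
```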

Survival analysis is a critical tool for modelling time-to-event data, such as life expectancy after a cancer diagnosis or optimal maintenance scheduling for complex machinery. However, current neural network models provide an imperfect solution for survival analysis, as they either restrict the shape of the target probability distribution or restrict the estimation to pre-determined times. As a consequence, current survival neural networks lack the ability to estimate a generic function without prior knowledge of its structure. In this article, we present the metaparametric neural network framework, which encompasses existing survival analysis methods and extends them to resolve these issues. This framework allows survival neural networks to estimate generic functions independently of the underlying data structure, as their regression and classification counterparts do. Further, we demonstrate the application of the metaparametric framework on both simulated and large real-world datasets and show that it outperforms the current state-of-the-art methods in (i) capturing nonlinearities and (ii) identifying temporal patterns, leading to more accurate overall estimations whilst placing no restrictions on the underlying function structure.
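
For orientation, here is a minimal sketch of the kind of parametric survival network that such a framework would generalize: a net that outputs Weibull parameters per subject, trained on the right-censored negative log-likelihood. The architecture, names, and data are illustrative assumptions, not the paper's metaparametric model.

```python
# Sketch: a neural survival model that outputs Weibull (shape, scale) per
# subject and is trained on the right-censored log-likelihood.
import torch
import torch.nn as nn

class WeibullSurvivalNet(nn.Module):
    def __init__(self, d_in, d_hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, 2))

    def forward(self, x):
        out = self.net(x)
        k = nn.functional.softplus(out[:, 0]) + 1e-6    # Weibull shape > 0
        lam = nn.functional.softplus(out[:, 1]) + 1e-6  # Weibull scale > 0
        return k, lam

def censored_nll(k, lam, t, event):
    # events contribute log density; censored subjects contribute log survival
    z = (t / lam) ** k
    log_f = torch.log(k) - torch.log(lam) + (k - 1) * torch.log(t / lam) - z
    log_S = -z
    return -(event * log_f + (1 - event) * log_S).mean()

model = WeibullSurvivalNet(d_in=10)
x = torch.randn(256, 10)
t = torch.rand(256) * 5 + 0.1                # observed times
event = (torch.rand(256) < 0.7).float()      # 1 = event observed, 0 = censored
loss = censored_nll(*model(x), t, event)
loss.backward()
```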

Kernel-based schemes are state-of-the-art techniques for learning from data. In this work we extend some ideas about kernel-based greedy algorithms to exponential-polynomial splines, whose main drawback is possible overfitting and the consequent oscillations of the approximant. To partially overcome this issue, we introduce two algorithms that adaptively select the spline interpolation points by minimizing either the sample residuals ($f$-greedy) or an upper bound on the approximation error based on the spline Lebesgue function ($\lambda$-greedy). Both methods yield an adaptive selection of the sampling points, i.e. the spline nodes. However, while the $f$-greedy selection is tailored to one specific target function, the $\lambda$-greedy algorithm is independent of the function values and enables us to define a priori optimal interpolation nodes.
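
A minimal sketch of the $f$-greedy loop: repeatedly add the sample point where the current interpolant's residual is largest. It is illustrated with ordinary cubic splines rather than the paper's exponential-polynomial splines, and the initial node set and stopping rule are illustrative assumptions.

```python
# Sketch of f-greedy node selection with cubic splines (the paper uses
# exponential-polynomial splines instead).
import numpy as np
from scipy.interpolate import CubicSpline

def f_greedy_nodes(x, y, n_nodes=12):
    """Greedily select spline nodes from samples (x, y); x assumed sorted."""
    idx = {0, len(x) // 3, 2 * len(x) // 3, len(x) - 1}   # small initial node set
    while len(idx) < n_nodes:
        nodes = sorted(idx)
        s = CubicSpline(x[nodes], y[nodes])
        resid = np.abs(y - s(x))
        resid[nodes] = 0.0                 # current nodes are interpolated exactly
        idx.add(int(np.argmax(resid)))     # f-greedy: largest residual wins
    return sorted(idx)

x = np.linspace(0, 1, 400)
y = np.exp(-40 * (x - 0.5) ** 2)           # sharp bump: nodes should cluster there
nodes = f_greedy_nodes(x, y, n_nodes=15)
```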

There is an increasing realization that algorithmic inductive biases are central to preventing overfitting; empirically, we often see a benign overfitting phenomenon in overparameterized settings for natural learning algorithms, such as stochastic gradient descent (SGD), where little to no explicit regularization has been employed. This work considers this issue in arguably the most basic setting: constant-stepsize SGD (with iterate averaging or tail averaging) for linear regression in the overparameterized regime. Our main result provides a sharp excess risk bound, stated in terms of the full eigenspectrum of the data covariance matrix, that reveals a bias-variance decomposition characterizing when generalization is possible: (i) the variance bound is characterized in terms of an effective dimension (specific to SGD) and (ii) the bias bound provides a sharp geometric characterization in terms of the location of the initial iterate (and how it aligns with the data covariance matrix). More specifically, for SGD with iterate averaging, we demonstrate the sharpness of the established excess risk bound by proving a matching lower bound (up to constant factors). For SGD with tail averaging, we show its advantage over SGD with iterate averaging by proving a better excess risk bound together with a nearly matching lower bound. Moreover, we reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD and that provided by ordinary least squares (minimum-norm interpolation) and ridge regression. Experimental results on synthetic data corroborate our theoretical findings.
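
A minimal simulation sketch of the setting: constant-stepsize one-sample SGD for overparameterized linear regression, comparing the last iterate, full iterate averaging, and tail averaging over the last half of the iterates. The dimensions, stepsize, and noise level are illustrative.

```python
# Sketch: constant-stepsize SGD for overparameterized linear regression,
# with full iterate averaging vs. tail averaging maintained as running means.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 500                          # overparameterized: d > n
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ w_star + 0.1 * rng.standard_normal(n)

eta, T = 1e-3, 50_000                    # constant stepsize (eta * E||x||^2 < 2)
w = np.zeros(d)
w_avg = np.zeros(d)                      # running average of all iterates
w_tail = np.zeros(d)                     # running average of the last T/2 iterates
for t in range(1, T + 1):
    i = rng.integers(n)                  # one-sample stochastic gradient step
    w -= eta * (X[i] @ w - y[i]) * X[i]
    w_avg += (w - w_avg) / t
    if t > T // 2:
        w_tail += (w - w_tail) / (t - T // 2)

for name, wv in [("last iterate", w), ("iterate avg", w_avg), ("tail avg", w_tail)]:
    print(name, np.linalg.norm(wv - w_star))
```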

This paper presents several strategies to tune the parameters of metaheuristic methods for (discrete) design optimization of reinforced concrete (RC) structures. A novel utility metric is proposed, based on the area under the average performance curve. The process of modelling, analysing and designing realistic RC structures leads to objective functions whose evaluation is computationally very expensive. To avoid costly simulations, two types of surrogate models are used. The first consists of a database containing all possible solutions. The second uses benchmark functions to create a discrete sub-space of them, simulating the main features of realistic problems. Parameter tuning of four metaheuristics is performed based on two strategies. The main difference between them is the parameter control established to perform partial assessments. The simpler strategy is suitable for tuning good 'generalist' methods, i.e., methods that perform well regardless of the parameter configuration. The other is more expensive, but is well suited to assess any method. Tuning results show that Biogeography-Based Optimization, a relatively new evolutionary algorithm, outperforms methods such as genetic algorithms (GA) or particle swarm optimization (PSO) on such optimization problems, owing to its particular approach to applying recombination and mutation operators.
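
A guess at the shape of the proposed utility metric: average the best-so-far objective curves over repeated runs of a configuration, then score the configuration by the area under that average curve (lower is better for minimization). The exact normalization in the paper may differ, and the run data below are synthetic.

```python
# Sketch (assumed form) of a utility metric based on the area under the
# average best-so-far performance curve across repeated runs.
import numpy as np

def best_so_far(values):
    return np.minimum.accumulate(values)

def utility(curves):
    """curves: (n_runs, n_evals) best-so-far objective values (minimization)."""
    avg_curve = np.asarray(curves).mean(axis=0)    # average performance curve
    return np.trapz(avg_curve)                     # area under it: lower is better

rng = np.random.default_rng(0)
# synthetic: 20 runs of a metaheuristic, 500 objective evaluations each
raw = rng.random((20, 500)) * np.linspace(2.0, 1.0, 500)
curves = np.apply_along_axis(best_so_far, 1, raw)
print(utility(curves))
```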

CensSpatial is an R package for analyzing censored spatial data through linear models. It offers a set of tools for simulation, estimation, prediction, and local influence diagnostics for outlier detection. The package provides four algorithms for estimation and prediction. One of them is based on the stochastic approximation of the EM (SAEM) algorithm, which allows easy and fast estimation of the parameters of linear spatial models when censoring is present. The package also provides useful measures for diagnostic analysis based on the Hessian matrix of the complete-data log-likelihood function. This work is divided into two parts. The first part discusses and illustrates the utilities that the package offers for estimating and predicting censored spatial data. The second describes the tools available for diagnostic analysis. Several examples with spatial environmental data are also provided.

Heatmap-based methods dominate the field of human pose estimation by modelling the output distribution through likelihood heatmaps. In contrast, regression-based methods are more efficient but suffer from inferior performance. In this work, we explore maximum likelihood estimation (MLE) to develop an efficient and effective regression-based method. From the perspective of MLE, adopting a particular regression loss amounts to making an assumption about the output density function; a density function closer to the true distribution leads to better regression performance. In light of this, we propose a novel regression paradigm with Residual Log-likelihood Estimation (RLE) to capture the underlying output distribution. Concretely, RLE learns the change of the distribution, instead of the unreferenced underlying distribution, to facilitate the training process. With the proposed reparameterization design, our method is compatible with off-the-shelf flow models. The proposed method is effective, efficient and flexible. We show its potential in various human pose estimation tasks with comprehensive experiments. Compared to the conventional regression paradigm, regression with RLE brings a 12.4 mAP improvement on MSCOCO without any test-time overhead. Moreover, for the first time on multi-person pose estimation, our regression method is superior to the heatmap-based methods. Our code is available at https://github.com/Jeff-sjtu/res-loglikelihood-regression
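
To illustrate the MLE perspective on regression losses, the sketch below minimizes a heteroscedastic Laplace negative log-likelihood with a learned per-target scale, which generalizes the plain $l_1$ loss; full RLE additionally models the residual between such a simple density and the true one with a normalizing flow, which is omitted here. The shapes and names are illustrative.

```python
# Sketch of the MLE view of regression: the head predicts a location mu and a
# scale b per target; Laplace NLL with fixed b = 1 reduces to the L1 loss.
import torch

def laplace_nll(mu, log_b, target):
    b = log_b.exp()
    # -log p(target) = |target - mu| / b + log b  (constant log 2 dropped)
    return (torch.abs(target - mu) / b + log_b).mean()

mu = torch.randn(32, 17, 2, requires_grad=True)     # e.g. 17 keypoints, (x, y)
log_b = torch.zeros(32, 17, 2, requires_grad=True)  # learned per-keypoint scale
target = torch.randn(32, 17, 2)
loss = laplace_nll(mu, log_b, target)
loss.backward()
```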

Intersection over Union (IoU) is the most popular evaluation metric used in object detection benchmarks. However, there is a gap between optimizing the commonly used distance losses for regressing the parameters of a bounding box and maximizing this metric value. The optimal objective for a metric is the metric itself. In the case of axis-aligned 2D bounding boxes, it can be shown that $IoU$ can be directly used as a regression loss. However, $IoU$ has a plateau that makes it infeasible to optimize in the case of non-overlapping bounding boxes. In this paper, we address the weaknesses of $IoU$ by introducing a generalized version as both a new loss and a new metric. By incorporating this generalized $IoU$ ($GIoU$) as a loss into state-of-the-art object detection frameworks, we show a consistent improvement in their performance using both the standard, $IoU$-based, and new, $GIoU$-based, performance measures on popular object detection benchmarks such as PASCAL VOC and MS COCO.
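
A minimal implementation of $GIoU$ for axis-aligned boxes, following the paper's definition: $GIoU = IoU - (|C| - |A \cup B|)/|C|$, where $C$ is the smallest box enclosing $A$ and $B$; the corresponding loss is $1 - GIoU$.

```python
# GIoU for axis-aligned boxes in (x1, y1, x2, y2) format; loss = 1 - GIoU.
import numpy as np

def giou(a, b):
    # intersection area
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # union area
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # smallest enclosing box C
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c = (cx2 - cx1) * (cy2 - cy1)
    return iou - (c - union) / c   # in [-1, 1]; informative even with no overlap

print(giou((0, 0, 2, 2), (1, 1, 3, 3)))   # overlapping boxes
print(giou((0, 0, 1, 1), (2, 2, 3, 3)))   # disjoint boxes: IoU = 0, GIoU < 0
```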
