
Extropy and its properties have been explored as a means of quantifying uncertainty. In this paper, we obtain alternative expressions for the cumulative residual extropy and the negative cumulative extropy. We obtain simple estimators of the cumulative (residual) extropy and study the asymptotic properties of the proposed estimators. We also present new estimators of the cumulative (residual) extropy when the data are right censored. The finite-sample performance of the estimators is evaluated through Monte Carlo simulation studies. We use the proposed estimators to analyse several real data sets. Finally, we obtain relationships between different dynamic and weighted extropy measures and reliability concepts, which lead to several open problems associated with these measures.
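For reference, a minimal statement of the quantities involved, following the definitions commonly used in this literature (the paper's exact normalizations may differ):

```latex
% extropy of a density f, and cumulative residual extropy of a
% survival function \bar F (conventions vary slightly across papers)
J(X) = -\frac{1}{2}\int_{-\infty}^{\infty} f^{2}(x)\,dx,
\qquad
\xi J(X) = -\frac{1}{2}\int_{0}^{\infty} \bar{F}^{2}(x)\,dx .
```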

Related content

Discrete, and especially binary, random variables occur in many machine learning models, notably in variational autoencoders with binary latent states and in stochastic binary networks. When learning such models, a key tool is an estimator of the gradient of the expected loss with respect to the probabilities of the binary variables. The straight-through (ST) estimator gained popularity due to its simplicity and efficiency, in particular in deep networks, where unbiased estimators are impractical. Several techniques have been proposed to improve on ST while keeping the same low computational complexity: Gumbel-Softmax, ST-Gumbel-Softmax, BayesBiNN, and FouST. We conduct a theoretical analysis of the bias and variance of these methods in order to understand the tradeoffs and verify the originally claimed properties. The presented theoretical results allow for a better understanding of these methods and in some cases reveal serious issues.
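As a concrete reference point, a minimal numpy sketch of the vanilla ST estimator on a toy quadratic loss (all names are ours); the run makes the bias analysed above visible, since the ST estimate concentrates around 0.8 while the true derivative is 0.6:

```python
import numpy as np

rng = np.random.default_rng(0)

def st_forward(p):
    """Sample binary b ~ Bernoulli(p)."""
    return (rng.random(p.shape) < p).astype(float)

def st_backward(grad_b):
    """Straight-through: pass the gradient w.r.t. b through
    to p unchanged (a biased but cheap estimator)."""
    return grad_b

# toy loss L(b) = (b - 0.2)^2, estimate dE[L]/dp at p = 0.6
p = np.full(10000, 0.6)
b = st_forward(p)
grad_b = 2.0 * (b - 0.2)        # dL/db
grad_p = st_backward(grad_b)    # ST estimate of dL/dp
# exact: E[L] = p*(1-0.2)**2 + (1-p)*0.2**2, so dE[L]/dp = 0.6
print(grad_p.mean(), "vs exact", (1 - 0.2)**2 - 0.2**2)
```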

Partial least squares (PLS) is a dimensionality reduction technique used as an alternative to ordinary least squares (OLS) in situations where the data are collinear or high dimensional. Both PLS and OLS provide mean-based estimates, which are extremely sensitive to the presence of outliers or heavy-tailed distributions. In contrast, quantile regression is an alternative to OLS that computes robust quantile-based estimates. In this work, multivariate PLS is extended to the quantile regression framework, yielding a theoretical formulation of the problem and a robust dimensionality reduction technique, which we call fast partial quantile regression (fPQR), that provides quantile-based estimates. An efficient implementation of fPQR is also derived, and its performance is studied through simulation experiments and the well-known biscuit dough dataset from chemometrics, a real high-dimensional example.
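The fPQR algorithm itself is not reproduced here, but the quantile (pinball) loss underlying any quantile-based estimate is easy to state; a minimal numpy sketch (names ours) verifying that the loss is minimized at the empirical quantile:

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    """Check (pinball) loss, minimized by the tau-th conditional quantile."""
    r = y - y_hat
    return np.mean(np.maximum(tau * r, (tau - 1) * r))

# the tau-quantile of a sample minimizes the empirical pinball loss
y = np.random.default_rng(1).exponential(size=5000)
grid = np.linspace(0.0, 3.0, 301)
tau = 0.5
best = grid[np.argmin([pinball_loss(y, q, tau) for q in grid])]
print(best, "vs sample median", np.median(y))
```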

Nonlinear metrics, such as the F1-score, Matthews correlation coefficient, and Fowlkes-Mallows index, are often used to evaluate the performance of machine learning models, in particular when facing imbalanced datasets that contain more samples of one class than the other. Recent optimal decision tree algorithms have shown remarkable progress in producing trees that are optimal with respect to linear criteria, such as accuracy, but nonlinear metrics remain a challenge. To address this gap, we propose a novel algorithm based on bi-objective optimisation, which treats the misclassifications of each binary class as a separate objective. We show that, for a large class of metrics, the optimal tree lies on the Pareto frontier. Consequently, we obtain the optimal tree by using our method to generate the set of all nondominated trees. To the best of our knowledge, this is the first method to compute provably optimal decision trees for nonlinear metrics. Our approach leads to a trade-off when compared to optimising linear metrics: the resulting trees may be more desirable according to the given nonlinear metric at the expense of higher runtimes. Nevertheless, the experiments illustrate that runtimes are reasonable for the majority of the tested datasets.
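To illustrate the post-processing step this implies, a small sketch (with hypothetical error counts) that scans a set of nondominated (false negative, false positive) pairs and picks the tree maximizing F1; by the Pareto-frontier result above, this recovers the optimal tree for the metric:

```python
def f1_from_errors(fn, fp, n_pos):
    """F1 score of a binary classifier from its error counts."""
    tp = n_pos - fn
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / n_pos
    return 2 * precision * recall / (precision + recall)

# hypothetical nondominated (fn, fp) pairs from a bi-objective solver
pareto = [(0, 40), (2, 25), (5, 12), (9, 6), (15, 2)]
n_pos = 50
best = max(pareto, key=lambda e: f1_from_errors(*e, n_pos))
print("best tree on the frontier:", best)
```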

When studying treatment effects in multilevel studies, investigators commonly use (semi-)parametric estimators, which make strong parametric assumptions about the outcome, the treatment, and/or the correlation between individuals. We propose two nonparametric, doubly robust, asymptotically normal estimators of treatment effects that do not make such assumptions. The first estimator is an extension of the cross-fitting estimator to clustered settings. The second is a new estimator that uses conditional propensity scores and an outcome covariance model to improve efficiency. We apply our estimators in simulation and empirical studies and find that they consistently obtain the smallest standard errors.
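For orientation, a minimal sketch of the classical doubly robust (AIPW) score on simulated i.i.d. data; the paper's estimators add cross-fitting, clustering-aware nuisance models, and the covariance-based efficiency improvement, all of which are omitted here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 3))
e = 1 / (1 + np.exp(-X[:, 0]))            # true propensity score
A = rng.binomial(1, e)                    # treatment
Y = X @ np.array([1.0, 0.5, -0.5]) + 2.0 * A + rng.normal(size=n)

# nuisance fits (in the paper these would be cross-fitted, flexible learners)
ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

# AIPW / doubly robust scores for the average treatment effect (truth: 2.0)
psi = mu1 - mu0 + A * (Y - mu1) / ps - (1 - A) * (Y - mu0) / (1 - ps)
print("ATE estimate:", psi.mean(), "SE:", psi.std(ddof=1) / np.sqrt(n))
```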

This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features that arise in practice: autocorrelated errors, unequal measurement intervals, and clustering due to the unit of treatment assignment. We consider power formulas for both cross-sectional and longitudinal models and allow for covariates. An illustrative power analysis provides guidance on appropriate sample sizes. The key finding is that accounting for treatment timing increases the required sample sizes. Further, DID estimators have considerably more power than standard CITS and interrupted time series (ITS) estimators. An accompanying Shiny R dashboard performs the sample size calculations for the considered estimators.
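The paper's contribution is the variance expressions themselves; given any such closed-form standard error, power follows from the usual Wald calculation, sketched below (a generic formula, not the paper's specific one):

```python
from scipy.stats import norm

def power_two_sided(effect, se, alpha=0.05):
    """Power of a two-sided Wald test given the effect size and the
    standard error implied by a closed-form variance expression."""
    z = norm.ppf(1 - alpha / 2)
    t = effect / se
    return norm.cdf(t - z) + norm.cdf(-t - z)

# e.g. an effect of 0.25 SD with SE 0.08 from some DID design variance
print(power_two_sided(0.25, 0.08))
```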

We introduce a kernel estimator of the tail index of a right-censored Pareto-type distribution that generalizes the estimator of Worms and Worms (2014) in terms of weight coefficients. Under some regularity conditions, the asymptotic normality of the proposed estimator is established. In the framework of the second-order condition, we derive an asymptotically bias-reduced version of the new estimator. Through a simulation study, we conclude that one of the main features of the proposed kernel estimator is its smoothness, in contrast to the estimator of Worms and Worms, which behaves rather erratically as a function of the number of largest extreme values used. As expected, the bias decreases significantly compared to that of the non-smoothed estimator, at the cost of a slight increase in the mean squared error.
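For intuition, a numpy sketch contrasting the classical (non-smoothed) Hill estimator with a kernel-weighted variant built from the same log-spacings; the weighting shown is illustrative only, and the Kaplan-Meier-type corrections needed for censored data are omitted:

```python
import numpy as np

def hill(x, k):
    """Classical Hill estimator of the tail index from the k largest values."""
    xs = np.sort(x)[::-1]
    return np.mean(np.log(xs[:k]) - np.log(xs[k]))

def kernel_hill(x, k, kernel=lambda u: 2 * (1 - u)):
    """Kernel-smoothed variant: weight the rescaled log-spacings by
    kernel(i/(k+1)); uniform weights recover the Hill estimator."""
    xs = np.sort(x)[::-1]
    i = np.arange(1, k + 1)
    w = kernel(i / (k + 1))
    spacings = i * (np.log(xs[:k]) - np.log(xs[1:k + 1]))
    return np.sum(w * spacings) / np.sum(w)

x = np.random.default_rng(3).pareto(2.0, size=5000) + 1  # tail index 0.5
print(hill(x, 200), kernel_hill(x, 200))
```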

We develop a new robust geographically weighted regression method for data containing outliers. We embed the standard geographically weighted regression in a robust objective function based on the $\gamma$-divergence. A novel feature of the proposed approach is that the two tuning parameters controlling robustness and spatial smoothness are tuned automatically in a data-dependent manner. Further, the proposed method can produce robust standard error estimates for the robust estimator and provides a reasonable quantity for local outlier detection. We demonstrate that the proposed method is superior to an existing robust version of geographically weighted regression through simulations and data analysis.
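For reference, the $\gamma$-divergence on which the objective is built, in the form introduced by Fujisawa and Eguchi (2008) (normalization conventions vary across papers):

```latex
% gamma-divergence between densities g (data) and f (model);
% letting gamma -> 0 recovers the Kullback-Leibler divergence
D_\gamma(g\,\|\,f) =
  \frac{1}{\gamma(1+\gamma)}\log\!\int g^{1+\gamma}(x)\,dx
  - \frac{1}{\gamma}\log\!\int g(x)\,f^{\gamma}(x)\,dx
  + \frac{1}{1+\gamma}\log\!\int f^{1+\gamma}(x)\,dx,
  \qquad \gamma > 0 .
```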

Causal inference for extreme events has many potential applications in fields such as medicine, climate science and finance. We study the extremal quantile treatment effect of a binary treatment on a continuous, heavy-tailed outcome. Existing methods are limited to the case where the quantile of interest is within the range of the observations. For applications in risk assessment, however, the most relevant cases relate to extremal quantiles that go beyond the data range. We introduce an estimator of the extremal quantile treatment effect that relies on asymptotic tail approximations and uses a new causal Hill estimator for the extreme value indices of potential outcome distributions. We establish asymptotic normality of the estimators even in the setting of extremal quantiles, and we propose a consistent variance estimator to achieve valid statistical inference. In simulation studies we illustrate the advantages of our methodology over competitors, and we apply it to a real data set.
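Concretely, the target and the tail extrapolation it requires can be written as follows (a standard first-order Weissman-type approximation consistent with the setup above; the paper's exact construction may differ):

```latex
% extremal quantile treatment effect at level 1 - p, with each
% potential-outcome quantile extrapolated beyond the data range
% using its tail index gamma_a (estimated by the causal Hill estimator)
\theta(p) = Q_{Y(1)}(1-p) - Q_{Y(0)}(1-p),
\qquad
Q_{Y(a)}(1-p) \approx Q_{Y(a)}(1-p_0)\Big(\frac{p_0}{p}\Big)^{\gamma_a}
\quad \text{for } p < p_0 .
```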

Survival analysis is a critical tool for modelling time-to-event data, such as life expectancy after a cancer diagnosis or optimal maintenance scheduling for complex machinery. However, current neural network models provide an imperfect solution for survival analysis, as they either restrict the shape of the target probability distribution or restrict the estimation to pre-determined times. As a consequence, current survival neural networks lack the ability to estimate a generic function without prior knowledge of its structure. In this article, we present the metaparametric neural network framework, which encompasses existing survival analysis methods and enables their extension to resolve these issues. This framework allows survival neural networks to achieve the same independence of generic function estimation from the underlying data structure that characterizes their regression and classification counterparts. Further, we demonstrate the application of the metaparametric framework on both simulated and large real-world datasets and show that it outperforms the current state-of-the-art methods in (i) capturing nonlinearities and (ii) identifying temporal patterns, leading to more accurate overall estimation while placing no restrictions on the underlying function structure.

Heatmap-based methods dominate the field of human pose estimation by modelling the output distribution through likelihood heatmaps. In contrast, regression-based methods are more efficient but suffer from inferior performance. In this work, we explore maximum likelihood estimation (MLE) to develop an efficient and effective regression-based method. From the perspective of MLE, adopting a particular regression loss amounts to making an assumption about the output density function, and a density function closer to the true distribution leads to better regression performance. In light of this, we propose a novel regression paradigm with Residual Log-likelihood Estimation (RLE) to capture the underlying output distribution. Concretely, RLE learns the change of the distribution, instead of the unreferenced underlying distribution, to facilitate the training process. With the proposed reparameterization design, our method is compatible with off-the-shelf flow models. The proposed method is effective, efficient and flexible. We show its potential in various human pose estimation tasks with comprehensive experiments. Compared to the conventional regression paradigm, regression with RLE brings a 12.4 mAP improvement on MSCOCO without any test-time overhead. Moreover, for the first time, our regression-based method is superior to the heatmap-based methods, especially on multi-person pose estimation. Our code is available at //github.com/Jeff-sjtu/res-loglikelihood-regression
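The "losses encode density assumptions" point is easy to make concrete; a minimal numpy sketch (names ours): minimizing the Gaussian negative log-likelihood over the location recovers L2 regression, the Laplace one recovers L1, and RLE replaces the fixed family with a flow fitted to the residuals:

```python
import numpy as np

def gaussian_nll(y, mu, sigma=1.0):
    """-log N(y | mu, sigma^2); minimizing over mu is L2 regression."""
    return 0.5 * np.log(2 * np.pi * sigma**2) + (y - mu)**2 / (2 * sigma**2)

def laplace_nll(y, mu, b=1.0):
    """-log Laplace(y | mu, b); minimizing over mu is L1 regression."""
    return np.log(2 * b) + np.abs(y - mu) / b

# Up to constants, the Gaussian NLL is the squared error and the Laplace
# NLL is the absolute error; RLE instead learns the residual density with
# a normalizing flow, so no fixed family is assumed.
y, mu = np.array([1.0, -0.5, 2.0]), np.array([0.8, 0.0, 1.5])
print(gaussian_nll(y, mu), laplace_nll(y, mu))
```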
