
The Wilcoxon rank-sum test is one of the most popular distribution-free procedures for testing the equality of two univariate probability distributions. One of the main reasons for its popularity can be attributed to the remarkable result of Hodges and Lehmann (1956), which shows that the asymptotic relative efficiency of Wilcoxon's test with respect to Student's $t$-test, under location alternatives, never falls below 0.864, despite the former being exactly distribution-free for all sample sizes. Even more striking is the result of Chernoff and Savage (1958), which shows that the efficiency of a Gaussian-score-transformed Wilcoxon test, against the $t$-test, is lower bounded by 1. In this paper we study the two-sample problem in the multivariate setting and propose distribution-free analogues of the Hotelling $T^2$ test (the natural multidimensional counterpart of Student's $t$-test) based on optimal transport, obtaining extensions of the above celebrated results over various natural families of multivariate distributions. Our proposed tests are consistent against a general class of alternatives and satisfy Hodges-Lehmann and Chernoff-Savage-type efficiency lower bounds, despite being entirely agnostic to the underlying data generating mechanism. In particular, a collection of our proposed tests suffers no loss in asymptotic efficiency when compared to Hotelling $T^2$. To the best of our knowledge, this is the first collection of multivariate, nonparametric, exactly distribution-free tests that provably achieve such attractive efficiency lower bounds. We also demonstrate the broader scope of our methods in optimal transport based nonparametric inference by constructing exactly distribution-free multivariate tests for mutual independence, which suffer no loss in asymptotic efficiency against the classical Wilks' likelihood ratio test under Konijn alternatives.
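
To make the univariate starting point concrete, here is a minimal sketch of the classical Wilcoxon rank-sum test that the paper generalizes; this is not the authors' optimal-transport-based multivariate procedure, only the textbook statistic with its normal approximation (the function name and data below are illustrative).

```python
# Classical univariate Wilcoxon rank-sum test via the normal approximation.
import numpy as np
from scipy.stats import rankdata, norm

def wilcoxon_rank_sum(x, y):
    """Two-sided Wilcoxon rank-sum test; returns (statistic, p-value)."""
    m, n = len(x), len(y)
    ranks = rankdata(np.concatenate([x, y]))     # mid-ranks handle ties
    w = ranks[:m].sum()                          # rank sum of the first sample
    mu = m * (m + n + 1) / 2.0                   # null mean of W
    sigma = np.sqrt(m * n * (m + n + 1) / 12.0)  # null s.d. (no tie correction)
    z = (w - mu) / sigma
    return w, 2 * norm.sf(abs(z))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=50)
y = rng.normal(0.5, 1.0, size=60)                # location-shifted alternative
print(wilcoxon_rank_sum(x, y))
```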

Related Content

Rough path theory provides the notion of the signature, a graded family of tensors which characterise, up to a negligible equivalence class, an ordered stream of vector-valued data. In the last few years, use of the signature has gained traction in time-series analysis, machine learning, deep learning and, more recently, kernel methods. In this article, we lay down the theoretical foundations for a connection between signature asymptotics, the theory of empirical processes, and Wasserstein distances, opening up the landscape and toolkit of the second and third in the study of the first. Our main contribution is to show that the Hambly-Lyons limit can be reinterpreted as a statement about the asymptotic behaviour of Wasserstein distances between two independent empirical measures of samples from the same underlying distribution. In the setting studied here, these measures are derived from samples from a probability distribution which is determined by geometrical properties of the underlying path. The general question of rates of convergence for these objects has been studied in depth in the recent monograph of Bobkov and Ledoux. Using these results, we generalise the original result of Hambly and Lyons from $C^3$ curves to a broad class of $C^2$ ones. We conclude by providing an explicit way to compute the limit in terms of a second-order differential equation.
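
As a concrete illustration of the central object, the following sketch computes the $p$-Wasserstein distance between two independent empirical measures on the real line, where the optimal coupling simply matches order statistics (the one-dimensional setting treated by Bobkov and Ledoux); the function name is our own.

```python
# W_p between two empirical measures with n atoms each, on the real line.
import numpy as np

def empirical_wasserstein_p(x, y, p=2):
    """For equal sample sizes in 1-D, the optimal coupling sorts both samples."""
    xs, ys = np.sort(x), np.sort(y)
    return (np.mean(np.abs(xs - ys) ** p)) ** (1.0 / p)

rng = np.random.default_rng(1)
n = 2000
d = empirical_wasserstein_p(rng.normal(size=n), rng.normal(size=n))
print(f"W_2 between two empirical measures of {n} samples: {d:.4f}")
```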

Nonlinear metrics, such as the F1-score, Matthews correlation coefficient, and Fowlkes-Mallows index, are often used to evaluate the performance of machine learning models, in particular when facing imbalanced datasets that contain more samples of one class than the other. Recent optimal decision tree algorithms have shown remarkable progress in producing trees that are optimal with respect to linear criteria, such as accuracy, but unfortunately nonlinear metrics remain a challenge. To address this gap, we propose a novel algorithm based on bi-objective optimisation, which treats the misclassifications of each binary class as a separate objective. We show that, for a large class of metrics, the optimal tree lies on the Pareto frontier. Consequently, we obtain the optimal tree by using our method to generate the set of all nondominated trees. To the best of our knowledge, this is the first method to compute provably optimal decision trees for nonlinear metrics. Our approach leads to a trade-off when compared to optimising linear metrics: the resulting trees may be more desirable according to the given nonlinear metric at the expense of higher runtimes. Nevertheless, our experiments illustrate that runtimes are reasonable for the majority of the tested datasets.
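
A toy illustration of the bi-objective reduction (not the authors' tree-search algorithm): among candidate classifiers summarized by their (false positive, false negative) counts, an F1-optimal one is never dominated, so it suffices to search the Pareto frontier. All names and numbers below are hypothetical.

```python
# Pick the F1-optimal candidate from the Pareto frontier of (FP, FN) pairs.
def f1_score(fp, fn, pos):
    """F1 from error counts; `pos` is the number of positive examples."""
    tp = pos - fn
    return 2 * tp / (2 * tp + fp + fn) if tp > 0 else 0.0

def pareto_frontier(points):
    """Nondominated (fp, fn) pairs: no other point is <= in both coordinates."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)]

candidates = [(0, 9), (2, 4), (3, 5), (5, 2), (10, 0)]  # hypothetical trees
front = pareto_frontier(candidates)                     # (3, 5) is dominated
best = max(front, key=lambda p: f1_score(*p, pos=10))
print(front, "-> best F1 at", best)
```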

Testing for independence is of fundamental importance in modern data analysis, with broad applications in variable selection, graphical models, and causal inference. When the data are high dimensional and the potential dependence signal is sparse, independence testing becomes very challenging without distributional or structural assumptions. In this paper we propose a general framework for independence testing by first fitting a classifier that distinguishes the joint and product distributions, and then testing the significance of the fitted classifier. This framework allows us to borrow the strength of the most advanced classification algorithms developed by the modern machine learning community, making it applicable to high dimensional, complex data. By combining a sample split with a fixed permutation, our test statistic has a universal, fixed Gaussian null distribution that is independent of the underlying data distribution. Extensive simulations demonstrate the advantages of the newly proposed test compared with existing methods. We further apply the new test to a single-cell dataset to test the independence between two types of single-cell sequencing measurements, whose high dimensionality and sparsity make existing methods hard to apply.
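
The following sketch conveys the general recipe under our reading of the abstract: create labeled examples of the joint distribution versus the product of marginals (by permuting one coordinate), fit any classifier on one half of the data, and assess on the held-out half whether it beats chance. The exact statistic, the fixed permutation scheme, and the Gaussian null calibration follow the paper and are not reproduced here; `LogisticRegression` is just a placeholder classifier.

```python
# Classifier-based independence test: joint (label 1) vs product (label 0).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 4000
x = rng.normal(size=(n, 5))
y = x[:, :1] + rng.normal(size=(n, 1))        # dependent alternative

# Permuting y within the sample breaks dependence, emulating the product law.
perm = rng.permutation(n)
Z = np.vstack([np.hstack([x, y]), np.hstack([x, y[perm]])])
labels = np.r_[np.ones(n), np.zeros(n)]

# Sample split: train on one half, evaluate on the held-out half.
tr = rng.permutation(2 * n)[:n]
te = np.setdiff1d(np.arange(2 * n), tr)
clf = LogisticRegression(max_iter=1000).fit(Z[tr], labels[tr])
acc = clf.score(Z[te], labels[te])
z = (acc - 0.5) * np.sqrt(len(te)) / 0.5      # normal approximation for accuracy
print(f"held-out accuracy {acc:.3f}, z-statistic {z:.2f}")
```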

The matrix normal model, the family of Gaussian matrix-variate distributions whose covariance matrix is the Kronecker product of two lower dimensional factors, is frequently used to model matrix-variate data. The tensor normal model generalizes this family to Kronecker products of three or more factors. We study the estimation of the Kronecker factors of the covariance matrix in the matrix and tensor models. We establish nonasymptotic bounds on the error achieved by the maximum likelihood estimator (MLE) in several natural metrics. In contrast to existing bounds, our results do not rely on the factors being well-conditioned or sparse. For the matrix normal model, all our bounds are minimax optimal up to logarithmic factors, and for the tensor normal model our bounds for the largest factor and for the overall covariance matrix are minimax optimal up to constant factors, provided there are enough samples for any estimator to obtain constant Frobenius error. In the same regimes as our sample complexity bounds, we show that an iterative procedure for computing the MLE, known as the flip-flop algorithm, converges linearly with high probability. Our main tool is geodesic strong convexity in the geometry on positive-definite matrices induced by the Fisher information metric. This strong convexity is determined by the expansion of certain random quantum channels. We also provide numerical evidence that combining the flip-flop algorithm with a simple shrinkage estimator can improve performance in the undersampled regime.
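
For concreteness, here is a minimal sketch of the flip-flop iteration for the matrix normal MLE, alternating closed-form updates of the two Kronecker factors; the initialization, fixed iteration count, and sampling convention below are the simplest possible choices, not necessarily the paper's.

```python
# Flip-flop iteration for the matrix normal MLE (row factor A, column factor B).
import numpy as np

def flip_flop(X, iters=50):
    """X has shape (n, p, q); returns estimates of A (p x p) and B (q x q)."""
    n, p, q = X.shape
    A, B = np.eye(p), np.eye(q)
    for _ in range(iters):
        Binv = np.linalg.inv(B)
        A = sum(Xi @ Binv @ Xi.T for Xi in X) / (n * q)  # row-covariance update
        Ainv = np.linalg.inv(A)
        B = sum(Xi.T @ Ainv @ Xi for Xi in X) / (n * p)  # column-covariance update
    return A, B

rng = np.random.default_rng(3)
p, q, n = 4, 3, 500
La = rng.normal(size=(p, p)); A0 = La @ La.T + np.eye(p)
Lb = rng.normal(size=(q, q)); B0 = Lb @ Lb.T + np.eye(q)
# Sample X = Ra Z Rb^T with Z i.i.d. standard normal, so cov(vec X) = B0 kron A0.
Ra, Rb = np.linalg.cholesky(A0), np.linalg.cholesky(B0)
X = np.stack([Ra @ rng.normal(size=(p, q)) @ Rb.T for _ in range(n)])
A, B = flip_flop(X)
# Factors are identified only up to scale, so compare the Kronecker products.
err = np.linalg.norm(np.kron(A, B) - np.kron(A0, B0)) / np.linalg.norm(np.kron(A0, B0))
print(f"relative error of Kronecker product: {err:.3f}")
```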

We introduce a kernel estimator of the tail index of a right-censored Pareto-type distribution that generalizes Worms's estimator (Worms and Worms, 2014) in terms of weight coefficients. Under some regularity conditions, the asymptotic normality of the proposed estimator is established. In the framework of the second-order condition, we derive an asymptotically bias-reduced version of the new estimator. Through a simulation study, we conclude that one of the main features of the proposed kernel estimator is its smoothness, in contrast to Worms's estimator, which behaves rather erratically as a function of the number of largest extreme values. As expected, the bias decreases significantly compared to that of the non-smoothed estimator, with, however, a slight increase in the mean squared error.
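
For orientation, here is a sketch of the classical (uncensored) Hill estimator of a Pareto-type tail index, the baseline that kernel-weighted and censoring-adjusted estimators such as Worms's refine; the proposed estimator's weight coefficients and censoring correction are not reproduced here.

```python
# Hill's estimator of the extreme-value index from the k largest order statistics.
import numpy as np

def hill_estimator(x, k):
    """Average log-excess of the k largest values over the (k+1)-th largest."""
    xs = np.sort(x)[::-1]                      # descending order statistics
    return np.mean(np.log(xs[:k]) - np.log(xs[k]))

rng = np.random.default_rng(4)
gamma = 0.5                                    # true extreme-value index
x = rng.pareto(1.0 / gamma, size=5000) + 1.0   # Pareto(alpha=2) => gamma = 0.5
print([round(hill_estimator(x, k), 3) for k in (50, 100, 200, 400)])
```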

Optimal transport is a framework for comparing measures whereby a cost is incurred for transporting one measure to another. Recent works have aimed to improve optimal transport plans through the introduction of various forms of structure. We introduce novel order constraints into the optimal transport formulation to allow for the incorporation of structure. While this introduces quadratically many constraints, we prove that a $\delta$-approximate solution to the order-constrained optimal transport problem can be obtained in $\mathcal{O}(L^2\delta^{-2} \kappa(\delta(2cL_\infty (1+(mn)^{1/2}))^{-1}) \cdot mn\log mn)$ time. We derive computationally efficient lower bounds that allow for an explainable approach to adding structure to the optimal transport plan through order constraints. We demonstrate experimentally that order constraints improve explainability using the e-SNLI (Stanford Natural Language Inference) dataset, which includes human-annotated rationales for each assignment.
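
As background, the sketch below implements plain entropically regularized optimal transport via Sinkhorn iterations between two discrete measures; the paper's order constraints are additional linear constraints layered on top of this basic formulation and are not implemented here.

```python
# Entropic optimal transport between two histograms via Sinkhorn iterations.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Entropic OT plan between histograms a (m,) and b (n,) with cost C (m, n)."""
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        u = a / (K @ v)                        # match row marginals
        v = b / (K.T @ u)                      # match column marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(5)
m, n = 6, 8
x, y = np.sort(rng.normal(size=m)), np.sort(rng.normal(size=n))
C = (x[:, None] - y[None, :]) ** 2
C = C / C.max()                                # normalize cost for numerical stability
P = sinkhorn(np.full(m, 1 / m), np.full(n, 1 / n), C)
print(P.round(3), "transport cost:", (P * C).sum())
```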

Optimal $k$-thresholding algorithms are a class of sparse signal recovery algorithms that overcome the shortcomings of traditional hard thresholding algorithms caused by the oscillation of the residual function. In this paper, we provide a novel theoretical analysis of the data-time tradeoffs of optimal $k$-thresholding algorithms. Both the analysis and numerical results demonstrate that when the number of measurements is small, the algorithms cannot converge; when the number of measurements is suitably large, the number of measurements required for successful recovery is negatively correlated with the number of iterations, and the algorithms achieve linear convergence. Furthermore, the theory shows that the transition point in the number of measurements is of order $k \log({en}/{k})$, where $n$ is the dimension of the target signal.
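
To fix ideas, here is vanilla iterative hard thresholding (IHT), the baseline whose residual oscillation motivates optimal $k$-thresholding; the optimal-$k$-thresholding step itself (an optimization over binary selection weights) is not implemented in this sketch, and the constant in the measurement count is an arbitrary illustrative choice.

```python
# Vanilla iterative hard thresholding for sparse recovery from y = A x.
import numpy as np

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of z, zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

def iht(y, A, k, iters=200):
    n = A.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step size
    x = np.zeros(n)
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), k)
    return x

rng = np.random.default_rng(6)
n, k = 200, 5
m = int(4 * k * np.log(np.e * n / k))          # measurements ~ k log(en/k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
x_hat = iht(A @ x_true, A, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```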

The Gaussian distribution is widely used in mechanism design for differential privacy (DP). Thanks to its sub-Gaussian tail, it significantly reduces the chance of outliers when responding to queries. However, it can only provide approximate $(\epsilon, \delta(\epsilon))$-DP. In practice, $\delta(\epsilon)$ must be much smaller than the inverse of the size of the dataset, which may limit the use of the Gaussian mechanism for large datasets with strong privacy requirements. In this paper, we introduce and analyze a new distribution for use in DP that is based on the Gaussian distribution but has improved privacy performance. The so-called offset-symmetric Gaussian tail (OSGT) distribution is obtained from the normalized tails of two Gaussians symmetric about zero. Consequently, it still has a sub-Gaussian tail and lends itself to analytical derivations. We analytically derive the variance of the OSGT random variable and the $\delta(\epsilon)$ of the OSGT mechanism. We then numerically show that, at the same variance, the OSGT mechanism can offer a lower $\delta(\epsilon)$ than the Gaussian mechanism. We extend the OSGT mechanism to $k$-dimensional queries and derive an easy-to-compute analytical upper bound on its zero-concentrated differential privacy (zCDP) performance. We analytically prove that, at the same variance and the same global query sensitivity, and for sufficiently large concentration orders $\alpha$, the OSGT mechanism performs better than the Gaussian mechanism in terms of zCDP.
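
For a baseline comparison, the sketch below implements the standard Gaussian mechanism together with the tight $\delta(\epsilon)$ curve of the analytic Gaussian mechanism (Balle and Wang, 2018); the OSGT mechanism modifies the noise distribution itself and is not reproduced here.

```python
# Standard Gaussian mechanism and its tight delta(eps) curve.
import numpy as np
from scipy.stats import norm

def gaussian_mechanism(query_value, sigma, rng):
    """Release the query value perturbed by N(0, sigma^2) noise."""
    return query_value + rng.normal(0.0, sigma)

def gaussian_delta(eps, sensitivity, sigma):
    """Tight delta(eps) for the Gaussian mechanism (Balle & Wang, 2018)."""
    r = sensitivity / sigma
    return norm.cdf(r / 2 - eps / r) - np.exp(eps) * norm.cdf(-r / 2 - eps / r)

rng = np.random.default_rng(7)
print(gaussian_mechanism(42.0, sigma=2.0, rng=rng))
for eps in (0.5, 1.0, 2.0):
    print(eps, gaussian_delta(eps, sensitivity=1.0, sigma=2.0))
```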

Implicit probabilistic models are models defined naturally in terms of a sampling procedure; they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.
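
A toy example of what "implicit" means here: sampling is easy, but the induced density has no closed form, so likelihood-based fitting is unavailable. The crude moment-matching fit below is only a stand-in to illustrate likelihood-free estimation, not the paper's method; `theta` and the simulator are hypothetical.

```python
# A simulator-defined (implicit) model: easy to sample, intractable density.
import numpy as np

def simulate(theta, n, rng):
    """Push standard Gaussian noise through a nonlinear map plus small noise."""
    z = rng.normal(size=n)
    return np.tanh(theta * z) + 0.1 * rng.normal(size=n)

rng = np.random.default_rng(8)
data = simulate(theta=1.5, n=1000, rng=rng)
# Likelihood-free fitting by matching a summary statistic (variance) on a grid.
grid = np.linspace(0.1, 3.0, 30)
losses = [abs(np.var(simulate(t, 100000, rng)) - np.var(data)) for t in grid]
print("theta-hat:", grid[int(np.argmin(losses))])
```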

In this paper, we tackle the problem of reducing discrepancies between multiple domains, referred to as multi-source domain adaptation, and consider it under the target shift assumption: in all domains we aim to solve a classification problem with the same output classes, but with label proportions differing across them. We design a method based on optimal transport, a theory that is gaining momentum in tackling adaptation problems in machine learning due to its efficiency in aligning probability distributions. Our method performs multi-source adaptation and target shift correction simultaneously by learning the class probabilities of the unlabeled target sample and a coupling that aligns two (or more) probability distributions. Experiments on both synthetic data and real-world data from a satellite image segmentation task show the superiority of the proposed method over the state of the art.
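
A heavily simplified, one-source, one-dimensional sketch of the target-shift idea: reweight the source classes so that the reweighted source distribution is as close as possible to the unlabeled target sample (here in a Wasserstein-1 sense via CDFs), thereby estimating the target class proportions. The paper instead learns the proportions jointly with optimal transport couplings across multiple sources; everything below is illustrative.

```python
# Estimate target class proportions under target shift by reweighted W1 matching.
import numpy as np

def w1_weighted(source, weights, target, grid):
    """Approximate W1 via the integrated absolute difference of (weighted) CDFs."""
    cdf_s = np.array([(weights * (source <= t)).sum() for t in grid]) / weights.sum()
    cdf_t = np.array([(target <= t).mean() for t in grid])
    return np.sum(np.abs(cdf_s - cdf_t)) * (grid[1] - grid[0])

rng = np.random.default_rng(9)
xs = np.r_[rng.normal(-2, 1, 500), rng.normal(2, 1, 500)]   # source: 50/50 classes
ys = np.r_[np.zeros(500), np.ones(500)]
xt = np.r_[rng.normal(-2, 1, 200), rng.normal(2, 1, 800)]   # target: 20/80 mix
grid = np.linspace(-6, 6, 400)
props = np.linspace(0.05, 0.95, 19)
cost = [w1_weighted(xs, np.where(ys == 1, p / 0.5, (1 - p) / 0.5), xt, grid)
        for p in props]
print("estimated P(class 1) in target:", props[int(np.argmin(cost))])  # ~0.8
```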
