
Maximum Mean Discrepancy (MMD) has been widely used in machine learning and statistics to quantify the distance between two distributions in the $p$-dimensional Euclidean space. The asymptotic properties of the sample MMD have been well studied when the dimension $p$ is fixed, using the theory of U-statistics. Motivated by the frequent use of the MMD test for data of moderate/high dimension, we propose to investigate the behavior of the sample MMD in a high-dimensional environment and develop a new studentized test statistic. Specifically, we obtain central limit theorems for the studentized sample MMD as both the dimension $p$ and the sample sizes $n,m$ diverge to infinity. Our results hold for a wide range of kernels, including the popular Gaussian and Laplacian kernels, and also cover energy distance as a special case. We also derive the explicit rate of convergence under mild assumptions, and our results suggest that the accuracy of the normal approximation can improve with dimensionality. Additionally, we provide a general theory for the power analysis under the alternative hypothesis and show that our proposed test can detect the difference between two distributions in the moderately high-dimensional regime. Numerical simulations demonstrate the effectiveness of our proposed test statistic and normal approximation.
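
As a hedged illustration of the quantities involved (using a permutation calibration rather than the paper's studentization and normal approximation), the following sketch computes the unbiased U-statistic estimate of the squared MMD with a Gaussian kernel; the bandwidth and the number of permutations are illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    """Gaussian kernel matrix K_ij = exp(-||x_i - y_j||^2 / (2 * bandwidth^2))."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth):
    """U-statistic estimate of MMD^2 between samples X (n x p) and Y (m x p)."""
    n, m = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2.0 * Kxy.mean())

def mmd_permutation_test(X, Y, bandwidth, n_perm=500, seed=0):
    """Two-sample test: p-value of the observed MMD^2 under random relabelings."""
    rng = np.random.default_rng(seed)
    Z, n = np.vstack([X, Y]), len(X)
    obs = mmd2_unbiased(X, Y, bandwidth)
    null = np.array([mmd2_unbiased(*np.split(Z[rng.permutation(len(Z))], [n]), bandwidth)
                     for _ in range(n_perm)])
    return obs, (1 + np.sum(null >= obs)) / (1 + n_perm)
```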

Related content

The asymptotic behaviour of Linear Spectral Statistics (LSS) of the smoothed periodogram estimator of the spectral coherency matrix of a complex Gaussian high-dimensional time series $(\mathbf{y}_n)_{n \in \mathbb{Z}}$ with independent components is studied under the asymptotic regime where the sample size $N$ converges towards $+\infty$ while the dimension $M$ of $\mathbf{y}$ and the smoothing span of the estimator grow to infinity at the same rate, in such a way that $\frac{M}{N} \rightarrow 0$. It is established that, at each frequency, the estimated spectral coherency matrix is close to the sample covariance matrix of an independent identically $\mathcal{N}_{\mathbb{C}}(0,\mathbf{I}_M)$-distributed sequence, and that its empirical eigenvalue distribution converges towards the Marcenko-Pastur distribution. This allows us to conclude that each LSS has a deterministic behaviour that can be evaluated explicitly. Using concentration inequalities, it is shown that the supremum over the frequencies of the deviation of each LSS from its deterministic approximation is of the order of $\frac{1}{M} + \frac{\sqrt{M}}{N}+ \left(\frac{M}{N}\right)^{3}$. Numerical simulations support our results.
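
For concreteness only (this is not the paper's analysis), the sketch below forms a frequency-smoothed periodogram estimate of the spectral coherency matrix for an i.i.d. complex Gaussian time series with independent components and extracts its eigenvalues, whose empirical distribution should be close to a Marcenko-Pastur law with ratio roughly $M$ divided by the smoothing span; the window size and normalization are assumptions.

```python
import numpy as np

def smoothed_spectral_coherency(Y, B, freq_index):
    """Y: M x N complex time series (independent rows). Returns the estimated
    spectral coherency matrix at the Fourier frequency `freq_index`, obtained by
    averaging 2B+1 neighbouring periodogram cross-terms and renormalizing."""
    M, N = Y.shape
    F = np.fft.fft(Y, axis=1) / np.sqrt(N)               # renormalized Fourier transforms
    idx = (freq_index + np.arange(-B, B + 1)) % N        # smoothing window of frequencies
    S_hat = F[:, idx] @ F[:, idx].conj().T / (2 * B + 1) # smoothed spectral estimate
    d = np.sqrt(np.real(np.diag(S_hat)))
    return S_hat / np.outer(d, d)                        # normalize to coherency

# Eigenvalues should roughly follow a Marcenko-Pastur law with ratio M / (2B + 1).
rng = np.random.default_rng(0)
M, N, B = 50, 10_000, 200
Y = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
eigvals = np.linalg.eigvalsh(smoothed_spectral_coherency(Y, B, freq_index=N // 4))
```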

Variational autoencoders, being built from neural networks, can overfit when too many neural units are used. We develop an adaptive dimension-reduction algorithm that automatically learns the dimension of the latent-variable vector and, moreover, the dimension of every hidden layer. The approach applies not only to the variational autoencoder (VAE) but also to variants such as the Conditional VAE (CVAE), and we report empirical results on six data sets that demonstrate the universality and efficiency of the algorithm. Its key advantage is that it converges to a latent-variable dimension that approximately minimizes the VAE loss, while also speeding up generation and computation by reducing the number of neural units.
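
The abstract does not describe the algorithm itself; as a loosely related, purely illustrative heuristic (not the authors' method), the sketch below flags "collapsed" latent dimensions of a Gaussian-posterior VAE by their average per-dimension KL contribution, which is one common way to judge how many latent units are actually used.

```python
import numpy as np

def active_latent_dims(mu, logvar, kl_threshold=0.01):
    """mu, logvar: arrays of shape (num_samples, latent_dim) holding the encoder's
    Gaussian posterior parameters over a batch. Dimensions whose average
    KL( N(mu_j, sigma_j^2) || N(0, 1) ) stays below `kl_threshold` carry almost no
    information and are candidates for pruning. The threshold is arbitrary."""
    kl_per_dim = 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)      # KL per sample and dimension
    return np.flatnonzero(kl_per_dim.mean(axis=0) >= kl_threshold)  # indices worth keeping
```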

In the random geometric graph model $\mathsf{Geo}_d(n,p)$, we identify each of our $n$ vertices with an independently and uniformly sampled vector from the $d$-dimensional unit sphere, and we connect pairs of vertices whose vectors are ``sufficiently close'', such that the marginal probability of an edge is $p$. We investigate the problem of testing for this latent geometry, or in other words, distinguishing an Erd\H{o}s-R\'enyi graph $\mathsf{G}(n, p)$ from a random geometric graph $\mathsf{Geo}_d(n, p)$. It is not too difficult to show that if $d\to \infty$ while $n$ is held fixed, the two distributions become indistinguishable; we wish to understand how fast $d$ must grow as a function of $n$ for indistinguishability to occur. When $p = \frac{\alpha}{n}$ for constant $\alpha$, we prove that if $d \ge \mathrm{polylog} n$, the total variation distance between the two distributions is close to $0$; this improves upon the best previous bound of Brennan, Bresler, and Nagaraj (2020), which required $d \gg n^{3/2}$, and further our result is nearly tight, resolving a conjecture of Bubeck, Ding, Eldan, \& R\'{a}cz (2016) up to logarithmic factors. We also obtain improved upper bounds on the statistical indistinguishability thresholds in $d$ for the full range of $p$ satisfying $\frac{1}{n}\le p\le \frac{1}{2}$, improving upon the previous bounds by polynomial factors. Our analysis uses the Belief Propagation algorithm to characterize the distributions of (subsets of) the random vectors {\em conditioned on producing a particular graph}. In this sense, our analysis is connected to the ``cavity method'' from statistical physics. To analyze this process, we rely on novel sharp estimates for the area of the intersection of a random sphere cap with an arbitrary subset of the sphere, which we prove using optimal transport maps and entropy-transport inequalities on the unit sphere.
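
As a hedged illustration of the two models being compared (not of the proof technique), the sketch below samples both graphs and compares a simple statistic; the connection threshold is taken as the empirical $(1-p)$-quantile of the pairwise inner products so that the edge density is approximately $p$, an illustrative shortcut rather than the exact $t_{p,d}$.

```python
import numpy as np

def sample_er(n, p, rng):
    """Erdos-Renyi adjacency matrix G(n, p)."""
    A = np.triu(rng.random((n, n)) < p, 1)
    return A | A.T

def sample_geo(n, p, d, rng):
    """Random geometric graph on the sphere S^{d-1}: connect i and j when their
    inner product exceeds a threshold giving edge density about p."""
    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)        # uniform points on the sphere
    G = X @ X.T
    t = np.quantile(G[np.triu_indices(n, 1)], 1 - p)     # empirical threshold
    A = np.triu(G > t, 1)
    return A | A.T

# With small d, latent geometry inflates the triangle count relative to G(n, p).
rng = np.random.default_rng(0)
n, p, d = 500, 0.05, 3
triangles = lambda A: np.trace(np.linalg.matrix_power(A.astype(int), 3)) // 6
print(triangles(sample_geo(n, p, d, rng)), triangles(sample_er(n, p, rng)))
```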

In this paper, we propose a deterministic variational inference approach and generate low-discrepancy points by minimizing the kernel discrepancy, also known as the Maximum Mean Discrepancy (MMD). Based on the general energetic variational inference framework of Wang et al. (2021), minimizing the kernel discrepancy is transformed into solving a dynamic ODE system via the explicit Euler scheme. We name the resulting algorithm EVI-MMD and demonstrate it through examples in which the target distribution is fully specified, partially specified up to the normalizing constant, and empirically known in the form of training data. Its performance is satisfactory compared to alternative methods in the applications of distribution approximation, numerical integration, and generative learning. The EVI-MMD algorithm overcomes the bottleneck of existing MMD-descent algorithms, which are mostly applicable to two-sample problems. Algorithms with more sophisticated structures and potential advantages can be developed under the EVI framework.
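
A minimal sketch in the spirit of the described approach for a target known only through training samples: particle positions follow explicit Euler steps on the gradient of the squared MMD under a Gaussian kernel. The kernel, bandwidth, step size, and initialization are illustrative assumptions rather than the EVI-MMD specification.

```python
import numpy as np

def mmd_particle_descent(target, n_particles=100, steps=500, step_size=0.1,
                         bandwidth=1.0, seed=0):
    """Move particles X to decrease MMD^2(X, target samples) under a Gaussian
    kernel, using explicit Euler steps on the Euclidean gradient."""
    rng = np.random.default_rng(seed)
    m, p = target.shape
    n = n_particles
    X = rng.standard_normal((n, p))                      # initial particle positions

    def grad_sum_k(A, B):
        # Gradient with respect to each a_i of sum_j k(a_i, b_j), Gaussian kernel.
        diff = A[:, None, :] - B[None, :, :]             # shape (nA, nB, p)
        K = np.exp(-np.sum(diff**2, axis=2) / (2 * bandwidth**2))
        return -(K[:, :, None] * diff).sum(axis=1) / bandwidth**2

    for _ in range(steps):
        # d MMD^2 / d X_i = (2/n^2) sum_j grad k(X_i, X_j) - (2/(nm)) sum_j grad k(X_i, Y_j)
        g = 2.0 * grad_sum_k(X, X) / n**2 - 2.0 * grad_sum_k(X, target) / (n * m)
        X = X - step_size * g                            # explicit Euler step
    return X

# Usage: approximate a shifted 2-d Gaussian known only through samples.
Y = np.random.default_rng(1).standard_normal((500, 2)) + np.array([3.0, -1.0])
particles = mmd_particle_descent(Y)
```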

In this paper, we consider the problem of determining the presence of a given signal in a high-dimensional observation with unknown covariance matrix by using an adaptive matched filter. Traditionally such filters are formed from the sample covariance matrix of some given training data, but, as is well known, the performance of such filters is poor when the number of training samples $n$ is not much larger than the data dimension $p$. We thus seek a covariance estimator to replace the sample covariance. To account for the fact that $n$ and $p$ may be of comparable size, we adopt the "large-dimensional asymptotic model" in which $n$ and $p$ go to infinity in a fixed ratio. Under this assumption, we identify a covariance estimator that is asymptotically detection-theoretically optimal within a general shrinkage class inspired by C. Stein, and we give consistent estimators of the conditional false-alarm and detection rates of the corresponding adaptive matched filter.
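
As a hedged sketch of the ingredients only (not the asymptotically optimal estimator identified in the paper), the code below evaluates an adaptive matched filter statistic with a simple linear shrinkage of the sample covariance toward a scaled identity; the shrinkage weight `rho` is left as a free, illustrative parameter.

```python
import numpy as np

def shrinkage_amf_statistic(x, s, training, rho):
    """Adaptive matched filter statistic |s^H R^{-1} x|^2 / (s^H R^{-1} s), where
    R = rho * (tr(S)/p) * I + (1 - rho) * S shrinks the sample covariance S of the
    zero-mean training data (n x p) toward a scaled identity. The statistic is
    compared to a threshold chosen for the desired false-alarm rate."""
    n, p = training.shape
    S = training.conj().T @ training / n                  # sample covariance
    R = rho * (np.trace(S).real / p) * np.eye(p) + (1 - rho) * S
    Rinv_x = np.linalg.solve(R, x)
    Rinv_s = np.linalg.solve(R, s)
    return np.abs(np.vdot(s, Rinv_x)) ** 2 / np.real(np.vdot(s, Rinv_s))
```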

We introduce a general non-parametric independence test between right-censored survival times and covariates, which may be multivariate. Our test statistic has a dual interpretation, first in terms of the supremum of a potentially infinite collection of weight-indexed log-rank tests, with weight functions belonging to a reproducing kernel Hilbert space (RKHS) of functions; and second, as the norm of the difference of embeddings of certain finite measures into the RKHS, similar to the Hilbert-Schmidt Independence Criterion (HSIC) test-statistic. We study the asymptotic properties of the test, finding sufficient conditions to ensure our test correctly rejects the null hypothesis under any alternative. The test statistic can be computed straightforwardly, and the rejection threshold is obtained via an asymptotically consistent Wild Bootstrap procedure. Extensive investigations on both simulated and real data suggest that our testing procedure generally performs better than competing approaches in detecting complex non-linear dependence.

We develop necessary conditions for geometrically fast convergence in the Wasserstein distance for Metropolis-Hastings algorithms on $\mathbb{R}^d$ when the metric used is a norm. This is accomplished through a lower bound which is of independent interest. We show that exact convergence expressions in more general Wasserstein distances (e.g., total variation) can be achieved for a large class of distributions by centering an independent Gaussian proposal, that is, by matching the optimal points of the proposal and target densities. This approach has applications for sampling the posteriors of many popular Bayesian generalized linear models. In the case of Bayesian binary response regression, we show that when the sample size $n$ and the dimension $d$ grow in such a way that the ratio $d/n \to \gamma \in (0, +\infty)$, the exact convergence rate can be upper bounded asymptotically.
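
A minimal sketch of the proposal-centering idea for an independence Metropolis-Hastings sampler: the Gaussian proposal is centered at the target's mode (found here with a generic optimizer) so that the optimal points of the proposal and target match; the identity proposal covariance is an illustrative choice.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def centered_independence_mh(log_target, d, n_samples, seed=0):
    """Independence Metropolis-Hastings with a Gaussian proposal centered at the
    target's mode, so the proposal and target share their optimal point."""
    rng = np.random.default_rng(seed)
    mode = minimize(lambda x: -log_target(x), np.zeros(d)).x    # match the optimal points
    proposal = multivariate_normal(mean=mode, cov=np.eye(d))
    x = mode.copy()
    samples = np.empty((n_samples, d))
    for t in range(n_samples):
        y = np.atleast_1d(proposal.rvs(random_state=rng))
        # Independence MH log acceptance ratio: log pi(y) - log pi(x) + log q(x) - log q(y)
        log_alpha = (log_target(y) - log_target(x)
                     + proposal.logpdf(x) - proposal.logpdf(y))
        if np.log(rng.random()) < log_alpha:
            x = y
        samples[t] = x
    return samples

# Usage: a standard Gaussian target in 5 dimensions (illustrative).
draws = centered_independence_mh(lambda x: -0.5 * np.sum(x**2), d=5, n_samples=2000)
```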

Covariate shift is a common problem in predictive modeling on real-world data. This paper proposes addressing the covariate shift problem by minimizing Maximum Mean Discrepancy (MMD) statistics between the training and test sets in either the feature input space, the feature representation space, or both. We design three techniques, which we call MMD Representation, MMD Mask, and MMD Hybrid, to deal with the scenarios where only a distribution shift exists, only a missingness shift exists, or both types of shift exist, respectively. We find that integrating an MMD loss component helps models use the best features for generalization and avoid dangerous extrapolation as much as possible for each test sample. Models trained with this MMD approach show better performance, calibration, and extrapolation on the test set.
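
A hedged sketch of one way an MMD penalty between training-batch and test-batch representations can be added to a supervised loss, in the spirit of the "MMD Representation" idea; the Gaussian kernel, bandwidth, weight `lam`, and the assumption that the model returns a (features, prediction) pair are all illustrative.

```python
import torch
import torch.nn.functional as F

def mmd2(a, b, bandwidth=1.0):
    """Biased (V-statistic) squared MMD between feature batches a and b,
    Gaussian kernel with a fixed bandwidth."""
    k = lambda u, v: torch.exp(-torch.cdist(u, v) ** 2 / (2 * bandwidth**2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

def training_loss(model, x_train, y_train, x_test_unlabeled, lam=1.0):
    """Supervised loss on labeled training data plus an MMD alignment term
    between train and test feature representations. Assumes (illustratively)
    that `model` returns a (features, prediction) pair."""
    z_train, pred = model(x_train)
    z_test, _ = model(x_test_unlabeled)
    return F.mse_loss(pred, y_train) + lam * mmd2(z_train, z_test)
```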

In the field of topological data analysis, persistence modules are used to express geometrical features of data sets. The matching distance $d_\mathcal{M}$ measures the difference between $2$-parameter persistence modules by taking the maximum bottleneck distance between $1$-parameter slices of the modules. The previous fastest algorithm to compute $d_\mathcal{M}$ exactly runs in $O(n^{8+\omega})$ time, where $\omega$ is the matrix multiplication exponent. We improve significantly on this by describing an algorithm with expected running time $O(n^5 \log^3 n)$. We first solve the decision problem $d_\mathcal{M}\leq \lambda$ for a constant $\lambda$ in $O(n^5\log n)$ time by traversing a line arrangement in the dual plane, where each point represents a slice. Then we lift the line arrangement to a plane arrangement in $\mathbb{R}^3$ whose vertices represent possible values of $d_\mathcal{M}$, and use a randomized incremental method to search through the vertices and find $d_\mathcal{M}$. The expected running time of this algorithm is $O((n^4+T(n))\log^2 n)$, where $T(n)$ is an upper bound on the complexity of deciding whether $d_\mathcal{M}\leq \lambda$.

While Generative Adversarial Networks (GANs) have empirically produced impressive results on learning complex real-world distributions, recent work has shown that they suffer from lack of diversity or mode collapse. The theoretical work of Arora et al.~\cite{AroraGeLiMaZh17} suggests a dilemma about GANs' statistical properties: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. In contrast, we show in this paper that GANs can in principle learn distributions in Wasserstein distance (or KL-divergence in many cases) with polynomial sample complexity, if the discriminator class has strong distinguishing power against the particular generator class (instead of against all possible generators). For various generator classes such as mixtures of Gaussians, exponential families, and invertible neural network generators, we design corresponding discriminators (which are often neural nets of specific architectures) such that the Integral Probability Metric (IPM) induced by the discriminators can provably approximate the Wasserstein distance and/or KL-divergence. This implies that if the training is successful, then the learned distribution is close to the true distribution in Wasserstein distance or KL divergence, and thus cannot drop modes. Our preliminary experiments show that on synthetic datasets the test IPM is well correlated with KL divergence, indicating that the lack of diversity may be caused by sub-optimality in optimization rather than statistical inefficiency.
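
To make the IPM notion concrete (with a far weaker discriminator class than those constructed in the paper), the sketch below evaluates the IPM induced by norm-bounded linear discriminators, which reduces in closed form to the distance between sample means.

```python
import numpy as np

def linear_ipm(samples_p, samples_q):
    """IPM over the discriminator class {x -> <w, x> : ||w||_2 <= 1}. The supremum
    of E_P f - E_Q f over this class equals the Euclidean distance between the
    sample means, so no optimization is needed."""
    return np.linalg.norm(samples_p.mean(axis=0) - samples_q.mean(axis=0))

# Even this weak IPM flags a generator collapsed onto a single mode of a two-mode target.
rng = np.random.default_rng(0)
real = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
collapsed = rng.normal(2, 1, (1000, 2))
print(linear_ipm(real, collapsed))
```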
