
We study empirical variants of the halfspace (Tukey) depth of a probability measure $\mu$, which are obtained by replacing $\mu$ with the corresponding weighted empirical measure. We prove analogues of the Marcinkiewicz--Zygmund strong law of large numbers and of the law of the iterated logarithm in terms of set inclusions and for the Hausdorff distance between the theoretical and empirical variants of depth trimmed regions. In the special case of $\mu$ being the uniform distribution on a convex body $K$, the depth trimmed regions are convex floating bodies of $K$, and we obtain strong limit theorems for their empirical estimators.
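As a concrete illustration (a minimal Monte Carlo sketch of our own, not the estimators analyzed in the paper), the empirical halfspace depth of a point can be approximated by minimizing the empirical mass of halfspaces over randomly drawn directions; sampling uniformly from a convex body then probes the convex floating bodies mentioned above.

```python
import numpy as np

def empirical_halfspace_depth(point, sample, n_directions=2000, seed=None):
    """Monte Carlo approximation of the empirical Tukey (halfspace) depth:
    the minimum, over directions u, of the fraction of sample points in the
    closed halfspace {x : <u, x> >= <u, point>}.  Finitely many directions
    give an upper bound on the exact empirical depth."""
    rng = np.random.default_rng(seed)
    d = sample.shape[1]
    u = rng.standard_normal((n_directions, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj = sample @ u.T            # (n, n_directions) projected sample
    thresh = point @ u.T           # (n_directions,) projected query point
    return (proj >= thresh).mean(axis=0).min()

# Uniform distribution on the square [-1, 1]^2: the depth trimmed region at
# level alpha (points of depth >= alpha) is a convex floating body.
rng = np.random.default_rng(0)
sample = rng.uniform(-1, 1, size=(5000, 2))
print(empirical_halfspace_depth(np.zeros(2), sample))            # center: ~1/2
print(empirical_halfspace_depth(np.array([0.9, 0.9]), sample))   # corner: small
```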

Related content

Binary codes of length $n$ may be viewed as subsets of vertices of the Boolean hypercube $\{0,1\}^n$. The ability of a linear error-correcting code to recover erasures is connected to influences of particular monotone Boolean functions. These functions provide insight into the role that particular coordinates play in a code's erasure repair capability. In this paper, we consider directly the influences of coordinates of a code. We describe a family of codes, called codes with minimum disjoint support, for which all influences may be determined. As a consequence, we find influences of repetition codes and certain distinct weight codes. Computing influences is typically circumvented by appealing to the transitivity of the automorphism group of the code. Some of the codes considered here fail to meet the transitivity conditions required for these standard approaches, yet we can compute their influences directly.
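For intuition, the influence of coordinate $i$ of a monotone Boolean function $f$ is the probability that flipping coordinate $i$ changes the value of $f$. The brute-force sketch below (our own illustration, not the paper's method) computes the influences of the monotone indicator "the erasure pattern is uncorrectable", which for a linear code holds exactly when the pattern covers the support of some nonzero codeword; for the length-$n$ repetition code this toy computation gives $2^{-(n-1)}$ for every coordinate.

```python
import itertools
import numpy as np

def codewords(G):
    """All codewords of the binary linear code with generator matrix G (mod 2)."""
    k, _ = G.shape
    return {tuple(np.dot(m, G) % 2) for m in itertools.product((0, 1), repeat=k)}

def uncorrectable(erased, words):
    """An erasure pattern is uncorrectable iff it covers the support
    of some nonzero codeword."""
    return any(all(erased[j] for j, cj in enumerate(c) if cj)
               for c in words if any(c))

def influence(i, n, words):
    """Influence of coordinate i: probability over a uniform random erasure
    pattern x that flipping coordinate i changes correctability."""
    count = 0
    for x in itertools.product((0, 1), repeat=n):
        y = list(x)
        y[i] ^= 1
        if uncorrectable(x, words) != uncorrectable(y, words):
            count += 1
    return count / 2 ** n

G_rep = np.ones((1, 4), dtype=int)                 # length-4 repetition code
words = codewords(G_rep)
print([influence(i, 4, words) for i in range(4)])  # [0.125, 0.125, 0.125, 0.125]
```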

Recently, Letzter proved that any graph of order $n$ contains a collection $\mathcal{P}$ of $O(n\log^\star n)$ paths with the following property: for all distinct edges $e$ and $f$ there exists a path in $\mathcal{P}$ which contains $e$ but not $f$. We improve this upper bound to $19 n$, thus answering a question of G.O.H. Katona and confirming a conjecture independently posed by Balogh, Csaba, Martin, and Pluh\'ar and by Falgas-Ravry, Kittipassorn, Kor\'andi, Letzter, and Narayanan. Our proof is elementary and self-contained.
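The separating property itself is easy to state operationally. The following small self-contained check (our own, purely for illustration) verifies that a family of paths separates every ordered pair of distinct edges, here on the triangle:

```python
import itertools

def separates(paths, edges):
    """Check: for every ordered pair of distinct edges (e, f), some path
    contains e but not f.  Paths are vertex sequences; edges are frozensets
    of their two endpoints."""
    path_edges = [{frozenset(p[i:i + 2]) for i in range(len(p) - 1)}
                  for p in paths]
    return all(any(e in pe and f not in pe for pe in path_edges)
               for e, f in itertools.permutations(edges, 2))

# The triangle on {0, 1, 2}: its three 2-edge paths form a separating family.
edges = [frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})]
paths = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
print(separates(paths, edges))   # True
```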

We propose MNPCA, a novel non-linear generalization of (2D)$^2$PCA, a classical linear method for the simultaneous dimension reduction of both rows and columns of a set of matrix-valued data. MNPCA is based on optimizing over separate non-linear mappings on the left and right singular spaces of the observations, essentially amounting to the decoupling of the two sides of the matrices. We develop a comprehensive theoretical framework for MNPCA by viewing it as an eigenproblem in reproducing kernel Hilbert spaces. We study the resulting estimators on both the population and sample levels, deriving their convergence rates and formulating a coordinate representation to allow the method to be used in practice. Simulations and a real data example demonstrate MNPCA's good performance relative to its competitors.
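For context, here is a minimal sketch of the classical linear baseline that MNPCA generalizes, (2D)$^2$PCA, which projects rows and columns onto the leading eigenvectors of the two scatter matrices (this is the linear method only, not the paper's MNPCA):

```python
import numpy as np

def two_d_squared_pca(X, r_left, r_right):
    """Linear (2D)^2-PCA baseline.  X has shape (N, p, q); returns left and
    right projection bases from the two scatter matrices."""
    Xc = X - X.mean(axis=0)
    S_left  = np.einsum('nij,nkj->ik', Xc, Xc)   # sum_n Xc_n Xc_n^T   (p x p)
    S_right = np.einsum('nji,njk->ik', Xc, Xc)   # sum_n Xc_n^T Xc_n   (q x q)
    # eigh returns ascending eigenvalues; reverse to take leading eigenvectors.
    U = np.linalg.eigh(S_left)[1][:, ::-1][:, :r_left]
    V = np.linalg.eigh(S_right)[1][:, ::-1][:, :r_right]
    return U, V

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8, 6))
U, V = two_d_squared_pca(X, 3, 2)
print((U.T @ X[0] @ V).shape)   # each matrix is compressed to (3, 2)
```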

We consider the problem $(\rm P)$ of exactly fitting an ellipsoid (centered at $0$) to $n$ standard Gaussian random vectors in $\mathbb{R}^d$, as $n, d \to \infty$ with $n / d^2 \to \alpha > 0$. This problem is conjectured to undergo a sharp transition: with high probability, $(\rm P)$ has a solution if $\alpha < 1/4$, while $(\rm P)$ has no solutions if $\alpha > 1/4$. So far, only a trivial bound $\alpha > 1/2$ is known to imply the absence of solutions, while the sharpest results on the positive side assume $\alpha \leq \eta$ (for $\eta > 0$ a small constant) to prove that $(\rm P)$ is solvable. In this work we study universality between this problem and a so-called "Gaussian equivalent", for which the same transition can be rigorously analyzed. Our main results are twofold. On the positive side, we prove that if $\alpha < 1/4$, there exists an ellipsoid fitting all the points up to a small error, and the lengths of its principal axes are bounded above and below. On the other hand, for $\alpha > 1/4$, we show that achieving small fitting error is not possible if the length of the ellipsoid's shortest axis does not approach $0$ as $d \to \infty$ (and in particular there does not exist any ellipsoid fit whose shortest axis length is bounded away from $0$ as $d \to \infty$). To the best of our knowledge, our work is the first rigorous result characterizing the expected phase transition in ellipsoid fitting at $\alpha = 1/4$. In a companion non-rigorous work, the first author and D. Kunisky give a general analysis of ellipsoid fitting using the replica method of statistical physics, which inspired the present work.
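To make the constraint set concrete: an exact fit is a symmetric matrix $S$ with $x_i^\top S x_i = 1$ for all $i$, a linear system in the $d(d+1)/2$ free entries of $S$. The toy probe below (ours; it deliberately ignores the positivity and axis-length constraints that drive the $\alpha = 1/4$ transition) measures only the least-squares residual of that linear system:

```python
import numpy as np

def fit_residual(d, alpha, seed=0):
    """Least-squares residual of the linear constraints x_i^T S x_i = 1 in
    the d(d+1)/2 free entries of symmetric S.  A toy probe of the linear
    part of ellipsoid fitting only: no PSD/axis-length constraints."""
    rng = np.random.default_rng(seed)
    n = int(alpha * d * d)
    X = rng.standard_normal((n, d))
    iu, ju = np.triu_indices(d)
    # Row i collects the monomials of x_i x_i^T; off-diagonal entries of S
    # appear twice in x^T S x, hence the factor 2.
    M = X[:, iu] * X[:, ju] * np.where(iu == ju, 1.0, 2.0)
    s = np.linalg.lstsq(M, np.ones(n), rcond=None)[0]
    return np.linalg.norm(M @ s - 1.0) / np.sqrt(n)

# Underdetermined (n < d(d+1)/2): exact solutions exist.  Overdetermined: not.
print(fit_residual(d=30, alpha=0.2), fit_residual(d=30, alpha=0.6))
```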

We consider the problem of approximating the solution to $A(\mu) x(\mu) = b$ for many different values of the parameter $\mu$. Here we assume $A(\mu)$ is large, sparse, and nonsingular with a nonlinear dependence on $\mu$. Our method is based on a companion linearization derived from an accurate Chebyshev interpolation of $A(\mu)$ on the interval $[-a,a]$, $a \in \mathbb{R}$. The solution to the linearization is approximated in a preconditioned BiCG setting for shifted systems, where the Krylov basis matrix is formed once. This process leads to a short-term recurrence method, where one execution of the algorithm produces the approximation to $x(\mu)$ for many different values of the parameter $\mu \in [-a,a]$ simultaneously. In particular, this work proposes one algorithm which applies a shift-and-invert preconditioner exactly as well as an algorithm which applies the preconditioner inexactly. The competitiveness of the algorithms is illustrated with large-scale problems arising from a finite element discretization of a Helmholtz equation with a parameterized material coefficient. The software used in the simulations is publicly available online, and thus all our experiments are reproducible.
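As a small illustration of the first ingredient only (not the companion linearization or the shifted BiCG solver), one can build a Chebyshev interpolant of a parameter-dependent matrix $A(\mu)$ on $[-a,a]$ entrywise; the toy matrix below is our own stand-in for the paper's Helmholtz discretization:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

a, deg = 1.0, 12
# Toy nonlinear parameter dependence (assumed for illustration).
A = lambda mu: np.array([[2.0 + mu, 0.3 * mu ** 2],
                         [0.1 * np.sin(mu), 3.0 - mu]])

# Chebyshev points on [-a, a] and a degree-`deg` entrywise interpolant.
nodes = a * np.cos(np.pi * (2 * np.arange(deg + 1) + 1) / (2 * (deg + 1)))
samples = np.stack([A(t) for t in nodes])                      # (deg+1, 2, 2)
coeffs = np.stack([C.chebfit(nodes / a, samples[:, i, j], deg)
                   for i in range(2) for j in range(2)]).reshape(2, 2, -1)

def A_interp(mu):
    """Evaluate the Chebyshev interpolant of A at mu in [-a, a]."""
    return np.array([[C.chebval(mu / a, coeffs[i, j]) for j in range(2)]
                     for i in range(2)])

print(np.max(np.abs(A_interp(0.37) - A(0.37))))   # small interpolation error
```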

We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$ from observations contaminated by random noise. Our estimator $\boldsymbol{z}_n$ of the minimizer $\boldsymbol{x}^*$ is based on a version of the projected gradient descent with the gradient estimated by a regularized local polynomial algorithm. Next, we propose a two-stage procedure for estimation of the minimum value $f^*$ of the regression function $f$. At the first stage, we construct an accurate enough estimator of $\boldsymbol{x}^*$, which can be, for example, $\boldsymbol{z}_n$. At the second stage, we estimate the function value at the point obtained in the first stage using a rate optimal nonparametric procedure. We derive non-asymptotic upper bounds for the quadratic risk and optimization error of $\boldsymbol{z}_n$, and for the risk of estimating $f^*$. We establish minimax lower bounds showing that, under a certain choice of parameters, the proposed algorithms achieve the minimax optimal rates of convergence on the class of smooth and strongly convex functions.
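Schematically, the first stage looks as follows; this sketch of ours substitutes a crude averaged finite-difference gradient for the paper's regularized local polynomial estimator, and uses a toy strongly convex target:

```python
import numpy as np

def noisy_f(x, rng):
    """Toy strongly convex regression function observed with noise."""
    return np.sum((x - 0.5) ** 2) + 0.1 * rng.standard_normal()

def projected_gradient_minimizer(dim, steps=2000, h=0.05, eta=0.1,
                                 radius=2.0, reps=20, seed=0):
    """Schematic estimator of x*: projected gradient descent where the
    gradient is estimated from noisy evaluations by averaged central
    differences (a crude stand-in for the regularized local polynomial
    gradient estimator of the paper)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    for _ in range(steps):
        g = np.zeros(dim)
        for i in range(dim):
            e = np.zeros(dim); e[i] = h
            diffs = [(noisy_f(x + e, rng) - noisy_f(x - e, rng)) / (2 * h)
                     for _ in range(reps)]
            g[i] = np.mean(diffs)
        x = x - eta * g
        nrm = np.linalg.norm(x)
        if nrm > radius:                 # project onto the ball of radius `radius`
            x *= radius / nrm
    return x

print(projected_gradient_minimizer(dim=2))   # close to the minimizer (0.5, 0.5)
```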

In 1986, Flagg and Friedman \cite{ff} gave an elegant alternative proof of the faithfulness of the G\"{o}del translation $(\cdot)^\Box$ of Heyting arithmetic $\bf HA$ into Shapiro's epistemic arithmetic $\bf EA$. In \S 2, we shall prove the faithfulness of $(\cdot)^\Box$ without using stability, by introducing another translation from an epistemic system to the corresponding intuitionistic system, which we shall call \textit{the modified Rasiowa--Sikorski translation}. This new translation simplifies Flagg and Friedman's original proof. In \S 3, we shall give some applications of the modified translation to the disjunction property ($\mathsf{DP}$) and the numerical existence property ($\mathsf{NEP}$) of Heyting arithmetic. In \S 4, we shall show that the epistemic Markov's rule $\mathsf{EMR}$ in $\bf EA$ is proved via $\bf HA$; hence $\bf EA \vdash \mathsf{EMR}$ and $\bf HA \vdash \mathsf{MR}$ are equivalent. In \S 5, we shall give some relations among the translations treated in the previous sections. In \S 6, we shall give an alternative proof of Glivenko's theorem. In \S 7, we shall propose several (modal-)epistemic versions of Markov's rule for Horsten's modal-epistemic arithmetic $\bf MEA$, and, as in \S 4, we shall study some meta-implications among those versions of Markov's rule in $\bf MEA$ and the one in $\bf HA$. Friedman and Sheard gave a modal analogue $\mathsf{FS}$ (i.e. the Theorem in \cite{fs}) of Friedman's theorem $\mathsf{F}$ (i.e. Theorem 1 in \cite{friedman}): \textit{any recursively enumerable extension of $\bf HA$ which has $\mathsf{DP}$ also has $\mathsf{NEP}$}. In \S 8, it is shown that $\mathsf{FS}$ and the modal $\mathsf{DP}$ of $\bf EA$ imply that $\mathsf{DP} \Leftrightarrow \mathsf{NEP}$ holds in $\bf HA$, i.e. the case of the trivial extension of $\bf HA$ in $\mathsf{F}$. This is a partial result towards our conjecture. In \S 9, we shall give some discussions.
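For reference, one standard formulation of the G\"{o}del translation $(\cdot)^\Box$ boxes atoms, implications, and universal quantifiers; clause conventions vary slightly across presentations and may differ in detail from the one used in the paper:

```latex
\begin{align*}
P^\Box &= \Box P \quad (P \text{ atomic}) &
(A \wedge B)^\Box &= A^\Box \wedge B^\Box \\
(A \vee B)^\Box &= A^\Box \vee B^\Box &
(A \to B)^\Box &= \Box\,(A^\Box \to B^\Box) \\
(\forall x\, A)^\Box &= \Box\,\forall x\, A^\Box &
(\exists x\, A)^\Box &= \exists x\, A^\Box
\end{align*}
```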

We expound on some known lower bounds of the quadratic Wasserstein distance between random vectors in $\mathbb{R}^n$ with an emphasis on affine transformations that have been used in manifold learning of data in Wasserstein space. In particular, we give concrete lower bounds for rotated copies of random vectors in $\mathbb{R}^2$ with uncorrelated components by computing the Bures metric between the covariance matrices. We also derive upper bounds for compositions of affine maps which yield a fruitful variety of diffeomorphisms applied to an initial data measure. We apply these bounds to various distributions including those lying on a 1-dimensional manifold in $\mathbb{R}^2$ and illustrate the quality of the bounds. Finally, we give a framework for mimicking handwritten digit or alphabet datasets that can be applied in a manifold learning framework.
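Concretely, for mean-zero measures the quadratic Wasserstein distance is bounded below by the Bures metric between the covariance matrices, $B(\Sigma_1,\Sigma_2)^2 = \operatorname{tr}\Sigma_1 + \operatorname{tr}\Sigma_2 - 2\operatorname{tr}\big(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2}\big)^{1/2}$, with equality for Gaussians. A minimal sketch of ours for the rotated, uncorrelated-component example from the abstract:

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_distance(S1, S2):
    """Bures metric between PSD covariance matrices:
    B(S1, S2)^2 = tr S1 + tr S2 - 2 tr((S1^{1/2} S2 S1^{1/2})^{1/2}).
    For mean-zero measures this lower-bounds the quadratic Wasserstein
    distance, with equality for Gaussians."""
    r1 = sqrtm(S1)
    cross = sqrtm(r1 @ S2 @ r1)
    val = np.trace(S1) + np.trace(S2) - 2.0 * np.real(np.trace(cross))
    return np.sqrt(max(val, 0.0))

# A random vector in R^2 with uncorrelated components versus a rotated copy.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([4.0, 1.0])
print(bures_distance(S, R @ S @ R.T))   # lower bound on W_2 between the laws
```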

We consider Gibbs distributions, which are families of probability distributions over a discrete space $\Omega$ with probability mass function of the form $\mu^\Omega_\beta(\omega) \propto e^{\beta H(\omega)}$ for $\beta$ in an interval $[\beta_{\min}, \beta_{\max}]$ and $H( \omega ) \in \{0 \} \cup [1, n]$. The partition function is the normalization factor $Z(\beta)=\sum_{\omega \in\Omega}e^{\beta H(\omega)}$. Two important parameters of these distributions are the log partition ratio $q = \log \tfrac{Z(\beta_{\max})}{Z(\beta_{\min})}$ and the counts $c_x = |H^{-1}(x)|$. These are correlated with system parameters in a number of physical applications and sampling algorithms. Our first main result is to estimate the counts $c_x$ using roughly $\tilde O( \frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and $\tilde O( \frac{n^2}{\varepsilon^2} )$ samples for integer-valued distributions (ignoring some second-order terms and parameters), and we show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs, independent sets, and perfect matchings. As a key subroutine, we also develop algorithms to compute the partition function $Z$ using $\tilde O(\frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and using $\tilde O(\frac{n^2}{\varepsilon^2})$ samples for integer-valued distributions.
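To illustrate the role of the partition ratio, the standard telescoping identity $Z(\beta')/Z(\beta) = \mathbb{E}_{\beta}\big[e^{(\beta'-\beta)H}\big]$ lets one estimate $q$ from samples along a grid of inverse temperatures. The sketch below (ours, not the paper's algorithm) uses a toy Gibbs distribution where exact sampling and the exact answer are both available:

```python
import numpy as np

def estimate_log_ratio(n, beta_min, beta_max, grid=30, samples=20000, seed=0):
    """Telescoping estimator of log Z(beta_max)/Z(beta_min):
    Z(b')/Z(b) = E_b[exp((b' - b) H)], multiplied along a grid of betas.
    Toy model: H = number of ones of an n-bit string, so under mu_b each bit
    is an independent Bernoulli(e^b / (1 + e^b)) and Z(b) = (1 + e^b)^n."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_min, beta_max, grid + 1)
    log_ratio = 0.0
    for b, b_next in zip(betas[:-1], betas[1:]):
        p = np.exp(b) / (1 + np.exp(b))
        H = rng.binomial(n, p, size=samples)     # exact samples of H under mu_b
        log_ratio += np.log(np.mean(np.exp((b_next - b) * H)))
    return log_ratio

n, b0, b1 = 50, 0.0, 1.0
exact = n * (np.log1p(np.exp(b1)) - np.log1p(np.exp(b0)))
print(estimate_log_ratio(n, b0, b1), exact)      # estimate vs. exact q
```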

Markov chain Monte Carlo (MCMC) algorithms are based on the construction of a Markov chain with transition probabilities $P_\mu(x,\cdot)$, where $\mu$ denotes an invariant distribution of interest. In this work, we look at these transition probabilities as functions of their invariant distributions, and we develop a notion of derivative in the invariant distribution of an MCMC kernel. We build around this concept a set of tools that we refer to as Markov chain Monte Carlo Calculus. This allows us to compare Markov chains with different invariant distributions within a suitable class via what we refer to as mean value inequalities. We explain how MCMC Calculus provides a natural framework to study algorithms that use an approximation of an invariant distribution, and we illustrate how it suggests practical guidelines for the efficiency of MCMC algorithms. We conclude this work by showing how the tools developed can be applied to prove convergence of interacting and sequential MCMC algorithms, which arise in the context of particle filtering.
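As a minimal illustration of viewing the kernel as a function of its invariant distribution (this is only the starting point, not the calculus developed in the paper), the same Metropolis map applied to two nearby targets yields two nearby kernels:

```python
import numpy as np

def mh_step(x, log_target, step, rng):
    """One random-walk Metropolis step of the kernel P_mu(x, .): the kernel
    depends on its invariant density mu only through log_target."""
    y = x + step * rng.standard_normal()
    if np.log(rng.uniform()) < log_target(y) - log_target(x):
        return y
    return x

def run_chain(log_target, n=50000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, out = 0.0, np.empty(n)
    for i in range(n):
        x = mh_step(x, log_target, step, rng)
        out[i] = x
    return out

# Same kernel map, two nearby invariant distributions: N(0, 1) and N(0.1, 1).
chain_a = run_chain(lambda x: -0.5 * x ** 2)
chain_b = run_chain(lambda x: -0.5 * (x - 0.1) ** 2)
print(chain_a.mean(), chain_b.mean())   # empirical means track the two targets
```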
