
We prove the inequality $E[(X/\mu)^k] \le \left(\frac{k/\mu}{\log(k/\mu+1)}\right)^k \le \exp(k^2/(2\mu))$ for sub-Poissonian random variables, such as binomially or Poisson distributed random variables with mean $\mu$. The asymptotics $1+O(k^2/\mu)$ can be shown to be tight for small $k$. This improves over previous uniform bounds for the raw moments of those distributions by a factor exponential in $k$.
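As a quick numerical sanity check (our own illustration, not part of the proof), one can compare the raw moment of a Poisson variable against both bounds; the choices of $\mu$, $k$, and the sample size below are arbitrary.

```python
import numpy as np

# Monte Carlo check of E[(X/mu)^k] against the two claimed bounds for
# X ~ Poisson(mu). Illustrative only; mu, k, and the sample size are
# arbitrary choices.
rng = np.random.default_rng(0)

mu, k = 5.0, 3
x = rng.poisson(mu, size=1_000_000)
moment = np.mean((x / mu) ** k)

bound1 = ((k / mu) / np.log(k / mu + 1.0)) ** k
bound2 = np.exp(k ** 2 / (2.0 * mu))

print(f"E[(X/mu)^k] (MC) = {moment:.4f}")   # ~1.64 for mu=5, k=3
print(f"first bound      = {bound1:.4f}")   # ~2.08
print(f"second bound     = {bound2:.4f}")   # ~2.46
```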

Related content

In this paper we analyze the behavior of Oja's algorithm for online/streaming principal component subspace estimation. We prove that, with high probability, it achieves an efficient, gap-free, global convergence rate for approximating a principal component subspace of any sub-Gaussian distribution. Moreover, we show for the first time that this convergence rate, namely the upper bound on the approximation error, exactly matches the lower bound on the error of offline/classical PCA, up to a constant factor.
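For readers unfamiliar with the method, here is a minimal sketch of Oja's streaming update for a $k$-dimensional subspace on synthetic data; the step-size schedule, the planted covariance, and the per-step re-orthonormalization are illustrative assumptions, not the tuned choices analyzed in the paper.

```python
import numpy as np

# A minimal sketch of Oja's streaming update for a k-dimensional
# principal subspace on synthetic sub-Gaussian data.
rng = np.random.default_rng(1)

d, k, n = 20, 3, 50_000
# Data with a planted top-k eigenspace (the first k coordinates).
cov_sqrt = np.diag(np.array([3.0, 2.5, 2.0] + [0.5] * (d - k)))

Q, _ = np.linalg.qr(rng.standard_normal((d, k)))  # initial orthonormal estimate
for t in range(1, n + 1):
    x = cov_sqrt @ rng.standard_normal(d)
    eta = 1.0 / (t + 100)               # decaying step size (assumption)
    Q = Q + eta * np.outer(x, x @ Q)    # Oja step: Q <- Q + eta * x x^T Q
    Q, _ = np.linalg.qr(Q)              # keep the columns orthonormal

true_proj = np.zeros((d, d)); true_proj[:k, :k] = np.eye(k)
err = np.linalg.norm(true_proj - Q @ Q.T, ord=2)
print(f"subspace error (spectral norm of projector difference): {err:.3f}")
```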

The aim of this note is to state a couple of general results about the properties of the penalized maximum likelihood estimator (pMLE) and of the posterior distribution for parametric models in a non-asymptotic setup and for a possibly large or even infinite parameter dimension. We consider a special class of stochastically linear smooth (SLS) models satisfying two major conditions: the stochastic component of the log-likelihood is linear in the model parameter, while the expected log-likelihood is a smooth function. The main results simplify considerably if the expected log-likelihood is concave. For the pMLE, we establish a number of finite-sample bounds on its concentration and large deviations, as well as the Fisher and Wilks expansions. The latter results extend the classical asymptotic Fisher and Wilks theorems about the MLE to the non-asymptotic setup with a large parameter dimension that may depend on the sample size. For the posterior distribution, our main result establishes a Gaussian approximation of the posterior, which can be viewed as a finite-sample analog of the prominent Bernstein--von Mises theorem. In all bounds, the remainder is given explicitly and can be evaluated in terms of the effective sample size and the effective parameter dimension. The results are dimension- and coordinate-free. Despite this generality, all the presented bounds are nearly sharp, and the classical asymptotic results can be obtained as simple corollaries. The interesting case of logit regression with smooth or truncation priors is used to instantiate the results and to illustrate the main notions.
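As a concrete, self-contained illustration of the objects involved (not the paper's proofs), the sketch below computes the pMLE for a logit regression with a Gaussian penalty and the corresponding Gaussian (Laplace-type) approximation of the posterior; the data-generating process and the penalty strength are our own toy choices.

```python
import numpy as np
from scipy.optimize import minimize

# pMLE for logit regression with a Gaussian ("smooth") penalty, plus the
# Gaussian approximation of the posterior centered at the pMLE. The paper
# quantifies when such an approximation is accurate; this only computes it.
rng = np.random.default_rng(2)

n, p = 500, 5
X = rng.standard_normal((n, p))
theta_true = rng.standard_normal(p)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ theta_true)))
lam = 1.0  # prior precision / penalty strength (arbitrary choice)

def neg_penalized_loglik(theta):
    z = X @ theta
    # -log-likelihood of the logit model + Gaussian penalty lam/2 |theta|^2
    return np.sum(np.logaddexp(0.0, z) - y * z) + 0.5 * lam * theta @ theta

res = minimize(neg_penalized_loglik, np.zeros(p), method="BFGS")
theta_hat = res.x  # the pMLE

# Hessian of the negative penalized log-likelihood at the pMLE:
# X^T W X + lam I, with W = diag(sigma(z)(1 - sigma(z))).
s = 1.0 / (1.0 + np.exp(-X @ theta_hat))
H = X.T @ (X * (s * (1 - s))[:, None]) + lam * np.eye(p)

# Gaussian approximation of the posterior: N(theta_hat, H^{-1}).
post_cov = np.linalg.inv(H)
print("pMLE:", np.round(theta_hat, 3))
print("approx posterior sd:", np.round(np.sqrt(np.diag(post_cov)), 3))
```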

Recently, random walks on dynamic graphs have been studied because of their adaptivity to the time-varying structure of real-world networks. In general, there is a tremendous gap between the static and dynamic graph settings for the lazy simple random walk: although an $O(n^3)$ cover time was shown for any static graph of $n$ vertices, there is an edge-changing dynamic graph with an exponential hitting time. On the other hand, previous works indicate that a random walk on a dynamic graph with a time-homogeneous stationary distribution behaves almost identically to one on a static graph. In this paper, we strengthen this insight by obtaining general and improved bounds. Specifically, we consider a random walk according to a sequence $(P_t)_{t\geq 1}$ of irreducible and reversible transition matrices such that all $P_t$ have the same stationary distribution. We bound the mixing, hitting, and cover times in terms of the hitting and relaxation times of the random walk according to the worst fixed $P_t$. Moreover, we obtain the first bounds on the hitting and cover times of multiple random walks and on the coalescing time on dynamic graphs. These bounds can be seen as extensions of the well-known bounds for random walks on static graphs. Our results generalize the previous upper bounds for specific random walks on dynamic graphs, e.g., lazy simple random walks and $d_{\max}$-lazy walks, and give improved and tight upper bounds in various cases. As an interesting consequence of our generalization, we obtain tight bounds for the lazy Metropolis walk [Nonaka, Ono, Sadakane, and Yamashita, TCS10] on any dynamic graph: $O(n^2)$ mixing time, $O(n^2)$ hitting time, and $O(n^2\log n)$ cover time. Additionally, our coalescing time bound implies a bound on the consensus time of pull voting on dynamic graphs.
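The lazy Metropolis walk fits this framework because its stationary distribution is uniform on every connected graph, so all $P_t$ share one stationary distribution. The small simulation below illustrates this; the dynamic graph model (a cycle plus freshly resampled random chords at each step) and the empirical cover-time measurement are our own illustrative choices.

```python
import numpy as np

# The lazy Metropolis walk targets the uniform distribution on every
# connected graph, so on a dynamic graph whose snapshots are all
# connected, every P_t shares the same stationary distribution -- the
# paper's setting. The dynamic graph model below is illustrative.
rng = np.random.default_rng(3)
n = 30

def snapshot():
    """Adjacency sets of a cycle on n vertices plus 10 random chords."""
    adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
    for _ in range(10):
        i, j = rng.integers(n, size=2)
        if i != j:
            adj[int(i)].add(int(j))
            adj[int(j)].add(int(i))
    return adj

u, visited, t = 0, {0}, 0
while len(visited) < n:              # run until every vertex is covered
    adj = snapshot()                 # the graph changes at every step
    t += 1
    if rng.random() < 0.5:           # lazy: hold with probability 1/2
        continue
    nbrs = sorted(adj[u])
    v = nbrs[rng.integers(len(nbrs))]
    # Metropolis filter for the uniform target: accept with probability
    # deg(u)/max(deg(u), deg(v)), so P(u,v) = 1/(2 max(deg u, deg v)).
    if rng.random() < len(adj[u]) / max(len(adj[u]), len(adj[v])):
        u = v
        visited.add(u)
print(f"covered all {n} vertices in {t} steps")
```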

Maximum likelihood estimates (MLEs) are asymptotically normally distributed, and this property is used in meta-analyses to test the heterogeneity of estimates, either for a single cluster or for several sub-groups. More recently, MLEs for associations between risk factors and diseases have been hierarchically clustered to search for diseases with shared underlying causes, but the approach needs an objective statistical criterion to determine the optimum number and composition of clusters. Conventional statistical tests are briefly reviewed, before considering the posterior distribution associated with partitioning data into clusters. The posterior distribution is calculated by marginalising out the unknown cluster centres, and differs from the likelihood associated with mixture models. The calculation is equivalent to that used to obtain the Bayesian Information Criterion (BIC), but is exact, without a Laplace approximation. The result includes a sum-of-squares term and terms that depend on the number and composition of clusters, which penalise the number of free parameters in the model. The usual BIC is shown to be unsuitable for clustering applications unless the number of items in every cluster is sufficiently large.
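To make the last point concrete, the toy computation below (our own construction: one Gaussian cluster with known variance and a Gaussian prior on its centre) compares the exact log marginal likelihood, with the centre integrated out, against the BIC-style Laplace value; the two only agree once the cluster is large.

```python
import numpy as np

# Exact log marginal likelihood of one Gaussian cluster (centre integrated
# out) versus the BIC value log p(x | mu_hat) - (1/2) log m for one free
# parameter. All model choices are illustrative.
rng = np.random.default_rng(4)

sigma2, tau2 = 1.0, 10.0            # data variance, prior variance of centre
lam, lam0 = 1.0 / sigma2, 1.0 / tau2

for m in (3, 10, 100, 1000):        # cluster sizes
    x = rng.normal(2.0, np.sqrt(sigma2), size=m)
    # Gaussian-Gaussian conjugacy: integrating the centre out in closed form.
    a, b = lam0 + m * lam, lam * x.sum()
    exact = (0.5 * m * np.log(lam / (2 * np.pi))
             + 0.5 * np.log(lam0 / a)
             + b ** 2 / (2 * a)
             - 0.5 * lam * (x ** 2).sum())
    mu_hat = x.mean()
    loglik = (-0.5 * m * np.log(2 * np.pi * sigma2)
              - 0.5 * lam * ((x - mu_hat) ** 2).sum())
    bic = loglik - 0.5 * np.log(m)
    print(f"m={m:5d}  exact={exact:10.2f}  BIC={bic:10.2f}")
```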

We introduce weak barycenters of a family of probability distributions, based on the recently developed notion of optimal weak transport of mass by Gozlan et al. (2017) and Backhoff-Veraguas et al. (2020). We provide a theoretical analysis of this object and discuss its interpretation in the light of convex ordering between probability measures. In particular, we show that, rather than averaging the input distributions in a geometric way (as the Wasserstein barycenter based on classical optimal transport does), weak barycenters extract common geometric information shared by all the input distributions, encoded as a latent random variable that underlies all of them. We also provide an iterative algorithm to compute a weak barycenter for a finite family of input distributions, and a stochastic algorithm that computes them for arbitrary populations of laws. The latter approach is particularly well suited for the streaming setting, i.e., when distributions are observed sequentially. The notion of weak barycenter and our approaches to compute it are illustrated on synthetic examples, validated on 2D real-world data and compared to standard Wasserstein barycenters.

This paper deals with robust inference for parametric copula models. Estimation using Canonical Maximum Likelihood might be unstable, especially in the presence of outliers. We propose to use a procedure based on the Maximum Mean Discrepancy (MMD) principle. We derive non-asymptotic oracle inequalities, consistency and asymptotic normality of this new estimator. In particular, the oracle inequality holds without any assumption on the copula family, and can be applied in the presence of outliers or under misspecification. Moreover, in our MMD framework, statistical inference becomes feasible for copula models that admit no density with respect to the Lebesgue measure on $[0,1]^d$, such as the Marshall-Olkin copula. A simulation study shows the robustness of our new procedures, especially compared to pseudo-maximum likelihood estimation. An R package implementing the MMD estimator for copula models is available.
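A minimal sketch of the idea (not the paper's exact estimator): fit the parameter of a bivariate Gaussian copula by minimizing the squared MMD between the pseudo-observations and samples simulated from the model. The kernel, bandwidth, grid search, sample sizes, and contamination scheme are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# MMD-based copula estimation sketch: pick the copula parameter whose
# simulated sample is closest in MMD to the pseudo-observations.
rng = np.random.default_rng(5)

def gaussian_copula_sample(rho, n):
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
    return norm.cdf(np.column_stack([z1, z2]))

def mmd2(x, y, bw=0.2):
    """Biased squared MMD with a Gaussian kernel of bandwidth bw."""
    def gram(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bw ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

# "Observed" data with a few outliers, turned into pseudo-observations.
data = gaussian_copula_sample(0.6, 300)
data[:10] = rng.random((10, 2))                      # contamination
ranks = (np.argsort(np.argsort(data, axis=0), axis=0) + 1) / (len(data) + 1)

grid = np.linspace(-0.9, 0.9, 37)
losses = [mmd2(ranks, gaussian_copula_sample(r, 300)) for r in grid]
print("MMD estimate of rho:", grid[int(np.argmin(losses))])  # near 0.6
```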

We study the problem of generating a hyperplane tessellation of an arbitrary set $T$ in $\mathbb{R}^n$, ensuring that the Euclidean distance between any two points corresponds to the fraction of hyperplanes separating them up to a pre-specified error $\delta$. We focus on random Gaussian tessellations with uniformly distributed shifts and derive sharp bounds on the number of hyperplanes $m$ that are required. Surprisingly, our lower estimates falsify the conjecture that $m\sim \ell_*^2(T)/\delta^2$, where $\ell_*(T)$ is the Gaussian width of $T$, is optimal.
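The distance-to-separating-fraction correspondence is easy to observe empirically. In the sketch below (our own illustration), hyperplanes $\{x : \langle g, x\rangle = b\}$ have standard Gaussian normals $g$ and shifts $b$ uniform on $[-R, R]$; for points well inside the shift range, the separating fraction concentrates around $\|x-y\|/(R\sqrt{2\pi})$.

```python
import numpy as np

# Monte Carlo illustration: the fraction of random Gaussian hyperplanes
# with uniform shifts that separate two points is roughly proportional
# to their Euclidean distance. The scales below are arbitrary.
rng = np.random.default_rng(6)

n, m, R = 50, 20_000, 10.0           # dimension, #hyperplanes, shift range
G = rng.standard_normal((m, n))      # Gaussian normals
b = rng.uniform(-R, R, size=m)       # uniformly distributed shifts

x = rng.standard_normal(n); x /= np.linalg.norm(x)
y = rng.standard_normal(n); y /= np.linalg.norm(y)

frac = np.mean(np.sign(G @ x - b) != np.sign(G @ y - b))
dist = np.linalg.norm(x - y)
# A hyperplane separates x and y iff b falls between <g,x> and <g,y>,
# so P(separate) ~ E|<g, x-y>| / (2R) = ||x-y|| / (R sqrt(2 pi)).
print(f"separating fraction = {frac:.4f}")
print(f"predicted           = {dist / (R * np.sqrt(2 * np.pi)):.4f}")
```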

Approaches based on deep neural networks achieve striking performance when testing data and training data share a similar distribution, but can fail significantly otherwise. Eliminating the impact of distribution shifts between training and testing data is therefore crucial for building deep models with reliable performance. Conventional methods assume either known heterogeneity of the training data (e.g., domain labels) or approximately equal capacities of the different domains. In this paper, we consider a more challenging case where neither of these assumptions holds. We propose to address this problem by removing the dependencies between features via learning weights for training samples, which helps deep models get rid of spurious correlations and, in turn, concentrate more on the true connection between discriminative features and labels. Through extensive experiments on distribution generalization benchmarks including PACS, VLCS, MNIST-M, and NICO, we demonstrate the effectiveness of our method compared with state-of-the-art counterparts.
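To convey the flavor of the reweighting idea (our own toy objective, not the paper's method), the sketch below learns nonnegative sample weights that shrink the weighted correlation between a causal feature and a spuriously correlated companion, while a penalty keeps the weights close to uniform.

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch of sample reweighting: learn weights that decorrelate a
# "causal" feature x1 from a spuriously correlated feature x2, with a
# penalty keeping the weights near uniform (preserving effective sample
# size). The data and objective are illustrative assumptions.
rng = np.random.default_rng(7)

n = 200
x1 = rng.standard_normal(n)                       # "causal" feature
x2 = 0.9 * x1 + 0.4 * rng.standard_normal(n)      # spurious companion
X = np.column_stack([x1, x2])

def weights(theta):
    w = np.exp(theta)                             # positivity
    return w / w.sum()

def weighted_corr(theta):
    w = weights(theta)
    Xc = X - w @ X
    cov = (w[:, None] * Xc).T @ Xc                # weighted covariance
    return cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

def objective(theta):
    w = weights(theta)
    return weighted_corr(theta) ** 2 + 0.1 * n * ((w - 1.0 / n) ** 2).sum()

res = minimize(objective, np.zeros(n), method="L-BFGS-B")
print("correlation before:", round(float(np.corrcoef(x1, x2)[0, 1]), 3))
print("correlation after :", round(float(weighted_corr(res.x)), 3))
```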

Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may yield attractive results with infinite samples and under certain model assumptions, they are usually less satisfactory in practice due to finite data and possible violations of those assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best score. Our encoder-decoder model takes observed data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy, and our final output is the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows a flexible score function under the acyclicity constraint.
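The abstract does not spell out the penalty terms; one common differentiable characterization of acyclicity from the continuous DAG-learning literature, used below purely for illustration, scores a weighted adjacency matrix $A$ by $h(A) = \operatorname{tr}(e^{A \circ A}) - d$, which vanishes exactly on DAGs.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative acyclicity score (an assumption on our part; the paper's
# exact penalty terms may differ): h(A) = trace(exp(A * A)) - d, where *
# is the elementwise product. h(A) = 0 iff the weighted adjacency matrix
# A has no directed cycles.
def acyclicity(A):
    d = A.shape[0]
    return np.trace(expm(A * A)) - d

dag = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])   # edges 1 -> 2 -> 3, a DAG
cyc = dag.copy()
cyc[2, 0] = 1.0                     # adding 3 -> 1 creates a cycle

print(acyclicity(dag))  # ~0 (up to floating-point error)
print(acyclicity(cyc))  # > 0, grows with the cycle weight
```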

In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and achieves the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
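To illustrate the mechanism (a toy instance of our own, not the paper's general setting), the sketch below solves a one-dimensional consensus problem with quadratic local objectives on a path graph: consensus is the affine constraint $Bx = 0$ with $B$ the edge-incidence matrix, and Nesterov's accelerated gradient ascent runs on the dual, where each gradient evaluation needs only local minimizations and communication along edges.

```python
import numpy as np

# Dual Nesterov for  min sum_i (x_i - a_i)^2 / 2  s.t.  B x = 0  (consensus
# on a path graph). x*(lam) = a - B^T lam minimizes the Lagrangian, the
# dual gradient is B x*(lam), and the dual smoothness constant is
# ||B||_2^2 because each f_i is 1-strongly convex. Scales are illustrative.
rng = np.random.default_rng(8)

m = 10                               # agents on a path graph
a = rng.standard_normal(m)           # local data; the optimum is mean(a)
B = np.zeros((m - 1, m))             # edge-incidence matrix of the path
for e in range(m - 1):
    B[e, e], B[e, e + 1] = 1.0, -1.0

L = np.linalg.norm(B, 2) ** 2        # dual smoothness constant
lam = np.zeros(m - 1)
lam_prev = lam.copy()

for t in range(1, 500):
    y = lam + (t - 1) / (t + 2) * (lam - lam_prev)  # Nesterov momentum
    x = a - B.T @ y                  # local Lagrangian minimizers
    lam_prev = lam
    lam = y + (1.0 / L) * (B @ x)    # gradient ascent step on the dual

x = a - B.T @ lam
print("consensus gap:      ", np.abs(x - x.mean()).max())
print("distance to optimum:", np.abs(x - a.mean()).max())
```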
