
A Dirichlet polynomial $d$ in one variable ${\mathcal{y}}$ is a function of the form $d({\mathcal{y}})=a_n n^{\mathcal{y}}+\cdots+a_2 2^{\mathcal{y}}+a_1 1^{\mathcal{y}}+a_0 0^{\mathcal{y}}$ for some $n,a_0,\ldots,a_n\in\mathbb{N}$. We will show how to think of a Dirichlet polynomial as a set-theoretic bundle, and thus as an empirical distribution. We can then consider the Shannon entropy $H(d)$ of the corresponding probability distribution, and we define its length (or, classically, its perplexity) by $L(d)=2^{H(d)}$. On the other hand, we will define a rig homomorphism $h\colon\mathsf{Dir}\to\mathsf{Rect}$ from the rig of Dirichlet polynomials to the so-called rectangle rig, whose underlying set is $\mathbb{R}_{\geq0}\times\mathbb{R}_{\geq0}$ and whose additive structure involves the weighted geometric mean; we write $h(d)=(A(d),W(d))$, and call the two components area and width (respectively). The main result of this paper is the following: the rectangle-area formula $A(d)=L(d)W(d)$ holds for any Dirichlet polynomial $d$. In other words, the entropy of an empirical distribution can be calculated entirely in terms of the homomorphism $h$ applied to its corresponding Dirichlet polynomial. We also show that similar results hold for the cross entropy.
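As a concrete illustration, here is a minimal Python sketch (not code from the paper) that treats the coefficients $a_0,\ldots,a_n$ as a bundle with $a_i$ fibers of size $i$, computes the entropy $H(d)$ and length $L(d)=2^{H(d)}$ of the induced empirical distribution, and checks the rectangle-area formula numerically. The concrete formulas used for the area (the total number of points, $\sum_i a_i\,i$) and the width (the weighted geometric mean of fiber sizes) are assumptions made for this sketch.

```python
import math

def entropy_length_area_width(coeffs):
    """coeffs[i] = a_i, the coefficient of i^y in d(y); index 0 is a_0."""
    area = sum(a * i for i, a in enumerate(coeffs))   # assumed area A(d): total number of points
    H = 0.0                                           # Shannon entropy of the empirical distribution
    logW = 0.0                                        # log2 of the weighted geometric mean of fiber sizes
    for i, a in enumerate(coeffs):
        if i == 0 or a == 0:
            continue                                  # empty fibers carry no probability mass
        p = i / area                                  # probability of a single fiber of size i
        H += -a * p * math.log2(p)
        logW += a * p * math.log2(i)
    return H, 2 ** H, area, 2 ** logW                 # H(d), L(d), A(d), W(d)

H, L, A, W = entropy_length_area_width([0, 2, 0, 0, 1])   # d(y) = 4^y + 2*1^y
print(f"H={H:.4f}  L={L:.4f}  A={A}  L*W={L * W:.4f}")    # L*W equals A numerically
```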

Related content

We study the phase synchronization problem with noisy measurements $Y=z^*z^{*H}+\sigma W\in\mathbb{C}^{n\times n}$, where $z^*$ is an $n$-dimensional complex unit-modulus vector and $W$ is a complex-valued Gaussian random matrix. It is assumed that each entry $Y_{jk}$ is observed with probability $p$. We prove that an SDP relaxation of the MLE achieves the error bound $(1+o(1))\frac{\sigma^2}{2np}$ under a normalized squared $\ell_2$ loss. This result matches the minimax lower bound of the problem, and even the leading constant is sharp. The analysis of the SDP is based on an equivalent non-convex program whose solution can be characterized as a fixed point of the generalized power iteration lifted to a higher-dimensional space. This viewpoint unifies the proofs of the statistical optimality of three different methods: MLE, SDP, and generalized power method. The technique is also applied to the analysis of the SDP for $\mathbb{Z}_2$ synchronization, and we achieve the minimax optimal error $\exp\left(-(1-o(1))\frac{np}{2\sigma^2}\right)$ with a sharp constant in the exponent.
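For readers who want to experiment, the following is a hedged Python sketch of the generalized power iteration mentioned above, in the fully observed case $p=1$: multiply by $Y$ and project each entry back onto the unit circle. The initialization, iteration count, and error normalization are ad hoc choices for illustration, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200, 1.0
z_star = np.exp(1j * rng.uniform(0, 2 * np.pi, n))             # ground-truth unit-modulus vector
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = (G + G.conj().T) / 2                                        # Hermitian complex Gaussian noise
Y = np.outer(z_star, z_star.conj()) + sigma * W                 # noisy measurement matrix (p = 1)

z = np.linalg.eigh(Y)[1][:, -1]                                 # spectral initialization
z = z / np.abs(z)
for _ in range(50):
    z = Y @ z                                                   # power step
    z = z / np.abs(z)                                           # entrywise projection onto the unit circle

rot = np.vdot(z, z_star)                                        # best global phase alignment
rot = np.conj(rot) / np.abs(rot)
err = np.linalg.norm(z - rot * z_star) ** 2 / n                 # normalized squared l2 loss
print(f"normalized squared error: {err:.4f}")
```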

Positive dependence is present in many real-world data sets and has appealing stochastic properties. In particular, the notion of multivariate total positivity of order 2 ($\text{MTP}_2$) is a convex constraint and acts as an implicit regularizer in the Gaussian case. We study positive dependence in multivariate extremes and introduce $\text{EMTP}_2$, an extremal version of $\text{MTP}_2$. This notion turns out to appear prominently in extremes and, in fact, is satisfied by many classical models. We show that a H\"usler--Reiss distribution, the analogue of a Gaussian distribution in extremes, is $\text{EMTP}_2$ if and only if its precision matrix is the Laplacian of a connected graph. We propose an estimator for the parameters of the H\"usler--Reiss distribution under $\text{EMTP}_2$ as the solution of a convex optimization problem with a Laplacian constraint. We prove that this estimator is consistent and typically yields a sparse model with a possibly non-decomposable extremal graphical structure. Using two data sets, we illustrate this regularization and its superior performance compared to existing methods.
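As a small illustration of the characterization above, the following Python sketch (mine, not code from the paper) checks whether a given symmetric matrix is the Laplacian of a connected graph: non-positive off-diagonal entries, zero row sums, and a connected support graph.

```python
import numpy as np
from collections import deque

def is_connected_laplacian(theta, tol=1e-10):
    theta = np.asarray(theta, dtype=float)
    n = theta.shape[0]
    off = theta - np.diag(np.diag(theta))
    if not np.allclose(theta, theta.T, atol=tol):
        return False                                   # must be symmetric
    if (off > tol).any():
        return False                                   # off-diagonal entries must be <= 0
    if not np.allclose(theta.sum(axis=1), 0.0, atol=tol):
        return False                                   # every row must sum to zero
    seen, queue = {0}, deque([0])                      # BFS on the support graph of the off-diagonals
    while queue:
        v = queue.popleft()
        for u in range(n):
            if u not in seen and abs(off[v, u]) > tol:
                seen.add(u)
                queue.append(u)
    return len(seen) == n                              # connected iff BFS reaches every vertex

# Laplacian of the path graph 1--2--3 satisfies the criterion
L_path = np.array([[ 1., -1.,  0.],
                   [-1.,  2., -1.],
                   [ 0., -1.,  1.]])
print(is_connected_laplacian(L_path))                  # True
```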

The cost of both generalized least squares (GLS) and Gibbs sampling in a crossed random effects model can easily grow faster than $N^{3/2}$ for $N$ observations. Ghosh et al. (2020) develop a backfitting algorithm that reduces the cost to $O(N)$. Here we extend that method to a generalized linear mixed model for logistic regression. We use backfitting within an iteratively reweighted penalized least squares algorithm. The specific approach is a version of penalized quasi-likelihood due to Schall (1991). A straightforward version of Schall's algorithm would also cost more than $N^{3/2}$ because it requires the trace of the inverse of a large matrix. We approximate that quantity at cost $O(N)$ and prove that this substitution makes an asymptotically negligible difference. Our backfitting algorithm also collapses the fixed effect with one random effect at a time in a way that is analogous to the collapsed Gibbs sampler of Papaspiliopoulos et al. (2020). We use a symmetric operator that facilitates efficient covariance computation. We illustrate our method on a real dataset from Stitch Fix. By properly accounting for crossed random effects, we show that a naive logistic regression could underestimate sampling variances by several hundredfold.
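To convey the flavor of backfitting at $O(N)$ cost, here is a generic Python sketch (my illustration, not the paper's collapsed algorithm or its penalized quasi-likelihood weighting): block-coordinate updates for a penalized least squares problem with an intercept and two crossed random effects, where each sweep only needs per-level counts and sums.

```python
import numpy as np

rng = np.random.default_rng(1)
N, R, C = 10_000, 100, 120
row = rng.integers(0, R, N)                    # levels of the first crossed factor
col = rng.integers(0, C, N)                    # levels of the second crossed factor
a_true = rng.standard_normal(R)
b_true = rng.standard_normal(C)
y = 1.0 + a_true[row] + b_true[col] + rng.standard_normal(N)

lam_a = lam_b = 1.0                            # assumed (known) penalty / variance ratios
mu, a, b = 0.0, np.zeros(R), np.zeros(C)
for _ in range(100):
    mu = np.mean(y - a[row] - b[col])          # update the fixed intercept
    r = y - mu - b[col]                        # partial residual for factor A
    a = np.bincount(row, weights=r, minlength=R) / (np.bincount(row, minlength=R) + lam_a)
    r = y - mu - a[row]                        # partial residual for factor B
    b = np.bincount(col, weights=r, minlength=C) / (np.bincount(col, minlength=C) + lam_b)

print(f"intercept estimate: {mu:.3f}")         # each sweep costs O(N)
```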

Huang proved that every set of more than half the vertices of the $d$-dimensional hypercube $Q_d$ induces a subgraph of maximum degree at least $\sqrt{d}$, which is tight by a result of Chung, F\"uredi, Graham, and Seymour. Huang asked whether similar results can be obtained for other highly symmetric graphs. First, we present three infinite families of Cayley graphs of unbounded degree that contain induced subgraphs of maximum degree $1$ on more than half the vertices. In particular, this refutes a conjecture of Potechin and Tsang, for which the first counterexamples were shown recently by Lehner and Verret. The first family consists of dihedrants and contains a sporadic counterexample encountered earlier by Lehner and Verret. The second family consists of star graphs, which are edge-transitive Cayley graphs of the symmetric group. All members of the third family are $d$-regular and contain an induced matching on a $\frac{d}{2d-1}$-fraction of the vertices. This is the largest possible fraction and answers a question of Lehner and Verret. Second, we consider Huang's lower bound for graphs with subcubes and show that the corresponding lower bound is tight for products of Coxeter groups of type $\mathbf{A_n}$, $\mathbf{I_2}(2k+1)$, and most exceptional cases. We believe that Coxeter groups are a suitable generalization of the hypercube with respect to Huang's question. Finally, we show that induced subgraphs on more than half the vertices of Levi graphs of projective planes and of the Ramanujan graphs of Lubotzky, Phillips, and Sarnak have unbounded degree. This gives classes of Cayley graphs with properties similar to the ones provided by Huang's results. However, in contrast to Coxeter groups, these graphs have no subcubes.
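The $d=3$ case of Huang's bound is small enough to verify by brute force; the following Python snippet (an illustration, not part of the paper) checks that every induced subgraph of $Q_3$ on more than half the vertices has maximum degree at least $\lceil\sqrt{3}\rceil = 2$.

```python
from itertools import combinations

d = 3
vertices = range(2 ** d)
neighbors = {v: [v ^ (1 << i) for i in range(d)] for v in vertices}  # hypercube adjacency

worst = d  # smallest maximum induced degree seen over all majority subsets
for k in range(2 ** (d - 1) + 1, 2 ** d + 1):
    for S in combinations(vertices, k):
        S_set = set(S)
        max_deg = max(sum(u in S_set for u in neighbors[v]) for v in S)
        worst = min(worst, max_deg)

print(worst)   # prints 2, i.e. at least sqrt(3), matching Huang's bound
```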

We study continuity of the roots of nonmonic polynomials as a function of their coefficients, using only the most elementary results from an introductory course in real analysis and the theory of single-variable polynomials. Our approach gives both qualitative and quantitative results in the case that the degree of the unperturbed polynomial can change under a perturbation of its coefficients, a case that naturally occurs, for instance, in the stability theory of polynomials, singular perturbation theory, or the perturbation theory of generalized eigenvalue problems. We provide an application of our results in multivariate stability theory, which is important, for example, in the study of hyperbolic polynomials and in realizability and synthesis problems in passive electrical network theory, and which will be of general interest to mathematicians as well as physicists and engineers.
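A tiny numerical example (my own, not from the paper) of the degree-change phenomenon: perturbing the zero leading coefficient of $x+1$ to $\varepsilon x^2 + x + 1$ keeps one root near $-1$ while the new root escapes to infinity like $-1/\varepsilon$.

```python
import numpy as np

for eps in [1e-1, 1e-3, 1e-6]:
    roots = np.roots([eps, 1.0, 1.0])                  # roots of eps*x^2 + x + 1
    print(f"eps={eps:>7}  roots={np.sort(roots)}")
# As eps -> 0, one root tends to -1 (the root of the unperturbed polynomial x + 1),
# while the other behaves like -1/eps and escapes to infinity.
```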

We derive a sufficient condition for a sparse random matrix with given numbers of non-zero entries in the rows and columns to have full row rank. The result covers both matrices over finite fields with independent non-zero entries and $\{0,1\}$-matrices over the rationals. The sufficient condition is generally necessary as well.

The aim of noisy phase retrieval is to estimate a signal $\mathbf{x}_0\in \mathbb{C}^d$ from $m$ noisy intensity measurements $b_j=\left\lvert \langle \mathbf{a}_j,\mathbf{x}_0 \rangle \right\rvert^2+\eta_j, \; j=1,\ldots,m$, where $\mathbf{a}_j \in \mathbb{C}^d$ are known measurement vectors and $\eta=(\eta_1,\ldots,\eta_m)^\top \in \mathbb{R}^m$ is a noise vector. A commonly used model for estimating $\mathbf{x}_0$ is the intensity-based model $\widehat{\mathbf{x}}:=\mbox{argmin}_{\mathbf{x} \in \mathbb{C}^d} \sum_{j=1}^m \big(\left\lvert \langle \mathbf{a}_j,\mathbf{x} \rangle \right\rvert^2-b_j \big)^2$. Although many efficient algorithms have been developed to solve the intensity-based model, very few results are available about its estimation performance. In this paper, we focus on the estimation performance of the intensity-based model and prove that the estimation error satisfies $\min_{\theta\in \mathbb{R}}\|\widehat{\mathbf{x}}-e^{i\theta}\mathbf{x}_0\|_2 \lesssim \min\Big\{\frac{\sqrt{\|\eta\|_2}}{{m}^{1/4}}, \frac{\|\eta\|_2}{\| \mathbf{x}_0\|_2 \cdot \sqrt{m}}\Big\}$ under the assumption that $m \gtrsim d$ and that $\mathbf{a}_j, j=1,\ldots,m,$ are Gaussian random vectors. We also show that the error bound is sharp. For the case where $\mathbf{x}_0$ is an $s$-sparse signal, we present a similar result under the assumption of $m \gtrsim s \log (ed/s)$. To the best of our knowledge, our results are the first theoretical guarantees for the intensity-based model and its sparse version. Our proofs employ Mendelson's small ball method, which can deliver an effective lower bound on a nonnegative empirical process.
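For intuition, here is a hedged Python sketch (not the paper's method) that fits the intensity-based model by gradient descent on $\sum_j(|\langle\mathbf{a}_j,\mathbf{x}\rangle|^2-b_j)^2$ with a spectral initialization; the step size and iteration count are ad hoc choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, noise = 50, 400, 0.1
x0 = rng.standard_normal(d) + 1j * rng.standard_normal(d)
A = (rng.standard_normal((m, d)) + 1j * rng.standard_normal((m, d))) / np.sqrt(2)
b = np.abs(A @ x0) ** 2 + noise * rng.standard_normal(m)    # noisy intensity measurements

# spectral initialization: leading eigenvector of (1/m) sum_j b_j a_j a_j^H, rescaled
Y = (A.conj().T * b) @ A / m
x = np.linalg.eigh(Y)[1][:, -1] * np.sqrt(max(b.mean(), 0.0))

step = 0.1 / np.linalg.norm(x) ** 2 / m
for _ in range(500):
    Ax = A @ x
    grad = A.conj().T @ ((np.abs(Ax) ** 2 - b) * Ax)        # Wirtinger gradient (up to a constant)
    x = x - step * grad

theta = np.vdot(x, x0)                                      # align global phase before measuring error
err = np.linalg.norm(x - (np.conj(theta) / abs(theta)) * x0)
print(f"relative error up to phase: {err / np.linalg.norm(x0):.3f}")
```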

In this monograph, I introduce the basic concepts of Online Learning through a modern view of Online Convex Optimization. Here, online learning refers to the framework of regret minimization under worst-case assumptions. I present first-order and second-order algorithms for online learning with convex losses, in Euclidean and non-Euclidean settings. All the algorithms are clearly presented as instantiations of Online Mirror Descent or Follow-The-Regularized-Leader and their variants. Particular attention is given to the issue of tuning the parameters of the algorithms and learning in unbounded domains, through adaptive and parameter-free online learning algorithms. Non-convex losses are dealt with through convex surrogate losses and through randomization. The bandit setting is also briefly discussed, touching on the problem of adversarial and stochastic multi-armed bandits. These notes do not require prior knowledge of convex analysis, and all the required mathematical tools are rigorously explained. Moreover, all the proofs have been carefully chosen to be as simple and as short as possible.
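As a minimal concrete instance (mine, not an excerpt from the monograph) of Online Mirror Descent with the Euclidean regularizer, the following Python snippet runs online subgradient descent on a stream of one-dimensional absolute-value losses and reports the regret against the best fixed point in hindsight.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
ys = rng.normal(loc=2.0, scale=1.0, size=T)      # the adversary's (here: random) targets

x = 0.0
cum_loss = 0.0
for t, y in enumerate(ys, start=1):
    cum_loss += abs(x - y)                       # suffer the loss f_t(x_t) = |x_t - y_t|
    g = np.sign(x - y)                           # a subgradient of f_t at x_t
    eta = 1.0 / np.sqrt(t)                       # standard O(1/sqrt(t)) step size
    x = x - eta * g                              # OMD / online gradient descent update

best_fixed = min(np.sum(np.abs(u - ys)) for u in np.linspace(-5, 5, 1001))
print(f"regret over T={T} rounds: {cum_loss - best_fixed:.1f}")
```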

We consider the exploration-exploitation trade-off in reinforcement learning and we show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space; it is similar to other well-known methods in the literature, including Q-learning, soft Q-learning, and maximum entropy policy gradient, and is closely related to optimism and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action pair and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
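The following Python sketch is only schematic (placeholder model, rewards, and bonus; not the paper's algorithm): it mirrors the structure described above by adding a bonus to the reward, solving a soft Bellman equation for K-values by backward induction, and acting with the induced Boltzmann policy whose temperature is the risk-seeking parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, L = 5, 2, 10                                     # states, actions, horizon
P = rng.dirichlet(np.ones(S), size=(S, A))             # P[s, a] = next-state distribution (placeholder MDP)
r = rng.uniform(0.0, 1.0, size=(S, A))                 # mean rewards (placeholder)
bonus = 0.1 * np.ones((S, A))                          # placeholder exploration bonus
tau = 0.5                                              # risk-seeking parameter = Boltzmann temperature

K = np.zeros((L, S, A))
for t in reversed(range(L)):
    if t == L - 1:
        V_next = np.zeros(S)                           # no future value at the horizon
    else:
        V_next = tau * np.log(np.exp(K[t + 1] / tau).sum(axis=1))   # soft state value
    K[t] = r + bonus + P @ V_next                      # Bellman backup with the reward bonus

pi0 = np.exp(K[0] / tau)
pi0 /= pi0.sum(axis=1, keepdims=True)                  # Boltzmann exploration policy at t = 0
print(np.round(pi0, 3))
```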

Latent Dirichlet Allocation (LDA) is a topic model widely used in natural language processing and machine learning. Most approaches to training the model rely on iterative algorithms, which makes it difficult to run LDA on big corpora that are best analyzed in parallel and distributed computational environments. Indeed, current approaches to parallel inference either do not converge to the correct posterior or require storage of large dense matrices in memory. We present a novel sampler that overcomes both problems, and we show that this sampler is faster, both empirically and theoretically, than previous Gibbs samplers for LDA. We do so by employing a novel P\'olya-urn-based approximation in the sparse partially collapsed sampler for LDA. We prove that the approximation error vanishes with data size, making our algorithm asymptotically exact, a property of importance for large-scale topic models. In addition, we show, via an explicit example, that -- contrary to popular belief in the topic modeling literature -- partially collapsed samplers can be more efficient than fully collapsed samplers. We conclude by comparing the performance of our algorithm with that of other approaches on well-known corpora.
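For context on what a collapsed sampler looks like, here is a minimal fully collapsed Gibbs sampler for LDA in Python (the standard baseline, not the paper's P\'olya-urn partially collapsed sampler); the corpus, vocabulary size, and hyperparameters are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
docs = [[0, 1, 2, 1], [2, 3, 3, 0], [4, 4, 1, 3]]      # toy corpus: word ids per document
V, K, alpha, beta = 5, 2, 0.1, 0.01                    # vocabulary size, topics, hyperparameters

z = [[rng.integers(K) for _ in doc] for doc in docs]   # random initial topic for each token
ndk = np.zeros((len(docs), K))                         # document-topic counts
nkw = np.zeros((K, V))                                 # topic-word counts
nk = np.zeros(K)                                       # topic counts
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1

for _ in range(200):                                   # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]                                # remove the current assignment from the counts
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())           # resample the token's topic
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

print(ndk)                                             # document-topic counts after sampling
```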
