
In this short note, we present a refined approximation for the log-ratio of the density of the von Mises$(\mu,\kappa)$ distribution (also called the circular normal distribution) to that of the standard (linear) normal distribution when the concentration parameter $\kappa$ is large. Our work complements that of Hill (1976), who obtained a very similar approximation along with quantile couplings, using earlier Cornish-Fisher-type approximations by Hill & Davis (1968). One motivation for this note is to highlight the connection between the circular and linear normal distributions through their circular variance and (linear) variance.
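
For intuition, the zeroth-order version of this connection is the classical fact that von Mises$(\mu,\kappa)$ is close to $N(\mu,1/\kappa)$ when $\kappa$ is large. Below is a minimal numerical check of that baseline fact, not the refined approximation of the note itself; the grid and the value $\kappa=50$ are arbitrary choices for illustration.

```python
import numpy as np

# von Mises(mu, kappa) density vs. the N(mu, 1/kappa) density for large kappa.
# np.i0 is the modified Bessel function I_0 in the von Mises normalizing constant.
kappa, mu = 50.0, 0.0
theta = np.linspace(-0.5, 0.5, 5)            # angles near the mean

vm = np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * np.i0(kappa))
gauss = np.sqrt(kappa / (2 * np.pi)) * np.exp(-kappa * (theta - mu) ** 2 / 2)
log_ratio = np.log(vm) - np.log(gauss)       # small when kappa is large
```

Already at $\kappa=50$ the log-ratio stays within about $0.15$ on this grid and nearly vanishes at the mean; a refinement quantifies the remaining discrepancy.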

Related content

We study the problem of approximating a matrix $\mathbf{A}$ by a matrix with a fixed sparsity pattern (e.g., diagonal, banded, etc.), when $\mathbf{A}$ is accessed only through matrix-vector products. We describe a simple randomized algorithm that returns an approximation with the given sparsity pattern whose Frobenius-norm error is at most $(1+\varepsilon)$ times the best possible error. When each row of the desired sparsity pattern has at most $s$ nonzero entries, this algorithm requires $O(s/\varepsilon)$ non-adaptive matrix-vector products with $\mathbf{A}$. We then prove a matching lower bound. Specifically, we show that for any $s\geq 1$, there are matrices $\mathbf{A}$ such that, for any sparsity pattern with $\Theta(s)$ nonzeros per row and column, any algorithm which obtains a $(1+\varepsilon)$-accurate approximation with the given sparsity pattern from matrix-vector products requires $\Omega(s/\varepsilon)$ such products. Our bounds therefore resolve the matrix-vector product query complexity of the problem up to constant factors, even for the well-studied case of diagonal approximation, for which no previous lower bounds were known.
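
The abstract does not spell out the algorithm, but for the special case of diagonal approximation the flavor of non-adaptive matrix-vector access can be seen in the classical estimator of Bekas, Kokiopoulou and Saad: average $v \odot (\mathbf{A}v)$ over random sign vectors $v$. The sketch below is that simple baseline, not the $(1+\varepsilon)$-optimal algorithm of the paper, and the diagonally dominant test matrix is an arbitrary example.

```python
import numpy as np

def estimate_diagonal(matvec, n, num_queries, rng):
    """Estimate diag(A) using only matrix-vector products with A:
    average v * (A v) over random +/-1 vectors v, since for Rademacher v
    the expectation of v_i * (A v)_i is exactly A_ii."""
    acc = np.zeros(n)
    for _ in range(num_queries):
        v = rng.choice([-1.0, 1.0], size=n)
        acc += v * matvec(v)          # elementwise product v ⊙ (A v)
    return acc / num_queries

rng = np.random.default_rng(0)
n = 100
# Diagonally dominant test matrix: small off-diagonal noise.
A = np.diag(rng.uniform(1.0, 2.0, n)) + 0.01 * rng.standard_normal((n, n))
d_est = estimate_diagonal(lambda v: A @ v, n, 200, rng)
err = np.linalg.norm(d_est - np.diag(A))
```

The per-entry variance of this estimator scales with the off-diagonal mass of the corresponding row divided by the number of queries, which is why it works well on nearly diagonal matrices.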

We derive entropy bounds for the absolute convex hull of vectors $X= (x_1 , \ldots , x_p)\in \mathbb{R}^{n \times p} $ in $\mathbb{R}^n$ and apply this to the case where $X$ is the $d$-fold tensor matrix $$X = \underbrace{\Psi \otimes \cdots \otimes \Psi}_{d \ {\rm times} }\in \mathbb{R}^{m^d \times r^d },$$ with a given $\Psi = ( \psi_1 , \ldots , \psi_r ) \in \mathbb{R}^{m \times r} $, normalized so that $ \| \psi_j \|_2 \le 1$ for all $j \in \{1 , \ldots , r\}$. For $\epsilon >0$ we let ${\cal V} \subset \mathbb{R}^m$ be the linear space with smallest dimension $M ( \epsilon , \Psi)$ such that $ \max_{1 \le j \le r } \min_{v \in {\cal V} } \| \psi_j - v \|_2 \le \epsilon$. We call $M( \epsilon , \Psi)$ the $\epsilon$-approximation dimension of $\Psi$ and assume it is -- up to log terms -- polynomial in $\epsilon$. We show that the entropy of the absolute convex hull of the $d$-fold tensor matrix $X$ is, up to log terms, of the same order as the entropy for the case $d=1$. The results are generalized to absolute convex hulls of tensors of functions in $L_2 (\mu)$ where $\mu$ is Lebesgue measure on $[0,1]$. As an application we consider the space of functions on $[0,1]^d$ with bounded $q$-th order Vitali total variation for a given $q \in \mathbb{N}$. As a by-product, we construct an orthonormal, piecewise polynomial wavelet dictionary for functions that are well approximated by piecewise polynomials.

Positive linear programs (LPs) model many graph and operations research problems. One can compute a $(1+\epsilon)$-approximation for positive LPs, for any selected $\epsilon$, in polylogarithmic depth and near-linear work via variations of the multiplicative weight update (MWU) method. Despite extensive theoretical work on these algorithms through the decades, their empirical performance is not well understood. In this work, we implement and test an efficient parallel algorithm for solving positive LP relaxations, and apply it to graph problems such as densest subgraph, bipartite matching, vertex cover and dominating set. We accelerate the algorithm via a new step-size search heuristic. Our implementation uses sparse linear algebra optimization techniques such as fusion of vector operations and use of sparse formats. Furthermore, we devise an implicit representation for graph incidence constraints. We demonstrate parallel scalability using OpenMP threading and MPI on the Stampede2 supercomputer. We compare this implementation with exact solvers and specialized libraries for the above problems in order to evaluate MWU's practical standing, in both accuracy and performance, relative to other methods. Our results show this implementation is faster than general purpose LP solvers (IBM CPLEX, Gurobi) in all of our experiments, and in some instances, outperforms state-of-the-art specialized parallel graph algorithms.
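
As a minimal illustration of the MWU mechanism such solvers build on, here is a textbook Hedge loop solving a tiny zero-sum game, which is itself a small LP. This is not the paper's implementation; the step size `eta` and horizon `T` are ad hoc choices.

```python
import numpy as np

def mwu_game_value(A, T=5000, eta=0.02):
    """Approximate max_p min_j (p @ A)_j for a payoff matrix A in [0,1]
    via multiplicative weight updates (Hedge) for the row player."""
    m, _ = A.shape
    w = np.ones(m)                      # one weight per row strategy
    total = 0.0
    for _ in range(T):
        p = w / w.sum()                 # current mixed strategy
        j = int(np.argmin(p @ A))       # column player best-responds
        total += float(p @ A[:, j])     # payoff received this round
        w *= np.exp(eta * A[:, j])      # upweight rows that did well vs column j
    return total / T

# Matching-pennies-style game with value 1/2.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
v = mwu_game_value(A)                   # close to 0.5
```

The standard regret bound for Hedge guarantees the average payoff is within roughly $\eta + \log(m)/(\eta T)$ of the game value, which is the same $(1+\epsilon)$-approximation mechanism, in miniature, as for general positive LPs.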

A limit theorem for the largest interpoint distance of $p$ independent and identically distributed points in $\mathbb{R}^n$ to the Gumbel distribution is proved, where the number of points $p=p_n$ tends to infinity as the dimension of the points $n\to\infty$. The theorem holds under moment assumptions and corresponding conditions on the growth rate of $p$. We obtain a plethora of ancillary results such as the joint convergence of maximum and minimum interpoint distances. Using the inherent sum structure of interpoint distances, our result is generalized to maxima of dependent random walks with non-decaying correlations and we also derive point process convergence. An application of the maximum interpoint distance to testing the equality of means for high-dimensional random vectors is presented. Moreover, we study the largest off-diagonal entry of a sample covariance matrix. The proofs are based on the Chen-Stein Poisson approximation method and Gaussian approximation to large deviation probabilities.

At STOC 2002, Eiter, Gottlob, and Makino presented a technique called ordered generation that yields an $n^{O(d)}$-delay algorithm listing all minimal transversals of an $n$-vertex hypergraph of degeneracy $d$. Recently at IWOCA 2019, Conte, Kant\'e, Marino, and Uno asked whether this XP-delay algorithm parameterized by $d$ could be made FPT-delay for a weaker notion of degeneracy, or even parameterized by the maximum degree $\Delta$, i.e., whether it can be turned into an algorithm with delay $f(\Delta)\cdot n^{O(1)}$ for some computable function $f$. Moreover, and as a first step toward answering that question, they note that they could not achieve these time bounds even for the particular case of minimal dominating sets enumeration. In this paper, using ordered generation, we show that an FPT-delay algorithm can be devised for minimal transversals enumeration parameterized by the maximum degree and dimension, giving a positive and more general answer to the latter question.
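
To pin down the object being enumerated: a transversal (hitting set) of a hypergraph is a vertex set meeting every hyperedge, and it is minimal if no proper subset is also a transversal. The brute force below is exponential, nothing like the polynomial-delay ordered generation discussed above, and the example hypergraph is an arbitrary small instance.

```python
from itertools import combinations

def minimal_transversals(vertices, edges):
    """Brute-force enumeration of minimal transversals: vertex sets that
    intersect every hyperedge and are inclusion-minimal with that property."""
    hitting = []
    for k in range(len(vertices) + 1):          # by increasing size
        for S in combinations(vertices, k):
            s = set(S)
            if all(s & e for e in edges):
                # minimal iff no smaller transversal (found earlier) is inside
                if not any(t <= s for t in hitting):
                    hitting.append(s)
    return hitting

edges = [{1, 2}, {2, 3}, {3, 4}]
ts = minimal_transversals([1, 2, 3, 4], edges)  # {1,3}, {2,3}, {2,4}
```

Iterating by increasing set size makes the minimality check a one-line containment test against the transversals already found.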

We consider an inverse problem for a finite graph $(X,E)$ where we are given a subset of vertices $B\subset X$ and the distances $d_{(X,E)}(b_1,b_2)$ between all vertices $b_1,b_2\in B$. The distance of points $x_1,x_2\in X$ is defined as the minimal number of edges needed to connect them, so all edges have length 1. The inverse problem is a discrete version of the boundary rigidity problem in Riemannian geometry or the inverse travel time problem in geophysics. We show that this problem has a unique solution under certain conditions and develop quantum computing methods to solve it. We prove the following uniqueness result: when $(X,E)$ is a tree and $B$ is the set of its leaves, the graph $(X,E)$ can be uniquely determined in the class of all graphs having a fixed number of vertices. We present a quantum computing algorithm which produces a graph $(X,E)$, or one such graph, that has a given number of vertices and the required distances between vertices in $B$. To this end we develop an algorithm that takes in a qubit representation of a graph and combines it with Grover's search algorithm. The algorithm can be implemented using only $O(|X|^2)$ qubits, the same order as the number of elements in the adjacency matrix of $(X,E)$. It also achieves a quadratic improvement in computational cost over standard classical algorithms. Finally, we consider applications in the theory of computation, and show that a slight modification of the above inverse problem is NP-complete: all NP-problems can be reduced to the discrete inverse problem we consider.

We propose and study a new multilevel method for the numerical approximation of a Gibbs distribution $\pi$ on $\mathbb{R}^d$, based on (overdamped) Langevin diffusions. This method, inspired by \cite{mainPPlangevin} and \cite{giles_szpruch_invariant}, relies on a multilevel occupation measure, i.e. on an appropriate combination of $R$ occupation measures of (constant-step) Euler schemes with respective steps $\gamma_r = \gamma_0 2^{-r}$, $r=0,\ldots,R$. We first state a quantitative result under general assumptions which guarantees an \textit{$\varepsilon$-approximation} (in an $L^2$-sense) with a cost of the order $\varepsilon^{-2}$, or $\varepsilon^{-2}|\log \varepsilon|^3$ under less contractive assumptions. We then apply it to overdamped Langevin diffusions with strongly convex potential $U:\mathbb{R}^d\rightarrow\mathbb{R}$ and obtain an \textit{$\varepsilon$-complexity} of the order ${\cal O}(d\varepsilon^{-2}\log^3(d\varepsilon^{-2}))$, or ${\cal O}(d\varepsilon^{-2})$ under additional assumptions on $U$. More precisely, up to universal constants, an appropriate choice of the parameters leads to a cost controlled by ${(\bar{\lambda}_U\vee 1)^2}{\underline{\lambda}_U^{-3}} d\varepsilon^{-2}$ (where $\bar{\lambda}_U$ and $\underline{\lambda}_U$ respectively denote the supremum of the largest and the infimum of the lowest eigenvalue of $D^2U$). We finally complete these theoretical results with numerical illustrations, including comparisons to other algorithms in Bayesian learning and an opening to the non-strongly convex setting.
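
As a toy illustration of the step-size bias that a multilevel combination removes, here is a two-level ($R=1$) sketch in dimension one with $U(x)=x^2/2$, so that $\pi=\mathcal{N}(0,1)$. This is only in the spirit of the paper's estimator; the chain counts, step sizes and burn-in below are arbitrary.

```python
import numpy as np

def ula_second_moment(gamma, n_steps=4000, burn=1000, n_chains=500, seed=0):
    """Occupation-measure estimate of E[X^2] for the Euler (ULA) scheme
    X_{k+1} = (1-gamma) X_k + sqrt(2*gamma) xi_k targeting N(0,1).
    Its invariant second moment is 1/(1 - gamma/2): biased by O(gamma)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_chains)
    acc, cnt = 0.0, 0
    for k in range(n_steps):
        x = (1 - gamma) * x + np.sqrt(2 * gamma) * rng.standard_normal(n_chains)
        if k >= burn:
            acc += float(np.mean(x * x))
            cnt += 1
    return acc / cnt

gamma0 = 0.2
m0 = ula_second_moment(gamma0, seed=0)       # approx 1/(1-0.1)  = 1.111
m1 = ula_second_moment(gamma0 / 2, seed=1)   # approx 1/(1-0.05) = 1.053
combined = 2 * m1 - m0                       # first-order bias cancels
```

The combination $2\nu_{\gamma/2}-\nu_{\gamma}$ leaves a residual bias of order $\gamma^2$, which is the mechanism a full $R$-level weighted combination pushes further.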

Krylov methods rely on iterated matrix-vector products $A^k u_j$ for an $n\times n$ matrix $A$ and vectors $u_1,\ldots,u_m$. The space spanned by all iterates $A^k u_j$ admits a particular basis -- the \emph{maximal Krylov basis} -- which consists of iterates of the first vector $u_1, Au_1, A^2u_1,\ldots$, until reaching linear dependency, then iterating similarly the subsequent vectors until a basis is obtained. Finding minimal polynomials and Frobenius normal forms is closely related to computing maximal Krylov bases. The fastest way to produce these bases was, until this paper, Keller-Gehrig's 1985 algorithm whose complexity bound $O(n^\omega \log(n))$ comes from repeated squarings of $A$ and logarithmically many Gaussian eliminations. Here $\omega>2$ is a feasible exponent for matrix multiplication over the base field. We present an algorithm computing the maximal Krylov basis in $O(n^\omega\log\log(n))$ field operations when $m \in O(n)$, and even $O(n^\omega)$ as soon as $m\in O(n/\log(n)^c)$ for some fixed real $c>0$. As a consequence, we show that the Frobenius normal form together with a transformation matrix can be computed deterministically in $O(n^\omega \log\log(n)^2)$, and therefore matrix exponentiation~$A^k$ can be performed in the latter complexity if $\log(k) \in O(n^{\omega-1-\varepsilon})$, for $\varepsilon>0$. A key idea for these improvements is to rely on fast algorithms for $m\times m$ polynomial matrices of average degree $n/m$, involving high-order lifting and minimal kernel bases.
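
The object itself can be computed by a direct quadratic-cost procedure; the Gram-Schmidt sketch below merely defines the maximal Krylov basis and is far from the fast polynomial-matrix techniques of the paper. The small diagonal test matrix is an arbitrary example.

```python
import numpy as np

def maximal_krylov_basis(A, U, tol=1e-10):
    """Greedy maximal Krylov basis: for each starting vector u_j, append
    iterates u_j, A u_j, A^2 u_j, ... while they remain linearly independent
    of everything collected so far, then move on to the next vector."""
    n = A.shape[0]
    basis = []                 # the raw Krylov vectors forming the basis
    Q = np.zeros((n, 0))       # orthonormal spanning set, for independence tests
    for u in U.T:
        v = u
        while len(basis) < n:
            r = v - Q @ (Q.T @ v)          # component outside current span
            if np.linalg.norm(r) <= tol * max(1.0, np.linalg.norm(v)):
                break                      # dependent: stop iterating this u_j
            basis.append(v)
            Q = np.hstack([Q, (r / np.linalg.norm(r))[:, None]])
            v = A @ v
    return np.column_stack(basis)

A = np.diag([1.0, 2.0, 3.0])
U = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
B = maximal_krylov_basis(A, U)   # columns: u_1, A u_1, u_2
```

Here $A^2u_1$ is dependent on $u_1, Au_1$, so the procedure switches to $u_2$, exactly the stopping rule described above.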

In this contribution, we consider a zero-dimensional polynomial system in $n$ variables defined over a field $\mathbb{K}$. In the context of computing a Rational Univariate Representation (RUR) of its solutions, we address the problem of certifying a separating linear form and, once certified, calculating the RUR that comes from it, without any condition on the ideal other than being zero-dimensional. Our key result is that the RUR can be read off (as a closed formula) from lexicographic Groebner bases of bivariate elimination ideals, even when the original ideal is not in shape position, so that one can use the same core as the well-known FGLM method to propose a simple algorithm. Our first experiments, either with a very short (300-line) Maple code or with a Julia code using straightforward implementations performing only classical Gaussian reductions in addition to Groebner bases for the degree reverse lexicographic ordering, show that this new method is already competitive with sophisticated state-of-the-art implementations which do not certify the parameterizations.
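
The shape-position phenomenon the method exploits can be seen on a toy ideal with a computer algebra system. The sympy sketch below is unrelated to the certification machinery or the Maple/Julia codes mentioned above; the ideal is an arbitrary zero-dimensional example chosen to be in shape position.

```python
from sympy import symbols, groebner, degree

x, y = symbols('x y')
# Zero-dimensional ideal; the lex Groebner basis with x > y eliminates x.
gb = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

# Shape position: one univariate generator in y, and one generator of
# degree 1 in x, expressing x as a function of y (here simply x = y).
univariate = [p for p in gb.exprs if p.free_symbols == {y}]
shape = [p for p in gb.exprs if x in p.free_symbols]
```

Reading the parameterization off such a basis is immediate in shape position; the point of the abstract is that a closed formula still exists when the ideal is not in shape position.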

We study the weak recovery problem on the $r$-uniform hypergraph stochastic block model ($r$-HSBM) with two balanced communities. In this model, $n$ vertices are randomly divided into two communities, and size-$r$ hyperedges are added randomly depending on whether all vertices in the hyperedge are in the same community. The goal of the weak recovery problem is to recover a non-trivial fraction of the communities given the hypergraph. Previously, Pal and Zhu (2021) established that weak recovery is always possible above a natural threshold called the Kesten-Stigum (KS) threshold. Gu and Polyanskiy (2023) proved that the KS threshold is tight if $r\le 4$ or the expected degree $d$ is small. It remained open whether the KS threshold is tight for $r\ge 5$ and large $d$. In this paper we determine the tightness of the KS threshold for any fixed $r$ and large $d$. We prove that for $r\le 6$ and $d$ large enough, the KS threshold is tight. This shows that there is no information-computation gap in this regime. This partially confirms a conjecture of Angelini et al. (2015). For $r\ge 7$, we prove that for $d$ large enough, the KS threshold is not tight, providing more evidence supporting the existence of an information-computation gap in this regime. Furthermore, we establish asymptotic bounds on the weak recovery threshold for fixed $r$ and large $d$. We also obtain a number of results regarding the closely-related broadcasting on hypertrees (BOHT) model, including the asymptotics of the reconstruction threshold for $r\ge 7$ and impossibility of robust reconstruction at criticality.
