
The simple random walk on $\mathbb{Z}^p$ shows two drastically different behaviours depending on the value of $p$: it is recurrent when $p\in\{1,2\}$, while it escapes (with a rate increasing with $p$) as soon as $p\geq3$. This classical example illustrates that the asymptotic properties of a random walk provide some information on the structure of its state space. This paper aims to explore analogous questions on spaces made up of combinatorial objects with no algebraic structure. As a model for this problem, we take the space of unordered unlabeled rooted trees endowed with the Zhang edit distance. To this end, the paper defines the canonical unbiased random walk on the space of trees and provides an efficient algorithm to evaluate its escape rate. Compared to Zhang's algorithm, it is incremental and computes the edit distance along the random walk approximately 100 times faster on trees of size $500$ on average. The escape rate of the random walk on trees is precisely estimated using intensive numerical simulations, out of reasonable reach without the incremental algorithm.
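The recurrence/transience dichotomy on $\mathbb{Z}^p$ mentioned above is easy to observe in simulation. The sketch below (plain Python; the function name and parameters are ours, not from the paper) counts returns to the origin of the simple random walk:

```python
import random

def returns_to_origin(p, steps, trials, seed=0):
    """Average number of returns to the origin of a simple random walk
    on Z^p within the given number of steps."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos = [0] * p
        for _ in range(steps):
            axis = rng.randrange(p)           # pick a coordinate direction
            pos[axis] += rng.choice((-1, 1))  # step +1 or -1 along it
            if not any(pos):                  # back at the origin
                total += 1
    return total / trials
```

On $\mathbb{Z}$ the walk returns on the order of $\sqrt{\text{steps}}$ times, while on $\mathbb{Z}^3$ the expected total number of returns is a finite constant, so even a short run makes the dichotomy visible.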


A locating-dominating set $D$ of a graph $G$ is a dominating set of $G$ in which each vertex not in $D$ has a unique neighborhood in $D$, and the Locating-Dominating Set problem asks whether $G$ contains such a set of bounded size. This problem is known to be $\mathsf{NP}$-hard even on restricted graph classes, such as interval graphs, split graphs, and planar bipartite subcubic graphs. On the other hand, it is known to be solvable in polynomial time on some graph classes, such as trees and, more generally, graphs of bounded cliquewidth. While these results have numerous implications for the parameterized complexity of the problem, little is known in terms of kernelization under structural parameterizations. In this work, we begin filling this gap in the literature. Our first result shows that Locating-Dominating Set, when parameterized by the solution size $d$, admits no $2^{o(d \log d)}$-time algorithm unless the Exponential Time Hypothesis (ETH) fails; as a corollary, we also show that no $n^{o(d)}$-time algorithm exists under ETH, implying that the naive $\mathsf{XP}$ algorithm is essentially optimal. We present an exponential kernel for the distance-to-cluster parameterization and show that, unless $\mathsf{NP} \subseteq \mathsf{coNP}/\mathsf{poly}$, no polynomial kernel exists for Locating-Dominating Set when parameterized by vertex cover or by distance to clique. We then turn our attention to parameters bounded by neither of the previous two, and exhibit a linear kernel when parameterizing by the max leaf number; in this context, we leave the parameterization by feedback edge set as the primary open problem of our study.
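For intuition on the definition, here is a minimal brute-force checker and solver (illustrative Python with exponential running time, names ours, unrelated to the paper's algorithms):

```python
from itertools import combinations

def is_locating_dominating(adj, D):
    """Check that D dominates the graph and that every vertex outside D
    has a distinct nonempty neighborhood code within D."""
    D = set(D)
    codes = {}
    for v in adj:
        if v in D:
            continue
        code = frozenset(adj[v]) & D
        if not code or code in codes.values():
            return False  # undominated, or shares its code with another vertex
        codes[v] = code
    return True

def smallest_lds(adj):
    """Brute-force minimum locating-dominating set (exponential time)."""
    vs = list(adj)
    for k in range(1, len(vs) + 1):
        for D in combinations(vs, k):
            if is_locating_dominating(adj, D):
                return set(D)

P4 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}  # the path on four vertices
```

On the path $P_4$, for instance, $\{1,4\}$ is locating-dominating (the codes of vertices $2$ and $3$ are $\{1\}$ and $\{4\}$), while no single vertex suffices.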

We consider the problem $(\rm P)$ of exactly fitting an ellipsoid (centered at $0$) to $n$ standard Gaussian random vectors in $\mathbb{R}^d$, as $n, d \to \infty$ with $n / d^2 \to \alpha > 0$. This problem is conjectured to undergo a sharp transition: with high probability, $(\rm P)$ has a solution if $\alpha < 1/4$, while $(\rm P)$ has no solution if $\alpha > 1/4$. So far, only the trivial bound $\alpha > 1/2$ is known to imply the absence of solutions, while the sharpest results on the positive side assume $\alpha \leq \eta$ (for $\eta > 0$ a small constant) to prove that $(\rm P)$ is solvable. In this work we study universality between this problem and a so-called "Gaussian equivalent", for which the same transition can be rigorously analyzed. Our main results are twofold. On the positive side, we prove that if $\alpha < 1/4$, there exists an ellipsoid fitting all the points up to a small error, and that the lengths of its principal axes are bounded above and below. On the other hand, for $\alpha > 1/4$, we show that achieving small fitting error is not possible if the length of the ellipsoid's shortest axis does not approach $0$ as $d \to \infty$ (in particular, there does not exist any ellipsoid fit whose shortest axis length is bounded away from $0$ as $d \to \infty$). To the best of our knowledge, our work is the first rigorous result characterizing the expected phase transition in ellipsoid fitting at $\alpha = 1/4$. In a companion non-rigorous work, the first author and D. Kunisky give a general analysis of ellipsoid fitting using the replica method of statistical physics, which inspired the present work.
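The linear-algebraic core of the fitting problem is simple to sketch: the constraints $x_i^\top A x_i = 1$ are linear in the symmetric matrix $A$. The NumPy snippet below (a sketch only: it solves the least-norm linear system and does not enforce positive semidefiniteness, which an actual ellipsoid fit requires; all names are ours) illustrates the constraint system:

```python
import numpy as np

def fit_ellipsoid(X):
    """Least-norm symmetric A solving x_i^T A x_i = 1 for every row x_i of X.
    NOTE: a true ellipsoid fit also requires A to be positive semidefinite,
    which this plain linear solve does not enforce."""
    n, d = X.shape
    M = np.stack([np.outer(x, x).ravel() for x in X])  # constraints are linear in A
    a, *_ = np.linalg.lstsq(M, np.ones(n), rcond=None)
    A = a.reshape(d, d)
    return (A + A.T) / 2

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 10))  # n = 40 Gaussian points in R^10
A = fit_ellipsoid(X)               # exact fit here since n < d(d+1)/2 unknowns
```

With $n$ below the $d(d+1)/2$ degrees of freedom of a symmetric matrix, the system is generically solvable exactly; the hard part, which the paper addresses, is whether a positive semidefinite solution exists as $n/d^2 \to \alpha$.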

A widely used formulation for null hypotheses in the analysis of multivariate $d$-dimensional data is $\mathcal{H}_0: \boldsymbol{H} \boldsymbol{\theta} =\boldsymbol{y}$ with $\boldsymbol{H}\in\mathbb{R}^{m\times d}$, $\boldsymbol{\theta}\in \mathbb{R}^d$ and $\boldsymbol{y}\in\mathbb{R}^m$, where $m\leq d$. Here the unknown parameter vector $\boldsymbol{\theta}$ can, for example, be the expectation vector $\boldsymbol{\mu}$, a vector $\boldsymbol{\beta}$ containing regression coefficients, or a quantile vector $\boldsymbol{q}$. Also, the vector of nonparametric relative effects $\boldsymbol{p}$ or an upper triangular vectorized covariance matrix $\boldsymbol{v}$ are useful choices. However, even without multiplying the hypothesis by a scalar $\gamma\neq 0$, there is a multitude of possibilities to formulate the same null hypothesis with different hypothesis matrices $\boldsymbol{H}$ and corresponding vectors $\boldsymbol{y}$. Although it is a well-known fact that in the case $\boldsymbol{y}=\boldsymbol{0}$ there exists a unique projection matrix $\boldsymbol{P}$ with $\boldsymbol{H}\boldsymbol{\theta}=\boldsymbol{0}\Leftrightarrow \boldsymbol{P}\boldsymbol{\theta}=\boldsymbol{0}$, for $\boldsymbol{y}\neq \boldsymbol{0}$ such a projection matrix does not necessarily exist. Moreover, since such hypotheses are often investigated using a quadratic form as the test statistic, the corresponding projection matrices often contain zero rows; so they are not even efficient from a computational point of view. In this manuscript, we show that for the Wald-type statistic (WTS), which is one of the most frequently used quadratic forms, the choice of the concrete hypothesis matrix does not affect the test decision. Moreover, some simulations are conducted to investigate the possible influence of the hypothesis matrix on the computation time.
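The claimed invariance is easy to verify numerically for the simplest reformulation, multiplying the hypothesis $\boldsymbol{H}\boldsymbol{\theta}=\boldsymbol{y}$ on the left by an invertible matrix $C$. The NumPy sketch below uses made-up inputs and is not the manuscript's simulation code:

```python
import numpy as np

def wald(theta_hat, Sigma, H, y):
    """Wald-type statistic for H theta = y, with a Moore-Penrose inverse so
    that rank-deficient H Sigma H^T is also handled."""
    r = H @ theta_hat - y
    return float(r @ np.linalg.pinv(H @ Sigma @ H.T) @ r)

rng = np.random.default_rng(1)
d, m = 5, 3
theta_hat = rng.standard_normal(d)
A = rng.standard_normal((d, d))
Sigma = A @ A.T                       # a positive definite covariance estimate
H = rng.standard_normal((m, d))
y = rng.standard_normal(m)
C = rng.standard_normal((m, m))       # invertible with probability 1

# (C H) theta = C y formulates the same null hypothesis:
t1 = wald(theta_hat, Sigma, H, y)
t2 = wald(theta_hat, Sigma, C @ H, C @ y)
```

Algebraically, $r \mapsto Cr$ and $(C\boldsymbol{H}\Sigma\boldsymbol{H}^\top C^\top)^{-1} = C^{-\top}(\boldsymbol{H}\Sigma\boldsymbol{H}^\top)^{-1}C^{-1}$ cancel, so the two statistics coincide exactly.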

We show that every graph with pathwidth strictly less than $a$ that contains no path on $2^b$ vertices as a subgraph has treedepth at most $10ab$. The bound is best possible up to a constant factor.

We expound on some known lower bounds of the quadratic Wasserstein distance between random vectors in $\mathbb{R}^n$ with an emphasis on affine transformations that have been used in manifold learning of data in Wasserstein space. In particular, we give concrete lower bounds for rotated copies of random vectors in $\mathbb{R}^2$ with uncorrelated components by computing the Bures metric between the covariance matrices. We also derive upper bounds for compositions of affine maps which yield a fruitful variety of diffeomorphisms applied to an initial data measure. We apply these bounds to various distributions including those lying on a 1-dimensional manifold in $\mathbb{R}^2$ and illustrate the quality of the bounds. Finally, we give a framework for mimicking handwritten digit or alphabet datasets that can be applied in a manifold learning framework.
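The Bures metric invoked above has a closed form in the covariance matrices, $d_B(\Sigma_1,\Sigma_2)^2 = \operatorname{tr}\Sigma_1 + \operatorname{tr}\Sigma_2 - 2\operatorname{tr}\bigl(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2}\bigr)^{1/2}$, which a short NumPy sketch can evaluate (function names are ours, not the paper's):

```python
import numpy as np

def psd_sqrt(S):
    """Symmetric square root of a PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def bures(S1, S2):
    """Bures distance between PSD matrices; equals the quadratic Wasserstein
    distance between the centered Gaussians N(0, S1) and N(0, S2)."""
    r1 = psd_sqrt(S1)
    cross = psd_sqrt(r1 @ S2 @ r1)
    d2 = np.trace(S1) + np.trace(S2) - 2.0 * np.trace(cross)
    return float(np.sqrt(max(d2, 0.0)))
```

For commuting (e.g. diagonal) covariances the formula reduces to the Euclidean distance between the square roots of the eigenvalues, a convenient sanity check.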

We consider Gibbs distributions, which are families of probability distributions over a discrete space $\Omega$ with probability mass function of the form $\mu^\Omega_\beta(\omega) \propto e^{\beta H(\omega)}$ for $\beta$ in an interval $[\beta_{\min}, \beta_{\max}]$ and $H( \omega ) \in \{0 \} \cup [1, n]$. The partition function is the normalization factor $Z(\beta)=\sum_{\omega \in\Omega}e^{\beta H(\omega)}$. Two important parameters of these distributions are the log partition ratio $q = \log \tfrac{Z(\beta_{\max})}{Z(\beta_{\min})}$ and the counts $c_x = |H^{-1}(x)|$. These are correlated with system parameters in a number of physical applications and sampling algorithms. Our first main result is to estimate the counts $c_x$ using roughly $\tilde O( \frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and $\tilde O( \frac{n^2}{\varepsilon^2} )$ samples for integer-valued distributions (ignoring some second-order terms and parameters), and we show this is optimal up to logarithmic factors. We illustrate with improved algorithms for counting connected subgraphs, independent sets, and perfect matchings. As a key subroutine, we also develop algorithms to compute the partition function $Z$ using $\tilde O(\frac{q}{\varepsilon^2})$ samples for general Gibbs distributions and using $\tilde O(\frac{n^2}{\varepsilon^2})$ samples for integer-valued distributions.
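The identity behind such sample-based estimators is $\mathbb{E}_{\beta_1}\bigl[e^{(\beta_2-\beta_1)H}\bigr] = Z(\beta_2)/Z(\beta_1)$. Below is a toy illustration of this identity (ours, not the paper's algorithm, which controls variance far more carefully) on $\Omega=\{0,1\}^n$ with $H$ the Hamming weight, where $Z(\beta)=(1+e^\beta)^n$ is known in closed form:

```python
import math
import random

n = 12  # Omega = {0,1}^n, H = Hamming weight, counts c_x = C(n, x)

def sample_H(beta, rng):
    """Exact sample of H under mu_beta: each bit is 1 with prob e^b/(1+e^b)."""
    p = math.exp(beta) / (1.0 + math.exp(beta))
    return sum(rng.random() < p for _ in range(n))

def ratio_estimate(b1, b2, num_samples, seed=0):
    """Estimate Z(b2)/Z(b1) from samples at b1 via E_{b1}[e^{(b2-b1) H}]."""
    rng = random.Random(seed)
    return sum(math.exp((b2 - b1) * sample_H(b1, rng))
               for _ in range(num_samples)) / num_samples

true_ratio = ((1.0 + math.exp(0.5)) / 2.0) ** n  # Z(0.5)/Z(0) in closed form
est = ratio_estimate(0.0, 0.5, 20000)
```

For large $\beta_2-\beta_1$ this naive estimator has enormous variance, which is why practical algorithms (including those in the paper) move between the endpoints through a cooling schedule of intermediate temperatures.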

An $(a,b)$-coloring of a graph $G$ associates to each vertex a $b$-subset of a set of $a$ colors in such a way that the color sets of adjacent vertices are disjoint. We define general reduction tools for $(a,b)$-colorings of graphs with $2\le a/b\le 3$. In particular, using necessary and sufficient conditions for the existence of an $(a,b)$-coloring of a path with prescribed color sets on its end-vertices, more complex $(a,b)$-colorability reductions are presented. The utility of these tools is exemplified on finite triangle-free induced subgraphs of the triangular lattice, all of which are $(9,4)$-colorable according to a conjecture of McDiarmid and Reed. Computations on millions of such graphs generated randomly show that our tools allow us to find a $(9,4)$-coloring for each of them, except for one specific regular shape of graphs (which can be $(9,4)$-colored by an easy ad hoc process). We thus obtain computational evidence towards the conjecture of McDiarmid and Reed.
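For small graphs the definition can be checked directly by brute force; the sketch below (exponential-time Python, ours, unrelated to the paper's reduction tools) searches for an $(a,b)$-coloring:

```python
from itertools import combinations

def ab_colorable(edges, vertices, a, b):
    """Backtracking search for an (a,b)-coloring: each vertex gets a b-subset
    of {0,...,a-1} and adjacent vertices get disjoint subsets.
    Exponential time; intended only for tiny graphs."""
    edge_set = {frozenset(e) for e in edges}
    subsets = [frozenset(c) for c in combinations(range(a), b)]

    def extend(assignment):
        if len(assignment) == len(vertices):
            return True
        v = vertices[len(assignment)]
        for s in subsets:
            if all(assignment[u].isdisjoint(s)
                   for u in assignment if frozenset((u, v)) in edge_set):
                assignment[v] = s
                if extend(assignment):
                    return True
                del assignment[v]
        return False

    return extend({})

C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # fractional chromatic number 5/2
```

The five-cycle is the classical example: it admits a $(5,2)$-coloring (matching its fractional chromatic number $5/2$) but, being an odd cycle, no $(2,1)$-coloring.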

We study the phase reconstruction of signals $f$ belonging to complex Gaussian shift-invariant spaces $V^\infty(\varphi)$ from spectrogram measurements $|\mathcal{G} f(X)|$, where $\mathcal{G}$ is the Gabor transform and $X \subseteq \mathbb{R}^2$. An explicit reconstruction formula demonstrates that such signals can be recovered from measurements located on parallel lines in the time-frequency plane by means of a Riesz basis expansion. Moreover, connectedness assumptions on $|f|$ result in stability estimates in the situation where one aims to reconstruct $f$ on compact intervals. Driven by a recent observation that signals in Gaussian shift-invariant spaces are determined by lattice measurements [Grohs, P., Liehr, L., Injectivity of Gabor phase retrieval from lattice measurements, Appl. Comput. Harmon. Anal. 62 (2023), pp. 173-193], we prove a sampling result on the stable approximation from finitely many spectrogram samples. The resulting algorithm provides a provably stable and convergent approximation technique. In addition, it constitutes a method of approximating signals in function spaces beyond $V^\infty(\varphi)$, such as Paley-Wiener spaces.

The property that the velocity $\boldsymbol{u}$ belongs to $L^\infty(0,T;L^2(\Omega)^d)$ is an essential requirement in the definition of energy solutions of models for incompressible fluids. It is, therefore, highly desirable that the solutions produced by discretisation methods are uniformly stable in the $L^\infty(0,T;L^2(\Omega)^d)$-norm. In this work, we establish that this is indeed the case for Discontinuous Galerkin (DG) discretisations (in time and space) of non-Newtonian models with $p$-structure, assuming that $p\geq \frac{3d+2}{d+2}$; the time discretisation is equivalent to the Radau IIA implicit Runge-Kutta method. We also prove (weak) convergence of the numerical scheme to the weak solution of the system; this type of convergence result for schemes based on quadrature seems to be new. As an auxiliary result, we also derive Gagliardo-Nirenberg-type inequalities on DG spaces, which might be of independent interest.

The greedy and nearest-neighbor TSP heuristics can both have approximation factors of order $\log n$ in the worst case, even just for $n$ points in Euclidean space. In this note, we show that this approximation factor is only realized when the optimal tour is unusually short. In particular, for points from any fixed $d$-Ahlfors regular metric space (which includes any $d$-manifold, like the $d$-cube $[0,1]^d$ when $d$ is an integer, but also fractals of dimension $d$ when $d$ is real-valued), our results imply that the greedy and nearest-neighbor heuristics have \emph{additive} errors from optimal on the order of the \emph{optimal} tour length through \emph{random} points in the same space, for $d>1$.
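For reference, the nearest-neighbor heuristic discussed above is only a few lines (a plain-Python sketch, $O(n^2)$, adequate for small instances; function names are ours):

```python
import math

def nearest_neighbor_tour(points):
    """Nearest-neighbor TSP heuristic: start at points[0], repeatedly hop
    to the closest unvisited point; the tour implicitly closes at the end."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[tour[i - 1]], points[tour[i]])
               for i in range(len(tour)))
```

On the four corners of the unit square the heuristic happens to return the optimal tour of length $4$; the adversarial instances behind the $\log n$ factor are far less benign.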
