Given a graph $G$, when is it possible to reconstruct with high probability a uniformly random colouring of its vertices in $r$ colours from its $k$-deck, i.e. the collection of its induced (coloured) subgraphs on $k$ vertices? In this paper, we reconstruct random colourings of lattices and random graphs. Recently, Narayanan and Yap proved that, for $d=2$, with high probability a random colouring of vertices of a $d$-dimensional $n$-lattice ($n\times n$ grid) is reconstructible from its deck of all $k$-subgrids ($k\times k$ grids) if $k\geq\sqrt{2\log_2 n}+\frac{3}{4}$ and is not reconstructible if $k<\sqrt{2\log_2 n}-\frac{1}{4}$. We prove that the same "two-point concentration" result for the minimum size of subgrids that determine the entire colouring holds true in any dimension $d\geq 2$. We also prove that with high probability a uniformly random $r$-colouring of the vertices of the random graph $G(n,1/2)$ is reconstructible from its full $k$-deck if $k\geq 2\log_2 n+8$ and is not reconstructible if $k\leq\sqrt{2\log_2 n}$. We further show that the colour reconstruction algorithm for random graphs can be modified and used for graph reconstruction: we prove that with high probability $G(n,1/2)$ is reconstructible from its full $k$-deck if $k\geq 2\log_2 n+11$ (while it is not reconstructible with high probability if $k\leq 2\sqrt{\log_2 n}$). This significantly improves the best known upper bound for the minimum size of subgraphs in a deck that can be used to reconstruct the random graph with high probability.
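As a concrete illustration of the $k$-deck of a coloured grid (a minimal sketch, not the paper's reconstruction algorithm; the helper `k_deck` is hypothetical), the deck is the multiset of all $k\times k$ coloured subgrids. For $k=1$ it only records colour counts, so any two colourings with the same counts are indistinguishable:

```python
from collections import Counter

def k_deck(grid, k):
    """Multiset of all k-by-k subgrids (as tuples) of a coloured n-by-n grid."""
    n = len(grid)
    deck = Counter()
    for i in range(n - k + 1):
        for j in range(n - k + 1):
            sub = tuple(tuple(grid[i + a][j + b] for b in range(k)) for a in range(k))
            deck[sub] += 1
    return deck

# Two different 2-colourings of a 2x2 grid with the same colour counts
# have identical 1-decks, so k = 1 never suffices for reconstruction.
g1 = [[0, 1], [1, 0]]
g2 = [[1, 0], [0, 1]]
assert k_deck(g1, 1) == k_deck(g2, 1)   # same 1-deck
assert k_deck(g1, 2) != k_deck(g2, 2)   # the full grid tells them apart
```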


Let $(M,g)$ be a Riemannian manifold. If $\mu$ is a probability measure on $M$ given by a continuous density function, one would expect the Fr\'{e}chet means of data-samples $Q=(q_1,q_2,\dots, q_N)\in M^N$, with respect to $\mu$, to behave ``generically''; e.g. the probability that the Fr\'{e}chet mean set $\mbox{FM}(Q)$ has any elements that lie in a given, positive-codimension submanifold, should be zero for any $N\geq 1$. Even this simplest instance of genericity does not seem to have been proven in the literature, except in special cases. The main result of this paper is a general, and stronger, genericity property: given i.i.d. absolutely continuous $M$-valued random variables $X_1,\dots, X_N$, and a subset $A\subset M$ of volume-measure zero, $\mbox{Pr}\left\{\mbox{FM}(\{X_1,\dots,X_N\})\subset M\backslash A\right\}=1.$ We also establish a companion theorem for equivariant Fr\'{e}chet means, defined when $(M,g)$ arises as the quotient of a Riemannian manifold $(\widetilde{M},\tilde{g})$ by a free, isometric action of a finite group. The equivariant Fr\'{e}chet means lie in $\widetilde{M}$, but, as we show, project down to the ordinary Fr\'{e}chet sample means, and enjoy a similar genericity property. Both these theorems are proven as consequences of a purely geometric (and quite general) result that constitutes the core mathematics in this paper: If $A\subset M$ has volume zero in $M$, then the set $\{Q\in M^N : \mbox{FM}(Q) \cap A\neq\emptyset\}$ has volume zero in $M^N$. We conclude the paper with an application to partial scaling-rotation means, a type of mean for symmetric positive-definite matrices.
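To make the notion of a Fréchet mean concrete (a hypothetical sketch on the simplest manifold, the circle $S^1$, via brute-force grid search; nothing here reproduces the paper's genericity argument), the Fréchet mean minimizes the sum of squared geodesic distances to the sample points:

```python
import numpy as np

def frechet_mean_circle(angles, grid_size=100000):
    """Grid-search the Frechet mean of points on S^1, where the geodesic
    distance between angles a, b is the wrapped difference, at most pi."""
    candidates = np.linspace(0.0, 2 * np.pi, grid_size, endpoint=False)
    d = np.abs(candidates[:, None] - np.asarray(angles)[None, :])
    d = np.minimum(d, 2 * np.pi - d)           # geodesic (angular) distance
    frechet_functional = (d ** 2).sum(axis=1)  # sum of squared distances
    return candidates[np.argmin(frechet_functional)]

# For a tight cluster of points the Frechet mean agrees with the
# arithmetic mean of the angles.
m = frechet_mean_circle([0.1, 0.2, 0.3])
assert abs(m - 0.2) < 1e-3
```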

Join-preserving maps on the discrete time scale $\omega^+$, referred to as time warps, have been proposed as graded modalities that can be used to quantify the growth of information in the course of program execution. The set of time warps forms a simple distributive involutive residuated lattice -- called the time warp algebra -- that is equipped with residual operations relevant to potential applications. In this paper, we show that although the time warp algebra generates a variety that lacks the finite model property, it nevertheless has a decidable equational theory. We also describe an implementation of a procedure for deciding equations in this algebra, written in the OCaml programming language, that makes use of the Z3 theorem prover.

Probability measures on the sphere form an important class of statistical models and are used, for example, in modeling directional data or shapes. Due to their widespread use, but also as an algorithmic building block, efficient sampling of distributions on the sphere is highly desirable. We propose a shrinkage-based and an idealized geodesic slice sampling Markov chain, designed to generate approximate samples from distributions on the sphere. In particular, the shrinkage-based algorithm works in any dimension, is straightforward to implement and has no tuning parameters. We verify reversibility and show that under weak regularity conditions geodesic slice sampling is uniformly ergodic. Numerical experiments show that the proposed slice samplers achieve excellent mixing on challenging targets including the Bingham distribution and mixtures of von Mises-Fisher distributions. In these settings our approach outperforms standard samplers such as random-walk Metropolis-Hastings and Hamiltonian Monte Carlo.
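A minimal sketch of a shrinkage-based slice step along a random great circle through the current point (in the spirit of the proposed sampler, but simplified; `geodesic_slice_step` and the elliptical-slice-style bracket shrinkage are this sketch's assumptions, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def geodesic_slice_step(x, log_p, rng):
    """One shrinkage-based slice step along a random great circle through x
    on the unit sphere; the bracket shrinks towards theta = 0, which
    recovers x itself, so the loop always terminates."""
    v = rng.standard_normal(x.shape)
    v -= (v @ x) * x                           # project onto tangent space at x
    v /= np.linalg.norm(v)
    log_t = log_p(x) + np.log(rng.uniform())   # slice level
    theta = rng.uniform(0.0, 2 * np.pi)
    lo, hi = theta - 2 * np.pi, theta
    while True:
        y = x * np.cos(theta) + v * np.sin(theta)   # geodesic through x
        if log_p(y) > log_t:
            return y
        if theta < 0:                          # shrink the bracket towards 0
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy target: von Mises-Fisher density proportional to exp(kappa * <mu, x>).
mu = np.array([0.0, 0.0, 1.0])
log_p = lambda x: 5.0 * (mu @ x)
x = np.array([1.0, 0.0, 0.0])
samples = []
for _ in range(2000):
    x = geodesic_slice_step(x, log_p, rng)
    samples.append(x)
mean_z = np.mean([s[2] for s in samples])
assert mean_z > 0.5                            # mass concentrates around mu
```

Note that the step requires no step-size or proposal-scale tuning, which is the practical appeal of slice sampling highlighted in the abstract.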

This paper studies the extreme singular values of non-harmonic Fourier matrices. Such a matrix of size $m\times s$ can be written as $\Phi=[ e^{-2\pi i j x_k}]_{j=0,1,\dots,m-1, k=1,2,\dots,s}$ for some set $\mathcal{X}=\{x_k\}_{k=1}^s$. The main results provide explicit lower bounds for the smallest singular value of $\Phi$ under the assumption $m\geq 6s$ and without any restrictions on $\mathcal{X}$. They show that for an appropriate scale $\tau$ determined by a density criterion, interactions between elements in $\mathcal{X}$ at scales smaller than $\tau$ are most significant, and the bounds depend on the multiscale structure of $\mathcal{X}$ at fine scales, while distances larger than $\tau$ are less important and only contribute through the local sparsity of the far-away points. Theoretical and numerical comparisons show that the main results significantly improve upon classical bounds and achieve the same rate that was previously discovered for more restrictive settings.
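A small numerical illustration of the setup (a hedged sketch of the matrix $\Phi$ from the abstract's definition, not the paper's bounds): for equispaced nodes the columns are orthogonal and all singular values equal $\sqrt{m}$, while clustering two nodes at a fine scale collapses the smallest singular value.

```python
import numpy as np

def fourier_matrix(x, m):
    """Phi[j, k] = exp(-2*pi*i * j * x_k), j = 0, ..., m-1."""
    j = np.arange(m)[:, None]
    return np.exp(-2j * np.pi * j * np.asarray(x)[None, :])

s = 4
m = 6 * s                               # the regime m >= 6s from the abstract
# Equispaced nodes: columns are exactly orthogonal (geometric sums vanish
# because m is a multiple of s), so every singular value is sqrt(m).
x_eq = np.arange(s) / s
sv = np.linalg.svd(fourier_matrix(x_eq, m), compute_uv=False)
assert np.allclose(sv, np.sqrt(m))

# Two nodes separated by 1e-4 make two columns nearly parallel, so the
# smallest singular value becomes tiny relative to sqrt(m).
x_cl = [0.0, 1e-4, 0.5, 0.75]
sv_cl = np.linalg.svd(fourier_matrix(x_cl, m), compute_uv=False)
assert sv_cl[-1] < 0.1 * np.sqrt(m)
```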

For an infinite class of finite graphs of unbounded size, we define a limit object, to be called wide limit, relative to some computationally restricted class of functions. The properties of the wide limit then reflect how a computationally restricted viewer "sees" a generic instance from the class. The construction uses arithmetic forcing with random variables [10]. We prove sufficient conditions for universal and existential sentences to be valid in the limit, provide several examples, and prove that such a limit object can then be expanded to a model of weak arithmetic. We then take the wide limit of all finite pointed paths to obtain a model of arithmetic where the problem OntoWeakPigeon is total but Leaf (the complete problem for $\textbf{PPA}$) is not. This logical separation of the oracle classes of total NP search problems in our setting implies that Leaf is not reducible to OntoWeakPigeon even if some errors are allowed in the reductions.

Recent experiments have shown that, often, when training a neural network with gradient descent (GD) with a step size $\eta$, the operator norm of the Hessian of the loss grows until it approximately reaches $2/\eta$, after which it fluctuates around this value. The quantity $2/\eta$ has been called the "edge of stability" based on consideration of a local quadratic approximation of the loss. We perform a similar calculation to arrive at an "edge of stability" for Sharpness-Aware Minimization (SAM), a variant of GD which has been shown to improve its generalization. Unlike the case for GD, the resulting SAM-edge depends on the norm of the gradient. Using three deep learning training tasks, we see empirically that SAM operates on the edge of stability identified by this analysis.
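The local-quadratic calculation behind the $2/\eta$ threshold can be checked directly (a minimal one-dimensional sketch, not the paper's SAM analysis): GD on $f(x)=ax^2/2$ has update $x \leftarrow (1-\eta a)x$, which converges exactly when the curvature $a$ is below $2/\eta$.

```python
# GD on the quadratic f(x) = a*x^2/2 has update x <- (1 - eta*a)*x, so it is
# stable exactly when the Hessian (here the scalar curvature a) is below 2/eta.
def gd(a, eta, x0=1.0, steps=200):
    x = x0
    for _ in range(steps):
        x -= eta * a * x            # gradient step on f(x) = a*x^2/2
    return abs(x)

eta = 0.1                           # edge of stability at 2/eta = 20
assert gd(a=19.0, eta=eta) < 1e-6   # curvature below 2/eta: converges
assert gd(a=21.0, eta=eta) > 1e3    # curvature above 2/eta: diverges
```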

We propose an original approach to investigate the linearity of Gray codes obtained from $\mathbb{Z}_{2^L}$-additive codes by introducing two related binary codes: the associated and the concatenated. Once they are defined, one can perform a straightforward analysis of the Schur product between their codewords and determine the linearity of the respective Gray code. This work expands on earlier contributions from the literature, where linearity was established with respect to the kernel of a code and/or operations on $\mathbb{Z}_{2^L}$. The $\mathbb{Z}_{2^L}$-additive codes to which we apply the Gray map and whose linearity we check are the well-known Hadamard, simplex, MacDonald, Kerdock, and Preparata codes. We also present a family of Reed-Muller codes that yield linear Gray codes, and perform a computational verification of our proposed method applied to other $\mathbb{Z}_{2^L}$-additive codes.
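To ground the question the abstract asks (a toy sketch for $L=2$, i.e. the classical Gray map on $\mathbb{Z}_4$; the brute-force closure check below is an illustration only and does not reproduce the paper's associated/concatenated-code criterion):

```python
from itertools import product

# Classical Gray map on Z_4.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray(word):
    return tuple(b for u in word for b in GRAY[u])

def z4_code(gens):
    """All Z_4-linear combinations of the generator rows."""
    n = len(gens[0])
    return {tuple(sum(c * g[i] for c, g in zip(coeffs, gens)) % 4
                  for i in range(n))
            for coeffs in product(range(4), repeat=len(gens))}

def gray_image_is_linear(gens):
    """Brute-force check: is the Gray image closed under XOR?"""
    image = {gray(w) for w in z4_code(gens)}
    xor = lambda a, b: tuple(x ^ y for x, y in zip(a, b))
    return all(xor(a, b) in image for a in image for b in image)

# The repetition-style code {00, 11, 22, 33} has a linear Gray image,
assert gray_image_is_linear([(1, 1)])
# while this two-generator code does not (its Gray image is nonlinear).
assert not gray_image_is_linear([(1, 0, 1), (0, 1, 1)])
```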

We show that under minimal assumptions on a random vector $X\in\mathbb{R}^d$ and with high probability, given $m$ independent copies of $X$, the coordinate distribution of each vector $(\langle X_i,\theta \rangle)_{i=1}^m$ is dictated by the distribution of the true marginal $\langle X,\theta \rangle$. Specifically, we show that with high probability, \[\sup_{\theta \in S^{d-1}} \left( \frac{1}{m}\sum_{i=1}^m \left|\langle X_i,\theta \rangle^\sharp - \lambda^\theta_i \right|^2 \right)^{1/2} \leq c \left( \frac{d}{m} \right)^{1/4},\] where $\lambda^{\theta}_i = m\int_{(\frac{i-1}{m}, \frac{i}{m}]} F_{ \langle X,\theta \rangle }^{-1}(u)\,du$ and $a^\sharp$ denotes the monotone non-decreasing rearrangement of $a$. Moreover, this estimate is optimal. The proof follows from a sharp estimate on the worst Wasserstein distance between a marginal of $X$ and its empirical counterpart, $\frac{1}{m} \sum_{i=1}^m \delta_{\langle X_i, \theta \rangle}$.
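The two ingredients of the displayed bound can be computed directly (a minimal numerical sketch of the definitions only; `lam` and its midpoint-rule quadrature are this sketch's assumptions): the monotone non-decreasing rearrangement $a^\sharp$ is just sorting, and $\lambda^\theta_i$ averages the quantile function $F^{-1}$ over the interval $(\frac{i-1}{m}, \frac{i}{m}]$.

```python
import numpy as np

def lam(m, Finv, grid=1000):
    """lambda_i = m * integral of F^{-1} over ((i-1)/m, i/m], via midpoint rule
    (the factor m cancels the interval length 1/m, leaving an average)."""
    out = np.empty(m)
    for i in range(1, m + 1):
        u = ((i - 1) + (np.arange(grid) + 0.5) / grid) / m
        out[i - 1] = Finv(u).mean()
    return out

# For a Uniform(0,1) marginal, F^{-1}(u) = u, so lambda_i = (2i - 1) / (2m)
# (the midpoint rule is exact for a linear integrand).
m = 8
lmb = lam(m, lambda u: u)
assert np.allclose(lmb, (2 * np.arange(1, m + 1) - 1) / (2 * m))

# a^sharp, the monotone non-decreasing rearrangement of a, is just sorting.
a = np.array([0.9, 0.1, 0.5])
assert np.array_equal(np.sort(a), np.array([0.1, 0.5, 0.9]))
```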

We study a class of functional problems reducible to computing $f^{(n)}(x)$ for inputs $n$ and $x$, where $f$ is a polynomial-time bijection. As we prove, the definition is robust against variations in the type of reduction used in its definition, and in whether we require $f$ to have a polynomial-time inverse or to be computable by a reversible logic circuit. These problems are characterized by the complexity class $\mathsf{FP}^{\mathsf{PSPACE}}$, and include natural $\mathsf{FP}^{\mathsf{PSPACE}}$-complete problems in circuit complexity, cellular automata, graph algorithms, and the dynamical systems described by piecewise-linear transformations.
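A toy instance of the objects involved (a hypothetical sketch: an affine bijection on $\{0,\dots,2^{b}-1\}$ with a polynomial-time inverse, iterated naively; the hardness in the abstract comes from $n$ being given in binary, which the naive loop below of course does not capture):

```python
def f(x, bits=8):
    """A toy polynomial-time bijection on {0, ..., 2^bits - 1}: an affine
    map, invertible because 5 is odd (hence coprime to 2^bits)."""
    return (5 * x + 3) % (1 << bits)

def f_inv(x, bits=8):
    inv5 = pow(5, -1, 1 << bits)       # modular inverse of 5 mod 2^bits
    return (inv5 * (x - 3)) % (1 << bits)

def iterate(g, n, x):
    """Compute g^(n)(x) by naive iteration (exponential in the bit-length
    of n, unlike the FP^PSPACE characterization in the abstract)."""
    for _ in range(n):
        x = g(x)
    return x

x, n = 7, 1000
y = iterate(f, n, x)
assert iterate(f_inv, n, y) == x       # iterating the inverse recovers x
```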

A set of vertices of a graph $G$ is said to be decycling if its removal leaves an acyclic subgraph. The size of a smallest decycling set is the decycling number of $G$. Generally, at least $\lceil(n+2)/4\rceil$ vertices have to be removed in order to decycle a cubic graph on $n$ vertices. In 1979, Payan and Sakarovitch proved that the decycling number of a cyclically $4$-edge-connected cubic graph of order $n$ equals $\lceil (n+2)/4\rceil$. In addition, they characterised the structure of minimum decycling sets and their complements. If $n\equiv 2\pmod4$, then $G$ has a decycling set which is independent and its complement induces a tree. If $n\equiv 0\pmod4$, then one of two possibilities occurs: either $G$ has an independent decycling set whose complement induces a forest of two trees, or the decycling set is near-independent (which means that it induces a single edge) and its complement induces a tree. In this paper we strengthen the result of Payan and Sakarovitch by proving that the latter possibility (a near-independent set and a tree) can always be guaranteed. Moreover, we relax the assumption of cyclic $4$-edge-connectivity to a significantly weaker condition expressed through the canonical decomposition of 3-connected cubic graphs into cyclically $4$-edge-connected ones. Our methods substantially use a surprising and seemingly distant relationship between the decycling number and the maximum genus of a cubic graph.
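The definitions above can be checked by brute force on small cubic graphs (a hypothetical sketch; the forest test and exhaustive search are illustrative only): the decycling number is the smallest $|S|$ such that $G-S$ is acyclic, and for the Petersen graph (cyclically $5$-edge-connected, $n=10$) the Payan-Sakarovitch value $\lceil(n+2)/4\rceil=3$ is attained.

```python
from itertools import combinations

def is_forest(vertices, edges):
    """Union-find acyclicity test: a graph is a forest iff no edge joins
    two vertices already in the same component."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # this edge closes a cycle
        parent[ru] = rv
    return True

def decycling_number(vertices, edges):
    """Smallest number of vertices whose removal leaves a forest."""
    vertices = list(vertices)
    for size in range(len(vertices) + 1):
        for s in combinations(vertices, size):
            kept = set(vertices) - set(s)
            sub = [e for e in edges if e[0] in kept and e[1] in kept]
            if is_forest(kept, sub):
                return size

# K4: n = 4 and ceil((n + 2) / 4) = 2.
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
assert decycling_number(range(4), k4) == 2

# Petersen graph: outer 5-cycle, inner pentagram, spokes; bound ceil(12/4) = 3.
outer = [(i, (i + 1) % 5) for i in range(5)]
inner = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
spokes = [(i, i + 5) for i in range(5)]
assert decycling_number(range(10), outer + inner + spokes) == 3
```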
