
We study whether a given graph can be realized as an adjacency graph of the polygonal cells of a polyhedral surface in $\mathbb{R}^3$. We show that every graph is realizable as a polyhedral surface with arbitrary polygonal cells, and that this is not true if we require the cells to be convex. In particular, if the given graph contains $K_5$, $K_{5,81}$, or any nonplanar $3$-tree as a subgraph, no such realization exists. On the other hand, all planar graphs, $K_{4,4}$, and $K_{3,5}$ can be realized with convex cells. The same holds for any subdivision of any graph where each edge is subdivided at least once, and, by a result from McMullen et al. (1983), for any hypercube. Our results have implications for the maximum density of graphs describing polyhedral surfaces with convex cells: The realizability of hypercubes shows that the maximum number of edges over all realizable $n$-vertex graphs is in $\Omega(n \log n)$. From the non-realizability of $K_{5,81}$, we obtain that any realizable $n$-vertex graph has $O(n^{9/5})$ edges. As such, these graphs can be considerably denser than planar graphs, but not arbitrarily dense.
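
To see the $\Omega(n \log n)$ lower bound concretely: the $d$-dimensional hypercube graph has $n = 2^d$ vertices and $d \cdot 2^{d-1} = \frac{n}{2} \log_2 n$ edges, so the realizability of hypercubes already yields realizable $n$-vertex graphs with $\Theta(n \log n)$ edges.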

Related content

Wasserstein gradient flows on probability measures have found a host of applications in various optimization problems. They typically arise as the continuum limit of exchangeable particle systems evolving by some mean-field interaction involving a gradient-type potential. However, in many problems, such as in multi-layer neural networks, the so-called particles are edge weights on large graphs whose nodes are exchangeable. Such large graphs are known to converge to continuum limits called graphons as their size grows to infinity. We show that the Euclidean gradient flow of a suitable function of the edge-weights converges to a novel continuum limit given by a curve on the space of graphons that can be appropriately described as a gradient flow or, more technically, a curve of maximal slope. Several natural functions on graphons, such as homomorphism functions and the scalar entropy, are covered by our set-up, and the examples have been worked out in detail.
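
A crude finite-$n$ illustration of the Euclidean gradient flow on edge weights is easy to write down for a homomorphism function; the sketch below runs gradient ascent on the triangle homomorphism density of a symmetric weight matrix (the choice of function, step size, and clipping to $[0,1]$ are ours, purely for illustration, and no claim is made about the graphon limit):

```python
import numpy as np

def triangle_density(W):
    """Homomorphism density of the triangle in a symmetric edge-weight matrix W."""
    n = W.shape[0]
    return np.trace(W @ W @ W) / n**3

def gradient_step(W, lr=1e-3):
    """One Euclidean gradient ascent step on the edge weights of trace(W^3).

    For symmetric W, d trace(W^3) / d W_ij = 3 (W^2)_ij; after the step the
    weights are clipped back to [0, 1] and re-symmetrised.
    """
    W_new = np.clip(W + lr * 3.0 * (W @ W), 0.0, 1.0)
    np.fill_diagonal(W_new, 0.0)               # no self-loops
    return (W_new + W_new.T) / 2

rng = np.random.default_rng(0)
U = np.triu(rng.random((50, 50)), 1)
W = U + U.T                                    # random symmetric weights in [0, 1]
for _ in range(200):
    W = gradient_step(W)
print(f"triangle density after flow: {triangle_density(W):.3f}")
```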

Codes in the Damerau--Levenshtein metric have been extensively studied recently owing to their applications in DNA-based data storage. In particular, Gabrys, Yaakobi, and Milenkovic (2017) designed a length-$n$ code correcting a single deletion and $s$ adjacent transpositions with at most $(1+2s)\log n$ bits of redundancy. In this work, we consider a new setting where both asymmetric adjacent transpositions (also known as right-shifts or left-shifts) and deletions may occur. We present several constructions of codes correcting these errors in various cases. In particular, we design a code correcting a single deletion, $s^+$ right-shift, and $s^-$ left-shift errors with at most $(1+s)\log (n+s+1)+1$ bits of redundancy where $s=s^{+}+s^{-}$. In addition, we investigate codes correcting $t$ $0$-deletions, $s^+$ right-shift, and $s^-$ left-shift errors with both uniquely-decoding and list-decoding algorithms. Our main contribution here is the construction of a list-decodable code with list size $O(n^{\min\{s+1,t\}})$ and with at most $(\max \{t,s+1\}) \log n+O(1)$ bits of redundancy, where $s=s^{+}+s^{-}$. Finally, we construct both non-systematic and systematic codes for correcting blocks of $0$-deletions with $\ell$-limited-magnitude and $s$ adjacent transpositions.
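
For a feel for the redundancy bounds quoted above, the snippet below evaluates them for sample parameters (logarithms taken base 2, and the parameter choices are ours, purely for illustration):

```python
import math

def redundancy_deletion_and_shifts(n, s_plus, s_minus):
    """Bound (1 + s) * log2(n + s + 1) + 1 quoted above for a single deletion,
    s_plus right-shift and s_minus left-shift errors, with s = s_plus + s_minus."""
    s = s_plus + s_minus
    return (1 + s) * math.log2(n + s + 1) + 1

def redundancy_gabrys_yaakobi_milenkovic(n, s):
    """Bound (1 + 2s) * log2(n) of Gabrys, Yaakobi, and Milenkovic (2017) for a
    single deletion and s adjacent transpositions."""
    return (1 + 2 * s) * math.log2(n)

n = 1021
print(redundancy_deletion_and_shifts(n, 1, 1))       # 31.0 bits
print(redundancy_gabrys_yaakobi_milenkovic(n, 2))    # about 49.9 bits
```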

It is well known that the Euler method for approximating the solutions of a random ordinary differential equation $\mathrm{d}X_t/\mathrm{d}t = f(t, X_t, Y_t)$ driven by a stochastic process $\{Y_t\}_t$ with $\theta$-H\"older sample paths is estimated to be of strong order $\theta$ with respect to the time step, provided $f=f(t, x, y)$ is sufficiently regular and with suitable bounds. Here, it is proved that, in many typical cases, further conditions on the noise can be exploited so that the strong convergence is actually of order 1, regardless of the H\"older regularity of the sample paths. This applies for instance to additive or multiplicative It\^o process noises (such as Wiener, Ornstein-Uhlenbeck, and geometric Brownian motion processes); to point-process noises (such as Poisson point processes and Hawkes self-exciting processes, which even have jump-type discontinuities); and to transport-type processes with sample paths of bounded variation. The result is based on a novel approach, estimating the global error as an iterated integral over both large and small mesh scales, and switching the order of integration to move the critical regularity to the large scale. The work is complemented with numerical simulations illustrating the strong order 1 convergence in those cases, and with an example with fractional Brownian motion noise with Hurst parameter $0 < H < 1/2$ for which the order of convergence is $H + 1/2$, hence lower than the attained order 1 in the examples above, but still higher than the order $H$ of convergence expected from previous works.
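
A minimal sketch of the Euler scheme for a random ODE driven by an Ornstein-Uhlenbeck path, together with a simple refinement study against a fine-grid reference; the right-hand side $f$ and all parameters are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def ou_path(t_grid, theta=1.0, sigma=1.0):
    """Sample an Ornstein-Uhlenbeck path on a given time grid (exact transitions)."""
    y = np.empty(len(t_grid))
    y[0] = 0.0
    for k in range(1, len(t_grid)):
        dt = t_grid[k] - t_grid[k - 1]
        mean = y[k - 1] * np.exp(-theta * dt)
        var = sigma**2 * (1 - np.exp(-2 * theta * dt)) / (2 * theta)
        y[k] = mean + np.sqrt(var) * rng.standard_normal()
    return y

def euler_rode(f, x0, t_grid, y_path):
    """Euler scheme for the random ODE dX/dt = f(t, X, Y_t) along a fixed noise path."""
    x = np.empty(len(t_grid))
    x[0] = x0
    for k in range(1, len(t_grid)):
        dt = t_grid[k] - t_grid[k - 1]
        x[k] = x[k - 1] + dt * f(t_grid[k - 1], x[k - 1], y_path[k - 1])
    return x

# Illustrative right-hand side: linear in X with an OU-modulated forcing term.
f = lambda t, x, y: -x + np.sin(y)

T, n_fine = 1.0, 2**14
t_fine = np.linspace(0.0, T, n_fine + 1)
y_fine = ou_path(t_fine)
x_ref = euler_rode(f, 1.0, t_fine, y_fine)[-1]      # fine-grid reference solution

for n in (2**6, 2**7, 2**8):
    stride = n_fine // n
    x_coarse = euler_rode(f, 1.0, t_fine[::stride], y_fine[::stride])[-1]
    print(f"dt = {T/n:.5f}   error = {abs(x_coarse - x_ref):.2e}")
```

A proper strong-order estimate would average the error at the final time over many independent noise paths; the single-path refinement above only gives a rough feel for the convergence.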

The problem of finding the connected components of a graph is considered. Algorithms for this problem are also used to solve related graph problems such as finding articulation points, bridges, and maximin bridges. A natural approach to solving this problem is breadth-first search, implementations of which are available in software libraries designed to make the most of modern computer architectures. We present an approach based on perturbations of the adjacency matrix of a graph. We check whether the graph is connected or not by comparing the solutions of two systems of linear algebraic equations (SLAE): the first SLAE with a perturbed adjacency matrix of the graph and the second SLAE with the unperturbed matrix. This approach makes it possible to use efficient numerical implementations of SLAE solution methods to solve connectivity problems on graphs. The iterations of iterative numerical methods for solving such SLAE can be regarded as a graph traversal. Generally speaking, this traversal is not equivalent to the traversal carried out by breadth-first search. An algorithm for finding the connected components of a graph using such a traversal is presented. For any instance of the problem, this algorithm has no greater computational complexity than breadth-first search, and for most instances it has lower complexity.
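
The abstract does not spell out the perturbation, but the general idea can be illustrated as follows (a sketch of one possible realization, not the algorithm of the paper): with $M = I - cA$ and $c$ below the reciprocal of the maximum degree, $(M^{-1})_{ij} \neq 0$ exactly when $i$ and $j$ lie in the same connected component, so perturbing a single diagonal entry changes the solution of $Mx = b$ only on the component of the perturbed vertex.

```python
import numpy as np

def same_component_as_vertex0(adj, c=None, eps=1.0, tol=1e-9):
    """Illustrative SLAE-based connectivity test (a sketch of the general idea,
    not the algorithm of the paper).

    With M = I - c*A and c < 1/max_degree, M is invertible and (M^{-1})_{ij} != 0
    exactly when i and j lie in the same connected component.  Perturbing the
    diagonal entry of vertex 0 therefore changes the solution of M x = b only
    at the vertices reachable from vertex 0.
    """
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    if c is None:
        c = 0.5 / max(A.sum(axis=1).max(), 1.0)
    M = np.eye(n) - c * A
    M_pert = M.copy()
    M_pert[0, 0] += eps                       # perturb only vertex 0
    b = np.ones(n)
    x = np.linalg.solve(M, b)
    x_pert = np.linalg.solve(M_pert, b)
    return np.abs(x - x_pert) > tol           # True where vertex is reachable from 0

def is_connected(adj):
    return bool(same_component_as_vertex0(adj).all())

# Path on 3 vertices plus an isolated vertex: not connected.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])
print(is_connected(A))             # False
print(is_connected(A[:3, :3]))     # True
```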

We study the problem of finding the smallest graph that does not occur as an induced subgraph of a given graph. This missing induced subgraph has at most logarithmic size and can be found by a brute-force search, in an $n$-vertex graph, in time $n^{O(\log n)}$. We show that under the Exponential Time Hypothesis this quasipolynomial time bound is optimal. We also consider variations of the problem in which either the missing subgraph or the given graph comes from a restricted graph family; for instance, we prove that the smallest missing planar induced subgraph of a given planar graph can be found in polynomial time.
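
A naive version of the brute-force search can be sketched as follows; it enumerates all labelled graphs on $k = 1, 2, \dots$ vertices and tests occurrence by trying all vertex subsets and vertex orderings, which for $k = O(\log n)$ stays within the quasipolynomial bound (no attempt at efficiency is made here):

```python
from itertools import combinations, permutations

def induced_occurs(G_adj, n, H_edges, k):
    """Check whether the k-vertex graph with edge set H_edges occurs as an
    induced subgraph of the n-vertex graph whose (sorted) edge set is G_adj."""
    H = set(H_edges)
    for verts in combinations(range(n), k):
        for perm in permutations(verts):
            # perm[i] plays the role of vertex i of the candidate graph H.
            ok = True
            for i in range(k):
                for j in range(i + 1, k):
                    e = (min(perm[i], perm[j]), max(perm[i], perm[j]))
                    if ((i, j) in H) != (e in G_adj):
                        ok = False
                        break
                if not ok:
                    break
            if ok:
                return True
    return False

def smallest_missing_induced_subgraph(edges, n):
    """Brute-force search for a smallest graph that does not occur as an
    induced subgraph of the n-vertex graph with the given edge list."""
    G_adj = {(min(u, v), max(u, v)) for u, v in edges}
    k = 1
    while True:
        pairs = list(combinations(range(k), 2))
        for mask in range(2 ** len(pairs)):
            H_edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
            if not induced_occurs(G_adj, n, H_edges, k):
                return k, H_edges
        k += 1

# The 5-cycle contains every graph on at most 2 vertices as an induced
# subgraph, but no independent set of size 3, so the search stops at k = 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(smallest_missing_induced_subgraph(edges, 5))   # -> (3, [])
```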

We study the weak recovery problem on the $r$-uniform hypergraph stochastic block model ($r$-HSBM) with two balanced communities. In HSBM a random graph is constructed by placing hyperedges with higher density if all vertices of a hyperedge share the same binary label, and weak recovery asks to recover a non-trivial fraction of the labels. We introduce a multi-terminal version of strong data processing inequalities (SDPIs), which we call the multi-terminal SDPI, and use it to prove a variety of impossibility results for weak recovery. In particular, we prove that weak recovery is impossible below the Kesten-Stigum (KS) threshold if $r=3,4$, or a strength parameter $\lambda$ is at least $\frac 15$. Prior work Pal and Zhu (2021) established that weak recovery in HSBM is always possible above the KS threshold. Consequently, there is no information-computation gap for these cases, which (partially) resolves a conjecture of Angelini et al. (2015). To our knowledge this is the first impossibility result for HSBM weak recovery. As usual, we reduce the study of non-recovery of HSBM to the study of non-reconstruction in a related broadcasting on hypertrees (BOHT) model. While we show that BOHT's reconstruction threshold coincides with KS for $r=3,4$, surprisingly, we demonstrate that for $r\ge 7$ reconstruction is possible also below KS. This shows an interesting phase transition in the parameter $r$, and suggests that for $r\ge 7$, there might be an information-computation gap for the HSBM. For $r=5,6$ and large degree we propose an approach for showing non-reconstruction below KS, suggesting that $r=7$ is the correct threshold for onset of the new phase.

Let $S \subseteq \mathbb{R}^2$ be a set of $n$ \emph{sites} in the plane, so that every site $s \in S$ has an \emph{associated radius} $r_s > 0$. Let $D(S)$ be the \emph{disk intersection graph} defined by $S$, i.e., the graph with vertex set $S$ and an edge between two distinct sites $s, t \in S$ if and only if the disks with centers $s$, $t$ and radii $r_s$, $r_t$ intersect. Our goal is to design data structures that maintain the connectivity structure of $D(S)$ as $S$ changes dynamically over time. We consider the incremental case, where new sites can be inserted into $S$. While previous work focuses on data structures whose running time depends on the ratio between the smallest and the largest radius in $S$, we present a data structure with $O(\alpha(n))$ amortized query time and $O(\log^6 n)$ expected amortized insertion time.
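
For contrast, the naive incremental baseline tests each inserted site against all existing ones and merges components in a union-find structure, which costs $O(n)$ intersection tests per insertion; a minimal sketch (purely illustrative, not the data structure of the paper):

```python
class IncrementalDiskConnectivity:
    """Naive baseline for incremental disk-graph connectivity: every insertion
    is checked against all existing sites (O(n) tests) and components are
    merged in a union-find structure.  This is only for illustration; it does
    not achieve the polylogarithmic update time discussed in the abstract."""

    def __init__(self):
        self.sites = []           # list of (x, y, r)
        self.parent = []

    def _find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]   # path halving
            i = self.parent[i]
        return i

    def insert(self, x, y, r):
        idx = len(self.sites)
        self.parent.append(idx)
        for j, (xj, yj, rj) in enumerate(self.sites):
            # Two disks intersect iff the distance of centers is at most r + rj.
            if (x - xj) ** 2 + (y - yj) ** 2 <= (r + rj) ** 2:
                self.parent[self._find(idx)] = self._find(j)
        self.sites.append((x, y, r))
        return idx

    def connected(self, i, j):
        return self._find(i) == self._find(j)

dic = IncrementalDiskConnectivity()
a = dic.insert(0.0, 0.0, 1.0)
b = dic.insert(3.0, 0.0, 1.0)
c = dic.insert(1.5, 0.0, 0.6)       # touches both earlier disks
print(dic.connected(a, b))           # True
```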

We give new characterizations for the class of uniformly dense matroids, and we describe applications to graphic and real representable matroids. We show that a matroid is uniformly dense if and only if its base polytope contains a point with constant coordinates, and if and only if there exists a measure on the bases such that every element of the ground set has equal probability to be in a random basis with respect to this measure. As one application, we derive new spectral, structural and classification results for uniformly dense graphic matroids. In particular, we show that connected regular uniformly dense graphs are $1$-tough and thus contain a (near-)perfect matching. As a second application, we show that strictly uniformly dense real representable matroids can be represented by projection matrices with constant diagonal and that they are parametrized by a subvariety of the real Grassmannian.
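
For graphic matroids, the constant-point characterization can be checked by brute force on small examples: writing $m$ for the number of edges and $r$ for the rank, the point $(r/m, \dots, r/m)$ lies in the base polytope if and only if $r\,|S| \le m \cdot \mathrm{rank}(S)$ for every nonempty edge set $S$. A small sketch (the exhaustive check is ours, purely for illustration):

```python
from itertools import combinations

def rank_of_edge_set(edges):
    """Rank of an edge set in a graphic matroid: (#vertices touched) minus
    (#connected components of the subgraph it spans)."""
    verts = {v for e in edges for v in e}
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    components = len({find(v) for v in verts})
    return len(verts) - components

def is_uniformly_dense_graphic(edges):
    """Brute-force check of the constant-point characterization for the
    graphic matroid of a small graph: the point (r/m, ..., r/m) lies in the
    base polytope iff r * |S| <= m * rank(S) for every nonempty edge set S."""
    edges = list(edges)
    m, r = len(edges), rank_of_edge_set(edges)
    for k in range(1, m + 1):
        for S in combinations(edges, k):
            if r * len(S) > m * rank_of_edge_set(S):
                return False
    return True

cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]            # C4 is uniformly dense
c4_with_pendant = cycle4 + [(0, 4)]                  # a pendant edge breaks it
print(is_uniformly_dense_graphic(cycle4))            # True
print(is_uniformly_dense_graphic(c4_with_pendant))   # False
```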

Translational distance-based knowledge graph embedding has shown progressive improvements on the link prediction task, from TransE to the latest state-of-the-art RotatE. However, N-1, 1-N and N-N predictions still remain challenging. In this work, we propose a novel translational distance-based approach for knowledge graph link prediction. The proposed method is two-fold: first, we extend RotatE from the 2D complex domain to a high-dimensional space with orthogonal transforms to model relations, for better modeling capacity. Second, the graph context is explicitly modeled via two directed context representations. These context representations are used as part of the distance scoring function to measure the plausibility of the triples during training and inference. The proposed approach effectively improves prediction accuracy on the difficult N-1, 1-N and N-N cases of the knowledge graph link prediction task. Experimental results show that it achieves better performance than the RotatE baseline on two benchmark data sets, especially on the data set (FB15k-237) with many high in-degree nodes.
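
A minimal sketch of a translational-distance score in which a relation acts on the head entity as an orthogonal transform in a higher-dimensional space; the parameterisation via the matrix exponential of a skew-symmetric matrix is our illustrative choice rather than necessarily the one used in the paper, and the directed context representations are omitted:

```python
import numpy as np
from scipy.linalg import expm

def orthogonal_from_params(params, d):
    """Build a d x d orthogonal matrix as exp(S - S^T) for a parameter matrix S.

    The exponential of a skew-symmetric matrix is orthogonal, so this gives an
    unconstrained parameterisation of relation transforms that generalises the
    2D rotations used by RotatE.
    """
    S = params.reshape(d, d)
    return expm(S - S.T)

def score(head, rel_params, tail):
    """Translational-distance style score: negative distance between the
    orthogonally transformed head and the tail (higher = more plausible)."""
    d = head.shape[0]
    O = orthogonal_from_params(rel_params, d)
    return -np.linalg.norm(O @ head - tail)

rng = np.random.default_rng(0)
d = 8
h = rng.standard_normal(d)
r = rng.standard_normal(d * d) * 0.1
t_true = orthogonal_from_params(r, d) @ h     # a tail that exactly matches the relation
t_rand = rng.standard_normal(d)
print(score(h, r, t_true))    # ~0, the best possible score
print(score(h, r, t_rand))    # strictly more negative
```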

In this paper we provide a comprehensive introduction to knowledge graphs, which have recently garnered significant attention from both industry and academia in scenarios that require exploiting diverse, dynamic, large-scale collections of data. After a general introduction, we motivate and contrast various graph-based data models and query languages that are used for knowledge graphs. We discuss the roles of schema, identity, and context in knowledge graphs. We explain how knowledge can be represented and extracted using a combination of deductive and inductive techniques. We summarise methods for the creation, enrichment, quality assessment, refinement, and publication of knowledge graphs. We provide an overview of prominent open knowledge graphs and enterprise knowledge graphs, their applications, and how they use the aforementioned techniques. We conclude with high-level future research directions for knowledge graphs.
