
For $0 \leq k \leq 6$, we give the minimum number of vertices $f(k)$ in a graph containing all $k$-vertex graphs as induced subgraphs, and show that $16 \leq f(7) \leq 18$. For $0 \leq k \leq 5$, we also give the counts of such graphs, as generated by brute-force computer search. We give additional results for small graphs containing all trees on $k$ vertices.
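
As a small illustration of the underlying search primitive (a generic brute-force check in Python/networkx, not the authors' code; for larger $k$ one would enumerate the $k$-vertex graphs up to isomorphism instead of over all labelled graphs):

```python
from itertools import combinations
import networkx as nx
from networkx.algorithms import isomorphism

def contains_all_k_graphs(candidate, k):
    """True if `candidate` contains every k-vertex graph as an induced subgraph."""
    possible_edges = list(combinations(range(k), 2))
    for r in range(len(possible_edges) + 1):
        for edge_set in combinations(possible_edges, r):
            small = nx.Graph()
            small.add_nodes_from(range(k))
            small.add_edges_from(edge_set)
            # subgraph_is_isomorphic() looks for a *node-induced* copy of `small`.
            if not isomorphism.GraphMatcher(candidate, small).subgraph_is_isomorphic():
                return False
    return True

# P_3 contains both 2-vertex graphs (an edge and a non-edge), so it works for k = 2;
# K_4 fails for k = 3, since every 3 of its vertices induce a triangle.
print(contains_all_k_graphs(nx.path_graph(3), 2))      # True
print(contains_all_k_graphs(nx.complete_graph(4), 3))  # False
```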

Related content

A hole is an induced cycle of length at least 4, and an odd hole is a hole of odd length. A full house is the graph obtained from $K_4$ by adding a vertex adjacent to both ends of one of its edges. Let $H$ be the complement of a cycle on 7 vertices. Chudnovsky et al. [6] proved that every (odd hole, $K_4$)-free graph is 4-colorable, and is 3-colorable if it does not contain $H$ as an induced subgraph. In this paper, we use the proof technique of Chudnovsky et al. to generalize this result to (odd hole, full house)-free graphs, and prove that every (odd hole, full house)-free graph $G$ satisfies $\chi(G)\le \omega(G)+1$, with equality if and only if $\omega(G)=3$ and $G$ contains $H$ as an induced subgraph.
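
To make the two special graphs concrete, the following sketch (constructions assumed from the definitions above, not code from the paper) builds the full house and $H$ and confirms their clique numbers:

```python
import networkx as nx

# Full house: K4 plus an extra vertex adjacent to both ends of one edge of K4.
full_house = nx.complete_graph(4)
full_house.add_edges_from([(4, 0), (4, 1)])

# H: the complement of the 7-cycle.
H = nx.complement(nx.cycle_graph(7))

def clique_number(G):
    return max(len(c) for c in nx.find_cliques(G))

print(clique_number(full_house))  # 4 (the K4 inside the full house)
print(clique_number(H))           # 3, so H is K4-free with omega(H) = 3
```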

We prove that strongly minimal linearizations of an arbitrary rational matrix can always be constructed from its Laurent expansion around the point at infinity, which is the natural representation of polynomial matrices expressed in the monomial basis. If the rational matrix has a particular self-conjugate structure, we show how to construct strongly minimal linearizations that preserve it. The structures considered are Hermitian and skew-Hermitian rational matrices with respect to the real line, and para-Hermitian and para-skew-Hermitian matrices with respect to the imaginary axis. We pay special attention to the construction of strongly minimal linearizations for the particular case of structured polynomial matrices. The proposed constructions lead to efficient numerical algorithms for constructing strongly minimal linearizations. The fact that they are valid for {\em any} rational matrix is an improvement over previous approaches for constructing other classes of structure-preserving linearizations, which are not valid for every structured rational or polynomial matrix. The recent concept of a strongly minimal linearization is the key to this generality.
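
For reference, the standard definitions of the four self-conjugate structures mentioned above are recalled below (these conventions are assumed here and may differ slightly in notation from the paper):
\[
\begin{aligned}
&\text{Hermitian (w.r.t. the real line):} && R(\bar s)^{*} = R(s), \qquad
&&\text{skew-Hermitian:} && R(\bar s)^{*} = -R(s),\\
&\text{para-Hermitian (w.r.t. the imaginary axis):} && R(-\bar s)^{*} = R(s), \qquad
&&\text{para-skew-Hermitian:} && R(-\bar s)^{*} = -R(s).
\end{aligned}
\]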

Kimelfeld and Sagiv [Kimelfeld and Sagiv, PODS 2006], [Kimelfeld and Sagiv, Inf. Syst. 2008] pointed out that the problem of enumerating $K$-fragments is of great importance in keyword search on data graphs. In graph-theoretic terms, the problem corresponds to enumerating minimal Steiner trees in (directed) graphs. In this paper, we propose a linear-delay and polynomial-space algorithm for enumerating all minimal Steiner trees, improving on a previous result in [Kimelfeld and Sagiv, Inf. Syst. 2008]. Our enumeration algorithm can be extended to other Steiner problems, such as minimal Steiner forests, minimal terminal Steiner trees, and minimal directed Steiner trees. As another variant of the minimal Steiner tree enumeration problem, we study the problem of enumerating minimal induced Steiner subgraphs, and propose a polynomial-delay and exponential-space enumeration algorithm for minimal induced Steiner subgraphs of claw-free graphs. In contrast to these tractable results, we show that enumerating minimal group Steiner trees is at least as hard as the minimal transversal enumeration problem on hypergraphs.
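
As a point of reference, inclusion-minimal Steiner trees admit a simple characterization: a Steiner tree is minimal exactly when all of its leaves are terminals. The brute-force baseline below (illustrative Python/networkx for tiny graphs only, not the linear-delay algorithm of the paper) uses this fact:

```python
from itertools import combinations
import networkx as nx

def minimal_steiner_trees(G, terminals):
    """Yield edge-minimal subtrees of G spanning all terminals (brute force)."""
    terminals = set(terminals)
    edges = list(G.edges())
    for r in range(len(terminals) - 1, len(edges) + 1):
        for subset in combinations(edges, r):
            T = nx.Graph(subset)
            if not terminals <= set(T.nodes()):
                continue
            if not nx.is_tree(T):
                continue
            # Minimality: every leaf of the tree must be a terminal.
            if all(v in terminals for v in T.nodes() if T.degree(v) == 1):
                yield T

G = nx.cycle_graph(4)                    # 0-1-2-3-0
for T in minimal_steiner_trees(G, {0, 2}):
    print(sorted(T.edges()))             # the two paths around the cycle
```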

The transversal hypergraph problem is the task of enumerating the minimal hitting sets of a hypergraph. It is a long-standing open question whether this can be done in output-polynomial time. For hypergraphs whose solutions have bounded size, Eiter and Gottlob [SICOMP 1995] gave an algorithm that runs in output-polynomial time, but whose space requirement also scales with the output size. We improve this to polynomial delay and polynomial space. More generally, we present an algorithm that on $n$-vertex, $m$-edge hypergraphs has delay $O(m^{k^*+1} n^2)$ and uses $O(mn)$ space, where $k^*$ is the maximum size of any minimal hitting set. Our algorithm is oblivious to $k^*$, a quantity that is hard to compute or even approximate. Central to our approach is the extension problem for minimal hitting sets, deciding for a set $X$ of vertices whether it is contained in any solution. With $|X|$ as parameter, we show that this is one of the first natural problems to be complete for the complexity class $W[3]$. We give an algorithm for the extension problem running in time $O(m^{|X|+1} n)$. We also prove a conditional lower bound under the Strong Exponential Time Hypothesis, showing that this is close to optimal. We apply our enumeration method to the discovery problem of minimal unique column combinations from data profiling. Our empirical evaluation suggests that the algorithm outperforms its worst-case guarantees on hypergraphs stemming from real-world databases.
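
For tiny instances, minimal hitting sets can be enumerated by brute force, which also makes the minimality condition explicit: a hitting set $X$ is minimal when dropping any single vertex leaves some edge unhit. The sketch below (generic Python, not the polynomial-delay algorithm described above) illustrates this:

```python
from itertools import combinations

def is_hitting_set(X, edges):
    return all(X & e for e in edges)

def minimal_hitting_sets(vertices, edges):
    """Yield all inclusion-minimal hitting sets of the hypergraph (brute force)."""
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            X = set(subset)
            if not is_hitting_set(X, edges):
                continue
            # Minimal: removing any single vertex misses some edge.
            if all(not is_hitting_set(X - {v}, edges) for v in X):
                yield X

edges = [{1, 2}, {2, 3}, {1, 3}]
print(list(minimal_hitting_sets([1, 2, 3], edges)))
# [{1, 2}, {1, 3}, {2, 3}]
```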

We consider a novel graph-structured change point problem. We observe a random vector with piecewise constant mean whose independent, sub-Gaussian coordinates correspond to the $n$ nodes of a fixed graph. We are interested in recovering the partition of the nodes associated with the constancy regions of the mean vector. Although graph-valued signals of this type have been previously studied in the literature for the different tasks of testing for the presence of an anomalous cluster and of estimating the mean vector, no localisation results are known outside the classical case of chain graphs. When the partition $\mathcal{S}$ consists of only two elements, we characterise the difficulty of the localisation problem in terms of: the maximal noise variance $\sigma^2$, the size $\Delta$ of the smaller element of the partition, the magnitude $\kappa$ of the difference in the signal values and the sum of the effective resistance edge weights $|\partial_r(\mathcal{S})|$ of the corresponding cut. We demonstrate an information-theoretic lower bound implying that, in the low signal-to-noise ratio regime $\kappa^2 \Delta \sigma^{-2} |\partial_r(\mathcal{S})|^{-1} \lesssim 1$, no consistent estimator of the true partition exists. On the other hand, when $\kappa^2 \Delta \sigma^{-2} |\partial_r(\mathcal{S})|^{-1} \gtrsim \zeta_n \log\{r(|E|)\}$, with $r(|E|)$ being the sum of effective resistance weighted edges and $\zeta_n$ being any diverging sequence in $n$, we show that a polynomial-time, approximate $\ell_0$-penalised least squares estimator delivers a localisation error of order $\kappa^{-2} \sigma^2 |\partial_r(\mathcal{S})| \log\{r(|E|)\}$. Aside from the $\log\{r(|E|)\}$ term, this rate is minimax optimal. Finally, we provide upper bounds on the localisation error for more general partitions of unknown sizes.
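
In symbols, the two-cluster model can be written as follows (this display only restates the setting of the abstract in the notation introduced there):
\[
y_v = \theta_v + \varepsilon_v, \quad v = 1, \dots, n, \qquad \theta \ \text{constant on each element of } \mathcal{S},
\]
where the $\varepsilon_v$ are independent, mean-zero sub-Gaussian with variance at most $\sigma^2$, and, for the two-element partition $\mathcal{S} = \{S, S^c\}$, $\Delta = \min\{|S|, |S^c|\}$ and $\kappa$ is the magnitude of the jump in $\theta$ across the cut.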

Can every connected graph on $n$ vertices be burned in $\lceil \sqrt{n} \rceil$ steps? While this conjecture remains open, we prove that it is asymptotically true when the graph is much larger than its \emph{growth}, which is the maximal distance of a vertex to a well-chosen path in the graph. In fact, we prove that the conjecture for graphs of bounded growth boils down to a finite number of cases. Through an improved (but still weaker) bound for all trees, we argue that the conjecture almost holds for all graphs with minimum degree at least $3$, and holds for all large enough graphs with minimum degree at least $4$. The previous best bound on the minimum degree was $23$.
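
For concreteness, here is a small simulator of the burning process (generic Python/networkx, not from the paper), using the standard characterization that fire sources $s_1,\dots,s_b$ burn $G$ exactly when every vertex is within distance $b-i$ of some $s_i$:

```python
import math
import networkx as nx

def burns(G, sources):
    """True if placing fire sources at `sources` (one per step) burns all of G."""
    b = len(sources)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return all(
        any(dist[s].get(v, math.inf) <= b - i for i, s in enumerate(sources, start=1))
        for v in G
    )

G = nx.path_graph(9)          # n = 9 vertices, conjectured bound ceil(sqrt(9)) = 3
print(burns(G, [3, 7, 0]))    # True: this source sequence burns P_9 in 3 steps
print(burns(G, [4, 1, 7]))    # False: this choice leaves vertex 8 unburned
```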

Many modern data analytics applications on graphs operate on domains where the graph topology is not known a priori, and hence its determination becomes part of the problem definition, rather than serving as prior knowledge that aids the problem solution. Part III of this monograph starts by addressing ways to learn graph topology, from the case where the physics of the problem already suggests a possible topology, through to the most general cases where the graph topology is learned from the data. A particular emphasis is on graph topology definition based on the correlation and precision matrices of the observed data, combined with additional prior knowledge and structural conditions, such as the smoothness or sparsity of graph connections. For learning sparse graphs (with a small number of edges), the least absolute shrinkage and selection operator, known as LASSO, is employed, along with its graph-specific variant, the graphical LASSO. For completeness, both variants of LASSO are derived in an intuitive way and explained. An in-depth elaboration of the graph topology learning paradigm is provided through several examples on physically well-defined graphs, such as electric circuits, linear heat transfer, social and computer networks, and spring-mass systems. As many graph neural networks (GNNs) and graph convolutional networks (GCNs) are emerging, we also review the main trends in GNNs and GCNs from the perspective of graph signal filtering. Tensor representation of lattice-structured graphs is considered next, and it is shown that tensors (multidimensional data arrays) are a special class of graph signals, whereby the graph vertices reside on a high-dimensional regular lattice structure. This part of the monograph concludes with two emerging applications: financial data processing and the modeling of underground transportation networks.
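
A minimal sketch of the graphical-LASSO route to sparse topology learning, using scikit-learn (illustrative data and regularization parameter; not code from the monograph):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))        # 500 observations of a 6-vertex graph signal
X[:, 1] += 0.8 * X[:, 0]                 # induce dependence between vertices 0 and 1
X[:, 3] += 0.8 * X[:, 2]                 # ... and between vertices 2 and 3

model = GraphicalLasso(alpha=0.2).fit(X)
precision = model.precision_             # sparse precision (inverse covariance) matrix

# Off-diagonal non-zeros of the precision matrix are read as graph edges.
edges = [(i, j) for i in range(6) for j in range(i + 1, 6)
         if abs(precision[i, j]) > 1e-3]
print(edges)                             # expected to recover roughly {(0, 1), (2, 3)}
```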

The focus of Part I of this monograph was on the fundamental properties, graph topologies, and spectral representations of graphs. Part II builds on these concepts to address the algorithmic and practical issues centered around data/signal processing on graphs, that is, the focus is on the analysis and estimation of both deterministic and random data on graphs. The fundamental ideas related to graph signals are introduced through a simple and intuitive, yet illustrative and sufficiently general, case study of multisensor temperature field estimation. The concept of systems on graphs is defined using graph signal shift operators, which generalize the corresponding principles from traditional learning systems. At the core of the spectral domain representation of graph signals and systems is the Graph Discrete Fourier Transform (GDFT). The spectral domain representations are then used as the basis to introduce graph signal filtering concepts and address their design, including Chebyshev polynomial approximation series. Ideas related to the sampling of graph signals are presented and further linked with compressive sensing. Localized graph signal analysis in the joint vertex-spectral domain is referred to as vertex-frequency analysis, since it can be considered an extension of classical time-frequency analysis to the graph domain. Important topics related to the local graph Fourier transform (LGFT) are covered, together with its various forms, including graph spectral- and vertex-domain windows, and the inversion conditions and relations. A link between the LGFT with a spectrally varying window and the spectral graph wavelet transform (SGWT) is also established. Realizations of the LGFT and SGWT using polynomial (Chebyshev) approximations of the spectral functions are further considered. Finally, energy versions of the vertex-frequency representations are introduced.
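
A small numerical sketch of the GDFT and an ideal low-pass graph filter, using one common convention in which the GDFT projects a graph signal onto the Laplacian eigenvectors (illustrative graph and signal, not code from the monograph):

```python
import numpy as np
import networkx as nx

G = nx.path_graph(6)
L = nx.laplacian_matrix(G).toarray().astype(float)
eigvals, U = np.linalg.eigh(L)             # columns of U form the graph Fourier basis

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # a graph signal on the 6 vertices
X_hat = U.T @ x                            # GDFT: spectral coefficients of x
x_rec = U @ X_hat                          # inverse GDFT
print(np.allclose(x, x_rec))               # True

# An "ideal low-pass" graph filter keeps only the smoothest spectral components.
h = (eigvals <= eigvals[2]).astype(float)  # keep the 3 lowest graph frequencies
x_filtered = U @ (h * X_hat)
```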

Graph structured data are abundant in the real world. Among different graph types, directed acyclic graphs (DAGs) are of particular interest to machine learning researchers, as many machine learning models are realized as computations on DAGs, including neural networks and Bayesian networks. In this paper, we study deep generative models for DAGs, and propose a novel DAG variational autoencoder (D-VAE). To encode DAGs into the latent space, we leverage graph neural networks. We propose an asynchronous message passing scheme that allows encoding the computations on DAGs, rather than using existing simultaneous message passing schemes to encode local graph structures. We demonstrate the effectiveness of our proposed D-VAE through two tasks: neural architecture search and Bayesian network structure learning. Experiments show that our model not only generates novel and valid DAGs, but also produces a smooth latent space that facilitates searching for DAGs with better performance through Bayesian optimization.
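
The core idea of asynchronous message passing can be sketched in a few lines: node states are updated in topological order, so each state summarizes the computation of all its predecessors. The following is a simplified illustration (plain numpy with made-up dimensions and a sum aggregator; it is not the D-VAE implementation, which uses a learned, GRU-based aggregator):

```python
import numpy as np
import networkx as nx

def encode_dag(G, node_features, hidden_dim=8, seed=0):
    """Encode a DAG by updating node states in topological order."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((node_features.shape[1], hidden_dim)) * 0.1
    W_agg = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
    h = {}
    for v in nx.topological_sort(G):                 # asynchronous: follow the DAG order
        msgs = [h[u] for u in G.predecessors(v)]     # messages from already-encoded parents
        agg = np.sum(msgs, axis=0) if msgs else np.zeros(hidden_dim)
        h[v] = np.tanh(node_features[v] @ W_in + agg @ W_agg)
    sinks = [v for v in G if G.out_degree(v) == 0]
    return np.mean([h[v] for v in sinks], axis=0)    # DAG embedding read off the sinks

G = nx.DiGraph([(0, 1), (0, 2), (1, 3), (2, 3)])     # a small computation DAG
feats = np.eye(4)                                    # one-hot node "operation types"
print(encode_dag(G, feats).shape)                    # (8,)
```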

Generating texts which express complex ideas spanning multiple sentences requires a structured representation of their content (document plan), but these representations are prohibitively expensive to manually produce. In this work, we address the problem of generating coherent multi-sentence texts from the output of an information extraction system, and in particular a knowledge graph. Graphical knowledge representations are ubiquitous in computing, but pose a significant challenge for text generation techniques due to their non-hierarchical nature, collapsing of long-distance dependencies, and structural variety. We introduce a novel graph transforming encoder which can leverage the relational structure of such knowledge graphs without imposing linearization or hierarchical constraints. Incorporated into an encoder-decoder setup, we provide an end-to-end trainable system for graph-to-text generation that we apply to the domain of scientific text. Automatic and human evaluations show that our technique produces more informative texts which exhibit better document structure than competitive encoder-decoder methods.
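
One generic way to let an encoder exploit relational structure without linearizing the graph is to restrict attention to graph neighbours. The sketch below is only an illustration of that idea in numpy (hypothetical names and toy dimensions; it is not the paper's graph transforming encoder):

```python
import numpy as np

def masked_self_attention(H, adjacency):
    """H: (n, d) entity embeddings; adjacency: (n, n) 0/1 matrix with self-loops."""
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)
    scores = np.where(adjacency > 0, scores, -1e9)   # mask out non-edges
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ H                               # neighbourhood-aware representations

n, d = 5, 16
H = np.random.default_rng(1).standard_normal((n, d))
A = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # a toy path graph
print(masked_self_attention(H, A).shape)             # (5, 16)
```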
