
For a connected graph $G=(V,E)$, a matching $M\subseteq E$ is a matching cut of $G$ if $G-M$ is disconnected. It is known that for an integer $d$, the corresponding decision problem Matching Cut is polynomial-time solvable for graphs of diameter at most $d$ if $d\leq 2$ and NP-complete if $d\geq 3$. We prove the same dichotomy for graphs of bounded radius. For a graph $H$, a graph is $H$-free if it does not contain $H$ as an induced subgraph. As a consequence of our result, we can solve Matching Cut in polynomial time for $P_6$-free graphs, extending a recent result of Feghali for $P_5$-free graphs. We then extend our result to hold even for $(sP_3+P_6)$-free graphs for every $s\geq 0$ and initiate a complexity classification of Matching Cut for $H$-free graphs.
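
To make the definition concrete, here is a minimal brute-force checker (exponential, for illustration only; the dichotomy above concerns polynomial-time solvability for special graph classes). The function name and the use of networkx are our own choices.

```python
# Brute-force matching-cut search: exponential, for illustration only.
# The results above are about polynomial-time algorithms for special classes.
from itertools import combinations

import networkx as nx

def find_matching_cut(G):
    """Return a matching cut of G if one exists, else None."""
    edges = list(G.edges())
    for k in range(1, len(edges) + 1):
        for M in combinations(edges, k):
            ends = [v for e in M for v in e]
            if len(ends) != len(set(ends)):   # M is not a matching
                continue
            H = G.copy()
            H.remove_edges_from(M)
            if not nx.is_connected(H):        # G - M is disconnected
                return M
    return None

print(find_matching_cut(nx.cycle_graph(4)))   # e.g. ((0, 1), (2, 3))
```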

Related content

In deep neural networks, better results can often be obtained by increasing the complexity of previously developed basic models. However, it is unclear whether there is a way to boost performance by decreasing the complexity of such models. Intuitively, given a problem, a simpler data structure comes with a simpler algorithm. Here, we investigate the feasibility of improving graph classification performance while simplifying the learning process. Inspired by structural entropy on graphs, we transform data samples from graphs into coding trees, a simpler but essential structure for graph data. Furthermore, we propose a novel message passing scheme, termed hierarchical reporting, in which features are transferred from leaf nodes to root nodes by following the hierarchical structure of coding trees. We then present a tree kernel and a convolutional network that implement our scheme for graph classification. With the designed message passing scheme, the tree kernel and convolutional network have a runtime complexity of $O(n)$, lower than that of the Weisfeiler-Lehman subtree kernel and other graph neural networks, which require at least $O(hm)$. We empirically validate our methods on several graph classification benchmarks and demonstrate that they achieve better performance and lower computational cost than competing approaches.
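
A minimal sketch of the leaf-to-root "hierarchical reporting" idea follows, assuming sum aggregation and a toy tree layout; the paper's actual tree kernel and convolutional network are more elaborate.

```python
# Minimal sketch of leaf-to-root "hierarchical reporting" on a coding tree.
# Sum aggregation and the toy tree are illustrative assumptions, not the
# paper's exact kernel/network design.
import numpy as np

def hierarchical_report(children, features, root):
    """children: node -> list of child nodes; features: leaf -> np.array."""
    def report(v):
        if not children.get(v):          # leaf: report its own feature
            return features[v]
        # internal node: aggregate the reports of its children
        return np.sum([report(c) for c in children[v]], axis=0)
    return report(root)

children = {"r": ["a", "b"], "a": ["x", "y"], "b": ["z"]}
features = {"x": np.array([1., 0.]), "y": np.array([0., 1.]),
            "z": np.array([1., 1.])}
print(hierarchical_report(children, features, "r"))  # [2. 2.]
```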

We show that many graphs with bounded treewidth can be described as subgraphs of the strong product of a graph with smaller treewidth and a bounded-size complete graph. To this end, define the "underlying treewidth" of a graph class $\mathcal{G}$ to be the minimum non-negative integer $c$ such that, for some function $f$, for every graph ${G \in \mathcal{G}}$ there is a graph $H$ with ${\text{tw}(H) \leq c}$ such that $G$ is isomorphic to a subgraph of ${H \boxtimes K_{f(\text{tw}(G))}}$. We introduce disjointed coverings of graphs and show they determine the underlying treewidth of any graph class. Using this result, we prove that the class of planar graphs has underlying treewidth 3; the class of $K_{s,t}$-minor-free graphs has underlying treewidth $s$ (for ${t \geq \max\{s,3\}}$); and the class of $K_t$-minor-free graphs has underlying treewidth ${t-2}$. In general, we prove that a monotone class has bounded underlying treewidth if and only if it excludes some fixed topological minor. We also study the underlying treewidth of graph classes defined by an excluded subgraph or excluded induced subgraph. We show that the class of graphs with no $H$ subgraph has bounded underlying treewidth if and only if every component of $H$ is a subdivided star, and that the class of graphs with no induced $H$ subgraph has bounded underlying treewidth if and only if every component of $H$ is a star.
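
To illustrate the definition, the sketch below builds the host graph $H \boxtimes K_c$ with networkx. Checking that a given $G$ embeds as a subgraph is NP-hard in general, so the embedding test here is only feasible at toy scale.

```python
# Building the host graph H ⊠ K_c from the definition above with networkx.
import networkx as nx

H = nx.path_graph(4)       # small-treewidth host (tw(H) = 1)
K = nx.complete_graph(3)   # bounded-size complete graph K_3
host = nx.strong_product(H, K)

# In H ⊠ K_c: (u, i) ~ (v, j) iff (u = v and i ~ j), (u ~ v and i = j),
# or (u ~ v and i ~ j).
print(host.number_of_nodes(), host.number_of_edges())  # 12 39

# Does C_5 embed as a subgraph of the host?
GM = nx.algorithms.isomorphism.GraphMatcher(host, nx.cycle_graph(5))
print(GM.subgraph_is_monomorphic())                    # True
```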

We introduce a new variant of Krasnoselskii's result. Using a simple differentiability condition, we relax the nonexpansiveness condition in Krasnoselskii's theorem. More precisely, we analyze the convergence of the sequence $x_{n+1}=\frac{x_n+g(x_n)}{2}$ under a differentiability condition on $g$ and present some fixed point results. We introduce iterative sequences that, for any real differentiable function $g$ and any starting point $x_0\in [a,b]$, converge monotonically to the nearest root of $g$ in $[a,b]$ lying to the right or left of $x_0$. Based on this approach, we present an efficient and novel method for finding the real roots of real functions. We prove that no root is missed by our method. It is worth mentioning that our iterative method is free from derivative evaluation, which can be regarded as an advantage over many other methods. Finally, we illustrate our results with some numerical examples.
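
A minimal sketch of the averaged iteration above follows. The choice $g(x) = x - f(x)$, which turns roots of $f$ into fixed points of $g$, is our illustrative assumption, not necessarily the paper's construction, and this sketch makes no claim about the monotonicity guarantees proved in the paper.

```python
# A minimal sketch of the averaged iteration x_{n+1} = (x_n + g(x_n)) / 2.
# The choice g(x) = x - f(x) below (fixed points of g = roots of f) is an
# illustrative assumption; the paper's exact scheme may differ.
def averaged_iteration(g, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = (x + g(x)) / 2
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

f = lambda x: x**2 - 2             # root: sqrt(2)
g = lambda x: x - f(x)             # fixed point of g = root of f
print(averaged_iteration(g, 1.0))  # ≈ 1.4142135623...
```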

We study the problem of online learning in competitive settings in the context of two-sided matching markets. In particular, one side of the market, the agents, must learn their preferences over the other side, the firms, through repeated interaction while competing with other agents for successful matches. We propose a class of decentralized, communication- and coordination-free algorithms that agents can use to reach their stable match in structured matching markets. In contrast to prior works, the proposed algorithms make decisions based solely on an agent's own history of play and require no foreknowledge of the firms' preferences. Our algorithms are constructed by separating the statistical problem of learning one's preferences from noisy observations from the problem of competing for firms. We show that, under realistic structural assumptions on the underlying preferences of agents and firms, the proposed algorithms incur regret that grows at most logarithmically in the time horizon. Our results show that, in the case of matching markets, competition need not drastically affect the performance of decentralized, communication- and coordination-free online learning algorithms.
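
As a rough caricature of such an agent, the sketch below keeps UCB-style estimates over firms and decides only from its own history of proposals and outcomes, treating a rejection (losing a firm to a competitor) as zero reward. This is a generic bandit agent, not the paper's algorithm.

```python
# Generic bandit-style caricature of a decentralized agent: UCB estimates
# over firms, decisions from the agent's own history only.
import math
import random

class Agent:
    def __init__(self, n_firms):
        self.counts = [0] * n_firms
        self.means = [0.0] * n_firms

    def choose(self, t):
        def ucb(i):
            if self.counts[i] == 0:
                return float("inf")       # try every firm at least once
            return self.means[i] + math.sqrt(2 * math.log(t + 1) / self.counts[i])
        return max(range(len(self.counts)), key=ucb)

    def update(self, firm, reward):
        # reward = 0 when the proposal lost to a competing agent
        self.counts[firm] += 1
        self.means[firm] += (reward - self.means[firm]) / self.counts[firm]

agent = Agent(n_firms=3)
for t in range(100):
    firm = agent.choose(t)
    agent.update(firm, reward=random.random())  # placeholder market feedback
```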

We study the log-rank conjecture from the perspective of point-hyperplane incidence geometry. We formulate the following conjecture: Given a point set in $\mathbb{R}^d$ that is covered by constant-sized sets of parallel hyperplanes, there exists an affine subspace that accounts for a large (i.e., $2^{-{\operatorname{polylog}(d)}}$) fraction of the incidences. Alternatively, our conjecture may be interpreted linear-algebraically as follows: Any rank-$d$ matrix containing at most $O(1)$ distinct entries in each column contains a submatrix of fractional size $2^{-{\operatorname{polylog}(d)}}$, in which each column contains one distinct entry. We prove that our conjecture is equivalent to the log-rank conjecture. Motivated by the connections above, we revisit well-studied questions in point-hyperplane incidence geometry without structural assumptions (i.e., the existence of partitions). We give an elementary argument for the existence of complete bipartite subgraphs of density $\Omega(\epsilon^{2d}/d)$ in any $d$-dimensional configuration with incidence density $\epsilon$. We also improve an upper-bound construction of Apfelbaum and Sharir (SIAM J. Discrete Math. '07), yielding a configuration whose complete bipartite subgraphs are exponentially small and whose incidence density is $\Omega(1/\sqrt d)$. Finally, we discuss various constructions (due to others) which yield configurations with incidence density $\Omega(1)$ and bipartite subgraph density $2^{-\Omega(\sqrt d)}$. Our framework and results may help shed light on the difficulty of improving Lovett's $\tilde{O}(\sqrt{\operatorname{rank}(f)})$ bound (J. ACM '16) for the log-rank conjecture; in particular, any improvement on this bound would imply the first bipartite subgraph size bounds for parallel $3$-partitioned configurations which beat our generic bounds for unstructured configurations.
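
The matrix formulation can be explored by brute force at toy scale: search for a row subset on which many columns of a few-distinct-entries matrix become constant, i.e., the submatrix the conjecture asserts must exist. The exponential search below is purely illustrative.

```python
# Exponential toy search for a large submatrix in which every column
# contains a single distinct entry. Illustration only; sizes are tiny.
from itertools import combinations

import numpy as np

M = np.array([[0, 1, 2],
              [0, 1, 0],
              [1, 1, 2],
              [0, 2, 2]])

best, score = None, 0
for k in range(1, M.shape[0] + 1):
    for rows in combinations(range(M.shape[0]), k):
        sub = M[list(rows), :]
        const_cols = [j for j in range(M.shape[1])
                      if len(set(sub[:, j])) == 1]
        if k * len(const_cols) > score:           # submatrix area
            best, score = (rows, tuple(const_cols)), k * len(const_cols)
print(best, score)   # rows x columns of an all-constant-columns submatrix
```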

We consider the problem of secure distributed matrix multiplication (SDMM), where a user has two matrices $A$ and $B$ and wishes to compute their product with the help of $N$ honest but curious servers, under the security constraint that no information about either $A$ or $B$ is leaked to any server. This paper presents a \emph{new scheme} that considers a grid product partition for matrices $A$ and $B$, which achieves an upload cost significantly lower than existing results in the literature. Since the grid partition is a general partition that incorporates both the inner and outer ones, it turns out that the communication load of the proposed scheme matches the best-known protocols for those extreme cases.
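
The sketch below illustrates the grid product partition itself, with the secure encoding (masking the blocks before they are sent to servers) omitted: split $A$ into $p$ row blocks and $B$ into $q$ column blocks, so $C = AB$ decomposes into a $p \times q$ grid of block products, one per server task.

```python
# Grid product partition: C = A @ B from a p-by-q grid of block products.
# The secure masking of blocks is omitted; this shows only the partition.
import numpy as np

def grid_product(A, B, p, q):
    """Compute A @ B from a p-by-q grid of block products."""
    A_blocks = np.vsplit(A, p)            # p row blocks of A
    B_blocks = np.hsplit(B, q)            # q column blocks of B
    # block (i, j) of C is A_i @ B_j -- one task per server
    return np.block([[Ai @ Bj for Bj in B_blocks] for Ai in A_blocks])

A = np.arange(24).reshape(4, 6)
B = np.arange(18).reshape(6, 3)
assert np.array_equal(grid_product(A, B, 2, 3), A @ B)
```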

In this paper we study colorings (or tilings) of the two-dimensional grid $\mathbb{Z}^2$. A coloring is said to be valid with respect to a set $P$ of $n\times m$ rectangular patterns if all $n\times m$ sub-patterns of the coloring are in $P$. A coloring $c$ is said to be of low complexity with respect to a rectangle if there exist $m,n\in\mathbb{N}$ and a set $P$ of $n\times m$ rectangular patterns such that $c$ is valid with respect to $P$ and $|P|\leq nm$. Open since it was stated in 1997, Nivat's conjecture states that such a coloring is necessarily periodic. If Nivat's conjecture is true, all valid colorings with respect to a set $P$ with $|P|\leq nm$ must be periodic. We prove that there exists at least one periodic coloring among the valid ones. We use this result to investigate the tiling problem, also known as the domino problem, which is well known to be undecidable in its full generality. However, we show that it is decidable in the low-complexity setting. Then, we use our result to show that Nivat's conjecture holds for uniformly recurrent configurations. These results also extend to other convex shapes in place of the rectangle.

After that, we prove that the $nm$ bound is multiplicatively optimal for the decidability of the domino problem: for all $\varepsilon>0$ it is undecidable to determine whether there exists a valid coloring for a given $m,n\in \mathbb{N}$ and set $P$ of $n\times m$ rectangular patterns with $|P|\leq (1+\varepsilon)nm$. We prove a slightly better bound in the case where $m=n$, and we construct aperiodic SFTs of rather low complexity.

This paper is an extended version of a paper published in STACS 2020.
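
The definitions can be checked mechanically on a finite periodic example: collect all $n\times m$ sub-patterns of a coloring and test the low-complexity bound $|P|\leq nm$. Reading the finite window toroidally is our simplification for illustration.

```python
# Collect all n x m sub-patterns of a finite coloring (read toroidally)
# and test the low-complexity bound |P| <= nm. Illustration only.
import numpy as np

def subpatterns(c, n, m):
    """All n x m sub-patterns of the coloring c, read toroidally."""
    H, W = c.shape
    tiled = np.tile(c, (2, 2))            # wrap around for periodicity
    return {tuple(map(tuple, tiled[i:i+n, j:j+m]))
            for i in range(H) for j in range(W)}

c = np.array([[0, 1], [1, 0]])            # a 2-periodic checkerboard
P = subpatterns(c, 2, 2)
print(len(P), len(P) <= 2 * 2)            # 2 True: |P| <= nm holds
```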

Graph Neural Networks (GNNs), which generalize deep neural networks to graph-structured data, have drawn considerable attention and achieved state-of-the-art performance in numerous graph-related tasks. However, existing GNN models mainly focus on designing graph convolution operations, while graph pooling (or downsampling) operations, which play an important role in learning hierarchical representations, are usually overlooked. In this paper, we propose a novel graph pooling operator, called Hierarchical Graph Pooling with Structure Learning (HGP-SL), which can be integrated into various graph neural network architectures. HGP-SL incorporates graph pooling and structure learning into a unified module to generate hierarchical representations of graphs. More specifically, the graph pooling operation adaptively selects a subset of nodes to form an induced subgraph for the subsequent layers. To preserve the integrity of the graph's topological information, we further introduce a structure learning mechanism that learns a refined graph structure for the pooled graph at each layer. By combining the HGP-SL operator with graph neural networks, we perform graph-level representation learning with a focus on the graph classification task. Experimental results on six widely used benchmarks demonstrate the effectiveness of our proposed model.
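
A minimal sketch of score-based top-$k$ node selection follows, with a neighbor-reconstruction score that is loosely in the spirit of a node information score; the structure learning step is omitted, so this is not the full HGP-SL operator.

```python
# Minimal top-k pooling sketch: keep the highest-scoring nodes and take the
# induced subgraph. Scoring rule simplified; structure learning omitted.
import numpy as np

def topk_pool(adj, feats, ratio=0.5):
    """adj: (N, N) adjacency; feats: (N, F) node features."""
    # illustrative score: how poorly a node's neighbors reconstruct it
    deg = adj.sum(1, keepdims=True).clip(min=1)
    score = np.abs(feats - (adj @ feats) / deg).sum(1)
    k = max(1, int(ratio * len(score)))
    idx = np.argsort(-score)[:k]               # k top-scoring nodes
    return adj[np.ix_(idx, idx)], feats[idx]   # induced subgraph + features

adj = np.array([[0, 1, 1, 0], [1, 0, 0, 1],
                [1, 0, 0, 1], [0, 1, 1, 0]], dtype=float)
feats = np.random.randn(4, 3)
pooled_adj, pooled_feats = topk_pool(adj, feats)
print(pooled_adj.shape, pooled_feats.shape)    # (2, 2) (2, 3)
```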

Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph structure is available. In practice, however, real-world graphs are often noisy and incomplete or might not be available at all. In this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted but also in those where a graph is not available at all. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.
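
One way to see the core idea at toy scale: give every potential edge a learnable Bernoulli probability and feed the expected adjacency into a GCN layer, so the edge parameters receive gradients. The paper instead solves a bilevel program over sampled discrete graphs; the relaxation below is only illustrative.

```python
# Toy relaxation: learnable edge logits -> edge probabilities -> one GCN
# layer on the expected adjacency. The paper's bilevel formulation with
# sampled discrete graphs is more involved.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

N, F, H_dim = 5, 3, 4
rng = np.random.default_rng(0)
theta = rng.normal(size=(N, N))       # learnable edge logits
X = rng.normal(size=(N, F))           # node features
W = rng.normal(size=(F, H_dim))       # GCN layer weights

P = sigmoid((theta + theta.T) / 2)    # symmetric edge probabilities
A = P * (1 - np.eye(N))               # expected adjacency, no self-loops
deg = A.sum(1, keepdims=True) + 1e-9
H = np.maximum((A / deg) @ X @ W, 0)  # mean-aggregation GCN layer + ReLU
print(H.shape)                        # (5, 4)
```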

Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot and ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and on Omniglot from 88.0% to 93.8% compared to competing approaches. We also demonstrate the usefulness of the same model for language modeling by introducing a one-shot task on the Penn Treebank.
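
A sketch of the attention-based reading of "mapping a support set and a query to a label": classify a query by a similarity-weighted vote over the labelled support set. The embeddings here are raw features; the actual model learns them end to end, so this shows only the outer mechanism.

```python
# Classify a query by cosine-similarity attention over a labelled support
# set. Raw features stand in for learned embeddings; illustration only.
import numpy as np

def attend_classify(support_x, support_y, query, n_classes):
    sims = support_x @ query / (
        np.linalg.norm(support_x, axis=1) * np.linalg.norm(query) + 1e-9)
    a = np.exp(sims) / np.exp(sims).sum()          # softmax attention
    onehot = np.eye(n_classes)[support_y]
    return (a[:, None] * onehot).sum(0)            # P(label | query, support)

support_x = np.array([[1., 0.], [0., 1.], [0.9, 0.1]])
support_y = np.array([0, 1, 0])
print(attend_classify(support_x, support_y, np.array([1., 0.2]), 2))
```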
