
Mining subgraphs with interesting structural properties from networks (or graphs) is a computationally challenging task. In this paper, we propose two algorithms for enumerating all connected induced subgraphs of a given cardinality from a connected undirected graph. The first algorithm is a variant of a well-known previous algorithm; it enumerates all connected induced subgraphs of cardinality $k$ in a bottom-up manner, and we present data structures that support constant-time element checking while using linear space. In contrast to previous algorithms, which work either bottom-up or by reverse search, our second algorithm enumerates all connected induced subgraphs of cardinality $k$ in a top-down manner; its correctness and complexity are theoretically analyzed and proven. In the experiments, we evaluate the efficiency of both algorithms on a set of real-world networks from various fields. The results show that the bottom-up variant outperforms state-of-the-art algorithms when enumerating connected induced subgraphs of small cardinality, while the top-down algorithm achieves an order-of-magnitude speedup over state-of-the-art algorithms for large cardinality.
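
To make the bottom-up strategy concrete, here is a minimal Python sketch of one standard bottom-up scheme for enumerating connected induced subgraphs of cardinality $k$ (in the spirit of Wernicke's ESU-style enumeration). It illustrates only the grow-by-neighbours idea; the paper's algorithm and its constant-time-checking data structures are not reproduced here.

```python
def enumerate_cis(adj, k):
    """Enumerate every connected induced subgraph on exactly k vertices,
    each exactly once, growing bottom-up from its smallest vertex.

    adj: dict mapping each vertex to the set of its neighbours
         (an undirected graph given as an adjacency structure).
    """
    for v in sorted(adj):
        # only extend with vertices larger than the root v, so each
        # subgraph is produced exactly once, from its smallest vertex
        yield from _extend(adj, k, {v}, {u for u in adj[v] if u > v}, v)


def _extend(adj, k, sub, ext, root):
    if len(sub) == k:
        yield frozenset(sub)
        return
    # closed neighbourhood of the current set, used for the
    # "exclusive neighbour" test that prevents duplicate output
    nbh = set().union(*(adj[u] for u in sub)) | sub
    ext = set(ext)
    while ext:
        w = ext.pop()
        new_ext = ext | {u for u in adj[w] if u > root and u not in nbh}
        yield from _extend(adj, k, sub | {w}, new_ext, root)
```

For example, on a triangle with a pendant vertex (edges 0-1, 1-2, 2-0, 2-3), `sorted(tuple(sorted(s)) for s in enumerate_cis({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}, 3))` returns `[(0, 1, 2), (0, 2, 3), (1, 2, 3)]`, the three connected induced subgraphs of cardinality 3.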

Related Content

Computational aspects of solution notions such as Nash equilibrium have been extensively studied, including settings where the ultimate goal is to find an equilibrium that possesses some additional properties. Furthermore, in order to address issues of tractability, attention has been given to approximate versions of these problems. Our work extends this direction by considering games with constraints, in which players are subject to some form of restriction on their strategic choices. We also consider the relationship between Nash equilibria and so-called constrained or social equilibria in this context, with particular attention to how they are related with respect to totality and complexity. Our results demonstrate that the computational complexity of finding an equilibrium varies significantly between games with slightly different strategic constraints. Beyond these computational aspects, we also demonstrate that such constraints are useful for modeling problems involving strategic resource allocation and are also of interest from the perspective of behavioral game theory.

A random algebraic graph is defined by a group ${G}$ with a "uniform" distribution over it and a connection $\sigma:{G}\longrightarrow [0,1]$ with expectation $p,$ satisfying $\sigma({g}) = \sigma({g}^{-1}).$ The random graph $\mathsf{RAG}(n,{G},p,\sigma)$ with vertex set $[n]$ is formed as follows. First, $n$ independent latent vectors ${x}_1, \ldots, {x}_n$ are sampled uniformly from ${G}.$ Then, vertices $i,j$ are connected with probability $\sigma({x}_i{x}_j^{-1}).$ This model captures random geometric graphs whose latent space is the unit sphere or the hypercube, certain regimes of the stochastic block model, and random subgraphs of Cayley graphs. The main question of interest in this paper is: when is a random algebraic graph statistically and/or computationally distinguishable from $\mathsf{G}(n,p)$? Our results fall into two main categories. 1) Geometric. We focus on the case ${G} =\{\pm1\}^d$ and use Fourier-analytic tools. For hard-threshold connections, we match [LMSY22b] for $p = \omega(1/n)$, and for connections that are $\frac{1}{r\sqrt{d}}$-Lipschitz we extend the results of [LR21b] in the regime $d = \Omega(n\log n)$ to the non-monotone setting. 2) Algebraic. We provide evidence for an exponential statistical-computational gap. Consider any finite group ${G}$ and let $A\subseteq {G}$ be a set of elements formed by including each set of the form $\{{g}, {g}^{-1}\}$ independently with probability $1/2.$ Let $\Gamma_n({G},A)$ be the distribution of random graphs formed by taking a uniformly random induced subgraph of size $n$ of the Cayley graph $\Gamma({G},A).$ Then, $\Gamma_n({G}, A)$ and $\mathsf{G}(n,1/2)$ are statistically indistinguishable with high probability over $A$ if and only if $\log |{G}| \gtrsim n.$ However, low-degree polynomial tests fail to distinguish $\Gamma_n({G}, A)$ and $\mathsf{G}(n,1/2)$ with high probability over $A$ when $\log |{G}| = \log^{\Omega(1)}n.$
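
As a concrete illustration of the generative model (our own sketch, not code from the paper), the following samples $\mathsf{RAG}(n,{G},p,\sigma)$ for the hypercube group ${G}=\{\pm1\}^d$ used in the geometric results; the hard-threshold connection at the end is one hypothetical choice of a valid $\sigma$.

```python
import numpy as np

def sample_rag_hypercube(n, d, sigma, seed=None):
    """Sample RAG(n, G, p, sigma) for G = {+1,-1}^d with entrywise
    multiplication as the group operation (every element is its own
    inverse, so sigma(g) = sigma(g^{-1}) holds automatically).

    sigma: maps a +/-1 vector (a group element) to a probability.
    Returns the symmetric n x n 0/1 adjacency matrix.
    """
    rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=(n, d))      # latent vectors x_1..x_n
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            g = x[i] * x[j]                   # x_i * x_j^{-1} in this group
            if rng.random() < sigma(g):
                A[i, j] = A[j, i] = 1
    return A

def hard_threshold(tau):
    """Hard-threshold connection: connect iff <x_i, x_j> >= tau."""
    return lambda g: 1.0 if g.sum() >= tau else 0.0
```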

Motivated by recent works on streaming algorithms for constraint satisfaction problems (CSPs), we define and analyze oblivious algorithms for the Max-$k$AND problem. This generalizes the definition by Feige and Jozeph (Algorithmica '15) of oblivious algorithms for Max-DICUT, a special case of Max-$2$AND. Oblivious algorithms round each variable with probability depending only on a quantity called the variable's bias. For each oblivious algorithm, we design a so-called "factor-revealing linear program" (LP) which captures its worst-case instance, generalizing one of Feige and Jozeph for Max-DICUT. Then, departing from their work, we perform a fully explicit analysis of these (infinitely many!) LPs. In particular, we show that for all $k$, oblivious algorithms for Max-$k$AND provably outperform a special subclass of algorithms we call "superoblivious" algorithms. Our result has implications for streaming algorithms: Generalizing the result for Max-DICUT of Saxena, Singer, Sudan, and Velusamy (SODA'23), we prove that certain separation results hold between streaming models for infinitely many CSPs: for every $k$, $O(\log n)$-space sketching algorithms for Max-$k$AND known to be optimal in $o(\sqrt n)$-space can be beaten in (a) $O(\log n)$-space under a random-ordering assumption, and (b) $O(n^{1-1/k} D^{1/k})$ space under a maximum-degree-$D$ assumption. Even in the previously-known case of Max-DICUT, our analytic proof gives a fuller, computer-free picture of these separation results.
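
To fix ideas, here is a small Python sketch of an oblivious algorithm for Max-DICUT (the special case of Max-$2$AND mentioned above): each vertex is rounded independently, with a probability that depends only on its bias. The step rounding function below is a hypothetical illustration, not the optimal rounding analyzed in the paper.

```python
import random

def bias(edges, v):
    """bias(v) = (outdeg - indeg) / (outdeg + indeg) for Max-DICUT."""
    out_d = sum(1 for (u, w) in edges if u == v)
    in_d = sum(1 for (u, w) in edges if w == v)
    return (out_d - in_d) / (out_d + in_d)

def step_rounding(t):
    """A hypothetical step function of the bias t in [-1, 1]."""
    return 0.9 if t > 0 else (0.1 if t < 0 else 0.5)

def oblivious_dicut(edges, rounding=step_rounding, rng=random):
    """Put each vertex on the 'source' side independently, with a
    probability depending only on its bias; return that side."""
    vertices = {v for e in edges for v in e}
    return {v for v in vertices if rng.random() < rounding(bias(edges, v))}

def dicut_value(edges, source_side):
    """Number of edges crossing from the source side to its complement."""
    return sum(1 for (u, v) in edges
               if u in source_side and v not in source_side)
```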

A graph is called 1-plane if it has an embedding in the plane where each edge is crossed at most once by another edge. A crossing of a 1-plane graph is called an $\times$-crossing if there are no other edges connecting the endpoints of the crossing (apart from the crossing pair of edges). In this paper, we show how to compute the vertex connectivity of a 1-plane graph $G$ without $\times$-crossings in linear time. To do so, we show that for any two vertices $u,v$ in a minimum separating set $S$, the distance between $u$ and $v$ in an auxiliary graph $\Lambda(G)$ (obtained by planarizing $G$ and then inserting into each face a new vertex adjacent to all vertices of the face) is small. It hence suffices to search for a minimum separating set in various subgraphs $\Lambda_i$ of $\Lambda(G)$ with small diameter. Since $\Lambda(G)$ is planar, the subgraphs $\Lambda_i$ have small treewidth. Each minimum separating set $S$ then gives rise to a partition of $\Lambda_i$ into three vertex sets with special properties; such a partition can be found via Courcelle's theorem in linear time.
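
A minimal sketch of the auxiliary-graph construction described above, under the assumption that the planarization of $G$ (each crossing replaced by a dummy vertex) and its list of faces have already been computed by a planar-embedding routine; both inputs are assumptions of this illustration, not part of the paper's stated pipeline.

```python
def lambda_graph(planar_adj, faces):
    """Build Lambda(G): into each face of the planarized graph,
    insert a new vertex adjacent to all vertices of that face.

    planar_adj: adjacency sets of the planarized 1-plane graph.
    faces: list of faces, each given as a list of its vertices.
    """
    aux = {v: set(nbrs) for v, nbrs in planar_adj.items()}
    for idx, face in enumerate(faces):
        f = ('face', idx)            # a fresh vertex per face
        aux[f] = set(face)
        for v in face:
            aux[v].add(f)
    return aux
```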

Motivated by an application from geodesy, we introduce a novel clustering problem: a $k$-center (or $k$-diameter) problem with a side constraint. For the side constraint, we are given an undirected connectivity graph $G$ on the input points, and a clustering is feasible only if every cluster induces a connected subgraph in $G$. We call the resulting problems the connected $k$-center problem and the connected $k$-diameter problem. We prove several results on the complexity and approximability of these problems. Our main result is an $O(\log^2{k})$-approximation algorithm for the connected $k$-center and the connected $k$-diameter problem. For Euclidean metrics and metrics with constant doubling dimension, the approximation factor improves to $O(1)$. We also consider the special cases in which the connectivity graph is a line or a tree. For the line, we give optimal polynomial-time algorithms; for the tree, we give either an optimal polynomial-time algorithm or a $2$-approximation algorithm for all variants of our model. We complement our upper bounds with several lower bounds.
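
The side constraint itself is simple to verify: a clustering is feasible exactly when each cluster induces a connected subgraph of the connectivity graph $G$. The following Python sketch checks feasibility with one BFS per cluster (an illustration of the constraint, not an algorithm from the paper).

```python
from collections import deque

def is_feasible_clustering(conn_adj, clusters):
    """Return True iff every cluster induces a connected subgraph of
    the connectivity graph (given as a dict of adjacency sets)."""
    for cluster in clusters:
        C = set(cluster)
        if not C:
            continue
        start = next(iter(C))
        seen, queue = {start}, deque([start])
        while queue:                      # BFS restricted to G[C]
            u = queue.popleft()
            for w in conn_adj[u] & C:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        if seen != C:
            return False
    return True
```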

This paper categorizes the parameterized complexity of the algorithmic problems Perfect Phylogeny and Triangulating Colored Graphs. We show that they are complete for the parameterized complexity class XALP using a reduction from Tree-chained Multicolor Independent Set and a proof of membership. We introduce the problem Triangulating Multicolored Graphs as a stepping stone and prove XALP-completeness for this problem as well. We also show that, assuming the Exponential Time Hypothesis, there exists no algorithm that solves any of these problems in time $f(k) n^{o(k)}$, where $n$ is the input size, $k$ the parameter, and $f$ any computable function.

Recent years have witnessed significant advances in technologies and services in modern network applications, including smart grid management, wireless communication, cybersecurity, and multi-agent autonomous systems. Given the heterogeneous nature of networked entities, emerging network applications call for game-theoretic models and learning-based approaches to create distributed network intelligence that responds to uncertainties and disruptions in dynamic or adversarial environments. This paper articulates the confluence of networks, games, and learning, which establishes a theoretical underpinning for understanding multi-agent decision-making over networks. We provide a selective overview of game-theoretic learning algorithms within the framework of stochastic approximation theory, together with applications in representative contexts of modern network systems, such as next-generation wireless communication networks, the smart grid, and distributed machine learning. In addition to existing research on game-theoretic learning over networks, we highlight several new angles and research endeavors on learning in games related to recent developments in artificial intelligence; some of these angles extrapolate from our own research interests. The overall objective of the paper is to give the reader a clear picture of the strengths and challenges of adopting game-theoretic learning methods in the context of network systems, and to identify fruitful future research directions for both theoretical and applied studies.

This paper revisits graph convolutional neural networks by bridging the gap between the spectral and spatial design of graph convolutions. We theoretically demonstrate the equivalence of the graph convolution process regardless of whether it is designed in the spatial or the spectral domain. The resulting general framework allows us to carry out a spectral analysis of the most popular ConvGNNs, explaining their performance and exposing their limits. Moreover, the proposed framework is used to design new convolutions with a custom frequency profile in the spectral domain while applying them in the spatial domain. We also propose a generalization of the depthwise separable convolution framework for graph convolutional networks, which decreases the total number of trainable parameters while preserving the capacity of the model. To the best of our knowledge, such a framework has never been used in the GNN literature. Our proposals are evaluated on both transductive and inductive graph learning problems. The results show the relevance of the proposed method and provide some of the first experimental evidence that spectral filter coefficients are transferable from one graph to another. Our source code is publicly available at: //github.com/balcilar/Spectral-Designed-Graph-Convolutions
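
To illustrate the spectral-design idea (a self-contained sketch under simplifying dense-matrix assumptions, not the authors' implementation), the following builds a convolution support with a custom frequency profile from the normalized Laplacian and applies it in the spatial domain like any other support:

```python
import numpy as np

def spectral_designed_support(A, freq_profile):
    """Given a symmetric adjacency matrix A and a desired frequency
    response freq_profile(lambda), return the dense support
    C = U diag(freq_profile(lam)) U^T, where L = U diag(lam) U^T is
    the symmetric normalized Laplacian of the graph."""
    d = A.sum(axis=1).astype(float)
    d_is = np.zeros_like(d)
    d_is[d > 0] = d[d > 0] ** -0.5        # D^{-1/2}, safe for deg 0
    L = np.eye(len(A)) - d_is[:, None] * A * d_is[None, :]
    lam, U = np.linalg.eigh(L)
    return U @ np.diag(freq_profile(lam)) @ U.T

# e.g. a hypothetical low-pass profile, applied as one conv layer:
#   C = spectral_designed_support(A, lambda lam: np.exp(-2.0 * lam))
#   H_next = np.maximum(C @ H @ W, 0.0)   # ReLU(C H W)
```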

Graph convolutional networks (GCNs) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer either from a high computational cost that grows exponentially with the number of GCN layers, or from a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this dataset, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) while using much less memory (2.2GB vs. 11.2GB). Furthermore, for training a 4-layer GCN on this dataset, our algorithm finishes in around 36 minutes, while all existing GCN training algorithms fail due to out-of-memory issues. Finally, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, whereas the previous best result was 98.71 by [16]. Our code is publicly available at //github.com/google-research/google-research/tree/master/cluster_gcn.
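
A schematic of the Cluster-GCN sampling strategy described above; the node partition is assumed to come from an external graph-clustering step (e.g. METIS), and `train_step` stands in for one SGD update of the user's GCN, so this is an illustration of the block-sampling idea rather than the authors' implementation.

```python
import numpy as np

def cluster_gcn_epoch(adj, feats, labels, partition, train_step):
    """One epoch of Cluster-GCN-style training.

    adj: dense n x n adjacency matrix (sparse in practice).
    partition: list of node-index arrays from a graph clustering.
    train_step: callback doing one SGD step on one subgraph.
    """
    for b in np.random.permutation(len(partition)):
        idx = partition[b]
        sub_adj = adj[np.ix_(idx, idx)]   # restrict neighborhood search
        train_step(sub_adj, feats[idx], labels[idx])
```

Because aggregation never leaves the sampled block, the per-step memory footprint depends on the cluster size rather than on the full graph, which is what makes deeper GCNs trainable here.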

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
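
The re-weighting scheme can be written down directly from the formula: each class is weighted inversely to its effective number of samples $(1-\beta^{n})/(1-\beta)$. A small Python sketch follows; normalizing the weights to sum to the number of classes is a common convention and an assumption here, not something stated in the abstract.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights proportional to 1 / E_n, where
    E_n = (1 - beta^n) / (1 - beta) is the effective number of
    samples; normalized to sum to the number of classes
    (a common convention, assumed here)."""
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    w = 1.0 / effective_num
    return w * len(n) / w.sum()

# e.g. a long-tailed 3-class problem:
# class_balanced_weights([5000, 500, 50])  # largest weight goes to the
# 50-sample class; these weights then rescale the per-class loss.
```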
