
Our work concerns algorithms for an unweighted variant of Maximum Flow. In the All-Pairs Connectivity (APC) problem, we are given a graph $G$ on $n$ vertices and $m$ edges, and are tasked with computing the maximum number of edge-disjoint paths from $s$ to $t$ (equivalently, the size of a minimum $(s,t)$-cut) in $G$, for all pairs of vertices $(s,t)$. Although over undirected graphs APC can be solved in essentially optimal $n^{2+o(1)}$ time, the true time complexity of APC over directed graphs remains open: this problem can be solved in $\tilde{O}(m^\omega)$ time, where $\omega \in [2, 2.373)$ is the exponent of matrix multiplication, but no matching conditional lower bound is known. We study a variant of APC called the $k$-Bounded All-Pairs Connectivity ($k$-APC) problem. In this problem, we are given an integer $k$ and a graph $G$, and are tasked with reporting the size of a minimum $(s,t)$-cut only for pairs $(s,t)$ of vertices with a minimum cut size less than $k$ (if the minimum $(s,t)$-cut has size at least $k$, we just report that it is "large" instead of computing the exact value). We present an algorithm solving $k$-APC in directed graphs in $\tilde{O}((kn)^\omega)$ time. This runtime is $\tilde O(n^\omega)$ for all $k$ polylogarithmic in $n$, which is essentially optimal under popular conjectures from fine-grained complexity. Previously, this runtime was only known for $k\le 2$ [Georgiadis et al., ICALP 2017]. We also study a variant of $k$-APC, the $k$-Bounded All-Pairs Vertex Connectivity ($k$-APVC) problem, which considers internally vertex-disjoint paths instead of edge-disjoint paths. We present an algorithm solving $k$-APVC in directed graphs in $\tilde{O}(k^2n^\omega)$ time. Previous work solved an easier version of the $k$-APVC problem in $\tilde O((kn)^\omega)$ time [Abboud et al., ICALP 2019].
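
As a concrete reference point for the problem semantics (not the paper's $\tilde{O}((kn)^\omega)$ algorithm), the sketch below computes $\min(\lambda(s,t), k)$ for one ordered pair via unit-capacity Ford-Fulkerson that stops after $k$ augmenting paths; running it over all $n(n-1)$ ordered pairs gives a naive $O(kmn^2)$ baseline. The graph encoding and function name are our own illustrative choices.

```python
from collections import deque

def k_bounded_connectivity(n, edges, s, t, k):
    """Return min(lambda(s, t), k): the number of edge-disjoint s->t paths,
    capped at k. Unit-capacity Ford-Fulkerson with early stopping; an
    illustrative baseline only, not the O~((kn)^omega) algorithm."""
    # Residual capacities: cap[u][v] = remaining capacity of arc (u, v).
    cap = [dict() for _ in range(n)]
    for u, v in edges:
        cap[u][v] = cap[u].get(v, 0) + 1
        cap[v].setdefault(u, 0)  # reverse residual arc
    flow = 0
    while flow < k:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and parent[v] == -1:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:  # no augmenting path: flow equals the min cut
            break
        v = t  # push one unit of flow along the path found
        while v != s:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1
    return flow
```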

Related content

The problem Power Dominating Set (PDS) is motivated by the placement of phasor measurement units to monitor electrical networks. It asks for a minimum set of vertices in a graph that observes all remaining vertices by exhaustively applying two observation rules. Our contribution is twofold. First, we determine the parameterized complexity of PDS by proving it is $W[P]$-complete when parameterized with respect to the solution size. We note that it was only known to be $W[2]$-hard before. Our second and main contribution is a new algorithm for PDS that efficiently solves practical instances. Our algorithm consists of two complementary parts. The first is a set of reduction rules for PDS that can also be used in conjunction with previously existing algorithms. The second is an algorithm for solving the remaining kernel based on the implicit hitting set approach. Our evaluation on a set of power grid instances from the literature shows that our solver outperforms previous state-of-the-art solvers for PDS by more than one order of magnitude on average. Furthermore, our algorithm can solve previously unsolved instances of continental scale within a few minutes.
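
For concreteness, here is a minimal sketch of the two observation rules as they are standardly defined for PDS; the encoding and names are our own, not the paper's. A candidate set is a power dominating set exactly when its observed closure covers the whole vertex set.

```python
def observed_closure(adj, pmus):
    """Exhaustively apply the two PDS observation rules and return the set
    of observed vertices. `adj` maps each vertex to its set of neighbours;
    `pmus` is the candidate solution. `pmus` is a power dominating set
    iff the returned set contains every vertex of the graph."""
    # Rule 1 (domination): a PMU observes its own vertex and all neighbours.
    observed = set(pmus)
    for v in pmus:
        observed |= adj[v]
    # Rule 2 (propagation): an observed vertex with exactly one unobserved
    # neighbour observes that neighbour as well. Iterate to a fixed point.
    changed = True
    while changed:
        changed = False
        for v in list(observed):
            unobserved = adj[v] - observed
            if len(unobserved) == 1:
                observed |= unobserved
                changed = True
    return observed
```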

We consider the problem of answering connectivity queries on a real algebraic curve. The curve is given as the real trace of an algebraic curve, assumed to be in generic position and defined by rational parametrizations. The query points are given by a zero-dimensional parametrization. We design an algorithm which counts the number of connected components of the real curve under study, and decides which query points lie in which connected component, in time log-linear in $N^6$, where $N$ is the maximum of the degrees and coefficient bit-sizes of the polynomials given as input. This matches the currently best-known bound for computing the topology of real plane curves. The main novelty of this algorithm is the avoidance of the computation of the complete topology of the curve.

Deep neural networks are superior to shallow networks in learning complex representations. As such, there is a fast-growing interest in utilizing them in large-scale settings. The training process of neural networks is already known to be time-consuming, and having a deep architecture only aggravates the issue. This process consists mostly of matrix operations, among which matrix multiplication is the bottleneck. Several sampling-based techniques have been proposed for speeding up the training time of deep neural networks by approximating the matrix products. These techniques fall into two categories: (i) sampling a subset of nodes in every hidden layer as active at every iteration and (ii) sampling a subset of nodes from the previous layer to approximate the current layer's activations using the edges from the sampled nodes. In both cases, the matrix products are computed using only the selected samples. In this paper, we evaluate the scalability of these approaches on CPU machines with limited computational resources. Making a connection between the two research directions as special cases of approximating matrix multiplications in the context of neural networks, we provide a negative theoretical analysis showing that feedforward approximation is an obstacle to scalability. We conduct comprehensive experimental evaluations that demonstrate the most pressing challenges and limitations associated with the studied approaches. We observe that the hashing-based node selection method is not scalable to a large number of layers, confirming our theoretical analysis. Finally, we identify directions for future research.
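
Both sampling categories reduce, at their core, to approximating a product $AB$ from a subset of its rank-one terms. Below is a minimal sketch of that shared primitive with the standard norm-proportional sampling probabilities; it illustrates the general technique, not the specific estimators analyzed in the paper.

```python
import numpy as np

def sampled_matmul(A, B, c, rng=None):
    """Unbiased approximation of A @ B from c sampled rank-one terms
    A[:, i] * B[i, :], drawn with the standard norm-proportional
    probabilities p_i ~ ||A[:, i]|| * ||B[i, :]|| and rescaled by
    1 / (c * p_i) so that the expectation equals A @ B exactly."""
    rng = np.random.default_rng(0) if rng is None else rng
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()  # assumes A and B are not identically zero
    idx = rng.choice(A.shape[1], size=c, replace=True, p=p)
    return (A[:, idx] / (c * p[idx])) @ B[idx, :]
```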

A matchstick graph is a plane graph with edges drawn as unit-distance line segments. Harborth introduced these graphs in 1981 and conjectured that the maximum number of edges for a matchstick graph on $n$ vertices is $\lfloor 3n-\sqrt{12n-3} \rfloor$. In this paper we prove this conjecture for all $n\geq 1$. The main geometric ingredient of the proof is an isoperimetric inequality related to L'Huilier's inequality.
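
For reference, the conjectured (now proven) bound is easy to tabulate; the helper below, an illustrative addition of ours, evaluates $\lfloor 3n-\sqrt{12n-3} \rfloor$ in exact integer arithmetic, avoiding floating-point edge cases such as $n=7$, where $12n-3=81$ is a perfect square.

```python
from math import isqrt

def harborth_bound(n):
    """floor(3n - sqrt(12n - 3)) in exact integer arithmetic."""
    r = 12 * n - 3
    s = isqrt(r)
    # If r is a perfect square the floor is 3n - s, otherwise 3n - s - 1.
    return 3 * n - s if s * s == r else 3 * n - s - 1

print([harborth_bound(n) for n in range(1, 11)])
# -> [0, 1, 3, 5, 7, 9, 12, 14, 16, 19]
```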

Associative memories are devices storing information that can be fully retrieved given partial disclosure of it. We examine a toy model of associative memory and the ultimate limitations it is subjected to within the framework of general probabilistic theories (GPTs), which represent the most general class of physical theories satisfying some basic operational axioms. We ask ourselves how large the dimension of a GPT should be so that it can accommodate $2^m$ states with the property that any $N$ of them are perfectly distinguishable. Call $d(N,m)$ the minimal such dimension. Invoking an old result by Danzer and Gr\"unbaum, we prove that $d(2,m)=m+1$, to be compared with $O(2^m)$ when the GPT is required to be either classical or quantum. This yields an example of a task where GPTs outperform both classical and quantum theory exponentially. More generally, we resolve the case of fixed $N$ and asymptotically large $m$, proving that $d(N,m) \leq m^{1+o_N(1)}$ (as $m\to\infty$) for every $N\geq 2$, which yields again an exponential improvement over classical and quantum theories. Finally, we develop a numerical approach to the general problem of finding the largest $N$-wise mutually distinguishable set for a given GPT, which can be seen as an instance of the maximum clique problem on $N$-regular hypergraphs.

This paper develops an approximation to the (effective) $p$-resistance and applies it to multi-class clustering. Spectral methods based on the graph Laplacian and its generalization to the graph $p$-Laplacian have been a backbone of non-Euclidean clustering techniques. The advantage of the $p$-Laplacian is that the parameter $p$ induces a controllable bias on cluster structure. The drawback of $p$-Laplacian eigenvector-based methods is that the third and higher eigenvectors are difficult to compute. Thus, instead, we are motivated to use the $p$-resistance induced by the $p$-Laplacian for clustering. For $p$-resistance, small $p$ biases towards clusters with high internal connectivity while large $p$ biases towards clusters of small ``extent,'' that is, a preference for smaller shortest-path distances between vertices in the cluster. However, the $p$-resistance is expensive to compute. We overcome this by developing an approximation to the $p$-resistance. We prove upper and lower bounds on this approximation and observe that it is exact when the graph is a tree. We also provide theoretical justification for the use of $p$-resistance for clustering. Finally, we provide experiments comparing our approximated $p$-resistance clustering to other $p$-Laplacian based methods.
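
As an anchor, the $p=2$ case is the classical effective resistance and has a closed form via the Laplacian pseudoinverse, $R(s,t) = (e_s-e_t)^\top L^+ (e_s-e_t)$; one common definition of the general $p$-resistance minimizes $\sum_e |f_e|^p$ over unit $(s,t)$-flows, for which no such closed form is available, motivating the paper's approximation. The sketch below covers only the $p=2$ case.

```python
import numpy as np

def effective_resistance(L, s, t):
    """p = 2 resistance via the pseudoinverse of the graph Laplacian L:
    R(s, t) = (e_s - e_t)^T L^+ (e_s - e_t). For general p there is no
    such closed form, hence the need for an approximation."""
    e = np.zeros(L.shape[0])
    e[s], e[t] = 1.0, -1.0
    return e @ np.linalg.pinv(L) @ e

# Path graph on 3 vertices: the two unit edges are in series, so R(0, 2) = 2.
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
print(effective_resistance(L, 0, 2))  # -> 2.0
```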

An $(n,m)$-graph is a graph with $n$ types of arcs and $m$ types of edges. A homomorphism of an $(n,m)$-graph $G$ to another $(n,m)$-graph $H$ is a vertex mapping that preserves adjacency, its direction, and its type. The minimum value of $|V(H)|$ such that $G$ admits a homomorphism to $H$ is the $(n,m)$-chromatic number of $G$, denoted by $\chi_{n,m}(G)$. This parameter was introduced by Ne\v{s}et\v{r}il and Raspaud (J. Comb. Theory Ser. B 2000). In this article, we show that the arboricity of $G$ is bounded by a function of $\chi_{n,m}(G)$, but not the other way round. We also show that the acyclic chromatic number of $G$ is bounded by a function of $\chi_{n,m}(G)$, while a bound in the other direction was known beforehand. Moreover, we show that the $(n,m)$-chromatic number for the family of graphs having maximum average degree less than $2+ \frac{2}{4(2n+m)-1}$, which contains the family of planar graphs having girth at least $8(2n+m)$ as a subfamily, is equal to $2(2n+m)+1$. This improves the previously known result, which proved that the $(n,m)$-chromatic number for the family of planar graphs having girth at least $10(2n+m)-4$ is equal to $2(2n+m)+1$. It is known that the $(n,m)$-chromatic number for the family of partial $2$-trees is bounded below and above by quadratic functions of $(2n+m)$, and that the lower bound is tight when $(2n+m)=2$. We show that the lower bound is not tight when $(2n+m)=3$ by improving the corresponding lower bounds by one. We manage to improve some of the upper bounds in these cases as well.
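
To make the definition concrete, a brute-force homomorphism test is straightforward (though exponential in $|V(G)|$); the encoding below, storing typed arcs one way and typed edges both ways, is our own illustrative choice.

```python
from itertools import product

def is_hom(G, H, f):
    """Check that the vertex map f (a tuple) preserves every typed arc and
    edge. A graph is a pair (num_vertices, constraints) where an arc of
    type i is stored once as (u, v, ('a', i)) and an edge of type j is
    stored in both directions as (u, v, ('e', j)) and (v, u, ('e', j))."""
    return all((f[u], f[v], kind) in H[1] for (u, v, kind) in G[1])

def admits_hom(G, H):
    """Brute force over all |V(H)|^|V(G)| maps; if True, chi_{n,m}(G) <= |V(H)|."""
    return any(is_hom(G, H, f) for f in product(range(H[0]), repeat=G[0]))
```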

We employ a toolset -- dubbed Dr. Frankenstein -- to analyse the similarity of representations in deep neural networks. With this toolset, we aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer. We demonstrate that the inner representations emerging in deep convolutional neural networks with the same architecture but different initializations can be matched with a surprisingly high degree of accuracy even with a single, affine stitching layer. We choose the stitching layer from several possible classes of linear transformations and investigate their performance and properties. The task of matching representations is closely related to notions of similarity. Using this toolset, we also provide a novel viewpoint on the current line of research regarding similarity indices of neural network representations: the perspective of the performance on a task.
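
Below is a minimal sketch of the underlying idea: fitting an affine stitching layer between the frozen activations of two networks by least squares. This closed-form regression is a simplification for illustration; the toolset itself trains the stitching layer on the task loss.

```python
import numpy as np

def fit_affine_stitching(acts_a, acts_b):
    """Closed-form least-squares fit of an affine map (W, b) minimizing
    ||acts_a @ W + b - acts_b||_F between two activation matrices of
    shape (samples, channels). A simplified stand-in for the trained
    stitching layer."""
    X = np.hstack([acts_a, np.ones((acts_a.shape[0], 1))])  # bias column
    coef, *_ = np.linalg.lstsq(X, acts_b, rcond=None)
    return coef[:-1], coef[-1]  # W, b

def stitch(acts_a, W, b):
    """Map network A's activations into network B's representation space."""
    return acts_a @ W + b
```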

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
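
The basic building block the survey organizes around is the uniform affine quantizer; a minimal sketch (our own illustrative helper, with a 4-bit default) follows.

```python
import numpy as np

def quantize_uniform(x, num_bits=4):
    """Uniform affine quantization to num_bits unsigned integers:
    q = clip(round(x / scale) + zero_point, qmin, qmax).
    Returns the codes plus (scale, zero_point) for dequantization."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    scale = scale if scale > 0 else 1.0  # guard against constant tensors
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax)
    return q.astype(np.int32), scale, zero_point

def dequantize(q, scale, zero_point):
    """Approximate reconstruction: x_hat = scale * (q - zero_point)."""
    return scale * (q.astype(np.float32) - zero_point)
```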

Graph convolutional network (GCN) has been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that grows exponentially with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while being able to achieve comparable test accuracy with previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs 1961 seconds) and uses much less memory (2.2GB vs 11.2GB). Furthermore, for training a 4-layer GCN on this data, our algorithm can finish in around 36 minutes while all the existing GCN training algorithms fail to train due to the out-of-memory issue. Furthermore, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy---using a 5-layer Cluster-GCN, we achieve state-of-the-art test F1 score 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our code is publicly available at //github.com/google-research/google-research/tree/master/cluster_gcn.
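
Below is a minimal numpy sketch of the core step, restricting one GCN forward pass to the subgraph induced by a sampled cluster so that only the cluster's slice of the adjacency and feature matrices is ever materialized; this illustrates the idea only and is not the released implementation, which works with sparse matrices.

```python
import numpy as np

def cluster_gcn_forward(A, X, weights, cluster):
    """One GCN forward pass restricted to the subgraph induced by `cluster`,
    the memory-saving step at the heart of Cluster-GCN. Dense arrays are
    used purely for illustration; a real implementation is sparse."""
    idx = np.asarray(cluster)
    A_sub = A[np.ix_(idx, idx)] + np.eye(len(idx))  # add self-loops
    d = A_sub.sum(axis=1)
    A_hat = A_sub / np.sqrt(np.outer(d, d))         # symmetric normalization
    H = X[idx]
    for layer, W in enumerate(weights):
        H = A_hat @ H @ W
        if layer < len(weights) - 1:
            H = np.maximum(H, 0.0)                  # ReLU on hidden layers
    return H  # per-node outputs for the cluster, fed into the loss
```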
