
Given an undirected $n$-vertex planar graph $G=(V,E,\omega)$ with non-negative edge weight function $\omega:E\rightarrow \mathbb R$ and given an assigned label to each vertex, a vertex-labeled distance oracle is a data structure which for any query consisting of a vertex $u$ and a label $\lambda$ reports the shortest path distance from $u$ to the nearest vertex with label $\lambda$. We show that if there is a distance oracle for undirected $n$-vertex planar graphs with non-negative edge weights using $s(n)$ space and with query time $q(n)$, then there is a vertex-labeled distance oracle with $\tilde{O}(s(n))$ space and $\tilde{O}(q(n))$ query time. Using the state-of-the-art distance oracle of Long and Pettie, our construction produces a vertex-labeled distance oracle using $n^{1+o(1)}$ space and query time $\tilde O(1)$ at one extreme, $\tilde O(n)$ space and $n^{o(1)}$ query time at the other extreme, as well as such oracles for the full tradeoff between space and query time obtained in their paper. This is the first non-trivial exact vertex-labeled distance oracle for planar graphs and, to our knowledge, for any interesting graph class other than trees.
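To make the query semantics concrete, here is a minimal sketch (in Python, which this page does not otherwise use) of a naive vertex-labeled distance oracle: one multi-source Dijkstra per label. The function names `build_labeled_oracle` and `query` are hypothetical; the space usage here is O(n · #labels), far from the paper's bounds, so this only illustrates what a query answers, not the construction.

```python
import heapq
from collections import defaultdict

def build_labeled_oracle(adj, labels):
    """Naive baseline: for each label, run one multi-source Dijkstra
    from all vertices carrying that label.
    adj: {u: [(v, w), ...]} undirected adjacency, labels: {u: label}."""
    oracle = {}
    by_label = defaultdict(list)
    for u, lab in labels.items():
        by_label[lab].append(u)
    for lab, sources in by_label.items():
        dist = {u: 0.0 for u in sources}
        pq = [(0.0, u) for u in sources]
        heapq.heapify(pq)
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue  # stale queue entry
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        oracle[lab] = dist
    return oracle

def query(oracle, u, lab):
    """Distance from u to the nearest vertex with label lab."""
    return oracle.get(lab, {}).get(u, float("inf"))
```

The reduction in the abstract replaces this per-label table with calls to an unlabeled distance oracle, paying only polylogarithmic overhead.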

Related content


We study the problem of approximating edit distance in sublinear time. This is formalized as a promise problem $(k,k^c)$-Gap Edit Distance, where the input is a pair of strings $X,Y$ and parameters $k,c>1$, and the goal is to return YES if $ED(X,Y)\leq k$ and NO if $ED(X,Y)> k^c$. Recent years have witnessed significant interest in designing sublinear-time algorithms for Gap Edit Distance. We resolve the non-adaptive query complexity of Gap Edit Distance, improving over several previous results. Specifically, we design a non-adaptive algorithm with query complexity $\tilde{O}(\frac{n}{k^{c-0.5}})$, and further prove that this bound is optimal up to polylogarithmic factors. Our algorithm also achieves optimal time complexity $\tilde{O}(\frac{n}{k^{c-0.5}})$ whenever $c\geq 1.5$. For $1<c<1.5$, the running time of our algorithm is $\tilde{O}(\frac{n}{k^{2c-1}})$. For the restricted case of $k^c=\Omega(n)$, this matches a known result [Batu, Erg\"un, Kilian, Magen, Raskhodnikova, Rubinfeld, and Sami, STOC 2003], and in all other (nontrivial) cases, our running time is strictly better than all previous algorithms, including the adaptive ones.
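The promise problem can be stated precisely with a short sketch. The decision procedure below computes the edit distance exactly with the standard Wagner-Fischer dynamic program, so it runs in quadratic (not sublinear) time; it only pins down what a $(k,k^c)$-gap algorithm must answer. The function names are hypothetical.

```python
def edit_distance(x, y):
    # Standard O(|x||y|)-time, O(|y|)-space dynamic program.
    m, n = len(x), len(y)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + (x[i - 1] != y[j - 1]))  # substitution
        prev = cur
    return prev[n]

def gap_edit_distance(x, y, k, c):
    """Decide the (k, k^c)-gap promise problem (exactly, not sublinearly):
    'YES' if ED(x,y) <= k, 'NO' if ED(x,y) > k^c, and None on inputs
    that violate the promise (where any answer is acceptable)."""
    d = edit_distance(x, y)
    if d <= k:
        return "YES"
    if d > k ** c:
        return "NO"
    return None
```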

For integers $d \geq 2$ and $k \geq d+1$, a $k$-hole in a set $S$ of points in general position in $\mathbb{R}^d$ is a $k$-tuple of points from $S$ in convex position such that the interior of their convex hull does not contain any point from $S$. For a convex body $K \subseteq \mathbb{R}^d$ of unit $d$-dimensional volume, we study the expected number $EH^K_{d,k}(n)$ of $k$-holes in a set of $n$ points drawn uniformly and independently at random from $K$. We prove an asymptotically tight lower bound on $EH^K_{d,k}(n)$ by showing that, for all fixed integers $d \geq 2$ and $k\geq d+1$, the number $EH_{d,k}^K(n)$ is at least $\Omega(n^d)$. For some small holes, we even determine the leading constant $\lim_{n \to \infty}n^{-d}EH^K_{d,k}(n)$ exactly. We improve the currently best known lower bound on $\lim_{n \to \infty}n^{-d}EH^K_{d,d+1}(n)$ by Reitzner and Temesvari (2019). In the plane, we show that the constant $\lim_{n \to \infty}n^{-2}EH^K_{2,k}(n)$ is independent of $K$ for every fixed $k \geq 3$ and we compute it exactly for $k=4$, improving earlier estimates by Fabila-Monroy, Huemer, and Mitsche (2015) and by the authors (2020).
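For intuition in the simplest case $d=2$, $k=3$: every triple of points in general position is in convex position, so a 3-hole is just an empty triangle. A brute-force counter (a sketch with hypothetical names, $O(n^4)$ time, nothing like the asymptotic analysis in the abstract) is:

```python
from itertools import combinations

def sign(o, a, b):
    # Orientation of the triple (o, a, b): > 0 if CCW, < 0 if CW.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    # Strict interior test; valid for points in general position.
    s1, s2, s3 = sign(a, b, p), sign(b, c, p), sign(c, a, p)
    return (s1 > 0 and s2 > 0 and s3 > 0) or (s1 < 0 and s2 < 0 and s3 < 0)

def count_3_holes(pts):
    """Count empty triangles (3-holes) of a planar point set by brute force."""
    holes = 0
    for a, b, c in combinations(pts, 3):
        if all(not in_triangle(p, a, b, c)
               for p in pts if p not in (a, b, c)):
            holes += 1
    return holes
```

The quantity $EH^K_{2,3}(n)$ is the expectation of this count when the $n$ points are drawn uniformly from $K$.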

Joint modeling of a large number of variables often requires dimension reduction strategies that lead to structural assumptions of the underlying correlation matrix, such as equal pair-wise correlations within subsets of variables. The underlying correlation matrix is thus of interest for both model specification and model validation. In this paper, we develop tests of the hypothesis that the entries of the Kendall rank correlation matrix are linear combinations of a smaller number of parameters. The asymptotic behavior of the proposed test statistics is investigated both when the dimension is fixed and when it grows with the sample size. We pay special attention to the restricted hypothesis of partial exchangeability, which contains full exchangeability as a special case. We show that under partial exchangeability, the test statistics and their large-sample distributions simplify, which leads to computational advantages and better performance of the tests. We propose various scalable numerical strategies for implementation of the proposed procedures, investigate their behavior through simulations and power calculations under local alternatives, and demonstrate their use on a real dataset of mean sea levels at various geographical locations.
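The object being tested is the empirical Kendall rank correlation matrix. A minimal sketch of how it is formed from an $(n, d)$ sample follows; this uses tau-a, assuming no ties (as for continuous distributions), and the helper names are hypothetical.

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall's tau-a between two samples (assumes no ties): the
    normalized excess of concordant over discordant pairs."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += 1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
    return s / (n * (n - 1) / 2)

def kendall_matrix(X):
    """Empirical Kendall rank correlation matrix of an (n, d) sample;
    entry (j, k) is tau between columns j and k."""
    d = X.shape[1]
    T = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            T[j, k] = T[k, j] = kendall_tau(X[:, j], X[:, k])
    return T
```

A structural hypothesis of the kind tested in the paper asserts, e.g., that blocks of off-diagonal entries of this matrix are equal.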

We extract a core principle underlying seemingly different fundamental distributed settings, showing that sparsity awareness may induce faster algorithms for problems in these settings. To leverage this, we establish a new framework by developing an intermediate auxiliary model weak enough to be simulated in the CONGEST model given low mixing time, as well as in the recently introduced HYBRID model. We prove that despite imposing harsh restrictions, this artificial model allows balancing massive data transfers with high bandwidth utilization. We exemplify the power of our methods by deriving shortest-paths algorithms improving upon the state-of-the-art. Specifically, we show the following for graphs of $n$ nodes: A $(3+\epsilon)$ approximation for weighted APSP in $(n/\delta)\tau_{mix}\cdot 2^{O(\sqrt{\log n})}$ rounds in the CONGEST model, where $\delta$ is the minimum degree of the graph and $\tau_{mix}$ is its mixing time. For graphs with $\delta=\tau_{mix}\cdot 2^{\omega(\sqrt{\log n})}$, this takes $o(n)$ rounds, despite the $\Omega(n)$ lower bound for general graphs [Nanongkai, STOC'14]. An $(n^{7/6}/m^{1/2}+n^2/m)\cdot\tau_{mix}\cdot 2^{O(\sqrt{\log n})}$-round exact SSSP algorithm in the CONGEST model, for graphs with $m$ edges and a mixing time of $\tau_{mix}$. This improves upon the algorithm of [Chechik and Mukhtar, PODC'20] for significant ranges of values of $m$ and $\tau_{mix}$. A CONGESTED CLIQUE simulation in the CONGEST model improving upon the state-of-the-art simulation of [Ghaffari, Kuhn, and Su, PODC'17] by a factor proportional to the average degree in the graph. An $\tilde O(n^{5/17}/\epsilon^9)$-round algorithm for a $(1+\epsilon)$ approximation for SSSP in the HYBRID model. The only previous $o(n^{1/3})$-round algorithm for distance approximations in this model is for a much larger factor [Augustine, Hinnenthal, Kuhn, Scheideler, Schneider, SODA'20].

We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $\ell_1/\ell_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimations. Moreover, the joint estimator is able to recover non-identifiable DAGs, by estimating them together with some identifiable DAGs. Lastly, our analysis also shows the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer is the same as the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.

Graph convolution is the core of most Graph Neural Networks (GNNs) and usually approximated by message passing between direct (one-hop) neighbors. In this work, we remove the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: Graph diffusion convolution (GDC). GDC leverages generalized graph diffusion, examples of which are the heat kernel and personalized PageRank. It alleviates the problem of noisy and often arbitrarily defined edges in real graphs. We show that GDC is closely related to spectral-based models and thus combines the strengths of both spatial (message passing) and spectral methods. We demonstrate that replacing message passing with graph diffusion convolution consistently leads to significant performance improvements across a wide range of models on both supervised and unsupervised tasks and a variety of datasets. Furthermore, GDC is not limited to GNNs but can trivially be combined with any graph-based model or algorithm (e.g. spectral clustering) without requiring any changes to the latter or affecting its computational complexity. Our implementation is available online.
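One of the diffusions named above, personalized PageRank, has a closed form that is easy to sketch: $S = \alpha\,(I - (1-\alpha)T)^{-1}$ with a stochastic transition matrix $T$, followed by sparsification of small entries. The dense sketch below (hypothetical function name, column-stochastic $T = AD^{-1}$, assumed threshold-based sparsification) illustrates the construction; it is not the paper's implementation, which would avoid the explicit inverse on large graphs.

```python
import numpy as np

def ppr_diffusion(A, alpha=0.15, eps=1e-4):
    """Dense personalized-PageRank diffusion matrix
    S = alpha * (I - (1 - alpha) * T)^(-1), T = A D^{-1} column-stochastic;
    entries below eps are truncated to zero (sparsification)."""
    deg = A.sum(axis=0)
    T = A / deg                      # divide column j by deg(j): T = A D^{-1}
    n = A.shape[0]
    S = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * T)
    S[S < eps] = 0.0                 # keep only significant diffusion weights
    return S
```

Since $T$ is column-stochastic, each column of $S$ sums to one before truncation, so $S$ acts like a soft, multi-hop adjacency matrix that can replace one-hop message passing.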

We propose a scalable Gromov-Wasserstein learning (S-GWL) method and establish a novel and theoretically-supported paradigm for large-scale graph analysis. The proposed method is based on the fact that Gromov-Wasserstein discrepancy is a pseudometric on graphs. Given two graphs, the optimal transport associated with their Gromov-Wasserstein discrepancy provides the correspondence between their nodes and achieves graph matching. When one of the graphs has isolated but self-connected nodes (i.e., a disconnected graph), the optimal transport indicates the clustering structure of the other graph and achieves graph partitioning. Using this concept, we extend our method to multi-graph partitioning and matching by learning a Gromov-Wasserstein barycenter graph for multiple observed graphs; the barycenter graph plays the role of the disconnected graph, and since it is learned, so is the clustering. Our method combines a recursive $K$-partition mechanism with a regularized proximal gradient algorithm, whose time complexity is $\mathcal{O}(K(E+V)\log_K V)$ for graphs with $V$ nodes and $E$ edges. To our knowledge, our method is the first attempt to make Gromov-Wasserstein discrepancy applicable to large-scale graph analysis and to unify graph partitioning and matching into the same framework. It outperforms state-of-the-art graph partitioning and matching methods, achieving a trade-off between accuracy and efficiency.

Graph neural networks (GNNs) are a popular class of machine learning models whose major advantage is their ability to incorporate a sparse and discrete dependency structure between data points. Unfortunately, GNNs can only be used when such a graph-structure is available. In practice, however, real-world graphs are often noisy and incomplete or might not be available at all. With this work, we propose to jointly learn the graph structure and the parameters of graph convolutional networks (GCNs) by approximately solving a bilevel program that learns a discrete probability distribution on the edges of the graph. This allows one to apply GCNs not only in scenarios where the given graph is incomplete or corrupted but also in those where a graph is not available. We conduct a series of experiments that analyze the behavior of the proposed method and demonstrate that it outperforms related methods by a significant margin.

Random walks are at the heart of many existing network embedding methods. However, such algorithms have many limitations that arise from the use of random walks, e.g., the features resulting from these methods are unable to transfer to new nodes and graphs as they are tied to vertex identity. In this work, we introduce the Role2Vec framework which uses the flexible notion of attributed random walks, and serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks. Our proposed framework enables these methods to be more widely applicable for both transductive and inductive learning as well as for use on graphs with attributes (if available). This is achieved by learning functions that generalize to new nodes and graphs. We show that our proposed framework is effective with an average AUC improvement of 16.55% while requiring on average 853x less space than existing methods on a variety of graphs.
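The key mechanism, an attributed random walk, is easy to sketch: record the attribute-derived type of each visited node rather than its identity, so walks from different nodes (or graphs) become comparable. The sketch below is a simplification with hypothetical names; Role2Vec itself also learns the node-to-type mapping.

```python
import random

def attributed_random_walk(adj, node_type, start, length, rng=random):
    """One attributed random walk: instead of vertex identities, the
    walk records node_type(v) for each visited node v, so the resulting
    features transfer to unseen nodes and graphs.
    adj: {u: [neighbors]}; node_type: node id -> type (hypothetical)."""
    walk = [node_type(start)]
    cur = start
    for _ in range(length):
        nbrs = adj.get(cur, [])
        if not nbrs:
            break  # dead end: stop the walk early
        cur = rng.choice(nbrs)
        walk.append(node_type(cur))
    return walk
```

Feeding such type sequences (instead of id sequences) into a skip-gram model is what makes DeepWalk-style methods inductive.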

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is strongly convex and smooth, strongly convex, smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors) with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
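The centralized building block, Nesterov's accelerated gradient method for a smooth and strongly convex objective, can be sketched as follows. This is the textbook constant-momentum variant applied to a generic gradient oracle, not the paper's distributed dual formulation; names and step choices are illustrative.

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, iters):
    """Nesterov's accelerated gradient method for an L-smooth,
    mu-strongly convex function, with the standard constant momentum
    beta = (sqrt(kappa) - 1) / (sqrt(kappa) + 1), kappa = L / mu."""
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x, y = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = y - grad(y) / L          # gradient step at extrapolated point
        y = x_next + beta * (x_next - x)  # momentum / extrapolation step
        x = x_next
    return x
```

In the distributed setting of the abstract, the same iteration is run on the dual of the affinely constrained problem, with each gradient evaluation realized by local computations and exchanges over the network.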
