
Eigenvector centrality is one of the most prominent centrality measures in graph theory. In this paper we consider the problem of calculating the eigenvector centrality of a graph partitioned into components, and how this partitioning can be exploited. Two cases are considered: first, where a single component of the graph has the dominant eigenvalue; second, where at least two components share the dominant eigenvalue of the graph. In the first case we implement the component-based method and compare it to the usual approach (the power method) for calculating eigenvector centrality, while in the second case, with shared dominant eigenvalues, we present theoretical and numerical results.

Keywords: eigenvector centrality, power iteration, graph, strongly connected component.
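For concreteness, here is a minimal power-iteration sketch (NumPy assumed; this is not the paper's implementation). The block-diagonal example hints at why the partitioning helps: the same iteration could be run on each component separately and the results merged.

```python
# A minimal power-method sketch for eigenvector centrality; not the paper's code.
import numpy as np

def eigenvector_centrality(A, tol=1e-10, max_iter=10_000):
    """Power method: repeatedly apply A and renormalize."""
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])   # uniform start vector
    for _ in range(max_iter):
        y = A @ x
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - x) < tol:             # converged to the dominant eigenvector
            return y
        x = y
    return x

# Block-diagonal adjacency: a triangle (dominant eigenvalue 2) and a single
# edge (eigenvalue 1). The centrality concentrates on the dominant component.
K3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
K2 = np.array([[0, 1], [1, 0]], dtype=float)
A = np.block([[K3, np.zeros((3, 2))], [np.zeros((2, 3)), K2]])
print(eigenvector_centrality(A))
```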

Related Content

Ego-pose estimation and dynamic object tracking are two key issues in an autonomous driving system. Two assumptions are often made for them: the static-world assumption of simultaneous localization and mapping (SLAM) and the exact ego-pose assumption of object tracking, respectively. However, these assumptions are difficult to uphold in highly dynamic road scenarios, where SLAM and object tracking become correlated and mutually beneficial. In this paper, DL-SLOT, a dynamic LiDAR SLAM and object tracking method, is proposed. This method integrates the state estimation of the ego vehicle and of the static and dynamic objects in the environment into a unified optimization framework, realizing SLAM and object tracking (SLOT) simultaneously. First, we perform object detection to remove all points that belong to potential dynamic objects. Then, LiDAR odometry is conducted using the filtered point cloud. At the same time, detected objects are associated with the historical object trajectories based on the time-series information in a sliding window. The states of the static and dynamic objects and of the ego vehicle in the sliding window are integrated into a unified local optimization framework. We perform SLAM and object tracking simultaneously in this framework, which significantly improves the robustness and accuracy of SLAM in highly dynamic road scenarios as well as the accuracy of the objects' state estimation. Experiments on public datasets show that our method achieves better accuracy than A-LOAM.
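To make the "unified optimization" idea concrete, here is a deliberately tiny 1-D toy, not the paper's SE(3) LiDAR formulation: ego poses and a moving object's positions are estimated jointly from odometry and relative observations in a single least-squares problem. All numbers and the constant-velocity prior are illustrative assumptions.

```python
# Toy joint ego + object estimation with SciPy; illustrative values only.
import numpy as np
from scipy.optimize import least_squares

odom = np.array([1.0, 1.1, 0.9])           # noisy ego displacement per step
rel_obs = np.array([5.0, 5.6, 6.1, 6.4])   # noisy object position relative to ego

def residuals(z):
    ego, obj = z[:4], z[4:]                        # 4 ego poses, 4 object positions
    r_anchor = [ego[0]]                            # gauge: fix the first ego pose at 0
    r_odom = (ego[1:] - ego[:-1]) - odom           # odometry factors
    r_obs = (obj - ego) - rel_obs                  # object-observation factors
    r_motion = obj[2:] - 2 * obj[1:-1] + obj[:-2]  # constant-velocity prior on the object
    return np.concatenate([r_anchor, r_odom, r_obs, r_motion])

sol = least_squares(residuals, np.zeros(8)).x
print("ego poses:", sol[:4])
print("object track:", sol[4:])
```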

We propose SDFDiff, a novel approach for image-based shape optimization using differentiable rendering of 3D shapes represented by signed distance functions (SDFs). Compared to other representations, SDFs have the advantage that they can represent shapes with arbitrary topology, and that they guarantee watertight surfaces. We apply our approach to the problem of multi-view 3D reconstruction, where we achieve high reconstruction quality and can capture complex topology of 3D objects. In addition, we employ a multi-resolution strategy to obtain a robust optimization algorithm. We further demonstrate that our SDF-based differentiable renderer can be integrated with deep learning models, which opens up options for learning approaches on 3D objects without 3D supervision. In particular, we apply our method to single-view 3D reconstruction and achieve state-of-the-art results.
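As a sketch of how shapes represented by SDFs are rendered, the toy below sphere-traces a simple analytic SDF (forward pass only; the paper's differentiable, grid-based renderer is more involved). The union of two spheres also illustrates how SDFs compose shapes of varying topology.

```python
# Sphere tracing a simple analytic SDF; forward rendering only.
import numpy as np

def sdf(p):
    d1 = np.linalg.norm(p - np.array([0.0, 0.0, 3.0])) - 1.0   # sphere 1
    d2 = np.linalg.norm(p - np.array([1.2, 0.0, 3.5])) - 0.7   # sphere 2
    return min(d1, d2)                    # union = pointwise minimum

def sphere_trace(origin, direction, max_steps=64, eps=1e-4):
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)   # distance to the nearest surface
        if d < eps:
            return t                      # hit: reached the zero level set
        t += d                            # safe step: cannot overshoot the surface
    return None                           # ray missed the shape

print(sphere_trace(np.zeros(3), np.array([0.0, 0.0, 1.0])))   # ~2.0
```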

This paper studies the problem of matching two complete graphs with edge weights correlated through latent geometries, extending a recent line of research on random graph matching with independent edge weights to geometric models. Specifically, given a random permutation $\pi^*$ on $[n]$ and $n$ iid pairs of correlated Gaussian vectors $\{X_{\pi^*(i)}, Y_i\}$ in $\mathbb{R}^d$ with noise parameter $\sigma$, the edge weights are given by $A_{ij}=\kappa(X_i,X_j)$ and $B_{ij}=\kappa(Y_i,Y_j)$ for some link function $\kappa$. The goal is to recover the hidden vertex correspondence $\pi^*$ based on the observation of $A$ and $B$. We focus on the dot-product model with $\kappa(x,y)=\langle x, y \rangle$ and Euclidean distance model with $\kappa(x,y)=\|x-y\|^2$, in the low-dimensional regime of $d=o(\log n)$ wherein the underlying geometric structures are most evident. We derive an approximate maximum likelihood estimator, which provably achieves, with high probability, perfect recovery of $\pi^*$ when $\sigma=o(n^{-2/d})$ and almost perfect recovery with a vanishing fraction of errors when $\sigma=o(n^{-1/d})$. Furthermore, these conditions are shown to be information-theoretically optimal even when the latent coordinates $\{X_i\}$ and $\{Y_i\}$ are observed, complementing the recent results of [DCK19] and [KNW22] in geometric models of the planted bipartite matching problem. As a side discovery, we show that the celebrated spectral algorithm of [Ume88] emerges as a further approximation to the maximum likelihood in the geometric model.
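For intuition, here is a hedged sketch of the classical Umeyama-style spectral matching referenced above ([Ume88]) on the dot-product model, rather than the paper's approximate MLE; matching the absolute top-$d$ eigenvector embeddings with a linear assignment recovers $\pi^*$ when the noise is small. Problem sizes are made up for illustration.

```python
# Spectral matching on synthetic dot-product weights; hypothetical toy sizes.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n, d, sigma = 50, 2, 1e-3
X = rng.standard_normal((n, d))
pi = rng.permutation(n)
Y = X[pi] + sigma * rng.standard_normal((n, d))   # Y_i ~ X_{pi(i)} + noise

A, B = X @ X.T, Y @ Y.T                  # dot-product edge weights
_, UA = np.linalg.eigh(A)
_, UB = np.linalg.eigh(B)
# Absolute values of the top-d eigenvectors remove the sign ambiguity.
S = np.abs(UB[:, -d:]) @ np.abs(UA[:, -d:]).T
row, col = linear_sum_assignment(-S)     # maximize total similarity
print("fraction of pi recovered:", np.mean(col == pi))
```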

Graph signal processing uses the graph eigenvector basis to analyze signals. However, these graph eigenvectors are typically linearly ordered (by total variation), which may not be reasonable for many graph structures. Structure-based similarity metrics have been proposed in the literature that better capture the geometry of graph eigenvectors. On the other hand, there has been work that attempts to generalize the concept of duality to graph signal processing. The (signal processing) dual graph captures the relationship between graph frequencies and is typically obtained by inverting the graph Fourier transform operation. In this work, we investigate the connections between these two concepts. We propose a dualness measure of two graphs, which quantifies how close the graphs are to being (signal processing) duals of each other. We show that this definition satisfies some desirable properties and develop an algorithm based on coordinate descent and perfect matching to compute an approximation to the dualness. By computing the dualness measure, we observe that for structured graphs the similarity-metric-based techniques give a better dual graph (and vice versa for Erdős-Rényi graphs), suggesting a potentially novel approach unifying the two methods.
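The following toy is one plausible reading of such a dualness score, offered as a sketch only (the paper's measure and its coordinate-descent algorithm are more refined): if $G_2$ is an exact signal-processing dual of $G_1$, then $F_2 F_1$ is a signed permutation matrix, so a perfect matching on $|F_2 F_1|$ scores 1.

```python
# Hypothetical toy dualness score; the helpers and the score itself are our
# illustrative reading, not the paper's definition.
import numpy as np
from scipy.optimize import linear_sum_assignment

def gft(A):
    _, V = np.linalg.eigh(A)          # Fourier basis: eigenvectors of the shift A
    return V.T

def dualness(A1, A2):
    M = np.abs(gft(A2) @ gft(A1))     # a signed permutation matrix for exact duals
    row, col = linear_sum_assignment(-M)   # perfect matching of frequencies
    return M[row, col].mean()         # in [0, 1]; equals 1 for exact duals

A1 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)  # path graph
lam, V = np.linalg.eigh(A1)
A2 = V.T @ np.diag(lam) @ V           # a shift operator whose GFT inverts that of A1
print(dualness(A1, A2), dualness(A1, A1))   # ~1.0 versus a smaller value
```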

Recent success in cooperative multi-agent reinforcement learning (MARL) relies on centralized training and policy sharing. Centralized training eliminates the issue of non-stationarity in MARL yet induces large communication costs, and policy sharing is empirically crucial to efficient learning in certain tasks yet lacks theoretical justification. In this paper, we formally characterize a subclass of cooperative Markov games where agents exhibit a certain form of homogeneity such that policy sharing provably incurs no suboptimality. This enables us to develop the first consensus-based decentralized actor-critic method where the consensus update is applied to both the actors and the critics while ensuring convergence. We also develop practical algorithms based on our decentralized actor-critic method to reduce the communication cost during training, while still yielding policies comparable with centralized training.
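A minimal sketch of the consensus step such methods apply to actor and critic parameters (the mixing matrix and sizes below are made up): each agent mixes its parameters with its neighbors', then takes a local gradient step. Gradients are zeroed here purely to isolate the consensus part.

```python
# Consensus parameter averaging over a 3-agent network; illustrative only.
import numpy as np

W = np.array([[0.50, 0.25, 0.25],      # doubly stochastic mixing matrix;
              [0.25, 0.50, 0.25],      # row i holds agent i's neighbor weights
              [0.25, 0.25, 0.50]])
theta = np.random.default_rng(0).standard_normal((3, 4))   # 3 agents, 4 params each

def consensus_step(theta, grads, lr=0.1):
    return W @ theta - lr * grads      # neighbor averaging + local update

for _ in range(100):
    grads = np.zeros_like(theta)       # stand-in for local actor/critic gradients
    theta = consensus_step(theta, grads)
print(theta.std(axis=0))               # ~0: agents agree on shared parameters
```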

The semi-random graph process is a single-player game in which the player is initially presented an empty graph on $n$ vertices. In each round, a vertex $u$ is presented to the player independently and uniformly at random. The player then adaptively selects a vertex $v$ and adds the edge $uv$ to the graph. For a fixed monotone graph property, the objective of the player is to force the graph to satisfy this property with high probability in as few rounds as possible. We focus on the problem of constructing a perfect matching in as few rounds as possible. In particular, we present an adaptive strategy for the player which achieves a perfect matching in $\beta n$ rounds, where the value of $\beta < 1.206$ is derived from the solution of a system of differential equations. This improves upon the previously best known upper bound of $(1+2/e+o(1)) \, n < 1.736 \, n$ rounds. We also improve the previously best lower bound of $(\ln 2 + o(1)) \, n > 0.693 \, n$ and show that the player cannot achieve the desired property in fewer than $\alpha n$ rounds, where the value of $\alpha > 0.932$ is derived from the solution of another system of differential equations. As a result, the gap between the upper and lower bounds is narrowed by roughly a factor of four.
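The simulation below shows the mechanics of the game under a naive strategy of our own devising, not the paper's: whenever the offered vertex $u$ is unmatched, pair it with another unmatched vertex. This takes on the order of $n \log n$ rounds, far above the $\beta n$ bound, which is what makes the paper's strategy interesting.

```python
# Semi-random process with a naive matching strategy; for illustration only.
import random

def rounds_to_perfect_matching(n, rng=random.Random(0)):
    unmatched = set(range(n))          # n must be even
    rounds = 0
    while unmatched:
        rounds += 1
        u = rng.randrange(n)           # vertex offered uniformly at random
        if u in unmatched:
            v = rng.choice([w for w in unmatched if w != u])
            unmatched -= {u, v}        # player adds edge uv to the matching
    return rounds

print(rounds_to_perfect_matching(1000))   # compare with beta*n ~ 1206
```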

A Knowledge Graph (KG) is a flexible structure that is able to describe the complex relationships between data entities. Currently, most KG embedding models are trained with negative sampling: the model aims to maximize some similarity measure between connected entities in the KG, while minimizing the similarity of sampled disconnected entities. Negative sampling reduces the time complexity of model learning by considering only a subset of negative instances, but it may fail to deliver stable model performance due to the uncertainty in the sampling procedure. To avoid this deficiency, we propose a new framework for KG embedding, Efficient Non-Sampling Knowledge Graph Embedding (NS-KGE). The basic idea is to consider all of the negative instances in the KG for model learning, thus avoiding negative sampling. The framework can be applied to square-loss-based knowledge graph embedding models or to models whose loss can be converted to a square loss. A natural side effect of this non-sampling strategy is the increased computational complexity of model learning. To solve this problem, we leverage mathematical derivations to reduce the complexity of the non-sampling loss function, which ultimately provides both better efficiency and better accuracy in KG embedding compared with existing models. Experiments on benchmark datasets show that our NS-KGE framework achieves better efficiency and accuracy than traditional negative-sampling-based models, and that the framework is applicable to a large class of knowledge graph embedding models.
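The flavor of derivation that makes non-sampling tractable can be shown on a bilinear (DistMult-style) score $s(h,r,t) = (h \circ r)^\top t$, which is our illustrative stand-in rather than the paper's exact formulation: the sum of squared scores over all candidate tails collapses to a quadratic form in a cached $d \times d$ Gram matrix, so no loop over negatives is needed.

```python
# Non-sampling trick: sum of squared bilinear scores over ALL tails in O(d^2).
import numpy as np

rng = np.random.default_rng(0)
n_ent, dim = 10_000, 64
T = rng.standard_normal((n_ent, dim))   # all entity (tail) embeddings
G = T.T @ T                             # cached dim x dim Gram matrix, built once

q = rng.standard_normal(dim)            # q = h * r for one (head, relation) pair
naive = np.sum((T @ q) ** 2)            # over all tails, O(n_ent * dim)
fast = q @ G @ q                        # identical value, O(dim^2)
print(np.allclose(naive, fast))         # True
```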

Graph neural networks have become one of the most important techniques for solving machine learning problems on graph-structured data. Recent work on vertex classification proposed deep and distributed learning models to achieve high performance and scalability. However, we find that the feature vectors of benchmark datasets are already quite informative for the classification task, and the graph structure only provides a means to denoise the data. In this paper, we develop a theoretical framework based on graph signal processing for analyzing graph neural networks. Our results indicate that graph neural networks only perform low-pass filtering on feature vectors and do not have the non-linear manifold learning property. We further investigate their resilience to feature noise and offer some insights on the design of GCN-based graph neural networks.
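The low-pass claim can be seen numerically: the renormalized GCN propagation matrix applied repeatedly to a noisy constant signal attenuates the high-frequency (noisy) component. The small graph below is a hypothetical example.

```python
# GCN propagation as a low-pass filter on a toy graph.
import numpy as np

A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
A_hat = A + np.eye(4)                              # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
P = d_inv_sqrt @ A_hat @ d_inv_sqrt                # renormalized propagation matrix

x = np.ones(4) + 0.5 * np.random.default_rng(0).standard_normal(4)   # signal + noise
for k in [0, 1, 2, 4, 8]:
    print(k, np.round(np.linalg.matrix_power(P, k) @ x, 3))   # noise is attenuated
```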

Graph Neural Networks (GNNs) for representation learning of graphs broadly follow a neighborhood aggregation framework, where the representation vector of a node is computed by recursively aggregating and transforming feature vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs in capturing different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
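The provably most expressive architecture referred to above uses the update $h_v \leftarrow \mathrm{MLP}\big((1+\epsilon)\, h_v + \sum_{u \in \mathcal{N}(v)} h_u\big)$; below is a minimal sketch of one such layer on a tiny graph, with random weights purely to show the aggregation rule.

```python
# One GIN-style layer: injective sum aggregation followed by an MLP.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)   # triangle graph
H = rng.standard_normal((3, 4))                          # node features
eps = 0.1
W1, W2 = rng.standard_normal((4, 8)), rng.standard_normal((8, 4))

def gin_layer(A, H):
    agg = (1 + eps) * H + A @ H               # (1 + eps) * self + sum of neighbors
    return np.maximum(agg @ W1, 0.0) @ W2     # two-layer MLP with ReLU

print(gin_layer(A, H).shape)                  # (3, 4): updated node representations
```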

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
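The local smoothing behind DRS replaces $f$ with $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ for Gaussian $Z$, whose gradient $\nabla f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)\, Z] / \gamma$ exists even where $f$ is non-smooth. The sketch below estimates it by Monte Carlo on a stand-in objective; the distributed machinery of DRS is omitted.

```python
# Monte Carlo gradient of the Gaussian smoothing of a non-smooth function.
import numpy as np

f = lambda x: np.abs(x).sum()            # non-smooth convex objective (L1 norm)

def smoothed_grad(x, gamma=0.1, m=20_000, rng=np.random.default_rng(0)):
    Z = rng.standard_normal((m, x.size))
    vals = np.abs(x + gamma * Z).sum(axis=1)   # f at the perturbed points, row-wise
    # Score-function estimator with f(x) as a control variate to cut variance.
    return ((vals - f(x))[:, None] * Z).mean(axis=0) / gamma

x = np.array([1.0, -2.0, 0.5])
print(smoothed_grad(x))                  # ~ sign(x) = [1, -1, 1] away from kinks
```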
