
Embedding graphs in a geographical or latent space, i.e., inferring locations for vertices in Euclidean space or on a smooth submanifold, is a common task in network analysis, statistical inference, and graph visualization. We consider the classic model of random geometric graphs where $n$ points are scattered uniformly in a square of area $n$, and two points have an edge between them if and only if their Euclidean distance is less than $r$. The reconstruction problem then consists of inferring the vertex positions, up to symmetry, given only the adjacency matrix of the resulting graph. We give an algorithm that, if $r=n^\alpha$ for $\alpha > 0$, with high probability reconstructs the vertex positions with a maximum error of $O(n^\beta)$ where $\beta=1/2-(4/3)\alpha$, until $\alpha \ge 3/8$ where $\beta=0$ and the error becomes $O(\sqrt{\log n})$. This improves over earlier results, which were unable to reconstruct with error less than $r$. Our method estimates Euclidean distances using a hybrid of graph distances and short-range estimates based on the number of common neighbors. We sketch proofs that our results also apply on the surface of a sphere, and (with somewhat different exponents) in any fixed dimension.
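
As a concrete illustration of the short-range estimate, the following sketch (hypothetical code, not the authors' implementation) inverts the expected common-neighbor count: at unit density, two vertices at distance $d < 2r$ share, in expectation, a number of neighbors equal to the area of the lens where their radius-$r$ disks overlap.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical illustration (not the authors' implementation) of the
# short-range estimate: at unit density, two vertices at distance d < 2r
# share, in expectation, a number of neighbors equal to the area of the
# lens where their radius-r disks overlap, which we invert numerically.

def lens_area(d, r):
    """Area of the intersection of two radius-r disks with centers d apart."""
    if d >= 2 * r:
        return 0.0
    return 2 * r**2 * np.arccos(d / (2 * r)) - (d / 2) * np.sqrt(4 * r**2 - d**2)

def estimate_distance(common, r):
    """Invert the lens-area formula; valid as a short-range estimate only."""
    if common <= 0:
        return 2 * r                  # no overlap observed: we only know d >= 2r
    if common >= lens_area(0.0, r):
        return 0.0
    return brentq(lambda d: lens_area(d, r) - common, 0.0, 2 * r)

# Tiny experiment: n points at density 1, connection radius r (boundary
# effects bias the estimate for vertices near the border of the square).
rng = np.random.default_rng(0)
n, r = 2000, 10.0
pts = rng.uniform(0, np.sqrt(n), size=(n, 2))
adj = np.linalg.norm(pts[:, None] - pts[None, :], axis=2) < r
np.fill_diagonal(adj, False)

i = 0
j = int(np.flatnonzero(adj[i])[0])               # some neighbor of vertex 0
common = int(np.count_nonzero(adj[i] & adj[j]))
print("true distance:", np.linalg.norm(pts[i] - pts[j]))
print("estimated    :", estimate_distance(common, r))
```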

Related Content

We study the polynomial approximation of symmetric multivariate functions. Specifically, we consider $f(x_1, \dots, x_N)$, where $x_i \in \mathbb{R}^d$, and $f$ is invariant under permutations of its $N$ arguments. We demonstrate how these symmetries can be exploited to improve the cost versus error ratio in a polynomial approximation of the function $f$, and in particular study the dependence of that ratio on $d, N$ and the polynomial degree.
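
The sketch below shows one standard way such symmetries can be exploited (an illustration under that assumption, not necessarily the paper's construction): for $d = 1$, fit the approximation in a basis of power sums $p_k(x) = \sum_i x_i^k$, which spans all symmetric polynomials while using far fewer coefficients than the full monomial basis of the same degree.

```python
import numpy as np
from math import comb
from itertools import combinations_with_replacement

# Hedged sketch (one standard construction, not necessarily the paper's):
# for scalar inputs (d = 1), fit a permutation-invariant target in a basis
# of power sums p_k(x) = sum_i x_i^k. Every symmetric polynomial is a
# polynomial in p_1, ..., p_N, so the basis loses nothing while using far
# fewer coefficients than the raw monomial basis of the same degree.

N, degree = 6, 3
rng = np.random.default_rng(1)

def f(X):                                   # example symmetric target
    return np.prod(1.0 + 0.3 * X, axis=1)

def sym_features(X, degree):
    P = np.stack([(X**k).sum(axis=1) for k in range(1, degree + 1)], axis=1)
    feats = [np.ones(len(X))]
    for m in range(1, degree + 1):          # number of power-sum factors
        for combo in combinations_with_replacement(range(1, degree + 1), m):
            if sum(combo) <= degree:        # p_k counts as degree k
                feats.append(np.prod([P[:, k - 1] for k in combo], axis=0))
    return np.stack(feats, axis=1)

Xtr = rng.uniform(-1, 1, size=(4000, N))
Xte = rng.uniform(-1, 1, size=(1000, N))
A = sym_features(Xtr, degree)
coef, *_ = np.linalg.lstsq(A, f(Xtr), rcond=None)
err = np.abs(sym_features(Xte, degree) @ coef - f(Xte)).max()
print(f"symmetric basis: {A.shape[1]} functions "
      f"(vs {comb(N + degree, degree)} raw monomials), max test error {err:.2e}")
```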

This paper introduces kdiff, a novel kernel-based measure for estimating distances between instances of time series, random fields, and other forms of structured data. The measure is based on the idea of matching distributions that overlap over only a portion of their support. kdiff is inspired by MPdist, which was previously proposed for such datasets and is constructed from Euclidean metrics; kdiff instead uses non-linear kernel distances. In addition, kdiff accounts for both self and cross similarities across the instances and is defined using a lower quantile of the distance distribution. Comparing the cross similarity to the self similarity yields a measure that is more robust to noise and partial occlusions of the relevant signals. kdiff generalizes the well-known kernel-based Maximum Mean Discrepancy (MMD) distance estimated over the embeddings. We provide theoretical results on separability conditions when kdiff is used as a distance measure for clustering and classification problems in which the embedding distributions can be modeled as two-component mixtures. Applications are demonstrated for clustering synthetic and real-life time series and image data, and the performance of kdiff is compared to competing distance measures for clustering.
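
The sketch below shows one plausible instantiation of a kdiff-style score (the exact definition in the paper may differ in its details): embed each series as its set of sliding windows, compute Gaussian-kernel distances between windows, and compare a lower quantile of the cross-distance distribution against the same quantile of the self-distances.

```python
import numpy as np

# Illustrative sketch only: one plausible instantiation of a kdiff-style
# score, not the paper's exact definition. Each series is embedded as its
# sliding windows; distances are kernel-induced; the score compares a lower
# quantile of cross distances with the same quantile of self distances.

def windows(x, w):
    return np.lib.stride_tricks.sliding_window_view(x, w)

def kernel_dist(A, B, gamma):
    """Pairwise distances ||phi(a) - phi(b)|| in RBF-kernel feature space."""
    sq = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.sqrt(np.maximum(2.0 - 2.0 * np.exp(-gamma * sq), 0.0))

def kdiff_score(x, y, w=16, q=0.1, gamma=0.1):
    Wx, Wy = windows(x, w), windows(y, w)
    cross = np.quantile(kernel_dist(Wx, Wy, gamma), q)
    self_x = np.quantile(kernel_dist(Wx, Wx, gamma), q)
    self_y = np.quantile(kernel_dist(Wy, Wy, gamma), q)
    # A smaller score indicates a shared region, even under partial overlap.
    return cross - 0.5 * (self_x + self_y)

t = np.linspace(0, 8 * np.pi, 400)
a = np.sin(t) + 0.05 * np.random.default_rng(2).standard_normal(400)
b = np.concatenate([np.sin(t[:200]),
                    0.5 * np.random.default_rng(3).standard_normal(200)])
c = np.random.default_rng(4).standard_normal(400)
print("partial match:", kdiff_score(a, b))   # should typically be smaller
print("no match     :", kdiff_score(a, c))
```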

In this work, we establish lower bounds on the size of Clifford circuits that measure a family of commuting Pauli operators. Our bounds depend on the interplay between a pair of graphs: the Tanner graph of the set of measured Pauli operators, and the connectivity graph which represents the qubit connections required to implement the circuit. For local-expander quantum codes, which are promising for low-overhead quantum error correction, we prove that any syndrome extraction circuit implemented with local Clifford gates in a 2D square patch of $N$ qubits has depth at least $\Omega(n/\sqrt{N})$ where $n$ is the code length. Then, we propose two families of quantum circuits saturating this bound. First, we construct 2D local syndrome extraction circuits for quantum LDPC codes with bounded depth using only $O(n^2)$ ancilla qubits. Second, we design a family of 2D local syndrome extraction circuits for hypergraph product codes using $O(n)$ ancilla qubits with depth $O(\sqrt{n})$. Finally, we use circuit noise simulations to compare the performance of a family of hypergraph product codes using this last family of 2D syndrome extraction circuits with a syndrome extraction circuit implemented with fully connected qubits. While there is a threshold of about $10^{-3}$ for a fully connected implementation, we observe no threshold for the 2D local implementation despite simulating error rates as low as $10^{-6}$. This suggests that quantum LDPC codes are impractical with 2D local quantum hardware. We believe that our proof technique is of independent interest and could find other applications. Our bounds on circuit sizes are derived from a lower bound on the amount of correlations between two subsets of qubits of the circuit and an upper bound on the amount of correlations introduced by each circuit gate, which together provide a lower bound on the circuit size.
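
A quick back-of-the-envelope check (constants here are illustrative, not from the paper) of how both proposed circuit families saturate the $\Omega(n/\sqrt{N})$ depth bound, where $N$ counts all qubits (data plus ancilla) in the 2D patch:

```python
import numpy as np

# Back-of-the-envelope illustration (constants are illustrative, not from
# the paper): the lower bound says depth >= Omega(n / sqrt(N)). With N ~ n^2
# ancillas the bound is O(1), matching the bounded-depth family; with N ~ n
# ancillas it is O(sqrt(n)), matching the O(sqrt(n))-depth family.

for n in [100, 400, 1600, 6400]:
    N1 = n**2      # family 1: O(n^2) ancillas -> bound n/sqrt(n^2) = O(1)
    N2 = 2 * n     # family 2: O(n) ancillas   -> bound n/sqrt(2n) = O(sqrt(n))
    print(f"n={n:5d}   N=n^2 bound: {n/np.sqrt(N1):5.2f}   "
          f"N=2n bound: {n/np.sqrt(N2):7.2f}")
```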

A lattice is a partially ordered set in which every pair of elements has a unique least upper bound and a unique greatest lower bound within the set. K\"{o}tter and Kschischang proved that codes in the linear lattice can be used for error and erasure correction in random networks. Codes in the linear lattice have previously been shown to be special cases of codes in modular lattices. Two well-known subclasses of semimodular lattices are geometric and distributive lattices, and most frequently used coding spaces are examples of one or both. We identify the unique criterion that makes a geometric lattice distributive, thus characterizing all finite geometric distributive lattices. Our characterization helps to prove a conjecture regarding the maximum size of a distributive sublattice of a finite geometric lattice and to identify the maximal case. We also calculate the Whitney numbers of the class of geometric distributive lattices. Finally, we present a few other applications of this characterization, deriving certain results regarding linearity and complements in the linear lattice.
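
A minimal sketch of the divide the abstract refers to: the Boolean lattice $2^S$ (subsets under inclusion) is both geometric and distributive, while the diamond $M_3$ is geometric, being the partition lattice of a 3-element set, but fails the distributive law.

```python
from itertools import combinations

# Minimal sketch: brute-force check of the distributive law
# x ^ (y v z) == (x ^ y) v (x ^ z) on two small lattices.

S = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def distributive(elems, meet, join):
    return all(meet(x, join(y, z)) == join(meet(x, y), meet(x, z))
               for x in elems for y in elems for z in elems)

# Boolean lattice 2^S: meet = intersection, join = union.
print("2^S:", distributive(subsets, lambda a, b: a & b, lambda a, b: a | b))

# Diamond M3 encoded directly: 0 = bottom, 1..3 = atoms, 4 = top.
BOT, TOP = 0, 4
def meet3(x, y):
    if x == y: return x
    if x == TOP: return y
    if y == TOP: return x
    return BOT        # two distinct atoms, or an atom with bottom

def join3(x, y):
    if x == y: return x
    if x == BOT: return y
    if y == BOT: return x
    return TOP        # two distinct atoms, or an atom with top

print("M3 :", distributive(range(5), meet3, join3))   # False
```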

By a theorem of Johansson, every triangle-free graph $G$ of maximum degree $\Delta$ has chromatic number at most $(C+o(1))\Delta/\log \Delta$ for some universal constant $C > 0$. Using the entropy compression method, Molloy proved that one can in fact take $C = 1$. Here we show that for every $q \geq (1 + o(1))\Delta/\log \Delta$, the number $c(G,q)$ of proper $q$-colorings of $G$ satisfies $c(G, q) \,\geq\, \left(1 - \frac{1}{q}\right)^m ((1-o(1))q)^n$, where $n = |V(G)|$ and $m = |E(G)|$. Except for the $o(1)$ term, this lower bound is best possible as witnessed by random $\Delta$-regular graphs. When $q = (1 + o(1)) \Delta/\log \Delta$, our result yields the inequality $c(G,q) \,\geq\, \exp\left((1 - o(1)) \frac{\log \Delta}{2} n\right)$, which implies the optimal lower bound on the number of independent sets in $G$ due to Davies, Jenssen, Perkins, and Roberts. An important ingredient in our proof is the counting method that was recently developed by Rosenfeld. As a byproduct, we obtain an alternative proof of Molloy's bound $\chi(G) \leq (1 + o(1))\Delta/\log \Delta$ using Rosenfeld's method in place of entropy compression.
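
A numeric sanity check on a small triangle-free graph, the Petersen graph ($n = 10$, $m = 15$, $\Delta = 3$), comparing the brute-force count $c(G, q)$ against the main term $(1 - 1/q)^m q^n$ of the bound. Note that the $(1 - o(1))^n$ factor is dropped here, so on a small graph the main term can slightly exceed the true count; the statement is asymptotic in $\Delta$.

```python
from itertools import product

# Brute-force count of proper q-colorings of the Petersen graph versus the
# main term (1 - 1/q)^m * q^n of the lower bound. The (1 - o(1))^n factor
# is dropped, so the main term may slightly exceed the true count on a
# small graph -- which illustrates why that factor is genuinely needed.

outer = [(i, (i + 1) % 5) for i in range(5)]           # outer 5-cycle
inner = [(5 + i, 5 + (i + 2) % 5) for i in range(5)]   # inner pentagram
spokes = [(i, i + 5) for i in range(5)]
edges = outer + inner + spokes
n, m, q = 10, len(edges), 3

count = sum(all(col[u] != col[v] for u, v in edges)
            for col in product(range(q), repeat=n))
print("c(G, q)         =", count)                      # 120
print("(1-1/q)^m * q^n =", round((1 - 1/q)**m * q**n, 2))
```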

Learning low-dimensional representations for entities and relations in knowledge graphs via contrastive estimation is a scalable and effective method for inferring connectivity patterns. A crucial aspect of contrastive learning approaches is the choice of corruption distribution that generates hard negative samples, which force the embedding model to learn discriminative representations and to find critical characteristics of the observed data. Earlier methods either employ overly simple corruption distributions (e.g., uniform), yielding easy, uninformative negatives, or sophisticated adversarial distributions with challenging optimization schemes; neither explicitly incorporates known graph structure, resulting in suboptimal negatives. In this paper, we propose Structure Aware Negative Sampling (SANS), an inexpensive negative sampling strategy that exploits the rich graph structure by selecting negative samples from a node's $k$-hop neighborhood. Empirically, we demonstrate that SANS finds high-quality negatives that are highly competitive with state-of-the-art methods, while requiring no additional parameters and no difficult adversarial optimization.
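
A hedged sketch of the core sampling step in the spirit of SANS (function and variable names here are illustrative, not the authors' code): corrupt the tail of a triple $(h, r, t)$ by drawing the replacement from the $k$-hop neighborhood of $h$ instead of uniformly from all entities.

```python
import random
from collections import deque

# Illustrative sketch of structure-aware negative sampling: replace the
# tail of a triple with an entity drawn from the head's k-hop neighborhood,
# falling back to uniform sampling when the neighborhood is too small.

def k_hop_neighborhood(adj, node, k):
    """BFS out to k hops; returns all nodes within distance k of `node`."""
    seen, frontier = {node}, deque([(node, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    seen.discard(node)
    return list(seen)

def sans_negative(adj, entities, h, t, k=2):
    """Sample a negative tail t' != t from h's k-hop neighborhood."""
    pool = [e for e in k_hop_neighborhood(adj, h, k) if e != t]
    if pool:
        return random.choice(pool)
    return random.choice([e for e in entities if e != t])  # uniform fallback

# Toy knowledge graph given as an undirected adjacency structure.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2, 5], 5: [4]}
print(sans_negative(adj, list(adj), h=0, t=1, k=2))
```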

The spatial convolution layer, widely used in Graph Neural Networks (GNNs), aggregates the feature vector of each node with the feature vectors of its neighboring nodes. Because the GNN is not aware of a node's location within the global structure of the graph, the convolution layer maps nodes with similar local structures to similar or identical feature vectors in the continuous feature space. Consequently, a GNN cannot distinguish two graphs whose difference does not lie in their local structures. Moreover, when the nodes are not labeled or attributed, the convolution layers can fail to distinguish even different local structures. In this paper, we propose an effective solution to this problem. The proposed approach leverages a spatial representation of the graph that makes the neural network aware of the differences between nodes as well as their locations in the graph. This spatial representation, equivalent to a point-cloud representation of the graph, is obtained by a graph embedding method. Using the proposed approach, the local feature extractor of the GNN distinguishes similar local structures in different locations of the graph, and the GNN infers the topological structure of the graph from the spatial distribution of the locally extracted feature vectors. Moreover, the spatial representation is used to simplify the graph down-sampling problem. We propose a new graph pooling method and show that it achieves competitive or better results compared with state-of-the-art methods.
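
A minimal sketch of the idea, with a generic spectral embedding standing in for whatever graph embedding the paper uses: give each node coordinates in a low-dimensional space and append them to its features, so that nodes with identical local structure but different global positions no longer look the same to the convolution layer.

```python
import numpy as np

# Minimal sketch (generic spectral embedding as a stand-in for the paper's
# embedding method): append per-node coordinates to the features before
# neighborhood aggregation.

def spectral_positions(A, dim=2):
    """Laplacian-eigenmap coordinates: eigenvectors 2..dim+1 of L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]              # skip the constant eigenvector

# A 6-cycle: every node has an identical 1-hop neighborhood, so plain
# aggregation of constant features gives all nodes the same representation.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

X = np.ones((n, 1))                        # unlabeled nodes: constant features
pos = spectral_positions(A, dim=2)
X_aug = np.hstack([X, pos])                # ...but the positions differ

agg = A @ X_aug                            # one round of neighbor aggregation
print(np.round(agg, 3))                    # rows are now pairwise distinct
```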

In this paper, we propose a new deep-learning-based dense monocular SLAM method. Compared to existing methods, the proposed framework constructs a dense 3D model via sparse-to-dense mapping using learned surface normals. With single-view learned depth estimation as a prior for monocular visual odometry, we obtain both accurate positioning and high-quality depth reconstruction. The depth and normals are predicted by a single network trained in a tightly coupled manner. Experimental results show that our method significantly improves the performance of visual tracking and depth prediction compared with the state of the art in deep monocular dense SLAM.
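
The geometric core of normal-guided sparse-to-dense propagation can be sketched as follows (one common formulation; the paper's learned pipeline is more involved). If the surface is locally planar with unit normal $n_p$ at the 3D point seen in pixel $p$, the depth at a nearby pixel $q$ follows from intersecting $q$'s viewing ray with that plane: $d_q = d_p \, (n_p \cdot K^{-1}\tilde p) / (n_p \cdot K^{-1}\tilde q)$.

```python
import numpy as np

# Illustrative sketch of normal-guided depth propagation under a local
# planarity assumption (one common formulation, not the paper's pipeline).
# The 3D point at pixel p is d_p * K^{-1} p~; intersecting q's ray with the
# plane through that point with normal n_p gives the propagated depth.

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])        # pinhole intrinsics (illustrative)
K_inv = np.linalg.inv(K)

def propagate_depth(p, d_p, n_p, q):
    """Depth at pixel q implied by depth d_p and surface normal n_p at p."""
    ray_p = K_inv @ np.array([p[0], p[1], 1.0])
    ray_q = K_inv @ np.array([q[0], q[1], 1.0])
    return d_p * (n_p @ ray_p) / (n_p @ ray_q)

# Fronto-parallel plane (normal along the optical axis): constant depth.
n = np.array([0.0, 0.0, 1.0])
print(propagate_depth((320, 240), 2.0, n, (400, 300)))   # -> 2.0

# A slanted plane yields different depths across the image.
n = np.array([0.3, 0.0, 1.0]) / np.linalg.norm([0.3, 0.0, 1.0])
print(propagate_depth((320, 240), 2.0, n, (400, 240)))
```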

A fundamental computation for statistical inference and accurate decision-making is to compute the marginal probabilities or most probable states of task-relevant variables. Probabilistic graphical models can efficiently represent the structure of such complex data, but performing these inferences is generally difficult. Message-passing algorithms, such as belief propagation, are a natural way to disseminate evidence amongst correlated variables while exploiting the graph structure, but these algorithms can struggle when the conditional dependency graphs contain loops. Here we use Graph Neural Networks (GNNs) to learn a message-passing algorithm that solves these inference tasks. We first show that the architecture of GNNs is well-matched to inference tasks. We then demonstrate the efficacy of this inference approach by training GNNs on a collection of graphical models and showing that they substantially outperform belief propagation on loopy graphs. Our message-passing algorithms generalize out of the training set to larger graphs and graphs with different structure.
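
The classical baseline being compared against can be sketched concretely: loopy belief propagation on a small pairwise binary MRF. On the 4-cycle below (all potentials are illustrative), BP's marginals deviate from the exact ones because of the loop, which is precisely the regime where a learned message-passing algorithm can do better.

```python
import numpy as np
from itertools import product

# Loopy belief propagation on a 4-cycle pairwise binary MRF, compared with
# exact marginals from brute-force enumeration. Potentials are illustrative.

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, theta = 4, 0.8
psi = np.exp(theta * np.array([[1.0, -1.0], [-1.0, 1.0]]))   # couplings
h = np.array([0.5, -0.3, 0.2, 0.1])
phi = np.exp(np.outer(h, np.array([-1.0, 1.0])))             # local fields

def exact_marginals():
    p = np.zeros((n, 2))
    for x in product([0, 1], repeat=n):
        w = np.prod([phi[i, x[i]] for i in range(n)])
        w *= np.prod([psi[x[u], x[v]] for u, v in edges])
        for i in range(n):
            p[i, x[i]] += w
    return p / p.sum(axis=1, keepdims=True)

def loopy_bp(iters=100):
    dir_edges = edges + [(v, u) for u, v in edges]
    m = {e: np.ones(2) for e in dir_edges}
    for _ in range(iters):
        new = {}
        for u, v in dir_edges:
            inc = phi[u].copy()              # unary potential at u ...
            for w_, u_ in dir_edges:
                if u_ == u and w_ != v:      # ... times incoming messages
                    inc = inc * m[(w_, u)]
            msg = psi.T @ inc                # marginalize over u's state
            new[(u, v)] = msg / msg.sum()
        m = new
    p = phi.copy()
    for u, v in dir_edges:
        p[v] = p[v] * m[(u, v)]
    return p / p.sum(axis=1, keepdims=True)

print("exact   :\n", np.round(exact_marginals(), 4))
print("loopy BP:\n", np.round(loopy_bp(), 4))
```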

Purpose: MR image reconstruction exploits regularization to compensate for missing k-space data. In this work, we propose to learn the probability distribution of MR image patches with neural networks and to use this distribution as prior information constraining images during reconstruction, effectively employing it as regularization. Methods: We use variational autoencoders (VAE) to learn the distribution of MR image patches, modeling the high-dimensional distribution by a lower-dimensional latent parameter model in a non-linear fashion. The proposed algorithm uses the learned prior in a Maximum-A-Posteriori estimation formulation. We evaluate the proposed reconstruction method with T1-weighted images and also apply our method to images with white matter lesions. Results: Visual evaluation of the samples showed that the VAE algorithm can approximate the distribution of MR patches well. The proposed reconstruction algorithm using the VAE prior produced high-quality reconstructions. The algorithm achieved normalized RMSE, CNR and CN values of 2.77\%, 0.43, 0.11; 4.29\%, 0.43, 0.11; 6.36\%, 0.47, 0.11; and 10.00\%, 0.42, 0.10 for undersampling ratios of 2, 3, 4 and 5, respectively, outperforming most of the alternative methods. In the experiments on images with white matter lesions, the method faithfully reconstructed the lesions. Conclusion: We introduced a novel method for MR reconstruction that takes a new perspective on regularization by using priors learned by neural networks. Results suggest that the method compares favorably against the other evaluated methods and can also reconstruct lesions. Keywords: Reconstruction, MRI, prior probability, MAP estimation, machine learning, variational inference, deep learning
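
A structural skeleton of such a MAP reconstruction is sketched below (hedged: the paper uses a VAE patch prior and its own optimization scheme; here the prior-gradient term is a simple smoothness stand-in so that only the MAP structure is visible). The objective is $x^* = \arg\min_x \tfrac{1}{2}\|MFx - y\|^2 + \lambda R(x)$, with $F$ the orthonormal 2D FFT and $M$ a k-space undersampling mask.

```python
import numpy as np

# Skeleton of MAP reconstruction with a learned prior. The prior gradient
# below is a smoothness stand-in for the gradient of -log p(x) that a
# trained VAE patch prior would supply; only the structure is shown.

rng = np.random.default_rng(0)
nx = 64
x_true = np.zeros((nx, nx)); x_true[16:48, 16:48] = 1.0   # toy phantom

F = lambda img: np.fft.fft2(img, norm="ortho")
Finv = lambda ksp: np.fft.ifft2(ksp, norm="ortho")

mask = rng.random((nx, nx)) < 0.33        # roughly 3x undersampling
y = mask * F(x_true)                      # measured k-space data

def prior_grad(x):
    # Stand-in prior term: gradient of a quadratic finite-difference penalty.
    gx = np.roll(x, -1, 0) - x
    gy = np.roll(x, -1, 1) - x
    return (np.roll(gx, 1, 0) - gx) + (np.roll(gy, 1, 1) - gy)

x, lam, step = np.real(Finv(y)), 0.2, 0.5  # start from zero-filled recon
for _ in range(200):
    data_grad = np.real(Finv(mask * (F(x) - y)))   # grad of data fidelity
    x = x - step * (data_grad + lam * prior_grad(x))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```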
