
An orthogonal representation of a graph $G$ over a field $\mathbb{F}$ is an assignment of a vector $u_v \in \mathbb{F}^t$ to every vertex $v$ of $G$, such that $\langle u_v,u_v \rangle \neq 0$ for every vertex $v$ and $\langle u_v,u_{v'} \rangle = 0$ whenever $v$ and $v'$ are adjacent in $G$. The locality of an orthogonal representation is the largest dimension of a subspace spanned by the vectors associated with a closed neighborhood in the graph. We introduce a novel graph parameter, called the local orthogonality dimension, defined, for a given graph $G$ and a given field $\mathbb{F}$, as the smallest possible locality of an orthogonal representation of $G$ over $\mathbb{F}$. We investigate the usefulness of topological methods for proving lower bounds on the local orthogonality dimension. We prove that graphs for which topological methods imply a lower bound of $t$ on their chromatic number have local orthogonality dimension at least $\lceil t/2 \rceil +1$ over every field, strengthening a result of Simonyi and Tardos on the local chromatic number. We show that for certain graphs this lower bound is tight, whereas for others, the local orthogonality dimension over the reals is equal to the chromatic number. More generally, we prove that for every complement of a line graph, the local orthogonality dimension over $\mathbb{R}$ coincides with the chromatic number. This strengthens a recent result of Daneshpajouh, Meunier, and Mizrahi, who proved that the local and standard chromatic numbers of these graphs are equal. As another extension of their result, we prove that the local and standard chromatic numbers are equal for some additional graphs from the family of Kneser graphs. We also show an $\mathsf{NP}$-hardness result for the local orthogonality dimension and present an application of this graph parameter to the index coding problem from information theory.
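
To make the definitions concrete, here is a minimal sketch over $\mathbb{R}$ (the parameter is defined over arbitrary fields) that checks the two conditions of an orthogonal representation and computes its locality; the function and the 4-cycle example are illustrative, not from the paper.

```python
import numpy as np

def locality(vectors, adj):
    """Verify that `vectors` forms an orthogonal representation of the graph
    with adjacency dict `adj`, and return its locality: the largest dimension
    of the subspace spanned by the vectors of a closed neighborhood."""
    for v, u in vectors.items():
        assert not np.isclose(u @ u, 0.0), "each <u_v, u_v> must be nonzero"
    for v, nbrs in adj.items():
        for w in nbrs:
            assert np.isclose(vectors[v] @ vectors[w], 0.0), "adjacent vectors must be orthogonal"
    return max(
        np.linalg.matrix_rank(np.array([vectors[u] for u in {v} | set(nbrs)]))
        for v, nbrs in adj.items()
    )

# The 4-cycle 0-1-2-3-0 admits a representation in R^2 with locality 2:
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(locality({0: e1, 1: e2, 2: e1, 3: e2}, adj))  # -> 2
```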

Related Content

Incremental learning (IL) suffers from catastrophic forgetting of old tasks when learning new tasks. This can be addressed by replaying previous tasks' data stored in a memory, which, however, is usually subject to size limits and privacy leakage. Recent studies store only class centroids as prototypes and augment them with Gaussian noise to create synthetic data for replay. However, they cannot effectively avoid class interference near class margins, which leads to forgetting. Moreover, the injected noise distorts the rich structure between real data and prototypes, and can hence even be detrimental to IL. In this paper, we propose YONO, in which You Only Need to replay One condensed prototype per class, and which for the first time can even outperform memory-costly exemplar-replay methods. To this end, we develop a novel prototype learning method that (1) searches for more representative prototypes in high-density regions by an attentional mean-shift algorithm and (2) moves the samples of each class toward their prototype, forming a compact cluster distant from other classes. The class margins are thereby maximized, which effectively reduces the interference that causes future forgetting. In addition, we extend YONO to YONO+, which creates synthetic replay data by random sampling in the neighborhood of each prototype in the representation space. We show that the synthetic data can further improve YONO. Extensive experiments on IL benchmarks demonstrate the advantages of YONO/YONO+ over existing IL methods in terms of both accuracy and forgetting.
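
A hedged sketch of the YONO+ replay step described above: synthetic features are drawn in the neighborhood of each class prototype in representation space. The sampling scheme (uniform in a ball) and the `radius` hyperparameter are our assumptions; the paper's exact procedure may differ.

```python
import numpy as np

def synth_replay(prototypes, n_per_class, radius, rng=None):
    """Draw synthetic features by uniform sampling inside a ball of `radius`
    around each class prototype in representation space (YONO+-style replay).
    `radius` is a hypothetical hyperparameter, not from the paper."""
    if rng is None:
        rng = np.random.default_rng(0)
    feats, labels = [], []
    for c, p in prototypes.items():
        # Uniform direction, scaled so points are uniform within the ball.
        d = rng.normal(size=(n_per_class, p.size))
        d /= np.linalg.norm(d, axis=1, keepdims=True)
        r = radius * rng.random((n_per_class, 1)) ** (1.0 / p.size)
        feats.append(p + r * d)
        labels.append(np.full(n_per_class, c))
    return np.vstack(feats), np.concatenate(labels)

protos = {0: np.zeros(8), 1: np.ones(8)}
X, y = synth_replay(protos, n_per_class=4, radius=0.1)
print(X.shape, y)  # (8, 8) [0 0 0 0 1 1 1 1]
```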

We show that the \textsc{Maximum Weight Independent Set} problem (\textsc{MWIS}) can be solved in quasi-polynomial time on $H$-free graphs (graphs excluding a fixed graph $H$ as an induced subgraph) for every $H$ each of whose connected components is a path or a subdivided claw (i.e., a tree with at most three leaves). This completes the dichotomy of the complexity of \textsc{MWIS} in $\mathcal{F}$-free graphs, for any finite set $\mathcal{F}$ of graphs, into NP-hard cases and cases solvable in quasi-polynomial time, and corroborates the conjecture that the cases not known to be NP-hard are actually polynomial-time solvable. The key graph-theoretic ingredient in our result is as follows. Fix an integer $t \geq 1$, and let $S_{t,t,t}$ be the graph created from three paths on $t$ edges by identifying one endpoint of each path into a single vertex. We show that, given a graph $G$, one can in polynomial time find either an induced $S_{t,t,t}$ in $G$, or a balanced separator consisting of $\mathcal{O}(\log |V(G)|)$ vertex neighborhoods in $G$, or an extended strip decomposition of $G$ (a decomposition almost as useful for recursion for \textsc{MWIS} as a partition into connected components) with each particle of weight multiplicatively smaller than the weight of $G$. This strengthens a result of Majewski et al.\ [ICALP~2022], which provided such an extended strip decomposition only after the deletion of $\mathcal{O}(\log |V(G)|)$ vertex neighborhoods. To reach the final result, we employ an involved branching strategy that relies on the structural lemma presented above.
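
For concreteness, here is a brute-force reference implementation of the \textsc{MWIS} problem itself, usable only on tiny graphs; the quasi-polynomial algorithm of the paper is of course far more involved.

```python
from itertools import combinations

def mwis_bruteforce(n, edges, weight):
    """Exponential-time reference solver for Maximum Weight Independent Set:
    the maximum total weight of a set of pairwise non-adjacent vertices."""
    edge_set = {frozenset(e) for e in edges}
    best, best_set = 0, set()
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if all(frozenset(p) not in edge_set for p in combinations(S, 2)):
                w = sum(weight[v] for v in S)
                if w > best:
                    best, best_set = w, set(S)
    return best, best_set

# A 5-cycle with unit weights: an optimum picks two non-adjacent vertices.
print(mwis_bruteforce(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], [1] * 5))  # (2, {0, 2})
```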

We consider a type of pull voting suitable for discrete numeric opinions that can be compared on a linear scale, for example, 1 ('disagree strongly'), 2 ('disagree'), $\ldots,$ 5 ('agree strongly'). On observing the opinion of a random neighbour, a vertex changes its opinion incrementally towards the value of the neighbour's opinion, if different. For opinions drawn from a set $\{1,2,\ldots,k\}$, the opinion of the vertex changes by $+1$ if the opinion of the neighbour is larger, or by $-1$ if it is smaller. It is not clear how to predict the outcome of this process, but we observe that the total weight of the system, that is, the sum of the individual opinions of all vertices, is a martingale. This allows us to analyse the outcome of the process on some classes of dense expanders, such as the complete graphs $K_n$ and random graphs $G_{n,p}$ for suitably large $p$. If the average $c$ of the initial opinions satisfies $i \le c \le i+1$ for some integer $i$, then the asymptotic probability that opinion $i$ wins is $i+1-c$, and the probability that opinion $i+1$ wins is $c-i$. With high probability, the winning opinion cannot be other than $i$ or $i+1$. To contrast this, we show that for a path and opinions $0,1,2$ arranged initially in non-decreasing order along the path, the outcome is very different: any of the opinions can win with constant probability, provided that each of the two extreme opinions $0$ and $2$ is initially supported by a constant fraction of vertices.
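
The process is easy to simulate, and a short experiment illustrates the stated win probabilities on the complete graph; the setup below (graph size, opinion counts) is illustrative.

```python
import numpy as np

def pull_vote_complete(opinions, rng):
    """Simulate incremental pull voting on the complete graph K_n: at each
    step a random vertex observes a random other vertex and moves its opinion
    by +/-1 toward it. Returns the consensus opinion."""
    x = np.array(opinions, dtype=int)
    n = len(x)
    while x.min() != x.max():
        v = rng.integers(n)
        w = rng.integers(n - 1)
        w += (w >= v)  # a uniformly random vertex distinct from v
        x[v] += np.sign(x[w] - x[v])
    return int(x[0])

rng = np.random.default_rng(1)
# Opinions in {1,2,3} with average c = 2.25: opinion 2 should win ~75% of runs.
init = [1] * 5 + [2] * 5 + [3] * 10   # n = 20, mean = 2.25
wins = [pull_vote_complete(init, rng) for _ in range(200)]
print({k: wins.count(k) for k in set(wins)})
```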

Most entropy measures depend on the spread of the probability distribution over the sample space $\mathcal{X}$, and the maximum entropy achievable scales proportionately with the sample space cardinality $|\mathcal{X}|$. For a finite $|\mathcal{X}|$, this yields robust entropy measures that satisfy many important properties, such as invariance to bijections, while the same is not true for continuous spaces (where $|\mathcal{X}|=\infty$). Furthermore, since $\mathbb{R}$ and $\mathbb{R}^d$ ($d\in \mathbb{Z}^+$) have the same cardinality (by Cantor's correspondence argument), cardinality-dependent entropy measures cannot encode the data dimensionality. In this work, we question the role of cardinality and distribution spread in defining entropy measures for continuous spaces, which can undergo multiple rounds of transformations and distortions, e.g., in neural networks. We find that the average value of the local intrinsic dimension of a distribution, denoted as ID-Entropy, can serve as a robust entropy measure for continuous spaces while capturing the data dimensionality. ID-Entropy satisfies many desirable properties and can be extended to conditional entropy, joint entropy, and mutual-information variants. It also yields new information bottleneck principles and links to causality. In the context of deep learning, for feedforward architectures, we show, theoretically and empirically, that the ID-Entropy of a hidden layer directly controls the generalization gap for both classifiers and auto-encoders when the target function is Lipschitz continuous. Our work primarily shows that, for continuous spaces, taking a structural rather than a statistical approach yields entropy measures that preserve intrinsic data dimensionality while remaining relevant for studying various architectures.
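
As a sketch of the idea, ID-Entropy can be approximated by averaging a local intrinsic dimension estimate over samples; the Levina-Bickel k-NN MLE used below is our stand-in for the local ID estimator, not necessarily the authors' choice.

```python
import numpy as np
from scipy.spatial import cKDTree

def id_entropy(X, k=10):
    """Average local intrinsic dimension over the samples X (n x d), using
    the Levina-Bickel k-NN maximum-likelihood estimator:
        m(x) = [ (1/(k-1)) * sum_{j<k} log(T_k(x) / T_j(x)) ]^{-1},
    where T_j(x) is the distance from x to its j-th nearest neighbour."""
    tree = cKDTree(X)
    dists, _ = tree.query(X, k=k + 1)   # first column is the point itself
    dists = dists[:, 1:]
    local_id = (k - 1) / np.log(dists[:, -1:] / dists[:, :-1]).sum(axis=1)
    return local_id.mean()

rng = np.random.default_rng(0)
# Points on a 2-D plane embedded in R^10: the estimate is near 2, not 10.
Z = rng.normal(size=(2000, 2))
X = Z @ rng.normal(size=(2, 10))
print(round(id_entropy(X), 2))
```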

We present a novel approach for computing a variant of eigenvector centrality for multilayer networks with inter-layer constraints on node importance. Specifically, we consider a multilayer network defined by multiple edge-weighted, potentially directed, graphs over the same set of nodes with each graph representing one layer of the network and no inter-layer edges. As in the standard eigenvector centrality construction, the importance of each node in a given layer is based on the weighted sum of the importance of adjacent nodes in that same layer. Unlike standard eigenvector centrality, we assume that the adjacency relationship and the importance of adjacent nodes may be based on distinct layers. Importantly, this type of centrality constraint is only partially supported by existing frameworks for multilayer eigenvector centrality that use edges between nodes in different layers to capture inter-layer dependencies. For our model, constrained, layer-specific eigenvector centrality values are defined by a system of independent eigenvalue problems and dependent pseudo-eigenvalue problems, whose solution can be efficiently realized using an interleaved power iteration algorithm.
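
A hypothetical sketch of an interleaved power iteration for this kind of coupled problem: each layer's update applies its own adjacency matrix to the centrality vector of a designated dependency layer. When a layer depends on itself this is ordinary eigenvector centrality; otherwise it is a pseudo-eigenvalue problem coupled across layers. The coupling model below is simplified relative to the paper.

```python
import numpy as np

def interleaved_power_iteration(A, dep, iters=200, tol=1e-10):
    """Layer l's centrality x[l] is updated as A[l] @ x[dep[l]], with
    per-layer normalization, until all layers stabilize. A is a list of
    non-negative adjacency matrices (one per layer, no inter-layer edges);
    dep[l] names the layer whose importance feeds layer l's update."""
    L, n = len(A), A[0].shape[0]
    x = [np.ones(n) / np.sqrt(n) for _ in range(L)]
    for _ in range(iters):
        new = [A[l] @ x[dep[l]] for l in range(L)]
        new = [v / np.linalg.norm(v) for v in new]
        if max(np.linalg.norm(new[l] - x[l]) for l in range(L)) < tol:
            return new
        x = new
    return x

rng = np.random.default_rng(0)
A = [rng.random((5, 5)) for _ in range(2)]        # two layers over 5 nodes
x = interleaved_power_iteration(A, dep=[1, 0])    # each layer scores via the other
print(np.round(x[0], 3))
```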

Linear discriminant analysis (LDA) has been a useful tool in pattern recognition and data analysis research and practice. While linearity of class boundaries cannot always be expected, nonlinear projections through pre-trained deep neural networks have served to map complex data onto feature spaces in which linear discrimination has served well. The solution to binary LDA is obtained by eigenvalue analysis of within-class and between-class scatter matrices. It is well known that multiclass LDA is solved by an extension of binary LDA to a generalised eigenvalue problem, from which the largest subspace that can be extracted is of dimension one lower than the number of classes in the given problem. In this paper, we show that, apart from the first of the discriminant directions, the generalised eigenanalysis solution to multiclass LDA neither yields orthogonal discriminant directions nor maximises discrimination of projected data along them. Surprisingly, to the best of our knowledge, this has not been noted in decades of literature on LDA. To overcome this drawback, we present a derivation, with strict theoretical support, for sequentially obtaining discriminant directions that are orthogonal to previously computed ones and that maximise the Fisher criterion at each step. We show distributions of projections along these axes and demonstrate that discrimination of data projected onto these discriminant directions achieves optimal separation, much higher than that obtained from the generalised eigenvectors of multiclass LDA. Using a wide range of benchmark tasks, we present a comprehensive empirical demonstration that, on a number of pattern recognition and classification problems, the optimal discriminant subspaces obtained by the proposed method, referred to as GO-LDA (Generalised Optimal LDA), can offer superior accuracy.
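
A sketch of the sequential scheme as we read it: each new direction maximises the Fisher criterion restricted to the orthogonal complement of the directions found so far, via a reduced generalised eigenproblem. The reconstruction below is ours, not the authors' code.

```python
import numpy as np
from scipy.linalg import eigh, null_space

def go_lda(Sb, Sw, m):
    """Sequentially extract m directions maximising w'Sb w / w'Sw w, each
    orthogonal (in the Euclidean sense) to all previously found directions.
    Sw must be positive definite (regularise it in practice)."""
    d = Sb.shape[0]
    W = np.zeros((d, 0))
    for _ in range(m):
        # Orthonormal basis Q of the complement of span(W).
        Q = np.eye(d) if W.shape[1] == 0 else null_space(W.T)
        vals, vecs = eigh(Q.T @ Sb @ Q, Q.T @ Sw @ Q)
        w = Q @ vecs[:, -1]            # top generalised eigenvector, lifted back
        W = np.hstack([W, (w / np.linalg.norm(w))[:, None]])
    return W

# Example with random scatter matrices (Sw made positive definite).
rng = np.random.default_rng(0)
B = rng.normal(size=(6, 6)); Sb = B @ B.T
C = rng.normal(size=(6, 6)); Sw = C @ C.T + 6 * np.eye(6)
W = go_lda(Sb, Sw, m=4)
print(np.round(W.T @ W, 3))   # identity: the directions are mutually orthogonal
```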

Hypergeometric sequences are rational-valued sequences that satisfy first-order linear recurrence relations with polynomial coefficients; that is, a hypergeometric sequence $\langle u_n \rangle_{n=0}^{\infty}$ is one that satisfies a recurrence of the form $f(n)u_n = g(n)u_{n-1}$ where $f,g \in \mathbb{Z}[x]$. In this paper, we consider the Membership Problem for hypergeometric sequences: given a hypergeometric sequence $\langle u_n \rangle_{n=0}^{\infty}$ and a target value $t\in \mathbb{Q}$, determine whether $u_n=t$ for some index $n$. We establish decidability of the Membership Problem under the assumption that either (i) $f$ and $g$ have distinct splitting fields or (ii) $f$ and $g$ are monic polynomials that both split over a quadratic extension of $\mathbb{Q}$. Our results are based on an analysis of the prime divisors of polynomial sequences $\langle f(n) \rangle_{n=1}^\infty$ and $\langle g(n) \rangle_{n=1}^\infty$ appearing in the recurrence relation.
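
To fix notation, the following sketch unfolds the recurrence $f(n)u_n = g(n)u_{n-1}$ with exact rational arithmetic and performs a naive bounded membership check; this is only a semi-decision procedure, unlike the genuine decision procedures established in the paper.

```python
from fractions import Fraction

def hypergeometric_terms(f, g, u0, N):
    """Generate u_0..u_N of the sequence defined by f(n)*u_n = g(n)*u_{n-1},
    with f, g given as integer coefficient lists (constant term first).
    Assumes f(n) != 0 for all n >= 1."""
    ev = lambda p, n: sum(c * n**i for i, c in enumerate(p))
    u, out = Fraction(u0), [Fraction(u0)]
    for n in range(1, N + 1):
        u = u * ev(g, n) / ev(f, n)
        out.append(u)
    return out

# f(n) = n, g(n) = 2, u_0 = 1  =>  u_n = 2^n / n!
terms = hypergeometric_terms(f=[0, 1], g=[2], u0=1, N=6)
print(terms)                    # [1, 2, 2, 4/3, 2/3, 4/15, 4/45]
# Naive membership check for a target t up to the bound N:
print(Fraction(4, 3) in terms)  # True
```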

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes of the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. These methods, however, do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism that can automatically adjust the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end fashion, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
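
A forward-only sketch of the adaptive idea: each edge carries a signed homophily degree, so homophilous neighbours are aggregated while heterophilous ones are pushed away. The cosine-based score below is a stand-in for the learnable, end-to-end-trained degree in the paper.

```python
import numpy as np

def adaptive_propagate(H, edges, score):
    """One homophily-adaptive propagation step over node features H (n x d):
    each edge (v, w) carries a degree s in [-1, 1]; positive s aggregates the
    neighbour (homophily), negative s subtracts it (heterophily), pushing the
    two representations apart. Learning of the score is omitted here."""
    out = H.copy()
    deg = np.ones(len(H))
    for v, w in edges:
        s = score(H[v], H[w])
        out[v] += s * H[w]
        out[w] += s * H[v]
        deg[v] += abs(s)
        deg[w] += abs(s)
    return out / deg[:, None]

# Toy score: signed cosine similarity as a stand-in for the learned degree.
cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
H = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.2]])
print(adaptive_propagate(H, [(0, 1), (1, 2)], cos).round(2))
```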

Knowledge graph (KG) embeddings learn low-dimensional representations of entities and relations to predict missing facts. KGs often exhibit hierarchical and logical patterns which must be preserved in the embedding space. For hierarchical data, hyperbolic embedding methods have shown promise for high-fidelity and parsimonious representations. However, existing hyperbolic embedding methods do not account for the rich logical patterns in KGs. In this work, we introduce a class of hyperbolic KG embedding models that simultaneously capture hierarchical and logical patterns. Our approach combines hyperbolic reflections and rotations with attention to model complex relational patterns. Experimental results on standard KG benchmarks show that our method improves over previous Euclidean- and hyperbolic-based efforts by up to 6.1% in mean reciprocal rank (MRR) in low dimensions. Furthermore, we observe that different geometric transformations capture different types of relations while attention-based transformations generalize to multiple relations. In high dimensions, our approach yields new state-of-the-art MRRs of 49.6% on WN18RR and 57.7% on YAGO3-10.
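
For intuition, rotations and reflections in such models are typically parameterised by 2x2 Givens blocks (rotations have determinant +1 per block, reflections -1); the sketch below shows the generic construction and is not the paper's exact parameterisation, and the attention mechanism over transformations is omitted.

```python
import numpy as np

def givens_block_diag(theta, reflect=False):
    """Block-diagonal matrix of 2x2 Givens transforms, one angle per block:
    rotation blocks have det +1, reflection blocks det -1. A generic sketch
    of the rotation/reflection operators used by such relation models."""
    d = 2 * len(theta)
    M = np.zeros((d, d))
    for i, t in enumerate(theta):
        c, s = np.cos(t), np.sin(t)
        B = np.array([[c, -s], [s, c]]) if not reflect else np.array([[c, s], [s, -c]])
        M[2*i:2*i+2, 2*i:2*i+2] = B
    return M

R = givens_block_diag([0.3])                 # rotation block
F = givens_block_diag([0.3], reflect=True)   # reflection block
print(round(np.linalg.det(R), 3), round(np.linalg.det(F), 3))  # 1.0 -1.0
```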

Knowledge graph embedding, which aims to represent entities and relations as low-dimensional vectors (or matrices, tensors, etc.), has been shown to be a powerful technique for predicting missing links in knowledge graphs. Existing knowledge graph embedding models mainly focus on modeling relation patterns such as symmetry/antisymmetry, inversion, and composition. However, many existing approaches fail to model semantic hierarchies, which are common in real-world applications. To address this challenge, we propose a novel knowledge graph embedding model---namely, Hierarchy-Aware Knowledge Graph Embedding (HAKE)---which maps entities into the polar coordinate system. HAKE is inspired by the fact that concentric circles in the polar coordinate system can naturally reflect the hierarchy. Specifically, the radial coordinate aims to model entities at different levels of the hierarchy, and entities with smaller radii are expected to be at higher levels; the angular coordinate aims to distinguish entities at the same level of the hierarchy, and these entities are expected to have roughly the same radii but different angles. Experiments demonstrate that HAKE can effectively model the semantic hierarchies in knowledge graphs, and significantly outperforms existing state-of-the-art methods on benchmark datasets for the link prediction task.
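
A hedged sketch of a HAKE-style distance with a radial (modulus) term for hierarchy levels and an angular (phase) term for same-level distinctions; the function signature and the mixing weight `lam` are illustrative, and the paper should be consulted for the exact formulation.

```python
import numpy as np

def hake_distance(h_m, h_p, r_m, r_p, t_m, t_p, lam=0.5):
    """Distance of a triple (h, r, t) with modulus parts *_m (radial
    coordinates, combined multiplicatively) and phase parts *_p (angular
    coordinates, combined additively modulo 2*pi). Lower is better."""
    modulus = np.linalg.norm(h_m * r_m - t_m)              # radial mismatch
    phase = np.abs(np.sin((h_p + r_p - t_p) / 2.0)).sum()  # angular mismatch
    return modulus + lam * phase

# Head and tail at different hierarchy levels, same angular position:
# the relation's modulus rescales the head's radius onto the tail's.
h_m, t_m = np.array([2.0, 2.0]), np.array([1.0, 1.0])
r_m = np.array([0.5, 0.5])        # "one level down"
h_p = r_p = t_p = np.zeros(2)
print(hake_distance(h_m, h_p, r_m, r_p, t_m, t_p))  # 0.0: a perfect match
```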
