
While graph neural networks (GNNs) have become the de facto standard for graph-based node classification, they impose a strong assumption on the availability of sufficient labeled samples. This assumption restricts the classification performance of prevailing GNNs in many real-world applications suffering from low-data regimes. Specifically, features extracted from scarce labeled nodes cannot provide sufficient supervision for the unlabeled samples, leading to severe over-fitting. In this work, we point out that leveraging subgraphs to capture long-range dependencies can augment the representation of a node with homophily properties, thus alleviating the low-data regime. However, prior works leveraging subgraphs fail to capture the long-range dependencies among nodes. To this end, we present a novel self-supervised learning framework, called multi-view subgraph neural networks (Muse), for handling long-range dependencies. In particular, we propose an information theory-based identification mechanism to identify two types of subgraphs from the views of input space and latent space, respectively. The former captures the local structure of the graph, while the latter captures the long-range dependencies among nodes. By fusing these two views of subgraphs, the learned representations can preserve the topological properties of the graph at large, including the local structure and long-range dependencies, thus maximizing their expressiveness for downstream node classification tasks. Experimental results show that Muse outperforms the alternative methods on node classification tasks with limited labeled data.
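The two-view idea can be sketched in a few lines. The function below is purely illustrative (it is not Muse's actual mechanism, which is information-theoretic): the input-space view pools a node's direct neighbors, while the latent-space view pools the most similar nodes anywhere in the graph as a stand-in for long-range dependencies, and the two views are fused by averaging.

```python
import numpy as np

def two_view_representation(adj, feats, node, k=2):
    """Fuse an input-space view (graph neighbors) with a latent-space
    view (k most similar nodes by feature dot product).  All names and
    choices here are illustrative, not the paper's actual API."""
    # Input-space view: mean-pool the node's direct neighbors.
    nbrs = np.flatnonzero(adj[node])
    local = feats[nbrs].mean(axis=0) if len(nbrs) else feats[node]
    # Latent-space view: k most similar nodes anywhere in the graph,
    # acting as a proxy for long-range dependencies.
    sims = feats @ feats[node]
    sims[node] = -np.inf          # exclude the node itself
    far = np.argsort(sims)[-k:]
    distant = feats[far].mean(axis=0)
    # Fuse the two views by simple averaging.
    return 0.5 * (local + distant)

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]])
feats = np.eye(4)
rep = two_view_representation(adj, feats, 0, k=1)
```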

Related content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. The journal welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technology applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents experts in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science; neuroscience; learning systems; mathematics and computational analysis; engineering and applications. Official website:

Spiking neural networks (SNNs) have gained prominence for their potential in neuromorphic computing and energy-efficient artificial intelligence, yet optimizing them remains a formidable challenge for gradient-based methods due to their discrete, spike-based computation. This paper attempts to tackle the challenges by introducing Cosine Annealing Differential Evolution (CADE), designed to modulate the mutation factor (F) and crossover rate (CR) of differential evolution (DE) for the SNN model, i.e., Spiking Element Wise (SEW) ResNet. Extensive empirical evaluations were conducted to analyze CADE. CADE showed a balance in exploring and exploiting the search space, resulting in accelerated convergence and improved accuracy compared to existing gradient-based and DE-based methods. Moreover, an initialization method based on a transfer learning setting was developed, pretraining on a source dataset (i.e., CIFAR-10) and fine-tuning on the target dataset (i.e., CIFAR-100), to improve population diversity. It was found to further enhance CADE for SNN. Remarkably, CADE elevates the performance of the highest accuracy SEW model by an additional 0.52 percentage points, underscoring its effectiveness in fine-tuning and enhancing SNNs. These findings emphasize the pivotal role of a scheduler for F and CR adjustment, especially for DE-based SNN. Source code on GitHub: //github.com/Tank-Jiang/CADE4SNN.
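A cosine-annealed schedule of this kind is compact enough to show directly. The sketch below gives the generic cosine annealing formula one would use to drive F and CR across DE generations; the specific bounds (0.9 and 0.1) are placeholders, not values taken from the paper.

```python
import math

def cosine_anneal(t, T, v_max, v_min):
    """Cosine-annealed value at generation t of T: starts at v_max,
    ends at v_min.  A sketch of the scheduling idea behind CADE; the
    exact bounds used in the paper may differ."""
    return v_min + 0.5 * (v_max - v_min) * (1 + math.cos(math.pi * t / T))

# Example: anneal both F and CR over 100 DE generations.
T = 100
F_start = cosine_anneal(0, T, 0.9, 0.1)    # 0.9: broad exploration early
F_end = cosine_anneal(T, T, 0.9, 0.1)      # 0.1: fine exploitation late
```

The same function serves both parameters; only the (v_max, v_min) bounds change.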

Graph neural networks have achieved remarkable success in learning graph representations, especially graph Transformer, which has recently shown superior performance on various graph mining tasks. However, graph Transformer generally treats nodes as tokens, which results in quadratic complexity in the number of nodes during self-attention computation. The graph MLP Mixer addresses this challenge by using the efficient MLP Mixer technique from computer vision. However, the time-consuming process of extracting graph tokens limits its performance. In this paper, we present a novel architecture named ChebMixer, a new graph MLP Mixer that uses fast Chebyshev polynomial-based spectral filtering to extract a sequence of tokens. Firstly, we produce multiscale representations of graph nodes via fast Chebyshev polynomial-based spectral filtering. Next, we consider each node's multiscale representations as a sequence of tokens and refine the node representation with an effective MLP Mixer. Finally, we aggregate the multiscale representations of nodes through Chebyshev interpolation. Owing to the powerful representation capabilities and fast computational properties of MLP Mixer, we can quickly extract more informative node representations to improve the performance of downstream tasks. The experimental results demonstrate significant improvements in a variety of scenarios ranging from graph node classification to medical image segmentation.
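The multiscale token extraction rests on the standard Chebyshev recurrence T_k(L̃)X, which avoids eigendecomposition entirely. The sketch below shows that generic recurrence (not ChebMixer's full pipeline); lam_max=2.0 is the usual bound assumed for a normalized graph Laplacian.

```python
import numpy as np

def chebyshev_tokens(L, X, K, lam_max=2.0):
    """Multiscale node representations via the Chebyshev recurrence:
    Z_0 = X, Z_1 = L~ X, Z_k = 2 L~ Z_{k-1} - Z_{k-2}, where
    L~ = (2/lam_max) L - I is the rescaled Laplacian.  Requires K >= 2."""
    L_t = (2.0 / lam_max) * L - np.eye(L.shape[0])
    Z = [X, L_t @ X]
    for _ in range(2, K):
        Z.append(2.0 * L_t @ Z[-1] - Z[-2])   # T_k = 2x T_{k-1} - T_{k-2}
    # Each node gets a sequence of K "tokens", one per scale.
    return np.stack(Z[:K], axis=1)            # (n_nodes, K, n_features)

# Path graph on 3 nodes (unnormalized Laplacian, for illustration).
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
X = np.eye(3)
tokens = chebyshev_tokens(L, X, K=3)
```

Each filter application is a sparse matrix-vector product in practice, which is what makes the token extraction fast.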

Next-generation wireless networks are projected to empower a broad range of Internet-of-things (IoT) applications and services with extreme data rates, posing new challenges in delivering large-scale connectivity at a low cost to current communication paradigms. Rate-splitting multiple access (RSMA) is one of the foremost candidates, conceived to address spectrum scarcity while reaching massive connectivity. Meanwhile, symbiotic communication is considered an inexpensive way to realize future IoT on a large scale. To reach the goal of spectrum efficiency improvement and low energy consumption, we merge these advances by introducing a novel paradigm shift, called symbiotic backscatter RSMA, for the next generation. Specifically, we first establish how to operate the symbiotic system to assist the readers in apprehending the proposed paradigm, then guide detailed design in beamforming weights with four potential gain-control (GC) strategies for enhancing symbiotic communication, and finally provide an information-theoretic framework using a new metric, called symbiotic outage probability (SOP), to characterize the proposed system performance. Through numerical experiments, we show that the developed framework can accurately predict the actual SOP and the efficacy of the proposed GC strategies in improving the SOP performance.
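Outage metrics of this kind are typically validated against Monte Carlo simulation. The snippet below is a generic outage estimator under unit-mean Rayleigh fading, shown only to illustrate the methodology; it is not the paper's SOP expression, which couples the primary and backscatter links.

```python
import math
import random

def outage_prob_mc(snr_db, rate_threshold, trials=200_000, seed=0):
    """Monte Carlo estimate of P(log2(1 + g*SNR) < R) for a single
    link with |h|^2 ~ Exp(1) (Rayleigh fading).  A generic outage
    estimator, not the paper's symbiotic SOP."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    outages = 0
    for _ in range(trials):
        g = rng.expovariate(1.0)              # exponential channel gain
        if math.log2(1 + g * snr) < rate_threshold:
            outages += 1
    return outages / trials

p = outage_prob_mc(10, 1.0)   # ~ 1 - exp(-(2^1 - 1)/10)
```

For this simple link the closed form is 1 - exp(-(2^R - 1)/SNR), which is the kind of analytic prediction the simulation is checked against.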

While spiking neural networks (SNNs) offer a promising neurally-inspired model of computation, they are vulnerable to adversarial attacks. We present the first study that draws inspiration from neural homeostasis to design a threshold-adapting leaky integrate-and-fire (TA-LIF) neuron model and utilize TA-LIF neurons to construct the adversarially robust homeostatic SNNs (HoSNNs) for improved robustness. The TA-LIF model incorporates a self-stabilizing dynamic thresholding mechanism, offering a local feedback control solution to the minimization of each neuron's membrane potential error caused by adversarial disturbance. Theoretical analysis demonstrates favorable dynamic properties of TA-LIF neurons in terms of the bounded-input bounded-output stability and suppressed time growth of membrane potential error, underscoring their superior robustness compared with the standard LIF neurons. When trained with weak FGSM attacks (attack budget = 2/255) and tested with much stronger PGD attacks (attack budget = 8/255), our HoSNNs significantly improve model accuracy on several datasets: from 30.54% to 74.91% on FashionMNIST, from 0.44% to 35.06% on SVHN, from 0.56% to 42.63% on CIFAR10, from 0.04% to 16.66% on CIFAR100, over the conventional LIF-based SNNs.
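The core mechanism, a firing threshold that adapts instead of staying fixed, can be illustrated with a toy neuron. The version below uses a simple spike-triggered threshold jump with exponential decay back to baseline; the paper's TA-LIF instead drives the threshold from a membrane-potential error signal, so treat this as an adaptive-threshold sketch, not the paper's model.

```python
def ta_lif(inputs, tau=0.9, v_th0=1.0, beta=0.95, delta=0.5):
    """Toy threshold-adapting LIF neuron: the threshold jumps by
    `delta` after each spike and decays back toward baseline v_th0
    at rate `beta`, damping runaway firing (homeostasis)."""
    v, v_th, spikes = 0.0, v_th0, []
    for x in inputs:
        v = tau * v + x                            # leaky integration
        if v >= v_th:
            spikes.append(1)
            v = 0.0                                # hard reset
            v_th = v_th + delta                    # raise threshold
        else:
            spikes.append(0)
            v_th = v_th0 + beta * (v_th - v_th0)   # decay toward baseline
    return spikes

adaptive = ta_lif([0.6] * 10)              # with homeostasis
fixed = ta_lif([0.6] * 10, delta=0.0)      # plain LIF (no adaptation)
```

Under the same constant drive, the adaptive neuron fires less than the fixed-threshold one, which is the stabilizing effect that HoSNNs exploit against adversarial input perturbations.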

Graph neural networks (GNNs) have achieved great success for a variety of tasks such as node classification, graph classification, and link prediction. However, the use of GNNs (and machine learning more generally) to solve combinatorial optimization (CO) problems is much less explored. Here, we introduce a novel GNN architecture which leverages a complex filter bank and localized attention mechanisms designed to solve CO problems on graphs. We show how our method differentiates itself from prior GNN-based CO solvers and how it can be effectively applied to the maximum clique, minimum dominating set, and maximum cut problems in a self-supervised learning setting. In addition to demonstrating competitive overall performance across all tasks, we establish state-of-the-art results for the max cut problem.
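Self-supervised CO training of this kind typically minimizes a differentiable relaxation of the combinatorial objective. For max cut, a common relaxation (shown here generically, not the paper's exact filter-bank architecture) lets the network output soft assignments in [-1, 1] and penalizes adjacent nodes landing on the same side:

```python
def max_cut_loss(edges, p):
    """Relaxed max-cut objective on soft assignments p_i in [-1, 1]:
    minimizing sum of p_i * p_j over edges pushes adjacent nodes to
    opposite sides of the cut."""
    return sum(p[i] * p[j] for i, j in edges)

def cut_value(edges, p):
    """Round the relaxation to a hard cut and count crossing edges."""
    side = [1 if x >= 0 else -1 for x in p]
    return sum(1 for i, j in edges if side[i] != side[j])

# A 4-cycle: the optimal cut crosses all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
p = [0.9, -0.8, 0.7, -0.9]   # e.g. tanh outputs of a trained GNN
```

No labels are needed: the loss is computed directly from the graph, which is what makes the setting self-supervised.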

Graph neural networks (GNNs) have emerged as a powerful paradigm for embedding-based entity alignment due to their capability of identifying isomorphic subgraphs. However, in real knowledge graphs (KGs), the counterpart entities usually have non-isomorphic neighborhood structures, which easily causes GNNs to yield different representations for them. To tackle this problem, we propose a new KG alignment network, namely AliNet, aiming at mitigating the non-isomorphism of neighborhood structures in an end-to-end manner. As the direct neighbors of counterpart entities are usually dissimilar due to the schema heterogeneity, AliNet introduces distant neighbors to expand the overlap between their neighborhood structures. It employs an attention mechanism to highlight helpful distant neighbors and reduce noises. Then, it controls the aggregation of both direct and distant neighborhood information using a gating mechanism. We further propose a relation loss to refine entity representations. We perform thorough experiments with detailed ablation studies and analyses on five entity alignment datasets, demonstrating the effectiveness of AliNet.
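The gating step admits a compact sketch. Given a direct (1-hop) aggregate and an attention-weighted distant-neighbor aggregate, a learned sigmoid gate decides how much of each to keep; shapes and parameter names below are illustrative, not AliNet's exact parameterization.

```python
import numpy as np

def gated_fusion(h_direct, h_distant, W_g, b_g):
    """Gated combination of direct and distant neighborhood aggregates:
    g = sigmoid(h_direct @ W_g + b_g), output = g*direct + (1-g)*distant."""
    g = 1.0 / (1.0 + np.exp(-(h_direct @ W_g + b_g)))   # sigmoid gate
    return g * h_direct + (1.0 - g) * h_distant

h_direct = 2.0 * np.ones(3)
h_distant = np.zeros(3)
W_g = 10.0 * np.eye(3)       # strongly positive gate for this toy input
b_g = np.zeros(3)
h = gated_fusion(h_direct, h_distant, W_g, b_g)
```

With a near-saturated gate the output tracks the direct aggregate; intermediate gate values interpolate between the two neighborhoods per dimension.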

Graph convolutional network (GCN) has been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that exponentially grows with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes that associate with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search within this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while being able to achieve comparable test accuracy with previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs 1961 seconds) while using much less memory (2.2GB vs 11.2GB). Furthermore, for training a 4-layer GCN on this data, our algorithm can finish in around 36 minutes while all the existing GCN training algorithms fail to train due to the out-of-memory issue. Moreover, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy---using a 5-layer Cluster-GCN, we achieve state-of-the-art test F1 score 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our codes are publicly available at //github.com/google-research/google-research/tree/master/cluster_gcn.
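The per-step sampling reduces to slicing out one cluster's block of the adjacency matrix. The sketch below shows that restriction step on a dense toy adjacency (in practice the partition would come from a clustering algorithm such as METIS, and the adjacency would be sparse):

```python
import numpy as np

def cluster_minibatch(adj, feats, clusters, cid):
    """Restrict one training step to a single cluster's subgraph:
    all edges leaving the cluster are dropped, so message passing
    stays inside the sampled block.  `clusters` maps id -> node list."""
    nodes = np.array(clusters[cid])
    sub_adj = adj[np.ix_(nodes, nodes)]   # within-cluster block only
    return nodes, sub_adj, feats[nodes]

# Toy path graph 0-1-2-3, partitioned into {0,1} and {2,3}.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
clusters = {0: [0, 1], 1: [2, 3]}
nodes, sub_adj, sub_feats = cluster_minibatch(adj, np.eye(4), clusters, 0)
```

The inter-cluster edge (1, 2) disappears from the minibatch, which is exactly the approximation Cluster-GCN trades for its memory savings.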

Graph convolutional networks (GCNs) have recently become one of the most powerful tools for graph analytics tasks in numerous applications, ranging from social networks and natural language processing to bioinformatics and chemoinformatics, thanks to their ability to capture the complex relationships between concepts. At present, the vast majority of GCNs use a neighborhood aggregation framework to learn a continuous and compact vector, and then perform a pooling operation to generalize graph embedding for the classification task. These approaches have two disadvantages in the graph classification task: (1) when only the largest sub-graph structure ($k$-hop neighbor) is used for neighborhood aggregation, a large amount of early-stage information is lost during the graph convolution step; (2) simple average/sum pooling or max pooling is utilized, which loses the characteristics of each node and the topology between nodes. In this paper, we propose a novel framework, called dual attention graph convolutional networks (DAGCN), to address these problems. DAGCN automatically learns the importance of neighbors at different hops using a novel attention graph convolution layer, and then employs a second attention component, a self-attention pooling layer, to generalize the graph representation from the various aspects of a matrix graph embedding. The dual attention network is trained in an end-to-end manner for the graph classification task. We compare our model with state-of-the-art graph kernels and other deep learning methods. The experimental results show that our framework not only outperforms other baselines but also achieves a better rate of convergence.
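The first attention component amounts to a learned softmax weighting over hop-wise aggregates. The sketch below shows that reduction in its simplest form, with one score per hop; in DAGCN the scores are learned (and finer-grained), so treat this as an illustration of the mechanism, not the paper's layer.

```python
import numpy as np

def hop_attention(hop_reps, scores):
    """Combine k-hop aggregates with attention: softmax the per-hop
    scores and take the weighted sum over the hop axis.
    hop_reps: (K, n_nodes, dim); scores: (K,)."""
    w = np.exp(scores - scores.max())
    w = w / w.sum()                          # softmax over hops
    return np.tensordot(w, hop_reps, axes=(0, 0))   # (n_nodes, dim)

# Two hops, two nodes, two feature dims; equal scores -> plain average.
hop_reps = np.stack([np.ones((2, 2)), 3.0 * np.ones((2, 2))])
out = hop_attention(hop_reps, np.array([0.0, 0.0]))
```

Skewing the scores shifts the output toward the corresponding hop, which is how the layer keeps early-stage (low-hop) information instead of discarding it.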

Recently, graph neural networks (GNNs) have revolutionized the field of graph representation learning through effectively learned node embeddings, and achieved state-of-the-art results in tasks such as node classification and link prediction. However, current GNN methods are inherently flat and do not learn hierarchical representations of graphs---a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DiffPool, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DiffPool learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer. Our experimental results show that combining existing GNN methods with DiffPool yields an average improvement of 5-10% accuracy on graph classification benchmarks, compared to all existing pooling approaches, achieving a new state-of-the-art on four out of five benchmark data sets.
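The pooling step itself is two matrix products. Given a soft cluster assignment S (row-softmax over clusters, produced by a GNN in the actual model), DiffPool coarsens features as X' = SᵀX and adjacency as A' = SᵀAS; the sketch below computes exactly that, with the assignment logits supplied directly for illustration.

```python
import numpy as np

def diffpool(A, X, S_logits):
    """One DiffPool coarsening step.  S_logits: (n_nodes, n_clusters)
    assignment scores (from a GNN in the real model); rows are
    softmaxed into a soft assignment S, then X' = S^T X, A' = S^T A S."""
    e = np.exp(S_logits - S_logits.max(axis=1, keepdims=True))
    S = e / e.sum(axis=1, keepdims=True)      # row-stochastic assignment
    return S.T @ A @ S, S.T @ X

# 4-cycle pooled into 2 clusters: {0,1} and {2,3} (near-hard logits).
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
X = np.eye(4)
S_logits = np.array([[100., 0.], [100., 0.], [0., 100.], [0., 100.]])
A_pool, X_pool = diffpool(A, X, S_logits)
```

Because S is differentiable, gradients flow through the coarsening, which is what lets the cluster assignments be learned end-to-end.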

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research efforts. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as defenses. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have a high attack success rate under state-of-the-art defenses compared to other attacks. Our attack placed first with 92.76% accuracy on a public MNIST black-box attack challenge.
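The efficiency claim follows from the inference path: once trained, crafting an example is one generator forward pass plus clipping, with no per-instance optimization. The sketch below shows that path with a stand-in callable for G; the budget enforcement via L-infinity clipping is a common choice here, not necessarily the paper's exact constraint.

```python
import numpy as np

def adversarialize(x, generator, eps=0.3):
    """AdvGAN-style inference: clip the generated perturbation to the
    budget and add it to the input, keeping pixels in [0, 1].
    `generator` is any callable standing in for a trained G."""
    delta = np.clip(generator(x), -eps, eps)   # enforce L_inf budget
    return np.clip(x + delta, 0.0, 1.0)        # stay in valid pixel range

x = np.array([0.5, 0.9, 0.1])
fake_G = lambda x: np.ones_like(x)             # crude stand-in for G
x_adv = adversarialize(x, fake_G, eps=0.3)
```

Contrast this with iterative attacks like PGD, which re-optimize per input; here the amortized cost per example is a single forward pass.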
