
Video data is often repetitive; for example, the contents of adjacent frames are usually strongly correlated. Such redundancy occurs at multiple levels of complexity, from low-level pixel values to textures and high-level semantics. We propose Event Neural Networks (EvNets), which leverage this redundancy to achieve considerable computation savings during video inference. A defining characteristic of EvNets is that each neuron has state variables that provide it with long-term memory, which allows low-cost, high-accuracy inference even in the presence of significant camera motion. We show that it is possible to transform a wide range of neural networks into EvNets without re-training. We demonstrate our method on state-of-the-art architectures for both high- and low-level visual processing, including pose recognition, object detection, optical flow, and image enhancement. We observe roughly an order-of-magnitude reduction in computational costs compared to conventional networks, with minimal reductions in model accuracy.
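The abstract does not spell out the transformation, but the core mechanism it describes, per-neuron state variables so that computation fires only when an input changes by more than a threshold, can be sketched as follows. This is a minimal illustration under our own assumptions (the class name `EventLayer`, a fixed `threshold`, a single linear layer), not the authors' implementation.

```python
import numpy as np

class EventLayer:
    """Minimal sketch of an event-driven linear layer (our assumption).

    Each unit keeps two state variables: the last-seen input and the
    last-computed output. On a new frame it recomputes only the
    contributions of inputs that changed by more than `threshold`,
    accumulating deltas instead of redoing the full matrix product.
    """

    def __init__(self, weight, bias, threshold=1e-2):
        self.W = weight          # (out_dim, in_dim)
        self.b = bias            # (out_dim,)
        self.threshold = threshold
        self.last_in = None      # long-term memory of the input
        self.last_out = None     # long-term memory of the output

    def __call__(self, x):
        if self.last_in is None:                 # first frame: dense pass
            self.last_in = x.copy()
            self.last_out = self.W @ x + self.b
            return self.last_out.copy()
        delta = x - self.last_in
        active = np.abs(delta) > self.threshold  # changed inputs only
        # update the output with the sparse delta (cheap when few change)
        self.last_out += self.W[:, active] @ delta[active]
        self.last_in[active] = x[active]
        return self.last_out.copy()

rng = np.random.default_rng(0)
layer = EventLayer(rng.standard_normal((4, 8)), np.zeros(4))
frame = rng.standard_normal(8)
y0 = layer(frame)        # dense first pass
frame[2] += 0.5          # only one "pixel" changes between frames
y1 = layer(frame)        # sparse update touches a single weight column
```

The trade-off the abstract mentions is visible here: sub-threshold changes are absorbed into the stored state rather than recomputed, which is where the compute savings and the small accuracy loss both come from.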

Related Content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analyses, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This uniquely broad scope facilitates the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, and engineering and applications. Official website:

The increasing size of recently proposed neural networks makes it hard to implement them on embedded devices, where memory, battery, and computational power are non-trivial bottlenecks. For this reason, the network compression literature has been thriving in recent years, and a large number of solutions have been published to reduce both the number of operations and the parameters involved in these models. Unfortunately, most of these reduction techniques are heuristic methods and usually require at least one re-training step to recover the accuracy. The need for model-reduction procedures is also well known in the fields of Verification and Performance Evaluation, where large efforts have been devoted to the definition of quotients that preserve the observable underlying behaviour. In this paper we try to bridge the gap between the most popular and very effective network reduction strategies and formal notions, such as lumpability, introduced for the verification and evaluation of Markov chains. Elaborating on lumpability, we propose a pruning approach that reduces the number of neurons in a network without using any data or fine-tuning, while exactly preserving its behaviour. By relaxing the constraints on the exact definition of the quotienting method, we can give a formal explanation of some of the most common reduction techniques.
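To make the lumpability idea concrete, here is a minimal, data-free sketch for a single hidden layer: units with identical incoming weights and bias always produce identical activations, so they can be merged by summing their outgoing weights, preserving the network function exactly. The function name and the single-layer scope are ours; the paper's formal quotient construction is more general.

```python
import numpy as np

def lump_identical_neurons(W_in, b, W_out, atol=0.0):
    """Exact, data-free lumping of one hidden layer (a sketch).

    Hidden units with identical incoming weights and bias compute
    identical activations for every input, so each group is replaced
    by one representative whose outgoing weights are the group's sum.
    The layer's input-output function is preserved exactly, for any
    elementwise activation.
    """
    rows = np.hstack([W_in, b[:, None]])      # incoming weights + bias
    kept, groups = [], []
    for i, row in enumerate(rows):
        for g, j in enumerate(kept):
            if np.allclose(row, rows[j], atol=atol):
                groups[g].append(i)           # duplicate of unit j
                break
        else:
            kept.append(i)                    # new representative
            groups.append([i])
    W_in_q, b_q = W_in[kept], b[kept]
    # sum the outgoing columns of each group onto its representative
    W_out_q = np.stack([W_out[:, g].sum(axis=1) for g in groups], axis=1)
    return W_in_q, b_q, W_out_q

# units 0 and 1 are exact duplicates -> the network shrinks losslessly
W_in = np.array([[1.0, 2.0], [1.0, 2.0], [0.5, -1.0]])
b = np.array([0.1, 0.1, 0.0])
W_out = np.array([[1.0, 3.0, 2.0]])
W_in_q, b_q, W_out_q = lump_identical_neurons(W_in, b, W_out)

x = np.array([0.3, -0.7])
relu = lambda z: np.maximum(z, 0)
assert np.allclose(W_out @ relu(W_in @ x + b),
                   W_out_q @ relu(W_in_q @ x + b_q))
```

Relaxing `atol` above zero is the point where exactness is traded away, which mirrors the paper's move from exact quotients to a formal account of approximate, heuristic pruning.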

The Internet of Things (IoT) boom has revolutionized almost every corner of people's daily lives: healthcare, home, transportation, manufacturing, supply chain, and so on. With the recent development of sensor and communication technologies, IoT devices including smart wearables, cameras, smartwatches, and autonomous vehicles can accurately measure and perceive their surrounding environment. Continuous sensing generates massive amounts of data and presents challenges for machine learning. Deep learning models (e.g., convolutional neural networks and recurrent neural networks) have been extensively employed in solving IoT tasks by learning patterns from multi-modal sensory data. Graph Neural Networks (GNNs), an emerging and fast-growing family of neural network models, can capture complex interactions within sensor topology and have been demonstrated to achieve state-of-the-art results in numerous IoT learning tasks. In this survey, we present a comprehensive review of recent advances in the application of GNNs to the IoT field, including a deep dive analysis of GNN design in various IoT sensing environments, an overarching list of public data and source code from the collected publications, and future research directions. To keep track of newly published works, we collect representative papers and their open-source implementations and create a GitHub repository at //github.com/GuiminDong/GNN4IoT.

Graph neural networks (GNNs) are widely used to learn powerful representations of graph-structured data. Recent work demonstrates that transferring knowledge from self-supervised tasks to downstream tasks can further improve graph representations. However, there is an inherent gap between self-supervised tasks and downstream tasks in terms of optimization objective and training data. Conventional pre-training methods may not be effective enough at knowledge transfer since they do not make any adaptation to downstream tasks. To solve these problems, we propose a new transfer learning paradigm for GNNs that effectively leverages self-supervised tasks as auxiliary tasks to help the target task. Our method adaptively selects and combines different auxiliary tasks with the target task in the fine-tuning stage. We design an adaptive auxiliary loss weighting model that learns the weights of auxiliary tasks by quantifying the consistency between each auxiliary task and the target task, and we train the weighting model through meta-learning. Our method can be applied to various transfer learning approaches: it performs well not only in multi-task learning but also in pre-training and fine-tuning. Comprehensive experiments on multiple downstream tasks demonstrate that the proposed method effectively combines auxiliary tasks with the target task and significantly improves performance compared to state-of-the-art methods.
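A small sketch of how consistency-aware auxiliary weighting might look in practice follows. We use gradient cosine similarity on shared parameters as the consistency signal and fold it into softmax weights; the names (`combined_loss`, `weight_logits`) are ours, and the paper's meta-learned weighting model is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def combined_loss(target_loss, aux_losses, shared_params, weight_logits):
    """Adaptive auxiliary-task weighting (our simplification).

    One natural way to quantify the "consistency" between an auxiliary
    task and the target task is the cosine similarity of their
    gradients on shared parameters; here that signal modulates softmax
    weights over the auxiliary losses.
    """
    g_t = torch.cat([g.flatten() for g in torch.autograd.grad(
        target_loss, shared_params, retain_graph=True)])
    sims = []
    for aux in aux_losses:
        g_a = torch.cat([g.flatten() for g in torch.autograd.grad(
            aux, shared_params, retain_graph=True)])
        sims.append(F.cosine_similarity(g_t, g_a, dim=0))
    weights = torch.softmax(weight_logits + torch.stack(sims), dim=0)
    return target_loss + sum(w * l for w, l in zip(weights, aux_losses))

# toy shared encoder with a target head and two auxiliary heads
enc = nn.Linear(8, 8)
heads = nn.ModuleList([nn.Linear(8, 2) for _ in range(3)])
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
h = torch.relu(enc(x))
losses = [F.cross_entropy(head(h), y) for head in heads]
logits = torch.zeros(2, requires_grad=True)   # learnable task weights
loss = combined_loss(losses[0], losses[1:], list(enc.parameters()), logits)
loss.backward()
```

Auxiliary tasks whose gradients point the same way as the target task get up-weighted, which is one plausible reading of the consistency criterion the abstract describes.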

This paper presents a new approach for assembling graph neural networks based on framelet transforms. The latter provides a multi-scale representation for graph-structured data. With the framelet system, we can decompose the graph feature into low-pass and high-pass frequencies as extracted features for network training, which then defines a framelet-based graph convolution. The framelet decomposition naturally induces a graph pooling strategy by aggregating the graph feature into low-pass and high-pass spectra, which considers both the feature values and geometry of the graph data and conserves the total information. The graph neural networks with the proposed framelet convolution and pooling achieve state-of-the-art performance in many types of node and graph prediction tasks. Moreover, we propose shrinkage as a new activation for the framelet convolution, which thresholds the high-frequency information at different scales. Compared to ReLU, shrinkage in framelet convolution improves the graph neural network model in terms of denoising and signal compression: noises in both node and structure can be significantly reduced by accurately cutting off the high-pass coefficients from framelet decomposition, and the signal can be compressed to less than half its original size with the prediction performance well preserved.
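The shrinkage activation itself is easy to state: it is soft-thresholding of the high-pass coefficients. A minimal sketch, with a threshold we pick arbitrarily:

```python
import numpy as np

def shrinkage(coeffs, threshold):
    """Soft-thresholding ("shrinkage") activation, a sketch.

    Applied to the high-pass framelet coefficients, it zeroes entries
    whose magnitude falls below `threshold` and shrinks the rest toward
    zero, which is what yields the denoising and compression behaviour
    the abstract describes. ReLU, by contrast, keeps every positive
    value untouched regardless of magnitude.
    """
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

high_pass = np.array([-0.8, 0.05, 0.3, -0.02, 1.2])
print(shrinkage(high_pass, threshold=0.1))
# small coefficients are cut to zero: [-0.7  0.   0.2 -0.   1.1]
```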

Dynamic neural networks are an emerging research topic in deep learning. Compared to static models, which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, leading to notable advantages in terms of accuracy, computational efficiency, adaptiveness, etc. In this survey, we comprehensively review this rapidly developing area by dividing dynamic networks into three main categories: 1) instance-wise dynamic models that process each instance with data-dependent architectures or parameters; 2) spatial-wise dynamic networks that conduct adaptive computation with respect to different spatial locations of image data; and 3) temporal-wise dynamic models that perform adaptive inference along the temporal dimension for sequential data such as videos and texts. The important research problems of dynamic networks, e.g., architecture design, decision-making schemes, optimization techniques, and applications, are reviewed systematically. Finally, we discuss the open problems in this field together with interesting future research directions.
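As one concrete instance of the first category, here is a minimal early-exit sketch (our own toy, not a model from the survey): easy inputs leave at the first classifier once its confidence clears a threshold, while hard inputs pay for the deeper blocks.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Instance-wise dynamic network via early exiting (a sketch).

    The per-input decision rule (max softmax probability against a
    fixed threshold `tau`) is one simple choice among many surveyed.
    """
    def __init__(self, dim=16, classes=10, tau=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.exit1 = nn.Linear(dim, classes)
        self.block2 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.exit2 = nn.Linear(dim, classes)
        self.tau = tau

    def forward(self, x):                 # one sample at a time for clarity
        h = self.block1(x)
        p1 = torch.softmax(self.exit1(h), dim=-1)
        if p1.max() >= self.tau:          # confident: stop computing here
            return p1
        return torch.softmax(self.exit2(self.block2(h)), dim=-1)

net = EarlyExitNet()
print(net(torch.randn(16)).shape)        # depth now depends on the input
```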

Graph Neural Networks (GNNs) have proven to be useful for many different practical applications. However, many existing GNN models have implicitly assumed homophily among the nodes connected in the graph, and therefore have largely overlooked the important setting of heterophily, where most connected nodes are from different classes. In this work, we propose a novel framework called CPGNN that generalizes GNNs for graphs with either homophily or heterophily. The proposed framework incorporates an interpretable compatibility matrix for modeling the heterophily or homophily level in the graph, which can be learned in an end-to-end fashion, enabling it to go beyond the assumption of strong homophily. Theoretically, we show that replacing the compatibility matrix in our framework with the identity (which represents pure homophily) reduces to GCN. Our extensive experiments demonstrate the effectiveness of our approach in more realistic and challenging experimental settings with significantly less training data compared to previous works: CPGNN variants achieve state-of-the-art results in heterophily settings with or without contextual node features, while maintaining comparable performance in homophily settings.
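The propagation step the abstract alludes to can be sketched directly: class beliefs are diffused over the graph and mixed through the compatibility matrix, and setting that matrix to the identity collapses the step to plain neighborhood averaging. The toy graph and the fixed matrix `H_hetero` below are our illustration; CPGNN learns the matrix end-to-end.

```python
import numpy as np

def compatibility_propagate(A_norm, beliefs, H):
    """One belief-propagation step with a compatibility matrix (sketch).

    H[i, j] captures how likely a class-i node is to link to a class-j
    node. With H = np.eye(...) the step reduces to averaging neighbors'
    class beliefs, matching the abstract's remark that pure homophily
    recovers GCN-style propagation.
    """
    return A_norm @ beliefs @ H

# two-class heterophilous toy graph: edges mostly cross classes
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
A_norm = A / A.sum(axis=1, keepdims=True)     # row-normalized adjacency
beliefs = np.array([[0.9, 0.1],
                    [0.2, 0.8],
                    [0.1, 0.9]])
H_hetero = np.array([[0.1, 0.9],
                     [0.9, 0.1]])             # classes prefer cross-links
print(compatibility_propagate(A_norm, beliefs, H_hetero))
```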

Deep learning methods for graphs achieve remarkable performance on many node-level and graph-level prediction tasks. However, despite the proliferation of the methods and their success, prevailing Graph Neural Networks (GNNs) neglect subgraphs, rendering subgraph prediction tasks challenging to tackle in many impactful applications. Further, subgraph prediction tasks present several unique challenges, because subgraphs can have non-trivial internal topology, but also carry a notion of position and external connectivity information relative to the underlying graph in which they exist. Here, we introduce SUB-GNN, a subgraph neural network to learn disentangled subgraph representations. In particular, we propose a novel subgraph routing mechanism that propagates neural messages between the subgraph's components and randomly sampled anchor patches from the underlying graph, yielding highly accurate subgraph representations. SUB-GNN specifies three channels, each designed to capture a distinct aspect of subgraph structure, and we provide empirical evidence that the channels encode their intended properties. We design a series of new synthetic and real-world subgraph datasets. Empirical results for subgraph classification on eight datasets show that SUB-GNN achieves considerable performance gains, outperforming strong baseline methods, including node-level and graph-level GNNs, by 12.4% over the strongest baseline. SUB-GNN performs exceptionally well on challenging biomedical datasets when subgraphs have complex topology and even comprise multiple disconnected components.
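A rough sketch of the shared pattern behind the three channels follows: sample anchor patches from the underlying graph, encode them, and aggregate their messages into a subgraph component by similarity. Everything here (mean-pooled patch encodings, softmax attention, patch sizes) is our simplification, not SUB-GNN's actual channel design.

```python
import numpy as np

def anchor_patch_message(component_repr, node_feats, n_patches=4,
                         patch_size=3, seed=0):
    """Anchor-patch messaging for one subgraph component (a sketch).

    Random patches of the underlying graph are encoded by mean pooling
    and their messages are attended into the component representation,
    so the component reflects where it sits relative to the rest of
    the graph, the position/connectivity signal the abstract stresses.
    """
    rng = np.random.default_rng(seed)
    n = node_feats.shape[0]
    patches = [rng.choice(n, size=patch_size, replace=False)
               for _ in range(n_patches)]
    msgs = np.stack([node_feats[p].mean(axis=0) for p in patches])
    sims = msgs @ component_repr                  # relevance of each patch
    weights = np.exp(sims) / np.exp(sims).sum()   # softmax attention
    return component_repr + weights @ msgs        # updated representation

rng = np.random.default_rng(3)
node_feats = rng.standard_normal((10, 8))         # underlying graph nodes
component = node_feats[[2, 3]].mean(axis=0)       # one subgraph component
print(anchor_patch_message(component, node_feats).shape)
```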

Learning node embeddings that capture a node's position within the broader graph structure is crucial for many prediction tasks on graphs. However, existing Graph Neural Network (GNN) architectures have limited power in capturing the position/location of a given node with respect to all other nodes of the graph. Here we propose Position-aware Graph Neural Networks (P-GNNs), a new class of GNNs for computing position-aware node embeddings. P-GNN first samples sets of anchor nodes, computes the distance of a given target node to each anchor-set, and then learns a non-linear distance-weighted aggregation scheme over the anchor-sets. This way P-GNNs can capture the positions/locations of nodes with respect to the anchor nodes. P-GNNs have several advantages: they are inductive, scalable, and can incorporate node feature information. We apply P-GNNs to multiple prediction tasks including link prediction and community detection. We show that P-GNNs consistently outperform state-of-the-art GNNs, with up to 66% improvement in terms of the ROC AUC score.
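The mechanism is simple enough to sketch end-to-end on a toy graph: sample anchor-sets, take each node's shortest distance to every set, and weight a message from the closest anchor by a decaying function of that distance. The aggregation below (a 1/(d+1) weight and one output dimension per anchor-set) is our simplification of the paper's learned non-linear scheme.

```python
import numpy as np

def pgnn_embedding(dist, feats, n_anchor_sets=4, seed=0):
    """One P-GNN-style layer (our simplification).

    The output has one dimension per anchor-set, so it encodes each
    node's position relative to the sampled anchors, which ordinary
    neighborhood aggregation cannot express.
    """
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    out = np.zeros((n, n_anchor_sets))
    for k in range(n_anchor_sets):
        size = rng.integers(1, max(2, n // 2))
        anchors = rng.choice(n, size=size, replace=False)
        d = dist[:, anchors]                   # node-to-anchor distances
        nearest = anchors[d.argmin(axis=1)]    # closest anchor per node
        weight = 1.0 / (d.min(axis=1) + 1.0)   # distance-weighted message
        out[:, k] = weight * feats[nearest, 0] # 1-D feature for brevity
    return out

# tiny path graph 0-1-2-3 with hop distances
dist = np.array([[0, 1, 2, 3],
                 [1, 0, 1, 2],
                 [2, 1, 0, 1],
                 [3, 2, 1, 0]], dtype=float)
feats = np.arange(4.0)[:, None]
print(pgnn_embedding(dist, feats))
```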

Graphs, which describe pairwise relations between objects, are essential representations of many real-world data such as social networks. In recent years, graph neural networks, which extend neural network models to graph data, have attracted increasing attention. Graph neural networks have been applied to advance many different graph-related tasks such as reasoning about the dynamics of physical systems, graph classification, and node classification. Most of the existing graph neural network models have been designed for static graphs, while many real-world graphs are inherently dynamic. For example, social networks naturally evolve as new users join and new relations are created. Current graph neural network models cannot utilize the dynamic information in dynamic graphs. However, dynamic information has been proven to enhance the performance of many graph analytical tasks such as community detection and link prediction. Hence, it is necessary to design dedicated graph neural networks for dynamic graphs. In this paper, we propose DGNN, a new Dynamic Graph Neural Network model, which can model the dynamic information as the graph evolves. In particular, the proposed framework can keep updating node information by coherently capturing the sequential information of edges, the time intervals between edges, and information propagation. Experimental results on various dynamic graphs demonstrate the effectiveness of the proposed framework.
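A minimal sketch of the kind of update the abstract describes, under our own assumptions (exponential interval decay, a shared mixing matrix, and no propagation to the endpoints' neighbors, which the real model does perform):

```python
import numpy as np

def edge_event_update(h, last_t, u, v, t, W, decay=0.1):
    """Update node states when edge (u, v) arrives at time t (a sketch).

    Each node keeps a state vector and the timestamp of its last event.
    An arriving edge first decays both endpoints' memories according to
    the elapsed interval, then mixes their states through a shared
    weight matrix, so both edge order and edge timing shape embeddings.
    """
    for node in (u, v):
        h[node] *= np.exp(-decay * (t - last_t[node]))  # interval decay
        last_t[node] = t
    msg = np.tanh(W @ np.concatenate([h[u], h[v]]))     # joint message
    h[u] += msg
    h[v] += msg
    return h, last_t

rng = np.random.default_rng(1)
h = rng.standard_normal((3, 4))      # 3 nodes, 4-dim states
last_t = np.zeros(3)
W = rng.standard_normal((4, 8))
h, last_t = edge_event_update(h, last_t, 0, 1, t=2.0, W=W)
h, last_t = edge_event_update(h, last_t, 1, 2, t=5.0, W=W)
```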

Script event prediction requires a model to predict the subsequent event given an existing event context. Previous models based on event pairs or event chains cannot make full use of dense event connections, which may limit their capability for event prediction. To remedy this, we propose constructing an event graph to better utilize the event network information for script event prediction. In particular, we first extract narrative event chains from large quantities of news corpora, and then construct a narrative event evolutionary graph (NEEG) based on the extracted chains. NEEG can be seen as a knowledge base that describes event evolutionary principles and patterns. To solve the inference problem on NEEG, we present a scaled graph neural network (SGNN) to model event interactions and learn better event representations. Instead of computing representations on the whole graph, SGNN processes only the concerned nodes each time, which makes our model feasible for large-scale graphs. By comparing the similarity between the input context event representations and the candidate event representations, we can choose the most reasonable subsequent event. Experimental results on the widely used New York Times corpus demonstrate that our model significantly outperforms state-of-the-art baseline methods on the standard multiple-choice narrative cloze evaluation.
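The final choice step can be sketched with plain similarity scoring: after the network produces representations for the context events and each candidate, pick the candidate whose aggregate similarity to the context is highest. Mean cosine similarity below is our simplification; SGNN's actual similarity computation may weight context events differently.

```python
import numpy as np

def choose_next_event(context_reprs, candidate_reprs):
    """Pick the most plausible subsequent event (a sketch).

    Scores each candidate by its mean cosine similarity to the context
    event representations and returns the index of the best candidate.
    """
    ctx = context_reprs / np.linalg.norm(context_reprs, axis=1,
                                         keepdims=True)
    cand = candidate_reprs / np.linalg.norm(candidate_reprs, axis=1,
                                            keepdims=True)
    scores = (cand @ ctx.T).mean(axis=1)   # candidate-by-context similarity
    return int(scores.argmax()), scores

rng = np.random.default_rng(2)
context = rng.standard_normal((8, 16))     # 8 context events
candidates = rng.standard_normal((5, 16))  # 5 candidate next events
best, scores = choose_next_event(context, candidates)
print(best, scores.round(3))
```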
