
Video data is often repetitive; for example, the content of adjacent frames is usually strongly correlated. Such repetition occurs at multiple levels of complexity, from low-level pixel values to textures and high-level semantics. We propose Event Neural Networks (EvNets), a novel class of networks that leverage this repetition to achieve considerable computation savings for video inference tasks. A defining characteristic of EvNets is that each neuron has state variables that provide it with long-term memory, which allows low-cost inference even in the presence of significant camera motion. We show that it is possible to transform virtually any conventional neural network into an EvNet. We demonstrate the effectiveness of our method on several state-of-the-art neural networks for both high- and low-level visual processing, including pose recognition, object detection, optical flow, and image enhancement. We observe up to an order-of-magnitude (2-20x) reduction in computational cost compared to conventional networks, with minimal reductions in model accuracy.
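
As a rough illustration of the event idea, the sketch below caches each unit's previous input and output and recomputes only when the input change crosses a threshold. The threshold value, the state layout, and the full recompute-on-change are simplifying assumptions, not the authors' exact formulation:

```python
import torch

class EventGate:
    """Minimal sketch of the event idea behind EvNets (illustrative only):
    each unit keeps state from the previous frame and only recomputes when
    the input change exceeds a threshold, so unchanged regions of the video
    cost no new computation."""

    def __init__(self, threshold: float = 0.05):
        self.threshold = threshold   # hypothetical sensitivity parameter
        self.prev_in = None          # long-term memory: last seen input
        self.prev_out = None         # long-term memory: last emitted output

    def __call__(self, x: torch.Tensor, expensive_op) -> torch.Tensor:
        if self.prev_in is None:
            self.prev_in, self.prev_out = x.clone(), expensive_op(x)
            return self.prev_out
        changed = (x - self.prev_in).abs() > self.threshold  # event mask
        if changed.any():
            # Recompute only when something changed; a real EvNet would
            # update incrementally rather than rerunning the full op.
            self.prev_in = torch.where(changed, x, self.prev_in)
            self.prev_out = expensive_op(self.prev_in)
        return self.prev_out
```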

Related Content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural networks research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analyses, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This uniquely broad scope facilitates the exchange of ideas between biological and technological research and helps foster the development of the interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles appear in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, engineering and applications. Official website:

Dynamic neural networks are an emerging research topic in deep learning. Compared to static models, which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, leading to notable advantages in terms of accuracy, computational efficiency, adaptiveness, etc. In this survey, we comprehensively review this rapidly developing area by dividing dynamic networks into three main categories: 1) instance-wise dynamic models that process each instance with data-dependent architectures or parameters; 2) spatial-wise dynamic networks that conduct adaptive computation with respect to different spatial locations of image data; and 3) temporal-wise dynamic models that perform adaptive inference along the temporal dimension for sequential data such as videos and texts. The important research problems of dynamic networks, e.g., architecture design, decision-making schemes, optimization techniques, and applications, are reviewed systematically. Finally, we discuss the open problems in this field together with interesting future research directions.
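
For the instance-wise category, a minimal early-exit sketch follows: easy inputs leave at the first classifier when it is confident, while hard inputs pay for the full depth. The architecture, confidence threshold, and batch-size-1 inference are illustrative assumptions, not a specific model from the survey:

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy instance-wise dynamic model with one intermediate exit."""

    def __init__(self, dim=64, classes=10, confidence=0.9):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.exit1 = nn.Linear(dim, classes)
        self.block2 = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.exit2 = nn.Linear(dim, classes)
        self.confidence = confidence

    def forward(self, x):
        h = self.block1(x)
        logits1 = self.exit1(h)
        # At inference, skip the remaining layers for confident inputs
        # (assumes batch size 1 for simplicity).
        if not self.training and logits1.softmax(-1).max() >= self.confidence:
            return logits1
        return self.exit2(self.block2(h))
```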

In order to overcome the expressive limitations of graph neural networks (GNNs), we propose the first method that exploits vector flows over graphs to develop globally consistent directional and asymmetric aggregation functions. We show that our directional graph networks (DGNs) generalize convolutional neural networks (CNNs) when applied on a grid. Whereas recent theoretical works focus on understanding local neighbourhoods, local structures, and local isomorphism with no global information flow, our novel theoretical framework allows directional convolutional kernels in any graph. First, by defining a vector field on the graph, we develop a method of applying directional derivatives and smoothing by projecting node-specific messages into the field. Then we propose using the Laplacian eigenvectors as such a vector field, and we show that the method generalizes CNNs on an n-dimensional grid and is provably more discriminative than standard GNNs with respect to the Weisfeiler-Lehman 1-WL test. Finally, we bring the power of CNN data augmentation to graphs by providing a means of performing reflection, rotation, and distortion on the underlying directional field. We evaluate our method on standard benchmarks and see a relative error reduction of 8% on the CIFAR10 graph dataset and of 11% to 32% on the molecular ZINC dataset. An important outcome of this work is that it makes it possible to translate any physical or biological problem with intrinsic directional axes into a graph-network formalism with an embedded directional field.
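
A rough sketch of the directional aggregation idea: a scalar field phi (e.g., a Laplacian eigenvector) weights neighbour messages along its gradient, yielding an asymmetric, direction-aware operator. The L1 normalisation and the exact form of the derivative operator are approximations of the paper's construction:

```python
import numpy as np

def directional_aggregation(X, A, phi):
    """Field-weighted neighbour aggregation (illustrative approximation).
    X: (n, d) node features; A: (n, n) adjacency; phi: (n,) scalar field."""
    out = np.zeros_like(X)
    for v in range(A.shape[0]):
        nbrs = np.nonzero(A[v])[0]
        if len(nbrs) == 0:
            continue
        w = phi[nbrs] - phi[v]        # field gradient along each edge
        norm = np.abs(w).sum()
        if norm > 0:
            w = w / norm              # L1-normalised direction weights
        # Directional derivative: field-weighted neighbour sum minus a
        # matching multiple of the centre node's own features.
        out[v] = (w[:, None] * X[nbrs]).sum(0) - w.sum() * X[v]
    return out
```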

The classification of sentences is very challenging, since sentences contain only limited contextual information. In this paper, we propose an Attention-Gated Convolutional Neural Network (AGCNN) for sentence classification, which generates attention weights from feature context windows of different sizes by using specialized convolution encoders. It makes full use of the limited contextual information to extract and enhance the influence of important features in predicting the sentence's category. Experimental results demonstrate that our model can achieve up to 3.1% higher accuracy than standard CNN models, and it gains competitive results over the baselines on four out of six tasks. In addition, we design an activation function, the Natural Logarithm rescaled Rectified Linear Unit (NLReLU). Experiments show that NLReLU outperforms ReLU and is comparable to other well-known activation functions when used in AGCNN.
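
A minimal sketch of what the NLReLU name describes, a log-compressed ReLU; the rescaling factor beta is an assumption here, and the paper's exact constant may differ:

```python
import torch

def nl_relu(x: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Natural Logarithm rescaled ReLU (sketch): clamp negatives to zero,
    then log-compress, which dampens large activations while keeping the
    rectifier's sparsity. beta is an assumed rescaling factor."""
    return beta * torch.log1p(torch.relu(x))  # log1p(z) = ln(1 + z)
```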

Graphs, which describe pairwise relations between objects, are essential representations of many real-world data such as social networks. In recent years, graph neural networks, which extend neural network models to graph data, have attracted increasing attention. They have been applied to advance many graph-related tasks, such as reasoning about the dynamics of physical systems, graph classification, and node classification. Most existing graph neural network models have been designed for static graphs, while many real-world graphs are inherently dynamic. For example, social networks naturally evolve as new users join and new relations are created. Current graph neural network models cannot utilize this dynamic information, even though it has been proven to enhance the performance of many graph analytical tasks, such as community detection and link prediction. Hence, it is necessary to design dedicated graph neural networks for dynamic graphs. In this paper, we propose DGNN, a new {\bf D}ynamic {\bf G}raph {\bf N}eural {\bf N}etwork model, which can model the dynamic information as the graph evolves. In particular, the proposed framework continually updates node information by coherently capturing the sequential information of edges, the time intervals between edges, and information propagation. Experimental results on various dynamic graphs demonstrate the effectiveness of the proposed framework.
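
A toy sketch of the time-aware update idea: when a new edge arrives, both endpoint states are refreshed by a recurrent cell, with the elapsed time since each node's last event discounting its old state. The exponential decay and the GRU cell are assumptions, not the paper's exact propagation equations:

```python
import torch
import torch.nn as nn

class DynamicNodeUpdater(nn.Module):
    """Illustrative endpoint update for a newly arriving edge (u, v, t)."""

    def __init__(self, dim=32):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)

    def forward(self, h_u, h_v, t_gap_u, t_gap_v):
        # h_u, h_v: (batch, dim) node states; t_gap_*: (batch, 1) tensors
        # holding the time since each node's last interaction.
        decay_u = torch.exp(-t_gap_u)        # older states contribute less
        decay_v = torch.exp(-t_gap_v)
        h_u_new = self.cell(h_v * decay_v, h_u * decay_u)  # message from v
        h_v_new = self.cell(h_u * decay_u, h_v * decay_v)  # message from u
        return h_u_new, h_v_new
```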

Graph Neural Networks (GNNs) for representation learning of graphs broadly follow a neighborhood aggregation framework, where the representation vector of a node is computed by recursively aggregating and transforming feature vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs in capturing different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
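
The maximally expressive architecture described above (known as GIN) reduces to an injective sum aggregation followed by an MLP. A dense-adjacency sketch, with the MLP width as a placeholder:

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One layer of the provably most expressive GNN in this class:
    h_v <- MLP((1 + eps) * h_v + sum of neighbour features)."""

    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, X, A):
        # X: (n, dim) node features; A: (n, n) adjacency without self-loops.
        neighbour_sum = A @ X                    # injective sum aggregation
        return self.mlp((1 + self.eps) * X + neighbour_sum)
```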

Text classification is an important and classical problem in natural language processing. A number of studies have applied convolutional neural networks (convolution on a regular grid, e.g., a sequence) to classification. However, only a limited number of studies have explored the more flexible graph convolutional neural networks (convolution on a non-grid structure, e.g., an arbitrary graph) for the task. In this work, we propose to use graph convolutional networks for text classification. We build a single text graph for a corpus based on word co-occurrence and document-word relations, then learn a Text Graph Convolutional Network (Text GCN) for the corpus. Our Text GCN is initialized with one-hot representations for words and documents; it then jointly learns the embeddings for both words and documents, supervised by the known class labels of documents. Our experimental results on multiple benchmark datasets demonstrate that a vanilla Text GCN without any external word embeddings or knowledge outperforms state-of-the-art methods for text classification. At the same time, Text GCN learns predictive word and document embeddings. In addition, experimental results show that the improvement of Text GCN over state-of-the-art comparison methods becomes more prominent as we lower the percentage of training data, suggesting the robustness of Text GCN to limited training data in text classification.
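
A sketch of the corpus-level graph construction: one node per document and per word, document-word edges weighted by TF-IDF, and word-word edges weighted by positive PMI. The inputs are assumed precomputed as dense matrices; the paper's sliding-window PMI estimation is omitted:

```python
import numpy as np

def build_text_graph(tfidf, pmi):
    """Assemble the heterogeneous Text GCN adjacency (sketch).
    tfidf: (n_docs, n_words) document-word weights;
    pmi:   (n_words, n_words) word co-occurrence scores."""
    n_docs, n_words = tfidf.shape
    n = n_docs + n_words
    A = np.eye(n)                                # self-loops
    A[:n_docs, n_docs:] = tfidf                  # document-word edges
    A[n_docs:, :n_docs] = tfidf.T                # symmetric counterpart
    A[n_docs:, n_docs:] += np.maximum(pmi, 0.0)  # positive-PMI word edges
    return A
```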

Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, whether based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method for automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) an encoder embeds/maps neural network architectures into a continuous space; (2) a predictor takes the continuous representation of a network as input and predicts its accuracy; (3) a decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform gradient-based optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such an embedding is then decoded to a network by the decoder. Experiments show that the architectures discovered by our method are very competitive for the image classification task on CIFAR-10 and the language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods while requiring significantly fewer computational resources. Specifically, we obtain a $2.07\%$ test set error rate for the CIFAR-10 image classification task and $55.9$ test set perplexity for the PTB language modeling task. The best discovered architectures on both tasks are successfully transferred to other tasks such as CIFAR-100 and WikiText-2.
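
A minimal sketch of the encode-optimize-decode loop; the module shapes, the linear encoder/decoder, and the gradient step size are illustrative placeholders, not the paper's actual networks:

```python
import torch
import torch.nn as nn

class NAOSketch(nn.Module):
    """Encode an architecture, ascend the predictor's gradient in the
    continuous space, then decode the improved embedding (sketch)."""

    def __init__(self, arch_dim=32, emb_dim=16):
        super().__init__()
        self.encoder = nn.Linear(arch_dim, emb_dim)
        self.predictor = nn.Linear(emb_dim, 1)
        self.decoder = nn.Linear(emb_dim, arch_dim)

    def improve(self, arch_vec, step=0.1):
        z = self.encoder(arch_vec).detach().requires_grad_(True)
        self.predictor(z).sum().backward()   # gradient of predicted accuracy
        z_new = z + step * z.grad            # move toward higher accuracy
        return self.decoder(z_new)           # decode to a (relaxed) architecture
```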

We propose a Bayesian convolutional neural network built upon Bayes by Backprop and elaborate how this known method can serve as the fundamental construct of our novel, reliable variational inference method for convolutional neural networks. First, we show how Bayes by Backprop can be applied to convolutional layers, where the weights in filters have probability distributions instead of point estimates; and second, how our proposed framework achieves, across various network architectures, performance comparable to convolutional neural networks with point-estimate weights. In the past, Bayes by Backprop has been successfully utilised in feedforward and recurrent neural networks, but not in convolutional ones. This work thus extends the family of Bayesian neural networks to encompass all three aforementioned types of network architectures.
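
A sketch of Bayes by Backprop applied to a convolutional layer: each filter weight gets a Gaussian posterior (mu, sigma) sampled via the reparameterisation trick at every forward pass. The prior and the KL term of the variational objective, as well as the initialisation constants, are omitted or assumed for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesConv2d(nn.Module):
    """Convolution whose weights are sampled from a learned Gaussian
    posterior instead of being point estimates (sketch)."""

    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        shape = (out_ch, in_ch, k, k)
        self.mu = nn.Parameter(torch.randn(shape) * 0.05)
        self.rho = nn.Parameter(torch.full(shape, -4.0))  # sigma = softplus(rho)

    def forward(self, x):
        sigma = F.softplus(self.rho)
        eps = torch.randn_like(sigma)
        weight = self.mu + sigma * eps   # one posterior sample per call
        return F.conv2d(x, weight, padding="same")
```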

Graph Convolutional Neural Networks (Graph CNNs) are generalizations of classical CNNs that handle graph data such as molecular data, point clouds, and social networks. Current filters in graph CNNs are built for a fixed and shared graph structure. However, for most real data, graph structures vary in both size and connectivity. This paper proposes a generalized and flexible graph CNN that takes data of arbitrary graph structure as input, so that a task-driven adaptive graph is learned for each input graph during training. To learn the graph efficiently, a distance metric learning approach is proposed. Extensive experiments on nine graph-structured datasets demonstrate superior improvements in both convergence speed and predictive accuracy.
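
One plausible reading of the distance-metric-learning step, sketched below: a Mahalanobis-style distance with a learnable low-rank metric is computed between every pair of node features, and a row softmax over negative distances yields a per-input adjacency. The exact parameterisation in the paper may differ:

```python
import torch
import torch.nn as nn

class LearnedGraph(nn.Module):
    """Task-driven adjacency from a learned distance metric (sketch)."""

    def __init__(self, dim, rank=8):
        super().__init__()
        # Low-rank factor W gives the metric M = W W^T.
        self.W = nn.Parameter(torch.randn(dim, rank) * 0.1)

    def forward(self, X):
        # X: (n, dim) node features for one input graph.
        Z = X @ self.W                      # project into the metric space
        d2 = torch.cdist(Z, Z) ** 2         # squared learned distances
        return torch.softmax(-d2, dim=-1)   # dense, learned adjacency
```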

Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build "fully convolutional" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.
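
A sketch of the skip-connection fusion in the spirit of FCN-16s: class scores from a deep, coarse feature map are upsampled and summed with scores from a shallower, finer map, then upsampled to the input resolution. Channel sizes are placeholders and the backbone is assumed given; the original uses learned deconvolutions rather than bilinear interpolation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCNSkipHead(nn.Module):
    """Fuse deep semantics with shallow appearance detail (sketch)."""

    def __init__(self, coarse_ch, fine_ch, n_classes):
        super().__init__()
        self.score_coarse = nn.Conv2d(coarse_ch, n_classes, 1)
        self.score_fine = nn.Conv2d(fine_ch, n_classes, 1)

    def forward(self, coarse, fine, out_size):
        s = self.score_coarse(coarse)
        s = F.interpolate(s, size=fine.shape[-2:], mode="bilinear",
                          align_corners=False)
        s = s + self.score_fine(fine)        # skip: add finer-layer scores
        return F.interpolate(s, size=out_size, mode="bilinear",
                             align_corners=False)
```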
