
Data-intensive workloads and applications, such as machine learning (ML), are fundamentally limited by traditional computing systems based on the von Neumann architecture. As data movement and energy consumption become key bottlenecks in the design of computing systems, interest in unconventional approaches such as Near-Data Processing (NDP), especially for machine learning and neural network (NN) accelerators, has grown significantly. Emerging memory technologies, such as ReRAM and 3D-stacked memory, are promising candidates for efficiently architecting NDP-based accelerators for NNs because they can serve both as high-density/low-energy storage and as in/near-memory computation/search engines. In this paper, we present a survey of techniques for designing NDP architectures for NNs. By classifying the techniques according to the memory technology employed, we underscore their similarities and differences. Finally, we discuss open challenges and future perspectives that need to be explored to improve and extend the adoption of NDP architectures in future computing platforms. This paper will be valuable for computer architects, chip designers, and researchers in the area of machine learning.
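To make the in-memory computation idea concrete, here is a minimal sketch, in plain NumPy, of how a ReRAM crossbar evaluates a matrix-vector product in the analog domain: weights are mapped to cell conductances, inputs are applied as word-line voltages, and each bit-line current accumulates G·V contributions by Kirchhoff's current law. The conductance range and the linear weight-to-conductance mapping are illustrative assumptions, not taken from any specific surveyed design.

```python
# A minimal sketch (not from any surveyed paper) of a ReRAM crossbar
# performing an analog matrix-vector multiply: weights become cell
# conductances, inputs are word-line voltages, and each bit-line current
# sums I = G * V contributions by Kirchhoff's current law.
import numpy as np

def crossbar_mvm(weights, inputs, g_min=1e-6, g_max=1e-4):
    """Map a weight matrix to conductances in [g_min, g_max] siemens
    (assumed device range) and return the resulting bit-line currents
    for the given input voltages."""
    w_min, w_max = weights.min(), weights.max()
    # Linear mapping of weights onto the conductance range (an assumption).
    conductances = g_min + (weights - w_min) / (w_max - w_min + 1e-12) * (g_max - g_min)
    currents = conductances.T @ inputs  # one accumulated current per bit line
    return currents

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))    # 4 word lines x 3 bit lines
v = rng.uniform(0.0, 0.2, size=4)  # read voltages (volts)
print(crossbar_mvm(W, v))
```

In a real design, the currents would then be digitized by ADCs, and negative weights handled with paired columns or reference cells; the sketch omits those details.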

Related Content

Neural Networks is the archival journal of the world's three oldest neural modelling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality articles that contribute to the full range of neural networks research, from behavioral and brain modelling and learning algorithms, through mathematical and computational analyses, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This uniquely broad scope facilitates the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, and engineering and applications.

We review the scholarly contributions that utilise Natural Language Processing (NLP) techniques to support the design process. Using a heuristic approach, we gathered 223 articles published in 32 journals from 1991 to the present. We present the state of the art in NLP in-and-for design research by reviewing these articles according to the type of natural language text source: internal reports, design concepts, discourse transcripts, technical publications, consumer opinions, and others. After summarizing these contributions and identifying their gaps, we utilise an existing design innovation framework to identify the applications that NLP currently supports. We then propose several methodological and theoretical directions for future NLP in-and-for design research.
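As a deliberately naive illustration of the consumer-opinions source type (made-up reviews and plain term counting, not any surveyed NLP pipeline), the sketch below surfaces recurring terms that hint at design needs:

```python
# A toy sketch: mine frequent terms from hypothetical product reviews as a
# crude stand-in for the NLP-driven need-finding the survey covers.
from collections import Counter
import re

reviews = [
    "The handle feels flimsy and the battery dies too fast.",
    "Great battery life, but the handle is uncomfortable.",
    "Battery could last longer; love the compact handle design.",
]
stop = {"the", "and", "is", "but", "too", "a", "could", "be"}
tokens = [t for r in reviews
          for t in re.findall(r"[a-z]+", r.lower()) if t not in stop]
print(Counter(tokens).most_common(3))  # recurring themes hint at design needs
```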

The best neural architecture for a given machine learning problem depends on many factors: not only on the complexity and structure of the dataset, but also on resource constraints such as latency, compute, and energy consumption. Neural architecture search (NAS) for tabular datasets is an important but under-explored problem. Previous NAS algorithms designed for image search spaces incorporate resource constraints directly into the reinforcement learning rewards. In this paper, we argue that search spaces for tabular NAS pose considerable challenges for these existing reward-shaping methods, and we propose a new reinforcement learning (RL) controller to address them. Motivated by rejection sampling, when we sample candidate architectures during a search, we immediately discard any architecture that violates our resource constraints. We use a Monte-Carlo-based correction to our RL policy gradient update to account for this extra filtering step. Results on several tabular datasets show that TabNAS, the proposed approach, efficiently finds high-quality models that satisfy the given resource constraints.
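As a loose illustration of the rejection-plus-correction flavor (not the paper's exact estimator, reward, or search space), the toy REINFORCE loop below samples from a categorical policy over architecture choices, discards samples that exceed a hypothetical cost budget, and rescales the update by a Monte-Carlo estimate of the probability of drawing a feasible sample:

```python
# Toy sketch: rejection sampling of architectures plus a Monte-Carlo
# rescaling of the REINFORCE update, so the gradient roughly targets the
# policy conditioned on feasibility. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n_choices = 4
costs = np.array([1.0, 2.0, 4.0, 8.0])  # hypothetical per-choice resource cost
budget = 4.0                             # resource constraint
logits = np.zeros(n_choices)             # policy parameters

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(200):
    probs = softmax(logits)
    # Rejection sampling: draw until a feasible architecture appears.
    while True:
        a = rng.choice(n_choices, p=probs)
        if costs[a] <= budget:
            break
    # Monte-Carlo estimate of P(feasible) under the current policy.
    samples = rng.choice(n_choices, size=256, p=probs)
    p_feasible = np.mean(costs[samples] <= budget)
    reward = 1.0 / (1.0 + costs[a])      # stand-in for validation quality
    # grad of log pi(a) for a softmax policy, rescaled by 1/P(feasible)
    # to account for the rejection step.
    grad_logp = -probs
    grad_logp[a] += 1.0
    logits += 0.1 * reward * grad_logp / max(p_feasible, 1e-3)

print(softmax(logits))  # mass should concentrate on feasible, cheap choices
```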

The Internet of Things (IoT) boom has revolutionized almost every corner of people's daily lives: healthcare, home, transportation, manufacturing, supply chain, and so on. With the recent development of sensor and communication technologies, IoT devices, including smart wearables, cameras, smartwatches, and autonomous vehicles, can accurately measure and perceive their surrounding environment. Continuous sensing generates massive amounts of data and presents challenges for machine learning. Deep learning models (e.g., convolutional neural networks and recurrent neural networks) have been extensively employed to solve IoT tasks by learning patterns from multi-modal sensory data. Graph Neural Networks (GNNs), an emerging and fast-growing family of neural network models, can capture complex interactions within sensor topology and have been demonstrated to achieve state-of-the-art results in numerous IoT learning tasks. In this survey, we present a comprehensive review of recent advances in the application of GNNs to the IoT field, including a deep-dive analysis of GNN design in various IoT sensing environments, an overarching list of public data and source code from the collected publications, and future research directions. To keep track of newly published works, we collect representative papers and their open-source implementations and create a GitHub repository at //github.com/GuiminDong/GNN4IoT.
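As a minimal, self-contained illustration (with made-up sensor positions and readings, not tied to any surveyed system), the sketch below builds a k-nearest-neighbor graph from sensor coordinates and runs one round of neighbor averaging, the basic message-passing step that GNN-based IoT models refine:

```python
# Toy sketch of the common IoT recipe: build a graph from sensor proximity,
# then let each node aggregate its neighbors' readings.
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(6, 2))   # sensor positions (meters)
readings = rng.normal(20, 2, size=(6, 3))   # per-sensor feature vectors

# k-nearest-neighbor adjacency from pairwise distances.
k = 2
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
np.fill_diagonal(d, np.inf)                 # no self-edges
adj = np.zeros((6, 6))
for i in range(6):
    adj[i, np.argsort(d[i])[:k]] = 1.0
adj = np.maximum(adj, adj.T)                # symmetrize

# One aggregation step: average neighbor features, mix with own reading.
neigh_mean = adj @ readings / np.maximum(adj.sum(1, keepdims=True), 1)
h = 0.5 * readings + 0.5 * neigh_mean
print(h.round(2))
```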

Computer architecture and systems have long been optimized to enable efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems and to let ML transform the way computer architecture and systems are designed. This has a twofold meaning: improving designers' productivity, and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or some other criterion of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and we envision that applying ML to computer architecture and systems will thrive in the community.
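For the ML-based modelling category, here is a minimal sketch (entirely synthetic design points and an invented latency function; scikit-learn assumed available) of fitting a surrogate that predicts a performance metric from design parameters, so a design-space search can query the model instead of running slow simulations:

```python
# Toy sketch of ML-based performance modelling: learn a surrogate that maps
# microarchitectural design-point features to a made-up latency metric.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic design points: [cache_kb, issue_width, clock_ghz]
X = np.column_stack([
    rng.choice([32, 64, 128, 256], 500),
    rng.choice([1, 2, 4, 8], 500),
    rng.uniform(1.0, 3.0, 500),
])
# Invented ground truth: higher clock, bigger cache, wider issue all help.
y = 100 / X[:, 2] - 3 * np.log2(X[:, 0]) - 4 * X[:, 1] + rng.normal(0, 1, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])
print("held-out R^2:", round(model.score(X[400:], y[400:]), 3))
```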

Dynamic neural networks are an emerging research topic in deep learning. Compared with static models, which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, leading to notable advantages in terms of accuracy, computational efficiency, adaptiveness, etc. In this survey, we comprehensively review this rapidly developing area by dividing dynamic networks into three main categories: 1) instance-wise dynamic models that process each instance with data-dependent architectures or parameters; 2) spatial-wise dynamic networks that conduct adaptive computation with respect to different spatial locations of image data; and 3) temporal-wise dynamic models that perform adaptive inference along the temporal dimension for sequential data such as videos and texts. The important research problems of dynamic networks, e.g., architecture design, decision-making schemes, optimization techniques, and applications, are reviewed systematically. Finally, we discuss the open problems in this field together with interesting directions for future research.
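A minimal sketch of one instance-wise mechanism, early exiting (random toy weights and an arbitrary confidence threshold, not any specific published model): an auxiliary classifier lets confident inputs skip the remaining layers.

```python
# Toy early-exit network: easy inputs leave through an auxiliary head when
# its confidence clears a threshold, skipping the later layer entirely.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
head1, head2 = rng.standard_normal((3, 8)), rng.standard_normal((3, 8))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x, threshold=0.9):
    h = np.tanh(W1 @ x)
    p = softmax(head1 @ h)
    if p.max() >= threshold:          # confident: exit early, skip layer 2
        return p.argmax(), "early exit"
    h = np.tanh(W2 @ h)               # otherwise run the full network
    return softmax(head2 @ h).argmax(), "full network"

print(predict(rng.standard_normal(8)))
```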

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. For each category, we analyze the accuracy, advantages, and disadvantages of the techniques, along with potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
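To ground category (1), here is a small sketch of the two textbook recipes on a toy weight matrix (the 50% sparsity target and 8-bit symmetric scheme are arbitrary choices, not from any particular surveyed method):

```python
# Toy sketch: magnitude pruning zeroes the smallest weights, and symmetric
# uniform 8-bit quantization stores the rest as integers plus one scale.
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

# Magnitude pruning: drop the 50% of weights with the smallest |w|.
threshold = np.quantile(np.abs(w), 0.5)
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0)

# Symmetric uniform quantization to int8 with a per-tensor scale.
scale = np.abs(w_pruned).max() / 127.0
w_int8 = np.round(w_pruned / scale).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale

print("sparsity:", float((w_pruned == 0).mean()))
print("max quantization error:", float(np.abs(w_pruned - w_dequant).max()))
```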

The concept of the smart grid has been introduced as a new vision of the conventional power grid, aimed at finding an efficient way to integrate green and renewable energy technologies. Along these lines, the Internet-connected smart grid, also called the energy Internet, is emerging as an innovative approach to providing energy anywhere at any time. The ultimate goal of these developments is to build a sustainable society. However, integrating and coordinating a large and growing number of connections can be a challenging issue for the traditional centralized grid system. Consequently, the smart grid is undergoing a transformation from its centralized form to a decentralized topology. Meanwhile, blockchain has several excellent features that make it a promising application for the smart grid paradigm. In this paper, we aim to provide a comprehensive survey of the application of blockchain in the smart grid. We first identify the significant security challenges of smart grid scenarios that blockchain can address. We then present a number of recent blockchain-based research works from the literature that address security issues in the smart grid. We also summarize several related practical projects, trials, and products that have emerged recently. Finally, we discuss essential research challenges and future directions for applying blockchain to smart grid security.
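The security property that motivates much of this work can be shown in a few lines: a hash-chained ledger makes past records tamper-evident. This is a bare-bones sketch using only Python's standard library, with invented transaction fields; real blockchain platforms add consensus, signatures, and much more.

```python
# Toy tamper-evident ledger: each block's hash covers its record and the
# previous block's hash, so editing history breaks verification.
import hashlib, json

def add_block(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    prev = "0" * 64
    for b in chain:
        body = {"record": b["record"], "prev": b["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if b["prev"] != prev or b["hash"] != expected:
            return False
        prev = b["hash"]
    return True

chain = []
add_block(chain, {"from": "meterA", "to": "meterB", "kwh": 3.2})
add_block(chain, {"from": "meterC", "to": "meterA", "kwh": 1.1})
print(verify(chain))             # True
chain[0]["record"]["kwh"] = 99   # tamper with history
print(verify(chain))             # False
```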

Deep Convolutional Neural Networks (CNNs) are a special type of neural network that has shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNNs is largely achieved through multiple non-linear feature extraction stages that can automatically learn hierarchical representations from data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and very interesting deep CNN architectures have recently been reported. The recent race in deep CNN architectures for achieving high performance on challenging benchmarks has shown that innovative architectural ideas, as well as parameter optimization, can improve CNN performance on various vision-related tasks. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and the restructuring of processing units. However, the major improvement in representational capacity has been achieved by restructuring the processing units. In particular, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in recently reported CNN architectures and consequently classifies the recent innovations in CNN architectures into seven categories, based on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention. Additionally, it covers the elementary understanding of CNN components and sheds light on the current challenges and applications of CNNs.
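As a minimal illustration of the block-as-unit idea (toy dense layers standing in for convolutions, random weights), the sketch below stacks residual-style blocks, each bundling two transformations with an identity shortcut:

```python
# Toy sketch: a block, rather than a single layer, as the structural unit.
# Architectures are then built by stacking such blocks.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Two-layer transformation plus identity shortcut: y = relu(x + F(x))."""
    return relu(x + W2 @ relu(W1 @ x))

x = rng.standard_normal(8)
for _ in range(3):  # stack three blocks to form a deeper network
    W1, W2 = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
    x = residual_block(x, W1, W2)
print(x.round(3))
```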

Many learning tasks require dealing with graph data, which contains rich relational information among elements. Modeling physics systems, learning molecular fingerprints, predicting protein interfaces, and classifying diseases all require a model that learns from graph inputs. In other domains, such as learning from non-structural data like texts and images, reasoning over extracted structures, such as the dependency trees of sentences and the scene graphs of images, is an important research topic that also needs graph reasoning models. Graph neural networks (GNNs) are connectionist models that capture the dependence of graphs via message passing between the nodes of a graph. Unlike standard neural networks, graph neural networks retain a state that can represent information from their neighborhoods at arbitrary depth. Although the earliest graph neural networks were found difficult to train to a fixed point, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful learning with them. In recent years, systems based on graph convolutional networks (GCNs) and gated graph neural networks (GGNNs) have demonstrated ground-breaking performance on many of the tasks mentioned above. In this survey, we provide a detailed review of existing graph neural network models, systematically categorize their applications, and propose four open problems for future research.
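One concrete instance of message passing is the propagation rule popularized by GCNs: H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W), i.e., each node averages degree-normalized neighbor features before a shared linear transform and nonlinearity. A compact NumPy sketch on a toy four-node graph (random features and weights):

```python
# Toy sketch of one GCN layer on a 4-node ring graph.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency of a 4-node ring
H = rng.standard_normal((4, 5))             # node features
W = rng.standard_normal((5, 2))             # learnable weights

A_hat = A + np.eye(4)                       # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

H_next = np.maximum(A_norm @ H @ W, 0.0)    # one GCN layer with ReLU
print(H_next.round(3))
```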

Deep learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect of this progress is novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize it according to three dimensions: search space, search strategy, and performance estimation strategy.
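A toy sketch that makes the three dimensions concrete, with an invented search space and a made-up scoring proxy standing in for real performance estimation; random sampling serves as the simplest search strategy:

```python
# Toy NAS loop: a search SPACE of layer choices, random sampling as the
# search STRATEGY, and a cheap proxy as the performance ESTIMATION strategy.
import numpy as np

rng = np.random.default_rng(0)
search_space = {"depth": [2, 4, 6],
                "width": [32, 64, 128],
                "act": ["relu", "tanh"]}

def sample(space):
    """Draw one architecture uniformly from the search space."""
    return {k: rng.choice(v) for k, v in space.items()}

def estimate(arch):
    # Invented proxy: deeper/wider helps with diminishing returns, plus noise.
    return (np.log(arch["depth"]) + 0.3 * np.log(int(arch["width"]))
            + rng.normal(0, 0.1))

best = max((sample(search_space) for _ in range(50)), key=estimate)
print("best architecture found:", best)
```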
