
Human Activity Recognition (HAR) plays a critical role in a wide range of real-world applications, and it is traditionally achieved via wearable sensing. Recently, to avoid the burden and discomfort caused by wearable devices, device-free approaches exploiting RF signals have arisen as a promising alternative for HAR. Most of the latest device-free approaches require training a large deep neural network model in either the time or the frequency domain, entailing extensive storage to hold the model and intensive computation to infer activities. Consequently, even with some major advances in device-free HAR, current device-free approaches are still far from practical in real-world scenarios where the computation and storage resources of, for example, edge devices are limited. Therefore, we introduce HAR-SAnet, a novel RF-based HAR framework. It adopts an original signal-adapted convolutional neural network architecture: instead of feeding handcrafted features of RF signals into a classifier, HAR-SAnet fuses them adaptively from both the time and frequency domains in an end-to-end neural network model. We apply point-wise grouped convolutions and depth-wise separable convolutions to confine the model scale and to speed up inference. The experimental results show that the recognition accuracy of HAR-SAnet outperforms state-of-the-art algorithms and systems.
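To make the last point concrete, the sketch below shows how a point-wise grouped convolution stacked on a depth-wise convolution keeps parameter counts and inference cost low; it is a minimal PyTorch illustration under assumed channel sizes and group counts, not the authors' released HAR-SAnet model.

```python
import torch
import torch.nn as nn

class LightweightConvBlock(nn.Module):
    """Illustrative block combining a depth-wise convolution with a
    point-wise grouped convolution to limit parameters and compute.
    Channel sizes and the group count are assumptions, not the paper's."""
    def __init__(self, in_ch=32, out_ch=64, groups=4):
        super().__init__()
        # depth-wise convolution: one 1D filter per input channel
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)
        # point-wise grouped convolution: 1x1 mixing within channel groups
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1, groups=groups)
        self.bn = nn.BatchNorm1d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):            # x: (batch, in_ch, time)
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = LightweightConvBlock()
print(block(torch.randn(8, 32, 128)).shape)  # torch.Size([8, 64, 128])
```

Compared with a standard Conv1d(32, 64, 3), the depth-wise plus grouped point-wise pair uses only a fraction of the weights, which is the kind of saving such designs rely on for edge deployment.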

Related content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analyses, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of the interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, and engineering and applications. Official website:

Recently, researchers have explored graph neural network (GNN) techniques for text classification, since GNNs handle complex structures well and preserve global information. However, previous GNN-based methods face two practical problems: a fixed corpus-level graph structure that does not support online testing, and high memory consumption. To tackle these problems, we propose a new GNN-based model that builds a graph for each input text with global parameter sharing, instead of a single graph for the whole corpus. This removes the dependence between an individual text and the entire corpus, which supports online testing while still preserving global information. In addition, we build graphs with much smaller windows in the text, which not only extracts more local features but also significantly reduces the number of edges and the memory consumption. Experiments show that our model outperforms existing models on several text classification datasets even while consuming less memory.
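As a rough illustration of the per-text graph construction with small sliding windows, the snippet below builds a word co-occurrence graph for a single input text; the window size and the simple count-based edge weighting are assumptions, not the paper's exact scheme.

```python
from collections import defaultdict

def build_text_graph(tokens, window=3):
    """Toy per-text graph: nodes are unique tokens, edges connect tokens
    that co-occur within a small sliding window (window size is assumed)."""
    nodes = sorted(set(tokens))
    index = {tok: i for i, tok in enumerate(nodes)}
    edge_weight = defaultdict(int)
    for i, tok in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            a, b = index[tok], index[tokens[j]]
            if a != b:
                edge_weight[(min(a, b), max(a, b))] += 1  # undirected edge count
    return nodes, dict(edge_weight)

nodes, edges = build_text_graph("the cat sat on the mat".split(), window=3)
print(nodes)   # ['cat', 'mat', 'on', 'sat', 'the']
print(edges)   # co-occurrence counts keyed by node-index pairs
```

Because each graph depends only on its own text (plus shared model parameters), a new document can be classified online without rebuilding a corpus-level graph.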

During the last decade, Convolutional Neural Networks (CNNs) have become the de facto standard for various Computer Vision and Machine Learning tasks. CNNs are feed-forward Artificial Neural Networks (ANNs) with alternating convolutional and subsampling layers. Deep 2D CNNs with many hidden layers and millions of parameters have the ability to learn complex objects and patterns, provided that they can be trained on a massive visual database with ground-truth labels. With proper training, this unique ability makes them the primary tool for various engineering applications over 2D signals such as images and video frames. Yet, this may not be a viable option in numerous applications over 1D signals, especially when the training data is scarce or application-specific. To address this issue, 1D CNNs have recently been proposed and have immediately achieved state-of-the-art performance in several applications such as personalized biomedical data classification and early diagnosis, structural health monitoring, anomaly detection and identification in power electronics, and motor-fault detection. Another major advantage is that a real-time and low-cost hardware implementation is feasible due to the simple and compact configuration of 1D CNNs, which perform only 1D convolutions (scalar multiplications and additions). This paper presents a comprehensive review of the general architecture and principles of 1D CNNs along with their major engineering applications, with a particular focus on recent progress in this field. Their state-of-the-art performance is highlighted, concluding with their unique properties. The benchmark datasets and the principal 1D CNN software used in those applications are also publicly shared on a dedicated website.
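For readers unfamiliar with the layout, the following PyTorch sketch shows a compact 1D CNN of the kind the review describes: alternating 1D convolution and pooling layers followed by a small classifier head. All layer sizes and the number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Simple1DCNN(nn.Module):
    """Minimal 1D CNN: conv/pool feature extractor plus a linear classifier.
    Kernel sizes, channel widths, and class count are assumptions."""
    def __init__(self, in_ch=1, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_ch, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):            # x: (batch, channels, samples)
        return self.head(self.features(x))

model = Simple1DCNN()
print(model(torch.randn(4, 1, 1024)).shape)  # torch.Size([4, 5])
```

The whole forward pass consists of a handful of 1D convolutions and poolings, which is what makes low-cost, real-time hardware realizations plausible.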

Current top-performing object detectors depend on deep CNN backbones, such as ResNet-101 and Inception, benefiting from their powerful feature representations but suffering from high computational costs. Conversely, some detectors based on lightweight models achieve real-time processing, but their accuracy is often criticized. In this paper, we explore an alternative way to build a fast and accurate detector by strengthening lightweight features using a hand-crafted mechanism. Inspired by the structure of Receptive Fields (RFs) in human visual systems, we propose a novel RF Block (RFB) module, which takes the relationship between the size and eccentricity of RFs into account, to enhance feature discriminability and robustness. We further assemble RFB on top of SSD, constructing the RFB Net detector. To evaluate its effectiveness, experiments are conducted on two major benchmarks, and the results show that RFB Net is able to reach the performance of advanced very deep detectors while keeping real-time speed. Code is available at //github.com/ruinmessi/RFBNet.
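The sketch below conveys the multi-branch idea behind an RFB-style block: each branch uses a different dilation rate to mimic a different eccentricity, and the branch outputs are concatenated and fused with a residual connection. Branch widths and dilation rates here are assumptions; consult the released RFBNet code for the actual design.

```python
import torch
import torch.nn as nn

class RFBLikeBlock(nn.Module):
    """Simplified multi-branch block inspired by the RFB idea.
    Branch widths and dilation rates (1, 3, 5) are assumptions."""
    def __init__(self, in_ch=64, branch_ch=32):
        super().__init__()
        def branch(dilation):
            return nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, 1), nn.ReLU(inplace=True),
                nn.Conv2d(branch_ch, branch_ch, 3, padding=dilation,
                          dilation=dilation), nn.ReLU(inplace=True),
            )
        self.branches = nn.ModuleList([branch(d) for d in (1, 3, 5)])
        self.fuse = nn.Conv2d(3 * branch_ch, in_ch, 1)

    def forward(self, x):
        out = torch.cat([b(x) for b in self.branches], dim=1)
        return torch.relu(self.fuse(out) + x)   # residual fusion

print(RFBLikeBlock()(torch.randn(2, 64, 38, 38)).shape)  # (2, 64, 38, 38)
```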

Over the past decades, state-of-the-art medical image segmentation has rested heavily on signal processing paradigms, most notably registration-based label propagation and pair-wise patch comparison, which are generally slow despite high segmentation accuracy. In recent years, deep learning has revolutionized computer vision, with many approaches outperforming prior art, in particular the convolutional neural network (CNN) studies on image classification. Deep CNNs have also recently started being applied to medical image segmentation, but they generally involve long training and demanding memory requirements, achieving limited success. We propose a patch-based deep learning framework based on a revisit of the classic neural network model with substantial modernization, including the use of Rectified Linear Unit (ReLU) activations, dropout layers, and 2.5D tri-planar multi-pathway patch settings. In a test application to hippocampus segmentation using 100 brain MR images from the ADNI database, our approach significantly outperformed prior art in terms of both segmentation accuracy and speed, achieving a median Dice score of up to 90.98% with near real-time performance (<1 s).
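A minimal sketch of 2.5D tri-planar patch extraction is given below: three orthogonal 2D patches (axial, coronal, sagittal) are taken around a voxel, one per network pathway. The patch size and the omission of border handling are assumptions, not the paper's exact settings.

```python
import numpy as np

def triplanar_patches(volume, center, size=24):
    """Toy 2.5D extraction: three orthogonal 2D patches around a voxel.
    Patch size is assumed; voxels near the volume border are not handled."""
    x, y, z = center
    h = size // 2
    axial    = volume[x - h:x + h, y - h:y + h, z]   # in-plane slice
    coronal  = volume[x - h:x + h, y, z - h:z + h]
    sagittal = volume[x, y - h:y + h, z - h:z + h]
    return np.stack([axial, coronal, sagittal])      # shape (3, size, size)

vol = np.random.rand(64, 64, 64)
print(triplanar_patches(vol, (32, 32, 32)).shape)    # (3, 24, 24)
```

Each plane would then feed its own pathway of the patch classifier, which keeps memory usage far below that of a full 3D CNN.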

This research focuses on traffic detection, thus essentially involving object detection and classification. The work discussed here is motivated by unsatisfactory attempts to re-use well-known pre-trained object detection networks on domain-specific data. In this process, some simple issues leading to a prominent performance drop are identified, and ways to resolve them are discussed. For example, some simple yet relevant tricks regarding data collection and sampling prove to be very beneficial. Introducing a blur net to deal with blurred real-time data is another important factor in improving performance. We further study neural network design issues for beneficial object classification and employ shared, region-independent convolutional features. Adaptive learning rates to deal with saddle points are also investigated, and an average-covariance-matrix-based pre-conditioning approach is proposed. We also introduce the use of optical flow features to incorporate orientation information. Experimental results demonstrate a steady rise in performance.

The state of the art in video understanding suffers from two problems: (1) most reasoning is performed locally in the video, so it misses important relationships within actions that span several seconds; (2) while there are local methods with fast per-frame processing, processing the whole video is not efficient and hampers fast video retrieval or online classification of long-term activities. In this paper, we introduce a network architecture that takes long-term content into account and, at the same time, enables fast per-video processing. The architecture is based on merging long-term content already in the network rather than in a post-hoc fusion. Together with a sampling strategy that exploits the fact that neighboring frames are largely redundant, this yields high-quality action classification and video captioning at up to 230 videos per second, where each video can consist of a few hundred frames. The approach achieves competitive performance across all datasets while being 10x to 80x faster than state-of-the-art methods.
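One simple way to realize such a redundancy-aware sampling strategy is segment-based sparse sampling, sketched below: the video is split into equal segments and one frame is drawn from each. The segment count and the random-offset rule are assumptions; the paper's actual sampling strategy may differ.

```python
import numpy as np

def sample_frames(n_frames, n_samples=8, training=True, seed=None):
    """Segment-based sparse sampling: one frame per equal-length segment
    (random offset during training, segment center otherwise)."""
    rng = np.random.default_rng(seed)
    bounds = np.linspace(0, n_frames, n_samples + 1)
    idx = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        lo, hi = int(lo), max(int(lo) + 1, int(hi))
        idx.append(int(rng.integers(lo, hi)) if training else (lo + hi) // 2)
    return idx

print(sample_frames(300, n_samples=8, training=False))
```

Because only a handful of frames per video are processed, the per-video cost stays nearly constant even for videos a few hundred frames long.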

In this paper, we propose an effective electrocardiogram (ECG) arrhythmia classification method using a deep two-dimensional convolutional neural network (CNN), which has recently shown outstanding performance in the field of pattern recognition. Every ECG beat was transformed into a two-dimensional grayscale image as input data for the CNN classifier. Optimization of the proposed CNN classifier includes various deep learning techniques such as batch normalization, data augmentation, Xavier initialization, and dropout. In addition, we compared our proposed classifier with two well-known CNN models: AlexNet and VGGNet. ECG recordings from the MIT-BIH arrhythmia database were used for the evaluation of the classifier. As a result, our classifier achieved 99.05% average accuracy with 97.85% average sensitivity. To precisely validate our CNN classifier, 10-fold cross-validation was performed in the evaluation, involving every ECG recording as test data. Our experimental results successfully validate that the proposed CNN classifier with the transformed ECG images can achieve excellent classification accuracy without any manual pre-processing of the ECG signals such as noise filtering, feature extraction, and feature reduction.
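A toy version of the beat-to-image transformation is sketched below: each beat's amplitude curve is rendered onto a fixed-size grayscale canvas that a 2D CNN can ingest. The canvas size and the pixel-plotting rendering are assumptions; the paper's exact rendering pipeline may differ.

```python
import numpy as np

def beat_to_image(beat, size=128):
    """Toy conversion of one ECG beat (1D samples) into a 2D grayscale image
    by plotting the normalized amplitude curve onto a size x size canvas."""
    beat = np.asarray(beat, dtype=np.float32)
    amp = (beat - beat.min()) / (beat.max() - beat.min() + 1e-8)  # scale to [0, 1]
    cols = np.linspace(0, size - 1, len(beat)).astype(int)        # time -> x pixels
    rows = ((1.0 - amp) * (size - 1)).astype(int)                 # amplitude -> y pixels
    canvas = np.zeros((size, size), dtype=np.uint8)
    canvas[rows, cols] = 255                                      # draw the curve
    return canvas                                                 # feed to a 2D CNN

img = beat_to_image(np.sin(np.linspace(0, 2 * np.pi, 260)))
print(img.shape)  # (128, 128)
```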

Classifying large-scale networks into several categories and distinguishing them according to their fine structure is of great importance, with several real-life applications. However, most studies of complex networks focus on properties of a single network and seldom on classification, clustering, and comparison between different networks, in which the network is treated as a whole. Due to the non-Euclidean properties of the data, conventional methods can hardly be applied to networks directly. In this paper, we propose a novel framework, the complex network classifier (CNC), which integrates network embedding and a convolutional neural network to tackle the problem of network classification. By training the classifier on synthetic complex network data and real international trade network data, we show that CNC can not only classify networks with high accuracy and robustness but can also extract the features of the networks automatically.
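To illustrate how a network embedding can be handed to a standard 2D CNN, the snippet below rasterizes 2D node embeddings into a density grid; the embedding method, grid size, and normalization are assumptions rather than CNC's exact pipeline.

```python
import numpy as np

def embedding_to_image(node_xy, size=32):
    """Toy rasterization: 2D node embeddings (e.g., from a random-walk
    embedding reduced to 2D) are binned into a size x size density grid
    that a standard 2D CNN can classify. Details are assumptions."""
    xy = np.asarray(node_xy, dtype=np.float32)
    xy = (xy - xy.min(0)) / (xy.max(0) - xy.min(0) + 1e-8)   # scale to [0, 1]
    grid = np.zeros((size, size), dtype=np.float32)
    for x, y in (xy * (size - 1)).astype(int):
        grid[y, x] += 1.0                                     # node density per cell
    return grid / max(grid.max(), 1.0)

print(embedding_to_image(np.random.rand(200, 2)).shape)  # (32, 32)
```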

Faster RCNN has achieved great success in generic object detection, including PASCAL object detection and MS COCO object detection. In this report, we propose a carefully designed Faster RCNN method named FDNet1.0 for face detection. Several techniques were employed, including multi-scale training, multi-scale testing, a lightly designed RCNN, some tricks for inference, and a vote-based ensemble method. Our method achieves two first places and one second place in three tasks on the WIDER FACE validation dataset (easy set, medium set, hard set).

We investigate video classification via a two-stream convolutional neural network (CNN) design that directly ingests information extracted from compressed video bitstreams. Our approach begins with the observation that all modern video codecs divide the input frames into macroblocks (MBs). We demonstrate that selective access to MB motion vector (MV) information within compressed video bitstreams can also provide for selective, motion-adaptive, MB pixel decoding (a.k.a., MB texture decoding). This in turn allows for the derivation of spatio-temporal video activity regions at extremely high speed in comparison to conventional full-frame decoding followed by optical flow estimation. In order to evaluate the accuracy of a video classification framework based on such activity data, we independently train two CNN architectures on MB texture and MV correspondences and then fuse their scores to derive the final classification of each test video. Evaluation on two standard datasets shows that the proposed approach is competitive to the best two-stream video classification approaches found in the literature. At the same time: (i) a CPU-based realization of our MV extraction is over 977 times faster than GPU-based optical flow methods; (ii) selective decoding is up to 12 times faster than full-frame decoding; (iii) our proposed spatial and temporal CNNs perform inference at 5 to 49 times lower cloud computing cost than the fastest methods from the literature.
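The final fusion step can be illustrated with a simple late-fusion sketch: softmax-normalize each stream's scores, average them with a weight, and take the arg-max class. The equal 0.5 weighting below is an assumption; the paper tunes its own fusion of the texture and motion-vector streams.

```python
import numpy as np

def fuse_two_stream(texture_logits, mv_logits, w_texture=0.5):
    """Late fusion of two-stream scores: softmax each stream, weighted
    average, arg-max class. The weight is an assumed value."""
    def softmax(z):
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    fused = (w_texture * softmax(texture_logits)
             + (1.0 - w_texture) * softmax(mv_logits))
    return fused.argmax(axis=-1), fused

pred, probs = fuse_two_stream(np.random.randn(4, 101), np.random.randn(4, 101))
print(pred.shape, probs.shape)  # (4,) (4, 101)
```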
