
Index modulation (IM) reduces the power consumption and hardware cost of multiple-input multiple-output (MIMO) systems by activating only a subset of the antennas for data transmission. However, IM significantly increases the complexity of the receiver and requires accurate channel estimation to guarantee its performance. To tackle these challenges, in this paper we design a deep learning (DL) based detector for IM-aided MIMO (IM-MIMO) systems. We first formulate the detection process as a sparse reconstruction problem by exploiting the inherent structure of IM. Then, based on a greedy strategy, we design a DL-based detector, called IMRecoNet, to realize this sparse reconstruction. Unlike general neural networks, we introduce complex-valued operations to handle the complex signals encountered in communication systems. To the best of our knowledge, this is the first attempt to introduce complex-valued neural networks into detector design for IM-MIMO systems. Finally, to verify the adaptability and robustness of the proposed detector, simulations are carried out under inaccurate channel state information (CSI) and correlated MIMO channels. The simulation results demonstrate that the proposed detector outperforms existing algorithms in terms of antenna recognition accuracy and bit error rate across various scenarios.
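To make the complex-valued operations concrete, here is a minimal sketch (in NumPy, not the paper's actual IMRecoNet code) of a complex-valued dense layer built from real arithmetic; all names and dimensions are illustrative.

```python
import numpy as np

def complex_dense(x, W, b):
    """A complex-valued fully connected layer, y = W x + b.

    Implemented with real arithmetic: for W = Wr + j*Wi and x = xr + j*xi,
    the product is (Wr@xr - Wi@xi) + j*(Wr@xi + Wi@xr).
    """
    xr, xi = x.real, x.imag
    Wr, Wi = W.real, W.imag
    yr = Wr @ xr - Wi @ xi + b.real
    yi = Wr @ xi + Wi @ xr + b.imag
    return yr + 1j * yi

# Toy usage: a received signal vector with 4 complex entries.
rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
W = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
b = np.zeros(8, dtype=complex)
y = complex_dense(x, W, b)
assert np.allclose(y, W @ x + b)  # matches a direct complex matmul
```

Stacking such layers with a split activation (e.g. applying a nonlinearity to the real and imaginary parts separately) yields a complex-valued network in the spirit the abstract describes.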

Related Content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural networks research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analyses, to engineering and technology applications that make substantial use of neural network concepts and techniques. This uniquely broad scope facilitates the exchange of ideas between biological and technological research and helps foster the development of the interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the fields of expertise represented on the Neural Networks editorial board include psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles appear in one of five sections: Cognitive Science, Neuroscience, Learning Systems, Mathematical and Computational Analysis, and Engineering and Applications. Official website:

Dilated and transposed convolutions are widely used in modern convolutional neural networks (CNNs). These kernels are used extensively during CNN training and inference in applications such as image segmentation and high-resolution image generation. Although these kernels have grown in popularity, they stress current compute systems due to their high memory intensity, exascale compute demands, and large energy consumption. We find that commonly used low-power CNN inference accelerators based on spatial architectures are not optimized for either of these convolutional kernels. Dilated and transposed convolutions introduce significant zero padding when mapped to the underlying spatial architecture, significantly degrading performance and energy efficiency. Existing approaches that address this issue require significant design changes to the otherwise simple, efficient, and well-adopted architectures used to compute direct convolutions. To address this challenge, we propose EcoFlow, a new set of dataflows and mapping algorithms for dilated and transposed convolutions. These algorithms are tailored to execute efficiently on existing low-cost, small-scale spatial architectures and require minimal changes to the network-on-chip of existing accelerators. EcoFlow eliminates zero padding through careful dataflow orchestration and data mapping tailored to the spatial architecture. EcoFlow enables flexible and high-performance transposed and dilated convolutions on architectures that are otherwise optimized for CNN inference. We evaluate the efficiency of EcoFlow on CNN training workloads and generative adversarial network (GAN) training workloads. Experiments in our new cycle-accurate simulator show that, compared to state-of-the-art CNN inference accelerators, EcoFlow 1) reduces end-to-end CNN training time by 7-85%, and 2) improves end-to-end GAN training performance by 29-42%.
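To see where the inefficiency comes from, the following toy sketch (not EcoFlow itself) implements a 1-D transposed convolution by explicit zero insertion; on a spatial accelerator those inserted zeros become wasted multiply-accumulates, which is precisely the work EcoFlow's dataflows avoid materializing.

```python
import numpy as np

def transposed_conv1d_naive(x, k, stride=2):
    """1-D transposed convolution via explicit zero insertion.

    The upsampled input has (stride - 1) zeros between every pair of
    samples, so a direct convolution over it wastes most of its
    multiply-accumulates on zeros -- the inefficiency EcoFlow targets.
    """
    up = np.zeros(len(x) * stride - (stride - 1))
    up[::stride] = x                      # insert zeros between samples
    pad = len(k) - 1
    up = np.pad(up, pad)                  # full-convolution padding
    return np.array([up[i:i + len(k)] @ k[::-1]
                     for i in range(len(up) - len(k) + 1)])

x = np.array([1.0, 2.0, 3.0])
k = np.array([0.5, 1.0, 0.5])
print(transposed_conv1d_naive(x, k))      # most MACs touch inserted zeros
```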

Industry has gradually moved towards application-specific hardware accelerators in order to attain higher efficiency. While this paradigm shift is already starting to show promising results, designers need to spend considerable manual effort and run a large number of time-consuming simulations to find accelerators that can accelerate multiple target applications while obeying design constraints. Moreover, such a "simulation-driven" approach must be re-run from scratch every time the set of target applications or design constraints changes. An alternative paradigm is a "data-driven", offline approach that utilizes logged simulation data to architect hardware accelerators without needing any further simulation. Such an approach not only alleviates the need to run time-consuming simulations, but also enables data reuse and applies even when the set of target applications changes. In this paper, we develop such a data-driven offline optimization method for designing hardware accelerators, dubbed PRIME, that enjoys all of these properties. Our approach learns a conservative, robust estimate of the desired cost function, utilizes infeasible points, and optimizes the design against this estimate without any additional simulator queries during optimization. PRIME architects accelerators -- tailored to both single and multiple applications -- improving performance over state-of-the-art simulation-driven methods by about 1.54x and 1.20x, while considerably reducing the required total simulation time by 93% and 99%, respectively. In addition, PRIME architects effective accelerators for unseen applications in a zero-shot setting, outperforming simulation-based methods by 1.26x.
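The following minimal sketch illustrates the data-driven idea in spirit only: it fits a simple surrogate to logged (design, cost) pairs, adds a crude conservatism penalty that grows away from the data, and optimizes against the surrogate with no further simulator queries. PRIME's actual learned conservative objective and optimizer are more sophisticated; every function and constant here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Logged simulation data: design vectors -> measured cost (lower is better).
X = rng.uniform(0, 1, size=(200, 5))
y = ((X - 0.3) ** 2).sum(axis=1) + 0.05 * rng.standard_normal(200)

def features(D):
    """Quadratic features for a simple ridge-regression surrogate."""
    return np.hstack([D, D ** 2, np.ones((len(D), 1))])

F = features(X)
w = np.linalg.solve(F.T @ F + 1e-3 * np.eye(F.shape[1]), F.T @ y)

def conservative_cost(D, alpha=2.0):
    """Surrogate cost plus a penalty far from the logged designs,
    a crude stand-in for PRIME's learned conservatism."""
    pred = features(D) @ w
    dist = np.min(np.linalg.norm(D[:, None, :] - X[None, :, :], axis=-1), axis=1)
    return pred + alpha * dist

# Optimize against the surrogate only -- no simulator queries here.
cand = rng.uniform(0, 1, size=(5000, 5))
best = cand[np.argmin(conservative_cost(cand))]
print("selected design:", np.round(best, 2))
```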

Intelligent reflecting surfaces (IRSs) enable multiple-input multiple-output (MIMO) transmitters to modify the communication channels between transmitters and receivers. In the presence of eavesdropping terminals, this degree of freedom can be used to effectively suppress information leakage towards such malicious terminals, which leads to significant potential secrecy gains in IRS-aided MIMO systems. This work exploits these gains via a tractable joint design of downlink beamformers and IRS phase shifts. To this end, we consider a generic IRS-aided MIMO wiretap setting and invoke fractional programming and alternating optimization techniques to iteratively find the beamformers and phase shifts that maximize the achievable weighted secrecy sum-rate. Our design yields two low-complexity algorithms for joint beamforming and phase-shift tuning. The performance of the proposed algorithms is numerically evaluated and compared against benchmarks. The results reveal that integrating IRSs into MIMO systems not only boosts the secrecy performance of the system, but also improves its robustness against passive eavesdropping.
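As an illustration of the alternating structure (not the paper's algorithms), the sketch below alternately updates a maximum-ratio beamformer and per-element IRS phase shifts for a toy single-user SNR objective rather than the weighted secrecy sum-rate; the channels and dimensions are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 4, 16                              # transmit antennas, IRS elements

# Toy channels: direct link, Tx->IRS, and IRS->user (not the wiretap model).
h_d = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
G = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
h_r = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

theta = np.zeros(N)                       # IRS phase shifts

def effective_channel(theta):
    return h_d + G.conj().T @ (np.exp(1j * theta) * h_r)

for it in range(20):                      # alternating optimization
    h = effective_channel(theta)
    w = h / np.linalg.norm(h)             # step 1: MRT beamformer, phases fixed
    grid = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    for n in range(N):                    # step 2: per-element phase grid search
        gains = []
        for p in grid:
            theta[n] = p
            gains.append(abs(effective_channel(theta) @ w.conj()) ** 2)
        theta[n] = grid[int(np.argmax(gains))]

print("final channel gain:", abs(effective_channel(theta) @ w.conj()) ** 2)
```

Each step improves the objective with the other block of variables held fixed, which is the convergence argument alternating-optimization designs of this kind rely on.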

Keypoint-based methods are a relatively new paradigm in object detection, eliminating the need for anchor boxes and offering a simplified detection framework. The keypoint-based CornerNet achieves state-of-the-art accuracy among single-stage detectors. However, this accuracy comes at a high processing cost. In this work, we tackle the problem of efficient keypoint-based object detection and introduce CornerNet-Lite. CornerNet-Lite is a combination of two efficient variants of CornerNet: CornerNet-Saccade, which uses an attention mechanism to eliminate the need for exhaustively processing all pixels of the image, and CornerNet-Squeeze, which introduces a new compact backbone architecture. Together, these two variants address the two critical use cases in efficient object detection: improving efficiency without sacrificing accuracy, and improving accuracy at real-time efficiency. CornerNet-Saccade is suitable for offline processing, improving the efficiency of CornerNet by 6.0x and the AP by 1.0% on COCO. CornerNet-Squeeze is suitable for real-time detection, improving both the efficiency and accuracy of the popular real-time detector YOLOv3 (34.4% AP at 34 ms for CornerNet-Squeeze compared to 33.0% AP at 39 ms for YOLOv3 on COCO). Together, these contributions reveal, for the first time, the potential of keypoint-based detection for applications requiring processing efficiency.
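For readers unfamiliar with keypoint-based detection, here is a toy corner-decoding sketch in the CornerNet style: peaks of top-left and bottom-right corner heatmaps are paired by embedding similarity to form boxes. The real decoder also uses corner pooling, sub-pixel offsets, and NMS; the thresholds and data below are simplified assumptions.

```python
import numpy as np

def decode_corners(tl_heat, br_heat, tl_emb, br_emb, k=3,
                   score_thresh=0.4, emb_thresh=0.5):
    """Pair top-k corner-heatmap peaks whose 1-D embeddings are close."""
    def peaks(heat):
        idx = np.argsort(heat, axis=None)[-k:]
        pts = np.stack(np.unravel_index(idx, heat.shape), axis=1)
        return [(y, x) for y, x in pts if heat[y, x] > score_thresh]

    boxes = []
    for (y1, x1) in peaks(tl_heat):
        for (y2, x2) in peaks(br_heat):
            if (y2 > y1 and x2 > x1
                    and abs(tl_emb[y1, x1] - br_emb[y2, x2]) < emb_thresh):
                boxes.append((int(x1), int(y1), int(x2), int(y2)))
    return boxes

H = W = 8
tl = np.zeros((H, W)); br = np.zeros((H, W))
tl[1, 1] = 1.0; br[5, 6] = 1.0            # one object's two corners
te = np.zeros((H, W)); be = np.zeros((H, W))
te[1, 1] = 0.2; be[5, 6] = 0.3            # matching embeddings
print(decode_corners(tl, br, te, be))     # -> [(1, 1, 6, 5)]
```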

With the widespread application of deep convolutional neural networks (DCNNs), it becomes increasingly important for DCNNs not only to make accurate predictions but also to explain how they make their decisions. In this work, we propose a CHannel-wise disentangled InterPretation (CHIP) model to provide visual interpretations of the predictions of DCNNs. The proposed model distills the class-discriminative importance of channels in networks by utilizing sparse regularization, and we introduce a network perturbation technique to learn the model. The proposed model can not only distill global-perspective knowledge from networks but also present class-discriminative visual interpretations for specific predictions. Notably, the proposed model is able to interpret different layers of a network without re-training. By combining the distilled interpretation knowledge from different layers, we further propose the Refined CHIP visual interpretation, which is both high-resolution and class-discriminative. Experimental results on the standard dataset demonstrate that the proposed model provides more promising visual interpretations of network predictions in the image classification task than existing visual interpretation methods. The proposed method also outperforms related approaches on the ILSVRC 2015 weakly-supervised localization task.
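A minimal sketch of the sparse-regularization idea (not the CHIP model itself): learn a per-channel importance vector for a target class under an L1 penalty, so that the mass concentrates on class-discriminative channels. The synthetic data, the logistic proxy objective, and the ISTA-style update are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
C, n = 16, 400

# Toy setup: per-channel activation strengths; only channels 0-2 are
# class-discriminative for the target class.
X = rng.standard_normal((n, C))
y = (X[:, :3].sum(axis=1) > 0).astype(float)

g = np.zeros(C)                           # channel importance (to be distilled)
lam, lr = 0.02, 0.5

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ g)))    # predict the class from gated channels
    grad = X.T @ (p - y) / n              # logistic-loss gradient
    g -= lr * grad
    g = np.sign(g) * np.maximum(np.abs(g) - lr * lam, 0.0)  # L1 proximal step

print(np.round(g, 2))  # importance concentrates on the discriminative channels
```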

Transfer learning is one of the subjects undergoing intense study in the area of machine learning. In object recognition and object detection there are known experiments on the transferability of parameters, but not for neural networks suitable for real-time embedded object detection, such as the SqueezeDet neural network. We use transfer learning to accelerate the training of SqueezeDet on a new group of classes. We also conduct experiments to study the transferability and co-adaptation phenomena introduced by the transfer learning process. To accelerate training, we propose a new implementation of SqueezeDet training that provides a faster pipeline for data processing and achieves a $1.8$ times speedup compared to the initial implementation. Finally, we created a mechanism for automatic hyperparameter optimization using an empirical method.
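A minimal transfer-learning sketch in PyTorch, following the usual recipe such work builds on: reuse pretrained feature layers, reinitialize the head for the new group of classes, and freeze the transferred parameters. The layer shapes and the checkpoint path are hypothetical; this is not the SqueezeDet codebase.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(                 # stand-in for pretrained conv layers
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
)
# backbone.load_state_dict(torch.load("pretrained.pt"))  # hypothetical checkpoint

for p in backbone.parameters():           # freeze the transferred layers
    p.requires_grad = False

num_new_classes = 5                       # the new group of classes
head = nn.Conv2d(128, num_new_classes, 1) # freshly initialized detection head

optimizer = torch.optim.SGD(head.parameters(), lr=1e-3, momentum=0.9)

x = torch.randn(2, 3, 64, 64)             # dummy batch
out = head(backbone(x))                   # only the head receives gradients
print(out.shape)                          # torch.Size([2, 5, 16, 16])
```

Freezing the backbone is what makes co-adaptation observable: if accuracy drops relative to full fine-tuning, the transferred layers were co-adapted with the ones that were replaced.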

The ever-growing interest in the acquisition and development of unmanned aerial vehicles (UAVs), commonly known as drones, has in the past few years produced a very promising and effective technology. Because of their small size and fast deployment, UAVs have proven effective at collecting data over unreachable areas and restricted coverage zones. Moreover, their flexibility enables them to collect information at a very high level of detail, leading to high-resolution images. UAVs originally served mainly in military scenarios, but in the last decade they have been broadly adopted in civilian applications as well. The task of aerial surveillance and situation awareness is usually accomplished by integrating intelligence, surveillance, observation, and navigation systems, all interacting within the same operational framework. To build this capability, UAVs are well-suited tools that can be equipped with a wide variety of sensors, such as cameras or radars. Deep learning has been widely recognized as a prominent approach in many computer vision applications. Specifically, one-stage and two-stage object detectors are regarded as the two most important groups of convolutional neural network based object detection methods. One-stage object detectors usually outperform two-stage object detectors in speed, but they normally trail two-stage detectors in detection accuracy. In this study, the focal-loss-based RetinaNet, a one-stage object detector, is utilized for UAV-based object detection, matching the speed of regular one-stage detectors while surpassing two-stage detectors in accuracy. State-of-the-art performance is shown on the UAV-captured Stanford Drone Dataset (SDD).
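The focal loss that RetinaNet is built on has a standard form; a compact PyTorch version is sketched below. The alpha and gamma defaults follow the RetinaNet paper, while the example tensors are made up.

```python
import torch

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss as in RetinaNet: down-weights easy examples by
    (1 - p_t)^gamma so training focuses on hard foreground/background cases."""
    p = torch.sigmoid(logits)
    ce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)       # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

logits = torch.tensor([2.0, -1.0, 0.5])
targets = torch.tensor([1.0, 0.0, 1.0])
print(focal_loss(logits, targets))
```

The modulating factor is what lets a one-stage detector train on the extreme foreground/background imbalance that two-stage detectors avoid via proposal filtering.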

Object detection is a fundamental and challenging problem in aerial and satellite image analysis. Recently, the two-stage detector Faster R-CNN was proposed and demonstrated to be a promising tool for object detection in optical remote sensing images; however, objects in remote sensing images exhibit complex sparse and dense distributions. It is unreasonable to treat all images with the same region proposal strategy, and doing so limits the performance of two-stage detectors. In this paper, we propose a novel and effective approach, named the deep adaptive proposal network (DAPNet), which addresses this complexity by learning a new category prior network (CPN) on top of the existing Faster R-CNN architecture. Moreover, unlike those of a traditional region proposal network (RPN), the candidate regions produced by DAPNet carry a predicted category for each region, and they are combined with the per-category object counts generated by the category prior network to obtain a suitable number of candidate boxes for each image. These candidate boxes can satisfy detection tasks in both sparse and dense scenes. The performance of the proposed framework has been evaluated on the challenging NWPU VHR-10 dataset. Experimental results demonstrate the superiority of the proposed framework over the state of the art.
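To illustrate the adaptive-proposal idea (a loose sketch, not DAPNet's actual mechanism), the code below scales the number of kept proposals per category with object counts such as a category prior network might predict; the function name, the per-object budget, and the scores are all invented for the example.

```python
import numpy as np

def adaptive_proposals(scores, counts, per_object=30):
    """Keep roughly `per_object` proposals per predicted object of each
    category, instead of a fixed per-image proposal budget."""
    keep = []
    for cat, n_obj in counts.items():
        k = min(int(n_obj) * per_object, len(scores[cat]))
        keep.extend((cat, i) for i in np.argsort(scores[cat])[-k:][::-1])
    return keep

rng = np.random.default_rng(4)
scores = {"plane": rng.random(500), "ship": rng.random(500)}
counts = {"plane": 2, "ship": 11}          # a sparse vs. a dense category
kept = adaptive_proposals(scores, counts)
print(len(kept))                           # 60 + 330, not a fixed budget
```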

This paper introduces a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy for human-robot interaction without the use of external sensors or human supervision. The robot learns to focus its attention on groups of people from its own audio-visual experiences, independently of the number of people, their positions, and their physical appearances. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network using a simulated environment that mimics realistic scenarios involving speaking and silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust with respect to parameter estimation, i.e. the parameter values yielded by the method do not have a decisive impact on performance. The best results are obtained when audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step towards the autonomous learning of socially acceptable gaze behavior.
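A minimal sketch of recurrent Q-learning of the kind the paper combines (not the authors' architecture): a GRU summarizes the history of audio-visual observations and a linear head outputs Q-values for the gaze actions, trained with a TD(0) target. All dimensions and the dummy batch are illustrative.

```python
import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    """GRU over the observation history, linear head over gaze actions."""
    def __init__(self, obs_dim=10, hidden=32, n_actions=4):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq):            # obs_seq: (batch, time, obs_dim)
        h_seq, _ = self.gru(obs_seq)
        return self.q_head(h_seq[:, -1])   # Q-values from the last hidden state

net = RecurrentQNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
gamma = 0.99

obs, next_obs = torch.randn(8, 5, 10), torch.randn(8, 6, 10)  # dummy batch
actions = torch.randint(0, 4, (8,))
rewards = torch.randn(8)

q = net(obs).gather(1, actions[:, None]).squeeze(1)
with torch.no_grad():                      # bootstrap target, no gradient
    target = rewards + gamma * net(next_obs).max(dim=1).values
loss = nn.functional.mse_loss(q, target)   # one TD(0) Q-learning step
opt.zero_grad(); loss.backward(); opt.step()
```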

The mobile network that millions of people use every day is one of the most complex systems in the real world. Optimizing the mobile network to meet exploding customer demand and reduce CAPEX/OPEX poses greater challenges than those addressed in prior work. Indeed, learning to solve complex real-world problems in a way that benefits everyone and makes the world better has long been an ultimate goal of AI. However, applying deep reinforcement learning (DRL) to complex real-world problems remains unsolved, due to imperfect information, data scarcity, complex real-world rules, potential negative real-world impact, and so on. To bridge this reality gap, we propose a sim-to-real framework that directly transfers learning from simulation to the real world without any training in the real world. First, we distill the temporal-spatial relationships between cells and mobile users into a scalable 3D image-like tensor to best characterize the partially observed mobile network. Second, inspired by AlphaGo, we introduce a novel self-play mechanism that empowers DRL agents to gradually improve by competing for the best record on multiple tasks, just as athletes compete for world records in the decathlon. Third, a decentralized DRL method is proposed to coordinate multiple agents to compete and cooperate as a team, maximizing the global reward and minimizing potential negative impact. Using 7693 unseen test tasks over 160 unseen mobile networks in another simulator, as well as 6 field trials on 4 commercial mobile networks in the real world, we demonstrate the capability of this sim-to-real framework to transfer learning directly not only from one simulator to another, but also from simulation to the real world. This is the first time that a DRL agent has successfully transferred its learning directly from simulation to a very complex real-world problem with imperfect information, complex rules, a huge state/action space, and multi-agent interactions.
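As a rough illustration of the first step (an assumption-laden sketch, not the paper's encoding), the code below rasterizes cell and user positions into a fixed-size image-like tensor that a CNN-based DRL agent could consume regardless of network size; the grid resolution, extent, and channel layout are invented for the example.

```python
import numpy as np

def rasterize_network(cells, users, grid=32, extent=1000.0):
    """Project cell sites and user positions onto a (channels, grid, grid)
    tensor, one channel per entity type, with counts per grid pixel."""
    tensor = np.zeros((2, grid, grid))
    for ch, points in enumerate((cells, users)):
        for x, y in points:
            i = min(int(y / extent * grid), grid - 1)
            j = min(int(x / extent * grid), grid - 1)
            tensor[ch, i, j] += 1.0
    return tensor

rng = np.random.default_rng(5)
cells = rng.uniform(0, 1000, size=(7, 2))    # 7 cell sites in a 1 km square
users = rng.uniform(0, 1000, size=(200, 2))  # 200 mobile users
state = rasterize_network(cells, users)
print(state.shape, state[1].sum())           # (2, 32, 32) 200.0
```

Because the tensor shape is independent of how many cells and users exist, the same agent can be applied to networks of very different sizes, which is what makes such an encoding "scalable".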
