
Field of view (FoV) prediction is critical in 360-degree video multicast, a key component of emerging Virtual Reality (VR) and Augmented Reality (AR) applications. Most current prediction methods that combine saliency detection with FoV information neither account for the fact that the distortion of projected 360-degree videos invalidates the weight sharing of traditional convolutional networks, nor adequately consider the difficulty of obtaining complete multi-user FoV information, both of which degrade prediction performance. This paper proposes a spherical convolution-empowered FoV prediction method: a multi-source prediction framework that combines salient features extracted from 360-degree video with limited FoV feedback information. A spherical convolutional neural network (CNN) replaces the traditional two-dimensional CNN to eliminate the weight-sharing failure caused by video projection distortion. Specifically, salient spatiotemporal features are extracted by a spherical convolution-based saliency detection model, and the limited FoV feedback is modeled as a time series by a spherical convolution-empowered gated recurrent unit (GRU) network. Finally, the extracted salient video features are combined with the feedback features to predict future user FoVs. Experimental results show that the proposed method outperforms other prediction methods.
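To make the recurrent part of the pipeline concrete, below is a minimal sketch of a convolutional GRU cell of the kind a spherical convolution-empowered GRU suggests. Standard `Conv2d` layers are used here as stand-ins for spherical convolutions; a true spherical convolution would adapt the kernel sampling per latitude to undo equirectangular projection distortion. All layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Minimal ConvGRU sketch; Conv2d stands in for a spherical convolution.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        # Gates computed from the concatenated input and hidden state.
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

    def forward(self, x, h):
        zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=1)))
        z, r = zr.chunk(2, dim=1)           # update and reset gates
        n = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * n          # new hidden state

# One step on a toy equirectangular FoV heatmap (batch 1, 1 channel, 64x128).
cell = ConvGRUCell(in_ch=1, hid_ch=8)
h = torch.zeros(1, 8, 64, 128)
h = cell(torch.rand(1, 1, 64, 128), h)
```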

Related Content

The most common sensing modalities in a robot perception system are vision and touch, which together provide global and highly localized data for manipulation. However, these modalities often fail to adequately capture the behavior of target objects during the critical moments when they transition out of static, controlled contact with an end-effector into dynamic, uncontrolled motion. In this work, we present a novel multimodal visuotactile sensor that simultaneously provides tactile and proximity depth data. The sensor combines an RGB camera and an air pressure sensor for touch sensing with an infrared time-of-flight (ToF) camera for proximity sensing, leveraging a selectively transmissive soft membrane to enable the dual sensing modalities. We present the mechanical design, fabrication techniques, algorithm implementations, and evaluation of the sensor's tactile and proximity modalities. The sensor is demonstrated in three open-loop robotic tasks: approaching and contacting an object, catching, and throwing. The fusion of tactile and proximity data could capture key information about a target object's transition behavior for sensor-based control in dynamic manipulation.
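As a hedged sketch of how the two modalities might be fused in software: contact can be flagged when the internal air pressure deviates from its resting baseline, while the ToF depth image tracks an object approaching before contact. The thresholds, baseline value, and reading sources below are illustrative assumptions, not the paper's calibration.

```python
# Toy fusion of pressure-based contact and ToF-based proximity (assumed values).
import numpy as np

PRESSURE_BASELINE_PA = 101_325.0   # resting membrane pressure (assumed)
CONTACT_DELTA_PA = 150.0           # pressure change that signals touch
NEAR_FIELD_M = 0.05                # proximity alert distance

def classify_state(pressure_pa: float, tof_depth_m: np.ndarray) -> str:
    if abs(pressure_pa - PRESSURE_BASELINE_PA) > CONTACT_DELTA_PA:
        return "contact"
    if np.nanmin(tof_depth_m) < NEAR_FIELD_M:
        return "proximal"          # object inside the near field, no touch yet
    return "free"

depth = np.full((60, 80), 0.30)
depth[20:30, 30:40] = 0.03         # a patch of the scene closing in
print(classify_state(101_330.0, depth))   # -> "proximal"
```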

Hybrid precoding is a cost-efficient technique for millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) communications. This paper proposes a deep learning approach that uses a distributed neural network for hybrid analog-and-digital precoding design with limited feedback. The proposed distributed neural precoding network, called DNet, pursues two objectives. First, DNet realizes channel state information (CSI) compression with a distributed neural network architecture, enabling practical deployment across multiple users. Specifically, the network is composed of multiple independent sub-networks with identical structure and parameters, which reduces both the number of training parameters and the network complexity. Second, DNet learns to compute the hybrid precoders from CSI reconstructed from the limited feedback. Unlike existing black-box neural network designs, DNet is structured according to the matrix form of the hybrid precoding computation. Simulation results show that the proposed DNet improves performance by up to nearly 50% over traditional limited-feedback precoding methods across various CSI compression ratios.
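The distributed idea can be sketched as follows, under stated assumptions: every user runs the same small encoder (shared weights) to compress its CSI into a short feedback vector, and the base station decodes the stacked feedback into precoders. The dimensions, layer sizes, and the plain fully connected decoder are illustrative, not DNet's actual structure-aware design.

```python
# Sketch: shared per-user CSI encoder + central precoding decoder.
import torch
import torch.nn as nn

N_TX, N_USERS, FEEDBACK_DIM = 64, 4, 16

class UserEncoder(nn.Module):              # identical at every user
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * N_TX, 128), nn.ReLU(),
            nn.Linear(128, FEEDBACK_DIM), nn.Tanh())  # bounded feedback
    def forward(self, csi):                # csi: (batch, 2*N_TX) re/im parts
        return self.net(csi)

class BSDecoder(nn.Module):                # central precoding head
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_USERS * FEEDBACK_DIM, 256), nn.ReLU(),
            nn.Linear(256, 2 * N_TX * N_USERS))  # re/im of digital precoders
    def forward(self, fb):
        return self.net(fb.flatten(1))

enc, dec = UserEncoder(), BSDecoder()
csi = torch.randn(8, N_USERS, 2 * N_TX)            # batch of per-user CSI
fb = torch.stack([enc(csi[:, u]) for u in range(N_USERS)], dim=1)
precoder = dec(fb).view(8, N_USERS, N_TX, 2)       # one vector per user
```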

Seven million people suffer surgical complications each year, but with sufficient surgical training and review, 50% of these complications could be prevented. To improve surgical performance, existing research applies various deep learning (DL) technologies, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), to automate surgical tool and workflow detection. However, there is room to improve accuracy, and real-time analysis remains limited due to the complexity of CNNs. In this research, a novel DL architecture is proposed that integrates visual simultaneous localization and mapping (vSLAM) into Mask R-CNN. This architecture, vSLAM-CNN (vCNN), for the first time combines the best of both worlds: (1) vSLAM for object detection, focusing on geometric information for region proposals, and (2) CNNs for object recognition, focusing on semantic information for image classification, merged into one joint end-to-end training process. Using spatiotemporal information in addition to visual features, the method is evaluated on the M2CAI 2016 challenge datasets, achieving state-of-the-art results with 96.8 mAP for tool detection and a 97.5 mean Jaccard score for workflow detection, surpassing all previous works, and reaching 50 FPS, 10x faster than region-based CNNs. A region proposal module (RPM) replaces the region proposal network (RPN) in Mask R-CNN, accurately placing bounding boxes and reducing the annotation requirement. Furthermore, a Microsoft HoloLens 2 application is developed to provide an augmented reality (AR)-based solution for surgical training and assistance.
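One plausible reading of a geometry-driven region proposal module is sketched below: vSLAM map points are projected into the image and clustered, and each cluster's bounding box becomes a proposal. This is an assumed reconstruction of the idea for illustration, not the paper's exact RPM.

```python
# Geometry-driven region proposals from projected vSLAM landmarks (assumed).
import numpy as np
from sklearn.cluster import DBSCAN

def propose_regions(points_3d, K, eps_px=25, min_pts=5):
    """points_3d: (N, 3) landmarks in the camera frame; K: 3x3 intrinsics."""
    in_front = points_3d[points_3d[:, 2] > 0]      # keep points ahead of camera
    uv = (K @ in_front.T).T
    uv = uv[:, :2] / uv[:, 2:3]                    # perspective divide
    labels = DBSCAN(eps=eps_px, min_samples=min_pts).fit_predict(uv)
    boxes = []
    for lbl in set(labels) - {-1}:                 # -1 marks noise points
        pts = uv[labels == lbl]
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        boxes.append((x0, y0, x1, y1))
    return boxes

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
cloud = np.random.default_rng(0).normal([0, 0, 2.0], 0.05, size=(60, 3))
print(propose_regions(cloud, K))
```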

In large-scale dynamic wireless networks, the overhead caused by channel estimation (CE) is becoming one of the main performance bottlenecks. This is due to the large number of users whose channels must be estimated, user mobility, and the rapid channel variation caused by the use of high-frequency spectrum (e.g., millimeter wave). In this work, we propose a new hybrid channel estimation/prediction (CEP) scheme to reduce overhead in time-division duplex (TDD) cell-free massive multiple-input multiple-output (mMIMO) systems. In the proposed scheme, each user sends a pilot signal only once within a given number (window) of coherence intervals (CIs). Minimum mean-square error (MMSE) estimation is then used to estimate the channel of that CI, while a deep neural network (DNN) predicts the channels of the remaining CIs in the window. The DNN exploits the temporal correlation between consecutive CIs and the received pilot signals to improve prediction accuracy. By doing so, CE overhead is reduced by at least 50 percent at the cost of negligible CE error for practical user mobility settings. Consequently, the proposed CEP scheme improves spectral efficiency compared with the conventional MMSE CE approach, especially when the number of users is large, as we demonstrate numerically.
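A structural sketch of the hybrid idea, under simplified assumptions: the user sends a pilot only in the first CI of a window, a scalar MMSE estimator recovers that CI's channel, and a small (here untrained) network predicts the remaining CIs from the estimate and the received pilot. Scalar channels and the toy predictor are purely for shape; the paper's system is vector-valued cell-free mMIMO.

```python
# Hybrid CEP skeleton: MMSE for CI 0, a DNN stub for CIs 1..WINDOW-1.
import torch
import torch.nn as nn

WINDOW = 4                                  # CIs per pilot window
snr_lin, sigma_h2 = 10.0, 1.0               # linear SNR, channel variance

def mmse_estimate(y_pilot):
    # Scalar MMSE for y = h + n with a unit pilot: h_hat = c * y.
    c = sigma_h2 / (sigma_h2 + sigma_h2 / snr_lin)
    return c * y_pilot

predictor = nn.Sequential(                  # predicts CIs 1..WINDOW-1
    nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, WINDOW - 1))

h = torch.randn(())                         # true channel in CI 0
n = torch.randn(()) / snr_lin ** 0.5        # pilot noise
y = h + n                                   # received pilot in CI 0
h_hat0 = mmse_estimate(y)
h_hat_rest = predictor(torch.stack([h_hat0, y]))
print(float(h_hat0), h_hat_rest.detach().numpy())
```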

Objective: The effect of camera viewpoint was studied for visually obstructed psychomotor targeting tasks. Background: Previous research in laparoscopy and robotic teleoperation found that the complex perceptual-motor adaptations associated with misaligned viewpoints correspond to degraded manipulation performance. Because optimal camera positioning is often unavailable in restricted environments, alternative viewpoints that might mitigate these performance effects are not obvious. Methods: A virtual keyboard-controlled targeting task was distributed remotely to workers on Amazon Mechanical Turk. The static-viewpoint experiment was performed by 192 subjects with independent parameters of target direction, Fitts' law index of difficulty, viewpoint azimuthal angle (AA), and viewpoint polar angle (PA). A dynamic-viewpoint experiment was also performed by 112 subjects, in which the viewpoint AA changed after every trial. Results: AA and target direction had significant effects on performance in the static-viewpoint experiment. Movement time and travel distance increased with AA until a discrete improvement in performance occurred at 180°. Increasing AA from 225° to 315° linearly decreased movement time and distance. In the dynamic-viewpoint experiment, there were significant main effects of the current AA and the magnitude of the transition. Orthogonal-direction and no-change viewpoint transitions affected performance the least. Conclusions: Whether a static or dynamic viewing solution is implemented, viewpoint selection for targeting tasks should minimize the associated rotations within the manipulation plane. Because PA rotations had negligible performance effects, PA adjustments may extend the space of viable viewpoints. Applications: These results can inform viewpoint selection for visual feedback during psychomotor tasks.
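For reference, the index of difficulty (ID) in Fitts' law is commonly computed with the Shannon formulation, with movement time modeled as MT = a + b * ID. The coefficients in this small worked example are illustrative, not fitted values from the study.

```python
# Fitts' law: ID (Shannon formulation) and the linear movement-time model.
import math

def fitts_id(distance: float, width: float) -> float:
    return math.log2(distance / width + 1.0)       # ID in bits

def movement_time(distance, width, a=0.2, b=0.15):
    return a + b * fitts_id(distance, width)       # seconds (toy a, b)

print(fitts_id(256, 32))          # ID = log2(9) ≈ 3.17 bits
print(movement_time(256, 32))     # ≈ 0.68 s with the toy coefficients
```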

Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks; hence, late-stage fusion of final representations or predictions from each modality ('late fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses 'fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and share only what is necessary. We find that this strategy improves fusion performance while reducing computational cost. We conduct thorough ablation studies and achieve state-of-the-art results on multiple audio-visual classification benchmarks, including AudioSet, Epic-Kitchens and VGGSound. All code and models will be released.
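A minimal sketch of one fusion-bottleneck layer follows, assuming the mechanism described above: each modality's tokens may only exchange information through a few shared bottleneck latents rather than via full pairwise cross-attention. The layer sizes are arbitrary and plain `nn.MultiheadAttention` blocks stand in for the paper's transformer blocks.

```python
# One bottleneck-fusion layer: modalities meet only through shared latents.
import torch
import torch.nn as nn

D, N_BOTTLENECK = 64, 4

class BottleneckFusionLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.attn_a = nn.MultiheadAttention(D, 4, batch_first=True)
        self.attn_v = nn.MultiheadAttention(D, 4, batch_first=True)

    def forward(self, audio, video, z):
        # Each modality self-attends over [own tokens + bottlenecks] only.
        a_in = torch.cat([audio, z], dim=1)
        v_in = torch.cat([video, z], dim=1)
        a_out, _ = self.attn_a(a_in, a_in, a_in)
        v_out, _ = self.attn_v(v_in, v_in, v_in)
        audio, z_a = a_out[:, :audio.size(1)], a_out[:, audio.size(1):]
        video, z_v = v_out[:, :video.size(1)], v_out[:, video.size(1):]
        return audio, video, (z_a + z_v) / 2   # average the bottleneck views

layer = BottleneckFusionLayer()
a, v = torch.randn(2, 50, D), torch.randn(2, 196, D)
z = torch.zeros(2, N_BOTTLENECK, D)            # shared bottleneck latents
a, v, z = layer(a, v, z)
```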

Deep learning has enabled a wide range of applications and has become increasingly popular in recent years. The goal of multimodal deep learning is to create models that can process and link information from various modalities. Despite the extensive development of unimodal learning, it still cannot cover all aspects of human learning. Multimodal learning helps systems understand and analyze information better when multiple senses are engaged in its processing. This paper focuses on multiple types of modalities: image, video, text, audio, body gestures, facial expressions, and physiological signals. A detailed analysis of past and current baseline approaches and an in-depth study of recent advances in multimodal deep learning applications are provided. A fine-grained taxonomy of multimodal deep learning applications is proposed, elaborating on different application areas in depth. The architectures and datasets used in these applications are also discussed, along with their evaluation metrics. Finally, the main issues in each domain are highlighted separately, along with possible directions for future research.

Deep neural network architectures have traditionally been designed and explored by human experts in a long-lasting trial-and-error process. This process requires a huge amount of time, expertise, and resources. To address this tedious problem, we propose a novel algorithm that automatically finds optimal hyperparameters of a deep network architecture. We specifically focus on designing neural architectures for medical image segmentation. Our proposed method is based on policy gradient reinforcement learning, in which the reward function is a segmentation evaluation utility (i.e., the dice index). We show the efficacy of the proposed method and its low computational cost in comparison with state-of-the-art medical image segmentation networks. We also present a new architecture, a densely connected encoder-decoder CNN, as a strong baseline on which to apply the proposed hyperparameter search algorithm; the algorithm is applied to each layer of the baseline architecture. As an application, we train the proposed system on cine cardiac MR images from the Automated Cardiac Diagnosis Challenge (ACDC) at MICCAI 2017. Starting from the baseline segmentation architecture, the resulting network achieves state-of-the-art accuracy without any trial-and-error architecture design or close supervision of hyperparameter changes.
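The search loop can be sketched as below, under stated assumptions: a controller samples one hyperparameter choice per layer, the sampled network is trained and scored by dice, and REINFORCE updates the controller with dice as the reward. `evaluate_dice` is a placeholder for training and validating the candidate segmentation network; here it returns a random score so the sketch runs standalone.

```python
# REINFORCE over per-layer hyperparameter choices with dice as the reward.
import torch
import torch.nn as nn

N_LAYERS, N_CHOICES = 6, 4                  # e.g. 4 filter-size options/layer
controller = nn.Parameter(torch.zeros(N_LAYERS, N_CHOICES))  # policy logits
opt = torch.optim.Adam([controller], lr=0.05)
baseline = 0.0                              # moving-average reward baseline

def evaluate_dice(arch):                    # placeholder reward function
    return torch.rand(()).item()            # stand-in for the real dice index

for step in range(20):
    dist = torch.distributions.Categorical(logits=controller)
    arch = dist.sample()                    # one choice index per layer
    reward = evaluate_dice(arch)
    baseline = 0.9 * baseline + 0.1 * reward
    loss = -(dist.log_prob(arch).sum() * (reward - baseline))
    opt.zero_grad()
    loss.backward()
    opt.step()
```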

This paper introduces an online model for object detection in videos designed to run in real time on low-powered mobile and embedded devices. Our approach combines fast single-image object detection with convolutional long short-term memory (LSTM) layers to create an interleaved recurrent-convolutional architecture. Additionally, we propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. Our network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames. This approach is substantially faster than existing video detection methods, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the ImageNet VID 2015 dataset. Our model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.
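A hedged sketch of a Bottleneck-LSTM-style cell is shown below, assuming the general recipe of squeezing the concatenated input and state through a cheap bottleneck (depthwise plus pointwise convolutions) before computing the gates. Channel counts and the exact gate convolutions are illustrative, not the published layer.

```python
# Bottleneck-LSTM-style cell: cheap squeeze before the gate computation.
import torch
import torch.nn as nn

class BottleneckLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        cat = in_ch + hid_ch
        self.bottleneck = nn.Sequential(
            nn.Conv2d(cat, cat, 3, padding=1, groups=cat),  # depthwise
            nn.Conv2d(cat, hid_ch, 1), nn.ReLU6())          # pointwise squeeze
        self.gates = nn.Conv2d(hid_ch, 4 * hid_ch, 3, padding=1,
                               groups=hid_ch)               # grouped gate conv

    def forward(self, x, h, c):
        b = self.bottleneck(torch.cat([x, h], dim=1))
        i, f, g, o = self.gates(b).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = BottleneckLSTMCell(in_ch=32, hid_ch=16)
h = c = torch.zeros(1, 16, 20, 20)
h, c = cell(torch.randn(1, 32, 20, 20), h, c)   # one frame's feature map
```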

In this paper, we propose jointly learned attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for image captioning), training such existing architectures typically requires pre-defined label sequences. For multi-label classification, a robust inference process is desirable so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and long short-term memory (LSTM) models, which not only addresses the above problem but also allows visual objects of interest with varying sizes to be identified without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, our proposed network can efficiently predict multiple labels.
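A toy sketch of beam search over label sequences, as the abstract describes: at each step every beam is extended with each not-yet-used label, and the top-k partial sequences by cumulative log-probability survive. `score_next` is an assumed stand-in for the attention-LSTM's per-step label scores, with made-up unnormalized values so the sketch runs on its own.

```python
# Beam search over label sequences with a placeholder scoring function.
import math

LABELS = ["person", "dog", "car", "tree"]

def score_next(prefix):                    # placeholder conditional scores
    rest = [l for l in LABELS if l not in prefix]
    return {l: 1.0 / (len(prefix) + i + 2) for i, l in enumerate(rest)}

def beam_search(k=2, max_len=3):
    beams = [((), 0.0)]                    # (label sequence, log-prob)
    for _ in range(max_len):
        cand = []
        for seq, lp in beams:
            for lbl, p in score_next(seq).items():
                cand.append((seq + (lbl,), lp + math.log(p)))
        beams = sorted(cand, key=lambda b: b[1], reverse=True)[:k]
    return beams

print(beam_search())                       # top-2 three-label sequences
```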
