
The flourishing low-Earth-orbit (LEO) constellation communication network provides a promising solution for seamless-coverage services to Internet-of-Things (IoT) terminals. However, confronted with massive connectivity and the rapid variation of the terrestrial-satellite link (TSL), traditional grant-free random access schemes fail to match this scenario. In this paper, a new non-orthogonal multiple access (NOMA) transmission protocol that incorporates orthogonal time frequency space (OTFS) modulation is proposed to solve these problems. Furthermore, we propose a two-stage joint active user detection and channel estimation scheme based on a training-sequence-aided OTFS data frame structure. Specifically, in the first stage, with the aid of the training sequences, we perform active user detection and coarse channel estimation by recovering the sparse sampled channel vectors. We then develop a parametric approach that exploits the inherent characteristics of the TSL channel to refine the channel estimates obtained from the previously recovered sampled channel vectors. Simulation results demonstrate the superiority of the proposed method in this high-mobility scenario.
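
As a rough illustration of the first stage described above, the following sketch recovers a sparse vector of per-user channel coefficients from training-sequence observations with orthogonal matching pursuit and thresholds its entries for active-user detection. The measurement model, the OMP recovery step, and all names (e.g. `omp`) are illustrative assumptions, not the scheme specified in the paper.

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: recover a k-sparse x from y ~= A @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1], dtype=complex)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares re-fit on the current support.
        A_s = A[:, support]
        coef, *_ = np.linalg.lstsq(A_s, y, rcond=None)
        x_hat[:] = 0
        x_hat[support] = coef
        residual = y - A_s @ coef
    return x_hat, support

# Toy setup: N users, each assigned a length-L training sequence; only a few are active.
rng = np.random.default_rng(0)
N, L, active = 8, 32, 2
A = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2 * L)
x_true = np.zeros(N, dtype=complex)
x_true[rng.choice(N, active, replace=False)] = rng.standard_normal(active) + 1j * rng.standard_normal(active)
y = A @ x_true + 0.01 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

x_hat, support = omp(A, y, active)
detected = [i for i in range(N) if np.abs(x_hat[i]) > 0.1]  # coarse active-user decision
print("true active:", np.flatnonzero(np.abs(x_true) > 0).tolist(), "detected:", sorted(detected))
```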

Related content

We consider a centralized detection problem where sensors experience noisy measurements and intermittent connectivity to a centralized fusion center. The sensors may collaborate locally within predefined sensor clusters and fuse their noisy sensor data to reach a common local estimate of the detected event in each cluster. The connectivity of each sensor cluster is intermittent and depends on the available communication opportunities of the sensors to the fusion center. Upon receiving the estimates from all the connected sensor clusters, the fusion center fuses the received estimates to make a final determination regarding the occurrence of the event across the deployment area. We refer to this hybrid communication scheme as a cloud-cluster architecture. We propose a method for optimizing the decision rule for each cluster and analyzing the expected detection performance resulting from our hybrid scheme. Our method is tractable and addresses the high computational complexity caused by the heterogeneous detection quality of the sensors and clusters, the heterogeneity in their communication opportunities, and the non-convexity of the loss function. Our analysis shows that clustering the sensors provides resilience to noise when the probability of sensor communication with the cloud is low. For larger clusters, a steep improvement in detection performance is possible even at a low communication probability by using our cloud-cluster architecture.
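
The following Monte-Carlo sketch conveys the cloud-cluster idea under strong simplifying assumptions (a binary event, majority voting inside each cluster and at the cloud, independent cluster-to-cloud link availability); the optimized per-cluster decision rules analyzed in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(event, clusters, p_correct, p_link, trials=10_000):
    """Monte-Carlo estimate of the fusion center's detection accuracy.

    clusters  : list of cluster sizes
    p_correct : per-sensor probability of a correct local decision
    p_link    : probability that a cluster's estimate reaches the cloud
    """
    hits = 0
    for _ in range(trials):
        received = []
        for size in clusters:
            votes = rng.random(size) < p_correct          # correct sensor decisions
            cluster_decision = votes.sum() > size / 2     # local majority fusion
            if rng.random() < p_link:                     # intermittent connectivity
                received.append(cluster_decision)
        if received:
            # Cloud-level majority over the cluster estimates that got through.
            final = sum(received) > len(received) / 2
        else:
            final = False  # fall back to "no event" when nothing is received
        hits += (final == event)
    return hits / trials

print("accuracy:", simulate(event=True, clusters=[5, 5, 5], p_correct=0.7, p_link=0.3))
```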

This paper considers a multi-user downlink scheduling problem with access to the channel state information at the transmitter (CSIT) to minimize the Age-of-Information (AoI) in a non-stationary environment. The non-stationary environment is modelled using a novel adversarial framework. In this setting, we propose a greedy scheduling policy, called MA-CSIT, that takes into account the current channel state information. We establish a finite upper bound on the competitive ratio achieved by the MA-CSIT policy for a small number of users and show that the proposed policy has a better performance guarantee than a recently proposed greedy scheduler that operates without CSIT. In particular, we show that access to the additional channel state information improves the competitive ratio from 8 to 2 in the two-user case and from 18 to 8/3 in the three-user case. Finally, we carry out extensive numerical simulations to quantify the advantage of knowing CSIT in order to minimize the Age-of-Information for an arbitrary number of users.
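
A minimal sketch of a greedy, age-aware scheduler with CSIT is given below, assuming ON/OFF channels known at the transmitter in each slot: among the connected users, serve the one with the largest age. This only illustrates the flavor of such policies; the exact MA-CSIT rule and its adversarial guarantees are as stated above.

```python
import numpy as np

rng = np.random.default_rng(2)

def greedy_csit_schedule(channel, horizon=20):
    """channel[t, i] == True means user i's channel is ON in slot t (known via CSIT)."""
    n_users = channel.shape[1]
    age = np.ones(n_users)           # AoI of each user
    total_age = 0.0
    for t in range(horizon):
        on = np.flatnonzero(channel[t])
        age += 1                     # every user's age grows by one slot
        if on.size:
            # Greedy: among users whose channel is ON, serve the one with the largest age.
            served = on[np.argmax(age[on])]
            age[served] = 1          # successful delivery resets that user's age
        total_age += age.sum()
    return total_age / horizon

channel = rng.random((20, 3)) < 0.6  # toy ON/OFF channel realizations for 3 users
print("average sum AoI:", greedy_csit_schedule(channel))
```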

This paper proposes a deep learning approach to channel sensing and downlink hybrid beamforming for massive multiple-input multiple-output systems operating in the time division duplex mode and employing either single-carrier or multi-carrier transmission. The conventional precoding design involves estimating the high-dimensional channel and designing the precoders based on such an estimate. This two-step process is, however, not necessarily optimal. This paper shows that by training the analog sensing stage and designing the hybrid downlink precoders directly from the received pilots, without intermediate high-dimensional channel estimation, the overall system performance can be significantly improved. However, the direct approach that simultaneously designs the hybrid precoders is difficult to train and only works for a fixed number of users. In this paper, we develop a simplified semi-direct approach that enjoys most of the advantages of the direct design while eliminating its drawbacks. Specifically, the proposed approach learns the uplink sensing stage and downlink analog precoder using deep learning and designs the digital precoder based on an estimate of the low-dimensional equivalent channel. Numerical comparisons show that the proposed methodology requires significantly less training overhead than the conventional strategy and further demonstrate its generalizability to various system settings.
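
To make the semi-direct pipeline concrete, the sketch below replaces the learned components with placeholders: a fixed unit-modulus analog precoder stands in for the learned analog stage, and the digital precoder is computed by zero-forcing on the low-dimensional equivalent channel. The dimensions and the zero-forcing choice are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
M, K, N_rf = 64, 4, 4        # BS antennas, users, RF chains

# Placeholder for the quantity the paper learns from received pilots:
# an analog downlink precoder F_rf with unit-modulus entries.
F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (M, N_rf))) / np.sqrt(M)

# True downlink channel (never estimated in full in the semi-direct approach).
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Low-dimensional equivalent channel seen through the analog precoder,
# which the semi-direct approach estimates instead of the full K x M channel.
H_eq = H @ F_rf                                   # K x N_rf

# Digital precoder from the equivalent channel, e.g. zero-forcing.
F_bb = H_eq.conj().T @ np.linalg.inv(H_eq @ H_eq.conj().T)
F_bb /= np.linalg.norm(F_rf @ F_bb, 'fro')        # total power normalization

print("effective channel magnitudes (should be ~diagonal):")
print(np.round(np.abs(H @ F_rf @ F_bb), 3))
```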

Optimally extracting the advantages available from reconfigurable intelligent surfaces (RISs) in wireless communications systems requires estimation of the channels to and from the RIS. The process of determining these channels is complicated by the fact that the RIS is typically composed of passive elements without any data processing capabilities, and thus the channels must be estimated indirectly by a non-colocated device, typically a controlling base station. In this article, we examine channel estimation for RIS-based systems from a fundamental viewpoint. We study various possible channel models and the identifiability of the models as a function of the available pilot data and behavior of the RIS during training. In particular, we consider situations with and without line-of-sight propagation, single- and multiple-antenna configurations for the users and base station, correlated and sparse channel models, single-carrier and wideband OFDM scenarios, availability of direct links between the users and base station, exploitation of prior information, as well as a number of other special cases. We further conduct numerical comparisons of achievable performance for various channel models using the relevant Cramér-Rao bounds.
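
As one concrete special case of the training problem discussed above, the sketch below estimates the cascaded user-RIS-base-station channel for a single-antenna link with no direct path, using DFT-based RIS phase configurations during training and a least-squares fit. The setup and dimensions are illustrative assumptions, not the article's general treatment.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 16                                # RIS elements (single-antenna user and BS, no direct link)

# Cascaded channel: elementwise product of the user->RIS and RIS->BS channels.
h_ur = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_rb = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_casc = h_ur * h_rb

# Train over N slots with DFT-based RIS phase configurations (rows of a DFT matrix).
Phi = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = Phi @ h_casc + noise              # one pilot observation per training slot

# Least-squares estimate; with a (scaled) unitary DFT matrix this is just Phi^H y / N.
h_hat = Phi.conj().T @ y / N
print("NMSE:", np.linalg.norm(h_hat - h_casc) ** 2 / np.linalg.norm(h_casc) ** 2)
```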

Consumer Internet of Things (IoT) research often involves collecting network traffic sent or received by IoT devices. These data are typically collected via crowdsourcing or while researchers manually interact with IoT devices in a laboratory setting. However, manual interactions and crowdsourcing are often tedious, expensive, inaccurate, or do not provide comprehensive coverage of possible IoT device behaviors. We present a new method for generating IoT network traffic using a robotic arm to automate user interactions with devices. This eliminates manual button pressing and enables permutation-based interaction sequences that rigorously explore the range of possible device behaviors. We test this approach with an Arduino-controlled robotic arm, a smart speaker and a smart thermostat, using machine learning to demonstrate that the collected network traffic contains information about device interactions that could be useful for network, security, or privacy analyses. We also provide source code and documentation allowing researchers to easily automate IoT device interactions and network traffic collection in future studies.
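
A minimal sketch of the permutation-based interaction idea is shown below: enumerate orderings of candidate interactions and capture traffic while each sequence is replayed. The interaction names, the placeholder print standing in for the serial command to the Arduino-controlled arm, and the use of tcpdump for capture (which typically requires elevated privileges) are all assumptions for illustration, not the authors' released tooling.

```python
import itertools
import subprocess
import time

# Hypothetical device interactions the arm can perform; names are illustrative only.
INTERACTIONS = ["volume_up", "volume_down", "mute", "voice_query"]

def run_experiment(sequence, capture_file, iface="eth0", settle_s=5):
    """Capture traffic while the arm replays one interaction sequence."""
    capture = subprocess.Popen(
        ["tcpdump", "-i", iface, "-w", capture_file])      # assumes tcpdump is available
    try:
        for action in sequence:
            print("arm performs:", action)                  # placeholder for the serial command to the arm
            time.sleep(settle_s)                            # let the device finish its network activity
    finally:
        capture.terminate()

# Permutation-based coverage of interaction orderings, as described above.
for i, seq in enumerate(itertools.permutations(INTERACTIONS, 2)):
    run_experiment(seq, f"capture_{i:03d}.pcap")
```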

In this paper, we propose a frequency-time division network (FreqTimeNet) to improve the performance of deep learning (DL) based OFDM channel estimation. FreqTimeNet is designed around the orthogonality between the frequency domain and the time domain: its input is processed by parallel frequency blocks and then by parallel time blocks. By introducing an attention mechanism that uses SNR information, an attention-based FreqTimeNet (AttenFreqTimeNet) is proposed. Using 3rd Generation Partnership Project (3GPP) channel models, the mean square error (MSE) performance of FreqTimeNet and AttenFreqTimeNet under different scenarios is evaluated. A method for constructing mixed training data is proposed, which can address the generalization problem in DL. It is observed that AttenFreqTimeNet outperforms FreqTimeNet, and FreqTimeNet outperforms other DL networks with reasonable complexity.
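
A structural sketch (in PyTorch) of the frequency-then-time processing order is given below; the layer widths, depths, and real-valued input are placeholders rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

class FreqTimeSketch(nn.Module):
    """Structural sketch only: frequency-direction blocks followed by time-direction
    blocks, mirroring the orthogonal processing order described above."""
    def __init__(self, n_subcarriers=72, n_symbols=14, hidden=128):
        super().__init__()
        # Frequency block: acts along the subcarrier axis, shared over OFDM symbols.
        self.freq_block = nn.Sequential(
            nn.Linear(n_subcarriers, hidden), nn.ReLU(), nn.Linear(hidden, n_subcarriers))
        # Time block: acts along the symbol axis, shared over subcarriers.
        self.time_block = nn.Sequential(
            nn.Linear(n_symbols, hidden), nn.ReLU(), nn.Linear(hidden, n_symbols))

    def forward(self, h_ls):
        # h_ls: (batch, n_symbols, n_subcarriers) raw channel estimate (real-valued here).
        x = self.freq_block(h_ls)                 # parallel processing along frequency
        x = self.time_block(x.transpose(1, 2))    # then along time
        return x.transpose(1, 2)

net = FreqTimeSketch()
print(net(torch.randn(4, 14, 72)).shape)          # torch.Size([4, 14, 72])
```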

Benefiting from the rapid development of deep learning techniques, salient object detection has achieved remarkable progress recently. However, two major challenges still hinder its application on embedded devices: low-resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while preserving accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the currently predicted salient regions from the side-output features, the network can eventually explore the missing object parts and details, which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, with advantages in simplicity, efficiency (45 FPS), and model size (81 MB).
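
The reverse-attention operation described above can be sketched as follows: the coarser prediction is upsampled, inverted through a sigmoid, and used to down-weight already-detected salient regions in the side-output features. Tensor shapes and the bilinear upsampling choice are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def reverse_attention(side_feature, coarser_prediction):
    """Weight side-output features by the *non*-salient regions of the coarser prediction,
    so the refinement branch focuses on missing object parts and boundary details."""
    # Upsample the coarser saliency map to the side-output resolution.
    pred = F.interpolate(coarser_prediction, size=side_feature.shape[2:],
                         mode='bilinear', align_corners=False)
    attention = 1.0 - torch.sigmoid(pred)          # reverse-attention weights in [0, 1]
    return side_feature * attention                # erase already-detected salient regions

side = torch.randn(1, 64, 56, 56)                  # toy side-output features
coarse = torch.randn(1, 1, 28, 28)                 # toy coarse saliency logits
print(reverse_attention(side, coarse).shape)       # torch.Size([1, 64, 56, 56])
```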

Although the YOLOv2 approach is extremely fast at object detection, its backbone network has limited feature-extraction ability and fails to make full use of multi-scale local region features, which restricts further improvement of detection accuracy. Therefore, this paper proposes a DC-SPP-YOLO (Dense Connection and Spatial Pyramid Pooling based YOLO) approach to improve the object detection accuracy of YOLOv2. Specifically, dense connections between convolution layers are employed in the backbone network of YOLOv2 to strengthen feature extraction and alleviate the vanishing-gradient problem. Moreover, an improved spatial pyramid pooling is introduced to pool and concatenate the multi-scale local region features, so that the network can learn object features more comprehensively. The DC-SPP-YOLO model is built and trained with a new loss function composed of mean square error and cross entropy, and object detection is then performed. Experiments demonstrate that the mAP (mean Average Precision) of the proposed DC-SPP-YOLO on the PASCAL VOC and UA-DETRAC datasets is higher than that of YOLOv2; by strengthening feature extraction and exploiting multi-scale local region features, DC-SPP-YOLO achieves detection accuracy superior to YOLOv2.
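
The improved spatial pyramid pooling mentioned above can be sketched as a stride-1, same-padding pooling pyramid whose outputs are concatenated with the input feature map; the pooling window sizes below are illustrative choices, not necessarily those of DC-SPP-YOLO.

```python
import torch
import torch.nn as nn

class SPPBlock(nn.Module):
    """Spatial pyramid pooling sketch: pool the same feature map at several window
    sizes (stride 1, 'same' padding) and concatenate along the channel axis, so the
    following layers see multi-scale local region features."""
    def __init__(self, pool_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in pool_sizes)

    def forward(self, x):
        return torch.cat([x] + [p(x) for p in self.pools], dim=1)

x = torch.randn(1, 512, 13, 13)
print(SPPBlock()(x).shape)    # torch.Size([1, 2048, 13, 13]) -- channels grow 4x
```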

This research mainly emphasizes traffic detection, which essentially involves object detection and classification. The work discussed here is motivated by unsatisfactory attempts to re-use well-known pre-trained object detection networks on domain-specific data. In this process, some seemingly trivial issues that lead to a prominent performance drop are identified, and ways to resolve them are discussed. For example, some simple yet relevant tricks regarding data collection and sampling prove to be very beneficial. Introducing a blur net to deal with blurred real-time data is another important factor promoting performance. We further study neural network design issues for effective object classification and employ shared, region-independent convolutional features. Adaptive learning rates to deal with saddle points are also investigated, and an average-covariance-matrix-based preconditioning approach is proposed. We also introduce optical flow features to incorporate orientation information. Experimental results demonstrate that these measures yield a steady rise in performance.
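
As an illustration of the optical-flow features mentioned above, the sketch below computes dense Farnebäck flow between consecutive frames with OpenCV and converts it to magnitude/orientation maps that could be appended to the detector input. The toy frame source and the Farnebäck parameters are placeholder assumptions.

```python
import cv2
import numpy as np

def flow_features(prev_bgr, curr_bgr):
    """Dense optical flow between consecutive frames; the magnitude and angle maps
    can serve as orientation cues for the detector."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return magnitude, angle

prev = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
curr = np.roll(prev, 2, axis=1)          # toy "motion": shift the frame horizontally
mag, ang = flow_features(prev, curr)
print(mag.shape, ang.shape)
```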

Within the rapidly developing Internet of Things (IoT), numerous and diverse physical devices, Edge devices, Cloud infrastructure, and their quality of service (QoS) requirements need to be represented within a unified specification in order to enable rapid IoT application development, monitoring, and dynamic reconfiguration. However, heterogeneities among different configuration knowledge representation models limit the acquisition, discovery, and curation of configuration knowledge for coordinated IoT applications. This paper proposes a unified data model to represent IoT resource configuration knowledge artifacts. It also proposes IoT-CANE (Context-Aware recommendatioN systEm) to facilitate incremental knowledge acquisition and declarative, context-driven knowledge recommendation.
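
One way to picture such a unified configuration artifact is sketched below as a small set of Python dataclasses serialized to JSON; every field name here is an illustrative assumption, not the schema defined by IoT-CANE.

```python
from dataclasses import dataclass, field, asdict
from typing import Dict, List
import json

# Illustrative unified representation of IoT resource configuration knowledge.

@dataclass
class QoSRequirement:
    metric: str          # e.g. "latency_ms", "availability"
    target: float

@dataclass
class IoTResource:
    name: str
    tier: str            # "device" | "edge" | "cloud"
    capabilities: List[str] = field(default_factory=list)
    qos: List[QoSRequirement] = field(default_factory=list)
    context: Dict[str, str] = field(default_factory=dict)

camera = IoTResource(
    name="entrance-camera-01", tier="device",
    capabilities=["video/rtsp"],
    qos=[QoSRequirement("latency_ms", 200.0)],
    context={"location": "building-A", "connectivity": "wifi"})

print(json.dumps(asdict(camera), indent=2))   # one uniform artifact for acquisition and recommendation
```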
