
In this paper, we propose a novel integrated sensing and communication (ISAC) complex convolutional neural network (CNN) CSI enhancer for 6G networks, which exploits the correlation between sensing parameters, such as angle-of-arrival (AoA) and range, and the channel state information (CSI) to significantly improve CSI estimation accuracy and, in turn, sensing accuracy. The ISAC complex CNN CSI enhancer uses complex-valued computation layers to form the CNN so that the phase information of the CSI is better preserved. Furthermore, we incorporate ISAC transform modules into the CNN enhancer to transform the CSI into the sparse angle-delay domain, where it can be treated as an image with prominent peaks that is well suited to CNN processing. We then propose a novel biased FFT-based sensing scheme, in which known phase bias terms are actively added to the original CSI to generate multiple estimates with a simple FFT-based sensing method, and the debiased results are averaged to obtain a more accurate range estimate. Extensive simulation results show that the ISAC complex CNN CSI enhancer converges within 30 training epochs. Its CSI estimation normalized mean square error (NMSE) is about 17 dB lower than that of the MMSE method, and the bit error rate (BER) of demodulation using the enhanced CSI approaches that of perfect CSI. Finally, the range estimation MSE of the proposed biased FFT-based sensing method approaches that of subspace-based methods with much lower complexity.
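The following is a minimal sketch of the biased FFT-based sensing idea on an idealized single-path, noiseless CSI: a known delay bias shifts the FFT peak, and averaging the debiased estimates reduces the on-grid quantization error. The subcarrier spacing, number of subcarriers, bias grid, and true range are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

c = 3e8                  # speed of light (m/s)
N = 256                  # number of subcarriers (assumed)
delta_f = 120e3          # subcarrier spacing (Hz, assumed)
true_range = 43.7        # true one-way range in meters (illustrative)
tau = 2 * true_range / c # round-trip delay

n = np.arange(N)
csi = np.exp(-1j * 2 * np.pi * n * delta_f * tau)   # ideal noiseless single-path CSI

def fft_range(csi_seq):
    """Plain FFT-based delay estimate: peak of the IFFT across subcarriers."""
    profile = np.abs(np.fft.ifft(csi_seq))
    k = np.argmax(profile)
    tau_hat = k / (N * delta_f)        # delay grid resolution is 1/(N*delta_f)
    return c * tau_hat / 2             # convert round-trip delay to one-way range

# Biased estimation: apply several known delay biases, estimate, then de-bias.
bias_delays = np.linspace(0, 1 / (N * delta_f), 8, endpoint=False)   # assumed bias grid
estimates = []
for tau_b in bias_delays:
    biased_csi = csi * np.exp(-1j * 2 * np.pi * n * delta_f * tau_b)
    estimates.append(fft_range(biased_csi) - c * tau_b / 2)          # remove the known bias

print("plain FFT estimate :", fft_range(csi))
print("biased-FFT average :", np.mean(estimates))
```

Running this toy example, the averaged debiased estimate lands noticeably closer to the true range than the single on-grid FFT estimate, which is the effect the scheme relies on.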

Related content

In recent years, word embeddings have been widely used to measure biases in texts. Although they have proven effective in detecting a wide variety of biases, metrics based on word embeddings lack transparency and interpretability. We analyze an alternative PMI-based metric for quantifying biases in texts. It can be expressed as a function of conditional probabilities, which provides a simple interpretation in terms of word co-occurrences. We also prove that it can be approximated by an odds ratio, which allows estimating confidence intervals and the statistical significance of textual biases. This approach produces results similar to those of word-embedding-based metrics when capturing real-world gender gaps embedded in large corpora.
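A minimal sketch of a PMI-style co-occurrence bias score and its odds-ratio approximation with a confidence interval, in the spirit of the metric described above. The toy corpus, context sets A and B, window size, and smoothing are illustrative assumptions, not the paper's data or exact estimator.

```python
import math

corpus = [
    "he is a doctor and he works at the hospital".split(),
    "she is a nurse and she works at the clinic".split(),
    "the doctor said he would call the nurse soon".split(),
]
A = {"he", "him", "his"}      # e.g., male context words (assumed)
B = {"she", "her", "hers"}    # e.g., female context words (assumed)

def window_counts(target, context, window=4):
    """Return (co, total): context-word windows that do / do not contain the target."""
    co, total = 0, 0
    for sent in corpus:
        for i, w in enumerate(sent):
            if w in context:
                total += 1
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                if target in sent[lo:i] + sent[i + 1:hi]:
                    co += 1
    return co, total

def bias(target, smoothing=0.5):
    """PMI-style bias log[P(target|A)/P(target|B)] plus a 95% CI via the log odds ratio."""
    a_co, a_tot = window_counts(target, A)
    b_co, b_tot = window_counts(target, B)
    # 2x2 contingency table with additive smoothing
    n11, n12 = a_co + smoothing, (a_tot - a_co) + smoothing
    n21, n22 = b_co + smoothing, (b_tot - b_co) + smoothing
    pmi_bias = math.log((n11 / (n11 + n12)) / (n21 / (n21 + n22)))
    log_or = math.log((n11 * n22) / (n12 * n21))
    se = math.sqrt(1 / n11 + 1 / n12 + 1 / n21 + 1 / n22)
    return pmi_bias, (log_or - 1.96 * se, log_or + 1.96 * se)

print("doctor:", bias("doctor"))
print("nurse :", bias("nurse"))
```

A positive score means the target word co-occurs more with context set A than with B; the interval width makes the statistical uncertainty explicit, which is the interpretability advantage over embedding-based scores.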

Diffusion models have recently been applied with striking success to the synthesis of highly realistic images, which raises strong concerns about their potential for malicious use. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality method (multiLID), originally developed for the detection of adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks. In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close-to-perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach is superior in diffusion detection and model identification. Since the empirical evaluations in recent publications on detecting generated images often focus mainly on the "LSUN-Bedroom" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes.
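For context, below is a minimal sketch of the maximum-likelihood local intrinsic dimensionality (LID) estimator that LID-based detectors of this kind build on; the per-layer feature extraction and the final classifier over LID values used by multiLID are omitted, and the random data is purely illustrative.

```python
import numpy as np

def lid_mle(query, reference, k=20):
    """MLE local intrinsic dimensionality of each query point w.r.t. a reference batch."""
    # pairwise Euclidean distances, shape (n_query, n_reference)
    d = np.linalg.norm(query[:, None, :] - reference[None, :, :], axis=-1)
    d = np.sort(d, axis=1)[:, 1:k + 1]   # drop the nearest column (zero self-distance if query is in reference)
    r_k = d[:, -1:]                      # distance to the k-th neighbor
    return -1.0 / np.mean(np.log(d / r_k + 1e-12), axis=1)

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 64))          # stand-in for features of real images
fake_feats = rng.normal(size=(200, 64)) * 0.5    # stand-in for features of generated images

# In a multiLID-style pipeline, per-layer LID vectors like these would feed a
# small classifier (e.g., logistic regression) for detection and attribution.
print("mean LID (real-like):", lid_mle(real_feats[:200], real_feats).mean())
print("mean LID (fake-like):", lid_mle(fake_feats, real_feats).mean())
```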

Deep neural networks are powerful tools for many tasks. Understanding why they are so successful and providing a mathematical explanation is an important problem and has been a popular research direction in recent years. In the literature on the mathematical analysis of deep neural networks, many works are dedicated to establishing representation theories, while the connections between deep neural networks and mathematical algorithms are still under development. In this paper, we give an algorithmic explanation for deep neural networks, especially their connection with operator splitting and multigrid methods. We show that, with certain splitting strategies, operator-splitting methods have the same structure as networks. Utilizing this connection and the Potts model for image segmentation, two networks inspired by operator-splitting methods are proposed; they are essentially two operator-splitting algorithms solving the Potts model. Numerical experiments are presented to demonstrate the effectiveness of the proposed networks.
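As a rough illustration of the analogy invoked above (not the networks proposed in the paper): one sequential operator-splitting step, i.e., an explicit sub-step for a smooth operator followed by a proximal sub-step for a nonsmooth term, has the same shape as one network layer, a linear map followed by a pointwise nonlinearity. The operators, step size, and soft-thresholding proximal map below are assumptions chosen for illustration.

```python
import numpy as np

def soft_threshold(u, lam):
    """Proximal map of lam*||u||_1: a ReLU-like pointwise nonlinearity."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def splitting_forward(u0, W_list, tau=0.1, lam=0.05):
    """Several splitting steps; each step = explicit linear sub-step + proximal sub-step."""
    u = u0
    for W in W_list:                 # each W plays the role of one layer's weights
        u = u + tau * (W @ u)        # sub-step 1: explicit step for the smooth operator
        u = soft_threshold(u, lam)   # sub-step 2: proximal step for the nonsmooth term
    return u

rng = np.random.default_rng(1)
u0 = rng.normal(size=16)
W_list = [rng.normal(size=(16, 16)) / 4 for _ in range(3)]   # three "layers"
print(splitting_forward(u0, W_list))
```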

Aiming to endow wireless communication systems with environment-perceptive capability, emerging integrated sensing and communication (ISAC) technologies face multiple difficulties, especially in balancing the performance trade-off between the communication and radar functions. In this paper, we introduce a reconfigurable intelligent surface (RIS) to assist both data transmission and target detection in a dual-functional ISAC system. To formulate a general optimization framework, diverse communication performance metrics are taken into account, including the well-known capacity maximization and mean-squared error (MSE) minimization. The target detection process is modeled as a generalized likelihood ratio test (GLRT) due to practical limitations, and the monotonicity of the corresponding detection probability is proved. For the single-user and single-target (SUST) scenario, the minimum transmit power of the ISAC transceiver is derived. By exploiting the optimality conditions of the BS design, we show that the BS can realize the maximum power allocation scheme and derive the optimal BS precoder in semi-closed form. Moreover, an alternating direction method of multipliers (ADMM) based RIS design is proposed to handle the optimization of the unit-modulus RIS phase shifts. To further enhance computational efficiency, we also develop a low-complexity RIS design based on Riemannian gradient descent. Furthermore, the ISAC transceiver design for the multiple-users and multiple-targets (MUMT) scenario is investigated, where a zero-forcing (ZF) radar receiver is adopted to cancel interference. The optimal BS precoder is then derived under the maximum power allocation scheme, and the RIS phase shifts can be optimized by extending the proposed ADMM-based RIS design. Numerical simulation results verify the performance of our proposed transceiver designs.
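A minimal sketch of Riemannian gradient descent on the unit-modulus (complex circle) manifold, the kind of low-complexity phase-shift update mentioned above: project the Euclidean gradient onto the tangent space of each circle, take a step, and retract by renormalizing. The quadratic objective theta^H R theta and all dimensions are illustrative stand-ins, not the paper's communication/sensing objective.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 32                                                     # number of RIS elements (assumed)
G = (rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))) / np.sqrt(M)
R = G.conj().T @ G                                         # Hermitian PSD matrix for the toy objective

theta = np.exp(1j * rng.uniform(0, 2 * np.pi, M))          # feasible start: |theta_i| = 1

def riemannian_step(theta, step=0.05):
    egrad = 2 * R @ theta                                   # Euclidean (Wirtinger) gradient
    rgrad = egrad - np.real(egrad * np.conj(theta)) * theta # project onto tangent space of the circles
    theta = theta - step * rgrad                            # descent step (minimizing theta^H R theta)
    return theta / np.abs(theta)                            # retraction back to unit modulus

obj = lambda t: np.real(t.conj() @ R @ t)
print("initial objective:", obj(theta))
for _ in range(300):
    theta = riemannian_step(theta)
print("final objective  :", obj(theta))
```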

The topology of the Internet and its geographic properties have received significant attention in recent years, not only because they have a deep impact on the performance experienced by users, but also for legal, political, and economic reasons. In this paper, the global Internet is studied in terms of path locality, where a path is defined as local if it does not cross the borders of the region where the source and destination hosts are located. The phenomenon is studied through two metrics, one based on the size of the address space of the autonomous systems where the endpoints are located and the other on the amount of population served. Results show that the regions of the world exhibit significant differences in terms of path locality. The main elements contributing to the path locality, and non-locality, of regions and countries are identified and discussed. Finally, we present the most significant dependency relationships between countries caused by non-local paths.
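A minimal sketch of the two weighting schemes described above: the share of local paths weighted by the endpoints' announced address space, and weighted by the population served. The path records below are invented toy values; real inputs would come from traceroute measurements, BGP data, and IP geolocation.

```python
paths = [
    # (src region, dst region, regions traversed, address blocks, population served)
    ("EU", "EU", ["EU"],       4096, 1.2e6),
    ("EU", "EU", ["EU", "NA"], 2048, 0.8e6),
    ("SA", "SA", ["SA", "NA"], 1024, 2.5e6),
    ("AF", "AF", ["AF"],        512, 0.4e6),
]

def locality(weight="address_space"):
    """Weighted fraction of intra-region paths that never leave their region."""
    local = total = 0.0
    for src, dst, traversed, blocks, pop in paths:
        if src != dst:                      # only endpoint pairs within the same region count
            continue
        w = blocks if weight == "address_space" else pop
        total += w
        if set(traversed) == {src}:         # the path never crosses the region border
            local += w
    return local / total

print("address-space-weighted locality:", locality("address_space"))
print("population-weighted locality   :", locality("population"))
```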

The millimeter-wave (mmWave) communication technology, which employs large-scale antenna arrays, enables inherent sensing capabilities. Simultaneous localization and mapping (SLAM) can utilize channel multipath angle estimates to realize integrated sensing and communication in 6G communication systems. However, existing works have ignored the significant overhead required by mmWave beam management when implementing SLAM with angle estimates. This study proposes a joint beam management and SLAM design that exploits the strong coupling between the radio map and the channel multipaths for simultaneous beam management, localization, and mapping. In this approach, we first propose a hierarchical sweeping and sensing-service design: path angles are estimated during hierarchical sweeping, enabling angle-based SLAM, aided by an inertial measurement unit (IMU), to realize the sensing service. Then, feature-aided tracking is proposed that utilizes prior angle information generated from the radio map and the IMU. Finally, a switching module is introduced to enable flexible switching between hierarchical sweeping and feature-aided tracking. Simulations show that the proposed joint design can achieve sub-meter localization and mapping accuracy (error < 0.5 m). Moreover, the beam management overhead can be reduced by approximately 40% in different wireless environments.
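A minimal sketch of a two-level coarse-then-fine angle sweep for a uniform linear array, the generic kind of hierarchical sweeping referred to above; the SLAM/IMU fusion and switching logic are omitted. Array size, codebook granularity, and the noiseless channel are illustrative assumptions, and for simplicity the coarse level reuses narrow steering beams on a coarse grid, whereas practical hierarchical codebooks widen the upper-level beams.

```python
import numpy as np

N = 32                                    # antennas (assumed)
true_aoa = np.deg2rad(23.0)               # illustrative true path angle

def steering(angle):
    return np.exp(1j * np.pi * np.arange(N) * np.sin(angle)) / np.sqrt(N)

y = steering(true_aoa)                    # noiseless received signature

def sweep(angles):
    """Return the codebook angle whose beam collects the most power."""
    powers = [np.abs(steering(a).conj() @ y) for a in angles]
    return angles[int(np.argmax(powers))]

# Level 1: coarse sweep over 8 sectors; Level 2: fine sweep inside the winner.
coarse = np.deg2rad(np.linspace(-60, 60, 8))
best_coarse = sweep(coarse)
fine = best_coarse + np.deg2rad(np.linspace(-8, 8, 16))
best_fine = sweep(fine)

print("estimated AoA (deg):", np.rad2deg(best_fine))
# 8 + 16 = 24 trained beams instead of a flat sweep over a dense codebook,
# which is the overhead saving that feature-aided tracking further reduces.
```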

Next-generation wireless networks strive for higher communication rates, ultra-low latency, seamless connectivity, and high-resolution sensing capabilities. To meet these demands, terahertz (THz)-band signal processing is envisioned as a key technology offering wide bandwidth and sub-millimeter wavelength. Furthermore, the THz integrated sensing and communications (ISAC) paradigm has emerged to enable joint spectrum access and reduced hardware costs through a unified platform. To address the challenges of THz propagation, THz-ISAC systems employ extremely large antenna arrays to improve the beamforming gain for communications with high data rates and for sensing with high resolution. However, the cost and power consumption of fully digital beamformers are prohibitive. While hybrid analog/digital beamforming is a potential solution, the use of subcarrier-independent analog beamformers leads to the beam-squint phenomenon, in which different subcarriers observe distinct directions because the same analog beamformer is applied across all subcarriers. In this paper, we develop a sparse array architecture for THz-ISAC with hybrid beamforming to provide a cost-effective solution. We analyze the antenna selection problem under the influence of beam-squint and introduce a manifold optimization approach for hybrid beamforming design. To reduce computational and memory costs, we propose novel algorithms leveraging grouped subarrays, quantized performance metrics, and sequential optimization. These approaches yield a significant reduction in the number of possible subarray configurations, which enables us to devise a classification-based neural network to accurately perform antenna selection.
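A minimal sketch quantifying the beam-squint effect mentioned above: a frequency-flat phase-shifter beamformer steered to a direction at the carrier actually points at a slightly different angle on each subcarrier, and over a wide THz bandwidth the drift can exceed the array's beamwidth. The numbers (0.3 THz carrier, 30 GHz bandwidth, 64 antennas, 40-degree steering) are illustrative assumptions.

```python
import numpy as np

fc = 300e9                  # carrier frequency (Hz, assumed)
bw = 30e9                   # total bandwidth (Hz, assumed)
M = 129                     # number of subcarriers (assumed)
N = 64                      # ULA antennas, half-wavelength spacing at fc (assumed)
theta0 = np.deg2rad(40.0)   # intended steering direction

fm = fc + (np.arange(M) - (M - 1) / 2) * bw / (M - 1)    # subcarrier frequencies

# With element phases fixed to pi*n*sin(theta0), the beam on subcarrier m peaks
# where (fm/fc)*sin(theta_m) = sin(theta0), i.e. sin(theta_m) = (fc/fm)*sin(theta0).
theta_m = np.arcsin(np.clip((fc / fm) * np.sin(theta0), -1.0, 1.0))

squint_deg = np.rad2deg(theta_m - theta0)
print("max squint at band edges (deg): %.2f" % np.max(np.abs(squint_deg)))

# Approximate 3 dB beamwidth at fc (about 2/N in sin-space) for comparison:
beamwidth_deg = np.rad2deg(2.0 / (N * np.cos(theta0)))
print("approx. 3 dB beamwidth (deg)  : %.2f" % beamwidth_deg)
```

With these assumed numbers the squint at the band edges is larger than the beamwidth, which is why subcarrier-independent analog beamformers lose gain at the band edges.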

Substantial efforts have been devoted in recent years to methods for object detection in optical remote sensing images. However, the current survey coverage of datasets and deep-learning-based methods for object detection in optical remote sensing images is not adequate. Moreover, most existing datasets have shortcomings: for example, the numbers of images and object categories are small, and the image diversity and variations are insufficient. These limitations greatly affect the development of deep-learning-based object detection methods. In this paper, we provide a comprehensive review of recent deep-learning-based object detection progress in both the computer vision and earth observation communities. Then, we propose a large-scale, publicly available benchmark for object DetectIon in Optical Remote sensing images, which we name DIOR. The dataset contains 23,463 images and 192,472 instances, covering 20 object classes. The proposed DIOR dataset 1) is large-scale in the number of object categories, object instances, and total images; 2) has a large range of object size variations, not only in terms of spatial resolution, but also in inter- and intra-class size variability across objects; 3) holds large variations because the images are obtained under different imaging conditions, weathers, seasons, and image qualities; and 4) has high inter-class similarity and intra-class diversity. The proposed benchmark can help researchers develop and validate data-driven methods. Finally, we evaluate several state-of-the-art approaches on our DIOR dataset to establish a baseline for future research.

Deep Convolutional Neural Networks (CNNs) are a special type of neural network that has shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNNs is largely achieved through multiple non-linear feature extraction stages that can automatically learn hierarchical representations from data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and recently very interesting deep CNN architectures have been reported. The recent race in deep CNN architectures for achieving high performance on challenging benchmarks has shown that innovative architectural ideas, as well as parameter optimization, can improve CNN performance on various vision-related tasks. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity has been achieved by restructuring the processing units. In particular, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey therefore focuses on the intrinsic taxonomy present in recently reported CNN architectures and classifies the recent innovations in CNN architectures into seven categories, based on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention. Additionally, it covers the elementary understanding of CNN components and sheds light on the current challenges and applications of CNNs.

Text classification is an important and classical problem in natural language processing. A number of studies have applied convolutional neural networks (convolution on a regular grid, e.g., a sequence) to classification. However, only a limited number of studies have explored the more flexible graph convolutional neural networks (convolution on a non-grid structure, e.g., an arbitrary graph) for this task. In this work, we propose using graph convolutional networks for text classification. We build a single text graph for a corpus based on word co-occurrence and document-word relations, then learn a Text Graph Convolutional Network (Text GCN) for the corpus. Our Text GCN is initialized with one-hot representations for words and documents; it then jointly learns the embeddings of both words and documents, supervised by the known class labels of documents. Our experimental results on multiple benchmark datasets demonstrate that a vanilla Text GCN without any external word embeddings or knowledge outperforms state-of-the-art methods for text classification. Text GCN also learns predictive word and document embeddings. In addition, experimental results show that the improvement of Text GCN over state-of-the-art comparison methods becomes more prominent as we lower the percentage of training data, suggesting the robustness of Text GCN to limited training data in text classification.
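A minimal sketch of the Text GCN graph construction and an untrained two-layer GCN forward pass: word-word edges weighted by positive PMI over sliding windows, document-word edges weighted by TF-IDF, self-loops, symmetric normalization, and one-hot node features. The tiny corpus, window size, and random weights are illustrative assumptions; training with cross-entropy on labeled documents is omitted.

```python
import numpy as np
from collections import Counter
from itertools import combinations

docs = ["cheap flights to rome", "book cheap hotel rome",
        "python machine learning", "learning python fast"]
tokens = [d.split() for d in docs]
vocab = sorted({w for t in tokens for w in t})
wid = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), len(docs)
N = V + D                                    # nodes: words first, then documents

A = np.eye(N)                                # adjacency with self-loops

# Word-word edges weighted by positive PMI over sliding windows.
win = 3
windows = [t[i:i + win] for t in tokens for i in range(max(1, len(t) - win + 1))]
w_cnt, pair_cnt = Counter(), Counter()
for w_ in windows:
    s = set(w_)
    w_cnt.update(s)
    pair_cnt.update(combinations(sorted(s), 2))
W_total = len(windows)
for (a, b), c in pair_cnt.items():
    pmi = np.log(c * W_total / (w_cnt[a] * w_cnt[b]))
    if pmi > 0:
        A[wid[a], wid[b]] = A[wid[b], wid[a]] = pmi

# Document-word edges weighted by TF-IDF.
df = Counter(w for t in tokens for w in set(t))
for d, t in enumerate(tokens):
    for w, c in Counter(t).items():
        A[V + d, wid[w]] = A[wid[w], V + d] = (c / len(t)) * np.log(D / df[w])

# Symmetric normalization and a two-layer GCN forward pass with one-hot inputs.
deg = A.sum(1)
A_hat = A / np.sqrt(np.outer(deg, deg))
X = np.eye(N)                                # one-hot features for every node
rng = np.random.default_rng(0)
W0, W1 = rng.normal(0, 0.1, (N, 16)), rng.normal(0, 0.1, (16, 2))
H = np.maximum(A_hat @ X @ W0, 0)            # layer 1 + ReLU
logits = A_hat @ H @ W1                      # layer 2: per-node class scores
print("document logits:\n", logits[V:])
```

In the actual method the weights would be trained with a cross-entropy loss on the labeled document nodes only, and the hidden representations H serve as the jointly learned word and document embeddings.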
