This work studies multiple-antenna wireless communication systems based on super-resolution arrays (SRAs). We consider the uplink of a system in which users communicate with a multiple-antenna base station equipped with SRAs. In particular, we develop linear minimum mean-square error (MMSE) receive filters, along with linear and successive interference cancellation receivers, for processing signals with the difference co-array originating from the SRAs. We then derive analytical expressions to assess the achievable sum-rates of the proposed systems. Simulations show that the proposed multiple-antenna systems with SRAs outperform existing systems with standard arrays that have a larger number of antenna elements.
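
As a rough illustration of the linear MMSE stage described above, the sketch below applies the textbook MMSE receive filter to a generic narrowband uplink model y = Hs + n. The array size, user count, and noise level are arbitrary assumptions, and the SRA difference co-array processing and SIC stages are omitted.

```python
import numpy as np

# Minimal sketch of a linear MMSE uplink receive filter, assuming a
# generic model y = H s + n with unit-power user symbols. Dimensions
# are illustrative, not the paper's SRA configuration.
rng = np.random.default_rng(0)
M, K, sigma2 = 16, 4, 0.1          # antennas, users, noise variance

H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
s = (rng.integers(0, 2, K) * 2 - 1).astype(complex)       # BPSK symbols
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = H @ s + n

# MMSE filter: W = (H H^H + sigma^2 I)^{-1} H, applied per user.
W = np.linalg.solve(H @ H.conj().T + sigma2 * np.eye(M), H)
s_hat = W.conj().T @ y
print(np.sign(s_hat.real))          # detected BPSK symbols
```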

Related Content

Distributed deep neural networks (DNNs) have been shown to reduce the computational burden of mobile devices and decrease end-to-end inference latency in edge computing scenarios. While distributed DNNs have been studied, to the best of our knowledge the resilience of distributed DNNs to adversarial action remains an open problem. In this paper, we fill this research gap by rigorously analyzing the robustness of distributed DNNs against adversarial action. We cast the problem in an information-theoretic context and introduce two new measures of distortion and robustness. Our theoretical findings indicate that (i) for the same level of information distortion, latent features are always more robust than input representations; and (ii) adversarial robustness is jointly determined by the feature dimension and the generalization capability of the DNN. To test these findings, we perform an extensive experimental analysis covering 6 DNN architectures, 6 approaches to DNN distribution, and 10 adversarial attacks on the ImageNet-1K dataset. The experimental results support our theory: compressed latent representations reduce the success rate of adversarial attacks by 88% in the best case, and by 57% on average, compared with attacks on the input space.
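
To make finding (i) concrete, the following hedged sketch contrasts a gradient-sign attack crafted in the input space with one crafted directly on a compressed latent feature under the same perturbation budget. The tiny split model, random data, and FGSM-style attack are placeholder assumptions, not the paper's architectures, distribution approaches, or attack suite.

```python
import torch
import torch.nn as nn

# Toy comparison: perturb the input vs. the latent feature with the
# same epsilon, and count how many predictions flip in each case.
torch.manual_seed(0)
encoder = nn.Sequential(nn.Linear(32, 8), nn.ReLU())   # device-side part
head = nn.Linear(8, 10)                                # edge/server part

x = torch.randn(64, 32)
y = torch.randint(0, 10, (64,))
loss_fn = nn.CrossEntropyLoss()
eps = 0.1

def fgsm(tensor, grad):
    return tensor + eps * grad.sign()

# Attack in input space: gradient of the loss w.r.t. the raw input.
x_adv = x.clone().requires_grad_(True)
loss_fn(head(encoder(x_adv)), y).backward()
logits_in = head(encoder(fgsm(x, x_adv.grad)))

# Attack in latent space: same budget applied to the compressed feature.
z = encoder(x).detach().requires_grad_(True)
loss_fn(head(z), y).backward()
logits_lat = head(fgsm(z.detach(), z.grad))

clean = head(encoder(x)).argmax(1)
print("flips via input attack: ", (logits_in.argmax(1) != clean).sum().item())
print("flips via latent attack:", (logits_lat.argmax(1) != clean).sum().item())
```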

An innovative approach to hybrid analog-digital precoding for the downlink of wideband massive MIMO systems is developed. The proposed solution, termed Rank-Constrained Coordinate Ascent (RCCA), starts by seeking the full-digital precoder that maximizes the achievable sum-rate over all frequency subcarriers while constraining the rank of the overall transmit covariance matrix. The frequency-flat constraint on the analog part of the hybrid precoder and the non-convex nature of the rank constraint are circumvented by transforming the original problem into a more tractable one in which a convenient structure is imposed on the transmit covariance matrix. This structure makes the resulting full-digital precoder particularly suitable for subsequent analog-digital factorization. An additional problem formulation that determines an appropriate power allocation policy under the rank constraint is also provided. Numerical results show that the proposed method outperforms baseline solutions even in practical scenarios with high spatial diversity.
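
The sketch below illustrates only the core rank-constraint idea, under stated assumptions: starting from a surrogate full-rank transmit covariance, keep the r dominant eigenmodes (e.g., one per RF chain) and water-fill the power budget over them. It is not the RCCA algorithm itself, and all dimensions are hypothetical.

```python
import numpy as np

# Rank-constrained covariance sketch: truncate to r eigenmodes, then
# water-fill power over them. Everything here is illustrative.
rng = np.random.default_rng(1)
Nt, r, P = 8, 3, 1.0                        # tx antennas, rank budget, power

A = rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt))
Q = A @ A.conj().T                           # surrogate full-rank covariance
w, V = np.linalg.eigh(Q)
w, V = w[::-1][:r], V[:, ::-1][:, :r]        # r dominant eigenpairs

# Water-filling over the retained eigenmodes.
for k in range(r, 0, -1):
    mu = (P + np.sum(1.0 / w[:k])) / k
    p = mu - 1.0 / w[:k]
    if p.min() >= 0:
        break
Q_r = V[:, :k] @ np.diag(p) @ V[:, :k].conj().T
print("rank:", np.linalg.matrix_rank(Q_r), " power:", np.trace(Q_r).real)
```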

The IoT's vulnerability to network attacks has motivated the design of intrusion detection schemes (IDS) based on Machine Learning (ML), with a low computational cost for online detection but intensive offline learning. Such IDS can attain high attack detection accuracy and are easily installed on servers that communicate with IoT devices. However, they are seldom evaluated in realistic operational conditions, where IDS processing may be held up by the system overload that the attacks themselves create. Thus we first present an experimental study of UDP Flood Attacks on a Local Area Network test-bed whose first line of defence is an accurate IDS using an Auto-Associative Dense Random Neural Network. The experiments reveal that during severe attacks, the packet and protocol management software overloads the multi-core server and paralyses IDS detection. We therefore propose and experimentally evaluate an IDS design in which decisions are made from a very small number of incoming packets, so that attacking traffic is dropped within milliseconds of the start of an attack and the paralysing effect of congestion is avoided.
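
A minimal sketch of the early-decision principle follows, assuming a per-source window of a few packets and a placeholder anomaly score standing in for the Auto-Associative Dense Random Neural Network: once the first few packets of a source are flagged, every later packet from it is dropped with negligible per-packet work.

```python
import time
from collections import defaultdict, deque

# Early-decision packet filter: classify a source from its first few
# packets, then drop the rest once it is flagged. Window size,
# threshold, and score function are illustrative stand-ins.
WINDOW = 5            # packets inspected per source before deciding
THRESHOLD = 0.8       # anomaly score above which the source is dropped

windows = defaultdict(lambda: deque(maxlen=WINDOW))
blocked = set()

def anomaly_score(arrivals):
    # Placeholder: flag very small inter-arrival gaps typical of a flood.
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return 1.0 if gaps and max(gaps) < 1e-3 else 0.0

def handle_packet(src, t=None):
    if src in blocked:
        return "drop"                       # no further per-packet work
    w = windows[src]
    w.append(t if t is not None else time.monotonic())
    if len(w) == WINDOW and anomaly_score(list(w)) > THRESHOLD:
        blocked.add(src)
        return "drop"
    return "forward"

# Simulated flood: eight packets 10 microseconds apart; the first four
# are forwarded, the fifth triggers blocking, the rest are dropped.
for i in range(8):
    print(handle_packet("10.0.0.9", t=i * 1e-5))
```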

This work addresses the challenge of democratizing advanced Large Language Models (LLMs) by compressing their mathematical reasoning capabilities into sub-billion-parameter Small Language Models (SLMs) without compromising performance. We introduce Equation-of-Thought Distillation (EoTD), a novel technique that encapsulates the reasoning process in equation-based representations, which are used to construct an EoTD dataset for fine-tuning SLMs. Additionally, we propose the Ensemble Thoughts Distillation (ETD) framework to further enhance the reasoning performance of SLMs. This involves creating a reasoning dataset with multiple thought processes, including Chain-of-Thought (CoT), Program-of-Thought (PoT), and Equation-of-Thought (EoT), and using it for fine-tuning. Our experimental findings demonstrate that EoTD significantly boosts the reasoning abilities of SLMs, while ETD enables these models to achieve state-of-the-art reasoning performance.
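
As a hedged illustration of what an equation-based rationale might look like (the actual EoTD dataset format may differ), the example below writes the reasoning for a word problem as a system of equations and checks it with SymPy; training an SLM to emit such equations is what makes the rationale executable and verifiable.

```python
import sympy as sp

# One hypothetical EoTD-style training example: the rationale is a
# system of equations rather than free-form prose.
question = ("Tom has twice as many apples as Jane. "
            "Together they have 18 apples. How many does Jane have?")

jane, tom = sp.symbols("jane tom")
equations = [sp.Eq(tom, 2 * jane), sp.Eq(tom + jane, 18)]

# An external solver can verify the emitted rationale automatically.
solution = sp.solve(equations, [jane, tom], dict=True)[0]
print(question)
print("EoT rationale:", equations)
print("answer:", solution[jane])    # -> 6
```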

Integrated sensing and communication (ISAC) has attracted growing interest for enabling future 6G wireless networks, owing to its capability of sharing spectrum and hardware resources between communication and sensing systems. However, existing works on ISAC usually need to modify the communication protocol to cater to the new sensing performance requirements, which may be difficult to implement in practice. In this paper, we study a new intelligent reflecting surface (IRS) aided millimeter-wave (mmWave) ISAC system that exploits the distinct beam scanning operation of mmWave communications to achieve efficient sensing at the same time. First, we propose a two-phase ISAC protocol aided by a semi-passive IRS, consisting of beam scanning and data transmission. Specifically, in the beam scanning phase, the IRS finds the optimal beam for reflecting signals from the base station to a communication user via its passive elements. Meanwhile, the IRS directly estimates the angle of a nearby target based on echo signals from the target using its equipped active sensing element. Then, in the data transmission phase, the sensing accuracy is further improved by leveraging the data signals via possible IRS beam splitting. Next, we derive the achievable rate of the communication user as well as the Cramér-Rao bound and the approximate mean square error of the target angle estimate. Finally, extensive simulation results are provided to verify our analysis as well as the effectiveness of the proposed scheme.
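
For the sensing side, the sketch below numerically evaluates the standard deterministic single-source Cramér-Rao bound for angle estimation at a half-wavelength uniform linear array. The element count, snapshot count, and noise level are illustrative assumptions and do not model the semi-passive IRS geometry.

```python
import numpy as np

# Deterministic single-source CRB for DOA at a half-wavelength ULA:
# CRB = sigma^2 / (2 * sum|s|^2 * (d^H d - |d^H a|^2 / a^H a)),
# where a is the steering vector and d its derivative w.r.t. theta.
M, N, sigma2, theta = 8, 64, 0.1, np.deg2rad(20.0)   # elements, snapshots

m = np.arange(M)
a = np.exp(1j * np.pi * m * np.sin(theta))            # steering vector
d = 1j * np.pi * m * np.cos(theta) * a                # da/dtheta

s_energy = N * 1.0                                    # unit-power echoes
proj = np.abs(d.conj() @ a) ** 2 / (a.conj() @ a).real
crb = sigma2 / (2 * s_energy * ((d.conj() @ d).real - proj))
print("CRB on theta: %.3e rad^2 (~%.4f deg rmse)"
      % (crb, np.rad2deg(np.sqrt(crb))))
```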

Graph neural networks (GNNs) have delivered a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through methods such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs has been unavailable. In this survey, we present a comprehensive review of the explainability techniques developed for GNNs. We categorize explainable graph neural networks according to the explanation methods they use, summarize the common performance metrics for GNN explanations, and point out several future research directions.

The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, which must often stay local because of edge-computation and licensing (data access) constraints. While federated learning (FL) has emerged as a framework for model training without direct data sharing and exchange, effectively modeling complex spatio-temporal dependencies to improve forecasting remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model, the Cross-Node Federated Graph Neural Network (CNFGNN), which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning: the data in a network of nodes is generated locally on each node and remains decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on the edge devices. Experiments on traffic flow forecasting show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring only a modest communication cost.
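
The following sketch shows only the structural split the abstract describes, with assumed toy modules: per-node recurrent encoders model temporal dynamics locally, and a server-side graph layer propagates the resulting embeddings spatially. Gradient updates and federated averaging details are omitted, and none of the module choices come from the CNFGNN paper.

```python
import torch
import torch.nn as nn

# Device/server split: nodes encode their own series locally; only the
# final hidden states (not raw data) reach the server's graph layer.
torch.manual_seed(0)
num_nodes, seq_len, hidden = 4, 12, 16

local_models = [nn.GRU(1, hidden, batch_first=True) for _ in range(num_nodes)]
server_gnn = nn.Linear(hidden, hidden)               # stand-in graph layer
adj = torch.ones(num_nodes, num_nodes) / num_nodes   # toy dense graph

data = torch.randn(num_nodes, 1, seq_len, 1)         # one series per node

for round_ in range(3):   # alternating rounds (forward structure only)
    # 1) Local step: each node encodes its series on-device.
    embeds = torch.stack([local_models[i](data[i])[1][-1, 0]
                          for i in range(num_nodes)])
    # 2) Server step: propagate embeddings over the graph.
    spatial = torch.relu(server_gnn(adj @ embeds))
    print(f"round {round_}: embedding norm {spatial.norm():.3f}")
```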

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. For each category, we analyze the accuracy, advantages, and disadvantages of the techniques, along with potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
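
As a toy instance of category (1), the snippet below applies symmetric post-training int8 quantization to a weight vector; real schemes add per-channel scales, calibration data, and pruning masks, so this only conveys the core memory-saving idea.

```python
import numpy as np

# Symmetric post-training quantization: map float32 weights to int8
# with a single affine scale, then dequantize to measure the error.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)      # float32 weights

scale = np.abs(w).max() / 127.0                       # symmetric int8 range
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_deq = w_q.astype(np.float32) * scale

print("bytes: %d -> %d" % (w.nbytes, w_q.nbytes))     # 4x smaller
print("max abs error:", np.abs(w - w_deq).max())
```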

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory-intensive, hindering their deployment in devices with limited memory or in applications with strict latency requirements. A natural response is therefore to compress and accelerate deep networks without significantly decreasing model performance, and tremendous progress has been made in this area over the past few years. In this paper, we survey recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, and the other techniques are introduced afterwards. For each scheme, we provide insightful analysis of the performance, related applications, advantages, and drawbacks. We then cover a few very recent successful methods, such as dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible directions on this topic.
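
To make the low-rank factorization scheme concrete, the toy example below replaces a dense weight matrix with two thin factors from a truncated SVD; the matrix sizes and retained rank are arbitrary assumptions.

```python
import numpy as np

# Low-rank factorization: approximate W (256 x 512) by A @ B with
# A (256 x r) and B (r x 512), cutting parameters and multiply-adds.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512)).astype(np.float32)

r = 32                                          # retained rank
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]                            # 256 x 32
B = Vt[:r]                                      # 32 x 512

print("params: %d -> %d" % (W.size, A.size + B.size))
print("relative error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```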

Convolutional networks (ConvNets) have achieved great success in various challenging vision tasks. However, the performance of ConvNets degrades when encountering a domain shift. Domain adaptation is especially significant, yet challenging, in biomedical image analysis, where cross-modality data have largely different distributions. Given that annotating medical data is especially expensive, supervised transfer learning approaches are not optimal. In this paper, we propose an unsupervised domain adaptation framework with adversarial learning for cross-modality biomedical image segmentation. Specifically, our model is based on a dilated fully convolutional network for pixel-wise prediction. Moreover, we build a plug-and-play domain adaptation module (DAM) that maps target inputs to features aligned with the source-domain feature space, and a domain critic module (DCM) that discriminates between the feature spaces of the two domains. We optimize the DAM and DCM via an adversarial loss without using any target-domain labels. The proposed method is validated by adapting a ConvNet trained on MRI images to unpaired CT data for cardiac structure segmentation, and it achieves very promising results.
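
The sketch below mirrors the adversarial objective just described, with placeholder networks: a critic (standing in for the DCM) learns to separate source features from DAM-produced target features, while the DAM is updated to fool it, using no target labels. None of the architectures or sizes are the paper's.

```python
import torch
import torch.nn as nn

# GAN-style feature alignment: the DCM labels source features 1 and
# target features 0; the DAM is trained to make the DCM output 1 on
# target features, aligning the two feature spaces.
torch.manual_seed(0)
dam = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 32))
dcm = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))
opt_dam = torch.optim.Adam(dam.parameters(), lr=1e-3)
opt_dcm = torch.optim.Adam(dcm.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

feats_src = torch.randn(128, 32)           # frozen source-domain features
x_tgt = torch.randn(128, 64)               # unlabeled target inputs

for step in range(100):
    # Critic step: separate source features from target features.
    feats_tgt = dam(x_tgt).detach()
    d_loss = bce(dcm(feats_src), torch.ones(128, 1)) + \
             bce(dcm(feats_tgt), torch.zeros(128, 1))
    opt_dcm.zero_grad(); d_loss.backward(); opt_dcm.step()

    # DAM step: make target features look like source to the critic.
    g_loss = bce(dcm(dam(x_tgt)), torch.ones(128, 1))
    opt_dam.zero_grad(); g_loss.backward(); opt_dam.step()

print("final critic loss %.3f, DAM loss %.3f" % (d_loss.item(), g_loss.item()))
```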
