
This paper focuses on advancing outdoor wireless systems to better support ubiquitous extended reality (XR) applications and to close the gap with current indoor wireless transmission capabilities. We propose a hybrid knowledge-data driven method for channel semantic acquisition and multi-user beamforming in cell-free massive multiple-input multiple-output (MIMO) systems. Specifically, we first propose a data-driven multilayer perceptron (MLP)-Mixer-based auto-encoder for channel semantic acquisition, where the pilot signals, the CSI quantizer for channel semantic embedding, and the CSI reconstruction for channel semantic extraction are jointly optimized in an end-to-end manner. Moreover, based on the acquired channel semantics, we further propose a knowledge-driven deep-unfolding multi-user beamformer, which is capable of achieving good spectral efficiency with robustness to imperfect CSI in outdoor XR scenarios. By unfolding the conventional successive over-relaxation (SOR)-based linear beamforming scheme with deep learning, the proposed beamforming scheme adaptively learns the optimal parameters to accelerate convergence and improve robustness to imperfect CSI. The proposed deep-unfolding beamforming scheme can be applied to access points (APs) with fully digital arrays as well as APs with hybrid analog-digital array structures. Simulation results demonstrate the effectiveness of our proposed scheme in improving the accuracy of channel acquisition, as well as in reducing the complexity of both CSI acquisition and beamformer design. The proposed beamforming method achieves approximately 96% of the converged spectral efficiency after only three iterations in downlink transmission, demonstrating its efficacy and potential to enhance outdoor XR applications.
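As a rough illustration of the unfolding idea (not the paper's exact network), the sketch below runs a few successive over-relaxation sweeps on the regularized MMSE normal equations that a linear multi-user beamformer solves; the per-iteration relaxation factors in `omegas` stand in for the parameters a deep-unfolded beamformer would learn, and all shapes and values are assumptions for the toy example.

```python
import numpy as np

def sor_mmse_precode(H, s, noise_var, omegas):
    """Approximate the regularized precoding solve (H H^H + sigma^2 I) x = s with a
    few SOR sweeps; `omegas` holds per-sweep relaxation factors, which a deep-unfolded
    beamformer would learn instead of fixing by hand."""
    K = H.shape[0]                      # number of single-antenna users
    A = H @ H.conj().T + noise_var * np.eye(K)
    x = np.zeros(K, dtype=complex)
    for omega in omegas:                # one SOR sweep per unfolded layer
        for i in range(K):
            interf = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (s[i] - interf) / A[i, i]
    return H.conj().T @ x               # precoded transmit vector

# Toy example: 8-antenna AP serving 4 users, three unfolded iterations.
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))) / np.sqrt(2)
s = rng.standard_normal(4) + 1j * rng.standard_normal(4)
x_precoded = sor_mmse_precode(H, s, noise_var=0.1, omegas=[1.3, 1.2, 1.1])
```

Truncating to three sweeps mirrors the few-iteration operating point reported in the abstract, with the learned per-layer parameters compensating for the early stop.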

Related Content

Terahertz (THz) communication is widely deemed the next frontier of wireless networks owing to the abundant spectrum resources in the THz band. Whilst THz signals suffer from severe propagation losses, a massive antenna array can be deployed at the base station (BS) to mitigate those losses through beamforming. Nevertheless, a very large number of antennas increases the BS's hardware complexity and power consumption, and hence can lead to poor energy efficiency (EE). To surmount this fundamental problem, we propose a novel array design based on superdirectivity and nonuniform inter-element spacing. Specifically, we exploit the mutual coupling between closely spaced elements to form superdirective pairs. A unique property of these pairs is that they all require the same excitation amplitude, and thus can be driven by a single radio-frequency chain, akin to conventional phased arrays. Moreover, they facilitate multi-port impedance matching, which ensures maximum power transfer for any beamforming angle. After addressing the implementation issues of superdirectivity, we show that the number of BS antennas can be effectively reduced without sacrificing the achievable rate. Simulation results demonstrate that our design offers huge EE gains compared to uncoupled arrays with uniform spacing, and hence could be a radical solution for future THz systems.
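To make the notion of superdirectivity concrete, the snippet below numerically integrates the power pattern of a two-element pair with equal excitation amplitudes and compares a conventional half-wavelength spacing against a closely spaced, nearly out-of-phase pair. This is a pattern-only back-of-the-envelope check that ignores the mutual-coupling, matching, and efficiency issues the paper actually addresses, and the spacings and phases are arbitrary choices rather than values from the paper.

```python
import numpy as np

def pair_directivity(spacing_wl, phase_rad, n=4001):
    """Peak directivity of two isotropic elements on the z-axis with equal excitation
    amplitudes and a relative phase, from numerical integration of the power pattern
    (radiation pattern only; matching and ohmic losses are ignored)."""
    theta = np.linspace(0.0, np.pi, n)
    psi = 2 * np.pi * spacing_wl * np.cos(theta)          # inter-element path phase
    pattern = np.abs(np.exp(1j * psi / 2) + np.exp(1j * (phase_rad - psi / 2))) ** 2
    p_rad = 2 * np.pi * np.sum(pattern * np.sin(theta)) * (theta[1] - theta[0])
    return 4 * np.pi * pattern.max() / p_rad

print(pair_directivity(0.50, 0.0))      # conventional in-phase pair, half-wavelength apart
print(pair_directivity(0.05, np.pi))    # closely spaced, out-of-phase (superdirective) pair
```

The second call returns a peak directivity near 3 versus roughly 2 for the first, even though the pair occupies a tenth of the spacing; realizing that gain in practice is precisely the coupling and matching problem the proposed design tackles.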

We present CEMA: Causal Explanations in Multi-Agent systems, a general framework for creating causal explanations of an agent's decisions in sequential multi-agent systems. The core of CEMA is a novel causal selection method inspired by how humans select causes for explanations. Unlike prior work that assumes a specific causal structure, CEMA is applicable whenever a probabilistic model for predicting future states of the environment is available. Given such a model, CEMA samples counterfactual worlds that inform us about the salient causes behind the agent's decisions. We evaluate CEMA on the task of motion planning for autonomous driving and test it in diverse simulated scenarios. We show that CEMA correctly and robustly identifies the causes behind decisions, even when a large number of agents are present, and show via a user study that CEMA's explanations have a positive effect on participants' trust in autonomous vehicles (AVs) and are rated at least as good as high-quality human explanations elicited from other participants.
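As a loose, hypothetical sketch of counterfactual cause scoring (CEMA's actual selection method is more involved), the snippet below intervenes on one state feature, samples futures from a stand-in probabilistic model, and counts how often the agent's decision flips; `sample_future` and `decide` are toy stand-ins, not CEMA's interfaces.

```python
import numpy as np

def causal_salience(sample_future, decide, state, feature, cf_value, n=200, seed=0):
    """Counterfactual scoring sketch: intervene on one state feature, sample futures
    from a probabilistic model, and count how often the agent's decision flips
    relative to the factual decision."""
    rng = np.random.default_rng(seed)
    factual = decide(sample_future(state, rng))
    cf_state = {**state, feature: cf_value}                     # do(feature := cf_value)
    flips = sum(decide(sample_future(cf_state, rng)) != factual for _ in range(n))
    return flips / n                                            # higher => more salient cause

# Toy stand-ins for the probabilistic model and the planning agent.
sample_future = lambda s, rng: s["lead_gap"] + rng.normal(0, 0.5)   # predicted gap to lead car
decide = lambda gap: "overtake" if gap > 2.0 else "follow"
print(causal_salience(sample_future, decide, {"lead_gap": 3.0}, "lead_gap", 1.0))
```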

This article addresses the problem of Ultra Reliable Low Latency Communications (URLLC) in wireless networks, a framework with particularly stringent constraints imposed by many Internet of Things (IoT) applications from diverse sectors. We propose a novel Deep Reinforcement Learning (DRL) scheduling algorithm, named NOMA-PPO, to solve the Non-Orthogonal Multiple Access (NOMA) uplink URLLC scheduling problem involving strict deadlines. The challenge of addressing uplink URLLC requirements in NOMA systems stems from the combinatorial complexity of the action space, due to the possibility of scheduling multiple devices, and from the partial observability constraint that we impose on our algorithm in order to meet the IoT communication constraints and remain scalable. Our approach involves 1) formulating the NOMA-URLLC problem as a Partially Observable Markov Decision Process (POMDP) and introducing an agent state that serves as a sufficient statistic of past observations and actions, enabling a transformation of the POMDP into a Markov Decision Process (MDP); 2) adapting the Proximal Policy Optimization (PPO) algorithm to handle the combinatorial action space; 3) incorporating prior knowledge into the learning agent through a Bayesian policy. Numerical results reveal that our approach not only outperforms traditional multiple access protocols and DRL benchmarks on 3GPP scenarios, but also proves robust under various channel and traffic configurations, efficiently exploiting inherent time correlations.
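One common way to tame a combinatorial scheduling action space, which may differ from NOMA-PPO's exact adaptation, is to factorize the joint action into an independent schedule/no-schedule decision per device and sum the per-device log-probabilities inside the standard PPO clipped objective; the sketch below illustrates that pattern with assumed dimensions.

```python
import torch
import torch.nn as nn

class FactorizedSchedulingPolicy(nn.Module):
    """Emit an independent Bernoulli 'schedule or not' probability per device, so the
    joint action is a binary vector rather than an index into 2^N discrete actions."""
    def __init__(self, obs_dim, n_devices, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_devices))
    def forward(self, agent_state):
        dist = torch.distributions.Bernoulli(logits=self.net(agent_state))
        action = dist.sample()
        return action, dist.log_prob(action).sum(-1)   # joint log-prob = sum over devices

def ppo_clip_loss(new_logp, old_logp, advantage, eps=0.2):
    """Standard PPO clipped surrogate applied to the joint (summed) log-probabilities."""
    ratio = torch.exp(new_logp - old_logp)
    return -torch.min(ratio * advantage,
                      torch.clamp(ratio, 1 - eps, 1 + eps) * advantage).mean()

policy = FactorizedSchedulingPolicy(obs_dim=16, n_devices=8)
actions, logp = policy(torch.randn(4, 16))             # batch of 4 agent states
```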

To support the extremely high spectral efficiency and energy efficiency requirements, and the emerging applications of future wireless communications, holographic multiple-input multiple-output (H-MIMO) technology is envisioned as one of the most promising enablers. It can potentially bring extra degrees of freedom for communications and signal processing, including spatial multiplexing in line-of-sight (LoS) channels and electromagnetic (EM) field processing performed using specialized devices, to attain the fundamental limits of wireless communications. In this context, EM-domain channel modeling is critical to harvest the benefits offered by H-MIMO. Existing EM-domain channel models are built on the tensor Green function and require prior knowledge of the global positions and/or the relative distances and directions of the transmit/receive antenna elements. Such knowledge may be difficult to acquire in real-world applications because of the extensive measurements needed to obtain it. To overcome this limitation, we propose a transmit-receive parameter separable channel model methodology in which the EM-domain (or holographic) channel can be acquired simply from the distance and direction measured between the center points of the transmit and receive surfaces, together with the locally known positions of the transmit and receive elements, thus avoiding extensive global parameter measurements. Analysis and numerical results showcase the effectiveness of the proposed channel modeling approach in approximating the H-MIMO channel and achieving the theoretical channel capacity.
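A minimal numerical sketch of the idea, under assumptions that differ from the paper's exact methodology: the scalar Green-function channel between two small facing surfaces is rebuilt from only the center-to-center distance and direction plus the local element offsets, using a second-order expansion of the element-pair distance, and compared against the channel computed from global positions.

```python
import numpy as np

def green_channel(tx_pts, rx_pts, wavelength):
    """Scalar free-space Green-function channel between all element pairs, built
    from global element positions (the kind of knowledge the paper tries to avoid)."""
    k = 2 * np.pi / wavelength
    d = np.linalg.norm(rx_pts[:, None, :] - tx_pts[None, :, :], axis=-1)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

def separable_channel(tx_local, rx_local, d0, c_hat, wavelength):
    """Rough parameter-separable approximation: element-pair distances are rebuilt
    from the center-to-center distance d0 and direction c_hat plus the locally known
    element offsets on each surface (second-order expansion of the distance)."""
    k = 2 * np.pi / wavelength
    delta = rx_local[:, None, :] - tx_local[None, :, :]          # q_n - p_m
    along = delta @ c_hat                                        # component along the link axis
    d_approx = d0 + along + (np.sum(delta ** 2, axis=-1) - along ** 2) / (2 * d0)
    return np.exp(-1j * k * d_approx) / (4 * np.pi * d_approx)

# Toy setup: two 4x4 half-wavelength-spaced surfaces facing each other, 20 wavelengths apart.
wl = 0.01
g = (np.arange(4) - 1.5) * wl / 2
local = np.array([[x, y, 0.0] for x in g for y in g])            # same local grid on both surfaces
c_hat, d0 = np.array([0.0, 0.0, 1.0]), 20 * wl
H_exact = green_channel(local, local + d0 * c_hat, wl)
H_sep = separable_channel(local, local, d0, c_hat, wl)
print(np.linalg.norm(H_exact - H_sep) / np.linalg.norm(H_exact)) # small relative error expected
```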

The next generation of wireless communication technology is anticipated to address the communication reliability challenges encountered in high-speed mobile communication scenarios. The Orthogonal Time Frequency Space (OTFS) system has been introduced as a solution that effectively mitigates these issues. However, OTFS is associated with relatively high pilot overhead and multiuser multiplexing overhead. In response to these concerns with the OTFS framework, a novel modulation technology known as Affine Frequency Division Multiplexing (AFDM), based on the discrete affine Fourier transform, has emerged. AFDM effectively resolves these challenges by achieving full diversity through parameter adjustments aligned with the channel's delay-Doppler profile, and is consequently capable of achieving performance levels comparable to OTFS. As research on AFDM detection is currently limited, we present a low-complexity yet efficient message passing (MP) algorithm that handles joint interference cancellation and detection while capitalizing on the inherent sparsity of the channel. Simulation results show that the MP detection algorithm outperforms Minimum Mean Square Error (MMSE) and Maximal Ratio Combining (MRC) detection techniques.
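The sketch below is a simplified Gaussian message-passing detector for a generic sparse linear model y = Hx + n, in the spirit of the OTFS-style MP recipe rather than the paper's exact AFDM algorithm: interference from the other symbols is approximated as Gaussian, and only the nonzero entries of the effective channel matrix are visited.

```python
import numpy as np

def mp_detect(H, y, alphabet, noise_var, n_iter=15, damping=0.6):
    """Simplified Gaussian message-passing detection for y = H x + n with a sparse H:
    per observation, cancel each symbol's own contribution, treat the rest as Gaussian
    interference, and update per-symbol beliefs with damping."""
    n_obs, n_var = H.shape
    nz = [np.flatnonzero(H[i]) for i in range(n_obs)]            # sparsity pattern per row
    p = np.full((n_var, len(alphabet)), 1.0 / len(alphabet))     # symbol beliefs
    for _ in range(n_iter):
        mean = p @ alphabet                                      # E[x_j]
        var = p @ (np.abs(alphabet) ** 2) - np.abs(mean) ** 2    # Var[x_j]
        new_p = np.zeros_like(p)
        for i in range(n_obs):
            idx = nz[i]
            mu_i = H[i, idx] @ mean[idx]
            sig_i = np.abs(H[i, idx]) ** 2 @ var[idx] + noise_var
            for j in idx:
                mu_ij = mu_i - H[i, j] * mean[j]                 # interference cancellation
                sig_ij = sig_i - np.abs(H[i, j]) ** 2 * var[j]
                new_p[j] += -np.abs(y[i] - mu_ij - H[i, j] * alphabet) ** 2 / sig_ij
        new_p = np.exp(new_p - new_p.max(axis=1, keepdims=True))
        new_p /= new_p.sum(axis=1, keepdims=True)
        p = damping * new_p + (1 - damping) * p
    return alphabet[np.argmax(p, axis=1)]

# Toy check: random sparse channel, QPSK symbols.
rng = np.random.default_rng(1)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))) * (rng.random((32, 32)) < 0.1)
x = qpsk[rng.integers(0, 4, 32)]
y = H @ x + 0.05 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))
print(np.mean(mp_detect(H, y, qpsk, noise_var=0.005) == x))      # symbol-detection accuracy
```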

The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, given edge-computation requirements and licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without requiring direct data sharing and exchange, effectively modeling the complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model -- Cross-Node Federated Graph Neural Network (CNFGNN) -- which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes be generated locally on each node and remain decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on the edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring a modest communication cost.
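A minimal forward-pass sketch of this node/server split, with assumed module sizes and a toy three-node graph (CNFGNN's actual architecture and alternating training procedure are more elaborate): each device encodes its local history into an embedding, the server mixes embeddings over the graph, and each device decodes its own forecast, so raw time series never leave the devices.

```python
import torch
import torch.nn as nn

class NodeModel(nn.Module):
    """Runs on each sensor/edge device: encodes the local history and decodes a
    forecast once the server returns a graph-aware embedding (simplified stand-in)."""
    def __init__(self, hidden=64, horizon=12):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.decoder = nn.Linear(2 * hidden, horizon)
    def encode(self, history):                  # history: (1, T, 1) local time series
        _, h = self.encoder(history)
        return h[-1]                            # (1, hidden) summary sent to the server
    def decode(self, local_emb, graph_emb):
        return self.decoder(torch.cat([local_emb, graph_emb], dim=-1))

class ServerGNN(nn.Module):
    """Runs on the server: mixes uploaded node embeddings over the sensor graph with
    one normalized-adjacency propagation step (a minimal GNN stand-in)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lin = nn.Linear(hidden, hidden)
    def forward(self, node_embs, adj_norm):     # node_embs: (N, hidden)
        return torch.relu(self.lin(adj_norm @ node_embs))

# One communication round on a toy 3-node graph; alternating optimization would update
# NodeModel and ServerGNN in turn, exchanging only embeddings, never raw data.
nodes = [NodeModel() for _ in range(3)]
server = ServerGNN()
histories = [torch.randn(1, 24, 1) for _ in range(3)]
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) + torch.eye(3)
adj_norm = adj / adj.sum(dim=1, keepdim=True)
local = torch.cat([n.encode(h) for n, h in zip(nodes, histories)])      # uploaded, (3, 64)
graph = server(local, adj_norm)                                         # returned, (3, 64)
forecasts = [n.decode(local[i:i + 1], graph[i:i + 1]) for i, n in enumerate(nodes)]
```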

Data transmission between two or more digital devices in industry and government demands secure and agile technology. Digital information distribution often requires the deployment of Internet of Things (IoT) devices and data fusion techniques, which have gained popularity in both civilian and military environments, as seen in the emergence of Smart Cities and the Internet of Battlefield Things (IoBT). This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically results in a complex Big Data problem. Due to the potentially sensitive nature of IoT datasets, blockchain technology is used to facilitate their secure sharing, allowing digital information to be distributed but not copied. However, blockchain has several limitations related to complexity, scalability, and excessive energy consumption. We propose an approach to hide information (a sensor signal) by transforming it into an image or an audio signal. As part of recent military modernization efforts, we study a sensor fusion approach, examine the challenges of enabling intelligent identification and detection operations, and demonstrate the feasibility of the proposed deep learning and anomaly detection models, which can support future applications such as a hand-gesture alert system based on wearable devices.
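A minimal sketch of the "transform a sensor signal into an image" step; the abstract does not specify the exact transform, so this spectrogram-style mapping is just one plausible choice: a windowed FFT turns a 1-D signal into a 2-D magnitude map scaled to an 8-bit grayscale image.

```python
import numpy as np

def signal_to_image(signal, win=64, hop=32):
    """Turn a 1-D sensor signal into a 2-D time-frequency magnitude map and scale it
    to an 8-bit grayscale image (one plausible signal-to-image carrier, not
    necessarily the paper's transform)."""
    frames = np.stack([signal[i:i + win] * np.hanning(win)
                       for i in range(0, len(signal) - win + 1, hop)])
    mag = np.abs(np.fft.rfft(frames, axis=1)).T          # (frequency bins, time frames)
    db = 20 * np.log10(mag + 1e-9)
    img = (255 * (db - db.min()) / (db.max() - db.min())).astype(np.uint8)
    return img                                           # save with any image library

# Toy accelerometer-like signal: a 5 Hz tone plus noise, sampled at 100 Hz.
t = np.arange(0, 10, 0.01)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
print(signal_to_image(x).shape)                          # e.g. (33, 30) grayscale image
```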

Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains a challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm PinSage, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure and node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. We deploy PinSage at Pinterest and train it on 7.5 billion examples on a graph with 3 billion nodes representing pins and boards, and 18 billion edges. According to offline metrics, user studies and A/B tests, PinSage generates higher-quality recommendations than comparable deep learning and graph-based alternatives. To our knowledge, this is the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.
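The random-walk neighborhood idea can be sketched in a few lines: run short random walks from a node and keep the top-T most-visited nodes, with normalized visit counts serving as importance-pooling weights; the hyperparameters below are illustrative, not production settings.

```python
import random
from collections import Counter

def importance_neighborhood(adj, node, n_walks=200, walk_len=3, top_t=5, seed=0):
    """Random-walk neighborhood construction in the spirit of PinSage: simulate short
    walks from `node`, keep the top-T most-visited nodes, and return their normalized
    visit counts as importance weights for pooling."""
    rng = random.Random(seed)
    visits = Counter()
    for _ in range(n_walks):
        cur = node
        for _ in range(walk_len):
            cur = rng.choice(adj[cur])
            visits[cur] += 1
    visits.pop(node, None)                       # exclude the starting node itself
    top = visits.most_common(top_t)
    total = sum(c for _, c in top)
    return [(nbr, c / total) for nbr, c in top]  # (neighbor, importance weight)

# Toy item graph given as an adjacency list.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
print(importance_neighborhood(adj, node=0))
```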

This paper introduces an online model for object detection in videos designed to run in real-time on low-powered mobile and embedded devices. Our approach combines fast single-image object detection with convolutional long short-term memory (LSTM) layers to create an interweaved recurrent-convolutional architecture. Additionally, we propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. Our network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames. This approach is substantially faster than existing detection methods in video, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the ImageNet VID 2015 dataset. Our model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.
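Below is a simplified convolutional LSTM cell with a channel bottleneck, loosely modeled on the Bottleneck-LSTM idea rather than an exact reimplementation: the input features and previous hidden state are first squeezed to a few channels so the gate convolutions operate on a much cheaper tensor. All channel counts and spatial sizes are illustrative.

```python
import torch
import torch.nn as nn

class BottleneckConvLSTMCell(nn.Module):
    """Convolutional LSTM cell with a channel bottleneck: input and previous hidden
    state are squeezed by a 1x1 convolution before the 3x3 gate convolution."""
    def __init__(self, in_ch, hid_ch, bottleneck_ch):
        super().__init__()
        self.bottleneck = nn.Conv2d(in_ch + hid_ch, bottleneck_ch, kernel_size=1)
        self.gates = nn.Conv2d(bottleneck_ch, 4 * hid_ch, kernel_size=3, padding=1)
    def forward(self, x, state):
        h, c = state
        b = torch.relu(self.bottleneck(torch.cat([x, h], dim=1)))
        i, f, g, o = torch.chunk(self.gates(b), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Refine per-frame feature maps from a detector backbone across a short clip.
cell = BottleneckConvLSTMCell(in_ch=32, hid_ch=32, bottleneck_ch=8)
h = c = torch.zeros(1, 32, 20, 20)
for frame_feat in torch.randn(5, 1, 32, 20, 20):        # 5 frames of backbone features
    refined, (h, c) = cell(frame_feat, (h, c))
```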

In this paper, we propose jointly learned attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such network architectures typically requires pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process, so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by adapting the technique of beam search, prediction of multiple labels can be efficiently achieved by our proposed network model.
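To illustrate how beam search can turn sequential label predictions into an order-free label set (a generic sketch, not the paper's exact inference procedure), the snippet below keeps the top-scoring partial label sequences, allows an explicit stop symbol, and returns the label set of the best finished hypothesis; `step_scores` is a hypothetical interface standing in for the attention/LSTM model's per-step output.

```python
import numpy as np

def beam_search_labels(step_scores, n_labels, beam_width=3, max_len=4):
    """Beam search over label sequences: `step_scores(prefix)` is assumed to return
    log-probabilities over the next label plus a final stop symbol, given the labels
    predicted so far. The best finished hypothesis is returned as an unordered set."""
    beams = [((), 0.0)]                              # (label prefix, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            logp = step_scores(prefix)               # length n_labels + 1 (last = stop)
            for lbl in range(n_labels):
                if lbl not in prefix:                # each label predicted at most once
                    candidates.append((prefix + (lbl,), score + logp[lbl]))
            finished.append((prefix, score + logp[-1]))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    finished += beams
    best = max(finished, key=lambda b: b[1])
    return set(best[0])                              # predicted label set, order-free

# Toy scorer: labels 0 and 2 are likely, and the model prefers to stop after two labels.
def toy_scores(prefix):
    logits = np.full(4, -3.0)                        # 3 labels + stop symbol
    logits[[0, 2]] = 1.0
    logits[-1] = 2.0 if len(prefix) >= 2 else -2.0
    return logits - np.log(np.exp(logits).sum())

print(beam_search_labels(toy_scores, n_labels=3))    # expected: {0, 2}
```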
