
Cell-free massive MIMO is emerging as a promising technology for future wireless communication systems, as it is expected to offer more uniform coverage and higher spectral efficiency than classical cellular systems. In this paper, we study how cell-free massive MIMO can support federated edge learning. Taking advantage of the additive nature of the wireless multiple access channel, over-the-air computation is exploited, whereby the clients send their local updates simultaneously over the same communication resource. This approach, known as over-the-air federated learning (OTA-FL), has been shown to alleviate the communication overhead of federated learning over wireless networks. Accounting for channel correlation and assuming only imperfect channel state information at the central server, we propose a practical implementation of OTA-FL over cell-free massive MIMO. The convergence of the proposed implementation is studied analytically and experimentally, confirming the benefits of cell-free massive MIMO for OTA-FL.
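The core mechanism can be illustrated with a toy simulation. The sketch below is a minimal, hypothetical example (10 clients, 8 parameters, perfect channel inversion; all values invented for illustration rather than taken from the paper): clients transmit pre-equalized updates simultaneously, and the channel's additive nature delivers a noisy sum from which the server reads off the average update.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim = 10, 8        # hypothetical sizes, not from the paper
noise_std = 0.05

# Local model updates (e.g., gradients) held by each client.
updates = rng.normal(size=(num_clients, dim))

# Real-valued flat-fading gains known at the clients (perfect CSI here,
# unlike the paper's imperfect-CSI setting, to keep the sketch minimal).
h = np.abs(rng.normal(size=num_clients)) + 0.1

# Each client pre-inverts its channel so the signals add up coherently.
tx_signals = updates / h[:, None]

# The multiple access channel sums all simultaneous transmissions.
rx = (h[:, None] * tx_signals).sum(axis=0) + rng.normal(scale=noise_std, size=dim)

# The server's estimate of the average update, obtained "over the air".
ota_avg = rx / num_clients
true_avg = updates.mean(axis=0)
print("estimation error:", np.linalg.norm(ota_avg - true_avg))
```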

Related content

Offloading is a popular way to overcome the resource and power constraints of networked embedded devices, which are increasingly found in industrial environments. It involves moving resource-intensive computational tasks to a more powerful device on the network, often in close proximity to enable wireless communication. However, many Industrial Internet of Things (IIoT) applications have real-time constraints, and offloading such tasks over a wireless network with latency uncertainties poses new challenges. In this paper, we aim to better understand these challenges by proposing a system architecture and scheduler for real-time task offloading in wireless IIoT environments. Based on a prototype, we then evaluate different system configurations and discuss their trade-offs and implications. Our design was shown to prevent deadline misses under high load and network uncertainties, and it outperformed a reference scheduler in terms of successful task throughput: under heavy task load, where the reference scheduler had a success rate of 5%, our design achieved a success rate of 60%.
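As a rough illustration of deadline-aware offloading, the sketch below implements a generic earliest-deadline-first scheduler with an admission test under a worst-case latency bound; it is not the paper's scheduler, and the task names, latencies, and deadlines are hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    deadline_ms: float
    name: str = field(compare=False)
    exec_ms: float = field(compare=False)

def schedule(tasks, net_latency_ms, now_ms=0.0):
    """EDF with a simple admission test under a worst-case latency bound.

    `net_latency_ms` is a hypothetical upper bound on round-trip network
    latency; tasks that cannot meet their deadline even under this bound
    are rejected up front instead of missing the deadline later.
    """
    queue = list(tasks)
    heapq.heapify(queue)          # pops by earliest deadline first
    t, done, rejected = now_ms, [], []
    while queue:
        task = heapq.heappop(queue)
        finish = t + net_latency_ms + task.exec_ms
        if finish <= task.deadline_ms:
            t = finish
            done.append(task.name)
        else:
            rejected.append(task.name)
    return done, rejected

done, rejected = schedule(
    [Task(50, "ctrl", 5), Task(30, "vision", 25), Task(200, "log", 10)],
    net_latency_ms=8,
)
print("completed:", done, "| rejected:", rejected)
```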

Collaborative decision-making is an essential capability for multi-robot systems, such as connected vehicles, to collaboratively control autonomous vehicles in accident-prone scenarios. Under limited communication bandwidth, capturing comprehensive situational awareness by integrating connected agents' observations is very challenging. In this paper, we propose a novel collaborative decision-making method that efficiently and effectively integrates collaborators' representations to control the ego vehicle in accident-prone scenarios. Our approach formulates collaborative decision-making as a classification problem. We first represent sequences of raw observations as spatiotemporal graphs, which significantly reduces the size of the payload shared among connected vehicles. Then we design a novel spatiotemporal graph neural network based on heterogeneous graph learning, which analyzes spatial and temporal connections of objects in a unified way for collaborative decision-making. We evaluate our approach using a high-fidelity simulator that considers realistic traffic, communication bandwidth, and vehicle sensing among connected autonomous vehicles. The experimental results show that our representation achieves an over 100x reduction in shared data size while meeting the communication bandwidth requirements of connected autonomous driving. In addition, our approach achieves over 30% improvement in driving safety.
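To make the representation concrete, here is a minimal sketch of building a spatiotemporal graph from per-frame object detections, with spatial edges between nearby objects and temporal edges linking the same object across frames. The fields, the distance threshold, and the `networkx` encoding are illustrative assumptions, not the paper's exact construction.

```python
import itertools
import networkx as nx

def build_st_graph(frames, dist_thresh=10.0):
    """frames: list of dicts mapping object_id -> (x, y) position.

    Spatial edges join objects closer than `dist_thresh` within a frame;
    temporal edges join the same object id across consecutive frames.
    Sharing a graph like this instead of raw sensor data is what shrinks
    the payload exchanged between connected vehicles.
    """
    g = nx.Graph()
    for t, objs in enumerate(frames):
        for oid, pos in objs.items():
            g.add_node((t, oid), pos=pos)
        # Spatial edges within the frame.
        for (a, pa), (b, pb) in itertools.combinations(objs.items(), 2):
            if ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5 < dist_thresh:
                g.add_edge((t, a), (t, b), kind="spatial")
        # Temporal edges to the previous frame.
        if t > 0:
            for oid in objs.keys() & frames[t - 1].keys():
                g.add_edge((t - 1, oid), (t, oid), kind="temporal")
    return g

g = build_st_graph([{1: (0, 0), 2: (3, 4)}, {1: (1, 0), 2: (3, 9)}])
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
```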

Nonlinear distortion stemming from low-cost power amplifiers may severely affect wireless communication performance through out-of-band (OOB) radiation and in-band distortion. The distortion is correlated between different transmit antennas in an antenna array, which results in a beamforming gain at the receiver side that grows with the number of antennas. In this paper, we investigate how the strength of the distortion is affected by the frequency selectivity of the channel. A closed-form expression for the received distortion power is derived as a function of the number of multipath components (MPCs) and the delay spread, which highlights their impact. The analysis, verified via numerical simulations, reveals that as the number of MPCs increases, the distortion exhibits distinct characteristics at in-band and OOB frequencies. It is shown that the received in-band and OOB distortion power is inversely proportional to the number of MPCs; moreover, as the delay spread narrows, the in-band distortion is beamformed toward the intended user, yielding higher received in-band distortion power than OOB distortion.
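The reported scaling can be probed numerically. The Monte Carlo sketch below uses a generic memoryless third-order PA model (y = x + a3 * x * |x|^2), i.i.d. Rayleigh taps, and per-subcarrier maximum-ratio precoding, all chosen for illustration rather than taken from the paper; it estimates how the received in-band distortion power varies with the number of MPCs.

```python
import numpy as np

rng = np.random.default_rng(1)

def rx_distortion_power(num_mpc, m_ant=32, n_sc=128, a3=0.05, trials=20):
    """Monte Carlo estimate of received in-band distortion power under
    per-subcarrier maximum-ratio precoding and a memoryless third-order
    PA model (all parameter values are illustrative assumptions)."""
    acc = 0.0
    for _ in range(trials):
        # num_mpc-tap i.i.d. Rayleigh channel per antenna, unit total power.
        taps = (rng.normal(size=(m_ant, num_mpc)) +
                1j * rng.normal(size=(m_ant, num_mpc))) / np.sqrt(2 * num_mpc)
        H = np.fft.fft(taps, n=n_sc, axis=1)            # (m_ant, n_sc)
        W = H.conj() / np.linalg.norm(H, axis=0)        # MRT per subcarrier
        s = (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc)) / np.sqrt(2)
        x = np.fft.ifft(W * s, axis=1) * np.sqrt(n_sc)  # per-antenna time signal
        d = a3 * x * np.abs(x) ** 2                     # distortion term only
        D = np.fft.fft(d, axis=1) / np.sqrt(n_sc)
        r = (H * D).sum(axis=0)                         # distortion at the user
        acc += np.mean(np.abs(r) ** 2)
    return acc / trials

for L in (1, 2, 4, 8, 16):
    print(f"MPCs={L:2d}  in-band distortion power ~ {rx_distortion_power(L):.3e}")
```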

Energy saving in wireless networks is growing in importance due to the increasing demands of evolving next-generation cellular networks, environmental and regulatory concerns, and potential energy crises arising from geopolitical tensions. In this work, we propose an approximate dynamic programming (ADP)-based method coupled with online optimization to switch the cells of base stations on and off, reducing network power consumption while maintaining adequate Quality of Service (QoS) metrics. Given each state-action pair, a multilayer perceptron (MLP) predicts the power consumption, approximating the value function in ADP and enabling selection of the action with the best expected power saving. To maximize power savings without degrading QoS, we include a second MLP to predict QoS and a long short-term memory (LSTM) network to predict handovers; both are incorporated into an online optimization algorithm that produces an adaptive QoS threshold for filtering cell-switching actions based on the overall QoS history. The performance of the method is evaluated using a practical network simulator across various real-world scenarios with dynamic traffic patterns.
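A minimal version of the value-approximation step might look as follows: score every candidate cell on/off pattern with an MLP that predicts expected power consumption and pick the feasible pattern with the lowest prediction. The PyTorch sketch below uses hypothetical dimensions and a placeholder QoS filter; the paper's features, networks, and adaptive threshold are more involved.

```python
import torch
import torch.nn as nn

STATE_DIM, NUM_CELLS = 16, 8    # hypothetical sizes

class PowerMLP(nn.Module):
    """Predicts expected power consumption for a (state, action) pair,
    serving as the value-function approximation in the ADP loop."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_CELLS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

def pick_action(model, state, candidate_actions, qos_ok):
    """Choose the admissible on/off pattern with the lowest predicted power.
    `qos_ok` stands in for the adaptive QoS-threshold filter."""
    feasible = [a for a in candidate_actions if qos_ok(a)]
    actions = torch.stack(feasible)
    with torch.no_grad():
        power = model(state.expand(len(feasible), -1), actions)
    return feasible[int(power.argmin())]

model = PowerMLP()
state = torch.randn(STATE_DIM)
candidates = [torch.randint(0, 2, (NUM_CELLS,)).float() for _ in range(10)]
best = pick_action(model, state, candidates, qos_ok=lambda a: a.sum() >= 2)
print("chosen on/off pattern:", best.tolist())
```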

Optical wireless communication (OWC) offers several advantages complementary to radio-frequency wireless networks, such as its massive available spectrum; hence, it is widely anticipated that OWC will assume a pivotal role in forthcoming sixth-generation (6G) wireless communication networks. Although significant progress has been achieved in OWC over the past decades, the outage induced by occasionally low received optical power remains a key limiting factor for its deployment. In this work, we discuss the potential role of single-photon counting (SPC) receivers as a promising solution to overcome this limitation. We present an overview of the applications of SPC-based OWC systems in 6G networks, introduce their major performance-limiting factors, propose a performance enhancement framework to tackle these issues, and identify critical open problems for future research.
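At the physical layer, an SPC receiver observes Poisson-distributed photon counts. The toy sketch below, generic on-off keying over a Poisson channel with hypothetical photon rates (not a system from this article), shows maximum-likelihood threshold detection and why low received optical power drives up the error rate:

```python
import numpy as np

rng = np.random.default_rng(2)

def spc_ook_ber(lam_signal, lam_dark, n_bits=200_000):
    """Empirical bit error rate of on-off keying with a photon-counting
    receiver: counts ~ Poisson(lam_dark) for '0' and
    Poisson(lam_dark + lam_signal) for '1', using the ML count threshold
    for two Poisson hypotheses with equal priors."""
    bits = rng.integers(0, 2, n_bits)
    counts = rng.poisson(lam_dark + lam_signal * bits)
    thr = lam_signal / np.log1p(lam_signal / lam_dark)  # ML threshold
    return np.mean((counts > thr) != bits)

for lam_s in (2, 5, 10, 20):   # mean signal photons per slot (hypothetical)
    print(f"signal photons={lam_s:2d}  BER ~ {spc_ook_ber(lam_s, lam_dark=1.0):.4f}")
```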

Distributed computing is known as an emerging and efficient technique to support various intelligent services, such as large-scale machine learning. However, privacy leakage and random delays caused by straggling servers pose significant challenges. To address these issues, coded computing, a promising solution that combines coding theory with distributed computing, recovers computation results from only a subset of workers. In this paper, we propose the adaptive privacy-preserving coded computing (APCC) strategy, which can adaptively provide accurate or approximated results according to the form of the computation function, so as to suit diverse types of computation tasks. We prove that APCC achieves complete data privacy preservation and demonstrate its optimality in terms of encoding rate, defined as the ratio between the computation loads of tasks before and after encoding. To further alleviate the straggling effect and reduce delay, we integrate hierarchical task partitioning and task cancellation into the coding design of APCC. The corresponding partitioning problems are formulated as mixed-integer nonlinear programming (MINLP) problems with the objective of minimizing task completion delay. We propose a low-complexity maximum value descent (MVD) algorithm to solve these problems optimally. Simulation results show that APCC can reduce task completion delay by 20.3% to 47.5% compared to other state-of-the-art benchmarks.
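For readers new to coded computing, the classic polynomial-coding idea underlying this line of work can be sketched as follows. This is a generic straggler-tolerant matrix-vector multiplication example for intuition only; it is not the APCC construction and omits the privacy-preserving masking:

```python
import numpy as np

rng = np.random.default_rng(3)

# Goal: compute A @ x with n workers so that ANY k of them suffice,
# tolerating n - k stragglers. Split A into k row blocks and encode
# them as evaluations of a matrix polynomial.
n_workers, k = 6, 3
A = rng.normal(size=(9, 4))            # 9 rows split into k=3 blocks
x = rng.normal(size=4)
blocks = np.split(A, k)                # A_0, A_1, A_2

# Worker i receives the evaluation of p(t) = A_0 + A_1*t + A_2*t^2
# at t = alpha_i and returns p(alpha_i) @ x.
alphas = np.arange(1, n_workers + 1, dtype=float)
worker_results = {
    i: sum(blocks[j] * alphas[i] ** j for j in range(k)) @ x
    for i in range(n_workers)
}

# Any k results suffice: interpolate the degree-(k-1) polynomial.
survivors = [0, 2, 5]                  # pretend the other workers straggled
V = np.vander(alphas[survivors], k, increasing=True)
coeffs = np.linalg.solve(V, np.stack([worker_results[i] for i in survivors]))
recovered = coeffs.reshape(-1)         # stacked A_0 @ x, A_1 @ x, A_2 @ x
print("max error:", np.abs(recovered - A @ x).max())
```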

Vehicle-to-everything (V2X) perception is an innovative technology that enhances vehicle perception accuracy, thereby elevating the security and reliability of autonomous systems. However, existing V2X perception methods focus on static scenes from mainly vehicle-based vision, which is constrained by sensor capabilities and communication loads. To adapt V2X perception models to dynamic scenes, we propose to build V2X perception from road-to-vehicle vision and present the Adaptive Road-to-Vehicle Perception (AR2VP) method. In AR2VP, we leverage roadside units to offer stable, wide-range sensing capabilities and serve as communication hubs. AR2VP is devised to tackle both intra-scene and inter-scene changes. For the former, we construct a dynamic perception representing module, which efficiently integrates vehicle perceptions, enabling vehicles to capture a more comprehensive range of dynamic factors within the scene. Moreover, we introduce a road-to-vehicle perception compensating module, aimed at preserving the maximized roadside unit perception information in the presence of intra-scene changes. For inter-scene changes, we implement an experience replay mechanism leveraging the roadside unit's storage capacity to retain a subset of historical scene data, maintaining model robustness in response to inter-scene shifts. We conduct perception experiments on 3D object detection and segmentation, and the results show that AR2VP excels in both performance-bandwidth trade-offs and adaptability within dynamic environments.
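The experience replay mechanism can be illustrated with a minimal reservoir-style buffer, which keeps a bounded, uniformly sampled subset of historical scene data on the roadside unit. The capacity and retention policy below are generic assumptions; AR2VP's actual mechanism may differ.

```python
import random

class SceneReplayBuffer:
    """Bounded buffer of historical scene samples kept on the roadside unit.
    Reservoir sampling retains each seen sample with equal probability, so
    old scenes stay represented while training on new ones (mitigating
    forgetting under inter-scene shifts)."""

    def __init__(self, capacity=512, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

    def sample_batch(self, batch_size):
        return self.rng.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = SceneReplayBuffer(capacity=4)
for scene_id in range(100):
    buf.add({"scene": scene_id})
print("retained:", [s["scene"] for s in buf.buffer])
```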

Scheduling packets with end-to-end deadline constraints in multihop networks is an important problem that has been notoriously difficult to tackle. Recently, there has been progress on this problem in the worst-case traffic setting, with the objective of maximizing the number of packets delivered within their deadlines. Specifically, the proposed algorithms were shown to achieve an $\Omega(1/\log(L))$ fraction of the optimal objective value if the minimum link capacity in the network is $C_{\min}=\Omega(\log (L))$, where $L$ is the maximum length of a packet's route in the network (which is bounded by the packet's maximum deadline). However, such guarantees can be quite pessimistic due to the strict worst-case traffic assumption and may not accurately reflect real-world settings. In this work, we aim to address this limitation by exploring whether it is possible to design algorithms that achieve a constant fraction of the optimal value while relaxing the worst-case traffic assumption. We provide a positive answer by demonstrating that in stochastic traffic settings, such as i.i.d. packet arrivals, near-optimal, $(1-\epsilon)$-approximation algorithms can be designed if $C_{\min} = \Omega\big(\frac{\log (L/\epsilon) } {\epsilon^2}\big)$. To the best of our knowledge, this is the first result showing that this problem can be solved near-optimally under nontrivial assumptions on traffic and link capacity. We further present extended simulations using real network traces with non-stationary traffic, which demonstrate that our algorithms outperform worst-case-based algorithms in practical settings.
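To make the capacity condition concrete, the short sketch below simply evaluates $C_{\min} = c \cdot \log(L/\epsilon)/\epsilon^2$ for sample route lengths and accuracy targets; the constant $c$ is not specified in the abstract, so $c = 1$ is a placeholder.

```python
import math

def cmin_bound(route_len, eps, c=1.0):
    """Illustrative evaluation of the capacity condition
    C_min = c * log(L / eps) / eps^2; the constant c is unknown
    from the abstract, so c=1 is a placeholder."""
    return c * math.log(route_len / eps) / eps ** 2

for L in (4, 16, 64):
    for eps in (0.5, 0.2, 0.1):
        print(f"L={L:3d}  eps={eps:.1f}  C_min >= {cmin_bound(L, eps):7.1f}")
```

The quadratic dependence on $1/\epsilon$ dominates: tightening the approximation target from 0.5 to 0.1 raises the required link capacity far more than lengthening routes does.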

The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of data that remains decentralized, owing to the need for edge computation and to licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without requiring direct data sharing and exchange, effectively modeling the complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model, the Cross-Node Federated Graph Neural Network (CNFGNN), which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes be generated locally on each node and remain decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on the edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring modest communication cost.
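The node/server split can be sketched in a few lines. In the simplified PyTorch sketch below (hypothetical dimensions; CNFGNN's actual architecture, GNN, and federated averaging protocol are more elaborate), each node runs a shared-weight GRU over its own series, uploads only a hidden summary, and the server performs one round of neighborhood aggregation over the sensor graph.

```python
import torch
import torch.nn as nn

N_NODES, SEQ_LEN, HID = 5, 12, 16   # hypothetical sizes

class NodeEncoder(nn.Module):
    """Runs locally on each sensor node; only the final hidden state
    (not the raw series) is communicated to the server."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(1, HID, batch_first=True)

    def forward(self, series):                  # (seq_len,) local data
        _, h = self.gru(series.view(1, -1, 1))
        return h.squeeze()                      # (HID,) summary to upload

class ServerSpatialModel(nn.Module):
    """One round of mean neighborhood aggregation over the sensor graph,
    standing in for the server-side GNN."""
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(2 * HID, HID)

    def forward(self, h, adj):                  # h: (N, HID), adj: (N, N)
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        neigh = adj @ h / deg                   # mean over graph neighbors
        return torch.relu(self.lin(torch.cat([h, neigh], dim=-1)))

encoder = NodeEncoder()                         # weights shared via FedAvg
server = ServerSpatialModel()
data = torch.randn(N_NODES, SEQ_LEN)            # stays on the nodes
adj = (torch.rand(N_NODES, N_NODES) > 0.5).float()
summaries = torch.stack([encoder(data[i]) for i in range(N_NODES)])
print("server output:", server(summaries, adj).shape)
```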

Multivariate time series forecasting has been studied extensively over the years, with ubiquitous applications in areas such as finance, traffic, and the environment. Still, concerns have been raised that traditional methods are incapable of modeling the complex patterns or dependencies in real-world data. To address these concerns, various deep learning models, mainly recurrent neural network (RNN)-based methods, have been proposed. Nevertheless, capturing extremely long-term patterns while effectively incorporating information from other variables remains a challenge for time-series forecasting. Furthermore, a lack of explainability remains a serious drawback of deep neural network models. Inspired by the Memory Network proposed for the question-answering task, we propose a deep learning based model named Memory Time-series Network (MTNet) for time series forecasting. MTNet consists of a large memory component, three separate encoders, and an autoregressive component that are trained jointly. Additionally, the designed attention mechanism makes MTNet highly interpretable: we can easily tell which part of the historical data is referenced the most.
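The interpretability claim rests on the attention weights over memory blocks. A stripped-down sketch of that step, generic scaled dot-product attention over encoded historical segments with hypothetical shapes rather than MTNet's full three-encoder design, shows how one can read off which part of the history the forecast references most:

```python
import numpy as np

rng = np.random.default_rng(4)

def memory_attention(memory, query):
    """Scaled dot-product attention over encoded historical blocks.
    memory: (n_blocks, d) encodings of past segments; query: (d,)
    encoding of the recent window. Returns the context vector and the
    attention weights, which indicate which past block mattered most."""
    scores = memory @ query / np.sqrt(memory.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over blocks
    return weights @ memory, weights

memory = rng.normal(size=(6, 8))                 # 6 historical blocks
query = memory[2] + 0.1 * rng.normal(size=8)     # resembles block 2
context, weights = memory_attention(memory, query)
print("attention weights:", np.round(weights, 3))
print("most referenced block:", int(weights.argmax()))
```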
