
Ultra-massive multiple-input multiple-output (UM-MIMO) is a cutting-edge technology that promises to revolutionize wireless networks by providing unprecedented spectral and energy efficiency. The enlarged array aperture of UM-MIMO brings the near-field region within reach, thereby offering a novel degree of freedom for communications and sensing. Nevertheless, the transceiver design for such systems is challenging because of the enormous system scale, the complicated channel characteristics, and the uncertainties in propagation environments. Therefore, it is critical to study scalable, low-complexity, and robust algorithms that can efficiently characterize and leverage the properties of the near-field channel. In this article, we advocate two general frameworks from an artificial intelligence (AI)-native perspective, tailored for the algorithmic design of near-field UM-MIMO transceivers. Specifically, frameworks for both iterative and non-iterative algorithms are discussed. Near-field beam focusing and channel estimation are presented as two tutorial-style examples to demonstrate the significant advantages of the proposed AI-native frameworks in terms of various key performance indicators.
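As a tutorial-style illustration of near-field beam focusing, the following sketch compares the gain of a conjugate (matched-filter) focusing vector, built from the spherical-wave array response of a uniform linear array, at different user distances. The carrier frequency, array size, and focusing distance are illustrative assumptions, not values from the article.

```python
# Sketch: near-field beam focusing for a uniform linear array (ULA).
# Assumptions (not from the article): 30 GHz carrier, half-wavelength spacing,
# conjugate (matched-filter) focusing on the spherical-wave array response.
import numpy as np

c = 3e8
fc = 30e9                              # carrier frequency (assumed)
lam = c / fc
N = 256                                # number of antennas (assumed)
d = lam / 2                            # element spacing
y = (np.arange(N) - (N - 1) / 2) * d   # element positions along the array axis

def near_field_response(r, theta):
    """Spherical-wave array response for a source at distance r and angle theta."""
    dist = np.sqrt(r**2 + y**2 - 2 * r * y * np.sin(theta))   # exact element-to-source distances
    return np.exp(-1j * 2 * np.pi * dist / lam) / np.sqrt(N)

# Focus on a user at 5 m and 20 degrees; the conjugate response is the focusing vector.
r0, th0 = 5.0, np.deg2rad(20)
w = np.conj(near_field_response(r0, th0))

# Beamforming gain over a range of distances at the same angle:
# the gain peaks near r0, illustrating distance-domain selectivity unique to the near field.
for r in [2.0, 5.0, 10.0, 50.0]:
    g = np.abs(w @ near_field_response(r, th0))**2
    print(f"distance {r:5.1f} m  normalized gain {g:.3f}")
```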

Related Content

Integrated sensing and communication (ISAC) has attracted growing interest for sixth-generation (6G) and beyond wireless networks. The primary challenges faced by highly efficient ISAC include limited sensing and communication (S&C) coverage, a constrained integration gain between S&C under weak channel correlations, and unknown performance boundaries. Intelligent reflecting/refracting surfaces (IRSs) can effectively expand S&C coverage and control the degrees of freedom of the channels between transmitters and receivers, thereby realizing increased integration gains. In this work, we first delve into the fundamental characteristics of IRS-empowered ISAC and innovative IRS-assisted sensing architectures. Then, we discuss various objectives for IRS channel control and deployment optimization in ISAC systems. Furthermore, the interplay between S&C under different deployment strategies is investigated, and some promising directions for IRS-enhanced ISAC are outlined.

Nonlinear distortion stemming from low-cost power amplifiers may severely affect wireless communication performance through out-of-band (OOB) radiation and in-band distortion. The distortion is correlated across the transmit antennas of an antenna array, which results in a beamforming gain at the receiver side that grows with the number of antennas. In this paper, we investigate how the strength of the distortion is affected by the frequency selectivity of the channel. A closed-form expression for the received distortion power is derived as a function of the number of multipath components (MPCs) and the delay spread, which highlights their impact. The analysis, verified via numerical simulations, reveals that as the number of MPCs increases, the distortion exhibits distinct characteristics at in-band and OOB frequencies. It is shown that the received in-band and OOB distortion power is inversely proportional to the number of MPCs, and that as the delay spread narrows, the in-band distortion power is beamformed towards the intended user, yielding a higher received in-band distortion than OOB distortion.
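A rough Monte Carlo sketch of this setting is given below: per-antenna memoryless third-order distortion is applied to an MRT-precoded OFDM signal over an L-tap Rayleigh channel, and the in-band distortion power received by the user is estimated for different numbers of MPCs. The PA model, precoder, and all parameters are assumptions for illustration; the sketch does not reproduce the paper's closed-form analysis.

```python
# Sketch: Monte Carlo estimate of received in-band distortion power vs. number of
# multipath components (MPCs). Illustrative assumptions: a memoryless third-order
# PA model, per-subcarrier MRT precoding, and i.i.d. Rayleigh taps.
import numpy as np

rng = np.random.default_rng(0)
M, K = 64, 256          # antennas, subcarriers (assumed)
a3 = 0.1                # third-order nonlinearity coefficient (assumed)

def received_distortion_power(L, trials=20):
    powers = []
    for _ in range(trials):
        # L-tap channel per antenna, unit total power per antenna
        taps = (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(2 * L)
        H = np.fft.fft(taps, n=K, axis=1)              # frequency response, M x K
        s = np.exp(1j * 2 * np.pi * rng.random(K))     # unit-modulus data symbols
        W = np.conj(H) / np.linalg.norm(H, axis=0)     # per-subcarrier MRT precoder
        x = np.fft.ifft(W * s, axis=1) * np.sqrt(K)    # per-antenna time-domain signal
        d = a3 * x * np.abs(x) ** 2                    # memoryless third-order distortion term
        D = np.fft.fft(d, axis=1) / np.sqrt(K)
        r_dist = np.sum(H * D, axis=0)                 # distortion component seen at the user
        powers.append(np.mean(np.abs(r_dist) ** 2))
    return np.mean(powers)

for L in [1, 2, 4, 8, 16]:
    print(f"L = {L:2d}  received distortion power ~ {received_distortion_power(L):.4e}")
```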

Faults occurring in ad-hoc robot networks may fatally perturb their topologies, leading to the disconnection of subsets of those networks. Optimal topology synthesis is generally too resource-intensive and time-consuming to be performed in real time for large ad-hoc robot networks. One should therefore only perform topology re-computations if the probability that the topology is recoverable after a fault surpasses the probability that it is irrecoverable. We formulate this problem as a binary classification problem. We then develop a two-pathway data-driven model based on Bayesian Gaussian mixture models that predicts the solution via two different prediction pathways, one pre-fault and one post-fault. The results, obtained by integrating the predictions of the two pathways, clearly indicate the success of our model in solving the topology (ir)recoverability prediction problem compared to the best strategies currently found in the literature.
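The classification idea can be illustrated with a minimal generative classifier: one Bayesian Gaussian mixture is fitted on features of recoverable cases and another on irrecoverable ones, and a query is labelled according to which mixture explains it better. The features and data below are synthetic placeholders, not the paper's topology descriptors or its two-pathway architecture.

```python
# Sketch: a generative binary classifier built from Bayesian Gaussian mixtures.
# One mixture per class; a query is labelled by the higher likelihood.
# Data and feature dimensions are placeholders, not the paper's setup.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
# placeholder features, e.g. pre-fault / post-fault topology descriptors
X_recoverable   = rng.normal(loc=0.0, scale=1.0, size=(200, 4))
X_irrecoverable = rng.normal(loc=2.0, scale=1.5, size=(200, 4))

gmm_rec = BayesianGaussianMixture(n_components=3, random_state=0).fit(X_recoverable)
gmm_irr = BayesianGaussianMixture(n_components=3, random_state=0).fit(X_irrecoverable)

def predict_recoverable(x):
    """Return True where the 'recoverable' mixture explains x better."""
    x = np.atleast_2d(x)
    return gmm_rec.score_samples(x) > gmm_irr.score_samples(x)

query = rng.normal(loc=0.2, scale=1.0, size=(5, 4))
print(predict_recoverable(query))
```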

Human-robot interaction (HRI) research is progressively addressing multi-party scenarios, where a robot interacts with more than one human user at the same time. By contrast, research on multi-party human-robot collaboration (HRC) is still at an early stage. Using machine learning techniques to handle this type of collaboration requires data that are more difficult to produce than in a typical HRC setup. This work outlines scenarios of concurrent tasks for non-dyadic HRC applications. Building on these concepts, the study also proposes an alternative way of gathering data on multi-user activity: collecting data from single users and merging them in post-processing, thereby reducing the effort involved in producing recordings of pair settings. To validate this approach, 3D skeleton poses of single-user activity were collected and merged into pairs. These data points were then used to separately train a long short-term memory (LSTM) network and a variational autoencoder (VAE) composed of spatio-temporal graph convolutional networks (STGCN) to recognise the joint activities of pairs of people. The results show that data collected in this way can be used for pair HRC settings, achieving performance similar to that obtained with training data of user groups recorded under the same settings, thus avoiding the technical difficulties involved in producing such data. The related code and collected data are publicly available.
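A minimal sketch of the data-merging idea and a simple pair-activity classifier is shown below: two independently recorded single-user skeleton clips are concatenated frame by frame into one pair sample and fed to a small LSTM. Frame counts, joint counts, the merging rule, and the network size are assumptions for illustration; the STGCN-based VAE is not reproduced here.

```python
# Sketch: merging single-user skeleton sequences into synthetic pair samples and
# a minimal LSTM classifier for joint-activity labels. Shapes and the merging
# rule are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

T, J = 60, 25                     # frames per clip, joints per skeleton (assumed)
feat_per_user = J * 3             # x, y, z per joint

def merge_pair(seq_a, seq_b):
    """Concatenate two independently recorded single-user clips frame by frame."""
    # seq_a, seq_b: (T, J*3) tensors
    return torch.cat([seq_a, seq_b], dim=-1)           # (T, 2*J*3)

class PairActivityLSTM(nn.Module):
    def __init__(self, n_classes=8, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * feat_per_user, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                               # x: (batch, T, 2*J*3)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                         # logits over joint-activity classes

# toy usage with random tensors standing in for recorded skeleton clips
seq_a, seq_b = torch.randn(T, feat_per_user), torch.randn(T, feat_per_user)
pair = merge_pair(seq_a, seq_b).unsqueeze(0)            # add batch dimension
logits = PairActivityLSTM()(pair)
print(logits.shape)                                     # torch.Size([1, 8])
```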

The reconfigurable surface (RS) has been shown to be an effective solution for improving wireless communication links in a general multi-user multiple-input multiple-output (MU-MIMO) setting. Current research efforts have been largely directed towards the study of the reconfigurable intelligent surface (RIS), which corresponds to an RS made of passive reconfigurable elements with only phase-shifting capabilities. The RIS constitutes a cost- and energy-efficient solution for increased beamforming gain, since it can generate constructive interference towards desired directions, e.g., towards a base station (BS). However, in many situations, multiplexing gain may have a greater impact on the achievable transmission rates and the number of simultaneously connected devices, whereas the RIS has only achieved minor improvements in this respect. Recent work has proposed the use of alternative RS technologies, namely the amplitude-reconfigurable intelligent surface (ARIS) and the fully-reconfigurable intelligent surface (FRIS), to achieve perfect orthogonalization of MU-MIMO channels, thus allowing for maximum multiplexing gain at reduced complexity. In this work, we consider the use of ARIS and FRIS for simultaneously orthogonalizing a MU-MIMO channel while embedding extra information in the orthogonalized channel. We show that the resulting achievable rates allow for full exploitation of the degrees of freedom in a MU-MIMO system with an excess of BS antennas.
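As a toy numerical check of why a fully reconfigurable surface can orthogonalize a MU-MIMO channel (this is not the paper's algorithm), the surface below is modelled as an unconstrained matrix and solved for with pseudo-inverses so that the effective BS-surface-user channel matches a target with orthonormal columns. All dimensions are illustrative assumptions.

```python
# Toy check (not the paper's method): with a fully reconfigurable surface modelled
# as an unconstrained matrix Theta, the effective channel H1 @ Theta @ H2 can be
# driven to a target with orthonormal columns. Dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
M, R, K = 16, 64, 4            # BS antennas, surface elements, users (assumed)
H1 = (rng.standard_normal((M, R)) + 1j * rng.standard_normal((M, R))) / np.sqrt(2)  # surface -> BS
H2 = (rng.standard_normal((R, K)) + 1j * rng.standard_normal((R, K))) / np.sqrt(2)  # users -> surface

# target effective channel with orthonormal columns
Q, _ = np.linalg.qr(rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
target = Q[:, :K]

Theta = np.linalg.pinv(H1) @ target @ np.linalg.pinv(H2)   # unconstrained surface response
H_eff = H1 @ Theta @ H2
print(np.round(np.abs(H_eff.conj().T @ H_eff), 3))          # ~ identity: user channels orthogonalized
```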

In-band Network Telemetry (INT) has emerged as a promising network measurement technology. However, existing network telemetry systems lack the flexibility to meet diverse telemetry requirements and are difficult to adapt to dynamic network environments. In this paper, we propose AdapINT, a versatile and adaptive in-band network telemetry framework assisted by dual-timescale probes, including long-period auxiliary probes (APs) and short-period dynamic probes (DPs). Technically, the APs collect basic network status information, which is used for the path planning of the DPs. To achieve full network coverage, we propose an auxiliary probe path deployment (APPD) algorithm based on depth-first search (DFS). The DPs collect specific network information for telemetry tasks. To ensure that the DPs can meet diverse telemetry requirements and adapt to dynamic network environments, we apply deep reinforcement learning (DRL) and transfer learning to design the dynamic probe path deployment (DPPD) algorithm. The evaluation results show that AdapINT can redesign the telemetry system according to telemetry requirements and network environments. AdapINT reduces telemetry latency by 75% in online gaming and video conferencing scenarios. For overhead-aware networks, AdapINT reduces control overhead by 34% in cloud computing services.
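To convey the flavour of DFS-based coverage path deployment, the sketch below builds a probe walk that visits every node of a toy topology; it is not AdapINT's exact APPD algorithm, and the switch names and topology are made up.

```python
# Minimal sketch of a DFS-based probe path that touches every reachable node
# (full coverage). Not AdapINT's exact algorithm; topology is a made-up example.
from collections import defaultdict

def dfs_probe_path(topology, start):
    """Return a walk that visits every reachable node via depth-first search."""
    visited, path = set(), []

    def dfs(node):
        visited.add(node)
        path.append(node)
        for nxt in topology[node]:
            if nxt not in visited:
                dfs(nxt)
                path.append(node)      # walk back so the probe path stays connected
    dfs(start)
    return path

topology = defaultdict(list, {
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
})
print(dfs_probe_path(topology, "s1"))   # ['s1', 's2', 's4', 's3', 's4', 's2', 's1']
```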

Graph neural networks (GNNs) have demonstrated significant promise in modelling relational data and have been widely applied in various fields of interest. The key mechanism behind GNNs is so-called message passing, where information is iteratively aggregated to central nodes from their neighbourhoods. Such a scheme has been found to be intrinsically linked to a physical process known as heat diffusion, where the propagation of GNNs naturally corresponds to the evolution of heat density. Analogizing message passing to heat dynamics allows a fundamental understanding of the power and pitfalls of GNNs and consequently informs better model design. Recently, a plethora of works has emerged that propose GNNs inspired by this continuous-dynamics formulation, in an attempt to mitigate the known limitations of GNNs, such as oversmoothing and oversquashing. In this survey, we provide the first systematic and comprehensive review of studies that leverage the continuous perspective of GNNs. To this end, we introduce the foundational ingredients for adapting continuous dynamics to GNNs, along with a general framework for the design of graph neural dynamics. We then review and categorize existing works based on their driving mechanisms and underlying dynamics. We also summarize how the limitations of classic GNNs can be addressed under the continuous framework. We conclude by identifying multiple open research directions.
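The heat-diffusion view can be made concrete in a few lines: node features evolve by forward-Euler steps of x(t+Δt) = x(t) − Δt·Lx(t) with a symmetrically normalized Laplacian, and each step plays the role of one round of message passing. The graph, step size, and initial features below are made-up illustrations.

```python
# Sketch: message passing viewed as discretized heat diffusion,
# x(t+dt) = x(t) - dt * L x(t), with a symmetrically normalized graph Laplacian.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)          # toy adjacency matrix
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian

x = np.array([[1.0], [0.0], [0.0], [0.0]])         # initial node features (heat at node 0)
dt, steps = 0.2, 20
for _ in range(steps):
    x = x - dt * (L @ x)                           # forward-Euler diffusion step
    # each step aggregates neighbour information, analogous to one round of message passing

print(x.ravel())   # heat has spread from node 0 towards the rest of the graph
```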

Orthogonal time frequency space (OTFS) has been widely acknowledged as a promising wireless technology for challenging transmission scenarios, including high-mobility channels. In this paper, we investigate pilot design for the multi-user OTFS system based on a priori statistical channel state information (CSI), where a practical threshold-based estimation scheme is adopted. Specifically, we first derive the a posteriori Cramér-Rao bound (PCRB) based on the a priori channel information for each user. According to our derivation, the PCRB depends only on the user's pilot signal-to-noise ratio (SNR) and the range of delay and Doppler shifts under practical power-delay and power-Doppler profiles. A pilot scheme is then proposed to minimize the average PCRB over the users, for which a closed-form globally optimal pilot power allocation is derived. Our numerical results verify the multi-user PCRB analysis. We also demonstrate an improvement of around 3 dB in the average normalized mean-square error (NMSE) with the proposed pilot design compared to the conventional embedded pilot design under the same total pilot power.
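As a generic illustration of pilot power allocation (not the paper's exact closed form), suppose each user's bound scales as c_u/p_u with its pilot power p_u, consistent with the inverse-SNR dependence noted above; minimizing the average bound under a total power budget then yields a square-root allocation, sketched below with made-up constants.

```python
# Illustrative sketch (not the paper's derivation): if user u's bound scales as
# c_u / p_u, minimizing the average bound subject to sum(p) = P gives
# p_u = P * sqrt(c_u) / sum_j sqrt(c_j), from the Lagrangian stationarity condition.
import numpy as np

c = np.array([1.0, 2.5, 0.7, 4.0])     # per-user constants (made up, e.g. delay-Doppler spread factors)
P = 10.0                                # total pilot power budget (assumed)

p_opt = P * np.sqrt(c) / np.sqrt(c).sum()
p_uniform = np.full_like(c, P / len(c))

avg_bound = lambda p: np.mean(c / p)
print("optimal allocation:", np.round(p_opt, 3), " average bound:", round(avg_bound(p_opt), 4))
print("uniform allocation:", np.round(p_uniform, 3), " average bound:", round(avg_bound(p_uniform), 4))
```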

Extremely large-scale multiple-input multiple-output (XL-MIMO) is a promising technology for achieving high spectral efficiency (SE) and energy efficiency (EE) in future wireless systems. The larger array aperture of XL-MIMO brings communication scenarios closer to the near-field region. Therefore, near-field resource allocation is essential to realizing the above key performance indicators (KPIs). Moreover, the overall performance of XL-MIMO systems heavily depends on the channel characteristics of the selected users and on eliminating inter-user interference through beamforming, power control, etc. The resulting resource allocation task constitutes a complex joint multi-objective optimization problem, since many variables and parameters must be optimized, including the spatial degrees of freedom, rate, power allocation, and transmission technique. In this article, we review the basic properties of near-field communications and focus on the corresponding "resource allocation" problems. First, we identify the resources available in near-field communication systems and highlight their distinctions from far-field communications. Then, we summarize optimization tools, such as numerical techniques and machine learning methods, for addressing near-field resource allocation, emphasizing their strengths and limitations. Finally, several important research directions of near-field communications are pointed out for further investigation.
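A quick back-of-the-envelope computation, using the common Rayleigh-distance rule of thumb 2D²/λ with assumed carrier frequencies and array sizes, shows why enlarging the aperture pushes typical users into the near field:

```python
# Quick illustration: the Rayleigh distance 2*D^2/lambda grows quadratically with
# the array aperture D, so XL-MIMO users often fall inside the near-field region.
# Frequencies and array sizes below are illustrative assumptions.
c = 3e8
for fc, n_antennas in [(3.5e9, 64), (28e9, 256), (100e9, 1024)]:
    lam = c / fc
    D = n_antennas * lam / 2            # aperture of a half-wavelength-spaced ULA
    rayleigh = 2 * D**2 / lam
    print(f"{fc/1e9:5.1f} GHz, {n_antennas:4d} antennas: aperture {D:5.2f} m, "
          f"near-field boundary ~ {rayleigh:8.1f} m")
```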

Unmanned aerial vehicle (UAV) swarm-enabled edge computing is envisioned to be promising in sixth-generation wireless communication networks due to its wide range of application scenarios and flexible deployment. However, most existing works focus on edge computing enabled by a single UAV or a small-scale group of UAVs, which differs substantially from UAV swarm-enabled edge computing. To facilitate the practical application of UAV swarm-enabled edge computing, this article presents the state-of-the-art research. The potential applications, architectures, and implementation considerations are illustrated. Moreover, promising enabling technologies for UAV swarm-enabled edge computing are discussed. Furthermore, we outline challenges and open issues in order to shed light on future research directions.
