
The growing demand for radio access networks (RANs), driven by advanced wireless technology and ever-increasing mobile traffic, brings significant energy consumption challenges that threaten sustainability. To address this, an architecture referred to as the vertical heterogeneous network (vHetNet) has recently been proposed. Our study seeks to enhance network operation in terms of energy efficiency and sustainability by examining a vHetNet configuration comprising a high altitude platform station (HAPS) acting as a super macro base station (SMBS), along with a macro base station (MBS) and a set of small base stations (SBSs) in a densely populated area.
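To make the energy-efficiency objective concrete, the short sketch below computes the standard bits-per-joule metric for a three-tier deployment. The tier throughputs and power figures are illustrative placeholders, not values from the study.

```python
# Illustrative only: aggregate energy efficiency (bits per joule) of a
# three-tier vHetNet, assuming the standard sum-throughput / sum-power metric.
# Throughputs (Mbit/s) and power draws (W) below are made-up placeholders.

def network_energy_efficiency(tiers):
    """tiers: list of (throughput_mbps, power_watts), one entry per base station."""
    total_bits = sum(t * 1e6 for t, _ in tiers)   # bits delivered per second
    total_power = sum(p for _, p in tiers)        # watts consumed
    return total_bits / total_power               # bits per joule

haps_smbs = [(400.0, 1200.0)]                     # one HAPS acting as SMBS
mbs = [(300.0, 800.0)]                            # one terrestrial macro BS
sbss = [(150.0, 90.0) for _ in range(6)]          # six small BSs

print(f"EE = {network_energy_efficiency(haps_smbs + mbs + sbss):.1f} bit/J")
```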

Related Content

Networking: IFIP International Conferences on Networking. Explanation: international networking conference. Publisher: IFIP. SIT:

We consider unsourced random access (uRA) in a cell-free (CF) user-centric wireless network, where a large number of potential users compete for a random access slot, while only a finite subset is active. The random access users transmit codewords of length $L$ symbols from a shared codebook, which are received by $B$ geographically distributed radio units (RUs) equipped with $M$ antennas each. Our goal is to devise and analyze a \emph{centralized} decoder to detect the transmitted messages (without prior knowledge of the active users) and estimate the corresponding channel state information. A specific challenge lies in the fact that, due to the geographically distributed nature of the CF network, there is no fixed correspondence between codewords and large-scale fading coefficients (LSFCs). To overcome this problem, we propose a scheme where the access codebook is partitioned into "location-based" subcodes, such that users in a particular location make use of the corresponding subcode. Joint message detection and channel estimation are obtained via a novel {\em Approximate Message Passing} (AMP) algorithm that estimates the linear superposition of matrix-valued "sources" corrupted by Gaussian noise. The matrices to be estimated exhibit zero rows for inactive messages and Gaussian-distributed rows corresponding to the active messages. The asymmetry in the LSFCs and message activity probabilities leads to \emph{different statistics} for the matrix sources, which distinguishes the AMP formulation from previous cases. In the regime where the codebook size scales linearly with $L$, while $B$ and $M$ are fixed, we present a rigorous high-dimensional analysis of the proposed AMP algorithm. Then, exploiting the fundamental decoupling principle of AMP, we provide a comprehensive analysis of Neyman-Pearson message detection, along with the subsequent channel estimation.
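As a rough illustration of the recovery problem the AMP decoder targets, the following numpy sketch runs a generic AMP-style iteration on a toy instance: a row-sparse matrix (zero rows for inactive messages, Gaussian rows for active ones) observed through a random codebook matrix plus noise. The row soft-thresholding denoiser, the crude threshold and Onsager term, and all dimensions are assumptions for illustration; they do not reproduce the paper's location-based, Bayes-optimal denoiser or its high-dimensional analysis.

```python
import numpy as np

# Toy AMP-style recovery of a row-sparse matrix X from Y = A @ X + noise.
# The row soft-thresholding denoiser is a generic stand-in for the paper's denoiser.

rng = np.random.default_rng(0)
L_sym, n_msgs, BM = 128, 512, 8          # codeword length, codebook size, B*M antennas
A = rng.normal(size=(L_sym, n_msgs)) / np.sqrt(L_sym)
X_true = np.zeros((n_msgs, BM))
active = rng.choice(n_msgs, size=10, replace=False)
X_true[active] = rng.normal(size=(10, BM))          # Gaussian rows for active messages
Y = A @ X_true + 0.05 * rng.normal(size=(L_sym, BM))

def row_soft_threshold(V, tau):
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return V * scale

X, Z = np.zeros_like(X_true), Y.copy()
for _ in range(30):
    R = X + A.T @ Z                                  # pseudo-data passed to the denoiser
    tau = np.sqrt(np.mean(Z ** 2) * BM)              # crude threshold choice
    X_new = row_soft_threshold(R, tau)
    b = np.count_nonzero(np.linalg.norm(X_new, axis=1)) / L_sym  # crude Onsager factor
    Z = Y - A @ X_new + b * Z
    X = X_new

print("detected rows:", np.flatnonzero(np.linalg.norm(X, axis=1) > 0.1 * np.sqrt(BM)))
```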

In relay-enabled cellular networks, the intertwined nature of network agents calls for complex schemes to allocate wireless resources. Resources need to be distributed among mobile users while considering how relay resources are allocated, and are constrained by the traffic rate achievable by base stations and over backhaul links. In this letter, we derive an exact resource allocation scheme that achieves max-min fairness across mobile users and can be computed with linear complexity in the number of mobile users and relays. The results reveal that the proposed scheme significantly outperforms current solutions.
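For readers unfamiliar with max-min fairness, the sketch below shows the textbook progressive-filling procedure: all unfrozen users' rates are raised together until a capacity constraint becomes tight, the users on that constraint are frozen, and the process repeats. The link names and capacities are invented, and this generic routine is not the letter's relay- and backhaul-aware scheme.

```python
# Textbook progressive filling for max-min fair rates: raise all unfrozen user
# rates together until some capacity constraint becomes tight, freeze the users
# on that constraint, and repeat. Not the letter's relay/backhaul scheme.

def max_min_fair(users, constraints, step=0.001):
    """users: list of ids; constraints: list of (capacity, set_of_users)."""
    rate = {u: 0.0 for u in users}
    frozen = set()
    while len(frozen) < len(users):
        for u in users:
            if u not in frozen:
                rate[u] += step
        for cap, members in constraints:
            if sum(rate[u] for u in members) >= cap - 1e-9:
                frozen |= members
    return rate

# Two users share a backhaul link of capacity 1.0; user "b" also has its own
# access link of capacity 0.3, so it freezes first and "a" takes the remainder.
print(max_min_fair(["a", "b"], [(1.0, {"a", "b"}), (0.3, {"b"})]))
```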

In the 6G era, real-time radio resource monitoring and management are needed to support diverse wireless-empowered applications. This calls for fast and accurate estimation of the distribution of radio resources, usually represented by the spatial signal power strength over the geographical environment and known as a radio map. In this paper, we present a cooperative radio map estimation (CRME) approach enabled by the generative adversarial network (GAN), termed GAN-CRME, which features fast and accurate radio map estimation without the transmitters' information. The radio map is inferred by exploiting the interaction between distributed received signal strength (RSS) measurements at mobile users and the geographical map using a deep neural network estimator, resulting in low data-acquisition cost and computational complexity. Moreover, a GAN-based learning algorithm is proposed to boost the inference capability of the deep neural network estimator by exploiting the power of generative AI. Simulation results show that the proposed GAN-CRME is even capable of coarse error correction when the geographical map information is inaccurate.
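As a point of reference for what a radio map is, the sketch below builds one with a naive inverse-distance-weighted (IDW) interpolation of scattered RSS samples onto a grid. It ignores the geographical map entirely and is not the GAN-CRME estimator; the coordinates, path-loss field, and grid size are illustrative assumptions.

```python
import numpy as np

# Naive baseline for radio-map construction: inverse-distance-weighted (IDW)
# interpolation of scattered RSS samples onto a grid. This ignores building
# geometry entirely and is not the GAN-CRME estimator described above.

def idw_radio_map(sample_xy, sample_rss_dbm, grid_size=64, power=2.0):
    xs = np.linspace(0.0, 1.0, grid_size)
    gx, gy = np.meshgrid(xs, xs)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)           # (grid_size^2, 2)
    d = np.linalg.norm(grid[:, None, :] - sample_xy[None], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power                      # IDW weights
    rss = (w @ sample_rss_dbm) / w.sum(axis=1)
    return rss.reshape(grid_size, grid_size)

rng = np.random.default_rng(1)
xy = rng.uniform(size=(50, 2))                                   # 50 measurement locations
rss = -60.0 - 30.0 * np.linalg.norm(xy - 0.5, axis=1)            # toy path-loss field (dBm)
radio_map = idw_radio_map(xy, rss)
print(radio_map.shape, float(radio_map.max()), float(radio_map.min()))
```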

Grant-free access is a key enabler for connecting wireless devices with low latency and low signaling overhead in massive machine-type communications (mMTC). For massive grant-free access, user-specific signatures are uniquely assigned to mMTC devices. In this paper, we first derive a sufficient condition for the successful identification of active devices through maximum likelihood (ML) estimation in massive grant-free access. The condition is expressed in terms of the coherence of a signature sequence matrix containing the signatures of all devices. Then, we present a framework for designing non-orthogonal signature sequences in a deterministic fashion. The design principle relies on unimodular sequences with low correlation, which are applied as masks to the columns of the discrete Fourier transform (DFT) matrix. For example constructions, we use four polyphase masking sequences represented by characters over finite fields. Leveraging algebraic techniques, we show that the signature sequence matrix of the proposed non-orthogonal sequences has theoretically bounded low coherence. Simulation results demonstrate that the deterministic non-orthogonal signatures achieve excellent performance in joint activity and data detection with ML- and approximate message passing (AMP)-based algorithms for massive grant-free access in mMTC.
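A small numerical illustration of the masked-DFT idea: concatenating several copies of an N x N DFT matrix, each masked elementwise by a different unimodular quadratic-phase (chirp) sequence, yields a fat non-orthogonal signature matrix whose coherence can be checked directly. The chirp masks here stand in for the paper's character-based polyphase sequences, and all parameters are illustrative.

```python
import numpy as np

# Illustrative (not the paper's exact construction): build a non-orthogonal
# signature matrix by concatenating masked copies of an N x N DFT matrix,
# each copy masked by a different unimodular quadratic-phase sequence, then
# measure the coherence (max |inner product| between distinct unit-norm columns).

N = 64
n = np.arange(N)
dft = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

masks = [np.exp(1j * np.pi * r * n * n / N) for r in (1, 3, 5, 7)]  # 4 polyphase masks
signature = np.hstack([m[:, None] * dft for m in masks])            # N x 4N signature matrix

gram = np.abs(signature.conj().T @ signature)
np.fill_diagonal(gram, 0.0)
print(f"{signature.shape[1]} signatures of length {N}, coherence = {gram.max():.3f}")
```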

With the growing interest in satellite networks, satellite-terrestrial integrated networks (STINs) have gained significant attention because of their potential benefits. However, due to the lack of a tractable network model for the STIN architecture, analytical studies that allow one to investigate the performance of such networks are not yet available. In this work, we propose a unified network model that jointly captures satellite and terrestrial networks in one analytical framework. Our key idea is based on Poisson point processes distributed on concentric spheres, assigning a random height to each point as a mark. This allows one to treat each point as a source of either desired signal or interference while ensuring visibility to the typical user. Thanks to this model, we derive the coverage probability of STINs as a function of the major system parameters, chiefly the path-loss exponent, the height distributions and densities of satellites and terrestrial base stations, transmit powers, and biasing factors. Leveraging the analysis, we concretely explore two benefits that STINs provide: i) coverage extension in remote rural areas and ii) data offloading in dense urban areas.
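To give a feel for the kind of analysis such a model enables, here is a toy Monte Carlo sketch that drops satellites as a Poisson point process on a sphere at a fixed altitude and estimates downlink coverage at a typical user. The density, altitude, path-loss exponent, and threshold are made up, and the paper's terrestrial tier, random-height marks, fading, and biasing factors are omitted.

```python
import numpy as np

# Toy Monte Carlo coverage sketch: satellites form a Poisson point process on a
# sphere at altitude h, the typical user sits at the "north pole" of the Earth
# sphere, and only satellites above the local horizon contribute.

rng = np.random.default_rng(2)
R_E, h = 6371e3, 550e3                        # Earth radius and satellite altitude (m)
density = 1e-12                               # satellites per m^2 of the orbital sphere
alpha, sinr_thr, noise = 2.0, 1.0, 1e-13      # path-loss exponent, threshold (linear), noise

def covered():
    r_s = R_E + h
    n_sat = rng.poisson(density * 4 * np.pi * r_s ** 2)
    if n_sat == 0:
        return False
    pts = rng.normal(size=(n_sat, 3))
    pts *= r_s / np.linalg.norm(pts, axis=1, keepdims=True)   # uniform points on the sphere
    pts = pts[pts[:, 2] >= R_E]                               # keep satellites above the horizon
    if len(pts) == 0:
        return False
    d = np.linalg.norm(pts - np.array([0.0, 0.0, R_E]), axis=1)
    p_rx = d ** (-alpha)                                      # unit transmit power, no fading
    serving = p_rx.max()                                      # associate with the strongest satellite
    interference = p_rx.sum() - serving
    return serving / (noise + interference) >= sinr_thr

trials = 2000
print("coverage probability ≈", sum(covered() for _ in range(trials)) / trials)
```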

Recommender systems (RSs) have become an essential tool for mitigating information overload in a range of real-world applications. Recent trends in RSs have revealed a major paradigm shift, moving the spotlight from model-centric innovations to data-centric efforts (e.g., improving data quality and quantity). This evolution has given rise to the concept of data-centric recommender systems (Data-Centric RSs), marking a significant development in the field. This survey provides the first systematic overview of Data-Centric RSs, covering 1) the foundational concepts of recommendation data and Data-Centric RSs; 2) three primary issues of recommendation data; 3) recent research developed to address these issues; and 4) several potential future directions of Data-Centric RSs.

In this paper, the problem of low-latency communication and computation resource allocation for digital twin (DT) over wireless networks is investigated. In the considered model, multiple physical devices in the physical network (PN) need to frequently offload computation-task-related data to the digital network twin (DNT), which is generated and controlled by the central server. Due to the limited energy budget of the physical devices, both computation accuracy and wireless transmission power must be considered during the DT procedure. This joint communication and computation problem is formulated as an optimization problem whose goal is to minimize the overall transmission delay of the system under total PN energy and DNT model accuracy constraints. To solve this problem, an alternating algorithm is proposed that iteratively solves the device scheduling, power control, and data offloading subproblems. For the device scheduling subproblem, the optimal solution is obtained in closed form through the dual method. For the special case with one physical device, the optimal number of transmission times is revealed. Based on the theoretical findings, the original problem is transformed into a simplified problem from which the optimal device scheduling can be found. Numerical results verify that the proposed algorithm can reduce the transmission delay of the system by up to 51.2\% compared to conventional schemes.
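The overall solution structure is a block-alternating loop over the three subproblems. The skeleton below illustrates that control flow only: the three solvers are placeholders acting on a toy quadratic delay surrogate, not the closed-form and dual-method solutions derived in the paper.

```python
# Skeleton of a block-alternating loop over device scheduling, power control,
# and data offloading. The solvers below act on a toy quadratic delay surrogate
# and are placeholders, not the paper's subproblem solutions.

from dataclasses import dataclass

@dataclass
class State:
    schedule: float = 1.0
    power: float = 1.0
    offload: float = 1.0

def delay(s: State) -> float:
    # Toy surrogate: delay grows as each block drifts from a made-up optimum.
    return (s.schedule - 0.3) ** 2 + (s.power - 0.7) ** 2 + (s.offload - 0.5) ** 2 + 1.0

def solve_scheduling(s): s.schedule += 0.5 * (0.3 - s.schedule)   # placeholder update
def solve_power(s):      s.power    += 0.5 * (0.7 - s.power)      # placeholder update
def solve_offloading(s): s.offload  += 0.5 * (0.5 - s.offload)    # placeholder update

s, prev = State(), float("inf")
for it in range(50):
    for step in (solve_scheduling, solve_power, solve_offloading):
        step(s)
    if prev - delay(s) < 1e-9:        # stop when the delay stops decreasing
        break
    prev = delay(s)

print(f"stopped after {it + 1} outer iterations, delay = {delay(s):.4f}")
```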

Compared with terrestrial networks (TNs), which can only support limited coverage areas, low-earth orbit (LEO) satellites can provide seamless global coverage and high survivability in case of emergencies. Nevertheless, the swift movement of LEO satellites poses a challenge: frequent handovers are inevitable, compromising the quality of service (QoS) of users and leading to discontinuous connectivity. Moreover, when LEO satellite connectivity serves different flying vehicles (FVs) coexisting with ground terminals, an efficient satellite handover decision control and mobility management strategy is required to reduce the number of handovers and allocate resources in line with different users' requirements. In this paper, a novel distributed satellite handover strategy based on Multi-Agent Reinforcement Learning (MARL) and game theory, named Nash-SAC, is proposed to solve these problems. Simulation results show that the Nash-SAC-based handover strategy effectively reduces handovers by over 16 percent and the blocking rate by over 18 percent, outperforming local benchmarks such as traditional Q-learning. It also improves the network utility used to quantify the performance of the whole system by up to 48 percent and caters to different users' requirements, providing reliable and robust connectivity for both FVs and ground terminals.
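For contrast with the learned policy, the snippet below shows the kind of naive single-user baseline such strategies are typically compared against: hand over only when a candidate satellite beats the serving one by a hysteresis margin, which suppresses ping-pong handovers. The satellite IDs, signal values, and margin are illustrative, and this is not the Nash-SAC strategy.

```python
# Naive single-user handover baseline with a hysteresis margin: switch only
# when another satellite's signal quality exceeds the serving one by margin dB.

def handover_decision(serving_id, measurements, hysteresis_db=3.0):
    """measurements: dict satellite_id -> signal quality in dB."""
    best_id = max(measurements, key=measurements.get)
    if best_id != serving_id and (
        measurements[best_id] >= measurements[serving_id] + hysteresis_db
    ):
        return best_id            # hand over to the stronger satellite
    return serving_id             # stay to avoid ping-pong handovers

serving = "sat_A"
for snapshot in (
    {"sat_A": -95.0, "sat_B": -93.0},   # better, but within the margin: stay
    {"sat_A": -97.0, "sat_B": -92.0},   # exceeds the margin: hand over
):
    serving = handover_decision(serving, snapshot)
    print(serving)
```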

With the extremely rapid advances in remote sensing (RS) technology, a great quantity of Earth observation (EO) data featuring considerable and complicated heterogeneity is readily available nowadays, which offers researchers an opportunity to tackle current geoscience applications in a fresh way. With the joint utilization of EO data, much research on multimodal RS data fusion has made tremendous progress in recent years, yet these traditional algorithms inevitably hit a performance bottleneck due to their limited ability to comprehensively analyse and interpret such strongly heterogeneous data. Hence, this limitation creates an intense demand for an alternative tool with powerful processing competence. Deep learning (DL), as a cutting-edge technology, has achieved remarkable breakthroughs in numerous computer vision tasks owing to its impressive ability in data representation and reconstruction. Naturally, it has been successfully applied to the field of multimodal RS data fusion, yielding great improvements over traditional methods. This survey aims to present a systematic overview of DL-based multimodal RS data fusion. More specifically, some essential knowledge about this topic is first given. Subsequently, a literature survey is conducted to analyse the trends of this field. Some prevalent sub-fields in multimodal RS data fusion are then reviewed in terms of the to-be-fused data modalities, i.e., spatiospectral, spatiotemporal, light detection and ranging-optical, synthetic aperture radar-optical, and RS-Geospatial Big Data fusion. Furthermore, we collect and summarize some valuable resources for the sake of the development of multimodal RS data fusion. Finally, the remaining challenges and potential future directions are highlighted.

The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of data that must remain decentralized because of edge-computation requirements and licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without requiring direct data sharing and exchange, effectively modeling the complex spatio-temporal dependencies to improve forecasting capabilities still remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model -- Cross-Node Federated Graph Neural Network (CNFGNN) -- which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes is generated locally on each node and remains decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on the edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring modest communication cost.
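A conceptual PyTorch sketch of the split computation pattern described above: each node encodes its own traffic history locally with a GRU, the server mixes the node embeddings with one normalized-adjacency graph-convolution step, and a shared head produces per-node forecasts. The layer sizes, adjacency matrix, and single-round message flow are illustrative assumptions, not CNFGNN's actual architecture or federated training procedure.

```python
import torch
import torch.nn as nn

# Conceptual split: temporal encoding stays on each node, spatial mixing runs on
# the server. Sizes and the adjacency matrix are illustrative placeholders.

class NodeEncoder(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
    def forward(self, series):                 # series: (1, T, 1) local traffic history
        _, h = self.gru(series)
        return h.squeeze(0)                    # (1, hidden) node embedding

def server_graph_conv(H, A):
    # One propagation step over the sensor graph with row-normalized adjacency.
    A_hat = A + torch.eye(A.shape[0])
    A_hat = A_hat / A_hat.sum(dim=1, keepdim=True)
    return A_hat @ H

n_nodes, T, hidden = 4, 24, 32
A = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float32)
encoders = [NodeEncoder(hidden) for _ in range(n_nodes)]
decoder = nn.Linear(hidden, 1)                 # shared one-step-ahead forecast head

local_series = [torch.randn(1, T, 1) for _ in range(n_nodes)]      # raw data stays on each node
H = torch.cat([enc(x) for enc, x in zip(encoders, local_series)])  # embeddings sent to the server
H_mixed = server_graph_conv(H, A)                                  # spatial mixing on the server
forecasts = decoder(H_mixed)                                       # per-node forecasts returned
print(forecasts.shape)                                             # torch.Size([4, 1])
```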
