
A simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) assisted wireless powered communication network (WPCN) is proposed, where two energy-limited devices first harvest energy from a hybrid access point (HAP) and then use that energy to transmit information back. To fully eliminate the doubly near-far effect in WPCNs, two transmission strategies driven by the STAR-RIS operating protocols are proposed, namely energy splitting non-orthogonal multiple access (ES-NOMA) and time switching time division multiple access (TS-TDMA). For each strategy, the corresponding optimization problem is formulated to maximize the minimum throughput by jointly optimizing the time allocation, user transmit power, active HAP beamforming, and passive STAR-RIS beamforming. For ES-NOMA, the resulting intractable problem is solved via a two-layer algorithm, which iteratively combines a one-dimensional search with the block coordinate descent method. For TS-TDMA, the optimal active and passive beamforming are first determined according to the maximum-ratio transmission beamformer. The optimal time allocation is then obtained by solving a standard convex problem; a minimal sketch of this step is given below. Numerical results show that: 1) the STAR-RIS achieves considerable performance improvements for both strategies compared to the conventional RIS; 2) TS-TDMA is preferred in single-antenna scenarios, whereas ES-NOMA is better suited to multi-antenna scenarios; and 3) the superiority of ES-NOMA over TS-TDMA grows as the number of STAR-RIS elements increases.
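The TS-TDMA time-allocation step can be illustrated as a small convex program. Below is a minimal sketch assuming a simplified single-antenna WPCN with two users, a unit-length frame, and hypothetical effective gains a_k (folding harvesting efficiency, HAP power, channel gain, and noise into one constant); it exploits the identity t*log(1 + a*t0/t) = -rel_entr(t, t + a*t0) to express the throughput in disciplined convex form. This is not the paper's full formulation, which also optimizes the HAP and STAR-RIS beamformers.

```python
# Minimal sketch: max-min time allocation for a two-user TDMA WPCN.
# The gains a_k are hypothetical placeholders, not values from the paper.
import numpy as np
import cvxpy as cp

a = np.array([0.8, 2.5])         # assumed effective gains eta*P*|h_k|^2 / sigma^2
t0 = cp.Variable(nonneg=True)    # downlink wireless-energy-transfer duration
t = cp.Variable(2, nonneg=True)  # uplink information slots of the two users

# Throughput of user k: t_k*log2(1 + a_k*t0/t_k) = -rel_entr(t_k, t_k + a_k*t0)/ln 2
rates = [-cp.rel_entr(t[k], t[k] + a[k] * t0) / np.log(2) for k in range(2)]

prob = cp.Problem(cp.Maximize(cp.minimum(*rates)), [t0 + cp.sum(t) <= 1])
prob.solve()
print("max-min throughput:", prob.value, "| t0:", t0.value, "| t:", t.value)
```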

Related Content

Port-Hamiltonian neural networks (pHNNs) are emerging as a powerful modeling tool that integrates physical laws with deep learning techniques. While most research has focused on modeling the entire dynamics of interconnected systems, the potential for identifying and modeling individual subsystems while they operate as part of a larger system has been overlooked. This study addresses this gap by introducing a novel method for using pHNNs to identify such subsystems based solely on input-output measurements. By exploiting the inherent compositional property of port-Hamiltonian systems, we develop an algorithm that learns the dynamics of individual subsystems without requiring direct access to their internal states. Moreover, by choosing an output error (OE) model structure, we are able to handle measurement noise effectively. The proposed approach is validated on interconnected systems, including multi-physics scenarios, demonstrating its potential for identifying subsystem dynamics and facilitating their integration into new interconnected models.
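To make the model class concrete, a minimal pHNN can parameterize the standard input-state-output port-Hamiltonian form dx/dt = (J - R) grad H(x) + B u, y = B^T grad H(x), with a learned Hamiltonian and structure-preserving parameterizations (skew-symmetric J, positive semidefinite R). The PyTorch sketch below follows this generic textbook form; the paper's exact parameterization, interconnection handling, and OE training loop are not reproduced here.

```python
# Minimal port-Hamiltonian NN sketch (generic form, not the paper's exact model).
import torch
import torch.nn as nn

class PHNN(nn.Module):
    def __init__(self, n_state, n_port, hidden=64):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(n_state, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))        # learned Hamiltonian
        self.A = nn.Parameter(torch.randn(n_state, n_state) * 0.1)
        self.L = nn.Parameter(torch.randn(n_state, n_state) * 0.1)
        self.B = nn.Parameter(torch.randn(n_state, n_port) * 0.1)

    def grad_H(self, x):
        if not x.requires_grad:                 # make x differentiable if needed
            x = x.clone().requires_grad_(True)
        return torch.autograd.grad(self.H(x).sum(), x, create_graph=True)[0]

    def forward(self, x, u):
        J = self.A - self.A.T                   # skew-symmetric interconnection
        R = self.L @ self.L.T                   # positive semidefinite dissipation
        dH = self.grad_H(x)
        dx = dH @ (J - R).T + u @ self.B.T      # dx/dt = (J - R) grad H + B u
        y = dH @ self.B                         # collocated port output
        return dx, y
```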

This work focuses on designing a power-efficient network for dynamic metasurface antenna (DMA)-aided multiuser multiple-input single-output (MISO) systems. The main objective is to minimize the total power transmitted by the DMAs while guaranteeing a target signal-to-interference-plus-noise ratio (SINR) for multiple users in downlink beamforming. Unlike conventional MISO systems, which have well-explored beamforming solutions, DMAs require specialized methods due to their unique physical constraints and wave-domain precoding capabilities. To this end, optimization algorithms relying on alternating optimization and semidefinite programming are developed, incorporating spherical-wave channel modeling for near-field communication. The dynamic reconfigurability and holography-based beamforming of metasurface arrays make DMAs promising candidates for power-efficient networks, since they reduce the need for power-hungry RF chains. On the other hand, the physical constraints on the DMA weights and the wave-domain precoding of multiple DMA elements through a reduced number of RF chains can limit the degrees of freedom (DoF) in beamforming optimization compared to conventional fully digital (FD) architectures. This paper investigates the optimization of downlink beamforming in DMA-aided networks, focusing on power efficiency and addressing these challenges.
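As a baseline for the semidefinite-programming component, the sketch below solves the classic SDR power-minimization beamforming problem for a fully digital array, i.e., without the DMA weight constraints that the paper's alternating algorithm layers on top. The channels, noise power, and 10 dB SINR target are synthetic placeholders.

```python
# SDR power minimization under per-user SINR constraints (FD baseline sketch).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, K, sigma2 = 8, 3, 1e-3                       # antennas, users, noise power
gamma = 10 ** (10 / 10)                         # assumed 10 dB SINR target
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

W = [cp.Variable((N, N), hermitian=True) for _ in range(K)]
cons = [Wk >> 0 for Wk in W]
for k in range(K):
    Rk = np.outer(H[k], H[k].conj())            # h_k h_k^H
    signal = cp.real(cp.trace(Rk @ W[k]))
    interf = sum(cp.real(cp.trace(Rk @ W[j])) for j in range(K) if j != k)
    cons.append(signal >= gamma * (interf + sigma2))   # SINR_k >= gamma

prob = cp.Problem(cp.Minimize(sum(cp.real(cp.trace(Wk)) for Wk in W)), cons)
prob.solve()
print("minimum total transmit power:", prob.value)
```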

Vehicle-to-Everything (V2X) communication, which includes Vehicle-to-Infrastructure (V2I), Vehicle-to-Vehicle (V2V), and Vehicle-to-Pedestrian (V2P) networks, is gaining significant attention due to the rise of connected and autonomous vehicles. V2X systems require diverse Quality of Service (QoS) provisions, with V2V communication demanding stricter latency and reliability guarantees than V2I. The 5G New Radio-V2X (NR-V2X) standard addresses these needs using multi-numerology Orthogonal Frequency Division Multiple Access (OFDMA), which allows flexible allocation of radio resources. However, V2I and V2V users sharing the same radio resources causes interference, necessitating efficient power and resource allocation. In this work, we propose a novel resource allocation and sharing algorithm for 5G-based V2X systems. Our approach first groups Resource Blocks (RBs) into Resource Chunks (RCs) and allocates them to V2I users using the Gale-Shapley stable matching algorithm. Power is then allocated to RCs to facilitate efficient resource sharing between V2I and V2V users through a bisection search method. Finally, the Gale-Shapley algorithm is used again to pair V2I and V2V users, maintaining low computational complexity while ensuring high performance; a generic sketch of this matching primitive follows. Simulation results demonstrate that the proposed Gale-Shapley Resource Allocation with Gale-Shapley Sharing (GSRAGS) achieves competitive performance with lower complexity than existing works while effectively meeting the QoS demands of V2X communication systems.
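The matching primitive used twice in GSRAGS is ordinary Gale-Shapley deferred acceptance. The sketch below is a generic implementation with toy index-based preference lists; in the paper the preferences would be derived from achievable rates.

```python
# Generic Gale-Shapley stable matching (deferred acceptance) sketch.
def gale_shapley(prop_prefs, acc_prefs):
    """prop_prefs[i]: proposer i's ranked list of acceptors;
       acc_prefs[j]: acceptor j's ranked list of proposers."""
    rank = [{p: r for r, p in enumerate(prefs)} for prefs in acc_prefs]
    match = {}                      # acceptor -> current proposer
    nxt = [0] * len(prop_prefs)     # next acceptor each proposer will try
    free = list(range(len(prop_prefs)))
    while free:
        i = free.pop()
        j = prop_prefs[i][nxt[i]]
        nxt[i] += 1
        if j not in match:
            match[j] = i
        elif rank[j][i] < rank[j][match[j]]:    # j prefers i over its partner
            free.append(match[j])
            match[j] = i
        else:
            free.append(i)
    return match

# e.g., 3 V2V users proposing to 3 V2I users (toy preferences)
print(gale_shapley([[0, 1, 2], [1, 0, 2], [0, 2, 1]],
                   [[1, 0, 2], [0, 2, 1], [2, 1, 0]]))
```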

This paper studies the device activity detection problem in a massive multiple-input multiple-output (MIMO) system for near-field communications (NFC). In this system, active devices transmit their signature sequences to the base station (BS), which detects the active devices based on the received signal. In this paper, we model the near-field channels as correlated Rician fading channels and formulate the device activity detection problem as a maximum likelihood estimation (MLE) problem. Compared to the traditional uncorrelated channel model, the correlation of channels complicates both algorithm design and theoretical analysis of the MLE problem. On the algorithmic side, we propose two computationally efficient algorithms for solving the MLE problem: an exact coordinate descent (CD) algorithm and an inexact CD algorithm. The exact CD algorithm solves the one-dimensional optimization subproblem exactly using matrix eigenvalue decomposition and polynomial root-finding. By approximating the objective function appropriately, the inexact CD algorithm solves the one-dimensional optimization subproblem inexactly with lower complexity and more robust numerical performance. Additionally, we analyze the detection performance of the MLE problem under correlated channels by comparing it with the case of uncorrelated channels. The analysis shows that when the overall number of devices $N$ is large or the signature sequence length $L$ is small, the detection performance of MLE under correlated channels tends to be better than that under uncorrelated channels. Conversely, when $N$ is small or $L$ is large, MLE performs better under uncorrelated channels than under correlated ones. Simulation results demonstrate the computational efficiency of the proposed algorithms and verify the correctness of the analysis.
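To make the coordinate-descent structure concrete, the sketch below implements the well-known closed-form CD update for covariance-based MLE activity detection under the simpler uncorrelated-channel model, with a Sherman-Morrison rank-one update of the inverse covariance. In the paper's correlated-Rician setting this scalar step is replaced by the eigendecomposition/root-finding (exact CD) or approximation (inexact CD) described above; the signatures and signals here are generic placeholders.

```python
# CD sketch for covariance-based MLE activity detection (uncorrelated channels).
import numpy as np

def cd_activity_detection(S, Y, sigma2, iters=20):
    """S: L x N signature matrix, Y: L x M received signal (M antennas)."""
    L, N = S.shape
    M = Y.shape[1]
    hat_Sigma = Y @ Y.conj().T / M          # sample covariance of the received signal
    gamma = np.zeros(N)                     # activity-scaled large-scale gains
    Sigma_inv = np.eye(L) / sigma2          # inverse of sigma2*I + S diag(gamma) S^H
    for _ in range(iters):
        for n in range(N):
            s = S[:, n:n + 1]
            a = (s.conj().T @ Sigma_inv @ s).real.item()
            b = (s.conj().T @ Sigma_inv @ hat_Sigma @ Sigma_inv @ s).real.item()
            d = max((b - a) / a**2, -gamma[n])   # closed-form coordinate step
            gamma[n] += d
            # rank-one (Sherman-Morrison) update of the inverse covariance
            Sigma_inv -= d * (Sigma_inv @ s @ s.conj().T @ Sigma_inv) / (1 + d * a)
    return gamma                            # threshold to decide active devices
```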

Sensing-assisted communication is critical for enhancing system efficiency in integrated sensing and communication (ISAC) systems. However, most existing literature focuses on large-scale channel sensing without considering the impact of small-scale channel aging. In this paper, we investigate a dual-scale channel estimation framework for sensing-assisted communication in which both large-scale channel sensing and small-scale channel aging are considered. By modeling the channel aging effect with block fading and incorporating Cramér-Rao bound (CRB)-based sensing errors, we optimize both the time duration of large-scale detection and the frequency of small-scale updates within each subframe to maximize the achievable rate while satisfying the sensing requirements. Since the formulated optimization problem is non-convex, we propose a two-dimensional search-based algorithm to obtain the optimal solution. Simulation results demonstrate the superiority of the proposed optimal design over three counterparts.
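The two-dimensional search itself is straightforward once the rate and CRB expressions are in hand. The sketch below grid-searches a detection duration and an update count against placeholder rate and constraint functions with hypothetical shapes; it only illustrates the search structure, not the paper's actual expressions.

```python
# Toy 2D search over (large-scale detection duration, small-scale update count).
import numpy as np

def achievable_rate(T_det, n_upd):            # placeholder: better estimates help,
    overhead = T_det + 0.01 * n_upd           # but pilots/updates cost overhead
    quality = 1 - np.exp(-5 * T_det) * (0.9 ** n_upd)
    return (1 - overhead) * np.log2(1 + 10 * quality)

def sensing_ok(T_det):                        # placeholder CRB-style requirement
    return 1.0 / (1e-6 + T_det) <= 50         # i.e., T_det is long enough

best = max(((achievable_rate(T, n), T, n)
            for T in np.linspace(0.01, 0.5, 50)
            for n in range(1, 21) if sensing_ok(T)),
           key=lambda x: x[0])
print("best rate %.3f at T_det=%.3f, updates=%d" % best)
```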

Reconfigurable intelligent surfaces (RISs) and index modulation (IM) are key technologies for enabling reliable wireless communication with high energy efficiency. However, to fully exploit these technologies in practical deployments, understanding the impact of non-ideal transceivers is paramount. In this context, this paper introduces two RIS-assisted IM communication models, in which the RIS is part of the transmitter and space-shift keying (SSK) is employed for IM, and assesses their performance in the presence of hardware impairments. In the first model, the RIS acts as a passive reflector only, intelligently reflecting the incoming SSK-modulated signal towards the desired receive diversity branch/antenna. The second model employs the RIS both as a transmitter, using M-ary phase-shift keying for reflection phase modulation (RPM), and as a reflector for the incoming SSK-modulated signal. Considering transmissions subject to Nakagami-m fading and a greedy detection rule at the receiver, the performance of both system configurations is evaluated. Specifically, the pairwise probability of erroneous index detection and the probability of erroneous index detection are adopted as performance metrics, and their closed-form expressions are derived for the RIS-assisted SSK and RIS-assisted SSK-RPM system models. Monte Carlo simulations are carried out to verify the analytical framework, and numerical results are presented to study how the error performance depends on the system parameters. The findings highlight the effect of hardware impairments on the performance of the communication system under study.
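The greedy detection rule (pick the receive branch with the largest received magnitude) is easy to exercise by simulation. The sketch below estimates the index error probability of SSK over Nakagami-m fading under a deliberately simplified link model in which the signal arrives only at the target branch, with ideal hardware; it illustrates the detector, not the paper's RIS gain aggregation or impairment analysis.

```python
# Monte Carlo sketch: greedy SSK index detection over Nakagami-m fading.
import numpy as np

rng = np.random.default_rng(1)
m, omega, Nr, snr_db, trials = 2.0, 1.0, 4, 10.0, 200_000
snr = 10 ** (snr_db / 10)

# Nakagami-m envelopes (sqrt of Gamma(m, omega/m)) with uniform random phase
env = np.sqrt(rng.gamma(m, omega / m, (trials, Nr)))
h = env * np.exp(1j * rng.uniform(0, 2 * np.pi, (trials, Nr)))
idx = rng.integers(Nr, size=trials)                # transmitted antenna index

noise = rng.standard_normal((trials, Nr)) + 1j * rng.standard_normal((trials, Nr))
y = noise * np.sqrt(1 / (2 * snr))
y[np.arange(trials), idx] += h[np.arange(trials), idx]  # signal on target branch

err = (np.abs(y).argmax(axis=1) != idx).mean()     # greedy rule: argmax_i |y_i|
print("index error probability ~", err)
```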

Perfect complementary sequence sets (PCSSs) are widely used in multi-carrier code-division multiple-access (MC-CDMA) communication systems. However, the set size of a PCSS is upper bounded by the number of row sequences of each two-dimensional matrix in the PCSS. Quasi-complementary sequence sets (QCSSs) were therefore proposed to support more users in MC-CDMA communications. For practical applications, it is desirable to construct an $(M,K,N,\vartheta_{max})$-QCSS with $M$ as large as possible and $\vartheta_{max}$ as small as possible, where $M$ is the number of matrices with $K$ rows and $N$ columns in the set and $\vartheta_{max}$ denotes its periodic tolerance. There exists a trade-off among these parameters, and constructing QCSSs that achieve or nearly achieve the known correlation lower bound has been an interesting research topic. To date, only a few constructions of asymptotically optimal or near-optimal periodic QCSSs have been reported in the literature. In this paper, we construct five families of asymptotically optimal or near-optimal periodic QCSSs with large set sizes and low periodic tolerances. These families have set size $\Theta(q^2)$ or $\Theta(q^3)$ and flock size $\Theta(q)$, where $q$ is a power of a prime. To the best of our knowledge, only three known families of periodic QCSSs with set size $\Theta(q^2)$ and flock size $\Theta(q)$ have been constructed, and all other known periodic QCSSs have set sizes much smaller than $\Theta(q^2)$. Our new periodic QCSSs with set size $\Theta(q^2)$ and flock size $\Theta(q)$ have better parameters than the known ones: they have larger set sizes or lower periodic tolerances. The periodic QCSSs with set size $\Theta(q^3)$ and flock size $\Theta(q)$ constructed in this paper have the largest set size among all known families of asymptotically optimal or near-optimal periodic QCSSs.
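For readers experimenting with such constructions, the periodic tolerance $\vartheta_{max}$ of a candidate $(M,K,N)$ set can be checked numerically via FFT-based periodic correlations, as in the sketch below (toy random quaternary matrices stand in for an actual construction).

```python
# Utility sketch: numerically computing the periodic tolerance of a sequence set.
import numpy as np

def periodic_tolerance(C):
    """C: M x K x N complex array (M matrices, K row sequences, N columns)."""
    M, K, N = C.shape
    F = np.fft.fft(C, axis=2)
    theta = 0.0
    for u in range(M):
        for v in range(M):
            # sum over the K rows of the periodic (cross-)correlations
            corr = np.fft.ifft(F[u] * F[v].conj(), axis=1).sum(axis=0)
            if u == v:
                corr = corr[1:]          # exclude the in-phase autocorrelation peak
            if corr.size:
                theta = max(theta, np.abs(corr).max())
    return theta

# toy check: two random quaternary 2 x 8 matrices
rng = np.random.default_rng(0)
C = np.exp(2j * np.pi * rng.integers(4, size=(2, 2, 8)) / 4)
print("theta_max ~", periodic_tolerance(C))
```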

Retrieval-Augmented Generation (RAG) merges retrieval methods with deep learning advancements to address the static limitations of large language models (LLMs) by enabling the dynamic integration of up-to-date external information. This methodology, focusing primarily on the text domain, provides a cost-effective remedy for LLMs' generation of plausible but incorrect responses, thereby enhancing the accuracy and reliability of their outputs through the use of real-world data. As RAG grows in complexity and incorporates multiple concepts that can influence its performance, this paper organizes the RAG paradigm into four categories: pre-retrieval, retrieval, post-retrieval, and generation, offering a detailed perspective from the retrieval viewpoint. It outlines RAG's evolution and discusses the field's progression through the analysis of significant studies. Additionally, the paper introduces evaluation methods for RAG, addressing the challenges faced and proposing future research directions. By offering an organized framework and categorization, the study aims to consolidate existing research on RAG, clarify its technological underpinnings, and highlight its potential to broaden the adaptability and applications of LLMs.
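The four-stage decomposition can be summarized in a short pipeline skeleton. In the sketch below, embed is a hypothetical hash-seeded placeholder embedding (not a real model), and the final stage returns the assembled prompt rather than calling any particular LLM API.

```python
# Schematic of a four-stage RAG pipeline: pre-retrieval, retrieval,
# post-retrieval, generation. All components are illustrative placeholders.
import numpy as np

def embed(text):                                  # placeholder embedding
    rng = np.random.default_rng(sum(map(ord, text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

docs = ["STAR-RIS splits energy and reflection.",
        "Gale-Shapley yields stable matchings.",
        "Coordinate descent solves MLE activity detection."]
index = np.stack([embed(d) for d in docs])        # toy dense index

def rag_answer(query, k=2):
    q = " ".join(query.lower().split())           # 1. pre-retrieval: normalize/rewrite
    scores = index @ embed(q)                     # 2. retrieval: dense similarity
    top = [docs[i] for i in np.argsort(-scores)[:k]]
    context = "\n".join(sorted(top, key=len))     # 3. post-retrieval: rerank/filter
    prompt = f"Context:\n{context}\n\nQ: {query}\nA:"
    return prompt                                 # 4. generation: pass to an LLM

print(rag_answer("What gives a stable matching?"))
```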

With the extremely rapid advances in remote sensing (RS) technology, a great quantity of Earth observation (EO) data featuring considerable and complicated heterogeneity is now readily available, offering researchers an opportunity to tackle current geoscience applications in a fresh way. Through the joint utilization of EO data, research on multimodal RS data fusion has made tremendous progress in recent years; yet traditional algorithms inevitably hit a performance bottleneck because they lack the ability to comprehensively analyse and interpret such strongly heterogeneous data. This limitation creates a strong demand for an alternative tool with powerful processing competence. Deep learning (DL), as a cutting-edge technology, has achieved remarkable breakthroughs in numerous computer vision tasks owing to its impressive ability in data representation and reconstruction. Naturally, it has been successfully applied to multimodal RS data fusion, yielding great improvements over traditional methods. This survey aims to present a systematic overview of DL-based multimodal RS data fusion. More specifically, essential knowledge about this topic is first given. Subsequently, a literature survey is conducted to analyse the trends of the field. Several prevalent sub-fields of multimodal RS data fusion are then reviewed in terms of the to-be-fused data modalities, i.e., spatiospectral, spatiotemporal, light detection and ranging-optical, synthetic aperture radar-optical, and RS-Geospatial Big Data fusion. Furthermore, we collect and summarize valuable resources to support the development of multimodal RS data fusion. Finally, the remaining challenges and potential future directions are highlighted.

The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, given the need for edge computation and licensing (data access) constraints. While federated learning (FL) has emerged as a framework for model training without requiring direct data sharing and exchange, effectively modeling the complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model, Cross-Node Federated Graph Neural Network (CNFGNN), which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes be generated locally on each node and remain decentralized. CNFGNN operates by disentangling temporal dynamics modeling, performed on the devices, from spatial dynamics modeling, performed on the server, and uses alternating optimization to reduce the communication cost and facilitate computation on edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring only a modest communication cost.
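The cross-node split can be sketched structurally: each node keeps a private temporal encoder (e.g., a GRU) and uploads only a hidden-state summary, while the server runs a GNN over the graph of summaries and returns per-node messages. The PyTorch sketch below is a schematic of this division of labor under assumed dimensions, not the authors' implementation or their alternating training procedure.

```python
# Schematic of the cross-node split: temporal models on nodes, GNN on server.
import torch
import torch.nn as nn

class NodeTemporalModel(nn.Module):           # trained locally on each node
    def __init__(self, d_in, d_h):
        super().__init__()
        self.gru = nn.GRU(d_in, d_h, batch_first=True)
        self.head = nn.Linear(2 * d_h, d_in)
    def encode(self, x):                      # x: (1, T, d_in) local history
        _, h = self.gru(x)
        return h[-1]                          # (1, d_h) summary sent to server
    def predict(self, x, server_msg):         # fuse local state with server message
        return self.head(torch.cat([self.encode(x), server_msg], dim=-1))

class ServerGNN(nn.Module):                   # runs on the server only
    def __init__(self, d_h, adj):
        super().__init__()
        self.adj = adj                        # (N, N) normalized adjacency
        self.lin = nn.Linear(d_h, d_h)
    def forward(self, H):                     # H: (N, d_h) stacked node summaries
        return torch.relu(self.lin(self.adj @ H))

# one communication round: nodes upload summaries, server returns messages
N, T, d_in, d_h = 4, 12, 1, 16
adj = torch.softmax(torch.randn(N, N), dim=1)
nodes = [NodeTemporalModel(d_in, d_h) for _ in range(N)]
server = ServerGNN(d_h, adj)
hist = [torch.randn(1, T, d_in) for _ in range(N)]       # toy local histories
H = torch.cat([m.encode(x) for m, x in zip(nodes, hist)])  # (N, d_h) uploads
msgs = server(H)                                           # (N, d_h) downloads
preds = [nodes[i].predict(hist[i], msgs[i:i + 1]) for i in range(N)]
```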
