Low Earth orbit (LEO) satellites play a crucial role in providing global connectivity for non-terrestrial networks (NTNs) and in supporting various Internet-of-Remote-Things (IoRT) applications. Each LEO satellite functions as a relay node in the sky, employing store-and-forward transmission strategies that necessitate the use of buffers. However, because these buffers are finite, buffer overflows leading to packet loss are inevitable. In this paper, we demonstrate how inter-satellite links (ISLs) can reduce the probability of buffer overflow. Specifically, we propose an approach that reallocates packets among LEO satellites via ISLs to minimize the occurrence of buffer overflow events, showing that ISLs enable a more reliable satellite network.
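To make the idea concrete, here is a minimal sketch of one possible packet-reallocation heuristic over ISLs; the queue values, the ring topology, and the greedy balancing rule are illustrative assumptions, not the scheme analyzed in the paper.

```python
# Illustrative sketch only: a greedy packet-reallocation heuristic over ISLs.
# Topology, queue lengths, and the balancing rule below are assumptions.

def reallocate(queues, isl_neighbors):
    """Greedily shift packets from loaded satellites to their least-loaded
    ISL neighbor until no single transfer reduces a pair's imbalance."""
    moved = True
    while moved:
        moved = False
        for s in range(len(queues)):
            if queues[s] == 0:
                continue
            # Least-loaded neighbor reachable over an ISL.
            n = min(isl_neighbors[s], key=lambda j: queues[j])
            # Transfer one packet only if it strictly evens out the pair.
            if queues[n] + 1 < queues[s]:
                queues[s] -= 1
                queues[n] += 1
                moved = True
    return queues

# Four satellites on a ring; satellite 0 is close to overflowing its buffer.
queues = [9, 2, 1, 4]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(reallocate(queues, neighbors))  # converges to a balanced load [4, 4, 4, 4]
```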
Integrated satellite, aerial, and terrestrial networks (ISATNs) represent a sophisticated convergence of diverse communication technologies to ensure seamless connectivity across different altitudes and platforms. This paper explores the transformative potential of integrating Large Language Models (LLMs) into ISATNs, leveraging advanced Artificial Intelligence (AI) and Machine Learning (ML) capabilities to enhance these networks. We outline the current architecture of ISATNs and highlight the significant role LLMs can play in optimizing data flow, signal processing, and network management to advance 5G/6G communication technologies through predictive algorithms and real-time decision-making. A comprehensive analysis of ISATN components is conducted, assessing how LLMs can effectively address traditional data transmission and processing bottlenecks. The paper delves into the network management challenges within ISATNs, emphasizing the necessity for sophisticated resource allocation strategies, traffic routing, and security management to ensure seamless connectivity and optimal performance under varying conditions. Furthermore, we examine the technical challenges and limitations associated with integrating LLMs into ISATNs, such as data integration for LLM processing, scalability issues, latency in decision-making processes, and the design of robust, fault-tolerant systems. The study also identifies key future research directions for fully harnessing LLM capabilities in ISATNs, which is crucial for enhancing network reliability, optimizing performance, and achieving a truly interconnected and intelligent global network system.
The extensive deployment of Low Earth Orbit (LEO) satellites introduces significant communication security challenges for Internet of Things (IoT) networks. With a rising number of satellites potentially acting as eavesdroppers, integrating Physical Layer Security (PLS) into satellite communications has become increasingly critical. However, existing studies face challenges such as dynamic topologies, limitations in interference analysis, and the high complexity of performance evaluation. To address these challenges, for the first time, we investigate PLS strategies in satellite communications using the Stochastic Geometry (SG) analytical framework. We consider the uplink communication scenario in an LEO-enabled IoT network, where multi-tier satellites from different operators respectively serve as legitimate receivers and eavesdroppers. In this scenario, we derive low-complexity analytical expressions for the security performance metrics, namely the availability probability, successful communication probability, and secure communication probability. By introducing power allocation parameters, we incorporate the Artificial Noise (AN) technique, an important PLS strategy, into this analytical framework and evaluate the gains it brings to secure transmission. In addition to the AN technique, we also analyze the impact of the constellation configuration, physical layer parameters, and network layer parameters on the aforementioned metrics.
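The closed-form expressions are the paper's contribution; as a hedged illustration of the underlying SG abstraction, the Monte Carlo sketch below estimates a secure-communication probability when legitimate and eavesdropping satellites form uniform point processes on two orbital shells. All constants, the nearest-satellite association, and the omission of visibility constraints are simplifying assumptions.

```python
# Hedged Monte Carlo sketch (not the paper's analysis): secure-communication
# probability under binomial point processes on two orbital shells.
import numpy as np

rng = np.random.default_rng(0)
R_E, h_leg, h_eve = 6371e3, 550e3, 600e3       # Earth radius, shell altitudes (m)
N_leg, N_eve = 100, 50                          # satellites per shell (assumed)
alpha, P, noise = 2.0, 1.0, 1e-13               # path-loss exponent, Tx power, noise
tau_leg, tau_eve = 1.0, 1.0                     # SNR thresholds, linear (assumed)

def shell_points(n, radius):
    """n points uniform on a sphere of given radius (Marsaglia method)."""
    v = rng.normal(size=(n, 3))
    return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

user = np.array([0.0, 0.0, R_E])                # ground IoT device at the pole
secure, trials = 0, 10_000
for _ in range(trials):
    d_leg = np.linalg.norm(shell_points(N_leg, R_E + h_leg) - user, axis=1)
    d_eve = np.linalg.norm(shell_points(N_eve, R_E + h_eve) - user, axis=1)
    snr_leg = P * d_leg.min() ** (-alpha) / noise   # serve via nearest legit satellite
    snr_eve = P * d_eve.min() ** (-alpha) / noise   # worst case: nearest eavesdropper
    secure += (snr_leg > tau_leg) and (snr_eve < tau_eve)
print(f"secure communication probability ~ {secure / trials:.3f}")
```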
Cross-Domain Recommendation (CDR) seeks to utilize knowledge from different domains to alleviate the problem of data sparsity in the target recommendation domain, and it has been gaining more attention in recent years. Although there have been notable advancements in this area, most current methods represent users and items in Euclidean space, which is not ideal for handling long-tail distributed data in recommendation systems. Additionally, adding data from other domains can worsen the long-tail characteristics of the entire dataset, making it harder to train CDR models effectively. Recent studies have shown that hyperbolic methods are particularly suitable for modeling long-tail distributions, which has led us to explore hyperbolic representations for users and items in CDR scenarios. However, due to the distinct characteristics of the different domains, applying hyperbolic representation learning to CDR tasks is quite challenging. In this paper, we introduce a new framework called Hyperbolic Contrastive Learning (HCTS), designed to capture the unique features of each domain while enabling efficient knowledge transfer between domains. We achieve this by embedding users and items from each domain separately and mapping them onto distinct hyperbolic manifolds with adjustable curvatures for prediction. To improve the representations of users and items in the target domain, we develop a hyperbolic contrastive learning module for knowledge transfer. Extensive experiments on real-world datasets demonstrate that hyperbolic manifolds are a promising alternative to Euclidean space for CDR tasks.
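As a hedged sketch of the geometric building blocks such a framework rests on (not the paper's code), the snippet below maps Euclidean embeddings onto a Poincare ball with an adjustable curvature via the exponential map at the origin and scores pairs by geodesic distance; the dimensions and curvature value are illustrative.

```python
# Minimal sketch of hyperbolic embedding primitives on the Poincare ball.
import torch

def expmap0(v: torch.Tensor, c: float) -> torch.Tensor:
    """Exponential map at the origin of a Poincare ball with curvature -c (c > 0)."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-9)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def mobius_add(x, y, c: float):
    """Mobius addition, the ball's analogue of vector addition."""
    xy = (x * y).sum(dim=-1, keepdim=True)
    x2 = (x * x).sum(dim=-1, keepdim=True)
    y2 = (y * y).sum(dim=-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    return num / (1 + 2 * c * xy + c ** 2 * x2 * y2).clamp_min(1e-9)

def hyp_dist(x, y, c: float):
    """Geodesic distance on the ball; smaller distance = stronger preference."""
    sqrt_c = c ** 0.5
    norm = (sqrt_c * mobius_add(-x, y, c).norm(dim=-1)).clamp(max=1 - 1e-6)
    return (2 / sqrt_c) * torch.atanh(norm)

# Each domain keeps its own manifold: embeddings are mapped with that domain's
# curvature (learnable in practice), and scores use the matching distance.
c_target = 0.5
users = expmap0(torch.randn(4, 8) * 0.1, c_target)   # target-domain users
items = expmap0(torch.randn(4, 8) * 0.1, c_target)   # target-domain items
print(hyp_dist(users, items, c_target))              # per-pair geodesic distances
```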
Space-air-ground integrated networks (SAGINs) are emerging as a pivotal element in the evolution of future wireless networks. Despite their potential, the joint design of communication and computation within SAGINs remains a formidable challenge. In this paper, the problem of energy efficiency in a SAGIN-enabled probabilistic semantic communication (PSC) system is investigated. In the considered model, a satellite needs to transmit data to multiple ground terminals (GTs) via an unmanned aerial vehicle (UAV) acting as a relay. During transmission, the satellite and the UAV can use the PSC technique to compress the transmitted data, while the GTs can automatically recover the missing information. PSC is underpinned by shared probability graphs that serve as a common knowledge base among the transceivers, allowing for resource-saving communication at the expense of increased computation. Our analysis shows that the computation overhead in PSC is a piecewise function of the semantic compression ratio, so it is important to strike a balance between communication and computation to achieve optimal energy efficiency. The joint communication and computation problem is formulated as an optimization problem that minimizes the total communication and computation energy consumption of the network under latency, power, computation capacity, bandwidth, semantic compression ratio, and UAV location constraints. To solve this non-convex, non-smooth problem, we propose an iterative algorithm in which closed-form solutions for the computation capacity allocation and the UAV altitude are obtained at each iteration. Numerical results show the effectiveness of the proposed algorithm.
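The following toy numerical illustration shows why a piecewise computation overhead creates an interior energy-optimal compression ratio; the cost model, breakpoints, and slopes are assumptions, not the paper's formulation.

```python
# Toy cost model (assumed): communication energy falls as the semantic
# compression ratio rho shrinks the payload, while a piecewise computation
# overhead grows as more semantic information must be removed.
import numpy as np

rho = np.linspace(0.2, 1.0, 81)        # semantic compression ratio (kept fraction)
E_comm = 2.0 * rho                      # transmit energy ~ payload size (assumed)
# Piecewise overhead: compressing harder requires extra passes over the
# shared probability graph (breakpoints and slopes are assumptions).
E_comp = np.piecewise(rho,
                      [rho >= 0.8, (rho < 0.8) & (rho >= 0.5), rho < 0.5],
                      [lambda r: 0.1,
                       lambda r: 0.1 + 1.5 * (0.8 - r),
                       lambda r: 0.55 + 4.0 * (0.5 - r)])
E_total = E_comm + E_comp
print(f"energy-optimal compression ratio ~ {rho[np.argmin(E_total)]:.2f}")
```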
We consider scalable cell-free massive multiple-input multiple-output networks under an open radio access network paradigm comprising user equipments (UEs), radio units (RUs), and decentralized processing units (DUs). UEs are served by dynamically allocated user-centric clusters of RUs. The corresponding cluster processors (implementing the physical layer for each user) are hosted by the DUs as software-defined virtual network functions. Unlike the current literature, which mainly focuses on characterizing user rates under unrestricted fronthaul communication and computation, in this work we explicitly take into account the fronthaul topology, the limited fronthaul communication capacity, and the computation constraints at the DUs. In particular, we systematically address the new problem of joint fronthaul load balancing and allocation of the computation resources. As a consequence of our new optimization framework, we present representative numerical results highlighting the existence of an optimal number of quantization bits in the analog-to-digital conversion at the RUs.
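A back-of-the-envelope model (entirely assumed, not the paper's optimization) hints at why such an optimum exists: finer quantization cleans up each forwarded sample, but a capacity-limited fronthaul can then carry observations from fewer RUs, shrinking the combining gain at the DU.

```python
# Toy trade-off: ADC resolution vs. number of RU streams the fronthaul carries.
import numpy as np

snr = 10 ** (20 / 10)         # per-RU access SNR, 20 dB (assumed)
C_fh = 10e9                   # total fronthaul capacity, bits/s (assumed)
fs = 100e6                    # complex samples/s per RU stream (assumed)
bits = np.arange(1, 17)       # ADC resolution per real sample
delta = 2.0 ** (-2 * bits)                      # additive quantization-noise model
sinr_q = snr / (1 + (1 + snr) * delta)          # per-stream SINR after quantization
n_rus = C_fh / (2 * bits * fs)                  # streams the fronthaul can carry (I/Q)
rate = np.log2(1 + n_rus * sinr_q)              # MRC-style combining across RUs
print(f"rate peaks at {bits[np.argmax(rate)]} bits/sample")  # interior optimum
```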
Sensing is anticipated to have wider extensions in communication systems with the boom of non-terrestrial networks (NTNs) in recent years. In this paper, we study a bistatic sensing system in the space-air-ground integrated network (SAGIN) by maximizing the signal-to-interference-plus-noise ratio (SINR) of the signal received from the target aircraft. We formulate a joint optimization problem over the transmit beamforming of a low Earth orbit (LEO) satellite and the receive filtering of a ground base station. To tackle this problem, we decompose the original problem into two sub-problems and use alternating optimization to solve them iteratively. Using techniques of fractional programming and the generalized Rayleigh quotient, a closed-form solution for each sub-problem is obtained. Simulation results show that the proposed algorithm has good convergence performance. Moreover, the optimization of the receive filtering contributes most to the optimality, especially as the satellite altitude increases, which provides valuable network design insights.
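A hedged sketch of this alternating structure follows; the signal model is a randomly generated stand-in, not the paper's SAGIN geometry. For a rank-one target return, each sub-problem is a generalized Rayleigh quotient with a well-known closed-form maximizer.

```python
# Alternating optimization of transmit beamformer f and receive filter w.
import numpy as np

rng = np.random.default_rng(1)
Nt, Nr = 8, 8
H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
# Interference-plus-noise covariance at the ground station (assumed known).
A = rng.normal(size=(Nr, Nr)) + 1j * rng.normal(size=(Nr, Nr))
R_in = A @ A.conj().T + np.eye(Nr)

def sinr(f, w):
    s = w.conj() @ H @ f
    return float(abs(s) ** 2 / (w.conj() @ R_in @ w).real)

f = np.ones(Nt, dtype=complex) / np.sqrt(Nt)          # unit-power beamformer
for _ in range(20):
    # Receive filter: Rayleigh-quotient maximizer w = R_in^{-1} H f (MVDR-type).
    w = np.linalg.solve(R_in, H @ f)
    w /= np.linalg.norm(w)
    # Transmit beamformer: matched to the effective channel H^H w.
    f = H.conj().T @ w
    f /= np.linalg.norm(f)
print(f"SINR after alternating optimization: {10 * np.log10(sinr(f, w)):.2f} dB")
```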
Zero-knowledge proofs (ZKPs) have emerged as a promising solution to address the scalability challenges in modern blockchain systems. This study proposes a methodology for generating and verifying ZKPs to ensure the computational integrity of cryptographic hashing, specifically focusing on the SHA-256 algorithm. By leveraging the Plonky2 framework, which implements the PLONK protocol with the FRI commitment scheme, we demonstrate the efficiency and scalability of our approach for both random data and real data blocks from the NEAR blockchain. The experimental results show consistent performance across different data sizes and types, with the time required for proof generation and verification remaining within acceptable limits. The generated circuits and proofs maintain manageable sizes, even for real-world data blocks with a large number of transactions. The proposed methodology contributes to the development of secure and trustworthy blockchain systems, where the integrity of computations can be verified without revealing the underlying data. Further research is needed to assess the applicability of the approach to other cryptographic primitives and to evaluate its performance in more complex real-world scenarios.
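To fix ideas, the snippet below pins down the plain relation that such a circuit arithmetizes; it is not a ZKP itself, and the block size is an arbitrary stand-in. In the described pipeline, Plonky2 would produce a succinct proof of this check, with verification cost growing only polylogarithmically in the circuit size.

```python
# The statement proven in zero knowledge: the prover knows a witness (the data
# block) whose SHA-256 digest equals a public commitment, without the verifier
# re-reading the block. This sketch only states the relation, not the proof.
import hashlib
import os

block = os.urandom(1024)                 # stand-in for a NEAR data block (witness)
digest = hashlib.sha256(block).digest()  # public input committed on-chain

def relation(witness: bytes, public_digest: bytes) -> bool:
    """Computational-integrity relation: SHA256(witness) == public_digest."""
    return hashlib.sha256(witness).digest() == public_digest

assert relation(block, digest)
```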
This paper introduces a relay-assisted solution for downlink communications in a mixed system of Radio Frequency (RF) and Underwater Optical Wireless Communications (UOWC) technologies. During the initial downlink phase, data transmission occurs via an RF link between a hovering Unmanned Aerial Vehicle (UAV) and floating buoys at the water surface. The fixed buoys act as amplify-and-forward relays, so the second hop is a UOWC downlink from a floating buoy to the underwater device. Best relay selection is adopted, meaning that only the buoy with the best estimated RF UAV-buoy channel transmits the signal to the underwater device. An analytical expression for the outage probability is derived and utilized to examine the system's performance for various UOWC and RF channel conditions.
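A Monte Carlo sketch of this outage setup follows, useful as a cross-check of such analytical expressions; the channel models (Rayleigh fading on the RF hop, lognormal turbulence on the UOWC hop) and all parameter values are illustrative assumptions, not the paper's.

```python
# Monte Carlo outage estimate for best-relay-selection AF relaying.
import numpy as np

rng = np.random.default_rng(7)
N_buoys, trials = 4, 200_000
g1_mean, g2_mean = 10.0, 10.0        # average per-hop SNRs, linear (assumed)
gamma_th = 2.0                       # outage SNR threshold (assumed)

g1 = g1_mean * rng.exponential(size=(trials, N_buoys))   # Rayleigh -> exp. power
best = g1.max(axis=1)                # best relay selection on the RF hop
sigma = 0.3                          # lognormal scintillation parameter (assumed)
g2 = g2_mean * rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=trials)
g_eq = best * g2 / (best + g2 + 1)   # standard AF end-to-end SNR approximation
print(f"outage probability ~ {np.mean(g_eq < gamma_th):.4f}")
```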
The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, which must often stay local due to edge-computation requirements and licensing (data access) restrictions. While federated learning (FL) has emerged as a framework for model training without requiring direct data sharing and exchange, effectively modeling complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model -- Cross-Node Federated Graph Neural Network (CNFGNN) -- which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes be generated locally on each node and remain decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on the edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring a modest communication cost.
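A structural sketch of this device/server split is given below; the dimensions, the mean-pooling stand-in for the server GNN, the toy line graph, and the absence of the actual alternating-training loop are all simplifications of the real model.

```python
# Sketch of the cross-node split: temporal modeling on-device, spatial on-server.
import torch
import torch.nn as nn

class NodeEncoder(nn.Module):
    """On-device temporal model: raw sensor history never leaves the node."""
    def __init__(self, n_feat=1, hidden=16):
        super().__init__()
        self.gru = nn.GRU(n_feat, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)    # combines local + server state

    def summarize(self, x):                     # x: (1, T, n_feat)
        _, h = self.gru(x)
        return h[-1]                            # (1, hidden) summary sent to server

    def forecast(self, x, server_state):
        return self.head(torch.cat([self.summarize(x), server_state], dim=-1))

def server_gnn(summaries, adj):
    """Server-side spatial step: one neighborhood-averaging round over the
    sensor graph (a stand-in for the paper's GNN)."""
    deg = adj.sum(dim=1, keepdim=True).clamp_min(1)
    return adj @ summaries / deg

# Three nodes on a line graph, each with 12 steps of local history.
adj = torch.tensor([[0., 1, 0], [1, 0, 1], [0, 1, 0]])
nodes = [NodeEncoder() for _ in range(3)]
hist = [torch.randn(1, 12, 1) for _ in range(3)]
summaries = torch.cat([m.summarize(x) for m, x in zip(nodes, hist)])  # (3, hidden)
spatial = server_gnn(summaries, adj)                                  # (3, hidden)
preds = [m.forecast(x, spatial[i:i + 1]) for i, (m, x) in enumerate(zip(nodes, hist))]
print([p.item() for p in preds])
```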
Data transmission between two or more digital devices in industry and government demands secure and agile technology. Digital information distribution often requires the deployment of Internet of Things (IoT) devices and data fusion techniques, which have gained popularity in both civilian and military environments, as seen in the emergence of Smart Cities and the Internet of Battlefield Things (IoBT). This usually requires capturing and consolidating data from multiple sources. Because the datasets do not necessarily originate from identical sensors, the fused data typically results in a complex Big Data problem. Due to the potentially sensitive nature of IoT datasets, Blockchain technology is used to facilitate their secure sharing, allowing digital information to be distributed but not copied. However, blockchain has several limitations related to complexity, scalability, and excessive energy consumption. We propose an approach to hide information (the sensor signal) by transforming it into an image or an audio signal. Motivated by recent military modernization efforts, we investigate a sensor fusion approach, examine the challenges of enabling intelligent identification and detection operations, and demonstrate the feasibility of the proposed Deep Learning and Anomaly Detection models, which can support future applications such as a hand-gesture alert system based on wearable devices.
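One plausible reading of the signal-to-image transformation step is a spectrogram encoding, sketched below; the synthetic gesture signal, sampling rate, window length, and overlap are assumptions rather than the paper's pipeline.

```python
# Minimal sketch: re-encode a 1-D wearable sensor stream as a grayscale image.
import numpy as np
from scipy.signal import spectrogram

fs = 100.0                                   # IMU sampling rate, Hz (assumed)
t = np.arange(0, 4, 1 / fs)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

f, seg_t, Sxx = spectrogram(signal, fs=fs, nperseg=64, noverlap=48)
image = (255 * (Sxx - Sxx.min()) / (Sxx.max() - Sxx.min() + 1e-12)).astype(np.uint8)
print(image.shape)   # freq-by-time "image" fed to a CNN / anomaly detector
```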