Multiple access technology is a key technology in various generations of wireless communication systems. As a potential multiple access technology for next-generation wireless communication systems, model division multiple access (MDMA) improves spectrum efficiency and enlarges the feasibility region, implying that the MDMA scheme can achieve greater performance gains than traditional schemes. Relay-assisted cooperative networks, as an infrastructure of wireless communication, can effectively utilize resources and improve performance when MDMA is applied. In this paper, a cooperative relay network based on MDMA over dissimilar Rayleigh fading channels is proposed, which consists of two source nodes, an arbitrary number of decode-and-forward (DF) relay nodes, and one destination node, and which employs maximal ratio combining (MRC) at the destination to combine the signals received from the sources and relays. By applying the state transition matrix (STM) and the moment generating function (MGF), closed-form analytical expressions for the outage probability and resource utilization efficiency are derived. Simulation results are provided to verify the validity of the theoretical analysis.
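As background for the MRC step above: under Rayleigh fading each branch SNR is exponentially distributed, and the MRC output SNR is the sum of the branch SNRs. A minimal sketch, assuming i.i.d. branches (the paper's dissimilar-channel case is more general) and illustrative values for the branch count, average SNR, and outage threshold, checks a Monte Carlo outage estimate against the Erlang/Gamma CDF:

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
N = 3       # diversity branches (source + relays), illustrative
gbar = 5.0  # average per-branch SNR (linear), illustrative
gth = 2.0   # outage SNR threshold, illustrative

# Monte Carlo: Rayleigh fading -> exponential branch SNRs;
# MRC output SNR is the sum of the branch SNRs.
snr = rng.exponential(gbar, size=(1_000_000, N)).sum(axis=1)
p_mc = np.mean(snr < gth)

# Closed form for i.i.d. branches (Erlang/Gamma CDF):
p_cf = 1 - exp(-gth / gbar) * sum((gth / gbar) ** k / factorial(k)
                                  for k in range(N))
```

The two estimates agree closely; with non-identical average branch SNRs the closed form instead involves a partial-fraction expansion, which is where tools like the MGF become useful.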
Joint channel estimation and signal detection (JCESD) in wireless communication systems is a crucial and challenging task, especially since it inherently poses a nonlinear inverse problem. This challenge is especially pronounced in low signal-to-noise ratio (SNR) scenarios, where traditional algorithms often perform poorly. Deep learning (DL) methods have been investigated, but concerns regarding computational expense and lack of validation in low-SNR settings remain. Hence, the development of a robust and low-complexity model that can deliver excellent performance across a wide range of SNRs is highly desirable. In this paper, we aim to establish a benchmark where traditional algorithms and DL methods are validated across different channel models, Doppler shifts, and SNR settings. In particular, we propose a new DL model whose backbone network is formed by unrolling the iterative algorithm, with the hyperparameters estimated by hypernetworks. Additionally, we adapt a lightweight DenseNet to the task of JCESD for comparison. We evaluate different methods in three aspects: generalization in terms of bit error rate (BER), robustness, and complexity. Our results indicate that DL approaches outperform traditional algorithms in the challenging low-SNR setting, while the iterative algorithm performs better in high-SNR settings. Furthermore, the iterative algorithm is more robust in the presence of carrier frequency offset, whereas DL methods excel when signals are corrupted by asymmetric Gaussian noise.
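The BER benchmarking described above needs a well-understood baseline to validate against. A minimal sketch (not the paper's JCESD pipeline) simulating Gray-mapped QPSK over AWGN with perfect synchronization, with an illustrative Eb/N0 of 4 dB, and comparing against the standard closed-form BER:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
ebn0_db = 4.0                        # illustrative operating point
ebn0 = 10 ** (ebn0_db / 10)

nbits = 2_000_000
bits = rng.integers(0, 2, nbits)

# Gray-mapped QPSK with unit symbol energy (2 bits/symbol -> Eb = 1/2)
sym = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / sqrt(2)

n0 = 0.5 / ebn0                      # so that Eb/N0 equals ebn0
noise = sqrt(n0 / 2) * (rng.standard_normal(sym.size)
                        + 1j * rng.standard_normal(sym.size))
r = sym + noise

# Per-quadrature hard decisions (Gray mapping decouples I and Q)
hat = np.empty(nbits, dtype=int)
hat[0::2] = (r.real < 0).astype(int)
hat[1::2] = (r.imag < 0).astype(int)

ber_sim = np.mean(hat != bits)
ber_th = 0.5 * erfc(sqrt(ebn0))      # closed-form QPSK/Gray BER over AWGN
```

Any JCESD method's BER curve can then be read as the gap to this perfect-CSI baseline at each SNR.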
Movable antenna (MA) has emerged as a promising technology to enhance wireless communication performance by enabling the local movement of antennas at the transmitter (Tx) and/or receiver (Rx) for achieving more favorable channel conditions. As the existing studies on MA-aided wireless communications have mainly considered narrow-band transmission in flat fading channels, we investigate in this paper the MA-aided wideband communications employing orthogonal frequency division multiplexing (OFDM) in frequency-selective fading channels. Under the general multi-tap field-response channel model, the wireless channel variations in both space and frequency are characterized with different positions of the MAs. Unlike the narrow-band transmission where the optimal MA position at the Tx/Rx simply maximizes the single-tap channel amplitude, the MA position in the wideband case needs to balance the amplitudes and phases over multiple channel taps in order to maximize the OFDM transmission rate over multiple frequency subcarriers. First, we derive an upper bound on the OFDM achievable rate in closed form when the size of the Tx/Rx region for antenna movement is arbitrarily large. Next, we develop a parallel greedy ascent (PGA) algorithm to obtain locally optimal solutions to the MAs' positions for OFDM rate maximization subject to finite-size Tx/Rx regions. To reduce computational complexity, a simplified PGA algorithm is also provided to optimize the MAs' positions more efficiently. Simulation results demonstrate that the proposed PGA algorithms can approach the OFDM rate upper bound closely with the increase of Tx/Rx region sizes and outperform conventional systems with fixed-position antennas (FPAs) under the wideband channel setup.
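The rate objective described above can be made concrete: for a given multi-tap impulse response, the per-subcarrier channel is the DFT of the taps, and the OFDM rate averages the log-capacities over subcarriers. A minimal sketch of this objective (the PGA optimizer itself is not reproduced here; `nfft` and the SNR are illustrative):

```python
import numpy as np

def ofdm_rate(taps, snr, nfft=64):
    """Average achievable rate (bit/s/Hz) over OFDM subcarriers for a
    multi-tap channel impulse response `taps` at transmit SNR `snr`."""
    H = np.fft.fft(taps, nfft)            # per-subcarrier frequency response
    return float(np.mean(np.log2(1 + snr * np.abs(H) ** 2)))
```

A single unit-gain tap gives a flat channel and rate log2(1 + SNR), while spreading the same energy over several taps makes |H[k]| frequency-selective and, by Jensen's inequality, can only reduce the equal-power rate. Moving an antenna changes the tap gains and phases, which is why the wideband MA placement must balance amplitudes and phases across taps rather than maximize a single-tap amplitude.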
Physical-layer security (PLS) is a promising technique to complement communication security in beyond-5G wireless networks. However, PLS developments in current research are often based on the ideal assumption of infinite coding blocklengths or perfect knowledge of the wiretap link's channel state information (CSI). In this work, we study the performance of finite blocklength (FBL) transmissions using a new secrecy metric - the average information leakage (AIL). We evaluate the exact and approximate AIL with arbitrary signaling and fading channels, assuming that the eavesdropper's instantaneous CSI is unknown. We then conduct case studies that use artificial noise (AN) beamforming to thoroughly analyze the AIL in both Rayleigh and Rician fading channels. The accuracy of the analytical expressions is verified through extensive simulations, and various insights regarding the impact of key system parameters on the AIL are obtained. Particularly, our results reveal that allowing a small level of AIL can potentially lead to significant reliability improvements. To improve the system performance, we formulate and solve an average secrecy throughput (AST) optimization problem via both non-adaptive and adaptive design strategies. Our findings highlight the significance of blocklength design and AN power allocation, as well as the impact of their trade-off on the AST.
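The finite-blocklength setting above rests on the normal approximation to the maximal coding rate, which quantifies how much rate is sacrificed at short blocklengths. A sketch of that standard approximation (Polyanskiy et al.) for an AWGN link, not the paper's AIL metric itself:

```python
from math import e, log2, sqrt
from statistics import NormalDist

def fbl_rate(snr, n, eps):
    """Normal approximation to the maximal coding rate (bits/channel use)
    at blocklength n and block error probability eps over AWGN."""
    cap = log2(1 + snr)
    disp = 1 - 1 / (1 + snr) ** 2          # AWGN channel dispersion
    qinv = NormalDist().inv_cdf(1 - eps)   # Q^{-1}(eps)
    return cap - sqrt(disp / n) * qinv * log2(e)
```

The rate penalty shrinks as 1/sqrt(n), which is the trade-off behind the blocklength design discussed in the abstract: longer blocks recover capacity but cost latency, and in the secrecy setting they also change how much information can leak.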
Mobile edge computing offers a myriad of opportunities to innovate and introduce novel applications, thereby enhancing user experiences considerably. A critical issue extensively investigated in this domain is the efficient deployment of Service Function Chains (SFCs) across the physical network, spanning from the edge to the cloud. This problem is known to be NP-hard. As a result of its practical importance, there is significant interest in the development of high-quality sub-optimal solutions. In this paper, we consider this problem and propose a novel near-optimal heuristic that is extremely efficient and scalable. We compare our solution to the state-of-the-art heuristic and to the theoretical optimum. In our large-scale evaluations, we use realistic topologies which were previously reported in the literature. We demonstrate that the execution time of our solution grows slowly as the number of Virtual Network Function (VNF) forwarding graph embedding requests grows, and that it handles one million requests in slightly more than 20 seconds on a physical topology with 100 nodes and 150 edges.
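To make the SFC embedding problem concrete, the following is a minimal greedy baseline (not the paper's heuristic): each VNF of a chain is placed on the capacity-feasible node closest in hops to the previous placement. The topology, CPU capacities, and demands are hypothetical.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src over an undirected adjacency dict."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def greedy_sfc_embed(adj, cpu, chain, ingress):
    """Place each VNF of `chain` (list of CPU demands) on the feasible
    node nearest (in hops) to the previous placement; None on rejection."""
    cpu = dict(cpu)                       # don't mutate the caller's state
    placement, here = [], ingress
    for demand in chain:
        dist = bfs_dist(adj, here)
        cands = [n for n in dist if cpu[n] >= demand]
        if not cands:
            return None                   # no feasible node: reject request
        best = min(cands, key=lambda n: (dist[n], n))
        cpu[best] -= demand
        placement.append(best)
        here = best
    return placement
```

On a 4-node path topology `e - a - b - c` with capacities `{e: 1, a: 2, b: 4, c: 4}`, the chain `[2, 3, 1]` entering at `e` embeds as `['a', 'b', 'b']`. Real heuristics must additionally track link bandwidth and latency along the forwarding graph, which this sketch omits.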
Decentralized applications (DApps), which are innovative blockchain-powered software systems designed to serve as the fundamental building blocks for the next generation of Internet services, have witnessed exponential growth in recent years. This paper thoroughly compares and analyzes two blockchain-based decentralized storage networks (DSNs), which are crucial foundations for DApp and blockchain ecosystems. The study examines their respective mechanisms for data persistence, strategies for enforcing data retention, and token economics. In addition to delving into technical details, the suitability of each storage solution for decentralized application development is assessed, taking into consideration network performance, storage costs, and existing use cases. By evaluating these factors, the paper aims to provide insights into the effectiveness of these technologies in supporting the desirable properties of truly decentralized blockchain applications. In conclusion, the findings of this research are discussed and synthesized, offering valuable perspectives on the capabilities of these technologies. It sheds light on their potential to facilitate the development of DApps and provides an understanding of the ongoing trends in blockchain development.
Molecular communication is a technique that nature has used for millions of years and that researchers now seek to emulate. In Molecular Communication via Diffusion (MCvD), messenger molecules are emitted by a transmitter and propagate through the fluidic environment in a random manner. In biological systems, the environment can be considered a bounded space, surrounded by different structures, such as tissues and organs. The propagation of molecules is affected by these structures, which reflect the molecules upon collision. Hence, understanding the behavior of MCvD systems near reflecting surfaces is important for modeling molecular communication systems analytically. However, deriving the channel response of an MCvD system with an absorbing spherical receiver requires solving the diffusion equation in 3-D space in the presence of a reflecting boundary, which is extremely challenging. Therefore, derivation of the channel response in a bounded environment has remained one of the unanswered questions in the literature. In this paper, a method to model molecular communication systems near reflecting surfaces is proposed, and an analytical closed-form solution for the channel response is derived.
We consider a wireless communication system with a passive eavesdropper, in which a transmitter and legitimate receiver generate and use key bits to secure the transmission of their data. These bits are added to and used from a pool of available key bits. In this work, we analyze the reliability of the system in terms of the probability that the budget of available key bits will be exhausted. In addition, we investigate the latency before a transmission can take place. Since security, reliability, and latency are three important metrics for modern communication systems, it is of great interest to jointly analyze them in relation to the system parameters. In particular, we show under what conditions the system may remain in an active state indefinitely, i.e., never run out of available secret-key bits. The results presented in this work will allow system designers to adjust the system parameters in such a way that the requirements of the application in terms of both reliability and latency are met.
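The key-pool dynamics above can be pictured as a random walk: each slot may add freshly generated key bits and a transmission consumes some. A Monte Carlo sketch of the exhaustion probability, with hypothetical generation/consumption parameters (the paper derives such quantities analytically):

```python
import numpy as np

rng = np.random.default_rng(2)

def p_exhaustion(b0, p_gen, gen, use, steps=200, trials=10_000):
    """Monte Carlo estimate of the probability that a secret-key pool
    starting at b0 bits is exhausted within `steps` slots, when each slot
    adds `gen` bits with probability p_gen and a transmission consumes
    `use` bits."""
    g = rng.random((trials, steps)) < p_gen
    inc = np.where(g, gen, 0) - use          # net change per slot
    level = b0 + np.cumsum(inc, axis=1)      # pool trajectory per trial
    return np.mean((level < 0).any(axis=1))  # fraction of ruined trials
```

When the drift `p_gen * gen - use` is positive, the pool can stay nonempty indefinitely with nonzero probability, which matches the abstract's observation that the system may remain active forever under suitable parameters; with negative drift, exhaustion is essentially certain.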
The effectiveness of recommendation systems is pivotal to user engagement and satisfaction in online platforms. As these recommendation systems increasingly influence user choices, their evaluation transcends mere technical performance and becomes central to business success. This paper addresses the multifaceted nature of recommendation system evaluation by introducing a comprehensive suite of metrics, each tailored to capture a distinct aspect of system performance. We discuss:
* Similarity Metrics: to quantify the precision of content-based filtering mechanisms and assess the accuracy of collaborative filtering techniques.
* Candidate Generation Metrics: to evaluate how effectively the system identifies a broad yet relevant range of items.
* Predictive Metrics: to assess the accuracy of forecasted user preferences.
* Ranking Metrics: to evaluate the effectiveness of the order in which recommendations are presented.
* Business Metrics: to align the performance of the recommendation system with economic objectives.
Our approach emphasizes the contextual application of these metrics and their interdependencies. In this paper, we identify the strengths and limitations of current evaluation practices and highlight the nuanced trade-offs that emerge when optimizing recommendation systems across different metrics. The paper concludes by proposing a framework for selecting and interpreting these metrics to not only improve system performance but also advance business goals. This work aims to aid researchers and practitioners in critically assessing recommendation systems and to foster the development of more nuanced, effective, and economically viable personalization strategies. Our code is available at https://github.com/aryan-jadon/Evaluation-Metrics-for-Recommendation-Systems.
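Two of the ranking metrics in the list above have compact standard definitions. A minimal sketch of precision@k and NDCG@k over a ranked list of (graded) relevance scores:

```python
import numpy as np

def precision_at_k(ranked_rels, k):
    """Fraction of relevant items (relevance > 0) among the top-k."""
    return float(np.mean(np.asarray(ranked_rels[:k]) > 0))

def ndcg_at_k(ranked_rels, k):
    """Normalized discounted cumulative gain of a ranked list of graded
    relevance scores: DCG of the list divided by DCG of the ideal order."""
    rels = np.asarray(ranked_rels[:k], dtype=float)
    discounts = 1 / np.log2(np.arange(2, rels.size + 2))
    dcg = np.sum(rels * discounts)
    ideal = np.sort(np.asarray(ranked_rels, dtype=float))[::-1][:k]
    idcg = np.sum(ideal * discounts[:ideal.size])
    return float(dcg / idcg) if idcg > 0 else 0.0
```

NDCG equals 1.0 exactly when items are presented in ideal relevance order and penalizes misplacing highly relevant items near the bottom, which is why it complements set-based metrics like precision@k.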
Artificial intelligence (AI) has emerged as a powerful tool for addressing complex and dynamic tasks in radio communication systems. Research in this area, however, has focused on AI solutions for specific, limited conditions, hindering models from learning and adapting to generic situations, such as those met across radio communication systems. This paper emphasizes the pivotal role of achieving model generalization in enhancing performance and enabling scalable AI integration within radio communications. We outline design principles for model generalization in three key domains: environment for robustness, intents for adaptability to system objectives, and control tasks for reducing the number of AI-driven control loops. Implementing these principles can decrease the number of models deployed and increase adaptability in diverse radio communication environments. To address the challenges of model generalization in communication systems, we propose a learning architecture that leverages centralized training and data management functionalities, combined with distributed data generation. We illustrate these concepts by designing a generalized link adaptation algorithm, demonstrating the benefits of our proposed approach.
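As a point of reference for the link adaptation use case above, the conventional (non-AI) approach is a threshold lookup: pick the highest modulation-and-coding scheme whose SNR requirement is met, optionally after an outer-loop backoff margin. A sketch with a purely hypothetical MCS table:

```python
import bisect

# Hypothetical MCS table: (SNR threshold in dB, spectral efficiency in bit/s/Hz)
MCS_TABLE = [(-5.0, 0.15), (0.0, 0.6), (5.0, 1.5),
             (10.0, 3.0), (15.0, 4.5), (20.0, 5.5)]

def select_mcs(snr_db, backoff_db=0.0):
    """Pick the highest MCS whose SNR threshold is met, after applying an
    outer-loop backoff margin; falls back to the most robust MCS."""
    eff = snr_db - backoff_db
    thresholds = [t for t, _ in MCS_TABLE]
    i = bisect.bisect_right(thresholds, eff) - 1
    return MCS_TABLE[max(i, 0)]
```

A generalized, learned link adaptation policy would replace this fixed table and margin with a model that adapts across environments and system intents, which is the kind of control loop the paper's design principles target.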
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
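The first category above, quantization and pruning, has simple canonical forms: magnitude pruning zeroes the smallest-magnitude fraction of weights, and symmetric uniform quantization snaps weights to a small grid. A minimal sketch on a raw weight array (real systems prune per layer and fine-tune afterwards):

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction of weights.
    (Ties at the threshold may prune slightly more than requested.)"""
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def quantize_uniform(w, bits=8):
    """Symmetric uniform quantization to `bits` bits; the max round-off
    error per weight is half the quantization step."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return w.copy()
    return np.round(w / scale) * scale
```

Pruning cuts parameter count (and, with sparse kernels, operations), while quantization cuts memory and enables integer arithmetic; the survey's remaining categories (factorized filters, architecture search, distillation) instead change the model's structure.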