In mobile networks, the Open Radio Access Network (ORAN) provides a framework for implementing network slicing that interacts with resources at the lower layers. Both monitoring and Radio Access Network (RAN) control are feasible for 4G and 5G systems. In this work, we consider how data-driven resource allocation in a 4G context can enable adaptive slice allocation that steers the experienced latency of Virtual Reality (VR) traffic towards a requested value. We develop an xApp for the near-real-time RAN Intelligent Controller (RIC) that embeds a heuristic algorithm for latency control, aiming to: (1) keep the latency of a VR stream around a requested value; and (2) improve the RAN allocation available to another user so that it can be offered a higher bit rate. We experimentally demonstrate the proposed approach in an ORAN testbed. Our results show that the data-driven approach can dynamically follow variations of the traffic load while satisfying the required latency, yielding 15.8% more resources for secondary users than a latency-equivalent static allocation.
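The abstract does not spell out the heuristic, but a minimal sketch of the kind of latency-steering loop it describes could look like the following; the function names, step sizes, and bounds are illustrative assumptions rather than the paper's actual algorithm.

# Hypothetical sketch of a latency-steering heuristic for an xApp: the VR
# slice's PRB share is nudged up or down depending on whether measured latency
# exceeds the target, and whatever remains is left to the secondary user.

def adjust_prb_share(measured_latency_ms: float,
                     target_latency_ms: float,
                     current_share: float,
                     step: float = 0.05,
                     min_share: float = 0.1,
                     max_share: float = 0.9) -> float:
    """Return the new fraction of PRBs assigned to the VR slice."""
    if measured_latency_ms > target_latency_ms:
        # Latency too high: give the VR slice more resources.
        return min(max_share, current_share + step)
    # Latency at or below target: release resources to the secondary slice.
    return max(min_share, current_share - step)


# Example control-loop iteration with made-up measurements.
vr_share = adjust_prb_share(measured_latency_ms=22.0,
                            target_latency_ms=20.0,
                            current_share=0.5)
secondary_share = 1.0 - vr_share
print(vr_share, secondary_share)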
The fifth generation of telecommunication networks (5G) established the service-oriented paradigm in mobile networks. In this new context, the 5G Core has become extremely flexible: in addition to serving mobile networks, it can also connect devices from so-called non-3GPP networks, which include technologies such as WiFi. Implementing this connectivity requires specific protocols to ensure authentication and reliability. Given these characteristics and the possibility of convergence, the encryption algorithms and authentication methods used by non-3GPP user equipment must be chosen carefully. In light of the above, this paper highlights key findings from an analysis of the subject conducted in a test environment that could be used in the context of the Eduroam federation.
Internet of Things (IoT)-based wireless sensor networks (WSNs) face an energy-shortage challenge that could be overcome by the novel wireless power transfer (WPT) technology. The combination of WSNs and WPT is known as wireless rechargeable sensor networks (WRSNs), in which charging efficiency and charging scheduling are the primary concerns. This paper therefore proposes a probabilistic on-demand charging scheduling scheme for integrated sensing and communication (ISAC)-assisted WRSNs with multiple mobile charging vehicles (MCVs) that addresses three aspects. First, it considers four attributes, together with their probability distributions, to balance the charging load on each MCV: the residual energy of the charging node, the distance from the MCV to the charging node, the degree of the charging node, and the charging node's betweenness centrality. Second, it applies an efficient charging-factor strategy to partially charge network nodes. Finally, it employs the ISAC concept to use the wireless resources efficiently, reducing the traveling cost of each MCV and avoiding charging conflicts between them. Simulation results show that the proposed protocol outperforms cutting-edge protocols in terms of energy-usage efficiency, charging delay, survival rate, and travel distance.
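As an illustration of how the four attributes could be combined into a probabilistic charging decision, the sketch below normalizes them into a single priority score and turns the scores into selection probabilities; the weights, the normalization, and the softmax step are assumptions, not the paper's exact scheme.

# Illustrative charging-priority score built from the four attributes named in
# the abstract: residual energy, distance to the MCV, node degree, and
# betweenness centrality. Weights and normalization constants are made up.

import math

def charging_priority(residual_energy, distance_to_mcv, degree, betweenness,
                      e_max, d_max, deg_max, bc_max,
                      weights=(0.4, 0.3, 0.15, 0.15)):
    """Higher score -> node is served earlier / assigned to a less loaded MCV."""
    w_e, w_d, w_g, w_b = weights
    # Nodes with less residual energy and shorter travel distance are more
    # urgent; well-connected nodes (high degree / betweenness) are prioritized
    # to keep sensing and communication coverage alive.
    return (w_e * (1.0 - residual_energy / e_max)
            + w_d * (1.0 - distance_to_mcv / d_max)
            + w_g * (degree / deg_max)
            + w_b * (betweenness / bc_max))

def selection_probabilities(scores):
    """Softmax over candidate-node scores, usable to spread load across MCVs."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]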
This paper presents a convolutional neural network model for precipitation nowcasting that combines data-driven learning with physics-informed domain knowledge. We propose LUPIN, a Lagrangian Double U-Net for Physics-Informed Nowcasting, which draws on existing extrapolation-based nowcasting methods and implements the Lagrangian coordinate transformation of the data in a fully differentiable, GPU-accelerated manner to allow real-time end-to-end training and inference. In our evaluation, LUPIN matches or exceeds the performance of the chosen benchmark, opening the door for other Lagrangian machine learning models.
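The core idea of a differentiable Lagrangian transform can be illustrated with a backward warp of a radar field along a motion field, which keeps the operation trainable end-to-end on GPU; the sketch below is an assumption-level illustration, not the authors' implementation, and the shapes and motion field are invented.

# Minimal differentiable semi-Lagrangian advection step using a bilinear warp.

import torch
import torch.nn.functional as F

def lagrangian_warp(field: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
    """
    field:  (N, C, H, W) precipitation frames
    motion: (N, 2, H, W) displacement in pixels (dx, dy)
    Returns the field advected along the motion field.
    """
    n, _, h, w = field.shape
    # Base sampling grid in normalized [-1, 1] coordinates, (x, y) order.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel displacements to normalized offsets and sample backwards.
    offset = torch.stack((motion[:, 0] * 2 / (w - 1),
                          motion[:, 1] * 2 / (h - 1)), dim=-1)
    return F.grid_sample(field, base - offset, mode="bilinear",
                         padding_mode="border", align_corners=True)

# Example: warp a random 64x64 frame by a uniform 2-pixel shift.
frame = torch.rand(1, 1, 64, 64)
motion = torch.full((1, 2, 64, 64), 2.0)
advected = lagrangian_warp(frame, motion)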
Sixth-generation (6G) wireless communication systems, as stated in the European 6G flagship project Hexa-X, are anticipated to integrate intelligence, communication, sensing, positioning, and computation. An important aspect of this integration is integrated sensing and communication (ISAC), in which the same waveform is used for both sensing and communication to address the challenge of spectrum scarcity. Recently, the orthogonal time frequency space (OTFS) waveform has been proposed to address the limitations of OFDM under the high Doppler spreads expected in some future wireless communication systems. In this paper, we review existing OTFS waveforms for ISAC systems and provide some insights into future research. First, we introduce the basic principles and a system model of OTFS, providing a foundational understanding of this technology's core concepts and architecture. We then present an overview of OTFS-based ISAC system frameworks and give a comprehensive review of recent research developments and the state of the art in OTFS-assisted ISAC systems. Furthermore, we compare OTFS-enabled ISAC operation with traditional OFDM, highlighting the distinctive advantages of OTFS, especially in high-Doppler-spread scenarios. We then address the primary challenges facing OTFS-based ISAC systems, identifying potential limitations and drawbacks, and finally suggest future research directions, aiming to inspire further innovation in the 6G wireless communication landscape.
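For readers unfamiliar with OTFS, the modulation step that distinguishes it from OFDM can be illustrated with the inverse symplectic finite Fourier transform (ISFFT), which maps delay-Doppler symbols onto the time-frequency grid; the grid sizes and constellation below are illustrative only.

# ISFFT: X[n, m] = 1/sqrt(NM) * sum_{k,l} x[k, l] * exp(j*2*pi*(n*k/N - m*l/M))

import numpy as np

def isfft(x_dd: np.ndarray) -> np.ndarray:
    """
    x_dd: (N, M) delay-Doppler symbols, N Doppler bins x M delay bins.
    Returns the (N, M) time-frequency grid.
    """
    n_dopp, m_delay = x_dd.shape
    # IDFT along the Doppler axis, DFT along the delay axis, unitary scaling.
    return np.fft.ifft(np.fft.fft(x_dd, axis=1), axis=0) * np.sqrt(n_dopp / m_delay)

# Example: map a 16 x 64 QPSK delay-Doppler frame onto the time-frequency grid.
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], (16, 64)) + 1j * rng.choice([-1, 1], (16, 64))) / np.sqrt(2)
X_tf = isfft(qpsk)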
We focus on the problem of managing a shared physical wireless sensor network in which a single network infrastructure provider leases the physical resources of the network to application providers to run/deploy specific applications/services. In this scenario, we jointly solve the problems of Application Admission Control (AAC), that is, deciding whether to admit an application/service to the physical network, and wireless Sensor Network Slicing (SNS), that is, allocating the required physical resources to admitted applications in a transparent and effective way. We propose a mathematical programming framework to model the joint AAC-SNS problem, which is then leveraged to design effective solution algorithms. The proposed framework is thoroughly evaluated on realistic wireless sensor network infrastructures.
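To make the joint AAC-SNS idea concrete, the toy integer program below admits applications only when every sensor node can accommodate their combined demand; it is a deliberately simplified stand-in for the paper's formulation, with made-up capacities, demands, and revenues.

# Toy joint admission-control-and-slicing model solved with PuLP (CBC solver).

import pulp

nodes = ["n1", "n2", "n3"]
capacity = {"n1": 10, "n2": 8, "n3": 6}           # resource units per sensor node
apps = {                                           # per-node demand and revenue
    "appA": {"demand": {"n1": 4, "n2": 3, "n3": 2}, "revenue": 5},
    "appB": {"demand": {"n1": 6, "n2": 2, "n3": 3}, "revenue": 4},
    "appC": {"demand": {"n1": 3, "n2": 5, "n3": 4}, "revenue": 6},
}

prob = pulp.LpProblem("joint_AAC_SNS", pulp.LpMaximize)
admit = {a: pulp.LpVariable(f"admit_{a}", cat="Binary") for a in apps}

# Objective: maximize revenue from admitted applications.
prob += pulp.lpSum(apps[a]["revenue"] * admit[a] for a in apps)

# Slicing constraint: the demands of admitted applications must fit on every node.
for n in nodes:
    prob += pulp.lpSum(apps[a]["demand"][n] * admit[a] for a in apps) <= capacity[n]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({a: int(admit[a].value()) for a in apps})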
Low Earth Orbit (LEO) satellite networks are rapidly gaining traction today. Although several real-world deployments exist, our preliminary analysis of LEO topology performance with the soon-to-be-operational Inter-Satellite Links (ISLs) reveals several interesting characteristics that are difficult to explain based on our current understanding of topologies. For example, a real-world satellite shell with a low density of satellites offers better latency performance than another shell with nearly double the number of satellites. In this work, we conduct an in-depth investigation of LEO satellite topology design parameters and their impact on network performance when ISLs are used. In particular, we focus on three design parameters: the number of orbits in a shell, the inclination of the orbits, and the number of satellites per orbit. Through an extensive analysis of real-world and synthetic satellite configurations, we uncover several interesting properties of satellite topologies. Notably, there exist thresholds for the number of satellites per orbit and the number of orbits below which latency performance degrades significantly. Moreover, the network delay between a pair of traffic endpoints depends on the alignment of the satellites' orbital inclination with the geographic locations of the endpoints.
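The three design parameters can be made concrete with a small Walker-style shell generator that places satellites from the number of orbits, the inclination, and the number of satellites per orbit, and then measures the resulting ISL lengths; this is a rough illustration, not the simulator used in the paper.

# Build a circular LEO shell and report the lengths of one satellite's
# intra-orbit and inter-orbit (+Grid-style) ISLs.

import numpy as np

EARTH_RADIUS_KM = 6371.0

def shell_positions(num_orbits, sats_per_orbit, inclination_deg, altitude_km,
                    phase_offset=0.0):
    """Return ECI positions (num_orbits, sats_per_orbit, 3) for a circular shell."""
    r = EARTH_RADIUS_KM + altitude_km
    inc = np.radians(inclination_deg)
    pos = np.zeros((num_orbits, sats_per_orbit, 3))
    for i in range(num_orbits):
        raan = 2 * np.pi * i / num_orbits                      # orbital plane
        for j in range(sats_per_orbit):
            u = 2 * np.pi * (j + phase_offset * i) / sats_per_orbit
            pos[i, j] = r * np.array([
                np.cos(raan) * np.cos(u) - np.sin(raan) * np.sin(u) * np.cos(inc),
                np.sin(raan) * np.cos(u) + np.cos(raan) * np.sin(u) * np.cos(inc),
                np.sin(u) * np.sin(inc),
            ])
    return pos

# Example: a Starlink-like shell (72 orbits x 22 satellites, 53 deg, 550 km).
p = shell_positions(72, 22, 53.0, 550.0)
intra_isl = np.linalg.norm(p[0, 0] - p[0, 1])      # same orbit, next satellite
inter_isl = np.linalg.norm(p[0, 0] - p[1, 0])      # same slot, adjacent orbit
print(f"intra-orbit ISL ~ {intra_isl:.0f} km, inter-orbit ISL ~ {inter_isl:.0f} km")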
Music streaming services rely heavily on recommender systems to improve their users' experience by helping them navigate a large musical catalog and discover new songs, albums, or artists. However, recommending relevant and personalized content to new users, who have few to no interactions with the catalog, is challenging. This is commonly referred to as the user cold start problem. In this applied paper, we present the system recently deployed on the music streaming service Deezer to address this problem. The solution leverages a semi-personalized recommendation strategy, based on a deep neural network architecture and on a clustering of users built from heterogeneous sources of information. We extensively demonstrate the practical impact of this system and its effectiveness at predicting the future musical preferences of cold start users on Deezer through both offline and online large-scale experiments. In addition, we publicly release our code as well as anonymized usage data from our experiments. We hope that this release of industrial resources will benefit future research on user cold start recommendation.
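A hypothetical sketch of such a semi-personalized pipeline is given below: a small network embeds a cold-start user's registration features, the user is assigned to the nearest cluster of existing listeners, and the cluster's precomputed top tracks are recommended. The architecture, parameter shapes, and names are assumptions rather than Deezer's deployed system.

# Embed -> assign to nearest cluster -> serve cluster-level recommendations.

import numpy as np

def embed_user(features: np.ndarray, w1, b1, w2, b2) -> np.ndarray:
    """Two-layer MLP mapping registration features to the user embedding space."""
    hidden = np.maximum(0.0, features @ w1 + b1)   # ReLU
    return hidden @ w2 + b2

def recommend_cold_start(features, weights, centroids, cluster_top_tracks, k=10):
    emb = embed_user(features, *weights)
    cluster = int(np.argmin(np.linalg.norm(centroids - emb, axis=1)))
    return cluster_top_tracks[cluster][:k]

# Toy usage with random parameters (in practice these would be learned offline).
rng = np.random.default_rng(0)
weights = (rng.normal(size=(8, 16)), np.zeros(16),
           rng.normal(size=(16, 4)), np.zeros(4))
centroids = rng.normal(size=(100, 4))               # 100 user clusters
cluster_top_tracks = {c: list(rng.integers(0, 10_000, 50)) for c in range(100)}
print(recommend_cold_start(rng.normal(size=8), weights, centroids, cluster_top_tracks))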
The prevalence of networked sensors and actuators in many real-world systems, such as smart buildings, factories, power plants, and data centers, generates substantial amounts of multivariate time series data for these systems. The rich sensor data can be continuously monitored for intrusion events through anomaly detection. However, conventional threshold-based anomaly detection methods are inadequate for the dynamic complexities of these systems, while supervised machine learning methods are unable to exploit the large amounts of data due to the lack of labels. On the other hand, current unsupervised machine learning approaches have not fully exploited the spatial-temporal correlations and other dependencies among the multiple variables (sensors/actuators) in the system for detecting anomalies. In this work, we propose an unsupervised multivariate anomaly detection method based on Generative Adversarial Networks (GANs). Instead of treating each data stream independently, our proposed MAD-GAN framework considers the entire variable set concurrently to capture the latent interactions among the variables. We also fully exploit both the generator and the discriminator produced by the GAN, using a novel anomaly score called the DR-score to detect anomalies through discrimination and reconstruction. We tested MAD-GAN on two recent datasets collected from real-world cyber-physical systems (CPS): the Secure Water Treatment (SWaT) and the Water Distribution (WADI) datasets. Our experimental results show that the proposed MAD-GAN is effective in reporting anomalies caused by various cyber-intrusions in these complex real-world systems.
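In the spirit of the DR-score, the sketch below scores a multivariate window by inverting the generator to measure reconstruction error and combining it with the discriminator's output; the exact formulation in the paper may differ, and the latent_dim attribute and hyper-parameters are assumptions.

# Discrimination-plus-reconstruction anomaly score for one time-series window.

import torch

def dr_score(window, generator, discriminator, lam=0.5, steps=100, lr=0.01):
    """Anomaly score for one (seq_len, n_vars) window: higher = more anomalous."""
    z = torch.zeros(1, generator.latent_dim, requires_grad=True)  # assumed attribute
    opt = torch.optim.Adam([z], lr=lr)
    x = window.unsqueeze(0)
    # Invert the generator: find the latent code whose reconstruction is closest.
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((generator(z) - x) ** 2)
        loss.backward()
        opt.step()
    with torch.no_grad():
        recon_err = torch.mean((generator(z) - x) ** 2)   # reconstruction part
        disc_err = 1.0 - discriminator(x).mean()          # discrimination part
    return (lam * recon_err + (1.0 - lam) * disc_err).item()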
Convolutional networks (ConvNets) have achieved great success in various challenging vision tasks. However, the performance of ConvNets degrades when encountering domain shift. Domain adaptation is particularly important, yet challenging, in biomedical image analysis, where cross-modality data have largely different distributions. Given that annotating medical data is especially expensive, supervised transfer learning approaches are not quite optimal. In this paper, we propose an unsupervised domain adaptation framework with adversarial learning for cross-modality biomedical image segmentation. Specifically, our model is based on a dilated fully convolutional network for pixel-wise prediction. Moreover, we build a plug-and-play domain adaptation module (DAM) that maps the target input to features aligned with the source-domain feature space. A domain critic module (DCM) is set up to discriminate between the feature spaces of the two domains. We optimize the DAM and DCM via an adversarial loss without using any target-domain labels. The proposed method is validated by adapting a ConvNet trained on MRI images to unpaired CT data for cardiac structure segmentation, achieving very promising results.
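The adversarial interplay between the DAM and the DCM can be sketched as the alternating update below, in which the critic learns to separate source features from adapted target features and the adapter is trained to fool it; the module interfaces and labels are assumptions, not the authors' code.

# One adversarial training step for unsupervised feature alignment.

import torch
import torch.nn.functional as F

def adversarial_step(source_feat, target_img, dam, dcm, opt_dam, opt_dcm):
    """One update of the critic (DCM) followed by one of the adapter (DAM)."""
    # Critic update: source features -> "real", adapted target features -> "fake".
    with torch.no_grad():
        target_feat = dam(target_img)
    d_src, d_tgt = dcm(source_feat), dcm(target_feat)
    d_loss = (F.binary_cross_entropy(d_src, torch.ones_like(d_src))
              + F.binary_cross_entropy(d_tgt, torch.zeros_like(d_tgt)))
    opt_dcm.zero_grad()
    d_loss.backward()
    opt_dcm.step()

    # Adapter update: make adapted target features look like source to the critic.
    d_tgt = dcm(dam(target_img))
    a_loss = F.binary_cross_entropy(d_tgt, torch.ones_like(d_tgt))
    opt_dam.zero_grad()
    a_loss.backward()
    opt_dam.step()
    return d_loss.item(), a_loss.item()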
We present the Generative Adversarial Capsule Network (CapsuleGAN), a framework that uses capsule networks (CapsNets) instead of standard convolutional neural networks (CNNs) as discriminators within the generative adversarial network (GAN) setting for modeling image data. We provide guidelines for designing CapsNet discriminators and an updated GAN objective function, which incorporates the CapsNet margin loss, for training CapsuleGAN models. We show that CapsuleGAN outperforms convolutional GANs at modeling the image data distribution on the MNIST dataset of handwritten digits, as evaluated on the generative adversarial metric and semi-supervised image classification.
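The CapsNet margin loss that replaces the usual cross-entropy in the discriminator can be sketched as follows; the constants are the standard CapsNet values, while the wiring into the real/fake GAN objective shown in the usage lines is an illustrative assumption.

# CapsNet margin loss used as the discriminator objective in a GAN setting.

import torch

def capsule_margin_loss(v_lengths, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """
    v_lengths: (batch, num_classes) lengths of the output capsule vectors.
    targets:   (batch, num_classes) one-hot labels, e.g. [1, 0] = real, [0, 1] = fake.
    """
    pos = targets * torch.clamp(m_pos - v_lengths, min=0.0) ** 2
    neg = lam * (1.0 - targets) * torch.clamp(v_lengths - m_neg, min=0.0) ** 2
    return (pos + neg).sum(dim=1).mean()

# Usage: the discriminator is pushed to label real images "real" and generated
# images "fake"; the generator is trained to make its samples score as "real".
real_target = torch.tensor([[1.0, 0.0]])
fake_target = torch.tensor([[0.0, 1.0]])
v_real, v_fake = torch.tensor([[0.8, 0.2]]), torch.tensor([[0.3, 0.7]])
d_loss = capsule_margin_loss(v_real, real_target) + capsule_margin_loss(v_fake, fake_target)
g_loss = capsule_margin_loss(v_fake, real_target)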