Recent advancements in wireless technologies toward next-generation cellular networks have ushered in a new era in which cellular technology can be applied to traditionally wired networks with tight requirements, such as industrial networks. Next-generation cellular technologies (e.g., 5G and beyond) introduce the concept of ultra-reliable low-latency communications (URLLC). This thesis presents a Software-Defined Networking (SDN) architecture with programmable data planes for next-generation cellular networks to achieve URLLC. Our design deploys programmable switches between the cellular core and the Radio Access Network (RAN) to monitor and modify data traffic at line speed. We introduce the concept of \textit{intra-cellular optimization}, a relaxation in cellular networks that allows pre-authorized in-network devices to communicate without signaling the core network. We also present a control structure, the Unified Control Plane (UCP), comprising a novel Ethernet-layer control protocol and an adapted link-state mechanism for distributing routing information among the programmable switches. Our implementation uses P4 together with a 5G core implementation (Open5GS) and a UE/RAN simulator. We implement a Python simulator that models switch behavior to evaluate the performance of our system on multi-switch topologies. Our evaluation indicates a latency reduction of up to 2x with \textit{intra-cellular optimization} compared to the conventional architecture. We show that our design achieves control latency on the order of ten milliseconds, along with fine-grained network security and monitoring.
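To make the \textit{intra-cellular optimization} idea concrete, the following minimal Python sketch mimics the forwarding decision a programmable switch would make: if both endpoints are pre-authorized in-network devices, traffic is switched locally instead of being tunneled through the core. The IP addresses, port names, and allow-list are hypothetical; the actual data-plane logic in the thesis is expressed in P4 match-action tables and is not reproduced here.

\begin{verbatim}
# Minimal sketch of the intra-cellular forwarding decision (illustrative only).
# The UE addresses, port names, and allow-list below are hypothetical.

PRE_AUTHORIZED = {"10.45.0.2", "10.45.0.3"}   # pre-authorized in-network UEs

def forward(packet):
    src, dst = packet["src_ip"], packet["dst_ip"]
    if src in PRE_AUTHORIZED and dst in PRE_AUTHORIZED:
        # Both endpoints are pre-authorized: switch locally between
        # RAN-facing ports, skipping the round trip through the core.
        return "ran_port"
    # Default path: send the packet toward the cellular core.
    return "core_port"

print(forward({"src_ip": "10.45.0.2", "dst_ip": "10.45.0.3"}))  # ran_port
print(forward({"src_ip": "10.45.0.2", "dst_ip": "8.8.8.8"}))    # core_port
\end{verbatim}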
Offloading is a popular way to overcome the resource and power constraints of networked embedded devices, which are increasingly found in industrial environments. It involves moving resource-intensive computational tasks to a more powerful device on the network, often in close proximity to enable wireless communication. However, many Industrial Internet of Things (IIoT) applications have real-time constraints, and offloading such tasks over a wireless network with latency uncertainties poses new challenges. In this paper, we aim to better understand these challenges by proposing a system architecture and scheduler for real-time task offloading in wireless IIoT environments. Based on a prototype, we evaluate different system configurations and discuss their trade-offs and implications. Our design was shown to prevent deadline misses under high load and network uncertainties and outperformed a reference scheduler in terms of successful task throughput: under heavy task load, where the reference scheduler had a success rate of 5%, our design achieved a success rate of 60%.
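As a rough illustration of the kind of decisions such a scheduler must make, the sketch below admits an offloaded task only if its estimated completion time fits within its deadline and orders admitted tasks earliest-deadline-first. The function and parameter names (est_net_latency, queue_backlog) are hypothetical, and this is not the scheduler evaluated in the paper.

\begin{verbatim}
# Illustrative deadline-aware admission test and EDF ordering for offloaded
# tasks; all names and the latency model are simplifying assumptions.
import heapq
import time

def admit(task, est_net_latency, est_exec_time, queue_backlog):
    # Admit only if the estimated completion time meets the absolute deadline
    # (deadlines are assumed to be time.monotonic() timestamps in seconds).
    finish = time.monotonic() + est_net_latency + queue_backlog + est_exec_time
    return finish <= task["deadline"]

def edf_order(ready_tasks):
    # Earliest-deadline-first ordering of the admitted tasks.
    heap = [(t["deadline"], t["id"]) for t in ready_tasks]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
\end{verbatim}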
In this work, we investigate the performance of a joint sensing and communication (JSC) network consisting of multiple base stations (BSs) that cooperate through a fusion center (FC) to exchange information about the sensed environment while concurrently establishing communication links with a set of user equipments (UEs). Each BS within the network operates as a monostatic radar system, enabling comprehensive scanning of the monitored area and generating range-angle maps that provide information regarding the positions of a group of heterogeneous objects. The acquired maps are subsequently fused in the FC. Then, a convolutional neural network (CNN) is employed to infer the category of the targets, e.g., pedestrians or vehicles, and this information is exploited by an adaptive clustering algorithm to group detections originating from the same target more effectively. Finally, two multi-target tracking algorithms, the probability hypothesis density (PHD) filter and the multi-Bernoulli mixture (MBM) filter, are applied to estimate the state of the targets. Numerical results demonstrate that our framework provides remarkable sensing performance, achieving an optimal sub-pattern assignment (OSPA) error below 60 cm, while maintaining communication services to the UEs with a reduction in communication capacity of only about 10% to 20%. The impact of the number of BSs engaged in sensing is also examined, and we show that, in the specific case study, three BSs ensure a localization error below 1 m.
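For reference, the OSPA metric used above as the sensing accuracy measure can be computed as in the following sketch; the cutoff c and order p are illustrative choices rather than the settings used in the evaluation.

\begin{verbatim}
# Sketch of the OSPA distance between sets of true and estimated positions;
# assumes non-empty 2-D point sets, illustrative cutoff c and order p.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ospa(truth, est, c=1.0, p=2):
    truth, est = np.atleast_2d(truth), np.atleast_2d(est)
    m, n = len(truth), len(est)
    if m > n:                               # enforce |truth| <= |est|
        truth, est, m, n = est, truth, n, m
    d = np.linalg.norm(truth[:, None, :] - est[None, :, :], axis=-1)
    d = np.minimum(d, c) ** p               # apply cutoff, then p-th power
    rows, cols = linear_sum_assignment(d)   # optimal sub-pattern assignment
    cost = d[rows, cols].sum() + (c ** p) * (n - m)   # cardinality penalty
    return (cost / n) ** (1.0 / p)

print(ospa([[0, 0], [5, 5]], [[0.3, 0.1], [5.2, 4.9], [9, 9]], c=2.0))
\end{verbatim}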
Energy saving in wireless networks is growing in importance due to the increasing demands of evolving next-generation cellular networks, environmental and regulatory concerns, and potential energy crises arising from geopolitical tensions. In this work, we propose an approximate dynamic programming (ADP)-based method coupled with online optimization to switch the cells of base stations on and off, reducing network power consumption while maintaining adequate Quality of Service (QoS). We use a multilayer perceptron (MLP) that, given a state-action pair, predicts power consumption, thereby approximating the value function in ADP and enabling selection of the action with the greatest expected power savings. To maximize power savings without degrading QoS, we include a second MLP to predict QoS and a long short-term memory (LSTM) network to predict handovers; these are incorporated into an online optimization algorithm that produces an adaptive QoS threshold for filtering cell-switching actions based on the overall QoS history. The performance of the method is evaluated using a practical network simulator covering various real-world scenarios with dynamic traffic patterns.
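A highly simplified sketch of the action-selection step is shown below: candidate cell on/off actions whose predicted QoS falls below the adaptive threshold are filtered out, and the remaining action with the lowest predicted power consumption is chosen. The predictors are stand-ins for the trained MLPs (any regressor with a predict method), the state-action feature encoding is hypothetical, and the LSTM-based handover prediction and threshold update are omitted.

\begin{verbatim}
# Illustrative cell on/off action selection; power_model and qos_model are
# placeholders for the trained MLPs, and features are plain Python lists.
def select_action(state, candidate_actions, power_model, qos_model, qos_threshold):
    best_action, best_power = None, float("inf")
    for action in candidate_actions:
        features = [state + action]               # concatenated state-action pair
        if qos_model.predict(features)[0] < qos_threshold:
            continue                              # would degrade QoS: filter out
        power = power_model.predict(features)[0]  # approximate value (power used)
        if power < best_power:
            best_action, best_power = action, power
    return best_action
\end{verbatim}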
Optical wireless communication (OWC) offers several advantages complementary to radio-frequency wireless networks, such as its massive available spectrum; hence, it is widely anticipated that OWC will play a pivotal role in the forthcoming sixth-generation (6G) wireless communication networks. Although significant progress has been achieved in OWC over the past decades, the outage induced by occasionally low received optical power remains a key limiting factor for its deployment. In this work, we discuss the potential role of single-photon counting (SPC) receivers as a promising solution to overcome this limitation. We present an overview of the applications of SPC-based OWC systems in 6G networks, introduce their major performance-limiting factors, propose a performance enhancement framework to tackle these issues, and identify critical open problems for future research.
Scheduling packets with end-to-end deadline constraints in multihop networks is an important problem that has been notoriously difficult to tackle. Recently, there has been progress on this problem in the worst-case traffic setting, with the objective of maximizing the number of packets delivered within their deadlines. Specifically, the proposed algorithms were shown to achieve an $\Omega(1/\log(L))$ fraction of the optimal objective value if the minimum link capacity in the network is $C_{\min}=\Omega(\log (L))$, where $L$ is the maximum length of a packet's route in the network (which is bounded by the packet's maximum deadline). However, such guarantees can be quite pessimistic due to the strict worst-case traffic assumption and may not accurately reflect real-world settings. In this work, we aim to address this limitation by exploring whether it is possible to design algorithms that achieve a constant fraction of the optimal value while relaxing the worst-case traffic assumption. We provide a positive answer by demonstrating that in stochastic traffic settings, such as i.i.d. packet arrivals, near-optimal $(1-\epsilon)$-approximation algorithms can be designed if $C_{\min} = \Omega\big(\frac{\log (L/\epsilon)}{\epsilon^2}\big)$. To the best of our knowledge, this is the first result showing that this problem can be solved near-optimally under nontrivial assumptions on traffic and link capacity. We further present extended simulations using real network traces with non-stationary traffic, which demonstrate that our algorithms outperform worst-case-based algorithms in practical settings.
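To make the setting concrete, the toy simulation below (an illustration of the model, not the proposed algorithm) runs per-link earliest-deadline-first scheduling on a line network with i.i.d. Bernoulli arrivals and counts the packets delivered within their end-to-end deadlines; all parameters are arbitrary.

\begin{verbatim}
# Toy multihop deadline scheduling on a line network (illustrative only).
import random

L, C, T = 4, 3, 1000              # hops, per-link capacity per slot, horizon
queues = [[] for _ in range(L)]   # queues[h]: deadlines of packets at hop h
delivered = total = 0
for t in range(T):
    if random.random() < 0.5:     # i.i.d. Bernoulli arrival at the source
        total += 1
        queues[0].append(t + 2 * L)               # end-to-end deadline with slack
    for h in range(L - 1, -1, -1):                # last hop first: one hop per slot
        live = sorted(d for d in queues[h] if d >= t + (L - 1 - h))  # drop infeasible
        send, queues[h] = live[:C], live[C:]      # EDF, at most C packets per link
        for d in send:
            if h == L - 1:
                delivered += 1                    # reached destination in time
            else:
                queues[h + 1].append(d)
print(delivered, "of", total, "packets met their end-to-end deadline")
\end{verbatim}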
Recent interest in integrated sensing and communications has led to the design of novel signal processing techniques to recover information from an overlaid radar-communications signal. Here, we focus on a spectral coexistence scenario wherein the channels and transmit signals of both the radar and communications systems are unknown to the common receiver. In this dual-blind deconvolution (DBD) problem, the receiver admits a multi-carrier wireless communications signal that is overlaid with the radar signal reflected off multiple targets. The communications and radar channels are represented by continuous-valued range-times or delays corresponding to multiple transmission paths and targets, respectively. Prior works addressed the recovery of unknown channels and signals in this ill-posed DBD problem through atomic norm minimization, but contingent on individual minimum separation conditions for the radar and communications channels. In this paper, we provide an optimal joint separation condition using extremal functions from the Beurling-Selberg interpolation theory. Thereafter, we formulate DBD as a low-rank modified Hankel matrix retrieval problem and solve it via nuclear norm minimization. We estimate the unknown target and communications parameters from the recovered low-rank matrix using the multiple signal classification (MUSIC) method. We show that the joint separation condition also guarantees that the underlying Vandermonde matrix for MUSIC is well-conditioned. Numerical experiments validate our theoretical findings.
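The final MUSIC step can be sketched as follows for delay (range-time) estimation from frequency-domain samples; the toy signal model, grid, and dimensions are illustrative and independent of the preceding Hankel-matrix recovery stage.

\begin{verbatim}
# Illustrative MUSIC pseudospectrum for delay estimation (toy data).
import numpy as np

def music_delays(R, freqs, tau_grid, num_sources):
    # R: sample covariance of frequency-domain snapshots (N x N, Hermitian).
    _, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, :-num_sources]                 # noise subspace
    spectrum = []
    for tau in tau_grid:
        a = np.exp(-2j * np.pi * freqs * tau)      # delay steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# Toy usage: two delays at 0.2 and 0.5 (normalized units), noiseless snapshots.
freqs = np.arange(32)
A = np.exp(-2j * np.pi * np.outer(freqs, [0.2, 0.5]))
X = A @ (np.random.randn(2, 200) + 1j * np.random.randn(2, 200))
grid = np.linspace(0, 1, 1000)
p = music_delays(X @ X.conj().T / 200, freqs, grid, num_sources=2)
print(grid[np.argsort(p)[-2:]])   # grid points with the largest pseudospectrum
\end{verbatim}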
We describe ACE0, a lightweight platform for evaluating the suitability and viability of AI methods for behaviour discovery in multi-agent simulations. Specifically, ACE0 was designed to explore AI methods for multi-agent simulations used in operations research studies related to new technologies such as autonomous aircraft. Simulation environments used in production are often high-fidelity and complex, require significant domain knowledge, and as a result have high R&D costs. Minimal and lightweight simulation environments can help researchers and engineers evaluate the viability of new AI technologies for behaviour discovery in a more agile and potentially more cost-effective manner. In this paper, we describe the motivation for the development of ACE0, provide a technical overview of the system architecture, describe a case study of behaviour discovery in the aerospace domain, and provide a qualitative evaluation of the system. The evaluation includes a brief description of collaborative research projects with academic partners exploring different AI behaviour discovery methods.
The vast amounts of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscore the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, owing to the need for edge computation and to licensing (data access) issues. While federated learning (FL) has emerged as a framework for model training without requiring direct data sharing and exchange, effectively modeling complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model -- the Cross-Node Federated Graph Neural Network (CNFGNN) -- which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes be generated locally on each node and remain decentralized. CNFGNN operates by disentangling temporal dynamics modeling on the devices from spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on the edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring modest communication cost.
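The cross-node split can be sketched as follows: each node computes a temporal embedding from its own series, and only the embeddings (never the raw data) are aggregated over the graph on the server. The linear encoder and single normalized-adjacency propagation step are simplified stand-ins for the encoder and GNN used in CNFGNN, and all shapes are illustrative.

\begin{verbatim}
# Illustrative split of temporal (on-device) and spatial (on-server) modeling.
import numpy as np

def local_temporal_embedding(series, W):
    # On-device step: map a node's own time series to an embedding
    # (a real system would use e.g. a GRU; a linear map suffices here).
    return np.tanh(series @ W)

def server_graph_step(H, A):
    # Server-side step: propagate embeddings over the row-normalized graph,
    # without ever seeing the raw per-node time series.
    A_norm = A / A.sum(axis=1, keepdims=True)
    return A_norm @ H

# Toy usage: 4 nodes, 12 past time steps each, a ring graph with self-loops.
rng = np.random.default_rng(0)
series = rng.normal(size=(4, 12))
W = rng.normal(size=(12, 8))
H = np.stack([local_temporal_embedding(s, W) for s in series])   # on devices
A = np.eye(4) + np.roll(np.eye(4), 1, axis=1) + np.roll(np.eye(4), -1, axis=1)
H_spatial = server_graph_step(H, A)                               # on server
print(H_spatial.shape)   # (4, 8): node embeddings after one graph step
\end{verbatim}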
The prevalence of networked sensors and actuators in many real-world systems such as smart buildings, factories, power plants, and data centers generates substantial amounts of multivariate time series data for these systems. The rich sensor data can be continuously monitored for intrusion events through anomaly detection. However, conventional threshold-based anomaly detection methods are inadequate due to the dynamic complexities of these systems, while supervised machine learning methods are unable to exploit the large amounts of data due to the lack of labeled data. On the other hand, current unsupervised machine learning approaches have not fully exploited the spatial-temporal correlation and other dependencies amongst the multiple variables (sensors/actuators) in the system for detecting anomalies. In this work, we propose an unsupervised multivariate anomaly detection method based on Generative Adversarial Networks (GANs). Instead of treating each data stream independently, our proposed MAD-GAN framework considers the entire variable set concurrently to capture the latent interactions amongst the variables. We also fully exploit both the generator and the discriminator produced by the GAN, using a novel anomaly score called the DR-score to detect anomalies through discrimination and reconstruction. We have tested MAD-GAN on two recent datasets collected from real-world cyber-physical systems (CPSs): the Secure Water Treatment (SWaT) and the Water Distribution (WADI) datasets. Our experimental results show that the proposed MAD-GAN is effective in reporting anomalies caused by various cyber-intrusions in these complex real-world systems.
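A score in the spirit of the DR-score can be sketched as a convex combination of a reconstruction error and a discriminator-based term, as below; the exact components and weighting used in MAD-GAN may differ, and the generator/discriminator outputs here are placeholders.

\begin{verbatim}
# Illustrative discrimination-plus-reconstruction anomaly score; the models
# feeding it (generator reconstruction, discriminator probability) are assumed.
import numpy as np

def dr_score(x_window, reconstruction, discriminator_prob, lam=0.5):
    rec = np.mean(np.abs(x_window - reconstruction))   # reconstruction error
    dis = 1.0 - discriminator_prob                     # how "fake" the window looks
    return lam * rec + (1.0 - lam) * dis

# A window is flagged as anomalous when its score exceeds a chosen threshold.
score = dr_score(np.ones((30, 5)), 0.8 * np.ones((30, 5)), discriminator_prob=0.3)
print(score > 0.2)   # True for this toy window
\end{verbatim}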
In this paper, we propose jointly learning attention and recurrent neural network (RNN) models for multi-label classification. While approaches based on either model exist (e.g., for the task of image captioning), training such existing network architectures typically requires pre-defined label sequences. For multi-label classification, it would be desirable to have a robust inference process so that prediction errors do not propagate and degrade performance. Our proposed model uniquely integrates attention and Long Short-Term Memory (LSTM) models, which not only addresses the above problem but also allows one to identify visual objects of interest with varying sizes without prior knowledge of a particular label ordering. More importantly, label co-occurrence information can be jointly exploited by our LSTM model. Finally, by advancing the technique of beam search, our proposed network model can efficiently predict multiple labels.
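The beam search over label sequences can be sketched as follows; the per-step scorer is a stand-in for the attention-LSTM's label distribution, and the label names are hypothetical.

\begin{verbatim}
# Illustrative beam search over label sequences for multi-label prediction.
import math

def beam_search(step_logprobs, beam_width=3, max_len=4):
    # step_logprobs(prefix) -> {label: log-probability of emitting it next}
    beams = [([], 0.0)]
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for label, lp in step_logprobs(prefix).items():
                if label not in prefix:          # each label predicted at most once
                    candidates.append((prefix + [label], score + lp))
        if not candidates:                       # all labels used; stop early
            break
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]

# Toy scorer that always prefers "person", then "dog", then "ball".
prior = {"person": math.log(0.6), "dog": math.log(0.3), "ball": math.log(0.1)}
print(beam_search(lambda prefix: prior, max_len=2))   # ['person', 'dog']
\end{verbatim}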