Terrestrial and satellite communication networks often rely on two-hop wireless architectures with an access channel followed by backhaul links. Examples include Cloud Radio Access Networks (C-RAN) and Low-Earth Orbit (LEO) satellite systems. Furthermore, communication services characterized by the coexistence of heterogeneous requirements are emerging as key use cases. This paper studies the performance of critical service (CS) and non-critical service (NCS) Internet of Things (IoT) systems sharing a grant-free channel consisting of radio access and backhaul segments. On the radio access segment, IoT devices send packets to a set of non-cooperative access points (APs) using slotted ALOHA (SA). The APs then forward correctly received messages to a base station over a shared wireless backhaul segment that also adopts SA. We first study a simplified erasure channel model, which is well suited for satellite applications. Then, to account for terrestrial scenarios, we consider the impact of fading. Among the main conclusions, we show that orthogonal inter-service resource allocation is generally preferable for NCS devices, while non-orthogonal protocols can improve the throughput and packet success rate of CS devices in both terrestrial and satellite scenarios.
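To make the two-hop grant-free dynamics concrete, the following is a minimal Monte Carlo sketch (not the authors' simulator): devices contend via SA on the access segment, and APs that decode a singleton packet contend again via SA on the shared backhaul. All parameter values and the collision/erasure model are illustrative assumptions, and duplicate copies decoded at multiple APs are not de-duplicated.

```python
import numpy as np

rng = np.random.default_rng(0)
N_DEV, N_APS = 100, 4        # device/AP counts (illustrative)
P_TX, P_ERASE = 0.05, 0.3    # device tx prob., access-link erasure prob.
Q_TX = 0.6                   # AP tx prob. on the shared backhaul slot
SLOTS = 10_000

delivered = 0
for _ in range(SLOTS):
    tx = rng.random(N_DEV) < P_TX                              # active devices
    # Each active device's packet reaches each AP unless independently erased.
    reach = tx[:, None] & (rng.random((N_DEV, N_APS)) >= P_ERASE)
    decoded = reach.sum(axis=0) == 1                           # SA: singletons decode
    # APs holding a decoded packet contend on the shared backhaul, also via SA.
    backhaul_tx = decoded & (rng.random(N_APS) < Q_TX)
    if backhaul_tx.sum() == 1:                                 # backhaul singleton
        delivered += 1

print(f"end-to-end throughput ~ {delivered / SLOTS:.3f} packets/slot")
```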
We consider the analysis and design of distributed wireless networks wherein remote radio heads (RRHs) coordinate transmissions to serve multiple users on the same resource block (RB). Specifically, we analyze two possible multiple-input multiple-output wireless fronthaul solutions: multicast and zero-forcing (ZF) beamforming. We develop a statistical model for the fronthaul rate and, coupled with an analysis of the user access rate, we optimize the placement of the RRHs. This model allows us to formulate the location optimization problem with a statistical constraint on fronthaul outage. Our results are cautionary, showing that the fronthaul requires considerable bandwidth to enable joint service to users. This requirement can be relaxed by serving a smaller number of users on the same RB. Additionally, we show that, with a fixed number of antennas, for the multicast fronthaul it is prudent to concentrate these antennas on a few RRHs, whereas for the ZF beamforming fronthaul it is better to distribute the antennas across more RRHs. For the parameters chosen, using a ZF beamforming fronthaul improves the typical access rate by approximately 8% compared to multicast. Crucially, our work quantifies the effect of these fronthaul solutions and provides an effective tool for the design of distributed networks.
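As a toy illustration of the ZF access-rate computation underlying such an analysis, the sketch below precodes K single-antenna users from an M-antenna transmitter; the dimensions, SNR, and per-stream power normalization are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, SNR_DB = 16, 4, 10      # RRH antennas, co-scheduled users, per-user SNR
snr = 10 ** (SNR_DB / 10)

# i.i.d. Rayleigh channel from M antennas to K single-antenna users
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # ZF precoder: H @ W = I_K
W /= np.linalg.norm(W, axis=0)                   # unit power per stream
G = H @ W                                        # effective channel (diagonal)
sig = np.abs(np.diag(G)) ** 2
intf = (np.abs(G) ** 2).sum(axis=1) - sig        # ~0 by construction of ZF
rates = np.log2(1 + snr * sig / (snr * intf + 1))
print("per-user access rates (bit/s/Hz):", np.round(rates, 2))
```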
Optimal MIMO detection is one of the most challenging and computationally demanding tasks in wireless systems. We show that emerging analog computing techniques such as Coherent Ising Machines (CIMs) are promising candidates for performing near-optimal MIMO detection. We propose a novel regularized Ising formulation for MIMO detection that mitigates a common error-floor problem, and we further evolve it into an algorithm that achieves near-optimal MIMO detection. Massive MIMO systems, which have a large number of antennas at the access point (AP), allow linear detectors to be near-optimal. However, the simplified detection in these systems comes at the cost of overall throughput, which could be improved by supporting more users. By means of numerical simulations, we show that, in principle, a MIMO detector based on a hybrid use of a CIM would allow us to add more transmit antennas/users and increase the overall throughput of the cell by a significant factor. This would open up the opportunity to operate with more aggressive modulation and coding schemes and hence achieve higher throughput: for a $16\times16$ large MIMO system, we estimate around 2.5$\times$ more throughput in the mid-SNR regime ($\approx 12$ dB) and 2$\times$ more throughput in the high-SNR regime ($>20$ dB) than the industry-standard minimum mean-squared error (MMSE) detector.
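To see why MIMO detection maps onto an Ising machine, the sketch below writes the maximum-likelihood metric of a small real-valued BPSK system in Ising form and verifies the equivalence by brute force; the regularization proposed in the paper is omitted, and a CIM would replace the exhaustive search with an analog ground-state search.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
Nt = Nr = 4                                   # small real-valued BPSK system
H = rng.standard_normal((Nr, Nt))
s_true = rng.choice([-1.0, 1.0], Nt)
y = H @ s_true + 0.1 * rng.standard_normal(Nr)

# ||y - H s||^2 = y.y - 2 (H^T y).s + s^T (H^T H) s.  With s_i = +/-1 the
# diagonal of H^T H contributes the constant trace(H^T H), so ML detection is
# exactly an Ising ground-state search over the off-diagonal couplings.
Q = H.T @ H
off = Q - np.diag(np.diag(Q))
const = y @ y + np.trace(Q)

def objective(s):                              # maximum-likelihood metric
    return np.sum((y - H @ s) ** 2)

def ising_energy(s):                           # equivalent Ising form
    return const + s @ off @ s - 2 * (H.T @ y) @ s

s_ml = min((np.array(s) for s in product([-1.0, 1.0], repeat=Nt)),
           key=objective)
assert np.isclose(objective(s_ml), ising_energy(s_ml))
print("ML/Ising estimate:", s_ml, " true:", s_true)
```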
Gaussian Processes (GPs) are a staple in the toolkit of a spatial statistician. Well-documented computational roadblocks in the analysis of large geospatial datasets using GPs have now been successfully mitigated via several recent statistical innovations. Nearest Neighbor Gaussian Processes (NNGPs) have emerged as a leading candidate for such massive-scale geospatial analysis owing to their empirical success. This article reviews the connection of NNGPs to sparse Cholesky factors of spatial precision (inverse-covariance) matrices. The review focuses on these sparse Cholesky matrices, which are versatile and have recently found many diverse applications beyond the primary use of NNGPs for fast parameter estimation and prediction in spatial (generalized) linear models. In particular, we discuss applications of sparse NNGP Cholesky matrices to address multifaceted computational issues in spatial bootstrapping, simulation of large-scale realizations of Gaussian random fields, and extensions to non-parametric estimation of the mean function of a Gaussian Process using Random Forests. We also review a sparse-Cholesky-based model for areal (geographically aggregated) data that addresses interpretability issues of existing areal models. Finally, we highlight some yet-to-be-addressed issues of such sparse Cholesky approximations that warrant further research.
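A minimal sketch of the sparse Cholesky (Vecchia-type) construction behind NNGP, assuming an exponential covariance and a coordinate-based ordering; dense arrays are used for clarity where a real implementation would store sparse factors.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 500, 10                        # locations, neighbor-set size (illustrative)
locs = rng.random((n, 2))
locs = locs[np.argsort(locs[:, 0])]   # order locations (here: by first coordinate)

def expcov(a, b, sigma2=1.0, phi=10.0):
    """Exponential covariance between two sets of 2-D locations."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sigma2 * np.exp(-phi * d)

# Vecchia/NNGP: condition each location on its m nearest *previously ordered*
# neighbors, yielding a sparse lower-triangular factor (dense here for clarity).
A = np.zeros((n, n))
D = np.zeros(n)
D[0] = expcov(locs[:1], locs[:1])[0, 0]
for i in range(1, n):
    prev = locs[:i]
    nn = np.argsort(np.linalg.norm(prev - locs[i], axis=1))[: min(m, i)]
    C_nn = expcov(prev[nn], prev[nn])
    c_in = expcov(prev[nn], locs[i : i + 1])[:, 0]
    w = np.linalg.solve(C_nn, c_in)   # kriging weights given the neighbor set
    A[i, nn] = w
    D[i] = expcov(locs[i : i + 1], locs[i : i + 1])[0, 0] - c_in @ w

# Sparse precision Q = (I - A)^T D^{-1} (I - A); a field realization can be
# drawn cheaply by solving (I - A) x = sqrt(D) * eps for standard normal eps.
I_A = np.eye(n) - A
Q = I_A.T @ (I_A / D[:, None])
print("precision matrix shape:", Q.shape)
```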
The breakthrough of blockchain technology has facilitated the emergence and deployment of a wide range of Unmanned Aerial Vehicle (UAV) network-based applications. Yet, the full utilization of these applications is still limited because each application operates on an isolated blockchain. It is therefore necessary to orchestrate these blockchain fragments by introducing a cross-blockchain platform that governs inter-communication and the transfer of assets in the context of UAV networks. In this paper, we provide an up-to-date survey of blockchain-based UAV network applications. We also survey the literature on state-of-the-art cross-blockchain frameworks to highlight the latest advances in the field. Based on the outcomes of our survey, we introduce a spectrum of scenarios related to UAV networks that may leverage the potential of currently available cross-blockchain solutions. Finally, we identify open issues and potential challenges associated with the application of a cross-blockchain scheme to UAV networks that will hopefully guide future research directions.
We propose a set of tools to replay wireless network traffic traces while preserving the privacy of the original traces. Traces are generated by a user- and context-aware trained generative adversarial network (GAN). The replay allows realistic traces, for any number of users and of any duration, to be produced given contextual parameters such as the type of application and the real-time signal strength. We demonstrate the usefulness of the tools in three replay scenarios: Linux- and Android-station experiments and NS3 simulations. We also evaluate the ability of the GAN model to generate traces that retain key statistical properties of the original traces, such as feature correlation, statistical moments, and novelty. Our results show that our approach outperforms both traditional statistical distribution-fitting approaches and a state-of-the-art GAN time-series generator across these metrics. The ability of our GAN model to generate any number of user traces, regardless of the number of users in the original trace, also makes our tools more practically applicable than previous GAN approaches. Furthermore, we present a use case in which our tools were employed in a Wi-Fi research experiment.
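As a schematic of a conditional trace generator (the paper's architecture is not reproduced here), the PyTorch sketch below maps a latent vector plus context features, such as an application identifier and signal strength, to a fixed-length synthetic trace; all layer sizes and the context encoding are assumptions.

```python
import torch
import torch.nn as nn

LATENT, CTX, SEQ = 32, 2, 64  # latent dim, context dims, trace length (assumed)

class TraceGenerator(nn.Module):
    """Maps (noise, context) to a synthetic traffic trace of length SEQ."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + CTX, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, SEQ), nn.Softplus(),  # non-negative traffic values
        )

    def forward(self, z, ctx):
        return self.net(torch.cat([z, ctx], dim=-1))

gen = TraceGenerator()
z = torch.randn(5, LATENT)                        # 5 independent synthetic users
ctx = torch.tensor([[1.0, -60.0]]).repeat(5, 1)   # e.g., app id, RSSI in dBm
traces = gen(z, ctx)                              # (5, SEQ), ready for replay
```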
This paper develops a 3GPP-inspired design for in-band full-duplex (IBFD) integrated access and backhaul (IAB) networks in the frequency range 2 (FR2) band, which can enhance spectral efficiency (SE) and coverage while reducing latency. However, self-interference (SI), which is usually more than 100 dB stronger than the signal of interest, becomes the major bottleneck in developing these IBFD networks. We design and analyze a subarray-based hybrid beamforming IBFD-IAB system whose RF beamformers are obtained via RF codebooks given by a modified Linde-Buzo-Gray (LBG) algorithm. The SI is canceled in three stages, where the first stage of antenna isolation is assumed to be successfully deployed. The second stage consists of optical domain (OD)-based RF cancellation, where cancelers are connected to the RF chain pairs. The third stage comprises digital cancellation via successive interference cancellation followed by a minimum mean-squared error baseband receiver. Multiuser interference in the access link is canceled by zero-forcing at the IAB-node transmitter. Simulations show that, over a 400 MHz bandwidth, our proposed OD-based RF cancellation can achieve around 25 dB of cancellation with 100 taps. Moreover, the greater the hardware impairments and channel estimation error, the worse the achievable digital cancellation.
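The digital stage can be pictured with a least-squares canceler: estimate a short residual-SI channel from the known transmit samples, regenerate the SI, and subtract it. The sketch below is illustrative only (tap values, lengths, and powers are assumptions) and does not model the OD-based RF stage or the SIC/MMSE receiver.

```python
import numpy as np

rng = np.random.default_rng(4)
n, L = 2048, 4   # samples; number of estimated SI taps (illustrative)
x_si = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
h_si = np.array([0.8, 0.3 + 0.2j, 0.1])   # residual SI channel after RF stages
soi = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = np.convolve(x_si, h_si)[:n] + soi     # received = SI + weak signal of interest

# Least-squares digital cancellation: regress y on delayed copies of the known
# transmit samples, regenerate the SI, and subtract it.
X = np.column_stack([np.concatenate([np.zeros(k, complex), x_si[: n - k]])
                     for k in range(L)])
h_hat = np.linalg.lstsq(X, y, rcond=None)[0]
y_clean = y - X @ h_hat

db = lambda p: 10 * np.log10(p)
print(f"digital cancellation ~ "
      f"{db(np.mean(abs(y) ** 2)) - db(np.mean(abs(y_clean) ** 2)):.1f} dB")
```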
Cooperative vehicle platooning significantly improves highway safety and fuel efficiency. In this model, a set of vehicles move in line formation and coordinate functions such as acceleration, braking, and steering using a combination of physical sensing and vehicle-to-vehicle (V2V) messaging. The authenticity and integrity of the V2V messages are paramount to highway safety. For this reason, recent V2V and V2X standards support the integration of a public key infrastructure (PKI). However, a PKI cannot bind a vehicle's digital identity to the vehicle's physical state (location, heading, velocity, etc.). As a result, a vehicle with valid cryptographic credentials can impact the platoon by creating "ghost" vehicles and injecting false state information. In this paper, we seek to provide the missing link between the physical and the digital world in the context of verifying a vehicle's platoon membership. We focus on the property of following, where vehicles follow each other in a close and coordinated manner. We develop a Proof-of-Following (PoF) protocol that enables a candidate vehicle to prove that it follows a verifier within the typical platooning distance. The main idea of the proposed PoF protocol is to draw security from the common, but constantly changing, environment experienced by closely traveling vehicles. We use the large-scale fading of ambient RF signals as a common source of randomness to construct a PoF primitive. The correlation of large-scale fading is an ideal candidate for the mobile outdoor environment because it decays exponentially with distance and time. We evaluate our PoF protocol on an experimental platoon of two vehicles in freeway, highway, and urban driving conditions. Under these realistic conditions, we demonstrate that the PoF withstands both pre-recording and following attacks with overwhelming probability.
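The core PoF primitive reduces to a correlation test on the large-scale fading traces observed by verifier and prover. A minimal sketch, with synthetic random-walk shadowing standing in for measured RSSI and an illustrative acceptance threshold (the protocol's filtering and challenge structure are omitted):

```python
import numpy as np

def pof_decision(rssi_verifier, rssi_prover, threshold=0.8):
    """Accept the prover as a follower if its large-scale fading trace is
    sufficiently correlated with the verifier's (illustrative threshold)."""
    a = rssi_verifier - rssi_verifier.mean()
    b = rssi_prover - rssi_prover.mean()
    rho = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return rho >= threshold, rho

rng = np.random.default_rng(5)
shadowing = np.cumsum(rng.standard_normal(200)) * 0.5   # shared large-scale fading
follower = shadowing + 0.3 * rng.standard_normal(200)   # close follower: high corr.
remote = np.cumsum(rng.standard_normal(200)) * 0.5      # distant car: decorrelated
print(pof_decision(shadowing, follower))  # typically (True, rho close to 1)
print(pof_decision(shadowing, remote))    # typically (False, rho near 0)
```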
Channel estimation in mmWave and THz-range wireless communications, which produce data at Gb-to-Tb rates, is critical to configuring system parameters related to transmission signal quality, yet it remains a daunting challenge in both software and hardware. Current channel estimation methods, whether model-based or data-driven (machine learning (ML)), both use and create big data. This in turn requires a large amount of computational resources: read operations to check whether some predefined channel configuration, e.g., a set of QoS requirements, already exists in the database, as well as write operations to store new combinations of QoS parameters. The ML-based approach in particular requires high computational and storage resources, low latency, and greater hardware flexibility. In this paper, we engineer and study the offloading of the above operations to edge and cloud computing systems, using THz channel modeling as an example, to understand how well edge and cloud computing can provide rapid responses with channel and link configuration parameters. We evaluate the performance of the engineered system when the computational and storage resources are orchestrated based on (1) a monolithic architecture and (2) microservices architectures, both in an edge-cloud setting. For the microservices approach, we engineer both Docker Swarm and Kubernetes deployments. The measurements show great promise for edge computing and microservices, which can respond quickly with properly configured parameters and improve transmission distance and signal quality in ultra-high-speed wireless communications.
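The read/write pattern described above can be pictured as a small stateless lookup service that is containerized and replicated under Docker Swarm or Kubernetes. The Flask sketch below is a hypothetical skeleton, not the authors' implementation; the endpoint names and the in-memory cache standing in for the configuration database are assumptions.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
cache = {}  # in-memory stand-in for the channel-configuration database

@app.route("/channel/<link_id>", methods=["GET"])
def read_config(link_id):
    # Read path: check whether a configuration for this link already exists.
    cfg = cache.get(link_id)
    return (jsonify(cfg), 200) if cfg else (jsonify(error="unknown link"), 404)

@app.route("/channel/<link_id>", methods=["PUT"])
def write_config(link_id):
    # Write path: store a newly computed QoS/channel parameter combination.
    cache[link_id] = request.get_json()
    return jsonify(status="stored"), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # containerize for Swarm/Kubernetes
```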
We introduce a general definition of hybrid transforms for constructible functions. These are integral transforms combining Lebesgue integration and Euler calculus. Lebesgue integration gives access to well-studied kernels and to regularity results, while Euler calculus conveys topological information and allows for compatibility with operations on constructible functions. We conduct a systematic study of such transforms and introduce two new ones: the Euler-Fourier and Euler-Laplace transforms. We show that the first has a left inverse and that the second provides a satisfactory generalization of Govc and Hepworth's persistent magnitude to constructible sheaves, in particular to multi-parameter persistence modules. Finally, we prove index-theoretic formulae expressing a wide class of hybrid transforms as generalized Euler integral transforms. This yields expectation formulae for transforms of constructible functions associated to (sub)level-sets persistence of random Gaussian filtrations.
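For orientation, the display below recalls the Euler integral of a constructible function and the schematic shape of a hybrid transform; kernels, domains, and tameness hypotheses are simplified relative to the paper.

$$\int_X \varphi \,\mathrm{d}\chi \;=\; \sum_{n \in \mathbb{Z}} n\, \chi\big(\varphi^{-1}(n)\big), \qquad T_\kappa\varphi(\xi) \;=\; \int_{\mathbb{R}} \kappa(t)\,\big(\langle\xi,\cdot\rangle_{*}\varphi\big)(t)\,\mathrm{d}t,$$

where the pushforward $\langle\xi,\cdot\rangle_{*}$ is taken in Euler calculus (fiberwise Euler integration) and the outer integral is Lebesgue; kernels such as $\kappa(t) = e^{-t}$ or $\kappa(t) = e^{-\mathrm{i}t}$ then yield Euler-Laplace- and Euler-Fourier-type transforms, respectively.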
Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now, it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predictions of performance metrics or some other criterion of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and we envision that applying ML to computer architecture and systems will thrive in the community.