With the rapid growth of IoT networks, ubiquitous coverage is becoming increasingly necessary. Low Earth Orbit (LEO) satellite constellations for IoT have been proposed to provide coverage to regions where terrestrial systems cannot. However, LEO constellations for uplink communications are severely limited by the high density of user devices, which causes a high level of co-channel interference. This research presents a novel framework that utilizes spiking neural networks (SNNs) to detect IoT signals in the presence of uplink interference. The key advantage of SNNs is their extremely low power consumption relative to traditional deep learning (DL) networks. The performance of the spiking-based neural network detectors is compared against state-of-the-art DL networks and the conventional matched filter detector. Results indicate that both DL- and SNN-based receivers surpass the matched filter detector in interference-heavy scenarios, owing to their capacity to effectively distinguish target signals amidst co-channel interference. Moreover, our work highlights the ultra-low power consumption of SNNs compared with other DL methods for signal detection. The strong detection performance and low power consumption of SNNs make them particularly suitable for onboard signal detection in IoT LEO satellites, especially under high-interference conditions.
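For context, the conventional matched filter baseline referenced above correlates the received samples with a known signal template and thresholds the peak statistic. The sketch below is a minimal, generic illustration of that baseline; the BPSK-like preamble, interference level, and threshold are illustrative assumptions and not the paper's experimental setup.

```python
import numpy as np

def matched_filter_detect(received, template, threshold):
    """Correlate received samples with a known template and declare a detection
    when the peak normalized correlation exceeds the threshold (illustrative)."""
    corr = np.correlate(received, template, mode="valid")
    stat = np.abs(corr) / (np.linalg.norm(template) ** 2)
    return stat.max() > threshold, stat

# Toy example: a known preamble buried in noise plus a co-channel interferer
rng = np.random.default_rng(0)
template = np.sign(rng.standard_normal(64))           # assumed BPSK-like preamble
interferer = 0.8 * np.sign(rng.standard_normal(256))  # assumed co-channel interference
noise = 0.5 * rng.standard_normal(256)
received = noise + interferer
received[100:164] += template                         # embed the target signal
detected, stat = matched_filter_detect(received, template, threshold=0.5)
print(detected)
```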
In this paper, we propose a novel integrated sensing and communication (ISAC) complex convolutional neural network (CNN) CSI enhancer for 6G networks, which exploits the correlation between sensing parameters, such as angle-of-arrival (AoA) and range, and the channel state information (CSI) to significantly improve the CSI estimation accuracy and, in turn, the sensing accuracy. The ISAC complex CNN CSI enhancer uses complex-valued computation layers to build the CNN so as to better preserve the phase information of the CSI. Furthermore, we incorporate ISAC transform modules into the CNN enhancer to map the CSI into the sparse angle-delay domain, where it can be treated as an image with prominent peaks that is well suited to CNN processing. We then propose a novel biased FFT-based sensing scheme: we actively add known phase bias terms to the original CSI to generate multiple estimation results with a simple FFT-based sensing method, and we average the debiased sensing results to obtain more accurate range estimates. Extensive simulation results show that the ISAC complex CNN CSI enhancer converges within 30 training epochs. Its CSI estimation normalized mean square error (NMSE) is about 17 dB lower than that of the MMSE method, and the bit error rate (BER) of demodulation with the enhanced CSI approaches that of perfect CSI. Finally, the range estimation MSE of the proposed biased FFT-based sensing method approaches that of the subspace-based method with much lower complexity.
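As we read the abstract, the biased FFT-based sensing idea is to shift the CSI by several known delay (phase) biases, run the same FFT-based peak search on each shifted copy, remove each bias from its estimate, and average the results so that the FFT bin quantization error is reduced. The sketch below illustrates this on a toy single-path channel; the subcarrier count, spacing, and bias values are placeholders, not the paper's configuration.

```python
import numpy as np

# --- illustrative parameters (placeholders, not the paper's configuration) ---
c = 3e8
n_sc = 256            # number of subcarriers
delta_f = 120e3       # subcarrier spacing (Hz)
true_delay = 0.83e-6  # true propagation delay (s)

n = np.arange(n_sc)
csi = np.exp(-1j * 2 * np.pi * n * delta_f * true_delay)  # noiseless toy CSI

def fft_delay_estimate(h):
    """Simple FFT-based delay estimate: peak bin of the IFFT across subcarriers."""
    profile = np.abs(np.fft.ifft(h, n_sc))
    k = np.argmax(profile)
    return k / (n_sc * delta_f)  # delay resolution is 1/(N*delta_f)

# Biased FFT-based sensing: apply known delay biases, estimate, debias, average.
biases = np.linspace(0, 1 / (n_sc * delta_f), 8, endpoint=False)  # assumed sub-bin biases
estimates = []
for tau_b in biases:
    biased_csi = csi * np.exp(-1j * 2 * np.pi * n * delta_f * tau_b)  # add known phase bias
    estimates.append(fft_delay_estimate(biased_csi) - tau_b)          # remove the bias again
avg_delay = np.mean(estimates)
print(f"plain FFT estimate : {fft_delay_estimate(csi)*1e6:.3f} us")
print(f"biased-FFT average : {avg_delay*1e6:.3f} us (true {true_delay*1e6:.3f} us)")
```

In this toy run the averaged, debiased estimate lands closer to the true delay than the single FFT peak, which is the benefit the abstract attributes to the scheme.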
Smooth coordination within a swarm robotic system is essential for the effective execution of collective robot missions, and efficient communication is key to successful coordination of swarm robots. This paper proposes a new communication-efficient decentralized cooperative reinforcement learning algorithm for coordinating swarm robots; efficiency is achieved by hierarchically building on local information exchanges. We consider a case-study application of maze solving through cooperation among a group of robots, minimizing time and cost while avoiding inter-robot collisions and path overlaps during exploration. Building on a solid theoretical basis, we extensively analyze the algorithm with realistic CORE network simulations and evaluate it against state-of-the-art solutions in terms of maze coverage percentage and efficiency in communication-degraded environments. The results demonstrate significantly higher coverage accuracy and efficiency, with reduced costs and overlaps, even under high packet loss and short communication ranges.
Fast and reliable wireless communication has become a critical demand in modern life. In mission-critical (MC) scenarios, for instance when natural disasters strike, providing ubiquitous connectivity with traditional terrestrial networks becomes challenging. In this context, unmanned aerial vehicle (UAV) based aerial networks offer a promising alternative for fast, flexible, and reliable wireless communications. Owing to unique characteristics such as mobility, flexible deployment, and rapid reconfiguration, drones can dynamically change location to provide on-demand communications to users on the ground in emergency scenarios. As a result, UAV base stations (UAV-BSs) have been considered an appropriate means of providing rapid connectivity in MC scenarios. In this paper, we study how to control multiple UAV-BSs in both static and dynamic environments. We use a system-level simulator to model an MC scenario in which a macro BS of a cellular network is out of service and multiple UAV-BSs are deployed using integrated access and backhaul (IAB) technology to provide coverage for users in the disaster area. With the data collected from the system-level simulation, a deep reinforcement learning algorithm is developed to jointly optimize the three-dimensional placement of the multiple UAV-BSs, which adapt their 3D locations to the movement of on-ground users. The evaluation results show that the proposed algorithm can support autonomous navigation of the UAV-BSs to meet the MC service requirements in terms of user throughput and drop rate.
Biological neural systems evolved to adapt to their ecological environment for efficiency and effectiveness, with neurons of heterogeneous structures and rich dynamics optimized to accomplish complex cognitive tasks. Most current research on biologically inspired spiking neural networks (SNNs), however, is grounded in a homogeneous neural coding scheme, which limits overall performance in terms of accuracy, latency, efficiency, and robustness. In this work, we argue that the network architecture should be designed holistically to incorporate diverse neuronal functions and neural coding schemes for best performance. As an early attempt in this direction, we put forward a hybrid neural coding framework that integrates multiple neural coding schemes discovered in neuroscience. We demonstrate that the proposed hybrid coding scheme achieves accuracy comparable to state-of-the-art SNNs with homogeneous neural coding on the CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets with fewer than eight time steps and at least 3.90x fewer computations. Furthermore, we demonstrate accurate, rapid, and robust sound source localization on the SoClas dataset. This study yields valuable insights into the performance of various hybrid neural coding designs and holds significant implications for designing high-performance SNNs.
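The abstract does not name the specific coding schemes it combines, so the sketch below uses two common ones from the SNN literature, rate coding and time-to-first-spike (latency) coding, purely to illustrate what feeding different input groups through different encoders in one hybrid front end could look like.

```python
import numpy as np

def rate_encode(x, n_steps=8, rng=None):
    """Rate coding: each input in [0, 1] becomes a Bernoulli spike train whose
    per-step firing probability equals the input intensity (illustrative)."""
    rng = rng or np.random.default_rng()
    return (rng.random((n_steps, *x.shape)) < x).astype(np.float32)

def latency_encode(x, n_steps=8):
    """Time-to-first-spike coding: stronger inputs fire earlier; each input
    emits exactly one spike at a latency inversely related to its value."""
    t_fire = np.round((1.0 - np.clip(x, 0.0, 1.0)) * (n_steps - 1)).astype(int)
    spikes = np.zeros((n_steps, *x.shape), dtype=np.float32)
    for idx, t in np.ndenumerate(t_fire):
        spikes[(t, *idx)] = 1.0
    return spikes

# A hybrid front end could route different input groups through different
# encoders, e.g. rate-coded pixels alongside latency-coded features.
x = np.array([0.1, 0.5, 0.9])
print(rate_encode(x, n_steps=8, rng=np.random.default_rng(0)))
print(latency_encode(x, n_steps=8))
```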
The study assesses the impact of cloud-based microservices architectures on application performance. Several aspects of performance evaluation are discussed, including response time, throughput, scalability, and reliability. This article examines the advantages and challenges of adopting a cloud-based approach. It explores potential bottlenecks and issues in a microservices architecture and presents optimization techniques. With the help of case studies and empirical studies, it compares cloud-based microservices architectures with traditional monolithic architectures. Furthermore, the paper examines the challenges of monitoring and troubleshooting distributed microservices. In conclusion, it emphasizes the importance of planning, designing, and testing during the adoption of cloud-based microservices.
The deployment of Internet of Things (IoT) devices and data fusion techniques has gained popularity in public and government domains. This usually requires capturing and consolidating data from multiple sources. Because the datasets do not necessarily originate from identical sensors, the fused data typically pose a complex data problem. As the military is investigating how heterogeneous IoT devices can aid its processes and tasks, we investigate a multi-sensor approach. We propose a signal-to-image encoding approach that transforms and fuses data from IoT wearable devices into an image; the encoding is invertible and easier to visualize, supporting decision making. Furthermore, we investigate the challenge of enabling intelligent identification and detection, and demonstrate the feasibility of the proposed Deep Learning and Anomaly Detection models for supporting future applications that utilize hand-gesture data from wearable devices.
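The abstract does not specify the signal-to-image encoding, so the sketch below uses a deliberately simple reshape-and-rescale mapping as a stand-in; it is trivially invertible, which is the property the abstract emphasizes, but it is not claimed to be the authors' actual transform.

```python
import numpy as np

def signal_to_image(signal, width):
    """Map a 1-D sensor signal to a 2-D grayscale image by reshaping and
    min-max scaling to [0, 255]. Placeholder encoding, chosen only because it
    is trivially invertible; the paper's actual encoding may differ."""
    pad = (-len(signal)) % width
    padded = np.pad(signal, (0, pad))                 # zero-pad to a full grid
    lo, hi = padded.min(), padded.max()
    img = (padded - lo) / (hi - lo + 1e-12) * 255.0
    return img.reshape(-1, width), (lo, hi, len(signal))

def image_to_signal(img, meta):
    """Invert the encoding using the stored scaling parameters and length."""
    lo, hi, n = meta
    flat = img.reshape(-1) / 255.0 * (hi - lo + 1e-12) + lo
    return flat[:n]

# Round-trip on a toy wearable-sensor trace (hypothetical accelerometer samples)
sig = np.sin(np.linspace(0, 8 * np.pi, 500)) + 0.1 * np.random.default_rng(1).standard_normal(500)
img, meta = signal_to_image(sig, width=32)
recovered = image_to_signal(img, meta)
print(np.allclose(sig, recovered))
```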
Computer architecture and systems have long been optimized to enable efficient execution of machine learning (ML) algorithms and models. It is now time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, grouped into two major categories: ML-based modelling, which involves predictions of performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies according to their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a vision of future opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing their memory requirements, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, and disadvantages of the techniques in each category, along with potential solutions to their problems. We also discuss new evaluation metrics as a guideline for future research.
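As a concrete illustration of the first category, the sketch below applies magnitude pruning followed by symmetric uniform 8-bit quantization to a toy weight matrix; the sparsity target and bit width are arbitrary choices, not recommendations from the survey.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights until the target sparsity is reached."""
    k = int(weights.size * sparsity)
    threshold = np.sort(np.abs(weights), axis=None)[k]
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def uniform_quantize(weights, n_bits=8):
    """Symmetric uniform quantization to signed n-bit integers plus a scale factor."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(42)
w = rng.standard_normal((64, 64)).astype(np.float32)   # toy weight matrix
w_pruned = magnitude_prune(w, sparsity=0.5)
q, scale = uniform_quantize(w_pruned, n_bits=8)
w_dequant = q.astype(np.float32) * scale                # what inference would use
print(f"sparsity: {np.mean(w_pruned == 0):.2f}, quant error: {np.abs(w_pruned - w_dequant).max():.4f}")
```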
This paper focuses on two fundamental tasks of graph analysis: community detection and node representation learning, which capture the global and local structures of graphs, respectively. In the existing literature, these two tasks are usually studied independently, although they are in fact highly correlated. We propose a probabilistic generative model, called vGraph, to learn community membership and node representations collaboratively. Specifically, we assume that each node can be represented as a mixture of communities, and each community is defined as a multinomial distribution over nodes. Both the mixing coefficients and the community distributions are parameterized by the low-dimensional representations of the nodes and communities. We design an effective variational inference algorithm that regularizes the community memberships of neighboring nodes to be similar in the latent space. Experimental results on multiple real-world graphs show that vGraph is very effective in both community detection and node representation learning, outperforming competitive baselines on both tasks. We also show that the vGraph framework is quite flexible and can easily be extended to detect hierarchical communities.
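Below is a minimal sketch of the generative parameterization described above, assuming softmax parameterizations over embedding dot products and arbitrary sizes; the variational inference procedure and the neighborhood smoothness regularizer are omitted.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_nodes, n_comms, dim = 100, 5, 16          # assumed sizes, not the paper's settings

phi = rng.standard_normal((n_nodes, dim))   # node embeddings
psi = rng.standard_normal((n_comms, dim))   # community embeddings

# Each node is a mixture of communities: p(c | w) from node/community embeddings.
p_c_given_w = softmax(phi @ psi.T, axis=1)  # shape (n_nodes, n_comms)
# Each community is a multinomial distribution over nodes: p(v | c).
p_v_given_c = softmax(psi @ phi.T, axis=1)  # shape (n_comms, n_nodes)

def generate_edge(w):
    """Generative story for one edge: pick a community from the source node's
    mixture, then pick the linked node from that community's distribution."""
    c = rng.choice(n_comms, p=p_c_given_w[w])
    v = rng.choice(n_nodes, p=p_v_given_c[c])
    return c, v

# Community detection then reduces to the most likely community per node.
assignments = p_c_given_w.argmax(axis=1)
print(generate_edge(0), assignments[:10])
```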
This paper reviews recent studies on understanding neural-network representations and on learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been their Achilles' heel. At present, deep neural networks obtain high discrimination power at the cost of low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from very few annotations, learning via human-computer communication at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs) and revisit the visualization of CNN representations, methods for diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, the learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.