During the last decade, wireless data services have had an incredible impact on people's lives in ways we could never have imagined. The number of mobile devices has increased exponentially and data traffic has almost doubled every year. Undoubtedly, the rate of growth will continue to be rapid with the explosive increase in demands for higher data rates, lower latency, massive connectivity, network reliability, and energy efficiency. In order to manage this level of growth and meet these requirements, the fifth-generation (5G) mobile communications network is envisioned as a revolutionary advancement combining various improvements to previous generations of mobile networks and new technologies, including the use of millimeter wavebands (mm-wave), massive multiple-input multiple-output (mMIMO) multi-beam antennas, network densification, dynamic Time Division Duplex (TDD) transmission, and new waveforms with mixed numerologies. New revolutionary features, including terahertz (THz) communications and the integration of Non-Terrestrial Networks (NTN), can further improve the performance and signal quality of future 6G networks. However, despite the inevitable benefits of all these key technologies, the heterogeneous and ultra-flexible structure of the 5G and beyond network brings non-orthogonality into the system and generates significant interference that needs to be handled carefully. Therefore, it is essential to design effective interference management schemes to mitigate severe and sometimes unpredictable interference in mobile networks. In this paper, we provide a comprehensive review of interference management in 5G and beyond networks and discuss its future evolution. We start with a unified classification and a detailed explanation of the different types of interference and continue by presenting our taxonomy of existing interference management approaches. Then, after explaining interference measurement reports and signaling, we provide, for each type of interference identified, an in-depth literature review and technical discussion of appropriate management schemes. We finish by discussing the main interference challenges that will be encountered in future 6G networks and by presenting insights on the suggested new interference management approaches, including useful guidelines for an AI-based solution. This review provides a first-hand guide for industry in determining the most relevant technologies for interference management, and also allows for consideration of future challenges and research directions.
To support the development of internet-of-things applications, an enormous population of low-power devices is expected to be incorporated into wireless networks to perform sensing and communication tasks. As a key technology for improving data collection efficiency, integrated sensing and communication (ISAC) enables simultaneous data transmission and radar sensing by reusing the same radio signals. In addition to carrying information, wireless signals can also deliver energy, which enables simultaneous wireless information and power transfer (SWIPT). To improve energy and spectrum efficiency, the advantages of ISAC and SWIPT are expected to be exploited jointly, leading to the emerging technology of integrated sensing, communication, and power transfer (ISCPT). In this article, a timely overview of ISCPT is provided, covering its fundamentals, the characterization of its theoretical performance boundary, a discussion of the key enabling technologies, and a demonstration of an implementation platform.
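To make the SWIPT ingredient above concrete, the following minimal Python sketch evaluates the textbook power-splitting receiver model, where a fraction of the received power feeds the information decoder and the rest feeds the energy harvester. The transmit power, channel gain, noise level, and energy-conversion efficiency are illustrative assumptions, not values from the article.

```python
import numpy as np

# Minimal power-splitting SWIPT illustration (parameters are assumptions):
# a fraction rho of the received power goes to the information decoder,
# the remaining (1 - rho) goes to the energy harvester.

def swipt_power_splitting(p_tx_w, channel_gain, rho, noise_w=1e-9, eta=0.6, bandwidth_hz=1e6):
    """Return (achievable rate in bit/s, harvested power in W) for one receiver."""
    p_rx = p_tx_w * channel_gain                                   # received signal power
    rate = bandwidth_hz * np.log2(1.0 + rho * p_rx / noise_w)      # Shannon rate of the decoding branch
    harvested = eta * (1.0 - rho) * p_rx                           # linear energy-harvesting model
    return rate, harvested

# Sweep the splitting ratio to expose the rate-energy trade-off.
for rho in (0.2, 0.5, 0.8):
    r, e = swipt_power_splitting(p_tx_w=1.0, channel_gain=1e-6, rho=rho)
    print(f"rho={rho:.1f}: rate={r / 1e6:.2f} Mbit/s, harvested={e * 1e6:.2f} uW")
```

The sweep shows the basic tension ISCPT has to manage: increasing the decoding share raises the rate but shrinks the harvested energy, and vice versa.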
With the recent growth in demand for large-scale deep neural networks, compute-in-memory (CiM) has emerged as a prominent solution to alleviate the bandwidth and on-chip interconnect bottlenecks that constrain von Neumann architectures. However, constructing CiM hardware is challenging because a specific memory hierarchy, in terms of cache sizes and memory bandwidth at different interfaces, may not be ideally matched to a given neural network's attributes, such as tensor dimensions and arithmetic intensity, leading to suboptimal and under-performing systems. Despite the success of neural architecture search (NAS) techniques in yielding efficient sub-networks for a given hardware metric budget (e.g., DNN execution time or latency), these techniques assume the hardware configuration to be frozen, often yielding suboptimal sub-networks for a given budget. In this paper, we present CiMNet, a framework that jointly searches for optimal sub-networks and hardware configurations for CiM architectures, creating a Pareto-optimal frontier of downstream task accuracy and execution metrics (e.g., latency). The proposed framework can comprehend the complex interplay between a sub-network's performance and the CiM hardware configuration choices, including bandwidth, processing element size, and memory size. Exhaustive experiments on model architectures from both the CNN and Transformer families demonstrate the efficacy of CiMNet in finding co-optimized sub-networks and CiM hardware configurations. Specifically, for ImageNet classification accuracy similar to the baseline ViT-B, optimizing only the model architecture increases performance (i.e., reduces workload execution time) by 1.7x, while optimizing both the model architecture and the hardware configuration increases it by 3.1x.
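The abstract above does not spell out the search procedure, so the sketch below illustrates the joint-search idea with a plain random search over hypothetical sub-network and CiM hardware knobs and stand-in accuracy/latency evaluators, keeping only the Pareto-optimal configurations. The search spaces, the `evaluate` proxy, and all parameter names are assumptions for illustration, not CiMNet's actual components.

```python
import random

# Hypothetical joint search over (sub-network, CiM hardware) configurations.
# Both search spaces and the evaluator are illustrative stand-ins.

SUBNET_SPACE = {"depth": [6, 9, 12], "width": [256, 384, 512]}
HW_SPACE = {"pe_array": [32, 64, 128], "sram_kb": [256, 512, 1024], "bandwidth_gbs": [16, 32, 64]}

def sample(space):
    return {k: random.choice(v) for k, v in space.items()}

def evaluate(subnet, hw):
    # Placeholder proxy models: a real framework would use trained predictors
    # or measurements for task accuracy and workload execution time.
    accuracy = 70 + 0.5 * subnet["depth"] + subnet["width"] / 100
    latency = subnet["depth"] * subnet["width"] / (hw["pe_array"] * hw["bandwidth_gbs"]) \
              + max(0, subnet["width"] - hw["sram_kb"]) * 0.01
    return accuracy, latency

def dominates(a, b):
    # a dominates b if it is no worse on both objectives and strictly better on one.
    return a["acc"] >= b["acc"] and a["lat"] <= b["lat"] and (a["acc"] > b["acc"] or a["lat"] < b["lat"])

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

candidates = []
for _ in range(200):
    subnet, hw = sample(SUBNET_SPACE), sample(HW_SPACE)
    acc, lat = evaluate(subnet, hw)
    candidates.append({"subnet": subnet, "hw": hw, "acc": acc, "lat": lat})

for p in sorted(pareto_front(candidates), key=lambda x: x["lat"]):
    print(f"acc={p['acc']:.1f}  lat={p['lat']:.3f}  {p['subnet']}  {p['hw']}")
```

Even with these toy evaluators, the frontier makes the central point visible: the best hardware configuration depends on which sub-network is chosen, so freezing either side of the search sacrifices Pareto-optimal points.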
Motivated by real-world applications such as the allocation of public housing, we examine the problem of assigning a group of agents to vertices (e.g., spatial locations) of a network so that the diversity level is maximized. Specifically, agents are of two types (characterized by features), and we measure diversity by the number of agents who have at least one neighbor of a different type. This problem is known to be NP-hard, and we focus on developing approximation algorithms with provable performance guarantees. We first present a local-improvement algorithm for general graphs that provides an approximation factor of 1/2. For the special case where the sizes of the agent subgroups are similar, we present a randomized approach based on semidefinite programming that yields an approximation factor better than 1/2. Further, we show that the problem can be solved efficiently when the underlying graph is treewidth-bounded, and we obtain a polynomial-time approximation scheme (PTAS) for the problem on planar graphs. Lastly, we conduct experiments to evaluate the performance of the proposed algorithms on synthetic and real-world networks.
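As a hedged illustration of the objective and of one natural local-improvement heuristic, the Python sketch below swaps pairs of differently typed agents whenever doing so increases the diversity count; the swap neighborhood and the toy instance are assumptions, and the paper's 1/2-approximation algorithm may use a different local move.

```python
from itertools import combinations

# Sketch of a local-improvement heuristic for the diversity problem:
# vertices hold agents of type 0 or 1, and the objective counts agents
# with at least one neighbor of the other type.

def diversity(adj, types):
    return sum(1 for v in adj if any(types[u] != types[v] for u in adj[v]))

def local_improvement(adj, types):
    best = diversity(adj, types)
    improved = True
    while improved:
        improved = False
        for u, v in combinations(sorted(adj), 2):
            if types[u] == types[v]:
                continue
            types[u], types[v] = types[v], types[u]      # try swapping the two agents
            score = diversity(adj, types)
            if score > best:
                best, improved = score, True
            else:
                types[u], types[v] = types[v], types[u]  # undo a non-improving swap
    return types, best

# A 6-cycle with a poor initial placement of three type-0 and three type-1 agents.
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
types = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
assignment, score = local_improvement(adj, dict(types))
print(score, assignment)   # the initial placement scores 4; swaps reach the optimum of 6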
In a misspecified social learning setting, agents are condescending if they perceive their peers' private information as being of lower quality than it actually is. Applying this to a standard sequential model, we show that outcomes improve when agents are mildly condescending. In contrast, too much condescension leads to worse outcomes, as does anti-condescension.
Internet of things (IoT)-based wireless sensor networks (WSNs) face an energy shortage challenge that can be overcome by the emerging wireless power transfer (WPT) technology. The combination of WSNs and WPT is known as wireless rechargeable sensor networks (WRSNs), in which charging efficiency and charging scheduling are the primary concerns. This paper therefore proposes a probabilistic on-demand charging scheduling scheme for integrated sensing and communication (ISAC)-assisted WRSNs with multiple mobile charging vehicles (MCVs) that addresses three aspects. First, it considers four attributes, together with their probability distributions, to balance the charging load on each MCV: the residual energy of the charging node, the distance from the MCV to the charging node, the degree of the charging node, and the betweenness centrality of the charging node. Second, it applies an efficient charging factor strategy to partially charge network nodes. Finally, it employs the ISAC concept to utilize wireless resources efficiently, reducing the traveling cost of each MCV and avoiding charging conflicts between MCVs. Simulation results show that the proposed protocol outperforms cutting-edge protocols in terms of energy usage efficiency, charging delay, survival rate, and travel distance.
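The abstract names the four attributes but not how they are combined, so the sketch below assumes a simple scheme, purely for illustration: normalize each attribute, invert the ones where smaller values mean higher urgency, and turn an equally weighted average into a scheduling probability per node. The weights, normalization, and example values are all assumptions, not the protocol's actual distributions.

```python
# Hypothetical scoring of charging requests from the four attributes named
# in the abstract; equal weighting and min-max normalization are assumptions.

def normalize(values, invert=False):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(hi - v) / span if invert else (v - lo) / span for v in values]

def charging_priorities(nodes):
    """nodes: list of dicts with residual_energy, distance_to_mcv, degree, betweenness."""
    energy = normalize([n["residual_energy"] for n in nodes], invert=True)  # low energy  -> urgent
    dist   = normalize([n["distance_to_mcv"] for n in nodes], invert=True)  # near MCV    -> cheap to serve
    deg    = normalize([n["degree"] for n in nodes])                        # hub nodes matter more
    betw   = normalize([n["betweenness"] for n in nodes])                   # relay nodes matter more
    scores = [0.25 * (e + d + g + b) for e, d, g, b in zip(energy, dist, deg, betw)]
    total = sum(scores) or 1.0
    return [s / total for s in scores]   # probability of scheduling each node next

nodes = [
    {"id": "n1", "residual_energy": 0.2, "distance_to_mcv": 40.0, "degree": 5, "betweenness": 0.30},
    {"id": "n2", "residual_energy": 0.7, "distance_to_mcv": 10.0, "degree": 2, "betweenness": 0.05},
    {"id": "n3", "residual_energy": 0.4, "distance_to_mcv": 25.0, "degree": 4, "betweenness": 0.20},
]
for n, p in zip(nodes, charging_priorities(nodes)):
    print(n["id"], round(p, 3))
```

In this toy instance the nearly depleted, well-connected node receives the highest scheduling probability, which is the qualitative behavior the attribute list is meant to encourage.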
We analyze how secure a block is after it becomes k-deep, i.e., its security-latency, for Nakamoto consensus under an exponential network delay model. We give parameter regimes for which transactions are safe when sufficiently deep in the chain. We compare our results with those for Nakamoto consensus under bounded network delay models and obtain analogous bounds for the safety violation threshold. Next, modeling the blockchain system as a batch service queue with exponential network delay, we connect the security-latency analysis to the sustainable transaction rate of the queue system. Because our model assumes exponential network delay, batch service queue models yield a meaningful trade-off between transaction capacity, security, and latency. Since the adversary can attack the queue service to hamper the service process, we consider two different adversarial attacks. In an extreme scenario, we modify the selfish-mining attack for this purpose and consider its effect on the sustainable transaction rate of the queue.
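As a rough companion to the k-deep safety question, the sketch below Monte Carlo-estimates the probability that a private attacker's chain overtakes a block once it is k-deep, using the classic zero-delay race with exponential inter-block times and the textbook gambler's-ruin catch-up probability. The mining-power share and confirmation depths are illustrative, and this simplification ignores the paper's exponential network delay model entirely.

```python
import random

# Monte Carlo sketch of the classic private-attack race (no network delay),
# offered only as baseline intuition for k-deep safety; the paper's bounds
# under exponential network delay are more involved.

def violation_prob(beta, k, trials=100_000):
    """Estimate P(adversary's private chain ever overtakes a k-deep block).

    beta: adversary's fraction of total mining power (assumed < 0.5). With
    exponential inter-block times, each new block is adversarial w.p. beta.
    """
    p, q = 1.0 - beta, beta
    violations = 0
    for _ in range(trials):
        honest = adv = 0
        while honest < k:                 # race until the honest chain gains k blocks
            if random.random() < beta:
                adv += 1
            else:
                honest += 1
        deficit = k - adv
        # Gambler's-ruin catch-up probability from the remaining deficit.
        if deficit <= 0 or random.random() < (q / p) ** deficit:
            violations += 1
    return violations / trials

for k in (1, 3, 6, 10):
    print(k, round(violation_prob(beta=0.25, k=k), 4))
```

The estimated violation probability decays geometrically in k, which is the qualitative shape the security-latency analysis refines under realistic delay models.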
The new era of technology has brought us to a point where it is convenient for people to share their opinions across an abundance of platforms. These platforms allow users to express themselves through multiple forms of representation, including text, images, videos, and audio. This, however, makes it difficult for users to obtain all the key information about a topic, making automatic multi-modal summarization (MMS) an essential task. In this paper, we present a comprehensive survey of existing research in the area of MMS.
Recent years have witnessed significant advances in technologies and services for modern network applications, including smart grid management, wireless communication, cybersecurity, and multi-agent autonomous systems. Given the heterogeneous nature of networked entities, emerging network applications call for game-theoretic models and learning-based approaches to create distributed network intelligence that responds to uncertainties and disruptions in dynamic or adversarial environments. This paper articulates the confluence of networks, games, and learning, which establishes a theoretical underpinning for understanding multi-agent decision-making over networks. We provide a selective overview of game-theoretic learning algorithms within the framework of stochastic approximation theory, together with applications in representative contexts of modern network systems, such as next-generation wireless communication networks, the smart grid, and distributed machine learning. Beyond existing research on game-theoretic learning over networks, we highlight several new angles and research endeavors on learning in games that are related to recent developments in artificial intelligence; some of these angles extrapolate from our own research interests. The overall objective of the paper is to provide the reader with a clear picture of the strengths and challenges of adopting game-theoretic learning methods in the context of network systems, and to identify fruitful future research directions in both theoretical and applied studies.
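To ground the phrase "game-theoretic learning within the framework of stochastic approximation," here is a small, self-contained example of one such algorithm, smoothed fictitious play on matching pennies. It is a textbook illustration chosen for brevity, not an algorithm taken from this paper; the payoff matrix, temperature, and horizon are assumptions.

```python
import numpy as np

# Smoothed fictitious play on matching pennies: each player smooth-best-responds
# to the empirical frequency of the opponent's past actions.

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs; column player gets -A

def logit_response(payoffs, temperature=0.1):
    z = payoffs / temperature
    z -= z.max()                            # numerical stability
    p = np.exp(z)
    return p / p.sum()

rng = np.random.default_rng(0)
count_row = np.ones(2)                      # empirical action counts (ones avoid empty beliefs)
count_col = np.ones(2)
for _ in range(5000):
    belief_col = count_col / count_col.sum()        # row player's belief about column play
    belief_row = count_row / count_row.sum()        # column player's belief about row play
    a_row = rng.choice(2, p=logit_response(A @ belief_col))
    a_col = rng.choice(2, p=logit_response(-A.T @ belief_row))
    count_row[a_row] += 1
    count_col[a_col] += 1

print("empirical row strategy:", np.round(count_row / count_row.sum(), 3))
print("empirical col strategy:", np.round(count_col / count_col.sum(), 3))
# Both empirical strategies approach the mixed Nash equilibrium (0.5, 0.5).
```

The empirical play converging to the mixed equilibrium is exactly the kind of long-run behavior that the stochastic approximation viewpoint analyzes via the associated mean-field dynamics.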
Ensembles over neural network weights trained from different random initializations, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter-efficient. In this paper, we design ensembles not only over weights but also over hyperparameters to improve the state of the art in both settings. For the best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter-efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than those of typical ensembles. On image classification tasks with MLP, LeNet, and Wide ResNet 28-10 architectures, our methodology improves upon both deep and batch ensembles.
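The abstract describes hyper-deep ensembles as a random hyperparameter search stratified across random initializations; the sketch below mirrors that recipe with a placeholder training stub and a naive top-k member selection. The hyperparameter ranges, `train_and_eval` scoring, and selection rule are assumptions, and the paper's procedure (e.g., scoring the joint ensemble rather than individual members) is more careful.

```python
import random

# Sketch of the hyper-deep ensembles recipe: random hyperparameter search,
# each candidate trained from several random initializations, then a simple
# selection of ensemble members. train_and_eval stands in for real training.

def sample_hparams(rng):
    return {"lr": 10 ** rng.uniform(-4, -1),
            "weight_decay": 10 ** rng.uniform(-6, -3),
            "dropout": rng.uniform(0.0, 0.5)}

def train_and_eval(hparams, seed):
    """Placeholder: return (member_descriptor, validation_score) for one run."""
    noise = random.Random(seed).uniform(-0.01, 0.01)
    score = 0.9 - abs(hparams["dropout"] - 0.2) * 0.1 + noise
    return {"hparams": hparams, "seed": seed}, score

def hyper_deep_ensemble(n_hparams=8, n_inits=3, ensemble_size=5, seed=0):
    rng = random.Random(seed)
    pool = []
    for _ in range(n_hparams):                       # random hyperparameter search ...
        hp = sample_hparams(rng)
        for init in range(n_inits):                  # ... stratified across random initializations
            pool.append(train_and_eval(hp, init))
    pool.sort(key=lambda m: m[1], reverse=True)      # naive member selection by validation score
    return [member for member, _ in pool[:ensemble_size]]

for member in hyper_deep_ensemble():
    print(round(member["hparams"]["dropout"], 3), member["seed"])
```

Even in this stub form, the structure shows where the two sources of diversity enter: the outer loop varies hyperparameters while the inner loop varies initializations, so the selected ensemble can mix both.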
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs on low-power devices with limited compute resources. Recent research improves DNN models by reducing memory requirements, energy consumption, and the number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems of the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
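To make category (1) concrete, the following toy NumPy example applies magnitude-based pruning followed by symmetric 8-bit quantization to a random weight matrix. The 50% pruning ratio and per-tensor scale are illustrative choices only; real low-power pipelines combine these steps with fine-tuning or quantization-aware training.

```python
import numpy as np

# Toy illustration of parameter pruning and quantization (category 1):
# magnitude-based pruning followed by symmetric per-tensor int8 quantization.

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(4, 8)).astype(np.float32)

# Prune the 50% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Symmetric per-tensor int8 quantization of the surviving weights.
scale = np.abs(pruned).max() / 127.0
q = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale

sparsity = float((pruned == 0).mean())
error = float(np.abs(dequantized - pruned).max())
print(f"sparsity={sparsity:.2f}, max dequantization error={error:.5f}")
```

The printed sparsity and reconstruction error summarize the trade-off these techniques exploit: fewer stored parameters and cheaper arithmetic at the cost of a small, controlled approximation error.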