We consider networks of small, autonomous devices that communicate with each other wirelessly. Minimizing energy usage is an important consideration in designing algorithms for such networks, as battery life is a crucial and limited resource. Working in a model where both sending and listening for messages deplete energy, we consider the problem of finding a maximal matching of the nodes in a radio network of arbitrary and unknown topology. We present a distributed randomized algorithm that produces, with high probability, a maximal matching. The maximum energy cost per node is $O(\log^2 n)$, where $n$ is the size of the network. The total latency of our algorithm is $O(n \log n)$ time steps. We observe that there exist families of network topologies for which both of these bounds are simultaneously optimal up to polylog factors, so any significant improvement will require additional assumptions about the network topology. We also consider the related problem of assigning, for each node in the network, a neighbor to back up its data in case of node failure. Here, a key goal is to minimize the maximum load, defined as the number of nodes assigned to a single node. We present a decentralized low-energy algorithm that finds a neighbor assignment whose maximum load is at most a polylog($n$) factor bigger than the optimum.
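To make the object being computed concrete, the following is a minimal sequential greedy sketch of a maximal matching over an edge list. It is a centralized illustration only and does not capture the paper's distributed, low-energy radio-network algorithm or its energy and latency analysis.

```python
def greedy_maximal_matching(edges):
    """Sequential greedy maximal matching over an edge list.

    After the loop, every remaining edge has at least one matched endpoint,
    so no edge can be added -- the defining property of a maximal matching.
    This is only a centralized baseline, not the paper's distributed
    radio-network algorithm.
    """
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

print(greedy_maximal_matching([(1, 2), (2, 3), (3, 4), (4, 5)]))
# [(1, 2), (3, 4)]: maximal, since every skipped edge touches a matched node
```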
End-to-End (E2E) network slicing enables wireless networks to provide diverse services on a common infrastructure. Each E2E slice, including resources of the radio access network (RAN) and core network, is rented to mobile virtual network operators (MVNOs) to provide a specific service to end-users. RAN slicing, which is realized through wireless network virtualization, involves sharing the frequency spectrum and base station antennas in the RAN. Similarly, in core slicing, which is achieved by network function virtualization, data center resources such as commodity servers and physical links are shared between users of different MVNOs. In this paper, we study E2E slicing with the aim of minimizing the total energy consumption. The resulting optimization problem is non-convex, and we propose a sub-optimal algorithm to solve it. The simulation results show that our proposed joint power control, server and link allocation (JPSLA) algorithm achieves a 30% improvement compared to the disjoint scheme, where the RAN and core are sliced separately.
Low-rank approximations are essential in modern data science. The interpolative decomposition provides one such approximation. Its distinguishing feature is that it reuses columns from the original matrix, which enables it to preserve matrix properties such as sparsity and non-negativity, and also reduces memory requirements. In this work, we introduce two optimized algorithms to construct an interpolative decomposition, along with numerical evidence that they outperform the current state of the art.
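For readers unfamiliar with the decomposition, the sketch below builds a rank-$k$ interpolative decomposition from a column-pivoted QR factorization. This is a standard textbook construction shown for illustration, not the optimized algorithms introduced in the paper; the function and variable names are ours.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def column_id(A, k):
    """Rank-k interpolative decomposition via column-pivoted QR.

    Returns indices J of k selected columns and a k-by-n matrix P such that
    A is approximately A[:, J] @ P, where P contains the k-by-k identity in
    the columns indexed by J (so the selected columns are reused exactly).
    """
    _, R, piv = qr(A, mode='economic', pivoting=True)
    # Express each discarded column in terms of the selected (skeleton) columns.
    T = solve_triangular(R[:k, :k], R[:k, k:])
    P = np.zeros((k, A.shape[1]))
    P[:, piv[:k]] = np.eye(k)
    P[:, piv[k:]] = T
    return piv[:k], P

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30)) @ rng.standard_normal((30, 100))  # rank ~30
J, P = column_id(A, 30)
print(np.linalg.norm(A - A[:, J] @ P) / np.linalg.norm(A))  # small residual
```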
Unmanned aerial vehicles (UAVs) are becoming a viable platform for sensing and estimation in a wide variety of applications including disaster response, search and rescue, and security monitoring. These sensing UAVs have limited battery and computational capabilities, and thus must offload their data so it can be processed to provide actionable intelligence. We consider a compute platform consisting of a limited number of highly-resourced UAVs that act as mobile edge computing (MEC) servers to process the workload on premises. We propose a novel distributed solution to the collaborative processing problem that adaptively positions the MEC UAVs in response to the changing workload arising both from the sensing UAVs' mobility and from task generation. Our solution consists of two key building blocks: (1) an efficient workload estimation process by which the UAVs estimate the task field, a continuous approximation of the number of tasks to be processed at each location in the airspace, and (2) a distributed optimization method by which the UAVs partition the task field so as to maximize the system throughput. We evaluate our proposed solution using realistic models of surveillance UAV mobility and show that our method achieves up to 28% improvement in throughput over a non-adaptive baseline approach.
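As a purely illustrative stand-in for the partitioning step, the sketch below assigns each estimated task location to its nearest MEC UAV, i.e., a Voronoi-style partition. The paper instead partitions a continuous task field with a distributed, throughput-maximizing optimization, which this toy rule does not capture; all names and numbers here are assumptions.

```python
import numpy as np

def nearest_uav_partition(task_locations, uav_positions):
    """Assign each task to the nearest MEC UAV (a Voronoi-style partition).

    Illustrative stand-in only: the paper partitions a continuous task field
    via distributed optimization; this nearest-server rule just shows the
    partitioning step in its simplest possible form.
    """
    d = np.linalg.norm(task_locations[:, None, :] - uav_positions[None, :, :], axis=2)
    return d.argmin(axis=1)  # index of the serving UAV for each task

tasks = np.random.rand(500, 2) * 100   # hypothetical task locations in a 100x100 area
uavs = np.random.rand(3, 2) * 100      # hypothetical MEC UAV positions
print(np.bincount(nearest_uav_partition(tasks, uavs), minlength=3))  # load per UAV
```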
With the rapid growth in the number of devices connecting to the Internet, especially in remote areas without cellular network infrastructure, space-air-ground integrated networks (SAGINs) have emerged to offload computation-intensive tasks. In this paper, we consider a SAGIN that includes multiple low-earth-orbit (LEO) satellites providing connections to the cloud server, an unmanned aerial vehicle (UAV), and nearby base stations (BSs) providing edge computing services. The UAV flies along a fixed trajectory to collect tasks generated by Internet of Things (IoT) devices, and forwards these tasks to a BS or the cloud server for further processing. To facilitate efficient processing, the UAV needs to decide where to offload as well as the proportion of offloaded tasks. However, in practice, due to the variability of the environment and of actual demand, the number of arriving tasks is uncertain. If deterministic optimization is used to develop the offloading strategy, unnecessary system overhead or a higher task drop rate may occur, which severely degrades system robustness. To address this issue, we characterize the uncertainty with a data-driven approach and formulate a distributionally robust optimization problem to minimize the expected energy-constrained system latency under the worst-case probability distribution. Furthermore, a distributionally robust latency optimization algorithm is proposed to obtain a suboptimal solution. Finally, we perform simulations on a real-world data set and compare with other benchmark schemes to verify the efficiency and robustness of our proposed algorithm.
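A generic distributionally robust formulation consistent with this description (the paper's exact ambiguity set, latency model, and energy constraint may differ) is
\[
\min_{\mathbf{x}} \; \max_{\mathbb{P} \in \mathcal{D}} \;
  \mathbb{E}_{\boldsymbol{\xi} \sim \mathbb{P}}\!\left[ T(\mathbf{x}, \boldsymbol{\xi}) \right]
\quad \text{s.t.} \quad
  \mathbb{E}_{\boldsymbol{\xi} \sim \mathbb{P}}\!\left[ E(\mathbf{x}, \boldsymbol{\xi}) \right] \le E_{\max}
  \;\; \forall\, \mathbb{P} \in \mathcal{D},
\]
where $\mathbf{x}$ collects the offloading decisions (offloading target and proportion), $\boldsymbol{\xi}$ models the uncertain task arrivals, $T$ and $E$ denote the system latency and energy consumption, and $\mathcal{D}$ is a data-driven ambiguity set of arrival distributions constructed from observed data.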
Gradient-based meta-learning methods are prone to overfitting on the meta-training set, and this behaviour is more prominent with large and complex networks. Moreover, large networks restrict the application of meta-learning models on low-power edge devices. While choosing smaller networks avoids these issues to a certain extent, it affects the overall generalization, leading to reduced performance. Clearly, there is an approximately optimal choice of network architecture that is best suited for every meta-learning problem; however, identifying it beforehand is not straightforward. In this paper, we present MetaDOCK, a task-specific dynamic kernel selection strategy for designing compressed CNN models that generalize well on unseen tasks in meta-learning. Our method is based on the hypothesis that for a given set of similar tasks, not all kernels of the network are needed by each individual task. Rather, each task uses only a fraction of the kernels, and the selection of the kernels per task can be learnt dynamically as part of the inner update steps. MetaDOCK compresses the meta-model as well as the task-specific inner models, thus providing a significant reduction in model size for each task, and by constraining the number of active kernels for every task, it implicitly mitigates the issue of meta-overfitting. We show that for the same inference budget, pruned versions of large CNN models obtained using our approach consistently outperform the conventional choices of CNN models. MetaDOCK couples well with popular meta-learning approaches such as iMAML. The efficacy of our method is validated on the CIFAR-fs and mini-ImageNet datasets, where we observe that our approach can improve model accuracy by up to 2% on standard meta-learning benchmarks while reducing the model size by more than 75%.
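To illustrate the idea of per-task kernel selection (a minimal sketch of ours, not the MetaDOCK architecture or objective), the layer below multiplies each output kernel of a convolution by a sigmoid gate; adapting the gate logits in the inner loop and pushing gates toward zero effectively prunes kernels for that task.

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Convolution whose output kernels are scaled by learnable gates.

    The gate logits are meant to be adapted per task in the inner loop;
    a gate driven toward zero effectively prunes that kernel for the task.
    Illustrative stand-in only, not the MetaDOCK selection mechanism.
    """
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.gate_logits = nn.Parameter(torch.zeros(out_ch))  # task-adapted

    def forward(self, x):
        gates = torch.sigmoid(self.gate_logits).view(1, -1, 1, 1)
        return gates * self.conv(x)

layer = GatedConv2d(3, 16)
x = torch.randn(4, 3, 32, 32)
print(layer(x).shape)  # torch.Size([4, 16, 32, 32])
```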
Visible light communication (VLC) is envisioned as a core component of future wireless communication networks due to, among other factors, the huge unlicensed bandwidth it offers and the fact that it does not cause any interference to existing radio frequency (RF) communication systems. Most research on RF and VLC coexistence has focused on hybrid designs where data transmission to any user could originate from either an RF or a VLC access point (AP). However, hybrid RF/VLC systems fail to exploit the distinct transmission characteristics of RF and VLC systems to fully reap the benefits they can offer. Aggregated RF/VLC systems, in which any user can be served simultaneously by both RF and VLC APs, have recently emerged as a more promising and robust design for the coexistence of RF and VLC systems. To this end, this paper, for the first time, investigates AP assignment, subchannel allocation (SA), and transmit power allocation (PA) to optimize the energy efficiency (EE) of aggregated RF/VLC systems while considering the effects of interference and VLC line-of-sight link blockages. A novel and challenging EE optimization problem is formulated for which an efficient joint solution based on alternating optimization is developed. In particular, an energy-efficient AP assignment algorithm based on matching theory is proposed. Then, a low-complexity SA scheme that allocates subchannels to users based on their channel conditions is developed. Finally, an effective PA algorithm is presented by utilizing the quadratic transform approach and a multi-objective optimization framework. Extensive simulation results reveal that: 1) the proposed joint AP assignment, SA, and PA solution obtains significant EE, sum-rate, and outage performance gains with low complexity, and 2) the aggregated RF/VLC system provides considerable performance improvement compared to hybrid RF/VLC systems.
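The quadratic transform used in the PA step is a standard tool for sum-of-ratios objectives such as energy efficiency. In generic form (the specific rate and power expressions of the aggregated RF/VLC links are not spelled out here), each ratio is replaced by introducing an auxiliary variable $y_i$:
\[
\max_{\mathbf{p}} \sum_i \frac{R_i(\mathbf{p})}{P_i(\mathbf{p})}
\;\;\Longleftrightarrow\;\;
\max_{\mathbf{p}, \mathbf{y}} \sum_i \left( 2 y_i \sqrt{R_i(\mathbf{p})} - y_i^2 P_i(\mathbf{p}) \right),
\]
where $R_i$ and $P_i$ denote the achievable rate and consumed power of user $i$. The auxiliary variables admit the closed-form update $y_i^\star = \sqrt{R_i(\mathbf{p})}/P_i(\mathbf{p})$, alternated with the power update, which is a concave subproblem whenever $R_i$ is concave and $P_i$ is convex in $\mathbf{p}$.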
We propose a geometric scattering-based graph neural network (GNN) for approximating solutions of the NP-hard maximum clique (MC) problem. We construct a loss function with two terms, one which encourages the network to find a large set of nodes and the other which acts as a surrogate for the constraint that the nodes form a clique. We then use this loss to train a novel GNN architecture that outputs a vector representing the probability for each node to be part of the MC and apply a rule-based decoder to make our final prediction. The incorporation of the scattering transform alleviates the so-called oversmoothing problem that is often encountered in GNNs and would degrade the performance of our proposed setup. Our empirical results demonstrate that our method outperforms representative GNN baselines in terms of solution accuracy and inference speed as well as conventional solvers like GUROBI with limited time budgets.
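One plausible instantiation of the two-term loss described above (an assumption for illustration; the paper's exact surrogate may differ) uses the predicted membership probabilities $p_i \in [0,1]$:
\[
\mathcal{L}(\mathbf{p}) \;=\; -\sum_{i \in V} p_i \;+\; \beta \sum_{(i,j) \notin E,\; i < j} p_i\, p_j,
\]
where the first term rewards selecting many nodes, the second penalizes jointly selecting non-adjacent node pairs (a relaxation of the clique constraint), and $\beta > 0$ balances the two.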
As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
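As a concrete instance of the basic scheme such surveys build on, the snippet below implements uniform affine quantization of a tensor to a small number of integer bits. This is a minimal sketch only; practical pipelines add per-channel scales, calibration of the clipping range, and quantization-aware training.

```python
import numpy as np

def quantize_uniform(x, num_bits=4):
    """Uniform affine quantization of a float array to num_bits integers.

    Maps the observed range [x.min(), x.max()] onto {0, ..., 2^b - 1}
    and returns both the integer codes and their dequantized values.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = round(qmin - x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    dequant = (q - zero_point) * scale
    return q, dequant

x = np.random.randn(1000).astype(np.float32)
q, x_hat = quantize_uniform(x, num_bits=4)
print(np.abs(x - x_hat).max())  # worst-case rounding error, roughly scale / 2
```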
The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well as, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation, the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
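For concreteness, the snippet below shows one of the simplest sparsification steps discussed in this line of work, global magnitude pruning. It is a minimal sketch; the survey covers far richer schedules, structured sparsity patterns, and regrowth strategies.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Global magnitude pruning: zero out the smallest-magnitude weights.

    Keeps roughly a (1 - sparsity) fraction of the entries and returns the
    pruned weights along with the binary mask that was applied.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]      # k-th smallest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

w = np.random.randn(256, 256)
w_sparse, mask = magnitude_prune(w, sparsity=0.9)
print(mask.mean())  # fraction of weights kept, roughly 0.1
```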
Graph convolutional networks (GCNs) have been successfully applied to many graph-based applications; however, training a large-scale GCN remains challenging. Current SGD-based algorithms suffer from either a high computational cost that grows exponentially with the number of GCN layers, or a large space requirement for keeping the entire graph and the embedding of each node in memory. In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure. Cluster-GCN works as follows: at each step, it samples a block of nodes associated with a dense subgraph identified by a graph clustering algorithm, and restricts the neighborhood search to this subgraph. This simple but effective strategy leads to significantly improved memory and computational efficiency while achieving test accuracy comparable to previous algorithms. To test the scalability of our algorithm, we create a new Amazon2M dataset with 2 million nodes and 61 million edges, which is more than 5 times larger than the previous largest publicly available dataset (Reddit). For training a 3-layer GCN on this data, Cluster-GCN is faster than the previous state-of-the-art VR-GCN (1523 seconds vs. 1961 seconds) and uses much less memory (2.2GB vs. 11.2GB). For training a 4-layer GCN on this data, our algorithm can finish in around 36 minutes, while all the existing GCN training algorithms fail due to out-of-memory issues. Furthermore, Cluster-GCN allows us to train much deeper GCNs without much time and memory overhead, which leads to improved prediction accuracy: using a 5-layer Cluster-GCN, we achieve a state-of-the-art test F1 score of 99.36 on the PPI dataset, while the previous best result was 98.71 by [16]. Our code is publicly available at //github.com/google-research/google-research/tree/master/cluster_gcn.
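The block-sampling idea can be sketched in a few lines. Assumptions in this sketch: random node blocks stand in for the METIS clustering the paper uses, a single mean-aggregation GCN layer stands in for the full model, and the loss/optimizer step is elided.

```python
import numpy as np

def cluster_gcn_epoch(A, X, W, num_blocks=4, seed=0):
    """One epoch of Cluster-GCN-style subgraph training (illustrative sketch).

    Nodes are split into blocks (the paper uses METIS graph clustering;
    random blocks keep the example short), and each step propagates features
    only within the sampled block's induced subgraph, so memory scales with
    the block size rather than with the full graph.
    """
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(A.shape[0]), num_blocks)
    for idx in blocks:
        A_sub = A[np.ix_(idx, idx)]                                # induced subgraph
        A_hat = A_sub + np.eye(len(idx))                           # add self-loops
        H = ((A_hat / A_hat.sum(1, keepdims=True)) @ X[idx]) @ W   # one GCN layer
        H = np.maximum(H, 0)                                       # ReLU
        # ... compute the loss on the labels of `idx` and update W here ...
    return W

n, d, h = 100, 16, 8
A = (np.random.rand(n, n) < 0.05).astype(float)
A = np.maximum(A, A.T)                                             # symmetric adjacency
X, W = np.random.randn(n, d), np.random.randn(d, h)
cluster_gcn_epoch(A, X, W)
```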