One of the properties of interest in the analysis of networks is \emph{global communicability}, i.e., how easy or difficult it is, generally, to reach nodes from other nodes by following edges. Different global communicability measures provide quantitative assessments of this property, emphasizing different aspects of the problem. This paper investigates the sensitivity of global measures of communicability to local changes. In particular, for directed, weighted networks, we study how different global measures of communicability change when the weight of a single edge is changed; or, in the unweighted case, when an edge is added or removed. The measures we study include the \emph{total network communicability}, based on the matrix exponential of the adjacency matrix, and the \emph{Perron network communicability}, defined in terms of the Perron root of the adjacency matrix and the associated left and right eigenvectors. Identifying which local changes lead to the largest changes in global communicability has many potential applications, including assessing the resilience of a system to failure or attack, guiding incremental system improvements, and studying the sensitivity of global communicability measures to errors in the network connection data.
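A minimal numerical sketch of the two measures named above, assuming the standard definition of total communicability via the matrix exponential; the exact Perron-communicability formula below is our illustrative choice, not necessarily the paper's:

```python
# Sketch: global communicability of a small directed, weighted network.
import numpy as np
from scipy.linalg import expm, eig

A = np.array([[0.0, 1.0, 0.5],
              [0.0, 0.0, 2.0],
              [1.0, 0.0, 0.0]])  # adjacency matrix of a toy directed graph

# Total network communicability: 1^T exp(A) 1, summing weighted walk counts
# between all ordered node pairs.
ones = np.ones(A.shape[0])
total_comm = ones @ expm(A) @ ones

# Perron network communicability: built from the Perron root (spectral radius)
# and its left/right eigenvectors; this functional form is an assumption.
w, vl, vr = eig(A, left=True)
k = np.argmax(w.real)                        # index of the Perron root
x, y = np.abs(vr[:, k].real), np.abs(vl[:, k].real)
perron_comm = np.exp(w[k].real) * (ones @ x) * (y @ ones) / (y @ x)

# Sensitivity to a local change: perturb one edge weight and recompute.
A_pert = A.copy(); A_pert[0, 1] += 0.1
print(total_comm, ones @ expm(A_pert) @ ones)
```

Repeating the last two lines for each edge in turn identifies the single-edge perturbation with the largest effect on the global measure.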
In this work we consider the problem of mobile robots that need to manipulate or transport an object via cables or robotic arms. We consider the scenario where the number of manipulating robots is redundant, i.e., a desired object configuration can be obtained by different configurations of the robots. The objective of this work is to show that communication can be used to implement cooperative local feedback controllers in the robots to improve disturbance rejection and reduce structural stress in the object. In particular, we consider the realistic scenario where measurements are sampled and transmitted over a wireless channel, and the sampling period is comparable to the time constants of the system dynamics. We first propose a kinematic model which is consistent with the overall system dynamics under high-gain control, and then we provide sufficient conditions for the exponential stability and monotonic decrease of the configuration error under different norms. Finally, we test the proposed controllers on the full dynamical system, showing the benefits of local communication.
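A minimal sketch of the sampled-data kinematic setting, under assumptions not taken from the paper: a single-integrator kinematic model updated once per sampling interval, with a proportional local feedback; the gain and sampling period are hypothetical illustration values.

```python
# Sketch: sampled-data kinematic feedback x_{k+1} = x_k + T*u_k.
import numpy as np

T = 0.1                      # sampling period, comparable to closed-loop time constant
K = 4.0                      # feedback gain (high-gain regime)
x = np.array([1.0, -0.5])    # current robot configurations (toy, 2 robots)
x_des = np.zeros(2)          # desired configurations

for k in range(50):
    e = x - x_des
    u = -K * e               # local feedback from sampled measurements
    x = x + T * u            # kinematic update over one sampling interval

# Exponential decrease of the error requires |1 - T*K| < 1, i.e. 0 < T*K < 2:
# the sampling period directly limits how aggressive the gain can be.
print(np.linalg.norm(x - x_des))
```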
Index modulation (IM) reduces the power consumption and hardware cost of multiple-input multiple-output (MIMO) systems by activating only a subset of the antennas for data transmission. However, IM significantly increases the complexity of the receiver and requires accurate channel estimation to guarantee its performance. To tackle these challenges, in this paper we design a deep learning (DL) based detector for IM-aided MIMO (IM-MIMO) systems. We first formulate the detection process as a sparse reconstruction problem by exploiting the inherent attributes of IM. Then, based on a greedy strategy, we design a DL-based detector, called IMRecoNet, to realize this sparse reconstruction process. Unlike general neural networks, we introduce complex-valued operations to handle the complex signals arising in communication systems. To the best of our knowledge, this is the first attempt to introduce complex-valued neural networks into the design of detectors for IM-MIMO systems. Finally, to verify the adaptability and robustness of the proposed detector, simulations are carried out under inaccurate channel state information (CSI) and correlated MIMO channels. The simulation results demonstrate that the proposed detector outperforms existing algorithms in terms of antenna recognition accuracy and bit error rate across various scenarios.
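A minimal sketch (our illustration, not the IMRecoNet architecture) of the complex-valued operation the abstract refers to: a complex linear layer implemented with real-valued weights via $(W_r + iW_i)(x_r + ix_i) = (W_r x_r - W_i x_i) + i(W_r x_i + W_i x_r)$.

```python
# Sketch: complex-valued linear layer built from two real linear maps.
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.re = nn.Linear(in_f, out_f)  # real part of the weights
        self.im = nn.Linear(in_f, out_f)  # imaginary part of the weights

    def forward(self, x_re, x_im):
        out_re = self.re(x_re) - self.im(x_im)
        out_im = self.re(x_im) + self.im(x_re)
        return out_re, out_im

# Toy received signal: batch of 8 complex vectors of dimension 16.
layer = ComplexLinear(16, 32)
y = torch.randn(8, 16, dtype=torch.cfloat)
out_re, out_im = layer(y.real, y.imag)
```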
A proof of work (PoW) is an important cryptographic construct enabling a party to convince others that they invested some effort in solving a computational task. Arguably, its main impact has been in the setting of cryptocurrencies such as Bitcoin and its underlying blockchain protocol, which received significant attention in recent years due to its potential for various applications as well as for solving fundamental distributed computing questions in novel threat models. PoWs enable the linking of blocks in the blockchain data structure, and thus the problem of interest is the feasibility of obtaining a sequence (chain) of such proofs. In this work, we examine the hardness of finding such a chain of PoWs against quantum strategies. We prove that the chain-of-PoWs problem reduces to a problem we call multi-solution Bernoulli search, for which we establish the quantum query complexity. Effectively, this is an extension of a threshold direct product theorem to an average-case unstructured search problem. Our proof, adding to active recent efforts, simplifies and generalizes the recording technique due to Zhandry (Crypto 2019). In addition, we revisit the formal treatment of the security of the core of the Bitcoin consensus protocol, the Bitcoin backbone (Eurocrypt 2015), against quantum adversaries, and show that its security holds under a quantum analogue of the ``honest majority'' assumption that we formulate. Our analysis indicates that the security of the Bitcoin backbone protocol is guaranteed provided that the number of adversarial quantum queries is bounded so that each quantum query is worth $O(p^{-1/2})$ classical ones, where $p$ is the success probability of a single classical query to the protocol's underlying hash function. Somewhat surprisingly, the wait time for safe settlement against quantum adversaries matches the safe settlement time in the classical case.
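As a heuristic reading of the exchange rate above (our own back-of-the-envelope arithmetic, not the paper's formal statement): a classical miner needs about $1/p$ hash queries in expectation to find a single PoW, while a Grover-type quantum search needs only $O(p^{-1/2})$, so
\[
\frac{\Theta(1/p)\ \text{classical queries per PoW}}{\Theta\big(p^{-1/2}\big)\ \text{quantum queries per PoW}} \;=\; \Theta\big(p^{-1/2}\big),
\]
which is exactly the rate at which each adversarial quantum query is counted against the honest-majority budget.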
In a $k$-party communication problem, the $k$ players, with inputs $x_1, x_2, \ldots, x_k$ respectively, want to evaluate a function $f(x_1, x_2, \ldots, x_k)$ using as little communication as possible. We consider the message-passing model, in which the inputs are partitioned in an arbitrary, possibly worst-case manner among a smaller number $t$ of players ($t<k$). The $t$-player communication cost of computing $f$ can only be smaller than the $k$-player communication cost, since the $t$ players can trivially simulate the $k$-player protocol. But how much smaller can it be? We study deterministic and randomized protocols in the one-way model, and provide separations for product input distributions, which are optimal for low-error-probability protocols. We also provide much stronger separations when the input distribution is non-product. A key application of our results is in proving lower bounds for data stream algorithms. In particular, we give an optimal $\Omega(\epsilon^{-2}\log(n) \log \log(mM))$ bits of space lower bound for the fundamental problem of $(1\pm\epsilon)$-approximating the number $\|x\|_0$ of non-zero entries of an $n$-dimensional vector $x$ after $m$ integer updates, each of magnitude at most $M$, with success probability $\ge 2/3$, in a strict turnstile stream. We additionally prove the matching $\Omega(\epsilon^{-2}\log(n) \log \log(T))$ space lower bound for the problem when we have access to a heavy hitters oracle with threshold $T$. Our results match the best known upper bounds when $\epsilon\ge 1/\operatorname{polylog}(mM)$ and when $T = 2^{\operatorname{poly}(1/\epsilon)}$, respectively. Our lower bound also improves on the prior $\Omega(\epsilon^{-2}\log(mM))$ bound and separates the complexity of approximating $L_0$ from that of approximating the $p$-norm $L_p$ for $p$ bounded away from $0$, since the latter admits an $O(\epsilon^{-2}\log(mM))$-bit upper bound.
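To make the claimed separation explicit (a restatement of the bounds above in one concrete regime, not a new result): for streams with $mM = \operatorname{poly}(n)$,
\[
\underbrace{\Omega\big(\epsilon^{-2}\log(n)\log\log(mM)\big)}_{L_0\ \text{lower bound}} \;=\; \Omega\big(\epsilon^{-2}\log n\,\log\log n\big)
\quad\text{vs.}\quad
\underbrace{O\big(\epsilon^{-2}\log(mM)\big)}_{L_p\ \text{upper bound}} \;=\; O\big(\epsilon^{-2}\log n\big),
\]
so approximating $L_0$ provably costs an extra $\log\log n$ factor over $L_p$ in this regime.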
Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, a growing number of applications in power systems collect data from non-Euclidean domains, represented as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data poses significant challenges for existing deep neural networks defined in Euclidean domains. Recently, many studies extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is presented. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and some research trends concerning the application of GNNs in power systems are discussed.
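A minimal numpy sketch of the first paradigm listed above, the classical graph convolutional layer $H' = \mathrm{ReLU}\big(D^{-1/2}(A+I)D^{-1/2} H W\big)$; the toy grid topology and feature names are placeholders of our own, not from the survey.

```python
# Sketch: one graph convolutional (GCN) layer on a toy 3-bus grid.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)  # bus-level topology (toy)
H = np.random.randn(3, 4)               # nodal features (e.g., V, P, Q, ...)
W = np.random.randn(4, 8)               # learnable weights
print(gcn_layer(A, H, W).shape)         # (3, 8)
```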
The aim of this work is to develop a fully distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion for designing the communication topology between agents so that it matches the graph describing the data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
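A minimal sketch (our illustration, not the paper's algorithm) of the optimization level of this scheme: each agent takes a local gradient step on its share of the loss, then averages parameters with neighbors through a doubly stochastic mixing matrix over the sparse communication topology.

```python
# Sketch: distributed gradient descent with neighbor averaging (ring topology).
import numpy as np

n_agents, dim, lr = 4, 5, 0.1
W_mix = np.zeros((n_agents, n_agents))
for i in range(n_agents):                 # each agent talks to its ring neighbors
    j = (i + 1) % n_agents
    W_mix[i, j] = W_mix[j, i] = 0.25
np.fill_diagonal(W_mix, 0.5)              # doubly stochastic by construction

theta = np.random.randn(n_agents, dim)    # each row: one agent's GCN weights
targets = np.random.randn(n_agents, dim)  # stand-in for local data/losses

for step in range(100):
    grads = theta - targets               # toy quadratic local losses
    theta = W_mix @ (theta - lr * grads)  # local step + neighbor averaging

print(np.std(theta, axis=0).max())        # rows agree -> consensus reached
```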
The chronological order of user-item interactions can reveal time-evolving and sequential user behaviors in many recommender systems. The items that users will interact with may depend on the items accessed in the past. However, the substantial growth in the numbers of users and items means that sequential recommender systems still face non-trivial challenges: (1) modeling short-term user interests; (2) capturing long-term user interests; and (3) effectively modeling item co-occurrence patterns. To tackle these challenges, we propose a memory-augmented graph neural network (MA-GNN) to capture both long- and short-term user interests. Specifically, we apply a graph neural network to model the item contextual information within a short-term period and utilize a shared memory network to capture the long-range dependencies between items. In addition to modeling user interests, we employ a bilinear function to capture the co-occurrence patterns of related items. We extensively evaluate our model on five real-world datasets, comparing against several state-of-the-art methods and using a variety of performance metrics. The experimental results demonstrate the effectiveness of our model for the task of Top-K sequential recommendation.
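A minimal sketch of the bilinear co-occurrence term: scoring a candidate item $j$ against a recently accessed item $i$ as $e_i^{\top} W e_j$, with item embeddings $e$ and a learned matrix $W$. The averaging over recent items and all names below are our own illustration, not necessarily the paper's exact formulation.

```python
# Sketch: bilinear item co-occurrence scoring.
import numpy as np

d, n_items = 8, 100
E = np.random.randn(n_items, d)   # item embeddings (learned in practice)
W = np.random.randn(d, d)         # bilinear co-occurrence matrix (learned)

def cooccurrence_score(recent_items, candidate):
    # Average bilinear score between short-term items and the candidate.
    return np.mean([E[i] @ W @ E[candidate] for i in recent_items])

print(cooccurrence_score([3, 17, 42], candidate=7))
```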
An important problem in geostatistics is to build models of the Earth's subsurface given physical measurements at sparse spatial locations. Typically, this is done using spatial interpolation methods or by reproducing patterns from a reference image. However, these algorithms fail to produce realistic patterns and do not capture the wide range of uncertainty inherent in predicting geology. In this paper, we show how semantic inpainting with Generative Adversarial Networks can be used to generate varied realizations of geology that honor physical measurements while matching the expected geological patterns. In contrast to other algorithms, our method scales well with the number of data points and mimics a distribution of patterns rather than a single pattern or image. The generated conditional samples achieve state-of-the-art quality.
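A minimal sketch of the generic semantic-inpainting recipe this builds on: search the latent space of a trained generator for a code whose output matches the sparse measurements while staying near the latent prior. The generator below is a tiny stand-in, not the trained geology GAN, and all constants are illustrative.

```python
# Sketch: conditioning a generator on sparse measurements by latent optimization.
import torch

G = torch.nn.Sequential(              # stand-in generator G: z -> flattened image
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32 * 32))

mask = torch.zeros(32 * 32); mask[::97] = 1.0   # sparse measurement locations
y = torch.randn(32 * 32)                         # measured values (toy)

z = torch.randn(16, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    x = G(z)
    context = ((mask * (x - y)) ** 2).sum()      # honor the measurements
    prior = 0.01 * (z ** 2).sum()                # stay near the latent prior
    (context + prior).backward()
    opt.step()
# Different random restarts of z yield varied realizations honoring the data.
```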
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
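A minimal sketch of the local smoothing idea behind DRS: replace a non-smooth $f$ by $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ with $Z \sim \mathcal{N}(0, I)$, and descend along a Monte Carlo estimate of $\nabla f_\gamma$. This is the standard Gaussian-smoothing estimator, shown on a toy centralized problem; all constants are illustrative.

```python
# Sketch: gradient descent on a Gaussian-smoothed non-smooth objective.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.abs(x).sum()            # non-smooth convex objective (L1 norm)

def smoothed_grad(x, gamma=0.1, samples=64):
    # grad f_gamma(x) = E[ f(x + gamma*Z) * Z ] / gamma  (Monte Carlo estimate)
    Z = rng.standard_normal((samples, x.size))
    fx = np.apply_along_axis(f, 1, x + gamma * Z)
    return (fx[:, None] * Z).mean(axis=0) / gamma

x = rng.standard_normal(10)
for t in range(1, 500):
    x -= (0.5 / np.sqrt(t)) * smoothed_grad(x)   # O(1/sqrt(t)) step sizes
print(f(x))  # approaches the minimum value 0
```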
Graph Convolutional Neural Networks (Graph CNNs) are generalizations of classical CNNs that handle graph data such as molecular data, point clouds, and social networks. Current filters in graph CNNs are built for a fixed and shared graph structure. However, for most real data, the graph structure varies in both size and connectivity. This paper proposes a generalized and flexible graph CNN that takes data of arbitrary graph structure as input. In this way, a task-driven adaptive graph is learned for each input sample during training. To learn the graph efficiently, a distance metric learning approach is proposed. Extensive experiments on nine graph-structured datasets demonstrate superior improvements in both convergence speed and predictive accuracy.
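A minimal sketch of learning the graph through a distance metric: a Gaussian kernel over a learned Mahalanobis distance yields a task-driven adjacency for each input. The parametrization $M = WW^{\top}$ with learnable $W$ follows the standard metric-learning recipe; the names below are our own, not necessarily the paper's.

```python
# Sketch: adaptive adjacency from a learned Mahalanobis metric.
import numpy as np

def adaptive_adjacency(X, W):
    M = W @ W.T                          # learned PSD metric
    diff = X[:, None, :] - X[None, :, :]
    d2 = np.einsum('ijk,kl,ijl->ij', diff, M, diff)   # squared Mahalanobis distances
    return np.exp(-np.sqrt(np.maximum(d2, 0.0)))      # Gaussian kernel graph

X = np.random.randn(6, 4)   # one sample's node features (arbitrary graph size)
W = np.random.randn(4, 4)   # metric parameters, updated by the task gradients
A = adaptive_adjacency(X, W)
print(A.shape)              # (6, 6), recomputed per input graph
```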