Phasor measurement unit (PMU) measurements are essential for monitoring the power system's voltage stability margin in an online manner. Such monitoring is key to the secure operation of the grid. Traditionally, online monitoring of voltage stability using synchrophasors required a centralized communication architecture, which entails high investment costs and raises cyber-security concerns. The growing importance of cyber-security and the need for low investment costs have recently led to the development of distributed algorithms for online monitoring of the grid that are inherently less prone to malicious attacks. In this work, a novel distributed non-iterative voltage stability index (VSI) is proposed by recasting the power flow equations as circles. The VSI is computed online and simultaneously by processors embedded at each bus in the smart grid, using PMU measurements and the communication of voltage phasors between neighboring buses. The distributed nature of the index enables real-time identification of the critical bus of the system with minimal communication infrastructure. The effectiveness of the proposed distributed index is demonstrated on IEEE test systems and contrasted with existing methods, showing its benefits in speed, interpretability, identification of outage location, and robustness to noisy measurements.
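The abstract does not spell out the index's closed form, but the communication pattern it describes (each bus combining its own phasor with its neighbors' phasors) can be made concrete with a toy per-bus indicator. The sketch below uses a classical two-bus maximum-power-transfer proxy on made-up phasors and line impedances; it is not the paper's VSI, only an illustration of a distributed, non-iterative computation.

```python
# Toy sketch of a distributed, non-iterative stability indicator computed per bus
# from local and neighboring voltage phasors. This is NOT the paper's VSI; the
# two-bus maximum-transfer proxy below only illustrates the communication pattern.
import numpy as np

# Hypothetical 3-bus radial feeder: bus 0 -- line01 -- bus 1 -- line12 -- bus 2.
V = np.array([1.00 + 0.00j, 0.96 - 0.04j, 0.91 - 0.09j])  # PMU voltage phasors (p.u.)
Z = {(0, 1): 0.01 + 0.05j, (1, 2): 0.02 + 0.08j}           # line impedances (p.u.)

def local_index(vs, vr, z):
    """Indicator in [0, 1): delivered power over a crude two-bus transfer bound.
    Values approaching 1 flag the receiving bus as critical (toy proxy)."""
    i_line = (vs - vr) / z                 # line current from neighboring phasors
    p_recv = (vr * np.conj(i_line)).real   # active power delivered to receiving bus
    p_max = abs(vs) ** 2 / (4 * abs(z))    # maximum-transfer bound for a two-bus link
    return p_recv / p_max

# Each bus evaluates the index for its incoming line from one phasor exchange.
for (s, r), z in Z.items():
    print(f"bus {r}: index = {local_index(V[s], V[r], z):.3f}")
```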
Robotics applications process large amounts of data in real time and require compute platforms that provide high performance and energy efficiency. FPGAs are well suited to many of these applications, but there is a reluctance in the robotics community to use hardware acceleration due to increased design complexity and a lack of consistent programming models across the software/hardware boundary. In this paper, we present ReconROS, a framework that integrates the widely used Robot Operating System (ROS) with ReconOS, which features multithreaded programming of hardware and software threads for reconfigurable computers. This unique combination gives ROS2 developers the flexibility to transparently accelerate parts of their robotics applications in hardware. We elaborate on the architecture and the design flow of ReconROS and report on a set of experiments that underline the feasibility and flexibility of our approach.
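For readers unfamiliar with the ROS 2 programming model that ReconROS maps onto reconfigurable hardware, the Python (rclpy) sketch below shows the kind of message-driven node whose callback ReconROS could instead realize as a hardware thread. The topic names and the placeholder computation are invented for illustration; ReconROS itself targets hardware/software threads, not this Python code.

```python
# Minimal rclpy sketch of the ROS 2 node model that ReconROS plugs into.
# ReconROS implements such message-in/message-out functionality as hardware
# threads on an FPGA; here the callback merely stands in for that computation.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32MultiArray

class FilterNode(Node):
    def __init__(self):
        super().__init__('filter_node')
        self.sub = self.create_subscription(Float32MultiArray, 'raw', self.on_msg, 10)
        self.pub = self.create_publisher(Float32MultiArray, 'filtered', 10)

    def on_msg(self, msg):
        # The body of this callback is the kind of kernel that could be
        # transparently offloaded to reconfigurable hardware.
        out = Float32MultiArray()
        out.data = [2.0 * x for x in msg.data]   # placeholder computation
        self.pub.publish(out)

def main():
    rclpy.init()
    rclpy.spin(FilterNode())

if __name__ == '__main__':
    main()
```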
Differential privacy (DP) has been widely used to protect the privacy of confidential cyber-physical energy systems (CPES) data. However, applying DP without analyzing the utility, privacy, and security requirements can degrade data utility and even help an attacker conduct integrity attacks, e.g., false data injection (FDI), leveraging the differentially private data. Existing anomaly-detection-based defense strategies against data integrity attacks in DP-based smart grids fail to minimize the attack impact while maximizing data privacy and utility. Addressing this challenge requires incorporating the defense into the design process itself, which is nontrivial. In this paper, we formulate and develop the defense strategy as part of the design process to investigate data privacy, security, and utility in a DP-based smart grid network. We derive a provable relationship among the DP parameters that enables the defender to design a fault-tolerant system against FDI attacks. To experimentally evaluate the effectiveness of our proposed design approach, we simulate FDI attacks in a DP-based grid. The evaluation indicates that the attack impact can be minimized if the designer calibrates the privacy level according to the proposed correlation of the DP parameters when designing the grid network. Moreover, we analyze the feasibility of the DP mechanism and the quality of service (QoS) of the smart grid network in an adversarial setting. Our analysis suggests that the DP mechanism is feasible compared with existing privacy-preserving mechanisms in the smart grid domain, and that the QoS of differentially private grid applications remains satisfactory in the presence of an adversary.
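To make the privacy/utility/security tension concrete, the sketch below applies the standard Laplace mechanism to a meter reading and shows how loosening epsilon widens the noise band in which an FDI injection can hide. The sensitivity, the epsilon values, and the three-noise-scales rule of thumb are illustrative assumptions, not the paper's calibrated relationship among the DP parameters.

```python
# Standard Laplace mechanism on a smart-meter reading, illustrating why the
# privacy level must be calibrated: more noise (smaller epsilon) means more
# privacy but also more room for false data injection to blend in.
import numpy as np

rng = np.random.default_rng(0)
sensitivity = 1.0          # assumed max change one household causes in the reading (kWh)

def dp_release(reading, epsilon):
    """Release a differentially private reading via Laplace noise of scale s/eps."""
    return reading + rng.laplace(scale=sensitivity / epsilon)

true_load = 42.0
for eps in (0.1, 1.0, 10.0):
    noisy = dp_release(true_load, eps)
    # Rule of thumb (assumption): injections within ~3 noise scales are hard to
    # distinguish from legitimate DP noise by an anomaly detector.
    hide_band = 3 * sensitivity / eps
    print(f"eps={eps:4}: released={noisy:6.2f}, injections below ~{hide_band:.1f} kWh blend in")
```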
Edge computing is an emerging paradigm for enabling low-latency applications, such as mobile augmented reality, because it moves computation onto processing devices that are closer to the users. In parallel, the need for highly scalable execution of stateless tasks in cloud systems is driving the definition of new technologies based on serverless computing. In this paper, we propose a novel architecture in which the two converge to enable low-latency applications: this is achieved by offloading short-lived stateless tasks from the user terminals to edge nodes. Furthermore, we design a distributed algorithm that tackles the research challenge of selecting the best executor, based on real-time measurements and simple, yet effective, prediction algorithms. Finally, we describe a new performance evaluation framework specifically designed for an accurate assessment of algorithms and protocols in edge computing environments, where the nodes may have very heterogeneous networking and processing capabilities. The proposed framework relies on real components running on lightweight virtualization, mixed with simulated computation, and is well suited to the analysis of a variety of applications and network environments. Using our framework, we evaluate the proposed architecture and algorithms in small- and large-scale edge computing scenarios, showing that our solution achieves delay performance similar to or better than a centralized solution, with far less network utilization.
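The abstract leaves the prediction algorithm unspecified; one simple instance consistent with "real-time measurements and simple, yet effective, prediction algorithms" is an exponentially weighted moving average (EWMA) of per-node response times with a little exploration, sketched below. The node names, latency model, smoothing factor, and exploration rate are all invented for illustration.

```python
# Toy executor selection: each client keeps an EWMA estimate of response time
# per edge node and dispatches the next task to the predicted-fastest node,
# occasionally exploring so stale estimates get refreshed.
import random

random.seed(1)
ALPHA = 0.3                                                # EWMA smoothing factor (assumed)
nodes = {"edge-a": 20.0, "edge-b": 35.0, "edge-c": 25.0}   # true mean latencies (ms), assumed
estimate = {n: 30.0 for n in nodes}                        # neutral prior estimate

def measure(node):
    """Stand-in for a real task offload: true latency plus jitter."""
    return max(1.0, random.gauss(nodes[node], 5.0))

for task in range(50):
    best = min(estimate, key=estimate.get)                 # predicted-fastest executor
    if random.random() < 0.1:                              # small exploration probability
        best = random.choice(list(nodes))
    rtt = measure(best)
    estimate[best] = (1 - ALPHA) * estimate[best] + ALPHA * rtt

print({n: round(t, 1) for n, t in estimate.items()})       # estimates approach true means
```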
The Internet of Things (IoT) paradigm has displayed tremendous growth in recent years, resulting in innovations like Industry 4.0 and smart environments that improve efficiency and asset management and facilitate intelligent decision making. However, these benefits are offset by considerable cybersecurity concerns arising from inherent vulnerabilities, which compromise the confidentiality, integrity, and availability of IoT-based systems. Security vulnerabilities can be detected through penetration testing, and specifically through a subset of the information-gathering stage known as vulnerability identification. Yet, existing penetration testing solutions cannot discover zero-day vulnerabilities in IoT environments, due to the diversity of generated data, hardware constraints, and environmental complexity. It is therefore imperative to develop effective penetration testing solutions for the detection of vulnerabilities in smart IoT environments. In this paper, we propose a deep learning-based penetration testing framework, namely Long Short-Term Memory Recurrent Neural Network-Enabled Vulnerability Identification (LSTM-EVI). We deploy this framework on a novel cybersecurity-oriented testbed, a smart-airport environment comprising both physical and virtual elements. The framework was evaluated on this testbed using real-time data sources. Our results reveal that the proposed framework achieves about 99% detection accuracy for scanning attacks, outperforming four peer techniques.
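The paper does not publish its network architecture in the abstract; the PyTorch sketch below shows the general shape of an LSTM-based binary classifier for scanning-attack detection of the kind LSTM-EVI describes. The feature width, window length, and hyper-parameters are placeholders, not the paper's configuration.

```python
# Minimal LSTM binary classifier over sliding windows of traffic features,
# in the spirit of LSTM-EVI (hyper-parameters are illustrative assumptions).
import torch
import torch.nn as nn

class ScanDetector(nn.Module):
    def __init__(self, n_features=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)       # classes: benign vs. scanning attack

    def forward(self, x):                      # x: (batch, time, features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                # classify from the last hidden state

model = ScanDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random feature windows (real inputs would be
# per-window traffic statistics from the testbed).
x = torch.randn(32, 20, 10)                    # 32 windows, 20 time steps, 10 features
y = torch.randint(0, 2, (32,))
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(f"toy step loss: {loss.item():.3f}")
```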
A group behavior of heterogeneous multi-agent systems is studied which obeys an "average of individual vector fields" under strong coupling among the agents. Under stability of the averaged dynamics (without requiring stability of the individual agents), the behavior of the heterogeneous multi-agent system can be estimated by the solution to the averaged dynamics. A natural follow-up idea is to "design" each individual agent's dynamics such that the averaged dynamics performs the desired task. A few applications are discussed, including estimation of the number of agents in a network, distributed least-squares and median solvers, distributed optimization, distributed state estimation, and robust synchronization of coupled oscillators. Since stability of the averaged dynamics causes the initial conditions to be forgotten as time goes on, these algorithms are initialization-free and suitable for plug-and-play operation. Finally, nonlinear couplings are also considered, which suggests that enforced synchronization gives rise to an emergent behavior of a heterogeneous multi-agent system.
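A small numerical experiment makes the averaging claim tangible: strongly coupled heterogeneous agents track the solution of the averaged dynamics even though no individual agent is required to be stable. The vector fields $f_i$, the all-to-all coupling graph, and the gain below are illustrative choices, not taken from the paper.

```python
# Agents dx_i/dt = f_i(x_i) + coupling, with strong diffusive coupling, tracked
# against the averaged dynamics ds/dt = (1/N) * sum_i f_i(s) (Euler integration).
import numpy as np

N, k, dt, T = 4, 50.0, 1e-4, 5.0
a = np.array([1.0, -2.0, 0.5, -1.5])           # heterogeneous f_i(x) = a_i*x - x^3;
f = lambda x: a * x - x**3                     # note a_0 > 0: agent 0 alone is unstable at 0
x = np.array([2.0, -1.0, 0.5, 1.5])            # agent states
s = x.mean()                                   # averaged-dynamics state

for _ in range(int(T / dt)):
    coupling = k * (x.mean() - x)              # all-to-all diffusive coupling, gain k/N per pair
    x = x + dt * (f(x) + coupling)
    s = s + dt * f(np.full(N, s)).mean()       # ds/dt = average of the f_i evaluated at s

# mean(a) = -0.5 < 0, so the averaged dynamics is stable at 0 and all agents
# cluster near the averaged solution despite their heterogeneity.
print("agents:", np.round(x, 3), " averaged model:", round(s, 3))
```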
In this paper, we describe LUNES-Blockchain, an agent-based simulator of blockchains that relies on Parallel and Distributed Simulation (PADS) techniques to obtain high scalability. The software is organized as a multi-level simulator that makes it possible to simulate a virtual environment made of many nodes running the protocol of a specific Distributed Ledger Technology (DLT), such as the Bitcoin or Ethereum blockchains. This virtual environment is executed on top of a lower-level Peer-to-Peer (P2P) network overlay, which can be structured according to different topologies and with a given number of nodes and edges. Functionalities at different levels of abstraction are managed separately, by different software modules and with different time granularity. This allows for accurate simulations, where (and when) accuracy is needed, and enhances simulation performance. Using LUNES-Blockchain, it is possible to simulate different types of attacks on the DLT. In this paper, we specifically focus on the P2P layer, considering selfish mining, the 51% attack, and the Sybil attack. Regarding selfish mining and the 51% attack, our aim is to understand how much the attacker's hash rate (i.e., a general measure of processing power in the blockchain network) can influence the outcome of the misbehaviour. For the filtering denial of service (i.e., the Sybil attack), we investigate which dissemination protocol in the underlying P2P network makes the system more resilient to a varying number of nodes that drop messages. The results confirm the viability of simulation-based techniques for the investigation of security aspects of DLTs.
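The flavor of the Sybil experiment at the P2P layer can be reproduced in miniature: gossip a block over a random overlay in which a fraction of nodes silently drop messages, and measure coverage among honest nodes. The graph size, node degree, fanout, and Sybil fraction below are assumptions for illustration, not LUNES-Blockchain's parameters or code.

```python
# Toy Sybil experiment: fixed-fanout push gossip on a random overlay where a
# fraction of nodes receive messages but never relay them ("droppers").
import random

random.seed(7)
N, DEGREE, FANOUT, SYBIL_FRACTION = 500, 8, 4, 0.3

peers = {v: random.sample([u for u in range(N) if u != v], DEGREE) for v in range(N)}
sybil = set(random.sample(range(N), int(SYBIL_FRACTION * N)))

def gossip(source):
    """Disseminate from `source`; return fraction of honest nodes reached."""
    informed, frontier = {source}, [source]
    while frontier:
        nxt = []
        for v in frontier:
            if v in sybil:
                continue                        # dropper: receives but never relays
            for u in random.sample(peers[v], FANOUT):
                if u not in informed:
                    informed.add(u)
                    nxt.append(u)
        frontier = nxt
    honest = set(range(N)) - sybil
    return len(informed & honest) / len(honest)

honest_source = next(v for v in range(N) if v not in sybil)
print(f"honest coverage with {int(SYBIL_FRACTION * 100)}% droppers: {gossip(honest_source):.1%}")
```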
Federated learning is a new distributed machine learning framework in which a set of heterogeneous clients collaboratively train a model without sharing training data. In this work, we consider a practical and ubiquitous issue in federated learning: intermittent client availability, where the set of eligible clients may change during the training process. Such intermittent client availability significantly deteriorates the performance of the classical Federated Averaging algorithm (FedAvg for short). We propose a simple distributed non-convex optimization algorithm, called Federated Latest Averaging (FedLaAvg for short), which leverages the latest gradients of all clients, even those currently unavailable, to jointly update the global model in each iteration. Our theoretical analysis shows that FedLaAvg attains a convergence rate of $O(1/(N^{1/4} T^{1/2}))$, achieving a sublinear speedup with respect to the total number of clients. We implement and evaluate FedLaAvg on the CIFAR-10 dataset. The evaluation results demonstrate that FedLaAvg indeed achieves a sublinear speedup and reaches 4.23% higher test accuracy than FedAvg.
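The core mechanism, caching each client's most recent gradient so that unavailable clients still contribute their latest information to every update, can be sketched on a toy quadratic objective. The client count, availability pattern, and step size below are assumptions; this omits the paper's full algorithmic and analytical details.

```python
# Toy FedLaAvg-style loop: the server averages the *latest cached* gradient of
# every client each round, whether or not that client is currently available.
import numpy as np

rng = np.random.default_rng(0)
N, T, lr = 10, 300, 0.1
targets = rng.normal(size=N)                   # client i minimizes (w - targets[i])^2 / 2
w = 5.0                                        # global model (scalar for simplicity)
latest = np.zeros(N)                           # cached latest gradient per client

for t in range(T):
    available = rng.random(N) < 0.5            # intermittent availability (assumed i.i.d.)
    latest[available] = w - targets[available] # refresh gradients of present clients only
    w -= lr * latest.mean()                    # update with ALL clients' latest gradients

# The optimum of the sum is the mean of the client targets.
print(f"w* = {targets.mean():.3f}, FedLaAvg-style w = {w:.3f}")
```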
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of the local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm, called multi-step primal-dual (MSPD), together with its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate, even for non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS), based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
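The smoothing primitive behind DRS can be illustrated in isolation: replace a non-smooth $f$ by its Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma u)]$, whose gradient admits a zeroth-order Monte Carlo estimate. The test function, dimension, smoothing radius, and step sizes below are illustrative, and the distributed machinery of DRS is omitted entirely.

```python
# Gaussian randomized smoothing of a non-smooth convex function, with the
# standard variance-reduced gradient estimator grad f_g(x) ~ E[(f(x+g*u)-f(x)) u] / g.
import numpy as np

rng = np.random.default_rng(0)
d, gamma, samples = 10, 0.1, 200

f = lambda x: np.abs(x).sum()                  # non-smooth convex test function (L1 norm)

def smoothed_grad(x):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed objective."""
    u = rng.normal(size=(samples, d))
    vals = np.array([f(x + gamma * ui) for ui in u])
    return ((vals - f(x))[:, None] * u).mean(axis=0) / gamma

# Plain descent on the smoothed objective with a 1/sqrt(t) step size.
x = rng.normal(size=d)
for t in range(1, 301):
    x -= (0.5 / np.sqrt(t)) * smoothed_grad(x)
print(f"f(x) after smoothing-based descent: {f(x):.3f}")   # decreases toward 0
```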
Many problems in signal processing reduce to nonparametric function estimation. We propose a new methodology, piecewise convex fitting (PCF), and give a two-stage adaptive estimate. In the first stage, the number and locations of the change points are estimated using strong smoothing. In the second stage, a constrained smoothing spline fit is performed, with the smoothing level chosen to minimize the mean squared error (MSE). The imposed constraint is that a single change point occurs in a region about each empirical change point of the first-stage estimate. This constraint is equivalent to requiring that the third derivative of the second-stage estimate have a single sign in a small neighborhood about each first-stage change point. We sketch how PCF may be applied to signal recovery, instantaneous frequency estimation, surface reconstruction, image segmentation, spectral estimation, and multivariate adaptive regression.
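The two-stage flavor can be sketched with off-the-shelf smoothing splines: oversmooth to locate convex/concave change points (sign changes of the second derivative), then refit with lighter smoothing. In the sketch below, the test signal and smoothing levels are made up, and the paper's constraint (a single sign of the third derivative near each stage-1 change point) is only checked after the fact, not enforced in the fit.

```python
# Two-stage sketch in the spirit of PCF using scipy smoothing splines.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + rng.normal(scale=0.15, size=x.size)   # one interior inflection at pi

# Stage 1: strong smoothing, then find zero crossings of the second derivative.
stage1 = UnivariateSpline(x, y, k=5, s=20.0)
d2 = stage1.derivative(2)(x)
cps = x[:-1][np.sign(d2[:-1]) != np.sign(d2[1:])]
print("stage-1 change points:", np.round(cps, 2))

# Stage 2: lighter smoothing; verify (rather than enforce) the single-sign
# condition on the third derivative near each stage-1 change point.
stage2 = UnivariateSpline(x, y, k=5, s=4.0)
for c in cps:
    window = (x > c - 0.3) & (x < c + 0.3)
    d3 = stage2.derivative(3)(x[window])
    print(f"f''' keeps one sign near {c:.2f}:", bool(np.all(np.sign(d3) == np.sign(d3[0]))))
```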
The field of multi-agent systems (MAS) is an active area of research within Artificial Intelligence, with an increasingly important impact on industrial and other real-world applications. Within a MAS, autonomous agents interact to pursue personal interests and/or to achieve common objectives. Distributed Constraint Optimization Problems (DCOPs) have emerged as one of the prominent frameworks for governing the agents' autonomous behavior, where both algorithms and communication models are driven by the structure of the specific problem. During the last decade, several extensions to the DCOP model have enabled it to support MAS in complex, real-time, and uncertain environments. This survey provides an overview of the DCOP model, giving a classification of its multiple extensions and addressing both resolution methods and applications that find a natural mapping within each class of DCOPs. The proposed classification suggests several future perspectives for DCOP extensions and identifies challenges in the design of efficient resolution algorithms, possibly through the adaptation of strategies from different areas.
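For concreteness, a DCOP instance consists of agents, each owning a variable with a finite domain, and constraint cost tables over subsets of variables; the goal is an assignment minimizing total cost. The sketch below encodes a made-up two-agent meeting-scheduling instance and solves it by centralized brute force purely to illustrate the model; actual DCOP algorithms (e.g., DPOP, Max-Sum) solve it in a distributed fashion.

```python
# Minimal DCOP instance (two agents, binary constraint) solved by brute force.
from itertools import product

domains = {"a1": ["9am", "1pm"], "a2": ["9am", "1pm"]}       # one variable per agent
# Constraint cost tables over pairs of variables (lower is better); values invented.
constraints = {("a1", "a2"): {("9am", "9am"): 0, ("9am", "1pm"): 4,
                              ("1pm", "9am"): 4, ("1pm", "1pm"): 1}}

def cost(assign):
    """Total cost of a complete assignment under all constraint tables."""
    return sum(tbl[(assign[u], assign[v])] for (u, v), tbl in constraints.items())

best = min((dict(zip(domains, vals)) for vals in product(*domains.values())), key=cost)
print("optimal assignment:", best, "cost:", cost(best))
```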