The smart grid introduces several new data-gathering, communication, and information-sharing capabilities into the electrical system, along with additional privacy threats, vulnerabilities, and avenues for cyber-attack. Modbus is one of the most prevalent control-system interfaces in power plants, and such interfaces are vulnerable to cyber-attacks that put the entire energy infrastructure at risk. To strengthen resilience against such attacks, this study introduces a real-time cyber-physical testbed. To investigate the network vulnerabilities of smart power grids, the Modbus protocol is examined by coupling a real-time power system simulator with a communication system simulator, and the effects on the system are presented and analyzed. The goal is to identify vulnerabilities in the Modbus protocol, carry out a man-in-the-middle (MITM) attack, and assess its impact on the system. The proposed testbed can serve both as a research platform for vulnerability assessment and as a tool for evaluating cyber-attacks and investigating detection mechanisms that safeguard and defend smart grid systems against a variety of attacks. We present preliminary findings on using the testbed to mount a particular MITM attack and measure its effects on system performance. Finally, we suggest a cyber-security strategy to address such network vulnerabilities and deploy appropriate countermeasures.
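At the root of the vulnerability studied above is that Modbus/TCP carries no authentication or integrity fields, so a man-in-the-middle can rewrite values in transit undetected. The following minimal sketch (illustrative only, not the testbed's code) builds a Modbus read request and shows how an intercepted response can be forged:

```python
import struct

def build_read_request(txn_id, unit_id, start_addr, count):
    # PDU: function 0x03 (read holding registers), start address, count
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0), length (unit id + PDU), unit id
    mbap = struct.pack(">HHHB", txn_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

def tamper_response(frame, new_value):
    # A man-in-the-middle can rewrite register values in transit:
    # Modbus/TCP has no authentication or integrity check, so the
    # modified frame is indistinguishable from a legitimate one.
    header, byte_count = frame[:8], frame[8]
    return header + bytes([byte_count]) + struct.pack(">H", new_value)

# Intercepted response carrying a single register reading (e.g., 230 V).
resp = bytes.fromhex("000100000005010302") + struct.pack(">H", 230)
forged = tamper_response(resp, 999)
```

Because the forged frame is byte-for-byte valid Modbus/TCP, only out-of-band mechanisms (network monitoring, encrypted tunnels) can detect or prevent the substitution.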
The increased adoption of smart contracts in many industries has made them an attractive target for cybercriminals, leading to millions of dollars in losses. Deploying smart contracts with vulnerabilities known to developers is therefore unacceptable, and fixing all detected vulnerabilities is necessary, which incurs high manual labor cost without effective tool support. To fill this need, in this paper we propose ContractFix, a novel framework that automatically generates security patches for vulnerable smart contracts. ContractFix is a general framework that can incorporate different fix patterns for different types of vulnerabilities. Users can apply it as a security fix-it tool that patches contracts and verifies them before deployment. To address the unique challenges in fixing smart contract vulnerabilities, given an input smart contract, ContractFix conducts our proposed ensemble identification based on multiple static verification tools to identify vulnerabilities that are amenable to automatic fixing. ContractFix then generates patches using template-based fix patterns and performs program analysis (program dependency computation and pointer analysis) on the smart contract to accurately infer and populate the parameter values of the fix patterns. Finally, ContractFix performs static verification to guarantee that the patched contract is free of the vulnerabilities. Our evaluation on $144$ real vulnerable contracts demonstrates that ContractFix successfully fixes $94\%$ of the detected vulnerabilities ($565$ out of $601$) while preserving the expected behaviors of the smart contracts.
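To illustrate the idea of a template-based fix pattern, the sketch below shows a hypothetical pattern for an unchecked low-level `send` call; the template text and the `apply_fix` helper are our own simplifications, and ContractFix's actual templates and analysis-driven parameter inference are more sophisticated:

```python
from string import Template

# Hypothetical fix pattern for an unchecked low-level send: wrap the
# call's return value in a require(...) guard so a failed transfer
# reverts instead of being silently ignored.
UNCHECKED_SEND_FIX = Template("require(${receiver}.send(${amount}), ${msg});")

def apply_fix(receiver, amount, msg='"send failed"'):
    # In a real tool the parameter values would come from program
    # analysis (dependency computation, pointer analysis) over the AST;
    # here they are supplied directly for illustration.
    return UNCHECKED_SEND_FIX.substitute(receiver=receiver, amount=amount, msg=msg)

patched = apply_fix("msg.sender", "refund")
# → 'require(msg.sender.send(refund), "send failed");'
```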
The vision of the upcoming 6G technologies, characterized by ultra-dense networks, low latency, and high data rates, is to support Pervasive AI (PAI) using zero-touch solutions enabling self-X services (e.g., self-configuration, self-monitoring, and self-healing). However, research on 6G is still in its infancy, and only the first steps have been taken to conceptualize its design, investigate its implementation, and plan for use cases. Toward this end, the academic and industrial communities have gradually shifted from theoretical studies of AI distribution to real-world deployment and standardization. Still, the design of an end-to-end framework that systematizes AI distribution by allowing easier access to the service through third-party applications, assisted by zero-touch service provisioning, has not been well explored. In this context, we introduce a novel platform architecture to deploy a zero-touch PAI-as-a-Service (PAIaaS) in 6G networks, supported by a blockchain-based smart system. This platform aims to standardize pervasive AI at all levels of the architecture and unify the interfaces in order to facilitate service deployment across application and infrastructure domains, relieve users' concerns about cost, security, and resource allocation, and, at the same time, respect the stringent performance requirements of 6G. As a proof of concept, we present a Federated-Learning-as-a-Service use case in which we evaluate the ability of our proposed system to self-optimize and self-adapt to the dynamics of 6G networks while minimizing users' perceived costs.
The examination of post-disaster recovery (PDR) in a socio-physical system enables us to elucidate the complex relationships between humans and infrastructures. Although existing studies have identified many patterns in the PDR process, they fall short of describing how individual recoveries contribute to the overall recovery of the system. To enhance the understanding of individual return behavior and the recovery of points of interest (POIs), we propose an agent-based model (ABM), called PostDisasterSim. We apply the model to analyze the recovery of five counties in Texas following Hurricane Harvey in 2017. Specifically, we construct a three-layer network comprising the human layer, the social infrastructure layer, and the physical infrastructure layer, using mobile phone location data and POI data. Based on prior studies and a household survey, we develop the ABM to simulate how evacuated individuals return to their homes and how social and physical infrastructures recover. By implementing the ABM, we unveil the heterogeneity in recovery dynamics in terms of agent types, housing types, household income levels, and geographical locations. Moreover, simulation results across nine scenarios quantitatively demonstrate the positive effects of social and physical infrastructure improvement plans. This study can assist disaster scientists in uncovering nuanced recovery patterns and policymakers in translating policies like resource allocation into practice.
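A drastically simplified version of such a return process can be sketched as follows; the per-step return probability, the POI-recovery step, and all parameter values here are illustrative placeholders, not those calibrated in the study:

```python
import random

def simulate_return(n_agents=1000, n_steps=30, base_p=0.05, poi_boost=0.10,
                    poi_recovery_step=10, seed=0):
    # Toy ABM return process: each evacuated agent returns home with a
    # small per-step probability, boosted once POIs (social and physical
    # infrastructure) have recovered. Returns cumulative returns per step.
    rng = random.Random(seed)
    returned = [False] * n_agents
    history = []
    for step in range(n_steps):
        p = base_p + (poi_boost if step >= poi_recovery_step else 0.0)
        for i in range(n_agents):
            if not returned[i] and rng.random() < p:
                returned[i] = True
        history.append(sum(returned))
    return history

history = simulate_return()  # cumulative count of returned agents over time
```

Even this toy version reproduces the qualitative S-shaped return curve: slow early returns, then a faster phase once infrastructure recovers.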
Understanding and characterizing the vulnerability of urban infrastructures, the engineering facilities essential for the regular functioning of cities that naturally take the form of networks, is of great value, with potential applications including protecting fragile facilities and designing robust topologies. Due to the strong correlation between different topological characteristics and infrastructure vulnerability, and the complicated evolution mechanisms involved, heuristic and machine-assisted analyses fall short in such scenarios. In this paper, we model the interdependent network as a heterogeneous graph and propose a system based on graph neural networks with reinforcement learning, trainable on real-world data, to characterize the vulnerability of the city system accurately. The presented system leverages deep learning techniques to understand and analyze the heterogeneous graph, enabling us to capture the risk of cascading failure and discover vulnerable infrastructures of cities. Extensive experiments under various settings demonstrate not only the expressive power of our system but also its transferability and the necessity of its specific components.
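The cascading-failure risk such a system must capture can be illustrated with a toy load-redistribution model (our own simplification, not the paper's learned model): when a node fails, its load is split among surviving neighbors, and any neighbor pushed over capacity fails in turn:

```python
def cascade(adj, load, capacity, seed_failure):
    # Simple load-redistribution cascade: a failed node's load is split
    # among its surviving neighbors; any neighbor pushed over capacity
    # fails in turn. Returns the set of failed nodes.
    failed = set()
    frontier = [seed_failure]
    load = dict(load)  # do not mutate the caller's loads
    while frontier:
        node = frontier.pop()
        if node in failed:
            continue
        failed.add(node)
        alive = [n for n in adj[node] if n not in failed]
        for n in alive:
            load[n] += load[node] / len(alive)
        for n in alive:
            if load[n] > capacity[n] and n not in frontier:
                frontier.append(n)
    return failed

# Hub-and-spoke toy grid: node 0 feeds 1-3; its failure overloads them all.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
load = {0: 3.0, 1: 0.8, 2: 0.8, 3: 0.8}
cap = {0: 4.0, 1: 1.5, 2: 1.5, 3: 1.5}
failed = cascade(adj, load, cap, 0)  # the failure of node 0 cascades to all
```

The combinatorics of which seed failures cascade and which are absorbed is exactly what makes learned vulnerability characterization attractive over exhaustive simulation.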
Smart grids are being increasingly deployed worldwide, as they constitute the electricity grid of the future, providing bidirectional communication between households. One of their main potential applications is the peer-to-peer (P2P) energy trading market, which promises users better electricity prices and higher incentives to produce renewable energy. However, most P2P markets require users to submit energy bids/offers in advance, which cannot account for unexpected surpluses of energy consumption/production. Moreover, the fine-grained metering information used in calculating and settling bills/rewards is inherently sensitive and must be protected in conformity with existing privacy regulations. To address these issues, this report proposes a novel privacy-preserving billing and settlement protocol, PPBSP, for use in local energy markets with imperfect bid-offer fulfillment, which uses only homomorphically encrypted versions of the half-hourly user consumption data. PPBSP also supports various cost-sharing mechanisms among market participants, including two new and improved methods of proportionally redistributing the cost of maintaining the balance of the grid in a fair manner. An informal privacy analysis is performed, highlighting the privacy-enhancing characteristics of the protocol, which include metering data and bill confidentiality. PPBSP is also evaluated in terms of computation cost and communication overhead, demonstrating its efficiency and feasibility for markets of varying sizes.
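The additive homomorphism such a protocol relies on can be illustrated with textbook Paillier encryption, where multiplying two ciphertexts yields an encryption of the sum of the plaintexts. This sketch uses toy primes and is not PPBSP's actual scheme:

```python
import math
import random

def paillier_keygen(p=104723, q=104729):
    # Textbook Paillier with toy primes (the 9999th and 10000th primes);
    # for illustration only -- real deployments need >=2048-bit moduli.
    n = p * q
    n2 = n * n
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow(lam, -1, n)  # since g = n + 1, L(g^lam mod n^2) = lam mod n
    return (n, g), (lam, mu, n)

def encrypt(pk, m, rng=random.Random(42)):
    n, g = pk
    n2 = n * n
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(sk, c):
    lam, mu, n = sk
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

pk, sk = paillier_keygen()
n2 = pk[0] ** 2
# Two half-hourly readings stay encrypted; multiplying the ciphertexts
# adds the plaintexts, so the operator learns only the total.
c_total = encrypt(pk, 120) * encrypt(pk, 75) % n2
total = decrypt(sk, c_total)  # 195, without revealing either reading
```

Bills can thus be computed over aggregates while individual half-hourly readings remain confidential, which is the core privacy property the protocol builds on.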
The study of network robustness is a critical tool in the characterization and sensemaking of complex interconnected systems such as infrastructure, communication, and social networks. While significant research has been conducted in all of these areas, gaps in the survey literature still exist, and answers to key questions are scattered across multiple scientific fields and numerous papers. In this survey, we distill key findings across numerous domains and provide researchers crucial access to important information by (1) summarizing and comparing recent and classical graph robustness measures; (2) exploring which robustness measures are most applicable to different categories of networks (e.g., social, infrastructure); (3) reviewing common network attack strategies and summarizing which attacks are most effective across different network topologies; and (4) discussing at length how to select defense techniques that mitigate attacks across a variety of networks. This survey guides researchers and practitioners in navigating the expansive field of network robustness while summarizing answers to key questions. We conclude by highlighting current research directions and open problems.
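As a concrete example of one classical robustness measure, the sketch below simulates a targeted (highest-degree-first) node-removal attack and reports the surviving giant-component size; this is a generic illustration, not code from any surveyed work:

```python
from collections import deque

def largest_component(nodes, adj):
    # Size of the largest connected component restricted to `nodes`.
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        queue, comp = deque([s]), 0
        seen.add(s)
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best

def degree_attack(adj, fraction):
    # Remove the highest-degree nodes first (a classic targeted attack)
    # and measure the surviving giant component, a standard robustness metric.
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    k = int(len(adj) * fraction)
    survivors = set(order[k:])
    return largest_component(survivors, adj)

# Hub-and-spoke graph: removing the single hub shatters the network.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
giant = degree_attack(adj, 0.2)  # removing node 0 leaves isolated spokes
```

Star-like topologies collapse under targeted attack but tolerate random failures well, one of the key topology-dependent findings such surveys compare.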
Games and simulators can be a valuable platform to execute complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by supporting the development of algorithms with complex benchmarks and the capability to rapidly iterate over new ideas. The success of AI algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community, which aims to explore similar techniques in military counterpart scenarios. Aiming to bridge the connection between games and military applications, this work discusses past and current efforts on how games and simulators, together with AI algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.
Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013~[1], it has attracted significant attention from researchers across multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses) up to 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in the prestigious venues of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminologies for non-experts in this domain. Finally, this article discusses challenges and the future outlook of this direction based on the literature reviewed herein and in [2].
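The flavor of the first-generation attacks can be conveyed with the Fast Gradient Sign Method (FGSM) applied to a model simple enough to differentiate by hand, a single logistic unit; the weights and example point below are arbitrary illustrations:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    # FGSM on a logistic model p = sigmoid(w.x + b): the gradient of the
    # cross-entropy loss w.r.t. the input is (p - y) * w, so the attack
    # steps each input feature by eps in the direction of its sign.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [0.4, 0.5], 1              # score 0.3 → p ≈ 0.57, predicted class 1
x_adv = fgsm(x, y, w, b, eps=0.3)  # small per-feature perturbation
score = 2.0 * x_adv[0] - 1.0 * x_adv[1]  # now negative: prediction flips
```

Each feature moves by at most eps, yet the prediction flips, which is the imperceptible-perturbation phenomenon the surveyed literature studies at scale on deep networks.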
The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Like their biological counterparts, sparse networks generalize just as well as, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial on sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation and the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned-parameter efficiency that could serve as a baseline for the comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
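The most common baseline in this setting, global magnitude pruning, can be sketched in a few lines; the tie-breaking rule here is one simplistic choice among several used in practice:

```python
def magnitude_prune(weights, sparsity):
    # Global magnitude pruning: zero out the fraction `sparsity` of
    # weights with the smallest absolute values -- the standard baseline
    # in the sparsification literature.
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else -1.0
    # Keep weights strictly above the threshold (ties may over-prune slightly).
    return [w if abs(w) > threshold else 0.0 for w in weights]

pruned = magnitude_prune([0.1, -0.5, 0.05, 2.0, -0.02, 0.7], sparsity=0.5)
# → [0.0, -0.5, 0.0, 2.0, 0.0, 0.7]
```

In practice the zeroed weights are stored in a sparse format (e.g., CSR) and the model is usually fine-tuned afterward to recover accuracy, which is where the training strategies surveyed here come in.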
As data are increasingly stored in separate silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside the system, compromising data privacy and system robustness. Beyond training powerful global models, it is of paramount importance to design FL systems that offer privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks on robustness and defenses against them, and 3) inference attacks on privacy and defenses against them, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by the various attacks and defenses. Finally, we discuss promising future research directions toward robust and privacy-preserving federated learning.
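A minimal illustration of a poisoning attack and one representative robust-aggregation defense (coordinate-wise median) is sketched below; the client updates are toy numbers, not drawn from any real FL system:

```python
import statistics

def fedavg(updates):
    # Plain federated averaging: element-wise mean of client updates.
    return [sum(col) / len(col) for col in zip(*updates)]

def median_aggregate(updates):
    # Coordinate-wise median, a representative robust aggregation rule:
    # a minority of poisoned clients cannot drag any coordinate far.
    return [statistics.median(col) for col in zip(*updates)]

honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
poisoned = honest + [[100.0, -100.0]]   # one malicious client's update
avg = fedavg(poisoned)                   # badly skewed by the attacker
robust = median_aggregate(poisoned)      # stays near the honest updates
```

A single outlier shifts the mean by an arbitrary amount, while the median moves only between neighboring honest values, the basic intuition behind many of the robustness defenses this survey taxonomizes.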