Many modern devices, including critical infrastructure, depend on the reliable operation of electrical power conversion systems. The small size and versatility of switched-mode power converters have resulted in their widespread adoption. Whereas transformer-based systems convert voltage passively, switched-mode converters feature an actively regulated feedback loop that relies on accurate sensor measurements. Previous academic work has shown that many types of sensors are vulnerable to Intentional Electromagnetic Interference (IEMI) attacks, and it has been postulated that power converters, too, are affected. In this paper, we present the first detailed study of IEMI attacks on switched-mode power converters, targeting their voltage and current sensors. We present a theoretical framework for evaluating IEMI attacks against feedback-based power supplies in the general case. We experimentally validate our theoretical predictions by analyzing multiple AC-DC and DC-DC converters, automotive-grade current sensors, and dedicated battery chargers, and demonstrate the systematic vulnerability of all examined categories under real-world conditions. Finally, we demonstrate that sensor attacks on power converters can cause permanent damage to Li-Ion batteries during the charging process.
Byzantine fault-tolerant (BFT) systems can maintain the availability and integrity of IoT systems in the presence of individual component failures, random data corruption, or malicious attacks. Fault-tolerant systems in general are essential for assuring continuity of service in mission-critical applications; however, their implementation can be challenging and expensive. In this study, IoT systems with Byzantine fault tolerance are considered. Analytical models and solutions are presented, along with a detailed analysis for evaluating availability. Byzantine fault tolerance is particularly important for blockchain mechanisms, and in turn for IoT, since it can provide a secure, reliable, and decentralized infrastructure through which IoT devices can communicate and transact with each other. The proposed model is based on continuous-time Markov chains and analyzes the availability of Byzantine fault-tolerant systems. While the breakdown and repair times follow exponential distributions, the number of Byzantine nodes in the network studied follows various distributions. The numerical results report availability as a function of the number of participants and the relative number of honest actors in the system. The model shows a non-linear relationship between the number of servers and network availability: availability decreases as the number of nodes in the system grows, and this effect strengthens as the ratio of the breakdown rate to the repair rate increases.
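The kind of availability computation described above can be illustrated with a simple birth-death chain. The sketch below is a minimal illustration, not the authors' exact model: it assumes each of the n nodes fails independently at rate lam, a single repair facility restores nodes at rate mu, and the system counts as available while at most f = (n - 1) // 3 nodes are down (the classical BFT threshold).

```python
from math import prod

def bft_availability(n: int, lam: float, mu: float) -> float:
    """Steady-state availability of an n-node BFT cluster modeled as a
    birth-death CTMC: state k is the number of failed nodes, the failure
    rate in state k is (n - k) * lam, and a single repair facility works
    at rate mu.  The system is available while at most f = (n - 1) // 3
    nodes are down."""
    f = (n - 1) // 3
    # Unnormalized steady-state probabilities pi_k / pi_0 from the
    # birth-death balance equations.
    weights = [prod((n - i) * lam / mu for i in range(k)) for k in range(n + 1)]
    total = sum(weights)
    return sum(weights[k] for k in range(f + 1)) / total
```

Consistent with the abstract's conclusion, this toy model yields lower availability for 6 nodes than for 4 (both tolerate f = 1 fault), and availability drops further as lam/mu grows.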
In this work, an integer linear programming (ILP) based model is proposed for computing a minimal-cost addition sequence for a given set of integers. Since exponents are additive under multiplication, a minimal-length addition sequence provides an economical way to evaluate a requested set of power terms. This, in turn, finds application in, e.g., window-based exponentiation for cryptography and polynomial evaluation. Not only is an optimal model proposed, but it is also extended to consider different costs for multipliers and squarers, as well as to control the depth of the resulting addition sequence.
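For intuition about the quantity the ILP minimizes, a shortest addition sequence for a small target set can be recovered by exhaustive search. The sketch below is an illustrative brute-force search, not the proposed ILP model; it assumes unit cost per addition and restricts itself to ascending sequences starting at 1.

```python
from itertools import combinations_with_replacement

def shortest_addition_sequence(targets):
    """Breadth-first search for a shortest ascending addition sequence
    containing every target exponent.  Each new element is the sum of
    two (not necessarily distinct) earlier elements -- the count of such
    additions is what an ILP formulation would minimize; here it is
    found by exhaustive search, feasible only for tiny inputs."""
    targets = frozenset(targets)
    limit = max(targets)
    start = (1,)
    frontier = [start]
    seen = {start}
    while frontier:
        nxt = []
        for seq in frontier:
            if targets <= set(seq):
                return list(seq)
            for a, b in combinations_with_replacement(seq, 2):
                s = a + b
                # Keep sequences ascending and bounded by the largest target.
                if seq[-1] < s <= limit:
                    new = seq + (s,)
                    if new not in seen:
                        seen.add(new)
                        nxt.append(new)
        frontier = nxt
    return None
```

For example, for the target set {5, 7} the search finds a five-element sequence such as 1, 2, 3, 5, 7 (5 = 2 + 3, 7 = 2 + 5); no four-element sequence contains both targets.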
Thanks to rapidly developing technology, unmanned aerial vehicles (UAVs) are able to complete a number of tasks in cooperation with each other, without the need for human intervention. In recent years, UAVs, which are widely utilized in military missions, have begun to be deployed in civilian applications, mostly for commercial purposes. With their growing numbers and range of applications, UAVs are becoming more and more popular; on the other hand, they are also the target of various threats that exploit vulnerabilities of UAV systems to cause destructive effects. It is therefore critical to ensure the security of UAVs and of the networks that provide communication between them. In this survey, we present a comprehensive and detailed approach to security by classifying possible attacks against UAVs and flying ad hoc networks (FANETs). We classify the security threats into four major categories corresponding to the basic structure of UAVs: hardware attacks, software attacks, sensor attacks, and communication attacks. In addition, countermeasures against these attacks are presented in separate groups as prevention and detection. In particular, we focus on the security of FANETs, which face significant security challenges due to their characteristics and are also vulnerable to insider attacks. This survey therefore reviews the security fundamentals of FANETs, and four different routing attacks against FANETs are simulated with realistic parameters and then analyzed. Finally, limitations and open issues are discussed to direct future work.
Sensitive information is intrinsically tied to interactions in healthcare, and its protection is of paramount importance for achieving high-quality patient outcomes. Research in healthcare privacy and security predominantly focuses on understanding the factors that increase users' susceptibility to privacy and security breaches. To deepen this understanding, we systematically review 26 research papers covering existing user studies in healthcare privacy and security. Following the review, we conducted a card-sorting exercise that allowed us to identify 12 themes integral to this subject, such as "Data Sharing," "Risk Awareness," and "Privacy." Beyond identifying these themes, we performed an in-depth analysis of the 26 research papers and report insights into the discourse within the research community about healthcare privacy and security, particularly from the user perspective.
Today's industrial control systems consist of tightly coupled components, allowing adversaries to exploit security attack surfaces on the information technology side and thereby gain access to automation devices residing at the operational technology level in order to compromise their safety functions. To address these concerns, we propose a model-based testing approach, which we consider a promising way to analyze the safety and security behavior of a system under test, providing means to protect its components and to increase the quality and efficiency of the overall system. The underlying framework is divided into four parts, according to the critical factors in testing operational technology environments. As a first step, this paper describes the ingredients of the envisioned framework. A system model provides an overview of possible attack surfaces, while the foundations of testing and the recommendation of mitigation strategies will be based on process-specific safety and security standard procedures combined with existing vulnerability databases.
Rapidly developing artificial intelligence (AI) technology has enabled a variety of applied systems deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on, which not only degrades the user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce a theoretical framework covering important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey leading industry approaches in these aspects. To unify the currently fragmented approaches to trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition to model development, to deployment, and finally to continuous monitoring and governance. Within this framework, we offer concrete action items for practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges in the future development of trustworthy AI systems, highlighting the need for a paradigm shift towards comprehensively trustworthy AI systems.
Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013~[1], it has attracted significant attention from researchers across multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses) up to the year 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in the prestigious venues of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminology for non-experts in this domain. Finally, this article discusses the challenges and future outlook of this direction based on the literature reviewed herein and in [2].
A Command, Control, Communication, and Intelligence (C3I) system is a system-of-systems that integrates computing machines, sensors, and communication networks. C3I systems are increasingly used in critical civil and military operations to achieve information superiority, assurance, and operational efficacy. Like traditional systems, C3I systems face widespread cyber-threats. However, the sensitive nature of their application domains (e.g., military operations) makes security a critical concern. For instance, a cyber-attack on military installations can have detrimental impacts on national security. Therefore, in this paper, we review the state of the art in the security of C3I systems. In particular, this paper aims to identify security vulnerabilities, attack vectors, and countermeasures for C3I systems. We used the well-known systematic literature review method to select and review 77 studies on the security of C3I systems. Our review enabled us to identify 27 vulnerabilities, 22 attack vectors, and 62 countermeasures for C3I systems. It has also revealed several areas for future research and identified key lessons with regard to the security of C3I systems.
As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside the system, compromising data privacy and system robustness. Besides training powerful global models, it is of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses for robustness; and 3) inference attacks and defenses for privacy, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
While existing work in robust deep learning has focused on small pixel-level $\ell_p$ norm-based perturbations, these may not account for perturbations encountered in several real-world settings. In many such cases, although test data might not be available, broad specifications of the types of perturbations (such as an unknown degree of rotation) may be known. We consider a setup where robustness is expected over an unseen test domain that is not i.i.d. with, but deviates from, the training domain. While this deviation may not be known exactly, its broad characterization is specified a priori in terms of attributes. We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space, without having access to data from the test domain. Our adversarial training solves a min-max optimization problem, with the inner maximization generating adversarial perturbations and the outer minimization finding model parameters by optimizing the loss on the adversarial perturbations generated by the inner maximization. We demonstrate the applicability of our approach on three types of naturally occurring perturbations: object-related shifts, geometric transformations, and common image corruptions. Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations. We demonstrate the usefulness of the proposed approach by showing the robustness gains of deep neural networks trained with our adversarial training on MNIST, CIFAR-10, and a new variant of the CLEVR dataset.
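The min-max structure of this kind of attribute-space adversarial training can be sketched on a one-dimensional toy problem. The example below is a hypothetical illustration, not the paper's method: the "attribute" is a scalar horizontal shift, the inner maximization is a grid search over the shift range, and the outer minimization is SGD on the logistic loss of a linear classifier.

```python
import math
import random

def adv_train_1d(data, steps=200, lr=0.1, shift_range=(-0.5, 0.5)):
    """Toy attribute-space adversarial training.  data is a list of
    (x, y) pairs with labels y in {-1, +1}.  Inner maximization: pick
    the shift s (the attribute) on a grid that maximizes the logistic
    loss of the current linear model.  Outer minimization: take one SGD
    step on the loss at that worst-case shifted point."""
    w, b = 0.0, 0.0
    lo, hi = shift_range
    grid = [lo + i * (hi - lo) / 10 for i in range(11)]
    for _ in range(steps):
        x, y = random.choice(data)

        def loss(s):
            # Logistic loss log(1 + exp(-y * f(x + s))) for the shifted input.
            return math.log1p(math.exp(-y * (w * (x + s) + b)))

        # Inner maximization over the attribute (shift) grid.
        s_star = max(grid, key=loss)
        # Outer minimization: gradient step at the adversarial point.
        z = y * (w * (x + s_star) + b)
        g = -y / (1.0 + math.exp(z))  # d(loss)/d(w*x'+b)
        w -= lr * g * (x + s_star)
        b -= lr * g
    return w, b
```

On linearly separable data the trained model classifies the points correctly even at the worst shift in the grid; replacing the grid search with projected gradient ascent over the attribute would bring the sketch closer to the learned-generator formulation described above.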