
Critical infrastructures (CI) and industrial organizations are aggressively integrating elements of modern Information Technology (IT) into their monolithic Operational Technology (OT) architectures. Yet, as OT systems become increasingly interconnected, they have quietly turned into alluring targets for diverse groups of adversaries. Meanwhile, the inherent complexity of these systems, along with their aging nature, prevents defenders from fully applying contemporary security controls in a timely manner. Indeed, the combination of these hindering factors has led to some of the most severe cybersecurity incidents of recent years. This work contributes a full-fledged and up-to-date survey of the most prominent threats against Industrial Control Systems (ICS), along with the communication protocols and devices adopted in these environments. Our study highlights that threats against CI follow an upward spiral due to the mushrooming of commodity tools and techniques that can facilitate either the early or late stages of attacks. Furthermore, our survey exposes that existing vulnerabilities in the design and implementation of several OT-specific network protocols may easily grant adversaries the ability to decisively impact physical processes. We provide a categorization of such threats and the corresponding vulnerabilities based on various criteria. To the best of our knowledge, this is the first time an exhaustive and detailed survey of this kind has been attempted.
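
To make the protocol-design weakness concrete, the sketch below shows why a protocol such as Modbus/TCP is so exposed: the wire format carries no authentication, so any host with network reach can issue state-changing commands. This is a minimal illustration built from the publicly documented frame layout; the host address, unit ID, and coil number are hypothetical placeholders for a lab setup, not values from the survey.

```python
import socket
import struct

def write_single_coil(host: str, unit_id: int, coil: int, on: bool) -> bytes:
    """Send an unauthenticated Modbus/TCP 'Write Single Coil' request."""
    # PDU: function code 0x05, coil address, value (0xFF00 = ON, 0x0000 = OFF)
    pdu = struct.pack(">BHH", 0x05, coil, 0xFF00 if on else 0x0000)
    # MBAP header: transaction ID, protocol ID (0 = Modbus), length, unit ID
    mbap = struct.pack(">HHHB", 0x0001, 0x0000, len(pdu) + 1, unit_id)
    with socket.create_connection((host, 502), timeout=5) as sock:
        sock.sendall(mbap + pdu)
        return sock.recv(256)  # a well-behaved device echoes the request

if __name__ == "__main__":
    # Only run against equipment you own or are explicitly authorized to test.
    print(write_single_coil("192.0.2.10", unit_id=1, coil=0, on=True).hex())
```

Nothing in this exchange proves the sender's identity, which is precisely the design-level gap the survey flags in several OT protocols.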

Related content

As cross-chain technologies make interactions among different blockchains (hereinafter "chains") possible, multi-chain consensus is becoming more and more important in blockchain networks. However, most attention has been paid to single-chain consensus schemes; multi-chain consensus with trusted miner participation has not been considered, offering opportunities for malicious users to launch Diverse Miners Behaviors (DMB) attacks on different chains. DMB attackers can behave honestly in the consensus process of some chains, called mask-chains, to enhance their trust value, while on other chains, called kill-chains, they engage in destructive network behaviors. In this paper, we propose a multi-chain consensus scheme named Proof-of-DiscTrust (PoDT) to defend against DMB attacks. The distinctive trust idea (DiscTrust) is introduced to evaluate the trust value of each user separately for each chain. A dynamic behavior prediction scheme is designed to strengthen DiscTrust against intensive DMB attackers who maintain high trust by alternately creating true and false blocks on kill-chains. On this basis, a trusted miner selection algorithm for multiple chains can be run at each round of block creation. Experimental results show that PoDT is secure against DMB attacks and more effective than traditional consensus schemes in multi-chain environments.
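
The abstract does not give PoDT's exact update rules, so the following is only an illustrative sketch of the per-chain trust idea: a miner's reputation earned on mask-chains cannot be spent on a kill-chain, and an asymmetric update (slow reward, sharp penalty) keeps alternating honest/dishonest behavior below the selection threshold. The class name, threshold, and update constants are assumptions.

```python
from collections import defaultdict

TRUST_THRESHOLD = 0.6  # hypothetical selection cutoff

class DiscTrustSketch:
    def __init__(self):
        # trust[miner][chain] -> value in [0, 1], neutral default
        self.trust = defaultdict(lambda: defaultdict(lambda: 0.5))

    def record_block(self, miner: str, chain: str, valid: bool):
        t = self.trust[miner][chain]
        # Reward honest blocks slowly, punish invalid blocks sharply, so
        # alternating true/false blocks cannot hold trust above threshold.
        self.trust[miner][chain] = min(1.0, t + 0.05) if valid else t * 0.5

    def select_miners(self, candidates, chain, k):
        ranked = sorted(candidates, key=lambda m: self.trust[m][chain], reverse=True)
        return [m for m in ranked if self.trust[m][chain] >= TRUST_THRESHOLD][:k]

dt = DiscTrustSketch()
dt.record_block("alice", "chainA", valid=True)    # mask-chain behavior
dt.record_block("alice", "chainB", valid=False)   # kill-chain behavior
print(dt.select_miners(["alice"], "chainB", k=1))  # alice excluded on chainB
```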

Abstruse learning algorithms and complex datasets increasingly characterize modern clinical decision support systems (CDSS). As a result, clinicians cannot easily or rapidly scrutinize a CDSS recommendation when facing a difficult diagnosis or treatment decision in practice; over-trust and under-trust are frequent. Prior research has explored supporting such assessments by explaining decision support tool (DST) data inputs and algorithmic mechanisms. This paper explores a different approach: providing precisely relevant scientific evidence from the biomedical literature. We present a proof-of-concept system, Clinical Evidence Engine, to demonstrate the technical and design feasibility of this approach across three domains (cardiovascular diseases, autism, cancer). Leveraging Clinical BioBERT, the system can effectively identify clinical trial reports based on lengthy clinical questions (e.g., "risks of catheter infection among adult patients in intensive care unit who require arterial catheters, if treated with povidone iodine-alcohol"). This capability enables the system to identify clinical trials relevant to diagnostic/treatment hypotheses -- a clinician's or a CDSS's. Further, Clinical Evidence Engine can identify key parts of a clinical trial abstract, including patient population (e.g., adult patients in intensive care unit who require arterial catheters), intervention (povidone iodine-alcohol), and outcome (risks of catheter infection). This capability opens up the possibility of enabling clinicians to 1) rapidly determine whether a clinical trial matches a clinical question, and 2) understand the result and context of the trial without extensive reading. We demonstrate this potential by illustrating two example use scenarios of the system. We discuss the idea of designing DST explanations not as specific to a DST or an algorithm, but as a domain-agnostic decision support infrastructure.
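
One plausible way such a retrieval step could work is dense retrieval over a clinical BERT encoder. The sketch below ranks trial abstracts against a clinical question by cosine similarity of mean-pooled embeddings; the public Bio_ClinicalBERT checkpoint and the pooling choice are assumptions on my part, not the paper's documented pipeline.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint; the paper names Clinical BioBERT but not its exact setup.
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)          # mean pooling

question = ("risks of catheter infection among adult ICU patients who require "
            "arterial catheters, if treated with povidone iodine-alcohol")
abstracts = ["...candidate trial abstract 1...", "...candidate trial abstract 2..."]

q, docs = embed([question]), embed(abstracts)
scores = torch.nn.functional.cosine_similarity(q, docs)
print(scores.argsort(descending=True))  # abstracts ranked by relevance
```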

Artificial Intelligence (AI) is one of the disruptive technologies shaping the future. It has growing applications for data-driven decisions in major smart city solutions, including transportation, education, healthcare, public governance, and power systems. At the same time, it is gaining popularity in protecting critical cyber infrastructure from cyber threats, attacks, damage, or unauthorized access. However, one significant issue with traditional AI technologies (e.g., deep learning) is that their rapid growth in complexity and sophistication has turned them into uninterpretable black boxes. On many occasions, it is very challenging to understand the decisions and biases well enough to control and trust a system's unexpected or seemingly unpredictable outputs. It is acknowledged that this loss of control over the interpretability of decision-making is becoming a critical issue for many data-driven automated applications. But how may it affect a system's security and trustworthiness? This chapter conducts a comprehensive study of machine learning applications in cybersecurity to highlight the need for explainability in answering this question. In doing so, the chapter first discusses the black-box problems of AI technologies for cybersecurity applications in smart-city-based solutions. Then, considering the new technological paradigm of Explainable Artificial Intelligence (XAI), it discusses the transition from black-box to white-box models. It also discusses the transition requirements concerning the interpretability, transparency, understandability, and explainability of AI-based technologies as applied to different autonomous systems in smart cities. Finally, it presents some commercial XAI platforms that offer explainability over traditional AI technologies, before presenting future challenges and opportunities.

We present a new type of attack in which source code is maliciously encoded so that it appears different to a compiler and to the human eye. This attack exploits subtleties in text-encoding standards such as Unicode to produce source code whose tokens are logically encoded in a different order from the one in which they are displayed, leading to vulnerabilities that cannot be perceived directly by human code reviewers. 'Trojan Source' attacks, as we call them, pose an immediate threat both to first-party software and to supply chains across the industry. We present working examples of Trojan Source attacks in C, C++, C#, JavaScript, Java, Rust, Go, and Python. We propose definitive compiler-level defenses, and describe other mitigating controls that can be deployed in editors, repositories, and build pipelines while compilers are upgraded to block this attack.
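
A sketch of the kind of repository or CI check the mitigations describe: scan source files for Unicode bidirectional control characters, the mechanism these attacks abuse to make displayed token order diverge from logical order. The script is a minimal illustration, not the authors' tooling.

```python
import sys

# Unicode bidirectional control characters exploited by Trojan Source attacks.
BIDI_CONTROLS = {
    "\u202A": "LRE", "\u202B": "RLE", "\u202C": "PDF",
    "\u202D": "LRO", "\u202E": "RLO",
    "\u2066": "LRI", "\u2067": "RLI", "\u2068": "FSI", "\u2069": "PDI",
}

def scan(path: str) -> int:
    """Report every bidi control character in a source file; return a count."""
    findings = 0
    with open(path, encoding="utf-8") as handle:
        for lineno, line in enumerate(handle, start=1):
            for col, ch in enumerate(line, start=1):
                if ch in BIDI_CONTROLS:
                    print(f"{path}:{lineno}:{col}: {BIDI_CONTROLS[ch]} (U+{ord(ch):04X})")
                    findings += 1
    return findings

if __name__ == "__main__":
    # Non-zero exit fails the build if any file contains bidi controls.
    total = sum(scan(p) for p in sys.argv[1:])
    sys.exit(1 if total else 0)
```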

Protecting the privacy of cyber-physical systems (CPS) data during sharing, aggregation, and publishing is a challenging problem. Several privacy protection mechanisms have been developed in the literature to protect sensitive data from adversarial analysis and eliminate the risk of re-identifying the original properties of shared data. However, most existing solutions have drawbacks, such as (i) lack of a proper vulnerability characterization model to accurately identify where privacy is needed, (ii) ignoring data providers' privacy preferences, (iii) using uniform privacy protection, which may create inadequate privacy for some providers while overprotecting others, and (iv) lack of a comprehensive privacy quantification model to assure data privacy preservation. To address these issues, we propose a personalized privacy preference framework that characterizes and quantifies CPS vulnerabilities while ensuring privacy. First, we introduce a standard vulnerability profiling library (SVPL) that ranks the nodes of an energy CPS from most to least vulnerable based on their privacy loss. Building on this model, we present our personalized privacy framework (PDP), in which Laplace noise is added according to each node's selected privacy preference. Finally, combining these two methods, we demonstrate that our privacy characterization and quantification model can attain better privacy preservation by eliminating the trade-off between privacy, utility, and the risk of losing information.
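
The core mechanism the abstract describes is the standard Laplace mechanism with a per-node privacy budget: noise scale is sensitivity divided by that node's epsilon, so more vulnerable nodes (smaller epsilon) receive more noise. A minimal sketch follows; the node names, epsilons, and sensitivity value are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
SENSITIVITY = 1.0  # assumed max change one reading can cause in the shared value

def privatize(value: float, epsilon: float) -> float:
    # Smaller epsilon (more vulnerable node) => larger noise => more privacy.
    return value + rng.laplace(loc=0.0, scale=SENSITIVITY / epsilon)

# Hypothetical per-node privacy preferences and smart-meter readings.
node_preferences = {"meter-01": 0.1, "meter-02": 0.5, "meter-03": 2.0}
readings = {"meter-01": 42.0, "meter-02": 37.5, "meter-03": 40.2}

for node, eps in node_preferences.items():
    print(node, round(privatize(readings[node], eps), 2))
```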

As a disruptive technology originating from cryptocurrency, blockchain provides a trusted platform to facilitate industrial IoT (IIoT) applications. However, implementing a blockchain platform in IIoT scenarios confronts various security challenges due to harsh deployment conditions. To this end, we present a novel design of a secure blockchain based on trusted computing hardware for IIoT applications. Specifically, we employ a trusted execution environment (TEE) module and a customized security chip to safeguard the blockchain against different attack vectors. Furthermore, we implement the proposed secure IIoT blockchain on an ARM-based embedded device and build a small-scale IIoT network to evaluate its performance. Our experimental results show that the secure blockchain platform achieves high throughput (150 TPS) with low transaction confirmation delay (below 66 ms), demonstrating its feasibility in practical IIoT scenarios. Finally, we outline open challenges and future research directions.
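
The abstract does not detail the design's internals, so the following is only a conceptual sketch of the role trusted hardware plays in such an architecture: the signing key lives inside the trusted module, and untrusted blockchain code can only request signatures, never read the key. Real deployments would use hardware attestation and asymmetric keys; the in-process HMAC stand-in below is purely illustrative.

```python
import hashlib
import hmac
import json
import time

class TrustedEnclaveStub:
    """Stand-in for a TEE / security chip that seals the signing key."""
    def __init__(self, sealed_key: bytes):
        self.__key = sealed_key  # never exposed outside the "enclave"

    def sign(self, payload: bytes) -> str:
        return hmac.new(self.__key, payload, hashlib.sha256).hexdigest()

def make_block(enclave: TrustedEnclaveStub, prev_hash: str, txs: list) -> dict:
    # The untrusted host assembles the block but delegates signing.
    body = {"prev": prev_hash, "txs": txs, "ts": time.time()}
    payload = json.dumps(body, sort_keys=True).encode()
    return {**body, "sig": enclave.sign(payload)}

enclave = TrustedEnclaveStub(sealed_key=b"device-unique-sealed-key")
block = make_block(enclave, prev_hash="00" * 32, txs=["tx1", "tx2"])
print(block["sig"][:16], "...")
```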

Unmanned aerial vehicles (UAVs) are gaining immense attention due to their potential to revolutionize various businesses and industries. However, the adoption of UAV-assisted applications will strongly rely on reliable systems that allow UAV operations to be managed at high levels of safety and security. Recently, the concept of UAV traffic management (UTM) has been introduced to support safe, efficient, and fair access to low-altitude airspace for commercial UAVs. A UTM system identifies multiple cooperating parties with different roles and levels of authority to provide real-time services to airspace users. However, current UTM systems are centralized and lack a clear definition of the protocols that govern secure interaction between authorities, service providers, and end-users; this renders the UTM system unscalable and prone to various cyber attacks. Another limitation of currently proposed UTM architectures is the absence of an efficient mechanism to enforce airspace rules and regulations. To address these issues, we propose a decentralized UTM protocol that controls access to airspace while ensuring high levels of integrity, availability, and confidentiality of airspace operations. To achieve this, we exploit key features of blockchain and smart contract technologies. In addition, we employ a mobile crowdsensing (MCS) mechanism to seamlessly enforce the airspace rules and regulations that govern UAV operations. The solution is implemented on top of the Ethereum platform and verified using four different smart contract verification tools. We also provide a security and cost analysis of our solution. For reproducibility, we have made our implementation publicly available on GitHub.
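
For intuition, here is the access-control logic a UTM smart contract might encode, written as plain Python rather than Solidity: operators register, request an airspace corridor, and a rule check (here a simple no-fly-zone test) gates approval. The zone model, roles, and field names are my assumptions; the paper's actual contracts target Ethereum.

```python
from dataclasses import dataclass, field

@dataclass
class AirspaceRegistry:
    no_fly_zones: set = field(default_factory=set)   # restricted zone cell IDs
    registered: set = field(default_factory=set)     # operator addresses
    flights: dict = field(default_factory=dict)      # flight_id -> granted cells

    def register_operator(self, operator: str):
        self.registered.add(operator)

    def request_flight(self, operator: str, flight_id: str, cells: list) -> bool:
        if operator not in self.registered:
            return False                              # unknown operator
        if any(c in self.no_fly_zones for c in cells):
            return False                              # violates regulations
        self.flights[flight_id] = cells               # grant the corridor
        return True

reg = AirspaceRegistry(no_fly_zones={"cell-17"})
reg.register_operator("0xOperatorA")
print(reg.request_flight("0xOperatorA", "f1", ["cell-03", "cell-04"]))  # True
print(reg.request_flight("0xOperatorA", "f2", ["cell-17"]))             # False
```

On-chain, the crowdsensing reports described in the abstract would feed observations (e.g., zone violations) back into a contract like this to enforce penalties.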

Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013 [1], it has attracted significant attention from researchers across multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses) until 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a legacy sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in prestigious venues of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminology for non-experts in this domain. Finally, it discusses the challenges and future outlook of this direction, based on the literature reviewed herein and in [2].
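
As a concrete instance of the attack family the survey covers, here is the canonical fast gradient sign method (FGSM), one of the first-generation attacks: perturb the input by the sign of the loss gradient, scaled by a small epsilon so the change stays visually imperceptible. A minimal PyTorch sketch; the model and epsilon budget are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x: torch.Tensor, label: torch.Tensor, eps: float = 8 / 255):
    """One-step FGSM attack on a differentiable classifier."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # step that increases the loss
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in range

# Usage sketch (model and data are assumptions):
#   model = torchvision.models.resnet18(weights="DEFAULT").eval()
#   x_adv = fgsm(model, images, labels)
```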

A key challenge of big data analytics is how to collect a large volume of (labeled) data. Crowdsourcing aims to address this challenge by aggregating and estimating high-quality data (e.g., sentiment labels for text) from pervasive clients/users. Existing studies on crowdsourcing focus on designing new methods to improve the quality of data aggregated from unreliable/noisy clients. However, the security aspects of such crowdsourcing systems remain under-explored to date. We aim to bridge this gap in this work. Specifically, we show that crowdsourcing is vulnerable to data poisoning attacks, in which malicious clients provide carefully crafted data to corrupt the aggregated result. We formulate our proposed data poisoning attacks as an optimization problem that maximizes the error of the aggregated data. Our evaluation on one synthetic and two real-world benchmark datasets demonstrates that the proposed attacks can substantially increase the estimation errors of the aggregated data. We also propose two defenses to reduce the impact of malicious clients. Our empirical results show that the proposed defenses can substantially reduce the estimation errors caused by data poisoning attacks.
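
The paper's attack chooses malicious responses by solving an optimization problem; the simplified sketch below only illustrates the underlying threat model with naive label-flipping against majority-vote aggregation, to show why unprotected aggregation is fragile. All sizes and accuracy rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_honest, n_malicious = 200, 20, 8
truth = rng.integers(0, 2, n_items)              # ground-truth binary labels

# Honest clients label correctly 80% of the time; attackers always flip.
honest = np.where(rng.random((n_honest, n_items)) < 0.8, truth, 1 - truth)
malicious = np.tile(1 - truth, (n_malicious, 1))

votes = np.vstack([honest, malicious])
aggregated = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote
print("error rate with poisoning:", np.mean(aggregated != truth))
```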

As data are increasingly stored in different silos and societies become more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. However, existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besides training powerful global models, it is therefore of paramount importance to design FL systems that have privacy guarantees and are resistant to different types of adversaries. In this paper, we conduct the first comprehensive survey on this topic. Through a concise introduction to the concept of FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks on robustness and their defenses, and 3) inference attacks on privacy and their defenses, we provide an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks and defenses. Finally, we discuss promising future research directions towards robust and privacy-preserving federated learning.
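
To illustrate one defense family such surveys cover, the sketch below contrasts plain FedAvg-style mean aggregation with a coordinate-wise median, a robust aggregation rule from the literature: a minority of poisoned updates skews the mean badly but barely moves the median. Update dimensions, client counts, and the attacker's strategy are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
dim, n_honest, n_poisoned = 10, 8, 2

honest_updates = rng.normal(loc=0.1, scale=0.05, size=(n_honest, dim))
poisoned_updates = np.full((n_poisoned, dim), 100.0)   # crude model poisoning
updates = np.vstack([honest_updates, poisoned_updates])

fedavg = updates.mean(axis=0)        # mean: dragged far by the attackers
robust = np.median(updates, axis=0)  # median: stays near honest updates

print("mean update   :", np.round(fedavg[:3], 2))
print("median update :", np.round(robust[:3], 2))
```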
