
The evolution of the Internet of Things (IoT) has increased the connectivity of personal devices, which largely reflect the habits and behavior of their owners. These environments demand access control mechanisms to protect them against intruders, such as Sybil attackers, that can compromise data privacy or disrupt network operation. The Social IoT paradigm enables access control systems to aggregate community context and sociability information from devices to enhance robustness and security. This work introduces the ELECTRON mechanism, which controls access in IoT networks based on social trust between devices in order to protect the network from Sybil attackers. ELECTRON groups IoT devices into communities by their social similarity and evaluates their social trust, strengthening the reliability between legitimate devices and their resilience against interaction with Sybil attackers. NS-3 simulations show ELECTRON's performance under Sybil attacks on several IoT communities: it detected more than 90% of attackers in a 150-node scenario spanning office, school, gym, and park communities, and achieved around 90% detection in other scenarios with the same communities. Furthermore, it provided high accuracy, between 90% and 95%, and false positive rates close to zero.
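The abstract does not spell out ELECTRON's similarity or trust formulas, so the following is only a minimal sketch of the general idea, assuming Jaccard similarity over device interest sets and a simple average of peer trust ratings; all names and thresholds are illustrative, not the paper's actual design.

```python
# Sketch of social-similarity grouping and trust scoring in the spirit of
# ELECTRON. The Jaccard metric, the thresholds, and the function names are
# illustrative assumptions; the paper's formulas may differ.

def jaccard(a: set, b: set) -> float:
    """Social similarity as overlap of device interest/contact sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def build_communities(devices: dict, threshold: float = 0.5) -> list:
    """Greedily group devices whose similarity to a community representative
    exceeds a threshold."""
    communities = []
    for dev, interests in devices.items():
        for community in communities:
            rep = next(iter(community))
            if jaccard(interests, devices[rep]) >= threshold:
                community.add(dev)
                break
        else:
            communities.append({dev})
    return communities

def trust_score(ratings: list) -> float:
    """Aggregate peer ratings; a low score flags a possible Sybil identity."""
    return sum(ratings) / len(ratings) if ratings else 0.0

devices = {
    "phone_a": {"office", "gym"},
    "phone_b": {"office", "park"},
    "sybil_x": {"unknown"},
}
print(build_communities(devices, threshold=0.3))
print("suspicious" if trust_score([0.1, 0.2]) < 0.5 else "trusted")
```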

Related Content

Networking: IFIP International Conferences on Networking. Explanation: international conference on networking. Publisher: IFIP. SIT:

Smart contract-enabled blockchains allow building decentralized applications in which mutually distrusting parties can work together. Recently, oracle services have emerged to provide these applications with real-world data feeds. Unfortunately, these capabilities have been used for malicious purposes under what are called criminal smart contracts. A few works have explored this dark side and shown a variety of such attacks. However, none of them considered collaborative attacks against targets that reside outside the blockchain ecosystem. In this paper, we bridge this gap and introduce a smart contract-based framework that allows a sponsor to orchestrate a collaborative attack among (pseudo)anonymous attackers and reward them for it. While all previous works required a technique to quantify an attacker's individual contribution, which could be infeasible with respect to real-world targets, our framework avoids that. This is done by developing a novel scheme for trustless collaboration through betting. That is, attackers bet on an event (i.e., the attack takes place) and then work on making that event happen (i.e., perform the attack). Taking DDoS as a use case, we formulate the attackers' interaction as a game and formally prove that these attackers will collaborate in proportion to the amount of their bets in the game's unique equilibrium. We also model our framework and its reward function as an incentive mechanism and prove that it is strategy-proof and budget-balanced. Finally, we conduct numerical simulations to demonstrate the equilibrium behavior of our framework.
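As a rough illustration of the betting scheme described above, the sketch below settles a proportional payout: if the attack event occurs, the sponsor's reward plus the bet pool is split in proportion to each stake, which keeps total payouts equal to the available funds. The all-or-nothing payout rule and all names are simplifying assumptions, not the paper's exact reward function.

```python
# Toy sketch of a betting-style reward split. If the event occurs, the pool
# plus the sponsor's reward is divided in proportion to each bet; otherwise
# stakes are forfeited. Payouts never exceed sponsor_reward + sum(bets),
# i.e. the toy mechanism stays budget-balanced.

def settle_bets(bets: dict, sponsor_reward: float,
                event_occurred: bool) -> dict:
    pool = sum(bets.values())
    if not event_occurred:
        return {who: 0.0 for who in bets}          # stakes are forfeited
    total = pool + sponsor_reward
    return {who: total * stake / pool for who, stake in bets.items()}

payouts = settle_bets({"a1": 60.0, "a2": 40.0}, sponsor_reward=100.0,
                      event_occurred=True)
print(payouts)   # {'a1': 120.0, 'a2': 80.0} -- proportional to bets
```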

Anti-malware agents typically communicate with their remote services to share information about suspicious files. These remote services use their up-to-date information and global context (view) to help classify the files and instruct their agents to take a predetermined action (e.g., delete or quarantine). In this study, we provide a security analysis of a specific form of communication between anti-malware agents and their services, which takes place entirely over the insecure DNS protocol. These services, which we denote DNS anti-malware list (DNSAML) services, affect the classification of files scanned by anti-malware agents, potentially putting their consumers at risk due to known integrity and confidentiality flaws of the DNS protocol. By analyzing a large-scale DNS traffic dataset made available to the authors by a well-known CDN provider, we identify anti-malware solutions that appear to make use of DNSAML services. We found that these solutions, deployed on almost three million machines worldwide, exchange hundreds of millions of DNS requests daily. These requests carry sensitive file scan information, oftentimes, as we demonstrate, without any additional safeguards to compensate for the insecurities of the DNS protocol. As a result, anti-malware solutions that use DNSAML are vulnerable to DNS attacks. For instance, an attacker capable of tampering with DNS queries gains the ability to alter the classification of scanned files without any presence on the scanning machine. We showcase three attacks, applicable to at least three anti-malware solutions, that could result in the disclosure of sensitive information and improper behavior of the anti-malware agent, such as ignoring detected threats. Finally, we propose and review a set of countermeasures for anti-malware solution providers to prevent the attacks stemming from the use of DNSAML services.
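To make the DNSAML pattern concrete, here is a hypothetical sketch of how an agent could encode a file hash into a DNS query name and read a verdict out of the response. The domain, label layout, and verdict convention are invented for illustration; the paper's point is that such an exchange travels in plaintext over UDP port 53 and can be observed or tampered with by an on-path attacker.

```python
# Illustrative DNSAML-style lookup: the file hash is split across DNS
# labels and the resolved A record encodes the verdict. All conventions
# here (domain, label split, "last octet is the verdict") are hypothetical.
import hashlib
import socket

def dnsaml_lookup(file_bytes: bytes, zone: str = "lookup.example-av.test") -> str:
    digest = hashlib.sha256(file_bytes).hexdigest()
    qname = f"{digest[:32]}.{digest[32:]}.{zone}"   # hash split across labels
    try:
        answer = socket.gethostbyname(qname)        # plaintext UDP/53 round trip
    except socket.gaierror:
        return "unknown"
    # Hypothetical convention: last octet of the A record carries the verdict.
    return "malicious" if answer.endswith(".1") else "benign"

print(dnsaml_lookup(b"suspicious sample"))
```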

Neural network implementations are known to be vulnerable to physical attack vectors such as fault injection attacks. Until now, these attacks have only been utilized during the inference phase with the intention of causing a misclassification. In this work, we explore a novel attack paradigm by injecting faults during the training phase of a neural network such that the resulting network can be attacked during deployment without the need for further faulting. In particular, we discuss attacks against ReLU activation functions that make it possible to generate a family of malicious inputs, called fooling inputs, to be used at inference time to induce controlled misclassifications. Such malicious inputs are obtained by mathematically solving a system of linear equations that causes a particular behavior on the attacked activation functions, similar to the one induced in training through faulting. We call such attacks fooling backdoors, as the fault attacks at the training phase inject backdoors into the network that allow an attacker to produce fooling inputs. We evaluate our approach against multi-layer perceptron networks and convolutional networks on a popular image classification task, obtaining high attack success rates (from 60% to 100%) and high classification confidence when as few as 25 neurons are attacked, while preserving high accuracy on the originally intended classification task.
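The following sketch illustrates the linear-solve step the abstract mentions: given the first layer's weights and a set of attacked ReLU neurons, a least-squares solve finds an input that drives the attacked pre-activations to a chosen large value. The weight values, the target value, and the solver choice are illustrative assumptions, not the paper's exact procedure.

```python
# Crafting a "fooling input" by solving linear equations over the first
# (linear) layer that feeds the attacked ReLUs.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 32))      # first-layer weights (10 neurons, 32 inputs)
b = rng.normal(size=10)
attacked = [2, 5, 7]               # indices of faulted ReLU neurons

# Force the attacked neurons' pre-activations to a large value so they
# dominate downstream and steer the classification.
target = np.full(len(attacked), 50.0)
x, *_ = np.linalg.lstsq(W[attacked], target - b[attacked], rcond=None)

pre_act = W @ x + b
print(pre_act[attacked])           # ~[50., 50., 50.]
```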

Mobile devices have become an indispensable component of modern life. Their high storage capacity allows them to hold vast amounts of sensitive personal data, which makes them a high-value target: these devices are routinely stolen by criminals for data theft, and are increasingly viewed by law enforcement agencies as a valuable source of forensic data. Over the past several years, providers have deployed a number of advanced cryptographic features intended to protect data on mobile devices, even in the strong setting where an attacker has physical access to a device. Many of these techniques draw from the research literature, but have been adapted to this entirely new problem setting. This involves a number of novel challenges, which are incompletely addressed in the literature. In this work, we outline those challenges, and systematize the known approaches to securing user data against extraction attacks. Our work proposes a methodology that researchers can use to analyze cryptographic data confidentiality for mobile devices. We evaluate the existing literature for securing devices against data extraction adversaries with powerful capabilities, including access to devices and to the cloud services they rely on. We then analyze existing mobile device confidentiality measures to identify research areas that have not received proper attention from the community and represent opportunities for future research.

Frequent and severe wildfires have been observed recently on a global scale. Wildfires not only threaten lives and properties but also pose negative environmental impacts that transcend national boundaries (e.g., greenhouse gas emissions and global warming). Thus, early wildfire detection with timely feedback is much needed. We propose to use the emerging beyond-fifth-generation (B5G) and sixth-generation (6G) satellite Internet of Things (IoT) communication technology to enable massive sensor deployment for wildfire detection. We propose wildfire and carbon emission models that take into account real environmental data, including wind speed, soil wetness, and biomass, to simulate the fire spreading process and quantify the fire burning areas, carbon emissions, and economic benefits of the proposed system against the backdrop of recent California wildfires. We also conduct a satellite IoT feasibility check by analyzing the satellite link budget. Future research directions to further illustrate the promise of the proposed system are discussed.
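A toy version of such a fire-spread model might look like the cellular-automaton step below, where a cell's ignition probability rises with biomass and wind speed and falls with soil wetness. The functional form and constants are assumptions for illustration; the paper's calibrated model is not reproduced here.

```python
# One cellular-automaton time step: cells adjacent to fire ignite with a
# probability that grows with biomass and wind and shrinks with wetness.
import numpy as np

def spread_step(burning, biomass, wetness, wind, rng):
    # Any of the four neighbors burning?
    neighbor = np.zeros_like(burning, dtype=bool)
    neighbor[1:, :] |= burning[:-1, :]
    neighbor[:-1, :] |= burning[1:, :]
    neighbor[:, 1:] |= burning[:, :-1]
    neighbor[:, :-1] |= burning[:, 1:]
    p_ignite = np.clip(0.3 * biomass * (1 + wind) * (1 - wetness), 0, 1)
    new_fires = neighbor & ~burning & (rng.random(burning.shape) < p_ignite)
    return burning | new_fires

rng = np.random.default_rng(1)
grid = np.zeros((50, 50), dtype=bool)
grid[25, 25] = True                       # ignition point
biomass = rng.uniform(0.2, 1.0, grid.shape)
wetness = rng.uniform(0.0, 0.6, grid.shape)
for _ in range(20):
    grid = spread_step(grid, biomass, wetness, wind=0.4, rng=rng)
print("burned cells:", int(grid.sum()))
```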

With the rapid development of machine learning for image classification, researchers have found new applications of visualization techniques in malware detection. By converting binary code into images, researchers have shown satisfactory results in applying machine learning to extract features that are difficult to discover manually. Such visualization-based malware detection methods can capture malware patterns from many different malware families and improve malware detection speed. On the other hand, recent research has also shown adversarial attacks against such visualization-based malware detection. Attackers can generate adversarial examples by perturbing the malware binary in non-reachable regions, such as padding at the end of the binary. Alternatively, attackers can perturb the malware image embedding and then verify the executability of the malware post-transformation. One major limitation of the first attack scenario is that a simple pre-processing step can remove the perturbations before classification. For the second attack scenario, it is hard to maintain the original malware's executability and functionality. In this work, we provide a literature review of existing malware visualization techniques and attacks against them. We summarize the limitations of previous work and design a new adversarial example attack against visualization-based malware detection that can evade pre-processing filtering and maintain the original malware functionality. We test our attack on a public malware dataset and achieve a 98% success rate.
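The binary-to-image conversion these detectors rely on is straightforward: each byte becomes one grayscale pixel, laid out in rows of a fixed width. A minimal sketch follows; the 256-pixel width is a common choice in this literature, not a requirement.

```python
# Convert an arbitrary binary into a grayscale image: one byte per pixel,
# zero-padding the final row to a fixed width.
import numpy as np

def binary_to_image(data: bytes, width: int = 256) -> np.ndarray:
    buf = np.frombuffer(data, dtype=np.uint8)
    pad = (-len(buf)) % width                 # zero-pad the last row
    buf = np.concatenate([buf, np.zeros(pad, dtype=np.uint8)])
    return buf.reshape(-1, width)

with open("/bin/ls", "rb") as f:              # any binary file works as input
    img = binary_to_image(f.read())
print(img.shape)                              # (rows, 256) grayscale image
```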

Deep learning has transformed the way we think of software and what it can do. But deep neural networks are fragile and their behaviors are often surprising. In many settings, we need to provide formal guarantees on the safety, security, correctness, or robustness of neural networks. This book covers foundational ideas from formal verification and their adaptation to reasoning about neural networks and deep learning.

With the recent advances of the IoT (Internet of Things), new and more robust security frameworks are needed to detect and mitigate new forms of cyber-attacks, which exploit complex and heterogeneous IoT networks as well as the many vulnerabilities in IoT devices. With the rise of blockchain technologies, service providers pay considerable attention to understanding and adopting blockchain in order to build more secure and trusted systems for their own organisations and their customers. This paper introduces a high-level guide for senior officials, decision makers, and technology managers to a blockchain security framework built on security-by-design principles for trust and adoption in IoT environments. The paper discusses the blockchain technology developed in the Cyber-Trust project as a representative case study for the proposed security framework. A security- and privacy-by-design approach is introduced as an important consideration in setting up the framework.

Graph neural networks (GNNs) are widely used in many applications. However, their robustness against adversarial attacks has been criticized. Prior studies show that unnoticeable modifications to graph topology or nodal features can significantly reduce the performance of GNNs. Designing graph neural networks that are robust against poisoning attacks is very challenging, and several efforts have been made. Existing work aims at reducing the negative impact of adversarial edges using only the poisoned graph, which is sub-optimal since it fails to discriminate adversarial edges from normal ones. On the other hand, clean graphs from domains similar to the target poisoned graph are usually available in the real world. By perturbing these clean graphs, we create supervised knowledge to train the ability to detect adversarial edges, thereby elevating the robustness of GNNs. However, this potential of clean graphs is neglected by existing work. To this end, we investigate a novel problem of improving the robustness of GNNs against poisoning attacks by exploiting clean graphs. Specifically, we propose PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve the robustness of PA-GNN on the poisoned graph. Experimental results on four real-world datasets demonstrate the robustness of PA-GNN against poisoning attacks on graphs.
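The penalized-aggregation idea can be sketched as follows: edges suspected to be adversarial have a penalty subtracted from their attention logits before the softmax, so they receive near-zero attention coefficients. In PA-GNN the penalty is learned via meta-optimization; the hard-coded penalty and single-node setting below are illustrative simplifications.

```python
# Softmax attention over one node's incoming edges, with suspected
# adversarial edges pushed down by a fixed penalty before normalization.
import numpy as np

def penalized_attention(logits, suspicious, penalty=10.0):
    z = logits - penalty * suspicious        # penalize suspected edges
    z = z - z.max()                          # numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([1.2, 0.8, 1.0, 0.9])     # one node's incoming edges
suspicious = np.array([0.0, 0.0, 1.0, 0.0]) # third edge flagged adversarial
print(penalized_attention(logits, suspicious))  # third weight near zero
```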

This paper introduces a novel neural network-based reinforcement learning approach for robot gaze control. Our approach enables a robot to learn and adapt its gaze control strategy for human-robot interaction without the use of external sensors or human supervision. The robot learns to focus its attention on groups of people from its own audio-visual experiences, independently of the number of people, their positions, and their physical appearance. In particular, we use a recurrent neural network architecture in combination with Q-learning to find an optimal action-selection policy; we pre-train the network using a simulated environment that mimics realistic scenarios involving speaking and silent participants, thus avoiding the need for tedious sessions of a robot interacting with people. Our experimental evaluation suggests that the proposed method is robust with respect to parameter estimation, i.e., the parameter values yielded by the method do not have a decisive impact on performance. The best results are obtained when audio and visual information are used jointly. Experiments with the Nao robot indicate that our framework is a step toward the autonomous learning of socially acceptable gaze behavior.
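For reference, the tabular Q-learning update underlying this approach is shown below; the paper replaces the table with a recurrent network over audio-visual features, so the toy environment and reward rule here are placeholders, not the paper's setup.

```python
# Bare-bones tabular Q-learning loop with epsilon-greedy exploration.
import numpy as np

n_states, n_actions = 16, 4                 # e.g., gaze directions as actions
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(2)

def step(state, action):
    """Stand-in environment: random next state; reward 1 if a speaker is
    'in view' under a toy rule, else 0."""
    next_state = int(rng.integers(n_states))
    reward = 1.0 if (next_state + action) % 4 == 0 else 0.0
    return next_state, reward

state = 0
for _ in range(5000):
    action = int(rng.integers(n_actions)) if rng.random() < eps \
             else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # Standard Q-learning temporal-difference update.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state
print(Q.round(2))
```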
