
We present the first compositional, incremental static analysis for detecting memory-safety and information leakage vulnerabilities in C-like programs. To do so, we develop the first under-approximate relational program logics for reasoning about information flow, including Insecurity Separation Logic (InsecSL). Like prior under-approximate separation logics, we show that InsecSL can be automated via symbolic execution. We then adapt and extend a prior intra-procedural symbolic execution algorithm to build a bottom-up, inter-procedural and incremental analysis for detecting vulnerabilities. We prove our approach sound in Isabelle/HOL and implement it in a proof-of-concept tool, Underflow, for analysing C programs, which we apply to various case studies.
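
To make the relational notion of "insecurity" concrete, here is a toy sketch of what an under-approximate analysis looks for: a pair of executions that agree on public inputs but produce different public outputs because of different secrets, i.e., a true-positive witness of a leak. The toy program and the brute-force search below are illustrative assumptions only and are not InsecSL's symbolic execution.

```python
from itertools import product

def toy_program(public: int, secret: int) -> int:
    """Hypothetical C-like fragment: branching on `secret` influences
    the value written to a public output channel."""
    if secret > 7:
        return public + 1
    return public

def find_leak_witness(prog, public_vals, secret_vals):
    """Search for two runs that agree on public inputs but differ on
    public outputs -- an under-approximate *witness* of insecurity."""
    for pub in public_vals:
        for s1, s2 in product(secret_vals, repeat=2):
            if prog(pub, s1) != prog(pub, s2):
                return {"public_in": pub, "secrets": (s1, s2),
                        "outputs": (prog(pub, s1), prog(pub, s2))}
    return None  # no witness found in the explored inputs

print(find_leak_witness(toy_program, range(4), range(16)))
```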

Related content

The proliferation of interconnected battlefield information-sharing devices, known as the Internet of Battlefield Things (IoBT), introduces several security challenges. Inherent to the IoBT operating environment is the practice of adversarial machine learning, which attempts to circumvent machine learning models. This work examines the feasibility of cost-effective unsupervised learning and graph-based methods for anomaly detection in the network intrusion detection system setting, and also leverages an ensemble approach to supervised learning of the anomaly detection problem. We incorporate a realistic adversarial training mechanism when training supervised models to enable strong classification performance in adversarial environments. The results indicate that the supervised two-level stacking ensemble outperformed the unsupervised and graph-based methods in detecting anomalies (malicious activity). This model consists of three different classifiers in the first level, followed by either a Naive Bayes or a Decision Tree classifier in the second level, and it maintains an F1-score above 0.97 for malicious samples across all tested level-two classifiers. Notably, Naive Bayes is the fastest level-two classifier, averaging 1.12 seconds, while the Decision Tree maintains the highest AUC score of 0.98.
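
A minimal sketch of the two-level stacking arrangement described above, assuming scikit-learn; the particular level-one classifiers and the synthetic data are placeholders rather than the paper's exact setup.

```python
# Two-level stacking ensemble: three level-one classifiers, with a Naive
# Bayes (or Decision Tree) meta-classifier at level two.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier  # alternative level-two model
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

level_one = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("lr", LogisticRegression(max_iter=1000)),
]
# Swap GaussianNB() for DecisionTreeClassifier() to compare the two
# level-two options mentioned in the abstract.
model = StackingClassifier(estimators=level_one, final_estimator=GaussianNB())
model.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, model.predict(X_te)))
```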

Wikipedia has grown into an immensely popular crowd-sourced encyclopedia, disseminating subscription-free content on a vast range of topics. It allows anyone to contribute, so that articles remain comprehensive and up to date. To enrich content without compromising standards, the Wikipedia community enumerates a detailed set of guidelines that contributors are expected to follow. Based on these, Wikipedia editors categorize articles into several quality classes of increasing adherence to the guidelines. This quality assessment task is laborious and demands platform expertise. As a first objective, in this paper, we study the evolution of Wikipedia articles with respect to such quality scales. Our results show novel, non-intuitive patterns emerging from this exploration. As a second objective, we attempt to develop an automated, data-driven approach for detecting the early signals that influence quality changes in articles. We pose this as a change point detection problem, whereby we represent an article as a time series of consecutive revisions and encode every revision by a set of intuitive features. Finally, various change point detection algorithms are used to efficiently and accurately detect future change points. We also perform ablation studies to understand which groups of features are most effective in identifying the change points. To the best of our knowledge, this is the first work that rigorously explores the English Wikipedia article quality life cycle from the perspective of quality indicators and provides a novel unsupervised, page-level approach to detect quality switches, which can help automate content monitoring in Wikipedia and thus contributes to the CSCW community.
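
A minimal sketch of the change-point-detection framing, assuming the ruptures library and toy per-revision features; the actual feature encoding and choice of detector are illustrative assumptions, not the paper's pipeline.

```python
# Cast article quality change as change point detection over a
# per-revision feature series.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
# Toy series: a few features per revision (e.g. references added, edit
# size), with a behavioural shift after revision 120.
signal = np.concatenate([rng.normal(0.0, 1.0, (120, 3)),
                         rng.normal(2.0, 1.0, (80, 3))])

algo = rpt.Pelt(model="rbf").fit(signal)
change_points = algo.predict(pen=10)  # revision indices where behaviour shifts
print(change_points)
```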

It is notoriously difficult to securely configure HTTPS, and poor server configurations have contributed to several attacks including the FREAK, Logjam, and POODLE attacks. In this work, we empirically evaluate the TLS security posture of popular websites and endeavor to understand the configuration decisions that operators make. We correlate several sources of influence on sites' security postures, including software defaults, cloud providers, and online recommendations. We find a fragmented web ecosystem: while most websites have secure configurations, this is largely due to major cloud providers that offer secure defaults. Individually configured servers are more often insecure than not. This may be in part because common resources available to individual operators -- server software defaults and online configuration guides -- are frequently insecure. Our findings highlight the importance of considering SaaS services separately from individually-configured sites in measurement studies, and the need for server software to ship with secure defaults.
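
As a concrete illustration of the kind of configuration data such a measurement collects, here is a minimal probe of a site's negotiated TLS parameters using Python's standard ssl module; the hostname is a placeholder, and real studies enumerate far more detail (supported cipher suites, certificate chains, HSTS, and so on).

```python
# Probe the TLS protocol version and cipher negotiated by a server.
import socket
import ssl

host = "example.com"  # placeholder target
context = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("protocol:", tls.version())   # e.g. 'TLSv1.3'
        print("cipher:  ", tls.cipher())    # (name, protocol, secret bits)
```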

We present a new type of attack in which source code is maliciously encoded so that it appears different to a compiler and to the human eye. This attack exploits subtleties in text-encoding standards such as Unicode to produce source code whose tokens are logically encoded in a different order from the one in which they are displayed, leading to vulnerabilities that cannot be perceived directly by human code reviewers. 'Trojan Source' attacks, as we call them, pose an immediate threat both to first-party software and, through supply-chain compromise, to the wider industry. We present working examples of Trojan Source attacks in C, C++, C#, JavaScript, Java, Rust, Go, and Python. We propose definitive compiler-level defenses, and describe other mitigating controls that can be deployed in editors, repositories, and build pipelines while compilers are upgraded to block this attack.
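
A minimal sketch of one of the mitigating controls mentioned above: a repository or build-pipeline check that flags the Unicode bidirectional control characters Trojan Source attacks rely on. The character list follows Unicode's Bidi_Control set; the file handling is illustrative and is not the paper's tooling.

```python
# Scan source files for Unicode bidirectional control characters.
import sys

BIDI_CONTROLS = {
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # LRE RLE PDF LRO RLO
    "\u2066", "\u2067", "\u2068", "\u2069",            # LRI RLI FSI PDI
    "\u200e", "\u200f", "\u061c",                      # LRM RLM ALM
}

def scan(path: str):
    """Yield (line, column, codepoint) for every bidi control found."""
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            for col, ch in enumerate(line, start=1):
                if ch in BIDI_CONTROLS:
                    yield lineno, col, f"U+{ord(ch):04X}"

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for lineno, col, cp in scan(path):
            print(f"{path}:{lineno}:{col}: bidi control character {cp}")
```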

As the number of IoT devices has increased rapidly, IoT botnets have exploited the vulnerabilities of IoT devices. However, it is still challenging to detect the initial intrusion on IoT devices prior to massive attacks. Recent studies have utilized power side-channel information to identify this intrusion behavior on IoT devices, but they still lack accurate, real-time models for ubiquitous botnet detection. We propose the first online intrusion detection system, called DeepAuditor, for IoT devices via power auditing. To realize the real-time system, we first propose a lightweight power auditing device called Power Auditor. We also design a distributed CNN classifier for online inference in a laboratory setting. To prevent data leakage and reduce networking redundancy, we then propose a privacy-preserving inference protocol via Packed Homomorphic Encryption and a sliding window protocol in our system. The classification accuracy and processing time were measured, and the proposed classifier outperformed a baseline classifier, especially against unseen patterns. We also demonstrate that the distributed CNN design is secure against any of the distributed components. Overall, the measurements demonstrate the feasibility of our real-time distributed system for intrusion detection on IoT devices.
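
A minimal sketch (PyTorch) of the core classification step: scoring fixed-length sliding windows of a power trace with a small 1-D CNN. The architecture, window length, and plaintext inference are assumptions for illustration; DeepAuditor additionally splits inference across components and protects it with Packed Homomorphic Encryption, which is not shown here.

```python
# Classify sliding windows of a power trace with a small 1-D CNN.
import torch
import torch.nn as nn

WINDOW = 256  # samples per sliding-window segment of the power trace

class PowerCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (WINDOW // 16), n_classes)

    def forward(self, x):  # x: (batch, 1, WINDOW)
        z = self.features(x)
        return self.classifier(z.flatten(1))

trace = torch.randn(1, 1, 4096)                       # toy power trace
windows = trace.unfold(2, WINDOW, WINDOW // 2)        # overlapping windows
windows = windows.squeeze(0).transpose(0, 1)          # (n_windows, 1, WINDOW)
model = PowerCNN()
scores = model(windows)                               # benign/intrusion score per window
print(scores.shape)
```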

The inclusion of Internet of Things (IoT) devices is growing rapidly in all application domains. Smart Farming connects such devices and, with the support of Internet, cloud, or edge computing infrastructure, provides remote control of watering and fertilization, real-time monitoring of farm conditions, and solutions for more sustainable practices. This could involve running irrigation systems only when the detected soil moisture level is low and stopping them once the soil reaches a sufficient moisture content. These improvements to efficiency and ease of use come with added risks to security and privacy. Large, coordinated cyber attacks can disrupt the economies of agriculture-dependent nations. To the sensors in the system, an attack may appear as anomalous behaviour. In this context, anomalies may also be generated by faulty hardware, issues in network connectivity (if present), or simply abrupt changes to the environment due to weather, human accident, or other unforeseen circumstances. To make such systems more secure, it is imperative to detect such data discrepancies and trigger appropriate mitigation mechanisms. In this paper, we propose an anomaly detection model for Smart Farming using an unsupervised Autoencoder machine learning model. We chose an Autoencoder because it encodes and decodes data and attempts to ignore outliers: when it encounters anomalous data, the result is a high reconstruction loss, signaling that the data is unlike the rest. Our model was trained and tested on data collected from our greenhouse test-bed. The proposed Autoencoder-based anomaly detection achieved 98.98%, took 262 seconds to train, and has a detection time of 0.0585 seconds.
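
A minimal sketch of the reconstruction-error idea: train a small autoencoder on normal sensor readings and flag inputs whose reconstruction loss exceeds a threshold. The feature count, layer sizes, threshold, and PyTorch implementation are assumptions, not the paper's model.

```python
# Autoencoder anomaly detection over farm sensor readings.
import torch
import torch.nn as nn

N_FEATURES = 6  # e.g. soil moisture, temperature, humidity, light, ...

model = nn.Sequential(
    nn.Linear(N_FEATURES, 4), nn.ReLU(),
    nn.Linear(4, 2), nn.ReLU(),          # bottleneck
    nn.Linear(2, 4), nn.ReLU(),
    nn.Linear(4, N_FEATURES),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

normal = torch.rand(5000, N_FEATURES)    # stand-in for normal greenhouse data
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

def is_anomalous(x: torch.Tensor, threshold: float = 0.05) -> torch.Tensor:
    """High per-sample reconstruction error => data 'not like the rest'."""
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=1)
    return err > threshold
```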

How can society understand and hold accountable complex human and algorithmic decision-making systems whose systematic errors are opaque to the public? These systems routinely make decisions on individual rights and well-being, and on protecting society and the democratic process. Practical and statistical constraints on external audits--such as dimensional complexity--can lead researchers and regulators to miss important sources of error in these complex decision-making systems. In this paper, we design and implement a software-supported approach to audit studies that auto-generates audit materials and coordinates volunteer activity. We implemented this software in the case of political advertising policies enacted by Facebook and Google during the 2018 U.S. election. Guided by this software, a team of volunteers posted 477 auto-generated ads and analyzed the companies' actions, finding systematic errors in how companies enforced policies. We find that software can overcome some common constraints of audit studies, within limitations related to sample size and volunteer capacity.
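
A hypothetical sketch of how audit materials might be auto-generated and assigned to volunteers; the templates, topics, and volunteer names below are illustrative placeholders, not the study's actual materials or software.

```python
# Generate ad variants from templates and assign them round-robin.
from itertools import product, cycle

templates = ["Learn more about {topic}", "Where do candidates stand on {topic}?"]
topics = ["immigration", "healthcare", "a local bake sale"]  # policy vs. non-policy
platforms = ["facebook", "google"]

ads = [{"platform": p, "text": t.format(topic=topic)}
       for p, t, topic in product(platforms, templates, topics)]

volunteers = cycle(["volunteer_a", "volunteer_b", "volunteer_c"])
assignments = [{**ad, "poster": next(volunteers)} for ad in ads]

for a in assignments[:4]:
    print(a)
```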

In this study, we investigate the limits of the current state-of-the-art AI system for detecting buffer overflows and compare it with current static analysis tools. To do so, we developed a code generator, s-bAbI, capable of producing an arbitrarily large number of code samples of controlled complexity. We found that the static analysis engines we examined have good precision but poor recall on this dataset, except for a sound static analyzer that has good precision and recall. We found that the state-of-the-art AI system, a memory network modeled after Choi et al. [1], can achieve similar performance to the static analysis engines, but requires an exhaustive amount of training data in order to do so. Our work points towards future approaches that may solve these problems: namely, using representations of code that can capture appropriate scope information and using deep learning methods that are able to perform arithmetic operations.
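
A hypothetical sketch of s-bAbI-style sample generation: emit tiny C functions whose single buffer access is either in bounds or out of bounds, yielding labelled samples of controlled complexity. The template and labelling scheme are illustrative assumptions, not the actual generator.

```python
# Generate labelled safe/unsafe buffer-access samples as C source strings.
import random

TEMPLATE = """void fn_{i}(void) {{
    char buf[{size}];
    int idx = {idx};
    buf[idx] = 'x';
}}
"""

def make_sample(i: int, rng: random.Random):
    size = rng.randint(4, 64)
    unsafe = rng.random() < 0.5
    idx = rng.randint(size, size + 8) if unsafe else rng.randint(0, size - 1)
    return TEMPLATE.format(i=i, size=size, idx=idx), ("unsafe" if unsafe else "safe")

rng = random.Random(0)
for i in range(3):
    code, label = make_sample(i, rng)
    print(f"// label: {label}\n{code}")
```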

We introduce and tackle the problem of zero-shot object detection (ZSD), which aims to detect object classes which are not observed during training. We work with a challenging set of object classes, not restricting ourselves to similar and/or fine-grained categories as in prior works on zero-shot classification. We present a principled approach by first adapting visual-semantic embeddings for ZSD. We then discuss the problems associated with selecting a background class and motivate two background-aware approaches for learning robust detectors. One of these models uses a fixed background class and the other is based on iterative latent assignments. We also outline the challenge associated with using a limited number of training classes and propose a solution based on dense sampling of the semantic label space using auxiliary data with a large number of categories. We propose novel splits of two standard detection datasets - MSCOCO and VisualGenome, and present extensive empirical results in both the traditional and generalized zero-shot settings to highlight the benefits of the proposed methods. We provide useful insights into the algorithm and conclude by posing some open questions to encourage further research.
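
A minimal sketch of the visual-semantic embedding idea underlying ZSD: project a region's visual feature into the word-embedding space and score it against class embeddings, so that unseen classes can be ranked as well. The dimensions, projection matrix, and embeddings below are toy values, not the paper's trained components.

```python
# Score a region proposal against class word embeddings via a learned projection.
import numpy as np

rng = np.random.default_rng(0)
D_VIS, D_SEM = 2048, 300                           # e.g. CNN feature / word-vector sizes
W = rng.normal(scale=0.01, size=(D_SEM, D_VIS))    # projection (would be learned)

class_embeddings = {                               # may include classes unseen in training
    "dog":   rng.normal(size=D_SEM),
    "zebra": rng.normal(size=D_SEM),               # "unseen" at training time
}

def score_region(visual_feature: np.ndarray) -> dict:
    z = W @ visual_feature
    z = z / np.linalg.norm(z)
    return {c: float(e @ z / np.linalg.norm(e)) for c, e in class_embeddings.items()}

region = rng.normal(size=D_VIS)                    # stand-in for a proposal's feature
print(score_region(region))
```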

As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test time. Our method retains the efficiency and effectiveness of YOLO for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-the-art detection methods to learn discriminative object features in order to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and lead to improved recall rates for unseen objects. We test our method on the PASCAL VOC and MS COCO datasets and observe significant improvements in the average precision of unseen classes.
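
A minimal sketch (PyTorch) of the fusion step described at a high level above: a predicted semantic/attribute vector is concatenated with the visual feature before the detection head. The layer sizes and heads are illustrative assumptions, not the paper's architecture.

```python
# Fuse a predicted semantic vector with visual features before the detection head.
import torch
import torch.nn as nn

D_VIS, D_SEM, N_ANCHOR_OUT = 1024, 300, 5 * (4 + 1)   # toy sizes

attr_head = nn.Linear(D_VIS, D_SEM)        # predict semantic attributes from visuals
det_head = nn.Sequential(                  # box/objectness head on the fused input
    nn.Linear(D_VIS + D_SEM, 256), nn.ReLU(),
    nn.Linear(256, N_ANCHOR_OUT),
)

visual = torch.randn(8, D_VIS)             # per-cell visual features
semantic = attr_head(visual)
fused = torch.cat([visual, semantic], dim=1)
print(det_head(fused).shape)               # (8, N_ANCHOR_OUT)
```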
