
The increase in the scale of cyber networks and the growing sophistication of cyber-attacks have introduced several challenges in intrusion detection. The primary challenge is the need to detect complex multi-stage attacks in real time while processing the immense volume of traffic produced by present-day networks. In this paper we present PRISM, a hierarchical intrusion detection architecture that uses a novel attacker-behavior-model-based sampling technique to minimize the real-time traffic processing overhead. PRISM has a unique multi-layered architecture that monitors network traffic in a distributed manner, providing efficiency in processing and modularity in design. PRISM employs a Hidden Markov Model-based prediction mechanism to identify multi-stage attacks and ascertain attack progression for a proactive response. Furthermore, PRISM introduces a stream management procedure that rectifies the problem of alert reordering when alerts are collected from distributed reporting systems. To evaluate the performance of PRISM, multiple metrics have been proposed, and various experiments have been conducted on a multi-stage attack dataset. The results show up to a 7.5x reduction in processing overhead compared to a standard centralized IDS without loss of prediction accuracy, while demonstrating the ability to predict different attack stages promptly.
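As a concrete illustration of the HMM-based prediction step, the sketch below runs a forward pass over a sequence of observed alerts to estimate the posterior over attack stages. The stage names, transition matrix, emission matrix, and alert indices are illustrative assumptions for the sketch, not PRISM's trained parameters.

```python
# Minimal sketch: estimating the current stage of a multi-stage attack
# with an HMM forward pass. All matrices below are illustrative; a real
# system would learn them from network alert data.
import numpy as np

# Hypothetical attack stages and alert-type observations
stages = ["recon", "exploit", "escalate", "exfiltrate"]
A = np.array([[0.6, 0.4, 0.0, 0.0],   # transition probabilities between stages
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.6, 0.4],
              [0.0, 0.0, 0.0, 1.0]])
B = np.array([[0.7, 0.2, 0.1],        # emission: P(alert type | stage)
              [0.2, 0.6, 0.2],
              [0.1, 0.5, 0.4],
              [0.1, 0.2, 0.7]])
pi = np.array([0.9, 0.1, 0.0, 0.0])   # initial stage distribution

def forward(alerts):
    """Return P(stage | alerts) after the last observed alert."""
    alpha = pi * B[:, alerts[0]]
    alpha /= alpha.sum()
    for obs in alerts[1:]:
        alpha = (alpha @ A) * B[:, obs]
        alpha /= alpha.sum()            # normalize to a posterior
    return alpha

posterior = forward([0, 1, 1, 2])       # a sequence of alert-type indices
print(stages[int(posterior.argmax())])  # most likely current attack stage
```

The normalized forward variable gives, at every step, a distribution over stages, which is what enables a proactive response before the final stage is reached.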

Related Content

Processing is the name of an open-source programming language and its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach programming fundamentals, and is employed in a large number of new-media and interactive art works.

Metropolitan optical networks are undergoing major transformations to continue providing services that meet the requirements of the applications of the future. The arrival of 5G will expand the possibilities for offering IoT applications, autonomous vehicles, and smart-city services, while imposing strong pressure on the physical infrastructure currently deployed, as well as on static traffic engineering techniques that do not respond in an agile way to the dynamic and heterogeneous nature of upcoming traffic patterns. In order to guarantee the strictest quality-of-service and quality-of-experience requirements for users, as well as to meet providers' objectives of maintaining an acceptable trade-off between cost and performance, new architectures for metropolitan optical networks have been proposed in the literature, with growing interest starting from 2017. However, due to the proliferation of a dozen new architectures in recent years, many questions regarding the planning, implementation, and management of these architectures need to be investigated before they can be considered for practical application. This work presents a comprehensive survey of the newly proposed architectures for metropolitan optical networks. Firstly, the main data transmission systems, the equipment involved, and the structural organization of the new metro ecosystems are discussed. The already established and the novel architectures are then presented, highlighting their characteristics and applications, and a comparative analysis of these architectures is carried out to identify future technological trends. Finally, outstanding research questions are drawn to help direct future research in the field.

Significant progress has been made towards deploying Vehicle-to-Everything (V2X) technology. Integrating V2X with 5G has enabled ultra-low-latency and high-reliability V2X communications. However, while communication performance has improved, security and privacy issues have increased. Attacks have become more aggressive, and attackers have become more strategic. The Public Key Infrastructure proposed by standardization bodies cannot, on its own, defend against these attacks. To complement it, sophisticated systems should be designed to detect such attacks and attackers. Machine Learning (ML) has recently emerged as a key enabler for securing our future roads, and many V2X Misbehavior Detection Systems (MDSs) have adopted this paradigm. Yet a systematic analysis of these systems is still missing, and developing effective ML-based MDSs remains an open issue. To this end, this paper presents a comprehensive survey and classification of ML-based MDSs. We analyze and discuss them from both security and ML perspectives. We then distill lessons learned and recommendations for developing, validating, and deploying ML-based MDSs. Finally, we highlight open research and standardization issues along with some future directions.

State-of-the-art machine learning models are routinely trained on large-scale distributed clusters. Crucially, such systems can be compromised when some of the computing devices exhibit abnormal (Byzantine) behavior and return arbitrary results to the parameter server (PS). This behavior may be attributed to a plethora of reasons, including system failures and orchestrated attacks. Existing work suggests robust aggregation and/or computational redundancy to alleviate the effect of distorted gradients. However, most of these schemes are ineffective when an adversary knows the task assignment and can choose the attacked workers judiciously to induce maximal damage. Our proposed method, Aspis, assigns gradient computations to worker nodes using a subset-based assignment that allows for multiple consistency checks on the behavior of each worker node. Examination of the calculated gradients and post-processing (clique-finding in an appropriately constructed graph) by the central node allow for efficient detection and subsequent exclusion of adversaries from the training process. We prove the Byzantine resilience and detection guarantees of Aspis under weak and strong attacks and extensively evaluate the system on various large-scale training scenarios. The principal metric for our experiments is test accuracy, for which we demonstrate a significant improvement of about 30% compared to many state-of-the-art approaches on the CIFAR-10 dataset. The corresponding reduction in the fraction of corrupted gradients ranges from 16% to 99%.
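To make the agreement-graph idea concrete, the toy sketch below connects workers that return identical gradients on every task they share, then treats the largest clique as the honest set. The worker/task data is fabricated for illustration; Aspis's actual subset assignment and its guarantees are in the paper.

```python
# Illustrative sketch of clique-based Byzantine detection: workers that
# agree on all shared gradient tasks are connected, and the largest
# clique is taken as the trusted set. Toy data only.
import itertools
import networkx as nx
import numpy as np

# gradients[w][t]: gradient returned by worker w for assigned task t
gradients = {
    0: {"t1": np.array([1.0, 2.0])},
    1: {"t1": np.array([1.0, 2.0]), "t2": np.array([0.5, 0.5])},
    2: {"t1": np.array([9.0, 9.0]), "t2": np.array([0.5, 0.5])},  # distorted t1
    3: {"t1": np.array([1.0, 2.0]), "t2": np.array([0.5, 0.5])},
}

G = nx.Graph()
G.add_nodes_from(gradients)
for u, v in itertools.combinations(gradients, 2):
    shared = gradients[u].keys() & gradients[v].keys()
    # connect u and v only if they agree on every task they share
    if shared and all(np.allclose(gradients[u][t], gradients[v][t]) for t in shared):
        G.add_edge(u, v)

honest = max(nx.find_cliques(G), key=len)   # largest clique = trusted workers
print(sorted(honest))                        # [0, 1, 3]; worker 2 is excluded
```

Because each task is replicated across a subset of workers, a Byzantine worker that distorts any replicated gradient breaks agreement with its honest peers and falls outside the clique.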

The need for the automatic design of deep neural networks has led to the emergence of neural architecture search (NAS), which has generated models outperforming manually designed ones. However, most existing NAS frameworks are designed for image processing tasks and lack structures and operations effective for voice activity detection (VAD). To discover improved VAD models through automatic design, we present the first NAS framework optimized for the VAD task. The proposed NAS-VAD framework expands the existing search space with the attention mechanism while adopting a compact macro-architecture with fewer cells. The experimental results show that the models discovered by NAS-VAD outperform existing manually designed VAD models on various synthetic and real-world datasets. Our code and models are available at //github.com/daniel03c1/NAS_VAD.

While the advent of Graph Neural Networks (GNNs) has greatly improved node and graph representation learning in many applications, the neighborhood aggregation scheme exposes additional vulnerabilities to adversaries seeking to extract node-level information about sensitive attributes. In this paper, we study the problem of protecting sensitive attributes by information obfuscation when learning with graph-structured data. We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance. Our method creates a strong defense against inference attacks, while suffering only a small loss in task performance. Theoretically, we analyze the effectiveness of our framework against a worst-case adversary, and characterize an inherent trade-off between maximizing predictive accuracy and minimizing information leakage. Experiments across multiple datasets from recommender systems, knowledge graphs and quantum chemistry demonstrate that the proposed approach provides a robust defense across various graph structures and tasks, while producing competitive GNN encoders for downstream tasks.
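The min-max structure of such adversarial obfuscation can be sketched in a few lines: an encoder is trained so that a task head succeeds while an adversary trying to recover the sensitive attribute fails. In this simplified sketch an MLP stands in for the GNN encoder, and a plain cross-entropy adversary replaces the paper's total-variation/Wasserstein objectives; all names and data are illustrative.

```python
# Schematic sketch of adversarial information obfuscation: maximize task
# accuracy while making the sensitive attribute unrecoverable from the
# learned embeddings. Toy data; an MLP stands in for the GNN encoder.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
task_head = nn.Linear(8, 2)       # predicts the downstream label
adversary = nn.Linear(8, 2)       # tries to predict the sensitive attribute

opt_main = torch.optim.Adam([*encoder.parameters(), *task_head.parameters()], lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
lam = 1.0  # leakage penalty weight: the accuracy/privacy trade-off knob

x = torch.randn(64, 16)                 # toy node features
y = torch.randint(0, 2, (64,))          # task labels
s = torch.randint(0, 2, (64,))          # sensitive attribute

for step in range(200):
    # 1) train the adversary on the current (frozen) embeddings
    z = encoder(x).detach()
    opt_adv.zero_grad()
    ce(adversary(z), s).backward()
    opt_adv.step()

    # 2) train encoder + task head: solve the task, fool the adversary
    z = encoder(x)
    loss = ce(task_head(z), y) - lam * ce(adversary(z), s)
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()
```

Increasing `lam` pushes the embeddings toward stronger obfuscation at the cost of task accuracy, mirroring the trade-off the paper characterizes theoretically.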

Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems, and to let ML transform the way computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity, and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies according to their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a vision of future opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.

Neural Architecture Search (NAS) was first proposed to achieve state-of-the-art performance through the discovery of new architecture patterns, without human intervention. However, over-reliance on expert knowledge in search-space design has led to incremental performance gains (local optima) without significant architectural breakthroughs, preventing truly novel solutions from being reached. In this work we 1) are the first to investigate casting NAS as the problem of finding the optimal network generator, and 2) propose a new, hierarchical and graph-based search space capable of representing an extremely large variety of network types, yet requiring only a few continuous hyper-parameters. This greatly reduces the dimensionality of the problem, enabling the effective use of Bayesian Optimisation as a search strategy. At the same time, we expand the range of valid architectures, motivating a multi-objective learning approach. We demonstrate the effectiveness of this strategy on six benchmark datasets and show that our search space generates extremely lightweight yet highly competitive models.
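The key move, searching over a generator's few continuous knobs instead of over discrete architectures, can be illustrated with a deliberately simple stand-in: below, a Watts-Strogatz graph generator and a cheap proxy score replace the paper's hierarchical graph generator and the real train-and-evaluate loop, and Bayesian Optimisation (via scikit-optimize) searches the two-dimensional hyper-parameter space. Everything here is an assumption for illustration.

```python
# Toy sketch: Bayesian Optimisation over a network *generator*'s
# continuous hyper-parameters. The Watts-Strogatz generator and the
# proxy objective are placeholders for the paper's actual generator
# and training loop.
import networkx as nx
from skopt import gp_minimize
from skopt.space import Integer, Real

def proxy_score(params):
    k, p = params
    g = nx.connected_watts_strogatz_graph(n=32, k=int(k), p=p, seed=0)
    # placeholder objective: prefer short paths (fast signal propagation)
    # at low edge cost; a real search would train the generated network
    return nx.average_shortest_path_length(g) + 0.01 * g.number_of_edges()

result = gp_minimize(
    proxy_score,
    dimensions=[Integer(2, 10, name="k"), Real(0.0, 1.0, name="p")],
    n_calls=20,
    random_state=0,
)
print("best (k, p):", result.x, "score:", result.fun)
```

Because the search space is just two continuous-ish dimensions rather than a combinatorial set of operations and wirings, a Gaussian-process surrogate remains tractable, which is the dimensionality argument the abstract makes.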

Deep learning has been successfully applied to solve various complex problems ranging from big data analytics to computer vision and human-level control. Deep learning advances, however, have also been employed to create software that can threaten privacy, democracy and national security. One such deep learning-powered application to emerge recently is the "deepfake". Deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. Technologies that can automatically detect and assess the integrity of digital visual media are therefore indispensable. This paper presents a survey of algorithms used to create deepfakes and, more importantly, of the methods proposed in the literature to date to detect them. We present extensive discussions on challenges, research trends and directions related to deepfake technologies. By reviewing the background of deepfakes and state-of-the-art deepfake detection methods, this study provides a comprehensive overview of deepfake techniques and facilitates the development of new, more robust methods for dealing with increasingly challenging deepfakes.

Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, whether based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method for automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) an encoder embeds/maps neural network architectures into a continuous space; (2) a predictor takes the continuous representation of a network as input and predicts its accuracy; (3) a decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform gradient-based optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such an improved embedding is then decoded to a network by the decoder. Experiments show that the architectures discovered by our method are highly competitive on the CIFAR-10 image classification task and the PTB language modeling task, outperforming or on par with the best results of previous architecture search methods, with a significant reduction in computational resources. Specifically, we obtain a $2.07\%$ test-set error rate on the CIFAR-10 image classification task and a test-set perplexity of $55.9$ on the PTB language modeling task. The best discovered architectures on both tasks are successfully transferred to other tasks such as CIFAR-100 and WikiText-2.
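NAO's central step, ascending the predictor's gradient in the continuous embedding space and then decoding, can be sketched in a few lines. The tiny untrained MLPs and the dummy "architecture" vector below are placeholders for the paper's trained encoder, predictor and decoder; only the shape of the computation is faithful.

```python
# Bare-bones sketch of NAO's core move: step an architecture embedding
# along the gradient of a learned accuracy predictor, then decode the
# improved embedding back to an architecture. Untrained stand-in models.
import torch
import torch.nn as nn

EMB = 16
encoder = nn.Sequential(nn.Linear(8, EMB), nn.Tanh())   # arch encoding -> embedding
predictor = nn.Linear(EMB, 1)                           # embedding -> predicted accuracy
decoder = nn.Linear(EMB, 8)                             # embedding -> arch logits

arch = torch.randn(1, 8)             # stand-in encoding of a seed architecture
z = encoder(arch).detach().requires_grad_(True)

eta = 0.1                            # step size in the continuous space
for _ in range(10):
    pred_acc = predictor(z).sum()
    grad, = torch.autograd.grad(pred_acc, z)
    # move the embedding toward higher predicted accuracy
    z = (z + eta * grad).detach().requires_grad_(True)

new_arch = decoder(z)                # decode the optimized embedding
print(new_arch.argmax(dim=-1))       # stand-in for reading off discrete choices
```

The gradient ascent happens entirely in the continuous space, which is what lets NAO sidestep the inefficiency of searching a discrete architecture space directly.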

Vision-based vehicle detection approaches have achieved remarkable success in recent years with the development of deep convolutional neural networks (CNNs). However, existing CNN-based algorithms suffer from a scale-sensitivity problem: convolutional features are scale-sensitive in the object detection task, while traffic images and videos commonly contain vehicles at a wide range of scales. In this paper, we delve into the source of this scale sensitivity and reveal two key issues: 1) existing RoI pooling destroys the structure of small-scale objects, and 2) the large intra-class distance induced by a large variance of scales exceeds the representation capability of a single network. Based on these findings, we present a scale-insensitive convolutional neural network (SINet) for the fast detection of vehicles across a wide range of scales. First, we present a context-aware RoI pooling that maintains the contextual information and original structure of small-scale objects. Second, we present a multi-branch decision network that minimizes the intra-class distance of features. These lightweight techniques add no extra time complexity yet bring a prominent improvement in detection accuracy. The proposed techniques can be plugged into any deep network architecture and trained end-to-end. Our SINet achieves state-of-the-art accuracy and speed (up to 37 FPS) on the KITTI benchmark and on a new highway dataset, which contains vehicles at widely varying scales, including extremely small objects.
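To show why standard RoI pooling hurts small objects and how the context-aware variant avoids it, the simplified sketch below crops an RoI from the feature map and resizes it to the pooled resolution instead of max-pooling over quantized bins, so a tiny RoI is enlarged rather than collapsed. The paper uses a learned deconvolution for the enlargement; bilinear upsampling is a stand-in here, and the function name and sizes are illustrative.

```python
# Simplified sketch of the context-aware RoI pooling idea: small RoIs
# are cropped and enlarged to the pooled resolution, preserving their
# structure. Bilinear upsampling stands in for the paper's deconvolution.
import torch
import torch.nn.functional as F

def context_aware_roi_pool(feat, roi, out_size=7):
    """feat: (C, H, W) feature map; roi: (x1, y1, x2, y2) in feature coords."""
    x1, y1, x2, y2 = [int(v) for v in roi]
    patch = feat[:, y1:y2 + 1, x1:x2 + 1].unsqueeze(0)
    # enlarge small RoIs (or shrink large ones) to a fixed pooled size;
    # plain max-pooling into 7x7 bins would degenerate for a 4x4 RoI
    pooled = F.interpolate(patch, size=(out_size, out_size),
                           mode="bilinear", align_corners=False)
    return pooled.squeeze(0)

feat = torch.randn(256, 50, 50)          # backbone feature map
small_roi = (10, 10, 13, 13)             # a 4x4 RoI from a tiny, distant vehicle
print(context_aware_roi_pool(feat, small_roi).shape)  # torch.Size([256, 7, 7])
```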
