
The Internet of Things (IoT) is an ongoing technological revolution. Embedded processors are the processing engines of smart IoT devices. For decades, these processors were mainly based on the Arm instruction set architecture (ISA). In recent years, the free and open RISC-V ISA standard has attracted the attention of industry and academia and is becoming mainstream. Data security and user privacy protection are common challenges faced by all IoT devices. To address foreseeable security threats, the RISC-V community is studying security solutions aimed at establishing a root of trust (RoT) and ensuring that sensitive information on RISC-V devices is neither tampered with nor leaked. Many RISC-V security research projects are underway, but the academic community has not yet conducted a comprehensive survey of RISC-V security solutions. To fill this research gap, this paper presents an in-depth survey of RISC-V security technologies. It summarizes the representative security mechanisms of RISC-V hardware and architecture. Based on our survey, we predict future research and development directions for RISC-V security. We hope that our work can inspire RISC-V researchers and developers.
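
To make the kind of mechanism such a survey covers concrete, the sketch below (not taken from the paper) computes the register values for a RISC-V Physical Memory Protection (PMP) region with NAPOT encoding, one representative ISA-level isolation primitive; the helper name and the example region are illustrative assumptions.

    # Minimal sketch (not from the survey): encoding a RISC-V PMP region.
    # A NAPOT region of `size` bytes (power of two, >= 8, naturally aligned
    # at `base`) is described by one pmpaddr CSR plus one pmpcfg byte.

    PMP_R, PMP_W, PMP_X = 1 << 0, 1 << 1, 1 << 2   # permission bits
    PMP_A_NAPOT = 3 << 3                            # address-matching mode
    PMP_L = 1 << 7                                  # lock bit

    def pmp_napot(base, size, perms, lock=False):
        """Return (pmpaddr value, pmpcfg byte) for a NAPOT region."""
        assert size >= 8 and (size & (size - 1)) == 0, "size must be a power of two >= 8"
        assert base % size == 0, "base must be aligned to the region size"
        # pmpaddr holds address bits [XLEN+1:2]; the trailing ones encode the size.
        pmpaddr = (base >> 2) | ((size >> 3) - 1)
        pmpcfg = perms | PMP_A_NAPOT | (PMP_L if lock else 0)
        return pmpaddr, pmpcfg

    # Example: a locked, read/execute-only 4 KiB region at 0x8000_0000.
    addr, cfg = pmp_napot(0x8000_0000, 0x1000, PMP_R | PMP_X, lock=True)
    print(hex(addr), hex(cfg))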

Related content

This journal publishes high-quality papers that expand the envelope of operations research and computing. It seeks original research papers on theory, methods, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools. Official website: INFORMS
September 8, 2021

This paper considers the use of novel technologies for mitigating attacks that aim at compromising intrusion detection systems (IDSs). Solutions based on collaborative intrusion detection networks (CIDNs) can increase resilience against such attacks, as they allow IDS nodes to gain knowledge from each other by sharing information. However, despite the vast research in this area, trust management issues still pose significant challenges, and recent works investigate whether these could be addressed by relying on blockchain and related distributed ledger technologies. In that direction, the paper proposes the use of a trust-based blockchain in CIDNs, referred to as trust-chain, to protect the integrity of the information shared among the CIDN peers, enhance their accountability, and secure their collaboration by thwarting insider attacks. A consensus protocol is proposed for CIDNs, combining proof-of-stake and proof-of-work, to enable collaborative IDS nodes to maintain a reliable and tamper-resistant trust-chain.
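
The abstract names a hybrid proof-of-stake/proof-of-work protocol but does not describe it; the sketch below is only a generic illustration of how such a combination might look (stake-weighted proposer selection followed by a lightweight work puzzle over a hash-linked chain), not the protocol proposed in the paper. Block layout, difficulty, and the stake values are assumptions.

    # Generic sketch of a stake-weighted proposer plus a light proof-of-work;
    # block layout, difficulty, and the `stakes` mapping are illustrative only.
    import hashlib, json, random

    def pick_proposer(stakes):
        # Proof-of-stake element: nodes with more stake (trust) propose more often.
        nodes, weights = zip(*stakes.items())
        return random.choices(nodes, weights=weights, k=1)[0]

    def mine(block, difficulty=4):
        # Proof-of-work element: find a nonce whose hash has `difficulty` leading zeros.
        while True:
            block["nonce"] = random.getrandbits(64)
            digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                block["hash"] = digest
                return block

    def append_block(chain, proposer, payload):
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        block = {"prev": prev_hash, "proposer": proposer, "payload": payload}
        chain.append(mine(block))
        return chain

    stakes = {"ids-node-a": 0.5, "ids-node-b": 0.3, "ids-node-c": 0.2}
    chain = append_block([], pick_proposer(stakes), {"alert": "suspicious flow", "trust": 0.8})
    print(chain[-1]["hash"])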

Blockchain has found numerous applications in many areas, with the expectation of significantly enhancing their security. The Internet of Things (IoT) constitutes a prominent application domain of blockchain, with a number of architectures having been proposed for improving not only security but also properties like transparency and auditability. However, many blockchain solutions suffer from inherent constraints associated with the consensus protocol used. These constraints are mostly inherited from the permissionless setting, e.g., computational power in proof-of-work, and become serious obstacles in a resource-constrained IoT environment. Moreover, consensus protocols with low throughput or high latency are not suitable for IoT networks where massive volumes of data are generated. Thus, in this paper we focus on permissioned blockchain platforms and investigate the consensus protocols used, aiming at evaluating their performance and fault tolerance as the main selection criteria for the (in principle highly insecure) IoT ecosystem. The results of the paper provide new insights into the essential differences of various consensus protocols and their capacity to meet IoT needs.
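
To make the fault-tolerance criterion concrete: crash-fault-tolerant protocols such as Raft tolerate f failures out of n >= 2f + 1 replicas, while Byzantine-fault-tolerant protocols such as PBFT require n >= 3f + 1. The small helper below (not from the paper) computes these well-known bounds.

    # Illustrative helper (not from the paper): how many replica failures a
    # permissioned-consensus deployment of n nodes can tolerate.

    def tolerated_faults(n, byzantine):
        # BFT protocols (e.g. PBFT) need n >= 3f + 1; crash-fault-tolerant
        # protocols (e.g. Raft) need n >= 2f + 1.
        return (n - 1) // 3 if byzantine else (n - 1) // 2

    for n in (4, 7, 10):
        print(f"n={n}: crash faults tolerated={tolerated_faults(n, False)}, "
              f"Byzantine faults tolerated={tolerated_faults(n, True)}")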

Over the past few years, the relevance of the Internet of Things (IoT) has grown significantly, and it is now a key component of many industrial processes and even a transparent participant in various activities performed in our daily life. IoT systems are subject to changes in the dynamic environments they operate in. These changes (e.g., variations in bandwidth consumption or new devices joining/leaving) may impact the Quality of Service (QoS) of the IoT system. A number of self-adaptation strategies for IoT architectures to better deal with these changes have been proposed in the literature. Nevertheless, they focus on isolated types of changes. We lack a comprehensive view of the trade-offs of each proposal and of how they could be combined to cope with dynamic situations involving simultaneous types of events. In this paper, we identify, analyze, and interpret relevant studies related to IoT adaptation and develop a comprehensive and holistic view of the interplay of different dynamic events, their consequences on the architecture's QoS, and the alternatives for adaptation. To do so, we have conducted a systematic literature review of existing scientific proposals and defined a research agenda for the near future based on the findings and weaknesses identified in the literature.
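
As a rough illustration of what such a self-adaptation strategy does at runtime, the sketch below runs a minimal monitor-analyze-plan-execute style step reacting to a bandwidth drop and device overload; the metric names, thresholds, and actions are illustrative assumptions, not taken from any surveyed work.

    # Minimal monitor-analyze-plan-execute (MAPE)-style step; the metric names,
    # thresholds, and adaptation actions are illustrative assumptions.

    def analyze(metrics):
        issues = []
        if metrics["bandwidth_mbps"] < 2.0:
            issues.append("low_bandwidth")
        if metrics["active_devices"] > metrics["device_capacity"]:
            issues.append("overload")
        return issues

    def plan(issues):
        actions = {"low_bandwidth": "reduce sampling rate / compress payloads",
                   "overload": "offload processing to an edge gateway"}
        return [actions[i] for i in issues]

    def adaptation_step(metrics):
        # Monitor is assumed to have produced `metrics`; Execute is left abstract.
        return plan(analyze(metrics))

    print(adaptation_step({"bandwidth_mbps": 1.2, "active_devices": 120, "device_capacity": 100}))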

For a long time, computer architecture and systems have been optimized to enable efficient execution of machine learning (ML) algorithms and models. Now, it is time to reconsider the relationship between ML and systems and let ML transform the way that computer architecture and systems are designed. This has a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or some other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
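
To give a minimal flavor of the ML-based modelling category, the sketch below fits a least-squares model that predicts a performance metric (a made-up latency) from configuration features; the features and data are synthetic placeholders, not results from the surveyed studies.

    # Tiny example of ML-based performance modelling: a least-squares model that
    # predicts latency from configuration features. Data here is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    # Features: [cache size (MB), core count, memory bandwidth (GB/s)]
    X = rng.uniform([1, 2, 10], [32, 64, 200], size=(200, 3))
    # Synthetic "ground-truth" latency with noise, only for demonstration.
    latency = 50 - 0.4 * X[:, 0] - 0.3 * X[:, 1] - 0.1 * X[:, 2] + rng.normal(0, 1, 200)

    # Fit a linear model (bias term appended) and predict for a new configuration.
    A = np.hstack([X, np.ones((200, 1))])
    coef, *_ = np.linalg.lstsq(A, latency, rcond=None)
    new_config = np.array([16, 32, 100, 1.0])
    print("predicted latency:", new_config @ coef)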

Stream processing has been an active research field for more than 20 years, but it is now in its prime, thanks to recent successful efforts by the research community and numerous worldwide open-source communities. This survey provides a comprehensive overview of fundamental aspects of stream processing systems and their evolution in the functional areas of out-of-order data management, state management, fault tolerance, high availability, load management, elasticity, and reconfiguration. We review noteworthy past research findings, outline the similarities and differences between early ('00-'10) and modern ('11-'18) streaming systems, and discuss recent trends and open problems.
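
To make the out-of-order data management aspect concrete, the generic sketch below buffers out-of-order events and closes an event-time window only once a watermark passes it; the window length, allowed lateness, and example stream are illustrative and not tied to any particular system discussed in the survey.

    # Generic event-time windowing sketch: buffer out-of-order events and emit a
    # window only when the watermark (max event time minus allowed lateness) passes it.
    from collections import defaultdict

    WINDOW = 10          # window length in event-time units
    LATENESS = 5         # how much out-of-orderness we tolerate

    def process(events):
        windows = defaultdict(list)   # window start -> buffered values (operator state)
        watermark = float("-inf")
        for timestamp, value in events:
            windows[(timestamp // WINDOW) * WINDOW].append(value)
            watermark = max(watermark, timestamp - LATENESS)
            # Emit every window whose end is at or before the watermark.
            for start in sorted(w for w in windows if w + WINDOW <= watermark):
                yield start, sum(windows.pop(start))

    stream = [(1, 2), (12, 3), (4, 1), (18, 5), (25, 7)]   # (event time, value), out of order
    print(list(process(stream)))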

Deep Learning (DL) is vulnerable to out-of-distribution and adversarial examples, which result in incorrect outputs. To make DL more robust, several post-hoc anomaly detection techniques to detect (and discard) these anomalous samples have been proposed in the recent past. This survey provides a structured and comprehensive overview of research on anomaly detection for DL-based applications. We propose a taxonomy of existing techniques based on their underlying assumptions and adopted approaches, discuss the techniques in each category, and present the relative strengths and weaknesses of the approaches. Our goal is to provide an accessible yet thorough understanding of the techniques in the different categories in which research has been done on this topic. Finally, we highlight the unsolved research challenges in applying anomaly detection techniques to DL systems and present some high-impact future research directions.
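
As a concrete example of one of the simplest post-hoc techniques in this space, the sketch below flags inputs whose maximum softmax probability falls below a threshold, a common baseline for detecting out-of-distribution samples; the threshold and the example logits are illustrative assumptions.

    # A common post-hoc baseline: flag a sample as anomalous / out-of-distribution
    # when the classifier's maximum softmax probability is below a threshold.
    # Threshold and example logits are illustrative assumptions.
    import numpy as np

    def max_softmax_score(logits):
        z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        return probs.max(axis=1)

    def is_anomalous(logits, threshold=0.7):
        return max_softmax_score(logits) < threshold

    logits = np.array([[8.0, 0.5, 0.1],    # confident -> accepted
                       [1.1, 1.0, 0.9]])   # diffuse   -> flagged as anomalous
    print(is_anomalous(logits))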

The past decade has seen a remarkable series of advances in machine learning, and in particular deep learning approaches based on artificial neural networks, that improve our ability to build more accurate systems across a broad range of areas, including computer vision, speech recognition, language translation, and natural language understanding tasks. This paper is a companion to a keynote talk at the 2020 International Solid-State Circuits Conference (ISSCC) discussing some of the advances in machine learning and their implications for the kinds of computational devices we need to build, especially in the post-Moore's Law era. It also discusses some of the ways that machine learning may be able to help with aspects of the circuit design process. Finally, it provides a sketch of at least one interesting direction towards much larger-scale multi-task models that are sparsely activated and employ much more dynamic, example- and task-based routing than the machine learning models of today.
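
The sparsely activated, dynamically routed direction mentioned at the end can be illustrated with a toy top-k gating function that sends each example to only a few experts; the sizes, the random gate, and the expert matrices below are placeholders, not the models discussed in the talk.

    # Toy sketch of sparse, per-example routing: a gating network scores experts
    # and each input activates only its top-k of them. Sizes are placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    num_experts, d_model, k = 8, 16, 2
    gate_w = rng.normal(size=(d_model, num_experts))            # gating network
    experts = [rng.normal(size=(d_model, d_model)) for _ in range(num_experts)]

    def route(x):
        scores = x @ gate_w                                      # (batch, num_experts)
        top_k = np.argsort(scores, axis=1)[:, -k:]               # chosen experts per example
        out = np.zeros_like(x)
        for i, row in enumerate(x):
            weights = np.exp(scores[i, top_k[i]])
            weights /= weights.sum()                             # softmax over the k chosen
            for w, e in zip(weights, top_k[i]):
                out[i] += w * (row @ experts[e])                 # only k experts run per example
        return out

    print(route(rng.normal(size=(4, d_model))).shape)            # (4, 16)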

In recent years, mobile devices have developed rapidly, gaining stronger computation capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now be run on mobile devices. To take advantage of the resources available on mobile devices and preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and only uploads computation results instead of original data to contribute to the optimization of the global model. This architecture can not only relieve the computation and storage burden on servers, but also protect users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies of mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey provides a clear overview of mobile distributed machine learning and offers guidelines for applying it to real applications.
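
A minimal numerical sketch of the pattern described above (local training, only results uploaded, weighted aggregation on the server) is shown below on a linear model; the data, learning rate, and number of rounds are synthetic placeholders rather than any specific method from the survey.

    # Sketch of federated-style mobile distributed learning on a linear model:
    # each device trains locally on its own data and only uploads its weights;
    # the server aggregates them weighted by local dataset size. Data is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])

    def local_update(w, X, y, lr=0.1, steps=20):
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
            w = w - lr * grad
        return w

    # Each device holds private data that never leaves it.
    devices = []
    for _ in range(5):
        X = rng.normal(size=(30, 2))
        devices.append((X, X @ true_w + rng.normal(0, 0.1, 30)))

    global_w = np.zeros(2)
    for _ in range(10):                              # communication rounds
        local_ws = [local_update(global_w, X, y) for X, y in devices]
        sizes = np.array([len(y) for _, y in devices], dtype=float)
        global_w = np.average(local_ws, axis=0, weights=sizes)

    print("learned weights:", global_w)              # close to [2, -1]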

Generic object detection, aiming at locating object instances from a large number of predefined categories in natural images, is one of the most fundamental and challenging problems in computer vision. Deep learning techniques have emerged in recent years as powerful methods for learning feature representations directly from data, and have led to remarkable breakthroughs in the field of generic object detection. Given this time of rapid evolution, the goal of this paper is to provide a comprehensive survey of the recent achievements in this field brought by deep learning techniques. More than 250 key contributions are included in this survey, covering many aspects of generic object detection research: leading detection frameworks and fundamental subproblems, including object feature representation, object proposal generation, context information modeling, and training strategies; and evaluation issues, specifically benchmark datasets, evaluation metrics, and state-of-the-art performance. We finish by identifying promising directions for future research.
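
Since evaluation metrics are among the surveyed aspects, the snippet below shows the intersection-over-union (IoU) computation that underlies most detection benchmarks; the example box coordinates are illustrative.

    # Intersection over union (IoU), the overlap criterion behind most detection
    # benchmarks (a prediction typically counts as correct when IoU >= 0.5).
    def iou(box_a, box_b):
        """Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter)

    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 25 / 175, about 0.14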

Deep learning has enabled remarkable progress over recent years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial factor in this progress is novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize it along three dimensions: search space, search strategy, and performance estimation strategy.
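
The three dimensions can be made concrete with a toy random-search loop: a small search space, random sampling as the search strategy, and a stand-in performance estimator; all names and the scoring function below are illustrative placeholders, not an actual NAS method.

    # Toy illustration of the three NAS dimensions: a search space, a search
    # strategy (random sampling), and a performance estimator (a stand-in score
    # here; in practice this would train/evaluate or approximate accuracy).
    import random

    SEARCH_SPACE = {                       # search space
        "depth":      [4, 8, 12, 16],
        "width":      [64, 128, 256],
        "kernel":     [3, 5, 7],
        "activation": ["relu", "gelu", "swish"],
    }

    def sample_architecture():             # search strategy: random search
        return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

    def estimate_performance(arch):        # performance estimation (placeholder proxy)
        return arch["depth"] * arch["width"] / (arch["kernel"] ** 2) + random.random()

    best = max((sample_architecture() for _ in range(50)), key=estimate_performance)
    print("best architecture found:", best)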
