
Scalability remains one of the biggest challenges to the adoption of permissioned blockchain technologies for large-scale deployments. Permissioned blockchains typically exhibit low latencies compared to permissionless deployments -- however, at the cost of poor scalability. Various solutions have been proposed to capture "the best of both worlds", targeting low latency and high scalability simultaneously, the most prominent technique being blockchain sharding. However, most existing sharding proposals exploit features of the permissionless model and are therefore restricted to cryptocurrency applications. We present MITOSIS, a novel approach to practically improve the scalability of permissioned blockchains. Our system allows the dynamic creation of blockchains, as more participants join the system, to meet practical scalability requirements. Crucially, it enables the division of an existing blockchain (and its participants) into two -- reminiscent of mitosis, the biological process of cell division. MITOSIS inherits the low latency of permissioned blockchains while preserving high throughput via parallel processing. Newly created chains in our system are fully autonomous, can choose their own consensus protocols, and yet can interact with each other to share information and assets -- meeting high levels of interoperability. We analyse the security of MITOSIS and experimentally evaluate the performance of our solution when instantiated over Hyperledger Fabric. Our results show that MITOSIS can be ported with few modifications and manageable overhead to existing permissioned blockchains, such as Hyperledger Fabric.
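The core "mitosis" operation, splitting one chain and its participant set into two autonomous child chains, can be pictured with a minimal sketch. The class, field names, and even/odd participant split below are illustrative assumptions, not the paper's actual data structures or Hyperledger Fabric APIs.

```python
# Illustrative sketch only: a toy model of "mitosis"-style chain division.
# Class names, fields, and the halving split rule are assumptions for
# illustration, not the MITOSIS protocol itself.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class Chain:
    chain_id: str
    participants: List[str]
    blocks: List[dict] = field(default_factory=list)
    parent_ref: Optional[str] = None  # reference to the parent chain's final state


def divide(chain: Chain) -> Tuple[Chain, Chain]:
    """Split one chain into two autonomous child chains.

    Each child keeps a reference to the parent's final state so the children
    can later interoperate (e.g., verify assets originating on the parent)
    without sharing a consensus instance.
    """
    last_ref = f"{chain.chain_id}:{len(chain.blocks)}"
    half = len(chain.participants) // 2
    left = Chain(chain.chain_id + ".0", chain.participants[:half], parent_ref=last_ref)
    right = Chain(chain.chain_id + ".1", chain.participants[half:], parent_ref=last_ref)
    return left, right


if __name__ == "__main__":
    parent = Chain("c0", [f"peer{i}" for i in range(8)])
    a, b = divide(parent)
    print(a.chain_id, a.participants, a.parent_ref)
    print(b.chain_id, b.participants, b.parent_ref)
```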

Related content

A blockchain is a distributed database system maintained by participating nodes. Its records cannot be altered or forged, and it can be understood as a ledger system. It is a central concept of Bitcoin: a full copy of the Bitcoin blockchain records every transaction of its tokens. From this information, the value held by any address at any point in history can be determined.
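Because the full chain records every token transaction, the balance of any address at any past point can be recomputed by replaying the history. The sketch below illustrates this with a simplified (sender, receiver, amount) transaction format, which is an assumption for illustration and not Bitcoin's actual UTXO model.

```python
# Illustrative sketch: recomputing an address's balance at any point in
# history by replaying a recorded transaction list.
from typing import List, Tuple

Tx = Tuple[str, str, float]  # (sender, receiver, amount) -- simplified format


def balance_at(history: List[Tx], address: str, upto: int) -> float:
    """Balance of `address` after the first `upto` transactions."""
    bal = 0.0
    for sender, receiver, amount in history[:upto]:
        if receiver == address:
            bal += amount
        if sender == address:
            bal -= amount
    return bal


if __name__ == "__main__":
    history = [("coinbase", "alice", 50.0), ("alice", "bob", 20.0)]
    print(balance_at(history, "alice", 1))  # 50.0
    print(balance_at(history, "alice", 2))  # 30.0
```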

Along with the progress of AI democratization, machine learning (ML) has been successfully applied to edge applications such as smartphones and automated driving. Nowadays, more applications require ML on tiny devices with extremely limited resources, such as implantable cardioverter defibrillators (ICDs) -- a setting known as TinyML. Unlike ML on the edge, TinyML with a limited energy supply places higher demands on low-power execution. Stochastic computing (SC), which uses bitstreams for data representation, is promising for TinyML since it can perform the fundamental ML operations using simple logic gates instead of complicated binary adders and multipliers. However, SC commonly suffers from low accuracy on ML tasks due to low data precision and the inaccuracy of its arithmetic units. Increasing the bitstream length, as in existing works, can mitigate the precision issue but incurs higher latency. In this work, we propose a novel SC architecture, namely Block-based Stochastic Computing (BSC). BSC divides inputs into blocks so that latency can be reduced by exploiting high data parallelism. Moreover, optimized arithmetic units and an output revision (OUR) scheme are proposed to improve accuracy. On top of this, a global optimization approach is devised to determine the number of blocks, enabling a better latency-power trade-off. Experimental results show that BSC can outperform existing designs, achieving over 10% higher accuracy on ML tasks and more than a 6x reduction in power.
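As background, a minimal sketch of unipolar stochastic computing helps make this concrete: a value in [0, 1] is encoded as the fraction of 1s in a random bitstream, and multiplication of two independent streams reduces to a bitwise AND. The block splitting shown here only illustrates the parallelism idea behind BSC; it is not the paper's actual BSC architecture, arithmetic units, or OUR scheme.

```python
# Illustrative sketch of unipolar stochastic computing (SC).
import random


def encode(value: float, length: int) -> list:
    """Encode a value in [0, 1] as a random bitstream of the given length."""
    return [1 if random.random() < value else 0 for _ in range(length)]


def decode(stream: list) -> float:
    """Decode a bitstream back to a value: the fraction of 1s."""
    return sum(stream) / len(stream)


def sc_multiply(a: list, b: list) -> list:
    """Unipolar SC multiplication: bitwise AND of two independent streams."""
    return [x & y for x, y in zip(a, b)]


def split_blocks(stream: list, n_blocks: int) -> list:
    """Split a stream into blocks that could be processed in parallel."""
    size = len(stream) // n_blocks
    return [stream[i * size:(i + 1) * size] for i in range(n_blocks)]


if __name__ == "__main__":
    a, b = encode(0.5, 1024), encode(0.25, 1024)
    print(decode(sc_multiply(a, b)))   # approximately 0.125
    print(len(split_blocks(a, 8)))     # 8 blocks of 128 bits each
```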

Smart services based on the Internet of Everything (IoE) are gaining considerable popularity due to the ever-increasing demands placed on wireless networks. This calls for next-generation communication systems with enhanced capabilities. Although 5G networks show great potential to support numerous IoE-based services, they are not adequate to meet the full requirements of new smart applications. Therefore, there is an increased demand for envisioning 6G wireless communication systems to overcome the major limitations of existing 5G networks. Moreover, incorporating artificial intelligence in 6G will provide solutions for very complex problems relevant to network optimization. Furthermore, to add further value to future 6G networks, researchers are investigating new technologies such as THz and quantum communications. Future 6G wireless communications must support massive data-driven applications and an increasing number of users. This paper presents recent advances in 6G wireless networks, including the evolution from 1G to 5G communications, research trends for 6G, enabling technologies, and state-of-the-art 6G projects.

Federated machine learning (FL) allows models to be trained collectively on sensitive data, as only the clients' models, and not their training data, need to be shared. However, despite the attention that research on FL has drawn, the concept still lacks broad adoption in practice. One of the key reasons is the great challenge of implementing FL systems that simultaneously achieve fairness, integrity, and privacy preservation for all participating clients. To contribute to solving this issue, our paper proposes an FL system that incorporates blockchain technology, local differential privacy, and zero-knowledge proofs. Our proof-of-concept implementation with multiple linear regression illustrates that these state-of-the-art technologies can be combined into an FL system that aligns economic incentives, trust, and confidentiality requirements in a scalable and transparent manner.
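A minimal sketch of the local-differential-privacy ingredient follows: each client perturbs its local linear-regression coefficients with Laplace noise before sharing them, and the server averages the noisy updates. The epsilon, sensitivity, and averaging step are assumptions for illustration; the paper's system also involves blockchain and zero-knowledge proofs, which are not modeled here.

```python
# Illustrative sketch: local differential privacy for federated updates.
import numpy as np


def ldp_perturb(update: np.ndarray, epsilon: float, sensitivity: float) -> np.ndarray:
    """Add Laplace noise calibrated to sensitivity/epsilon to a local update."""
    scale = sensitivity / epsilon
    return update + np.random.laplace(loc=0.0, scale=scale, size=update.shape)


def aggregate(updates: list) -> np.ndarray:
    """Server-side aggregation: simple average of the perturbed updates."""
    return np.mean(np.stack(updates), axis=0)


if __name__ == "__main__":
    # Three clients with similar local regression coefficients.
    clients = [np.array([0.9, 2.1]), np.array([1.1, 1.9]), np.array([1.0, 2.0])]
    noisy = [ldp_perturb(u, epsilon=1.0, sensitivity=0.1) for u in clients]
    print(aggregate(noisy))  # close to [1.0, 2.0]
```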

On-chip communication infrastructure is a central component of modern systems-on-chip (SoCs), and it continues to gain importance as the number of cores, the heterogeneity of components, and the on-chip and off-chip bandwidth continue to grow. Decades of research on on-chip networks enabled cache-coherent shared-memory multiprocessors. However, communication fabrics that meet the needs of heterogeneous many-cores and accelerator-rich SoCs, which are not, or only partially, coherent, are a much less mature research area. In this work, we present a modular, topology-agnostic, high-performance on-chip communication platform. The platform includes components to build and link subnetworks with customizable bandwidth and concurrency properties and adheres to a state-of-the-art, industry-standard protocol. We discuss microarchitectural trade-offs and timing/area characteristics of our modules and show that they can be composed to build high-bandwidth (e.g., 2.5 GHz and 1024 bit data width) end-to-end on-chip communication fabrics (not only network switches but also DMA engines and memory controllers) with high degrees of concurrency. We design and implement a state-of-the-art ML training accelerator, where our communication fabric scales to 1024 cores on a die, providing 32 TB/s cross-sectional bandwidth at only 24 ns round-trip latency between any two cores.
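As a back-of-the-envelope consistency check of the quoted figures, assuming each link transfers one 1024-bit beat per cycle at 2.5 GHz and that the cross-sectional bandwidth aggregates all links crossing the bisection (both assumptions, not stated in the abstract):

```latex
% Per-link bandwidth and implied number of bisection links (assumptions above).
\begin{align*}
B_{\text{link}} &= 1024\,\text{bit} \times 2.5\,\text{GHz} = 2.56\,\text{Tbit/s} = 320\,\text{GB/s},\\
N_{\text{links}} &\approx \frac{32\,\text{TB/s}}{320\,\text{GB/s}} = 100.
\end{align*}
```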

Given a cardiac-arrest patient being monitored in the ICU (intensive care unit) for brain activity, how can we predict their health outcome as early as possible? Early decision-making is critical in many applications; e.g., monitoring patients may assist in early intervention and improved care. On the other hand, early prediction on EEG data poses several challenges: (i) the earliness-accuracy trade-off -- observing more data often increases accuracy but sacrifices earliness; (ii) large-scale (for training) and streaming (for online decision-making) data processing; and (iii) multi-variate (due to multiple electrodes) and multi-length (due to patients' varying lengths of stay) time series. Motivated by this real-world application, we present BeneFitter, which infuses the savings incurred from an early prediction as well as the cost of misclassification into a unified domain-specific target called benefit. Unifying these two quantities allows us to directly estimate a single target (i.e., benefit) and, importantly, dictates exactly when to output a prediction: when the benefit estimate becomes positive. BeneFitter (a) is efficient and fast, with training time linear in the number of input sequences, and can operate in real time for decision-making, (b) can handle multi-variate and variable-length time series, suitable for patient data, and (c) is effective, providing up to 2x time-savings with equal or better accuracy compared to competitors.
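The decision rule itself can be sketched in a few lines: keep observing the stream and emit a prediction as soon as the estimated benefit (savings from deciding early minus the expected misclassification cost) turns positive. The estimator interface and cost model below are assumptions for illustration, not BeneFitter's actual training procedure.

```python
# Illustrative sketch of a benefit-driven early-decision rule.
from typing import Callable, Iterable, Optional, Tuple


def early_decide(
    stream: Iterable,
    estimate: Callable[[list], Tuple[int, float]],  # returns (label, error probability)
    savings_per_step: float,
    misclassification_cost: float,
    horizon: int,
) -> Optional[Tuple[int, int]]:
    """Return (label, decision time) once the estimated benefit becomes positive."""
    observed = []
    for t, x in enumerate(stream):
        observed.append(x)
        label, p_error = estimate(observed)
        benefit = savings_per_step * (horizon - t) - misclassification_cost * p_error
        if benefit > 0:
            return label, t
    return None  # never confident enough to decide early


if __name__ == "__main__":
    # Dummy estimator whose confidence grows with the amount of observed data.
    dummy = lambda obs: (1, max(0.05, 0.9 - 0.1 * len(obs)))
    print(early_decide(range(20), dummy, savings_per_step=1.0,
                       misclassification_cost=50.0, horizon=20))  # (1, 6)
```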

While many researchers adopt a sharding approach to design scalable blockchains, few works have studied the transaction placement problem incurred by sharding protocols. The widely used hashing placement algorithm renders an overwhelming portion of transactions cross-shard. In this paper, we analyze the high cost of cross-shard transactions and reveal that most Bitcoin transactions have simple dependencies and can become single-shard under a placement algorithm that takes transaction dependencies into account. In addition, we perform a case study of OptChain, the state-of-the-art transaction placement algorithm for sharded blockchains, and identify a shortcoming in it. A simple fix is proposed, and our evaluation results demonstrate that the proposed fix effectively helps OptChain overcome the shortcoming and significantly improves system performance under a special workload. The authors of OptChain revised the algorithm description after noticing our work; their updated algorithm does not exhibit the shortcoming under the workloads employed in this paper.
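The contrast between the two placement styles can be sketched simply: hash-based placement scatters dependent transactions across shards, while a dependency-aware heuristic co-locates a transaction with the shard holding most of its inputs. The heuristic below is a toy for illustration, not the OptChain algorithm or the paper's proposed fix.

```python
# Illustrative sketch: hash-based vs. dependency-aware transaction placement.
from collections import Counter
from typing import Dict, List


def hash_place(tx_id: str, n_shards: int) -> int:
    """Baseline: place a transaction purely by hashing its id."""
    return hash(tx_id) % n_shards


def dependency_place(tx_id: str, inputs: List[str],
                     placement: Dict[str, int], n_shards: int) -> int:
    """Place a transaction on the shard holding most of its input transactions."""
    counts = Counter(placement[i] for i in inputs if i in placement)
    shard = counts.most_common(1)[0][0] if counts else hash_place(tx_id, n_shards)
    placement[tx_id] = shard
    return shard


if __name__ == "__main__":
    placement: Dict[str, int] = {"tx0": 1, "tx1": 1}
    # tx2 spends tx0 and tx1, so it stays single-shard on shard 1.
    print(dependency_place("tx2", ["tx0", "tx1"], placement, n_shards=4))  # 1
```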

Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This has a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.

The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Like their biological counterparts, sparse networks generalize just as well as, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial on sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation and the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparing different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
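As one concrete instance of the pruning approaches such surveys cover, a minimal sketch of global magnitude pruning follows: weights with the smallest absolute values are zeroed until the requested sparsity is reached. It is written with NumPy for brevity and stands in for the many framework-specific schemes; it is not tied to this particular survey's proposals.

```python
# Illustrative sketch: global magnitude pruning of a weight tensor.
import numpy as np


def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value; everything at or below it is pruned.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)


if __name__ == "__main__":
    w = np.random.randn(4, 4)
    pruned = magnitude_prune(w, sparsity=0.75)
    print((pruned == 0).mean())  # approximately 0.75
```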

The concept of the smart grid has been introduced as a new vision of the conventional power grid to find an efficient way of integrating green and renewable energy technologies. Along these lines, the Internet-connected smart grid, also called the energy Internet, is emerging as an innovative approach to deliver energy from anywhere at any time. The ultimate goal of these developments is to build a sustainable society. However, integrating and coordinating a large number of growing connections can be a challenging issue for the traditional centralized grid system. Consequently, the smart grid is undergoing a transformation from its centralized form to a decentralized topology. On the other hand, blockchain has several excellent features that make it a promising application for the smart grid paradigm. In this paper, we aim to provide a comprehensive survey on the application of blockchain in the smart grid. We identify the significant security challenges of smart grid scenarios that can be addressed by blockchain, and then present a number of recent blockchain-based research works from the literature that address security issues in the smart grid. We also summarize several related practical projects, trials, and products that have emerged recently. Finally, we discuss essential research challenges and future directions for applying blockchain to smart grid security.

We survey research on self-driving cars published in the literature, focusing on autonomous cars developed since the DARPA challenges that are equipped with an autonomy system categorized as SAE level 3 or higher. The architecture of a self-driving car's autonomy system is typically organized into the perception system and the decision-making system. The perception system is generally divided into many subsystems responsible for tasks such as self-driving-car localization, static obstacle mapping, moving obstacle detection and tracking, road mapping, and traffic signalization detection and recognition, among others. The decision-making system is likewise commonly partitioned into many subsystems responsible for tasks such as route planning, path planning, behavior selection, motion planning, and control. In this survey, we present the typical architecture of the autonomy system of self-driving cars. We also review research on relevant methods for perception and decision making. Furthermore, we present a detailed description of the architecture of the autonomy system of UFES's car, IARA. Finally, we list prominent autonomous research cars developed by technology companies and reported in the media.
