
The combination of innovative research topics and emerging technologies lets researchers define new processes and models. New needs concern the definition of modular and scalable approaches that keep society and the environment in mind. One important topic to focus on is the smart city. Emerging technologies let smart cities develop new processes to improve the services offered by various actors, whether industry or government. Smart cities were conceived to improve citizens' quality of life. To reach this goal, various approaches have been proposed, but they lack a common interface that lets every stakeholder communicate simply and quickly. This paper proposes an architecture to overcome the current limitations of smart cities: it uses Blockchain technology as a distributed database that lets everyone join the network and feel part of a community. Blockchain can improve the development of processes for smart cities. Scalability is guaranteed through a context-aware approach: applications do not need to know about the back-end implementation; they only need to adapt to an interface. With Blockchain, it is possible to collect data anonymously for statistical analysis, to access public records to ensure security in the city, and to guarantee the origin of products and energy.
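The tamper-evidence property the abstract relies on can be illustrated with a minimal sketch: a hash-chained, append-only ledger of anonymized city records. This is an illustration only, not the paper's architecture; all names (`CityLedger`, `add_record`, the sensor payloads) are invented for the example.

```python
# Minimal tamper-evident ledger sketch: each block stores the hash of its
# predecessor, so altering any past record breaks verification.
import hashlib
import json


class CityLedger:
    """Append-only chain of records; each block hashes its predecessor."""

    def __init__(self):
        self.blocks = []  # list of (payload, prev_hash, block_hash)

    def _hash(self, payload, prev_hash):
        data = json.dumps(payload, sort_keys=True) + prev_hash
        return hashlib.sha256(data.encode()).hexdigest()

    def add_record(self, payload):
        prev_hash = self.blocks[-1][2] if self.blocks else "0" * 64
        self.blocks.append((payload, prev_hash, self._hash(payload, prev_hash)))

    def verify(self):
        """Recompute every hash; any tampering breaks the chain."""
        prev_hash = "0" * 64
        for payload, stored_prev, stored_hash in self.blocks:
            if stored_prev != prev_hash or self._hash(payload, prev_hash) != stored_hash:
                return False
            prev_hash = stored_hash
        return True


ledger = CityLedger()
ledger.add_record({"sensor": "air-quality-17", "pm25": 12.4})
ledger.add_record({"sensor": "energy-meter-3", "kwh": 0.82})
ok_before = ledger.verify()  # chain intact
# Tamper with the first payload while keeping the stored hashes.
ledger.blocks[0] = ({"sensor": "air-quality-17", "pm25": 1.0},
                    ledger.blocks[0][1], ledger.blocks[0][2])
ok_after = ledger.verify()   # tampering is detected
```

A real blockchain adds distributed consensus on top of this structure; the sketch only shows why stored records become hard to falsify once chained.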

Related content

A smart city uses information technologies and innovative ideas to integrate a city's component systems and services, in order to raise the efficiency of resource use, optimize urban management and services, and improve citizens' quality of life. A smart city is an advanced form of urban informatization that applies next-generation information technology throughout the city's sectors, grounded in the next generation of innovation in the knowledge society (Innovation 2.0). It deeply integrates informatization, industrialization, and urbanization, helping to alleviate "big city diseases", raise the quality of urbanization, achieve fine-grained and dynamic management, improve the effectiveness of city administration, and enhance citizens' quality of life. Definitions of the smart city vary widely; the most widely accepted international definition is that a smart city is the urban form supported by next-generation information technology in the environment of knowledge-society next-generation innovation (Innovation 2.0). This view emphasizes that a smart city is not merely the application of new information technologies such as the Internet of Things and cloud computing; more importantly, through the methodology of knowledge-society-oriented Innovation 2.0, it builds a sustainable urban innovation ecosystem characterized by user innovation, open innovation, mass innovation, and collaborative innovation.

R is a language and environment for statistical computing and graphics. It provides a wide variety of statistical tools (modeling, statistical testing, time-series analysis, classification, machine learning, ...), together with powerful graphical techniques, and it has the great advantage of being highly extensible. Nowadays, there is no doubt that it is the software par excellence in statistics courses at any level, for theoretical and applied subjects alike. It has also become an almost essential tool for any research work that involves data analysis or visualization, and it is one of the most widely used general-purpose programming languages. The goal of this work is to help share ideas and resources that improve teaching and research with the statistical software R. We cover its benefits, show how to get started and where to locate specific resources, and make recommendations for using R based on our experience. For the classroom, we develop a curricular and assessment infrastructure to support both dissemination and evaluation, while for research we offer a broader approach to quantitative studies that provides excellent support for work in science and technology.

Recently, Blockchain-Enabled Federated Learning (BCFL), an innovative approach that combines the advantages of Federated Learning and Blockchain technology, has been receiving great attention. Federated Learning (FL) allows multiple participants to jointly train machine learning models in a decentralized manner while maintaining data privacy and security. This paper proposes a reference architecture for blockchain-enabled federated learning that enables multiple entities to collaboratively train machine learning models while preserving data privacy and security. A critical component of this architecture is a decentralized identifier (DID)-based access system. DIDs introduce a decentralized, self-sovereign identity (ID) management system that allows participants to manage their IDs independently of central authorities. Within the proposed architecture, participants authenticate and gain access to the federated learning platform via their DIDs, which are securely stored on the blockchain. The access system administers access control and permissions through the execution of smart contracts, further enhancing the security and decentralization of the system. Integrating blockchain-enabled federated learning with a DID access system offers a robust solution for collaborative machine learning in a distributed and secure manner: participants can contribute to global model training while maintaining data privacy and identity control, without sharing local data. The source code will be made publicly available soon.
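The DID-based access pattern described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the `did:example` method, the `AccessContract` class (standing in for an on-chain smart contract), and the action names are all invented for the example.

```python
# Toy DID registry plus access-control "contract": DIDs are derived from
# public keys; permissions are granted and checked per DID.
import hashlib


def make_did(public_key: bytes) -> str:
    # Here a DID is simply a hash of the holder's public key.
    return "did:example:" + hashlib.sha256(public_key).hexdigest()[:16]


class AccessContract:
    """Stands in for an on-chain smart contract managing permissions."""

    def __init__(self):
        self.registered = set()   # DIDs anchored on the ledger
        self.permissions = {}     # DID -> set of allowed actions

    def register(self, did: str):
        self.registered.add(did)
        self.permissions.setdefault(did, set())

    def grant(self, did: str, action: str):
        if did not in self.registered:
            raise ValueError("unknown DID")
        self.permissions[did].add(action)

    def check(self, did: str, action: str) -> bool:
        return did in self.registered and action in self.permissions[did]


contract = AccessContract()
alice = make_did(b"alice-public-key")
contract.register(alice)
contract.grant(alice, "submit_model_update")
can_train = contract.check(alice, "submit_model_update")  # permitted
can_admin = contract.check(alice, "aggregate_models")     # not permitted
```

In a real deployment the registry and permission state would live on-chain and `check` would be evaluated by smart-contract code rather than a Python dictionary; only the control flow is the same.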

Trends in hardware, the prevalence of the cloud, and the rise of highly demanding applications have ushered in an era of specialization that is quickly changing how data is processed at scale. These changes are likely to continue and accelerate in the coming years as new technologies are adopted and deployed: smart NICs, smart storage, smart memory, disaggregated storage, disaggregated memory, specialized accelerators (GPUs, TPUs, FPGAs), and a wealth of ASICs created specifically for computationally expensive tasks (e.g., cryptography or compression). In this tutorial, we focus on data processing on FPGAs, a technology that has received less attention than, e.g., TPUs or GPUs, but that is increasingly being deployed in the cloud for data processing tasks due to the architectural flexibility of FPGAs and their ability to process data at line rate, something not possible with other types of processors or accelerators. In the tutorial, we cover what FPGAs are, their characteristics, their advantages and disadvantages, as well as examples of industry deployments and how FPGAs are used in various data processing tasks. We introduce FPGA programming with high-level languages and describe hardware and software resources available to researchers. The tutorial includes case studies, borrowed from research done in collaboration with companies, that illustrate the potential of FPGAs in data processing and how software and hardware are evolving to take advantage of the possibilities they offer. The use cases include: (1) approximate nearest-neighbor search, relevant to databases and machine learning; (2) remote disaggregated memory, showing how the cloud architecture is evolving and demonstrating the potential for operator offloading and line-rate data processing; and (3) a recommendation system as an application with tight latency constraints.

The analysis of multivariate time series data is challenging due to the various frequencies of signal changes that can occur over both short and long terms. Furthermore, standard deep learning models are often unsuitable for such datasets, as signals are typically sampled at different rates. To address these issues, we introduce MultiWave, a novel framework that enhances deep learning time series models by incorporating components that operate at the intrinsic frequencies of signals. MultiWave uses wavelets to decompose each signal into subsignals of varying frequencies and groups them into frequency bands. Each frequency band is handled by a different component of our model. A gating mechanism combines the output of the components to produce sparse models that use only specific signals at specific frequencies. Our experiments demonstrate that MultiWave accurately identifies informative frequency bands and improves the performance of various deep learning models, including LSTM, Transformer, and CNN-based models, for a wide range of applications. It attains top performance in stress and affect detection from wearables. It also increases the AUC of the best-performing model by 5% for in-hospital COVID-19 mortality prediction from patient blood samples and for human activity recognition from accelerometer and gyroscope data. We show that MultiWave consistently identifies critical features and their frequency components, thus providing valuable insights into the applications studied.
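The wavelet step that MultiWave builds on can be sketched with the simplest wavelet family. The following is a numpy-only Haar decomposition splitting a signal into frequency bands (detail coefficients per level plus a final approximation); the paper may use other wavelet families and combines the bands with learned components, which this sketch does not attempt.

```python
# Haar wavelet decomposition: repeatedly split the signal into a
# high-frequency detail band and a low-frequency approximation.
import numpy as np


def haar_decompose(signal, levels):
    """Return [detail_level1, ..., detail_levelN, approximation]."""
    bands = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        pairs = approx.reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # high-frequency band
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # low-frequency residue
        bands.append(detail)
    bands.append(approx)
    return bands


rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.1 * rng.standard_normal(64)
bands = haar_decompose(x, levels=3)
lengths = [len(b) for b in bands]  # 3 detail bands (32, 16, 8) + approximation (8)
```

Because the Haar transform is orthogonal, the bands partition the signal's energy exactly, which is what makes per-band components and sparse band selection meaningful.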

Multivariate sequential data collected in practice often exhibit temporal irregularities, including nonuniform time intervals and component misalignment. However, if uneven spacing and asynchrony are endogenous characteristics of the data rather than a result of insufficient observation, the information content of these irregularities plays a defining role in characterizing the multivariate dependence structure. Existing approaches for probabilistic forecasting either overlook the resulting statistical heterogeneities, are susceptible to imputation biases, or impose parametric assumptions on the data distribution. This paper proposes an end-to-end solution that overcomes these limitations by letting the observation arrival times, which lie at the core of temporal irregularities, play the central role in model construction. To acknowledge temporal irregularities, we first give each component its own hidden state, so that the arrival times dictate when, how, and which hidden states to update. We then develop a conditional flow representation to non-parametrically represent the data distribution, which is typically non-Gaussian, and supervise this representation by carefully factorizing the log-likelihood objective to select conditional information that facilitates capturing time variation and path dependency. The broad applicability and superiority of the proposed solution are confirmed by comparing it with existing approaches through ablation studies and testing on real-world datasets.
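The idea of arrival times dictating which hidden state to update, and by how much, can be made concrete with a toy update rule. This is not the paper's model: the exponential decay, the scalar states, and all names (`decay_update`, the component labels) are invented to illustrate the mechanism of per-component, arrival-driven updates.

```python
# Each component keeps its own state; an observation updates only that
# component, after decaying its previous state by the elapsed time.
import math


def decay_update(states, last_seen, component, t, value, alpha=0.5, tau=1.0):
    """Update only the observed component's state, decayed by elapsed time."""
    dt = t - last_seen.get(component, t)
    decayed = states.get(component, 0.0) * math.exp(-dt / tau)
    states[component] = (1 - alpha) * decayed + alpha * value
    last_seen[component] = t


states, last_seen = {}, {}
# Irregular, asynchronous arrivals: components update at their own times.
decay_update(states, last_seen, "heart_rate", t=0.0, value=70.0)
decay_update(states, last_seen, "blood_pressure", t=0.3, value=120.0)
decay_update(states, last_seen, "heart_rate", t=2.0, value=75.0)
```

The point of the sketch is that no imputation or resampling to a common grid is needed: the gap length itself (here through the decay term) carries information into the state.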

Blockchain is an emerging decentralized technology for data collection, sharing, and storage that provides transparent, secure, tamper-proof, and robust ledger services for various real-world use cases. Recent years have witnessed notable developments in blockchain technology itself as well as in blockchain-adopting applications. Most existing surveys limit their scope to a few particular issues of blockchain or its applications, making it hard to depict the general picture of the current blockchain ecosystem. In this paper, we investigate recent advances in both blockchain technology and its most active research topics in real-world applications. We first review recent developments of consensus mechanisms and storage mechanisms in general blockchain systems. We then conduct an extensive literature review on blockchain-enabled IoT, edge computing, federated learning, and several emerging applications, including healthcare, the COVID-19 pandemic, social networks, and supply chains, discussing the specific research topics in each. Finally, we discuss future directions, challenges, and opportunities in both academia and industry.

For a long time, computer architecture and systems have been optimized to enable efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This has a twofold meaning: improving designers' productivity and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which predicts performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of the system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data-center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.

The concept of the smart grid has been introduced as a new vision of the conventional power grid, aimed at integrating green and renewable energy technologies efficiently. Along these lines, the Internet-connected smart grid, also called the energy Internet, is emerging as an innovative approach to providing energy anywhere at any time. The ultimate goal of these developments is to build a sustainable society. However, integrating and coordinating a large and growing number of connections can be a challenging issue for the traditional centralized grid system. Consequently, the smart grid is transforming from its centralized form to a decentralized topology. Meanwhile, blockchain has several excellent features that make it a promising application for the smart grid paradigm. In this paper, we aim to provide a comprehensive survey of the application of blockchain in the smart grid. We identify the significant security challenges of smart grid scenarios that can be addressed by blockchain, and then present a number of recent blockchain-based research works from the literature that address security issues in the smart grid. We also summarize several related practical projects, trials, and products that have emerged recently. Finally, we discuss essential research challenges and future directions for applying blockchain to smart grid security.

Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as residual and convolutional neural networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance in document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open-source deep learning framework to the TSC community, in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
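The simplest building block shared by the convolutional TSC architectures studied in such benchmarks is a 1-D convolution followed by global average pooling and a linear classifier. The following numpy sketch shows only that forward pass with random weights; it is an illustration of the architectural pattern, not any specific benchmarked model, and all names are invented.

```python
# Forward pass of a minimal convolutional classifier for a univariate series:
# conv1d -> ReLU -> global average pooling -> linear layer -> softmax.
import numpy as np

rng = np.random.default_rng(42)


def conv1d(x, kernels):
    """Valid 1-D convolution of a univariate series with several kernels."""
    return np.stack([np.convolve(x, k, mode="valid") for k in kernels])


def forward(x, kernels, w_out):
    feature_maps = np.maximum(conv1d(x, kernels), 0.0)  # ReLU
    pooled = feature_maps.mean(axis=1)                  # global average pooling
    logits = w_out @ pooled
    e = np.exp(logits - logits.max())
    return e / e.sum()                                  # class probabilities


series = rng.standard_normal(128)            # one univariate time series
kernels = rng.standard_normal((8, 7)) * 0.1  # 8 filters of width 7
w_out = rng.standard_normal((3, 8)) * 0.1    # 3 classes
probs = forward(series, kernels, w_out)
```

Global average pooling is what makes the same weights applicable to series of any length, one reason this pattern recurs across the architectures compared in the study.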

Deep Convolutional Neural Networks (CNNs) are a special type of neural network that has shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNNs is largely achieved through multiple non-linear feature extraction stages that can automatically learn hierarchical representations from the data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and very interesting deep CNN architectures have recently been reported. The recent race in deep CNN architectures for achieving high performance on challenging benchmarks has shown that innovative architectural ideas, as well as parameter optimization, can improve CNN performance on various vision-related tasks. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity has been achieved by restructuring the processing units. In particular, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in recently reported CNN architectures and, consequently, classifies the recent innovations in CNN architectures into seven categories, based on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention. Additionally, it covers an elementary understanding of CNN components and sheds light on the current challenges and applications of CNNs.
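One of the survey's categories, attention, can be illustrated with channel attention in the style of squeeze-and-excitation: channel maps are "squeezed" to a descriptor, passed through a small bottleneck, and used to rescale the feature maps. The sketch below uses random weights and numpy only; it shows the mechanism, not any trained architecture.

```python
# Channel attention: global average pool per channel, bottleneck MLP,
# sigmoid gate, then per-channel rescaling of the feature maps.
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def channel_attention(feature_maps, w1, w2):
    """feature_maps: (channels, height, width)."""
    squeeze = feature_maps.mean(axis=(1, 2))              # per-channel descriptor
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # bottleneck MLP + gate
    return feature_maps * excite[:, None, None]           # rescale each channel


x = rng.standard_normal((16, 8, 8))      # 16 channels of 8x8 features
w1 = rng.standard_normal((4, 16)) * 0.1  # reduction ratio 4
w2 = rng.standard_normal((16, 4)) * 0.1
out = channel_attention(x, w1, w2)
```

Because the gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize informative feature maps at negligible parameter cost, which is why this restructuring of the processing unit falls under the attention category.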
