Data protection is a critical concern in the heterogeneous IoT era. This article presents a hardware-software co-simulation of AES-128 encryption and decryption for IoT edge devices using the Xilinx System Generator (XSG). The AES-128 algorithm is implemented in VHDL in ECB and CTR modes using both loop-unrolled and FSM-based architectures. The AES-CTR, FSM-based architecture is found to outperform the loop-unrolled architecture, with lower power consumption and area. The hardware-software co-simulation is carried out on the ZedBoard and the Kintex UltraScale KCU105 Evaluation Platform using Xilinx Vivado 2016.2 and MATLAB 2015b. Hardware emulation is performed successfully on greyscale images. As a practical case study of the proposed framework, it is applied to biomedical images (CT scan images). Security analysis, covering histogram, correlation, and information entropy analysis as well as keyspace analysis via exhaustive search and key sensitivity tests, confirms that images are encrypted and decrypted successfully.
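Since the paper's implementation is in VHDL through XSG, the following is only a minimal software reference sketch of AES-128 CTR image encryption and the entropy check mentioned above, assuming PyCryptodome and NumPy are available; the key, nonce, and image are placeholders, not values from the paper.

```python
# Software reference sketch of AES-128 CTR image encryption plus an entropy check.
# Assumes PyCryptodome and NumPy; key/nonce/image below are hypothetical placeholders.
import numpy as np
from Crypto.Cipher import AES

def encrypt_image_ctr(img: np.ndarray, key: bytes, nonce: bytes) -> np.ndarray:
    """Encrypt a greyscale uint8 image with AES-128 in CTR mode."""
    cipher = AES.new(key, AES.MODE_CTR, nonce=nonce)
    enc = cipher.encrypt(img.tobytes())
    return np.frombuffer(enc, dtype=np.uint8).reshape(img.shape)

def entropy(img: np.ndarray) -> float:
    """Shannon entropy of pixel values; close to 8 bits/pixel indicates a good cipher image."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

key, nonce = bytes(16), bytes(8)                       # placeholder 128-bit key, 64-bit nonce
plain = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
cipher_img = encrypt_image_ctr(plain, key, nonce)
print(entropy(cipher_img))                             # should approach 8.0
```

In CTR mode the same keystream generator is used for both encryption and decryption, which is one reason the mode maps naturally onto a streaming hardware pipeline.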
Time series (TS) are present in many fields of knowledge, research, and engineering. The processing and analysis of TS are essential to extract knowledge from the data and to tackle tasks such as forecasting or predictive maintenance. Modeling TS is a challenging task that requires high statistical expertise as well as thorough knowledge of Data Mining (DM) and Machine Learning (ML) methods. Working with TS is not limited to the linear application of several techniques; it is an open workflow of methods and tests. These workflows, developed mainly in programming languages, are complicated to execute and run effectively on different systems, including Cloud Computing (CC) environments. The adoption of CC can facilitate the integration and portability of services, enabling solutions oriented towards the industrialization of Internet Technologies (IT) services. Defining and describing workflow services for TS opens up a new set of possibilities for reducing the complexity of deploying this type of problem in CC environments. In this sense, we have designed a proposal based on semantic modeling (a vocabulary) that provides a full description of workflows for Time Series modeling as a CC service. Our proposal covers a broad spectrum of the most widely used operations, accommodating any workflow applied to classification, regression, or clustering problems for Time Series, and includes evaluation measures, information, tests, and machine learning algorithms, among others.
Sparsity is an intrinsic property of neural networks (NNs). Many software researchers have attempted to increase sparsity through pruning, to reduce weight storage and computation workload, while hardware architects have worked on skipping redundant computations for higher energy efficiency; however, this skipping always incurs overhead, so many architectures gain only a minor profit. The systolic array is therefore a promising candidate, thanks to its low fan-out and high throughput. However, sparsity is irregular, making it tricky to fit into the rigid systolic tempo. Thus, this paper proposes a systolic-based architecture, called Sense, that processes both sparse input feature maps (IFMs) and sparse weights, achieving a large performance improvement with relatively small resource and power consumption. We also apply channel rearrangement to gather IFMs with similar sparsity and co-design an adaptive weight training method that keeps the sparsity ratio (zero-element percentage) of each kernel at 1/2, with little accuracy loss. This treatment effectively reduces the irregularity of sparsity and fits better with the systolic dataflow. Additionally, a dataflow called Partition Reuse is mapped onto our architecture, enhancing data reuse, reducing DRAM accesses by 1.9x-2.6x compared with Eyeriss, and further reducing system energy consumption. The whole design is implemented on a Zynq ZCU102 and performs at a peak throughput of 409.6 GOP/s with a power consumption of 11.2 W; compared with previous FPGA-based sparse NN accelerators, Sense uses 1/5 fewer LUTs and 3/4 fewer BRAMs, reaches 2.1x peak energy efficiency, and achieves a 1.15x-1.49x speedup.
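As a rough illustration of the per-kernel 1/2 sparsity-ratio constraint described above, the NumPy sketch below zeroes the smallest-magnitude half of each output kernel's weights; the paper's adaptive weight training method is more involved, so this only shows the constraint itself, not the training procedure.

```python
# Minimal sketch of enforcing a 1/2 sparsity ratio per output kernel via magnitude pruning.
# Shapes and values are illustrative only.
import numpy as np

def prune_to_half(weights: np.ndarray) -> np.ndarray:
    """weights: (out_ch, in_ch, kH, kW). Zero the smallest 50% of weights in each output kernel."""
    pruned = weights.copy()
    out_ch = weights.shape[0]
    flat = pruned.reshape(out_ch, -1)                 # view: one row per output kernel
    k = flat.shape[1] // 2                            # number of weights to zero per kernel
    smallest = np.argsort(np.abs(flat), axis=1)[:, :k]
    np.put_along_axis(flat, smallest, 0.0, axis=1)
    return pruned

w = np.random.randn(64, 32, 3, 3).astype(np.float32)
w_sparse = prune_to_half(w)
print((w_sparse == 0).mean())                         # ~0.5 zero-element percentage
```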
The concept of traditional farming is changing rapidly with the introduction of smart technologies like the Internet of Things (IoT). Under the umbrella of smart agriculture, precision agriculture is gaining popularity by enabling Decision Support System (DSS)-based farm management that uses widespread IoT sensors and wireless connectivity for automated detection and optimization of resources. The success of such a system directly affects crop productivity, and its failure would have severe consequences. As with many other cyber-physical systems, one of the growing challenges is to ensure the system's security, privacy, and trust. But which vulnerabilities, threats, and security issues should we consider when deploying precision agriculture? This paper conducts a holistic, component-level threat modeling of precision agriculture's standard infrastructure using the popular threat-modeling tool STRIDE to identify common security issues. Our modeling identifies fifty-eight potential security threats to consider. We present them systematically and suggest general mitigations to support cybersecurity in precision agriculture.
Decentralized systems have been widely developed and applied to address security and privacy issues in centralized systems, especially since the advancement of distributed ledger technology. However, it is challenging to ensure that they function correctly with respect to their designs and to minimize the technical risk before delivery. Although formal methods have made significant progress over the past decades, a feasible formal-methods-based solution from a development-process perspective has not been well developed. In this paper, we formulate an iterative and incremental development process, named formalism-driven development (FDD), for developing provably correct decentralized systems under the guidance of formal methods. We also present a framework, named Seniz, that puts FDD into practice with a new modeling language and scaffolds. Furthermore, we conduct case studies to demonstrate the effectiveness of FDD in practice with the support of Seniz.
With the global Internet of Things (IoT) market size predicted to grow to over 1 trillion dollars in the next 5 years, many large corporations are scrambling to establish their product line as the de facto device suite for consumers. This has led each corporation to develop its devices in a siloed environment, with unique protocols and runtime frameworks that explicitly exclude the ability to work with competitors' devices. These development silos have created problems of programming complexity for application developers as well as concurrency and scalability limitations for applications that involve a network of IoT devices. The Constellation project is a distributed IoT runtime system that attempts to address these challenges by creating an operating-system layer that decouples applications from devices. This layer provides mechanisms designed to allow applications to interface with an underlying substrate of IoT devices while abstracting away the complexities of application concurrency, device interoperability, and system scalability. This paper provides an overview of the Constellation system and details four new project expansions to improve system scalability.
FPGAs are now used in public clouds to accelerate a wide range of applications, including many that operate on sensitive data such as financial and medical records. We present ShEF, a trusted execution environment (TEE) for cloud-based reconfigurable accelerators. ShEF is independent of CPU-based TEEs and allows secure execution under a threat model where the adversary can control all software running on the CPU connected to the FPGA, has physical access to the FPGA, and can compromise the cloud provider's FPGA interface logic. ShEF provides a secure boot and remote attestation process that relies solely on existing FPGA mechanisms for the root of trust. It also includes a Shield component that provides secure access to data while the accelerator is in use. The Shield is highly customizable and extensible, allowing users to craft a bespoke security solution that fits their accelerator's memory access patterns, bandwidth, and security requirements with minimal performance and area overheads. We describe a prototype implementation of ShEF for existing cloud FPGAs, map ShEF to a performant and secure storage application, and measure the performance benefits of customizable security using five additional accelerators.
Utilizing Visualization-oriented Natural Language Interfaces (V-NLI) as a complementary input modality to direct manipulation for visual analytics can provide an engaging user experience. It enables users to focus on their tasks rather than worrying about how to operate the interface of visualization tools. In the past two decades, leveraging advanced natural language processing technologies, numerous V-NLI systems have been developed in both academic research and commercial software, especially in recent years. In this article, we conduct a comprehensive review of existing V-NLIs. To classify each paper, we develop categorical dimensions based on the classic information visualization pipeline, extended with a V-NLI layer. The following seven stages are used: query understanding, data transformation, visual mapping, view transformation, human interaction, context management, and presentation. Finally, we also shed light on several promising directions for future work in the community.
Stream processing has been an active research field for more than 20 years, but it is now in its prime thanks to recent successful efforts by the research community and numerous worldwide open-source communities. This survey provides a comprehensive overview of fundamental aspects of stream processing systems and their evolution in the functional areas of out-of-order data management, state management, fault tolerance, high availability, load management, elasticity, and reconfiguration. We review noteworthy past research findings, outline the similarities and differences between early ('00-'10) and modern ('11-'18) streaming systems, and discuss recent trends and open problems.
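As a small, engine-agnostic illustration of one of the surveyed functional areas, out-of-order data management, the sketch below buffers events and releases them once a watermark (derived here from a hypothetical bounded-lateness assumption) guarantees that no earlier event can still arrive; it is not tied to any specific system covered by the survey.

```python
# Illustrative watermark-based out-of-order buffering under a bounded-lateness assumption.
import heapq

class WatermarkBuffer:
    """Buffers out-of-order events and emits them in order once covered by the watermark."""
    def __init__(self, max_lateness: int):
        self.max_lateness = max_lateness   # assumed bound on out-of-orderness
        self.heap = []                     # min-heap ordered by event timestamp
        self.max_seen = 0
        self._seq = 0                      # tie-breaker for equal timestamps

    def on_event(self, timestamp: int, payload):
        heapq.heappush(self.heap, (timestamp, self._seq, payload))
        self._seq += 1
        self.max_seen = max(self.max_seen, timestamp)
        watermark = self.max_seen - self.max_lateness
        ready = []
        while self.heap and self.heap[0][0] <= watermark:
            ts, _, p = heapq.heappop(self.heap)
            ready.append((ts, p))
        return ready                       # events now safe to process, in timestamp order

buf = WatermarkBuffer(max_lateness=5)
for ts in [1, 4, 2, 10, 7, 20]:
    print(ts, "->", buf.on_event(ts, "event"))
```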
Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open-source deep learning framework to the TSC community, in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.
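For readers unfamiliar with the kinds of architectures benchmarked in such studies, the Keras sketch below builds a fully convolutional network (FCN)-style classifier for univariate series; the layer widths and training settings are illustrative and not necessarily the exact configurations evaluated in the study.

```python
# Illustrative FCN-style classifier for univariate time series classification.
# Assumes TensorFlow/Keras; hyperparameters are placeholders, not the study's settings.
import tensorflow as tf
from tensorflow.keras import layers

def build_fcn(series_length: int, n_classes: int) -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(series_length, 1))        # one channel: univariate series
    x = inputs
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:   # three conv blocks
        x = layers.Conv1D(filters, kernel, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    x = layers.GlobalAveragePooling1D()(x)                   # replaces flatten + dense stack
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fcn(series_length=500, n_classes=10)
model.summary()
```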
It is becoming increasingly easy to automatically replace the face of one person in a video with the face of another by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help develop such methods, in this paper we present the first publicly available set of Deepfake videos generated from videos of the VidTIMIT database. We used open-source GAN-based software to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulting videos. To demonstrate this impact, we generated videos with low and high visual quality (320 videos each) using differently tuned parameter sets. We show that state-of-the-art face recognition systems based on VGG and Facenet neural networks are vulnerable to Deepfake videos, with 85.62% and 95.00% false acceptance rates respectively, which means methods for detecting Deepfake videos are necessary. Considering several baseline approaches, we found that an audio-visual approach based on lip-sync inconsistency detection was not able to distinguish Deepfake videos. The best performing method, which is based on visual quality metrics and is often used in the presentation attack detection domain, resulted in an 8.97% equal error rate on high-quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and that further development of face swapping technology will make them even more so.
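As background for the error rates quoted above, the sketch below shows one common way to compute an equal error rate (EER) from genuine and impostor (here, Deepfake) score distributions; the scores are synthetic and purely illustrative, standing in for a detector's or face recognizer's similarity outputs.

```python
# Computing an equal error rate from score distributions; scores are synthetic placeholders.
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Return the EER: the point where false acceptance and false rejection rates cross."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    eer, best_gap = 1.0, np.inf
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors accepted at threshold t
        frr = np.mean(genuine < t)     # genuine samples rejected at threshold t
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return float(eer)

rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 1000)   # synthetic scores for real videos
impostor = rng.normal(0.5, 0.1, 1000)  # synthetic scores for Deepfake videos
print(equal_error_rate(genuine, impostor))
```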