This paper highlights the potential of modern blockchain technology in the traditional banking sector to reduce fraud and enable high-security transactions on a permanent blockchain ledger. By reviewing the channels through which traditional banking systems could integrate blockchain, we show how a strong anti-fraud stance can be taken against banking servers that currently permit fraudulent transactions daily. A blockchain-based ledger has a substantial impact on the security of a banking organization. Blockchain-based currency tokens, commonly referred to as cryptocurrencies, are not regulated by governments, are highly volatile, and can be used anonymously; moreover, funds invested in a cryptocurrency market carry no protection. However, integrating a blockchain ledger into a traditional banking organization would strengthen security, providing greater stability and confidence to its customers, while at the same time making blockchain itself a more reliable technology by virtue of being trusted by large financial organizations.
Blockchain network deployment and evaluation have become prevalent due to the demand for private blockchains by enterprises, governments, and edge computing systems. Whilst a blockchain network's deployment and evaluation are driven by its architecture, practitioners still need to learn and carry out many repetitive and error-prone activities to transform architecture into an operational blockchain network and evaluate it. Greater efficiency could be gained if practitioners focus solely on the architecture design, a valuable and hard-to-automate activity, and leave the implementation steps to an automation framework. This paper proposes an automation framework called NVAL (Network Deployment and Evaluation Framework), which can deploy and evaluate blockchain networks based on their architecture specifications. The key idea of NVAL is reusing and combining the existing automation scripts and utilities of various blockchain types to deploy and evaluate incoming blockchain network architectures. We propose a novel meta-model to capture blockchain network architectures as computer-readable artefacts and employ a state-space search approach to plan and conduct their deployment and evaluation. An evaluative case study shows that NVAL successfully combines seven deployment and evaluation procedures to deploy 65 networks with 12 different architectures and generate 295 evaluation datasets whilst incurring a negligible processing time overhead.
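The state-space search idea can be illustrated with a small sketch (not NVAL's actual implementation): automation procedures are modelled as actions with preconditions and effects over a set of satisfied facts, and a breadth-first search finds an ordering that takes an empty environment to the goal. All procedure and fact names below are hypothetical.

```python
from collections import deque

def plan(start, goal, procedures):
    """Breadth-first search over sets of satisfied facts.

    `procedures` maps a name to (preconditions, effects); a plan is a
    sequence of procedure names turning `start` into a superset of `goal`.
    """
    start, goal = frozenset(start), frozenset(goal)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, (pre, eff) in procedures.items():
            nxt = frozenset(state | eff)
            if pre <= state and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + [name]))
    return None  # no combination of procedures reaches the goal

# Hypothetical deployment/evaluation procedures, for illustration only.
procedures = {
    "provision_hosts": (frozenset(), {"hosts"}),
    "install_fabric":  ({"hosts"}, {"network"}),
    "run_benchmark":   ({"network"}, {"dataset"}),
}
print(plan(set(), {"dataset"}, procedures))
```

Because the search is breadth-first, the first plan found is also a shortest one, which keeps the number of repetitive steps to a minimum.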
We propose application-layer coding schemes to recover lost data in delay-sensitive uplink (sensor-to-gateway) communications in the Internet of Things. Built on an approach that combines retransmissions and forward erasure correction, the proposed schemes' salient features include low computational complexity and the ability to exploit sporadic receiver feedback for efficient data recovery. Reduced complexity is achieved by keeping the number of coded transmissions as low as possible and by devising a mechanism to compute the optimal degree of a coded packet in O(1). Our major contributions are: (a) An enhancement to an existing scheme called windowed coding, whose complexity is greatly reduced and data recovery performance is improved by our proposed approach. (b) A technique that combines elements of windowed coding with a new feedback structure to further reduce the coding complexity and improve data recovery. (c) A coded forwarding scheme in which a relay node provides further resilience against packet loss by overhearing source-to-destination communications and making forwarding decisions based on overheard information.
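The erasure-correction idea underlying such schemes can be sketched with the simplest possible code (an illustration of the general principle, not the paper's windowed-coding scheme): a repair packet formed as the XOR of a window of source packets can recover any single loss within that window.

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_repair(window):
    """Repair packet: bitwise XOR of every source packet in the window."""
    return reduce(xor_bytes, window)

def recover(window_with_loss, repair):
    """Recover a single lost packet (marked None) using the repair packet."""
    received = [p for p in window_with_loss if p is not None]
    return reduce(xor_bytes, received, repair)

src = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]
repair = make_repair(src)
print(recover([src[0], None, src[2]], repair))  # reconstructs src[1]
```

Higher-degree coded packets generalize this by XOR-ing different subsets of the window, trading a little redundancy for resilience to more than one loss.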
Distributed data analytics platforms (e.g., Apache Spark, Hadoop) enable cost-effective storage and processing by distributing data and computation across multiple nodes. Since these frameworks were designed primarily for performance and usability, most assume a non-malicious setting and therefore allow users to execute arbitrary code to analyze the data. To make matters worse, they neither support fine-grained access control natively nor offer a plugin mechanism to enable it, which makes them risky to use in multi-tier organizational settings. There have been attempts to build "add-on" solutions that retrofit fine-grained access control onto distributed data analytics platforms. In this paper, we show that an attacker who knows the nature of such a solution can evade its access control by maliciously using the platform-provided APIs; specifically, we craft several attack vectors that do so. We then systematically analyze the threats and the potentially risky APIs and propose a two-layered (i.e., proactive and reactive) defense against these attacks. The proactive security layer uses state-of-the-art program analysis to detect potentially malicious user code. The reactive security layer consists of binary integrity checking, instrumentation-based runtime checks, and sandboxed execution. Finally, using this solution, we provide a secure implementation of SecureDL, a new framework-agnostic, fine-grained, attribute-based access control framework for Apache Spark. To the best of our knowledge, this is the first work to provide secure fine-grained attribute-based access control for distributed data analytics platforms that allow arbitrary code execution. Our performance evaluation shows that the overhead of the added security is low.
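As a rough illustration of the proactive layer's idea (not the paper's actual analysis, which targets platform APIs rather than Python builtins), a static scan can walk the AST of submitted user code and flag calls on a deny-list before the job is admitted. The deny-list here is hypothetical.

```python
import ast

RISKY_CALLS = {"eval", "exec", "__import__"}  # illustrative deny-list

def flag_risky_calls(source):
    """Return the names of potentially dangerous calls in user code."""
    found = []
    for node in ast.walk(ast.parse(source)):
        # Direct calls like eval(...) appear as Call nodes with a Name func.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                found.append(node.func.id)
    return found

user_job = "result = eval(user_input)\nprint(result)"
print(flag_risky_calls(user_job))  # ['eval']
```

A real deployment would pair such static checks with the reactive layer, since purely syntactic scans can be evaded (e.g., via aliasing), which is precisely the cat-and-mouse game the paper addresses.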
Learning controllers from data for stabilizing dynamical systems typically follows a two-step process: first identify a model, then construct a controller based on the identified model. However, learning a model means identifying a generic description of the system dynamics, which can require large amounts of data and extracts information that is unnecessary for the specific task of stabilization. The contribution of this work is to show that if a linear dynamical system has dimension (McMillan degree) $n$, then there always exist $n$ states from which a stabilizing feedback controller can be constructed, independent of the dimension of the representation of the observed states and of the number of inputs. Building on previous work, this finding implies that any linear dynamical system can be stabilized from fewer observed states than the minimal number required for learning a model of the dynamics. The theoretical findings are demonstrated with numerical experiments that stabilize the flow behind a cylinder from less data than is necessary for learning a model.
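The role of the stabilizing feedback gain can be seen in a toy example (illustrative only, unrelated to the paper's data-driven construction): an unstable two-dimensional system $\dot{x} = Ax + Bu$ is stabilized by a static gain $u = -Kx$ that moves the closed-loop eigenvalues into the left half-plane. The matrices and gain below are hand-picked.

```python
import cmath

def eig2(m):
    """Eigenvalues of a 2x2 matrix via its characteristic polynomial."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

A = [[0.0, 1.0], [-2.0, 3.0]]   # open loop: eigenvalues 1 and 2 (unstable)
B = [0.0, 1.0]
K = [0.0, 6.0]                   # hand-placed gain: closed-loop poles -1, -2

# Closed loop A - B K = [[0, 1], [-2, -3]]
closed = [[A[i][j] - B[i] * K[j] for j in range(2)] for i in range(2)]
print([ev.real for ev in eig2(closed)])  # [-1.0, -2.0]: stable
```

The paper's point is that a gain with this stabilizing effect can be found from only $n$ observed states, without first fitting a full model of $A$ and $B$.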
Since 2016, sharding has emerged as a promising solution to the scalability issue in legacy blockchain systems. Despite its potential to greatly boost blockchain throughput, sharding comes with its own security issues. To simplify the decision of which shard a transaction should be placed in, existing sharding protocols use hash-based transaction sharding, in which the hash value of a transaction determines its output shard. Unfortunately, we show that this mechanism opens up a loophole that can be exploited to conduct a single-shard flooding attack, a type of Denial-of-Service (DoS) attack that overwhelms a single shard and thereby degrades the performance of the system as a whole. To counter the single-shard flooding attack, we propose a countermeasure that eliminates the loophole by abandoning hash-based transaction sharding. The countermeasure leverages the Trusted Execution Environment (TEE) to let the blockchain's validators securely execute a transaction sharding algorithm with negligible overhead. We provide a formal specification of the countermeasure and analyze its security properties in the Universal Composability (UC) framework. Finally, a proof-of-concept is developed to demonstrate the feasibility and practicality of our solution.
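The loophole is easy to reproduce in a sketch: because the output shard is a public, deterministic function of the transaction hash, an attacker can grind a nonce field until every transaction maps to one victim shard. The shard count, hash choice, and transaction layout below are illustrative, not those of any particular sharding protocol.

```python
import hashlib

NUM_SHARDS = 16

def output_shard(tx: bytes) -> int:
    """Hash-based transaction sharding: the tx hash alone picks the shard."""
    return int.from_bytes(hashlib.sha256(tx).digest(), "big") % NUM_SHARDS

def grind_tx(payload: bytes, target_shard: int) -> bytes:
    """Attacker side: vary a nonce field until the tx lands on the target."""
    nonce = 0
    while output_shard(payload + nonce.to_bytes(8, "big")) != target_shard:
        nonce += 1
    return payload + nonce.to_bytes(8, "big")

# ~NUM_SHARDS hash evaluations per transaction suffice to aim the flood.
flood = [grind_tx(b"tx-%d" % i, 0) for i in range(100)]
print(all(output_shard(tx) == 0 for tx in flood))  # True
```

Replacing this public mapping with a sharding algorithm executed inside a TEE, as proposed in the paper, removes the attacker's ability to predict and steer the output shard.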
The capacity sharing problem in Radio Access Network (RAN) slicing deals with distributing the capacity available in each RAN node among the RAN slices so as to satisfy their traffic demands and use the radio resources efficiently. While several capacity sharing algorithms have been proposed in the literature, their practical implementation remains a gap. In this paper, we discuss the implementation of a Reinforcement Learning-based capacity sharing algorithm over the O-RAN architecture, providing insights into the operation of the involved interfaces and the containerization of the solution. Moreover, we describe the testbed implemented to validate the solution and present performance and validation results.
While vaccinations continue to be rolled out to curb the ongoing COVID-19 pandemic, proof of vaccination is becoming a requirement for individuals to rejoin many social activities and to travel. Blockchain technology has been widely proposed for managing vaccination records and their verification within politically bound regions. However, the high contagiousness of COVID-19 calls for a global vaccination campaign, so a blockchain for vaccination management must scale to support such a campaign and be adaptable to the requirements of different countries. While many proposed blockchain frameworks balance the access and immutability of vaccination records, their scalability, a critical feature, has not yet been addressed. In this paper, we propose a scalable and cooperative Global Immunization Information Blockchain-based System (GEOS) that leverages the global interoperability of immunization information systems. We model GEOS and describe its requirements, features, and operation. We analyze the communications and delays incurred by the national and international consensus processes and by blockchain interoperability in GEOS; such communications are pivotal to enabling global-scale interoperability and access to electronic vaccination records for verification. As an example of its scalability, we show that GEOS readily keeps up with the global vaccination rates of COVID-19.
The 5G radio access network (RAN), with its network slicing methodology, plays a key role in the development of next-generation network systems. RAN slicing focuses on splitting the substrate's resources into a set of self-contained, programmable RAN slices. Enabled by network function virtualization (NFV), a RAN slice is constituted by various virtual network functions (VNFs) and virtual links that are embedded as instances on substrate nodes. In this work, we focus on two fundamental tasks: i) establishing the theoretical foundation for constructing a VNF mapping plan for RAN slice recovery optimization, and ii) developing the algorithms needed to map/embed VNFs efficiently. In particular, we propose four efficient algorithms, the Resource-based Algorithm (RBA), Connectivity-based Algorithm (CBA), Group-based Algorithm (GBA), and Group-Connectivity-based Algorithm (GCBA), to solve the resource allocation and VNF mapping problem. Extensive experiments validate the robustness of RAN slicing under the proposed algorithms.
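A minimal greedy sketch in the spirit of a resource-based mapping (not the paper's RBA, whose details are not given here): each VNF is placed on the substrate node with the most spare capacity, most demanding VNFs first. Names, demands, and capacities are hypothetical.

```python
def greedy_map(vnfs, nodes):
    """Map each VNF to the substrate node with the most spare capacity.

    `vnfs`  : {vnf_name: cpu_demand}
    `nodes` : {node_name: cpu_capacity}; capacity is consumed as VNFs land.
    Returns {vnf_name: node_name}, or None if some VNF cannot be placed.
    """
    mapping = {}
    # Place the most demanding VNFs first to reduce fragmentation.
    for vnf, demand in sorted(vnfs.items(), key=lambda kv: -kv[1]):
        node = max(nodes, key=nodes.get)  # node with most spare capacity
        if nodes[node] < demand:
            return None
        nodes[node] -= demand
        mapping[vnf] = node
    return mapping

print(greedy_map({"vDU": 4, "vCU": 2, "vUPF": 3}, {"n1": 5, "n2": 6}))
```

Connectivity- and group-based variants would additionally score candidate nodes by their links to already-placed VNFs, which is where the remaining three algorithms differ.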
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing, and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their application on resource-constrained devices such as mobile phones and Internet of Things (IoT) devices. Methods and techniques that lift this efficiency bottleneck while preserving the high accuracy of DNNs are therefore in great demand for enabling numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems, and applications. We start by introducing popular model compression methods, including pruning, factorization, and quantization, as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific acceleration techniques for point cloud, video, and natural language processing that exploit their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce efficient deep learning system design from both the software and hardware perspectives.
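Two of the compression methods mentioned above can be sketched in a few lines (toy illustrations, not the surveyed production techniques): magnitude pruning zeroes the smallest-magnitude weights, and uniform symmetric quantization snaps each weight to one of a small number of evenly spaced levels.

```python
def prune(weights, sparsity):
    """Magnitude pruning: zero the smallest-magnitude fraction of weights."""
    k = int(len(weights) * sparsity)
    # Indices of the weights we keep: everything after the k smallest.
    keep = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[k:])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

def quantize(weights, bits):
    """Uniform symmetric quantization to 2**bits - 1 levels (dequantized)."""
    scale = max(abs(w) for w in weights) / (2 ** (bits - 1) - 1)
    return [round(w / scale) * scale for w in weights]

w = [0.9, -0.05, 0.4, -0.7, 0.02, 0.31]
print(prune(w, 0.5))   # half of the weights become exactly zero
print(quantize(w, 4))  # each weight snapped to one of 15 levels
```

In practice both methods are applied per layer or per channel and are followed by fine-tuning to recover accuracy, and AutoML variants search over the per-layer sparsity and bit-width instead of fixing them by hand.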
In the past decade, we have witnessed the rise of deep learning to dominate the field of artificial intelligence. Advances in artificial neural networks, together with corresponding advances in hardware accelerators with large memory capacity and the availability of large datasets, have enabled researchers and practitioners alike to train and deploy sophisticated neural network models that achieve state-of-the-art performance on tasks across several fields, spanning computer vision, natural language processing, and reinforcement learning. However, as these neural networks become bigger, more complex, and more widely used, fundamental problems with current deep learning models become more apparent. State-of-the-art deep learning models are known to suffer from issues that range from poor robustness and an inability to adapt to novel task settings to rigid and inflexible configuration assumptions. Ideas from collective intelligence, in particular concepts from complex systems such as self-organization, emergent behavior, swarm optimization, and cellular systems, tend to produce solutions that are robust and adaptable and that make fewer rigid assumptions about the environment's configuration. It is therefore natural to see these ideas incorporated into newer deep learning methods. In this review, we provide a historical context of neural network research's involvement with complex systems and highlight several active areas of modern deep learning research that incorporate principles of collective intelligence to advance current capabilities. To facilitate a bi-directional flow of ideas, we also discuss work that utilizes modern deep learning models to help advance complex systems research. We hope this review can serve as a bridge between the complex systems and deep learning communities, facilitating the cross-pollination of ideas and fostering new collaborations across disciplines.