Aiming to support a cross-sector and cross-border eGovernance paradigm for sharing common public services, this paper introduces an AI-enhanced solution that enables beneficiaries to participate in a decentralized network for effective big data exchange and service delivery that promotes the once-only principle and is by design digital, efficient, cost-effective, interoperable and secure. The solution comprises (i) a reliable and efficient decentralized mechanism for data sharing, capable of addressing the complexity of the processes and their high resource demands; (ii) an ecosystem for delivering mobile services tailored to the needs of stakeholders; (iii) a single sign-on Wallet mechanism to manage transactions with multiple services; and (iv) an intercommunication layer responsible for the secure exchange of information between existing eGovernment systems and newly developed ones. An indicative application scenario showcases the potential of our approach.
Today's large-scale data management systems need to address the confidentiality and scalability requirements of distributed applications spanning a set of collaborating enterprises. In this paper, we present Qanaat, a scalable multi-enterprise permissioned blockchain system that guarantees confidentiality. Qanaat consists of multiple enterprises, where each enterprise partitions its data into multiple shards and replicates each data shard on a cluster of nodes to provide fault tolerance. Qanaat presents data collections that preserve the confidentiality of transactions and a transaction-ordering schema that enforces only the constraints necessary and sufficient to guarantee data consistency. Furthermore, Qanaat supports both data consistency and confidentiality across collaboration workflows, where an enterprise can participate in different collaboration workflows with different sets of enterprises. Finally, Qanaat presents a suite of centralized and decentralized consensus protocols to support different types of intra-shard and cross-shard transactions within or across enterprises. The experimental results reveal the efficiency of Qanaat in processing multi-shard and multi-enterprise transactions.
In this work, we analyze the privacy guarantees of the Zigbee protocol, an energy-efficient wireless IoT protocol that is increasingly being deployed in smart home settings. Specifically, we devise two passive inference techniques to demonstrate how a passive eavesdropper, located outside the smart home, can reliably identify in-home devices or events from encrypted wireless Zigbee traffic by 1) inferring a single application layer (APL) command in the event's traffic burst, and 2) exploiting the device's periodic reporting pattern and interval. This enables an attacker to infer users' habits or determine whether the smart home is vulnerable to unauthorized entry. We evaluated our techniques on 19 unique Zigbee devices across several categories and 5 popular smart hubs in three different scenarios: i) a controlled shielded setting, ii) a living smart-home IoT lab, and iii) third-party Zigbee captures. Our results indicate over 85% accuracy in determining events and devices using the command inference approach, without the need for a priori device signatures, and 99.8% accuracy in determining known devices using the periodic reporting approach. In addition, we identified APL commands in a third-party capture file with 90.6% accuracy. Through this work, we highlight the trade-off between designing a low-power, low-cost wireless network and achieving privacy guarantees.
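The second technique above, fingerprinting a device by its periodic reporting interval, can be sketched as follows. This is our illustration, not the paper's implementation; the device names, reporting periods, and tolerance are hypothetical.

```python
# Match the median inter-packet gap of an encrypted traffic trace against
# a table of known per-device reporting periods (all values hypothetical).
KNOWN_PERIODS = {  # seconds between periodic reports, per device type
    "door_sensor": 60.0,
    "temp_sensor": 300.0,
    "smart_plug": 10.0,
}

def infer_device(timestamps, tolerance=0.05):
    """Identify a device from packet arrival times via its reporting period."""
    gaps = sorted(b - a for a, b in zip(timestamps, timestamps[1:]))
    median_gap = gaps[len(gaps) // 2]
    for device, period in KNOWN_PERIODS.items():
        if abs(median_gap - period) / period <= tolerance:
            return device
    return None  # no known period matches

# A burst of encrypted packets arriving every ~10 s matches the plug.
print(infer_device([0.0, 10.1, 20.0, 30.2, 40.1]))  # → smart_plug
```

Note that the match needs no plaintext at all: packet timing alone suffices, which is exactly the privacy leak the abstract highlights.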
The Internet of Things (IoT) is seen as a novel technical paradigm aimed at enabling connectivity between billions of interconnected devices around the world. IoT serves various domains, such as smart healthcare, traffic surveillance, smart homes, smart cities, and various industries. IoT's main functionality includes sensing the surrounding environment, collecting data from the surroundings, and transmitting those data to remote data centers or the cloud. This sharing of vast volumes of data between billions of IoT devices generates a large energy demand and increases energy wastage in the form of heat. Green IoT envisages reducing the energy consumption of IoT devices and keeping the environment safe and clean. Inspired by the goal of a sustainable next-generation IoT ecosystem and a healthy green planet, we first offer an overview of Green IoT (GIoT), and then present the challenges and future directions regarding GIoT.
The evolution of the Internet of Things (IoT) has increased the connectivity of personal devices, largely reflecting the habits and behavior of their owners. These environments demand access control mechanisms to protect them against intruders, such as Sybil attackers, who can compromise data privacy or disrupt network operation. The Social IoT paradigm enables access control systems to aggregate community context and sociability information from devices to enhance robustness and security. This work introduces the ELECTRON mechanism, which controls access in IoT networks based on social trust between devices in order to protect the network from Sybil attackers. ELECTRON groups IoT devices into communities by their social similarity and evaluates their social trust, strengthening the reliability between legitimate devices and their resilience against interaction with Sybil attackers. NS-3 simulations show ELECTRON's performance under Sybil attacks on several IoT communities: it detected more than 90% of attackers in a scenario with 150 nodes spread across office, school, gym, and park communities, and achieved around 90% detection in other scenarios with the same communities. Furthermore, it provided high accuracy, over 90-95%, and false positive rates close to zero.
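The community-grouping idea can be illustrated with a toy sketch (ours, not the authors' implementation; the device names, contact sets, similarity measure, and 0.2 threshold are all hypothetical): devices that share few social contacts with their community score low trust and are flagged as suspected Sybils.

```python
# Social trust as mean Jaccard similarity between a device's contact set
# and those of its community peers (illustrative only).
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def social_trust(device, community):
    """Mean contact-set similarity between a device and its community peers."""
    peers = [d for d in community if d is not device]
    return sum(jaccard(device["contacts"], p["contacts"]) for p in peers) / len(peers)

office = [
    {"id": "lamp",   "contacts": {"hub", "plug", "sensor"}},
    {"id": "plug",   "contacts": {"hub", "lamp", "sensor"}},
    {"id": "sensor", "contacts": {"hub", "lamp", "plug"}},
    {"id": "sybil",  "contacts": {"x1", "x2"}},  # only fabricated identities
]
suspects = [d["id"] for d in office if social_trust(d, office) < 0.2]
print(suspects)  # → ['sybil']
```

Legitimate devices accumulate overlapping contact sets through real interactions, which a Sybil identity cannot cheaply forge; this asymmetry is what social-trust detection exploits.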
We present a low-overhead mechanism for self-sovereign identification and communication of IoT agents in constrained networks. Our main contribution is to enable native use of Decentralized Identifiers (DIDs) and DID-based secure communication on constrained networks, whereas previous works either did not consider the issue or relied on proxy-based architectures. We propose a new extension to DIDs along with a more concise serialization method for DID metadata. Moreover, in order to reduce the security overhead of transmitted messages, we adopted a binary message envelope. We implemented these proposals within the context of Swarm Computing, an approach for decentralized IoT. Results showed that our proposal reduces the size of identity metadata by almost a factor of four and security overhead by up to a factor of five. We observed that both techniques are required to enable operation on constrained networks.
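The size gain from binary serialization can be sketched with a toy comparison. This is illustrative only: the record layout, field sizes, and `did:example` identifier below are our assumptions, not the paper's actual DID extension or envelope format.

```python
import json
import struct

# Hypothetical identity record: a 16-byte DID suffix and a 32-byte key.
did_suffix = bytes.fromhex("a1" * 16)
pubkey = bytes(32)  # placeholder public key

# Text serialization: JSON with hex-encoded fields, as a verbose baseline.
as_json = json.dumps({
    "id": "did:example:" + did_suffix.hex(),
    "publicKeyHex": pubkey.hex(),
}).encode()

# Binary serialization: 1-byte method tag, raw 16-byte id, raw 32-byte key.
as_binary = struct.pack("!B16s32s", 1, did_suffix, pubkey)

print(len(as_json), len(as_binary))  # the binary form is ~3x smaller here
```

Hex encoding alone doubles every byte, and JSON field names and punctuation add more, which is why compact binary envelopes matter on constrained links with small MTUs.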
In this article we present hygiea, an end-to-end blockchain-based solution for the Covid-19 pandemic. hygiea has two main objectives. The first is to allow governments to issue citizens Covid-19-related certificates that can be verified by designated verifiers to ensure safer workplaces. The second is to provide experts and decision makers with the necessary tools to better understand the impact of the pandemic through statistical models built on top of the data collected by the platform. This work covers all steps of the certificate issuance, verification, and revocation cycles, with well-defined roles for all stakeholders. We also propose a governance model, implemented via smart contracts, that ensures security, transparency, and auditability. Finally, we propose techniques for deriving statistical models that can be used by decision makers.
In the given technology-driven era, smart cities are the next frontier of technology, aiming to improve the quality of people's lives. Many research works focus on future smart cities with a holistic approach toward smart city development. In this paper, we introduce such future smart cities that leverage blockchain technology in areas like data security, energy and waste management, governance, transport, supply chain (including emergency events), and environmental monitoring. Blockchain, being a decentralized immutable ledger, has the potential to promote the development of smart cities by guaranteeing transparency, data security, interoperability, and privacy. In particular, using blockchain in emergency events will provide interoperability between the many parties involved in the response, increase the timeliness of services, and establish transparency. Under current fee-based or first-come-first-served processing, however, emergency events may be delayed due to competition, threatening people's lives. There is therefore a need for transaction prioritization based on the priority of information and for quick creation of blocks (a variable-interval block creation mechanism). Also, since leaders enforce transaction prioritization while generating blocks, leader rotation and a proper election procedure are important for the prioritization process to take place honestly and efficiently. In our consensus protocol, we deploy a machine learning (ML) algorithm to achieve efficient leader election and design a novel dynamic block creation algorithm. In addition, to ensure honest assessment from the followers on the blocks generated by the leaders, a peer-prediction-based verification mechanism is proposed. Both security analysis and simulation experiments are carried out to demonstrate the robustness and accuracy of our proposed scheme.
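The prioritization idea can be sketched as a priority-queue block builder. This is our illustration, not the paper's protocol; the transaction names and the two-level priority scheme are hypothetical.

```python
import heapq

# Emergency transactions are ordered ahead of fee-paying ones regardless
# of arrival order; within a priority level, older transactions go first.
EMERGENCY, NORMAL = 0, 1  # lower value = higher priority

def create_block(mempool, block_size):
    """Select the highest-priority (then oldest) transactions for the next block."""
    heap = [(prio, arrival, tx) for arrival, (prio, tx) in enumerate(mempool)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(min(block_size, len(heap)))]

mempool = [(NORMAL, "fee-tx-1"), (EMERGENCY, "ambulance-dispatch"),
           (NORMAL, "fee-tx-2"), (EMERGENCY, "fire-alert")]
print(create_block(mempool, 3))
# → ['ambulance-dispatch', 'fire-alert', 'fee-tx-1']
```

Under plain fee ordering, both emergency transactions would have to outbid the fee payers; priority ordering removes that auction entirely, which is the failure mode the paper's protocol targets.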
The demand for artificial intelligence has grown significantly over the last decade, and this growth has been fueled by advances in machine learning techniques and the ability to leverage hardware acceleration. However, in order to increase the quality of predictions and render machine learning solutions feasible for more complex applications, a substantial amount of training data is required. Although small machine learning models can be trained with modest amounts of data, the input for training larger models such as neural networks grows exponentially with the number of parameters. Since the demand for processing training data has outpaced the increase in computation power of computing machinery, there is a need to distribute the machine learning workload across multiple machines, turning a centralized system into a distributed one. These distributed systems present new challenges, first and foremost the efficient parallelization of the training process and the creation of a coherent model. This article provides an extensive overview of the current state of the art in the field by outlining the challenges and opportunities of distributed machine learning over conventional (centralized) machine learning, discussing the techniques used for distributed machine learning, and providing an overview of the systems that are available.
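As a concrete illustration of "parallelizing training while keeping one coherent model", here is a toy data-parallel SGD step (our sketch, not from the article): each worker computes a gradient on its own data shard, and averaging the gradients, as an all-reduce would, yields a single shared update.

```python
# Toy data-parallel SGD: per-shard gradients are averaged into one update.
def sgd_step(params, shards, grad_fn, lr=0.1):
    grads = [grad_fn(params, shard) for shard in shards]  # one per worker
    avg = [sum(g[i] for g in grads) / len(grads) for i in range(len(params))]
    return [p - lr * g for p, g in zip(params, avg)]

# Fit y = w*x by least squares; gradient of (w*x - y)^2 with respect to w.
grad_fn = lambda params, data: [
    sum(2 * (params[0] * x - y) * x for x, y in data) / len(data)
]
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]  # two workers' data shards
w = [0.0]
for _ in range(50):
    w = sgd_step(w, shards, grad_fn)
print(round(w[0], 2))  # converges toward the true slope, 2.0
```

Real systems add the hard parts the survey covers: communication scheduling, stale or asynchronous updates, and fault tolerance; the averaging step itself stays conceptually this simple.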
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
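The orchestration described above is commonly instantiated as federated averaging (FedAvg): the server combines client updates weighted by local dataset size, without ever seeing the raw data. A toy, framework-free sketch of the server-side aggregation step (client weights and sizes below are made up):

```python
# FedAvg server step: weighted average of per-client parameter vectors.
def fed_avg(client_weights, client_sizes):
    """Average client models, weighting each by its local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients trained locally; client 0 holds 3x more data than client 1.
print(fed_avg([[1.0, 2.0], [5.0, 6.0]], [30, 10]))  # → [2.0, 3.0]
```

Weighting by dataset size keeps the aggregate equivalent to training on the pooled data in the IID case; the non-IID case is one of the open problems the paper surveys.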
Multi-Agent Systems (MAS) are an active area of research within Artificial Intelligence, with an increasingly important impact on industrial and other real-world applications. Within a MAS, autonomous agents interact to pursue personal interests and/or to achieve common objectives. Distributed Constraint Optimization Problems (DCOPs) have emerged as one of the prominent agent architectures to govern the agents' autonomous behavior, where both algorithms and communication models are driven by the structure of the specific problem. During the last decade, several extensions to the DCOP model have enabled it to support MAS in complex, real-time, and uncertain environments. This survey provides an overview of the DCOP model, giving a classification of its multiple extensions and addressing both resolution methods and applications that find a natural mapping within each class of DCOPs. The proposed classification suggests several future perspectives for DCOP extensions and identifies challenges in the design of efficient resolution algorithms, possibly through the adaptation of strategies from different areas.
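For readers new to the model, a minimal DCOP instance looks like this: agents each control a variable, and pairwise cost functions over those variables define a joint assignment to minimize. The agents, domains, and costs below are hypothetical, and we brute-force the optimum centrally; actual DCOP algorithms (e.g. DPOP, Max-Sum) solve it through message passing among the agents.

```python
from itertools import product

# Three agents, each controlling one binary variable.
domains = {"a1": [0, 1], "a2": [0, 1], "a3": [0, 1]}
constraints = [  # (scope, cost function over that scope)
    (("a1", "a2"), lambda x, y: 0 if x != y else 2),  # prefer a1 != a2
    (("a2", "a3"), lambda x, y: x + y),               # prefer both zero
]

def solve(domains, constraints):
    """Return the joint assignment minimizing total constraint cost."""
    names = list(domains)
    return min(
        (dict(zip(names, vals)) for vals in product(*domains.values())),
        key=lambda asg: sum(f(*(asg[v] for v in scope)) for scope, f in constraints),
    )

print(solve(domains, constraints))  # → {'a1': 1, 'a2': 0, 'a3': 0}
```

The exponential blow-up of this brute-force search over joint assignments is precisely why the distributed, structure-exploiting algorithms surveyed in the paper exist.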