Industrial Internet of Things (IIoT) opens up a challenging research area towards improving secure data sharing, which currently has several limitations. Primarily, the lack of built-in guarantees of honest behavior among participating entities, such as end-users or the cloud, may result in disputes when either behaves maliciously. Given such challenges, we propose a fair, accountable, and secure data sharing scheme, $\textit{FairShare}$, for IIoT. In this scheme, data collected from IoT devices are processed and stored in cloud servers, with intermediate fog nodes facilitating computation. Authorized clients can access this data for a fee to make strategic decisions for improving the operational services of the IIoT system. By leveraging blockchain, $\textit{FairShare}$ prevents fraudulent activities and thereby achieves fairness, such that each party receives its rightful outcome in terms of data or penalties/rewards, while simultaneously ensuring accountability of the services provided by the parties. Additionally, smart contracts are designed to act as a mediator during any dispute by enforcing payment settlement. Further, the security and privacy of data are ensured by suitably applying cryptographic techniques like proxy re-encryption. We prove $\textit{FairShare}$ to be secure as long as at least one of the parties is honest. We validate $\textit{FairShare}$ with a theoretical overhead analysis. We also build a prototype in Ethereum to estimate performance and show results comparable to a state-of-the-art scheme, both via simulation and a realistic testbed setup. We observe an additional communication overhead of 256 bytes and a deployment cost of 1.01 USD in Ethereum, both of which are constant irrespective of file size.
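To make the dispute-mediation idea concrete, the sketch below models a minimal escrow-style fair-exchange flow in Python: a client locks the fee, the data owner must reveal key material matching a prior commitment, and an unresponsive owner triggers a refund. The class, state names, and timeout handling are illustrative assumptions, not the actual FairShare contract logic.

```python
# Minimal escrow-style fair-exchange sketch (illustrative only; the real
# FairShare smart contract and proxy re-encryption flow may differ).
import hashlib
import time
from enum import Enum, auto

class State(Enum):
    AWAITING_PAYMENT = auto()
    AWAITING_KEY = auto()   # client has paid; owner must reveal key material
    SETTLED = auto()
    REFUNDED = auto()

class EscrowSketch:
    def __init__(self, price, timeout_s=3600):
        self.price, self.timeout_s = price, timeout_s
        self.state, self.paid_at = State.AWAITING_PAYMENT, None

    def deposit(self, amount):
        # Client locks the access fee before receiving any decryption material.
        assert self.state is State.AWAITING_PAYMENT and amount >= self.price
        self.state, self.paid_at = State.AWAITING_KEY, time.time()

    def reveal_key(self, key_commitment, revealed_key):
        # Owner is paid only if the revealed key matches the published commitment.
        assert self.state is State.AWAITING_KEY
        if hashlib.sha256(revealed_key).hexdigest() == key_commitment:
            self.state = State.SETTLED
            return True
        return False

    def claim_refund(self):
        # If no valid key arrives before the timeout, the client recovers the fee.
        assert self.state is State.AWAITING_KEY
        if time.time() - self.paid_at > self.timeout_s:
            self.state = State.REFUNDED
        return self.state is State.REFUNDED
```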
The energy consumption and related carbon emissions of cryptocurrencies such as Bitcoin are subject to extensive discussion in public, academia, and industry. As cryptocurrencies continue their journey into mainstream finance, the incentives to participate in these networks, and to consume energy in doing so, remain significant. Initial guidance on how to allocate the carbon footprint of the Bitcoin network to individual investors exists; however, a holistic framework capturing a wider range of cryptocurrencies and tokens remains absent. This white paper explores different approaches to allocating the emissions caused by cryptocurrencies and tokens. Based on our analysis of the strengths and limitations of the potential approaches, we propose a framework that combines key drivers of emissions in Proof of Work and Proof of Stake networks.
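As a hedged illustration of one allocation approach (not the framework proposed in this white paper), the snippet below attributes a slice of a network's annual footprint to a single holder in proportion to their share of circulating supply; the numbers are hypothetical.

```python
def allocate_footprint(network_emissions_t, investor_holdings, total_supply):
    """Attribute part of a network's annual emissions (tCO2e) to one holder
    in proportion to their share of circulating supply.

    A deliberately simple, holdings-based illustration; a full framework
    would weigh several drivers (e.g., stake, hashrate, transactions)
    rather than this single ratio.
    """
    return network_emissions_t * (investor_holdings / total_supply)

# Hypothetical example values, purely for illustration:
print(allocate_footprint(65_000_000, 2.5, 19_600_000))  # ~8.3 tCO2e
```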
Future astronauts living and working on the Moon will face extreme environmental conditions that compromise their operational safety and performance. While it has been suggested that Augmented Reality (AR) Head-Up Displays (HUDs) could help mitigate some of these adversities, the applicability of AR in the unique lunar context remains underexplored. To address this limitation, we have produced an accurate representation of the lunar setting in virtual reality (VR), which then formed our testbed for exploring prospective operational scenarios with aerospace experts. Herein we present findings based on qualitative reflections made by the first six study participants. AR was found to be instrumental in several use cases, including support for navigation and risk awareness. Major design challenges were likewise identified, including the importance of redundancy and contextual appropriateness. Drawing on these findings, we conclude by outlining directions for future research aimed at developing AR-based assistive solutions tailored to the lunar setting.
Connected and Automated Vehicles (CAVs) are an emerging technology in the automotive domain with the potential to alleviate accidents, traffic congestion, and pollutant emissions, leading to a safe, efficient, and sustainable transportation system. Machine learning-based methods are widely used in CAVs for crucial tasks like perception, motion planning, and motion control. However, machine learning models in CAVs are typically trained solely on local vehicle data, and their performance is uncertain when exposed to new environments or unseen conditions. Federated learning (FL) is an effective solution for CAVs that enables collaborative model development across multiple vehicles in a distributed learning framework. FL enables CAVs to learn from a wide range of driving environments and improve their overall performance while preserving the privacy and security of local vehicle data. In this paper, we review the progress researchers have made in applying FL to CAVs. We provide a broad view of the various data modalities and algorithms that have been implemented on CAVs, review specific applications of FL in detail, and present an analysis of the challenges and future directions for research.
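For readers unfamiliar with the mechanics, the following is a minimal FedAvg-style aggregation sketch in Python, weighting each vehicle's model update by its local dataset size. It is a generic illustration of federated averaging, not a specific algorithm from the surveyed CAV literature.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average client model parameters weighted
    by local dataset size. Generic sketch for illustration only."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical vehicles with different amounts of local driving data.
clients = [np.array([0.2, -1.0]), np.array([0.4, -0.8]), np.array([0.1, -1.2])]
sizes = [1200, 300, 500]
global_update = federated_average(clients, sizes)
print(global_update)
```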
Technology, especially the smartphone, is villainized for taking meaning and time away from in-person interactions and secluding people into "digital bubbles". We believe this is not an intrinsic property of digital gadgets, but evidence of a lack of imagination in technology design. Leveraging augmented reality (AR) toward this end allows us to create experiences for multiple people, their pets, and their environments. In this work, we explore the design of AR technology that "piggybacks" on everyday leisure to foster co-located interactions among close ties (with other people and pets). We designed, developed, and deployed three such AR applications and evaluated them through a user study with 41 participants and 19 pets. We gained key insights about the ability of AR to spur and enrich interaction through new channels, the importance of customization, and the challenges of designing for the physical aspects of AR devices (e.g., holding smartphones). These insights guide design implications for the novel research space of co-located AR.
In the present academic landscape, the process of collecting data is slow, and lax infrastructures for data collaboration lead to significant delays in reaching and disseminating conclusive findings. There is therefore an increasing need for a secure, scalable, and trustworthy data-sharing ecosystem that promotes and rewards collaborative data-sharing efforts among researchers, and a robust incentive mechanism is required to achieve this objective. Reputation-based incentives, such as the h-index, have historically played a pivotal role in the academic community; however, the h-index suffers from several limitations. This paper introduces the SCIENCE-index, a blockchain-based metric of a researcher's scientific contributions. Utilizing the Microsoft Academic Graph and machine learning techniques, the SCIENCE-index predicts the progress a researcher makes over their career. To incentivize researchers to share their data, the SCIENCE-index is augmented with a data-sharing parameter, proxied by DataCite, a database of openly available datasets that captures a researcher's data-sharing activity. We evaluate our model by comparing the distribution of its output for geographically diverse researchers with that of the h-index, and observe that it yields a much more even spread of evaluations. The SCIENCE-index is a crucial component in constructing a decentralized protocol that promotes trust-based data sharing and addresses the current inequity in dataset sharing. The work outlined in this paper provides the foundation for assessing scientific contributions in future data-sharing spaces powered by decentralized applications.
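Purely for exposition, the sketch below combines a predicted career-trajectory score with a logarithmic data-sharing term; the functional form, the weight alpha, and the example values are our assumptions, not the paper's actual model.

```python
import math

def science_index_sketch(predicted_trajectory, shared_datasets, alpha=0.1):
    """Illustrative combination of a predicted career-trajectory score with
    a data-sharing term (e.g., a count of DataCite-indexed datasets).
    Hypothetical form and weighting, for exposition only."""
    return predicted_trajectory * (1 + alpha * math.log1p(shared_datasets))

# Hypothetical researcher: predicted trajectory score 14.2, 8 shared datasets.
print(science_index_sketch(predicted_trajectory=14.2, shared_datasets=8))
```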
Metaverse play-to-earn games have been gaining popularity as they enable players to earn in-game tokens that can be translated into real-world profits. With advances in augmented reality (AR) technologies, users can play AR games in the Metaverse. However, these high-resolution games are compute-intensive, and in-game graphical scenes need to be offloaded from mobile devices to an edge server for computation. In this work, we consider an optimization problem in which the Metaverse Service Provider (MSP) aims to reduce the downlink transmission latency of in-game graphics, the latency of uplink data transmission, and the worst-case (greatest) battery charge expenditure across user equipments (UEs), while maximizing the worst-case (lowest) resolution-influenced in-game earning potential among UEs, by optimizing the downlink UE-Metaverse Base Station (UE-MBS) assignment and the uplink transmission power selection. The downlink and uplink transmissions are executed asynchronously. We propose a multi-agent, loss-sharing (MALS) reinforcement learning model to tackle this asynchronous and asymmetric problem, compare it with baseline models, and show that it outperforms them. Finally, we conduct multi-variable optimization weighting analyses and demonstrate the viability of the proposed MALS algorithm for tackling joint optimization problems.
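To illustrate the shape of the trade-off, the sketch below scalarizes the four objectives named above into one cost: summed downlink and uplink latencies, the worst-case (largest) battery expenditure, and the negated worst-case (smallest) earning potential. The weights and the scalarization itself are assumptions for exposition, not the paper's formulation.

```python
import numpy as np

def msp_objective(dl_latency, ul_latency, battery_cost, earning_potential,
                  w=(1.0, 1.0, 1.0, 1.0)):
    """Scalarized sketch of the MSP's trade-off: penalize downlink/uplink
    latency and the worst-case battery drain, reward the worst-case
    resolution-driven earning potential. Weights are hypothetical."""
    w1, w2, w3, w4 = w
    return (w1 * np.sum(dl_latency)
            + w2 * np.sum(ul_latency)
            + w3 * np.max(battery_cost)        # worst-case UE battery drain
            - w4 * np.min(earning_potential))  # worst-case UE earning

# Hypothetical per-UE measurements for three UEs:
print(msp_objective([12.0, 9.5, 14.1], [6.2, 7.8, 5.9],
                    [0.8, 1.1, 0.6], [3.4, 2.9, 4.0]))
```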
While Reinforcement Learning (RL) has achieved tremendous success in sequential decision-making problems across many domains, it still faces the key challenges of data inefficiency and lack of interpretability. Interestingly, many researchers have recently leveraged insights from the causality literature, producing a flourishing body of work that unifies the merits of causality with RL to address these challenges. It is therefore both necessary and significant to collate these Causal Reinforcement Learning (CRL) works, offer a review of CRL methods, and investigate what causality can contribute to RL. In particular, we divide existing CRL approaches into two categories according to whether their causality-based information is given in advance or not. We further analyze each category in terms of the formalization of different models, including the Markov Decision Process (MDP), the Partially Observable Markov Decision Process (POMDP), Multi-Armed Bandits (MAB), and the Dynamic Treatment Regime (DTR). Moreover, we summarize evaluation metrics and open-source resources, and we discuss emerging applications along with promising prospects for the future development of CRL.
Artificial Intelligence (AI) is rapidly being integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations must be set on what AI can achieve in the foreseeable future. This paper argues that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 that avoids this trap. We argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only survive within a complex operational environment, it will thrive, benefiting from the inevitable shocks and volatility of war.
With the advances of data-driven machine learning research, a wide variety of prediction problems have been tackled. It has become critical to explore how machine learning, and specifically deep learning, methods can be exploited to analyse healthcare data. A major limitation of existing methods has been their focus on grid-like data; however, the structure of physiological recordings is often irregular and unordered, which makes it difficult to conceptualise them as a matrix. As such, graph neural networks have attracted significant attention by exploiting the implicit information that resides in a biological system, with interacting nodes connected by edges whose weights can represent either temporal associations or anatomical junctions. In this survey, we thoroughly review the different types of graph architectures and their applications in healthcare. We provide an overview of these methods in a systematic manner, organized by their domain of application, including functional connectivity, anatomical structure, and electrical-based analysis. We also outline the limitations of existing techniques and discuss potential directions for future research.
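As a minimal illustration of the kind of model surveyed, the snippet below implements one symmetrically normalized graph-convolution layer over a small physiological-style graph; the node features, adjacency, and weights are random placeholders, and the layer follows the common GCN recipe rather than any specific method reviewed.

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution layer: nodes might be EEG electrodes or brain
    regions, edge weights temporal correlations or anatomical links.
    Uses symmetric normalization with self-loops (standard GCN recipe)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # D^{-1/2}
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights, 0)

# Tiny 3-node example with random node features and one weight matrix.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
print(gcn_layer(A, X, W).shape)  # (3, 2)
```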