This paper presents SigVM, a novel blockchain virtual machine that supports an event-driven execution model, enabling developers to build autonomous smart contracts. Contracts in SigVM can emit signal events that other contracts can listen for. Once an event is triggered, the corresponding handler functions are automatically executed as signal transactions. We build an end-to-end blockchain platform, SigChain, and a contract language compiler, SigSolid, to realize the potential of SigVM. Experimental results show that our benchmark applications can be reimplemented with SigVM in an autonomous way, eliminating the dependency on unreliable mechanisms such as off-chain relay servers. The development effort of reimplementing these contracts with SigVM is small: we modified on average 2.6% of the contract code.
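The emit/listen/handler model described above can be illustrated with a minimal Python sketch. The names (SignalVM, subscribe, emit, process_signal_transactions) and the oracle/lender example are illustrative stand-ins, not SigVM's or SigSolid's actual API.

```python
# Minimal sketch: contracts emit signals, other contracts register handlers,
# and a (hypothetical) VM dispatches each handler as its own signal transaction.
from collections import defaultdict

class SignalVM:
    def __init__(self):
        self.listeners = defaultdict(list)   # signal name -> [(contract, handler)]
        self.pending = []                     # queued signal transactions

    def subscribe(self, signal, contract, handler):
        self.listeners[signal].append((contract, handler))

    def emit(self, signal, payload):
        # Each matching handler becomes a pending signal transaction.
        for contract, handler in self.listeners[signal]:
            self.pending.append((contract, handler, payload))

    def process_signal_transactions(self):
        while self.pending:
            contract, handler, payload = self.pending.pop(0)
            handler(contract, payload)

class PriceOracle:
    def __init__(self, vm):
        self.vm = vm
    def update(self, price):
        self.vm.emit("PriceUpdated", {"price": price})

class LendingContract:
    def __init__(self, vm):
        vm.subscribe("PriceUpdated", self, LendingContract.on_price)
        self.last_price = None
    def on_price(self, payload):
        self.last_price = payload["price"]   # handler runs autonomously, no relay server

vm = SignalVM()
oracle, lender = PriceOracle(vm), LendingContract(vm)
oracle.update(42)
vm.process_signal_transactions()
print(lender.last_price)  # 42
```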
Adding renewable energy sources and storage units to an electric grid has changed the way energy is generated and billed. This shift cannot be managed without a unified view of energy systems and their components, a view captured by the idea of a Smart Local Energy System (SLES). Currently, various isolated control and market elements are proposed to resolve network constraints, provide demand-side response, and optimise utility. They rely on topology estimation, forecasting, and fault-detection methods to complete their tasks. This disjointed design has left most systems capable of fulfilling only a single role, or resistant to change and extensions in functionality. By allocating roles, functional responsibilities, and technical requirements to bounded systems, a more unified view of energy systems can be achieved. This is made possible by representing an energy system as a distributed peer-to-peer (P2P) environment in which each individual distributed energy resource (DER) on the consumer's side of the meter is responsible for its portion of the network and can facilitate trade with numerous entities, including the grid. Advances in control engineering, markets, and services such as forecasting, topology identification, and cyber-security can enable such trading and communication to be done securely and robustly. To realise this advantage, however, we need to redefine how we view the design of the sub-systems and interconnections within a SLES. In this paper we describe how whole-system design could be achieved by integrating control, markets, and analytics into each system. We propose the use of physical, control, market, and service layers to create a system-of-systems representation.
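A minimal sketch of the layered, system-of-systems view, assuming a simple per-DER record with physical, control, market, and service layers; the structure and field contents are illustrative only, not the paper's formal model.

```python
# Each bounded system (here, one DER) carries all four proposed layers.
from dataclasses import dataclass, field

@dataclass
class DERSystem:
    name: str
    physical: dict = field(default_factory=dict)   # assets, ratings, topology
    control: dict = field(default_factory=dict)    # setpoints, constraints
    market: dict = field(default_factory=dict)     # offers, tariffs, P2P trades
    service: dict = field(default_factory=dict)    # forecasts, fault detection

# A SLES is then a collection of such bounded systems plus the grid interface.
sles = [
    DERSystem("household_pv", physical={"pv_kw": 4.0, "battery_kwh": 10.0}),
    DERSystem("ev_charger", physical={"max_kw": 7.2}),
]
print([s.name for s in sles])
```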
Cross-blockchain communication has gained traction due to the increasing fragmentation of blockchain networks and scalability solutions such as side-chaining and sharding. With SmartSync, we propose a novel concept for cross-blockchain smart contract interactions that creates client contracts on arbitrary blockchain networks supporting the same execution environment. Client contracts mirror the logic and state of the original instance and enable seamless on-chain function executions on recent states. Synchronized contracts offer instant read-only function calls to other applications hosted on the target blockchain. In this way, current limitations in cross-chain communication are alleviated and new forms of contract interactions are enabled. State updates are transmitted in a verifiable manner using Merkle proofs and do not require trusted intermediaries. To permit lightweight synchronization, we introduce transition confirmations that allow verifiable state transitions to be applied without re-executing transactions of the source blockchain. We demonstrate the concept's soundness with a prototypical implementation that enables smart contract forks, state synchronization, and on-chain validation on EVM-compatible blockchains. Our evaluation demonstrates SmartSync's applicability for the presented use cases, providing third-party contracts on the target blockchain with access to recent states. Execution costs scale sub-linearly with the number of value updates and depend on the depth and index of the corresponding Merkle proofs.
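The verifiable-state-update idea can be illustrated with a simplified Merkle proof check. Real EVM state uses keccak-256 Merkle-Patricia tries; the sketch below uses a plain binary SHA-256 Merkle tree purely for illustration.

```python
# Verify that an updated storage value is included under a known state root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, proof: list, root: bytes) -> bool:
    """proof is a list of (sibling_hash, sibling_is_left) pairs, leaf to root."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Tiny example: a tree over four storage slots.
leaves = [b"slot0=7", b"slot1=42", b"slot2=0", b"slot3=9"]
l = [h(x) for x in leaves]
n01, n23 = h(l[0] + l[1]), h(l[2] + l[3])
root = h(n01 + n23)
# Prove inclusion of slot1=42: siblings are l[0] (left) and n23 (right).
print(verify_merkle_proof(b"slot1=42", [(l[0], True), (n23, False)], root))  # True
```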
A real-time motion training system for skydiving is proposed. Aerial maneuvers are performed by changing the body posture and thus deflecting the surrounding airflow. The natural learning process is extremely slow due to the unfamiliar free-fall dynamics, stress-induced blocking of kinesthetic feedback, and the complexity of the required movements. The key idea is to augment the learner with an automatic control system that would be able to perform the trained activity if it had direct access to the learner's body as an actuator. The aiding system will supply the following visual cues to the learner: 1. feedback of the current body posture; 2. the body posture that would bring the body to perform the desired maneuver; 3. a prediction of the future inertial position and orientation if the body retains its present posture. The system will enable novices to maintain stability in free-fall and to perceive the unfamiliar environmental dynamics, thus accelerating the initial stages of skill acquisition. This paper presents the results of a proof-of-concept experiment in which humans controlled, by means of their bodies, a virtual skydiver free-falling in a computer simulation. This task was impossible without the aiding system; with it, all participants completed the task on their first attempt.
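The third visual cue amounts to forward-predicting motion under a frozen posture. A toy Python sketch, assuming a constant-acceleration, constant-yaw-rate model that deliberately ignores the real free-fall aerodynamics:

```python
# Toy frozen-posture predictor: integrate position and yaw forward for a short
# horizon, assuming the current posture (and hence its forces/moments) is held.
import numpy as np

def predict_constant_posture(pos, vel, acc, yaw, yaw_rate, horizon=2.0, dt=0.1):
    """Returns a list of (position, yaw) samples over `horizon` seconds."""
    traj = []
    for _ in range(int(horizon / dt)):
        vel = vel + acc * dt
        pos = pos + vel * dt
        yaw = yaw + yaw_rate * dt
        traj.append((pos.copy(), yaw))
    return traj

traj = predict_constant_posture(np.zeros(3), np.array([0.0, 0.0, -50.0]),
                                np.array([0.5, 0.0, 0.0]), yaw=0.0, yaw_rate=0.2)
print(traj[-1])  # predicted position and heading two seconds ahead
```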
Autonomous driving has been at the forefront of public interest, and a pivotal issue underlying widespread concerns is safety in the transportation system. Deep reinforcement learning (DRL) has been applied to autonomous driving to provide solutions for obstacle avoidance. However, in a road traffic junction scenario, the vehicle typically receives only partial observations from the transportation environment, while DRL must rely on long-term cumulative rewards to train a reliable model; exploring new actions therefore carries risk, returning either a positive reward or a penalty in the case of a collision. Although safety concerns are usually considered in the design of the reward function, they are not fully considered as the critical metric to directly evaluate the effectiveness of DRL algorithms in autonomous driving. In this study, we evaluated the safety performance of three baseline DRL models (DQN, A2C, and PPO) and proposed a self-awareness module based on an attention mechanism to improve the safety of DRL for an anomalous vehicle in complex road traffic junction environments, such as intersection and roundabout scenarios, using four metrics: collision rate, success rate, freezing rate, and total reward. Our experimental results in the training and testing phases revealed that the baseline DRL models have poor safety performance, while our proposed self-awareness attention-DQN significantly improves safety in both intersection and roundabout scenarios.
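A hedged sketch of how an attention-based self-awareness block might be placed in front of a DQN head. The architecture below (embedding size, four attention heads, mean pooling, per-vehicle observation rows) is an assumption for illustration, not the paper's exact module.

```python
# Attend over observed vehicles before estimating Q-values for discrete actions.
import torch
import torch.nn as nn

class AttentionDQN(nn.Module):
    def __init__(self, feat_dim: int, n_actions: int, embed: int = 64):
        super().__init__()
        self.embed = nn.Linear(feat_dim, embed)
        self.attn = nn.MultiheadAttention(embed, num_heads=4, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(embed, 128), nn.ReLU(), nn.Linear(128, n_actions)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_vehicles, feat_dim), one row per observed vehicle
        x = self.embed(obs)
        x, _ = self.attn(x, x, x)          # "self-awareness": weight surrounding vehicles
        x = x.mean(dim=1)                  # pool attended vehicle features
        return self.head(x)                # Q-value per discrete driving action

q_net = AttentionDQN(feat_dim=7, n_actions=5)
print(q_net(torch.randn(2, 5, 7)).shape)   # torch.Size([2, 5])
```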
The core issue of cyberspace detection and mapping is to accurately identify and dynamically track devices. However, with the development of anonymization technology, a device can have multiple IP addresses and MAC addresses, and it is difficult for existing detection and mapping technologies to map multiple virtual attributes to the same physical device. In this paper, we propose a detailed PUF-based detection and mapping framework that can actively detect physical resources in cyberspace, construct resource portraits based on physical fingerprints, and dynamically track devices. We present a new method to implement a rowhammer DRAM PUF on a general PC equipped with DDR4 memory. The PUF performance evaluation shows that the extracted rowhammer PUF response is unique and reliable on the PC and can be treated as the device's unique physical fingerprint. The results of the detection and mapping experiments show that the proposed framework can accurately identify target devices. Even if a device modifies its MAC address, IP address, and operating system, by matching against a physical fingerprint database, the identification accuracy approaches the ideal value of 100%.
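The device-matching step implied above can be sketched as a nearest-fingerprint lookup by fractional Hamming distance. The 0.25 acceptance threshold and the fingerprint contents are illustrative assumptions, not values taken from the paper.

```python
# Match a freshly extracted (noisy) PUF response against enrolled fingerprints.

def hamming_fraction(a: bytes, b: bytes) -> float:
    assert len(a) == len(b)
    diff = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return diff / (8 * len(a))

def identify(response: bytes, database: dict, threshold: float = 0.25):
    """Return the device id with the closest enrolled fingerprint, if close enough."""
    best_id, best_d = None, 1.0
    for dev_id, enrolled in database.items():
        d = hamming_fraction(response, enrolled)
        if d < best_d:
            best_id, best_d = dev_id, d
    return best_id if best_d <= threshold else None

db = {"pc-01": bytes([0b10110010] * 16), "pc-02": bytes([0b01001101] * 16)}
noisy = bytes([0b10110011] * 16)          # same device, a few noisy bits
print(identify(noisy, db))                # pc-01, regardless of its current IP/MAC
```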
Many applications from the robotics domain can benefit from FPGA acceleration. A corresponding key question is how to integrate hardware accelerators into software-centric robotics programming environments. Recently, several approaches have demonstrated hardware acceleration for the Robot Operating System (ROS), the dominant programming environment in robotics. ROS is a middleware layer that supports composing complex robotics applications as sets of nodes that communicate via mechanisms such as publish/subscribe, and that distributes them over several compute platforms. In this paper, we present a novel approach for event-based programming of robotics applications that leverages ReconROS, a framework for flexibly mapping ROS 2 nodes to either software or reconfigurable hardware. The ReconROS executor schedules callbacks of ROS 2 nodes and uses a reconfigurable slot model with partial runtime reconfiguration to load hardware-based callbacks on demand. We describe the ReconROS executor approach, give design examples, and experimentally evaluate its functionality.
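A conceptual, software-only sketch of the executor idea: ready callbacks are processed in turn, and hardware-mapped callbacks are loaded into a free reconfigurable slot on demand before being invoked. Class and method names are illustrative and do not reflect the actual ReconROS API.

```python
# Simplified executor loop with on-demand loading of hardware callbacks.

class ReconfigurableSlots:
    def __init__(self, n_slots: int):
        self.slots = [None] * n_slots         # which hardware callback occupies each slot

    def ensure_loaded(self, cb_name: str) -> int:
        if cb_name in self.slots:
            return self.slots.index(cb_name)  # already loaded, reuse slot
        slot = self.slots.index(None) if None in self.slots else 0
        self.slots[slot] = cb_name            # stands in for partial reconfiguration
        return slot

class Executor:
    def __init__(self, n_slots: int = 2):
        self.slots = ReconfigurableSlots(n_slots)

    def spin_once(self, ready_callbacks):
        for name, fn, in_hardware, msg in ready_callbacks:
            if in_hardware:
                slot = self.slots.ensure_loaded(name)
                print(f"{name}: running in slot {slot}")
            fn(msg)                           # invoke the (software or hardware) callback

ex = Executor()
ex.spin_once([("sobel_filter", lambda m: None, True, b"image"),
              ("logger", lambda m: None, False, b"status")])
```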
Deep Reinforcement Learning (DRL) solutions are becoming pervasive at the edge of the network because they enable autonomous decision-making in dynamic environments. However, to adapt to an ever-changing environment, a DRL solution deployed on an embedded device has to continue to occasionally take exploratory actions even after initial convergence. In other words, the device has to occasionally take random actions and update the value function, i.e., re-train the Artificial Neural Network (ANN), to keep its performance optimal. Unfortunately, embedded devices often lack the processing power and energy required to train the ANN. The energy aspect is particularly challenging when the edge device is powered solely by Energy Harvesting (EH). To overcome this problem, we propose a two-part algorithm in which the DRL process is trained at the sink, and the weights of the fully trained underlying ANN are periodically transferred to the EH-powered embedded device that takes the actions. Using an EH-powered sensor, a real-world measurement dataset, and the Age of Information (AoI) metric as the optimization target, we demonstrate that such a DRL solution can operate without any performance degradation, with only a few ANN updates per day.
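A schematic sketch of the two-part scheme, assuming a toy linear policy: the sink keeps training, while the EH device only receives occasional weight snapshots and uses them for action selection without training locally. The update period, policy form, and class names are illustrative assumptions.

```python
# Sink trains; the energy-harvesting device periodically loads the weights and acts.
import numpy as np

rng = np.random.default_rng(0)

class Sink:
    def __init__(self, n_obs=4, n_actions=3):
        self.w = rng.normal(size=(n_obs, n_actions))
    def train_step(self, batch):
        self.w += 0.01 * rng.normal(size=self.w.shape)   # stands in for a DRL update
    def export_weights(self):
        return self.w.copy()

class EHDevice:
    def __init__(self, epsilon=0.05):
        self.w, self.epsilon = None, epsilon
    def load_weights(self, w):
        self.w = w                                       # periodic over-the-air update
    def act(self, obs):
        if self.w is None or rng.random() < self.epsilon:
            return int(rng.integers(3))                  # occasional exploration
        return int(np.argmax(obs @ self.w))              # greedy w.r.t. latest weights

sink, device = Sink(), EHDevice()
for step in range(1000):
    sink.train_step(batch=None)
    if step % 500 == 0:                                  # e.g. a few updates per day
        device.load_weights(sink.export_weights())
print(device.act(np.ones(4)))
```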
Active inference is a unifying theory of perception and action resting on the idea that the brain maintains an internal model of the world by minimizing free energy. From a behavioral perspective, active inference agents can be seen as self-evidencing beings that act to fulfill their optimistic predictions, namely preferred outcomes or goals. In contrast, reinforcement learning requires human-designed rewards to accomplish any desired outcome. Although active inference could provide a more natural self-supervised objective for control, its applicability has been limited by the difficulty of scaling the approach to complex environments. In this work, we propose a contrastive objective for active inference that strongly reduces the computational burden of learning the agent's generative model and planning future actions. Our method performs notably better than likelihood-based active inference in image-based tasks, while also being computationally cheaper and easier to train. We compare against reinforcement learning agents that have access to human-designed reward functions and show that our approach closely matches their performance. Finally, we also show that contrastive methods perform significantly better in the presence of distractors in the environment and that our method generalizes goals to variations in the background.
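A generic InfoNCE-style contrastive loss illustrates the flavor of such an objective: predicted latents are scored against embeddings of the actually observed outcomes (positives) and the other outcomes in the batch (negatives), instead of being evaluated under a pixel-level likelihood. The exact objective used in the paper may differ from this sketch.

```python
# InfoNCE over matched (prediction, observation-embedding) pairs in a batch.
import torch
import torch.nn.functional as F

def contrastive_loss(pred_latents: torch.Tensor, obs_embeddings: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """pred_latents, obs_embeddings: (batch, dim); row i of each is a positive pair."""
    pred = F.normalize(pred_latents, dim=-1)
    obs = F.normalize(obs_embeddings, dim=-1)
    logits = pred @ obs.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(pred.size(0))           # row i should match column i
    return F.cross_entropy(logits, targets)

loss = contrastive_loss(torch.randn(8, 32), torch.randn(8, 32))
print(float(loss))
```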
Online multi-object tracking (MOT) is extremely important for high-level spatial reasoning and path planning in autonomous and highly automated vehicles. In this paper, we present a modular framework for tracking multiple objects (vehicles) that accepts object proposals from different sensor modalities (vision and range) and a variable number of sensors to produce continuous object tracks. This work is inspired by traditional tracking-by-detection approaches in computer vision, with two key differences. First, we track objects across multiple cameras and across different sensor modalities by fusing object proposals across sensors accurately and efficiently. Second, the objects of interest (targets) are tracked directly in the real world, a departure from traditional techniques where objects are simply tracked in the image plane; doing so allows the tracks to be readily used by an autonomous agent for navigation and related tasks. To verify the effectiveness of our approach, we test it on real-world highway data collected from a heavily sensorized testbed capable of capturing full-surround information. We demonstrate that our framework is well suited to track objects through entire maneuvers around the ego-vehicle, some of which take more than a few minutes to complete. We also leverage the modularity of our approach to compare the effects of including or excluding different sensors, changing the total number of sensors, and varying the quality of object proposals on the final tracking result.
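One core step of such a pipeline, associating fused real-world detections with existing tracks, can be sketched with the Hungarian algorithm and a gating distance. The 2 m gate and the 2-D point representation are illustrative assumptions, not parameters from the paper.

```python
# Assign world-frame detections (from any sensor) to existing tracks.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks: np.ndarray, detections: np.ndarray, gate: float = 2.0):
    """tracks: (T, 2), detections: (D, 2) in the world frame. Returns matched index pairs."""
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)             # minimum-cost assignment
    return [(t, d) for t, d in zip(rows, cols) if cost[t, d] <= gate]

tracks = np.array([[10.0, 3.5], [22.0, -1.0]])
dets = np.array([[21.7, -0.8], [10.4, 3.6], [40.0, 0.0]])  # third detection is a new object
print(associate(tracks, dets))                             # [(0, 1), (1, 0)]
```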
TraQuad is an autonomous tracking quadcopter capable of tracking any moving (or static) object, such as cars, humans, other drones, or any other object, on the go. This article describes the applications and advantages of TraQuad and the cost reduction (to about $250) achieved so far using the available hardware and software capabilities and our custom algorithms where needed. This description is backed by strong data and by research analyses drawn from existing information or conducted on our own when necessary. We also describe the development of a completely autonomous (even GPS is optional) low-cost drone that can act as a major platform for further developments in automation, transportation, reconnaissance, and more. We describe our ROS Gazebo simulator and our STATUS algorithms, which form the core of our generic object-tracking drone.