Vehicle-to-Vehicle (V2V) communication is intended to improve road safety through distributed information sharing; however, this type of system faces a design challenge: it is difficult to predict and optimize how human agents will respond to the introduction of this information. Bayesian games are a standard approach for modeling such scenarios; in a Bayesian game, agents probabilistically adopt various types on the basis of a fixed, known distribution. Agents in such models ostensibly perform Bayesian inference, which may not be a reasonable cognitive demand for most humans. To complicate matters, the information provided to agents is often implicitly dependent on agent behavior, meaning that the distribution of agent types is a function of the behavior of agents (i.e., the type distribution is endogenous). In this paper, we study an existing model of V2V communication, but relax it along two dimensions: first, we pose a behavior model which does not require human agents to perform Bayesian inference; second, we pose an equilibrium model which avoids the challenging endogenous recursion. Surprisingly, we show that the simplified non-Bayesian behavior model yields the exact same equilibrium behavior as the original Bayesian model, which may lend credibility to Bayesian models. However, we also show that the original endogenous equilibrium model is strictly necessary to obtain certain informational paradoxes; these paradoxes do not appear in the simpler exogenous model. This suggests that standard Bayesian game models with fixed type distributions are not sufficient to express certain important phenomena.
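For intuition, the distinction between the two equilibrium models can be sketched in generic notation (the symbols below are illustrative placeholders, not necessarily the paper's):

```latex
% Exogenous model: the type distribution \mu is fixed and known, and an
% equilibrium policy \pi^* simply best-responds to it.
\pi^*(\theta) \in \arg\max_{a}\; \mathbb{E}_{\theta' \sim \mu}\bigl[\, u(a, \pi^*(\theta'), \theta) \,\bigr]

% Endogenous model: the distribution of types (information) itself depends on
% behavior, so equilibrium is a fixed point of the coupled system.
\mu^* = F(\pi^*), \qquad
\pi^*(\theta) \in \arg\max_{a}\; \mathbb{E}_{\theta' \sim \mu^*}\bigl[\, u(a, \pi^*(\theta'), \theta) \,\bigr]
```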
Walking-in-place (WIP) is a locomotion technique that enables users to "walk infinitely" through vast virtual environments using walking-like gestures within a limited physical space. This paper investigates alternative interaction schemes for WIP, successively addressing the control, input, and output of WIP. First, we introduce a novel height-based control to increase the virtual advance speed. Second, we introduce a novel input system for WIP based on elastic and passive strips. Third, we introduce the use of pseudo-haptic feedback as a novel output for WIP meant to alter walking sensations. The results of a series of user studies show, first, that height- and frequency-based control of WIP can achieve higher virtual speeds with greater efficacy and ease than frequency-based control alone. Second, using an upward elastic input system can result in stable virtual speed control, although excessively strong elastic forces may impair usability and user experience. Finally, using a pseudo-haptic approach can improve the perceived realism of virtual slopes. Taken together, our results suggest that, for future VR applications, there is value in further research into alternative interaction schemes for walking-in-place.
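As an illustration of what a height- and frequency-based control law could look like (a hypothetical mapping for intuition, not the paper's exact scheme), the virtual advance speed can be made to grow with both step frequency and step height:

```latex
v_{\mathrm{virtual}} = \alpha \, f_{\mathrm{step}} + \beta \, h_{\mathrm{step}},
\qquad \alpha, \beta > 0 \ \text{(illustrative gains)}
```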
New 3+1D high-resolution radar sensors are gaining importance for 3D object detection in the automotive domain due to their relative affordability and improved detection compared to classic low-resolution radar sensors. One limitation of high-resolution radar sensors, compared to lidar sensors, is the sparsity of the generated point cloud. This sparsity can be partially overcome by accumulating radar point clouds over subsequent time steps. This contribution analyzes the limitations of accumulating radar point clouds on the View-of-Delft dataset. Different ego-motion estimation approaches are employed to analyze the dataset's inherent constraints and possible solutions. Additionally, a learning-based instance motion estimation approach is deployed to investigate the influence of dynamic motion on the accumulated point cloud for object detection. Experiments show that applying ego-motion estimation and dynamic motion correction improves object detection performance.
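A minimal sketch of the accumulation step, assuming per-frame ego poses are available as 4x4 homogeneous transforms (function and variable names are ours, not taken from the View-of-Delft tooling):

```python
import numpy as np

def accumulate_point_clouds(clouds, poses_world_from_ego):
    """Accumulate past radar point clouds into the latest ego frame.

    clouds: list of (N_i, 3) arrays, each in its own ego frame (oldest first).
    poses_world_from_ego: list of (4, 4) homogeneous transforms, one per frame,
        e.g. obtained from an ego-motion estimation approach.
    Returns a single (sum N_i, 3) array expressed in the latest ego frame.
    """
    world_from_latest = poses_world_from_ego[-1]
    latest_from_world = np.linalg.inv(world_from_latest)
    accumulated = []
    for pts, world_from_ego in zip(clouds, poses_world_from_ego):
        # Transform chain: latest ego frame <- world <- past ego frame.
        latest_from_ego = latest_from_world @ world_from_ego
        pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coords
        accumulated.append((pts_h @ latest_from_ego.T)[:, :3])
    return np.concatenate(accumulated, axis=0)
```

Note that this compensates only the ego motion; points on dynamically moving objects would still smear across frames, which is what the learning-based instance motion estimation is meant to address.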
The Gaussian Mechanism (GM), which consists of adding Gaussian noise to a vector-valued query before releasing it, is a standard privacy protection mechanism. In particular, provided that the query satisfies an L2 sensitivity property (the L2 distance between its outputs on any two neighboring inputs is bounded), GM guarantees Rényi Differential Privacy (RDP). Unfortunately, precisely bounding the L2 sensitivity can be hard, leading to loose privacy bounds. In this work, we consider a Relative L2 sensitivity assumption, in which the bound on the distance between two query outputs may also depend on their norm. Leveraging this assumption, we introduce the Relative Gaussian Mechanism (RGM), in which the variance of the noise depends on the norm of the output. We prove tight bounds on the RDP parameters under relative L2 sensitivity, and characterize the privacy loss incurred by using output-dependent noise. In particular, we show that RGM naturally adapts to a latent variable that would control the norm of the output. Finally, we instantiate our framework to show tight guarantees for Private Gradient Descent, a problem that naturally fits our relative L2 sensitivity assumption.
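For reference, the standard mechanism and its well-known RDP guarantee, together with one plausible shape of a relative sensitivity assumption (illustrative; the paper's exact definition may differ):

```latex
% Standard Gaussian Mechanism and its RDP guarantee under an absolute
% L2 sensitivity bound \Delta:
\mathcal{M}(x) = q(x) + \mathcal{N}(0, \sigma^2 I),
\qquad
\|q(x) - q(x')\|_2 \le \Delta
\;\Rightarrow\;
\mathcal{M} \text{ is } \Bigl(\alpha, \tfrac{\alpha \Delta^2}{2\sigma^2}\Bigr)\text{-RDP}.

% One plausible shape of a *relative* L2 sensitivity assumption (illustrative):
% the bound may grow with the norm of the output, and RGM scales the noise
% variance accordingly.
\|q(x) - q(x')\|_2 \le a + b \,\|q(x)\|_2 .
```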
Self-adaptation is a crucial feature of autonomous systems that must cope with uncertainties in, e.g., their environment and their internal state. Self-adaptive systems are often modelled as two-layered systems with a managed subsystem handling the domain concerns and a managing subsystem implementing the adaptation logic. We consider a case study of a self-adaptive robotic system; more concretely, an autonomous underwater vehicle (AUV) used for pipeline inspection. In this paper, we model and analyse it with the feature-aware probabilistic model checker ProFeat. The functionalities of the AUV are modelled in a feature model, capturing the AUV's variability. This allows us to model the managed subsystem of the AUV as a family of systems, where each family member corresponds to a valid feature configuration of the AUV. The managing subsystem of the AUV is modelled as a control layer capable of dynamically switching between such valid feature configurations, depending on both environmental and internal conditions. We use this model to analyse probabilistic reward and safety properties for the AUV.
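For intuition, the reward and safety properties checked in probabilistic model checking typically take shapes like the following (illustrative PRISM/PCTL-style properties in our notation, not taken from the paper):

```latex
% Safety: the probability of ever reaching a collision state is at most 1%.
P_{\le 0.01}\,[\, \Diamond\, \mathit{collision} \,]

% Reward: the minimum expected energy consumed until pipeline inspection completes.
R^{\mathit{energy}}_{\min=?}\,[\, \Diamond\, \mathit{inspection\_done} \,]
```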
In autonomous driving, the end-to-end (E2E) driving approach that predicts vehicle control signals directly from sensor data is rapidly gaining attention. To learn a safe E2E driving system, one needs an extensive amount of driving data and human intervention. Vehicle control data is collected through many hours of human driving, and it is challenging to construct large vehicle control datasets: publicly available driving datasets often cover limited driving scenes, and vehicle control data is typically only available to vehicle manufacturers. To address these challenges, this paper proposes the first self-supervised learning framework, self-supervised imitation learning (SSIL), that can learn E2E driving networks without using driving command data. To construct pseudo steering angle data, the proposed SSIL predicts a pseudo target from the vehicle's poses at the current and previous time points, which are estimated with light detection and ranging (LiDAR) sensors. Our numerical experiments demonstrate that the proposed SSIL framework achieves E2E driving accuracy comparable to its supervised learning counterpart. In addition, our qualitative analyses using a conventional visual explanation tool show that networks trained with the proposed SSIL and its supervised counterpart attend to similar objects when making predictions.
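A minimal sketch of how a pseudo steering target could be derived from two consecutive LiDAR-estimated poses, using a kinematic bicycle approximation (a simplifying assumption on our part; the function name and wheelbase value are hypothetical and not the paper's exact construction):

```python
import numpy as np

def pseudo_steering_angle(pose_prev, pose_curr, wheelbase=2.7):
    """Illustrative pseudo steering target from two consecutive 2D poses.

    pose_prev, pose_curr: (x, y, yaw) tuples estimated from LiDAR odometry.
    wheelbase: assumed vehicle wheelbase in meters (hypothetical value).
    The yaw change over the distance travelled gives a curvature, which a
    kinematic bicycle model converts into a steering angle.
    """
    dx = pose_curr[0] - pose_prev[0]
    dy = pose_curr[1] - pose_prev[1]
    distance = np.hypot(dx, dy)
    if distance < 1e-6:
        return 0.0  # vehicle essentially stationary
    # Wrap the yaw difference to [-pi, pi] before computing curvature.
    dyaw = np.arctan2(np.sin(pose_curr[2] - pose_prev[2]),
                      np.cos(pose_curr[2] - pose_prev[2]))
    curvature = dyaw / distance
    return float(np.arctan(wheelbase * curvature))
```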
Intelligent reflecting surfaces (IRSs) were introduced to enhance the performance of wireless communication systems. However, from a service provider's viewpoint, a concern with the use of an IRS is its effect on out-of-band (OOB) quality of service. Specifically, if two operators, say X and Y, provide services in a given geographical area using non-overlapping frequency bands, and if operator X uses an IRS to enhance the spectral efficiency (SE) of its users, does it degrade the performance of users served by operator Y? We answer this question by analyzing the average and instantaneous performances of the OOB operator considering both sub-6 GHz and mmWave bands, accounting for their corresponding channel characteristics. Specifically, we derive the ergodic sum-spectral efficiency achieved by the operators under round-robin scheduling. We also derive the outage probability and analyze the change in the SNR witnessed by an OOB user in the presence of the IRS using stochastic dominance theory. Surprisingly, even though the IRS is randomly configured from operator Y's point of view, the OOB operator still benefits from the presence of the IRS, witnessing a performance enhancement for free, in both sub-6 GHz and mmWave bands. This is because the IRS introduces additional paths between the transmitter and receiver, increasing the overall signal power arriving at the receiver and providing diversity benefits. We numerically illustrate our findings and conclude that an IRS is always beneficial to every operator, even when the IRS is deployed and controlled by only one operator to serve its own users.
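The underlying signal model can be written generically as follows (our notation): h_d is the direct channel, f and g are the channels to and from the IRS, and the diagonal matrix Θ holds the N reflection phases. Even when the phases are random from operator Y's perspective, the reflected paths add signal power and diversity at the receiver.

```latex
y = \bigl( h_d + \mathbf{g}^{\mathsf{H}} \Theta \,\mathbf{f} \bigr)\, x + n,
\qquad
\Theta = \mathrm{diag}\!\left(e^{j\theta_1}, \dots, e^{j\theta_N}\right)
```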
Validating the safety of Autonomous Vehicles (AVs) operating in open-ended, dynamic environments is challenging as vehicles will eventually encounter safety-critical situations for which there is no representative training data. By increasing the coverage of different road and traffic conditions and by including corner cases in simulation-based scenario testing, the safety of AVs can be improved. However, the creation of corner case scenarios involving multiple agents is non-trivial. Our approach allows engineers to generate novel, realistic corner cases based on historical traffic data and to explain why situations were safety-critical. In this paper, we introduce Probabilistic Lane Graphs (PLGs) to describe a finite set of lane positions and directions in which vehicles might travel. The structure of PLGs is learnt directly from spatio-temporal traffic data. The graph model represents the actions of drivers in response to a given state in the form of a probabilistic policy. We use reinforcement learning techniques to modify this policy and to generate realistic and explainable corner case scenarios which can be used for assessing the safety of AVs.
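A minimal sketch of a probabilistic policy over a lane graph, learnt by counting observed transitions (data structures and names are ours, not the paper's PLG implementation):

```python
import numpy as np

class LaneGraphPolicy:
    """Probabilistic policy over lane-graph nodes, estimated from traffic data."""

    def __init__(self):
        self.transition_counts = {}  # node -> {successor node: observed count}

    def update(self, node, successor):
        """Record one observed driver transition from spatio-temporal traffic data."""
        self.transition_counts.setdefault(node, {})
        self.transition_counts[node][successor] = (
            self.transition_counts[node].get(successor, 0) + 1
        )

    def sample_action(self, node, rng=None):
        """Sample a successor node according to the learnt transition probabilities."""
        if rng is None:
            rng = np.random.default_rng()
        succs = self.transition_counts.get(node, {})
        if not succs:
            return None  # terminal or unseen state
        nodes = list(succs)
        probs = np.array([succs[s] for s in nodes], dtype=float)
        idx = rng.choice(len(nodes), p=probs / probs.sum())
        return nodes[idx]
```

A reinforcement learning procedure could then perturb these transition probabilities to bias sampled trajectories toward rare, safety-critical interactions.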
The Internet of Things (IoT) is defined as the connection between places and physical objects (i.e., things) over the internet/network via smart computing devices. We observed that IoT software developers share solutions to programming questions as code examples on three Stack Exchange Q&A sites: Stack Overflow (SO), Arduino, and Raspberry Pi. Previous research found vulnerabilities/weaknesses in C/C++ code examples shared on Stack Overflow; however, those studies did not investigate C/C++ code examples related to IoT and considered SO code examples only. In this paper, we conduct a large-scale empirical study of all IoT C/C++ code examples shared on the three Stack Exchange sites, i.e., SO, Arduino, and Raspberry Pi. From the 11,329 code snippets obtained from the three sites, we identify 29 distinct CWE (Common Weakness Enumeration) types in 609 snippets. These CWE types can be categorized into 8 general weakness categories, and we observe that evaluation-, memory-, and initialization-related weaknesses are the most commonly introduced by users when posting programming solutions. Furthermore, we find that 39.58% of the vulnerable code snippets contain instances of CWE types that can be mapped to real-world occurrences of those CWE types (i.e., CVE instances). The largest number of vulnerable IoT code examples was found in Arduino, followed by SO and Raspberry Pi. Memory-related vulnerabilities are on the rise across the sites. For example, of the 3595 mapped CVE instances, we find that 28.99% result in Denial of Service (DoS) errors, which is particularly harmful for network-reliant IoT devices such as smart cars. Our study results can make various IoT stakeholders aware of such vulnerable IoT code examples and can inform IoT researchers developing tools that help prevent developers from sharing such vulnerable code examples on the sites. [Abridged].
Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models through approaches such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs is still lacking. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them based on the type of explainability method used. We further provide the common performance metrics for GNN explanations and point out several future research directions.
Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations, and has committed to producing ethics guidelines and frameworks in Security and Defence. Australia is committed to the OECD's values-based principles for the responsible stewardship of trustworthy AI, as well as adopting a set of National AI ethics principles. While Australia has not adopted an AI governance framework specifically for Defence, Defence Science has published the 'A Method for Ethical AI in Defence' (MEAID) technical report, which includes a framework and pragmatic tools for managing ethical and legal risks for military applications of AI.