The recent worldwide introduction of RemoteID (RID) regulations forces all Unmanned Aircraft (UAs), a.k.a. drones, to broadcast their identity and real-time location in plaintext on the wireless channel, for accounting and monitoring purposes. Although it improves drone monitoring and situational awareness, the RID rule also raises significant privacy concerns for UA operators, who can be easily tracked and whose identity information is disclosed through the plaintext broadcasts. In this paper, we propose $A^2RID$, a protocol suite for anonymous direct authentication and remote identification of heterogeneous commercial UAs. $A^2RID$ integrates and adapts protocols for anonymous message signing to work in the UA domain, coping with the constraints of commercial drones and the tight real-time requirements imposed by the RID regulation. Overall, the protocols in the $A^2RID$ suite allow a UA manufacturer to pick the configuration that best suits the capabilities and constraints of the drone, i.e., either a processing-intensive but memory-lightweight solution (namely, $CS-A^2RID$) or a computationally-friendly but memory-hungry approach (namely, $DS-A^2RID$). Besides formally defining the protocols and proving their security in our setting, we also implement and test them on real heterogeneous hardware platforms, i.e., the Holybro X-500 and the ESPcopter, and release the produced code as open source. For all the protocols, we experimentally demonstrate the capability of generating anonymous RemoteID messages well below the time bound of $1$ second required by RID, while at the same time having a limited impact on the energy budget of the drone.
We study the best-arm identification problem in multi-armed bandits with stochastic, potentially private rewards, where the goal is to identify the arm with the highest quantile at a fixed, prescribed level. First, we propose a (non-private) successive elimination algorithm for strictly optimal best-arm identification; we show that our algorithm is $\delta$-PAC, and we characterize its sample complexity. Further, we provide a lower bound on the expected number of pulls, showing that the proposed algorithm is essentially optimal up to logarithmic factors. Both the upper and lower complexity bounds depend on a special definition of the associated suboptimality gap, designed specifically for the quantile bandit problem; as we show, when the gap approaches zero, best-arm identification becomes impossible. Second, motivated by applications where the rewards are private, we provide a differentially private successive elimination algorithm whose sample complexity is finite even for distributions with infinite support size, and we characterize its sample complexity. Our algorithms do not require prior knowledge of either the suboptimality gap or other statistical information related to the bandit problem at hand.
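Since the abstract does not spell out the algorithm, the following is a minimal, hypothetical sketch of how quantile-based successive elimination can look in general (not the paper's algorithm): each surviving arm is pulled in every round, and an arm is eliminated once the upper confidence bound on its $\tau$-quantile falls below the best arm's lower bound. The DKW-style confidence schedule and all names are illustrative assumptions.

```python
import numpy as np

def quantile_successive_elimination(arms, tau=0.5, delta=0.1,
                                    batch=50, max_rounds=200):
    """Illustrative sketch, not the paper's algorithm: eliminate any arm
    whose upper confidence bound on the tau-quantile falls below the best
    arm's lower confidence bound, using DKW-style confidence widths."""
    rng = np.random.default_rng(0)
    active = list(range(len(arms)))
    samples = {a: [] for a in active}
    for r in range(1, max_rounds + 1):
        for a in active:                       # pull every surviving arm
            samples[a].extend(arms[a](rng, batch))
        n = r * batch
        # DKW bound on the empirical CDF, union-bounded over arms and
        # rounds, translated into lower/upper quantile confidence bounds
        eps = np.sqrt(np.log(2 * len(arms) * r * (r + 1) / delta) / (2 * n))
        lo = {a: np.quantile(samples[a], max(tau - eps, 0.0)) for a in active}
        hi = {a: np.quantile(samples[a], min(tau + eps, 1.0)) for a in active}
        best_lo = max(lo.values())
        active = [a for a in active if hi[a] >= best_lo]
        if len(active) == 1:                   # single survivor: done
            return active[0]
    return max(active, key=lambda a: np.quantile(samples[a], tau))
```

Here each arm is modeled as a callable `(rng, n) -> samples`; the elimination rule mirrors the standard successive elimination template, with quantile confidence intervals replacing the usual mean-based ones.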
The advent of the Internet of Things (IoT) era, where billions of devices and sensors are becoming more and more connected and ubiquitous, is putting a strain on traditional terrestrial networks, which may no longer be able to fulfill service requirements efficiently. This issue is further complicated in rural and remote areas with scarce and low-quality cellular coverage. To fill this gap, the research community is focusing on non-terrestrial networks (NTNs), where Unmanned Aerial Vehicles (UAVs), High Altitude Platforms (HAPs), and satellites can serve as aerial/space gateways to aggregate, process, and relay the IoT traffic. In this paper, we demonstrate this paradigm and evaluate how common Low-Power Wide Area Network (LPWAN) technologies, designed and developed to operate in IoT systems, work in NTNs. We then formalize an optimization problem to decide whether and how IoT traffic can be offloaded to LEO satellites to reduce the burden on terrestrial gateways.
We study efficient estimation of an interventional mean associated with a point exposure treatment under a causal graphical model represented by a directed acyclic graph without hidden variables. Under such a model, it may happen that a subset of the variables are uninformative in that failure to measure them neither precludes identification of the interventional mean nor changes the semiparametric variance bound for regular estimators of it. We develop a set of graphical criteria that are sound and complete for eliminating all the uninformative variables so that the cost of measuring them can be saved without sacrificing estimation efficiency, which could be useful when designing a planned observational or randomized study. Further, we construct a reduced directed acyclic graph on the set of informative variables only. We show that the interventional mean is identified from the marginal law by the g-formula (Robins, 1986) associated with the reduced graph, and the semiparametric variance bounds for estimating the interventional mean under the original and the reduced graphical model agree. This g-formula is an irreducible, efficient identifying formula in the sense that the nonparametric estimator of the formula, under regularity conditions, is asymptotically efficient under the original causal graphical model, and no formula with such property exists that only depends on a strict subset of the variables.
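For readers unfamiliar with it, the g-formula (Robins, 1986) referenced above identifies the interventional mean via the truncated factorization of the DAG; in generic notation (not the paper's), for a point exposure $A$ set to $a$ in a DAG over variables $V$ with outcome $Y$:

```latex
\mathbb{E}[Y(a)]
  \;=\; \sum_{v \,:\, v_A = a} y \prod_{V_j \in V \setminus \{A\}}
        p\big(v_j \mid \mathrm{pa}(V_j)\big),
```

which, in the simplest point-exposure case with baseline covariates $L$, reduces to $\mathbb{E}[Y(a)] = \sum_{l} \mathbb{E}[Y \mid A=a, L=l]\, p(l)$. The reduced g-formula discussed in the paper is this expression computed on the reduced graph over the informative variables only.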
Mainstream methods for clinical trial design do not yet use prior probabilities of clinical hypotheses, mainly due to a concern that poor priors may lead to weak designs. To address this concern, we illustrate a conservative approach to trial design ensuring that the frequentist operating characteristics of the primary trial outcome are stronger than the design prior. Compared to current approaches to Bayesian design, we focus on defining a sample size cost commensurate with the prior, to guard against the possibility of prior-data conflict. Our approach is ethical, in that it calls for quantification of the level of clinical equipoise at the design stage and requires that the design be able to disturb this initial equipoise by a pre-specified amount. Four examples are discussed, illustrating the design of phase II-III trials with binary or time-to-event endpoints. Sample sizes are shown to be conducive to strong levels of overall evidence, whether positive or negative, increasing the conclusiveness of the design and the associated trial outcome. The levels of negative evidence provided by standard group sequential designs are found to be negligible, underscoring the importance of complementing traditional efficacy boundaries with futility rules.
The introductory programming sequence has been the focus of much research in computing education. The recent advent of several viable and freely available AI-driven code generation tools presents immediate opportunities and challenges in this domain. In this position paper, we argue that the community needs to act quickly in deciding what possible opportunities can and should be leveraged, and how, while also working on how to overcome or otherwise mitigate the possible challenges. Assuming that the effectiveness and proliferation of these tools will continue to progress rapidly, without quick, deliberate, and concerted efforts, educators will lose the opportunity to help shape what opportunities come to be and what challenges will endure. With this paper we aim to seed this discussion within the computing education community.
Causal phenomena associated with rare events occur across a wide range of engineering problems, such as risk-sensitive safety analysis, accident analysis and prevention, and extreme value theory. However, current methods for causal discovery are often unable to uncover causal links between random variables in a dynamic setting that manifest only when the variables first experience low-probability realizations. To address this issue, we introduce a novel statistical independence test on data collected from time-invariant dynamical systems in which rare but consequential events occur. In particular, we exploit the time-invariance of the underlying data to construct a superimposed dataset of the system states preceding rare events that occur at different timesteps. We then design a conditional independence test on the reorganized data. We provide non-asymptotic sample complexity bounds for the consistency of our method, and validate its performance across various simulated and real-world datasets, including incident data collected from the Caltrans Performance Measurement System (PeMS). Code containing the datasets and experiments is publicly available.
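As a rough illustration of the superimposed-dataset idea (the paper's concrete test statistic may differ), one can pool the state observed a fixed lag before each rare event, across trajectories and timesteps, and then run a standard conditional independence test on the pooled rows. The partial-correlation test below is a hypothetical stand-in, and all names are illustrative:

```python
import numpy as np

def superimpose_before_events(trajectories, is_rare, lag=1):
    """Pool the state observed `lag` steps before each rare event, across
    trajectories and timesteps (valid if the dynamics are time-invariant)."""
    rows = []
    for traj in trajectories:              # traj: array of shape (T, d)
        for t in range(lag, len(traj)):
            if is_rare(traj[t]):
                rows.append(traj[t - lag])
    return np.asarray(rows)

def partial_corr(x, y, z):
    """Partial correlation of x and y given z -- the statistic behind a
    Gaussian conditional independence test (a stand-in, for illustration)."""
    Z = np.column_stack([np.ones(len(z)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # residualize on z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])
```

A value of `partial_corr` near zero on the superimposed rows is consistent with conditional independence; the paper's actual test and its non-asymptotic guarantees are, of course, more involved.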
Blockchain is an emerging decentralized technology for data collection, sharing, and storage, which has provided transparent, secure, tamper-proof, and robust ledger services for various real-world use cases. Recent years have witnessed notable developments of blockchain technology itself as well as of blockchain-adopting applications. Most existing surveys limit their scope to a few particular issues of blockchain or its applications, making it hard to depict the general picture of the current, vast blockchain ecosystem. In this paper, we investigate recent advances in both blockchain technology and its most active research topics in real-world applications. We first review the recent developments of consensus mechanisms and storage mechanisms in general blockchain systems. Then, an extensive literature review is conducted on blockchain-enabled IoT, edge computing, federated learning, and several emerging applications including healthcare, the COVID-19 pandemic, social networks, and supply chains, with the specific research topics of each discussed in detail. Finally, we discuss future directions, challenges, and opportunities in both academia and industry.
Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world, and it attracts heavy investment from both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI: its history, its potential, and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations, methods, and machine learning techniques for AI are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what to do in the future. In the appendix, we look at the development of AI education, especially from the perspective of the contents at our own university.
Games and simulators can be a valuable platform to execute complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by supporting development of algorithms with complex benchmarks and the capability to rapidly iterate over new ideas. The success of artificial intelligence algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community, which aims to explore similar techniques in military counterpart scenarios. Aiming to bridge the connection between games and military applications, this work discusses past and current efforts on how games and simulators, together with artificial intelligence algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.
Artificial Intelligence (AI) is rapidly being integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations need to be set on what AI can achieve in the foreseeable future. This paper argues that AI could lead to a fragility trap, whereby the delegation of C2 functions to an AI could increase the fragility of C2, resulting in catastrophic strategic failures. This calls for a new framework for AI in C2 to avoid this trap. We argue that antifragility, along with agility, should form the core design principles for AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only survive within a complex operational environment, it will also thrive, benefiting from the inevitable shocks and volatility of war.