Secrecy comes at the cost of verifiability: contracts must be kept private, yet their compliance needs to be verified, a tension we call the secrecy-verifiability paradox. Existing smart contracts, however, are not designed to provide secrecy in this context without sacrificing verifiability. Without a trusted third party for notarization, a protocol for verifying smart contracts has to be built on cryptographic primitives. We propose a blockchain-based solution that overcomes this challenge by storing verifiable evidence as accessible data on a blockchain in an appropriate form. This allows the data to be verified cryptographically without being revealed. In addition, our proposal makes it possible to verify contracts whose original record has been destroyed, as long as the contract is genuine and the parties involved remember it.
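As a minimal sketch of this evidence-without-disclosure idea, the snippet below stores only a salted hash commitment of the contract, which would be published on chain, so the contract can later be checked against the public record without revealing its content. The function names and the choice of SHA-256 are illustrative assumptions, not the paper's exact construction.

```python
import hashlib
import os

def commit(contract_text: str) -> tuple[bytes, bytes]:
    """Create a salted hash commitment to a private contract."""
    salt = os.urandom(32)                       # random salt hides low-entropy contracts
    digest = hashlib.sha256(salt + contract_text.encode()).digest()
    return digest, salt                         # digest goes on chain; salt stays with the parties

def verify(on_chain_digest: bytes, contract_text: str, salt: bytes) -> bool:
    """Any party holding the contract and salt can later prove it matches the on-chain record."""
    return hashlib.sha256(salt + contract_text.encode()).digest() == on_chain_digest

# Usage: only the digest is published; the contract text never leaves the parties.
digest, salt = commit("Alice pays Bob 10 units by 2025-01-01.")
assert verify(digest, "Alice pays Bob 10 units by 2025-01-01.", salt)
```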
Since traffic administration at road intersections determines the capacity bottleneck of modern transportation systems, intelligent cooperative coordination for connected autonomous vehicles (CAVs) has been shown to be an effective solution. In this paper, we formulate a Bi-Level CAV intersection coordination framework in which the High-Level and Low-Level coordinators are tightly coupled. In the High-Level coordinator, where vehicles from multiple roads are involved, we take various metrics into consideration, including throughput, safety, fairness, and comfort. Motivated by the time-consuming space-time resource allocation framework in [1], we derive a low-complexity solution by transforming the original problem into a sequence of linear programs. Based on the "feasible tunnels" (FT) generated by the High-Level coordinator, we then propose a rapid gradient-based trajectory optimization strategy in the Low-Level planner to effectively avoid collisions with obstacles beyond the High-Level considerations, such as pedestrians or bicycles. Simulation results and laboratory experiments show that our proposed method outperforms existing strategies. Most notably, the proposed strategy can plan a vehicle trajectory in milliseconds, which is promising for real-world deployments. A detailed description of the coordination framework and an experiment demo can be found in the supplementary materials, or online at //youtu.be/MuhjhKfNIOg.
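As an illustration of the kind of linear-programming step a high-level coordinator could solve, the toy program below assigns intersection crossing times to three vehicles on conflicting paths while enforcing a minimum safety headway. The variables, arrival times, and objective are illustrative assumptions, not the formulation used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Three vehicles on conflicting paths must cross one at a time, in a fixed order.
arrival = np.array([2.0, 2.5, 4.0])   # earliest feasible crossing time of each vehicle (s)
headway = 1.5                          # minimum safety gap between consecutive crossings (s)

# Decision variables: crossing times t1, t2, t3. Objective: minimize total delay (throughput proxy).
c = np.ones(3)

# Headway constraints t_i - t_{i+1} <= -headway, written as A_ub @ t <= b_ub.
A_ub = np.array([[1.0, -1.0, 0.0],
                 [0.0, 1.0, -1.0]])
b_ub = np.array([-headway, -headway])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(a, None) for a in arrival])
print(res.x)   # [2.  3.5 5. ] -- crossing times respecting both arrivals and headways
```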
Multiway data analysis aims to infer patterns from data represented as a multi-dimensional array. Estimating covariance from multiway data is a fundamental statistical task; however, the intrinsic high dimensionality poses significant statistical and computational challenges. Recently, several factorized covariance models, paired with estimation algorithms, have been proposed to circumvent these obstacles. Despite several promising results on the algorithmic front, it remains under-explored whether and when such a model is valid. To address this question, we define the notion of Kronecker-separable multiway covariance, which can be written as a sum of $r$ tensor products of mode-wise covariances. The question of whether a given covariance can be represented as a separable multiway covariance is then reduced to an equivalent question about the separability of quantum states. Using this equivalence, it follows directly that a generic multiway covariance tends to be non-separable (even if $r \to \infty$) and, moreover, that finding its best separable approximation is NP-hard. These observations imply that factorized covariance models are restrictive and should be used only when there is a compelling rationale for such a model.
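For concreteness, in the two-way (matrix-variate) case the Kronecker-separable covariance referred to above takes the form
$$\Sigma \;=\; \sum_{i=1}^{r} A_i \otimes B_i, \qquad A_i \succeq 0, \; B_i \succeq 0,$$
where $A_i$ and $B_i$ are the mode-wise (row and column) covariance factors. Requiring each factor to be positive semidefinite mirrors the standard definition of a separable (unentangled) quantum state, which is the equivalence exploited above.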
Most existing signcryption schemes generate pseudonyms through a key generation center (KGC) and usually rely on bilinear pairing to construct authentication schemes. The drawback is that these schemes not only incur heavy computation and communication costs during information exchange, but also cannot eliminate the security risks arising from never updating pseudonyms, which makes them ill-suited to resource-constrained smart terminals in cyber-physical power systems (CPPSs). The main objective of this paper is to propose a novel, efficient signcryption scheme for resource-constrained smart terminals. First, a dynamical pseudonym self-generation mechanism (DPSGM) is explored to achieve privacy preservation and prevent the source from being linked. Second, combined with DPSGM, an efficient signcryption scheme based on certificateless cryptography (CLC) and elliptic curve cryptography (ECC) is designed, which significantly reduces the computation and communication burden. Furthermore, under the random oracle model (ROM), the confidentiality and non-repudiation of the proposed signcryption scheme are reduced to the elliptic curve discrete logarithm and computational Diffie-Hellman problems, which are assumed to be unsolvable in polynomial time, thereby guaranteeing security. Finally, the effectiveness and feasibility of the proposed signcryption scheme are confirmed by experimental analyses.
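For reference, the two hardness assumptions invoked above are standard: for a generator $P$ of an elliptic curve group of prime order $q$ and secret scalars $a, b \in \mathbb{Z}_q^*$,
$$\text{ECDLP: given } (P,\, aP), \text{ recover } a; \qquad \text{CDH: given } (P,\, aP,\, bP), \text{ compute } abP.$$
Both problems are assumed to admit no polynomial-time algorithm, which is what the security reductions in the random oracle model rely on.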
Recently, test-time adaptation (TTA) has been proposed as a promising solution for addressing distribution shifts. It allows a base model to adapt to an unforeseen distribution during inference by leveraging information from the batch of (unlabeled) test data. However, we uncover a novel security vulnerability of TTA based on the insight that predictions on benign samples can be impacted by malicious samples in the same batch. To exploit this vulnerability, we propose the Distribution Invading Attack (DIA), which injects a small fraction of malicious data into the test batch. DIA causes models using TTA to misclassify benign and unperturbed test data, providing an entirely new capability for adversaries that is infeasible in canonical machine learning pipelines. Through comprehensive evaluations, we demonstrate the high effectiveness of our attack on multiple benchmarks across six TTA methods. In response, we investigate two countermeasures to robustify existing insecure TTA implementations, following the principle of "security by design". Overall, we hope our findings can make the community aware of the utility-security tradeoffs in deploying TTA and provide valuable insights for developing robust TTA approaches.
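A minimal numpy illustration of why batch-level adaptation creates this attack surface: when normalization statistics are recomputed over the current test batch (as in batch-norm-based TTA methods), a few injected outliers shift the statistics applied to every benign sample in that batch. The numbers and the plain normalization below are illustrative assumptions, not the attack from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_normalize(x):
    """Test-time normalization using statistics of the *current* batch."""
    mu, sigma = x.mean(axis=0), x.std(axis=0) + 1e-5
    return (x - mu) / sigma

benign = rng.normal(loc=0.0, scale=1.0, size=(45, 8))      # 45 benign test samples, 8 features
malicious = np.full((5, 8), 20.0)                           # 5 injected outliers (~10% of the batch)

clean_out = batch_normalize(benign)
poisoned_out = batch_normalize(np.vstack([benign, malicious]))[:45]   # same benign rows, poisoned batch

# The benign samples' normalized features change even though the samples themselves did not:
# this shared-batch coupling is what a small fraction of malicious data can exploit.
print(np.abs(clean_out - poisoned_out).mean())   # noticeably greater than 0
```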
BACKGROUND: In 1967, Frederick Lord posed a conundrum that has confused scientists for over 50 years. Subsequently named Lord's 'paradox', the puzzle centres on the observation that two common approaches to the analysis of 'change' between two time-points can produce radically different results. Approach 1 involves analysing the follow-up minus baseline (i.e., the 'change score') and Approach 2 involves analysing the follow-up conditional on baseline. METHODS: At the heart of Lord's 'paradox' lies another puzzle concerning the use of 'change scores' in observational data. Using directed acyclic graphs and data simulations, we introduce, explore, and explain the 'paradox', consider the philosophy of change, and discuss the warnings and lessons of this 50-year puzzle. RESULTS: Understanding Lord's 'paradox' starts with recognising that a variable may change for three reasons: (A) 'endogenous change', which represents simple changes in scale; (B) 'random change', which represents change due to random processes; and (C) 'exogenous change', which represents all non-endogenous, non-random change. Unfortunately, in observational data, neither Approach 1 nor Approach 2 is able to reliably estimate the causes of 'exogenous change'. Approach 1 evaluates obscure estimands with little, if any, real-world interpretation. Approach 2 is susceptible to mediator-outcome confounding and cannot distinguish exogenous change from random change. Valid and precise estimates of a useful causal estimand instead require appropriate multivariate methods (such as g-methods) and more than two measures of the outcome. CONCLUSION: Lord's 'paradox' reiterates the dangers of analysing change scores in observational data and highlights the importance of considering causal questions within a causal framework.
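The contrast between the two approaches can be reproduced with a small simulation in the spirit of Lord's original example: a stable latent trait whose mean differs between two groups, measured noisily at two time-points, with no true effect of group membership on change. The data-generating process and effect sizes below are illustrative assumptions, not the simulations from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

group = rng.integers(0, 2, n)                       # two groups that differ at baseline
trait = rng.normal(50 + 10 * group, 10, n)          # stable latent trait, no change over time
baseline = trait + rng.normal(0, 5, n)              # noisy measurement at time 1
followup = trait + rng.normal(0, 5, n)              # noisy measurement at time 2

# Approach 1: regress the change score (follow-up minus baseline) on group.
change = followup - baseline
b1 = np.polyfit(group, change, 1)[0]

# Approach 2: regress follow-up on group, conditional on baseline.
X = np.column_stack([np.ones(n), group, baseline])
b2 = np.linalg.lstsq(X, followup, rcond=None)[0][1]

print(round(b1, 2), round(b2, 2))   # Approach 1 ~ 0, Approach 2 ~ 2: the 'paradoxical' disagreement
```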
"AI as a Service" (AIaaS) is a rapidly growing market, offering various plug-and-play AI services and tools. AIaaS enables its customers (users) - who may lack the expertise, data, and/or resources to develop their own systems - to easily build and integrate AI capabilities into their applications. Yet, it is known that AI systems can encapsulate biases and inequalities that can have societal impact. This paper argues that the context-sensitive nature of fairness is often incompatible with AIaaS' 'one-size-fits-all' approach, leading to issues and tensions. Specifically, we review and systematise the AIaaS space by proposing a taxonomy of AI services based on the levels of autonomy afforded to the user. We then critically examine the different categories of AIaaS, outlining how these services can lead to biases or be otherwise harmful in the context of end-user applications. In doing so, we seek to draw research attention to the challenges of this emerging area.
Human agency and autonomy have always been fundamental concepts in HCI. New developments, including ubiquitous AI and the growing integration of technologies into our lives, make these issues ever more pressing, as technologies increase their ability to influence our behaviours and values. However, in HCI, understandings of autonomy and agency remain ambiguous. Both concepts are used to describe a wide range of phenomena pertaining to sense of control, material independence, and identity. It is unclear to what degree these understandings are compatible, and how they support the development of research programs and practical interventions. We address this by reviewing 30 years of HCI research on autonomy and agency to identify current understandings, open issues, and future directions. From this analysis, we identify ethical issues and outline key themes to guide future work. We also articulate avenues for advancing clarity and specificity around these concepts, and for coordinating integrative work across different HCI communities.
Artificial intelligence (AI) has become a part of everyday conversation and of our lives. It is considered the new electricity that is revolutionizing the world, and it attracts heavy investment from both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs driven by over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI, its history, its potential, and its limitations. We believe that AI is a helper, not a ruler, of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations, methods, and machine learning approaches used in AI are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have so far been little used in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what should be done in the future. In the appendix, we look at the development of AI education, especially from the perspective of course contents at our own university.
Games and simulators can be a valuable platform for executing complex multi-agent, multiplayer, imperfect-information scenarios with significant parallels to military applications: multiple participants manage resources and make decisions that command assets to secure specific areas of a map or neutralize opposing forces. These characteristics have attracted the artificial intelligence (AI) community by supporting the development of algorithms with complex benchmarks and the capability to rapidly iterate over new ideas. The success of artificial intelligence algorithms in real-time strategy games such as StarCraft II has also attracted the attention of the military research community, which aims to explore similar techniques in military counterpart scenarios. Aiming to bridge the connection between games and military applications, this work discusses past and current efforts on how games and simulators, together with artificial intelligence algorithms, have been adapted to simulate certain aspects of military missions and how they might impact the future battlefield. This paper also investigates how advances in virtual reality and visual augmentation systems open new possibilities in human interfaces with gaming platforms and their military parallels.
In recent years, Artificial Intelligence (AI) has achieved notable momentum that may deliver on the best of expectations across many application sectors. For this to occur, the entire community faces the barrier of explainability, an inherent problem of the AI techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the previous hype of AI. Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect of what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at Deep Learning methods, for which a second taxonomy is built. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias due to its perceived lack of interpretability.