The European Union's (EU) General Data Protection Regulation, a comprehensive set of technology regulations, grants EU citizens the right to an explanation for automated decisions that significantly affect their lives. This poses a substantial challenge, as many of today's state-of-the-art algorithms are effectively unexplainable black boxes. Simultaneously, we have seen the emergence of the fields of quantum computation and quantum AI. Due to the fragile nature of quantum information, the problem of explainability is amplified, since measuring a quantum system destroys the information it carries. As a result, there is a need for post-hoc explanations of quantum AI algorithms. In the classical setting, the cooperative game theory concept of the Shapley value has been adapted for post-hoc explanations. However, this approach does not translate trivially to quantum computing and can be exponentially difficult to implement if not handled with care. We propose a novel algorithm that reduces the problem of accurately estimating the Shapley values of a quantum algorithm to the far simpler problem of estimating the true average of a binomial distribution, which can be done in polynomial time.
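To make the flavor of such a reduction concrete, here is a minimal classical sketch of permutation-sampling Shapley estimation, where each sampled marginal contribution plays the role of one draw from a distribution whose mean is the Shapley value; the value function `value_fn` and the toy game are illustrative assumptions, not the quantum algorithm of the abstract.

```python
import random

def shapley_estimate(value_fn, players, target, num_samples=10_000):
    """Monte Carlo estimate of the Shapley value of `target`.

    Each sample draws a random coalition of predecessors (as induced by a
    uniformly random permutation) and records the marginal contribution of
    `target`; the Shapley value is the mean of these samples, so the error
    shrinks as O(1/sqrt(num_samples))."""
    total = 0.0
    others = [p for p in players if p != target]
    for _ in range(num_samples):
        random.shuffle(others)
        cut = random.randint(0, len(others))       # number of players preceding `target`
        coalition = frozenset(others[:cut])
        total += value_fn(coalition | {target}) - value_fn(coalition)
    return total / num_samples

# Toy value function: each present player contributes equally (purely illustrative).
players = {0, 1, 2, 3}
v = lambda S: len(S) / len(players)
print(shapley_estimate(v, players, target=0))  # should be close to 0.25
```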
We show that a topological quantum computer based on the evaluation of a Witten-Reshetikhin-Turaev TQFT invariant of knots can always be arranged so that the knot diagrams with which one computes are diagrams of hyperbolic knots. The diagrams can even be arranged to have additional nice properties, such as being alternating with minimal crossing number. Moreover, the reduction is polynomially uniform in the self-braiding exponent of the coloring object. Various complexity-theoretic hardness results regarding the calculation of quantum invariants of knots follow as corollaries. In particular, we argue that the hyperbolic geometry of knots is unlikely to be useful for topological quantum computation.
Convex splitting is a powerful technique in quantum information theory, used to prove the achievability of numerous information-processing protocols such as quantum state redistribution and quantum network channel coding. In this work, we establish a one-shot error exponent and a one-shot strong converse for convex splitting with trace distance as the error criterion. Our results show that the derived error exponent (respectively, strong converse exponent) is positive if and only if the rate is inside (respectively, outside) the achievable region. This leads to new one-shot exponent results for various tasks, such as communication over quantum wiretap channels, secret key distillation, one-way quantum message compression, quantum measurement simulation, and quantum channel coding with side information at the transmitter. We also establish a near-optimal one-shot characterization of the sample complexity of convex splitting, which yields matching second-order asymptotics. This in turn leads to stronger one-shot analyses of many quantum information-theoretic tasks.
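For orientation, convex splitting refers (in a hedged paraphrase of the standard convex-split lemma from prior work; the notation below is assumed rather than taken from this abstract) to mixing over which of $n$ copies of the $B$ register carries the correlation with $A$:
\[
\tau_{A B_1 \cdots B_n} \;=\; \frac{1}{n}\sum_{i=1}^{n} \rho_{A B_i}\otimes \sigma_{B_1}\otimes\cdots\otimes\sigma_{B_{i-1}}\otimes\sigma_{B_{i+1}}\otimes\cdots\otimes\sigma_{B_n}.
\]
Roughly, once $\log n$ exceeds a (smoothed) max-information of $\rho_{AB}$ relative to $\rho_A\otimes\sigma_B$, the mixture $\tau$ becomes close in trace distance to the fully decoupled state $\rho_A\otimes\sigma_{B_1}\otimes\cdots\otimes\sigma_{B_n}$; the exponents in this work quantify how this closeness behaves as the rate $\log n$ moves inside or outside the achievable region.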
Quantum networks constitute a major part of quantum technologies. They will drastically boost distributed quantum computing by providing a scalable, modular architecture of quantum chips, or by establishing an infrastructure for measurement-based quantum computing. Moreover, they will provide the backbone of the future quantum internet, allowing for strong security guarantees. Interestingly, the communication advantages that quantum networks would provide rely on entanglement distribution, which suffers from high latency in protocols based on Bell-pair distribution and bipartite entanglement swapping. Moreover, existing algorithms for multipartite entanglement routing suffer from intractability issues that make them unsolvable exactly in polynomial time. In this paper, we investigate a new approach to graph state distribution in quantum networks that relies on local quantum coding (LQC) isometries and on multipartite state transfer. Additionally, single-shot bounds for stabilizer state distribution are provided. Analogously to network coding, these bounds are shown to be achievable when appropriate isometries/stabilizer codes are chosen in the relay nodes, which yields lower-latency entanglement distribution. Finally, the advantages of the protocol with respect to different figures of merit of the network are presented.
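The LQC protocol itself is not spelled out in the abstract, but the objects being distributed are graph states, which admit simple local manipulation rules. As background only, here is a minimal sketch (assuming the `networkx` package) of local complementation, a standard local-Clifford equivalence operation on graph states; it is not the paper's distribution protocol.

```python
import networkx as nx

def local_complementation(g: nx.Graph, v) -> nx.Graph:
    """Complement the subgraph induced by the neighbourhood of v.
    On graph states this corresponds to a local Clifford operation, so the
    input and output graphs describe locally equivalent entanglement resources."""
    h = g.copy()
    nbrs = list(g.neighbors(v))
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            a, b = nbrs[i], nbrs[j]
            if h.has_edge(a, b):
                h.remove_edge(a, b)
            else:
                h.add_edge(a, b)
    return h

# A 4-node path graph state; local complementation at node 1 toggles the
# edge between its neighbours 0 and 2.
path = nx.path_graph(4)
print(sorted(local_complementation(path, 1).edges()))
```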
Accurately estimating the probability of failure for safety-critical systems is important for certification. Estimation is often challenging due to high-dimensional input spaces, dangerous test scenarios, and computationally expensive simulators; thus, efficient estimation techniques are important to study. This work reframes the problem of black-box safety validation as a Bayesian optimization problem and introduces an algorithm, Bayesian safety validation, that iteratively fits a probabilistic surrogate model to efficiently predict failures. The algorithm is designed to search for failures, compute the most likely failure, and estimate the failure probability over an operating domain using importance sampling. We introduce a set of three acquisition functions that focus on reducing uncertainty by covering the design space, optimizing the analytically derived failure boundaries, and sampling the predicted failure regions. Although we are mainly concerned with systems that output only a binary indication of failure, we show that our method also works well when more output information is available. Results show that Bayesian safety validation achieves a better estimate of the probability of failure using orders of magnitude fewer samples and performs well across various safety validation metrics. We demonstrate the algorithm on three test problems with access to ground truth and on a real-world safety-critical subsystem common in autonomous flight: a neural network-based runway detection system. This work is open sourced and is currently being used to supplement the FAA certification process of the machine learning components for an autonomous cargo aircraft.
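As a rough illustration of the workflow (not the paper's implementation), the sketch below fits a Gaussian-process surrogate to binary pass/fail observations, greedily samples the predicted failure region as a stand-in for the three acquisition functions, and estimates the failure probability by plain Monte Carlo over the surrogate rather than importance sampling; the toy system `system_fails` and all hyperparameters are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Hypothetical black-box system: fails only in one corner of a 2-D operating domain.
def system_fails(x):
    return float(x[0] > 0.8 and x[1] > 0.7)

rng = np.random.default_rng(0)

# Initial space-filling samples plus one known failure seed so both classes are present.
X = np.vstack([rng.uniform(0, 1, size=(20, 2)), [[0.9, 0.9]]])
y = np.array([system_fails(x) for x in X])

# Simplified acquisition loop: repeatedly refine the predicted failure region.
for _ in range(10):
    gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.2)).fit(X, y)
    cand = rng.uniform(0, 1, size=(500, 2))
    x_next = cand[np.argmax(gp.predict_proba(cand)[:, 1])]
    X = np.vstack([X, x_next])
    y = np.append(y, system_fails(x_next))

# Estimate the failure probability over the operating domain from the surrogate.
gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=0.2)).fit(X, y)
grid = rng.uniform(0, 1, size=(20_000, 2))
print("estimated P(fail) ~", gp.predict_proba(grid)[:, 1].mean())
```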
Collective decision-making is crucial to information and communication systems. Decision conflicts among agents hinder the maximization of the potential utility of the entire system. Quantum processes can realize conflict-free joint decisions between two agents using entangled photons or the quantum interference of photons carrying orbital angular momentum (OAM). However, previous studies have always produced symmetric joint decisions. Although this property helps maintain and preserve equality, it cannot resolve disparities. Global challenges such as ethics and equity are recognized in the field of responsible artificial intelligence as part of the responsible research and innovation paradigm. Thus, decision-making systems must not only preserve existing equality but also tackle disparities. This study theoretically and numerically investigates asymmetric collective decision-making using the quantum interference of photons carrying OAM or entangled photons. Although asymmetry is successfully realized, photon loss is inevitable in the proposed models. The available range of asymmetry and the method for obtaining a desired degree of asymmetry are analytically formulated.
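A purely classical numerical stand-in for the asymmetric, conflict-free behaviour described above (not the photon-based model itself) is a joint decision distribution that places zero probability on conflicts and skews the remaining mass by a tunable asymmetry parameter:

```python
import numpy as np

def joint_decision_matrix(asymmetry: float) -> np.ndarray:
    """Joint probability of (agent A's choice, agent B's choice) over two options.
    Conflicts (both agents picking the same option) have zero probability;
    `asymmetry` in [0, 1] shifts the preferred option from agent B to agent A.
    This is a classical toy model, not the quantum-interference derivation."""
    p = np.zeros((2, 2))
    p[0, 1] = asymmetry          # A gets option 0, B gets option 1
    p[1, 0] = 1.0 - asymmetry    # A gets option 1, B gets option 0
    return p

rng = np.random.default_rng(1)
probs = joint_decision_matrix(asymmetry=0.7)
draws = rng.choice(4, size=10_000, p=probs.ravel())
a_gets_0 = np.mean(draws == 1)   # flattened index 1 corresponds to (A=0, B=1)
print("conflict-free, A favoured with empirical frequency", a_gets_0)
```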
SARSA, a classical on-policy control algorithm for reinforcement learning, is known to chatter when combined with linear function approximation: SARSA does not diverge but oscillates within a bounded region. However, little is known about how fast SARSA converges to that region and how large the region is. In this paper, we make progress towards this open problem by showing the convergence rate of projected SARSA to a bounded region. Importantly, the region is much smaller than the region onto which we project, provided that the magnitude of the reward is not too large. Existing works on the convergence of linear SARSA to a fixed point all require the Lipschitz constant of SARSA's policy improvement operator to be sufficiently small; our analysis instead applies to arbitrary Lipschitz constants and thus characterizes the behavior of linear SARSA in a new regime.
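For concreteness, a minimal sketch of projected linear SARSA on a small random MDP is given below; the features, softmax (Lipschitz) policy-improvement operator, step size, and projection radius are all illustrative assumptions rather than the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d = 5, 2, 3                            # states, actions, feature dimension
phi = rng.normal(size=(S, A, d))             # hypothetical state-action features
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel: P[s, a] is a distribution over next states
R = rng.uniform(-1, 1, size=(S, A))          # bounded rewards
gamma, alpha, radius = 0.9, 0.05, 10.0

def softmax_policy(w, s, temperature=1.0):
    """Lipschitz policy-improvement operator: softmax over estimated action values."""
    q = phi[s] @ w
    z = np.exp((q - q.max()) / temperature)
    return z / z.sum()

def project(w, radius):
    """Project the weight vector onto an l2 ball (the 'projected' part of projected SARSA)."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

w = np.zeros(d)
s = 0
a = rng.choice(A, p=softmax_policy(w, s))
for _ in range(20_000):
    s_next = rng.choice(S, p=P[s, a])
    a_next = rng.choice(A, p=softmax_policy(w, s_next))
    td_error = R[s, a] + gamma * phi[s_next, a_next] @ w - phi[s, a] @ w
    w = project(w + alpha * td_error * phi[s, a], radius)
    s, a = s_next, a_next

print("final weight norm:", np.linalg.norm(w))
```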
The qubit routing problem, also known as the swap minimization problem, is a (classical) combinatorial optimization problem that arises in the design of compilers for quantum programs. While most existing studies have investigated its practical aspects, we study the qubit routing problem from the viewpoint of theoretical computer science. We concentrate on linear nearest neighbor (LNN) architectures of quantum computers, in which the graph topology is a path. Our results are threefold. (1) We prove that the qubit routing problem is NP-hard. (2) We give a fixed-parameter algorithm when the number of two-qubit gates is the parameter. (3) We give a polynomial-time algorithm when each qubit is involved in at most one two-qubit gate.
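To fix ideas, here is a small illustrative sketch (not the paper's algorithms) of routing on a path: before each two-qubit gate, adjacent SWAPs are inserted greedily until the two logical qubits are neighbours, giving an upper bound on the SWAP count that the NP-hard problem asks to minimize.

```python
def swaps_for_circuit(initial_layout, gates):
    """Greedy SWAP count on a linear-nearest-neighbour (path) layout: before each
    two-qubit gate, move the first logical qubit next to the second one using
    adjacent transpositions.  This is an illustrative upper bound, not optimal routing."""
    pos = {q: i for i, q in enumerate(initial_layout)}   # logical qubit -> wire index
    layout = list(initial_layout)
    swaps = 0
    for q1, q2 in gates:
        while abs(pos[q1] - pos[q2]) > 1:
            i = pos[q1]
            step = 1 if pos[q2] > i else -1
            other = layout[i + step]
            layout[i], layout[i + step] = other, q1
            pos[q1], pos[other] = i + step, i
            swaps += 1
    return swaps

print(swaps_for_circuit([0, 1, 2, 3], [(0, 3), (1, 2)]))  # 2 swaps, all for gate (0, 3)
```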
Courcelle's theorem and its adaptations to cliquewidth have shaped the field of exact parameterized algorithms and are widely considered the archetype of algorithmic meta-theorems. In the past decade, there has been growing interest in developing parameterized approximation algorithms for problems that are not captured by Courcelle's theorem and, in particular, are believed not to be fixed-parameter tractable under the associated width parameters. We develop a generalization of Courcelle's theorem that yields efficient approximation schemes for any problem that can be captured by an expanded logic we call Blocked CMSO, which is capable of making logical statements about the sizes of set variables via so-called weight comparisons. The logic controls weight comparisons via the quantifier-alternation depth of the involved variables, allowing full comparisons for zero-alternation variables and limited comparisons for one-alternation variables. We show that the developed framework threads the needle of tractability: on the one hand it can describe a broad range of approximable problems, while on the other hand we show that the restrictions of our logic cannot be relaxed under well-established complexity assumptions. The running time of our approximation scheme is polynomial in $1/\varepsilon$, allowing us to fully interpolate between faster approximate algorithms and slower exact algorithms. This provides a unified framework to explain the tractability landscape of graph problems parameterized by treewidth and cliquewidth, as well as of classical non-graph problems such as Subset Sum and Knapsack.
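As a familiar example of the approximation-scheme regime the framework covers (and explicitly not the meta-theorem itself), the classic trimming-based FPTAS for Subset Sum runs in time polynomial in the input size and $1/\varepsilon$:

```python
def approx_subset_sum(items, target, eps):
    """Classic FPTAS for Subset Sum by list trimming: returns a subset sum that is
    at least (1 - eps) times the best achievable sum <= target.  Included only as
    a familiar instance of the approximation-scheme regime, not the meta-theorem."""
    sums = [0]
    for x in items:
        merged = sorted(set(sums + [s + x for s in sums if s + x <= target]))
        # Trim: keep a value only if it exceeds the last kept value by a
        # (1 + eps/(2n)) factor, which bounds the list size polynomially.
        trimmed, last = [], -1.0
        for s in merged:
            if s > last * (1 + eps / (2 * len(items))):
                trimmed.append(s)
                last = s
        sums = trimmed
    return max(sums)

print(approx_subset_sum([104, 102, 201, 101], target=308, eps=0.2))
```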
Safety and performance are key enablers for autonomous driving: on the one hand we want our autonomous vehicles (AVs) to be safe, while at the same time their performance (e.g., comfort or progression) is key to adoption. To effectively walk the tightrope between safety and performance, AVs need to be risk-averse, but not entirely risk-avoidant. To facilitate safe-yet-performant driving, in this paper we develop a task-aware risk estimator that assesses the risk a perception failure poses to the AV's motion plan and uses this estimate to decide whether a safety maneuver needs to be triggered. If the failure has no bearing on the safety of the AV's motion plan, then regardless of how egregious the perception failure is, our task-aware risk estimator considers the failure to have a low risk; on the other hand, if a seemingly benign perception failure severely impacts the motion plan, then our estimator considers it to have a high risk. To estimate the task-aware risk, we first leverage the perception failure, detected by a perception monitor, to synthesize an alternative plausible model of the vehicle's surroundings. The risk due to the perception failure is then formalized as the "relative" risk to the AV's motion plan between the perceived and the alternative plausible scenario. We employ a statistical tool called a copula, which models tail dependencies between distributions, to estimate this risk. The theoretical properties of the copula allow us to compute probably approximately correct (PAC) estimates of the risk. We evaluate our task-aware risk estimator on NuPlan and compare it with established baselines, showing that the proposed risk estimator achieves the best F1-score (doubling the score of the best baseline) and exhibits a good balance between recall and precision, i.e., a good balance of safety and performance.
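To illustrate what a copula buys here (a generic Gaussian-copula example, not the paper's estimator or its PAC analysis), the sketch below computes a joint tail probability for two uniform marginals and shows how correlation in the copula inflates the probability of simultaneous extremes:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_tail_prob(u, v, rho):
    """P(U > u, V > v) for uniform marginals U, V joined by a Gaussian copula
    with correlation rho: a simple stand-in for how a copula captures the joint
    tail behaviour of two risk distributions."""
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = norm.ppf(u), norm.ppf(v)
    # Survival probability via inclusion-exclusion with the bivariate normal CDF.
    joint_cdf = multivariate_normal(mean=[0, 0], cov=cov).cdf([x, y])
    return 1.0 - u - v + joint_cdf

# With rho = 0 the joint tail factorises; with rho = 0.8 simultaneous extremes
# become far more likely at this level.
print(gaussian_copula_tail_prob(0.99, 0.99, rho=0.0))   # about 1e-4
print(gaussian_copula_tail_prob(0.99, 0.99, rho=0.8))   # noticeably larger
```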
Deep learning-based approaches have produced models with good insect classification accuracy; however, most of these models are suited to application in controlled environmental conditions. A primary emphasis of researchers is to deploy identification and classification models in real agricultural fields, which is challenging because input images that are wildly out of distribution (e.g., images of vehicles, animals, or humans, or a blurred image of an insect or of an insect class that the model was not trained on) can produce incorrect insect classifications. Out-of-distribution (OOD) detection algorithms provide an exciting avenue to overcome this challenge, as they ensure that a model abstains from making incorrect classification predictions on non-insect and/or untrained insect class images. We generate and evaluate the performance of state-of-the-art OOD algorithms on insect detection classifiers. These algorithms represent a diversity of methods for addressing the OOD problem. Specifically, we focus on extrusive algorithms, i.e., algorithms that wrap around a well-trained classifier without the need for additional co-training. We compare three OOD detection algorithms: (i) Maximum Softmax Probability, which uses the softmax value as a confidence score; (ii) a Mahalanobis distance-based algorithm, which uses a generative classification approach; and (iii) an energy-based algorithm that maps the input data to a scalar value called the energy. We performed an extensive series of evaluations of these OOD algorithms across three performance axes: (a) \textit{Base model accuracy}: how does the accuracy of the classifier impact OOD performance? (b) How does the \textit{level of dissimilarity to the domain} impact OOD performance? and (c) \textit{Data imbalance}: how sensitive is OOD performance to the imbalance in per-class sample size?
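For reference, two of the three scores can be computed directly from a classifier's logits; the sketch below (with hypothetical logits) computes the Maximum Softmax Probability and the energy score, while the Mahalanobis score additionally needs class-conditional feature statistics and is omitted for brevity.

```python
import numpy as np

def msp_score(logits):
    """Maximum Softmax Probability: higher means more in-distribution."""
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def energy_score(logits, temperature=1.0):
    """Energy score E(x) = -T * logsumexp(logits / T); by convention,
    lower (more negative) energy indicates in-distribution inputs."""
    z = logits / temperature
    m = z.max(axis=-1, keepdims=True)
    return -temperature * (m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1)))

# Hypothetical classifier outputs: a confident in-distribution insect image
# versus a diffuse, low-confidence out-of-distribution image.
in_dist = np.array([[9.0, 0.5, 0.2, 0.1]])
ood     = np.array([[1.1, 1.0, 0.9, 1.0]])
for name, scorer in [("MSP", msp_score), ("energy", energy_score)]:
    print(name, scorer(in_dist), scorer(ood))
```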