We show that a topological quantum computer based on the evaluation of a Witten-Reshetikhin-Turaev TQFT invariant of knots can always be arranged so that the knot diagrams with which one computes are diagrams of hyperbolic knots. The diagrams can even be arranged to have additional nice properties, such as being alternating with minimal crossing number. Moreover, the reduction is polynomially uniform in the self-braiding exponent of the coloring object. Various complexity-theoretic hardness results regarding the calculation of quantum invariants of knots follow as corollaries. In particular, we argue that the hyperbolic geometry of knots is unlikely to be useful for topological quantum computation.
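For readers who want to experiment with the objects involved, hyperbolicity of a given knot can be checked computationally with the SnapPy package. The snippet below is our illustration, not part of the paper: it verifies that the figure-eight knot $4_1$ is hyperbolic by confirming that its complement admits a geometric triangulation.

\begin{verbatim}
# A minimal check, assuming the SnapPy package is installed (pip install snappy).
import snappy

M = snappy.Manifold("4_1")   # complement of the figure-eight knot
print(M.solution_type())     # 'all tetrahedra positively oriented' => hyperbolic
print(M.volume())            # ~2.0299, volume of the figure-eight complement
\end{verbatim}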
Despite numerous advances in the field and a seemingly ever-increasing amount of investment, we are still some years away from seeing a production quantum computer in action. However, it is possible to make educated guesses about the operational difficulties and challenges that may be encountered in practice. We can be reasonably confident that the early machines will be hybrid, with the quantum devices used much as current accelerators such as FPGAs or GPUs are used today. Compilers, libraries and the other tools currently relied upon for software development will have to evolve, or be reinvented, to support the new technology, and training courses will have to be rethought completely rather than ``just'' updated alongside them. The workloads making best use of these hybrid machines will initially be few, before rapidly increasing in diversity, as happened with the uptake of GPUs and other new technologies in the past. This growth will be helped by an increasing number of supporting libraries and development tools, and by the gradual re-development of existing software to make use of the new quantum devices. Quantum computation is, however, very sensitive to noise, leading to frequent errors during execution, and despite many advances the problem of error correction remains largely unsolved. Moreover, quantum calculations, although asymptotically faster than their ``traditional'' HPC equivalents for some problems, still take time. While the profiling tools and programming approaches will have to change drastically, many of the skills honed in the current HPC industry will not suddenly become obsolete; they will continue to be useful in the quantum era.
Networks of atom-centered coordination octahedra commonly occur in inorganic and hybrid solid-state materials. Characterizing their spatial arrangements and geometric features is crucial for relating structure to properties across many materials families. The traditional approach of case-by-case inspection becomes prohibitive for discovering trends and similarities in large datasets. Here, we operationalize chemical intuition to automate the geometric parsing, quantification, and classification of coordination octahedral networks. We find axis-resolved tilting trends in ABO$_{3}$ perovskite polymorphs, which assist in detecting oxidation-state changes. Moreover, we develop a scale-invariant encoding scheme to represent these networks, which, combined with human-assisted unsupervised machine learning, allows us to taxonomize the inorganic framework polytypes in hybrid iodoplumbates (A$_x$Pb$_y$I$_z$). Consequently, we uncover a violation of Pauling's third rule and the design principles underpinning their topological diversity. Our results offer a glimpse into the vast design space of atomic octahedral networks and inform high-throughput, targeted screening of specific structure types.
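As a toy illustration of the geometric-parsing step, the sketch below pairs the six ligands of a single octahedron into opposite vertices and measures the tilt of each resulting axis against the crystallographic axes. The pairing heuristic and the reference axes are our own assumptions, not the paper's algorithm.

\begin{verbatim}
import numpy as np

def octahedron_axes(center, ligands):
    """Pair 6 ligands into opposite pairs; return the 3 unit axis vectors."""
    vecs = ligands - center
    remaining, axes = list(range(6)), []
    while remaining:
        i = remaining.pop(0)
        # the opposite vertex is the most anti-parallel remaining ligand
        j = min(remaining, key=lambda k: np.dot(vecs[i], vecs[k]))
        remaining.remove(j)
        axis = vecs[i] - vecs[j]
        axes.append(axis / np.linalg.norm(axis))
    return np.array(axes)

def tilt_angles(axes, reference=np.eye(3)):
    """Angle (deg) between each axis and its nearest reference axis."""
    cos = np.abs(axes @ reference.T)
    return np.degrees(np.arccos(np.clip(cos.max(axis=1), -1.0, 1.0)))

# toy example: an ideal octahedron rotated 10 degrees about z
t = np.radians(10.0)
R = np.array([[np.cos(t), -np.sin(t), 0],
              [np.sin(t),  np.cos(t), 0],
              [0, 0, 1]])
ideal = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
print(tilt_angles(octahedron_axes(np.zeros(3), ideal @ R.T)))  # ~[10, 10, 0]
\end{verbatim}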
Partial differential equations (PDEs) are ubiquitous in science and engineering. Prior quantum algorithms for solving the system of linear algebraic equations obtained from discretizing a PDE have a computational complexity that scales at least linearly with the condition number $\kappa$ of the matrices involved in the computation. For many practical applications, $\kappa$ scales polynomially with the size $N$ of the matrices, rendering a polynomial-in-$N$ complexity for these algorithms. Here we present a quantum algorithm with a complexity that is polylogarithmic in $N$ but independent of $\kappa$ for a large class of PDEs. Our algorithm generates a quantum state that enables extracting features of the solution. Central to our methodology is the use of a wavelet basis as an auxiliary system of coordinates in which the condition number of the associated matrices is made independent of $N$ by a simple diagonal preconditioner. We present numerical simulations showing the effect of the wavelet preconditioner for several differential equations. Our work could provide a practical way to boost the performance of quantum-simulation algorithms where standard methods are used for discretization.
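A toy classical demonstration of the preconditioning idea, using Haar wavelets for simplicity (our simplification: Haar is not smooth enough to achieve fully $N$-independent condition numbers for second-order operators, but the effect is already visible):

\begin{verbatim}
import numpy as np

def haar_matrix(n):
    """Orthogonal multi-level Haar wavelet transform matrix (n = 2^k)."""
    W, m, s = np.eye(n), n, 1 / np.sqrt(2)
    while m > 1:
        H = np.zeros((n, n))
        H[m:, m:] = np.eye(n - m)   # finished detail coefficients pass through
        for i in range(m // 2):
            H[i, 2 * i] = H[i, 2 * i + 1] = s                       # averages
            H[m // 2 + i, 2 * i], H[m // 2 + i, 2 * i + 1] = s, -s  # details
        W, m = H @ W, m // 2
    return W

N = 256
A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)  # 1D Dirichlet Laplacian
W = haar_matrix(N)
B = W @ A @ W.T                                       # operator in wavelet basis
D = np.diag(1 / np.sqrt(np.diag(B)))                  # diagonal preconditioner
print(np.linalg.cond(A))          # grows like N^2 (~2.7e4 here)
print(np.linalg.cond(D @ B @ D))  # far smaller; grows only mildly with N
\end{verbatim}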
We present a new approach to semiparametric inference using corrected posterior distributions. The method allows us to leverage the adaptivity, regularization and predictive power of nonparametric Bayesian procedures to estimate low-dimensional functionals of interest without being restricted by the holistic Bayesian formalism. Starting from a conventional nonparametric posterior, we target the functional of interest by transforming the entire distribution with a Bayesian bootstrap correction. We provide conditions for the resulting \emph{one-step posterior} to possess calibrated frequentist properties and specialize the results for several canonical examples: the integrated squared density, the mean of a missing-at-random outcome, and the average causal treatment effect on the treated. The procedure is computationally attractive, requiring only a simple, efficient post-processing step that can be attached to any posterior sampling algorithm. Using the ACIC 2016 causal data analysis competition, we illustrate that our approach can outperform the existing state of the art through the propagation of Bayesian uncertainty.
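A schematic of the post-processing step for one of the canonical examples, the integrated squared density $\psi(f) = \int f^2$, whose efficient influence function is $\phi_f(x) = 2f(x) - 2\psi(f)$. The kernel-density ``posterior draws'' below are a crude stand-in for a genuine nonparametric posterior, so this illustrates only the correction, not the full method.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=500)              # observed data
n, B = len(X), 300
grid = np.linspace(-5, 5, 2001)
dx = grid[1] - grid[0]

def kde(points, data, h=0.3):
    """Gaussian kernel density estimate evaluated at `points`."""
    z = (points[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

one_step = np.empty(B)
for b in range(B):
    fb_data = rng.choice(X, size=n)   # stand-in "posterior draw" of f
    psi_b = np.sum(kde(grid, fb_data)**2) * dx    # plug-in psi(f_b)
    W = rng.dirichlet(np.ones(n))                 # Bayesian bootstrap weights
    phi = 2 * kde(X, fb_data) - 2 * psi_b         # influence fn at the data
    one_step[b] = psi_b + np.sum(W * phi)         # corrected draw

print(1 / (2 * np.sqrt(np.pi)))                   # truth for N(0,1) data
print(one_step.mean(), np.quantile(one_step, [0.025, 0.975]))
\end{verbatim}

Each corrected draw adds a Dirichlet-weighted average of the influence function to the plug-in value, which is the Bayesian bootstrap correction in its simplest form.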
A central challenge in the verification of quantum computers is benchmarking their performance as a whole and demonstrating their computational capabilities. In this work, we find a model of quantum computation, Bell sampling, that can be used for both of these tasks and thus provides an ideal stepping stone towards fault tolerance. In Bell sampling, we measure two copies of a state prepared by a quantum circuit in the transversal Bell basis. We show that the Bell samples are classically intractable to produce and, at the same time, constitute what we call a circuit shadow: from the Bell samples we can efficiently extract information about the quantum circuit preparing the state, as well as diagnose circuit errors. In addition to known properties that can be efficiently extracted from Bell samples, we give two new and efficient protocols: a test of circuit depth and an algorithm to estimate a lower bound on the number of T gates in the circuit. With some additional measurements, our algorithm learns a full description of states prepared by circuits with low T-count.
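The measurement primitive is easy to simulate for small systems: transversal CNOTs followed by Hadamards map the Bell basis to the computational basis, so Bell sampling reduces to standard-basis sampling of the transformed two-copy state. A toy state-vector implementation (ours, not the paper's code):

\begin{verbatim}
import numpy as np

def apply_gate(state, gate, qubits, n):
    """Apply a 2^k x 2^k gate to the listed qubits of an n-qubit state."""
    k = len(qubits)
    psi = np.moveaxis(state.reshape([2] * n), qubits, list(range(k)))
    shape = psi.shape
    psi = gate @ psi.reshape(2**k, -1)
    return np.moveaxis(psi.reshape(shape), list(range(k)), qubits).reshape(-1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.eye(4)[[0, 1, 3, 2]]          # control = first listed qubit

def bell_sample(psi, n, shots, rng):
    """Sample two copies of psi in the transversal Bell basis."""
    state = np.kron(psi, psi)           # |psi> tensor |psi> on 2n qubits
    for i in range(n):                  # rotate Bell basis to computational
        state = apply_gate(state, CNOT, [i, n + i], 2 * n)
        state = apply_gate(state, H, [i], 2 * n)
    p = np.abs(state) ** 2
    hits = rng.choice(len(p), size=shots, p=p / p.sum())
    return [format(h, f"0{2*n}b") for h in hits]

rng = np.random.default_rng(1)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # 2-qubit Bell state as input
print(bell_sample(bell, 2, 8, rng))
\end{verbatim}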
It has been shown that, for even $n$, evolving $n$ qubits under a Hamiltonian that is a sum of pairwise interactions between the particles can be used to exactly implement an $(n+1)$-qubit fanout gate via a particular constant-depth circuit [arXiv:quant-ph/0309163]. However, the coupling coefficients in the Hamiltonian considered in that paper are assumed to be all equal. In this paper, we generalize these results and show that for all $n$, including odd $n$, one can exactly implement an $(n+1)$-qubit parity gate, and hence, equivalently in constant depth, an $(n+1)$-qubit fanout gate, using a similar Hamiltonian with unequal couplings, and we give an exact characterization of which couplings are adequate to implement fanout via the same circuit. We also investigate pairwise couplings that satisfy an inverse square law, giving necessary and sufficient criteria for implementing fanout given spatial arrangements of identical qubits in two and three dimensions subject to this law. We use our criteria to give planar arrangements of four qubits that (together with a target qubit) are adequate to implement $5$-qubit fanout.
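The even/odd dichotomy for equal couplings can be checked directly. Since $\sum_{i<j} Z_i Z_j = \bigl((\sum_i Z_i)^2 - n\bigr)/2$, the evolution phase depends only on the magnetization $m$ of the computational basis state, and a short script (our own sanity check, not the paper's proof) confirms that at time $t = \pi/2$ the phases equal $(-1)^{\mathrm{parity}(x)}$ up to a global phase exactly when $n$ is even:

\begin{verbatim}
import numpy as np
from itertools import product

def evolution_phases(n, t):
    """Diagonal phases of exp(-i t sum_{i<j} Z_i Z_j) over all bitstrings."""
    phases = []
    for bits in product([0, 1], repeat=n):
        m = sum(1 - 2 * b for b in bits)                 # sum of Z eigenvalues
        phases.append(np.exp(-1j * t * (m**2 - n) / 2))  # sum Z_iZ_j=(m^2-n)/2
    return np.array(phases)

def implements_parity(n, t=np.pi / 2):
    """True iff the phases equal (-1)^parity(x) up to one global phase."""
    ph = evolution_phases(n, t)
    parity = np.array([(-1)**bin(x).count("1") for x in range(2**n)])
    g = ph[0] / parity[0]                                # candidate global phase
    return np.allclose(ph, g * parity)

for n in range(2, 9):
    print(n, implements_parity(n))   # True for even n, False for odd n
\end{verbatim}

Conjugating such a parity phase gate with Hadamards then yields fanout, via the usual parity/fanout equivalence mentioned above.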
We propose a novel measure of statistical depth, the metric spatial depth, for data residing in an arbitrary metric space. The measure assigns high (low) values to points located near (far from) the bulk of the data distribution, allowing us to quantify their centrality or outlyingness. This depth measure is shown to have highly interpretable geometric properties, making it appealing in object data analysis where standard descriptive statistics are difficult to compute. The proposed measure reduces to the classical spatial depth in a Euclidean space. In addition to studying its theoretical properties, and to provide intuition on the concept, we explicitly compute metric spatial depths in several different metric spaces. Finally, we showcase the practical usefulness of the metric spatial depth in outlier detection, non-convex depth region estimation and classification.
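In a Euclidean space the spatial depth can be written using distances alone, via $\langle x-y,\, x-z\rangle = \bigl(d(x,y)^2 + d(x,z)^2 - d(y,z)^2\bigr)/2$, which suggests a distance-only sample version that is well defined in any metric space. The sketch below implements this cosine-type formula; the paper's exact definition and normalization may differ, so treat it as intuition only (in the Euclidean case it equals one minus the squared norm of the average unit vector, a monotone transform of the classical spatial depth).

\begin{verbatim}
import numpy as np
from itertools import combinations

def metric_spatial_depth(x, data, dist):
    """Distance-only analogue of spatial depth of x w.r.t. the sample `data`.
    `dist(a, b)` is the metric; points coinciding with x are skipped."""
    dx = np.array([dist(x, y) for y in data])
    total, count = 0.0, 0
    for i, j in combinations(range(len(data)), 2):
        if dx[i] == 0 or dx[j] == 0:
            continue
        cos_ij = (dx[i]**2 + dx[j]**2 - dist(data[i], data[j])**2) \
                 / (2 * dx[i] * dx[j])
        total, count = total + cos_ij, count + 1
    return 1.0 - total / count

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2))
euclid = lambda a, b: np.linalg.norm(a - b)
print(metric_spatial_depth(np.zeros(2), data, euclid))         # central: high
print(metric_spatial_depth(np.array([5.0, 5.0]), data, euclid))  # outlier: low
\end{verbatim}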
In this paper we study the threshold model of \emph{geometric inhomogeneous random graphs} (GIRGs), a generative random graph model that is closely related to \emph{hyperbolic random graphs} (HRGs). These models have been observed to capture complex real-world networks well with respect to their structural and algorithmic properties. Following comprehensive studies of their \emph{connectivity}, i.e., which parts of the graphs are connected, we now have a good understanding of the circumstances under which a \emph{giant} component (one containing a constant fraction of the graph) emerges. While previous results are rather technical and challenging to work with, the goal of this paper is to provide more accessible proofs. At the same time we significantly improve the previously known probabilistic guarantees, showing that GIRGs contain a giant component with probability $1 - \exp(-\Omega(n^{(3-\tau)/2}))$ for graph size $n$ and a degree distribution with power-law exponent $\tau \in (2, 3)$. Building on this, we additionally derive insights about the connectivity of certain induced subgraphs of GIRGs.
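For concreteness, the threshold GIRG model as commonly stated can be sampled in a few lines: power-law weights $w_v$, uniform positions on the $d$-dimensional torus, and an edge $uv$ whenever the torus distance is at most $(w_u w_v / W)^{1/d}$ with $W = \sum_v w_v$. The parameter choices in this sketch are ours:

\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def sample_threshold_girg(n, tau=2.5, d=2, seed=0):
    """Threshold GIRG: power-law weights, uniform torus positions,
    edge uv iff torus-dist(x_u, x_v) <= (w_u * w_v / W)^(1/d)."""
    rng = np.random.default_rng(seed)
    w = (1 - rng.random(n)) ** (-1 / (tau - 1))  # Pareto weights, exponent tau
    x = rng.random((n, d))                       # positions on the unit torus
    diff = np.abs(x[:, None, :] - x[None, :, :])
    dist = np.linalg.norm(np.minimum(diff, 1 - diff), axis=2)  # torus metric
    thresh = (np.outer(w, w) / w.sum()) ** (1 / d)
    return (dist <= thresh) & ~np.eye(n, dtype=bool)

adj = sample_threshold_girg(1000)
_, labels = connected_components(csr_matrix(adj), directed=False)
print(np.bincount(labels).max() / len(labels))  # fraction in largest component
\end{verbatim}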
The quantum separability problem consists in deciding whether a bipartite density matrix is entangled or separable. In this work, we propose a machine learning pipeline for finding approximate solutions to this NP-hard problem in large-scale scenarios. We provide an efficient Frank-Wolfe-based algorithm to approximate the nearest separable density matrix and derive a systematic way of labeling density matrices as separable or entangled, allowing us to treat quantum separability as a classification problem. Our method is applicable to any two-qudit mixed state. Numerical experiments with quantum states of 3- and 7-dimensional qudits validate the efficiency of the proposed procedure and demonstrate that it scales up to thousands of density matrices with high entanglement-detection accuracy. This takes a step towards benchmarking quantum separability to support the development of more powerful entanglement detection techniques.
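A bare-bones version of the Frank-Wolfe step for two qubits (our sketch; the paper's large-scale pipeline is more elaborate): minimize $f(\sigma) = \|\sigma - \rho\|_F^2$ over the separable set, whose extreme points are pure product states, and approximate the linear minimization oracle by alternating minimum-eigenvector updates. All parameter choices are illustrative.

\begin{verbatim}
import numpy as np

def lmo_product_state(G, rng, dA=2, dB=2, iters=20):
    """Approximate argmin over product states |ab> of <ab|G|ab>."""
    G4 = G.reshape(dA, dB, dA, dB)
    b = rng.normal(size=dB) + 1j * rng.normal(size=dB)
    b /= np.linalg.norm(b)
    for _ in range(iters):
        MA = np.einsum('i,aibj,j->ab', b.conj(), G4, b)
        a = np.linalg.eigh(MA)[1][:, 0]          # min-eigenvalue eigenvector
        MB = np.einsum('a,aibj,b->ij', a.conj(), G4, a)
        b = np.linalg.eigh(MB)[1][:, 0]
    ab = np.kron(a, b)
    return np.outer(ab, ab.conj())

def nearest_separable(rho, steps=500, seed=0):
    """Frank-Wolfe for min ||sigma - rho||_F^2 over separable sigma."""
    rng = np.random.default_rng(seed)
    sigma = np.eye(rho.shape[0], dtype=complex) / rho.shape[0]
    for k in range(steps):
        vertex = lmo_product_state(2 * (sigma - rho), rng)  # LMO on gradient
        gamma = 2 / (k + 2)                                 # standard FW step
        sigma = (1 - gamma) * sigma + gamma * vertex
    return sigma

# Werner state rho = p|Phi+><Phi+| + (1-p)I/4, entangled iff p > 1/3
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
for p in (0.2, 0.8):
    rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4
    print(p, np.linalg.norm(nearest_separable(rho) - rho))
    # small residual for separable input, clearly positive for entangled
\end{verbatim}

The residual distance then serves as the label-generating quantity: near zero for separable inputs and bounded away from zero for entangled ones, as the Werner-state example illustrates.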
The reconstruction of quantum states from experimental measurements, often achieved using quantum state tomography (QST), is crucial for the verification and benchmarking of quantum devices. However, performing QST for a generic unstructured quantum state requires a number of state copies that grows \emph{exponentially} with the number of individual quanta in the system, even for optimal measurement settings. Fortunately, many physical quantum states, such as states generated by noisy, intermediate-scale quantum computers, are usually structured. In one dimension, such states are expected to be well approximated by matrix product operators (MPOs) with a finite matrix/bond dimension independent of the number of qubits, thereby enabling an efficient state representation. Nevertheless, it is still unclear whether efficient QST can be performed for these states in general. In this paper, we attempt to bridge this gap and establish theoretical guarantees for the stable recovery of MPOs using tools from compressive sensing and the theory of empirical processes. We begin by studying two types of random measurement settings: Gaussian measurements and Haar-random rank-one positive operator-valued measures (POVMs). We show that the information contained in an MPO with a finite bond dimension can be preserved using a number of random measurements that depends only \emph{linearly} on the number of qubits, assuming the measurements are free of statistical error. We then study MPO-based QST with physical quantum measurements through Haar-random rank-one POVMs that can be implemented on quantum computers. We prove that a number of state copies only \emph{polynomial} in the number of qubits is required to guarantee bounded recovery error of an MPO state.
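The parameter-counting point behind the efficiency claims is easy to make concrete: an MPO with fixed bond dimension $\chi$ describes a $2^n \times 2^n$ operator using only $O(n\chi^2)$ numbers. A toy construction (ours) that builds a random MPO and contracts it back to a dense matrix:

\begin{verbatim}
import numpy as np

def random_mpo(n, chi, seed=0):
    """Random MPO cores A_k of shape (chi_left, 2, 2, chi_right)."""
    rng = np.random.default_rng(seed)
    dims = [1] + [chi] * (n - 1) + [1]
    return [rng.normal(size=(dims[k], 2, 2, dims[k + 1])) for k in range(n)]

def mpo_to_matrix(cores):
    """Contract an MPO into the dense 2^n x 2^n matrix (small n only)."""
    out = cores[0]                              # shape (1, 2, 2, chi)
    for A in cores[1:]:
        out = np.einsum('lrcm,mxyn->lrxcyn', out, A)
        L, r, x, c, y, R = out.shape
        out = out.reshape(L, r * x, c * y, R)   # merge row and column indices
    return out[0, :, :, 0]

n, chi = 8, 4
cores = random_mpo(n, chi)
print(sum(A.size for A in cores), 4**n)   # O(n chi^2) numbers vs 4^n entries
print(mpo_to_matrix(cores).shape)         # (256, 256)
\end{verbatim}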