Research on quantum technology spans multiple disciplines: physics, computer science, engineering, and mathematics. The objective of this manuscript is to provide an accessible introduction to this emerging field for economists, centered on quantum computing and quantum money. We proceed in three steps. First, we discuss basic concepts in quantum computing and quantum communication, assuming knowledge of linear algebra and statistics but not of computer science or physics. This covers fundamental topics such as qubits, superposition, entanglement, quantum circuits, oracles, and the no-cloning theorem. Second, we provide an overview of quantum money, an early invention of the quantum communication literature that has recently been partially implemented in an experimental setting. One form of quantum money offers the privacy and anonymity of physical cash, the option to transact without the involvement of a third party, and the efficiency and convenience of a debit card payment; no other form of money combines all of these features. Finally, we review the quantum speedups that have been identified for algorithms used to solve and estimate economic models, including function approximation, linear systems analysis, Monte Carlo simulation, matrix inversion, principal component analysis, linear regression, interpolation, numerical differentiation, and true random number generation. We also discuss the difficulty of achieving quantum speedups and comment on common misconceptions about what quantum computing can achieve.
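To make the qubit, superposition, and entanglement concepts concrete, here is a minimal numpy sketch (ours, not the manuscript's): a Hadamard gate puts |0> into an equal superposition, and a Bell pair illustrates entanglement.

```python
import numpy as np

# Minimal numpy sketch (ours, not the manuscript's) of two basic notions.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Superposition: the Hadamard gate maps |0> to (|0> + |1>)/sqrt(2).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

# Born rule: outcome k is observed with probability |<k|psi>|^2.
print(np.abs(psi) ** 2)    # [0.5 0.5]

# Entanglement: the Bell pair (|00> + |11>)/sqrt(2) cannot be written as
# a product of single-qubit states; measuring one qubit fixes the other.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(np.abs(bell) ** 2)   # [0.5 0.  0.  0.5]
```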
Variational Quantum Algorithms (VQAs) may be a path to quantum advantage on Noisy Intermediate-Scale Quantum (NISQ) computers. A natural question is whether noise on NISQ devices places fundamental limitations on VQA performance. We rigorously prove a serious limitation for noisy VQAs: the noise causes the training landscape to exhibit a barren plateau (i.e., a vanishing gradient). Specifically, for the local Pauli noise considered, we prove that the gradient vanishes exponentially in the number of qubits $n$ if the depth of the ansatz grows linearly with $n$. These noise-induced barren plateaus (NIBPs) are conceptually distinct from noise-free barren plateaus, which are linked to random parameter initialization. Our result is formulated for a generic ansatz that includes as special cases the Quantum Alternating Operator Ansatz and the Unitary Coupled Cluster Ansatz, among others. For the former, our numerical heuristics demonstrate the NIBP phenomenon under a realistic hardware noise model.
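To build intuition for why noise flattens the landscape, consider a single-qubit cartoon (our toy, far simpler than the paper's $n$-qubit setting): each ansatz layer is followed by a depolarizing channel of strength p, which contracts the Bloch vector by a factor (1 - p), so the gradient of a cost $\langle Z\rangle$ shrinks like $(1-p)^L$ in the depth L. With L growing linearly in n, this is the exponential decay the theorem formalizes.

```python
import numpy as np

# Single-qubit cartoon of a noise-induced barren plateau (our toy, not
# the paper's n-qubit setting): L layers of Rx(theta), each followed by
# a depolarizing channel that contracts the Bloch vector by (1 - p).
# The gradient of <Z> therefore decays like (1 - p)^L in the depth L.

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def cost(thetas, p):
    rho = np.array([[1, 0], [0, 0]], dtype=complex)    # start in |0><0|
    for t in thetas:
        U = np.cos(t / 2) * I2 - 1j * np.sin(t / 2) * X
        rho = U @ rho @ U.conj().T
        rho = (1 - p) * rho + p * I2 / 2               # depolarizing noise
    return np.real(np.trace(Z @ rho))

rng = np.random.default_rng(0)
p, eps, samples = 0.05, 1e-6, 200
for L in (5, 10, 20, 40):
    grads = []
    for _ in range(samples):
        th = rng.uniform(0, 2 * np.pi, L)
        th2 = th.copy()
        th2[0] += eps
        grads.append(abs(cost(th2, p) - cost(th, p)) / eps)
    print(L, np.mean(grads))    # mean |gradient| shrinks ~ (1 - p)^L
```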
Quantum technology has the potential to revolutionize how we acquire and process experimental data to learn about the physical world. An experimental setup that transduces data from a physical system to a stable quantum memory, and processes that data using a quantum computer, could have significant advantages over conventional experiments in which the physical system is measured and the outcomes are processed using a classical computer. We prove that, in various tasks, quantum machines can learn from exponentially fewer experiments than conventional strategies require. The exponential advantage holds in predicting properties of physical systems, performing quantum principal component analysis on noisy states, and learning approximate models of physical dynamics. In some tasks, the quantum processing needed to achieve the exponential advantage can be modest; for example, one can simultaneously learn about many noncommuting observables by processing only two copies of the system. Conducting experiments with up to 40 superconducting qubits and 1300 quantum gates, we demonstrate that a substantial quantum advantage can be realized using today's relatively noisy quantum processors. Our results highlight how quantum technology can enable powerful new strategies for learning about nature.
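A small numerically checkable instance of the two-copy advantage is the purity: estimating tr(rho^2) from single-copy measurements requires tomography, whereas on two copies it is the expectation of the SWAP operator, tr[SWAP (rho (x) rho)] = tr(rho^2). A numpy sketch (our illustration, not the paper's experiment):

```python
import numpy as np

# Small instance of the two-copy advantage (our illustration, not the
# paper's experiment): the purity tr(rho^2) equals the expectation of the
# SWAP operator on two copies, tr[SWAP (rho (x) rho)], so it can be read
# off from a joint measurement instead of full single-copy tomography.

rng = np.random.default_rng(1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

# A noisy single-qubit state with Bloch vector of length 0.7.
v = rng.normal(size=3)
v = 0.7 * v / np.linalg.norm(v)
rho = 0.5 * (np.eye(2) + v[0] * X + v[1] * Y + v[2] * Z)

# SWAP on C^2 (x) C^2: permutes the basis states |01> and |10>.
SWAP = np.eye(4)[[0, 2, 1, 3]]

print(np.real(np.trace(rho @ rho)))                  # direct purity
print(np.real(np.trace(SWAP @ np.kron(rho, rho))))   # two-copy estimate
```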
We study infinite limits of neural network quantum states ($\infty$-NNQS), which exhibit both representational power through ensemble statistics and tractable gradient-descent dynamics. Ensemble averages of Rényi entropies are expressed in terms of neural network correlators, and architectures that exhibit volume-law entanglement are presented. A general framework is developed for studying the gradient-descent dynamics of neural network quantum states (NNQS), using a quantum state neural tangent kernel (QS-NTK). For $\infty$-NNQS the training dynamics simplify, since the QS-NTK becomes deterministic and constant. An analytic solution is derived for quantum state supervised learning, which allows an $\infty$-NNQS to recover any target wavefunction. Numerical experiments on finite and infinite NNQS in the transverse-field Ising model and the Fermi-Hubbard model demonstrate excellent agreement with theory. $\infty$-NNQS opens up new opportunities for studying entanglement and training dynamics in other physics applications, such as finding ground states.
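As a rough illustration of the kernel at the heart of this analysis, the sketch below (ours; the architecture, width, and inputs are placeholder choices) computes an empirical tangent kernel K(x, x') = grad_theta f(x) . grad_theta f(x') for a tiny network standing in for a wavefunction amplitude; the paper's result is that this kernel becomes deterministic and constant in the infinite-width limit.

```python
import numpy as np

# Empirical tangent kernel K(x, x') = grad_theta f(x) . grad_theta f(x')
# for a tiny real-valued network standing in for a wavefunction amplitude
# (our placeholder architecture and inputs, not the paper's). The paper's
# point: as width -> infinity this kernel becomes deterministic and stays
# constant during training, linearizing the dynamics.

rng = np.random.default_rng(0)
width = 64

def init_params():
    W1 = rng.normal(size=(width, 1))
    W2 = rng.normal(size=(1, width)) / np.sqrt(width)   # NTK-style scaling
    return [W1, W2]

def f(params, x):
    W1, W2 = params
    return (W2 @ np.tanh(W1 * x)).item()

def flat_grad(params, x, eps=1e-6):
    g = []
    base = f(params, x)
    for i, W in enumerate(params):
        for idx in np.ndindex(W.shape):
            p2 = [w.copy() for w in params]
            p2[i][idx] += eps
            g.append((f(p2, x) - base) / eps)
    return np.array(g)

params = init_params()
xs = [-1.0, 0.0, 1.0]
J = np.stack([flat_grad(params, x) for x in xs])
print(np.round(J @ J.T, 3))   # 3x3 empirical kernel on the three inputs
```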
A proof of work (PoW) is an important cryptographic construct enabling a party to convince others that they invested some effort in solving a computational task. Arguably, its main impact has been in the setting of cryptocurrencies such as Bitcoin and its underlying blockchain protocol, which received significant attention in recent years due to its potential for various applications as well as for solving fundamental distributed computing questions in novel threat models. PoWs enable the linking of blocks in the blockchain data structure, and thus the problem of interest is the feasibility of obtaining a sequence (chain) of such proofs. In this work, we examine the hardness of finding such a chain of PoWs against quantum strategies. We prove that the chain-of-PoWs problem reduces to a problem we call multi-solution Bernoulli search, whose quantum query complexity we establish. Effectively, this is an extension of a threshold direct product theorem to an average-case unstructured search problem. Our proof, adding to active recent efforts, simplifies and generalizes the recording technique due to Zhandry (Crypto 2019). In addition, we revisit the formal treatment of the security of the core of the Bitcoin consensus protocol, called the Bitcoin backbone (Eurocrypt 2015), against quantum adversaries, and show that its security holds under a quantum analogue of the ``honest majority'' assumption that we formulate. Our analysis indicates that the security of the Bitcoin backbone protocol is guaranteed provided that the number of adversarial quantum queries is bounded so that each quantum query is worth $O(p^{-1/2})$ classical ones, where $p$ is the probability of success of a single classical query to the protocol's underlying hash function. Somewhat surprisingly, the wait time for safe settlement in the case of quantum adversaries matches the safe settlement time in the classical case.
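For readers unfamiliar with PoWs, the sketch below shows a toy Bitcoin-style puzzle (a generic illustration, not the paper's formal model): find a nonce whose hash falls below a target, so each query succeeds independently with probability p. A Grover-type quantum search finds a solution in roughly $p^{-1/2}$ queries, which is the origin of the "each quantum query is worth $O(p^{-1/2})$ classical ones" accounting.

```python
import hashlib
import os

# Toy Bitcoin-style proof of work: find a nonce so that
# H(header || nonce) is below a difficulty target. Each hash query
# succeeds with probability p = target / 2^256, so a classical solver
# needs ~1/p queries in expectation, while Grover-type search needs
# only ~p^(-1/2) quantum queries.

def pow_solve(header: bytes, difficulty_bits: int) -> int:
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

header = os.urandom(32)
nonce = pow_solve(header, difficulty_bits=16)   # ~2^16 expected queries
print("nonce:", nonce)
```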
Entanglement represents "the" key resource for several applications of quantum information processing, ranging from quantum communications to distributed quantum computing. Despite its fundamental importance, the deterministic generation of maximally entangled qubits remains an open problem. Here, we design a novel generation scheme with two attractive features: i) it deterministically generates genuinely multipartite entangled (GME) states, and ii) it does not require any direct interaction between the qubits. Indeed, the only requirement is the ability to coherently control -- within the indefinite causal order framework -- the causal order among some unitaries acting on the qubits. Throughout the paper, we derive the conditions on the unitaries for deterministic generation and provide examples of their practical implementation. We conclude by discussing the scalability of the proposed scheme to higher-dimensional GME states and by introducing possible applications of the proposal to quantum networks.
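A minimal two-qubit cartoon of the mechanism (our sketch; the paper's scheme targets genuinely multipartite states across several qubits) is the quantum switch: a control qubit in |+> coherently controls whether U then V or V then U acts on a target, and when UV|psi> and VU|psi> are orthogonal, the control and target end up maximally entangled despite never interacting directly.

```python
import numpy as np

# Two-qubit cartoon of the quantum switch (our sketch; the paper's
# scheme produces genuinely multipartite entanglement): a control qubit
# in |+> coherently controls the order in which U and V act on a target,
# giving (|0>_c (x) UV|psi> + |1>_c (x) VU|psi>) / sqrt(2).

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Z = np.diag([1.0, -1.0]).astype(complex)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

U, V = H, Z
psi = ket0                                   # target starts in |0>
out = (np.kron(ket0, U @ V @ psi) + np.kron(ket1, V @ U @ psi)) / np.sqrt(2)

# Here UV|0> = |+> and VU|0> = |-> are orthogonal, so control and target
# are maximally entangled although they never interacted directly.
rho = np.outer(out, out.conj()).reshape(2, 2, 2, 2)
rho_c = np.trace(rho, axis1=1, axis2=3)      # reduced state of control
print(np.real(np.trace(rho_c @ rho_c)))      # purity 0.5 => maximal
```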
Quantum-resistant cryptography is cryptography that aims to deliver cryptographic functions and protocols that remain secure even if large-scale fault-tolerant quantum computers are built. NIST will soon announce the first public-key cryptography algorithms selected in its Post-Quantum Cryptography (PQC) standardization, which is the most important current effort in the field of quantum-resistant cryptography. This report provides an overview for security experts who do not yet have a deep understanding of quantum-resistant cryptography. It surveys the computational model of quantum computers; the quantum algorithms that affect cryptography the most; the risk of Cryptographically Relevant Quantum Computers (CRQCs) being built; the security of symmetric and public-key cryptography in the presence of CRQCs; the NIST PQC standardization effort; the migration to quantum-resistant public-key cryptography; the relevance of Quantum Key Distribution as a complement to conventional cryptography; and the relevance of Quantum Random Number Generators as a complement to current hardware Random Number Generators.
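To illustrate why CRQCs threaten today's public-key cryptography, the sketch below runs the classical post-processing of Shor's algorithm on a toy modulus, with a brute-force order finder standing in for the quantum subroutine (the step a quantum computer performs exponentially faster):

```python
from math import gcd
from random import randrange

# Classical post-processing of Shor's algorithm on a toy modulus. A
# brute-force order finder stands in for the quantum subroutine, which
# is the step a CRQC would perform exponentially faster.

def order(a: int, N: int) -> int:
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N: int) -> int:
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d                      # lucky guess shares a factor
        r = order(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            return gcd(pow(a, r // 2, N) - 1, N)

print(factor(15))   # prints 3 or 5
```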
Cryptography with quantum states exhibits a number of surprising and counterintuitive features. In a 2002 work, Barnum et al. argue that these features imply that digital signatures for quantum states are impossible (Barnum et al., FOCS 2002). In this work, we ask: can all forms of signing quantum data, even in a possibly weak sense, be completely ruled out? We give two results which shed significant light on this basic question. First, we prove an impossibility result for digital signatures for quantum data, which extends the result of Barnum et al. Specifically, we show that no nontrivial combination of correctness and security requirements can be fulfilled beyond what is achievable simply by measuring the quantum message and then signing the outcome. In other words, only classical signature schemes exist. We then show a positive result: a quantum state can be signed with the same security guarantees as classically, provided that it is also encrypted with the public key of the intended recipient. Following classical nomenclature, we call this notion quantum signcryption. Classically, signcryption is only interesting if it provides superior performance to encrypt-then-sign. Quantumly, it is far more interesting: it is the only signing method available. We develop "as-strong-as-classical" security definitions for quantum signcryption and give secure constructions based on post-quantum public-key primitives. Along the way, we show that a natural hybrid method of combining classical and quantum schemes can be used to "upgrade" a secure classical scheme to the fully-quantum setting, in a wide range of cryptographic settings including signcryption, authenticated encryption, and CCA security.
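To make the "measure, then sign the outcome" baseline concrete, here is a toy hash-based (hence plausibly post-quantum) Lamport one-time signature over a classical measurement record; this is a generic textbook construction, not the paper's scheme:

```python
import hashlib
import os

# Per the impossibility result, the only way to "sign" quantum data is
# to measure it and sign the classical outcome. Below, a toy Lamport
# one-time signature (hash-based) signs a classical bitstring standing
# in for the measurement record.

H = lambda x: hashlib.sha256(x).digest()
MSG_BITS = 256

def keygen():
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(MSG_BITS)]
    pk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, pk

def sign(sk, msg: bytes):
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(MSG_BITS)]
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(pk, msg: bytes, sig) -> bool:
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(MSG_BITS)]
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))

sk, pk = keygen()
outcome = b"classical measurement record"
sig = sign(sk, outcome)
print(verify(pk, outcome, sig))   # True
```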
Defining multivariate generalizations of the classical univariate ranks has been a long-standing open problem in statistics. Optimal transport has been shown to offer a solution by transporting data points to a grid approximating a reference measure (Chernozhukov et al., 2017; Hallin, 2017; Hallin et al., 2021a). We take up this new perspective to develop and study multivariate analogues of popular correlation measures, including the sign covariance, Kendall's tau, and Spearman's rho. Our tests are genuinely distribution-free, hence valid irrespective of the actual (absolutely continuous) distributions of the observations. We present asymptotic distribution theory for these new statistics, providing asymptotic approximations to the critical values used for testing independence, as well as an analysis of the power of the resulting tests. Interestingly, we are able to establish a multivariate elliptical Chernoff-Savage property, which guarantees that, under ellipticity, our nonparametric tests of independence enjoy an asymptotic relative efficiency of one or larger when compared to Gaussian procedures. Hence, the nonparametric tests constitute a safe replacement for procedures based on multivariate Gaussianity.
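A minimal sketch of the transport-based ranks (our illustration; the reference grids in the cited works are constructed more carefully):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch of center-outward ranks via optimal transport (our toy; the
# cited works construct the reference grid more carefully): match the
# sample to a grid approximating the uniform distribution on the unit
# disk, and call the matched grid point the "rank" of each observation.

rng = np.random.default_rng(0)
n = 200
X = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)

# Stand-in reference grid: nR rings times nS directions on the unit disk.
nR, nS = 10, 20
radii = np.arange(1, nR + 1) / (nR + 1)
angles = 2 * np.pi * np.arange(nS) / nS
grid = np.array([[r * np.cos(a), r * np.sin(a)]
                 for r in radii for a in angles])

# Discrete optimal transport: assignment under squared Euclidean cost.
cost = ((X[:, None, :] - grid[None, :, :]) ** 2).sum(axis=-1)
rows, cols = linear_sum_assignment(cost)
ranks = grid[cols]   # ranks[i] is the multivariate rank of X[i]

# Distribution-freeness: whatever the (absolutely continuous) law of X,
# the empirical distribution of the ranks is the fixed grid itself.
```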
Even after decades of quantum computing development, examples of generally useful quantum algorithms with exponential speedups over classical counterparts are scarce. Recent progress in quantum algorithms for linear algebra has positioned quantum machine learning (QML) as a potential source of such useful exponential improvements. Yet, in an unexpected development, a recent series of "dequantization" results has equally rapidly removed the promise of exponential speedups for several QML algorithms. This raises the critical question of whether the exponential speedups of other linear-algebraic QML algorithms persist. In this paper, we study the quantum-algorithmic methods behind the algorithm for topological data analysis of Lloyd, Garnerone, and Zanardi through this lens. We provide evidence that the problem solved by this algorithm is classically intractable by showing that its natural generalization is as hard as simulating the one clean qubit model -- which is widely believed to require superpolynomial time on a classical computer -- and is thus very likely immune to dequantization. Based on this result, we provide a number of new quantum algorithms for problems such as rank estimation and complex network analysis, along with complexity-theoretic evidence for their classical intractability. Furthermore, we analyze the suitability of the proposed quantum algorithms for near-term implementation. Our results provide a number of useful applications for both full-blown and restricted quantum computers, each with a guaranteed exponential speedup over classical methods, recovering some of the potential for linear-algebraic QML to become one of quantum computing's killer applications.
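For context, the quantity the Lloyd-Garnerone-Zanardi algorithm estimates is a (normalized) Betti number, which classically is the kernel dimension of a combinatorial Laplacian. A tiny worked example (ours, for illustration):

```python
import numpy as np

# Classically, Betti_k is the kernel dimension of the k-th combinatorial
# Laplacian. Tiny example: a hollow triangle has one connected component
# (b0 = 1) and one 1-dimensional hole (b1 = 1).

# Boundary map from edges {01, 02, 12} to vertices {0, 1, 2}.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)

L0 = d1 @ d1.T   # graph Laplacian on vertices
L1 = d1.T @ d1   # no 2-simplices, so no d2 contribution

def kernel_dim(L, tol=1e-9):
    return int(np.sum(np.linalg.eigvalsh(L) < tol))

print("b0 =", kernel_dim(L0))   # 1
print("b1 =", kernel_dim(L1))   # 1
```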
Producing an accurate weather forecast and a reliable quantification of its uncertainty is an open scientific challenge. Ensemble forecasting is, so far, the most successful approach to producing relevant forecasts along with an estimation of their uncertainty. Its main limitations are the high computational cost and the difficulty of capturing and quantifying different sources of uncertainty, particularly those associated with model errors. In this work, proof-of-concept model experiments are conducted to examine the performance of ANNs trained to predict a corrected state of the system and the state uncertainty using only a single deterministic forecast as input. We compare different training strategies: one based on direct training, using the mean and spread of an ensemble forecast as targets, and others relying on an indirect training strategy, using a deterministic forecast as target, in which the uncertainty is implicitly learned from the data. For the indirect approach, two alternative loss functions are proposed and evaluated: one based on the data observation likelihood and the other on a local estimate of the error. The performance of the networks is examined at different lead times and in scenarios with and without model errors. Experiments using the Lorenz'96 model show that the ANNs are able to emulate some of the properties of ensemble forecasts, such as the filtering of the most unpredictable modes and a state-dependent quantification of the forecast uncertainty. Moreover, the ANNs provide a reliable estimation of the forecast uncertainty in the presence of model error.
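A sketch of the likelihood-based indirect loss (our rendering of the generic Gaussian negative log-likelihood; the paper's exact formulation may differ in detail):

```python
import numpy as np

# Generic Gaussian negative log-likelihood for a network that outputs a
# corrected state mu and a log-variance log_sigma2 per variable (our
# rendering; the paper's loss may differ in detail). Trained against a
# deterministic target y, sigma^2 is learned implicitly from the errors.

def gaussian_nll(mu, log_sigma2, y):
    # 0.5 * [log sigma^2 + (y - mu)^2 / sigma^2], averaged over samples
    return np.mean(0.5 * (log_sigma2 + (y - mu) ** 2 / np.exp(log_sigma2)))

# At the optimum, sigma^2 matches the local mean-squared error, yielding
# a state-dependent uncertainty estimate without running an ensemble.
mu = np.array([0.1, -0.2])
y = np.array([0.0, 0.1])
print(gaussian_nll(mu, np.log(np.array([0.05, 0.05])), y))
```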