The ultimate random number generators are those certified to be unpredictable -- even to an adversary. Simple quantum processes promise to provide numbers that no physical observer could predict but, in practice, unwanted noise and imperfect devices can compromise fundamental randomness and protocol security. Certified randomness protocols have been developed that remove the need for trust in devices by taking advantage of nonlocality. Here, we use a photonic platform to implement our protocol, which operates in the quantum steering scenario, where randomness can be certified in a one-sided device-independent framework. We demonstrate a steering-based generator of public or private randomness and report the first generation of certified random bits with the detection loophole closed in this scenario.
Computational models are widely used in decision support for energy system operation, planning and policy. A system of models is often employed, where model inputs themselves arise from other computer models, with each model being developed by different teams of experts. Gaussian Process emulators can be used to approximate the behaviour of complex, computationally intensive models; this type of emulator both provides predictions and quantifies uncertainty about the predicted model output. This paper presents a computationally efficient framework for propagating uncertainty within a network of models with high-dimensional outputs used for energy planning. We present a case study from a UK county council that is interested in considering low-carbon technology options to transform its infrastructure. The system model employed for this case study is simple; however, the framework can be applied to larger networks of more complex models.
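As an illustration of the kind of emulator-based uncertainty propagation described above, here is a minimal Python sketch under stated assumptions: it uses scikit-learn, and the toy functions f1 and f2 are hypothetical stand-ins for a two-model chain (the real framework handles high-dimensional outputs and larger networks).

\begin{verbatim}
# Monte Carlo propagation of uncertainty through a two-emulator chain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f1 = lambda x: np.sin(3 * x)          # upstream simulator (assumed toy)
f2 = lambda y: y ** 2 + 0.5 * y       # downstream simulator (assumed toy)

# Fit an emulator to each simulator from a small design of runs.
X = np.linspace(0, 1, 8).reshape(-1, 1)
em1 = GaussianProcessRegressor(RBF(0.2)).fit(X, f1(X).ravel())
Y = np.linspace(-1, 1, 8).reshape(-1, 1)
em2 = GaussianProcessRegressor(RBF(0.5)).fit(Y, f2(Y).ravel())

# Propagate: sample from emulator 1's predictive distribution at a new
# input, push the samples through emulator 2, summarise the spread.
x_star = np.array([[0.7]])
y_samples = em1.sample_y(x_star, n_samples=200, random_state=1).ravel()
z_samples = em2.sample_y(y_samples.reshape(-1, 1), n_samples=1,
                         random_state=2).ravel()
print(z_samples.mean(), z_samples.std())  # propagated mean, uncertainty
\end{verbatim}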
The development of a future, global quantum communication network (or quantum internet) will enable high-rate private communication and entanglement distribution over very long distances. However, the large-scale performance of ground-based quantum networks (which employ photons as information carriers through optical fibres) is fundamentally limited by fibre quality and link length, with the latter being a primary design factor for practical network architectures. While these fundamental limits are well established for arbitrary network topologies, the question of how best to design global architectures remains open. In this work, we introduce a large-scale quantum network model called weakly-regular architectures. Such networks idealise network connectivity, offer the freedom to capture a broad class of spatial topologies, and remain analytically tractable. This allows us to investigate the effectiveness of large-scale networks with consistent connective properties and to unveil critical conditions under which end-to-end rates remain optimal. Furthermore, through a strict performance comparison of ideal, ground-based quantum networks with realistic satellite quantum communication protocols, we establish conditions under which satellites can be used to outperform fibre-based quantum infrastructure, rigorously proving the efficacy of satellite-based technologies for global quantum communications.
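For context, the fundamental point-to-point fibre limit alluded to above is the repeaterless PLOB bound, $-\log_2(1-\eta)$ secret bits per channel use for a pure-loss channel of transmissivity $\eta$. The short Python sketch below (our illustration, not the paper's model) shows how quickly this capacity decays with fibre length at a standard 0.2 dB/km attenuation, which is what motivates satellite links at global distances.

\begin{verbatim}
import math

def fibre_transmissivity(length_km, loss_db_per_km=0.2):
    # Exponential photon loss in optical fibre.
    return 10 ** (-loss_db_per_km * length_km / 10)

def plob_capacity(eta):
    # -log2(1 - eta); log1p keeps precision when eta is tiny.
    return -math.log1p(-eta) / math.log(2)

for L in (50, 100, 500, 1000):
    eta = fibre_transmissivity(L)
    print(f"{L:5d} km: eta = {eta:.3e}, "
          f"capacity = {plob_capacity(eta):.3e} bits/use")
\end{verbatim}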
Let $G$ be a graph of a network system with vertices, $V(G)$, representing physical locations and edges, $E(G)$, representing informational connectivity. A \emph{locating-dominating (LD)} set $S$ is a subset of vertices representing detectors capable of sensing an "intruder" at precisely their location or somewhere in their open neighborhood -- an LD set must be capable of locating an intruder anywhere in the graph. We explore three types of fault-tolerant LD sets: \emph{redundant LD} sets, which allow a detector to be removed, \emph{error-detecting LD} sets, which allow at most one false negative, and \emph{error-correcting LD} sets, which allow at most one error (false positive or negative). In particular, we determine lower and upper bounds for the minimum density of fault-tolerant locating-dominating sets in the \emph{infinite king grid}; to prove the lower bounds, we introduce a new share-discharging strategy.
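As a concrete rendering of the definition, here is a minimal Python sketch (ours) that checks whether a detector set $S$ is locating-dominating in a finite graph: every non-detector vertex must trigger a nonempty set of detectors, and no two non-detector vertices may trigger the same set.

\begin{verbatim}
# Check the LD property on a finite graph given as a dict mapping each
# vertex to its open neighborhood (a set of vertices).
def is_locating_dominating(adj, S):
    S = set(S)
    seen = set()
    for v in adj:
        if v in S:
            continue
        sig = frozenset(adj[v] & S)   # detectors that sense v
        if not sig or sig in seen:    # undetected, or ambiguous with another
            return False
        seen.add(sig)
    return True

cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_locating_dominating(cycle4, {0, 2}))  # False: 1 and 3 look alike
print(is_locating_dominating(cycle4, {0, 1}))  # True
\end{verbatim}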
We study the two inference problems of detecting and recovering an isolated community of \emph{general} structure planted in a random graph. The detection problem is formalized as a hypothesis testing problem, where under the null hypothesis, the graph is a realization of an Erd\H{o}s-R\'{e}nyi random graph $\mathcal{G}(n,q)$ with edge density $q\in(0,1)$; under the alternative, there is an unknown structure $\Gamma_k$ on $k$ nodes, planted in $\mathcal{G}(n,q)$, such that it appears as an \emph{induced subgraph}. In the case of successful detection, we are concerned with the task of recovering the corresponding structure. For these problems, we investigate the fundamental limits from both the statistical and computational perspectives. Specifically, we derive lower bounds for detecting/recovering the structure $\Gamma_k$ in terms of the parameters $(n,k,q)$, as well as certain properties of $\Gamma_k$, and exhibit computationally unbounded optimal algorithms that achieve these lower bounds. We also consider the problem of testing in polynomial time. As is customary in many similar structured high-dimensional problems, our model undergoes an "easy-hard-impossible" phase transition, and computational constraints can severely penalize the statistical performance. To provide evidence for this phenomenon, we show that the class of low-degree polynomial algorithms matches the statistical performance of the polynomial-time algorithms we develop.
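To make the detection side concrete, the following toy Python sketch (ours, not the paper's algorithm) implements the simplest degree-1 polynomial test, counting edges: planting a structure denser than the ambient density $q$ shifts the total count by roughly $e(\Gamma_k) - q\binom{k}{2}$, which becomes detectable once it exceeds the null standard deviation.

\begin{verbatim}
import numpy as np

def edge_count_test(A, n, q, threshold_sd=3.0):
    m = A[np.triu_indices(n, k=1)].sum()        # observed edge count
    mean = q * n * (n - 1) / 2                  # null mean
    sd = np.sqrt(q * (1 - q) * n * (n - 1) / 2) # null standard deviation
    return (m - mean) / sd > threshold_sd       # reject the null?

rng = np.random.default_rng(0)
n, k, q = 400, 60, 0.1
A = (rng.random((n, n)) < q).astype(int)
A = np.triu(A, 1); A += A.T                     # null sample
print(edge_count_test(A, n, q))                 # typically False
A[:k, :k] = 1 - np.eye(k, dtype=int)            # plant a k-clique as Gamma_k
print(edge_count_test(A, n, q))                 # typically True
\end{verbatim}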
Defining and accurately measuring generalization in generative models remains an ongoing challenge and a topic of active research within the machine learning community. This is in contrast to discriminative models, where there is a clear definition of generalization, i.e., the model's classification accuracy when faced with unseen data. In this work, we construct a simple and unambiguous approach to evaluating the generalization capabilities of generative models. Using the sample-based generalization metrics proposed here, any generative model, from state-of-the-art classical models such as GANs to quantum models such as Quantum Circuit Born Machines, can be evaluated on an equal footing within a concrete, well-defined framework. In contrast to other sample-based metrics for probing generalization, we leverage constrained optimization problems (e.g., cardinality-constrained problems) and use these discrete datasets to define metrics capable of unambiguously measuring both the quality of the samples and the model's ability to generate data beyond the training set but still within the valid solution space. Additionally, our metrics can diagnose trainability issues such as mode collapse and overfitting, as we illustrate when comparing GANs to quantum-inspired models built out of tensor networks. Our simulation results show that our quantum-inspired models have up to a $68 \times$ enhancement in generating unseen unique and valid samples compared to GANs, and a ratio of 61:2 for generating samples with better quality than those observed in the training set. We foresee these metrics as valuable tools for rigorously defining practical quantum advantage in the domain of generative modeling.
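In the spirit of the metrics described above, the following minimal Python sketch (names and details are ours, not the paper's) scores a generator on a cardinality-constrained space: a generated bitstring counts toward generalization only if it is valid (correct Hamming weight), unseen (absent from the training set), and, for the last figure, unique.

\begin{verbatim}
import numpy as np

def sample_metrics(generated, train_set, weight):
    # Validity = fraction satisfying the cardinality constraint;
    # unseen_rate = fraction valid AND outside the training set;
    # unique_unseen = number of distinct such strings.
    valid = [s for s in generated if s.count("1") == weight]
    unseen = [s for s in valid if s not in train_set]
    return {"validity": len(valid) / len(generated),
            "unseen_rate": len(unseen) / len(generated),
            "unique_unseen": len(set(unseen))}

rng = np.random.default_rng(0)
train = {"1100", "1010"}                  # weight-2 strings of length 4
samples = ["".join(rng.choice(list("01"), 4)) for _ in range(1000)]
print(sample_metrics(samples, train, weight=2))
\end{verbatim}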
Quantum machine learning has emerged as a potential practical application of near-term quantum devices. In this work, we study a two-layer hybrid classical-quantum classifier in which a first layer of quantum stochastic neurons implementing generalized linear models (QGLMs) is followed by a second, classical combining layer. The input to the first, hidden, layer is obtained via amplitude encoding in order to leverage the fan-in of the quantum neurons, which is exponential in the number of qubits per neuron. To facilitate implementation of the QGLMs, all weights and activations are binary. While the state of the art on training strategies for this class of models is limited to exhaustive search and single-neuron, perceptron-like bit-flip strategies, this letter introduces a stochastic variational optimization approach that enables the joint training of quantum and classical layers via stochastic gradient descent. Experiments show the advantages of the approach for a variety of activation functions implemented by QGLM neurons.
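The joint-training idea can be sketched classically: treat each binary weight as a Bernoulli random variable and descend the score-function (REINFORCE) gradient of the expected loss, which is well defined even though the weights are discrete. The toy below is our classical stand-in, with no quantum layer; a running baseline tames the estimator's variance.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
w_true = rng.choice([-1.0, 1.0], size=8)
y = np.sign(X @ w_true)

theta, baseline = np.zeros(8), 0.5        # logits of P(w_i = +1)
for step in range(5000):
    p = 1.0 / (1.0 + np.exp(-theta))
    w = np.where(rng.random(8) < p, 1.0, -1.0)  # sample binary weights
    loss = np.mean(np.sign(X @ w) != y)         # 0-1 loss for this draw
    # Score-function gradient: E[(loss - b) * d log P(w) / d theta]
    grad = (loss - baseline) * np.where(w > 0, 1.0 - p, -p)
    theta -= 0.5 * grad
    baseline = 0.9 * baseline + 0.1 * loss
w_map = np.where(theta > 0, 1.0, -1.0)          # most probable weights
print("training accuracy:", np.mean(np.sign(X @ w_map) == y))
\end{verbatim}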
In his seminal work on recording quantum queries [Crypto 2019], Zhandry studied interactions between quantum query algorithms and the quantum oracle corresponding to random functions. Zhandry presented a framework for interpreting various states in the quantum space of the oracle as databases of the knowledge acquired by the algorithm and used that interpretation to provide security proofs in post-quantum cryptography. In this paper, we introduce a similar interpretation for the case when the oracle corresponds to random permutations instead of random functions. Because both random functions and random permutations are highly significant in security proofs, we hope that the present framework will find applications in quantum cryptography. Additionally, we show how this framework can be used to prove that the success probability of a $k$-query quantum algorithm that attempts to invert a random $N$-element permutation is at most $O(k^2/N)$.
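The shape of the bound follows the standard progress-measure intuition (sketched here informally; the paper's compressed-oracle argument is more delicate): each query can move only $O(1/\sqrt{N})$ worth of amplitude onto the correct preimage, so after $k$ queries
\[
\Pr[\text{invert}] \;\le\; \Big( \sum_{i=1}^{k} \frac{c}{\sqrt{N}} \Big)^{2} \;=\; O\!\left(\frac{k^{2}}{N}\right),
\]
since the success probability is the square of the accumulated amplitude.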
Quantum error mitigation (QEM) is a class of promising techniques for reducing the computational error of variational quantum algorithms. In general, the computational error reduction comes at the cost of a sampling overhead due to the variance-boosting effect caused by the channel inversion operation, which ultimately limits the applicability of QEM. Existing sampling overhead analysis of QEM typically assumes exact channel inversion, which is unrealistic in practical scenarios. In this treatise, we consider a practical channel inversion strategy based on Monte Carlo sampling, which introduces additional computational error that in turn may be eliminated at the cost of an extra sampling overhead. In particular, we show that when the computational error is small compared to the dynamic range of the error-free results, it scales with the square root of the number of gates. By contrast, the error exhibits a linear scaling with the number of gates in the absence of QEM under the same assumptions. Hence, the error scaling of QEM remains preferable even without the extra sampling overhead. Our analytical results are accompanied by numerical examples.
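The two scalings are easy to reproduce numerically. The toy below (our illustration, not the paper's model) treats each gate's residual mitigated error as an independent zero-mean perturbation of size $\epsilon$, so the aggregate grows like $\epsilon\sqrt{n}$, while an unmitigated bias of the same per-gate size accumulates linearly.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
eps, trials = 1e-3, 2000
for n_gates in (100, 400, 1600):
    # Zero-mean residuals from Monte Carlo inversion: random-walk growth.
    mitigated = rng.normal(0, eps, size=(trials, n_gates)).sum(axis=1)
    # Coherent per-gate bias without QEM: linear growth.
    unmitigated = eps * n_gates
    print(f"n={n_gates:5d}: rms mitigated error = {mitigated.std():.4f} "
          f"(~eps*sqrt(n) = {eps * np.sqrt(n_gates):.4f}), "
          f"unmitigated bias = {unmitigated:.4f}")
\end{verbatim}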
We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters'09, arXiv:0811.3171] for low-rank matrices [Wossnig, Zhao, and Prakash, Physical Review Letters'18, arXiv:1704.06174], when the input matrix $A$ is stored in a data structure applicable for QRAM-based state preparation. Namely, suppose we are given an $A \in \mathbb{C}^{m\times n}$ with minimum non-zero singular value $\sigma$ which supports certain efficient $\ell_2$-norm importance sampling queries, along with a $b \in \mathbb{C}^m$. Then, for some $x \in \mathbb{C}^n$ satisfying $\|x - A^+b\| \leq \varepsilon\|A^+b\|$, we can output a measurement of $|x\rangle$ in the computational basis and output an entry of $x$ with classical algorithms that run in $\tilde{\mathcal{O}}\big(\frac{\|A\|_{\mathrm{F}}^6\|A\|^6}{\sigma^{12}\varepsilon^4}\big)$ and $\tilde{\mathcal{O}}\big(\frac{\|A\|_{\mathrm{F}}^6\|A\|^2}{\sigma^8\varepsilon^4}\big)$ time, respectively. This improves on previous "quantum-inspired" algorithms in this line of research by at least a factor of $\frac{\|A\|^{16}}{\sigma^{16}\varepsilon^2}$ [Chia, Gily\'en, Li, Lin, Tang and Wang, STOC'20, arXiv:1910.06151]. As a consequence, we show that quantum computers can achieve at most a factor-of-12 speedup for linear regression in this QRAM data structure setting and related settings. Our work applies techniques from sketching algorithms and optimization to the quantum-inspired literature. Unlike earlier works, our approach is a promising avenue that could lead to feasible implementations of classical regression in a quantum-inspired setting, for comparison against future quantum computers.
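The sampling primitive these runtimes rest on is classical $\ell_2$-norm (length-square) importance sampling. The Python sketch below (our illustration, shown for a real matrix) demonstrates its core use: sampling rows of $A$ with probability $\|A_i\|^2/\|A\|_F^2$ and rescaling yields an unbiased estimator of $A^\dagger A$ from only a few rows.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5000, 20))
fro2 = (A ** 2).sum()
p = (A ** 2).sum(axis=1) / fro2          # length-square row distribution

r = 400                                  # number of sampled rows
idx = rng.choice(len(A), size=r, p=p)
S = A[idx] / np.sqrt(r * p[idx, None])   # rescaled sketch: E[S^T S] = A^T A
approx = S.T @ S
exact = A.T @ A
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
\end{verbatim}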
Quantum machine learning is expected to be one of the first potential general-purpose applications of near-term quantum devices. A major recent breakthrough in classical machine learning is the notion of generative adversarial training, where the gradients of a discriminator model are used to train a separate generative model. In this work and a companion paper, we extend adversarial training to the quantum domain and show how to construct generative adversarial networks using quantum circuits. We also show how to compute gradients -- a key element in generative adversarial network training -- using another quantum circuit. We give an example of a simple practical circuit ansatz to parametrize quantum machine learning models and perform a simple numerical experiment to demonstrate that quantum generative adversarial networks can be trained successfully.
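The gradient ingredient mentioned above can be illustrated with the parameter-shift rule. The single-qubit Python sketch below (our minimal example, not the paper's circuit) recovers the exact derivative of $\langle Z\rangle = \cos\theta$ from two shifted circuit evaluations rather than finite differences.

\begin{verbatim}
import numpy as np

def expectation(theta):
    # <0| RY(theta)^dag Z RY(theta) |0> = cos(theta), via matrices.
    ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]])
    state = ry @ np.array([1.0, 0.0])
    Z = np.diag([1.0, -1.0])
    return state @ Z @ state

def parameter_shift_grad(theta):
    # Exact gradient from two shifted evaluations of the same circuit.
    return 0.5 * (expectation(theta + np.pi / 2)
                  - expectation(theta - np.pi / 2))

theta = 0.3
print(parameter_shift_grad(theta), -np.sin(theta))  # should agree
\end{verbatim}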