Variational quantum algorithms rely on gradient-based optimization to iteratively minimize a cost function evaluated by measuring the outputs of a quantum processor. A barren plateau is the phenomenon of exponentially vanishing gradients in sufficiently expressive parametrized quantum circuits. It has been established that the onset of a barren plateau regime depends on the cost function, although the particular behavior has been demonstrated only for certain classes of cost functions. Here we derive a lower bound on the variance of the gradient, which depends mainly on the width of the circuit causal cone of each term in the Pauli decomposition of the cost function. Our result further clarifies the conditions under which barren plateaus can occur.
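For concreteness, the setting can be summarized in generic notation (this is the standard framing, not the specific bound derived here): the cost is a Pauli decomposition of measured expectation values, and a barren plateau means the variance of its partial derivatives vanishes exponentially in the number of qubits $n$,

\[
C(\boldsymbol{\theta}) = \sum_i c_i \,\mathrm{Tr}\!\left[P_i\, U(\boldsymbol{\theta})\,\rho\, U^{\dagger}(\boldsymbol{\theta})\right],
\qquad
\mathrm{Var}_{\boldsymbol{\theta}}\!\left[\partial_{\theta_k} C\right] \in \mathcal{O}\!\left(b^{-n}\right) \text{ for some } b>1 .
\]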
A quantum circuit is a computational unit that transforms an input quantum state into an output state. A natural way to reason about its behavior is to compute explicitly the unitary matrix it implements. However, as the number of qubits increases, the matrix dimension grows exponentially and the computation becomes intractable. In this paper, we propose a symbolic approach to reasoning about quantum circuits. It is based on a small set of laws involving basic manipulations of vectors and matrices. This symbolic reasoning scales better than explicit computation and is well suited to automation in Coq, as demonstrated on some typical examples.
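Typical examples of such laws (illustrative here, not necessarily the exact rule set formalized in Coq) are the mixed-product properties of the tensor product, which let one rewrite a circuit's action on a state without ever forming the full $2^n \times 2^n$ matrix:

\[
(A \otimes B)(u \otimes v) = (A u) \otimes (B v), \qquad
(A \otimes B)(C \otimes D) = (A C) \otimes (B D), \qquad
(A B)\, u = A (B u).
\]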
One of the major promises of quantum computing is the realization of SIMD (single instruction, multiple data) operations using the phenomenon of superposition. Since the dimension of the state space grows exponentially with the number of qubits, we can easily reach situations where we pay less than a single quantum gate per data point for data-processing instructions that would be rather expensive in classical computing. Formulating such instructions in terms of quantum gates, however, remains a challenging task. Laying out the foundational functions for more advanced data processing is therefore of paramount importance for advancing the realm of quantum computing. In this paper, we introduce a formalism for encoding so-called semi-boolean polynomials. As it turns out, arithmetic $\mathbb{Z}/2^n\mathbb{Z}$ ring operations can be formulated as semi-boolean polynomial evaluations, which allows convenient generation of unsigned integer arithmetic quantum circuits. For arithmetic evaluations, the resulting algorithm is known as Fourier arithmetic. We extend this type of algorithm with additional features, such as ancilla-free in-place multiplication and integer-coefficient polynomial evaluation. Furthermore, we introduce a tailor-made method for encoding signed integers, followed by an encoding for arbitrary floating-point numbers. This representation of floating-point numbers and their processing can be applied to any quantum algorithm that performs unsigned modular integer arithmetic. We discuss further performance enhancements of the semi-boolean polynomial encoder and finally supply a complexity estimation. Applying our methods to a 32-bit unsigned integer multiplication demonstrated a 90\% circuit depth reduction compared to carry-ripple approaches.
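As a purely classical illustration of the identity underlying such encoders (a sketch with hypothetical helper names, not the quantum encoder itself), unsigned modular multiplication is a semi-boolean polynomial, i.e., an integer-coefficient polynomial in the operands' bits:

```python
# Classical check that unsigned multiplication is a semi-boolean polynomial
# in the operands' bits:
#   x * y mod 2^n = sum_{i,j} 2^(i+j) * x_i * y_j  (mod 2^n).
# This is the kind of polynomial a Fourier-arithmetic-style encoder would
# load into the phase of a quantum register; function names are illustrative.

def bits(value, n):
    return [(value >> k) & 1 for k in range(n)]

def semi_boolean_product(x, y, n):
    xb, yb = bits(x, n), bits(y, n)
    acc = 0
    for i in range(n):
        for j in range(n):
            acc += (1 << (i + j)) * xb[i] * yb[j]   # monomial 2^(i+j) x_i y_j
    return acc % (1 << n)

n = 8
for x in range(0, 256, 37):
    for y in range(0, 256, 53):
        assert semi_boolean_product(x, y, n) == (x * y) % (1 << n)
print("semi-boolean polynomial reproduces modular multiplication")
```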
Offline estimation of the dynamical model of a Markov Decision Process (MDP) is a non-trivial task that greatly depends on the data available for the learning phase. Sometimes the dynamics of the model are invariant with respect to certain transformations of the current state and action. Recent works showed that an expert-guided pipeline relying on density estimation methods, such as deep-neural-network-based normalizing flows, effectively detects this structure in deterministic environments, both categorical and continuous-valued. The acquired knowledge can be exploited to augment the original data set, eventually leading to a reduction of the distributional shift between the true and the learnt model. In this work we extend the paradigm to also tackle non-deterministic MDPs; in particular, 1) we propose a detection threshold in categorical environments based on statistical distances, 2) we introduce a benchmark of the distributional shift in continuous environments based on the Wilcoxon signed-rank statistical test, and 3) we show that these results lead to a performance improvement when solving the learnt MDP and then applying the optimal policy in the real environment.
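A minimal sketch of the kind of paired comparison the Wilcoxon signed-rank test supports is given below (the data are synthetic and the per-state shift scores hypothetical; this is not the benchmark itself):

```python
# Paired Wilcoxon signed-rank test comparing hypothetical distributional-shift
# scores of two learnt models, one trained on the original data set and one
# on the symmetry-augmented one (synthetic numbers for illustration only).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
shift_original  = rng.gamma(2.0, 1.0, size=50)
shift_augmented = shift_original - rng.gamma(1.0, 0.3, size=50)  # assumed improvement

stat, p_value = wilcoxon(shift_original, shift_augmented)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
```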
Monte Carlo Tree Search (MCTS) is a sampling-based best-first method for searching for optimal decisions. The popularity of MCTS is based on its extraordinary results in the challenging two-player game Go, a game considered much harder than chess and, until very recently, considered infeasible for Artificial Intelligence methods. The success of MCTS depends heavily on how the tree is built, and the selection process plays a fundamental role in this. One particular selection mechanism that has proved reliable is based on the Upper Confidence Bounds for Trees, commonly referred to as UCT. UCT attempts to balance exploration and exploitation by considering the values stored in the statistical tree of the MCTS. However, some tuning of the MCTS UCT is necessary for this to work well. In this work, we use Evolutionary Algorithms (EAs) to evolve mathematical expressions with the goal of substituting the UCT expression. We compare our proposed approach, called Evolution Strategy in MCTS (ES-MCTS), against five variants of the MCTS UCT, three variants of the star-minimax family of algorithms, and a random controller in the Game of Carcassonne. We also use a variant of our proposed EA-based controller, dubbed ES partially integrated in MCTS. We show how the ES-MCTS controller is able to outperform all these 10 intelligent controllers, including robust MCTS UCT controllers.
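For reference, the textbook UCT rule that the evolved expressions are meant to replace looks roughly as follows (a generic sketch; the node statistics and the exploration constant are illustrative):

```python
# Standard UCT selection: exploitation (mean value) plus an exploration bonus
# that shrinks as a child is visited more often.
import math

def uct_score(child_value, child_visits, parent_visits, c=math.sqrt(2)):
    if child_visits == 0:
        return float("inf")              # force at least one visit per child
    exploitation = child_value / child_visits
    exploration = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploitation + exploration

def select(children, parent_visits):
    # children: list of (total_value, visit_count) statistics from the tree
    return max(range(len(children)),
               key=lambda i: uct_score(children[i][0], children[i][1], parent_visits))
```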
We show how to translate a subset of RISC-V machine code compiled from a subset of C to quadratic unconstrained binary optimization (QUBO) models that can be solved by a quantum annealing machine: given a bound $n$, there is an input $I$ to a program $P$ such that $P$ runs into a given program state $E$ executing no more than $n$ machine instructions if and only if the QUBO model of $P$ for $n$ evaluates to 0 on $I$. Thus, with more qubits on the machine than variables in the QUBO model, quantum annealing the model reaches 0 (ground) energy in constant time with high probability on some input $I$ that is part of the ground state if and only if $P$ runs into $E$ on $I$ in no more than $n$ instructions. Translation takes $\mathcal{O}(n^2)$ time, turning a quantum annealer into a polynomial-time symbolic execution engine and bounded model checker and eliminating their path and state explosion problems. Here, we take advantage of the fact that any machine instruction may only increase the size of the program state by $\mathcal{O}(w)$ bits, where $w$ is the machine word size. Translation time comes down to $\mathcal{O}(n)$ if the memory consumption of $P$ is bounded by a constant, establishing a linear (quadratic) upper bound on quantum space, in number of qubits, in terms of algorithmic time (space) in classical computing. This result motivates a temporal and spatial metric of quantum advantage. Our prototypical open-source toolchain translates machine code that runs on real RISC-V hardware to models that can be solved by real quantum annealing hardware, as shown in our experiments.
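The translation rests on the generic QUBO pattern sketched below: a penalty whose energy is 0 exactly on satisfying assignments (here the textbook gadget for $c = a \wedge b$, not the full machine-code translation of this work):

```python
# A QUBO penalty that is 0 iff c == a AND b and >= 1 otherwise; chaining such
# gadgets is the generic way a constraint system reaches ground energy 0
# exactly on its satisfying assignments.
from itertools import product

def and_penalty(a, b, c):
    return a * b - 2 * (a + b) * c + 3 * c

for a, b, c in product((0, 1), repeat=3):
    energy = and_penalty(a, b, c)
    print(a, b, c, energy, "ground" if energy == 0 else "")
```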
The current push towards interoperability drives companies to collaborate through process choreographies. At the same time, they face a jungle of continuously changing regulations, e.g., due to the pandemic and developments such as Brexit, which strongly affect cross-organizational collaborations. Think, for example, of supply chains spanning several countries with different and perhaps even conflicting COVID-19 travel restrictions. Hence, providing automatic compliance verification in process choreographies is crucial for any cross-organizational business process. A particular challenge concerns the restricted visibility of the partner processes in the presence of global compliance rules (GCR), i.e., rules that span the processes of several partners. This work deals with the question of how to verify global compliance if the affected tasks are not fully visible. Our idea is to decompose GCRs into so-called assertions that can be checked by each affected partner, whereby the decomposition is both correct and lossless. The algorithm exploits transitivity properties of the underlying rule specification, and its correctness and complexity are proven, considering advanced aspects such as loops. The algorithm is implemented in a proof-of-concept prototype, including a model checker for verifying compliance. The applicability of the approach is further demonstrated on a real-world manufacturing use case.
Arrays of quantum dots (QDs) are a promising candidate system for realizing scalable, coupled qubit systems and serving as a fundamental building block for quantum computers. In such semiconductor quantum systems, devices now have tens of individual electrostatic and dynamical voltages that must be carefully set to localize the system in the single-electron regime and to realize good qubit operational performance. The mapping of requisite dot locations and charges to gate voltages presents a challenging classical control problem. With an increasing number of QD qubits, the relevant parameter space grows sufficiently to make heuristic control infeasible. In recent years, there has been considerable effort to automate device control by combining script-based algorithms with machine learning (ML) techniques. In this Colloquium, we present a comprehensive overview of recent progress in the automation of QD device control, with a particular emphasis on silicon- and GaAs-based QDs formed in two-dimensional electron gases. Combining physics-based modeling with modern numerical optimization and ML has proven quite effective in yielding efficient, scalable control. Further integration of theoretical, computational, and experimental efforts with computer science and ML holds tremendous potential for advancing semiconductor and other platforms for quantum computing.
We show that Gottesman's semantics (GROUP22, 1998) for Clifford circuits based on the Heisenberg representation can be treated as a type system that can efficiently characterize a common subset of quantum programs. Our applications include (i) certifying whether auxiliary qubits can be safely disposed of, (ii) determining if a system is separable across a given bi-partition, (iii) checking the transversality of a gate with respect to a given stabilizer code, and (iv) typing post-measurement states for computational basis measurements. Further, this type system is extended to accommodate universal quantum computing by deriving types for the $T$-gate, multiply-controlled unitaries such as the Toffoli gate, and some gate injection circuits that use associated magic states. These types allow us to prove a lower bound on the number of $T$ gates necessary to perform a multiply-controlled $Z$ gate.
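A minimal sketch of the Heisenberg-style bookkeeping such a type system builds on is given below, assuming only single-qubit conjugation tables and ignoring signs (the actual system also covers CNOT, measurement, and the $T$-gate extensions):

```python
# Clifford gates act on Pauli "types" by conjugation (U P U^dagger), so a
# qubit's type can be tracked without simulating the state vector.
# Sign tracking is omitted for brevity.
H_RULE = {"X": "Z", "Z": "X", "Y": "Y"}   # H swaps X and Z (Y -> -Y)
S_RULE = {"X": "Y", "Y": "X", "Z": "Z"}   # S maps X -> Y (and Y -> -X)

def apply_gate(rule, pauli_type):
    return rule[pauli_type]

# A qubit stabilized by Z (i.e. |0>) is sent by H to one stabilized by X (|+>).
q_type = "Z"
q_type = apply_gate(H_RULE, q_type)
print(q_type)  # 'X'
```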
Sparse representation of real-life images is a very effective approach in imaging applications, such as denoising. In recent years, with the growth of computing power, data-driven strategies that exploit the redundancy within patches extracted from one or several images to increase sparsity have become more prominent. This paper presents a novel image denoising algorithm exploiting such an image-dependent basis inspired by quantum many-body theory. Based on patch analysis, the similarity measures in a local image neighborhood are formalized through a term akin to interaction in quantum mechanics that can efficiently preserve the local structures of real images. The versatile nature of this adaptive basis extends the scope of its application to image-independent or image-dependent noise scenarios without any adjustment. We carry out a rigorous comparison with contemporary methods to demonstrate the denoising capability of the proposed algorithm regardless of the image characteristics, noise statistics, and intensity. We illustrate the properties of the hyperparameters and their respective effects on the denoising performance, together with automated rules for selecting values close to the optimal ones in experimental setups where ground truth is not available. Finally, we show the ability of our approach to deal with practical image denoising problems such as medical ultrasound image despeckling applications.
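A heavily simplified stand-in for this kind of image-dependent construction is sketched below (the kernel choice, normalization, and patch handling are assumptions for illustration, not the quantum-many-body-inspired operator of this work): patch similarities define an "interaction" matrix whose eigenvectors serve as an adaptive basis.

```python
# Generic sketch: build an image-dependent basis from pairwise patch
# similarities and take its eigenvectors as the adaptive basis.
import numpy as np

def adaptive_patch_basis(image, patch=8, sigma=25.0):
    h, w = image.shape
    patches = [image[i:i+patch, j:j+patch].ravel()
               for i in range(0, h - patch + 1, patch)
               for j in range(0, w - patch + 1, patch)]
    P = np.stack(patches)                                   # (num_patches, patch*patch)
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)     # pairwise patch distances
    interaction = np.exp(-d2 / (2 * sigma ** 2))            # similar patches interact strongly
    _, basis = np.linalg.eigh(interaction)                  # eigenvectors = adaptive basis
    return P, basis

rng = np.random.default_rng(0)
noisy = rng.normal(128, 20, size=(64, 64))
P, basis = adaptive_patch_basis(noisy)
print(P.shape, basis.shape)
```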
Quantum machine learning is expected to be one of the first potential general-purpose applications of near-term quantum devices. A major recent breakthrough in classical machine learning is the notion of generative adversarial training, where the gradients of a discriminator model are used to train a separate generative model. In this work and a companion paper, we extend adversarial training to the quantum domain and show how to construct generative adversarial networks using quantum circuits. Furthermore, we also show how to compute gradients -- a key element in generative adversarial network training -- using another quantum circuit. We give an example of a simple practical circuit ansatz to parametrize quantum machine learning models and perform a simple numerical experiment to demonstrate that quantum generative adversarial networks can be trained successfully.
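One widely used way to evaluate such gradients with another circuit (offered as an illustration, not necessarily the exact construction used here) is the parameter-shift rule, valid when the $k$-th gate is generated as $e^{-i\theta_k P/2}$ for a Pauli string $P$:

\[
\partial_{\theta_k} \langle \hat{O} \rangle_{\boldsymbol{\theta}}
= \tfrac{1}{2}\left( \langle \hat{O} \rangle_{\boldsymbol{\theta} + \frac{\pi}{2} e_k}
- \langle \hat{O} \rangle_{\boldsymbol{\theta} - \frac{\pi}{2} e_k} \right),
\]

so the gradient is obtained from two additional expectation-value evaluations of the same circuit with shifted parameters.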