We show how to translate a subset of RISC-V machine code compiled from a subset of C into quadratic unconstrained binary optimization (QUBO) models that can be solved by a quantum annealing machine: given a bound $n$, there is an input $I$ to a program $P$ such that $P$ reaches a given program state $E$ executing no more than $n$ machine instructions if and only if the QUBO model of $P$ for $n$ evaluates to 0 on $I$. Thus, provided the machine has more qubits than the QUBO model has variables, quantum annealing the model reaches 0 (ground) energy in constant time with high probability on some input $I$ that is part of the ground state if and only if $P$ reaches $E$ on $I$ executing no more than $n$ instructions. Translation takes $\mathcal{O}(n^2)$ time, effectively turning a quantum annealer into a polynomial-time symbolic execution engine and bounded model checker and eliminating their path and state explosion problems. Here we exploit the fact that any machine instruction may increase the size of the program state by at most a constant number of bits. Translation time drops from $\mathcal{O}(n^2)$ to $\mathcal{O}(n\cdot|P|)$ if the memory consumption of $P$ is bounded by a constant, establishing a linear (quadratic) upper bound on quantum space, in number of qubits on a quantum annealer, in terms of algorithmic time (space) in classical computing. Our prototypical open-source toolchain translates machine code that runs on real RISC-V hardware into models that can be solved by real quantum annealing hardware, as our experiments show.
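To make the instruction-to-QUBO idea concrete, here is a minimal sketch (ours, not the paper's toolchain) of the standard QUBO gadget for a single bit-level constraint $z = x \wedge y$; a bounded execution is translated by chaining such penalty sets, one per instruction, which is why each step adds only a constant number of variables.

```python
# Minimal sketch (not the paper's toolchain): the QUBO penalty
# 3z + xy - 2xz - 2yz is >= 0 everywhere and equals 0 exactly when
# z == x AND y, so ground (0) energy certifies a consistent step.
import itertools

def and_penalty(x, y, z):
    return 3*z + x*y - 2*x*z - 2*y*z

for x, y, z in itertools.product((0, 1), repeat=3):
    assert (and_penalty(x, y, z) == 0) == (z == (x & y))
print("AND-gate penalty is 0 exactly on consistent assignments")
```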
We study the computational complexity of the zigzag sampling algorithm for strongly log-concave distributions. The zigzag process has the advantages that its implementation requires no time discretization, that each proposed bouncing event requires only one evaluation of a partial derivative of the potential, and that its convergence rate is dimension-independent. Using these properties, we prove that the zigzag sampling algorithm achieves $\varepsilon$ error in chi-square divergence at a computational cost equivalent to $O\bigl(\kappa^2 d^\frac{1}{2}(\log\frac{1}{\varepsilon})^{\frac{3}{2}}\bigr)$ gradient evaluations in the regime $\kappa \ll \frac{d}{\log d}$ under a warm start assumption, where $\kappa$ is the condition number and $d$ is the dimension.
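The "no time discretization" property is easiest to see in one dimension. Below is a minimal sketch (ours, for illustration only; the paper treats the general strongly log-concave case) for the standard Gaussian potential $U(x) = x^2/2$, where the switching rate $\max(0, v\,U'(x))$ is piecewise linear along each segment, so the next bounce time can be sampled exactly by inverting the integrated rate.

```python
# 1-D zigzag sketch for U(x) = x^2/2: simulate bounces exactly,
# record positions on a fixed time grid, and check Var ~ 1.
import math, random

def next_bounce(x, v):
    e = -math.log(random.random())        # Exp(1) draw
    a = v * x                             # switching rate at segment start
    if a >= 0:
        return -a + math.sqrt(a * a + 2 * e)
    return -a + math.sqrt(2 * e)          # rate is 0 until t = -a

def zigzag(T=10000.0, dt=1.0):
    x, v, t, t_grid, samples = 0.0, 1.0, 0.0, 0.0, []
    while t < T:
        tau = next_bounce(x, v)
        while t_grid <= t + tau and t_grid <= T:
            samples.append(x + v * (t_grid - t))   # position on segment
            t_grid += dt
        x, t, v = x + v * tau, t + tau, -v         # bounce: flip velocity
    return samples

xs = zigzag()
print("empirical variance (should be ~1):", sum(x*x for x in xs) / len(xs))
```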
Quantum computing is evolving so quickly that it forces us to revisit, rewrite, and update the foundations of the theory. Basic Quantum Algorithms revisits the first quantum algorithms. The story started in 1985 with Deutsch trying to evaluate a function at two domain points simultaneously. Then, in 1992, Deutsch and Jozsa created a quantum algorithm that determines whether a Boolean function is constant or balanced. The following year, Bernstein and Vazirani realized that the same algorithm could be used to find a specific Boolean function in the set of linear Boolean functions. In 1994, Simon presented a new quantum algorithm that determines whether a function is one-to-one or two-to-one exponentially faster than any classical algorithm for the same problem. In the same year, Shor created two new quantum algorithms, for factoring integers and for calculating discrete logarithms, threatening the cryptographic methods widely used today. In 1995, Kitaev described an alternative version of Shor's algorithms that proved useful in many other applications. The following year, Grover created a quantum search algorithm that is quadratically faster than its classical counterpart. In this work, all those remarkable algorithms are described in detail, with a focus on the circuit model.
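As a taste of the circuit-model analysis, here is a toy state-vector check (our illustration, not taken from the book) of the Deutsch-Jozsa promise: after $H^{\otimes n}$, a phase oracle $(-1)^{f(x)}$, and $H^{\otimes n}$, the amplitude of $|0\ldots0\rangle$ is $\frac{1}{2^n}\sum_x (-1)^{f(x)}$, which is $\pm 1$ for a constant $f$ and exactly $0$ for a balanced $f$.

```python
# Deutsch-Jozsa toy check: the probability of measuring |0...0> is the
# square of this amplitude -- 1 for constant f, 0 for balanced f.
def dj_zero_amplitude(f, n):
    return sum((-1) ** f(x) for x in range(2 ** n)) / 2 ** n

n = 3
constant = lambda x: 1
balanced = lambda x: x & 1                     # lowest bit is balanced
print(dj_zero_amplitude(constant, n))          # -1.0 -> "constant"
print(dj_zero_amplitude(balanced, n))          #  0.0 -> "balanced"
```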
Fitting geometric models onto outlier-contaminated data is provably intractable. Many computer vision systems rely on random sampling heuristics to solve robust fitting, which provide neither optimality guarantees nor error bounds. It is therefore critical to develop novel approaches that can bridge the gap between exact solutions, which are costly, and fast heuristics, which offer no quality assurances. In this paper, we propose a hybrid quantum-classical algorithm for robust fitting. Our core contribution is a novel robust fitting formulation that solves a sequence of integer programs and terminates with a global solution or an error bound. The combinatorial subproblems are amenable to a quantum annealer, which helps to tighten the bound efficiently. While our usage of quantum computing does not surmount the fundamental intractability of robust fitting, by providing error bounds our algorithm is a practical improvement over randomised heuristics. Moreover, our work represents a concrete application of quantum computing in computer vision. We present results obtained using an actual quantum computer (D-Wave Advantage) and via simulation. Source code: https://github.com/dadung/HQC-robust-fitting
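To illustrate the flavour of the combinatorial subproblems, here is a toy sketch (our construction, not the paper's exact formulation) of maximum consensus for a constant model $y = c$ cast as a QUBO, with a brute-force search standing in for the annealer.

```python
# Toy maximum-consensus QUBO: z_i = 1 marks point i as an inlier; the
# energy counts outliers and penalizes (weight M) any pair of claimed
# inliers that cannot share one model within tolerance eps. The ground
# state is the largest mutually consistent inlier set.
import itertools

def consensus_energy(z, xs, eps, M=10.0):
    e = sum(1 - zi for zi in z)                      # number of outliers
    for i, j in itertools.combinations(range(len(xs)), 2):
        if abs(xs[i] - xs[j]) > 2 * eps:             # incompatible pair
            e += M * z[i] * z[j]
    return e

xs, eps = [0.1, 0.2, 0.15, 3.0, -2.5], 0.1
best = min(itertools.product((0, 1), repeat=len(xs)),
           key=lambda z: consensus_energy(z, xs, eps))
print("inlier mask:", best)                          # (1, 1, 1, 0, 0)
```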
One way to define a quantum R\'enyi $\alpha$-divergence of two quantum states is to optimize the classical R\'enyi $\alpha$-divergence of their post-measurement probability distributions over all possible measurements (the measured R\'enyi divergence), and possibly to regularize these quantities over multiple copies of the two states (the regularized measured R\'enyi $\alpha$-divergence). A key observation behind the theorem for the strong converse exponent of asymptotic binary quantum state discrimination is that the regularized measured R\'enyi $\alpha$-divergence coincides with the sandwiched R\'enyi $\alpha$-divergence when $\alpha>1$. Moreover, it also follows from the same theorem that to achieve this, it is sufficient to consider $2$-outcome measurements (tests) for any number of copies (this is somewhat surprising, as achieving the measured R\'enyi $\alpha$-divergence for $n$ copies might require a number of measurement outcomes that diverges in $n$, in general). In view of this, it seems natural to expect the same when $\alpha<1$; however, we show that this is not the case. In fact, we show that even for commuting states (the classical case) the regularized quantity attainable using $2$-outcome measurements is in general strictly smaller than the R\'enyi $\alpha$-divergence (which is unique in the classical case). In the general quantum case this shows that the above "regularized test-measured" R\'enyi $\alpha$-divergence is not even a quantum extension of the classical R\'enyi divergence when $\alpha<1$, in sharp contrast to the $\alpha>1$ case.
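For reference, the quantities being compared can be written in standard notation (our summary, following common conventions):
\[
D_\alpha(p\|q) = \frac{1}{\alpha-1}\log\sum_x p(x)^\alpha q(x)^{1-\alpha},
\qquad
D_\alpha^{\mathrm{meas}}(\rho\|\sigma) = \sup_{M} D_\alpha\bigl(P_{M,\rho}\,\big\|\,P_{M,\sigma}\bigr),
\]
\[
\overline{D}_\alpha^{\mathrm{meas}}(\rho\|\sigma) = \lim_{n\to\infty}\frac{1}{n}\,D_\alpha^{\mathrm{meas}}\bigl(\rho^{\otimes n}\big\|\sigma^{\otimes n}\bigr),
\qquad
\widetilde{D}_\alpha(\rho\|\sigma) = \frac{1}{\alpha-1}\log\mathrm{Tr}\Bigl(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\Bigr)^{\!\alpha},
\]
where $P_{M,\rho}$ denotes the outcome distribution of measurement $M$ on state $\rho$, and the test-measured variant restricts the supremum to $2$-outcome measurements.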
We propose a series of data-centric heuristics for improving the performance of machine learning systems when applied to problems in quantum information science. In particular, we consider how systematic engineering of training sets can significantly enhance the accuracy of pre-trained neural networks used for quantum state reconstruction without altering the underlying architecture. We find that it is not always optimal to engineer training sets to exactly match the expected distribution of a target scenario; instead, performance can be further improved by biasing the training set to be slightly more mixed than the target. This is due to the heterogeneity in the number of free variables required to describe states of different purity, and as a result, the overall accuracy of the network improves when training sets of a fixed size focus on states with the least-constrained free variables. For further clarity, we also include a "toy model" demonstration of how spurious correlations can inadvertently enter synthetic data sets used for training, how the performance of systems trained with these correlations can degrade dramatically, and how the inclusion of even relatively few counterexamples can effectively remedy such problems.
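One way to realize such a purity bias in synthetic data is sketched below; the depolarizing mixture and the `bias` knob are our illustrative assumptions, not the paper's recipe.

```python
# Sketch: draw Haar-random pure states and depolarize them, shifting
# the mixing weight lam slightly above the target scenario so the
# training set is a little more mixed than the states it will see.
import numpy as np

def random_density_matrix(d, lam, rng):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)                     # Haar-random pure state
    rho_pure = np.outer(psi, psi.conj())
    return (1 - lam) * rho_pure + lam * np.eye(d) / d

rng = np.random.default_rng(0)
d, bias = 4, 0.1                                   # bias: tunable knob
train = [random_density_matrix(d, min(1.0, rng.uniform(0, 0.5) + bias), rng)
         for _ in range(1000)]
purity = np.mean([np.trace(r @ r).real for r in train])
print(f"mean purity of biased training set: {purity:.3f}")
```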
A core challenge for superconducting quantum computers is to scale up the number of qubits in each processor without increasing noise or cross-talk. Distributing a quantum computer across nearby small qubit arrays, known as chiplets, could solve many of the problems associated with size. We propose a chiplet architecture connected by microwave links, with the potential to exceed monolithic performance on near-term hardware. We model and evaluate the chiplet architecture in a way that bridges the physical and network layers. We find concrete evidence that distributed quantum computing may accelerate the path toward useful and ultimately scalable quantum computers. In the long term, short-range networks may underlie quantum computers just as local area networks underlie classical datacenters and supercomputers today.
Strategic behavior in two-sided matching markets has traditionally been studied in a "one-sided" manipulation setting, where the agent who misreports is also the intended beneficiary. Our work investigates "two-sided" manipulation of the deferred acceptance algorithm, where the misreporting agent and the beneficiary are on different sides. Specifically, we generalize the recently proposed accomplice manipulation model (where a man misreports on behalf of a woman) along two complementary dimensions: (a) the two-for-one model, with a pair of misreporting agents (a man and a woman) and a single beneficiary (the misreporting woman), and (b) the one-for-all model, with one misreporting agent (a man) and a coalition of beneficiaries (all women). Our main contribution is to develop polynomial-time algorithms for finding an optimal manipulation in both settings. We obtain these results despite the fact that an optimal one-for-all strategy fails to be inconspicuous, while it is unclear whether an optimal two-for-one strategy satisfies the inconspicuousness property. We also study the conditions under which the stability of the resulting matching is preserved. Experimentally, we show that two-sided manipulations are more frequently available and offer better-quality matches than their one-sided counterparts.
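For context, here is a textbook sketch of the (man-proposing) deferred acceptance algorithm that both manipulation models target; preference lists are rankings with index 0 the most preferred partner, and a misreport is simply a permuted list.

```python
# Gale-Shapley deferred acceptance: free men propose down their lists;
# each woman holds her best proposer so far and rejects the rest.
def deferred_acceptance(men_prefs, women_prefs):
    n = len(men_prefs)
    rank = [{m: r for r, m in enumerate(p)} for p in women_prefs]
    next_choice = [0] * n            # next woman on each man's list
    fiance = [None] * n              # fiance[w] = man currently held by w
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if fiance[w] is None:
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:   # w prefers m to her fiance
            free_men.append(fiance[w])
            fiance[w] = m
        else:
            free_men.append(m)                  # m rejected, tries again
    return {fiance[w]: w for w in range(n)}     # man -> woman

men = [[0, 1, 2], [1, 0, 2], [0, 1, 2]]
women = [[1, 0, 2], [0, 1, 2], [0, 1, 2]]
print(deferred_acceptance(men, women))          # {0: 0, 1: 1, 2: 2}
```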
Defining and accurately measuring generalization in generative models remains an ongoing challenge and a topic of active research within the machine learning community. This is in contrast to discriminative models, where there is a clear definition of generalization, i.e., the model's classification accuracy on unseen data. In this work, we construct a simple and unambiguous approach to evaluating the generalization capabilities of generative models. Using the sample-based generalization metrics proposed here, any generative model, from state-of-the-art classical models such as GANs to quantum models such as Quantum Circuit Born Machines, can be evaluated on equal footing within a concrete, well-defined framework. In contrast to other sample-based metrics for probing generalization, we leverage constrained optimization problems (e.g., cardinality-constrained problems) and use these discrete datasets to define metrics capable of unambiguously measuring the quality of the samples and the model's capability to generalize, that is, to generate data beyond the training set but still within the valid solution space. Additionally, our metrics can diagnose trainability issues such as mode collapse and overfitting, as we illustrate when comparing GANs to quantum-inspired models built out of tensor networks. Our simulation results show that our quantum-inspired models achieve up to a $68\times$ enhancement in generating unseen, unique, and valid samples compared to GANs, and a 61:2 ratio for generating samples of better quality than those observed in the training set. We foresee these metrics as valuable tools for rigorously defining practical quantum advantage in the domain of generative modeling.
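The advantage of constrained datasets is that validity is checkable per sample. Below is a minimal sketch of such sample-based metrics for a cardinality-constrained problem; the metric names and exact ratios are our shorthand, not the paper's definitions.

```python
# Count generated bitstrings that are valid (exactly k ones), unseen
# (outside the training set), and unique among the unseen ones.
def generalization_metrics(generated, train_set, k):
    valid = [s for s in generated if bin(s).count("1") == k]
    unseen = [s for s in valid if s not in train_set]
    return {
        "valid_fraction":  len(valid) / len(generated),
        "unseen_fraction": len(unseen) / len(generated),
        "unique_fraction": len(set(unseen)) / max(1, len(unseen)),
    }

train = {0b00111, 0b01011}              # valid 5-bit strings with k = 3
gen = [0b00111, 0b11100, 0b11100, 0b11111, 0b01110]
print(generalization_metrics(gen, train, k=3))
```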
We develop a universal model, based on classical complex matter fields, that allows the optimal mapping of many real-life NP-hard combinatorial optimisation problems onto the problem of minimising a spin Hamiltonian. We explicitly formulate one-to-one mappings for three famous problems: graph colouring, the travelling salesman problem, and the modular N-queens problem. We show that such a formulation allows for several orders of magnitude improvement in the search for the global minimum compared to the standard Ising formulation, while the amplitude dynamics enable escape from local minima.
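For comparison, here is the standard one-hot Ising/QUBO construction for graph colouring that serves as the baseline (our sketch of the textbook encoding, not the matter-field model): $x_{v,c} = 1$ iff vertex $v$ gets colour $c$, and the energy is zero exactly on proper colourings.

```python
# Energy = A * (one-hot violations) + B * (monochromatic edges);
# brute force over a toy instance finds a proper 2-colouring.
import itertools

def coloring_energy(x, edges, n, k, A=2.0, B=1.0):
    e = A * sum((1 - sum(x[v, c] for c in range(k))) ** 2 for v in range(n))
    e += B * sum(x[u, c] * x[v, c] for (u, v) in edges for c in range(k))
    return e

n, k, edges = 3, 2, [(0, 1), (1, 2)]      # path graph, 2 colours
best = min((dict(zip(itertools.product(range(n), range(k)), bits))
            for bits in itertools.product((0, 1), repeat=n * k)),
           key=lambda x: coloring_energy(x, edges, n, k))
print({v: [c for c in range(k) if best[v, c]] for v in range(n)})
```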
Quantum hardware and quantum-inspired algorithms are becoming increasingly popular for combinatorial optimization. However, these algorithms may require careful hyperparameter tuning for each problem instance. We use a reinforcement learning agent in conjunction with a quantum-inspired algorithm to solve the Ising energy minimization problem, which is equivalent to the Maximum Cut problem. The agent controls the algorithm by tuning one of its parameters with the goal of improving on recently seen solutions. We propose a new Rescaled Ranked Reward (R3) method that enables a stable, single-player version of self-play training and helps the agent escape local optima. Training on any problem instance can be accelerated by applying transfer learning from an agent trained on randomly generated problems. Our approach allows sampling high-quality solutions to the Ising problem with high probability and outperforms both baseline heuristics and a black-box hyperparameter optimization approach.
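The self-play mechanism can be sketched as follows; this is our simplification of the ranked-reward idea the method builds on, and it omits the rescaling that distinguishes R3.

```python
# Ranked-reward sketch: an episode's final Ising energy is compared
# with a percentile threshold over recently seen solutions, so the
# single agent effectively plays against its own history.
from collections import deque
import numpy as np

class RankedReward:
    def __init__(self, percentile=75, buffer_size=256):
        self.percentile = percentile
        self.buffer = deque(maxlen=buffer_size)

    def __call__(self, energy):
        self.buffer.append(energy)
        # Lower energy is better, so beat the (100 - p)-th energy
        # percentile to earn +1.
        threshold = np.percentile(self.buffer, 100 - self.percentile)
        if energy < threshold:
            return 1.0
        if energy > threshold:
            return -1.0
        return float(np.random.choice([-1.0, 1.0]))  # random tie-break

r = RankedReward()
print([r(e) for e in [-1.0, -2.0, -1.5, -3.0]])
```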