It is well known that an arbitrary $n$-qubit quantum state $|\Psi\rangle$ can be prepared with $\Theta(2^n)$ two-qubit gates. In this work, we investigate the task in a "straddling gates" scenario: consider $n$ qubits divided equally into two sets, with gates within each set free; what is the least number of two-qubit gates straddling the sets (also known as the "binding complexity") needed to prepare an arbitrary quantum state, assuming no ancilla qubits are allowed? We give an algorithm that fulfills the task with $O(n^2 2^{n/2})$ straddling gates, which nearly matches the lower bound up to a lower-order factor. We then prove that decomposing an arbitrary unitary in $U(2^n)$ requires no more than $O(2^{n})$ straddling gates. This resolves an open problem posed by Vijay Balasubramanian, who was motivated by the "Complexity=Volume" conjecture in AdS/CFT. Furthermore, we extend the discussion to multipartite systems, define a novel binding complexity class, the "Schmidt decomposable" states, and give a circuit-construction explanation for its unique property. Lastly, we discuss the significance of binding complexity, comparing it to von Neumann entropy as an entanglement measure.
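For concreteness, here is a minimal NumPy sketch (our illustration, not the paper's construction) of the two quantities the abstract relates: the Schmidt coefficients of a state across the equal bipartition, and the von Neumann entanglement entropy computed from them; `schmidt_coefficients` and `entanglement_entropy` are hypothetical helper names.

```python
import numpy as np

def schmidt_coefficients(psi, n):
    """Schmidt coefficients of an n-qubit state across the equal cut."""
    half = n // 2
    # Reshape the 2^n amplitudes into a matrix whose rows index the
    # first half of the qubits and whose columns index the second half.
    return np.linalg.svd(psi.reshape(2**half, 2**(n - half)),
                         compute_uv=False)

def entanglement_entropy(psi, n):
    """Von Neumann entropy of the reduced state across the cut."""
    lam = schmidt_coefficients(psi, n) ** 2  # eigenvalues of the reduced state
    lam = lam[lam > 1e-12]                   # drop numerical zeros
    return float(-np.sum(lam * np.log2(lam)))

# Example: a Bell state carries exactly one ebit across the cut.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(entanglement_entropy(bell, 2))         # ~1.0
```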
We show how to translate a subset of RISC-V machine code compiled from a subset of C into quadratic unconstrained binary optimization (QUBO) models that may be solved by a quantum annealing machine: given a bound $n$, there is an input $I$ to a program $P$ such that $P$ runs into a given program state $E$ executing no more than $n$ machine instructions if and only if the QUBO model of $P$ for $n$ evaluates to 0 on $I$. Thus, with more qubits on the machine than variables in the QUBO model, quantum annealing the model reaches 0 (ground) energy in constant time with high probability on some input $I$ that is part of the ground state if and only if $P$ runs into $E$ on $I$ executing no more than $n$ instructions. Translation takes $\mathcal{O}(n^2)$ time, effectively turning a quantum annealer into a polynomial-time symbolic execution engine and bounded model checker and eliminating their path and state explosion problems. Here, we take advantage of the fact that any machine instruction may only increase the size of the program state by a constant number of bits. Translation time comes down from $\mathcal{O}(n^2)$ to $\mathcal{O}(n\cdot|P|)$ if the memory consumption of $P$ is bounded by a constant, establishing a linear (quadratic) upper bound on quantum space, in number of qubits on a quantum annealer, in terms of algorithmic time (space) in classical computing. The construction provides a non-relativizing argument for $NP\subseteq BQP$, without violating the optimality of Grover's algorithm, also on gate-model quantum machines, and motivates a temporal and spatial metric of quantum advantage. Our prototypical open-source toolchain translates machine code that runs on real RISC-V hardware into models that can be solved by real quantum annealing hardware, as shown in our experiments.
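To illustrate the flavor of such a translation (our toy example, not the paper's toolchain), the following standard QUBO gadget has energy 0 exactly on assignments where $z = x \wedge y$; per-instruction gadgets of this kind can be chained so that the whole model evaluates to 0 precisely on valid bounded executions.

```python
import itertools

def and_gadget_energy(x, y, z):
    """Standard QUBO penalty: 0 iff z == x AND y, otherwise >= 1."""
    return x*y - 2*(x + y)*z + 3*z

# The gadget's ground states are exactly the valid AND transitions.
for x, y, z in itertools.product((0, 1), repeat=3):
    assert (and_gadget_energy(x, y, z) == 0) == (z == (x & y))
print("energy 0 exactly on valid transitions")
```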
The Solovay-Kitaev algorithm is a fundamental result in quantum computation. It gives an algorithm for efficiently compiling arbitrary unitaries using universal gate sets: any unitary can be approximated by short gate sequences, whose length scales only poly-logarithmically with the inverse accuracy. As a consequence, the choice of gate set is typically unimportant in quantum computing. However, the Solovay-Kitaev algorithm requires the gate set to be inverse-closed. It has been a longstanding open question whether efficient algorithmic compilation is possible without this condition. In this work, we provide the first inverse-free Solovay-Kitaev algorithm, which makes no assumption on the structure within a gate set beyond universality, answering this problem in the affirmative, and providing an efficient compilation algorithm in the absence of inverses for both $\text{SU}(d)$ and $\text{SL}(d, \mathbb{C})$. The algorithm works by showing that approximate gate implementations of the generalized Pauli group can self-correct their errors.
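For context, the textbook (inverse-closed) recursion behind the original algorithm, summarized here in standard form, shows exactly where inverses enter: an $\epsilon_{k-1}$-approximation $U_{k-1}$ of a target $U$ is improved by a balanced group commutator,
\[
U_k = V_{k-1}\, W_{k-1}\, V_{k-1}^\dagger\, W_{k-1}^\dagger\, U_{k-1},
\qquad
\epsilon_k \le c\, \epsilon_{k-1}^{3/2},
\]
so the error converges to zero (with sequence length growing only poly-logarithmically in $1/\epsilon$) whenever $\epsilon_0 < 1/c^2$. The commutator requires the inverted sequences $V_{k-1}^\dagger$ and $W_{k-1}^\dagger$, which is precisely the assumption the inverse-free algorithm removes.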
Until recently, applications of neural networks in machine learning have almost exclusively relied on real-valued networks. It was recently observed, however, that complex-valued neural networks (CVNNs) exhibit superior performance in applications in which the input is naturally complex-valued, such as MRI fingerprinting. While the mathematical theory of real-valued networks has, by now, reached some level of maturity, this is far from true for complex-valued networks. In this paper, we analyze the expressivity of complex-valued networks by providing explicit quantitative error bounds for approximating $C^n$ functions on compact subsets of $\mathbb{C}^d$ by complex-valued neural networks that employ the modReLU activation function, given by $\sigma(z) = \mathrm{ReLU}(|z| - 1) \, \mathrm{sgn} (z)$, which is one of the most popular complex activation functions used in practice. We show that the derived approximation rates are optimal (up to log factors) in the class of modReLU networks with weights of moderate growth.
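The activation itself is easy to state in code; a direct NumPy rendering of the definition above (our illustration, with the convention $\sigma(0) = 0$):

```python
import numpy as np

def modrelu(z):
    """modReLU with bias -1: shrink the modulus by 1, keep the phase."""
    r = np.abs(z)
    mag = np.maximum(r - 1.0, 0.0)        # ReLU(|z| - 1)
    # sgn(z) = z/|z| for z != 0; define sigma(0) = 0.
    phase = np.divide(z, r, out=np.zeros_like(z), where=r > 0)
    return mag * phase

z = np.array([0.5 + 0.5j, 2.0 + 0.0j, -3.0j])
print(modrelu(z))   # [0, 1, -2j]: inputs inside the unit disk are zeroed
```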
In multiuser communication systems, user scheduling and beamforming design are two fundamental problems that are usually investigated separately in the existing literature. In this work, we focus on the joint optimization of user scheduling and beamforming design with the goal of maximizing the number of scheduled users. This problem is computationally challenging due to the non-convex objective function and the constraints, which couple continuous and binary variables. To tackle these difficulties, we first propose an iterative optimization algorithm (IOA) relying on successive convex approximation and uplink-downlink duality theory. Then, motivated by IOA and graph neural networks, a joint user scheduling and power allocation network (JEEPON) is developed to address the investigated problem in an unsupervised manner. The effectiveness of IOA and JEEPON is verified by various numerical results; the latter achieves close performance at lower complexity compared with IOA and a greedy-based algorithm. Remarkably, JEEPON is also competitive in terms of generalization ability in dynamic wireless network scenarios.
We consider reinforcement learning (RL) in Markov Decision Processes in which an agent repeatedly interacts with an environment that is modeled by a controlled Markov process. At each time step $t$, it earns a reward, and also incurs a cost-vector consisting of $M$ costs. We design learning algorithms that maximize the cumulative reward earned over a time horizon of $T$ time-steps, while simultaneously ensuring that the average values of the $M$ cost expenditures are bounded by agent-specified thresholds $c^{ub}_i,i=1,2,\ldots,M$. The consideration of cumulative cost expenditures departs from the existing literature, in that the agent now additionally needs to balance the cost expenses in an online manner, while simultaneously performing the exploration-exploitation trade-off that is typically encountered in RL tasks. In order to measure the performance of a reinforcement learning algorithm that satisfies the average cost constraints, we define an $(M+1)$-dimensional regret vector that is composed of its reward regret, and $M$ cost regrets. The reward regret measures the sub-optimality in the cumulative reward, while the $i$-th component of the cost regret vector is the difference between its $i$-th cumulative cost expense and the allowed expenditure $Tc^{ub}_i$. We prove that with high probability, the regret vector of our algorithm, UCRL-CMDP, is upper-bounded as $O\left( S\sqrt{AT^{1.5}\log(T)}\right)$, where $S$ is the number of states, $A$ is the number of actions, and $T$ is the time horizon. We further show how to reduce the regret of a desired subset of the $M$ costs, at the expense of increasing the regrets of the reward and the remaining costs. To the best of our knowledge, ours is the only work that considers non-episodic RL under average cost constraints, and derives algorithms that can \emph{tune the regret vector} according to the agent's requirements on its cost regrets.
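In symbols (our paraphrase of the definitions just given, with $r_t$ and $c_{i,t}$ denoting the realized reward and $i$-th cost at step $t$, and $r^\star$ the optimal constrained average reward):
\[
\mathcal{R}_0(T) \;=\; T\,r^\star - \sum_{t=1}^{T} r_t,
\qquad
\mathcal{R}_i(T) \;=\; \sum_{t=1}^{T} c_{i,t} - T\,c^{ub}_i,
\quad i = 1, \ldots, M.
\]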
Demonstrating quantum advantage requires experimental implementation of a computational task that is hard to achieve using state-of-the-art classical systems. One approach is to perform sampling from a probability distribution associated with a class of highly entangled many-body wavefunctions. It has been suggested that this approach can be certified with the Linear Cross-Entropy Benchmark (XEB). We critically examine this notion. First, in a "benign" setting where an honest implementation of noisy quantum circuits is assumed, we characterize the conditions under which the XEB approximates the fidelity. Second, in an "adversarial" setting where all possible classical algorithms are considered for comparison, we show that achieving relatively high XEB values does not imply faithful simulation of quantum dynamics. We present an efficient classical algorithm that, running on a single GPU within 2 seconds, yields high XEB values, namely 2-12% of those obtained in experiments. By identifying and exploiting several vulnerabilities of the XEB, we achieve high XEB values without full simulation of quantum circuits. Remarkably, our algorithm features better scaling with the system size than noisy quantum devices for commonly studied random circuit ensembles. To quantitatively explain the success of our algorithm and the limitations of the XEB, we use a theoretical framework in which the average XEB and fidelity are mapped to statistical models. We illustrate the relation between the XEB and the fidelity for quantum circuits in various architectures, with different gate choices, and in the presence of noise. Our results show that the XEB's utility as a proxy for fidelity hinges on several conditions, which must be checked in the benign setting but cannot be assumed in the adversarial setting. Thus, the XEB alone has limited utility as a benchmark for quantum advantage. We discuss ways to overcome these limitations.
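For reference, the benchmark under examination is simple to compute; a minimal NumPy sketch of the standard linear XEB (not the paper's spoofing algorithm): for ideal output probabilities $p(x)$ of an $n$-qubit circuit and a batch of sampled bitstrings, $\text{XEB} = 2^n\,\overline{p(x)} - 1$.

```python
import numpy as np

def linear_xeb(p_ideal, samples, n):
    """Linear XEB: 2^n * mean ideal probability of the samples, minus 1."""
    return 2**n * float(np.mean(p_ideal[samples])) - 1.0

rng = np.random.default_rng(0)
n = 10
p = rng.exponential(size=2**n)          # Porter-Thomas-like weights
p /= p.sum()
uniform = rng.integers(0, 2**n, size=20000)   # trivial classical baseline
ideal = rng.choice(2**n, size=20000, p=p)     # perfect sampler
print(linear_xeb(p, uniform, n))        # ~0: uniform samples score nothing
print(linear_xeb(p, ideal, n))          # ~1 under Porter-Thomas statistics
```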
A matrix formalism for the determination of the best estimator in certain simulation-based parameter estimation problems is presented and discussed. The equations, termed the Linear Template Fit, combine a linear regression with a least-squares method and its optimization. The Linear Template Fit employs only predictions that are calculated beforehand and provided for a few values of the parameter of interest. It is therefore particularly suited for parameter estimation with computationally intensive simulations, which are otherwise often limited in their usability for statistical inference, and for performance-critical applications. Equations for error propagation are discussed, and the analytic form provides comprehensive insight into the parameter estimation problem. Furthermore, the quickly converging Quadratic Template Fit algorithm is presented, which is suitable for a non-linear dependence on the parameters. As an example application, a determination of the strong coupling constant, $\alpha_s(m_Z)$, from inclusive jet cross-section data at the CERN Large Hadron Collider is studied and compared with previously published results.
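A condensed sketch of the idea in the single-parameter case (our simplification, not the paper's full matrix formalism): regress each bin of the pre-computed templates linearly in the parameter, then solve the resulting weighted least-squares problem in closed form.

```python
import numpy as np

def linear_template_fit(alphas, templates, data, cov):
    """alphas: (k,) parameter values of the templates; templates: (k, nbins)
    pre-computed predictions; data: (nbins,); cov: (nbins, nbins)."""
    # Per-bin linear regression t_i(alpha) ~ m_i + s_i * alpha.
    A = np.vstack([np.ones_like(alphas), alphas]).T
    (m, s), *_ = np.linalg.lstsq(A, templates, rcond=None)
    # Closed-form weighted least squares in alpha.
    W = np.linalg.inv(cov)
    alpha_hat = (s @ W @ (data - m)) / (s @ W @ s)
    sigma = np.sqrt(1.0 / (s @ W @ s))   # linear error propagation
    return alpha_hat, sigma

alphas = np.array([0.8, 1.0, 1.2])
templates = np.array([[2.0 + 3.0*a, 1.0 - 0.5*a] for a in alphas])
data = np.array([2.0 + 3.0*1.07, 1.0 - 0.5*1.07])  # pseudo-data at alpha=1.07
print(linear_template_fit(alphas, templates, data, 0.01*np.eye(2)))
# -> (~1.07, ~0.03)
```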
The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress in the past couple of decades. However, most work has been devoted to point sets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time-series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN, based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves, which generalizes both discrete Fr\'echet and Dynamic Time Warping distances. These are the most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increasing space usage and preprocessing time over existing methods. Our query time complexity is comparable to or significantly better than that of existing methods, and our algorithms are especially efficient when the length of the curves is bounded.
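For intuition about the distances being generalized, here is the textbook dynamic program for the discrete Fr\'echet distance (unrelated to the paper's data structures; `discrete_frechet` is our illustrative helper).

```python
import numpy as np

def discrete_frechet(P, Q):
    """Textbook O(m*n) dynamic program over curve vertices P (m,d), Q (n,d)."""
    m, n = len(P), len(Q)
    D = np.full((m, n), np.inf)
    for i in range(m):
        for j in range(n):
            d = np.linalg.norm(P[i] - Q[j])
            if i == 0 and j == 0:
                D[i, j] = d
            else:
                prev = min(D[i-1, j] if i else np.inf,
                           D[i, j-1] if j else np.inf,
                           D[i-1, j-1] if i and j else np.inf)
                D[i, j] = max(d, prev)  # best coupling so far, worst pair in it
    return D[-1, -1]

P = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
Q = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(discrete_frechet(P, Q))   # 1.0: the curves stay one unit apart
```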
This paper proposes a model-free Reinforcement Learning (RL) algorithm to synthesise policies for an unknown Markov Decision Process (MDP), such that a linear-time property is satisfied. We convert the given property into a Limit Deterministic B\"uchi Automaton (LDBA) and then construct the product MDP of the automaton and the original MDP. Based on the acceptance condition of the LDBA, a reward function is then defined over the state-action pairs of the product MDP. With this reward function, our algorithm synthesises a policy whose traces satisfy the linear-time property: as such, the policy synthesis procedure is "constrained" by the given specification. Additionally, we show that the RL procedure sets up an online value iteration method to calculate the maximum probability of satisfying the given property at any given state of the MDP; a convergence proof for the procedure is provided. Finally, the performance of the algorithm is evaluated via a set of numerical examples. We observe an improvement of one order of magnitude in the number of iterations required for the synthesis compared to existing approaches.
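A toy rendering of the quantity the online procedure converges to (our sketch, not the paper's algorithm): value iteration for the maximum probability of reaching a hypothetical accepting set in a small product MDP, with accepting states assumed absorbing.

```python
def max_sat_probability(P, accepting, iters=100):
    """Value iteration for the max probability of reaching `accepting`.
    P[s][a] is a list of (next_state, probability) pairs; accepting
    states are assumed absorbing."""
    V = {s: (1.0 if s in accepting else 0.0) for s in P}
    for _ in range(iters):
        for s in P:
            if s not in accepting:
                V[s] = max(sum(p * V[t] for t, p in P[s][a]) for a in P[s])
    return V

# Hypothetical 3-state product MDP: action "a" from s0 reaches the
# accepting state s1 with probability 0.9, action "b" never does.
P = {
    "s0": {"a": [("s1", 0.9), ("s2", 0.1)], "b": [("s2", 1.0)]},
    "s1": {"a": [("s1", 1.0)]},
    "s2": {"a": [("s2", 1.0)]},
}
print(max_sat_probability(P, accepting={"s1"})["s0"])   # 0.9
```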
Quantum machine learning is expected to be one of the first potential general-purpose applications of near-term quantum devices. A major recent breakthrough in classical machine learning is the notion of generative adversarial training, where the gradients of a discriminator model are used to train a separate generative model. In this work and a companion paper, we extend adversarial training to the quantum domain and show how to construct generative adversarial networks using quantum circuits. We also show how to compute gradients -- a key element in generative adversarial network training -- using another quantum circuit. We give an example of a simple practical circuit ansatz for parametrizing quantum machine learning models and perform a simple numerical experiment demonstrating that quantum generative adversarial networks can be trained successfully.
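One widely used way to obtain such gradients from extra circuit evaluations is the parameter-shift rule; the following self-contained simulation illustrates the general idea (the paper's own gradient circuit may differ): for $\langle Z\rangle$ after $R_Y(\theta)$ on $|0\rangle$, two shifted evaluations give the exact derivative.

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def ry(theta):
    """Single-qubit RY rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expval(theta):
    """<Z> after RY(theta)|0>, which equals cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return float(psi @ Z @ psi)

def parameter_shift_grad(theta):
    """Exact d<Z>/dtheta from two shifted runs of the same circuit."""
    return 0.5 * (expval(theta + np.pi / 2) - expval(theta - np.pi / 2))

theta = 0.7
print(parameter_shift_grad(theta), -np.sin(theta))   # both ~ -0.6442
```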