Quantum cloud computing is a promising paradigm for efficiently provisioning quantum resources (i.e., qubits) to users. In quantum cloud computing, quantum cloud providers provision quantum resources through reservation and on-demand plans. Naturally, quantum resources in the reservation plan are expected to be cheaper than those in the on-demand plan. However, resources in the reservation plan must be reserved in advance, without prior information about the requirements of the quantum circuits, and consequently the reserved resources may be insufficient, i.e., under-reserved. Hence, quantum resources in the on-demand plan can be used to compensate for the unmet resource requirements. To this end, we propose a quantum resource allocation scheme for the quantum cloud computing system in which quantum resources and the minimum waiting time of quantum circuits are jointly optimized. In particular, the objective is to minimize the total cost of quantum circuits under uncertainty in their qubit requirements and minimum waiting times. In experiments, practical quantum Fourier transform circuits are used to evaluate the proposed qubit resource allocation. The results illustrate that the proposed qubit resource allocation achieves the optimal total cost.
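To make the reservation/on-demand trade-off concrete, the following is a minimal sketch of a two-stage provisioning decision under demand uncertainty; the prices, demand scenarios, and the exhaustive search are illustrative assumptions, not the formulation or solver used in the paper.

```python
# Toy sketch of two-stage qubit provisioning under demand uncertainty.
# All prices, scenarios, and the scenario-expectation model are illustrative
# assumptions, not the paper's optimization model.

C_RESERVE = 1.0    # cost per reserved qubit (cheaper, paid up front)
C_ONDEMAND = 2.5   # cost per on-demand qubit (more expensive, pay as you go)

# Possible qubit demands of the submitted circuits and their probabilities.
demand_scenarios = [(8, 0.3), (16, 0.5), (32, 0.2)]

def expected_cost(reserved: int) -> float:
    """First-stage reservation cost plus expected on-demand compensation."""
    cost = C_RESERVE * reserved
    for demand, prob in demand_scenarios:
        shortfall = max(0, demand - reserved)   # under-reservation
        cost += prob * C_ONDEMAND * shortfall
    return cost

# Exhaustive search over reservation levels stands in for the paper's solver.
best = min(range(0, 33), key=expected_cost)
print(f"reserve {best} qubits, expected cost {expected_cost(best):.2f}")
```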
We present an end-to-end framework for learning partial differential equations that brings together initial data production, selection of boundary conditions, and the use of physics-informed neural operators to solve partial differential equations that are ubiquitous in the study and modeling of physics phenomena. We first demonstrate that our methods reproduce the accuracy and performance of other neural operators published in the literature for learning the 1D wave equation and the 1D Burgers equation. Thereafter, we apply our physics-informed neural operators to learn new types of equations, including the 2D Burgers equation in its scalar, inviscid, and vector forms. Finally, we show that our approach is also applicable to learning the physics of the 2D linear and nonlinear shallow water equations, which involve three coupled partial differential equations. We release our artificial intelligence surrogates and scientific software to produce initial data and boundary conditions for a broad range of physically motivated scenarios. We provide the source code, an interactive website to visualize the predictions of our physics-informed neural operators, and a tutorial for their use at the Data and Learning Hub for Science.
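As an illustration of the physics-informed ingredient, here is a minimal finite-difference sketch of a residual penalty for the 1D Burgers equation $u_t + u u_x = \nu u_{xx}$ of the kind such operators add to their training loss; the grid, viscosity, and the random stand-in for the operator's prediction are assumptions, not the released software.

```python
import numpy as np

# Minimal sketch of a physics-informed residual for the 1D Burgers equation
#   u_t + u * u_x = nu * u_xx
# evaluated on a space-time grid of surrogate predictions. The grid sizes,
# viscosity, and the random "prediction" below are placeholders; a real
# neural operator would supply u.

nt, nx = 64, 128
dt, dx = 0.01, 2.0 / nx
nu = 0.01

u = np.random.rand(nt, nx)  # stand-in for the operator's predicted field u(t, x)

u_t = np.gradient(u, dt, axis=0)
u_x = np.gradient(u, dx, axis=1)
u_xx = np.gradient(u_x, dx, axis=1)

residual = u_t + u * u_x - nu * u_xx
physics_loss = np.mean(residual ** 2)   # added to the data-fitting loss
print(f"PDE residual loss: {physics_loss:.4f}")
```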
In this work, we study the numerical solution of inverse eigenvalue problems from a machine learning perspective. Two different problems are considered: the inverse Sturm-Liouville eigenvalue problem for symmetric potentials and the inverse transmission eigenvalue problem for spherically symmetric refractive indices. Firstly, we solve the corresponding direct problems to produce the required eigenvalue datasets with which to train the machine learning algorithms. Next, we consider several examples of inverse problems and compare the performance of each model in predicting the unknown potentials and refractive indices, respectively, from a given small set of the lowest eigenvalues. The supervised regression models we use are k-Nearest Neighbours, Random Forests, and Multi-Layer Perceptron. Our experiments show that these machine learning methods, with appropriate tuning of their parameters, can numerically solve the examined inverse eigenvalue problems.
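A minimal sketch of the supervised setup described above, assuming a few lowest eigenvalues as input features and a low-dimensional parameterization of the unknown potential as the regression target; the synthetic data is a placeholder for the datasets produced by solving the direct problems.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Map a small set of lowest eigenvalues to the coefficients parameterizing the
# unknown potential. The random data below is a placeholder for datasets
# obtained by solving the direct problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))   # 5 lowest eigenvalues per sample
y = rng.normal(size=(1000, 3))   # 3 coefficients describing the potential

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "kNN": KNeighborsRegressor(n_neighbors=5),
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
    "MLP": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R^2 on held-out data:", model.score(X_te, y_te))
```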
Two contrasting algorithmic paradigms for constraint satisfaction problems are successive local explorations of neighboring configurations versus producing new configurations using global information about the problem (e.g. approximating the marginals of the probability distribution which is uniform over satisfying configurations). This paper presents new algorithms for the latter framework, ultimately producing estimates for satisfying configurations using methods from Boolean Fourier analysis. The approach is broadly inspired by the quantum amplitude amplification algorithm in that it maximally increases the amplitude of the approximation function over satisfying configurations through sequential refinements. We demonstrate that satisfying solutions may be retrieved in a process analogous to quantum measurement, made efficient by sparsity in the Fourier domain, and present a complete solver construction using this novel approximation. Freedom in the refinement strategy invites further opportunities to design solvers in an evolutionary computing framework. Results demonstrate competitive performance against local solvers for the Boolean satisfiability (SAT) problem, encouraging future work on understanding the connections between Boolean Fourier analysis and constraint satisfaction.
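For readers unfamiliar with Boolean Fourier analysis, the sketch below computes the Walsh-Hadamard (Fourier) coefficients of the satisfying-assignment indicator of a toy CNF formula by brute force; the formula is arbitrary and the enumeration is only meant to illustrate the sparse spectrum such solvers exploit, not the paper's refinement procedure.

```python
import itertools
import numpy as np

# Fourier coefficients of the 0/1 satisfying-assignment indicator of a tiny
# CNF formula over +/-1-valued variables. The formula is an arbitrary example.

clauses = [(1, -2), (2, 3), (-1, -3)]   # (x1 or not x2), (x2 or x3), (not x1 or not x3)
n = 3

def satisfies(assignment):
    """assignment[i] > 0 means variable i+1 is True."""
    return all(any((assignment[abs(l) - 1] > 0) == (l > 0) for l in clause)
               for clause in clauses)

fourier = {}
for subset in itertools.product([0, 1], repeat=n):           # character chi_S
    coeff = 0.0
    for x in itertools.product([-1, 1], repeat=n):            # +/-1 assignments
        chi = np.prod([x[i] for i in range(n) if subset[i]]) if any(subset) else 1
        coeff += (1.0 if satisfies(x) else 0.0) * chi
    fourier[subset] = coeff / 2 ** n

print({s: c for s, c in fourier.items() if abs(c) > 1e-9})    # sparse spectrum
```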
The question "What is real?" can be traced back to the shadows in Plato's cave. Two thousand years later, Rene Descartes lacked knowledge about arguing against an evil deceiver feeding us the illusion of sensation. Descartes' epistemological concept later led to various theories of sensory experiences. The concept of "illusionism", proposing that even the very conscious experience we have is an illusion, is not only a red-pill scenario found in the 1999 science fiction movie "The Matrix" but is also a philosophical concept promoted by modern tinkers, most prominently by Daniel Dennett. Reflection upon a possible simulation and our perceived reality was beautifully visualized in "The Matrix", bringing the old ideas of Descartes to coffee houses around the world. Irish philosopher Bishop Berkeley was the father of what was later coined as "subjective idealism", basically stating that "what you perceive is real". With the advent of quantum technologies based on the control of individual fundamental particles, the question of whether our universe is a simulation isn't just intriguing. Our ever-advancing understanding of fundamental physical processes will likely lead us to build quantum computers utilizing quantum effects for simulating nature quantum-mechanically in all complexity, as famously envisioned by Richard Feynman. In this article, we outline constraints on the limits of computability and predictability in/of the universe, which we then use to design experiments allowing for first conclusions as to whether we participate in a simulation chain. Eventually, in a simulation in which the computer simulating a universe is governed by the same physical laws as the simulation, the exhaustion of computational resources will halt all simulations down the simulation chain unless an external programmer intervenes, which we may be able to observe.
We study the problem of finding elements in the intersection of an arbitrary conic variety in $\mathbb{F}^n$ with a given linear subspace (where $\mathbb{F}$ can be the real or complex field). This problem captures a rich family of algorithmic problems under different choices of the variety. The special case of the variety consisting of rank-1 matrices already has strong connections to central problems in different areas like quantum information theory and tensor decompositions. This problem is known to be NP-hard in the worst case, even for the variety of rank-1 matrices. Surprisingly, despite these hardness results, we give efficient algorithms that solve this problem for "typical" subspaces. Here, the subspace $U \subseteq \mathbb{F}^n$ is chosen generically of a certain dimension, potentially with some generic elements of the variety contained in it. Our main algorithmic result is a polynomial time algorithm that recovers all the elements of $U$ that lie in the variety, under some mild non-degeneracy assumptions on the variety. As corollaries, we obtain the following results: $\bullet$ Uniqueness results and polynomial time algorithms for generic instances of a broad class of low-rank decomposition problems that go beyond tensor decompositions. Here, we recover a decomposition of the form $\sum_{i=1}^R v_i \otimes w_i$, where the $v_i$ are elements of the given variety $X$. This implies new algorithmic results even in the special case of tensor decompositions. $\bullet$ Polynomial time algorithms for several entangled subspaces problems in quantum entanglement, including determining $r$-entanglement, complete entanglement, and genuine entanglement of a subspace. While all of these problems are NP-hard in the worst case, our algorithm solves them in polynomial time for generic subspaces of dimension up to a constant multiple of the maximum possible.
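To make the connection to tensor decompositions explicit (a worked special case stated here for the reader's convenience, not quoted from the paper): when $X$ is the variety of rank-1 matrices and each $v_i = a_i b_i^{\top}$, the recovered decomposition becomes
$$\sum_{i=1}^{R} v_i \otimes w_i \;=\; \sum_{i=1}^{R} a_i \otimes b_i \otimes w_i,$$
i.e., a rank-$R$ decomposition of a third-order tensor, which is the sense in which the result specializes to tensor decompositions.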
The application of autonomous UAVs to infrastructure inspection tasks provides benefits in terms of operation time reduction, safety, and cost-effectiveness. This paper presents trajectory planning for three-dimensional autonomous multi-UAV volume coverage and visual inspection of infrastructure based on the Heat Equation Driven Area Coverage (HEDAC) algorithm. The method generates trajectories using a potential field and employs distance fields to prevent collisions and to determine the UAVs' camera orientations. It successfully achieves coverage during the visual inspection of complex structures such as a wind turbine and a bridge, outperforming a state-of-the-art method by allowing more surface area to be inspected under the same conditions. The presented trajectory planning method offers flexibility in various setup parameters and is applicable to real-world inspection tasks. In conclusion, the proposed methodology could potentially be applied to other autonomous UAV tasks, or even be utilized as a UAV motion control method if its computational efficiency is improved.
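The following is a deliberately simplified 2D sketch of the heat-equation-driven coverage idea, assuming a grid world, a few Jacobi diffusion steps, and greedy hill-climbing agents; the grid size, diffusion parameters, and greedy stepping are stand-ins for the paper's 3D, collision-aware, camera-oriented planner.

```python
import numpy as np

# Uninspected cells act as heat sources, the potential diffuses, and each UAV
# greedily climbs the resulting field. All parameters here are illustrative.

grid = np.ones((40, 40))          # 1 = not yet covered (heat source), 0 = covered
potential = np.zeros_like(grid)
agents = [(5, 5), (30, 30)]

for step in range(200):
    # A few Jacobi iterations of the heat equation with coverage as the source.
    for _ in range(20):
        lap = (np.roll(potential, 1, 0) + np.roll(potential, -1, 0) +
               np.roll(potential, 1, 1) + np.roll(potential, -1, 1) - 4 * potential)
        potential += 0.2 * lap + grid
    new_agents = []
    for (i, j) in agents:
        # Move to the neighbouring cell with the highest potential.
        nbrs = [(i + di, j + dj) for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                if 0 <= i + di < 40 and 0 <= j + dj < 40]
        i, j = max(nbrs, key=lambda p: potential[p])
        grid[i, j] = 0            # mark the visited cell as covered
        new_agents.append((i, j))
    agents = new_agents

print("uncovered cells remaining:", int(grid.sum()))
```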
Many quantum algorithms for numerical linear algebra assume black-box access to a block-encoding of the matrix of interest, which is a strong assumption when the matrix is not sparse. Kernel matrices, which arise from discretizing a kernel function $k(x,x')$, have a variety of applications in mathematics and engineering. They are generally dense and full-rank. Classically, the celebrated fast multipole method performs matrix multiplication on kernel matrices of dimension $N$ in time almost linear in $N$ by using the linear algebraic framework of hierarchical matrices. In light of this success, we propose a block-encoding scheme of the hierarchical matrix structure on a quantum computer. When applied to many physical kernel matrices, our method can improve the runtime of solving quantum linear systems of dimension $N$ to $O(\kappa \operatorname{polylog}(\frac{N}{\varepsilon}))$, where $\kappa$ and $\varepsilon$ are the condition number and error bound of the matrix operation. This runtime is near-optimal and, in terms of $N$, exponentially improves over prior quantum linear systems algorithms in the case of dense and full-rank kernel matrices. We discuss possible applications of our methodology in solving integral equations and accelerating computations in N-body problems.
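On the classical side, the hierarchical-matrix structure the paper block-encodes rests on the observation that off-diagonal blocks of a smooth kernel matrix are numerically low-rank; the sketch below illustrates this with a truncated SVD, where the kernel, sizes, and rank are arbitrary choices rather than anything from the proposed block-encoding.

```python
import numpy as np

# Off-diagonal blocks of a smooth kernel matrix are numerically low rank, so a
# truncated SVD compresses them with small error. Kernel and rank are arbitrary.

N = 512
x = np.sort(np.random.rand(N))
K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))     # dense, full-rank kernel matrix

block = K[: N // 2, N // 2 :]                         # off-diagonal block (no singularity)
U, s, Vt = np.linalg.svd(block, full_matrices=False)
rank = 8
approx = U[:, :rank] * s[:rank] @ Vt[:rank]
rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
print(f"rank-{rank} approximation relative error: {rel_err:.2e}")
```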
We study the problem of designing worst-case to average-case reductions for quantum algorithms. For all linear problems, we provide an explicit and efficient transformation of quantum algorithms that are only correct on a small (even sub-constant) fraction of their inputs into ones that are correct on all inputs. This stands in contrast to the classical setting, where such results are only known for a small number of specific problems or restricted computational models. En route, we obtain a tight $\Omega(n^2)$ lower bound on the average-case quantum query complexity of the Matrix-Vector Multiplication problem. Our techniques strengthen and generalise the recently introduced additive combinatorics framework for classical worst-case to average-case reductions (STOC 2022) to the quantum setting. We rely on quantum singular value transformations to construct quantum algorithms for linear verification in superposition and learning Bogolyubov subspaces from noisy quantum oracles. We use these tools to prove a quantum local correction lemma, which lies at the heart of our reductions, based on a noise-robust probabilistic generalisation of Bogolyubov's lemma from additive combinatorics.
In large-scale systems, there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and remains difficult to scale. In this paper we present four algorithms to solve these problems. Their combination enables each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-world system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
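As a toy illustration of the learning loop described above, the sketch below has a single agent learn which peer to delegate a subtask to while shrinking its exploration rate as its value estimates stabilise; the peers' success probabilities, learning rate, and stability heuristic are illustrative assumptions, and the paper's agents additionally face resource limits and unreliable networking.

```python
import random

# One agent learning which peer to delegate a subtask to, with an exploration
# rate that shrinks as its value estimates stabilise. All numbers illustrative.

peer_success = {"A": 0.9, "B": 0.6, "C": 0.3}   # hidden capabilities of peers
q = {p: 0.0 for p in peer_success}              # learned value of delegating to each peer
alpha, epsilon = 0.1, 1.0

for episode in range(2000):
    # Epsilon-greedy choice of which peer receives the subtask.
    peer = random.choice(list(q)) if random.random() < epsilon else max(q, key=q.get)
    reward = 1.0 if random.random() < peer_success[peer] else 0.0
    update = alpha * (reward - q[peer])
    q[peer] += update
    # Explore less once estimates have stopped changing much; back off otherwise.
    epsilon = max(0.05, 0.99 * epsilon) if abs(update) < 0.05 else min(1.0, epsilon + 0.01)

print("learned values:", {p: round(v, 2) for p, v in q.items()}, "epsilon:", round(epsilon, 2))
```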
Deep reinforcement learning algorithms can perform poorly in real-world tasks due to the discrepancy between source and target environments. This discrepancy is commonly viewed as the disturbance in transition dynamics. Many existing algorithms learn robust policies by modeling the disturbance and applying it to source environments during training, which usually requires prior knowledge about the disturbance and control of simulators. However, these algorithms can fail in scenarios where the disturbance from target environments is unknown or is intractable to model in simulators. To tackle this problem, we propose a novel model-free actor-critic algorithm -- namely, state-conservative policy optimization (SCPO) -- to learn robust policies without modeling the disturbance in advance. Specifically, SCPO reduces the disturbance in transition dynamics to that in state space and then approximates it by a simple gradient-based regularizer. The appealing features of SCPO include that it is simple to implement and does not require additional knowledge about the disturbance or specially designed simulators. Experiments in several robot control tasks demonstrate that SCPO learns robust policies against the disturbance in transition dynamics.
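The sketch below illustrates one plausible reading of a gradient-based state regularizer, assuming an FGSM-style single gradient step to approximate the worst-case state perturbation and a penalty on the resulting action change; the network, perturbation radius, and weighting are placeholders, and this is not the authors' implementation of SCPO.

```python
import torch
import torch.nn as nn

# Approximate the worst-case state perturbation with one gradient step and
# penalise how much the policy's action changes under it. All values illustrative.

policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 2))
eps = 0.05                                   # radius of the state-space disturbance set

state = torch.randn(32, 8, requires_grad=True)
action = policy(state)

# Gradient of the action magnitude w.r.t. the state gives a sensitive direction.
grad = torch.autograd.grad(action.pow(2).sum(), state, create_graph=True)[0]
perturbed_state = state + eps * grad.sign()

reg = (policy(perturbed_state) - action).pow(2).mean()    # smoothness penalty
# total_loss = actor_critic_loss + beta * reg              # added to the usual objective
print("state-conservative regularizer:", float(reg))
```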