After nine years in quantum computing, and given the long road that remains towards good-quality quantum processing units, it is time to rethink how we advance quantum computing research. Rather than waiting for quantum hardware technologies to mature, we need to assess, in tandem, the impact that quantum computing will have on various scientific fields. For this purpose, however, we need an approach that is complementary to, but quite different from, the NISQ vision, which is heavily focused on, and burdened by, engineering challenges. That is why we propose and advocate the PISQ approach: Perfect Intermediate-Scale Quantum computing, based on the already known concept of perfect qubits. This allows researchers to focus much more on the development of new applications, by defining algorithms in terms of perfect qubits and evaluating them on quantum computing simulators executed on supercomputers. It is not a long-term solution, but it allows universities to pursue research on quantum logic and algorithms today, and companies to start building their internal know-how on quantum solutions.
The Covid-19 pandemic has caused severe damage and disruption to social, economic, and health systems (among others), and has posed unprecedented challenges to public health and policy/decision-makers concerning the design and implementation of measures to mitigate its strong negative impacts. The Portuguese health authorities currently use decision-analysis-like techniques to assess the impact of the pandemic and to implement measures for each county, region, or the country as a whole. These decision tools have drawn criticism, and many stakeholders have asked for novel approaches, in particular ones that take into account dynamical changes in pandemic behavior arising, e.g., from new virus variants or vaccines. A multidisciplinary team formed by researchers of the Covid-19 Committee of Instituto Superior T\'ecnico at Universidade de Lisboa (the CCIST analyst team) and medical doctors from the Crisis Office of the Portuguese Medical Association (the GCOM expert team) joined efforts to propose a new tool to help politicians and decision-makers combat the pandemic. This paper presents the main steps and elements that led to the construction of a pandemic impact assessment composite indicator, applied to the particular case of Covid-19 in Portugal. A multiple criteria approach based on an additive multi-attribute value theory (MAVT) aggregation model was used to construct the pandemic assessment composite indicator (PACI). The parameters of the additive model were elicited through a sociotechnical, co-constructive, interactive process between the CCIST and GCOM team members. The deck of cards method was the technical tool adopted to help build the value functions and assess the criteria weights.
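To make the aggregation step concrete, here is a minimal sketch of an additive MAVT composite indicator with piecewise-linear value functions and normalized weights. All criteria, breakpoints, and weights below are hypothetical placeholders; the actual PACI parameters were co-constructed by the CCIST and GCOM teams through the deck of cards method and are not reproduced here.

```python
# Minimal additive MAVT aggregation sketch: V(a) = sum_i w_i * v_i(g_i(a)).
# Criteria, breakpoints, and weights are hypothetical placeholders.
import numpy as np

# Piecewise-linear value functions: (criterion-scale breakpoints, values in [0, 1])
value_functions = {
    "incidence": ([0, 120, 240, 480], [1.0, 0.7, 0.3, 0.0]),   # cases per 100k
    "rt":        ([0.8, 1.0, 1.2, 1.5], [1.0, 0.6, 0.2, 0.0]), # reproduction number
    "icu_load":  ([0, 100, 200, 300], [1.0, 0.7, 0.3, 0.0]),   # occupied ICU beds
}
weights = {"incidence": 0.40, "rt": 0.35, "icu_load": 0.25}    # sum to 1

def value(criterion, performance):
    """Evaluate the piecewise-linear value function v_i by interpolation."""
    xs, vs = value_functions[criterion]
    return float(np.interp(performance, xs, vs))

def composite_indicator(performances):
    """Additive aggregation of single-criterion values."""
    return sum(weights[c] * value(c, g) for c, g in performances.items())

print(composite_indicator({"incidence": 200, "rt": 1.1, "icu_load": 150}))
```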
Solving the time-dependent Schr\"odinger equation is an important application area for quantum algorithms. We consider Schr\"odinger's equation in the semi-classical regime. Here the solutions exhibit strong multiple-scale behavior due to a small parameter $\hbar$, in the sense that the dynamics of the quantum states and the induced observables can occur on different spatial and temporal scales. Such a Schr\"odinger equation finds many applications, including in Born-Oppenheimer molecular dynamics and Ehrenfest dynamics. This paper considers quantum analogues of pseudo-spectral (PS) methods on classical computers. Estimates of the gate counts in terms of $\hbar$ and the precision $\varepsilon$ are obtained. It is found that the number of required qubits, $m$, scales only logarithmically with respect to $\hbar$. When the solution has bounded derivatives up to order $\ell$, the symmetric Trotter method has gate complexity $\mathcal{O}\Big({ (\varepsilon \hbar)^{-\frac12} \mathrm{polylog}(\varepsilon^{-\frac{3}{2\ell}} \hbar^{-1-\frac{1}{2\ell}})}\Big),$ provided that the diagonal unitary operators in the pseudo-spectral methods can be implemented with $\mathrm{poly}(m)$ operations. When physical observables are the desired outcomes, however, the step size in the time integration can be chosen independently of $\hbar$. The gate complexity in this case is reduced to $\mathcal{O}\Big({\varepsilon^{-\frac12} \mathrm{polylog}( \varepsilon^{-\frac3{2\ell}} \hbar^{-1} )}\Big),$ with $\ell$ again indicating the smoothness of the solution.
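For orientation, the sketch below is a minimal classical split-step Fourier (pseudo-spectral) solver with symmetric Trotter/Strang splitting for $i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2}\partial_x^2 \psi + V(x)\psi$; the potential, grid, and initial state are illustrative choices, and the paper's quantum circuit construction and gate-count analysis are not reproduced.

```python
# Classical split-step Fourier (pseudo-spectral) solver with symmetric
# Trotter/Strang splitting for i*hbar*psi_t = -(hbar^2/2) psi_xx + V(x) psi.
# Grid, potential, and initial state are illustrative; this is the classical
# baseline, not the quantum algorithm analysed in the paper.
import numpy as np

hbar = 0.01                       # small semiclassical parameter
n = 2 ** 10                       # grid points (m = 10 qubits in the quantum analogue)
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi
V = 0.5 * x ** 2                  # example potential
dt, steps = 1e-3, 1000

# WKB-type initial state with O(1/hbar) phase oscillations
psi = np.exp(-25 * x ** 2) * np.exp(1j * np.sin(x) / hbar)
psi /= np.linalg.norm(psi)

half_V = np.exp(-1j * V * dt / (2 * hbar))     # potential half-step (diagonal in x)
full_T = np.exp(-1j * hbar * k ** 2 * dt / 2)  # kinetic full step (diagonal in k)

for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft(full_T * np.fft.fft(psi))
    psi = half_V * psi

print(np.sum(x * np.abs(psi) ** 2))            # approximate observable <x> on the grid
```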
Emerging quantum algorithms for problems such as element distinctness, subset sum, and closest pair demonstrate computational advantages by relying on abstract data structures. Practically realizing such an algorithm as a program for a quantum computer requires an efficient implementation of the data structure whose operations correspond to unitary operators that manipulate quantum superpositions of data. To correctly operate in superposition, an implementation must satisfy three properties -- reversibility, history independence, and bounded-time execution. Standard implementations, such as representing an abstract set as a hash table, fail these properties, calling for tools to develop specialized implementations. In this work, we present Core Tower, the first language for quantum programming with random-access memory. Core Tower enables the developer to implement data structures as pointer-based, linked data. It features a reversible semantics enabling every valid program to be translated to a unitary quantum circuit. We present Boson, the first memory allocator that supports reversible, history-independent, and constant-time dynamic memory allocation in quantum superposition. We also present Tower, a language for quantum programming with inductive data structures. Tower features a type system that bounds all recursion using classical parameters. Using Tower, we implement Ground, the first quantum library of data structures, including lists, stacks, queues, strings, and sets. We provide the first executable implementation of sets that satisfies all three mandated properties of reversibility, history independence, and bounded-time execution.
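As a purely classical illustration (not Tower or Ground code) of why history independence matters: two operation sequences that build the same abstract set must produce bit-identical memory representations, otherwise the corresponding quantum states fail to interfere. The sketch below contrasts an insertion-order-dependent, hash-bucket layout with a canonical one.

```python
# Classical illustration of history independence (not Tower/Ground code):
# the same abstract set {1, 2, 5} built in two different orders.
def insertion_order_repr(items):
    """Hash-table-like layout: the bucket contents record insertion order."""
    buckets = [[] for _ in range(4)]
    for x in items:
        buckets[x % 4].append(x)            # order of appends leaks history
    return tuple(tuple(b) for b in buckets)

def canonical_repr(items):
    """History-independent layout: one representation per abstract set."""
    return tuple(sorted(set(items)))

a, b = [1, 5, 2], [5, 1, 2]                 # 1 and 5 collide in bucket 1
print(insertion_order_repr(a) == insertion_order_repr(b))  # False: layout leaks history
print(canonical_repr(a) == canonical_repr(b))              # True
```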
The Swap gate is a ubiquitous tool for moving information on quantum hardware, yet it can be considered a classical operation because it does not entangle product states. Genuinely quantum operations could outperform Swap for the task of permuting qubits within an architecture, which we call routing. We consider quantum routing in two models: (1) allowing arbitrary two-qubit unitaries, or (2) allowing Hamiltonians with norm-bounded interactions. We lower bound the circuit depth or time of quantum routing in terms of spectral properties of graphs representing the architecture interaction constraints, and give a generalized upper bound for all simple connected $n$-vertex graphs. In particular, we give conditions for a superpolynomial classical-quantum routing separation, which exclude graphs with a small spectral gap and graphs of bounded degree. Finally, we provide examples of a quadratic separation between gate-based and Hamiltonian routing models with a constant number of local ancillas per qubit and of an $\Omega(n)$ speedup if we also allow fast local interactions.
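To give a feel for the graph quantities that such bounds depend on, the short sketch below computes the maximum degree and the normalized-Laplacian spectral gap for a hypothetical architecture graph (a path); it only illustrates the parameters involved and does not reproduce the bounds themselves.

```python
# Compute example graph parameters (max degree, normalized-Laplacian spectral
# gap) for a hypothetical path-graph architecture; illustrative only.
import numpy as np

def path_adjacency(n):
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A

A = path_adjacency(16)
deg = A.sum(axis=1)
L = np.diag(deg) - A                                   # combinatorial Laplacian
L_norm = np.diag(deg ** -0.5) @ L @ np.diag(deg ** -0.5)  # normalized Laplacian

eigs = np.linalg.eigvalsh(L_norm)                      # ascending eigenvalues
print(f"max degree = {deg.max():.0f}, spectral gap = {eigs[1]:.4f}")
```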
The CUR decomposition is a technique for low-rank approximation that selects small subsets of the columns and rows of a given matrix to use as bases for its column and row spaces. It has recently attracted much interest, as it has several advantages over traditional low-rank decompositions based on orthonormal bases. These include the preservation of properties such as sparsity or non-negativity, the ability to interpret data, and reduced storage requirements. The problem of finding the skeleton sets that minimize the norm of the residual error is known to be NP-hard, but classical pivoting schemes such as column-pivoted QR tend to work well in practice. When combined with randomized dimension reduction techniques, classical pivoting-based methods become particularly effective, and have proven capable of very rapidly computing approximate CUR decompositions of large, potentially sparse, matrices. Another class of popular algorithms for computing CUR decompositions is based on drawing the columns and rows randomly from the full index sets, using specialized probability distributions based on leverage scores. Such sampling-based techniques are particularly appealing for very large-scale problems and are well supported by theoretical performance guarantees. This manuscript provides a comparative study of the various randomized algorithms for computing CUR decompositions that have recently been proposed. Additionally, it proposes some modifications and simplifications of the existing algorithms that lead to faster execution times.
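As a concrete, simplified member of the pivoting-plus-sketching family discussed above, the sketch below selects column and row skeletons by column-pivoted QR applied to Gaussian sketches and forms the middle factor by least squares. The rank and oversampling parameter are illustrative, and this is not a reproduction of any specific algorithm compared in the manuscript.

```python
# Simplified randomized CUR: column-pivoted QR (CPQR) on Gaussian sketches
# selects the column and row skeletons; illustrative only.
import numpy as np
from scipy.linalg import qr

def cur(A, k, p=10, seed=0):
    """Rank-k CUR approximation A ~ C @ U @ R via sketched CPQR."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Y = rng.standard_normal((k + p, m)) @ A          # short, wide sketch of A
    _, _, cols = qr(Y, mode='economic', pivoting=True)
    Z = A @ rng.standard_normal((n, k + p))          # tall, thin sketch of A
    _, _, rows = qr(Z.T, mode='economic', pivoting=True)
    C, R = A[:, cols[:k]], A[rows[:k], :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)    # middle factor by least squares
    return C, U, R

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 400))  # rank-20 test matrix
C, U, R = cur(A, k=20)
print("relative error:", np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))
```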
A long line of research on connectivity in the Massively Parallel Computation (MPC) model has culminated in the seminal works of Andoni et al. [FOCS'18] and Behnezhad et al. [FOCS'19]. They provide a randomized algorithm for low-space MPC with round complexity $O(\log D + \log \log_{\frac m n} n)$, conjectured to be optimal, using $O(m)$ space, for graphs on $n$ vertices with $m$ edges and diameter $D$. Surprisingly, a recent result of Coy and Czumaj [STOC'22] shows how to achieve the same bound deterministically. Unfortunately, however, their algorithm suffers from large local computation time. We present a deterministic connectivity algorithm that matches all the parameters of the randomized algorithm and, in addition, significantly reduces the local computation time to nearly linear. Our derandomization method is based on reducing the amount of randomness needed to allow for a simpler efficient search. While similar randomness-reduction approaches have been used before, our result is not only strikingly simpler, but also the first to achieve efficient local computation. We therefore believe it can serve as a starting point for the systematic development of computation-efficient derandomization approaches in low-memory MPC.
Lensless imaging seeks to replace or remove the lens in a conventional imaging system. The earliest cameras were in fact lensless, relying on long exposure times to form images on the other end of a small aperture in a darkened room or container (camera obscura). The introduction of a lens allowed for more light throughput and therefore shorter exposure times, while retaining sharp focus. The incorporation of digital sensors readily enabled the use of computational imaging techniques to post-process and enhance raw images (e.g., via deblurring, inpainting, denoising, sharpening). Recently, imaging scientists have started leveraging computational imaging as an integral part of lensless imaging systems, allowing them to form viewable images from the highly multiplexed raw measurements of lensless cameras (see [5] and references therein for a comprehensive treatment of lensless imaging). This represents a real paradigm shift in camera system design, as there is more flexibility to tailor the hardware to the application at hand (e.g., lightweight or flat designs). This increased flexibility comes, however, at the price of more demanding post-processing of the raw digital recordings and a tighter integration of sensing and computation, which is often difficult to achieve in practice due to inefficient interactions between the various communities of scientists involved. With LenslessPiCam, we provide an easily accessible hardware and software framework to enable researchers, hobbyists, and students to implement and explore practical and computational aspects of lensless imaging. We also provide detailed guides and exercises so that LenslessPiCam can be used as an educational resource, and point to results from our graduate-level signal processing course.
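As a self-contained toy example of the kind of post-processing involved (and not the LenslessPiCam API itself), the sketch below simulates a lensless measurement as the convolution of a scene with a known multiplexing point spread function and recovers the scene with Tikhonov-regularized (Wiener-like) deconvolution in the Fourier domain; the PSF, noise level, and regularization weight are arbitrary choices.

```python
# Toy lensless-imaging pipeline (not the LenslessPiCam API): measurement =
# scene (*) psf + noise, recovered by Tikhonov-regularized deconvolution.
import numpy as np

rng = np.random.default_rng(0)
n = 128
scene = np.zeros((n, n))
scene[40:60, 50:90] = 1.0                          # simple synthetic scene

psf = (rng.random((n, n)) < 0.01).astype(float)    # sparse random "multiplexing" PSF
psf /= psf.sum()

H = np.fft.fft2(psf)
measurement = np.real(np.fft.ifft2(H * np.fft.fft2(scene)))
measurement += 1e-3 * rng.standard_normal((n, n))  # sensor noise

lam = 1e-3                                         # regularization weight
estimate = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(measurement)
                                / (np.abs(H) ** 2 + lam)))

err = np.linalg.norm(estimate - scene) / np.linalg.norm(scene)
print(f"relative reconstruction error: {err:.3f}")
```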
In recent years, there has been increasing research interest in exploiting application-specific hardware for solving optimisation problems. Examples of solvers that use specialised hardware are IBM's Quantum System One, D-Wave's Quantum Annealer (QA), and Fujitsu's Digital Annealer (DA). These solvers have been developed to optimise problems faster than traditional meta-heuristics implemented on general-purpose machines. Previous research has shown that these solvers can optimise many problems much faster than exact solvers such as GUROBI and CPLEX. Such conclusions have not been drawn when comparing hardware solvers with classical evolutionary algorithms. Making a fair comparison between traditional evolutionary algorithms, such as the Genetic Algorithm (GA), and the DA (or other similar solvers) is challenging because the latter benefits from application-specific hardware while evolutionary algorithms are often implemented on general-purpose machines. Moreover, quantum or quantum-inspired solvers are limited to solving problems in a specific format. A common formulation is Quadratic Unconstrained Binary Optimisation (QUBO). Many optimisation problems are, however, constrained and have natural representations that are non-binary. Converting such problems to QUBO can increase problem difficulty and/or enlarge the search space. The question addressed in this paper is whether quantum or quantum-inspired solvers can optimise QUBO transformations of combinatorial optimisation problems faster than classical evolutionary algorithms applied to the same problems in their natural representations. We show that the DA often presents better average objective function values than the GA on Travelling Salesman, Quadratic Assignment, and Multi-dimensional Knapsack Problem instances.
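To illustrate the kind of transformation in question, the sketch below converts a tiny 0/1 knapsack instance (naturally a constrained, non-QUBO problem) into QUBO form by adding binary slack variables and a quadratic penalty, then checks the result by brute force. The instance, penalty weight, and slack encoding are illustrative choices only.

```python
# Convert a tiny 0/1 knapsack instance to QUBO form (min z^T Q z over binary z)
# using binary slack variables and a quadratic penalty; illustrative only.
import itertools
import numpy as np

values, weights, capacity = [10, 13, 7, 8], [5, 6, 3, 4], 10
n = len(values)
slack_bits = int(np.ceil(np.log2(capacity + 1)))     # binary-encoded slack s
N = n + slack_bits
P = 2 * max(values)                                  # heuristic penalty strength

# Constraint rewritten as an equality: sum_i w_i x_i + s = capacity
coef = np.array(weights + [2 ** j for j in range(slack_bits)], dtype=float)
lin = np.array(values + [0] * slack_bits, dtype=float)

# Objective: -sum_i v_i x_i + P * (coef . z - capacity)^2, expanded into Q
# (binary variables satisfy z_i^2 = z_i, so linear terms sit on the diagonal;
#  the constant P * capacity^2 is dropped as it does not affect the argmin).
Q = P * np.outer(coef, coef)
Q[np.diag_indices(N)] += -lin - 2 * P * capacity * coef

best = min(itertools.product([0, 1], repeat=N),
           key=lambda z: np.array(z) @ Q @ np.array(z))
x = best[:n]
print("selection:", x, "value:", sum(v * b for v, b in zip(values, x)))
```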
An approach is presented that treats decision theory as a probabilistic theory based on quantum techniques. Precise definitions are given and a thorough analysis is carried out of the quantum probabilities describing the choice between separate alternatives, of sequential alternatives described by conditional quantum probabilities, and of behavioral quantum probabilities that take into account the rational-irrational duality of decision making. The comparison between quantum and classical probabilities is explained. The analysis demonstrates that quantum probabilities serve as an essentially more powerful tool for characterizing various decision-making situations, including the influence of psychological behavioral effects.
We propose and demonstrate a new approach for fast and accurate surrogate modelling of urban drainage system hydraulics based on physics-guided machine learning. The surrogates are trained against a limited set of simulation results from a hydrodynamic (high-fidelity, HiFi) model. Our approach reduces simulation times by one to two orders of magnitude compared to the HiFi model. It is thus slower than, e.g., conceptual hydrological models, but it enables simulations of water levels, flows, and surcharges in all nodes and links of a drainage network and thus largely preserves the level of detail provided by HiFi models. Comparing time series simulated by the surrogate and the HiFi model, $R^2$ values on the order of 0.9 are achieved. Surrogate training times are currently on the order of one hour. However, they can likely be reduced through the application of transfer learning and graph neural networks. Our surrogate approach will be useful for interactive workshops in the initial design phases of urban drainage systems, as well as for real-time applications. In addition, our model formulation is generic, and future research should investigate its application to the simulation of other water systems.
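As a generic illustration of the physics-guided idea only (not the surrogate architecture or training setup used in this study), the sketch below fits a one-parameter surrogate for a single storage node against synthetic "HiFi" data while penalizing violations of the continuity (mass-balance) equation; the toy reservoir, noise level, and loss weights are hypothetical.

```python
# Toy physics-guided training loop (illustrative only; not the surrogate
# architecture or training setup used in the study). A one-parameter
# surrogate q_out = theta * V is fitted to synthetic "HiFi" reservoir data,
# with an extra penalty on violations of continuity, dV/dt = q_in - q_out.
import numpy as np

# Synthetic "HiFi" data: linear reservoir dV/dt = q_in - V / K_true
dt, K_true, steps = 60.0, 1800.0, 500
rng = np.random.default_rng(0)
q_in = 0.05 + 0.04 * rng.random(steps)                    # inflow (m^3/s)
V = np.zeros(steps)
for t in range(steps - 1):
    V[t + 1] = V[t] + dt * (q_in[t] - V[t] / K_true)
q_out = V / K_true + 1e-3 * rng.standard_normal(steps)    # noisy outflow observations

# Physics-guided fit: data misfit + weighted continuity residual
theta, lr, lam = 0.0, 1e-5, 1.0
dVdt = np.gradient(V, dt)
for _ in range(2000):
    pred = theta * V
    data_res = pred - q_out                               # match observed outflow
    phys_res = dVdt - (q_in - pred)                       # continuity residual
    grad = 2 * np.mean((data_res + lam * phys_res) * V)
    theta -= lr * grad

print(f"learned outflow coefficient: {theta:.6f}  (true 1/K = {1/K_true:.6f})")
```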