Many eigenvalue problems arising in practice are of the generalized form $A\mathbf{x}=\lambda B\mathbf{x}$. One particularly important case is the symmetric one, in which $A$ and $B$ are Hermitian and $B$ is positive definite. The standard approach to this class of eigenvalue problems is to reduce them to Hermitian eigenvalue problems. On a quantum computer, quantum phase estimation is a useful technique for solving Hermitian eigenvalue problems. In this work, we propose a new quantum algorithm for symmetric generalized eigenvalue problems using ordinary differential equations. The algorithm has lower complexity than the standard one based on quantum phase estimation. Moreover, it works for a wider class than the symmetric case: $B$ is invertible, $B^{-1}A$ is diagonalizable, and all the eigenvalues are real.
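As a point of reference for the reduction mentioned above, the following is a minimal classical sketch (not the paper's quantum algorithm) of how a symmetric generalized eigenvalue problem $A\mathbf{x}=\lambda B\mathbf{x}$ is reduced to a Hermitian eigenvalue problem via a Cholesky factorization of $B$:

```python
# Classical reduction A x = lambda B x  ->  (L^{-1} A L^{-*}) y = lambda y,
# assuming A, B Hermitian and B positive definite; for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2                     # Hermitian A
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = M @ M.conj().T + n * np.eye(n)           # Hermitian positive definite B

L = np.linalg.cholesky(B)                    # B = L L^*
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.conj().T                 # Hermitian, same eigenvalues as B^{-1} A
eigvals = np.linalg.eigvalsh(C)

# Cross-check against a direct (non-Hermitian) solve of B^{-1} A
ref = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
print(np.allclose(eigvals, ref))
```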
We present an event-driven simulation package called QuISP for large-scale quantum networks, built on top of the OMNeT++ discrete event simulation framework. Although the behavior of quantum networking devices has been revealed by recent research, it remains an open question how they will work in networks of practical size. QuISP is designed to simulate large-scale quantum networks in order to investigate their behavior under realistic, noisy, and heterogeneous configurations. The protocol architecture we propose enables studies of different choices for error management and other key design decisions. Our confidence in the simulator is supported by comparing its output to analytic results for a small network. A key reason for simulation is to look for emergent behavior when large numbers of individually characterized devices are combined. QuISP can handle thousands of qubits in dozens of nodes on a laptop computer, preparing for full Quantum Internet simulation. This simulator promotes the development of protocols for larger and more complex quantum networks.
The Scheduled Relaxation Jacobi (SRJ) method is a linear solver that greatly improves the convergence of the Jacobi iteration through judiciously chosen relaxation factors (an SRJ scheme) that attenuate the solution error. Until now, the method has primarily been used to accelerate the solution of elliptic PDEs (e.g., the Laplace and Poisson equations), as the currently available schemes are restricted to this class of problems. The goal of this paper is to present a methodology for constructing SRJ schemes that are suitable for solving non-elliptic PDEs (or, equivalently, the nonsymmetric linear systems arising from the discretization of these PDEs), thereby extending the applicability of the method to a broader class of problems. These schemes are obtained by numerically solving a constrained minimization problem which guarantees that the solution error will not grow as long as the linear system has eigenvalues lying in certain regions of the complex plane. We demonstrate that these schemes are able to accelerate the convergence of standard Jacobi iteration for the nonsymmetric linear systems arising from discretization of the 1D and 2D steady advection-diffusion equations.
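For concreteness, a minimal sketch of a relaxed Jacobi iteration that cycles through a fixed set of relaxation factors is shown below; the factors used here are placeholders for illustration, not an SRJ scheme from the paper.

```python
# Relaxed Jacobi sweeps cycling through a prescribed list of relaxation factors.
# The omega values below are illustrative placeholders, not an actual SRJ scheme.
import numpy as np

def srj_solve(A, b, omega_cycle, x0=None, n_cycles=2000):
    """Relaxed Jacobi iteration x <- x + omega * D^{-1} (b - A x),
    cycling through the relaxation factors in omega_cycle."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    d = np.diag(A)
    for _ in range(n_cycles):
        for omega in omega_cycle:
            x = x + omega * (b - A @ x) / d
    return x

# Example: 1D Poisson problem (tridiagonal system from -u'' = f, Dirichlet BCs folded in)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = srj_solve(A, b, omega_cycle=[1.5, 0.6])
print(np.linalg.norm(b - A @ x))   # residual norm after the relaxation cycles
```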
Quantum noise is the key challenge in Noisy Intermediate-Scale Quantum (NISQ) computers. Previous work on mitigating noise has primarily focused on gate-level or pulse-level noise-adaptive compilation. However, limited research effort has explored a higher level of optimization: making the quantum circuits themselves resilient to noise. We propose QuantumNAS, a comprehensive framework for noise-adaptive co-search of the variational circuit and qubit mapping. Variational quantum circuits are a promising approach for quantum machine learning (QML) and quantum simulation. However, finding the best variational circuit and its optimal parameters is challenging due to the large design space and parameter training cost. We propose to decouple the circuit search and parameter training by introducing a novel SuperCircuit. The SuperCircuit is constructed with multiple layers of pre-defined parameterized gates and trained by iteratively sampling and updating parameter subsets (SubCircuits) of it. It provides an accurate estimation of the performance of SubCircuits trained from scratch. Then we perform an evolutionary co-search of the SubCircuit and its qubit mapping. The SubCircuit performance is estimated with parameters inherited from the SuperCircuit and simulated with real device noise models. Finally, we perform iterative gate pruning and finetuning to remove redundant gates. Extensively evaluated with 12 QML and VQE benchmarks on 10 quantum computers, QuantumNAS significantly outperforms baselines. For QML, QuantumNAS is the first to demonstrate over 95% 2-class, 85% 4-class, and 32% 10-class classification accuracy on real quantum computers. It also achieves the lowest eigenvalue for VQE tasks on H2, H2O, LiH, CH4, and BeH2 compared with UCCSD. We also open-source torchquantum (https://github.com/mit-han-lab/pytorch-quantum) for fast training of parameterized quantum circuits to facilitate future research.
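To make the weight-sharing idea concrete, here is a toy sketch of SuperCircuit-style training in which a subset of the shared parameters (a SubCircuit) is sampled and updated at each step; the objective and sampling rule below are placeholders, not the QuantumNAS or torchquantum implementation.

```python
# Toy weight-sharing loop: shared "SuperCircuit" parameters are trained by repeatedly
# sampling a subset of layers (a "SubCircuit") and updating only the sampled parameters.
# The objective is a stand-in for the circuit's task loss under a device noise model.
import numpy as np

rng = np.random.default_rng(0)
n_layers, width = 8, 4
theta = rng.normal(size=(n_layers, width))      # shared SuperCircuit parameters

def objective(layers, params):
    # Placeholder loss standing in for the SubCircuit built from `layers`.
    return float(np.sum(np.sin(params[layers]) ** 2))

lr = 0.1
for step in range(200):
    layers = np.sort(rng.choice(n_layers, size=4, replace=False))   # sample a SubCircuit
    grad = 2 * np.sin(theta[layers]) * np.cos(theta[layers])        # gradient of the stand-in loss
    theta[layers] -= lr * grad                                      # update only the sampled subset

print(objective(np.arange(n_layers), theta))    # loss of the full SuperCircuit after shared training
```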
Subsampling is a general statistical method developed in the 1990s aimed at estimating the sampling distribution of a statistic $\hat \theta _n$ in order to conduct nonparametric inference such as the construction of confidence intervals and hypothesis tests. Subsampling has seen a resurgence in the Big Data era where the standard, full-resample size bootstrap can be infeasible to compute. Nevertheless, even choosing a single random subsample of size $b$ can be computationally challenging with both $b$ and the sample size $n$ being very large. In the paper at hand, we show how a set of appropriately chosen, non-random subsamples can be used to conduct effective -- and computationally feasible -- distribution estimation via subsampling. Further, we show how the same set of subsamples can be used to yield a procedure for subsampling aggregation -- also known as subagging -- that is scalable with big data. Interestingly, the scalable subagging estimator can be tuned to have the same (or better) rate of convergence as compared to $\hat \theta _n$. The paper is concluded by showing how to conduct inference, e.g., confidence intervals, based on the scalable subagging estimator instead of the original $\hat \theta _n$.
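For background, the following is a minimal sketch of classical subsampling with random subsamples for the sample mean; the paper's contribution concerns an appropriately chosen, non-random set of subsamples, which this sketch does not reproduce.

```python
# Classical subsampling: approximate the law of sqrt(n)(theta_hat_n - theta) by the
# subsample statistics sqrt(b)(theta_hat_b - theta_hat_n), then invert for a CI.
import numpy as np

rng = np.random.default_rng(0)
n, b, n_sub = 100_000, 1_000, 200
x = rng.exponential(size=n)
theta_n = x.mean()

roots = []
for _ in range(n_sub):
    idx = rng.choice(n, size=b, replace=False)      # one random subsample of size b
    roots.append(np.sqrt(b) * (x[idx].mean() - theta_n))

# 95% confidence interval for the mean based on the subsampling quantiles
q_lo, q_hi = np.quantile(roots, [0.025, 0.975])
ci = (theta_n - q_hi / np.sqrt(n), theta_n - q_lo / np.sqrt(n))
print(ci)
```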
The security of code-based constructions is usually assessed by Information Set Decoding (ISD) algorithms. In the quantum setting, amplitude amplification yields an asymptotic square-root gain over the classical analogue. However, it is still unclear whether a real quantum circuit could yield actual improvements or would suffer an enormous overhead due to its implementation. This leads to different treatments of these quantum attacks in the security analysis of code-based proposals. In this work we resolve this question by giving the first quantum circuit design of the fully-fledged ISD procedure, an implementation in the quantum simulation library Qibo, as well as precise estimates of its complexities. We show that, contrary to common belief, Prange's ISD algorithm can be implemented rather efficiently on a quantum computer, namely with only a logarithmic overhead in circuit depth compared to a classical implementation. As another major contribution, we leverage the idea of classical co-processors to design hybrid classical-quantum trade-offs that allow the required number of qubits to be tailored to any available amount while still providing quantum speedups. Interestingly, when constraining the width of the circuit instead of its depth, we are able to overcome previous optimality results on constrained quantum search.
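As a reference point for the algorithm whose quantum circuit is designed in the paper, here is a plain classical sketch of Prange's information-set decoding over GF(2); the toy linear-algebra routine and parameters are for illustration only.

```python
# Classical Prange ISD: repeatedly guess that the error is supported on r = n - k
# randomly chosen positions, solve the corresponding GF(2) system, and check the weight.
import numpy as np

def gf2_solve(A, b):
    """Solve A x = b over GF(2) for square A; return None if A is singular."""
    A, b = A.copy() % 2, b.copy() % 2
    m = A.shape[0]
    for col in range(m):
        pivot = next((row for row in range(col, m) if A[row, col]), None)
        if pivot is None:
            return None
        A[[col, pivot]] = A[[pivot, col]]
        b[[col, pivot]] = b[[pivot, col]]
        for row in range(m):
            if row != col and A[row, col]:
                A[row] ^= A[col]
                b[row] ^= b[col]
    return b

def prange(H, s, t, rng, max_iters=10_000):
    r, n = H.shape
    for _ in range(max_iters):
        cols = rng.choice(n, size=r, replace=False)   # guess: error support lies in these columns
        e_cols = gf2_solve(H[:, cols], s)
        if e_cols is not None and e_cols.sum() <= t:  # weight check
            e = np.zeros(n, dtype=np.uint8)
            e[cols] = e_cols
            return e
    return None

# Toy instance: random parity-check matrix, error of weight t, syndrome decoding.
rng = np.random.default_rng(1)
n, r, t = 24, 12, 2
H = rng.integers(0, 2, size=(r, n), dtype=np.uint8)
e_true = np.zeros(n, dtype=np.uint8)
e_true[rng.choice(n, t, replace=False)] = 1
s = H @ e_true % 2
print(prange(H, s, t, rng))
```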
Surrogates, models that mimic the behavior of programs, form the basis of a variety of development workflows. We study three surrogate-based design patterns, evaluating each in case studies on a large-scale CPU simulator. With surrogate compilation, programmers develop a surrogate that mimics the behavior of a program to deploy to end-users in place of the original program. Surrogate compilation accelerates the CPU simulator under study by $1.6\times$. With surrogate adaptation, programmers develop a surrogate of a program, then retrain that surrogate on a different task. Surrogate adaptation decreases the simulator's error by up to $50\%$. With surrogate optimization, programmers develop a surrogate of a program, optimize the input parameters of the surrogate, then plug the optimized input parameters back into the original program. Surrogate optimization finds simulation parameters that decrease the simulator's error by $5\%$ compared to the error induced by expert-set parameters. In this paper we formalize this taxonomy of surrogate-based design patterns. We further describe the programming methodology common to all three design patterns. Our work builds a foundation for the emerging class of workflows based on programming with surrogates of programs.
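The surrogate optimization pattern can be illustrated with a toy example: fit a cheap differentiable surrogate of a black-box program, optimize the surrogate's input, then plug the result back into the original program. The "program" and surrogate below are stand-ins, not the CPU simulator studied in the paper.

```python
# Surrogate optimization pattern: train a surrogate, optimize its inputs, plug back.
import numpy as np

def program(x):                        # expensive black-box program (toy stand-in)
    return (x - 1.7) ** 2 + 0.1 * np.sin(20 * x)

# 1. Collect input/output examples of the program.
xs = np.linspace(-2, 4, 60)
ys = program(xs)

# 2. Fit a differentiable surrogate (here: a quadratic polynomial).
surrogate = np.poly1d(np.polyfit(xs, ys, deg=2))
surrogate_grad = surrogate.deriv()

# 3. Optimize the surrogate's input by gradient descent.
x = 0.0
for _ in range(500):
    x -= 0.01 * surrogate_grad(x)

# 4. Plug the optimized input back into the original program.
print(x, program(x))
```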
There has been much effort to construct good quantum codes from classical error-correcting codes. Constructing new quantum codes using Hermitian self-orthogonal codes seems to be a difficult problem in general. In this paper, Hermitian self-orthogonal codes are studied via algebraic function fields. Sufficient conditions for the Hermitian self-orthogonality of an algebraic geometry code are presented. New Hermitian self-orthogonal codes are constructed from projective lines, elliptic curves, hyper-elliptic curves, Hermitian curves, and Artin-Schreier curves. In addition, over the projective lines, we construct new families of MDS quantum codes with parameters $[[N,N-2K,K+1]]_q$ under the following conditions: i) $N=t(q-1)+1$ or $t(q-1)+2$ with $t|(q+1)$ and $K=\lfloor\frac{t(q-1)+1}{2t}\rfloor+1$; ii) $(n-1)|(q^2-1)$, $N=n$ or $N=n+1$, $K_0=\lfloor\frac{n+q-1}{q+1}\rfloor$, and $K\ge K_0+1$; iii) $N=tq+1$, $\forall~1\le t\le q$ and $K=\lfloor\frac{tq+q-1}{q+1}\rfloor+1$; iv) $n|(q^2-1)$, $n_2=\frac{n}{\gcd (n,q+1)}$, $\forall~ 1\le t\le \frac{q-1}{n_2}-1$, $N=(t+1)n+2$ and $K=\lfloor \frac{(t+1)n+1+q-1}{q+1}\rfloor+1$.
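For instance, by direct substitution into family i) above: taking $q=7$ and $t=4$ (so that $t\mid(q+1)=8$) gives $N=t(q-1)+1=25$ and $K=\lfloor\frac{25}{8}\rfloor+1=4$, i.e., an MDS quantum code with parameters $[[25,\,25-2\cdot 4,\,5]]_7=[[25,17,5]]_7$.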
This contribution focuses on the development of Model Order Reduction (MOR) for one-way coupled steady-state linear thermomechanical problems in a finite element setting. We apply Proper Orthogonal Decomposition (POD) for the computation of the reduced basis space. For the evaluation of the modal coefficients, we use two different methodologies: one based on Galerkin projection (G) and the other based on an Artificial Neural Network (ANN). We aim at comparing POD-G and POD-ANN in terms of relevant features, including errors and computational efficiency. In this context, both physical and geometrical parametrizations are considered. We also carry out a validation of the Full Order Model (FOM) based on customized benchmarks in order to provide a complete computational pipeline. The proposed framework is applied to a relevant industrial problem related to the investigation of thermomechanical phenomena arising in blast furnace hearth walls. Keywords: Thermomechanical problems, Finite element method, Proper orthogonal decomposition, Galerkin projection, Artificial neural network, Geometric and physical parametrization, Blast furnace.
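A minimal sketch of the POD step (building a reduced basis from snapshots and projecting onto it) is given below; the snapshot data is synthetic, not the thermomechanical FOM from the paper.

```python
# POD via SVD of a snapshot matrix: retain the leading left singular vectors as the
# reduced basis, then compute modal coefficients by projection onto that basis.
import numpy as np

rng = np.random.default_rng(0)
n_dofs, n_snapshots, rank = 2_000, 40, 5

# Synthetic snapshot matrix with an approximately low-rank structure.
S = rng.standard_normal((n_dofs, rank)) @ rng.standard_normal((rank, n_snapshots))
S += 1e-3 * rng.standard_normal((n_dofs, n_snapshots))

U, sigma, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
r = int(np.searchsorted(energy, 0.9999)) + 1     # modes retained for 99.99% energy
V = U[:, :r]                                     # reduced basis

# Modal coefficients of one snapshot and its reconstruction error.
coeffs = V.T @ S[:, 0]
print(r, np.linalg.norm(S[:, 0] - V @ coeffs) / np.linalg.norm(S[:, 0]))
```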
In this paper we study and relate several invariants connected to the solving degree of a polynomial system. This provides a rigorous framework for estimating the complexity of solving a system of polynomial equations via Groebner bases methods. Our main results include a connection between the solving degree and the last fall degree and one between the degree of regularity and the Castelnuovo-Mumford regularity.
Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion, using a strongly convex regularizer. This makes it possible to relax both the optimal value and the solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators, and relate them to inference in graphical models. We derive two particular instantiations of our framework, a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
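A minimal sketch of the smoothed-max idea, using the entropy-regularized max (log-sum-exp) inside a DTW-style recursion; this is an illustrative soft-DTW-like computation of the value only, not the paper's full framework or its backward pass.

```python
# Smoothed max via log-sum-exp (entropy regularization) used as a soft min inside
# a DTW-style dynamic programming recursion over two 1D time series.
import numpy as np

def smoothed_max(values, gamma):
    """gamma * log(sum(exp(values / gamma))): differentiable, recovers max as gamma -> 0."""
    values = np.asarray(values, dtype=float)
    m = values.max()
    return m + gamma * np.log(np.exp((values - m) / gamma).sum())

def soft_dtw_value(x, y, gamma=1.0):
    """Smoothed alignment cost of two series, using -smoothed_max(-costs) as a soft min."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost - smoothed_max(
                [-D[i - 1, j], -D[i, j - 1], -D[i - 1, j - 1]], gamma
            )
    return D[n, m]

x = np.array([0.0, 1.0, 2.0, 1.0])
y = np.array([0.0, 1.5, 1.0])
print(soft_dtw_value(x, y, gamma=0.1))
```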