Under suitable assumptions, the algorithms in [Lin, Tong, Quantum 2020] can estimate the ground state energy and prepare the ground state of a quantum Hamiltonian with near-optimal query complexities. However, they rely on a block encoding input model of the Hamiltonian, whose implementation is known to require a large resource overhead. We develop a tool called quantum eigenvalue transformation of unitary matrices with real polynomials (QET-U), which uses controlled Hamiltonian evolution as its input model, a single ancilla qubit, and no multi-qubit control operations, and is therefore suitable for early fault-tolerant quantum devices. This leads to a simple quantum algorithm for estimating the ground state energy that outperforms all previous algorithms with a comparable circuit structure. For a class of quantum spin Hamiltonians, we propose a new method that exploits certain anti-commutation relations and further removes the need to implement the controlled Hamiltonian evolution. Coupled with a Trotter-based approximation of the Hamiltonian evolution, the resulting algorithm is well suited to early fault-tolerant quantum devices. We demonstrate the performance of the algorithm using IBM Qiskit for the transverse-field Ising model. If we are further allowed to use multi-qubit Toffoli gates, we can then implement amplitude amplification and a new binary amplitude estimation algorithm, which increases the circuit depth but decreases the total query complexity. The resulting algorithm saturates the near-optimal complexity for ground state preparation and energy estimation using a constant number of ancilla qubits (no more than 3).
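The controlled-Hamiltonian-evolution input model with a single ancilla can be illustrated by the textbook Hadamard test, which is the primitive such algorithms build on. The following is a minimal numpy sketch of the ideal signal such a circuit measures (a toy 2x2 Hamiltonian, not the paper's QET-U construction):

```python
import numpy as np

# Illustrative sketch (NOT the paper's QET-U): an ideal Hadamard test uses one
# ancilla and controlled time evolution exp(-iHt) to estimate
# Re<psi|exp(-iHt)|psi>, from which spectral information about H is extracted.

def hadamard_test_expectation(H, psi, t):
    """Exact value of Re<psi|exp(-iHt)|psi>, the mean of the +1/-1 ancilla
    measurement outcome in an ideal Hadamard test."""
    eigvals, eigvecs = np.linalg.eigh(H)
    U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T
    return np.real(np.vdot(psi, U @ psi))

# Toy 2x2 Hamiltonian (Pauli Z): ground state |1> with energy -1.
H = np.diag([1.0, -1.0])
psi = np.array([0.0, 1.0])            # exact ground state
for t in (0.5, 1.0, 2.0):
    # For an eigenstate with energy E the signal is cos(E*t), here cos(t).
    print(t, hadamard_test_expectation(H, psi, t))
```

Sampling this signal at several times and post-processing it is the general flavor of single-ancilla energy estimation; QET-U itself applies a polynomial transformation to the evolution operator.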
In this paper, we present a novel class of high-order energy-preserving schemes for solving the Zakharov-Rubenchik equations. The main idea of the schemes is first to introduce a quadratic auxiliary variable that transforms the Hamiltonian energy into a modified quadratic energy; the original system is then reformulated into an equivalent system that satisfies the mass, the modified energy, and two linear invariants. A symplectic Runge-Kutta method in time, together with the Fourier pseudo-spectral method in space, is employed to compute the solution of the reformulated system. The main benefit of the proposed schemes is that they can achieve arbitrarily high-order accuracy in time while conserving the mass, the Hamiltonian energy, and the two linear invariants. In addition, an efficient fixed-point iteration is proposed to solve the resulting nonlinear equations of the proposed schemes. Several numerical experiments are presented to validate the theoretical results.
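The mechanism behind such schemes is that symplectic (Gauss) Runge-Kutta methods conserve quadratic invariants exactly, which is why recasting the Hamiltonian energy in quadratic form pays off. A minimal sketch on a toy problem (the harmonic oscillator, not the Zakharov-Rubenchik system) using the simplest Gauss method, the implicit midpoint rule, together with a fixed-point iteration for the implicit stage:

```python
import numpy as np

# Toy sketch: the implicit midpoint rule is the 1-stage Gauss symplectic
# Runge-Kutta method; it conserves quadratic invariants exactly, which is the
# property the quadratic auxiliary-variable reformulation exploits.

def midpoint_step(y, h, f, iters=50):
    """One step of y_{n+1} = y_n + h*f((y_n + y_{n+1})/2), with the implicit
    equation solved by simple fixed-point iteration."""
    z = y.copy()
    for _ in range(iters):
        z = y + h * f(0.5 * (y + z))
    return z

# Harmonic oscillator q' = p, p' = -q, quadratic energy E = (q^2 + p^2)/2.
f = lambda y: np.array([y[1], -y[0]])
y = np.array([1.0, 0.0])
E0 = 0.5 * np.dot(y, y)
for _ in range(1000):
    y = midpoint_step(y, 0.1, f)
print(abs(0.5 * np.dot(y, y) - E0))   # energy drift stays at roundoff level
```

Higher-stage Gauss methods give the arbitrarily high temporal order claimed in the abstract while retaining the same conservation property.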
Neutral atoms are a promising choice for scalable quantum computing architectures. Features such as long-distance interactions and native multiqubit gates offer reductions in communication costs and operation count. However, the trapped atoms used as qubits can be lost over the course of a computation and due to adverse environmental factors. A lost computation qubit cannot be recovered: the array must be reloaded and the computation rerun, greatly increasing the number of runs of a circuit. Existing software mitigation strategies exhaust the circuit's original mapped locations slowly and create increasingly spread-out clusters of qubits across the architecture, decreasing the probability of success. We increase flexibility by developing strategies that find all reachable qubits, rather than only adjacent hardware qubits. Second, we divide the architecture into separate sections and run the circuit in each section, free of lost atoms. Provided the architecture is large enough, this resets the circuit without having to reload the entire architecture. This increases the number of effective shots before reloading by a factor of two for a circuit that utilizes 30% of the architecture. We also explore using these sections to parallelize the execution of circuits, reducing the overall runtime by a total of 50% for a 30-qubit circuit. These techniques contribute a dynamic new set of strategies to combat the detrimental effects of lost computational space.
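The "all reachable qubits" idea can be pictured as a graph search over the atom array rather than a check of immediate neighbors only. A hedged sketch (the grid size, connectivity model, and function names here are illustrative assumptions, not the paper's implementation):

```python
from collections import deque

# Hypothetical sketch: instead of considering only hardware-adjacent sites,
# a BFS finds every loaded site reachable from a starting location, routing
# around sites whose atoms have been lost.

def reachable_sites(grid_w, grid_h, lost, start):
    """BFS over a grid_w x grid_h atom array; `lost` is a set of (x, y)
    sites whose atoms are gone. Returns all usable sites reachable from start."""
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < grid_w and 0 <= ny < grid_h
                    and (nx, ny) not in lost and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return seen

lost = {(1, 0), (1, 1), (1, 2)}       # a lost column splits a 3x3 array
print(len(reachable_sites(3, 3, lost, (0, 0))))   # -> 3 (left column only)
```

Partitioning the array into sections then amounts to running such a search within each section's bounds.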
Powerful hardware services and software libraries are vital tools for quickly and affordably designing, testing, and executing quantum algorithms. A robust large-scale study of how the performance of these platforms scales with the number of qubits is key to providing quantum solutions to challenging industry problems. Such an evaluation is difficult owing to the availability and price of physical quantum processing units. This work benchmarks the runtime and accuracy of a representative sample of specialized high-performance simulated and physical quantum processing units. Results show that the QMware cloud computing service can reduce the runtime for executing a quantum circuit by up to 78% compared to the next fastest option for algorithms with fewer than 27 qubits. The AWS SV1 simulator offers a runtime advantage for larger circuits, up to the maximum of 34 qubits available with SV1. Beyond this limit, QMware provides the ability to execute circuits as large as 40 qubits. Physical quantum devices, such as Rigetti's Aspen-M2, can provide an exponential runtime advantage for circuits with more than 30 qubits. However, the high financial cost of physical quantum processing units presents a serious barrier to practical use. Moreover, of the four quantum devices tested, only IonQ's Harmony achieves high fidelity with more than four qubits. This study paves the way to understanding the optimal combination of available software and hardware for executing practical quantum algorithms.
The present work is devoted to the eigenvalue asymptotic expansion of the Toeplitz matrix $T_{n}(a)$ whose generating function $a$ is complex-valued and has a power singularity at one point. As a consequence, $T_{n}(a)$ is non-Hermitian, and eigenvalue computation is known to be a non-trivial task in the non-Hermitian setting for large sizes. We follow the work of Bogoya, B\"ottcher, Grudsky, and Maximenko and deduce a complete asymptotic expansion for the eigenvalues. We then apply matrix-less algorithms, in the spirit of the work by Ekstr\"om, Furci, Garoni, Serra-Capizzano, et al., to compute those eigenvalues. Since the inner and extreme eigenvalues have different asymptotic behaviors, we treat them independently and combine the results to produce a high-precision global matrix-less algorithm. The numerical results are very precise, and the computational cost of the proposed algorithms is independent of the size of the considered matrices for each eigenvalue, which implies a linear cost when the whole spectrum is computed. From the viewpoint of real-world applications, we emphasize that the matrix class under consideration includes the matrices stemming from the numerical approximation of fractional diffusion equations. In the concluding section, a concise discussion of the matter and a few open problems are presented.
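The core of a matrix-less method is to compute eigenvalues of a few small matrices, fit the coefficients of the asymptotic expansion at a fixed grid point, and then predict eigenvalues of much larger matrices at negligible cost. A simplified sketch for a real symmetric toy symbol (not the paper's complex-valued singular case), assuming the standard expansion $\lambda_j(T_n) \approx a(\theta_{j,n}) + c_1(\theta)/n + c_2(\theta)/n^2 + \dots$ with $\theta_{j,n} = j\pi/(n+1)$:

```python
import numpy as np

# Sketch of a matrix-less extrapolation on a toy symmetric banded symbol.
# Small sizes are chosen as n_k = 2^k*(n0+1) - 1 so the grids theta_{j,n}
# share points across sizes.

def banded_toeplitz(coeffs, n):
    """Symmetric banded Toeplitz matrix with first column `coeffs` (padded)."""
    T = coeffs[0] * np.eye(n)
    for k, c in enumerate(coeffs[1:], start=1):
        T += c * (np.eye(n, k=k) + np.eye(n, k=-k))
    return T

# Symbol a(theta) = (2 - 2 cos theta)^2 = 6 - 8 cos theta + 2 cos 2theta.
coeffs = [6.0, -4.0, 1.0]
a = lambda th: (2 - 2 * np.cos(th)) ** 2

n0, levels = 50, 3
sizes = [2 ** k * (n0 + 1) - 1 for k in range(levels)]
j0 = 10                                   # track theta = j0*pi/(n0+1)
theta = j0 * np.pi / (n0 + 1)

errors, hs = [], []
for n in sizes:
    lam = np.linalg.eigvalsh(banded_toeplitz(coeffs, n))
    j = j0 * (n + 1) // (n0 + 1)          # index of the same theta on grid n
    errors.append(lam[j - 1] - a(theta))
    hs.append(1.0 / n)

# Fit c_1, c_2 from the small sizes: errors ~ c_1*h + c_2*h^2, h = 1/n.
V = np.vander(hs, levels, increasing=True)[:, 1:]
c = np.linalg.lstsq(V, np.array(errors), rcond=None)[0]

N = 2 ** 4 * (n0 + 1) - 1                 # a much larger matrix (n = 815)
predicted = a(theta) + sum(ck / N ** (k + 1) for k, ck in enumerate(c))
exact = np.linalg.eigvalsh(banded_toeplitz(coeffs, N))[
    j0 * (N + 1) // (n0 + 1) - 1]
print(abs(predicted - exact))             # extrapolation is accurate
```

In the paper's setting the extreme eigenvalues obey a different expansion, which is why they are handled by a separate branch before the results are merged.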
Extremely large-scale massive MIMO (XL-MIMO) is regarded as a promising technology for future wireless communications. The deployment of XL-MIMO, especially at high-frequency bands, leads to users being located in the near-field region instead of the conventional far-field. This letter proposes efficient model-based deep learning algorithms for estimating the near-field wireless channel of XL-MIMO communications. In particular, we first formulate the XL-MIMO near-field channel estimation task as a compressed sensing problem using a spatial gridding-based sparsifying dictionary, and then solve the resulting problem by applying the Learned Iterative Shrinkage and Thresholding Algorithm (LISTA). Due to the near-field characteristic, the spatial gridding-based sparsifying dictionary may result in low channel estimation accuracy and a heavy computational burden. To address this issue, we further propose a new sparsifying dictionary learning-LISTA (SDL-LISTA) algorithm that formulates the sparsifying dictionary as a neural network layer and embeds it into the LISTA neural network. The numerical results show that our proposed algorithms outperform non-learning benchmark schemes, and that SDL-LISTA achieves better performance than LISTA with a tenfold reduction in the number of dictionary atoms.
A detection system, modeled in a graph, uses "detectors" on a subset of vertices to uniquely identify an "intruder" at any vertex. We consider two types of detection systems: open-locating-dominating (OLD) sets and identifying codes (ICs). An OLD set gives each vertex a unique, non-empty open neighborhood of detectors, while an IC provides a unique, non-empty closed neighborhood of detectors. We explore their fault-tolerant variants: redundant OLD (RED:OLD) sets and redundant ICs (RED:ICs), which ensure that removing or disabling at most one detector still guarantees the properties of OLD sets and ICs, respectively. This paper focuses on constructing optimal RED:OLD sets and RED:ICs on the infinite king's grid, and presents proofs of the bounds on their minimum densities: [3/10, 1/3] for RED:OLD sets and [3/11, 1/3] for RED:ICs.
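The defining OLD property is easy to check mechanically: every vertex's set of detectors in its open neighborhood (its "code") must be non-empty and distinct from every other vertex's. A hedged sketch of such a checker on a small toroidal king's grid (a finite stand-in for the infinite grid; the examples are illustrative, not the paper's constructions):

```python
# Brute-force checker for the OLD-set property on a w x h toroidal king's grid.
# Each vertex must see a non-empty, globally unique set of detectors in its
# OPEN neighborhood (the 8 king-move neighbors, excluding the vertex itself).

def king_open_neighbors(v, w, h):
    x, y = v
    return {((x + dx) % w, (y + dy) % h)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)}

def is_old_set(detectors, w, h):
    codes = []
    for x in range(w):
        for y in range(h):
            code = frozenset(king_open_neighbors((x, y), w, h) & detectors)
            if not code:                  # domination fails
                return False
            codes.append(code)
    return len(codes) == len(set(codes))  # location (uniqueness) check

all_v = {(x, y) for x in range(4) for y in range(4)}
print(is_old_set(all_v, 4, 4))    # True: every open neighborhood is distinct
cols = {(x, y) for x in (0, 2) for y in range(4)}
print(is_old_set(cols, 4, 4))     # False: (1,y) and (3,y) receive equal codes
```

The redundant (RED:OLD) variant would additionally require the property to survive the removal of any single detector, which the same checker can verify by looping over one-detector deletions.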
We describe a new, adaptive solver for the two-dimensional Poisson equation in complicated geometries. Using classical potential theory, we represent the solution as the sum of a volume potential and a double layer potential. Rather than evaluating the volume potential over the given domain, we first extend the source data to a geometrically simpler region with high order accuracy. This allows us to accelerate the evaluation of the volume potential using an efficient, geometry-unaware fast multipole-based algorithm. To impose the desired boundary condition, it remains only to solve the Laplace equation with suitably modified boundary data. This is accomplished with existing fast and accurate boundary integral methods. The novelty of our solver is the scheme used for creating the source extension, assuming it is provided on an adaptive quad-tree. For leaf boxes intersected by the boundary, we construct a universal "stencil" and require that the data be provided at the subset of those points that lie within the domain interior. This universality permits us to precompute and store an interpolation matrix which is used to extrapolate the source data to an extended set of leaf nodes with full tensor-product grids on each. We demonstrate the method's speed, robustness and high-order convergence with several examples, including domains with piecewise smooth boundaries.
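The precomputed-matrix extension step can be illustrated in one dimension: from source values at the stencil points inside the domain, fit a polynomial in a least-squares sense and evaluate it on the full extended grid; the composite map is a single matrix that can be stored and reused. This is a hedged 1-D sketch of the general idea, not the solver's actual 2-D stencil or basis:

```python
import numpy as np

# Toy 1-D analogue of the source-extension step: data known on the interior
# subset of a stencil is extrapolated to an extended grid through one
# precomputable matrix E, so the extension costs a single matrix-vector product.

def extension_matrix(x_in, x_full, degree):
    """Matrix E with (E @ f(x_in)) ~ f(x_full) for smooth f: least-squares
    polynomial fit on x_in composed with evaluation on x_full."""
    V_in = np.vander(x_in, degree + 1, increasing=True)
    V_full = np.vander(x_full, degree + 1, increasing=True)
    return V_full @ np.linalg.pinv(V_in)

x_in = np.linspace(0.0, 0.5, 8)       # samples available inside the domain
x_full = np.linspace(0.0, 0.6, 16)    # extended grid (modest margin outside)
E = extension_matrix(x_in, x_full, degree=7)

f = lambda x: np.exp(x) * np.sin(3 * x)
err = np.max(np.abs(E @ f(x_in) - f(x_full)))
print(err)                            # small: high-order extension of smooth data
```

In the solver, the universal stencil makes one such matrix reusable for every boundary-cut leaf box, which is what makes the precomputation worthwhile.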
Several cryptosystems based on the \emph{Ring Learning with Errors} (RLWE) problem have been proposed within the NIST post-quantum cryptography standardization process, e.g., NewHope. Furthermore, there are systems like Kyber which are based on the closely related MLWE assumption. Both previously mentioned schemes result in a non-zero decryption failure rate (DFR). The combination of encryption and decryption for these kinds of algorithms can be interpreted as data transmission over a noisy channel. To the best of our knowledge, this paper is the first work that analyzes the capacity of this channel. We show how to modify the encryption schemes such that the input alphabets of the corresponding channels are increased. In particular, we present lower bounds on their capacities which show that the transmission rate can be significantly increased compared to standard proposals in the literature. Furthermore, under the common assumption of stochastically independent coefficient failures, we give lower bounds on achievable rates based on both the Gilbert-Varshamov bound and concrete code constructions using BCH codes. By means of our constructions, we can either increase the total bitrate (by a factor of $1.84$ for Kyber and by a factor of $7$ for NewHope) while guaranteeing the same DFR, or, for the same bitrate, we can significantly reduce the DFR for all schemes considered in this work (e.g., for NewHope from $2^{-216}$ to $2^{-12769}$).
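Under the stated assumption of stochastically independent coefficient failures, the coding gain is a binomial-tail computation: with a $t$-error-correcting code over $n$ transmitted coefficients, decryption fails only if more than $t$ coefficients fail. A back-of-the-envelope sketch (the per-coefficient failure rate and code parameters below are hypothetical, not those of Kyber or NewHope):

```python
from math import comb

# With a t-error-correcting code of length n and independent per-coefficient
# failure probability p, the block decryption failure rate is the binomial
# tail P(more than t failures out of n).

def dfr_with_code(n, t, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1, n + 1))

p = 1e-4                  # hypothetical per-coefficient failure probability
n, t = 255, 8             # e.g., a length-255 BCH-style code correcting 8 errors

uncoded = 1 - (1 - p) ** n        # any single coefficient failure is fatal
coded = dfr_with_code(n, t, p)
print(uncoded, coded)             # coding reduces the DFR by many orders of magnitude
```

This is the mechanism behind trading a slightly lower payload per ciphertext for a drastically smaller DFR, or conversely packing more bits at the same DFR.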
We study a novel graph path planning problem for multiple agents that may crash at runtime and thereby block part of the workspace. In our setting, agents can detect neighboring crashed agents and change their followed paths at runtime. The objective is then to prepare a set of paths and switching rules for each agent, ensuring that all correct agents reach their destinations without collisions or deadlocks, despite unforeseen crashes of other agents. Such planning is attractive for building reliable multi-robot systems. We present a formalization of the problem, a theoretical analysis including its computational complexity, and methods for solving this offline planning problem.
Quantum computing (QC) has received considerable attention owing to its small number of trainable parameters and the computational speedups enabled by qubits. Moreover, various researchers have tried to enable quantum machine learning (QML) using QC, including multifarious efforts to use QC to implement quantum multi-agent reinforcement learning (QMARL). Existing classical multi-agent reinforcement learning (MARL) using neural networks suffers from non-stationarity and uncertainty due to its large number of parameters. Therefore, this paper presents a visual simulation software framework for a novel QMARL algorithm that controls autonomous multi-drone systems, taking advantage of QC. Our proposed QMARL framework achieves reasonable reward convergence and service quality performance with fewer trainable parameters than classical MARL. Furthermore, QMARL shows more stable training results than existing MARL algorithms. Lastly, our proposed visual simulation software allows us to analyze the agents' training process and results.