The use of quantum computation for wireless network applications is emerging as a promising paradigm to bridge the performance gap between in-practice and optimal wireless algorithms. While today's quantum technology offers only a limited number of qubits and low-fidelity gates, application-driven quantum solutions help us understand and further improve such technology. This paper introduces QGateD-Polar, a novel Quantum Gate-based Maximum-Likelihood Decoder design for Polar error correction codes, which are becoming widespread in today's 5G and tomorrow's NextG wireless networks. QGateD-Polar uses quantum gates to dictate the time evolution of Polar code decoding -- from the received wireless soft data to the final decoded solution -- by leveraging quantum phenomena such as superposition, entanglement, and interference, making it amenable to quantum gate-based computers. Our early results show that QGateD-Polar achieves Maximum-Likelihood performance in ideal (noiseless) quantum simulations and demonstrate how performance varies with noise.
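To make the decoding objective concrete, the sketch below is a purely classical brute-force Maximum-Likelihood decoder for a toy Polar code (the target that QGateD-Polar aims to match), not the paper's quantum gate construction. The code length, noise level, and information set are illustrative choices.

```python
# Classical brute-force ML decoding of a toy Polar code (reference objective only,
# NOT the QGateD-Polar quantum circuit described in the abstract).
import itertools
import numpy as np

def polar_generator(n):
    """Kronecker power of the 2x2 polar kernel F = [[1,0],[1,1]] (mod 2)."""
    F = np.array([[1, 0], [1, 1]], dtype=int)
    G = np.array([[1]], dtype=int)
    for _ in range(n):
        G = np.kron(G, F) % 2
    return G

def encode(u, G):
    return (u @ G) % 2

def ml_decode(y, G, info_pos, N):
    """Exhaustive ML decoding: pick the info-bit pattern whose BPSK codeword
    is closest (in Euclidean distance) to the received soft vector y."""
    best_u, best_dist = None, np.inf
    for bits in itertools.product([0, 1], repeat=len(info_pos)):
        u = np.zeros(N, dtype=int)
        u[info_pos] = bits
        x = 1 - 2 * encode(u, G)          # BPSK mapping: 0 -> +1, 1 -> -1
        dist = np.sum((y - x) ** 2)
        if dist < best_dist:
            best_u, best_dist = np.array(bits), dist
    return best_u

if __name__ == "__main__":
    n, N = 3, 8                            # code length N = 2^n
    info_pos = [3, 5, 6, 7]                # example information set for N = 8
    G = polar_generator(n)
    rng = np.random.default_rng(0)
    msg = rng.integers(0, 2, size=len(info_pos))
    u = np.zeros(N, dtype=int); u[info_pos] = msg
    y = 1 - 2 * encode(u, G) + 0.5 * rng.standard_normal(N)   # AWGN channel
    print("sent:   ", msg)
    print("decoded:", ml_decode(y, G, info_pos, N))
```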
The quantum internet is envisioned as the ultimate stage of the quantum revolution, surpassing its classical counterpart in various aspects, such as the efficiency of data transmission, the security of network services, and the capability of information processing. Given its disruptive impact on national security and the digital economy, a global race to build scalable quantum networks has already begun. With the joint effort of national governments, industry participants, and research institutes, the development of quantum networks has advanced rapidly in recent years, bringing the first primitive quantum networks within reach. In this work, we aim to provide an up-to-date review of the field of quantum networks from both theoretical and experimental perspectives, contributing to a better understanding of the building blocks required for the establishment of a global quantum internet. We also introduce a newly developed quantum network toolkit to facilitate the exploration and evaluation of innovative ideas. In particular, it provides dual quantum computing engines, supporting simulations in both the quantum circuit and measurement-based models. It also includes a compilation scheme for mapping quantum network protocols onto quantum circuits, enabling their emulation on real-world quantum hardware devices. We showcase the power of this toolkit with several featured demonstrations, including a simulation of the Micius quantum satellite experiment, a test of a four-layer quantum network architecture with resource management, and a quantum emulation of the CHSH game. We hope this work provides a better understanding of the state-of-the-art development of quantum networks and the necessary tools for making further contributions along the way.
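Of the featured demonstrations, the CHSH game is simple enough to reproduce in a few lines. The sketch below is an illustrative NumPy calculation of the quantum winning probability with the standard optimal strategy (shared Bell state, rotated single-qubit measurements), not the toolkit's emulation code; the measurement angles are the textbook choice.

```python
# CHSH game: quantum winning probability with the standard optimal strategy,
# computed exactly from the shared Bell state (classical strategies cap at 0.75).
import numpy as np

def basis(theta):
    """Rotated real measurement basis {|0_theta>, |1_theta>}."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

def win_probability(alice_angles, bob_angles):
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # |Phi+>
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            A, B = basis(alice_angles[x]), basis(bob_angles[y])
            for a in (0, 1):
                for b in (0, 1):
                    amp = np.kron(A[a], B[b]) @ bell
                    if (a ^ b) == (x & y):               # CHSH winning condition
                        total += 0.25 * abs(amp) ** 2    # uniform referee inputs
    return total

print(win_probability([0, np.pi / 4], [np.pi / 8, -np.pi / 8]))  # ~0.854 > 0.75
```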
In the new computing paradigm of quantum computing, researchers from all over the world are taking their first steps in designing quantum circuits for image processing, through a difficult process of knowledge transfer. This effort has given rise to Quantum Image Processing, an emerging research field driven by the powerful parallel computing capabilities of quantum computers. This work goes in this direction and tackles the challenging task of porting a powerful image denoising method, the Total Variation (TV) model, to a quantum environment. The proposed Quantum TV is described and its sub-components are analysed. Despite the natural limitations of current quantum devices, the experimental results show competitive denoising performance compared to the classical variational TV counterpart.
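For reference, the classical baseline the abstract compares against can be sketched compactly. The following is a minimal gradient-descent solver for a smoothed ROF-style TV functional, with illustrative parameter values; it is the classical counterpart, not the proposed Quantum TV circuit.

```python
# Minimal classical Total Variation (ROF-style) denoising via gradient descent
# on the smoothed functional E(u) = 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2).
import numpy as np

def tv_denoise(f, lam=0.15, eps=1e-3, step=0.2, iters=200):
    u = f.copy()
    for _ in range(iters):
        # forward differences with replicated borders
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient field via backward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    denoised = tv_denoise(noisy)
    print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
    print("denoised MSE:", np.mean((denoised - clean) ** 2))
```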
Smart retail stores are becoming a part of our everyday lives. Several computer vision and sensor-based systems work together to achieve such complex and automated operation. Moreover, the retail sector already poses several open and challenging problems that can be solved with the help of pattern recognition and computer vision methods. One important problem to be tackled is planogram compliance control. In this study, we propose a novel method to solve it. The proposed method consists of object detection, planogram compliance control, and focused and iterative search steps. The object detection step is built on local feature extraction and implicit shape model formation. The planogram compliance control step is based on sequence alignment via a modified Needleman-Wunsch algorithm. The focused and iterative search step aims to improve the performance of the object detection and planogram compliance control steps. We tested all three steps on two different datasets. Based on these tests, we summarize the key findings as well as the strengths and weaknesses of the proposed method.
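The alignment step can be illustrated with the standard (unmodified) Needleman-Wunsch recursion; the abstract does not specify the paper's modifications, so the scoring values and product labels below are illustrative only.

```python
# Standard Needleman-Wunsch global alignment between the planogram's expected
# product sequence and the detected product sequence (baseline recursion only;
# the paper uses a modified variant whose details are not given in the abstract).
def needleman_wunsch(expected, detected, match=2, mismatch=-1, gap=-2):
    n, m = len(expected), len(detected)
    score = [[0] * (m + 1) for _ in range(n + 1)]       # DP table of alignment scores
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if expected[i - 1] == detected[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,   # align / substitute
                              score[i - 1][j] + gap,     # missing product on shelf
                              score[i][j - 1] + gap)     # unexpected product on shelf
    return score[n][m]

# Example: one product missing and one misplaced relative to the planogram.
planogram = ["cola", "cola", "chips", "soap", "soap"]
detected  = ["cola", "chips", "soap", "cola", "soap"]
print(needleman_wunsch(planogram, detected))
```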
Variational quantum circuits have been widely employed in quantum simulation and quantum machine learning in recent years. However, quantum circuits with random structures have poor trainability due to gradients that vanish exponentially in the circuit depth and the qubit number. This result has led to the general belief that deep quantum circuits are not feasible for practical tasks. In this work, we propose an initialization strategy with theoretical guarantees against the vanishing gradient problem in general deep quantum circuits. Specifically, we prove that under properly Gaussian-initialized parameters, the norm of the gradient decays at most polynomially as the qubit number and the circuit depth increase. Our theoretical results hold for both the local and the global observable cases, where the latter was believed to exhibit vanishing gradients even for very shallow circuits. Experimental results verify our theoretical findings in quantum simulation and quantum chemistry.
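The flavor of the result can be probed with a small statevector experiment. The toy script below compares the magnitude of one gradient component under uniform initialization versus a small-variance Gaussian initialization in a hardware-efficient RY/CZ circuit; the variance choice sigma^2 = 1/depth is an illustrative assumption, not the paper's precise prescription, and the circuit ansatz is likewise a stand-in.

```python
# Toy comparison: uniform vs. small-variance Gaussian parameter initialization
# in a hardware-efficient RY + CZ-ring circuit, gradient via the parameter-shift rule.
import numpy as np

def apply_1q(state, gate, q, n):
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [q]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def cz_ring_phases(n):
    idx = np.arange(2 ** n)
    phase = np.ones(2 ** n)
    for q in range(n - 1):
        b1 = (idx >> (n - 1 - q)) & 1
        b2 = (idx >> (n - 2 - q)) & 1
        phase *= np.where(b1 & b2, -1.0, 1.0)
    return phase

def expectation_z0(params, n):
    state = np.zeros(2 ** n); state[0] = 1.0
    cz = cz_ring_phases(n)
    for layer in params:                       # params has shape (depth, n)
        for q in range(n):
            state = apply_1q(state, ry(layer[q]), q, n)
        state = state * cz
    probs = np.abs(state) ** 2
    z0 = 1 - 2 * ((np.arange(2 ** n) >> (n - 1)) & 1)   # Z on qubit 0
    return float(probs @ z0)

def grad_component(params, n, i, j):
    plus, minus = params.copy(), params.copy()
    plus[i, j] += np.pi / 2
    minus[i, j] -= np.pi / 2
    return 0.5 * (expectation_z0(plus, n) - expectation_z0(minus, n))

n, depth, trials = 6, 20, 50
rng = np.random.default_rng(0)
for name, sampler in [
    ("uniform [0, 2pi)         ", lambda: rng.uniform(0, 2 * np.pi, (depth, n))),
    ("Gaussian, sigma^2=1/depth", lambda: rng.normal(0, np.sqrt(1 / depth), (depth, n))),
]:
    grads = [abs(grad_component(sampler(), n, 0, 0)) for _ in range(trials)]
    print(f"{name} mean |dE/dtheta|: {np.mean(grads):.4f}")
```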
Quantum machine learning has become an area of growing interest but has certain theoretical and hardware-specific limitations. Notably, the problem of vanishing gradients, or barren plateaus, renders training impossible for circuits with high qubit counts, imposing a limit on the number of qubits that data scientists can use to solve problems. Independently, angle-embedded supervised quantum neural networks were shown to produce truncated Fourier series whose degree depends on two factors: the depth of the encoding and the number of parallel qubits the encoding is applied to. The degree of the Fourier series limits the model's expressivity. This work introduces two new architectures whose Fourier degrees grow exponentially: the sequential and parallel exponential quantum machine learning architectures. This is achieved by using the available Hilbert space efficiently during encoding, increasing the expressivity of the quantum encoding. The exponential growth therefore allows highly expressive circuits to be built in the low-qubit regime while avoiding barren plateaus. In practice, the parallel exponential architecture was shown to outperform existing linear architectures, reducing the final mean squared error by up to 44.7% on a one-dimensional test problem. Furthermore, the feasibility of this technique was also demonstrated on a trapped-ion quantum processing unit.
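The effect of growing the Fourier degree can be seen in a single-qubit toy model. The sketch below compares a constant encoding scale with a geometrically growing one (c_k = 2^k) and reads off the highest active frequency from an FFT of the model output; the geometric scaling is an illustrative choice and may differ from the paper's exact sequential and parallel constructions.

```python
# Single-qubit toy model: geometrically scaled encoding angles enlarge the
# accessible Fourier spectrum of f(x) = <psi(x)| Z |psi(x)>.
import numpy as np

def ry(t):
    return np.array([[np.cos(t/2), -np.sin(t/2)],
                     [np.sin(t/2),  np.cos(t/2)]], dtype=complex)

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def model(x, thetas, scales):
    """Alternating trainable RY layers and data-encoding RZ(c_k * x) layers."""
    psi = np.array([1, 0], dtype=complex)
    for theta, c in zip(thetas, scales):
        psi = rz(c * x) @ ry(theta) @ psi
    psi = ry(thetas[-1]) @ psi
    return float(np.abs(psi[0]) ** 2 - np.abs(psi[1]) ** 2)   # <Z>

L = 4
rng = np.random.default_rng(1)
thetas = rng.uniform(0, 2 * np.pi, L + 1)
xs = np.linspace(0, 2 * np.pi, 256, endpoint=False)
for name, scales in [("linear    c_k = 1  ", np.ones(L)),
                     ("geometric c_k = 2^k", 2.0 ** np.arange(L))]:
    f = np.array([model(x, thetas, scales) for x in xs])
    spectrum = np.abs(np.fft.rfft(f)) / len(xs)
    max_freq = max(k for k, a in enumerate(spectrum) if a > 1e-8)
    print(f"{name}: highest active Fourier frequency = {max_freq}")
```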
Quantum computers promise to enhance machine learning for practical applications. Quantum machine learning for real-world data has to handle extensive amounts of high-dimensional data. However, conventional methods for measuring quantum kernels are impractical for large datasets as they scale with the square of the dataset size. Here, we measure quantum kernels using randomized measurements. The quantum computation time scales linearly with dataset size, and the classical post-processing scales quadratically. While our method scales exponentially in the qubit number in general, we gain a substantial speed-up when running on intermediate-sized quantum computers. Further, we efficiently encode high-dimensional data into quantum computers, with the number of features scaling linearly in the circuit depth. The encoding is characterized by the quantum Fisher information metric and is related to the radial basis function kernel. Our approach is robust to noise via a cost-free error mitigation scheme. We demonstrate the advantages of our methods for noisy quantum computers by classifying images with an IBM quantum computer. To achieve further speed-ups, we distribute the quantum computational tasks between different quantum computers. Our method enables benchmarking of quantum machine learning algorithms with large datasets on currently available quantum computers.
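A common randomized-measurement estimator for state overlaps, and hence for kernels of the form k(x, y) = Tr[rho_x rho_y], is sketched below with exact Born probabilities (no shot noise) so that only the average over random local unitaries is approximated. The simple RY angle-encoding feature map is an illustrative stand-in for the paper's encoding, and the specific estimator shown is the standard local-randomized-measurement overlap formula rather than necessarily the paper's exact protocol.

```python
# Randomized-measurement estimate of a quantum kernel k(x, y) = Tr[rho_x rho_y]
# using the local-unitary overlap formula, simulated with exact probabilities.
import numpy as np
from itertools import product

def haar_1q(rng):
    z = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q @ np.diag(r.diagonal() / np.abs(r.diagonal()))

def feature_state(x):
    """|phi(x)> = tensor_j RY(x_j)|0>  (toy angle encoding)."""
    psi = np.array([1.0])
    for xj in x:
        psi = np.kron(psi, np.array([np.cos(xj / 2), np.sin(xj / 2)]))
    return psi.astype(complex)

def kernel_randomized(x, y, n, n_settings=500, seed=0):
    rng = np.random.default_rng(seed)
    rho = np.outer(feature_state(x), feature_state(x).conj())
    sig = np.outer(feature_state(y), feature_state(y).conj())
    strings = list(product([0, 1], repeat=n))
    hamming = np.array([[sum(a != b for a, b in zip(s, t)) for t in strings]
                        for s in strings])
    est = 0.0
    for _ in range(n_settings):
        U = np.array([[1.0]], dtype=complex)
        for _ in range(n):
            U = np.kron(U, haar_1q(rng))           # random local measurement setting
        p_rho = np.real(np.diag(U @ rho @ U.conj().T))
        p_sig = np.real(np.diag(U @ sig @ U.conj().T))
        est += 2 ** n * np.sum((-0.5) ** hamming * np.outer(p_rho, p_sig))
    est /= n_settings                              # converges as settings grow
    exact = np.real(np.trace(rho @ sig))
    return est, exact

print(kernel_randomized([0.3, 1.1], [0.8, 0.4], n=2))
```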
Random quantum circuits have been utilized in the contexts of quantum supremacy demonstrations, variational quantum algorithms for chemistry and machine learning, and black-hole information. The ability of random circuits to approximate random unitaries has consequences for their complexity, expressibility, and trainability. To study this property of random circuits, we develop numerical protocols for estimating the frame potential, a measure of the distance between a given ensemble and exact Haar randomness. Our tensor-network-based algorithm has polynomial complexity for shallow circuits and performs well with CPU and GPU parallelism. We study (1) local and parallel random circuits, to verify the linear growth in complexity stated by the Brown-Susskind conjecture, and (2) hardware-efficient ansätze, to shed light on their expressibility and the barren plateau problem in the context of variational algorithms. Our work shows that large-scale tensor network simulations can provide important hints toward open problems in quantum information science.
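The quantity being estimated, F^(t) = E_{U,V}[|Tr(U†V)|^(2t)], can be illustrated at small scale with a dense-matrix Monte Carlo sketch. The circuit ensemble, qubit count, and sample count below are illustrative; the paper's contribution is the tensor-network protocol that scales this estimation up, which is not what this sketch does.

```python
# Dense-matrix Monte Carlo estimate of the frame potential F^(t) for a toy
# hardware-efficient circuit ensemble (Haar value for t = 2 and d >= 2 is 2).
import numpy as np

def rot(rng):
    """Random single-qubit rotation RZ(a) RY(b) RZ(c)."""
    a, b, c = rng.uniform(0, 2 * np.pi, 3)
    rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)
    return rz(a) @ ry(b) @ rz(c)

def cz(n, q):
    d = np.ones(2 ** n, dtype=complex)
    idx = np.arange(2 ** n)
    both = ((idx >> (n - 1 - q)) & 1) & ((idx >> (n - 2 - q)) & 1)
    d[both == 1] = -1
    return np.diag(d)

def random_circuit(n, depth, rng):
    U = np.eye(2 ** n, dtype=complex)
    for _ in range(depth):
        layer = np.array([[1.0]], dtype=complex)
        for _ in range(n):
            layer = np.kron(layer, rot(rng))       # layer of random rotations
        U = layer @ U
        for q in range(n - 1):
            U = cz(n, q) @ U                       # CZ entangling chain
    return U

def frame_potential(n, depth, t, samples=300, seed=0):
    rng = np.random.default_rng(seed)
    vals = [np.abs(np.trace(random_circuit(n, depth, rng).conj().T
                            @ random_circuit(n, depth, rng))) ** (2 * t)
            for _ in range(samples)]
    return np.mean(vals)

for depth in (1, 4, 16):
    print(f"depth {depth:2d}:  F^(2) ~ {frame_potential(3, depth, t=2):.2f}   (Haar value: 2)")
```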
We introduce a new weight and a corresponding metric over finite extension fields for asymmetric error correction. The weight distinguishes between elements of the base field and those outside it, a distinction motivated by asymmetric quantum codes. We set up the theoretical framework for this weight and metric, including upper and lower bounds and the asymptotic behavior of random codes, and we show the existence of an optimal family of codes achieving the Singleton-type upper bound.
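The abstract does not define the weight itself, so the following is a purely hypothetical illustration of what "distinguishing base-field elements from those outside it" could look like: a toy weight over GF(4) = F_2(omega) that charges 1 for a nonzero base-field entry and 2 for an entry outside F_2. The specific values are an assumption for illustration only.

```python
# Purely illustrative toy weight over GF(4) = F_2(omega); elements are pairs
# (a, b) meaning a + b*omega with a, b in {0, 1}.  NOT the paper's definition.
def toy_asymmetric_weight(vector):
    w = 0
    for a, b in vector:
        if (a, b) == (0, 0):
            continue            # zero element: weight 0
        elif b == 0:
            w += 1              # nonzero element of the base field F_2
        else:
            w += 2              # element outside the base field
    return w

# (1, 0, omega, 1 + omega) under the toy weight: 1 + 0 + 2 + 2 = 5
print(toy_asymmetric_weight([(1, 0), (0, 0), (0, 1), (1, 1)]))
```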
We study how the choices made when designing an oracle affect the complexity of quantum property testing problems defined relative to this oracle. We encode a regular graph of even degree as an invertible function $f$, and present $f$ in different oracle models. We first give a one-query QMA protocol to test if a graph encoded in $f$ has a small disconnected subset. We then use representation theory to show that no classical witness can help a quantum verifier efficiently decide this problem relative to an in-place oracle. Perhaps surprisingly, a simple modification to the standard oracle prevents a quantum verifier from efficiently deciding this problem, even with access to an unbounded witness.
Machine learning has recently emerged as a powerful tool for predicting properties of quantum many-body systems. For many ground states of gapped Hamiltonians, generative models can learn from measurements of a single quantum state to reconstruct the state accurately enough to predict local observables. Alternatively, kernel methods can predict local observables by learning from measurements on different but related states. In this work, we combine the benefits of both approaches and propose the use of conditional generative models to simultaneously represent a family of states, by learning shared structures of different quantum states from measurements. The trained model allows us to predict arbitrary local properties of ground states, even for states not present in the training data, and without requiring further training for new observables. We numerically validate our approach (with simulations of up to 45 qubits) for two quantum many-body problems: 2D random Heisenberg models and Rydberg atom systems.
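The workflow (train one conditional model on measurements taken at several parameter values, then predict local observables at an unseen parameter value) can be illustrated with a drastically simplified stand-in: a factorized Bernoulli model over measurement bitstrings conditioned on a single Hamiltonian parameter h, fit by maximum likelihood. The synthetic per-site probabilities, the shot count, and the model family are all illustrative assumptions; the paper's conditional generative models are far more expressive.

```python
# Toy conditional generative model: P(z_i = 1 | h) = sigmoid(w_i * h + b_i),
# trained on simulated measurement bitstrings at several values of h, then
# used to predict <Z_i> at a value of h not seen during training.
import numpy as np

rng = np.random.default_rng(0)
n_qubits, shots = 8, 400

def true_p(h):
    """Synthetic per-site excitation probabilities used to fake measurement data."""
    return 1.0 / (1.0 + np.exp(-(2.0 * h - 0.3 * np.arange(n_qubits))))

train_h = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
data = {h: (rng.random((shots, n_qubits)) < true_p(h)).astype(float) for h in train_h}

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
w, b = np.zeros(n_qubits), np.zeros(n_qubits)
for _ in range(2000):                                   # gradient ascent on log-likelihood
    gw, gb = np.zeros(n_qubits), np.zeros(n_qubits)
    for h, bits in data.items():
        err = bits.mean(axis=0) - sigmoid(w * h + b)
        gw += err * h
        gb += err
    w += 0.5 * gw / len(train_h)
    b += 0.5 * gb / len(train_h)

h_new = 0.25                                            # parameter unseen in training
pred_z = 1.0 - 2.0 * sigmoid(w * h_new + b)             # predicted <Z_i>
true_z = 1.0 - 2.0 * true_p(h_new)
print("max |error| on unseen h:", np.max(np.abs(pred_z - true_z)))
```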