In this paper, we present Q\# implementations of arbitrary single-variable fixed-point arithmetic operations for a gate-based quantum computer, based on lookup tables (LUTs). In general, this is an inefficient way of implementing a function, since the number of inputs can be large or even infinite. However, if the input domain can be bounded and some error can be tolerated in the output (both of which are often the case in practical use cases), the quantum LUT implementation of certain arithmetic functions can be more efficient than the corresponding reversible arithmetic implementations. We discuss the implementation of the LUT in Q\# and its approximation errors. We then show examples of how to use the LUT to implement quantum arithmetic functions, and compare the resources required with those of current state-of-the-art bespoke implementations of some commonly used arithmetic functions. The LUT implementation is designed for practitioners to use when implementing end-to-end quantum algorithms. In addition, given its well-defined approximation errors, the LUT implementation provides a clear benchmark for evaluating the efficiency of bespoke quantum arithmetic circuits.
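The trade-off described above can be illustrated classically. The following is a minimal sketch, not the paper's Q\# API: the function choice, bit widths, and helper names are all illustrative. It tabulates a function once over a bounded, discretized input domain, after which every evaluation is a table lookup with a well-defined worst-case error.

```python
import math

def build_lut(func, lo, hi, in_bits, out_bits):
    """Tabulate func over all 2**in_bits fixed-point inputs in [lo, hi)."""
    n = 2 ** in_bits
    step = (hi - lo) / n
    scale = 2.0 ** out_bits
    # Each entry is rounded to out_bits fractional bits, which fixes
    # the output-side approximation error at <= 2**-(out_bits + 1).
    return [round(func(lo + i * step) * scale) / scale for i in range(n)]

def lut_eval(lut, lo, hi, x):
    """Evaluate by indexing; the input-side error is at most one step."""
    n = len(lut)
    i = min(int((x - lo) / (hi - lo) * n), n - 1)
    return lut[i]

# Worst-case error of an 8-bit-in / 12-bit-out table for sin on [0, 1):
lut = build_lut(math.sin, 0.0, 1.0, in_bits=8, out_bits=12)
err = max(abs(lut_eval(lut, 0.0, 1.0, x / 1000) - math.sin(x / 1000))
          for x in range(1000))
```

The table has 2**in_bits entries, which is what bounding the input domain buys: the total error is governed by the input step and the output rounding, mirroring the well-defined approximation errors discussed above.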
Problem instances of a size suitable for practical applications are unlikely to be addressed with (almost) pure quantum algorithms during the noisy intermediate-scale quantum (NISQ) era. Hybrid classical-quantum algorithms, however, have the potential to achieve good performance on much larger problem instances. We investigate one such hybrid algorithm on a problem of substantial importance: vehicle routing for supply chain logistics with multiple trucks and a complex demand structure. We use reinforcement learning with neural networks that embed quantum circuits. In such neural networks, projecting high-dimensional feature vectors down to smaller vectors is necessary to accommodate the qubit-count restrictions of NISQ hardware. However, we use a multi-head attention mechanism in which, even in classical machine learning, such projections are natural and desirable. We consider data from the truck routing logistics of a company in the automotive sector, apply our methodology by decomposing the problem into small teams of trucks, and find results comparable to human truck assignment.
The quantum internet is envisioned as the ultimate stage of the quantum revolution, surpassing its classical counterpart in various aspects, such as the efficiency of data transmission, the security of network services, and the capability of information processing. Given its disruptive impact on national security and the digital economy, a global race to build scalable quantum networks has already begun. With the joint effort of national governments, industrial participants, and research institutes, the development of quantum networks has advanced rapidly in recent years, bringing the first primitive quantum networks within reach. In this work, we aim to provide an up-to-date review of the field of quantum networks from both theoretical and experimental perspectives, contributing to a better understanding of the building blocks required for the establishment of a global quantum internet. We also introduce a newly developed quantum network toolkit to facilitate the exploration and evaluation of innovative ideas. In particular, it provides dual quantum computing engines, supporting simulations in both the quantum circuit and measurement-based models. It also includes a compilation scheme for mapping quantum network protocols onto quantum circuits, enabling their emulation on real-world quantum hardware devices. We showcase the power of this toolkit with several featured demonstrations, including a simulation of the Micius quantum satellite experiment, a test of a four-layer quantum network architecture with resource management, and a quantum emulation of the CHSH game. We hope this work gives a better understanding of the state-of-the-art development of quantum networks and provides the necessary tools to make further contributions along the way.
Error correction in automatic speech recognition (ASR) aims to correct the incorrect words in sentences generated by ASR models. Since recent ASR models usually have low word error rates (WER), error correction models should modify only the incorrect words so as not to affect originally correct tokens; detecting incorrect words is therefore important for error correction. Previous works on error correction either implicitly detect error words through target-source attention or a CTC (connectionist temporal classification) loss, or explicitly locate specific deletion/substitution/insertion errors. However, implicit error detection does not provide a clear signal about which tokens are incorrect, and explicit error detection suffers from low detection accuracy. In this paper, we propose SoftCorrect, with a soft error detection mechanism that avoids the limitations of both explicit and implicit error detection. Specifically, we first detect whether a token is correct through a probability produced by a dedicated language model, and then design a constrained CTC loss that duplicates only the detected incorrect tokens, letting the decoder focus on correcting the error tokens. Compared with implicit error detection via CTC loss, SoftCorrect provides an explicit signal about which words are incorrect and thus needs to duplicate only the incorrect tokens rather than every token; compared with explicit error detection, SoftCorrect does not pinpoint specific deletion/substitution/insertion errors but leaves them to the CTC loss. Experiments on the AISHELL-1 and Aidatatang datasets show that SoftCorrect achieves 26.1% and 9.4% CER reductions, respectively, outperforming previous works by a large margin while still enjoying the fast speed of parallel generation.
In many applications, we want to influence the decisions of independent agents by designing incentives for their actions. We revisit a fundamental problem in this area, called GAME IMPLEMENTATION: Given a game in standard form and a set of desired strategies, can we design a set of payment promises such that, if the players take the payment promises into account, all undominated strategies are desired? Furthermore, we aim to minimize the cost, that is, the worst-case amount of payments. We study the tractability of computing such payment promises and determine more closely what obstructions we may have to overcome in doing so. We show that GAME IMPLEMENTATION is NP-hard even for two players, in particular solving a long-standing open question (Eidenbenz et al. 2011) and suggesting that more restrictions are necessary to obtain tractability results. We thus study the regime in which players have only a small constant number of strategies and obtain the following. First, this case remains NP-hard even if each player's utility depends on only three others. Second, we repair a flawed efficient algorithm for the case of both a small number of strategies and a small number of players. Among further results, we characterize sets of desired strategies that can be implemented at zero cost as a kind of stable core of the game.
Quantum computers promise to enhance machine learning for practical applications. Quantum machine learning for real-world data has to handle extensive amounts of high-dimensional data. However, conventional methods for measuring quantum kernels are impractical for large datasets, as they scale with the square of the dataset size. Here, we measure quantum kernels using randomized measurements. The quantum computation time scales linearly with the dataset size, while the classical post-processing scales quadratically. Although our method in general scales exponentially with the number of qubits, we gain a substantial speed-up when running on intermediate-sized quantum computers. Further, we efficiently encode high-dimensional data into quantum computers, with the number of features scaling linearly with the circuit depth. The encoding is characterized by the quantum Fisher information metric and is related to the radial basis function kernel. Our approach is robust to noise via a cost-free error mitigation scheme. We demonstrate the advantages of our methods for noisy quantum computers by classifying images with the IBM quantum computer. To achieve further speed-ups, we distribute the quantum computational tasks between different quantum computers. Our method enables benchmarking of quantum machine learning algorithms with large datasets on currently available quantum computers.
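The randomized-measurement idea above can be sketched classically for a single qubit. The following is an illustrative simulation, not the paper's protocol or code: it assumes the standard second-moment identity $\mathrm{Tr}[\rho_1\rho_2] = 2^n \sum_{s,s'} (-2)^{-H(s,s')}\, \mathbb{E}_U[P_1(s)P_2(s')]$ (with $H$ the Hamming distance between outcomes), and exact Born probabilities stand in for measured shot frequencies; all names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim, rng):
    # QR of a complex Ginibre matrix, phase-corrected, gives a
    # Haar-distributed unitary.
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def rm_kernel(psi1, psi2, n_unitaries=2000, rng=rng):
    """Estimate Tr[rho1 rho2] for single-qubit pure states: apply a
    shared random unitary, record computational-basis outcome
    probabilities for each state, and combine them with the
    (-2)^(-Hamming distance) weights of the second-moment formula."""
    d = 2
    acc = 0.0
    for _ in range(n_unitaries):
        u = haar_unitary(d, rng)
        p1 = np.abs(u @ psi1) ** 2  # exact Born probabilities stand in
        p2 = np.abs(u @ psi2) ** 2  # for measured shot frequencies
        acc += sum((-2.0) ** (-(s != t)) * p1[s] * p2[t]
                   for s in range(d) for t in range(d))
    return d * acc / n_unitaries

psi0 = np.array([1.0, 0.0])
psi_plus = np.array([1.0, 1.0]) / np.sqrt(2)
k_same = rm_kernel(psi0, psi0)       # true value: 1
k_cross = rm_kernel(psi0, psi_plus)  # true value: |<0|+>|^2 = 0.5
```

Because each state only needs its own measurement record, the quantum cost of collecting data is linear in the dataset size; the pairwise combination of records happens entirely in classical post-processing, which is where the quadratic scaling lives.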
We introduce partial differential encodings of Boolean functions as a way of measuring the complexity of Boolean functions. These encodings enable us to derive, from group actions, non-trivial bounds on the Chow rank of polynomials used to specify partial differential encodings of Boolean functions. We also introduce variants of partial differential encodings called partial differential programs. We show that such programs optimally describe important families of polynomials, including determinants and permanents. Partial differential programs also enable us to quantitatively contrast these two families of polynomials. Finally, we derive polynomial constructions, inspired by partial differential programs, which exhibit an unconditional exponential separation between high-order hypergraph isomorphism instances and their sub-isomorphism counterparts.
A simultaneously transmitting and reflecting surface (STARS) aided terahertz (THz) communication system is proposed. A novel power consumption model, depending on the type and resolution of the individual elements, is proposed for the STARS. Then, the system energy efficiency (EE) and spectral efficiency (SE) are maximized in both narrowband and wideband THz systems. 1) For the narrowband system, an iterative algorithm based on penalty dual decomposition is proposed to jointly optimize the hybrid beamforming at the base station (BS) and the independent phase-shift coefficients at the STARS. The proposed algorithm is then extended to the coupled phase-shift STARS. 2) For the wideband system, to eliminate the beam split effect, a time-delay (TD) network implemented by true-time delayers is applied in the hybrid beamforming structure. An iterative algorithm based on the quasi-Newton method is proposed to design the coefficients of the TD network. Finally, our numerical results reveal that i) there is a slight performance loss in EE and SE caused by the coupled phase shifts of the STARS in both narrowband and wideband systems, and ii) the conventional hybrid beamforming achieves EE and SE performance close to that of the fully digital one in the narrowband system, but not in the wideband system, where the TD-based hybrid beamforming is more efficient.
Random quantum circuits have been utilized in the contexts of quantum supremacy demonstrations, variational quantum algorithms for chemistry and machine learning, and black-hole information. The ability of random circuits to approximate random unitaries has consequences for their complexity, expressibility, and trainability. To study this property of random circuits, we develop numerical protocols for estimating the frame potential, which measures the distance between a given ensemble and exact randomness. Our tensor-network-based algorithm has polynomial complexity for shallow circuits and performs well using CPU and GPU parallelism. We study 1. local and parallel random circuits, to verify the linear growth in complexity stated by the Brown-Susskind conjecture, and 2. hardware-efficient ans\"atze, to shed light on their expressibility and the barren plateau problem in the context of variational algorithms. Our work shows that large-scale tensor network simulations can provide important hints toward open problems in quantum information science.
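For small dimensions, the frame potential $F_t = \mathbb{E}_{U,V}[|\mathrm{Tr}(U^\dagger V)|^{2t}]$ can be estimated by direct Monte-Carlo sampling over pairs of ensemble members, without tensor networks. A minimal sketch (the ensemble, dimension, sample counts, and names are illustrative), using the fact that a Haar-random ensemble attains the minimal value $F_t = t!$ for $t \le d$:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(dim, rng):
    # QR of a complex Ginibre matrix, phase-corrected (Mezzadri's recipe),
    # yields a Haar-distributed unitary.
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def frame_potential(sample, t, n_pairs, rng):
    """Monte-Carlo estimate of F_t = E_{U,V}[|Tr(U^dag V)|^(2t)]
    for an ensemble given by the sampling function `sample`."""
    total = 0.0
    for _ in range(n_pairs):
        u, v = sample(rng), sample(rng)
        total += abs(np.trace(u.conj().T @ v)) ** (2 * t)
    return total / n_pairs

d = 8
sample = lambda rng: haar_unitary(d, rng)
f1 = frame_potential(sample, t=1, n_pairs=2000, rng=rng)  # Haar value: 1! = 1
f2 = frame_potential(sample, t=2, n_pairs=2000, rng=rng)  # Haar value: 2! = 2
```

An ensemble whose estimate exceeds $t!$ is not a $t$-design; the gap quantifies its distance from exact randomness, which is what the tensor-network protocols above compute at scales far beyond this brute-force approach.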
We introduce a new weight, and a corresponding metric, over finite extension fields for asymmetric error correction. The weight distinguishes between elements from the base field and those outside of it, which is motivated by asymmetric quantum codes. We set up the theoretical framework for this weight and metric, including upper and lower bounds and the asymptotic behavior of random codes, and we show the existence of an optimal family of codes achieving the Singleton-type upper bound.
We study how the choices made when designing an oracle affect the complexity of quantum property testing problems defined relative to this oracle. We encode a regular graph of even degree as an invertible function $f$, and present $f$ in different oracle models. We first give a one-query QMA protocol to test if a graph encoded in $f$ has a small disconnected subset. We then use representation theory to show that no classical witness can help a quantum verifier efficiently decide this problem relative to an in-place oracle. Perhaps surprisingly, a simple modification to the standard oracle prevents a quantum verifier from efficiently deciding this problem, even with access to an unbounded witness.