The quantum internet is envisioned as the ultimate stage of the quantum revolution, surpassing its classical counterpart in various aspects, such as the efficiency of data transmission, the security of network services, and the capability of information processing. Given its disruptive impact on national security and the digital economy, a global race to build scalable quantum networks has already begun. With the joint effort of national governments, industrial participants, and research institutes, the development of quantum networks has advanced rapidly in recent years, bringing the first primitive quantum networks within reach. In this work, we aim to provide an up-to-date review of the field of quantum networks from both theoretical and experimental perspectives, contributing to a better understanding of the building blocks required for the establishment of a global quantum internet. We also introduce a newly developed quantum network toolkit to facilitate the exploration and evaluation of innovative ideas. In particular, it provides dual quantum computing engines, supporting simulations in both the quantum circuit and measurement-based models. It also includes a compilation scheme for mapping quantum network protocols onto quantum circuits, enabling their emulation on real-world quantum hardware devices. We showcase the power of this toolkit with several featured demonstrations, including a simulation of the Micius quantum satellite experiment, a test of a four-layer quantum network architecture with resource management, and a quantum emulation of the CHSH game. We hope this work provides a better understanding of the state-of-the-art development of quantum networks as well as the tools needed to make further contributions along the way.
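To make the last demonstration concrete: the CHSH game can be simulated with a few lines of linear algebra. The following sketch (plain NumPy, independent of the toolkit described above) plays the game with the optimal quantum strategy and recovers the well-known winning probability $\cos^2(\pi/8) \approx 0.854$, above the classical bound of $0.75$.

```python
import numpy as np

def ry(theta):
    """Rotation taking the Z measurement basis to one tilted by theta."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

# Shared Bell state |Phi+> = (|00> + |11>) / sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Optimal observable angles: Alice indexed by her bit x, Bob by his bit y.
alice = [0.0, np.pi / 2]
bob = [np.pi / 4, -np.pi / 4]

def outcome_probs(a_angle, b_angle):
    """Joint outcome probabilities when each side measures Z rotated by its angle."""
    U = np.kron(ry(a_angle).T, ry(b_angle).T)  # rotate into the measurement bases
    amps = U @ bell
    return np.abs(amps) ** 2  # probabilities of outcomes 00, 01, 10, 11

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        p = outcome_probs(alice[x], bob[y])
        for idx, prob in enumerate(p):
            a, b = idx >> 1, idx & 1
            if (a ^ b) == (x & y):      # win condition of the CHSH game
                win += prob / 4         # questions x, y are uniform
print(f"quantum CHSH winning probability ~ {win:.4f}")  # ~0.8536
```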
We propose a class of randomized quantum algorithms for the task of sampling from matrix functions, without the use of quantum block encodings or any other coherent oracle access to the matrix elements. As such, our use of qubits is purely algorithmic, and no additional qubits are required for quantum data structures. For $N\times N$ Hermitian matrices, the space cost is $\log(N)+1$ qubits, and, depending on the structure of the matrices, the gate complexity can be comparable to that of state-of-the-art methods that use quantum data structures of size up to $O(N^2)$, when considering equivalent end-to-end problems. Within our framework, we present a quantum linear system solver that allows one to sample properties of the solution vector, as well as an algorithm for sampling properties of ground states of Hamiltonians. As a concrete application, we combine these two subroutines into a scheme for calculating Green's functions of quantum many-body systems.
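For concreteness, the end-to-end problem addressed by the linear system solver can be phrased as follows: given $A$ and $b$, sample an index $i$ with probability $|x_i|^2/\|x\|^2$, where $x$ solves $Ax=b$. The sketch below is a classical NumPy reference for this task (illustrative only; it is not the randomized quantum algorithm itself).

```python
import numpy as np

rng = np.random.default_rng(7)

# Random Hermitian A and right-hand side b (illustrative stand-ins).
N = 8
M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (M + M.conj().T) / 2 + N * np.eye(N)  # shift to keep A well-conditioned
b = rng.normal(size=N)

# End-to-end task: sample index i with probability |x_i|^2 / ||x||^2,
# where A x = b. A quantum solver prepares |x> and measures it.
x = np.linalg.solve(A, b)
probs = np.abs(x) ** 2 / np.linalg.norm(x) ** 2
samples = rng.choice(N, size=1000, p=probs)
print(np.bincount(samples, minlength=N) / 1000)  # empirical vs. probs
```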
This paper presents a new method for quantum identity authentication (QIA) protocols. The logic of classical zero-knowledge proofs (ZKPs) due to Schnorr is applied in quantum circuits and algorithms. This novel approach provides an exact way in which a prover $P$ can prove they know some secret by encapsulating it in a quantum state before sending it to a verifier $V$ over a quantum channel, allowing for a ZKP in which an eavesdropper or manipulation can be detected with a fail-safe design. This is achieved by moving away from the hardness of the discrete logarithm problem towards the hardness of estimating quantum states. This paper presents a method by which this can be achieved and provides some bounds on the security of the protocol. With the anticipated advent of a `quantum internet', such protocols and ideas may soon have utility and execution in the real world.
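For background, the classical Schnorr identification protocol that the proposal adapts runs in three moves: commitment, challenge, response. The toy Python sketch below uses deliberately tiny, insecure parameters for illustration only.

```python
import secrets

# Toy Schnorr identification (the classical ZKP the paper builds on).
# Tiny, insecure parameters for illustration only.
p, q, g = 2039, 1019, 4           # g generates the order-q subgroup of Z_p*

x = secrets.randbelow(q)          # prover's secret
y = pow(g, x, p)                  # prover's public key

# One round of the protocol:
k = secrets.randbelow(q)          # prover's ephemeral nonce
r = pow(g, k, p)                  # 1. commitment sent to the verifier
c = secrets.randbelow(q)          # 2. verifier's random challenge
s = (k + c * x) % q               # 3. prover's response
assert pow(g, s, p) == (r * pow(y, c, p)) % p  # verifier's check
```

The security of this classical version rests on the discrete logarithm problem; as the abstract notes, the quantum variant replaces that assumption with the hardness of estimating unknown quantum states.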
Many standard linear algebra problems can be solved on a quantum computer by using recently developed quantum linear algebra algorithms that make use of block encodings and quantum eigenvalue/singular value transformations. A block encoding embeds a properly scaled matrix of interest $A$ in a larger unitary transformation $U$ that can be decomposed into a product of simpler unitaries and implemented efficiently on a quantum computer. Although quantum algorithms can potentially achieve exponential speedup in solving linear algebra problems compared to the best classical algorithms, such gains in efficiency ultimately hinge on our ability to construct an efficient quantum circuit for the block encoding of $A$, which is difficult in general, and not trivial even for well-structured sparse matrices. In this paper, we give a few examples of how efficient quantum circuits can be explicitly constructed for some well-structured sparse matrices, and discuss a few strategies used in these constructions. We also provide implementations of these quantum circuits in MATLAB.
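As a reminder of what a block encoding asserts mathematically (independent of any circuit construction): for a Hermitian $A$ with $\|A\| \le 1$, the matrix $U = \begin{pmatrix} A & \sqrt{I-A^2} \\ \sqrt{I-A^2} & -A \end{pmatrix}$ is unitary and contains $A$ in its top-left block. A NumPy check of this standard single-ancilla construction:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

# Hermitian A scaled so that ||A|| <= 0.9 < 1.
N = 4
M = rng.normal(size=(N, N))
A = (M + M.T) / 2
A = 0.9 * A / np.linalg.norm(A, 2)

# Single-ancilla block encoding: U is unitary and its top-left N x N
# block equals A. sqrtm(I - A^2) commutes with A since A is Hermitian.
B = sqrtm(np.eye(N) - A @ A).real
U = np.block([[A, B], [B, -A]])

assert np.allclose(U @ U.T, np.eye(2 * N))   # U is unitary
assert np.allclose(U[:N, :N], A)             # <0|_anc U |0>_anc = A
```

Note that forming $U$ as a dense matrix takes $O(N^2)$ classical work; the paper's subject is precisely how to realize such a $U$ as an efficient circuit when $A$ is structured.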
Feature selection is a technique in statistical prediction modeling that identifies features in a record with a strong statistical connection to the target variable. Excluding features with a weak statistical connection to the target variable from training not only reduces the dimension of the data, which decreases the time complexity of the algorithm, but also decreases noise within the data, which helps avoid overfitting. In all, feature selection assists in training a robust statistical model that performs well and is stable. Given the lack of scalability in classical computation, current techniques consider only the predictive power of each feature and not the redundancy between the features themselves. Recent advancements in feature selection that leverage quantum annealing (QA) give a scalable technique that aims to maximize the predictive power of the features while minimizing redundancy. As a consequence, this algorithm is expected to assist in the bias/variance trade-off, yielding better features for training a statistical model. This paper tests this intuition against classical methods by utilizing open-source data sets and evaluating the efficacy of each trained statistical model with well-known prediction algorithms. The numerical results display an advantage for models trained on the features selected by the QA-based algorithm.
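Such QA formulations are typically quadratic unconstrained binary optimization (QUBO) problems, whose linear terms reward each feature's predictive power and whose quadratic terms penalize pairwise redundancy. A toy sketch of this objective follows; the relevance/redundancy scores are stand-ins (in practice, e.g., mutual-information estimates), and at this size a brute-force search plays the role of the annealer.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n = 6
relevance = rng.uniform(0.1, 1.0, size=n)        # stand-in: MI(feature_i; target)
redundancy = rng.uniform(0.0, 0.5, size=(n, n))  # stand-in: MI(feature_i; feature_j)
redundancy = (redundancy + redundancy.T) / 2
np.fill_diagonal(redundancy, 0.0)

# QUBO: minimize x^T Q x over binary x, where the diagonal rewards
# relevance and the off-diagonal penalizes redundant pairs.
Q = redundancy - np.diag(relevance)

# Brute force stands in for the quantum annealer at this toy size.
best = min(product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("selected features:", [i for i, b in enumerate(best) if b])
```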
Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network, such as the probability of a variable taking a specific value. Various sensitivity measures have been defined to quantify such influence, most commonly some function of the quantity of interest's partial derivative with respect to the network's conditional probabilities. However, computing these measures in large networks with thousands of parameters can become computationally very expensive. We propose an algorithm combining automatic differentiation and exact inference to efficiently calculate the sensitivity measures in a single pass. It first marginalizes the whole network once, using, e.g., variable elimination, and then backpropagates this operation to obtain the gradient with respect to all input parameters. Our method can be used for one-way and multi-way sensitivity analysis and the derivation of admissible regions. Simulation studies highlight the efficiency of our algorithm by scaling it to massive networks with up to 100,000 parameters and investigate the feasibility of generic multi-way analyses. Our routines are also showcased on two medium-sized Bayesian networks: the first modeling the country risks of a humanitarian crisis, the second studying the relationship between the use of technology and the psychological effects of forced social isolation during the COVID-19 pandemic. An implementation of the methods using the popular machine learning library PyTorch is freely available.
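The idea in miniature, using PyTorch's autograd as in the referenced implementation: the parameters are CPT entries, the quantity of interest is computed by exact inference, and one backward pass returns its partial derivative with respect to every parameter simultaneously. A two-node sketch (the network and the numbers are illustrative):

```python
import torch

# Minimal two-node network A -> B; the parameters are the CPT entries.
theta_a  = torch.tensor(0.3, requires_grad=True)   # P(A = 1)
theta_b1 = torch.tensor(0.9, requires_grad=True)   # P(B = 1 | A = 1)
theta_b0 = torch.tensor(0.2, requires_grad=True)   # P(B = 1 | A = 0)

# Exact inference (here, a single marginalization step): P(B = 1).
p_b = theta_a * theta_b1 + (1 - theta_a) * theta_b0

# One backward pass yields dP(B=1)/dtheta for *all* parameters at once --
# the partial-derivative sensitivity measure described above.
p_b.backward()
print(theta_a.grad, theta_b1.grad, theta_b0.grad)  # 0.7, 0.3, 0.7
```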
Machine learning algorithms based on parametrized quantum circuits are prime candidates for near-term applications on noisy quantum computers. In this direction, various types of quantum machine learning models have been introduced and studied extensively. Yet, our understanding of how these models compare, both mutually and to classical models, remains limited. In this work, we identify a constructive framework that captures all standard models based on parametrized quantum circuits: that of linear quantum models. In particular, we show using tools from quantum information theory how data re-uploading circuits, an apparent outlier of this framework, can be efficiently mapped into the simpler picture of linear models in quantum Hilbert spaces. Furthermore, we analyze the experimentally relevant resource requirements of these models in terms of qubit number and the amount of data needed to learn. Based on recent results from classical machine learning, we prove that linear quantum models must utilize exponentially more qubits than data re-uploading models in order to solve certain learning tasks, while kernel methods additionally require exponentially more data points. Our results provide a more comprehensive view of quantum machine learning models as well as insights on the compatibility of different models with NISQ constraints.
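For readers unfamiliar with data re-uploading: below is a minimal single-qubit sketch in NumPy, in which the input $x$ is re-encoded before every trainable layer. The gate choices are illustrative, not those of the paper.

```python
import numpy as np

def ry(a):  # single-qubit y rotation
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def rz(a):  # single-qubit z rotation
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def reupload_model(x, thetas):
    """f(x) = <psi(x)| Z |psi(x)>, with the data x re-encoded
    before every trainable layer (a single-qubit sketch)."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for t1, t2 in thetas:
        psi = ry(x) @ psi                # data-encoding gate, re-uploaded
        psi = rz(t2) @ ry(t1) @ psi      # trainable rotation layer
    return (psi.conj() @ np.diag([1.0, -1.0]) @ psi).real

thetas = [(0.4, 1.1), (2.0, -0.3), (0.7, 0.9)]  # three layers, arbitrary params
print(reupload_model(0.5, thetas))
```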
Various physical constraints limit the number of qubits that can be implemented in a single quantum processor, and thus it is necessary to connect multiple quantum processors via quantum interconnects. While several compiler implementations for interconnected quantum computers have been proposed, there is no suitable representation to serve as their compilation target. The lack of such a representation impairs the reusability of compiled programs and makes it difficult to reason formally about the complicated behavior of distributed quantum programs. We propose InQuIR, an intermediate representation that can express communication and computation on distributed quantum systems. InQuIR has formal semantics that allows us to describe precisely the behaviors of distributed quantum programs. We give examples written in InQuIR to illustrate the problems arising in distributed programs, such as deadlock. We present a roadmap for static verification using type systems to deal with such problems. We also provide software tools for InQuIR and evaluate the computational costs of quantum circuits under various conditions. Our tools are available at https://github.com/team-InQuIR/InQuIR.
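Although InQuIR's own syntax is not reproduced here, the deadlock pattern it is designed to expose can be illustrated with a plain Python analogy: two nodes each block on a receive that the other will only answer after its own receive completes, producing a circular wait.

```python
import threading, queue

# Python analogy (not InQuIR syntax) of deadlock in a distributed
# quantum program: each node waits to receive before it sends.
to_a, to_b = queue.Queue(), queue.Queue()

def node_a():
    to_a.get()             # wait for B ...
    to_b.put("epr-half")   # ... before sending (never reached)

def node_b():
    to_b.get()             # wait for A ...
    to_a.put("epr-half")   # ... before sending (never reached)

a = threading.Thread(target=node_a, daemon=True)
b = threading.Thread(target=node_b, daemon=True)
a.start(); b.start()
a.join(timeout=1.0)
print("deadlocked:", a.is_alive() and b.is_alive())  # True: circular wait
```

A type system of the kind the roadmap envisions would reject such programs statically, before they reach hardware.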
Even though current technologies have the potential to increase inclusion, and the United Nations has established digital access to information as a human right, people with disabilities continue to face barriers in their professions. In the sciences, the lack of accessible, user-centred tools often leaves researchers with disabilities behind and does not enable them to conduct front-line research using their respective strengths. In this contribution, we discuss some hurdles and solutions relevant to using new technology for data analysis, analysing the barriers encountered by end users. A focus group session was conducted with nine people, with and without visual impairment, using the sonoUno tool with a linear function and an astronomical data set downloaded from the Sloan Digital Sky Survey. From this focus group study, which incorporated data analysis using sonification, we conclude that functionally diverse people require tools that let them work autonomously, thereby enabling precision, certainty, effectiveness and efficiency in their work and resulting in enhanced equity. This can be achieved by pursuing a user-centred design approach as an integral part of software development, and by adapting resources according to the research objectives. Development of tools that empower people with wide-ranging abilities not only to access data using multi-sensorial techniques, but also to address the current lack of inclusion, is sorely needed.
Graph Neural Networks (GNNs), which generalize deep neural networks to graph-structured data, have drawn considerable attention and achieved state-of-the-art performance in numerous graph-related tasks. However, existing GNN models mainly focus on designing graph convolution operations. The graph pooling (or downsampling) operations, which play an important role in learning hierarchical representations, are usually overlooked. In this paper, we propose a novel graph pooling operator, called Hierarchical Graph Pooling with Structure Learning (HGP-SL), which can be integrated into various graph neural network architectures. HGP-SL incorporates graph pooling and structure learning into a unified module to generate hierarchical representations of graphs. More specifically, the graph pooling operation adaptively selects a subset of nodes to form an induced subgraph for the subsequent layers. To preserve the integrity of the graph's topological information, we further introduce a structure learning mechanism to learn a refined graph structure for the pooled graph at each layer. By combining the HGP-SL operator with graph neural networks, we perform graph-level representation learning with a focus on the graph classification task. Experimental results on six widely used benchmarks demonstrate the effectiveness of our proposed model.
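To illustrate the adaptive node-selection step, here is a generic top-k graph pooling layer in PyTorch: a simplified sketch in the spirit of HGP-SL's pooling, with the structure-learning refinement of the pooled graph omitted.

```python
import torch

def topk_graph_pool(X, A, ratio=0.5):
    """Score nodes, keep the top-k, and induce the subgraph on them.
    Generic top-k pooling; the learned structure refinement is omitted."""
    n = X.size(0)
    k = max(1, int(ratio * n))
    w = torch.randn(X.size(1))            # stand-in for a learned projection
    scores = (X @ w) / w.norm()           # per-node importance scores
    idx = scores.topk(k).indices          # adaptively selected node subset
    X_pool = X[idx] * torch.tanh(scores[idx]).unsqueeze(-1)  # gate features
    A_pool = A[idx][:, idx]               # induced subgraph for the next layer
    return X_pool, A_pool

X = torch.randn(6, 8)                     # 6 nodes, 8 features each
A = (torch.rand(6, 6) > 0.5).float()      # toy adjacency matrix
X2, A2 = topk_graph_pool(X, A)
print(X2.shape, A2.shape)                 # torch.Size([3, 8]) torch.Size([3, 3])
```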
Transfer learning aims at improving the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. In this way, the dependence on large amounts of target-domain data can be reduced when constructing target learners. Due to its wide application prospects, transfer learning has become a popular and promising area in machine learning. Although there are already some valuable and impressive surveys on transfer learning, these surveys introduce approaches in a relatively isolated way and lack the recent advances in the field. Given the rapid expansion of the transfer learning area, it is both necessary and challenging to comprehensively review the relevant studies. This survey attempts to connect and systematize existing transfer learning research, and to summarize and interpret its mechanisms and strategies in a comprehensive way, which may help readers gain a better understanding of the current research status and ideas. Unlike previous surveys, this survey paper reviews over forty representative transfer learning approaches from the perspectives of data and model. The applications of transfer learning are also briefly introduced. To show the performance of different transfer learning models, twenty representative models are evaluated in experiments on three different datasets: Amazon Reviews, Reuters-21578, and Office-31. The experimental results demonstrate the importance of selecting appropriate transfer learning models for different applications in practice.
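As a minimal example of the model-based transfer strategies such surveys cover, the following PyTorch sketch freezes a stand-in source-pretrained backbone and fine-tunes only a new target-task head on a small target-domain batch (the network shapes and data are illustrative):

```python
import torch
import torch.nn as nn

# A common model-based transfer strategy: reuse a network trained on a
# source domain and fine-tune only a new head on scarce target data.
source_model = nn.Sequential(          # stand-in for a pretrained backbone
    nn.Linear(300, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
)
head = nn.Linear(64, 2)                # new target-task classifier

for p in source_model.parameters():    # freeze the transferred knowledge
    p.requires_grad = False

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 300)               # toy target-domain batch
y = torch.randint(0, 2, (32,))
opt.zero_grad()
loss = loss_fn(head(source_model(x)), y)
loss.backward()                        # gradients flow only into the head
opt.step()
```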