
This paper explores a new approach to fault-tolerant quantum computing, relying on quantum polar codes. We consider quantum polar codes of Calderbank-Shor-Steane type, encoding one logical qubit, which we refer to as $\mathcal{Q}_1$ codes. First, we show that a subfamily of $\mathcal{Q}_1$ codes is equivalent to the well-known family of Shor codes. Moreover, we show that $\mathcal{Q}_1$ codes significantly outperform Shor codes of the same length and minimum distance. Second, we consider the fault-tolerant preparation of $\mathcal{Q}_1$ code states. We give a recursive procedure to prepare a $\mathcal{Q}_1$ code state, based on two-qubit Pauli measurements only. The procedure is not fault-tolerant by itself; however, the measurement operations therein provide redundant classical bits, which can be advantageously used for error detection. Fault tolerance is then achieved by combining the proposed recursive procedure with an error detection method. Finally, we consider the fault-tolerant error correction of $\mathcal{Q}_1$ codes. We use Steane's error correction technique, which incorporates the proposed fault-tolerant code state preparation procedure. We provide numerical estimates of the logical error rates for $\mathcal{Q}_1$ and Shor codes of length $16$ and $64$ qubits, assuming a circuit-level depolarizing noise model. Remarkably, the $\mathcal{Q}_1$ code of length $64$ qubits achieves a pseudothreshold value slightly below $1\%$, demonstrating the potential of polar codes for fault-tolerant quantum computing.
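As a point of reference only, the sketch below builds the classical polar transform $G_2^{\otimes n}$ over GF(2) that underlies CSS-type quantum polar codes; the specific channel assignment defining a $\mathcal{Q}_1$ code is the paper's construction and is not reproduced here (the helper name `polar_transform` is ours).

```python
import numpy as np

def polar_transform(n):
    """n-fold Kronecker power of the polar kernel G_2 over GF(2)."""
    G2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, G2) % 2
    return G

# N = 16 qubits, one of the code lengths studied in the paper.
G = polar_transform(4)
print(G.shape)  # (16, 16)
# A CSS polar code is obtained by assigning each of the N synthesized channels
# to X-type checks, Z-type checks, logical qubits, or frozen positions; the
# assignment that yields a Q_1 code (one logical qubit) is given in the paper.
```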

Related Content


Recent constructions of quantum low-density parity-check (QLDPC) codes provide optimal scaling of the number of logical qubits and the minimum distance in terms of the code length, thereby opening the door to fault-tolerant quantum systems with minimal resource overhead. However, the hardware path from nearest-neighbor-connection-based topological codes to long-range-interaction-demanding QLDPC codes is likely a challenging one. Given the practical difficulty of building a monolithic architecture for quantum systems, such as computers, based on optimal QLDPC codes, it is worth considering a distributed implementation of such codes over a network of interconnected medium-sized quantum processors. In such a setting, all syndrome measurements and logical operations must be performed through the use of high-fidelity shared entangled states between the processing nodes. Since probabilistic many-to-one distillation schemes for purifying entanglement are inefficient, we investigate quantum error correction-based entanglement purification in this work. Specifically, we employ QLDPC codes to distill GHZ states, as the resulting high-fidelity logical GHZ states can interact directly with the code used to perform distributed quantum computing (DQC), e.g., for fault-tolerant Steane syndrome extraction. This protocol is applicable beyond DQC, since entanglement distribution and purification are quintessential tasks of any quantum network. We use a min-sum algorithm (MSA) based iterative decoder with a sequential schedule for distilling 3-qubit GHZ states using a rate-0.118 family of lifted product QLDPC codes and obtain a threshold of 10.7% under depolarizing noise. Our results also apply to larger GHZ states: we extend our technical result on a measurement property of 3-qubit GHZ states to construct a scalable GHZ purification protocol.
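For intuition only, the following sketch verifies the stabilizer generators of the 3-qubit GHZ state that the protocol distills; it does not reproduce the lifted product codes or the MSA decoder.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# |GHZ_3> = (|000> + |111>) / sqrt(2)
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

# The 3-qubit GHZ state is a +1 eigenstate of the generators XXX, ZZI, IZZ.
for S in (kron(X, X, X), kron(Z, Z, I), kron(I, Z, Z)):
    assert np.allclose(S @ ghz, ghz)
```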

A quantum speedup for mixing a Markov chain can be achieved by constructing a sequence of $r$ slowly varying Markov chains, where the initial chain can be prepared easily and the spectral gaps have a uniform lower bound. The overall complexity is proportional to $r$. We present a multi-level approach to construct such a sequence of $r$ Markov chains by varying a resolution parameter $h$. We show that the density function of a low-resolution Markov chain can be used to warm-start the Markov chain at a higher resolution. We prove that, in terms of the chain length, the new algorithm has $O(1)$ complexity rather than $O(r)$.
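As a purely classical illustration of the warm-start idea (not the quantum algorithm itself), the sketch below runs a Metropolis sampler through a coarse-to-fine sequence of resolutions, using the final state of each level to initialize the next; the resolution ladder `logp_at_resolution` is an invented stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis(logp, x0, steps, step_size):
    """Plain Metropolis sampler; returns the final state, used as a warm start."""
    x = x0
    for _ in range(steps):
        y = x + step_size * rng.standard_normal()
        if np.log(rng.random()) < logp(y) - logp(x):
            x = y
    return x

def logp_at_resolution(h):
    # Coarser (larger h) versions of the target are flatter and mix faster.
    return lambda x: -abs(x - 3.0) / h

x = 0.0
for h in (4.0, 2.0, 1.0, 0.5):  # coarse-to-fine sequence of chains
    x = metropolis(logp_at_resolution(h), x, steps=200, step_size=np.sqrt(h))
print(x)  # final state of the finest chain, warm-started level by level
```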

We introduce and analyse an efficient decoder for quantum Tanner codes that can correct adversarial errors of linear weight. Previous decoders for quantum low-density parity-check codes could only handle adversarial errors of weight $O(\sqrt{n \log n})$. We also examine the link between quantum Tanner codes and the Lifted Product codes of Panteleev and Kalachev, and show that our decoder can be adapted to the latter. The decoding algorithm alternates between sequential and parallel procedures and converges in linear time.

The encoder network of an autoencoder is an approximation of the nearest point projection onto the manifold spanned by the decoder. A concern with this approximation is that, while the output of the encoder is always unique, the projection may have infinitely many values. This implies that the latent representations learned by the autoencoder can be misleading. Borrowing from geometric measure theory, we introduce the idea of using the reach of the manifold spanned by the decoder to determine if an optimal encoder exists for a given dataset and decoder. We develop a local generalization of this reach and propose a numerical estimator thereof. We demonstrate that this allows us to determine which observations can be expected to have a unique, and thereby trustworthy, latent representation. As our local reach estimator is differentiable, we investigate its usage as a regularizer and show that this leads to learned manifolds for which projections are more often unique than without regularization.
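The toy sketch below (our illustration, not the paper's estimator) uses a circle as a stand-in for the decoder manifold: points closer to the manifold than its reach have a unique nearest-point projection, while a point at distance equal to the reach (here, the centre) does not.

```python
import numpy as np

# Toy "decoder": a latent code z is decoded to a point on the unit circle in R^2.
# The circle's reach equals its radius.
def decoder(z):
    return np.stack([np.cos(z), np.sin(z)], axis=-1)

def project(x, grid=np.linspace(-np.pi, np.pi, 10001)):
    """Nearest-point projection of x onto the decoder manifold, by grid search."""
    d = np.linalg.norm(decoder(grid) - x, axis=-1)
    return grid[np.argmin(d)], d.min()

# Close to the manifold: a unique, trustworthy projection (and latent code).
print(project(np.array([0.9, 0.1])))
# At the centre, i.e. at distance equal to the reach: every z attains the same
# distance, so the projection, and hence the "ideal" latent code, is ambiguous.
print(project(np.array([0.0, 0.0])))
```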

Principal component analysis (PCA) is a dimensionality reduction method in data analysis that involves diagonalizing the covariance matrix of the dataset. Recently, quantum algorithms have been formulated for PCA based on diagonalizing a density matrix. These algorithms assume that the covariance matrix can be encoded in a density matrix, but a concrete protocol for this encoding has been lacking. Our work aims to address this gap. Assuming amplitude encoding of the data, given by the ensemble $\{p_i, |\psi_i\rangle\}$, one can easily prepare the ensemble average density matrix $\overline{\rho} = \sum_i p_i |\psi_i\rangle \langle \psi_i |$. We first show that $\overline{\rho}$ is precisely the covariance matrix whenever the dataset is centered. For quantum datasets, we exploit global phase symmetry to argue that there always exists a centered dataset consistent with $\overline{\rho}$, and hence $\overline{\rho}$ can always be interpreted as a covariance matrix. This provides a simple means for preparing the covariance matrix for arbitrary quantum datasets or centered classical datasets. For uncentered classical datasets, our method performs so-called "PCA without centering", which we interpret as PCA on a symmetrized dataset. We argue that this closely corresponds to standard PCA, and we derive equations and inequalities that bound the deviation of the spectrum obtained with our method from that of standard PCA. We numerically illustrate our method for the MNIST handwritten digit dataset. We also argue that PCA on quantum datasets is natural and meaningful, and we numerically implement our method for molecular ground-state datasets.
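The sketch below checks the centered-data claim numerically under one plausible amplitude-encoding convention, $p_i \propto \|x_i\|^2$ and $|\psi_i\rangle = x_i/\|x_i\|$ (this normalization convention is our assumption and may differ in detail from the paper's): the ensemble average density matrix then coincides with the trace-normalized covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
X = X - X.mean(axis=0)                       # centre the dataset

# Assumed amplitude-encoding convention: |psi_i> = x_i/||x_i||, p_i ~ ||x_i||^2.
norms2 = np.sum(X**2, axis=1)
p = norms2 / norms2.sum()
psi = X / np.sqrt(norms2)[:, None]

rho = sum(pi * np.outer(v, v) for pi, v in zip(p, psi))  # ensemble average density matrix
cov = X.T @ X / norms2.sum()                             # covariance, normalized to unit trace

assert np.allclose(rho, cov)
print(np.trace(rho))  # 1.0
```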

Minimal perfect hashing is the problem of mapping a static set of $n$ distinct keys into the address space $\{1,\ldots,n\}$ bijectively. It is well known that $n\log_2 e$ bits are necessary to specify a minimal perfect hash function (MPHF) $f$ when no additional knowledge of the input keys is to be used. However, it is often the case in practice that the input keys have intrinsic relationships that we can exploit to lower the bit complexity of $f$. For example, consider a string and the set of all its distinct substrings of length $k$, the so-called $k$-mers of the string. Two consecutive $k$-mers in the string have a strong intrinsic relationship in that they share an overlap of $k-1$ symbols. Hence, it seems intuitively possible to beat the classic $\log_2 e$ bits/key barrier in this case. Moreover, we would like $f$ to map consecutive $k$-mers to consecutive addresses, so as to preserve as much as possible the relationships between the keys in the co-domain $\{1,\ldots,n\}$ as well. This is a useful feature in practice, as it guarantees a certain degree of locality of reference for $f$, resulting in a better evaluation time when querying consecutive $k$-mers from a string. Motivated by these premises, we initiate the study of a new type of locality-preserving MPHF designed for $k$-mers extracted consecutively from a string (or collections of strings). We show a theoretic lower bound on the bit complexity of any $(1-\varepsilon)$-locality-preserving MPHF, for a parameter $0 < \varepsilon < 1$. The complexity is lower than $n\log_2 e$ bits for sufficiently small $\varepsilon$. We propose a construction that approaches the theoretic minimum space for growing $k$ and present a practical implementation of the method.
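For concreteness, the following sketch (ours, not the paper's construction) extracts consecutive $k$-mers, checks the $(k-1)$-symbol overlap, and builds a toy, far-from-space-optimal map that assigns consecutive addresses to $k$-mers in order of first appearance, illustrating the locality property the paper formalizes.

```python
def kmers(s, k):
    """All length-k substrings of s, in order of appearance."""
    return [s[i:i + k] for i in range(len(s) - k + 1)]

s, k = "ACGTACGGT", 4
ks = kmers(s, k)

# Consecutive k-mers overlap on k-1 symbols.
assert all(a[1:] == b[:-1] for a, b in zip(ks, ks[1:]))

# Toy "locality-preserving" map: the first occurrence of each distinct k-mer
# gets the next free address, so k-mers appearing consecutively in s tend to
# receive consecutive addresses in {1, ..., n}.
f = {}
for km in ks:
    if km not in f:
        f[km] = len(f) + 1
print(f)
```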

The Galois inner product is a generalization of the Euclidean and Hermitian inner products. The Galois hull of a linear code is the intersection of the code with its Galois dual code, and it has attracted the interest of researchers in recent years. In this paper, we study Galois hulls of linear codes. First, we identify a symmetry in the dimensions of Galois hulls. We then characterize new necessary and sufficient conditions for linear codes to be Galois self-orthogonal, Galois self-dual, and Galois linear complementary dual codes. Based on these properties, we extend the previous theory and propose explicit methods to construct Galois self-orthogonal codes of lengths $n+2i$ ($i\geq 0$) and $n+2i+1$ ($i\geq 1$) from Galois self-orthogonal codes of length $n$. As applications, linear codes of lengths $n+2i$ and $n+2i+1$ with Galois hulls of arbitrary dimensions are derived immediately, and two new classes of Hermitian self-orthogonal MDS codes are also constructed. Finally, applying all these results to the construction of entanglement-assisted quantum error-correcting codes (EAQECCs), we obtain many new EAQECCs and MDS EAQECCs with rates greater than or equal to $\frac{1}{2}$ and positive net rates.
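For reference, the standard definitions underlying the abstract (as commonly stated in the literature on Galois inner products; they are not restated in the abstract itself) are, for a code $C \subseteq \mathbb{F}_q^n$ with $q = p^e$ and $0 \le h < e$:

```latex
\langle \mathbf{x}, \mathbf{y} \rangle_h \;=\; \sum_{i=1}^{n} x_i\, y_i^{\,p^h},
\qquad
C^{\perp_h} \;=\; \bigl\{ \mathbf{y} \in \mathbb{F}_q^n : \langle \mathbf{x}, \mathbf{y} \rangle_h = 0 \ \text{for all } \mathbf{x} \in C \bigr\},
\qquad
\operatorname{Hull}_h(C) \;=\; C \cap C^{\perp_h}.
```

Here $h = 0$ recovers the Euclidean inner product and, for even $e$, $h = e/2$ recovers the Hermitian inner product.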

Quantum cloud computing is a promising paradigm for efficiently provisioning quantum resources (i.e., qubits) to users. In quantum cloud computing, quantum cloud providers provision quantum resources under reservation and on-demand plans. Typically, quantum resources in the reservation plan are expected to be cheaper than those in the on-demand plan. However, resources in the reservation plan must be reserved in advance, without prior information about the requirements of the quantum circuits to be run; consequently, the reserved resources may be insufficient, i.e., under-reservation may occur. In that case, quantum resources from the on-demand plan can be used to compensate for the unmet resource requirements. To this end, we propose a quantum resource allocation scheme for the quantum cloud computing system in which quantum resources and the minimum waiting time of quantum circuits are jointly optimized. In particular, the objective is to minimize the total cost of quantum circuits under uncertainty regarding the qubit requirements and minimum waiting times of quantum circuits. In experiments, practical quantum Fourier transform circuits are used to evaluate the proposed qubit resource allocation. The results illustrate that the proposed qubit resource allocation achieves the optimal total cost.
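The sketch below is a toy two-stage reservation/on-demand cost model (all prices, scenarios, and the exhaustive search are our illustrative stand-ins; the paper's stochastic formulation also models waiting time): reserve an amount up front, and buy any shortfall on demand at a higher price.

```python
import numpy as np

# Hypothetical per-qubit prices; reservation is cheaper than on-demand.
c_res, c_od = 1.0, 3.0

# Uncertain qubit demand, modeled here as equally likely scenarios
# (e.g., widths of the quantum circuits that will actually be submitted).
scenarios = np.array([8, 12, 16, 20, 27])

def expected_total_cost(x_res):
    shortfall = np.maximum(scenarios - x_res, 0)   # compensated by the on-demand plan
    return c_res * x_res + c_od * shortfall.mean()

# Exhaustive search over reservation amounts, a stand-in for the paper's optimization.
best = min(range(0, 33), key=expected_total_cost)
print(best, expected_total_cost(best))
```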

Variational quantum algorithms have been introduced as a promising class of quantum-classical hybrid algorithms that can already be used with the noisy quantum computing hardware available today by employing parameterized quantum circuits. Considering the non-trivial nature of quantum circuit compilation and the subtleties of quantum computing, it is essential to verify that these parameterized circuits have been compiled correctly. Established equivalence checking procedures that handle parameter-free circuits already exist. However, no methodology capable of handling circuits with parameters has been proposed yet. This work fills this gap by showing that verifying the equivalence of parameterized circuits can be achieved in a purely symbolic fashion using an equivalence checking approach based on the ZX-calculus. At the same time, proofs of inequality can be efficiently obtained with conventional methods by taking advantage of the degrees of freedom inherent to parameterized circuits. We implemented the corresponding methods and proved that the resulting methodology is complete. Experimental evaluations (using the entire parametric ansatz circuit library provided by Qiskit as benchmarks) demonstrate the efficacy of the proposed approach. The implementation is open source and publicly available as part of the equivalence checking tool QCEC (https://github.com/cda-tum/qcec) which is part of the Munich Quantum Toolkit (MQT).
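As a minimal illustration of purely symbolic equivalence checking (using sympy rather than the ZX-calculus machinery of QCEC), the sketch below verifies that a parameterized circuit and an optimizer-merged version agree for all parameter values, not just for sampled instantiations.

```python
import sympy as sp

theta, phi = sp.symbols("theta phi", real=True)

def rz(a):
    """Single-qubit RZ(a) rotation as a symbolic 2x2 unitary."""
    return sp.Matrix([[sp.exp(-sp.I * a / 2), 0],
                      [0, sp.exp(sp.I * a / 2)]])

original = rz(phi) * rz(theta)   # two consecutive parameterized rotations
compiled = rz(theta + phi)       # the same rotations merged by an optimizer

# Symbolic check: the difference simplifies to the zero matrix identically in
# theta and phi, so the two parameterized circuits are equivalent.
assert (original - compiled).applyfunc(sp.simplify) == sp.zeros(2)
```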

Today, an increasing number of Adaptive Deep Neural Networks (AdNNs) are being used on resource-constrained embedded devices. We observe that, similar to traditional software, redundant computation exists in AdNNs, resulting in considerable performance degradation. The performance degradation is input dependent, and such bottlenecks are referred to as input-dependent performance bottlenecks (IDPBs). To ensure an AdNN satisfies the performance requirements of resource-constrained applications, it is essential to conduct performance testing to detect IDPBs in the AdNN. Existing neural network testing methods are primarily concerned with correctness testing, which does not involve performance testing. To fill this gap, we propose DeepPerform, a scalable approach to generate test samples that detect IDPBs in AdNNs. We first demonstrate how the problem of generating performance test samples detecting IDPBs can be formulated as an optimization problem. Following that, we demonstrate how DeepPerform efficiently handles the optimization problem by learning and estimating the distribution of AdNNs' computational consumption. We evaluate DeepPerform on three widely used datasets against five popular AdNN models. The results show that DeepPerform generates test samples that cause more severe performance degradation (FLOPs increase of up to 552\%). Furthermore, DeepPerform is substantially more efficient than the baseline methods in generating test inputs (runtime overhead of only 6-10 milliseconds).
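The toy sketch below (ours, not DeepPerform) shows why performance in an adaptive network is input dependent: a cheap stage handles confident inputs, while inputs near its decision boundary trigger an expensive stack, so the cost of a forward pass varies widely across inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
w_cheap = rng.standard_normal(16)                              # cheap linear stage
W_big = [rng.standard_normal((16, 16)) for _ in range(10)]     # expensive deep stack

def adaptive_forward(x, margin=2.0):
    """Toy adaptive network: early-exit when the cheap stage is confident.
    Returns (output score, number of expensive layers executed)."""
    score, cost = w_cheap @ x, 0
    if abs(score) >= margin:          # confident input: early exit, near-zero cost
        return float(score), cost
    for Wl in W_big:                  # uncertain input: run the full stack
        x = np.tanh(Wl @ x / 4)
        cost += 1
    return float(x.sum()), cost

# The per-input cost gap is exactly what input-dependent performance testing targets.
costs = [adaptive_forward(rng.standard_normal(16))[1] for _ in range(100)]
print(min(costs), max(costs), float(np.mean(costs)))
```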
