
Quantum low-density parity-check (LDPC) codes are an important class of quantum error correcting codes. In such codes, each qubit only affects a constant number of syndrome bits, and each syndrome bit depends only on a constant number of qubits. Constructing quantum LDPC codes is challenging. It is an open problem to understand if there exist good quantum LDPC codes, i.e., codes with constant rate and constant relative distance. Furthermore, techniques to perform fault-tolerant gates are poorly understood. We present a unified way to address these problems. Our main results are a) a bound on the distance, b) a bound on the code dimension, and c) limitations on certain fault-tolerant gates that can be applied to quantum LDPC codes. All three of these bounds are cast as a function of the graph separator of the connectivity graph representation of the quantum code. We find that unless the connectivity graph contains an expander, the code is severely limited. This implies a necessary, but not sufficient, condition to construct good codes. It is the first limitation on quantum LDPC codes that does not rely on locality. As an application, we present novel bounds on quantum LDPC codes associated with local graphs in $D$-dimensional hyperbolic space.
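As a rough illustration of the central object (our sketch, not the paper's construction), the code below builds the connectivity graph of a stabilizer code from its check supports (two qubits are adjacent when a common check acts on both) and probes expansion via the spectral gap of the normalized Laplacian. The repetition-style checks, and the reuse of one matrix for both check types, are placeholders that ignore CSS commutation constraints; only the check supports matter here.

```python
import numpy as np

def connectivity_graph(Hx, Hz):
    """Adjacency matrix on qubits: i ~ j if some check acts on both."""
    n = Hx.shape[1]
    A = np.zeros((n, n), dtype=int)
    for H in (Hx, Hz):
        for row in H:
            support = np.flatnonzero(row)
            for i in support:
                for j in support:
                    if i != j:
                        A[i, j] = 1
    return A

def spectral_gap(A):
    """Second-smallest eigenvalue of the normalized Laplacian.

    A gap bounded away from zero certifies expansion (Cheeger's inequality);
    a vanishing gap signals the small separators that limit the code.
    """
    d = np.maximum(A.sum(axis=1), 1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - Dinv @ A @ Dinv
    return np.sort(np.linalg.eigvalsh(L))[1]

# Path-like repetition-code checks as a stand-in for a real QLDPC code.
n = 12
Hx = np.zeros((n - 1, n), dtype=int)
for i in range(n - 1):
    Hx[i, i] = Hx[i, i + 1] = 1
print(spectral_gap(connectivity_graph(Hx, Hx)))  # tiny gap: a poor expander
```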

Related Content

We prove that any $n$-qubit unitary can be implemented (i) approximately in time $\tilde O\big(2^{n/2}\big)$ with query access to an appropriate classical oracle, and also (ii) exactly by a circuit of depth $\tilde O\big(2^{n/2}\big)$ with one- and two-qubit gates and $2^{O(n)}$ ancillae. The proofs of (i) and (ii) involve similar reductions to Grover search. The proof of (ii) also involves a linear-depth construction of arbitrary quantum states using one- and two-qubit gates (in fact, this can be improved to constant depth with the addition of fanout and generalized Toffoli gates) which may be of independent interest. We also prove a matching $\Omega\big(2^{n/2}\big)$ lower bound for (i) and (ii) for a certain class of implementations.
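For readers unfamiliar with the primitive both reductions invoke, here is a minimal state-vector simulation of Grover search (illustrative only, and unrelated to the paper's oracle construction); it finds a marked item among $N = 2^n$ in roughly $\frac{\pi}{4}\sqrt{N}$ iterations.

```python
import numpy as np

def grover(n, marked):
    """Textbook Grover search on n qubits for a single marked index."""
    N = 2 ** n
    psi = np.full(N, 1 / np.sqrt(N))   # uniform superposition
    iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iters):
        psi[marked] *= -1              # oracle: phase-flip the target
        psi = 2 * psi.mean() - psi     # diffusion about the mean
    return int(np.argmax(np.abs(psi))), iters

print(grover(8, marked=137))  # (137, 12): ~sqrt(256) steps vs up to 256 classical queries
```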

We study the classical expander codes, introduced by Sipser and Spielman \cite{SS96}. Given any constants $0< \alpha, \varepsilon < 1/2$, and an arbitrary bipartite graph with $N$ vertices on the left, $M < N$ vertices on the right, and left degree $D$ such that any left subset $S$ of size at most $\alpha N$ has at least $(1-\varepsilon)|S|D$ neighbors, we show that the corresponding linear code given by parity checks on the right has distance at least roughly $\frac{\alpha N}{2 \varepsilon }$. This is strictly better than the best known previous result of $2(1-\varepsilon ) \alpha N$ \cite{Sudan2000note, Viderman13b} whenever $\varepsilon < 1/2$, and improves the previous result significantly when $\varepsilon $ is small. Furthermore, we show that this distance is tight in general, thus providing a complete characterization of the distance of general expander codes. Next, we provide several efficient decoding algorithms, which vastly improve previous results in terms of the fraction of errors corrected, whenever $\varepsilon < \frac{1}{4}$. Finally, we also give a bound on the list-decoding radius of general expander codes, which beats the classical Johnson bound in certain situations (e.g., when the graph is almost regular and the code has a high rate). Our techniques exploit novel combinatorial properties of bipartite expander graphs. In particular, we establish a new size-expansion tradeoff, which may be of independent interest.
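A quick numeric check (ours, not from the paper): the ratio of the new bound $\frac{\alpha N}{2\varepsilon}$ to the previous bound $2(1-\varepsilon)\alpha N$ is $\frac{1}{4\varepsilon(1-\varepsilon)}$, which exceeds $1$ for every $\varepsilon \neq 1/2$ and diverges as $\varepsilon \to 0$.

```python
alpha, N = 0.1, 10_000  # arbitrary example values
for eps in (0.05, 0.1, 0.25, 0.4, 0.49):
    new = alpha * N / (2 * eps)          # this paper's distance bound
    old = 2 * (1 - eps) * alpha * N      # previous best bound
    print(f"eps={eps:<5} new={new:8.1f} old={old:8.1f} ratio={new / old:5.2f}")
```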

Ancestral state reconstruction is one of the most important tasks in evolutionary biology. Conditions under which we can reliably reconstruct the ancestral state have been studied for both discrete and continuous traits. However, the connection between these results is unclear, and it seems that each model needs different conditions. In this work, we provide a unifying theory on the consistency of ancestral state reconstruction for various types of trait evolution models. Notably, we show that for a sequence of nested trees with bounded heights, the necessary and sufficient conditions for the existence of a consistent ancestral state reconstruction method under discrete models, the Brownian motion model, and the threshold model are equivalent. When tree heights are unbounded, we provide a simple counter-example to show that this equivalence is no longer valid.

We design an algorithm for computing connectivity in hypergraphs which runs in time $\hat O_r(p + \min\{\lambda^{\frac{r-3}{r-1}} n^2, n^r/\lambda^{r/(r-1)}\})$ (the $\hat O_r(\cdot)$ hides terms subpolynomial in the main parameter and factors that depend only on $r$), where $p$ is the size, $n$ is the number of vertices, and $r$ is the rank of the hypergraph. Our algorithm is faster than existing algorithms when the rank is constant and the connectivity $\lambda$ is $\omega(1)$. At the heart of our algorithm is a structural result regarding min-cuts in simple hypergraphs: we show a trade-off between the number of hyperedges taking part in all min-cuts and the size of the smaller side of the min-cut. This structural result can be viewed as a generalization of a well-known structural theorem for simple graphs [Kawarabayashi-Thorup, JACM 19]. We extend the framework of expander decomposition to simple hypergraphs in order to prove this structural result. We also make the proof of the structural result constructive to obtain our faster hypergraph connectivity algorithm.
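To fix notation, the sketch below computes the connectivity $\lambda$ of a small rank-$r$ hypergraph by brute force over bipartitions: a hyperedge is cut when it has vertices on both sides. This exponential-time baseline only defines the quantity; it is not the paper's algorithm.

```python
from itertools import combinations

def hypergraph_connectivity(n, edges):
    """Minimum number of hyperedges crossing any bipartition of [n]."""
    best = len(edges)
    for r in range(1, n // 2 + 1):       # one side of each bipartition
        for S in combinations(range(n), r):
            S = set(S)
            cut = sum(1 for e in edges if 0 < len(S & set(e)) < len(e))
            best = min(best, cut)
    return best

# A rank-3 example on 5 vertices.
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 0), (0, 1, 4)]
print(hypergraph_connectivity(5, edges))  # 3
```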

A basic problem for constant dimension codes is to determine the maximum possible size $A_q(n,d;k)$ of a set of $k$-dimensional subspaces in $\mathbb{F}_q^n$, called codewords, such that the subspace distance satisfies $d_S(U,W):=2k-2\dim(U\cap W)\ge d$ for all pairs of different codewords $U$, $W$. Constant dimension codes have applications in e.g.\ random linear network coding, cryptography, and distributed storage. Bounds for $A_q(n,d;k)$ are the topic of many recent research papers. Providing a general framework, we survey many of the latest constructions and highlight the potential for further improvements. As examples we give improved constructions for the cases $A_q(10,4;5)$, $A_q(11,4;4)$, $A_q(12,6;6)$, and $A_q(15,4;4)$. We also derive general upper bounds for subcodes arising in those constructions.
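For concreteness, the sketch below (ours, specialized to $q=2$) evaluates the subspace distance from basis matrices via rank computations over $\mathbb{F}_2$, using $\dim(U\cap W)=\dim U+\dim W-\dim(U+W)$; for two $k$-dimensional subspaces this reproduces $d_S(U,W)=2k-2\dim(U\cap W)$.

```python
import numpy as np

def rank_gf2(M):
    """Rank of a binary matrix over F_2 via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def subspace_distance(U, W):
    """d_S for subspaces of F_2^n given as basis matrices (rows)."""
    dim_u, dim_w = rank_gf2(U), rank_gf2(W)
    dim_int = dim_u + dim_w - rank_gf2(np.vstack([U, W]))
    return dim_u + dim_w - 2 * dim_int

U = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])  # span{e1, e2}
W = np.array([[0, 1, 0, 0], [0, 0, 1, 0]])  # span{e2, e3}
print(subspace_distance(U, W))  # 2k - 2*dim(U∩W) = 4 - 2 = 2
```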

Variational quantum circuits are used in quantum machine learning and variational quantum simulation tasks. How to design good variational circuits, or how to predict their performance on given learning or optimization tasks, remains unclear. Here we address these problems by analyzing variational quantum circuits through the theory of neural tangent kernels. We define quantum neural tangent kernels and derive dynamical equations for the associated loss function in optimization and learning tasks. We analytically solve the dynamics in the frozen limit, or lazy-training regime, where the variational angles change slowly and a linear perturbation is a good approximation. We extend the analysis to a dynamical setting, including quadratic corrections in the variational angles. We then consider hybrid quantum-classical architectures and define a large-width limit for hybrid kernels, showing that a hybrid quantum-classical neural network can be approximately Gaussian. The results presented here identify regimes in which an analytical understanding of the training dynamics of variational quantum circuits, used for quantum machine learning and optimization problems, is possible. These analytical results are supported by numerical simulations of quantum machine learning experiments.
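A minimal toy model (ours, far simpler than the circuits studied above) makes the object concrete: for a single-qubit circuit $f(x,\theta)$ with an RY data-encoding gate followed by RY variational layers, the quantum neural tangent kernel is the inner product of parameter gradients at two inputs, computable with the parameter-shift rule.

```python
import numpy as np

def ry(a):  # RY(a) = exp(-i a Y / 2), real-valued
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def f(x, theta):
    """<Z> after RY(x) data encoding and RY(theta_i) variational layers."""
    psi = np.array([1.0, 0.0])
    psi = ry(x) @ psi
    for t in theta:
        psi = ry(t) @ psi
    return psi @ np.diag([1.0, -1.0]) @ psi

def grad(x, theta):
    """Exact gradient via the parameter-shift rule."""
    g = np.zeros_like(theta)
    for mu in range(len(theta)):
        ep, em = theta.copy(), theta.copy()
        ep[mu] += np.pi / 2
        em[mu] -= np.pi / 2
        g[mu] = 0.5 * (f(x, ep) - f(x, em))
    return g

def qntk(x1, x2, theta):
    """K(x1, x2) = grad_theta f(x1) . grad_theta f(x2)."""
    return grad(x1, theta) @ grad(x2, theta)

theta = np.random.default_rng(0).uniform(0, 2 * np.pi, size=4)
print(qntk(0.3, 0.7, theta))
```

In the lazy-training regime described above, $\theta$ stays near its initialization, so this kernel is approximately constant during training and the loss dynamics become linear and analytically solvable.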

Developing state-of-the-art classical simulators of quantum circuits is of utmost importance to test and evaluate early quantum technology and understand the true potential of full-blown error-corrected quantum computers. In the past few years, multiple theoretical and numerical advances have continuously pushed the boundary of what is classically simulable, leading to a plethora of tools that are often limited to a specific purpose or designed for particular hardware (e.g., CPUs vs. GPUs). Moreover, such tools are typically developed using tailored languages and syntax, which makes it hard to compare results from, and create hybrid approaches using, different simulation techniques. To support unified and optimized use of these techniques across platforms, we developed HybridQ, a highly extensible platform designed to provide a common framework to integrate multiple state-of-the-art techniques to run on a variety of hardware. The philosophy behind its development has been driven by three main pillars: "Easy to Use", "Easy to Extend", and "Use the Best Available Technology". The powerful tools of HybridQ allow users to manipulate, develop, and extend noiseless and noisy circuits for different hardware architectures. HybridQ supports large-scale high-performance computing (HPC) simulations, automatically balancing the workload among different processor nodes and enabling the use of multiple backends to maximize parallel efficiency. Everything is then glued together by a simple and expressive language that allows seamless switching from one technique to another, as well as from one hardware platform to the next, without the need to write lengthy translations, thus greatly simplifying the development of new hybrid algorithms and techniques.

We study the performance of medium-length quantum LDPC (QLDPC) codes in the depolarizing channel. We consider only degenerate codes whose maximal stabilizer weight is much smaller than their minimum distance. It is shown that, with the help of OSD-like post-processing, the performance of the standard belief propagation (BP) decoder on many QLDPC codes can be improved by several orders of magnitude. Using this new BP-OSD decoder, we study the performance of several known classes of degenerate QLDPC codes, including hypergraph product codes, hyperbicycle codes, homological product codes, and Haah's cubic codes. We also construct several interesting examples of short generalized bicycle codes. Some of them have the additional property that their syndromes are protected by small BCH codes, which may be useful for fault-tolerant syndrome measurement. We also propose a new large family of QLDPC codes that contains the class of hypergraph product codes, in which one of the parity-check matrices used is square. It is shown that in some cases such codes have better performance than hypergraph product codes. Finally, we demonstrate that the performance of the proposed BP-OSD decoder for some of the constructed codes is better than that of a relatively large surface code decoded by a near-optimal decoder.
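The core post-processing step can be sketched as follows (a simplified OSD-0 pass with our own interface, not the full decoder proposed above): after BP produces soft reliabilities, re-solve the syndrome equation $He = s \pmod 2$ exactly, placing the least reliable bits in the information set and zeroing the rest.

```python
import numpy as np

def osd0(H, s, llr):
    """OSD-0: H (m x n) binary parity checks, s syndrome, llr BP soft output
    (large |llr| = confident). Assumes s lies in the column span of H."""
    m, n = H.shape
    order = np.argsort(np.abs(llr))              # least reliable columns first
    A = np.hstack([H[:, order] % 2, s.reshape(-1, 1) % 2]).astype(int)
    pivots, row = [], 0
    for c in range(n):                           # Gaussian elimination, GF(2)
        pr = next((r for r in range(row, m) if A[r, c]), None)
        if pr is None:
            continue
        A[[row, pr]] = A[[pr, row]]
        for r in range(m):
            if r != row and A[r, c]:
                A[r] ^= A[row]
        pivots.append(c)
        row += 1
    e = np.zeros(n, dtype=int)
    for r, c in enumerate(pivots):               # non-pivot bits stay zero
        e[order[c]] = A[r, n]
    return e

H = np.array([[1, 1, 0], [0, 1, 1]])
s = np.array([1, 0])
llr = np.array([0.2, 3.0, 2.5])                  # bit 0 is least reliable
print(osd0(H, s, llr))                           # [1 0 0]; H @ e % 2 == s
```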

The induced odd cycle packing number $iocp(G)$ of a graph $G$ is the maximum integer $k$ such that $G$ contains an induced subgraph consisting of $k$ pairwise vertex-disjoint odd cycles. Motivated by applications to geometric graphs, Bonamy et al.~\cite{indoc} proved that graphs of bounded induced odd cycle packing number, bounded VC dimension, and linear independence number admit a randomized EPTAS for the independence number. We show that the assumption of bounded VC dimension is not necessary, exhibiting a randomized algorithm that for any integers $k\ge 0$ and $t\ge 1$ and any $n$-vertex graph $G$ of induced odd cycle packing number at most $k$ returns in time $O_{k,t}(n^{k+4})$ an independent set of $G$ whose size is at least $\alpha(G)-n/t$ with high probability. In addition, we present $\chi$-boundedness results for graphs with bounded odd cycle packing number, and use them to design a QPTAS for the independence number only assuming bounded induced odd cycle packing number.

Discriminative correlation filter (DCF) based trackers have shown considerable success in visual object tracking. These trackers often make use of low- to mid-level features such as histograms of oriented gradients (HOG) and mid-layer activations from convolutional neural networks (CNNs). We argue that incorporating semantically higher-level information into the tracked features may provide further robustness in challenging cases such as viewpoint changes. Deep salient object detection is one example of such high-level features, as it makes use of semantic information to highlight the important regions of a given scene. In this work, we propose an improvement over DCF-based trackers that combines saliency-based and other feature-based filter responses. This combination uses an adaptive weight on the saliency-based filter responses, selected automatically according to the temporal consistency of the visual saliency. We show that our method consistently improves a baseline DCF-based tracker, especially in challenging cases, and outperforms the state-of-the-art. Our improved tracker operates at 9.3 fps, introducing a small computational burden over the baseline, which runs at 11 fps.
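The fusion step might look like the hedged sketch below; the convex weighting rule, the normalized-cross-correlation consistency measure, and all names are our illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def temporal_consistency(sal_prev, sal_curr, eps=1e-8):
    """Normalized cross-correlation of consecutive saliency maps, in [0, 1]."""
    a = sal_prev - sal_prev.mean()
    b = sal_curr - sal_curr.mean()
    ncc = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return float(np.clip(ncc, 0.0, 1.0))

def fuse_responses(resp_base, resp_sal, consistency, w_max=0.5):
    """Adaptive convex combination: trust saliency only when it is stable."""
    w = w_max * consistency
    return (1 - w) * resp_base + w * resp_sal

# Toy usage with random maps standing in for real filter responses.
rng = np.random.default_rng(1)
sal_prev, sal_curr = rng.random((50, 50)), rng.random((50, 50))
fused = fuse_responses(rng.random((50, 50)), sal_curr,
                       temporal_consistency(sal_prev, sal_curr))
peak = np.unravel_index(fused.argmax(), fused.shape)  # predicted target center
print(peak)
```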
