
Variational Quantum Algorithms (VQAs) are widely viewed as the best hope for near-term quantum advantage. However, recent studies have shown that noise can severely limit the trainability of VQAs, e.g., by exponentially flattening the cost landscape and suppressing the magnitudes of cost gradients. Error Mitigation (EM) shows promise in reducing the impact of noise on near-term devices. Thus, it is natural to ask whether EM can improve the trainability of VQAs. In this work, we first show that, for a broad class of EM strategies, exponential cost concentration cannot be resolved without committing exponential resources elsewhere. This class of strategies includes as special cases Zero Noise Extrapolation, Virtual Distillation, Probabilistic Error Cancellation, and Clifford Data Regression. Second, we perform analytical and numerical analysis of these EM protocols, and we find that some of them (e.g., Virtual Distillation) can make it harder to resolve cost function values compared to running no EM at all. As a positive result, we do find numerical evidence that Clifford Data Regression (CDR) can aid the training process in certain settings where cost concentration is not too severe. Our results show that care should be taken in applying EM protocols as they can either worsen or not improve trainability. On the other hand, our positive results for CDR highlight the possibility of engineering error mitigation methods to improve trainability.
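To make the resource trade-off concrete, here is a minimal sketch of Zero Noise Extrapolation (ZNE), one of the EM strategies covered by the class analyzed above. The exponential decay model, the noise-scale factors, and all parameter values are illustrative assumptions, not the paper's setup; the point is that extrapolating back to zero noise amplifies statistical error, which must be paid for with more samples.

```python
# Toy ZNE sketch: measure a decaying signal at amplified noise scales,
# then extrapolate to zero noise. All parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def noisy_expectation(scale, ideal=0.8, decay=0.3, shots=10_000):
    """Toy model: the signal decays exponentially with the noise scale,
    and finite shots add statistical error."""
    mean = ideal * np.exp(-decay * scale)
    return mean + rng.normal(0.0, 1.0 / np.sqrt(shots))

# Measure at amplified noise levels and extrapolate back to scale 0.
scales = np.array([1.0, 2.0, 3.0])
values = np.array([noisy_expectation(s) for s in scales])

# Richardson-style polynomial fit evaluated at zero noise.
coeffs = np.polyfit(scales, values, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)
print(f"raw (scale 1): {values[0]:.3f}  ZNE estimate: {zne_estimate:.3f}")
```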

Related content

Lasso and Ridge are important minimization problems in machine learning and statistics. They are versions of linear regression with squared loss where the vector $\theta\in\mathbb{R}^d$ of coefficients is constrained in either $\ell_1$-norm (for Lasso) or in $\ell_2$-norm (for Ridge). We study the complexity of quantum algorithms for finding $\varepsilon$-minimizers for these minimization problems. We show that for Lasso we can get a quadratic quantum speedup in terms of $d$ by speeding up the cost-per-iteration of the Frank-Wolfe algorithm, while for Ridge the best quantum algorithms are linear in $d$, as are the best classical algorithms.
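For context, below is a minimal classical Frank-Wolfe loop for the $\ell_1$-constrained least-squares problem; the quantum speedup described above targets the per-iteration cost of exactly this kind of loop. The synthetic problem data and constraint radius are placeholders.

```python
# Classical Frank-Wolfe for min ||A theta - b||^2 s.t. ||theta||_1 <= tau.
# The linear minimization oracle over the l1 ball picks the best vertex,
# i.e., the coordinate with the largest gradient magnitude.
import numpy as np

rng = np.random.default_rng(1)
n, d, tau = 100, 50, 5.0
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

theta = np.zeros(d)
for t in range(200):
    grad = 2 * A.T @ (A @ theta - b)      # gradient of the squared loss
    i = np.argmax(np.abs(grad))           # LMO: best vertex of the l1 ball
    vertex = np.zeros(d)
    vertex[i] = -tau * np.sign(grad[i])
    gamma = 2.0 / (t + 2.0)               # standard FW step size
    theta = (1 - gamma) * theta + gamma * vertex

print("loss:", np.sum((A @ theta - b) ** 2))
```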

Variational quantum algorithms rely on gradient-based optimization to iteratively minimize a cost function evaluated by measuring output(s) of a quantum processor. A barren plateau is the phenomenon of exponentially vanishing gradients in sufficiently expressive parametrized quantum circuits. It has been established that the onset of a barren plateau regime depends on the cost function, although the particular behavior has been demonstrated only for certain classes of cost functions. Here we derive a lower bound on the variance of the gradient, which depends mainly on the width of the circuit causal cone of each term in the Pauli decomposition of the cost function. Our result further clarifies the conditions under which barren plateaus can occur.
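The flattening mechanism behind barren plateaus can be seen numerically even without simulating circuits: for Haar-random states, the variance of a local Pauli expectation shrinks exponentially with the number of qubits. The sketch below uses random states as a stand-in for deep random parametrized circuits; the sample counts are arbitrary.

```python
# Cost concentration illustration: Var[<Z>] over Haar-random n-qubit
# states scales roughly like 1/2^n, the flattening behind barren plateaus.
import numpy as np

rng = np.random.default_rng(2)

def haar_state(dim):
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return psi / np.linalg.norm(psi)

for n in range(2, 11, 2):
    dim = 2 ** n
    # <Z_1> is +1 on basis states whose first bit is 0, -1 otherwise.
    z_diag = np.where(np.arange(dim) < dim // 2, 1.0, -1.0)
    samples = [np.sum(np.abs(haar_state(dim)) ** 2 * z_diag)
               for _ in range(500)]
    print(f"n={n:2d}  Var[<Z>] ~ {np.var(samples):.2e}  (1/2^n = {1/dim:.2e})")
```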

The quantum capacity of a noisy quantum channel determines the maximal rate at which we can code reliably over asymptotically many uses of the channel, and it characterizes the channel's ultimate ability to transmit quantum information coherently. In this paper, we derive single-letter upper bounds on the quantum and private capacities of quantum channels. The quantum capacity of a quantum channel is always no larger than the quantum capacity of its extended channels, since the extensions of the channel can be considered as assistance from the environment. By optimizing the parametrized extended channels with specific structures such as the flag structure, we obtain new upper bounds on the quantum capacity of the original quantum channel. Furthermore, we extend our approach to estimating the fundamental limits of private communication and one-way entanglement distillation. As notable applications, we establish improved upper bounds on the quantum and private capacities for fundamental quantum channels of interest in quantum information, some of which are also the sources of noise in superconducting quantum computing. In particular, our upper bounds on the quantum capacities of the depolarizing channel and the generalized amplitude damping channel are strictly better than previously best-known bounds for certain regimes.
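For reference, the standard baseline against which such results are read is the well-known hashing (coherent information) lower bound for the depolarizing channel, $Q \geq 1 - H_2(p) - p\log_2 3$. The snippet below only computes this classical baseline; the paper's contribution is new upper bounds, which are not reproduced here.

```python
# Hashing lower bound on the quantum capacity of the depolarizing channel
# with error probability p: Q >= 1 - H2(p) - p*log2(3).
import numpy as np

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def hashing_bound(p):
    return 1.0 - h2(p) - p * np.log2(3)

for p in (0.01, 0.05, 0.10, 0.15):
    print(f"p={p:.2f}  hashing lower bound on Q: {hashing_bound(p):+.4f}")
```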

In a recent breakthrough, Mahadev constructed a classical verification of quantum computation (CVQC) protocol for a classical client to delegate decision problems in BQP to an untrusted quantum prover under computational assumptions. In this work, we explore further the feasibility of CVQC with the more general sampling problems in BQP and with the desirable blindness property. We contribute affirmative solutions to both as follows. (1) Motivated by the sampling nature of many quantum applications (e.g., quantum algorithms for machine learning and quantum supremacy tasks), we initiate the study of CVQC for quantum sampling problems (denoted by SampBQP). More precisely, in a CVQC protocol for a SampBQP problem, the prover and the verifier are given an input $x\in \{0,1\}^n$ and a quantum circuit $C$, and the goal of the classical client is to learn a sample from the output $z \leftarrow C(x)$ up to a small error, from its interaction with an untrusted prover. We demonstrate its feasibility by constructing a four-message CVQC protocol for SampBQP based on the quantum Learning With Errors assumption. (2) The blindness of CVQC protocols refers to a property of the protocol where the prover learns nothing, and hence is blind, about the client's input. It is a highly desirable property that has been intensively studied for the delegation of quantum computation. We provide a simple yet powerful generic compiler that transforms any CVQC protocol to a blind one while preserving its completeness and soundness errors as well as the number of rounds. Applying our compiler to (a parallel repetition of) Mahadev's CVQC protocol for BQP and our CVQC protocol for SampBQP yields the first constant-round blind CVQC protocol for BQP and SampBQP respectively, with negligible and inverse polynomial soundness errors respectively, and negligible completeness errors.

One of the main reasons for the query model's prominence in quantum complexity is the presence of concrete lower-bounding techniques: the polynomial method and the adversary method. There have been considerable efforts not just to give lower bounds using these methods but even to compare and relate them. We explore the value of these bounds on quantum query complexity for the class of symmetric functions, arguably one of the most natural and basic classes of Boolean functions. We show (using the recently introduced spectral sensitivity) that both these bounds (positive adversary and approximate degree) give the same value for every total symmetric Boolean function. We also look at the quantum query complexity of Gap Majority, a partial symmetric function. It has gained importance recently in regard to understanding the composition of randomized query complexity. We characterize the quantum query complexity of Gap Majority and show a lower bound on noisy randomized query complexity (Ben-David and Blais, FOCS 2020) in terms of quantum query complexity. In addition, we study how large certificate complexity and block sensitivity can be compared to sensitivity (even up to constant factors) for symmetric functions. We show tight separations, i.e., we give upper bounds on the possible separations and construct functions achieving them.
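A useful fact in this setting is that a symmetric Boolean function is fully described by its value vector $v$, where $v_k$ is the output on inputs of Hamming weight $k$; flipping a 1 moves weight $k \to k-1$ and flipping a 0 moves $k \to k+1$, which gives the sensitivity in closed form. The sketch below computes sensitivity this way for a few standard functions; the function choices are illustrative, not the paper's constructions.

```python
# Sensitivity of a symmetric Boolean function from its value vector v,
# where v[k] is the output on inputs of Hamming weight k.
def sensitivity(v):
    n = len(v) - 1
    best = 0
    for k in range(n + 1):
        s = 0
        if k > 0 and v[k] != v[k - 1]:
            s += k              # the k ones whose flip changes the value
        if k < n and v[k] != v[k + 1]:
            s += n - k          # the n-k zeros whose flip changes the value
        best = max(best, s)
    return best

n = 8
or_vec = [0] + [1] * n                              # OR_n
maj_vec = [int(k > n // 2) for k in range(n + 1)]   # MAJ_n
par_vec = [k % 2 for k in range(n + 1)]             # PARITY_n
for name, v in [("OR", or_vec), ("MAJ", maj_vec), ("PARITY", par_vec)]:
    print(name, sensitivity(v))
```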

Phase estimation is a quantum algorithm for measuring the eigenvalues of a Hamiltonian. We propose and rigorously analyse a randomized phase estimation algorithm with two distinctive features. First, our algorithm has complexity independent of the number of terms L in the Hamiltonian. Second, unlike previous L-independent approaches, such as those based on qDRIFT, all sources of error in our algorithm can be suppressed by collecting more data samples, without increasing the circuit depth.
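To make the comparison point concrete, here is a sketch of the qDRIFT-style randomized compilation referenced above: rather than a fixed Trotter sequence over all $L$ terms, Hamiltonian terms are sampled with probability proportional to their coefficient weights. The single-qubit Hamiltonian is a toy stand-in, and this is the prior approach, not the paper's new algorithm.

```python
# qDRIFT-style random product formula for a toy Hamiltonian
# H = 0.9 Z + 0.4 X (positive coefficients, so no sign handling needed).
import numpy as np

rng = np.random.default_rng(3)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
terms = [(0.9, Z), (0.4, X)]

coeffs = np.array([h for h, _ in terms])
lam = coeffs.sum()                      # lambda = sum_j |h_j|
probs = coeffs / lam

def qdrift_product(t, n_samples):
    """One random product of sampled exponentials approximating e^{-iHt}."""
    U = np.eye(2, dtype=complex)
    tau = lam * t / n_samples           # every sampled term gets this angle
    for _ in range(n_samples):
        _, P = terms[rng.choice(len(terms), p=probs)]
        # exp(-i tau P) for a Pauli P: cos(tau) I - i sin(tau) P
        U = (np.cos(tau) * np.eye(2) - 1j * np.sin(tau) * P) @ U
    return U

# Exact e^{-iH}: since H = r (n . sigma), exp(-iH) = cos(r) I - i sin(r) H/r.
H = sum(h * P for h, P in terms)
r = np.sqrt(0.9**2 + 0.4**2)
exact = np.cos(r) * np.eye(2) - 1j * np.sin(r) * (H / r)

approx = qdrift_product(t=1.0, n_samples=2000)
print("deviation:", np.linalg.norm(exact - approx))  # shrinks with more samples
```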

Should quantum computers become available, they will reduce the effective key length of basic secret-key primitives, such as blockciphers. To address this we will either need to use blockciphers which inherently have longer keys or use key-length extension techniques which employ a blockcipher to construct a more secure blockcipher that uses longer keys. We consider the latter approach and revisit the FX and double encryption constructions. Classically, FX is known to be secure, while double encryption is no more secure than single encryption due to a meet-in-the-middle attack. We provide positive results, with concrete and tight bounds, for both of these constructions against quantum attackers in ideal models. For FX, we consider a partially-quantum model, where the attacker has quantum access to the ideal primitive, but only classical access to FX. We provide two results for FX in this model. The first establishes the security of FX against non-adaptive attackers. The second establishes security against general adaptive attacks for a variant of FX using a random oracle in place of an ideal cipher. This result relies on the techniques of Zhandry (CRYPTO '19) for lazily sampling a quantum random oracle. An extension to perfectly lazily sampling a quantum random permutation, which would help resolve the adaptive security of standard FX, is an important but challenging open question. We introduce techniques for partially-quantum proofs without relying on analyzing the classical and quantum oracles separately, which is common in existing work. This may be of broader interest. For double encryption we apply a technique of Tessaro and Thiruvengadam (TCC '18) to establish that security reduces to the difficulty of solving the list disjointness problem, which we are able to reduce through a chain of results to the known quantum difficulty of the element distinctness problem.
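The FX construction itself is simple: $FX_{K,K_1,K_2}(x) = E_K(x \oplus K_1) \oplus K_2$, whitening the input and output of a base cipher $E$ with two extra keys. The sketch below instantiates it with a toy Feistel permutation standing in for the ideal cipher; the toy cipher has no cryptographic strength and exists only to make the construction executable.

```python
# FX key-length extension: FX_{K,K1,K2}(x) = E_K(x XOR K1) XOR K2.
# toy_feistel_encrypt is a placeholder permutation, NOT a secure cipher.
def toy_feistel_encrypt(key, block, rounds=8):
    """A toy Feistel permutation on 32-bit blocks."""
    l, r = block >> 16, block & 0xFFFF
    for i in range(rounds):
        rk = (key >> (i * 4)) & 0xFFFF               # toy round-key schedule
        f = ((r * 0x9E37 + rk) ^ (r >> 3)) & 0xFFFF  # toy round function
        l, r = r, l ^ f
    return (l << 16) | r

def fx_encrypt(k, k1, k2, x):
    return toy_feistel_encrypt(k, x ^ k1) ^ k2

ct = fx_encrypt(k=0xDEADBEEF, k1=0x12345678, k2=0x9ABCDEF0, x=0x00000001)
print(hex(ct))
```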

CPU registers are small discrete storage units used to hold temporary data and instructions within the CPU. Registers are not addressable in the same way memory is, which makes them immune to memory attacks and manipulation by other means. In this paper, we take advantage of this to provide a protection mechanism for critical program data: both active local variables and control objects on the stack. This protection effectively eliminates the threat of control- and data-oriented attacks, even by adversaries with full knowledge of the active stack. Our solution, RegGuard, is a compiler register allocation strategy that utilises the available CPU registers to hold critical variables during execution. Unlike conventional allocation schemes, RegGuard prioritises the security significance of a program variable over its expected performance gain. Our scheme deals effectively with registers saved to the stack, i.e., when the compiler needs to free up registers to make room for the variables of a new function call. With RegGuard, critical data objects anywhere on the stack are effectively protected from corruption, even by adversaries with arbitrary read and write access. While our primary design focus is on security, performance is very important for a scheme to be adopted in practice. RegGuard still benefits from the performance gains normally associated with register allocation, and its overhead is within a few percent of unsecured register allocation schemes in most cases. We present detailed experiments that showcase the performance of RegGuard using different benchmark programs and the C library on the ARM64 platform.
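The core policy change can be stated in a few lines: when there are more candidate variables than registers, a conventional allocator ranks by expected performance gain, while a RegGuard-style policy ranks by security significance first. The sketch below is purely conceptual; the scoring fields and weights are hypothetical and are not RegGuard's actual cost model.

```python
# Conceptual comparison of allocation policies: performance-first vs.
# security-first ranking of register candidates (toy data throughout).
from dataclasses import dataclass

@dataclass
class Var:
    name: str
    accesses: int   # profiling estimate of dynamic accesses
    security: int   # e.g., return address > code pointer > length field

candidates = [
    Var("loop_i",   accesses=900, security=0),
    Var("ret_addr", accesses=2,   security=3),
    Var("fn_ptr",   accesses=10,  security=2),
    Var("buf_len",  accesses=50,  security=1),
]
NUM_REGS = 2

conventional = sorted(candidates, key=lambda v: -v.accesses)[:NUM_REGS]
security_first = sorted(candidates,
                        key=lambda v: (-v.security, -v.accesses))[:NUM_REGS]
print("performance-first:", [v.name for v in conventional])
print("security-first:   ", [v.name for v in security_first])
```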

Combinatorial optimization is regarded as a potentially promising application of near- and long-term quantum computers. The best-known heuristic quantum algorithm for combinatorial optimization on gate-based devices, the Quantum Approximate Optimization Algorithm (QAOA), has been the subject of many theoretical and empirical studies. Unfortunately, its application to specific combinatorial optimization problems poses several difficulties: among these, few performance guarantees are known, and the variational nature of the algorithm makes it necessary to classically optimize a number of parameters. In this work, we partially address these issues for a specific combinatorial optimization problem: diluted spin models, with MAX-CUT as a notable special case. Specifically, generalizing the analysis of the Sherrington-Kirkpatrick model by Farhi et al., we establish an explicit algorithm to evaluate the performance of QAOA on MAX-CUT applied to random Erdos-Renyi graphs of expected degree $d$ for an arbitrary constant number of layers $p$ and as the problem size tends to infinity. This analysis yields an explicit mapping between QAOA parameters for MAX-CUT on Erdos-Renyi graphs of expected degree $d$, in the limit $d \to \infty$, and the Sherrington-Kirkpatrick model, and gives good QAOA variational parameters for MAX-CUT applied to Erdos-Renyi graphs. We then partially generalize the latter analysis to graphs with a degree distribution rather than a single degree $d$, and finally to diluted spin models with $D$-body interactions ($D \geq 3$). We validate our results with numerical experiments suggesting they may have a larger reach than rigorously established; among other things, our algorithms provided good initial, if not nearly optimal, variational parameters for very small problem instances where the infinite-size limit assumption is clearly violated.
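For readers unfamiliar with the variational objective being analyzed, here is a minimal $p=1$ QAOA statevector simulation for MAX-CUT on a small fixed graph. The graph, angle grid, and brute-force search are illustrative only; the paper's contribution is evaluating these expectations analytically on random Erdos-Renyi graphs, not by simulation.

```python
# p=1 QAOA for MAX-CUT on a toy 4-vertex graph via statevector simulation.
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n = 4
dim = 2 ** n
bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1   # basis-state bit table

# Cut value of every computational basis state.
cut = sum(bits[:, i] ^ bits[:, j] for i, j in edges).astype(float)

X = np.array([[0, 1], [1, 0]], dtype=complex)

def apply_mixer(psi, beta):
    """Apply e^{-i beta X} to every qubit of the statevector."""
    rx = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
    psi = psi.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
    return psi.reshape(dim)

def qaoa_expectation(gamma, beta):
    psi = np.full(dim, dim ** -0.5, dtype=complex)   # |+...+> state
    psi = np.exp(-1j * gamma * cut) * psi            # cost layer (diagonal)
    psi = apply_mixer(psi, beta)                     # mixer layer
    return float(np.sum(np.abs(psi) ** 2 * cut))    # expected cut value

grid = np.linspace(0, np.pi, 40)
best = max(((g, b) for g in grid for b in grid),
           key=lambda gb: qaoa_expectation(*gb))
print("best (gamma, beta):", best, " <cut> =", round(qaoa_expectation(*best), 3))
```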

Quantum hardware and quantum-inspired algorithms are becoming increasingly popular for combinatorial optimization. However, these algorithms may require careful hyperparameter tuning for each problem instance. We use a reinforcement learning agent in conjunction with a quantum-inspired algorithm to solve the Ising energy minimization problem, which is equivalent to the Maximum Cut problem. The agent controls the algorithm by tuning one of its parameters with the goal of improving recently seen solutions. We propose a new Rescaled Ranked Reward (R3) method that enables a stable single-player version of self-play training and helps the agent escape local optima. Training on any problem instance can be accelerated by applying transfer learning from an agent trained on randomly generated problems. Our approach allows sampling high-quality solutions to the Ising problem with high probability and outperforms both baseline heuristics and a black-box hyperparameter optimization approach.
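A sketch in the spirit of the ranked-reward idea referenced above: the raw objective (Ising energy) is converted to a binary reward by comparison against a percentile of recently seen solutions, so "good" is always defined relative to the agent's own recent performance. The exact rescaling used by R3 is simplified away here, and the percentile, buffer size, and synthetic energies are all placeholders.

```python
# Ranked-reward-style shaping: +1 if a solution beats a percentile of
# the recent buffer, -1 if it falls short, random sign on ties.
from collections import deque
import numpy as np

rng = np.random.default_rng(4)

class RankedReward:
    def __init__(self, percentile=75, buffer_size=100):
        self.percentile = percentile
        self.buffer = deque(maxlen=buffer_size)

    def __call__(self, energy):
        self.buffer.append(energy)
        threshold = np.percentile(self.buffer, 100 - self.percentile)
        if energy < threshold:           # lower Ising energy is better
            return 1.0
        if energy > threshold:
            return -1.0
        return float(rng.choice([1.0, -1.0]))  # break ties randomly

shaper = RankedReward()
for step in range(5):
    energy = rng.normal(loc=-10 - step, scale=1.0)   # toy improving episodes
    print(f"energy {energy:7.2f} -> reward {shaper(energy):+.0f}")
```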
