
The problem of sampling from the stationary distribution of a Markov chain finds widespread applications in a variety of fields. The time required for a Markov chain to converge to its stationary distribution is known as the classical mixing time. In this article, we deal with analog quantum algorithms for mixing. First, we provide an analog quantum algorithm that, given a Markov chain, allows us to sample from its stationary distribution in a time that scales as the sum of the square root of the classical mixing time and the square root of the classical hitting time. Our algorithm makes use of the framework of interpolated quantum walks and relies on Hamiltonian evolution in conjunction with von Neumann measurements. There also exists a different notion of quantum mixing: the problem of sampling from the limiting distribution of quantum walks, defined in a time-averaged sense. In this scenario, the quantum mixing time is defined as the time required to sample from a distribution that is close to this limiting distribution. Recently, we provided an upper bound on the quantum mixing time for Erd\H{o}s-R\'enyi random graphs [Phys. Rev. Lett. 124, 050501 (2020)]. Here, we also extend and expand upon our findings therein. Namely, we provide an intuitive understanding of the state-of-the-art random matrix theory tools used to derive our results. In particular, our analysis requires information about the macroscopic, mesoscopic and microscopic statistics of the eigenvalues of random matrices, which we highlight here. Furthermore, we provide numerical simulations that corroborate our analytical findings and extend this notion of mixing from simple graphs to any ergodic, reversible Markov chain.
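As a concrete illustration of this time-averaged notion of mixing, the following minimal sketch computes the limiting distribution of a continuous-time quantum walk on a small Erd\H{o}s-R\'enyi graph and compares it with a finite-time average. It assumes the adjacency matrix as the walk Hamiltonian and a non-degenerate spectrum; the graph size, initial state, and libraries (numpy, networkx) are illustrative choices, not prescribed by the paper.

```python
import numpy as np
import networkx as nx

# Continuous-time quantum walk with H = adjacency matrix.
# Assuming a non-degenerate spectrum, the time-averaged (limiting)
# probability of vertex v is  P_inf(v) = sum_k |<v|k>|^2 |<k|psi0>|^2.
G = nx.erdos_renyi_graph(n=8, p=0.5, seed=1)
H = nx.to_numpy_array(G)
lam, V = np.linalg.eigh(H)               # columns of V are eigenvectors |k>
psi0 = np.zeros(len(G)); psi0[0] = 1.0   # walker starts on vertex 0

overlaps = V.T @ psi0                    # <k|psi0>
P_inf = (np.abs(V) ** 2) @ (np.abs(overlaps) ** 2)

# Finite-time average (1/T) int_0^T |<v|e^{-iHt}|psi0>|^2 dt on a grid
ts = np.linspace(0.0, 200.0, 2000)
P_avg = np.zeros(len(G))
for t in ts:
    amp = V @ (np.exp(-1j * lam * t) * overlaps)  # e^{-iHt}|psi0>
    P_avg += np.abs(amp) ** 2 / len(ts)
print(np.max(np.abs(P_avg - P_inf)))     # gap shrinks as T grows
```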

Related Content

Quantum machine learning is an emerging field at the intersection of machine learning and quantum computing. Classical cross entropy plays a central role in machine learning; minimizing it is equivalent to maximizing likelihood, since it equals the negative log-likelihood. We define its quantum generalization, the quantum cross entropy, prove lower bounds for it, and investigate its relation to quantum fidelity. In the quantum case, when the quantum cross entropy is constructed from quantum data undisturbed by quantum measurements, this equivalence between minimizing cross entropy and maximizing likelihood holds. When we instead obtain the quantum cross entropy through an empirical density matrix based on measurement outcomes, it is lower-bounded by the negative log-likelihood. These two scenarios illustrate the information loss incurred by quantum measurements. We conclude that to achieve the goal of full quantum machine learning, it is crucial to utilize the deferred measurement principle.
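As a minimal sketch of these quantities, the snippet below takes the quantum cross entropy to be $-\mathrm{Tr}(\rho \log \sigma)$, a natural generalization assumed here for illustration (the paper's precise definition and conventions may differ), and checks that it reduces to the classical cross entropy for commuting (diagonal) states.

```python
import numpy as np
from scipy.linalg import logm

def classical_cross_entropy(p, q):
    # H(p, q) = -sum_i p_i log q_i
    return -np.sum(p * np.log(q))

def quantum_cross_entropy(rho, sigma):
    # Assumed form for illustration: S(rho, sigma) = -Tr(rho log sigma)
    return -np.real(np.trace(rho @ logm(sigma)))

# Commuting (diagonal) states recover the classical quantity
p, q = np.array([0.7, 0.3]), np.array([0.6, 0.4])
print(classical_cross_entropy(p, q))                  # ~0.632
print(quantum_cross_entropy(np.diag(p), np.diag(q)))  # same value
```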

Spiking Neural Networks (SNNs) are a type of neural network in which neurons communicate using only spikes. They are often presented as a low-power alternative to classical neural networks, but few works have proven these claims to be true. In this work, we present a metric to estimate the energy consumption of SNNs independently of any specific hardware. We then apply this metric to SNNs processing three different data types (static, dynamic and event-based) that are representative of real-world applications. As a result, all of our SNNs are 6 to 8 times more energy-efficient than their FNN counterparts.
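In this spirit, a simple hardware-independent proxy (not necessarily the paper's exact metric) counts synaptic operations, weighting the accumulates triggered by spikes against the multiply-accumulates of a dense network; the per-operation energies below are illustrative published 45 nm figures, not measurements from this work.

```python
# Assumed per-operation energies (illustrative 45 nm CMOS figures)
E_AC = 0.9e-12    # accumulate, joules
E_MAC = 4.6e-12   # multiply-accumulate, joules

def fnn_energy(layer_sizes):
    # Dense network: every connection performs one MAC per inference
    macs = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    return macs * E_MAC

def snn_energy(layer_sizes, spike_rate, timesteps):
    # Spiking network: only spike-driven synapses perform an accumulate
    synapses = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    return synapses * spike_rate * timesteps * E_AC

layers = [784, 128, 10]
print(fnn_energy(layers) / snn_energy(layers, spike_rate=0.05, timesteps=20))
```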

The research area of algorithms with predictions has seen recent success in showing how to incorporate machine learning into algorithm design so as to improve performance when the predictions are correct, while retaining worst-case guarantees when they are not. Most previous work has assumed that the algorithm has access to a single predictor. In practice, however, many machine learning methods are available, often with incomparable generalization guarantees, making it hard to pick the best method a priori. In this work we consider scenarios where multiple predictors are available to the algorithm, and the question is how best to utilize them. Ideally, we would like the algorithm's performance to depend on the quality of the best predictor. However, utilizing more predictions comes at a cost, since we now have to identify which prediction is the best. We study the use of multiple predictors for a number of fundamental problems, including matching, load balancing, and non-clairvoyant scheduling, all of which have been well studied in the single-predictor setting. For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance.
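As a toy illustration of the tension (emphatically not one of the paper's algorithms), the sketch below aggregates several job-size predictors for non-clairvoyant scheduling by taking the per-job median and scheduling shortest-predicted-first; the job distribution and noise levels are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_sizes = rng.exponential(1.0, size=20)
# Three predictors of varying, a-priori-unknown quality
preds = np.stack([true_sizes + rng.normal(0.0, s, 20) for s in (0.1, 0.5, 2.0)])

agg = np.median(preds, axis=0)              # robust per-job aggregate
order = np.argsort(agg)                     # shortest-predicted-first
total = np.cumsum(true_sizes[order]).sum()  # total completion time achieved
opt = np.cumsum(np.sort(true_sizes)).sum()  # clairvoyant SPT optimum
print(total / opt)                          # near 1 when the aggregate is accurate
```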

Stochastic versions of proximal methods have gained much attention in statistics and machine learning. These algorithms tend to admit simple, scalable forms and enjoy numerical stability via implicit updates. In this work, we propose and analyze a stochastic version of the recently proposed proximal distance algorithm, a class of iterative optimization methods that recover the solution of a desired constrained estimation problem as the penalty parameter $\rho \rightarrow \infty$. By uncovering connections to related stochastic proximal methods and interpreting the penalty parameter as the learning rate, we justify heuristics used in practical manifestations of the proximal distance method, establishing their convergence guarantees for the first time. Moreover, we extend recent theoretical devices to establish finite error bounds and a complete characterization of convergence rate regimes. We validate our analysis via a thorough empirical study, which also shows that, unsurprisingly, the proposed method outpaces batch versions on popular learning tasks.
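To make the recipe concrete, here is a hedged toy sketch of a stochastic proximal-distance-style iteration (not the authors' exact scheme): a single observation is sampled, and an implicit step is taken on its loss plus a squared-distance penalty toward the constraint set, with $\rho_k \rightarrow \infty$ and a decaying learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
theta_true = np.abs(rng.normal(size=d))        # feasible ground truth
y = X @ theta_true + 0.1 * rng.normal(size=n)
project = lambda v: np.maximum(v, 0.0)         # C = nonnegative orthant

theta = np.zeros(d)
for k in range(1, 20001):
    i = rng.integers(n)
    lr, rho = 1.0 / np.sqrt(k), 0.01 * k       # learning rate, growing penalty
    xi, yi = X[i], y[i]
    # Implicit update: argmin_t 0.5*(xi@t - yi)^2 + (rho/2)||t - P_C(theta)||^2
    #                  + (1/(2*lr))||t - theta||^2, solved in closed form
    a = rho + 1.0 / lr
    c = rho * project(theta) + theta / lr + yi * xi
    t = c / a
    t -= xi * (xi @ t) / (a + xi @ xi)         # Sherman-Morrison correction
    theta = t
print(np.round(theta, 2), np.round(theta_true, 2))
```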

Variational quantum algorithms have been introduced as a promising class of quantum-classical hybrid algorithms that can already be used with the noisy quantum computing hardware available today by employing parameterized quantum circuits. Considering the non-trivial nature of quantum circuit compilation and the subtleties of quantum computing, it is essential to verify that these parameterized circuits have been compiled correctly. Established equivalence checking procedures that handle parameter-free circuits already exist. However, no methodology capable of handling circuits with parameters has been proposed yet. This work fills this gap by showing that verifying the equivalence of parameterized circuits can be achieved in a purely symbolic fashion using an equivalence checking approach based on the ZX-calculus. At the same time, proofs of non-equivalence can be obtained efficiently with conventional methods by taking advantage of the degrees of freedom inherent to parameterized circuits. We implemented the corresponding methods and proved that the resulting methodology is complete. Experimental evaluations (using the entire parametric ansatz circuit library provided by Qiskit as benchmarks) demonstrate the efficacy of the proposed approach. The implementation is open source and publicly available as part of the equivalence checking tool QCEC (//github.com/cda-tum/qcec), which is part of the Munich Quantum Toolkit (MQT).
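A minimal usage sketch is shown below; it assumes the mqt.qcec Python package and its verify entry point (names per our understanding of the tool's documentation) and checks the textbook identity RZ(theta) RZ(phi) = RZ(theta + phi) symbolically.

```python
from qiskit.circuit import QuantumCircuit, Parameter
from mqt import qcec  # assumed import path for the QCEC Python bindings

theta, phi = Parameter("theta"), Parameter("phi")

qc1 = QuantumCircuit(1)
qc1.rz(theta, 0)
qc1.rz(phi, 0)

qc2 = QuantumCircuit(1)
qc2.rz(theta + phi, 0)     # RZ(theta)RZ(phi) = RZ(theta + phi)

result = qcec.verify(qc1, qc2)
print(result.equivalence)  # expected: equivalent, for all parameter values
```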

Solving large systems of equations is a challenge for modeling natural phenomena, such as simulating subsurface flow. To avoid systems that are intractable on current computers, it is often necessary to neglect information at small scales, an approach known as coarse-graining. For many practical applications, such as flow in porous, homogeneous materials, coarse-graining offers a sufficiently accurate approximation of the solution. Unfortunately, fractured systems cannot be accurately coarse-grained, as critical network topology exists at the smallest scales, including topology that can push the network across a percolation threshold. Therefore, new techniques are necessary to accurately model important fracture systems. Quantum algorithms for solving linear systems offer a theoretically exponential improvement over their classical counterparts, and in this work we introduce two quantum algorithms for fractured flow. The first algorithm, designed for future quantum computers that operate without error, has enormous potential, but we demonstrate that current hardware is too noisy for adequate performance. The second algorithm, designed to be noise resilient, already performs well for problems of small to medium size (order 10 to 1000 nodes), which we demonstrate experimentally and explain theoretically. We expect further improvements by leveraging quantum error mitigation and preconditioning.
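For context, the canonical HHL quantum linear-systems algorithm solves an $N \times N$ system with sparsity $s$ and condition number $\kappa$ to precision $\epsilon$ in time scaling roughly as

$$ O\!\left( \frac{\log(N)\, s^{2} \kappa^{2}}{\epsilon} \right), $$

whereas classical solvers scale at least linearly in $N$; this is the source of the theoretically exponential separation mentioned above. The two algorithms introduced in this work may have different concrete scalings.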

The paper proposes a decoupled numerical scheme for the time-dependent Ginzburg-Landau equations under the temporal gauge. For the order parameter and the magnetic potential, the discrete scheme adopts, respectively, the second-type N\'ed\'elec element and the linear element for spatial discretization, and, respectively, a fully linearized backward Euler method and a first-order exponential time differencing method for time discretization. The maximum bound principle for the order parameter and the energy dissipation law are proved in the discrete sense for this finite element-based scheme. This permits the use of an adaptive time-stepping method, which can significantly speed up long-time simulations compared to existing numerical schemes, especially for superconductors with complicated shapes. An error estimate is rigorously established in the fully discrete sense. Numerical examples verify the theoretical results of the proposed scheme and demonstrate the vortex motion of superconductors in an external magnetic field.
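For reference, a commonly used nondimensionalized form of the time-dependent Ginzburg-Landau equations under the temporal gauge (electric potential set to zero) is

$$ \frac{\partial \psi}{\partial t} + \left( \frac{i}{\kappa}\nabla + \mathbf{A} \right)^{2}\psi + \left( |\psi|^{2} - 1 \right)\psi = 0, \qquad \frac{\partial \mathbf{A}}{\partial t} + \nabla \times \nabla \times \mathbf{A} + \mathrm{Re}\left[ \bar{\psi}\left( \frac{i}{\kappa}\nabla + \mathbf{A} \right)\psi \right] = \nabla \times \mathbf{H}, $$

where $\psi$ is the order parameter, $\mathbf{A}$ the magnetic potential, $\kappa$ the Ginzburg-Landau parameter, and $\mathbf{H}$ the applied field; the paper's exact normalization may differ.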

With the increasing availability of large pre-trained language models (LMs) in Natural Language Processing (NLP), it becomes critical to assess their fit for a specific target task a priori, as fine-tuning the entire space of available LMs is computationally prohibitive and unsustainable. However, encoder transferability estimation has received little to no attention in NLP. In this paper, we propose to generate quantitative evidence to predict which LM, out of a pool of models, will perform best on a target task without having to fine-tune all candidates. We provide a comprehensive study of LM ranking for 10 NLP tasks spanning the two fundamental problem types of classification and structured prediction. We adopt the state-of-the-art Logarithm of Maximum Evidence (LogME) measure from Computer Vision (CV) and find that it positively correlates with final LM performance in 94% of the setups. In the first study of its kind, we further compare transferability measures with the de facto standard of human practitioner ranking, finding that evidence from quantitative metrics is more robust than pure intuition and can help identify unexpected LM candidates.
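To give a flavor of the LogME score, the sketch below computes the log-evidence of a Bayesian linear model on frozen LM features for fixed precisions; LogME itself maximizes this quantity over $(\alpha, \beta)$ via a fixed-point iteration, which is omitted here, and the synthetic features are purely illustrative.

```python
import numpy as np

def log_evidence(F, y, alpha=1.0, beta=1.0):
    # Log marginal likelihood of Bayesian linear regression (Bishop, PRML 3.86);
    # LogME maximizes this over (alpha, beta) per target dimension.
    n, d = F.shape
    A = alpha * np.eye(d) + beta * F.T @ F     # posterior precision
    m = beta * np.linalg.solve(A, F.T @ y)     # posterior mean
    resid = y - F @ m
    return (n / 2 * np.log(beta) + d / 2 * np.log(alpha) - n / 2 * np.log(2 * np.pi)
            - beta / 2 * resid @ resid - alpha / 2 * m @ m
            - 0.5 * np.linalg.slogdet(A)[1])

# Features from an informative vs. an uninformative "encoder" (synthetic)
rng = np.random.default_rng(0)
F_good = rng.normal(size=(200, 32))
y = F_good[:, 0] + 0.1 * rng.normal(size=200)
F_bad = rng.normal(size=(200, 32))
print(log_evidence(F_good, y) > log_evidence(F_bad, y))  # expect True
```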

We consider the sequential quantum channel discrimination problem using adaptive and non-adaptive strategies. In this setting, the number of uses of the underlying quantum channel is not fixed but is a random variable that is either bounded in expectation or bounded with high probability. We show that both types of error probabilities decrease to zero exponentially fast and that, when adaptive strategies are used, the rates are characterized by the measured relative entropy between two quantum channels, yielding a strictly larger region than that achievable by non-adaptive strategies. Allowing for quantum memory, we see that the optimal rates are given by the regularized channel relative entropy. Finally, we discuss achievable rates when repeated measurements via quantum instruments are allowed, and conjecture that the achievable rate region is not larger than that achievable with POVMs by connecting the result to the strong converse for the quantum channel Stein's lemma.
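For reference, the measured relative entropy appearing in these rates is standardly defined as

$$ D_{\mathrm{M}}(\rho \| \sigma) \;=\; \sup_{(\mathcal{X}, M)} D\big( P_{M,\rho} \,\big\|\, P_{M,\sigma} \big), $$

where the supremum is over finite outcome sets $\mathcal{X}$ and POVMs $M = \{ M_x \}_{x \in \mathcal{X}}$, $P_{M,\rho}(x) = \mathrm{Tr}(M_x \rho)$ is the measurement outcome distribution, and $D(\cdot\|\cdot)$ is the classical relative entropy.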

The field of quantum machine learning (QML) explores how quantum computers can be used to solve machine learning problems more efficiently. As an application of hybrid quantum-classical algorithms, it promises a potential quantum advantage in the near term. In this thesis, we use the ZXW-calculus to diagrammatically analyse two key problems that QML applications face. First, we discuss algorithms to compute gradients on quantum hardware that are needed to perform gradient-based optimisation for QML. Concretely, we give new diagrammatic proofs of the common 2- and 4-term parameter shift rules used in the literature. Additionally, we derive a novel, generalised parameter shift rule with 2n terms that is applicable to gates that can be represented with n parametrised spiders in the ZXW-calculus. Furthermore, to the best of our knowledge, we give the first proof of a conjecture by Anselmetti et al. by proving a no-go theorem that rules out more efficient alternatives to the 4-term shift rule. Second, we analyse the gradient landscape of quantum ans\"atze for barren plateaus using both empirical and analytical techniques. Concretely, we develop a tool that automatically calculates the variance of gradients and use it to detect likely barren plateaus in commonly used quantum ans\"atze. Furthermore, we formally prove the existence or absence of barren plateaus for a selection of ans\"atze using diagrammatic techniques from the ZXW-calculus.
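As a quick numerical sanity check of the standard 2-term rule (not the thesis's diagrammatic proof), the snippet below verifies $\partial_\theta \langle E \rangle = \tfrac{1}{2}\big( \langle E(\theta + \pi/2)\rangle - \langle E(\theta - \pi/2)\rangle \big)$ for the expectation $f(\theta) = \langle 0 | RX(\theta)^\dagger Z \, RX(\theta) | 0 \rangle = \cos\theta$.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def expval(theta):
    # f(theta) = <0| RX(theta)^dag Z RX(theta) |0> = cos(theta)
    RX = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X
    psi = RX @ np.array([1.0, 0.0], dtype=complex)
    return np.real(psi.conj() @ Z @ psi)

theta = 0.7
shift = (expval(theta + np.pi / 2) - expval(theta - np.pi / 2)) / 2
print(shift, -np.sin(theta))  # both equal -sin(theta), the exact gradient
```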
