
Finding a minimum is an essential component of mathematical models, and it plays an important role in many optimization problems. Dürr and Høyer proposed a quantum minimum searching algorithm (DHA) that achieves a quadratic speedup over classical methods, but only succeeds with a certain probability. In this paper, we propose an optimized quantum minimum searching algorithm with sure-success probability, which utilizes Grover-Long searching to implement optimal exact searching and a dynamic strategy to reduce the number of iterations. In addition, we optimize the oracle circuit by applying simplification rules to reduce the gate count. A performance evaluation covering the theoretical success rate and computational complexity shows that our algorithm achieves higher accuracy and efficiency than the DHA algorithm. Finally, a simulation experiment based on Cirq is performed to verify its feasibility.
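For intuition, here is a minimal classical sketch of the Dürr-Høyer threshold loop, with the Grover (or Grover-Long exact) search emulated by uniform sampling over the marked set; the function name and the iteration cap are illustrative and not taken from the paper.

```python
import random

def durr_hoyer_minimum(values, max_iters=None):
    """Classical skeleton of Durr-Hoyer minimum finding: keep a threshold
    index y and repeatedly 'search' for an index with a smaller value.
    The random.choice below stands in for measuring the output of an
    exact Grover search over the marked set {i : values[i] < values[y]}."""
    n = len(values)
    # Loop cap loosely modeled on the 22.5*sqrt(N) bound in the original
    # analysis; the threshold typically stabilizes after O(log n) updates.
    max_iters = max_iters if max_iters is not None else int(22.5 * n**0.5) + 1
    y = random.randrange(n)                  # random initial threshold
    for _ in range(max_iters):
        marked = [i for i in range(n) if values[i] < values[y]]
        if not marked:                       # y already indexes the minimum
            break
        y = random.choice(marked)            # emulated Grover measurement
    return y

print(durr_hoyer_minimum([7, 3, 9, 1, 5, 8, 2, 6]))   # prints 3
```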

Related content

Learning tasks play an increasingly prominent role in quantum information and computation. They range from fundamental problems such as state discrimination and metrology, through the framework of quantum probably approximately correct (PAC) learning, to the recently proposed shadow variants of state tomography. However, the many directions of quantum learning theory have so far evolved separately. We propose a general mathematical formalism for describing quantum learning by training on classical-quantum data and then testing how well the learned hypothesis generalizes to new data. In this framework, we prove bounds on the expected generalization error of a quantum learner in terms of classical and quantum information-theoretic quantities measuring how strongly the learner's hypothesis depends on the specific data seen during training. To achieve this, we use tools from quantum optimal transport and quantum concentration inequalities to establish non-commutative versions of the decoupling lemmas that underlie recent information-theoretic generalization bounds for classical machine learning. Our framework encompasses and gives intuitively accessible generalization bounds for a variety of quantum learning scenarios, such as quantum state discrimination, PAC learning quantum states, quantum parameter estimation, and quantumly PAC learning classical functions. Thereby, our work lays a foundation for a unifying quantum information-theoretic perspective on quantum learning.
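As a point of reference for the kind of bound being lifted to the quantum setting, a standard classical information-theoretic generalization bound (due to Xu and Raginsky, 2017; cited here for context, not a result of the paper itself) reads:

```latex
% For a loss that is \sigma-sub-Gaussian under the data distribution,
% a learner that outputs hypothesis W after seeing n i.i.d. samples S
% has expected generalization error controlled by the mutual
% information I(W; S) accumulated during training:
\[
  \bigl|\mathbb{E}[\operatorname{gen}(W, S)]\bigr|
    \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W; S)}{n}} .
\]
```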

We consider the general problem of Bayesian binary regression and introduce a new class of distributions, the Perturbed Unified Skew Normal (pSUN, henceforth), which generalizes the Unified Skew-Normal (SUN) class. We show that the new class is conjugate to any binary regression model, provided that the link function can be expressed as a scale mixture of Gaussian densities. We discuss the popular logit case in detail, and we show that, when a logistic regression model is combined with a Gaussian prior, posterior summaries such as cumulants and normalizing constants can easily be obtained through an importance sampling approach, opening the way to straightforward variable selection procedures. For more general priors, the proposed methodology is based on a simple Gibbs sampler algorithm. We also claim that, in the p > n case, the proposed methodology performs better, both in terms of mixing and accuracy, than existing methods. We illustrate the performance through several simulation studies and two data analyses.
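To make the importance-sampling idea concrete, here is a minimal sketch for Bayesian logistic regression with a Gaussian prior, using the prior itself as the proposal to estimate the log normalizing constant; the function name and the proposal choice are illustrative, not the paper's pSUN machinery.

```python
import numpy as np
from scipy.special import expit

def log_marginal_likelihood_is(X, y, prior_cov, n_draws=50_000, seed=0):
    """Importance-sampling estimate of the log normalizing constant of a
    logistic regression posterior with a N(0, prior_cov) prior, using the
    prior as the proposal (so the weights reduce to the likelihoods)."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    betas = rng.multivariate_normal(np.zeros(p), prior_cov, size=n_draws)
    eta = X @ betas.T                       # (n, n_draws) linear predictors
    s = (2 * y - 1)[:, None] * eta          # signed predictors
    loglik = -np.logaddexp(0.0, -s).sum(axis=0)   # sum_i log sigmoid(s_i)
    m = loglik.max()                        # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(loglik - m)))

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = (rng.uniform(size=40) < expit(X @ np.array([1.0, -0.5, 0.0]))).astype(int)
print(log_marginal_likelihood_is(X, y, np.eye(3)))
```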

A numerical method is proposed for the simulation of composite open quantum systems. It is based on Lindblad master equations and adiabatic elimination. Each subsystem is assumed to converge exponentially towards a stationary subspace, slightly impacted by some decoherence channels and weakly coupled to the other subsystems. The method is based on a perturbation analysis with an asymptotic expansion and exploits the formulation of the slow dynamics with reduced dimension. It relies on the invariant operators of the local and nominal dissipative dynamics attached to each subsystem. The second-order expansion can be computed with only local numerical calculations, avoiding computations on the tensor-product Hilbert space attached to the full system. This numerical method is particularly well suited for autonomous quantum error correction schemes. Simulations of such reduced models agree with full-model simulations for typical gates acting on one and two cat-qubits (Z, ZZ and CNOT) when the mean photon number of each cat-qubit is less than 8. For larger mean photon numbers and gates with three cat-qubits (ZZZ and CCNOT), full-model simulations are almost impossible, whereas reduced-model simulations remain accessible. In particular, they capture both the dominant phase-flip error rate and the very small bit-flip error rate with its exponential suppression versus the mean photon number.
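For scale, a full-model Lindblad simulation of a single cat mode is easy to write down; a minimal QuTiP sketch under illustrative parameters follows. The point of the reduced models is that such simulations become intractable once several modes are tensored together.

```python
import numpy as np
import qutip as qt

# Full-model simulation of one cat mode: the two-photon dissipator
# L = a^2 - alpha^2 stabilizes the manifold spanned by |alpha>, |-alpha>.
# (Illustrative parameters, not those of the paper.)
N, alpha = 20, 2.0                          # Fock cutoff, cat amplitude
a = qt.destroy(N)
L = a**2 - alpha**2                         # engineered two-photon loss
rho0 = qt.ket2dm(qt.coherent(N, alpha + 0.4))    # slightly displaced start
times = np.linspace(0.0, 4.0, 81)
result = qt.mesolve(qt.qzero(N), rho0, times, c_ops=[L])
fid = qt.fidelity(result.states[-1], qt.ket2dm(qt.coherent(N, alpha)))
print(f"overlap with the target coherent state: {fid:.3f}")
```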

Weights are geometrical degrees of freedom that make it possible to generalise Lagrangian finite elements. They are defined through integrals over specific supports, well understood in terms of differential forms and integration, and lie within the framework of finite element exterior calculus. In this work we exploit this formalism with the goal of identifying supports that are appealing for finite element approximation. To do so, we study the related parametric matrix-sequences, with the matrix order tending to infinity as the mesh size tends to zero. We describe the conditioning and the global spectral behavior in terms of the standard Toeplitz machinery and GLT theory, leading to the identification of optimal choices for the weights. Moreover, we propose and test ad hoc preconditioners, depending on the discretization parameters, in connection with the conjugate gradient method. The model problem we consider is a one-dimensional Laplacian, with both constant and non-constant coefficients. Numerical visualizations and experimental tests are reported and critically discussed, demonstrating the advantages of weight-induced bases over standard Lagrangian ones. Open problems and future steps are listed in the concluding section, especially regarding the multidimensional case.
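As a concrete instance of the matrix-sequences in question, the classical P1 Lagrangian discretization of the one-dimensional constant-coefficient Laplacian yields the Toeplitz matrix tridiag(-1, 2, -1), whose conditioning degrades like O(n^2). The sketch below illustrates the setting, with an incomplete LU factorization standing in for the paper's symbol-tailored preconditioners.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spilu, LinearOperator

def stiffness(n):
    """P1 stiffness matrix of the 1D Laplacian: tridiag(-1, 2, -1)."""
    return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

n = 400
A = stiffness(n)
print("cond(A) ~", np.linalg.cond(A.toarray()))   # grows like n^2

# Preconditioned conjugate gradient on the model problem.
b = np.ones(n)
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)
x, info = cg(A, b, M=M)
print("PCG converged:", info == 0)
```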

The optimization of open-loop shallow geothermal systems, which includes both design and operational aspects, is an important research area aimed at improving their efficiency and sustainability and at effectively managing groundwater as a shallow geothermal resource. This paper investigates various approaches to the optimization problems arising from these research and implementation questions about groundwater heat pump (GWHP) systems. The identified optimization approaches are thoroughly analyzed based on criteria such as computational cost and applicability. Moreover, a novel classification scheme is introduced that categorizes the approaches according to the type of groundwater simulation model and the optimization algorithm used. Simulation models are divided into two types, numerical and simplified (analytical or data-driven) models, while optimization algorithms are divided into gradient-based and derivative-free algorithms. Finally, a comprehensive review of existing approaches in the literature is provided, highlighting their strengths and limitations and offering recommendations both for the use of existing approaches and for the development of new, improved ones in this field.

We propose three test criteria, each appropriate for testing, respectively, the hypotheses of symmetry, homogeneity, and independence with multivariate data. All quantities share the common feature of involving weighted-type distances between characteristic functions, and they are convenient from the computational point of view if the weight function is properly chosen. The asymptotic behavior of the tests under the null hypothesis is investigated, and numerical studies are conducted in order to examine the performance of the criteria in finite samples.
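For the homogeneity case, a Gaussian weight makes the weighted L2 distance between empirical characteristic functions available in closed form; the following sketch computes the resulting two-sample statistic under that particular weight choice, which is one plausible instance rather than necessarily the authors' exact one.

```python
import numpy as np
from scipy.spatial.distance import cdist

def cf_homogeneity_stat(X, Y):
    """Weighted L2 distance between empirical characteristic functions
    of samples X (n x d) and Y (m x d), with a standard Gaussian weight.
    The weight makes the integral available in closed form, since
    int exp(i t.(u - v)) N(t; 0, I) dt = exp(-|u - v|^2 / 2)."""
    Kxx = np.exp(-cdist(X, X, "sqeuclidean") / 2.0)
    Kyy = np.exp(-cdist(Y, Y, "sqeuclidean") / 2.0)
    Kxy = np.exp(-cdist(X, Y, "sqeuclidean") / 2.0)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
Y = rng.normal(loc=0.8, size=(120, 2))
print(cf_homogeneity_stat(X, X[:50]))   # same law: near 0
print(cf_homogeneity_stat(X, Y))        # shifted law: clearly positive
```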

Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex constrained optimization that sequentially minimizes majorizing surrogates of the objective function in each block coordinate while the other coordinates are held fixed. BMM encompasses a large class of optimization algorithms such as block coordinate descent and its proximal-point variant, expectation-maximization, and block projected gradient descent. We establish that for general constrained nonconvex optimization, BMM with strongly convex surrogates can produce an $\epsilon$-stationary point within $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$ iterations and asymptotically converges to the set of stationary points. Furthermore, we propose a trust-region variant of BMM that can handle surrogates that are only convex and still obtain the same iteration complexity and asymptotic stationarity. These results hold robustly even when the convex sub-problems are solved inexactly, as long as the optimality gaps are summable. As an application, we show that a regularized version of the celebrated multiplicative update algorithm of Lee and Seung for nonnegative matrix factorization has iteration complexity $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$. The same result holds for a wide class of regularized nonnegative tensor decomposition algorithms, as well as for the classical block projected gradient descent algorithm. These theoretical results are validated through various numerical experiments.
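For reference, the unregularized form of the Lee-Seung multiplicative update is sketched below; each update is a BMM step that minimizes a majorizing surrogate in one block while the other is held fixed. The paper analyzes a regularized variant, which this sketch omits.

```python
import numpy as np

def nmf_multiplicative(X, r, n_iters=500, eps=1e-12, seed=0):
    """Lee-Seung multiplicative updates for nonnegative matrix
    factorization, X ~ W @ H, under the squared Frobenius objective.
    Each update multiplies a factor elementwise by a ratio that keeps
    it nonnegative and never increases the objective."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.uniform(size=(n, r))
    H = rng.uniform(size=(r, m))
    for _ in range(n_iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # block update for H
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # block update for W
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(30, 20)))
W, H = nmf_multiplicative(X, r=5)
print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```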

A new linear relaxation system for nonconservative hyperbolic systems is introduced, in which a nonlocal source term accounts for the nonconservative product of the original system. Using an asymptotic analysis, the relaxation limit and its stability are investigated. It is shown that the path-conservative Lax-Friedrichs scheme arises from a discrete limit of an implicit-explicit scheme for the relaxation system. The relaxation approach is further employed to couple two nonconservative systems at a static interface. A coupling strategy motivated by conservative Kirchhoff conditions is introduced, and a corresponding Riemann solver is provided. A fully discrete scheme for coupled nonconservative products is derived and studied in terms of path-conservation. Numerical experiments applying the approach to a coupled model of vascular blood flow are presented.
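To fix ideas, the classical conservative Lax-Friedrichs update, which the path-conservative scheme generalizes by replacing the flux difference with a path integral of the nonconservative product, can be sketched as follows; it is illustrated on Burgers' equation rather than the blood-flow model.

```python
import numpy as np

def lax_friedrichs(u0, flux, dx, dt, n_steps):
    """Conservative Lax-Friedrichs scheme on a periodic grid:
      u_j^{n+1} = (u_{j-1} + u_{j+1}) / 2
                  - dt / (2 dx) * (f(u_{j+1}) - f(u_{j-1}))."""
    u = u0.copy()
    for _ in range(n_steps):
        up, um = np.roll(u, -1), np.roll(u, 1)   # periodic neighbors
        u = 0.5 * (up + um) - dt / (2.0 * dx) * (flux(up) - flux(um))
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.sin(2 * np.pi * x) + 1.5
dx = x[1] - x[0]
dt = 0.4 * dx / np.max(np.abs(u0))               # CFL-limited time step
u = lax_friedrichs(u0, lambda v: 0.5 * v**2, dx, dt, 100)  # Burgers flux
```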

Sparse polynomial approximation has become indispensable for approximating smooth, high- or infinite-dimensional functions from limited samples. This is a key task in computational science and engineering, e.g., surrogate modelling in uncertainty quantification where the function is the solution map of a parametric or stochastic differential equation (DE). Yet, sparse polynomial approximation lacks a complete theory. On the one hand, there is a well-developed theory of best $s$-term polynomial approximation, which asserts exponential or algebraic rates of convergence for holomorphic functions. On the other, there are increasingly mature methods such as (weighted) $\ell^1$-minimization for computing such approximations. While the sample complexity of these methods has been analyzed with compressed sensing, whether they achieve best $s$-term approximation rates is not fully understood. Furthermore, these methods are not algorithms per se, as they involve exact minimizers of nonlinear optimization problems. This paper closes these gaps. Specifically, we consider the following question: are there robust, efficient algorithms for computing approximations to finite- or infinite-dimensional, holomorphic and Hilbert-valued functions from limited samples that achieve best $s$-term rates? We answer this affirmatively by introducing algorithms and theoretical guarantees that assert exponential or algebraic rates of convergence, along with robustness to sampling, algorithmic, and physical discretization errors. We tackle both scalar- and Hilbert-valued functions, this being key to parametric or stochastic DEs. Our results involve significant developments of existing techniques, including a novel restarted primal-dual iteration for solving weighted $\ell^1$-minimization problems in Hilbert spaces. Our theory is supplemented by numerical experiments demonstrating the efficacy of these algorithms.
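As a simple point of comparison for the optimization component, a plain proximal-gradient (ISTA) solver for the weighted $\ell^1$ problem is sketched below; the paper's restarted primal-dual iteration replaces this with a method carrying accuracy guarantees, so this sketch only fixes the objective being solved.

```python
import numpy as np

def weighted_ista(A, y, weights, lam=1e-2, n_iters=2000):
    """Proximal gradient (ISTA) for the weighted l1 problem
        min_x 0.5 * ||A x - y||_2^2 + lam * sum_j w_j |x_j|,
    whose prox is coordinatewise soft-thresholding at level lam*w_j/L."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step
        t = lam * weights / L                # per-coordinate thresholds
        x = np.sign(g) * np.maximum(np.abs(g) - t, 0.0)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x_true = np.zeros(200); x_true[[3, 50, 117]] = [1.0, -2.0, 0.5]
y = A @ x_true + 1e-3 * rng.normal(size=60)
x_hat = weighted_ista(A, y, weights=np.ones(200))
print("support recovered:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```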

Nonlinear differential equations exhibit rich phenomena in many fields but are notoriously challenging to solve. Recently, Liu et al. [1] demonstrated the first efficient quantum algorithm for dissipative quadratic differential equations under the condition $R < 1$, where $R$ measures the ratio of nonlinearity to dissipation using the $\ell_2$ norm. Here we develop an efficient quantum algorithm based on [1] for reaction-diffusion equations, a class of nonlinear partial differential equations (PDEs). To achieve this, we improve upon the Carleman linearization approach introduced in [1] to obtain a faster convergence rate under the condition $R_D < 1$, where $R_D$ measures the ratio of nonlinearity to dissipation using the $\ell_{\infty}$ norm. Since $R_D$ is independent of the number of spatial grid points $n$ while $R$ increases with $n$, the criterion $R_D < 1$ is significantly milder than $R < 1$ for high-dimensional systems and can remain convergent under grid refinement when approximating PDEs. As applications of our quantum algorithm, we consider the Fisher-KPP and Allen-Cahn equations, which have interpretations in classical physics. In particular, we show how to estimate the mean square kinetic energy of the solution by postprocessing the quantum state that encodes it to extract derivative information.
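To illustrate Carleman linearization on the simplest possible instance, the sketch below linearizes the scalar quadratic ODE $u' = au + bu^2$ (the reaction part of a Fisher-KPP-type equation) by tracking the monomials $y_k = u^k$; the parameters are chosen so the nonlinearity-to-dissipation ratio is below one, mirroring the spirit of the $R_D < 1$ condition.

```python
import numpy as np
from scipy.linalg import expm

def carleman_matrix(a, b, N):
    """Carleman linearization of u' = a u + b u^2, truncated at order N.
    With y_k = u^k, one has y_k' = k a y_k + k b y_{k+1}, so the
    truncated system y' = A y is upper bidiagonal."""
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        A[k - 1, k - 1] = k * a
        if k < N:
            A[k - 1, k] = k * b
    return A

a, b, u0, T = -1.0, 0.5, 0.8, 2.0        # dissipative: |b * u0 / a| < 1
N = 8                                     # truncation order
y0 = np.array([u0 ** k for k in range(1, N + 1)])
u_carleman = (expm(carleman_matrix(a, b, N) * T) @ y0)[0]
u_exact = a * u0 / ((a + b * u0) * np.exp(-a * T) - b * u0)
print(u_carleman, u_exact)                # truncated vs. exact solution
```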
