
Phase estimation is a quantum algorithm for measuring the eigenvalues of a Hamiltonian. We propose and rigorously analyse a randomized phase estimation algorithm with two distinctive features. First, our algorithm has complexity independent of the number of terms L in the Hamiltonian. Second, unlike previous L-independent approaches, such as those based on qDRIFT, all sources of error in our algorithm can be suppressed by collecting more data samples, without increasing the circuit depth.
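For context, here is a minimal sketch of the qDRIFT-style random sampling that underlies such L-independent approaches: each step draws a single Hamiltonian term with probability proportional to its coefficient magnitude, so no circuit ever touches all L terms at once. This is illustrative only, not the paper's algorithm; `coeffs` holds the coefficients c_j of an assumed decomposition H = sum_j c_j H_j.

```python
import numpy as np

def qdrift_schedule(coeffs, n_steps, time, rng=None):
    # Sample a qDRIFT-style product formula: step k applies
    # exp(-i * sign(c_j) * tau * H_j) for a term j drawn with
    # probability |c_j| / lambda, independently of L.
    rng = np.random.default_rng() if rng is None else rng
    lam = np.abs(coeffs).sum()                 # lambda = sum_j |c_j|
    probs = np.abs(coeffs) / lam
    idx = rng.choice(len(coeffs), size=n_steps, p=probs)
    tau = lam * time / n_steps                 # fixed per-step angle
    return idx, np.sign(coeffs[idx]) * tau     # term indices and angles

# example: a three-term Hamiltonian, 100 random steps over unit time
terms, angles = qdrift_schedule(np.array([0.8, -0.3, 0.1]), 100, 1.0)
```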


We establish estimates on the error made by the Deep Ritz Method for elliptic problems on the space $H^1(\Omega)$ with different boundary conditions. For Dirichlet boundary conditions, we estimate the error when the boundary values are approximately enforced through the boundary penalty method. Our results apply to arbitrary and in general nonlinear classes $V\subseteq H^1(\Omega)$ of ansatz functions and estimate the error in terms of the optimization accuracy, the approximation capabilities of the ansatz class and -- in the case of Dirichlet boundary values -- the penalisation strength $\lambda$. For non-essential boundary conditions, the error of the Ritz method decays at the same rate as the approximation rate of the ansatz classes. For essential boundary conditions, given an approximation rate of $r$ in $H^1(\Omega)$ and an approximation rate of $s$ in $L^2(\partial\Omega)$ for the ansatz classes, the optimal decay rate of the estimated error is $\min(s/2, r)$, achieved by choosing $\lambda_n\sim n^{s}$. We discuss the implications for ansatz classes given by ReLU networks and the relation to existing estimates for finite element functions.
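As a concrete instance, a hedged Monte Carlo sketch of the boundary-penalized energy for a Poisson-type model problem $-\Delta u = f$ with Dirichlet data $g$: the callables `u`, `grad_u`, `f`, `g` and the sample sets are placeholders, and volume/surface factors are absorbed into the averages for brevity.

```python
import numpy as np

def penalized_ritz_energy(u, grad_u, f, g, x_in, x_bd, lam):
    # Monte Carlo estimate of the boundary-penalized Dirichlet energy
    #   E_lam(u) = int_Omega ( 0.5*|grad u|^2 - f*u ) dx
    #            + lam * int_{dOmega} (u - g)^2 ds
    # x_in: samples in Omega, x_bd: samples on the boundary
    # (measure factors omitted for simplicity in this sketch).
    interior = 0.5 * (grad_u(x_in) ** 2).sum(axis=1) - f(x_in) * u(x_in)
    boundary = (u(x_bd) - g(x_bd)) ** 2
    return interior.mean() + lam * boundary.mean()
```

Minimizing this energy over the ansatz class $V$, with $\lambda_n \sim n^s$ as in the estimates above, trades interior accuracy against enforcement of the boundary values.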

We apply digitized Quantum Annealing (QA) and the Quantum Approximate Optimization Algorithm (QAOA) to a paradigmatic task of supervised learning in artificial neural networks: the optimization of synaptic weights for the binary perceptron. At variance with the usual QAOA applications to MaxCut or to quantum spin-chain ground-state preparation, the classical Hamiltonian is characterized by highly non-local multi-spin interactions. Yet, we provide evidence for the existence of optimal smooth solutions for the QAOA parameters, which are transferable among typical instances of the same problem, and we demonstrate numerically an enhanced performance of QAOA over traditional QA. We also investigate the role of the QAOA optimization landscape geometry in this problem, showing that the detrimental effect of a gap-closing transition encountered in QA also negatively affects the performance of our implementation of QAOA.
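A self-contained toy of this setup, as a sketch under assumed illustration choices (problem sizes, initial parameters, and the COBYLA optimizer are all arbitrary): the classical cost counts violated patterns of a binary perceptron, which is exactly the kind of non-local multi-spin Hamiltonian described above, and a depth-p statevector QAOA is optimized over $(\gamma, \beta)$.

```python
import numpy as np
from scipy.optimize import minimize

# Toy binary-perceptron instance: N spins, P random +/-1 patterns
# (labels absorbed into the patterns); cost = number of violated patterns.
N, P = 8, 4
rng = np.random.default_rng(0)
xi = rng.choice([-1, 1], size=(P, N))

# All 2^N spin configurations and the diagonal classical Hamiltonian
configs = 1 - 2 * ((np.arange(2**N)[:, None] >> np.arange(N)) & 1)
cost = (xi @ configs.T <= 0).sum(axis=0).astype(float)

def apply_mixer(state, beta):
    # exp(-i*beta*X) on every qubit, via tensor reshaping
    c, s = np.cos(beta), -1j * np.sin(beta)
    for q in range(N):
        psi = state.reshape(2**q, 2, 2**(N - 1 - q))
        state = (c * psi + s * psi[:, ::-1, :]).reshape(-1)
    return state

def qaoa_energy(params, p):
    gammas, betas = params[:p], params[p:]
    state = np.full(2**N, 2**(-N / 2), dtype=complex)  # uniform superposition
    for g, b in zip(gammas, betas):
        state = np.exp(-1j * g * cost) * state         # cost phase separator
        state = apply_mixer(state, b)
    return float(np.real(np.vdot(state, cost * state)))

p = 3
res = minimize(qaoa_energy, 0.1 * np.ones(2 * p), args=(p,), method="COBYLA")
print("QAOA energy:", res.fun)
```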

Scattered data fitting is a frequently encountered problem: reconstructing an unknown function from given scattered data. Radial basis function (RBF) methods have proven highly effective for this problem. We describe two quantum algorithms to efficiently fit scattered data, based on globally and compactly supported RBFs respectively. For the globally supported RBF method, the core of the quantum algorithm relies on using coherent states to calculate the radial functions and a nonsparse matrix exponentiation technique for efficiently performing a matrix inversion; a quadratic speedup in the number of data points is achieved over the classical algorithms. For the compactly supported RBF method, we mainly use the HHL algorithm as a subroutine to design an efficient quantum procedure that runs in time logarithmic in the number of data points, achieving an exponential improvement over the classical methods.
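For reference, the classical globally supported RBF fit that the first quantum algorithm accelerates looks like the following sketch (a Gaussian kernel is chosen here as one example of a globally supported RBF; the linear solve is the step that the nonsparse-exponentiation and HHL machinery replaces).

```python
import numpy as np

def rbf_fit(centers, values, eps):
    # centers: (n, d) scattered data sites, values: (n,) samples
    # Gaussian (globally supported) kernel matrix
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    K = np.exp(-eps * d2)
    # interpolation weights: the matrix inversion the quantum routine speeds up
    w = np.linalg.solve(K, values)
    def predict(x):
        # evaluate the fitted function at query points x: (m, d)
        k = np.exp(-eps * ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1))
        return k @ w
    return predict
```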

An important theme in modern inverse problems is the reconstruction of time-dependent data from only finitely many measurements. To obtain satisfactory reconstruction results in this setting, it is essential to strongly exploit temporal consistency between the different measurement times. The strongest consistency can be achieved by reconstructing data directly in phase space, the space of positions and velocities. However, this space is usually too high-dimensional for feasible computations. We introduce a novel dimension reduction technique, based on projections of phase space onto lower-dimensional subspaces, which provably circumvents this curse of dimensionality: indeed, in the exemplary framework of superresolution we prove that known exact reconstruction results remain valid after dimension reduction, and we additionally prove new error estimates, in optimal transport metrics, for reconstructions from noisy data that are of the same quality as one would obtain in the non-dimension-reduced case.
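To make the phase-space picture concrete, a minimal sketch under an assumed linear-motion model (an illustration, not the paper's construction): a snapshot at time t observes x_i + t*v_i, i.e. the projection of the phase-space point (x_i, v_i) onto the one-dimensional subspace with direction (1, t).

```python
import numpy as np

# toy phase space: positions x and velocities v of three point sources
x = np.array([0.1, 0.4, 0.7])
v = np.array([0.3, -0.2, 0.1])
times = np.array([0.0, 0.5, 1.0])

# the snapshot at time t sees x + t*v: one low-dimensional
# projection of phase space per measurement time
snapshots = x[None, :] + times[:, None] * v[None, :]  # shape (times, sources)
```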

Consider a matrix $A \in \mathbb{R}^{m \times n}$ of rank $k$ with singular value decomposition $A = U_{A}\Sigma_{A} V_{A}^{T}$, where $U_{A} \in \mathbb{R}^{m \times k}$ and $V_{A} \in \mathbb{R}^{n \times k}$ have orthonormal columns and $\Sigma_{A} \in \mathbb{R}^{k \times k}$ is diagonal. The statistical leverage scores of $A$ are the squared row-norms $\ell_{i} = \|(U_{A})_{i,:}\|_2^2$ for $i \in [m]$, and the matrix coherence is the largest statistical leverage score. These quantities play an important role in machine learning algorithms such as matrix completion and Nystr\"{o}m-based low-rank matrix approximation, as well as in large-scale statistical data analysis. The best known classical approximation algorithm runs in time $O((mn + n^3)\log m)$ [P. Drineas, M. Magdon-Ismail, M. W. Mahoney and D. P. Woodruff. Fast approximation of matrix coherence and statistical leverage. J. Mach. Learn. Res., 13 (2012): 3475-3506]. In this work, inspired by recent developments in dequantization techniques, we propose a novel fast classical algorithm for approximating the statistical leverage scores. Our algorithm has query and time complexity $O\left({\rm poly} \left(k, \kappa, \frac{1}{\epsilon}, \frac{1}{\delta}, \log(mn)\right) \right)$, where $\kappa$ is the condition number of $A$ and $\delta$ is the failure probability.
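For concreteness, the exact quantities being approximated can be computed directly from a full SVD, at the dense classical cost the fast algorithms aim to beat; a minimal sketch:

```python
import numpy as np

def leverage_scores(A, k):
    # exact statistical leverage scores from a rank-k SVD:
    # ell_i = || (U_A)_{i,:} ||_2^2 ; the coherence is their maximum
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    ell = (U[:, :k] ** 2).sum(axis=1)
    return ell, ell.max()

rng = np.random.default_rng(0)
ell, coherence = leverage_scores(rng.standard_normal((100, 20)), k=20)
```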

Sparse representation of real-life images is a highly effective approach in imaging applications such as denoising. In recent years, with the growth of computing power, data-driven strategies that exploit the redundancy within patches extracted from one or several images to increase sparsity have become more prominent. This paper presents a novel image denoising algorithm exploiting such an image-dependent basis, inspired by quantum many-body theory. Based on patch analysis, similarity measures in a local image neighborhood are formalized through a term akin to interaction in quantum mechanics that can efficiently preserve the local structures of real images. The versatile nature of this adaptive basis extends its scope to image-independent and image-dependent noise scenarios without any adjustment. We carry out a rigorous comparison with contemporary methods to demonstrate the denoising capability of the proposed algorithm regardless of the image characteristics, noise statistics and intensity. We analyze the hyperparameters and their respective effects on the denoising performance, together with automated rules for selecting values close to the optimal ones when ground truth is not available. Finally, we show the ability of our approach to deal with practical image denoising problems, such as despeckling of medical ultrasound images.
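A loose sketch of the adaptive-basis idea only, not the paper's algorithm: a patch-similarity operator plays the role of the interaction term, and its dominant eigenvectors form an image-dependent basis onto which the noisy patches are projected. Patch extraction and reassembly are omitted, and `sigma` and `keep` are hypothetical hyperparameters.

```python
import numpy as np

def adaptive_basis_project(patches, keep, sigma):
    # patches: (n, d) rows of flattened image patches
    # interaction-like pairwise similarity between patches
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    # image-dependent basis: dominant eigenvectors of the similarity operator
    vals, vecs = np.linalg.eigh(W)
    basis = vecs[:, -keep:]
    # project the noisy patches onto the adaptive basis
    return basis @ (basis.T @ patches)
```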

We propose a new estimation method for topic models that is not a variation on the existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of the word-topic matrix A, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p) and number of topics (K); both p and K are allowed to grow with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than the current ones, even though we start with the computational and theoretical disadvantage of not knowing the correct number of topics K, while providing the competing methods with its correct value in our simulations.
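To fix the notation, a toy generator for this setup (sizes are illustrative; A is the p x K word-topic matrix, W the topic-document weights, and N_i the individual document lengths):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, K = 100, 50, 3                          # documents, dictionary, topics
A = rng.dirichlet(np.ones(p), size=K).T       # p x K word-topic matrix
W = rng.dirichlet(np.ones(K), size=n).T       # K x n topic-document weights
N = rng.integers(50, 200, size=n)             # document lengths N_i
# observed word counts: document i draws N_i words from the mixture A @ W[:, i]
X = np.stack([rng.multinomial(N[i], A @ W[:, i]) for i in range(n)])
```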

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
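The local smoothing at the heart of DRS can be sketched as follows: with Gaussian $Z$, the smoothed function $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ is differentiable even when $f$ is not, and its gradient admits the standard Monte Carlo estimator below. This is only the generic smoothing ingredient, not the full decentralized protocol.

```python
import numpy as np

def smoothed_grad(f, x, gamma, n_samples, rng=None):
    # Monte Carlo gradient of f_gamma(x) = E[f(x + gamma*Z)], Z ~ N(0, I):
    # by Stein's identity, grad f_gamma(x) = E[f(x + gamma*Z) * Z] / gamma
    rng = np.random.default_rng() if rng is None else rng
    Z = rng.standard_normal((n_samples, x.size))
    vals = np.array([f(x + gamma * z) for z in Z])
    return (vals[:, None] * Z).mean(axis=0) / gamma

# example on a non-smooth convex function, the l1 norm
g = smoothed_grad(lambda y: np.abs(y).sum(), np.ones(5), 0.1, 10_000)
```

Replacing `vals` with `vals - f(x)` is a common control variate that reduces the estimator's variance without changing its mean.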

Seam-cutting and seam-driven techniques have proven effective for handling imperfect image series in image stitching. Generally, seam-driven methods use seam-cutting to find the best seam among one or finitely many alignment hypotheses, based on a predefined seam quality metric. However, the quality metrics in most methods are defined to measure the average performance of the pixels on the seam, without considering the relevance and variance among them. As a result, the seam with the minimal measure may not be optimal to human perception (perception-inconsistent). In this paper, we propose a novel coarse-to-fine seam estimation method that applies the evaluation in a different way. For pixels on the seam, we develop a patch-point evaluation algorithm that concentrates on the correlation and variation among them. The evaluations are then used to recalculate the difference map of the overlapping region and to reestimate the stitching seam. This evaluation-reestimation procedure iterates until the current seam changes negligibly compared with the previous seam. Experiments show that our proposed method finds a nearly perception-consistent seam after several iterations, outperforming the conventional seam-cutting and other seam-driven methods.
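For background, a standard seam-cutting dynamic program over a difference map of the overlapping region, the primitive that an evaluation-reestimation loop like the one above would repeatedly invoke (this is the generic minimal-cost seam, not the paper's patch-point metric):

```python
import numpy as np

def min_cost_seam(diff):
    # diff: (H, W) difference map of the overlapping region
    H, W = diff.shape
    cost = diff.astype(float).copy()
    # accumulate: each pixel adds the cheapest of its three upper neighbors
    for i in range(1, H):
        left = np.r_[np.inf, cost[i - 1, :-1]]
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # backtrack the minimal-cost path from bottom to top
    seam = np.empty(H, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(H - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, W)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam  # column index of the seam in each row
```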

Many problems in signal processing reduce to nonparametric function estimation. We propose a new methodology, piecewise convex fitting (PCF), and give a two-stage adaptive estimate. In the first stage, the number and locations of the change points are estimated using strong smoothing. In the second stage, a constrained smoothing spline fit is performed, with the smoothing level chosen to minimize the MSE. The imposed constraint is that a single change point occurs in a region about each empirical change point of the first-stage estimate. This constraint is equivalent to requiring that the third derivative of the second-stage estimate has a single sign in a small neighborhood about each first-stage change point. We sketch how PCF may be applied to signal recovery, instantaneous frequency estimation, surface reconstruction, image segmentation, spectral estimation and multivariate adaptive regression.
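A hedged sketch of the first stage only (the constrained second-stage spline fit is omitted): fit with strong smoothing, then report sign changes of the third derivative as empirical change points. The smoothing parameter `strong_s` is a placeholder to be chosen by the user.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def stage_one_changepoints(x, y, strong_s):
    # Stage 1: strongly smoothed fit; quintic spline so f''' is continuous
    f = UnivariateSpline(x, y, s=strong_s, k=5)
    d3 = f.derivative(3)(x)                        # third derivative on the grid
    flips = np.where(np.diff(np.sign(d3)) != 0)[0]
    return x[flips]                                # empirical change points
```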
