
Quantum computers promise to revolutionize some of the most computationally challenging tasks by executing calculations faster than classical computers. Integral transforms, such as convolution, the Laplace transform, or path integration in quantum mechanics, are indispensable operations in science and technology. They are used in applications ranging from the solution of integro-differential equations to system modeling and signal processing. With the rapidly growing amount of collected information and the development of ever more complex systems, faster computation of integral transforms could dramatically expand analysis, design, and execution capabilities. Here we show that quantum processors can reduce the time complexity of integral transform evaluations from quadratic to quasi-linear. We present an experimental demonstration of this quantum-enhanced strategy for matched filtering: we implemented the qubit-based matched filtering algorithm on noisy superconducting qubits to carry out the first quantum-based gravitational-wave data analysis. For a binary black hole merger, this analysis achieves a signal-to-noise ratio similar to that attainable with classical computation, providing evidence for the utility of qubits in practically relevant tasks. The presented algorithm applies to any integral transform, with any number of integrands, in any number of dimensions.
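
For intuition, the classical operation the quantum algorithm speeds up is the frequency-domain matched filter, which computes the full correlation of data against a template in O(n log n) rather than O(n^2). Below is a minimal NumPy sketch under simplifying assumptions of our own (a toy sinusoidal template, white noise, and an injected signal amplitude); it is not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.linspace(0.0, 1.0, n)
template = np.sin(2 * np.pi * 60 * t) * np.hanning(n)      # toy template (hypothetical)
data = 0.5 * np.roll(template, 1000) + rng.normal(size=n)  # signal buried in white noise

# Frequency-domain matched filter: circular cross-correlation in O(n log n).
corr = np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(template)), n)
sigma = np.sqrt(np.sum(template ** 2))  # template norm (white-noise normalization)
snr = np.abs(corr) / sigma
print(f"peak SNR {snr.max():.1f} at sample offset {snr.argmax()}")
```

The SNR peak lands at the injected offset of 1000 samples; with colored noise one would additionally whiten both data and template by the noise power spectral density.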

Related content

Integration, the VLSI Journal. Publisher: Elsevier.

We introduce two new approximation methods for the numerical evaluation of the long-range Coulomb potential and for the approximation of the resulting high-dimensional two-electron integral (TEI) tensor with long-range interactions arising in molecular simulations. The first method exploits the tensorized structure of the compressed two-electron integrals obtained through two-dimensional Chebyshev interpolation combined with Gaussian quadrature. The second method is based on the Fast Multipole Method (FMM). Numerical experiments on medium-sized molecules with high-quality basis sets demonstrate the efficiency of both methods. A detailed description of the algorithms is provided, together with a numerical comparison of the introduced approaches.
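
A toy illustration of the first ingredient: two-dimensional Chebyshev interpolation of a long-range kernel on a pair of well-separated 1D boxes, yielding a low-rank factorized block. The range-separation parameter `omega`, the intervals, and the node count below are hypothetical choices of ours, not values from the paper.

```python
import numpy as np
from scipy.special import erf

def cheb_nodes(k, a, b):
    """Chebyshev nodes of the first kind mapped to [a, b]."""
    t = np.cos((2 * np.arange(k) + 1) * np.pi / (2 * k))
    return 0.5 * (a + b) + 0.5 * (b - a) * t

def lagrange_basis(nodes, x):
    """Evaluate all Lagrange basis polynomials on the nodes at the points x."""
    L = np.ones((len(x), len(nodes)))
    for j, nj in enumerate(nodes):
        for m, nm in enumerate(nodes):
            if m != j:
                L[:, j] *= (x - nm) / (nj - nm)
    return L

omega = 0.5                                    # hypothetical range-separation parameter
kernel = lambda x, y: erf(omega * np.abs(x - y)) / np.abs(x - y)

a, b, k = 1.0, 3.0, 12                         # two well-separated 1D boxes, 12 nodes each
xs, ys = cheb_nodes(k, a, b), cheb_nodes(k, -b, -a)
K = kernel(xs[:, None], ys[None, :])           # k x k compressed kernel block

x, y = np.linspace(a, b, 50), np.linspace(-b, -a, 50)
approx = lagrange_basis(xs, x) @ K @ lagrange_basis(ys, y).T
exact = kernel(x[:, None], y[None, :])
print(f"max interpolation error: {np.abs(approx - exact).max():.2e}")
```

Because the long-range kernel is smooth away from the diagonal, a small number of nodes per box already gives near machine-precision accuracy on separated boxes.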

We present two integral-equation-based methods, one decoupled and one coupled, for the Morse-Ingard equations subject to Neumann boundary conditions on an exterior domain. Both methods are based on second-kind integral equation (SKIE) formulations. The coupled method is well-conditioned and can achieve high accuracy. The decoupled method has lower computational cost and more flexibility in dealing with the boundary layer; however, it is prone to ill-conditioning of the decoupling transform and cannot achieve accuracy as high as the coupled method's. We show numerical examples using a Nystr\"om method based on quadrature-by-expansion (QBX) with fast-multipole acceleration, and demonstrate the accuracy and efficiency of the solvers in both two and three dimensions with complex geometry.
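
The Nystr\"om discretization underlying such solvers is easy to sketch on a model problem. The toy example below solves a 1D second-kind Fredholm equation with a smooth kernel and a manufactured solution using Gauss-Legendre quadrature; the singular-quadrature machinery (QBX) and FMM acceleration used in the paper are omitted, and the kernel and solution are our own choices.

```python
import numpy as np

# Second-kind Fredholm equation on [0, 1]:  u(x) + \int_0^1 K(x, t) u(t) dt = f(x),
# with a manufactured solution u(x) = cos(pi x) and a smooth (non-singular) kernel.
K = lambda x, t: np.exp(-(x - t) ** 2)
u_exact = lambda x: np.cos(np.pi * x)

n = 40
nodes, weights = np.polynomial.legendre.leggauss(n)
x = 0.5 * (nodes + 1.0)   # map Gauss-Legendre nodes from [-1, 1] to [0, 1]
w = 0.5 * weights

Kmat = K(x[:, None], x[None, :])
f = u_exact(x) + (Kmat * w) @ u_exact(x)   # right-hand side consistent with u_exact
A = np.eye(n) + Kmat * w[None, :]          # Nystrom discretization of I + K
u = np.linalg.solve(A, f)
print(f"max nodal error: {np.abs(u - u_exact(x)).max():.2e}")
```

The identity block in `A` is what makes the formulation second-kind and keeps the system well-conditioned as `n` grows.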

Assessing the validity of a real-world system with respect to given quality criteria is a common yet costly task in industrial applications due to the vast number of required real-world tests. Validating such systems by means of simulation offers a promising, less expensive alternative, but requires an assessment of the simulation accuracy and therefore end-to-end measurements. Additionally, covariate shifts between simulations and actual usage can make it difficult to estimate the reliability of such systems. In this work, we present a validation method that propagates bounds on distributional discrepancy measures through a composite system, thereby allowing us to derive an upper bound on the failure probability of the real system from potentially inaccurate simulations. Each propagation step entails an optimization problem, where -- for measures such as maximum mean discrepancy (MMD) -- we develop tight convex relaxations based on semidefinite programs. We demonstrate that our propagation method yields valid and useful bounds for composite systems exhibiting a variety of realistic effects. In particular, we show that the proposed method can successfully account for data shifts within the experimental design as well as model inaccuracies in the simulation itself.
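
As a concrete reference for the discrepancy measure involved, here is a standard unbiased estimator of the squared MMD with an RBF kernel, applied to synthetic "simulation" and "real" samples exhibiting a covariate shift; the bound-propagation and SDP-relaxation steps of the paper are beyond this sketch, and the sample distributions are invented.

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimate of squared maximum mean discrepancy with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    n, m = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)  # drop diagonal terms for the unbiased estimator
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()

rng = np.random.default_rng(1)
sim = rng.normal(0.0, 1.0, size=(500, 2))    # "simulation" samples
real = rng.normal(0.3, 1.0, size=(500, 2))   # shifted "real-world" samples
print(f"estimated MMD^2: {mmd2_unbiased(sim, real):.4f}")
```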

Even after decades of quantum computing development, examples of generally useful quantum algorithms with exponential speedups over classical counterparts are scarce. Recent progress in quantum algorithms for linear algebra has positioned quantum machine learning (QML) as a potential source of such useful exponential improvements. Yet, in an unexpected development, a recent series of "dequantization" results has just as rapidly removed the promise of exponential speedups for several QML algorithms. This raises the critical question of whether the exponential speedups of other linear-algebraic QML algorithms persist. In this paper, we study the quantum-algorithmic methods behind the algorithm for topological data analysis of Lloyd, Garnerone, and Zanardi through this lens. We provide evidence that the problem solved by this algorithm is classically intractable by showing that its natural generalization is as hard as simulating the one-clean-qubit model -- which is widely believed to require superpolynomial time on a classical computer -- and is thus very likely immune to dequantization. Based on this result, we provide a number of new quantum algorithms for problems such as rank estimation and complex network analysis, along with complexity-theoretic evidence for their classical intractability. Furthermore, we analyze the suitability of the proposed quantum algorithms for near-term implementation. Our results provide a number of useful applications for full-blown and restricted quantum computers with a guaranteed exponential speedup over classical methods, recovering some of the potential for linear-algebraic QML to become one of quantum computing's killer applications.
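
For orientation, the quantity the Lloyd-Garnerone-Zanardi algorithm estimates can be computed classically by brute force on tiny inputs: Betti numbers obtained from the ranks of simplicial boundary operators. The point cloud and Vietoris-Rips scale below are synthetic choices of ours; the quantum advantage lies precisely in avoiding these exponentially large boundary matrices.

```python
import numpy as np
from itertools import combinations

# Points sampled on a circle; expected homology: b0 = 1 component, b1 = 1 loop.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
points = np.c_[np.cos(angles), np.sin(angles)]
eps = 1.0  # hypothetical Vietoris-Rips scale

verts = list(range(len(points)))
edges = [e for e in combinations(verts, 2)
         if np.linalg.norm(points[e[0]] - points[e[1]]) <= eps]
tris = [s for s in combinations(verts, 3)
        if all(f in edges for f in combinations(s, 2))]

def boundary(simplices, faces):
    """Signed boundary matrix mapping k-simplices to their (k-1)-faces."""
    idx = {f: i for i, f in enumerate(faces)}
    D = np.zeros((len(faces), len(simplices)))
    for j, s in enumerate(simplices):
        for i in range(len(s)):
            D[idx[s[:i] + s[i + 1:]], j] = (-1) ** i
    return D

rank1 = np.linalg.matrix_rank(boundary(edges, [(v,) for v in verts]))
rank2 = np.linalg.matrix_rank(boundary(tris, edges)) if tris else 0
print("b0 =", len(verts) - rank1, " b1 =", len(edges) - rank1 - rank2)
```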

Solving large systems of equations is a challenge for modeling natural phenomena, such as simulating subsurface flow. To avoid systems that are intractable on current computers, it is often necessary to neglect information at small scales, an approach known as coarse-graining. For many practical applications, such as flow in porous, homogeneous materials, coarse-graining offers a sufficiently accurate approximation of the solution. Unfortunately, fractured systems cannot be accurately coarse-grained, as critical network topology exists at the smallest scales, including topology that can push the network across a percolation threshold. Therefore, new techniques are necessary to accurately model important fracture systems. Quantum algorithms for solving linear systems offer a theoretically exponential improvement over their classical counterparts, and in this work we introduce two quantum algorithms for fractured flow. The first algorithm, designed for future error-free quantum computers, has enormous potential, but we demonstrate that current hardware is too noisy for adequate performance. The second algorithm, designed to be noise-resilient, already performs well for problems of small to medium size (on the order of 10 to 1000 nodes), as we demonstrate experimentally and explain theoretically. We expect further improvements from quantum error mitigation and preconditioning.
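
A minimal sketch of the kind of linear system such solvers target: steady-state flow on a small fracture network reduces to a graph-Laplacian solve for the interior pressures. The connectivity and unit conductances below are hypothetical; a quantum linear-system algorithm would encode this same matrix.

```python
import numpy as np

# Hypothetical fracture connectivity; node 0 is the inlet, node 4 the outlet.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]
n = 5
L = np.zeros((n, n))
for i, j in edges:  # unit conductance per fracture
    L[i, i] += 1.0
    L[j, j] += 1.0
    L[i, j] -= 1.0
    L[j, i] -= 1.0

interior = [1, 2, 3]
p = np.zeros(n)
p[0] = 1.0  # fixed inlet pressure; outlet pressure stays 0
b = -L[np.ix_(interior, [0, 4])] @ p[[0, 4]]
p[interior] = np.linalg.solve(L[np.ix_(interior, interior)], b)
print("node pressures:", np.round(p, 3))
```

Removing a single edge from `edges` can disconnect inlet from outlet, which is the percolation sensitivity that makes coarse-graining fail for fractured systems.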

This paper investigates an Internet of Things (IoT) system in which multiple devices observe a physical object's parameters and offload their observations to the base station (BS) in a timely manner through opportunistic channel access. Specifically, each device first accesses the common channel through contention with a certain probability; the winner then evaluates the instantaneous channel condition and decides whether to exercise its right of channel access. We analyze this system from the perspective of Age of Information (AoI), which describes the freshness of the observed information. The goal is to minimize the average AoI by optimizing the probability of device participation in contention and the transmission-rate threshold. The problem is hard to solve because the AoI expression, which takes a fractional form, is complex. We first decompose the original problem into two single-variable optimization subproblems through the Dinkelbach method and the Block Coordinate Descent (BCD) method. We then transform them into monotonic optimization problems by proving the monotonicity of the objective functions, whose global optimal solutions can be found through the Polyblock algorithm. Numerical results verify the validity of our proposed method.
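
To make the decomposition concrete, here is Dinkelbach's method on a toy one-dimensional fractional program; the numerator, denominator, and grid-based inner minimization stand in for the actual AoI expressions, which are omitted here.

```python
import numpy as np

# Dinkelbach's method for min_x f(x)/g(x): repeatedly solve the parametric
# subproblem min_x f(x) - lam * g(x) and update lam with the achieved ratio.
f = lambda x: (x - 2.0) ** 2 + 1.0   # hypothetical "cost" numerator
g = lambda x: x + 1.0                # hypothetical positive denominator
xs = np.linspace(0.0, 5.0, 10001)    # grid search stands in for the inner solver

lam = 0.0
for _ in range(20):
    x_star = xs[np.argmin(f(xs) - lam * g(xs))]
    if abs(f(x_star) - lam * g(x_star)) < 1e-10:  # F(lam) = 0 at the optimum
        break
    lam = f(x_star) / g(x_star)  # Dinkelbach update
print(f"optimal ratio {lam:.6f} at x = {x_star:.3f}")
```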

High-value payment systems (HVPS) are typically liquidity-intensive as the payment requests are indivisible and settled on a gross basis. Finding the right order in which payments should be processed to maximize the liquidity efficiency of these systems is an $NP$-hard combinatorial optimization problem, which quantum algorithms may be able to tackle at meaningful scales. We developed an algorithm and ran it on a hybrid quantum annealing solver to find an ordering of payments that reduced the amount of system liquidity necessary without substantially increasing payment delays. Despite the limitations in size and speed of today's quantum computers, our algorithm provided quantifiable efficiency improvements when applied to the Canadian HVPS using a 30-day sample of transaction data. By reordering each batch of 70 payments as they entered the queue, we achieved an average of C\$240 million in daily liquidity savings, with a settlement delay of approximately 90 seconds. For a few days in the sample, the liquidity savings exceeded C\$1 billion. This algorithm could be incorporated as a centralized preprocessor into existing HVPS without entailing a fundamental change to their risk management models.
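
As a schematic of how such a problem can be posed for an annealer, the sketch below builds a toy QUBO that selects queued payments to settle together, trading settled value against the net liquidity drawn, and solves it by brute force in place of the hybrid annealer. The payment values, penalty weight, and simplified subset-selection objective are ours, not the paper's ordering formulation.

```python
import itertools
import numpy as np

# Toy liquidity-saving QUBO: binary x_i = 1 settles payment i. Positive values
# are outflows from the settling bank, negative values are inflows. We reward
# settled value and penalize the squared net outflow (liquidity drawn).
values = np.array([5, -3, 8, -7, 2, -4, 6, -6])  # hypothetical payment flows
lam = 0.5                                        # penalty weight on net outflow

best_x, best_e = None, np.inf
for bits in itertools.product([0, 1], repeat=len(values)):
    x = np.array(bits)
    energy = -np.abs(values) @ x + lam * (values @ x) ** 2
    if energy < best_e:
        best_x, best_e = x, energy
print("settle payments:", np.nonzero(best_x)[0], " net flow:", values @ best_x)
```

The quadratic penalty is what makes the objective a QUBO, so the same energy function could be handed to an annealing solver directly.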

This report describes our approach to designing and evaluating a software stack for a race car capable of achieving competitive driving performance in the different disciplines of Formula Student Driverless. Using a 360{\deg} LiDAR and, optionally, three cameras, we reliably recognize the plastic cones that mark the track boundaries at distances of around 35 m, enabling us to drive at the physical limits of the car. Using a GraphSLAM algorithm, we are able to map these cones with a root-mean-square error of less than 15 cm while driving at speeds of over 70 kph on a narrow track. The high-precision map is used in trajectory planning to detect the lane boundaries using Delaunay triangulation and a parametric cubic spline. We calculate an optimized trajectory using a minimum-curvature approach together with a GGS-diagram that takes the aerodynamics at different velocities into account. To track the target path with accelerations of up to 1.6 g, the control system is split into a PI controller for longitudinal control and a model predictive controller for lateral control. Additionally, a low-level optimal control allocation is used. The software is realized in ROS C++ and tested in a custom simulation as well as on the actual race track.
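
The boundary-detection step can be illustrated compactly: Delaunay-triangulate the cones, keep the midpoints of triangulation edges that connect a left-boundary cone to a right-boundary cone, and fit a parametric cubic spline through them. The synthetic semicircular cone layout and perfect color labels below are simplifying assumptions, not the team's map.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import CubicSpline

s = np.linspace(0, np.pi, 12)
left = np.c_[np.cos(s) * 9.0, np.sin(s) * 9.0]   # outer-boundary cones (synthetic)
right = np.c_[np.cos(s) * 6.0, np.sin(s) * 6.0]  # inner-boundary cones (synthetic)
cones = np.vstack([left, right])
is_left = np.array([True] * len(left) + [False] * len(right))

tri = Delaunay(cones)
mids = []
for simplex in tri.simplices:
    for i, j in [(0, 1), (1, 2), (0, 2)]:
        a, b = simplex[i], simplex[j]
        if is_left[a] != is_left[b]:  # edge crosses the track: keep its midpoint
            mids.append(0.5 * (cones[a] + cones[b]))
mids = np.unique(np.round(mids, 6), axis=0)
order = np.argsort(np.arctan2(mids[:, 1], mids[:, 0]))  # sort along the toy arc
mids = mids[order]

t = np.linspace(0.0, 1.0, len(mids))
path = CubicSpline(t, mids, axis=0)  # parametric centerline (x(t), y(t))
print("centerline sample:", np.round(path(0.5), 2))
```

On a real track the midpoints are ordered by traversal along the vehicle path rather than by polar angle, but the crossing-edge criterion is the same.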

The use of quantum computation for wireless network applications is emerging as a promising paradigm for bridging the performance gap between in-practice and optimal wireless algorithms. While today's quantum technology offers a limited number of qubits and low-fidelity gates, application-based quantum solutions help us understand and further improve the performance of such technology. This paper introduces QGateD-Polar, a novel quantum-gate-based maximum-likelihood decoder design for Polar error correction codes, which are becoming widespread in today's 5G and tomorrow's NextG wireless networks. QGateD-Polar uses quantum gates to dictate the time evolution of Polar code decoding -- from the received wireless soft data to the final decoded solution -- by leveraging quantum phenomena such as superposition, entanglement, and interference, making it amenable to quantum gate-based computers. Our early results show that QGateD-Polar achieves maximum-likelihood performance in ideal quantum simulations and demonstrate how its performance varies with noise.
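
For scale, the classical baseline the quantum decoder is measured against is exhaustive maximum-likelihood search over all messages. The sketch below decodes a tiny (N=8, K=4) polar code over a BPSK/AWGN channel; the frozen-bit positions follow a standard reliability ordering we assume, and none of this reflects QGateD-Polar's internal construction.

```python
import itertools
import numpy as np

N, K = 8, 4
info_pos = [3, 5, 6, 7]          # information-bit indices (assumed reliability order)
F = np.array([[1, 0], [1, 1]])
G = F
for _ in range(2):               # F^{\otimes 3} gives the N = 8 polar transform
    G = np.kron(G, F)

def encode(msg):
    u = np.zeros(N, dtype=int)
    u[info_pos] = msg
    return (u @ G) % 2

rng = np.random.default_rng(7)
msg = rng.integers(0, 2, K)
tx = 1 - 2 * encode(msg)                 # BPSK mapping: 0 -> +1, 1 -> -1
rx = tx + rng.normal(0, 0.6, N)          # soft received values

# ML search: nearest codeword in Euclidean distance over all 2^K messages.
best = min(itertools.product([0, 1], repeat=K),
           key=lambda m: np.sum((rx - (1 - 2 * encode(np.array(m)))) ** 2))
print("sent:", msg, " decoded:", np.array(best))
```

The 2^K search space is exactly what grows intractable for practical code sizes and motivates mapping the search onto quantum evolution.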

The field of quantum machine learning (QML) explores how quantum computers can be used to solve machine learning problems more efficiently. As an application of hybrid quantum-classical algorithms, it promises a potential quantum advantage in the near term. In this thesis, we use the ZXW-calculus to diagrammatically analyse two key problems that QML applications face. First, we discuss algorithms to compute the gradients on quantum hardware that are needed for gradient-based optimisation in QML. Concretely, we give new diagrammatic proofs of the common 2- and 4-term parameter shift rules used in the literature. Additionally, we derive a novel, generalised parameter shift rule with 2n terms that is applicable to gates that can be represented with n parametrised spiders in the ZXW-calculus. Furthermore, to the best of our knowledge, we give the first proof of a conjecture by Anselmetti et al. by proving a no-go theorem ruling out more efficient alternatives to the 4-term shift rule. Second, we analyse the gradient landscape of quantum ans\"atze for barren plateaus using both empirical and analytical techniques. Concretely, we develop a tool that automatically calculates the variance of gradients and use it to detect likely barren plateaus in commonly used quantum ans\"atze. Furthermore, we formally prove the existence or absence of barren plateaus for a selection of ans\"atze using diagrammatic techniques from the ZXW-calculus.
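
The two-term parameter shift rule mentioned above is easy to verify numerically on a single qubit: for a gate generated by a Pauli, the derivative of an expectation value equals half the difference of the expectation values at theta +/- pi/2. A plain-NumPy check on an RY rotation measured in Z (our own minimal example, not the thesis's diagrammatic proof):

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation e^{-i theta Y / 2} as a real 2x2 matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expval_z(theta):
    """<Z> on RY(theta)|0>; analytically <Z> = cos(theta) for this circuit."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return psi[0] ** 2 - psi[1] ** 2

theta = 0.7
grad_shift = 0.5 * (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2))
print(f"shift rule: {grad_shift:.6f}  exact: {-np.sin(theta):.6f}")
```

Unlike finite differences, the shift rule is exact at finite shift, which is what makes it practical on noisy hardware where small perturbations drown in shot noise.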
