
High-performance classical simulators for quantum circuits, in particular those based on the tensor network contraction algorithm, have become an important tool for the validation of noisy quantum computing. To address memory limitations, the slicing technique is used to reduce tensor dimensions, but it can also introduce additional computational overhead that greatly slows down overall performance. This paper proposes novel lifetime-based methods to reduce the slicing overhead and improve computing efficiency, including an interpretation method for dealing with slicing overhead, an in-place slicing strategy for finding the smallest slicing set, and an adaptive tensor network contraction path refiner customized for the Sunway architecture. Experiments show that in most cases the slicing overhead with our in-place slicing strategy is lower than that of Cotengra, the most widely used contraction path optimization software at present. Finally, the resulting simulation time is reduced to 89.1 s for the random quantum circuit (RQC) of the Sycamore quantum processor, with a sustained single-precision performance of 308.6 Pflops using over 41M cores to generate 1M correlated samples, more than a 5x performance improvement over the 60.4 Pflops achieved by the 2021 Gordon Bell Prize work.
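To make the slicing idea concrete, here is a minimal NumPy sketch (our own illustration, not the paper's Sunway implementation) of how fixing a shared index trades peak memory for repeated work:

```python
# Minimal sketch of index slicing in tensor network contraction: fixing one
# shared index turns a large contraction into a sum of smaller ones that
# fit in memory.
import numpy as np

A = np.random.rand(8, 16)   # hypothetical tensors sharing index k of size 16
B = np.random.rand(16, 8)

# Full contraction: C[i, j] = sum_k A[i, k] * B[k, j]
full = np.einsum('ik,kj->ij', A, B)

# Sliced contraction: fix k to one value at a time, contract the smaller
# slices, and accumulate. Peak memory drops, but any work that does not
# depend on k is repeated per slice -- the "slicing overhead" the paper
# targets.
sliced = sum(np.einsum('i,j->ij', A[:, k], B[k, :]) for k in range(16))

assert np.allclose(full, sliced)
```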

Related content

This paper presents a new method for quantum identity authentication (QIA) protocols. The logic of classical zero-knowledge proofs (ZKPs) due to Schnorr is applied in quantum circuits and algorithms. This novel approach gives an exact way by which a prover $P$ can prove they know some secret by encapsulating it in a quantum state before sending it to a verifier $V$ over a quantum channel, allowing for a ZKP in which eavesdropping or manipulation can be detected with a fail-safe design. This is achieved by moving away from the hardness of the discrete logarithm problem towards the hardness of estimating quantum states. This paper presents a method by which this can be achieved, along with some bounds on the security of the protocol. With the anticipated advent of a `quantum internet', such protocols and ideas may soon have utility and execution in the real world.
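For reference, the classical Schnorr identification protocol whose logic the paper transplants to quantum states can be sketched as follows (toy parameters, insecure, and purely classical; the paper's quantum encapsulation is not shown):

```python
# Sketch of the classical Schnorr identification protocol that the quantum
# construction is modeled on (toy-sized group; insecure by design).
import random

p, q, g = 2267, 103, 354          # toy group: g has prime order q mod p
x = random.randrange(1, q)        # prover's secret
y = pow(g, x, p)                  # public key

# Commit -> challenge -> response
r = random.randrange(1, q)
t = pow(g, r, p)                  # prover's commitment
c = random.randrange(1, q)        # verifier's challenge
s = (r + c * x) % q               # prover's response

# Verifier checks g^s == t * y^c (mod p) without learning x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```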

Federated Learning (FL) has emerged as a new paradigm for training machine learning models distributively without sacrificing data security and privacy. Learning models on edge devices such as mobile phones is one of the most common use cases for FL. However, non-identically and independently distributed (non-IID) data on edge devices easily leads to training failures. In particular, over-parameterized machine learning models can easily be over-fitted on such data, resulting in inefficient federated learning and poor model performance. To overcome the over-fitting issue, we propose an adaptive dynamic pruning approach for FL, which can dynamically slim the model by dropping out unimportant parameters, hence preventing over-fitting. Since a machine learning model's parameters react differently to different training samples, adaptive dynamic pruning evaluates the salience of the model's parameters according to the input training sample and retains only the salient parameters' gradients during back-propagation. We performed comprehensive experiments to evaluate our approach. The results show that, by removing the redundant parameters in neural networks, our approach can significantly reduce the over-fitting issue and greatly improve the training efficiency. In particular, when training ResNet-32 on CIFAR-10, our approach reduces the communication cost by 57\%. We further demonstrate the inference acceleration capability of the proposed algorithm. Our approach reduces DNN inference FLOPs on edge devices by up to 50\% while maintaining the model's quality.
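As a rough sketch of the salience-based gradient retention described above (our PyTorch illustration; the paper's exact salience measure and pruning schedule may differ):

```python
# Keep gradients only for the top-k most salient weights of a layer for the
# current batch, zeroing the rest, so unimportant parameters stop training.
import torch

def mask_gradients(layer: torch.nn.Linear, keep_ratio: float = 0.5) -> None:
    """Zero out gradients of the least salient weights after backward()."""
    grad = layer.weight.grad
    # Salience proxy (an assumed heuristic): |weight * gradient|, a
    # first-order estimate of each parameter's effect on the loss for
    # this input.
    salience = (layer.weight.detach() * grad).abs().flatten()
    k = int(keep_ratio * salience.numel())
    threshold = salience.kthvalue(salience.numel() - k + 1).values
    grad.mul_((salience >= threshold).reshape_as(grad).float())

# Usage: loss.backward(); mask_gradients(model.fc, 0.5); optimizer.step()
```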

With ever-increasing depth and width in deep neural networks to achieve state-of-the-art performance, deep learning computation has grown significantly, and dot-products remain dominant in overall computation time. Most prior works build on the conventional dot-product, where a weighted summation of inputs represents the neuron operation. However, another implementation of the dot-product, based on the notion of angles and magnitudes in Euclidean space, has attracted limited attention. This paper proposes DeepCAM, an inference accelerator built on two critical innovations to alleviate the computation-time bottleneck of convolutional neural networks. The first innovation is an approximate dot-product built on computations in Euclidean space that can replace addition and multiplication with simple bit-wise operations. The second innovation is a dynamically sized content-addressable-memory-based (CAM-based) accelerator that performs these bit-wise operations and accelerates CNNs with lower computation time. Our experiments on benchmark image recognition datasets demonstrate that DeepCAM is up to 523x and 3498x faster than Eyeriss and traditional CPUs like Intel Skylake, respectively. Furthermore, the energy consumed by our DeepCAM approach is 2.16x to 109x lower than that of Eyeriss.
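The geometric view this builds on is $w \cdot x = \lVert w \rVert \lVert x \rVert \cos\theta$. A hedged floating-point illustration (using a random-projection angle estimate as a stand-in for the paper's CAM-based bit-wise operations, which are not shown):

```python
# A dot-product rewritten as |w||x|cos(theta): if the angle can be
# estimated cheaply, the multiply-accumulate can be approximated.
import numpy as np

rng = np.random.default_rng(0)
w, x = rng.standard_normal(256), rng.standard_normal(256)

exact = w @ x

# Angle recovered from random sign projections (a hypothetical stand-in,
# not DeepCAM's actual circuit): P[signs agree] = 1 - theta/pi.
proj = rng.standard_normal((4096, 256))
agree = np.mean(np.sign(proj @ w) == np.sign(proj @ x))
theta = np.pi * (1.0 - agree)
approx = np.linalg.norm(w) * np.linalg.norm(x) * np.cos(theta)

print(exact, approx)  # approx tracks exact up to projection noise
```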

We provide a new approach for compiling quantum simulation circuits that appear in Trotter, qDRIFT, and multi-product formulas into Clifford and non-Clifford operations, which can reduce the number of non-Clifford operations by a factor of up to $4$. In fact, the total number of gates is reduced in many cases. We show that it is possible to implement an exponentiated sum of commuting Paulis with at most $m$ (controlled-)rotation gates, where $m$ is the number of distinct non-zero eigenvalues (ignoring sign). Thus we can collect mutually commuting Hamiltonian terms into groups that satisfy one of several symmetries identified in this work, which allow an inexpensive simulation of the entire group of terms. We further show that the cost can in some cases be reduced by partially allocating Hamiltonian terms to several groups, and provide a polynomial-time classical algorithm that can greedily allocate the terms to appropriate groupings. We specifically discuss these optimizations for the case of fermionic dynamics and provide extensive numerical simulations of our grouping strategy for qDRIFT applied to 4- and 6-qubit Heisenberg models, $LiH$, and $H_2$, observing a factor of 1.8-3.2 reduction in the number of non-Clifford gates. This suggests that Trotter-based simulation of chemistry in second quantization may be even more practical than previously believed.
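As a hedged illustration of the $m$-rotation claim (our own worked example, not taken from the paper): the commuting pair $H = X \otimes X + Y \otimes Y$ has spectrum $\{+2, -2, 0, 0\}$, i.e., a single distinct non-zero eigenvalue magnitude ($m = 1$), so

$$ e^{-itH} \;=\; C \,\mathrm{CR}_z(4t)\, C^{\dagger}, \qquad \mathrm{CR}_z(4t) = \lvert 0\rangle\langle 0\rvert \otimes R_z(4t) + \lvert 1\rangle\langle 1\rvert \otimes I, $$

for some Clifford $C$ that simultaneously diagonalizes the two Paulis: the only non-trivial eigenphases are $e^{\mp 2it}$, so a single controlled rotation suffices, matching the at-most-$m$ (controlled-)rotation count.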

The identification of choice models is crucial for understanding consumer behavior and informing marketing or operational strategies, policy design, and product development. The identification of parametric choice-based demand models is typically straightforward. However, nonparametric models, which are highly effective and flexible in explaining customer choice, may suffer from the curse of dimensionality, hindering their identification. A prominent example of a nonparametric model is the ranking-based model, which mirrors the random utility maximization (RUM) class and is known to be nonidentifiable from the collection of choice probabilities alone. Our objective in this paper is to develop a new class of nonparametric models that is not subject to the problem of nonidentifiability. Our model assumes bounded rationality of consumers, which results in symmetric demand cannibalization and, intriguingly, enables full identification. Our choice model also demonstrates competitive prediction accuracy compared to state-of-the-art benchmarks in a real-world case study, despite incorporating the assumption of bounded rationality, which could, in theory, limit the representational power of our model. In addition, we tackle the important problem of finding the optimal assortment under the proposed choice model. We demonstrate the NP-hardness of this problem and provide a fully polynomial-time approximation scheme through dynamic programming. Finally, we propose an efficient estimation framework using a combination of column generation and expectation-maximization algorithms, which proves to be more tractable than the estimation algorithm of the aforementioned ranking-based model.
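A high-level skeleton of the column-generation-plus-EM estimation loop described above (our hedged reconstruction; all model-specific details are abstracted into the function arguments, which are hypothetical):

```python
# EM fits mixture weights over the current set of consumer types; column
# generation adds a new type only when it can improve the fit.
def estimate(choice_data, initial_types, fit_weights_em, best_new_type):
    types = list(initial_types)
    while True:
        # E/M steps: given the current candidate types, fit their mixture
        # weights to the observed choice frequencies.
        weights = fit_weights_em(types, choice_data)
        # Pricing step: search for a new consumer type that would improve
        # the likelihood (a positive "reduced cost" column).
        candidate, improvement = best_new_type(types, weights, choice_data)
        if improvement <= 1e-6:       # no improving column: stop
            return types, weights
        types.append(candidate)
```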

Most datasets suffer from partial or complete missing values, which limits the downstream models that can be applied to the data and the statistical inferences that can be drawn from it. Several imputation techniques have been designed to replace missing data with stand-in values. These approaches have implications for calculating clinical scores, model building, and model testing. The work showcased here offers a novel means of categorical imputation based on item response theory (IRT) and compares it against several methodologies currently used in the machine learning field, including k-nearest neighbors (kNN), multiple imputation by chained equations (MICE), and the Amazon Web Services (AWS) deep learning method Datawig. Analyses comparing these techniques were performed on three different datasets representing ordinal, nominal, and binary categories. The data were modified so that they also varied in both the proportion of data missing and the systematization of the missingness. Two assessments of performance were conducted: accuracy in reproducing the missing values, and predictive performance using the imputed data. Results demonstrated that the new method, Item Response Theory for Categorical Imputation (IRTCI), fared quite well compared to currently used methods, outperforming several of them in many conditions. Given the theoretical basis of the new approach, and its unique generation of probabilistic terms for determining category membership in missing cells, IRTCI offers a viable alternative to current approaches.
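A toy sketch of the IRT ingredient behind such an approach for a binary item (our illustration; the paper's estimation pipeline is more involved):

```python
# A two-parameter logistic (2PL) item model turns a respondent's latent
# trait into a probability of category membership, which can then be
# sampled to fill a missing cell.
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response: P(category = 1 | trait theta, discrimination a,
    difficulty b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(0)
theta = 0.7          # latent trait estimated from the respondent's
                     # observed (non-missing) items
a, b = 1.3, -0.2     # hypothetical item parameters fit on complete rows
p = p_2pl(theta, a, b)
imputed = rng.binomial(1, p)   # probabilistic imputation for a binary item
print(p, imputed)
```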

We study the problem of optimizing a recommender system for outcomes that occur over several weeks or months. We begin by drawing on reinforcement learning to formulate a comprehensive model of users' recurring relationships with a recommender system. Measurement, attribution, and coordination challenges complicate algorithm design. We describe careful modeling -- including a new representation of user state and key conditional independence assumptions -- which overcomes these challenges and leads to simple, testable recommender system prototypes. We apply our approach to a podcast recommender system that makes personalized recommendations to hundreds of millions of listeners. A/B tests demonstrate that purposefully optimizing for long-term outcomes leads to large performance gains over conventional approaches that optimize for short-term proxies.

We present an efficient quantum algorithm to simulate nonlinear differential equations with polynomial vector fields of arbitrary degree on quantum platforms. Models of physical systems that are governed by ordinary differential equations (ODEs) or partial differential equations (PDEs) can be challenging to solve on classical computers due to high dimensionality, stiffness, nonlinearities, and sensitive dependence on initial conditions. For sparse $n$-dimensional linear ODEs, quantum algorithms have been developed that can produce a quantum state proportional to the solution in poly(log(nx)) time using the quantum linear systems algorithm (QLSA). Recently, this framework was extended to systems of nonlinear ODEs with quadratic polynomial vector fields by applying Carleman linearization, which enables the embedding of the quadratic system into an approximate linear form. A detailed complexity analysis showed a significant computational advantage under certain conditions. We present an extension of this algorithm to systems of nonlinear ODEs with $k$-th degree polynomial vector fields for arbitrary (finite) values of $k$. The steps involve: 1) mapping the $k$-th degree polynomial ODE to a higher-dimensional quadratic polynomial ODE; 2) applying Carleman linearization to transform the quadratic ODE into an infinite-dimensional system of linear ODEs; 3) truncating and discretizing the linear ODE and solving it using the forward Euler method and the QLSA. Alternatively, one can apply Carleman linearization directly to the $k$-th degree polynomial ODE, resulting in an infinite-dimensional system of linear ODEs, and then apply step 3. This solution route can be computationally more efficient. We present a detailed complexity analysis of the proposed algorithms, prove polynomial scaling of the runtime in $k$, and demonstrate the framework on an example.
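A purely classical illustration of steps 2) and 3) for the scalar quadratic ODE $\dot{x} = ax + bx^2$ (our sketch; the quantum part, i.e., the QLSA, is not shown): with $y_k = x^k$, Carleman linearization yields the linear chain $\dot{y}_k = k(a y_k + b y_{k+1})$, which is truncated at order $N$ and integrated with forward Euler.

```python
# Carleman linearization of x' = a*x + b*x^2, truncated at order N, then
# forward Euler on the resulting linear system y' = A y.
import numpy as np

a, b, N, dt, T = -1.0, 0.1, 8, 1e-3, 2.0
A = np.zeros((N, N))
for k in range(1, N + 1):
    A[k - 1, k - 1] = k * a
    if k < N:
        A[k - 1, k] = k * b       # coupling to y_{k+1}; row N is truncated

x0 = 0.5
y = np.array([x0 ** k for k in range(1, N + 1)])
for _ in range(int(T / dt)):      # forward Euler on the linear system
    y = y + dt * (A @ y)

# Closed-form (logistic/Bernoulli) solution for comparison.
exact = a * x0 / ((a + b * x0) * np.exp(-a * T) - b * x0)
print(y[0], exact)                # y_1 approximates x(T)
```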

Efficiently querying data on embedded sensor and IoT devices is challenging given their very limited memory and CPU resources. With increasing volumes of collected data, it is critical to process, filter, and manipulate data on the edge devices where it is collected to improve efficiency and reduce network transmissions. Existing embedded index structures do not adapt to the data distribution and characteristics. This paper demonstrates how learned indexes, which develop space-efficient summaries of the data, can dramatically improve query performance and predictability. Learned indexes based on linear approximations can reduce query I/O by 50 to 90% and improve query throughput by a factor of 2 to 5, while requiring only a few kilobytes of RAM. Experimental results on a variety of time series datasets demonstrate the advantages of learned indexes, which considerably improve over state-of-the-art index algorithms.
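A minimal sketch of a learned index with a single linear approximation (our illustration of the idea, not the paper's embedded implementation): a linear model predicts a record's position, and a small search window bounded by the model's maximum error finishes the lookup.

```python
# Fit position ~ slope * key + intercept over sorted keys, record the
# worst-case prediction error, then look up with a tiny bounded search.
import numpy as np

keys = np.sort(np.random.default_rng(0).integers(0, 10**6, 4096))
pos = np.arange(len(keys))

slope, intercept = np.polyfit(keys, pos, 1)
max_err = int(np.ceil(np.max(np.abs(slope * keys + intercept - pos))))

def lookup(key):
    guess = int(slope * key + intercept)
    lo = max(0, guess - max_err)
    hi = min(len(keys), guess + max_err + 1)
    i = lo + int(np.searchsorted(keys[lo:hi], key))  # bounded local search
    return i if i < len(keys) and keys[i] == key else None

i = lookup(int(keys[123]))
assert keys[i] == keys[123]   # found within the model's error bound
```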

Classic algorithms and machine learning systems like neural networks are both abundant in everyday life. While classic computer science algorithms are suitable for the precise execution of exactly defined tasks, such as finding the shortest path in a large graph, neural networks allow learning from data to predict the most likely answer in more complex tasks, such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining both concepts, leading to more robust, better-performing, more interpretable, more computationally efficient, and more data-efficient architectures. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, it is important that the algorithm is differentiable, such that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, this thesis proposes a general method for continuously relaxing algorithms by perturbing variables and approximating the expectation value in closed form, i.e., without sampling. In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, this thesis presents alternative training strategies for learning with algorithms.
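As a toy instance of this relaxation recipe (our example, not code from the thesis): relaxing the hard comparator of a sorting network by perturbing its input with logistic noise and taking the expectation in closed form yields a sigmoid-weighted soft min/max, with no sampling required.

```python
# The hard comparison step(a - b) is relaxed by perturbing its input with
# logistic noise; the expectation has a closed form (a sigmoid), so
# gradients flow through the comparator.
import torch

def soft_swap(a, b, tau=0.1):
    """Differentiable min/max of two values via a closed-form expectation."""
    p = torch.sigmoid((b - a) / tau)      # P(a <= b) under the perturbation
    lo = p * a + (1 - p) * b              # soft minimum
    hi = p * b + (1 - p) * a              # soft maximum
    return lo, hi

a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)
lo, hi = soft_swap(a, b)
lo.backward()                             # gradients reach both inputs
print(lo.item(), hi.item(), a.grad.item(), b.grad.item())
```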
