
Quantum machine learning has the potential for a transformative impact across industry sectors, and in particular in finance. In our work we study the problem of hedging, where deep reinforcement learning offers a powerful framework for real markets. We develop quantum reinforcement learning methods based on policy-search and distributional actor-critic algorithms that use quantum neural network architectures with orthogonal and compound layers for the policy and value functions. We prove that the quantum neural networks we use are trainable, and we perform extensive simulations showing that quantum models can reduce the number of trainable parameters while achieving comparable performance, and that the distributional approach obtains better performance than other standard approaches, both classical and quantum. We successfully implement the proposed models on a trapped-ion quantum processor, utilizing circuits with up to $16$ qubits, and observe performance that agrees well with noiseless simulation. Our quantum techniques are general and can be applied to other reinforcement learning problems beyond hedging.
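
To make the orthogonal-layer idea concrete, the following is a minimal classical sketch of an orthogonal layer built from Givens rotations, the classical analogue of the RBS-gate circuits typically used in quantum orthogonal layers. The all-pairs rotation scheme and parameter count here are illustrative assumptions, not the paper's exact architecture.

```python
# Classical simulation of an orthogonal layer as a product of Givens
# rotations; n(n-1)/2 angles parameterize a rotation in SO(n).
import numpy as np

def givens(n, i, j, theta):
    """n x n Givens rotation mixing coordinates i and j by angle theta."""
    g = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    g[i, i] = g[j, j] = c
    g[i, j], g[j, i] = -s, s
    return g

def orthogonal_layer(x, thetas):
    """Apply a product of Givens rotations, one angle per coordinate pair."""
    n = len(x)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    assert len(thetas) == len(pairs)
    w = np.eye(n)
    for (i, j), t in zip(pairs, thetas):
        w = givens(n, i, j, t) @ w
    return w @ x

rng = np.random.default_rng(0)
n = 4
x = rng.normal(size=n)
y = orthogonal_layer(x, rng.normal(size=n * (n - 1) // 2))
print(np.linalg.norm(x), np.linalg.norm(y))  # equal norms: the map is orthogonal
```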

Related content

We propose a set of goodness-of-fit tests for the semiparametric accelerated failure time (AFT) model, including an omnibus test, a link function test, and a functional form test. This set of tests is derived from a multi-parameter cumulative sum process shown to converge asymptotically to a zero-mean Gaussian process. Its evaluation is based on an asymptotically equivalent perturbed version, which enables both graphical and numerical evaluations of the assumed AFT model. Empirical p-values are obtained using the Kolmogorov-type supremum test, which provides a reliable approach for estimating the significance of both the proposed un-standardized and standardized test statistics. The proposed procedure is illustrated using the induced smoothed rank-based estimator but is directly applicable to other popular estimators such as the non-smooth rank-based estimator or the least-squares estimator. Our proposed methods are rigorously evaluated using extensive simulation experiments that demonstrate their effectiveness in maintaining the nominal Type I error rate and in detecting departures from the assumed AFT model at practical sample sizes and censoring rates. Furthermore, the proposed approach is applied to the analysis of the Primary Biliary Cirrhosis data, a widely studied dataset in survival analysis, providing further evidence of the practical usefulness of the proposed methods in real-world scenarios. To make the proposed methods more accessible to researchers, we have implemented them in the R package afttest, which is publicly available on the Comprehensive R Archive Network.
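
As a rough illustration of the cumulative-sum and perturbation machinery, the sketch below fits an uncensored log-linear AFT model by least squares, forms a cumulative sum of residuals, and approximates the supremum test's p-value with multiplier perturbations. This is a toy version under simplifying assumptions: the paper's estimator handles censoring via rank-based estimating equations, and the full perturbation accounts for parameter estimation, both of which this sketch omits.

```python
# Toy cusum-of-residuals diagnostic with a multiplier-bootstrap p-value.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
log_t = 1.0 + 0.5 * x + rng.normal(scale=0.3, size=n)  # data from a true AFT model

# Fit log T = a + b x by least squares and form residuals.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, log_t, rcond=None)[0]
e = log_t - X @ beta

# Cumulative sum of residuals ordered by the covariate.
order = np.argsort(x)
cusum = np.cumsum(e[order]) / np.sqrt(n)
sup_obs = np.max(np.abs(cusum))

# Perturbed versions: multiply residual contributions by N(0,1) weights.
B = 1000
sups = np.empty(B)
for b in range(B):
    g = rng.normal(size=n)
    sups[b] = np.max(np.abs(np.cumsum((e * g)[order]) / np.sqrt(n)))

p_value = np.mean(sups >= sup_obs)  # large p-value: no evidence of misfit
print(f"sup statistic = {sup_obs:.3f}, empirical p-value = {p_value:.3f}")
```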

As we progress towards physical implementation of quantum algorithms, it is vital to determine the explicit resource costs needed to run them. Solving linear systems of equations is a fundamental problem with a wide variety of applications across many fields of science, and there is increasing effort to develop quantum linear solver algorithms. Here we introduce a quantum linear solver algorithm combining ideas from adiabatic quantum computing with filtering techniques based on quantum signal processing. We give a closed formula for the non-asymptotic query complexity $Q$ -- the exact number of calls to a block-encoding of the linear system matrix -- as a function of condition number $\kappa$, error tolerance $\epsilon$ and block-encoding scaling factor $\alpha$. Our protocol reduces the cost of quantum linear solvers relative to the state of the art by close to an order of magnitude for early implementations. The asymptotic scaling is $O(\kappa \log(\kappa/\epsilon))$, slightly looser than the $O(\kappa \log(1/\epsilon))$ scaling of the asymptotically optimal algorithm of Costa et al. However, our algorithm outperforms the latter for all condition numbers up to $\kappa \approx 10^{32}$, at which point $Q$ is comparably large and both algorithms are in any case practically infeasible. The present optimized analysis is both problem-agnostic and architecture-agnostic, and hence can be deployed in any quantum algorithm that uses linear solvers as a subroutine.
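
To see how a crossover near $\kappa \approx 10^{32}$ can arise from the two scalings, the following toy comparison uses made-up prefactors chosen only to place the crossover in that regime; the paper's closed formula for $Q$ makes this comparison exact and constant-free.

```python
# Illustrative comparison of the two asymptotic query counts; c1 and c2
# are assumed prefactors for illustration only.
import numpy as np

eps = 1e-10
c1, c2 = 1.0, 4.2                               # assumed prefactors
kappas = np.logspace(1, 40, 40)

q_ours  = c1 * kappas * np.log(kappas / eps)    # O(kappa log(kappa/eps))
q_costa = c2 * kappas * np.log(1.0 / eps)       # O(kappa log(1/eps))

crossover = kappas[np.argmax(q_ours > q_costa)]
print(f"with these constants the curves cross near kappa ~ {crossover:.1e}")
```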

Learning reinforcement learning (RL)-based recommenders from historical user-item interaction sequences is vital for generating high-reward recommendations and improving long-term cumulative benefits. However, existing RL recommendation methods encounter difficulties in (i) estimating value functions for states not contained in the offline training data, and (ii) learning effective state representations from user implicit feedback, due to the lack of contrastive signals. In this work, we propose contrastive state augmentations (CSA) for the training of RL-based recommender systems. To tackle the first issue, we propose four state augmentation strategies to enlarge the state space of the offline data. The proposed method improves the generalization capability of the recommender by making the RL agent visit local state regions and by ensuring that the learned value functions are similar between the original and augmented states. For the second issue, we propose introducing contrastive signals between augmented states and states randomly sampled from other sessions to further improve state representation learning. To verify the effectiveness of the proposed CSA, we conduct extensive experiments on two publicly accessible datasets and one dataset collected from a real-life e-commerce platform. We also conduct experiments in a simulated environment as the online evaluation setting. Experimental results demonstrate that CSA can effectively improve recommendation performance.
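
A minimal sketch of the contrastive ingredient might look as follows. The augmentation (random item dropout), the encoder (mean-pooled item embeddings), and the InfoNCE-style loss are simplifying assumptions for illustration rather than the exact CSA design.

```python
# Two augmentations of the same session should embed close together,
# while states from other sessions act as negatives.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 50, 16
emb = rng.normal(size=(n_items, dim))  # item embedding table

def encode(session):
    """Mean-pool item embeddings as a stand-in state encoder."""
    return emb[session].mean(axis=0)

def augment(session, drop=0.3):
    """State augmentation: randomly drop items from the session."""
    keep = rng.random(len(session)) > drop
    keep[0] = True  # always keep at least one item
    return [s for s, k in zip(session, keep) if k]

def info_nce(anchor, positive, negatives, tau=0.1):
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([sim(anchor, positive)]
                      + [sim(anchor, neg) for neg in negatives]) / tau
    logits -= logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

session = list(rng.integers(0, n_items, size=8))
others = [list(rng.integers(0, n_items, size=8)) for _ in range(5)]
z1, z2 = encode(augment(session)), encode(augment(session))
loss = info_nce(z1, z2, [encode(s) for s in others])
print(f"contrastive loss = {loss:.3f}")
```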

The quantum perceptron, the variational circuit, and the Grover algorithm have been proposed as promising components for quantum machine learning. This paper presents a new quantum perceptron that combines the quantum variational circuit and Grover's algorithm. However, this combination alone does not guarantee that the quantum variational perceptron with Grover's algorithm (QVP-G) will have any advantage over its quantum variational (QVP) and classical counterparts. Here, we examine the performance of QVP and QVP-G by computing their loss functions and analyzing their accuracy on a classification task, and then compare these two quantum models to the classical perceptron (CP). The results show that our two quantum models are more efficient than CP, and that our novel proposed model QVP-G outperforms QVP, demonstrating that Grover's algorithm can be applied beyond unstructured search problems to classification tasks, and can even make the model more accurate.
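
For readers unfamiliar with the Grover component, the following numpy simulation shows amplitude amplification boosting the probability of a marked basis state. How the marking oracle is wired to the variational perceptron's decision is the paper's contribution and is not modelled here; the marked index below is simply fixed by hand.

```python
# State-vector simulation of Grover iterations on n qubits.
import numpy as np

n_qubits = 4
N = 2 ** n_qubits
marked = 6  # assumed target index, for illustration

state = np.full(N, 1 / np.sqrt(N))         # uniform superposition
oracle = np.ones(N); oracle[marked] = -1   # phase-flip the marked state

iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iters):
    state *= oracle                        # oracle: flip marked amplitude
    state = 2 * state.mean() - state       # diffusion: inversion about the mean

print(f"P(marked) after {iters} Grover iterations = {state[marked]**2:.3f}")
```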

Work in quantum machine learning (QML) over the past few years indicates that QML algorithms can function just as well as their classical counterparts, and can even outperform them in some cases. Among the corpus of recent work, many current QML models take advantage of variational quantum algorithm (VQA) circuits, given that their scale is typically small enough to be compatible with NISQ devices and that the method of automatic differentiation used to optimize circuit parameters is familiar from machine learning (ML). While the results bear interesting promise for an era when quantum machines are more readily accessible, if one can achieve similar results through non-quantum methods then there may be a more near-term advantage available to practitioners. To this end, this work investigates the utilization of stochastic methods inspired by a variational quantum version of the long short-term memory (LSTM) model in an attempt to approach the reported successes in performance and rapid convergence. By analyzing the performance of classical, stochastic, and quantum methods, this work aims to elucidate whether it is possible to achieve some of QML's major reported benefits on classical machines by incorporating aspects of its stochasticity.
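
One simple way to inject this kind of stochasticity into a classical LSTM is to perturb the gate pre-activations with Gaussian noise, loosely mimicking the sampling noise of a variational quantum circuit. The placement and scale of the noise below are assumptions for illustration, not the specific method studied here.

```python
# One LSTM step with noisy gate pre-activations.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stochastic_lstm_step(x, h, c, W, b, rng, noise=0.1):
    """Standard LSTM update with Gaussian noise added to the gates."""
    z = W @ np.concatenate([x, h]) + b
    z += rng.normal(scale=noise, size=z.shape)   # stochastic perturbation
    i, f, o, g = np.split(z, 4)                  # input, forget, output, cell gates
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(1)
dx, dh = 3, 5
W = rng.normal(scale=0.1, size=(4 * dh, dx + dh))
b = np.zeros(4 * dh)
h, c = np.zeros(dh), np.zeros(dh)
for t in range(10):
    h, c = stochastic_lstm_step(rng.normal(size=dx), h, c, W, b, rng)
print("final hidden state:", np.round(h, 3))
```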

Attention-based models have been a key element of many recent breakthroughs in deep learning. Two key components of Attention are the structure of its input (which consists of keys, values and queries) and the computations by which these three are combined. In this paper we explore the space of models that share said input structure but are not restricted to the computations of Attention. We refer to this space as Keys-Values-Queries (KVQ) Space. Our goal is to determine whether there are other stackable models in KVQ Space that Attention cannot efficiently approximate, that we can implement with our current deep learning toolbox, and that solve problems of interest to the community. Perhaps surprisingly, the solution to the standard least squares problem satisfies these properties. A neural network module that is able to compute this solution not only enriches the set of computations that a neural network can represent but is also provably a strict generalisation of Linear Attention. Even more surprisingly, the computational complexity of this module is exactly the same as that of Attention, making it a suitable drop-in replacement. With this novel connection between classical machine learning (least squares) and modern deep learning (Attention) established, we justify a variation of our model which generalises regular Attention in the same way. Both new modules are put to the test on a wide spectrum of tasks, ranging from few-shot learning to policy distillation, which confirm their real-world applicability.
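
The connection to Linear Attention is easy to state in code: solving $W = \arg\min_W \|KW - V\|^2$ and emitting $QW$ reduces to $QK^\top V$ when $K^\top K = I$, so the least-squares module strictly generalises that computation. The sketch below checks this equivalence; the ridge term is an assumption added only for numerical stability.

```python
# Least-squares module vs. unnormalised Linear Attention.
import numpy as np

def linear_attention(Q, K, V):
    """Unnormalised linear attention: Q K^T V."""
    return Q @ K.T @ V

def least_squares_module(Q, K, V, lam=1e-6):
    """Fit V ~= K W by (ridge) least squares, then evaluate at the queries."""
    d = K.shape[1]
    W = np.linalg.solve(K.T @ K + lam * np.eye(d), K.T @ V)
    return Q @ W

rng = np.random.default_rng(0)
n_keys, n_queries, d, dv = 32, 8, 16, 4
K = rng.normal(size=(n_keys, d))
V = rng.normal(size=(n_keys, dv))
Q = rng.normal(size=(n_queries, d))

out = least_squares_module(Q, K, V)          # general case

# With orthonormal keys (K^T K = I) the two computations coincide.
K_orth, _ = np.linalg.qr(rng.normal(size=(n_keys, d)))
a = linear_attention(Q, K_orth, V)
b = least_squares_module(Q, K_orth, V, lam=0.0)
print("max deviation with orthonormal keys:", np.abs(a - b).max())
```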

Long-distance quantum communication presents a significant challenge, as maintaining the fidelity of qubits over long distances is difficult. This issue can be addressed through the use of quantum repeaters, which extend entanglement across a network via Bell measurements. However, although quantum repeaters are necessary to enable a wide-area quantum internet, their deployment cost can be prohibitive, so it is important to develop a quantum repeater deployment model that strikes a balance between cost and effectiveness. In this work, we present novel heuristic models to quickly determine a minimum number of quantum repeaters to deploy in large-scale networks so as to provide end-to-end connectivity between all end hosts. The results show that, compared to the linear programming approach, the heuristic methods find near-optimal solutions while reducing the execution time from days to seconds when evaluated against several synthetic and real-world networks such as SURFnet and ESnet. As reliability is key for any network, we also demonstrate that the heuristic method can determine deployment models that endure up to two link/node failures.
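
A toy version of the greedy flavour of such heuristics is sketched below: repeatedly equip the intermediate node that newly connects the most end-host pairs, where entanglement may only be routed through hosts and repeater-equipped nodes. The graph, routing rule, and tie-breaking are illustrative assumptions, not the paper's models.

```python
# Greedy repeater placement on a small synthetic network.
from collections import deque
from itertools import combinations

graph = {  # adjacency list; h1..h3 are end hosts, the rest are candidates
    "h1": ["a"], "h2": ["b"], "h3": ["d"],
    "a": ["h1", "b", "c"], "b": ["h2", "a", "c"],
    "c": ["a", "b", "d"], "d": ["c", "h3"],
}
hosts = ["h1", "h2", "h3"]

def connected(u, v, repeaters):
    """BFS that may only traverse end hosts and repeater-equipped nodes."""
    allowed = repeaters | set(hosts)
    seen, queue = {u}, deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return True
        for nxt in graph[node]:
            if nxt not in seen and nxt in allowed:
                seen.add(nxt)
                queue.append(nxt)
    return False

def pairs_connected(repeaters):
    return sum(connected(u, v, repeaters) for u, v in combinations(hosts, 2))

n_pairs = len(list(combinations(hosts, 2)))
repeaters = set()
while pairs_connected(repeaters) < n_pairs:
    remaining = [n for n in graph if n not in hosts and n not in repeaters]
    # greedily add the candidate that connects the most host pairs
    best = max(remaining, key=lambda n: pairs_connected(repeaters | {n}))
    repeaters.add(best)

print("deployed repeaters:", sorted(repeaters))
```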

L-moments are expected values of linear combinations of order statistics that provide robust alternatives to traditional moments. The estimation of parametric models by matching sample L-moments -- a procedure known as the ``method of L-moments'' -- has been shown to outperform maximum likelihood estimation (MLE) in small samples from popular distributions. The choice of the number of L-moments to be used in estimation remains \textit{ad hoc}, though: researchers typically set the number of L-moments equal to the number of parameters, so as to achieve the order condition for identification. This approach is generally inefficient in larger samples. In this paper, we show that, by properly choosing the number of L-moments and weighting them accordingly, we are able to construct an estimator that outperforms MLE in finite samples and yet does not suffer from efficiency losses asymptotically. We do so by considering a ``generalised'' method of L-moments estimator and deriving its asymptotic properties in a framework where the number of L-moments varies with sample size. We then propose methods to automatically select the number of L-moments in a given sample. Monte Carlo evidence shows our proposed approach is able to outperform MLE (in a mean-squared error sense) in smaller samples, whilst performing as well as MLE in larger samples.
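
For concreteness, the sketch below computes sample L-moments from probability-weighted moments and performs a toy overidentified L-moment fit of an exponential scale $\theta$ (population values $L_1 = \theta$, $L_2 = \theta/2$) under identity weighting. The weighting is a placeholder: the paper derives the efficient weighting and an automatic choice of the number of L-moments.

```python
# Sample L-moments via probability-weighted moments, plus a toy
# overidentified method-of-L-moments fit.
import numpy as np
from scipy.optimize import minimize_scalar

def sample_l_moments(x, r_max=4):
    """First r_max sample L-moments from probability-weighted moments."""
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    b = [x.mean()]
    for r in range(1, 4):
        w = np.ones(n)
        for k in range(1, r + 1):
            w *= (i - k) / (n - k)   # product vanishes for i <= r, as required
        b.append(np.mean(w * x))
    # convert PWMs to L-moments (shifted-Legendre coefficients)
    l = [b[0], 2*b[1] - b[0], 6*b[2] - 6*b[1] + b[0],
         20*b[3] - 30*b[2] + 12*b[1] - b[0]]
    return np.array(l[:r_max])

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100)
l = sample_l_moments(x)

def objective(theta):
    g = np.array([l[0] - theta, l[1] - theta / 2])  # two moment conditions
    return g @ g                                    # identity weighting

theta_hat = minimize_scalar(objective, bounds=(0.1, 10), method="bounded").x
print(f"sample L-moments: {np.round(l, 3)}, theta_hat = {theta_hat:.3f}")
```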

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. The combination of these algorithms enables each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
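
The adaptive-exploration idea can be sketched in a few lines: an agent allocates tasks with epsilon-greedy Q-learning and shrinks its exploration rate as its value estimates stabilise. The confidence proxy below (a running average of the TD error) is an assumption for illustration; the paper's algorithms additionally handle communication limits and knowledge retention, which this toy omits.

```python
# Single-agent toy: allocate tasks to workers, exploring less as estimates settle.
import numpy as np

rng = np.random.default_rng(0)
n_workers = 5
true_quality = rng.random(n_workers)          # hidden reward per worker

q = np.zeros(n_workers)
recent_err, eps, alpha = 1.0, 1.0, 0.1
for step in range(2000):
    a = rng.integers(n_workers) if rng.random() < eps else int(np.argmax(q))
    reward = true_quality[a] + rng.normal(scale=0.05)    # noisy task outcome
    td_err = reward - q[a]
    q[a] += alpha * td_err
    recent_err = 0.99 * recent_err + 0.01 * abs(td_err)  # confidence proxy
    eps = min(1.0, 2.0 * recent_err)          # explore less once estimates settle

print("best worker (true):", int(np.argmax(true_quality)),
      "| best worker (learned):", int(np.argmax(q)),
      "| final exploration rate:", round(eps, 3))
```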

Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.
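
The matching step itself is simple enough to sketch: a query is classified by an attention-weighted vote over the labelled support set, so new classes require no fine-tuning. The normalising embedding below is a placeholder for the deep feature networks used in the paper.

```python
# Attention-weighted nearest-neighbour classification over a support set.
import numpy as np

def matching_predict(support_x, support_y, query, n_classes):
    """Softmax over cosine similarities, then attention-weighted label vote."""
    def embed(v):
        return v / np.linalg.norm(v)       # placeholder embedding
    sims = np.array([embed(query) @ embed(s) for s in support_x])
    att = np.exp(sims) / np.exp(sims).sum()
    onehot = np.eye(n_classes)[support_y]
    return att @ onehot                    # distribution over labels

rng = np.random.default_rng(0)
protos = rng.normal(size=(3, 8))           # three class prototypes
support_x = np.array([protos[c] + 0.1 * rng.normal(size=8) for c in [0, 1, 2]])
support_y = np.array([0, 1, 2])            # one example per class (one-shot)
query = protos[1] + 0.1 * rng.normal(size=8)

probs = matching_predict(support_x, support_y, query, n_classes=3)
print("label distribution:", np.round(probs, 3),
      "-> predicted class", int(np.argmax(probs)))
```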
