Learning about physical systems from quantum-enhanced experiments, relying on a quantum memory and quantum processing, can outperform learning from experiments in which only classical memory and processing are available. Whereas quantum advantages have been established for a variety of state learning tasks, quantum process learning allows for comparable advantages only with a careful problem formulation and is less understood. We establish an exponential quantum advantage for learning an unknown $n$-qubit quantum process $\mathcal{N}$. We show that a quantum memory allows us to efficiently solve the following tasks: (a) learning the Pauli transfer matrix of an arbitrary $\mathcal{N}$, (b) predicting expectation values of bounded Pauli-sparse observables measured on the output of an arbitrary $\mathcal{N}$ upon input of a Pauli-sparse state, and (c) predicting expectation values of arbitrary bounded observables measured on the output of an unknown $\mathcal{N}$ with sparse Pauli transfer matrix upon input of an arbitrary state. With quantum memory, these tasks can be solved using linearly-in-$n$ many copies of the Choi state of $\mathcal{N}$, and even time-efficiently in the case of (b). In contrast, any learner without quantum memory requires exponentially-in-$n$ many queries, even when querying $\mathcal{N}$ on subsystems of adaptively chosen states and performing adaptively chosen measurements. In proving this separation, we extend existing shadow tomography upper and lower bounds from states to channels via the Choi-Jamiolkowski isomorphism. Moreover, we combine Pauli transfer matrix learning with polynomial interpolation techniques to develop a procedure for learning arbitrary Hamiltonians, which may have non-local all-to-all interactions, from short-time dynamics. Our results highlight the power of quantum-enhanced experiments for learning highly complex quantum dynamics.
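
To make task (a) concrete, the following minimal NumPy sketch extracts the Pauli transfer matrix of a single-qubit channel from its exactly known Choi state, using the identity $R_{ij} = \mathrm{Tr}[(P_i \otimes P_j^{T})\,\Lambda_{\mathcal{N}}] = \frac{1}{d}\mathrm{Tr}[P_i\,\mathcal{N}(P_j)]$. The depolarizing channel and the exact (rather than sampled) Choi state are illustrative assumptions; the sketch shows the linear-algebraic identity behind the task, not the sample-efficient learning procedure of the paper.

```python
import numpy as np
from itertools import product

# Single-qubit Paulis (illustrative; the paper treats n-qubit channels).
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

def choi_state(kraus_ops, d=2):
    """Choi state Lambda = (N ⊗ id)(|Omega><Omega|), |Omega> = (1/sqrt d) sum_k |kk>."""
    omega = np.zeros((d * d, 1), dtype=complex)
    for k in range(d):
        omega[k * d + k] = 1 / np.sqrt(d)
    rho = omega @ omega.conj().T
    return sum(np.kron(K, I2) @ rho @ np.kron(K, I2).conj().T for K in kraus_ops)

def pauli_transfer_matrix(choi):
    """R_ij = Tr[(P_i ⊗ P_j^T) Lambda] = (1/d) Tr[P_i N(P_j)]."""
    R = np.zeros((4, 4))
    for (i, Pi), (j, Pj) in product(enumerate(paulis), repeat=2):
        R[i, j] = np.real(np.trace(np.kron(Pi, Pj.T) @ choi))
    return R

# Example: single-qubit depolarizing channel with strength p (assumed for illustration).
p = 0.3
kraus = [np.sqrt(1 - 3 * p / 4) * I2, np.sqrt(p / 4) * X,
         np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]
print(np.round(pauli_transfer_matrix(choi_state(kraus)), 3))
# Expected: diag(1, 1-p, 1-p, 1-p) for the depolarizing channel.
```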

Related Content

The Fr\'{e}chet distance is one of the most studied distance measures between curves $P$ and $Q$. The data structure variant of the problem is a longstanding open problem: Efficiently preprocess $P$, so that for any $Q$ given at query time, one can efficiently approximate their Fr\'{e}chet distance. There exist conditional lower bounds that prohibit $(1 + \varepsilon)$-approximate Fr\'{e}chet distance computations in subquadratic time, even when preprocessing $P$ using any polynomial amount of time and space. As a consequence, the problem has been studied under various restrictions: restricting $Q$ to be a (horizontal) segment, or requiring $P$ and $Q$ to be so-called \emph{realistic} input curves. We give a data structure for $(1+\varepsilon)$-approximate discrete Fr\'{e}chet distance in any metric space $\mathcal{X}$ between a realistic input curve $P$ and any query curve $Q$. After preprocessing the input curve $P$ (of length $|P|=n$) in $O(n \log n)$ time, we may answer queries specifying a query curve $Q$ and an $\varepsilon$, and output a value $d(P,Q)$ which is at most a $(1+\varepsilon)$-factor away from the true Fr\'{e}chet distance between $Q$ and $P$. Our query time is asymptotically linear in $|Q|=m$, $\frac{1}{\varepsilon}$, $\log n$, and the realism parameter $c$ or $\kappa$. Our data structure is the first to: adapt to the approximation parameter $\varepsilon$ at query time, handle query curves with arbitrarily many vertices, work for any ambient space of the curves, or be dynamic. The method presented in this paper simplifies and generalizes previous contributions to the static problem variant. We obtain efficient queries (and therefore static algorithms) for Fr\'{e}chet distance computation in high-dimensional spaces and other ambient metric spaces.
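
For reference, the exact discrete Fréchet distance of two point sequences can be computed with the classical quadratic-time dynamic program sketched below (Euclidean ground distances are assumed for illustration); the data structure above is precisely about avoiding this quadratic cost at query time for realistic input curves.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Exact discrete Frechet distance between point sequences P and Q
    (O(nm) dynamic program; Euclidean ground metric assumed for illustration)."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    n, m = len(P), len(Q)
    # Pairwise ground distances d(P_i, Q_j).
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    # dp[i, j] = discrete Frechet distance of prefixes P[:i+1], Q[:j+1].
    dp = np.full((n, m), np.inf)
    dp[0, 0] = D[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best_prev = min(dp[i - 1, j] if i > 0 else np.inf,
                            dp[i, j - 1] if j > 0 else np.inf,
                            dp[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            dp[i, j] = max(best_prev, D[i, j])
    return dp[-1, -1]

# Example usage on two parallel planar curves.
P = [(0, 0), (1, 0), (2, 0)]
Q = [(0, 1), (1, 1), (2, 1)]
print(discrete_frechet(P, Q))  # 1.0
```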

Overparametrization is one of the most surprising and notorious phenomena in machine learning. Recently, there have been several efforts to study if, and how, Quantum Neural Networks (QNNs) acting in the absence of hardware noise can be overparametrized. In particular, it has been proposed that a QNN can be defined as overparametrized if it has enough parameters to explore all available directions in state space. That is, if the rank of the Quantum Fisher Information Matrix (QFIM) for the QNN's output state is saturated. Here, we explore how the presence of noise affects the overparametrization phenomenon. Our results show that noise can "turn on" previously-zero eigenvalues of the QFIM. This enables the parametrized state to explore directions that were otherwise inaccessible, thus potentially turning an overparametrized QNN into an underparametrized one. For small noise levels, the QNN is quasi-overparametrized, as large eigenvalues coexist with small ones. Then, we prove that as the magnitude of noise increases, all the eigenvalues of the QFIM become exponentially suppressed, indicating that the state becomes insensitive to any change in the parameters. As such, there is a pull-and-tug effect where noise can enable new directions, but also suppress the sensitivity to parameter updates. Finally, our results imply that current QNN capacity measures are ill-defined when hardware noise is present.
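
As a small illustration of the noiseless notion of overparametrization via the QFIM rank, the sketch below estimates the pure-state QFIM of a toy two-parameter single-qubit circuit by finite differences and reports its eigenvalues; the circuit, step size, and parameter values are illustrative assumptions, and hardware noise is not modelled.

```python
import numpy as np

def state(theta):
    """|psi(theta)> = RZ(theta[1]) RY(theta[0]) |0>, a toy two-parameter circuit."""
    t0, t1 = theta
    ry = np.array([[np.cos(t0 / 2), -np.sin(t0 / 2)],
                   [np.sin(t0 / 2),  np.cos(t0 / 2)]], dtype=complex)
    rz = np.diag([np.exp(-1j * t1 / 2), np.exp(1j * t1 / 2)])
    return rz @ ry @ np.array([1, 0], dtype=complex)

def qfim(theta, eps=1e-6):
    """Pure-state QFIM  F_jk = 4 Re(<d_j psi|d_k psi> - <d_j psi|psi><psi|d_k psi>),
    with derivatives estimated by central finite differences (step eps assumed)."""
    theta = np.asarray(theta, dtype=float)
    psi = state(theta)
    grads = []
    for j in range(len(theta)):
        e = np.zeros_like(theta); e[j] = eps
        grads.append((state(theta + e) - state(theta - e)) / (2 * eps))
    F = np.zeros((len(theta), len(theta)))
    for j, gj in enumerate(grads):
        for k, gk in enumerate(grads):
            F[j, k] = 4 * np.real(np.vdot(gj, gk) - np.vdot(gj, psi) * np.vdot(psi, gk))
    return F

F = qfim([0.7, 0.3])
print(np.round(np.linalg.eigvalsh(F), 4))
# ~ [sin^2(0.7), 1]: full rank here, but one eigenvalue vanishes at theta[0] = 0.
```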

Learning in games has emerged as a powerful tool for Machine Learning with numerous applications. Several recent works have studied quantum zero-sum games, an extension of classical games where players have access to quantum resources, from a learning perspective. Going beyond the competitive regime, this work introduces quantum potential games as well as learning algorithms for this class of games. We introduce non-commutative extensions of the continuous-time replicator dynamics and the discrete-time Baum-Eagon/linear multiplicative weights update and study their convergence properties. Finally, we establish connections between quantum potential games and quantum separability, allowing us to reinterpret our learning dynamics as algorithms for the Best Separable State problem. We validate our theoretical findings through extensive experiments.
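
For intuition, a classical analogue of the discrete-time Baum-Eagon/linear multiplicative weights update is sketched below on a two-player identical-interest (potential) game with an entrywise nonnegative payoff matrix, where the shared potential $x^\top A y$ is non-decreasing along the iterates by the Baum-Eagon inequality; the payoff matrix and step count are illustrative, and the quantum (non-commutative) dynamics of the paper act on density matrices rather than probability simplices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(size=(4, 5))      # nonnegative shared payoff matrix (illustrative)
x = np.full(4, 0.25)              # player 1's mixed strategy on the simplex
y = np.full(5, 0.2)               # player 2's mixed strategy on the simplex

potential = lambda x, y: x @ A @ y
p0 = potential(x, y)

for _ in range(100):
    # Baum-Eagon / linear multiplicative weights: reweight each pure strategy by its
    # marginal payoff and renormalize; both players update simultaneously.
    x, y = x * (A @ y), y * (A.T @ x)
    x, y = x / x.sum(), y / y.sum()

print(f"potential: {p0:.4f} -> {potential(x, y):.4f} (non-decreasing along the iterates)")
```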

We demonstrate quantum advantage under several basic assumptions, most notably based only on the existence of one-way functions (OWFs). We introduce inefficient-verifier proofs of quantumness (IV-PoQ), and construct them from classical bit commitments. An IV-PoQ is an interactive protocol between a verifier and a quantum prover consisting of two phases. In the first phase, the verifier is probabilistic polynomial-time, and it interacts with the prover. In the second phase, the verifier becomes inefficient, and makes its decision based on the transcript of the first phase. If the prover is honest, the inefficient verifier accepts with high probability, but any classical malicious prover only has a small probability of being accepted by the inefficient verifier. Our construction demonstrates the following results: (1) If one-way functions exist, then IV-PoQ exist. (2) If distributional collision-resistant hash functions exist (which exist if hard-on-average problems in $\mathbf{SZK}$ exist), then constant-round IV-PoQ exist. We also demonstrate quantum advantage based on worst-case-hard assumptions. We define auxiliary-input IV-PoQ (AI-IV-PoQ) that only require that for any malicious prover, there exist infinitely many auxiliary inputs under which the prover cannot cheat. We construct AI-IV-PoQ from an auxiliary-input version of commitments in a similar way, showing that (1) If auxiliary-input one-way functions exist (which exist if $\mathbf{CZK}\not\subseteq\mathbf{BPP}$), then AI-IV-PoQ exist. (2) If auxiliary-input collision-resistant hash functions exist (which is equivalent to $\mathbf{PWPP}\nsubseteq \mathbf{FBPP}$) or $\mathbf{SZK}\nsubseteq \mathbf{BPP}$, then constant-round AI-IV-PoQ exist.

In this article, we propose a closed-form solution for the capacity of the single quantum channel. A Gaussian-distributed input is considered for the analytical calculation of the capacity. In a couple of our previous papers, we introduced models for the joint quantum noise and the corresponding received signals; in this work, we prove that these models are Gaussian mixture distributions. We show how to handle both cases: (I) the Gaussian mixture distribution for scalar variables and (II) the Gaussian mixture distribution for random vectors. Our target is to calculate the entropy of the joint noise and the entropy of the received signal in order to obtain the capacity expression of the quantum channel. The main challenge is working with the functional form of the Gaussian mixture distribution: its entropy admits no closed-form solution because of the logarithm of a sum of exponential functions. As a solution, we propose a lower bound and an upper bound for each of the entropies of the joint noise and the received signal; these inequalities then lead to an upper bound on the mutual information and hence on the maximum achievable data rate, i.e., the capacity. The resulting closed-form capacity expression distinguishes this paper from our previous works. The capacity expression and the corresponding bounds are derived for both cases: the Gaussian mixture distribution for scalar variables and the Gaussian mixture distribution for random vectors.
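
Because the crux is that the differential entropy of a Gaussian mixture has no closed form, the sketch below brackets it numerically for a scalar mixture: the conditional entropy $\sum_k w_k h(\mathcal{N}(\mu_k,\sigma_k^2))$ gives a lower bound, the Gaussian with the mixture's overall variance gives the maximum-entropy upper bound, and a Monte Carlo estimate falls in between. These are generic textbook bounds shown for illustration, not the specific bounds derived in the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
w  = np.array([0.3, 0.7])          # mixture weights (illustrative)
mu = np.array([-2.0, 1.5])         # component means
sd = np.array([0.8, 1.2])          # component standard deviations

def gaussian_entropy(var):
    return 0.5 * np.log(2 * np.pi * np.e * var)

# Lower bound: conditional entropy h(X | component label Z) = sum_k w_k h(N_k).
lower = np.sum(w * gaussian_entropy(sd ** 2))

# Upper bound: the Gaussian with the mixture's overall variance has maximal entropy.
mean_tot = np.sum(w * mu)
var_tot = np.sum(w * (sd ** 2 + mu ** 2)) - mean_tot ** 2
upper = gaussian_entropy(var_tot)

# Monte Carlo estimate of the true mixture entropy h(X) = -E[log p(X)].
z = rng.choice(len(w), size=200_000, p=w)
x = rng.normal(mu[z], sd[z])
pdf = np.sum(w * norm.pdf(x[:, None], loc=mu, scale=sd), axis=1)
mc = -np.mean(np.log(pdf))

print(f"lower {lower:.3f} <= MC {mc:.3f} <= upper {upper:.3f}")
```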

The complexity of free games with two or more classical players was essentially settled by Aaronson, Impagliazzo, and Moshkovitz (CCC'14). There are two complexity classes that can be considered quantum analogues of classical free games: (1) AM*, the multiprover interactive proof class corresponding to free games with entangled players, and, somewhat less obviously, (2) BellQMA(2), the class of quantum Merlin-Arthur proof systems with two unentangled Merlins, whose proof states are separately measured by Arthur. In this work, we make significant progress towards a tight characterization of both of these classes. 1. We show a BellQMA(2) protocol for 3SAT on $n$ variables, where the total amount of communication is $\tilde{O}(\sqrt{n})$. This answers an open question of Chen and Drucker (2010) and also shows, conditional on ETH, that the algorithm of Brand\~{a}o, Christandl and Yard (STOC'11) is tight up to logarithmic factors. 2. We show that $\mathsf{AM}^*[n_{\text{provers}} = 2, q = O(1), a =\mathrm{poly}\log(n)] = \mathsf{RE}$, i.e. that free entangled games with constant-sized questions are as powerful as general entangled games. Our result is a significant improvement over the headline result of Ji et al. (2020), whose MIP* protocol for the halting problem has $\mathrm{poly}(n)$-sized questions and answers. 3. We obtain a zero-gap AM* protocol for a $\Pi_2$ complete language with constant-size questions and almost logarithmically large answers, improving on the headline result of Mousavi, Nezhadi and Yuen (STOC'22). 4. Using a connection to the nonuniform complexity of the halting problem we show that any MIP* protocol for RE requires $\Omega(\log n)$ bits of communication. It follows that our results in item 3 are optimal up to an $O(\log^* n)$ factor, and that the gapless compression theorems of MNY'22 are asymptotically optimal.

Our goal is to develop theory and algorithms for establishing fundamental limits on performance for a given task imposed by a robot's sensors. In order to achieve this, we define a quantity that captures the amount of task-relevant information provided by a sensor. Using a novel version of the generalized Fano inequality from information theory, we demonstrate that this quantity provides an upper bound on the highest achievable expected reward for one-step decision making tasks. We then extend this bound to multi-step problems via a dynamic programming approach. We present algorithms for numerically computing the resulting bounds, and demonstrate our approach on three examples: (i) the lava problem from the literature on partially observable Markov decision processes, (ii) an example with continuous state and observation spaces corresponding to a robot catching a freely-falling object, and (iii) obstacle avoidance using a depth sensor with non-Gaussian noise. We demonstrate the ability of our approach to establish strong limits on achievable performance for these problems by comparing our upper bounds with achievable lower bounds (computed by synthesizing or learning concrete control policies).

Differentially private optimization for nonconvex smooth objectives is considered. In previous work, the best known utility bound is $\widetilde O(\sqrt{d}/(n\varepsilon_\mathrm{DP}))$ in terms of the squared full gradient norm, which is achieved by Differentially Private Gradient Descent (DP-GD) as an instance, where $n$ is the sample size, $d$ is the problem dimensionality and $\varepsilon_\mathrm{DP}$ is the differential privacy parameter. To improve the best known utility bound, we propose a new differentially private optimization framework called \emph{DIFF2 (DIFFerential private optimization via gradient DIFFerences)} that constructs a differentially private global gradient estimator with possibly quite small variance based on communicated \emph{gradient differences} rather than gradients themselves. It is shown that DIFF2 with a gradient descent subroutine achieves the utility of $\widetilde O(d^{2/3}/(n\varepsilon_\mathrm{DP})^{4/3})$, which can be significantly better than the previous one in terms of the dependence on the sample size $n$. To the best of our knowledge, this is the first fundamental result to improve the standard utility $\widetilde O(\sqrt{d}/(n\varepsilon_\mathrm{DP}))$ for nonconvex objectives. Additionally, a more computation- and communication-efficient subroutine is combined with DIFF2 and its theoretical analysis is also given. Numerical experiments are conducted to validate the superiority of the DIFF2 framework.
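
For context, the baseline that DIFF2 improves upon is DP-GD with per-example gradient clipping and Gaussian noise; a minimal sketch for linear regression is given below. The loss, clipping threshold, and noise multiplier are illustrative assumptions, and a real deployment would calibrate the noise with a privacy accountant for the target $(\varepsilon_\mathrm{DP},\delta)$.

```python
import numpy as np

def dp_gd(X, y, epochs=200, lr=0.1, clip=1.0, noise_multiplier=1.0, seed=0):
    """DP-GD sketch: per-example gradient clipping + Gaussian noise on the summed gradient.
    noise_multiplier is left as a hyperparameter here; a real implementation would set it
    via a privacy accountant for the desired privacy budget."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Per-example gradients of the squared loss 0.5 * (x.w - y)^2.
        per_ex = (X @ w - y)[:, None] * X                     # shape (n, d)
        norms = np.linalg.norm(per_ex, axis=1, keepdims=True)
        clipped = per_ex / np.maximum(1.0, norms / clip)      # clip to L2 norm <= clip
        noisy_grad = (clipped.sum(axis=0)
                      + rng.normal(scale=noise_multiplier * clip, size=d)) / n
        w -= lr * noisy_grad
    return w

# Toy usage on synthetic data: the private estimate roughly recovers w_true.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=500)
print(np.round(dp_gd(X, y), 2), np.round(w_true, 2))
```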

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equations solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
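
As a minimal illustration of the neural-ODE idea, the sketch below integrates $\mathrm{d}z/\mathrm{d}t = f_\theta(z, t)$ with an explicit Euler scheme, so that each step is a residual update of the kind realised by a residual-network block; the vector field, step size, and random weights are toy assumptions, and practical work would use adaptive solvers and the gradient techniques surveyed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny vector field f_theta(z, t): one hidden layer with tanh nonlinearity.
d, h = 2, 16
W1, b1 = 0.5 * rng.normal(size=(h, d + 1)), np.zeros(h)   # +1 for time as an input
W2, b2 = 0.5 * rng.normal(size=(d, h)), np.zeros(d)

def f(z, t):
    zt = np.concatenate([z, [t]])
    return W2 @ np.tanh(W1 @ zt + b1) + b2

def odeint_euler(z0, t0=0.0, t1=1.0, steps=100):
    """Explicit Euler integration of dz/dt = f(z, t); each step is a residual update
    z <- z + dt * f(z, t), mirroring a residual-network block."""
    z, dt = np.array(z0, dtype=float), (t1 - t0) / steps
    for k in range(steps):
        z = z + dt * f(z, t0 + k * dt)
    return z

print(odeint_euler([1.0, -1.0]))   # terminal state z(1) of the toy flow
```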

Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g. evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs are able to significantly outperform previous approaches while being at the same time more computationally efficient. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of different components of our framework and devise the best configuration that achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
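
To make the memory-module idea concrete, the simplified PyTorch sketch below keeps a per-node memory vector that a GRU cell updates whenever a timed interaction event arrives; it omits message aggregation, the embedding module, and temporal attention, so it illustrates the mechanism rather than the full TGN architecture or the authors' implementation.

```python
import torch
import torch.nn as nn

class NodeMemory(nn.Module):
    """Per-node memory vectors updated from timed interaction events (simplified)."""
    def __init__(self, num_nodes, memory_dim=32, event_dim=8):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_nodes, memory_dim))
        self.register_buffer("last_update", torch.zeros(num_nodes))
        # Raw message for node u on event (u, v, t, feat): [memory of v, time delta, event features].
        self.cell = nn.GRUCell(memory_dim + 1 + event_dim, memory_dim)

    @torch.no_grad()  # inference-style sketch; training the memory needs care with gradient flow
    def update(self, src, dst, t, feat):
        for u, v in ((src, dst), (dst, src)):   # update both endpoints of the interaction
            dt = torch.tensor([t - self.last_update[u].item()])
            msg = torch.cat([self.memory[v], dt, feat]).unsqueeze(0)
            self.memory[u] = self.cell(msg, self.memory[u].unsqueeze(0)).squeeze(0)
            self.last_update[u] = t

# Toy usage: stream a couple of timed events through the memory.
mem = NodeMemory(num_nodes=5)
mem.update(src=0, dst=3, t=1.0, feat=torch.randn(8))
mem.update(src=3, dst=4, t=2.5, feat=torch.randn(8))
print(mem.memory[3])
```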
