
This manuscript gathers and subsumes a series of works on using quantum walks (QWs) to simulate transport phenomena. QWs consist of single, isolated quantum systems evolving in discrete or continuous time according to a causal, shift-invariant unitary evolution on a discrete space. We begin by recalling the necessary fundamentals of linear algebra, including the definitions of Hilbert spaces, tensor states, and linear operators, and then briefly present the principles of quantum mechanics on which this thesis is grounded. After reviewing the QW literature and the main historical approaches to their study, we move on to consider a new property of QWs: plasticity. Plastic QWs are those admitting both a continuous-time discrete-space limit and a fully continuous spacetime limit. We show that such QWs can be used to quantum-simulate a large class of physical phenomena described by transport equations. We investigate this new family of QWs in one and two spatial dimensions, showing that in two dimensions the PDEs we can simulate are more general and include dispersive terms. We show that the above results do not need to rely on a regular grid, and we prove that such QW-based quantum simulators can be defined on simplicial 2-complexes, i.e. triangular lattices. Finally, we extend these results to arbitrary triangulations, proving that in the continuum limit such QWs converge to a transport equation on a general curved surface, including the curved Dirac equation in (2+1) spacetime dimensions.
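
To make the discrete-time dynamics concrete, here is a minimal numpy sketch of a standard one-dimensional coined QW (a coin rotation followed by a coin-conditioned shift). The coin angle, lattice size, and initial state are illustrative assumptions, not the plastic QWs constructed in the thesis.

```python
import numpy as np

# Minimal sketch: 1D discrete-time coined QW. One step = coin rotation
# followed by a coin-conditioned shift, i.e. a causal, shift-invariant
# unitary evolution on a discrete space.
N, steps = 201, 80
theta = np.pi / 4                        # coin angle (Hadamard-like)
coin = np.array([[np.cos(theta),  np.sin(theta)],
                 [np.sin(theta), -np.cos(theta)]])

# psi[x, c]: amplitude at site x with coin state c in {0 (left), 1 (right)}
psi = np.zeros((N, 2), dtype=complex)
psi[N // 2, 0] = 1 / np.sqrt(2)
psi[N // 2, 1] = 1j / np.sqrt(2)         # symmetric initial coin state

for _ in range(steps):
    psi = psi @ coin.T                   # apply the coin at every site
    psi[:, 0] = np.roll(psi[:, 0], -1)   # coin 0 moves left
    psi[:, 1] = np.roll(psi[:, 1], +1)   # coin 1 moves right

prob = (np.abs(psi) ** 2).sum(axis=1)    # position distribution
print("total probability:", round(float(prob.sum()), 6))            # ~1.0: unitary
print("std of position  :", round(float(np.sqrt(prob @ (np.arange(N) - N // 2) ** 2)), 2))  # ballistic spread
```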

Related content


In this expository article we present an overview of the current state of the art in post-quantum group-based cryptography. We describe several families of groups that have been proposed as platforms, with special emphasis on polycyclic groups and graph groups, dealing in particular with their algorithmic properties and cryptographic applications. We then describe some applications of combinatorial algebra in fully homomorphic encryption. Finally, we discuss several open problems in this direction.

In this research, we present a quantum circuit design and implementation for a parallel universal linear bounded automaton. This circuit is able to accelerate the inference of algorithmic structures in data for discovering causal generative models. The computation model is practically restricted in time and space resources. A classical exhaustive enumeration of all possible programs on the automaton is shown for a couple of example cases. We then present the precise quantum circuit design that allows executing a superposition of programs, along with a superposition of inputs as in the standard quantum Turing machine formulation. This is the first time a superposition of classical automata, having the corresponding mechanistic parts of a classical Turing machine, is implemented on the circuit model of quantum computation. The superposition of programs allows our model to be used for experimenting with the space of program-output behaviors in algorithmic information theory. Our implementations in the OpenQL and Qiskit quantum programming languages are copyleft and publicly available on GitHub.
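
As a toy illustration of the key mechanism, the following Qiskit sketch puts a one-qubit "program" register in superposition and lets its value select which operation acts on a one-qubit data register. The two-instruction set (X or H) is a hypothetical assumption for illustration, not the automaton circuit from the paper.

```python
from qiskit import QuantumCircuit, QuantumRegister
from qiskit.quantum_info import Statevector

# Toy "program" instruction set: program value 0 -> apply X to the data qubit,
# program value 1 -> apply H. Superposing the program register runs both
# programs at once, the core idea behind a superposition of classical automata.
prog = QuantumRegister(1, "prog")    # which program to run
data = QuantumRegister(1, "data")    # the automaton's tape cell
qc = QuantumCircuit(prog, data)

qc.h(prog[0])                        # superpose the two programs
qc.x(prog[0])                        # program value 0 ...
qc.cx(prog[0], data[0])              # ... applies X to the data qubit
qc.x(prog[0])                        # undo the flip of the program qubit
qc.ch(prog[0], data[0])              # program value 1 applies H instead

print(Statevector.from_instruction(qc).probabilities_dict())
```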

We study the multi-marginal partial optimal transport (POT) problem between $m$ discrete (unbalanced) measures with at most $n$ supports. We first prove that we can obtain two equivalent forms of the multi-marginal POT problem in terms of the multi-marginal optimal transport problem via novel extensions of the cost tensor. The first equivalent form is derived under the assumption that the total masses of the measures are sufficiently close, while the second does not require any condition on these masses, at the price of a more sophisticated extended cost tensor. Our proof techniques for obtaining these equivalent forms rely on novel graph-theoretic procedures for moving mass that push the transportation plan into appropriate regions. Finally, based on these equivalent forms, we develop an optimization algorithm, named ApproxMPOT, that builds upon the Sinkhorn algorithm for solving the entropic-regularized multi-marginal optimal transport problem. We demonstrate that the ApproxMPOT algorithm can approximate the optimal value of the multi-marginal POT problem with a computational complexity upper bound of the order $\tilde{\mathcal{O}}(m^3(n+1)^{m}/ \varepsilon^2)$, where $\varepsilon > 0$ stands for the desired tolerance.
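
For reference, here is a minimal sketch of the classical two-marginal Sinkhorn iteration that ApproxMPOT builds upon; the cost matrix, regularization strength, and iteration count are illustrative assumptions, and the multi-marginal case replaces these matrix scalings with scalings of a cost tensor.

```python
import numpy as np

# Two-marginal Sinkhorn sketch (not the ApproxMPOT algorithm itself):
# entropic-regularized OT between histograms a and b for cost matrix C.
def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)                   # match the first marginal
        v = b / (K.T @ u)                 # match the second marginal
    P = u[:, None] * K * v[None, :]       # transport plan
    return P, (P * C).sum()               # plan and transport cost

# Toy example: two uniform measures on 5 points of a line.
x = np.linspace(0.0, 1.0, 5)
a = np.full(5, 0.2)
b = np.full(5, 0.2)
C = (x[:, None] - x[None, :]) ** 2        # squared-distance cost
P, cost = sinkhorn(a, b, C)
print("entropic OT cost ~", round(float(cost), 4))
```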

Quantum computing is evolving so quickly that it forces us to revisit, rewrite, and update the foundations of the theory. Basic Quantum Algorithms revisits the first quantum algorithms. The story began in 1985 with Deutsch's attempt to evaluate a function at two domain points simultaneously. Then, in 1992, Deutsch and Jozsa created a quantum algorithm that determines whether a Boolean function is constant or balanced. The next year, Bernstein and Vazirani realized that the same algorithm can be used to find a specific Boolean function in the set of linear Boolean functions. In 1994, Simon presented a new quantum algorithm that determines whether a function is one-to-one or two-to-one exponentially faster than any classical algorithm for the same problem. In the same year, Shor created two new quantum algorithms for factoring integers and calculating discrete logarithms, threatening the cryptographic methods widely used today. In 1995, Kitaev described an alternative version of Shor's algorithms that proved useful in many other applications. The following year, Grover created a quantum search algorithm quadratically faster than its classical counterpart. In this work, all those remarkable algorithms are described in detail with a focus on the circuit model.
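
As a concrete illustration of the circuit model, the following Qiskit sketch builds the Deutsch-Jozsa circuit for a two-bit input with a constant oracle; the oracle choice and register size are assumptions for illustration, not taken from the work itself.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Deutsch-Jozsa for n = 2 input qubits and the constant oracle f(x) = 0
# (an empty oracle). Observing all zeros on the input register indicates
# a constant function.
n = 2
qc = QuantumCircuit(n + 1)
qc.x(n)                         # ancilla in |1>
qc.h(range(n + 1))              # Hadamards on every qubit
# --- oracle U_f for f(x) = 0: identity, nothing to apply ---
qc.h(range(n))                  # final Hadamards on the input register

# Probability of observing |00> on the input register (qubits 0..n-1):
probs = Statevector.from_instruction(qc).probabilities(range(n))
print("P(input register = 00) =", round(float(probs[0]), 3))   # 1.0 for a constant f
```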

From the minimal assumption of post-quantum semi-honest oblivious transfers, we build the first $\epsilon$-simulatable two-party computation (2PC) against quantum polynomial-time (QPT) adversaries that is both constant-round and black-box (for both the construction and the security reduction). A recent work by Chia, Chung, Liu, and Yamakawa (FOCS'21) shows that post-quantum 2PC with standard simulation-based security is impossible in constant rounds, unless either $\mathbf{NP} \subseteq \mathbf{BQP}$ or relying on non-black-box simulation. The $\epsilon$-simulatability we target is a relaxation of the standard simulation-based security that allows for an arbitrarily small noticeable simulation error $\epsilon$. Moreover, when quantum communication is allowed, we can further weaken the assumption to post-quantum secure one-way functions (PQ-OWFs), while maintaining the constant-round and black-box properties. Our techniques also yield the following set of constant-round and black-box two-party protocols secure against QPT adversaries, only assuming black-box access to PQ-OWFs:
- extractable commitments for which the extractor is also an $\epsilon$-simulator;
- $\epsilon$-zero-knowledge commit-and-prove whose commit stage is extractable with $\epsilon$-simulation;
- $\epsilon$-simulatable coin-flipping;
- $\epsilon$-zero-knowledge arguments of knowledge for $\mathbf{NP}$ for which the knowledge extractor is also an $\epsilon$-simulator;
- $\epsilon$-zero-knowledge arguments for $\mathbf{QMA}$.
At the heart of the above results is a black-box extraction lemma showing how to efficiently extract secrets from QPT adversaries while disturbing their quantum state in a controllable manner, i.e., achieving $\epsilon$-simulatability of the post-extraction state of the adversary.

We study the robust maximum flow problem and the robust maximum flow over time problem, where a given number of arcs $\Gamma$ may fail or may be delayed. Two prominent models have been introduced for these problems: either one assigns flow to arcs fulfilling weak flow conservation in any scenario, or one assigns flow to paths, where an arc failure or delay affects a whole path. We provide a unifying framework by presenting novel general models in which we assign flow to subpaths. These models contain the known models as special cases and unify their advantages in order to obtain less conservative robust solutions. We give a thorough complexity analysis of the general models. In particular, we show that the general models are essentially NP-hard, whereas, e.g., in the static case with $\Gamma = 1$, an optimal solution can be computed in polynomial time. Further, we answer the open question about the complexity of the dynamic path model for $\Gamma = 1$. We also compare the solution quality of the different models. Specifically, we show that the general models attain better robust optimal values than the known models, and we prove bounds on the corresponding gaps.
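
For intuition, the sketch below (an assumed toy network, not the subpath model of the paper) computes the nominal maximum flow with networkx and then brute-forces the static $\Gamma = 1$ case by recomputing the flow after each possible single-arc failure.

```python
import networkx as nx

# Nominal max flow on a small network, then the worst case over all
# single-arc failures (the adversary removes exactly one arc).
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)

nominal, _ = nx.maximum_flow(G, "s", "t")
print("nominal max flow:", nominal)                              # 5 on this toy network

worst = nominal
for arc in list(G.edges()):
    H = G.copy()
    H.remove_edge(*arc)
    value, _ = nx.maximum_flow(H, "s", "t")
    worst = min(worst, value)
print("max flow under the worst single-arc failure:", worst)     # 2 here
```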

Min-max optimization problems arise in several key machine learning setups, including adversarial learning and generative modeling. In their general form, in the absence of convexity/concavity assumptions, finding pure equilibria of the underlying two-player zero-sum game is computationally hard [Daskalakis et al., 2021]. In this work we focus instead on finding mixed equilibria, and consider the associated lifted problem in the space of probability measures. By adding entropic regularization, our main result establishes global convergence towards the global equilibrium by using simultaneous gradient ascent-descent with respect to the Wasserstein metric -- a dynamics that admits an efficient particle discretization in high dimensions, as opposed to entropic mirror descent. We complement this positive result with a related entropy-regularized loss which is not bilinear but still convex-concave in the Wasserstein geometry, and for which the simultaneous dynamics do not converge, yet timescale separation does. Taken together, these results showcase the benign geometry of bilinear games in the space of measures, enabling particle dynamics with global qualitative convergence guarantees.
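
The following numpy sketch illustrates the kind of particle discretization referred to above, on an assumed toy instance: simultaneous descent-ascent with Langevin noise (the particle-level counterpart of entropic regularization) for the bilinear payoff $f(x,y) = xy$ with both measures supported on $[-1,1]$. The domain, temperature, and step size are illustrative choices, not the paper's setting.

```python
import numpy as np

# Particle descent-ascent with Langevin noise for the lifted bilinear game
# F(mu, nu) = E_{x~mu, y~nu}[x * y], with both measures on [-1, 1].
rng = np.random.default_rng(0)
n, steps, lr, tau = 2000, 5000, 0.01, 0.5      # particles, iterations, step, temperature

def reflect(z):
    """Reflect particles back into [-1, 1]."""
    z = np.where(z > 1.0, 2.0 - z, z)
    return np.where(z < -1.0, -2.0 - z, z)

x = reflect(rng.normal(0.8, 0.1, n))           # min player's particles
y = reflect(rng.normal(-0.8, 0.1, n))          # max player's particles

for _ in range(steps):
    gx, gy = y.mean(), x.mean()                # gradients of E[x * y] w.r.t. x and y
    noise = np.sqrt(2.0 * lr * tau)
    x = reflect(x - lr * gx + noise * rng.normal(size=n))   # descent + entropy noise
    y = reflect(y + lr * gy + noise * rng.normal(size=n))   # ascent + entropy noise

# At the entropy-regularized equilibrium of this toy game both means are ~0
# (up to sampling noise of order 1/sqrt(n)).
print("mean of x particles:", round(float(x.mean()), 3))
print("mean of y particles:", round(float(y.mean()), 3))
```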

This dissertation studies a fundamental open challenge in deep learning theory: why do deep networks generalize well even while being overparameterized, unregularized and fitting the training data to zero error? In the first part of the thesis, we will empirically study how training deep networks via stochastic gradient descent implicitly controls the networks' capacity. Subsequently, to show how this leads to better generalization, we will derive {\em data-dependent} {\em uniform-convergence-based} generalization bounds with improved dependencies on the parameter count. Uniform convergence has in fact been the most widely used tool in the deep learning literature, thanks to its simplicity and generality. Given its popularity, in this thesis, we will also take a step back to identify the fundamental limits of uniform convergence as a tool to explain generalization. In particular, we will show that in some example overparameterized settings, {\em any} uniform convergence bound will provide only a vacuous generalization bound. With this realization in mind, in the last part of the thesis, we will change course and introduce an {\em empirical} technique to estimate generalization using unlabeled data. Our technique does not rely on any notion of uniform-convergence-based complexity and is remarkably precise. We will theoretically show why our technique enjoys such precision. We will conclude by discussing how future work could explore novel ways to incorporate distributional assumptions in generalization bounds (such as in the form of unlabeled data) and explore other tools to derive bounds, perhaps by modifying uniform convergence or by developing completely new tools altogether.

Implicit probabilistic models are models defined naturally in terms of a sampling procedure, which often induces a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.

Quantum machine learning is expected to be one of the first potential general-purpose applications of near-term quantum devices. A major recent breakthrough in classical machine learning is the notion of generative adversarial training, where the gradients of a discriminator model are used to train a separate generative model. In this work and a companion paper, we extend adversarial training to the quantum domain and show how to construct generative adversarial networks using quantum circuits. We also show how to compute gradients -- a key element in generative adversarial network training -- using another quantum circuit. We give an example of a simple, practical circuit ansatz to parametrize quantum machine learning models and perform a numerical experiment to demonstrate that quantum generative adversarial networks can be trained successfully.
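
As one concrete way circuit-based gradients can be computed, the following Qiskit sketch applies the standard parameter-shift rule to a hypothetical one-qubit ansatz; the ansatz and observable are illustrative assumptions rather than the circuits used in the paper.

```python
import numpy as np
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.quantum_info import Statevector, SparsePauliOp

# Parameter-shift rule for the gradient of <Z> = <psi(theta)|Z|psi(theta)>,
# the kind of circuit-evaluated gradient needed to train quantum GANs.
theta = Parameter("theta")
ansatz = QuantumCircuit(1)
ansatz.ry(theta, 0)                       # toy one-qubit generator ansatz

observable = SparsePauliOp("Z")

def expectation(value):
    bound = ansatz.assign_parameters({theta: value})
    return float(np.real(Statevector.from_instruction(bound).expectation_value(observable)))

point = 0.3
shift = np.pi / 2
grad = 0.5 * (expectation(point + shift) - expectation(point - shift))

print("parameter-shift gradient:", round(grad, 4))
print("analytic  d<Z>/dtheta   :", round(-np.sin(point), 4))   # <Z> = cos(theta)
```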
