
A well-known feature of quantum information is that it cannot, in general, be cloned. Recently, a number of quantum-enabled information-processing tasks have demonstrated various forms of uncloneability; among these forms, piracy is an adversarial model that gives maximal power to the adversary, controlling both the cloning-type attack and the evaluation/verification stage. Here, we initiate the study of anti-piracy proof systems, which are proof systems that inherently prevent piracy attacks. We define anti-piracy proof systems, demonstrate such a proof system for an oracle problem, and also describe a candidate anti-piracy proof system for NP. We also study quantum proof systems that are cloneable and settle the famous QMA vs. QMA(2) debate in this setting. Lastly, we discuss how one can approach the QMA vs. QCMA question by studying its cloneable variants.

Robust and stable high order numerical methods for solving partial differential equations are attractive because they are efficient on modern and next generation hardware architectures. However, the design of provably stable numerical methods for nonlinear hyperbolic conservation laws poses a significant challenge. We present the dual-pairing (DP) and upwind summation-by-parts (SBP) finite difference (FD) framework for accurate and robust numerical approximations of nonlinear conservation laws. The framework has an inbuilt "limiter" whose goal is to detect and effectively resolve regions where the solution is poorly resolved and/or discontinuities are found. The DP SBP FD operators are a dual pair of backward and forward FD stencils, which together preserve the SBP property. In addition, the DP SBP FD operators are designed to be upwind, that is, they come with innate dissipation everywhere, as opposed to traditional SBP and collocated discontinuous Galerkin spectral element methods, which can only induce dissipation through numerical fluxes acting at element interfaces. We combine the DP SBP operators with skew-symmetric and upwind flux splitting of nonlinear hyperbolic conservation laws. Our semi-discrete approximation is provably entropy-stable for arbitrary nonlinear hyperbolic conservation laws. The framework is high order accurate, provably entropy-stable, convergent, and avoids several pitfalls of current state-of-the-art high order methods. We give specific examples using the inviscid Burgers' equation, nonlinear shallow water equations and compressible Euler equations of gas dynamics. Numerical experiments are presented to verify accuracy and demonstrate the robustness of our numerical framework.
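The dual-pairing structure described above can be illustrated in the simplest possible setting. The sketch below (our own construction, not the paper's operators) builds first-order forward and backward difference stencils on a periodic grid and checks the two properties the abstract highlights: the pair satisfies the SBP adjoint relation, and the symmetric part of the upwind splitting is dissipative.

```python
import numpy as np

# Illustrative dual pair on a periodic grid (first order, boundary-free case);
# grid size and operator construction are our assumptions, not the paper's.
n, h = 32, 1.0 / 32
I = np.eye(n)
Dp = (np.roll(I, -1, axis=0) - I) / h   # forward difference: (D+ u)_i = (u_{i+1} - u_i)/h
Dm = (I - np.roll(I, 1, axis=0)) / h    # backward difference: (D- u)_i = (u_i - u_{i-1})/h

# Dual-pairing / SBP property (no boundary terms on a periodic grid): D+ = -(D-)^T.
sbp_residual = np.abs(Dp.T + Dm).max()

# Innate upwind dissipation: the symmetric part (D+ - D-)/2 is negative semidefinite,
# so the splitting dissipates energy everywhere rather than only at interfaces.
max_eig = np.linalg.eigvalsh((Dp - Dm) / 2).max()
```

Here `(Dp - Dm) / 2` is, up to scaling, a second-difference operator, which is why the dissipation acts like a built-in smoother in under-resolved regions.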

In this work, we initiate the study of Hamiltonian learning for positive temperature bosonic Gaussian states, the quantum generalization of the widely studied problem of learning Gaussian graphical models. We obtain efficient protocols, both in sample and computational complexity, for the task of inferring the parameters of their underlying quadratic Hamiltonian under the assumption of bounded temperature, squeezing, displacement and maximal degree of the interaction graph. Our protocol only requires heterodyne measurements, which are often experimentally feasible, and has a sample complexity that scales logarithmically with the number of modes. Furthermore, we show that it is possible to learn the underlying interaction graph in a similar setting, with comparable sample complexity. Taken together, our results place the quantum Hamiltonian learning problem for continuous-variable systems in a much more advanced state than its spin counterpart, where state-of-the-art results are either unavailable or quantitatively inferior to ours. In addition, we use our techniques to obtain the first results on learning Gaussian states in trace distance with quadratic scaling in the precision and polynomial scaling in the number of modes, albeit imposing certain restrictions on the Gaussian states. Our main technical innovations are several continuity bounds for the covariance and Hamiltonian matrix of a Gaussian state, which are of independent interest, combined with what we call the local inversion technique. In essence, the local inversion technique allows us to reliably infer the Hamiltonian of a Gaussian state by estimating, in parallel, only submatrices of the covariance matrix whose size scales with the desired precision, but not with the number of modes. This way we bypass the need to obtain precise global estimates of the covariance matrix, controlling the sample complexity.
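The intuition behind local inversion can be seen in a classical toy analogue (our own illustration; the quantum covariance-matrix setting is analogous in spirit). For a banded "Hamiltonian" (precision) matrix, inverting only a small window of the covariance matrix recovers the interior rows of the Hamiltonian: the Schur-complement correction from the rest of the system only touches the window's edge rows.

```python
import numpy as np

# Hypothetical banded Hamiltonian/precision matrix J on n modes; its inverse
# plays the role of the (here exactly known) covariance matrix. Sizes are arbitrary.
n, half = 60, 5
J = np.eye(n) + 0.3 * (np.eye(n, k=1) + np.eye(n, k=-1))
Sigma = np.linalg.inv(J)

# Local inversion: invert only a small window of the covariance around mode i,
# never touching the full n x n matrix.
i = 30
W = np.arange(i - half, i + half + 1)
J_local = np.linalg.inv(Sigma[np.ix_(W, W)])

# The center row of the local inverse reproduces the corresponding row of J,
# because for a banded J the correction term is supported on the window's edges.
err = np.abs(J_local[half, :] - J[i, W]).max()
```

In the learning setting one would estimate `Sigma[W, W]` from measurements, so the window size is set by the desired precision rather than by the number of modes.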

This work is the first exploration of proof-theoretic semantics for a substructural logic. It focuses on the base-extension semantics (B-eS) for intuitionistic multiplicative linear logic (IMLL). The starting point is a review of Sandqvist's B-eS for intuitionistic propositional logic (IPL), for which we propose an alternative treatment of conjunction that takes the form of the generalized elimination rule for the connective. The resulting semantics is shown to be sound and complete. This motivates our main contribution, a B-eS for IMLL, in which the definitions of the logical constants all take the form of their elimination rule and for which soundness and completeness are established.

We perform a quantitative assessment of different strategies to compute the contribution due to surface tension in incompressible two-phase flows using a conservative level set (CLS) method. More specifically, we compare classical approaches, such as the direct computation of the curvature from the level set or the Laplace-Beltrami operator, with an evolution equation for the mean curvature recently proposed in the literature. We consider the test case of a static bubble, for which an exact solution for the pressure jump across the interface is available, and the test case of an oscillating bubble, showing pros and cons of the different approaches.
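The classical baseline mentioned above, direct computation of the curvature from the level set, amounts to evaluating the divergence of the normalized gradient, kappa = div(grad phi / |grad phi|). A minimal finite-difference sketch (grid, radius, and bandwidth are our choices, not the paper's) checks this against the exact curvature 1/R of a circle:

```python
import numpy as np

# Signed-distance level set of a circle of radius R; exact curvature is 1/R.
R, n = 1.0, 400
x = np.linspace(-2.0, 2.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - R

# kappa = div( grad(phi) / |grad(phi)| ), discretized with central differences.
gx, gy = np.gradient(phi, h)
norm = np.sqrt(gx**2 + gy**2) + 1e-12   # guard against division by zero at the center
nx, ny = gx / norm, gy / norm
kappa = np.gradient(nx, h)[0] + np.gradient(ny, h)[1]

# Average the discrete curvature in a narrow band around the interface phi = 0.
band = np.abs(phi) < 2 * h
kappa_avg = kappa[band].mean()
```

For a static bubble this curvature estimate feeds directly into the Laplace pressure jump across the interface.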

The logic of bunched implications (BI) can be seen as the free combination of intuitionistic propositional logic (IPL) and intuitionistic multiplicative linear logic (IMLL). We present here a base-extension semantics (B-eS) for BI in the spirit of Sandqvist's B-eS for IPL, deferring an analysis of proof-theoretic validity, in the sense of Dummett and Prawitz, to another occasion. Essential to BI's formulation in proof-theoretic terms is the concept of a `bunch' of hypotheses that is familiar from relevance logic. Bunches amount to trees whose internal vertices are labelled with either the IMLL context-former or the IPL context-former and whose leaves are labelled with propositions or units for the context-formers. This structure presents significant technical challenges in setting up a base-extension semantics for BI. Our approach starts from the B-eS for IPL and the B-eS for IMLL and provides a systematic combination. Such a combination requires that base rules carry bunched structure, and so requires a more complex notion of derivability in a base and a correspondingly richer notion of support in a base. One reason why BI is a substructural logic of interest is that the `resource interpretation' of its semantics, given in terms of sharing and separation and which gives rise to Separation Logic in the field of program verification, is quite distinct from the `number-of-uses' reading of the propositions of linear logic as resources. This resource reading of BI provides useful intuitions in the formulation of its proof-theoretic semantics. We discuss a simple example of the use of the given B-eS in security modelling.

Polynomial approximations of functions are widely used in scientific computing. In certain applications, it is often desired to require the polynomial approximation to be non-negative (resp. non-positive), or bounded within a given range, due to constraints posed by the underlying physical problems. Efficient numerical methods are thus needed to enforce such conditions. In this paper, we discuss effective numerical algorithms for polynomial approximation under non-negativity constraints. We first formulate the constrained optimization problem, its primal and dual forms, and then discuss efficient first-order convex optimization methods, with a particular focus on high dimensional problems. Numerical examples are provided, for up to $200$ dimensions, to demonstrate the effectiveness and scalability of the methods.
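To make the constrained-approximation problem concrete, the sketch below fits a cubic to a step function, whose plain least-squares fit undershoots below zero, and then enforces non-negativity at the collocation points. The enforcement here is a simple quadratic-penalty/active-set iteration of our own choosing, standing in for the paper's first-order convex optimization methods; basis, degree, and penalty weight are illustrative assumptions.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 201)
f = (x > 0).astype(float)               # step function: its cubic LS fit dips negative
V = np.vander(x, 4, increasing=True)    # monomial basis 1, x, x^2, x^3

# Unconstrained least squares: undershoots below zero near x = -0.5.
c_ls, *_ = np.linalg.lstsq(V, f, rcond=None)
p_ls = V @ c_ls

# Penalty/active-set iteration: penalize only the currently violated points
# and re-solve the regularized normal equations until the violations vanish.
mu = 1e6
c = c_ls.copy()
for _ in range(50):
    A = V[V @ c < 0]                    # rows where the current fit is negative
    c = np.linalg.solve(V.T @ V + mu * (A.T @ A), V.T @ f)
p_con = V @ c
```

The same pattern, least-squares data term plus a convex non-negativity constraint at sample points, is what scalable first-order primal-dual methods solve in high dimensions.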

We present a novel variational derivation of the Maxwell-GLM system, which augments the original vacuum Maxwell equations via a generalized Lagrangian multiplier (GLM) approach by adding two supplementary acoustic subsystems, and which was originally introduced by Munz et al. for purely numerical purposes in order to treat the divergence constraints of the magnetic and the electric field in the vacuum Maxwell equations within general-purpose and non-structure-preserving numerical schemes for hyperbolic PDE. Among the many mathematically interesting features of the model are: i) its symmetric hyperbolicity, ii) the extra conservation law for the total energy density and, most importantly, iii) the very peculiar combination of the basic differential operators, since both curl-curl and div-grad combinations are mixed within this kind of system. A similar mixture of Maxwell-type and acoustic-type subsystems has also recently been put forward by Buchman et al. in the context of a reformulation of the Einstein field equations of general relativity in terms of tetrads. This motivates our interest in this class of PDE, since the system is by itself very interesting from a mathematical point of view and can therefore serve as a useful prototype system for the development of new structure-preserving numerical methods. Up to now, to the best of our knowledge, there exists neither a rigorous variational derivation of this class of hyperbolic PDE systems, nor do exactly energy-conserving and asymptotic-preserving schemes exist for them. The objectives of this paper are to derive the Maxwell-GLM system from an underlying variational principle, show its consistency with Hamiltonian mechanics and special relativity, extend it to the general nonlinear case, and develop new exactly energy-conserving and asymptotic-preserving finite volume schemes for its discretization.

The rapid development of modern artificial intelligence (AI) systems has created an urgent need for their scientific quantification. While their fluency across a variety of domains is impressive, modern AI systems fall short on tests requiring symbolic processing and abstraction - a glaring limitation given the necessity for interpretable and reliable technology. Despite a surge of reasoning benchmarks emerging from the academic community, no comprehensive and theoretically-motivated framework exists to quantify reasoning (and more generally, symbolic ability) in AI systems. Here, we adopt a framework from computational complexity theory to explicitly quantify symbolic generalization: algebraic circuit complexity. Many symbolic reasoning problems can be recast as algebraic expressions. Thus, algebraic circuit complexity theory - the study of algebraic expressions as circuit models (i.e., directed acyclic graphs) - is a natural framework to study the complexity of symbolic computation. The tools of algebraic circuit complexity enable the study of generalization by defining benchmarks in terms of their complexity-theoretic properties (i.e., the difficulty of a problem). Moreover, algebraic circuits are generic mathematical objects: for a given algebraic circuit, an arbitrarily large number of samples can be generated, making circuits an optimal testbed for the data-hungry machine learning algorithms that are used today. Here, we adopt tools from algebraic circuit complexity theory, apply them to formalize a science of symbolic generalization, and address key theoretical and empirical challenges for its successful application to AI science and its impact on the broader community.
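The sampling property claimed above, that one fixed circuit yields arbitrarily many labelled examples, is easy to picture. A toy sketch (circuit, gate set, and input ranges are our own illustrative choices): an algebraic circuit as a directed acyclic graph of add/multiply gates, evaluated at random inputs to generate training pairs.

```python
import random

# A toy algebraic circuit as a DAG: each gate is (op, left_child, right_child);
# inputs are the variables x0, x1. This circuit computes (x0 + x1) * (x0 * x1).
circuit = {
    "g0": ("add", "x0", "x1"),
    "g1": ("mul", "x0", "x1"),
    "out": ("mul", "g0", "g1"),
}

def evaluate(node, inputs, circuit):
    """Recursively evaluate a circuit node on the given variable assignment."""
    if node in inputs:
        return inputs[node]
    op, left, right = circuit[node]
    a = evaluate(left, inputs, circuit)
    b = evaluate(right, inputs, circuit)
    return a + b if op == "add" else a * b

# One fixed circuit, arbitrarily many labelled samples.
random.seed(0)
samples = []
for _ in range(1000):
    x0, x1 = random.randint(-5, 5), random.randint(-5, 5)
    samples.append((x0, x1, evaluate("out", {"x0": x0, "x1": x1}, circuit)))
```

Complexity-theoretic properties of the circuit (size, depth, degree) then parameterize how hard the generated benchmark is, independently of how many samples are drawn.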

Obtaining an accurate estimate of the underlying covariance matrix from data of finite sample size is challenging due to sampling noise. In recent years, sophisticated covariance-cleaning techniques based on random matrix theory have been proposed to address this issue. Most of these methods aim to achieve an optimal covariance matrix estimator by minimizing the Frobenius norm distance as a measure of the discrepancy between the true covariance matrix and the estimator. However, this practice offers limited interpretability in terms of information theory. To better understand this relationship, we focus on the Kullback-Leibler divergence to quantify the information lost by the estimator. Our analysis centers on rotationally invariant estimators, which are the state of the art in random matrix theory, and we derive an analytical expression for their Kullback-Leibler divergence. Due to the intricate nature of the calculations, we use genetic programming regressors paired with human intuition. Ultimately, using this approach, we formulate a conjecture validated through extensive simulations, showing that the Frobenius distance corresponds to a first-order expansion term of the Kullback-Leibler divergence, thus establishing a more defined link between the two measures.
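The leading-order link between the two measures can be checked numerically in a simple case (our own illustration, not the paper's conjecture in full generality): for zero-mean Gaussians with a small symmetric perturbation of the covariance, the KL divergence approaches the squared Frobenius distance divided by four.

```python
import numpy as np

def kl_gauss(S1, S2):
    """KL divergence between zero-mean Gaussians N(0, S1) and N(0, S2)."""
    d = S1.shape[0]
    M = np.linalg.solve(S2, S1)          # S2^{-1} S1
    _, logdet = np.linalg.slogdet(M)
    return 0.5 * (np.trace(M) - d - logdet)

rng = np.random.default_rng(1)
d, eps = 5, 1e-3
E = rng.standard_normal((d, d))
E = (E + E.T) / 2                        # small symmetric perturbation
S_true = np.eye(d)
S_est = S_true + eps * E

kl = kl_gauss(S_true, S_est)
frob_term = np.linalg.norm(eps * E, "fro") ** 2 / 4   # leading expansion term
ratio = kl / frob_term                    # -> 1 as eps -> 0
```

Expanding `kl_gauss` around `S_true = I` gives KL = ||eps E||_F^2 / 4 + O(eps^3), which is the sense in which Frobenius-optimal cleaning is first-order optimal in KL.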

The unpredictability of random numbers is fundamental to both digital security and applications that fairly distribute resources. However, existing random number generators have limitations: the generation processes cannot be fully traced, audited, and certified to be unpredictable. The algorithmic steps used in pseudorandom number generators are auditable, but they cannot guarantee that their outputs were a priori unpredictable given knowledge of the initial seed. Device-independent quantum random number generators can ensure that the source of randomness was unknown beforehand, but the steps used to extract the randomness are vulnerable to tampering. Here, for the first time, we demonstrate a fully traceable random number generation protocol based on device-independent techniques. Our protocol extracts randomness from unpredictable non-local quantum correlations, and uses distributed intertwined hash chains to cryptographically trace and verify the extraction process. This protocol is at the heart of a public, traceable, and certifiable quantum randomness beacon that we have launched. Over the first 40 days of operation, we completed the protocol 7434 out of 7454 attempts -- a success rate of 99.7%. Each time the protocol succeeded, the beacon emitted a pulse of 512 bits of traceable randomness. The bits are certified to be uniform with error times actual success probability bounded by $2^{-64}$. The generation of certifiable and traceable randomness represents one of the first public services that operates with an entanglement-derived advantage over comparable classical approaches.
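The tracing mechanism rests on a familiar primitive: a hash chain in which each record commits to its payload and to the previous digest, so any retroactive edit invalidates every later link. The sketch below is a minimal single-chain toy loosely inspired by this idea (the payload names are invented; the beacon's actual protocol uses distributed, intertwined chains and is far more involved).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_chain(payloads):
    """Each link commits to its payload and the previous digest."""
    chain, prev = [], b"\x00" * 32
    for p in payloads:
        digest = h(prev + p)
        chain.append((p, digest))
        prev = digest
    return chain

def verify_chain(chain):
    """Recompute every digest from the start; any edit breaks all later links."""
    prev = b"\x00" * 32
    for p, digest in chain:
        if h(prev + p) != digest:
            return False
        prev = digest
    return True

# Hypothetical protocol steps committed to the chain, in order.
chain = build_chain([b"settings", b"measurement-outcomes", b"extracted-bits"])
ok_before = verify_chain(chain)

tampered = list(chain)
tampered[1] = (b"forged-outcomes", tampered[1][1])   # alter a payload after the fact
ok_after = verify_chain(tampered)
```

Publishing the digests as they are produced is what makes the extraction auditable: an adversary cannot rewrite an earlier step without the mismatch being publicly detectable.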
