
In this work, we study the computational complexity of quantum determinants, a $q$-deformation of matrix permanents: Given a complex number $q$ on the unit circle in the complex plane and an $n\times n$ matrix $X$, the $q$-permanent of $X$ is defined as $$\mathrm{Per}_q(X) = \sum_{\sigma\in S_n} q^{\ell(\sigma)}X_{1,\sigma(1)}\ldots X_{n,\sigma(n)},$$ where $\ell(\sigma)$ is the inversion number of the permutation $\sigma$ in the symmetric group $S_n$ on $n$ elements. This function family generalizes the determinant and the permanent, which correspond to the cases $q=-1$ and $q=1$ respectively. For worst-case hardness, using Liouville's approximation theorem and facts from algebraic number theory, we show that for a primitive $m$-th root of unity $q$, where $m=p^k$ is an odd prime power, exactly computing the $q$-permanent is $\mathsf{Mod}_p\mathsf{P}$-hard. This implies that an efficient algorithm for computing the $q$-permanent would result in a collapse of the polynomial hierarchy. Next, we show that the $q$-permanent can be computed exactly given an oracle that approximates it to within a polynomial multiplicative error and a membership oracle for a finite set of algebraic integers. Consequently, an efficient approximation algorithm would also imply a collapse of the polynomial hierarchy. By random self-reducibility, computing the $q$-permanent remains hard for a wide range of distributions satisfying a property called the strong autocorrelation property. Specifically, this is proved via a reduction from the $1$-permanent to the $q$-permanent for $O(1/n^2)$ points $z$ on the unit circle. Since this family of permanent functions shares a common algebraic structure, various techniques developed for the hardness of the permanent generalize to $q$-permanents.
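
As a concrete illustration of the definition only (not an algorithm from the paper), the brute-force sketch below evaluates $\mathrm{Per}_q(X)$ directly from the formula; the helper names are placeholders.

```python
# Brute-force evaluation of the q-permanent from its definition (exponential time).
from itertools import permutations
import numpy as np

def inversions(sigma):
    """Inversion number: pairs i < j with sigma[i] > sigma[j]."""
    return sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
               if sigma[i] > sigma[j])

def q_permanent(X, q):
    """Sum q^{inv(sigma)} * prod_i X[i, sigma(i)] over all permutations sigma."""
    n = X.shape[0]
    total = 0j
    for sigma in permutations(range(n)):
        term = q ** inversions(sigma)
        for i, j in enumerate(sigma):
            term *= X[i, j]
        total += term
    return total

X = np.array([[1, 2], [3, 4]], dtype=complex)
print(q_permanent(X, 1))                        # permanent:   1*4 + 2*3 = 10
print(q_permanent(X, -1))                       # determinant: 1*4 - 2*3 = -2
print(q_permanent(X, np.exp(2j * np.pi / 3)))   # a q-deformation in between
```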

Related content

CC stands out in computational complexity. The journal sits at the intersection of mathematics and theoretical computer science, with a clear mathematical profile and a rigorous mathematical format. Official website link: · Rounds · Zero-knowledge · Quantum computing · Verifiability ·
April 7, 2023

Given that reliable cloud quantum computers are coming closer to reality, the concept of delegation of quantum computations and its verifiability is of central interest. Many models have been proposed, each with specific strengths and weaknesses. Here, we put forth a new model where the client trusts only its classical processing, makes no computational assumptions, and interacts with a quantum server in a single round. In addition, during a set-up phase, the client specifies the size $n$ of the computation and receives an untrusted, off-the-shelf (OTS) quantum device that is used to report the outcome of a single constant-sized measurement from a predetermined logarithmic-sized input. In the OTS model, we thus picture that a single quantum server does the bulk of the computations, while the OTS device is used as an untrusted and generic verification device, all in a single round. We show how to delegate polynomial-time quantum computations in the OTS model. Scaling up the technique also yields an interactive proof system for all of QMA, which, furthermore, we show can be accomplished in statistical zero-knowledge. This yields the first relativistic (one-round), two-prover zero-knowledge proof system for QMA. As a proof approach, we provide a new self-test for $n$-EPR pairs using only constant-sized Pauli measurements, and show how it provides a new avenue for the use of simulatable codes for local Hamiltonian verification. Along the way, we also provide an enhanced version of a well-known stability result due to Gowers and Hatami and show how it completes a common argument used in self-testing.

Given a Hilbert space $\mathcal H$ and a finite measure space $\Omega$, the approximation of a vector-valued function $f: \Omega \to \mathcal H$ by a $k$-dimensional subspace $\mathcal U \subset \mathcal H$ plays an important role in dimension reduction techniques, such as reduced basis methods for solving parameter-dependent partial differential equations. For functions in the Lebesgue--Bochner space $L^2(\Omega;\mathcal H)$, the best possible subspace approximation error $d_k^{(2)}$ is characterized by the singular values of $f$. However, for practical reasons, $\mathcal U$ is often restricted to be spanned by point samples of $f$. We show that this restriction only has a mild impact on the attainable error; there always exist $k$ samples such that the resulting error is not larger than $\sqrt{k+1} \cdot d_k^{(2)}$. Our work extends existing results by Binev et al. (SIAM J. Math. Anal., 43(3):1457--1472, 2011) on approximation in the supremum norm and by Deshpande et al. (Theory Comput., 2:225--247, 2006) on column subset selection for matrices.
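
As a small numerical illustration of the quantities involved (the $\sqrt{k+1}\,d_k^{(2)}$ bound is an existence statement; the greedy selection below is only a stand-in heuristic, not the construction from the paper), one can compare the best rank-$k$ error with the error of a subspace spanned by $k$ sampled columns:

```python
# Compare the best rank-k subspace error (from singular values) with the error of a
# subspace spanned by k sampled columns chosen by a simple greedy heuristic.
import numpy as np

rng = np.random.default_rng(0)
# Columns of F play the role of point samples f(omega_i); the spectrum decays.
F = rng.standard_normal((50, 200)) * (0.9 ** np.arange(200))

k = 5
s = np.linalg.svd(F, compute_uv=False)
d_k = np.sqrt((s[k:] ** 2).sum())              # best possible rank-k (L^2-type) error

# Greedy sample selection: repeatedly take the column with the largest residual norm.
R, chosen = F.copy(), []
for _ in range(k):
    j = int(np.argmax(np.linalg.norm(R, axis=0)))
    chosen.append(j)
    q = R[:, j] / np.linalg.norm(R[:, j])
    R = R - np.outer(q, q @ R)                 # deflate the direction just selected

Q, _ = np.linalg.qr(F[:, chosen])
err = np.linalg.norm(F - Q @ (Q.T @ F))        # error of the sample-spanned subspace
print(d_k, err, np.sqrt(k + 1) * d_k)          # err lies above d_k, typically well below the bound
```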

We settle the complexity of dynamic least-squares regression (LSR), where rows and labels $(\mathbf{A}^{(t)}, \mathbf{b}^{(t)})$ can be adaptively inserted and/or deleted, and the goal is to efficiently maintain an $\epsilon$-approximate solution to $\min_{\mathbf{x}^{(t)}} \| \mathbf{A}^{(t)} \mathbf{x}^{(t)} - \mathbf{b}^{(t)} \|_2$ for all $t\in [T]$. We prove sharp separations ($d^{2-o(1)}$ vs. $\sim d$) between the amortized update time of: (i) fully vs. partially dynamic $0.01$-LSR; (ii) high- vs. low-accuracy LSR in the partially dynamic (insertion-only) setting. Our lower bounds follow from a gap-amplification reduction -- reminiscent of iterative refinement -- from the exact version of the Online Matrix Vector Conjecture (OMv) [HKNS15] to constant-approximate OMv over the reals, where the $i$-th online product $\mathbf{H}\mathbf{v}^{(i)}$ only needs to be computed to $0.1$-relative error. All previous fine-grained reductions from OMv to its approximate versions only show hardness for inverse polynomial approximation $\epsilon = n^{-\omega(1)}$ (additive or multiplicative). This result is of independent interest in fine-grained complexity and for the investigation of the OMv Conjecture, which is still widely open.
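
To fix the setting being maintained (this only illustrates the problem, not the algorithms or lower bounds above), a naive insertion-only baseline keeps the normal equations up to date in $O(d^2)$ time per update; the class name `DynamicLSR` is made up for the example.

```python
# Partially dynamic (insertion-only) LSR baseline: maintain A^T A and A^T b under row
# insertions and re-solve on demand.
import numpy as np

class DynamicLSR:
    def __init__(self, d, reg=1e-8):
        self.M = reg * np.eye(d)   # running A^T A (regularized for invertibility)
        self.v = np.zeros(d)       # running A^T b

    def insert(self, a, b):
        """Insert a row (a, b): O(d^2) update of the normal equations."""
        self.M += np.outer(a, a)
        self.v += b * a

    def solve(self):
        """Return the current least-squares solution."""
        return np.linalg.solve(self.M, self.v)

rng = np.random.default_rng(1)
lsr, x_true = DynamicLSR(d=5), rng.standard_normal(5)
for _ in range(1000):
    a = rng.standard_normal(5)
    lsr.insert(a, a @ x_true + 0.01 * rng.standard_normal())
print(np.linalg.norm(lsr.solve() - x_true))    # small error after many insertions
```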

Differential privacy has been an exceptionally successful concept when it comes to providing provable security guarantees for classical computations. More recently, the concept was generalized to quantum computations. While classical computations are essentially noiseless and differential privacy is often achieved by artificially adding noise, near-term quantum computers are inherently noisy, and it has been observed that this leads to natural differential privacy as a feature. In this work we discuss quantum differential privacy in an information-theoretic framework by casting it as a quantum divergence. A main advantage of this approach is that differential privacy becomes a property based solely on the output states of the computation, without the need to check it for every measurement. This leads to simpler proofs and generalized statements of its properties, as well as several new bounds for both general and specific noise models. In particular, these include common representations of quantum circuits and quantum machine learning concepts. Here, we focus on the difference between the amount of noise required to achieve certain levels of differential privacy and the amount that would make any computation useless. Finally, we also generalize the classical concepts of local differential privacy, Rényi differential privacy, and the hypothesis testing interpretation to the quantum setting, providing several new properties and insights.
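
A hedged numerical sketch of the divergence viewpoint: assuming the $(\varepsilon,\delta)$ guarantee is captured by a hockey-stick-type divergence $\mathrm{Tr}[(\rho - e^{\varepsilon}\sigma)_+] \le \delta$ between output states on neighbouring inputs (a common formulation of quantum DP as a divergence; the specific bounds of the paper are not reproduced), one can check how added depolarizing noise shrinks $\delta$ for a fixed $\varepsilon$:

```python
# Hockey-stick divergence between two output density matrices, before and after noise.
import numpy as np

def hockey_stick(rho, sigma, eps):
    """Trace of the positive part of rho - e^eps * sigma."""
    eigvals = np.linalg.eigvalsh(rho - np.exp(eps) * sigma)
    return eigvals[eigvals > 0].sum()

def depolarize(rho, p):
    """Depolarizing noise, a simple model of inherent hardware noise."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

# Two perfectly distinguishable outputs on neighbouring inputs.
rho, sigma = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
for p in (0.0, 0.2, 0.5):
    # delta (for eps = 0.5) decreases as the noise level p grows.
    print(p, hockey_stick(depolarize(rho, p), depolarize(sigma, p), eps=0.5))
```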

Control of the ordering of transactions in modern blockchains can be extremely profitable. Rather than allow one central actor to control this revenue source, recent research has studied mechanisms for decentralizing the process of computing an ordering among multiple, distributed replicas. This problem is akin to the classic problem from social choice theory of aggregating ordinal votes, applied to a streaming setting. Prior work proposes a ``$\gamma$-batch-order-fairness'' requirement on the aggregate ordering. Under this requirement, the ordering should be divisible into contiguous batches, and if a $\gamma$ fraction of replicas receives $tx$ before $tx^\prime$, then $tx^\prime$ cannot be placed in an earlier batch than $tx$. We extend this definition to formalize the notion that these batches should have minimal size, thereby giving the first notion of order fairness that cannot be vacuously satisfied (by arbitrarily large batches) and that can be satisfied in the presence of faulty replicas. We then show that the Ranked Pairs aggregation method produces an ordering that satisfies our fairness definition for every choice of parameter $\gamma$ simultaneously and for any number of faulty replicas (where the fairness guarantees degrade linearly as the fraction of faulty replicas increases). Finally, we instantiate our protocol in the streaming setting. Careful analysis of the interactions between ordering dependencies enables our protocol to simulate Ranked Pairs voting in this setting, and adjustments to the ordering algorithm give a protocol that (under synchronous network assumptions) always appends a transaction to the output ordering after a bounded amount of time.
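
For reference, here is a hedged sketch of the classic (non-streaming) Ranked Pairs rule that the protocol builds on, applied to replicas' transaction orderings; the fairness bookkeeping and streaming machinery from the paper are not modelled, and the final ordering is read off the locked graph via a simple transitive-win count.

```python
# Ranked Pairs (Tideman) aggregation of replica orderings.
def ranked_pairs(ballots, candidates):
    """Aggregate the replicas' orderings with the Ranked Pairs rule."""
    # margin[a][b]: number of ballots (replica orderings) that rank a before b.
    margin = {a: {b: 0 for b in candidates} for a in candidates}
    for ballot in ballots:
        for i, a in enumerate(ballot):
            for b in ballot[i + 1:]:
                margin[a][b] += 1

    def reaches(graph, src, dst):
        """Is dst reachable from src in the locked graph?"""
        stack, seen = [src], set()
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            if u in seen:
                continue
            seen.add(u)
            stack.extend(graph[u])
        return False

    # Lock in pairwise victories from strongest to weakest, skipping any that create a cycle.
    graph = {c: [] for c in candidates}
    pairs = sorted(((margin[a][b] - margin[b][a], a, b)
                    for a in candidates for b in candidates if a != b), reverse=True)
    for strength, a, b in pairs:
        if strength > 0 and not reaches(graph, b, a):
            graph[a].append(b)

    # Read the ordering off the (acyclic) locked graph by transitive wins.
    return sorted(candidates, key=lambda c: -sum(reaches(graph, c, d)
                                                 for d in candidates if d != c))

ballots = [("tx1", "tx2", "tx3"), ("tx1", "tx3", "tx2"), ("tx2", "tx1", "tx3")]
print(ranked_pairs(ballots, ["tx1", "tx2", "tx3"]))   # ['tx1', 'tx2', 'tx3']
```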

In this paper, we present efficient quantum algorithms that are exponentially faster than classical algorithms for solving the quantum optimal control problem. This problem involves finding the control variable that maximizes a physical quantity at time $T$, where the system is governed by a time-dependent Schr\"odinger equation. This type of control problem also has an intricate relation with machine learning. Our algorithms are based on a time-dependent Hamiltonian simulation method and a fast gradient-estimation algorithm. We also provide a comprehensive error analysis to quantify the total error from various steps, such as the finite-dimensional representation of the control function, the discretization of the Schr\"odinger equation, the numerical quadrature, and optimization. Our quantum algorithms require fault-tolerant quantum computers.
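
As a point of reference for the problem statement only (a purely classical toy, not the quantum algorithm of the paper), one can discretize a single-qubit Schrödinger equation with a piecewise-constant control and improve a terminal objective by finite-difference gradient ascent:

```python
# Classical toy: maximize the terminal fidelity |<target|psi(T)>|^2 for
# psi' = -i (H0 + u(t) H1) psi with a piecewise-constant control u.
import numpy as np
from scipy.linalg import expm

H0 = np.array([[1, 0], [0, -1]], dtype=complex)    # drift Hamiltonian (sigma_z)
H1 = np.array([[0, 1], [1, 0]], dtype=complex)     # control Hamiltonian (sigma_x)
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)
T, steps = 1.0, 10
dt = T / steps

def objective(u):
    psi = psi0
    for uk in u:                                   # piecewise-constant propagation
        psi = expm(-1j * dt * (H0 + uk * H1)) @ psi
    return abs(target.conj() @ psi) ** 2           # fidelity with the target state

rng = np.random.default_rng(5)
u = 0.1 * rng.standard_normal(steps)
print("initial objective:", objective(u))
for _ in range(200):                               # crude finite-difference gradient ascent
    grad = np.array([(objective(u + 1e-5 * e) - objective(u)) / 1e-5
                     for e in np.eye(steps)])
    u += 0.5 * grad
print("final objective:  ", objective(u))          # typically increases substantially
```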

We consider the classic 1-center problem: Given a set $P$ of $n$ points in a metric space, find the point in $P$ that minimizes the maximum distance to the other points of $P$. We study the complexity of this problem in $d$-dimensional $\ell_p$-metrics and in edit and Ulam metrics over strings of length $d$. Our results for the 1-center problem may be classified based on $d$ as follows. $\bullet$ Small $d$: Assuming the hitting set conjecture (HSC), we show that when $d=\omega(\log n)$, no subquadratic algorithm can solve the 1-center problem in any of the $\ell_p$-metrics, or in the edit or Ulam metrics. $\bullet$ Large $d$: When $d=\Omega(n)$, we extend our conditional lower bound to rule out subquartic algorithms for the 1-center problem in the edit metric (assuming the Quantified SETH). On the other hand, we give a $(1+\epsilon)$-approximation for 1-center in the Ulam metric with running time $\tilde{O_{\varepsilon}}(nd+n^2\sqrt{d})$. We also strengthen some of the above lower bounds by allowing approximations or by reducing the dimension $d$, but only against a weaker class of algorithms which list all requisite solutions. Moreover, we extend one of our hardness results to rule out subquartic algorithms for the well-studied 1-median problem in the edit metric, where, given a set of $n$ strings each of length $n$, the goal is to find a string in the set that minimizes the sum of the edit distances to the rest of the strings in the set.
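
For concreteness, the quadratic-time baseline whose (conditional) optimality is at stake looks as follows in the $\ell_p$-metrics; this is the trivial $O(n^2 d)$ algorithm, not a contribution of the paper.

```python
# Trivial 1-center in l_p: for each point, find its farthest neighbour, and return
# the point whose maximum distance is smallest.
import numpy as np

def one_center(P, p=2):
    """Index and radius of the point of P minimizing the max l_p distance to the others."""
    best_idx, best_radius = -1, np.inf
    for i, x in enumerate(P):
        radius = max(np.linalg.norm(x - y, ord=p) for j, y in enumerate(P) if j != i)
        if radius < best_radius:
            best_idx, best_radius = i, radius
    return best_idx, best_radius

rng = np.random.default_rng(2)
P = rng.standard_normal((100, 16))
print(one_center(P, p=1))
print(one_center(P, p=2))
```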

We analyze the bit complexity of efficient algorithms for fundamental optimization problems, such as linear regression, $p$-norm regression, and linear programming (LP). State-of-the-art algorithms are iterative, and in terms of the number of arithmetic operations, they match the current time complexity of multiplying two $n$-by-$n$ matrices (up to polylogarithmic factors). However, previous work has typically assumed infinite-precision arithmetic, and due to complicated inverse maintenance techniques, the actual running times of these algorithms are unknown. To settle the running time and bit complexity of these algorithms, we demonstrate that a core common subroutine, known as \emph{inverse maintenance}, is backward-stable. Additionally, we show that iterative approaches for solving constrained weighted regression problems can be carried out with bounded-error preconditioners. Specifically, we prove that linear programs can be solved approximately in matrix multiplication time multiplied by polylog factors that depend on the condition number $\kappa$ of the matrix and the inner and outer radius of the LP problem. $p$-norm regression can be solved approximately in matrix multiplication time multiplied by polylog factors in $\kappa$. Lastly, linear regression can be solved approximately in input-sparsity time multiplied by polylog factors in $\kappa$. Furthermore, we present results that go below matrix multiplication time for $p$-norm regression by utilizing faster solvers for sparse linear systems.
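
A minimal sketch, under the assumption that the flavour of "iterative approach with a bounded-error preconditioner" resembles classical iterative refinement (this is illustrative, not the paper's inverse-maintenance routine): a least-squares solution is repeatedly corrected with an inexact inverse of $A^\top A$, so moderate preconditioner error slows convergence rather than ruining the final accuracy.

```python
# Iterative refinement of min ||Ax - b||_2 with a deliberately perturbed preconditioner.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((400, 30))
b = rng.standard_normal(400)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]     # reference solution

M = np.linalg.inv(A.T @ A)
M = M + 0.05 * np.abs(M).mean() * rng.standard_normal(M.shape)   # inexact inverse

x = np.zeros(30)
for it in range(1, 21):
    x = x + M @ (A.T @ (b - A @ x))               # refinement step with the inexact inverse
    if it % 5 == 0:
        print(it, np.linalg.norm(x - x_star))     # error shrinks geometrically
```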

We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems. Specifically, we design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit. Notably, the system integrates learnable lateral competition, noise injection, and elements of predictive coding, an emerging and viable neurobiological process theory of cortical function, with the forward-forward (FF) adaptation scheme. Furthermore, PFF efficiently learns to propagate learning signals and updates synapses with forward passes only, eliminating key structural and computational constraints imposed by backpropagation-based schemes. Besides computational advantages, the PFF process could prove useful for understanding the learning mechanisms behind biological neurons that use local signals despite missing feedback connections. We run experiments on image data and demonstrate that the PFF procedure works as well as backpropagation, offering a promising brain-inspired algorithm for classifying, reconstructing, and synthesizing data patterns.
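
To make the forward-only learning signal concrete, here is a minimal single-layer forward-forward sketch in NumPy using a layer-local "goodness" objective; the predictive-coding circuit, lateral competition, and noise injection that distinguish PFF are deliberately omitted, and the data is synthetic.

```python
# Single-layer FF sketch: push goodness (sum of squared activities) up on positive
# data and down on negative data, with no backward pass through other layers.
import numpy as np

rng = np.random.default_rng(4)
d_in, d_hid, theta, lr = 20, 64, 2.0, 0.03
W = 0.1 * rng.standard_normal((d_in, d_hid))

def layer(x):
    h = np.maximum(x @ W, 0.0)                     # ReLU activities
    return h, (h ** 2).sum(axis=1)                 # activities and their goodness

for step in range(500):
    x_pos = rng.standard_normal((32, d_in)) + 1.0  # stand-in "positive" data
    x_neg = rng.standard_normal((32, d_in)) - 1.0  # stand-in "negative" data
    for x, positive in ((x_pos, True), (x_neg, False)):
        h, g = layer(x)
        p = 1.0 / (1.0 + np.exp(-(g - theta)))     # P(positive | goodness)
        dL_dg = (p - 1.0) if positive else p       # logistic-loss gradient w.r.t. goodness
        W -= lr * x.T @ (2.0 * h * dL_dg[:, None]) / len(x)   # local, forward-only update

_, g_pos = layer(rng.standard_normal((200, d_in)) + 1.0)
_, g_neg = layer(rng.standard_normal((200, d_in)) - 1.0)
print(g_pos.mean(), g_neg.mean())                  # goodness separates positives from negatives
```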

Two genomes over the same set of gene families form a canonical pair when each of them has exactly one gene from each family. Different distances between canonical genomes can be derived from a structure called the breakpoint graph, which represents the relation between the two given genomes as a collection of cycles of even length and paths. The breakpoint distance is then equal to $n - (c_2 + p_0/2)$, where $n$ is the number of genes, $c_2$ is the number of cycles of length 2, and $p_0$ is the number of paths of length 0. Similarly, when the considered rearrangements are those modeled by the double-cut-and-join (DCJ) operation, the rearrangement distance is $n - (c + p_e/2)$, where $c$ is the total number of cycles and $p_e$ is the total number of even paths. The distance formulation is a basic unit for several other combinatorial problems related to genome evolution and ancestral reconstruction, such as the median or the double distance. Interestingly, both the median and the double distance problems can be solved in polynomial time for the breakpoint distance, while they are NP-hard for the rearrangement distance. One way of exploring the complexity space between these two extremes is to consider the $\sigma_k$ distance, defined to be $n - [c_2 + c_4 + \ldots + c_k + (p_0 + p_2 + \ldots + p_k)/2]$, and to investigate the complexities of median and double distance for the $\sigma_4$ distance, then the $\sigma_6$ distance, and so on. While for the median much effort has been made in our group and in others without progress even for the $\sigma_4$ distance, for the double distance under the $\sigma_4$ and $\sigma_6$ distances we could devise linear-time algorithms, which we present here.
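
The three distance formulas can be evaluated directly from the cycle/path length profile of a breakpoint graph; the helper below is a hypothetical illustration (it does not construct the graph or check that a given profile is realizable by an actual genome pair).

```python
# Evaluate the breakpoint, sigma_k, and DCJ distance formulas from a length profile:
# cycles[l] = number of cycles with l edges, paths[l] = number of paths with l edges.
def sigma_k_distance(n, cycles, paths, k):
    c_small = sum(cycles.get(l, 0) for l in range(2, k + 1, 2))
    p_small = sum(paths.get(l, 0) for l in range(0, k + 1, 2))
    return n - (c_small + p_small / 2)

def breakpoint_distance(n, cycles, paths):
    return sigma_k_distance(n, cycles, paths, k=2)        # n - (c_2 + p_0/2)

def dcj_distance(n, cycles, paths):
    # Cycles and even paths have at most 2n edges, so k = 2n counts them all.
    return sigma_k_distance(n, cycles, paths, k=2 * n)    # n - (c + p_e/2)

# Example profile: n = 5 genes, one 2-cycle, one 4-cycle, one 0-path, one 2-path.
cycles, paths = {2: 1, 4: 1}, {0: 1, 2: 1}
print(breakpoint_distance(5, cycles, paths))   # 5 - (1 + 1/2) = 3.5
print(sigma_k_distance(5, cycles, paths, 4))   # 5 - (2 + 2/2) = 2.0
print(dcj_distance(5, cycles, paths))          # 2.0
```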
