
We consider the problem of decoding corrupted error correcting codes with NC$^0[\oplus]$ circuits in the classical and quantum settings. We show that any such classical circuit can correctly recover only a vanishingly small fraction of messages, if the codewords are sent over a noisy channel with positive error rate. Previously this was known only for linear codes with large dual distance, whereas our result applies to any code. By contrast, we give a simple quantum circuit that correctly decodes the Hadamard code with probability $\Omega(\varepsilon^2)$ even if a $(1/2 - \varepsilon)$-fraction of a codeword is adversarially corrupted. Our classical hardness result is based on an equidistribution phenomenon for multivariate polynomials over a finite field under biased input-distributions. This is proved using a structure-versus-randomness strategy based on a new notion of rank for high-dimensional polynomial maps that may be of independent interest. Our quantum circuit is inspired by a non-local version of the Bernstein-Vazirani problem, a technique to generate ``poor man's cat states'' by Watts et al., and a constant-depth quantum circuit for the OR function by Takahashi and Tani.
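For contrast with the constant-depth circuits discussed above, the trivial (exponential-time) brute-force decoder for the Hadamard code under adversarial corruption is easy to state: compare the received word against every codeword and return the best match. The sketch below is our own illustration of the code and corruption model only, not of the NC$^0[\oplus]$ or quantum decoders; all function names are ours.

```python
import random

def hadamard_encode(x, k):
    # Codeword position a holds the inner product <x, a> over F_2.
    return [bin(x & a).count("1") % 2 for a in range(2 ** k)]

def corrupt(word, flips, rng):
    # Flip `flips` positions; any pattern of fewer than n/4 flips
    # still allows unique decoding (codewords are at distance n/2).
    w = word[:]
    for i in rng.sample(range(len(w)), flips):
        w[i] ^= 1
    return w

def decode_brute_force(word, k):
    # Return the message whose codeword agrees with `word` most often.
    return max(range(2 ** k),
               key=lambda x: sum(a == b
                                 for a, b in zip(hadamard_encode(x, k), word)))

rng = random.Random(0)
k, msg = 6, 0b101101                      # n = 2^6 = 64 codeword bits
noisy = corrupt(hadamard_encode(msg, k), flips=15, rng=rng)
assert decode_brute_force(noisy, k) == msg
```

Since distinct Hadamard codewords differ in exactly $n/2$ positions, any corruption of fewer than $n/4$ bits leaves the true message as the unique best match, regardless of which positions are flipped.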

Related content

One of the most fundamental problems in tiling theory is the domino problem: given a set of tiles and tiling rules, decide whether there is a way to tile the plane using copies of the tiles while following their rules. The problem is known to be undecidable in general, and even for sets of Wang tiles, which are unit square tiles with coloured edges that may be placed next to each other only if they carry the same colour on their common edge, as proven by Berger in the 1960s. In this paper, we focus on Wang tilesets. We prove that the domino problem is decidable for robust tilesets, i.e. tilesets that either cannot tile the plane or can but, if so, provably satisfy some particular invariant. We establish that several famous tilesets considered in the literature are robust. We give arguments that this is true for all tilesets unless they are produced from non-robust Turing machines: a Turing machine is said to be non-robust if it does not halt but this fact is not provable. As a side effect of our work, we provide a sound and relatively complete method for proving that a tileset can tile the plane. Our analysis also provides explanations for the observed similarities between proofs in the literature for various tilesets, as well as for phenomena that have been observed experimentally in the systematic study of tilesets by computer methods.
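While tiling the whole plane is undecidable, searching for a *periodic* tiling is straightforwardly semi-decidable: a valid tiling of a $k \times k$ torus extends periodically to the plane. A minimal backtracking sketch (our own illustration, not the paper's method; tiles are hypothetical (north, east, south, west) colour tuples, and the wrap-around checks assume $k \ge 2$):

```python
def tiles_torus(tiles, k):
    """Search for a k-by-k toroidal tiling (k >= 2); a valid torus tiling
    extends to a periodic tiling of the whole plane.  Tiles are
    (north, east, south, west) colour tuples; adjacent tiles must share
    the colour on their common edge, wrapping around at the boundary."""
    grid = [[None] * k for _ in range(k)]

    def ok(r, c, t):
        n, e, s, w = t
        if c > 0 and grid[r][c - 1][1] != w:          # left neighbour's east
            return False
        if r > 0 and grid[r - 1][c][2] != n:          # upper neighbour's south
            return False
        if c == k - 1 and grid[r][0][3] != e:         # wrap: east edge of row
            return False
        if r == k - 1 and grid[0][c][0] != s:         # wrap: south edge of column
            return False
        return True

    def fill(pos):                                    # row-major backtracking
        if pos == k * k:
            return True
        r, c = divmod(pos, k)
        for t in tiles:
            if ok(r, c, t):
                grid[r][c] = t
                if fill(pos + 1):
                    return True
                grid[r][c] = None
        return False

    return grid if fill(0) else None

# Two tiles that can only alternate horizontally admit a periodic 2x2 tiling.
tiles = [(0, 1, 0, 2), (0, 2, 0, 1)]
assert tiles_torus(tiles, 2) is not None
# A single tile whose east and west colours differ cannot tile at all.
assert tiles_torus([(0, 1, 0, 2)], 2) is None
```

The undecidability of the domino problem lives precisely in the gap this search cannot close: a tileset may tile the plane without admitting any periodic tiling, so the procedure above can confirm tilability but never refute it.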

Input-output conformance simulation (iocos) has been proposed by Gregorio-Rodr\'iguez, Llana and Mart\'inez-Torres as a simulation-based behavioural preorder underlying model-based testing. This relation is inspired by Tretmans' classic ioco relation, but has better worst-case complexity than ioco and supports stepwise refinement. The goal of this paper is to develop the theory of iocos by studying logical characterisations of this relation, rule formats for it and its compositionality. More specifically, this article presents characterisations of iocos in terms of modal logics and compares them with an existing logical characterisation of ioco proposed by Beohar and Mousavi. It also offers a characteristic-formula construction for iocos over finite processes in an extension of the proposed modal logics with greatest fixed points. A precongruence rule format for iocos and a rule format ensuring that operations take quiescence properly into account are also given. Both rule formats are based on the GSOS format of Bloom, Istrail and Meyer. The general modal decomposition methodology of Fokkink and van Glabbeek is used to show how to check the satisfaction of properties expressed in the logic for iocos in a compositional way for operations specified by rules in the precongruence rule format for iocos.
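As background, the plain (strong) simulation preorder on a finite labelled transition system is computable as a greatest fixpoint; iocos refines this scheme with input/output distinctions and quiescence, which the sketch below (our own, not from the paper) deliberately omits:

```python
def simulation_preorder(states, trans):
    # trans: set of (source, action, target) triples.  Computes the largest
    # relation R with: (p, q) in R iff every move p --a--> p' is matched by
    # some q --a--> q' with (p', q') in R.  This is only the skeleton that
    # simulation-based preorders such as iocos build on.
    succ = {}
    for (s, a, t) in trans:
        succ.setdefault((s, a), set()).add(t)
    R = {(p, q) for p in states for q in states}
    changed = True
    while changed:                      # chaotic iteration to the fixpoint
        changed = False
        for (p, q) in list(R):
            matched = all(any((t, t2) in R for t2 in succ.get((q, a), ()))
                          for (s, a, t) in trans if s == p)
            if not matched:
                R.discard((p, q))
                changed = True
    return R

# b can do everything a can (and more), so a is simulated by b, not vice versa.
states = {"a", "b"}
trans = {("a", "x", "a"), ("b", "x", "b"), ("b", "y", "b")}
R = simulation_preorder(states, trans)
assert ("a", "b") in R and ("b", "a") not in R
```

The fixpoint iteration terminates because the relation only ever shrinks; this monotone refinement is also what underlies the better worst-case complexity of simulation-style preorders compared to ioco's trace-based definition.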

We consider the estimation of the cumulative hazard function, and equivalently the distribution function, with censored data under a setup that preserves the privacy of the survival database. This is done through an $\alpha$-locally differentially private mechanism for the failure indicators and by proposing a non-parametric kernel estimator for the cumulative hazard function that remains consistent under the privatization. Under mild conditions, we also prove lower bounds on the minimax rates of convergence and show that the estimator is minimax optimal under a well-chosen bandwidth.
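One standard way to make binary failure indicators $\alpha$-locally differentially private is randomized response followed by a debiasing step. The sketch below is our own illustration of this privatization mechanism only; the paper's kernel smoothing and hazard estimation are omitted:

```python
import math, random

def privatize(delta, alpha, rng):
    # Randomized response: report the true bit with probability
    # p = e^alpha / (1 + e^alpha); this satisfies alpha-local DP.
    p = math.exp(alpha) / (1.0 + math.exp(alpha))
    return delta if rng.random() < p else 1 - delta

def debias(z, alpha):
    # E[z | delta] = p*delta + (1-p)*(1-delta), so
    # (z - (1-p)) / (2p - 1) is an unbiased estimate of delta.
    p = math.exp(alpha) / (1.0 + math.exp(alpha))
    return (z - (1.0 - p)) / (2.0 * p - 1.0)

rng = random.Random(1)
alpha = 1.0
deltas = [rng.random() < 0.7 for _ in range(20000)]   # true failure indicators
est = sum(debias(privatize(int(d), alpha, rng), alpha)
          for d in deltas) / len(deltas)
assert abs(est - sum(deltas) / len(deltas)) < 0.03    # debiased mean is close
```

The price of privacy is visible in the debiasing factor $1/(2p-1)$, which blows up as $\alpha \to 0$; this inflation of the variance is what drives the privatized minimax rates studied in the paper.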

We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n \geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant converges to the optimal Hardy constant at a rate proportional to $1/| \log h |^2$. This result holds in dimension $n=1$, in any dimension $n \geq 3$ if the domain is the unit ball and the finite element discretization exploits the rotational symmetry of the problem, and in dimension $n=3$ for general finite element discretizations of the unit ball. In the first two cases, our estimates show excellent quantitative agreement with values of the discrete Hardy constant obtained computationally.

We consider the problem of optimising the expected value of a loss functional over a nonlinear model class of functions, assuming that we only have access to realisations of the gradient of the loss. This is a classical task in statistics, machine learning and physics-informed machine learning. A straightforward solution is to replace the exact objective with a Monte Carlo estimate before employing standard first-order methods like gradient descent, which yields the classical stochastic gradient descent method. But replacing the true objective with an estimate incurs a ``generalisation error''. Rigorous bounds for this error typically require strong compactness and Lipschitz continuity assumptions while providing only a very slow decay with the sample size. We propose a different optimisation strategy relying on a natural gradient descent, in which the true gradient is approximated in local linearisations of the model class via (quasi-)projections based on optimal sampling methods. Under classical assumptions on the loss and the nonlinear model class, we prove that this scheme converges almost surely monotonically to a stationary point of the true objective, and we provide convergence rates.
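The ``straightforward solution'' mentioned above, stochastic gradient descent on sampled gradients, takes only a few lines. This toy is our own (with a model that is linear in its single parameter purely to keep it short and provably convergent); the paper targets genuinely nonlinear model classes and a different, natural-gradient scheme:

```python
import random

def sgd_fit(samples, steps, lr, rng):
    # Classical SGD: at each step, estimate the gradient of the expected
    # squared loss E[(w*t - y)^2] from a single random sample (t, y).
    w = 0.0
    for _ in range(steps):
        t, y = samples[rng.randrange(len(samples))]
        grad = 2.0 * (w * t - y) * t          # stochastic gradient in w
        w -= lr * grad
    return w

rng = random.Random(2)
# Synthetic gradient realisations: y = 3*t + noise, t uniform on [0, 1].
samples = [(t, 3.0 * t + 0.05 * rng.gauss(0, 1))
           for t in (rng.random() for _ in range(1000))]
w = sgd_fit(samples, steps=20000, lr=0.1, rng=rng)
assert abs(w - 3.0) < 0.1
```

Even in this benign setting the iterate only fluctuates around the minimiser of the *empirical* objective; the gap between that minimiser and the true one is exactly the generalisation error whose slow decay motivates the paper's alternative strategy.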

A roadmap for an algebraic set $V$ defined by polynomials with coefficients in some real field, say $\mathbb{R}$, is an algebraic curve contained in $V$ whose intersection with all connected components of $V\cap\mathbb{R}^{n}$ is connected. These objects, introduced by Canny, can be used to answer connectivity queries over $V\cap \mathbb{R}^{n}$ provided that they are required to contain the finite set of query points $\mathcal{P}\subset V$; in this case, we say that the roadmap is associated to $(V, \mathcal{P})$. In this paper, we make effective a connectivity result we previously proved, to design a Monte Carlo algorithm which, on input (i) a finite sequence of polynomials defining $V$ (and satisfying some regularity assumptions) and (ii) an algebraic representation of finitely many query points $\mathcal{P}$ in $V$, computes a roadmap for $(V, \mathcal{P})$. This algorithm generalizes the nearly optimal one introduced by the last two authors by dropping a boundedness assumption on the real trace of $V$. The output size and running times of our algorithm are both polynomial in $(nD)^{n\log d}$, where $D$ is the maximal degree of the input equations and $d$ is the dimension of $V$. As far as we know, the best previously known algorithm dealing with such sets has an output size and running time polynomial in $(nD)^{n\log^2 n}$.

A (left) group code of length n is a linear code which is the image of a (left) ideal of a group algebra via an isomorphism from FG to Fn which maps G to the standard basis of Fn. Many classical linear codes have been shown to be group codes. In this paper we obtain a criterion to decide when a linear code is a group code in terms of its intrinsic properties in the ambient space Fn, which does not assume an a priori group algebra structure on Fn. As an application we provide a family of groups (including metacyclic groups) for which every two-sided group code is an abelian group code. It is well known that Reed-Solomon codes are cyclic and that their parity-check extensions are elementary abelian group codes. These two classes of codes are included in the class of Cauchy codes. Using our criterion, we classify the Cauchy codes of some lengths which are left group codes, and the possible group code structures on these codes.
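The simplest instance of such an intrinsic criterion is the cyclic case: a linear code is a $\mathbb{Z}_n$ group code exactly when it is closed under the cyclic shift. The paper's criterion is far more general; the sketch below (our own) only illustrates this base case over $F_2$:

```python
from itertools import product

def span_f2(generators):
    # All F_2-linear combinations of the generator rows.
    n = len(generators[0])
    return {tuple(sum(c * g[i] for c, g in zip(coeffs, generators)) % 2
                  for i in range(n))
            for coeffs in product([0, 1], repeat=len(generators))}

def is_cyclic(code):
    # Closure under the cyclic shift = being a Z_n (cyclic) group code.
    return all(w[-1:] + w[:-1] in code for w in code)

# Length-7 Hamming code: span of four cyclic shifts of g(x) = 1 + x + x^3,
# which divides x^7 - 1 over F_2, so the span is the whole ideal (g).
g = (1, 1, 0, 1, 0, 0, 0)
gens = [tuple(g[(i - s) % 7] for i in range(7)) for s in range(4)]
hamming = span_f2(gens)
assert len(hamming) == 16 and is_cyclic(hamming)

# A code that is not closed under the shift is not a cyclic group code.
assert not is_cyclic(span_f2([(1, 0, 0), (0, 1, 0)]))
```

The point of the paper's criterion is that tests of this shape can be run on the code as a subspace of Fn, without first fixing an identification of the coordinates with group elements.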

We study the problem of identifying a small set of $k\sim n^\theta$, $0<\theta<1$, infected individuals within a large population of size $n$ by testing groups of individuals simultaneously. All tests are conducted concurrently. The goal is to minimise the total number of tests required. In this paper we make the (realistic) assumption that tests are noisy, i.e.\ a group containing an infected individual may return a negative test result, and a group containing no infected individual may return a positive test result, each with a certain probability. The noise need not be symmetric. We develop an algorithm called SPARC that correctly identifies the set of infected individuals up to $o(k)$ errors with high probability with the asymptotically minimum number of tests. Additionally, we develop an algorithm called SPEX that exactly identifies the set of infected individuals w.h.p. with a number of tests that matches the information-theoretic lower bound for the constant column design, a powerful and well-studied test design.
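For intuition, a Bernoulli pooling design with a naive per-item counting decoder already copes with symmetric noise on a toy instance: an infected individual's pools are almost always positive, while a healthy individual's pools are positive only when they happen to contain someone infected. This sketch is our own illustration, not SPARC or SPEX, and the design parameters and 0.8 threshold are ad hoc:

```python
import random

def noisy_group_testing(n, infected, m, p, noise, rng):
    # Bernoulli pooling design: each item joins each of the m tests with
    # probability p.  Decoder: declare an item infected when the fraction
    # of its tests that came back positive is large.
    appear = [0] * n
    positive = [0] * n
    for _ in range(m):
        pool = [i for i in range(n) if rng.random() < p]
        result = any(i in infected for i in pool)
        if rng.random() < noise:          # symmetric noise here for simplicity
            result = not result
        for i in pool:
            appear[i] += 1
            positive[i] += result
    return {i for i in range(n)
            if appear[i] > 0 and positive[i] / appear[i] > 0.8}

rng = random.Random(3)
n, k = 200, 10
infected = set(rng.sample(range(n), k))
found = noisy_group_testing(n, infected, m=2000, p=0.1, noise=0.05, rng=rng)
assert len(found ^ infected) <= 2         # near-exact recovery on this toy
```

This naive scheme needs far more tests than the information-theoretic optimum that SPARC attains; closing that gap, and handling asymmetric noise and exact recovery, is exactly what the paper's algorithms are designed for.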

The joint bidiagonalization (JBD) process iteratively reduces a matrix pair $\{A,L\}$ to two bidiagonal forms simultaneously, which can be used for computing a partial generalized singular value decomposition (GSVD) of $\{A,L\}$. The process has a nested inner-outer iteration structure, where the inner iteration usually cannot be computed exactly. In this paper, we study the inaccurately computed inner iterations of JBD by first investigating the influence of the computational error of the inner iteration on the outer iteration, and then proposing a reorthogonalized JBD (rJBD) process that maintains the orthogonality of a subset of the Lanczos vectors. An error analysis of the rJBD is carried out to establish connections with Lanczos bidiagonalizations. The results are then used to investigate the convergence and accuracy of rJBD-based GSVD computation. It is shown that the accuracy of the computed GSVD components depends on the computing accuracy of the inner iterations and the condition number of $(A^T,L^T)^T$, while the convergence rate is not affected very much. For practical JBD-based GSVD computations, our results provide a guideline for choosing a proper computing accuracy of the inner iterations in order to obtain approximate GSVD components with a desired accuracy. Numerical experiments are reported to confirm our theoretical results.

Motivated by the desire to understand stochastic algorithms for nonconvex optimization that are robust to their hyperparameter choices, we analyze a mini-batched prox-linear iterative algorithm for the problem of recovering an unknown rank-1 matrix from rank-1 Gaussian measurements corrupted by noise. We derive a deterministic recursion that predicts the error of this method and show, using a non-asymptotic framework, that this prediction is accurate for any batch-size and a large range of step-sizes. In particular, our analysis reveals that this method, though stochastic, converges linearly from a local initialization with a fixed step-size to a statistical error floor. Our analysis also exposes how the batch-size, step-size, and noise level affect the (linear) convergence rate and the eventual statistical estimation error, and we demonstrate how to use our deterministic predictions to perform hyperparameter tuning (e.g. step-size and batch-size selection) without ever running the method. On a technical level, our analysis is enabled in part by showing that the fluctuations of the empirical iterates around our deterministic predictions scale with the error of the previous iterate.
