
We construct a new family of permutationally invariant codes that correct $t$ Pauli errors for any $t\ge 1$. We also show that codes in the new family correct spontaneous decay errors as well as deletion errors. In many cases the codes in this family are shorter than the best previously known explicit families of permutationally invariant codes for Pauli errors, for deletions, and for the amplitude damping channel. As a separate result, we generalize the conditions for permutationally invariant codes to correct $t$ Pauli errors, extending the previously known results for $t=1$ to any number of errors. For small $t$, these conditions can be used to construct new examples of codes by computer.
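For background only (the standard Knill–Laflamme criterion, not this paper's new permutationally invariant conditions), "correcting $t$ Pauli errors" can be spelled out as follows; a sketch in LaTeX notation:

```latex
% Knill-Laflamme conditions (standard background, not the paper's result):
% a code spanned by orthonormal codewords |c_i> corrects an error set {E_a} iff
\langle c_i \vert E_a^\dagger E_b \vert c_j \rangle \;=\; C_{ab}\,\delta_{ij}
\qquad \text{for all } i, j, a, b,
% where the matrix C = (C_{ab}) does not depend on i or j.  For t Pauli
% errors, {E_a} ranges over the Pauli operators of weight at most t.
```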

Related content

A simple way of obtaining robust estimates of the "center" (or the "location") and of the "scatter" of a dataset is to use the maximum likelihood estimate with a class of heavy-tailed distributions, regardless of the "true" distribution generating the data. We observe that the maximum likelihood problem for the Cauchy distributions, which have particularly heavy tails, is geodesically convex and therefore efficiently solvable (Cauchy distributions are parametrized by the upper half plane, i.e. by the hyperbolic plane). Moreover, it has an appealing geometrical meaning: the data points, living on the boundary of the hyperbolic plane, attract the parameter by unit forces, and we search for the point where these forces are in equilibrium. This picture generalizes to several classes of multivariate distributions with heavy tails, including, in particular, the multivariate Cauchy distributions. The hyperbolic plane gets replaced by symmetric spaces of noncompact type. Geodesic convexity gives us an efficient numerical solution of the maximum likelihood problem for these distribution classes. This can then be used for robust estimates of location and spread, thanks to the heavy tails of these distributions.
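A minimal numerical sketch of the univariate case: the Cauchy location–scale likelihood, parametrized by the upper half-plane point $(x_0, \gamma)$, maximized with a generic optimizer on synthetic data. This only illustrates the objective and its robustness to outliers; it is not the paper's geodesic method.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic 1-D data with a cluster of outliers (illustrative only).
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(2.0, 1.0, 200), rng.normal(50.0, 1.0, 10)])

def cauchy_nll(theta, x):
    """Negative log-likelihood of Cauchy(x0, gamma); gamma = exp(s) > 0."""
    x0, s = theta
    gamma = np.exp(s)
    z = (x - x0) / gamma
    return np.sum(np.log1p(z**2)) + x.size * np.log(np.pi * gamma)

# The problem is geodesically convex over the upper half-plane (x0, gamma),
# so even a generic local optimizer reaches the global optimum.
res = minimize(cauchy_nll, x0=np.array([np.median(data), 0.0]),
               args=(data,), method="Nelder-Mead")
x0_hat, gamma_hat = res.x[0], np.exp(res.x[1])
print(f"robust location ~ {x0_hat:.3f}, robust scatter ~ {gamma_hat:.3f}")
```

The sample mean of these data is pulled toward 50 by the outliers, while the Cauchy location estimate stays near 2, which is the robustness the heavy tails buy.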

The characterization of the solution set for a class of algebraic Riccati inequalities is studied. This class arises in the passivity analysis of linear time invariant control systems. Eigenvalue perturbation theory for the Hamiltonian matrix associated with the Riccati inequality is used to analyze the extremal points of the solution set.
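To fix notation for the standard (not paper-specific) connection between a Riccati equation and its Hamiltonian matrix: for $A^T X + X A - X B R^{-1} B^T X + Q = 0$ the associated Hamiltonian matrix is $H = \begin{pmatrix} A & -BR^{-1}B^T \\ -Q & -A^T \end{pmatrix}$, whose spectrum is symmetric about the imaginary axis. A small numerical check with hypothetical matrices (not the passivity setup of the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical data for a standard continuous-time ARE.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Hamiltonian matrix associated with A^T X + X A - X B R^{-1} B^T X + Q = 0.
H = np.block([[A, -B @ np.linalg.solve(R, B.T)],
              [-Q, -A.T]])

# Eigenvalues of H come in pairs (lambda, -lambda); the stabilizing solution
# of the ARE corresponds to the stable invariant subspace of H.
print("eig(H):", np.round(np.linalg.eigvals(H), 4))
X = solve_continuous_are(A, B, Q, R)
print("stabilizing solution X:\n", X)
```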

We construct and analyze finite element approximations of the Einstein tensor in dimension $N \ge 3$. We focus on the setting where a smooth Riemannian metric tensor $g$ on a polyhedral domain $\Omega \subset \mathbb{R}^N$ has been approximated by a piecewise polynomial metric $g_h$ on a simplicial triangulation $\mathcal{T}$ of $\Omega$ having maximum element diameter $h$. We assume that $g_h$ possesses single-valued tangential-tangential components on every codimension-1 simplex in $\mathcal{T}$. Such a metric is not classically differentiable in general, but it turns out that one can still attribute meaning to its Einstein curvature in a distributional sense. We study the convergence of the distributional Einstein curvature of $g_h$ to the Einstein curvature of $g$ under refinement of the triangulation. We show that in the $H^{-2}(\Omega)$-norm, this convergence takes place at a rate of $O(h^{r+1})$ when $g_h$ is an optimal-order interpolant of $g$ that is piecewise polynomial of degree $r \ge 1$. We provide numerical evidence to support this claim.
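For reference, the Einstein tensor whose distributional analogue is approximated here is the standard curvature quantity (background definition, not part of the discretization itself):

```latex
% Einstein tensor of a Riemannian metric g:
G \;=\; \operatorname{Ric}(g) \;-\; \tfrac{1}{2}\, R(g)\, g,
% where Ric(g) is the Ricci tensor and R(g) the scalar curvature of g.
```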

We provide numerical bounds on the Crouzeix ratio for KLS matrices $A$ which have a line segment on the boundary of the numerical range. The Crouzeix ratio is the supremum over all polynomials $p$ of the spectral norm of $p(A)$ divided by the maximum absolute value of $p$ on the numerical range of $A$. Our bounds confirm the conjecture that this ratio is less than or equal to $2$. We also give a precise description of these numerical ranges.
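Written out as a formula (with $\psi$ and $\mathcal{P}$ used here only as convenient labels, since the abstract does not name them), the ratio is

```latex
% Crouzeix ratio of a matrix A, with W(A) its numerical range:
\psi(A) \;=\; \sup_{p \in \mathcal{P},\; p \not\equiv 0}
  \frac{\| p(A) \|_2}{\max_{z \in W(A)} |p(z)|},
% where the supremum runs over polynomials p; Crouzeix's conjecture
% asserts psi(A) <= 2 for every square matrix A.
```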

We explore the maximum likelihood degree of a homogeneous polynomial $F$ on a projective variety $X$, $\mathrm{MLD}_F(X)$, which generalizes the concept of Gaussian maximum likelihood degree. We show that $\mathrm{MLD}_F(X)$ is equal to the count of critical points of a rational function on $X$, and give different geometric characterizations of it via topological Euler characteristic, dual varieties, and Chern classes.

Model averaging, which integrates available information by averaging over potential models, has received much attention in the past two decades. Although various model averaging methods have been developed, little of the literature studies the theoretical properties of model averaging from the perspective of stability, and the majority of these methods constrain the model weights to a simplex. The aim of this paper is to introduce stability from statistical learning theory into model averaging. To this end, we define stability, the asymptotic empirical risk minimizer, generalization, and consistency for model averaging and study the relationships among them. Our results indicate that stability can ensure that model averaging has good generalization performance and consistency under reasonable conditions, where consistency means that the model averaging estimator asymptotically minimizes the mean squared prediction error. We also propose an $L_2$-penalty model averaging method that does not restrict the model weights and prove that it is stable and consistent. In order to reduce the impact of tuning parameter selection, we use 10-fold cross-validation to select a candidate set of tuning parameters and perform a weighted average of the estimators of model weights based on estimation errors. A Monte Carlo simulation and an illustrative application demonstrate the usefulness of the proposed method.
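A minimal sketch of unconstrained $L_2$-penalized weight estimation of the kind described above (function names, candidate models, and the fixed penalty below are placeholders; the paper's estimator and its 10-fold cross-validation for the tuning parameter are not reproduced here): stack the candidate models' fitted values as columns of a matrix and solve a ridge-type problem for the weights, with no simplex constraint.

```python
import numpy as np

def l2_model_averaging_weights(preds, y, lam):
    """Ridge-type weights w = argmin ||y - preds @ w||^2 + lam * ||w||^2.
    preds: (n, M) matrix whose columns are candidate models' fitted values.
    No simplex constraint: weights may be negative and need not sum to 1."""
    n, M = preds.shape
    return np.linalg.solve(preds.T @ preds + lam * np.eye(M), preds.T @ y)

# Toy illustration: candidate models are polynomial fits of degree 1, 2, 3.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = np.sin(x) + 0.3 * rng.normal(size=200)
preds = np.column_stack([np.polyval(np.polyfit(x, y, d), x) for d in (1, 2, 3)])
w = l2_model_averaging_weights(preds, y, lam=1.0)
print("estimated model weights:", np.round(w, 3))
```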

We present two new positive results for reliable computation using formulas over physical alphabets of size $q > 2$. First, we show that for logical alphabets of size $\ell = q$, the threshold for denoising using gates subject to $q$-ary symmetric noise with error probability $\varepsilon$ is strictly larger than that for Boolean computation: in the limit of large fan-in $k \rightarrow \infty$, denoising is possible as long as the signals remain distinguishable, i.e. $\varepsilon < (q - 1) / q$. We also determine the point at which generalized majority gates with bounded fan-in fail, and show in particular that reliable computation is possible for $\varepsilon < (q - 1) / (q (q + 1))$ in the case of $q$ prime and fan-in $k = 3$. Second, we provide an example where $\ell < q$, showing that reliable Boolean computation can be performed using $2$-input ternary logic gates subject to symmetric ternary noise of strength $\varepsilon < 1/6$ by using the additional alphabet element for error signaling.
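A toy simulation of the denoising setting only (not the paper's gate constructions or its computation thresholds): pass $k$ copies of a $q$-ary symbol through the $q$-ary symmetric channel with error probability $\varepsilon$ and output the plurality value. For $\varepsilon$ below $(q-1)/q$ the copies remain informative, and the plurality output's error probability falls as the fan-in $k$ grows.

```python
import numpy as np

def qary_symmetric(x, eps, q, rng):
    """q-ary symmetric channel: with probability eps, replace a symbol by a
    uniformly random *different* symbol."""
    flip = rng.random(x.shape) < eps
    noisy = (x + rng.integers(1, q, x.shape)) % q
    return np.where(flip, noisy, x)

def plurality_error(q, eps, k, trials=200_000, seed=0):
    """Error probability of a fan-in-k plurality gate fed k independently
    noised copies of the symbol 0 (ties broken arbitrarily by argmax)."""
    rng = np.random.default_rng(seed)
    copies = qary_symmetric(np.zeros((trials, k), dtype=int), eps, q, rng)
    counts = np.stack([(copies == s).sum(axis=1) for s in range(q)], axis=1)
    return np.mean(counts.argmax(axis=1) != 0)

q, eps = 3, 0.4          # eps < (q - 1) / q = 2/3, so copies stay informative
for k in (1, 3, 9, 27):
    print(f"k = {k:3d}   plurality error ~ {plurality_error(q, eps, k):.4f}")
```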

In this paper, we introduce an Abaqus UMAT subroutine for a family of constitutive models for the viscoelastic response of isotropic elastomers of any compressibility -- including fully incompressible elastomers -- undergoing finite deformations. The models can be chosen to account for a wide range of non-Gaussian elasticities, as well as for a wide range of nonlinear viscosities. From a mathematical point of view, the structure of the models is such that the viscous dissipation is characterized by an internal variable $\textbf{C}^v$, subject to the physically-based constraint $\det\textbf{C}^v=1$, that is the solution of a nonlinear first-order ODE in time. This ODE is solved by means of an explicit Runge-Kutta scheme of high order capable of preserving the constraint $\det\textbf{C}^v=1$ identically. The accuracy and convergence of the code are demonstrated numerically by comparison with an exact solution for several of the Abaqus built-in hybrid finite elements, including the simplicial elements C3D4H and C3D10H and the hexahedral elements C3D8H and C3D20H. The last part of this paper is devoted to showcasing the capabilities of the code by deploying it to compute the homogenized response of a bicontinuous rubber blend.
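A schematic of the time-integration idea, not the UMAT itself and not necessarily the scheme used in the paper: advance the internal variable $\textbf{C}^v$ with an explicit Runge-Kutta step and then rescale it back onto $\det\textbf{C}^v=1$. The rescaling here is one simple way to enforce the constraint; the paper's high-order scheme preserves it by construction. The evolution law below is a placeholder, not the model's actual flow rule.

```python
import numpy as np

def rk4_step(f, Cv, t, dt):
    """One classical explicit RK4 step for dCv/dt = f(t, Cv)."""
    k1 = f(t, Cv)
    k2 = f(t + dt / 2, Cv + dt / 2 * k1)
    k3 = f(t + dt / 2, Cv + dt / 2 * k2)
    k4 = f(t + dt, Cv + dt * k3)
    return Cv + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def project_unimodular(Cv):
    """Rescale a 3x3 tensor so that det(Cv) = 1."""
    return Cv / np.linalg.det(Cv) ** (1.0 / 3.0)

# Placeholder evolution law: relax Cv toward a fixed isochoric state C with
# relaxation time tau (a stand-in for the constitutive flow rule).
def make_flow(C, tau=1.0):
    return lambda t, Cv: (C - Cv) / tau

C = np.diag([1.2, 1.0, 1.0 / 1.2])          # det(C) = 1
Cv, t, dt = np.eye(3), 0.0, 0.05
for _ in range(100):
    Cv = project_unimodular(rk4_step(make_flow(C), Cv, t, dt))
    t += dt
print("det(Cv) =", np.linalg.det(Cv))        # remains 1 up to round-off
```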

Zarankiewicz's classical problem asks for the maximum number of edges in a bipartite graph on $n$ vertices which does not contain the complete bipartite graph $K_{t,t}$. In one of the cornerstones of extremal graph theory, K\H{o}v\'ari, S\'os, and Tur\'an proved an upper bound of $O(n^{2-\frac{1}{t}})$. In a celebrated result, Fox et al. obtained an improved bound of $O(n^{2-\frac{1}{d}})$ for graphs of VC-dimension $d$ (where $d<t$). Basit, Chernikov, Starchenko, Tao and Tran improved the bound for the case of semilinear graphs. At SODA'23, Chan and Har-Peled further improved Basit et al.'s bounds and presented (quasi-)linear upper bounds for several classes of geometrically-defined incidence graphs, including a bound of $O(n \log \log n)$ for the incidence graph of points and pseudo-discs in the plane. In this paper we present a new approach to Zarankiewicz's problem, via $\epsilon$-$t$-nets, a recently introduced generalization of the classical notion of $\epsilon$-nets. We show that the existence of `small'-sized $\epsilon$-$t$-nets implies upper bounds for Zarankiewicz's problem. Using the new approach, we obtain a sharp bound of $O(n)$ for the intersection graph of two families of pseudo-discs, thus both improving and generalizing the result of Chan and Har-Peled from incidence graphs to intersection graphs. We also obtain a short proof of the $O(n^{2-\frac{1}{d}})$ bound of Fox et al., and show improved bounds for several other classes of geometric intersection graphs, including a sharp $O\big(n\frac{\log n}{\log \log n}\big)$ bound for the intersection graph of two families of axis-parallel rectangles.

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
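A minimal sketch of the similarity-space comparison via Shepard's universal law of generalization, under which generalization decays exponentially with distance in psychological space. The embedding, distance, and decay rate below are placeholders for illustration, not the study's fitted model.

```python
import numpy as np

def shepard_similarity(x, y, rate=1.0):
    """Shepard's universal law: generalization decays exponentially with
    distance in psychological (similarity) space."""
    return np.exp(-rate * np.linalg.norm(np.asarray(x) - np.asarray(y)))

def predicted_agreement(human_explanation, ai_explanation, rate=1.0):
    """Toy reading of the theory: the more the AI's saliency-style explanation
    resembles the one the participant would give, the more the participant
    expects the AI to share their decision.  Inputs are vectors in a
    placeholder similarity-space embedding."""
    return shepard_similarity(human_explanation, ai_explanation, rate)

human = np.array([0.8, 0.1, 0.1])   # hypothetical embedded explanation
ai    = np.array([0.6, 0.3, 0.1])
print(f"predicted agreement ~ {predicted_agreement(human, ai):.3f}")
```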
