
The error threshold of a one-parameter family of quantum channels is defined as the largest noise level such that the quantum capacity of the channel remains positive. This in turn guarantees the existence of a quantum error correction code for noise modeled by that channel. Discretizing the single-qubit errors leads to the important family of Pauli quantum channels; curiously, multipartite entangled states can increase the threshold of these channels beyond the so-called hashing bound, an effect termed superadditivity of coherent information. In this work, we divide the simplex of Pauli channels into one-parameter families and compute numerical lower bounds on their error thresholds. We find substantial increases of error thresholds relative to the hashing bound for large regions in the Pauli simplex corresponding to biased noise, which is a realistic noise model in promising quantum computing architectures. The error thresholds are computed on the family of graph states, a special type of stabilizer state. In order to determine the coherent information of a graph state, we devise an algorithm that exploits the symmetries of the underlying graph, resulting in a substantial computational speed-up. This algorithm uses tools from computational group theory and allows us to consider symmetric graph states on a large number of vertices. Our algorithm works particularly well for repetition codes and concatenated repetition codes (or cat codes), for which our results provide the first comprehensive study of superadditivity for arbitrary Pauli channels. In addition, we identify a novel family of quantum codes based on tree graphs. The error thresholds of these tree graph states outperform repetition and cat codes in large regions of the Pauli simplex, and hence form a new code family with desirable error correction properties.
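
As a concrete reference point for the threshold computations, recall that the hashing bound of a Pauli channel with error probabilities $(p_I, p_X, p_Y, p_Z)$ is the single-letter coherent information $1 - H(p_I, p_X, p_Y, p_Z)$. The following sketch (a minimal illustration, not the paper's graph-state algorithm; all function names are ours) locates the hashing threshold of the depolarizing family by root finding.

```python
import numpy as np
from scipy.optimize import brentq

def shannon_entropy(probs):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def hashing_bound(p_x, p_y, p_z):
    """Single-letter coherent information of a Pauli channel."""
    p_i = 1.0 - p_x - p_y - p_z
    return 1.0 - shannon_entropy([p_i, p_x, p_y, p_z])

# Depolarizing family: p_x = p_y = p_z = p / 3. The hashing threshold is the
# noise level at which the single-letter coherent information hits zero.
threshold = brentq(lambda p: hashing_bound(p / 3, p / 3, p / 3), 1e-6, 0.25)
print(f"hashing threshold of the depolarizing channel: {threshold:.4f}")  # ~0.1893
```

Graph-state codes can push the true threshold above this value, which is the superadditivity effect the abstract describes.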

Related content

Polar codes are normally designed based on the reliability of the sub-channels in the polarized vector channel. There are various methods, of diverse complexity and accuracy, for evaluating the reliability of the sub-channels. However, designing polar codes solely based on sub-channel reliability may result in poor Hamming distance properties. In this work, we propose a different approach to designing the information set for polar codes and PAC codes, where the objective is to reduce the number of minimum-weight codewords (a.k.a. the error coefficient) of a code designed for maximum reliability. This approach is based on a coset-wise characterization of the rows of the polar transform $\mathbf{G}_N$ involved in the formation of the minimum-weight codewords. Our analysis capitalizes on the properties of the polar transform based on its row and column indices. The numerical results show that the designed codes outperform PAC codes and CRC-Polar codes at the practical block error rates of $10^{-2}$-$10^{-3}$. Furthermore, a by-product of the combinatorial properties analyzed in this paper is an alternative method for enumerating the minimum-weight codewords.
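
A small illustration of the row-index combinatorics the abstract alludes to: in $\mathbf{G}_N = \mathbf{F}^{\otimes n}$ with kernel $\mathbf{F} = \bigl[\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\bigr]$, the weight of row $i$ equals $2^{w(i)}$, where $w(i)$ is the Hamming weight of the binary expansion of $i$. The sketch below (our own toy check, not the paper's coset-wise construction) verifies this property numerically.

```python
import numpy as np

def polar_transform(n):
    """n-fold Kronecker power of the 2x2 polar kernel F = [[1, 0], [1, 1]]."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)
    return G

n = 4
G = polar_transform(n)            # G_N with N = 2**n (no bit-reversal)
row_weights = G.sum(axis=1)
for i in range(2**n):
    # Weight of row i equals 2 ** popcount(i).
    assert row_weights[i] == 2 ** bin(i).count("1")
print("row weights:", row_weights.tolist())
```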

We develop an a posteriori error analysis for a novel quantity of interest (QoI) in evolutionary partial differential equations (PDEs). Specifically, the QoI is the first time at which a functional of the solution to the PDE achieves a threshold value signifying a particular event; it differs from classical QoIs, which are modeled as bounded linear functionals. We use Taylor's theorem and adjoint-based analysis to derive computable and accurate error estimates for linear parabolic and hyperbolic PDEs, using the heat equation and the linearized shallow water equations (SWE) for the parabolic and hyperbolic cases, respectively. Numerical examples illustrate the accuracy of the error estimates.
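
To make the nonstandard QoI concrete, here is a minimal sketch (an assumed toy setup, not the paper's adjoint-based estimator): an explicit finite-difference solve of the 1D heat equation that records the first time the functional $Q(u) = \max_x u(x,t)$ crosses a threshold. With initial data $\sin(\pi x)$, the exact crossing time of the value $0.5$ is $\ln 2 / \pi^2$.

```python
import numpy as np

nx, L, T = 101, 1.0, 0.5
dx = L / (nx - 1)
dt = 0.4 * dx**2                 # satisfies the explicit stability limit dt <= dx^2 / 2
x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)            # homogeneous Dirichlet boundary conditions

threshold, t, hit_time = 0.5, 0.0, None
while t < T:
    u[1:-1] += dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])  # forward Euler step
    t += dt
    if u.max() < threshold:      # first time the QoI functional crosses the threshold
        hit_time = t
        break

print(f"first-crossing time: {hit_time:.4f} (exact: {np.log(2) / np.pi**2:.4f})")
```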

We propose the first near-optimal quantum algorithm for estimating, in Euclidean norm, the mean of a vector-valued random variable with finite mean and covariance. Our result aims to extend the theory of multivariate sub-Gaussian estimators to the quantum setting. Unlike in the classical setting, where any univariate estimator can be turned into a multivariate estimator with at most a logarithmic overhead in the dimension, no similar result holds in the quantum setting. Indeed, Heinrich ruled out the existence of a quantum advantage for the mean estimation problem when the sample complexity is smaller than the dimension. Our main result is to show that, outside this low-precision regime, there is a quantum estimator that outperforms any classical estimator. Our approach is substantially more involved than in the univariate setting, where most quantum estimators rely only on phase estimation. We exploit a variety of additional algorithmic techniques such as amplitude amplification, the Bernstein-Vazirani algorithm, and quantum singular value transformation. Our analysis also uses concentration inequalities for multivariate truncated statistics. We develop our quantum estimators in two different input models that have previously appeared in the literature. The first provides coherent access to the binary representation of the random variable and encompasses the classical setting. In the second model, the random variable is directly encoded into the phases of quantum registers. This model arises naturally in many quantum algorithms but is often incomparable to having classical samples. We adapt our techniques to these two settings and show that the second model is strictly weaker for solving the mean estimation problem. Finally, we describe several applications of our algorithms, notably in measuring the expectation values of commuting observables and in the field of machine learning.
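
For contrast with the quantum estimators, the classical multivariate sub-Gaussian baseline the abstract refers to can be sketched as median-of-means with a geometric median over block means (a standard construction; all names below are ours).

```python
import numpy as np

def geometric_median(points, iters=200, eps=1e-9):
    """Weiszfeld iteration for the geometric median of the rows of `points`."""
    y = points.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - y, axis=1), eps)
        y_new = (points / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

def multivariate_median_of_means(samples, n_blocks):
    """Average within blocks, then take the geometric median of block means."""
    block_means = np.array([b.mean(axis=0) for b in np.array_split(samples, n_blocks)])
    return geometric_median(block_means)

rng = np.random.default_rng(0)
heavy = rng.standard_t(df=2.5, size=(10_000, 5))  # heavy tails, finite covariance, mean 0
print("median-of-means error:", np.linalg.norm(multivariate_median_of_means(heavy, 50)))
print("empirical-mean error: ", np.linalg.norm(heavy.mean(axis=0)))
```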

Graphical models are useful tools for describing structured high-dimensional probability distributions. The development of efficient algorithms for learning graphical models from the least amount of data remains an active research topic. Reconstruction of graphical models that describe the statistics of discrete variables is a particularly challenging problem, for which the maximum likelihood approach is intractable. In this work, we provide the first sample-efficient method based on the Interaction Screening framework that allows one to provably learn fully general discrete factor models with node-specific discrete alphabets and multi-body interactions, specified in an arbitrary basis. We identify a single condition related to model parametrization that leads to rigorous guarantees on the recovery of model structure and parameters in any error norm, and is readily verifiable for a large class of models. Importantly, our bounds make an explicit distinction between parameters that are proper to the model and priors used as an input to the algorithm. Finally, we show that the Interaction Screening framework includes all models previously considered in the literature as special cases, and that for these models our analysis yields a systematic improvement in sample complexity.
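
To give a flavor of the framework (our own minimal Ising instance; the paper handles general discrete alphabets and multi-body interactions), the Interaction Screening Objective for a node $u$ is the empirical average of $\exp\bigl(-\sigma_u \sum_{j \ne u} \theta_{uj} \sigma_j\bigr)$, minimized with an $\ell_1$ penalty:

```python
import itertools
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Ground-truth 4-spin Ising model with a single coupling J_{01} = 0.8; sample
# exactly by enumerating all 16 configurations of spins in {-1, +1}.
J = np.zeros((4, 4)); J[0, 1] = J[1, 0] = 0.8
states = np.array(list(itertools.product([-1, 1], repeat=4)))
energy = -0.5 * np.einsum("si,ij,sj->s", states, J, states)
p = np.exp(-energy); p /= p.sum()
spins = states[rng.choice(len(states), size=20_000, p=p)]

def iso(theta, u):
    """Interaction Screening Objective for node u."""
    others = np.delete(spins, u, axis=1)
    return np.mean(np.exp(-spins[:, u] * (others @ theta)))

lam = 0.01  # l1 regularization weight
res = minimize(lambda th: iso(th, 0) + lam * np.abs(th).sum(),
               np.zeros(3), method="Powell")
print("couplings of node 0 to nodes 1, 2, 3:", np.round(res.x, 2))  # approx. [0.8, 0, 0]
```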

There has been a rich development of vector autoregressive (VAR) models for modeling temporally correlated multivariate outcomes. However, the existing VAR literature has largely focused on single-subject parametric analysis, with some recent extensions to multi-subject modeling with known subgroups. Motivated by the need for flexible Bayesian methods that can pool information across heterogeneous samples in an unsupervised manner, we develop a novel class of non-parametric Bayesian VAR models based on heterogeneous multi-subject data. In particular, we propose a product of Dirichlet process mixture priors that enables separate clustering at multiple scales, which results in partially overlapping clusters that provide greater flexibility. We develop several variants of the method to cater to varying levels of heterogeneity. We implement an efficient posterior computation scheme and establish posterior consistency properties under reasonable assumptions on the true density. Extensive numerical studies show distinct advantages over competing methods in terms of estimating model parameters and identifying the true clustering and sparsity structures. Our analysis of resting-state fMRI data from the Human Connectome Project reveals biologically interpretable differences between distinct fluid intelligence groups, as well as reproducible parameter estimates. In contrast, single-subject VAR analyses followed by permutation testing result in negligible differences, which is biologically implausible.
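
For orientation, the backbone of each mixture component is an ordinary VAR(1) fit; the sketch below (a per-subject least-squares baseline under assumed toy dynamics, not the paper's nonparametric Bayesian machinery) shows the multi-subject structure the priors are designed to pool.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two latent clusters of 2x2 transition matrices shared across subjects.
A1 = np.array([[0.5, 0.2], [0.0, 0.4]])
A2 = np.array([[-0.3, 0.0], [0.1, 0.6]])

def simulate_var1(A, T=2_000, noise=0.1):
    y = np.zeros((T, 2))
    for t in range(1, T):
        y[t] = A @ y[t - 1] + noise * rng.standard_normal(2)
    return y

def ols_var1(y):
    """Least-squares estimate of the VAR(1) transition matrix."""
    X, Y = y[:-1], y[1:]
    return np.linalg.lstsq(X, Y, rcond=None)[0].T

for subject, A in enumerate([A1, A1, A2]):
    print(f"subject {subject}:\n{np.round(ols_var1(simulate_var1(A)), 2)}")
```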

Physical platforms such as trapped ions suffer from coherent noise, where errors manifest as rotations about a particular axis and can accumulate over time. We investigate passive mitigation through decoherence-free subspaces, requiring the noise to preserve the code space of a stabilizer code and to act as the logical identity operator on the protected information. To this end, we develop necessary and sufficient conditions for all transversal $Z$-rotations to preserve the code space of a stabilizer code, which require the weight-$2$ $Z$-stabilizers to cover all the qubits that are in the support of some $X$-component. Furthermore, the weight-$2$ $Z$-stabilizers generate a direct product of single-parity-check codes with even block length. By adjusting the size of these components, we construct a large family of QECCs, oblivious to coherent noise, that includes the $[[4L^2, 1, 2L]]$ Shor codes. Moreover, given $M$ even and any $[[n,k,d]]$ stabilizer code, we can construct an $[[Mn, k, \ge d]]$ stabilizer code that is oblivious to coherent noise. If we require only that transversal $Z$-rotations preserve the code space up to some finite level $l$ in the Clifford hierarchy, then we can construct higher-level gates necessary for universal quantum computation. The $Z$-stabilizers supported on each non-zero $X$-component form a classical binary code $C$, which is required to contain a self-dual code, and the classical Gleason's theorem constrains its weight enumerator. The conditions for a stabilizer code to be preserved by transversal $2\pi/2^l$ $Z$-rotations at levels $4 \le l \le l_{\max} < \infty$ of the Clifford hierarchy lead to generalizations of Gleason's theorem that may be of independent interest to classical coding theorists.
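
A small numerical check of the covering condition (an assumed toy instance, not the paper's general construction): a $[[4,1,2]]$ code whose codewords are supported on odd-parity qubit pairs is fixed by every transversal $Z$-rotation, because $Z_1 + Z_2$ vanishes on each pair.

```python
import numpy as np

# Codewords of a [[4,1,2]] code supported on the odd-parity pair states
# |01> and |10>; stabilizers are -Z1Z2, -Z3Z4 and X1X2X3X4.
e01, e10 = np.zeros(4), np.zeros(4)
e01[0b01], e10[0b10] = 1.0, 1.0
pair_p = (e01 + e10) / np.sqrt(2)
pair_m = (e01 - e10) / np.sqrt(2)
logical0, logical1 = np.kron(pair_p, pair_p), np.kron(pair_m, pair_m)

def z_total_diag(n):
    """Diagonal of Z_1 + ... + Z_n on n qubits."""
    d = np.zeros(2**n)
    for i in range(n):
        term = np.ones(1)
        for j in range(n):
            term = np.kron(term, np.array([1.0, -1.0]) if j == i else np.ones(2))
        d += term
    return d

theta = 0.37                                # arbitrary angle; the claim holds for all
U = np.exp(1j * theta * z_total_diag(4))    # transversal exp(i*theta*Z) is diagonal
for psi in (logical0, logical1):
    assert np.allclose(U * psi, psi)        # acts as the logical identity
print("every transversal Z-rotation preserves this [[4,1,2]] code space")
```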

We present an analytical framework for channel estimation and data detection in massive multiple-input multiple-output uplink systems with 1-bit analog-to-digital converters (ADCs) and i.i.d. Rayleigh fading. First, we provide closed-form expressions for the mean squared error (MSE) of the channel estimation, considering the state-of-the-art linear minimum MSE estimator and the class of scaled least-squares estimators. For the data detection, we provide closed-form expressions for the expected value and the variance of the estimated symbols when maximum ratio combining is adopted, which can be exploited to efficiently implement minimum-distance detection and, potentially, to design the set of transmit symbols. Our analytical findings depend explicitly on key system parameters such as the signal-to-noise ratio (SNR), the number of user equipments, and the pilot length, thus enabling a precise characterization of the performance of channel estimation and data detection with 1-bit ADCs. The proposed analysis highlights a fundamental SNR trade-off, according to which operating at the right noise level significantly enhances the system performance.
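
The SNR trade-off can be reproduced in a few lines of Monte Carlo (an assumed toy setup: single user, all-ones pilot, scale-invariant error metric; the paper derives exact closed forms instead).

```python
import numpy as np

rng = np.random.default_rng(3)

def normalized_mse(snr_db, n_ant=64, pilots=16, trials=500):
    """Normalized MSE of 1-bit channel estimation after an optimal scalar fit."""
    snr, acc = 10 ** (snr_db / 10), 0.0
    for _ in range(trials):
        h = (rng.standard_normal(n_ant) + 1j * rng.standard_normal(n_ant)) / np.sqrt(2)
        w = (rng.standard_normal((n_ant, pilots))
             + 1j * rng.standard_normal((n_ant, pilots))) / np.sqrt(2)
        y = np.sqrt(snr) * h[:, None] + w            # all-ones pilot sequence
        r = np.sign(y.real) + 1j * np.sign(y.imag)   # 1-bit ADCs on I and Q rails
        h_hat = r.mean(axis=1)                       # least-squares combining
        rho = np.abs(h_hat.conj() @ h) / (np.linalg.norm(h_hat) * np.linalg.norm(h))
        acc += 1.0 - rho**2                          # scale-invariant estimation error
    return acc / trials

for snr_db in (-10, 0, 10, 30):
    print(f"SNR {snr_db:>3} dB: normalized MSE ~ {normalized_mse(snr_db):.3f}")
# The error is not monotone in SNR: with 1-bit ADCs, a moderate noise level
# acts as dither and helps recover amplitude information.
```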

This article is motivated by the study of multisensory effects on brain activity in intracranial electroencephalography (iEEG) experiments. Differential brain activities in response to multisensory stimulus presentations are zero in most regions and non-zero in some local regions, yielding locally sparse functions. Such studies are essentially a function-on-scalar regression problem, with interest focused not only on estimating nonparametric functions but also on recovering their supports. We propose a weighted group bridge approach for simultaneous function estimation and support recovery in function-on-scalar mixed-effect models, while accounting for heterogeneity present in functional data. We use B-splines to transform the sparsity of functions into the sparsity of coefficient vectors of increasing dimension, and propose a fast non-convex optimization algorithm using a nested alternating direction method of multipliers (ADMM) for estimation. Large sample properties are established. In particular, we show that the estimated coefficient functions are rate-optimal in the minimax sense under the $L_2$ norm and exhibit a phase transition phenomenon. For support estimation, we derive a convergence rate under the $L_{\infty}$ norm that leads to a sparsistency property under $\delta$-sparsity, and provide a simple sufficient regularity condition under which a strict sparsistency property is established. An adjusted extended Bayesian information criterion is proposed for parameter tuning. The developed method is illustrated through simulations and an application to a novel iEEG dataset to study multisensory integration. We integrate the proposed method into RAVE, an R package that is gaining popularity in the iEEG community.
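
The B-spline reduction can be sketched in a few lines (our own simplified fixed-effects example with a plain $\ell_1$ penalty; the paper's weighted group bridge and mixed-effects structure are richer).

```python
import numpy as np
from scipy.interpolate import BSpline
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 100)
knots = np.r_[[0.0] * 4, np.linspace(0.0, 1.0, 12)[1:-1], [1.0] * 4]
basis = BSpline.design_matrix(t, knots, k=3).toarray()   # (100, 14) cubic basis

beta_true = np.where(t > 0.6, np.sin(4 * np.pi * (t - 0.6)), 0.0)  # locally sparse
n = 200
x = rng.standard_normal(n)
y = np.outer(x, beta_true) + 0.1 * rng.standard_normal((n, 100))

# Function-on-scalar model y_i(t) = x_i * beta(t) + noise, stacked into one
# sparse regression over the spline coefficients of beta.
X = np.kron(x[:, None], basis)                           # (n * 100, 14)
fit = Lasso(alpha=0.01, fit_intercept=False).fit(X, y.ravel())
beta_hat = basis @ fit.coef_
print("max |beta_hat| on the true null region t < 0.5:",
      np.abs(beta_hat[t < 0.5]).max().round(4))
```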

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
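
The scaling question is easy to visualize with shared weights (an assumed toy residual map, not the paper's trained networks): with $1/L$ scaling, the residual recursion is a forward-Euler discretization of an ODE and converges as depth grows.

```python
import numpy as np

f = lambda x: np.tanh(x)          # smooth residual map shared across layers
x0 = 0.5

for L in (10, 100, 1_000, 10_000):
    x = x0
    for _ in range(L):
        x = x + f(x) / L          # one residual block with 1/L weight scaling
    print(f"depth L = {L:>6}: output {x:.6f}")
# The outputs converge to the time-1 flow of dx/dt = tanh(x). Other weight
# scalings yield an SDE limit or no continuum limit, as the abstract notes.
```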

We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU nonlinearity is the expected transformation of a stochastic regularizer which randomly applies the identity or zero map to a neuron's input. The GELU nonlinearity weights inputs by their magnitude, rather than gates inputs by their sign as in ReLUs. We perform an empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations and find performance improvements across all considered computer vision, natural language processing, and speech tasks.
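
The exact definition and the commonly used tanh approximation from the paper:

```python
import numpy as np
from scipy.stats import norm

def gelu(x):
    """Exact GELU: x * Phi(x), with Phi the standard normal CDF."""
    return x * norm.cdf(x)

def gelu_tanh(x):
    """Tanh approximation given in the GELU paper."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

x = np.linspace(-3.0, 3.0, 7)
print(np.round(gelu(x), 4))
print(np.round(gelu_tanh(x), 4))   # agrees with the exact form to ~1e-3
```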
