
In this paper we revisit one of the prototypical tasks for characterizing the structure of noise in quantum devices: estimating the eigenvalues of an $n$-qubit Pauli noise channel. Prior work (Chen et al., 2022) established exponential lower bounds for this task for algorithms with limited quantum memory. We first improve upon their lower bounds and show: (1) Any algorithm without quantum memory must make $\Omega(2^n/\epsilon^2)$ measurements to estimate each eigenvalue within error $\epsilon$. This is tight and implies the randomized benchmarking protocol is optimal, resolving an open question of (Flammia and Wallman, 2020). (2) Any algorithm with $\le k$ ancilla qubits of quantum memory must make $\Omega(2^{(n-k)/3})$ queries to the unknown channel. Crucially, unlike in (Chen et al., 2022), our bound holds even if arbitrary adaptive control and channel concatenation are allowed. In fact, these lower bounds, like those of (Chen et al., 2022), hold even for the easier hypothesis testing problem of determining whether the underlying channel is completely depolarizing or has exactly one other nontrivial eigenvalue. Surprisingly, we show that: (3) With only $k=2$ ancilla qubits of quantum memory, there is an algorithm that solves this hypothesis testing task with high probability using a single measurement. Note that (3) does not contradict (2), as the protocol concatenates exponentially many queries to the channel before the measurement. This result suggests a novel mechanism by which channel concatenation and $O(1)$ qubits of quantum memory could work in tandem to yield striking speedups for quantum process learning that are not possible for quantum state learning.
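As a rough illustration of the memoryless setting (a sketch under standard conventions, not the paper's protocol): a Pauli channel with error distribution $p$ over Pauli strings has eigenvalues $\lambda_b = \sum_a p(a)(-1)^{\langle a,b\rangle}$, where $\langle a,b\rangle = 1$ iff $P_a$ and $P_b$ anticommute. Preparing a $+1$ eigenstate of $P_b$, sending it through the channel once, and measuring $P_b$ yields a $\pm 1$ outcome with mean $\lambda_b$, so $O(1/\epsilon^2)$ single-copy measurements suffice for one fixed eigenvalue; the $\Omega(2^n/\epsilon^2)$ bound concerns recovering all of them. All names below are illustrative.

```python
# Minimal sketch (illustrative, not the paper's code): Monte-Carlo simulation of
# the memoryless estimator for a single Pauli-channel eigenvalue. Pauli strings
# are encoded as (x, z) bit-vector pairs in the usual symplectic convention.
import numpy as np

rng = np.random.default_rng(0)

def anticommute(a, b):
    """Return 1 iff Pauli strings a = (ax, az) and b = (bx, bz) anticommute."""
    ax, az = a
    bx, bz = b
    return int(ax @ bz + az @ bx) % 2

def estimate_eigenvalue(errors, probs, b, shots=100_000):
    """Estimate lam_b from single-copy measurements: the +/-1 outcome flips
    exactly when the sampled Pauli error anticommutes with P_b."""
    idx = rng.choice(len(errors), size=shots, p=probs)
    return np.mean([(-1) ** anticommute(errors[i], b) for i in idx])

n = 3
identity = (np.zeros(n, int), np.zeros(n, int))
x0 = (np.eye(n, dtype=int)[0], np.zeros(n, int))   # X on qubit 0
b = (np.zeros(n, int), np.eye(n, dtype=int)[0])    # P_b = Z on qubit 0
print(estimate_eigenvalue([identity, x0], [0.9, 0.1], b))   # ~ 0.9 - 0.1 = 0.8
```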

Related Content

We investigate the so-called "MMSE conjecture" from Guo et al. (2011), which asserts that two distributions on the real line with the same entropy along the heat flow coincide up to translation and symmetry. Our approach follows the path-breaking contribution of Ledoux (1995), which gave algebraic representations of the derivatives of said entropy in terms of multivariate polynomials. The main contributions in this note are (i) we obtain the leading terms in the polynomials from Ledoux (1995), and (ii) we provide new conditions on the source distributions ensuring the MMSE conjecture holds. As illustrative examples, our findings cover the cases of uniform and Rademacher distributions, for which previous results in the literature were inapplicable.
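As a quick numerical illustration (our sketch, not the note's computations): one can track the differential entropy of $X + \sqrt{t}Z$, with $Z$ standard Gaussian, along the heat flow for two variance-matched sources. The conjecture asserts that matching entropy profiles force the sources to agree up to translation and symmetry; for the Rademacher and uniform sources below the profiles visibly differ.

```python
# Sketch: differential entropy of X + sqrt(t)*Z along the heat flow, computed by
# discretizing densities on a grid and convolving with the heat kernel.
import numpy as np

def entropy_along_heat_flow(density, grid, t):
    dx = grid[1] - grid[0]
    g = np.exp(-grid**2 / (2 * t)) / np.sqrt(2 * np.pi * t)   # heat kernel at time t
    p = np.convolve(density, g, mode="same") * dx             # smoothed density
    p = np.clip(p / (p.sum() * dx), 1e-300, None)             # renormalize, avoid log 0
    return -np.sum(p * np.log(p)) * dx

grid = np.linspace(-8, 8, 4001)
dx = grid[1] - grid[0]
rademacher = np.zeros_like(grid)
for s in (-1.0, 1.0):                                  # two half-mass atoms at +/-1
    rademacher[np.argmin(np.abs(grid - s))] = 0.5 / dx
uniform = (np.abs(grid) <= np.sqrt(3)) / (2 * np.sqrt(3))   # uniform with variance 1
for t in (0.5, 1.0, 2.0):
    print(t, entropy_along_heat_flow(rademacher, grid, t),
             entropy_along_heat_flow(uniform, grid, t))
```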

In this paper we offer a new perspective on the well-established agglomerative clustering algorithm, focusing on recovery of hierarchical structure. We recommend a simple variant of the standard algorithm, in which clusters are merged by maximum average dot product and not, for example, by minimum distance or within-cluster variance. We demonstrate that the tree output by this algorithm provides a bona fide estimate of generative hierarchical structure in data, under a generic probabilistic graphical model. The key technical innovations are to understand how hierarchical information in this model translates into tree geometry which can be recovered from data, and to characterise the benefits of simultaneously growing sample size and data dimension. We demonstrate superior tree recovery performance on real data over existing approaches such as UPGMA, Ward's method, and HDBSCAN.
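A minimal sketch of the recommended merge rule (illustrative, not the paper's implementation): at each step, merge the pair of clusters whose average pairwise dot product is largest.

```python
# Agglomerative clustering by MAXIMUM average dot product, in place of the
# usual smallest-distance rule. O(n^3)-ish; fine as a sketch.
import numpy as np

def average_dot_linkage(X):
    """Return the merge history [(id_i, id_j, score), ...] over rows of X."""
    clusters = {i: [i] for i in range(len(X))}
    G = X @ X.T                                   # all pairwise dot products
    merges, next_id = [], len(X)
    while len(clusters) > 1:
        best, best_pair = -np.inf, None
        ids = list(clusters)
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                i, j = ids[a], ids[b]
                score = G[np.ix_(clusters[i], clusters[j])].mean()
                if score > best:
                    best, best_pair = score, (i, j)
        i, j = best_pair
        clusters[next_id] = clusters.pop(i) + clusters.pop(j)  # merge the pair
        merges.append((i, j, best))
        next_id += 1
    return merges

X = np.random.default_rng(1).normal(size=(10, 5))
print(average_dot_linkage(X)[:3])
```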

In order to coordinate successfully, individuals must first identify a target pattern of behaviour. In this paper we investigate the difficulty of identifying prominent outcomes in two kinds of binary-action coordination problems in social networks: pure coordination games and anti-coordination games. For both environments, we determine the computational complexity of finding a strategy profile that (i) maximises welfare, (ii) maximises welfare subject to being an equilibrium, and (iii) maximises potential. We show that the complexity of these objectives can vary with the type of coordination problem. Objectives (i) and (iii) are tractable in pure coordination games, but are NP-hard in anti-coordination games. Objective (ii), finding the best Nash equilibrium, is NP-hard for both. Our results support the idea that environments in which actions are strategic complements facilitate successful coordination more readily than those in which actions are strategic substitutes.
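For intuition, a brute-force sketch on a toy instance (ours, not the paper's model in full generality): each edge pays both endpoints 1 when they coordinate (or, in the anti-coordination variant, differ), so welfare is twice the number of satisfied edges and, in this symmetric case, serves as an exact potential up to the factor of two. Exhaustive search over the $2^n$ profiles evaluates objectives (i) and (ii) directly; the hardness results explain why little better is possible for anti-coordination.

```python
# Toy sketch: brute-force evaluation of objectives (i) and (ii) for binary-action
# (anti-)coordination games on a graph. Exponential in n by design.
import itertools

def analyze(edges, n, anti=False):
    def satisfied(s, u, v):
        return int((s[u] == s[v]) != anti)       # edge coordinated (or anti-)
    def welfare(s):
        return sum(2 * satisfied(s, u, v) for u, v in edges)  # both endpoints paid
    def payoff(s, i):
        return sum(satisfied(s, u, v) for u, v in edges if i in (u, v))
    def is_nash(s):
        for i in range(n):
            t = list(s); t[i] ^= 1               # unilateral deviation of player i
            if payoff(t, i) > payoff(s, i):
                return False
        return True
    profiles = list(itertools.product([0, 1], repeat=n))
    best = max(profiles, key=welfare)                               # objective (i)
    best_eq = max((s for s in profiles if is_nash(s)), key=welfare) # objective (ii)
    return best, welfare(best), best_eq, welfare(best_eq)

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # triangle plus a pendant edge
print(analyze(edges, 4, anti=True))        # the odd cycle frustrates anti-coordination
```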

In this paper, we consider a model reduction technique for stabilizable and detectable stochastic systems. It is based on a pair of Gramians that we analyze in terms of well-posedness. Subsequently, dominant subspaces of the stochastic systems are identified by exploiting these Gramians. An associated balancing-related scheme is proposed that removes unimportant information from the stochastic dynamics in order to obtain a reduced system. We show that this reduced model preserves important features like stabilizability and detectability. Additionally, a comprehensive error analysis based on the eigenvalues of the Gramian pair product is conducted. This provides an a priori criterion for the reduction quality, which we illustrate in numerical experiments.
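For orientation, here is the classical deterministic analogue of such a Gramian-based scheme: square-root balanced truncation for a stable LTI system, where the eigenvalues of the Gramian product $PQ$ (the squared Hankel singular values) play the role of the a priori error criterion. This is a standard textbook sketch, not the paper's stochastic construction, whose Gramians solve generalized Lyapunov equations instead.

```python
# Sketch: square-root balanced truncation for a stable LTI system (A, B, C).
# The tail of the Hankel singular values s bounds the reduction error a priori.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    P = solve_continuous_lyapunov(A, -B @ B.T)     # reachability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                      # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ S                          # balancing projection matrices
    W = Lq @ U[:, :r] @ S
    return W.T @ A @ T, W.T @ B, C @ T, s          # reduced (Ar, Br, Cr)

rng = np.random.default_rng(0)
n = 8
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))     # stable by construction (small perturbation)
B = rng.normal(size=(n, 2)); C = rng.normal(size=(1, n))
Ar, Br, Cr, s = balanced_truncation(A, B, C, r=3)
print(s)   # eigenvalues of P @ Q equal s**2; the discarded tail gauges the error
```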

Completely random measures (CRMs) and their normalizations (NCRMs) offer flexible models in Bayesian nonparametrics. But their infinite dimensionality presents challenges for inference. Two popular finite approximations are truncated finite approximations (TFAs) and independent finite approximations (IFAs). While the former have been well-studied, IFAs lack similarly general bounds on approximation error, and there has been no systematic comparison between the two options. In the present work, we propose a general recipe to construct practical finite-dimensional approximations for homogeneous CRMs and NCRMs, in the presence or absence of power laws. We call our construction the automated independent finite approximation (AIFA). Relative to TFAs, we show that AIFAs facilitate more straightforward derivations and use of parallel computing in approximate inference. We upper bound the approximation error of AIFAs for a wide class of common CRMs and NCRMs -- and thereby develop guidelines for choosing the approximation level. Our lower bounds in key cases suggest that our upper bounds are tight. We prove that, for worst-case choices of observation likelihoods, TFAs are more efficient than AIFAs. Conversely, we find that in real-data experiments with standard likelihoods, AIFAs and TFAs perform similarly. Moreover, we demonstrate that AIFAs can be used for hyperparameter estimation even when other potential IFA options struggle or do not apply.
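To fix ideas, a classical special case of an independent finite approximation (a sketch; AIFA in the paper generalizes this recipe to a broad class of (N)CRMs, with or without power laws): $K$ i.i.d. Gamma$(\alpha/K, 1)$ weights, normalized, converge in distribution to the weights of a Dirichlet process with concentration $\alpha$ as $K \to \infty$. Because the weights are independent, they can be sampled and updated in parallel.

```python
# Sketch: independent finite approximation of a Dirichlet process (a normalized
# gamma CRM) via K iid Gamma(alpha/K, 1) weights. Names are illustrative.
import numpy as np

def ifa_dirichlet_process(alpha, K, base_sampler, rng):
    w = rng.gamma(alpha / K, 1.0, size=K)   # independent unnormalized CRM weights
    w /= w.sum()                            # normalize -> NCRM (approximate DP)
    atoms = base_sampler(K)                 # iid atoms from the base measure
    return w, atoms

rng = np.random.default_rng(0)
w, atoms = ifa_dirichlet_process(alpha=2.0, K=1000,
                                 base_sampler=lambda k: rng.normal(size=k), rng=rng)
print(w.max(), (w > 1e-4).sum())            # a few large weights, many tiny ones
```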

In this paper we propose an automatic trajectory data reconciliation method to correct common errors in vision-based vehicle trajectory data. Given "raw" vehicle detection and tracking information from automatic video processing algorithms, we propose a pipeline including (a) an online data association algorithm to match fragments that describe the same object (vehicle), formulated as a min-cost network circulation problem on a graph, and (b) a one-step trajectory rectification procedure, formulated as a quadratic program, to enhance the raw detection data. The pipeline leverages vehicle dynamics and physical constraints to associate tracked objects when they become fragmented, remove measurement noise and outliers, and impute missing data due to fragmentation. We assess the capability of the proposed two-step pipeline to reconstruct three benchmark datasets: (1) a microsimulation dataset that is artificially downgraded to replicate upstream errors, (2) a 15-minute NGSIM dataset that is manually perturbed, and (3) tracking data consisting of 3 scenes from collections of video data recorded from 16-17 cameras on a section of the I-24 MOTION system, compared against the corresponding manually labeled ground-truth vehicle bounding boxes. All of the experiments show that the reconciled trajectories improve accuracy on all tested input data for a wide range of measures. Lastly, we present the design of a software architecture that is currently deployed on the full-scale I-24 MOTION system, consisting of 276 cameras covering 4.2 miles of I-24, and demonstrate the scalability of the proposed reconciliation pipeline to process high-volume data on a daily basis.
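A stripped-down sketch of the rectification step (ours, for illustration): fit a smooth trajectory $x$ to raw noisy positions $y$ by minimizing $\|x - y\|^2 + \lambda\|D_3 x\|^2$, where $D_3$ takes third differences (jerk); the unconstrained problem has the closed form $x = (I + \lambda D_3^\top D_3)^{-1} y$. The paper's quadratic program additionally handles missing data, outliers, and physical constraints.

```python
# Sketch: jerk-penalized least-squares smoothing of a noisy 1-D trajectory.
import numpy as np

def rectify(y, lam=100.0):
    T = len(y)
    D = np.diff(np.eye(T), n=3, axis=0)      # third-difference (jerk) operator
    return np.linalg.solve(np.eye(T) + lam * D.T @ D, y)

t = np.linspace(0, 10, 200)
truth = 30 * t + 2 * np.sin(t)               # a smooth ground-truth position
y = truth + np.random.default_rng(0).normal(0, 1.5, size=t.shape)
x = rectify(y)
print(np.abs(x - truth).mean(), np.abs(y - truth).mean())  # smoothed error is smaller
```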

In this paper, we introduce a new coding and decoding structure for enhancing the reliability and performance of polar codes, specifically at low error rates. We achieve this by concatenating two polar codes in series to create robust error-correcting codes. The primary objective is to optimize the behavior of the individual elementary codes within the polar codes. In this structure, we incorporate interleaving, a technique that rearranges bits to maximize the separation between originally neighboring symbols. This rearrangement converts error clusters into errors distributed across the entire sequence. To evaluate performance, we model a communication system with seven components: an information source, a channel encoder, a modulator, a channel, a demodulator, a channel decoder, and a destination. This work focuses on evaluating the bit error rate (BER) of the codes for different block lengths and code rates. Finally, we compare the BER performance of our proposed method with that of standard polar codes.
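A minimal sketch of the interleaving step (illustrative, not the paper's exact scheme): a row-in/column-out block interleaver spreads a burst of adjacent channel errors across the whole block, so the decoder sees scattered single errors instead of a cluster.

```python
# Sketch: block interleaver (write row-wise, read column-wise) and its inverse.
import numpy as np

def interleave(bits, rows, cols):
    return np.asarray(bits).reshape(rows, cols).T.ravel()

def deinterleave(bits, rows, cols):
    return np.asarray(bits).reshape(cols, rows).T.ravel()

x = np.arange(12)
y = interleave(x, 3, 4)
assert np.array_equal(deinterleave(y, 3, 4), x)
# a burst hitting y[0:3] maps back to positions 0, 4, 8 of x: the burst is dispersed
print(y)   # [ 0  4  8  1  5  9  2  6 10  3  7 11]
```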

In this paper we propose a definition of the distributional Riemann curvature tensor in dimension $N\geq 2$ if the underlying metric tensor $g$ defined on a triangulation $\mathcal{T}$ possesses only single-valued tangential-tangential components on codimension 1 simplices. We analyze the convergence of the curvature approximation in the $H^{-2}$-norm if a sequence of interpolants $g_h$ of polynomial order $k\geq 0$ of a smooth metric $g$ is given. We show that for dimension $N=2$ convergence rates of order $\mathcal{O}(h^{k+1})$ are obtained. For $N\geq 3$, convergence holds only in the case $k\geq 1$. Numerical examples demonstrate that our theoretical results are sharp. By choosing appropriate test functions we show that the distributional Gauss curvature in 2D and the distributional scalar curvature in any dimension are recovered. Further, a first definition of the distributional Ricci curvature tensor in arbitrary dimension is derived, for which our analysis is applicable.
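For the lowest-order two-dimensional case ($k = 0$, $N = 2$), the distributional Gauss curvature is known to reduce to the classical angle defect of Regge calculus; the following sketch (ours, not the paper's code) computes it at an interior vertex from edge lengths alone, via the law of cosines.

```python
# Sketch: angle defect at an interior vertex of a piecewise-flat triangulation.
import numpy as np

def angle(l_opp, l1, l2):
    """Interior angle opposite the edge of length l_opp (law of cosines)."""
    return np.arccos((l1**2 + l2**2 - l_opp**2) / (2 * l1 * l2))

def angle_defect(incident):
    """incident: list of (l_opp, l1, l2) edge-length triples at the vertex."""
    return 2 * np.pi - sum(angle(*tri) for tri in incident)

flat = [(1.0, 1.0, 1.0)] * 6   # six equilateral triangles: flat, zero curvature
cone = [(1.0, 1.0, 1.0)] * 5   # five: a cone point with positive curvature
print(angle_defect(flat))      # ~0.0
print(angle_defect(cone))      # ~pi/3 = 1.047...
```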

To improve the predictability of complex computational models in experimentally unknown domains, we propose a Bayesian statistical machine learning framework utilizing the Dirichlet distribution that combines the results of several imperfect models. This framework can be viewed as an extension of Bayesian stacking. To illustrate the method, we study the ability of Bayesian model averaging and mixing techniques to mine nuclear masses. We show that global and local mixtures of models achieve excellent performance on both prediction accuracy and uncertainty quantification and are preferable to classical Bayesian model averaging. Additionally, our statistical analysis indicates that mixing model predictions directly, rather than mixing corrected models, leads to more robust extrapolations.
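A minimal sketch of Dirichlet-weighted model mixing (illustrative; the concentration vector below is hypothetical, standing in for whatever the data-driven posterior would provide): draws of simplex weights mix the predictions of all models at once, in contrast to Bayesian model averaging, which commits to a single model per draw.

```python
# Sketch: mix M imperfect models' predictions with Dirichlet simplex weights.
import numpy as np

rng = np.random.default_rng(0)

def mix_predictions(preds, alpha, n_draws=4000):
    """preds: (M, T) model predictions; alpha: (M,) Dirichlet concentration."""
    w = rng.dirichlet(alpha, size=n_draws)          # (n_draws, M) simplex weights
    mixed = w @ preds                               # each draw mixes all models
    return mixed.mean(axis=0), mixed.std(axis=0)    # point estimate + uncertainty

x = np.linspace(0, 1, 50)
preds = np.stack([x**2, x**2 + 0.1 * (x - 0.5), x**2 - 0.05])   # three toy "models"
alpha = np.array([8.0, 1.0, 1.0])   # hypothetical: data strongly favor model 1
mean, sd = mix_predictions(preds, alpha)
print(mean[:3], sd[:3])
```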

This paper does not describe a working system. Instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.
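A toy sketch of the central representational idea (ours, not a working GLOM): at a given level, maximal runs of columns whose embedding vectors (nearly) agree form "islands", each standing for one node of the parse tree.

```python
# Sketch: find islands of near-identical vectors along a 1-D sequence of columns.
import numpy as np

def islands(vectors, thresh=0.98):
    """Group adjacent columns into islands where cosine similarity >= thresh."""
    groups, current = [], [0]
    for i in range(1, len(vectors)):
        a, b = vectors[i - 1], vectors[i]
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        if cos >= thresh:
            current.append(i)
        else:
            groups.append(current); current = [i]
    groups.append(current)
    return groups

v1, v2 = np.ones(8), np.r_[np.ones(4), -np.ones(4)]
cols = [v1, v1, v1, v2, v2]     # two islands -> two parse-tree nodes
print(islands(cols))            # [[0, 1, 2], [3, 4]]
```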
