
The performance of a quantum information processing protocol is ultimately judged by distinguishability measures that quantify how distinguishable the actual result of the protocol is from the ideal one. The most prominent distinguishability measures are those based on the fidelity and trace distance, due to their physical interpretations. In this paper, we propose and review several algorithms for estimating distinguishability measures based on trace distance and fidelity. The algorithms can be used for distinguishing quantum states, channels, and strategies (the last also known in the literature as ``quantum combs''). The fidelity-based algorithms offer novel physical interpretations of these distinguishability measures in terms of the maximum probability with which a single prover (or competing provers) can convince a verifier to accept the outcome of an associated computation. We simulate many of these algorithms using a variational approach with parameterized quantum circuits, and we find that the simulations converge well in both the noiseless and noisy scenarios, for all examples considered. Furthermore, the noisy simulations exhibit resilience to parameter noise. Finally, we establish a strong relationship between various quantum computational complexity classes and distance estimation problems.
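
As a concrete reference point for the quantities these algorithms estimate, here is a minimal NumPy sketch (not the paper's variational circuits) that computes the trace distance and Uhlmann fidelity of two density matrices directly:

```python
# Minimal sketch of the two distinguishability measures discussed above,
# computed classically for small density matrices rho and sigma.
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) * ||rho - sigma||_1 for Hermitian rho, sigma."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def fidelity(rho, sigma):
    """F(rho, sigma) = ||sqrt(rho) sqrt(sigma)||_1 (square-root fidelity)."""
    prod = sqrtm(rho) @ sqrtm(sigma)
    return np.sum(np.linalg.svd(prod, compute_uv=False)).real

# Example: a pure state versus the maximally mixed single-qubit state.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
sigma = np.eye(2, dtype=complex) / 2
print(trace_distance(rho, sigma))  # 0.5
print(fidelity(rho, sigma))        # ~0.7071 = 1/sqrt(2)
```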

Related Content

There has recently been much interest in Gaussian processes on linear networks and, more generally, on compact metric graphs. One proposed strategy for defining such processes on a metric graph $\Gamma$ is through a covariance function that is isotropic in a metric on the graph. Another is through a fractional-order differential equation $L^\alpha (\tau u) = \mathcal{W}$ on $\Gamma$, where $L = \kappa^2 - \nabla(a\nabla)$ for (sufficiently nice) functions $\kappa, a$, and $\mathcal{W}$ is Gaussian white noise. We study Markov properties of these two types of fields. We first show that there are no Gaussian random fields on general metric graphs that are both isotropic and Markov. We then show that the second type, the generalized Whittle--Mat\'ern fields, are Markov if and only if $\alpha\in\mathbb{N}$, in which case the field is Markov of order $\alpha$: essentially, the process in a region $S\subset\Gamma$ is conditionally independent of the process in $\Gamma\setminus S$ given the values of the process and its first $\alpha-1$ derivatives on $\partial S$. Finally, we show that the Markov property implies an explicit characterization of the process on a fixed edge $e$, which in particular shows that the conditional distribution of the process on $e$, given the values at the two vertices connected to $e$, is independent of the geometry of $\Gamma$.
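
To make the SPDE construction concrete, here is a hedged NumPy sketch (illustrative only, not the paper's method) of sampling a generalized Whittle--Mat\'ern field with $\alpha = 1$ on a single edge, taking constant $\kappa$, $a$, $\tau$, a basic finite-difference discretization, and Neumann boundaries; the grid size and coefficient values are arbitrary choices:

```python
# Sample u solving L(tau * u) = W on one edge, with L = kappa^2 - d/dx(a d/dx)
# and alpha = 1, via finite differences (all parameters here are illustrative).
import numpy as np

n, h = 200, 1.0 / 200            # grid points and spacing on [0, 1]
kappa, a, tau = 5.0, 1.0, 1.0    # assumed constant coefficients

main = np.full(n, kappa**2 + 2 * a / h**2)
off = np.full(n - 1, -a / h**2)
L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
L[0, 0] = L[-1, -1] = kappa**2 + a / h**2   # Neumann boundary correction

# Discretized white noise has covariance I/h; solve L (tau u) = W for u.
rng = np.random.default_rng(0)
W = rng.normal(scale=1.0 / np.sqrt(h), size=n)
u = np.linalg.solve(L, W) / tau
```

For $\alpha = 1$, the order-1 Markov property means the sampled path in a subinterval depends on the rest of the edge only through its two boundary values.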

Quantum key distribution (QKD) allows Alice and Bob to agree on a shared secret key while communicating over a public (untrusted) quantum channel. Compared to classical key exchange, it has two main advantages: (i) the key is unconditionally hidden from any attacker, and (ii) its security assumes only the existence of authenticated classical channels which, in practice, can be realized using Minicrypt assumptions, such as the existence of digital signatures. On the flip side, QKD protocols typically require multiple rounds of interaction, whereas classical key exchange can be realized with the minimum of two messages. A long-standing open question is whether QKD requires more rounds of interaction than classical key exchange. In this work, we propose a two-message QKD protocol that satisfies everlasting security, assuming only the existence of quantum-secure one-way functions. That is, the shared key is unconditionally hidden, provided that computational assumptions hold only during the protocol execution. Our result follows from a new quantum cryptographic primitive that we introduce in this work: the quantum-public-key one-time pad, a public-key analogue of the well-known one-time pad.
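
For reference, the classical primitive whose public-key analogue the abstract introduces is the one-time pad; a minimal sketch of the classical baseline:

```python
# Classical one-time pad: XOR with a uniformly random, single-use key gives
# unconditional secrecy. The quantum-public-key variant in the abstract is a
# public-key analogue of this primitive.
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data), "key must be as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"meet at dawn"
key = secrets.token_bytes(len(msg))   # uniformly random, used only once
ct = otp_xor(msg, key)                # encrypt
assert otp_xor(ct, key) == msg        # decryption is the same XOR
```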

Differential privacy has been an exceptionally successful concept when it comes to providing provable security guarantees for classical computations. More recently, the concept was generalized to quantum computations. While classical computations are essentially noiseless and differential privacy is often achieved by artificially adding noise, near-term quantum computers are inherently noisy, and it has been observed that this leads to natural differential privacy as a feature. In this work we discuss quantum differential privacy in an information-theoretic framework by casting it as a quantum divergence. A main advantage of this approach is that differential privacy becomes a property based solely on the output states of the computation, without the need to check it for every measurement. This leads to simpler proofs and generalized statements of its properties, as well as several new bounds for both general and specific noise models. In particular, these include common representations of quantum circuits and quantum machine learning concepts. Here, we focus on the difference between the amount of noise required to achieve certain levels of differential privacy and the amount that would make any computation useless. Finally, we generalize the classical concepts of local differential privacy, R\'enyi differential privacy and the hypothesis testing interpretation to the quantum setting, providing several new properties and insights.
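
One way to see the divergence-based formulation: $(\varepsilon,\delta)$-quantum differential privacy can be phrased via the quantum hockey-stick divergence $E_\gamma(\rho\|\sigma) = \mathrm{Tr}[(\rho - \gamma\sigma)_+]$ between output states of neighboring inputs, with $\gamma = e^\varepsilon$. A small NumPy sketch under that framing; the depolarizing-channel example and all parameter values below are illustrative:

```python
# Hockey-stick divergence E_gamma(rho || sigma) = Tr[(rho - gamma*sigma)_+].
# Intuition: noise in the channel shrinks this divergence, yielding privacy.
import numpy as np

def hockey_stick(rho, sigma, gamma):
    """Trace of the positive part of (rho - gamma * sigma)."""
    w = np.linalg.eigvalsh(rho - gamma * sigma)
    return np.sum(w[w > 0])

# Outputs of a depolarizing channel (noise parameter p) applied to two
# orthogonal pure-state inputs.
p = 0.3
rho = (1 - p) * np.diag([1.0, 0.0]) + p * np.eye(2) / 2
sigma = (1 - p) * np.diag([0.0, 1.0]) + p * np.eye(2) / 2
print(hockey_stick(rho, sigma, gamma=np.exp(0.5)))  # delta for eps = 0.5
```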

In this paper, we present efficient quantum algorithms that are exponentially faster than classical algorithms for solving the quantum optimal control problem. This problem involves finding the control variable that maximizes a physical quantity at time $T$, where the system is governed by a time-dependent Schr\"odinger equation. This type of control problem also has an intricate relation with machine learning. Our algorithms are based on a time-dependent Hamiltonian simulation method and a fast gradient-estimation algorithm. We also provide a comprehensive error analysis to quantify the total error from various steps, such as the finite-dimensional representation of the control function, the discretization of the Schr\"odinger equation, the numerical quadrature, and optimization. Our quantum algorithms require fault-tolerant quantum computers.
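
A classical toy illustration of the underlying control problem (not the quantum algorithm itself): piecewise-constant controls for a single qubit, with the objective gradient estimated by finite differences. The Hamiltonians, step count, and learning rate below are all invented for the example:

```python
# Maximize |<target| U(T) |psi0>|^2 for H(t) = H0 + c(t) * H1 with
# piecewise-constant controls c and finite-difference gradient ascent.
import numpy as np
from scipy.linalg import expm

H0 = np.array([[1, 0], [0, -1]], dtype=complex)   # drift Hamiltonian
H1 = np.array([[0, 1], [1, 0]], dtype=complex)    # control Hamiltonian
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)
n_steps, dt = 20, 0.1                             # T = n_steps * dt

def objective(c):
    psi = psi0
    for ck in c:                                  # piecewise-constant evolution
        psi = expm(-1j * dt * (H0 + ck * H1)) @ psi
    return abs(target.conj() @ psi) ** 2

rng = np.random.default_rng(0)
c = 0.1 * rng.standard_normal(n_steps)            # random initial controls
for _ in range(200):                              # gradient ascent
    grad = np.array([(objective(c + 1e-6 * e) - objective(c)) / 1e-6
                     for e in np.eye(n_steps)])
    c += 0.5 * grad
print(objective(c))                               # fidelity improves toward 1
```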

While extensive work has been done to correct for biases due to measurement error in scalar-valued covariates in generalized linear regression models, limited work has addressed biases associated with error-prone functional covariates, or with combinations of error-prone scalar and functional covariates, in these models. We propose Simulation Extrapolation (SIMEX) and Regression Calibration approaches to correct classical measurement errors in a mixture of functional and scalar covariates in generalized functional linear regression. The simulation-extrapolation method is developed to handle the error-prone functional and scalar covariates, and we also extend regression-calibration-based methods to our measurement error settings. Extensive simulation studies are conducted to assess the finite-sample performance of the developed methods. The methods are applied to the 2011-2014 cycles of the National Health and Nutrition Examination Survey data to assess the relationship of physical activity and total caloric intake with type 2 diabetes among community-dwelling adults living in the United States. We treat the device-based measures of physical activity as functional covariates prone to complex, arbitrary heteroscedastic errors, while total caloric intake is considered a scalar-valued covariate prone to error. We also examine the characteristics of the observed measurement errors in device-based physical activity by important demographic subgroups, including age, sex, and race.
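
A minimal sketch of the SIMEX idea for a single error-prone scalar covariate with known error variance (the functional-covariate version developed in the paper is substantially more involved):

```python
# SIMEX: refit the model under added noise at increasing levels lambda,
# then extrapolate the coefficient back to lambda = -1 (no measurement error).
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma_u = 2000, 1.0, 0.5
x = rng.standard_normal(n)
w = x + rng.normal(scale=sigma_u, size=n)        # error-prone observation of x
y = beta * x + rng.normal(scale=0.2, size=n)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lambdas:
    # Average slopes over B pseudo-datasets with extra noise of variance
    # lam * sigma_u^2 added to the observed covariate.
    coefs = [np.polyfit(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n),
                        y, 1)[0] for _ in range(50)]
    est.append(np.mean(coefs))

# Quadratic extrapolation to lambda = -1 approximates the error-free slope.
poly = np.polyfit(lambdas, est, 2)
print(np.polyval(poly, -1.0))                    # close to beta = 1.0
```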

We study the approximate state preparation problem on noisy intermediate-scale quantum (NISQ) computers by applying a genetic algorithm to generate quantum circuits for state preparation. The algorithm can account for the specific characteristics of the physical machine in the evaluation of circuits, such as the native gate set and qubit connectivity. We use our genetic algorithm to optimize the circuits provided by the low-rank state preparation algorithm introduced by Araujo et al., and find substantial improvements to the fidelity in preparing Haar random states with a limited number of CNOT gates. Moreover, we observe that already for a 5-qubit quantum processor with limited qubit connectivity and significant noise levels (IBM Falcon 5T), the maximal fidelity for Haar random states is achieved by a short approximate state preparation circuit instead of the exact preparation circuit. We also present a theoretical analysis of approximate state preparation circuit complexity to motivate our findings. Our genetic algorithm for quantum circuit discovery is freely available at https://github.com/beratyenilen/qc-ga.
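
For intuition, a generic genetic-algorithm skeleton with a state-fidelity fitness function. This is not the repository's implementation: a real version would mutate gate sequences under the device's native gate set and connectivity, whereas the toy "circuit" below is just a parameterized state vector:

```python
# Generic GA loop: select the fittest half, produce mutated children, repeat.
import numpy as np

rng = np.random.default_rng(2)
dim, pop_size, n_gen = 8, 40, 200
target = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
target /= np.linalg.norm(target)

def prepare(theta):
    """Toy 'circuit': phases and amplitudes -> normalized state vector."""
    psi = np.exp(1j * theta[:dim]) * theta[dim:]
    return psi / np.linalg.norm(psi)

def fitness(theta):
    return abs(np.vdot(target, prepare(theta))) ** 2   # state fidelity

pop = rng.standard_normal((pop_size, 2 * dim))
for _ in range(n_gen):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]  # truncation selection
    children = parents + 0.1 * rng.standard_normal(parents.shape)  # mutation
    pop = np.vstack([parents, children])
print(max(fitness(t) for t in pop))   # should approach 1 over generations
```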

Directed networks are conveniently represented as graphs in which ordered edges encode interactions between vertices. Despite their wide availability, there is a shortage of statistical models amenable to inference, especially when contextual information and degree heterogeneity are present. This paper presents an annotated graph model with parameters explicitly accounting for these features. To overcome the curse of dimensionality due to modelling degree heterogeneity, we introduce a sparsity assumption and propose a penalized likelihood approach with $\ell_1$-regularization for parameter estimation. We study the estimation and selection consistency of this approach under a sparse network assumption, and show that inference on the covariate parameter is straightforward, thus bypassing the need for the kind of debiasing commonly employed in $\ell_1$-penalized likelihood estimation. Simulation and data analysis corroborate our theoretical findings.
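
For intuition on the estimation machinery, here is a minimal proximal-gradient (ISTA) sketch of $\ell_1$-penalized logistic likelihood; the soft-thresholding step is what produces sparse estimates. The data-generating setup is invented for illustration and is not the paper's network model:

```python
# l1-penalized logistic regression via ISTA: gradient step on the negative
# log-likelihood followed by soft thresholding (the proximal map of the
# l1 penalty).
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fit_l1_logistic(X, y, lam, lr=0.1, iters=2000):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (prob - y) / n          # gradient of mean neg. log-lik.
        beta = soft_threshold(beta - lr * grad, lr * lam)
    return beta

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 20))
true = np.zeros(20); true[:3] = [2.0, -1.5, 1.0]    # sparse truth
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true)))
print(fit_l1_logistic(X, y, lam=0.05))   # near zero outside the first three
```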

Continuous-time measurements are instrumental for a multitude of tasks in quantum engineering and quantum control, including the estimation of dynamical parameters of open quantum systems monitored through the environment. However, such measurements do not extract the maximum amount of information available in the output state, so finding alternative optimal measurement strategies is a major open problem. In this paper we solve this problem in the setting of discrete-time input-output quantum Markov chains. We present an efficient algorithm for optimal estimation of one-dimensional dynamical parameters, consisting of an iterative procedure that updates a `measurement filter' operator and determines successive measurement bases for the output units. A key ingredient of the scheme is the use of a coherent quantum absorber to post-process the output after the interaction with the system. The absorber is designed adaptively so that the joint stationary state of system and absorber is pure at a reference parameter value. The scheme offers an exciting prospect for optimal continuous-time adaptive measurements, but more work is needed to find realistic practical implementations.
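
As background (textbook single-parameter quantum estimation, not the paper's measurement-filter algorithm): for estimating $\theta$ from a state $\rho(\theta)$, measuring in the eigenbasis of the symmetric logarithmic derivative $L$, defined by $\partial_\theta\rho = (L\rho + \rho L)/2$, saturates the quantum Cram\'er-Rao bound. The qubit model and parameter values below are invented for illustration:

```python
# Compute the SLD in the eigenbasis of rho, then the quantum Fisher
# information Tr[rho L^2]; the optimal measurement basis is eig(L).
import numpy as np

def sld(rho, drho):
    """Solve drho = (L @ rho + rho @ L) / 2 via the eigenbasis of rho."""
    w, V = np.linalg.eigh(rho)
    d = V.conj().T @ drho @ V
    L = np.zeros_like(d)
    for i in range(len(w)):
        for j in range(len(w)):
            if w[i] + w[j] > 1e-12:
                L[i, j] = 2 * d[i, j] / (w[i] + w[j])
    return V @ L @ V.conj().T

def rho(t):
    """A slightly mixed qubit state rotated by angle t about the Y axis."""
    psi = np.array([np.cos(t / 2), np.sin(t / 2)], dtype=complex)
    return 0.9 * np.outer(psi, psi.conj()) + 0.1 * np.eye(2) / 2

theta, eps = 0.3, 1e-6
L = sld(rho(theta), (rho(theta + eps) - rho(theta - eps)) / (2 * eps))
qfi = np.trace(rho(theta) @ L @ L).real
print(qfi)   # quantum Fisher information at theta
```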

In survival contexts, substantial literature exists on estimating optimal treatment regimes, where treatments are assigned based on personal characteristics for the purpose of maximizing the survival probability. These methods assume that a set of covariates is sufficient to deconfound the treatment-outcome relationship. Nevertheless, the assumption can be limiting in observational studies or randomized trials in which noncompliance occurs. Thus, we advance a novel approach for estimating the optimal treatment regime when certain confounders are not observable and a binary instrumental variable is available. Specifically, via a binary instrumental variable, we propose two semiparametric estimators for the optimal treatment regime, one of which possesses the desirable property of double robustness, by maximizing Kaplan-Meier-like estimators within a pre-defined class of regimes. Because the Kaplan-Meier-like estimators are jagged, we incorporate kernel smoothing methods to enhance their performance. Under appropriate regularity conditions, the asymptotic properties are rigorously established. Furthermore, the finite sample performance is assessed through simulation studies. We exemplify our method using data from the National Cancer Institute's (NCI) prostate, lung, colorectal, and ovarian cancer screening trial.
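
As a building block behind the Kaplan-Meier-like value estimators mentioned above, here is a minimal product-limit estimator; a smoothed version would replace the indicator jumps with kernel weights. The simulated censoring setup is illustrative:

```python
# Kaplan-Meier product-limit estimate of the survival function S(t),
# assuming continuous times (no ties).
import numpy as np

def kaplan_meier(time, event):
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    t_out, s_out, s = [], [], 1.0
    for i in range(n):
        if event[i]:                    # event at time[i]; n - i still at risk
            s *= 1.0 - 1.0 / (n - i)
            t_out.append(time[i]); s_out.append(s)
    return np.array(t_out), np.array(s_out)

rng = np.random.default_rng(4)
t_true = rng.exponential(1.0, size=300)    # true survival times
censor = rng.exponential(2.0, size=300)    # independent censoring times
time = np.minimum(t_true, censor)
event = (t_true <= censor).astype(int)
t, S = kaplan_meier(time, event)
print(S[np.searchsorted(t, 1.0)])          # approx exp(-1) ~ 0.37
```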

As an intrinsic and fundamental property of big data, data heterogeneity exists in a variety of real-world applications, such as precision medicine, autonomous driving, and finance. For machine learning algorithms, ignoring data heterogeneity can greatly hurt generalization performance and algorithmic fairness, since the prediction mechanisms of different sub-populations are likely to differ from each other. In this work, we focus on the data heterogeneity that affects the predictions of machine learning models, and propose the \emph{usable predictive heterogeneity}, which takes into account the model capacity and computational constraints. We prove that it can be reliably estimated from finite data with probably approximately correct (PAC) bounds. Additionally, we design a bi-level optimization algorithm to explore the usable predictive heterogeneity from data. Empirically, the explored heterogeneity provides insights for sub-population divisions in income prediction, crop yield prediction and image classification tasks, and leveraging such heterogeneity improves out-of-distribution generalization performance.
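
A toy sketch of the intuition only (not the paper's formal PAC-estimable measure): predictive heterogeneity shows up as the gap between the loss of one pooled model and the average loss of models fit separately per sub-population. The two-group linear setup below is invented for illustration:

```python
# Two sub-populations with opposite prediction mechanisms: a pooled model
# fails, while per-group models succeed; the loss gap signals heterogeneity.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
x = rng.standard_normal(n)
group = rng.integers(0, 2, size=n)                 # two sub-populations
slope = np.where(group == 0, 2.0, -2.0)            # opposite mechanisms
y = slope * x + rng.normal(scale=0.3, size=n)

def mse_of_fit(xs, ys):
    b = np.polyfit(xs, ys, 1)                      # least-squares line
    return np.mean((np.polyval(b, xs) - ys) ** 2)

pooled = mse_of_fit(x, y)
split = np.mean([mse_of_fit(x[group == g], y[group == g]) for g in (0, 1)])
print(pooled, split)   # pooled loss >> per-group loss: usable heterogeneity
```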
