
We provide more sample-efficient versions of some basic routines in quantum data analysis, along with simpler proofs. In particular, we give a quantum "Threshold Search" algorithm that requires only $O((\log^2 m)/\epsilon^2)$ samples of a $d$-dimensional state $\rho$. That is, given observables $0 \le A_1, A_2, \dots, A_m \le 1$ such that $\mathrm{tr}(\rho A_i) \ge 1/2$ for at least one $i$, the algorithm finds $j$ with $\mathrm{tr}(\rho A_j) \ge 1/2-\epsilon$. As a consequence, we obtain a Shadow Tomography algorithm requiring only $\tilde{O}((\log^2 m)(\log d)/\epsilon^4)$ samples, which simultaneously achieves the best known dependence on each of the parameters $m$, $d$, and $\epsilon$. This yields the same sample complexity for quantum Hypothesis Selection among $m$ states; we also give an alternative Hypothesis Selection method using $\tilde{O}((\log^3 m)/\epsilon^2)$ samples.
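
To make the Threshold Search task statement concrete, here is a toy classical stand-in (an illustration we add, not the paper's algorithm): it assumes full knowledge of the density matrix $\rho$, which the quantum algorithm emphatically does not have, and simply checks the promise and the output condition.

```python
import numpy as np

def threshold_search_classical(rho, observables, eps=0.1):
    """Classical stand-in for the Threshold Search *task*, not the
    quantum algorithm: with full knowledge of rho, return an index j
    with tr(rho A_j) >= 1/2 - eps, under the promise that some
    tr(rho A_i) >= 1/2."""
    values = [np.trace(rho @ A).real for A in observables]
    assert max(values) >= 0.5, "promise violated: no tr(rho A_i) >= 1/2"
    for j, v in enumerate(values):
        if v >= 0.5 - eps:
            return j, v

# Tiny example: rho = |0><0| on a qubit, A_0 = |+><+|, A_1 = |0><0|.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
zero = np.array([[1.0, 0.0], [0.0, 0.0]])
print(threshold_search_classical(rho, [plus, zero]))  # some j with tr(rho A_j) >= 0.4
```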

Related content

In this work, a family of symmetric interpolation points is generated on the four-dimensional simplex (i.e. the pentatope). These points are optimized to minimize the Lebesgue constant. The process of generating these points closely follows that outlined by Warburton in "An explicit construction of interpolation nodes on the simplex," Journal of Engineering Mathematics, 2006. There, Warburton generated optimal interpolation points on the triangle and tetrahedron by formulating explicit geometric warping and blending functions and applying them to equidistant nodal distributions; the locations of the resulting points were then Lebesgue-optimized. In our work, we extend this procedure to four dimensions and construct interpolation points on the pentatope up to order ten. The Lebesgue constants of our nodal sets are calculated and are shown to outperform those of equidistant nodal distributions.
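
For intuition about the quantity being optimized, the sketch below estimates the Lebesgue constant of one-dimensional nodal sets, the interval analogue of the pentatope setting (there, the same definition applies with multivariate Lagrange basis functions). It reproduces the well-known gap between equidistant and optimized nodes; the Chebyshev nodes are our illustrative choice, not the paper's.

```python
import numpy as np

def lebesgue_constant(nodes, n_eval=2000):
    """Estimate the Lebesgue constant of a 1-D nodal set on [-1, 1]:
    max_x sum_i |l_i(x)|, with l_i the Lagrange basis polynomials."""
    x = np.linspace(-1.0, 1.0, n_eval)
    lam = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        li = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        lam += np.abs(li)
    return lam.max()

n = 10  # polynomial order
equi = np.linspace(-1.0, 1.0, n + 1)
cheb = np.cos((2 * np.arange(n + 1) + 1) / (2 * (n + 1)) * np.pi)
print(lebesgue_constant(equi))  # grows rapidly with n for equidistant nodes
print(lebesgue_constant(cheb))  # stays small (roughly logarithmic growth)
```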

Higher-dimensional automata, i.e., pointed labeled precubical sets, are a powerful combinatorial-topological model for concurrent systems. In this paper, we show that for every (nonempty) connected polyhedron there exists a shared-variable system such that the higher-dimensional automaton modeling the state space of the system has the homotopy type of the polyhedron.

In 1994, Shor introduced his famous quantum algorithm to factor integers and compute discrete logarithms in polynomial time. In 2023, Regev proposed a multi-dimensional version of Shor's algorithm that requires far fewer quantum gates. His algorithm relies on a number-theoretic conjecture on the elements in $(\mathbb{Z}/N\mathbb{Z})^{\times}$ that can be written as short products of very small prime numbers. We prove a version of this conjecture using tools from analytic number theory such as zero-density estimates. As a result, we obtain an unconditional proof of correctness of this improved quantum algorithm and of subsequent variants.
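
As a naive illustration of the objects in the conjecture (not of its quantitative content, which concerns much tighter bounds on the number and size of the prime factors), the brute-force sketch below enumerates, for a tiny modulus, which units arise as short products of very small primes; the specific modulus, prime set, and product length are arbitrary choices of ours.

```python
import itertools
import math

def short_products_cover(N, primes, length):
    """Brute-force check (tiny N only): which residues in (Z/NZ)^*
    arise as products of `length` primes drawn, with repetition,
    from `primes`?"""
    units = {a for a in range(1, N) if math.gcd(a, N) == 1}
    hit = set()
    for combo in itertools.product(primes, repeat=length):
        hit.add(math.prod(combo) % N)
    return len(hit & units), len(units)

# e.g. products of 6 primes below 20, modulo a small semiprime
N = 101 * 103
primes = [2, 3, 5, 7, 11, 13, 17, 19]
print(short_products_cover(N, primes, 6))  # (residues covered, total units)
```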

Incomplete factorizations have long been popular general-purpose algebraic preconditioners for solving large sparse linear systems of equations. Guaranteeing that the factorization is breakdown-free while computing a high-quality preconditioner is challenging. A resurgence of interest in using low precision arithmetic makes the search for robustness more urgent and more difficult. In this paper, we focus on symmetric positive definite problems and explore a number of approaches: a look-ahead strategy to anticipate breakdown as early as possible, the use of global shifts, and a modification of an idea developed in the field of numerical optimization for the complete Cholesky factorization of dense matrices. Our numerical simulations target highly ill-conditioned sparse linear systems with the goal of computing the factors in half precision arithmetic and then achieving double precision accuracy using mixed precision refinement.
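
The sketch below is a minimal dense-matrix caricature of two of these ingredients, under assumptions of our own rather than the paper's sparse incomplete factorizations: half-precision rounding (simulated by casting to float16) can destroy positive definiteness, a global shift restores it, and Krylov-based mixed precision refinement, here conjugate gradients preconditioned by the low-precision factor with double-precision residuals, recovers double precision accuracy.

```python
import numpy as np

def cholesky_fp16_with_shift(A, shift=0.0, max_tries=30):
    """Toy breakdown-avoiding low-precision factorization: round
    A (plus a global diagonal shift) to half precision and attempt
    a dense Cholesky factorization, doubling the shift whenever the
    rounded matrix is no longer positive definite."""
    n = A.shape[0]
    for _ in range(max_tries):
        M = (A + shift * np.eye(n)).astype(np.float16).astype(np.float64)
        try:
            return np.linalg.cholesky(M), shift
        except np.linalg.LinAlgError:
            shift = 1e-3 if shift == 0.0 else 2.0 * shift
    raise RuntimeError("no positive definite shift found")

def pcg(A, b, L, tol=1e-12, max_iter=500):
    """Conjugate gradients preconditioned by the half-precision
    factor; residuals are accumulated in double precision, so the
    low-quality factor affects the iteration count, not the accuracy."""
    x = np.zeros_like(b)
    r = b.copy()
    z = np.linalg.solve(L.T, np.linalg.solve(L, r))
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = np.linalg.solve(L.T, np.linalg.solve(L, r))
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(3)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.logspace(0, -4, n)) @ Q.T  # ill-conditioned SPD matrix
b = rng.standard_normal(n)
L, used_shift = cholesky_fp16_with_shift(A)
x = pcg(A, b, L)
print(used_shift, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```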

In this work, we develop a new Bayesian method for variable selection in function-on-scalar regression (FOSR). Our method uses a hierarchical Bayesian structure and latent variables to enable an adaptive covariate selection process for FOSR. Extensive simulation studies demonstrate the proposed method's main properties, such as its accuracy in estimating the coefficients and its high rate of correct variable selection. Furthermore, we conduct a substantial comparative analysis with the main competing methods: BGLSS (Bayesian Group Lasso with Spike and Slab prior), the group LASSO (Least Absolute Shrinkage and Selection Operator), the group MCP (Minimax Concave Penalty), and the group SCAD (Smoothly Clipped Absolute Deviation). Our results demonstrate that the proposed methodology selects covariates more reliably than the existing competing methods while maintaining a satisfactory level of goodness of fit, whereas the competing methods could not balance selection accuracy with goodness of fit. As an application, we consider a COVID-19 dataset and socioeconomic data from Brazil, obtaining satisfactory results. In short, the proposed Bayesian variable selection model is highly competitive, showing significant predictive and selective quality.
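
As a toy analogue of this kind of adaptive selection mechanism (for a scalar response rather than the paper's functional one), the following Gibbs sampler implements spike-and-slab variable selection in a linear model; the priors and hyperparameters are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_slab_gibbs(X, y, n_iter=2000, pi=0.2, tau2=1.0, sigma2=1.0):
    """Toy Gibbs sampler for spike-and-slab selection in a scalar-
    response linear model: beta_j is either exactly 0 (spike) or
    drawn from a N(0, tau2) slab, chosen by its posterior odds."""
    n, p = X.shape
    beta = np.zeros(p)
    keep = np.zeros(p)
    for it in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]  # residual excluding j
            xj = X[:, j]
            v = 1.0 / (xj @ xj / sigma2 + 1.0 / tau2)
            m = v * (xj @ r) / sigma2
            # log Bayes factor for inclusion vs exclusion of covariate j
            log_bf = 0.5 * (np.log(v / tau2) + m * m / v)
            p_inc = 1.0 / (1.0 + (1 - pi) / pi * np.exp(-log_bf))
            beta[j] = m + np.sqrt(v) * rng.standard_normal() if rng.random() < p_inc else 0.0
        if it >= n_iter // 2:
            keep += (beta != 0)
    return keep / (n_iter - n_iter // 2)  # posterior inclusion probabilities

# Synthetic check: only the first two covariates matter.
X = rng.standard_normal((200, 8))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * rng.standard_normal(200)
print(spike_slab_gibbs(X, y).round(2))
```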

The implication problem for conditional independence (CI) asks whether the fact that a probability distribution obeys a given finite set of CI relations implies that a further CI statement also holds in this distribution. This problem has a long and fascinating history, culminating in positive results about implications now known as the semigraphoid axioms, as well as impossibility results about a general finite characterization of CI implications. Motivated by violations of faithfulness assumptions in causal discovery, we study the implication problem in the special setting where the CI relations are obtained from a directed acyclic graphical (DAG) model along with one additional CI statement. Focusing on the Gaussian case, we give a complete characterization of when such an implication is graphical, using algebraic techniques. Moreover, prompted by the relevance of strong faithfulness in statistical guarantees for causal discovery algorithms, we give a graphical solution to an approximate CI implication problem, in which we ask whether small values of one additional partial correlation entail small values of yet a further partial correlation.
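
To make the objects here concrete, the snippet below computes partial correlations from the covariance matrix of a linear Gaussian DAG model; the three-node chain and its edge weights are a made-up example, not one from the paper.

```python
import numpy as np

def partial_correlation(Sigma, i, j, cond):
    """Partial correlation rho(i, j | cond), read off the precision
    matrix of the marginal covariance over {i, j} and the
    conditioning set."""
    idx = [i, j] + list(cond)
    P = np.linalg.inv(Sigma[np.ix_(idx, idx)])
    return -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])

# Linear Gaussian DAG x -> y -> z with unit-variance noise:
# Sigma = (I - B^T)^{-1} (I - B^T)^{-T}, B[parent, child] = edge weight.
B = np.array([[0, 1.0, 0],
              [0, 0, 0.8],
              [0, 0, 0]])
Iinv = np.linalg.inv(np.eye(3) - B.T)
Sigma = Iinv @ Iinv.T
# The d-separation x _|_ z | y implies a vanishing partial correlation:
print(partial_correlation(Sigma, 0, 2, [1]))  # ~0 up to rounding
```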

Even though novel imaging techniques have been successful in studying brain structure and function, the measured biological signals are often contaminated by multiple sources of noise, arising from, e.g., head movements of the individual being scanned, limited spatial/temporal resolution, or other issues specific to each imaging technology. Data preprocessing (e.g. denoising) is therefore critical. Preprocessing pipelines have become increasingly complex over the years, but also more flexible, and this flexibility can have a significant impact on the final results and conclusions of a given study. Analyses over this large parameter space are often referred to as multiverse analyses. Here, we provide conceptual and practical tools for statistical analyses that can aggregate results across multiple pipelines, along with a new sensitivity analysis for testing hypotheses across pipelines, such as "no effect across all pipelines" or "at least one pipeline with no effect". The proposed framework is generic and can be applied to any multiverse scenario, but we illustrate its use with positron emission tomography data.
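
The decision rules below are a simplified stand-in for such cross-pipeline hypotheses; the translation of the two nulls into min-p and max-p rules is our assumption for illustration, not the paper's exact procedure.

```python
import numpy as np

def multiverse_tests(pvals, alpha=0.05):
    """Toy decision rules for two global hypotheses over K per-pipeline
    p-values:
      H0_a: 'no effect across all pipelines' -> reject if the
            Bonferroni-corrected minimum p-value is significant;
      H0_b: 'at least one pipeline with no effect' -> reject only if
            every pipeline is individually significant (max-p rule)."""
    pvals = np.asarray(pvals)
    K = len(pvals)
    reject_a = pvals.min() * K < alpha  # some pipeline shows an effect
    reject_b = pvals.max() < alpha      # all pipelines show an effect
    return {"reject 'no effect across all pipelines'": bool(reject_a),
            "reject 'at least one pipeline with no effect'": bool(reject_b)}

# An effect somewhere, but not in every pipeline:
print(multiverse_tests([0.001, 0.02, 0.6]))
```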

The study of neural operators has paved the way for efficient approaches to solving partial differential equations (PDEs) compared with traditional methods. However, most existing neural operators lack the capability to provide uncertainty measures for their predictions, a crucial aspect, especially in data-driven scenarios with limited available data. In this work, we propose a novel Neural Operator-induced Gaussian Process (NOGaP), which exploits the probabilistic characteristics of Gaussian Processes (GPs) while leveraging the learning prowess of operator learning. The proposed framework leads to improved prediction accuracy and offers a quantifiable measure of uncertainty. It is extensively evaluated through experiments on various PDE examples, including Burgers' equation, Darcy flow, the non-homogeneous Poisson equation, and the wave-advection equation. Furthermore, a comparative study with state-of-the-art operator learning algorithms is presented to highlight the advantages of NOGaP. The results demonstrate superior accuracy and expected uncertainty characteristics, suggesting the promising potential of the proposed framework.
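
A minimal sketch of the general idea, under our simplifying assumption that the learned model enters as the GP prior mean (a crude linear fit plays the role of the pretrained neural operator here): the GP models the operator's residual and supplies the uncertainty.

```python
import numpy as np

def gp_with_operator_mean(X, y, Xs, mean_fn, ell=0.5, sf=1.0, noise=1e-2):
    """GP regression whose prior mean comes from an external predictor
    `mean_fn` (standing in for a pretrained neural operator): the GP
    fits the predictor's residual and returns mean and std-dev."""
    def k(A, B):  # squared-exponential kernel
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)
    r = y - mean_fn(X)                     # residual of the operator
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mu = mean_fn(Xs) + Ks @ np.linalg.solve(K, r)
    var = np.diag(k(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T))
    return mu, np.sqrt(np.maximum(var, 0.0))

# Toy 1-D problem: a linear fit pretends to be the pretrained operator.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, (30, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(30)
coef = np.polyfit(X[:, 0], y, 1)
mean_fn = lambda A: np.polyval(coef, A[:, 0])
Xs = np.linspace(-2, 2, 5)[:, None]
mu, sd = gp_with_operator_mean(X, y, Xs, mean_fn)
print(mu.round(3), sd.round(3))
```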

Dimension reduction techniques are among the most essential analytical tools in the analysis of high-dimensional data. Generalized principal component analysis (PCA) is an extension of standard PCA that has been widely used to identify low-dimensional features in high-dimensional discrete data, such as binary, multi-category and count data. For microbiome count data in particular, multinomial PCA is a natural counterpart of standard PCA. However, this technique fails to account for the excessive number of zero values, which is frequently observed in microbiome count data. To accommodate these excess zeros, zero-inflated multivariate distributions can be used. We propose a zero-inflated probabilistic PCA model for latent factor analysis. The proposed model is a fully Bayesian factor analysis technique that is appropriate for microbiome count data analysis. In addition, we use a mean-field-type variational family to approximate the marginal likelihood and develop a classification variational approximation algorithm to fit the model. We demonstrate the efficiency of our procedure for prediction based on the latent factors and the model parameters through simulation experiments, showcasing its superiority over competing methods. This efficiency is further illustrated with two real microbiome count datasets. The method is implemented in R.
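
For concreteness, the snippet below simulates from a toy zero-inflated probabilistic-PCA-style generative model for counts, i.e. the kind of sparse data the method targets; the log-linear link, loadings, and zero-inflation rate are illustrative assumptions of ours, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_zipca_counts(n=100, d=30, q=3, pi=0.4):
    """Simulate zero-inflated factor-model count data: latent factors
    drive Poisson rates through a log-linear link, then structural
    zeros are added with probability pi (mimicking microbiome sparsity)."""
    Z = rng.standard_normal((n, q))        # latent factors
    W = 0.5 * rng.standard_normal((q, d))  # factor loadings
    rates = np.exp(Z @ W + 1.0)            # log-linear link
    counts = rng.poisson(rates)
    dropout = rng.random((n, d)) < pi      # structural (excess) zeros
    return np.where(dropout, 0, counts)

Y = simulate_zipca_counts()
print("fraction of zeros:", (Y == 0).mean())
```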

This paper studies a quantum simulation technique for solving the Fokker-Planck equation. Traditional semi-discretization methods often fail to preserve the underlying Hamiltonian dynamics and may even modify the Hamiltonian structure, particularly when incorporating boundary conditions. We address this challenge by employing the Schrödingerization method, which converts any linear partial or ordinary differential equation with non-Hermitian dynamics into a system of Schrödinger-type equations. We explore its application to two distinct forms of the Fokker-Planck equation. For the conservation form, we show that the semi-discretization-based Schrödingerization is preferable, especially when dealing with non-periodic boundary conditions. Additionally, we analyze the Schrödingerization approach for unstable systems that possess positive eigenvalues in the real part of the coefficient matrix or differential operator. Our analysis reveals that the direct use of Schrödingerization has the same effect as a stabilization procedure. For the heat equation form, we propose a quantum simulation procedure based on the time-splitting technique. We discuss the relationship between operator splitting in the Schrödingerization method and its application directly to the original problem, illustrating how the Schrödingerization method accurately reproduces the time-splitting solutions at each step. Furthermore, we explore finite difference discretizations of the heat equation form using shift operators. Utilizing Fourier bases, we diagonalize the shift operators, enabling efficient simulation in frequency space. Providing additional guidance on implementing the diagonal unitary operators, we conduct a comparative analysis of diagonalization in the Bell basis and in the Fourier basis, and show that the former generally exhibits greater efficiency than the latter.
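
As a classical illustration of the last point, the step below diagonalizes the centred second-difference (shift-operator) discretization of the heat equation in the Fourier basis, so one time step becomes a single multiplication in frequency space; this sketches the diagonalization being exploited, not the quantum implementation itself.

```python
import numpy as np

def heat_step_fourier(u, dt, L=2 * np.pi):
    """One exact time step of u_t = u_xx for the shift-operator
    (centred second-difference) discretization on a periodic grid:
    the shift operators are diagonal in the Fourier basis, so the
    step reduces to multiplication by exp(dt * symbol) in frequency
    space, the classical analogue of the diagonal unitaries above."""
    n = len(u)
    dx = L / n
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    lam = (2 * np.cos(k * dx) - 2) / dx**2  # symbol of the second difference
    return np.fft.ifft(np.exp(dt * lam) * np.fft.fft(u)).real

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x)
for _ in range(100):
    u = heat_step_fourier(u, dt=1e-3)
print(abs(u - np.exp(-0.1) * np.sin(x)).max())  # close to the exact decay
```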
