
We revisit the problem of estimating an unknown parameter of a pure quantum state, and investigate `null-measurement' strategies in which the experimenter aims to measure in a basis that contains a vector close to the true system state. Such strategies are known to approach the quantum Fisher information for models where the quantum Cram\'{e}r-Rao bound is achievable, but a detailed adaptive strategy for achieving the bound in the multi-copy setting has been lacking. We first show that the following naive null-measurement implementation fails to attain even the standard estimation scaling: estimate the parameter on a small sub-sample, and apply the null-measurement corresponding to the estimated value on the rest of the systems. This is due to non-identifiability issues specific to null-measurements, which arise when the true and reference parameters are close to each other. To avoid this, we propose the alternative displaced-null measurement strategy, in which the reference parameter is altered by a small amount that is sufficient to ensure parameter identifiability. We use this strategy to devise asymptotically optimal measurements for models where the quantum Cram\'{e}r-Rao bound is achievable. More generally, we extend the method to arbitrary multi-parameter models and prove the asymptotic achievability of the Holevo bound. An important tool in our analysis is the theory of quantum local asymptotic normality, which provides a clear intuition about the design of the proposed estimators and shows that they have asymptotically normal distributions.
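
A minimal sketch of the displacement idea, in our own notation (the specific rate of the offset is an illustrative assumption, not taken verbatim from the paper): first estimate $\theta$ on a vanishing fraction of the copies to obtain $\hat\theta_n$, then measure the remaining copies in a basis containing the displaced vector rather than $|\psi_{\hat\theta_n}\rangle$ itself.

```latex
\[
  |\psi_{\tilde\theta_n}\rangle, \qquad
  \tilde\theta_n = \hat\theta_n + \delta_n, \qquad
  \delta_n \sim n^{-1/2+\epsilon}, \quad \epsilon > 0 \text{ small}.
\]
% The offset \delta_n vanishes asymptotically yet dominates the n^{-1/2} estimation
% error, so the measurement outcome statistics remain sensitive to, and can identify,
% the true parameter in a neighbourhood of \hat\theta_n.
```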

Related content

We propose a new, computationally efficient, sparsity adaptive changepoint estimator for detecting changes in unknown subsets of a high-dimensional data sequence. Assuming the data sequence is Gaussian, we prove that the new method successfully estimates the number and locations of changepoints with a given error rate and under minimal conditions, for all sparsities of the changing subset. Moreover, our method has computational complexity linear up to logarithmic factors in both the length and number of time series, making it applicable to large data sets. Through extensive numerical studies we show that the new methodology is highly competitive in terms of both estimation accuracy and computational cost. The practical usefulness of the method is illustrated by analysing sensor data from a hydro power plant. An efficient R implementation is available.
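
As a rough illustration of the type of statistic involved (a generic aggregated-CUSUM sketch in Python; it is not the estimator proposed above, and the threshold levels and names are ad hoc), a single changepoint in an unknown sparse subset of coordinates can be located as follows:

```python
# Illustrative sketch only: locate one changepoint in a p-dimensional Gaussian
# sequence by aggregating per-series CUSUM statistics over several threshold
# levels, each targeting a different sparsity regime of the changing subset.
import numpy as np

def cusum(x):
    """Standardised CUSUM statistics of a 1-d series at every candidate split."""
    n = len(x)
    s = np.cumsum(x)
    t = np.arange(1, n)
    return np.abs(s[:-1] / t - (s[-1] - s[:-1]) / (n - t)) * np.sqrt(t * (n - t) / n)

def sparsity_adaptive_split(X, thresholds=(1.0, 2.0, 3.0)):
    """Best split location, aggregating soft-thresholded squared CUSUMs."""
    C = np.stack([cusum(x) for x in X])          # (p, n-1) CUSUM matrix
    best_score, best_loc = -np.inf, None
    for a in thresholds:                          # each level suits one sparsity
        score = np.maximum(C**2 - a**2, 0.0).sum(axis=0)
        loc = int(np.argmax(score))
        if score[loc] > best_score:
            best_score, best_loc = score[loc], loc + 1
    return best_loc, best_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))
X[:5, 120:] += 1.5                                # mean shift in a sparse subset
print(sparsity_adaptive_split(X))                 # location near 120
```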

When modeling scientific and industrial problems, geometries are typically described by explicit boundary representations obtained from computer-aided design software. Unfitted (also known as embedded or immersed) finite element methods offer a significant advantage in dealing with complex geometries, eliminating the need for generating unstructured body-fitted meshes. However, current unfitted finite element methods on nonlinear geometries are restricted to implicit (possibly high-order) level set geometries. In this work, we introduce a novel automatic computational pipeline to approximate solutions of partial differential equations on domains defined by explicit nonlinear boundary representations. For the geometrical discretization, we propose a novel algorithm to generate the quadratures for bulk and surface integration on nonlinear polytopes that are required to compute all the terms in unfitted finite element methods. The algorithm relies on a nonlinear triangulation of the boundary, a kd-tree refinement of the surface cells that reduces the nonlinear intersections of surface and background cells to simple cases diffeomorphically equivalent to linear intersections, robust polynomial root-finding algorithms, and surface parameterization techniques. We prove the correctness of the proposed algorithm. We have successfully applied this algorithm to simulate partial differential equations with unfitted finite elements on nonlinear domains described by computer-aided design models, demonstrating the robustness of the geometric algorithm and showing high-order accuracy of the overall method.

We propose a type-theoretic framework for describing and proving properties of quantum computations, in particular those presented as quantum circuits. Our proposal is based on the observation that, in the polymorphic type system of Coq, currying on quantum states allows us to apply quantum gates directly inside a complex circuit. By introducing a discrete notion of lens to control this currying, we are further able to separate the combinatorics of the circuit structure from the computational content of gates. We apply our development to define quantum circuits recursively from the bottom up, and prove their correctness compositionally.
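
Although the development above lives in Coq, the lens mechanism can be illustrated in Python (our own construction, with hypothetical names): a lens focuses a gate onto a chosen subset of wires, applies it there, and restores the ambient state, keeping the circuit combinatorics separate from the gate's content.

```python
# Sketch of the lens idea with numpy tensor reshaping: "view" the focused wires,
# update them with the gate, and "put back" the result into the full state.
import numpy as np

def apply_gate(state, gate, wires, n):
    """Apply `gate` (on len(wires) qubits) to the given wires of an n-qubit state."""
    k = len(wires)
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, wires, list(range(k)))               # view through the lens
    psi = (gate @ psi.reshape(2**k, -1)).reshape([2] * n)       # update the focus
    return np.moveaxis(psi, list(range(k)), wires).reshape(-1)  # put back

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CX = np.eye(4)[[0, 1, 3, 2]]                  # CNOT as a basis permutation
psi = np.zeros(8); psi[0] = 1.0               # |000>
psi = apply_gate(psi, H, [0], 3)              # Hadamard on wire 0
psi = apply_gate(psi, CX, [0, 1], 3)          # CNOT on wires 0, 1
print(np.round(psi, 3))                       # Bell pair on wires 0 and 1
```

Here `moveaxis` plays the role of the lens view, the matrix product is the update on the focused part, and the inverse `moveaxis` is the put-back; the gate itself never needs to know where in the circuit it sits.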

Refinement calculus provides a structured framework for the progressive and modular development of programs, ensuring their correctness throughout the refinement process. This paper introduces a refinement calculus tailored for quantum programs. To this end, we first study the partial correctness of nondeterministic programs within a quantum while language featuring prescription statements. Orthogonal projectors, which are equivalent to subspaces of the state Hilbert space, are taken as assertions for quantum states. In addition to the denotational semantics, where a nondeterministic program is associated with a set of trace-nonincreasing super-operators, we also present their semantics in transforming a postcondition to the weakest liberal preconditions and, conversely, transforming a precondition to the strongest postconditions. Subsequently, refinement rules are introduced based on these dual semantics, offering a systematic approach to the incremental development of quantum programs applicable in various contexts. To illustrate the practical application of the refinement calculus, we examine examples such as the implementation of a $Z$-rotation gate, the repetition code, and the quantum-to-quantum Bernoulli factory. Furthermore, we present Quire, a Python-based interactive prototype tool that provides practical support to programmers engaged in the stepwise development of correct quantum programs.
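
A minimal numerical sketch of the assertion language (our own toy in Python, not the Quire tool): projectors serve as assertions, and for a purely unitary program step the weakest liberal precondition takes the familiar form $\mathrm{wlp}(U, Q) = U^\dagger Q U$.

```python
# Projectors as assertions: check the Hoare triple {P} U {Q} by testing whether
# P entails wlp(U, Q) = U† Q U, with subspace containment P <= R iff R P = P.
import numpy as np

def wlp_unitary(U, Q):
    """Weakest liberal precondition of the program step `apply U` for postcondition Q."""
    return U.conj().T @ Q @ U

def entails(P, R, tol=1e-10):
    """Subspace containment P <= R: R acts as the identity on range(P)."""
    return np.allclose(R @ P, P, atol=tol)

X = np.array([[0, 1], [1, 0]], dtype=complex)    # bit-flip gate
P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # assertion "state is |0>"
P1 = np.array([[0, 0], [0, 1]], dtype=complex)   # assertion "state is |1>"
# {P0} X {P1} holds, since wlp(X, P1) = X† P1 X = P0.
print(entails(P0, wlp_unitary(X, P1)))           # True
```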

Common regularization algorithms for linear regression, such as LASSO and Ridge regression, rely on a regularization hyperparameter that balances the tradeoff between minimizing the fitting error and the norm of the learned model coefficients. As this hyperparameter is scalar, it can easily be selected via random or grid search optimizing a cross-validation criterion. However, using a scalar hyperparameter limits the algorithm's flexibility and potential for better generalization. In this paper, we address the problem of linear regression with $\ell_2$-regularization, where a different regularization hyperparameter is associated with each input variable. We optimize these hyperparameters using a gradient-based approach, wherein the gradient of a cross-validation criterion with respect to the regularization hyperparameters is computed analytically through matrix differential calculus. Additionally, we introduce two strategies tailored to sparse model learning problems, aimed at reducing the risk of overfitting to the validation data. Numerical examples demonstrate that our multi-hyperparameter regularization approach outperforms LASSO, Ridge, and Elastic Net regression. Moreover, the analytical computation of the gradient proves more efficient in terms of computational time than automatic differentiation, especially when handling a large number of input variables. An application to the identification of over-parameterized Linear Parameter-Varying models is also presented.
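
A hedged sketch of the core computation, under simplifying assumptions (ridge solved in closed form on a single training/validation split; the paper's actual criterion and safeguards may differ): with per-feature penalties $\lambda$, the ridge solution is $w = (X_t^\top X_t + \mathrm{diag}(\lambda))^{-1} X_t^\top y_t$, and matrix differential calculus gives the gradient of the validation error in $\lambda$ analytically.

```python
# Per-feature ridge with an analytic gradient of the validation SSE w.r.t. the
# penalty vector lam. Since dw/dlam_j = -A^{-1} e_j w_j for A = Xt'Xt + diag(lam),
# the chain rule yields dL/dlam_j = -2 w_j [A^{-1} Xv' r]_j with r = Xv w - yv.
import numpy as np

def val_loss_and_grad(Xt, yt, Xv, yv, lam):
    A = Xt.T @ Xt + np.diag(lam)
    w = np.linalg.solve(A, Xt.T @ yt)
    r = Xv @ w - yv
    grad = -2.0 * w * np.linalg.solve(A, Xv.T @ r)
    return r @ r, grad

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 10))
w_true = np.zeros(10); w_true[:3] = [2.0, -1.0, 0.5]      # sparse ground truth
y = X @ w_true + 0.3 * rng.normal(size=120)
Xt, yt, Xv, yv = X[:80], y[:80], X[80:], y[80:]
lam = np.ones(10)
for _ in range(200):                                      # projected gradient descent
    loss, g = val_loss_and_grad(Xt, yt, Xv, yv, lam)
    lam = np.maximum(lam - 0.05 * g, 1e-6)
print(np.round(lam, 2), round(float(loss), 3))            # noise features get penalized
```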

We construct and analyze finite element approximations of the Einstein tensor in dimension $N \ge 3$. We focus on the setting where a smooth Riemannian metric tensor $g$ on a polyhedral domain $\Omega \subset \mathbb{R}^N$ has been approximated by a piecewise polynomial metric $g_h$ on a simplicial triangulation $\mathcal{T}$ of $\Omega$ having maximum element diameter $h$. We assume that $g_h$ possesses single-valued tangential-tangential components on every codimension-1 simplex in $\mathcal{T}$. Such a metric is not classically differentiable in general, but it turns out that one can still attribute meaning to its Einstein curvature in a distributional sense. We study the convergence of the distributional Einstein curvature of $g_h$ to the Einstein curvature of $g$ under refinement of the triangulation. We show that in the $H^{-2}(\Omega)$-norm, this convergence takes place at a rate of $O(h^{r+1})$ when $g_h$ is an optimal-order interpolant of $g$ that is piecewise polynomial of degree $r \ge 1$. We provide numerical evidence to support this claim.

For signals that are not sparse themselves but are sparse in some transform domain, Nam et al. \cite{NS} introduced the cosparse analysis model, which provides a perspective dual to the sparse representation model. This paper discusses error estimates for the non-convex $\ell_p$ $(0<p<1)$ relaxation of the cosparse optimization model in the noisy setting. Compared with the existing literature, under the same conditions, the admissible range of the $\Omega$-RIP constant $\delta_{7s}$ given in this paper is wider. When $p=0.5$ and $\delta_{7s}=0.5$, the error constants $C_0$ and $C_1$ in this paper improve on the corresponding results in \cite{Cand,LiSong1}. Moreover, when $0<p<1$, the error bounds of the non-convex relaxation method are significantly smaller than those of the convex relaxation method. Experimental results verify the theoretical analysis and illustrate that the $\ell_p$ $(0<p<1)$ method provides robust reconstruction for cosparse optimization problems.
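
For concreteness, a generic baseline (not the paper's algorithm): iteratively reweighted least squares is a standard way to attack the non-convex $\ell_p$ analysis penalty, and a minimal Python sketch under a penalized formulation looks as follows.

```python
# IRLS sketch for the ell_p analysis (cosparse) model: approximately minimize
# mu * ||A x - y||^2 + ||Omega x||_p^p by smoothing |z|^p with (z^2 + delta^2)^(p/2)
# and solving a reweighted linear system at each iteration. Parameters are ad hoc.
import numpy as np

def irls_cosparse(A, y, Omega, p=0.5, mu=100.0, iters=50, delta=1e-4):
    x = np.linalg.lstsq(A, y, rcond=None)[0]       # least-norm initial guess
    for _ in range(iters):
        z = Omega @ x
        w = (z**2 + delta**2) ** (p / 2 - 1)       # IRLS weights for ||Omega x||_p^p
        H = mu * A.T @ A + Omega.T @ (w[:, None] * Omega)
        x = np.linalg.solve(H, mu * A.T @ y)
    return x

rng = np.random.default_rng(2)
n, m = 100, 40
Omega = np.diff(np.eye(n), axis=0)                 # finite differences: piecewise-
x0 = np.repeat([0.0, 1.0, -0.5, 2.0], 25)          # constant signals are cosparse
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x0 + 0.01 * rng.normal(size=m)
xh = irls_cosparse(A, y, Omega)
print(round(float(np.linalg.norm(xh - x0) / np.linalg.norm(x0)), 3))
```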

We propose and compare methods for the analysis of extreme events in complex systems governed by PDEs that involve random parameters, in situations where we are interested in quantifying the probability that a scalar function of the system's solution is above a threshold. If the threshold is large, this probability is small and its accurate estimation is challenging. To tackle this difficulty, we blend theoretical results from large deviation theory (LDT) with numerical tools from PDE-constrained optimization. Our methods first compute parameters that minimize the LDT-rate function over the set of parameters leading to extreme events, using adjoint methods to compute the gradient of this rate function. The minimizers give information about the mechanism of the extreme events as well as estimates of their probability. We then propose a series of methods to refine these estimates, either via importance sampling or geometric approximation of the extreme event sets. Results are formulated for general parameter distributions, and detailed expressions are provided for the case of Gaussian distributions. We give theoretical and numerical arguments showing that the performance of our methods is insensitive to the extremeness of the events we are interested in. We illustrate the application of our approach to quantify the probability of extreme tsunami events on shore. Tsunamis are typically caused by a sudden, unpredictable change of the ocean floor elevation during an earthquake. We model this change as a random process, which takes into account the underlying physics. We use the one-dimensional shallow water equation to model tsunamis numerically. In the context of this example, we present a comparison of our methods for extreme event probability estimation, and find which type of ocean floor elevation change leads to the largest tsunamis on shore.
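
On a toy problem the pipeline can be sketched in a few lines (our own illustration with a stand-in quantity of interest, not the paper's PDE setting): minimize the Gaussian rate function $I(\theta)=\tfrac12\|\theta\|^2$ over the extreme set $\{F(\theta)\ge z\}$, read off the mechanism and the crude estimate $e^{-I(\theta^*)}$, then refine by importance sampling centered at the minimizer.

```python
# Toy LDT pipeline for theta ~ N(0, I): constrained rate-function minimization,
# then a shifted-Gaussian importance sampler to refine the probability estimate.
import numpy as np
from scipy.optimize import minimize

def F(theta):                                    # stand-in "quantity of interest"
    return theta[..., 0] + 0.5 * theta[..., 1] ** 2

z = 4.0
res = minimize(lambda t: 0.5 * t @ t, x0=np.ones(2),
               constraints=[{"type": "ineq", "fun": lambda t: F(t) - z}])
theta_star, I_star = res.x, res.fun
print("mechanism:", theta_star.round(3), " crude estimate exp(-I*):", np.exp(-I_star))

rng = np.random.default_rng(3)
S = theta_star + rng.normal(size=(200_000, 2))   # proposal N(theta*, I)
lik = np.exp(-0.5 * np.sum(S**2, 1) + 0.5 * np.sum((S - theta_star)**2, 1))
print("importance sampling estimate:", np.mean(lik * (F(S) >= z)))
```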

We consider estimation of a normal mean matrix under the Frobenius loss. Motivated by the Efron--Morris estimator, a generalization of Stein's prior has recently been developed, which is superharmonic and shrinks the singular values towards zero. The generalized Bayes estimator with respect to this prior is minimax and dominates the maximum likelihood estimator. However, we show here, using Brown's condition, that it is inadmissible. We then develop two types of priors that provide improved generalized Bayes estimators and examine their performance numerically. The proposed priors attain risk reduction by adding scalar shrinkage or column-wise shrinkage to singular value shrinkage. Parallel results for Bayesian predictive densities are also given.
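
For background, the motivating estimator can be stated in a few lines of Python (a sketch of the classical Efron--Morris estimator, not of the new priors proposed above): for a $p \times q$ observation $X \sim N(M, I)$, it shrinks each singular value of $X$, in analogy with James--Stein shrinkage of a mean vector.

```python
# Efron--Morris estimator: X(I_q - (p - q - 1)(X'X)^{-1}), which in the SVD maps
# sigma_i -> sigma_i - (p - q - 1)/sigma_i, pulling singular values towards zero
# (small singular values can overshoot; positive-part variants clip them at zero).
import numpy as np

def efron_morris(X):
    p, q = X.shape                                   # requires p - q - 1 > 0
    return X @ (np.eye(q) - (p - q - 1) * np.linalg.inv(X.T @ X))

rng = np.random.default_rng(4)
M = np.outer(rng.normal(size=20), rng.normal(size=5))    # low-rank true mean
X = M + rng.normal(size=(20, 5))                         # X ~ N(M, I)
for est in (X, efron_morris(X)):
    print(round(float(np.sum((est - M) ** 2)), 1))       # Frobenius loss, one draw
print(np.linalg.svd(X, compute_uv=False).round(2))
print(np.linalg.svd(efron_morris(X), compute_uv=False).round(2))
```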

We present the numerical analysis of a finite element method (FEM) for one-dimensional Dirichlet problems involving the logarithmic Laplacian (the pseudo-differential operator that appears as the first-order expansion of the fractional Laplacian as the exponent $s\to 0^+$). Our analysis exhibits new phenomena in this setting; in particular, using recently obtained regularity results, we prove rigorous error estimates and provide a logarithmic order of convergence in the energy norm using suitable \emph{log}-weighted spaces. Numerical evidence suggests that this type of rate cannot be improved. Moreover, we show that the stiffness matrix of logarithmic problems can be obtained as the derivative of the fractional stiffness matrix evaluated at $s=0$. Lastly, we investigate the discrete eigenvalue problem and its convergence to the continuous one.
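
The stiffness-matrix relation mentioned above follows from the first-order expansion defining the logarithmic Laplacian; a sketch in our own notation (the matrices below are the Galerkin matrices of the respective operators):

```latex
% Writing (K_s)_{ij} = a_s(\phi_j, \phi_i) for the Galerkin matrix of (-\Delta)^s,
% the first-order expansion of the fractional Laplacian,
\[
  (-\Delta)^s = I + s\, L_{\Delta} + o(s), \qquad s \to 0^+,
\]
% with L_\Delta the logarithmic Laplacian, gives
\[
  K_s = M + s\, K_{\log} + o(s)
  \quad\Longrightarrow\quad
  K_{\log} = \frac{d}{ds}\Big|_{s=0} K_s ,
\]
% where M is the mass matrix (the Galerkin matrix of the identity, i.e. K_0).
```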
