
We study the probabilistic sampling of a random variable, in which the variable is sampled only if it falls outside a given set, called the silence set. This helps us to understand optimal event-based sampling for the special case of IID random processes, and also to design sub-optimal schemes for other cases. We consider the design of this probabilistic sampling for a scalar, log-concave random variable, to minimize either the mean square estimation error or the mean absolute estimation error. We show that the optimal silence interval: (i) is essentially unique, and (ii) is the limit of an iterative procedure of centering. Further, we show through numerical experiments that super-level intervals seem to be remarkably near-optimal for mean square estimation. Finally, we use the Gauss inequality for scalar unimodal densities to show that probabilistic sampling incurs a mean square distortion less than a third of that incurred by periodic sampling, whenever the average sampling rate is between 0.3 and 0.9 samples per tick.
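
To make the centering procedure concrete, here is a minimal Monte Carlo sketch of one plausible reading of it, assuming a standard normal density and a fixed silence-interval width `w` (both choices are ours, not the paper's):

```python
import numpy as np

# Hypothetical illustration: iteratively re-center a silence interval
# [c - w/2, c + w/2]. When X lands inside, no sample is sent and the
# receiver uses the conditional mean, at which we then re-center.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)   # Monte Carlo stand-in for the density
w, c = 1.0, 0.7                      # interval width, deliberately off-center start

for _ in range(50):
    inside = (x > c - w / 2) & (x < c + w / 2)
    c = x[inside].mean()             # conditional mean becomes the new center

inside = (x > c - w / 2) & (x < c + w / 2)
mse = np.mean((x[inside] - c) ** 2)  # distortion paid only while silent
rate = 1.0 - inside.mean()           # average samples per tick
print(f"center={c:.4f}  silent-MSE={mse:.4f}  rate={rate:.3f}")
```

For this symmetric density the iteration drifts the center toward 0, and with `w = 1` the resulting rate (about 0.6 samples per tick) falls inside the 0.3 to 0.9 band discussed in the abstract.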

Related content

A stochastic-gradient-based interior-point algorithm for minimizing a continuously differentiable objective function (that may be nonconvex) subject to bound constraints is presented, analyzed, and demonstrated through experimental results. The algorithm differs from other interior-point methods for solving smooth (nonconvex) optimization problems in that the search directions are computed using stochastic gradient estimates. It is also distinctive in its use of inner neighborhoods of the feasible region -- defined by a positive and vanishing neighborhood-parameter sequence -- in which the iterates are forced to remain. It is shown that, with a careful balance between the barrier, step-size, and neighborhood sequences, the proposed algorithm satisfies convergence guarantees in both deterministic and stochastic settings. The results of numerical experiments show that in both settings the algorithm can outperform a projected-(stochastic)-gradient method.
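
As a rough illustration of how the barrier, step-size, and neighborhood sequences interact, the following sketch runs a stochastic log-barrier step on a toy box-constrained quadratic, clipping each iterate into a shrinking inner box; the decay exponents and the objective are illustrative assumptions, not the paper's analyzed choices:

```python
import numpy as np

# Toy sketch: minimize ||x - target||^2 over [0, 1]^n with a stochastic
# gradient plus a log-barrier term, keeping iterates in the inner
# neighborhood [theta_k, 1 - theta_k] with theta_k -> 0.
rng = np.random.default_rng(1)
n = 5
target = np.array([0.9, 0.1, 0.5, 1.2, -0.3])  # 1.2 and -0.3 lie outside the box
x = np.full(n, 0.5)

for k in range(1, 5001):
    mu_k = 1.0 / k**0.6            # vanishing barrier parameter
    alpha_k = 0.5 / k**0.8         # vanishing step size
    theta_k = 0.25 / k**0.5        # vanishing neighborhood parameter
    g = 2 * (x - target) + rng.normal(0, 0.1, n)   # stochastic gradient estimate
    g += mu_k * (1.0 / (1.0 - x) - 1.0 / x)        # gradient of -mu*(log x + log(1-x))
    x = np.clip(x - alpha_k * g, theta_k, 1.0 - theta_k)  # stay in the inner box

print(np.round(x, 3))  # strictly interior points near clip(target, 0, 1)
```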

We present a registration method for model reduction of parametric partial differential equations with dominating advection effects and moving features. Registration refers to the use of a parameter-dependent mapping to make the set of solutions to these equations more amenable to approximation by classical reduced basis methods. The proposed approach draws on optimal transport theory: we use Monge embeddings to construct these mappings in a purely data-driven way. The method relies on a single interpretable hyper-parameter. We discuss how our approach relates to existing works that combine model order reduction and optimal transport theory. Numerical results are provided to demonstrate the effect of the registration, including a model problem where the solution is itself a probability density and one where it is not.
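
In one dimension the Monge map has a closed form via inverse CDFs, which gives a quick way to see what registration does to an advected snapshot; the bump profiles and the shift parameter below are our illustrative assumptions:

```python
import numpy as np

# 1-D Monge map between densities: T = F_snap^{-1} o F_ref, built from
# empirical CDFs, pulls a translated bump back onto a reference bump
# before any reduced-basis compression would be applied.
x = np.linspace(-5, 5, 2001)
ref = np.exp(-x**2)                      # reference snapshot (unnormalized)
snap = np.exp(-(x - 1.5)**2)             # advected snapshot, parameter = shift

def cdf(f):
    return np.cumsum(f) / np.sum(f)

F_ref, F_snap = cdf(ref), cdf(snap)
T = np.interp(F_ref, F_snap, x)          # Monge map: F_snap(T(x)) = F_ref(x)
registered = np.interp(T, x, snap)       # snapshot composed with the map

print(np.abs(registered - ref).max())    # small: the moving feature is aligned
```

For a pure translation the recovered map is simply T(x) = x + 1.5, so the registered snapshot coincides with the reference up to grid error.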

Besov priors are nonparametric priors that model spatially inhomogeneous functions. They are routinely used in inverse problems and imaging, where they exhibit attractive sparsity-promoting and edge-preserving features. A recent line of work has initiated the study of their asymptotic frequentist convergence properties. In the present paper, we consider the theoretical recovery performance of the posterior distributions associated with Besov-Laplace priors in the density estimation model, under the assumption that the observations are generated by a possibly spatially inhomogeneous true density belonging to a Besov space. We improve on existing results and show that carefully tuned Besov-Laplace priors attain optimal posterior contraction rates. Furthermore, we show that hierarchical procedures involving a hyper-prior on the regularity parameter lead to adaptation to any smoothness level.
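
For intuition, a draw from a Besov-Laplace $B^s_{11}$ prior can be sketched through the standard wavelet-series construction with i.i.d. Laplace coefficients; the Haar basis, the smoothness level `s`, and the truncation level `J` below are our assumptions:

```python
import numpy as np

# Sketch of one draw from a Besov-Laplace B^s_{11} prior on [0,1]:
# f = sum_{j,k} 2^{-j(s - 1/2)} xi_{jk} psi_{jk}, xi_{jk} i.i.d. Laplace
# (the standard wavelet construction for d = 1, p = 1).
rng = np.random.default_rng(2)
s, J = 1.0, 8                                # smoothness and truncation level
t = np.linspace(0, 1, 1024, endpoint=False)
f = rng.laplace() * np.ones_like(t)          # scaling (father) coefficient

def haar(j, k, t):
    u = 2**j * t - k                         # mother wavelet psi_{jk}
    return 2**(j / 2) * (((0 <= u) & (u < .5)).astype(float)
                         - ((.5 <= u) & (u < 1)).astype(float))

for j in range(J):
    for k in range(2**j):
        f += 2**(-j * (s - 0.5)) * rng.laplace() * haar(j, k, t)

print(f[:5])  # one prior draw, evaluated on the grid
```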

Chernoff approximations are a flexible and powerful tool of functional analysis, which can be used, in particular, to numerically find approximate solutions of some differential equations with variable coefficients. For many classes of equations such approximations have already been constructed since the pioneering papers of Prof. O. G. Smolyanov in 2000; however, the speed of their convergence to the exact solution has not been properly studied. We select the heat equation (because its exact solutions are already known) as a simple yet informative model example for studying the rate of convergence of Chernoff approximations. Examples illustrating the rate of convergence of Chernoff approximations to the solution of the Cauchy problem for the heat equation are constructed in the paper. Numerically we show that, for sufficiently smooth initial conditions, the order of approximation is equal to the order of Chernoff tangency of the Chernoff function used. We also consider insufficiently smooth initial conditions and show how the H\"older class of the initial condition is related to the rate of convergence. This method of study can in the future be applied to general second-order parabolic equations with variable coefficients by a slight modification of our Python 3 code. This arXiv version of the text is supplementary material for our journal article. Here we include the full text of the article together with all illustrations (Appendix A) and the full Python 3 code (Appendix B).
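
As a concrete instance, the textbook shift-based Chernoff function $C(t)f(x) = \frac{1}{2}\big(f(x+\sqrt{2t}) + f(x-\sqrt{2t})\big)$ is Chernoff-tangent to the heat semigroup for $u_t = u_{xx}$, so iterating it approximates the Cauchy problem; this particular choice of $C$ is illustrative and not necessarily the one analyzed in the paper:

```python
import numpy as np

# Chernoff approximation for u_t = u_xx: apply C(t/n) n times to the
# initial condition; (C(t/n))^n f_0 converges to the heat semigroup.
x = np.linspace(-20, 20, 4001)
f = np.exp(-x**2)                       # initial condition u(0, x)
t, n = 1.0, 200

u = f.copy()
h = np.sqrt(2 * t / n)
for _ in range(n):                      # one Chernoff step per iteration
    u = 0.5 * (np.interp(x + h, x, u) + np.interp(x - h, x, u))

# known exact solution for a Gaussian initial condition
exact = np.exp(-x**2 / (1 + 4 * t)) / np.sqrt(1 + 4 * t)
print(np.abs(u - exact).max())          # error shrinks as n grows
```

Re-running with larger `n` shows the empirical order of convergence, which is the kind of experiment the paper carries out systematically.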

We study numerical integration over bounded regions in $\mathbb{R}^s, s\ge1$ with respect to some probability measure. We replace random sampling with quasi-Monte Carlo methods, where the underlying point set is derived from deterministic constructions that aim to fill the space more evenly than random points. Such quasi-Monte Carlo point sets are ordinarily designed for the uniform measure, and the theory only works for product measures when a coordinate-wise transformation is applied. Going beyond this setting, we first consider the case where the target density is a mixture distribution in which each term of the mixture comes from a product distribution. Next we consider target densities that can be approximated with such mixture distributions. We require the approximation to be a sum of coordinate-wise products and to be positive everywhere (so that it can be re-scaled to a probability density function). We use tensor product hat function approximations for this purpose, since a hat function approximation of a positive function is itself positive. We also study more complex algorithms, where we first approximate the target density with a general Gaussian mixture distribution and then approximate the mixture with an adaptive hat function approximation on rotated intervals. The Gaussian mixture approximation allows us to locate the essential parts of the target density, whereas the adaptive hat function approximation allows us to approximate the finer structure of the target density. We prove convergence rates for each of the integration techniques based on quasi-Monte Carlo sampling for integrands with bounded partial mixed derivatives. The employed algorithms are based on digital $(t,s)$-sequences over the finite field $\mathbb{F}_2$ and an inversion method. Numerical examples illustrate the performance of the algorithms for some target densities and integrands.
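
The basic mixture-plus-inversion step can be sketched for a two-component Gaussian mixture: the first coordinate of each quasi-Monte Carlo point selects the component, and the remaining coordinates pass through that component's inverse CDF (the weights and means below are illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm, qmc

# QMC integration against a mixture of product (Gaussian) distributions
# via inversion, using a scrambled Sobol sequence as the point set.
w = np.array([0.3, 0.7])                  # mixture weights
mu = np.array([[-2.0, 0.0], [1.0, 1.0]])  # component means, s = 2
u = qmc.Sobol(d=3, scramble=True, seed=3).random(2**14)

comp = np.searchsorted(np.cumsum(w), u[:, 0])   # inversion on the weights
x = mu[comp] + norm.ppf(u[:, 1:])               # inversion per coordinate

est = np.mean(np.sum(x**2, axis=1))             # E[||X||^2] under the mixture
truth = 2.0 + w @ np.sum(mu**2, axis=1)         # s * sigma^2 + E[||mean||^2]
print(est, truth)
```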

We propose the first online quantum algorithm for zero-sum games with $\tilde O(1)$ regret under the game setting. Moreover, our quantum algorithm computes an $\varepsilon$-approximate Nash equilibrium of an $m \times n$ matrix zero-sum game in quantum time $\tilde O(\sqrt{m+n}/\varepsilon^{2.5})$, yielding a quadratic improvement over classical algorithms in terms of $m, n$. Our algorithm uses standard quantum inputs and generates classical outputs with succinct descriptions, facilitating end-to-end applications. As an application, we obtain a fast quantum linear programming solver. Technically, our online quantum algorithm "quantizes" classical algorithms based on the optimistic multiplicative weight update method. At the heart of our algorithm is a fast quantum multi-sampling procedure for the Gibbs sampling problem, which may be of independent interest.
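
For reference, here is a sketch of the classical optimistic multiplicative weight update scheme that the quantum algorithm "quantizes", run on a random toy game; the step size and horizon are arbitrary choices of ours:

```python
import numpy as np

# Optimistic MWU for the zero-sum game min_x max_y x^T A y: each player
# runs multiplicative weights against cumulative losses plus the last
# loss replayed as an optimistic prediction of the next one.
rng = np.random.default_rng(4)
m, n, eta, T = 30, 40, 0.1, 2000
A = rng.uniform(-1, 1, (m, n))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

Lx, Ly = np.zeros(m), np.zeros(n)        # cumulative losses
gx, gy = np.zeros(m), np.zeros(n)        # last-round losses (the "optimism")
x_avg, y_avg = np.zeros(m), np.zeros(n)

for _ in range(T):
    x = softmax(-eta * (Lx + gx))        # predict next loss = last loss
    y = softmax(-eta * (Ly + gy))
    gx, gy = A @ y, -(A.T @ x)           # x minimizes, y maximizes
    Lx += gx; Ly += gy
    x_avg += x / T; y_avg += y / T

gap = (A.T @ x_avg).max() - (A @ y_avg).min()   # duality gap bounds eps
print(f"duality gap: {gap:.4f}")
```

The averaged strategies form an approximate Nash equilibrium; the quantum speedup in the paper comes from replacing the explicit Gibbs (softmax) updates with fast quantum multi-sampling.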

We study the complexity of learning quantum states in various models with respect to the stabilizer formalism and obtain the following results:

- We prove that $\Omega(n)$ $T$-gates are necessary for any Clifford+$T$ circuit to prepare computationally pseudorandom quantum states, an exponential improvement over the previously known bound. This bound is asymptotically tight if linear-time quantum-secure pseudorandom functions exist.
- Given an $n$-qubit pure quantum state $|\psi\rangle$ that has fidelity at least $\tau$ with some stabilizer state, we give an algorithm that outputs a succinct description of a stabilizer state that witnesses fidelity at least $\tau - \varepsilon$. The algorithm uses $O(n/(\varepsilon^2\tau^4))$ samples and $\exp\left(O(n/\tau^4)\right) / \varepsilon^2$ time. In the regime of constant $\tau$, this algorithm estimates stabilizer fidelity substantially faster than the na\"ive $\exp(O(n^2))$-time brute-force search over all stabilizer states.
- We improve the soundness analysis of the stabilizer state property testing algorithm due to Gross, Nezami, and Walter [Comms. Math. Phys. 385 (2021)]. As an application, we exhibit a tolerant property testing algorithm for stabilizer states.

The underlying algorithmic primitive in all of our results is Bell difference sampling. To prove our results, we establish and/or strengthen connections between Bell difference sampling, symplectic Fourier analysis, and graph theory.
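
Since Bell difference sampling underlies all three results, a brute-force simulation on a small stabilizer state shows what it produces: samples in $\mathbb{F}_2^{2n}$ supported on the state's (unsigned) Weyl stabilizer group. The state, qubit count, and direct convolution below are our small-scale assumptions for illustration:

```python
import numpy as np
from itertools import product

# Bell difference sampling distribution q(x) = sum_y p(y) p(x XOR y),
# where p(a) = |<psi|W_a|psi>|^2 / 2^n is the characteristic (Weyl)
# distribution of the state, computed here by brute force for n = 2.
I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]])

def weyl(a_x, a_z):                 # W_a up to phase: X^{ax} Z^{az} per qubit
    P = np.array([[1.0 + 0j]])
    for bx, bz in zip(a_x, a_z):
        P = np.kron(P, (X if bx else I) @ (Z if bz else I))
    return P

n = 2
psi = np.zeros(2**n, complex); psi[0] = psi[-1] = 2**-0.5   # Bell state (stabilizer)
labels = list(product([0, 1], repeat=2 * n))                # (x-part, z-part) in F_2^{2n}
p = np.array([abs(psi.conj() @ weyl(a[:n], a[n:]) @ psi)**2
              for a in labels]) / 2**n

q = np.zeros(len(labels))
for i, xl in enumerate(labels):
    for j, yl in enumerate(labels):
        k = labels.index(tuple(a ^ b for a, b in zip(xl, yl)))
        q[k] += p[i] * p[j]

# For a stabilizer state, q is supported exactly on its Weyl stabilizer group.
print([l for l, v in zip(labels, q) if v > 1e-9])
```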

The total correlation (TC) is a crucial index for measuring the correlation between the marginal distributions of a multidimensional random variable, and it is frequently applied as an inductive bias in representation learning. Previous research has shown that the TC value can be estimated using mutual information (MI) bounds through decomposition. However, we found through theoretical derivation and qualitative experiments that, because importance sampling is used in the decomposition process, the bias of the TC value estimated from MI bounds is amplified when the proposal distribution in the sampling differs significantly from the target distribution. To reduce this estimation bias, we propose a TC estimation correction model based on supervised learning, which takes the training-iteration loss sequence of the MI-bound-based TC estimator as input features and outputs the true TC value. Experiments show that our proposed method can improve the accuracy of TC estimation and eliminate the variance generated by the TC estimation process.
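
A toy version of the correction model can be sketched with synthetic data: simulated biased loss curves stand in for the MI-bound estimator's training trajectories, and a linear model regresses the true TC (known in closed form for bivariate Gaussians) from them. Everything below is illustrative, not the paper's setup:

```python
import numpy as np

# Toy sketch: regress the true TC from a loss-sequence feature vector.
# For a bivariate Gaussian with correlation rho, TC = -0.5 * log(1 - rho^2).
rng = np.random.default_rng(5)
T, N = 50, 400                               # sequence length, dataset size
rho = rng.uniform(0.1, 0.9, N)
tc_true = -0.5 * np.log(1 - rho**2)
steps = np.arange(1, T + 1)

# simulated biased plateau + transient + noise, mimicking an estimator's loss curve
seq = (0.8 * tc_true[:, None] * (1 - np.exp(-steps / 10))
       + rng.normal(0, 0.02, (N, T)))

X = np.hstack([seq, np.ones((N, 1))])        # linear model with intercept
w = np.linalg.lstsq(X, tc_true, rcond=None)[0]
pred = X @ w
print(f"RMSE: {np.sqrt(np.mean((pred - tc_true)**2)):.4f}")
```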

This paper focuses on the study of the order of power series that are linear combinations of a given finite set of power series. The order of a formal power series, denoted $\textrm{ord}(f)$, is defined as the minimum exponent of $x$ that has a non-zero coefficient in $f(x)$. Our first result is that the order of the Wronskian of these power series is equal, up to a polynomial factor, to the maximum order that occurs among linear combinations of these power series. This implies that the Wronskian approach used in (Kayal and Saha, TOCT'2012) to upper bound the order of sums of square roots is optimal up to a polynomial blowup. We also demonstrate upper bounds, similar to those of (Kayal and Saha, TOCT'2012), for the order of power series in a variety of other scenarios, and we solve a special case of the inequality testing problem outlined in (Etessami et al., TOCT'2014). In the second part of the paper, we study the equality variant of the sum-of-square-roots problem, which is decidable in polynomial time due to (Bl\"omer, FOCS'1991). We investigate a natural generalization of this problem when the input integers are given as straight line programs. Under the assumption of the Generalized Riemann Hypothesis (GRH), we show that this problem can be reduced to the so-called ``one-dimensional'' variant, and we identify the key mathematical challenges for solving this ``one-dimensional'' variant.
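
The Wronskian-order relation is easy to check on a small example with a computer algebra system: for $f_1 = 1 + x$ and $f_2 = 1 + x + x^3$, the combination $f_2 - f_1 = x^3$ attains the maximum order 3, while the Wronskian has order 2, consistent with equality up to a polynomial factor:

```python
from sympy import symbols, diff, expand, Poly, Matrix

# ord(W(f1, f2)) versus the maximum order over linear combinations of f1, f2.
x = symbols('x')
f1, f2 = 1 + x, 1 + x + x**3

W = expand(Matrix([[f1, f2], [diff(f1, x), diff(f2, x)]]).det())
ord_W = min(m[0] for m in Poly(W, x).monoms())  # smallest exponent present
print(W, ord_W)   # 2*x**3 + 3*x**2 -> ord(W) = 2, vs max combination order 3
```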

We propose a novel nonparametric regression framework subject to a positive definiteness constraint. It offers a highly modular approach for estimating covariance functions of stationary processes. Our method can impose positive definiteness, as well as isotropy and monotonicity, on the estimators, and its hyperparameters can be chosen by cross-validation. We define our estimators by taking integral transforms of kernel-based distribution surrogates. We then use the iterated density estimation evolutionary algorithm, a variant of estimation of distribution algorithms, to fit the estimators. We also extend our method to estimate covariance functions for point-referenced data. Compared to alternative approaches, our method provides more reliable estimates for long-range dependence. Several numerical studies are performed to demonstrate the efficacy and performance of our method, and we illustrate it using precipitation data from the Spatial Interpolation Comparison 97 project.
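
A simplified stand-in for the integral-transform idea (not the paper's IDEA-fitted estimator): represent an isotropic covariance as a nonnegative mixture of Gaussian kernels, so positive definiteness holds by construction for any nonnegative weights, fitted here by nonnegative least squares:

```python
import numpy as np
from scipy.optimize import nnls

# C(h) = sum_k w_k exp(-h^2 / (2 s_k^2)) with w >= 0: each term is a
# positive definite covariance, so the mixture is too.
rng = np.random.default_rng(6)
h = np.linspace(0, 5, 60)                       # lags
true_cov = np.exp(-h / 1.5)                     # exponential covariogram
emp = true_cov + rng.normal(0, 0.02, h.size)    # noisy empirical estimate

scales = np.geomspace(0.1, 10, 30)              # dictionary of length scales
D = np.exp(-h[:, None]**2 / (2 * scales[None, :]**2))
w, _ = nnls(D, emp)                             # nonnegative weights -> valid covariance

fit = D @ w
print(f"max abs error: {np.abs(fit - true_cov).max():.3f}")
```

The nonnegative-weights constraint plays the role of the positive definiteness constraint in the abstract; isotropy is built in because the dictionary depends only on the lag magnitude.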
