
Characterized by an outer integral connected to an inner integral through a nonlinear function, nested integration is a challenging problem in various fields, such as engineering and mathematical finance. The available numerical methods for nested integration based on Monte Carlo (MC) methods can be prohibitively expensive owing to the error propagating from the inner to the outer integral. Attempts to enhance the efficiency of these approximations using the quasi-MC (QMC) or randomized QMC (rQMC) method have focused on either the inner or outer integral approximation. This work introduces a novel nested rQMC method that simultaneously addresses the approximation of the inner and outer integrals. This method leverages the unique nested integral structure to offer a more efficient approximation mechanism. By incorporating Owen's scrambling techniques, we address integrands exhibiting infinite variation in the Hardy--Krause sense, enabling theoretically sound error estimates. As the primary contribution, we derive asymptotic error bounds for the bias and variance of our estimator, along with the regularity conditions under which these bounds can be attained. In addition, we provide nearly optimal sample sizes for the rQMC approximations underlying the numerical implementation of the proposed method. Moreover, we derive a truncation scheme to make our estimator applicable in the context of expected information gain estimation and indicate how to use importance sampling to remedy the measure concentration arising in the inner integral. We verify the estimator quality through numerical experiments by comparing the computational efficiency of the nested rQMC method against standard nested MC integration for two case studies: one in thermomechanics and the other in pharmacokinetics. These examples highlight the computational savings and enhanced applicability of the proposed approach.
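To make the construction concrete, the sketch below contrasts a plain nested MC estimator of E_X[g(E_Y[f(X, Y)])] with a randomized QMC variant built on scrambled Sobol' points (SciPy's scrambling is Owen-style up to implementation details). The integrand f, the nonlinearity g, and all sample sizes are illustrative placeholders rather than the paper's method or case studies; in particular, reusing one scrambled inner point set across all outer samples is a simplification.

```python
import numpy as np
from scipy.stats import qmc

def f(x, y):                          # toy inner integrand on [0, 1]^2
    return np.exp(-x[..., None] * y)

g = np.log                            # toy outer nonlinearity

def nested_mc(n_outer, n_inner, rng):
    x = rng.random(n_outer)
    y = rng.random((n_outer, n_inner))
    inner = f(x, y).mean(axis=1)      # inner MC average per outer sample
    return g(inner).mean()

def nested_rqmc(m_outer, m_inner, seed):
    x = qmc.Sobol(d=1, scramble=True, seed=seed).random(2**m_outer).ravel()
    y = qmc.Sobol(d=1, scramble=True, seed=seed + 1).random(2**m_inner).ravel()
    inner = f(x, y[None, :]).mean(axis=1)  # same scrambled inner points reused
    return g(inner).mean()

rng = np.random.default_rng(0)
print(nested_mc(2**10, 2**6, rng))
print(nested_rqmc(10, 6, seed=0))
```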

Related Content

We consider the problem of approximating an unknown function in a nonlinear model class from point evaluations. When obtaining these point evaluations is costly, minimising the required sample size becomes crucial. Recently, increasing focus has been placed on adaptive sampling strategies to achieve this. These strategies are based on linear spaces related to the nonlinear model class, for which the optimal sampling measures are known. However, the resulting optimal sampling measures depend on an orthonormal basis of the linear space, which is rarely known. Consequently, sampling from these measures is challenging in practice. This manuscript presents a sampling strategy that iteratively refines an estimate of the optimal sampling measure by updating it based on previously drawn samples. This strategy can be performed offline and does not require evaluations of the sought function. We establish convergence and illustrate the practical performance through numerical experiments. Comparing the presented approach with standard Monte Carlo sampling demonstrates a significant reduction in the number of samples required to achieve a good estimation of an orthonormal basis.
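As a rough illustration of such an iteration (not the manuscript's exact scheme), the sketch below works with polynomials on [-1, 1]: each step orthonormalizes a basis against the current weighted sample, then resamples toward the induced density proportional to the inverse Christoffel function sum_i phi_i(x)^2. The space, sample sizes, and resampling rule are all assumptions made for illustration.

```python
import numpy as np

def refine(degree=5, n=4000, iters=5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, n)            # initial uniform sample
    u = np.ones(n)                       # importance weights vs. uniform
    for _ in range(iters):
        # Orthonormalize monomials w.r.t. the weighted empirical measure.
        V = np.vander(x, degree + 1, increasing=True)
        _, R = np.linalg.qr(np.sqrt(u / u.sum())[:, None] * V)
        C = np.linalg.inv(R)             # maps monomials to (approx) ONB
        # Resample fresh candidates toward density ~ sum_i phi_i(x)^2.
        cand = rng.uniform(-1, 1, n)
        Phi = np.vander(cand, degree + 1, increasing=True) @ C
        w = (Phi ** 2).sum(axis=1)
        idx = rng.choice(n, size=n, p=w / w.sum())
        x, u = cand[idx], 1.0 / w[idx]   # weights undo the sampling bias
    return x, u

samples, weights = refine()              # samples target the refined measure
```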

We prove that random quantum circuits on any geometry, including a 1D line, can form approximate unitary designs over $n$ qubits in $\log n$ depth. In a similar manner, we construct pseudorandom unitaries (PRUs) in 1D circuits in $\text{poly} \log n $ depth, and in all-to-all-connected circuits in $\text{poly} \log \log n $ depth. In all three cases, the $n$ dependence is optimal and improves exponentially over known results. These shallow quantum circuits have low complexity and create only short-range entanglement, yet are indistinguishable from unitaries with exponential complexity. Our construction glues local random unitaries on $\log n$-sized or $\text{poly} \log n$-sized patches of qubits to form a global random unitary on all $n$ qubits. In the case of designs, the local unitaries are drawn from existing constructions of approximate unitary $k$-designs, and hence also inherit an optimal scaling in $k$. In the case of PRUs, the local unitaries are drawn from existing unitary ensembles conjectured to form PRUs. Applications of our results include proving that classical shadows with 1D log-depth Clifford circuits are as powerful as those with deep circuits, demonstrating superpolynomial quantum advantage in learning low-complexity physical systems, and establishing quantum hardness for recognizing phases of matter with topological order.
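The gluing idea can be caricatured classically at toy scale: a shallow 1D brickwork of Haar-random two-qubit gates composes purely local randomness into a global n-qubit unitary. The sketch below only illustrates this circuit layout; it does not reproduce the paper's log n-sized patches or any design/PRU guarantee.

```python
import numpy as np
from scipy.stats import unitary_group

def brickwork_layer(n, shifted, rng):
    # One layer of Haar-random 2-qubit gates on neighbouring pairs;
    # shifted layers start at qubit 1 and idle qubits get the identity.
    mats = []
    q = 0
    if shifted:
        mats.append(np.eye(2))
        q = 1
    while q + 1 < n:
        mats.append(unitary_group.rvs(4, random_state=rng))
        q += 2
    if q < n:
        mats.append(np.eye(2))
    U = mats[0]
    for M in mats[1:]:
        U = np.kron(U, M)          # tensor the gates into a 2^n x 2^n layer
    return U

rng = np.random.default_rng(1)
n, depth = 6, 4
U = np.eye(2**n, dtype=complex)
for layer in range(depth):
    U = brickwork_layer(n, layer % 2 == 1, rng) @ U
print(np.allclose(U.conj().T @ U, np.eye(2**n)))   # global unitary: True
```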

Anomaly detection is a branch of data analysis and machine learning which aims at identifying observations that exhibit abnormal behaviour. Be it measurement errors, disease development, severe weather, production quality defects, failed equipment, financial frauds or crisis events, their on-time identification, isolation and explanation constitute an important task in almost any branch of science and industry. By providing a robust ordering, data depth, a statistical function that measures the belongingness of any point of the space to a data set, becomes a particularly useful tool for detecting anomalies. Already known for its theoretical properties, data depth has undergone substantial computational developments in the last decade and particularly in recent years, which has made it applicable to contemporary-sized problems of data analysis and machine learning. In this article, data depth is studied as an efficient anomaly detection tool, assigning abnormality labels to observations with lower depth values, in a multivariate setting. Practical questions of the necessity and reasonability of invariances, the shape of the depth function, its robustness and computational complexity, and the choice of the threshold are discussed. Illustrations include use cases that underline the advantageous behaviour of data depth in various settings.
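As a minimal concrete instance, the sketch below scores points by Mahalanobis depth (the simplest affine-invariant, though non-robust, choice) and labels the shallowest fraction as anomalies. The depth notion and the 2% threshold are illustrative assumptions; the article discusses more robust alternatives.

```python
import numpy as np

def mahalanobis_depth(X, data):
    # D(x) = 1 / (1 + (x - mu)^T S^{-1} (x - mu)); deep points score near 1.
    mu = data.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(data, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', X - mu, Sinv, X - mu)
    return 1.0 / (1.0 + d2)

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))
data[:5] += 6.0                                  # plant a few outliers
depth = mahalanobis_depth(data, data)
labels = depth < np.quantile(depth, 0.02)        # flag 2% shallowest points
print(labels[:5])                                # planted outliers flagged
```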

Mediation analyses allow researchers to quantify the effect of an exposure variable on an outcome variable through a mediator variable. If a binary mediator variable is misclassified, the resulting analysis can be severely biased. Misclassification is especially difficult to deal with when it is differential and when there are no gold standard labels available. Previous work has addressed this problem using a sensitivity analysis framework or by assuming that misclassification rates are known. We leverage a variable related to the misclassification mechanism to recover unbiased parameter estimates without using gold standard labels. The proposed methods require the reasonable assumption that the sum of the sensitivity and specificity is greater than 1. Three correction methods are presented: (1) an ordinary least squares correction for Normal outcome models, (2) a multi-step predictive value weighting method, and (3) a seamless expectation-maximization algorithm. We apply our misclassification correction strategies to investigate the mediating role of gestational hypertension on the association between maternal age and pre-term birth.
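For intuition about why Se + Sp > 1 is needed, consider the classical correction when sensitivity and specificity are known: the observed prevalence is an affine function of the true one, invertible exactly when Se + Sp - 1 > 0. The minimal sketch below implements this textbook correction; the paper's methods go further by not assuming these rates are known.

```python
def corrected_prevalence(p_observed, se, sp):
    # P(M* = 1) = Se * P(M = 1) + (1 - Sp) * (1 - P(M = 1)); invert for P(M = 1).
    assert se + sp > 1, "identifiability requires Se + Sp > 1"
    return (p_observed - (1 - sp)) / (se + sp - 1)

print(corrected_prevalence(0.30, se=0.9, sp=0.85))   # recovers 0.20
```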

A combinatorial object is said to be quasirandom if it exhibits certain properties that are typically seen in a truly random object of the same kind. It is known that a permutation is quasirandom if and only if the pattern density of each of the twenty-four 4-point permutations is close to 1/24, which is its expected value in a random permutation. In other words, the set of all twenty-four 4-point permutations is quasirandom-forcing. Moreover, it is known that there exist sets of eight 4-point permutations that are also quasirandom-forcing. In this work, we show that every quasirandom-forcing set of 4-point permutations must have cardinality at least five.
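The quasirandomness criterion is easy to probe empirically. The sketch below counts the densities of all twenty-four 4-point patterns in a pseudorandom permutation and reports the largest deviation from 1/24; the permutation size is an illustrative choice.

```python
from itertools import combinations, permutations
import random

def pattern(values):
    # Relative-order pattern of 4 values as a tuple over {0, 1, 2, 3}.
    ranks = sorted(range(4), key=lambda i: values[i])
    out = [0] * 4
    for r, i in enumerate(ranks):
        out[i] = r
    return tuple(out)

n = 50
perm = random.sample(range(n), n)                 # pseudorandom permutation
counts = {p: 0 for p in permutations(range(4))}
for idx in combinations(range(n), 4):
    counts[pattern([perm[i] for i in idx])] += 1
total = sum(counts.values())
print(max(abs(c / total - 1 / 24) for c in counts.values()))
```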

A number of recent studies have proposed that linear representations are appropriate for solving nonlinear dynamical systems with quantum computers, which fundamentally act linearly on a wave function in a Hilbert space. Linear representations, such as the Koopman representation and Koopman-von Neumann mechanics, have regained attention from the dynamical-systems research community. Here, we aim to present a unified theoretical framework, currently missing in the literature, with which one can compare and relate existing methods, their conceptual basis, and their representations. We also aim to show that, although quantum simulation of nonlinear classical systems may be possible with such linear representations, the necessary projection onto a feasible finite-dimensional space will in practice eventually induce numerical artifacts that can be hard to eliminate or even control. As a result, a practical, reliable and accurate way to use quantum computation for solving general nonlinear dynamical systems is still an open problem.
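The projection issue can be seen in miniature with Carleman linearization, used here as an illustrative stand-in for such linear representations: truncating the logistic ODE dx/dt = x(1 - x) to the powers x, x^2, ..., x^K yields a finite linear system whose solution can drift badly from the true trajectory as t grows, exhibiting exactly the kind of artifact described above.

```python
import numpy as np
from scipy.linalg import expm

def carleman_logistic(x0, t, K):
    # Truncated Carleman system for z_k = x^k:  dz_k/dt = k z_k - k z_{k+1},
    # with z_{K+1} dropped, so dz/dt = A z for an upper-bidiagonal A.
    A = np.zeros((K, K))
    for k in range(1, K + 1):
        A[k - 1, k - 1] = k
        if k < K:
            A[k - 1, k] = -k
    z0 = x0 ** np.arange(1, K + 1)
    return (expm(A * t) @ z0)[0]           # first component approximates x(t)

x0, t = 0.1, 3.0
exact = x0 * np.exp(t) / (1 + x0 * (np.exp(t) - 1))   # closed-form logistic
for K in (2, 4, 8, 16):
    print(K, carleman_logistic(x0, t, K), exact)
```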

Discrete choice models with non-monotonic response functions are important in many areas of application, especially political science and marketing. This paper describes a novel unfolding model for binary data that allows for heavy-tailed shocks to the underlying utilities. One of our key contributions is a Markov chain Monte Carlo algorithm that requires little or no parameter tuning, fully explores the support of the posterior distribution, and can be used to fit various extensions of our core model that involve (Bayesian) hypothesis testing on the latent construct. Our empirical evaluations of the model and the associated algorithm suggest that they provide better complexity-adjusted fit to voting data from the United States House of Representatives.
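For a flavour of what a non-monotonic response function looks like, the sketch below uses a Gaussian-bump unfolding probability that peaks when the latent ideal point coincides with the item's position; the kernel and scale are illustrative assumptions, not the paper's heavy-tailed model.

```python
import numpy as np

def unfolding_prob(ideal, item, scale=1.0):
    # Probability of a "yes" peaks when ideal point and item coincide.
    return np.exp(-0.5 * ((ideal - item) / scale) ** 2)

ideals = np.linspace(-3, 3, 7)
print(np.round(unfolding_prob(ideals, item=0.5), 3))  # rises, then falls
```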

We discuss a connection between a generative model, called the diffusion model, and nonequilibrium thermodynamics for the Fokker-Planck equation, called stochastic thermodynamics. Based on the techniques of stochastic thermodynamics, we derive the speed-accuracy trade-off for the diffusion models, which is a trade-off relationship between the speed and accuracy of data generation in diffusion models. Our result implies that the entropy production rate in the forward process affects the errors in data generation. From a stochastic thermodynamic perspective, our results provide quantitative insight into how best to generate data in diffusion models. The optimal learning protocol is introduced by the conservative force in stochastic thermodynamics and the geodesic of space by the 2-Wasserstein distance in optimal transport theory. We numerically illustrate the validity of the speed-accuracy trade-off for the diffusion models with different noise schedules such as the cosine schedule, the conditional optimal transport, and the optimal transport.
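As a small concrete anchor, the forward noising process under the cosine schedule (in the common Nichol-Dhariwal parameterization, assumed here) can be written as x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps; the sketch below tabulates the signal/noise split, without computing any of the paper's thermodynamic quantities.

```python
import numpy as np

def cosine_alpha_bar(t, s=0.008):
    # Cumulative signal level abar_t for t in [0, 1] (Nichol-Dhariwal form).
    return np.cos((t + s) / (1 + s) * np.pi / 2) ** 2

rng = np.random.default_rng(0)
x0 = rng.normal(size=5)                          # toy "data" sample
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    a = cosine_alpha_bar(t) / cosine_alpha_bar(0.0)
    xt = np.sqrt(a) * x0 + np.sqrt(1 - a) * rng.normal(size=5)
    print(f"t={t:.2f}  signal={np.sqrt(a):.3f}  noise={np.sqrt(1 - a):.3f}")
```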

Network point processes often exhibit latent structure that governs the behaviour of the sub-processes. It is not always reasonable to assume that this latent structure is static, and detecting when and how it changes is often of interest. In this paper, we introduce a novel online methodology for detecting changes within the latent structure of a network point process. We focus on block-homogeneous Poisson processes, where latent node memberships determine the rates of the edge processes. We propose a scalable variational procedure which can be applied on large networks in an online fashion via a Bayesian forgetting factor applied to sequential variational approximations to the posterior distribution. The proposed framework is tested on simulated and real-world data, and it rapidly and accurately detects changes to both the latent edge process rates and the latent node group memberships, in an online manner. In particular, in an application to the Santander Cycles bike-sharing network in central London, we detect changes within the network related to holiday periods and lockdown restrictions between 2019 and 2020.
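The forgetting-factor mechanism can be sketched on a single Poisson rate: a conjugate Gamma posterior is updated online while its sufficient statistics are discounted by a factor in (0, 1), so older observations fade and a change in rate is tracked. The full method embeds this idea within a variational approximation over latent block memberships; all constants below are illustrative.

```python
import numpy as np

def online_gamma_poisson(counts, dt=1.0, forget=0.95, a0=1.0, b0=1.0):
    # Gamma(a, b) posterior for a Poisson rate, with discounted statistics.
    a, b = a0, b0
    rates = []
    for c in counts:
        a = forget * a + c               # discounted event counts
        b = forget * b + dt              # discounted exposure time
        rates.append(a / b)              # posterior-mean rate estimate
    return np.array(rates)

rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(2.0, 100), rng.poisson(8.0, 100)])
est = online_gamma_poisson(counts)
print(est[95:105].round(2))              # estimate adapts after the change
```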
