The problem of computing the conditional expectation $E[f(Y)|X]$ with least-squares Monte Carlo is of general importance and has been widely studied. To solve this problem, it is usually assumed that one has as many samples of $Y$ as of $X$. However, when samples are generated by computer simulation and the conditional law of $Y$ given $X$ can be simulated, it may be advantageous to sample $K \in \mathbb{N}$ values of $Y$ for each sample of $X$. The present work determines the optimal value of $K$ for a given computational budget, as well as a way to estimate it. The main takeaway is that the computational gain is all the greater when the cost of sampling $Y$ given $X$ is small compared with the cost of sampling $X$. Numerical illustrations of the optimal choice of $K$ and of the computational gain are given for several examples, including one inspired by risk management.
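As a toy illustration of the sampling scheme (not the paper's setting), the sketch below estimates $E[f(Y)|X]$ by drawing $K$ values of $Y$ per outer sample of $X$, averaging $f(Y)$ within each group, and projecting onto a regression basis by least squares. The model ($X$ standard normal, $Y \mid X$ normal with mean $X$, $f = \exp$) and the polynomial basis are assumptions made purely for this sketch.

```python
# Hypothetical illustration: least-squares Monte Carlo estimate of E[f(Y)|X]
# with K inner samples of Y per outer sample of X. The model and the
# polynomial basis are assumptions of this sketch, not the paper's setting.
import numpy as np

rng = np.random.default_rng(0)

def estimate_conditional_expectation(n_outer, K, degree=3):
    X = rng.normal(size=n_outer)                      # outer samples of X
    Y = X[:, None] + rng.normal(size=(n_outer, K))    # K samples of Y | X each
    fbar = np.exp(Y).mean(axis=1)                     # inner average of f(Y)
    A = np.vander(X, degree + 1)                      # polynomial regression basis
    coef, *_ = np.linalg.lstsq(A, fbar, rcond=None)   # least-squares projection
    return lambda x: np.vander(np.atleast_1d(x), degree + 1) @ coef

# For a fixed budget ~ n_outer * (c_X + K * c_Y), increasing K is cheap when
# c_Y << c_X, which is where the paper's computational gains arise.
cond_exp = estimate_conditional_expectation(n_outer=2000, K=10)
print(cond_exp(0.5))   # estimate of E[exp(Y) | X = 0.5] (true value exp(1.0))
```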
We derive general bounds on the probability that the empirical first-passage time $\overline{\tau}_n \equiv \sum_{i=1}^n \tau_i/n$ of a reversible ergodic Markov process, inferred from a sample of $n$ independent realizations, deviates from the true mean first-passage time by more than any given amount in either direction. We construct non-asymptotic confidence intervals that hold in the elusive small-sample regime, thus filling the gap between asymptotic methods and the Bayesian approach, which is known to be sensitive to prior belief and tends to underestimate uncertainty in the small-sample setting. We prove sharp bounds on extreme first-passage times that control uncertainty even in cases where the mean alone does not sufficiently characterize the statistics. Our concentration-of-measure-based results allow for model-free error control and reliable error estimation in kinetic inference, and are thus important for the analysis of experimental and simulation data in the presence of limited sampling.
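To make the setting concrete, the following sketch simulates first-passage times of a simple birth-death chain and attaches an elementary Chebyshev-type interval to the empirical mean; the chain, the target state, and the plug-in variance are assumptions of ours, and the paper's sharper, model-free concentration bounds are not reproduced here.

```python
# Illustration only: empirical mean first-passage time of a simple biased
# random walk, with a crude Chebyshev-type interval. The chain and the
# plug-in variance estimate are assumptions of this sketch; the paper's
# concentration bounds are sharper and model-free.
import numpy as np

rng = np.random.default_rng(1)

def first_passage_time(p_up=0.6, target=5):
    x, t = 0, 0
    while x < target:
        x += 1 if rng.random() < p_up else -1
        t += 1
    return t

n = 50                                    # the small-sample regime
taus = np.array([first_passage_time() for _ in range(n)])
tau_bar = taus.mean()
s = taus.std(ddof=1)

# Chebyshev: P(|tau_bar - mean| >= eps) <= Var / (n * eps^2); with the
# plug-in estimate s this gives only an approximate 1 - alpha interval.
alpha = 0.05
eps = s / np.sqrt(n * alpha)
print(f"tau_bar = {tau_bar:.1f}, illustrative 95% half-width = {eps:.1f}")
```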
We address computational efficiency in solving A-optimal Bayesian design of experiments problems in which the observational model is based on partial differential equations and, consequently, is computationally expensive to evaluate. A-optimality is a widely used and easy-to-interpret criterion for Bayesian experimental design. It seeks the optimal design by minimizing the expected conditional variance, also known as the expected posterior variance. This work presents a novel likelihood-free method for finding the A-optimal design of experiments without sampling or integrating the Bayesian posterior distribution. In our approach, the expected conditional variance is obtained via the variance of the conditional expectation using the law of total variance, while the orthogonal projection property is exploited to approximate the conditional expectation. Through an asymptotic error estimate, we show that the intractability of the posterior does not affect the performance of our approach. To implement the method, we use an artificial neural network (ANN) to approximate the nonlinear conditional expectation. To handle continuous experimental design parameters, we integrate the training of the ANN into the minimization of the expected conditional variance. Specifically, we propose a non-local approximation of the conditional expectation and apply transfer learning to reduce the number of evaluations of the observation model. Numerical experiments demonstrate that our method significantly reduces the number of observational model evaluations compared with common importance-sampling-based approaches, a reduction that is crucial given the computational expense of these models.
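The identity behind the method is the law of total variance, $E[\mathrm{Var}(\theta \mid y)] = \mathrm{Var}(\theta) - \mathrm{Var}(E[\theta \mid y])$, so estimating the conditional expectation suffices to evaluate the A-optimality criterion. The sketch below illustrates this on a hypothetical scalar linear-Gaussian model, with a simple polynomial regression standing in for the paper's ANN; the model and basis are assumptions of ours.

```python
# Illustration of estimating the A-optimality criterion (expected posterior
# variance) via the law of total variance:
#   E[Var(theta | y)] = Var(theta) - Var(E[theta | y]).
# The scalar linear-Gaussian model and the polynomial regressor standing in
# for the paper's ANN are assumptions of this sketch.
import numpy as np

rng = np.random.default_rng(2)

def expected_posterior_variance(design, n=20000, degree=3):
    theta = rng.normal(0.0, 1.0, size=n)               # prior samples
    y = design * theta + rng.normal(0.0, 0.5, size=n)  # observation model
    # Orthogonal projection: E[theta | y] is the L2-best predictor of theta
    # from y; approximate it by least squares on a polynomial basis.
    A = np.vander(y, degree + 1)
    coef, *_ = np.linalg.lstsq(A, theta, rcond=None)
    cond_mean = A @ coef
    return theta.var() - cond_mean.var()               # law of total variance

# Scan a 1D design parameter; larger |design| is more informative here, so
# the expected posterior variance should decrease.
for d in (0.1, 0.5, 1.0, 2.0):
    print(d, expected_posterior_variance(d))
```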
Univariate and multivariate normal probability distributions are widely used when modeling decisions under uncertainty. Computing the performance of such models requires integrating these distributions over specific domains, which can vary widely across models. Apart from some special cases, no general analytical expressions, standard numerical methods, or software exist for these integrals. Here we present mathematical results and open-source software that provide (i) the probability in any domain of a normal in any dimension with any parameters, (ii) the probability density, cumulative distribution, and inverse cumulative distribution of any function of a normal vector, (iii) the classification errors among any number of normal distributions, the Bayes-optimal discriminability index, and its relation to the operating characteristic, (iv) dimension reduction and visualizations for such problems, and (v) tests for how reliably these methods may be used on given data. We demonstrate these tools with vision research applications: detecting occluding objects in natural scenes and detecting camouflage.
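As a minimal numerical example of item (i), the sketch below computes the probability that a bivariate normal falls in a rectangular domain via scipy's CDF and cross-checks it with a Monte Carlo indicator average, which extends to arbitrary domains; the parameters are arbitrary choices of ours, and the code does not use the paper's toolbox.

```python
# Minimal example of the probability of a normal vector in a domain. A
# rectangular domain is a tractable special case (scipy's CDF applies);
# Monte Carlo handles arbitrary domains. Parameters are arbitrary; this
# does not use the paper's software.
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])
mvn = multivariate_normal(mean=mu, cov=Sigma)

# P(X1 <= 1, X2 <= 0) exactly via the CDF.
print("cdf:", mvn.cdf(np.array([1.0, 0.0])))

# The same probability via Monte Carlo, which generalizes to any domain
# (replace the indicator with membership in the domain of interest).
rng = np.random.default_rng(3)
X = rng.multivariate_normal(mu, Sigma, size=10**6)
print("mc :", np.mean((X[:, 0] <= 1.0) & (X[:, 1] <= 0.0)))
```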
Local search is an effective method for solving large-scale combinatorial optimization problems, and it has made remarkable progress in recent years through several subtle mechanisms. In this paper, we identify two ways to improve local search algorithms for Pseudo-Boolean Optimization (PBO). First, some of these mechanisms, such as unit propagation, have previously been used only for MaxSAT but can be generalized to PBO as well. Second, existing local search algorithms are guided mainly by a heuristic on variables, the so-called score; we attempt to gain more insight from the clauses, since a clause acts as a middleman building a bridge between the variables and the given formula. Accordingly, we first extend the unit-propagation-based decimation algorithm to the PBO problem, giving a generalized definition of a unit clause for PBO, and apply it within the existing solver LS-PBO to construct an initial assignment; we then introduce a new heuristic on clauses, dubbed care, which assigns higher priority to clauses that have been satisfied less often in the current iterations. Experiments on benchmarks from the most recent PB Competition, as well as on three real-world application benchmarks covering minimum-width confidence band, wireless sensor network optimization, and seating arrangement problems, show that our algorithm DeciLS-PBO performs promisingly compared with state-of-the-art algorithms.
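A minimal sketch of the clause-level bookkeeping behind a care-style heuristic, on a toy pseudo-Boolean formula: constraints that have been satisfied less often so far get repaired first. The representation and the exact priority rule are simplifications of ours, not the DeciLS-PBO implementation.

```python
# Toy sketch of a clause-level "care"-style heuristic for PBO local search.
# The constraint representation and priority rule are our simplifications,
# not the DeciLS-PBO implementation.
import random

random.seed(0)

# A PB constraint: sum(coeff * x[var]) >= bound over Boolean variables.
constraints = [
    ([(2, 0), (1, 1)], 2),           # 2*x0 + x1 >= 2
    ([(1, 1), (1, 2)], 1),           # x1 + x2 >= 1
    ([(3, 0), (1, 2)], 3),           # 3*x0 + x2 >= 3
]

def satisfied(c, x):
    terms, bound = c
    return sum(a * x[v] for a, v in terms) >= bound

x = [0, 0, 0]
care = [0] * len(constraints)        # times each constraint was satisfied

for step in range(100):
    for i, c in enumerate(constraints):
        care[i] += satisfied(c, x)
    unsat = [i for i, c in enumerate(constraints) if not satisfied(c, x)]
    if not unsat:
        break
    # "care": repair the unsatisfied constraint satisfied least often so far.
    i = min(unsat, key=lambda j: care[j])
    terms, _ = constraints[i]
    _, v = random.choice(terms)
    x[v] = 1                         # coefficients are positive, so 1 helps

print("assignment:", x, "all satisfied:", not unsat)
```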
In survey sampling, survey data do not necessarily represent the target population, and the samples are often biased. However, information on the survey weights aids in eliminating this selection bias. The Horvitz-Thompson estimator is a well-known unbiased, consistent, and asymptotically normal estimator; however, it is not efficient. This study therefore derives the semiparametric efficiency bound for various target parameters by considering the survey weight as a random variable, and proposes a semiparametric optimal estimator based on certain working models for the survey weights. The proposed estimator is consistent, asymptotically normal, and efficient within the class of regular and asymptotically linear estimators. A limited simulation study is conducted to investigate the finite-sample performance of the proposed method, which is then applied to the 1999 Canadian Workplace and Employee Survey data.
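For reference, the classical Horvitz-Thompson estimator of a population total is $\hat{Y}_{HT} = \sum_{i \in S} y_i/\pi_i$, where $\pi_i$ is the inclusion probability of unit $i$. The sketch below checks its unbiasedness on a small synthetic population under Poisson sampling; the population and design are assumptions of ours, and the paper's efficient semiparametric estimator is not implemented here.

```python
# Horvitz-Thompson estimate of a population total under Poisson sampling:
# each unit i enters the sample independently with known probability pi_i.
# The synthetic population is an assumption of this sketch; the paper's
# semiparametric efficient estimator is not implemented here.
import numpy as np

rng = np.random.default_rng(4)

N = 1000
y = rng.gamma(shape=2.0, scale=10.0, size=N)   # study variable
pi = np.clip(y / y.max(), 0.02, 0.5)           # size-biased inclusion probs

def ht_total(y, pi):
    s = rng.random(N) < pi                     # Poisson sampling indicators
    return np.sum(y[s] / pi[s])                # inverse-probability weighting

estimates = [ht_total(y, pi) for _ in range(2000)]
print("true total:", y.sum())
print("mean of HT estimates:", np.mean(estimates))  # close to the true total
```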
It is well known that the Euler method for approximating the solutions of a random ordinary differential equation $\mathrm{d}X_t/\mathrm{d}t = f(t, X_t, Y_t)$ driven by a stochastic process $\{Y_t\}_t$ with $\theta$-H\"older sample paths is estimated to be of strong order $\theta$ with respect to the time step, provided $f=f(t, x, y)$ is sufficiently regular and with suitable bounds. Here, it is proved that, in many typical cases, further conditions on the noise can be exploited so that the strong convergence is actually of order 1, regardless of the H\"older regularity of the sample paths. This applies for instance to additive or multiplicative It\^o process noises (such as Wiener, Ornstein-Uhlenbeck, and geometric Brownian motion processes); to point-process noises (such as Poisson point processes and Hawkes self-exciting processes, which even have jump-type discontinuities); and to transport-type processes with sample paths of bounded variation. The result is based on a novel approach, estimating the global error as an iterated integral over both large and small mesh scales, and switching the order of integration to move the critical regularity to the large scale. The work is complemented with numerical simulations illustrating the strong order 1 convergence in those cases, and with an example of fractional Brownian motion noise with Hurst parameter $0 < H < 1/2$, for which the order of convergence is $H + 1/2$, hence lower than the order 1 attained in the examples above, but still higher than the order $H$ expected from previous works.
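The sketch below illustrates the claimed order-1 strong convergence on an assumed scalar RODE $\mathrm{d}X_t/\mathrm{d}t = -X_t + Y_t$ with Ornstein-Uhlenbeck noise: Euler solutions on coarse grids are compared to a fine-grid reference driven by the same noise path. The specific equation and parameters are our choices, not the paper's examples.

```python
# Strong-order check for the Euler method on the RODE dX/dt = -X + Y_t with
# Ornstein-Uhlenbeck noise Y. Coarse Euler solutions are compared to a
# fine-grid reference driven by the same noise path; the equation and
# parameters are choices made for this sketch.
import numpy as np

rng = np.random.default_rng(5)
T, n_fine, n_paths = 1.0, 2**12, 200
levels = [2**k for k in (4, 5, 6, 7, 8)]

errors = {n: [] for n in levels}
for _ in range(n_paths):
    # One OU path on the fine grid: dY = -Y dt + dW.
    dt = T / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), size=n_fine)
    Y = np.zeros(n_fine + 1)
    for i in range(n_fine):
        Y[i + 1] = Y[i] - Y[i] * dt + dW[i]

    def euler(n):
        h, x = T / n, 0.0
        stride = n_fine // n
        for i in range(n):
            x += h * (-x + Y[i * stride])   # f(t, x, y) = -x + y
        return x

    x_ref = euler(n_fine)
    for n in levels:
        errors[n].append(abs(euler(n) - x_ref))

for n in levels:
    print(f"h = {T/n:.4f}  strong error ~ {np.mean(errors[n]):.2e}")
# Halving h should roughly halve the error (order 1), despite Y having only
# Holder-1/2 sample paths.
```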
We consider the problem of estimating expectations with respect to a target distribution with an unknown normalizing constant, and where even the unnormalized target needs to be approximated at finite resolution. Under such an assumption, this work builds upon a recently introduced multi-index Sequential Monte Carlo (SMC) ratio estimator, which provably enjoys the complexity improvements of multi-index Monte Carlo (MIMC) and the efficiency of SMC for inference. The present work leverages a randomization strategy to remove bias entirely, which simplifies estimation substantially, particularly in the MIMC context, where the choice of index set is otherwise important. Under reasonable assumptions, the proposed method provably achieves the same canonical complexity of $\mathrm{MSE}^{-1}$ as the original method (where MSE denotes the mean squared error), but without discretization bias. It is illustrated on examples of Bayesian inverse problems and spatial statistics.
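The randomization idea can be seen in a stripped-down single-index setting: draw a random level $L$ with probabilities $p_\ell$ and return the coupled level-$L$ increment divided by $p_L$, which is unbiased for the resolution limit by telescoping. The sketch below applies this to an assumed toy quantity (an Euler-discretized geometric Brownian motion); the SMC and multi-index machinery of the paper is not reproduced.

```python
# Single-term randomized (debiased) estimator in a stripped-down,
# single-index form: draw a level L ~ p, return the coupled increment at
# level L divided by p_L. The toy model (Euler-discretized geometric
# Brownian motion with E[X_T] = e) and the geometric level distribution are
# assumptions of this sketch; truncation at L_MAX leaves only a negligible
# residual bias here.
import numpy as np

rng = np.random.default_rng(6)
T, L_MAX = 1.0, 12
p = 0.75 ** np.arange(L_MAX + 1)
p /= p.sum()                               # level distribution p_l

def coupled_increment(l):
    """X_T at level l minus X_T at level l-1, driven by the same path."""
    n = 2 ** (l + 2)
    h = T / n
    dW = rng.normal(0.0, np.sqrt(h), size=n)
    xf = 1.0
    for w in dW:
        xf *= 1.0 + h + w                  # Euler for dX = X dt + X dW
    if l == 0:
        return xf
    xc = 1.0
    for w in dW.reshape(-1, 2).sum(axis=1):
        xc *= 1.0 + 2 * h + w              # coarse grid, same Brownian path
    return xf - xc

def single_term_estimate():
    l = rng.choice(L_MAX + 1, p=p)
    return coupled_increment(l) / p[l]

Z = [single_term_estimate() for _ in range(20000)]
print("debiased estimate:", np.mean(Z), "(true value e ~ 2.71828)")
```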
We present a novel numerical method for solving the anisotropic diffusion equation in toroidally confined magnetic fields which is efficient, accurate and provably stable. The continuous problem is written in terms of a derivative operator for the perpendicular transport and a linear operator, obtained through field line tracing, for the parallel transport. We derive energy estimates of the solution of the continuous initial boundary value problem. A discrete formulation is presented using operator splitting in time with the summation-by-parts finite difference approximation of spatial derivatives for the perpendicular diffusion operator. Weak penalty procedures are derived for implementing both the boundary conditions and the parallel diffusion operator obtained by field line tracing. We prove that the fully-discrete approximation is unconditionally stable and asymptotic preserving. Discrete energy estimates are shown to match the continuous energy estimate given the correct choice of penalty parameters. Convergence tests are shown for the perpendicular operator by itself, and the ``NIMROD benchmark'' problem is used as a manufactured solution to show that the full scheme converges even in the case where the perpendicular diffusion is zero. Finally, we present a magnetic field with chaotic regions and islands and show that the solution contours of the anisotropic diffusion equation reproduce key features of the field.
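The discrete backbone of such schemes is the summation-by-parts (SBP) property: a derivative matrix $D = H^{-1}Q$ with $H$ symmetric positive definite and $Q + Q^T = \mathrm{diag}(-1, 0, \dots, 0, 1)$, so that the discrete integration-by-parts identity $u^T H D v + (Du)^T H v = u_N v_N - u_0 v_0$ holds exactly and discrete energy estimates mimic the continuous ones. The sketch below verifies this identity for the standard second-order SBP operator; it is a generic illustration, not the paper's full toroidal scheme.

```python
# Verify the summation-by-parts property for the standard second-order SBP
# first-derivative operator D = H^{-1} Q: discrete integration by parts,
#   u^T H (D v) + (D u)^T H v = u_N v_N - u_0 v_0,
# holds exactly, which is what makes provable energy stability possible.
# Generic illustration, not the paper's full scheme.
import numpy as np

n, h = 50, 1.0 / 49

# Diagonal norm (quadrature) matrix H and almost-skew-symmetric Q.
Hd = np.full(n, h); Hd[0] = Hd[-1] = h / 2
H = np.diag(Hd)
Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5
D = np.linalg.solve(H, Q)                  # D = H^{-1} Q

rng = np.random.default_rng(7)
u, v = rng.normal(size=n), rng.normal(size=n)

lhs = u @ H @ (D @ v) + (D @ u) @ H @ v
rhs = u[-1] * v[-1] - u[0] * v[0]
print(abs(lhs - rhs))                      # ~ machine precision
```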
In this paper we study a non-local Cahn-Hilliard equation with a singular single-well potential and degenerate mobility. This arises as a particular case of a more general model derived for a binary, saturated, closed and incompressible mixture, composed of a tumor phase and a healthy phase, evolving in a bounded domain. The general system couples a Darcy-type evolution for the average velocity field with a convective reaction-diffusion evolution for the nutrient concentration and a non-local convective Cahn-Hilliard equation for the tumor phase. The main mathematical difficulty is the proof of the separation property for the tumor phase in the Cahn-Hilliard equation: to our knowledge, this problem is open in the literature. For this reason, in the present contribution we restrict the analytical study to the Cahn-Hilliard equation alone. For the non-local Cahn-Hilliard equation with singular single-well potential and degenerate mobility, we study the existence and uniqueness of weak solutions in spatial dimensions $d\leq 3$. After showing existence, we prove the strict separation property in three spatial dimensions, which implies the same property in lower dimensions and opens the way to the proof of uniqueness of solutions. Finally, we propose a well-posed and gradient-stable continuous finite element approximation of the model for $d\leq 3$, which preserves the physical properties of the continuous solution and is computationally efficient, and we show simulation results in two spatial dimensions that demonstrate the consistency of the proposed scheme and describe the phase-ordering dynamics associated with the system.
PyVBMC is a Python implementation of the Variational Bayesian Monte Carlo (VBMC) algorithm for posterior and model inference for black-box computational models (Acerbi, 2018, 2020). VBMC is an approximate inference method designed for efficient parameter estimation and model assessment when model evaluations are mildly-to-very expensive (e.g., a second or more) and/or noisy. Specifically, VBMC computes:

- a flexible (non-Gaussian) approximate posterior distribution of the model parameters, from which statistics and posterior samples can be easily extracted;
- an approximation of the model evidence or marginal likelihood, a metric used for Bayesian model selection.

PyVBMC can be applied to any computational or statistical model with up to roughly 10-15 continuous parameters, the only requirement being that the user can provide a Python function that computes the target log likelihood of the model, or an approximation thereof (e.g., an estimate of the likelihood obtained via simulation or Monte Carlo methods). PyVBMC is particularly effective when the model takes more than about a second per evaluation, with dramatic speed-ups of 1-2 orders of magnitude over traditional approximate inference methods. Extensive benchmarks on both artificial test problems and a large number of real models from the computational sciences, particularly computational and cognitive neuroscience, show that VBMC generally, and often vastly, outperforms alternative methods for sample-efficient Bayesian inference, and is applicable to both exact and simulator-based models (Acerbi, 2018, 2019, 2020). PyVBMC brings this state-of-the-art inference algorithm to Python, along with an easy-to-use Pythonic interface for running the algorithm and manipulating and visualizing its results.
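A minimal usage sketch, following the interface described in the PyVBMC documentation; the toy Gaussian target, the bound values, and the starting point are our choices.

```python
# Minimal PyVBMC usage sketch on a toy 2D Gaussian target; the target and
# bounds are our choices, following the interface in the PyVBMC docs.
import numpy as np
from pyvbmc import VBMC

def log_joint(theta):
    # Target log density (log likelihood + log prior); a toy Gaussian here.
    return -0.5 * np.sum((np.atleast_2d(theta) / 2.0) ** 2)

D = 2
LB, UB = np.full((1, D), -10.0), np.full((1, D), 10.0)   # hard bounds
PLB, PUB = np.full((1, D), -3.0), np.full((1, D), 3.0)   # plausible bounds
x0 = np.zeros((1, D))                                    # starting point

vbmc = VBMC(log_joint, x0, LB, UB, PLB, PUB)
vp, results = vbmc.optimize()        # variational posterior + diagnostics

samples, _ = vp.sample(1000)         # draw posterior samples
print("ELBO (log evidence estimate):", results["elbo"])
```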