In this work we study systems consisting of a group of moving particles. In such systems, some important parameters are often unknown and must be estimated from observed data. Such parameter estimation problems can often be solved within a Bayesian inference framework. However, in many practical problems only aggregate-level data is available, and as a result the likelihood function is unavailable, which poses a challenge for Bayesian methods. In particular, we consider the situation where the distributions of the particles are observed. We propose a Wasserstein distance based sequential Monte Carlo sampler to solve the problem: the Wasserstein distance is used to measure the similarity between the observed and the simulated particle distributions, and the sequential Monte Carlo sampler is used to deal with the sequentially available observations. Two real-world examples are provided to demonstrate the performance of the proposed method.
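As an illustration of the core idea, the minimal sketch below (in Python) scores candidate parameters by the Wasserstein distance between observed and simulated particle distributions inside a simple ABC-SMC-style loop; the one-dimensional simulator `simulate_particles`, the exponential distance-to-weight kernel, and the tolerance schedule are all hypothetical choices made for illustration, not the paper's construction.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def simulate_particles(theta, n=200, rng=None):
    # Hypothetical forward model: particle positions drift with unknown rate theta.
    rng = np.random.default_rng() if rng is None else rng
    return theta + rng.normal(0.0, 1.0, size=n)

def abc_smc_step(observed, thetas, eps, rng):
    """One likelihood-free weighting step: score each candidate parameter by the
    Wasserstein distance between observed and simulated particle distributions."""
    dists = np.array([wasserstein_distance(observed, simulate_particles(t, rng=rng))
                      for t in thetas])
    weights = np.exp(-dists / eps)           # distance -> pseudo-likelihood (illustrative kernel)
    weights /= weights.sum()
    idx = rng.choice(len(thetas), size=len(thetas), p=weights)    # resample
    return thetas[idx] + rng.normal(0.0, 0.05, size=len(thetas))  # jitter / move step

rng = np.random.default_rng(0)
observed = simulate_particles(1.5, rng=rng)      # pretend the true drift is 1.5
thetas = rng.uniform(-3, 3, size=500)            # prior draws
for eps in [1.0, 0.5, 0.2]:                      # annealed tolerance over SMC stages
    thetas = abc_smc_step(observed, thetas, eps, rng)
print("posterior mean estimate:", thetas.mean())
```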
The random forest (RF) algorithm has become a very popular prediction method for its great flexibility and promising accuracy. In RF, it is conventional to put equal weights on all the base learners (trees) when aggregating their predictions. However, the predictive performance of different trees within the forest can vary considerably due to the randomization of the embedded bootstrap sampling and feature selection. In this paper, we focus on RF for regression and propose two optimal weighting algorithms, namely the 1 Step Optimal Weighted RF (1step-WRF$_\mathrm{opt}$) and the 2 Steps Optimal Weighted RF (2steps-WRF$_\mathrm{opt}$), which combine the base learners through weights determined by weight choice criteria. Under some regularity conditions, we show that these algorithms are asymptotically optimal in the sense that the resulting squared loss and risk are asymptotically identical to those of the infeasible but best possible model averaging estimator. Numerical studies conducted on real-world data sets indicate that these algorithms outperform the equal-weight forest and two other weighted RFs proposed in the existing literature in most cases.
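The general weighted-forest idea can be sketched with scikit-learn as follows; the non-negative least-squares fit of the weights on held-out predictions is only an illustrative weight choice criterion, not the 1step/2steps criteria proposed in the paper.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=5.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Per-tree predictions on a validation set (in practice a separate test set
# would be used to evaluate the final weighted forest).
P = np.column_stack([t.predict(X_val) for t in rf.estimators_])

# Non-negative weights minimizing validation squared loss, normalized to sum to 1.
w, _ = nnls(P, y_val)
w = w / w.sum()

def weighted_rf_predict(X_new):
    preds = np.column_stack([t.predict(X_new) for t in rf.estimators_])
    return preds @ w

print("equal-weight MSE:", np.mean((rf.predict(X_val) - y_val) ** 2))
print("weighted     MSE:", np.mean((weighted_rf_predict(X_val) - y_val) ** 2))
```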
Denoising diffusion models are a class of generative models which have recently achieved state-of-the-art results across many domains. Noise is gradually added to the data using a diffusion process, which transforms the data distribution into a Gaussian. Samples from the generative model are then obtained by simulating an approximation of the time reversal of this diffusion, initialized with Gaussian samples. Recent research has explored adapting diffusion models for sampling and inference tasks. In this paper, we leverage known connections to stochastic control akin to the F\"ollmer drift to extend established neural network approximation results for the F\"ollmer drift to denoising diffusion models and samplers.
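The forward noising and time-reversed sampling described above can be illustrated on a toy one-dimensional Gaussian example, where the score of the noised marginal is available in closed form (so no neural network approximation is needed); the sketch below assumes a variance-preserving Ornstein--Uhlenbeck forward SDE, which is one common choice rather than the paper's specific setup.

```python
import numpy as np

rng = np.random.default_rng(0)
m0, s0 = 2.0, 0.5          # toy 1-D "data" distribution N(m0, s0^2)
T, n_steps, n_samples = 5.0, 500, 10_000
dt = T / n_steps

def marginal(t):
    # Under the forward SDE dx = -x/2 dt + dW, the noised marginal stays Gaussian.
    m = m0 * np.exp(-t / 2)
    v = s0**2 * np.exp(-t) + 1 - np.exp(-t)
    return m, v

def score(x, t):
    m, v = marginal(t)
    return -(x - m) / v    # closed-form score; a neural network estimates this in practice

# Reverse-time simulation, initialized from the (approximately) Gaussian prior at time T.
x = rng.normal(0.0, 1.0, size=n_samples)
for i in range(n_steps, 0, -1):
    t = i * dt
    drift = -0.5 * x - score(x, t)              # drift of the time-reversed SDE
    x = x - drift * dt + np.sqrt(dt) * rng.normal(size=n_samples)

print("recovered mean/std:", x.mean(), x.std())  # should be close to (2.0, 0.5)
```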
The Fisher information matrix is a quantity of fundamental importance for information geometry and asymptotic statistics. In practice, it is widely used to quickly estimate the expected information available in a data set and guide experimental design choices. In many modern applications, it is intractable to analytically compute the Fisher information and Monte Carlo methods are used instead. The standard Monte Carlo method produces estimates of the Fisher information that can be biased when the Monte Carlo noise is non-negligible. Most problematic is noise in the derivatives, as this leads to an overestimation of the available constraining power, given by the inverse Fisher information. In this work we find another simple estimate that is oppositely biased and produces an underestimate of the constraining power. This estimator can either be used to give approximate bounds on the parameter constraints or can be combined with the standard estimator to give improved, approximately unbiased estimates. Both the alternative and the combined estimators are asymptotically unbiased, so they can also be used as a convergence check of the standard approach. We discuss potential limitations of these estimators and provide methods to assess their reliability. These methods accelerate the convergence of Fisher forecasts, as unbiased estimates can be achieved with fewer Monte Carlo samples, and so can be used to reduce the simulated data set size by several orders of magnitude.
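For concreteness, the standard Monte Carlo estimator, the average outer product of the score over simulated data, can be sketched for a toy Gaussian model with a known analytic Fisher information; the alternative and combined estimators proposed in the work are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0
n_mc = 5000

def score(x, mu, sigma):
    # Score vector d/d(mu, sigma) of log N(x | mu, sigma^2) for a single observation.
    d_mu = (x - mu) / sigma**2
    d_sigma = (x - mu) ** 2 / sigma**3 - 1.0 / sigma
    return np.array([d_mu, d_sigma])

# Standard Monte Carlo estimate: average outer product of the score over simulated data.
x = rng.normal(mu, sigma, size=n_mc)
S = np.stack([score(xi, mu, sigma) for xi in x])      # (n_mc, 2) score samples
F_hat = S.T @ S / n_mc

F_exact = np.diag([1 / sigma**2, 2 / sigma**2])       # analytic Fisher information
print("MC estimate:\n", F_hat)
print("exact:\n", F_exact)
```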
The problem of computing the conditional expectation $E[f(Y)|X]$ with least-squares Monte Carlo is of general importance and has been widely studied. To solve this problem, it is usually assumed that one has as many samples of Y as of X. However, when samples are generated by computer simulation and the conditional law of Y given X can be simulated, it may be relevant to sample $K \in \mathbb{N}$ values of Y for each sample of X. The present work determines the optimal value of K for a given computational budget, as well as a way to estimate it. The main takeaway is that the computational gain is all the more important when the computational cost of sampling Y given X is small relative to the computational cost of sampling X. Numerical illustrations of the optimal choice of K and of the computational gain are given on different examples, including one inspired by risk management.
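The sampling scheme can be sketched on a toy model in which Y given X is cheap to simulate: draw K inner samples of Y for each outer sample of X, then fit a least-squares regression of the inner averages on a basis of X. The model, the function f, and the polynomial basis below are illustrative assumptions, and the optimal choice of K is not derived here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 2000, 10                 # outer samples of X, inner samples of Y per X

# Toy model: X ~ N(0,1), Y | X ~ N(X^2, 1), f(y) = y, so E[f(Y)|X] = X^2.
X = rng.normal(size=N)
Y = rng.normal(loc=X[:, None] ** 2, scale=1.0, size=(N, K))

Z = Y.mean(axis=1)                              # inner average of f(Y) for each X
basis = np.column_stack([np.ones(N), X, X**2])  # least-squares regression basis in X
coef, *_ = np.linalg.lstsq(basis, Z, rcond=None)

x_grid = np.linspace(-2, 2, 5)
approx = np.column_stack([np.ones(5), x_grid, x_grid**2]) @ coef
print("estimated E[f(Y)|X=x]:", np.round(approx, 3))
print("true      E[f(Y)|X=x]:", x_grid**2)
```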
We derive the exact asymptotic distribution of the maximum likelihood estimator $(\hat{\alpha}_n, \hat{\theta}_n)$ of $(\alpha, \theta)$ for the Ewens--Pitman partition in the regime $0<\alpha<1$ and $\theta>-\alpha$: we show that $\hat{\alpha}_n$ is $n^{\alpha/2}$-consistent and converges to a variance mixture of normal distributions, i.e., $\hat{\alpha}_n$ is asymptotically mixed normal, while $\hat{\theta}_n$ is not consistent and converges to a transformation of the generalized Mittag-Leffler distribution. As an application, we derive a confidence interval for $\alpha$ and propose a hypothesis test of sparsity for network data. In our proof, we define an empirical measure induced by the Ewens--Pitman partition and prove a suitable convergence of this measure against certain test functions, from which we derive the asymptotic behavior of the log-likelihood.
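For reference, the maximum likelihood estimator can be computed numerically from the observed block sizes by maximizing the exchangeable partition log-likelihood of the Ewens--Pitman (Pitman--Yor) model, as in the sketch below; the block sizes shown are hypothetical, and the asymptotic results above are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, block_sizes):
    """Negative log EPPF of the Ewens--Pitman partition for 0 < alpha < 1, theta > -alpha."""
    alpha, theta = params
    n, k = sum(block_sizes), len(block_sizes)
    if not (0 < alpha < 1 and theta > -alpha):
        return np.inf
    ll = np.sum(np.log(theta + alpha * np.arange(1, k)))      # prod_{i=1}^{k-1} (theta + i*alpha)
    ll -= np.sum(np.log(theta + np.arange(1, n)))             # rising factorial (theta + 1)_{n-1}
    for nj in block_sizes:
        ll += np.sum(np.log(np.arange(1, nj) - alpha))        # rising factorial (1 - alpha)_{nj-1}
    return -ll

# Hypothetical observed partition: block sizes of a sample of size n = 60.
blocks = [20, 12, 9, 6, 4, 3, 2, 1, 1, 1, 1]
res = minimize(neg_loglik, x0=[0.5, 1.0], args=(blocks,), method="Nelder-Mead")
alpha_hat, theta_hat = res.x
print("alpha_hat, theta_hat:", alpha_hat, theta_hat)
```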
Grid-free Monte Carlo methods based on the walk on spheres (WoS) algorithm solve fundamental partial differential equations (PDEs) like the Poisson equation without discretizing the problem domain or approximating functions in a finite basis. Such methods hence avoid aliasing in the solution, and evade the many challenges of mesh generation. Yet for problems with complex geometry, practical grid-free methods have been largely limited to basic Dirichlet boundary conditions. We introduce the walk on stars (WoSt) algorithm, which solves linear elliptic PDEs with arbitrary mixed Neumann and Dirichlet boundary conditions. The key insight is that one can efficiently simulate reflecting Brownian motion (which models Neumann conditions) by replacing the balls used by WoS with star-shaped domains. We identify such domains via the closest point on the visibility silhouette, by simply augmenting a standard bounding volume hierarchy with normal information. Overall, WoSt is an easy modification of WoS, and retains the many attractive features of grid-free Monte Carlo methods such as progressive and view-dependent evaluation, trivial parallelization, and sublinear scaling to increasing geometric detail.
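A minimal sketch of the baseline walk on spheres estimator that WoSt generalizes is given below: it solves the Laplace equation with Dirichlet data on the unit disk, where the distance to the boundary is analytic. The star-shaped regions and Neumann handling introduced by WoSt are not shown, and the boundary data is an arbitrary harmonic example chosen so the answer can be checked.

```python
import numpy as np

rng = np.random.default_rng(0)

def boundary_g(p):
    # Dirichlet data on the unit circle; its harmonic extension is u(x, y) = x^2 - y^2.
    return p[0] ** 2 - p[1] ** 2

def walk_on_spheres(x0, eps=1e-3, max_steps=1000):
    """One WoS sample for the Laplace equation on the unit disk with Dirichlet data g."""
    p = np.array(x0, dtype=float)
    for _ in range(max_steps):
        d = 1.0 - np.linalg.norm(p)          # distance to the boundary of the unit disk
        if d < eps:
            break
        phi = rng.uniform(0, 2 * np.pi)      # jump to a uniform point on the largest
        p = p + d * np.array([np.cos(phi), np.sin(phi)])   # empty sphere around p
    return boundary_g(p / np.linalg.norm(p)) # project to the boundary and read off g

x0 = (0.3, 0.4)
est = np.mean([walk_on_spheres(x0) for _ in range(20_000)])
print("WoS estimate:", est, " exact:", x0[0] ** 2 - x0[1] ** 2)
```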
In the field of statistical disclosure control, the tradeoff between data confidentiality and data utility is measured by comparing disclosure risk and information loss metrics. Distance-based metrics such as the mean absolute error (MAE), mean squared error (MSE), mean variation (IL1), and its scaled alternative (IL1s) are popular information loss measures for numerical microdata. However, the fact that these measures are unbounded makes it difficult to compare them against disclosure risk measures, which are usually bounded between 0 and 1. In this note, we propose rank-based versions of the MAE and MSE metrics that are bounded in the same range as the disclosure risk metrics. We empirically compare the proposed bounded metrics against the distance-based metrics in a series of experiments where the metrics are evaluated over multiple masked datasets, generated by applying increasing amounts of perturbation (e.g., by adding increasing amounts of noise). Our results show that the proposed bounded metrics produce rankings similar to those of the traditional ones (as measured by Spearman correlation), suggesting that they are a viable addition to the toolbox of distance-based information loss metrics currently in use in the SDC literature.
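The abstract does not spell out the rank-based metrics, so the sketch below shows only one plausible construction, not necessarily the paper's definition: the mean absolute displacement between the ranks of the original and masked values, normalized so that the result lies in [0, 1] and grows with the amount of perturbation.

```python
import numpy as np
from scipy.stats import rankdata

def rank_mae(original, masked):
    """Illustrative bounded information-loss measure (not necessarily the paper's definition):
    mean absolute rank displacement, normalized to lie in [0, 1]."""
    n = len(original)
    r_orig = rankdata(original)
    r_mask = rankdata(masked)
    max_displacement = n - 1 if n > 1 else 1   # largest possible |rank difference|
    return np.mean(np.abs(r_orig - r_mask)) / max_displacement

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
for noise in [0.1, 0.5, 1.0, 2.0]:             # increasing perturbation of the microdata
    masked = x + rng.normal(scale=noise, size=1000)
    print(f"noise={noise}: rank-MAE = {rank_mae(x, masked):.3f}")
```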
Deep learning models, including modern systems like large language models, are well known to offer unreliable estimates of the uncertainty of their decisions. To improve the quality of a model's confidence levels, also known as its calibration, common approaches add either data-dependent or data-independent regularization terms to the training loss. Data-dependent regularizers have recently been introduced in the context of conventional frequentist learning to penalize deviations between confidence and accuracy. In contrast, data-independent regularizers are at the core of Bayesian learning, enforcing adherence of the variational distribution in the model parameter space to a prior density. The former approach is unable to quantify epistemic uncertainty, while the latter is severely affected by model misspecification. In light of the limitations of both methods, this paper proposes an integrated framework, referred to as calibration-aware Bayesian neural networks (CA-BNNs), that applies both regularizers while optimizing over a variational distribution as in Bayesian learning. Numerical results validate the advantages of the proposed approach in terms of expected calibration error (ECE) and reliability diagrams.
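Schematically, the two kinds of regularizers can be combined as below: a KL divergence between a diagonal Gaussian variational posterior and its prior serves as the data-independent term, and a crude confidence-accuracy gap serves as a data-dependent calibration surrogate. This is an illustrative composition only, not the paper's exact CA-BNN objective.

```python
import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p=0.0, sigma_p=1.0):
    """Data-independent regularizer: KL(q || p) between diagonal Gaussians
    (variational posterior vs. prior) over the model parameters."""
    return np.sum(np.log(sigma_p / sigma_q)
                  + (sigma_q**2 + (mu_q - mu_p) ** 2) / (2 * sigma_p**2) - 0.5)

def calibration_penalty(probs, labels):
    """Data-dependent regularizer: squared gap between mean confidence and accuracy
    (a crude surrogate for ECE-style penalties)."""
    confidence = probs.max(axis=1)
    accuracy = (probs.argmax(axis=1) == labels).astype(float)
    return (confidence.mean() - accuracy.mean()) ** 2

def combined_objective(nll, probs, labels, mu_q, sigma_q, lam_kl=1.0, lam_cal=1.0):
    # Total loss = expected negative log-likelihood + both regularizers.
    return (nll + lam_kl * kl_diag_gaussians(mu_q, sigma_q)
            + lam_cal * calibration_penalty(probs, labels))

# Toy usage with made-up predictive probabilities and a 5-parameter posterior.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=100)
labels = rng.integers(0, 3, size=100)
mu_q, sigma_q = rng.normal(size=5), np.abs(rng.normal(size=5)) + 0.1
print(combined_objective(nll=1.2, probs=probs, labels=labels, mu_q=mu_q, sigma_q=sigma_q))
```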
Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase. However, the out-of-distribution (OOD) generalization problem remains a challenge in many NLP tasks, limiting the real-world deployment of these methods. This paper presents the first attempt at creating a unified benchmark, named \method, for evaluating OOD robustness in NLP models, highlighting the importance of OOD robustness and providing insights into how to measure and improve the robustness of a model. The benchmark includes 13 publicly available datasets for OOD testing, and evaluations are conducted on 8 classic NLP tasks over 21 widely used PLMs, including GPT-3 and GPT-3.5. Our findings confirm the need for improved OOD accuracy in NLP tasks, as significant performance degradation was observed in all settings compared to in-distribution (ID) accuracy.
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
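The effective-number formula from the abstract translates directly into per-class weights; the sketch below applies them to a re-weighted cross-entropy loss, with the normalization convention (weights averaging to one) chosen purely for illustration.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Weights proportional to 1 / E_n, where E_n = (1 - beta^n) / (1 - beta)
    is the effective number of samples for a class with n observations."""
    samples_per_class = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, samples_per_class)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(samples_per_class) / weights.sum()   # normalize to average 1

def class_balanced_cross_entropy(probs, labels, weights):
    # Standard cross-entropy, re-weighted per example by the weight of its class.
    per_example = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return np.mean(weights[labels] * per_example)

# Long-tailed toy setup: class 0 has 5000 samples, class 1 has 50, class 2 has 5.
w = class_balanced_weights([5000, 50, 5], beta=0.999)
print("per-class weights:", np.round(w, 3))   # rare classes receive larger weights
```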