Uncertainty in data is one of the main problems in epidemiology, as the recent COVID-19 pandemic has shown. Efficient methods capable of quantifying uncertainty in the mathematical model are essential for producing realistic scenarios of the spread of infection. In this paper, we introduce a bi-fidelity approach to quantify uncertainty in spatially dependent epidemic models. The approach is based on evaluating a high-fidelity model on a small number of samples properly selected from a large number of evaluations of a low-fidelity model. In particular, we consider the class of multiscale transport models recently introduced in Bertaglia, Boscheri, Dimarco & Pareschi, Math. Biosci. Eng. (2021) and Boscheri, Dimarco & Pareschi, Math. Mod. Meth. Appl. Sci. (2021) as the high-fidelity reference and use simple two-velocity discrete models for the low-fidelity evaluations. Both models share the same diffusive behavior and are solved with ad hoc asymptotic-preserving numerical discretizations. A series of numerical experiments confirms the validity of the approach.
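The generic bi-fidelity idea can be sketched in a few lines: greedily pick the parameter samples whose low-fidelity snapshots are farthest from the span of those already chosen, run the high-fidelity model only there, and reuse the low-fidelity expansion coefficients with the high-fidelity snapshots. The toy linear models below are illustrative, not the paper's kinetic transport solvers.

```python
import numpy as np

def select_hifi_samples(U_low, n_select):
    """Greedy selection: at each step pick the low-fidelity snapshot
    (column) farthest from the span of the snapshots already chosen."""
    U = U_low.astype(float).copy()
    chosen = []
    for _ in range(n_select):
        norms = np.linalg.norm(U, axis=0)
        k = int(np.argmax(norms))
        chosen.append(k)
        v = U[:, k] / (norms[k] + 1e-14)
        U -= np.outer(v, v @ U)  # deflate the selected direction
    return chosen

def bifidelity_estimate(U_low, U_high_sel, sel, u_low_new):
    """Expand a new low-fidelity solution in the selected low-fidelity
    snapshots, then reuse the coefficients with the high-fidelity ones."""
    c, *_ = np.linalg.lstsq(U_low[:, sel], u_low_new, rcond=None)
    return U_high_sel @ c
```

If the high- and low-fidelity solutions are well correlated across the parameter space, a handful of selected high-fidelity runs suffices to approximate the full high-fidelity parametric response.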
Uncertainty in physical parameters can make the solution of forward or inverse light scattering problems in astrophysical, biological, and atmospheric sensing applications cost-prohibitive for real-time use. For example, given a probability density over the parametric space of dimensions, refractive index, and wavelength, the number of evaluations required for the expected scattering increases dramatically. For dielectric and weakly absorbing spherical particles (both homogeneous and layered), we begin with a Fraunhofer approximation of the scattering coefficients consisting of Riccati-Bessel functions and reduce it to simpler nested trigonometric approximations. These provide further computational advantages when parameterized on lines of constant optical path length. This can reduce the cost of evaluations by large factors, $\approx 50$, without a loss of accuracy in the integrals of these scattering coefficients. We analyze the errors of the proposed approximation and present numerical results for a set of forward problems as a demonstration.
Transformers are state-of-the-art in a wide range of NLP tasks and have also been applied to many real-world products. Understanding the reliability and certainty of transformer model predictions is crucial for building trustworthy machine learning applications, e.g., medical diagnosis. Although many recent transformer extensions have been proposed, the uncertainty estimation of transformer models remains under-explored. In this work, we propose a novel way to enable transformers to estimate uncertainty while retaining their original predictive performance. This is achieved by learning a hierarchical stochastic self-attention that attends to values and to a set of learnable centroids. New attention heads are then formed as mixtures of sampled centroids using the Gumbel-Softmax trick. We theoretically show that the error of this self-attention approximation, obtained by sampling from a Gumbel distribution, is upper bounded. We empirically evaluate our model on two text classification tasks with both in-domain (ID) and out-of-domain (OOD) datasets. The experimental results demonstrate that our approach: (1) achieves the best predictive performance and uncertainty trade-off among compared methods; (2) exhibits very competitive (in most cases, improved) predictive performance on ID datasets; (3) is on par with Monte Carlo dropout and ensemble methods in uncertainty estimation on OOD datasets.
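The core mechanism, sampling near-one-hot mixture weights over centroids with the Gumbel-Softmax trick, can be illustrated with a small numpy sketch; the shapes, the temperature value, and the way scores are produced are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Sample relaxed (near one-hot for small tau) categorical weights
    via the Gumbel-Softmax trick: softmax((logits + Gumbel noise)/tau)."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-20) + 1e-20)
    y = (logits + g) / tau
    y -= y.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

def stochastic_attention_head(scores, centroids, tau=0.5):
    """Form a head output as a stochastic mixture of learnable centroids
    instead of a deterministic softmax over values."""
    w = gumbel_softmax(scores, tau)  # (n_queries, n_centroids)
    return w @ centroids             # (n_queries, d)
```

Because the sampled weights remain a proper (relaxed) distribution, gradients flow through the sampling step, which is what makes the trick usable inside end-to-end training.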
Isospectral flows appear in a variety of applications, e.g. the Toda lattice in solid state physics or discrete models for two-dimensional hydrodynamics, with the isospectral property often corresponding to mathematically or physically important conservation laws. Their most prominent feature, the conservation of the eigenvalues of the matrix state variable, should therefore be retained when discretizing these systems. Recently, it was shown how isospectral Runge-Kutta methods can, in the Lie-Poisson case also considered in our work, be obtained through Hamiltonian reduction of symplectic Runge-Kutta methods on the cotangent bundle of a Lie group. We provide the Lagrangian analogue and, in the case of symplectic diagonally implicit Runge-Kutta methods, derive the methods through a discrete Euler-Poincaré reduction. Our derivation relies on a formulation of diagonally implicit isospectral Runge-Kutta methods in terms of the Cayley transform, generalizing earlier work that showed this for the implicit midpoint rule. Our work also generalizes earlier variational Lie group integrators, which, interestingly, reappear when our methods are interpreted as update equations for intermediate time points. From a practical point of view, our results allow for a simple implementation of higher order isospectral methods, and we demonstrate this with numerical experiments in which both the isospectral property and the energy are conserved to high accuracy.
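Why a Cayley-transform formulation preserves the spectrum exactly can be seen in a minimal sketch: the update is a similarity transform $L \mapsto Q L Q^{-1}$, which leaves eigenvalues unchanged for any invertible $Q$. For simplicity the sketch below evaluates $B$ explicitly at $L_n$ rather than implicitly at the midpoint stage, so it only illustrates the isospectral mechanism, not the paper's symplectic/variational methods.

```python
import numpy as np

def isospectral_step(L, B, h):
    """One isospectral step L -> Q L Q^{-1} with the Cayley transform
    Q = (I - (h/2) B(L))^{-1} (I + (h/2) B(L)).  Any similarity
    transform preserves the spectrum exactly, independent of h."""
    I = np.eye(L.shape[0])
    Bm = B(L)
    Q = np.linalg.solve(I - 0.5 * h * Bm, I + 0.5 * h * Bm)
    return Q @ L @ np.linalg.inv(Q)
```

When $B(L)$ is skew-symmetric (as for Toda-type flows), $Q$ is orthogonal, so symmetry of $L$ is preserved as well.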
As is well known, the stability of the 3-step backward differentiation formula (BDF3) on variable grids for a parabolic problem was analyzed in [Calvo and Grigorieff, BIT \textbf{42} (2002) 689--701] under the condition $r_k:=\tau_k/\tau_{k-1}<1.199$, where $r_k$ is the adjacent time-step ratio. In this work, we establish a spectral norm inequality that yields an upper bound for the norm of the inverse matrix. The BDF3 scheme is then unconditionally stable under the new condition $r_k\leq 1.405$. Meanwhile, we show that the upper bound of the ratio $r_k$ for the BDF3 scheme is less than $\sqrt{3}$. In addition, based on the idea of [Wang and Ruuth, J. Comput. Math. \textbf{26} (2008) 838--855; Chen, Yu, and Zhang, arXiv:2108.02910], we design a weighted and shifted BDF3 (WSBDF3) scheme for solving the parabolic problem. We prove that the WSBDF3 scheme is unconditionally stable under the condition $r_k\leq 1.771$, a significant improvement of the maximum time-step ratio. The error estimates are obtained from the stability inequality. Finally, numerical experiments are given to illustrate the theoretical results.
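The objects involved are easy to reproduce numerically: variable-step BDF weights come from differentiating the Lagrange interpolant through the last few time points, and the stability conditions are simple checks on the adjacent step ratios. The sketch below is generic (not the paper's WSBDF3 construction); the ratio threshold is passed in as a parameter.

```python
import numpy as np

def bdf_weights(nodes, x):
    """Weights c_j with sum_j c_j u(nodes[j]) ~ u'(x), obtained by
    differentiating the Lagrange interpolant through the nodes.
    For 4 nodes with x = nodes[-1], this is variable-step BDF3."""
    n = len(nodes)
    w = np.zeros(n)
    for j in range(n):
        others = [nodes[i] for i in range(n) if i != j]
        denom = np.prod([nodes[j] - t for t in others])
        # derivative of prod_i (x - t_i) at a node x: leave-one-out sum
        s = sum(np.prod([x - others[i] for i in range(len(others)) if i != k])
                for k in range(len(others)))
        w[j] = s / denom
    return w

def admissible(steps, rmax=1.405):
    """Check the adjacent step-ratio condition r_k = tau_k/tau_{k-1} <= rmax."""
    r = np.array(steps[1:]) / np.array(steps[:-1])
    return bool(np.all(r <= rmax))
```

With four nodes the interpolant reproduces cubics exactly, so the weights differentiate $t^3$ without error, which is a convenient correctness check.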
The use of mathematical models to make predictions about tumor growth and response to treatment has become increasingly prevalent in the clinical setting. The level of complexity within these models ranges broadly, and the calibration of more complex models correspondingly requires more detailed clinical data. This raises questions about how much data should be collected and when, in order to minimize the total amount of data used and the time until a model can be calibrated accurately. To address these questions, we propose a Bayesian information-theoretic procedure, using a gradient-based score function to determine the optimal data collection times for model calibration. The novel score function introduced in this work eliminates the need for a weight parameter used in a previous study's score function, while still yielding accurate and efficient model calibration using even fewer scans on a sample set of synthetic data, simulating tumors of varying levels of radiosensitivity. We also conduct a robust analysis of the calibration accuracy and certainty, using both error and uncertainty metrics. Unlike the error analysis of the previous study, the inclusion of uncertainty analysis in this work, as a means for deciding when the algorithm can be terminated, provides a more realistic option for clinical decision-making, since it does not rely on data that will be collected later in time.
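The general flavor of optimal scan-time selection can be sketched with a simple stand-in criterion: pick the candidate time at which the posterior-predictive spread of the model output is largest, i.e., where a measurement is most informative. This variance heuristic is only a hedged illustration; the paper's actual score function is gradient-based and information-theoretic.

```python
import numpy as np

def next_scan_time(candidate_times, param_samples, model):
    """Pick the candidate time where the spread of model predictions
    over posterior parameter samples is largest -- a simple stand-in
    for an information-based data-collection score."""
    spreads = [np.var([model(t, p) for p in param_samples])
               for t in candidate_times]
    return candidate_times[int(np.argmax(spreads))]
```

In a sequential loop one would collect the scan at the returned time, update the posterior samples, and repeat until an uncertainty-based stopping criterion is met.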
In this article, we develop differentially private tools for handling model uncertainty in linear regression models. We introduce hypothesis tests for nested linear models and methods for model averaging and selection. We consider Bayesian approaches based on mixtures of $g$-priors as well as non-Bayesian approaches based on information criteria. The procedures are straightforward to implement with existing software for non-private data and are asymptotically consistent under certain regularity conditions. We address practical issues such as calibrating the tests so that they have adequate type I error rates and quantifying the uncertainty introduced by the privacy mechanisms. Additionally, we provide specific guidelines to maximize the statistical utility of the methods in finite samples.
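The basic building block such procedures rest on is the release of a statistic with calibrated noise. As a hedged illustration only, here is the generic Laplace mechanism; the paper's tests and their calibration are substantially more involved than adding noise to a single statistic.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release a statistic with epsilon-differential privacy by adding
    Laplace(sensitivity / epsilon) noise.  'sensitivity' is the maximum
    change of the statistic when one record changes."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return value + rng.laplace(scale=sensitivity / epsilon)
```

The noise scale is exactly what the abstract's "uncertainty introduced by the privacy mechanisms" refers to: downstream tests must account for it on top of sampling variability.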
We consider parametric Markov decision processes (pMDPs) that are augmented with unknown probability distributions over parameter values. The problem is to compute the probability of satisfying a temporal logic specification within any concrete MDP that corresponds to a sample from these distributions. As this problem is infeasible to solve precisely, we resort to sampling techniques that exploit the so-called scenario approach. Based on a finite number of samples of the parameters, the proposed method yields high-confidence bounds on the probability of satisfying the specification. The number of samples required to obtain a high confidence on these bounds is independent of the number of states and the number of random parameters. Experiments on a large set of benchmarks show that several thousand samples suffice to obtain tight and high-confidence lower and upper bounds on the satisfaction probability.
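The scale of "several thousand samples" is consistent with classical scenario-approach sample bounds, which depend only on the tolerated violation probability $\epsilon$, the confidence $1-\beta$, and a small structural constant $d$, not on the size of the model. The sketch below uses the common sufficient condition $N \geq (2/\epsilon)(\ln(1/\beta) + d)$; the paper's exact bounds differ.

```python
import math

def scenario_sample_size(eps, beta, d=1):
    """Sufficient scenario-approach sample size: with probability at
    least 1 - beta, the sampled solution violates the chance constraint
    with probability at most eps (d = number of decision variables)."""
    return math.ceil((2.0 / eps) * (math.log(1.0 / beta) + d))
```

Note that the formula contains no reference to the number of MDP states or parameters, matching the independence claim in the abstract.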
In this paper, with the aid of the mathematical tool of stochastic geometry, we introduce analytical and computational frameworks for the distribution of three different definitions of delay, i.e., the time that it takes for a user to successfully receive a data packet, in large-scale cellular networks. We also provide an asymptotic analysis of one of the delay distributions, which can be regarded as the packet loss probability of a given network. To mitigate the inherent computational difficulties of the obtained analytical formulations in some cases, we propose efficient numerical approximations based on the numerical inversion method, the Riemann sum, and the Beta distribution. Finally, we demonstrate the accuracy of the obtained analytical formulations and the corresponding approximations against Monte Carlo simulation results, and provide insights into the delay performance with respect to several design parameters, such as the decoding threshold, the transmit power, and the deployment density of the base stations. The proposed methods can facilitate the analysis and optimization of cellular networks subject to reliability constraints on the network packet delay that are not restricted to the local (average) delay, e.g., in the context of delay-sensitive applications.
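One of the approximation ingredients named above, the Beta distribution, is typically used via moment matching: fit Beta$(a, b)$ to a random quantity supported on $[0, 1]$ (such as a conditional success probability) from its first two moments. The following sketch shows the generic fit only; how the moments arise from the stochastic-geometry analysis is specific to the paper.

```python
def beta_moment_match(mean, var):
    """Fit Beta(a, b) to a distribution on [0, 1] by matching the first
    two moments: a = mean*k, b = (1-mean)*k with
    k = mean*(1-mean)/var - 1 (requires var < mean*(1-mean))."""
    assert 0.0 < mean < 1.0 and 0.0 < var < mean * (1.0 - mean)
    k = mean * (1.0 - mean) / var - 1.0
    return mean * k, (1.0 - mean) * k
```

Once $(a, b)$ are known, delay metrics that average over the matched quantity reduce to closed-form Beta moments, which is what makes the approximation cheap.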
Optimal transport distances have found many applications in machine learning for their capacity to compare non-parametric probability distributions. Yet their algorithmic complexity generally prevents their direct use on large scale datasets. Among the possible strategies to alleviate this issue, practitioners can rely on computing estimates of these distances over subsets of data, {\em i.e.} minibatches. While computationally appealing, we highlight in this paper some limits of this strategy, arguing it can lead to undesirable smoothing effects. As an alternative, we suggest that the same minibatch strategy coupled with unbalanced optimal transport can yield more robust behavior. We discuss the associated theoretical properties, such as unbiased estimators, existence of gradients and concentration bounds. Our experimental study shows that in challenging problems associated with domain adaptation, the use of unbalanced optimal transport leads to significantly better results, competing with or surpassing recent baselines.
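The combination described above can be sketched concisely: entropic unbalanced OT via Sinkhorn-like scaling iterations with KL-relaxed marginals, averaged over random minibatch pairs. The hyperparameters ($\epsilon$, $\rho$, batch size) and the squared-Euclidean cost are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def unbalanced_sinkhorn(C, a, b, eps=0.1, rho=1.0, iters=200):
    """Entropic unbalanced OT with KL-relaxed marginals: scaling
    iterations with exponent rho/(rho+eps); returns the transport plan."""
    K = np.exp(-C / eps)
    fi = rho / (rho + eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):
        u = (a / (K @ v)) ** fi
        v = (b / (K.T @ u)) ** fi
    return u[:, None] * K * v[None, :]

def minibatch_uot_cost(X, Y, m=32, n_batches=8, rng=None, **kw):
    """Average the unbalanced OT transport cost over random minibatch
    pairs -- the minibatch estimator discussed in the abstract."""
    rng = rng if rng is not None else np.random.default_rng(0)
    total = 0.0
    for _ in range(n_batches):
        xb = X[rng.choice(len(X), m, replace=False)]
        yb = Y[rng.choice(len(Y), m, replace=False)]
        C = ((xb[:, None, :] - yb[None, :, :]) ** 2).sum(-1)
        w = np.full(m, 1.0 / m)
        P = unbalanced_sinkhorn(C, w, w, **kw)
        total += (P * C).sum()
    return total / n_batches
```

The marginal relaxation is what mitigates the smoothing effect: a minibatch is free to destroy mass on samples that have no good match in the other batch instead of being forced to transport it.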
Implicit probabilistic models are naturally defined in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.