High-fidelity computational simulations and physical experiments of hypersonic flows are resource intensive. Training scientific machine learning (SciML) models on limited high-fidelity data offers one approach to rapidly predicting behaviors in situations that have not been seen before. However, high-fidelity data are too scarce to validate every output of a SciML model across the unexplored input space, so an uncertainty-aware SciML model is desirable: its output uncertainties can then be used to assess the reliability of and confidence in its predictions. In this study, we extend a DeepONet with three different uncertainty quantification mechanisms: mean-variance estimation, evidential uncertainty, and ensembling. The uncertainty-aware DeepONet models are trained and evaluated on the hypersonic flow around a blunt cone, with data generated via computational fluid dynamics over a wide range of Mach numbers and altitudes. We find that ensembling outperforms the other two uncertainty mechanisms in both minimizing error and calibrating uncertainty, in both the interpolative and extrapolative regimes.
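The ensembling mechanism admits a compact illustration. The sketch below is a minimal, hedged example (a small MLP regressor stands in for the authors' DeepONet, and the toy data are an invented stand-in for the CFD inputs): each ensemble member is trained independently, and the spread across members supplies the predictive uncertainty.

```python
import torch
import torch.nn as nn

# Minimal deep-ensemble sketch for uncertainty-aware regression.
# A small MLP stands in for the DeepONet branch/trunk architecture.
def make_model():
    return nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

torch.manual_seed(0)
x = torch.rand(256, 2) * 2 - 1                 # toy inputs, e.g. scaled (Mach, altitude)
y = torch.sin(3 * x[:, :1]) * x[:, 1:] + 0.05 * torch.randn(256, 1)

ensemble = []
for seed in range(5):                          # K independently initialized members
    torch.manual_seed(seed)
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = ((model(x) - y) ** 2).mean()    # plain MSE per member
        loss.backward()
        opt.step()
    ensemble.append(model)

with torch.no_grad():
    preds = torch.stack([m(x) for m in ensemble])  # shape (K, N, 1)
    mean = preds.mean(dim=0)                       # ensemble prediction
    var = preds.var(dim=0)                         # epistemic spread across members
print(mean[:3].squeeze(), var[:3].squeeze())
```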
This work studies nonparametric Bayesian estimation of the intensity function of an inhomogeneous Poisson point process in the important case where the intensity depends on covariates, based on the observation of a single realisation of the point pattern over a large area. It is shown how the presence of covariates makes it possible to borrow information from far-away locations in the observation window, enabling consistent inference under growing-domain asymptotics. In particular, optimal posterior contraction rates under both global and pointwise loss functions are derived. The rates in global loss are obtained under conditions on the prior distribution resembling those in the well-established theory of Bayesian nonparametrics, here combined with concentration inequalities for functionals of stationary processes to control certain random covariate-dependent loss functions appearing in the analysis. The local rates are derived with an ad hoc analysis that builds on recent advances in the theory of P\'olya tree priors, extended to the present multivariate setting with a novel construction that exploits the random geometry induced by the covariates.
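To fix ideas, the observation model can be simulated directly. The following is a minimal sketch of our own (the covariate field and the log-linear link are illustrative assumptions, not the paper's setting), drawing one realisation of a covariate-dependent inhomogeneous Poisson point pattern by Lewis-Shedler thinning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical covariate field on the window [0, L]^2 (illustrative choice).
def Z(x, y):
    return np.sin(x) * np.cos(y / 2)

# Covariate-dependent intensity with a log-linear link (also an assumption).
alpha, beta, L = 1.0, 1.5, 50.0
def intensity(x, y):
    return np.exp(alpha + beta * Z(x, y))

# Lewis-Shedler thinning: simulate a homogeneous process at the maximal
# rate, then keep each point with probability intensity / lam_max.
lam_max = np.exp(alpha + abs(beta))            # upper bound, since |Z| <= 1
n = rng.poisson(lam_max * L * L)               # candidate count on the window
xs, ys = rng.uniform(0, L, n), rng.uniform(0, L, n)
keep = rng.uniform(size=n) < intensity(xs, ys) / lam_max
xs, ys = xs[keep], ys[keep]
print(f"{xs.size} points retained out of {n} candidates")
```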
Rational best approximations (in the Chebyshev sense) to real functions are characterized by an equioscillating approximation error. No analogous result holds for rational best approximations to complex functions in general. In the present work, we consider unitary rational approximations to the exponential function on the imaginary axis, i.e., rational functions which map the imaginary axis to the unit circle. In this class, best approximations are shown to exist, to be uniquely characterized by equioscillation of a phase error, and to possess a super-linear convergence rate. Furthermore, the best approximations have full degree (i.e., they are non-degenerate), attain their maximum approximation error at the points of equioscillation, and interpolate the exponential at intermediate points. Asymptotic properties of the poles, interpolation nodes, and equioscillation points of these approximants are studied. Three algorithms that prove very effective for computing unitary rational approximations, including candidates for best approximations, are sketched briefly. Some consequences for numerical time integration are discussed; in particular, time propagators based on unitary best approximants are unitary, symmetric, and A-stable.
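The phase-error viewpoint is easy to make concrete. As a hedged illustration (using the degree-(1,1) Cayley transform, the simplest unitary rational approximant to $\mathrm{e}^{\mathrm{i}x}$, rather than a best approximant), the sketch below confirms unitarity on the imaginary axis and relates the uniform error to the phase error:

```python
import numpy as np

# The (1,1) Cayley transform r(ix) = (1 + ix/2) / (1 - ix/2) is a unitary
# rational approximation to exp(ix): it maps the imaginary axis to the
# unit circle, so the approximation error reduces to a phase error.
x = np.linspace(-2, 2, 9)
r = (1 + 1j * x / 2) / (1 - 1j * x / 2)

print(np.abs(r))                       # identically 1: unitarity
phase_error = np.angle(r) - x          # equals 2*arctan(x/2) - x
print(phase_error)

# The uniform error |exp(ix) - r(ix)| is 2*|sin(phase_error/2)|.
uniform_error = np.abs(np.exp(1j * x) - r)
print(uniform_error, 2 * np.abs(np.sin(phase_error / 2)))  # these agree
```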
Positive semidefinite (PSD) matrices are indispensable in many fields of science. A similarity measurement for such matrices is usually an essential ingredient in the mathematical modelling of a scientific problem. This paper proposes a unified framework for constructing similarity measurements for PSD matrices. The framework is obtained by exploring the fiber bundle structure of the cone of PSD matrices and generalizing the idea of the point-set distance previously developed for linear subspaces and positive definite (PD) matrices. The framework offers both theoretical advantages and computational convenience: (1) We prove that the similarity measurement constructed by the framework can be interpreted either as the cost of a parallel transport or as the length of a quasi-geodesic curve. (2) We extend commonly used divergences for equidimensional PD matrices to the non-equidimensional case; examples include the Kullback-Leibler, Bhattacharyya, and R\'enyi divergences. We prove that these extensions enjoy the same consistency property as their geodesic-distance counterparts. (3) We apply our geometric framework to further extend the divergences in (2) to similarity measurements for arbitrary PSD matrices. We also provide simple formulae to compute these similarity measurements in most situations.
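For the equidimensional PD case, the divergences in (2) have well-known closed forms. A minimal sketch (the standard Bhattacharyya formula for zero-mean Gaussian covariances, not the paper's non-equidimensional extension):

```python
import numpy as np

def bhattacharyya_pd(A, B):
    """Bhattacharyya divergence between PD matrices A and B, read as
    covariances of zero-mean Gaussians: 0.5*log(det(M)/sqrt(det A det B)),
    where M = (A + B)/2."""
    M = (A + B) / 2
    return 0.5 * (np.linalg.slogdet(M)[1]
                  - 0.5 * (np.linalg.slogdet(A)[1] + np.linalg.slogdet(B)[1]))

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 1.5]])
print(bhattacharyya_pd(A, B))          # positive for A != B
print(bhattacharyya_pd(A, A))          # -> 0.0
```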
Recent progress in artificial intelligence (AI) and high-performance computing (HPC) has brought potentially game-changing opportunities for accelerating reactive flow simulations. In this study, we introduce an open-source computational fluid dynamics (CFD) framework that integrates the strengths of machine learning (ML) and graphics processing units (GPUs) to demonstrate their combined capability. Within this framework, all computational operations are executed solely on the GPU, including ML-accelerated chemistry integration, fully implicit solving of PDEs, and computation of thermal and transport properties, thereby eliminating CPU-GPU memory copy overhead. Optimisations both within the kernel functions and during the kernel launch process are applied to enhance computational performance, and strategies such as static data reorganisation and dynamic data allocation are adopted to reduce the GPU memory footprint. The computational performance is evaluated on two turbulent flame benchmarks using quasi-DNS and LES modelling, respectively. Remarkably, while maintaining a level of accuracy similar to that of the conventional CPU/CVODE-based solver, the GPU/ML-accelerated approach achieves an overall speedup of over two orders of magnitude in both cases. This result highlights that high-fidelity turbulent combustion simulations with finite-rate chemistry, which normally require hundreds of CPUs, can now be performed on portable devices such as laptops with a mid-range GPU.
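The GPU-resident design can be illustrated in miniature. The sketch below is our own illustration using CuPy, not the framework's actual kernels (the rate and heat-release expressions are toy placeholders, and a CUDA device is required): every array stays on the GPU between stages, so no host-device copies occur inside the time loop.

```python
import cupy as cp  # requires a CUDA-capable GPU

# Minimal sketch of a GPU-resident pipeline: allocate once on the device,
# keep all intermediates there, and copy back only the final result.
n = 1_000_000
T = cp.full(n, 1200.0)                    # temperature field, stays on GPU
Y = cp.full(n, 0.2)                       # a species mass fraction, stays on GPU

for _ in range(100):                      # pseudo time-stepping loop
    k = cp.exp(-15000.0 / T)              # toy Arrhenius-like rate (illustrative)
    Y = cp.clip(Y - 1e-3 * k * Y, 0.0, 1.0)
    T = T + 50.0 * k * Y                  # toy heat release (illustrative)

result = cp.asnumpy(Y[:5])                # single device-to-host copy at the end
print(result)
```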
Interactions between genes and environmental factors may play a key role in the etiology of many common disorders. Several regularized generalized linear models (GLMs) have been proposed for hierarchical selection of gene-by-environment interaction (GEI) effects, where a GEI effect is selected only if the corresponding genetic main effect is also selected in the model. However, none of these methods allows the inclusion of random effects to account for population structure, subject relatedness, and shared environmental exposure. In this paper, we develop a unified approach based on regularized penalized quasi-likelihood (PQL) estimation to perform hierarchical selection of GEI effects in sparse regularized mixed models. We compare the selection and prediction accuracy of our proposed model with those of existing methods through simulations in the presence of population structure and shared environmental exposure. We show that, across all simulation scenarios, our proposed method enforces sparsity by controlling the number of false positives in the model while achieving the best predictive performance among the penalized methods compared. Finally, we apply our method to real data from the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study and find that it retrieves previously reported significant loci.
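The hierarchy constraint itself can be written compactly. One common device, shown below as a sketch of the general idea rather than necessarily the exact penalty used in this paper, is a multiplicative reparametrization of the interaction coefficients combined with $\ell_1$ penalties, all embedded in a mixed model with random effects $b$:

```latex
% Multiplicative reparametrization enforcing strong hierarchy:
% the GEI effect is gamma_j * beta_j * beta_E, so beta_j = 0 (or beta_E = 0)
% forces the corresponding interaction to vanish automatically.
g(\mu) = \beta_0 + \sum_{j} \beta_j X_j + \beta_E E
       + \sum_{j} \gamma_j \beta_j \beta_E \,(X_j E) + Z b,
\qquad
\min_{\beta,\gamma}\; -\ell_{\mathrm{PQL}}(\beta,\gamma,b)
 + \lambda_1 \sum_{j} |\beta_j| + \lambda_2 \sum_{j} |\gamma_j| .
```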
Functions with singularities are notoriously difficult to approximate with conventional approximation schemes. In computational applications they are often resolved with low-order piecewise polynomials, multilevel schemes, or other grading strategies. Rational functions are an exception to this rule: for univariate functions with point singularities, such as branch points, rational approximations exist with root-exponential convergence in the rational degree. This is typically enabled by the clustering of poles near the singularity. Both the theory and the computational practice of rational function approximation have focused on the univariate case, with extensions to two dimensions via identification with the complex plane. Multivariate rational functions, i.e., quotients of polynomials in several variables, are comparatively unexplored. Yet, apart from a steep increase in theoretical complexity, they also offer a wealth of opportunities. A first observation is that the singularities of multivariate rational functions may be continuous curves of poles, rather than isolated points. By generalizing the clustering of poles from points to curves, we explore constructions of multivariate rational approximations to functions with curves of singularities.
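The univariate mechanism being generalized can be sketched in a few lines. The example below is our own illustration in the spirit of "lightning" approximation (the clustering parameter and pole count are ad hoc choices): it least-squares fits $\sqrt{x}$ on $[0,1]$ using poles clustered exponentially at the branch point $x=0$.

```python
import numpy as np

# Approximate sqrt(x) on [0, 1] by a rational function whose poles cluster
# exponentially at the branch point x = 0, placed on the negative real axis.
N = 20
poles = -np.exp(-4.0 * np.sqrt(np.arange(1, N + 1)))   # exponentially clustered

x = np.linspace(0, 1, 2000) ** 2      # sample points, also graded toward 0
f = np.sqrt(x)

# Basis: partial fractions 1/(x - p_k) plus a constant term; fit by
# linear least squares.
A = np.column_stack([1.0 / (x[:, None] - poles[None, :]), np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, f, rcond=None)
err = np.max(np.abs(A @ coef - f))
print(f"max error with {N} clustered poles: {err:.2e}")
```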
While generalized linear mixed models (GLMMs) are a fundamental tool in applied statistics, many specifications -- such as those involving categorical factors with many levels or interaction terms -- can be computationally challenging to estimate due to the need to compute or approximate high-dimensional integrals. Variational inference (VI) methods are a popular way to perform such computations, especially in the Bayesian context. However, naive VI methods can provide unreliable uncertainty quantification. We show that this is indeed the case in the GLMM context, proving that standard (i.e., mean-field) VI dramatically underestimates posterior uncertainty in high dimensions. We then show how appropriately relaxing the mean-field assumption leads to VI methods whose uncertainty quantification does not deteriorate in high dimensions and whose total computational cost scales linearly with the number of parameters and observations. Our theoretical and numerical results focus on GLMMs with Gaussian or binomial likelihoods, and rely on connections to random graph theory to obtain sharp high-dimensional asymptotic analysis. We also provide generic results, of independent interest, relating the accuracy of variational inference to the convergence rate of the corresponding coordinate ascent variational inference (CAVI) algorithm for Gaussian targets. Our proposed partially-factorized VI (PF-VI) methodology for GLMMs is implemented in the R package vglmer; see https://github.com/mgoplerud/vglmer. Numerical results with simulated and real data examples illustrate the favourable computational cost versus accuracy trade-off of PF-VI.
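The mean-field pathology is already visible for a bivariate Gaussian target. A minimal sketch (a standard fact about mean-field VI for Gaussian targets, not the paper's GLMM analysis): the mean-field optimum has marginal variances equal to the inverse diagonal of the precision matrix, which understates the true marginals whenever coordinates are correlated.

```python
import numpy as np

# Target: zero-mean bivariate Gaussian with correlation rho.
rho = 0.9
Sigma = np.array([[1.0, rho], [rho, 1.0]])
Lambda = np.linalg.inv(Sigma)           # precision matrix

true_var = np.diag(Sigma)               # exact marginal variances
mf_var = 1.0 / np.diag(Lambda)          # mean-field VI optimum: 1 / Lambda_ii

print(true_var)   # [1. 1.]
print(mf_var)     # [0.19 0.19]: variance underestimated ~5x at rho = 0.9
```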
In this contribution, we derive a consistent variational formulation for computational homogenization methods and show that traditional FE2 and IGA2 approaches are special discretization and solution techniques within this more general framework. This allows us to dramatically enhance both the numerical analysis and the solution of the arising algebraic system. In particular, we expand the dimension of the continuous system, discretize the higher-dimensional problem consistently, and afterwards apply a discrete null-space matrix to remove the additional dimensions. A benchmark problem, for which we can obtain an analytical solution, demonstrates the superiority of the chosen approach, which reduces the immense computational costs of traditional FE2 and IGA2 formulations to a fraction of the original requirements. Finally, we demonstrate a further reduction of the computational costs for the solution of general non-linear problems.
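The discrete null-space step admits a small linear-algebra illustration. A minimal sketch (a generic null-space elimination for a constrained linear system, not the paper's FE2 discretization): solve $Ku = f$ subject to $Cu = 0$ by restricting to a basis of $\ker(C)$.

```python
import numpy as np
from scipy.linalg import null_space

# Constrained system: minimize 0.5 u'Ku - f'u subject to C u = 0.
K = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
f = np.array([1.0, 0.0, 1.0])
C = np.array([[1.0, 1.0, 1.0]])        # single linear constraint

# Null-space method: substituting u = N y removes the constrained
# directions, leaving a smaller unconstrained system (N' K N) y = N' f.
N = null_space(C)                       # columns span ker(C)
y = np.linalg.solve(N.T @ K @ N, N.T @ f)
u = N @ y

print(u, C @ u)                         # C @ u is ~0 by construction
```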
Empirical Bayes provides a powerful approach to learning and adapting to latent structure in data. The theory and algorithms of empirical Bayes are well developed for sequence models, but are less understood in settings where latent variables and data interact through more complex designs. In this work, we study empirical Bayes estimation of an i.i.d. prior in Bayesian linear models via the nonparametric maximum likelihood estimator (NPMLE). We introduce and study a system of gradient flow equations for optimizing the marginal log-likelihood, jointly over the prior and posterior measures in its Gibbs variational representation, using a smoothed reparametrization of the regression coefficients. A diffusion-based implementation yields a Langevin dynamics MCEM algorithm, in which the prior law evolves continuously over time to optimize a sequence-model log-likelihood defined by the coordinates of the current Langevin iterate. We show consistency of the NPMLE as $n, p \rightarrow \infty$ under mild conditions, including settings of random sub-Gaussian designs with $n \asymp p$. In the high-noise regime, we prove a uniform log-Sobolev inequality for the mixing of the Langevin dynamics, for possibly misspecified priors and non-log-concave posteriors. We then establish polynomial-time convergence of the joint gradient flow to a near-NPMLE whenever the marginal negative log-likelihood is convex in a sub-level set of the initialization.
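The Langevin inner loop is simple to write down. A minimal sketch (unadjusted Langevin dynamics for a Bayesian linear model posterior under a fixed Gaussian prior; the continuous prior updates of the MCEM scheme are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Bayesian linear model: y = X theta + noise, Gaussian prior on theta.
n, p, sigma2, tau2 = 200, 10, 0.25, 1.0
X = rng.standard_normal((n, p)) / np.sqrt(n)
theta_true = rng.standard_normal(p)
y = X @ theta_true + np.sqrt(sigma2) * rng.standard_normal(n)

def grad_log_post(theta):
    # Gradient of the log posterior: Gaussian likelihood + Gaussian prior.
    return X.T @ (y - X @ theta) / sigma2 - theta / tau2

# Unadjusted Langevin algorithm: theta <- theta + eps*grad + sqrt(2 eps)*xi.
eps, steps = 1e-3, 20_000
theta = np.zeros(p)
samples = []
for t in range(steps):
    theta = theta + eps * grad_log_post(theta) \
            + np.sqrt(2 * eps) * rng.standard_normal(p)
    if t > steps // 2:                  # discard burn-in
        samples.append(theta.copy())

print(np.mean(samples, axis=0)[:3])     # approximate posterior mean
```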
Coordinate exchange (CEXCH) is a popular algorithm for generating exact optimal experimental designs. The authors of CEXCH advocated a highly greedy implementation, one that exchanges and optimizes single-element coordinates of the design matrix. We revisit the effect of greediness on CEXCH's efficacy for generating highly efficient designs. We implement the single-element CEXCH (most greedy), a design-row optimization exchange (medium greedy), and particle swarm optimization (PSO; least greedy) on 21 exact response surface design scenarios under the $D$- and $I$-criteria, for which well-known optimal designs have been reproduced by several researchers. We found essentially no difference in performance between the most greedy and the medium greedy CEXCH. PSO did exhibit better efficacy than CEXCH for generating $D$-optimal designs and most $I$-optimal designs, but not to a strong degree under our parametrization. This work suggests that further investigation of the greediness dimension and its effect on CEXCH efficacy, on a wider suite of models and criteria, is warranted.
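A single-element coordinate exchange is easy to sketch. The following minimal illustration (a first-order model in two factors with a three-level candidate set under the $D$-criterion; an illustrative toy, not the authors' implementation) shows the greedy element-wise pass:

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-element coordinate exchange for a D-optimal exact design.
# Model: first-order in two factors, model matrix columns (1, x1, x2).
n, k = 6, 2
levels = np.array([-1.0, 0.0, 1.0])            # candidate coordinate values

def log_det(D):
    X = np.column_stack([np.ones(len(D)), D])  # model matrix
    sign, ld = np.linalg.slogdet(X.T @ X)
    return ld if sign > 0 else -np.inf

D = rng.choice(levels, size=(n, k))            # random starting design
current = log_det(D)
improved = True
while improved:                                 # greedy passes until no gain
    improved = False
    for i in range(n):
        for j in range(k):
            old = D[i, j]
            for v in levels:                    # try each candidate level
                if v == old:
                    continue
                D[i, j] = v
                new = log_det(D)
                if new > current + 1e-12:       # keep only strict improvements
                    current, old, improved = new, v, True
                else:
                    D[i, j] = old               # revert the exchange
print(D)
print(f"log det(X'X) = {current:.3f}")
```

In practice CEXCH is restarted from many random initial designs, since each greedy pass only finds a local optimum.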