Pre-trained on a large and diverse dataset, the Segment Anything Model (SAM) is the first promptable foundation model in computer vision aimed at object segmentation tasks. In this work, we evaluate SAM on the task of nuclear instance segmentation, in both zero-shot and fine-tuned settings. We compare SAM with other representative methods for nuclear instance segmentation, especially in the context of model generalisability. To achieve automatic nuclear instance segmentation, we propose using a nuclei detection model to provide bounding boxes or central points of nuclei as visual prompts for SAM when generating nuclear instance masks from histology images.
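A minimal sketch of this prompting pipeline is given below, assuming the public `segment_anything` API. The backbone choice and checkpoint path are assumptions, and `detect_nuclei` is a hypothetical placeholder for whichever nuclei detection model supplies the box prompts.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def detect_nuclei(image):
    """Hypothetical detector returning an (N, 4) array of XYXY nuclei boxes."""
    raise NotImplementedError

# Assumed backbone and checkpoint; substitute your own.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

def segment_nuclei(image):
    """One instance mask per detected nucleus, prompted by its bounding box."""
    predictor.set_image(image)              # HxWx3 uint8 RGB histology image
    masks = []
    for box in detect_nuclei(image):
        # Central points could be passed instead via point_coords/point_labels.
        m, score, _ = predictor.predict(box=box, multimask_output=False)
        masks.append(m[0])                  # single best mask for this prompt
    return np.array(masks, dtype=bool)
```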
In this study, the impact of turbulent diffusion on mixing in biochemical reaction models is explored by implementing and validating different models. An original codebase called CHAD (Coupled Hydrodynamics and Anaerobic Digestion) is extended to incorporate turbulent diffusion and validated against OpenFOAM results for 2D Rayleigh-Taylor instability and lid-driven cavity simulations. The models are then tested on applications in anaerobic digestion, a widely used wastewater treatment method. The findings demonstrate that the implemented models accurately capture turbulent diffusion when provided with an accurate flow field. Specifically, chemical turbulent diffusion has only a minor effect on the biochemical reactions within the anaerobic digestion tank, whereas thermal turbulent diffusion significantly influences mixing. Successfully implementing turbulent diffusion models in CHAD enhances its capabilities for more accurate anaerobic digestion simulations, aiding the design and operation of anaerobic digestion reactors in real-world wastewater treatment applications.
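CHAD's implementation is not reproduced here, but the standard gradient-diffusion closure underlying this kind of model is easy to sketch: the turbulent contribution enters the scalar transport equation as an added diffusivity D_t = nu_t / Sc_t. Below is a minimal explicit finite-volume step on a uniform 2D grid; the molecular diffusivity, turbulent Schmidt number, and boundary handling are illustrative assumptions, not CHAD's actual numerics.

```python
import numpy as np

def diffusion_step(c, nu_t, dx, dt, D_m=1e-9, Sc_t=0.7):
    """One explicit finite-volume step of scalar diffusion on a uniform 2D grid.

    Gradient-diffusion closure: effective diffusivity D = D_m + nu_t / Sc_t,
    with nu_t the (given) turbulent viscosity field and Sc_t the turbulent
    Schmidt number; D_m and Sc_t values here are illustrative assumptions.
    Stability requires roughly dt <= dx**2 / (4 * D.max()).
    """
    D = D_m + nu_t / Sc_t
    Dx = 0.5 * (D[:, 1:] + D[:, :-1])        # diffusivity on x-faces
    Dy = 0.5 * (D[1:, :] + D[:-1, :])        # diffusivity on y-faces
    fx = Dx * (c[:, 1:] - c[:, :-1]) / dx    # diffusive flux across x-faces
    fy = Dy * (c[1:, :] - c[:-1, :]) / dx    # diffusive flux across y-faces
    c_new = c.copy()                         # boundary cells left unchanged
    c_new[:, 1:-1] += dt / dx * (fx[:, 1:] - fx[:, :-1])
    c_new[1:-1, :] += dt / dx * (fy[1:, :] - fy[:-1, :])
    return c_new
```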
Charts, figures, and text derived from data play an important role in decision making, from data-driven policy development to day-to-day choices informed by online articles. Making sense of, or fact-checking, outputs means understanding how they relate to the underlying data. Even for domain experts with access to the source code and data sets, this poses a significant challenge. In this paper we introduce a new program analysis framework which supports interactive exploration of fine-grained I/O relationships directly through computed outputs, making use of dynamic dependence graphs. Our main contribution is a novel notion in data provenance which we call related inputs, a relation of mutual relevance or "cognacy" which arises between inputs when they contribute to common features of the output. Queries of this form allow readers to ask questions like "What outputs use this data element, and what other data elements are used along with it?". We show how Jonsson and Tarski's concept of conjugate operators on Boolean algebras appropriately characterises the notion of cognacy in a dependence graph, and give a procedure for computing related inputs over such a graph.
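To make the related-inputs query concrete, the sketch below computes it over a dependence graph stored as a plain adjacency map (a simplification of the paper's dynamic dependence graphs; the representation and names are assumptions). It composes a forward reachability pass from the selected input with a backward membership test over the other inputs, mirroring the forward/backward pairing of conjugate operators.

```python
def related_inputs(deps, inputs, x):
    """Inputs cognate with x: those relevant to some node that x is relevant to.

    `deps` maps each node to the set of nodes that depend on it (edges point
    from data towards outputs). Intersecting full reachable sets treats any
    shared downstream node as a common output feature.
    """
    def reachable(start):
        seen, stack = set(), [start]
        while stack:
            for m in deps.get(stack.pop(), ()):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return seen

    outputs_of_x = reachable(x)                   # forward: what x contributes to
    return {i for i in inputs                      # backward: who shares those
            if i != x and reachable(i) & outputs_of_x}

# Toy example: 'a' and 'b' feed 'sum'; 'b' and 'c' feed 'prod'.
deps = {"a": {"sum"}, "b": {"sum", "prod"}, "c": {"prod"}}
print(related_inputs(deps, {"a", "b", "c"}, "a"))  # {'b'}: shares 'sum' with 'a'
```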
Time-lapse seismic monitoring necessitates integrated workflows that combine seismic and reservoir modeling to enhance reservoir property estimation. We present a feasibility study of an end-to-end inversion framework that directly inverts for permeability from prestack time-lapse seismic data. To assess the method's robustness, we design experiments focusing on its sensitivity to initial models and potential modeling errors. Our study leverages the Compass model, derived from well and seismic data from the North Sea, a candidate site for geological carbon storage, to simulate CO2 storage in saline aquifers.
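The end-to-end idea is that permeability updates are obtained by differentiating through the composed physics. The sketch below illustrates only that composition: `fluid_flow`, `rock_physics`, and `wave_modeling` are hypothetical one-line stand-ins for a two-phase flow simulator, a rock-physics model, and wave-equation modeling, and the optimization loop is plain gradient descent.

```python
import jax
import jax.numpy as jnp

# Hypothetical stand-ins for the three chained physics operators.
def fluid_flow(K):    return jnp.tanh(K)            # permeability -> saturation
def rock_physics(S):  return 1.5 + 0.2 * S          # saturation -> velocity
def wave_modeling(v): return jnp.fft.rfft(v).real   # velocity -> seismic data

def misfit(K, d_obs):
    """Data misfit of the fully composed, differentiable forward model."""
    return 0.5 * jnp.sum((wave_modeling(rock_physics(fluid_flow(K))) - d_obs) ** 2)

K_true = jnp.linspace(0.0, 1.0, 128)                # synthetic "true" model
d_obs = wave_modeling(rock_physics(fluid_flow(K_true)))

K = jnp.zeros(128)                                  # initial permeability model
for _ in range(200):                                # gradient descent through
    K = K - 1e-2 * jax.grad(misfit)(K, d_obs)       # all three physics stages
```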
We propose a numerical method to solve parameter-dependent hyperbolic partial differential equations (PDEs) with a moment approach, building on previous work by Marx et al. (2020). This approach relies on a very weak notion of solution of nonlinear equations, namely parametric entropy measure-valued (MV) solutions, which satisfy linear equations in the space of Borel measures. The infinite-dimensional linear problem is approximated by a hierarchy of convex, finite-dimensional semidefinite programming problems, called Lasserre's hierarchy. This yields a sequence of approximations of the moments of the occupation measure associated with the parametric entropy MV solution, which is proved to converge. Several post-processing steps can then be performed on this approximate moment sequence. In particular, the graph of the solution can be reconstructed by optimizing the Christoffel-Darboux kernel associated with the approximate measure, a powerful approximation tool able to capture a large class of irregular functions. For uncertainty quantification problems, several quantities of interest can also be estimated, sometimes directly, such as the expectation of smooth functionals of the solution. The performance of our approach is evaluated through numerical experiments on the inviscid Burgers equation with a parametrised initial condition or a parametrised flux function.
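To illustrate the Christoffel-Darboux post-processing step: given the moment matrix M of a measure concentrated on the graph of a function, the polynomial v(x,y)^T M^{-1} v(x,y) is small on the support and large away from it, so the graph is recovered by minimising it in y. In the sketch below the moments are faked by sampling a shock-like profile directly, standing in for the output of Lasserre's hierarchy; degree, grids, and regularisation are illustrative assumptions.

```python
import numpy as np

def basis(x, y, deg):
    """Monomial basis [x**i * y**j for i + j <= deg] at a point (x, y)."""
    return np.array([x**i * y**j for i in range(deg + 1)
                     for j in range(deg + 1 - i)])

# Stand-in for the hierarchy output: empirical moment matrix of the measure
# concentrated on the graph of a shock-like profile f.
f = lambda x: np.where(x < 0.5, 1.0, 0.0)
xs = np.linspace(0.0, 1.0, 2000)
V = np.array([basis(x, f(x), 6) for x in xs])
M = V.T @ V / len(xs)

Minv = np.linalg.inv(M + 1e-8 * np.eye(len(M)))   # regularised inverse

def inv_christoffel(x, y):
    """Small on the support of the measure, large away from it."""
    v = basis(x, y, 6)
    return v @ Minv @ v

# Graph reconstruction: minimise over y on a grid, for each x of interest.
ys = np.linspace(-0.5, 1.5, 401)
recover = lambda x: ys[np.argmin([inv_christoffel(x, y) for y in ys])]
print([recover(x) for x in (0.1, 0.3, 0.7, 0.9)])  # roughly 1, 1, 0, 0
```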
Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and to assess the putative influence of covariates on them. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examined five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first describe the models theoretically. Next, using data from two prospective cohorts with annual cognitive testing, we compared the interpretation of the models by investigating the association of education with cognitive change before death. Lastly, we performed a simulation study to empirically evaluate the models and provide practical recommendations. Except for the LMM with a quadratic term, all models fit the data adequately, capturing the non-linearity of cognitive change, and were relatively robust. Although spline-based models have no interpretable non-linearity parameters, their convergence was easier to achieve and they lend themselves to graphical interpretation. In contrast, the piecewise and sigmoidal models, which have interpretable non-linear parameters, may require more data to achieve convergence.
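As a concrete example of the first approach, the sketch below fits an LMM with a quadratic time term in statsmodels, with random intercepts and slopes per participant. The file and column names (`cog` for the cognitive score, `time` for years before death, `educ`, `id`) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data set: one row per participant-visit.
df = pd.read_csv("cohort.csv")

model = smf.mixedlm(
    "cog ~ time + I(time**2) + educ + educ:time + educ:I(time**2)",
    df,
    groups=df["id"],        # random effects grouped by participant
    re_formula="~time",     # random intercept and random slope
)
print(model.fit().summary())
```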
This study addresses a class of mixed-integer linear programming (MILP) problems that involve uncertainty in the objective function parameters. The parameters are assumed to form a random vector whose probability distribution can only be observed through a finite training data set. Unlike most related studies in the literature, we also consider uncertainty in the underlying data set. The data uncertainty is described by a set of linear constraints for each random sample, and the uncertainty in the distribution (for a fixed realization of data) is defined using a type-1 Wasserstein ball centered at the empirical distribution of the data. The overall problem is formulated as a three-level distributionally robust optimization (DRO) problem. First, we prove that the three-level problem admits a single-level MILP reformulation if the class of loss functions is restricted to biaffine functions. Second, we show that for several particular forms of data uncertainty, the outlined problem can be solved reasonably quickly by leveraging the nominal MILP problem. Finally, we conduct a computational study in which the out-of-sample performance of our model and the computational complexity of the proposed MILP reformulation are explored numerically for several application domains.
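A minimal numerical illustration of the distributional side of the model (without data uncertainty or integrality): for a loss that is affine in the random vector, the worst case over a type-1 Wasserstein ball around the empirical distribution has a well-known closed form, namely the empirical mean plus the radius times the dual norm of the decision vector (assuming unbounded support and L1 transport cost).

```python
import numpy as np

def worst_case_cost(x, samples, eps):
    """Worst case of E_Q[xi @ x] over the type-1 Wasserstein ball of radius
    eps (L1 transport cost, unbounded support) centered at the empirical
    distribution: empirical mean plus eps times the dual (max) norm of x."""
    return (samples @ x).mean() + eps * np.max(np.abs(x))

rng = np.random.default_rng(0)
xi = rng.normal(size=(100, 5))              # training samples of the cost vector
x = np.array([1.0, 0.0, 2.0, -1.0, 0.5])    # a fixed feasible decision
print(worst_case_cost(x, xi, eps=0.0))      # nominal sample-average cost
print(worst_case_cost(x, xi, eps=0.1))      # robustified upper bound
```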
For problems of time-harmonic scattering by rational polygonal obstacles, embedding formulae express the far-field pattern induced by any incident plane wave in terms of the far-field patterns for a relatively small (frequency-independent) set of canonical incident angles. Although these remarkable formulae are exact in theory, here we demonstrate that: (i) they are highly sensitive to numerical errors in practice, and (ii) direct calculation of the coefficients in these formulae may be impossible for particular sets of canonical incident angles, even in exact arithmetic. Only by overcoming these practical issues can embedding formulae provide a highly efficient approach to computing the far-field pattern induced by a large number of incident angles. Here we address challenges (i) and (ii), supporting our theory with numerical experiments. Challenge (i) is solved using techniques from computational complex analysis: we reformulate the embedding formula as a complex contour integral and prove that this is much less sensitive to numerical errors. In practice, this contour integral can be efficiently evaluated by residue calculus. Challenge (ii) is addressed using techniques from numerical linear algebra: we oversample, considering more canonical incident angles than are necessary, thus expanding the set of valid coefficient vectors. The coefficient vector can then be selected using either a least squares approach or column subset selection.
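The oversampling remedy for challenge (ii) amounts to selecting a coefficient vector from an underdetermined linear system: with more canonical incident angles than strictly necessary, many coefficient vectors are valid, and one is chosen by least squares or by column subset selection. The sketch below shows only the shape of that linear algebra, with random placeholder data standing in for the far-field quantities that define the system.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
P, M = 8, 12                                  # M > P: oversampled angle set
A = rng.normal(size=(P, M)) + 1j * rng.normal(size=(P, M))  # placeholder data
b = rng.normal(size=P) + 1j * rng.normal(size=P)

# Option 1: least squares (minimum-norm solution among all valid vectors).
c_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Option 2: column subset selection via pivoted QR -- keep the P
# best-conditioned columns (canonical angles) and solve the square subsystem.
_, _, piv = qr(A, mode="economic", pivoting=True)
c_css = np.zeros(M, dtype=complex)
c_css[piv[:P]] = np.linalg.solve(A[:, piv[:P]], b)
```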
Deep learning methods can now be employed to solve physical systems governed by parametric partial differential equations (PDEs), thanks to the availability of massive scientific data. This line of work has been refined into operator learning, which focuses on learning non-linear mappings between infinite-dimensional function spaces and offers an interface from observations to solutions. However, state-of-the-art neural operators are limited to constant, uniform discretizations, and therefore generalize poorly to arbitrary discretization schemes for the computational domain. In this work, we propose a novel operator learning algorithm, referred to as the Dynamic Gaussian Graph Operator (DGGO), that extends neural operators to learning parametric PDEs on arbitrarily discretized mechanics problems. The Dynamic Gaussian Graph (DGG) kernel learns to map observation vectors defined in general Euclidean space to metric vectors defined in a high-dimensional uniform metric space. The DGG integral kernel is parameterized by a Gaussian-kernel-weighted Riemann sum and uses a dynamic message-passing graph to depict the interrelations within the integral term. A Fourier neural operator is selected to localize the metric vectors in the spatial and frequency domains. The metric vectors are regarded as living on a latent uniform domain, where spatial and spectral transformations impose highly regular constraints on the solution space. The efficiency and robustness of DGGO are validated by solving arbitrarily discretized mechanics problems in comparison with mainstream neural operators. Ablation experiments demonstrate the effectiveness of the spatial transformation in the DGG kernel. As an engineering application, the proposed method is used to forecast the stress field of a hyper-elastic material with geometrically variable voids.
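A minimal sketch of the scatter-to-grid step at the heart of this design: features at arbitrarily placed nodes are mapped onto a latent uniform grid by a Gaussian-kernel-weighted Riemann sum, after which grid-based spectral operators (such as an FNO layer) can be applied. Kernel width, grid size, and the normalisation are illustrative assumptions, not the paper's exact parameterisation.

```python
import numpy as np

def scatter_to_grid(points, feats, n_grid=32, sigma=0.05):
    """Map features at scattered nodes onto a uniform grid.

    points: (N, 2) coordinates in [0, 1]^2; feats: (N, C) node features.
    Returns an (n_grid, n_grid, C) array on the latent uniform domain.
    """
    g = (np.arange(n_grid) + 0.5) / n_grid
    gx, gy = np.meshgrid(g, g, indexing="ij")
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)     # (G, 2) grid nodes
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma**2))                      # Gaussian kernel
    w /= w.sum(axis=1, keepdims=True) + 1e-12             # normalised weights
    out = w @ feats                                       # Riemann-sum mapping
    return out.reshape(n_grid, n_grid, -1)
```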
Markov chain Monte Carlo (MCMC) simulations have been widely used to generate samples from complex posterior distributions in Bayesian inference. However, these simulations often require multiple computations of the forward model in the likelihood function for each drawn sample, which renders MCMC sampling impractical when the forward model is computationally expensive, as in the case of partial differential equation models. In this paper, we propose a novel sampling approach called the geometric optics approximation method (GOAM) for Bayesian inverse problems, which entirely circumvents the need for MCMC simulations. Our method is rooted in the problem of reflector shape design, which focuses on constructing a reflecting surface that redirects rays from a source with a predetermined density towards a target domain while achieving a desired density distribution. The key idea is to treat the unnormalized Bayesian posterior as the density on the target domain of the optical system and to define a geometric optics approximation measure with respect to the posterior through a reflecting surface. Once such a reflecting surface is obtained, we can use it to draw an arbitrary number of independent and uncorrelated samples from the posterior measure. We show that the geometric optics approximation measure is well-posed. The efficiency and robustness of the proposed sampler are demonstrated through several numerical examples.
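The payoff of the construction is the sampling step: once the reflecting surface is in hand, it induces a transport map from the source density to the posterior, and independent samples come from simply pushing source samples through it. The sketch below illustrates only that pushforward pattern; the 1D inverse-CDF map is a stand-in for the reflector-induced map, not part of the method.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
source = rng.uniform(size=10_000)                   # rays from the source density
push = lambda u: norm.ppf(u, loc=1.0, scale=0.5)    # stand-in transport map
posterior_samples = push(source)                    # i.i.d. draws, no MCMC chain
```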
The main respiratory muscle, the diaphragm, is an example of a thin structure. We aim to perform detailed numerical simulations of the muscle mechanics based on individual patient data. This requires a representation of the diaphragm geometry extracted from medical image data. We design an adaptive reconstruction method based on a least-squares radial basis function partition of unity method. The method is adapted to thin structures by subdividing the structure rather than the surrounding space, and by introducing an anisotropic scaling of local subproblems. The resulting representation is an infinitely smooth level set function, which is stabilized such that there are no spurious zero level sets. We show reconstruction results for 2D cross sections of the diaphragm geometry as well as for the full 3D geometry. We also show solutions to basic PDE test problems in the reconstructed geometries.
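To illustrate the basic building block (before the partition of unity, adaptivity, and anisotropic scaling are layered on), the sketch below fits a smooth level-set function to oriented surface points by least squares over a Gaussian RBF expansion: surface points target zero, and points offset along the normals target +/- d, which stabilizes the fit against spurious zero level sets near the data. Centres, shape parameter, and offset distance are illustrative assumptions.

```python
import numpy as np

def fit_level_set(pts, normals, centres, shape=5.0, d=0.05):
    """Least-squares Gaussian RBF fit of a level-set function in 2D.

    pts: (N, 2) surface points with unit normals (N, 2); centres: (M, 2)
    RBF centres, with M <= 3N so the system is overdetermined. Returns a
    callable evaluating the fitted level-set function at query points (Q, 2).
    """
    X = np.vstack([pts, pts + d * normals, pts - d * normals])
    y = np.concatenate([np.zeros(len(pts)),
                        d * np.ones(len(pts)),
                        -d * np.ones(len(pts))])
    A = np.exp(-shape**2 * ((X[:, None, :] - centres[None, :, :])**2).sum(-1))
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda q: np.exp(
        -shape**2 * ((q[:, None, :] - centres[None, :, :])**2).sum(-1)) @ coef
```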