Sap exudation is the process whereby trees such as sugar maple (Acer saccharum) and red maple (Acer rubrum) generate unusually high positive stem pressure in response to repeated cycles of freeze and thaw. This elevated xylem pressure permits the sap to be harvested over a period of several weeks and hence is a major factor in the viability of the maple syrup industry. The extensive literature on sap exudation documents competing hypotheses regarding the physical and biological mechanisms that drive positive pressure generation in maple, but to date relatively little effort has been expended on devising mathematical models for the exudation process. In this paper, we utilize an existing model of Graf et al. [J. Roy. Soc. Interface 12:20150665, 2015] that describes heat and mass transport within the multiphase gas-liquid-ice mixture in the porous xylem tissue. The model captures the inherent multiscale nature of xylem transport by including phase change and osmotic transport in wood cells on the microscale, which is coupled to heat transport through the tree stem on the macroscale. A parametric study based on simulations with synthetic temperature data identifies the model parameters that have the greatest impact on stem pressure build-up. Measured daily temperature fluctuations are then used as model inputs, and the resulting simulated pressures are compared directly with experimental measurements taken from mature red and sugar maple stems during the sap harvest season. The results demonstrate that our multiscale freeze-thaw model reproduces realistic exudation behavior, thereby providing novel insights into the specific physical mechanisms that dominate positive pressure generation in maple trees.
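To make the two-scale coupling concrete, the following minimal sketch (placeholder physics and constants, not the Graf et al. equations) shows the structure only: a macroscale 1-D radial heat equation through the stem drives a per-point microscale update at every time step, where the hypothetical `microscale_update` stands in for the cell-level phase change, gas compression, and osmotic transport model.

```python
# Minimal structural sketch (placeholder physics, NOT the Graf et al. model):
# explicit finite differences for dT/dt = kappa * (1/r) d/dr(r dT/dr),
# driving a microscale state update at each grid point and time step.
import numpy as np

def microscale_update(state, T_local, dt):
    """Placeholder for the cell-level model (phase change, gas compression,
    osmotic transport) that would return the updated local sap pressure."""
    return state  # the actual microscale physics would go here

def run(T_air, dt=30.0, nr=30, R=0.1, kappa=1.5e-7):
    r = np.linspace(1e-3, R, nr)            # radial grid, avoiding r = 0
    dr = r[1] - r[0]                        # dt chosen small enough for
    T = np.full(nr, 5.0)                    # explicit stability; T in deg C
    micro = [dict(pressure=0.0) for _ in range(nr)]
    history = []
    for Ta in T_air:                        # measured air temperature series
        dTdr = np.gradient(T, dr)
        T = T + dt * kappa * np.gradient(r * dTdr, dr) / r
        T[-1] = Ta                          # bark tracks the air temperature
        micro = [microscale_update(s, Tl, dt) for s, Tl in zip(micro, T)]
        history.append(np.mean([s["pressure"] for s in micro]))
    return np.array(history)
```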
A discretization method with non-matching grids is proposed for the coupled Stokes-Darcy problem that uses a mortar variable at the interface to couple the marker and cell (MAC) method in the Stokes domain with the Raviart-Thomas mixed finite element pair in the Darcy domain. Due to this choice, the method conserves linear momentum and mass locally in the Stokes domain and exhibits local mass conservation in the Darcy domain. The MAC scheme is reformulated as a mixed finite element method on a staggered grid, which allows the proposed scheme to be analyzed as a mortar mixed finite element method. We show that the discrete system is well-posed and derive a priori error estimates that indicate first-order convergence in all variables. The system can be reduced to an interface problem concerning only the mortar variables, leading to a non-overlapping domain decomposition method. Numerical examples are presented to illustrate the theoretical results and the applicability of the method.
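For reference, the coupled problem has the standard form (a hedged summary of the usual formulation; notation may differ from the paper): find the Stokes pair $(\mathbf{u}_S, p_S)$ and the Darcy pair $(\mathbf{u}_D, p_D)$ satisfying
\[
-\nabla\cdot\big(2\mu\,\mathbf{D}(\mathbf{u}_S) - p_S\,\mathbf{I}\big) = \mathbf{f}_S, \quad \nabla\cdot\mathbf{u}_S = 0 \ \ \text{in } \Omega_S; \qquad
\mathbf{u}_D = -\frac{\mathbf{K}}{\mu}\,\nabla p_D, \quad \nabla\cdot\mathbf{u}_D = f_D \ \ \text{in } \Omega_D,
\]
together with the interface conditions
\[
\mathbf{u}_S\cdot\mathbf{n} = \mathbf{u}_D\cdot\mathbf{n}, \qquad
-\big(\boldsymbol{\sigma}(\mathbf{u}_S,p_S)\,\mathbf{n}\big)\cdot\mathbf{n} = p_D, \qquad
-\big(\boldsymbol{\sigma}\,\mathbf{n}\big)\cdot\boldsymbol{\tau} = \frac{\alpha\,\mu}{\sqrt{K}}\,\mathbf{u}_S\cdot\boldsymbol{\tau} \quad \text{on } \Gamma,
\]
where the last identity is the Beavers-Joseph-Saffman slip condition. The mortar unknown acts as an interface pressure (a Lagrange multiplier) enforcing flux continuity across the non-matching grids.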
We consider the numerical behavior of the fixed-stress splitting method for coupled poromechanics as undrained regimes are approached. We explain that pressure stability is related to the splitting error of the scheme rather than to the fact that the discrete saddle-point system never appears in the fixed-stress approach. This observation reconciles previous results regarding the pressure stability of the splitting method. Using examples of compositional poromechanics with application to geological CO$_2$ sequestration, we show that solutions obtained using the fixed-stress scheme with a low-order finite element-finite volume discretization that is not inherently inf-sup stable can exhibit the same pressure oscillations as the corresponding fully implicit scheme. Moreover, pressure jump stabilization can effectively remove these spurious oscillations in the fixed-stress setting, while also improving the efficiency of the scheme in terms of the number of iterations required at each time step to reach convergence.
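For orientation, in the single-phase Biot setting (a simplified illustration; the paper treats compositional poromechanics) one fixed-stress iteration at a given time step first solves the flow problem with the stabilization term $b^2/K_{dr}$,
\[
\left(\frac{1}{M} + \frac{b^2}{K_{dr}}\right)\frac{p^{k+1}-p^{n}}{\Delta t}
- \frac{b^2}{K_{dr}}\,\frac{p^{k}-p^{n}}{\Delta t}
+ b\,\frac{\nabla\cdot(u^{k}-u^{n})}{\Delta t}
- \nabla\cdot\Big(\frac{\kappa}{\mu_f}\,\nabla p^{k+1}\Big) = q,
\]
and then updates the mechanics with the new pressure,
\[
-\nabla\cdot\big(2\mu\,\varepsilon(u^{k+1}) + \lambda(\nabla\cdot u^{k+1})\,I - b\,p^{k+1} I\big) = f .
\]
The term $\frac{b^2}{K_{dr}}(p^{k+1}-p^{k})/\Delta t$ is precisely the splitting error, which vanishes as the iterations converge.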
Normalizing flows are a class of deep generative models that enable efficient sampling and likelihood estimation and achieve attractive performance, particularly in high dimensions. The flow is often implemented as a sequence of invertible residual blocks, and existing works rely on special network architectures and regularization of the flow trajectories. In this paper, we develop a neural ODE flow network called JKO-iFlow, inspired by the Jordan-Kinderlehrer-Otto (JKO) scheme, which unfolds the discrete-time dynamics of the Wasserstein gradient flow. The proposed method stacks residual blocks one after another, allowing efficient block-wise training that avoids sampling SDE trajectories as well as score matching or variational learning, thereby reducing the memory load and the difficulty of end-to-end training. We also develop an adaptive time reparameterization of the flow network with progressive refinement of the induced trajectory in probability space, which further improves the model accuracy. Experiments with synthetic and real data show that the proposed JKO-iFlow network achieves competitive performance compared with existing flow and diffusion models at a significantly reduced computational and memory cost.
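As a caricature of the block-wise objective (a discrete residual sketch under assumptions, not the paper's neural-ODE implementation), each new block $T(x) = x + g(x)$ can be trained to minimize the JKO functional $\mathbb{E}\big[\|T(x)-x\|^2/(2h) + \|T(x)\|^2/2 - \log|\det \nabla T(x)|\big]$ for a standard normal target, after which samples are pushed through the frozen block before the next block is trained:

```python
# Discrete residual caricature of block-wise JKO training on 2-D toy data.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, d=2, width=64):
        super().__init__()
        # NOTE: guaranteed invertibility needs a Lipschitz constraint on g
        # (e.g. spectral normalization); omitted here for brevity.
        self.g = nn.Sequential(nn.Linear(d, width), nn.Tanh(), nn.Linear(width, d))
    def forward(self, x):
        return x + self.g(x)                # residual map T(x) = x + g(x)

def jko_loss(block, x, h):
    y = block(x)
    jac = torch.func.vmap(torch.func.jacrev(block))(x)  # exact per-sample
    logdet = torch.linalg.slogdet(jac).logabsdet        # Jacobian (low-dim only)
    # W2 movement penalty + free energy for a standard normal target
    return (((y - x) ** 2).sum(1) / (2 * h)
            + 0.5 * (y ** 2).sum(1) - logdet).mean()

x = torch.randn(4096, 2) @ torch.tensor([[2.0, 0.0], [1.0, 0.5]])  # toy data
blocks, h = [], 0.5
for k in range(6):                      # stack and train one block at a time
    block = ResBlock()
    opt = torch.optim.Adam(block.parameters(), lr=1e-3)
    for _ in range(500):
        opt.zero_grad()
        loss = jko_loss(block, x, h)
        loss.backward()
        opt.step()
    with torch.no_grad():
        x = block(x)                    # push samples through the frozen block
    blocks.append(block)
```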
Under a generalised estimating equation analysis approach, approximate design theory is used to determine Bayesian D-optimal designs for stepped-wedge cluster randomised trials with a binary outcome. For two examples, considering simple exchangeable and exponential decay correlation structures, we compare the efficiency of the identified optimal designs to balanced stepped-wedge designs and to corresponding stepped-wedge designs determined by optimising under a normal approximation approach. The dependence of the Bayesian D-optimal designs on the assumed correlation structure is explored; for the considered settings, smaller decay in the correlation between outcomes across time periods, along with larger values of the intra-cluster correlation, leads to designs closer to a balanced design being optimal. Unlike for normal data, it is shown that the optimal design need not be centro-symmetric in the binary outcome case. The efficiency of the Bayesian D-optimal design relative to a balanced design can be large, but situations are demonstrated in which the advantages are small. Similarly, the optimal design from a normal approximation approach is often not much less efficient than the Bayesian D-optimal design. Bayesian D-optimal designs can thus be readily identified for stepped-wedge cluster randomised trials with binary outcome data. In certain circumstances, principally ones with strong time period effects, they will indicate that a design unlikely to have been identified by previous methods may be substantially more efficient. However, they require a larger number of assumptions than existing optimal designs, and in many situations existing theory under a normal approximation will provide an easier means of identifying an efficient design for binary outcome data.
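The following sketch illustrates the kind of computation involved (an assumed marginal logit model with one binary observation per cluster-period and an exchangeable working correlation; the paper's model and priors may differ): the GEE information matrix is accumulated per cluster, and the Bayesian D-criterion $\mathbb{E}_{\text{prior}}[\log\det M]$ is approximated by Monte Carlo over prior draws.

```python
# Illustrative Bayesian D-criterion for a stepped-wedge design (assumed
# model: mu_t = expit(beta_t + theta * x_t), exchangeable correlation).
import numpy as np
from scipy.special import expit

def cluster_info(x, beta, theta, alpha):
    """GEE information contribution D' V^{-1} D of one treatment sequence x."""
    T = len(x)
    mu = expit(beta + theta * x)            # marginal means per period
    a = mu * (1.0 - mu)                     # binary variance function
    D = np.zeros((T, T + 1))                # d mu / d (beta_1..beta_T, theta)
    D[:, :T] = np.diag(a)
    D[:, T] = a * x
    R = np.full((T, T), alpha) + (1.0 - alpha) * np.eye(T)  # exchangeable
    V = np.sqrt(np.outer(a, a)) * R         # working covariance
    return D.T @ np.linalg.solve(V, D)

def bayes_D(design, prior_draws, alpha=0.05):
    """design: clusters x periods 0/1 matrix of treatment indicators."""
    vals = [np.linalg.slogdet(sum(cluster_info(x, b, th, alpha)
                                  for x in design))[1]
            for b, th in prior_draws]
    return float(np.mean(vals))

rng = np.random.default_rng(1)
T = 5
balanced = np.array([[0] * k + [1] * (T - k) for k in range(1, T)])  # SW design
draws = [(rng.normal(0.0, 0.3, T), rng.normal(0.5, 0.2)) for _ in range(200)]
print(bayes_D(balanced, draws))
```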
The broad class of multivariate unified skew-normal (SUN) distributions has recently been shown to possess fundamental conjugacy properties. When used as priors for the vector of parameters in general probit, tobit, and multinomial probit models, these distributions yield posteriors that still belong to the SUN family. Although this core result has led to important advancements in Bayesian inference and computation, its applicability beyond likelihoods associated with fully-observed, discretized, or censored realizations from multivariate Gaussian models remains unexplored. This article fills this important gap by proving that the wider family of multivariate unified skew-elliptical (SUE) distributions, which extends SUNs to more general perturbations of elliptical densities, guarantees conjugacy for broader classes of models, beyond those relying on fully-observed, discretized, or censored Gaussians. The result leverages the closure of the SUE family under linear combinations, conditioning, and marginalization to prove that this family is conjugate to the likelihood induced by general multivariate regression models for fully-observed, censored, or dichotomized realizations from skew-elliptical distributions. This advancement substantially enlarges the set of models for which conjugate Bayesian inference is available, covering general formulations arising from elliptical and skew-elliptical families, including the multivariate Student's t and skew-t, among others.
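The flavor of the mechanism is already visible in the known Gaussian-probit special case (a hedged restatement of the established SUN result): with $\theta \sim \mathrm{N}_p(\xi, \Omega)$ and $y_i \mid \theta \sim \mathrm{Bern}\{\Phi(x_i^\top\theta)\}$ independently for $i=1,\dots,n$, Bayes' rule gives
\[
p(\theta \mid y) \;\propto\; \phi_p(\theta - \xi;\,\Omega)\,\Phi_n\big(B X\theta;\, I_n\big),
\qquad B=\mathrm{diag}(2y_1-1,\dots,2y_n-1),
\]
which is the kernel of a SUN density. The article's contribution is to show that the analogous closure holds when both the prior and the data-generating mechanism are taken in the wider skew-elliptical class.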
The incidence of vertebral fragility fracture is increased by the presence of preexisting pathologies such as metastatic disease. Computational tools could support fracture prediction and, consequently, the choice of the best medical treatment. However, these tools must be validated before they can be used in clinical practice. To address this need, in this study subject-specific homogenized finite element models of single vertebrae were generated from micro-CT images for both healthy and metastatic vertebrae and validated against experimental data. More specifically, spine segments were tested under compression and imaged with micro-CT. The displacement field of each vertebra was extracted individually using the digital volume correlation (DVC) full-field technique. Homogenized finite element models of each vertebra were then built from the micro-CT images, applying boundary conditions consistent with the experimental displacements at the endplates. Numerical and experimental displacement and strain fields were finally compared. In addition, the outcomes of a micro-CT-based homogenized model were compared to those of a clinical-CT-based model. Good agreement between experimental and computational displacement fields was found for both healthy and metastatic vertebrae. Comparison between micro-CT-based and clinical-CT-based outcomes showed strong correlations. Furthermore, the models were able to qualitatively identify the regions which experimentally showed the highest strain concentration. In conclusion, the combination of an experimental full-field technique and in silico modelling yielded a promising pipeline for the validation of fracture risk predictors, although further improvements in both fields are needed to quantitatively analyse the post-yield behaviour of the vertebra.
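As an indication of how such field comparisons are typically quantified (illustrative metric choices, not necessarily the authors' exact protocol), one can compute pooled error and regression statistics between the DVC-measured and FE-predicted displacements at matched points:

```python
# Simple sketch of displacement-field agreement metrics (illustrative).
import numpy as np

def field_agreement(u_exp, u_fem):
    """u_exp, u_fem: (n_points, 3) displacement fields at the same points."""
    err = u_fem - u_exp
    rmse = np.sqrt(np.mean(np.sum(err ** 2, axis=1)))       # pooled RMSE
    x, y = u_exp.ravel(), u_fem.ravel()                     # pooled components
    slope, intercept = np.polyfit(x, y, 1)                  # regression line
    r2 = np.corrcoef(x, y)[0, 1] ** 2                       # correlation
    return dict(rmse=rmse, slope=slope, intercept=intercept, r2=r2)
```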
Untargeted metabolomic profiling through liquid chromatography-mass spectrometry (LC-MS) measures a vast array of metabolites within biospecimens, advancing drug development, disease diagnosis, and risk prediction. However, the low throughput of LC-MS poses a major challenge for biomarker discovery, annotation, and experimental comparison, necessitating the merging of multiple datasets. Current data pooling methods encounter practical limitations due to their vulnerability to data variations and their dependence on hyperparameters. Here we introduce GromovMatcher, a flexible and user-friendly algorithm that automatically combines LC-MS datasets using optimal transport. By capitalizing on feature intensity correlation structures, GromovMatcher delivers superior alignment accuracy and robustness compared to existing approaches. The algorithm scales to thousands of features and requires minimal hyperparameter tuning. Applying our method to experimental patient studies of liver and pancreatic cancer, we discover shared metabolic features related to patient alcohol intake, demonstrating how GromovMatcher facilitates the search for biomarkers associated with lifestyle risk factors linked to several cancer types.
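A minimal sketch of the underlying idea, assuming the POT library and omitting the m/z and retention-time filtering that the full GromovMatcher pipeline also exploits: Gromov-Wasserstein matches the two feature sets through their intensity correlation structures alone.

```python
# Hedged sketch: match features of two LC-MS datasets via Gromov-Wasserstein
# on their feature-feature correlation matrices (illustrative only).
import numpy as np
import ot  # POT: Python Optimal Transport

def match_features(X1, X2):
    """X1: samples x p1 intensities, X2: samples x p2; returns coupling, pairs."""
    C1 = np.abs(np.corrcoef(X1, rowvar=False))   # p1 x p1 structure matrix
    C2 = np.abs(np.corrcoef(X2, rowvar=False))   # p2 x p2 structure matrix
    p = np.full(C1.shape[0], 1.0 / C1.shape[0])  # uniform feature weights
    q = np.full(C2.shape[0], 1.0 / C2.shape[0])
    T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')
    # keep row-wise best matches carrying most of the available mass
    pairs = [(i, j) for i, j in enumerate(T.argmax(1))
             if T[i, j] > 0.5 / len(p)]
    return T, pairs
```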
A cyclic proof system is a proof system whose proof figures are trees with cycles. Cut-elimination is a fundamental property of a proof system. It has been conjectured that cut-elimination does not hold in the cyclic proof system for first-order logic with inductive definitions. This paper shows that the conjecture is correct by giving a sequent that is provable in the cyclic proof system but not provable without the cut rule.
Identification of tumor margins is essential for surgical decision-making in glioblastoma patients and provides reliable assistance for neurosurgeons. Despite improvements in deep learning architectures for tumor segmentation over the years, creating a fully autonomous system suitable for clinical settings remains a formidable challenge, because model predictions have not yet reached the level of accuracy and generalizability required for clinical applications. Generative modeling techniques have improved significantly in recent years. In particular, Generative Adversarial Networks (GANs) and denoising diffusion probabilistic models (DDPMs) have been used to generate higher-quality images with fewer artifacts and finer attributes. In this work, we introduce a framework called Re-Diffinet that uses DDPMs to model the discrepancy between the output of a segmentation model such as U-Net and the ground truth. By explicitly modeling this discrepancy, the results show an average improvement of 0.55\% in the Dice score and 16.28\% in HD95 under 5-fold cross-validation, compared to the state-of-the-art U-Net segmentation model.
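A hedged sketch of one training step under this idea (illustrative names and signatures such as `eps_model` are assumptions, not Re-Diffinet's actual API): a diffusion model learns to denoise the residual between the ground-truth mask and the frozen U-Net prediction, conditioned on the image and that prediction.

```python
# Illustrative DDPM training step on the segmentation residual.
import torch
import torch.nn.functional as F

def ddpm_residual_step(eps_model, unet, image, gt_mask, alphas_bar, opt):
    with torch.no_grad():
        pred = unet(image)                    # frozen base segmentation (probs)
    residual = gt_mask - pred                 # discrepancy to be modeled
    b = image.shape[0]
    t = torch.randint(0, len(alphas_bar), (b,), device=image.device)
    ab = alphas_bar[t].view(b, 1, 1, 1)       # cumulative noise schedule
    noise = torch.randn_like(residual)
    x_t = ab.sqrt() * residual + (1 - ab).sqrt() * noise   # forward process
    # hypothetical conditional noise-prediction network
    eps_hat = eps_model(x_t, t, cond=torch.cat([image, pred], dim=1))
    loss = F.mse_loss(eps_hat, noise)         # standard epsilon-prediction loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```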
Mesh-based Graph Neural Networks (GNNs) have recently been shown to simulate complex multiphysics problems with accelerated performance. However, mesh-based GNNs require a large number of message-passing (MP) steps and suffer from over-smoothing for problems involving very fine meshes. In this work, we develop a multiscale mesh-based GNN framework that mimics a conventional iterative multigrid solver, coupled with adaptive mesh refinement (AMR), to mitigate these challenges. We use the framework to accelerate phase field (PF) fracture problems involving coupled partial differential equations with a near-singular operator due to the near-zero modulus inside the crack. We define the initial graph representation using all mesh resolution levels. We perform a series of downsampling steps using Transformer MP GNNs to reach the coarsest graph, followed by upsampling steps to return to the original graph. We use skip connections from the embeddings generated during coarsening to prevent over-smoothing. We use transfer learning (TL) to significantly reduce the size of the training datasets needed to simulate different crack configurations and loading conditions. The trained framework showed accelerated simulation times while maintaining high accuracy for all cases compared to the physics-based PF fracture model. Finally, this work provides a new approach to accelerating a variety of mesh-based engineering multiphysics problems.
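The following sketch conveys the V-cycle structure under assumptions (dense operators, simple message passing in place of Transformer MP layers, and precomputed coarsening operators `P[l]`); it illustrates the down/up-sampling with skip connections, not the authors' implementation.

```python
# Illustrative multigrid-style GNN pass over precomputed coarsening operators.
import torch
import torch.nn as nn

class MPLayer(nn.Module):
    """Simple message passing: h <- MLP([h, A_norm @ h])."""
    def __init__(self, d):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, d))
    def forward(self, h, A):
        return self.mlp(torch.cat([h, A @ h], dim=-1))

class GraphVCycle(nn.Module):
    def __init__(self, d, levels):
        super().__init__()
        self.down = nn.ModuleList(MPLayer(d) for _ in range(levels))
        self.up = nn.ModuleList(MPLayer(d) for _ in range(levels))
    def forward(self, h, A, P):
        # A[l]: normalized adjacency at level l; P[l]: (n_{l+1} x n_l) pooling
        skips = []
        for l, layer in enumerate(self.down):            # coarsen
            h = layer(h, A[l])
            skips.append(h)                              # save for skip path
            h = P[l] @ h                                 # pool to coarser graph
        for l, layer in reversed(list(enumerate(self.up))):  # refine
            h = P[l].T @ h + skips[l]                    # unpool + skip connection
            h = layer(h, A[l])
        return h
```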