The present study introduces an advanced multi-physics and multi-scale modeling approach to investigate colon motility in silico. We introduce a generalized electromechanical framework integrating cellular electrophysiology and smooth muscle contractility, thus advancing a first-of-its-kind computational model of laser tissue soldering after incision resection. The proposed theoretical framework comprises three main elements: a microstructural material model describing the intestinal wall geometry and the composition of its reinforcing fibers, with four fiber families, two active-conductive and two passive; an electrophysiological model describing the propagation of slow waves, based on a fully-coupled nonlinear phenomenological approach; and a thermodynamically consistent mechanical model describing the hyperelastic energetic contributions ruling tissue equilibrium under diverse loading conditions. The active strain approach was adopted to describe tissue electromechanics, exploiting the multiplicative decomposition of the deformation gradient for each active fiber family and solving the governing equations via a staggered finite element scheme. The computational framework was fine-tuned according to state-of-the-art experimental evidence, and extensive numerical analyses allowed us to compare manometric traces computed via numerical simulations with those obtained clinically in human patients. The model proved capable of reproducing, both qualitatively and quantitatively, high- and low-amplitude propagating contractions. Simulations of colon motility after laser tissue soldering demonstrate that the material properties and couplings of the deposited tissue are critical to reproducing a physiological muscular contraction and thus to restoring proper peristaltic activity.
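The multiplicative decomposition underlying the active strain approach can be illustrated with a minimal NumPy sketch (illustrative values only; the fiber direction `f`, activation level `gamma`, and deformation gradient `F` below are hypothetical, not fitted model parameters):

```python
import numpy as np

def active_strain_split(F, gamma, f):
    """Multiplicative split F = F_e @ F_a of the deformation gradient.

    F_a contracts the tissue along the (unit) fiber direction f by a
    factor (1 - gamma); F_e = F @ inv(F_a) is the elastic part that
    enters the hyperelastic energy.  Illustrative sketch only.
    """
    f = f / np.linalg.norm(f)
    Fa = np.eye(3) - gamma * np.outer(f, f)   # active part along the fiber
    Fe = F @ np.linalg.inv(Fa)                # elastic part
    return Fe, Fa

F = np.diag([1.1, 0.95, 0.97])                # hypothetical deformation gradient
Fe, Fa = active_strain_split(F, gamma=0.2, f=np.array([1.0, 0.0, 0.0]))
assert np.allclose(Fe @ Fa, F)                # the split recomposes F
```

Here the active part `F_a` shortens the tissue along the fiber, and the elastic part `F_e = F F_a^{-1}` is what enters the hyperelastic energy.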
This paper contributes to the study of optimal experimental design for Bayesian inverse problems governed by partial differential equations (PDEs). We derive estimates for the parametric regularity of multivariate double integration problems over high-dimensional parameter and data domains arising in Bayesian optimal design problems. We provide a detailed analysis for these double integration problems using two approaches: a full tensor product and a sparse tensor product combination of quasi-Monte Carlo (QMC) cubature rules over the parameter and data domains. Specifically, we show that the latter approach significantly improves the convergence rate, exhibiting performance comparable to that of QMC integration of a single high-dimensional integral. Furthermore, we numerically verify the predicted convergence rates for an elliptic PDE problem with an unknown diffusion coefficient in two spatial dimensions, offering empirical evidence supporting the theoretical results and highlighting practical applicability.
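The difference between the full and sparse tensor product combinations of cubature rules can be sketched in a toy setting. The example below uses simple composite trapezoid rules on $[0,1]^2$ in place of the high-dimensional QMC rules analyzed in the paper, only to show the combination structure (levels $l_1 + l_2 \le L$):

```python
import numpy as np

def trapezoid_rule(level):
    """Composite trapezoid rule on [0, 1] with 2**level + 1 nodes."""
    n = 2**level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def tensor_quad(func, lx, ly):
    """Full tensor product of two 1D rules at levels (lx, ly)."""
    x, wx = trapezoid_rule(lx)
    y, wy = trapezoid_rule(ly)
    X, Y = np.meshgrid(x, y, indexing="ij")
    return float(wx @ func(X, Y) @ wy)

def sparse_quad(func, L):
    """Sparse (combination-technique) tensor product:
    sum over l1 + l2 = L minus sum over l1 + l2 = L - 1."""
    total = sum(tensor_quad(func, l, L - l) for l in range(L + 1))
    total -= sum(tensor_quad(func, l, L - 1 - l) for l in range(L))
    return total

f = lambda x, y: np.exp(x + y)
exact = (np.e - 1.0) ** 2          # integral of exp(x+y) over the unit square
```

At level $L$ the sparse combination uses far fewer nodes than the full tensor rule at level $(L, L)$ while, for smooth integrands, retaining a comparable error up to a logarithmic factor.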
We propose an algorithm to construct optimal exact designs (EDs). Most of the work in the optimal regression design literature focuses on the approximate design (AD) paradigm due to its desirable properties, including the optimality verification conditions derived by Kiefer (1959, 1974). ADs may have unbalanced weights, and practitioners may have difficulty implementing them with a designated run size $n$. Some EDs are constructed using rounding methods to get an integer number of runs at each support point of an AD, but this approach may not yield optimal results. Constructing EDs directly may require a new combinatorial construction for each $n$, and there is no unified approach for doing so. Therefore, we develop a systematic way to construct EDs for any given $n$. Our method transforms ADs into EDs in two steps while retaining high statistical efficiency. The first step constructs an AD by exploiting the convex nature of many design criteria. The second step employs a simulated annealing algorithm to search for the ED stochastically. Through several applications, we demonstrate the utility of our method for various design problems. Additionally, we show that the design efficiency approaches unity as the number of design points increases.
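The second, simulated annealing step can be sketched on a textbook example (a toy sketch, not the authors' implementation): searching an exact 4-run D-optimal design for the straight-line model $y = b_0 + b_1 x$ on $[-1, 1]$, whose known optimum places two runs at each endpoint ($\det X^\top X = 16$):

```python
import numpy as np

rng = np.random.default_rng(0)

def d_criterion(points):
    """D-criterion: det of the information matrix X^T X
    for the simple linear model y = b0 + b1*x."""
    X = np.column_stack([np.ones_like(points), points])
    return np.linalg.det(X.T @ X)

def anneal_exact_design(candidates, n, iters=2000, t0=1.0):
    design = rng.choice(candidates, size=n)              # random starting ED
    val = d_criterion(design)
    best, best_val = design.copy(), val
    for k in range(iters):
        temp = t0 * (1.0 - k / iters) + 1e-9             # linear cooling
        trial = design.copy()
        trial[rng.integers(n)] = rng.choice(candidates)  # move one run
        tval = d_criterion(trial)
        # Metropolis acceptance: always take improvements, sometimes worse moves
        if tval > val or rng.random() < np.exp((tval - val) / temp):
            design, val = trial, tval
            if val > best_val:
                best, best_val = design.copy(), val
    return best, best_val

cands = np.linspace(-1.0, 1.0, 21)
design, det_val = anneal_exact_design(cands, n=4)
```

With a modest iteration budget the stochastic search recovers the known optimum of two runs at each of $\pm 1$.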
Mathematical models of protein-protein dynamics, such as the heterodimer model, play a crucial role in understanding many physical phenomena. This model is a system of two semilinear parabolic partial differential equations describing the evolution and mutual interaction of biological species. An example is neurodegenerative disease progression in some significant pathologies, such as Alzheimer's and Parkinson's diseases, characterized by the accumulation and propagation of toxic prionic proteins. This article presents and analyzes a flexible high-order discretization method for the numerical approximation of the heterodimer model. We propose a space discretization based on a Discontinuous Galerkin method on polygonal/polyhedral grids, which provides flexibility in handling complex geometries. Concerning the semi-discrete formulation, we prove stability and a priori error estimates for the first time. Next, we adopt a $\theta$-method scheme for time integration. Convergence tests are carried out to demonstrate the theoretical bounds and the ability of the method to approximate traveling wave solutions, considering also complex geometries such as brain sections reconstructed from medical images. Finally, the proposed scheme is tested in a practical case stemming from neuroscience applications, namely the simulation of the spread of $\alpha$-synuclein in Parkinson's disease within a two-dimensional sagittal brain section geometry reconstructed from medical images.
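The $\theta$-method used for time integration can be sketched on the scalar test equation $u' = \lambda u$, a stand-in for the semi-discrete system (the values of $\lambda$ and the step counts below are arbitrary illustrative choices); $\theta = 1/2$ (Crank-Nicolson) is second-order accurate:

```python
import numpy as np

def theta_method(lam, u0, T, steps, theta):
    """theta-method for u' = lam*u:
    u_{n+1} = u_n + h*(theta*lam*u_{n+1} + (1-theta)*lam*u_n),
    solved in closed form for this linear test case."""
    h = T / steps
    amp = (1.0 + (1.0 - theta) * h * lam) / (1.0 - theta * h * lam)
    u = u0
    for _ in range(steps):
        u *= amp
    return u

lam, u0, T = -2.0, 1.0, 1.0
exact = u0 * np.exp(lam * T)
err_cn  = abs(theta_method(lam, u0, T, 40, 0.5) - exact)
err_cn2 = abs(theta_method(lam, u0, T, 80, 0.5) - exact)
# halving h divides the Crank-Nicolson (theta = 1/2) error by roughly 4
```

For the full heterodimer system the same update is applied to the Discontinuous Galerkin semi-discrete operator, with the nonlinear coupling handled at each step.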
In this paper, we investigate the behavior of gradient descent algorithms in physics-informed machine learning methods, such as physics-informed neural networks (PINNs), which minimize residuals connected to partial differential equations (PDEs). Our key result is that the difficulty in training these models is closely related to the conditioning of a specific differential operator, which in turn is associated with the Hermitian square of the differential operator of the underlying PDE. If this operator is ill-conditioned, training is slow or infeasible; preconditioning it is therefore crucial. We employ both rigorous mathematical analysis and empirical evaluations to investigate various strategies, explaining how they better condition this critical operator and consequently improve training.
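The role of the Hermitian square can be seen in a small linear sketch: for a discretized operator $A$ (here a 1D finite-difference Laplacian standing in for the PDE operator, not a PINN), the residual loss effectively involves $A^\top A$, whose condition number is the square of that of $A$:

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
# 1D discrete Laplacian (Dirichlet BCs), a stand-in for the PDE operator
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
kA  = np.linalg.cond(A)
kAA = np.linalg.cond(A.T @ A)   # the Hermitian square seen by the residual loss
# kAA ~ kA**2: squaring the operator squares the condition number,
# which is why preconditioning is crucial for residual-based training.
```

Even a modest grid yields a condition number above $10^6$ for the Hermitian square, illustrating why gradient descent on the raw residual stalls without preconditioning.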
Superconvergence of differential structure on discretized surfaces is studied in this paper. The newly introduced geometric supercloseness provides us with a fundamental tool to prove the superconvergence of gradient recovery on deviated surfaces. An algorithmic framework for gradient recovery without exact geometric information is introduced. Several numerical examples are documented to validate the theoretical results.
We study the potential of noisy labels $y$ to pretrain semantic segmentation models in a multi-modal learning framework for geospatial applications. Specifically, we propose a novel Cross-modal Sample Selection method (CromSS) that utilizes the class distributions $P^{(d)}(x,c)$ over pixels $x$ and classes $c$ modelled by multiple sensors/modalities $d$ of a given geospatial scene. Consistency of predictions across sensors $d$ is jointly informed by the entropy of $P^{(d)}(x,c)$. We determine the sampling of noisy labels by the confidence of each sensor $d$ in the noisy class label, $P^{(d)}(x,c=y(x))$. To verify the performance of our approach, we conduct experiments with Sentinel-1 (radar) and Sentinel-2 (optical) satellite imagery from the globally-sampled SSL4EO-S12 dataset. We pair those scenes with 9-class noisy labels sourced from the Google Dynamic World project for pretraining. Transfer learning evaluations (downstream task) on the DFC2020 dataset confirm the effectiveness of the proposed method for remote sensing image segmentation.
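A simplified sketch of confidence-guided selection (a toy rule, not the exact CromSS criterion) with made-up per-sensor class distributions:

```python
import numpy as np

def select_pixels(P, y, tau=0.5):
    """Confidence-guided pixel selection, a simplified sketch of the idea.

    P : (D, N, C) class distributions from D sensors over N pixels, C classes
    y : (N,) noisy labels
    Keep a pixel when every sensor's confidence in the noisy label,
    P^(d)(x, c=y(x)), exceeds the threshold tau (hypothetical rule).
    """
    conf = P[:, np.arange(len(y)), y]      # (D, N) confidence per sensor
    return conf.min(axis=0) > tau          # require cross-sensor agreement

# two "sensors", 4 pixels, 3 classes (made-up numbers)
P = np.array([
    [[0.8, 0.1, 0.1], [0.2, 0.6, 0.2], [0.1, 0.1, 0.8], [0.3, 0.4, 0.3]],
    [[0.7, 0.2, 0.1], [0.1, 0.2, 0.7], [0.1, 0.2, 0.7], [0.3, 0.3, 0.4]],
])
y = np.array([0, 1, 2, 2])
mask = select_pixels(P, y)   # pixels where both sensors trust the noisy label
```

Pixels 1 and 3 are rejected because at least one sensor assigns low probability to the noisy label, mimicking how cross-sensor confidence filters label noise before pretraining.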
We discuss the design of an invariant measure-preserving transformed dynamics for the numerical treatment of Langevin dynamics based on rescaling of time, with the goal of sampling from an invariant measure. Given an appropriate monitor function which characterizes the numerical difficulty of the problem as a function of the state of the system, this method allows the stepsizes to be reduced only when necessary, facilitating efficient recovery of long-time behavior. We study both the overdamped and underdamped Langevin dynamics. We investigate how an appropriate correction term that ensures preservation of the invariant measure should be incorporated into a numerical splitting scheme. Finally, we demonstrate the use of the technique in several model systems, including a Bayesian sampling problem with a steep prior.
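For the overdamped case, one invariant-measure-preserving transformed dynamics takes the form $dX = (-g\,\nabla U + \nabla g)\,dt + \sqrt{2g}\,dW$, where the $\nabla g$ term is the correction that keeps $e^{-U}$ invariant. A minimal 1D Euler-Maruyama sketch with an arbitrary illustrative choice of $g$ (not the paper's monitor function or splitting scheme):

```python
import numpy as np

rng = np.random.default_rng(1)

# Target: standard Gaussian, U(x) = x^2/2, so grad U = x.
gradU = lambda x: x
# Rescaling g(x): smaller effective steps where |x| is large
# (an arbitrary illustrative choice).
g  = lambda x: 1.0 / (1.0 + x * x)
gp = lambda x: -2.0 * x / (1.0 + x * x) ** 2   # g'(x): the measure correction

def transformed_overdamped(x0, h, nsteps):
    """Euler-Maruyama for dX = (-g*gradU + g') dt + sqrt(2g) dW,
    which leaves exp(-U) invariant in the continuous limit."""
    xi = rng.standard_normal(nsteps)
    xs = np.empty(nsteps)
    x = x0
    for i in range(nsteps):
        x += h * (-g(x) * gradU(x) + gp(x)) + np.sqrt(2.0 * h * g(x)) * xi[i]
        xs[i] = x
    return xs

samples = transformed_overdamped(0.0, h=0.01, nsteps=200_000)[20_000:]
# sample mean and variance should match the standard Gaussian target
```

Dropping the `gp` correction term biases the sampled distribution, which is exactly the issue the measure-preserving construction addresses.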
Linkage methods are among the most popular algorithms for hierarchical clustering. Despite their relevance, current knowledge regarding the quality of the clusterings they produce is limited. Here, we improve the available bounds on the maximum diameter of the clustering obtained by complete-link for metric spaces. One of our new bounds, in contrast to the existing ones, allows us to separate complete-link from single-link in terms of approximation for the diameter, which corroborates the common perception that the former is more suitable than the latter when the goal is producing compact clusters. We also show that our techniques can be employed to derive upper bounds on the cohesion of a class of linkage methods that includes the quite popular average-link.
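A naive sketch of complete-link clustering and the diameter quantity it controls (quadratic-time toy code on a 1D metric, for illustration only):

```python
import numpy as np

def complete_link(points, k):
    """Naive agglomerative clustering with the complete-link rule:
    repeatedly merge the two clusters whose *maximum* inter-point
    distance is smallest, until k clusters remain."""
    clusters = [[i] for i in range(len(points))]
    D = np.abs(points[:, None] - points[None, :])   # pairwise distances (1D)
    while len(clusters) > k:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = max(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)
    return clusters

def diameter(cluster, points):
    return max(abs(points[i] - points[j]) for i in cluster for j in cluster)

pts = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
clusters = complete_link(pts, k=2)
max_diam = max(diameter(c, pts) for c in clusters)   # the quantity bounded
```

On this well-separated instance complete-link recovers the two tight groups, and the maximum diameter (0.2) is the quantity whose worst-case approximation the paper's bounds address.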
This paper presents a particle-based optimization method for minimization problems with equality constraints, particularly in cases where the loss function is non-differentiable or non-convex. The proposed method combines components of the consensus-based optimization (CBO) algorithm with a newly introduced forcing term directed at the constraint set. A rigorous mean-field limit of the particle system is derived, and the convergence of the mean-field limit to the constrained minimizer is established. Additionally, we introduce a stable discretized algorithm and conduct various numerical experiments to demonstrate the performance of the proposed method.
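A toy sketch of the idea (a simplified scheme with made-up parameters, not the authors' algorithm): a CBO-style consensus drift plus a forcing term pulling particles toward the constraint set $\{g = 0\}$, here minimizing $x^2 + y^2$ subject to $x + y = 1$, whose constrained minimizer is $(0.5, 0.5)$:

```python
import numpy as np

rng = np.random.default_rng(3)

f    = lambda X: np.sum(X**2, axis=1)        # loss (non-convexity allowed in general)
gcon = lambda X: X[:, 0] + X[:, 1] - 1.0     # equality constraint g(x) = 0

N, steps = 200, 400
lam, sig, beta, mu, dt = 1.0, 0.3, 50.0, 2.0, 0.05
X = rng.normal(0.0, 2.0, size=(N, 2))        # initial particle ensemble
for _ in range(steps):
    w = np.exp(-beta * (f(X) - f(X).min()))  # Gibbs weights (stabilized)
    xbar = (w[:, None] * X).sum(axis=0) / w.sum()       # consensus point
    drift = -lam * (X - xbar) * dt                      # CBO contraction
    force = -mu * gcon(X)[:, None] * np.array([1.0, 1.0]) * dt  # toward {g=0}
    noise = sig * (X - xbar) * rng.normal(size=X.shape) * np.sqrt(dt)
    X = X + drift + force + noise
# particles should concentrate near the constrained minimizer (0.5, 0.5)
```

The forcing term here is a plain quadratic-penalty gradient, $-\mu\, g(X)\,\nabla g$; the noise is multiplicative in the distance to consensus, so it vanishes as the ensemble concentrates.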
We establish a connection between problems studied in rigidity theory and matroids arising from linear algebraic constructions like tensor products and symmetric products. A special case of this correspondence identifies the problem of giving a description of the correctable erasure patterns in a maximally recoverable tensor code with the problem of describing bipartite rigid graphs or low-rank completable matrix patterns. Additionally, we relate dependencies among symmetric products of generic vectors to graph rigidity and symmetric matrix completion. With an eye toward applications to computer science, we study the dependency of these matroids on the characteristic by giving new combinatorial descriptions in several cases, including the first description of the correctable patterns in an (m, n, a=2, b=2) maximally recoverable tensor code.