Simulation of the monodomain equation, crucial for modeling the heart's electrical activity, faces scalability limits when traditional numerical methods only parallelize in space. To make better use of large multi-processor computers by distributing the computational load more effectively, time parallelization is essential. We introduce a high-order parallel-in-time method that addresses the substantial computational challenges posed by the stiff, multiscale, and nonlinear nature of cardiac dynamics. Our method combines the semi-implicit and exponential spectral deferred correction methods, yielding a hybrid method that we extend to parallel-in-time via the PFASST framework. We thoroughly evaluate the stability, accuracy, and robustness of the proposed parallel-in-time method through extensive numerical experiments using practical ionic models such as the ten Tusscher-Panfilov model. The results underscore the method's potential to significantly enhance real-time and high-fidelity simulations in biomedical research and clinical applications.
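As a rough illustration (our notation, not necessarily the exact discretization of the paper), a semi-implicit spectral deferred correction sweep for a split right-hand side $f = f^{I} + f^{E}$ updates the solution at collocation nodes $t_1 < \dots < t_M$ via
\[
u_{m+1}^{k+1} = u_{m}^{k+1} + \Delta t_m \big[ f^{I}(u_{m+1}^{k+1}) - f^{I}(u_{m+1}^{k}) \big] + \Delta t_m \big[ f^{E}(u_{m}^{k+1}) - f^{E}(u_{m}^{k}) \big] + \sum_{j=1}^{M} q_{m,j}\, f(u_{j}^{k}),
\]
where the $q_{m,j}$ are spectral quadrature weights approximating $\int_{t_m}^{t_{m+1}} f(u^k)\,\mathrm{d}t$. The exponential variant instead integrates the stiff linear part exactly with a matrix exponential, and PFASST runs such sweeps concurrently over multiple time slices on a hierarchy of space-time levels.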
We study the numerical approximation of the stochastic heat equation with a distributional reaction term $b$. Under a condition on the Besov regularity of $b$, it was proven recently that a strong solution exists and is unique in the pathwise sense, in a class of H\"older continuous processes. For a suitable choice of sequence $(b^k)_{k\in \mathbb{N}}$ approximating $b$, we prove that the error between the solution $u$ of the SPDE with reaction term $b$ and its tamed Euler finite-difference scheme with mollified drift $b^k$ converges to $0$ in $L^m(\Omega)$ with a rate that depends on the Besov regularity of $b$. Two cases are of particular interest: first, even when $b$ is only a (finite) measure, a rate of convergence is obtained. Second, when $b$ is a bounded measurable function, the (almost) optimal rate of convergence $(\frac{1}{2}-\varepsilon)$ in space and $(\frac{1}{4}-\varepsilon)$ in time is achieved. Stochastic sewing techniques are used in the proofs, in particular to deduce new regularising properties of the discrete Ornstein-Uhlenbeck process.
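One possible concrete form of such a scheme (a schematic in our notation; the precise taming and mollification used in the paper may differ) is, with spatial grid points $x_i$, time step $\tau$, discrete Laplacian $\Delta_h$, and noise increments $\delta W^{n}_{i}$,
\[
u^{n+1}_{i} \;=\; u^{n}_{i} + \tau\, \Delta_h u^{n}_{i} + \frac{\tau\, b^{k}(u^{n}_{i})}{1 + \tau\, |b^{k}(u^{n}_{i})|} + \delta W^{n}_{i},
\]
where $b^{k} = b * \rho_{k}$ is a mollification of $b$ and the taming factor keeps the drift increment bounded even when $b^{k}$ becomes large.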
Motivated by the recent successful application of physics-informed neural networks (PINNs) to solve Boltzmann-type equations [S. Jin, Z. Ma, and K. Wu, J. Sci. Comput., 94 (2023), pp. 57], we provide a rigorous error analysis for PINNs in approximating the solution of the Boltzmann equation near a global Maxwellian. The challenge arises from the nonlocal quadratic interaction term defined in the unbounded domain of velocity space. Analyzing this term on an unbounded domain requires the inclusion of a truncation function, which demands delicate analysis techniques. As a generalization of this analysis, we also provide a proof of the asymptotic-preserving property when using micro-macro decomposition-based neural networks.
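Schematically (and independently of the specific formulation analyzed in the paper), a PINN approximates the distribution function by a network $f_\theta(t,x,v)$ trained to minimize a residual-based loss of the form
\[
\mathcal{L}(\theta) \;=\; \big\| \partial_t f_\theta + v \cdot \nabla_x f_\theta - Q(f_\theta, f_\theta) \big\|_{L^2(\mathcal{D})}^{2} + \big\| f_\theta(0,\cdot,\cdot) - f_0 \big\|_{L^2}^{2},
\]
with $Q$ the collision operator. The error analysis must quantify how a small loss value controls the distance between $f_\theta$ and the true solution, which is precisely where the unbounded velocity domain and the truncation function enter.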
We propose a topological mapping and localization system able to operate on real human colonoscopies, despite significant shape and illumination changes. The map is a graph where each node codes a colon location by a set of real images, while edges represent traversability between nodes. For close-in-time images, where scene changes are minor, place recognition can be successfully managed with recent transformer-based local feature matching algorithms. However, under long-term changes, such as different colonoscopies of the same patient, feature-based matching fails. To address this, we train a deep global descriptor on real colonoscopies that achieves high recall under significant scene changes. The addition of a Bayesian filter boosts the accuracy of long-term place recognition, enabling relocalization in a previously built map. Our experiments show that ColonMapper is able to autonomously build a map and localize against it in two important use cases: localization within the same colonoscopy or within different colonoscopies of the same patient. Code: https://github.com/jmorlana/ColonMapper.
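The Bayesian filtering step can be sketched as a standard discrete recursion over map nodes (our notation; the exact likelihood and transition models are design choices of the system). If $x_t$ denotes the node occupied at time $t$ and $z_t$ the current image descriptor, then
\[
p(x_t = i \mid z_{1:t}) \;\propto\; p(z_t \mid x_t = i)\, \sum_{j} p(x_t = i \mid x_{t-1} = j)\, p(x_{t-1} = j \mid z_{1:t-1}),
\]
so that temporal consistency along graph edges filters out isolated descriptor mismatches that would otherwise cause spurious relocalizations.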
One of the most promising applications of machine learning (ML) in computational physics is to accelerate the solution of partial differential equations (PDEs). The key objective of ML-based PDE solvers is to output a sufficiently accurate solution faster than standard numerical methods, which are used as a baseline comparison. We first perform a systematic review of the ML-for-PDE solving literature. Of articles that use ML to solve a fluid-related PDE and claim to outperform a standard numerical method, we determine that 79% (60/76) compare to a weak baseline. Second, we find evidence that reporting biases, especially outcome reporting bias and publication bias, are widespread. We conclude that ML-for-PDE solving research is overoptimistic: weak baselines lead to overly positive results, while reporting biases lead to underreporting of negative results. To a large extent, these issues appear to be caused by factors similar to those of past reproducibility crises: researcher degrees of freedom and a bias towards positive results. We call for bottom-up cultural changes to minimize biased reporting as well as top-down structural reforms intended to reduce perverse incentives for doing so.
We consider a non-linear Bayesian data assimilation model for the periodic two-dimensional Navier-Stokes equations with initial condition modelled by a Gaussian process prior. We show that if the system is updated with sufficiently many discrete noisy measurements of the velocity field, then the posterior distribution eventually concentrates near the ground truth solution of the time evolution equation, and in particular that the initial condition is recovered consistently by the posterior mean vector field. We further show that the convergence rate cannot in general be faster than inverse logarithmic in the sample size, but we describe specific conditions on the initial condition under which faster rates are possible. In the proofs we provide an explicit quantitative estimate for backward uniqueness of solutions of the two-dimensional Navier-Stokes equations.
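Concretely, the measurement model can be thought of as (a schematic version of the setting, in our notation)
\[
Y_i \;=\; u(t_i, x_i) + \varepsilon_i, \qquad \varepsilon_i \overset{iid}{\sim} \mathcal{N}(0, \sigma^2 I), \quad i = 1, \dots, N,
\]
where $u$ solves the Navier-Stokes equations with the unknown initial condition drawn from the Gaussian process prior, and posterior contraction is studied in the limit of many measurements $N \to \infty$.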
We propose a boundary integral formulation for the dynamic problem of electromagnetic scattering and transmission by homogeneous dielectric obstacles. In the spirit of Costabel and Stephan, we use the transmission conditions to reduce the number of unknown densities and to formulate a system of coupled boundary integral equations describing the scattered and transmitted waves. The system is transformed into the Laplace domain, where it is proven to be stable and uniquely solvable. The Laplace domain stability estimates are then used to establish the stability and unique solvability of the original time domain problem. Finally, we show how the bounds obtained in both the Laplace and time domains can be used to derive error estimates for semi-discrete Galerkin discretizations in space and for fully discrete numerical schemes that use Convolution Quadrature for time discretization and a conforming Galerkin method for discretization of the space variables.
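For context, Convolution Quadrature approximates a time-domain convolution whose kernel is known through its Laplace transform $K(s)$ by a discrete convolution (here in multistep form)
\[
\big(K(\partial_t) g\big)(t_n) \;\approx\; \sum_{j=0}^{n} \omega_{n-j}(\Delta t)\, g(t_j), \qquad \sum_{n \ge 0} \omega_n(\Delta t)\, \zeta^n \;=\; K\!\left(\frac{\delta(\zeta)}{\Delta t}\right),
\]
where $\delta(\zeta)$ is the generating function of the underlying time-stepping method. This construction is why bounds on the boundary integral operators in the Laplace domain transfer directly into stability and error estimates for the fully discrete scheme.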
We consider the systematic numerical approximation of a viscoelastic phase separation model that describes the demixing of a polymer-solvent mixture. An unconditionally stable discretization method is proposed based on a finite element approximation in space and a variational time discretization strategy. The proposed method preserves the energy-dissipation structure of the underlying system exactly and allows us to establish a fully discrete nonlinear stability estimate in natural norms based on the concept of relative energy. These estimates are used to derive order-optimal error estimates for the method under minimal smoothness assumptions on the problem data, despite the presence of various strong nonlinearities in the equations. The theoretical results and main properties of the method are illustrated by numerical simulations, which also demonstrate the capability to reproduce the relevant physical effects observed in experiments.
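The relative energy alluded to above is, in generic form (notation ours),
\[
\mathcal{E}(u \mid v) \;=\; \mathcal{E}(u) - \mathcal{E}(v) - \langle \mathcal{E}'(v),\, u - v \rangle,
\]
i.e. the energy functional linearized around a comparison state $v$. Its discrete counterpart serves as a nonlinear stability measure, and bounding the relative energy between the discrete and exact solutions is what yields the error estimates under minimal smoothness assumptions.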
We propose an extremely versatile approach to address a large family of matrix nearness problems, possibly with additional linear constraints. Our method is based on splitting a matrix nearness problem into two nested optimization problems, of which the inner one can be solved either exactly or cheaply, while the outer one can be recast as an unconstrained optimization task over a smooth real Riemannian manifold. We observe that this paradigm applies to many matrix nearness problems of practical interest appearing in the literature, thus revealing that they are equivalent in this sense to a Riemannian optimization problem. We also show that the objective function to be minimized on the Riemannian manifold can be discontinuous, thus requiring regularization techniques, and we give conditions for this to happen. Finally, we demonstrate the practical applicability of our method by implementing it for a number of matrix nearness problems that are relevant for applications and are currently considered very demanding in practice. Extensive numerical experiments demonstrate that our method often greatly outperforms its predecessors, including algorithms specifically designed for those particular problems.
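In generic terms (a schematic of the splitting, not the exact formulation of any specific nearness problem), the nested structure reads
\[
\min_{v \in \mathcal{M}} f(v), \qquad f(v) \;=\; \min_{X \in \mathcal{C}(v)} \| A - X \|,
\]
where the inner problem over the constraint set $\mathcal{C}(v)$ admits an exact or cheap solution for fixed $v$, and the outer problem over the smooth Riemannian manifold $\mathcal{M}$ is handled by standard Riemannian optimization. The possible discontinuity of $f$ as a function of $v$ is what motivates the regularization techniques discussed above.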
We consider the parallel-in-time solution of hyperbolic partial differential equation (PDE) systems in one spatial dimension, both linear and nonlinear. In the nonlinear setting, the discretized equations are solved with a preconditioned residual iteration based on a global linearization. The linear(ized) equation systems are approximately solved parallel-in-time using a block preconditioner applied in the characteristic variables of the underlying linear(ized) hyperbolic PDE. This change of variables is motivated by the observation that inter-variable coupling for characteristic variables is weak relative to intra-variable coupling, at least locally where spatio-temporal variations in the eigenvectors of the associated flux Jacobian are sufficiently small. For an $\ell$-dimensional system of PDEs, applying the preconditioner consists of solving a sequence of $\ell$ scalar linear(ized)-advection-like problems, each being associated with a different characteristic wave-speed in the underlying linear(ized) PDE. We approximately solve these linear advection problems using multigrid reduction-in-time (MGRIT); however, any other suitable parallel-in-time method could be used. Numerical examples are shown for the (linear) acoustics equations in heterogeneous media, and for the (nonlinear) shallow water equations and Euler equations of gas dynamics with shocks and rarefactions.
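The change of variables is the classical characteristic decomposition: writing the linear(ized) system as $u_t + A u_x = 0$ with $A = R \Lambda R^{-1}$ and $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_\ell)$, the characteristic variables $w = R^{-1} u$ satisfy (exactly for constant coefficients, approximately otherwise)
\[
\partial_t w_p + \lambda_p\, \partial_x w_p \;=\; 0, \qquad p = 1, \dots, \ell,
\]
so each application of the block preconditioner amounts to one scalar advection-like solve per wave speed $\lambda_p$.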
This work explores multi-modal inference in a high-dimensional simplified model, analytically quantifying the performance gain of multi-modal inference over analyzing the modalities in isolation. We present the Bayes-optimal performance and weak recovery thresholds in a model where the objective is to recover the latent structures from two noisy data matrices with correlated spikes. The paper derives the approximate message passing (AMP) algorithm for this model and characterizes its performance in the high-dimensional limit via the associated state evolution. The analysis holds for a broad range of priors and noise channels, which can differ across modalities. The linearization of AMP is compared numerically to the widely used partial least squares (PLS) and canonical correlation analysis (CCA) methods, which are both observed to suffer from a sub-optimal recovery threshold.
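One way to instantiate such a setting (a schematic, not necessarily the exact observation model of the paper) is a pair of spiked matrices
\[
Y_a \;=\; \frac{\lambda_a}{\sqrt{n}}\, u_a v^{\top} + Z_a, \qquad a \in \{1, 2\},
\]
where the signals $u_1, u_2$ (and/or a shared factor $v$) are drawn from a correlated prior, the noise matrices $Z_a$ have i.i.d. entries, and multi-modal inference aims to exploit the correlation between the two spikes rather than estimating each modality separately.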