In this paper, to the best of our knowledge, we make the first attempt at studying parametric semilinear elliptic eigenvalue problems with a parametric coefficient and power-type nonlinearities. The parametric coefficient is assumed to depend affinely on countably many parameters through an appropriate class of sequences of functions. We obtain upper bounds on the mixed derivatives of the ground eigenpairs of the same form as those obtained recently for the linear eigenvalue problem. The three most essential ingredients for this estimate are the parametric analyticity of the ground eigenpairs, the uniform boundedness of the ground eigenpairs, and the uniformly positive differences between ground eigenvalues of linear operators. All three ingredients require new techniques and a careful investigation of the nonlinear eigenvalue problem, which we present in this paper. As an application, treating each parameter as a uniformly distributed random variable, we estimate the expectation of the eigenpairs using a randomly shifted quasi-Monte Carlo lattice rule and prove a dimension-independent error bound.
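Since the expectation of the eigenpairs is a high-dimensional integral over uniform parameters, a randomly shifted rank-1 lattice rule can be sketched in a few lines. In the sketch below, the generating vector z and the integrand f (e.g. a ground-eigenvalue solve for a given parameter vector) are assumed given; this is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def qmc_shifted_lattice(f, z, n_points, n_shifts, rng=None):
    """Estimate E[f(y)], y ~ U[0,1]^s, with a randomly shifted rank-1 lattice rule.

    f        : integrand mapping an array of shape (s,) to a scalar
    z        : integer generating vector of length s (assumed given)
    n_points : number of lattice points N
    n_shifts : number of i.i.d. uniform shifts (enables error estimation)
    """
    rng = np.random.default_rng(rng)
    s = len(z)
    i = np.arange(n_points)[:, None]                           # indices 0..N-1
    base = np.mod(i * np.asarray(z)[None, :] / n_points, 1.0)  # rank-1 lattice
    estimates = []
    for _ in range(n_shifts):
        shift = rng.random(s)                 # one uniform shift per replication
        pts = np.mod(base + shift, 1.0)       # shifted lattice points in [0,1)^s
        estimates.append(np.mean([f(y) for y in pts]))
    estimates = np.asarray(estimates)
    # mean over shifts approximates E[f]; std error gauges the QMC error
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_shifts)
```

Averaging over the random shifts both estimates the expectation and yields a practical error indicator via the sample standard deviation.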
The efficient approximation of parametric PDEs is of tremendous importance in science and engineering. In this paper, we show how one can train Galerkin discretizations to efficiently learn quantities of interest of solutions to a parametric PDE. The central component of our approach is an efficient neural-network-weighted Minimal-Residual formulation which, after training, provides Galerkin-based approximations in standard discrete spaces whose quantities of interest are accurate, regardless of the coarseness of the discrete space.
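To fix ideas, a discrete minimal-residual solve with a prescribed residual weighting reduces to a weighted least-squares problem. In the sketch below, the weight vector w stands in for a trained network's output; the matrix A, load f, and w are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def weighted_minres(A, f, w):
    """Solve min_u || diag(w) (A u - f) ||_2, a weighted minimal-residual
    problem; in the paper the weighting is produced by a neural network,
    here it is simply a given vector (assumption)."""
    Aw = A * w[:, None]                      # row-scale the system matrix
    u, *_ = np.linalg.lstsq(Aw, w * f, rcond=None)
    return u
```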
In this paper, we design a new kind of high-order inverse Lax-Wendroff (ILW) boundary treatment for solving hyperbolic conservation laws with the finite difference method on a Cartesian mesh. The new ILW method decomposes the construction of ghost-point values near the inflow boundary into two steps: interpolation and extrapolation. First, we assign values to artificial auxiliary points through a polynomial interpolating the interior points near the boundary. Then, we construct a Hermite extrapolation based on those auxiliary point values and the spatial derivatives at the boundary obtained via the ILW procedure; this polynomial gives the approximation to the ghost-point value. With an appropriate selection of the artificial auxiliary points, high-order accuracy and stable results can be achieved. Moreover, theoretical analysis indicates that, compared with the original ILW method, especially at higher orders of accuracy, the proposed method requires fewer terms from the relatively complicated ILW procedure and thus improves computational efficiency while maintaining accuracy and stability. We perform numerical experiments on several benchmarks, including one- and two-dimensional scalar equations and systems. The robustness and efficiency of the proposed scheme are numerically verified.
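In one space dimension, the two-step construction condenses into a small linear solve. The sketch below assumes the auxiliary point locations and the ILW boundary derivatives are already chosen, both of which are scheme-design decisions in the paper; all names are illustrative.

```python
import math
import numpy as np

def ghost_value(x_interior, u_interior, x_aux, x_bnd, bnd_derivs, x_ghost):
    """Sketch of the two-step ghost-point construction.

    1) Interpolate the interior data and evaluate at auxiliary points.
    2) Build a Hermite-type polynomial matching the auxiliary values and the
       boundary derivatives (supplied, e.g., by an ILW procedure), then
       evaluate it at the ghost point.
    """
    # Step 1: polynomial interpolation of interior points near the boundary
    interp = np.polynomial.Polynomial.fit(x_interior, u_interior,
                                          deg=len(x_interior) - 1)
    u_aux = interp(np.asarray(x_aux))

    # Step 2: Hermite extrapolation -- solve for coefficients c of
    # p(x) = sum_k c_k (x - x_bnd)^k matching values and boundary derivatives
    n = len(x_aux) + len(bnd_derivs)
    A = np.zeros((n, n)); b = np.zeros(n)
    for r, (xa, ua) in enumerate(zip(x_aux, u_aux)):        # value conditions
        A[r] = [(xa - x_bnd) ** k for k in range(n)]
        b[r] = ua
    for m, d in enumerate(bnd_derivs):                      # derivative conditions
        r = len(x_aux) + m
        A[r, m] = math.factorial(m)                         # p^(m)(x_bnd) = c_m m!
        b[r] = d
    c = np.linalg.solve(A, b)
    return sum(ck * (x_ghost - x_bnd) ** k for k, ck in enumerate(c))
```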
Modeling the behavior of biological tissues and organs often necessitates knowledge of their shape in the absence of external loads. However, when their geometry is acquired in vivo through imaging techniques, bodies are typically subject to mechanical deformation due to the presence of external forces, and the load-free configuration needs to be reconstructed. This paper addresses this crucial and frequently overlooked topic, known as the inverse elasticity problem (IEP), by delving into both theoretical and numerical aspects, with a particular focus on cardiac mechanics. In this work, we extend Shield's seminal work to determine the structure of the IEP with arbitrary material inhomogeneities and in the presence of both body and active forces. These aspects are fundamental in computational cardiology, and we show that they may break the variational structure of the inverse problem. In addition, we show that the inverse problem might have no solution even in the presence of constant Neumann boundary conditions and a polyconvex strain energy functional. We then present the results of extensive numerical tests to validate our theoretical framework and to characterize the computational challenges associated with a direct numerical approximation of the IEP. Specifically, we show that this framework outperforms existing approaches, such as Sellier's iterative procedure, in terms of both robustness and optimality, even when the latter is improved with acceleration techniques. A notable discovery is that multigrid preconditioners, in contrast to the standard elasticity case, are not efficient here, while one-level additive Schwarz and generalized Dryja-Smith-Widlund preconditioners provide a much more reliable alternative. Finally, we successfully address the IEP for a full-heart geometry, demonstrating that the IEP formulation can compute the stress-free configuration in real-life scenarios.
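For concreteness, Sellier's iterative procedure referenced above is a fixed-point update of the reference geometry. The sketch below assumes a black-box forward solver forward_solve mapping a candidate reference configuration (an array of nodal coordinates) to its deformed image; names and stopping criteria are illustrative.

```python
import numpy as np

def sellier_fixed_point(x_target, forward_solve, tol=1e-8, max_iter=100):
    """Sketch of Sellier's fixed-point iteration for the inverse elasticity
    problem: find a reference configuration X such that the forward problem
    deforms X onto the measured (in-vivo) configuration x_target.

    forward_solve : callable, X -> deformed coordinates x(X) (black box)
    """
    X = x_target.copy()                # initial guess: the imaged geometry
    for _ in range(max_iter):
        x = forward_solve(X)           # solve the standard (forward) problem
        residual = x - x_target        # mismatch in the deformed configuration
        if np.linalg.norm(residual) < tol:
            break
        X = X - residual               # fixed-point update of the reference shape
    return X
```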
In this paper, we introduce a new, simple approach to developing and establishing the convergence of splitting methods for a large class of stochastic differential equations (SDEs), including additive, diagonal and scalar noise types. The central idea is to view a splitting method as replacing the driving signal of an SDE, namely Brownian motion and time, with a piecewise linear path that yields a sequence of ODEs, which can then be discretized to produce a numerical scheme. This new way of understanding splitting methods is inspired by, but does not use, rough path theory. We show that when the driving piecewise linear path matches certain iterated stochastic integrals of Brownian motion, a high-order splitting method can be obtained. We propose a general proof methodology for establishing the strong convergence of these approximations that is akin to the general framework of Milstein and Tretyakov: once local error estimates are obtained for the splitting method, a global rate of convergence follows. This approach can then be readily applied in future research on SDE splitting methods. By incorporating recently developed approximations for iterated integrals of Brownian motion into these piecewise linear paths, we propose several high-order splitting methods for SDEs satisfying a certain commutativity condition. In our experiments, which include the Cox-Ingersoll-Ross model and additive-noise SDEs (noisy anharmonic oscillator, stochastic FitzHugh-Nagumo model, underdamped Langevin dynamics), the new splitting methods exhibit convergence rates of $O(h^{3/2})$ and outperform schemes previously proposed in the literature.
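As a minimal illustration of the path-replacement viewpoint for additive noise, each step below composes the drift flow (an ODE solve) with the exact noise flow; this corresponds to the crudest piecewise linear driving path, whereas the paper's high-order methods use finer paths matching iterated integrals of Brownian motion. All names are illustrative assumptions.

```python
import numpy as np

def drift_flow(f, x, h, substeps=4):
    """Approximate the time-h flow of the drift ODE x' = f(x) (RK4 substeps)."""
    dt = h / substeps
    for _ in range(substeps):
        k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
        x = x + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    return x

def lie_splitting_sde(f, sigma, x0, T, n_steps, rng=None):
    """Lie-splitting sketch for the additive-noise SDE
        dX_t = f(X_t) dt + sigma dW_t:
    each step composes the deterministic drift flow with the exact noise flow."""
    rng = np.random.default_rng(rng)
    h = T / n_steps
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        x = drift_flow(f, x, h)                                    # drift piece
        x = x + sigma * rng.normal(0.0, np.sqrt(h), size=x.shape)  # noise piece
    return x
```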
Retinopathy of prematurity (ROP) is a severe condition affecting premature infants, leading to abnormal retinal blood vessel growth, retinal detachment, and potential blindness. While semi-automated systems have been used in the past to diagnose ROP-related plus disease by quantifying retinal vessel features, traditional machine learning (ML) models face challenges such as limited accuracy and overfitting. Recent advancements in deep learning (DL), especially convolutional neural networks (CNNs), have significantly improved ROP detection and classification. The i-ROP deep learning (i-ROP-DL) system also shows promise in detecting plus disease, offering reliable ROP diagnosis potential. This research comprehensively examines the contemporary progress and challenges associated with using retinal imaging and artificial intelligence (AI) to detect ROP, offering valuable insights that can guide further investigation in this domain. Based on 89 original studies in this field (out of 1487 studies that were comprehensively reviewed), we conclude that traditional methods for ROP diagnosis suffer from subjectivity and manual analysis, leading to inconsistent clinical decisions. AI holds great promise for improving ROP management. This review explores AI's potential in ROP detection, classification, diagnosis, and prognosis.
In this paper, we aim to perform sensitivity analysis of set-valued models and, in particular, to quantify the impact of uncertain inputs on feasible sets, which are key elements in solving a robust optimization problem under constraints. While most sensitivity analysis methods deal with scalar outputs, this paper introduces a novel approach for performing sensitivity analysis with set-valued outputs. Our innovative methodology is designed for excursion sets, but is versatile enough to be applied to set-valued simulators, including those found in viability theory, or when working with maps such as pollutant concentration maps or flood zone maps. We propose to use the Hilbert-Schmidt Independence Criterion (HSIC) with a kernel designed for set-valued outputs. After proposing a probabilistic framework for random sets, our first contribution is a proof that this kernel is characteristic, an essential property in a kernel-based sensitivity analysis context. To measure the contribution of each input, we then propose to use HSIC-ANOVA indices. With these indices, we can identify which inputs should be neglected (screening) and rank the others according to their influence (ranking). The estimation of these indices is also adapted to the set-valued outputs. Finally, we test the proposed method on three test cases of excursion sets.
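For orientation, the plain (non-ANOVA) empirical HSIC is a small computation once Gram matrices are available. The set-output kernel itself, which the paper proves characteristic, is left abstract in this sketch; the Gaussian input kernel is just one common choice.

```python
import numpy as np

def hsic(K, L):
    """Biased empirical HSIC from Gram matrices K (inputs) and L (outputs):
    HSIC = trace(K H L H) / n^2, with H the centering matrix. For set-valued
    outputs, L would be built from a characteristic kernel on sets, which is
    the paper's contribution; here the kernel choice is left abstract."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    return np.trace(K @ H @ L @ H) / n**2

def gaussian_gram(x, bandwidth):
    """Gaussian Gram matrix for scalar inputs (illustrative input kernel)."""
    d2 = (np.asarray(x)[:, None] - np.asarray(x)[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth**2))
```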
In this paper, we perform a study on the effectiveness of Neural Network (NN) techniques for deconvolution inverse problems. We consider NNs' asymptotic limits, corresponding to Gaussian Processes (GPs), in which parameter non-linearities are lost. Using these resulting GPs, we address the deconvolution inverse problem for a quantum harmonic oscillator simulated through Monte Carlo techniques on a lattice, a scenario with a known analytical solution. Our findings indicate that solving the deconvolution inverse problem with a fully connected NN yields poorer results than those obtained using the GPs derived from the NNs' asymptotic limits. Furthermore, we observe the trained NN's accuracy approaching that of the GPs with increasing layer width. Notably, one of these GPs defies interpretation as a probabilistic model, offering a novel perspective compared to established methods in the literature. Additionally, the NNs, in their asymptotic limit, provide cost-effective analytical solutions.
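As a reference point, once a GP prior is fixed, deconvolution reduces to a linear-Gaussian posterior computation. The sketch below assumes a known discrete convolution matrix A, an observation noise variance, and a generic prior covariance K_prior; the paper's GPs instead arise as infinite-width limits of the trained networks.

```python
import numpy as np

def gp_deconvolve(A, y, K_prior, noise_var):
    """GP-based deconvolution sketch: given observations y = A f + noise,
    a GP prior on f with covariance K_prior, and noise variance noise_var,
    return the posterior mean of f (a generic linear-Gaussian computation,
    not the paper's specific kernels)."""
    S = A @ K_prior @ A.T + noise_var * np.eye(A.shape[0])  # data covariance
    return K_prior @ A.T @ np.linalg.solve(S, y)            # posterior mean
```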
Quantization for a Borel probability measure refers to the idea of approximating a given probability measure by a discrete probability measure whose support contains finitely many elements. In this paper, we consider a Borel probability measure $P$ on $\mathbb R^2$ whose support is a nonuniform stretched Sierpi\'{n}ski triangle generated by a set of three contractive similarity mappings on $\mathbb R^2$. For this probability measure, we investigate the optimal sets of $n$-means and the $n$th quantization errors for all positive integers $n$.
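For intuition, optimal sets of $n$-means can be approximated numerically by a Lloyd-type iteration on samples of $P$; for the stretched Sierpiński triangle, samples can be generated by the chaos game on the three similarity mappings. This is an empirical sketch, not the paper's analytical approach.

```python
import numpy as np

def lloyd_n_means(samples, n, n_iter=100, rng=None):
    """Lloyd-type iteration approximating an optimal set of n-means for a
    probability measure P represented by i.i.d. samples (shape (m, 2)).
    Alternates nearest-centre assignment with conditional-mean updates and
    converges to a locally optimal set."""
    rng = np.random.default_rng(rng)
    centres = samples[rng.choice(len(samples), n, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign each sample to its nearest centre (Voronoi regions)
        d = np.linalg.norm(samples[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(n):                 # conditional means over each region
            pts = samples[labels == j]
            if len(pts):
                centres[j] = pts.mean(axis=0)
    d = np.linalg.norm(samples[:, None, :] - centres[None, :, :], axis=2)
    err = np.mean(d.min(axis=1) ** 2)      # empirical nth quantization error
    return centres, err
```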
This paper investigates the supercloseness of a singularly perturbed convection-diffusion problem using the direct discontinuous Galerkin (DDG) method on a Shishkin mesh. The main technical difficulties lie in controlling the diffusion term inside the layer, the convection term outside the layer, and the inter-element jump terms caused by the discontinuity of the numerical solution. The main idea is to design a new composite interpolation: outside the layer, a global projection is used to satisfy the interface conditions determined by the choice of numerical flux, thereby eliminating or controlling the troublesome terms at element interfaces; inside the layer, a Gau{\ss}-Lobatto projection is used to improve the convergence order of the diffusion term. On this basis, by selecting appropriate parameters in the numerical flux, we obtain a supercloseness result of almost order $k+1$ in an energy norm. Numerical experiments support our main theoretical conclusion.
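For readers unfamiliar with Shishkin meshes, the construction is piecewise uniform with a transition point tied to the perturbation parameter. The sketch below uses a common convention for a single exponential layer at $x = 1$; the constants beta and sigma are problem- and scheme-dependent assumptions.

```python
import numpy as np

def shishkin_mesh(N, eps, beta, sigma):
    """Piecewise-uniform Shishkin mesh on [0,1] for a convection-diffusion
    problem with an exponential layer at x = 1 (illustrative convention).

    N     : even number of mesh intervals
    eps   : perturbation parameter
    beta  : lower bound of the convection coefficient
    sigma : mesh parameter, typically tied to the scheme's order (e.g. k+1)
    """
    tau = min(0.5, sigma * eps / beta * np.log(N))    # layer transition point
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)  # outside the layer
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)    # inside the layer
    return np.concatenate([coarse, fine[1:]])
```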
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of an explainee's inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
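Shepard's law itself is a one-line formula; the following sketch states it, with the distance and scale as illustrative assumptions rather than the study's exact pipeline.

```python
import numpy as np

def shepard_similarity(d, c=1.0):
    """Shepard's universal law of generalization: the probability of
    generalizing from one stimulus to another decays exponentially with
    their distance d in psychological similarity space (scale c)."""
    return np.exp(-c * np.asarray(d))

# Illustrative use (our assumption, not the study's exact procedure):
# compare a participant's own saliency explanation to the AI's map via
# some distance, then map that distance through the law to predict how
# strongly the participant expects the AI to agree with them.
```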