Numerical solutions of flows in partially saturated porous media pose challenges related to the non-linearity and elliptic-parabolic degeneracy of the governing Richards' equation. Iterative methods are therefore required to manage the complexity of the flow problem. Norms of successive corrections in the iterative procedure form sequences of positive numbers; definitions of computational orders of convergence and theoretical results for abstract convergent sequences can thus be used to evaluate and compare different iterative methods. In this framework, we analyze Newton's method and the $L$-scheme for an implicit finite element method (FEM), as well as the $L$-scheme for an explicit finite difference method (FDM). We also investigate the effect of Anderson Acceleration (AA) on both the implicit and the explicit $L$-schemes. For a two-dimensional test problem, we find that AA halves the number of iterations and makes the FEM scheme converge twice as fast. For the FDM approach, in contrast, AA does not reduce the number of iterations and even increases the computational effort. However, being explicit, the FDM $L$-scheme without AA is faster than, and as accurate as, the FEM $L$-scheme with AA.
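To make the acceleration step concrete, the following is a minimal sketch of (type-II) Anderson Acceleration applied to a generic fixed-point map, such as one sweep of an $L$-scheme linearization; the function, its arguments, and the depth parameter are illustrative assumptions, not the authors' implementation.
\begin{verbatim}
import numpy as np

def anderson_fixed_point(g, x0, m=5, tol=1e-10, maxit=200):
    """Type-II Anderson Acceleration of the fixed-point iteration x <- g(x).

    g  : fixed-point map (e.g., one L-scheme sweep), mapping R^n -> R^n
    x0 : initial guess
    m  : acceleration depth (m = 0 recovers the plain Picard iteration)
    Returns the approximate fixed point and the number of iterations used.
    """
    x = np.asarray(x0, dtype=float)
    G, F = [], []                          # histories of g(x_k) and residuals f_k
    for k in range(maxit):
        gx = g(x)
        f = gx - x                         # fixed-point residual
        if np.linalg.norm(f) < tol:
            return x, k
        G.append(gx)
        F.append(f)
        if len(F) > m + 1:                 # keep at most m residual differences
            G.pop(0), F.pop(0)
        if len(F) == 1:
            x = gx                         # plain Picard step
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            x = gx - dG @ gamma            # Anderson-accelerated update
    return x, maxit
\end{verbatim}
With depth $m=0$ the update reduces to the plain fixed-point (Picard) iteration, which is why the acceleration can be switched on and off without changing the underlying solver.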
This work presents a comparative review and classification of several well-known thermodynamically consistent models of hydrogel behavior in the large-deformation setting, focusing on solvent absorption/desorption and its impact on mechanical deformation and network swelling. The discussion addresses formulation aspects, the general mathematical classification of the governing equations, and numerical implementation issues based on the finite element method. The theories are presented in a unified framework, demonstrating that, although not evident in some cases, all of them follow equivalent thermodynamic arguments. A detailed numerical analysis is carried out in which Taylor-Hood elements are employed in the spatial discretization to satisfy the inf-sup condition and prevent spurious numerical oscillations. The resulting discrete problems are solved on the FEniCS platform through consistent variational formulations, employing both monolithic and staggered approaches. We conduct benchmark tests on various hydrogel structures, demonstrating that the major differences arise from the chosen volumetric response of the hydrogel. The significance of this choice is frequently underestimated in the state-of-the-art literature, yet it is shown to have substantial implications for the resulting hydrogel behavior.
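As an illustration of the kind of discretization involved, the following is a minimal sketch of a Taylor-Hood-type mixed function space in legacy FEniCS (dolfin) syntax; the mesh, the field names, and the pairing of a quadratic vector field with a linear scalar field are assumptions made for illustration and are not taken from any specific model in the comparison.
\begin{verbatim}
from dolfin import *

# Hypothetical Taylor-Hood-type mixed space: a quadratic vector field
# (displacement) paired with a linear scalar field (e.g., a chemical
# potential / pressure-like variable), an inf-sup stable combination.
mesh = UnitSquareMesh(32, 32)
P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
W = FunctionSpace(mesh, MixedElement([P2, P1]))
\end{verbatim}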
We propose an implicit Discontinuous Galerkin (DG) discretization for incompressible two-phase flows using an artificial compressibility formulation. The conservative level set (CLS) method is employed in combination with a reinitialization procedure to capture the moving interface. A projection method based on the L-stable TR-BDF2 method is adopted for the time discretization of the Navier-Stokes equations and of the level set method. Adaptive Mesh Refinement (AMR) is employed to enhance the resolution near the interface between the two fluids. The effectiveness of the proposed approach is shown in a number of classical benchmarks. A specific analysis of the influence of different choices of the mixture viscosity is also carried out.
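For reference, the CLS method evolves a smoothed indicator function $\phi$ through a transport step and a reinitialization step, commonly written as (the notation here is generic and not taken from the paper)
\[
\frac{\partial \phi}{\partial t} + \nabla\cdot(\mathbf{u}\,\phi) = 0, \qquad
\frac{\partial \phi}{\partial \tau} + \nabla\cdot\bigl(\phi(1-\phi)\,\mathbf{n}\bigr) = \varepsilon\,\nabla\cdot\bigl((\nabla\phi\cdot\mathbf{n})\,\mathbf{n}\bigr),
\]
where $\mathbf{u}$ is the (divergence-free) velocity, $\mathbf{n}$ the interface normal computed from $\phi$, $\tau$ a pseudo-time, and $\varepsilon$ a parameter controlling the interface thickness.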
This article aims to provide approximate solutions to the non-linear collision-induced breakage equation using two semi-analytical schemes, namely the variational iteration method (VIM) and the optimized decomposition method (ODM). The study also includes a detailed convergence analysis and error estimation for the ODM in the case of the product collision kernel $K(\epsilon,\rho)=\epsilon\rho$ and breakage kernel $b(\epsilon,\rho,\sigma)=\frac{2}{\rho}$ with an exponentially decaying initial condition. The novelty of the proposed approaches is demonstrated by comparing the estimated concentration function and moments with exact solutions for three numerical examples. Interestingly, in one case VIM provides a closed-form solution, while the finite-term series solutions obtained via both schemes provide excellent approximations of the concentration function and moments.
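For context, in one common formulation the collision-induced breakage equation for the concentration function $c(\epsilon,t)$ reads
\[
\frac{\partial c(\epsilon,t)}{\partial t}
= \int_0^{\infty}\!\!\int_{\epsilon}^{\infty} K(\rho,\sigma)\, b(\epsilon,\rho,\sigma)\, c(\rho,t)\, c(\sigma,t)\,d\rho\,d\sigma
- \int_0^{\infty} K(\epsilon,\rho)\, c(\epsilon,t)\, c(\rho,t)\,d\rho,
\]
so the kernels considered here, $K(\epsilon,\rho)=\epsilon\rho$ and $b(\epsilon,\rho,\sigma)=\frac{2}{\rho}$, correspond to a multiplicative collision rate and a uniform binary redistribution of fragments; the exact integral limits and normalizations vary slightly across the literature.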
The non-linear collision-induced breakage equation has significant applications in particulate processes. Two semi-analytical techniques, namely the homotopy analysis method (HAM) and the accelerated homotopy perturbation method (AHPM), are investigated alongside the well-known finite volume method (FVM) to understand the dynamical behavior of the non-linear system, i.e., the concentration function and the total number and total mass of the particles in the system. The theoretical convergence analyses of the HAM and AHPM series solutions are discussed. In addition, error estimates for the truncated solutions of both methods provide maximum absolute error bounds. To demonstrate the applicability and accuracy of these methods, numerical simulations are compared with FVM results and analytical solutions for three physical problems.
We show convergence rates for a sparse grid approximation of the distribution of solutions of the stochastic Landau-Lifshitz-Gilbert equation. Beyond being a frequently studied equation in engineering and physics, the stochastic Landau-Lifshitz-Gilbert equation poses many interesting challenges that do not appear simultaneously in previous works on uncertainty quantification: The equation is strongly non-linear, time-dependent, and has a non-convex side constraint. Moreover, the parametrization of the stochastic noise features countably many unbounded parameters and low regularity compared to other elliptic and parabolic problems studied in uncertainty quantification. We use a novel technique to establish uniform holomorphic regularity of the parameter-to-solution map based on a Gronwall-type estimate and the implicit function theorem. This method is very general, rests on a set of abstract assumptions, and can thus be applied beyond the Landau-Lifshitz-Gilbert equation as well. We demonstrate numerically the feasibility of the sparse grid approximation and show a clear advantage of a multilevel sparse grid scheme.
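In one standard formulation (sign and scaling conventions differ across the literature), the stochastic Landau-Lifshitz-Gilbert equation for the magnetization $m$ reads
\[
dm = \bigl( m \times \Delta m \;-\; \alpha\, m \times (m \times \Delta m) \bigr)\,dt \;+\; (m \times g)\circ dW(t),
\qquad |m(t,x)| = 1,
\]
with damping parameter $\alpha>0$, noise coefficient $g$, and Stratonovich noise; the pointwise unit-length constraint is the non-convex side condition referred to above, and the parametric dimension arises from expanding the driving noise in countably many scalar random variables.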
We propose an operator learning approach to accelerate geometric Markov chain Monte Carlo (MCMC) for solving infinite-dimensional nonlinear Bayesian inverse problems. While geometric MCMC employs high-quality proposals that adapt to posterior local geometry, it requires computing local gradient and Hessian information of the log-likelihood, incurring a high cost when the parameter-to-observable (PtO) map is defined through expensive model simulations. We consider a delayed-acceptance geometric MCMC method driven by a neural operator surrogate of the PtO map, where the proposal is designed to exploit fast surrogate approximations of the log-likelihood and, simultaneously, its gradient and Hessian. To achieve a substantial speedup, the surrogate needs to be accurate in predicting both the observable and its parametric derivative (the derivative of the observable with respect to the parameter). Training such a surrogate via conventional operator learning using input--output samples often demands a prohibitively large number of model simulations. In this work, we present an extension of derivative-informed operator learning [O'Leary-Roseberry et al., J. Comput. Phys., 496 (2024)] using input--output--derivative training samples. Such a learning method leads to derivative-informed neural operator (DINO) surrogates that accurately predict the observable and its parametric derivative at a significantly lower training cost than the conventional method. Cost and error analysis for reduced basis DINO surrogates are provided. Numerical studies on PDE-constrained Bayesian inversion demonstrate that DINO-driven MCMC generates effective posterior samples 3--9 times faster than geometric MCMC and 60--97 times faster than prior geometry-based MCMC. Furthermore, the training cost of DINO surrogates breaks even after collecting merely 10--25 effective posterior samples compared to geometric MCMC.
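Schematically, derivative-informed training augments the usual output-matching objective with a Jacobian-matching term over the training samples, e.g.
\[
\min_{\theta}\; \frac{1}{N}\sum_{i=1}^{N}\Bigl(
\bigl\|\, \mathcal{F}(m_i) - \mathcal{F}_{\theta}(m_i) \,\bigr\|^2
\;+\; \lambda\, \bigl\|\, D\mathcal{F}(m_i) - D\mathcal{F}_{\theta}(m_i) \,\bigr\|_F^2 \Bigr),
\]
where $\mathcal{F}$ is the PtO map, $\mathcal{F}_\theta$ the (reduced basis) neural operator surrogate, $D$ the parametric derivative, and $\lambda$ a weighting factor; the specific norms, reduced bases, and weighting used in the paper may differ from this generic form.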
This paper proposes a~simple yet powerful method for balancing distributions of covariates for causal inference based on observational studies. The method makes it possible to balance an arbitrary number of quantiles (e.g., medians, quartiles, or deciles), together with means if necessary. The proposed approach is based on the theory of calibration estimators (Deville and S\"arndal 1992), in particular the calibration estimators for quantiles proposed by Harms and Duchesne (2006). The method does not require numerical integration, kernel density estimation, or assumptions about the distributions. Valid estimates can be obtained by drawing on existing asymptotic theory. An~illustrative example of the proposed approach is presented for the entropy balancing method and the covariate balancing propensity score method. Results of a~simulation study indicate that the method efficiently estimates the average treatment effect on the treated (ATT), the average treatment effect (ATE), the quantile treatment effect on the treated (QTT), and the quantile treatment effect (QTE), especially in the presence of non-linearity and model mis-specification. The proposed approach can be further generalized to other designs (e.g., multi-category, continuous) or methods (e.g., the synthetic control method). Open-source software implementing the proposed methods is available.
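Roughly speaking, the underlying calibration problem chooses weights $w_i$ close to initial weights $d_i$ (with respect to a calibration distance $G$) subject to mean and quantile constraints of the schematic form
\[
\min_{w}\; \sum_{i\in s} G(w_i, d_i)
\quad\text{s.t.}\quad
\sum_{i\in s} w_i = N,\qquad
\sum_{i\in s} w_i\, x_i = t_x,\qquad
\sum_{i\in s} w_i\, \mathbb{1}\{x_i \le q_{x,\alpha}\} = \alpha N,
\]
where $t_x$ and $q_{x,\alpha}$ denote the target totals and $\alpha$-quantiles; in practice a smoothed, interpolated version of the indicator function is used in the quantile constraints, as in Harms and Duchesne (2006). The notation here is purely illustrative of the construction.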
The existence of sufficient conditions for the unisolvence of Kansa's unsymmetric collocation method for PDEs is still an open problem. In this paper we make a first step in this direction, proving that unsymmetric collocation matrices with Thin-Plate Splines for the 2D Poisson equation are almost surely nonsingular when the discretization points are chosen randomly on domains with an analytic boundary.
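To fix ideas, Kansa-type unsymmetric collocation for the Poisson problem $-\Delta u = f$ in $\Omega$, $u = g$ on $\partial\Omega$, seeks an approximant
\[
u_h(x) = \sum_{j=1}^{N} c_j\, \phi(\|x - x_j\|), \qquad \phi(r) = r^2 \log r,
\]
and determines the coefficients $c_j$ by enforcing $-\Delta u_h(x_i) = f(x_i)$ at the interior points and $u_h(x_i) = g(x_i)$ at the boundary points; the resulting square, generally unsymmetric collocation matrix is the object whose almost sure nonsingularity is at stake (a low-degree polynomial augmentation is often added in practice, depending on the setup).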
In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
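For orientation, in the classical pointwise-order setting a functional $\Phi$ is called maxitive if
\[
\Phi(f \vee g) \;=\; \Phi(f) \vee \Phi(g) \qquad \text{for all admissible } f, g,
\]
where $f \vee g$ denotes the pointwise maximum; the present article studies the analogous property when the pointwise order is replaced by a general preorder, such as the stochastic orders listed above.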
This article is concerned with multilevel Monte Carlo (MLMC) methods for approximating expectations of some functions of the solution to the Heston 3/2-model from mathematical finance, which takes values in $(0, \infty)$ and possesses superlinearly growing drift and diffusion coefficients. To discretize the SDE model, a new Milstein-type scheme is proposed to produce independent sample paths. The proposed scheme can be solved explicitly and is unconditionally positivity-preserving, i.e., for any time step-size $h>0$. This positivity-preserving property for large discretization time steps is particularly desirable in the MLMC setting. Furthermore, a mean-square convergence rate of order one is proved in the non-globally Lipschitz regime, which is non-trivial since the diffusion coefficient grows super-linearly. The obtained order-one convergence in turn ensures the required variance decay of the multilevel estimator and justifies the optimal complexity $\mathcal{O}(\epsilon^{-2})$ of the MLMC approach, where $\epsilon > 0$ is the prescribed target accuracy. Finally, numerical experiments are reported to confirm the theoretical findings.
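For reference, the Heston 3/2-model is typically written as
\[
dV_t = \kappa\, V_t\,(\theta - V_t)\,dt + \sigma\, V_t^{3/2}\, dW_t, \qquad V_0 > 0,
\]
with positive parameters $\kappa$, $\theta$, $\sigma$ (the names here are generic); both the drift and the diffusion coefficient grow super-linearly in $V_t$, which is the non-globally Lipschitz regime referred to above and the reason why standard explicit schemes may fail to preserve positivity.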