We introduce numerical solvers for the steady-state Boltzmann equation based on the symmetric Gauss-Seidel (SGS) method. Due to the quadratic collision operator in the Boltzmann equation, the SGS method requires solving a nonlinear system on each grid cell, for which we consider two methods, namely Newton's method and the fixed-point iteration, in our numerical tests. For small Knudsen numbers, the efficiency of our method lies between that of the classical source iteration and the modern generalized synthetic iterative scheme, while its implementation complexity is closer to that of the source iteration. A variety of numerical tests are carried out to demonstrate its performance, and we conclude that the proposed method is suitable for applications with moderate to large Knudsen numbers.
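To fix ideas, below is a minimal sketch of one SGS sweep for a 1D discrete-velocity model with first-order upwinding, in which the per-cell nonlinear system is solved by fixed-point iteration. All names and discretization choices are ours, and a BGK relaxation operator stands in for the quadratic Boltzmann collision operator, so this illustrates only the structure of the iteration, not the authors' solver.

```python
import numpy as np

def maxwellian(v, rho, u, T):
    """Discrete Maxwellian sharing the moments (rho, u, T) of f."""
    return rho / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

def moments(f, v, dv):
    rho = np.sum(f) * dv
    u = np.sum(v * f) * dv / rho
    T = np.sum((v - u) ** 2 * f) * dv / rho
    return rho, u, T

def solve_cell(f0, flux_in, coeff, v, dv, kn, n_iter=50, tol=1e-12):
    # Per-cell nonlinear solve by fixed-point iteration: freeze the
    # Maxwellian at the current iterate, invert the remaining linear
    # balance pointwise in velocity, and repeat (Newton's method is
    # the other option considered in the paper).
    f = f0.copy()
    for _ in range(n_iter):
        M = maxwellian(v, *moments(f, v, dv))
        f_new = (flux_in + M / kn) / (coeff + 1.0 / kn)
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f

def sgs_sweep(F, v, dv, dx, kn, f_left, f_right):
    # One symmetric Gauss-Seidel sweep: a forward pass followed by a
    # backward pass over the cells, with first-order upwinding in space.
    N = len(F)
    coeff = np.abs(v) / dx
    for order in (range(N), range(N - 1, -1, -1)):
        for i in order:
            left = F[i - 1] if i > 0 else f_left
            right = F[i + 1] if i < N - 1 else f_right
            f_up = np.where(v > 0, left, right)  # upwind neighbour per velocity
            F[i] = solve_cell(F[i], coeff * f_up, coeff, v, dv, kn)
    return F
```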
The automated finite element analysis of complex CAD models using boundary-fitted meshes is rife with difficulties. Immersed finite element methods are intrinsically more robust but usually less accurate. In this work, we introduce an efficient, robust, high-order immersed finite element method for complex CAD models. Our approach relies on three adaptive structured grids: a geometry grid for representing the implicit geometry, a finite element grid for discretising physical fields and a quadrature grid for evaluating the finite element integrals. The geometry grid is a sparse VDB (Volumetric Dynamic B+ tree) grid that is highly refined close to physical domain boundaries. The finite element grid consists of a forest of octree grids distributed over several processors, and the quadrature grid in each finite element cell is an octree grid constructed in a bottom-up fashion. We discretise physical fields on the finite element grid using high-order Lagrange basis functions. The resolution of the quadrature grid ensures that finite element integrals are evaluated with sufficient accuracy and that any sub-grid geometric features, like small holes or corners, are resolved up to a desired resolution. The conceptual simplicity and modularity of our approach make it possible to reuse open-source libraries, namely OpenVDB and p4est for implementing the geometry and finite element grids, respectively, and BDDCML for iteratively solving the discrete systems of equations in parallel using domain decomposition. We demonstrate the efficiency and robustness of the proposed approach by solving the Poisson equation on domains given by complex CAD models and discretised with tens of millions of degrees of freedom.
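As a rough 2D analogue of the per-cell quadrature grid (a quadtree rather than an octree, with the midpoint rule rather than high-order Gauss points per leaf), the sketch below refines cells cut by the zero level set of a signed-distance function up to a prescribed depth. All names are ours, and none of this reflects the actual OpenVDB/p4est data structures.

```python
import numpy as np

def quad_cells(sdf, x0, y0, h, depth, max_depth):
    """Recursively subdivide a cell cut by the implicit boundary; leaves
    inside the domain become quadrature cells (2D analogue of the
    bottom-up octree construction)."""
    cx, cy = x0 + h / 2, y0 + h / 2
    c = sdf(cx, cy)
    is_cut = abs(c) <= h * np.sqrt(2) / 2   # zero level set may cross the cell
    if depth == max_depth or not is_cut:
        if c < 0:                           # centre inside the physical domain
            yield (cx, cy, h * h)           # midpoint rule: point and weight
        return
    for i in (0, 1):
        for j in (0, 1):
            yield from quad_cells(sdf, x0 + i * h / 2, y0 + j * h / 2,
                                  h / 2, depth + 1, max_depth)

# usage: integrate over the unit disc intersected with one FE cell [0,1]^2
sdf = lambda x, y: np.hypot(x, y) - 1.0
pts = list(quad_cells(sdf, 0.0, 0.0, 1.0, 0, 8))
area = sum(w for _, _, w in pts)            # approximates pi/4 ~ 0.7854
```

In the actual method, each leaf would carry a tensor-product Gauss rule matched to the polynomial degree of the Lagrange basis, so that the integration error does not pollute the high-order discretisation.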
In this study, we investigate an anisotropic weakly over-penalised symmetric interior penalty method for the Stokes equation on convex domains. Our approach is a simple discontinuous Galerkin method similar to the Crouzeix--Raviart finite element method. As our primary contribution, we present a new proof for the consistency term, which allows us to obtain an estimate of the anisotropic consistency error. The key idea of the proof is to exploit the relation between the Raviart--Thomas finite element space and a discontinuous space. While inf-sup stable schemes of the discontinuous Galerkin method on shape-regular mesh partitions have been widely discussed, our results show that the proposed Stokes element satisfies the inf-sup condition on anisotropic meshes. Furthermore, we provide an error estimate in an energy norm on anisotropic meshes. In numerical experiments, we compare calculation results for standard and anisotropic mesh partitions.
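For reference, the discrete inf-sup (Ladyzhenskaya--Babu\v{s}ka--Brezzi) condition asserted here takes the standard form below, with $V_h$ and $Q_h$ denoting the discrete velocity and pressure spaces and with norms as in the chosen scheme (notation ours); the point is that the constant $\beta$ does not degenerate with the anisotropic aspect ratio of the mesh:

$$ \inf_{0 \neq q_h \in Q_h} \, \sup_{0 \neq v_h \in V_h} \frac{(q_h, \nabla \cdot v_h)}{\|v_h\|_{V_h} \, \|q_h\|_{Q_h}} \;\geq\; \beta > 0. $$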
We propose a Lawson-time-splitting extended Fourier pseudospectral (LTSeFP) method for the numerical integration of the Gross-Pitaevskii equation with a time-dependent potential of low regularity in space. For the spatial discretization of the low-regularity potential, we use an extended Fourier pseudospectral (eFP) method, i.e., we compute the discrete Fourier transform of the low-regularity potential in an extended window. For the temporal discretization, to efficiently implement the eFP method for a time-dependent low-regularity potential, we combine the standard time-splitting method with a Lawson-type exponential integrator to integrate the potential and the nonlinearity differently. The LTSeFP method is both accurate and efficient: it achieves first-order convergence in time and optimal-order convergence in space in the $L^2$-norm under low-regularity potentials, while its computational cost is comparable to that of the standard time-splitting Fourier pseudospectral method. Theoretically, we also prove these convergence orders for a large class of spatially low-regularity time-dependent potentials. Extensive numerical results are reported to confirm the error estimates and to demonstrate the superiority of our method.
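For orientation, the sketch below implements one Strang step of the standard time-splitting Fourier pseudospectral method that LTSeFP is benchmarked against, for $i\partial_t\psi = -\tfrac{1}{2}\Delta\psi + V\psi + \beta|\psi|^2\psi$ with periodic boundary conditions. The grid, potential and parameters are our own illustrative choices, and the eFP/Lawson modifications for low-regularity $V$ are not shown.

```python
import numpy as np

def tssp_step(psi, V, beta, k2, dt):
    """One Strang splitting step. |psi| is invariant under the
    potential/nonlinear flow, so that substep is exact pointwise;
    the kinetic substep is exact in Fourier space."""
    psi = psi * np.exp(-1j * 0.5 * dt * (V + beta * np.abs(psi) ** 2))
    psi = np.fft.ifft(np.exp(-1j * 0.5 * dt * k2) * np.fft.fft(psi))
    psi = psi * np.exp(-1j * 0.5 * dt * (V + beta * np.abs(psi) ** 2))
    return psi

# periodic grid on [-L, L) with N modes; k2 is the symbol of -Laplacian
L, N, dt, beta = 8.0, 256, 1e-3, 1.0
x = -L + 2 * L * np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * L / N)
k2 = k ** 2
V = 0.5 * x ** 2                       # smooth stand-in; the paper targets low-regularity V
psi = np.exp(-x ** 2) / np.pi ** 0.25  # Gaussian initial data
for _ in range(1000):
    psi = tssp_step(psi, V, beta, k2, dt)
```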
The numerical solution of continuum damage mechanics (CDM) problems suffers from convergence-related challenges during the material softening stage, and consequently existing iterative solvers are subject to a trade-off between computational expense and solution accuracy. In this work, we present a novel unified arc-length (UAL) method, and we derive the formulation of the analytical tangent matrix and the governing system of equations for both local and non-local gradient damage problems. Unlike existing arc-length solvers that monolithically scale the external force vector, the proposed method treats the external force vector as an independent variable and determines the position of the system on the equilibrium path based on all the nodal variations of the external force vector. This approach renders the proposed solver substantially more efficient and robust than existing solvers used in CDM problems. We demonstrate the considerable advantages of the proposed algorithm through several benchmark 1D problems with sharp snap-backs and 2D examples under various boundary conditions and loading scenarios. The proposed UAL approach exhibits a superior ability to overcome critical increments along the equilibrium path. Moreover, in the presented examples, the proposed UAL method is 1-2 orders of magnitude faster than force-controlled arc-length and monolithic Newton-Raphson solvers.
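For contrast with the proposed UAL method, the sketch below traces the equilibrium path of a single-DOF softening law with the classical force-controlled arc-length approach that the paper benchmarks against: the load factor is an extra unknown, and Newton's method is applied to the residual augmented with a spherical arc-length constraint. The softening law, constraint weighting and step sizes are our own illustrative choices.

```python
import numpy as np

# a single-DOF softening "material": hardening, then a softening branch
f_int = lambda u: u * np.exp(-2.0 * u)
k_t   = lambda u: (1.0 - 2.0 * u) * np.exp(-2.0 * u)   # tangent stiffness

def arc_step(u0, lam0, du_pred, dlam_pred, dl, psi=1.0, tol=1e-10):
    """One Riks/Crisfield-type step: Newton on the augmented system
    [ f_int(u) - lam ; (u-u0)^2 + psi^2 (lam-lam0)^2 - dl^2 ] = 0,
    so the load factor lam is an unknown like u."""
    u, lam = u0 + du_pred, lam0 + dlam_pred
    for _ in range(50):
        du, dlam = u - u0, lam - lam0
        R = np.array([f_int(u) - lam, du**2 + psi**2 * dlam**2 - dl**2])
        if np.linalg.norm(R) < tol:
            break
        J = np.array([[k_t(u), -1.0], [2 * du, 2 * psi**2 * dlam]])
        step = np.linalg.solve(J, -R)
        u, lam = u + step[0], lam + step[1]
    return u, lam

# trace the path through the limit point at u = 0.5
u, lam, dl = 0.0, 0.0, 0.05
du_p, dlam_p = dl / np.sqrt(2), dl / np.sqrt(2)   # initial predictor
path = [(u, lam)]
for _ in range(60):
    u_new, lam_new = arc_step(u, lam, du_p, dlam_p, dl)
    du_p, dlam_p = u_new - u, lam_new - lam        # continue along the path
    u, lam = u_new, lam_new
    path.append((u, lam))
```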
This paper addresses a factorization method for imaging the support of a wave-number-dependent source function from multi-frequency data measured at finitely many pairs of symmetric receivers in opposite directions. The source function is given by the inverse Fourier transform of a compactly supported time-dependent source whose initial or terminal moment of radiation is unknown. Using the multi-frequency far-field data at two opposite observation directions, we provide a computational criterion for characterizing the smallest strip containing the support and perpendicular to these directions. A new parameter is incorporated into the design of the test functions to indicate the unknown moment. The data from finitely many pairs of opposite directions can then be used to recover the $\Theta$-convex polygon of the support. Uniqueness in recovering the convex hull of the support is obtained as a by-product of our analysis using all observation directions. Similar results are also discussed for multi-frequency near-field data from finitely many pairs of observation positions in three dimensions. We further comment on possible extensions to source functions with two disconnected supports. Extensive numerical tests in both two and three dimensions are implemented to show the effectiveness and feasibility of the approach. The theoretical framework explored here can be viewed as a frequency-domain analysis of inverse source problems in the time domain.
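In symbols, the assumed relation between the time-dependent source $s(x,t)$ and the wave-number-dependent source function $S(x,\kappa)$ is a temporal (inverse) Fourier transform of the form below, up to the normalization convention (notation ours):

$$ S(x, \kappa) = \frac{1}{2\pi} \int_{\mathbb{R}} s(x, t)\, e^{\mathrm{i}\kappa t} \, \mathrm{d}t, $$

where $s(x,\cdot)$ is compactly supported in time, so that $S(x,\cdot)$ extends analytically in $\kappa$.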
We develop adaptive time-stepping strategies for It\^o-type stochastic differential equations (SDEs) with jump perturbations. Our approach builds on adaptive strategies for SDEs without jumps. Adaptive methods can ensure the strong convergence of schemes for nonlinear SDEs whose drift and diffusion coefficients violate global Lipschitz bounds: the stepsize is adjusted dynamically along each trajectory to prevent the spurious growth that, if it occurs with sufficiently high probability, leads to loss of convergence. In this article, we demonstrate the use of a jump-adapted mesh that incorporates the jump times into the adaptive time-stepping strategy. We prove that any adaptive scheme satisfying a particular mean-square consistency bound for a nonlinear SDE in the jump-free case may be extended to a strongly convergent scheme in the Poisson jump case, where the jump and diffusion perturbations are mutually independent and the jump coefficient satisfies a global Lipschitz condition.
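A minimal sketch of the idea, under our own illustrative choices (an adaptive step that shrinks with the drift magnitude, a cubic drift, linear diffusion, and a globally Lipschitz jump coefficient driven by a rate-$\lambda$ Poisson process): the adaptive Euler-Maruyama mesh is augmented so that every jump time becomes a mesh point.

```python
import numpy as np

rng = np.random.default_rng(0)

# nonlinear SDE: dX = -X^3 dt + X dW + c(X-) dN, N a rate-lam Poisson process
drift = lambda x: -x ** 3
diff  = lambda x: x
jump  = lambda x: -0.5 * x          # globally Lipschitz jump coefficient
lam   = 1.0

def step_size(x, hmax, hmin):
    # Path-dependent step: shrink where the drift is large, to tame
    # spurious growth (one common adaptive choice among several).
    return max(hmin, min(hmax, hmax / (1.0 + abs(drift(x)))))

def jump_adapted_em(x0, T, hmax=1e-2, hmin=1e-5):
    """Euler-Maruyama on a jump-adapted mesh: adaptive diffusion steps,
    with every Poisson jump time inserted as a mesh point."""
    t, x = 0.0, x0
    next_jump = rng.exponential(1.0 / lam)
    while t < T:
        h = min(step_size(x, hmax, hmin), T - t)
        if t + h >= next_jump:                    # stop exactly at the jump
            h = next_jump - t
            x = x + drift(x) * h + diff(x) * rng.normal(0.0, np.sqrt(h))
            x = x + jump(x)                       # apply the jump at t+
            next_jump += rng.exponential(1.0 / lam)
        else:
            x = x + drift(x) * h + diff(x) * rng.normal(0.0, np.sqrt(h))
        t += h
    return x

print(jump_adapted_em(x0=2.0, T=1.0))
```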
The hazard function represents one of the main quantities of interest in the analysis of survival data. We propose a general approach for parametrically modelling the dynamics of the hazard function using systems of autonomous ordinary differential equations (ODEs). This modelling approach can be used to provide qualitative and quantitative analyses of the evolution of the hazard function over time. Our proposal capitalises on the extensive literature on ODEs which, in particular, allows for establishing basic rules or laws on the dynamics of the hazard function via the use of autonomous ODEs. We show how to implement the proposed modelling framework in cases where there is an analytic solution to the system of ODEs or where an ODE solver is required to obtain a numerical solution. We focus on a Bayesian modelling approach, but the proposed methodology can also be coupled with maximum likelihood estimation. A simulation study is presented to illustrate the performance of these models and the interplay of sample size and censoring. Two case studies using real data are presented to illustrate the use of the proposed approach and to highlight the interpretability of the corresponding models. We conclude with a discussion on potential extensions of our work and strategies to incorporate covariates into our framework.
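As a toy instance of the numerical-solution route (our own choice of an autonomous logistic law for the hazard; not one of the paper's case studies), the sketch below integrates the hazard and the cumulative hazard jointly, then recovers the survival function and the density from them.

```python
import numpy as np
from scipy.integrate import solve_ivp

# autonomous law for the hazard: logistic growth towards a plateau b
a, b = 1.5, 2.0   # illustrative parameters; a Bayesian fit would place priors here

def rhs(t, y):
    """y = (h, H): the hazard follows dh/dt = a*h*(1 - h/b), and the
    cumulative hazard H(t) = int_0^t h(s) ds is integrated alongside."""
    h, H = y
    return [a * h * (1.0 - h / b), h]

sol = solve_ivp(rhs, (0.0, 5.0), [0.1, 0.0], dense_output=True)
t = np.linspace(0.0, 5.0, 101)
h, H = sol.sol(t)
S = np.exp(-H)            # survival function S(t) = exp(-H(t))
f = h * S                 # density f(t) = h(t) * S(t)
```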
We consider nonlinear solvers for the incompressible, steady (or at a fixed time step for unsteady) Navier-Stokes equations in the setting where partial measurement data of the solution is available. The measurement data is incorporated/assimilated into the solution through a nudging term added to the Picard iteration that penalizes the difference between the coarse mesh interpolants of the true solution and of the solver solution, analogous to how continuous data assimilation (CDA) is implemented for time-dependent PDEs. This approach was considered in the paper [Li et al., {\it CMAME}, 2023], and we extend the methodology by improving the analysis to hold in the $L^2$ norm instead of a weighted $H^1$ norm, where the weight depends on the coarse mesh width, and by treating the case of noisy measurement data. For noisy measurement data, we prove that the CDA-Picard method is stable and convergent, up to the size of the noise. Numerical tests illustrate the results and show that a very good strategy when using noisy data is to use CDA-Picard to generate an initial guess for the classical Newton iteration.
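To illustrate the structure of the CDA-Picard iteration on something far simpler than Navier-Stokes, the sketch below applies it to a steady 1D viscous Burgers problem with a piecewise-constant coarse-mesh interpolant $I_H$; all discretization and parameter choices are ours.

```python
import numpy as np

N, nu, H_cells = 199, 0.1, 10
dx = 1.0 / (N + 1)
x = np.linspace(dx, 1.0 - dx, N)
f = np.ones(N)                     # illustrative forcing

# -nu*u'' (Dirichlet BCs) and centred u' operators on the fine mesh
A = nu * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2
D = (np.eye(N, k=1) - np.eye(N, k=-1)) / (2 * dx)

# I_H: piecewise-constant averaging over H_cells coarse cells
P = np.zeros((N, N))
for c in range(H_cells):
    idx = (x >= c / H_cells) & (x < (c + 1) / H_cells)
    P[np.ix_(idx, idx)] = 1.0 / idx.sum()

def picard(u0, u_obs=None, mu=0.0, iters=200):
    # CDA-Picard: solve (A + diag(u_k) D + mu*P) u_{k+1} = f + mu*P u_obs,
    # i.e. the usual Picard linearisation nudged towards the coarse-mesh
    # observations I_H u_obs; mu = 0 recovers plain Picard.
    u = u0.copy()
    for _ in range(iters):
        lhs = A + np.diag(u) @ D
        rhs = f.copy()
        if u_obs is not None:
            lhs = lhs + mu * P
            rhs = rhs + mu * (P @ u_obs)
        u = np.linalg.solve(lhs, rhs)
    return u

u_ref = picard(np.zeros(N))                        # reference "true" solution
u_cda = picard(np.zeros(N), u_obs=u_ref, mu=50.0)  # nudged iteration
```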
We consider scalar semilinear elliptic PDEs, where the nonlinearity is strongly monotone, but only locally Lipschitz continuous. To linearize the arising discrete nonlinear problem, we employ a damped Zarantonello iteration, which leads to a linear Poisson-type equation that is symmetric and positive definite. The resulting system is solved by a contractive algebraic solver such as a multigrid method with local smoothing. We formulate a fully adaptive algorithm that equibalances the various error components coming from mesh refinement, iterative linearization, and algebraic solver. We prove that the proposed adaptive iteratively linearized finite element method (AILFEM) guarantees convergence with optimal complexity, where the rates are understood with respect to the overall computational cost (i.e., the computational time). Numerical experiments investigate the involved adaptivity parameters.
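A minimal sketch of the damped Zarantonello iteration on a 1D model problem $-u'' + u^3 = f$ with homogeneous Dirichlet conditions (a strongly monotone, locally Lipschitz nonlinearity): each step requires one symmetric positive definite Poisson-type solve, which in the paper's AILFEM is itself approximated inexactly by a contractive algebraic solver, whereas we solve it exactly here. The damping parameter and discretization are our own illustrative choices.

```python
import numpy as np

N = 199
dx = 1.0 / (N + 1)
x = np.linspace(dx, 1.0 - dx, N)
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2  # discrete -u''
f = 10.0 * np.sin(np.pi * x)         # illustrative right-hand side

u = np.zeros(N)
delta = 0.5                          # damping parameter (illustrative choice)
for _ in range(200):
    res = f - A @ u - u**3           # residual of -u'' + u^3 = f
    # Zarantonello update: one SPD Poisson-type solve per step; in AILFEM
    # this solve would be handed to multigrid with local smoothing.
    u = u + delta * np.linalg.solve(A, res)
    if np.sqrt(dx) * np.linalg.norm(res) < 1e-10:
        break
```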
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that the validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for the precise prediction of explainee inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
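Shepard's law says that generalization decays exponentially with distance in a psychological similarity space; the snippet below is a deliberately simplified reading of how such a similarity could score the agreement between an AI's saliency map and the explainee's own (a toy rendering, not the paper's fitted model; all names and data are ours).

```python
import numpy as np

def shepard_similarity(x, y, c=1.0):
    """Shepard's universal law: generalization decays exponentially
    with distance in the psychological similarity space."""
    return np.exp(-c * np.linalg.norm(x - y))

def predicted_agreement(saliency_ai, saliency_self, c=1.0):
    # Toy reading of the theory: the predicted probability that the
    # explainee endorses the AI's decision grows with the similarity
    # between the AI's saliency map and the one they would give.
    return shepard_similarity(saliency_ai.ravel(), saliency_self.ravel(), c)

# two hypothetical 8x8 saliency maps
rng = np.random.default_rng(1)
s_ai = rng.random((8, 8))
s_self = s_ai + 0.1 * rng.standard_normal((8, 8))   # similar maps
print(predicted_agreement(s_ai, s_self))
```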