The direct method is one of the most important algorithms for solving linear systems of equations, with LU decomposition accounting for a significant portion of its computation time. This study explores strategies to accelerate complex LU decomposition using multiple-precision floating-point arithmetic of the multiple-component type. Specifically, we investigate the potential efficiency gains from combining SIMDization with the 3M method for complex matrix multiplication. Our benchmark tests compare this approach with the direct method implementation in MPLAPACK, focusing on computation time and numerical errors.
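For concreteness, the 3M method trades the four real matrix products of a conventional complex product for three, at the cost of a few extra additions. Below is a minimal NumPy sketch of the identity; double precision stands in for the paper's multiple-component arithmetic, and no SIMD tuning is attempted.

```python
import numpy as np

def matmul_3m(A, B):
    """Complex C = A @ B via the 3M method: three real matrix products
    (instead of four), plus extra additions."""
    Ar, Ai = A.real, A.imag
    Br, Bi = B.real, B.imag
    t1 = Ar @ Br                      # real x real
    t2 = Ai @ Bi                      # imag x imag
    t3 = (Ar + Ai) @ (Br + Bi)        # cross terms
    return (t1 - t2) + 1j * (t3 - t1 - t2)

# Quick check against NumPy's built-in complex product.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
assert np.allclose(matmul_3m(A, B), A @ B)
```

The 3M form can lose some componentwise accuracy relative to the four-product form, which is one reason the benchmarks above track numerical error alongside speed.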
We consider the solution of systems of linear algebraic equations (SLAEs) with an ill-conditioned or degenerate exact matrix and an approximate right-hand side. An approach to solving such a problem is proposed and justified, which makes it possible to improve the conditioning of the SLAE matrix and, as a result, to obtain an approximate solution that is stable to perturbations of the right-hand side with higher accuracy than other methods provide. The approach is implemented by an algorithm that uses so-called minimal pseudoinverse matrices. The results of numerical experiments are presented that confirm the theoretical results of the article.
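The minimal pseudoinverse matrices are the paper's own construction; as a stand-in for readers, here is the standard truncated-SVD pseudoinverse, a different but related device that addresses the same difficulty of bounding noise amplification when the matrix is ill-conditioned or degenerate.

```python
import numpy as np

def truncated_pinv_solve(A, b, tol=1e-8):
    """Solve A x ~= b for ill-conditioned or rank-deficient A via a
    truncated-SVD pseudoinverse: singular values below tol * s_max are
    discarded, which caps the amplification of right-hand-side noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]             # s is sorted in descending order
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
```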
The multigrid V-cycle method is a popular method for solving systems of linear equations. It computes an approximate solution by using smoothing on the fine levels and solving a system of linear equations on the coarsest level. How the coarsest level is solved depends on the size and difficulty of the problem. If the size permits, it is typical to use a direct method based on LU or Cholesky decomposition. In settings with large coarsest-level problems, approximate solvers such as iterative Krylov subspace methods, or direct methods based on low-rank approximation, are often used. The accuracy of the coarsest-level solver is typically chosen based on the users' experience with the problems and methods at hand. In this paper we present an approach to analyzing the effects of approximate coarsest-level solves on the convergence of the V-cycle method for symmetric positive definite problems. Using these results, we derive a coarsest-level stopping criterion through which we may control the difference between the approximation computed by a V-cycle method with an approximate coarsest-level solver and the approximation that would be computed if the coarsest-level problems were solved exactly. The coarsest-level stopping criterion may thus be set up such that the V-cycle method converges to a chosen finest-level accuracy in (nearly) the same number of V-cycle iterations as the V-cycle method with an exact coarsest-level solver. We also use the theoretical results to discuss how the convergence of the V-cycle method may be affected by the choice of tolerance in a coarsest-level stopping criterion based on the relative residual norm.
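A minimal sketch of the setting (a hypothetical 1D Poisson hierarchy with damped-Jacobi smoothing): the coarsest solve is stopped at a relative residual tolerance, the knob the paper's stopping criterion is designed to set.

```python
import numpy as np

def vcycle(levels, b, x, tol_coarse=1e-2, nu=2, omega=0.8):
    """One V-cycle for A x = b on a hierarchy levels = [(A_0, P_0), ...,
    (A_coarsest, None)], where P_l interpolates from level l+1 to level l.
    The coarsest problem is solved only approximately, to the relative
    residual tolerance tol_coarse."""
    A, P = levels[0]
    d = A.diagonal()
    if P is None:
        # Approximate coarsest-level solve by damped Jacobi; any iterative
        # solver (e.g. CG) with the same stopping test would serve equally.
        while np.linalg.norm(b - A @ x) > tol_coarse * np.linalg.norm(b):
            x = x + omega * (b - A @ x) / d
        return x
    for _ in range(nu):                          # pre-smoothing (damped Jacobi)
        x = x + omega * (b - A @ x) / d
    r_c = P.T @ (b - A @ x)                      # restrict the residual
    e_c = vcycle(levels[1:], r_c, np.zeros_like(r_c), tol_coarse, nu, omega)
    x = x + P @ e_c                              # coarse-grid correction
    for _ in range(nu):                          # post-smoothing
        x = x + omega * (b - A @ x) / d
    return x

def poisson1d(n):
    """1D Poisson matrix tridiag(-1, 2, -1)."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def interpolation(n_fine):
    """Linear interpolation from (n_fine - 1) // 2 coarse points."""
    n_c = (n_fine - 1) // 2
    P = np.zeros((n_fine, n_c))
    for j in range(n_c):
        P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
    return P

# Three-level hierarchy by Galerkin coarsening, then a few V-cycles.
A = poisson1d(31)
levels = []
for _ in range(2):
    P = interpolation(A.shape[0])
    levels.append((A, P))
    A = P.T @ A @ P
levels.append((A, None))
b, x = np.ones(31), np.zeros(31)
for _ in range(8):
    x = vcycle(levels, b, x, tol_coarse=1e-2)
```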
Partial differential equations (PDEs) are crucial in modelling diverse phenomena across scientific disciplines, including seismic and medical imaging, computational fluid dynamics, image processing, and neural networks. Solving these PDEs at scale is an intricate and time-intensive process that demands careful tuning. This paper introduces automated code-generation techniques specifically tailored for distributed-memory parallelism (DMP) to solve explicit finite-difference (FD) stencils at scale, a fundamental challenge in numerous scientific applications. These techniques are implemented and integrated into the Devito DSL and compiler framework, a well-established solution for automating the generation of FD solvers from high-level symbolic math input. Users benefit from modelling simulations at a high level of symbolic abstraction and can harness HPC-ready distributed-memory parallelism without altering their source code, drastically reducing both execution time and developer effort. While the contributions of this work are implemented and integrated within the Devito framework, the DMP concepts and techniques applied are generally applicable to any FD solver. A comprehensive performance evaluation of Devito's DMP via MPI shows highly competitive weak and strong scaling on the ARCHER2 supercomputer, confirming the effectiveness of the proposed approach in meeting the demands of large-scale scientific simulations.
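To illustrate the user-facing abstraction, here is a sketch along the lines of Devito's public tutorials (a 2D heat equation; exact API details may vary by version). The point of the DMP work above is that no source changes are needed to run distributed, only e.g. DEVITO_MPI=1 under mpirun.

```python
# Run distributed with, e.g.: DEVITO_MPI=1 mpirun -n 4 python diffusion.py
from devito import Grid, TimeFunction, Eq, Operator, solve

grid = Grid(shape=(512, 512), extent=(1.0, 1.0))
u = TimeFunction(name='u', grid=grid, space_order=2)
u.data[:] = 0.0                       # initial condition (set as needed)

# Heat equation u_t = a * laplace(u), stated symbolically; Devito generates
# the C stencil code and, under MPI, the halo exchanges between ranks.
a = 0.5
pde = Eq(u.dt, a * u.laplace)
update = Eq(u.forward, solve(pde, u.forward))
Operator(update).apply(time_M=100, dt=1e-5)
```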
This manuscript derives locally weighted ensemble Kalman methods from the point of view of ensemble-based function approximation. This is done by using pointwise evaluations to build up a local linear or quadratic approximation of a function, tapering off the effect of distant particles via local weighting. This introduces a candidate method (the locally weighted Ensemble Kalman method for inversion) with the motivation of combining some of the strengths of the particle filter (ability to cope with nonlinear maps and non-Gaussian distributions) and the Ensemble Kalman filter (no filter degeneracy).
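The function-approximation viewpoint can be made concrete with a small sketch (hypothetical names; Gaussian taper weights stand in for whatever localization is used): fit a local linear surrogate of the forward map from particle evaluations, down-weighting distant particles.

```python
import numpy as np

def local_linear_fit(u_star, U, G, ell=1.0):
    """Weighted least-squares fit of G(u) ~ c + J (u - u_star) from particle
    pairs (U[j], G[j]), with Gaussian weights tapering off the influence of
    particles far from u_star. Such local fits are the building block of a
    locally weighted ensemble Kalman update."""
    d = U - u_star                                  # (J, dim_u) offsets
    w = np.exp(-0.5 * np.sum(d**2, axis=1) / ell**2)
    X = np.hstack([np.ones((len(U), 1)), d])        # design matrix [1, u - u*]
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(sw * X, sw * G, rcond=None)
    return beta[0], beta[1:]                        # intercept c, slope J
```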
Characterized by an outer integral connected to an inner integral through a nonlinear function, nested integration is a challenging problem in various fields, such as engineering and mathematical finance. The available numerical methods for nested integration based on Monte Carlo (MC) methods can be prohibitively expensive owing to the error propagating from the inner to the outer integral. Attempts to enhance the efficiency of these approximations using quasi-MC (QMC) or randomized QMC (rQMC) methods have focused on the approximation of either the inner or the outer integral. This work introduces a novel nested rQMC method that simultaneously addresses the approximation of the inner and outer integrals. This method leverages the unique nested integral structure to offer a more efficient approximation mechanism. By incorporating Owen's scrambling techniques, we address integrands exhibiting infinite variation in the Hardy--Krause sense, enabling theoretically sound error estimates. As the primary contribution, we derive asymptotic error bounds for the bias and variance of our estimator, along with the regularity conditions under which these bounds can be attained. In addition, we provide nearly optimal sample sizes for the rQMC approximations underlying the numerical implementation of the proposed method. Moreover, we indicate how to combine this method with importance sampling to remedy the measure concentration arising in the inner integral. We verify the estimator quality through numerical experiments in the context of expected information gain estimation. We compare the computational efficiency of the nested rQMC method against standard nested MC integration for two case studies: one in thermomechanics and the other in pharmacokinetics. These examples highlight the computational savings and enhanced applicability of the proposed approach.
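A stripped-down sketch of the estimator's structure (scrambled Sobol points for both integrals, uniform toy densities; the paper's sample-size balancing and importance sampling are omitted):

```python
import numpy as np
from scipy.stats import qmc

def nested_rqmc(f, g, d_out, d_in, n_out=2**10, n_in=2**7, seed=0):
    """Randomized-QMC estimator of E_X[ g( E_Y[ f(X, Y) ] ) ] with
    X ~ U[0,1]^d_out and Y ~ U[0,1]^d_in: scrambled Sobol points are used
    for BOTH the outer and the inner integral."""
    x = qmc.Sobol(d_out, scramble=True, seed=seed).random(n_out)
    inner = np.empty(n_out)
    for i in range(n_out):
        # Fresh scrambling per outer point keeps the inner estimates
        # independent across the randomization.
        y = qmc.Sobol(d_in, scramble=True, seed=seed + 1 + i).random(n_in)
        inner[i] = np.mean(f(x[i], y))
    return np.mean(g(inner))

# Toy check: E_X[(E_Y[X * Y])^2] with scalar X, Y gives 1/12.
est = nested_rqmc(lambda x, y: x[0] * y[:, 0], lambda z: z**2, 1, 1)
```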
This paper contributes to the study of optimal experimental design for Bayesian inverse problems governed by partial differential equations (PDEs). We derive estimates for the parametric regularity of multivariate double integration problems over high-dimensional parameter and data domains arising in Bayesian optimal design problems. We provide a detailed analysis for these double integration problems using two approaches: a full tensor product and a sparse tensor product combination of quasi-Monte Carlo (QMC) cubature rules over the parameter and data domains. Specifically, we show that the latter approach significantly improves the convergence rate, exhibiting performance comparable to that of QMC integration of a single high-dimensional integral. Furthermore, we numerically verify the predicted convergence rates for an elliptic PDE problem with an unknown diffusion coefficient in two spatial dimensions, offering empirical evidence supporting the theoretical results and highlighting practical applicability.
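For orientation, a sketch of the full tensor product baseline that the sparse combination improves upon (uniform toy domains; the integrand f is hypothetical):

```python
import numpy as np
from scipy.stats import qmc

def double_qmc(f, d_param, d_data, n_param=2**9, n_data=2**9, seed=0):
    """Full tensor product rQMC rule for the double integral of f(theta, y)
    over [0,1]^d_param x [0,1]^d_data. Cost is n_param * n_data integrand
    evaluations; the sparse tensor combination analyzed in the paper
    reduces this while nearly retaining the convergence rate."""
    T = qmc.Sobol(d_param, scramble=True, seed=seed).random(n_param)
    Y = qmc.Sobol(d_data, scramble=True, seed=seed + 1).random(n_data)
    return np.mean([np.mean(f(t, Y)) for t in T])

# Toy check: the integral of (theta + y) over the unit square is 1.
val = double_qmc(lambda t, Y: t[0] + Y[:, 0], 1, 1)
```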
We introduce a new class of absorbing boundary conditions (ABCs) for the Helmholtz equation. The proposed ABCs are obtained by using $L$ discrete layers and the $Q_N$ Lagrange finite element in conjunction with the $N$-point Gauss-Legendre quadrature reduced integration rule in a specific formulation of perfectly matched layers. The proposed ABCs are classified by a tuple $(L,N)$ and achieve a reflection error of order $O(R^{2LN})$ for some $R<1$. The new ABCs generalise the perfectly matched discrete layers proposed by Guddati and Lim [Int. J. Numer. Meth. Engng 66 (6) (2006) 949-977], including them as type $(L,1)$. An analysis of the proposed ABCs, motivated by the work of Ainsworth [J. Comput. Phys. 198 (1) (2004) 106-130], is performed. The new ABCs simplify numerical implementation of the Helmholtz problem with ABCs when $Q_N$ finite elements are also used in the physical domain, and they offer further insight into this field.
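For orientation, the complex coordinate stretching underlying PML-type layers can be sketched as follows (a standard identity; the paper's discrete formulation with $Q_N$ elements and reduced integration refines this):
\[
\tilde{x} = x + \frac{1}{ik}\int_{0}^{x} \sigma(s)\,\mathrm{d}s,
\qquad
e^{ik\tilde{x}} = e^{ikx}\, e^{-\int_{0}^{x} \sigma(s)\,\mathrm{d}s},
\]
so an outgoing wave $e^{ikx}$ entering the layer decays exponentially, and the discretization parameters $(L,N)$ determine how much residual reflection survives, which is the source of the $O(R^{2LN})$ bound quoted above.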
Physics-informed neural networks (PINNs) have emerged as powerful tools for solving a wide range of partial differential equations (PDEs). However, despite their user-friendly interface and broad applicability, PINNs encounter challenges in accurately resolving PDEs, especially in singular cases that may lead to unsatisfactory local minima. To address these challenges and improve solution accuracy, we propose Annealed Adaptive Importance Sampling (AAIS) for computing the discretized PDE residuals of the cost functions, inspired by the Expectation-Maximization algorithm used to fit finite mixtures to a target density. Our objective is to approximate the discretized PDE residuals by strategically sampling additional points in regions with elevated residuals, thus enhancing the effectiveness and accuracy of PINNs. Implemented together with a straightforward resampling strategy within PINNs, our AAIS algorithm demonstrates significant efficiency improvements across a range of tested PDEs, even with limited training data. Moreover, the proposed AAIS-PINN method shows promising capabilities on high-dimensional singular PDEs. The adaptive sampling framework introduced here can be integrated into various PINN frameworks.
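The core idea of residual-driven sampling can be sketched in a few lines (a plain residual-proportional resampler over a uniform candidate pool; AAIS replaces the pool with an annealed, EM-fitted mixture that mimics the residual density):

```python
import numpy as np

def residual_resample(residual_fn, n_candidates=10_000, n_new=500, dim=2, rng=None):
    """Draw new collocation points with probability proportional to the
    squared PDE residual, the idea underlying residual-adaptive sampling
    in PINNs. residual_fn maps an (n, dim) array of points to n residuals."""
    rng = rng or np.random.default_rng()
    cand = rng.uniform(0.0, 1.0, size=(n_candidates, dim))   # candidate pool
    r2 = residual_fn(cand) ** 2                              # residual magnitudes
    p = r2 / r2.sum()                                        # sampling density
    idx = rng.choice(n_candidates, size=n_new, replace=False, p=p)
    return cand[idx]
```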
Stability of the BDF methods of order up to five for parabolic equations can be established by the energy technique via Nevanlinna--Odeh multipliers. The nonexistence of Nevanlinna--Odeh multipliers makes the six-step BDF method special; however, the energy technique was recently extended by the authors in [Akrivis et al., SIAM J. Numer. Anal. \textbf{59} (2021) 2449--2472] and covers all six stable BDF methods. The seven-step BDF method is unstable for parabolic equations, since it is not even zero-stable. In this work, we construct and analyze a stable linear combination of two schemes that are not zero-stable, the seven-step BDF method and its shifted counterpart, referred to as the WSBDF7 method. The stability regions of the WSBDF$q$, $q\leqslant 7$, with weight $\vartheta\geqslant1$ increase as $\vartheta$ increases and are larger than the stability regions of the classical BDF$q$, which corresponds to $\vartheta=1$. We determine novel and suitable multipliers for the WSBDF7 method and establish stability for parabolic equations by the energy technique. The proposed approach is applicable to mean curvature flow, gradient flows, fractional equations and nonlinear equations.
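For reference, the $q$-step BDF method for $u'(t)=f(t,u)$ and its generating polynomial read (a standard formula; the weighted-shifted combination is indicated only schematically, with the precise coefficients as constructed in the paper):
\[
\sum_{j=1}^{q} \frac{1}{j}\,\nabla^{j} u^{n} = \tau f(t_{n}, u^{n}),
\qquad
\alpha(\zeta) = \sum_{j=1}^{q} \frac{1}{j}\,\zeta^{q-j}(\zeta-1)^{j},
\]
where $\nabla u^{n} = u^{n} - u^{n-1}$ is the backward difference. The shifted counterpart applies the same stencil one time level behind, and WSBDF7 takes the combination $\vartheta\,(\text{BDF7}) + (\text{shifted BDF7})$ with weight $\vartheta \geqslant 1$, chosen so that the combination is zero-stable even though neither ingredient is.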
In reinsurance, Poisson and negative binomial distributions are employed to model claim frequency. However, the incompleteness of data on reported incurred claims above a priority level makes estimation challenging. This paper focuses on frequency estimation using Schnieper's framework for claim numbering. We demonstrate that Schnieper's model is consistent with a Poisson distribution for the total number of claims above the priority at each development year, providing a robust basis for parameter estimation. Additionally, we explain how to build an alternative assumption based on a negative binomial distribution, which yields similar results. The study includes a bootstrap procedure to quantify uncertainty in the parameter estimates and a case study comparing the two assumptions and evaluating the impact of the bootstrap approach.
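As a generic illustration of the bootstrap step (a plain parametric bootstrap for a Poisson rate; the paper's procedure operates inside Schnieper's claim-development framework with incomplete data):

```python
import numpy as np

def poisson_rate_bootstrap(claim_counts, n_boot=10_000, seed=0):
    """Parametric bootstrap for the uncertainty of a Poisson frequency
    estimate: fit the rate by maximum likelihood (the sample mean), then
    refit on synthetic Poisson samples of the same size."""
    rng = np.random.default_rng(seed)
    lam_hat = np.mean(claim_counts)                       # MLE of the rate
    boot = rng.poisson(lam_hat, size=(n_boot, len(claim_counts))).mean(axis=1)
    return lam_hat, np.percentile(boot, [2.5, 97.5])      # 95% interval

# Example on hypothetical yearly counts of claims above the priority.
lam, ci = poisson_rate_bootstrap([3, 5, 2, 4, 6, 3, 5])
```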