We introduce two iterative methods, GPBiLQ and GPQMR, for solving unsymmetric partitioned linear systems. The basic mechanism underlying GPBiLQ and GPQMR is a novel simultaneous tridiagonalization via biorthogonality that allows for short-recurrence iterative schemes. Similar to the biconjugate gradient method, it is possible to develop another method, GPBiCG, whose iterate (if it exists) can be obtained inexpensively from the GPBiLQ iterate. Whereas the iterate of GPBiCG may not exist, the iterates of GPBiLQ and GPQMR are always well defined as long as the biorthogonal tridiagonal reduction process does not break down. We discuss connections between the proposed methods and some existing methods, and give numerical experiments to illustrate the performance of the proposed methods.
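To fix ideas, the reduction underlying GPBiLQ and GPQMR can be viewed against the classical two-sided (biorthogonal) Lanczos process, sketched here in generic notation; the simultaneous, partitioned variant introduced in the paper differs in its block structure. Given $A$ and starting vectors $v_1, u_1$ with $u_1^{\top} v_1 = 1$, one builds $V_k = [v_1,\dots,v_k]$ and $U_k = [u_1,\dots,u_k]$ satisfying
\[
A V_k = V_{k+1} \underline{T}_k, \qquad A^{\top} U_k = U_{k+1} \underline{S}_k, \qquad U_k^{\top} V_k = I_k,
\]
with $\underline{T}_k$ and $\underline{S}_k$ tridiagonal up to an extra row; this tridiagonal structure is what permits short-recurrence LQ- and QMR-type iterates extracted from small subproblems.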
In this work, we present a high-order finite volume framework for the numerical simulation of shallow water flows. The method is designed to accurately capture the complex dynamics inherent in shallow water systems and is particularly suited to applications such as tsunami simulation. The arbitrarily high-order framework ensures a precise representation of flow behaviors, crucial for simulating phenomena characterized by rapid changes and fine-scale features. Thanks to an {\it ad hoc} reformulation in terms of production-destruction terms, the time integration ensures positivity preservation without any time-step restriction, a vital attribute for physical consistency, especially in scenarios where negative water-depth reconstructions would lead to unrealistic results. To preserve the general steady equilibria dictated by the underlying balance law, the high-order reconstruction and numerical flux are blended in a convex fashion with a well-balanced approximation, which provides exact preservation of both static and moving equilibria. Through numerical experiments, we demonstrate the effectiveness and robustness of the proposed approach in capturing the intricate dynamics of shallow water flows, while preserving key physical properties essential for flood simulations.
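As a generic sketch of such a convex blending (the notation and the specific blending indicator are illustrative, not taken from the paper), the interface flux may be written as
\[
\mathcal{F}_{i+1/2} = \theta_{i+1/2}\,\mathcal{F}^{\mathrm{WB}}_{i+1/2} + \bigl(1-\theta_{i+1/2}\bigr)\,\mathcal{F}^{\mathrm{HO}}_{i+1/2}, \qquad \theta_{i+1/2}\in[0,1],
\]
where $\mathcal{F}^{\mathrm{WB}}$ is the well-balanced flux, $\mathcal{F}^{\mathrm{HO}}$ the high-order one, and $\theta_{i+1/2}$ an equilibrium indicator that tends to $1$ near a discrete equilibrium (so the steady state is preserved exactly) and to $0$ away from it (so the nominal order of accuracy is retained).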
A major challenge in computed tomography is reconstructing objects from incomplete data. An increasingly popular solution to these problems is to incorporate deep learning models into reconstruction algorithms. This study introduces a novel approach by integrating a Fourier neural operator (FNO) into the filtered backprojection (FBP) reconstruction method, yielding the FNO backprojection (FNO-BP) network. We employ moment conditions for sinogram extrapolation to help the model mitigate artefacts from limited data. Notably, our deep learning architecture maintains a runtime comparable to classical FBP reconstructions, ensuring swift performance during both inference and training. We assess our reconstruction method in the context of the Helsinki Tomography Challenge 2022 and also compare it against regular FBP methods.
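Schematically, and only as a hedged reading of this construction (the exact network layout may differ), classical FBP computes $f \approx \mathcal{B}(h \ast g)$ for a sinogram $g$, ramp filter $h$, and backprojection operator $\mathcal{B}$; a Fourier neural operator layer replaces the fixed filter with learned spectral multipliers,
\[
(\mathcal{K}_{\phi}\, g)(s) = \mathcal{F}^{-1}\bigl(R_{\phi}(\omega)\,\widehat{g}(\omega)\bigr)(s),
\]
applied along the detector variable before backprojection. Because this amounts to FFT-based filtering followed by a standard backprojection, the runtime stays close to that of plain FBP.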
Motivated by recent work on a preconditioned MINRES for flipped linear systems in imaging, in this note we extend the scope of that research to include more precise boundary conditions, such as reflective and anti-reflective ones. We prove spectral results for the matrix-sequences associated with the original problem, which justify the use of MINRES in the current setting. The theoretical spectral analysis is supported by a wide variety of numerical experiments concerning the visualization of the spectra of the original matrices in various ways. We also report numerical tests regarding the convergence speed and regularization features of the associated GMRES and MINRES methods. Conclusions and open problems end the present study.
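For context, in the pure Toeplitz setting the ``flipping'' device works as follows (the reflective and anti-reflective cases require additional structure): with the exchange matrix $Y_n = (\delta_{i,\,n+1-j})_{i,j=1}^{n}$, the product $Y_n T_n$ is symmetric for any real Toeplitz matrix $T_n$, so the equivalent system
\[
Y_n T_n\, x = Y_n b
\]
can be solved by MINRES, and its convergence is governed by the real spectrum of $Y_n T_n$, which is what the spectral results aim to describe.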
Latitude in the choice of initialisation is a feature shared by one-step extended state-space methods and multi-step methods. This paper focuses on lattice Boltzmann schemes, which can be interpreted as examples of both of these categories of numerical schemes. We propose a modified equation analysis of the initialisation schemes for lattice Boltzmann methods, which are determined by the choice of initial data. These modified equations provide guidelines to devise and analyze initialisations in terms of the order of consistency with respect to the target Cauchy problem and the time smoothness of the numerical solution. In particular, the larger the number of terms matched between the modified equations of the initialisation and of the bulk method, the smoother the resulting numerical solution; this is most visible in the numerical dissipation. Starting from the constraints needed to achieve time smoothness, which can quickly become prohibitive because they must account for the parasitic modes, we explain how the lack of observability of certain lattice Boltzmann schemes -- viewed as dynamical systems over a commutative ring -- can yield rather simple conditions and makes their initialisation easy to study. This comes from the reduced number of initialisation schemes available at the fully discrete level. These theoretical results are successfully assessed on several lattice Boltzmann methods.
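As a generic illustration of the matching mechanism (the coefficients and the target equation are placeholders, not those of a particular lattice Boltzmann scheme), suppose the bulk scheme for $\partial_t u + \partial_x \varphi(u) = 0$ admits the modified equation
\[
\partial_t u + \partial_x \varphi(u) = \Delta t\,\alpha_1\, \partial_x^{2} u + \Delta t^{2}\alpha_2\, \partial_x^{3} u + O(\Delta t^{3});
\]
an initialisation whose own modified equation reproduces $\alpha_1$ (and, if possible, $\alpha_2$) starts the computation on a solution that already satisfies the bulk dynamics to that order, which is what yields a smoother behavior in time, most visibly through consistent numerical dissipation.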
Two Cox-based multistate modeling approaches are compared for analyzing a complex multicohort event history process. The first approach incorporates cohort information as a fixed covariate, thereby providing a direct estimate of the cohort-specific effects. The second approach includes the cohort as a stratum variable, thus giving extra flexibility in estimating the transition probabilities. Additionally, both approaches may include interaction terms between the cohort and a given prognostic predictor. Furthermore, the Markov property conditional on observed prognostic covariates is assessed using a global score test. Whenever departures from the Markov assumption are revealed for a given transition, the time of entry into the current state is incorporated as a fixed covariate, yielding a semi-Markov process. The two proposed methods are applied to a three-wave dataset of COVID-19-hospitalized adults in the southern Barcelona metropolitan area (Spain), and their performance is discussed. While both semi-Markovian approaches are shown to be useful, the preferred one will depend on the focus of the inference. In summary, the cohort-covariate approach enables an insightful discussion of the behavior of the cohort effects, whereas the stratum-cohort approach provides the flexibility to estimate transition-specific underlying risks according to the different cohorts.
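In schematic form (generic notation, not the paper's exact model specification), the two strategies place the cohort $c$ differently in the transition-specific Cox hazard from state $h$ to state $j$ given covariates $Z$:
\[
\lambda_{hj}(t \mid Z, c) = \lambda_{hj,0}(t)\,\exp\bigl(\beta_{hj}^{\top} Z + \gamma_{hj}\, c\bigr)
\qquad\text{versus}\qquad
\lambda_{hj}(t \mid Z, c) = \lambda_{hj,0}^{(c)}(t)\,\exp\bigl(\beta_{hj}^{\top} Z\bigr),
\]
so the covariate approach estimates cohort effects $\gamma_{hj}$ directly, while the stratified approach lets each cohort carry its own baseline hazard $\lambda_{hj,0}^{(c)}$, at the price of not quantifying the cohort effect through a single coefficient.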
Boundary value problems involving elliptic PDEs such as the Laplace and Helmholtz equations are ubiquitous in mathematical physics and engineering. Many such problems can be alternatively formulated as integral equations that are mathematically more tractable. However, an integral-equation formulation poses a significant computational challenge: solving the large dense linear systems that arise upon discretization. In cases where iterative methods converge rapidly, existing methods that draw on fast summation schemes such as the Fast Multipole Method are highly efficient and well established. More recently, linear-complexity direct solvers have been developed that sidestep convergence issues by directly computing an invertible factorization; however, their storage and computation costs are high, which limits their ability to solve large-scale problems in practice. In this work, we introduce a distributed-memory parallel algorithm based on an existing direct solver named ``strong recursive skeletonization factorization.'' Specifically, we apply low-rank compression to certain off-diagonal matrix blocks in a way that minimizes computation and data movement. Compared to iterative algorithms, our method is particularly suitable for problems involving ill-conditioned matrices or multiple right-hand sides. Large-scale numerical experiments are presented to show the performance of our Julia implementation.
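A minimal sketch of the compression step, in generic interpolative-decomposition notation (the parallel algorithm adds scheduling and data-distribution machinery on top of this): the block coupling a box $I$ to well-separated indices $J$ is approximated by
\[
A_{J I} \approx A_{J \widehat{I}}\, P, \qquad \widehat{I} \subset I,
\]
where $\widehat{I}$ is a small skeleton subset of $I$ and $P$ an interpolation matrix, so that only interactions with the skeleton need to be stored, communicated, and eliminated during the recursive factorization; choosing which blocks to compress, and where, is what controls computation and data movement.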
We present an algorithm for the exact computer-aided construction of the Voronoi cells of lattices with known symmetry group. Our algorithm scales better than linearly with the total number of faces and is applicable to dimensions beyond 12, which previous methods could not achieve. The new algorithm is applied to the Coxeter-Todd lattice $K_{12}$ as well as to a family of lattices obtained from laminating $K_{12}$. By optimizing this family, we obtain a new best 13-dimensional lattice quantizer (among the lattices with published exact quantizer constants).
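For reference, the figure of merit optimized here is the standard dimensionless normalized second moment of the Voronoi cell $\Omega$ of a lattice in $\mathbb{R}^n$ with cell volume $V$,
\[
G = \frac{1}{n}\,\frac{\int_{\Omega} \lVert x\rVert^{2}\, \mathrm{d}x}{V^{\,1+2/n}},
\]
and its exact evaluation requires the exact face structure of $\Omega$, which is precisely what the symmetry-exploiting construction delivers.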
Complex interval arithmetic is a powerful tool for the analysis of computational errors. The naturally arising rectangular, polar, and circular (together called primitive) interval types are not closed under simple arithmetic operations, and their use yields overly relaxed bounds. The later introduced polygonal type, on the other hand, allows for arbitrarily precise representation of the above operations at a higher computational cost. We propose the polyarcular interval type as an effective extension of the previous types. The polyarcular interval can represent all primitive intervals and most of their arithmetic combinations precisely, and its approximation capability competes with that of the polygonal interval. In particular, in antenna tolerance analysis it can achieve perfect accuracy at a lower computational cost than the polygonal type, as we show in a relevant case study. In this paper, we present a rigorous analysis of the arithmetic properties of all five interval types, involving a new algebro-geometric method of boundary analysis.
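A simple instance of the closure problem (standard circular arithmetic, recalled here only for illustration): discs are closed under addition, $D(c_1,r_1)+D(c_2,r_2) = D(c_1+c_2,\, r_1+r_2)$, but the exact product of two discs is not a disc and is usually relaxed to an enclosing disc such as
\[
D(c_1,r_1)\cdot D(c_2,r_2) \subseteq D\bigl(c_1 c_2,\ |c_1|\, r_2 + |c_2|\, r_1 + r_1 r_2\bigr),
\]
which is exactly the kind of over-relaxation that boundary-tracking types -- polygonal, and now polyarcular -- are designed to avoid.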
Continual learning algorithms strive to acquire new knowledge while preserving prior information. Often, these algorithms emphasise stability and restrict network updates upon learning new tasks. In many cases, such restrictions come at a cost to the model's plasticity, i.e. the model's ability to adapt to the requirements of a new task. But is all change detrimental? Here, we approach this question by proposing that activation spaces in neural networks can be decomposed into two subspaces: a readout range in which change affects prior tasks and a null space in which change does not alter prior performance. Based on experiments with this novel technique, we show that, indeed, not all activation change is associated with forgetting. Instead, only change within the subspace visible to the readout of a task can lead to decreased stability, while restricting change outside of this subspace is associated only with a loss of plasticity. Analysing various commonly used algorithms, we show that regularisation-based techniques do not fully disentangle the two spaces and, as a result, restrict plasticity more than necessary. We expand our results by investigating a linear model in which we can manipulate learning in the two subspaces directly and thus causally link activation changes to stability and plasticity. For hierarchical, nonlinear cases, we present an approximation that enables us to estimate functionally relevant subspaces at every layer of a deep nonlinear network, corroborating our previous insights. Together, this work provides novel means to derive insights into the mechanisms behind stability and plasticity in continual learning and may serve as a diagnostic tool to guide the development of future continual learning algorithms that stabilise inference while allowing maximal space for learning.
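A minimal sketch of the decomposition in the linear case (generic notation; estimating the corresponding subspaces layer-wise in a deep nonlinear network is the approximation discussed above): for a task with linear readout $W$ of full row rank acting on activations $h$, any activation change $\Delta h$ splits as
\[
\Delta h = P\,\Delta h + (I - P)\,\Delta h, \qquad P = W^{\top}\bigl(W W^{\top}\bigr)^{-1} W,
\]
where $P\,\Delta h$ lies in the readout range and alters the task's outputs (affecting stability), while $(I-P)\,\Delta h$ lies in the null space of $W$ and leaves those outputs untouched, so constraining it can only cost plasticity.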
The current study investigates the asymptotic spectral properties of a finite difference approximation of nonlocal Helmholtz equations with a Caputo fractional Laplacian and a variable-coefficient wave number $\mu$, as occurs when considering wave propagation in complex media characterized by nonlocal interactions and spatially varying wave speeds. More specifically, using tools from Toeplitz and generalized locally Toeplitz (GLT) theory, the present research delves into the spectral analysis of the nonpreconditioned and preconditioned matrix-sequences. We report numerical evidence supporting the theoretical findings. Finally, open problems and potential extensions in various directions are presented and briefly discussed.
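For context, the notion of spectral symbol used in such analyses (stated here in its one-dimensional form with domain $[0,1]$) says that a matrix-sequence $\{A_n\}_n$ has symbol $f$ when, for every continuous $F$ with compact support,
\[
\lim_{n\to\infty} \frac{1}{n}\sum_{j=1}^{n} F\bigl(\lambda_j(A_n)\bigr)
= \frac{1}{2\pi}\int_{0}^{1}\!\!\int_{-\pi}^{\pi} F\bigl(f(x,\theta)\bigr)\, \mathrm{d}\theta\, \mathrm{d}x ,
\]
so the eigenvalues asymptotically behave like a sampling of $f$; for the preconditioned sequences the aim is typically a symbol close to $1$, giving clustered spectra and fast Krylov convergence.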