This paper presents a study of solution strategies for the Cahn-Hilliard-Biot equations, a complex mathematical model for flow in deformable porous media with evolving solid phases. Solving the Cahn-Hilliard-Biot system poses significant challenges due to its coupled, nonlinear, and non-convex nature. We explore various solution algorithms, comparing monolithic and splitting strategies with a focus on both their computational efficiency and robustness.
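For orientation (a schematic only, not the paper's exact formulation; constitutive choices vary across the literature), the Cahn-Hilliard-Biot system couples a phase field $\varphi$ with chemical potential $\mu$, a displacement $\mathbf{u}$, and a fluid pressure $p$:
\[
\partial_t \varphi = \nabla\cdot\big(m(\varphi)\nabla\mu\big), \qquad \mu = \frac{\delta\mathcal{E}}{\delta\varphi},
\]
\[
-\nabla\cdot\boldsymbol{\sigma}(\varphi,\mathbf{u},p) = \mathbf{f}, \qquad \partial_t\,\theta(\varphi,\mathbf{u},p) - \nabla\cdot\big(\kappa(\varphi)\nabla p\big) = S_f,
\]
where $\mathcal{E}$ is a free energy comprising interfacial, elastic, and fluid contributions; the non-convexity of $\mathcal{E}$ in $\varphi$ is what complicates both monolithic and splitting solvers.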
With the increasing availability of large-scale datasets, computational power, and tools like automatic differentiation and expressive neural network architectures, sequential data are now often treated in a data-driven way, with a dynamical model trained from the observation data. While neural networks are often seen as uninterpretable black-box architectures, they can still benefit from physical priors on the data and from mathematical knowledge. In this paper, we use a neural network architecture which leverages the long-known Koopman operator theory to embed dynamical systems in latent spaces where their dynamics can be described linearly, enabling a number of appealing features. We introduce methods that make it possible to train such a model for long-term continuous reconstruction, even in difficult contexts where the data come as irregularly sampled time series. The potential for self-supervised learning is also demonstrated, as we show the promising use of trained dynamical models as priors for variational data assimilation techniques, with applications to, e.g., time series interpolation and forecasting.
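As a concrete illustration of the generic idea (not the paper's architecture; all names and sizes here are made up), one can train an autoencoder whose latent state evolves under a learned linear generator, with the matrix exponential absorbing irregular time gaps:

\begin{verbatim}
import torch
import torch.nn as nn

class KoopmanAutoencoder(nn.Module):
    """Encode the state, advance it linearly in latent space, decode back."""
    def __init__(self, state_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, state_dim))
        # Generator A of the continuous-time latent dynamics dz/dt = A z.
        self.A = nn.Parameter(0.01 * torch.randn(latent_dim, latent_dim))

    def forward(self, x, dt):
        # dt holds per-sample time gaps, so irregular sampling is handled
        # by exponentiating A * dt separately for each gap.
        z = self.encoder(x)
        K = torch.linalg.matrix_exp(self.A * dt.view(-1, 1, 1))
        z_next = (K @ z.unsqueeze(-1)).squeeze(-1)
        return self.decoder(z), self.decoder(z_next)

model = KoopmanAutoencoder(state_dim=3, latent_dim=8)
x_t, x_next, dt = torch.randn(32, 3), torch.randn(32, 3), torch.rand(32)
recon, pred = model(x_t, dt)
loss = nn.functional.mse_loss(recon, x_t) + nn.functional.mse_loss(pred, x_next)
loss.backward()
\end{verbatim}

A trained model of this kind can then serve as the dynamical prior inside a variational data assimilation objective.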
Gaussian Elimination with Partial Pivoting (GEPP) is a classical algorithm for solving systems of linear equations. Although in specific cases the loss of precision in GEPP due to roundoff errors can be very significant, empirical evidence strongly suggests that for a {\it typical} square coefficient matrix, GEPP is numerically stable. We obtain a (partial) theoretical justification of this phenomenon by showing that, given a random $n\times n$ standard Gaussian coefficient matrix $A$, the {\it growth factor} of GEPP is at most polynomially large in $n$ with probability close to one. This implies that with probability close to one the number of bits of precision sufficient to solve $Ax = b$ to $m$ bits of accuracy using GEPP is $m+O(\log n)$, which improves an earlier estimate of $m+O(\log^2 n)$ due to Sankar, and which we conjecture to be optimal up to the order of magnitude. We further provide tail estimates of the growth factor which can be used to support the empirical observation that GEPP is more stable than Gaussian Elimination with no pivoting.
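For reference, the quantity in question is Wilkinson's growth factor: writing $A^{(k)} = \bigl(a^{(k)}_{ij}\bigr)$ for the matrix after $k$ elimination steps,
\[
\rho_n(A) \;=\; \frac{\max_{i,j,k}\,\bigl|a^{(k)}_{ij}\bigr|}{\max_{i,j}\,|a_{ij}|},
\]
and the backward error of GEPP scales with $\rho_n(A)$, so a polynomial bound $\rho_n(A) = n^{O(1)}$ costs only $\log_2 \rho_n(A) = O(\log n)$ extra bits of precision beyond the $m$ bits of target accuracy.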
We discuss the inhomogeneous spiked Wigner model, a theoretical framework recently introduced to study structured noise in various learning scenarios, through the prism of random matrix theory, with a specific focus on its spectral properties. Our primary objective is to find an optimal spectral method and to extend the celebrated BBP phase transition criterion \cite{BBP} -- well known in the homogeneous case -- to our inhomogeneous, block-structured Wigner model. We provide a thorough rigorous analysis of a transformed matrix and show that the transition for the appearance of (1) an outlier outside the bulk of the limiting spectral distribution and (2) a positive overlap between the associated eigenvector and the signal occurs precisely at the optimal threshold, making the proposed spectral method optimal within the class of iterative methods for the inhomogeneous Wigner problem.
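For context, in the homogeneous case the BBP criterion takes the following form (in one common normalization): for
\[
Y \;=\; \frac{\lambda}{n}\, x x^{\top} + \frac{1}{\sqrt{n}}\, W, \qquad \|x\| = \sqrt{n},
\]
with $W$ a Wigner matrix, an outlier eigenvalue $\lambda + 1/\lambda$ detaches from the bulk $[-2,2]$ precisely when $\lambda > 1$, and the top eigenvector then carries squared overlap $1 - \lambda^{-2}$ with the signal. The inhomogeneous model replaces the single $\lambda$ with a block variance profile, and locating the analogous threshold is the subject of the paper.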
Of all the possible projection methods for solving large-scale Lyapunov matrix equations, Galerkin approaches remain much more popular than minimal-residual ones. This is mainly due to the different nature of the projected problems stemming from these two families of methods. While a Galerkin approach leads to the solution of a low-dimensional matrix equation per iteration, a matrix least-squares problem needs to be solved per iteration in a minimal-residual setting. The significant computational cost of these least-squares problems has steered researchers towards Galerkin methods in spite of the appealing properties of minimal-residual schemes. In this paper we introduce a framework that allows for modifying the Galerkin approach by low-rank, additive corrections to the projected matrix equation problem, with the twofold goal of attaining monotonic convergence rates similar to those of minimal-residual schemes while maintaining essentially the same computational cost as the original Galerkin method. We analyze the well-posedness of our framework and determine possible scenarios where we expect the residual norm attained by two low-rank-modified variants to behave similarly to the one computed by a minimal-residual technique. A panel of diverse numerical examples shows the behavior and potential of our new approach.
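Concretely, for the Lyapunov equation $AX + XA^{\top} + BB^{\top} = 0$ and an orthonormal basis $V_k$ of the current approximation space, writing $X \approx V_k Y V_k^{\top}$ and $T_k = V_k^{\top} A V_k$, the Galerkin approach solves the small matrix equation
\[
T_k Y + Y T_k^{\top} + V_k^{\top} B B^{\top} V_k = 0,
\]
whereas a minimal-residual method solves $\min_Y \|A V_k Y V_k^{\top} + V_k Y V_k^{\top} A^{\top} + BB^{\top}\|_F$; the corrections proposed here aim to keep the cost of the former while approaching the monotonic residual behavior of the latter.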
We address a prime counting problem across the homology classes of a graph, presenting a graph-theoretical Dirichlet-type analogue of the prime number theorem. The main machinery we have developed and employed is a spectral antisymmetry theorem, revealing that the spectra of the twisted graph adjacency matrices have an antisymmetric distribution over the character group of the graph. Additionally, we derive some trace formulas based on the twisted adjacency matrices as part of our analysis.
Yurinskii's coupling is a popular theoretical tool for non-asymptotic distributional analysis in mathematical statistics and applied probability, offering a Gaussian strong approximation with an explicit error bound under easily verified conditions. Originally stated in $\ell^2$-norm for sums of independent random vectors, it has recently been extended both to the $\ell^p$-norm, for $1 \leq p \leq \infty$, and to vector-valued martingales in $\ell^2$-norm, under some strong conditions. We present as our main result a Yurinskii coupling for approximate martingales in $\ell^p$-norm, under substantially weaker conditions than those previously imposed. Our formulation further allows for the coupling variable to follow a more general Gaussian mixture distribution, and we provide a novel third-order coupling method which gives tighter approximations in certain settings. We specialize our main result to mixingales, martingales, and independent data, and derive uniform Gaussian mixture strong approximations for martingale empirical processes. Applications to nonparametric partitioning-based and local polynomial regression procedures are provided.
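For context, a classical form of the coupling for independent sums (roughly as in Pollard's formulation; the constants are indicative) reads: if $X_1,\dots,X_n$ are independent mean-zero vectors in $\mathbb{R}^d$ with $\beta = \sum_i \mathbb{E}\|X_i\|_2^3 < \infty$ and $S = \sum_i X_i$, then for each $\eta > 0$ there is a coupled $T \sim \mathcal{N}(0, \operatorname{Var} S)$ with
\[
\mathbb{P}\bigl(\|S - T\|_2 > 3\eta\bigr) \;\lesssim\; B\Bigl(1 + \frac{|\log(1/B)|}{d}\Bigr), \qquad B = \beta\, d\, \eta^{-3}.
\]
The present work weakens the independence and moment requirements behind such statements and moves from this $\ell^2$ setting to general $\ell^p$-norms.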
We present some basic elements of the theory of generalised Br\`{e}gman relative entropies over nonreflexive Banach spaces. Using nonlinear embeddings of Banach spaces together with the Euler--Legendre functions, this approach unifies two former approaches to Br\`{e}gman relative entropy: one based on reflexive Banach spaces, the other based on differential geometry. This construction makes it possible to extend Br\`{e}gman relative entropies, and the related geometric and operator structures, to arbitrary-dimensional state spaces of probability, quantum, and postquantum theory. We give several examples not previously considered in the literature.
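For orientation, in the classical differentiable setting the Br\`{e}gman relative entropy generated by a convex function $f$ is
\[
D_f(x, y) \;=\; f(x) - f(y) - \langle \mathrm{D}f(y),\, x - y\rangle,
\]
which presupposes a well-behaved derivative $\mathrm{D}f$; the construction presented here recovers an analogous object on nonreflexive spaces by composing Euler--Legendre functions with nonlinear embeddings instead.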
This article focuses on coherent forecasting for the recently introduced novel geometric AR(1) (NoGeAR(1)) model, an INAR model based on the inflated-parameter binomial thinning approach. Various techniques are available to obtain $h$-step-ahead coherent forecasts of count time series, such as median and mode forecasting. However, the literature on coherent forecasting for overdispersed count time series remains limited. Here, we study the forecasting distribution of the NoGeAR(1) process using the Monte Carlo (MC) approximation method. Accordingly, several forecasting measures are employed in a simulation study to facilitate a thorough comparison of the forecasting capability of NoGeAR(1) with that of other models. The methodology is also demonstrated on real-life data, specifically the CW{\ss} TeXpert download data and Barbados COVID-19 data.
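As a minimal sketch of the MC forecasting step (illustrative only: plain binomial thinning and a shifted-geometric innovation stand in for the NoGeAR(1) specification defined in the paper), one simulates many $h$-step-ahead paths and reads off coherent point forecasts from the empirical distribution:

\begin{verbatim}
import numpy as np

def simulate_paths(x0, h, alpha, innov_sampler, n_paths=10000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    paths = np.empty((n_paths, h), dtype=int)
    x = np.full(n_paths, x0)
    for t in range(h):
        survivors = rng.binomial(x, alpha)            # thinning: alpha o X
        x = survivors + innov_sampler(rng, n_paths)   # add innovations
        paths[:, t] = x
    return paths

paths = simulate_paths(x0=5, h=3, alpha=0.4,
                       innov_sampler=lambda rng, n: rng.geometric(0.5, n) - 1)
median_forecast = np.floor(np.median(paths, axis=0)).astype(int)  # coherent median
mode_forecast = [np.bincount(paths[:, t]).argmax() for t in range(paths.shape[1])]
\end{verbatim}

Forecast accuracy measures are then computed against held-out observations in the same way for every competing model.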
This paper applies the sample-average-approximation (SAA) scheme to solve systems of stochastic equations (SSE), which have many applications in a variety of fields. The SAA is an effective paradigm for addressing risk and uncertainty in stochastic models from the perspective of the Monte Carlo principle. Nonetheless, a numerical conflict arises from the sample size of the SAA: one has to make a tradeoff between the accuracy of solutions and the computational cost. To alleviate this issue, we incorporate a gradually reinforced SAA scheme into a differentiable homotopy method and develop a gradually reinforced sample-average-approximation (GRSAA) differentiable homotopy method. By introducing a series of continuously differentiable functions of the homotopy parameter $t$ ranging between zero and one, we establish a differentiable homotopy system which gradually increases the sample size of the SAA as $t$ descends from one to zero. The set of solutions to the homotopy system contains an everywhere-smooth path, which starts from an arbitrary point and ends at a solution to the SAA with any desired accuracy. The GRSAA differentiable homotopy method serves as a bridge between the gradually reinforced SAA scheme and a differentiable homotopy method, retaining the global convergence of the homotopy method while greatly reducing the computational cost of attaining a desired solution to the original SSE. Several numerical experiments further confirm the effectiveness and efficiency of the proposed method.
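Schematically (the notation here is illustrative, not the paper's), with $F(x,\xi)$ the residual of the stochastic equation and samples $\xi^1, \xi^2, \dots$, the $N$-sample SAA of the SSE $\mathbb{E}[F(x,\xi)] = 0$ is
\[
\frac{1}{N}\sum_{i=1}^{N} F(x, \xi^i) = 0,
\]
and the homotopy $H(x,t)=0$ is built so that $H(\cdot,1)$ is trivially solvable from the arbitrary starting point while $H(\cdot,0)$ coincides with the full-sample SAA, the smooth weight functions of $t$ switching in ever larger sample batches as $t$ descends from one to zero.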
While there exists a rich array of matrix column subset selection problem (CSSP) algorithms for use with interpolative and CUR-type decompositions, their use can often become prohibitive as the size of the input matrix increases. In an effort to address these issues, the authors in \cite{emelianenko2024adaptive} developed a general framework that pairs a column-partitioning routine with a column-selection algorithm. Two of the four algorithms presented in that work paired the Centroidal Voronoi Orthogonal Decomposition (\textsf{CVOD}) and an adaptive variant (\textsf{adaptCVOD}) with the Discrete Empirical Interpolation Method (\textsf{DEIM}) \cite{sorensen2016deim}. In this work, we extend this framework and pair the \textsf{CVOD}-type algorithms with any CSSP algorithm that returns linearly independent columns. Our results include detailed error bounds for the solutions provided by these paired algorithms, as well as expressions that explicitly characterize how the quality of the selected column partition affects the resulting CSSP solution.
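A minimal sketch of the partition-then-select pattern (with illustrative stand-ins: k-means on columns in place of \textsf{CVOD}, and column-pivoted QR as one CSSP routine that returns linearly independent columns):

\begin{verbatim}
import numpy as np
from scipy.linalg import qr
from sklearn.cluster import KMeans

def partition_then_select(A, n_parts, k_per_part, seed=0):
    """Partition the columns of A, then run a CSSP routine per block."""
    labels = KMeans(n_clusters=n_parts, n_init=10,
                    random_state=seed).fit_predict(A.T)
    selected = []
    for c in range(n_parts):
        idx = np.flatnonzero(labels == c)
        # Column-pivoted QR: leading pivots give independent columns.
        _, _, piv = qr(A[:, idx], mode="economic", pivoting=True)
        selected.extend(idx[piv[:min(k_per_part, len(idx))]])
    return np.array(sorted(selected))

A = np.random.default_rng(0).standard_normal((100, 60))
cols = partition_then_select(A, n_parts=4, k_per_part=3)
A_sel = A[:, cols]   # columns feeding an interpolative/CUR-type factorization
\end{verbatim}

The error bounds in the paper quantify exactly how the quality of the column partition (here, the clustering) propagates into the final CSSP solution.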