Reidl, S\'anchez Villaamil, and Stavropoulos (2019) characterized graph classes of bounded expansion as follows: A class $\mathcal{C}$ closed under subgraphs has bounded expansion if and only if there exists a function $f:\mathbb{N} \to \mathbb{N}$ such that for every graph $G \in \mathcal{C}$, every nonempty subset $A$ of vertices in $G$ and every nonnegative integer $r$, the number of distinct intersections between $A$ and a ball of radius $r$ in $G$ is at most $f(r) |A|$. When $\mathcal{C}$ has bounded expansion, the function $f(r)$ coming from existing proofs is typically exponential. In the special case of planar graphs, Soko{\l}owski (2021) conjectured that $f(r)$ can be taken to be a polynomial. In this paper, we prove this conjecture: For every nonempty subset $A$ of vertices in a planar graph $G$ and every nonnegative integer $r$, the number of distinct intersections between $A$ and a ball of radius $r$ in $G$ is $O(r^4 |A|)$. We also show that a polynomial bound holds more generally for every proper minor-closed class of graphs.
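As a concrete illustration of the quantity being bounded, here is a brute-force sketch (ours, not the paper's method; the helper name `neighborhood_complexity` is an assumption) that counts the distinct intersections between a vertex set $A$ and the radius-$r$ balls of a graph, using `networkx`:

```python
import networkx as nx

def neighborhood_complexity(G, A, r):
    """Count the distinct sets B_r(v) & A over all vertices v of G,
    where B_r(v) is the ball of radius r around v (brute force)."""
    A = set(A)
    profiles = set()
    for v in G.nodes:
        # Vertices within distance r of v.
        ball = set(nx.single_source_shortest_path_length(G, v, cutoff=r))
        profiles.add(frozenset(ball & A))
    return len(profiles)

# Toy example: a path with A containing every other vertex; by the
# result above, the count is O(r^4 |A|) for any planar graph.
G = nx.path_graph(20)
A = set(range(0, 20, 2))
print(neighborhood_complexity(G, A, r=2))
```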
Classical Krylov subspace projection methods for the solution of a linear system $Ax = b$ output an approximate solution $\widetilde{x}\simeq x$. Recently, it has been recognized that projection methods can be understood from a statistical perspective: probabilistic projection methods return a distribution $p(\widetilde{x})$ in place of a point estimate $\widetilde{x}$. The resulting uncertainty, codified as a distribution, can, in theory, be meaningfully combined with other uncertainties, propagated through computational pipelines, and used in the framework of probabilistic decision theory. The problem we address is that current probabilistic projection methods lead to poorly calibrated posterior distributions. We improve the covariance matrix from previous works so that it does not contain such undesirable objects as $A^{-1}$ or $A^{-1}A^{-T}$, yields nontrivial uncertainty, and reproduces an arbitrary projection method as the mean of the posterior distribution. We also propose a variant that is numerically inexpensive when the uncertainty is calibrated a priori. Since it usually is not, we put forward a practical way to calibrate uncertainty that performs reasonably well, albeit at the expense of roughly doubling the numerical cost of the underlying projection method.
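To make the interface of such methods concrete, here is a minimal sketch (ours, not the construction proposed in the abstract): $k$ steps of conjugate gradients whose iterate serves as the posterior mean, paired with a placeholder covariance, a scaled projector onto the unexplored directions, so that the uncertainty vanishes on the Krylov space already searched. All names and the covariance choice are our assumptions.

```python
import numpy as np

def probabilistic_cg(A, b, k, scale=1.0):
    """Run k steps of conjugate gradients on A x = b and return a
    Gaussian posterior N(x_k, C_k) whose mean is the CG iterate.
    The covariance is a simple placeholder: 'scale' times the
    projector onto the orthogonal complement of the search space."""
    n = len(b)
    x = np.zeros(n)
    r = b.copy()
    p = r.copy()
    S = []                                   # normalized search directions
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        S.append(p / np.linalg.norm(p))
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    Q = np.column_stack(S)                   # n x k basis of explored space
    C = scale * (np.eye(n) - Q @ np.linalg.pinv(Q))   # residual uncertainty
    return x, C

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)                  # symmetric positive definite
b = rng.standard_normal(8)
mean, cov = probabilistic_cg(A, b, k=4)
```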
Multi-fidelity models provide a framework for integrating computational models of varying complexity, allowing for accurate predictions while optimizing computational resources. These models are especially beneficial when acquiring high-accuracy data is costly or computationally intensive. This review offers a comprehensive analysis of multi-fidelity models, focusing on their applications in scientific and engineering fields, particularly in optimization and uncertainty quantification. It classifies publications on multi-fidelity modeling according to several criteria, including application area, surrogate model selection, types of fidelity, combination methods and year of publication. The study investigates techniques for combining different fidelity levels, with an emphasis on multi-fidelity surrogate models. This work discusses reproducibility, open-sourcing methodologies and benchmarking procedures to promote transparency. The manuscript also includes educational toy problems to enhance understanding. Additionally, this paper outlines best practices for presenting multi-fidelity-related savings in a standardized, succinct and yet thorough manner. The review concludes by examining current trends in multi-fidelity modeling, including emerging techniques, recent advancements, and promising research directions.
We expound on some known lower bounds on the quadratic Wasserstein distance between random vectors in $\mathbb{R}^n$, with an emphasis on affine transformations that have been used in manifold learning of data in Wasserstein space. In particular, we give concrete lower bounds for rotated copies of random vectors in $\mathbb{R}^2$ by computing the Bures metric between the covariance matrices. We also derive upper bounds for compositions of affine maps, which yield a fruitful variety of diffeomorphisms applied to an initial data measure. We apply these bounds to various distributions, including ones lying on a 1-dimensional manifold in $\mathbb{R}^2$, and illustrate the quality of the bounds. Finally, we give a framework for mimicking handwritten digit or alphabet datasets that can be applied in a manifold learning framework.
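The covariance-based lower bounds referred to rest on a classical fact (the Gelbrich bound): for measures with means $m_i$ and covariances $\Sigma_i$, $W_2^2 \geq \|m_1 - m_2\|^2 + d_B(\Sigma_1,\Sigma_2)^2$, where $d_B$ is the Bures metric, $d_B^2(\Sigma_1,\Sigma_2) = \mathrm{tr}\,\Sigma_1 + \mathrm{tr}\,\Sigma_2 - 2\,\mathrm{tr}\big((\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2})^{1/2}\big)$. The sketch below (ours) evaluates this bound for a rotated copy of a planar covariance:

```python
import numpy as np
from scipy.linalg import sqrtm

def bures(S1, S2):
    """Bures distance between positive semidefinite matrices."""
    root = sqrtm(S1)
    cross = sqrtm(root @ S2 @ root)
    d2 = np.trace(S1) + np.trace(S2) - 2 * np.real(np.trace(cross))
    return np.sqrt(max(d2, 0.0))

def w2_lower_bound(m1, S1, m2, S2):
    """Gelbrich-type lower bound on the quadratic Wasserstein distance."""
    return np.sqrt(np.linalg.norm(m1 - m2) ** 2 + bures(S1, S2) ** 2)

# Rotated copy of an anisotropic covariance in R^2.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([4.0, 1.0])
print(w2_lower_bound(np.zeros(2), S, np.zeros(2), R @ S @ R.T))
```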
Given a graph~$G$, the domination number, denoted by~$\gamma(G)$, is the minimum cardinality of a dominating set in~$G$. Dual to the notion of domination is the packing number of a graph. A packing of~$G$ is a set of vertices whose pairwise distance is at least three, and the packing number~$\rho(G)$ of~$G$ is the maximum cardinality of such a set. The inequality~$\rho(G) \leq \gamma(G)$ is well known. Henning et al.\ conjectured that~$\gamma(G) \leq 2\rho(G)+1$ if~$G$ is subcubic. In this paper we make progress towards this conjecture by showing that~${\gamma(G) \leq \frac{120}{49}\rho(G)}$ if~$G$ is a bipartite cubic graph. We also show that $\gamma(G) \leq 3\rho(G)$ if~$G$ is a maximal outerplanar graph, and that~$\gamma(G) \leq 2\rho(G)$ if~$G$ is a biconvex graph; in the latter case, we show that this upper bound is tight.
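Both invariants are easy to compute by brute force on small graphs, which makes the inequalities above easy to probe. The sketch below (ours) does exactly that with `networkx`, on the Petersen graph: a cubic graph with $\rho = 1$ (its diameter is 2) and $\gamma = 3$, so it attains the conjectured bound $\gamma \leq 2\rho + 1$ with equality.

```python
import itertools
import networkx as nx

def domination_number(G):
    """Smallest k such that some k-set dominates G (brute force)."""
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for S in itertools.combinations(nodes, k):
            closed = set(S).union(*(G[v] for v in S))  # closed neighborhood
            if closed == set(nodes):
                return k

def packing_number(G):
    """Largest set of vertices with pairwise distance >= 3 (brute force)."""
    d = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G.nodes)
    for k in range(len(nodes), 0, -1):
        for S in itertools.combinations(nodes, k):
            if all(d[u].get(v, 10**9) >= 3
                   for u, v in itertools.combinations(S, 2)):
                return k
    return 0

G = nx.petersen_graph()                        # a cubic graph
print(packing_number(G), domination_number(G))  # 1 3: rho <= gamma <= 2*rho+1
```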
We introduce $\varepsilon$-approximate versions of the notion of Euclidean vector bundle for $\varepsilon \geq 0$, which recover the classical notion of Euclidean vector bundle when $\varepsilon = 0$. In particular, we study \v{C}ech cochains with coefficients in the orthogonal group that satisfy an approximate cocycle condition. We show that $\varepsilon$-approximate vector bundles can be used to represent classical vector bundles when $\varepsilon > 0$ is sufficiently small. We also introduce distances between approximate vector bundles and use them to prove that sufficiently similar approximate vector bundles represent the same classical vector bundle. This gives a way of specifying vector bundles over finite simplicial complexes using a finite amount of data, and also allows for some tolerance to noise when working with vector bundles in an applied setting. As an example, we prove a reconstruction theorem for vector bundles from finite samples. We give algorithms for the effective computation of low-dimensional characteristic classes of vector bundles directly from discrete and approximate representations and illustrate the usage of these algorithms with computational examples.
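The approximate cocycle condition studied here is simple to state in code: on every triangle $\{i,j,k\}$ of the complex, the transition matrices should compose up to an error of at most $\varepsilon$. The following sketch (ours; the composition convention $\Omega_{jk}\Omega_{ij} \approx \Omega_{ik}$ and all names are assumptions, not the paper's notation) measures the worst-case defect of a given cochain:

```python
import numpy as np

def cocycle_defect(omega):
    """Given transition matrices omega[(i, j)] in O(d) on the edges of a
    complex, return the largest operator-norm deviation from the cocycle
    condition omega_jk @ omega_ij = omega_ik over all triangles present.
    A classical Euclidean vector bundle has defect 0; an epsilon-approximate
    one has defect at most epsilon."""
    worst = 0.0
    verts = sorted({v for e in omega for v in e})
    for i in verts:
        for j in verts:
            for k in verts:
                if (i, j) in omega and (j, k) in omega and (i, k) in omega:
                    defect = np.linalg.norm(
                        omega[(j, k)] @ omega[(i, j)] - omega[(i, k)], 2)
                    worst = max(worst, defect)
    return worst

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# A slightly perturbed cochain of planar rotations on one triangle.
omega = {(0, 1): rot(0.3), (1, 2): rot(0.5), (0, 2): rot(0.8 + 1e-3)}
print(cocycle_defect(omega))   # small but nonzero: approximately a bundle
```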
Exact methods for exponentiating matrices of dimension $N$ can be computationally expensive in terms of execution time ($N^{3}$) and memory requirements ($N^{2}$), not to mention numerical precision issues. A type of matrix often exponentiated in the sciences is the rate matrix. Here we explore five methods to exponentiate rate matrices, some of which apply even more broadly to other matrix types. Three of the methods leverage a mathematical analogy between computing matrix elements of a matrix exponential and computing transition probabilities of a dynamical process (technically a Markov jump process, MJP, typically simulated using the Gillespie algorithm). In doing so, we identify a novel MJP-based method, with favorable computational scaling, that relies on restricting the number of "trajectory" jumps based on the magnitude of the matrix elements. We then discuss this method's downstream implications for the mixing properties of Monte Carlo posterior samplers. We also benchmark two other methods of matrix exponentiation valid for any matrix (beyond rate matrices and, more generally, positive definite matrices) related to solving differential equations: Runge-Kutta integrators and Krylov subspace methods. Under conditions where both the largest matrix element and the number of non-vanishing elements scale linearly with $N$ -- reasonable conditions for rate matrices often exponentiated -- the computational time scaling of the most competitive methods (Krylov and one of the MJP-based methods) reduces to $N^2$, with total memory requirements of $N$.
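For readers who want to try the comparison, the sketch below (ours, using SciPy rather than the paper's implementations) builds a sparse rate matrix and contrasts a dense $N^3$ exponentiation against `expm_multiply`, which applies $e^{Q}$ to a vector via a truncated-Taylor scheme, comparable in spirit, and in scaling for sparse $Q$, to the Krylov methods benchmarked here:

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(1)
N = 200

# Sparse rate matrix: nonnegative off-diagonal rates, columns summing
# to zero, so exp(Q t) maps probability vectors to probability vectors.
Q = sparse_random(N, N, density=5 / N, random_state=rng, format="lil")
Q.setdiag(0)
Q.setdiag(-np.asarray(Q.sum(axis=0)).ravel())
Q = Q.tocsr()

p0 = np.zeros(N)
p0[0] = 1.0                       # start in state 0

# Action of the matrix exponential without forming the dense exp(Q).
p_fast = expm_multiply(Q, p0)

# Dense reference: N^3 time, N^2 memory.
p_ref = expm(Q.toarray()) @ p0
print(np.max(np.abs(p_fast - p_ref)))
```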
A proper $k$-coloring of a graph $G$ is a \emph{neighbor-locating $k$-coloring} if for each pair of vertices in the same color class, the two sets of colors found in their respective neighborhoods are different. The \emph{neighbor-locating chromatic number} $\chi_{NL}(G)$ is the minimum $k$ for which $G$ admits a neighbor-locating $k$-coloring. A proper $k$-vertex-coloring of a graph $G$ is a \emph{locating $k$-coloring} if for each pair of vertices $x$ and $y$ in the same color class, there exists a color class $S_i$ such that $d(x,S_i)\neq d(y,S_i)$. The \emph{locating chromatic number} $\chi_{L}(G)$ is the minimum $k$ for which $G$ admits a locating $k$-coloring. Our main results concern the largest possible order of a sparse graph of given neighbor-locating chromatic number. More precisely, we prove that if $G$ has order $n$, neighbor-locating chromatic number $k$ and average degree at most $2a$, where $2a\le k-1$ is a positive integer, then $n$ is upper-bounded by $\mathcal{O}(a^2k^{2a+1})$. We also design a family of graphs of bounded maximum degree whose order comes close to reaching this upper bound. Our upper bound generalizes two previous bounds from the literature, which were obtained for graphs of bounded maximum degree and graphs of bounded cycle rank, respectively. We further prove that deciding whether $\chi_L(G)\le k$ and whether $\chi_{NL}(G)\le k$ are NP-complete problems for sparse graphs: more precisely, for graphs with average degree at most 7, maximum average degree at most 20 and that are $4$-partite. We also study the possible relations between the ordinary chromatic number, the locating chromatic number and the neighbor-locating chromatic number of a graph.
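The defining condition of a neighbor-locating coloring is easy to verify mechanically; the sketch below (ours) checks it for a given coloring with `networkx`, illustrated on the 5-cycle, which admits a neighbor-locating 3-coloring:

```python
import itertools
import networkx as nx

def is_neighbor_locating(G, coloring):
    """Check that 'coloring' (a vertex -> color dict) is a proper coloring
    in which vertices of the same color class see different sets of
    colors in their neighborhoods."""
    # Proper coloring?
    if any(coloring[u] == coloring[v] for u, v in G.edges):
        return False
    # Same color class => different neighborhood color sets.
    for u, v in itertools.combinations(G.nodes, 2):
        if coloring[u] == coloring[v]:
            if {coloring[w] for w in G[u]} == {coloring[w] for w in G[v]}:
                return False
    return True

G = nx.cycle_graph(5)
c = {0: 0, 1: 1, 2: 0, 3: 2, 4: 1}
print(is_neighbor_locating(G, c))   # True
```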
We present and analyze a new finite volume scheme of Godunov type for a nonlinear scalar conservation law whose flux function has a discontinuous coefficient, due to time-dependent changes in its sign along a Lipschitz continuous curve.
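The paper's scheme is not reproduced in the abstract. As background only, here is the classical Godunov finite volume update for a plain scalar conservation law (inviscid Burgers' equation with periodic boundary), written by us; it is not the scheme of the paper, which handles a sign-changing discontinuous coefficient:

```python
import numpy as np

def godunov_flux(ul, ur):
    """Exact Godunov flux for the convex flux f(u) = u^2 / 2."""
    fl, fr = 0.5 * ul**2, 0.5 * ur**2
    shock_flux = np.where(ul + ur > 0, fl, fr)   # shock speed (ul + ur) / 2
    rare_flux = np.where(ul > 0, fl, np.where(ur < 0, fr, 0.0))
    return np.where(ul > ur, shock_flux, rare_flux)

def godunov_step(u, dx, dt):
    """One explicit finite volume update with periodic boundary."""
    F = godunov_flux(u, np.roll(u, -1))   # fluxes at right cell interfaces
    return u - (dt / dx) * (F - np.roll(F, 1))

# Smooth initial data steepening into a shock (CFL = dt * max|u| / dx = 0.4).
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x)
for _ in range(100):
    u = godunov_step(u, dx=x[1] - x[0], dt=0.002)
```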
Input-output conformance simulation (iocos) has been proposed by Gregorio-Rodr\'iguez, Llana and Mart\'inez-Torres as a simulation-based behavioural preorder underlying model-based testing. This relation is inspired by Tretmans' classic ioco relation, but has better worst-case complexity than ioco and supports stepwise refinement. The goal of this paper is to develop the theory of iocos by studying logical characterisations of this relation, rule formats for it and its compositionality. More specifically, this article presents characterisations of iocos in terms of modal logics and compares them with an existing logical characterisation of ioco proposed by Beohar and Mousavi. It also offers a characteristic-formula construction for iocos over finite processes in an extension of the proposed modal logics with greatest fixed points. A precongruence rule format for iocos and a rule format ensuring that operations take quiescence properly into account are also given. Both rule formats are based on the GSOS format of Bloom, Istrail and Meyer. The general modal decomposition methodology of Fokkink and van Glabbeek is used to show how to check, in a compositional way, the satisfaction of properties expressed in the logic for iocos, for operations specified by rules in the precongruence rule format for iocos.
Vecchia approximation has been widely used to accurately scale Gaussian-process (GP) inference to large datasets, by expressing the joint density as a product of conditional densities with small conditioning sets. We study fixed-domain asymptotic properties of Vecchia-based GP inference for a large class of covariance functions (including Mat\'ern covariances) with boundary conditioning. In this setting, we establish that consistency and asymptotic normality of maximum exact-likelihood estimators imply those of maximum Vecchia-likelihood estimators, and that exact GP prediction can be approximated accurately by Vecchia GP prediction, provided that the size of the conditioning sets grows polylogarithmically with the data size. Hence, Vecchia-based inference with quasilinear complexity is asymptotically equivalent to exact GP inference with cubic complexity. This also provides a general new result on the screening effect. Our findings are illustrated by numerical experiments, which also show that Vecchia approximation can be more accurate than alternative approaches such as covariance tapering and reduced-rank approximations.
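To fix ideas, the sketch below (ours, with a Mat\'ern-3/2 kernel, a mean-zero GP, and a simple left-to-right ordering in one dimension, all assumptions for illustration) computes a Vecchia log-likelihood as a product of univariate conditionals, each conditioning on at most $m$ previous observations, and compares it with the exact cubic-cost log-likelihood:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def matern32(d, range_=1.0, var=1.0):
    """Matern covariance with smoothness 3/2."""
    s = np.sqrt(3.0) * d / range_
    return var * (1.0 + s) * np.exp(-s)

def vecchia_loglik(y, X, m):
    """Vecchia approximation of the Gaussian log-likelihood: each
    observation conditions only on the m previous ones in the ordering."""
    ll = 0.0
    for i in range(len(y)):
        c = list(range(max(0, i - m), i))   # nearest previous indices
        D = np.abs(X[[i] + c][:, None] - X[[i] + c][None, :])
        K = matern32(D)
        if c:
            w = np.linalg.solve(K[1:, 1:], K[0, 1:])
            mu = w @ y[c]
            var = K[0, 0] - K[0, 1:] @ w
        else:
            mu, var = 0.0, K[0, 0]
        ll += norm.logpdf(y[i], loc=mu, scale=np.sqrt(var))
    return ll

rng = np.random.default_rng(2)
n = 300
X = np.sort(rng.uniform(0.0, 10.0, n))
K = matern32(np.abs(X[:, None] - X[None, :])) + 1e-8 * np.eye(n)
y = np.linalg.cholesky(K) @ rng.standard_normal(n)

print(vecchia_loglik(y, X, m=10))                     # quasilinear in n
print(multivariate_normal(np.zeros(n), K).logpdf(y))  # exact, cubic in n
```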