Through an uncertainty quantification (UQ) perspective, we show that score-based generative models (SGMs) are provably robust to the multiple sources of error arising in practical implementations. Our primary tool is the Wasserstein uncertainty propagation (WUP) theorem, a model-form UQ bound that describes how the $L^2$ error from learning the score function propagates to a Wasserstein-1 ($\mathbf{d}_1$) ball around the true data distribution under the evolution of the Fokker-Planck equation. We show how errors due to (a) finite-sample approximation, (b) early stopping, (c) score-matching objective choice, (d) score-function parametrization expressiveness, and (e) reference distribution choice impact the quality of the generative model through a $\mathbf{d}_1$ bound expressed in computable quantities. The WUP theorem relies on Bernstein estimates for Hamilton-Jacobi-Bellman partial differential equations (PDEs) and the regularizing properties of diffusion processes. Specifically, PDE regularity theory shows that stochasticity is the key mechanism ensuring that SGM algorithms are provably robust. The WUP theorem applies to integral probability metrics beyond $\mathbf{d}_1$, such as the total variation distance and the maximum mean discrepancy. Sample complexity and generalization bounds in $\mathbf{d}_1$ follow directly from the WUP theorem. Our approach requires minimal assumptions, is agnostic to the manifold hypothesis, and avoids absolute continuity assumptions on the target distribution. Additionally, our results clarify the trade-offs among the multiple error sources in SGMs.
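For orientation (and not as a statement of the WUP theorem itself), the two standard objects such a bound connects are the Wasserstein-1 distance, viewed as an integral probability metric over 1-Lipschitz test functions, and the time-integrated $L^2$ error of a learned score $s_\theta$ against the true score of the forward diffusion; one common convention writes them as
\[
\mathbf{d}_1(\mu,\nu) = \sup_{\operatorname{Lip}(f)\le 1} \left| \int f\, d\mu - \int f\, d\nu \right|,
\qquad
\varepsilon^2 = \int_0^T \mathbb{E}_{x\sim p_t}\!\left[ \big\| s_\theta(x,t) - \nabla_x \log p_t(x) \big\|^2 \right] dt .
\]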
Many environmental processes such as rainfall, wind or snowfall are inherently spatial, and the modelling of their extremes has to take this feature into account. In addition, environmental processes often come with an angular component, e.g., wind speed and direction, or extreme snowfall and its time of occurrence within the year. This article proposes a Bayesian hierarchical model with a conditional independence assumption that aims at jointly modelling spatial extremes and an angular component. The proposed model relies on extreme value theory as well as recent developments for handling directional statistics over a continuous domain. Working within a Bayesian setting, a Gibbs sampler is introduced whose performance is analysed through a simulation study. The paper ends with an application to extreme wind speeds in France. Results show that extreme wind events in France mainly come from the west, apart from the Mediterranean part of France and the Alps.
In this paper, we propose a novel shape optimization approach for the source identification of elliptic equations. This identification problem arises from two application backgrounds: actuator placement in PDE-constrained optimal control and the regularized least-squares formulation of source identification. The optimization problem seeks both the source strength and its support. By eliminating the variable associated with the source strength, we reduce the problem to a shape optimization problem for a coupled elliptic system, known as the first-order optimality system. As a model problem, we derive the shape derivative for the regularized least-squares formulation of the inverse source problem and propose a gradient-descent shape optimization algorithm, implemented using the level-set method. Several numerical experiments are presented to demonstrate the efficiency of the proposed algorithm.
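As a rough illustration of the level-set machinery mentioned above (and not of the paper's specific shape derivative), a gradient-descent shape update transports the level-set function phi with a normal velocity field; the velocity `V` below is a placeholder standing in for the negative shape-derivative density.

```python
import numpy as np

def level_set_step(phi, V, dt, dx):
    """One explicit step of the transport equation phi_t + V |grad phi| = 0.

    phi : 2D array, level-set function (the shape is {phi < 0})
    V   : 2D array, normal velocity, e.g. minus a shape-derivative density
    """
    gx, gy = np.gradient(phi, dx)        # central differences (illustrative;
    grad_norm = np.sqrt(gx**2 + gy**2)   # production codes use upwind schemes)
    return phi - dt * V * grad_norm
```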
Threshold selection is a fundamental problem in any threshold-based extreme value analysis. While the models are asymptotically motivated, selecting an appropriate threshold for finite samples is difficult and highly subjective with standard methods. Inference for high quantiles can also be highly sensitive to the choice of threshold. Too low a threshold leads to bias in the fit of the extreme value model, while too high a choice leads to unnecessary additional uncertainty in the estimation of model parameters. We develop a novel methodology for automated threshold selection that directly tackles this bias-variance trade-off. We also develop a method to account for the uncertainty in the threshold estimation and to propagate this uncertainty through to inference on high quantiles. Through a simulation study, we demonstrate the effectiveness of our method for threshold selection and subsequent extreme quantile estimation relative to the leading existing methods, and show that the method's effectiveness is not sensitive to the tuning parameters. We apply our method to the well-known, troublesome example of the River Nidd dataset.
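For context, any such selection procedure builds on the standard peaks-over-threshold setup: for each candidate threshold, a generalized Pareto distribution is fitted to the exceedances and the stability of the fitted parameters is examined. A minimal sketch of that setup (not the authors' automated method; function name and the minimum-exceedance cutoff are illustrative) follows.

```python
import numpy as np
from scipy.stats import genpareto

def gpd_fits_over_thresholds(data, thresholds, min_exceedances=10):
    """Fit a GPD to threshold excesses for each candidate threshold.

    Returns (threshold, n_exceedances, shape, scale) tuples; parameter
    (in)stability across thresholds is the classical diagnostic that
    automated selection methods formalize."""
    fits = []
    for u in thresholds:
        excesses = data[data > u] - u
        if excesses.size < min_exceedances:
            continue
        shape, _, scale = genpareto.fit(excesses, floc=0)  # location fixed at 0
        fits.append((u, excesses.size, shape, scale))
    return fits
```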
The aim of this article is to introduce a new methodology for constructing morphings between shapes that have identical topology. The morphing is obtained by deforming a reference shape onto the target shape through the solution of a sequence of linear elasticity equations. In particular, our approach does not assume any knowledge of a boundary parametrization. Furthermore, we demonstrate how constraints can be imposed on specific points, lines and surfaces in the reference domain to ensure alignment with their counterparts in the target domain after morphing. Additionally, we show how the proposed methodology can be integrated into an offline-online paradigm, which is useful in reduced-order modeling scenarios involving variable shapes. This framework facilitates the efficient computation of morphings in various geometric configurations, thus improving the versatility and applicability of the approach. The methodology is illustrated on the regression problem of predicting the drag and lift coefficients of airfoils with non-parameterized, variable shapes.
Generalized linear models (GLMs) arguably represent the standard approach for statistical regression beyond the Gaussian likelihood scenario. When Bayesian formulations are employed, the general absence of a tractable posterior distribution has motivated the development of deterministic approximations, which are generally more scalable than sampling techniques. Among them, expectation propagation (EP) has shown remarkable accuracy, usually higher than that of many variational Bayes solutions. However, the higher computational cost of EP has raised concerns about its practical feasibility, especially in high-dimensional settings. We address these concerns by deriving a novel efficient formulation of EP for GLMs whose cost scales linearly in the number of covariates p. This reduces the state-of-the-art O(p^2 n) per-iteration computational cost of the EP routine for GLMs to O(p n min{p,n}), with n being the sample size. We also show that, for binary models and log-linear GLMs, approximate predictive means can be obtained at no additional cost. To preserve efficient moment matching for count data, we propose employing a combination of log-normal Laplace transform approximations, avoiding numerical integration. These novel results open the possibility of employing EP in settings where it was previously believed to be practically unfeasible. Improvements over state-of-the-art approaches are illustrated on both simulated and real data. The efficient EP implementation is available at //github.com/niccoloanceschi/EPglm.
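The O(p n min{p,n}) scaling is the kind of cost obtained from low-rank (Woodbury-type) linear algebra when p exceeds n. The sketch below is a generic illustration of that mechanism for a Gaussian-like precision matrix diag(a) + X^T diag(d) X; it is not a reproduction of the authors' EP updates, and the variable names are illustrative.

```python
import numpy as np

def woodbury_solve(X, d, a, b):
    """Solve (diag(a) + X.T @ diag(d) @ X) m = b via the Woodbury identity.

    X : (n, p) design matrix, d : (n,) site precisions, a : (p,) prior precisions.
    Dominant cost is O(n^2 p + n^3) rather than O(p^2 n), i.e. linear in p
    whenever n < p."""
    Ainv_b = b / a                       # A^{-1} b
    XAinv = X / a                        # X A^{-1}, shape (n, p)
    S = np.diag(1.0 / d) + XAinv @ X.T   # D^{-1} + X A^{-1} X^T, shape (n, n)
    t = np.linalg.solve(S, X @ Ainv_b)
    return Ainv_b - (X.T @ t) / a        # A^{-1} b - A^{-1} X^T t
```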
In this work, we present a model order reduction technique for nonlinear structures assembled from components. The reduced-order model is constructed by reducing the substructures with proper orthogonal decomposition (POD) and connecting them by a mortar-tied contact formulation. The snapshots for the substructure projection matrices are computed on the substructure level by the POD method. The snapshots are computed using a random sampling procedure based on a parametrization of the boundary conditions. To reduce the computational effort of the snapshot computation, full-order simulations of the substructures are only computed when the error of the reduced solution is above a threshold. In numerical examples, we show the accuracy and efficiency of the method for nonlinear problems involving material and geometric nonlinearities as well as non-matching meshes. We are able to predict solutions of systems that were not included in the snapshot computations.
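A generic sketch of the POD step referred to above: the projection basis for each substructure is obtained from the dominant left singular vectors of its snapshot matrix, truncated by an energy criterion. The function name and energy tolerance are illustrative, not taken from the paper.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Build a POD projection basis from a snapshot matrix (n_dofs x n_snapshots).

    Keeps the smallest number of left singular vectors capturing the requested
    fraction of the snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :r]
```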
We show that the limiting variance of a sequence of estimators for a structured covariance matrix has a general form that appears as the variance of a scaled projection of a random matrix of radial type; a similar result is obtained for the corresponding sequence of estimators for the vector of variance components. These results are illustrated by the limiting behavior of estimators for a linear covariance structure in a variety of multivariate statistical models. We also derive a characterization of the influence function of the corresponding functionals. Furthermore, we derive the limiting distribution and influence function of scale-invariant mappings of such estimators and their corresponding functionals. As a consequence, the asymptotic relative efficiency of different estimators for the shape component of a structured covariance matrix can be compared by means of a single scalar, and the gross error sensitivity of the corresponding influence functions can be compared by means of a single index. Similar results are obtained for estimators of the normalized vector of variance components. We apply our results to investigate how the efficiency, gross error sensitivity, and breakdown point of S-estimators for the normalized variance components are simultaneously affected by varying their cutoff value.
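For reference, the influence functions mentioned here are understood in the usual Hampel sense: for a functional $T$ at a distribution $F$, the influence function at a point $x$ is the Gateaux derivative along contamination at $x$,
\[
\mathrm{IF}(x; T, F) = \lim_{\epsilon \downarrow 0} \frac{T\big((1-\epsilon)F + \epsilon\,\delta_x\big) - T(F)}{\epsilon},
\]
where $\delta_x$ denotes the point mass at $x$; the gross error sensitivity is the supremum over $x$ of the norm of this quantity.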
In shape-constrained nonparametric inference, it is often necessary to perform preliminary tests to verify whether a probability mass function (p.m.f.) satisfies qualitative constraints such as monotonicity, convexity or, more generally, $k$-monotonicity. In this paper, we are interested in testing $k$-monotonicity of a compactly supported p.m.f., with our main focus on monotonicity and convexity, i.e., $k \in \{1,2\}$. We consider new testing procedures that are directly derived from the definition of $k$-monotonicity and rely exclusively on the empirical measure, as well as tests based on the projection of the empirical measure onto the class of $k$-monotone p.m.f.s. The asymptotic behaviour of the introduced test statistics is derived, and a simulation study is performed to assess the finite-sample performance of all the proposed tests. Applications to real datasets are presented to illustrate the theory.
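In one common convention (stated here only for orientation), with $\Delta p(j) := p(j+1) - p(j)$ for a p.m.f. $p$ supported on $\{0,1,\dots,m\}$, the two cases emphasized above read
\[
k=1 \ (\text{monotonicity}): \ \Delta p(j) \le 0 \ \text{for all } j,
\qquad
k=2 \ (\text{convexity}): \ \Delta^2 p(j) = p(j+2) - 2p(j+1) + p(j) \ge 0 \ \text{for all } j .
\]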
Visibility problems have been investigated for a long time under different assumptions, as they pose challenging combinatorial questions and are connected to robot navigation problems. The mutual-visibility problem in a graph $G$ on $n$ vertices asks for a largest set of vertices $X\subseteq V(G)$, also called a $\mu$-set, such that for any two vertices $u,v\in X$, there is a shortest $u,v$-path $P$ whose internal vertices are all not in $X$. This means that $u$ and $v$ are visible w.r.t. $X$. Variations of this problem, known as the total, outer, and dual mutual-visibility problems, depend on the visibility property of vertices inside and/or outside $X$. The mutual-visibility problem and all its variations are known to be $\mathsf{NP}$-complete on graphs of diameter $4$. In this paper, we design a polynomial-time algorithm that finds a $\mu$-set of size $\Omega\left( \sqrt{n/ \overline{D}} \right)$, where $\overline D$ is the average distance between any two vertices of $G$. Moreover, we show inapproximability results for all visibility problems on graphs of diameter $2$ and strengthen the inapproximability ratios for graphs of diameter $3$ or larger. More precisely, for graphs of diameter at least $3$ and for every constant $\varepsilon > 0$, we show that the mutual-visibility and dual mutual-visibility problems are not approximable within a factor of $n^{1/3-\varepsilon}$, while the outer and total mutual-visibility problems are not approximable within a factor of $n^{1/2 - \varepsilon}$, unless $\mathsf{P}=\mathsf{NP}$. Furthermore, we study the relationship between the mutual-visibility number and the general position number, in which no three distinct vertices $u,v,w$ of $X$ belong to a common shortest path of $G$.
Detecting and quantifying causality is a focal topic in science, engineering, and interdisciplinary studies. Causal studies of non-intervention systems attract much attention yet remain extremely challenging. To address this challenge, we propose a framework named Interventional Dynamical Causality (IntDC) for such non-intervention systems, along with its computational criterion, Interventional Embedding Entropy (IEE), to quantify causality. The IEE criterion theoretically and numerically enables deciphering IntDC solely from observational (non-interventional) time-series data, without requiring any knowledge of dynamical models or real interventions in the considered system. Performance demonstrations showed the accuracy and robustness of IEE on benchmark simulated systems as well as real-world systems, including the neural connectomes of C. elegans, COVID-19 transmission networks in Japan, and regulatory networks surrounding key circadian genes.