Most existing diffusion models use Gaussian noise for training and sampling across all time steps, which may not optimally account for the frequency content reconstructed by the denoising network. Despite the diverse applications of correlated noise in computer graphics, its potential for improving the training process has been underexplored. In this paper, we introduce a novel and general class of diffusion models that takes correlated noise within and across images into account. More specifically, we propose a time-varying noise model to incorporate correlated noise into the training process, as well as a method for the fast generation of correlated noise masks. Our model is built upon deterministic diffusion models and utilizes blue noise to help improve generation quality compared to using Gaussian white (random) noise alone. Further, our framework allows introducing correlation across images within a single mini-batch to improve gradient flow. We perform both qualitative and quantitative evaluations on a variety of datasets, achieving improvements over existing deterministic diffusion models on different tasks in terms of the FID metric.
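As a rough illustration of the blue-noise idea above (not the authors' mask-generation method), the following sketch shapes white Gaussian noise into a blue-noise-like mask by attenuating low frequencies in the Fourier domain; the `cutoff` parameter is a hypothetical knob.

```python
import numpy as np

def blue_noise_mask(h, w, cutoff=0.1, rng=None):
    """Shape white Gaussian noise into a blue-noise-like mask by
    suppressing low frequencies in the Fourier domain (illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal((h, w))
    # Radial frequency grid
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    # High-pass weight: attenuate energy below the cutoff frequency
    weight = np.clip(radius / cutoff, 0.0, 1.0)
    shaped = np.fft.ifft2(np.fft.fft2(white) * weight).real
    # Renormalize to unit variance so it can stand in for N(0, 1) noise
    return shaped / shaped.std()
```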
Foundational vision transformer models have shown impressive few-shot performance on many vision tasks. This research presents a novel investigation into the application of parameter-efficient fine-tuning methods within an active learning (AL) framework, to advance the sample selection process in extremely budget-constrained classification tasks. The focus on image datasets known for their out-of-distribution characteristics adds a layer of complexity and relevance to our study. Through a detailed evaluation, we illustrate the improved AL performance on these challenging datasets, highlighting the strategic advantage of merging parameter-efficient fine-tuning methods with foundation models. This contributes to the broader discourse on optimizing AL strategies, presenting a promising avenue for future exploration in leveraging foundation models for efficient and effective data annotation in specialized domains.
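For context, here is a minimal sketch of the kind of acquisition step an AL framework uses; entropy-based uncertainty sampling is a standard baseline assumed for illustration, not a method taken from the paper.

```python
import numpy as np

def select_batch(probs, k):
    """Entropy-based acquisition: pick the k unlabeled examples the
    (e.g., parameter-efficiently fine-tuned) model is least sure about.
    probs has shape (n_unlabeled, n_classes)."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-k:]   # indices of the k most uncertain
```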
Diffusion models are typically trained using score matching, yet score matching is agnostic to the particular forward process that defines the model. This paper argues that Markov diffusion models enjoy an advantage over other types of diffusion model, as their associated operators can be exploited to improve the training process. In particular, (i) there exists an explicit formal solution to the forward process as a sequence of time-dependent kernel mean embeddings; and (ii) the derivation of score matching and related estimators can be streamlined. Building upon (i), we propose Riemannian diffusion kernel smoothing, which obviates the need for neural score approximation, at least in the low-dimensional context; building upon (ii), we propose operator-informed score matching, a variance-reduction technique that is straightforward to implement in both low- and high-dimensional diffusion modeling and is demonstrated to improve score matching in an empirical proof-of-concept.
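To fix ideas, here is the standard denoising score-matching objective to which operator-informed score matching would offer a lower-variance alternative; the `score_net` signature and the variance-exploding forward kernel are assumptions for illustration.

```python
import torch

def dsm_loss(score_net, x0, t, sigma_t):
    """Standard denoising score-matching loss with a variance-exploding
    forward kernel p(x_t | x_0) = N(x_0, sigma_t^2 I). The paper's
    operator-informed estimator targets the variance of this baseline."""
    eps = torch.randn_like(x0)
    xt = x0 + sigma_t * eps       # sample from the forward kernel
    target = -eps / sigma_t       # exact conditional score at xt
    return ((score_net(xt, t) - target) ** 2).sum(dim=-1).mean()
```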
Linear structural vector autoregressive models can be identified statistically without imposing restrictions on the model if the shocks are mutually independent and at most one of them is Gaussian. We show that this result extends to structural threshold and smooth transition vector autoregressive models incorporating a time-varying impact matrix defined as a weighted sum of the impact matrices of the regimes. We also discuss labelling of the shocks, maximum likelihood estimation of the parameters, and stationarity of the model. The introduced methods are implemented in the accompanying R package sstvars. Our empirical application studies the effects of the climate policy uncertainty shock on the U.S. macroeconomy. In a structural logistic smooth transition vector autoregressive model consisting of two regimes, we find that a positive climate policy uncertainty shock decreases production in times of low economic policy uncertainty but slightly increases it in times of high economic policy uncertainty.
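As a sketch of the time-varying impact matrix described above, assuming a two-regime logistic transition (the weights, parameter values, and example matrices are hypothetical):

```python
import numpy as np

def logistic_weights(s, c, gamma):
    """Two-regime logistic transition weights given the transition
    variable s, location c, and scale gamma (illustrative)."""
    a2 = 1.0 / (1.0 + np.exp(-gamma * (s - c)))
    return np.array([1.0 - a2, a2])

def impact_matrix(B_regimes, weights):
    """Time-varying impact matrix as a weighted sum of the regime
    impact matrices, as in the abstract."""
    return sum(w * B for w, B in zip(weights, B_regimes))

# Example: shock responses interpolate between regimes as s crosses c
B1 = np.array([[1.0, 0.0], [0.5, 1.0]])
B2 = np.array([[0.8, 0.2], [-0.3, 1.1]])
B_t = impact_matrix([B1, B2], logistic_weights(s=0.4, c=0.0, gamma=5.0))
```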
In the context of sketching for compressive mixture modeling, we revisit existing proofs of the Restricted Isometry Property (RIP) of sketching operators with respect to certain mixture models. After examining the shortcomings of existing guarantees, we propose an alternative analysis that circumvents the need to assume importance sampling when drawing random Fourier features to build random sketching operators. Our analysis is based on new deterministic bounds on the restricted isometry constant that depend solely on the set of frequencies used to define the sketching operator; we then leverage these bounds to establish concentration inequalities for random sketching operators that lead to the desired RIP guarantees. Our analysis also opens the door to theoretical guarantees for structured sketching with frequencies associated with fast random linear operators.
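For concreteness, here is a minimal sketch of a random Fourier feature sketching operator applied to a dataset, with frequencies drawn i.i.d. Gaussian rather than by importance sampling; all names and sizes are illustrative.

```python
import numpy as np

def sketch(X, Omega):
    """Empirical sketch of a dataset X (n x d) at frequencies Omega
    (m x d): one complex moment exp(i <omega_j, x>) per frequency,
    averaged over the data samples."""
    return np.exp(1j * X @ Omega.T).mean(axis=0)

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 2))      # toy dataset, n = 1000, d = 2
Omega = rng.standard_normal((50, 2))    # i.i.d. Gaussian frequencies
z = sketch(X, Omega)                    # compressed representation, size 50
```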
Diffusion, or score-based, models have recently shown high performance in image generation. They rely on a forward and a backward stochastic differential equation (SDE). The sampling of a data distribution is achieved by numerically solving the backward SDE or its associated flow ODE. Studying the convergence of these models requires controlling four different types of error: the initialization error, the truncation error, the discretization error, and the score approximation error. In this paper, we study theoretically the behavior of diffusion models and their numerical implementation when the data distribution is Gaussian. In this restricted framework, where the score function is a linear operator, we can derive the analytical solutions of the forward and backward SDEs as well as the associated flow ODE. This provides exact expressions for various Wasserstein errors, which enable us to compare the influence of each error type for any sampling scheme, thus allowing convergence to be monitored directly in the data space instead of relying on Inception features. Our experiments show that the numerical schemes recommended in the diffusion models literature are also the best sampling schemes for Gaussian distributions.
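The Gaussian setting indeed admits closed forms. Here is a minimal sketch, assuming a 1-D Ornstein-Uhlenbeck forward SDE $dx = -x\,dt + \sqrt{2}\,dW$ with $p_0 = N(\mu_0, s_0^2)$, of the exact linear score and an explicit Euler integration of the probability-flow ODE backward in time:

```python
import numpy as np

mu0, s0, T, steps = 2.0, 0.5, 5.0, 1000

def marginal(t):
    """Exact mean and variance of p_t for the OU forward SDE."""
    m = mu0 * np.exp(-t)
    v = s0**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
    return m, v

def score(x, t):
    """Exact (linear) score of the Gaussian marginal p_t."""
    m, v = marginal(t)
    return -(x - m) / v

# Probability-flow ODE dx/dt = -x - score(x, t), integrated backward
rng = np.random.default_rng(0)
x = rng.standard_normal(10000)           # init from the stationary N(0, 1)
dt = T / steps
for k in range(steps):
    t = T - k * dt
    x = x - dt * (-x - score(x, t))      # explicit Euler, backward in time
print(x.mean(), x.std())                 # approximately mu0 = 2.0, s0 = 0.5
```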
This work develops, for the first time, a face-centred finite volume (FCFV) solver for the simulation of laminar and turbulent viscous incompressible flows. The formulation relies on the Reynolds-averaged Navier-Stokes (RANS) equations coupled with the negative Spalart-Allmaras (SA) model, and three novel convective stabilisations, inspired by Riemann solvers, are derived and compared numerically. The resulting method achieves first-order convergence of the velocity, the velocity-gradient tensor and the pressure. FCFV accurately predicts engineering quantities of interest, such as drag and lift, on unstructured meshes and, by avoiding gradient reconstruction, the method is less sensitive to mesh quality than other FV methods, even in the presence of highly distorted and stretched cells. A monolithic and a staggered solution strategy for the RANS-SA system are derived and compared numerically. Numerical benchmarks, involving laminar and turbulent, steady and transient cases, are used to assess the performance, accuracy and robustness of the proposed FCFV method.
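As a reference point for what a Riemann-solver-inspired convective stabilisation looks like (the classical prototype, not one of the three novel stabilisations derived in the paper), here is a local Lax-Friedrichs flux in one dimension:

```python
import numpy as np

def lax_friedrichs_flux(uL, uR, f, lam):
    """Local Lax-Friedrichs (Rusanov) numerical flux, the prototypical
    Riemann-solver-inspired stabilisation: a centred flux plus an
    upwind-type dissipation scaled by the local wave speed lam."""
    return 0.5 * (f(uL) + f(uR)) - 0.5 * lam * (uR - uL)

# Example with the 1-D Burgers flux f(u) = u^2 / 2
flux = lax_friedrichs_flux(1.0, 0.2, lambda u: 0.5 * u**2, lam=1.0)
```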
In recent years, the evolution of end-to-end (E2E) automatic speech recognition (ASR) models has been remarkable, largely due to advances in deep learning architectures such as the transformer. On top of E2E systems, researchers have achieved substantial accuracy improvements by rescoring an E2E model's N-best hypotheses with a phoneme-based model. This raises an interesting question about where the improvements come from, beyond the system combination effect. We examine the underlying mechanisms driving these gains and propose an efficient joint training approach, in which E2E models are trained jointly with diverse modeling units. This methodology not only aligns the strengths of both phoneme- and grapheme-based models but also reveals that using these diverse modeling units in a synergistic way can significantly enhance model accuracy. Our findings offer new insights into the optimal integration of heterogeneous modeling units in the development of more robust and accurate ASR systems.
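One plausible reading of joint training with diverse modeling units is a shared encoder optimized against both phoneme and grapheme targets; the following weighted objective is a sketch under that assumption, not the paper's recipe.

```python
import torch

def joint_loss(logits_phone, logits_graph, tgt_phone, tgt_graph, lam=0.5):
    """Hypothetical joint objective: a shared encoder trained with both
    phoneme and grapheme targets, mixed by a weight lam (illustrative)."""
    ce = torch.nn.functional.cross_entropy
    return lam * ce(logits_phone, tgt_phone) + (1 - lam) * ce(logits_graph, tgt_graph)
```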
Optimal experimental design (OED) aims to choose the observations in an experiment to be as informative as possible, according to certain statistical criteria. In the linear case (when the observations depend linearly on the unknown parameters), it seeks the optimal weights over the rows of the design matrix A under certain criteria. Classical OED assumes a discrete design space and thus a design matrix of finite dimensions. In many practical situations, however, the design space is continuous-valued, so the OED problem becomes one of optimizing over a continuous-valued design space. The objective then becomes a functional over probability measures instead of over a finite-dimensional vector. This change of perspective requires a new set of techniques that can handle optimization over probability measures, and Wasserstein gradient flow becomes a natural candidate. Both the first-order criticality and the convexity properties of the OED objective are presented. Computationally, Monte Carlo particle simulation is deployed to formulate the main algorithm, which is applied to two elliptic inverse problems.
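Here is a minimal particle sketch of a Wasserstein gradient flow, assuming a toy potential-energy objective rather than the paper's OED functional: each particle descends the gradient of the first variation of the objective.

```python
import numpy as np

def wasserstein_gradient_step(x, first_variation_grad, eta):
    """One explicit step of a particle-based Wasserstein gradient flow:
    each particle moves along minus the gradient of the first variation
    of the objective, evaluated at the particle's location."""
    return x - eta * first_variation_grad(x)

# Toy example: F(rho) = \int V d(rho) with V(x) = ||x||^2 / 2, whose
# first variation is V, so the particle update uses grad V(x) = x and
# the flow contracts the particle cloud onto the minimizer at 0
rng = np.random.default_rng(0)
particles = rng.standard_normal((200, 2))
for _ in range(100):
    particles = wasserstein_gradient_step(particles, lambda x: x, eta=0.05)
```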
We consider a convex constrained Gaussian sequence model and characterize necessary and sufficient conditions for the least squares estimator (LSE) to be optimal in a minimax sense. For a closed convex set $K\subset \mathbb{R}^n$ we observe $Y=\mu+\xi$ for $\xi\sim N(0,\sigma^2\mathbb{I}_n)$ and $\mu\in K$, and aim to estimate $\mu$. We characterize the worst-case risk of the LSE in multiple ways by analyzing the behavior of the local Gaussian width on $K$. We demonstrate that optimality is equivalent to a Lipschitz property of the local Gaussian width mapping. We also provide theoretical algorithms that search for the worst-case risk. We then provide examples showing optimality or suboptimality of the LSE on various sets, including $\ell_p$ balls for $p\in[1,2]$, pyramids, solids of revolution, and multivariate isotonic regression, among others.
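Since the LSE over a closed convex $K$ is the Euclidean projection of $Y$ onto $K$, here is a sketch for the case where $K$ is an $\ell_1$ ball, using the standard sorting-based projection; the radius and data are made up for illustration.

```python
import numpy as np

def project_l1_ball(y, radius=1.0):
    """Euclidean projection onto the l1 ball, i.e. the LSE when
    K = {mu : ||mu||_1 <= radius} (sorting-based algorithm)."""
    if np.abs(y).sum() <= radius:
        return y.copy()
    u = np.sort(np.abs(y))[::-1]                 # magnitudes, descending
    cssv = np.cumsum(u) - radius
    rho = np.nonzero(u > cssv / np.arange(1, len(u) + 1))[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.sign(y) * np.maximum(np.abs(y) - theta, 0.0)

# LSE in the sequence model: observe Y = mu + xi, project Y onto K
rng = np.random.default_rng(0)
mu = np.zeros(100); mu[:3] = 0.4                 # sparse truth in the l1 ball
Y = mu + 0.1 * rng.standard_normal(100)
mu_hat = project_l1_ball(Y, radius=1.2)
```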
In causal inference, sensitivity models assess how unmeasured confounders could alter causal analyses, but the sensitivity parameter -- which quantifies the degree of unmeasured confounding -- is often difficult to interpret. For this reason, researchers sometimes compare the sensitivity parameter to an estimate of measured confounding. This is known as calibration. Although calibration can aid interpretation, it is typically conducted post hoc, and uncertainty in the point estimate for measured confounding is rarely accounted for. To address these limitations, we propose novel calibrated sensitivity models, which directly bound the degree of unmeasured confounding by a multiple of measured confounding. The calibrated sensitivity parameter is interpretable as an intuitive unit-less ratio of unmeasured to measured confounding, and uncertainty due to estimating measured confounding can be incorporated. Incorporating this uncertainty shows that causal analyses can be more or less robust to unmeasured confounding than standard approaches would suggest. We develop efficient estimators and inferential methods for bounds on the average treatment effect under three calibrated sensitivity models, establishing parametric efficiency and asymptotic normality under doubly robust-style nonparametric conditions. We illustrate our methods with a data analysis of the effect of mothers' smoking on infant birthweight.
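As a crude illustration of benchmarking against measured confounding (not the paper's estimators or inference), one could compare treatment-coefficient shifts when a measured covariate is omitted; the `statsmodels` calls are real, but the procedure itself is a hypothetical sketch.

```python
import numpy as np
import statsmodels.api as sm

def measured_confounding(y, t, X, j):
    """Crude benchmark: absolute change in the treatment coefficient
    when the j-th measured covariate is omitted (illustrative only;
    the paper develops formal estimators and inferential methods)."""
    full = sm.OLS(y, sm.add_constant(np.column_stack([t, X]))).fit()
    Xm = np.delete(X, j, axis=1)
    red = sm.OLS(y, sm.add_constant(np.column_stack([t, Xm]))).fit()
    return abs(full.params[1] - red.params[1])

# A calibrated sensitivity parameter then bounds unmeasured confounding
# by a multiple Gamma of such a measured-confounding benchmark.
```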