We introduce an independence criterion based on entropy-regularized optimal transport. Our criterion can be used to test for independence between two samples. We establish non-asymptotic bounds for our test statistic and study its statistical behavior under both the null and the alternative hypothesis. The theoretical results involve tools from U-process theory and optimal transport theory. We also offer a random-feature-type approximation for large-scale problems, as well as a differentiable program implementation for deep learning applications. We present experimental results on existing benchmarks for independence testing, illustrating the ability of the proposed criterion to capture both linear and nonlinear dependencies in synthetic and real data.
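As a rough illustration of how an entropic-OT independence statistic can be formed (a minimal sketch, not the authors' exact construction: the permutation device and the regularization level `reg` are assumptions), the snippet below compares the paired sample to a permuted one, which approximates the product of marginals, using the Python optimal transport package `ot` (POT):

```python
import numpy as np
import ot

def eot_independence_stat(x, y, reg=0.1, seed=0):
    """Entropic OT cost between the paired sample {(x_i, y_i)} and a
    permuted sample approximating the product of marginals."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    joint = np.hstack([x, y])                     # samples of (X, Y)
    prod = np.hstack([x, y[rng.permutation(n)]])  # ~ samples of P_X x P_Y
    M = ot.dist(joint, prod)                      # squared Euclidean costs
    a = b = np.full(n, 1.0 / n)                   # uniform marginals
    return ot.sinkhorn2(a, b, M, reg)             # entropic OT value
```

Under independence the statistic stays small; in practice a permutation test would calibrate its null distribution.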
We establish optimal Statistical Query (SQ) lower bounds for robustly learning certain families of discrete high-dimensional distributions. In particular, we show that no efficient SQ algorithm with access to an $\epsilon$-corrupted binary product distribution can learn its mean within $\ell_2$-error $o(\epsilon \sqrt{\log(1/\epsilon)})$. Similarly, we show that no efficient SQ algorithm with access to an $\epsilon$-corrupted ferromagnetic high-temperature Ising model can learn the model to total variation distance $o(\epsilon \log(1/\epsilon))$. Our SQ lower bounds match the error guarantees of known algorithms for these problems, providing evidence that current upper bounds for these tasks are best possible. At the technical level, we develop a generic SQ lower bound for discrete high-dimensional distributions starting from low-dimensional moment-matching constructions, which we believe will find other applications. Additionally, we introduce new ideas for analyzing these moment-matching constructions for discrete univariate distributions.
The free-form deformation model can represent a wide range of non-rigid deformations by manipulating a control-point lattice over the image. However, the large number of parameters yields a complex fitness landscape, which makes it challenging to fit the free-form deformation model directly to the deformed image for deformation estimation. In this paper, we cast the registration task as a multi-objective optimization problem (MOP), based on the observation that the regions affected by the individual control points overlap with each other. Specifically, by partitioning the template image into several regions and measuring the similarity of each region independently, multiple objectives are built, and deformation estimation can thus be realized by solving the MOP with off-the-shelf multi-objective evolutionary algorithms (MOEAs). In addition, a coarse-to-fine strategy is realized by an image pyramid combined with control-point mesh subdivision. Specifically, the optimized candidate solutions of the current image level are inherited by the next level, which increases the ability to deal with large deformations. Also, a post-processing procedure is proposed to generate a single output from the Pareto-optimal solutions. Comparative experiments on both synthetic and real-world images show the effectiveness and usefulness of our deformation estimation method.
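To make the multi-objective construction concrete, here is a minimal sketch (assumptions: single-channel images, a regular grid partition, and sum-of-squared-differences as the region similarity measure) that builds one objective per region; an off-the-shelf MOEA would minimize this objective vector over the control-point parameters:

```python
import numpy as np

def regionwise_objectives(template, warped, n_rows=2, n_cols=2):
    """One SSD objective per grid region: the objective vector that a
    MOEA would minimize over the deformation parameters."""
    H, W = template.shape
    objs = []
    for i in range(n_rows):
        for j in range(n_cols):
            rs = slice(i * H // n_rows, (i + 1) * H // n_rows)
            cs = slice(j * W // n_cols, (j + 1) * W // n_cols)
            objs.append(np.sum((template[rs, cs] - warped[rs, cs]) ** 2))
    return np.array(objs)
```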
Exploration in high-dimensional, continuous spaces with sparse rewards is an open problem in reinforcement learning. Artificial curiosity algorithms address this by creating rewards that lead to exploration. Given a reinforcement learning algorithm capable of maximizing rewards, the problem reduces to finding an optimization objective consistent with exploration. Maximum entropy exploration uses the entropy of the state visitation distribution as such an objective. However, efficiently estimating the entropy of the state visitation distribution is challenging in high-dimensional, continuous spaces. We introduce an artificial curiosity algorithm based on lower bounding an approximation to the entropy of the state visitation distribution. The bound relies on a result we prove for non-parametric density estimation in arbitrary dimensions using k-means. We show that our approach is both computationally efficient and competitive on benchmarks for exploration in high-dimensional, continuous spaces, especially on tasks where reinforcement learning algorithms are unable to find rewards.
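The following is a simplified sketch of a k-means-based novelty reward in this spirit; it uses the distance to the nearest centroid as an intrinsic reward rather than the paper's entropy lower bound, and the number of clusters `k` is an assumed hyperparameter:

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

class KMeansNovelty:
    """Intrinsic reward = distance of a new state to its nearest k-means
    centroid of previously visited states (far from all centroids = novel)."""

    def __init__(self, k=64):
        self.km = MiniBatchKMeans(n_clusters=k)
        self.fitted = False

    def reward(self, states):  # states: (batch, dim); batch >= k on first call
        r = (self.km.transform(states).min(axis=1)
             if self.fitted else np.zeros(len(states)))
        self.km.partial_fit(states)  # update centroids with the new batch
        self.fitted = True
        return r
```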
Two of the most significant challenges in uncertainty quantification pertain to the high computational cost of simulating complex physical models and the high dimension of the random inputs. In applications of practical interest, both of these problems are encountered, and standard methods either fail or are not feasible. To overcome the current limitations, we present a generalized formulation of a Bayesian multi-fidelity Monte Carlo (BMFMC) framework that can exploit lower-fidelity model versions in a small-data regime. The goal of our analysis is an efficient and accurate estimation of the complete probabilistic response for high-fidelity models. BMFMC circumvents the curse of dimensionality by learning the relationship between the outputs of a reference high-fidelity model and potentially several lower-fidelity models. While the continuous formulation is mathematically exact and independent of the low-fidelity models' accuracy, we address challenges associated with the small-data regime (i.e., only 50 to 300 high-fidelity model runs can be performed). Specifically, we complement the formulation with a set of informative input features available at no extra cost. Despite the inaccurate and noisy information that some low-fidelity models provide, we demonstrate that accurate and certifiable estimates of the quantities of interest can be obtained for uncertainty quantification problems in high stochastic dimensions, with significantly fewer high-fidelity model runs than state-of-the-art methods require. We illustrate our approach on challenging numerical examples such as Navier-Stokes flow simulations and fluid-structure interaction problems.
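A toy sketch of the core multi-fidelity idea under strong simplifications (a single low-fidelity model, a plain Gaussian process for the LF-to-HF output map, and synthetic model outputs; the paper's Bayesian formulation is considerably richer):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
y_lf = rng.normal(size=(10_000, 1))                        # cheap LF outputs
idx = rng.choice(10_000, size=100, replace=False)          # budget: ~100 HF runs
y_hf = np.sin(y_lf[idx, 0]) + 0.05 * rng.normal(size=100)  # matching HF outputs

gp = GaussianProcessRegressor().fit(y_lf[idx], y_hf)  # learn the LF -> HF map
mean, std = gp.predict(y_lf, return_std=True)
# sample the probabilistic LF -> HF map to approximate the full HF output
# distribution without any further high-fidelity runs
y_hf_pred = mean + std * rng.normal(size=mean.shape)
```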
Optimal transport (OT) is a versatile framework for comparing probability measures, with many applications to statistics, machine learning, and applied mathematics. However, OT distances suffer from computational and statistical scalability issues in high dimensions, which has motivated the study of regularized OT methods such as slicing, smoothing, and entropic penalties. This work establishes a unified framework for deriving limit distributions of empirical regularized OT distances, semiparametric efficiency of the plug-in empirical estimator, and bootstrap consistency. We apply the unified framework to provide a comprehensive statistical treatment of: (i) average- and max-sliced $p$-Wasserstein distances, for which several gaps in existing literature are closed; (ii) smooth distances with compactly supported kernels, the analysis of which is motivated by computational considerations; and (iii) entropic OT, for which our method generalizes existing limit distribution results and establishes, for the first time, efficiency and bootstrap consistency. While our focus is on these three regularized OT distances as applications, the flexibility of the proposed framework renders it applicable to broad classes of functionals beyond these examples.
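For instance, one object covered by such a theory is the bootstrap distribution of the empirical sliced Wasserstein distance. The snippet below (illustrative sample sizes and projection counts, using the Python `ot` package) computes it naively by resampling with replacement:

```python
import numpy as np
import ot

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
Y = rng.normal(size=(500, 3)) + 0.2
sw_hat = ot.sliced_wasserstein_distance(X, Y, n_projections=200, seed=0)

boot = np.array([
    ot.sliced_wasserstein_distance(X[rng.integers(0, 500, 500)],
                                   Y[rng.integers(0, 500, 500)],
                                   n_projections=200, seed=0)
    for _ in range(200)
])
# the distribution of boot - sw_hat approximates the sampling fluctuation
# of the empirical distance; bootstrap consistency justifies this step
```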
In inverse problems, the parameters of a model are estimated based on observations of the model response. The Bayesian approach is powerful for solving such problems; one formulates a prior distribution for the parameter state that is updated with the observations to compute the posterior parameter distribution. Solving for the posterior distribution can be challenging when, e.g., prior and posterior significantly differ from one another and/or the parameter space is high-dimensional. We use a sequence of importance sampling measures that arise by tempering the likelihood to tackle inverse problems exhibiting a significant distance between prior and posterior. Each importance sampling measure is identified by cross-entropy minimization, as proposed in the context of Bayesian inverse problems in Engel et al. (2021). To efficiently address problems with high-dimensional parameter spaces, we set up the minimization procedure in a low-dimensional subspace of the original parameter space. The principal idea is to analyze the spectrum of the second-moment matrix of the gradient of the log-likelihood function to identify a suitable subspace. Following Zahm et al. (2021), an upper bound on the Kullback-Leibler divergence between the full-dimensional and the subspace posterior is provided, which can be utilized to determine the effective dimension of the inverse problem corresponding to a prescribed approximation error bound. We suggest heuristic criteria for optimally selecting the number of model and model-gradient evaluations in each iteration of the importance sampling sequence. We investigate the performance of this approach using examples from engineering mechanics set in various parameter-space dimensions.
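A bare-bones version of the subspace identification step might look as follows (the gradient samples and the simple energy-based truncation rule are placeholders; the paper's error-bound-driven criteria are more refined):

```python
import numpy as np

def subspace_from_gradients(grads, tol=1e-2):
    """grads: (n, d) log-likelihood gradients at prior samples.
    Returns an orthonormal basis of the dominant subspace."""
    H = grads.T @ grads / len(grads)            # second-moment matrix
    eigval, eigvec = np.linalg.eigh(H)          # ascending eigenvalues
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
    ratio = np.cumsum(eigval) / eigval.sum()    # captured spectral "energy"
    r = int(np.argmax(ratio >= 1.0 - tol)) + 1  # effective dimension
    return eigvec[:, :r]
```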
We study a decentralized online regularized linear regression algorithm over random time-varying graphs. At each time step, every node runs an online estimation algorithm consisting of an innovation term processing its own new measurement, a consensus term taking a weighted sum of its own estimate and those of its neighbors, subject to additive and multiplicative communication noises, and a regularization term preventing over-fitting. The regression matrices and graphs are not required to satisfy special statistical assumptions such as mutual independence, spatio-temporal independence, or stationarity. We develop a nonnegative supermartingale inequality for the estimation error and prove that the estimates of all nodes converge to the unknown true parameter vector almost surely if the algorithm gains, graphs, and regression matrices jointly satisfy the sample-path spatio-temporal persistence-of-excitation condition. In particular, this condition holds for appropriately chosen algorithm gains if the graphs are uniformly conditionally jointly connected and conditionally balanced, and the regression models of all nodes are uniformly conditionally spatio-temporally jointly observable, in which case the algorithm converges in mean square and almost surely. In addition, we prove a regret upper bound of $\mathcal O(T^{1-\tau}\ln T)$, where $\tau\in (0.5,1)$ is a constant depending on the algorithm gains.
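Schematically, one step of such a node update could look like the following (the gain schedules, noise injection, and regularization form are illustrative, not the paper's exact recursion):

```python
import numpy as np

def node_update(x_i, received, w, phi, y, a_t, b_t, lam):
    """One step for node i. `received` holds neighbor estimates already
    corrupted by additive/multiplicative communication noise."""
    innovation = a_t * (y - phi @ x_i) * phi                 # own new measurement
    consensus = b_t * sum(w_j * (x_j - x_i)
                          for w_j, x_j in zip(w, received))  # weighted neighbor pull
    regularization = -a_t * lam * x_i                        # prevents over-fitting
    return x_i + innovation + consensus + regularization
```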
Considering two random variables with different laws to which we only have access through finite-size i.i.d. samples, we address how to reweight the first sample so that its empirical distribution converges towards the true law of the second sample as the size of both samples goes to infinity. We study an optimal reweighting that minimizes the Wasserstein distance between the empirical measures of the two samples and leads to an expression of the weights in terms of nearest neighbors. Consistency and some asymptotic convergence rates in terms of the expected Wasserstein distance are derived without assuming absolute continuity of one law with respect to the other. These results have applications in uncertainty quantification for decoupled estimation and in bounding the generalization error of nearest-neighbor regression under covariate shift.
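Concretely, the nearest-neighbor form of the weights admits a short implementation (a sketch assuming the Euclidean metric; the consistency and rate results above are what justify it asymptotically):

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_weights(x, y):
    """x: (n, d) first sample, y: (m, d) second sample.
    Returns weights on x: each x-point receives mass proportional to the
    number of y-points whose nearest neighbor in x it is."""
    _, idx = cKDTree(x).query(y)                   # NN of each y-point in x
    w = np.bincount(idx, minlength=len(x)).astype(float)
    return w / w.sum()                             # weights sum to one
```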
Mini-batch optimal transport (m-OT) has been widely used recently to deal with the memory issue of OT in large-scale applications. Despite its practicality, m-OT suffers from misspecified mappings, namely, mappings that are optimal at the mini-batch level but partially incorrect when compared with the optimal transport plan between the original measures. Motivated by the misspecified-mappings issue, we propose a novel mini-batch method that uses partial optimal transport (POT) between mini-batch empirical measures, which we refer to as mini-batch partial optimal transport (m-POT). Leveraging insight from partial transportation, we explain the source of misspecified mappings in m-OT and motivate why limiting the amount of transported mass among mini-batches via POT can alleviate the incorrect mappings. Finally, we carry out extensive experiments on various applications such as deep domain adaptation, partial domain adaptation, deep generative models, color transfer, and gradient flow to demonstrate the favorable performance of m-POT compared to current mini-batch methods.
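A minimal sketch of the m-POT computation using the partial-Wasserstein solver of the Python `ot` package (batch size, mass budget `mass`, and number of batches are illustrative hyperparameters, not the authors' settings):

```python
import numpy as np
import ot

def minibatch_pot_cost(X, Y, batch=128, mass=0.9, n_batches=10, seed=0):
    rng = np.random.default_rng(seed)
    a = b = np.full(batch, 1.0 / batch)
    total = 0.0
    for _ in range(n_batches):
        xb = X[rng.choice(len(X), batch, replace=False)]
        yb = Y[rng.choice(len(Y), batch, replace=False)]
        M = ot.dist(xb, yb)
        # transport only a fraction `mass` of the batch mass, which
        # discards the worst (misspecified) pairings
        gamma = ot.partial.partial_wasserstein(a, b, M, m=mass)
        total += np.sum(gamma * M)
    return total / n_batches
```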
We provide a unifying framework for $\mathcal{L}_2$-optimal reduced-order modeling for linear time-invariant dynamical systems and stationary parametric problems. Using parameter-separable forms of the reduced-model quantities, we derive the gradients of the $\mathcal{L}_2$ cost function with respect to the reduced matrices, which then allows a non-intrusive, data-driven, gradient-based descent algorithm to construct the optimal approximant using only output samples. By choosing an appropriate measure, the framework covers both continuous (Lebesgue) and discrete cost functions. We show the efficacy of the proposed algorithm via various numerical examples. Furthermore, we analyze under what conditions the data-driven approximant can be obtained via projection.
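As a toy illustration of the data-driven setting (a SISO example with a diagonal reduced matrix and synthetic transfer-function samples; a generic scipy optimizer stands in for the gradient-based descent derived in the framework):

```python
import numpy as np
from scipy.optimize import minimize

s = 1j * np.logspace(-1, 2, 100)        # sampling points on the imaginary axis
H = 1.0 / (s + 1) + 0.5 / (s + 10)      # "measured" transfer-function samples
r = 2                                   # reduced order

def cost(p):                            # p packs diag(A_r), B_r, C_r
    A = np.diag(p[:r])
    B, C = p[r:2 * r], p[2 * r:]
    Hr = np.array([C @ np.linalg.solve(sk * np.eye(r) - A, B) for sk in s])
    return np.sum(np.abs(H - Hr) ** 2)  # discrete L2 cost over the samples

p0 = np.concatenate([-np.linspace(1.0, 5.0, r), np.ones(2 * r)])
res = minimize(cost, p0, method="Nelder-Mead")
```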