An important theme in modern inverse problems is the reconstruction of time-dependent data from only finitely many measurements. To obtain satisfactory reconstruction results in this setting, it is essential to strongly exploit the temporal consistency between the different measurement times. The strongest consistency can be achieved by reconstructing data directly in phase space, the space of positions and velocities. However, this space is usually too high-dimensional for feasible computations. We introduce a novel dimension reduction technique, based on projections of phase space onto lower-dimensional subspaces, which provably circumvents this curse of dimensionality: in the exemplary framework of superresolution we prove that known exact reconstruction results remain valid after dimension reduction, and we additionally prove new error estimates, in optimal transport metrics, for reconstructions from noisy data that are of the same quality as in the non-dimension-reduced case.
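To make the projection idea concrete, here is a minimal NumPy sketch, assuming the phase-space measure is a finite sum of point masses: a particle with position x and velocity v observed at time t sits at x + t v, and projecting onto a direction reduces each snapshot to a one-dimensional problem. All names and parameters below are illustrative, not taken from the paper.

```python
import numpy as np

# Hedged sketch: particles in phase space (position x, velocity v) in R^d.
# At measurement time t each particle sits at x + t*v; projecting onto a
# unit direction theta turns every snapshot into a 1-D superresolution
# problem, which is the dimension reduction the abstract alludes to.

rng = np.random.default_rng(0)
d, n = 2, 5                      # ambient dimension, number of particles
x = rng.uniform(-1, 1, (n, d))   # positions
v = rng.uniform(-1, 1, (n, d))   # velocities

def projected_snapshot(x, v, t, theta):
    """1-D positions of the moved particles, i.e. <x + t*v, theta>."""
    return (x + t * v) @ theta

theta = np.array([1.0, 0.0])     # example projection direction
for t in (-1.0, 0.0, 1.0):       # three measurement times
    print(t, np.sort(projected_snapshot(x, v, t, theta)))
```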
Min-max optimization problems arise in several key machine learning setups, including adversarial learning and generative modeling. In their general form, in the absence of convexity/concavity assumptions, finding pure equilibria of the underlying two-player zero-sum game is computationally hard [Daskalakis et al., 2021]. In this work we focus instead on finding mixed equilibria and consider the associated lifted problem in the space of probability measures. By adding entropic regularization, our main result establishes global convergence to the global equilibrium using simultaneous gradient ascent-descent with respect to the Wasserstein metric -- a dynamics that admits an efficient particle discretization in high dimensions, as opposed to entropic mirror descent. We complement this positive result with a related entropy-regularized loss which is not bilinear but still convex-concave in the Wasserstein geometry, and for which the simultaneous dynamics do not converge yet timescale separation does. Taken together, these results showcase the benign geometry of bilinear games in the space of measures, enabling particle dynamics with global qualitative convergence guarantees.
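A minimal sketch of the kind of particle discretization alluded to above, assuming an illustrative payoff f(x, y) = sin(x) sin(y) (bilinear in the lifted sense, since F(mu, nu) factors through the means of sin under each measure) and modeling entropic regularization as Langevin noise; this is not the paper's exact scheme.

```python
import numpy as np

# Hedged sketch: noisy simultaneous gradient descent-ascent on particles,
# a discretization of the Wasserstein dynamics for the entropy-regularized
# game F(mu, nu) = E_{x~mu, y~nu}[f(x, y)] with f(x, y) = sin(x)*sin(y).
# Entropic regularization enters as noise of size sqrt(2 * tau * beta).

rng = np.random.default_rng(0)
m, tau, beta, steps = 200, 0.05, 0.05, 2000
X = rng.normal(size=m)                  # particles of the min player (mu)
Y = rng.normal(size=m)                  # particles of the max player (nu)

for _ in range(steps):
    gX = np.cos(X) * np.sin(Y).mean()   # d/dx of E_nu[f(x, y)] at each x_i
    gY = np.sin(X).mean() * np.cos(Y)   # d/dy of E_mu[f(x, y)] at each y_j
    noise = np.sqrt(2 * tau * beta)
    X = X - tau * gX + noise * rng.normal(size=m)   # simultaneous descent
    Y = Y + tau * gY + noise * rng.normal(size=m)   # and ascent

# estimate of the game value at the current particle populations
print(np.sin(X).mean() * np.sin(Y).mean())
```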
Aiming to recover data from several concurrent node failures, linear $r$-LRC codes with locality $r$ were extended to $(r, \delta)$-LRC codes with locality $(r, \delta)$, which enable the local recovery of a failed node even when more than one node fails. Optimal LRC codes are those whose parameters achieve the generalized Singleton bound with equality. In the present paper, we are interested in studying optimal LRC codes over small fields, more precisely over $\mathbb{F}_4$. We adopt an approach that investigates optimal quaternary $(r,\delta)$-LRC codes through their parity-check matrices. Our study includes determining the structural properties of optimal $(r,\delta)$-LRC codes, their constructions, and their complete classification over $\mathbb{F}_4$ by browsing all possible parameters. We emphasize that the precise structure of optimal quaternary $(r,\delta)$-LRC codes and their classification, obtained via the parity-check matrix approach, use proof techniques different from those used recently for optimal binary and ternary $(r,\delta)$-LRC codes by Hao et al. in [IEEE Trans. Inf. Theory, 2020, 66(12): 7465-7474].
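For reference, the generalized Singleton bound invoked above reads $d \le n-k+1-(\lceil k/r\rceil-1)(\delta-1)$; the short snippet below checks candidate parameter tuples against it. The tuples are illustrative only and carry no claim about the existence of such codes over $\mathbb{F}_4$.

```python
from math import ceil

# Hedged sketch: the generalized Singleton bound for (r, delta)-LRC codes,
#   d <= n - k + 1 - (ceil(k/r) - 1) * (delta - 1).
# A code is "optimal" in the sense of the abstract when equality holds.

def singleton_bound(n, k, r, delta):
    return n - k + 1 - (ceil(k / r) - 1) * (delta - 1)

def is_optimal(n, k, d, r, delta):
    return d == singleton_bound(n, k, r, delta)

# Illustrative parameter tuples (n, k, d, r, delta); the field size matters
# for existence, not for the bound itself.
for (n, k, d, r, delta) in [(3, 2, 2, 2, 2), (9, 4, 4, 2, 2)]:
    print(n, k, d, r, delta,
          "bound:", singleton_bound(n, k, r, delta),
          "optimal:", is_optimal(n, k, d, r, delta))
```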
In statistical dimensionality reduction, it is common to rely on the assumption that high-dimensional data tend to concentrate near a lower-dimensional manifold. There is a rich literature on approximating the unknown manifold and on exploiting such approximations in clustering, data compression, and prediction. Most of this literature relies on linear or locally linear approximations. In this article, we propose a simple and general alternative that instead uses spheres, an approach we refer to as spherelets. We develop spherical principal components analysis (SPCA) and provide theory on the convergence rate for global and local SPCA, showing that spherelets can achieve lower covering numbers and MSEs for many manifolds. Comparisons with state-of-the-art competitors show gains in the ability to approximate manifolds accurately with fewer components. Unlike most competitors, which simply output lower-dimensional features, our approach projects data onto the estimated manifold to produce fitted values that can be used for model assessment and cross validation. The methods are illustrated with applications to multiple data sets.
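A minimal sketch of the sphere-fitting step behind spherelets: fit a sphere to points by algebraic least squares, then project the data onto it to obtain fitted values. The PCA-subspace step of full SPCA is omitted, and all names are illustrative.

```python
import numpy as np

# Hedged sketch: least-squares sphere fit. Expanding |x - c|^2 = r^2 gives
# the linear system 2 c.x + b = |x|^2 with b = r^2 - |c|^2, so the center c
# and radius r follow from one lstsq solve.

def fit_sphere(X):
    A = np.hstack([2 * X, np.ones((len(X), 1))])
    sol, *_ = np.linalg.lstsq(A, (X ** 2).sum(axis=1), rcond=None)
    c, b = sol[:-1], sol[-1]
    return c, np.sqrt(b + c @ c)

def project_to_sphere(X, c, r):
    U = X - c
    return c + r * U / np.linalg.norm(U, axis=1, keepdims=True)

# Noisy samples from a circle in R^2
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(200, 2))
c, r = fit_sphere(X)
fitted = project_to_sphere(X, c, r)   # usable for assessment / CV
print(c, r)                           # ~ (0, 0) and ~ 1
```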
We propose a component-based (CB) parametric model order reduction (pMOR) formulation for parameterized nonlinear elliptic partial differential equations (PDEs). CB-pMOR is designed to deal with large-scale problems for which full-order solves are not affordable in a reasonable time frame or for which parameter variations induce topology changes that prevent the application of monolithic pMOR techniques. We rely on the partition-of-unity method (PUM) to devise global approximation spaces from local reduced spaces, and on Galerkin projection to compute the global state estimate. We propose a randomized data compression algorithm based on oversampling for the construction of the components' reduced spaces: the approach exploits random boundary conditions of controlled smoothness on the oversampling boundary. We further propose an adaptive residual-based enrichment algorithm that exploits global reduced-order solves on representative systems to update the local reduced spaces. We prove exponential convergence of the enrichment procedure for linear coercive problems; we further present numerical results for a two-dimensional nonlinear diffusion problem to illustrate the many features of our proposal and to demonstrate its effectiveness.
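A hedged sketch of the randomized-compression idea: probe a (discrete) boundary-to-interior transfer operator with random boundary data whose smoothness is controlled by decaying mode coefficients, then compress the snapshots with an SVD to obtain a local reduced basis. The dense matrix T below is a stand-in for the oversampled local PDE solves of the actual method.

```python
import numpy as np

# Hedged sketch of randomized oversampling-based data compression.
rng = np.random.default_rng(0)
n_bnd, n_int, n_samples, tol = 64, 400, 30, 1e-6

# Stand-in transfer operator with decaying singular values, mimicking the
# rapid decay that makes local reduced spaces effective.
T = rng.normal(size=(n_int, n_bnd)) @ np.diag(0.5 ** np.arange(n_bnd))

# Random boundary conditions of controlled smoothness: coefficients of the
# boundary modes decay algebraically, damping high frequencies.
decay = (1.0 + np.arange(n_bnd)) ** -1.5
G = rng.normal(size=(n_bnd, n_samples)) * decay[:, None]

snapshots = T @ G                        # local "solves" (here: mat-vecs)
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
rank = int(np.sum(s > tol * s[0]))
basis = U[:, :rank]                      # local reduced space
print("reduced dimension:", rank)
```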
Surface reconstruction is a fundamental problem in 3D graphics. In this paper, we propose a learning-based approach for implicit surface reconstruction from raw point clouds without normals. Our method is inspired by the Gauss lemma in potential theory, which gives an explicit integral formula for the indicator function. We design a novel deep neural network to perform the surface integral and to learn modified indicator functions from unoriented and noisy point clouds. We concatenate features at different scales for accurate point-wise contributions to the integral. Moreover, we propose a novel Surface Element Feature Extractor to learn local shape properties. Experiments show that our method generates smooth surfaces with high normal consistency from point clouds with different noise scales and achieves state-of-the-art reconstruction performance compared with current data-driven and non-data-driven approaches.
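The Gauss-lemma integral underlying the method can be written down directly when normals are available; the sketch below evaluates the discretized indicator function on a sampled sphere. The paper's contribution is to learn a modified version of this integral without normals, which this sketch does not attempt.

```python
import numpy as np

# Hedged sketch of the Gauss-lemma integral: for an oriented point cloud
# (p_i, n_i, area a_i) sampling a closed surface, the discretized integral
#     chi(q) ~ sum_i a_i * <n_i, p_i - q> / (4 * pi * |p_i - q|^3)
# approximates the indicator function (1 inside, 0 outside).

rng = np.random.default_rng(0)
N = 4000
P = rng.normal(size=(N, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)   # points on the unit sphere
Nrm = P.copy()                                   # outward normals
a = np.full(N, 4 * np.pi / N)                    # per-point area weights

def indicator(q):
    d = P - q
    r = np.linalg.norm(d, axis=1)
    return np.sum(a * (d * Nrm).sum(axis=1) / (4 * np.pi * r ** 3))

print(indicator(np.zeros(3)))             # ~ 1 (inside)
print(indicator(np.array([2.0, 0, 0])))   # ~ 0 (outside)
```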
Recently, recovering an unknown signal from quadratic measurements has gained popularity because it covers many interesting applications as special cases, such as phase retrieval, fusion frame phase retrieval, and positive operator-valued measures. In this paper, employing the least squares approach to reconstruct the signal, we establish a non-asymptotic statistical property showing that the gap between the estimator and the true signal vanishes in the noiseless case and, in the noisy case, is bounded by an error rate of $O(\sqrt{p\log(1+2n)/n})$, where $n$ and $p$ are the number of measurements and the dimension of the signal, respectively. We develop a gradient regularized Newton method (GRNM) to solve the least squares problem and prove that it converges to a unique local minimum at a superlinear rate under certain mild conditions. Beyond these deterministic results, GRNM reconstructs the true signal exactly in the noiseless case and achieves the above error rate with high probability in the noisy case. Numerical experiments demonstrate that GRNM performs well in terms of recovery accuracy, computational speed, and recovery capability.
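A minimal baseline sketch of the least-squares objective, minimized by plain gradient descent from a rough initialization; the paper's GRNM replaces the gradient step with a gradient-regularized Newton step, which is not reproduced here.

```python
import numpy as np

# Hedged sketch: least squares for quadratic measurements
#     y_i = x^T A_i x,   f(x) = (1/2n) * sum_i (x^T A_i x - y_i)^2,
# with gradient (1/n) * sum_i (x^T A_i x - y_i) * 2 A_i x for symmetric A_i.

rng = np.random.default_rng(0)
p, n = 10, 200
A = rng.normal(size=(n, p, p))
A = (A + A.transpose(0, 2, 1)) / 2            # symmetric measurements
x_true = rng.normal(size=p)
x_true /= np.linalg.norm(x_true)
y = np.einsum('p,npq,q->n', x_true, A, x_true)

x = x_true + 0.1 * rng.normal(size=p)         # rough initialization
for _ in range(500):
    resid = np.einsum('p,npq,q->n', x, A, x) - y
    grad = 2 * np.einsum('n,npq,q->p', resid, A, x) / n
    x -= 0.1 * grad

# recovery is only up to a global sign
print(min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)))
```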
This work tackles the problem of learning surrogate models from noisy time-domain data by means of matrix pencil-based techniques, namely the Hankel and Loewner frameworks. We propose a data-driven approach to obtain reduced-order state-space models from time-domain input-output measurements of linear time-invariant (LTI) systems. This is accomplished by combining the aforementioned model order reduction (MOR) techniques with the signal matrix model (SMM) approach. The proposed method is illustrated by a numerical benchmark example consisting of a building model.
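A sketch of the Hankel side of such a pipeline, assuming (noisy) impulse-response samples are already available: build shifted Hankel matrices and extract a reduced state-space model by a Ho-Kalman-style SVD truncation. The SMM step, which extracts these samples from general input-output data, is not shown.

```python
import numpy as np

# Hedged sketch: Ho-Kalman-style realization from Markov parameters h_k.
rng = np.random.default_rng(0)

# Ground-truth 2-state system used only to generate data
A0 = np.array([[0.9, 0.2], [0.0, 0.7]])
B0 = np.array([[1.0], [0.5]])
C0 = np.array([[1.0, -1.0]])
h = np.array([(C0 @ np.linalg.matrix_power(A0, k) @ B0).item()
              for k in range(40)])
h += 1e-4 * rng.normal(size=h.shape)          # measurement noise

m = 15                                        # Hankel block size
H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])
H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])

U, s, Vt = np.linalg.svd(H0)
r = 2                                         # reduced order (SVD decay)
sq = np.sqrt(s[:r])
O = U[:, :r] * sq                             # observability factor
R = (Vt[:r, :].T * sq).T                      # controllability factor
A = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(R)
B, C = R[:, :1], O[:1, :]
print(np.sort(np.linalg.eigvals(A)))          # ~ {0.7, 0.9}
```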
Ensemble methods based on subsampling, such as random forests, are popular in applications due to their high predictive accuracy. The existing literature views a random forest prediction as an infinite-order incomplete U-statistic in order to quantify its uncertainty. However, these methods require a small subsample size for each tree, which is theoretically valid but practically limiting. This paper develops an unbiased variance estimator based on incomplete U-statistics that allows the tree size to be comparable to the overall sample size, making statistical inference possible in a broader range of real applications. Simulation results demonstrate that our estimators enjoy lower bias and more accurate confidence interval coverage without additional computational cost. We also propose a local smoothing procedure to reduce the variation of our estimator, which shows improved numerical performance when the number of trees is relatively small. Further, we investigate the ratio consistency of our proposed variance estimator under specific scenarios. In particular, we develop a new "double U-statistic" formulation to analyze the Hoeffding decomposition of the estimator's variance.
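To fix ideas, the following sketch builds the object under study: a subsampled ensemble prediction as an incomplete U-statistic, with a trivial kernel standing in for a fitted tree. The naive between-tree variance it prints is exactly the quantity that becomes biased when the subsample size is comparable to the sample size; it is illustrative only and is not the paper's estimator.

```python
import numpy as np

# Hedged sketch: an incomplete U-statistic with a stand-in kernel. A real
# random forest replaces `kernel` with the prediction of a tree fitted on
# the subsample.

rng = np.random.default_rng(0)
n, k, B = 500, 250, 1000          # sample size, subsample size, trees
y = rng.normal(size=n)

def kernel(idx):                  # trivial kernel: subsample mean
    return y[idx].mean()

preds = np.array([kernel(rng.choice(n, size=k, replace=False))
                  for _ in range(B)])
forest = preds.mean()             # the incomplete U-statistic
between = preds.var(ddof=1)       # naive between-tree variance: biased for
print(forest, between / B)        # Var(forest) when k is comparable to n
```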
Image foreground extraction is a classical problem in image processing and vision, with a wide range of applications. In this dissertation, we focus on the extraction of text and graphics in mixed-content images and design novel approaches for various aspects of this problem. We first propose a sparse decomposition framework, which models the background by a subspace containing smooth basis vectors and the foreground by a sparse and connected component. We then formulate an optimization framework to solve this problem by adding suitable regularizations to the cost function to promote the desired characteristics of each component. We present two techniques to solve the proposed optimization problem, one based on the alternating direction method of multipliers (ADMM) and the other on robust regression. Promising results are obtained for screen content image segmentation using the proposed algorithm. We then propose a robust subspace learning algorithm for the representation of the background component using training images that may contain both background and foreground components, as well as noise. With the learned subspace for the background, we can further improve the segmentation results compared to using a fixed subspace. Lastly, we investigate a different class of signal/image decomposition problems in which only one signal component is active at each signal element. In this case, besides estimating each component, we need to find their supports, which can be specified by a binary mask. We propose a mixed-integer programming formulation that jointly estimates the two components and their supports through an alternating optimization scheme. We show the application of this algorithm to various problems, including image segmentation, video motion segmentation, and the separation of text from textured images.
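A minimal sketch of the sparse-decomposition model on a 1-D signal, assuming a polynomial basis for the smooth background and using a simple alternating least-squares / soft-thresholding loop as a stand-in for the ADMM and robust-regression solvers of the dissertation (the connectedness regularization is omitted).

```python
import numpy as np

# Hedged sketch: decompose x = U @ alpha + s with U a smooth background
# basis and s a sparse foreground, via
#     min_{alpha, s} 0.5 * ||x - U alpha - s||^2 + lam * ||s||_1
# solved by exact alternating minimization (lstsq + soft-thresholding).

rng = np.random.default_rng(0)
n, lam = 200, 0.5
t = np.linspace(-1, 1, n)
U = np.vander(t, 4, increasing=True)          # smooth background basis
background = U @ np.array([0.5, 1.0, -0.8, 0.3])
s_true = np.zeros(n)
s_true[rng.choice(n, 10, replace=False)] = rng.normal(0, 3, 10)
x = background + s_true + 0.01 * rng.normal(size=n)

soft = lambda v, tau: np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
s = np.zeros(n)
for _ in range(100):
    alpha, *_ = np.linalg.lstsq(U, x - s, rcond=None)  # background fit
    s = soft(x - U @ alpha, lam)                       # sparse foreground
print(np.count_nonzero(s), np.linalg.norm(s - s_true))
```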
Purpose: MR image reconstruction exploits regularization to compensate for missing k-space data. In this work, we propose to learn the probability distribution of MR image patches with neural networks and to use this distribution as prior information constraining images during reconstruction, effectively employing it as regularization. Methods: We use variational autoencoders (VAE) to learn the distribution of MR image patches, which models the high-dimensional distribution by a lower-dimensional latent parameter model in a non-linear fashion. The proposed algorithm uses the learned prior in a Maximum-A-Posteriori estimation formulation. We evaluate the proposed reconstruction method with T1-weighted images and also apply our method to images with white matter lesions. Results: Visual evaluation of the samples showed that the VAE algorithm can approximate the distribution of MR patches well. The proposed reconstruction algorithm using the VAE prior produced high-quality reconstructions. The algorithm achieved normalized RMSE, CNR and CN values of 2.77\%, 0.43, 0.11; 4.29\%, 0.43, 0.11; 6.36\%, 0.47, 0.11; and 10.00\%, 0.42, 0.10 for undersampling ratios of 2, 3, 4 and 5, respectively, where it outperformed most of the alternative methods. In the experiments on images with white matter lesions, the method faithfully reconstructed the lesions. Conclusion: We introduced a novel method for MR reconstruction, which takes a new perspective on regularization by using priors learned by neural networks. Results suggest that the method compares favorably against the other evaluated methods and can reconstruct lesions as well. Keywords: Reconstruction, MRI, prior probability, MAP estimation, machine learning, variational inference, deep learning
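A hedged sketch of the MAP formulation: reconstruct an image from undersampled k-space by gradient descent on a data-consistency term plus a negative log-prior. The smoothness term below is a crude stand-in for the gradient supplied by the trained patch-VAE in the paper; masks, step sizes, and the toy image are illustrative.

```python
import numpy as np
import numpy.fft as fft

# Hedged sketch: MAP reconstruction from y = M F x by gradient descent on
#     0.5 * ||M F x - y||^2 - lam * log p(x).

rng = np.random.default_rng(0)
n = 64
k = fft.fftfreq(n)
lowpass = (np.abs(k)[:, None] < 0.1) & (np.abs(k)[None, :] < 0.1)
x_true = np.real(fft.ifft2(lowpass * fft.fft2(rng.normal(size=(n, n)))))
mask = rng.random((n, n)) < 0.25              # ~4x undersampling pattern
y = mask * fft.fft2(x_true)

def grad_log_prior(x):                        # placeholder, NOT the VAE:
    return -(4 * x - np.roll(x, 1, 0) - np.roll(x, -1, 0)   # gradient of a
                   - np.roll(x, 1, 1) - np.roll(x, -1, 1))  # smoothness prior

x = np.real(fft.ifft2(y))                     # zero-filled initialization
for _ in range(300):
    grad_data = np.real(fft.ifft2(mask * (mask * fft.fft2(x) - y)))
    x -= 0.5 * (grad_data - 0.1 * grad_log_prior(x))

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```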