In this paper we demonstrate how sub-Riemannian geometry can be used for manifold learning and surface reconstruction by combining local linear approximations of a point cloud to obtain lower-dimensional bundles. Local approximations obtained by local PCA are collected into a rank $k$ tangent subbundle on $\mathbb{R}^d$, $k<d$, which we call a principal subbundle. This determines a sub-Riemannian metric on $\mathbb{R}^d$. We show that sub-Riemannian geodesics with respect to this metric can successfully be applied to a number of important problems, such as: explicit construction of an approximating submanifold $M$, construction of a representation of the point cloud in $\mathbb{R}^k$, and computation of distances between observations, taking the learned geometry into account. The reconstruction is guaranteed to equal the true submanifold in the limit case where tangent spaces are estimated exactly. Via simulations, we show that the framework is robust when applied to noisy data. Furthermore, the framework generalizes to observations on an a priori known Riemannian manifold.
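The local-PCA step that underlies the principal subbundle can be sketched as follows; this is a generic tangent-space estimator, not the paper's full construction, and the neighborhood radius and point cloud are illustrative choices.

```python
import numpy as np

def local_pca_tangent(points, x0, k, radius):
    """Estimate a rank-k tangent space at x0 from a point cloud by local PCA.
    Returns a (d, k) matrix with orthonormal columns spanning the estimate."""
    nbrs = points[np.linalg.norm(points - x0, axis=1) < radius]
    centered = nbrs - nbrs.mean(axis=0)
    # leading right singular vectors = leading principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T

# toy example: noiseless samples of the unit circle in R^2 (k = 1, d = 2)
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
cloud = np.stack([np.cos(theta), np.sin(theta)], axis=1)
x0 = np.array([1.0, 0.0])
T = local_pca_tangent(cloud, x0, k=1, radius=0.3)
# the tangent at (1, 0) should be (nearly) orthogonal to the radial direction
print(abs(float(T[:, 0] @ x0)))
```

Collecting such estimates at every base point yields the field of $k$-planes that the paper assembles into a subbundle.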
We propose and analyse an explicit boundary-preserving scheme for the strong approximation of some SDEs with non-globally Lipschitz drift and diffusion coefficients whose state-space is bounded. The scheme consists of a Lamperti transform followed by a Lie--Trotter splitting. We prove $L^{p}(\Omega)$-convergence of order $1$, for every $p \in \mathbb{N}$, of the scheme and exploit the Lamperti transform to confine the numerical approximations to the state-space of the considered SDE. We provide numerical experiments that confirm the theoretical results and compare the proposed Lamperti-splitting scheme to other numerical schemes for SDEs.
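For reference, the Lamperti transform in its standard scalar form (generic symbols, not this paper's notation) removes the state dependence of the noise:

```latex
\text{For } \mathrm{d}X_t = f(X_t)\,\mathrm{d}t + g(X_t)\,\mathrm{d}W_t
\text{ with } g > 0 \text{ on the state-space, set }
F(x) = \int^{x} \frac{\mathrm{d}u}{g(u)}, \qquad Y_t = F(X_t).
\text{ By It\^o's formula,}
\mathrm{d}Y_t
= \left( \frac{f\!\left(F^{-1}(Y_t)\right)}{g\!\left(F^{-1}(Y_t)\right)}
  - \tfrac{1}{2}\, g'\!\left(F^{-1}(Y_t)\right) \right) \mathrm{d}t
  + \mathrm{d}W_t ,
```

i.e. an SDE with additive unit noise, to which the Lie--Trotter splitting can then be applied; mapping back through $F^{-1}$ keeps the approximation inside the state-space.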
This work introduces a stabilised finite element formulation for the Stokes flow problem with a nonlinear slip boundary condition of friction type. The boundary condition is enforced with the help of an additional Lagrange multiplier, and the stabilised formulation is based on simultaneously stabilising both the pressure and the Lagrange multiplier. We establish stability and a priori error estimates, and perform a numerical convergence study in order to verify the theory.
We study a family of distances between functions of a single variable. These distances are examples of integral probability metrics, and have been used previously for comparing probability measures. Special cases include the Earth Mover's Distance and the Kolmogorov Metric. We examine their properties for general signals, proving that they are robust to a broad class of perturbations and that the distance between one-dimensional tomographic projections of a two-dimensional function is bounded by the size of the difference in projection angles. We also establish error bounds for approximating the metric from finite samples, and prove that these approximations are robust to additive Gaussian noise. The results are illustrated in numerical experiments.
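One special case mentioned above, the Earth Mover's Distance, is easy to illustrate for one-dimensional empirical measures, where it reduces to a sorted-sample comparison (this sketch is a standard identity, not the paper's general setting):

```python
import numpy as np

def emd_1d(x, y):
    """Earth Mover's (Wasserstein-1) distance between two equal-size 1-D
    samples. In one dimension it equals the mean absolute difference of
    the sorted samples, i.e. the L1 distance between empirical CDFs."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape, "equal sample sizes assumed in this sketch"
    return float(np.mean(np.abs(x - y)))

print(emd_1d([0.0, 1.0], [1.0, 2.0]))  # shifting every point by 1 costs 1
print(emd_1d([3.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # same multiset: distance 0
```

The robustness results in the paper concern perturbations of such signals measured in exactly this kind of metric.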
In this work we introduce S-TREK, a novel local feature extractor that combines a deep keypoint detector, which is both translation and rotation equivariant by design, with a lightweight deep descriptor extractor. We train the S-TREK keypoint detector within a framework inspired by reinforcement learning, where we leverage a sequential procedure to maximize a reward directly related to keypoint repeatability. Our descriptor network is trained following a "detect, then describe" approach, where the descriptor loss is evaluated only at those locations where keypoints have been selected by the already trained detector. Extensive experiments on multiple benchmarks confirm the effectiveness of our proposed method, with S-TREK often outperforming other state-of-the-art methods in terms of repeatability and quality of the recovered poses, especially when dealing with in-plane rotations.
Benefiting from advances in deep learning, text-to-speech (TTS) techniques trained on clean speech have achieved significant performance improvements. Speech data collected from real scenes, however, often contains noise and generally needs to be denoised by speech enhancement models. Noise-robust TTS models are often trained on such enhanced speech and thus suffer from the speech distortion and residual background noise that degrade the quality of the synthesized speech. Meanwhile, self-supervised pre-trained models have been shown to exhibit excellent noise robustness on many speech tasks, implying that their learned representations are more tolerant of noise perturbations. In this work, we therefore explore pre-trained models to improve the noise robustness of TTS models. Based on HiFi-GAN, we first propose a representation-to-waveform vocoder, which learns to map the representations of a pre-trained model to the waveform. We then propose a text-to-representation FastSpeech 2 model, which learns to map text to pre-trained model representations. Experimental results on the LJSpeech and LibriTTS datasets show that our method outperforms those using speech enhancement methods in both subjective and objective metrics. Audio samples are available at: //zqs01.github.io/rep2wav/.
We study a class of Gaussian processes for which the posterior mean, for a particular choice of data, replicates a truncated Taylor expansion of any order. The data consist of derivative evaluations at the expansion point and the prior covariance kernel belongs to the class of Taylor kernels, which can be written in a certain power series form. We discuss and prove some results on maximum likelihood estimation of parameters of Taylor kernels. The proposed framework is a special case of Gaussian process regression based on data that is orthogonal in the reproducing kernel Hilbert space of the covariance kernel.
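The power-series form alluded to above can be written as follows (the symbols $c$ and $a_n$ are generic, not necessarily the paper's notation):

```latex
k(x, x') = \sum_{n=0}^{\infty} a_n \,(x - c)^n (x' - c)^n,
\qquad a_n > 0 ,
```

so that, for example, $a_n = 1/n!$ gives the exponential kernel $k(x, x') = \exp\!\big((x - c)(x' - c)\big)$. Conditioning a zero-mean Gaussian process with such a kernel on derivative evaluations at the expansion point $c$ is what makes the posterior mean reproduce a truncated Taylor expansion.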
We study the continuous multi-reference alignment model of estimating a periodic function on the circle from noisy and circularly-rotated observations. Motivated by analogous high-dimensional problems that arise in cryo-electron microscopy, we establish minimax rates for estimating generic signals that are explicit in the dimension $K$. In a high-noise regime with noise variance $\sigma^2 \gtrsim K$, for signals with Fourier coefficients of roughly uniform magnitude, the rate scales as $\sigma^6$ and has no further dependence on the dimension. This rate is achieved by a bispectrum inversion procedure, and our analyses provide new stability bounds for bispectrum inversion that may be of independent interest. In a low-noise regime where $\sigma^2 \lesssim K/\log K$, the rate scales instead as $K\sigma^2$, and we establish this rate by a sharp analysis of the maximum likelihood estimator that marginalizes over latent rotations. A complementary lower bound that interpolates between these two regimes is obtained using Assouad's hypercube lemma. We extend these analyses also to signals whose Fourier coefficients have a slow power law decay.
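The rotation invariance that makes bispectrum inversion possible is easy to verify numerically; the following uses the standard discrete bispectrum, not this paper's inversion procedure, and the test signal is arbitrary.

```python
import numpy as np

def bispectrum(f):
    """Discrete bispectrum B[k1, k2] = F[k1] F[k2] conj(F[k1 + k2]) of a
    periodic signal f; invariant to circular shifts (rotations) of f."""
    F = np.fft.fft(f)
    n = len(f)
    k = np.arange(n)
    return F[k[:, None]] * F[k[None, :]] * np.conj(F[(k[:, None] + k[None, :]) % n])

f = np.array([1.0, 2.0, 0.5, -1.0, 0.0, 3.0])
B1 = bispectrum(f)
B2 = bispectrum(np.roll(f, 2))   # same signal, circularly rotated
print(np.max(np.abs(B1 - B2)))   # shift invariance: difference ~ 0
```

A shift multiplies $F[k]$ by $e^{-2\pi i k s/n}$, and the three phase factors in $B[k_1,k_2]$ cancel exactly, which is why the bispectrum can be estimated without knowing the latent rotations.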
This note shows how to compute, to high relative accuracy under mild assumptions, complex Jacobi rotations for diagonalization of Hermitian matrices of order two, using the correctly rounded functions $\mathtt{cr\_hypot}$ and $\mathtt{cr\_rsqrt}$, proposed for standardization in the C programming language as recommended by the IEEE-754 floating-point standard. Rounding to nearest (ties to even) and non-stop arithmetic are assumed. The numerical examples compare the observed relative errors in the rotations' elements with the theoretical bounds, and show that the maximal observed departure of the rotations' determinants from unity is smaller than that of the transformations computed by LAPACK.
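A generic construction of such a rotation looks as follows; this is a textbook sketch in which `math.hypot` stands in for the correctly rounded $\mathtt{cr\_hypot}$, and it does not reproduce the note's error-analyzed implementation.

```python
import math

def jacobi_rotation_2x2(a, b, c):
    """Complex Jacobi rotation diagonalizing the Hermitian matrix
    [[a, conj(b)], [b, c]] with a, c real and b complex.
    Returns (ct, s) such that U = [[ct, -conj(s)], [s, ct]] is unitary
    and U^H H U is diagonal; s = sin(t) * e^{i*phi} with b = |b| e^{i*phi}."""
    r = math.hypot(b.real, b.imag)        # |b|, computed without overflow
    if r == 0.0:
        return 1.0, 0.0 + 0.0j            # already diagonal
    phase = b / r                         # e^{i*phi}
    t = 0.5 * math.atan2(2.0 * r, a - c)  # rotation angle: tan(2t) = 2|b|/(a-c)
    return math.cos(t), math.sin(t) * phase

a, b, c = 2.0, 1.0 + 2.0j, -1.0
ct, s = jacobi_rotation_2x2(a, b, c)
# off-diagonal entry of U^H H U; vanishes for the exact rotation angle
off = ct * (b.conjugate() * ct - a * s.conjugate()) \
    + s.conjugate() * (c * ct - b * s.conjugate())
print(abs(off))
```

The note's contribution lies precisely in how the intermediate quantities here are evaluated so that the elements of the rotation carry provably small relative errors.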
This paper formulates, analyzes, and demonstrates numerically a method for the partitioned solution of coupled interface problems involving combinations of projection-based reduced order models (ROMs) and/or full order models (FOMs). The method builds on the partitioned scheme developed in [1], which starts from a well-posed formulation of the coupled interface problem and uses its dual Schur complement to obtain an approximation of the interface flux. Explicit time integration of this problem decouples its subdomain equations and enables their independent solution on each subdomain. Extension of this partitioned scheme to coupled ROM-ROM or ROM-FOM problems requires formulations with non-singular Schur complements. To obtain such problems, we project a well-posed coupled FOM-FOM problem onto a composite reduced basis comprising separate sets of basis vectors for the interface and interior variables, and use the interface reduced basis as a Lagrange multiplier. Our analysis confirms that the resulting coupled ROM-ROM and ROM-FOM problems have provably non-singular Schur complements, independent of the mesh size and the reduced basis size. In the ROM-FOM case, analysis shows that one can also use the interface FOM space as a Lagrange multiplier. We illustrate the theoretical and computational properties of the partitioned scheme through reproductive and predictive tests for a model advection-diffusion transmission problem.
This paper considers a Bayesian approach for inclusion detection in nonlinear inverse problems using two known and popular push-forward prior distributions: the star-shaped and level set prior distributions. We analyze the convergence of the corresponding posterior distributions in a small measurement noise limit. The methodology is general; it works for priors arising from any H\"older continuous transformation of Gaussian random fields and is applicable to a range of inverse problems. The level set and star-shaped prior distributions are examples of push-forward priors under H\"older continuous transformations that take advantage of the structure of inclusion detection problems. We show that the corresponding posterior mean converges to the ground truth in a proper probabilistic sense. Numerical tests on a two-dimensional quantitative photoacoustic tomography problem showcase the approach. The results highlight the convergence properties of the posterior distributions and the ability of the methodology to detect inclusions with sufficiently regular boundaries.
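The level set prior mentioned above pushes a Gaussian random field through a thresholding map to produce inclusion shapes; the following one-dimensional sketch is only illustrative (the paper's experiments are two-dimensional, and the kernel, lengthscale, and level are arbitrary choices here).

```python
import numpy as np

rng = np.random.default_rng(0)

# sample a smooth Gaussian random field on a 1-D grid over [0, 1]
n = 200
x = np.linspace(0.0, 1.0, n)
# squared-exponential covariance (lengthscale 0.1) gives smooth samples
cov = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * 0.1 ** 2))
L = np.linalg.cholesky(cov + 1e-6 * np.eye(n))   # jitter for stability
field = L @ rng.standard_normal(n)

# push-forward through thresholding: the inclusion is where field > 0
inclusion = (field > 0.0).astype(float)
```

The thresholding map is discontinuous but Hölder continuous in suitable norms on the relevant event, which is what places this prior within the paper's general framework.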