We consider the problem of reconstructing inhomogeneities in an isotropic elastic body using time-harmonic waves. Here we extend the so-called monotonicity method for inclusion detection and show how to determine certain types of inhomogeneities in the Lam\'e parameters and the density. We also include some numerical tests of the method.
Multivariate imputation by chained equations (MICE) is one of the most popular approaches to address missing values in a data set. This approach requires specifying a univariate imputation model for every variable under imputation. The specification of which predictors should be included in these univariate imputation models can be a daunting task. Principal component analysis (PCA) can simplify this process by replacing all of the potential imputation model predictors with a few components summarizing their variance. In this article, we extend the use of PCA with MICE to include a supervised aspect whereby information from the variables under imputation is incorporated into the principal component estimation. We conducted an extensive simulation study to assess the statistical properties of MICE with different versions of supervised dimensionality reduction and we compared them with the use of classical unsupervised PCA as a simpler dimensionality reduction technique.
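The unsupervised-PCA variant described above can be sketched in a few lines of NumPy: the candidate predictors are replaced by their leading principal components, and a univariate linear imputation model is fitted on the observed cases only. This is a simplified illustration, not the article's supervised procedure; the data, the 3-component choice, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))                         # complete candidate predictors
y = X @ rng.normal(size=p) + 0.1 * rng.normal(size=n)  # variable under imputation
miss = rng.random(n) < 0.3                          # ~30% of y is missing

# Unsupervised PCA: summarize all predictors by their first 3 components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T

# Univariate imputation model fitted on the observed cases only
A = np.column_stack([np.ones(n), Z])
beta, *_ = np.linalg.lstsq(A[~miss], y[~miss], rcond=None)
y_imp = y.copy()
y_imp[miss] = A[miss] @ beta                        # imputed values
```

In a full MICE cycle this regression would be one univariate step among many, repeated across variables and iterations; the point here is only that the component scores `Z` stand in for the full predictor set.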
In generalized regression models, the effect of continuous covariates is commonly assumed to be linear. This assumption, however, may be too restrictive in applications and may lead to biased effect estimates and decreased predictive ability. While a multitude of alternatives for the flexible modeling of continuous covariates have been proposed, methods that provide guidance for choosing a suitable functional form are still limited. To address this issue, we propose a detection algorithm that evaluates several approaches for modeling continuous covariates and guides practitioners to choose the most appropriate alternative. The algorithm utilizes a unified framework for tree-structured modeling, which makes the results easily interpretable. We assessed the performance of the algorithm by conducting a simulation study. To illustrate the proposed algorithm, we analyzed data from patients suffering from chronic kidney disease.
A new numerical domain decomposition method is proposed for solving elliptic equations on compact Riemannian manifolds. The advantage of this method is to avoid global triangulations or grids on manifolds. Our method is numerically tested on some $4$-dimensional manifolds such as the unit sphere $S^{4}$, the complex projective space $\mathbb{CP}^{2}$ and the product manifold $S^{2} \times S^{2}$.
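In a much simplified 1D analogue of the overlapping-subdomain idea (flat interval instead of a manifold, and not the paper's method), the classical alternating Schwarz iteration solves an elliptic problem by repeated Dirichlet solves on two overlapping pieces, with no global solve. The grid size and subdomain split below are arbitrary illustrative choices.

```python
import numpy as np

N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N)                      # -u'' = 1, u(0) = u(1) = 0, exact u = x(1-x)/2

def solve_sub(u, lo, hi):
    """Dirichlet solve of -u'' = f on x[lo..hi], endpoints taken from u."""
    m = hi - lo - 1                 # number of interior unknowns
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0]  += u[lo] / h**2           # boundary data from the current iterate
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

u = np.zeros(N)
for _ in range(30):                 # alternating Schwarz sweeps
    solve_sub(u, 0, 60)             # left subdomain  [0, 0.6]
    solve_sub(u, 40, N - 1)         # right subdomain [0.4, 1]

err = np.max(np.abs(u - x * (1 - x) / 2))
```

With overlap [0.4, 0.6] the iteration contracts geometrically, so 30 sweeps already reproduce the exact solution to roundoff; on a manifold the subdomains would be coordinate charts instead of subintervals.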
The reverse engineering of a complex mixture, regardless of its nature, has become an important problem today. Being able to quickly assess the potential toxicity of new commercial products in relation to the environment presents a genuine analytical challenge. The development of digital tools (databases, chemometrics, machine learning, etc.) and analytical techniques (Raman spectroscopy, NIR spectroscopy, mass spectrometry, etc.) will allow for the identification of potentially toxic molecules. In this article, we use the example of detergent products, whose composition can prove dangerous to humans or the environment, necessitating precise identification and quantification for quality control and regulation purposes. The combination of various digital tools (spectral database, mixture database, experimental design, chemometrics / machine learning algorithms{\ldots}) with different sample preparation methods (raw sample, or several concentrated / diluted samples) and Raman spectroscopy has enabled the identification of the mixture's constituents and an estimation of its composition. Implementing such strategies across different analytical tools can result in time savings for pollutant identification and contamination assessment in various matrices. This strategy is also applicable in the industrial sector for product or raw material control, as well as for quality control purposes.
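As a toy illustration of the spectral-unmixing step (with entirely synthetic Gaussian bands standing in for a real Raman spectral database), constituents can be identified and their proportions estimated by regressing the mixture spectrum on a library of pure-component spectra. The band positions, noise level, and composition below are invented for the example.

```python
import numpy as np

wav = np.linspace(200, 1800, 400)                 # Raman shift axis (cm^-1)

def band(center, width):                          # idealized Raman band
    return np.exp(-0.5 * ((wav - center) / width) ** 2)

# hypothetical spectral database: three pure constituents
library = np.column_stack([band(450, 30) + 0.5 * band(1100, 40),
                           band(800, 25),
                           band(1450, 35) + 0.3 * band(600, 20)])

true_w = np.array([0.5, 0.3, 0.2])                # true mixture composition
mixture = library @ true_w + 0.005 * np.random.default_rng(1).normal(size=wav.size)

# composition estimate by least squares against the library
w_hat, *_ = np.linalg.lstsq(library, mixture, rcond=None)
w_hat = np.clip(w_hat, 0, None)                   # enforce non-negativity
w_hat /= w_hat.sum()                              # report relative composition
```

A production pipeline would add baseline correction, normalization, and a proper non-negative or sparse solver, but the recovered `w_hat` already tracks the true proportions here.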
Geometric quantiles are location parameters which extend classical univariate quantiles to normed spaces (possibly infinite-dimensional) and which include the geometric median as a special case. The infinite-dimensional setting is highly relevant in the modeling and analysis of functional data, as well as for kernel methods. We begin by providing new results on the existence and uniqueness of geometric quantiles. Estimation is then performed with an approximate M-estimator and we investigate its large-sample properties in infinite dimension. When the population quantile is not uniquely defined, we leverage the theory of variational convergence to obtain asymptotic statements on subsequences in the weak topology. When there is a unique population quantile, we show, under minimal assumptions, that the estimator is consistent in the norm topology for a wide range of Banach spaces including every separable uniformly convex space. In separable Hilbert spaces, we establish weak Bahadur-Kiefer representations of the estimator, from which $\sqrt n$-asymptotic normality follows. As a consequence, we obtain the first central limit theorem valid in a generic Hilbert space and under minimal assumptions that exactly match those of the finite-dimensional case. Our consistency and asymptotic normality results significantly improve the state of the art, even for exact geometric medians in Hilbert spaces.
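In finite dimension, the M-estimation step can be sketched with a generalized Weiszfeld fixed-point iteration (a standard algorithm, not necessarily the approximate M-estimator analyzed in the paper): the geometric quantile at index $u$ with $|u| < 1$ solves the first-order condition $\frac{1}{n}\sum_i (m - x_i)/|m - x_i| = u$, and $u = 0$ recovers the geometric median. The sample and index below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) + np.array([1.0, -2.0, 0.5])  # sample in R^3

def geometric_quantile(X, u, iters=200, eps=1e-9):
    """Generalized Weiszfeld iteration: fixed point of
    (1/n) * sum_i (m - x_i)/|m - x_i| = u, for |u| < 1."""
    n = len(X)
    m = X.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(X - m, axis=1), eps)
        w = 1.0 / d
        m = ((w[:, None] * X).sum(axis=0) + n * u) / w.sum()
    return m

median = geometric_quantile(X, np.zeros(3))                # u = 0: geometric median
q = geometric_quantile(X, np.array([0.5, 0.0, 0.0]))       # quantile pushed toward +x1
```

For a symmetric cloud the median sits near the center, and a nonzero index shifts the quantile in the direction of $u$, matching the location-parameter interpretation.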
We propose and analyze a novel symmetric exponential wave integrator (sEWI) for the nonlinear Schr\"odinger equation (NLSE) with low regularity potential and typical power-type nonlinearity of the form $ f(\rho) = \rho^\sigma $, where $ \rho:=|\psi|^2 $ is the density with $ \psi $ the wave function and $ \sigma > 0 $ is the exponent of the nonlinearity. The sEWI is explicit and stable under a time step size restriction independent of the mesh size. We rigorously establish error estimates of the sEWI under various regularity assumptions on the potential and nonlinearity. For "good" potential and nonlinearity ($H^2$-potential and $\sigma \geq 1$), we establish an optimal second-order error bound in $L^2$-norm. For low regularity potential and nonlinearity ($L^\infty$-potential and $\sigma > 0$), we obtain a first-order $L^2$-norm error bound accompanied by a uniform $H^2$-norm bound of the numerical solution. Moreover, adopting a new technique of \textit{regularity compensation oscillation} (RCO) to analyze error cancellation, for some non-resonant time steps, the optimal second-order $L^2$-norm error bound is proved under a weaker assumption on the nonlinearity: $\sigma \geq 1/2$. In all cases, we also present corresponding fractional order error bounds in $H^1$-norm, which is the natural norm in terms of energy. Extensive numerical results are reported to confirm our error estimates and to demonstrate the superiority of the sEWI, including much weaker regularity requirements on the potential and nonlinearity, and excellent long-time behavior with near-conservation of mass and energy.
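For context, a generic Fourier-based time integrator for the NLSE can be sketched as follows; this is a standard Strang splitting, not the sEWI itself, and the domain, potential, and initial data are arbitrary illustrative choices. Each substep is unitary, so the scheme conserves the discrete mass exactly, mirroring the near-conservation of mass reported for the sEWI.

```python
import numpy as np

# Strang splitting for i psi_t = -psi_xx + V psi + |psi|^2 psi, periodic in x.
L, N, dt, steps = 2 * np.pi, 128, 1e-3, 1000
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # angular wavenumbers
V = np.cos(x)                                   # smooth illustrative potential
psi = np.exp(-2 * (x - np.pi) ** 2).astype(complex)
mass0 = np.sum(np.abs(psi) ** 2) * (L / N)      # discrete mass

ekin = np.exp(-1j * dt * k ** 2)                # exact kinetic flow in Fourier space
for _ in range(steps):
    psi *= np.exp(-0.5j * dt * (V + np.abs(psi) ** 2))   # half potential + nonlinear
    psi = np.fft.ifft(ekin * np.fft.fft(psi))            # full kinetic step
    psi *= np.exp(-0.5j * dt * (V + np.abs(psi) ** 2))   # half potential + nonlinear

mass = np.sum(np.abs(psi) ** 2) * (L / N)
```

The potential/nonlinear substep is exact because $|\psi|$ is constant along its flow; the splitting's accuracy, unlike the sEWI's, degrades for rough $V$, which is precisely the regime the abstract addresses.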
We propose and analyze an extended Fourier pseudospectral (eFP) method for the spatial discretization of the Gross-Pitaevskii equation (GPE) with low regularity potential by treating the potential in an extended window for its discrete Fourier transform. The proposed eFP method maintains optimal convergence rates with respect to the regularity of the exact solution even if the potential is of low regularity, and it enjoys similar computational cost as the standard Fourier pseudospectral method; thus it is both efficient and accurate. Furthermore, similar to the Fourier spectral/pseudospectral methods, the eFP method can be easily coupled with different popular temporal integrators including finite difference methods, time-splitting methods and exponential-type integrators. Numerical results are presented to validate our optimal error estimates, to demonstrate their sharpness, and to show the method's efficiency in practical computations.
The formulation of Bayesian inverse problems involves choosing prior distributions; choices that seem equally reasonable may lead to significantly different conclusions. We develop a computational approach to better understand the impact of the hyperparameters defining the prior on the posterior statistics of the quantities of interest. Our approach relies on global sensitivity analysis (GSA) of Bayesian inverse problems with respect to the hyperparameters defining the prior. This, however, is a challenging problem: a naive double-loop sampling approach would require running a prohibitive number of Markov chain Monte Carlo (MCMC) sampling procedures. The present work takes a foundational step in making such a sensitivity analysis practical through (i) a judicious combination of efficient surrogate models and (ii) a tailored importance sampling method. In particular, we can perform accurate GSA of posterior prediction statistics with respect to prior hyperparameters without having to repeat MCMC runs. We demonstrate the effectiveness of the approach on a simple Bayesian linear inverse problem and a nonlinear inverse problem governed by an epidemiological model.
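The importance-sampling idea of reusing posterior draws across prior hyperparameters can be illustrated on a toy conjugate-Gaussian problem (not the paper's examples): draws obtained under one prior standard deviation are reweighted by the prior ratio to estimate posterior statistics under another, and checked against the closed form. All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n, ybar = 1.0, 20, 0.8                    # likelihood: y_i ~ N(theta, sigma^2)

def post(tau):
    """Exact Gaussian posterior (mean, variance) under prior theta ~ N(0, tau^2)."""
    prec = n / sigma**2 + 1 / tau**2
    return n * ybar / sigma**2 / prec, 1 / prec

tau0, tau1 = 1.0, 0.3                            # two prior hyperparameter choices
m0, v0 = post(tau0)
theta = rng.normal(m0, np.sqrt(v0), size=200_000)  # stand-in for MCMC draws at tau0

# self-normalized importance weights: ratio of the two priors (constants cancel)
logw = theta**2 / (2 * tau0**2) - theta**2 / (2 * tau1**2)
w = np.exp(logw - logw.max())
mean_is = np.sum(w * theta) / np.sum(w)          # posterior mean under tau1, no re-run

m1, _ = post(tau1)                               # exact answer for comparison
```

Only the prior ratio enters the weights because the likelihood is shared, which is what lets a single MCMC run serve many hyperparameter values in the GSA loop.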
We introduce a convergent hierarchy of lower bounds on the minimum value of a real homogeneous polynomial over the sphere. The main practical advantage of our hierarchy over the sum-of-squares (SOS) hierarchy is that the lower bound at each level of our hierarchy is obtained by a minimum eigenvalue computation, as opposed to the full semidefinite program (SDP) required at each level of SOS. In practice, this allows us to go to much higher levels than are computationally feasible for the SOS hierarchy. For both hierarchies, the underlying space at the $k$-th level is the set of homogeneous polynomials of degree $2k$. We prove that our hierarchy converges as $O(1/k)$ in the level $k$, matching the best-known convergence of the SOS hierarchy when the number of variables $n$ is less than the half-degree $d$ (the best-known convergence of SOS when $n \geq d$ is $O(1/k^2)$). More generally, we introduce a convergent hierarchy of minimum eigenvalue computations for minimizing the inner product between a real tensor and an element of the spherical Segre-Veronese variety, with similar convergence guarantees. As examples, we obtain hierarchies for computing the (real) tensor spectral norm, and for minimizing biquadratic forms over the sphere. Hierarchies of eigencomputations for more general constrained polynomial optimization problems are discussed.
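The first level of such an eigenvalue hierarchy is easy to demonstrate numerically: for a quadratic form the sphere-minimum is exactly $\lambda_{\min}$, and for a quartic $\langle T, x^{\otimes 4}\rangle$ the flattening $\mathrm{mat}(T)$ yields the lower bound $\lambda_{\min}(\mathrm{mat}(T))$, since $x \otimes x$ is a unit vector whenever $|x| = 1$. The example below, a quartic that reduces to a quadratic on the sphere, is purely illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                        # random symmetric quadratic form

# Level 1 for a quadratic: min_{|x|=1} x^T A x equals the smallest eigenvalue.
bound2 = np.linalg.eigvalsh(A)[0]

# Quartic p(x) = <T, x^(x)4> with T = A (x) I, i.e. p(x) = (x^T A x)|x|^2.
# On the sphere |x (x) x| = |x|^2 = 1, so lambda_min of the flattening is a bound.
M = np.kron(A, np.eye(n))
bound4 = np.linalg.eigvalsh(M)[0]

# Random search over the sphere never dips below the eigenvalue bound.
X = rng.normal(size=(100_000, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)
vals = np.einsum('bi,ij,bj->b', X, A, X)
```

Here the eigenvalues of $A \otimes I$ are exactly those of $A$, so the quartic bound is tight; higher levels of the hierarchy replace $x \otimes x$ by higher-degree monomial vectors and tighten the bound at an $O(1/k)$ rate.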
This work studies time-dependent electromagnetic scattering from obstacles whose interaction with the wave is fully determined by a nonlinear boundary condition. In particular, the boundary condition studied in this work enforces a power law type relation between the electric and magnetic field along the boundary. Based on time-dependent jump conditions of classical boundary operators, we derive a nonlinear system of time-dependent boundary integral equations that determines the tangential traces of the scattered electric and magnetic fields. These fields can subsequently be computed at arbitrary points in the exterior domain by evaluating a time-dependent representation formula. Fully discrete schemes are obtained by discretising the nonlinear system of boundary integral equations with Runge--Kutta based convolution quadrature in time and Raviart--Thomas boundary elements in space. Error bounds with explicitly stated convergence rates are proven, under the assumption of sufficient regularity of the exact solution. The error analysis is conducted through novel techniques based on time-discrete transmission problems and the use of a new discrete partial integration inequality. Numerical experiments illustrate the use of the proposed method and provide empirical convergence rates.