The proposed method re-frames traditional inverse problems of electrocardiography as regression problems, constraining the solution space by decomposing signals with multidimensional Gaussian impulse basis functions. Impulse heart-surface potentials (HSPs) were generated with single Gaussian basis functions at discrete heart surface locations and projected to corresponding body-surface potentials (BSPs) using a volume conductor torso model. Both BSPs (inputs) and HSPs (outputs) were mapped to regular 2D surface meshes and used to train a neural network. The predictive capabilities of the network were tested with unseen synthetic and experimental data. A dense, fully connected, single-hidden-layer neural network was trained to map body surface impulses to heart surface Gaussian basis functions for reconstructing HSPs. Synthetic pulses moving across the heart surface were predicted by the neural network with a root mean squared error of $9.1\pm1.4\%$. Predicted signals were robust to noise up to 20 dB, and errors due to displacement and rotation of the heart within the torso were bounded and predictable. A 40 mm shift of the heart toward the spine resulted in a 4\% increase in signal feature localization error. The set of training impulse function data could be reduced while prediction error remained bounded. Recorded HSPs from in-vitro pig hearts were reliably decomposed using space-time Gaussian basis functions. Predicted HSPs for left-ventricular pacing had a mean absolute error of $10.4\pm11.4$ ms. Other pacing scenarios were analyzed with similar success. Conclusion: Impulses from Gaussian basis functions are potentially an effective and robust way to train simple neural network data models for reconstructing HSPs from decomposed BSPs. The HSPs predicted by the neural network can be used to generate activation maps that non-invasively identify features of cardiac electrical dysfunction and can guide subsequent treatment options.
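As a rough illustration of the data model described above (not the authors' implementation), the following sketch shows a dense, single-hidden-layer network mapping a flattened body-surface mesh to heart-surface Gaussian-basis coefficients. The mesh sizes, hidden width, activation, optimizer, and the use of PyTorch are assumptions made for illustration.

```python
# Minimal sketch: dense single-hidden-layer regression from flattened BSP meshes
# to heart-surface Gaussian-basis coefficients. All sizes are illustrative.
import torch
import torch.nn as nn

N_BSP = 32 * 32      # assumed size of the regular 2D body-surface mesh (flattened)
N_HSP = 32 * 32      # assumed number of heart-surface Gaussian basis coefficients
HIDDEN = 1024        # illustrative hidden-layer width

model = nn.Sequential(
    nn.Linear(N_BSP, HIDDEN),
    nn.ReLU(),
    nn.Linear(HIDDEN, N_HSP),
)

loss_fn = nn.MSELoss()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(bsp_batch, hsp_coeff_batch):
    """One gradient step on (BSP input, Gaussian-coefficient target) pairs."""
    optim.zero_grad()
    loss = loss_fn(model(bsp_batch), hsp_coeff_batch)
    loss.backward()
    optim.step()
    return loss.item()
```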
In this article, we study the stability and dynamic transitions of an electrically conducting fluid in the presence of an external uniform horizontal magnetic field and rotation, based on a Boussinesq approximation model. We take a hybrid approach combining theoretical analysis with numerical computation to study the transitions arising from a simple real eigenvalue, a pair of complex conjugate eigenvalues, and a pair of real eigenvalues. Center manifold reduction is applied to reduce the infinite-dimensional system to a corresponding finite-dimensional one, yielding several non-dimensional transition numbers that determine the dynamic transition types. Careful numerical computations are performed to determine these transition numbers as well as the related temporal and flow patterns. Our results indicate that both transitions of continuous type and transitions of jump type can occur in certain parameter regions. For the continuous transition from a simple real eigenvalue, the Boussinesq approximation model bifurcates to two nontrivial stable steady-state solutions. For the continuous transition from a pair of complex conjugate eigenvalues, the model bifurcates to a stable periodic solution. For the continuous transition from a pair of real eigenvalues, the model bifurcates to a local attractor at the critical Rayleigh number; the local attractor contains two (four) stable nodes and two (four) saddle points.
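As a schematic reminder of what a transition number controls (not the specific reduced system derived for this model), the center manifold reduction near a simple real critical eigenvalue typically yields a one-dimensional equation of the form
\[
\frac{dx}{dt} = \beta(R)\,x - P\,x^{3} + o(|x|^{3}),
\]
where $\beta(R)$ changes sign at the critical Rayleigh number. If the transition number satisfies $P>0$, the transition is continuous and the system bifurcates to the two stable steady states $x = \pm\sqrt{\beta/P}$; if $P<0$, the transition is of jump type.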
We study the problem of estimating a rank-$1$ signal in the presence of rotationally invariant noise, a class of perturbations more general than Gaussian noise. Principal Component Analysis (PCA) provides a natural estimator, and sharp results on its performance have been obtained in the high-dimensional regime. Recently, an Approximate Message Passing (AMP) algorithm has been proposed as an alternative estimator with the potential to improve the accuracy of PCA. However, the existing analysis of AMP requires an initialization that is both correlated with the signal and independent of the noise, which is often unrealistic in practice. In this work, we combine the two methods, and propose to initialize AMP with PCA. Our main result is a rigorous asymptotic characterization of the performance of this estimator. Both the AMP algorithm and its analysis differ from those previously derived in the Gaussian setting: at every iteration, our AMP algorithm requires a specific term to account for the PCA initialization, while in the Gaussian case, the PCA initialization affects only the first iteration of AMP. The proof is based on a two-phase artificial AMP that first approximates the PCA estimator and then mimics the true AMP. Our numerical simulations show an excellent agreement between AMP results and theoretical predictions, and suggest an interesting open direction towards achieving Bayes-optimal performance.
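For orientation only, the sketch below shows a spectrally initialized AMP iteration in the simpler Gaussian (GOE-noise) rank-1 setting with a tanh denoiser. It is not the rotationally invariant AMP analyzed in the paper, which requires additional correction terms for the PCA initialization at every iteration and non-Gaussian Onsager coefficients; the denoiser and scaling here are assumptions.

```python
# Illustrative sketch: PCA-initialized AMP for a rank-1 spiked matrix with
# Gaussian (GOE) noise of variance 1/n per entry. Not the paper's algorithm.
import numpy as np

def amp_pca_init(Y, n_iter=20):
    n = Y.shape[0]
    # PCA (spectral) initialization: leading eigenvector, scaled to norm sqrt(n)
    _, eigvecs = np.linalg.eigh(Y)
    x = eigvecs[:, -1] * np.sqrt(n)
    f_prev = np.zeros(n)
    for _ in range(n_iter):
        f = np.tanh(x)                      # separable denoiser (assumed prior)
        b = np.mean(1.0 - f**2)             # Onsager coefficient <f'(x)>
        x, f_prev = Y @ f - b * f_prev, f   # AMP update with memory correction
    return np.tanh(x)                       # final estimate of the signal direction
```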
We introduce a kernel estimator of the tail index of a right-censored Pareto-type distribution that generalizes the estimator of Worms and Worms (2014) in terms of weight coefficients. Under some regularity conditions, the asymptotic normality of the proposed estimator is established. In the framework of the second-order condition, we derive an asymptotically bias-reduced version of the new estimator. Through a simulation study, we conclude that one of the main features of the proposed kernel estimator is its smoothness, in contrast to Worms's estimator, which behaves rather erratically as a function of the number of largest extreme values. As expected, the bias decreases significantly compared to that of the non-smoothed estimator, at the cost of a slight increase in the mean squared error.
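To illustrate the general idea of kernel-weighting a tail-index estimator (not the proposed censored-data estimator, whose weight coefficients and censoring adjustment are not reproduced here), a generic kernel-weighted Hill-type estimator for complete data can be sketched as follows; the biweight kernel is an arbitrary smooth choice.

```python
# Hedged illustration: kernel-weighted average of log-excesses over the
# (k+1)-th largest observation. With a constant kernel this reduces to the
# classical Hill estimator. Censoring is NOT handled in this sketch.
import numpy as np

def biweight_kernel(u):
    """Biweight kernel supported on (0, 1); illustrative choice."""
    return np.where((u > 0) & (u < 1), (15.0 / 8.0) * (1.0 - u**2) ** 2, 0.0)

def kernel_hill(x, k, kernel=biweight_kernel):
    xs_desc = np.sort(x)[::-1]                       # order statistics, descending
    log_excess = np.log(xs_desc[:k] / xs_desc[k])    # top-k log-excesses
    w = kernel(np.arange(1, k + 1) / (k + 1.0))      # smooth kernel weights
    return np.sum(w * log_excess) / np.sum(w)        # weighted Hill-type estimate
```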
The time domain linear sampling method (TD-LSM) solves inverse scattering problems with time domain data by creating an indicator function for the support of the unknown scatterer. It requires only the solution of a linear integral equation, the near-field equation, for different sampling points that probe the domain where the scatterer is located. To date, the method has been used for the acoustic wave equation and has been tested for several different types of scatterers, i.e., sound-hard, impedance, and penetrable, as well as for waveguides. In this paper, we extend the TD-LSM to the time-dependent Maxwell's system with impedance boundary conditions; a similar analysis handles the case of a perfectly electrically conducting (PEC) body. We provide an analysis that supports the use of the TD-LSM for this problem, together with preliminary numerical tests of the algorithm. Our analysis relies on the Laplace transform approach previously used for the acoustic wave equation. This is the first application of the TD-LSM in electromagnetism.
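A minimal sketch of the sampling-method workflow, under the assumption that the near-field operator has already been discretized into a matrix N from the measured data and that a right-hand side b_z is available for each sampling point z: solve a Tikhonov-regularized near-field equation and use the reciprocal norm of the solution as the indicator. This is the generic linear sampling recipe, not the paper's full time-domain Maxwell implementation.

```python
# Schematic linear sampling indicator: g_z solves the regularized near-field
# equation; 1/||g_z|| is plotted over the sampling grid to image the scatterer.
import numpy as np

def lsm_indicator(N, rhs_per_point, alpha=1e-3):
    """N: discretized near-field operator; rhs_per_point: one vector per sampling point."""
    A = N.conj().T @ N + alpha * np.eye(N.shape[1])   # Tikhonov normal equations
    indicator = []
    for b_z in rhs_per_point:
        g_z = np.linalg.solve(A, N.conj().T @ b_z)    # regularized solution
        indicator.append(1.0 / np.linalg.norm(g_z))   # blows down outside the scatterer
    return np.array(indicator)
```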
In this work we consider a class of non-linear eigenvalue problems that admit a spectrum similar to that of a Hamiltonian matrix, in the sense that the spectrum is symmetric with respect to both the real and imaginary axes. More precisely, we present a method to iteratively approximate the eigenvalues of such non-linear eigenvalue problems closest to a given purely real or imaginary shift, while preserving the symmetries of the spectrum. To this end, the presented method exploits the equivalence between the considered non-linear eigenvalue problem and the eigenvalue problem associated with a linear but infinite-dimensional operator. To compute the eigenvalues closest to the given shift, we apply a specifically chosen shift-invert transformation to this linear operator and compute the eigenvalues of largest modulus of the shifted and inverted operator using an (infinite) Arnoldi procedure. The advantage of the chosen shift-invert transformation is that the spectrum of the transformed operator has a "real skew-Hamiltonian"-like structure. Furthermore, it is proven that the Krylov space constructed by applying this operator satisfies an orthogonality property in terms of a specifically chosen bilinear form. By taking this property into account in the orthogonalization process, it is ensured that, even in the presence of rounding errors, the obtained approximation of, e.g., a simple, purely imaginary eigenvalue is itself simple and purely imaginary. The presented work can thus be seen as an extension of [V. Mehrmann and D. Watkins, "Structure-Preserving Methods for Computing Eigenpairs of Large Sparse Skew-Hamiltonian/Hamiltonian Pencils", SIAM J. Sci. Comput., 22(6), 2001] to the considered class of non-linear eigenvalue problems. Although the presented method is initially defined on function spaces, it can be implemented using finite-dimensional linear algebra operations.
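The structural idea of orthogonalizing a Krylov basis with respect to a chosen bilinear form, rather than the Euclidean inner product, can be sketched as below. The operator `apply_op` stands in for a (discretized) shift-inverted operator and `bform` for the bilinear form; both are placeholders, real arithmetic is assumed, and the breakdown handling for indefinite forms is simplified.

```python
# Illustrative Arnoldi-type iteration with orthogonalization in a user-supplied
# bilinear form B(., .). Not the paper's infinite Arnoldi method.
import numpy as np

def bilinear_arnoldi(apply_op, bform, v0, m):
    n = v0.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.sqrt(abs(bform(v0, v0)))
    for j in range(m):
        w = apply_op(V[:, j])
        for i in range(j + 1):
            # oblique (B-)projection coefficient; B may be indefinite
            H[i, j] = bform(V[:, i], w) / bform(V[:, i], V[:, i])
            w = w - H[i, j] * V[:, i]
        nrm = abs(bform(w, w))
        if nrm < 1e-28:                       # breakdown (possible for indefinite B)
            return V[:, : j + 1], H[: j + 1, : j + 1]
        H[j + 1, j] = np.sqrt(nrm)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```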
Selecting powerful predictors for an outcome is a cornerstone task for machine learning. However, some types of questions can only be answered by identifying the predictors that causally affect the outcome. A recent approach to this causal inference problem leverages the invariance property of a causal mechanism across differing experimental environments (Peters et al., 2016; Heinze-Deml et al., 2018). This method, invariant causal prediction (ICP), has a substantial computational defect: its runtime scales exponentially with the number of possible causal variables. In this work, we show that the approach taken in ICP may be reformulated as a series of nonparametric tests that scales linearly in the number of predictors. Each of these tests relies on the minimization of a novel loss function, the Wasserstein variance, which is derived from tools in optimal transport theory and is used to quantify distributional variability across environments. We prove under mild assumptions that our method is able to recover the set of identifiable direct causes, and we demonstrate in our experiments that it is competitive with other benchmark causal discovery algorithms.
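To make the notion of distributional variability concrete, a quantile-based estimate of a Wasserstein variance for one-dimensional residual distributions across environments might look like the sketch below. It relies on the fact that the 1D 2-Wasserstein barycenter has a quantile function equal to the average of the environments' quantile functions; the grid size and the use of regression residuals are assumptions, and the paper's exact definition may differ.

```python
# Hedged sketch: average squared W2 distance of each environment's (1D) residual
# distribution to their Wasserstein barycenter, via empirical quantiles.
import numpy as np

def wasserstein_variance(residuals_by_env, n_grid=100):
    """residuals_by_env: list of 1D arrays, one per experimental environment."""
    probs = (np.arange(n_grid) + 0.5) / n_grid
    Q = np.stack([np.quantile(r, probs) for r in residuals_by_env])  # env x grid
    barycenter = Q.mean(axis=0)                    # 1D Wasserstein barycenter quantiles
    return np.mean((Q - barycenter) ** 2)          # avg squared W2 distance
```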
Probabilistic estimation of cardiac electrophysiological model parameters serves as an important step towards model personalization and uncertainty quantification. The expensive computation associated with these model simulations, however, makes direct Markov chain Monte Carlo (MCMC) sampling of the posterior probability density function (pdf) of model parameters computationally intensive. Approximate posterior pdfs obtained by replacing the simulation model with a computationally efficient surrogate, on the other hand, have seen limited accuracy. In this paper, we present a Bayesian active learning method to directly approximate the posterior pdf of cardiac model parameters, in which we intelligently select training points at which to query the simulation model so as to learn the posterior pdf from a small number of samples. We integrate a generative model into Bayesian active learning to allow approximating the posterior pdf of high-dimensional model parameters at the resolution of the cardiac mesh. We further introduce new acquisition functions that focus the selection of training points on better approximating the shape, rather than only the modes, of the posterior pdf of interest. We evaluated the presented method for estimating tissue excitability in a 3D cardiac electrophysiological model in a range of synthetic and real-data experiments. We demonstrated its improved accuracy in approximating the posterior pdf compared to Bayesian active learning with standard acquisition functions, and its substantially reduced computational cost in comparison to existing standard or accelerated MCMC sampling.
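As a generic illustration of the active-learning loop (not the authors' pipeline, which additionally uses a generative model and specialized acquisition functions), one can fit a Gaussian-process surrogate of the log-posterior and query the simulator where the surrogate is uncertain yet predicts non-negligible posterior mass. The simulator call, parameter bounds, candidate sampling, and acquisition weighting below are all assumptions.

```python
# Sketch: GP surrogate of a log-posterior refined by mass-weighted uncertainty sampling.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def active_posterior_approximation(log_post, bounds, n_init=10, n_iter=30, n_cand=500):
    dim = bounds.shape[0]
    rng = np.random.default_rng(0)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, dim))
    y = np.array([log_post(x) for x in X])            # expensive simulator queries
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_cand, dim))
        mu, sd = gp.predict(cand, return_std=True)
        acq = sd * np.exp(mu - mu.max())              # uncertainty weighted by posterior mass
        x_new = cand[np.argmax(acq)]
        X = np.vstack([X, x_new])
        y = np.append(y, log_post(x_new))
    return gp                                          # surrogate of the log-posterior
```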
Seismic wave propagation forms the basis for most aspects of seismological research, yet solving the wave equation is a major computational burden that inhibits the progress of research. This is exacerbated by the fact that new simulations must be performed whenever the velocity structure or source location is perturbed. Here, we explore a prototype framework for learning general solutions using a recently developed machine learning paradigm called the Neural Operator. A trained Neural Operator can compute a solution in negligible time for any velocity structure or source location. We develop a scheme to train Neural Operators on an ensemble of simulations performed with random velocity models and source locations. Because Neural Operators are grid-free, solutions can be evaluated on velocity models of higher resolution than those used in training, providing additional computational efficiency. We illustrate the method with the 2D acoustic wave equation and demonstrate its applicability to seismic tomography, using reverse-mode automatic differentiation to compute gradients of the wavefield with respect to the velocity structure. The developed procedure is nearly an order of magnitude faster than conventional numerical methods for full waveform inversion.
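The tomography step reduces to differentiating a waveform misfit through the trained operator. A minimal sketch is given below, assuming a trained, differentiable operator `wave_op` (a placeholder, not the paper's model) that maps a gridded velocity model to predicted waveforms; reverse-mode automatic differentiation then returns the misfit gradient on the velocity grid.

```python
# Sketch: gradient of an L2 waveform misfit with respect to the velocity model,
# obtained by backpropagating through a trained (differentiable) neural operator.
import torch

def fwi_gradient(wave_op, velocity, observed):
    v = velocity.detach().clone().requires_grad_(True)
    predicted = wave_op(v)                       # forward pass through the operator
    misfit = 0.5 * torch.sum((predicted - observed) ** 2)
    misfit.backward()                            # reverse-mode autodiff
    return v.grad                                # dJ/dv on the velocity grid
```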
We present the meshfree Mixed Collocation Method (MCM) for solving the monodomain model in numerical simulations of cardiac electrophysiology. We apply MCM to simulate cardiac electrical propagation in 2D tissue sheets and 3D tissue slabs, as well as in realistic large-scale biventricular anatomies. Capitalizing on the meshfree property of MCM, we introduce an immersed-grid approach for automated generation of nodes in the modeled cardiac domains. We demonstrate that MCM solutions are in agreement with finite element method (FEM) solutions, confirming their suitability for simulating cardiac electrophysiology in both healthy and diseased conditions, such as left bundle branch block (LBBB) and myocardial infarction. Although the computational time of MCM is longer than that of FEM, its efficiency in dealing with domains featuring irregularity, nonlinearity, and discontinuity makes MCM a promising alternative for investigating the heart's electrical activity.
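For reference, in one common formulation the monodomain model couples a reaction-diffusion equation for the transmembrane potential $V$ with an ionic model for the state variables $\mathbf{w}$:
\[
\chi\left(C_m \frac{\partial V}{\partial t} + I_{\mathrm{ion}}(V, \mathbf{w})\right)
= \nabla \cdot \left(\mathbf{D}\,\nabla V\right) + I_{\mathrm{stim}},
\qquad
\frac{\partial \mathbf{w}}{\partial t} = \mathbf{g}(V, \mathbf{w}),
\]
where $C_m$ is the membrane capacitance, $\chi$ the surface-to-volume ratio, $\mathbf{D}$ the conductivity tensor, and $I_{\mathrm{stim}}$ an applied stimulus current; the specific ionic model used in the simulations is not restated here.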
We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than that of solving the original problem of learning the parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and lower computational complexity than existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.