The aim of this study is to implement a method to remove ambient noise from biomedical sounds captured during auscultation. We propose an incremental approach based on multichannel non-negative matrix partial co-factorization (NMPCF) for ambient denoising, focusing on highly noisy environments with a signal-to-noise ratio (SNR) $\leq -5$ dB. The first contribution applies NMPCF under the assumption that ambient noise can be modelled as repetitive sound events found simultaneously in two single-channel inputs captured by different recording devices. The second contribution is an incremental algorithm, built on the multichannel NMPCF, that refines the estimated biomedical spectrogram through a series of incremental stages, each eliminating most of the ambient noise left over from the previous stage while preserving most of the biomedical spectral content. The ambient denoising performance of the proposed method, compared to some of the most relevant state-of-the-art methods, has been evaluated on a set of recordings composed of biomedical sounds mixed with the ambient noise that typically surrounds a medical consultation room, simulating highly noisy environments with SNRs from -20 dB to -5 dB. Experimental results show that: (i) the performance drop suffered by the proposed method is smaller than that of MSS and NLMS; (ii) unlike MSS and NLMS, the proposed method shows a stable trend in average SDR and SIR results regardless of the type of ambient noise and the SNR level evaluated; and (iii) a remarkable advantage is the high robustness of the estimated biomedical sounds when the two single-channel inputs suffer a delay between them.
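As a rough illustration of the co-factorization idea (a minimal sketch under simplifying assumptions, not the authors' algorithm: a Euclidean cost, plain multiplicative updates, and no incremental stages are assumed here), one can factorize two magnitude spectrograms with a shared noise basis and recover the biomedical part with a soft mask:

```python
import numpy as np

def nmpcf_denoise(X1, X2, r_b=8, r_n=8, n_iter=200, eps=1e-9):
    """Toy NMPCF: X1 (biomedical + noise) and X2 (noise reference) share
    the noise basis W_n, while W_b models the biomedical sound only."""
    F, T1 = X1.shape
    rng = np.random.default_rng(0)
    W_b = rng.random((F, r_b)); H_b = rng.random((r_b, T1))
    W_n = rng.random((F, r_n))
    H1 = rng.random((r_n, T1)); H2 = rng.random((r_n, X2.shape[1]))
    for _ in range(n_iter):
        V1 = W_b @ H_b + W_n @ H1 + eps          # model of channel 1
        # Multiplicative updates (Lee-Seung, Euclidean cost)
        H_b *= (W_b.T @ X1) / (W_b.T @ V1 + eps)
        W_b *= (X1 @ H_b.T) / (V1 @ H_b.T + eps)
        V1 = W_b @ H_b + W_n @ H1 + eps
        V2 = W_n @ H2 + eps                      # model of channel 2
        H1 *= (W_n.T @ X1) / (W_n.T @ V1 + eps)
        H2 *= (W_n.T @ X2) / (W_n.T @ V2 + eps)
        V1 = W_b @ H_b + W_n @ H1 + eps
        V2 = W_n @ H2 + eps
        # Shared noise basis W_n is co-factorized from both channels
        W_n *= (X1 @ H1.T + X2 @ H2.T) / (V1 @ H1.T + V2 @ H2.T + eps)
    S = W_b @ H_b                                # estimated biomedical part
    return X1 * S / (S + W_n @ H1 + eps)         # Wiener-style soft mask
```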
This paper addresses the inverse scattering problem for Maxwell's equations. We first show that a bianisotropic scatterer can be uniquely determined from multi-static far-field data through a factorization analysis of the far-field operator. Next, we investigate a modified version of the orthogonality sampling method, as proposed in \cite{Le2022}, for the numerical reconstruction of the scatterer. Finally, we apply this sampling method to invert unprocessed 3D experimental data obtained from the Fresnel Institute \cite{Geffrin2009}. Numerical examples with synthetic scattering data for bianisotropic targets are also presented to demonstrate the effectiveness of the method.
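For orientation, the classical scalar orthogonality sampling indicator (stated here only as an illustration; the modified, Maxwell-specific functional of \cite{Le2022} is more involved) evaluates, at each sampling point $z$ and wavenumber $k$,
\[
I(z) \;=\; \int_{\mathbb{S}^{2}} \Bigl| \int_{\mathbb{S}^{2}} u^{\infty}(\hat{x}, d)\, e^{\,\mathrm{i} k\, \hat{x}\cdot z}\, \mathrm{d}s(\hat{x}) \Bigr|^{2} \mathrm{d}s(d),
\]
where $u^{\infty}(\hat{x}, d)$ is the far-field pattern for incident direction $d$; the indicator is expected to peak inside the scatterer and is thresholded over a sampling grid.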
A new variant of the GMRES method is presented for solving linear systems with the same matrix and multiple right-hand sides that become available one after another. The new method retains the following properties of the classical GMRES algorithm. The bases of both the search space and its image are kept orthonormal, which increases the robustness of the method; moreover, there is no need to store the two bases separately, since they are effectively represented within a common basis. In addition, our method is theoretically equivalent to the GCR method extended to the case of multiple right-hand sides, but is more numerically robust and requires less memory. The main result of the paper is a mechanism for adding an arbitrary direction vector to the search space, which can easily be adapted to flexible GMRES or GMRES with deflated restarting.
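For reference, a minimal single right-hand side GMRES (the classical baseline the paper extends; this is not the proposed multi-RHS variant):

```python
import numpy as np

def gmres(A, b, x0=None, m=50):
    """Classical GMRES(m): build an orthonormal Krylov basis by Arnoldi
    (modified Gram-Schmidt), then minimize the residual by least squares."""
    n = b.size
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):                    # orthogonalize against Q
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                   # happy breakdown
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1); e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return x0 + Q[:, :m] @ y
```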
A discrete spatial lattice can be cast as a network structure over which spatially-correlated outcomes are observed. A second network structure may also capture similarities among measured features, when such information is available. Incorporating these network structures when analyzing such doubly-structured data can improve predictive power and lead to better identification of important features in the data-generating process. Motivated by applications in spatial disease mapping, we develop a new doubly regularized regression framework that incorporates these network structures for analyzing high-dimensional datasets. Our estimators can be easily implemented with standard convex optimization algorithms. In addition, we describe a procedure to obtain asymptotically valid confidence intervals and hypothesis tests for our model parameters. We show empirically that our framework provides improved predictive accuracy and inferential power compared to existing high-dimensional spatial methods. These advantages hold when the network information is fully accurate, and persist when the networks are partially misspecified or uninformative. The application of the proposed method to modeling COVID-19 mortality data suggests that it can improve prediction of deaths beyond standard spatial models, and that it selects relevant covariates more often.
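A minimal sketch of one such doubly regularized estimator (the objective below is our assumption for illustration, not necessarily the paper's exact formulation): a lasso penalty for sparsity, a feature-graph Laplacian penalty $\beta^{\top} L_{\mathrm{ft}} \beta$ for smoothness over the feature network, and a spatial-graph term entering through the error precision, solved by ISTA:

```python
import numpy as np

def doubly_regularized_fit(X, y, L_sp, L_ft, lam1=0.1, lam2=0.1, lam3=0.1,
                           n_iter=500):
    """Illustrative objective (an assumption, not the paper's estimator):
        min_b (y - Xb)'(I + lam3*L_sp)(y - Xb) + lam2*b'L_ft b + lam1*|b|_1
    solved with ISTA (gradient step on the smooth part + soft-thresholding)."""
    n, p = X.shape
    M = np.eye(n) + lam3 * L_sp            # spatial-network precision proxy
    A = X.T @ M @ X + lam2 * L_ft          # Hessian of the smooth part
    c = X.T @ M @ y
    step = 1.0 / np.linalg.eigvalsh(A).max()
    b = np.zeros(p)
    for _ in range(n_iter):
        b = b - step * (A @ b - c)                                 # gradient
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam1, 0.0)  # shrink
    return b
```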
Phase-field models of fatigue are capable of reproducing the main phenomenology of fatigue behavior. However, phase-field computations in the high-cycle fatigue regime are prohibitively expensive, due to the need to resolve spatially the small length scale inherent to phase-field models and temporally the loading history over several millions of cycles. As a remedy, we propose a fully adaptive acceleration scheme based on the cycle-jump technique, where the cycle-by-cycle resolution of an appropriately determined number of cycles is skipped while predicting the local system evolution during the jump. The novelty of our approach is a cycle-jump criterion that determines the appropriate cycle-jump size from a target increment of a global variable which monitors the advancement of fatigue. We propose a definition and interpretation of this variable for each of three general stages of the fatigue life. In comparison to existing acceleration techniques, our approach requires no parameters or bounds for the cycle-jump size, and it works independently of the material, specimen, and loading conditions. Since one of the monitoring variables is the fatigue crack length, we introduce an accurate, flexible and efficient method for its computation, which overcomes the issues of conventional crack tip tracking algorithms and enables the consideration of several cracks evolving at the same time. The performance of the proposed acceleration scheme is demonstrated with representative numerical examples, which show a speedup reaching four orders of magnitude in the high-cycle fatigue regime with consistently high accuracy.
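The adaptive cycle-jump logic can be sketched as follows (a minimal illustration; the function names, the linear extrapolation, and the scalar toy problem are our assumptions, and the actual scheme switches the monitoring variable across the three stages of the fatigue life):

```python
def run_with_cycle_jumps(solve_cycle, extrapolate, monitor, state,
                         target_dq, n_trail=2, n_total=1_000_000):
    """Adaptive cycle jumping: resolve a few 'trail' cycles to estimate the
    per-cycle rate of a global monitoring variable q (e.g. the fatigue crack
    length), then jump dN cycles so that q advances by ~target_dq."""
    N = 0
    while N < n_total:
        q0 = monitor(state)
        for _ in range(n_trail):                 # cycle-by-cycle resolution
            state = solve_cycle(state)
            N += 1
        rate = (monitor(state) - q0) / n_trail   # estimated dq/dN
        if rate <= 0:
            continue                             # no progress: keep resolving
        dN = max(1, min(int(target_dq / rate), n_total - N))
        state = extrapolate(state, rate, dN)     # jump: extrapolate the state
        N += dN
    return state

# Toy usage: a scalar 'damage' variable with Paris-type growth in dq/dN
final_q = run_with_cycle_jumps(
    solve_cycle=lambda q: q + 1e-6 * (1 + q),
    extrapolate=lambda q, rate, dN: q + rate * dN,
    monitor=lambda q: q,
    state=0.0, target_dq=0.01)
```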
We investigate a Tikhonov regularization scheme specifically tailored to shallow neural networks in the context of a classic inverse problem: approximating an unknown function and its derivatives on a unit cubic domain from noisy measurements. The proposed scheme incorporates a penalty term built from one of three distinct yet closely related network (semi)norms: the extended Barron norm, the variation norm, and the Radon-BV seminorm. The choice of penalty term depends on the specific architecture of the network being used. We establish the connections between these network norms, tracing in particular their dependence on the dimensionality index, in order to deepen our understanding of how the norms interplay with each other. We revisit the universality of function approximation with respect to the various norms, establish a rigorous error-bound analysis for the Tikhonov regularization scheme, and explicitly elucidate the dependence on the dimensionality index, clarifying how dimensionality affects approximation performance and how one should design a neural network for diverse approximation tasks.
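Schematically, and with the proviso that the squared data-fit and the path-norm surrogate below are our own simplifications, the scheme solves
\[
\min_{\theta}\; \frac{1}{n}\sum_{i=1}^{n}\bigl(f_\theta(x_i)-y_i^{\delta}\bigr)^{2} \;+\; \lambda\,\|f_\theta\|_{\ast}, \qquad f_\theta(x)=\sum_{j=1}^{m} a_j\,\sigma\bigl(w_j^{\top}x+b_j\bigr),
\]
where $y_i^{\delta}$ are the noisy measurements and $\|\cdot\|_{\ast}$ is the extended Barron norm, the variation norm, or the Radon-BV seminorm, depending on the architecture; for ReLU networks the path norm $\sum_{j}|a_j|\,(\|w_j\|_{1}+|b_j|)$ is a commonly used computable surrogate.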
Parameter inference is essential when interpreting observational data using mathematical models. Standard inference methods for differential equation models typically rely on repeated numerical solutions of the differential equation(s). Recent results have explored how numerical truncation error can have major, detrimental, and sometimes hidden impacts on likelihood-based inference by introducing false local maxima into the log-likelihood function. We present a straightforward approach to inference that eliminates the need to solve the underlying differential equations, thereby avoiding the impact of truncation error entirely. Open-access Jupyter notebooks, available on GitHub, allow others to implement this method for a broad class of widely used models to interpret biological data.
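The abstract does not spell the method out; one widely used solver-free strategy consistent with this description is gradient matching, sketched below purely as an assumed illustration (the smoothing, the loss, and the logistic example are not taken from the paper):

```python
import numpy as np

def gradient_matching_loss(theta, t, y, rhs, smooth_window=5):
    """Solver-free fit: estimate dy/dt directly from (smoothed) data and
    penalize its mismatch with the model right-hand side f(y; theta).
    No ODE is solved, so numerical truncation error never enters."""
    kernel = np.ones(smooth_window) / smooth_window
    y_s = np.convolve(y, kernel, mode="same")   # crude moving-average smooth
    dy_dt = np.gradient(y_s, t)                 # data-based derivative
    return np.sum((dy_dt - rhs(y_s, theta)) ** 2)

# Toy usage: logistic growth dy/dt = r*y*(1 - y/K), theta = (r, K)
logistic_rhs = lambda y, th: th[0] * y * (1 - y / th[1])
```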
We present a large collection of harmonic and quadratic harmonic sums that can be useful in applied questions, e.g., probabilistic ones. We give closed-form formulae that we were unable to locate in the literature.
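Two classical identities of the flavour considered here (Euler's linear harmonic sum and a well-known quadratic one, quoted only to fix notation; the paper's new formulae are not reproduced):
\[
\sum_{n=1}^{\infty} \frac{H_n}{n^{2}} \;=\; 2\,\zeta(3), \qquad \sum_{n=1}^{\infty} \frac{H_n^{2}}{n^{2}} \;=\; \frac{17\pi^{4}}{360}, \qquad H_n=\sum_{k=1}^{n}\frac{1}{k}.
\]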
Molecular communication is a bio-inspired communication paradigm in which molecules are used as the information carrier. This paper considers a molecular communication network where the transmitter uses concentration-modulated signals for communication. Our focus is to design receivers that can demodulate these signals, use enzymatic cycles as their building blocks, and operate approximately as a maximum a posteriori (MAP) demodulator; no receiver with all these features exists in the current molecular communication literature. We consider enzymatic cycles because they are a very common class of chemical reactions found in living cells, and a MAP receiver because of its good statistical performance. In this paper, we study the operating regime of an enzymatic cycle and how its parameters can be chosen so that the receiver approximately implements a MAP demodulator. We use simulation to study the performance of this receiver, and show that the bit-error ratio of the demodulator can be reduced when the enzymatic cycle operates in specific parameter regimes.
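As a point of reference, the idealized MAP decision that the enzymatic receiver is meant to approximate can be written down for on-off concentration modulation with Poisson-distributed molecule counts (this observation model is a standard idealization, assumed here for illustration):

```python
import numpy as np

def map_demodulate(counts, lam0, lam1, p1=0.5):
    """MAP bit decisions for on-off keying observed through Poisson counts:
    decide 1 iff P(bit=1 | count) > P(bit=0 | count). lam0/lam1 are the mean
    counts for bits 0/1 and p1 is the prior probability of bit 1."""
    counts = np.asarray(counts, dtype=float)
    # Poisson log-likelihood ratio plus log prior ratio
    llr = (counts * np.log(lam1 / lam0) - (lam1 - lam0)
           + np.log(p1 / (1.0 - p1)))
    return (llr > 0).astype(int)
```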
Unlabeled sensing is a linear inverse problem with permuted measurements. We propose an alternating minimization (AltMin) algorithm with a suitable initialization for two widely considered permutation models: partially shuffled/$k$-sparse permutations and $r$-local/block-diagonal permutations. Key to the performance of the AltMin algorithm is its initialization. For the exact unlabeled sensing problem, assuming either a Gaussian measurement matrix or a sub-Gaussian signal, we bound the initialization error in terms of the number of blocks $s$ and the number of shuffles $k$. Experimental results show that our algorithm is fast, applies to both permutation models, and is robust to the choice of measurement matrix. We also test our algorithm on several real datasets for the linked linear regression problem, showing superior performance compared to baseline methods.
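The alternating structure can be sketched as follows (a single global permutation block is shown, and the identity initialization is an assumption; the paper's $k$-sparse and $r$-local models and its dedicated initialization are omitted):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def altmin_unlabeled_sensing(B, y, n_iter=20):
    """AltMin for y = P B x with unknown permutation P: alternate a least
    squares solve for x with an optimal-assignment solve for P."""
    n, _ = B.shape
    perm = np.arange(n)                    # y[perm[j]] is matched to row j
    for _ in range(n_iter):
        x, *_ = np.linalg.lstsq(B, y[perm], rcond=None)   # x-step
        z = B @ x
        cost = (y[:, None] - z[None, :]) ** 2             # P-step: matching
        rows, cols = linear_sum_assignment(cost)
        perm = np.empty(n, dtype=int)
        perm[cols] = rows
    return x, perm
```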
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, yet there is no computationally precise theory of how humans interpret AI-generated explanations. This lack of theory means that XAI must be validated empirically, on a case-by-case basis, which prevents systematic theory-building in the field. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of an explainee's inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency-map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
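Shepard's law posits that generalization decays exponentially with distance in psychological similarity space; schematically, the probability that an explainee treats a candidate explanation $e'$ as interchangeable with the observed one $e$ is modelled as
\[
s(e, e') \;=\; \exp\bigl(-\lambda\, d(e, e')\bigr),
\]
with $d$ a metric on the similarity space and $\lambda$ a sensitivity parameter (the specific parameterization used in the study is not reproduced here).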