A fractal mobile-immobile (MIM) solute transport model in porous media is set forth, and the inverse problem of determining the fractional orders from additional measurements at one interior point is investigated via the Laplace transform. Existence and uniqueness of the solution to the forward problem are established by means of the inverse Laplace transform, the uniqueness of the inverse problem is proved in the real domain of the Laplace transform using the maximum principle, and numerical inversions with noisy data are presented to demonstrate the numerical stability of the inverse problem.
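A minimal sketch of the kind of numerical Laplace inversion such forward evaluations can rest on: the Gaver-Stehfest algorithm, here applied to the illustrative transform 1/(s+1) rather than to the paper's MIM model.

    # Gaver-Stehfest numerical inverse Laplace transform (illustrative only,
    # not the paper's MIM solution).
    import numpy as np
    from math import factorial

    def stehfest_weights(N=14):
        # N must be even; V_k are the standard Gaver-Stehfest weights.
        V = np.zeros(N)
        for k in range(1, N + 1):
            s = 0.0
            for j in range((k + 1) // 2, min(k, N // 2) + 1):
                s += (j ** (N // 2) * factorial(2 * j)
                      / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                         * factorial(k - j) * factorial(2 * j - k)))
            V[k - 1] = (-1) ** (k + N // 2) * s
        return V

    def invert_laplace(F, t, N=14):
        # f(t) ~ (ln 2 / t) * sum_k V_k F(k ln 2 / t)
        V = stehfest_weights(N)
        a = np.log(2.0) / t
        return a * sum(V[k] * F((k + 1) * a) for k in range(N))

    # Check against a known pair: F(s) = 1/(s + 1) has inverse exp(-t).
    print(invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0), np.exp(-1.0))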
To overcome topological constraints and improve the expressiveness of normalizing flow architectures, Wu, K\"ohler and No\'e introduced stochastic normalizing flows, which combine deterministic, learnable flow transformations with stochastic sampling methods. In this paper, we consider stochastic normalizing flows from a Markov chain point of view. In particular, we replace transition densities by general Markov kernels and establish proofs via Radon-Nikodym derivatives, which allows us to incorporate distributions without densities in a sound way. Further, we generalize the results to sampling from posterior distributions, as required in inverse problems. The performance of the proposed conditional stochastic normalizing flow is demonstrated by numerical examples.
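A minimal sketch of one such stochastic layer: a deterministic affine transform followed by unadjusted Langevin steps, which form a simple instance of a Markov kernel. The toy bimodal target and all settings are our illustrative assumptions, not the paper's.

    import numpy as np
    rng = np.random.default_rng(0)

    def grad_log_target(x):
        # Toy bimodal target: mixture of two unit Gaussians at +/- 2.
        p = np.exp(-0.5 * (x - 2) ** 2) + np.exp(-0.5 * (x + 2) ** 2)
        dp = (-(x - 2) * np.exp(-0.5 * (x - 2) ** 2)
              - (x + 2) * np.exp(-0.5 * (x + 2) ** 2))
        return dp / p

    def snf_layer(x, scale, shift, eps=0.05, n_langevin=10):
        # Deterministic, learnable part (here: a fixed affine map).
        x = scale * x + shift
        # Stochastic part: unadjusted Langevin steps (a Markov kernel).
        for _ in range(n_langevin):
            x = (x + eps * grad_log_target(x)
                 + np.sqrt(2 * eps) * rng.standard_normal(x.shape))
        return x

    z = rng.standard_normal(5000)          # latent samples
    samples = snf_layer(z, scale=1.5, shift=0.0)
    print(samples.mean(), samples.std())   # spread reflects the two modes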
It is common practice to use Laplace approximations to compute marginal likelihoods in Bayesian versions of generalised linear models (GLMs). Marginal likelihoods combined with model priors are then used in different search algorithms to compute the posterior marginal probabilities of models and individual covariates, which enables Bayesian model selection and model averaging. For large sample sizes, even the Laplace approximation becomes computationally challenging because the optimisation routine involved needs to evaluate the likelihood on the full data set at each of many iterations; as a consequence, the algorithm does not scale to large data sets. To address this problem, we suggest using a version of the popular batch stochastic gradient descent (BSGD) algorithm to estimate the marginal likelihood of a GLM by subsampling from the data. We further combine the algorithm with Markov chain Monte Carlo (MCMC) based methods for Bayesian model selection and provide some theoretical results on the convergence of the estimates. Finally, we report results from experiments illustrating the performance of the proposed algorithm.
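A minimal sketch of the idea on a Bayesian logistic regression: the posterior mode is found by minibatch SGD with the gradient rescaled to the full sample size, and the Laplace approximation to the marginal likelihood is formed at the end. The prior, step size, and batch size are illustrative assumptions, not the paper's settings.

    import numpy as np
    rng = np.random.default_rng(1)

    n, d = 20000, 3
    X = rng.standard_normal((n, d))
    beta_true = np.array([1.0, -2.0, 0.5])
    y = rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))

    tau2 = 10.0                              # N(0, tau2 I) prior on beta
    beta = np.zeros(d)
    for it in range(2000):                   # BSGD on the neg. log posterior
        idx = rng.integers(0, n, size=256)   # subsample of the data
        p = 1 / (1 + np.exp(-X[idx] @ beta))
        grad = (n / 256) * X[idx].T @ (p - y[idx]) + beta / tau2
        beta -= 0.5 / n * grad

    p = 1 / (1 + np.exp(-X @ beta))          # one full pass for the Hessian
    H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(d) / tau2
    loglik = np.sum(y * np.log(p) + (~y) * np.log(1 - p))
    logprior = -0.5 * beta @ beta / tau2 - 0.5 * d * np.log(2 * np.pi * tau2)
    log_marglik = (loglik + logprior + 0.5 * d * np.log(2 * np.pi)
                   - 0.5 * np.linalg.slogdet(H)[1])
    print(beta, log_marglik)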
We present semantic correctness proofs of automatic differentiation (AD). We consider a forward-mode AD method on a higher-order language with algebraic data types, and we characterise it as the unique structure-preserving macro given a choice of derivatives for basic operations. We describe a rich semantics for differentiable programming, based on diffeological spaces. We show that it interprets our language, and we phrase what it means for the AD method to be correct with respect to this semantics. We show that our characterisation of AD gives rise to an elegant semantic proof of its correctness based on a gluing construction on diffeological spaces. We explain how this is, in essence, a logical relations argument. Throughout, we show how the analysis extends to AD methods for computing higher-order derivatives using a Taylor approximation.
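For readers unfamiliar with forward-mode AD, a minimal dual-number sketch of the macro idea: every value carries its derivative, and each operation is translated to one that propagates both. The paper's setting is a typed higher-order language, not Python; this is only the operational intuition.

    class Dual:
        # A value paired with its derivative (tangent).
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.val * o.dot + self.dot * o.val)  # product rule
        __rmul__ = __mul__

    def f(x):
        return 3 * x * x + 2 * x + 1     # f'(x) = 6x + 2

    x = Dual(2.0, 1.0)                   # seed the tangent dx/dx = 1
    y = f(x)
    print(y.val, y.dot)                  # 17.0, 14.0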
The conventional approach to the data-driven inversion framework is based on Gaussian statistics, which presents serious difficulties, especially in the presence of outliers in the measurements. In this work, we present maximum likelihood estimators associated with generalized Gaussian distributions in the context of R\'enyi, Tsallis and Kaniadakis statistics. In this regard, we analytically examine the outlier resistance of each proposal through the so-called influence function, and we formulate inverse problems by constructing objective functions linked to the maximum likelihood estimators. To demonstrate the robustness of the generalized methodologies, we consider an important geophysical inverse problem with highly noisy data contaminated by spikes. The results reveal that the best data-inversion performance occurs when the entropic index of each generalized statistic is associated with objective functions proportional to the inverse of the error amplitude. We argue that in this limit the three approaches are resistant to outliers and are also equivalent, which suggests a lower computational cost for the inversion process owing to the reduced number of numerical simulations to be performed and the fast convergence of the optimization process.
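A minimal sketch contrasting the classical Gaussian (least-squares) objective with a Tsallis q-Gaussian objective on spiky data; the q-value, the likelihood form, and the toy line-fitting data are our illustrative assumptions, not the paper's geophysical setup.

    import numpy as np
    from scipy.optimize import minimize
    rng = np.random.default_rng(2)

    x = np.linspace(0, 1, 100)
    y = 2 * x + 1 + 0.05 * rng.standard_normal(100)
    y[::10] += 5.0                              # spike-like outliers

    def ls_obj(theta):
        # Classical Gaussian maximum likelihood = least squares.
        r = y - (theta[0] * x + theta[1])
        return np.sum(r ** 2)

    def tsallis_obj(theta, q=2.5):
        # Negative log of a q-exponential likelihood; reduces to
        # least squares as q -> 1, and grows only logarithmically
        # in the residual, which bounds the influence of outliers.
        r = y - (theta[0] * x + theta[1])
        return np.sum(np.log(1 + (q - 1) * r ** 2) / (q - 1))

    print(minimize(ls_obj, [0, 0]).x)           # dragged by the spikes
    print(minimize(tsallis_obj, [0, 0]).x)      # close to (2, 1)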
We propose a dimension reduction technique for Bayesian inverse problems with nonlinear forward operators, non-Gaussian priors, and non-Gaussian observation noise. The likelihood function is approximated by a ridge function, i.e., a map which depends non-trivially only on a few linear combinations of the parameters. We build this ridge approximation by minimizing an upper bound on the Kullback--Leibler divergence between the posterior distribution and its approximation. This bound, obtained via logarithmic Sobolev inequalities, allows one to certify the error of the posterior approximation. Computing the bound requires computing the second moment matrix of the gradient of the log-likelihood function. In practice, a sample-based approximation of the upper bound is then required. We provide an analysis that enables control of the posterior approximation error due to this sampling. Numerical and theoretical comparisons with existing methods illustrate the benefits of the proposed methodology.
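A minimal sketch of the diagnostic at the core of such methods: estimate the second moment matrix of the log-likelihood gradient from prior samples and keep its dominant eigenvectors as the informed directions. The linear-Gaussian toy problem is ours for illustration only.

    import numpy as np
    rng = np.random.default_rng(3)

    d, m = 20, 2000
    # Forward map with strongly anisotropic sensitivity (rows scaled).
    G = rng.standard_normal((3, d)) * np.array([[5.0], [1.0], [0.01]])
    y = G @ rng.standard_normal(d) + 0.1 * rng.standard_normal(3)

    def grad_loglik(theta):
        # Gradient of a Gaussian log-likelihood with noise std 0.1.
        return -G.T @ (G @ theta - y) / 0.1 ** 2

    thetas = rng.standard_normal((m, d))       # samples from a N(0, I) prior
    H = np.mean([np.outer(g, g) for g in map(grad_loglik, thetas)], axis=0)
    vals, vecs = np.linalg.eigh(H)
    print(vals[::-1][:5])       # fast decay: few informed directions
    U = vecs[:, ::-1][:, :3]    # ridge approximation depends on theta
                                # only through U.T @ theta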
We present a novel offline-online method to mitigate the computational burden of Bayesian inference, particularly in the regime where the posterior densities are computationally demanding to evaluate while real-time inference results are needed. In the offline phase, the proposed method learns the joint law of the parameter random variables and the observable random variables in the tensor-train (TT) format. In the online phase, the resulting order-preserving transport can then be conditioned on newly observed data to characterize the posterior random variables in real time. Compared with state-of-the-art normalizing flow techniques, our proposed method relies on function approximation, for which we can provide a thorough performance analysis. The function approximation perspective allows us to significantly improve the capability of transport maps in challenging problems with high-dimensional observations and high-dimensional parameters. Capitalizing on this, we present novel heuristics to either reorder or reparametrize the variables to enhance the approximation power of TT. We then integrate the TT-based transport maps and the parameter reordering/reparametrization into a layered composite map to further improve the performance of the resulting inference. We demonstrate the efficiency of the proposed method on various statistical learning tasks involving ordinary differential equations (ODEs) and partial differential equations (PDEs).
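A minimal sketch of the offline/online split, with a dense grid standing in for the TT approximation: offline, tabulate the joint density of parameter and observable; online, condition on an observed value and sample by inverting the conditional CDF, a one-dimensional order-preserving transport.

    import numpy as np
    rng = np.random.default_rng(4)

    theta_grid = np.linspace(-4, 4, 400)
    y_grid = np.linspace(-6, 6, 400)
    T, Y = np.meshgrid(theta_grid, y_grid, indexing="ij")

    # Offline: joint law of theta ~ N(0, 1) and y = theta^2 + noise.
    joint = np.exp(-0.5 * T ** 2) * np.exp(-0.5 * (Y - T ** 2) ** 2 / 0.25)

    # Online: condition on y* = 2, build the conditional CDF, invert it.
    y_obs = 2.0
    col = joint[:, np.argmin(np.abs(y_grid - y_obs))]
    cdf = np.cumsum(col) / col.sum()
    u = rng.random(10000)
    samples = np.interp(u, cdf, theta_grid)   # posterior samples of theta
    print(samples.mean(), samples.std())      # bimodal near +/- sqrt(2)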
We introduce a numerical technique for controlling the location and stability properties of Hopf bifurcations in dynamical systems. The algorithm consists of solving an optimization problem constrained by an extended system of nonlinear partial differential equations that characterizes Hopf bifurcation points. The flexibility and robustness of the method allow us to advance or delay a Hopf bifurcation to a target value of the bifurcation parameter, as well as to control the oscillation frequency with respect to a parameter of the system or the shape of the domain on which solutions are defined. Numerical applications are presented for systems arising from biology and fluid dynamics, such as the FitzHugh-Nagumo model, the Ginzburg-Landau equation, the Rayleigh-B\'enard convection problem, and the Navier-Stokes equations, where control of the location and oscillation frequency of periodic solutions is of high interest.
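A minimal sketch of the characterizing extended system on the (ODE) FitzHugh-Nagumo model: the equilibrium equations plus the real and imaginary parts of the Hopf eigenvalue problem J(a + ib) = i omega (a + ib), closed with an eigenvector normalization and solved by Newton iteration. Parameter values are the textbook ones, chosen here for illustration.

    import numpy as np
    from scipy.optimize import fsolve

    eps, alpha, gamma = 0.08, 0.7, 0.8

    def extended(z):
        v, w, a1, a2, b1, b2, om, I = z
        # Equilibrium of FitzHugh-Nagumo with applied current I.
        f = [v - v ** 3 / 3 - w + I, eps * (v + alpha - gamma * w)]
        J = np.array([[1 - v ** 2, -1.0], [eps, -eps * gamma]])
        a, b = np.array([a1, a2]), np.array([b1, b2])
        ra = J @ a + om * b                  # real part of J v = i om v
        rb = J @ b - om * a                  # imaginary part
        return [*f, *ra, *rb, a1 - 1.0, b1]  # normalization: a1 = 1, b1 = 0

    z0 = [1.0, 2.0, 1.0, 0.0, 0.0, -0.3, 0.3, 1.4]
    v, w, *_, om, I = fsolve(extended, z0)
    print(f"Hopf at I = {I:.4f}, frequency omega = {om:.4f}")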
Recent advancements in miniaturized fluorescence microscopy have made it possible to investigate neuronal responses to external stimuli in awake behaving animals through the analysis of intracellular calcium signals. An ongoing challenge is deconvolving the temporal signals to extract the spike trains from the noisy calcium time series. In this manuscript, we propose a nested Bayesian finite mixture specification that allows the estimation of spiking activity and, simultaneously, the reconstruction of the distributions of the calcium transient spike amplitudes under different experimental conditions. The proposed model leverages two nested layers of random discrete mixture priors to borrow information between experiments and discover similarities in the distributional patterns of neuronal responses to different stimuli. Furthermore, the spike intensity values are also clustered within and between experimental conditions to determine the existence of common (recurring) response amplitudes. Simulation studies and the analysis of a data set from the Allen Brain Observatory show the effectiveness of the method in clustering and detecting neuronal activities.
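A minimal sketch of the underlying deconvolution problem only: calcium modelled as an AR(1) decay driven by spikes, inverted by first differences plus a threshold. The paper's nested mixture priors on the spike amplitudes are not reproduced here; this is just the signal model they build on.

    import numpy as np
    rng = np.random.default_rng(5)

    T, gamma = 500, 0.95
    spikes = (rng.random(T) < 0.02) * rng.gamma(4.0, 0.5, T)  # sparse events
    calcium = np.zeros(T)
    for t in range(1, T):
        calcium[t] = gamma * calcium[t - 1] + spikes[t]       # AR(1) decay
    y = calcium + 0.05 * rng.standard_normal(T)               # fluorescence

    s_hat = y[1:] - gamma * y[:-1]              # invert the AR(1) kernel
    detected = np.flatnonzero(s_hat > 0.3) + 1  # threshold the residuals
    print(np.flatnonzero(spikes > 0))           # true spike times
    print(detected)                             # estimated spike times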
The ability of a radar to discriminate in both range and Doppler velocity is completely characterized by the ambiguity function (AF) of its transmit waveform. Mathematically, the AF is obtained by correlating the waveform with its Doppler-shifted and delayed replicas. We consider the inverse problem of designing a radar transmit waveform that matches a specified AF magnitude. This process can be viewed as signal reconstruction via a variant of phase retrieval methods. We provide a trust-region algorithm that minimizes a smoothed non-convex least-squares objective function to iteratively recover the underlying signal of interest for either time- or band-limited support. The method first approximates the signal using an iterative spectral algorithm and then refines the attained initialization through a sequence of gradient iterations. Our theoretical analysis shows that unique signal reconstruction is possible using no more than three times as many signal samples as signal frequencies or time samples. Numerical experiments demonstrate that our method recovers both time- and band-limited signals from even sparsely and randomly sampled AFs, with mean-square errors of $1\times 10^{-6}$ and $9\times 10^{-2}$ for full noiseless samples and sparse noisy samples, respectively.
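A minimal sketch of the forward object: the discrete (narrowband, cyclic) ambiguity function of a chirp, computed by correlating the waveform with its delayed replicas and taking an FFT across Doppler. This produces the |AF| data that a phase-retrieval step would then invert; the chirp and sizes are our illustrative choices.

    import numpy as np

    N = 128
    t = np.arange(N)
    x = np.exp(1j * np.pi * 0.5 * t ** 2 / N)         # linear-frequency chirp

    def ambiguity(x):
        # A(tau, nu) = sum_t x(t) conj(x(t - tau)) exp(-2i pi nu t / N)
        N = len(x)
        A = np.zeros((N, N), dtype=complex)
        for k, tau in enumerate(range(-N // 2, N // 2)):
            prod = x * np.conj(np.roll(x, tau))       # delay correlation
            A[k] = np.fft.fftshift(np.fft.fft(prod))  # Doppler via FFT
        return A

    A = np.abs(ambiguity(x))
    print(A.shape, A.max(), np.unravel_index(A.argmax(), A.shape))
    # The peak sits at zero delay / zero Doppler, as the AF definition requires.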
We consider the task of learning the parameters of a {\em single} component of a mixture model when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve it with lower computational and sample complexity than the original problem of learning the parameters of all components. Our main contributions are a simple but general model for the notion of side information and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each, we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity than existing moment-based mixture model algorithms (e.g., tensor methods). We also illustrate several natural ways one can obtain such side information for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvement in runtime and accuracy.
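A minimal sketch of the intuition: when side information correlates with membership in the component of interest, a side-information-weighted moment approximately isolates that component's parameters without fitting the whole mixture. The simple weighting below is our stand-in for the paper's matrix-based algorithm, not the algorithm itself.

    import numpy as np
    rng = np.random.default_rng(6)

    n, d = 5000, 10
    labels = rng.integers(0, 3, n)                    # 3-component mixture
    means = np.array([[4.0] + [0] * (d - 1),
                      [0] * (d - 1) + [4.0],
                      [0] * d])
    X = means[labels] + rng.standard_normal((n, d))

    # Side info: a noisy indicator of "is this sample from component 0?".
    side = (labels == 0).astype(float) * 0.8 + 0.1 * rng.random(n)

    w = side / side.sum()                             # normalize to weights
    mu0_hat = w @ X                                   # weighted first moment
    print(np.round(mu0_hat, 2))  # approximately means[0]; the bias shrinks
                                 # as the side information becomes cleaner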