Magnetic resonance imaging (MRI) plays an important role in modern medical diagnostics but suffers from prolonged scan times. Current deep learning methods for undersampled MRI reconstruction achieve good de-aliasing performance when tailored to a specific k-space undersampling scenario, but reconfiguring and retraining a network whenever the sampling setting changes is cumbersome. In this work, we propose a deep plug-and-play method for undersampled MRI reconstruction that adapts effectively to different sampling settings. Specifically, an image de-aliasing prior is first learned by a deep denoiser trained to remove general white Gaussian noise from synthetic data. The learned deep denoiser is then plugged into an iterative algorithm for image reconstruction. Results on in vivo data demonstrate that the proposed method delivers robust, high-quality accelerated image reconstruction under different undersampling patterns and sampling rates, both visually and quantitatively.
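As an illustrative sketch only (not the authors' implementation), the plug-and-play iteration can be mimicked with a toy stand-in denoiser: alternate a data-consistency step on the sampled k-space with the plugged-in denoiser. The soft-threshold denoiser, sampling mask, and step size below are illustrative assumptions.

```python
import numpy as np

def denoise(x, tau=0.02):
    # Stand-in for the learned deep denoiser: image-domain
    # soft-thresholding (a trained CNN would be plugged in here).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def pnp_reconstruct(y, mask, n_iter=50, step=1.0):
    # Plug-and-play iteration: a gradient step on the k-space
    # data-consistency term, followed by the plugged-in denoiser.
    x = np.real(np.fft.ifft2(y))                     # zero-filled start
    for _ in range(n_iter):
        resid = mask * np.fft.fft2(x) - y            # k-space residual
        x = x - step * np.real(np.fft.ifft2(resid))  # data consistency
        x = denoise(x)                               # learned prior
    return x

# Toy example: random mask sampling roughly a third of k-space.
rng = np.random.default_rng(0)
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
mask = rng.random((32, 32)) < 0.3
mask[:3, :3] = True                                  # keep some low frequencies
y = mask * np.fft.fft2(img)
rec = pnp_reconstruct(y, mask)
```

Because the denoiser is the only component encoding the image prior, changing the mask or sampling rate requires no retraining, which is the adaptivity the abstract emphasizes.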
Scientists continue to develop increasingly complex mechanistic models to reflect their knowledge more realistically. Statistical inference using these models can be challenging since the corresponding likelihood function is often intractable and model simulation may be computationally burdensome. Fortunately, in many of these situations, it is possible to adopt a surrogate model or approximate likelihood function. It may be convenient to conduct Bayesian inference directly with the surrogate, but this can result in bias and poor uncertainty quantification. In this paper we propose a new method for adjusting approximate posterior samples to reduce bias and produce more accurate uncertainty quantification. We do this by optimizing a transform of the approximate posterior that maximizes a scoring rule. Our approach requires only a (fixed) small number of complex model simulations and is numerically stable. We demonstrate good performance of the new method on several examples of increasing complexity.
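As a hedged sketch of the adjustment idea (not the authors' exact algorithm), a conjugate Gaussian toy shows how a transform of approximate posterior samples can be optimized against a scoring rule using only a fixed, small number of complex-model simulations. The biased surrogate, the affine transform family, the grid search, and the energy score are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "complex" model: theta ~ N(0, 1), y | theta ~ N(theta, 0.5^2).
# Biased surrogate posterior given y: N(y + 0.5, 0.3^2).
def surrogate_samples(y, n=100):
    return rng.normal(y + 0.5, 0.3, size=n)

def energy_score(samples, theta):
    # Energy score (lower is better) of a sample-based posterior
    # approximation, evaluated at the true generating parameter.
    s = np.asarray(samples)
    return np.mean(np.abs(s - theta)) - 0.5 * np.mean(
        np.abs(s[:, None] - s[None, :]))

# A fixed, small number of complex-model simulations (theta_j, y_j).
thetas = rng.standard_normal(30)
ys = thetas + 0.5 * rng.standard_normal(30)
base = [surrogate_samples(y) for y in ys]

# Optimize an affine transform s -> a * s + b of the approximate
# posterior by grid search over (a, b), minimizing the mean score.
grid_a = [0.6, 0.8, 1.0, 1.2]
grid_b = [-0.8, -0.4, 0.0, 0.4]
scores = {(a, b): np.mean([energy_score(a * s + b, th)
                           for s, th in zip(base, thetas)])
          for a in grid_a for b in grid_b}
(a_best, b_best), best_score = min(scores.items(), key=lambda kv: kv[1])
```

Since the identity transform (a, b) = (1, 0) is in the search grid, the adjusted posterior can never score worse than the raw surrogate on the calibration simulations, which illustrates why the adjustment is numerically stable.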
We introduce a convergent hierarchy of lower bounds on the minimum value of a real homogeneous polynomial over the sphere. The main practical advantage of our hierarchy over the sum-of-squares (SOS) hierarchy is that the lower bound at each level of our hierarchy is obtained by a minimum eigenvalue computation, as opposed to the full semidefinite program (SDP) required at each level of SOS. In practice, this allows us to go to much higher levels than are computationally feasible for the SOS hierarchy. For both hierarchies, the underlying space at the $k$-th level is the set of homogeneous polynomials of degree $2k$. We prove that our hierarchy converges as $O(1/k)$ in the level $k$, matching the best-known convergence of the SOS hierarchy when the number of variables $n$ is less than the half-degree $d$ (the best-known convergence of SOS when $n \geq d$ is $O(1/k^2)$). More generally, we introduce a convergent hierarchy of minimum eigenvalue computations for minimizing the inner product between a real tensor and an element of the spherical Segre-Veronese variety, with similar convergence guarantees. As examples, we obtain hierarchies for computing the (real) tensor spectral norm, and for minimizing biquadratic forms over the sphere. Hierarchies of eigencomputations for more general constrained polynomial optimization problems are discussed.
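At the base level, restricted to quadratic forms, the hierarchy's central computation reduces to a familiar fact: the minimum of $x^\top A x$ over the unit sphere equals $\lambda_{\min}(A)$. A minimal numerical sketch of this base case (illustrative only; the higher levels lift a degree-$2d$ form to operators on degree-$2k$ polynomials):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random symmetric matrix defining a quadratic form q(x) = x^T A x.
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2.0

# Lower bound on min_{|x|=1} q(x) via a minimum-eigenvalue computation
# (exact for quadratics, i.e. the bound is attained at the minimizer).
lower_bound = np.linalg.eigvalsh(A)[0]

# Sanity check against random points on the sphere.
X = rng.standard_normal((1000, 6))
X /= np.linalg.norm(X, axis=1, keepdims=True)
sampled_min = np.min(np.einsum('ni,ij,nj->n', X, A, X))
```

The practical point of the hierarchy is that this eigenvalue computation scales to matrix sizes far beyond what a full SDP solve at the same level would allow.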
We present selected current features of measurement problems drawn from our respective fields of research. The technical similarity of these problems across apparently disconnected fields motivates this joint communication. Problems of coherence and consistency, correlation, randomness, and uncertainty are discussed in various fields, including physics, decision theory, and game theory, where the underlying mathematical structures are very similar.
Generative diffusion models have achieved spectacular performance in many areas of generative modeling. While the fundamental ideas behind these models come from non-equilibrium physics, in this paper we show that many aspects of these models can be understood using the tools of equilibrium statistical mechanics. Using this reformulation, we show that generative diffusion models undergo second-order phase transitions corresponding to symmetry-breaking phenomena. We argue that this leads to a form of instability that lies at the heart of their generative capabilities and that can be described by a set of mean-field critical exponents. We conclude by analyzing recent work connecting diffusion models and associative memory networks in light of this thermodynamic formulation.
In this work, we analyse space-time reduced basis methods for the efficient numerical simulation of hemodynamics in arteries. The classical formulation of the reduced basis (RB) method features dimensionality reduction in space, while finite difference schemes are employed for the time integration of the resulting ordinary differential equation (ODE). Space-time reduced basis (ST-RB) methods extend the dimensionality reduction paradigm to the temporal dimension, projecting the full-order problem onto a low-dimensional spatio-temporal subspace. Our goal is to investigate the application of ST-RB methods to the unsteady incompressible Stokes equations, with a particular focus on stability. High-fidelity simulations are performed using the Finite Element (FE) method with BDF2 as the time-marching scheme. We consider two different ST-RB methods. In the first one, called ST-GRB, space-time model order reduction is achieved by means of a Galerkin projection; a spatio-temporal velocity basis enrichment procedure is introduced to guarantee stability. The second method, called ST-PGRB, is characterized by a Petrov--Galerkin projection, stemming from a suitable minimization of the FOM residual, which automatically ensures stability. The classical RB method, denoted as SRB-TFO, serves as a baseline for the theoretical development. Numerical tests have been conducted on an idealized symmetric bifurcation geometry and on the patient-specific geometry of a femoropopliteal bypass. The results show that both ST-RB methods provide accurate approximations of the high-fidelity solutions while considerably reducing the computational cost. In particular, the ST-PGRB method exhibits the best performance, as it achieves better computational efficiency while retaining accuracy in accordance with theoretical expectations.
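As a hedged sketch of the space-time reduction idea (not the paper's ST-GRB or ST-PGRB algorithms), a truncated SVD of a space-time snapshot matrix yields coupled spatial and temporal bases whose product is the low-rank approximation; the separable heat-equation-like snapshot data below is an illustrative assumption.

```python
import numpy as np

# Synthetic space-time snapshots: a smooth decaying "solution"
# u(x, t) sampled on a grid (rows = space, columns = time).
x = np.linspace(0.0, 1.0, 100)
t = np.linspace(0.0, 1.0, 50)
U = (np.sin(np.pi * x)[:, None] * np.exp(-2.0 * t)[None, :]
     + 0.3 * np.sin(3 * np.pi * x)[:, None] * np.exp(-9.0 * t)[None, :])

def st_rb_approx(U, rank):
    # Space-time reduction via truncated SVD: left singular vectors
    # give a spatial basis, right singular vectors a temporal basis;
    # their product is the rank-k spatio-temporal approximation.
    W, s, Vt = np.linalg.svd(U, full_matrices=False)
    return W[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

err2 = np.linalg.norm(U - st_rb_approx(U, 2))
err4 = np.linalg.norm(U - st_rb_approx(U, 4))
```

Because the toy snapshot matrix is built from two separable modes, rank 2 already captures it to machine precision, illustrating why dimensionality reduction in both space and time can be so effective for smooth parabolic dynamics.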
Numerous applications in the field of molecular communications (MC), such as healthcare systems, are often event-driven. The conventional Shannon capacity may not be the appropriate metric for assessing performance in such cases. We propose the identification (ID) capacity as an alternative metric. In particular, we consider randomized identification (RI) over the discrete-time Poisson channel (DTPC), which is typically used as a model for MC systems that employ molecule-counting receivers. In the ID paradigm, the receiver does not aim to decode the transmitted message; instead, it seeks to determine whether a message of particular significance to it has been sent. In contrast to Shannon transmission codes, the size of ID codes for a discrete memoryless channel (DMC) grows doubly exponentially in the blocklength if randomized encoding is used. In this paper, we derive the capacity formula for RI over the DTPC subject to peak and average power constraints. Furthermore, we analyze the case of the state-dependent DTPC.
We study the multiplicative hazards model with intermittently observed longitudinal covariates and time-varying coefficients. For such models, existing {\it ad hoc} approaches, such as last value carried forward, are biased. We propose a kernel weighting approach to obtain an unbiased estimate of the non-parametric coefficient function and establish asymptotic normality at any fixed time point. Furthermore, we construct a simultaneous confidence band to examine the overall magnitude of the variation. Simulation studies support our theoretical predictions and show favorable performance of the proposed method. A data set on cerebral infarction is used to illustrate our methodology.
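The contrast between last value carried forward and kernel weighting can be sketched on a toy intermittently observed covariate (illustrative only; the paper's estimator embeds such kernel weights in the hazard model's estimating equations rather than smoothing the covariate directly):

```python
import numpy as np

# Intermittent observations of a covariate that grows linearly in time.
obs_times = np.linspace(0.0, 1.0, 11)   # observation grid
obs_vals = obs_times.copy()             # true covariate process: Z(t) = t

def lvcf(t):
    # Last value carried forward: systematically lags a trending covariate.
    return obs_vals[obs_times <= t][-1]

def kernel_estimate(t, bandwidth=0.1):
    # Gaussian-kernel-weighted estimate centered at time t, using
    # observations on both sides of t.
    w = np.exp(-0.5 * ((obs_times - t) / bandwidth) ** 2)
    return np.sum(w * obs_vals) / np.sum(w)

t_eval = 0.55
z_lvcf = lvcf(t_eval)               # stale: the value observed at t = 0.5
z_kern = kernel_estimate(t_eval)    # close to the true value 0.55
```

For a trending covariate, LVCF is always one observation behind, which is the source of the bias the abstract refers to, while the symmetric kernel weighting centers the estimate at the evaluation time.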
We show that the classical fourth-order accurate compact finite difference scheme with high-order strong stability preserving time discretizations for convection-diffusion problems satisfies a weak monotonicity property, which implies that a simple limiter can enforce the bound-preserving property without losing conservation or high-order accuracy. Higher-order accurate compact finite difference schemes satisfying this weak monotonicity are also discussed.
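The simple limiter referenced above can be sketched as a linear scaling of point values toward the cell average: this enforces the bounds while preserving the average exactly, hence conservation (a sketch in the spirit of the Zhang--Shu scaling limiter; the numbers are illustrative):

```python
import numpy as np

def scaling_limiter(p, avg, m, M, eps=1e-13):
    # Scale point values p (whose mean is the cell average avg) toward
    # avg so the limited values lie in [m, M]. The cell average is
    # preserved because p -> avg + theta * (p - avg) is mean-preserving.
    theta = min(1.0,
                (M - avg) / max(p.max() - avg, eps),
                (avg - m) / max(avg - p.min(), eps))
    return avg + theta * (p - avg)

# Point values in a cell that overshoot the physical bounds [0, 1];
# weak monotonicity guarantees the cell average itself stays in bounds.
p = np.array([-0.10, 0.20, 0.80, 1.15])
avg = p.mean()
p_lim = scaling_limiter(p, avg, 0.0, 1.0)
```

The role of the weak monotonicity property in the paper is precisely to guarantee that the cell averages remain in the bounds, which is the precondition for this scaling to be well defined.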
We consider state and parameter estimation for compartmental models having both time-varying and time-invariant parameters. Though the described Bayesian computational framework is general, we look at a specific application to the susceptible-infectious-removed (SIR) model, which describes a basic mechanism for the spread of infectious diseases through a system of coupled nonlinear differential equations. The SIR model consists of three states, namely, the three compartments, and two parameters which control the coupling among the states. The deterministic SIR model with time-invariant parameters has been shown to be overly simplistic for modelling the complex long-term dynamics of disease transmission. Recognizing that certain model parameters will naturally vary in time due to seasonal trends, non-pharmaceutical interventions, and other random effects, the estimation procedure must systematically permit these time-varying effects to be captured, without unduly introducing artificial dynamics into the system. To this end, we leverage the robustness of the Markov Chain Monte Carlo (MCMC) algorithm for the estimation of time-invariant parameters alongside nonlinear filters for the joint estimation of the system state and time-varying parameters. We demonstrate the performance of the framework by first considering a series of examples using synthetic data, followed by an exposition on public health data collected in the province of Ontario.
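The deterministic SIR system with time-invariant parameters can be sketched as a minimal forward-Euler integration (the parameter values are illustrative, and the MCMC and filtering machinery is beyond the scope of this sketch):

```python
import numpy as np

def sir_simulate(beta, gamma, S0, I0, R0, dt=0.1, n_steps=1000):
    # Forward-Euler integration of the coupled SIR equations:
    #   dS/dt = -beta * S * I / N
    #   dI/dt =  beta * S * I / N - gamma * I
    #   dR/dt =  gamma * I
    # beta and gamma are the two parameters coupling the three states.
    N = S0 + I0 + R0
    S, I, R = float(S0), float(I0), float(R0)
    traj = [(S, I, R)]
    for _ in range(n_steps):
        new_inf = beta * S * I / N
        S, I, R = (S - dt * new_inf,
                   I + dt * (new_inf - gamma * I),
                   R + dt * gamma * I)
        traj.append((S, I, R))
    return np.array(traj)

# Illustrative rates: transmission 0.3/day, recovery 0.1/day (R0 = 3).
traj = sir_simulate(beta=0.3, gamma=0.1, S0=990.0, I0=10.0, R0=0.0)
```

Because the right-hand sides sum to zero, the total population is conserved along the trajectory; in the paper's setting, beta would additionally be allowed to drift in time and be tracked by the nonlinear filter.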
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
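Shepard's law states that generalization decays exponentially with distance in an appropriate psychological similarity space. A minimal sketch of the comparison step such a theory entails (illustrative only: the low-dimensional feature vectors, the decay rate, and the normalization are assumptions, not the study's fitted model):

```python
import numpy as np

def shepard_similarity(x, y, c=1.0):
    # Shepard's universal law of generalization: similarity decays
    # exponentially with distance in psychological space.
    return np.exp(-c * np.linalg.norm(np.asarray(x) - np.asarray(y)))

def predict_explainee_inference(ai_saliency, candidate_saliencies, c=1.0):
    # Predicted probability the explainee assigns to each candidate
    # label: similarity of the AI's saliency map to the explanation
    # the explainee would themselves give for that label, normalized.
    sims = np.array([shepard_similarity(ai_saliency, s, c)
                     for s in candidate_saliencies])
    return sims / sims.sum()

# Toy saliency "maps" encoded as feature vectors in similarity space.
ai_map = np.array([0.9, 0.1, 0.0])
human_maps = [np.array([0.8, 0.2, 0.0]),   # label A: close to the AI's map
              np.array([0.0, 0.1, 0.9])]   # label B: far from the AI's map
probs = predict_explainee_inference(ai_map, human_maps)
```

The prediction is that the explainee infers the label whose self-generated explanation most resembles the AI's saliency map, which is the quantity the pre-registered study compares against participants' responses.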