Positron emission tomography (PET) has been widely used for the diagnosis of serious diseases, including cancer and Alzheimer's disease, based on the uptake of radiolabelled molecules that target certain pathological signatures. Recently, a novel imaging mode known as positronium lifetime imaging (PLI) has been shown to be feasible with time-of-flight (TOF) PET. PLI is of practical interest because it can provide complementary disease information reflecting conditions of the tissue microenvironment, via mechanisms that are independent of tracer uptake. However, for present practical systems, which have finite TOF resolution, the PLI reconstruction problem has yet to be fully formulated, hindering the development of accurate reconstruction algorithms. This paper addresses this challenge by developing a statistical model for the PLI data and deriving from it a maximum-likelihood algorithm for reconstructing lifetime images alongside the uptake images. Using realistic computer-simulation data, we show that the proposed algorithm can produce quantitatively accurate lifetime images for a 570~ps TOF PET system.
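As a concrete illustration of the estimation problem behind PLI (a minimal sketch, not the paper's reconstruction algorithm): with finite TOF resolution, each measured annihilation-time delay follows an exponential decay blurred by a Gaussian kernel, so a lifetime can be fit by maximum likelihood using the exponentially modified Gaussian density. The sample size, seed, and the use of the 570 ps figure below are illustrative assumptions.

```python
# Illustrative sketch: ML fit of a single positronium lifetime from
# time-delay samples blurred by a Gaussian TOF kernel.
import numpy as np
from scipy.special import erfc
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
tau_true = 2.0           # ns, assumed ortho-positronium lifetime
sigma = 0.570 / 2.355    # ns, Gaussian sigma for a 570 ps FWHM TOF resolution

# Simulated annihilation time delays: exponential decay + Gaussian TOF blur.
t = rng.exponential(tau_true, 20000) + rng.normal(0.0, sigma, 20000)

def nll(tau):
    """Negative log-likelihood of an exponentially modified Gaussian."""
    lam = 1.0 / tau
    logpdf = (np.log(lam / 2) + lam * (lam * sigma**2 / 2 - t)
              + np.log(erfc((lam * sigma**2 - t) / (sigma * np.sqrt(2)))))
    return -logpdf.sum()

tau_hat = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x
print(f"true lifetime {tau_true:.3f} ns, ML estimate {tau_hat:.3f} ns")
```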
This paper focuses on the design, analysis, and implementation of a new preconditioning concept for linear second-order partial differential equations, including convection-diffusion-reaction problems discretized by Galerkin or discontinuous Galerkin methods. We expand on the approach introduced by Gergelits et al. and adapt it to more general settings, assuming that both the original and preconditioning matrices are assembled from sparse matrices of very low rank, representing local contributions to the global matrices. When applied to a symmetric problem, the method provides bounds on all individual eigenvalues of the preconditioned matrix. We show that this preconditioning strategy works not only for the Galerkin discretization but also for the discontinuous Galerkin discretization, where local contributions are associated with individual edges of the triangulation. For non-symmetric problems, the method yields guaranteed bounds on the real and imaginary parts of the resulting eigenvalues. We include numerical experiments illustrating the method and its implementation, showcasing its effectiveness for the two variants of discretized (convection-)diffusion-reaction problems.
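To indicate the flavor of such spectral bounds, here is a schematic rendering of the earlier result of Gergelits et al. for the symmetric diffusion case (not this paper's more general statement): when $A$ discretizes $-\nabla\cdot(k(x)\nabla u)$ and the preconditioner $L$ discretizes the Laplacian, the eigenvalues of the preconditioned matrix can be paired with the mesh nodes such that
\[
  \lambda_i\bigl(L^{-1}A\bigr) \in \Bigl[\,\min_{x\in\Omega_i} k(x),\ \max_{x\in\Omega_i} k(x)\Bigr],
\]
where $\Omega_i$ is the support of the $i$-th nodal basis function. The very-low-rank local contributions assumed above play the role that the local values of $k$ play in this bound.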
The use of renewable energy technologies, particularly hydrogen, has attracted rapidly growing interest worldwide. Ethanol steam reforming is one of the primary methods capable of producing hydrogen efficiently and reliably. This paper provides an in-depth theoretical and numerical study of the reforming system, and explores the possibility of converting the system into its conservation form. We then review several numerical approaches for solving general first-order quasi-linear hyperbolic equations and apply them to the particular model for ethanol steam reforming (ESR). We conclude by presenting results that would enable these ODE/PDE solvers to be used within non-linear model predictive control (NMPC) algorithms, and discuss the limitations of our approach and directions for future work.
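For orientation, a generic illustration of the problem class (not the specific ESR model): converting a first-order quasi-linear hyperbolic system into conservation form means finding a flux whose Jacobian reproduces the coefficient matrix,
\[
  \partial_t u + A(u)\,\partial_x u = g(u)
  \quad\longrightarrow\quad
  \partial_t u + \partial_x f(u) = g(u),
  \qquad A(u) = \frac{\partial f}{\partial u}(u),
\]
which is possible only when such an $f$ exists; this is precisely the question the paper raises for the ESR system.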
Removal or cancellation of noise has widespread applications in imaging and acoustics. In everyday applications, denoising may even include generative aspects that are unfaithful to the ground truth. For scientific applications, however, denoising must reproduce the ground truth accurately. Here, we show how data can be denoised via a deep convolutional neural network such that weak signals appear with quantitative accuracy. In particular, we study X-ray diffraction from crystalline materials. We demonstrate that weak signals stemming from charge ordering, insignificant in the noisy data, become visible and accurate in the denoised data. This success is enabled by supervised training of a deep neural network with pairs of measured low- and high-noise data; in this way, the neural network learns the statistical properties of the noise. We demonstrate that training with artificial noise does not yield such quantitatively accurate results. Our approach thus illustrates a practical strategy for noise filtering that can be applied to challenging acquisition problems.
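The training setup can be sketched in a few lines (an illustrative toy, assuming a generic small CNN and random stand-in tensors rather than the paper's actual architecture and measured diffraction data):

```python
# Toy sketch: supervised denoising trained on measured noisy/clean pairs.
import torch
import torch.nn as nn

model = nn.Sequential(              # stand-in for the paper's deep CNN
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# noisy/clean stand for matched measured low-/high-noise patterns,
# shape (batch, 1, H, W); random data is used here only for illustration.
noisy = torch.rand(8, 1, 64, 64)
clean = torch.rand(8, 1, 64, 64)

for step in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)  # supervised pairing
    loss.backward()
    opt.step()
```

The key point is that both inputs and targets are measured, so the loss exposes the network to the true noise statistics rather than a synthetic noise model.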
We address the challenge of building efficient yet accurate recognition systems with limited labels. While recognition models improve with model size and amount of data, many specialized applications of computer vision have severe resource constraints during both training and inference. Transfer learning is an effective solution for training with few labels, though often at the expense of computationally costly fine-tuning of large base models. We propose to mitigate this trade-off between compute and accuracy via semi-supervised cross-domain distillation from a set of diverse source models. First, we show how to use task-similarity metrics to select a single suitable source model to distill from, and that a good selection process is imperative for good downstream performance of the target model. We dub this approach DistillNearest. Though effective, DistillNearest assumes that a single source model matches the target task, which is not always the case. To alleviate this, we propose a weighted multi-source distillation method that distills multiple source models, trained on different domains and weighted by their relevance to the target task, into a single efficient model (named DistillWeighted). Our methods need no access to source data; they require only the features and pseudo-labels of the source models. When the goal is accurate recognition under computational constraints, both DistillNearest and DistillWeighted outperform transfer learning from strong ImageNet initializations as well as state-of-the-art semi-supervised techniques such as FixMatch. Averaged over 8 diverse target tasks, our multi-source method outperforms these baselines by 5.6 and 4.5 percentage points, respectively.
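A minimal sketch of the weighted multi-source idea (the helper name, temperature, and relevance weighting below are assumptions for illustration, not the paper's exact formulation):

```python
# Sketch: distill a relevance-weighted mix of source predictions into a student.
import torch
import torch.nn.functional as F

def weighted_distill_loss(student_logits, source_logits_list, relevance, T=2.0):
    """KL distillation against a relevance-weighted mix of source predictions.

    student_logits: (B, C); source_logits_list: list of (B, C) tensors from
    frozen source models; relevance: non-negative task-similarity scores
    (one per source), normalized to sum to 1 below.
    """
    w = torch.tensor(relevance)
    w = w / w.sum()
    # Relevance-weighted average of the sources' softened predictions.
    target = sum(wi * F.softmax(s / T, dim=-1)
                 for wi, s in zip(w, source_logits_list))
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    target, reduction="batchmean") * T * T

# Example usage with random stand-ins for one batch:
student = torch.randn(4, 10)
sources = [torch.randn(4, 10) for _ in range(3)]
loss = weighted_distill_loss(student, sources, relevance=[0.7, 0.2, 0.1])
```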
Time series clustering is a central machine learning task with applications in many fields. While the majority of methods focus on real-valued time series, very few works consider series with a discrete response. In this paper, the problem of clustering ordinal time series is addressed. To this end, two novel distances between ordinal time series are introduced and used to construct fuzzy clustering procedures. Both metrics are functions of the estimated cumulative probabilities, thus automatically taking advantage of the ordering inherent to the series' range. The resulting clustering algorithms are computationally efficient and able to group series generated from similar stochastic processes, reaching accurate results even when the series come from a wide variety of models. Since the dynamics of the series may vary over time, we adopt a fuzzy approach, enabling the procedures to assign each series to several clusters with different membership degrees. An extensive simulation study shows that the proposed methods outperform several alternative procedures. Weighted versions of the clustering algorithms are also presented, and their advantages over the original methods are discussed. Two applications involving economic time series illustrate the usefulness of the proposed approaches.
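To make the idea concrete, here is one plausible instance of such a metric based on marginal cumulative probabilities (an illustrative assumption; the paper's actual distances may use richer features of the series):

```python
# Sketch: a distance between ordinal series via estimated cumulative probabilities.
import numpy as np

def ordinal_cdf_distance(x, y, n_categories):
    """Squared-Euclidean distance between estimated cumulative probabilities.

    x, y: integer-coded ordinal series with categories 0..n_categories-1.
    Using cumulative (rather than raw) probabilities respects the ordering
    of the range: confusing adjacent categories costs less than distant ones.
    """
    def cum_probs(s):
        counts = np.bincount(s, minlength=n_categories) / len(s)
        return np.cumsum(counts)[:-1]          # last entry is always 1
    return float(np.sum((cum_probs(x) - cum_probs(y)) ** 2))

# Example: two short ordinal series on a 5-point scale.
rng = np.random.default_rng(1)
a, b = rng.integers(0, 5, 200), rng.integers(0, 5, 200)
print(ordinal_cdf_distance(a, b, 5))
```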
Noiseless compressive sensing is a protocol that enables undersampling and later recovery of a signal without loss of information. This compression is possible because the signal is sufficiently sparse in a given basis. Currently, the algorithm offering the best tradeoff between compression rate, robustness, and speed for compressive sensing is LASSO ($\ell_1$-norm penalty). However, many studies have pointed out that $\ell_p$-norm penalties, with $p$ smaller than one, could give better performance at the cost of convexity. In this work, we focus on the extreme case of $\ell_0$-based reconstruction, a task complicated by the discontinuity of the loss. In the first part of the paper, we describe, via statistical-physics methods and in particular the replica method, how the solutions to this optimization problem are arranged in a clustered structure. We observe two distinct regimes: one at low compression rate, where the signal can be recovered exactly, and one at high compression rate, where it cannot be recovered accurately. In the second part, we build on these results to present two message-passing algorithms for the $\ell_0$-norm optimization problem. The proposed algorithms recover the signal at compression rates higher than those achieved by LASSO while remaining computationally efficient.
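Schematically, the noiseless $\ell_0$ reconstruction problem studied here reads
\[
  \hat{x} \in \operatorname*{arg\,min}_{x \in \mathbb{R}^N} \|x\|_0
  \quad \text{subject to} \quad y = \Phi x,
\]
where $\Phi \in \mathbb{R}^{M\times N}$ with $M < N$ is the measurement matrix and $\|x\|_0$ counts the nonzero entries; the discontinuity of this counting penalty is what makes both the analysis and the algorithms nontrivial.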
The Koopman operator provides a linear perspective on non-linear dynamics by focusing on the evolution of observables in an invariant subspace. Observables of interest are typically linearly reconstructed from the Koopman eigenfunctions. Despite the broad use of Koopman operators over the past few years, there exist some misconceptions about the applicability of Koopman operators to dynamical systems with more than one fixed point. In this work, an explanation is provided for the mechanism of lifting for the Koopman operator of a dynamical system with multiple attractors. Considering the example of the Duffing oscillator, we show that by exploiting the inherent symmetry between the basins of attraction, a linear reconstruction with three degrees of freedom in the Koopman observable space is sufficient to globally linearize the system.
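For reference, the standard unforced, damped form of the system discussed (with our normalization, an assumption for illustration) is
\[
  \ddot{x} + \delta\,\dot{x} - x + x^{3} = 0, \qquad \delta > 0,
\]
which has two stable equilibria at $x = \pm 1$ separated by a saddle at the origin; the symmetry $x \mapsto -x$ exchanges the two basins of attraction, and it is this symmetry that the three-observable linear reconstruction exploits.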
In a variety of tomographic applications, data cannot be fully acquired, leading to a severely underdetermined image reconstruction problem. In such cases, conventional methods generate reconstructions with significant artifacts. To remove these artifacts, regularization methods must be applied that incorporate additional information beneficially. An important example of such methods is total variation (TV) reconstruction. It is well known that this technique can efficiently compensate for the missing data and reduce reconstruction artifacts. At the same time, however, tomographic data is also contaminated by noise, which poses an additional challenge. A single penalty term (regularizer) within a variational regularization framework must therefore account for both the missing data and the noise, yet a single regularizer may not be ideal for both tasks. For example, the TV regularizer is a poor choice for noise reduction across different scales, where $\ell^1$-curvelet regularization methods work well. To address this issue, we introduce in this paper a novel variational regularization framework that combines the advantages of two different regularizers. The basic idea is to perform the reconstruction in two stages: the first stage mainly aims at accurate reconstruction in the presence of noise, and the second stage aims at artifact reduction. The two stages are connected by a data-consistency condition, which keeps them close to each other in the data domain. The proposed method is implemented and tested for limited-view CT using a combined curvelet–TV approach. To this end, we define and implement a curvelet transform adapted to the limited-view problem and demonstrate the advantages of our approach in a series of numerical experiments in this context.
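Schematically, the two-stage scheme can be written as follows (our paraphrase of the described framework; the exact functionals and constraint form may differ in the paper):
\[
  u_1 \in \operatorname*{arg\,min}_{u}\ \tfrac{1}{2}\,\|\mathcal{R} u - y\|_2^2 + \alpha\,\|\Psi u\|_1,
  \qquad
  u_2 \in \operatorname*{arg\,min}_{u}\ \mathrm{TV}(u)
  \ \ \text{s.t.}\ \ \|\mathcal{R} u - \mathcal{R} u_1\|_2 \le \varepsilon,
\]
where $\mathcal{R}$ is the limited-view forward operator, $\Psi$ the adapted curvelet transform, and the constraint is the data-consistency condition tying the artifact-reduction stage to the noise-robust first stage.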
Uncertainty quantification in image restoration is a prominent challenge, mainly due to the high dimensionality of the encountered problems. Recently, a Bayesian uncertainty quantification by optimization (BUQO) approach has been proposed to formulate hypothesis testing as a minimization problem. The objective is to determine whether a structure appearing in a maximum a posteriori estimate is true or is a reconstruction artifact due to the ill-posedness or ill-conditioning of the problem. In this context, the mathematical definition of having a ``fake structure'' is crucial and depends strongly on the type of structure of interest. This definition can be interpreted as an inpainting of a neighborhood of the structure, but, owing to the complexity of the problem, only simple inpainting techniques have been proposed in the literature so far. In this work, we propose a data-driven method using a simple convolutional neural network to perform the inpainting task, leading to a novel plug-and-play BUQO algorithm. Compared to previous works, the proposed approach has the advantage that it can be used for a wide class of structures, without needing to adapt the inpainting operator to the area of interest. In addition, we show through simulations on magnetic resonance imaging that, compared to the original BUQO's hand-crafted inpainting procedure, the proposed approach provides qualitatively better output images. Python code will be made available for reproducibility upon acceptance of the article.
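In BUQO-type tests, the decision rule can be summarized as follows (a schematic paraphrase under our reading of the method, not the paper's exact statement):
\[
  \mathcal{C}_{\alpha} \cap \mathcal{S} = \emptyset
  \;\Longrightarrow\; \text{the structure is declared real at level } 1-\alpha,
\]
where $\mathcal{C}_{\alpha}$ is a $(1-\alpha)$ posterior credible region and $\mathcal{S}$ is the set of images in which the structure is absent; it is the definition of $\mathcal{S}$, via inpainting of a neighborhood of the structure, that the proposed CNN makes data-driven.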
Following their unprecedented success on natural language tasks, Transformers have been successfully applied to several computer vision problems, achieving state-of-the-art results and prompting researchers to reconsider the supremacy of convolutional neural networks (CNNs) as the de facto operators. Capitalizing on these advances in computer vision, the medical imaging field has also witnessed growing interest in Transformers, which can capture global context, in contrast to CNNs with local receptive fields. Inspired by this transition, in this survey we attempt to provide a comprehensive review of the applications of Transformers in medical imaging, covering various aspects ranging from recently proposed architectural designs to unsolved issues. Specifically, we survey the use of Transformers in medical image segmentation, detection, classification, reconstruction, synthesis, registration, clinical report generation, and other tasks. For each of these applications, we develop a taxonomy, identify application-specific challenges, provide insights for solving them, and highlight recent trends. Further, we provide a critical discussion of the field's current state as a whole, identifying key challenges and open problems and outlining promising future directions. We hope this survey will ignite further interest in the community and provide researchers with an up-to-date reference regarding applications of Transformer models in medical imaging. Finally, to keep pace with the rapid development of this field, we intend to regularly update the list of relevant papers and their open-source implementations at \url{https://github.com/fahadshamshad/awesome-transformers-in-medical-imaging}.