Current and upcoming radio interferometers are expected to produce data volumes of ever-increasing size that must be processed to generate the corresponding sky brightness distributions through imaging. This represents an outstanding computational challenge, especially when large fields of view and/or high-resolution observations are processed. We have investigated the adoption of modern High Performance Computing systems, specifically addressing the gridding, FFT and w-correction steps of imaging, combining parallel and accelerated solutions. We have demonstrated that the code we have developed can support datasets and images of any size compatible with the available hardware, efficiently scaling up to thousands of cores or hundreds of GPUs, while keeping the time to solution below one hour even when images of the order of billions or tens of billions of pixels are generated. In addition, portability has been targeted as a primary objective, both in terms of usability on different computing platforms and in terms of performance. The presented results have been obtained on two different state-of-the-art High Performance Computing architectures.
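To make the gridding-plus-FFT pipeline concrete, the following is a minimal sketch of convolutional gridding followed by a Fourier transform to a dirty image, assuming a simple Gaussian gridding kernel and omitting w-correction; all names and parameters are illustrative and are not the paper's implementation.

    # Illustrative sketch of convolutional gridding followed by an FFT,
    # the imaging steps discussed above (w-correction omitted for brevity).
    # Kernel choice, names and parameters are hypothetical.
    import numpy as np

    def grid_and_image(u, v, vis, n_pix, cell, half_support=3):
        """Grid visibilities with a Gaussian kernel and FFT to a dirty image."""
        grid = np.zeros((n_pix, n_pix), dtype=complex)
        # uv coordinates in units of grid cells, shifted to the grid centre
        gu = u / cell + n_pix // 2
        gv = v / cell + n_pix // 2
        for x, y, w in zip(gu, gv, vis):
            x0, y0 = int(round(x)), int(round(y))
            for i in range(x0 - half_support, x0 + half_support + 1):
                for j in range(y0 - half_support, y0 + half_support + 1):
                    if 0 <= i < n_pix and 0 <= j < n_pix:
                        kern = np.exp(-((i - x) ** 2 + (j - y) ** 2))  # Gaussian taper
                        grid[j, i] += kern * w
        # FFT of the gridded visibilities yields the (untapered) dirty image
        dirty = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid))).real
        return dirty

In a parallel or GPU-accelerated setting the double loop above becomes the hotspot, which is why gridding is the natural target for the acceleration the abstract describes.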
This paper is concerned with the inverse medium problem of determining the location and shape of penetrable scattering objects from measurements of the scattered field. We study a sampling indicator function for recovering the scattering object in a fast and robust way. An advantage of this indicator function is that it is applicable to data measured in either the near-field or the far-field regime. Its implementation is simple and does not involve solving any ill-posed problems. The resolution analysis and stability estimate of the indicator function are investigated using the factorization analysis of the far-field operator along with the Funk-Hecke formula. The performance of the method is verified on both simulated and experimental data.
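The abstract does not spell out the indicator function; purely as an illustration of the sampling idea, here is a generic orthogonality-sampling-type functional for 2D far-field data, in which large values of the indicator flag sampling points inside the scatterer. The discretization and all names are assumptions, not the paper's exact functional.

    # A generic orthogonality-sampling-type indicator for 2D far-field data,
    # shown only to illustrate the "sampling indicator" idea; the paper's
    # exact functional may differ. All names are illustrative.
    import numpy as np

    def sampling_indicator(far_field, angles, k, zs):
        """far_field: u_inf at the observation angles; zs: (M, 2) sampling points."""
        xhat = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit directions
        # I(z) = | mean_j u_inf(xhat_j) * exp(-i k xhat_j . z) |^2
        phases = np.exp(-1j * k * zs @ xhat.T)          # (M, N)
        return np.abs(phases @ far_field / len(angles)) ** 2

Note that evaluating such an indicator is a direct summation over the data, consistent with the abstract's claim that no ill-posed problem needs to be solved.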
We propose a new approach for large-scale high-dynamic range computational imaging. Deep Neural Networks (DNNs) trained end-to-end can solve linear inverse imaging problems almost instantaneously. While unfolded architectures provide robustness to variations in the measurement setting, embedding large-scale measurement operators in DNN architectures is impractical. Alternative Plug-and-Play (PnP) approaches, where the denoising DNNs are blind to the measurement setting, have proven effective in addressing scalability and high-dynamic range challenges, but rely on highly iterative algorithms. We propose a residual DNN series approach, also interpretable as a learned version of matching pursuit, where the reconstructed image is a sum of residual images progressively increasing the dynamic range, each estimated by a DNN taking the back-projected data residual of the previous iteration as input. We demonstrate on radio-astronomical imaging simulations that a series of only a few terms provides a reconstruction quality competitive with PnP, at a fraction of the cost.
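The series structure described above admits a compact schematic: each network adds one residual image, computed from the back-projected data residual of the current estimate. The operators and networks below are placeholders, so this is a sketch of the iteration, not the trained model.

    # Schematic of the residual DNN series: the image estimate is the sum of
    # residual components, each predicted from the back-projected data residual
    # of the previous iterate. Operators and networks are placeholders.
    import numpy as np

    def reconstruct(y, forward, adjoint, dnns):
        """y: measured data; forward/adjoint: measurement operator and its adjoint;
        dnns: list of trained residual networks (callables), one per series term."""
        x = np.zeros_like(adjoint(y))
        for net in dnns:
            residual = y - forward(x)          # data residual of current estimate
            back_proj = adjoint(residual)      # back-project into image space
            x = x + net(back_proj)             # add the next residual image
        return x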
We consider the minimization of the cost of actuation error under resource constraints for real-time tracking in wireless autonomous systems. A transmitter monitors the state of a discrete random process and sends updates to the receiver over an unreliable wireless channel. The receiver takes actions according to the estimated state of the source. For each discrepancy between the real state of the source and the estimated one, we consider a different cost of actuation error. This models the case where some states, and consequently the corresponding actions to be taken, are more important than others. We provide two algorithms: one reaching an optimal solution but of high complexity, and one providing a suboptimal solution but with low complexity. As shown by simulations, the performance of the two algorithms is quite close.
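The key modeling ingredient is the state-dependent cost of actuation error, which a small example makes concrete: a cost matrix penalizes each (true state, estimated state) pair differently. The values and names below are illustrative, not taken from the paper.

    # Minimal illustration of the "cost of actuation error": C[s, s_hat]
    # penalizes acting on estimated state s_hat while the true state is s,
    # with some discrepancies costing more than others. Values are illustrative.
    import numpy as np

    C = np.array([[0.0,  1.0,  5.0],    # rows: true state, columns: estimated state
                  [1.0,  0.0,  5.0],
                  [20.0, 20.0, 0.0]])   # state 2 is critical: missing it is costly

    def expected_actuation_cost(joint, C):
        """joint[s, s_hat]: long-run joint distribution of true and estimated state."""
        return float(np.sum(joint * C))

Minimizing this expectation under a transmission-resource constraint is what the two proposed algorithms target, at different complexities.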
Finding a realistic deformation that transforms one image into another, when large deformations are required, is considered a key challenge in medical image analysis. A proper image registration approach to achieve this could unlock a number of applications requiring information to be transferred between images. Clinical adoption is currently hampered by the fact that many existing methods require extensive configuration effort before each use or cannot (realistically) capture large deformations. A recent multi-objective approach that uses the Multi-Objective Real-Valued Gene-pool Optimal Mixing Evolutionary Algorithm (MO-RV-GOMEA) and a dual-dynamic mesh transformation model has shown promise, exposing the trade-offs inherent to image registration problems and modeling large deformations in 2D. This work builds on this promise and introduces MOREA: the first evolutionary algorithm-based multi-objective approach to deformable registration of 3D images capable of tackling large deformations. MOREA includes a 3D biomechanical mesh model for physical plausibility and is fully GPU-accelerated. We compare MOREA to two state-of-the-art approaches on abdominal CT scans of 4 cervical cancer patients, with the latter two approaches configured for the best results per patient. Without requiring per-patient configuration, MOREA significantly outperforms these approaches on 3 of the 4 patients, which represent the most difficult cases.
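The multi-objective ingredient can be illustrated independently of MOREA's internals: candidate registrations are scored on competing objectives (e.g., image dissimilarity versus deformation magnitude, both to be minimized), and only the Pareto-optimal trade-offs are kept. The objectives and names below are illustrative assumptions.

    # Sketch of the multi-objective ingredient: keep the Pareto front of
    # candidate registrations scored on competing objectives (all minimized).
    # Objectives are illustrative, not MOREA's internals.
    def dominates(a, b):
        """a dominates b if a is no worse in every objective and better in one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(candidates):
        """candidates: list of objective tuples, e.g. (dissimilarity, deformation_energy)."""
        return [c for c in candidates
                if not any(dominates(o, c) for o in candidates if o != c)]

Exposing this front, rather than a single weighted optimum, is what lets the approach surface the trade-offs inherent to registration.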
The problem of packing equal circles in a circle is a classic and famous packing problem that is well studied in academia and has a variety of applications in industry. The problem is computationally challenging, and the literature mainly focuses on small-scale instances with the number of circular items n below 320. In this work, we aim to solve the problem at large scale. Specifically, we propose a novel geometric batch optimization method that not only significantly speeds up the convergence of the continuous optimization but also reduces the memory requirement at runtime. We then propose a heuristic search method, called solution-space exploring and descent, that can discover feasible solutions efficiently at large scale. In addition, we propose an adaptive neighbor-object maintenance method to maintain the neighbor structure used in the continuous optimization process. In this way, we can find high-quality solutions on large-scale instances within reasonable computational time. Extensive experiments on benchmark instances sampled from n = 300 to 1,000 show that our proposed algorithm outperforms the state-of-the-art algorithms and performs excellently on large-scale instances. In particular, our algorithm found 10 improved solutions out of the 21 well-studied moderate-scale instances and 95 improved solutions out of the 101 sampled large-scale instances. Furthermore, our geometric batch optimization, heuristic search, and adaptive maintenance methods are general and can be adapted to other packing and continuous optimization problems.
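For context, the continuous optimization being accelerated can be sketched as a vanilla penalty-gradient step that pushes overlapping circles apart and pulls protruding circles back inside the unit container; this baseline, with its O(n^2) pairwise pass, is what batching and neighbor maintenance improve on. It is not the authors' algorithm, and all names are illustrative.

    # A vanilla penalty-gradient step for packing n equal circles of radius r
    # in a unit container circle: the baseline continuous optimization that
    # batching and neighbor structures accelerate. Not the paper's method.
    import numpy as np

    def overlap_gradient_step(centers, r, lr=0.01):
        """centers: (n, 2) circle centres; separate overlapping pairs and
        pull back circles protruding from the unit container."""
        grad = np.zeros_like(centers)
        diff = centers[:, None, :] - centers[None, :, :]          # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1) + np.eye(len(centers))
        pen = np.maximum(2 * r - dist, 0.0)                       # pairwise overlap
        np.fill_diagonal(pen, 0.0)
        grad += np.sum((pen / dist)[:, :, None] * diff, axis=1)   # push overlaps apart
        rad = np.linalg.norm(centers, axis=1)
        out = np.maximum(rad + r - 1.0, 0.0)                      # container violation
        grad -= (out / np.maximum(rad, 1e-12))[:, None] * centers # pull back inside
        return centers + lr * grad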
In recent years, deep learning has been successfully applied in various scientific domains. Following these promising results, it has recently also started being evaluated in the domain of radio astronomy. In particular, since radio astronomy is entering the Big Data era with the advent of the largest telescope in the world, the Square Kilometre Array (SKA), the task of automatic object detection and instance segmentation is crucial for source finding and analysis. In this work, we explore the performance of the most established deep learning approaches, applied to astronomical images obtained with radio interferometric instrumentation, to solve the task of automatic source detection. This is carried out by applying models designed to accomplish two different kinds of tasks: object detection and semantic segmentation. The goal is to provide an overview of existing techniques, in terms of prediction performance and computational efficiency, to scientists in the astrophysics community who would like to employ machine learning in their research.
In many applications of X-ray computed tomography, an unsupervised segmentation of the reconstructed 3D volumes forms an important step in the image processing chain for further investigation of the digitized object. Therefore, the goal is to train a clustering algorithm on the volume, which produces a voxelwise classification by assigning a cluster index to each voxel. However, clustering methods, e.g., K-Means, typically have an asymptotic polynomial runtime with respect to the dataset size, and thus, these techniques are rarely applicable to large volumes. In this work, we introduce a novel clustering technique based on random sampling, which allows for the voxelwise classification of arbitrarily large volumes. The presented method conducts efficient linear passes over the data to extract a representative random sample of a fixed size on which the classifier can be trained. Then, a final linear pass performs the segmentation and assigns a cluster index to each individual voxel. Quantitative and qualitative evaluations show that excellent results can be achieved even with a very small sample size. Consequently, the unsupervised segmentation by means of clustering becomes feasible for arbitrarily large volumes.
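The sampling-based procedure described above translates directly into a short sketch: draw a fixed-size random sample of voxels, fit the clustering model on the sample only, then label the full volume in a final chunked linear pass. The chunk and sample sizes below are illustrative choices.

    # Sketch of the sampling-based segmentation described above: fit K-Means on
    # a fixed-size random sample, then label every voxel in a final linear pass.
    # Sample size and chunk size are illustrative.
    import numpy as np
    from sklearn.cluster import KMeans

    def segment_volume(volume, k, sample_size=100_000, chunk=1_000_000, seed=0):
        flat = volume.reshape(-1, 1).astype(np.float32)
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(flat), size=min(sample_size, len(flat)), replace=False)
        model = KMeans(n_clusters=k, n_init=10).fit(flat[idx])   # train on sample only
        labels = np.empty(len(flat), dtype=np.int32)
        for start in range(0, len(flat), chunk):                 # final linear pass
            labels[start:start + chunk] = model.predict(flat[start:start + chunk])
        return labels.reshape(volume.shape)

Since training cost depends only on the fixed sample size and labeling is a single streaming pass, the overall cost grows linearly with the volume, which is what makes arbitrarily large volumes feasible.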
Owing to effective and flexible data acquisition, unmanned aerial vehicles (UAVs) have recently become a hotspot across the fields of computer vision (CV) and remote sensing (RS). Inspired by the recent success of deep learning (DL), many advanced object detection and tracking approaches have been widely applied to various UAV-related tasks, such as environmental monitoring, precision agriculture, and traffic management. This paper provides a comprehensive survey of the research progress and prospects of DL-based UAV object detection and tracking methods. More specifically, we first outline the challenges and statistics of existing methods, and provide solutions from the perspective of DL-based models in three research topics: object detection from images, object detection from videos, and object tracking from videos. Open datasets related to UAV object detection and tracking are reviewed exhaustively, and four benchmark datasets are employed for performance evaluation using several state-of-the-art methods. Finally, prospects and considerations for future work are discussed and summarized. We expect this survey to provide researchers from the remote sensing field with an overview of DL-based UAV object detection and tracking methods, along with some thoughts on their further development.
Deep learning techniques have received much attention in the area of image denoising. However, there are substantial differences among the various types of deep learning methods dealing with image denoising. Specifically, discriminative learning based on deep learning can effectively address the issue of Gaussian noise, while optimization models based on deep learning are effective in estimating real noise. However, there has thus far been little research summarizing the different deep learning techniques for image denoising. In this paper, we offer a comparative study of deep learning techniques for image denoising. We first classify the deep convolutional neural networks (CNNs) for additive white noisy images, the deep CNNs for real noisy images, the deep CNNs for blind denoising, and the deep CNNs for hybrid noisy images, i.e., combinations of noisy, blurred, and low-resolution images. Then, we analyze the motivations and principles of the different types of deep learning methods. Next, we compare the state-of-the-art methods on public denoising datasets in terms of quantitative and qualitative analysis. Finally, we point out some potential challenges and directions for future research.
Image segmentation is an important component of many image understanding systems. It aims to group pixels in a spatially and perceptually coherent manner. Typically, these algorithms have a collection of parameters that control the degree of over-segmentation produced. It remains a challenge to properly select such parameters for human-like perceptual grouping. In this work, we exploit the diversity of segments produced by different choices of parameters. We scan the segmentation parameter space and generate a collection of image segmentation hypotheses (from highly over-segmented to under-segmented). These hypotheses are fed into a cost-minimization framework that produces the final segmentation by selecting segments that (1) better describe the natural contours of the image, and (2) are more stable and persistent among all the segmentation hypotheses. We compare our algorithm's performance with that of state-of-the-art algorithms, showing that we can achieve improved results. We also show that our framework is robust to the choice of segmentation kernel that produces the initial set of hypotheses.
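The stability/persistence criterion admits a toy sketch: representing each segment as a set of pixel indices, a segment scores higher the more hypotheses contain it. The full cost in the paper also includes a contour term; the representation and names below are illustrative assumptions covering only the persistence part.

    # Toy sketch of the persistence criterion: a segment (as a frozenset of
    # pixel indices) scores higher the more segmentation hypotheses contain it.
    # The paper's cost also has a contour term; only stability is shown here.
    from collections import Counter

    def persistence_scores(hypotheses):
        """hypotheses: list of segmentations, each a list of frozensets of pixels."""
        counts = Counter(seg for segmentation in hypotheses for seg in segmentation)
        return {seg: counts[seg] / len(hypotheses) for seg in counts}

A cost-minimization step would then favor covering the image with high-persistence segments, which is the selection behavior the abstract describes.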