The free-form deformation model can represent a wide range of non-rigid deformations by manipulating a control point lattice over the image. However, due to the large number of parameters and the complexity of the resulting fitness landscape, it is challenging to fit the free-form deformation model directly to the deformed image for deformation estimation. In this paper, we cast the registration task as a multi-objective optimization problem (MOP), based on the observation that the regions affected by neighboring control points overlap with each other. Specifically, by partitioning the template image into several regions and measuring the similarity of each region independently, multiple objectives are built, and deformation estimation can thus be realized by solving the MOP with off-the-shelf multi-objective evolutionary algorithms (MOEAs). In addition, a coarse-to-fine strategy is realized by an image pyramid combined with control point mesh subdivision. Specifically, the optimized candidate solutions of the current image level are inherited by the next level, which increases the ability to deal with large deformations. Also, a post-processing procedure is proposed to generate a single output from the Pareto optimal solutions. Comparative experiments on both synthetic and real-world images show the effectiveness and usefulness of our deformation estimation method.
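The core construction above, one similarity objective per image region, can be illustrated with a minimal numpy sketch. The grid partition, the SSD similarity measure, and the region counts here are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def region_objectives(template, warped, n_rows=2, n_cols=2):
    """Partition the template into an n_rows x n_cols grid of regions and
    return one dissimilarity value (sum of squared differences) per region.
    Each entry is one objective of the multi-objective problem; an MOEA
    would minimise this vector over the control-point parameters."""
    H, W = template.shape
    objs = []
    for i in range(n_rows):
        for j in range(n_cols):
            r = slice(i * H // n_rows, (i + 1) * H // n_rows)
            c = slice(j * W // n_cols, (j + 1) * W // n_cols)
            objs.append(float(np.sum((template[r, c] - warped[r, c]) ** 2)))
    return objs
```

Because the lattice regions overlap, each control point influences only a few of these objectives, which is what makes the multi-objective decomposition meaningful.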
Modern methods for counting people in crowded scenes rely on deep networks to estimate people densities in individual images. As such, only very few take advantage of temporal consistency in video sequences, and those that do only impose weak smoothness constraints across consecutive frames. In this paper, we advocate estimating people flows across image locations between consecutive images and inferring the people densities from these flows instead of directly regressing them. This enables us to impose much stronger constraints encoding the conservation of the number of people. As a result, it significantly boosts performance without requiring a more complex architecture. Furthermore, it allows us to exploit the correlation between people flow and optical flow to further improve the results. We also show that leveraging people conservation constraints in both a spatial and temporal manner makes it possible to train a deep crowd counting model in an active learning setting with far fewer annotations. This significantly reduces the annotation cost while still yielding performance similar to that of the fully supervised case.
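The people-conservation idea, inferring densities from flows rather than regressing them directly, can be sketched in a few lines. The flattened-grid representation and variable names below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def density_from_flows(flow):
    """flow[i, j] is the number of people moving from grid cell i at frame
    t-1 to grid cell j at frame t (cells flattened to 1-D; staying put is
    i == j). Densities are *inferred* from the flows:

        density at t-1 = outgoing flows summed over destinations,
        density at t   = incoming flows summed over origins,

    so the total people count is conserved by construction."""
    density_prev = flow.sum(axis=1)
    density_next = flow.sum(axis=0)
    return density_prev, density_next
```

Any flow matrix automatically satisfies the conservation constraint, which is a much stronger condition than a smoothness penalty between independently regressed density maps.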
The flourishing low-Earth-orbit (LEO) constellation communication network provides a promising solution for seamless coverage of Internet-of-Things (IoT) terminals. However, confronted with massive connectivity and the rapid variation of the terrestrial-satellite link (TSL), traditional grant-free random-access schemes fail to match this scenario. In this paper, a new non-orthogonal multiple-access (NOMA) transmission protocol that incorporates orthogonal time frequency space (OTFS) modulation is proposed to solve these problems. Furthermore, we propose a two-stage joint active user detection and channel estimation scheme based on a training-sequence-aided OTFS data frame structure. Specifically, in the first stage, with the aid of the training sequences, we perform active user detection and coarse channel estimation by recovering the sparse sampled channel vectors. In the second stage, we develop a parametric approach that exploits the inherent characteristics of the TSL channel to refine the previously recovered sampled channel vectors into a more accurate channel estimate. Simulation results demonstrate the superiority of the proposed method in this kind of high-mobility scenario.
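The first-stage recovery of sparse sampled channel vectors is a standard compressed-sensing task. Below is a generic Orthogonal Matching Pursuit sketch for a model y = A h + noise, where A is the sensing matrix built from the training sequences and h is the sparse channel vector; this is an illustrative solver, not necessarily the paper's exact algorithm:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily select the columns of A most
    correlated with the residual, then least-squares fit on the selected
    support. Recovers a `sparsity`-sparse vector h from y = A @ h."""
    residual = y.astype(complex).copy()
    support = []
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(A.conj().T @ residual)))
        if k not in support:
            support.append(k)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x
```

The indices of the recovered nonzero taps indicate active users, and their values give the coarse channel estimate that the second, parametric stage then refines.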
Recent developments in Deep Reinforcement Learning have demonstrated the superior performance of neural networks in solving challenging problems with large or even continuous state spaces. One specific approach is to deploy neural networks to approximate value functions by minimising the Mean Squared Bellman Error. Despite the great successes of Deep Reinforcement Learning, the development of reliable and efficient numerical algorithms to minimise the Bellman Error is still of great scientific interest and practical demand. This challenge is partially due to the underlying optimisation problem being highly non-convex, or to the use of incorrect gradient information, as in Semi-Gradient algorithms. In this work, we analyse the Mean Squared Bellman Error from a smooth-optimisation perspective combined with a Residual Gradient formulation. Our contribution is two-fold. First, we analyse critical points of the error function and provide technical insights on the optimisation procedure and design choices for neural networks. Assuming the existence of global minima and that the objective fulfils certain conditions, we can eliminate suboptimal local minima when using over-parametrised neural networks. Based on this analysis, we construct an efficient approximate Newton's algorithm and numerically confirm its theoretical properties, such as local quadratic convergence to a global minimum. Second, we demonstrate the feasibility and generalisation capabilities of the proposed algorithm empirically on continuous control problems and provide a numerical verification of our critical point analysis. We also outline the shortcoming of Semi-Gradient methods: to benefit from an approximate Newton's algorithm, the complete derivatives of the Mean Squared Bellman Error must be considered during training.
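The distinction between Semi-Gradient and Residual Gradient updates can be made concrete for a linear value function V(s) = w·φ(s), where the MSBE on one transition is 0.5·δ² with δ = r + γV(s') − V(s). The sketch below (feature vectors and reward values are illustrative) shows that the semi-gradient drops the γφ(s') term that the full derivative retains:

```python
import numpy as np

def msbe_gradients(w, phi_s, phi_s2, r, gamma):
    """Compare the semi-gradient and the full (residual) gradient of the
    per-sample Mean Squared Bellman Error 0.5 * delta^2 for a linear value
    function V(s) = w @ phi(s)."""
    delta = r + gamma * (w @ phi_s2) - w @ phi_s      # Bellman error
    semi_grad = -delta * phi_s                        # target treated as constant
    residual_grad = delta * (gamma * phi_s2 - phi_s)  # differentiates the target too
    return semi_grad, residual_grad
```

Only the residual gradient is the true derivative of the MSBE; an approximate Newton method built on the MSBE therefore needs these complete derivatives, which is exactly the shortcoming of Semi-Gradients noted above.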
This paper presents an Improved Bayesian Optimization (IBO) algorithm for solving the optimal control of complex high-dimensional epidemic models. Evaluating the total objective function for disease control models with hundreds of thousands of control time periods incurs a high computational cost. In this paper, we improve the conventional Bayesian Optimization (BO) approach in two ways. First, existing BO methods perform only a single minimization step during each acquisition-function update; to find a better solution per update, we perform additional local minimization steps. Second, when the model is high-dimensional and the objective function is complicated, a limited number of acquisition-function updates may fail to find the global optimal solution, so the IBO algorithm adds a series of Adam-based steps at the final stage to increase the solution's accuracy. Comparative simulation experiments using different kernel functions and acquisition functions show that the Improved Bayesian Optimization algorithm is effective and suitable for handling the large-scale and complex epidemic models under study. The IBO algorithm is then compared with four other global optimization algorithms on three well-known synthetic test functions. The effectiveness and robustness of the IBO algorithm are also demonstrated through simulation experiments against the Particle Swarm Optimization and Random Search algorithms. With its reliable convergence behavior and straightforward implementation, the IBO algorithm has great potential for solving other complex optimal control problems with high dimensionality.
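The final-stage Adam polishing step can be sketched as a plain Adam loop applied to the incumbent BO solution. The gradient callback, learning rate, and step count below are illustrative assumptions; in practice the gradient of the epidemic-control objective would be supplied by the model:

```python
import numpy as np

def adam_refine(grad_fn, x0, steps=500, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """Adam-based local refinement of a candidate solution: standard Adam
    updates with bias correction, starting from the best point found by BO."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)  # first-moment estimate
    v = np.zeros_like(x)  # second-moment estimate
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        mhat = m / (1 - b1 ** t)   # bias-corrected moments
        vhat = v / (1 - b2 ** t)
        x -= lr * mhat / (np.sqrt(vhat) + eps)
    return x
```

For example, refining the point x0 = 0 on the quadratic f(x) = Σ(x − 3)² (gradient 2(x − 3)) drives the iterate close to the minimiser at 3.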
Depth information is useful in many image processing applications. However, since taking a picture is a projection of a 3D scene onto a 2D imaging sensor, the depth information is embedded in the image, and extracting it is a challenging task. A guiding principle is that the level of blurriness due to defocus is related to the distance between the object and the focal plane. Based on this principle and the widely used assumption that Gaussian blur is a good model for defocus blur, we formulate the problem of estimating the spatially varying defocus blurriness as a Gaussian blur classification problem. We solve the problem by training a deep neural network to classify image patches into one of 20 levels of blurriness. We have created a dataset of more than 500,000 image patches of size 32×32, which are used to train and test several well-known network models. We find that MobileNetV2 is suitable for this application due to its low memory requirement and high accuracy. The trained model is used to determine the patch blurriness, which is then refined by applying an iterative weighted guided filter. The result is a defocus map that carries the information of the degree of blurriness for each pixel. We compare the proposed method with state-of-the-art techniques and demonstrate its successful application to adaptive image enhancement, defocus magnification, and multi-focus image fusion.
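Training patches for such a blur-level classifier can be synthesised by applying a Gaussian blur of known σ to sharp patches, with each of the 20 class labels mapped to one σ value. The separable-convolution implementation and the σ range below are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalised to sum to 1."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_patch(patch, sigma):
    """Separable Gaussian blur of a 2-D patch; with n_levels classes,
    level k could map to sigma = k * sigma_max / (n_levels - 1)."""
    if sigma == 0:
        return patch.copy()
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, patch)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out
```

The classifier then only needs to predict the discrete blur level of each patch; the per-pixel defocus map is assembled from these predictions and refined by the guided filter.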
Lagrangian data assimilation of complex nonlinear turbulent flows is an important but computationally challenging topic. In this article, an efficient data-driven statistically accurate reduced-order modeling algorithm is developed that significantly accelerates the computational efficiency of Lagrangian data assimilation. The algorithm starts with a Fourier transform of the high-dimensional flow field, which is followed by an effective model reduction that retains only a small subset of the Fourier coefficients corresponding to the energetic modes. Then a linear stochastic model is developed to approximate the nonlinear dynamics of each Fourier coefficient. Effective additive and multiplicative noise processes are incorporated to characterize the modes that exhibit Gaussian and non-Gaussian statistics, respectively. All the parameters in the reduced-order system, including the multiplicative noise coefficients, are determined systematically via closed analytic formulae. These linear stochastic models succeed in forecasting the uncertainty and facilitate an extremely rapid data assimilation scheme. The new Lagrangian data assimilation scheme is then applied to observations of sea ice floe trajectories that are driven by atmospheric winds and turbulent ocean currents. It is shown that observing only about $30$ non-interacting floes in a $200$km$\times200$km domain is sufficient to recover the key multi-scale features of the ocean currents. The additional observations of the floe angular displacements are found to be suitable supplements to the center-of-mass positions for improving the data assimilation skill. In addition, the observed large and small floes are more useful in recovering the large- and small-scale features of the ocean, respectively. The Fourier domain data assimilation also succeeds in recovering the ocean features in the areas where cloud cover obscures the observations.
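The simplest instance of such a linear stochastic surrogate for one Fourier coefficient is a complex Ornstein-Uhlenbeck process, du = (−a u + f) dt + σ dW. The Euler-Maruyama sketch below uses illustrative parameter values; the paper's models additionally include multiplicative noise for the non-Gaussian modes:

```python
import numpy as np

def simulate_ou(a, f, sigma, dt, n_steps, u0=0.0, seed=0):
    """Euler--Maruyama simulation of the complex linear stochastic model
    du = (-a u + f) dt + sigma dW approximating one Fourier mode of the
    flow field (a: damping, f: forcing, sigma: additive noise amplitude)."""
    rng = np.random.default_rng(seed)
    u = np.empty(n_steps + 1, dtype=complex)
    u[0] = u0
    for n in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt)) + 1j * rng.normal(scale=np.sqrt(dt))
        u[n + 1] = u[n] + (-a * u[n] + f) * dt + sigma * dW / np.sqrt(2)
    return u
```

Because the model is linear and Gaussian, its forecast mean and covariance are available in closed form, which is what makes the resulting data assimilation scheme so fast.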
Benefiting from the rapid development of deep learning techniques, salient object detection has achieved remarkable progress recently. However, two major challenges still hinder its application to embedded devices: low-resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while keeping accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the currently predicted salient regions from side-output features, the network eventually explores the missing object parts and details, which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, with advantages in simplicity, efficiency (45 FPS), and model size (81 MB).
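The "erasing" step of reverse attention can be sketched as weighting side-output features by one minus the sigmoid of the coarse prediction, so regions already predicted salient are suppressed and the residual branch focuses on what is missing. This numpy version is a simplified illustration of the idea, not the paper's convolutional implementation:

```python
import numpy as np

def reverse_attention(side_feature, coarse_pred):
    """Suppress currently-predicted salient regions: weight the side-output
    feature map by 1 - sigmoid(coarse_pred), so the residual branch attends
    to the not-yet-detected object parts and details."""
    attn = 1.0 - 1.0 / (1.0 + np.exp(-coarse_pred))  # 1 - sigmoid
    return side_feature * attn
```

Where the coarse logits are strongly positive (confidently salient) the attention is near zero, and where they are strongly negative it is near one.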
Seam-cutting and seam-driven techniques have proven effective for handling imperfect image series in image stitching. Generally, a seam-driven method utilizes seam-cutting to find the best seam from one or finitely many alignment hypotheses based on a predefined seam quality metric. However, the quality metrics in most methods measure the average performance of the pixels on the seam without considering the relevance and variance among them. As a result, the seam with the minimal measure may not be optimal (i.e., perception-inconsistent) to human perception. In this paper, we propose a novel coarse-to-fine seam estimation method that applies the evaluation in a different way. For pixels on the seam, we develop a patch-point evaluation algorithm that concentrates on their correlation and variation. The evaluations are then used to recalculate the difference map of the overlapping region and re-estimate a stitching seam. This evaluation-re-estimation procedure iterates until the current seam changes negligibly compared with the previous seam. Experiments show that our method finds a nearly perception-consistent seam after several iterations, outperforming conventional seam-cutting and other seam-driven methods.
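The seam re-estimation step operates on the difference map of the overlapping region. A minimal dynamic-programming seam finder over such a map (one column index per row, moving at most one column between rows) can be sketched as follows; this is a simplified stand-in for the graph-cut solvers typically used in seam-cutting:

```python
import numpy as np

def min_cost_seam(diff):
    """Find a top-to-bottom seam of minimal accumulated cost through a 2-D
    difference map. Returns one column index per row; the re-estimation
    loop would rerun this after updating `diff` from the seam evaluations."""
    H, W = diff.shape
    cost = diff.astype(float).copy()
    for i in range(1, H):
        left = np.r_[np.inf, cost[i - 1, :-1]]
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    seam = np.empty(H, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(H - 2, -1, -1):        # backtrack within a 1-column window
        j = seam[i + 1]
        lo, hi = max(0, j - 1), min(W, j + 2)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam
```

In the proposed method, the patch-point evaluations rewrite entries of `diff` before each re-estimation, and the loop stops once consecutive seams change negligibly.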
Image segmentation is an important component of many image understanding systems. It aims to group pixels in a spatially and perceptually coherent manner. Typically, these algorithms have a collection of parameters that control the degree of over-segmentation produced. It still remains a challenge to properly select such parameters for human-like perceptual grouping. In this work, we exploit the diversity of segments produced by different choices of parameters. We scan the segmentation parameter space and generate a collection of image segmentation hypotheses (from highly over-segmented to under-segmented). These are fed into a cost minimization framework that produces the final segmentation by selecting segments that: (1) better describe the natural contours of the image, and (2) are more stable and persistent among all the segmentation hypotheses. We compare our algorithm's performance with state-of-the-art algorithms, showing that we can achieve improved results. We also show that our framework is robust to the choice of segmentation kernel that produces the initial set of hypotheses.
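The "stable and persistent" criterion above can be made concrete as a persistence score: the fraction of segmentation hypotheses that contain a segment closely matching the candidate. The IoU matching and the 0.8 threshold below are illustrative assumptions, not the paper's exact cost terms:

```python
import numpy as np

def persistence_score(segment_mask, hypotheses, iou_thresh=0.8):
    """Fraction of segmentation hypotheses (label images) containing a
    segment whose IoU with `segment_mask` exceeds `iou_thresh`; a proxy for
    how stable the candidate segment is across the parameter sweep."""
    def iou(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 0.0
    hits = 0
    for hyp in hypotheses:
        best = max(iou(segment_mask, hyp == label) for label in np.unique(hyp))
        hits += best >= iou_thresh
    return hits / len(hypotheses)
```

Scores like this can feed the cost minimization as a reward for segments that recur from over- to under-segmented hypotheses.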
Detecting objects and estimating their pose remains one of the major challenges of the computer vision research community. There is a trade-off between localizing objects and estimating their viewpoints: the detector ideally needs to be view-invariant, while the pose estimation process should be able to generalize to the category level. This work explores the use of deep learning models to solve both problems simultaneously. To this end, we propose three novel deep learning architectures that perform joint detection and pose estimation, in which we gradually decouple the two tasks. We also investigate whether the pose estimation problem should be solved as a classification or a regression problem, which is still an open question in the computer vision community. We present a comparative analysis of all our solutions and of the methods that currently define the state of the art for this problem. We use the PASCAL3D+ and ObjectNet3D datasets for a thorough experimental evaluation. With the proposed models we achieve state-of-the-art performance on both datasets.
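The classification-versus-regression question concerns how the viewpoint target is encoded. A common classification encoding discretises the azimuth into angular bins, as sketched below; the 8-bin choice and the bin-centred convention are illustrative assumptions:

```python
import numpy as np

def azimuth_to_bin(az_deg, n_bins=8):
    """Viewpoint as classification: map an azimuth in degrees to one of
    n_bins sectors, with bin 0 centred at 0 degrees."""
    width = 360.0 / n_bins
    return int(np.floor(((az_deg % 360) + width / 2) / width)) % n_bins

def bin_to_azimuth(b, n_bins=8):
    """Inverse mapping: the bin centre, i.e. the viewpoint recovered at the
    resolution the classifier can express."""
    return b * 360.0 / n_bins
```

Regression instead predicts the continuous angle (often via sin/cos to handle wrap-around), trading the coarse but robust bin labels for finer, harder-to-learn targets.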